Chapter 4

Virtualization Architecture

IN THIS CHAPTER

check Examining the basics of virtualization

check Looking at what a hypervisor does

check Considering how disks and networks are virtualized

check Weighing the benefits of virtualization

check Choosing host servers

check Considering how virtualization affects Microsoft licensing for Windows Server

Virtualization is one of the hottest trends in networking today. According to some industry pundits, virtualization is the best thing to happen to computers since the invention of the transistor. If you haven’t already begun to virtualize your network, you’re standing on the platform watching as the train is pulling out.

This chapter introduces you to the basic concepts of virtualization, with an emphasis on using it to leverage your network server hardware to provide more servers using less hardware. Virtualization can dramatically simplify the design of your network — you can support more servers on less hardware, and with less hardware, your network will have fewer interconnects that link servers to the private network. Win, win!

tip Mastering a virtualization environment calls for a book of its own. I recommend two titles, both from John Wiley & Sons, Inc.: Virtualization For Dummies, by Bernard Golden, and VMware Infrastructure 3 For Dummies, by William Lowe (no relation, honest).

Understanding Virtualization

The basic idea behind virtualization is to use software to simulate the existence of hardware. This powerful idea enables you to run more than one independent computer system on a single physical computer system. Suppose that your organization requires a total of 12 servers to meet its needs. You could run each of these 12 servers on a separate computer, in which case you would have 12 computers in your server room. Or, you could use virtualization to run these 12 servers on just two computers. In effect, each of those computers would simulate six separate computer systems, each running one of your servers.

Each of the simulated computers is called a virtual machine (VM). For all intents and purposes, each VM appears to be a complete, self-contained computer system with its own processor (or, more likely, processors), memory, disk drives, CD-ROM/DVD drives, keyboard, mouse, monitor, network interfaces, USB ports, and so on.

Like a real computer, each virtual machine requires an operating system to do productive work. In a typical network server environment, each virtual machine runs its own copy of Windows Server. The operating system has no idea that it’s running on a virtual machine rather than on a real machine.

Here are a few terms you need to be familiar with if you expect to discuss virtualization intelligently:

Host: The actual physical computer on which one or more virtual machines run.

Bare metal: Another term for the host computer’s physical hardware.

Guest: A virtual machine running on a host.

Guest operating system: The operating system that runs within a virtual machine. The guest operating system thinks it’s talking to real hardware.

Hypervisor: The software layer that creates and runs virtual machines, covered in more detail in the next section.

Understanding Hypervisors

At the core of virtualization is a hypervisor, a layer of software that manages the creation and execution of virtual machines. A hypervisor provides several core functions:

Hardware abstraction: The hypervisor presents each virtual machine with a simulated set of hardware (processors, memory, disks, network interfaces, and so on), hiding the details of the host’s actual hardware.

Isolation: Each virtual machine runs in its own protected space; one VM can’t peek at or corrupt the memory and data of another.

Resource allocation: The hypervisor divides the host’s real processors, memory, and I/O capacity among the virtual machines, scheduling each VM’s turns on the actual hardware.

Management: The hypervisor provides the tools for creating, starting, stopping, and monitoring virtual machines.

There are two basic types of hypervisors you should know about:

Type 1: A type-1 hypervisor (also called a bare-metal hypervisor) runs directly on the host computer’s hardware, with no operating system beneath it. VMware ESXi and Microsoft Hyper-V are examples of type-1 hypervisors.

Type 2: A type-2 hypervisor (also called a hosted hypervisor) runs as an application on top of an ordinary operating system such as Windows or macOS. VMware Workstation and Oracle VirtualBox are examples of type-2 hypervisors.

tip For production use, you should always use type-1 hypervisors because they’re much more efficient than type-2 hypervisors. Type-1 hypervisors are considerably more expensive than type-2 hypervisors, however. As a result, many people use inexpensive or free type-2 hypervisors to experiment with virtualization before making a commitment to purchase an expensive type-1 hypervisor.

Understanding Virtual Disks

Computers aren’t the only things that are virtualized in a virtual environment. In addition to creating virtual computers, virtualization also creates virtual disk storage. Disk virtualization lets you combine a variety of physical disk storage devices to create pools of disk storage that you can then parcel out to your virtual machines as needed.

Virtualization of disk storage is nothing new. In fact, there are actually several layers of virtualization involved in any disk storage environment. At the lowest level are the actual physical disk drives. Physical disk drives are usually bundled together in arrays of individual drives. This bundling is a type of virtualization in that it creates the image of a single large disk drive that isn’t really there. For example, four 2TB disk drives might be combined in an array to create a single 8TB disk drive.

Note that disk arrays are usually used to provide data protection through redundancy. This is commonly called RAID, which stands for Redundant Array of Inexpensive Disks.

One common form of RAID, called RAID-10, stripes data across mirrored pairs of disk drives, so every block of data is always written to both of the drives in a mirror pair. If one of the drives in a mirror pair fails, the other drive carries the load. With RAID-10, the usable capacity of the complete array is equal to one-half of the total capacity of the drives in the array. For example, a RAID-10 array consisting of four 2TB drives contains two pairs of mirrored 2TB disk drives, for a total usable capacity of 4TB.

Another common form of RAID is RAID-5, in which the equivalent of one drive’s capacity is set aside for parity information that is distributed across all the drives in the array. If any one of the drives in the array fails, the parity data on the remaining drives can be used to re-create the data that was on the failed drive. The usable capacity of a RAID-5 array is equal to the sum of the capacities of the individual drives, minus the capacity of one drive. For example, an array of four 2TB drives in a RAID-5 configuration has a total usable capacity of 6TB.
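If you’d like to check this arithmetic yourself, here’s a minimal Python sketch of the two capacity formulas (the function names are my own, invented purely for illustration):

    # Usable-capacity math for the two RAID levels described above.
    # Assumes every drive in the array is the same size.

    def raid10_usable_tb(drive_count: int, drive_size_tb: float) -> float:
        """RAID-10 mirrors pairs of drives, so half the raw capacity is usable."""
        if drive_count % 2 != 0:
            raise ValueError("RAID-10 requires an even number of drives")
        return (drive_count // 2) * drive_size_tb

    def raid5_usable_tb(drive_count: int, drive_size_tb: float) -> float:
        """RAID-5 gives up one drive's worth of capacity to parity."""
        if drive_count < 3:
            raise ValueError("RAID-5 requires at least three drives")
        return (drive_count - 1) * drive_size_tb

    print(raid10_usable_tb(4, 2.0))  # four 2TB drives in RAID-10 -> 4.0TB usable
    print(raid5_usable_tb(4, 2.0))   # four 2TB drives in RAID-5  -> 6.0TB usable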

In a typical virtual environment, the host computers can be connected to disk storage in several distinct ways:

Local (direct-attached) storage: Disk drives installed in, or cabled directly to, the host computer itself.

Network-attached storage (NAS): A self-contained storage appliance that shares its disk storage over the ordinary network using file-sharing protocols.

Storage area network (SAN): A dedicated, high-speed storage network (typically Fibre Channel or iSCSI) that gives hosts block-level access to shared disk arrays.

Regardless of the way the storage is attached to the host, the hypervisor consolidates this storage and creates virtual pools of disk storage, typically called data stores. For example, a hypervisor that has access to three 2TB RAID-5 disk arrays might consolidate them to create a single 6TB data store.

From this data store, you can create volumes, which are essentially virtual disk drives that can be allocated to a particular virtual machine. Then, when an operating system is installed in a virtual machine, the operating system can mount the virtual machine’s volumes to create drives that the operating system can access.

For example, let’s consider a virtual machine that runs Windows Server. If you were to connect to the virtual machine, log in, and use File Explorer to look at the disk storage that’s available to the machine, you might see a C: drive with a capacity of 100GB. That C: drive is actually a 100GB volume that is created by the hypervisor and attached to the virtual machine. The 100GB volume, in turn, is allocated from a data store, which might be 4TB in size. The data store is created from disk storage contained in a SAN attached to the host, which might be made up of a RAID-10 array consisting of four 2TB physical disk drives.

So, you can see that there are at least four layers of virtualization required to make the raw storage on the physical disk drives available to the guest operating system:

1. The physical disk drives are combined into a RAID array.

2. The RAID array is presented to the host, in this example over a SAN.

3. The hypervisor consolidates the storage it sees into a data store.

4. A volume is carved out of the data store and attached to a virtual machine, where the guest operating system mounts it as a disk drive.
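To see how the layers stack up, here’s a toy Python model of the same scenario (the class and method names are hypothetical, and a real hypervisor is vastly more sophisticated):

    # Toy model of the four storage layers: physical drives -> RAID array
    # -> data store -> volume. All capacities are in gigabytes.

    class Raid10Array:
        def __init__(self, drive_count: int, drive_size_gb: int):
            # Mirrored pairs: half the raw capacity is usable.
            self.usable_gb = (drive_count // 2) * drive_size_gb

    class DataStore:
        def __init__(self, arrays):
            # The hypervisor consolidates one or more arrays into a single pool.
            self.capacity_gb = sum(a.usable_gb for a in arrays)
            self.allocated_gb = 0

        def create_volume(self, size_gb: int) -> dict:
            # A volume is carved out of the pool and handed to one VM.
            if self.allocated_gb + size_gb > self.capacity_gb:
                raise ValueError("data store is full")
            self.allocated_gb += size_gb
            return {"size_gb": size_gb}

    # Four 2TB drives in RAID-10 -> 4TB usable -> one 4TB data store,
    # from which the VM's 100GB C: drive is allocated.
    store = DataStore([Raid10Array(4, 2000)])
    c_drive = store.create_volume(100)
    print(store.capacity_gb, store.allocated_gb)  # 4000 100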

Although it may seem overly complicated, these layers of virtualization give you a lot of flexibility when it comes to storage management. New disk arrays can be added to a SAN, or a new NAS can be added to the network, and then new data stores can be created from them without disrupting existing data stores. Volumes can be moved from one data store to another without disrupting the virtual machines they’re attached to. In fact, you can increase the size of a volume on the fly, and the virtual machine will immediately see the increased storage capacity of its disk drives, without even requiring so much as a reboot.

Understanding Network Virtualization

When you create one or more virtual machines on a host system, you need to provide a way for those virtual machines to communicate not only with each other but also with the other physical computers on your network. To enable such connections, you must create a virtual network within your virtualization environment. The virtual network connects the virtual machines to each other and to the physical network.

To create a virtual network, you must create a virtual switch, which connects the virtual machines to each other and to a physical network via the host computer’s network interfaces. Like a physical switch, a virtual switch has ports. When you create a virtual switch, you connect the virtual switch to one or more of the host computer’s network interfaces. These interfaces are then connected with network cable to physical switches, which effectively connects the virtual switch to the physical network.

Then, when you create virtual machines, you connect each virtual machine to a port on the virtual switch. When all the virtual machines are connected to the switch, the VMs can communicate with each other via the switch. And they can communicate with devices on the physical network via the connections through the host computer’s network interfaces.
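Here’s a minimal Python sketch of that arrangement (the names are invented for illustration; real virtual switches do far more, such as VLAN tagging and uplink teaming). Traffic between two VMs stays inside the host, while traffic to anything else leaves through a physical network interface:

    # Toy model of a virtual switch: VMs attach to ports, and one or more of
    # the host's physical NICs act as uplinks to the physical network.

    class VirtualSwitch:
        def __init__(self, uplink_nics):
            self.uplinks = uplink_nics   # host NICs cabled to physical switches
            self.ports = set()           # names of VMs attached to the switch

        def connect(self, vm_name: str) -> None:
            self.ports.add(vm_name)

        def send(self, src: str, dst: str) -> str:
            if dst in self.ports:
                return f"{src} -> {dst}: delivered inside the virtual switch"
            # Destination isn't a local VM, so forward out a physical uplink.
            return f"{src} -> {dst}: forwarded via uplink {self.uplinks[0]}"

    vswitch = VirtualSwitch(uplink_nics=["nic0", "nic1"])
    vswitch.connect("web01")
    vswitch.connect("db01")
    print(vswitch.send("web01", "db01"))        # stays inside the host
    print(vswitch.send("web01", "fileserver"))  # leaves via a host NIC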

Considering the Benefits of Virtualization

You might suspect that virtualization is inefficient because a real computer is inherently faster than a simulated computer. Although it’s true that real computers are faster than simulated computers, virtualization technology has become so advanced that the performance penalty for running on a virtualized machine rather than a real machine is only a few percent.

The small amount of overhead imposed by virtualization is usually more than made up for by the simple fact that even the most heavily used servers spend most of their time twiddling their digital thumbs, waiting for something to do. In fact, many servers spend nearly all their time doing nothing. As computers get faster and faster, they spend even more of their time with nothing to do.

Virtualization is a great way to put all this unused processing power to good use.

Besides this basic efficiency benefit, virtualization offers several other compelling benefits:

Hardware cost savings: A handful of well-utilized hosts costs less to buy and maintain than a rack full of lightly loaded physical servers.

Energy savings: Fewer physical servers consume less power and require less cooling.

Fast provisioning: You can spin up a new virtual server in minutes, instead of waiting days or weeks to order, rack, and cable a new physical server.

Easier recovery: Because a virtual machine is ultimately just a collection of files, it can be backed up, copied, or moved to another host, which greatly simplifies recovery when hardware fails.

Choosing Virtualization Hosts

Having made the decision to virtualize your servers, you’re next faced with the task of selecting the host computers on which you’ll run your virtual servers. The good news is that you’ll need to purchase fewer servers than if you were using physical servers. The not-so-good news is that you need to purchase really good servers to act as hosts, because each host will support multiple virtual servers. Here are some tips to get you started (you’ll find a rough sizing sketch after this list):

Don’t skimp on cores: Every virtual machine needs processor time, so favor processors with plenty of cores.

Load up on RAM: Memory is usually the first resource a host runs out of, because every running VM needs its own slice of it.

Build in redundancy: Dual power supplies, multiple network interfaces, and RAID-protected storage keep a single component failure from taking down every VM on the host.

Leave headroom: Size your hosts so that if one host fails or is taken down for maintenance, the remaining hosts can absorb its virtual machines.
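As a back-of-the-envelope aid, here’s a small Python sketch that estimates how many hosts a given set of VMs requires, with one spare host’s worth of headroom for failover. It sizes on RAM only, which is usually the binding constraint; all the numbers are placeholders, not recommendations:

    import math

    # Rough host-count estimate: total VM memory demand divided by per-host
    # capacity, plus one extra host so the cluster survives a host failure.

    def hosts_needed(vm_count: int, ram_per_vm_gb: int, host_ram_gb: int) -> int:
        total_ram_gb = vm_count * ram_per_vm_gb
        return math.ceil(total_ram_gb / host_ram_gb) + 1  # +1 for failover

    # Example: 16 VMs at 16GB each on hosts with 256GB of RAM.
    print(hosts_needed(16, 16, 256))  # -> 2 (one host's load plus a spare)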

Understanding Windows Server 2016 Licensing

When planning your server architecture, you’ll need to account for the fact that you must purchase sufficient licenses of Windows Server to cover all the servers you’re running. Before virtualization, this was easy: Each server required its own license. With virtualization, things get tricky, and Microsoft’s licensing rules don’t make it any easier.

Windows Server 2016 comes in three editions:

Essentials: Designed for small organizations with up to 25 users and 50 devices. It isn’t licensed per core, and it doesn’t grant the virtualization rights that the other editions do.

Standard: Licensed per core, with a minimum of 8 core licenses per processor and 16 core licenses per server. Licensing all of a host’s cores grants the right to run up to two virtual machines (strictly speaking, two operating system environments) on that host; to run additional VMs, you must license all the cores again for each additional pair.

Datacenter: Licensed per core under the same minimums as Standard, but licensing all of a host’s cores grants the right to run an unlimited number of virtual machines on that host.

So, you’ve got to do some real math to figure out which licenses you’ll need. Let’s say you need to run a total of 16 servers on two hosts, with eight virtual machines on each host. Here are two licensing scenarios that would be permissible:

License both hosts with Datacenter edition: Cover all of each host’s cores once with Datacenter core licenses, and each host can run an unlimited number of VMs, easily covering all 16 servers.

License both hosts with Standard edition, four times over: Each complete set of Standard core licenses covers two VMs, so covering each host’s cores four times covers that host’s eight VMs. Which approach is cheaper depends on the hosts’ core counts and how many VMs you expect to run.
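Here’s a minimal Python sketch of the Standard edition arithmetic, based on the 2016 rules described above: every core must be licensed, with a minimum of 8 core licenses per processor and 16 per host, and each complete pass over the host’s cores covers two VMs. (The function name is my own, and real quotes also depend on how the licenses are bundled and priced.)

    import math

    # Standard edition core licenses needed for one host.

    def standard_core_licenses(processors: int, cores_per_proc: int,
                               vm_count: int) -> int:
        licensed_cores = processors * max(cores_per_proc, 8)  # 8-core minimum/proc
        licensed_cores = max(licensed_cores, 16)              # 16-core minimum/host
        license_sets = math.ceil(vm_count / 2)  # each full set covers two VMs
        return licensed_cores * license_sets

    # Two 8-core processors running 8 VMs: 16 cores x 4 sets = 64 core licenses.
    print(standard_core_licenses(2, 8, 8))   # -> 64
    # Two 10-core processors, same 8 VMs: 20 cores x 4 sets = 80 core licenses.
    print(standard_core_licenses(2, 10, 8))  # -> 80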

It’s obvious that Microsoft charges more to run Windows Server on more powerful hosts, which makes for an interesting pricing strategy. As it turns out, over the next few years you’ll be hard pressed to purchase hosts that stay within the base license’s limits of 8 cores per processor and two processors per host. That’s because Intel’s dual-socket Xeon processors gain more and more cores with each successive generation. The current generation of Xeon processors sports up to 18 cores per processor. Although Intel still makes 4-, 6-, and 8-core versions of the Xeon processor, who knows what the future will bring.

In any event, the per-core nature of Microsoft’s licensing encourages you to choose host processors whose core counts sit at, or just below, the 8-cores-per-processor licensing increments. In other words, use 8- or 16-core processors in your hosts; avoid 10- or 18-core processors, because they nudge you just over the core limits for licensing and force you to buy core licenses you can’t put to full use.