CHAPTER 18

Virtualization

In this chapter, you will learn how to

•   Describe the concepts of virtualization

•   Explain why virtualization is so highly adopted

•   Create and use a virtual machine

•   Describe how networks use virtualization

•   Describe the service layers and architectures that make up cloud computing

For those of us used to the idea that a single computer system consists of one operating system running on its own hardware, virtualization challenges our understanding. In the simplest terms, virtualization is the process of using powerful, special software running on a computer to create a complete environment that imitates (virtualizes) all of the hardware you’d see on a real computer. We can install and run an operating system in this virtual environment exactly as if it were installed on its own physical computer. That guest environment is called a virtual machine (VM). Figure 18-1 shows one such example: a Windows system using a program called Hyper-V to host two guest virtual machines, one running Ubuntu Linux and another running Windows 10.

Image

Figure 18-1 Hyper-V running Linux and Windows 10

This chapter begins by explaining the ideas behind virtualization. The chapter then explores the motivating factors behind the widespread adoption of virtualization throughout the IT industry. The third section outlines steps for setting up a virtual machine. The next section explores the use of virtual machines in networks, which is a little beyond where you need to go for the CompTIA A+ 220-902 exam, but where you’ll run into virtualization in modern computing. With this knowledge as a foundation, the chapter finishes with an examination of important concepts in cloud computing (including the role virtualization plays).

Historical/Conceptual

What Is Virtualization?

Ask 100 people what the term virtual means and you’ll get a lot of different answers. Most people define virtual with words like “fake” or “pretend,” but these terms only begin to describe it. Let’s try to zero in on virtualization using a term that hopefully you’ve heard: virtual reality. For most of us, the idea of virtual reality starts with someone wearing headgear and some kind of glove or handheld input device, as shown in Figure 18-2.

Image

Figure 18-2 Virtual reality training (photo courtesy of NASA)

The headgear and the gloves work together to create a simulation of a world or environment that appears to be real, even though the person wearing them is located in a room that doesn’t resemble the simulated space. Inside this virtual reality you can see the world by turning your head, just as you do in the real world. Software works with the headset’s inputs to emulate a physical world. At the same time, the gloves/hand controllers enable you to touch and move objects in the virtual world.

To make virtual reality effective, the hardware and software need to work together to create an environment convincing enough for a human to work within it. Virtual reality doesn’t have to be perfect—it has limitations—but it’s pretty cool for teaching someone how to fly a plane or do a spacewalk without having to start with the real thing (Figure 18-3).

Image

Figure 18-3 Using virtual reality with Oculus Rift (photo courtesy of Oculus VR)

Virtualization on a computer is virtually (sorry, can’t pass up the pun) the same as virtual reality for humans. Just as virtual reality creates an environment that convinces humans they’re in a real environment, virtualization convinces a guest operating system it is running on its own hardware by using a very cool piece of software called a hypervisor.

Meet the Hypervisor

A normal operating system uses programming called a supervisor to handle very low-level interactions between hardware and software, such as task scheduling, allotment of time and resources, and so on. Figure 18-4 shows how the supervisor works between the OS and the hardware.

Image

Figure 18-4 Supervisor on a generic single system

Because virtualization enables one machine—called the host—to run multiple guest operating systems simultaneously, full virtualization requires an extra layer of sophisticated programming called a hypervisor to manage the vastly more complex interactions. Figure 18-5 shows a single hypervisor hosting three different guest virtual machines.

Image

Figure 18-5 Hypervisor on a generic single system hosting three virtual machines

A number of companies make hypervisors. One of the oldest, and arguably the one that really put PC virtualization on the map, is VMware (www.vmware.com). VMware released its now incredibly popular VMware Workstation way back in 1999 for Windows and Linux systems. Since then VMware has grown dramatically, offering a broad cross-section of virtualization products (see Figure 18-6).

Image

Figure 18-6 Author’s busy VMware server

Microsoft’s Hyper-V comes with Windows Server as well as desktop Windows starting with Windows 8 Pro. While not as popular as VMware products, it has a large base of users that’s growing all the time (see Figure 18-7).

Image

Figure 18-7 Hyper-V on a Windows 8.1 system

Another very popular hypervisor is Oracle VM VirtualBox (see Figure 18-8). VirtualBox is powerful and runs on Windows, Mac OS X, and Linux.

Image

Figure 18-8 Oracle VirtualBox

If you run Mac OS X, the most popular hypervisor choices are VMware Fusion and Parallels Desktop (see Figure 18-9). Both of these Mac OS X hypervisors are quite powerful, but unlike Hyper-V or VirtualBox, they cost money.

Image

Figure 18-9 Parallels Desktop

This is in no way a complete list of all the hypervisors available out there. Many Linux users swear by KVM, for example, but the hypervisors introduced in this section are the ones you’re most likely to see on desktop systems. Make sure you are aware of at least Microsoft’s Hyper-V for the exam!

Images

NOTE VMware makes a number of amazing virtualization products, but they can be pricey. Microsoft’s Hyper-V, Linux’s KVM, and Oracle’s VirtualBox are all free.

Emulation Versus Virtualization

Virtualization takes the hardware of the host system and allocates some portion of its power to individual virtual machines. If you have an Intel system, a hypervisor creates a virtual machine that acts exactly like the host Intel system. It cannot act like any other type of computer. For example, you cannot make a virtual machine on an Intel system that acts like a Nintendo 3DS. Hypervisors simply pass the code from the virtual machine to the actual CPU.

Emulation is very different from virtualization. An emulator is software or hardware that converts the commands to and from the host machine into an entirely different platform. Figure 18-10 shows a Super Nintendo Entertainment System emulator, Snes9X, running a game called Donkey Kong Country on a Windows system.

Image

Figure 18-10 Super Nintendo emulator running on Windows

Images

EXAM TIP While the CompTIA A+ 220-902 exam objectives include emulator requirements as a part of virtualization, the concepts are not the same. For the sake of completeness, however, know that emulating another platform (using a PC to run Sony PlayStation 3 games, for example) requires hardware several times more powerful than the platform being emulated.

Client-Side Virtualization

This chapter will show you a few of the ways you can use virtualization, but before I go any further, let’s take the basic pieces you’ve learned about virtualization and put them together in one of its simplest forms, client-side virtualization.

The basic process for creating virtual machines is as follows:

1. Verify and set up your system’s hardware to support virtual machines.

2. Install a hypervisor on your system.

3. Create a new virtual machine that has the proper virtualized hardware requirements for the guest OS.

4. Start the new virtual machine and install the new guest OS exactly as you’d install it on a new physical machine.

Hardware Support

While any computer running Linux, Windows, or Mac OS X will support a hypervisor, there are a few hardware requirements we need to address. First, every hypervisor will run better if you enable hardware virtualization support.

Every Intel-based CPU since the late 1980s is designed to support a supervisor for multitasking, but it’s hard work for that same CPU to support multiple supervisors on multiple VMs. Around 2005, both AMD and Intel added extra features to their CPUs just to support hypervisors: Intel’s VT-x and AMD’s AMD-V. This is hardware virtualization support.

If your CPU and BIOS support hardware virtualization, you can turn it on or off inside the system setup utility. Figure 18-11 shows the virtualization setting in a typical system setup utility.

Image

Figure 18-11 BIOS setting for CPU virtualization support
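By the way, if you happen to be on a Linux host, you can spot hardware virtualization support without rebooting into the system setup utility: the kernel lists the CPU flags in /proc/cpuinfo, where vmx means Intel VT-x and svm means AMD-V. Here’s a little Python sketch that checks a cpuinfo dump for those flags (the flag names are real; the function name and sample text are mine, not from any tool):

```python
# Hypothetical helper: detect hardware virtualization support from the
# CPU flags Linux exposes in /proc/cpuinfo. "vmx" (Intel VT-x) and
# "svm" (AMD-V) are the real flag names; everything else here is
# illustrative.
def virtualization_support(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

sample = "processor : 0\nflags : fpu vme de pse tsc msr vmx"
print(virtualization_support(sample))  # prints Intel VT-x for this sample
```

On a real system you’d feed it the actual file, e.g. `virtualization_support(open("/proc/cpuinfo").read())`. If neither flag shows up, either the CPU lacks support or it’s been disabled in system setup.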

Apart from hardware virtualization support, the second most important concern is RAM. Each virtual machine needs just as much RAM as a physical one, so it’s common practice to absolutely stuff your host machine with large amounts of RAM. The more virtual machines you run, the more RAM you need. Generally, there are two issues to keep in mind:

•   Leave enough RAM for the hypervisor to run adequately

•   Add enough RAM so that every VM you run at the same time will run adequately

It will take some research to figure out how much RAM you need. I have a VirtualBox hypervisor running on a Windows 8.1 (64-bit) host system. There are three VMs running at all times: Windows XP running a test bank simulation tool, Ubuntu Linux (64-bit) running a Web server, and Windows 10 (64-bit) being used as a remote desktop server. All of these VMs are quite busy. I determined the following requirements by looking around the Internet (and guessing a little):

•   4 GB for the host OS and VirtualBox

•   1 GB for Windows XP

•   512 MB for Ubuntu

•   2.5 GB for Windows 10

Not wanting to run short, I just dumped 32 GB of RAM into my system. So far it runs pretty well! Be careful here. It is difficult to get perfect answers for these situations. If you research this you may get a different answer.
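To see how those estimates add up, here’s the arithmetic as a quick Python sketch. The numbers are my ballpark figures from above, not vendor minimums:

```python
# Rough RAM budget for a host running several VMs at once.
# These are the author's estimates from the text, not official
# requirements for any OS.
budget_gb = {
    "host OS + VirtualBox": 4.0,
    "Windows XP VM": 1.0,
    "Ubuntu VM": 0.5,
    "Windows 10 VM": 2.5,
}
minimum = sum(budget_gb.values())
print(f"Minimum RAM: {minimum} GB")  # 8.0 GB; real headroom helps a lot
```

Eight gigabytes is the floor for this setup; as the text says, stuffing in far more RAM than the math demands is cheap insurance.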

Disk Space

Disk space can also be a problem. VM files can be huge because they include everything installed on the VM; depending on the OS and how the VM is used, the VM file could range from megabytes to hundreds of gigabytes. On top of that, every snapshot you take (snapshots are described later in this chapter) requires space. Figure 18-12 shows a newly minted Windows 10 VM taking over 11 GB of disk space. Make sure you have plenty of disk space for your VMs.

Image

Figure 18-12 Single VM file taking 11 GB

In addition, your VM files are precious. Protect them with good RAID arrays and occasional backups to make sure they are available when you need them.

Emulation

Okay, I know what I said earlier, but there are some situations where hypervisors will do some simple emulation. This emulation only supports certain types of hardware. The best examples are network interface cards (NICs). Instead of writing drivers so that every imaginable guest OS can use the hypervisor’s virtual NIC, the hypervisor will likely emulate a popular, widely supported hardware NIC. A great example is Microsoft’s Hyper-V. If you want to set up a Linux VM on your Hyper-V system, Linux won’t understand how to talk to Microsoft’s default virtual NIC unless you change it to Legacy Network Adapter (see Figure 18-13).

Image

Figure 18-13 Enabling NIC emulation on Hyper-V

Basically all you are doing is turning off a few “Microsofty” features that work great when your guest VM is a Windows system. So is this really emulation as we described earlier in the chapter? Not in quite the same way; the hypervisor is not emulating a full system, just some pieces for compatibility reasons. These settings rarely mention emulation explicitly, but be aware that they exist.

Network Support

Probably one of the coolest features of VMs is the many different ways you can “virtually” network them. Don’t just limit yourself to thinking, “Oh, can I get a VM to connect to the Internet?” Well, sure you can, but hypervisors do so much more. Every hypervisor has the capability to connect each of its virtual machines to a network in a number of different ways depending on your needs.

Internal Networking Let’s say you have a scenario where you have four VMs and you want them to see each other, but nothing else. No Internet connection: just four VMs that think they are the only computers in existence. Go into the settings for all four VMs and set their respective virtual NICs to an internal network (see Figure 18-14). In this case, every VM running on that one hypervisor will act as though it is connected to its own switch and nothing else.

Image

Figure 18-14 Configuring a VM for an internal network in VirtualBox

Internal networking is really handy when you want to play with some cool networking tool, but you don’t want to do something potentially unsafe to anything but your little virtual test network. I often find fun utilities that do all kinds of things that you would never want to do on a real network (malware utilities, network scanners, and so on). By making a few VMs and connecting them via an internal network, I can play all I wish without fear of messing up a real network.

Bridged Networking When most people think of networking a new VM, it’s safe to say they are really thinking, “How do I get my new VM on the Internet?” You first connect to the Internet by connecting to a real network. There are plenty of scenarios where you might want a VM that connects to your real network, exactly as your host machine connects to the network. To do this, the VM’s virtual NIC needs to piggyback (the proper word is bridge) the real NIC to get out to the network. A VM with a bridged network connection accesses the same network as the host system. It’s a lot like the virtual machine has its own cable to connect it to the network. Bridged networking is a simple way to get a VM connected to the Internet (assuming the host machine has an Internet connection, of course).

Images

EXAM TIP A VM connected using bridged networking is subject to all the same security risks as a real computer on the Internet.

Here’s a scenario where bridged networking is important. Let’s say someone is trying to access my online videos, but is having trouble. I can make a VM that replicates a customer’s OS and browser. In this case, I would set up the VM’s NIC as a bridged network. This tells the real NIC to allow the virtual NIC to act as though it is a physical NIC on the same network. It can take Dynamic Host Configuration Protocol (DHCP) information just like a real NIC.

Images

EXAM TIP On almost every hypervisor, when you create a new VM, it will by default use bridged networking unless you specifically reconfigure the VM’s NIC to do otherwise.

Virtual Switches Think about this for a moment: If you are going to make an internal network of four virtual machines, aren’t they going to need to somehow connect together? In a real network, we connect computers together with a switch. The answer is actually kind of cool: hypervisors make virtual switches! Every hypervisor has its own way to set up these virtual switches. Some hypervisors, like VirtualBox, just do this automatically and you don’t see anything. Other hypervisors (Microsoft’s Hyper-V is the best example) require you to make a virtual switch to set up networking. Figure 18-15 shows the Hyper-V Virtual Switch Manager setting up a virtual switch.

Image

Figure 18-15 Hyper-V’s Virtual Switch Manager

No Networking The last and probably least network option is no network at all. Just because you make a VM doesn’t mean you need any kind of network. I have a number of VMs that I keep around just to see what my test bank software does on various standalone systems. I don’t need networking at all. I just plug in a thumb drive with whatever I’m installing and test.
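If you like the command line, VirtualBox’s VBoxManage tool can set all of these modes. The --nic1, --intnet1, and --bridgeadapter1 switches are real VirtualBox options; the VM and adapter names below are made up. Here’s a Python sketch that builds the right command for each mode:

```python
# Sketch: build a VBoxManage command line for each networking mode
# discussed above. The subcommand and switches are real VirtualBox
# CLI options; "TestVM", "labnet", and "eth0" are invented examples.
def nic_command(vm, mode, adapter=None, net_name=None):
    cmd = ["VBoxManage", "modifyvm", vm, "--nic1", mode]
    if mode == "bridged" and adapter:
        cmd += ["--bridgeadapter1", adapter]   # which host NIC to bridge
    if mode == "intnet" and net_name:
        cmd += ["--intnet1", net_name]         # name of the internal network
    return cmd

print(nic_command("TestVM", "intnet", net_name="labnet"))
print(nic_command("TestVM", "bridged", adapter="eth0"))
print(nic_command("TestVM", "none"))
```

Pass the resulting list to subprocess.run() to actually reconfigure the VM (with the VM powered off).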

Installing a Virtual Machine

The actual process of installing a hypervisor is usually no more complicated than installing any other type of software. Let’s use Hyper-V as an example. If you have a Windows 8/8.1/10 system, you can enable Microsoft’s Hyper-V by going to the Programs and Features Control Panel applet and selecting Turn Windows features on or off, which opens the Windows Features dialog box, as shown in Figure 18-16.

Image

Figure 18-16 Installing Hyper-V in Windows

Once you’ve installed the hypervisor feature, you’ll have a virtual machine manager that acts as the primary place to create, start, stop, save, and delete guest virtual machines. Figure 18-17 shows the manager for Oracle’s VirtualBox.

Image

Figure 18-17 Oracle VM VirtualBox Manager (three VMs installed)

Creating a Virtual Machine

So it’s time to build a virtual machine. On pretty much any manager this is simply a matter of clicking New | Virtual Machine, which starts some kind of wizard to ensure you’re creating the right virtual machine for your guest OS. Most hypervisors will have presets for a number of crucial settings to ensure your guest OS has the virtual hardware it needs to run well. Figure 18-18 shows the VirtualBox wizard asking what OS I intend to install. By selecting the correct preset, I’ll make sure my guest OS has all the virtual RAM, hard drive space, and so forth, that it needs.

Image

Figure 18-18 Creating a new VM in Oracle’s VirtualBox
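For what it’s worth, you don’t have to click through the wizard: VirtualBox exposes the same steps through VBoxManage. The createvm and modifyvm subcommands and their switches are real; the VM name, OS type ID, and memory size here are just examples. A Python sketch of the commands involved:

```python
# Sketch: scripting the same "new VM" steps the wizard performs.
# createvm/modifyvm and their switches are real VBoxManage options;
# the name and sizes are illustrative, not recommendations.
def create_vm_commands(name, ostype, memory_mb):
    return [
        ["VBoxManage", "createvm", "--name", name,
         "--ostype", ostype, "--register"],          # create and register the VM
        ["VBoxManage", "modifyvm", name,
         "--memory", str(memory_mb)],                # give it some RAM
    ]

for cmd in create_vm_commands("UbuntuTest", "Ubuntu_64", 2048):
    print(" ".join(cmd))
```

The --ostype preset plays the same role as the wizard’s OS drop-down, nudging the defaults toward sensible values for the guest you intend to install.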

Installing the Operating System

Once you’ve created the new guest VM, it’s time to install a guest operating system. Just because you’re creating a virtual machine, don’t think the operating system and applications aren’t real. You need to install an operating system on that virtual machine. You can do this using some form of optical media, just as you would on a machine without virtualization. Would you like to use Microsoft Windows in your virtual machine? No problem, but know that every virtual machine on which you create and install Windows requires a separate, legal copy of Windows; this also goes for any licensed software installed in the VM.

Because virtual machines are so flexible on hardware, all good virtual machine managers enable you to use the host machine’s optical drive, a USB thumb drive, or an ISO file. One of the most popular ways is to tell the new virtual machine to treat an ISO file as its own optical drive. In Figure 18-19, I’m installing Ubuntu on a VMware virtual machine. I downloaded an ISO image from the Ubuntu Web site (www.ubuntu.com), and as the figure shows, I’ve pointed the dialog box to that image.

Image

Figure 18-19 Selecting the installation media

If you look closely at Figure 18-19, you’ll see that VMware reads the installation media and detects the operating system. Because VMware knows this operating system, it configures all of the virtual hardware settings automatically: amount of RAM, virtual hard drive size, and so on. You can change any of these settings, either before or after the virtual machine is created.

Next, you need to accept the size of the virtual drive, as shown in Figure 18-20.

Image

Figure 18-20 Setting the virtual drive size

You also need to give the virtual machine a name. By default, VMware Workstation uses a simple name. For this overview, accept the default name: Ubuntu (plus some version-specific information). This dialog box also lets you decide where you want to store the files that comprise the virtual machine. Note that VMware uses a folder in the user’s Documents folder called Virtual Machines (see Figure 18-21).

Image

Figure 18-21 Entering VM name and location

Images

NOTE Use descriptive names for virtual machines. This will save you a lot of confusion when you have multiple VMs on a single host.

After you’ve gone through all the configuration screens, you can start using your virtual machine. You can start, stop, pause, add, or remove virtual hardware.

After the virtual machine installs, you then treat the VM exactly as though it were a real machine. The only big difference is that VMware Workstation replaces CTRL-ALT-DELETE with CTRL-ALT-INSERT by default. Figure 18-22 shows VMware Workstation with the single VM installed but not running.

Image

Figure 18-22 VMware Workstation with a single VM

Congratulations! You’ve just installed a virtual desktop. As with a real system, you can add or remove hardware, but it won’t take a trip to the electronics store or a box from Newegg. The real power of a hypervisor is in the flexibility of the virtual hardware. A hypervisor has to handle every input and output that the operating system would request of normal hardware. With a good hypervisor, you can easily add and remove virtual hard drives, virtual network cards, virtual RAM, and so on, helping you adapt your virtual desktop to meet changing needs. Figure 18-23 shows the Hardware Configuration screen from VMware Workstation.

Image

Figure 18-23 Configuring virtual hardware in VMware Workstation

Images

SIM Check out the excellent Chapter 18 Show! and Click! sims on “Virtual Hardware” over at http://totalsem.com/90x. These help reinforce terminology and typical steps for setting up a virtual machine.

Virtual desktops were the first type of popular virtual machines seen in the PC world, championed by VMware and quickly copied by other virtualization programs. However, there’s a lot more to virtualization than just virtual desktops. Before I dive in too far, let’s step back a moment and understand a very important question: Why do we virtualize?

902

Why Do We Virtualize?

Virtualization has taken IT by storm, but for those who have never seen virtualization, the big question has got to be: Why? Let’s talk about the benefits of virtualization. While you read this section, keep in mind two important things:

•   A single hypervisor on a single system will happily run as many virtual machines as its RAM, CPU, and drive space allow. (RAM is almost always the limiting factor.)

•   A virtual machine that’s shut down is no more than a file or folder sitting on a hard drive.

While you’re reading about the benefits of virtualization, don’t forget about the requirements. To run one or more virtual machines, you’ll need a powerful machine—fast processor, loads of RAM, and a good amount of hard drive space. If you want the virtual PC to connect to a network, your physical PC needs a NIC.

Images

EXAM TIP Virtualized operating systems use the same security features as real operating systems. For each virtual machine user account, you’ll need to keep track of user names, passwords, permissions, and so on, just like on a normal PC.

Power Saving

Before virtualization, each OS needed to be on a unique physical system. With virtualization, you can place multiple virtual servers or clients on a single physical system, reducing electrical power use substantially. Rather than one machine running a Windows file server, another Windows system acting as a DNS server, and a third machine running Linux for a DHCP server, why not take one physical computer to handle all three servers simultaneously as virtual machines (see Figure 18-24)?

Image

Figure 18-24 Virtualization saves power.

Hardware Consolidation

Much in the way you can save power by consolidating multiple servers or clients into a single powerful server or client, you can also avoid purchasing expensive hardware that is rarely if ever run at full capacity during its useful lifetime. Complex desktop PCs can be replaced with simple but durable thin clients, which may not need hard drives, fans, or optical drives, because they only need enough power to access the remote desktop. For that matter, why buy multiple high-end servers, complete with multiple processors, RAID arrays, redundant power supplies, and so on, and only run each server using a fraction of its resources? With virtualization, you can easily build a single physical server machine and run a number of servers or clients on that one box.

System Management and Security

The most popular reason for virtualizing is probably the benefits we reap from easy-to-manage systems. We can take advantage of the fact that VMs are simply files: like any other files, they can be copied. New employees can be quickly set up with a department-specific virtual machine with all of the software they need already installed.

Let’s say you have set up a new employee with a traditional physical system. If that system goes down—due to hacking, malware, or so on—you need to restore the system from a backup, which may or may not be easily at hand. With virtualization, you merely need to shut down the virtual machine and reload an alternate copy of it.

Most hypervisors let us take a snapshot or checkpoint, which saves a virtual machine’s state at that moment, allowing us to quickly return to this state later. Snapshots are great for doing risky (or even not-so-risky) maintenance with a safety net. These aren’t, however, a long-term backup strategy; each snapshot may reduce performance and should be removed as soon as the danger has passed. Figure 18-25 shows VMware Workstation saving a snapshot.

Image

Figure 18-25 Saving a snapshot

Research

Here’s a great example that happens in my own company. I sell my popular Total Tester test banks: practice questions for you to test your skills on a broad number of certification topics. As with any distributed program, I tend to get a few support calls. Running a problem through the same OS helps my team solve it. In the pre-virtualization days, I usually had seven to ten multi-boot PCs laying around my office just to keep active copies of specific Windows versions. Today, a single hypervisor enables us to support a huge number of Windows versions with one machine.

Now that we’ve discussed how virtualization is useful, let’s look at how virtualization is implemented in networks.

Real-World Virtualization

When it comes to servers, virtualization has pretty much taken over everywhere. Many of the servers we access, particularly Web and e-mail servers, are now virtualized. Like any popular technology, there are a lot of people continually working to make virtualization better. The VMware Workstation example shown earlier in this chapter is a very powerful desktop application, but it still needs to run on top of a single system that is already running an operating system—the host operating system.

What if you could improve performance by removing the host operating system altogether and installing nothing but a hypervisor? Well, you can! This is done all the time with another type of powerful hypervisor/OS combination called a bare-metal hypervisor. We call it bare metal because there’s no other software between it and the hardware—just bare metal. The industry also refers to this class of hypervisors as Type-1, and applications such as VMware Workstation as Type-2 (see Figure 18-26).

Image

Figure 18-26 Type 1 vs. Type 2 hypervisors

In 2001 VMware introduced a bare-metal hypervisor, originally called ESX, that shed the unnecessary overhead of an operating system. ESX has since been supplanted by ESXi in VMware’s product lineup. ESXi is a free hypervisor that’s powerful enough to replace the host operating system on a physical box, turning the physical machine into a system that does nothing but support virtual ones. ESXi, by itself, isn’t much to look at; it’s a tiny operating system/hypervisor that’s often installed on something other than a hard drive. In fact, you won’t manage a single VM at the ESXi server itself, as pretty much everything is done through a Web interface (see Figure 18-27).

Image

Figure 18-27 Web interface for ESXi

A host running its hypervisor from flash memory can dedicate all of its available disk space to VM storage, or even cut out the disks altogether and keep its VMs on a storage area network (SAN). Figure 18-28 shows how I loaded my copy of ESXi: via a small USB thumb drive. The server loads ESXi off the thumb drive when I power it up, and in short order a very rudimentary interface appears where I can input essential information, such as a master password and a static IP address.

Image

Figure 18-28 USB drive on server system

Don’t let ESXi’s small size fool you. It’s small because it only has one job: to host virtual machines. ESXi is an extremely powerful bare-metal hypervisor.

To understand the importance of virtualization fully, you need to get a handle on how it increases flexibility as the scale of an operation increases. Let’s take a step back and talk about money. One of the really great things money does is give us common, easily divisible units we can exchange for the goods and services we need. When we don’t have money, we have to trade goods and services to get it, and before we had money at all we had to trade goods and services for other goods and services.

Let’s say I’m starving and all I have is a hammer, and you just so happen to have a chicken. I offer to build you something with my hammer, but all you really want is a hammer of your own. This might sound like a match made in heaven, but what if my hammer is actually worth at least five chickens, and you just have one? I can’t give you a fifth of a hammer, and once I trade the hammer for your chicken, I can’t use it to build anything else. I have to choose between going without food and wasting most of my hammer’s value. If only my hammer was money.

In the same vein, suppose Mario has only two physical servers; he basically has two really expensive hammers. If he uses one server to host an important site on his intranet, its full potential might go almost unused (especially since his intranet site will never land on the front page of reddit). But if Mario installs a hypervisor on each of these machines, he has taken a big step toward using his servers in a new, more productive way.

In this new model, Mario’s servers become less like hammers and more like money. I still can’t trade a fifth of my hammer for a chicken, but Mario can easily use a virtual machine to serve his intranet site and only allocate a fifth—or any other fraction—of the host’s physical resources to this VM. As he adds hosts, he can treat them more and more like a pool of common, easily divisible units used to solve problems. Each new host adds resources to the pool, and as Mario adds more and more VMs that need different amounts of resources, he increases his options for distributing them across his hosts to minimize unused resources (see Figure 18-29).

Image

Figure 18-29 No vacancy on these hosts
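You can picture this resource-pool idea with a simple first-fit placement: hand each VM (sized by RAM, the usual limiting factor) to the first host that has room for it. The host and VM sizes below are invented purely for illustration:

```python
# Sketch of the "pool of resources" idea: first-fit placement of VMs
# across a pool of hosts, by RAM in GB. All numbers are made up.
def place_vms(hosts_gb, vms_gb):
    free = list(hosts_gb)        # remaining RAM on each host
    placement = []               # host index chosen for each VM
    for vm in vms_gb:
        for i, avail in enumerate(free):
            if vm <= avail:
                free[i] -= vm
                placement.append(i)
                break
        else:
            placement.append(None)  # no host has room for this VM
    return placement, free

placement, free = place_vms([32, 32], [12, 20, 8, 16, 6])
print(placement)  # which host each VM landed on
print(free)       # unused RAM left in the pool
```

Real schedulers juggle CPU, disk, and network as well, but even this toy version shows how a pool of hosts soaks up differently sized VMs with little waste.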

To the Cloud

While simple virtualization enabled Mario to optimize and reallocate his computing resources in response to his evolving needs (as described in the previous section), he can’t exceed the capabilities of his local hardware. Luckily, he’s no longer stuck with just the hardware he owns. Because his virtual machines are just files running on a hypervisor, he can run them in the cloud on networks of servers worldwide.

Consider a simple Web server. In the not-so-long-ago days, if you wanted a Web site, you needed to build a system, install a Web server, get a commercial-grade Internet link, obtain and properly configure the box with a public IP address, set up firewalls, and provide real-time administration to that system. The cost for even a single system would be thousands of dollars upfront for installation and hundreds of dollars a month for upkeep (and heaven forbid your system ever went down).

As time passed we began to see hosting services that took most of the work of setting up a server infrastructure away from you. You merely “hosted” some space on a single server and that server was also hosting other Web sites. This saved you from having to set up infrastructure, but it was still relatively expensive.

Around 2005/2006, a number of companies, Amazon being the best example, started offering a new kind of hosting service. Instead of individual physical computers or directories on a shared host, Amazon discovered it could use large groups of virtualized servers combined with a powerful front end to enable customers to simply click and start the server they wanted. Cloud computing was born.

When we talk about the “cloud,” we’re talking not just about friendly file-storage services like Dropbox or Google Drive, but also about simple interfaces to a vast array of on-demand computing resources sold by Amazon (see Figure 18-30), Microsoft, and many other companies over the open Internet. The technology at the heart of these innovative services is virtualization.

Image

Figure 18-30 Amazon Web Services Management Console

The Service-Layer Cake

Service is the key to understanding the cloud. At the hardware level, we’d have trouble telling the difference between the cloud and the servers and networks that comprise the Internet as a whole. We use the servers and networks of the cloud through layers of software that add great value to the underlying hardware by making it simple to perform complex tasks or manage powerful hardware. As end users we generally interact with just the sweet software icing of the service-layer cake—Web applications like Dropbox, Gmail, and Facebook, which have been built atop it. The rest of the cake exists largely to support Web applications like these and their developers. Let’s slice it open (see Figure 18-31) and start at the bottom.

Image

Figure 18-31 A tasty three-layer cake

Infrastructure as a Service

Building on the ways virtualization allowed Mario to make the most efficient use of hardware in his local network, large-scale global Infrastructure as a Service (IaaS) providers use virtualization to minimize idle hardware, protect against data loss and downtime, and respond to spikes in demand. Mario can use big IaaS providers like Amazon Web Services (AWS) to launch new virtual servers using an operating system of his choice on demand (see Figure 18-32) for pennies an hour. The beauty of IaaS is that you no longer need to purchase expensive, heavy hardware. You are using Amazon’s powerful infrastructure as a service.

Image

Figure 18-32 Creating an instance on AWS

Many Web sites are more easily understood as Web applications. If you want to access Mike Meyers’ videos, you go to http://hub.totalsem.com. This Web site is really an application that you use to watch videos, practice simulation questions, and so forth. This Web application is a great tool, but as more people access the application we often need to add more capacity so you won’t yell at us for a slow server. Luckily, our application is designed to run distributed across multiple servers. If we need more servers, we just add as many more virtual servers as we need. But even this is just scratching the surface. AWS provides many of the services needed to drive popular, complex Web applications—unlimited data storage (see Figure 18-33), database servers, caching, media hosting, and more—all billed by usage.

Image

Figure 18-33 Amazon Simple Storage Service (S3)

The hitch is that, while we’re no longer responsible for the hardware, we are still responsible for configuring and maintaining the operating system and software of any virtual machines we create. This can mean we have a lot of flexibility to tune it for our needs, but it also requires knowledge of the underlying OS and time to manage it. If you want someone to handle the infrastructure, the operating system, and everything else (except your application), you need to move up to Platform as a Service (PaaS).

Platform as a Service

Web applications are built by programmers. Programmers do one thing really well: they program. The problem for programmers is that a Web application needs a lot more than just a programmer. Developing a Web application requires people to manage the infrastructure: system administrators, database administrators, general network support, and so on. A Web application also needs more than just hardware and an operating system. It needs development tools, monitoring tools, database tools, and potentially hundreds of other tools and services. Getting a Web application up and running is a big job.

A Platform as a Service (PaaS) provider gives programmers all the tools they need to deploy, administer, and maintain a Web application. The PaaS provider starts with some form of infrastructure, which could be provided by an IaaS provider, and on top of that infrastructure the provider builds a platform: a complete deployment and management system to handle every aspect of a Web application.

The important point of PaaS is that the infrastructure underneath the PaaS is largely invisible to the developer. The PaaS provider is aware of its own infrastructure, but the developer cannot control it directly and doesn’t need to think about its complexity. As far as the programmer is concerned, the PaaS is just a place to deploy and run his or her application.

Heroku, one of the earliest PaaS providers, creates a simple interface on top of the IaaS offerings of AWS, further reducing the complexity of developing and scaling Web applications. Heroku’s management console (see Figure 18-34) enables developers to increase or decrease the capacity of an application with a single slider, or easily set up add-ons that add a database, monitor logs, track performance, and more. It could take days for a tech or developer unfamiliar with the software and services to install, configure, and integrate a set of these services with a running application; PaaS providers help cut this down to minutes or hours.

Image

Figure 18-34 Heroku’s management console

Software as a Service

Software as a Service (SaaS) sits at the top layer of the cake. SaaS shows up in a number of ways, but the best examples are the Web applications we just discussed. Some Web applications, such as Total Seminars Training Hub, charge for access. Other Web applications, like Google Maps, are offered for free. Users of these Web applications don’t own this software; you don’t get an installation DVD, nor is it something you can download once and keep using. If you want to use a Web application, you must get on the Internet and access the site. While this may seem like a disadvantage at first, the SaaS model provides access to necessary applications wherever you have an Internet connection, often without having to carry data with you or regularly update software. At the enterprise level, the subscription model of many SaaS providers makes it easier to budget and keep hundreds or thousands of computers up to date (see Figure 18-35).

Image

Figure 18-35 SaaS vs. every desktop for themselves

The challenge to perfectly defining SaaS is an argument that almost anything you access on the Internet could be called SaaS. A decade ago we would’ve called the Google search engine a Web site, but it provides a service (search) that you do not own and that you must access on the Internet. If you’re on the Internet, you’re arguably always using SaaS.

It isn’t all icing, though. In exchange for the flexibility of using public, third-party SaaS, you often have to trade strict control of your data. Security might not be crucial when someone uses Google Drive to draft a blog post, but many companies are concerned about sensitive intellectual property or business secrets traveling through untrusted networks and being stored on servers they don’t control.

Images

EXAM TIP Know the differences between basic cloud concepts such as SaaS, IaaS, and PaaS.

Ownership and Access

Security concerns like those just discussed don’t mean organizations have to forfeit all of the advantages of cloud computing, but they do make their management think hard about the trade-offs between cost, control, customization, and privacy. Some organizations also have unique capacity, performance, or other needs no existing cloud provider can meet. Each organization makes its own decisions about these trade-offs, but the result is usually a cloud network that can be described as public, private, community, or hybrid.

Public Cloud

Most folks just interact with a public cloud, a term used to describe software, platforms, and infrastructure delivered through networks that the general public can use. When we talk about the cloud, this is what we mean. Out on the open, public Internet, cloud services and applications can collaborate in ways that make it easier to think of them collectively as the cloud than as many public clouds. The public doesn’t own this cloud—the hardware is often owned by companies like Amazon, Google, and Microsoft—but there’s nothing to stop a company like Netflix from building its Web application atop the IaaS offerings of all three of these companies at once.

The public cloud sees examples of all the xaaS varieties, which give specific names to these cloud concepts:

•   Public IaaS

•   Public PaaS

•   Public SaaS

Private Cloud

If a business wants some of the flexibility of the cloud, needs complete ownership of its data, and can afford both, it can build an internal cloud the business actually owns—a private cloud. A security-minded company with enough resources could build an internal IaaS network in an onsite data center. Departments within the company could create and destroy virtual machines as needed, and develop SaaS to meet collaboration, planning, or task and time management needs, all without sending the data over the open Internet. A company with these needs but without the space or knowledge to build and maintain a private cloud can also contract a third party to maintain or host it.

Again, there are private versions of each of the cloud concepts:

•   Private IaaS

•   Private PaaS

•   Private SaaS

Community Cloud

While a community center is usually a public gathering place for those in the community it serves, a community cloud is more like a private cloud paid for and used by more than one organization. Community clouds aren’t run by a city or state for citizens’ use; the community in this case is a group of organizations with similar goals or needs. If you’re a military contractor working on classified projects, wouldn’t it be nice to share the burden of defending your cloud against sophisticated attackers sponsored by foreign states with other military and intelligence contractors?

Just like with the public and private cloud, there are community cloud versions of all the xaaS varieties:

•   Community IaaS

•   Community PaaS

•   Community SaaS

Hybrid Cloud

Sometimes we can have our cake and eat it too. Not all data is crucial, and not every document is a secret. Needs that an organization can only meet in-house might be less important than keeping an application running when demand exceeds what it can handle onsite. We can build a hybrid cloud by connecting some combination of public, private, and community clouds, allowing communication between them. Using a hybrid cloud model can mean not having to maintain a private cloud powerful enough to meet peak demand—an application can grow into a public cloud instead of grinding to a halt, a technique called cloud bursting. But a hybrid cloud isn’t just about letting one Web application span two types of cloud—it’s also about integrating services across them. Let’s take a look at how Mario could use a hybrid cloud to expand his business.

Images

EXAM TIP Know the differences between public, private, community, and hybrid cloud models.

Mario runs a national chain of sandwich shops and is looking into drone-delivered lunch. He’ll need a new application in his private cloud to calculate routes and track drones, and that application will have to integrate with the existing order-tracking application in his private cloud. But then he’ll also need to integrate it with a third-party weather application in the public cloud to avoid sending drones out in a blizzard, and a flight-plan application running in a community cloud to avoid other drones, helicopters, and aircraft (and vice versa). The sum of these integrated services and applications is the hybrid cloud that will power Mario’s drone-delivered lunch. Like the other three clouds, the hybrid cloud sees examples of all the xaaS varieties, which give specific names to these cloud concepts:

•   Hybrid IaaS

•   Hybrid PaaS

•   Hybrid SaaS

Why We Cloud

Cloud computing is the way things are done today. But let’s take a moment to discuss some of the reasons we use the cloud instead of the old-style hammer of individual servers.

Virtualization

The cloud relies on virtualization. All of the power of virtualization discussed throughout this chapter applies to the cloud. Without virtualization’s savings of power, resources, recovery, and security, the cloud simply could not happen.

Rapid Elasticity

Let’s say you start a new Web application. If you use an IaaS provider such as Amazon, you can start with a single server and get your new Web application out there. But what happens if your application gets really, really popular? No problem! Using AWS features, you can easily expand the number of servers, even spread them out geographically, with just a few clicks. We call this ability rapid elasticity.
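The decision logic behind elastic scaling can be sketched simply: given how many requests each server can comfortably handle, compute how many servers the application should be running right now. This is an illustrative sketch, not any particular AWS feature; the capacity numbers are invented.

```python
import math

def desired_servers(requests_per_sec, capacity_per_server, minimum=1):
    """Scale the server count up or down to match current demand.

    Ceiling division rounds up, so even a small overflow of one
    server's capacity adds another server rather than dropping requests.
    """
    needed = math.ceil(requests_per_sec / capacity_per_server)
    return max(minimum, needed)

# Quiet day: one server is plenty.
print(desired_servers(40, capacity_per_server=100))    # 1
# Traffic spike: the fleet grows to absorb it.
print(desired_servers(2350, capacity_per_server=100))  # 24
```

A real autoscaler runs a check like this on a timer and asks the IaaS provider to start or stop instances to match the result.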

On-Demand

So what if you have a Web application that has wild swings in demand? A local university wants to sell football tickets online. When there isn’t a game coming up, their bandwidth demands are very low. But when a game is announced, the Web site is pounded with ticket requests. With cloud computing, it’s easy to set up your application to add or remove capacity automatically as demand changes, a capability called on-demand. The application adjusts itself to meet the current demand.

Resource Pooling

Any time you consolidate systems’ physical and time resources, you are resource pooling. A single virtualization host can pool the resources of a few physical servers, but imagine the scale of a company like Amazon. AWS server farms are massive, pooling resources that would otherwise require millions of individual physical servers spread all over the world!

Measured Service

Ah, the one downside to using the public cloud: you have to write a check to whoever is doing the work for you—and boy can these cloud providers get creative about how to charge you! In some cases you are charged based on the traffic that goes in and out of your Web application; in other cases you pay for the time that each of your virtualized servers is running. However the costs are calculated, this pay-for-what-you-use model is called measured service, in contrast to traditional hosting with a fixed monthly or yearly fee.
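The arithmetic behind measured service is straightforward. Here’s a hypothetical bill combining per-hour instance charges and per-gigabyte transfer charges; the rates are invented for illustration, not any real provider’s pricing.

```python
def monthly_bill(instance_hours, hourly_rate, gb_transferred, per_gb_rate):
    """Usage-based billing: pay only for hours run and data moved."""
    return instance_hours * hourly_rate + gb_transferred * per_gb_rate

# Two small VMs running all month (730 hours each) plus 500 GB of traffic,
# at hypothetical rates of $0.02/hour and $0.09/GB.
cost = monthly_bill(instance_hours=2 * 730, hourly_rate=0.02,
                    gb_transferred=500, per_gb_rate=0.09)
print(f"${cost:.2f}")  # $74.20
```

Shut one VM down for half the month and the bill shrinks accordingly, which is exactly the behavior a fixed monthly hosting fee can’t give you.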

Chapter Review

Questions

1. Upgrading which component of a host machine would most likely enable you to run more virtual machines simultaneously?

A.  CPU

B.  Hard drive

C.  RAM

D.  Windows

2. What is the difference between a virtual machine (VM) and an emulator?

A.  A VM converts commands to and from a host machine to an entirely different platform, whereas an emulator creates an environment based on the host machine and does no converting.

B.  An emulator converts commands to and from a host machine to an entirely different platform, whereas a VM creates an environment based on the host machine and does no converting.

C.  An emulator requires a host OS, whereas a VM runs on bare-metal servers without an OS.

D.  A VM requires a host OS, whereas an emulator runs on bare-metal servers without an OS.

3. What feature lets you save a VM’s state so you can quickly restore to that point? (Choose two.)

A.  Checkpoint

B.  Save

C.  Snapshot

D.  Zip

4. What do you need to install a legal copy of Windows 8.1 into a virtual machine using VMware Workstation?

A.  A valid VM key

B.  Valid Windows 8.1 installation media

C.  A valid ESXi key

D.  A second NIC

5. Which of the following is an advantage of a virtual machine over a physical machine?

A.  Increased performance

B.  Hardware consolidation

C.  No backups needed

D.  Operating systems included

6. Janelle wants to start a new photo-sharing service for real pictures of Bigfoot, but doesn’t own any servers. How can she quickly create a new server to run her service?

A.  Public cloud

B.  Private cloud

C.  Community cloud

D.  Hybrid cloud

7. After the unforeseen failure of her Bigfoot-picture-sharing service, bgFootr—which got hacked when she failed to stay on top of her security updates—Janelle has a great new idea for a new service to report Loch Ness Monster sightings. What service would help keep her from having to play system administrator?

A.  Software as a Service

B.  Infrastructure as a Service

C.  Platform as a Service

D.  Network as a Service

8. Powerful hypervisors like ESXi are often booted from __________________.

A.  Floppy diskettes

B.  USB thumb drives

C.  Firmware

D.  Windows

9. When a virtual machine is not running, how is it stored?

A.  Firmware

B.  RAM drive

C.  Optical disc

D.  Files

10. BigTracks is a successful Bigfoot-tracking company using an internal service to manage all of its automated Bigfoot monitoring stations. A Bigfoot migration has caused a massive increase in the amount of audio and video sent back from their stations. In order to add short-term capacity, they can create new servers in the public cloud. What model of cloud computing does this describe?

A.  Public cloud

B.  Private cloud

C.  Community cloud

D.  Hybrid cloud

Answers

1. C. Adding more RAM will enable you to run more simultaneous VMs. Upgrading a hard drive could help, but it’s not the best answer here.

2. B. An emulator converts from one platform to another, whereas a virtual machine mirrors the host machine.

3. A, C. The saved state of a VM is called a snapshot or checkpoint. Not to be confused with a true backup.

4. B. You need a copy of the Windows installation media to install Windows.

5. B. A big benefit of virtualization is hardware consolidation.

6. A. Using the public cloud will enable Janelle to quickly create the servers she needs.

7. C. By switching to a PaaS, Janelle can concentrate on creating her service and leave the lower-level administration up to the PaaS provider.

8. B. A good hypervisor can be tiny, loading from something as small as a USB thumb drive.

9. D. VMs are just files, usually stored on a hard drive.

10. D. BigTracks is creating a hybrid cloud by connecting its internal private cloud to a public cloud to quickly expand capacity.