Virtualization: ONE BOX, MANY SERVERS
There’s no magic to server virtualization, and the benefits of hardware consolidation and datacenter control are real
SERVER VIRTUALIZATION IS ONE OF those rare technologies that sounds too good to be true, but it’s real. Its earliest use was to consolidate underutilized server hardware onto a smaller number of machines. Since those early days, it has grown into a multipurpose solution that enables greater reliability, improved management, and other benefits that make it an all-but-indispensable tool for enterprise data-center administrators.
The rocket science that makes virtualization work is as easy to summarize as, well, a rocket. To use an oversimplified definition, a virtual server mimics, using software alone, the behavior and capabilities of a stand-alone computer.
The nomenclature of virtualization is simple. The bottom of the software stack is occupied by a single instance of an ordinary operating system that’s installed directly onto the server hardware. Above that, a virtualization layer handles the redirection and emulation that make up the virtual computer. The combination of these two lower layers is referred to as the host. The host provides the full workings of a familiar PC right down to its BIOS ROM, and it can spawn as many independent PCs – using varying user-defined configurations – as you choose.
Like a physical server, a virtual PC is useless until you install an operating system on it. The operating systems that you install on your virtual PCs are called guests. Installing a guest OS can be as easy as booting from the OS’s installation CD. It’s just like installing an OS on a PC, and in general, if you wonder how virtualization will behave, that’s the answer: Just like a PC.
In fact, in an all-Windows environment, it’s easy to lose your place: Are you looking at your Windows host OS or at one of the four Windows guest OSes you just installed? You might get confused, but your guest OSes and their applications never do. Each guest OS believes it has the whole machine to itself. And, in a sense, it does.
Operating systems and applications running on virtual servers don’t have direct control over resources such as memory, hard drives, and network ports. Instead, the virtualization layer that sits beneath the OS and applications intercepts requests for interaction with hardware and handles them as it sees fit.
The real mindblower that turns this technology into something close to magic is that a world-class virtualization solution such as VMware ESX Server can synthesize an entire hardware configuration that has little resemblance to the underlying hardware. For example, the host might simulate the initialization process of a SCSI controller to the last detail, convincing the guest OS that this initialization is being performed even when no physical SCSI controller exists. It can make IDE drives look like SCSI drives, convert network shares into locally attached storage, turn one Ethernet adapter into several, and create gateways between older operating systems and unsupported modern hardware such as Fibre Channel adapters. You build your own servers that precisely fit the needs of your applications, but you use a mouse instead of a screwdriver.
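The trap-and-emulate idea behind that synthesis can be reduced to a toy sketch. The following is purely conceptual and is not any real hypervisor’s code; the class, method, and vendor names are invented for illustration. The guest issues what it believes are SCSI commands, while the host quietly services them from an ordinary byte store standing in for a disk-image file, an IDE drive, or a network share.

```python
class SyntheticScsiDisk:
    """Toy device model: presents a 'SCSI' disk to a guest while the
    host backs it with a plain in-memory byte store (a stand-in for a
    disk-image file, an IDE drive, or a network share)."""

    def __init__(self, size: int = 1024):
        self.image = bytearray(size)  # the host's real backing store

    def inquiry(self) -> dict:
        # The guest's SCSI inquiry sees a disk that doesn't physically exist.
        return {"device_type": "SCSI disk", "vendor": "VIRTUAL"}

    def write(self, lba: int, data: bytes) -> None:
        # A trapped guest write lands in the backing store, never on hardware.
        self.image[lba:lba + len(data)] = data

    def read(self, lba: int, length: int) -> bytes:
        return bytes(self.image[lba:lba + length])


disk = SyntheticScsiDisk()
disk.write(0, b"guest boot sector")
```

To the guest, the inquiry and the reads and writes behave like real SCSI; swapping out the backing store changes nothing the guest can observe.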
Installing the OS and software onto a physical PC server every time you need one can be tedious. Fortunately, with virtualization you don’t have to. After you’ve tuned a virtualized hardware configuration precisely to your liking, you can save that server’s disk image to a file and use it as a template for other guest systems. In practice, this is a delight. You can back up a virtual server by copying the file. You can create a new server by duplicating the file – copying Windows requires reactivation and an appropriate license – or move an existing server to different physical hardware.
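In code terms, provisioning from a template really is just file copying. The sketch below is not VMware’s tooling, and the file names are invented for illustration; it only shows the idea.

```python
import os
import shutil
import tempfile

# Hypothetical paths for illustration only.
workdir = tempfile.mkdtemp()
template = os.path.join(workdir, "golden-image.vmdk")

# Stand-in for a tuned guest's disk image saved as a template.
with open(template, "wb") as f:
    f.write(b"tuned guest OS + applications")

# Creating a new server is a file copy (a Windows guest would still
# need reactivation and its own license)...
clone = os.path.join(workdir, "web-02.vmdk")
shutil.copyfile(template, clone)

# ...and so is backing one up.
backup = clone + ".bak"
shutil.copyfile(clone, backup)
```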
Virtualized servers do all the good and bad things regular servers do. They boot up, power down, suspend, hang, and even crash. If a guest OS or a device driver it uses is buggy, the virtual PC will crater. But not the physical computer, and that’s key.
If your OS crashes or an application hangs, or even if you install a software fix that requires a reboot, nothing happens to the hardware. One virtual machine can fail over to another in a purely virtual sense or in a way that’s closer to the real thing. Even if certain hardware devices have malfunctioned, so long as the fail-over target is configured to use a secondary network adapter and an alternate path to storage, the fail-over will work exactly as it would if the virtual PCs were physical PCs.
In most cases, an enterprise management system will monitor and react to a virtual fail-over as if it were the real thing. Solutions such as HP OpenView see and interact with virtual servers the same way they do with physical ones. The reported configurations of the servers will change after they’re virtualized, but it’s entirely likely that the day-to-day management of your shop will experience little change.
In addition, most virtualization systems bundle solution-specific management software, allowing an administrator to sit at a central console and manipulate all the virtual servers in an enterprise. It’s quite an eye-opener to swap out a virtual Ethernet card without ever touching the hardware.
A virtualization solution’s management console gives you a degree of control over your virtual PCs that surpasses what administrators can do with traditional tools. From a central location, you can boot and shut down virtual PCs as needed. It’s also possible to pause them, which harmlessly freezes them in their current state, or hibernate them, putting them in a deep freeze by saving their state to a file on disk. By overwriting the disk file, you can restore PCs from a backed-up state and roll back changes that rendered the guest inoperable, all from a terminal session.
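A toy state model makes those console operations concrete. This is an illustration of the concepts only, not any vendor’s API; all names are invented.

```python
class VirtualPC:
    """Minimal model of console-driven lifecycle control."""

    def __init__(self, name: str):
        self.name = name
        self.state = "off"
        self.memory = {}  # stand-in for guest RAM contents

    def boot(self):
        self.state = "running"

    def shut_down(self):
        self.state = "off"

    def pause(self):
        # Harmlessly frozen in place; state stays in host RAM.
        self.state = "paused"

    def hibernate(self) -> dict:
        # Deep freeze: save state to a "file" and stop executing.
        self.state = "hibernated"
        return dict(self.memory)

    def restore(self, saved_state: dict):
        # Overwriting state from a backup rolls the guest back.
        self.memory = dict(saved_state)
        self.state = "running"


pc = VirtualPC("app-server")
pc.boot()
pc.memory["session"] = "clean"
snapshot = pc.hibernate()        # save a known-good state
pc.memory["session"] = "broken by a bad patch"
pc.restore(snapshot)             # roll back, all from a terminal session
```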
For example, if your physical storage configuration supports volume sharing – our own reviews were performed with an Emulex SAN storage switch and an Apple Xserve RAID disk array – VMware’s VMotion option allows you to pause a running guest and start it up again on a different physical server. In a matter of seconds, you can push all the running guests and their applications from one server to another to take a machine down for maintenance. Or you can use VMotion for reprovisioning assets. A virtual PC that’s bogging down the network segment it occupies can be moved to a location with less traffic. No back strain, no recabling, and at most a few seconds of paused execution, not ended sessions or rebooting.
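The pause-and-move idea can itself be reduced to a few lines. VMotion’s real protocol is far more sophisticated (it copies memory while the guest keeps running, so the pause lasts only moments); this hedged sketch, with invented names and structures, only shows why shared storage matters: the guest’s runtime state moves, but its disk image never does.

```python
# Both hosts see the same shared storage, so the disk image stays put.
shared_storage = {"db-server.vmdk": b"guest disk image"}

host_a = {"db-server": {"state": "running", "memory": {"pc": 0x7C00}}}
host_b = {}

def migrate(guest: str, src: dict, dst: dict) -> None:
    src[guest]["state"] = "paused"   # freeze execution briefly
    dst[guest] = src.pop(guest)      # hand off runtime state only
    dst[guest]["state"] = "running"  # resume on the new host

migrate("db-server", host_a, host_b)
```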
In environments with a mix of operating systems – a common condition that turns even simple consolidation into a messy affair – one solution would be to host each OS in its own VM. For example, on a PC server running one of VMware’s virtualization solutions, you can run any combination of Windows 2003 Server, Windows 2000, Windows NT 4.0, various flavors of Linux, and FreeBSD. You can even use VMs to host different versions of the same OS. Linux software is infamous for dependence on specific versions and vendor distributions of Linux. Virtualization is the only way to run applications designed for Red Hat 7.2 and Suse 9.0 simultaneously on a single server.
Virtualization is magnificent stuff, but it doesn’t jump out of the box and cure all ills. You can never create a virtual PC that outperforms the physical system underneath. You will learn much about your applications’ system requirements from moving them to a virtual environment. They’ll likely surprise you, either with how little of the original server they used – that’s the typical case – or how piggish they are. If necessary, you can throttle the nasty ones down.
And while one of the great benefits of virtualization is security – it’s hard to accomplish much by cracking a system that doesn’t exist – a virtualized PC can still be compromised. Fortunately, the cure is to overwrite the virtual PC’s disk image with one that’s known to be clean, but managing virtual servers still requires vigilance.
Ultimately, hardware consolidation is only one reason to opt for server virtualization, but it’s the one with the widest appeal. Still, depending on each department’s unique needs, IT managers are sure to find innumerable ways that virtualization can benefit their enterprises. Too good to be true? Maybe. But it’s also too good to pass up.
Wanted: A Little Help From Hardware
THE TOP-NOTCH DEVELOPERS AT VMWARE AND CONNECTIX (NOW MICROSOFT) have spent much of their time inventing intricate work-arounds for design shortcomings of the x86 architecture. But that needn’t be the case. When virtualization gets help from hardware, its performance skyrockets. Such hardware assistance is commonplace on mainframes and other big iron, but few today remember that Intel set the precedent for hardware virtualization support on x86 chips nearly 20 years ago.
Intel’s 80286 processor allowed Microsoft to replace Windows’ original and much-derided software-based multitasking with a faster, safer hardware-based approach. It flew, but despite Microsoft pitching Windows 2.0 (later known as Windows/286) as a multitasking GUI for the masses, customers primarily wanted Windows to run multiple instances of DOS. Although that was referred to at the time as DOS multitasking, virtualization was the only way to run DOS on Windows.
When push came to shove, both Microsoft and Intel determined that software virtualization would be too slow to make customers happy. As a result, Intel designed a row of virtual 8086 processors into the 80386 chip. Each of the virtual 8086 units bootstrapped DOS – or any OS designed for the 8086 – and operated exactly like a stand-alone system. Even ill-behaved DOS apps could poke away at CPU registers and rewire interrupts without requiring the 80386 to so much as turn its head.
Today, PCs are heading for a jam very similar to the one that inspired Intel to create the 80386. Current 32-bit operating systems are the DOSes of IT’s modern age – software that insists on owning the server it runs on and poking directly at system hardware with no protective intervention. Some major steps forward are 64-bit CPUs and forthcoming multicore processors, but they fall short of enabling the dynamic enterprise IT needs.
Intel, which dropped the ball on PC virtualization, and AMD, which failed to pick it up, are now talking – sans details – about adding hardware virtualization support to future processors. But between now and the undisclosed dates when Intel’s Vanderpool and AMD’s Pacifica technologies might actually ship, IT will get stuck with a four-figure tab for every x86 server it wants to virtualize. Microsoft, SWsoft, VMware, and a few others are more than deserving of that revenue. But IT really needs brainpower focused on the much larger problem of how to manage large pools of VMs and storage, not on figuring out how to build x86 servers out of C++ with no help from hardware. – T. Y.
Far from being just a clever gimmick, modern server virtualization technologies offer major benefits. A well-designed virtualization solution can do the following:
* Reduce costs by consolidating server hardware
* Dynamically allocate resources when and where they’re needed
* Dramatically reduce the time needed to provision new systems
* Isolate overall system health from application or OS failures
* Enable easier management of heterogeneous resources
* Facilitate testing and debugging in controlled environments
Copyright Infoworld Media Group Nov 8, 2004