This article was first published on SearchServerVirtualization.TechTarget.com.

The term “virtualization” is a general one, and it can apply to many different technologies. For example, storage systems, databases, and networks can all be virtualized in one way or another. Much of the current buzz about virtualization focuses on “server virtualization”: the ability to run multiple independent operating systems on the same hardware at the same time. Products from Microsoft and VMware lead in this area. While this approach can clearly provide tremendous benefits, it’s not the only option out there. In this article, I’ll provide details on the various approaches to virtualization, along with the pros and cons of each. The goal is to help you determine the best method for a particular workload.

An Overview of Virtualization Approaches

Figure 1 provides a high-level overview of which areas of a standard server “stack” can be virtualized. Moving up from the bottom, the stack consists of the hardware layer, followed by the operating system and, finally, the applications.


Figure 1: A diagram showing various virtualization layers.

Before we get into further technical details, let’s quickly review the key goals related to virtualization. The first is to ensure independence and isolation between the applications and operating systems that are running on a particular piece of hardware. The second is to provide access to as much of the underlying hardware system as possible. The third is to do all of this while minimizing performance overhead. That’s no small set of goals, but it can be done (and in more ways than one). Let’s take a look at how.

Hardware-Level Virtualization and Hypervisors

We’ll start at the bottom of the “stack”: the hardware level. Theoretically, virtualization platforms that run directly on the base hardware should provide the best performance by minimizing overhead. An example of this type of solution is VMware’s ESX Server, which installs directly on a supported hardware platform and includes a minimal operating system. Administration is performed through a web-based application that can be accessed remotely using a web browser.

The main concept behind a hypervisor is that it’s a thin layer that runs directly between the operating systems and the hardware itself. Again, the goal is to avoid the overhead of a “host” operating system. Microsoft and other vendors will be moving to a hypervisor-based model in future versions of their virtualization platforms.
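
To make the “thin layer” idea more concrete, here is a minimal conceptual sketch in Python. It is purely illustrative: real hypervisors are written in low-level code and rely on CPU virtualization support, and the class names, the trap() method, and the toy NIC device are all inventions for this example. The point it demonstrates is that guests never touch hardware directly; every privileged operation is intercepted and mediated by the hypervisor layer.

```python
# Conceptual sketch only: models how a hypervisor mediates guest access
# to shared hardware. Real hypervisors use hardware trap mechanisms,
# not Python method calls.

class PhysicalNIC:
    """Stands in for a single shared hardware device."""
    def send(self, guest_id, frame):
        print(f"[hw] NIC transmitting {len(frame)} bytes for guest {guest_id}")

class Hypervisor:
    """Thin layer sitting between guest operating systems and hardware."""
    def __init__(self, nic):
        self.nic = nic
        self.guests = {}

    def register_guest(self, guest_id):
        self.guests[guest_id] = {"privileged_ops": 0}

    def trap(self, guest_id, op, payload):
        # Every privileged operation from a guest is intercepted here,
        # validated, and forwarded to the real device. Guests never
        # touch the hardware directly, which preserves isolation.
        self.guests[guest_id]["privileged_ops"] += 1
        if op == "net_send":
            self.nic.send(guest_id, payload)
        else:
            raise ValueError(f"guest {guest_id}: unsupported op {op!r}")

hv = Hypervisor(PhysicalNIC())
hv.register_guest("vm1")
hv.register_guest("vm2")
hv.trap("vm1", "net_send", b"hello from vm1")
hv.trap("vm2", "net_send", b"hello from vm2")
```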

While the low-level approach might seem ideal, there are some drawbacks. First and foremost is device compatibility: for the platform to work at all, it must support all of the devices that are connected to the computer. Currently, products such as ESX Server run only on approved hardware platforms. While many popular server platforms are supported, hardware compatibility is clearly more limited than with other solutions. Another issue is manageability. The dedicated virtualization layer must provide its own methods for managing virtualization services. There are various approaches, including operating system “hooks” and web-based administration, but they tend to be more complicated than those of other virtualization options.

Server-Level Virtualization

Perhaps the best-known and most readily useful virtualization products are those that operate at the server level. Products such as VMware Server and Microsoft Virtual Server 2005 are examples. These products are installed within a host operating system (such as a supported Linux distribution or the Windows Server platform). In this approach, virtual machines run within a service or application that communicates with the hardware through the host OS’s device drivers. Figure 2 shows an example of server virtualization using Microsoft Virtual Server 2005.


Figure 2: An example of a server-level virtualization stack

The benefits of server-level virtualization include ease of administration (since standard management features of the host OS can be used), increased hardware compatibility (through the use of host OS device drivers), and integration with directory services and network security. Overall, whether you’re running on a desktop or a server OS, you can be up and running with these platforms within a matter of minutes.

The drawbacks include the additional overhead of the host OS: the memory, CPU, disk, network, and other resources consumed by the host must be subtracted from what would otherwise be available to VMs. The host OS also generally requires its own license. Finally, server-level virtualization solutions are often not quite as efficient as hardware-level virtualization platforms.
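
As a rough illustration of that subtraction, here is a back-of-the-envelope capacity calculation. Every number in it is an assumption invented for the example, not vendor guidance; plug in figures from your own environment.

```python
# Back-of-the-envelope sizing for a server-level virtualization host.
# All numbers below are illustrative assumptions.

total_ram_gb = 16.0        # physical RAM in the server
host_os_ram_gb = 1.5       # RAM reserved by the host OS itself
virt_service_ram_gb = 0.5  # RAM used by the virtualization service
ram_per_vm_gb = 1.0        # RAM allocated to each virtual machine

usable_ram_gb = total_ram_gb - host_os_ram_gb - virt_service_ram_gb
max_vms = int(usable_ram_gb // ram_per_vm_gb)

print(f"Usable RAM for VMs: {usable_ram_gb:.1f} GB")
print(f"Maximum {ram_per_vm_gb:.0f} GB VMs: {max_vms}")
# -> Usable RAM for VMs: 14.0 GB
# -> Maximum 1 GB VMs: 14
```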

Application-Level Virtualization

In some cases, running multiple independent operating systems is overkill. If you only want to create isolated environments that allow multiple users to concurrently run instances of a few applications, there’s no need to create a separate VM for each concurrent user. That’s where application-level virtualization comes in. Such products run on top of a host operating system and place standard applications (such as those included with Microsoft Office) in isolated environments. Each user who accesses the computer gets what appears to be his or her own unique installation of the products. Behind the scenes, file system modifications, Registry settings, and other changes are performed in isolated sandbox environments and appear to be independent for each user. Softricity and SWsoft are two vendors that provide application-level virtualization solutions.
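
The following sketch shows the core redirection idea in hypothetical Python: file requests from a virtualized application are transparently rewritten into a per-user sandbox, so each user sees a private copy of shared files and settings. The sandbox layout and function names are assumptions for illustration, not any vendor’s actual implementation.

```python
# Conceptual sketch of per-user redirection in application-level
# virtualization. Paths and layout are hypothetical.

import tempfile
from pathlib import Path

# Use a temporary directory as the sandbox root so the demo is runnable.
SANDBOX_ROOT = Path(tempfile.mkdtemp(prefix="sandboxes-"))

def redirect(user, requested_path):
    """Rewrite an application's file request into the user's sandbox."""
    rel = Path(requested_path).relative_to("/")
    return SANDBOX_ROOT / user / rel

def write_setting(user, requested_path, data):
    # The application thinks it is writing to a shared location, but
    # the write actually lands in the user's private sandbox.
    target = redirect(user, requested_path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(data)
    return target

# Two users "modify" the same shared configuration file, yet each
# change is isolated to that user's sandbox:
print(write_setting("alice", "/etc/app/config.ini", "theme=dark"))
print(write_setting("bob",   "/etc/app/config.ini", "theme=light"))
```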

The main benefits of this approach include greatly reduced overhead (since only one full operating system is required) and improved scalability (many users can run applications concurrently on the same server). Generally, only one OS license is required (for the host OS). The drawback is that only software settings are independent: if a user wants to change hardware settings (such as memory or network details) or the operating system version (through patches or updates), those changes will apply to all users on the system.

Thin Clients and Remote Application Execution

The idea of “thin clients” has been around since the days of mainframes (when they were less affectionately known as “dumb terminals”). Users connect remotely to a centralized server using minimal software and hardware on the client side. All applications execute on the server, and only keyboard, video, and mouse I/O is transferred over the wire. Citrix products and Microsoft Windows Server 2003 Terminal Services are examples of this approach.
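
A toy sketch of the division of labor: the client only captures input and displays whatever the server sends back, while all application logic and state live on the server. This is a conceptual model of the pattern, not the Citrix ICA or Microsoft RDP protocol, and the message format is invented for the example.

```python
# Toy model of thin-client interaction: only keyboard/mouse events go
# up the wire, only screen updates come back. Not a real protocol.

class TerminalServer:
    """All application state and execution live on the server."""
    def __init__(self):
        self.text = ""

    def handle_input(self, event):
        # Apply the input event server-side, then return a screen update.
        if event["type"] == "key":
            self.text += event["key"]
        return {"type": "screen_update", "contents": self.text}

class ThinClient:
    """Sends raw input; renders whatever the server returns."""
    def __init__(self, server):
        self.server = server

    def press_key(self, key):
        update = self.server.handle_input({"type": "key", "key": key})
        print(f"[client display] {update['contents']}")

client = ThinClient(TerminalServer())
for ch in "hi":
    client.press_key(ch)
# -> [client display] h
# -> [client display] hi
```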

Selecting the Best Approach

So now that you know there are options, how do you decide which is best for a particular virtualization workload? Table 1 provides some examples of typical workloads. In general, as you move from hardware- to server- to application-level virtualization, you gain scalability at the cost of overall independence. The “best” solution depends on the specific workload and other related details. The bottom line is that you do have options when deciding to go with virtualization, so be sure to consider them all.
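
One possible rule of thumb, distilled from the scalability-versus-independence trade-off above, is sketched below. The questions and the mapping are my own simplification for illustration (real decisions should weigh the full workload details in Table 1), not a definitive selection algorithm.

```python
# A simplified rule of thumb for choosing a virtualization approach.
# The criteria here are illustrative assumptions, not a complete model.

def suggest_approach(needs_full_os_isolation, users_share_same_apps,
                     performance_critical):
    if needs_full_os_isolation and performance_critical:
        return "hardware-level (hypervisor)"
    if needs_full_os_isolation:
        return "server-level (hosted VMs)"
    if users_share_same_apps:
        return "application-level or thin client"
    return "reassess the workload requirements"

print(suggest_approach(True, False, True))   # -> hardware-level (hypervisor)
print(suggest_approach(False, True, False))  # -> application-level or thin client
```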


Table 1: Comparing virtualization approaches for various workload types