
Virtualization Considerations for Storage Managers

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s common for new technology to require changes in all areas of an organization’s overall infrastructure, and virtualization is no exception. While administrators often focus on CPU and memory constraints, storage-related performance is just as common a bottleneck. In some ways, virtual machines can be managed like physical ones. After all, each VM runs its own operating system, applications, and services. But there are also numerous additional considerations to take into account when designing a storage infrastructure. By understanding the unique needs of virtual machines, storage managers can build a reliable and scalable data center infrastructure to support their VMs.

Analyzing Disk Performance Requirements

For many types of applications, the storage infrastructure is designed primarily around I/O operations per second (IOPS). IOPS refers to the number of read and write operations performed, but it doesn’t always capture the whole picture. The type of activity matters, too. For example, since virtual disks that are stored on network-based storage arrays must support guest OS disk activity, the average I/O request size tends to be small. Additionally, I/O requests are frequent and often random in nature. Paging can also create a lot of traffic on memory-constrained host servers. Other considerations are workload-specific; for example, it’s also good to measure the percentage of read vs. write operations when designing the infrastructure.

Now, multiply all of these statistics by the number of VMs being supported on a single storage device, and you are faced with the very real potential for large traffic jams. The solution? Optimize the storage solution for many small, non-sequential I/O operations. And, most importantly, distribute VMs based on their levels and types of disk utilization. Performance monitoring can provide the information you need.
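To make the placement idea concrete, here is a minimal sketch of distributing VMs across storage devices by their measured IOPS. The VM figures, device count, and per-device budget are hypothetical stand-ins for numbers you would collect through performance monitoring.

```python
def place_vms(vms, device_count, iops_budget):
    """Greedily assign (name, avg_iops) pairs to devices, heaviest first."""
    devices = [{"vms": [], "iops": 0} for _ in range(device_count)]
    for name, iops in sorted(vms, key=lambda v: v[1], reverse=True):
        # Place each VM on the currently least-loaded device.
        target = min(devices, key=lambda d: d["iops"])
        if target["iops"] + iops > iops_budget:
            raise RuntimeError(f"No device has capacity left for {name}")
        target["vms"].append(name)
        target["iops"] += iops
    return devices

# Hypothetical per-VM averages gathered from performance monitoring:
vms = [("web01", 150), ("sql01", 900), ("mail01", 400), ("file01", 250)]
for i, dev in enumerate(place_vms(vms, device_count=2, iops_budget=1200)):
    print(f"Device {i}: {dev['vms']} (~{dev['iops']} IOPS)")
```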

Considering Network-Based Storage Approaches

Many environments already use a combination of NAS, SAN, and iSCSI-based storage to support their physical servers. These methods can still be used for hosting virtual machines, as most virtualization platforms provide support for them. For example, SAN- or iSCSI-based volumes that are attached to a physical host server can be used to store virtual machine configuration files, virtual hard disks, and related data. It is important to note that, by default, the storage is attached to the host and not to the guest VM. Storage managers should keep track of which VMs reside on which physical volumes for backup and management purposes.

In addition to providing storage at the host-level, guest operating systems (depending on their capabilities) can take advantage of NAS and iSCSI-based storage. With this approach, VMs can directly connect to network-based storage. A potential drawback, however, is that guest operating systems can be very sensitive to latency, and even relatively small delays can lead to guest OS crashes or file system corruption.

Evaluating Useful Storage Features

As organizations place multiple mission-critical workloads on the same servers through the use of virtualization, they can use various storage features to improve reliability, availability and performance. Implementing RAID-based striping across arrays of many disks can significantly improve performance, and the array’s block size should be matched to the most common I/O operation size. However, more disks mean more chances for failure, so features such as multiple parity drives and hot standby drives are a must.
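To illustrate why the disk count grows quickly, here is a back-of-the-envelope sketch of sizing an array for a given workload. The write penalties used (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6) are the commonly cited figures; the workload numbers and the per-disk IOPS rating are made-up assumptions.

```python
import math

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def disks_needed(total_iops, write_pct, raid_level, iops_per_disk=180):
    """Estimate spindle count from front-end IOPS and the RAID write penalty."""
    reads = total_iops * (1 - write_pct)
    writes = total_iops * write_pct
    backend_iops = reads + writes * WRITE_PENALTY[raid_level]
    return math.ceil(backend_iops / iops_per_disk)

# A hypothetical workload: 2,000 front-end IOPS at 30% writes.
for level in WRITE_PENALTY:
    print(level, disks_needed(2000, 0.30, level), "disks")
```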

Fault tolerance can be implemented through the use of multi-pathing for storage connections. For NAS and iSCSI solutions, storage managers should look into having multiple physical network connections and implementing fail-over and load-balancing features by using network adapter teaming. Finally, it’s a good idea for host servers to have dedicated network connections to their storage arrays. While you can often get by with shared connections in low-utilization scenarios, the load placed by virtual machines can be significant and can increase latency.

Planning for Backups

Storage administrators will need to back up many of their virtual machines. Apart from allocating the necessary storage space, it is necessary to develop a method for dealing with exclusively-locked virtual disk files. There are two main approaches:

  • Guest-Level Backups: In this approach, VMs are treated like physical machines. Generally, you would install backup agents within VMs, define backup sources and destinations, and then let them go to work. The benefit of this approach is that only important data is backed up (thereby reducing required storage space). However, your backup solution must be able to support all potential guest OS’s and versions. And, the complete recovery process can involve many steps, including reinstalling and reconfiguring the guest OS.
  • Host-Level Backups: Virtual machines are conveniently packaged into a few important files. Generally, this includes the VM configuration file and virtual disks. You can simply copy these files to another location. The most compatible approach involves stopping or pausing the VM, copying the necessary files, and then restarting the VM. The issue, however, is that this can require downtime. Numerous first- and third-party solutions are able to back up VMs while they’re “hot”, thereby eliminating service interruptions. Regardless of the method used, replacing a failed or lost VM is easy – simply restore the necessary files to the same or another host server and you should be ready to go (a minimal sketch of the cold-copy approach follows this list). The biggest drawback of host-level backups is in the area of storage requirements. You’re going to be allocating a ton of space for the guest OS’s, applications, and data you’ll be storing.
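Here is a minimal sketch of that stop-copy-restart cycle, assuming the virtual disk files are exclusively locked while the VM runs. The stop_vm()/start_vm() helpers and all paths are hypothetical placeholders for whatever management interface your virtualization platform exposes.

```python
import shutil
from pathlib import Path

VM_NAME = "web01"
VM_DIR = Path("/vmstore/web01")       # VM configuration file + virtual disks
BACKUP_DIR = Path("/backups/web01")

def stop_vm(name):
    # Placeholder: invoke your platform's management CLI or API here.
    print(f"stopping {name}")

def start_vm(name):
    # Placeholder as above.
    print(f"starting {name}")

stop_vm(VM_NAME)
try:
    # Copy every file that makes up the VM while nothing holds a lock.
    shutil.copytree(VM_DIR, BACKUP_DIR, dirs_exist_ok=True)
finally:
    start_vm(VM_NAME)  # restart the VM even if the copy fails
```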

Storage solution features such as the ability to perform snapshot-based backups can be useful. However, storage administrators should thoroughly test the solution and should look for explicitly-stated virtualization support from their vendors. Remember, backups must be consistent to a point in time, and non-virtualization-aware solutions might neglect to flush information stored in the guest OS’s cache.

Summary

By understanding and planning for the storage-related needs of virtual machines, storage administrators can help their virtual environments scale and keep pace with demand. While some of the requirements are somewhat new, many involve utilizing the same storage best practices that are used for physical machines. Overall, it’s important to measure performance statistics and to consider storage space and performance when designing a storage infrastructure for VMs.

Advanced Backup Options for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s a pretty big challenge to support dozens or hundreds of separate virtual machines. Add in the requirement for backups – something that generally goes without saying – and you have to figure out how to protect important information. Yes, that usually means at least two copies of each of these storage hogs. I understand that you’re not made of storage (unless, of course, you’re the disk array that’s reading this article on the web server). So what should you do? In this tip, I’ll outline several approaches to performing backups for VMs, focusing on the strengths and limitations of each.

Determining Backup Requirements

Let’s start by considering the requirements for performing backups. The list of goals is pretty simple, in theory:

  • Minimize data loss
  • Minimize recovery time
  • Simplify implementation and administration
  • Minimize costs and resource usage

Unfortunately, some of these objectives are often at odds with each other. Since implementing any solution takes time and effort, start by characterizing the requirements for each of your virtual machines and the applications and services they support. Be sure to write in pencil, as it’s likely that you’ll be revising these requirements. Next, let’s take a look at the different options for meeting these goals.

Application-Level Backups

The first option to consider for performing backups is that of using application features to do the job. There’s usually nothing virtualization-specific about this approach. Examples include:

  • Relational Database Servers: Databases were designed to be highly available, and it should come as no surprise that they include many built-in backup methods. In addition to standard backup and restore operations, you can use replication, log-shipping, clustering, and other methods to ensure that data remains protected.
  • Messaging Servers: Communications platforms such as Microsoft Exchange Server provide methods for keeping multiple copies of the data store in sync. Apart from improving performance (by placing data closer to those who need it), this can provide adequate backup functionality.
  • Web Servers: The important content for a web server can be stored in a shared location or can be copied to each node in a web server farm. When a web server fails, just restore the important data to a standby VM, and you’re ready to go. Better yet, use shared session state or stateless application features and a network load-balancer to increase availability and performance.

All of these methods allow you to protect against data loss and downtime by storing multiple copies of important information.

Guest-Level Backups

What’s so special about VMs, anyway? I mean, why not just treat them like the physical machines that they think they are? That’s exactly the approach with guest-level backups. The most common method with this approach is to install backup agents within the guest OS and to specify which files should be backed up and their destinations. As with physical servers, administrators can decide what really needs to be backed up – generally just data, applications, and configuration files. That saves precious disk space and can reduce backup times.

There are, however, drawbacks to this backup approach. First, your enterprise backup solution must support your guest OS’s (try finding an agent for OS/2!). Assuming the guest OS is supported, the backup and recovery process is often different for each OS. This means more work on the restore side of things. Finally, the restore process can take significant time, since a base OS must be installed and the associated components restored.

Examples of popular enterprise storage and backup solutions are those from Symantec, EMC, Microsoft and many other vendors.

Host-Level Backups

Host-level backups take advantage of the fact that virtual machines are encapsulated in one or more virtual disk files, along with associated configuration files. The backup process consists of making a copy of the necessary files from the host OS’s file system. Host-level backups provide a consistent method for copying VMs since you don’t have to worry about differences in guest operating systems. When it comes time to restore a VM (and you know it’s going to happen!), all that’s usually needed is to reattach the VM to a working host server.

However, the drawback is that you’re likely to need a lot of disk space. Since the entire VM, including the operating system, applications, and other data are included in the backup set, you’ll have to allocate the necessary storage resources. And, you’ll need adequate bandwidth to get the backups to their destination. Since virtual disk files are exclusively locked while a VM is running, you’ll either need to use a “hot backup” solution, or you’ll have to pause or stop the VM to perform a backup. The latter option results in (gulp!) scheduled downtime.

Solutions and technologies include:

  • VMware: VMotion; High Availability; Consolidated Backup; DRS
  • Microsoft Volume Shadow Copy Service (VSS)

File System Backups

File system backups are based on features available in storage arrays and specialized software products. While they’re not virtualization-specific, they can help simplify the process of creating and maintaining VM backups. Snapshot features can allow you to make a duplicate of a running VM, but you should make sure that your virtualization platform is specifically supported. File system replication features can use block- or bit-level techniques to keep primary and backup copies of virtual hard disk files in sync.

Since only changes are transferred, less bandwidth is required. And, the latency between when modifications are committed on the primary VM and on the backup VM can be minimized (or even eliminated). That makes the storage-based approach useful for maintaining disaster recovery sites. While third-party products are required, file system backups can be easy to set up and maintain. But, they’re not always ideal for write-intensive applications and workloads.
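To show why replicating only changed blocks is so frugal with bandwidth, here is a toy sketch of block-level synchronization. Real products track dirty blocks as they are written rather than rescanning the whole file; the paths here are hypothetical, and the replica is assumed to have been seeded with an initial full copy.

```python
BLOCK = 64 * 1024  # compare and copy in 64 KB blocks

def replicate(primary_path, replica_path):
    """Rewrite only the blocks of the replica that differ from the primary."""
    changed = 0
    with open(primary_path, "rb") as src, open(replica_path, "r+b") as dst:
        offset = 0
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            dst.seek(offset)
            if dst.read(len(block)) != block:
                dst.seek(offset)
                dst.write(block)
                changed += 1
            offset += len(block)
    return changed

print(replicate("/vmstore/web01/disk0.vhd", "/replica/web01/disk0.vhd"),
      "blocks updated")
```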

Potential solutions include products from Double-Take Software and from Neverfail. Also, if you’re considering the purchase of a storage solution, ask your vendor about replication and snapshot capabilities, and their compatibility with virtualization.

Back[up] to the Future

Most organizations will likely choose different backup approaches for different applications. For example, application-level backups are appropriate for those systems that support them. File system replication is important for maintaining hot or warm standby sites and services. Guest- and host-level backups balance ease of backup/restore operations vs. the amount of usable disk space. Overall, you should compile the data loss, downtime and cost constraints, and then select the most appropriate method for each type of VM. While no single answer is likely to meet all of your needs, there are some pretty good options out there!

Evaluating Network-Based Storage Options

This article was first published on SearchServerVirtualization.TechTarget.com.

Imagine living in a crowded apartment with a bunch of people that think they own the place. Operating systems and applications can be quite inconsiderate at times. For example, when they’re running on physical machines, these pieces of software are designed to monopolize hardware resources. Now, add virtualization to the picture, and you get a lot of selfish people competing for the same resources. In the middle is the virtualization layer – acting as a sort of landlord or superintendent – trying to keep everyone happy (while still generating a profit). Such is the case with disk I/O on virtualization host servers. In this Tip, I’ll discuss some options for addressing this common bottleneck.

Understanding Virtualization I/O Requirements

Perhaps the most important thing to keep in mind is that not all disk I/O is the same. When designing storage for virtualization host servers, you need to get an idea of the actual disk access characteristics you will need to support. Considerations include:

  • Ratio of read vs. write operations
  • Frequency of sequential vs. random reads and writes
  • Average I/O transaction size
  • Disk utilization over time
  • Latency constraints
  • Storage space requirements (including space for backups and maintenance operations)

Collecting this information on a physical server can be fairly simple. For example, on the Windows platform, you can collect data using Performance Monitor and store it to a binary file or database for later analysis. When working with VMs, you’ll need to measure and combine I/O requirements to define your disk performance goals. The focus of this tip is on choosing methods for storing virtual hard disk files, based on cost, administration and scalability requirements.
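As one example of turning raw counters into the statistics above, here is a minimal sketch that summarizes a Performance Monitor log, assuming the binary log has been exported to CSV (relog.exe can do the conversion) and includes the PhysicalDisk counters named below. The header matching is simplified, since real exports prefix each column with the machine name, and the file name is hypothetical.

```python
import csv

COUNTERS = ("Disk Reads/sec", "Disk Writes/sec", "Avg. Disk Bytes/Transfer")

def summarize(csv_path):
    reads = writes = xfer_bytes = 0.0
    samples = 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                vals = {}
                for name in COUNTERS:
                    # Find the column whose header ends with the counter name.
                    key = next(k for k in row if k and k.endswith(name))
                    vals[name] = float(row[key])
            except (ValueError, StopIteration):
                continue  # skip blank first samples or missing columns
            reads += vals["Disk Reads/sec"]
            writes += vals["Disk Writes/sec"]
            xfer_bytes += vals["Avg. Disk Bytes/Transfer"]
            samples += 1
    total = reads + writes
    print(f"Average IOPS:  {total / samples:.0f}")
    print(f"Read share:    {reads / total:.0%}")
    print(f"Avg. I/O size: {xfer_bytes / samples / 1024:.1f} KB")

summarize("host_disk_counters.csv")
```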

Local / Direct-Attached Storage

The default storage option in most situations is local storage. The most common connection methods include PATA, SATA, SCSI, and SAS. Each type of connection comes with associated performance and cost considerations. RAID-based configurations can provide fault-tolerance and can be used to improve performance.

Pros:

  • Generally cheaper than other storage options
  • Low latency, high bandwidth connections that are reserved for a single physical server

Cons:

  • Potential waste of storage space (since disk space is not shared across computers)
  • Limited total storage space and scalability due to physical disk capacity constraints (especially when implementing RAID)
  • Difficult to manage, as storage is decentralized

Storage Area Networks (SANs) / Fibre Channel

SANs are based on Fibre Channel connections, rather than copper-based Ethernet. SAN-based protocols are designed to provide high throughput and low latency, but require the implementation of an optical-based network infrastructure. Generally, storage arrays provide raw block-level connections to carved-out portions of disk space.

Pros:

  • Can provide high performance connections
  • Improved compatibility – appears as local storage to the host server
  • Centralizes storage management

Cons:

  • Expensive to implement – requires Fibre Channel-capable host bus adapters, switches, and cabling
  • Expensive to administer – requires expertise to manage a second “network” environment

Network-Based Storage

Network-based storage devices are designed to provide disk resources over a standard (Ethernet) network connection. They most often support protocols such as Server Message Block (SMB) and Network File System (NFS), both of which are designed for file-level disk access. The iSCSI protocol provides the ability to perform raw (block-level) disk access over a standard network. iSCSI-attached volumes appear to the host server as if they were local storage.

Pros:

  • Lower implementation and management cost (vs. SANs) due to utilization of copper-based (Ethernet) connections
  • Simplified administration (vs. direct-attached storage), since disks are centralized
  • Storage can be accessed at the host- or guest-level, based on specific needs
  • Higher scalability (arrays can contain hundreds of disks) and throughput (dedicated, redundant I/O controllers)

Cons:

  • Applications and virtualization platforms must support either file-based access or iSCSI

Storage Caveats: Compatibility vs. Capacity vs. Cost

In many real-world implementations of virtualization, an important bottleneck is storage performance. Organizations can use well-defined methods to increase CPU and memory performance, but what about the hard disks? Direct-attached, network-based, and SAN-based storage can each be viable. Once you’ve outgrown local storage (from a capacity, performance, or administration standpoint), you should consider implementing iSCSI or file-based network storage servers. The primary requirement, of course, is that your virtualization layer must support the hardware and software you choose. SANs are a great option for organizations that have already made the investment, but some studies show that iSCSI devices can provide similar levels of performance at a fraction of the cost.

The most important thing to remember is to thoroughly test your solution before deploying it into production. Operating systems can be very sensitive to disk-related latency, and disk contention can cause unforeseen traffic patterns. And, once the systems are deployed, you should be able to monitor and manage throughput, latency, and other storage-related parameters.
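Before trusting a volume with production VMs, it can help to spot-check its random-read latency. Here is a quick-and-dirty sketch that times small reads at random offsets in a test file you create beforehand; it uses the POSIX-only os.pread, the path is hypothetical, and a proper benchmarking tool with much longer runs should be used for real sizing decisions.

```python
import os, random, time

def random_read_latency(path, io_size=4096, iterations=1000):
    """Time io_size-byte reads at random offsets and report percentiles."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    timings = []
    try:
        for _ in range(iterations):
            offset = random.randrange(0, max(size - io_size, 1))
            start = time.perf_counter()
            os.pread(fd, io_size, offset)  # POSIX-only positional read
            timings.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    timings.sort()
    print(f"median: {timings[len(timings) // 2] * 1000:.2f} ms, "
          f"p99: {timings[int(len(timings) * 0.99)] * 1000:.2f} ms")

random_read_latency("/mnt/candidate-volume/testfile.bin")
```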

Overall, providing storage for virtual environments can be a tricky technical task. The right solution, however, can result in happy landlords and tenants, whereas the wrong solution results in one seriously overcrowded apartment.

Desktop Virtualization: Evaluating Approaches and Solutions

This article was first published on SearchServerVirtualization.TechTarget.com.

Visualize the unglamorous task of crawling behind a dusty desktop computer to check for an unplugged cable. This is in response to a report that a user’s computer is “broken”. You have only the soothing sounds of an employee complaining to a friend about how IT is taking forever to solve the problem. Finish the job quickly, as you’ll soon be off to troubleshooting an application compatibility problem on an executive’s notebook computer. Assuming you haven’t left the IT industry altogether after reading this paragraph, I think you’ll agree that there are many compelling reasons for addressing desktop and application management issues.

In the first Tip in this series on desktop virtualization, I defined the goals and problems that we were trying to solve. I provided an overview of the various approaches (and noted that there’s often some disagreement over specific terminology). Here, I’ll cover the pros and cons of specific approaches, along with applications you might want to consider.

Presentation Virtualization Solutions

In presentation virtualization, user input and application output are managed using a network connection. Applications run on the server, and screen updates are sent to a thin client or desktop computer. Some solutions can virtualize individual applications, can work securely over Internet connections, and can be integrated with a variety of network-level access methods.

  • Benefits: Scalability is a big one: up to hundreds of simultaneous application sessions can be created on a single server. Application management can be simplified since deployment to desktops is no longer a requirement. Access to applications can be managed centrally, and data may be stored securely on back-end systems.
  • Drawbacks: Applications must be compatible with the virtualization solution. While utilities are available for assisting in this area, they’re far from foolproof. Additionally, all users will be using the same OS version on the server side. Reliability is also a concern, especially when running business-critical client applications, due to the number of sessions that must be maintained. Over slow connections, sluggish application response can hurt the end-user experience.


Application and OS Virtualization Solutions

Realizing that the primary purpose of desktop computers is to allow users to run applications, some vendors have focused on using application deployment and management solutions. The goal here is to allow many different applications to coexist on a single operating system that runs on users’ hardware.

  • Benefits: Users can run their operating systems and applications locally, leading to better performance and support for disconnected scenarios. IT departments can avoid application compatibility and deployment issues and can more easily keep track of licensing. Overall scalability is often far higher than that of virtualizing entire desktop operating systems.
  • Drawbacks: Applications may need to be modified (or at least tested) when running with these solutions. The base OS is shared among all applications and application environments, so all applications must run on the same basic platform.


VM-Based Virtualization Solutions

There’s no reason that the benefits of server virtualization can’t be extended to desktop machines. VM-based virtualization involves the creation of VMs (either on-demand or permanent) for users on data center hardware. Users access their individual VMs using a thin client device or a remote desktop solution. On the server side, a “connection broker” layer ensures that the right VMs are available and that users connect to their own systems.
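To make the broker’s role concrete, here is a toy sketch of the mapping it maintains: each user gets a VM from a pool, and repeat connections return the same machine. The class, pool contents, and names are hypothetical illustrations rather than any vendor’s actual API.

```python
class ConnectionBroker:
    def __init__(self, pool):
        self.pool = list(pool)  # unassigned VMs available for new users
        self.assignments = {}   # user -> VM name

    def connect(self, user):
        if user not in self.assignments:
            if not self.pool:
                raise RuntimeError("No VMs available in the pool")
            # In a real broker this would also power the VM on and wait
            # for its remote-desktop service to come up.
            self.assignments[user] = self.pool.pop(0)
        return self.assignments[user]

broker = ConnectionBroker(["vdi-001", "vdi-002", "vdi-003"])
print(broker.connect("alice"))  # vdi-001
print(broker.connect("alice"))  # the same VM again: vdi-001
print(broker.connect("bob"))    # vdi-002
```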

  • Benefits: All OS’s and user data are stored within the data center (presumably on fault-tolerant, high performance hardware). This enables centralized management and increases average utilization on all of the systems that an IT department supports. Security risks are decreased, as are costs related to managing client-side computers.
  • Drawbacks: Entire operating systems are running for each user. This can limit scalability and increase costs related to storage. Additionally, users require a network connection in order to access their computers. Finally, server-side hardware resources can be far more costly than their desktop counterparts.


Summary

It is important to note that these solutions are not exclusive of each other. For example, you could choose to virtualize a desktop OS and then use application virtualization products to deploy and manage applications. Realistically, most organizations will find all of these options to be suitable for simplifying some aspect of overall desktop operations. This area is evolving rapidly (in terms of both real technology and hype), so be sure to thoroughly research options before deploying them. Overall, knowledge is power, so keep these options in mind the next time you spend 30 minutes repairing a mouse-related problem!

Desktop Virtualization: Goals and Options

This article was first published on SearchServerVirtualization.TechTarget.com.

Quick: Name a task that’s less enjoyable than managing client operating systems and applications! I have a feeling that if you’re a seasoned IT pro, you had to think for a few seconds (and, I’ll bet that many of you came up with some very creative responses). Clearly, the challenges of keeping end-users’ systems up-to-date can be a thankless and never-ending ordeal. Vendors have heard your cries, and various solutions are available. At the risk of sounding like a flight attendant, I do understand that you have a choice of virtualization approaches. In this series of Tips, I’ll describe the pros and cons of desktop application and OS virtualization. Let’s start by defining the problem.

Desktop Management Challenges

There are many reasons that desktop computers can be more painful to manage than their server-side counterparts. Some important issues include:

  • Analyzing Administration: Desktop and notebook computers are often located in the most remote areas of your organization (or outside of it altogether). Performing systems administration tasks can sometimes require physical access to these machines. And, even with remote management tools, the client-side machine has to be online and connected to the network. The result is significant time and effort requirements for keeping systems optimally configured.
  • Mitigating Mobile Mayhem: Traveling and remote users can be (quite literally) a moving target: It seems that as soon as you’ve deployed a system for their use, changes are required. While some users can’t avoid working offline, there’s a subset of the user population that might need to access their OS and applications from multiple locations. Managing multiple pieces of hardware or shared desktop machines can be time-consuming and tedious.
  • Dealing with Distributed Data: Security and regulatory compliance requirements necessitate careful management of data. It’s far easier to secure information and prevent data loss or theft when everything’s stored in the data center. While stolen laptops can be costly, replacing hardware is far cheaper than dealing with stolen data.
  • Application Anarchy: Deploying and managing desktop applications can be a struggle. While deployment tools can simplify the process, issues like managing application compatibility problems can lead to a large number of support desk calls. Other issues include tracking license usage, ensuring that systems remain patched, and providing the right applications on the right computer at the right time.
  • Bumbling Backups: Ensuring that data remains protected on desktop and notebook computers can be problematic. Even with the use of backup agents, there’s room for error. And, getting users to consistently save their important files to network shares can seem futile.
  • Hardware Headaches: Managing desktop and notebook hardware can be time-consuming. Add in the costs of technology refreshes and verifying hardware system requirements, and the issue can quickly float to the top of an IT department’s list of costs.

From a technical standpoint, the issue is that applications are tightly tied to their operating systems. And the operating systems, in turn, are tightly tied to hardware. Solving these problems can help alleviate some of the pain.

Choosing a Virtualization Approach

There are several different approaches to addressing desktop-related challenges. One caveat is that the terminology can be inconsistent. I’ve taken a shot at categorizing the different approaches, but vendors’ descriptions do differ. Here’s a breakdown:

  • Presentation Virtualization: Some users require access to only one or a few applications (think about call center and point-of-sale users). The main idea behind presentation virtualization is that all applications are installed and executed on a specialized server, which redirects video, keyboard, and mouse signals to and from a small client application or a thin client device. Since applications are installed centrally, deployment and management is less of an issue.
  • OS and Application Virtualization: For some portion of the user population, such as traveling employees or “power-users”, there’s a real need to run an operating system directly on individual computers. In these scenarios, it’s still desirable to simplify the deployment and management of applications. Application virtualization solutions provide a way to either compartmentalize or stream programs to the computers that need them. The process is quick, safe, and can happen with little IT involvement.
  • VM-Based Virtualization: Also known as Virtual Desktop Infrastructure (VDI), among other names, the idea here is to allow users to run their own desktop OS’s – except that they are physically stored in the data center. Typically, the operating system and applications are run within a dedicated virtual machine which is assigned to a particular user. Employees use either a thin client computer or a remote desktop connection to access their environments.

In addition to these options, there’s an implicit fourth choice: “None of the above.” As I described in my Tips, “VDI Benefits without VDI”, you can reduce problems to some extent by utilizing IT best practices. You can also use a combination of these approaches (for example, VM-based virtualization with application virtualization) to meet different needs in the same environment.

Looking for Solutions

In this Tip, I presented some of the problems that desktop virtualization attempts to address. It’s important to understand your pain points before you start looking for a remedy. Then, I described three high-level approaches for solving common problems. In the next part of this series, I’ll present information about the pros and cons of each approach, along with specific products to consider.

IT Policies: Managing VM Sprawl

This article was first published on SearchServerVirtualization.TechTarget.com.

Many organizations have turned to virtualization to help reduce the number of servers and other computers that they support in their environments. The problem itself has often been referred to as “server sprawl”, and its cause is often the excessive deployment of new physical machines. Often, systems administrators would deploy a new computer just to support a lightweight web application or a simple workload that could easily have been placed on another server. In other cases, the proliferation was unavoidable, as some applications and services just don’t behave nicely with others on the same machine.

Virtualization technology can help resolve the latter problem by allowing multiple independent workloads to run on the same computer at the same time. The process of deploying a new VM can be performed in a matter of minutes, thereby reducing costs and administrative effort. Unfortunately, these benefits can lead to a new problem: “VM sprawl”. IT organizations often find themselves tasked with keeping track of dozens or hundreds of new VMs seemingly overnight. When considering security, performance, reliability, and adhering to IT standards, the task of managing virtual systems can quickly become overwhelming. Fortunately, there are some ways to reduce some of the headaches. In this tip, I’ll present some best practices that can help.

Virtual Machine Deployment

The first step in managing VM sprawl is to rein in the deployment of new VMs. Just because end-users and systems administrators have the ability to deploy new virtual machines does not necessarily mean that they should do so. IT departments should define a process for the deployment of a new VM. Figure 1 provides a basic example of some typical steps. Often, the suggestion of a process conjures up an image of a small army of pointy-haired bosses creating a new bureaucracy. In reality, it’s certainly possible to perform all of the steps in a process such as this in a matter of minutes.


Figure 1: Possible steps in a VM deployment process.

Best Practice: IT departments should remain involved in all virtual machine deployments.
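As a minimal sketch of such a lightweight gate, the snippet below records each VM request and refuses to provision until someone in IT approves it. The fields and steps are illustrative assumptions, not a reconstruction of the original figure.

```python
from dataclasses import dataclass

@dataclass
class VMRequest:
    requester: str
    purpose: str
    base_image: str
    approved: bool = False

def approve(request, approver):
    # In practice: check quotas, licensing, and IT standards here.
    request.approved = True
    print(f"{approver} approved a VM for {request.requester}")

def provision(request):
    if not request.approved:
        raise PermissionError("Deployment refused: request not yet approved")
    print(f"Deploying {request.base_image} for {request.requester}")

req = VMRequest("dev-team", "staging web app", "win2003-web-v2")
approve(req, "it-ops")
provision(req)
```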

Configuration Management

Another problem related to the widespread deployment of VMs is a lack of configuration consistency. Since users can choose from a wide array of operating systems and applications to run within a VM, the number of variations can grow exponentially. Additionally, the VMs that are deployed may not adhere to IT standards and guidelines for security and other settings.

One way to minimize these effects is for IT organizations to create a standardized set of base images in what is often referred to as a VM library. Users should be required to begin the creation of a new VM using one of these images. Figure 2 provides some examples of types of VM images that might be created.


Figure 2: Examples of virtual machine images that might be available in a VM library.

While developing a list of standard configurations can help reduce the number of variations that must be supported, IT staff should still remember the need to verify configurations before deployment into a production environment.

Best Practice: All users and systems administrators should base their deployments on IT-approved base images and supported configurations.
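A hedged sketch of enforcing that rule might look like the following, where new VMs can only be cloned from entries in the approved library. The image names and descriptions are hypothetical examples of the kinds of entries Figure 2 suggests.

```python
# Hypothetical IT-approved base images in the VM library.
APPROVED_IMAGES = {
    "win2003-web-v2": "Windows Server 2003 + IIS, patched baseline",
    "win2003-sql-v1": "Windows Server 2003 + SQL Server baseline",
    "rhel4-base-v3":  "Red Hat Enterprise Linux 4 hardened baseline",
}

def new_vm_from_library(image_name):
    """Clone a VM only if it starts from an approved base image."""
    if image_name not in APPROVED_IMAGES:
        raise ValueError(f"{image_name} is not an IT-approved base image")
    print(f"Cloning {image_name}: {APPROVED_IMAGES[image_name]}")

new_vm_from_library("win2003-web-v2")
```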

Keeping VMs Up-to-Date

An important concern for all deployments – both physical and virtual – is keeping systems up-to-date. Security patches and application upgrades can help minimize the risk of downtime and data loss. The good news is that IT organizations can depend on their standard patch and update deployment tools for managing virtual machines. Of course, this will only be possible if the guest OS is supported by those tools (another good reason for implementing configuration management).

Best Practice: Treat production VMs as if they were physical machines, and ensure that they are monitored and updated regularly.

Contain Yourself (and your VMs)!

If you’re responsible for limiting VM sprawl in your environment, you know that it’s important to give users what they want. Reducing deployment times and providing access to virtualization functionality can boost productivity while minimizing data center impact. By keeping IT departments involved in deployment decisions, and making sure that VMs are properly managed, organizations can enjoy these benefits without suffering from unmitigated VM sprawl.