While virtualization technology certainly helps solve some of the most pressing problems facing IT organizations, there’s a potential downside. Many organizations have found that they’re ill-equipped to manage the dozens or hundreds of VMs that tend to pop up once virtualization software has been deployed. Some of these deployments circumvent IT, while others just slip in under the radar. For example, VMs that are only occasionally powered on or that are not connected to external networks can be overlooked entirely. When they’re brought online, they’re often out of date with respect to patches.
If server virtualization has a dark side, it may be virtual machine (VM) sprawl. The principal problem created by sprawl is that IT administrators can’t certify that all deployed VMs meet an organization’s policies and procedures just as they would certify physical servers. "Deploying VMs at many organizations circumvents the standard processes for deploying physical servers," noted SearchServerVirtualization.com contributor Anil Desai.
The main idea is that virtualization-aware tools are a must for organizations that need to maintain control of their production deployments. Embotics is one of many companies that have recognized that need, and it has developed products focused on virtual environments. For more details, see the White Paper I wrote for Embotics, titled Controlling VM Sprawl: Best Practices for Maintaining Control of Virtualized Infrastructures.
The Microsoft eLearning web site includes a wide variety of online training courses. I have used many of these to keep up to date with new releases and product features. Best of all, many of the courses are available for free. One such course is Clinic 5935: Introducing Server Virtualization in Microsoft Windows Server 2008 (RC0).
While the course uses some outdated names and terminology for the Hyper-V feature, all of the major technical information should still be accurate. This is a good place to start with Microsoft’s eLearning and to learn about Microsoft’s upcoming virtualization products. Thanks to virtualization.info for the link.
If you have used the Microsoft Virtual Server 2005 platform, there’s a good chance that you find its default web-based management tools to be lacking. If you’re running one or a few virtualization host servers, the admin tools can certainly get the job done. But what if you’re deploying dozens or hundreds of VMs every month? To manage these systems, you’ll need to invest in some virtualization-aware software. Microsoft’s System Center Virtual Machine Manager (SCVMM) is one such product.
Even if you have heard of the product, you might be wondering about its capabilities, its architecture, and how you can get started with it. The January 2008 issue of Microsoft TechNet Magazine includes an article titled Real Control with Virtual Machine Manager 2007. From the article’s introduction:
System Center Virtual Machine Manager 2007 is a new solution that provides a consolidated interface for managing your entire virtual infrastructure. Virtual Machine Manager (VMM) can manage existing Microsoft® Virtual Server 2005 installations, and it can also install Virtual Server 2005 R2 SP1 on new virtual machine (VM) hosts. With VMM, the traditional Virtual Server 2005 administrative tasks can now be performed far more efficiently through a centralized interface, with management access across multiple Virtual Server installations.
In addition, VMM brings new capabilities to Virtual Server, including Physical-to-Virtual (P2V) conversions, Virtual-to-Virtual (V2V) conversion of VMware Virtual Machine Disk Format (VMDK) disks to Virtual Server Virtual Hard Disks (VHDs), and rapid VM deployments from templates and pre-configured VHDs via a centralized library of virtual infrastructure objects.
In the following pages, I’ll explore VMM and the powerful set of features it provides to IT administrators. I will then look at the requirements and steps for creating a VMM installation. Finally, I’ll take a deeper dive into a handful of the more exciting features of VMM and leave you with some helpful tips on getting started.
Microsoft is fairly ambitious with the SCVMM product. In addition to its current features, future updates will be able to manage VMware and Microsoft’s Hyper-V technology (the new virtualization layer that will be included with Windows Server 2008). See the article and Microsoft’s site for more details.
Virtual Strategy Magazine has published my latest article: Comparing Virtualization Approaches. The article examines the various approaches to virtualization, including presentation-, application-, and server/hardware-level virtualization. The following diagram provides a brief overview of the approaches and their details.
The overall idea is that organizations have a wide array of choices in deciding how to isolate and consolidate their workloads. The challenge is picking the right tool for the job.
SearchServerVirtualization.com has a new post that offers an interesting and thought-provoking question: What does the future hold for virtualization? The post, Thoughts on the ‘top five’ trends in virtualization, includes editor Hannah Drake’s take on the subject. I chimed in with my responses:
It’s always fun to make predictions about the future. I’ll join in with a few of mine:
1) Desktop Virtualization/VDI deployments remain limited: Like “thin-client” computing before it, the idea of virtualizing entire desktop environments will fail to gain traction. Certainly, companies are doing this now. But, I think the potential drawbacks won’t be addressed quickly enough (if ever), and other solutions will help address security and manageability issues. Most importantly, though: What does everyone else think?
2) Other forms of virtualization gain traction: Presentation- and application-level virtualization will become much more common, and IT organizations will find that they have many different ways to address potential management issues.
3) Server virtualization technology will start to become commoditized: Already, numerous companies provide useful hypervisors and virtualization layers. It’s a cool technology, but many vendors have figured out how to do it. Moving forward, the real challenge will be in managing VM deployments, implementing backups, disaster recovery, and high availability, and dealing with storage issues. The virtualization layer will be considered the “foundation,” whereas management tools will receive the focus.
4) Virtualization Knowledge: For most IT people, managing basic virtualization functions will become a standard job function (like performing backups). There’s nothing shocking there. As virtual platforms get easier to manage, most organizations will need only a few “experts” (such as those that have the VCP certification) to work on design and troubleshooting. The rest of the IT crowd will adapt on their own. This might not be ideal, but I don’t see the VCP certification being as popular as the MCSE c. 1996 – 2000.
Some of this might be going against conventional “wisdom” (and aggressive marketing), but these wouldn’t be very useful predictions if I stayed with the safe bets. It will certainly be interesting to see how things pan out.
I have certainly been in IT long enough to see many fads come and go. I have also seen many genuinely good ideas become part of standard IT best practices. It’s probably safe to say that server virtualization fits in the latter camp. But, there’s still a lot of hype out there, and it’s good to keep things in perspective.
There’s probably a lot more to predict, so I’d be interested in hearing readers’ opinions: What are some other predictions, and what do you think I got wrong?
I recently wrote a technical best practices White Paper for Embotics, Inc. It’s titled Controlling VM Sprawl: Best Practices for Maintaining Control of Virtualized Infrastructures, and is available for free download (registration is required). The content defines and addresses the issue of "VM Sprawl" – the rapid proliferation of virtual machines that many environments are experiencing. While virtualization technology can provide numerous benefits in just about all areas of an IT organization’s operations, many people have let issues like security, policies, and processes slide. Here’s an excerpt from the introduction:
Many IT solutions tend to solve important business and technical problems in ways that can create management-related concerns. Virtualization is no exception. While organizations and their IT staff have quickly realized the many benefits of implementing virtualization, the challenge of controlling virtual infrastructures is one that is often overlooked.
Often, the benefits of virtualization start to become overshadowed by issues of security, administration, and configuration management. The primary cause is often referred to as “VM Sprawl” – the proliferation of virtual machines without adequate IT control. Organizations must recognize that virtual machines are different from physical ones, and that the systems and controls in place to manage the physical environment may not work well in the virtual one.
In this White Paper, I will discuss the sources of VM sprawl and the dangers inherent in it, and I will present best practices to address these issues. Finally, I will discuss the importance of automated virtualization management solutions. The goal of this white paper is to allow organizations to realize the many benefits of virtualization technology while still maintaining control of their environments.
Download the White Paper and feel free to leave me some feedback!
Many IT managers don’t know how many virtual machines they’re running and whether they’re secure, says virtualization expert Anil Desai.
…
Software developers like to use virtual machines because they can cheaply mimic a target environment.
Testers like virtual machines because they can test more combinations of new software with parts of the infrastructure in virtual machines.
Department heads like virtual appliances — applications teamed up with an operating system in virtual machine-ready file format — because they can be downloaded off the Internet, tried out, and pressed into service immediately, without the usual delays.
And each of these examples illustrates how virtualizing the enterprise leads to uncontrolled, virtual machine sprawl, with IT managers not knowing how many virtual machines they’re running, where they’re running, whether they’re offline and stored away, or whether they are secure.
The article raises awareness of the problem of "VM sprawl" – the rapid proliferation of virtual machines, often with little or no IT oversight. The article and the White Paper provide some best practices for gaining (or regaining) control of virtual machines through policies and processes. Feel free to leave comments about your own VM management horror stories (and, better yet, solutions)!
The goal of the Microsoft Solution Accelerator team is to ease the design and deployment of infrastructures based on Microsoft products. Earlier this year, I authored guides in their Infrastructure Planning and Design Series (see Microsoft Infrastructure Planning and Design (IPD) Guides Available for details).
The Microsoft Assessment and Planning (MAP) Solution Accelerator is an integrated platform with tools and guidance that make it easier for you to assess your current IT infrastructure and determine the right Microsoft technologies for your IT needs. It offers easy inventory, powerful assessment and actionable recommendations for Windows Server 2008, Windows Server Hyper-V, Virtual Server 2005 R2, Terminal Services, SoftGrid, System Center Virtual Machine Manager, Windows Vista, and 2007 Microsoft Office. The popular Windows Vista Hardware Assessment readiness tool will be integrated into this platform. …
Target Audience
Customers: IT Architects, Infrastructure Specialists and Desktop/Application Administrators.
Partners: System Integrators, Value-Added Partners and IT Consultants in the Enterprise and Midmarket
Key Benefits
Quick Assessment of Your Existing Infrastructure and Assets
Adaptive Guidance and Actionable Proposals that provide specific recommendations that will help simplify your planning and deployment of Microsoft technologies
One-Stop Shop for All Your Planning (or Pre-Sales) Needs
The good news is that this is a completely agent-less method of automatically analyzing your entire environment. The product generates detailed reports that would be tedious and error-prone to create manually.
Overall, the idea is to help organizations determine how best to deploy Microsoft’s virtualization technologies. If you’re currently considering an expanded virtualization deployment, this tool can help you make better decisions about your infrastructure needs. Give it a shot, and send feedback to the development team to improve the final version!
I was recently interviewed by Consortio Services as the first interviewee in their new podcast series. The team of Eric Johnson, Eric Beehler, and Josh Jones asked me numerous questions related to virtualization. Here’s a quick introduction to the topics:
This week our guest is Anil Desai, who we talk with about virtualization best practices. In the news, detecting wireless intruders, HP buys up more companies, Quicktime exploit, Exchange Server 2007 SP1, and how to keep your IT staff happy. Plus, “The Worst Tech Move of the Week”, “IT Pet Peeve”, and “The Tech Tip of the Week”.
The topics ranged from what’s important for IT people to learn to a comparison of the available technology approaches. You can download or listen to the free webcast at CS TechCast 1: Going Virtual.
A new chapter from my eBook titled The Definitive Guide to Virtual Platform Management is now available for free download (registration is required). Chapter #9, "Data Center Automation", focuses on ways in which enterprise management tools can help make the seemingly insurmountable task of managing server sprawl and VM sprawl much easier. Here’s a brief excerpt from the introduction:
A constant challenge in most IT environments is that of finding enough time and resources to finish all the tasks that need to be completed. IT departments find themselves constantly fighting fires and responding to a seemingly never-ending stream of change requests. Although virtualization technology can provide numerous advantages, there are also associated management-related challenges that must be addressed. When these tasks are performed manually, the added overhead can reduce cost savings and can result in negative effects on performance, availability, reliability, and security.
In previous chapters, I have covered a broad array of best practices related to virtualization management. Organizations have the ability to choose from a range of implementation methods, including physical servers, virtual machines, and clustered systems. The tasks have ranged from deployment and provisioning to monitoring virtual systems once they are in production. All of this raises questions related to the best method of actually implementing these best practices.
The focus of this chapter is on data center automation. Organizations that have deployed virtual machines throughout their environment can benefit from using enterprise software that has been designed to provide automated control. The goal is to implement technology that can provide for a seamless, self-managing, and adaptive infrastructure while minimizing manual effort. It’s a tall order, but certainly one that is achievable by using a combination of best practices and the right tools.
Stay tuned for the next and final chapter of the Guide!
In the past, I wrote a couple of articles related to Virtual Desktop Infrastructure (VDI) (for the articles, see the VDI/Desktop Management Category). The main idea is that, by running desktops within VMs, you can solve a lot of IT management challenges. I’m not sold on the idea, and it appears that I’m not alone. Hannah Drake, Editor at SearchServerVirtualization.com, asks Client-side virtual desktop technology: Unnecessary? The article quotes some interesting points from an IT consulting company. I added my $.02 in the comments section. Feel free to comment here, as well: Is VDI a fad, or is it a real solution for current IT problems?
Chapter #8 of my free eBook, The Definitive Guide to Virtual Platform Management, is now available for download. This chapter talks about ways in which organizations can use policies and processes to better manage virtualization. Included is information about creating and enforcing Service Level Agreements (SLAs), implementing charge-backs, and other best practices. Check it out online (and don’t miss the first seven chapters)!
There seems to be a lot of confusion out there related to the different methods of virtualization. In short, it’s not all about running multiple operating systems on the same hardware at the same time. You can also virtualize and isolate specific programs (for example, within a Java Virtual Machine). There are also other approaches. Microsoft refers to its Terminal Services feature as “presentation virtualization.” Most of us are quite familiar with using the Remote Desktop Protocol (RDP) to remotely manage a computer or to remotely run applications. But with Terminal Services, applications actually execute on the server. What if you want them to run on the client (where CPU, memory, disk, and network resources are arguably far cheaper)?
Which to use, Microsoft SoftGrid or Terminal Services? Both of the fictional companies in our webcast, Contoso and Fabrikam, are considering application virtualization, and they have heard of both Terminal Services and SoftGrid. But which do they choose? In this session, we look at these solutions, provide details on how they differ, and explain when to use them. We also cover how to install, configure, and use SoftGrid.
Better yet, the technologies can successfully be used together. Unfortunately, one of the drawbacks of SoftGrid is that it requires an Enterprise-level license for organizations that wish to deploy it. There are hints that this will soon change to make SoftGrid a lot more accessible to the masses (I’d consider using it for my home office).
Of course, there’s also an option not to virtualize at all. If you’re trying to consolidate, for example, Microsoft SQL Server machines, there’s probably a better way to consolidate your databases. The bottom line is that there are a lot of different options for obtaining the benefits of virtualization.
My article, the first in a series entitled “Fighting the Dark Side of Virtualization,” is now available on the Virtual Strategy Magazine Web site. The article, IT Fights Back: Virtualization SLAs and Charge-Backs, focuses on ways in which IT departments can help manage issues such as VM sprawl (the explosive proliferation of VMs) while containing costs. As a quick teaser, here’s the opening marquee:
Deploying virtualization into a production data center can provide an interesting mix of pros and cons. By consolidating workloads onto fewer servers, physical management is simplified. But what about managing the VMs? While storage solutions can provide much-needed flexibility, it’s still up to datacenter administrators to determine their needs and develop appropriate solutions. In this article, I’ll present storage-related considerations for datacenter administrators.
Estimating Storage Capacity Requirements
Virtual machines generally require a large amount of storage. The good news is that this can, in some cases, improve storage utilization. Since direct-attached storage is confined to a per-server basis (which often results in a lot of unused space), using centralized storage arrays can help. There’s also a countering effect, however: Since the expansion of virtual disk files is difficult to predict, you’ll need to leave some unallocated space for growth. Storage solutions that provide for over-committing space (sometimes referred to as “soft-allocation”) and for dynamically resizing arrays can significantly simplify management.
To add up the storage requirements, you should consider the following (a short sketch after the list shows the arithmetic):
The sum of the sizes of all “live” virtual disk files
Expansion predictions for virtual disk files
State-related disk files such as those used for suspending virtual machines and maintaining point-in-time snapshots
Space required for backups of virtual machines
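To make this concrete, here’s a minimal sketch that totals those four components for a few hypothetical VMs. All of the names and numbers, as well as the assumption of one full backup copy per VM, are purely illustrative:

```python
# Minimal sketch: estimating total storage needs for a group of VMs.
# All VM names, sizes, and the backup assumption are illustrative.

vms = [
    # (name, live virtual disk GB, predicted growth GB, suspend/snapshot GB)
    ("web01", 40, 10, 8),
    ("db01", 200, 50, 40),
    ("test01", 20, 5, 4),
]

BACKUP_COPIES = 1  # assume one full backup copy of each live disk

total_gb = 0
for name, live, growth, state in vms:
    per_vm = live + growth + state + (live * BACKUP_COPIES)
    total_gb += per_vm
    print(f"{name}: {per_vm} GB")

print(f"Estimated total: {total_gb} GB (before headroom for over-allocation)")
```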
All of this can be a tall order, but hopefully the overall configuration is no more complicated than that of managing multiple physical machines.
Placing Virtual Workloads
One of the best ways to reduce disk contention and improve overall performance is to profile virtual workloads to determine their requirements. Performance statistics help determine the number, size, and type of IO operations. Table 1 provides an example.
Table 1: Assigning workloads to storage arrays based on their performance requirements
In the provided example, the VMs are assigned to separate storage arrays to minimize contention. By combining VMs with “compatible” storage requirements on the same array, administrators can better distribute load and increase scalability.
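As a sketch of how such placement decisions might be automated, the following hypothetical routine packs VMs onto arrays so that each array’s estimated IOPS load stays under a ceiling. The VM names, IOPS figures, and the 1,000-IOPS ceiling are assumptions for illustration; a real tool would also weigh capacity, IO size, and read/write mix:

```python
# Illustrative first-fit-decreasing placement: assign each VM to the
# first storage array that still has IOPS headroom, heaviest VMs first.

ARRAY_IOPS_CEILING = 1000  # assumed per-array budget

vms = [("db01", 600), ("web01", 250), ("mail01", 400), ("test01", 100)]

arrays = []  # each array is a list of (name, iops) tuples
for name, iops in sorted(vms, key=lambda v: v[1], reverse=True):
    for array in arrays:
        if sum(i for _, i in array) + iops <= ARRAY_IOPS_CEILING:
            array.append((name, iops))
            break
    else:
        arrays.append([(name, iops)])  # no room anywhere: start a new array

for n, array in enumerate(arrays, start=1):
    print(f"Array {n}: {array}")
```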
Selecting Storage Methods
When planning to deploy new virtual machines, datacenter administrators have several different options. The first is to use local server storage. Fault-tolerant disk arrays that are directly attached to a physical server can be easy to configure. For smaller virtualization deployments, this approach makes sense. However, when capacity and performance requirements grow, adding more physical disks to each server can lead to management problems. For example, arrays are typically managed independently, leading to wasted disk space and additional administrative effort.
That’s where network-based storage comes in. By using centralized, network-based storage arrays, organizations can support many host servers using the same infrastructure. While support for technologies varies based on the virtualization platform, NAS, iSCSI, and SAN-based storage are the most common. NAS devices use file-level IO and are typically used as file servers. They can be used to store VM configuration and virtual hard disk files. However, latency and competition for physical disk resources can be significant.
SAN and iSCSI storage solutions perform block-level IO operations, providing raw access to storage resources. Through the use of redundant connections and multi-pathing, they can provide the highest levels of performance, lowest latency, and simplified management.
To determine the most appropriate option, datacenter managers should consider the workload requirements for each host server and its associated guest operating systems. Details include the number and types of applications that will be running, along with their storage and performance requirements. The sum of this information can help determine whether local or network-based storage is the better fit.
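As a rough illustration of that decision process, here’s a hypothetical rule-of-thumb function. The thresholds, and the assumption that shared multi-host access pushes the choice toward block-level networked storage, are mine for the sake of the sketch rather than anything prescribed above:

```python
# Hypothetical rule of thumb for choosing a storage method per host.
# Every threshold here is an illustrative assumption, not a vendor guideline.

def suggest_storage(total_iops: int, total_gb: int, shared_access: bool) -> str:
    """Suggest a storage method from aggregate guest requirements."""
    if shared_access or total_iops > 2000:
        # Heavy or multi-host workloads: block-level networked storage
        return "SAN or iSCSI"
    if total_gb > 1000:
        # Capacity has outgrown local disks, but IO demands are moderate
        return "NAS"
    # Small deployments: local fault-tolerant arrays are simplest
    return "local disk array"

print(suggest_storage(total_iops=2500, total_gb=800, shared_access=False))
print(suggest_storage(total_iops=300, total_gb=200, shared_access=False))
```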
Monitoring Storage Resources
CPU- and memory-related statistics are often monitored for all physical and virtual workloads. In addition to this information, disk-related performance should be measured. Statistics collected at the host server level provide an aggregate view of disk activity and show whether storage resources are meeting requirements. Guest-level monitoring can help administrators drill down into the details of which workloads are generating the most activity. While the specific statistics that can be collected will vary across operating systems, the types of information that should be monitored include the following (a brief collection sketch appears after the list):
IO Operations per Second (IOPS): This statistic refers to the number of disk-related transactions that are occurring at a given instant. IOPS is often used as the first guideline for determining overall storage requirements.
Storage IO Utilization: This statistic refers to the percentage of total IO bandwidth that is being consumed at a given point in time. High levels of utilization can indicate the need to upgrade or move VMs.
Paging operations: Memory-starved VMs can generate significant IO traffic due to paging to disk. Adding or reconfiguring memory settings can help improve performance.
Disk queue length: The number of IO operations that are pending. A consistently high number indicates that storage resources are creating a performance bottleneck.
Storage Allocation: Ideally, administrators will be able to monitor the current amount of physical storage space that is actually in use for all virtual hard disks. The goal is to proactively rearrange or reconfigure VMs to avoid over-allocation.
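As a simple example of host-level collection, the sketch below samples aggregate per-disk IOPS using the third-party psutil library and flags anything above an illustrative alert threshold. Attributing the activity to individual VMs would still require guest-level or platform-specific tools:

```python
# Illustrative host-level sampler: compute per-disk IOPS over an interval
# using psutil, then flag disks above an assumed alert threshold.

import time
import psutil

INTERVAL_SECONDS = 10
ALERT_IOPS = 1000  # assumed threshold for illustration

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL_SECONDS)
after = psutil.disk_io_counters(perdisk=True)

for disk, now in after.items():
    prev = before[disk]
    reads = now.read_count - prev.read_count
    writes = now.write_count - prev.write_count
    iops = (reads + writes) / INTERVAL_SECONDS
    status = "ALERT" if iops > ALERT_IOPS else "ok"
    print(f"{disk}: {iops:.0f} IOPS [{status}]")
```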
VM disk-related statistics will change over time. Therefore, automated monitoring tools that can generate reports and alerts are an important component of any virtualization storage environment.
Summary
Managing storage capacity and performance should be high on the list of responsibilities for datacenter administrators. Virtual machines can easily be constrained by disk-related bottlenecks, causing slow response times or even downtime. By making smart VM placement decisions and monitoring storage resources, many of these potential bottlenecks can be overcome. Above all, it’s important for datacenter administrators to work together with storage managers to ensure that business and technical goals remain aligned over time.