Archive for category Best Practices

Evaluating Virtualization Management Solutions: Free eBook Chapter

The tenth and final chapter of my eBook, The Definitive Guide to Virtual Platform Management, is now available for free download (registration is required). The complete eBook, available as a single PDF, should be available in the near future. The chapter focuses on details that IT professionals should keep in mind when looking for tools to manage virtualization. From the introduction:

IT organizations are often aware of the fact that costs associated with managing new technology can far outweigh the initial deployment costs. Virtualization is no exception. Although the ability to run multiple isolated workloads on the same hardware can provide immediate cost savings and benefits throughout the environment, the associated administration tasks involve significant time and effort.

Throughout the previous chapters, I have covered a wide array of practices and recommendations for gaining and retaining control over virtualized environments. The primary challenge is that properly managing an environment that contains dozens (if not hundreds) of virtual machines can be very difficult. When these tasks are performed manually, IT organizations must absorb significant costs.

Fortunately, there’s a better way: through the use of virtualization-aware enterprise automation solutions, much of the work can be simplified or even eliminated. With the proliferation of virtual machine technology, literally dozens of products are available for meeting these needs. The focus of this chapter is on presenting factors that should be considered when evaluating these solutions. I’ll present details related to the overall goals of virtualization management, along with specific features IT organizations should look for in products that will help manage their mixed virtual and physical infrastructures.

I hope readers find the entire guide to be useful.  Feel free to leave questions and comments here.

Virtual Strategy Magazine: Comparing Virtualization Approaches

Virtual Strategy Magazine has published my latest article: Comparing Virtualization Approaches. The article examines the various approaches to virtualization, including presentation-, application-, and server/hardware-level virtualization.  The following diagram provides a brief overview of the approaches and their details.

[Diagram: overview of presentation-, application-, and server/hardware-level virtualization approaches]

The overall idea is that organizations have a wide array of choices in deciding how to isolate and consolidate their workloads.  The challenge is picking the right tool for the job.

Virtual Platform Management – Data Center Automation

A new chapter from my eBook titled The Definitive Guide to Virtual Platform Management is now available for free download (registration is required).  Chapter #9, "Data Center Automation", focuses on ways in which enterprise management tools can help make the seemingly insurmountable task of managing server sprawl and VM sprawl much easier.  Here’s a brief excerpt from the introduction:

A constant challenge in most IT environments is that of finding enough time and resources to finish all the tasks that need to be completed. IT departments find themselves constantly fighting fires and responding to a seemingly never-ending stream of change requests. Although virtualization technology can provide numerous advantages, there are also associated management-related challenges that must be addressed. When these tasks are performed manually, the added overhead can reduce cost savings and can result in negative effects on performance, availability, reliability, and security.

In previous chapters, I have covered a broad array of best practices related to virtualization management. Organizations have the ability to choose from a range of implementation methods, including physical servers, virtual machines, and clustered systems. The tasks have ranged from deployment and provisioning to monitoring virtual systems once they are in production. All of this raises questions related to the best method of actually implementing these best practices.

The focus of this chapter is on data center automation. Organizations that have deployed virtual machines throughout their environment can benefit from using enterprise software that has been designed to provide automated control. The goal is to implement technology that can provide for a seamless, self-managing, and adaptive infrastructure while minimizing manual effort. It’s a tall order, but certainly one that is achievable by using a combination of best practices and the right tools.

Stay tuned for the next and final chapter of the Guide!

Microsoft Infrastructure Planning and Design (IPD) Guides Available

I recently worked with Microsoft’s Solution Accelerator team to develop a guide to designing an infrastructure to support Microsoft’s virtualization solutions.  Unlike much of the other technical information that is available online, this series focuses on the design aspect of managing technology, rather than on implementation details.  From the web site:

Infrastructure Planning and Design guides share a common structure, including:

  • Definition of the technical decision flow through the planning process.
  • Listing of decisions to be made and the commonly available options and considerations.
  • Relating the decisions and options to the business in terms of cost, complexity, and other characteristics.
  • Framing decisions in terms of additional questions to the business to ensure a comprehensive alignment with the appropriate business landscape.

These guides complement product documentation by focusing on infrastructure design options.

Each guide leads the reader through critical infrastructure design decisions, in the appropriate order, evaluating the available options for each decision against its impact on critical characteristics of the infrastructure. The IPD Series highlights when service and infrastructure goals should be validated with the organization and provides additional questions that should be asked of service stakeholders and decision makers.

You can download the files from the Infrastructure Planning and Design page (registration is optional).  The content includes the following downloads:

  • IPD Series Introduction: A brief introduction to the series and its approach.
  • Select the Right Virtualization Solution: This guide includes an overview of Microsoft’s virtualization products and technologies.  The package includes a flowchart that can be helpful in deciding how to select from among Microsoft Virtual Server 2005, Microsoft Virtual PC, Microsoft Terminal Services, Microsoft SoftGrid, and the newly-announced Hyper-V (available with Windows Server 2008).
  • Windows Server Virtualization: This guide covers details on Windows Server Virtualization (WSv, now officially "Hyper-V") and Microsoft Virtual Server.  It includes a document and slides that cover the process of selecting which workloads to virtualize.  The guide then walks through the process of translating virtual machine requirements to host infrastructure requirements.
  • SoftGrid Application Virtualization: This guide focuses on SoftGrid – recently renamed to Microsoft Application Virtualization.  It covers best practices for designing an infrastructure for simplified application deployment and maintenance.

All downloads include files in Office 2003 and Office 2007 formats and are ready for use in your own presentations or proposals.  More guides will be available in the near future, and you should be able to access beta versions of upcoming guides at Microsoft Connect.  I hope you find the content to be useful!

Virtual Platform Management: Policies and Processes

Chapter #8 of my free eBook, The Definitive Guide to Virtual Platform Management, is now available for download.  This chapter talks about ways in which organizations can use policies and processes to better manage virtualization.  Included is information about creating and enforcing Service Level Agreements (SLAs), implementing charge-backs, and other best practices.  Check it out online (and don’t miss the first seven chapters)!

IT Fights Back: Virtualization SLAs and Charge-Backs

My article, the first in a series entitled “Fighting The Dark Side of Virtualization,” is now available on the Virtual Strategy Magazine Web site.  The article, IT Fights Back: Virtualization SLAs and Charge-Backs, focuses on ways in which IT departments can help manage issues such as VM sprawl (the explosive proliferation of VMs) while containing costs.  As a quick teaser, here’s the opening marquee:

[Image: opening marquee of the article]

The adventure begins…

Managing Virtualization Storage for Datacenter Managers

This article was first published on SearchServerVirtualization.TechTarget.com.

Deploying virtualization into a production data center can provide an interesting mix of pros and cons. By consolidating workloads onto fewer servers, physical management is simplified. But what about managing the VMs? While storage solutions can provide much-needed flexibility, it’s still up to datacenter administrators to determine their needs and develop appropriate solutions. In this article, I’ll present storage-related considerations for datacenter administrators.

Estimating Storage Capacity Requirements

Virtual machines generally require a large amount of storage. The good news is that virtualization can, in some cases, improve storage utilization: because centralized storage arrays are not confined to a per-server basis (a limitation of direct-attached storage that often results in a lot of unused space), they can reclaim much of that wasted capacity. There’s also a countering effect, however: since the expansion of virtual disk files is difficult to predict, you’ll need to leave some unallocated space for expansion. Storage solutions that provide for over-committing space (sometimes referred to as “soft-allocation”) and for dynamically resizing arrays can significantly simplify management.

To add up the storage requirements, you should consider the following (a rough capacity sketch follows this list):
  • The sum of the sizes of all “live” virtual disk files
  • Expansion predictions for virtual disk files
  • State-related disk files such as those used for suspending virtual machines and maintaining point-in-time snapshots
  • Space required for backups of virtual machines
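
As a rough illustration of adding these components together, here is a minimal Python sketch; the growth allowance, state-file overhead, and backup-copy figures are placeholder assumptions, not sizing recommendations.

```python
# Hypothetical capacity sketch: the percentages below are illustrative
# assumptions, not sizing guidance for any particular platform.
def estimate_storage_gb(vms, growth_factor=0.20, state_overhead=0.10, backup_copies=1):
    """vms: list of dicts, each with 'disk_gb' = size of its live virtual disks."""
    live = sum(vm["disk_gb"] for vm in vms)    # sum of all "live" virtual disks
    expansion = live * growth_factor           # predicted virtual disk growth
    state = live * state_overhead              # suspend files and snapshots
    backups = live * backup_copies             # space reserved for VM backups
    return live + expansion + state + backups

print(estimate_storage_gb([{"disk_gb": 40}, {"disk_gb": 120}]))  # 368.0 GB for two VMs
```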

All of this can be a tall order, but hopefully the overall configuration is no more complicated than that of managing multiple physical machines.

Placing Virtual Workloads

One of the best ways to reduce disk contention and improve overall performance is to profile virtual workloads to determine their requirements. Performance statistics help determine the number, size, and type of IO operations. Table 1 provides an example.


Table 1: Assigning workloads to storage arrays based on their performance requirements

In the provided example, the VMs are assigned to separate storage arrays to minimize contention. By combining VMs with “compatible” storage requirements on the same server, administrators can better distribute load and increase scalability.
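
To make the placement idea more concrete, here is a hypothetical sketch that spreads VMs across arrays based on their estimated IOPS; the greedy heuristic and the workload numbers are illustrative assumptions, not figures from Table 1.

```python
# Illustrative only: distribute VM disk workloads across storage arrays so
# that no single array carries a disproportionate share of the IOPS load.
def place_vms(vms, array_names):
    """vms: list of (name, estimated_iops) tuples; returns {array: [vm names]}."""
    placement = {name: [] for name in array_names}
    load = {name: 0 for name in array_names}
    # Greedy heuristic: heaviest workloads first, each onto the least-loaded array.
    for vm, iops in sorted(vms, key=lambda item: item[1], reverse=True):
        target = min(load, key=load.get)
        placement[target].append(vm)
        load[target] += iops
    return placement

vms = [("SQL01", 900), ("WEB01", 150), ("MAIL01", 600), ("FILE01", 300)]
print(place_vms(vms, ["Array-A", "Array-B"]))
# {'Array-A': ['SQL01', 'WEB01'], 'Array-B': ['MAIL01', 'FILE01']}
```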

Selecting Storage Methods

When planning to deploy new virtual machines, datacenter administrators have several different options. The first is to use local server storage. Fault-tolerant disk arrays that are directly-attached to a physical server can be easy to configure. For smaller virtualization deployments, this approach makes sense. However, when capacity and performance requirements grow, adding more physical disks to each server can lead to management problems. For example, arrays are typically managed independently, leading to wasted disk space and requiring administrative effort.

That’s where network-based storage comes in. By using centralized, network-based storage arrays, organizations can support many host servers using the same infrastructure. While support for technologies varies based on the virtualization platform, NAS, iSCSI, and SAN-based storage are the most common. NAS devices use file-level IO and are typically used as file servers. They can be used to store VM configuration and hard disk files. However, latency and competition for physical disk resources can be significant.

SAN and iSCSI storage solutions perform block-level IO operations, providing raw access to storage resources. Through the use of redundant connections and multi-pathing, they can provide the highest levels of performance, lowest latency, and simplified management.

In order to determine the most appropriate option, datacenter managers should consider workload requirements for each host server and its associated guest OS’s. Details include the number and types of applications that will be running, and their storage and performance requirements. The sum of this information can help determine whether local or network-based storage is most appropriate.

Monitoring Storage Resources

CPU and memory-related statistics are often monitored for all physical and virtual workloads. In addition to this information, disk-related performance should be measured. Statistics collected at the host server level will provide an aggregate view of disk activity and whether storage resources are meeting requirements. Guest-level monitoring can help administrators drill down into the details of which workloads are generating the most activity. While the specific statistics that can be collected will vary across operating systems, the types of information that should be monitored include the following (a small alerting sketch follows this list):

  • IO Operations per Second (IOPS): This statistic refers to the number of disk-related transactions that are occurring at a given instant. IOPS are often used as the first guideline for determining overall storage requirements.
  • Storage IO Utilization: This statistic refers to the percentage of total IO bandwidth that is being consumed at a given point in time. High levels of utilization can indicate the need to upgrade or move VMs.
  • Paging operations: Memory-starved VMs can generate significant IO traffic due to paging to disk. Adding or reconfiguring memory settings can help improve performance.
  • Disk queue length: The number of IO operations that are pending. A consistently high number will indicate that storage resources are creating a performance bottleneck.
  • Storage Allocation: Ideally, administrators will be able to monitor the current amount of physical storage space that is actually in use for all virtual hard disks. The goal is to proactively rearrange or reconfigure VMs to avoid over-allocation.
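
As a minimal sketch of how these statistics might feed automated alerts, here is a Python example that compares per-VM samples against thresholds; the metric names and threshold values are assumptions for illustration, not recommended limits.

```python
# Minimal sketch: thresholds and metric names are placeholder assumptions;
# real limits depend on your storage hardware and workload baselines.
THRESHOLDS = {"iops": 1500, "io_util_pct": 80, "disk_queue": 4, "alloc_pct": 90}

def storage_alerts(vm_name, samples):
    """samples: dict of metric name -> latest measured value for one VM."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = samples.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{vm_name}: {metric}={value} exceeds threshold {limit}")
    return alerts

print(storage_alerts("WEB01", {"iops": 2100, "io_util_pct": 65, "disk_queue": 6}))
# ['WEB01: iops=2100 exceeds threshold 1500', 'WEB01: disk_queue=6 exceeds threshold 4']
```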

VM disk-related statistics will change over time. Therefore, automated monitoring tools that can generate reports and alerts are an important component of any virtualization storage environment.

Summary

Managing storage capacity and performance should be high on the list of responsibilities for datacenter administrators. Virtual machines can easily be constrained by disk-related bottlenecks, causing slow response times or even downtime. By making smart VM placement decisions and monitoring storage resources, many of these potential bottlenecks can be overcome. Above all, it’s important for datacenter administrators to work together with storage managers to ensure that business and technical goals remain aligned over time.

Virtualization Considerations for Storage Managers

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s common for new technology to require changes in all areas of an organization’s overall infrastructure. Virtualization is no exception. While many administrators often focus on CPU and memory constraints, storage-related performance is also a very common bottleneck. In some ways, virtual machines can be managed like physical ones. After all, each VM runs its own operating system, applications, and services. But there are also numerous additional considerations that must be taken into account when designing a storage infrastructure. By understanding the unique needs of virtual machines, storage managers can build a reliable and scalable data center infrastructure to support their VMs.

Analyzing Disk Performance Requirements

For many types of applications, the primary consideration around which the storage infrastructure is designed is I/O operations per second (IOPS). IOPS refer to the number of read and write operations that are performed, but they do not always capture the whole picture. Additional considerations include the type of activity. For example, since virtual disks that are stored on network-based storage arrays must support guest OS disk activity, the average I/O request size tends to be small. Additionally, I/O requests are frequent and often random in nature. Paging can also create a lot of traffic on memory-constrained host servers. There are also other considerations that will be workload-specific. For example, it’s also good to measure the percentage of read vs. write operations when designing the infrastructure.

Now, multiply all of these statistics by the number of VMs that are being supported on a single storage device, and you are faced with the very real potential for large traffic jams. The solution? Optimize the storage solution for supporting many small, non-sequential IO operations. And, most importantly, distribute VMs based on their levels and types of disk utilization. Performance monitoring can help generate the information you need.

Considering Network-Based Storage Approaches

Many environments already use a combination of NAS, SAN, and iSCSI-based storage to support their physical servers. These methods can still be used for hosting virtual machines, as most virtualization platforms provide support for them. For example, SAN- or iSCSI-based volumes that are attached to a physical host server can be used to store virtual machine configuration files, virtual hard disks, and related data. It is important to note that, by default, the storage is attached to the host and not to the guest VM. Storage managers should keep track of which VMs reside on which physical volumes for backup and management purposes.

In addition to providing storage at the host-level, guest operating systems (depending on their capabilities) can take advantage of NAS and iSCSI-based storage. With this approach, VMs can directly connect to network-based storage. A potential drawback, however, is that guest operating systems can be very sensitive to latency, and even relatively small delays can lead to guest OS crashes or file system corruption.

Evaluating Useful Storage Features

As organizations place multiple mission-critical workloads on the same servers through the use of virtualization, they can use various storage features to improve reliability, availability, and performance. Implementing RAID-based striping across arrays of many disks can help significantly improve performance. The array’s block size should be matched to the most common size of I/O operations. However, more disks mean more chances for failure. So, features such as multiple parity drives and hot standby drives are a must.

Fault tolerance can be implemented through the use of multi-pathing for storage connections. For NAS and iSCSI solutions, storage managers should look into having multiple physical network connections and implementing fail-over and load-balancing features by using network adapter teaming. Finally, it’s a good idea for host servers to have dedicated network connections to their storage arrays. While you can often get by with shared connections in low-utilization scenarios, the load placed by virtual machines can be significant and can increase latency.

Planning for Backups

Storage administrators will need to back up many of their virtual machines. Apart from allocating the necessary storage space, it is necessary to develop a method for dealing with exclusively locked virtual disk files. There are two main approaches:

  • Guest-Level Backups: In this approach, VMs are treated like physical machines. Generally, you would install backup agents within VMs, define backup sources and destinations, and then let them go to work. The benefit of this approach is that only important data is backed up (thereby reducing required storage space). However, your backup solution must be able to support all potential guest OS’s and versions. And, the complete recovery process can involve many steps, including reinstalling and reconfiguring the guest OS.
  • Host-Level Backups: Virtual machines are conveniently packaged into a few important files. Generally, this includes the VM configuration file and virtual disks. You can simply copy these files to another location. The most compatible approach involves stopping or pausing the VM, copying the necessary files, and then restarting the VM. The issue, however, is that this can require downtime. Numerous first- and third-party solutions are able to back up VMs while they’re “hot”, thereby eliminating service interruptions. Regardless of the method used, replacing a failed or lost VM is easy – simply restore the necessary files to the same or another host server and you should be ready to go. The biggest drawback of host-level backups is in the area of storage requirements. You’re going to be allocating a ton of space for the guest OS’s, applications, and data you’ll be storing.

Storage solutions options such as the ability to perform snapshot-based backups can be useful. However, storage administrators should thoroughly test the solution and should look for explicitly-stated virtualization support from their vendors. Remember, backups must be consistent to a point in time, and non-virtualization-aware solutions might neglect to flush information stored in the guest OS’s cache.

Summary

By understanding and planning for the storage-related needs of virtual machines, storage administrators can help their virtual environments scale and keep pace with demand. While some of the requirements are somewhat new, many involve utilizing the same storage best practices that are used for physical machines. Overall, it’s important to measure performance statistics and to consider storage space and performance when designing a storage infrastructure for VMs.

Advanced Backup Options for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s a pretty big challenge to support dozens or hundreds of separate virtual machines. Add in the requirement for backups – something that generally goes without saying – and you have to figure out how to protect important information. Yes, that usually means at least two copies of each of these storage hogs. I understand that you’re not made of storage (unless, of course, you’re the disk array that’s reading this article on the web server). So what should you do? In this tip, I’ll outline several approaches to performing backups for VMs, focusing on the strengths and limitations of each.

Determining Backup Requirements

Let’s start by considering the requirements for performing backups. The list of goals is pretty simple, in theory:

  • Minimize data loss
  • Minimize recovery time
  • Simplify implementation and administration
  • Minimize costs and resource usage

Unfortunately, some of these objectives are often at odds with each other. Since implementing any solution takes time and effort, start by characterizing the requirements for each of your virtual machines and the applications and services they support. Be sure to write in pencil, as it’s likely that you’ll be revising these requirements. Next, let’s take a look at the different options for meeting these goals.

Application-Level Backups

The first option to consider for performing backups is that of using application features to do the job. There’s usually nothing virtualization-specific about this approach. Examples include:

  • Relational Database Servers: Databases were designed to be highly-available and it should come as no surprise that there are many ways of using built-in backup methods. In addition to standard backup and restore operations, you can use replication, log-shipping, clustering, and other methods to ensure that data remains protected.
  • Messaging Servers: Communications platforms such as Microsoft Exchange Server provide methods for keeping multiple copies of the data store in sync. Apart from improving performance (by placing data closer to those who need it), this can provide adequate backup functionality.
  • Web Servers: The important content for a web server can be stored in a shared location or can be copied to each node in a web server farm. When a web server fails, just restore the important data to a standby VM, and you’re ready to go. Better yet, use shared session state or stateless application features and a network load-balancer to increase availability and performance.

All of these methods allow you to protect against data loss and downtime by storing multiple copies of important information.

Guest-Level Backups

What’s so special about VMs, anyway? I mean, why not just treat them like the physical machines that they think they are? That’s exactly the approach with guest-level backups. The most common method with this approach is to install backup agents within the guest OS and to specify which files should be backed up and their destinations. As with physical servers, administrators can decide what really needs to be backed up – generally just data, applications, and configuration files. That saves precious disk space and can reduce backup times.

There are, however, drawbacks to this backup approach. First, your enterprise backup solution must support your guest OS’s (try finding an agent for OS/2!). Assuming the guest OS is supported, the backup and recovery process is often different for each OS. This means more work on the restore side of things. Finally, the restore process can take significant time, since a base OS must be installed and the associated components restored.

Examples of popular enterprise storage and backup solutions are those from Symantec, EMC, Microsoft and many other vendors.

Host-Level Backups

Host-level backups take advantage of the fact that virtual machines are encapsulated in one or more virtual disk files, along with associated configuration files. The backup process consists of making a copy of the necessary files from the host OS’s file system. Host-level backups provide a consistent method for copying VMs since you don’t have to worry about differences in guest operating systems. When it comes time to restore a VM (and you know it’s going to happen!), all that’s usually needed is to reattach the VM to a working host server.

However, the drawback is that you’re likely to need a lot of disk space. Since the entire VM, including the operating system, applications, and other data are included in the backup set, you’ll have to allocate the necessary storage resources. And, you’ll need adequate bandwidth to get the backups to their destination. Since virtual disk files are exclusively locked while a VM is running, you’ll either need to use a “hot backup” solution, or you’ll have to pause or stop the VM to perform a backup. The latter option results in (gulp!) scheduled downtime.
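
To illustrate the stop-copy-restart sequence described above, here is a hedged Python sketch; pause_vm and resume_vm are hypothetical stand-ins for whatever pause/resume mechanism your virtualization platform exposes, and "hot backup" products avoid the downtime this approach incurs.

```python
# Hypothetical "cold" host-level backup: pause the VM, copy its files, resume.
# pause_vm/resume_vm are stand-ins for your platform's API or CLI; this is a
# sketch of the approach, not a supported procedure for any specific product.
import shutil
from pathlib import Path

def cold_backup(vm_name, vm_files, backup_root, pause_vm, resume_vm):
    """vm_files: paths to the VM's configuration and virtual disk files."""
    destination = Path(backup_root) / vm_name
    destination.mkdir(parents=True, exist_ok=True)
    pause_vm(vm_name)                        # scheduled downtime begins here
    try:
        for path in vm_files:
            shutil.copy2(path, destination)  # copy config and virtual disk files
    finally:
        resume_vm(vm_name)                   # VM resumes even if the copy fails
```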

Solutions and technologies include:

  • VMware: VMotion; High Availability; Consolidated Backup; DRS
  • Microsoft Volume Shadow Copy Service (VSS)

File System Backups

File system backups are based on features available in storage arrays and specialized software products. While they’re not virtualization-specific, they can help simplify the process of creating and maintaining VM backups. Snapshot features can allow you to make a duplicate of a running VM, but you should make sure that your virtualization platform is specifically supported. File system replication features can use block- or bit-level features to keep a primary and backup copy of virtual hard disk files in sync.

Since changes are transferred efficiently, less bandwidth is required. And, the latency between when modifications are committed on the primary VM and the backup VM can be minimized (or even eliminated). That makes the storage-based approach useful for maintaining disaster recovery sites. While third-party products are required, file system backups can be easy to set up and maintain. But, they’re not always ideal for write-intensive applications and workloads.

Potential solutions include products from Double-Take Software and from Neverfail. Also, if you’re considering the purchase of a storage solution, ask your vendor about replication and snapshot capabilities, and their compatibility with virtualization.

Back[up] to the Future

Most organizations will likely choose different backup approaches for different applications. For example, application-level backups are appropriate for those systems that support them. File system replication is important for maintaining hot or warm standby sites and services. Guest- and host-level backups balance ease of backup/restore operations vs. the amount of usable disk space. Overall, you should compile the data loss, downtime and cost constraints, and then select the most appropriate method for each type of VM. While there’s usually no single answer that is likely to meet all of your needs, there are some pretty good options out there!

Evaluating Network-Based Storage Options

This article was first published on SearchServerVirtualization.TechTarget.com.

Imagine living in a crowded apartment with a bunch of people that think they own the place. Operating systems and applications can be quite inconsiderate at times. For example, when they’re running on physical machines, these pieces of software are designed to monopolize hardware resources. Now, add virtualization to the picture, and you get a lot of selfish people competing for the same resources. In the middle is the virtualization layer – acting as a sort of landlord or superintendent – trying to keep everyone happy (while still generating a profit). Such is the case with disk I/O on virtualization host servers. In this Tip, I’ll discuss some options for addressing this common bottleneck.

Understanding Virtualization I/O Requirements

Perhaps the most important thing to keep in mind is that not all disk I/O is the same. When designing storage for virtualization host servers, you need to get an idea of the actual disk access characteristics you will need to support. Considerations include:

  • Ratio of read vs. write operations
  • Frequency of sequential vs. random reads and writes
  • Average I/O transaction size
  • Disk utilization over time
  • Latency constraints
  • Storage space requirements (including space for backups and maintenance operations)

Collecting this information on a physical server can be fairly simple. For example, on the Windows platform, you can collect data using Performance Monitor and store it to a binary file or database for later analysis. When working with VMs, you’ll need to measure and combine I/O requirements to define your disk performance goals. The focus of this tip is on choosing methods for storing virtual hard disk files, based on cost, administration and scalability requirements.
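
As an example of combining per-VM measurements into host-level storage requirements, the short sketch below aggregates IOPS, throughput, and read percentage; the profile numbers are made up for illustration and would come from your own Performance Monitor (or equivalent) baselines.

```python
# Assumed per-VM baselines (IOPS, average request size in KB, % reads); replace
# these with figures collected from your own monitoring data.
profiles = {
    "SQL01":  {"iops": 900, "avg_kb": 8,  "read_pct": 70},
    "WEB01":  {"iops": 150, "avg_kb": 16, "read_pct": 90},
    "MAIL01": {"iops": 600, "avg_kb": 32, "read_pct": 55},
}

total_iops = sum(p["iops"] for p in profiles.values())
throughput_mbps = sum(p["iops"] * p["avg_kb"] for p in profiles.values()) / 1024
read_pct = sum(p["iops"] * p["read_pct"] for p in profiles.values()) / total_iops

print(f"{total_iops} IOPS, {throughput_mbps:.1f} MB/s, {read_pct:.0f}% reads")
# 1650 IOPS, 28.1 MB/s, 66% reads
```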

Local / Direct-Attached Storage

The default storage option in most situations is local storage. The most common connection methods include PATA, SATA, SCSI, and SAS. Each type of connection comes with associated performance and cost considerations. RAID-based configurations can provide fault tolerance and can be used to improve performance.

Pros:

  • Generally cheaper than other storage options
  • Low latency, high bandwidth connections that are reserved for a single physical server

Cons:

  • Potential waste of storage space (since disk space is not shared across computers)
  • Limited total storage space and scalability due to physical disk capacity constraints (especially when implementing RAID)
  • Difficult to manage, as storage is decentralized

Storage Area Networks (SANs) / Fibre Channel

SANs are based on Fibre Channel connections, rather than copper-based Ethernet. SAN-based protocols are designed to provide high throughput and low latency, but require the implementation of an optical-based network infrastructure. Generally, storage arrays provide raw block-level connections to carved-out portions of disk space.

Pros:

  • Can provide high performance connections
  • Improved compatibility – appears as local storage to the host server
  • Centralizes storage management

Cons:

  • Expensive to implement – requires Fibre Channel-capable host bus adapters, switches, and cabling
  • Expensive to administer – requires expertise to manage a second “network” environment

Network-Based Storage

Network-based storage devices are designed to provide disk resources over a standard (Ethernet) network connection. They most often support protocols such as Server Message Block (SMB), and Network File System (NFS), both of which are designed for file-level disk access. The iSCSI protocol provides the ability to perform raw (block-level) disk access over a standard network. iSCSI-attached volumes appear to the host server as if they were local storage.

Pros:

  • Lower implementation and management cost (vs. SANs) due to utilization of copper-based (Ethernet) connections
  • Storage can be accessed at the host- or guest-level, based on specific needs
  • Higher scalability (arrays can contain hundreds of disks) and throughput (dedicated, redundant I/O controllers)
  • Simplified administration (vs. direct-attached storage), since disks are centralized

Cons:

  • Applications and virtualization platforms must support either file-based access or iSCSI

Storage Caveats: Compatibility vs. Capacity vs. Cost

In many real-world implementations of virtualization, an important bottleneck is storage performance. Organizations can use well-defined methods of increasing CPU and memory performance, but what about the hard disks? Direct-attached, network-based, and SAN-based storage can each provide viable options. Once you’ve outgrown local storage (from a capacity, performance, or administration standpoint), you should consider implementing iSCSI or file-based network storage servers. The primary requirement, of course, is that your virtualization layer must support the hardware and software you choose. SANs are a great option for organizations that have already made the investment, but some studies show that iSCSI devices can provide similar levels of performance at a fraction of the cost.

The most important thing to remember is to thoroughly test your solution before deploying it into production. Operating systems can be very sensitive to disk-related latency, and disk contention can cause unforeseen traffic patterns. And, once the systems are deployed, you should be able to monitor and manage throughput, latency, and other storage-related parameters.

Overall, providing storage for virtual environments can be a tricky technical task. The right solution, however, can result in happy landlords and tenants, whereas the wrong solution results in one seriously overcrowded apartment.

VDI Benefits without VDI: Desktop Management

This article was first published on SearchServerVirtualization.TechTarget.com.

Quick: Think of the five systems administration tasks you most enjoy doing! If you’re like most techies, desktop management probably didn’t make the list. It’s probably right up there with washing the car or mowing the lawn (a whole different type of administration challenge). Caring for and feeding client-side computers can be a painful and never-ending process. Therefore, it’s no surprise that Virtual Desktop Infrastructure (VDI) technology is capturing the eyes and ears of IT staff.

But does VDI provide a unique solution? Or, can you get the same benefits through other practices and approaches? (If you’ve read the title of this Tip, there’s a good chance you can guess where I’m going with this.) Over the years, a variety of solutions for managing desktop and notebook computers have become commonplace. In this article, I’ll outline some problems and solutions. The goal is not to discredit VDI, but to look at options for achieving the same goals.

Deployment and Provisioning

  • Problem: Rolling out new desktop computers can be time-consuming and labor-intensive. Using VDI, provisioning is much faster since standard base images can be quickly deployed within the data center. Users can then access the images from any computer or thin client.
  • Alternative Solution(s): Automated operating system deployment tools are available from OS vendors and from third-parties. Some use an image-based approach in which organizations can create libraries of supported configurations and then deploy them to physical or virtual machines. When combined with network boot features, the process can be completely automated. Additionally, there are server-based options such as Microsoft SoftGrid for automatically installing applications as they are requested.

Desktop Support and Remote Management

  • Problem: Managing and troubleshooting desktop systems can be costly and time-consuming in standard IT environments, as physical access to client machines is often required. With VDI implementations, all client operating systems, applications, and configuration settings are stored centrally within VMs within the data center. This reduces the need to visit client desktops or to have physical access to portable devices such as notebook computers.
  • Alternative Solution(s): While VDI can sometimes simplify support operations, IT departments still need to manage individual operating system images and application installations. Remote management tools can reduce the need for physical access to a computer for troubleshooting purposes. Some solutions use the same protocols (such as the Remote Desktop Protocol, RDP) that VDI or other approaches would use. Products and services also allow for troubleshooting computers over the Internet or behind remote office firewalls. That can help you support Mom, who might not be authorized to access a VM image in your corporate data center.

Resource Optimization / Hardware Consolidation

  • Problem: Desktop hardware is often under-utilized and hardware maintenance can be a significant cost and management burden. By combining many desktop computers on server hardware, VDI can be used to increase overall system resource utilization. Additionally, client computers have minimal system requirements, making them more cost effective to maintain over time.
  • Alternative Solution(s): VDI takes the “server consolidation” approach and applies it to desktop computers. Standard client computers are minimally utilized, from a resource standpoint. Desktop hardware, however, tends to be far cheaper than data center equipment. And, with VDI, client-side devices are still required, although they are “thin”. When data center costs related to power, cooling, storage, and redundancy are factored in, it can be hard to beat the cost of a mid-range desktop computer. Through the use of application virtualization and solutions such as Citrix and Microsoft Terminal Services, organizations can increase the effective lifecycle of desktop hardware. Windows Server 2008’s version of Terminal Services provides the ability to run single applications (rather than entire desktops) in a virtualized environment, thereby providing the benefits of centralized application management with scalability. There are potential compatibility issues, but they may be offset by the ability to support many more users per server.

Supporting Mobile Users and Outsourcing

  • Problem: Maintaining security for remote sites, traveling users, and non-company staff can be a significant challenge when allowing the use of standard desktop or notebook computers. VDI helps minimize data-related risks by physically storing information within the data center. Even if client devices are lost or stolen, information should remain secure and protected.
  • Alternative Solution(s): For some types of remote users, it might make sense to provide isolated desktop environments via VDI. However, these users would require network access to the VMs themselves. Multi-factor authentication (using, for example, biometric devices) and encrypted connections (such as VPNs) can help protect network access from standard desktop computers. Network Access Control (NAC) is a technology that can help prevent insecure machines from connecting to the network. And, carefully managed security permissions can prevent unauthorized access to resources. All of these best practices apply equally whether or not VDI is being used. Finally, there’s no substitute for implementing and following rigid security policies, regardless of the technical approach that is used.

Managing Performance

  • Problem: Desktop operating systems and applications can never seem to have enough resources to perform adequately, leading to shorter upgrade cycles. Using VDI to place desktop VMs on the server, systems administrators can monitor and allocate system resources based on the resource needs of client computers.
  • Alternative Solution(s): In theory, VDI implementations can take advantage of highly-scalable server-side hardware, and it’s usually easier to reconfigure CPU, memory, disk and networking settings for a VM than it is to perform a hardware upgrade on a desktop computer. The drawback with the VDI approach is that applications or services that consume too many resources could potentially hurt the performance of other systems on that same server. Load-balancing and portability can help alleviate this, but administrators can also use other techniques such as server-based computing to centrally host specific resource-intensive applications.

Workload Portability

  • Problem: Operating systems and applications are tied to the desktop hardware on which they’re running. This makes it difficult to move configurations during upgrades, reorganizations, or job reassignments. With VDI, the process of moving or copying a workload is simple since the entire system configuration is encapsulated in a hardware-independent virtual machine.
  • Alternative Solution(s): When entire desktop configurations need to be moved or copied, the VDI approach makes the process easy since it’s based on virtual machines. When using standard desktop computers, however, the same imaging and conversion tools can be used to move an OS along with its applications to another computer. As these hardware-independent images can be deployed to both physical and virtual machines, this also provides IT departments with a seamless way to use VDI and standard desktop computers in the same environment.

Summary

Ask not whether VDI is a solution to your desktop management problems, but rather whether it is the best solution to these challenges. VDI offers benefits related to quick deployments, workload portability, centralized management, and support for remote access. Few of these benefits are unique to VDI, though, so keep in mind the alternatives.

VDI Benefits without VDI: Managing Security

This article was first published on SearchServerVirtualization.TechTarget.com.

What do leaky faucets, fragmented file systems and failed hard disks all have in common? We want to fix them! As IT professionals, most of us pride ourselves on our problem-solving abilities. As soon as we hear about an issue, we want to find the solution. Every once in a while a technology offers new solutions to problems you may not have recognized. VDI raises and addresses some important issues that are related to IT management. But, is VDI the only solution to those problems?

Whether or not you agree that VDI technology will make inroads into replacing traditional desktop computers, all of the recent press on the technology helps highlight the typical pain that’s being seen in IT departments. From security to supportability to regulatory compliance, there’s clearly a need for improvements in IT management. For many environments, however, it’s possible to find solutions by using other approaches and practices.

For the record, I certainly don’t oppose the use of virtualization for desktop environments, and I think it most likely will find a useful role in many environments. However, in order to justify the costs and technology investments, it’s worth understanding other options. The point of this article is that VDI is not required in order to solve many IT-related security problems. Let’s look at some problems and alternatives.

Securing Desktop Data

  • Problem: Data stored on corporate desktop and notebook computers is vulnerable to theft or unauthorized access. By using VDI to physically store all of this data on virtual machine images in the data center, chances of data compromise are reduced. The reason for this is that sensitive data is never actually stored on a desktop or portable computer. If the system is lost or stolen, organizations don’t have to worry about losing information since it is not stored on the local hard disk.
  • Alternative Solution(s): Securing data is a common challenge in all IT environments, and many solutions are available. Sensitive information, in general, should be stored in protected network locations. File servers should adhere to security standards to prevent unauthorized access or data loss. In this scenario, the most important data is already secured within the data center. For protecting local copies of information, there are several hardware and software-based solutions that can be used to encrypt the contents of desktop and notebook hard disks. An example is Windows Vista’s BitLocker feature. Even with VDI, you would have the need to protect local copies of VMs for traveling users.

Data Protection

  • Problem: Backing up and restoring important data on client machines takes significant time and effort. When using VDI, all of the contents of the desktop and notebook computers are actually stored in the data center (usually on dedicated storage arrays or network-based storage devices). Since all of the data is stored centrally, systems administrators can easily make backups of entire computer configurations (including the operating system, installed applications, data, and configuration settings). They no longer have to rely on network-based backup agents that require the computer to be powered on and accessible in order for the data to be copied.
  • Alternative Solution(s): Hardware failures or accidental data modifications on client-side computers are potential problems, but there are many backup-related solutions. I already mentioned the importance of storing critical files on data center servers. By using automated restore tools, users can quickly be restored to service, even after a complete hardware failure. While VDI might seem to help in this area, when backing up entire VMs and virtual hard disks, you’re actually protecting a lot of unnecessary information. For example, each virtual hard disk that is backed up will include the entire operating system and all of the installed program files. These types of files could be much more easily restored using installation media or by reverting to an image-based backup. Users should understand the importance of storing information in network environments. File synchronization (such as the Windows Offline Files feature) can be used to automatically support traveling users.

Managing System Updates

  • Problem: Systems administrators spend a lot of time in keeping systems up-to-date with security updates and related patches. Part of the challenge is in dealing with remote machines that must be connected to the network and be properly configured in order to be maintained. With VDI, guest OS images are located in the data center and can be accessed by systems administrators whether or not the VM is being used.
  • Alternative Solution(s): The VDI approach still requires each user to have access to a single operating system. The OS itself must be secured, patched, and periodically maintained with other types of updates. Most vendors have tools for automatically deploying updates to large numbers of computers. These same methods can be used with or without VDI. In addition, features such as Network Access Control (NAC) can help ensure that only secure computers are able to access the network.

Summary

VDI approaches can help increase security in many different situations. But, VDI is not the only option for meeting these needs. IT automation tools and practices can help address problems related to data protection, security of client-side data, and ensuring that network systems remain free of malware and other infections. When deciding how and when to deploy VDI, keep in mind the alternative approaches.

IT Policies: Monitoring Physical and Virtual Environments

This article was first published on SearchServerVirtualization.TechTarget.com.

Here’s a quick question: How many virtual machines and physical servers are currently running in your production environment? If you can answer that, congratulations! Here’s a harder one: Identify the top 10 physical or virtual machines based on resource utilization. For most IT organizations, both of these questions can be difficult to answer. Fortunately, there are ways to implement monitoring in an automated way. In this tip, I’ll present some advice related to monitoring VMs and host computers in a production environment.

They’re all pretty much the same…

In many ways, the tasks associated with monitoring virtual machines are similar to those of working with physical ones. Organizations that have invested in centralized monitoring solutions can continue to rely upon them for gaining insight into how applications and services are performing. Examples include:

  • Establishing Baselines: A baseline helps you determine the standard level of resource utilization for a physical or virtual workload. Details to track typically include CPU, memory, disk, and network performance.
  • Root-Cause Analysis / Troubleshooting: When users complain of slow performance, it’s important for IT staff to be able to drill-down into the main cause of the problem. Performance statistics can often help identify which resources are constrained. Ideally, that will help identify the source of the problem and provide strong hints about resolving them.
  • Generating Alerts: In order to proactively manage performance, IT staff should be alerted whenever resource utilization exceeds certain thresholds. This information can help administrators reconfigure workloads before users are affected.

All of these tasks are fairly standard in many IT environments and are also applicable to working with virtual workloads.

… Except for their differences

Environments that use virtualization also have some unique challenges related to performance monitoring. Since it’s quick and easy to deploy new VMs, keeping track of them is a huge challenge. Some additional features and functions that can be helpful include the following (a brief sketch follows this list):

  • Mapping Guest-to-Host Relationships: While virtual machines have their own operating system, resource utilization is often tied to other activity on the same host server. Virtualization-aware monitoring tools should be able to uniquely identify VMs and relate them to the physical computers on which they are running.
  • Automated Responses / Dynamic Reconfiguration: In many cases, it’s possible to perform automated tasks in reaction to performance-related issues. For example, if CPU usage of a single VM is slowing down the entire host, VM priority settings can be adjusted. Or, when excessive paging is occurring, the VM’s memory allocation can be increased.
  • Broad Platform Support: There’s a good chance that you’re supporting many more OS versions and flavors for VMs than on physical machines. A good performance monitoring solution will support the majority of virtual operating environments.
  • Reporting / Capacity Planning: The primary purpose of performance monitoring is to facilitate better decision-making. Advanced reporting features can help track untapped resources and identify host servers that are overloaded. Tracking historical performance statistics can also be very helpful.
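
As a simple illustration of the guest-to-host mapping and automated-response ideas above, here is a Python sketch; the host records and the 90% CPU threshold are assumptions, and in practice a virtualization-aware monitoring tool would supply this data.

```python
# Illustrative guest-to-host map; the VM records and the threshold are assumed.
hosts = {
    "HOST01": [{"vm": "WEB01", "cpu_pct": 35}, {"vm": "SQL01", "cpu_pct": 88}],
    "HOST02": [{"vm": "MAIL01", "cpu_pct": 40}],
}

def overloaded_hosts(host_map, limit_pct=90):
    """Flag hosts whose combined guest CPU demand exceeds the limit."""
    flagged = {}
    for host, guests in host_map.items():
        total = sum(guest["cpu_pct"] for guest in guests)
        if total > limit_pct:
            flagged[host] = total   # candidate for rebalancing or reconfiguration
    return flagged

print(overloaded_hosts(hosts))  # {'HOST01': 123}
```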

Choosing the Right Tools for the Job

Most operating systems provide simple tools for troubleshooting performance issues on a single computer or a few computers. In environments that support more than a few VMs, automated performance monitoring and management tools are practically a must-have. Figure 1 highlights features that can be useful.


Figure 1: Features to look for in performance management tools

Summary

Overall, many of the standard IT best practices apply equally to monitoring physical and virtual machines. When searching for tools to get the job done, however, there are certain features that can dramatically reduce the time and effort required to gain insight into production performance.

IT Policies: Service Level Agreements (SLAs)

This article was first published on SearchServerVirtualization.TechTarget.com.

Have you heard the one about the IT department whose goals were not well-aligned with the needs of its users? OK, so that’s probably not a very good setup for a joke. One of the most common challenges faced by most IT organizations is defining their internal customers’ requirements and delivering services based on them. In this Tip, I’ll provide details on how you can define Service Level Agreements (SLAs) and how you can use them to better manage virtualization and to reduce costs.

Agreeing to Service Level Agreements

Challenges related to deploying virtualization include skepticism about the technology. This often leads to resistance and a lack of knowledge about the potential cost and management benefits of using virtual machines.

The purpose of a Service Level Agreement is to define, prioritize, and document the real needs of an organization. All too often, IT departments tend to work in a relative vacuum, focusing on technology. The area of virtualization is no exception – it’s often much easier to create and deploy VMs than it is to determine the strategic needs of the company. The problems range from poorly managed user expectations to large costs that might not directly address the most important challenges. The goal of containing costs is the basis for a lot of virtualization decisions, so it’s important to keep this in mind.

When developing SLAs, the most important aspect is for the process to be a team effort. Managers, IT staff, and end-users should all have input into the process. Typical steps in the process are shown in Figure 1.


Figure 1: Steps in the process of creating a new SLA

Defining SLA Goals and Metrics

SLA goals define the targeted levels of service that are to be expected from IT departments. Metrics are the specific statistics and data that must be measured to ensure that the levels are being met. Some examples might include:

  • Deployment: The time it takes to provision a new VM
  • Performance: Ensuring adequate application and service response times
  • Availability: Verifying virtual machine uptime
  • Change Management: Efficiently managing VM configuration updates

A well-defined SLA should include details about how the quality of the service is measured. For example, the goal for the uptime of a particular VM might be 99.9%. This can be measured using standard enterprise monitoring tools. Or, the deployment goal for a standard configuration of a virtual machine might be 4 business hours from the time of the request.
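
As a quick worked example of what a 99.9% target implies, here is a small Python sketch; the 30-day measurement window and the 25-minute outage figure are hypothetical.

```python
# Worked example: allowed downtime for an availability target, and measured
# availability from a (hypothetical) outage total over a 30-day window.
def allowed_downtime_minutes(sla_pct, days=30):
    """Downtime budget implied by an availability target over a window of days."""
    return (1 - sla_pct / 100) * days * 24 * 60

def measured_availability_pct(outage_minutes, days=30):
    """Availability actually delivered, given total outage minutes in the window."""
    return 100 * (1 - outage_minutes / (days * 24 * 60))

print(round(allowed_downtime_minutes(99.9), 1))   # 43.2 minutes per 30-day month
print(round(measured_availability_pct(25), 2))    # 99.94, which meets a 99.9% goal
```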

Reducing Costs with SLAs

If you haven’t yet created SLAs, you might be thinking about the time and effort that it will take to set up and track the associated metrics. While there is certainly a cost to be paid for creating SLAs, there can also be numerous benefits. One important aspect is that areas for improvement can easily be identified. For example, if a business finds that it could improve its operations by more quickly deploying VMs, an investment in automation could help. Table 1 provides that and some other hypothetical examples.


Table 1: Examples of potential cost savings based on automation

Summary

IT organizations that constantly find themselves trying to keep up with virtualization-related requirements can benefit by creating SLAs. When done properly, this will help technical initiatives (such as VM deployments and server consolidations) stay in line with users’ expectations. Overall, this can help the entire organization make better decisions about the importance of virtual infrastructures.

Virtualization Security: Pros and Cons

This article was first published on SearchServerVirtualization.TechTarget.com.

Historically, organizations have fallen into the trap of thinking about security implications after they deploy new technology. Virtualization offers so many compelling benefits that it’s often an easy sell into IT architectures. But what about the security implications of using virtualization? In this tip, I’ll present information about the security-related pros and cons of using virtualization technology. The goal is to give you an overview of the different types of concerns you should have in mind. In a future article, I’ll look at best practices for addressing these issues.

Security Benefits of Virtualization

There are numerous potential benefits of running workloads in a VM (vs. running them on physical machines). Figure 1 provides an overview of these benefits, along with some basic details.


Figure 1: Virtualization features and their associated security benefits.

Since virtual machines are created as independent and isolated environments, systems administrators have the ability to easily configure them in a variety of ways. For example, if a particular VM doesn’t require access to the Internet or to other production networks, the VM itself can be configured with limited connectivity to the rest of the environment. This helps reduce risks related to the infection of a single system affecting numerous production computers or VMs.

If a security violation (such as the installation of malware) does occur, a VM can be rolled back to a particular point-in-time. While this method may not work when troubleshooting file and application services, it is very useful for VMs that contain relatively static information (such as web server workloads).

Theoretically, a virtualization product adds a layer of abstraction between the virtual machine and the underlying physical hardware. This can help limit the amount of damage that might occur when, for example, malicious software attempts to modify data. Even if an entire virtual hard disk is corrupted, the physical hard disks on the host computer will remain intact. The same is true for other components such as network adapters.

Virtualization is often used for performing backups and disaster recovery. Due to the hardware-independence of virtualization solutions, the process of copying or moving workloads can be simplified. In the case of a detected security breach, a virtual machine on one host system can be shut down, and another “standby” VM can be booted on another system. This leaves plenty of time for troubleshooting, while quickly restoring production access to the systems.

Finally, with virtualization it’s easier to split workloads across multiple operating system boundaries. Due to cost, power, and physical space constraints, developers and systems administrators may be tempted to host multiple components of a complex application on the same computer. By spreading functions such as middleware, databases, and front-end web servers into separate virtual environments, IT departments can configure the best security settings for each component. For example, the firewall settings for the database server might allow direct communication with a middle-tier server and a connection to an internal backup network. The web server component, on the other hand, could allow access only via standard HTTP ports.
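
To make the split-tier example concrete, here is a hypothetical sketch of per-tier network rules expressed as data; the port numbers, tier names, and backup network are illustrative assumptions, not a recommended policy.

```python
# Hypothetical per-tier firewall policy for a split-out application.
firewall_rules = {
    "web-vm":      {"inbound": [80, 443], "allowed_from": ["internet"]},
    "middle-vm":   {"inbound": [8080],    "allowed_from": ["web-vm"]},
    "database-vm": {"inbound": [1433],    "allowed_from": ["middle-vm", "backup-net"]},
}

def is_allowed(rules, source, target, port):
    """Check whether a source may reach a target VM on a given port."""
    rule = rules.get(target, {})
    return port in rule.get("inbound", []) and source in rule.get("allowed_from", [])

print(is_allowed(firewall_rules, "internet", "database-vm", 1433))   # False
print(is_allowed(firewall_rules, "middle-vm", "database-vm", 1433))  # True
```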

This is by no means a complete list of the benefits of virtualization security, but it is a quick overview of the security potential of VMs.

Potential Security Issues

As with many technology solutions, there’s a potential downside to using virtual machines for security. Some of the risks are inherent in the architecture itself, while others are issues that can be mitigated through improved systems management. A common concern for adopters of virtual machine technology is the issue of placing several different workloads on a single physical computer. Hardware failures and related issues could potentially affect many different applications and users. In the area of security, it’s possible for malware to place a significant load on system resources. Instead of affecting just a single VM, these problems are likely to affect other virtualized workloads on the same computer.

Another major issue with virtualization is the tendency for environments to deploy many different configurations of systems. In the world of physical server deployments, IT departments often have a rigid process for reviewing systems prior to deployment. They ensure that only supported configurations are setup in production environments and that the systems meet the organization’s security standards. In the world of virtual machines, many otherwise-unsupported operating systems and applications can be deployed by just about any user in the environment. It’s often difficult enough for IT departments to know what they’re managing, let alone how to manage a complex and heterogeneous environment.

The security of a host computer becomes more important when different workloads are run on the system. If an unauthorized user gains access to a host OS, he or she may be able to copy entire virtual machines to another system. If sensitive data is contained in those VMs, it’s often just a matter of time before the data is compromised. Malicious users can also cause significant disruptions in service by changing network addresses, shutting down critical VMs, and performing host-level reconfigurations.

When considering security for each guest OS, it’s important to keep in mind that VMs are also vulnerable to attacks. If a VM has access to a production network, it will often have the same permissions as a physical server. Unfortunately, VMs don’t have the benefit of limited physical access, such as the controls that are used in a typical data center environment. Each new VM is a potential liability, and IT departments must ensure that security policies are followed and that systems remain up-to-date.

Summary

Much of this might cast a large shadow over the virtualization security picture. The first step in addressing security is to understand the potential problems with a particular technology. The next step is to find solutions. Rest assured, there are ways to mitigate these security risks. That’s the topic of my next article, “Best Practices for Improving VM Security.”