Archive for category Virtualization

Consortio Services TechCast

I was recently interviewed by Consortio Services as the first interviewee in their new podcast series.  The team of Eric Johnson, Eric Beehler, and Josh Jones asked me numerous questions related to virtualization.  Here’s a quick introduction to the topics:

This week our guest is Anil Desai, who we talk with about virtualization best practices. In the news, detecting wireless intruders, HP buys up more companies, Quicktime exploit, Exchange Server 2007 SP1, and how to keep your IT staff happy. Plus, “The Worst Tech Move of the Week”, “IT Pet Peeve”, and “The Tech Tip of the Week”.

The topics ranged from what’s important for IT people to learn and a comparison of the available technology approaches.  You can download or listen to the free webcast at CS TechCast 1: Going Virtual

Virtual Platform Management – Data Center Automation

A new chapter from my eBook titled The Definitive Guide to Virtual Platform Management is now available for free download (registration is required).  Chapter #9, "Data Center Automation", focuses on ways in which enterprise management tools can help make the seemingly insurmountable task of managing server sprawl and VM sprawl much easier.  Here’s a brief excerpt from the introduction:

A constant challenge in most IT environments is that of finding enough time and resources to finish all the tasks that need to be completed. IT departments find themselves constantly fighting fires and responding to a seemingly never-ending stream of change requests. Although virtualization technology can provide numerous advantages, there are also associated management-related challenges that must be addressed. When these tasks are performed manually, the added overhead can reduce cost savings and can result in negative effects on performance, availability, reliability, and security.

In previous chapters, I have covered a broad array of best practices related to virtualization management. Organizations have the ability to choose from a range of implementation methods, including physical servers, virtual machines, and clustered systems. The tasks have ranged from deployment and provisioning to monitoring virtual systems once they are in production. All of this raises questions related to the best method of actually implementing these best practices.

The focus of this chapter is on data center automation. Organizations that have deployed virtual machines throughout their environment can benefit from using enterprise software that has been designed to provide automated control. The goal is to implement technology that can provide for a seamless, self-managing, and adaptive infrastructure while minimizing manual effort. It’s a tall order, but certainly one that is achievable by using a combination of best practices and the right tools.

Stay tuned for the next and final chapter of the Guide!

Desktop Virtualization: The Next IT Fad?

In the past, I wrote a couple of articles related to Virtual Desktop Infrastructure (VDI) (for the articles, see the VDI/Desktop Management Category).  The main idea is that, by running desktops within VMs, you can solve a lot of IT management challenges.  I’m not sold on the idea, and it appears that I’m not alone.  Hannah Drake, Editor at SearchServerVirtualization.com, asks Client-side virtual desktop technology: Unnecessary?  The article quotes some interesting points from an IT consulting company.  I added my $.02 in the comments section.  Feel free to comment here, as well: Is VDI a fad, or is it a real solution for current IT problems?

Microsoft Infrastructure Planning and Design (IPD) Guides Available

I recently worked with Microsoft’s Solution Accelerator team to develop a guide to designing an infrastructure to support Microsoft’s virtualization solutions.  Unlike much of the other technical information that is available online, this series focuses on the design aspect of managing technology, rather than on implementation details.  From the web site:

Infrastructure Planning and Design guides share a common structure, including:

  • Definition of the technical decision flow through the planning process.
  • Listing of decisions to be made and the commonly available options and considerations.
  • Relating the decisions and options to the business in terms of cost, complexity, and other characteristics.
  • Framing decisions in terms of additional questions to the business to ensure a comprehensive alignment with the appropriate business landscape.

These guides complement product documentation by focusing on infrastructure design options.

Each guide leads the reader through critical infrastructure design decisions, in the appropriate order, evaluating the available options for each decision against its impact on critical characteristics of the infrastructure. The IPD Series highlights when service and infrastructure goals should be validated with the organization and provides additional questions that should be asked of service stakeholders and decision makers.

You can download the files from the Infrastructure Planning and Design page (registration is optional).  The content includes the following downloads:

  • IPD Series Introduction: A brief introduction to the series and its approach.
  • Select the Right Virtualization Solution: This guide includes an overview of Microsoft’s virtualization products and technologies.  The package includes a flowchart that can be helpful in deciding how to select from among Microsoft Virtual Server 2005, Microsoft Virtual PC, Microsoft Terminal Services, Microsoft SoftGrid, and the newly-announced Hyper-V (available with Windows Server 2008).
  • Windows Server Virtualization: This guide covers details on Windows Server Virtualization (WSv, now officially "Hyper-V") and Microsoft Virtual Server.  It includes a document and slides that cover the process of selecting which workloads to virtualize.  The guide then walks through the process of translating virtual machine requirements to host infrastructure requirements.
  • SoftGrid Application Virtualization: This guide focuses on SoftGrid – recently renamed to Microsoft Application Virtualization.  It covers best practices for designing an infrastructure for simplified application deployment and maintenance.

All downloads include files in Office 2003 and Office 2007 formats and are ready for use in your own presentations or proposals.  More guides will be available in the near future, and you should be able to access beta versions of upcoming guides at Microsoft Connect.  I hope you find the content to be useful!

Hyper-V for Windows Server 2008 Announced

It looks like the official, final name of the server technology previously known as Viridian and Windows Server Virtualization (WSv) has been announced.  See Microsoft Outlines Pricing, Packaging and Licensing for Windows Server 2008… for details.  Since few techies like to read press releases, I’ll give you the short version.  The name of the technology/feature that will be available in 64-bit editions of Windows Server 2008 is Hyper-V.  In addition, Microsoft is providing a Hyper-V Server version of the product for use by OEMs.  The technical requirements seem to remain the same.  I’ll soon be writing some articles about the architecture of Hyper-V and will post the details here.

Personally, I like the new name somewhat more than the WSv title.  Hyper-V has fewer syllables, which is always a plus.  It’s a departure from Microsoft’s usually long-winded product names that can rarely be completed in a single breath. 

On that note: In keeping with descriptive product names, SoftGrid has also been renamed.  It is now called Microsoft Application Virtualization 4.5.  The new features are listed in Brian Madden’s blog entry, Microsoft announces Application Virtualization 4.5.  While I have mixed feelings on long, descriptive product names, I really do think this will help clear up some confusion over virtualization approaches.  I’ll post more information as it becomes available.

Virtual Platform Management: Policies and Processes

Chapter #8 of my free eBook, The Definitive Guide to Virtual Platform Management, is now available for download.  This chapter talks about ways in which organizations can use policies and processes to better manage virtualization.  Included is information about creating and enforcing Service Level Agreements (SLAs), implementing charge-backs, and other best practices.  Check it out online (and don’t miss the first seven chapters)!

Microsoft’s Virtualization Options (Upcoming Webcast)

There seems to be a lot of confusion out there related to different methods of virtualization.  In short, it’s not all about running multiple operating systems on the same system at the same time.  You can also virtualize and isolate specific programs (for example, within a Java Virtual Machine).  There are also other approaches.  Microsoft refers to its Terminal Services feature as “presentation virtualization.”  Most of us are quite familiar with using the Remote Desktop Protocol (RDP) to remotely manage a computer or to remotely run applications.  But with Terminal Services, applications actually execute on the server.  What if you want them to run on the client (where CPU, memory, disk, and network resources are arguably far cheaper)?

Microsoft SoftGrid (formerly Softricity) is designed to do just that.  An upcoming webcast will help explain the approach of deploying applications on-demand: TechNet Webcast: Introducing SoftGrid to the World (Level 200)

Which to use, Microsoft SoftGrid or Terminal Services? Both of the fictional companies in our webcast, Contoso and Fabrikam, are considering application virtualization, and they have heard of both Terminal Services and SoftGrid. But which do they choose? In this session, we look at these solutions, provide details on how they differ, and explain when to use them. We also cover how to install, configure, and use SoftGrid.

Better yet, the technologies can successfully be used together.  Unfortunately, one of the drawbacks of SoftGrid is that it requires an Enterprise-level license for organizations that wish to deploy it.  There are hints that this will soon change to make SoftGrid a lot more accessible to the masses (I’d consider using it for my home office).

Of course, there’s also an option not to virtualize at all.  If you’re trying to consolidate, for example, Microsoft SQL Server machines, there’s probably a better way to consolidate your databases.  The bottom line is that there are a lot of different options for obtaining the benefits of virtualization.

IT Fights Back: Virtualization SLAs and Charge-Backs

My article, the first in a series entitled “Fighting The Dark Side of Virtualization,” is now available on the Virtual Strategy Magazine Web site.  The article, IT Fights Back: Virtualization SLAs and Charge-Backs, focuses on ways in which IT departments can help manage issues such as VM sprawl (the explosive proliferation of VMs) while containing costs.  As a quick teaser, here’s the opening marquee:

The adventure begins…

Managing Virtualization Storage for Datacenter Managers

This article was first published on SearchServerVirtualization.TechTarget.com.

Deploying virtualization into a production data center can provide an interesting mix of pros and cons. By consolidating workloads onto fewer servers, physical management is simplified. But what about managing the VMs? While storage solutions can provide much-needed flexibility, it’s still up to datacenter administrators to determine their needs and develop appropriate solutions. In this article, I’ll present storage-related considerations for datacenter administrators.

Estimating Storage Capacity Requirements

Virtual machines generally require a large amount of storage. The good news is that this can, in some cases, improve storage utilization. Since direct-attached storage is confined to a per-server basis (which often results in a lot of unused space), using centralized storage arrays can help. There’s also a countering effect, however: Since the expansion of virtual disk files is difficult to predict, you’ll need to leave some unallocated space for expansion. Storage solutions that provide for over-committing space (sometimes referred to as “soft-allocation”) and for dynamically resizing arrays can significantly simplify management.

To add up the storage requirements, you should consider the following (a rough worked example follows the list):

  • The sum of the sizes of all “live” virtual disk files
  • Expansion predictions for virtual disk files
  • State-related disk files such as those used for suspending virtual machines and maintaining point-in-time snapshots
  • Space required for backups of virtual machines
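
As a rough illustration of the arithmetic, here is a minimal Python sketch that totals these components for a few hypothetical VMs; the sizes, the number of backup copies, and the 20% expansion headroom are invented for illustration only.

# Rough storage-capacity estimate for a group of VMs (all figures hypothetical).
# Components mirror the list above: live virtual disks, predicted expansion,
# state files (saved state / snapshots), and backup copies.

vms = [
    # name, live virtual disk GB, snapshot/state GB, backup copies kept
    {"name": "web01", "disk_gb": 40,  "state_gb": 5,  "backup_copies": 2},
    {"name": "db01",  "disk_gb": 200, "state_gb": 20, "backup_copies": 2},
    {"name": "app01", "disk_gb": 80,  "state_gb": 10, "backup_copies": 1},
]

EXPANSION_FACTOR = 0.20  # assumed headroom for virtual disk growth

def required_gb(vm):
    live = vm["disk_gb"]
    expansion = live * EXPANSION_FACTOR
    state = vm["state_gb"]
    backups = live * vm["backup_copies"]
    return live + expansion + state + backups

total = sum(required_gb(vm) for vm in vms)
for vm in vms:
    print(f'{vm["name"]}: {required_gb(vm):.0f} GB')
print(f"Total estimated storage: {total:.0f} GB")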

All of this can be a tall order, but hopefully the overall configuration is no more complicated than that of managing multiple physical machines.

Placing Virtual Workloads

One of the best ways to reduce disk contention and improve overall performance is to profile virtual workloads to determine their requirements. Performance statistics help determine the number, size, and type of IO operations. Table 1 provides an example.

Table 1: Assigning workloads to storage arrays based on their performance requirements

In the provided example, the VMs are assigned to separate storage arrays to minimize contention. By combining VMs with “compatible” storage requirements on the same server, administrators can better distribute load and increase scalability.
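
As a minimal sketch of the placement idea (not a recommendation for any specific product), the following Python snippet greedily spreads hypothetical VMs across two storage arrays by remaining IOPS headroom; a fuller approach would also weigh the type of I/O (random vs. sequential, read vs. write) as described above.

# Toy placement sketch: assign VMs to storage arrays based on IOPS headroom.
# All VM profiles and array capacities are hypothetical illustrations.

arrays = {"Array-A": 5000, "Array-B": 3000}          # rated IOPS per array
assigned = {name: [] for name in arrays}
used = {name: 0 for name in arrays}

vm_profiles = [
    ("db01",   2500),   # random, write-heavy
    ("mail01", 1200),
    ("web01",   400),
    ("file01",  900),
]

for vm, iops in sorted(vm_profiles, key=lambda x: -x[1]):
    # Greedy choice: the array with the most remaining headroom.
    target = max(arrays, key=lambda a: arrays[a] - used[a])
    if arrays[target] - used[target] < iops:
        print(f"Warning: {vm} ({iops} IOPS) exceeds remaining headroom on {target}")
    assigned[target].append(vm)
    used[target] += iops

for name in arrays:
    print(f"{name}: {assigned[name]} ({used[name]}/{arrays[name]} IOPS)")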

Selecting Storage Methods

When planning to deploy new virtual machines, datacenter administrators have several different options. The first is to use local server storage. Fault-tolerant disk arrays that are directly-attached to a physical server can be easy to configure. For smaller virtualization deployments, this approach makes sense. However, when capacity and performance requirements grow, adding more physical disks to each server can lead to management problems. For example, arrays are typically managed independently, leading to wasted disk space and requiring administrative effort.

That’s where network-based storage comes in. By using centralized, network-based storage arrays, organizations can support many host servers using the same infrastructure. While support for technologies varies based on the virtualization platform, NAS, iSCSI, and SAN-based storage are the most common. NAS devices use file-level IO and are typically used as file servers. They can be used to store VM configuration and hard disk files. However, latency and competition for physical disk resources can be significant.

SAN and iSCSI storage solutions perform block-level IO operations, providing raw access to storage resources. Through the use of redundant connections and multi-pathing, they can provide the highest levels of performance, lowest latency, and simplified management.

In order to determine the most appropriate option, datacenter managers should consider workload requirements for each host server and its associated guest OS’s. Details include the number and types of applications that will be running, and their storage and performance requirements. The sum of this information can help determine whether local or network-based storage is most appropriate.

Monitoring Storage Resources

CPU and memory-related statistics are often monitored for all physical and virtual workloads. In addition to this information, disk-related performance should be measured. Statistics collected at the host server level will provide an aggregate view of disk activity and whether storage resources are meeting requirements. Guest-level monitoring can help administrators drill down into the details of which workloads are generating the most activity. While the specific statistics that can be collected will vary across operating systems, the types of information that should be monitored include the following (a simple threshold-checking sketch follows the list):

  • I/O Operations per Second (IOPS): This statistic refers to the number of disk-related transactions that are occurring at a given instant. IOPS are often used as the first guideline for determining overall storage requirements.
  • Storage IO Utilization: This statistic refers to the percentage of total IO bandwidth that is being consumed at a given point in time. High levels of utilization can indicate the need to upgrade or move VMs.
  • Paging operations: Memory-starved VMs can generate significant IO traffic due to paging to disk. Adding or reconfiguring memory settings can help improve performance.
  • Disk queue length: The number of IO operations that are pending. A consistently high number will indicate that storage resources are creating a performance bottleneck.
  • Storage Allocation: Ideally, administrators will be able to monitor the current amount of physical storage space that is actually in use for all virtual hard disks. The goal is to proactively rearrange or reconfigure VMs to avoid over-allocation.
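
Here is a minimal threshold-checking sketch for the statistics above; the counter values and thresholds are invented, and a real implementation would pull them from your hypervisor’s or guest OS’s monitoring interfaces.

# Minimal threshold-alerting sketch for the statistics listed above.
# Counter values and thresholds are hypothetical.

THRESHOLDS = {
    "iops": 4000,             # disk transactions per second
    "io_utilization_pct": 85,
    "pages_per_sec": 500,
    "disk_queue_length": 2,
    "storage_allocated_pct": 90,
}

samples = {
    "db01":  {"iops": 4500, "io_utilization_pct": 92, "pages_per_sec": 120,
              "disk_queue_length": 4, "storage_allocated_pct": 70},
    "web01": {"iops": 600,  "io_utilization_pct": 30, "pages_per_sec": 20,
              "disk_queue_length": 0, "storage_allocated_pct": 95},
}

for vm, counters in samples.items():
    for counter, value in counters.items():
        if value > THRESHOLDS[counter]:
            print(f"ALERT {vm}: {counter} = {value} (threshold {THRESHOLDS[counter]})")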

VM disk-related statistics will change over time. Therefore, the use of automated monitoring tools that can generate reports and alerts is an important component of any virtualization storage environment.

Summary

Managing storage capacity and performance should be high on the list of responsibilities for datacenter administrators. Virtual machines can easily be constrained by disk-related bottlenecks, causing slow response times or even downtime. By making smart VM placement decisions and monitoring storage resources, many of these potential bottlenecks can be overcome. Above all, it’s important for datacenter administrators to work together with storage managers to ensure that business and technical goals remain aligned over time.

Virtualization Considerations for Storage Managers

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s common for new technology to require changes in all areas of an organization’s overall infrastructure. Virtualization is no exception. While many administrators often focus on CPU and memory constraints, storage-related performance is also a very common bottleneck. In some ways, virtual machines can be managed like physical ones. After all, each VM runs its own operating system, applications, and services. But there are also numerous additional considerations that must be taken into account when designing a storage infrastructure. By understanding the unique needs of virtual machines, storage managers can build a reliable and scalable data center infrastructure to support their VMs.

Analyzing Disk Performance Requirements

For many types of applications, the primary consideration around which the storage infrastructure is designed is I/O operations per second (IOPS). IOPS refers to the number of read and write operations that are performed, but it does not always capture the whole picture. Additional considerations include the type of activity. For example, since virtual disks that are stored on network-based storage arrays must support guest OS disk activity, the average I/O request size tends to be small. Additionally, I/O requests are frequent and often random in nature. Paging can also create a lot of traffic on memory-constrained host servers. There are also other workload-specific considerations. For example, it’s also good to measure the percentage of read vs. write operations when designing the infrastructure.

Now, multiply all of these statistics by the number of VMs that are being supported on a single storage device, and you are faced with the very real potential for large traffic jams. The solution? Optimize the storage solution for supporting many small, non-sequential IO operations. And, most importantly, distribute VMs based on their levels and types of disk utilization. Performance monitoring can help generate the information you need.
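
To make the “multiply by the number of VMs” point concrete, here is a small Python sketch (with invented per-VM figures) that combines IOPS, read/write mix, and average request size across the VMs sharing one storage device:

# Aggregate the I/O profile of several VMs sharing one storage device.
# Per-VM figures are made up for illustration.

vms = [
    # name, average IOPS, read fraction, average request size in KB
    ("db01",  1800, 0.60, 8),
    ("web01",  300, 0.90, 4),
    ("app01",  700, 0.70, 16),
]

total_iops = sum(iops for _, iops, _, _ in vms)
read_fraction = sum(iops * r for _, iops, r, _ in vms) / total_iops
avg_request_kb = sum(iops * kb for _, iops, _, kb in vms) / total_iops
throughput_mb_s = total_iops * avg_request_kb / 1024

print(f"Combined load: {total_iops} IOPS, "
      f"{read_fraction:.0%} reads, "
      f"~{avg_request_kb:.1f} KB average request, "
      f"~{throughput_mb_s:.1f} MB/s")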

Considering Network-Based Storage Approaches

Many environments already use a combination of NAS, SAN, and iSCSI-based storage to support their physical servers. These methods can still be used for hosting virtual machines, as most virtualization platforms provide support for them. For example, SAN- or iSCSI-based volumes that are attached to a physical host server can be used to store virtual machine configuration files, virtual hard disks, and related data. It is important to note that, by default, the storage is attached to the host and not to the guest VM. Storage managers should keep track of which VMs reside on which physical volumes for backup and management purposes.

In addition to providing storage at the host-level, guest operating systems (depending on their capabilities) can take advantage of NAS and iSCSI-based storage. With this approach, VMs can directly connect to network-based storage. A potential drawback, however, is that guest operating systems can be very sensitive to latency, and even relatively small delays can lead to guest OS crashes or file system corruption.

Evaluating Useful Storage Features

As organizations place multiple mission-critical workloads on the same servers through the use of virtualization, they can use various storage features to improve reliability, availability and performance. Implementing RAID-based striping across arrays of many disks can help significantly improve performance. The array’s block size should be matched to the most common size of I/O operations. However, more disks means more chances for failures. So, features such as multiple parity drives and hot standby drives are a must.
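
As a rough sketch of matching the array’s block size to the most common I/O size, the following finds the modal request size from a hypothetical sample and rounds it up to a supported stripe size:

# Pick a stripe/block size that matches the most common I/O request size.
# The sampled request sizes (in KB) are hypothetical.
from collections import Counter

sampled_request_kb = [4, 8, 8, 8, 16, 8, 4, 8, 64, 8, 8, 16]
supported_stripe_kb = [4, 8, 16, 32, 64, 128]

most_common_kb, _ = Counter(sampled_request_kb).most_common(1)[0]
# Choose the smallest supported stripe size that covers the common request.
stripe_kb = next(s for s in supported_stripe_kb if s >= most_common_kb)

print(f"Most common request: {most_common_kb} KB -> suggested stripe size: {stripe_kb} KB")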

Fault tolerance can be implemented through the use of multi-pathing for storage connections. For NAS and iSCSI solutions, storage managers should look into having multiple physical network connections and implementing fail-over and load-balancing features by using network adapter teaming. Finally, it’s a good idea for host servers to have dedicated network connections to their storage arrays. While you can often get by with shared connections in low-utilization scenarios, the load placed by virtual machines can be significant and can increase latency.

Planning for Backups

Storage administrators will need to back up many of their virtual machines. Apart from allocating the necessary storage space, they must develop a method for dealing with exclusively-locked virtual disk files. There are two main approaches:

  • Guest-Level Backups: In this approach, VMs are treated like physical machines. Generally, you would install backup agents within VMs, define backup sources and destinations, and then let them go to work. The benefit of this approach is that only important data is backed up (thereby reducing required storage space). However, your backup solution must be able to support all potential guest OS’s and versions. And, the complete recovery process can involve many steps, including reinstalling and reconfiguring the guest OS.
  • Host-Level Backups: Virtual machines are conveniently packaged into a few important files. Generally, this includes the VM configuration file and virtual disks. You can simply copy these files to another location. The most compatible approach involves stopping or pausing the VM, copying the necessary files, and then restarting the VM. The issue, however, is that this can require downtime. Numerous first- and third-party solutions are able to back up VMs while they’re “hot”, thereby eliminating service interruptions. Regardless of the method used, replacing a failed or lost VM is easy – simply restore the necessary files to the same or another host server and you should be ready to go. The biggest drawback of host-level backups is in the area of storage requirements. You’re going to be allocating a ton of space for the guest OS’s, applications, and data you’ll be storing.

Storage solution options such as the ability to perform snapshot-based backups can be useful. However, storage administrators should thoroughly test the solution and should look for explicitly-stated virtualization support from their vendors. Remember, backups must be consistent to a point in time, and non-virtualization-aware solutions might neglect to flush information stored in the guest OS’s cache.

Summary

By understanding and planning for the storage-related needs of virtual machines, storage administrators can help their virtual environments scale and keep pace with demand. While some of the requirements are somewhat new, many involve utilizing the same storage best practices that are used for physical machines. Overall, it’s important to measure performance statistics and to consider storage space and performance when designing a storage infrastructure for VMs.

Advanced Backup Options for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s a pretty big challenge to support dozens or hundreds of separate virtual machines. Add in the requirement for backups – something that generally goes without saying – and you have to figure out how to protect important information. Yes, that usually means at least two copies of each of these storage hogs. I understand that you’re not made of storage (unless, of course, you’re the disk array that’s reading this article on the web server). So what should you do? In this tip, I’ll outline several approaches to performing backups for VMs, focusing on the strengths and limitations of each.

Determining Backup Requirements

Let’s start by considering the requirements for performing backups. The list of goals is pretty simple, in theory:

  • Minimize data loss
  • Minimize recovery time
  • Simplify implementation and administration
  • Minimize costs and resource usage

Unfortunately, some of these objectives are often at odds with each other. Since implementing any solution takes time and effort, start by characterizing the requirements for each of your virtual machines and the applications and services they support. Be sure to write in pencil, as it’s likely that you’ll be revising these requirements; a simple sketch of capturing them follows. After that, let’s take a look at the different options for meeting these goals.
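
Here is a minimal sketch of capturing those per-VM requirements; the VM names and targets are assumptions for illustration, not recommendations.

# Sketch of characterizing backup requirements per VM before choosing a method.
# VM names and targets are illustrative assumptions.

requirements = [
    # name, max acceptable data loss (hours), max acceptable recovery time (hours)
    {"vm": "db01",   "max_data_loss_h": 1,  "max_recovery_h": 2},
    {"vm": "web01",  "max_data_loss_h": 24, "max_recovery_h": 8},
    {"vm": "test01", "max_data_loss_h": 72, "max_recovery_h": 48},
]

# Rank VMs by how strict their objectives are, so the most demanding ones
# get attention (and budget) first.
for req in sorted(requirements, key=lambda r: (r["max_data_loss_h"], r["max_recovery_h"])):
    print(f'{req["vm"]}: lose at most {req["max_data_loss_h"]}h of data, '
          f'restore within {req["max_recovery_h"]}h')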

Application-Level Backups

The first option to consider for performing backups is that of using application features to do the job. There’s usually nothing virtualization-specific about this approach. Examples include:

  • Relational Database Servers: Databases were designed to be highly-available and it should come as no surprise that there are many ways of using built-in backup methods. In addition to standard backup and restore operations, you can use replication, log-shipping, clustering, and other methods to ensure that data remains protected.
  • Messaging Servers: Communications platforms such as Microsoft Exchange Server provide methods for keeping multiple copies of the data store in sync. Apart from improving performance (by placing data closer to those who need it), this can provide adequate backup functionality.
  • Web Servers: The important content for a web server can be stored in a shared location or can be copied to each node in a web server farm. When a web server fails, just restore the important data to a standby VM, and you’re ready to go. Better yet, use shared session state or stateless application features and a network load-balancer to increase availability and performance.

All of these methods allow you to protect against data loss and downtime by storing multiple copies of important information.

Guest-Level Backups

What’s so special about VMs, anyway? I mean, why not just treat them like the physical machines that they think they are? That’s exactly the approach with guest-level backups. The most common method with this approach is to install backup agents within the guest OS and to specify which files should be backed up and their destinations. As with physical servers, administrators can decide what really needs to be backed up – generally just data, applications, and configuration files. That saves precious disk space and can reduce backup times.

There are, however, drawbacks to this backup approach. First, your enterprise backup solution must support your guest OS’s (try finding an agent for OS/2!). Assuming the guest OS is supported, the backup and recovery process is often different for each OS. This means more work on the restore side of things. Finally, the restore process can take significant time, since a base OS must be installed and the associated components restored.

Examples of popular enterprise storage and backup solutions are those from Symantec, EMC, Microsoft and many other vendors.

Host-Level Backups

Host-level backups take advantage of the fact that virtual machines are encapsulated in one or more virtual disk files, along with associated configuration files. The backup process consists of making a copy of the necessary files from the host OS’s file system. Host-level backups provide a consistent method for copying VMs since you don’t have to worry about differences in guest operating systems. When it comes time to restore a VM (and you know it’s going to happen!), all that’s usually needed is to reattach the VM to a working host server.

However, the drawback is that you’re likely to need a lot of disk space. Since the entire VM (including the operating system, applications, and other data) is included in the backup set, you’ll have to allocate the necessary storage resources. And, you’ll need adequate bandwidth to get the backups to their destination. Since virtual disk files are exclusively locked while a VM is running, you’ll either need to use a “hot backup” solution, or you’ll have to pause or stop the VM to perform a backup. The latter option results in (gulp!) scheduled downtime.
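
As a minimal sketch of the pause-copy-restart approach just described: the pause_vm and resume_vm calls below are placeholders (the real commands depend on your virtualization platform), and the paths are hypothetical.

# Sketch of a host-level backup: pause the VM, copy its files, resume it.
# pause_vm()/resume_vm() are placeholders for whatever your platform provides
# (they are not real API calls); paths are hypothetical.
import shutil
from pathlib import Path

def pause_vm(vm_name: str) -> None:
    print(f"[placeholder] pausing {vm_name} via the hypervisor's management interface")

def resume_vm(vm_name: str) -> None:
    print(f"[placeholder] resuming {vm_name}")

def backup_vm(vm_name: str, vm_dir: Path, backup_root: Path) -> None:
    """Copy the VM's configuration and virtual disk files while it is paused."""
    destination = backup_root / vm_name
    pause_vm(vm_name)
    try:
        # Copies everything in the VM's folder: config file(s) plus virtual disks.
        shutil.copytree(vm_dir, destination, dirs_exist_ok=True)
    finally:
        # Resume even if the copy fails, to keep downtime to a minimum.
        resume_vm(vm_name)
    print(f"Backed up {vm_name} to {destination}")

# Hypothetical usage:
# backup_vm("web01", Path("/vms/web01"), Path("/backups"))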

Solutions and technologies include:

  • VMware: VMotion; High Availability; Consolidated Backup; DRS
  • Microsoft Volume Shadow Copy Service (VSS)

File System Backups

File system backups are based on features available in storage arrays and specialized software products. While they’re not virtualization-specific, they can help simplify the process of creating and maintaining VM backups. Snapshot features can allow you to make a duplicate of a running VM, but you should make sure that your virtualization platform is specifically supported. File system replication can use block- or bit-level features to keep a primary and backup copy of virtual hard disk files in sync.

Since changes are transferred efficiently, less bandwidth is required. And, the latency between when modifications are committed on the primary VM and the backup VM can be minimized (or even eliminated). That makes the storage-based approach useful for maintaining disaster recovery sites. While third-party products are required, file system backups can be easy to set up and maintain. But, they’re not always ideal for write-intensive applications and workloads.

Potential solutions include products from Double-Take Software and from Neverfail. Also, if you’re considering the purchase of a storage solution, ask your vendor about replication and snapshot capabilities, and their compatibility with virtualization.

Back[up] to the Future

Most organizations will likely choose different backup approaches for different applications. For example, application-level backups are appropriate for those systems that support them. File system replication is important for maintaining hot or warm standby sites and services. Guest- and host-level backups balance ease of backup/restore operations vs. the amount of usable disk space. Overall, you should compile the data loss, downtime and cost constraints, and then select the most appropriate method for each type of VM. While there’s usually no single answer that is likely to meet all of your needs, there are some pretty good options out there!

Evaluating Network-Based Storage Options

This article was first published on SearchServerVirtualization.TechTarget.com.

Imagine living in a crowded apartment with a bunch of people that think they own the place. Operating systems and applications can be quite inconsiderate at times. For example, when they’re running on physical machines, these pieces of software are designed to monopolize hardware resources. Now, add virtualization to the picture, and you get a lot of selfish people competing for the same resources. In the middle is the virtualization layer – acting as a sort of landlord or superintendent – trying to keep everyone happy (while still generating a profit). Such is the case with disk I/O on virtualization host servers. In this Tip, I’ll discuss some options for addressing this common bottleneck.

Understanding Virtualization I/O Requirements

Perhaps the most important thing to keep in mind is that not all disk I/O is the same. When designing storage for virtualization host servers, you need to get an idea of the actual disk access characteristics you will need to support. Considerations include:

  • Ratio of read vs. write operations
  • Frequency of sequential vs. random reads and writes
  • Average I/O transaction size
  • Disk utilization over time
  • Latency constraints
  • Storage space requirements (including space for backups and maintenance operations)

Collecting this information on a physical server can be fairly simple. For example, on the Windows platform, you can collect data using Performance Monitor and store it to a binary file or database for later analysis. When working with VMs, you’ll need to measure and combine I/O requirements to define your disk performance goals. The focus of this tip is on choosing methods for storing virtual hard disk files, based on cost, administration and scalability requirements.
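
Once the counter data has been exported, the characteristics listed above can be derived with a few lines of analysis; the sample values below are invented for illustration:

# Derive basic disk-access characteristics from exported counter samples.
# The sample list is invented; real data would come from Performance Monitor
# (or a similar tool) exported to a file or database.

samples = [
    # reads/sec, writes/sec, bytes per transfer
    (120, 40, 8192),
    (200, 80, 4096),
    (90,  60, 16384),
    (150, 50, 8192),
]

total_reads = sum(r for r, _, _ in samples)
total_writes = sum(w for _, w, _ in samples)
read_write_ratio = total_reads / max(total_writes, 1)
avg_transfer_kb = sum(b for _, _, b in samples) / len(samples) / 1024
peak_iops = max(r + w for r, w, _ in samples)

print(f"Read:write ratio ~ {read_write_ratio:.1f}:1")
print(f"Average transfer size ~ {avg_transfer_kb:.1f} KB")
print(f"Peak observed IOPS ~ {peak_iops}")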

Local / Direct-Attached Storage

The default storage option in most situations is local storage. The most common connection methods include PATA, SATA, SCSI, and SAS. Each type of connection comes with associated performance and cost considerations. RAID-based configurations can provide fault tolerance and can be used to improve performance.

Pros:

  • Generally cheaper than other storage options
  • Low latency, high bandwidth connections that are reserved for a single physical server

Cons:

  • Potential waste of storage space (since disk space is not shared across computers)
  • Limited total storage space and scalability due to physical disk capacity constraints (especially when implementing RAID)
  • Difficult to manage, as storage is decentralized

Storage Area Networks (SANs) / Fibre Channel

SANs are based on Fibre Channel connections, rather than copper-based Ethernet. SAN-based protocols are designed to provide high throughput and low latency, but require the implementation of an optical-based network infrastructure. Generally, storage arrays provide raw block-level connections to carved-out portions of disk space.

Pros:

  • Can provide high performance connections
  • Improved compatibility – appears as local storage to the host server
  • Centralizes storage management

Cons:

  • Expensive to implement – requires Fibre Channel-capable host bus adapters, switches, and cabling
  • Expensive to administer – requires expertise to manage a second “network” environment

Network-Based Storage

Network-based storage devices are designed to provide disk resources over a standard (Ethernet) network connection. They most often support protocols such as Server Message Block (SMB) and Network File System (NFS), both of which are designed for file-level disk access. The iSCSI protocol provides the ability to perform raw (block-level) disk access over a standard network. iSCSI-attached volumes appear to the host server as if they were local storage.

Pros:

  • Lower implementation and management cost (vs. SANs) due to utilization of copper-based (Ethernet) connections
  • Storage can be accessed at the host- or guest-level, based on specific needs
  • Higher scalability (arrays can contain hundreds of disks) and throughput (dedicated, redundant I/O controllers)
  • Simplified administration (vs. direct-attached storage), since disks are centralized

Cons:

  • Applications and virtualization platforms must support either file-based access or iSCSI

Storage Caveats: Compatibility vs. Capacity vs. Cost

In many real world implementations of virtualization, an important bottleneck is storage performance. Organizations can use well-defined methods of increasing CPU and memory performance, but what about the hard disks? Direct-attached, network-based, and SAN-based storage can provide several viable options. Once you’ve outgrown local storage (from a capacity, performance, or administration standpoint), you should consider implementing iSCSI or file-based network storage servers. The primary requirement, of course, is that your virtualization layer must support the hardware and software you choose. SANs are a great option for organizations that have already made the investment, but some studies show that iSCSI devices can provide similar levels of performance at a fraction of the cost.

The most important thing to remember is to thoroughly test your solution before deploying it into production. Operating systems can be very sensitive to disk-related latency, and disk contention can cause unforeseen traffic patterns. And, once the systems are deployed, you should be able to monitor and manage throughput, latency, and other storage-related parameters.

Overall, providing storage for virtual environments can be a tricky technical task. The right solution, however, can result in happy landlords and tenants, whereas the wrong solution results in one seriously overcrowded apartment.

Desktop Virtualization: Evaluating Approaches and Solutions

This article was first published on SearchServerVirtualization.TechTarget.com.

Visualize the unglamorous task of crawling behind a dusty desktop computer to check for an unplugged cable. This is in response to a report that a user’s computer is “broken”. You have only the soothing sounds of an employee complaining to a friend about how IT is taking forever to solve the problem. Finish the job quickly, as you’ll soon be off to troubleshooting an application compatibility problem on an executive’s notebook computer. Assuming you haven’t left the IT industry altogether after reading this paragraph, I think you’ll agree that there are many compelling reasons for addressing desktop and application management issues.

In the first Tip in this series on desktop virtualization, I defined the goals and problems that we were trying to solve. I provided an overview of the various approaches (and noted that there’s often some disagreement over specific terminology). Here, I’ll cover the pros and cons of specific approaches, along with applications you might want to consider.

Presentation Virtualization Solutions

In presentation virtualization, user input and application output are managed using a network connection. Applications run on the server, and screen updates are sent to a thin client or desktop computer. Some solutions can virtualize individual applications, can work securely over Internet connections, and can be integrated with a variety of network-level access methods.

  • Benefits: Scalability is a big one: up to hundreds of simultaneous application sessions can be created on a single server. Applications management can be simplified since deployment to desktops is no longer a requirement. Access to applications can be managed centrally, and data may be stored securely on back-end systems.
  • Drawbacks: Applications must be compatible with the virtualization solution. While utilities are available for assisting in this area, they’re far from foolproof. Additionally, all users will be using the same OS version on the server side. Reliability is also a concern, especially when running business-critical client applications, due to the number of sessions that must be maintained. When running on slow connections, slow application responses can hurt the end-user experience.

Products and Solutions:

Application and OS Virtualization Solutions

Realizing that the primary purpose of desktop computers is to allow users to run applications, some vendors have focused on using application deployment and management solutions. The goal here is to allow many different applications to coexist on a single operating system that runs on users’ hardware.

  • Benefits: Users can run their operating systems and applications locally, leading to better performance and support for disconnected scenarios. IT departments can avoid application compatibility and deployment issues and can more easily keep track of licensing. Overall scalability is often far higher than that of virtualizing entire desktop operating systems.
  • Drawbacks: Applications may need to be modified (or at least tested) when running with these solutions. The base OS is shared among all applications and application environments, so all applications must run on the same basic platform.

Products and Solutions:

VM-Based Virtualization Solutions

There’s no reason that the benefits of server virtualization can’t be extended to desktop machines. VM-based virtualization involves the creation of VMs (either on-demand or permanent) for users on data center hardware. Users access their individual VMs using a thin client device or a remote desktop solution. On the server side, a “connection broker” layer is able to ensure that the right VMs are available and that users connect to their own systems.

  • Benefits: All OS’s and user data are stored within the data center (presumably on fault-tolerant, high performance hardware). This enables centralized management and increases average utilization on all of the systems that an IT department supports. Security risks are decreased, as are costs related to managing client-side computers.
  • Drawbacks: Entire operating systems are running for each user. This can limit scalability and increase costs related to storage. Additionally, users require a network connection in order to access their computers. Finally, server-side hardware resources can be far more costly than their desktop counterparts.

Products and Solutions:

Summary

It is important to note that these solutions are not exclusive of each other. For example, you could choose to virtualize a desktop OS and then use application virtualization products to deploy and manage applications. Realistically, most organizations will find all of these options to be suitable for simplifying some aspect of overall desktop operations. This area is evolving rapidly (in terms of both real technology and hype), so be sure to thoroughly research options before deploying them. Overall, knowledge is power, so keep these options in mind the next time you spend 30 minutes repairing a mouse-related problem!

Desktop Virtualization: Goals and Options

This article was first published on SearchServerVirtualization.TechTarget.com.

Quick: Name a task that’s less enjoyable than managing client operating systems and applications! I have a feeling that if you’re a seasoned IT pro, you had to think for a few seconds (and, I’ll bet that many of you came up with some very creative responses). Clearly, the challenges of keeping end-users’ systems up-to-date can be a thankless and never-ending ordeal. Vendors have heard your cries, and various solutions are available. At the risk of sounding like a flight attendant, I do understand that you have a choice of virtualization approaches. In this series of Tips, I’ll describe details related to the pros and cons of desktop application and OS virtualization. Let’s start by defining the problem.

Desktop Management Challenges

There are many reasons that desktop computers can be more painful to manage than their server-side counterparts. Some important issues include:

  • Analyzing Administration: Desktop and notebook computers are often located in the most remote areas of your organization (or outside of it altogether). Performing systems administration tasks can sometimes require physical access to these machines. And, even with remote management tools, the client-side machine has to be online and connected to the network. The result is significant time and effort requirements for keeping systems optimally configured.
  • Mitigating Mobile Mayhem: Traveling and remote users can be (quite literally) a moving target: It seems that as soon as you’ve deployed a system for their use, changes are required. While some users can’t avoid working offline, there’s a subset of the user population that might need to access their OS and applications from multiple locations. Managing multiple pieces of hardware or shared desktop machines can be time-consuming and tedious.
  • Dealing with Distributed Data: Security and regulatory compliance requirements necessitate the management of data. It’s far easier to secure information and prevent data loss or theft when everything’s stored in the data center. While stolen laptops can be costly, replacing them is far cheaper than dealing with stolen data.
  • Application Anarchy: Deploying and managing desktop applications can be a struggle. While deployment tools can simplify the process, issues like managing application compatibility problems can lead to a large number of support desk calls. Other issues include tracking license usage, ensuring that systems remain patched, and providing the right applications on the right computer at the right time.
  • Bumbling Backups: Ensuring that data remains protected on desktop and notebook computers can be problematic. Even with the use of backup agents, there’s room for error. And, getting users to consistently save their important files to network shares can seem futile.
  • Hardware Headaches: Managing desktop and notebook hardware can be time-consuming. Add in the costs of technology refreshes and verifying hardware system requirements, and the issue can quickly float to the top of an IT department’s list of costs.

From a technical standpoint, the issue is that applications are tightly tied to their operating systems. And the operating systems, in turn, are tightly tied to hardware. Solving these problems can help alleviate some of the pain.

Choosing a Virtualization Approach

There are several different approaches to addressing desktop-related challenges. One caveat is that the terminology can be inconsistent. I’ve taken a shot at categorizing the different approaches, but vendors’ descriptions do differ. Here’s a breakdown:

  • Presentation Virtualization: Some users require access to only one or a few applications (think about call center and point-of-sale users). The main idea behind presentation virtualization is that all applications are installed and executed on a specialized server, which then redirects video, keyboard, and mouse signals to and from a small client application or a thin client device. Since applications are installed centrally, deployment and management is less of an issue.
  • OS and Application Virtualization: For some portion of the user population, such as traveling employees or “power-users”, there’s a real need to run an operating system directly on individual computers. In these scenarios, it’s still desirable to simplify the deployment and management of applications. Application virtualization solutions provide a way to either compartmentalize or stream programs to the computers that need them. The process is quick, safe, and can happen with little IT involvement.
  • VM-Based Virtualization: Also known as Virtual Desktop Infrastructure (VDI), among other names, the idea here is to allow users to run their own desktop OS’s – except that they are physically stored in the data center. Typically, the operating system and applications are run within a dedicated virtual machine which is assigned to a particular user. Employees use either a thin client computer or a remote desktop connection to access their environments.

In addition to these options, there’s an implicit fourth choice: “None of the above.” As I described in my Tips, “VDI Benefits without VDI”, you can reduce problems to some extent by utilizing IT best practices. You can also use a combination of these approaches (for example, VM-based virtualization with application virtualization) to meet different needs in the same environment.

Looking for Solutions

In this Tip, I presented some of the problems that desktop virtualization attempts to address. It’s important to understand your pain points before you start looking for a remedy. Then, I described three high-level approaches for solving common problems. In the next part of this series, I’ll present information about the pros and cons of each approach, along with specific products to consider.

VDI Benefits without VDI: Desktop Management

This article was first published on SearchServerVirtualization.TechTarget.com.

Quick: Think of the five systems administration tasks you most enjoy doing! If you’re like most techies, desktop management probably didn’t make the list. It’s probably right up there with washing the car or mowing the lawn (a whole different type of administration challenge). Caring for and feeding client-side computers can be a painful and never-ending process. Therefore, it’s no surprise that Virtual Desktop Infrastructure (VDI) technology is capturing the eyes and ears of IT staff.

But does VDI provide a unique solution? Or, can you get the same benefits through other practices and approaches? (If you’ve read the title of this Tip, there’s a good chance you can guess where I’m going with this.) Over the years, a variety of solutions for managing desktop and notebook computers have become commonplace. In this article, I’ll outline some problems and solutions. The goal is not to discredit VDI, but to look at options for achieving the same goals.

Deployment and Provisioning

  • Problem: Rolling out new desktop computers can be time-consuming and labor-intensive. Using VDI, provisioning is much faster since standard base images can be quickly deployed within the data center. Users can then access the images from any computer or thin client.
  • Alternative Solution(s): Automated operating system deployment tools are available from OS vendors and from third-parties. Some use an image-based approach in which organizations can create libraries of supported configurations and then deploy them to physical or virtual machines. When combined with network boot features, the process can be completely automated. Additionally, there are server-based options such as Microsoft SoftGrid for automatically installing applications as they are requested.

Desktop Support and Remote Management

  • Problem: Managing and troubleshooting desktop systems can be costly and time-consuming in standard IT environments, as physical access to client machines is often required. With VDI implementations, all client operating systems, applications, and configuration settings are stored centrally within VMs within the data center. This reduces the need to visit client desktops or to have physical access to portable devices such as notebook computers.
  • Alternative Solution(s): While VDI can sometimes simplify support operations, IT departments still need to manage individual operating system images and application installations. Remote management tools can reduce the need for physical access to a computer for troubleshooting purposes. Some solutions use the same protocols (such as the Remote Desktop Protocol, RDP) that VDI or other approaches would use. Products and services also allow for troubleshooting computers over the Internet or behind remote office firewalls. That can help you support Mom, who might not be authorized to access a VM image in your corporate data center.

Resource Optimization / Hardware Consolidation

  • Problem: Desktop hardware is often under-utilized and hardware maintenance can be a significant cost and management burden. By combining many desktop computers on server hardware, VDI can be used to increase overall system resource utilization. Additionally, client computers have minimal system requirements, making them more cost effective to maintain over time.
  • Alternative Solution(s): VDI takes the “server consolidation” approach and applies it to desktop computers. Standard client computers are minimally utilized, from a resource standpoint. Desktop hardware, however, tends to be far cheaper than data center equipment. And, with VDI, client-side devices are still required, although they are “thin”. When data center costs related to power, cooling, storage, and redundancy are factored in, it can be hard to beat the cost of a mid-range desktop computer. Through the use of application virtualization and solutions such as Citrix and Microsoft Terminal Services, organizations can increase the effective lifecycle of desktop hardware. Windows Server 2008’s version of Terminal Services provides the ability to run single applications (rather than entire desktops) in a virtualized environment, thereby providing the benefits of centralized application management with scalability. There are potential compatibility issues, but they may be offset by the ability to support many more users per server.

Supporting Mobile Users and Outsourcing

  • Problem: Maintaining security for remote sites, traveling users, and non-company staff can be a significant challenge when allowing the use of standard desktop or notebook computers. VDI helps minimize data-related risks by physically storing information within the data center. Even if client devices are lost or stolen, information should remain secure and protected.
  • Alternative Solution(s): For some types of remote users, it might make sense to provide isolated desktop environments via VDI. However, these users would require network access to the VMs themselves. Multi-factor authentication (using, for example, biometric devices) and encrypted connections (such as VPNs) can help protect network access from standard desktop computers. Network Access Control (NAC) is a technology that can help prevent insecure machines from connecting to the network. And, carefully managed security permissions can prevent unauthorized access to resources. All of these best practices apply equally whether or not VDI is being used. Finally, there’s no substitute for implementing and following rigid security policies, regardless of the technical approach that is used.

Managing Performance

  • Problem: Desktop operating systems and applications can never seem to have enough resources to perform adequately, leading to shorter upgrade cycles. Using VDI to place desktop VMs on the server, systems administrators can monitor and allocate system resources based on the resource needs of client computers.
  • Alternative Solution(s): In theory, VDI implementations can take advantage of highly-scalable server-side hardware, and it’s usually easier to reconfigure CPU, memory, disk and networking settings for a VM than it is to perform a hardware upgrade on a desktop computer. The drawback with the VDI approach is that applications or services that consume too many resources could potentially hurt the performance of other systems on that same server. Load-balancing and portability can help alleviate this, but administrators can also use other techniques such as server-based computing to centrally host specific resource-intensive applications.

Workload Portability

  • Problem: Operating systems and applications are tied to the desktop hardware on which they’re running. This makes it difficult to move configurations during upgrades, reorganizations, or job reassignments. With VDI, the process of moving or copying a workload is simple since the entire system configuration is encapsulated in a hardware-independent virtual machine.
  • Alternative Solution(s): When entire desktop configurations need to be moved or copied, the VDI approach makes the process easy since it’s based on virtual machines. When using standard desktop computers, however, the same imaging and conversion tools can be used to move an OS along with its applications to another computer. As these hardware-independent images can be deployed to both physical and virtual machines, this also provides IT departments with a seamless way to use VDI and standard desktop computers in the same environment.

Summary

Ask not whether VDI is a solution to your desktop management problems, but rather whether it is the best solution to these challenges. VDI offers benefits related to quick deployments, workload portability, centralized management, and support for remote access. Few of these benefits are unique to VDI, though, so keep in mind the alternatives.