Virtual Platform Management – Data Center Automation

A new chapter from my eBook titled The Definitive Guide to Virtual Platform Management is now available for free download (registration is required).  Chapter #9, "Data Center Automation", focuses on ways in which enterprise management tools can help make the seemingly insurmountable task of managing server sprawl and VM sprawl much easier.  Here’s a brief excerpt from the introduction:

A constant challenge in most IT environments is that of finding enough time and resources to finish all the tasks that need to be completed. IT departments find themselves constantly fighting fires and responding to a seemingly never-ending stream of change requests. Although virtualization technology can provide numerous advantages, there are also associated management-related challenges that must be addressed. When these tasks are performed manually, the added overhead can reduce cost savings and can result in negative effects on performance, availability, reliability, and security.

In previous chapters, I have covered a broad array of best practices related to virtualization management. Organizations have the ability to choose from a range of implementation methods, including physical servers, virtual machines, and clustered systems. The tasks have ranged from deployment and provisioning to monitoring virtual systems once they are in production. All of this raises questions related to the best method of actually implementing these best practices.

The focus of this chapter is on data center automation. Organizations that have deployed virtual machines throughout their environment can benefit from using enterprise software that has been designed to provide automated control. The goal is to implement technology that can provide for a seamless, self-managing, and adaptive infrastructure while minimizing manual effort. It’s a tall order, but certainly one that is achievable by using a combination of best practices and the right tools.

Stay tuned for the next and final chapter of the Guide!

Visual Studio 2008 & Business Intelligence Development Studio (Troubleshooting)

I recently installed Visual Studio 2008 on my main development computer and have been very happy with it overall.  However, before starting the installation, I decided to remove all of the Visual Studio 2005 components from my computer.  Overall, this was a good idea (VS 2008 is backwards-compatible), but I found out that it broke my ability to launch the Business Intelligence Development Studio (the primary tool for creating, among other things, SQL Server Reporting Services projects).  One solution would be to re-run the SQL Server 2005 setup, but I didn’t want to go through the time and trouble.

Fortunately, it looks like there’s a better way…  This MSDN Thread outlines a great response from Dan Jones:

You should make sure that Visual Studio is still installed. If you didn’t previously have VS installed, the BI Dev Studio installation will install a VS shell called Visual Studio Premier Partner Edition. Look in Add or Remove Programs for an entry like this. If you don’t find any entry for Visual Studio go to the location for SQL Server setup and run .\Tools\Setup\vs_setup.exe. This will install the VS Shell. After this is installed repair the BI Studio installation by running the following from the command line from the .\Tools directory: start /wait setup.exe /qb REINSTALL=SQL_WarehouseDevWorkbench REINSTALLMODE=OMUS

After running both commands, I’m back up and running properly.  Hopefully, this “gotcha” will be better documented at some point (perhaps in an official Knowledge Base article?).  For now, though, it should get you back up and running within about 10 minutes.  Note that you’ll want to run Microsoft Update to install the Visual Studio SP1 updates on your computer.

Update: If you’re looking for information on SQL Server 2008 R2 and Report Builder 3.0, please see my newer post SQL Server 2008 R2 Report Builder 3.0 (RTM).

Desktop Virtualization: The Next IT Fad?

In the past, I wrote a couple of articles related to Virtual Desktop Infrastructure (VDI) (for the articles, see the VDI/Desktop Management Category).  The main idea is that, by running desktops within VMs, you can solve a lot of IT management challenges.  I’m not sold on the idea, and it appears that I’m not alone.  Hannah Drake, Editor at SearchServerVirtualization.com, asks Client-side virtual desktop technology: Unnecessary?  The article quotes some interesting points from an IT consulting company.  I added my $.02 in the comments section.  Feel free to comment here, as well: Is VDI a fad, or is it a real solution for current IT problems?

Free Microsoft Learning Courses

If you’re not already familiar with the Microsoft Learning web site, you might be missing out on many different training courses and resources.  The site provides access to hundreds of resources, ranging from books to exams to online training courses.  Content is organized both by the role of an IT pro (systems administration, development, etc.) and by technology.

If you want to get started with content related to security, development, and related topics, see the list of free courses.  Often, these courses cover new technologies or methods that Microsoft wants developers and IT pros to learn about.  Current examples include:

I have completed somewhere around a dozen of these courses over the last few years, and I have found most of them to be pretty useful for quickly getting up to speed on new technologies.  And, they’re a good change of pace from reading books, blogs, and other online materials.

Visual Studio 2008 RTM now available

For all the developer-types out there, Visual Studio 2008 (formerly code-named "Orcas"), has been released and is now available for download.  You can download a trial version on the Try Visual Studio 2008 site.  Best of all, if you have an active MSDN Subscription, you’re already licensed to run the latest version of Visual Studio.  Just use the MSDN Subscriber Downloads page. 

There’s already a lot of useful information on VS 2008 on other sites and blogs.  I’ll just highlight one of the more convenient points: you no longer need to keep multiple versions of Visual Studio installed if you plan to deploy .NET 2.0, 3.0, or 3.5 applications.  That means that, unless you need to target .NET 1.x platforms, you can replace VS 2005 with VS 2008.  I’m warming up to LINQ and some of the other new features in the IDE.  Good stuff…

Microsoft Infrastructure Planning and Design (IPD) Guides Available

I recently worked with Microsoft’s Solution Accelerator team to develop a guide to designing an infrastructure to support Microsoft’s virtualization solutions.  Unlike much of the other technical information that is available online, this series focuses on the design aspect of managing technology, rather than on implementation details.  From the web site:

Infrastructure Planning and Design guides share a common structure, including:

  • Definition of the technical decision flow through the planning process.
  • Listing of decisions to be made and the commonly available options and considerations.
  • Relating the decisions and options to the business in terms of cost, complexity, and other characteristics.
  • Framing decisions in terms of additional questions to the business to ensure a comprehensive alignment with the appropriate business landscape.

These guides complement product documentation by focusing on infrastructure design options.

Each guide leads the reader through critical infrastructure design decisions, in the appropriate order, evaluating the available options for each decision against its impact on critical characteristics of the infrastructure. The IPD Series highlights when service and infrastructure goals should be validated with the organization and provides additional questions that should be asked of service stakeholders and decision makers.

You can download the files from the Infrastructure Planning and Design page (registration is optional).  The content includes the following downloads:

  • IPD Series Introduction: A brief introduction to the series and its approach.
  • Select the Right Virtualization Solution: This guide includes an overview of Microsoft’s virtualization products and technologies.  The package includes a flowchart that can be helpful in deciding how to select from among Microsoft Virtual Server 2005, Microsoft Virtual PC, Microsoft Terminal Services, Microsoft SoftGrid, and the newly-announced Hyper-V (available with Windows Server 2008).
  • Windows Server Virtualization: This guide covers details on Windows Server Virtualization (WSv, now officially "Hyper-V") and Microsoft Virtual Server.  It includes a document and slides that cover the process of selecting which workloads to virtualize.  The guide then walks through the process of translating virtual machine requirements to host infrastructure requirements.
  • SoftGrid Application Virtualization: This guide focuses on SoftGrid – recently renamed to Microsoft Application Virtualization.  It covers best practices for designing an infrastructure for simplified application deployment and maintenance.

All downloads include files in Office 2003 and Office 2007 formats and are ready for use in your own presentations or proposals.  More guides will be available in the near future, and you should be able to access beta versions of upcoming guides at Microsoft Connect.  I hope you find the content to be useful!

Hyper-V for Windows Server 2008 Announced

It looks like the official, final name of the server technology previously known as Viridian and Windows Server Virtualization (WSv) has been announced.  See Microsoft Outlines Pricing, Packaging and Licensing for Windows Server 2008… for details.  As few techies like to read Press Releases, I’ll give you the short version.  The name of the technology/feature that will be available in 64-bit editions of Windows Server 2008 is Hyper-V.  In addition, Microsoft is providing a Hyper-V Server version of the product for use by OEMs.  The technical requirements seem to remain the same.  I’ll soon be writing some articles about the architecture of Hyper-V and will post the details here.

Personally, I like the new name somewhat more than the WSv title.  Hyper-V has fewer syllables, which is always a plus.  It’s a departure from Microsoft’s usually long-winded product names that can rarely be completed in a single breath. 

On that note: In keeping with descriptive product names, SoftGrid has also been renamed.  It is now called Microsoft Application Virtualization 4.5.  New features are listed in Brian Madden’s blog entry, Microsoft announces Application Virtualization 4.5.  While I have mixed feelings on long, descriptive product names, I really do think this will help clear up some confusion over virtualization approaches.  I’ll post more information as it becomes available.

Virtual Platform Management: Policies and Processes

Chapter #8 of my free eBook, The Definitive Guide to Virtual Platform Management, is now available for download.  This chapter talks about ways in which organizations can use policies and processes to better manage virtualization.  Included is information about creating and enforcing Service Level Agreements (SLAs), implementing charge-backs, and other best practices.  Check it out online (and don’t miss the first seven chapters)!

Microsoft’s Virtualization Options (Upcoming Webcast)

There seems to be a lot of confusion out there related to different methods of virtualization.  In short, it’s not all about running multiple operating systems on the same system at the same time.  You can also virtualize and isolate specific programs (for example, within a Java Virtual Machine).  There are also other approaches.  Microsoft refers to its Terminal Services feature as “presentation virtualization.”  Most of us are quite familiar with using the Remote Desktop Protocol (RDP) to remotely manage a computer or to run applications from afar.  But with Terminal Services, applications actually execute on the server.  What if you want them to run on the client (where CPU, memory, disk, and network resources are arguably far cheaper)?

Microsoft SoftGrid (formerly Softricity) is designed to do just that.  An upcoming webcast will help explain the approach of deploying applications on-demand: TechNet Webcast: Introducing SoftGrid to the World (Level 200)

Which to use, Microsoft SoftGrid or Terminal Services? Both of the fictional companies in our webcast, Contoso and Fabrikam, are considering application virtualization, and they have heard of both Terminal Services and SoftGrid. But which do they choose? In this session, we look at these solutions, provide details on how they differ, and explain when to use them. We also cover how to install, configure, and use SoftGrid.

Better yet, the technologies can successfully be used together.  Unfortunately, one of the drawbacks of SoftGrid is that it requires an Enterprise-level license for organizations that wish to deploy it.  There are hints that this will soon change to make SoftGrid a lot more accessible to the masses (I’d consider using it for my home office).

Of course, there’s also an option not to virtualize at all.  If you’re trying to consolidate, for example, Microsoft SQL Server machines, there’s probably a better way to consolidate your databases.  The bottom line is that there are a lot of different options for obtaining the benefits of virtualization.

IT Fights Back: Virtualization SLAs and Charge-Backs

My article, the first in a series entitled “Fighting The Dark Side of Virtualization,” is now available on the Virtual Strategy Magazine Web site.  The article, IT Fights Back: Virtualization SLAs and Charge-Backs, focuses on ways in which IT departments can help manage issues such as VM sprawl (the explosive proliferation of VMs) while containing costs.  As a quick teaser, here’s the opening marquee:

The adventure begins…

Managing Virtualization Storage for Datacenter Managers

This article was first published on SearchServerVirtualization.TechTarget.com.

Deploying virtualization into a production data center can provide an interesting mix of pros and cons. By consolidating workloads onto fewer servers, physical management is simplified. But what about managing the VMs? While storage solutions can provide much-needed flexibility, it’s still up to datacenter administrators to determine their needs and develop appropriate solutions. In this article, I’ll present storage-related considerations for datacenter administrators.

Estimating Storage Capacity Requirements

Virtual machines generally require a large amount of storage. The good news is that this can, in some cases, improve storage utilization. Unlike direct-attached storage, which is confined to a per-server basis (and often results in a lot of unused space), centralized storage arrays can pool capacity across servers. There’s also a countering effect, however: Since the expansion of virtual disk files is difficult to predict, you’ll need to leave some unallocated space for expansion. Storage solutions that provide for over-committing space (sometimes referred to as “soft-allocation”) and for dynamically resizing arrays can significantly simplify management.

To add up the storage requirements, you should consider the following:

  • The sum of the sizes of all “live” virtual disk files
  • Expansion predictions for virtual disk files
  • State-related disk files such as those used for suspending virtual machines and maintaining point-in-time snapshots
  • Space required for backups of virtual machines

All of this can be a tall order, but hopefully the overall configuration is no more complicated than that of managing multiple physical machines.
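To put some rough numbers behind that checklist, here is a minimal Python sketch of the capacity arithmetic. The VM names, sizes, and the headroom and overhead percentages are hypothetical placeholders for illustration, not recommendations.

    # Rough storage capacity estimate for a group of VMs (hypothetical numbers).
    # All figures are in gigabytes; adjust the inputs to match your environment.

    vm_disks_gb = {
        "web01": 40,
        "db01": 250,
        "file01": 500,
    }

    EXPANSION_HEADROOM = 0.20   # assumed growth allowance for expanding virtual disks
    SNAPSHOT_OVERHEAD = 0.15    # assumed space for snapshots / suspended-state files
    BACKUP_COPIES = 1           # number of full host-level backup copies to keep

    live_total = sum(vm_disks_gb.values())
    expansion = live_total * EXPANSION_HEADROOM
    state_files = live_total * SNAPSHOT_OVERHEAD
    backups = live_total * BACKUP_COPIES

    required_gb = live_total + expansion + state_files + backups
    print(f"Live virtual disks: {live_total:.0f} GB")
    print(f"Expansion headroom: {expansion:.0f} GB")
    print(f"Snapshots/state:    {state_files:.0f} GB")
    print(f"Backup copies:      {backups:.0f} GB")
    print(f"Estimated total:    {required_gb:.0f} GB")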

Placing Virtual Workloads

One of the best ways to reduce disk contention and improve overall performance is to profile virtual workloads to determine their requirements. Performance statistics help determine the number, size, and type of IO operations. Table 1 provides an example.

Table 1: Assigning workloads to storage arrays based on their performance requirements

In the provided example, the VMs are assigned to separate storage arrays to minimize contention. By combining VMs with “compatible” storage requirements on the same server, administrators can better distribute load and increase scalability.
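To make the placement idea concrete, here is a small, hypothetical Python sketch that greedily assigns each VM to the array with the most remaining IOPS headroom. Real placement decisions weigh many more factors (latency, read/write mix, capacity), so treat this strictly as an illustration; the names and numbers are made up.

    # Hypothetical greedy placement: assign each VM to the storage array
    # with the most remaining IOPS headroom. Numbers are illustrative only.

    arrays = {"Array-A": 5000, "Array-B": 3000}          # usable IOPS per array (assumed)
    vm_iops = {"db01": 1800, "web01": 400, "mail01": 900, "file01": 600}

    placement = {}
    remaining = dict(arrays)

    # Place the heaviest workloads first so they get the roomiest arrays.
    for vm, iops in sorted(vm_iops.items(), key=lambda item: item[1], reverse=True):
        target = max(remaining, key=remaining.get)
        if remaining[target] < iops:
            raise RuntimeError(f"No array has enough headroom for {vm}")
        placement[vm] = target
        remaining[target] -= iops

    for vm, array in placement.items():
        print(f"{vm} -> {array} (array headroom left: {remaining[array]} IOPS)")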

Selecting Storage Methods

When planning to deploy new virtual machines, datacenter administrators have several different options. The first is to use local server storage. Fault-tolerant disk arrays that are directly-attached to a physical server can be easy to configure. For smaller virtualization deployments, this approach makes sense. However, when capacity and performance requirements grow, adding more physical disks to each server can lead to management problems. For example, arrays are typically managed independently, leading to wasted disk space and requiring administrative effort.

That’s where network-based storage comes in. By using centralized, network-based storage arrays, organizations can support many host servers using the same infrastructure. While support for technologies varies based on the virtualization platform, NAS, iSCSI, and SAN-based storage are the most common. NAS devices use file-level IO and are typically used as file servers. They can be used to store VM configuration and hard disk files. However, latency and competition for physical disk resources can be significant.

SAN and iSCSI storage solutions perform block-level IO operations, providing raw access to storage resources. Through the use of redundant connections and multi-pathing, they can provide the highest levels of performance, lowest latency, and simplified management.

In order to determine the most appropriate option, datacenter managers should consider workload requirements for each host server and its associated guest OS’s. Details include the number and types of applications that will be running, and their storage and performance requirements. The sum of this information can help determine whether local or network-based storage is most appropriate.

Monitoring Storage Resources

CPU and memory-related statistics are often monitored for all physical and virtual workloads. In addition to this information, disk-related performance should be measured. Statistics collected at the host server level will provide an aggregate view of disk activity and whether storage resources are meeting requirements. Guest-level monitoring can help administrators drill down into the details of which workloads are generating the most activity. While the specific statistics that can be collected will vary across operating systems, the types of information that should be monitored include:

  • IO Operations per Second (IOPS): This statistic refers to the number of disk-related transactions that are occurring at a given instant. IOPS is often used as the first guideline for determining overall storage requirements.
  • Storage IO Utilization: This statistic refers to the percentage of total IO bandwidth that is being consumed at a given point in time. High levels of utilization can indicate the need to upgrade or move VMs.
  • Paging operations: Memory-starved VMs can generate significant IO traffic due to paging to disk. Adding or reconfiguring memory settings can help improve performance.
  • Disk queue length: The number of IO operations that are pending. A consistently high number will indicate that storage resources are creating a performance bottleneck.
  • Storage Allocation: Ideally, administrators will be able to monitor the current amount of physical storage space that is actually in use for all virtual hard disks. The goal is to proactively rearrange or reconfigure VMs to avoid over-allocation.

VM disk-related statistics will change over time. Therefore, automated monitoring tools that can generate reports and alerts are an important component of any virtualization storage environment.
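As a simple example of the kind of raw data such tools collect, the following Python sketch (using the third-party psutil library) samples host-level disk counters over an interval and derives IOPS and throughput. It is a minimal illustration only; queue length, storage allocation, and per-VM detail would come from platform- or hypervisor-specific counters.

    # Sample host-level disk counters twice and derive IOPS/throughput.
    # Requires the third-party psutil package; per-VM detail would need
    # hypervisor- or guest-level counters instead.
    import time
    import psutil

    INTERVAL = 5  # seconds between samples

    before = psutil.disk_io_counters()
    time.sleep(INTERVAL)
    after = psutil.disk_io_counters()

    read_iops = (after.read_count - before.read_count) / INTERVAL
    write_iops = (after.write_count - before.write_count) / INTERVAL
    read_mb_s = (after.read_bytes - before.read_bytes) / INTERVAL / (1024 * 1024)
    write_mb_s = (after.write_bytes - before.write_bytes) / INTERVAL / (1024 * 1024)

    print(f"Read IOPS:  {read_iops:.1f}   Read MB/s:  {read_mb_s:.2f}")
    print(f"Write IOPS: {write_iops:.1f}   Write MB/s: {write_mb_s:.2f}")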

Summary

Managing storage capacity and performance should be high on the list of responsibilities for datacenter administrators. Virtual machines can easily be constrained by disk-related bottlenecks, causing slow response times or even downtime. By making smart VM placement decisions and monitoring storage resources, many of these potential bottlenecks can be overcome. Above all, it’s important for datacenter administrators to work together with storage managers to ensure that business and technical goals remain aligned over time.

Virtualization Considerations for Storage Managers

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s common for new technology to require changes in all areas of an organization’s overall infrastructure. Virtualization is no exception. While many administrators focus on CPU and memory constraints, storage-related performance is also a very common bottleneck. In some ways, virtual machines can be managed like physical ones. After all, each VM runs its own operating system, applications, and services. But there are also numerous additional considerations that must be taken into account when designing a storage infrastructure. By understanding the unique needs of virtual machines, storage managers can build a reliable and scalable data center infrastructure to support their VMs.

Analyzing Disk Performance Requirements

For many types of applications, the primary consideration around which the storage infrastructure is designed is I/O operations per second (IOPS). IOPS refers to the number of read and write operations that are performed, but it does not always capture the whole picture. Additional considerations include the type of activity. For example, since virtual disks that are stored on network-based storage arrays must support guest OS disk activity, the average I/O request size tends to be small. Additionally, I/O requests are frequent and often random in nature. Paging can also create a lot of traffic on memory-constrained host servers. There are other considerations that will be workload-specific; for example, it’s also good to measure the percentage of read vs. write operations when designing the infrastructure.

Now, multiply all of these statistics by the number of VMs that are being supported on a single storage device, and you are faced with the very real potential for large traffic jams. The solution? Optimize the storage solution for supporting many, small, and non-sequential IO operations. And, most importantly, distribute VMs based on their levels and types of disk utilization. Performance monitoring can help generate the information you need.

Considering Network-Based Storage Approaches

Many environments already use a combination of NAS, SAN, and iSCSI-based storage to support their physical servers. These methods can still be used for hosting virtual machines, as most virtualization platforms provide support for them. For example, SAN- or iSCSI-based volumes that are attached to a physical host server can be used to store virtual machine configuration files, virtual hard disks, and related data. It is important to note that, by default, the storage is attached to the host and not to the guest VM. Storage managers should keep track of which VMs reside on which physical volumes for backup and management purposes.

In addition to providing storage at the host-level, guest operating systems (depending on their capabilities) can take advantage of NAS and iSCSI-based storage. With this approach, VMs can directly connect to network-based storage. A potential drawback, however, is that guest operating systems can be very sensitive to latency, and even relatively small delays can lead to guest OS crashes or file system corruption.

Evaluating Useful Storage Features

As organizations place multiple mission-critical workloads on the same servers through the use of virtualization, they can use various storage features to improve reliability, availability and performance. Implementing RAID-based striping across arrays of many disks can help significantly improve performance. The array’s block size should be matched to the most common size of I/O operations. However, more disks means more chances for failures. So, features such as multiple parity drives and hot standby drives are a must.

Fault tolerance can be implemented through the use of multi-pathing for storage connections. For NAS and iSCSI solutions, storage managers should look into having multiple physical network connections and implementing fail-over and load-balancing features by using network adapter teaming. Finally, it’s a good idea for host servers to have dedicated network connections to their storage arrays. While you can often get by with shared connections in low-utilization scenarios, the load placed by virtual machines can be significant and can increase latency.

Planning for Backups

Storage administrators will need to back up many of their virtual machines. Apart from allocating the necessary storage space, it is necessary to develop a method for dealing with exclusively-locked virtual disk files. There are two main approaches:

  • Guest-Level Backups: In this approach, VMs are treated like physical machines. Generally, you would install backup agents within VMs, define backup sources and destinations, and then let them go to work. The benefit of this approach is that only important data is backed up (thereby reducing required storage space). However, your backup solution must be able to support all potential guest OS’s and versions. And, the complete recovery process can involve many steps, including reinstalling and reconfiguring the guest OS.
  • Host-Level Backups: Virtual machines are conveniently packaged into a few important files. Generally, this includes the VM configuration file and virtual disks. You can simply copy these files to another location. The most compatible approach involves stopping or pausing the VM, copying the necessary files, and then restarting the VM. The issue, however, is that this can require downtime. Numerous first- and third-party solutions are able to back up VMs while they’re “hot”, thereby eliminating service interruptions. Regardless of the method used, replacing a failed or lost VM is easy – simply restore the necessary files to the same or another host server and you should be ready to go. The biggest drawback of host-level backups is in the area of storage requirements. You’re going to be allocating a ton of space for the guest OS’s, applications, and data you’ll be storing.

Storage solution options such as snapshot-based backups can be useful. However, storage administrators should thoroughly test the solution and should look for explicitly-stated virtualization support from their vendors. Remember, backups must be consistent to a point in time, and non-virtualization-aware solutions might neglect to flush information stored in the guest OS’s cache.

Summary

By understanding and planning for the storage-related needs of virtual machines, storage administrators can help their virtual environments scale and keep pace with demand. While some of the requirements are somewhat new, many involve utilizing the same storage best practices that are used for physical machines. Overall, it’s important to measure performance statistics and to consider storage space and performance when designing a storage infrastructure for VMs.

Advanced Backup Options for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

It’s a pretty big challenge to support dozens or hundreds of separate virtual machines. Add in the requirement for backups – something that generally goes without saying – and you have to figure out how to protect important information. Yes, that usually means at least two copies of each of these storage hogs. I understand that you’re not made of storage (unless, of course, you’re the disk array that’s reading this article on the web server). So what should you do? In this tip, I’ll outline several approaches to performing backups for VMs, focusing on the strengths and limitations of each.

Determining Backup Requirements

Let’s start by considering the requirements for performing backups. The list of goals is pretty simple, in theory:

  • Minimize data loss
  • Minimize recovery time
  • Simplify implementation and administration
  • Minimize costs and resource usage

Unfortunately, some of these objectives are often at odds with each other. Since implementing any solution takes time and effort, start by characterizing the requirements for each of your virtual machines and the applications and services they support. Be sure to write in pencil, as it’s likely that you’ll be revising these requirements. Next, let’s take a look at the different options for meeting these goals.

Application-Level Backups

The first option to consider for performing backups is that of using application features to do the job. There’s usually nothing virtualization-specific about this approach. Examples include:

  • Relational Database Servers: Databases were designed to be highly-available and it should come as no surprise that there are many ways of using built-in backup methods. In addition to standard backup and restore operations, you can use replication, log-shipping, clustering, and other methods to ensure that data remains protected.
  • Messaging Servers: Communications platforms such as Microsoft Exchange Server provide methods for keeping multiple copies of the data store in sync. Apart from improving performance (by placing data closer to those who need it), this can provide adequate backup functionality.
  • Web Servers: The important content for a web server can be stored in a shared location or can be copied to each node in a web server farm. When a web server fails, just restore the important data to a standby VM, and you’re ready to go. Better yet, use shared session state or stateless application features and a network load-balancer to increase availability and performance.

All of these methods allow you to protect against data loss and downtime by storing multiple copies of important information.

Guest-Level Backups

What’s so special about VMs, anyway? I mean, why not just treat them like the physical machines that they think they are? That’s exactly the approach with guest-level backups. The most common method with this approach is to install backup agents within the guest OS and to specify which files should be backed up and their destinations. As with physical servers, administrators can decide what really needs to be backed up – generally just data, applications, and configuration files. That saves precious disk space and can reduce backup times.

There are, however, drawbacks to this backup approach. First, your enterprise backup solution must support your guest OS’s (try finding an agent for OS/2!). Assuming the guest OS is supported, the backup and recovery process is often different for each OS. This means more work on the restore side of things. Finally, the restore process can take significant time, since a base OS must be installed and the associated components restored.

Examples of popular enterprise storage and backup solutions are those from Symantec, EMC, Microsoft and many other vendors.

Host-Level Backups

Host-level backups take advantage of the fact that virtual machines are encapsulated in one or more virtual disk files, along with associated configuration files. The backup process consists of making a copy of the necessary files from the host OS’s file system. Host-level backups provide a consistent method for copying VMs since you don’t have to worry about differences in guest operating systems. When it comes time to restore a VM (and you know it’s going to happen!), all that’s usually needed is to reattach the VM to a working host server.

However, the drawback is that you’re likely to need a lot of disk space. Since the entire VM, including the operating system, applications, and other data are included in the backup set, you’ll have to allocate the necessary storage resources. And, you’ll need adequate bandwidth to get the backups to their destination. Since virtual disk files are exclusively locked while a VM is running, you’ll either need to use a “hot backup” solution, or you’ll have to pause or stop the VM to perform a backup. The latter option results in (gulp!) scheduled downtime.
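As a bare-bones illustration of the stop/copy/restart approach, here is a hypothetical Python sketch that copies a VM’s folder (configuration file plus virtual disks) to a backup location. It assumes the VM has already been paused or stopped with your platform’s own tools (or that a hot-backup mechanism handles the locking), and the paths are placeholders.

    # Minimal host-level backup sketch: copy a stopped VM's files to a backup folder.
    # Paths are hypothetical; pause/stop the VM (or use a hot-backup tool) before
    # copying, since running VMs hold exclusive locks on their virtual disk files.
    import shutil
    from datetime import datetime
    from pathlib import Path

    VM_DIR = Path(r"D:\VMs\web01")            # assumed location of config + virtual disks
    BACKUP_ROOT = Path(r"\\backupsrv\vmbak")  # assumed backup destination

    def backup_vm(vm_dir: Path, backup_root: Path) -> Path:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        dest = backup_root / f"{vm_dir.name}-{stamp}"
        # Copy the entire VM folder (configuration files plus virtual disk files).
        shutil.copytree(vm_dir, dest)
        return dest

    if __name__ == "__main__":
        target = backup_vm(VM_DIR, BACKUP_ROOT)
        print(f"Backed up {VM_DIR} to {target}")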

Solutions and technologies include:

  • VMware: VMotion; High Availability; Consolidated Backup; DRS
  • Microsoft Volume Shadow Copy Service (VSS)

File System Backups

File system backups are based on features available in storage arrays and specialized software products. While they’re not virtualization-specific, they can help simplify the process of creating and maintaining VM backups. Snapshot features can allow you to make a duplicate of a running VM, but you should make sure that your virtualization platform is specifically supported. File system replication features can use block- or bit-level features to keep a primary and backup copy of virtual hard disk files in sync.

Since changes are transferred efficiently, less bandwidth is required. And, the latency between when modifications are committed on the primary VM and the backup VM can be minimized (or even eliminated). That makes the storage-based approach useful for maintaining disaster recovery sites. While third-party products are required, file system backups can be easy to set up and maintain. But, they’re not always ideal for write-intensive applications and workloads.

Potential solutions include products from Double-Take Software and from Neverfail. Also, if you’re considering the purchase of a storage solution, ask your vendor about replication and snapshot capabilities, and their compatibility with virtualization.

Back[up] to the Future

Most organizations will likely choose different backup approaches for different applications. For example, application-level backups are appropriate for those systems that support them. File system replication is important for maintaining hot or warm standby sites and services. Guest- and host-level backups balance ease of backup/restore operations vs. the amount of usable disk space. Overall, you should compile the data loss, downtime and cost constraints, and then select the most appropriate method for each type of VM. While there’s usually no single answer that is likely to meet all of your needs, there are some pretty good options out there!

Evaluating Network-Based Storage Options

This article was first published on SearchServerVirtualization.TechTarget.com.

Imagine living in a crowded apartment with a bunch of people that think they own the place. Operating systems and applications can be quite inconsiderate at times. For example, when they’re running on physical machines, these pieces of software are designed to monopolize hardware resources. Now, add virtualization to the picture, and you get a lot of selfish people competing for the same resources. In the middle is the virtualization layer – acting as a sort of landlord or superintendent – trying to keep everyone happy (while still generating a profit). Such is the case with disk I/O on virtualization host servers. In this Tip, I’ll discuss some options for addressing this common bottleneck.

Understanding Virtualization I/O Requirements

Perhaps the most important thing to keep in mind is that not all disk I/O is the same. When designing storage for virtualization host servers, you need to get an idea of the actual disk access characteristics you will need to support. Considerations include:

  • Ratio of read vs. write operations
  • Frequency of sequential vs. random reads and writes
  • Average I/O transaction size
  • Disk utilization over time
  • Latency constraints
  • Storage space requirements (including space for backups and maintenance operations)

Collecting this information on a physical server can be fairly simple. For example, on the Windows platform, you can collect data using Performance Monitor and store it to a binary file or database for later analysis. When working with VMs, you’ll need to measure and combine I/O requirements to define your disk performance goals. The focus of this tip is on choosing methods for storing virtual hard disk files, based on cost, administration and scalability requirements.
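As an illustration of that “measure and combine” step, here is a minimal Python sketch that rolls hypothetical per-VM I/O profiles (as you might gather them with Performance Monitor) into an aggregate design target for the host’s storage. The figures and the 30 percent peak margin are assumptions for the example.

    # Roll up per-VM I/O profiles (gathered with Performance Monitor or similar)
    # into an aggregate requirement for the host's storage design.
    # All numbers and the peak margin are illustrative assumptions.

    vm_profiles = [
        {"name": "db01",   "iops": 1200, "read_pct": 0.70, "avg_io_kb": 8},
        {"name": "web01",  "iops": 300,  "read_pct": 0.90, "avg_io_kb": 16},
        {"name": "mail01", "iops": 700,  "read_pct": 0.55, "avg_io_kb": 32},
    ]

    PEAK_MARGIN = 0.30  # assumed cushion for bursts above the measured average

    total_iops = sum(p["iops"] for p in vm_profiles)
    weighted_read_pct = sum(p["iops"] * p["read_pct"] for p in vm_profiles) / total_iops
    throughput_mb_s = sum(p["iops"] * p["avg_io_kb"] for p in vm_profiles) / 1024

    print(f"Aggregate IOPS (avg):         {total_iops}")
    print(f"Design target w/ peak margin: {total_iops * (1 + PEAK_MARGIN):.0f} IOPS")
    print(f"Blended read percentage:      {weighted_read_pct:.0%}")
    print(f"Estimated throughput:         {throughput_mb_s:.1f} MB/s")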

Local / Direct-Attached Storage

The default storage option in most situations is local storage. The most common connection methods include PATA, SATA, SCSI, and SAS. Each type of connection comes with associated performance and cost considerations. RAID-based configurations can provide fault tolerance and can be used to improve performance.

Pros:

  • Generally cheaper than other storage options
  • Low latency, high bandwidth connections that are reserved for a single physical server

Cons:

  • Potential waste of storage space (since disk space is not shared across computers)
  • Limited total storage space and scalability due to physical disk capacity constraints (especially when implementing RAID)
  • Difficult to manage, as storage is decentralized

Storage Area Networks (SANs) / Fibre Channel

SANs are based on Fibre Channel connections, rather than copper-based Ethernet. SAN-based protocols are designed to provide high throughput and low latency, but require the implementation of an optical-based network infrastructure. Generally, storage arrays provide raw block-level connections to carved-out portions of disk space.

Pros:

  • Can provide high performance connections
  • Improved compatibility – appears as local storage to the host server
  • Centralizes storage management

Cons:

  • Expensive to implement – requires Fibre Channel-capable host bus adapters, switches, and cabling
  • Expensive to administer – requires expertise to manage a second “network” environment

Network-Based Storage

Network-based storage devices are designed to provide disk resources over a standard (Ethernet) network connection. They most often support protocols such as Server Message Block (SMB), and Network File System (NFS), both of which are designed for file-level disk access. The iSCSI protocol provides the ability to perform raw (block-level) disk access over a standard network. iSCSI-attached volumes appear to the host server as if they were local storage.

Pros:

  • Lower implementation and management cost (vs. SANs) due to utilization of copper-based (Ethernet) connections
  • Storage can be accessed at the host- or guest-level, based on specific needs
  • Higher scalability (arrays can contain hundreds of disks) and throughput (dedicated, redundant I/O controllers)
  • Simplified administration (vs. direct-attached storage), since disks are centralized

Cons:

  • Applications and virtualization platforms must support either file-based access or iSCSI

Storage Caveats: Compatibility vs. Capacity vs. Cost

In many real world implementations of virtualization, an important bottleneck is storage performance. Organizations can use well-defined methods of increasing CPU and memory performance, but what about the hard disks? Direct-attached, network-based, and SAN-based storage can provide several viable options. Once you’ve outgrown local storage (from a capacity, performance, or administration standpoint), you should consider implementing iSCSI or file-based network storage servers. The primary requirement, of course, is that your virtualization layer must support the hardware and software you choose. SANs are a great option for organizations that have already made the investment, but some studies show that iSCSI devices can provide similar levels of performance at a fraction of the cost.

The most important thing to remember is to thoroughly test your solution before deploying it into production. Operating systems can be very sensitive to disk-related latency, and disk contention can cause unforeseen traffic patterns. And, once the systems are deployed, you should be able to monitor and manage throughput, latency, and other storage-related parameters.

Overall, providing storage for virtual environments can be a tricky technical task. The right solution, however, can result in happy landlords and tenants, whereas the wrong one results in a seriously overcrowded apartment.

Desktop Virtualization: Evaluating Approaches and Solutions

This article was first published on SearchServerVirtualization.TechTarget.com.

Visualize the unglamorous task of crawling behind a dusty desktop computer to check for an unplugged cable. This is in response to a report that a user’s computer is “broken”. You have only the soothing sounds of an employee complaining to a friend about how IT is taking forever to solve the problem. Finish the job quickly, as you’ll soon be off to troubleshoot an application compatibility problem on an executive’s notebook computer. Assuming you haven’t left the IT industry altogether after reading this paragraph, I think you’ll agree that there are many compelling reasons for addressing desktop and application management issues.

In the first Tip in this series on desktop virtualization, I defined the goals and problems that we were trying to solve. I provided an overview of the various approaches (and noted that there’s often some disagreement over specific terminology). Here, I’ll cover the pros and cons of specific approaches, along with applications you might want to consider.

Presentation Virtualization Solutions

In presentation virtualization, user input and application output are managed using a network connection. Applications run on the server, and screen updates are sent to a thin client or desktop computer. Some solutions can virtualize individual applications, can work securely over Internet connections, and can be integrated with a variety of network-level access methods.

  • Benefits: Scalability is a big one: up to hundreds of simultaneous application sessions can be created on a single server. Application management can be simplified since deployment to desktops is no longer a requirement. Access to applications can be managed centrally, and data may be stored securely on back-end systems.
  • Drawbacks: Applications must be compatible with the virtualization solution. While utilities are available for assisting in this area, they’re far from foolproof. Additionally, all users will be using the same OS version on the server side. Reliability is also a concern, especially when running business-critical client applications, due to the number of sessions that must be maintained. Over slow connections, sluggish application response can hurt the end-user experience.

Products and Solutions:

Application and OS Virtualization Solutions

Realizing that the primary purpose of desktop computers is to allow users to run applications, some vendors have focused on using application deployment and management solutions. The goal here is to allow many different applications to coexist on a single operating system that runs on users’ hardware.

  • Benefits: Users can run their operating systems and applications locally, leading to better performance and support for disconnected scenarios. IT departments can avoid application compatibility and deployment issues and can more easily keep track of licensing. Overall scalability is often far higher than that of virtualizing entire desktop operating systems.
  • Drawbacks: Applications may need to be modified (or at least tested) when running with these solutions. The base OS is shared among all applications and application environments, so all applications must run on the same basic platform.

Products and Solutions:

VM-Based Virtualization Solutions

There’s no reason that the benefits of server virtualization can’t be extended to desktop machines. VM-based virtualization involves the creation of VMs (either on-demand or permanent) for users on data center hardware. Users access their individual VMs using a thin client device or a remote desktop solution. On the server side, a “connection broker” layer ensures that the right VMs are available and that users connect to their own systems.

  • Benefits: All OS’s and user data are stored within the data center (presumably on fault-tolerant, high performance hardware). This enables centralized management and increases average utilization on all of the systems that an IT department supports. Security risks are decreased, as are costs related to managing client-side computers.
  • Drawbacks: Entire operating systems are running for each user. This can limit scalability and increase costs related to storage. Additionally, users require a network connection in order to access their computers. Finally, server-side hardware resources can be far more costly than their desktop counterparts.

Products and Solutions:

Summary

It is important to note that these solutions are not exclusive of each other. For example, you could choose to virtualize a desktop OS and then use application virtualization products to deploy and manage applications. Realistically, most organizations will find all of these options to be suitable for simplifying some aspect of overall desktop operations. This area is evolving rapidly (in terms of both real technology and hype), so be sure to thoroughly research options before deploying them. Overall, knowledge is power, so keep these options in mind the next time you spend 30 minutes repairing a mouse-related problem!