Desktop Virtualization: Goals and Options

This article was first published on SearchServerVirtualization.TechTarget.com.

Quick: Name a task that’s less enjoyable than managing client operating systems and applications! I have a feeling that if you’re a seasoned IT pro, you had to think for a few seconds (and, I’ll bet that many of you came up with some very creative responses). Clearly, keeping end-users’ systems up-to-date can be a thankless and never-ending ordeal. Vendors have heard your cries, and various solutions are available. At the risk of sounding like a flight attendant, I do understand that you have a choice of virtualization approaches. In this series of Tips, I’ll describe the pros and cons of desktop application and OS virtualization. Let’s start by defining the problem.

Desktop Management Challenges

There are many reasons that desktop computers can be more painful to manage than their server-side counterparts. Some important issues include:

  • Analyzing Administration: Desktop and notebook computers are often located in the most remote areas of your organization (or outside of it altogether). Performing systems administration tasks can sometimes require physical access to these machines. And, even with remote management tools, the client-side machine has to be online and connected to the network. The result is significant time and effort requirements for keeping systems optimally configured.
  • Mitigating Mobile Mayhem: Traveling and remote users can be (quite literally) a moving target: It seems that as soon as you’ve deployed a system for their use, changes are required. While some users can’t avoid working offline, there’s a subset of the user population that might need to access their OS and applications from multiple locations. Managing multiple pieces of hardware or shared desktop machines can be time-consuming and tedious.
  • Dealing with Distributed Data: Security and regulatory compliance requirements demand careful control over where data lives. It’s far easier to secure information and prevent data loss or theft when everything’s stored in the data center. A stolen laptop is costly, but replacing the hardware is far cheaper than dealing with stolen data.
  • Application Anarchy: Deploying and managing desktop applications can be a struggle. While deployment tools can simplify the process, issues like managing application compatibility problems can lead to a large number of support desk calls. Other issues include tracking license usage, ensuring that systems remain patched, and providing the right applications on the right computer at the right time.
  • Bumbling Backups: Ensuring that data remains protected on desktop and notebook computers can be problematic. Even with the use of backup agents, there’s room for error. And, getting users to consistently save their important files to network shares can seem futile.
  • Hardware Headaches: Managing desktop and notebook hardware can be time-consuming. Add in the costs of technology refreshes and verifying hardware system requirements, and the issue can quickly float to the top of an IT department’s list of costs.

From a technical standpoint, the issue is that applications are tightly tied to their operating systems. And the operating systems, in turn, are tightly tied to hardware. Loosening these dependencies can help alleviate some of the pain.

Choosing a Virtualization Approach

There are several different approaches to addressing desktop-related challenges. One caveat is that the terminology can be inconsistent. I’ve taken a shot at categorizing the different approaches, but vendors’ descriptions do differ. Here’s a breakdown:

  • Presentation Virtualization: Some users require access to only one or a few applications (think about call center and point-of-sale users). The main idea behind presentation virtualization is that all applications are installed and executed on a specialized server, which then exchanges video, keyboard, and mouse signals with a small client application or a thin client device. Since applications are installed centrally, deployment and management are less of an issue.
  • OS and Application Virtualization: For some portion of the user population, such as traveling employees or “power-users”, there’s a real need to run an operating system directly on individual computers. In these scenarios, it’s still desirable to simplify the deployment and management of applications. Application virtualization solutions provide a way to either compartmentalize or stream programs to the computers that need them. The process is quick, safe, and can happen with little IT involvement.
  • VM-Based Virtualization: Also known as Virtual Desktop Infrastructure (VDI), among other names, the idea here is to allow users to run their own desktop OS’s – except that they are physically stored in the data center. Typically, the operating system and applications are run within a dedicated virtual machine which is assigned to a particular user. Employees use either a thin client computer or a remote desktop connection to access their environments.

In addition to these options, there’s an implicit fourth choice: “None of the above.” As I described in my Tips, “VDI Benefits without VDI”, you can reduce problems to some extent by utilizing IT best practices. You can also use a combination of these approaches (for example, VM-based virtualization with application virtualization) to meet different needs in the same environment.

Looking for Solutions

In this Tip, I presented some of the problems that desktop virtualization attempts to address. It’s important to understand your pain points before you start looking for a remedy. Then, I described three high-level approaches for solving common problems. In the next part of this series, I’ll present information about the pros and cons of each approach, along with specific products to consider.

Scalability: Behind the Scenes

While I consider myself a fairly well-informed IT architect, I often marvel at how sites like Google, Amazon, YouTube, MySpace, and Microsoft keep up with demand.  While a lot of the information is proprietary (and understandably so), some details are available online.

I recently ran across HighScalability, a site dedicated to covering the behind-the-scenes details of scalable installations.  Interestingly, the site mentions that it would like to “help you build successful scalable web sites.”  It sounds like a noble goal, but there are very few sites in the world that could truly benefit from these levels of performance.  And, those that can usually have unique considerations.  Amazon’s architecture is probably not very useful to YouTube (though parts of it are probably relevant).

Microsoft TechNet also includes numerous scalability and performance-related studies.  While some reek of unbridled marketing (showing happy, attractive people in pristine data centers), some studies are really interesting.  I especially like the section entitled How Microsoft Does IT (n.b., “IT” refers to information technology). 

Overall, there’s a lot to be learned from how others do things.  In some cases, you can even find out what doesn’t work well.  Anyway, scalability studies can be an interesting way to find out more about cutting edge technology.  Certainly, we’ve come a long way since dynamic DNS and static content caching!  Now if only my blog were to require those levels of performance (do your part, and reload this page!).

VDI Benefits without VDI: Desktop Management

This article was first published on SearchServerVirtualization.TechTarget.com.

Quick: Think of the five systems administration tasks you most enjoy doing! If you’re like most techies, desktop management probably didn’t make the list. It’s probably right up there with washing the car or mowing the lawn (a whole different type of administration challenge). Caring for and feeding client-side computers can be a painful and never-ending process. Therefore, it’s no surprise that Virtual Desktop Infrastructure (VDI) technology is capturing the eyes and ears of IT staff.

But does VDI provide a unique solution? Or, can you get the same benefits through other practices and approaches? (If you’ve read the title of this Tip, there’s a good chance you can guess where I’m going with this.) Over the years, a variety of solutions for managing desktop and notebook computers have become commonplace. In this article, I’ll outline some problems and solutions. The goal is not to discredit VDI, but to look at options for achieving the same goals.

Deployment and Provisioning

  • Problem: Rolling out new desktop computers can be time-consuming and labor-intensive. Using VDI, provisioning is much faster since standard base images can be quickly deployed within the data center. Users can then access the images from any computer or thin client.
  • Alternative Solution(s): Automated operating system deployment tools are available from OS vendors and from third-parties. Some use an image-based approach in which organizations can create libraries of supported configurations and then deploy them to physical or virtual machines. When combined with network boot features, the process can be completely automated. Additionally, there are server-based options such as Microsoft SoftGrid for automatically installing applications as they are requested.

Desktop Support and Remote Management

  • Problem: Managing and troubleshooting desktop systems can be costly and time-consuming in standard IT environments, as physical access to client machines is often required. With VDI implementations, all client operating systems, applications, and configuration settings are stored centrally within VMs within the data center. This reduces the need to visit client desktops or to have physical access to portable devices such as notebook computers.
  • Alternative Solution(s): While VDI can sometimes simplify support operations, IT departments still need to manage individual operating system images and application installations. Remote management tools can reduce the need for physical access to a computer for troubleshooting purposes. Some solutions use the same protocols (such as the Remote Desktop Protocol, RDP) that VDI or other approaches would use. Products and services also allow for troubleshooting computers over the Internet or behind remote office firewalls. That can help you support Mom, who might not be authorized to access a VM image in your corporate data center.

Resource Optimization / Hardware Consolidation

  • Problem: Desktop hardware is often under-utilized and hardware maintenance can be a significant cost and management burden. By combining many desktop computers on server hardware, VDI can be used to increase overall system resource utilization. Additionally, client computers have minimal system requirements, making them more cost effective to maintain over time.
  • Alternative Solution(s): VDI takes the “server consolidation” approach and applies it to desktop computers. Standard client computers are minimally utilized, from a resource standpoint. Desktop hardware, however, tends to be far cheaper than data center equipment. And, with VDI, client-side devices are still required, although they are “thin”. When data center costs related to power, cooling, storage, and redundancy are factored in, it can be hard to beat the cost of a mid-range desktop computer. Through the use of application virtualization and solutions such as Citrix and Microsoft Terminal Services, organizations can increase the effective lifecycle of desktop hardware. Windows Server 2008’s version of Terminal Services provides the ability to run single applications (rather than entire desktops) in a virtualized environment, thereby providing the benefits of centralized application management with scalability. There are potential compatibility issues, but they may be offset by the ability to support many more users per server.

Supporting Mobile Users and Outsourcing

  • Problem: Maintaining security for remote sites, traveling users, and non-company staff can be a significant challenge when allowing the use of standard desktop or notebook computers. VDI helps minimize data-related risks by physically storing information within the data center. Even if client devices are lost or stolen, information should remain secure and protected.
  • Alternative Solution(s): For some types of remote users, it might make sense to provide isolated desktop environments via VDI. However, these users would require network access to the VMs themselves. Multi-factor authentication (using, for example, biometric devices) and encrypted connections (such as VPNs) can help protect network access from standard desktop computers. Network Access Control (NAC) is a technology that can help prevent insecure machines from connecting to the network. And, carefully managed security permissions can prevent unauthorized access to resources. All of these best practices apply equally whether or not VDI is being used. Finally, there’s no substitute for implementing and following rigid security policies, regardless of the technical approach that is used.

Managing Performance

  • Problem: Desktop operating systems and applications can never seem to have enough resources to perform adequately, leading to shorter upgrade cycles. Using VDI to place desktop VMs on the server, systems administrators can monitor and allocate system resources based on the resource needs of client computers.
  • Alternative Solution(s): In theory, VDI implementations can take advantage of highly-scalable server-side hardware, and it’s usually easier to reconfigure CPU, memory, disk and networking settings for a VM than it is to perform a hardware upgrade on a desktop computer. The drawback with the VDI approach is that applications or services that consume too many resources could potentially hurt the performance of other systems on that same server. Load-balancing and portability can help alleviate this, but administrators can also use other techniques such as server-based computing to centrally host specific resource-intensive applications.

Workload Portability

  • Problem: Operating systems and applications are tied to the desktop hardware on which they’re running. This makes it difficult to move configurations during upgrades, reorganizations, or job reassignments. With VDI, the process of moving or copying a workload is simple since the entire system configuration is encapsulated in a hardware-independent virtual machine.
  • Alternative Solution(s): When entire desktop configurations need to be moved or copied, the VDI approach makes the process easy since it’s based on virtual machines. When using standard desktop computers, however, the same imaging and conversion tools can be used to move an OS along with its applications to another computer. As these hardware-independent images can be deployed to both physical and virtual machines, this also provides IT departments with a seamless way to use VDI and standard desktop computers in the same environment.

Summary

Ask not whether VDI is a solution to your desktop management problems, but rather whether it is the best solution to these challenges. VDI offers benefits related to quick deployments, workload portability, centralized management, and support for remote access. Few of these benefits are unique to VDI, though, so keep in mind the alternatives.

VDI Benefits without VDI: Managing Security

This article was first published on SearchServerVirtualization.TechTarget.com.

What do leaky faucets, fragmented file systems and failed hard disks all have in common? We want to fix them! As IT professionals, most of us pride ourselves on our problem-solving abilities. As soon as we hear about an issue, we want to find the solution. Every once in a while a technology offers new solutions to problems you may not have recognized. VDI raises, and addresses, some important issues related to IT management. But, is VDI the only solution to those problems?

Whether or not you agree that VDI technology will make inroads into replacing traditional desktop computers, all of the recent press on the technology helps highlight the typical pain that’s being seen in IT departments. From security to supportability to regulatory compliance, there’s clearly a need for improvements in IT management. For many environments, however, it’s possible to find solutions by using other approaches and practices.

For the record, I certainly don’t oppose the use of virtualization for desktop environments, and I think it most likely will find a useful role in many environments. However, in order to justify the costs and technology investments, it’s worth understanding other options. The point of this article is that VDI is not required in order to solve many IT-related security problems. Let’s look at some problems and alternatives.

Securing Desktop Data

  • Problem: Data stored on corporate desktop and notebook computers is vulnerable to theft or unauthorized access. By using VDI to physically store all of this data on virtual machine images in the data center, the chances of data compromise are reduced. The reason is that sensitive data is never actually stored on a desktop or portable computer. If the system is lost or stolen, organizations don’t have to worry about losing information since it is not stored on the local hard disk.
  • Alternative Solution(s): Securing data is a common challenge in all IT environments, and many solutions are available. Sensitive information, in general, should be stored in protected network locations. File servers should adhere to security standards to prevent unauthorized access or data loss. In this scenario, the most important data is already secured within the data center. For protecting local copies of information, there are several hardware- and software-based solutions that can be used to encrypt the contents of desktop and notebook hard disks. An example is Windows Vista’s BitLocker feature. Even with VDI, you would still need to protect local copies of VMs for traveling users.

Data Protection

  • Problem: Backing up and restoring important data on client machines takes significant time and effort. When using VDI, all of the contents of desktop and notebook computers are actually stored in the data center (usually on dedicated storage arrays or network-based storage devices). Since all of the data is stored centrally, systems administrators can easily make backups of entire computer configurations (including the operating system, installed applications, data, and configuration settings). They no longer have to rely on network-based backup agents that require the computer to be powered on and accessible in order for the data to be copied.
  • Alternative Solution(s): Hardware failures or accidental data modifications on client-side computers are potential problems, but there are many backup-related solutions. I already mentioned the importance of storing critical files on data center servers. By using automated restore tools, users can quickly be restored to service, even after a complete hardware failure. While VDI might seem to help in this area, when backing up entire VMs and virtual hard disks, you’re actually protecting a lot of unnecessary information. For example, each virtual hard disk that is backed up will include the entire operating system and all of the installed program files. These types of files could be much more easily restored using installation media or by reverting to an image-based backup. Users should understand the importance of storing information in network locations. File synchronization (such as the Windows Offline Files feature) can be used to automatically support traveling users, as sketched below.
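
As a rough illustration of the client-side piece, here is a minimal Python sketch (with hypothetical folder paths) that copies any files changed since the last run from a local documents folder to a network share. It is only a sketch of the synchronization idea, not a substitute for Offline Files or a real backup product.

    import shutil
    from pathlib import Path

    # Hypothetical locations: a user's local documents folder and a network share.
    LOCAL_DIR = Path(r"C:\Users\anil\Documents")
    SHARE_DIR = Path(r"\\fileserver\backups\anil")

    def sync_changed_files(local_dir: Path, share_dir: Path) -> int:
        """Copy files that are new or newer than the copy on the share."""
        copied = 0
        for src in local_dir.rglob("*"):
            if not src.is_file():
                continue
            dest = share_dir / src.relative_to(local_dir)
            # Copy when the destination is missing or older than the source.
            if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)  # copy2 preserves timestamps for the next comparison
                copied += 1
        return copied

    if __name__ == "__main__":
        print(f"Copied {sync_changed_files(LOCAL_DIR, SHARE_DIR)} changed file(s)")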

Managing System Updates

  • Problem: Systems administrators spend a lot of time keeping systems up-to-date with security updates and related patches. Part of the challenge is in dealing with remote machines that must be connected to the network and properly configured in order to be maintained. With VDI, guest OS images are located in the data center and can be accessed by systems administrators whether or not the VM is being used.
  • Alternative Solution(s): The VDI approach still requires each user to have access to a single operating system. The OS itself must be secured, patched, and periodically maintained with other types of updates. Most vendors have tools for automatically deploying updates to large numbers of computers. These same methods can be used with or without VDI. In addition, features such as Network Access Control (NAC) can help ensure that only secure computers are able to access the network.

Summary

VDI approaches can help increase security in many different situations. But, VDI is not the only option for meeting these needs. IT automation tools and practices can help address problems related to data protection, security of client-side data, and ensuring that network systems remain free of malware and other infections. When deciding how and when to deploy VDI, keep in mind the alternative approaches.

Windows Server 2008 Component Posters

While Windows Server 2008 (formerly code-named “Longhorn Server”) is still several months from release, it’s never too early to start learning about the new features that will be included in the platform.  As you might guess, five years of development has led to a wide array of improvements.  To help sort out the details, TechNet Magazine published a couple of posters in its July 2007 issue.  You can also download the posters in PDF format.  It’s hard to take in all at once, but you can zoom in on sections of interest to find more useful details.  And, if there’s ever any doubt about your level of techiness, you can print them out and proudly display them in your cube/office!

Security to the Extreme?

A friend recently tipped me off to Microsoft Support Knowledge Base Article ID 276304, “Error Message: Your Password Must Be at Least 18770 Characters and Cannot Repeat Any of Your Previous 30689 Passwords”.  As the title suggests, the error message this addresses is:

Your password must be at least 18770 characters and cannot repeat any of your previous 30689 passwords. Please type a different password. Type a password that meets these requirements in both text boxes.

Personally, I try to keep my passwords well under 10,000 characters.  With these requirements, brute-force hacking would have to be pretty efficient to compromise security.  Fortunately, seeing this problem in the real world is rather unlikely, and it only applies to Windows 2000.  So, you can rest easy if you choose to use slightly shorter Windows passwords.

Online Backup Options

There are plenty of reasons to perform frequent backups.  While most people seem to think of hardware failures first, it’s far more common for people to accidentally delete or modify files.  Regardless of the cause, it’s helpful to be able to roll back to earlier versions of files.  While modern operating systems provide various methods of creating backups, there’s one problem: protecting the backups themselves.  In the past, I used to back up to DVDs and have friends keep copies at their houses.  It’s not an elegant solution, but it does provide some level of “off-site” protection.  The problem is keeping the backup media up-to-date and performing a restore process (the latter of which would likely cost me a beer or two).  Clearly, there’s room for improvement.

One excellent option is to back up your data to the Internet.  A few years ago, bandwidth and storage limitations would have made this process difficult and costly.  Today, there are numerous online sites that provide backup services.  Some provide free trials or a limited amount of space that is available almost instantly.  For more information, reviews comparing the available options are easy to find online.

Or, just visit the various vendors’ web sites (they’re usually pretty good about telling you what features they can provide).

I’ve tried several of these products, but I’ve been using Mozy for over a year, and I’ve been really happy with it.  Here are some benefits:

  • Off-site protection: Data is transferred to an Internet data center that probably has better power, networking, and cooling support than my home office.
  • Efficient file transfers: The Mozy client determines binary-level differences in files.  It then compresses and encrypts the data before transferring it to an online server (see the sketch after this list).
  • Automated operations: Most backup clients are able to monitor for file changes and then send them periodically or based on a schedule.  The main benefit is that the weak link in most backup plans (humans) is eliminated.
  • Convenient restore options: Mozy provides the ability to perform restores using Windows Explorer integration (i.e., by right-clicking a file and choosing a prior version), by using their web site, or by using a drive icon that allows you to browse directly to your files.  Compare that to tape backups, and it’s easy to see the benefits.
  • Revision tracking: Mozy lets you restore from previous versions of files.  This, to me, is a useful feature.  Again, it’s far more likely for me to accidentally modify or delete a file than it is for an entire hard disk to fail.
  • A non-intrusive client: The last thing I want to install on my computer is a memory hog or something that will scan every file I use.  Mozy works on a scheduled basis, so it minimizes the overall impact.  Usually, I don’t even notice it.
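
To make the “compress and encrypt before transfer” idea concrete, here is a minimal Python sketch. It is not Mozy’s actual client logic, and it skips the binary-difference step entirely; it simply uses the standard zlib module, the third-party cryptography package, and a hypothetical upload step to show the general shape of the process.

    import zlib
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    def prepare_for_upload(path: str, key: bytes) -> bytes:
        """Compress and then encrypt a file's contents before sending it off-site."""
        with open(path, "rb") as f:
            data = f.read()
        compressed = zlib.compress(data, level=9)
        return Fernet(key).encrypt(compressed)

    def restore_from_backup(blob: bytes, key: bytes) -> bytes:
        """Reverse the process: decrypt, then decompress."""
        return zlib.decompress(Fernet(key).decrypt(blob))

    if __name__ == "__main__":
        key = Fernet.generate_key()        # in practice, the key must be stored safely
        blob = prepare_for_upload("report.doc", key)   # "report.doc" is a placeholder file
        # upload_to_backup_service(blob)   # hypothetical transfer step
        assert restore_from_backup(blob, key) == open("report.doc", "rb").read()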

Of course, many of those features apply to other products and services.  Some client software was either buggy or overly-intrusive (in my opinion), so that’s certainly something to keep in mind when you evaluate online backup solutions.

In addition to providing personal-level service, many companies also focus on enterprise-level services.  There are some issues, as well.  For example, in the United States, upstream bandwidth is quite limited.  Transferring a few gigabytes of data can take a long time.  Overall, though, backing up over the Internet is an excellent (and available) option.  Check it out, and let me know what you think!

P.S.  If you decide to try Mozy, please use my referral code: https://mozy.com/?code=CJM3BB (we’ll both get an extra 250MB of free storage space).

Update: I recently subscribed to the Mozy service to get unlimited storage.  It took about 5 days, but I ended up backing up 15GB of data over the Internet.  Overall, the process went very smoothly.

My E-Mail Setup: Outlook + GMail + a Personal E-Mail Address

Almost two years ago, I switched from using my ISP-provided e-mail account to using GMail as my primary mail account.  I also decided that I never wanted to go through the pain of switching accounts again, so I got my own domain name.  I’m really happy with this setup, and I thought I’d outline how it all fits together.

Benefits

Before we dive down into the technical details, here are the major benefits of this configuration.

  • Automatic backups:  Both my ISP-based POP account and GMail hold copies of my e-mail messages.  This is in addition to my local Outlook message store (which I also back up over the Internet nightly).  Perhaps that’s overkill, but most of this stuff is automatic and costs very little.
  • E-mail access from anywhere:  When I travel, I can directly access my GMail account via the web interface.  The vast majority of the time, though, I use Microsoft Outlook.
  • The ability to use Microsoft Outlook:  While web-based messaging systems provide some advantages, I greatly prefer using Microsoft Outlook.  The only issue with Outlook is that it doesn’t provide a way to synchronize multiple PST files (unless, of course, you rely on either an Exchange Server or your ISP’s POP/IMAP features to keep mail on the server).
  • Spam filtering: I generally receive about 400 spam messages per day (a dubious distinction).  By using GMail’s Spam Filter combined with Outlook 2007’s Junk E-Mail filter, I rarely see any of it. 
  • A permanent e-mail address:  I have my own permanent address that’s personalized and won’t have to change as I switch Internet providers.  And, this way spammers won’t have to bother to learn the address.  🙂
  • Archiving: To keep my Outlook PST file relatively small, I can archive off the data to another file.  If I need to find an old message, I can always search for it online using GMail.
  • Deletion of attachments: A single attachment can be larger than the next 500 e-mail messages combined.  I usually delete file attachments from Outlook messages or store them in the file system.  Should I need an attachment, I can always log in to GMail and download it.  That’s a pretty rare occurrence, though.

There are some other minor benefits, but I think that covers the main list.

Requirements

In order to set all of this up, I needed the following:

  1. A registered domain name
  2. An ISP to host the domain (~$4.00/month) and to provide POP3 access
  3. A GMail account (free)
  4. (optional) Microsoft Outlook (or any other e-mail client)

The costs are really minimal, especially if you go with a discount web host and if you’re already using Outlook or another e-mail client.

Configuration

Now, let’s look at the technical details.  If you’re unfamiliar with standards and protocols such as POP3 and SMTP, you’ll probably need to do some research before setting this up.  Otherwise, it should be fairly straight-forward.

So, the way inbound mail works is as follows:


  • The DNS MX record for my domain points to my ISP’s POP3 account, and all new mail is received there.
  • All inbound messages from my ISP are set to redirect to my GMail account.
  • GMail is configured to allow POP3 access and to automatically archive messages that are downloaded.
  • Outlook is configured to use POP3 to download messages from GMail.  Once messages are downloaded, they’re automatically archived on the GMail server.

Outbound mail works like this: I set the Reply-To address to my custom domain e-mail address and use GMail as my outbound SMTP server.  The benefit here is that all outbound messages are cached by GMail (so I can search them later or access them online).
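
For anyone who prefers to see the moving parts in script form, here is a minimal Python sketch of the two halves of this setup: downloading mail over POP3 and sending through an SMTP server with a Reply-To header pointing at the custom domain address. The account names, password, and addresses are placeholders, and it assumes GMail’s standard pop.gmail.com and smtp.gmail.com endpoints.

    import poplib
    import smtplib
    from email.message import EmailMessage

    GMAIL_USER = "someone@gmail.com"     # placeholder account
    GMAIL_PASS = "app-password-here"     # placeholder credential
    CUSTOM_ADDR = "anil@example.com"     # the permanent, domain-based address

    def fetch_new_messages():
        """Download messages from GMail via POP3 (what Outlook does on a schedule)."""
        pop = poplib.POP3_SSL("pop.gmail.com", 995)
        pop.user(GMAIL_USER)
        pop.pass_(GMAIL_PASS)
        count, _ = pop.stat()
        messages = [b"\n".join(pop.retr(i + 1)[1]) for i in range(count)]
        pop.quit()
        return messages

    def send_message(to_addr: str, subject: str, body: str):
        """Send outbound mail through GMail's SMTP server, replying as the custom address."""
        msg = EmailMessage()
        msg["From"] = GMAIL_USER
        msg["Reply-To"] = CUSTOM_ADDR
        msg["To"] = to_addr
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
            smtp.login(GMAIL_USER, GMAIL_PASS)
            smtp.send_message(msg)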

A Little Quirk

There’s one minor issue with this configuration: When Outlook users see my messages, they look something like “Anil@domain.com on behalf of Anil@ISPAccount.com“.  Operations such as replying work just fine, but some people seem to be confused by it.  Other than that, I haven’t had any problems with the setup.

The Add-Ons

You can download and install the GMail Notifier or the Google Desktop to automatically receive notifications and/or previews of new messages as they arrive.

Customization Options

The same setup can certainly be created in a variety of different ways.  For example, you can use a web-based mail service other than GMail, and you’re certainly not tied to Microsoft Outlook in any way.  Overall, the approach should work fine for most people.

Conclusion

Overall, this e-mail setup works well for me.  It also costs a total of ~$4.00/month (a fee that I could probably eliminate by finding a free web host).  And, I get the benefits of web-based messaging (simplified access and online storage), with the convenience of using Microsoft Outlook.  Was this helpful?  Does it make sense?  Should I add more detail?  Post a comment!

IT Policies: Monitoring Physical and Virtual Environments

This article was first published on SearchServerVirtualization.TechTarget.com.

Here’s a quick question: How many virtual machines and physical servers are currently running in your production environment? If you can answer that, congratulations! Here’s a harder one: Identify the top 10 physical or virtual machines based on resource utilization. For most IT organizations, both of these questions can be difficult to answer. Fortunately, there are ways to implement monitoring in an automated way. In this tip, I’ll present some advice related to monitoring VMs and host computers in a production environment.

They’re all pretty much the same…

In many ways, the tasks associated with monitoring virtual machines are similar to those of working with physical ones. Organizations that have invested in centralized monitoring solutions can continue to rely upon them for gaining insight into how applications and services are performing. Examples include:

  • Establishing Baselines: A baseline helps you determine the standard level of resource utilization for a physical or virtual workload. Details to track typically include CPU, memory, disk, and network performance.
  • Root-Cause Analysis / Troubleshooting: When users complain of slow performance, it’s important for IT staff to be able to drill down into the main cause of the problem. Performance statistics can often help identify which resources are constrained. Ideally, that will help identify the source of the problem and provide strong hints about resolving it.
  • Generating Alerts: In order to proactively manage performance, IT staff should be alerted whenever resource utilization exceeds certain thresholds. This can help administrators reconfigure workloads before users even notice a problem (a minimal sketch of this kind of threshold check follows this list).
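
As a rough sketch of what a threshold check might look like, the Python snippet below samples a host’s CPU and memory utilization with the third-party psutil package and flags anything over its limit. The threshold values are made up, and a real monitoring tool would send an alert rather than print to the console.

    import psutil  # third-party: pip install psutil

    # Hypothetical thresholds; real values should come from your baselines.
    THRESHOLDS = {"cpu_percent": 85.0, "memory_percent": 90.0}

    def sample_host() -> dict:
        """Collect a single sample of host-level resource utilization."""
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "memory_percent": psutil.virtual_memory().percent,
        }

    def check_thresholds(sample: dict) -> list:
        """Return alert messages for any metric that exceeds its threshold."""
        return [
            f"{metric} at {value:.1f}% exceeds threshold of {THRESHOLDS[metric]}%"
            for metric, value in sample.items()
            if value > THRESHOLDS[metric]
        ]

    if __name__ == "__main__":
        for alert in check_thresholds(sample_host()):
            print("ALERT:", alert)   # a real tool would e-mail or page someone here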

All of these tasks are fairly standard in many IT environments and are also applicable to working with virtual workloads.

… Except for their differences

Environments that use virtualization also have some unique challenges related to performance monitoring. Since it’s quick and easy to deploy new VMs, keeping track of them is a huge challenge. Some additional features and functions that can be helpful include:

  • Mapping Guest-to-Host Relationships: While virtual machines have their own operating systems, resource utilization is often tied to other activity on the same host server. Virtualization-aware monitoring tools should be able to uniquely identify VMs and relate them to the physical computers on which they are running (a simple sketch of this kind of mapping follows this list).
  • Automated Responses / Dynamic Reconfiguration: In many cases, it’s possible to perform automated tasks in reaction to performance-related issues. For example, if CPU usage of a single VM is slowing down the entire host, VM priority settings can be adjusted. Or, when excessive paging is occurring, the VM’s memory allocation can be increased.
  • Broad Platform Support: There’s a good chance that you’re supporting many more OS versions and flavors for VMs than on physical machines. A good performance monitoring solution will support the majority of virtual operating environments.
  • Reporting / Capacity Planning: The primary purpose of performance monitoring is to facilitate better decision-making. Advanced reporting features can help track untapped resources and identify host servers that are overloaded. Tracking historical performance statistics can also be very helpful.
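
To illustrate the guest-to-host mapping idea, here is a small Python sketch that keeps a simple dictionary relating VMs to their host servers and flags any host whose combined VM CPU usage exceeds a made-up limit. In a real tool, the inventory and usage numbers would come from the hypervisor’s management interface; here they are hard-coded sample values.

    # Sample inventory: VM name -> (host server, CPU percent of that host's capacity).
    # In a real tool, this would be pulled from the hypervisor's management interface.
    VM_INVENTORY = {
        "web-01":  ("host-a", 22.0),
        "web-02":  ("host-a", 35.0),
        "db-01":   ("host-a", 41.0),
        "test-07": ("host-b", 12.0),
    }

    HOST_CPU_LIMIT = 80.0  # hypothetical threshold

    def cpu_by_host(inventory: dict) -> dict:
        """Aggregate VM CPU usage per physical host."""
        totals = {}
        for vm, (host, cpu) in inventory.items():
            totals[host] = totals.get(host, 0.0) + cpu
        return totals

    def overloaded_hosts(inventory: dict) -> list:
        """Return hosts whose combined VM CPU usage exceeds the limit."""
        return [h for h, total in cpu_by_host(inventory).items() if total > HOST_CPU_LIMIT]

    if __name__ == "__main__":
        for host in overloaded_hosts(VM_INVENTORY):
            # A real tool might adjust VM priorities or trigger a migration here.
            print(f"{host} is over {HOST_CPU_LIMIT}% aggregate VM CPU load")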

Choosing the Right Tools for the Job

Most operating systems provide simple tools for troubleshooting performance issues on one or a few computers. In environments that support more than a few VMs, automated performance monitoring and management tools are practically a must-have. Figure 1 provides some details about features that can be useful.


Figure 1: Features to look for in performance management tools

Summary

Overall, many of the standard IT best practices apply equally to monitoring physical and virtual machines. When searching for tools to get the job done, however, there are certain features that can dramatically reduce the time and effort required to gain insight into production performance.

IT Policies: Service Level Agreements (SLAs)

This article was first published on SearchServerVirtualization.TechTarget.com.

Have you heard the one about the IT department whose goals were not well-aligned with the needs of its users? OK, so that’s probably not a very good setup for a joke. One of the most common challenges faced by most IT organizations is defining their internal customers’ requirements and delivering services based on them. In this Tip, I’ll provide details on how you can define Service Level Agreements (SLAs) and how you can use them to better manage virtualization and to reduce costs.

Agreeing to Service Level Agreements

Challenges related to deploying virtualization include skepticism about the technology. This often leads to resistance, along with a lack of knowledge about the potential cost and management benefits of using virtual machines.

The purpose of a Service Level Agreement is to define, prioritize, and document the real needs of an organization. All too often, IT departments tend to work in a relative vacuum, focusing on technology. The area of virtualization is no exception – it’s often much easier to create and deploy VMs than it is to determine the strategic needs of the company. The resulting problems range from poorly managed user expectations to large costs that might not directly address the most important challenges. The goal of containing costs is the basis for a lot of virtualization decisions, so it’s important to keep this in mind.

When developing SLAs, the most important aspect is for the process to be a team effort. Managers, IT staff, and end-users should all have input into the process. Typical steps in the process are shown in Figure 1.


Figure 1: Steps in the process of creating a new SLA

Defining SLA Goals and Metrics

SLA goals define the targeted levels of service that are to be expected from IT departments. Metrics are the specific statistics and data that must be measured to ensure that the levels are being met. Some examples might include:

  • Deployment: The time it takes to provision a new VM
  • Performance: Ensuring adequate application and service response times
  • Availability: Verifying virtual machine uptime
  • Change Management: Efficiently managing VM configuration updates

A well-defined SLA should include details about how the quality of the service is measured. For example, the goal for the uptime of a particular VM might be 99.9%. This can be measured using standard enterprise monitoring tools. Or, the deployment goal for a standard configuration of a virtual machine might be 4 business hours from the time of the request.
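
As a back-of-the-envelope illustration of measuring an uptime target, the short Python sketch below converts logged downtime minutes into an uptime percentage for a reporting period and compares it against the SLA goal. The numbers are hypothetical.

    def uptime_percent(total_minutes: float, downtime_minutes: float) -> float:
        """Uptime as a percentage of the reporting period."""
        return 100.0 * (total_minutes - downtime_minutes) / total_minutes

    if __name__ == "__main__":
        # Hypothetical month: 30 days, 50 minutes of recorded downtime for one VM.
        total = 30 * 24 * 60          # 43,200 minutes
        achieved = uptime_percent(total, 50)
        goal = 99.9                   # 99.9% allows roughly 43 minutes of downtime per month
        print(f"Achieved {achieved:.3f}% against a goal of {goal}% -> "
              f"{'met' if achieved >= goal else 'missed'}")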

Reducing Costs with SLAs

If you haven’t yet created SLAs, you might be thinking about the time and effort that it will take to set up and track the associated metrics. While there is certainly a cost to be paid for creating SLAs, there can also be numerous benefits. One important aspect is that areas for improvement can easily be identified. For example, if a business finds that it could improve its operations by more quickly deploying VMs, an investment in automation could help. Table 1 provides that and some other hypothetical examples.


Table 1: Examples of potential cost savings based on automation

Summary

IT organizations that constantly find themselves trying to keep up with virtualization-related requirements can benefit by creating SLAs. When done properly, this will help technical initiatives (such as VM deployments and server consolidations) stay in line with users’ expectations. Overall, this can help the entire organization make better decisions about the importance of virtual infrastructures.

Virtualization Security: Pros and Cons

This article was first published on SearchServerVirtualization.TechTarget.com.

Historically, organizations have fallen into the trap of thinking about security implications after they deploy new technology. Virtualization offers so many compelling benefits that it’s often an easy sell into IT architectures. But what about the security implications of using virtualization? In this tip, I’ll present information about the security-related pros and cons of using virtualization technology. The goal is to give you an overview of the different types of concerns you should have in mind. In a future article, I’ll look at best practices for addressing these issues.

Security Benefits of Virtualization

There are numerous potential security benefits to running workloads within a VM (vs. running them on physical machines). Figure 1 provides an overview of these benefits, along with some basic details.


Figure 1: Virtualization features and their associated security benefits.

Since virtual machines are created as independent and isolated environments, systems administrators have the ability to easily configure them in a variety of ways. For example, if a particular VM doesn’t require access to the Internet or to other production networks, the VM itself can be configured with limited connectivity to the rest of the environment. This helps reduce risks related to the infection of a single system affecting numerous production computers or VMs.

If a security violation (such as the installation of malware) does occur, a VM can be rolled back to a particular point-in-time. While this method may not work when troubleshooting file and application services, it is very useful for VMs that contain relatively static information (such as web server workloads).

Theoretically, a virtualization product adds a layer of abstraction between the virtual machine and the underlying physical hardware. This can help limit the amount of damage that might occur when, for example, malicious software attempts to modify data. Even if an entire virtual hard disk is corrupted, the physical hard disks on the host computer will remain intact. The same is true for other components such as network adapters.

Virtualization is often used for performing backups and disaster recovery. Due to the hardware-independence of virtualization solutions, the process of copying or moving workloads can be simplified. In the case of a detected security breach, a virtual machine on one host system can be shut down, and another “standby” VM can be booted on another system. This leaves plenty of time for troubleshooting, while quickly restoring production access to the systems.

Finally, with virtualization it’s easier to split workloads across multiple operating system boundaries. Due to cost, power, and physical space constraints, developers and systems administrators may be tempted to host multiple components of a complex application on the same computer. By spreading functions such as middleware, databases, and front-end web servers into separate virtual environments, IT departments can configure the best security settings for each component. For example, the firewall settings for the database server might allow direct communication with a middle-tier server and a connection to an internal backup network. The web server component, on the other hand, could be limited to access via standard HTTP ports.

This is by no means a complete list of the benefits of virtualization security, but it is a quick overview of the security potential of VMs.

Potential Security Issues

As with many technology solutions, there’s a potential downside to using virtual machines for security. Some of the risks are inherent in the architecture itself, while others are issues that can be mitigated through improved systems management. A common concern for adopters of virtual machine technology is the issue of placing several different workloads on a single physical computer. Hardware failures and related issues could potentially affect many different applications and users. In the area of security, it’s possible for malware to place a significant load on system resources. Instead of affecting just a single VM, these problems are likely to affect other virtualized workloads on the same computer.

Another major issue with virtualization is the tendency for environments to deploy many different configurations of systems. In the world of physical server deployments, IT departments often have a rigid process for reviewing systems prior to deployment. They ensure that only supported configurations are set up in production environments and that the systems meet the organization’s security standards. In the world of virtual machines, many otherwise-unsupported operating systems and applications can be deployed by just about any user in the environment. It’s often difficult enough for IT departments to know what they’re managing, let alone how to manage a complex and heterogeneous environment.

The security of a host computer becomes more important when different workloads are run on the system. If an unauthorized user gains access to a host OS, he or she may be able to copy entire virtual machines to another system. If sensitive data is contained in those VMs, it’s often just a matter of time before the data is compromised. Malicious users can also cause significant disruptions in service by changing network addresses, shutting down critical VMs, and performing host-level reconfigurations.

When considering security for each guest OS, it’s important to keep in mind that VMs are also vulnerable to attacks. If a VM has access to a production network, then it often will have the same permissions as a physical server. Unfortunately, they don’t have the benefits of limited physical access, such as controls that are used in a typical data center environment. Each new VM is a potential liability, and IT departments must ensure that security policies are followed and that systems remain up-to-date.

Summary

Much of this might cast a large shadow over the virtualization security picture. The first step in addressing security is to understand the potential problems with a particular technology. The next step is to find solutions. Rest assured, there are ways to mitigate these security risks. That’s the topic of my next article, “Improving VM Security: Best Practices.”

Improving VM Security: Best Practices

This article was first published on SearchServerVirtualization.TechTarget.com.

In my previous Tip, “Virtualization Security: Pros and Cons”, I described many considerations that IT organizations should keep in mind when planning to deploy virtual machines. To put it simply, the goal was to better define the problem. In this Tip, I’ll present some best practices for managing security for virtualization.

Assessing Risks

Before we dive further into technical details of securing VMs, it’s important to consider the potential security vulnerabilities that are relevant to a particular host and guest OS. Particular questions to ask include:

  • Does the guest or host contain sensitive information such as logon details or sensitive data? If so, how is this information protected?
  • Does the VM have access to the Internet?
  • Can the VM access other production computers?
  • Is the Guest OS running a supported operating system version?
  • Are host and guest OS’s updated automatically?

Answering each question can help clue you in to issues that may need to be addressed. For example, non-networked VMs that reside on a test network will likely have different security requirements from those that are running in a production environment. Let’s look at some details.
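
One informal way to use a checklist like this is to turn it into a simple scoring function. The Python sketch below is purely illustrative; the weights and tiers are arbitrary placeholders and would need to reflect your own security policy.

    # Arbitrary weights for the assessment questions above (higher = riskier).
    RISK_WEIGHTS = {
        "stores_sensitive_data": 3,
        "internet_access": 2,
        "reaches_production_network": 2,
        "unsupported_os_version": 2,
        "no_automatic_updates": 1,
    }

    def risk_tier(answers: dict) -> str:
        """Classify a VM based on yes/no answers to the assessment questions."""
        score = sum(weight for item, weight in RISK_WEIGHTS.items() if answers.get(item))
        if score >= 6:
            return "high"
        return "medium" if score >= 3 else "low"

    if __name__ == "__main__":
        test_vm = {
            "stores_sensitive_data": False,
            "internet_access": True,
            "reaches_production_network": False,
            "unsupported_os_version": True,
            "no_automatic_updates": True,
        }
        print(risk_tier(test_vm))   # 2 + 2 + 1 = 5 -> "medium"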

Implement Minimal Permissions

A fundamental aspect of maintaining security is to provide users and systems administrators with the minimal permissions they need to complete their jobs. Figure 1 provides an overview of the types of permissions that should be configured.


Figure 1: Types of permissions to consider when securing virtualization

On virtualization hosts, for example, only certain staff members should be able to start, stop, and reconfigure VMs. In addition, it’s important to configure virtual applications and services using limited system accounts. Finally, you should take into account the real requirements for VM configurations. For example, does every VM really need to be able to access the Internet? If so, what is the reason for this? Remember, in the case of a security breach, you want to minimize the number and types of systems that may be affected.

Virtual Machines are still “machines”

Whether an operating system is running on a physical machine or within a virtual one, it still should be regularly updated. Most IT organizations have already invested in some type of automated patch and update deployment process. With virtualization, there are a couple of additional challenges: First, IT departments must be aware of all VMs that are deployed in the environment. Second, each guest OS must be either protected by the update management solution, or must be kept up-to-date manually. Regardless of the approach, systems administrators should keep in mind the time and effort required.

Enforce Consistency and Quality

Simpler environments are much easier to manage than ones in which there is a huge variation in the number and types of systems that are supported. Whenever possible, IT departments should create a base library of reference virtual machines from which users and systems administrators should start. These base images should be verified to meet the IT department’s policies and must be kept up-to-date. Of course, it’s likely that some workloads require deviations from standard deployments. In those cases, IT departments must remain involved in the deployment of all new virtual machines (or, at least those that will have access to production resources).

Managing Moving Targets

The process of moving virtual machines between host servers is usually as simple as performing file copy operations. When a VM is moved, it is important for all relevant security settings and options to move with it. For example, permissions set on virtual hard disk files, and network access details, should be recreated on the target platform. Figure 2 provides some examples of relevant configuration settings to consider.


Figure 2: Security-related settings to consider when moving VMs

Security through Education

Even though the basic concept of virtualization technology is well-planted in most people’s minds, users and systems administrators are often confused about the potential use (and misuse) of virtual machines. IT departments, therefore, should verify that their staff is aware of the potential security risks related to deploying new VMs. For most practical purposes, deploying a new VM is similar to deploying a new physical server (though it’s often quicker, cheaper, and easier).

Using Third-Party Solutions

It’s no secret that virtualization technology creates additional burdens related to security. Numerous third-party vendors understand this and have either updated their existing enterprise management tools to include virtualization or have created totally new solutions with innovative approaches to limiting vulnerabilities. The focus of this article is on best practices, but when it comes to implementation, IT departments should consider evaluating these various tools.

Summary

Overall, organizations can realize the benefits of using virtualization to improve security. However, they will need to be diligent in the creation and deployment of new VMs, as well as with the maintenance of VMs after they’re deployed. As with many other IT solutions, you’ll need to focus on management in order to get the best benefits while minimizing vulnerabilities. It’s not an easy job, but it certainly can be done.

IT Policies: Managing VM Sprawl

This article was first published on SearchServerVirtualization.TechTarget.com.

Many organizations have turned to virtualization to help reduce the number of servers and other computers that they support in their environments. The problem itself has often been referred to as “server sprawl”. The cause of this phenomenon is often the excessive deployment of new physical machines. Often, systems administrators would deploy a new computer just to support a lightweight web application or a simple workload that could easily have been placed on another server. In other cases, the proliferation was unavoidable, as some applications and services just don’t behave nicely with others on the same machine.

Virtualization technology can help resolve the latter problem by allowing multiple independent workloads to run on the same computer at the same time. The process of deploying a new VM can be performed in a matter of minutes, thereby reducing costs and administrative effort. Unfortunately, these benefits can lead to a new problem: “VM sprawl”. IT organizations often find themselves tasked with keeping track of dozens or hundreds of new VMs seemingly overnight. When considering security, performance, reliability, and adhering to IT standards, the task of managing virtual systems can quickly become overwhelming. Fortunately, there are some ways to reduce some of the headaches. In this tip, I’ll present some best practices that can help.

Virtual Machine Deployment

The first step in managing VM sprawl is reining in the deployment of new VMs. Just because end-users and systems administrators have the ability to deploy new virtual machines does not necessarily mean that they should do so. IT departments should define a process for the deployment of a new VM. Figure 1 provides a basic example of some typical steps. Often, the suggestion of a process conjures up an image of a small army of pointy-haired bosses creating a new bureaucracy. In reality, it’s certainly possible to perform all of the steps in a process such as this in a matter of minutes.


Figure 1: Possible steps in a VM deployment process.

Best Practice: IT departments should remain involved in all virtual machine deployments.

Configuration Management

Another problem related to the widespread deployment of VMs is a lack of configuration consistency. Since users can choose from a wide array of operating systems and applications to run within a VM, the number of variations can grow exponentially. Additionally, the VMs that are deployed may not adhere to IT standards and guidelines for security and other settings.

One way to minimize these effects is for IT organizations to create a standardized set of base images in what is often referred to as a VM library. Users should be required to begin the creation of a new VM using one of these images. Figure 2 provides some examples of types of VM images that might be created.


Figure 2: Examples of virtual machine images that might be available in a VM library.

While developing a list of standard configurations can help reduce the number of configurations that are supported, IT staff should still remember the need to verify configurations before deployment into a production environment.

Best Practice: All users and systems administrators should base their deployments on IT-approved base images and supported configurations.

Keeping VMs Up-to-Date

An important concern for all deployments – both physical and virtual – is keeping systems up-to-date. Security patches and application upgrades can help minimize the risk of reliability problems and data loss. The good news is that IT organizations can depend on their standard patch and update deployment tools for managing virtual machines. Of course, this will only be possible if the guest OS is supported by those tools (another good reason for implementing configuration management).

Best Practice: Treat production VMs as if they were physical machines, and ensure that they are monitored and updated regularly.

Contain Yourself (and your VMs)!

If you’re responsible for limiting VM sprawl in your environment, you know that it’s important to give users what they want. Reducing deployment times and providing access to virtualization functionality can positively impact productivity while minimizing data center impacts. By keeping IT departments involved in deployment decisions, and making sure that VMs are properly managed, organizations can enjoy these benefits without suffering from unmitigated VM sprawl.

Implementing Disaster Recovery for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

One of the many benefits of virtualization technology is its ability to de-couple workloads and operating systems from the underlying hardware on which they’re running. The end result is portability – the ability to move a VM between different physical servers without having to worry about minor configuration inconsistencies. This ability can greatly simplify a common IT challenge: Maintaining a disaster recovery site.

In an earlier article, “Implementing Backups for Virtual Machines”, I focused on performing backups from within guest OS’s. In this article, I’ll look at the other approach: Performing VM backups from within the host OS.

Determining What to Back Up

From a logical standpoint, virtual machines themselves are self-contained units that include a virtual hardware configuration, an operating system, applications, and services. Physically, however, there are numerous files and settings that must be transferred to a backup or disaster recovery site. While the details will differ based on the virtualization platform, the general types of files that should be considered include:

  • Host server configuration data
  • Virtual hard disks
  • VM configuration files
  • Virtual network configuration files
  • Saved-state files

In some cases, thorough documentation and configuration management practices can replace the need to track some of the configuration data. Usually, all of the files except for the virtual hard disks are very small and can be transferred easily.
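
As a rough sketch of how this inventory might be gathered, the following Python snippet walks a host’s VM storage folder and picks out the categories of files listed above. The directory path and file extensions are assumptions and will differ by virtualization platform:

    import os

    # Assumed layout: one folder per VM containing its virtual hard disks,
    # configuration files, and saved-state files. Extensions are illustrative.
    VM_ROOT = r"D:\VirtualMachines"
    BACKUP_EXTENSIONS = {
        ".vhd": "virtual hard disk",
        ".vmc": "VM configuration",
        ".vsv": "saved state",
        ".xml": "host or virtual network configuration",
    }

    def files_to_back_up(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                extension = os.path.splitext(name)[1].lower()
                if extension in BACKUP_EXTENSIONS:
                    yield os.path.join(dirpath, name), BACKUP_EXTENSIONS[extension]

    for path, category in files_to_back_up(VM_ROOT):
        print("%-35s %s" % (category, path))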

Performing Host-Level Backups

The primary issue related to performing VM backups is that virtual hard disk (VHD) files are constantly in use while the VM is running. While it might be possible to make a copy of a VHD while the VM is running, there’s a good chance that caching and other factors will make the copy unusable. This means that “open file agents” and snapshot-based backups need to be aware of virtualization in order to generate reliable (and restorable) backups.

There are several ways in which you can perform host-level backups of VM-related files; Figure 1 provides an overview of the main options. Cold backups are reliable and easy to implement, but they require downtime. They’re suitable for systems that can be unavailable for at least the amount of time it takes to copy the associated virtual hard disk files. Hot backups, on the other hand, can be performed while a VM is running, but virtualization-aware tools are usually required to implement them.


Figure 1: Options for performing host-level VM backups
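
To illustrate the cold-backup option, here is a minimal Python sketch. The stop_vm() and start_vm() helpers are placeholders for whatever your platform provides (a command-line tool, a management API, or a script), and the paths are examples only:

    import datetime
    import os
    import shutil
    import subprocess

    VM_NAME = "web01-vm"
    VHD_PATH = r"D:\VirtualMachines\web01-vm\web01.vhd"
    BACKUP_DIR = r"\\backupserver\vmbackups"

    def stop_vm(name):
        # Placeholder: call your platform's tool to shut down or save the VM.
        subprocess.check_call(["manage_vm.cmd", "stop", name])

    def start_vm(name):
        # Placeholder: bring the VM back online once the copy is complete.
        subprocess.check_call(["manage_vm.cmd", "start", name])

    def cold_backup():
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
        target = os.path.join(BACKUP_DIR, "%s-%s.vhd" % (VM_NAME, stamp))
        stop_vm(VM_NAME)                    # downtime begins here
        try:
            shutil.copy2(VHD_PATH, target)  # copy the closed VHD file
        finally:
            start_vm(VM_NAME)               # restart even if the copy fails

    cold_backup()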

Backup Storage Options

One of the potential issues with performing backups of entire virtual hard disks is the total amount of disk space that will be required. IT organizations have several different storage-related options. They are:

  • Direct-Attached Storage (Host File System): This method involves storing copies of VHD files directly on the host computer. While the process can be quick and easy to implement, it doesn’t protect against the failure of the host computer or the host disk subsystem.
  • Network-based Storage: Perhaps the most common destination for VM backups is network-based storage. Data can be stored on devices ranging from standard file servers to dedicated network-attached storage (NAS) devices to iSCSI-based storage servers. Regardless of the technical details, bandwidth is an important concern, especially when dealing with remote disaster recovery sites.
  • Storage Area Networks (SANs): Organizations can use SAN-based connections to centrally manage storage, while still providing high performance for backups and related processes. SAN hardware is usually most applicable to backups performed within each of the disaster recovery sites, since there are practical limitations on the length of these connections.

Maintaining the Disaster Recovery Site

So far, we’ve looked at what you need to back up and some of the available storage technologies. The most important question, however, is how to maintain the disaster recovery site. Given that bandwidth and hardware may be limited, there are usually trade-offs. The first consideration is keeping up-to-date copies of VHDs and other files at both sites. While there are no magical solutions to this problem, many storage vendors provide bit-level or block-level replication features that can synchronize only the differences in large binary files. While there is usually some latency, this approach can minimize the bandwidth load while keeping files at both sites current.
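
The details vary by vendor, but the core idea behind block-level replication is simple: hash each fixed-size block of a file at both sites and ship only the blocks that differ. The Python sketch below is purely illustrative (it ignores real-world concerns such as crash consistency and writes that occur during the comparison):

    import hashlib

    BLOCK_SIZE = 1024 * 1024  # compare the file in 1 MB blocks

    def block_hashes(path):
        """Return one digest per fixed-size block of the file."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                hashes.append(hashlib.sha1(block).hexdigest())
        return hashes

    def changed_blocks(primary_path, replica_path):
        """Indexes of blocks that differ and would need to be shipped to the DR site."""
        primary = block_hashes(primary_path)
        replica = block_hashes(replica_path)
        length = max(len(primary), len(replica))
        # Blocks that exist on only one side always count as changed.
        return [i for i in range(length)
                if i >= len(primary) or i >= len(replica) or primary[i] != replica[i]]

    print(changed_blocks(r"D:\VirtualMachines\web01.vhd", r"E:\Replica\web01.vhd"))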

At the disaster recovery site, IT staff will need to determine the level of capacity that must be reserved for handling failure situations. For example, will the server already be under load? If so, what are the performance requirements during a fail-over? The process of performing a fail-over can be simplified through the use of scripts and automation, as in the sketch below. However, it’s critically important to test (and rehearse) the entire process before a disaster occurs.
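
A fail-over runbook can be captured as a script so that it runs the same way every time. The outline below is only a sketch: every helper function is hypothetical and would map to your platform’s management tools and your own network-redirection procedures:

    # Hypothetical fail-over sequence for one workload at the DR site.
    # A production script would log each step and stop on any failure.

    def replica_is_current(vm_name):
        # Placeholder: check the replication status or timestamp of the replica VHD.
        return True

    def register_vm(vm_name, vhd_path):
        # Placeholder: register the replicated VHD with the DR host.
        pass

    def start_vm(vm_name):
        # Placeholder: power on the VM on the DR host.
        pass

    def redirect_clients(vm_name):
        # Placeholder: update DNS or load-balancer entries to point at the DR site.
        pass

    def fail_over(vm_name, vhd_path):
        if not replica_is_current(vm_name):
            raise RuntimeError("Replica for %s is stale; investigate before failing over" % vm_name)
        register_vm(vm_name, vhd_path)
        start_vm(vm_name)
        redirect_clients(vm_name)

    fail_over("web01-vm", r"E:\Replica\web01.vhd")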

Planning for the Worst…

Overall, the task of designing and implementing a disaster recovery configuration can be challenging. The use of virtual machines can simplify the process by loosening the requirements for identical hardware at the primary and backup sites. The process still isn’t easy, but with proper planning and the right tools, it’s certainly possible. Good luck, and let’s hope you never need to use your DR handiwork!

Implementing Backups for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

In the early days of virtualization, it was common for users to run a few VMs in test and development environments. These VMs were important, but only to a small set of users. Now, it’s common for organizations to run mission-critical production workloads on their virtual platforms. Downtime and data loss can affect dozens or hundreds of users, and the rule is to ensure that virtual machines are at least as well protected as their physical counterparts. So how can this be done? In this article, I’ll present some information related to developing a backup strategy for virtual machines. In a related article, “Implementing Disaster Recovery for Virtual Machines,” I’ll look at some additional options for performing host-based backups.

Determining Recovery Requirements

If there’s a golden rule to follow related to implementing backups, it’s to start with enumerating your recovery requirements. After all, that’s the goal of performing backups: To allow for recovery. Considerations should include:

  • Data loss: What is an acceptable amount of data loss in a worst-case scenario? For some applications and services, it might be acceptable to lose several hours’ worth of data if doing so lowers backup costs. In other cases, near-real-time backups might be required.
  • Downtime windows: What is an acceptable amount of downtime? Some workloads will require rapid recovery in the case of the failure of a host. In other cases, a longer recovery window may be acceptable if it reduces cost or complexity.
  • Virtual machine configuration details: What are the CPU, memory, disk, and network requirements for the VM? These details can help prepare you for moving a workload to another physical host.
  • Identifying important data: Which information really needs to be backed up? In some cases, full VHD backups might make sense. More often, critical data such as web server content, data files, and related information is sufficient.
  • Budget and resources: Organizations have limits based on the amount of available storage space, bandwidth, human resources, and technical expertise. These details must be factored into any technical solution.

Once you have the business-related requirements in mind, it’s time to look at technical details.
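
One way to keep these requirements from getting lost in the technical details is to record them explicitly for each workload. The Python sketch below uses illustrative names and numbers to capture data-loss and downtime targets and to flag backup schedules that cannot meet them:

    # Recovery requirements per VM: maximum tolerable data loss and downtime,
    # both in hours. All names and values are examples only.
    requirements = {
        "mail-vm":     {"max_data_loss_hours": 1,  "max_downtime_hours": 2},
        "intranet-vm": {"max_data_loss_hours": 12, "max_downtime_hours": 24},
    }

    # How often each VM is actually backed up today (hours between backups).
    backup_interval_hours = {
        "mail-vm": 4,
        "intranet-vm": 12,
    }

    for vm, req in requirements.items():
        interval = backup_interval_hours.get(vm)
        if interval is None:
            print("%s: no backup schedule defined" % vm)
        elif interval > req["max_data_loss_hours"]:
            print("%s: backups every %d hours cannot meet the %d-hour data-loss target"
                  % (vm, interval, req["max_data_loss_hours"]))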

Backups for Guest OS’s

One common approach to performing backups for VMs is to treat virtual machines as if they were physical ones. Most organizations have invested in some form of centralized backup solution for their physical servers. Since VMs will often be running a compatible guest OS, it’s usually easy to install and configure a backup agent within them. Configuration details will include the frequency of backups, which data to protect, and associated monitoring jobs.

The technical details can vary significantly, based on the needs of the environment. Some examples might include:

  • Small Environments: When managing a few virtual machines (such as in development and test environments), simple scripting or automation might be enough to meet backup requirements (see the sketch after this list). For example, test results and data files might be stored on a shared network drive so they can be reviewed even when the VMs are unavailable.
  • Medium-Sized Environments: The job of supporting dozens or hundreds of virtual machines will require the use of a centralized, automated backup solution. Data is usually sent over a dedicated backup network and stored in one or more network locations.
  • Large Environments: When scaling to support many hundreds of virtual machines, managing direct-attached storage becomes nearly impossible. Organizations often invest in Storage Area Network (SAN) technology to support the increased bandwidth and disk space requirements. It may become difficult to identify important data when working with a vast array of different types of VMs. Organizations that can afford the storage resources may consider backing up the entire contents of their virtual hard disks to ensure that they can quickly recover them.
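
As a hedged example of the kind of lightweight automation the Small Environments item above refers to, this Python sketch copies a guest’s important data folders to a network share. The paths are assumptions about a particular environment, and a real script would add logging and error handling:

    import datetime
    import os
    import shutil

    # Assumed locations: folders produced inside the guest OS and a network
    # share that is backed up centrally. Both paths are illustrative.
    SOURCE_DIRS = [r"C:\TestResults", r"C:\Data"]
    NETWORK_SHARE = r"\\fileserver\vm-backups\test-vm01"

    def nightly_copy():
        stamp = datetime.date.today().isoformat()
        destination = os.path.join(NETWORK_SHARE, stamp)
        os.makedirs(destination, exist_ok=True)
        for source in SOURCE_DIRS:
            if os.path.isdir(source):
                # Copy the whole folder; a real script might filter by file type or date.
                shutil.copytree(source, os.path.join(destination, os.path.basename(source)),
                                dirs_exist_ok=True)

    nightly_copy()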

Again, regardless of the approach, the goal should be to meet business-level recovery requirements. Technical constraints such as limited storage space and limited bandwidth will play a factor in the exact configuration details.

Benefits of iSCSI

An important virtualization management concern is keeping track of virtual hard disks. The default option in many environments is to rely on local storage, but it can quickly become difficult to enumerate and back up all of these different servers. For many environments, SAN-based resources are too costly to support every virtual machine. The iSCSI standard provides an implementation of the SCSI protocol that runs over standard TCP/IP (copper-based Ethernet) networks. To a host computer or a guest OS, an iSCSI-attached volume appears like a local physical volume, so block-level operations such as formatting or even defragmenting the volume are possible.

From a backup standpoint, systems administrators can configure their host and/or guest OS’s to use network-attached storage for storing virtual hard disk data. For example, on the host system, virtual hard disks may be created on iSCSI volumes. Since the actual data resides on a network-based storage server, this approach lends itself to performing centralized backups. One important caveat is that organizations should thoroughly test the performance and reliability of their iSCSI infrastructures before relying on them for production workloads. Problems such as network latency can lead to reliability issues.
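
As an example of how simple the plumbing can be, the following sketch assumes a Linux host with the open-iscsi command-line tools installed; the portal address and target name are placeholders. It discovers the targets offered by a storage server and logs in to one so that the volume appears as a local disk, ready to hold virtual hard disk files:

    import subprocess

    PORTAL = "192.168.1.50"                      # IP address of the iSCSI storage server (placeholder)
    TARGET = "iqn.2008-01.com.example:vmstore"   # target IQN (placeholder)

    # Discover the targets offered by the portal, then log in to the one we want.
    # After login, the volume appears to the host as a local SCSI disk.
    subprocess.check_call(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
    subprocess.check_call(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])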

Other Options

In this article, I presented details related to performing virtual machine backups from within guest OS’s. Of course, this is only one option. Another useful approach is to perform backups at the level of the host OS. I’ll cover that topic in my next article, “Implementing Disaster Recovery for Virtual Machines.”