
Improving VM Security: Best Practices

This article was first published on SearchServerVirtualization.TechTarget.com.

In my previous Tip, “Pros and Cons of Virtualization Security”, I described many considerations that IT organizations should keep in mind when planning to deploy virtual machines. To put it simply, the goal was to better define the problem. In this Tip, I’ll present some best practices for managing security for virtualization.

Assessing Risks

Before we dive further into technical details of securing VMs, it’s important to consider the potential security vulnerabilities that are relevant to a particular host and guest OS. Particular questions to ask include:

  • Does the guest or host contain sensitive information such as logon credentials or confidential data? If so, how is this information protected?
  • Does the VM have access to the Internet?
  • Can the VM access other production computers?
  • Is the Guest OS running a supported operating system version?
  • Are host and guest OS’s updated automatically?

Answering each question can help clue you in to issues that may need to be addressed. For example, non-networked VMs that reside on a test network will likely have different security requirements from those that are running in a production environment. Let’s look at some details.
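The questions above can be turned into a simple scoring aid. The sketch below is a minimal, hypothetical example: the factor names and weights are illustrative assumptions, not an established standard.

```python
# Hypothetical sketch: scoring a VM against the risk questions above.
# Factor names and weights are illustrative assumptions, not a standard.

RISK_FACTORS = {
    "stores_sensitive_data": 3,
    "has_internet_access": 2,
    "can_reach_production": 3,
    "guest_os_unsupported": 3,
    "updates_not_automated": 2,
}

def risk_score(vm_profile):
    """Sum the weights of every risk factor that applies to a VM."""
    return sum(weight for factor, weight in RISK_FACTORS.items()
               if vm_profile.get(factor, False))

# A non-networked test VM scores lower than an Internet-facing one.
test_vm = {"has_internet_access": True, "updates_not_automated": True}
print(risk_score(test_vm))  # 4
```

Even a rough score like this helps rank which VMs deserve attention first.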

Implement Minimal Permissions

A fundamental aspect of maintaining security is to provide users and systems administrators with the minimal permissions they need to complete their jobs. Figure 1 provides an overview of the types of permissions that should be configured.


Figure 1: Types of permissions to consider when securing virtualization

On virtualization hosts, for example, only certain staff members should be able to start, stop, and reconfigure VMs. In addition, it’s important to configure virtual applications and services using limited system accounts. Finally, you should take into account the real requirements for VM configurations. For example, does every VM really need to be able to access the Internet? If so, what is the reason for this? Remember, in the case of a security breach, you want to minimize the number and types of systems that may be affected.

Virtual Machines are still “machines”

Whether an operating system is running on a physical machine or within a virtual one, it still should be regularly updated. Most IT organizations have already invested in some type of automated patch and update deployment process. With virtualization, there are a couple of additional challenges: First, IT departments must be aware of all VMs that are deployed in the environment. Second, each guest OS must be either protected by the update management solution, or must be kept up-to-date manually. Regardless of the approach, systems administrators should keep in mind the time and effort required.
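The first challenge above, knowing which VMs exist, reduces to comparing two inventories. Here is a minimal sketch, with illustrative VM names, of flagging deployed VMs that the patch-management tool does not know about:

```python
# Hypothetical sketch: find deployed VMs missing from the patch-management
# tool's inventory. The VM names used below are illustrative.

def unmanaged_vms(deployed, patch_inventory):
    """Return VMs that exist on hosts but are unknown to the update tool."""
    return set(deployed) - set(patch_inventory)

deployed = {"web01", "db01", "test03"}
managed = {"web01", "db01"}
print(sorted(unmanaged_vms(deployed, managed)))  # ['test03']
```

Any VM in the resulting set must either be enrolled in the update solution or patched manually.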

Enforce Consistency and Quality

Simpler environments are much easier to manage than ones in which there is a huge variation in the number and types of systems that are supported. Whenever possible, IT departments should create a base library of reference virtual machines from which users and systems administrators should start. These base images should be verified to meet the IT department’s policies and must be kept up-to-date. Of course, it’s likely that some workloads require deviations from standard deployments. In those cases, IT departments must remain involved in the deployment of all new virtual machines (or, at least those that will have access to production resources).

Managing Moving Targets

The process of moving virtual machines between host servers is usually as simple as performing file copy operations. When a VM is moved, it is important for all relevant security settings and options to move with it. For example, permissions set on virtual hard disk files, and network access details, should be recreated on the target platform. Figure 2 provides some examples of relevant configuration settings to consider.


Figure 2: Security-related settings to consider when moving VMs
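One way to keep settings from being lost in transit is to record them in a manifest before the move and compare against the target afterward. The sketch below is hypothetical; the setting keys (VHD permissions, network mapping) are assumptions for illustration.

```python
# Hypothetical sketch: record security-related settings in a manifest
# before a VM move, then check them on the target host. The setting keys
# used here are illustrative assumptions.

def build_manifest(vm_name, vhd_permissions, network):
    return {
        "vm": vm_name,
        "vhd_permissions": vhd_permissions,  # e.g. file mode recorded pre-move
        "network": network,                  # e.g. VLAN / virtual switch mapping
    }

def settings_drift(manifest, observed):
    """Return the keys whose values differ on the target host."""
    return sorted(k for k in manifest if manifest[k] != observed.get(k))

source = build_manifest("web01", {"mode": "0600"}, {"vlan": 10})
target = dict(source, network={"vlan": 1})   # VLAN was not recreated correctly
print(settings_drift(source, target))  # ['network']
```

A drift check like this can be run as the last step of any VM migration procedure.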

Security through Education

Even though the basic concept of virtualization technology is well established in most people’s minds, users and systems administrators are often confused about the potential use (and misuse) of virtual machines. IT departments, therefore, should verify that their staff is aware of the potential security risks related to deploying new VMs. For most practical purposes, deploying a new VM is similar to deploying a new physical server (though it’s often quicker, cheaper, and easier).

Using Third-Party Solutions

It’s no secret that virtualization technology creates additional burdens related to security. Numerous third-party vendors understand this and have either updated their existing enterprise management tools to include virtualization or have created entirely new solutions with innovative approaches to limiting vulnerabilities. The focus of this article is on best practices, but when it comes to implementation, IT departments should consider evaluating these tools.

Summary

Overall, organizations can realize the benefits of using virtualization to improve security. However, they will need to be diligent in the creation and deployment of new VMs, as well as with the maintenance of VMs after they’re deployed. As with many other IT solutions, you’ll need to focus on management in order to get the best benefits while minimizing vulnerabilities. It’s not an easy job, but it certainly can be done.

IT Policies: Managing VM Sprawl

This article was first published on SearchServerVirtualization.TechTarget.com.

Many organizations have turned to virtualization to help reduce the number of servers and other computers that they support in their environments. The problem itself has often been referred to as “server sprawl”. The cause of this phenomenon is often the excessive deployment of new physical machines. Often, systems administrators would deploy a new computer just to support a lightweight web application or a simple workload that could easily have been placed on another server. In other cases, the proliferation was unavoidable, as some applications and services just don’t behave nicely with others on the same machine.

Virtualization technology can help resolve the latter problem by allowing multiple independent workloads to run on the same computer at the same time. The process of deploying a new VM can be performed in a matter of minutes, thereby reducing costs and administrative effort. Unfortunately, these benefits can lead to a new problem: “VM sprawl”. IT organizations often find themselves tasked with keeping track of dozens or hundreds of new VMs seemingly overnight. When considering security, performance, reliability, and adhering to IT standards, the task of managing virtual systems can quickly become overwhelming. Fortunately, there are some ways to reduce some of the headaches. In this tip, I’ll present some best practices that can help.

Virtual Machine Deployment

The first step in managing VM sprawl is reining in the deployment of new VMs. Just because end-users and systems administrators have the ability to deploy new virtual machines does not necessarily mean that they should do so. IT departments should define a process for the deployment of a new VM. Figure 1 provides a basic example of some typical steps. Often, the suggestion of a process conjures up an image of a small army of pointy-haired bosses creating a new bureaucracy. In reality, it’s certainly possible to perform all of the steps in a process such as this in a matter of minutes.


Figure 1: Possible steps in a VM deployment process.

Best Practice: IT departments should remain involved in all virtual machine deployments.

Configuration Management

Another problem related to the widespread deployment of VMs is a lack of configuration consistency. Since users can choose from a wide array of operating systems and applications to run within a VM, the number of variations can grow exponentially. Additionally, the VMs that are deployed may not adhere to IT standards and guidelines for security and other settings.

One way to minimize these effects is for IT organizations to create a standardized set of base images in what is often referred to as a VM library. Users should be required to begin the creation of a new VM using one of these images. Figure 2 provides some examples of types of VM images that might be created.


Figure 2: Examples of virtual machine images that might be available in a VM library.

While developing a list of standard configurations can help reduce the number of configurations that are supported, IT staff should still remember the need to verify configurations before deployment into a production environment.

Best Practice: All users and systems administrators should base their deployments on IT-approved base images and supported configurations.

Keeping VMs Up-to-Date

An important concern for all deployments – both physical and virtual – is keeping systems up-to-date. Security patches and application upgrades can help minimize the risk of reliability problems and data loss. The good news is that IT organizations can depend on their standard patch and update deployment tools for managing virtual machines. Of course, this will only be possible if the guest OS is supported by those tools (another good reason for implementing configuration management).

Best Practice: Treat production VMs as if they were physical machines, and ensure that they are monitored and updated regularly.

Contain Yourself (and your VMs)!

If you’re responsible for limiting VM sprawl in your environment, you know that it’s important to give users what they want. Reducing deployment times and providing access to virtualization functionality can positively impact productivity while minimizing data center impacts. By keeping IT departments involved in deployment decisions, and making sure that VMs are properly managed, organizations can enjoy these benefits without suffering from unmitigated VM sprawl.

Implementing Disaster Recovery for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

One of the many benefits of virtualization technology is its ability to de-couple workloads and operating systems from the underlying hardware on which they’re running. The end result is portability – the ability to move a VM between different physical servers without having to worry about minor configuration inconsistencies. This ability can greatly simplify a common IT challenge: Maintaining a disaster recovery site.

In an earlier article, “Implementing Backups for Virtual Machines”, I focused on performing backups from within guest OS’s. In this article, I’ll look at the other approach: Performing VM backups from within the host OS.

Determining What to Back Up

From a logical standpoint, virtual machines themselves are self-contained units that include a virtual hardware configuration, an operating system, applications, and services. Physically, however, there are numerous files and settings that must be transferred to a backup or disaster recovery site. While the details will differ based on the virtualization platform, the general types of files that should be considered include:

  • Host server configuration data
  • Virtual hard disks
  • VM configuration files
  • Virtual network configuration files
  • Saved-state files

In some cases, thorough documentation and configuration management practices can replace the need to track some of the configuration data. Usually, all of the files except for the virtual hard disks are very small and can be transferred easily.
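When scripting a backup, it helps to sort a VM's files into the categories above and flag anything unrecognized. The sketch below is a hypothetical illustration: file extensions differ by platform (.vhd/.vmdk for disks, .vmc/.vmx for configuration, and so on), so this mapping is an assumption, not a definitive list.

```python
# Hypothetical sketch: group a VM's files into the backup categories
# discussed above. The extension-to-category mapping is an assumption;
# actual extensions vary by virtualization platform.

from pathlib import PurePath

BACKUP_CATEGORIES = {
    ".vhd": "virtual hard disk",
    ".vmdk": "virtual hard disk",
    ".vmc": "VM configuration",
    ".vmx": "VM configuration",
    ".vsv": "saved state",
}

def categorize(files):
    """Map each file to a backup category; unknown files get flagged."""
    return {f: BACKUP_CATEGORIES.get(PurePath(f).suffix, "unknown")
            for f in files}

result = categorize(["web01.vhd", "web01.vmc", "web01.vsv"])
print(result["web01.vhd"])  # virtual hard disk
```

Files that come back as "unknown" deserve a manual look before being excluded from the backup set.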

Performing Host-Level Backups

The primary issue related to performing VM backups is the fact that VHD files are constantly in use while the VM is running. While it might be possible to make a copy of a VHD while it is running, there’s a good chance that caching and other factors might make the copy unusable. This means that “open file agents” and snapshot-based backups need to be aware of virtualization in order to generate reliable (and restorable) backups.

There are three main ways in which you can perform host-level backups of VM-related files. Figure 1 provides an overview of these options. Cold backups are reliable and easy to implement, but they do require downtime. They’re suitable for systems that may be unavailable for at least the amount of time that it takes to make a copy of the associated virtual hard disk files. Hot backups, on the other hand, can be performed while a VM is running. Virtualization-aware tools are usually required to implement this type of backup.


Figure 1: Options for performing host-level VM backups
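The cold-backup option is straightforward to script. The sketch below is a minimal, hedged example: the `stop_vm`/`start_vm` callables stand in for whatever commands your virtualization platform provides and are assumptions here, not a real API.

```python
# Hypothetical sketch of a cold backup: stop the VM, copy its files,
# restart it. stop_vm/start_vm are placeholders for your platform's
# actual stop/start commands.

import shutil
from pathlib import Path

def cold_backup(vm_files, dest_dir, stop_vm, start_vm):
    """Copy VM files only while the VM is down, so the VHDs are consistent."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stop_vm()
    try:
        for f in vm_files:
            shutil.copy2(f, dest / Path(f).name)
    finally:
        start_vm()  # always restart, even if a copy fails
    return dest
```

The try/finally structure matters: the VM should come back up even if the copy step raises an error partway through.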

Backup Storage Options

One of the potential issues with performing backups of entire virtual hard disks is the total amount of disk space that will be required. IT organizations have several different storage-related options. They are:

  • Direct-Attached Storage (Host File System): This method involves storing copies of VHD files directly on the host computer. While the process can be quick and easy to implement, it doesn’t protect against the failure of the host computer or the host disk subsystem.
  • Network-based Storage: Perhaps the most common destination for VM backups is network-based storage. Data can be stored on devices ranging from standard file servers to dedicated network-attached storage (NAS) devices to iSCSI-based storage servers. Regardless of the technical details, bandwidth is an important concern. This is especially true when dealing with remote disaster recovery sites.
  • Storage Area Networks (SANs): Organizations can use SAN-based connections to centrally manage storage, while still providing high performance for backups and related processes. SAN hardware is usually most applicable to backups performed within each of the disaster recovery sites, since there are practical limitations on the length of these connections.

Maintaining the Disaster Recovery Site

So far, we’ve looked at what you need to back up and some available storage technologies. The most important question, however, is that of how to maintain the disaster recovery site. Given that bandwidth and hardware may be limited, there are usually trade-offs. The first consideration is related to keeping up-to-date copies of VHDs and other files at both sites. While there are no magical solutions to this problem, many storage vendors provide for bit-level or block-level replication that can synchronize only the differences in large binary files. While there is usually some latency, this can minimize the bandwidth load while keeping files at both sites current.
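The core idea behind block-level replication can be sketched in a few lines. This is an illustrative toy, not how any vendor's product actually works; the 4 KB block size and hash comparison are arbitrary choices for the example.

```python
# Hypothetical sketch of block-level replication: compare the local VHD
# contents to the remote copy block by block and send only the blocks
# that changed. Block size and hashing are illustrative choices.

import hashlib

BLOCK_SIZE = 4096

def changed_blocks(local, remote, block_size=BLOCK_SIZE):
    """Yield (offset, data) for each block whose contents differ."""
    for offset in range(0, len(local), block_size):
        a = local[offset:offset + block_size]
        b = remote[offset:offset + block_size]
        if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
            yield offset, a

local = b"A" * 8192                   # current VHD contents
remote = b"A" * 4096 + b"B" * 4096    # stale copy at the DR site
deltas = list(changed_blocks(local, remote))
print(len(deltas), deltas[0][0])  # 1 4096
```

Transferring one changed 4 KB block instead of an entire multi-gigabyte VHD is what makes WAN replication to a DR site feasible.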

At the disaster recovery site, IT staff will need to determine the level of capacity that must be reserved for managing failure situations. For example, will the server already be under load? If so, during a fail-over, what are the performance requirements? The process of performing a fail-over can be simplified through the use of scripts and automation. However, it’s critically important to test (and rehearse) the entire process before a disaster occurs.

Planning for the Worst…

Overall, the task of designing and implementing a disaster recovery configuration can be challenging. The use of virtual machines can simplify the process by loosening the requirements for identical hardware at the primary and backup sites. The process still isn’t easy, but with proper planning and the right tools, it’s certainly possible. Good luck, and let’s hope you never need to use your DR handiwork!

Implementing Backups for Virtual Machines

This article was first published on SearchServerVirtualization.TechTarget.com.

In the early days of virtualization, it was common for users to run a few VMs in test and development environments. These VMs were important, but only to a small set of users. Now, it’s common for organizations to run mission-critical production workloads on their virtual platforms. Downtime and data loss can affect dozens or hundreds of users, and the rule is to ensure that virtual machines are at least as well protected as their physical counterparts. So how can this be done? In this article, I’ll present some information related to developing a backup strategy for virtual machines. In a related article, “Implementing Disaster Recovery for Virtual Machines,” I’ll look at some additional options for performing host-based backups.

Determining Recovery Requirements

If there’s a golden rule to follow related to implementing backups, it’s to start with enumerating your recovery requirements. After all, that’s the goal of performing backups: To allow for recovery. Considerations should include:

  • Data loss: What is an acceptable amount of data loss, in a worst-case scenario? For some applications and services, it might be acceptable to lose several hours’ worth of data if it can lower backup costs. In other cases, near-realtime backups might be required.
  • Downtime windows: What is an acceptable amount of downtime? Some workloads will require rapid recovery in the case of the failure of a host. In other cases, longer downtime windows may be acceptable.
  • Virtual machine configuration details: What are the CPU, memory, disk, and network requirements for the VM? These details can help prepare you for moving a workload to another physical host.
  • Identifying important data: Which information really needs to be backed up? In some cases, full VHD backups might make sense. More often, critical data such as web server content, data files, and related information is sufficient.
  • Budget and Resources: Organizations have limits based on the amount of available storage space, bandwidth, human resources, and technical expertise. These details must be factored into any technical solution.

Once you have the business-related requirements in mind, it’s time to look at technical details.

Backups for Guest OS’s

One common approach to performing backups for VMs is to treat virtual machines as if they were physical ones. Most organizations have invested in some method of centralized backup solution for their physical servers. Since VMs will often be running a compatible guest OS, it’s usually easy to install and configure a backup agent within them. Configuration details will include the frequency of backups, which data to protect, and associated monitoring jobs.

The technical details can vary significantly, based on the needs of the environment. Some examples might include:

  • Small Environments: When managing a few virtual machines (such as in development and test environments), simple scripting or automation might be enough to meet backup requirements. For example, test results and data files might be stored on a shared network drive so they can be reviewed even when the VMs are unavailable.
  • Medium-Sized Environments: The job of supporting dozens or hundreds of virtual machines will require the use of a centralized, automated backup solution. Data is usually sent over a dedicated backup network and stored in one or more network locations.
  • Large Environments: When scaling to support many hundreds of virtual machines, managing direct-attached storage becomes nearly impossible. Organizations often invest in Storage Area Network (SAN) technology to support the increased bandwidth and disk space requirements. It may become difficult to identify important data when working with a vast array of different types of VMs. Organizations that can afford the storage resources may consider backing up the entire contents of their virtual hard disks to ensure that they can quickly recover them.

Again, regardless of the approach, the goal should be to meet business-level recovery requirements. Technical constraints such as limited storage space and limited bandwidth will play a factor in the exact configuration details.
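For the small-environment case, the "simple scripting" mentioned above can be as modest as copying designated guest data files to a dated folder on a network share. The sketch below is a hypothetical example; the paths and file selection are placeholders, not a recommended layout.

```python
# Hypothetical sketch of a simple guest-data backup for small
# environments: copy important files to a dated folder on a share.
# Paths and file choices are placeholders.

import datetime
import shutil
from pathlib import Path

def backup_to_share(files, share_root):
    """Copy important guest data to a dated folder on a network share."""
    dest = Path(share_root) / datetime.date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy2(f, dest / Path(f).name)
    return dest
```

Scheduled nightly from within the guest (or from a management host), a script like this covers the test-results-on-a-share scenario described for small environments.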

Benefits of iSCSI

An important virtualization management-related concern is that of keeping track of virtual hard disks. The default option in many environments is to rely upon local storage. The problem is that it can quickly become difficult to enumerate and back up all of these different servers. For many environments, SAN-based resources are too costly for supporting all virtual machines. The iSCSI standard provides an implementation of SCSI that runs over standard Ethernet networks. To a host computer or a guest OS, an iSCSI-attached volume appears like a local physical volume. Block-level operations such as formatting or even defragmenting the volume are possible.

From a backup standpoint, systems administrators can configure their host and/or guest OS’s to use network-attached storage for storing virtual hard disk data. For example, on the host system, virtual hard disks may be created on iSCSI volumes. Since the actual data resides on a network-based storage server, this approach lends itself to performing centralized backups. One important caveat is that organizations should thoroughly test the performance and reliability of their iSCSI infrastructures before relying on them for production workloads. Issues such as latency can cause reliability issues.

Other Options

In this article, I presented details related to performing virtual machine backups from within guest OS’s. Of course, this is only one option. Another useful approach is to perform backups at the level of the host OS. I’ll cover that topic in my next article, “Implementing Disaster Recovery for Virtual Machines.”