Archive for category Storage

Sept. 20, 2016: Software-Defined Storage Features in Windows Server 2016

I’ll be giving a presentation as part of BrightTALK‘s Software-Defined Week of presentations.  The free session (registration required) is titled Software-Defined Storage Features in Windows Server 2016.  Here’s an overview of the presentation topic:

Meeting storage-related requirements has been a long-standing challenge for IT organizations, and added workload requirements from cloud- and software-defined architectures can quickly add to the burden.  Common goals are to implement solutions that provide high-availability and high performance, with low capital and operational costs.  The Windows Server 2016 platform includes a tremendous list of improved and new features that are available “out-of-the-box”.  That makes the biggest barrier understanding how, when, and why to implement these features.

This presentation will cover a wide array of different features in the Windows Server platform, including Storage Spaces and Storage Spaces Direct; SMB 3.x improvements; storage tiering; Storage QoS; Storage Replica; data de-duplication; and many other features.  When compared to the costs and administrative complexity of traditional SANs, these tools can provide ready solutions for environments of all sizes and types.  The focus will be on technical details about the features and capabilities of the Windows Server platform, and how organizations can make best use of them.

It would be great if you can make it for the live session, but if not, it will also be available on-demand after the event is complete.

Note: To access the recording of this session (and all of my past BrightTALK webinars), please search using

Building and Managing Storage Environments for MSPs: Free Webinar on 05/27/2015

I’ll be presenting a Ziff-Davis webinar on the topic of Building and Managing Storage Environments for MSPs.  The topic will cover best practices and considerations for moving from local-based storage architectures to storage-based service offerings.  For more information, or to register for the free webinar, please visit Building and Managing Storage Environments for MSPs.

Optimize SQL Server with Flash Storage: Webinar

On March 12th, I’ll be presenting a free online webinar titled, “Optimize SQL Server Performance with Flash-Based Storage – On a Budget“.  Here’s an overview of what the session will cover:

Are you tired of database latency? Low transaction throughput? Have you created a complicated storage design just to eke out a few more IOPS? If you answered yes to any of these questions, then you should consider investigating flash-based storage. A flash-based storage array provides consistent performance, simple storage design, and low latency for SQL Server workloads such as OLTP, Business Intelligence and Data Warehousing.

Register for the webinar to learn more about how moving to flash-based storage addresses many of the pain points that application owners and DBAs face in the spindle world.

SQL Server and storage-related issues are among the most common issues I run into with my clients.  This presentation, sponsored by PureStorage, will try to dispel many of the myths and not-so-best practices, and will include some real-world input from Rob “barkz” Barker, Solutions Architect at Pure Storage.  Be sure to register if you’re planning to attend!

Windows Server Software-Defined Storage and Networking Presentation: Dec. 5th, 2014


I’m excited to have been given the opportunity to present at the December meeting of the Central Texas Systems Management User Group (CTSMUG)!  I’ll try to post some more details on the topic here within the next week. 

The meeting starts at 10:00am and includes lunch.  For details (including directions and a full agenda) and to register to attend, see the Event Details.  There are lots of other interesting topics on the agenda, so do try to attend if you’re in the Austin area!

BrightTALK Webinar: Windows Server Enterprise Storage and Networking Features

On December 9th, 2014, I’ll be presenting a free online webinar titled, Windows Server Enterprise Storage and Networking Features.  Here’s a quick overview of the topic:

IT professionals face many challenges in their struggle to deliver the infrastructure, applications, and services that their organizations need. Common issues include limited budgets, datacenter infrastructure complexity, and technical expertise to support a wide variety of changing goals. The presentation will provide guidance and best practices for data center admins that are looking for cost-effective ways to increase automation, improve hardware resource utilization, and provide HA/DR features without having to make costly investments in third-party products.
This webinar will discuss:

  • Features that include support for iSCSI-based SANs
  • SMB-based virtual disks
  • Management UI and automation improvements
  • The latest version of Hyper-V
  • Low-cost high-availability

Register online if you’d like to attend!

Note: To access the recording of this session (and all of my past BrightTALK webinars), please search using

MVP Blog Post: Hyper-V High-Availability Without a SAN

I have mentioned before that my favorite features in Windows Server 2012 are related to improvements in the storage stack.  While it might not seem as exciting as some of the many other new features, the number and types of scenarios that storage and networking improvements allow are tremendous.  Best of all, these features ship “in the box” (that is, as part of the product itself), so no third-party tools, utilities, or drivers are required.

I recently wrote an article for the Microsoft MVP Award Program blog that covers some ways in which IT pros can use these features to implement high-availability and other Enterprise-level features using Windows Server 2012.  Here’s a brief excerpt from the post:

Enter Windows Server 2012: A server product that ships with all of the required ingredients to brew your own highly-available storage environment. In this post, I’ll focus on the storage and high-availability-related features that ship as part of Windows Server 2012. Specifically, I’ll discuss what’s required to build and deploy a fault-tolerant Hyper-V deployment using only in-box features. I’ll start with the configuration basics and then list higher-end features that are available for production environments.

For the complete post, please see Windows Server 2012: Hyper-V High-Availability without a SAN, and feel free to post questions or comments there!

BrightTALK Presentation: Application Performance Monitoring (APM) in Virtualized and Cloud Environments

On June 6th, I’ll be presenting another live, free webinar on BrightTALK.  The title is Maintaining Service Levels with APM in Virtualized & Cloud Environments.  Here’s the abstract/overview of the content:

Significant changes in IT infrastructure approaches are driving data centers towards high levels of efficiency and automation. Virtualization and public/private/hybrid cloud architectures can help reduce costs and simplify administration, but the primary goal for IT organizations is to ensure that the applications and services they deliver meet or exceed their users’ needs. This presentation will provide advice and recommendations that focus on end-to-end monitoring and management of highly virtualized and cloud infrastructure components, including user experience, storage, networking, and hypervisors.

Visit the site to register for the webinar.  And, while you’re there, be sure to check out the huge library of related content that’s available for free!


Note: To access the recording of this session (and all of my past BrightTALK webinars), please search using

VKernel Podcast: Top New Features in Hyper-V 3.0 and Windows Server 2012

A few weeks ago, during the TEC 2012 Conference, I had the opportunity to record a brief podcast that provides an overview of the new features in Microsoft’s upcoming server update.  In this brief interview, fellow Virtualization MVP Hans Vredevoort and I discuss some of the features we’re most looking forward to.  You can access the audio-only podcast on YouTube, and you can download an MP3 version.

Here’s a brief overview of the topic:

VKernel’s Mattias Sundling discusses The Expert Conference event with MVPs Hans Vredevoort and Anil Desai. Topics include highlights of the technical sessions presented by Microsoft, Quest and industry experts, as well as updates and highlights of Windows Server 2012 and Hyper-V 3.0 advances.

Both Hans and I gave presentations at the conference and focused on storage-related improvements.  Hans’s presentation was an excellent demonstration of how quickly and easily administrators can set up the new Scale-Out File Server role in Windows Server 2012, using nothing but a single laptop (that is, no shared storage and no third-party products or tools).  The best part was the conclusion: Hans set up a highly-available Hyper-V cluster configuration and did a live migration of a VM using only his laptop (and several virtual machines).

Thanks to Mattias Sundling, Evangelist & vExpert at VKernel for arranging, recording, and posting the podcast!

Virtualization and Storage Presentations at TEC 2012

It’s still a few months away, but I’ll be presenting at two storage-related presentations in the Virtualization and Cloud track at The Experts Conference (TEC) 2012 in San Diego, CA.  Below are the abstracts.  For more information about the conference, please visit the TEC 2012 Conference web site.


Storage Improvements in Windows Server 8 / Hyper-V 3.0

Virtualization architects and administrators have long sought quicker, simpler and more cost effective ways to scale and manage storage in their data centers. Microsoft has made many significant improvements in the architecture and storage features of Hyper-V 3.0 and the Windows Server 8 platform. Examples include support for SMB-based virtual disks, management UI improvements, network stack improvements, Hyper-V Replicas, NTFS reliability improvements, incremental VHD backups, storage de-duplication, offloaded data transfer, SMB protocol improvements, and Storage Spaces. These features can help improve storage management for many different types of virtualization deployments and can help bring the idea of cloud-based automation closer to reality.

This session will focus on technical details and demonstrations of new features in the Windows Server 8 platform and in Hyper-V 3.0. The focus will be on practical suggestions for how and when the new features should be used to reduce costs, simplify administration, and increase performance.

Designing Storage for Virtual Environments

One of the most common issues related to virtual infrastructure design is related to planning for and managing the storage environment. Successful SAN, NAS, and local storage deployments require the provisioning of highly-reliable, high-performance, cost-effective solutions to meet business and technical needs. The challenge for IT is in consolidating and optimizing infrastructures while staying within budgets. The primary concerns – including storage capacity, performance, and reliability – can drive the success or failure of virtualized deployments.

This presentation begins with recommendations for designing a storage environment based on requirements, starting with a solid understanding of application workload characteristics. Strategies for collecting storage statistics through historical and real-time performance monitoring can provide valuable insight into real requirements. Based on this data, IT departments can compare different storage approaches, including centralized network-based storage, and cloud-based options. Important features to consider include file- and block-level de-duplication, thin provisioning, high-availability, clustering, and disaster recovery. Attendees will learn methods by which they can best plan for, implement, manage, and monitor storage for virtualization in their own environments.

TEC 2011: Virtualization Approaches and Storage Presentations

As I mentioned in a previous post, I’m scheduled to speak at The Experts Conference 2011 in Las Vegas (April 17 – 20, 2011).  I’ll be giving two presentations in TEC’s new Virtualization and Cloud track.  My session abstracts are below.  In addition, Session Abstracts for each of the tracks and the Conference Agenda are now available online.  Let me know if you plan to attend or if there’s anything you’d like to see me cover (either in the presentations or on this blog).

Storage Considerations for Virtualization

Key considerations related to successful virtualization deployments revolve around provisioning highly-reliable, cost-effective solutions to meet business and technical needs. The challenge for IT is in consolidating and optimizing infrastructures while staying within budgets. The primary concerns – including storage capacity, performance, and reliability – can drive the success or failure of virtualized deployments.

This presentation begins with recommendations for designing a storage environment based on business and technical requirements and a solid understanding of application workload requirements. Strategies for collecting storage statistics through historical and real-time performance monitoring can provide valuable insight into real requirements. Based on this data, IT departments can compare different storage approaches, including centralized network-based storage, and cloud-based options. Important features to consider include data de-duplication, thin provisioning, high-availability, clustering, and disaster recovery. Attendees will learn methods by which they can best plan for, implement, manage, and monitor storage for virtualization in their own environments.

Evaluating Virtualization Approaches

The term "virtualization" can apply to a broad range of varying technologies, ranging from storage to networks to servers to applications. The primary goal of these approaches is to simplify management, increase efficiency, allow for scalability, and meet reliability requirements. With recent improvements in virtualization technology, the challenge for IT professionals is in deciding which approaches are the most relevant, given specific requirements.

The focus of this presentation is on understanding the technology behind various virtualization approaches, including presentation-, application-, session-, user state-, desktop-, and server-virtualization. The topic will begin with information on understanding business, technical, and service requirements. These details will then be used to compare a wide variety of different approaches to solving common IT problems. Attendees will receive information that will help them choose which approaches make sense in their own environments.

Virtualization and Storage Presentations at The Experts Conference

I’m currently scheduled to speak on two topics at The Experts Conference 2011 in Las Vegas (April 17 – 20, 2011).  The conference has tracks that focus on Directory Services, Exchange, SharePoint, and Virtualization.

The two topics I’m planning to present are tentatively titled Storage Considerations for Virtualization and Evaluating Virtualization Approaches. I’ll post more details and abstracts here as the conference gets closer.

Mozy Support Nightmares: A Cloudy Forecast for Online Storage and Backups?

Over the last year, I have frequently been asked to write and speak about storage and cloud-based service offerings.  Remote storage is a compelling technology for consumers and IT departments, and it’s a good starting point for those that might be interested in dipping their toes (or heads) into the more-ethereal-than-Ethernet “cloud”.

Trouble in Cloud City

Several years ago, I wrote a blog post about the virtues and benefits of online backups (see Online Backup Options).  Since then, I have recommended cloud-based storage (and, Mozy, in particular) to a rather large number of IT professionals, friends, and family.  The idea itself is compelling: Online backups have the potential of simplifying the backup process for most users, while providing secure remote storage.  But what happens when something goes wrong?  Or if you just have a technical question?

I don’t often highlight specific companies for poor customer service – it’s almost to be expected from many organizations these days – but a recent interaction I had with Mozy’s Customer Support has ended in my completely giving up on trying to resolve what should have been a very simple issue.  Without getting into the technical specifics, I have been trying to perform backups of Encrypting File System (EFS)-encrypted local files to the cloud.  From the latest information I could find, Mozy supports both local and online backups of EFS encrypted files.  That wasn’t my experience, though – I received cryptic error messages and overall backup failures.  So, I decided to contact Mozy’s Customer Support, creating a case that included my log files and a detailed description of the problem. 

A Little Rain Must Fall…

In summary: It has been over two weeks now, and after three escalations, I’m no closer to resolving the problem.  Just about every response I have sent to Mozy (along with requests for escalation) has been ignored.  In fact, a US Escalations Customer Support Manager has barely managed to feign any interest in my issue at all.  An hour-long phone call with a Level 2 Customer Support technician resulted in his disabling several necessary services on my primary Windows 7 workstation (I had to keep records of this so I could reverse the changes myself), and poring through log files that provided little useful information.  The response to my most recent request for support has been a request for me to (again) restate the original problem (it’s thoroughly documented in their support system – I just can’t get anyone to read it).  I do plan to escalate this issue to the Director or VP level at Mozy, as I’m somehow still hopeful that someone at the organization will care.

Cloud Compatibility

One of the most promising aspects of cloud-based service offerings is a reduction in complexity.  Rather than relying on complicated application deployments (the story goes), we can leave all of the details to services that are provided off-site.  But what about support and compatibility issues?  What happens when two or more cloud services vendors decide that their services are incompatible?  My case with Mozy might be that type of issue, though there doesn’t seem to be any official documentation or support boundary defining which products can peacefully co-exist with it on the same system and which configurations are supported.  And what if the vendor decides that features and functionality I require aren’t important to them?  Sure, I could run into the same problems with local applications, but workarounds are far easier to find when I control both communication endpoints.

Risk Mitigation

I understand that I’m hardly the first person to suffer from poor technical support, but this experience has made me reconsider the risks of cloud-based services in general.  I’m hardly an important customer for Mozy, but I am paying for their service and I really do rely on the sanctity of my backups.  My typical response to organizations that doubt the cloud is to first compare the reliability of their own datacenter infrastructure against that of an online service provider’s.  However, in this case, I’m completely stuck – I either need to reduce security at my file system level, discontinue the use of Mozy (and transfer 25 GB of data to a competing service), or revert to local backups.

All Eggs in One Cloud?

As the entire world moves to a greater reliance on Internet connections and online services, it becomes harder to create fall-back plans and alternatives.  It’s simply not practical or cost-effective to plan for your service providers to fail you.  What’s the point in online backups if I need to have a backup plan for my online backup provider?

That makes me curious: Who else has had a recent experience that has made them question the value of hosted services?  Was it downtime, client application issues, availability, poor customer support, or all of the above?  And how safe do you feel when your mission-critical IT infrastructure is resting on clouds?

Webcast: Network-Attached Storage (NAS) and Virtualization

I recently recorded an on-demand webcast, sponsored by Hewlett-Packard, titled NAS and virtualization: Right scenarios, right choices, right deployments.  From the abstract:

Virtualization has brought next-generation network-attached storage (NAS) beyond the limitations of the old NAS architectures. View this webcast with virtualization expert Anil Desai to learn more about this evolution and get tips on best fit scenarios and deployment techniques.

Deciding the best place for NAS implementation is very hard in the complicated world of the new data center. In this webcast, Anil Desai describes the right NAS scenarios, right choices and the right deployment options for your infrastructure.

The webcast is available for free, but registration with TechTarget is required.

Advanced NAS Features for Virtualization Article

I recently wrote an article on using Network-Attached Storage (NAS) devices for supporting virtualization.  You can find the article, Using advanced NAS features in virtualization at SearchServerVirtualization.  From the article’s introduction:

When it comes to determining the type of storage to deploy, are you a storage-technology snob? Or do you consider network-attached storage (NAS) devices as part of your storage strategy?

There’s clearly a perception among some systems administrators that high-end solutions such as Fibre Channel-based SANs provide the better performance. Or they might prefer products based on iSCSI, which provide some of the same benefits such as block-level disk I/O. Plus, iSCSI-based products run over existing copper-based Ethernet connections instead of requiring a much more expensive fiber optic infrastructure, making them even more attractive to admins.

So where does this leave the tried and true NAS device? While newer technologies get most of the attention, modern NAS devices provide many new features, including ones that simplify virtualization and support larger numbers of virtual machines (VMs). How these advanced features benefit virtualization will be the focus of this tip.

Personally, I think NAS solutions are great for organizations of all sizes.  They’re certainly far more cost effective than Fibre Channel SANs and work well with solutions that don’t need block-level I/O.

Read the full article to learn about these advanced features and to find out if or how NAS devices can help alleviate potential storage-related issues for your virtual infrastructure.

Managing Virtualization Storage for Datacenter Managers

This article was first published on

Deploying virtualization into a production data center can provide an interesting mix of pros and cons. By consolidating workloads onto fewer servers, physical management is simplified. But what about managing the VMs? While storage solutions can provide much-needed flexibility, it’s still up to datacenter administrators to determine their needs and develop appropriate solutions. In this article, I’ll present storage-related considerations for datacenter administrators.

Estimating Storage Capacity Requirements

Virtual machines generally require a large amount of storage. The good news is that this can, in some cases, improve storage utilization. Unlike direct-attached storage, which confines capacity to a per-server basis (often resulting in a lot of unused space), centralized storage arrays can pool that capacity. There’s also a countering effect, however: since the expansion of virtual disk files is difficult to predict, you’ll need to leave some unallocated space for growth. Storage solutions that provide for over-committing space (sometimes referred to as “soft-allocation”) and for dynamically resizing arrays can significantly simplify management.
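To make the soft-allocation idea concrete, here’s a minimal sketch of the capacity math behind over-committing space.  All of the numbers are hypothetical, purely for illustration; real figures would come from your own storage arrays:

```python
# Soft-allocation ("thin provisioning"): the logical capacity handed out to
# volumes can exceed the physical capacity that backs them, as long as the
# blocks actually written stay within the physical pool.
physical_gb = 2000                            # physical capacity of the array
provisioned_gb = [300, 450, 250, 500, 600]    # logical sizes of thin volumes
actually_used_gb = [120, 200, 90, 260, 310]   # blocks written so far, per volume

overcommit_ratio = sum(provisioned_gb) / physical_gb
headroom_gb = physical_gb - sum(actually_used_gb)

print(f"over-commit ratio: {overcommit_ratio:.2f}")
print(f"physical headroom remaining: {headroom_gb} GB")
```

The over-commit ratio is the figure worth alerting on: as it climbs and physical headroom shrinks, unpredicted virtual disk expansion becomes a real risk.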

To add up the storage requirements, you should consider the following:
  • The sum of the sizes of all “live” virtual disk files
  • Expansion predictions for virtual disk files
  • State-related disk files such as those used for suspending virtual machines and maintaining point-in-time snapshots
  • Space required for backups of virtual machines

All of this can be a tall order, but hopefully the overall configuration is no more complicated than that of managing multiple physical machines.
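The summation above can be sketched as a back-of-the-envelope calculation.  The VM counts and per-VM sizes below are hypothetical placeholders; substitute measurements from your own environment:

```python
vm_count = 40

# The four components listed above, using illustrative per-VM figures:
live_disks_gb = vm_count * 60          # sum of all "live" virtual disk files (~60 GB each)
expansion_gb = live_disks_gb // 4      # assume ~25% headroom for virtual disk growth
state_files_gb = vm_count * 8          # saved-state and snapshot files (~8 GB per VM)
backup_gb = live_disks_gb              # space for one full backup copy of the virtual disks

total_gb = live_disks_gb + expansion_gb + state_files_gb + backup_gb
print(f"Estimated capacity requirement: {total_gb:,} GB")
```

Even this crude model makes the point: backup copies and growth headroom can more than double the raw size of the live virtual disks.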

Placing Virtual Workloads

One of the best ways to reduce disk contention and improve overall performance is to profile virtual workloads to determine their requirements. Performance statistics help determine the number, size, and type of IO operations. Table 1 provides an example.


Table 1: Assigning workloads to storage arrays based on their performance requirements

In the provided example, the VMs are assigned to separate storage arrays to minimize contention. By combining VMs with “compatible” storage requirements on the same server, administrators can better distribute load and increase scalability.
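One way to sketch this grouping logic is to bucket VMs by a shared IO characteristic and check the aggregate load per bucket.  The VM names, IOPS figures, and the "same IO pattern means compatible" rule below are all illustrative assumptions, not a prescribed method:

```python
from collections import defaultdict

# Hypothetical per-VM profiles gathered from performance monitoring:
# (name, average IOPS, dominant IO pattern)
vms = [
    ("SQL-01",    1500, "random"),
    ("WEB-01",     200, "random"),
    ("FILE-01",    400, "sequential"),
    ("BACKUP-01",  900, "sequential"),
]

# Group workloads with "compatible" storage requirements onto the same array,
# then sum each array's load to spot contention before it happens.
arrays = defaultdict(list)
for name, iops, pattern in vms:
    arrays[pattern].append((name, iops))

for pattern, members in arrays.items():
    total = sum(iops for _, iops in members)
    names = ", ".join(name for name, _ in members)
    print(f"{pattern} array: {names} -> {total} total IOPS")
```

In practice the compatibility test would weigh more than the access pattern (block size, read/write mix, latency sensitivity), but the bucketing-and-summing structure stays the same.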

Selecting Storage Methods

When planning to deploy new virtual machines, datacenter administrators have several different options. The first is to use local server storage. Fault-tolerant disk arrays that are directly-attached to a physical server can be easy to configure. For smaller virtualization deployments, this approach makes sense. However, when capacity and performance requirements grow, adding more physical disks to each server can lead to management problems. For example, arrays are typically managed independently, leading to wasted disk space and requiring administrative effort.

That’s where network-based storage comes in. By using centralized, network-based storage arrays, organizations can support many host servers using the same infrastructure. While support for technologies varies based on the virtualization platform, NAS, iSCSI, and SAN-based storage are the most common. NAS devices use file-level IO and are typically used as file servers. They can be used to store VM configuration and hard disk files. However, latency and competition for physical disk resources can be significant.

SAN and iSCSI storage solutions perform block-level IO operations, providing raw access to storage resources. Through the use of redundant connections and multi-pathing, they can provide the highest levels of performance, lowest latency, and simplified management.

In order to determine the most appropriate option, datacenter managers should consider workload requirements for each host server and its associated guest OS’s. Details include the number and types of applications that will be running, and their storage and performance requirements. The sum of this information can help determine whether local or network-based storage is most appropriate.

Monitoring Storage Resources

CPU and memory-related statistics are often monitored for all physical and virtual workloads. In addition to this information, disk-related performance should be measured. Statistics collected at the host server level will provide an aggregate view of disk activity and whether storage resources are meeting requirements. Guest-level monitoring can help administrators drill down into the details of which workloads are generating the most activity. While the specific statistics that can be collected will vary across operating systems, the types of information that should be monitored include:

  • IO per Second (IOPS): This statistic refers to the number of disk-related transactions that are occurring at a given instant. IOPS are often used as the first guideline for determining overall storage requirements.
  • Storage IO Utilization: This statistic refers to the percentage of total IO bandwidth that is being consumed at a given point in time. High levels of utilization can indicate the need to upgrade or move VMs.
  • Paging operations: Memory-starved VMs can generate significant IO traffic due to paging to disk. Adding or reconfiguring memory settings can help improve performance.
  • Disk queue length: The number of IO operations that are pending. A consistently high number will indicate that storage resources are creating a performance bottleneck.
  • Storage Allocation: Ideally, administrators will be able to monitor the current amount of physical storage space that is actually in use for all virtual hard disks. The goal is to proactively rearrange or reconfigure VMs to avoid over-allocation.

VM disk-related statistics will change over time. Therefore, the use of automated monitoring tools that can generate reports and alerts is an important component of any virtualization storage environment.
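As a minimal sketch of what such an alerting tool does with the counters listed above, consider the following.  The sampled values and thresholds are hypothetical; real thresholds should be tuned against each environment’s baseline:

```python
# Hypothetical sampled counters per VM (collected by a monitoring agent):
samples = {
    "SQL-01": {"iops": 1800, "io_util_pct": 92, "queue_len": 12},
    "WEB-01": {"iops": 150,  "io_util_pct": 35, "queue_len": 1},
}

def storage_alerts(stats, util_threshold=85, queue_threshold=8):
    """Flag VMs whose counters suggest a storage bottleneck."""
    alerts = []
    for vm, s in stats.items():
        # High IO utilization can indicate the need to upgrade or move VMs.
        if s["io_util_pct"] > util_threshold:
            alerts.append((vm, "high IO utilization"))
        # A consistently long disk queue indicates a storage bottleneck.
        if s["queue_len"] > queue_threshold:
            alerts.append((vm, "sustained disk queue"))
    return alerts

for vm, reason in storage_alerts(samples):
    print(f"ALERT: {vm} - {reason}")
```

A production tool would evaluate these rules over a window of samples rather than a single snapshot, since a one-off spike in queue length is rarely actionable.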


Managing storage capacity and performance should be high on the list of responsibilities for datacenter administrators. Virtual machines can easily be constrained by disk-related bottlenecks, causing slow response times or even downtime. By making smart VM placement decisions and monitoring storage resources, many of these potential bottlenecks can be overcome. Above all, it’s important for datacenter administrators to work together with storage managers to ensure that business and technical goals remain aligned over time.