
Migrating from Gmail to Office 365

In previous posts, I covered the reasons I wanted to move to Office 365, including the potential benefits of the transition. In this post, I’ll discuss the changes and steps that were required to make the transition.

Note: This post is part of a series on my move from Gmail to Office 365. To see a complete list of related posts, see Summary: Moving to Office 365.

Configuration Changes

Prior to signing up for the Office 365 Preview, I came up with a list of steps and requirements for fully migrating from Gmail. The list (which is not necessarily in order) included:

  • DNS Hosting: While there are other options, the Office 365 Small Business (P) plan recommends that users have Microsoft manage their DNS settings. This was actually a fairly easy change for me, as all I had to do was have my domain host point to Microsoft’s name servers. There was an initial step of proving that I owned my domain (by creating a temporary DNS MX record), but the step-by-step setup wizard walked me through it all. I was even able to add a few aliases that I use for my web site (this blog) and for a few other dev/test services. The process might seem more complicated for people who aren’t used to administering their own domains, but it was definitely as simple as I would have expected. For those that want to continue to manage their own domains, there are ways to add the required Office 365 DNS records manually.
  • Transfer of Organized E-Mail, Calendars, and Contacts: I needed a method to retain my current folder structure and details from my Hotmail account (which was quite small), as well as for my archived data. I considered using TrueSwitch, but I would have had issues with merging and managing the folder structure. I decided that the best approach would be using drag-and-drop through the Outlook client interface. This allowed me to merge calendars, contacts, and (of course) e-mail messages. This was also the primary reason that I decided against using Outlook.com (which is free) and signed up for the Office 365 Preview.
  • Client Synchronization: For the most part, I’m connected to the Internet all day, every day (that’s one of the many benefits of working primarily from home). I decided to store all data online and use local .OST files to cache data locally when using Outlook. That provides access to my message store on any device (including through web browsers), while maintaining local performance and the ability to occasionally work offline. Connecting to my new account from Outlook was quick and easy using auto discovery features, but my Android devices were a little more complicated and required me to access the Office 365 Admin Help.
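For those who do manage their own DNS, the records Office 365 asks for follow a pattern like the zone-file sketch below. This is a shape, not a copy-paste answer: the MX target, SPF include host, and verification value are placeholders that the Office 365 setup wizard supplies for your specific domain.

```
; Hypothetical sketch only -- real values come from the Office 365 setup wizard
example.com.               MX     0   <mx-target-from-portal>.
example.com.               TXT        "v=spf1 include:<spf-host-from-portal> -all"
autodiscover.example.com.  CNAME      autodiscover.outlook.com.
example.com.               TXT        "MS=msXXXXXXXX"  ; ownership verification (I used a temporary MX record instead)
```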

Limitations of Outlook.com vs. Office 365

For the vast majority of online users, I think that many of the free e-mail offerings (Gmail, Hotmail/Outlook.com, Yahoo Mail, etc.) are perfectly usable. Typically, you’ll choose a service based on cost (or lack thereof), performance, reliability, storage space, and the usability of its web interface. That was my initial approach, but I realized that there were some limitations of Outlook.com that prevented me from moving to Microsoft’s free service. They are:

  • Uploading Archived Messages: I have been using the Outlook client on my computers for at least the last 10 years, and I have amassed a huge collection of historical messages. Every once in a while, it can be helpful to resurrect a discussion from years ago. Or, more commonly, I just want to look back through some past messages to reminisce. I wanted my new e-mail account to include an automated way to upload my archive, including all old contacts, calendar items, and folder structure. There are two major approaches: First, I could open my current and archive e-mail .pst files and drag and drop the contents to my new mailbox. Or, I could use Outlook’s “Import .PST File” feature to load the data from my local storage files. Outlook.com does not support these methods, while Office 365 supports both approaches (through the use of the Outlook 2013 client). Outlook 2013 also allowed me to merge all of my archived folders with my current ones relatively easily.
  • Retaining Folder Structure: The ability to use TrueSwitch seemed, at first, to be the ideal solution. I could just enter my login information and have the service automatically transfer my messages from Gmail to Outlook.com. The problem, however, was that I’d just end up with one huge folder filled with a tremendous amount of unstructured, unsorted data.

If I didn’t have the above requirements (or, if I were willing to start from scratch with a new e-mail account), I probably would have opted for Outlook.com. Office 365 does provide excellent pricing for up to 5 installations of the full Office applications, though, so it’s still a compelling subscription offering. And, I haven’t yet experimented with SharePoint and Lync, both of which are included.

I was pleasantly surprised to find that Exchange Active Sync (EAS) is well-supported on many devices and applications, including the stock Android Email application. For more details, see Information about Exchange ActiveSync.

Update (09/23/2012): While I was unable to find an official statement from Microsoft, it does appear that it might be possible to copy messages and folders in the release (RTM) version of Outlook 2013.  I’ll try to post an update here if/when that becomes supported.

Backing Up a Gmail Account

Part of the migration process for me was making sure that, after everything was transferred successfully, I’d be able to create a full backup of my Gmail content. It’s not that I’m worried about Gmail going away anytime in the foreseeable future. While I had the vast majority of this content organized in Outlook, I had periodically deleted attachments from my messages. And, there’s always the chance that I accidentally deleted something important. My Gmail account isn’t going away, and I can always search for content through the web interface. However, I like the convenience and usability of having an indexed .PST file and raw messages available if I ever need them while offline.

Fortunately, there are a few methods you can use to easily download and back up your entire Gmail (or other POP/IMAP-based) account:

  • IMAP E-Mail Clients: Use any e-mail client (like Outlook) to download and save all your messages. I enabled IMAP for my Gmail account, chose to synchronize all mail (it took about 4 hours to synchronize ~55,000 messages), and then exported the results to a .PST file. I now have an archive that I can save off to local or online storage for posterity.
  • Backup Utilities: Gmail Backup is a free program that can download all your messages and save each as an individual .eml file. The files can be opened in Microsoft Outlook or other compatible e-mail programs. I have also used the free Gmvault application in the past. While it worked fine, the resulting downloaded files (which are in text format) were far from ideal.
  • Scripting / Enterprise Tools: There are, of course, other approaches for migrating e-mail. I only had a couple of accounts to consolidate, so I took the above approaches. Exchange Server admins and others who need to migrate multiple message stores can use the Windows PowerShell cmdlets for Office 365 or third-party upload tools.
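The "IMAP E-Mail Clients" and "Backup Utilities" approaches above can also be combined in a few lines of script. The sketch below (my own illustration, not one of the tools mentioned) connects to Gmail over IMAP and saves every message as an individual .eml file; the credentials and output directory are placeholders.

```python
# Sketch: back up a Gmail account over IMAP, one .eml file per message.
# Assumes IMAP access is enabled on the account.
import email
import imaplib
import os
import re

def safe_filename(subject: str, uid: str) -> str:
    """Build a filesystem-safe .eml filename from a subject line and IMAP UID."""
    cleaned = re.sub(r"[^A-Za-z0-9 _-]", "_", subject)[:60].strip() or "no_subject"
    return f"{uid}_{cleaned}.eml"

def backup_mailbox(user: str, password: str, out_dir: str = "gmail_backup") -> int:
    """Download every message in Gmail's All Mail folder; return the count saved."""
    os.makedirs(out_dir, exist_ok=True)
    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login(user, password)
    # "[Gmail]/All Mail" holds every message, matching the
    # "synchronize all mail" step described above.
    conn.select('"[Gmail]/All Mail"', readonly=True)
    _, data = conn.uid("search", None, "ALL")
    uids = data[0].split()
    for uid in uids:
        _, fetched = conn.uid("fetch", uid, "(RFC822)")
        raw = fetched[0][1]
        msg = email.message_from_bytes(raw)
        path = os.path.join(out_dir, safe_filename(msg.get("Subject", ""), uid.decode()))
        with open(path, "wb") as f:
            f.write(raw)  # raw RFC 822 bytes are exactly a standard .eml file
    conn.logout()
    return len(uids)
```

The resulting .eml files open directly in Outlook, and the whole folder can be archived to local or online storage for posterity.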


In this post, I covered a summary of the steps required to move to Office 365 and to back up a Gmail account. I didn’t spend a lot of time on technical steps, because those are well-explained on other sites. Feel free to leave any questions or comments if you have them! Next stop: Potential Office 365 Issues.

Office 365: Benefits and Features

Note: This post is part of a series on my move from Gmail to Office 365. To see a complete list of related posts, see Summary: Moving to Office 365.

In my previous post, Reasons for Moving from Gmail/POP to Office 365, I described many of the limitations of my Outlook/POP/Gmail approach to managing e-mail. In this post, I’ll talk about the reasons I decided to take the plunge and move to the Office 365 Preview.

Based on the issues I had with my older configuration, I decided to look into Outlook.com and Office 365 as a solution. Here’s a list of the primary benefits. For the most part, these improvements directly address the problems listed earlier.

Note: Some of the following also applies to Microsoft’s free e-mail offering, Outlook.com, but this section focuses on Microsoft’s hosted Exchange / Office 365 service offering.

Office 365 Messaging Benefits

  • Improved Web/Client Interface: Though I occasionally used the Gmail web-based interface, I could never stand to use it for more than just the most basic messaging tasks. The process of organizing, replying to, and sending messages was too cumbersome for me. I tend to format my messages with tables and other options where needed, and that was just too clumsy for me to perform via the web interface. I know that a lot of people like Google’s approach to conversation-based views and the use of labels instead of folders, but I found the process of organizing e-mail to be so tedious that I wouldn’t do it. Instead, I always used the full Microsoft Outlook client wherever possible. Though that required other services to back up the local .PST file, I was able to use a much better user interface. The updates to Outlook.com and Office 365 change all that for me – I can now use drag-and-drop and familiar keyboard shortcuts to create and manage messages. It also manages my Contacts and Calendar without the use of other integration methods. And, with the ability to perform on-demand installations of Outlook directly from the Office 365 Admin page, I can make sure that I get the full Outlook client installed and configured on my frequently-used Windows machines.
  • Elimination of PST File Synchronization: With Office 365, all of my data is stored online and there’s no need to synchronize and backup separate .PST files. I can now keep Outlook 2013 open all day, every day, on several different computers and mobile devices and I never have to worry about missing any changes.
  • Efficient Local Storage: While cloud-based access has been reliable for me in the past, for performance, backup, and peace of mind, I prefer to have a local cache of my data. Outlook 2013 provides some great improvements to Cached Exchange Mode through a more efficient, compressed .OST file format that keeps all e-mail synchronized locally on my machine. You can now specify how much data is cached locally, and the .OST file sizes benefit from compression (in my tests, I got around a 50% reduction in storage space when I compared similar .OST and .PST files). For more information, see What’s New in Outlook 2013.
  • Consolidation / Historical Data: With Office 365’s generous storage space (25GB per user, in my plan), I was able to transfer all of my historical e-mail to a single hosted Exchange account. I no longer have a need to archive data in separate PST files or periodically strip out attachments. By default, Outlook will cache one year of data locally. That’s probably plenty for most users. On my primary computer, I chose to download the entire list of messages for easy indexed searching.
  • Better support for mobile devices: While IMAP (and, to a lesser extent, POP) are acceptable options for accessing e-mail on multiple devices, it can be a tedious and error-prone chore to keep track of folder structure, calendar items, contacts, and changes from multiple devices (especially those that are occasionally disconnected from the server). With Gmail, I relied on the Gmail Notifier and Google Calendar Sync applications (both of which are years-old and poorly supported) to try to keep things in sync. With Exchange Active Sync, all of this works flawlessly (so far) on my Android Phone (Motorola Droid), Android Tablet (ASUS Transformer TF101) and Windows-based machines.
  • Attachment Archival: With the storage space offered by Office 365, I no longer need to worry about archiving off attachments to keep my .PST file small enough for frequent backups.
  • “Automatic” backups: An obvious benefit of having all of my messages stored online is that there’s less of a need for local backups. I still periodically export my data store to a .PST file, but that’s a quick and simple operation.
  • On-demand / Streaming installations: The ability to automatically download, install, and configure an instance of Office 2013 in a matter of a couple of clicks is really powerful. Occasionally, I’ll be working in a VM or on-site on a client’s computer, and the ability to use a full-fledged e-mail client is excellent.
  • Spam / Junk-Mail Filtering: Over the years, I have received as many as 24,000 spam messages per month. Thanks to Gmail’s excellent spam filtering, only an extremely small number of bad messages would get through. So far, Office 365’s spam filtering has seemed to work fairly well, though I’ll need more time to evaluate how it compares. I do like the ability to quickly and easily allow or block specific senders, though.
  • Push-Based Notifications: In my POP-based approach for receiving messages, I had configured Outlook to regularly poll for messages. It worked fine, but there was a potential delay of a few minutes before I received messages. Unfortunately, that little delay is often enough for some of my clients to start to panic when they don’t get a (really) quick response to an issue. With Exchange-based messaging, I can get near-instant notifications to my desktop, laptop, tablet, and mobile devices.

Sounds good. How do I sign up?

That concludes the “short list” of benefits I was hoping to gain from my e-mail migration. In the next post, I’ll provide details on how I migrated to the Office 365 Preview.

Reasons for Moving from Gmail/POP to Office 365

For the last five years, I’ve been relying on a fairly typical e-mail approach: a Gmail account that I occasionally manage online, but most commonly access through Outlook via POP. I also use a custom domain name so I can have a permanent e-mail address. I documented the setup in an earlier blog post, My E-Mail Setup: Outlook + Gmail + a Personal E-Mail Address.

For a free online e-mail implementation, that approach has worked well for several years. But it left many everyday annoyances. Over the last couple of weeks, after Microsoft’s release of its Office 2013 Preview and Office 365 Preview (both available in free evaluation versions), I decided to re-examine my current setup. I have a lot of information related to the reasons for the move, steps in the migration, and pros/cons of the Office 365 approach. To keep things manageable, I have split the content into several posts. My hope is that the information in these posts will help others who are considering a move to Office 365 or a different hosted e-mail solution.

In this post, I’ll start with a quick overview of what prompted the move for me.

Note: This post is part of a series on my move from Gmail to Office 365. To see a complete list of related posts, see Summary: Moving to Office 365.

User Profile: About Me

First, a basic background on me and my e-mail needs: I’m an independent IT consultant who does most of my work from home. Occasionally, I travel to client sites and need the ability to access at least my most recent messages. I use multiple devices (an Android phone, an Android tablet, a Windows notebook, and multiple Windows desktops and servers).

I have about 10 years of e-mail stored in Microsoft Outlook .pst files. In the past, I’ve been diligent about periodically removing all attachments from my messages using various scripts, macros, and utilities to lower the PST file size (that makes online backups quicker and more efficient). For me, one of the most important promises of cloud-based solutions is for small businesses like mine to be able to access infrastructure components (like Exchange and SharePoint services) that are typically reserved for larger organizations.

Issues with Gmail, POP, and Outlook

OK, back to the show: My original Gmail/POP/Outlook configuration worked pretty well, but there was clearly room for improvement. Specifically, here’s a list of some of the issues and considerations I had in mind when deciding to migrate:

  • Message and Folder Organization: I have never liked the Gmail approach of using labels rather than folders. While folders were available (and accessible using IMAP), I strongly preferred using the Outlook client’s UI for managing my messages. I’m used to drag-and-drop, familiar keyboard shortcuts, and efficient ways of keeping my Inbox clean. I have always found the use of the Gmail web interface to be clunky, time-consuming, and outdated (even though it’s arguably one of the best web-based interfaces).
  • Multiple sets of rules (Outlook and Gmail): I have been using the same e-mail address for about 7 years, and I make little effort to try to conceal it. One way I maintain sanity is to try to keep my Inbox down to just a few messages at a time. To do that, I use rules to move messages like newsletters, routine feedback, and online notifications out of my Inbox. The main goal for me is to limit the interruptions that reach my Inbox. Newsletters, notices, and other information are automatically moved to other folders. With the Gmail/Outlook implementation, I actually set up rules both on the Gmail web site (to avoid having a large number of unread messages) and in Outlook. It took some effort to keep these rules in sync.
  • E-Mail Addresses: I use a custom domain name, and have used one e-mail address on it for almost all communications over the last five years. While Gmail allowed me to change the reply-to address on my messages, Outlook has a nasty habit of adding an “on behalf of” to the From address. So, my messages typically appear as being sent from my Gmail address “on behalf of” my custom-domain address. Sending, receiving, and replying to the messages works as expected, but I found that even technical people seem to be confused by it. Many of my clients and users tend to respond manually to both my Gmail and custom domain addresses, and I can’t seem to train them out of it.  Note: Apparently, Microsoft is aware of the feedback.  From the Outlook Blog post titled, “Upgrade from Gmail to Outlook.com in 5 easy steps”:

A side note on "Sent on behalf"

You may notice that messages you send using your Gmail address will be sent "on behalf of" your Gmail account. This means that Outlook is actually sending the email, but setting the "From:" address to be your Gmail address. The From: header in most email clients will look something like this:

From: on behalf of Dick Craddock (

We’ve gotten feedback from some of you that you don’t like the "on behalf of header" and so we’re working to change this – stay tuned!

  • Syncing Calendars and Contacts: For many years, I’ve relied on a few utilities to keep my online data in sync with my Outlook client. That includes the Gmail Notifier application and Google Calendar Sync, both of which seem to have been abandoned many years ago by Google itself. The utilities would often crash, and had to be run regularly in order to keep data in sync. Google is clearly focused on its own paid service, Google Apps for Business.
  • Offline Access: Gmail offers the ability to cache messages and to work offline by using its Gmail Offline Chrome Browser Extension, but the experience for me was rather clumsy. For one thing, I had trouble getting my Sent Items to be stored properly in Outlook, and I was stuck with using the clunky web-based interface. It worked, just not well enough for me to like using it.
  • Managing Multiple E-Mail Addresses: While I prefer to keep things simple, having multiple e-mail addresses is often unavoidable. For now, I have three (one Gmail, one, and one Office 365). I tend to use only my custom domain name for all messages, regardless of which account I’m actually using. Microsoft, Google, and other online messaging providers are very generous in allowing you to “pull” (via POP/IMAP) or push (via forwarding) messages between accounts. However, I found that, when forwarding to Gmail, everything would just be lumped into my default Inbox, prompting me to eventually reorganize the messages in Outlook. It was manageable, but definitely tedious and labor-intensive.
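The two sets of rules described above (one in Gmail, one in Outlook) both boil down to the same idea: a mapping from sender or subject patterns to destination folders. A minimal Python sketch of that logic, using made-up rules rather than my actual ones:

```python
# Minimal sketch of rule-based message filing: each rule maps a sender
# suffix or subject keyword to a destination folder. The patterns and
# folder names are hypothetical examples.
RULES = [
    ("sender_endswith", "@newsletters.example.com", "Newsletters"),
    ("subject_contains", "[notification]", "Notifications"),
    ("subject_contains", "invoice", "Accounting"),
]

def route(sender: str, subject: str, default: str = "Inbox") -> str:
    """Return the folder a message should be filed into; first match wins."""
    sender, subject = sender.lower(), subject.lower()
    for kind, pattern, folder in RULES:
        if kind == "sender_endswith" and sender.endswith(pattern):
            return folder
        if kind == "subject_contains" and pattern in subject:
            return folder
    return default  # anything unmatched stays in the Inbox for a human to read
```

The pain point is that this one rule table had to be maintained twice, in two different rule editors, and kept in sync by hand.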

Don’t get me wrong: The solution did work, overall. It just left a lot to be desired (details are later in this series of posts).

Issues with Gmail and IMAP

One potential solution to some of the above is to use IMAP (instead of POP) with Gmail. While the concept of accessing e-mail via IMAP makes sense, I found that it didn’t actually meet my needs for real multi-device management of messaging. First, there’s a fairly long list of known Gmail IMAP issues that covers some of the problems I ran into. My main hesitation is the number and frequency of reports people have posted regarding problems with sending, receiving, and organizing their messages. On numerous occasions, I have had to “trick” Outlook into sending messages by reordering, deleting, or copying messages from my Outbox to other folders. Also, the folder management structure didn’t seem to be nearly as flexible as that of a standard Outlook PST file.

Without going into too much detail, I found that using POP and local Outlook .PST files worked best. I tend to use my primary desktop computer for at least 90% of my work, so for others that are more mobile (and that don’t have access to other methods), IMAP might make sense.

The End (of the Beginning)

OK, so hopefully the stage is set: The characters (me) have their motivation (to address the aforementioned technical and usability issues). In my next post, I’ll cover Office 365: Benefits and Features.

Cloud Services: The Importance of Technical Support

I’m currently working on a series of blog posts related to my move from Gmail to the Office 365 Preview.  Overall, my experience has been really positive, and I’ll be posting the details over the next few days.  Unfortunately, I have been experiencing an e-mail issue with my hosted Exchange Server instance: I have been unable to send any outbound messages for the last seven days (and counting, as of the writing of this post).  The service is currently in beta with limited support options, but I wanted to share what I think should be an important consideration for an IT organization that’s considering cloud-based solutions: Customer Support.

Cloud Services Support: Potential Problems with Problem Resolution

In cloud-based architectures, end-users and administrators give up significant direct control over their infrastructures and place a large amount of reliance on another organization’s systems.  That approach comes with a wide array of potential benefits, including the ability to rely on tested, well-managed infrastructure that’s run by specialists and experts.

Ideally, all of these cloud services would be completely reliable and there would be no need for technical support. But what happens when those ideals aren’t met?  When evaluating cloud solution providers, it’s extremely important to consider how issues are handled when they do occur. It’s no secret that cloud services, in general, have had a checkered past and that outages and related problems will continue to occur. Over time, systems should become more resilient to failures, but in the meantime, it’s important to have quick, knowledgeable and responsive technical support and service.

Cloud Support Options: What to Look For

In addition to security, performance, and availability, problem resolution is a big issue to consider. In the case of my own small business (which is really just me), I’m not a high-visibility customer for any provider. I don’t have any leverage when it comes to negotiating contracts, SLAs, terms of service, and support agreements. For the most part, the service offerings are a take-it-or-leave-it proposition. Still, that’s no different from the implied contract with just about every hosted service we have come to rely on these days.  Things do get a little different when you’re betting your business (and revenue) on someone else’s infrastructure.

In general, IT professionals should request (or demand, if necessary) the following information as part of their cloud provider evaluation:

  • Historical Record: Service providers should be able to provide details on the number, types, and frequency of issues they’ve experienced in the past.  They should provide an official statement that guarantees the accuracy of this information, to the best of their knowledge.  It’s all too easy for cloud providers to choose not to report or record some issues, or to find technicalities that point the finger elsewhere.  If your potential cloud provider is doing this during the “honeymoon” phase (pre-sales), don’t expect a happy marriage in the future.
  • Time to Resolution: Problems, of course, will always happen.  So, the key is in determining how quickly and efficiently issues have been resolved.  It’s easy for a service provider to state that they resolved problems within minutes or hours of having confirmed them.  But what about the entire process?  How long does it take to get hold of someone when there’s a potential outage?  How much time, on average, do customers spend before an issue is recognized?  Is the support staff highly technical and well-trained, or will they force you to perform hours of unnecessary troubleshooting before they admit to or realize a problem?  If possible, test your provider’s reactions by calling their support staff before you need them.  It’s sometimes difficult to simulate a cloud-based outage, but you can simulate client-side issues and test wait times and time to resolution.
  • Real-Time Status Information: Perhaps one of the most aggravating aspects of working with cloud services is being in the dark about what is going on with the infrastructure.  If I have a service failure or outage in my own data center, I typically know what to do: I can collect more information, and I can attempt to isolate the cause of the problem or fail over to other systems.  With cloud infrastructures, my hands are tied.  Microsoft Office 365 Preview, in my opinion, is a good step in the right direction (see screenshot below).  In this summary view, you can see the last several days’ worth of issues, along with real-time status.  But there’s a catch: Is the information accurate and valid?  (In my case, described below, it most certainly isn’t – I and other users have had a serious e-mail outage for over a week now, and it’s not yet reported for the beta service.)  Another plus: The information icons allow users to see details about an issue.  The information might be limited, but it’s definitely much better than flying completely blind.


  • Service Level Agreements (SLAs) with Meaningful Penalties: Downtime, data loss, slowdowns, and other issues can be costly, so it’s really important to get real terms that make providers pay affected users for their infrastructure outages.  A simple pro-rated refund is ridiculous in these situations (for example, would you be satisfied with receiving a $3.00 credit for three hours of downtime during business hours?).  Instead, customers should negotiate a minimum per-incident credit amount, along with rapidly-increasing compensation for downtime or data loss.  Personally, I would like to see clauses that state that, if problems can’t be resolved, a provider will pay me to go to their competitors. Cloud providers that trust their infrastructure shouldn’t balk at these terms, so make sure that their pain is at least as much as your pain when failures occur.
  • Escalation Processes: Especially for knowledgeable IT staff, customers should have the option of forcing an escalation if their issues aren’t being addressed properly.  In the case I mention below, my requests were all completely ignored, and I was left with nowhere else to turn (other than, perhaps, to a competing service or back to an on-premises solution).  Perhaps larger customers could have called their account reps or would have some leverage through other avenues (I contacted my Microsoft MVP Lead, who was very helpful).  But customers shouldn’t have to go through all of this.
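To make the pro-rated-refund point concrete, here is a small sketch comparing a simple pro-rated credit with an escalating per-incident credit. The fee, base credit, and doubling multiplier are illustrative numbers only, not terms from any real SLA:

```python
# Compare a pro-rated refund with an escalating per-incident credit.
# All dollar figures below are made-up illustration values.
HOURS_PER_MONTH = 30 * 24  # 720

def prorated_credit(monthly_fee: float, hours_down: float) -> float:
    """A simple pro-rated refund: only the downtime's share of the monthly fee."""
    return monthly_fee * hours_down / HOURS_PER_MONTH

def escalating_credit(base_credit: float, hours_down: float,
                      multiplier: float = 2.0) -> float:
    """A per-incident credit that doubles with each additional full hour down."""
    total, credit = 0.0, base_credit
    for _ in range(int(hours_down)):
        total += credit
        credit *= multiplier
    return total
```

On a hypothetical $720/month plan, three hours of downtime pro-rates to just $3.00, while an escalating credit starting at $25 for the first hour would pay $25 + $50 + $100 = $175. Only the latter kind of term gives the provider real skin in the game.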

Of course, this list is just a starting point.  It’s important for IT departments to get expert legal input when negotiating terms with their cloud service providers.  If that makes a potential business partner sweat, it’s much better to find this out early, rather than when your organization is losing huge amounts of time and money after problems occur. 

A Case in Point: An Office 365 Preview E-Mail Service Outage


While running the Office 365 Preview, I ran into an issue that seems to have affected numerous users: I was unable to send outbound e-mail.  The problem went on for days before I received non-delivery notices.  It affected my consulting business (customers didn’t receive important updates on production changes for my clients), and it forced me to scramble to use an account with another provider to continue with my business.  Sure, I’m only one person and this is a beta service with limited support, but I think there’s a good lesson to learn here.

I don’t think I need to go into all of the technical details, other than my description of the above problem.  After an hour of phone-based troubleshooting, unnecessary configuration changes (including changes to my hosted DNS settings), and at least a dozen e-mails back-and-forth, I was finally able to get Microsoft to recognize the issue. For details, you can see my post titled Outbound Mail Failures: #550 4.4.7 QUEUE.Expired; message expired ##. It took several days for users (myself included) to notice that messages were not being delivered.  However, numerous users were reporting errors, and all were asked to perform basic troubleshooting that was completely irrelevant to the problem.  Responses often took days and my specific, direct questions went completely ignored.  I understand that limited support resources are available, but I needed some actionable advice: If services couldn’t be restored (or Microsoft was unwilling to try), I needed to start changing my DNS records and moving services elsewhere.  Support staff should have realized that the problem affected multiple users, that it started at the same time for many of us, that all services were working fine before this time, and that several of the people who posted (myself included) were highly technical.  The issue should have been escalated, or (at the very least) been reported as a known issue.  That would have reduced some of the uncertainty.  Rather, I ended up just waiting… and waiting.

Overall, it took nearly a week after the problem began for Microsoft to start looking into it. Being a cloud-based solution, regardless of my technical knowledge, there was very little troubleshooting I could do myself. The sense of helplessness is difficult enough when dealing with a single e-mail account and support limited to discussion forums. It could be catastrophic when dealing with dozens or hundreds of affected accounts.

In all fairness, the Office 365 program I’m subscribed to is currently free and is in a beta/preview mode. Microsoft was very clear that the service is not currently designed for production use (I knew that going in) and that support resources were limited. It’s not my intention to single out Microsoft (especially for a “Preview” product).  I’d like to add that, in many cases, Microsoft’s support levels have been exemplary for real-world, supported production issues I ran across.  (Many years ago, I even had Microsoft Product Support Services offer to create a hotfix for a SQL Server issue my company was experiencing!)

On the bright side, most of the people I talked to about this issue were knowledgeable about their infrastructure and had good troubleshooting skills. That’s something that’s often not available to small businesses. I am sure that support for live, production instances would be much more responsive. But this experience underscores the importance of a cloud provider’s technical support processes.

Lesson Learned: Always Have an Alternative

It might sound like common sense, but having fallback systems in place can be complicated, time-consuming, and tedious. However, with the ready availability of so many different online services, it makes sense to have alternatives to choose from in a pinch. In my case, I was able to fall back to using Gmail for outbound messages, and by setting up automatic forwarding, I was able to remain up and running.
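For simple services like e-mail, even the failover check itself can be scripted. Below is a minimal Python sketch of the decision logic; the SMTP host names and ports in the commented usage line are illustrative assumptions, not a documented failover procedure:

```python
import socket

def smtp_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to the given mail endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_outbound(primary_ok, fallback_ok):
    """Prefer the primary provider; use the fallback only when the primary is down."""
    if primary_ok:
        return "primary"
    if fallback_ok:
        return "fallback"
    return "none"

# Hypothetical usage (host names/ports are assumptions for illustration only):
# route = choose_outbound(smtp_reachable("smtp.office365.com", 587),
#                         smtp_reachable("smtp.gmail.com", 587))
```

A scheduled task running a check like this at least tells you *when* to flip the switch, even if the actual cut-over (DNS or client reconfiguration) remains manual.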

Of course, not all systems are as simple to configure. For example, a CRM backup instance or a relational database disaster recovery implementation can take a lot of time and effort to set up and manage. Still, as the saying goes, it’s good to hope for the best and plan for the worst.

A Less Cloudy Outlook

Just to be clear, I really believe in the cloud architecture approach, and I think it will continue to have a dramatic impact on how organizations implement IT services.  I understand (first-hand, in this case!) why people have their trepidations about trusting other organizations with their infrastructure.  But, trust is something that is earned over time, and hopefully by deeds rather than through promises.  Overall, I’m excited about the future of hosted applications, platforms, and infrastructure.  For now, though, it looks like IT professionals will have to plan and manage with a partly-cloudy outlook on outsourced infrastructure.

Real World Internet Speed Test: Office 365

All too often, people tend to measure whatever is easiest to measure rather than what matters most.  Examples range from health (body weight, nutrition, etc.) to technical fields such as IT. 

Easy Answers

When I am attempting to “test” the bandwidth of a system or network connection, I often find myself using one of the common free online speed tests. They usually run quickly and require no configuration. But what do the results really mean? Below is an example of a recent test result.

But what does this really mean in the real world?  First off, the automatic server selection process favors the server that is “closest” (from a network architecture standpoint) to me.  Generally, the results will give me the best possible speed and path and can be considered a theoretical maximum.  But, I rarely connect to resources on my ISP’s core network.  Rather, almost everything I do requires routing outside of the ISP’s boundaries.  That’s where arrangements like Internet Peering and Content Delivery Networks (CDNs) can make a huge difference.  In this case, the easiest answer is clearly not the best one…
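One way to approximate a more “real world” measurement yourself is to time TCP handshakes against the actual endpoints you care about, rather than a test server the tool picked for you. Here’s a rough Python sketch of that idea (a quick probe, not a polished benchmark; any host names you pass in are your own choices):

```python
import socket
import statistics
import time

def connect_latency_ms(host, port=443, timeout=3.0):
    """Time one TCP handshake in milliseconds; return None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

def summarize(samples):
    """Median and worst-case latency over the successful samples."""
    ok = [s for s in samples if s is not None]
    if not ok:
        return None
    return {"median_ms": statistics.median(ok), "max_ms": max(ok)}

# Hypothetical usage: collect several samples per endpoint, then summarize.
# samples = [connect_latency_ms("outlook.office365.com") for _ in range(5)]
# print(summarize(samples))
```

Repeating the samples at different times of day also surfaces the variance that a single one-shot speed test hides.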

Better Answers

What I really want to know is how well I can connect to “real” online applications and services, ranging from Netflix to Office 365.  I want my Xbox Live connection to have a low latency, and I want to make sure that performance doesn’t vary dramatically during the day.  That’s where more specific tests become important.  Many online content and application providers have their own tests.  You can often find them by doing a basic web search. 

Example: Testing Office 365 Performance

Performance and reliability are among the foremost concerns for most IT professionals who are considering moving some applications and services to the cloud (that is, to network infrastructure that they do not completely control). This move introduces numerous variables, both technical (bandwidth, latency, routing, quality of service) and not-so-technical (quality of support personnel, investments in the network, priority of each customer, etc.). Even the best implementations can fail if the end-user experience suffers from limited bandwidth or high latency.

As an example of a more “real world” (and therefore more relevant) test, I want to highlight Microsoft Online Services’ Performance Test. This set of online tests takes bandwidth, latency, routing, and related parameters into account to give you a good idea of how well Microsoft’s online services will perform for you. Below is a portion of the “Speed” test result:


This clearly shows that I’m not getting my maximum stated bandwidth (~32Mbps down / 3.0Mbps up), but the performance definitely looks good enough for basic usage. 

The tests also measure other important statistics, such as packet loss, round-trip time, packets per second, and related characteristics.  All of this yielded the following summary:


Of course, performance is likely to vary at different dates and times (I happened to perform this test on a Sunday afternoon). If you want additional detail on the tests, see the blog post titled Moving your customers to BPOS or Office 365? Check their BANDWIDTH!. And feel free to try the test yourself if you’re considering moving yourself and/or your users to Microsoft Office Online.
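To make those statistics concrete, here is a minimal sketch of how packet loss, average round-trip time, and jitter could be computed from raw RTT samples. The field names and the jitter-as-standard-deviation definition are my own simplifications, not part of Microsoft’s test:

```python
import statistics

def rtt_report(rtts_ms):
    """Summarize RTT samples in milliseconds; None entries represent lost packets."""
    received = [r for r in rtts_ms if r is not None]
    sent = len(rtts_ms)
    loss_pct = 100.0 * (sent - len(received)) / sent
    if not received:
        return {"loss_pct": loss_pct}
    return {
        "loss_pct": loss_pct,
        "avg_rtt_ms": sum(received) / len(received),
        # Population standard deviation as a simple stand-in for jitter
        "jitter_ms": statistics.pstdev(received),
    }

# Example: 4 probes sent, 1 lost -> 25% loss
# rtt_report([10.0, 20.0, None, 30.0])
```

Tracking these numbers over time (rather than from one run) is what tells you whether a poor result is a momentary blip or a structural problem with your path to the provider.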

BrightTALK Presentation: Application Performance Monitoring (APM) in Virtualized and Cloud Environments

On June 6th, I’ll be presenting another live, free webinar on BrightTALK.  The title is Maintaining Service Levels with APM in Virtualized & Cloud Environments.  Here’s the abstract/overview of the content:

Significant changes in IT infrastructure approaches are driving data centers towards high levels of efficiency and automation. Virtualization and public/private/hybrid cloud architectures can help reduce costs and simplify administration, but the primary goal for IT organizations is to ensure that the applications and services they deliver meet or exceed their users’ needs. This presentation will provide advice and recommendations that focus on end-to-end monitoring and management of highly virtualized and cloud infrastructure components, including user experience, storage, networking, and hypervisors.

Visit the site to register for the webinar, or use the below information to sign up. And, while you’re there, be sure to check out the huge library of related content that’s available for free!

A BrightTALK Channel

Note: To access the recording of this session (and all of my past BrightTALK webinars), please search using

Virtualization and Storage Presentations at TEC 2012

It’s still a few months away, but I’ll be presenting two storage-related sessions in the Virtualization and Cloud track at The Experts Conference (TEC) 2012 in San Diego, CA.  Below are the abstracts.  For more information about the conference, please visit the TEC 2012 Conference web site.


Storage Improvements in Windows Server 8 / Hyper-V 3.0

Virtualization architects and administrators have long sought quicker, simpler and more cost effective ways to scale and manage storage in their data centers. Microsoft has made many significant improvements in the architecture and storage features of Hyper-V 3.0 and the Windows Server 8 platform. Examples include support for SMB-based virtual disks, management UI improvements, network stack improvements, Hyper-V Replicas, NTFS reliability improvements, incremental VHD backups, storage de-duplication, offloaded data transfer, SMB protocol improvements, and Storage Spaces. These features can help improve storage management for many different types of virtualization deployments and can help bring the idea of cloud-based automation closer to reality.

This session will focus on technical details and demonstrations of new features in the Windows Server 8 platform and in Hyper-V 3.0. The focus will be on practical suggestions for how and when the new features should be used to reduce costs, simplify administration, and increase performance.

Designing Storage for Virtual Environments

One of the most common issues related to virtual infrastructure design is related to planning for and managing the storage environment. Successful SAN, NAS, and local storage deployments require the provisioning of highly-reliable, high-performance, cost-effective solutions to meet business and technical needs. The challenge for IT is in consolidating and optimizing infrastructures while staying within budgets. The primary concerns – including storage capacity, performance, and reliability – can drive the success or failure of virtualized deployments.

This presentation begins with recommendations for designing a storage environment based on requirements, starting with a solid understanding of application workload characteristics. Strategies for collecting storage statistics through historical and real-time performance monitoring can provide valuable insight into real requirements. Based on this data, IT departments can compare different storage approaches, including centralized network-based storage, and cloud-based options. Important features to consider include file- and block-level de-duplication, thin provisioning, high-availability, clustering, and disaster recovery. Attendees will learn methods by which they can best plan for, implement, manage, and monitor storage for virtualization in their own environments.

BrightTALK Webcast: Managing VM Sprawl: [Re]gaining Control of Your Data Center

I hope the New Year is off to a good start for everyone!  For many data center administrators, the tasks related to supporting a wide variety of virtual machines and related infrastructure are going to take a lot of time and resources.  To help address some of the chaos, I’ll be presenting a free webcast titled Managing VM Sprawl: [Re]gaining Control of Your Data Center at the upcoming BrightTALK Virtualization Conference.  The presentation will take place on January 11, 2012.  Here’s an overview of the topic:

As virtual machines have become the default method of deploying new applications and services, many organizations have found that they’re encountering the problem of “VM sprawl” – the rapid proliferation of VMs that makes management difficult.

In this presentation, you’ll learn:

  • Common causes of VM sprawl and how to address them
  • Specific technical administration issues that are unique to virtual machines
  • Methods of managing a VM’s “life cycle”, from initial deployment to retirement
  • Ways to maintain control of data center resources while also allowing for end-user self-service
  • Ways in which automation can help manage the major causes of VM sprawl

This online conference provides a wide variety of different presentations, so be sure to check out information about the Virtualization Summit and register for the event.

Note: To access the recording of this session (and all of my past BrightTALK webinars), please search using

TechNet Radio Community Corner: Virtualization with Microsoft MVP Anil Desai

I often enjoy talking with other technical professionals about the direction of IT (in general) and about new or upcoming technology (the geeky details).  I’m happy to have had the opportunity to appear as a guest on a recent TechNet Radio Community Corner episode.  In the ~15-minute conversation, we discussed supporting the IT community and the current and future state of virtualization (including the directions of Hyper-V and System Center Virtual Machine Manager (SCVMM)).

Here’s some brief information about this episode, titled TechNet Radio Community Corner: Virtualization with Microsoft MVP Anil Desai:

In today’s Community Corner, Sr. IT Pro Evangelist John Weston welcomes Microsoft Virtualization MVP Anil Desai to the show. Tune in as they discuss cloud computing’s impact on IT, System Center Virtual Machine Manager 2012, and the relationship between virtualization and private cloud solutions.


Special thanks to Chris Caldwell and John Weston for inviting me and for a fun conversation!  For more shows and episodes of related shows, visit the TechNet Edge web site.

TEC 2011: Virtualization Approaches and Storage Presentations

As I mentioned in a previous post, I’m scheduled to speak at The Experts Conference 2011 in Las Vegas (April 17 – 20, 2011).  I’ll be giving two presentations in TEC’s new Virtualization and Cloud track.  My session abstracts are below.  In addition, session abstracts for each of the tracks and the Conference Agenda are now available online.  Let me know if you plan to attend or if there’s anything you’d like to see me cover (either in the presentations or on this blog).

Storage Considerations for Virtualization

Key considerations related to successful virtualization deployments revolve around provisioning highly-reliable, cost-effective solutions to meet business and technical needs. The challenge for IT is in consolidating and optimizing infrastructures while staying within budgets. The primary concerns – including storage capacity, performance, and reliability – can drive the success or failure of virtualized deployments.

This presentation begins with recommendations for designing a storage environment based on business and technical requirements and a solid understanding of application workload requirements. Strategies for collecting storage statistics through historical and real-time performance monitoring can provide valuable insight into real requirements. Based on this data, IT departments can compare different storage approaches, including centralized network-based storage, and cloud-based options. Important features to consider include data de-duplication, thin provisioning, high-availability, clustering, and disaster recovery. Attendees will learn methods by which they can best plan for, implement, manage, and monitor storage for virtualization in their own environments.

Evaluating Virtualization Approaches

The term "virtualization" can apply to a broad range of varying technologies, ranging from storage to networks to servers to applications. The primary goal of these approaches is to simplify management, increase efficiency, allow for scalability, and meet reliability requirements. With recent improvements in virtualization technology, the challenge for IT professionals is in deciding which approaches are the most relevant, given specific requirements.

The focus of this presentation is on understanding the technology behind various virtualization approaches, including presentation-, application-, session-, user state-, desktop-, and server-virtualization. The topic will begin with information on understanding business, technical, and service requirements. These details will then be used to compare a wide variety of different approaches to solving common IT problems. Attendees will receive information that will help them choose which approaches make sense in their own environments.

Virtualization and Storage Presentations at The Experts Conference

I’m currently scheduled to speak on two topics at The Experts Conference 2011 in Las Vegas (April 17 – 20, 2011).  The conference has tracks that focus on Directory Services, Exchange, SharePoint, and Virtualization.

The two topics I’m planning to present are tentatively titled Storage Considerations for Virtualization and Evaluating Virtualization Approaches. I’ll post more details and abstracts here as the conference gets closer.

Mozy Support Nightmares: A Cloudy Forecast for Online Storage and Backups?

Over the last year, I have frequently been asked to write and speak about storage and cloud-based service offerings.  Remote storage is a compelling technology for consumers and IT departments, and it’s a good starting point for those who might be interested in dipping their toes (or heads) into the more-ethereal-than-Ethernet “cloud”.

Trouble in Cloud City

Several years ago, I wrote a blog post about the virtues and benefits of online backups (see Online Backup Options).  Since then, I have recommended cloud-based storage (and, Mozy, in particular) to a rather large number of IT professionals, friends, and family.  The idea itself is compelling: Online backups have the potential of simplifying the backup process for most users, while providing secure remote storage.  But what happens when something goes wrong?  Or if you just have a technical question?

I don’t often highlight specific companies for poor customer service – it’s almost to be expected from many organizations these days – but a recent interaction I had with Mozy’s Customer Support has ended in my completely giving up on trying to resolve what should have been a very simple issue.  Without getting into the technical specifics, I have been trying to perform backups of Encrypting File System (EFS)-encrypted local files to the cloud.  From the latest information I could find, Mozy supports both local and online backups of EFS encrypted files.  That wasn’t my experience, though – I received cryptic error messages and overall backup failures.  So, I decided to contact Mozy’s Customer Support, creating a case that included my log files and a detailed description of the problem. 

A Little Rain Must Fall…

In summary: it has been over two weeks now, and after three escalations, I’m no closer to resolving the problem.  Just about every response I have sent to Mozy (along with requests for escalation) has been ignored.  In fact, a US Escalations Customer Support Manager has barely managed to feign any interest in my issue at all.  An hour-long phone call with a Level 2 Customer Support technician resulted in his disabling several necessary services on my primary Windows 7 workstation (I had to keep records of this so I could reverse the changes myself) and poring through log files that provided little useful information.  The response to my most recent request for support has been a request for me to (again) restate the original problem (it’s thoroughly documented in their support system – I just can’t get anyone to read it).  I do plan to escalate this issue to the Director or VP level at Mozy, as I’m somehow still hopeful that someone at the organization will care.

Cloud Compatibility

One of the most promising aspects of cloud-based service offerings is a reduction in complexity.  Rather than relying on complicated application deployments (the story goes), we can leave all of the details to services that are provided off-site.  But what about support and compatibility issues?  What happens when two or more cloud services vendors decide that their services are incompatible?  My case with Mozy might be that type of issue, though there doesn’t seem to be any official documentation or support boundary defining which products can peacefully co-exist with it on the same system and which configurations are supported.  And what if the vendor decides that features and functionality I require aren’t important to them?  Sure, I could run into the same problems with local applications, but workarounds are far easier to find when I control both communication endpoints.

Risk Mitigation

I understand that I’m hardly the first person to suffer from poor technical support, but this experience has made me reconsider the risks of cloud-based services in general.  I’m hardly an important customer for Mozy, but I am paying for their service and I really do rely on the sanctity of my backups.  My typical response to organizations that doubt the cloud is to ask them to compare the reliability of their own datacenter infrastructure against that of an online service provider.  However, in this case, I’m completely stuck – I either need to reduce security at the file system level, discontinue the use of Mozy (and transfer 25 GB of data to a competing service), or revert to local backups.

All Eggs in One Cloud?

As the entire world moves to a greater reliance on Internet connections and online services, it becomes harder to create fall-back plans and alternatives.  It’s simply not practical or cost-effective to plan for every way your service providers might fail you.  What’s the point of online backups if I need a backup plan for my online backup provider?

That makes me curious: who else has had a recent experience that made them question the value of hosted services?  Was it downtime, client application issues, availability, poor customer support, or all of the above?  And how safe do you feel when your mission-critical IT infrastructure is resting on clouds?