This will be the first of a four-part series of posts on the four major steps involved in migrating off Windows Server 2003 (and, in most cases, later versions of Windows Server), but instead of just rehashing the information that targets larger enterprise customers, I will be presenting it from the perspective of smaller organisations. While some of the particulars are the same, the smaller scale of the migration and the subset of migration tools involved are what make this content more targeted at the SMB reseller and their customers.
The above graphic is one that you will be seeing a great deal of in the lead-up to Windows Server 2003 End of Support in July 2015. It’s a clear message, but underneath it is a very different story compared to the migration paths SMBs would have gone down previously. What’s the major change? It’s the cloud, which has opened up a variety of options that may not have been considered in the past. In some ways these upgrade decisions may have been easier before, because the options were limited for most, the default path being to upgrade to the latest version of whatever had been used previously. I’m getting ahead of myself here, but I will delve into this in much more detail in Part 3.
Back to today’s focus: the first step is purely the collection of information about the customer’s environment. If it is one of your existing customers, you probably have a pretty good idea of what they are currently running, but if it’s a new customer, or one that has been mostly managing their own environment, you know you will make a few discoveries that catch you by surprise. The goal during discovery isn’t to start classifying or planning around what you find; it’s simply to do the best job possible of identifying what is running, so that the later stages don’t present any surprises.
While a number of tools are available and recommended for identifying software and hardware inside the network, I recommend the Microsoft Assessment and Planning Toolkit (aka MAP) for several reasons. The first is that it is free, which is always a great start for cost-conscious customers; the second is that it is a tool you can revisit for a variety of tasks, not just identifying server workloads. Desktop application identification, report generation and action plans are other benefits it brings to the table.
How I recommend using MAP to collect data within your customer’s environment will depend on the technology in use, but if they already run a virtualisation solution, set up a virtual machine that meets the requirements to install and run MAP, and allow it to run continuously for a period of time to collect the appropriate data. MAP doesn’t require agents to be installed on clients or servers; instead it uses the appropriate domain credentials to query the environment. You can easily include and exclude different subnets if there are slow links that need to be taken into consideration.
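MAP does this agentless inventory for you, so you don’t need to script anything yourself. Purely to illustrate the idea of agentless, subnet-scoped discovery, here is a minimal Python sketch (not part of MAP, and no substitute for it) that probes each address in a subnet for ports commonly open on Windows machines, such as 445 (SMB) and 3389 (RDP); the function and port choices are my own assumptions for the example:

```python
import ipaddress
import socket

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(cidr: str, ports=(445, 3389), timeout: float = 0.5):
    """Probe common Windows service ports on every host in a subnet.

    Hosts answering on 445 (SMB) or 3389 (RDP) are likely Windows
    machines worth a closer look during discovery. Returns a dict
    mapping each responding IP to its list of open ports.
    """
    net = ipaddress.ip_network(cidr)
    # For a single-address network (/32), hosts() behaviour varies by
    # Python version, so target the address directly.
    targets = [net.network_address] if net.num_addresses == 1 else net.hosts()
    results = {}
    for ip in targets:
        open_ports = [p for p in ports if probe(str(ip), p, timeout)]
        if open_ports:
            results[str(ip)] = open_ports
    return results

if __name__ == "__main__":
    # Harmless demonstration: probe only the loopback address.
    print(scan_subnet("127.0.0.1/32", ports=(9,), timeout=0.2))
```

A real tool would probe addresses concurrently and with care (a sequential scan of a /24 with a 0.5 s timeout can take minutes over a slow link), which is exactly the sort of plumbing MAP already handles, along with the WMI queries that turn "something answered on port 445" into an actual hardware and software inventory.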
MAP is updated regularly, so I recommend that you check for a new version before embarking on a new customer engagement, and also suggest that you sign up for the early release previews on Microsoft Connect. These previews tend to include capabilities that align with the major Microsoft migration scenarios, but use them alongside the supported release rather than in place of it.