This post provides an overview of key technical considerations, actionable recommendations and common pitfalls related to migrating workloads from an on-premises datacenter to Microsoft Azure. The business-related considerations were addressed in Migrating Datacenter to Azure – Part 1.
Migrating IT systems and infrastructure to a private or public cloud can be a challenge even for the most seasoned IT professional. At the same time, it is the moment to revise, rethink and improve architecture and concepts.
The most common strategic drivers behind moving a datacenter to the cloud include: reducing capital expenditure, decreasing operational expenditure, improving elasticity, improving time-to-market and attaining improvements in security and compliance.
A datacenter migration must start with very careful planning and a phased approach to execution. Although the ease of entry to the cloud invites organisations to rush in without a solid plan, an inadequate cloud architecture almost always ends up preventing them from realising the benefits that prompted the migration in the first place. A datacenter migration is also a strategic undertaking that must be executed without causing a significant impact on business operations, service delivery, performance, or data protection requirements.
This post is focused on the process and most essential design decisions towards the transformation of the on-premises environment into Microsoft Azure. Implementation specific topics, such as prescriptive guidance on Azure Resource Manager templates, Azure Network Security Group design or Azure storage configuration are not addressed within this post.
This post consists of multiple sections that do not need to be read in sequence and which may be referred to as required to assist in planning and migration activities:
- The Landscape and Feasibility. This chapter summarises the key areas to address when evaluating the business feasibility of moving the managed services provider platform into Microsoft Azure.
- Planning for Coexistence. Transformations and migrations are rarely accomplished in a single step. Customer functional and non-functional requirements, platform requirements, and other dependencies typically demand a phased approach to migration. This means that both the source and target environments will be active concurrently for at least the duration of the migration.
- Building the Inventory. A thorough understanding of the source environment is required when moving a workload, application or service from the source environment into the target environment. Dependencies within and between workloads must be identified to assess the impact of migration and plan an appropriate approach.
- Designing the Target Environment. This section will address considerations for planning the Microsoft Azure environment and provide examples of logical and physical architectures and approaches to leverage the agility introduced when embracing public cloud services.
- Migration. This section focusses on different approaches to moving services into Azure subscriptions, the approach to project execution as well as roles and responsibilities within the migration project team.
This post has been prepared for Managed Service Providers seeking to move their shared services and tenant-specific workloads into Microsoft Azure. Many of the concepts and recommendations within this post may also be applicable to other business types including, but not limited to, Systems Integrators and traditional datacenter hosting companies.
The Landscape and Feasibility
The landscape for a typical Managed Services Provider (MSP) is often defined by platform efficiencies and resource sharing. Typically, MSPs host multiple tenants (customers) within their datacenters and these tenants share networking, storage and compute resources. Managed Services Providers also typically provide shared or common platform services for tenants including directory services, messaging services, publishing services, service delivery services and file services. It is the shared nature of these common services that introduces complexity, as the significant size of these environments prevents them from being transformed overnight. At any point in time during this transitioning cycle, there are two platforms that need to be operated and maintained concurrently.
When evaluating these parameters for transformation or migration planning it is essential to understand the overall process and steps that should be followed. Figure 1 provides an overview of the steps involved, and further details are provided elsewhere in this post.
Figure 1: Migration Steps
Service providers deploying Azure solutions will typically utilise Azure subscriptions deployed under one of two Microsoft licencing programs: the Cloud Solution Provider (CSP) program and the Enterprise Agreement (EA).
Under the Microsoft Cloud Solution Provider (CSP) program, each Azure subscription provisioned under the program must contain resources utilised by one and only one tenant. Not only is this a requirement of the CSP program terms, but the approach also provides security, billing and operational benefits. For example:
- Role-based access control may be configured at the subscription level to secure Azure resources.
- All Azure resource consumption within a CSP-based subscription is linked to the subscription, enabling the service provider to readily identify the tenant with which the consumption is associated.
- When requesting support from Microsoft, service providers can create service requests on behalf of their tenants that are linked to a specific tenant and subscription.
When Azure resources will be shared by multiple customers, Microsoft requires that an alternative licensing program, such as an EA or the Microsoft Online Subscription Program (MOSP), be used.
The appropriate Azure licensing model for a specific workload will depend on the MSP’s business model and whether it wishes to own the customer relationship and solutions end to end. Most MSPs will prefer to licence Azure through the CSP program as it allows them to build a closer and higher value customer relationship.
Although Microsoft offers the CSP program as the primary cloud channel program for service providers, in some cases a hybrid model that combines both EA-based subscriptions and CSP-based subscriptions may be required. In the case study that was used as a reference for this post, a combination of CSP and EA Azure subscriptions was used. Approximately 75% of the workloads were deployed into CSP-based subscriptions, with the balance using EA-based subscriptions for shared services.
From a technology perspective, there is no major difference since both environments are based on Azure Resource Management (ARM) deployment concepts.
As mentioned earlier, hosting Azure resources for each tenant within their own CSP-based Azure subscription is the most common approach. However, if tenant (customer) resources are limited in number and rely heavily on shared services, setting up dedicated subscriptions for each tenant then configuring connectivity between the tenant environment and the shared services environment may not be cost effective.
There are a few considerations to take into account, but the most important one is whether the tenant requires direct access to resources, such as a remote desktop session hosted directly on a server system. This is not preferred in a shared environment since it has the potential to compromise security. If the customer is only consuming Software as a Service (SaaS) offerings through a UI, web services or a shared remote desktop session, that might warrant locating these single server systems in the shared services environment, eliminating the overhead and cost of egress traffic from tenant environments into the shared services.
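To make the trade-off above concrete, the sketch below compares the monthly cost of a small tenant in a dedicated CSP subscription (which incurs egress charges to reach shared services plus per-subscription management overhead) against hosting the same servers inside the shared-services environment. All figures and rates are hypothetical placeholders, not Azure pricing.

```python
# Hypothetical cost comparison: dedicated tenant subscription vs shared hosting.
# Every number here is an illustrative assumption.

def monthly_cost_dedicated(vm_cost: float, egress_gb: float,
                           egress_rate: float, mgmt_overhead: float) -> float:
    """Tenant in its own subscription: VM cost plus cross-environment
    egress to shared services plus per-subscription management overhead."""
    return vm_cost + egress_gb * egress_rate + mgmt_overhead

def monthly_cost_shared(vm_cost: float) -> float:
    """Same servers hosted inside the shared-services subscription:
    no cross-environment egress and no extra subscription to operate."""
    return vm_cost

dedicated = monthly_cost_dedicated(vm_cost=300.0, egress_gb=500.0,
                                   egress_rate=0.08, mgmt_overhead=150.0)
shared = monthly_cost_shared(vm_cost=300.0)
print(f"dedicated: {dedicated:.2f}/month, shared: {shared:.2f}/month")
```

A comparison like this, run with real consumption data, helps decide at what tenant size a dedicated subscription stops being cost effective.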
Many Managed Service Providers have adopted virtualisation technologies to optimise the usage of their compute, network and storage resources. Often the resource allocation for workloads hosted in these virtualisation environments is not optimised, and more resources (e.g. CPU cores, memory, disk) are assigned to workloads than necessary. In addition, virtualisation infrastructure will most commonly include redundant hardware to provide high availability for virtualised workloads.
When transitioning to a public cloud service, the approach to provisioning of workload resources must change. For example:
- The public cloud does not provide hardware resilience in the same way as on-premises virtualisation platforms may. Workloads should incorporate availability measures at the application layer rather than relying on highly available infrastructure.
- Service providers have limited control over maintenance cycles for public cloud infrastructure but are still required to meet service levels with their customers as described by their managed services agreement.
There are several considerations when planning for workload migration to the public cloud to ensure cost optimisation and an ongoing ability to deliver managed services according to service level agreements:
- Resource Capacity Requirements: It is of utmost importance to understand the actual resource requirements of a given workload, application or service within managed services. By understanding specific workload requirements it is possible to allocate sufficient public cloud resources without over-provisioning and incurring unnecessary additional cost. Using resource definitions equivalent to an existing on-premises environment and adopting them unchanged in the public cloud often leads to higher cost, thereby affecting the bottom line.
- Infrastructure Availability: Not having an underlying highly available compute infrastructure for workloads means that service availability must be accomplished at a workload level. Single server configurations must transition to multiple server configurations to provide continued availability of managed services in case underlying infrastructure is affected by issues or maintenance. Microsoft Azure Availability Set resources can be used to ensure virtual machines are distributed across multiple Fault Domains.
- Workload Availability: Increasing workload availability through the addition of virtual machine instances may increase operational costs and thereby impact the business case for virtual machine migration to Microsoft Azure. One approach to managing this impact is to appropriately scale virtual machines to meet workload requirements whilst adding additional instances to enable a workload to survive the loss of one or more virtual machines. Load balancing across virtual machines can be accomplished either at the application level by workload-specific mechanisms, or through the addition of infrastructure-level load balancing resources such as Azure Traffic Manager, Azure Application Gateway or Azure Load Balancers. Other approaches to providing workload resilience include utilising replication capabilities or cluster capabilities within a workload. For example, Active Directory Domain Controller replication can provide resilience for Active Directory Domain Services, whilst Microsoft SQL Server Log Shipping or Always-On Availability Groups may provide resilience for database services. Replication is unlikely to result in cost optimisation, but may address requirements for ongoing service availability.
- Planning for Growth: Even if a decision is made to implement a single virtual machine instance, Microsoft recommends that it be deployed in its own Availability Set. This approach facilitates the addition of further virtual machine instances to the same Availability Set in future if required. By including the initial virtual machine in an Availability Set from the outset, the service interruption of joining the virtual machine to an Availability Set when subsequent instances are deployed is avoided.
- Virtual Machines with Licenced Software: Some VM images include pre-installed software, such as Microsoft SQL Server. The usage costs for VMs using these images will be higher than the cost of a VM deployed with an image including only the operating system, as a component of the usage cost is attributed to the licence for the included software. When deploying virtual machines it is important to consider whether to utilise images with pre-installed software, or whether to install application software after VM deployment and leverage other licencing models for the application.
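The right-sizing consideration above can be sketched as a simple lookup: choose the smallest VM size whose capacity covers the workload's observed peak demand plus a headroom margin, rather than copying the on-premises allocation. The size table below is a small illustrative subset with assumed values; real selection should be driven by current Azure VM size specifications and measured utilisation data.

```python
# Minimal right-sizing sketch. SIZES values are illustrative assumptions,
# not authoritative Azure specifications.

SIZES = [  # (name, vCPUs, memory_GiB), ordered smallest to largest
    ("Standard_B2s", 2, 4),
    ("Standard_D2s_v3", 2, 8),
    ("Standard_D4s_v3", 4, 16),
    ("Standard_D8s_v3", 8, 32),
]

def right_size(peak_cpu_cores: float, peak_mem_gib: float,
               headroom: float = 0.2) -> str:
    """Return the smallest listed size covering peak demand plus headroom."""
    need_cpu = peak_cpu_cores * (1 + headroom)
    need_mem = peak_mem_gib * (1 + headroom)
    for name, cpu, mem in SIZES:
        if cpu >= need_cpu and mem >= need_mem:
            return name
    raise ValueError("No listed size fits; extend the size table")

# An on-premises VM allocated 8 cores / 32 GiB but peaking at 1.5 cores / 5 GiB
print(right_size(1.5, 5.0))
```

In this example the workload lands on a far smaller size than its on-premises allocation, which is exactly the saving the bullet on Resource Capacity Requirements describes.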
The cost optimisations outlined above represent the low-hanging fruit when planning a migration of an on-premises solution to Microsoft Azure. Additional complexities arise during migration planning when current platform capabilities are mapped to resources available in Microsoft Azure. These complexities are not addressed in this post, but are instead discussed in some detail in the Microsoft Azure online documentation and corresponding blogs.
Planning For Coexistence
Migrating managed services into Microsoft Azure is often unlikely to be accomplished overnight or over a weekend due to the size of the environment being migrated or the number and duration of verification steps used in the migration process.
To accommodate a phased migration it is necessary to plan for a period of coexistence in which some services continue to run on-premises whilst others are running in Microsoft Azure.
When planning for coexistence the following topics should be considered:
- On-Premises Bandwidth Requirements. During a period of coexistence the bandwidth needs of the on-premises environment will fluctuate. Initially, additional bandwidth may be required to accommodate communication between services hosted on-premises and those running in Microsoft Azure. Over time, the bandwidth requirements for on-premises systems will decrease as they are progressively migrated to Microsoft Azure and client applications access these through endpoints published in Microsoft Azure. Estimating bandwidth requirements and how these will change over the period of coexistence is important but can be a challenging task, particularly when the on-premises network is not well segregated or network components do not provide sufficient information for analysis.
- Deployment Sequence. When planning for coexistence, Microsoft recommends deploying core services in Microsoft Azure prior to migrating services that are dependent on these. For example, consider deploying (or migrating, when appropriate) the following services in Microsoft Azure early in the period of coexistence:
- Active Directory Domain Controllers to provide authentication and authorisation services for subsequent application workloads.
- Distributed File Services (DFS) nodes to provide file services with a local replica of content.
- Remote Desktop Services (RDS) services to minimise the impact to end users.
- SQL Server Always-On Availability Group nodes to provide a local replica of databases that may be used by applications.
During the period of coexistence, after core services are deployed in Microsoft Azure, consumption charges will accrue for these resources even though they are not yet in use. For this reason, Microsoft recommends minimising the duration of this period.
During the period of coexistence, core services will be running both on-premises and in Microsoft Azure. Once workload migration is complete, core services that are running on-premises and that are required only for workloads that have been migrated to Azure may be decommissioned.
- DNS Switch or Platform Activation. Once a hybrid scenario is established and core services are deployed and extended into Microsoft Azure, platform activation is the next step. Reconfiguration of endpoints or clients should be avoided due to the effort, timeframes and costs involved in making these changes. Although there are several approaches that may be used, two are described below:
- Traffic Manager. Introducing Azure Traffic Manager to provide global load balancing services will enable load balancing of traffic across services running on-premises and in Microsoft Azure. Azure Traffic Manager operates as a mediation layer between the end points consuming services and the platform publishing the services. If running two environments concurrently is not an option, Traffic Manager may also be used to switch between environments without requiring a change to public DNS records. Using Azure Traffic Manager in this manner to support migration is typically temporary and provides a controlled approach to direct traffic to migrated workloads.
- DNS Switch. When performing a DNS Switch, the public DNS records for the workload are updated. Initially, these DNS records resolved to the IP address of the workload on-premises, however after migration the DNS records are updated to reflect the addresses of the Microsoft Azure endpoints for the migrated workload. When updating DNS records the time-to-live (TTL) of the records must be considered as it may take some time for systems to cease using cached IP addresses and query DNS servers for updated addresses. As this approach may result in a delay until all clients utilise the new IP address, performing a DNS switch is seen as the least controlled approach to connecting to migrated workloads. From the time the DNS record is updated, traffic may be directed to both the prior on-premises address and the address of the workload in Microsoft Azure for some time.
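The TTL consideration above can be expressed as a simple calculation: a record cached by a compliant resolver immediately before the switch may still point clients at the old address for one full TTL afterwards. The sketch below estimates that window; dates and TTLs are illustrative, and real-world caches can exceed the record TTL.

```python
# Sketch: estimating how long clients may still resolve the old on-premises
# address after a DNS switch, assuming resolvers honour the record TTL.

from datetime import datetime, timedelta

def cutover_window(ttl_seconds: int, switch_time: datetime) -> datetime:
    """Latest time a compliant resolver may still return the old address:
    a record cached just before the switch lives for one full TTL."""
    return switch_time + timedelta(seconds=ttl_seconds)

switch = datetime(2017, 6, 3, 22, 0)      # planned cut-over time (example)
print(cutover_window(3600, switch))        # with the original 1-hour TTL
print(cutover_window(300, switch))         # TTL lowered to 5 minutes beforehand
```

This is also why a common practice is to lower the TTL well in advance of the switch: the second call shows how a reduced TTL shrinks the uncontrolled window.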
Building the Inventory
Platforms operated by Managed Services Providers are often large and complex, composed of shared services and dedicated or multitenant customer environments. As discussed earlier, these platforms are not typically migrated overnight or even over a weekend into public cloud platforms such as Microsoft Azure. These transformations typically include a coexistence period during which workloads are migrated from on-premises systems into Microsoft Azure.
To make efficient use of cloud economics and to migrate a workload from on-premises to Microsoft Azure without affecting service availability, a comprehensive understanding of the workload and source environment is required.
In smaller environments, a manual approach to compiling an inventory may be used, in which details such as server names, their current resource definitions and dependencies may be documented. In more complex or larger environments an automated approach to data collection is most often required. In these circumstances additional information such as intra-application communication patterns, networking protocols and port information is also captured, as this can be critical to planning a migration of workloads to Microsoft Azure.
The Microsoft Assessment and Planning Toolkit (MAP) is an agentless, automated, multi-product planning and assessment tool that can facilitate information gathering for desktop, server and cloud migrations. MAP provides detailed readiness assessment reports with extensive hardware and software information and actionable recommendations to help organisations accelerate their IT infrastructure planning process, and gather more details on assets that reside within their current environment. MAP also provides server utilisation data for Hyper-V server virtualization planning; identifying server placements, and performing virtualisation candidate assessments.
The Microsoft Assessment and Planning Toolkit (MAP) is free and may be downloaded.
Products that can provide similar or more comprehensive assessments than MAP are also available from third parties. For example, HealthCheck for Azure from BitTitan, Cloudamize and SurPaaS MaaS from Corent may be leveraged in migration scenarios.
Based on the output of these tools a platform diagram such as that shown in the next figure can be constructed that will support technical or planning activities related to the migration.
Designing the Target Environment
Although migrations from on-premises environments into public cloud platforms such as Microsoft Azure are often referred to as lift and shift migrations, the simplicity implied by this term does not mean that planning is unnecessary. Even in lift and shift migrations it is still necessary to design and define the target environment hosted within Microsoft Azure. This chapter addresses the most essential aspects when designing the target environment.
When planning for deployment of resources within Microsoft Azure, it will quickly become evident that there are many more named objects to plan for than with on-premises deployments. The use of consistent and intuitive naming conventions for managed objects (VMs, network interfaces, public IP addresses, storage accounts etc.) will make management and troubleshooting of operational issues easier and less error-prone. It is therefore important to establish and enforce a structured and robust naming convention. A naming convention should make it easy for administrators to create new objects and for everyone to identify and understand the role and purpose of the object.
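One possible convention is sketched below: `<env>-<customer>-<workload>-<type>-<nn>`, with special handling for storage accounts, which disallow hyphens and are limited to 24 lowercase characters. The scheme and abbreviations are assumptions chosen for illustration; the point is to generate names programmatically so the convention is applied consistently.

```python
# Illustrative naming-convention generator. The pattern and abbreviations
# are assumptions; adapt them to your own convention.

RESOURCE_ABBREV = {
    "virtual_machine": "vm",
    "network_interface": "nic",
    "public_ip": "pip",
    "storage_account": "st",
}

def resource_name(env: str, customer: str, workload: str,
                  rtype: str, index: int) -> str:
    """Build a name like prd-contoso-web-vm-01 from convention components."""
    abbrev = RESOURCE_ABBREV[rtype]
    name = f"{env}-{customer}-{workload}-{abbrev}-{index:02d}".lower()
    # Storage account names disallow hyphens and are capped at 24 characters
    if rtype == "storage_account":
        name = name.replace("-", "")[:24]
    return name

print(resource_name("prd", "contoso", "web", "virtual_machine", 1))
print(resource_name("prd", "contoso", "web", "storage_account", 1))
```

Generating names from a single function, rather than typing them by hand, is a simple way to make the convention self-enforcing across teams.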
Given the complexity of the Managed Service Provider environments, the infrequency of major migrations and the need for engagement of multiple disciplines in migration projects, a common best practice is to approach the target environment design and structure from a high level with a logical design. This approach will support conversations with the different stakeholders in the migration. The logical design organises the target platform in different layers, providing improved oversight and understanding of the target architecture. This can be applied to core or shared services as well as to customer environments within an Azure subscription provisioned under the CSP framework.
The next figure represents a sample logical design where the platform is composed of different layers and where the virtual networks and subnets are mapped to these different layers.
Traditionally, designing a platform results in written documents describing the platform and providing configuration details, as well as the trade-offs and decisions made during the process. The production of these documents is often a lengthy process, requiring a significant investment of effort and discipline to maintain. The migration to Microsoft Azure does not necessarily impact workloads, but does impact the underlying infrastructure that is consumed as a service by the workload. Instead of producing lengthy descriptive documents, a more agile approach may be considered. For example, the following figures provide examples of such an approach, wherein the different layers within the logical architecture are represented by a physical design based purely on diagrams.
The diagrams are for illustrative purposes only.
As mentioned earlier in this post, there are trade-offs to be made when determining whether a customer environment should be hosted within a MSP subscription or within a CSP provisioned Azure Subscription. Factors to consider when making this decision include:
- The size of the customer environment.
- The type of access a customer would require within that environment.
- The ability to isolate customers from other customers.
- The cost and operational considerations in operating one environment or several.
- Licensing constraints that may limit the ability to deploy services or workloads shared by multiple customers within a given subscription type.
When customers’ Azure subscriptions are provisioned through CSP, customers initially have no access to their subscriptions. Customer access to their subscriptions must be explicitly granted by the Microsoft partner through configuration of role-based access control settings on the subscription, a resource group or resource. Microsoft Partner accounts within the partner tenancy that have been assigned the Admin Agent role are able to administer customers’ Azure subscriptions on their behalf by virtue of their owner role for customers’ CSP Azure subscriptions.
To implement a consistent administrative structure across all CSP provisioned Azure subscriptions, Microsoft recommends that MSPs create Azure Active Directory groups within customer Azure Active Directory tenancies, assign customer accounts to these groups as appropriate, then grant the groups the appropriate roles within customer Azure subscription(s). Depending on the scale of this exercise, this configuration may either be automated or performed manually.
At the time of writing, a Microsoft Azure subscription has a limit of 200 storage accounts, with limits on the maximum request rate per storage account. Storage accounts also include limits on the number of Input/Output Operations per Second (IOPS) per share when utilising Azure Files. When planning a migration to Microsoft Azure, the approach to the use of storage accounts must be well thought out in order to ensure requirements for scalability, performance and availability are addressed.
A typical approach to storage account planning is to use storage accounts on a per logical architecture layer basis, where separate storage accounts are used for virtual machine operating systems and for data. To obtain cost efficiency and scale, the operating system disks are placed on Locally Redundant Storage (LRS) while the data is stored on Geo-Redundant Storage (GRS). Premium storage is used only when absolutely required.
Where Microsoft SQL Server has been deployed on virtual machines and Azure Files is used to store SQL Server data and log files, one approach may include using one storage account per SQL Server instance for the operating system and a dedicated storage account for the data and log files. Data and log files should be located on separate shares to ensure the maximum IOPS are available.
This concept can be applied either within the core or shared services as well as within the CSP provisioned customer Azure subscriptions.
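A back-of-the-envelope check like the one below helps validate storage account planning: given the IOPS demand of each disk, estimate how many standard storage accounts are needed so no account exceeds its request ceiling. The 20,000 IOPS figure mirrors a historical standard-account limit and is an assumption here; verify current limits in the Azure documentation before relying on it.

```python
# Lower-bound estimate of storage accounts needed for a set of disks,
# given an assumed per-account IOPS ceiling (verify against current limits).

import math

def accounts_needed(disk_iops: list[int], account_iops_limit: int = 20_000) -> int:
    """Minimum accounts by total IOPS demand. This is a lower bound:
    bin-packing of individual large disks may require more in practice."""
    return max(1, math.ceil(sum(disk_iops) / account_iops_limit))

print(accounts_needed([500] * 30))   # 30 disks at 500 IOPS each
print(accounts_needed([500] * 90))   # 90 disks at 500 IOPS each
```

Running the numbers per logical layer, as described above, shows quickly whether a single account per layer is sufficient or whether disks must be spread further.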
Virtual Networks and Network Security Groups
Managed Service Providers will typically host network traffic from multiple customers within their environment. It is therefore necessary to isolate traffic for these customers through the appropriate application of controls.
When planning an Azure network solution one common consideration is whether to use multiple VNETs for network isolation within an Azure subscription, or to use multiple subnets within a VNET. The approach selected will impact operational effort as well as solution complexity.
To minimise complexity, one approach is to use single VNETs consisting of multiple subnets, where the traffic is regulated within the environment using Network Security Groups. This approach can be applied to shared services environments as well as to customer-specific environments. Once again, different security zones or workload layers may be assigned to each subnet as represented within the logical design. The next figure illustrates a sample application of this approach.
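The single-VNET, multiple-subnet approach can be sketched with the standard library: carve one VNET address space into one subnet per logical layer, then attach a Network Security Group to each subnet. The layer names, the /16 address space and the /24 subnet size below are illustrative assumptions.

```python
# Sketch: allocating per-layer subnets from a single VNET address space.
# Layer names and CIDR ranges are illustrative only.

import ipaddress

def allocate_subnets(vnet_cidr: str, layers: list[str],
                     subnet_prefix: int = 24) -> dict[str, str]:
    """Assign each logical layer its own subnet from the VNET range."""
    vnet = ipaddress.ip_network(vnet_cidr)
    subnets = vnet.subnets(new_prefix=subnet_prefix)
    return {layer: str(next(subnets)) for layer in layers}

layout = allocate_subnets("10.10.0.0/16",
                          ["management", "presentation", "application", "data"])
for layer, cidr in layout.items():
    print(f"{layer}: {cidr}")
```

Planning the address layout up front like this avoids renumbering later, since Azure subnets cannot easily be resized once virtual machines are attached.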
There are different approaches available to migration of workloads into the target Microsoft Azure environment. Before any migration commences, connectivity is typically required between the current on-premises datacenter and the target subscription in Microsoft Azure to enable migrated workloads to communicate with the workloads that still reside within the originating environment.
The following approaches to migration are common:
- Lift and Shift. This approach involves shutting down the VM in the source location, copying the VM data to an Azure storage account, then recreating the VM in Azure. The data transfer will originate from the system where the copy action is started, but the actual data to transfer comes from the location where the virtual hard disks are located. Where Microsoft Hyper-V virtual hard disks (VHD) are migrated, no further conversion is required. Where other virtualisation platforms are used and virtual hard disks are not in the VHD format, conversion of virtual disks to a format supported by Microsoft Azure is required. Disk format conversion may take some time, and the downtime for the workload starts when the conversion starts. Once the virtual disk is in the VHD format, the virtual disk must be copied to the storage account in the Microsoft Azure subscription. This data transfer will occur over the Internet, and the throughput available will determine the duration of the transfer. Once the virtual disks for a VM are transferred, the virtual machine can be created in Microsoft Azure. The newly created virtual machine will come online in the new environment with new IP addresses, however as the operating system disk has been retained the same hostname will be used. Although this process can be automated, the amount of data to transfer, the level of concurrency, downtime required for testing and the required overall downtime of the workload must be taken into consideration when selecting this approach.
- Azure Site Recovery. Azure Site Recovery (ASR) provides the capability to replicate a workload from a source environment to the target Microsoft Azure subscription. This replication is either performed by an agent installed on the Hyper-V host server (when no conversion is required), or performed by installing an agent in the guest operating system when the Hyper-V hypervisor is an older version that does not support replication, or when the workload is located on physical servers or VMware hosts. When using ASR there is no downtime during replication of disk content to Azure. Although the new virtual machine within the target environment can retain existing IP addresses or obtain new IP addresses, the virtual machine network name will stay the same, which could introduce challenges when bringing up this virtual machine in the target environment while the source system is still active in the current environment. This approach minimises workload disruption as there is a short switch-over time whilst the VM is brought online in Microsoft Azure. Where hard dependencies on the IP address of the migrated server exist, these must be identified and modified where necessary when using this migration approach.
- Rebuild or Build out. This approach is based on either the redeployment of the workload in Microsoft Azure, or the application-dependent capability to extend the workload through the addition of further nodes. In the latter case, the ability to add nodes may also require application-level support for data replication. This approach also provides an opportunity to eliminate legacy applications, either known or unknown. Although this approach might be labour intensive, it provides an opportunity to run two environments concurrently, create a "clean sheet" situation and perform non-disruptive testing more easily. Workloads will receive new names and new IP addresses within the target environment.
The three approaches described above are not mutually exclusive and a combination of these approaches may be applied based on the workload characteristics.
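For the lift and shift approach in particular, the offline window can be estimated up front: conversion time (if the disk is not already VHD) plus the time to upload the disk over the available Internet bandwidth. The sketch below uses illustrative figures and assumes the link sustains only a fraction of its nominal rate.

```python
# Rough estimate of lift-and-shift downtime: conversion plus upload time.
# All inputs are illustrative assumptions.

def transfer_hours(disk_gb: float, bandwidth_mbps: float,
                   conversion_hours: float = 0.0,
                   efficiency: float = 0.8) -> float:
    """Hours to convert and upload a virtual disk, assuming the link
    sustains only `efficiency` of its nominal rate (decimal GB/megabits)."""
    megabits = disk_gb * 8 * 1000                      # GB -> megabits
    upload_hours = megabits / (bandwidth_mbps * efficiency) / 3600
    return conversion_hours + upload_hours

# A 200 GB VHD over a 100 Mbps link, no conversion needed
print(f"{transfer_hours(200, 100):.1f} hours")
# The same disk needing 2 hours of format conversion first
print(f"{transfer_hours(200, 100, conversion_hours=2.0):.1f} hours")
```

Estimates like this make it clear when the lift and shift downtime is unacceptable for a workload and a replication-based approach such as ASR should be preferred.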
The process of planning to migrate a customer environment to Microsoft Azure takes into consideration many factors, including dependencies between systems to be migrated. Specific migration activities are frequently assigned to different individuals or teams for execution. To facilitate coordination of tasks across the broader migration team, a structured approach to migration and sequencing of activities is required. The following section provides a guide to support this process.
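One way to turn a dependency inventory into a migration sequence is a topological sort: a server is migrated only after everything it depends on. The sketch below uses Python's standard `graphlib` module (Python 3.9+); the server names are hypothetical placeholders in the style of the dependency inventory described above.

```python
# Deriving a migration order from a depends-on map via topological sort.
# Server names are hypothetical placeholders.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each server maps to the set of systems it depends on
dependencies = {
    "APPSERVER01": {"ADDC01", "SQLSERVER01"},
    "SQLSERVER01": {"ADDC01"},
    "RDSSERVER01": {"ADDC01", "APPSERVER01"},
    "ADDC01": set(),
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # dependencies come first, e.g. ADDC01 before SQLSERVER01
```

`TopologicalSorter` also raises a `CycleError` if the inventory contains circular dependencies, which is itself useful: such cycles must be broken (for example by migrating the servers in one batch) before a phased sequence is possible.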
Essential Customer Details
| Item | Value |
|---|---|
| CSP Customer ID | |
| Azure Subscription ID | |
| Number of users | |
| Location of user profiles | |
| Number of user profiles | |
| Idx | Resource Name | Source Location |
|---|---|---|
| Idx | Server Name | Depends on (before migration) | Depends on (after migration) |
|---|---|---|---|
| 1 | All Servers | Active Directory | Active Directory |
| 2 | ABASDSSDS01 | OLDDSDASDS01 (RDS Server) | NEWDSDASDS01 (RDS Server) |
| 3 | ABASDSSDS01 | OLDDSESDES01 (File Server) | OLDDSESDES01 (File Server) |
| Idx | Server Name Current | Server Name New |
|---|---|---|
The customer X environment is composed of a single application server and a single database server. Database services are provided by SERVERABC through the SQL Server instance SQLINSTANCEABC. The customer’s clients access the platform via a remote desktop session through the following dedicated URL abc.
The migration approach is based on building the server system in Azure and performing a backup and restore of the current databases at the time of migration. The customer environment is built within a customer-dedicated, CSP-provisioned Azure subscription, which is provisioned prior to the build activities.
During the cut-over from the current to the new environment, client access will be disabled and the data will be transferred through a backup and restore into the new environment. The DNS records will be updated to refer to the new environment. Once the migrated workload has been verified as working and the DNS changes have propagated, client access will be re-enabled.
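Checking that the DNS change has propagated before re-enabling client access can be automated with a small resolver probe. This is a minimal sketch; the hostname and IP address in the commented usage are hypothetical:

```python
import socket


def dns_points_to_new_environment(hostname: str, new_ip: str) -> bool:
    """Return True once the DNS record for `hostname` resolves to the
    new environment's IP address from this resolver's point of view."""
    try:
        return socket.gethostbyname(hostname) == new_ip
    except socket.gaierror:
        # Name does not (yet) resolve at all.
        return False


# Hypothetical usage during cut-over: poll until propagation completes,
# then re-enable client access.
# while not dns_points_to_new_environment("abc.example.com", "203.0.113.10"):
#     time.sleep(60)
```

Note that propagation is resolver-dependent: a positive result here only confirms the resolver this probe uses, so lowering the record TTL ahead of the cut-over window remains important.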
As the application server names and database server names will change, the application configuration will be updated during the build of the environment.
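Rewriting configuration for the renamed servers can be scripted from the current-name/new-name mapping table. The names and connection string below are illustrative placeholders, not the customer's actual values:

```python
# Illustrative mapping; in practice this comes from the
# "Server Name Current / Server Name New" table in the migration plan.
RENAMES = {
    "SERVERABC": "NEWSERVERABC",
}


def update_connection_string(conn: str, renames: dict) -> str:
    """Rewrite old server references in a connection string after migration."""
    for old, new in renames.items():
        conn = conn.replace(old, new)
    return conn


old = "Server=SERVERABC\\SQLINSTANCEABC;Database=CustomerX;Integrated Security=true"
print(update_connection_string(old, RENAMES))
```

A plain string replacement like this is deliberately naive; for real configuration files, a parser for the specific format (XML, JSON, INI) is safer than text substitution.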
| Milestone | Description | Duration | Start date | End date |
|---|---|---|---|---|
| M1 | Migration plan approved | | | |
| M2 | Build plan approved | | | |
| M3 | Customer provisioned within CSP | | | |
| M4 | Customer environment built within Azure | | | |
| M5 | Environment test plan completed | T – 1 week | | |
| M7 | End migration | T + 4 hours | | |
| M8 | Customer environment shut down | T + 1 week | | |
| M9 | Customer environment decommissioned | T + 2 weeks | | |
| M10 | Completion of migration plan | | | |
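The milestone timings expressed relative to the cut-over moment T can be turned into concrete dates once T is agreed. The value of T below is an illustrative example only:

```python
from datetime import datetime, timedelta

# T is the agreed cut-over moment (illustrative value).
T = datetime(2024, 6, 1, 20, 0)

# Relative timings taken from the milestone table; negative = before cut-over.
milestones = {
    "M5 Environment test plan completed": timedelta(weeks=-1),
    "M7 End migration": timedelta(hours=4),
    "M8 Customer environment shut down": timedelta(weeks=1),
    "M9 Customer environment decommissioned": timedelta(weeks=2),
}

schedule = {name: T + offset for name, offset in milestones.items()}
for name, when in sorted(schedule.items(), key=lambda kv: kv[1]):
    print(f"{when:%Y-%m-%d %H:%M}  {name}")
```

Deriving the dates programmatically keeps the plan consistent if the cut-over moment shifts: only T changes, and every dependent milestone moves with it.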
| Id | Activity | Milestone |
|---|---|---|
| 1.1 | Migration plan approved | M1 |
| 1.2 | Build plan approved | M2 |
| 2.1 | Onboard customer within CSP | |
| 3.1 | Deployment server A | |
| 3.2 | Configure endpoint B | |
| 3.3 | Deployment database instance B | |
| 4.1 | Test plan A | |
| 5.1 | Disable access to current environment | M5 |
| 7.1 | Shutdown server A | |
| 7.2 | Backup database Y | |
| 7.3 | Remove VPN connection B | |
| 1 | Communication XXX – YYY | … | … |
| Step | Action |
|---|---|
| 1 | Stop FTP access in the new environment |
| 2 | Activate FTP access in the old environment |
The success of migration projects is dependent on thorough planning to ensure the timely and effective completion of required activities in all work streams.
The project management activities and responsibilities are summarised in the following table.

| Activity | Responsibility |
|---|---|
| Issue and risk management | |
Migration projects are typically run by multiple work streams that execute concurrently to optimise timelines. The following figure provides an example of such an approach.
The schedule illustrated is typical when managed services are constructed around a shared services platform. During the initial phases, the shared services platform is either extended or moved into Microsoft Azure followed by the migration of tenant environments. The latter could possibly run in batches or collections of tenant environments depending on capacity and dependencies.
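Grouping tenant environments into migration batches, as described above, can be sketched as follows. The tenant names and the capacity value are illustrative, and a real plan must also honour inter-tenant dependencies:

```python
def plan_waves(tenants, capacity):
    """Split tenant environments into migration waves of at most
    `capacity` concurrent migrations (illustrative; real planning must
    also respect dependencies and team availability)."""
    return [tenants[i:i + capacity] for i in range(0, len(tenants), capacity)]


waves = plan_waves(["TenantA", "TenantB", "TenantC", "TenantD", "TenantE"], capacity=2)
print(waves)  # [['TenantA', 'TenantB'], ['TenantC', 'TenantD'], ['TenantE']]
```

Keeping the wave size bounded limits the blast radius of any single migration night and keeps the support workload per wave predictable.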
As illustrated, the project has been divided into different phases. The next section provides further details on activities typically executed during each phase. The assumption has been made that the platform is composed of shared services and dedicated customer environments.
During the Planning phase, the team prepares the functional specification, works through the design and prepares design documents. Following the completion of this phase, the team moves forward to begin construction of the solution in the Build phase.
The following are typical activities during this phase:
- Build initial architectural blueprint for shared services as well as dedicated customer environments within Microsoft Azure.
- Build initial migration blueprints and planning for shared services as well as dedicated customer environments for migration from the current datacenters to Microsoft Azure.
- Set up initial project planning, team structure, tracking, and risk management.
- Validate and finalise the architectural blueprint.
- Validate and finalise the migration blueprints.
- Finalise the collection of customers to be migrated in this project.
- Establish project organisation and allocate resources.
- Identify third-party dependencies and engage these third parties.
- Define migration procedure verification criteria.
- Define platform acceptance tests on core components to support coexistence.
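The platform acceptance tests defined in this phase can later be executed as a simple checklist runner. The checks below are placeholders standing in for real probes (e.g. domain controller reachability, VPN tunnel status), not actual implementations:

```python
def check_ad_reachable():
    # Placeholder: a real check would verify domain controllers respond.
    return True


def check_vpn_established():
    # Placeholder: a real check would verify site-to-site tunnel status.
    return True


def run_acceptance_tests(checks):
    """Run each named check and report pass/fail per check."""
    return {check.__name__: bool(check()) for check in checks}


results = run_acceptance_tests([check_ad_reachable, check_vpn_established])
print(results)
```

Recording a per-check result, rather than a single pass/fail verdict, makes it easier to decide whether a failed criterion blocks the migration or can be waived.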
Typically, this phase has the following output:
| Deliverable | Description |
|---|---|
| Initial Architectural Blueprint | Initial platform architecture blueprint for shared services and dedicated customer environments within Azure |
| Initial Migration Strategy | Initial migration blueprint detailing the migration strategy and the plan for the shared services and dedicated customer environments |
| Migration Readiness Definition | Definition of platform acceptance tests on core components to support coexistence |
| Initial Project Management Framework | Initial project management framework containing project planning, action log, decision log and risk register |
During the Build phase, the team builds the core infrastructure and prepares the procedures and tools required for deployment (migration). Completion of this phase marks the transition to the Deployment phase.
The following are typical activities during this phase:
- Preparation of procedures to build core platform building blocks
- Preparation of procedures for platform migration
- Execution of functional tests as defined in migration procedure verification criteria
- Execution of functional tests as determined in the definition of platform acceptance test on core components to support coexistence
- Validation and finalisation of procedures to build core platform building blocks
- Validation and finalisation of procedures for platform migration
- Build platform core components within Azure
Typically, this phase has the following output:
| Deliverable | Description |
|---|---|
| Initial Platform Build Procedures | Initial procedures for building the core platform infrastructure building blocks |
| Initial Platform Migration Procedures | Initial procedures for platform migration |
| Azure Shared Services Platform | Platform core within Azure, ready for coexistence |
| Core Platform Functional Verification Report | Report on core platform functional verification |
| Migration Functional Verification Report | Report on migration procedure verification |
During the Deploy phase the actual migration will take place, commencing with the shared services environment followed by the various dedicated customer environments.
The following typical activities could be identified:
- Migrate shared services and dedicated customer environments from the source platform into Microsoft Azure.
During the Stabilise phase, the team focuses on resolving issues and bugs.
The following typical activities could be identified:
- Stabilise the environment and identify any issues. These issues will be triaged, prioritised and resolved.
Table 2 outlines the typical roles and responsibilities of the team that will migrate the MSP environment into Microsoft Azure.
Table 2: Migration Project Team Roles and Responsibilities
| Role | Responsibilities |
|---|---|
| Program Manager | Makes key project decisions, assists in escalating unresolved issues to the Executive Steering Committee, and clears project roadblocks from a customer perspective |
| Project Manager | Primary point of contact for the team<br>Responsible for managing and coordinating the overall project<br>Responsible for resource allocation, risk management, project priorities, and communication to executive management<br>Manages day-to-day activities of the project<br>Coordinates the activities of the team to deliver according to the project schedule |
| Technical Leader | Maintains technical oversight and is responsible for long-term technical alignment<br>Advises the program manager on technical decisions<br>Advises the program manager on deliverable sign-off<br>Primary technical point of contact for the team responsible for technical architecture and code deliverables<br>Responsible for overall architecture and technical decisions as well as technical success<br>First level of quality control<br>Coordinates technical status meetings<br>Oversees all delivery streams from a technical perspective |
| Migration Lead(s) | Responsible for end-to-end planning of tenant migrations<br>Responsible for identification of dependencies<br>Performs and leads the actual migration, supported by SMEs where needed |
| Subject Matter Expert(s) | Provides component-specific technical expertise<br>Responsible for implementation of any infrastructure and application aspects required to support the transformation within the current local environment<br>Responsible for the build of target environment(s)<br>Responsible for implementation of migration scripts and procedures |