System Center Configuration Manager Infrastructure Lift and Shift Migration to Azure

The Configuration Manager hierarchy that manages all Microsoft devices (~300K) was traditionally hosted on on-premises virtual machines (VMs) and physical servers. The goal of this project was to complete a "lift-and-shift" migration of the Configuration Manager infrastructure to Microsoft Azure, improving uptime, reliability, and scalability by taking advantage of new Azure features. All Configuration Manager site roles – primary site, SQL Server, management point, and software update point – are now 100% hosted in Azure, with the exception of regional distribution points and secondary sites.

As part of this migration we significantly optimized distribution point content consumption using the Peer Cache and BranchCache features: almost 80% of content is now delivered from peers rather than from distribution points. We implemented SQL Server Always On availability groups for the Configuration Manager and WSUS databases to provide redundancy and high availability, and we leveraged automation – custom script extensions integrated with Azure ARM templates – to make migrations faster, more consistent, and less error-prone. Lastly, to migrate the central administration site (CAS), we leveraged a new Configuration Manager feature, site server high availability. This allowed us to avoid the risks associated with a CAS outage while moving the CAS from on-premises infrastructure to Azure.
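In an ARM template, a custom script extension of the kind described above is declared as a child resource of the VM. The fragment below is a minimal illustrative sketch, not the actual deployment artifact; the extension name, script URL, and script file name are assumptions:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2021-03-01",
  "name": "[concat(parameters('vmName'), '/ConfigMgrSetup')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.10",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "https://example.blob.core.windows.net/scripts/configure-site-server.ps1"
      ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure-site-server.ps1"
    }
  }
}
```

Because the script runs as part of the template deployment, every VM of a given role gets an identical post-provisioning configuration, which is what makes the migrations repeatable and less error-prone.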

Key facts of this Configuration Manager Lift and Shift Migration

  • 160 - Configuration Manager VMs hosted in Azure
  • 1200 - Compute Cores used
  • 7 - Azure regions used for infra deployment – SE Asia, East US2, West US, West US2, Central US, North Europe, West Europe
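Deploying the same ARM template across the seven regions above is the kind of step that benefits from the automation the post describes. The following is a minimal sketch, assuming a per-region parameter file is generated and then fed to the template deployment; the region short names and the `vmNamePrefix` parameter are assumptions for illustration, not the team's actual template parameters:

```python
import json

# The seven Azure regions listed above. The display-name to location-code
# mapping is an assumption; the post gives only display names.
REGIONS = {
    "SE Asia": "southeastasia",
    "East US2": "eastus2",
    "West US": "westus",
    "West US2": "westus2",
    "Central US": "centralus",
    "North Europe": "northeurope",
    "West Europe": "westeurope",
}

def parameter_file(location: str) -> str:
    """Render an ARM deployment parameters file for one region."""
    doc = {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "location": {"value": location},
            # Hypothetical parameter: prefix VM names with the region code.
            "vmNamePrefix": {"value": f"cm-{location}"},
        },
    }
    return json.dumps(doc, indent=2)

if __name__ == "__main__":
    # One parameters file per region; each would be passed to the same
    # ARM template for that region's deployment.
    for display, location in REGIONS.items():
        print(f"--- {display} ({location}) ---")
        print(parameter_file(location))
```

Generating the per-region inputs from one script, rather than hand-editing seven files, is one way to get the "consistent and error-free" property the migration relied on.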

Here is the technical case study published for this migration journey.

Comments (7)

  1. Q-Tech says:

    Hold on a sec – Microsoft Docs reports that site server high availability only works in standalone primary site scenarios, not in a hierarchy.

    Has this changed?? Am I missing something here?

    1. Hi Q-Tech. You are exactly right. HA for hierarchies is not yet publicly available, although I believe it is in tech preview builds. Being internal to Microsoft, we sometimes run features before full release to help validate the scenarios at a large and complex scale. We did find a few bugs – nothing major, but they would be annoyances to customers – so those are being fixed before we make the feature available to customers. You should see it generally available in normal builds soon.

  2. Krishsh says:

    Awesome documentation. Really helped me a lot. However, there's one thing that is not clear to me:
    "We used the same server name and drive layout in Azure as we did on-premises. The Azure VM name differed, but the NetBIOS server name remained the same." -> Can you shed some more light on this, please?

    1. We have separated our drives so that ConfigMgr is on a separate drive from the OS, the swap file is on a separate disk, and so on. We kept that same design, with the same drive letters, when building the Azure VMs, for consistency and to avoid changing TSGs and docs that reference drive letters. Also, because we moved our primary sites via a backup/restore process, we wanted the Azure VMs to end up with the same server name: after backing up the on-premises box we would power it down, rename the Azure VM to the old name, and then restore the site, which expected the old server name.

  3. 3M3M3M says:

    Thank you for writing this post. Can you shed some light on OSD in Azure environment as well please?
    Do you use SCCM OSD via Azure?

    1. Our OSD processes have not changed. Clients still talk to the MP/DP as always. We don't do bare-metal deployments through ConfigMgr here, so we didn't have to worry about WinPE and WDS services. Bare metal is handled by a different, non-ConfigMgr group.

      1. 3M3M3M says:

        Thank you. Did you use Azure File storage for hosting package sources and remote content library?
