Active Directory Considerations in Azure Virtual Machines and Virtual Networks Part 2 – Azure Virtual Machines and Virtual Networks Basics

In the first part of this series on Active Directory in Azure Virtual Machines and Virtual Networks, I went over the concept of hybrid cloud and talked about some of the advantages a hybrid cloud can provide you. If you missed that article, check out Active Directory Considerations in Azure Virtual Machines and Virtual Networks Part 1 – Hybrid IT.

I know that you’re champing at the bit to hear about the specific design considerations for putting Active Directory in a hybrid cloud, but I think it’s important that we understand some basic Azure Virtual Machines and Virtual Networks concepts and capabilities before we get there. Without this understanding, you’ll have trouble seeing how these pieces interact with Active Directory in the cloud. So bear with me – you’ll be glad you did.

Azure Virtual Machines

Azure Virtual Machines are virtual machines that you’ll spin up in the Azure Virtual Machines and Virtual Networks cloud. These virtual machines are running on a core Hyper-V infrastructure and therefore are represented as .vhd files in Azure Virtual Machines and Virtual Networks storage. You can create a new virtual machine from a gallery of choices provided by Azure Virtual Machines and Virtual Networks or you can create your own virtual machine on premises and then upload that virtual machine’s .vhd file to Azure to create a new virtual machine there. You can even take a running virtual machine that’s on premises and copy that to Azure. However, there are some caveats to consider when you port your own VMs to the cloud, and we’ll cover those later in this series.

Azure Virtual Machines and Virtual Networks allows you to choose a virtual machine “hardware” configuration from a list. We sometimes refer to these as “T-shirt sizes”, where you can choose a small, medium, large, or extra-large machine. Azure Virtual Machines and Virtual Networks is currently in customer preview, so the list of virtual machine sizes that you can choose from might change. To see a list of the sizes available, head on over to https://msdn.microsoft.com/en-us/library/windowsazure/jj156003.aspx. Remember, that’s the current list and might not represent what’s available when Azure Virtual Machines and Virtual Networks leaves customer preview.

Azure Virtual Networks

An Azure Virtual Network is a network segment where you can put all your virtual machines that you want to communicate with each other without having to loop back through the Internet. This is similar to a Hyper-V virtual switch that you connect all the virtual machines to when you’re running Hyper-V on premises. The Azure Virtual Network enables:

  • Transparent extension of your corporate network – you can think of the Azure Virtual Network as another subnet on your corporate network.
  • Cross-premises connectivity – you can connect your corporate network to the Azure Virtual Network using an IKE-based IPsec site-to-site VPN connection and manage it like you would any other site-to-site VPN connection. Note that you need to use an on-premises VPN device that is approved by Azure. You can find a current list of such devices at https://msdn.microsoft.com/en-us/library/windowsazure/jj156075.aspx. Again, this list might change by the time Azure Virtual Machines and Virtual Networks leaves customer preview status.

When you provision a new Azure Virtual Network, you begin by deciding how many IP addresses you need. This defines your CIDR subnet mask. You can subnet your address space if you like, but you only do that for IP address management reasons, because all the addresses and subnets will be able to connect to each other without a supporting routing infrastructure. That is to say, if you provisioned an Azure Virtual Network with a network range of 10.0.0.0/16 and then created various subnets of that, such as /24 subnets, you don’t need to have the connections routed between them. There is also no ability to create isolated virtual network segments like you could with an on-premises Hyper-V server, where the virtual network segments are represented by separate Hyper-V virtual switches.
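To make the CIDR math concrete, here’s a quick illustration using Python’s standard ipaddress module. The address range mirrors the 10.0.0.0/16 example above; nothing here is Azure-specific:

```python
import ipaddress

# The example virtual network range from above: 10.0.0.0/16
vnet = ipaddress.ip_network("10.0.0.0/16")
print(vnet.num_addresses)  # 65536 addresses in the range

# Carve the /16 into /24 subnets -- for IP address management only,
# since Azure routes between these subnets automatically.
subnets = list(vnet.subnets(new_prefix=24))
print(len(subnets))   # 256 subnets
print(subnets[0])     # 10.0.0.0/24
print(subnets[1])     # 10.0.1.0/24

# Every address in every subnet is still part of the same virtual network:
print(ipaddress.ip_address("10.0.37.5") in vnet)  # True
```

The point of the last line is that subnetting the range changes nothing about reachability – every address is still on the same flat virtual network.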

You can also have multiple Azure Virtual Networks and connect to all of them at the same time from one point of presence at the corporate network. For example, you might want to have five Azure Virtual Networks because you have five services that have a collection of machines that support each of the services. You need to connect all of those services to the corporate network because they all use Active Directory based authentication. You can do this. Your on premises VPN server can be connected to each of the Azure Virtual Networks.

While you can connect your on premises network to multiple Azure Virtual Networks, you can’t connect an Azure Virtual Network to multiple on premises networks. Each Azure Virtual Network can connect to a single on premises network.

Another thing to keep in mind when you have multiple Azure Virtual Networks is that if you want the Virtual Networks to communicate with each other, they will need to loop back through the on-premises VPN gateway. The reason is that at this time it is not possible to route connections between Azure Virtual Networks through the Azure network fabric. That’s not to say this might not happen someday, but for now, if you want Azure Virtual Networks to communicate with each other, the traffic is going to have to route through your VPN gateway on premises.

Virtual IP Addresses

If you have a Windows networking background, you’ve probably heard the term “virtual IP address” or VIP, as it’s part of the terminology for the Windows Server Network Load Balancing (NLB) service. Things change when we talk about VIPs in Azure Virtual Networks. That’s because a VIP for an Azure Virtual Network is just the public IP address that external hosts will use to access the virtual machines on the Azure Virtual Network. This IP address is not bound to a particular virtual machine or network interface.

When an external host needs to connect to a virtual machine on an Azure Virtual Network, it will connect to the VIP and a specific UDP or TCP port number. Azure will then perform port redirection, if required, to forward the connection to an IP address and port of a virtual machine located on your Azure Virtual Network. This port redirection is done by configuring endpoints in your Azure Virtual Machines and Virtual Networks interface.
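Conceptually, the endpoint configuration behaves like a simple forwarding table on the VIP. The sketch below is purely illustrative – the `Endpoint` class and `resolve` function are my own names for the idea, not an Azure API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    """One endpoint: a public port on the VIP mapped to a private port on a VM's DIP."""
    protocol: str      # "tcp" or "udp"
    public_port: int   # port the external host connects to on the VIP
    dip: str           # dynamic IP address of the target virtual machine
    private_port: int  # port the virtual machine actually listens on

# Hypothetical endpoints you might configure for one cloud service:
endpoints = [
    Endpoint("tcp", 80, "10.0.0.4", 80),       # web traffic, same port in and out
    Endpoint("tcp", 50001, "10.0.0.5", 3389),  # RDP redirected from a high public port
]

def resolve(protocol, public_port):
    """Return the (DIP, private port) a VIP connection is forwarded to, or None."""
    for ep in endpoints:
        if ep.protocol == protocol and ep.public_port == public_port:
            return (ep.dip, ep.private_port)
    return None  # no endpoint configured -> the connection goes nowhere

print(resolve("tcp", 50001))  # ('10.0.0.5', 3389)
```

Notice that the public and private ports don’t have to match – that’s the “redirection” part, and it’s also why several VMs behind one VIP can each expose the same service on different public ports.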

VIPs can be used to enable load balancing for virtual machines on the virtual network. For example, suppose you have two web front-end servers for your web based service. You want to load balance incoming connections to those servers. You can do that by configuring load balancing on a particular VIP and telling Azure Virtual Networks to balance the connections between the two destination web servers. This is similar to what you see when you’re using a hardware load balancer on your on premises network.

The load balancing function also supports probes. These probes are used to determine whether a machine is online and whether it needs to be removed from the list of machines that should accept connections. At this time, Azure Virtual Networks supports ICMP, HTTP, and TCP/UDP port-based probes.
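To illustrate the idea – and this is a toy model, not Azure’s actual implementation – here’s a round-robin balancer that skips any backend whose health probe fails. The probe is just a callable you supply, standing in for the ICMP/HTTP/port probes described above:

```python
import itertools

class ToyLoadBalancer:
    """Round-robin across backends, skipping any that fail their health probe."""
    def __init__(self, backends, probe):
        self.backends = backends           # list of backend addresses
        self.probe = probe                 # probe(backend) -> True if healthy
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        # Try each backend at most once per call; unhealthy ones are skipped,
        # which mirrors the probe removing a machine from the rotation.
        for _ in range(len(self.backends)):
            backend = next(self._cycle)
            if self.probe(backend):
                return backend
        raise RuntimeError("no healthy backends available")

# Two hypothetical web front ends; pretend the second one is down.
healthy = {"10.0.0.4": True, "10.0.0.5": False}
lb = ToyLoadBalancer(["10.0.0.4", "10.0.0.5"], probe=lambda b: healthy[b])

print(lb.next_backend())  # 10.0.0.4
print(lb.next_backend())  # 10.0.0.4 again -- 10.0.0.5 is skipped while unhealthy
```

When the failed machine’s probe starts succeeding again, it simply rejoins the rotation – no reconfiguration required, which is the behavior you want from the platform-level balancer.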

While you can do this kind of load balancing, what you can’t do is enable NLB on virtual machines on the Azure Virtual Network. As you’ll see later, the reason for this is that you can’t control the IP addressing on the virtual machines. You must allow Azure Virtual Networks to do that for you at this time.

Dynamic IP Address

A Dynamic IP Address or DIP is the IP address that is assigned to the virtual machine on the Azure Virtual Network. You do not directly assign these IP addresses. Instead, what you do is define a subnet range in the Azure Virtual Machines and Virtual Networks interface, and then IP addresses are dynamically assigned to virtual machines as you add them to your virtual network.

Now, don’t worry. This doesn’t mean that your server infrastructure in the cloud is always going to be changing its IP addressing scheme. Once a virtual machine is assigned an IP address by Azure Virtual Networks, it will keep that address for the lifetime of the virtual machine. You can think of this as a DHCP reservation for your virtual machine that it can keep until you delete that machine.
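The behavior resembles the little allocator sketched below. This is my analogy for the DHCP-reservation-like stickiness, not Azure code: each VM gets the next free address the first time it’s seen and keeps it until the VM is deleted:

```python
import ipaddress

class DipAllocator:
    """Toy model of dynamic IP assignment with reservation-like stickiness."""
    def __init__(self, subnet):
        self.pool = list(ipaddress.ip_network(subnet).hosts())
        self.reservations = {}  # VM name -> assigned address

    def address_for(self, vm):
        # A VM keeps its first-assigned address for its lifetime...
        if vm not in self.reservations:
            self.reservations[vm] = self.pool.pop(0)
        return self.reservations[vm]

    def delete_vm(self, vm):
        # ...and the address only returns to the pool when the VM is deleted.
        self.pool.append(self.reservations.pop(vm))

alloc = DipAllocator("10.0.0.0/29")
print(alloc.address_for("dc01"))   # 10.0.0.1
print(alloc.address_for("web01"))  # 10.0.0.2
print(alloc.address_for("dc01"))   # still 10.0.0.1 -- the "reservation" holds
```

The VM names, the subnet, and the exact starting address here are all made up for the example; the point is simply that asking again for the same VM never hands back a different address.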

It’s very important that you do not assign static IP addressing to your virtual machines because Azure Virtual Networks is not going to recognize the IP addresses that you assign to your virtual machines. It doesn’t matter if the IP addresses that you assign are on the correct network ID that you assigned your Azure Virtual Network. Any static IP addressing information will be lost. And not only will the static addressing be lost, the virtual machine that you assigned the static addressing to will be isolated and you will not be able to connect to it.

Azure does this because it doesn’t recognize the IP addressing of your virtual machine. If it detects a virtual machine that it doesn’t recognize, it’s going to interpret that as a bad thing from a security perspective and isolate that machine. That’s good from a security perspective, but not good if you didn’t know about this behavior.

IMPORTANT:
Never assign static IP addressing information to your virtual machines located on Azure Virtual Networks. Note that if you are moving a virtual machine from an on-premises deployment, you do not need to reconfigure the existing NIC for dynamic addressing first. When you recreate the virtual machine in Azure, it will be assigned a new NIC that is recognized by Azure. The existing NIC won’t be used and won’t be recognized by Azure. This makes virtual machine portability easier. Thanks to Ronald Beekelaar for this handy tip!

Another side effect of this behavior is that you can’t move virtual machines into an Azure Virtual Network if that virtual machine “lived” somewhere else prior to your wanting to place it on a virtual network. Well, let me put it another way – if the virtual machine “lived” somewhere else in Azure. That is to say, you can’t move existing virtual machines into an Azure Virtual Network; you need to create new virtual machines in the Azure Virtual Network.

To avoid confusion, I want to make it clear that you can still have an existing virtual machine somewhere on Azure and you can recreate it as a new virtual machine on an Azure Virtual Network. When you create the new virtual machine, you just take the VHD file that you used for it before, and then create a new virtual machine on your Azure Virtual Network using that VHD file. The same goes for virtual machines that you create or have already running on premises. You can use those VHD files to create new virtual machines on the Azure Virtual Network too.

There are some DNS issues that you’ll need to address, but we’ll talk about that within the context of the Active Directory design principles for Azure Virtual Machines and Virtual Networks later in this series.

Azure Service Healing

Azure Service Healing includes processes that automatically restore virtual machines to a running state if the Azure fabric controller detects that there is a problem with the virtual machine. This is a good thing as it enables increased availability and resiliency. However, when it comes to Active Directory, how does it affect Active Directory domain controllers? I mean, what do DCs think about just being shut down and then moved to another location in the datacenter?

When Service Healing springs into action, the virtual machine is actually restarted on another server, so the domain controller sees this as an unplanned reboot. When the machine reboots, the following happens:

  • The MAC address is changed – DCs don’t care about MAC addresses, so no problems there.
  • The processor and CPU ID will change – DCs don’t care about this either.
  • The IP address of the VM will not change – this is good, since IP address changes for DCs that are also DNS servers could create some disruption. This does require that the virtual machine hosting the DC is placed on an Azure Virtual Network.
  • Writes to the Active Directory DIT/logs/SYSVOL will not be lost, because the storage is persistent and unaffected by the restart if you put these files on “data disks”. We’ll talk more about data disks later in this series, and about how we use them for our domain controllers.

Bottom line is that DCs will tolerate Azure Service Healing just fine.

Summary

In this article, part 2 of our series on Active Directory in Azure Virtual Machines and Virtual Networks, we looked at key pieces of Azure Virtual Machines and Virtual Networks and defined those components and provided a basic explanation of how they work. Now that you know about the basic moving parts of Azure Virtual Machines and Virtual Networks, we’re ready for the next step – to dig into key design considerations for Active Directory deployments in a hybrid networking scenario. See you soon for part 3! Thanks! –Tom.

Tom Shinder
tomsh@microsoft.com
Principal Knowledge Engineer, SCD iX Solutions Group
Follow me on Twitter: https://twitter.com/tshinder
Facebook: https://www.facebook.com/tshinder

