This post was contributed by Stefano Gagliardi, Pedro Perez, Telma Oliveira, and Leonid Gagarin
As you know, we recently introduced the Azure Resource Manager deployment model as an enhancement of the previous Classic deployment model. For more details on the two models, read https://azure.microsoft.com/en-us/documentation/articles/resource-manager-deployment-model/
There are important differences between the two models, spanning several Azure technologies. In this article we want to clarify, in particular, what has changed when it comes to the public IP addresses that you can assign to your resources.
Azure Service Management/ Classic deployment / v1
In ASM, we have the concept of the Cloud Service.
The Cloud Service is a container of instances, either IaaS VMs or PaaS roles.
Cloud Services are bound to a public IP address called the VIP and have a name that is registered in the public DNS infrastructure with the cloudapp.net suffix.
For example, my Cloud Service is called chicago and has 184.108.40.206 as its VIP.
Note: it is possible to assign multiple VIPs to the same cloud service
It is also possible to reserve a cloud service VIP so that you don’t risk your VIP changing when VMs restart.
In the ASM model, you deploy IaaS VMs inside Cloud Services and expose them to the internet by defining endpoints.
(read here for more info about endpoints: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-set-up-endpoints/ )
Endpoints are simply a mapping between a private port on the VM’s internal dedicated IP address (DIP) and a public port opened on the cloud service public IP (VIP). Azure takes care of all the NATting; you will not need to configure anything else.
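As a sketch, assuming the classic (ASM) Azure PowerShell module and illustrative names, an endpoint mapping public port 8080 on the VIP to private port 80 on the VM could be created like this:

```powershell
# Sketch using the classic (ASM) Azure PowerShell module; service/VM/endpoint names are illustrative.
# Map public port 8080 on the cloud service VIP to private port 80 on the VM's DIP.
Get-AzureVM -ServiceName "chicago" -Name "vm1" |
    Add-AzureEndpoint -Name "web" -Protocol tcp -PublicPort 8080 -LocalPort 80 |
    Update-AzureVM
```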
Notice that in ASM you don’t necessarily need to add the VM to a Virtual Network. If you do, the VM will have a DIP in the private address range of your choice; otherwise, Azure will assign the VM a random internal IP. In some datacenters, a VM that is not in a VNet can even receive a public IP address as its DIP, but the machine won’t be reachable from the internet on that IP! It will, again, be reachable only on the endpoints of the VIP.
Security is taken care of by Azure for you as well: no connection will ever be possible from the outside on ports for which you haven’t defined an endpoint. Traffic on opened ports can instead be filtered by means of Endpoint ACLs https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-acl/ or Network Security Groups https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/
Now, you can deploy several IaaS VMs inside the same cloud service.
This has a consequence: you cannot expose services on different VMs to the public internet using the same public port. You will need to create an endpoint on each VM, each referencing a different public port.
For example, in order to connect via RDP to vm1 and vm2 in my chicago Cloud Service, I created an RDP endpoint on each VM, mapping two different public ports (for example 50001 and 50002) to the private RDP port 3389.
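A minimal sketch of that setup with the classic PowerShell module (the public port numbers are illustrative):

```powershell
# Sketch (classic ASM module); port numbers are illustrative.
# vm1: public 50001 -> private 3389; vm2: public 50002 -> private 3389
Get-AzureVM -ServiceName "chicago" -Name "vm1" |
    Add-AzureEndpoint -Name "RDP" -Protocol tcp -PublicPort 50001 -LocalPort 3389 |
    Update-AzureVM
Get-AzureVM -ServiceName "chicago" -Name "vm2" |
    Add-AzureEndpoint -Name "RDP" -Protocol tcp -PublicPort 50002 -LocalPort 3389 |
    Update-AzureVM
```

You would then point your RDP client at chicago.cloudapp.net:50001 or chicago.cloudapp.net:50002 respectively.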
It is worth noticing that the client you start RDP from does not need any knowledge of the hostname of the destination machine (vm1, vm2). Likewise, the destination machine is completely unaware of the Cloud Service, its public DNS name and its VIP: it is simply listening on its private DIP on the appropriate ports. You will not be able to see the VIP on the Azure VM’s network interface.
You can however create load-balanced endpoints to expose the same port of different VMs to the same port on the VIP (think of an array of web servers to handle http requests for the same website).
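As a sketch, a load-balanced set can be created with the classic module by giving the endpoints on each VM the same -LBSetName (names and ports are illustrative; load-balanced sets require a probe):

```powershell
# Sketch (classic ASM module): load-balance public port 80 across vm1 and vm2.
# -LBSetName groups the endpoints into one load-balanced set; the probe checks instance health.
foreach ($vmName in "vm1","vm2") {
    Get-AzureVM -ServiceName "chicago" -Name $vmName |
        Add-AzureEndpoint -Name "http" -Protocol tcp -PublicPort 80 -LocalPort 80 `
            -LBSetName "web-lb" -ProbePort 80 -ProbeProtocol tcp |
        Update-AzureVM
}
```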
There is a limit of 150 endpoints that you can open on a cloud service.
This means that you cannot open the whole range of TCP dynamic ports for a VM. If you have applications that need to be contacted on dynamic TCP ports (for example passive FTP), you may want to consider assigning your machine an Instance Level Public IP (ILPIP). ILPIPs are assigned exclusively to the VM and are not shared with other VMs in the same cloud service. Hence, the whole range of TCP/UDP ports is available, with a 1-to-1 mapping between the public ports on the ILPIP and the private ports on the VM’s DIP (remember that endpoints only NAT TCP/UDP traffic – no ICMP).
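As a sketch, an ILPIP can be requested for a classic VM with the Set-AzurePublicIP cmdlet (the IP name is illustrative):

```powershell
# Sketch (classic ASM module): request an instance-level public IP for a VM.
# The ILPIP name is illustrative; the address itself is assigned by Azure.
Get-AzureVM -ServiceName "chicago" -Name "vm1" |
    Set-AzurePublicIP -PublicIPName "vm1-ilpip" |
    Update-AzureVM
```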
The ILPIP does not replace the cloud service VIP; it is simply an additional public IP for the VM. However, the VM uses the ILPIP as its outbound IP address.
Note that you do not need to open endpoints for ILPIPs, as there is no need to NAT. All TCP/UDP ports are “open” by default, so make sure you take care of security with a proper firewall configuration in the guest VM and/or by applying Network Security Groups.
As of January 2016, you cannot reserve an ILPIP address as static for your classic VM. Check back on the official documentation for future announcements.
Azure Resource Manager / ARM / v2
In this new deployment model, we have changed how Azure works under the covers. In ARM, there is no longer a Cloud Service concept; instead, we have the Resource Group. While you can still think of the Resource Group as a container for your VMs (and other resources), it is very different from the Cloud Service.
What is interesting to notice from a networking perspective is that the Resource Group doesn’t have a VIP bound to it by default. Also, in ARM every VM must be deployed in a Virtual Network.
ARM has introduced the concept of the Public IP, an object that can be bound to VM NICs, load balancers and other PaaS instances such as VPN or Application Gateways.
As you create VMs, you will assign each one a NIC and a public IP, and the public IP will be different for every VM. Simplifying, in the ARM model each VM has its own public IP (as if it were a classic VM with an ILPIP).
Hence, you no longer need to open endpoints as you did in ASM/Classic, because all ports are potentially open and no longer NATted: there are no endpoints in ARM.
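As a sketch, this is how a public IP is created and bound to a NIC in ARM with the AzureRM module ($rgname, $location and $subnet are assumptions, i.e. values you already have in your environment; the resource names are illustrative):

```powershell
# Sketch (AzureRM module): create a dynamic public IP and bind it to a new NIC.
# $rgname, $location and $subnet are assumed to exist already.
$pip = New-AzureRmPublicIpAddress -Name "vm1-pip" -ResourceGroupName $rgname `
    -Location $location -AllocationMethod Dynamic
$nic = New-AzureRmNetworkInterface -Name "vm1-nic" -ResourceGroupName $rgname `
    -Location $location -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
```

The NIC (with its public IP) is then referenced when you create the VM.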
By default, all public IP addresses in ARM come as dynamic. If you need a static public IP, you can assign one to the VM’s NIC instead.
Note: you will need to stop/deallocate the VM to make this change effective; it doesn’t work on a running VM, so plan some downtime ahead. Then you will have to perform something like the below:
#variables – adjust to your environment
$rgname = "<the name of the Resource Group>"
$location = "<the Azure region of the VM>"
$IPname = "<the name for the new public IP>"
$NICname = "<the name of the VM's NIC>"
#create a new static public IP
$PubIP = New-AzureRmPublicIpAddress -Name $IPname -ResourceGroupName $rgname -AllocationMethod Static -Location $location
#fetch the current NIC of the VM
$NIC = Get-AzureRmNetworkInterface -Name $NICname -ResourceGroupName $rgname
#assign the new static public IP to the NIC's first IP configuration
$NIC.IpConfigurations[0].PublicIpAddress = $PubIP
#commit the change
Set-AzureRmNetworkInterface -NetworkInterface $NIC
This is a sample script: consider extensive testing before applying any kind of change in a production environment.
Now, there are circumstances in which we would still like to take advantage of port forwarding/NATting in ARM, just like endpoints did in classic. This is possible: you will have to reproduce the v1 mechanism by sending traffic through a load balancer.
However, be aware that this setup has its own requirements (for example, the VM’s NIC must be placed in the load balancer’s backend pool and associated with the NAT rule), so review the official load balancer documentation first.
Once you’re ok with those, the procedure is “simple”. For your reference, here is the sample script I have used.
$vmname="<the name of the VM>"
$rgname="<the name of the Resource Group>"
$vnetname="<the name of the Vnet where the VM is>"
$subnetname="<the name of the Subnet where the VM is>"
$location="<the Azure region of the resources>"
#This creates a new loadbalancer and creates a NAT rule from public port 50000 to private port 80
$publicIP = New-AzureRmPublicIpAddress -Name PublicIP -ResourceGroupName $rgname -Location $location -AllocationMethod Static
$externalIP = New-AzureRmLoadBalancerFrontendIpConfig -Name LBconfig -PublicIpAddress $publicIP
$internaladdresspool= New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "LB-backend"
$inboundNATRule1= New-AzureRmLoadBalancerInboundNatRuleConfig -Name "natrule" -FrontendIpConfiguration $externalIP -Protocol TCP -FrontendPort 50000 -BackendPort 80
$NRPLB = New-AzureRmLoadBalancer -ResourceGroupName $rgname -Name IrinaLB -Location $location -FrontendIpConfiguration $externalIP -InboundNatRule $inboundNATRule1 -BackendAddressPool $internaladdresspool
#These retrieve the vnet and VM settings (necessary for later steps)
$vm= Get-AzureRmVM -name $vmname -ResourceGroupName $rgname
$vnet = Get-AzureRmVirtualNetwork -Name $vnetname -ResourceGroupName $rgname
$internalSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $subnetname -VirtualNetwork $vnet
#This creates a new NIC with the LB settings
$lbNic= New-AzureRmNetworkInterface -ResourceGroupName $rgname -Name LBnic -Location $location -Subnet $internalSubnet -LoadBalancerBackendAddressPool $nrplb.BackendAddressPools -LoadBalancerInboundNatRule $nrplb.InboundNatRules
#This removes the old NIC from the VM
Remove-AzureRmVMNetworkInterface -vm $vm -NetworkInterfaceIDs $vm.NetworkInterfaceIDs
#This adds the new NIC we just created to the VM
Add-AzureRmVMNetworkInterface -vm $vm -id $lbNic.id -Primary
#This Stops the VM
Stop-AzureRmVM -Name $vmname -ResourceGroupName $rgname
#This commits changes to the Fabric
Update-AzureRmVM -vm $vm -ResourceGroupName $rgname
#This restarts the VM
Start-AzureRmVM -Name $vmname -ResourceGroupName $rgname
After this, you can access port 80 on the VM by accessing port 50000 on the load balancer’s $publicIP.
Again, this is a sample script: consider extensive testing before applying any kind of change in a production environment.