Step-by-Step: Load balancing workloads in Azure


Hello folks,

Over the last few weeks I have been meeting with customers in Toronto, Winnipeg, and Calgary, and there was one common theme.  When we discussed moving workloads to IaaS (web front ends with SQL back ends), the conversation always came back to scaling.  In PaaS, scaling websites is pretty straightforward, but what happens when you want to scale a website running in a VM in IaaS?

So today we’ll look at how to set up an IaaS solution with multiple front ends and how to configure the load balancer in front of them.  But first, if you’re interested in scalability and performance in the cloud, I recommend the Microsoft Virtual Academy modules on the topic.

Now before we start, let’s set some rules.  We’re not looking at latency here, since our users/customers are theoretically in the same geography as the datacenter we used to deploy our solution.  Today we will look at load balancing our front ends inside the cloud service using the Azure load balancer, but it’s worth understanding where Azure Traffic Manager fits in as well.  Traffic Manager allows you to control the traffic to endpoints, which can include cloud services, websites, external sites, and other Traffic Manager profiles.  It does so by applying an intelligent policy engine to Domain Name System (DNS) queries for the domain names of your service.

There are two levels of load balancing available for Azure infrastructure services:

  • DNS Level: Load balancing traffic to different cloud services located in different data centers, to different Azure websites located in different data centers, or to external endpoints. This is done with Traffic Manager and one of its load-balancing methods (Round Robin, for example).
  • Network Level: Load balancing of incoming Internet traffic to the different virtual machines of a cloud service, or load balancing of traffic between virtual machines in a cloud service or virtual network. This is done with the Azure load balancer.

Azure Traffic Manager uses three load-balancing methods to distribute traffic.  The first is Failover: all traffic goes to a primary endpoint, but backups are available in case the primary becomes unavailable.  The second is Performance: you have endpoints in different geographic locations, and your clients get connected to the “closest” endpoint.  The last is Round Robin: use this method when you want to distribute load across a set of cloud services in the same datacenter, or across cloud services or websites in different datacenters.
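We won’t be using Traffic Manager in today’s scenario, but since we just covered its methods, here is a minimal sketch of creating a Round Robin profile with the classic Azure PowerShell module (the same module used by the Add-AzureEndpoint command later in this post).  The profile name and the trafficmanager.net domain below are hypothetical, and I’m pointing it at our cloud service purely as an example.

# Create a Traffic Manager profile that uses the Round Robin load-balancing method
$tmProfile = New-AzureTrafficManagerProfile -Name "WebFarmTM" -DomainName "webfarmtm.trafficmanager.net" -LoadBalancingMethod "RoundRobin" -Ttl 30 -MonitorProtocol "Http" -MonitorPort 80 -MonitorRelativePath "/"

# Add a cloud service as an endpoint and save the updated profile
Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile -DomainName "CanITProCamp.cloudapp.net" -Type "CloudService" -Status "Enabled" | Set-AzureTrafficManagerProfile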

Our Scenario.


The front end consists of two Windows Server 2012 R2 servers with the IIS role installed.  We installed a test application that connects to a SQL back end. (We will look at scaling the back end in another post.)
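If you are building the front ends yourself, the IIS role can be added from an elevated PowerShell prompt on each server.  A minimal sketch on Windows Server 2012 R2 (the exact feature list depends on your application, so treat the second line as an assumption):

# Install the Web Server (IIS) role along with the management tools
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

# Add ASP.NET 4.5 support if the test application needs it (assumption)
Install-WindowsFeature -Name Web-Asp-Net45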

To allow the load to be balanced between both hosts in the same cloud service, we will be using the network-level type of load balancing to balance traffic between the virtual machines in the cloud service.

We started with one server, and we had already configured the endpoints for HTTP and HTTPS.
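For reference, those two standalone endpoints can also be created with the classic Azure PowerShell module.  A minimal sketch, assuming the first server is named “WEBFE01” (the post only names the second server, “WEBFE02”, so that first name is my assumption):

# Add a standalone HTTP endpoint (public port 80 mapped to local port 80) on the first front end
Get-AzureVM -ServiceName "CanITProCamp" -Name "WEBFE01" | Add-AzureEndpoint -Name "HTTP" -Protocol "TCP" -PublicPort 80 -LocalPort 80 | Update-AzureVM

# Add a standalone HTTPS endpoint (public port 443 mapped to local port 443)
Get-AzureVM -ServiceName "CanITProCamp" -Name "WEBFE01" | Add-AzureEndpoint -Name "HTTPS" -Protocol "TCP" -PublicPort 443 -LocalPort 443 | Update-AzureVM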

 


We realized we needed more instances, so we created a new server identical to the first one, and now it’s time to configure the load balancing.
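If you prefer to create that second front end from PowerShell rather than the portal, here is a minimal sketch using New-AzureQuickVM.  The admin user name, password, and instance size are placeholders, and the image lookup simply grabs the latest Windows Server 2012 R2 Datacenter image from the gallery:

# Find the latest Windows Server 2012 R2 Datacenter image in the gallery
$imageName = (Get-AzureVMImage | Where-Object { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } | Sort-Object PublishedDate -Descending | Select-Object -First 1).ImageName

# Create the second front end in the same cloud service as the first one
New-AzureQuickVM -Windows -ServiceName "CanITProCamp" -Name "WEBFE02" -ImageName $imageName -AdminUsername "webadmin" -Password "<your password>" -InstanceSize "Small"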

Create the Load-Balanced Set

1- We need to navigate to the first server we set up and modify the endpoints we already created.  Select the endpoint you want to load balance (in my case HTTP) and click Edit in the action bar at the bottom.


2- In the Edit dialog box of the pre-existing endpoint, I selected “Create a Load-Balanced Set” and clicked the Next icon.


3- On the next page, I gave a name to the load-balanced set and left the other fields as is (the “number of probes” is the number of times the service will try the port before it decides the port is non-responsive, and the “probe interval” is the number of seconds between probes; see the PowerShell sketch after step 4), then clicked the Finish icon.


4- We end up with a load-balanced set of which the first machine is already a part.
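If you would rather script this first part too, the same result can be reached by replacing the standalone HTTP endpoint with a load-balanced one.  A minimal sketch, again assuming the first server is named “WEBFE01”; the probe interval maps to -ProbeIntervalInSeconds, and I’m approximating the “number of probes” setting with a probe timeout, which is my assumption rather than an exact equivalent:

# Remove the standalone HTTP endpoint from the first front end
Get-AzureVM -ServiceName "CanITProCamp" -Name "WEBFE01" | Remove-AzureEndpoint -Name "HTTP" | Update-AzureVM

# Re-create it as part of the "Web-Farm" load-balanced set, with an explicit TCP health probe on port 80
Get-AzureVM -ServiceName "CanITProCamp" -Name "WEBFE01" | Add-AzureEndpoint -Name "HTTP" -Protocol "TCP" -PublicPort 80 -LocalPort 80 -LBSetName "Web-Farm" -ProbeProtocol "TCP" -ProbePort 80 -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 | Update-AzureVM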


Adding the Second Machine to the Load-Balanced Set

1- Now that we have two machines and one load-balanced set, we need to create an endpoint on the second machine and make it part of the load-balanced set.  Select the second virtual machine, go to the Endpoints tab, and click Add.


2- In the dialog box, select “add an endpoint to an existing load-balanced set” and ensure that the set you need is the one selected in the drop-down.


3- Give it a name for that specific server and click the Finish icon.


Adding Another Machine to the Load-Balanced Set with PowerShell

You can achieve the same results by using the following PowerShell command.

Get-AzureVM -ServiceName "CanITProCamp" -Name "WEBFE02" | Add-AzureEndpoint -Name "HTTP" -Protocol "TCP" -PublicPort 80 -LocalPort 80 -LBSetName "Web-Farm" -DefaultProbe | Update-AzureVM

Just replace “WEBFE02” with the name of the machine you’ve added to your cloud service.
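To confirm that both front ends now sit behind the same load-balanced set, you can list the endpoints on each VM and check the LBSetName value.  A quick sketch, again assuming the first server is named “WEBFE01”:

# The HTTP endpoint on each front end should report LBSetName = Web-Farm
Get-AzureVM -ServiceName "CanITProCamp" -Name "WEBFE01" | Get-AzureEndpoint
Get-AzureVM -ServiceName "CanITProCamp" -Name "WEBFE02" | Get-AzureEndpoint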

That’s it.  We now have a load-balanced set of web servers.  Next week we’ll look at load balancing the SQL back end, and how we can load balance across different datacenters around the globe.

Until then, keep learning…. 

Cheers!


Pierre Roman | Technology Evangelist
Twitter | Facebook | LinkedIn

