Rob here. Spring is in the air and we have some yard cleaning to do. But before we get to the hard stuff, let’s check out something useful Windows 2008 R2 brings us: Bridgehead Load Balancing.
For those of you who do not know, Windows 2008 R2 has brand-new logic to effectively load balance connection objects in the Hub. In larger deployments of Active Directory, this feature helps evenly distribute connections between hub and branch office DCs. From a performance perspective, this also frees up processing cycles for whatever else the domain controller may be doing. It ultimately reduces the uneven “everything goes to server X” phenomenon that could be observed prior to Windows 2008 R2.
To start, let’s review the basic question: What is a Bridgehead server and what is its role in Inter-Site replication?
A Bridgehead server is a Domain Controller designated to perform site-to-site replication. Basically, a bridgehead is the point where an Active Directory replication connection leaves or enters a site. By default, the Inter-Site Topology Generator (ISTG) automatically designates which servers act as Bridgehead servers.
Here’s a basic diagram that shows a single-domain replication topology – 1 Hub site and 5 branch sites.
Obviously it’s an extremely simple example, but it illustrates the basic role of the Bridgehead server – to perform replication between sites. With every DC performing site-to-site replication, the ISTG has an easy job. It designates that every server is a bridgehead.
Notice once again that each DC builds an inbound connection from the bridgehead server in the connected site. The Hub DC builds one inbound connection from each of the 5 branch DCs, and each branch DC builds one inbound connection from the Hub DC, for a total of 10 connection objects.
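The arithmetic above can be sketched in a few lines of Python. This is just a counting aid for the hub-and-spoke example where every DC is a bridgehead; it is not anything the KCC actually runs:

```python
def count_connections(hub_dcs: int, branch_dcs: int) -> int:
    """Count connection objects in a simple hub-and-spoke topology
    with one hub site and one DC per branch site."""
    hub_inbound = branch_dcs     # the hub pulls once from every branch DC
    branch_inbound = branch_dcs  # each branch DC pulls once from the hub
    return hub_inbound + branch_inbound

print(count_connections(hub_dcs=1, branch_dcs=5))  # 10
```

With 1 hub DC and 5 branch DCs, that gives the 10 connection objects described above.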
For some environments, perhaps yours, things get more complicated. How does the ISTG bridgehead selection work when it has multiple servers to choose from?
Prior to Windows 2008 R2, the typical results were:
• Hub site inbound connections were NOT load balanced evenly
• Branch sites inbound connections were load balanced evenly
• In large branch office scenarios, 1 Hub DC carried >50% of all inbound connections (Potential bridgehead overload scenario)
• After adding additional Hub DCs, only Branch RODCs rebalanced; Branch RWDCs ignored the new DC
• The ADLB (Active Directory Load Balancing) utility was frequently used to rebalance large site designs
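The skew described above can be illustrated with a toy model. This is deliberately simplified and is not the KCC’s real selection code; the DC names and the selection rules are made up for illustration only. The point is that writable branch DCs converged on the same hub bridgehead while RODCs spread out:

```python
from collections import Counter

def pre_r2_pick(hub_dcs, branch_dc, is_rodc):
    """Toy model of pre-R2 bridgehead selection (not the real algorithm)."""
    if is_rodc:
        # RODCs spread across the candidate hub DCs
        return hub_dcs[sum(map(ord, branch_dc)) % len(hub_dcs)]
    # RWDCs all converge on the same deterministic choice
    return hub_dcs[0]

hubs = ["HUB-DC1", "HUB-DC2"]         # hypothetical server names
branches = [f"BR-DC{i}" for i in range(10)]
load = Counter(pre_r2_pick(hubs, b, is_rodc=False) for b in branches)
print(load)  # all 10 RWDC connections land on HUB-DC1
```

In this extreme toy case every writable branch DC picks the same hub bridgehead; in practice the skew was less absolute, but one hub DC routinely carried well over half the inbound load.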
Using the Windows 2008 (or earlier) logic, the replication topology could look similar to the following:
As you can see, Branch inbound connections are evenly balanced but the Hub Inbound connections are not. If the left Hub DC fails, changes from 7 of the 10 branch offices will not replicate to the Hub site until the connections are rebuilt around the failed DC.
We can already see the cracks in the topology generation for this type of scenario. So what happens when we add an additional Hub DC to the mix?
Notice that the new Hub DC is completely ignored by the Branch RWDCs. To get the Branch DCs to recognize the new Hub DC, you have to delete all the inbound connection objects on the RWDCs and kick off the KCC (for example, with repadmin /kcc) to generate a new topology. If present, Branch RODCs would rebalance their inbound connections to use the new DC.
Fast forward to today, we find ourselves with the improved logic:
• Improved replication algorithm for branch office topologies
• All Hub and Branch site inbound connections load balance evenly (both RODCs and RWDCs)
• Adding additional DCs to the Hub site causes an automatic rebalance of connections across all Hub DCs (both RODCs and RWDCs)
• ADLB is no longer needed for large environments – the topology is automatically recalculated on DC changes (adds, deletes, moves)
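Continuing the toy model from earlier (again, illustrative only, not the real KCC code, with made-up server names), the R2 behavior amounts to spreading branch inbound connections evenly across whatever hub DCs currently exist, and re-spreading when that list changes:

```python
from collections import Counter

def r2_pick(hub_dcs, branch_index):
    """Toy model of 2008 R2 selection: round-robin across current hub DCs."""
    return hub_dcs[branch_index % len(hub_dcs)]

hubs = ["HUB-DC1", "HUB-DC2"]  # hypothetical server names
branches = list(range(10))
load = Counter(r2_pick(hubs, b) for b in branches)
print(load)  # Counter({'HUB-DC1': 5, 'HUB-DC2': 5})

# Adding a third hub DC triggers an automatic rebalance:
hubs.append("HUB-DC3")
load = Counter(r2_pick(hubs, b) for b in branches)
print(load)  # the 10 connections spread 4/3/3 across the three hub DCs
```

The even 5/5 split, and the automatic 4/3/3 re-spread after a hub DC is added, are exactly the behavior the bullets above describe.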
The new 2008 R2 algorithm could generate a topology as seen below:
As you can see, the connections are evenly distributed between the two Hub DCs. If the left Hub DC fails, only 5 branch DCs will be affected, as opposed to 7 in the pre-R2 scenario.
So with that new logic, let’s see how adding a new Hub DC works:
Upon addition of the new Hub DC, all (both Hub Inbound and Branch Inbound) connections rebalance automatically to use the new DC.
So how do you get this new functionality/replication logic?
• Begin by installing at least two 2008 R2 DCs, starting with the Hub site DCs.
• Adding more 2008 R2 DCs improves the overall load balancing, with the best results found in a pure 2008 R2 environment.
• Windows 2008 R2 Forest or Domain functional level is not required – just R2 DCs!
More details on the new Bridgehead selection process can be found here:
Thanks to Brian Mulford (Boston Premier Field Engineering) for the fancy graphics and additional commentary on my blog post.