Lite-Touch Deployments – Making them a bit more scalable

Take a look at the image below, which shows a fairly common scenario for a company; let's call them Contoso.  Contoso has multiple offices around the country that are all connected to their head office, some with high-speed network connections and others with relatively slow ones.  Contoso is planning their Windows Vista rollout, which will be driven from their central office, in this example Madrid.  Contoso wants to deploy the Windows Vista image that they created on their MDT server in Madrid, using a lite-touch installation, to all their clients nationwide.

The problem they need to solve is distributing their image file to all company workstations without saturating the network at deployment time.  Unfortunately, they do not have an SMS or System Center Configuration Manager infrastructure, which would make this problem fairly trivial, so they need to create a solution that allows them to deploy the image created in Madrid to all clients without using up all available network bandwidth.  The biggest problem they will have with a lite-touch scenario is that all clients contact the MDT server directly by hostname (as configured in the bootstrap.ini file that MDT created and placed inside the WinPE boot image) to download the corporate WIM image which, in Contoso's case, is 6 GB in size.  This will work fine in the Madrid office because the MDT server is there and there is high-speed network connectivity between the server and the workstations.  But when Contoso starts the deployment in their office in Bilbao, which only has a 256 Kbps network connection to Madrid, it will almost certainly fail due to network bandwidth/latency issues.


So, how can Contoso deploy their image to their satellite offices without bringing down the network at the same time?  (This really does sound like a possible Microsoft MCP exam question, doesn't it?)

Not too long ago, Adam Shepherd wrote a great article about making a BDD server scalable using technologies such as DFS-R and SQL replication.  It really is a good article; I am often asked by clients about this topic, and I have always used Adam's article to show how it can be done.  The problem is that some of my clients do not implement its recommendations because they see them as too complicated or expensive, or they do not like the idea of updating the Active Directory schema in order to implement DFS-R.

Well, there is another possibility that achieves a similar result and does not require any network or schema changes: DNS round robin.  Round robin requires no additional servers, no extra hardware, no extra licenses, and probably no changes to any infrastructure or configuration; all it needs to work in this scenario is the simple tick of two checkboxes (which are pre-selected in the default DNS configuration) on the DNS server.  In the properties of the DNS server, just make sure that the following two options are selected on the "Advanced" tab:

  • Enable round robin

  • Enable netmask ordering

You should also double-check that "Strict Name Checking" is disabled.  If it isn't, you might find that the remote computer refuses any connection made to it via a DNS alias rather than its real name.  The reason for the error is that an attempt was made to access \\AnAlias\AShare using the DNS A record while the actual hostname of the target was "XYZ" and strict name checking was enabled.  See here for more information.
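For reference, these settings can also be checked and set from the command line rather than the DNS console; this is a sketch assuming a Windows Server DNS server named DC01 and invented server names:

```shell
:: Check and enable round robin and netmask ordering on the DNS server
dnscmd DC01 /Info /RoundRobin
dnscmd DC01 /Config /RoundRobin 1
dnscmd DC01 /Config /LocalNetPriority 1

:: On each server hosting a replica of the share, allow connections
:: made via the shared name by disabling strict name checking
:: (restart the Server service afterwards for it to take effect)
reg add HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters ^
    /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
```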

The first step to get it all set up is to replicate the MDT file share to a server in each of the remote offices.  Contoso will need to do this manually, and a good way to do it would be with the command ROBOCOPY source destination /MIR.  A scheduled task could then be set up to replicate any subsequent changes to the share with the same command.  An alternative to copying the data over the network is to burn it to a DVD and send the DVD to the remote office, where someone can copy the data from the DVD onto the local server.  Afterwards, Contoso creates an A record for each remote server with the same hostname as the MDT server in Madrid, but with the relevant remote server's IP address.  That's it!
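As a sketch of what that might look like for the Bilbao office (all server names, share names, and IP addresses here are invented for illustration; MDT01 stands for the hostname of the Madrid MDT server):

```shell
:: Run on the Bilbao server: one-off copy from Madrid, addressed by
:: IP so the shared hostname's round-robin records are bypassed
robocopy \\10.1.0.10\Distribution$ D:\Distribution$ /MIR /R:2 /W:10

:: Nightly task to pick up any changes made on the Madrid share
schtasks /Create /TN "MDT Share Sync" /SC DAILY /ST 02:00 ^
    /TR "robocopy \\10.1.0.10\Distribution$ D:\Distribution$ /MIR"

:: On the DNS server: an A record with the *same* name as the Madrid
:: MDT server (MDT01) but pointing at the Bilbao replica (10.2.0.10)
dnscmd DC01 /RecordAdd contoso.com MDT01 A 10.2.0.10
```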

To test the round robin configuration, simply use the NSLOOKUP command from any computer in a remote office.  If set up correctly, your NSLOOKUP request should always return the IP address of the nearest server, i.e. the server hosting the replicated MDT share in the same location as the computer that made the query; repeating the NSLOOKUP command should always return the same result.  If instead you receive a different IP address for each repetition of the command, then you probably do not have the two checkboxes configured on the DNS server, as mentioned previously.  An interesting 'choice' for this setup is that you could even host the MDT share on a Windows XP workstation in a remote office if no server existed and one could not be installed; however, there are two major downsides to using a workstation OS rather than a server OS:

  1. Windows XP is limited to 10 concurrent network connections at any one time.

  2. You will certainly see lower network performance from the workstation when compared to a server.
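The round-robin test described above is just a repeated name query; assuming the shared hostname is MDT01 in the contoso.com zone (names and addresses invented for illustration), it might look like this from a Bilbao workstation:

```shell
:: Run twice from the same remote-office workstation
nslookup MDT01.contoso.com
nslookup MDT01.contoso.com

:: With netmask ordering working, both queries should return the
:: local replica's address (e.g. 10.2.0.10).  If the answers cycle
:: through the full set of A records instead, the two checkboxes
:: mentioned earlier are probably not set on the DNS server.
```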

If Contoso wishes to use PXE to load the WinPE boot image, then they will also need to replicate the WDS server to each office, which introduces other complications and might make this solution less viable.  Otherwise, Contoso simply boots each machine from the MDT-generated boot CD; upon boot, WinPE will map a drive using the MDT server's hostname which, when resolved by DNS, will actually be the server on the same subnet that holds a replica of the MDT share.  From then on, the process will continue over the local office network and should thus run significantly faster.

Finally, I would like to add two very important points that need considering.  The first is that Contoso will still need to replicate the MDT database (assuming one is used) to the remote servers, or they could leave it in the central office and accept the cost of the network traffic to the database.  The amount of data transferred from the database would be far less than the amount used to deploy the image, so this might not be too much of an issue.  The second is that DNS round robin is not fault-tolerant.  The DNS server will not be aware that any of the remote servers are offline and will continue issuing the IP address of a server regardless of its status.  This would lead to failures in the deployment process for an office if the local resource is not available when an attempt to install a computer is made.
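Because round robin will not route around a dead server, a simple pre-flight check before kicking off installs in an office can save time; a sketch using the shared hostname (MDT01 and the share name are invented for illustration):

```shell
:: Confirm the replica that DNS hands out is actually reachable
:: before starting deployments in this office
if exist \\MDT01\Distribution$\nul (
    echo Share is reachable - safe to start deployments
) else (
    echo Share unavailable - check the local replica server
)
```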

Alternatively, you could use a MEDIA deployment point and do away completely with the need to replicate the data between offices as I have just described.  This is also a great way of deploying image files created with MDT, but the disadvantage is that every time your deployment point changes you will need to recreate and redistribute all the deployment DVDs as well, which could prove costly and hard to maintain.  The method outlined here offers more flexibility and, once set up, requires very little administrative effort (you could schedule a task to replicate only the changes to the remote shares overnight, when more network bandwidth is available).

In conclusion, what has been (albeit briefly) explained in this post is a useful way of improving the chances of a successful operating system deployment in situations where you need to send large amounts of data over slow or unreliable network connections.  My recommendation will always be to use SMS or System Center Configuration Manager if faced with this scenario, as they are designed to handle these situations extremely well; the use of round robin could never replace the functionality that these technologies provide.

This post was contributed by Daniel Oxley, a consultant with Microsoft Services Spain.

Comments (22)

  1. doxley says:

    Hi Rich,

No, you don’t need to edit the bootstrap.ini for each OS, only if the name of the deployment server changes, i.e. from Site A to Site B.  That was the benefit of using the DNS alias that I detailed.  We had a crossed wire when we spoke.


  2. doxley says:


    What you are suggesting is pretty similar to what I detailed in the post.  The key difference is the following.  Doing it your way will require you to edit the bootstrap.ini file so that your clients load the image from their local server.  For each office (and local copy of Distribution$) you’ll need to make this change.  If you use DNS to spoof the host name then you can manage everything centrally and it requires less admin work.

So, every single client will use the *same* boot CD (or boot image) as every other client computer in order to boot into WinPE.  It is DNS which will then tell WinPE where to go for its Distribution$.  Therefore, you won’t need to create a different CD for each office.  Less work for you.
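To illustrate, the single bootstrap.ini baked into every boot image keeps pointing at the one shared hostname; this is a minimal sketch (the server name, share, and credentials are invented for illustration):

```ini
[Settings]
Priority=Default

[Default]
DeployRoot=\\MDT01\Distribution$
UserID=deploy
UserDomain=CONTOSO
UserPassword=SomePassword
```

DNS then resolves MDT01 to whichever local replica is nearest, so the same file works in every office.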



  3. doxley says:


    The netmask ordering feature is used to return addresses for type A DNS queries to prioritize local resources to the client.  However, there are a few issues with this, the biggest being that the DNS server will continue to issue the IP address of a server, even if it is offline.

This is where the use of SMS/Configuration Manager really makes a difference, which is why I recommend it 100% for large-scale deployments, especially across slow networks and multiple offices.  It can handle these situations extremely well.

    The post I wrote was to offer an alternative very low-cost option to think about, which of course, comes with a few considerations.



  4. Anonymous says:


    Apologies if I am posting the question in the wrong platform.

My question is related to the usage of “Locationserver.xml” with MDT. I have followed "few" of the documentation available and made it work, to an extent. Appreciate if you can give me guidance on the following (I am using the LiteTouch process):

    1. How can I automate the whole process with "Locationserver.xml"? It always prompts for user credentials after the "Site Selection Wizard" and I need to input those values. Are there any specific entries within the XML sections to specify the credentials too? Using “Bootstrap.ini”, I was able to automate the network connection process (UserID, UserPassword, etc.).

    2. How can I troubleshoot whether "defaultSites" specified in the "Querydefault" site (<QueryDefault><![CDATA[http://myserver/DefaultSite.asp]]></QueryDefault>) is returning the value correctly? Also, will it proceed automatically, if found the defaultsite, without waiting for the "NEXT" prompt?

    Thanks in advance,

    Jibu Thomas

  5. doxley says:


Check out Adam Shepherd’s article on how to use DFS and SQL Replication with BDD/MDT.  Here:



  6. doxley says:


    The documentation explicitly says: "Windows PE supports Distributed File System (DFS) name resolution only to stand-alone DFS roots."

    So, although you have managed to get it working, your configuration may not be a supported one.  Your best bet would be to discuss this with your Microsoft Technical Account Manager (TAM) as he could provide you with detailed information about the level of support you would receive.



  7. doxley says:

If you have separate build servers in each remote office, how can you easily create a single standard corporate image that is used on every computer?  The idea of having one central server and then syncing its contents to remote points ensures that you only ever have to update one server, because only one image is created.

    This way there is less admin required.

  8. doxley says:

    Whoops!  Yes, you  might need to disable it.  I’ve updated the post, thanks Rob!

  9. DeploymentGuys says:

    Hi Björn,

    WinPE only supports connecting to stand-alone DFS points, unfortunately. See

    This will restrict what you can do I am afraid.


    Richard Trusson

  10. doxley says:

    Hi Björn, glad you like the post.

    DFS-R was covered by Adam Shepherd a while back in an article he wrote for the TechNet magazine.  See it here:

    He wrote about using DFS-R to replicate the Distribution share of the server.

  11. Scoobysnax says:

Quality article!  Just the scenario that we see time and again.  A client may not have SMS for the ZTI implementation, and LTI has lacked the scalability when deploying to remote locations… I can see this coming in very useful.


  12. Björn Axéll says:


Thanks for a good post. I like the solution you wrote, but I was wondering if you have any experience of using a DFS name instead of a server name for the deployment root? I have noticed that some wizards don’t work with DFS, but you can change it to the DFS name before creating the deployment point. Any info on this would be nice.


  13. Rob says:

    For this to function correctly, wouldn’t you also need to configure "Disable strict name checking" so that the servers hosting the replica respond to the alternate name?

  14. Björn Axéll says:


    I have read the article by Adam, but that doesn’t cover using DFS as a namespace. What he talks about is using DFS-R as a replication mechanism, but I would like to use the DFS name instead of a server name for my deployment points (\\server\networkshare).

  15. Björn Axéll says:


    Do you mean that MS only supports using it against a standalone root, or that it doesn’t work? I use it today against a DFS root that is a "Domain root" and it looks like it is working, but I’m not sure if all the MDT scripts are tested for it. The only thing I have found so far is that the deployment point wizard doesn’t accept a DFS path, but this can be changed after the wizard and before clicking "Update".

    Thanks for helping out!


  16. Rich says:

    I’m confused as to why we just don’t build out separate MDT servers at our remote offices?  


  17. Rich says:

    Hi Daniel,

    I guess what I’m getting at is that the WIM file has to be copied to remote sites either way you do this correct?  Say for example, I have site A built out with MDT and we are deploying our gold image to clients.  I want to set site B up as a test bed to outline the process for setting up my remote sites with MDT.  In order to do this, why not just copy the distribution directory from Site A to Site B – as you indicate in your blog.  Then configure bootable media or WDS (PXE) so that Site B can boot to the MDT file share to deploy images?  When site A updates their image, we then copy the WIM file to Site B and redistribute it to MDT and modify the Task Sequences accordingly.

    Trying to piece this all together to have a better understanding of what we are trying to accomplish here.  Seems to me that round robin has done some things in the past that we didn’t like.  But that’s another story.



  18. Björn Axéll says:


    I haven’t played around with round robin too much. How does the client select the right server? There is a risk that the client receives a server outside their branch (AD site). Is there a way to prioritize this? Do you have any experience with this?



  19. Rich says:


    I just want to make sure I’m following you here.  So, with the way that I outlined it, the problem would be that anytime I made a change to my gold XP image, I would have to reimport it into MDT at Site A, and then modify the distribution directory at Site B with the new WIM?  But I’m not sure I’m following the statement on editing the bootstrap.ini?  I wasn’t aware that you needed to update the bootstrap.ini file every time you add an OS to MDT, if that is what you’re getting at?

    Keep in mind in the scenario I’m talking about I will only have one primary deployment point configured at Site A.  I will not have multiple deployment points configured for remote offices as I will not have them boot over the network to receive images from Site A.



  20. paul says:

    In situations without the MDT database you can easily make your deployment more scalable by using the variable %WDSSERVER% in bootstrap.ini
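For reference, %WDSSERVER% is a built-in MDT variable populated with the name of the WDS server the client PXE-booted from, so clients in each office automatically follow their local WDS server; a minimal bootstrap.ini fragment (the share name here is an assumption):

```ini
[Settings]
Priority=Default

[Default]
DeployRoot=\\%WDSServer%\Distribution$
```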



  21. Mantabloke says:

    Just a quick question.

    We are a small company who builds the PCs all in the same place but has distributed offices around the UK.  Is it possible to have the Deployment Workbench files set on our DFS root so we can access them via litetouch.vbs for application distribution only, using a custom task sequence?  Our DFS is AD-replicated.

    Is it just as simple as setting up the workbench to store its files within the DFS and then using it to replicate out?

    We are not going to be updating XP to Vista, as we replace the PCs first with a new Vista image on them, or they are delivered direct from the OEM to site (laptops).

    Sometimes it isn’t possible to install all applications, as each office has a different set of applications.
