DirectAccess is one of those technologies that can be a very powerful thing – having seamless remote access can change the way your users work remotely. However, finding where to get started can be a little tricky. In this post, I’ll walk you through the kinds of design and deployment decisions you’ll need to consider when planning your own implementation.
To keep this guide as straightforward as possible, I'll break the design decisions down category by category. As we go, I'll call out the pitfalls I see customers hit most frequently, along with general advice to ensure your deployment goes as smoothly as possible.
This guide is not a step-by-step how-to guide on setting up DA. It's your one-stop shop for understanding the kinds of architectural and design decisions you'll need to make before performing the setup.
The first thing to discuss is the general methodology around deploying DirectAccess. Unlike some other technologies (and some people’s advice for deploying DirectAccess!) I recommend the approach of setting up all the ancillary services first, before you even think about installing the Remote Access role on your first DA server.
The categories I recommend exploring and configuring before even setting foot in DirectAccess land are: Server & Network, NLS, Certificates, NRPT, Clients, AppCompat, and Supportability. These categories will drive the rest of this article and are shown as headings below. If you want to read about a specific component, just jump to that section!
Server & Network
There are a number of considerations when deploying DirectAccess, and the core of it (things you really need to know first) are the server and network setup options you have at your disposal.
One of the key decisions will be whether to enable high availability in some form. While it can be tempting to skip high availability when piloting or performing a limited deployment, consider building some kind of high availability into the plan from the outset.
There are a few good reasons to go ahead and include high availability or resiliency of some kind in the design from the get-go:
- Moving to an HA model later (although possible and supported) can be challenging, especially once you have clients using the existing service
- The overall architecture is different with HA, and if you're evaluating DA you'll want the most production-like environment possible so you can fully understand the support considerations
- As we all know, services deployed as a Proof of Concept or Pilot have a funny way of becoming Semi-Production or Production quite quickly once demand grows.
So I strongly recommend going for some kind of resiliency / high availability. In DirectAccess land, this is primarily accomplished by creating a DirectAccess cluster, load balanced with either Windows Network Load Balancing (NLB) or an External Load Balancer (ELB) device of some kind, such as a reverse proxy / web acceleration appliance that performs load balancing.
Naturally when you’re doing this there are some considerations, especially if you don’t have an existing External Load Balancing infrastructure you can leverage. Regardless of what you do, high availability using a cluster means you need to meet the following requirements:
- All nodes must be in the same subnet (yes, this includes when using an External Load Balancer (ELB))
- Client certificates must be used (no Kerberos Proxy aka. Simplified Deployment possible with HA)
The other option that you can consider is to use the multisite functionality in DirectAccess to publish at least two sites (or endpoints) that DirectAccess clients can connect to. This approach provides some rudimentary resiliency without the need for both machines needing to be in the same subnet, although client certificates are still required. The benefit of multisite is clients can choose the ‘closest’ endpoint and connect to that.
The downside of this approach is that you need to create separate endpoints (complete with DNS records, SSL certificates and a path from the Internet to your DA servers) for each DA site you configure, and that multisite is only supported on Windows 8 or later.
Finally – you can do both. That is, have multiple connection addresses published to clients, where one or two endpoints are actually cluster addresses with multiple servers, giving you the best of both worlds.
Given this, I’m going to assume for the remainder of this post that you’ll be going down one of these two paths (multisite or cluster, or both).
So we know we’re going to have one or multiple servers we’re going to want to publish on the Internet. In the early versions of DirectAccess, you needed to publish both Teredo, 6to4 and IPHTTPS so that they were available to the Internet. This meant having two consecutive public Internet addresses (a requirement of Teredo) and there was no way an edge device could proxy the traffic coming in from the Internet; meaning the DA servers had to literally be listening on the Internet.
However, this is no longer the case. One of the new deployment models made possible in Windows Server 2012 is the ‘behind an edge device’ model. With this, you can place your DirectAccess servers behind external load balancers or other reverse proxy devices. However, in this model only IPHTTPS is published and available for clients to use as a transition technology.
Q) Hang on, I thought IPHTTPS was the transition protocol of last resort in DirectAccess, only used when Teredo and 6to4 weren't possible?
A) Yes, that’s correct for previous incarnations of DirectAccess. The reason for it being used last previously was because of the need to double encrypt (and decrypt) traffic since it was being transferred over IPSec tunnels (first layer of encryption) and then over an SSL channel (second level of encryption).
However in Windows 8 and Windows Server 2012, the clients can negotiate a NULL cipher for the resulting SSL channel, negating the unnecessary extra layer of encryption.
Note that Windows 7 clients still behave the old way, so if you have a large number of these you may want to consider the performance impact, particularly server side.
If you go down the path of IPHTTPS-only, then you do get a number of benefits:
- Simplified troubleshooting as clients will only be using a single kind of transition technology
- DirectAccess servers can be placed behind edge devices
- The traffic is just SSL/443 traffic, and connections can be monitored the same way you monitor access to websites
Whichever you choose, be clear on the plan up-front, as this will determine what load balancing methods are available (for example, very few external load balancers can load balance Teredo, so publishing Teredo generally means using NLB).
The next thing to consider is whether you'll deploy with a single or dual NIC model. You can either have a single interface that both receives the incoming Internet connections and carries the outgoing corporate network access, or a dedicated NIC for each access type (Internal and External).
Other Server & Network Considerations
There are a few other things I want to cover, as I see these being common blockers to deployment of DirectAccess:
Windows Firewall must be enabled – even if you don't like it, the firewall must be turned on (that is, not turned off!) for the appropriate profiles based on your network topology: the Domain and Public profiles. The reason is that the IPSec rules DirectAccess creates are bound to specific firewall profiles, and IPSec in general requires the Windows Firewall to operate effectively. That said, you can configure the firewall to allow all connections (eg, not to block anything) if you really want to, while keeping it enabled.
Multiple subnets for cluster not supported – In general, you should run your design by our DirectAccess Unsupported Configuration page to make sure there are no glaring issues. However, one unsupported configuration that's not documented there (yet) is having cluster nodes in multiple subnets. This is obvious for NLB, since NLB doesn't work across multiple subnets, but might not be so obvious when using an External Load Balancer. Regardless of the load balancing mechanism, having DirectAccess cluster nodes in different subnets is not supported.
Clients
The other main consideration (beyond Server & Network generally) is the clients you wish to support. This decision basically comes down to whether or not you'll support Windows 7 clients. If you only need to support Windows 8.1 (and later) DA clients, you can deploy in a simplified model without client certificates. However, if you want multisite, high availability or one-time password integration, you'll need certificates.
So, since we’ve already covered that high availability is essential – certificates are essential also.
Once you know which clients you wish to support, you're good to go. If you are supporting Windows 7 clients, remember to tick the Enable Support for Windows 7 Clients checkbox during DA setup. You'll also want to deploy the DirectAccess Connectivity Assistant (DCA) for your Windows 7 clients, and remember to configure the DCA via a Group Policy Object (GPO) other than the DirectAccess Client Settings GPO that DA setup uses (eg, create a dedicated GPO for your Windows 7 machines with organisation-specific DA settings not managed by the DA wizards).
Network Location Server (NLS)
The network location server (NLS) hosts an HTTPS URL that clients check to determine whether or not they are on the corporate network. When clients join a new network (and periodically), this URL is accessed to determine the network location.
We want to make sure that NLS is only available on the corporate network and not publicly, so in most cases I see customers using an internal name for this - eg, nls.corp.contoso.com - where corp is the internal AD domain (or other DNS zone, really) that's not accessible on the Internet. That way, Internet clients can't even resolve the address, let alone connect to anything.
Q) Doesn’t this create a chicken-and-egg style problem where once DA is connected, NLS will be accessible to the client, and therefore assume it’s on the corporate network and disconnect DirectAccess?
A) No, the DA setup will add the NLS DNS name to the Name Resolution Policy Table (NRPT) to ensure that URL is never available over DirectAccess.
Hint: This is why you never want to reuse an existing resource as the NLS when setting up DA. I've heard many stories of customers using their internal OWA or SharePoint farm URLs for this, and then wondering why they can't access those resources over DirectAccess!
One more note on accessibility – you want this URL to be highly available! If this resource is down, clients on the corporate network, being unable to access it, will assume they are on the Internet and try to initiate a DirectAccess connection.
So what should this URL be? Well, it has the following requirements:
- Must be HTTPS with a valid certificate (doesn’t have to be port 443, however)
- Must return a valid HTTP OK response
- Ideally, it's just a basic page (Hello world!)
So in theory you could host this anywhere – on network appliances, other web servers – anywhere. In fact, you can even host it on existing infrastructure, as long as it's a different URL to production services. So you could create a new website on some existing web infrastructure and point your NLS address there. Just remember (I can't stress this enough): the URL you enter as the NLS URL will not be available over DirectAccess!
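As an illustrative sketch of the kind of health probe you might point at your NLS (the URL is a placeholder – in production you'd probe the real HTTPS URL from inside the corporate network, with certificate validation), a minimal Python check that the URL answers with an HTTP OK might look like this:

```python
import urllib.request

def nls_is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the NLS URL answers with an HTTP 200 OK."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, DNS failure, TLS error, etc.
        return False
```

Anything other than a 200 (timeout, name resolution failure, certificate error) reports the NLS as unhealthy – and on a real network, an unhealthy NLS means corporate clients start trying to initiate DirectAccess connections.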
Name Resolution Policy Table (NRPT)
The NRPT contains the set of rules that define which addresses will be accessed over DirectAccess and which won't. You can think of the NRPT as a traffic controller for DNS requests – it directs DNS requests either to the DirectAccess server (ultimately, for resolution by your corporate DNS servers) or to the Internet DNS servers the client already has (eg, ISP DNS servers).
By default, the DirectAccess configuration will contain two NRPT rules:
- Your Active Directory domain name, directing all requests to go over DirectAccess (eg, corp.contoso.com, fourthcoffee.local - whatever)
- The NLS address, excluded so that it resolves via the Internet instead (which is probably impossible if you've used an internal name – and that's fine)
The net effect of this configuration is that resources with a suffix of your AD domain (except your NLS name) go over DirectAccess, whereas everything else goes directly via the Internet. This approach suits most simple scenarios well – however, have a think about what other DNS zones you use for internally facing services.
Consider if any of the following apply to you:
- Resources that can only be accessed internally, but have an external name (eg, internalapp.contoso.com, rather than internalapp.corp.contoso.com)
- Environments where the external DNS zone (contoso.com) is the same as the internal AD domain (also contoso.com)
- Other internal DNS zones you manage – eg, you’ve created an internal zone for corpapps.contoso.com which wouldn’t be added to the NRPT by default.
In any of the scenarios above, you'll want to add additional rules to your NRPT, which you can do either via PowerShell or via the DirectAccess Wizard.
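To make the behaviour concrete, here's a Python sketch of NRPT-style name matching (purely illustrative – the real NRPT is Windows policy, and the rule table below uses hypothetical names and servers): the most specific matching suffix wins, and the winning rule either directs the query to DA DNS servers or exempts it.

```python
# Illustrative NRPT-style rule table (not the real Windows implementation).
# Each rule maps a DNS suffix either to DirectAccess DNS servers, or to
# None, meaning "exempt: resolve via the client's normal Internet DNS".
NRPT_RULES = {
    ".corp.contoso.com": ["10.0.0.1"],   # hypothetical corporate DNS via DA
    "nls.corp.contoso.com": None,        # the NLS exemption
}

def resolve_via(hostname: str) -> str:
    """Return 'directaccess' or 'internet' for the best-matching rule."""
    best = None
    for suffix in NRPT_RULES:
        stripped = suffix.lstrip(".")
        if hostname == stripped or hostname.endswith("." + stripped):
            # Longest (most specific) suffix wins
            if best is None or len(stripped) > len(best.lstrip(".")):
                best = suffix
    if best is None or NRPT_RULES[best] is None:
        return "internet"  # no rule matched, or an explicit exemption
    return "directaccess"
```

With this table, app.corp.contoso.com resolves over DirectAccess, while nls.corp.contoso.com and any Internet name resolve via the client's normal DNS servers – exactly the behaviour the two default rules give you.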
Application Compatibility (AppCompat)
As with many things, there are application compatibility issues to consider with DirectAccess. While this really could be its own post (shout out in the comments if you want me to go ahead and do that!), I've covered the top two considerations below.
It’s no secret that DirectAccess is an IPv6 only technology. That is; all client connections are IPv6 traffic sent to the DirectAccess Server. The server, if needed can perform NAT64 on the traffic to translate it into IPv4 for connectivity to IPv4-only network resources.
So given this, all applications running on client PCs will need to be compatible with talking on IPv6 networks. The reality is that most are, as they simply don’t care. Most applications use built-in APIs for accessing network resources that are part of the operating system and therefore are compatible with IPv6 (since Windows is).
However, applications that have their own methods, drivers and capabilities for accessing the network may have compatibility issues. For example, an application might perform a DNS request prior to connecting and be expecting an IPv4 address when it will actually get an IPv6 address when querying it over DirectAccess.
So while rare, you might find that some applications that are important to your organisation don't work over DirectAccess. If this happens, take the following steps:
- First, confirm you're actually hitting an IPv6 compatibility issue. Ask the software vendor, or perform testing on an IPv6 network that is not DirectAccess.
- Then, find out if the vendor plans to support IPv6. In my experience, it is quite common for the vendor to have a newer version of the application that is IPv6-compatible.
- If there is no IPv6-compatible version, or moving to it isn't possible, look at other methods of publishing the application. For example, if the application supports direct publishing on the Internet (eg, Citrix), you can publish the service via that mechanism and exclude the traffic from DirectAccess, so clients access the published IPv4 Internet address just as they would without DirectAccess.
- Finally, if all else fails, you can explore options for proxying the connection locally on the client before sending it over DirectAccess. This lets the application talk to a local proxy service over IPv4, while the service takes care of the transport over IPv6. There are a number of third-party products out there, as well as an offering from Microsoft Consulting Services.
Hard-coded IPv4 Addresses
Another common compatibility issue is client applications that use IPv4 addresses, rather than names, to connect to servers. The DirectAccess server receives DNS requests for IPv4-only resources and translates these using DNS64, providing an IPv6 address for clients to connect to.
If a client application has a hard-coded IPv4 address rather than a DNS name for its server, this process is bypassed and the connection will fail (unless it's a public Internet IPv4 address, I guess!).
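To illustrate why a hard-coded IPv4 address bypasses this machinery, here's a Python sketch of DNS64 address synthesis. Note that DirectAccess generates its own site-specific NAT64/DNS64 prefix; the well-known 64:ff9b::/96 prefix from RFC 6052 is used below purely for illustration.

```python
import ipaddress

def dns64_synthesize(ipv4: str, prefix: str = "64:ff9b::/96") -> str:
    """Embed an IPv4 address into the low 32 bits of a /96 NAT64 prefix,
    as a DNS64 service does when answering an AAAA query for a name
    that only has an IPv4 (A) record."""
    net = ipaddress.IPv6Network(prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(net.network_address) | int(v4)))
```

The client only ever sees the synthesized IPv6 address; an application that skips DNS entirely and dials the raw IPv4 address never gets translated, which is why the connection fails.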
Supportability
For this topic, I'll break the top support considerations down into three areas: access revocation, monitoring and reporting.
One thing that regularly gets overlooked when deploying DirectAccess is the processes required to support it effectively. The most critical of these, from a security perspective, is the process for revoking a device's access should it be lost or stolen.
While you should have some kind of full volume encryption (such as BitLocker) enabled on mobile computers, which goes a fair way towards mitigating the risks associated with lost PCs, you still want to revoke access for lost or stolen machines to ensure they can't connect to the corporate network.
The easiest way to do this is to disable the computer account of the lost machine. If you're still concerned about security, revoking the computer certificate is a good idea as well. Ensure that your service desk knows how to deal with calls from users who have lost their devices.
Monitoring DirectAccess is essential to ensure it's performing well and to catch issues before they develop over time. You'll want to make sure you're monitoring the availability of:
- Appropriate ports from the Internet
- The Network Location Server (NLS) URL
- Each DirectAccess server and DirectAccess related services
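Most monitoring systems can cover the first of these with simple TCP probes. As a minimal sketch (the host name below is a placeholder; for an IPHTTPS-only deployment, TCP 443 from the Internet is the key port to watch):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check a monitoring job might run from outside the network:
# tcp_port_open("da.contoso.com", 443)   # IPHTTPS endpoint reachable?
```

A real monitoring setup would run this from an Internet-based probe for the public endpoints, and from the corporate network for the NLS URL and per-server services.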
Reporting in DirectAccess can either use the inbox accounting solution based on the Windows Internal Database, or you can configure DirectAccess to log to a RADIUS accounting server.
There’s probably much more, but I think that’s a good start and I’ll keep this post updated over time to the best of my ability! If you have any suggestions of things I’ve missed, let me know on Twitter, or in the comments below.