tspring: The Identity and Directory Service related blog of Tim Springston, Microsoft employee, software engineer and all around outstanding person. (https://docs.microsoft.com/archive/blogs/tspring/feed.xml)

Performing Azure Key Vault Inventory
Mon, 26 Jun 2017
https://docs.microsoft.com/archive/blogs/tspring/performing-azure-key-vault-inventory

Azure Key Vault is documented on docs.microsoft.com. Creating and configuring Key Vault (KV) is like on-premises PKI solutions in that it has some complexity. KV can be created and configured using any of four methods: AzureRM PowerShell, the "legacy" Azure portal, the new (otherwise known as Ibiza) Azure portal, or REST APIs. An additional option for managing Azure Key Vaults is the GUI-based tool Azure Key Vault Explorer, which can be downloaded from GitHub.

One of the most challenging aspects of PKI, and now Key Vault, is that there is a lot of complexity. Of course, the complexity is driven by the robust set of features the service provides as well as whatever is configured for a given use case. Gaining insight into the current configuration of whatever Key Vaults are present can be incredibly useful when trying to iron out a problem with a Key Vault scenario, or when deciding what Key Vault resources are already present and ready for use.

Driving simplicity in complex scenarios is key (pardon the pun) to using services like Key Vault effectively. One area where simplicity is sorely needed is getting a quick, comprehensive list of what Key Vaults you have and what is in them. I'm reminded of one of my many jobs before college: I was a retail chain salesperson. In that job, one of the things we did twice a year was inventory all our unsold stock. This was to reconcile what we thought we had with what was actually on the shelves.
The inventory inevitably revealed missing items or, in some cases, more than we thought we had. To provide that kind of simplicity, I have written a PowerShell script which provides a straightforward method for getting an inventory of your Azure Key Vaults and what is in them. The idea came from seeing customers having difficulty understanding what had been configured for use and how to access it. The script simply enumerates all Azure subscriptions and looks for Key Vaults in them. For each Key Vault which is found, it exports the contents and all of their details to the PowerShell console and a text file. You can download the script from the link below:

Performing Azure Key Vault Inventory (GetKeyVaultInventory.ps1)
https://gallery.technet.microsoft.com/scriptcenter/Performing-Azure-Key-Vault-94c68c57

The script relies on an Azure reporting API style application, and on that application having access to the Azure Key Vault(s). If you have not already created an Azure AD reporting application, here's your chance to do so, since you can use it for both Azure Key Vault and Azure AD reporting. Start by following the steps in this tutorial on how to create the application. Be sure to save the client secret somewhere, since you only get to see it at creation.

However, an Azure AD reporting application created using the steps above does not have the Azure Key Vault permissions you need for reporting. The application must be given permission to both the subscriptions the Key Vaults are in and the Key Vaults themselves in order for you to use it for pulling inventory. To add the subscription permission, navigate to the Subscriptions blade in the Azure portal and then select the "Access Control (IAM)" blade.
Choose the application as the identity and give it Reader permissions to the subscription. Repeat this process for each subscription which has Key Vaults, or that you want the reporting application to be able to report on.

Next, add the Key Vault permissions the application needs. Do this by selecting the Key Vault blade in the Azure portal and then selecting the "Access Control (IAM)" blade. Add the application with Read permissions to the vault. Repeat this process for each Key Vault.

Here's your key learning for the day: this application can be used to inventory any Azure resources it has access to. You just need to use it with code which pulls what you need. But let's get back to the Azure Key Vault inventory steps.

Download the PowerShell script from the TechNet script gallery. You will need to edit the script and enter your tenant name, the client ID of the application, and the client secret for the application, by editing the lines below:

    $ClientID       = "insert GUID here"        # Should be a ~35 character string
    $ClientSecret   = "insert secret here"      # Should be a ~44 character string
    $loginURL       = "https://login.windows.net"
    $tenantdomain   = "insert tenant name here" # For example, contoso.onmicrosoft.com

I mentioned that the script will provide an inventory of all of the vaults which the Azure AD application has access to. What is included in the inventory? Essentially everything in a given Key Vault. This includes some information that would be privileged if accessed with a service account which has sufficient access to glean it. For that reason, it is important to only provide permissions to the application or the Key Vault to services and identities which should have access in the first place. Warning!
Since the inventory is complete, it will contain privileged info, including the secret values of many of the objects stored there. Don't give access to this application to just anyone, and don't share the results with individuals you don't trust or who don't have a need to know. Let's go over the inventory in detail. The script will provide the date and time of the run, as well as the identity which is being used for the service context. It will export the details to both a text file and the PowerShell console. Each Azure subscription which the service identity has access to is enumerated, and each Key Vault is displayed with its full path, including the association to the Azure subscription. The script will list the details of each Key Vault including:
  • Associated tenant ID (GUID)
  • Access Policies (tenantID, objectID, permissions)
  • Each Key and Key details (Key name, whether enabled, start time, created time, updated time, expiration time)
  • Each Secret and Secret details (Secret name, whether enabled, start time, created time, updated time, expiration time, Secret URI, Content Type, Secret Value)
  • Each Certificate and Certificate details (Certificate name, whether enabled, start time, created time, updated time, expiration time, Certificate URI, Certificate Policy ID, Policy Secret Properties, Policy Subject, Policy Key Usage, Policy Enhanced Key Usage, whether Policy Key Exportable, Policy Key Type, Policy Key Size, Policy Key Reuse, Policy Validity Months, Policy Basic Constraints, Policy Lifetime Actions, Policy Issuer, whether Policy Enabled, when the policy was created and when it was last updated)
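Behind the scenes, the script authenticates as the Azure AD application before reading each vault, using the OAuth2 client credentials grant against login.windows.net. Here is a minimal sketch of that step; the function name and placeholder values are mine, not part of the published script, and the actual network call is shown commented out since it requires a real app registration.

```powershell
# Sketch of the authentication step: build the OAuth2 client-credentials
# request body the inventory script sends to login.windows.net. Function
# name and placeholder values are hypothetical.
function New-TokenRequestBody {
    param(
        [Parameter(Mandatory)][string]$ClientID,
        [Parameter(Mandatory)][string]$ClientSecret,
        [string]$Resource = "https://vault.azure.net"   # Key Vault data plane
    )
    @{
        grant_type    = "client_credentials"
        client_id     = $ClientID
        client_secret = $ClientSecret
        resource      = $Resource
    }
}

$loginURL     = "https://login.windows.net"
$tenantdomain = "contoso.onmicrosoft.com"   # your tenant here
$body = New-TokenRequestBody -ClientID "insert GUID here" -ClientSecret "insert secret here"

# The actual token call (requires network access and a real app registration):
# $oauth   = Invoke-RestMethod -Method Post -Uri "$loginURL/$tenantdomain/oauth2/token" -Body $body
# $headers = @{ Authorization = "Bearer $($oauth.access_token)" }
```

The resulting bearer token is what allows the script to enumerate the vault contents listed above.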
What happens if there is an access problem (permissions or otherwise) in listing the Key Vault info? In that scenario you should see a trapped exception with the details of the error. That should appear in the console and in the text file output so you can explore why there was an issue. My hope is that this Key Vault tool will help make your Azure Key Vault deployment and use easier. Please adopt and use the service and all our other services, and let us know if you have feedback. We are committed to making our services the best there are, and your feedback is appreciated!
(Updated) Federated to Microsoft Cloud and Account Lockouts
Fri, 20 Jan 2017
https://docs.microsoft.com/archive/blogs/tspring/federated-to-microsoft-cloud-and-account-lockouts

An important requirement of federated single sign-on is the availability of the federation network endpoints over the internet. This availability is typically provided over TCP port 443 so that secure Transport Layer Security (TLS) connections can be used when clients need to obtain or pass a token for sign-on.

This requirement is something that bad actors can take advantage of for malicious purposes.

The bad actors may be attempting to flood the business with network traffic or bad password attempts which must be processed, or they may simply be trying to guess passwords.

The result is the same in most cases: an impact to the availability of services to legitimate users and possible security breaches.

Malicious entities can target highly visible users like executives or managers. This kind of malicious attempt is most effective from a denial of service perspective if your organization uses Active Directory account lockout policies to lock user accounts on-premises when a certain number of bad passwords are submitted. If you have strong passwords enforced, the issue quickly becomes less about the possibility of someone brute forcing the password and more about the account being locked out. This turns into a denial of service as accounts are locked out and users are not able to authenticate on-premises or to their cloud services.

How is this possible?

One way is that it is easy for a bad actor to use Home Realm Discovery behavior (type in username@company.com at the Microsoft portals and press enter and be redirected) to discover the federated sign in endpoints.

Another way is to take advantage of any legacy protocols a tenant may have enabled for mail or other services. More on that later in this document.

When users are federated (meaning that their user principal name (UPN) matches one which has been configured in Azure AD/Office 365 for federated single sign-on), the Microsoft Cloud forwards the authentication requests to the on-premises federation servers to verify the user's credentials. In the case of Windows ADFS, ADFS will immediately attempt a credential check against the user. If a bad password is submitted for an identity, that bad password will count against any configured Active Directory domain bad password threshold. Other federation services typically work in the same manner and increment the bad password count as well. If an Active Directory domain has a very low account lockout threshold, then the user may be locked out in short order.

Malicious bad password attempts combined with an account lockout threshold will result in account lockouts and effective (and perhaps intentional) denial of service to users where they will not be able to access on-premises resources nor Microsoft Cloud services due to their account being locked out.

How do you protect against it?

There are several things to do in different stages of the incident.

Identify the Problem

Brute force and denial of service attacks are characterized by several common symptoms.

  • The logon attempts are persistent and do not stop.
  • They are from a specific geography but could be from ever-changing source IP addresses.
  • In most cases the bad actor is taking advantage of Exchange Online basic authentication (also known as legacy authentication) so that the client IP address appears as a Microsoft one. The application side of basic authentication is for use in older mail protocols like IMAP, POP, and SMTP.
  • The attempts typically iterate sign in attempts for each user in an organization using a specific sequence and frequency.

If there is still ambiguity about whether this is truly malicious in nature, then the first thing to do is identify where the bad password attempts are coming from and which identities they are targeting. To figure that out you can turn on ADFS auditing and then review the logs.

Turn on ADFS Auditing

Server 2012 R2 and Server 2016 ADFS can be configured for security auditing and verbose service events. These events will show information about the accounts which are being targeted and the IP addresses of the malicious clients.

Note: Server 2012 ADFS (non-R2) does not support the verbosity needed for this investigation. If you are seeing suspected brute force behavior against ADFS on that version, you should upgrade or parallel install to Server 2016 ADFS.

  1. Enable ADFS event verbosity using the ADFS Event Log PowerShell module, which can be downloaded from GitHub here: https://github.com/Microsoft/adfsLogTools
  2. If the ADFS farm is Server 2016 or you are using PowerShell 5, you can simply install the module using the command "Install-Module ADFSLogTools".
  3. If the ADFS farm is Server 2012 R2 or earlier, then on each ADFS server run "Import-Module ADFSEventsModule" and then "Enable-ADFSAuditing".
  4. If the ADFS farm is Server 2016, then on one ADFS server run "Import-Module ADFSEventsModule" and then "Enable-ADFSAuditing" to enable auditing on the entire farm.

Review Auditing Data

After enabling auditing, events will start to appear on the ADFS servers and need to be reviewed. The critical items to look for are the user principal names being targeted and the IP addresses of the submitting clients.

Important: Azure AD Connect Health for ADFS is the best reporting and review option to review user and IP lists.

  • If the organization is using Azure AD Connect Health (AADCH) for ADFS and the agent is installed on all ADFS and WAP servers in the farm, then easy-to-use reporting will be available in the Connect Health dashboard for ADFS in the Azure portal.
  • ADFS Auditing must already be enabled.
  • AADCH for ADFS requires AAD Premium P1 licensing.
  • AADCH has downloadable reporting on bad password attempts.
  • AADCH for ADFS can be configured to send mail notifications to admins if bad password thresholds (admin configurable) are met.

Important: If AAD Connect Health for ADFS is not available then the ADFS event logs must be reviewed using PowerShell.

If you have Windows Server 2012 R2 ADFS or later, you can search all ADFS servers' Security event logs for Event ID 411, source "AD FS Auditing".

  1. You can download the PowerShell script to search your ADFS servers for events 411 at this link. The script will provide a CSV file which contains the UserPrincipalName, IP address of submitter, and time of all bad credential submissions to your ADFS farm.
  2. You can open the CSV in Excel and quickly filter by username, or IP or times.
  3. More information on the 411 events themselves:
    1. These events will contain the user principal name (UPN) of the targeted user.
    2. These events will also contain a message "token validation failed" and will say if it was a bad password attempt or the account is locked out.
    3. There will be one per brute force attempt, so there may be a lot.
  4. If your server has 411 events showing up but the IP address field isn't in the event make sure you have the latest ADFS hotfix on your servers.
    1. More information can be found in KB3134222.
  5. If you have Windows Server 2008 R2 or Windows Server 2012 ADFS you will not have the needed Event 411 details. Instead, download and run the PowerShell script below to correlate Security Event 4625 (bad password attempts) and Event 501 (ADFS audit details) to find the details for the affected users.
    1. You can download the PowerShell script to search your ADFS servers for events at this link. The script will provide a CSV file which contains the UserPrincipalName, IP address of the submitter, and time of all bad credential submissions to your ADFS farm.
    2. You can also use this method to discover which connections are succeeding for the users seen in the 411 events: search the ADFS 501 events for more detail.
    3. When running the PowerShell script to search your events, just pass the UPN of the user identified in the 411 event(s) or by account lockout reports to your helpdesk.
    4. The IP address of the malicious submitters will appear in one of two different fields in the 501 events.
      1. For web based and most application authentication scenarios the malicious IP will be in the x-ms-client-ip field.

Note

For non-Modern Authentication Outlook clients, the IP address of the malicious submitter will be in the x-ms-forwarded-client-ip and Microsoft Exchange Online server IPs will be in the x-ms-client-ip value.

This is a result of "legacy" or Basic authentication having the Exchange Online servers in the cloud proxying the authentication verification on behalf of the Outlook client. Mail clients which support Modern Authentication (aka ADAL) will not proxy the auth this way.
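The field-selection logic described above can be sketched as a small helper; the function name is mine, not from any Microsoft script, and the sample IPs are placeholders. It prefers x-ms-forwarded-client-ip when populated and falls back to x-ms-client-ip otherwise.

```powershell
# Pick the IP that most likely identifies the real submitting client from
# the claim fields of an ADFS 501 audit event. For EXO basic-auth flows,
# x-ms-forwarded-client-ip holds the real client while x-ms-client-ip
# holds an Exchange Online server IP. Function name is illustrative.
function Get-SubmitterIp {
    param([Parameter(Mandatory)][hashtable]$EventFields)
    $forwarded = $EventFields['x-ms-forwarded-client-ip']
    if ($forwarded) { return $forwarded }
    return $EventFields['x-ms-client-ip']
}

# Web/app auth scenario: the client IP is directly in x-ms-client-ip.
$webIp = Get-SubmitterIp -EventFields @{ 'x-ms-client-ip' = '203.0.113.10' }

# EXO basic auth scenario: the real client is behind the forwarded header.
$fields = @{
    'x-ms-client-ip'           = '40.92.18.5'      # Exchange Online server
    'x-ms-forwarded-client-ip' = '198.51.100.7'    # actual Outlook client
}
$exoIp = Get-SubmitterIp -EventFields $fields
```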

Mitigate the Immediate Problem (i.e. stop the bleeding)

When you see this issue, it is typically an "all hands on deck" situation. Many users are impacted by having their accounts locked out, and their voices are heard throughout the organization, typically starting with the help desk.

Important

If the organization is using ADFS, then the best option to mitigate the problem is to make sure you are running Server 2016 ADFS and to set the Extranet Smart Lockout (ESL) feature to enforce mode.

This may require a move to an ADFS 2016 farm. If you have a less complex ADFS configuration (for example, one relying party trust and little or no claims rule configuration), then the easiest method is a simple parallel install of an ADFS 2016 farm, then switching over to it using a DNS change and AADConnect to update the trust.

This article describes how to configure ESL on an ADFS farm:

Description of the Extranet Smart Lockout feature in Windows Server 2016

https://support.microsoft.com/en-us/help/4096478/extranet-smart-lockout-feature-in-windows-server-2016

Note that the current ESL configuration can easily be checked by running Get-AdfsProperties in PowerShell on an ADFS server.

The recommendation for using Extranet Smart Lockout is to follow this routine:

  1. Configure ESL in ADFSSmartLockoutLogOnly mode for a period. This period should be long enough for each federated user to have successfully signed in via ADFS at least once, to populate their identity's "familiar" IP addresses.
  2. During the ADFSSmartLockoutLogOnly phase the ADFS server will continue to see the brute force attempt impact just as it did with ADFS in prior Windows Server versions.
  3. Once the ADFSSmartLockoutLogOnly period is over set ADFS smart lockout to ADFSSmartLockoutEnforce. This should prevent the ADFS server from passing along bad password attempts from unknown IP addresses any longer.
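The familiar/unknown bookkeeping that enforce mode performs can be illustrated with a toy model. This is my simplified sketch of the documented behavior, not ADFS's actual implementation; it ignores the lockout observation window, and all names and IPs are hypothetical.

```powershell
# Toy model of Extranet Smart Lockout bookkeeping: bad passwords from
# unfamiliar IPs increment an "unknown" counter; once the threshold is
# reached, further unknown-IP attempts are rejected without being
# forwarded to AD. A successful sign-in clears the counters and records
# the IP as familiar. Simplified sketch only (no lockout window).
function Submit-Logon {
    param(
        [hashtable]$State,
        [string]$Ip,
        [bool]$PasswordCorrect,
        [int]$Threshold = 3
    )
    $familiar = $State.FamiliarIps -contains $Ip
    if (-not $familiar -and $State.BadPwdCountUnknown -ge $Threshold) {
        return 'ExtranetLockout'          # dropped before AD sees it
    }
    if ($PasswordCorrect) {
        $State.BadPwdCountFamiliar = 0
        $State.BadPwdCountUnknown  = 0
        if (-not $familiar) { $State.FamiliarIps += $Ip }
        return 'Success'
    }
    if ($familiar) { $State.BadPwdCountFamiliar++ } else { $State.BadPwdCountUnknown++ }
    return 'BadPassword'
}

$state = @{ FamiliarIps = @(); BadPwdCountFamiliar = 0; BadPwdCountUnknown = 0 }

# User signs in correctly once: their IP becomes familiar.
$null = Submit-Logon -State $state -Ip '167.220.148.83' -PasswordCorrect $true
# Attacker sprays three bad passwords from an unknown IP...
1..3 | ForEach-Object { $null = Submit-Logon -State $state -Ip '198.51.100.9' -PasswordCorrect $false }
# ...further unknown-IP attempts are now rejected outright,
$attacker = Submit-Logon -State $state -Ip '198.51.100.9' -PasswordCorrect $false
# ...while the user still signs in fine from the familiar IP.
$user = Submit-Logon -State $state -Ip '167.220.148.83' -PasswordCorrect $true
```

The point of the model: once the unknown-IP threshold is hit, the on-premises AD bad password count stops climbing, so the AD account never locks out from the spray.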

Tracking User Logon Activity with Extranet Smart Lockout Enforcement Enabled

  1. The event for ADFS extranet lockout is Security event 1210 source AD FS Auditing. This should appear for any extranet lockout activity.
  2. The event does not differentiate between familiar and unfamiliar bad password attempts.

    An example of this event:

    Log Name: Security

    Source: AD FS Auditing

    Date: 6/29/2018 7:34:32 PM

    Event ID: 1210

    Task Category: (3)

    Level: Information

    Keywords: Classic,Audit Failure

    User: domain\serviceaccnt

    Computer: adfsserver.domain.com

    Description:

    An extranet lockout event has occurred. See XML for failure details.

    Additional Data

    XML: <?xml version="1.0" encoding="utf-16"?>

    <AuditBase xmlns:xsd="https://www.w3.org/2001/XMLSchema" xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:type="ExtranetLockoutAudit">

    <AuditType>ExtranetLockout</AuditType>

    <AuditResult>Failure</AuditResult>

    <FailureType>ExtranetLockoutError</FailureType>

    <ErrorCode>AccountRestrictedAudit</ErrorCode>

    <ContextComponents>

    <Component xsi:type="ResourceAuditComponent">

    <RelyingParty>https://sts.tspringtoys.com/adfs/services/trust</RelyingParty>

    <ClaimsProvider>N/A</ClaimsProvider>

    <UserId>domain\user5</UserId>

    </Component>

    <Component xsi:type="RequestAuditComponent">

    <Server>N/A</Server>

    <AuthProtocol>WSFederation</AuthProtocol>

    <NetworkLocation>Extranet</NetworkLocation>

    <IpAddress>167.220.148.83</IpAddress>

    <ForwardedIpAddress>167.220.148.83</ForwardedIpAddress>

    <ProxyIpAddress>N/A</ProxyIpAddress>

    <NetworkIpAddress>N/A</NetworkIpAddress>

    <ProxyServer>Proxy1</ProxyServer>

    <UserAgentString>Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko</UserAgentString>

    <Endpoint>/adfs/ls/</Endpoint>

    </Component>

    <Component xsi:type="LockoutConfigAuditComponent">

    <CurrentBadPasswordCount>3</CurrentBadPasswordCount>

    <ConfigBadPasswordCount>3</ConfigBadPasswordCount>

    <LastBadAttempt>06/29/2018 19:34:24</LastBadAttempt>

    <LockoutWindowConfig>00:01:00</LockoutWindowConfig>

    </Component>

    </ContextComponents>

    </AuditBase>

  3. If a user calls complaining about their account being locked out, immediately run the two commands below and compare the results. These will tell you whether they are running into extranet smart lockout behavior or not.
  4. Here is an example of an extranet-locked user:

    PS C:\Users\udrt> $user = Get-ADUser -id userfive -Properties UserPrincipalName, BadPwdCount, Lockedout

    PS C:\Users\udrt> $user

    BadPwdCount : 0

    DistinguishedName : CN=User Five,CN=Users,DC=domain,DC=com

    Enabled : True

    GivenName : User

    LockedOut : False

    Name : User Five

    ObjectClass : user

    PS C:\Users\udrt> get-AdfsAccountActivity -UserPrincipalName userfive@domain.com

    Identifier : domain\userfive

    BadPwdCountFamiliar : 0

    BadPwdCountUnknown : 3

    LastFailedAuthFamiliar : 1/1/0001 12:00:00 AM

    LastFailedAuthUnknown : 6/29/2018 7:34:24 PM

    FamiliarLockout : False

    UnknownLockout : True

    FamiliarIps : {}

    The user then signed in successfully, which clears the bad activity and adds a familiar IP:

    PS C:\Users\udrt> get-AdfsAccountActivity -UserPrincipalName userfive@domain.com

    Identifier : domain\userfive

    BadPwdCountFamiliar : 0

    BadPwdCountUnknown : 0

    LastFailedAuthFamiliar : 1/1/0001 12:00:00 AM

    LastFailedAuthUnknown : 6/29/2018 7:34:24 PM

    FamiliarLockout : False

    UnknownLockout : False

    FamiliarIps : {167.220.148.83}

  5. Example PowerShell code to get a specific user's current account lockout and smart lockout status:

    $User = Get-ADUser -id <user> -Properties UserPrincipalName, BadPwdCount, Lockedout

    $User | FL

    get-AdfsAccountActivity -UserPrincipalName <userPrincipalName>
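For bulk triage of 1210 events, the interesting fields can be pulled straight out of the event's XML payload. The sketch below follows the structure of the sample event shown earlier; the function name is mine, and the condensed sample payload is trimmed from that event for illustration.

```powershell
# Extract the actionable fields (user, IPs, counts) from the XML payload
# of an ADFS 1210 extranet lockout event. The element layout follows the
# sample event shown above; the function name is illustrative.
function ConvertFrom-LockoutAudit {
    param([Parameter(Mandatory)][string]$XmlText)
    $xml = [xml]$XmlText
    $parts = $xml.AuditBase.ContextComponents.Component
    [pscustomobject]@{
        UserId             = ($parts | Where-Object { $_.type -eq 'ResourceAuditComponent' }).UserId
        IpAddress          = ($parts | Where-Object { $_.type -eq 'RequestAuditComponent' }).IpAddress
        ForwardedIpAddress = ($parts | Where-Object { $_.type -eq 'RequestAuditComponent' }).ForwardedIpAddress
        BadPasswordCount   = [int]($parts | Where-Object { $_.type -eq 'LockoutConfigAuditComponent' }).CurrentBadPasswordCount
    }
}

# Condensed version of the sample 1210 payload shown above:
$sample = @'
<AuditBase xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xsi:type="ExtranetLockoutAudit">
  <AuditType>ExtranetLockout</AuditType>
  <ContextComponents>
    <Component xsi:type="ResourceAuditComponent"><UserId>domain\user5</UserId></Component>
    <Component xsi:type="RequestAuditComponent">
      <IpAddress>167.220.148.83</IpAddress>
      <ForwardedIpAddress>167.220.148.83</ForwardedIpAddress>
    </Component>
    <Component xsi:type="LockoutConfigAuditComponent"><CurrentBadPasswordCount>3</CurrentBadPasswordCount></Component>
  </ContextComponents>
</AuditBase>
'@
$audit = ConvertFrom-LockoutAudit -XmlText $sample
```

Feeding it the XML from each 1210 event (for example, from Get-WinEvent output) gives a table you can group by user or by IP address.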

ESL Frequently Asked Questions

  • Q: Will an ADFS farm using Extranet Smart Lockout in enforce mode ever see malicious user lockouts?
    • A: If ADFS Smart Lockout is set to enforce then you will never see the accounts locked out by brute force or denial of service. The only way a malicious account lockout can prevent a user sign in is if the bad actor has the user password or can send requests from a known good (familiar) IP address for that user.
  • Q: What happens if a bad guy has a user password?
    • A: The typical goal of the brute force attack scenario is to guess a password and successfully sign in. If a user is phished or a password is guessed, then the ESL feature will not block the access, since the sign-in meets the "successful" criteria of correct password plus new IP. The bad actor's IP would then appear as a "familiar" one.
    • A: The best mitigation in this scenario is to clear the user's activity in ADFS and to require MFA for the users. More info on that later in the document.
  • Q: If my user has never signed in successfully from an IP and then tries with wrong password a few times will they be able to login once they finally type their password correctly?
    • A: When a user who has been submitting bad passwords (legitimately mistyping, or for another legitimate reason) finally gets the password correct, the user will immediately succeed in signing in. This will clear the familiar and unknown bad password counts and add that IP to the FamiliarIPs list.
  • Q: What should we expect when we set ADFS to log only mode?
    • A: If the ADFS server is set to ADFSSmartLockoutLogOnly mode then it will have the same Extranet behavior as the soft lockout feature in Server 2012 R2.
  • Q: Does ESL work on intranet too?
    • A: If the clients connect directly to the ADFS servers and not via Web Application Proxy servers then the ESL behavior will not apply.
  • Q: What if my clients all show as having the same client IP?
    • A: If all clients connect through a specific network provider or connection and use the same internet facing IP address then this will result in unexpected results from ESL. This can happen when all clients on the corporate network connect via the proxy servers or for other network reasons. Our recommendation is to only have external clients with individual IPs connect to ADFS for ESL.
  • Q: Will ESL block EXO proxied brute force attacks?
    • A: ESL will work well to prevent Exchange Online or other legacy authentication brute force attack scenarios. It does this by reviewing the header contents and requiring that all IPs be familiar ones.

More information on Modern Authentication and Basic Authentication Office 365 scenarios can be found online here.

Mitigation: Less Exciting Ways to Block the Bad Guys

If using a federation service other than ADFS, or if Server 2016 ADFS is not an option, then less optimal methods must be used to mitigate the scenario.

The best way to address this scenario is the same way you would address a denial of service to a public web site: block the IP address(es) of the submitters at the network level (firewall). This approach is basically the same one you would take if the scenario was your website being targeted for brute force or denial of service attacks.

Blocking by Policy or Block List at Exchange Online

Exchange Online has a new policy method for blocking the most common vector for password spray attacks. It is in Preview mode, however it can still be effective. More information, including how to enable this feature, can be found at this blog post.

If you cannot disable all Basic Authentication in your environment (though you should!), then a block list can be put in place at the Exchange Online (EXO) service side to block certain IPs or ranges of IP addresses. This method has the advantages of blocking a common avenue of attack and being very easy to enable. The disadvantages are that it only covers the EXO basic authentication scenario and that, like all IP address block lists, the blocks must be updated as the attacker moves to new IP addresses.

Office 365 customers can self-manage domain-wide IP Block Lists. This is done via Exchange Online PowerShell using the following formats:
  1. Standard IPv4 and IPv6 address
  2. IP range
For example, IP block ranges may be submitted using the following formats:
  1. CIDR format: 2001:0DB8::CD3/60
  2. High-low format: 192.168.0.1-192.168.0.254
  3. Subnet mask format: 192.168.8.2(255.255.255.0)
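A rough client-side sanity check of those entry formats can be sketched with regular expressions. This is an illustration only (the function name is mine, and it does not check octet ranges); Exchange Online performs its own validation when you submit the list.

```powershell
# Rough validation of the EXO block-list entry formats listed above:
# plain IPv4/IPv6, CIDR, high-low range, and subnet-mask suffix.
# Illustrative sketch only; it does not validate octet/group ranges.
function Test-BlockListEntry {
    param([Parameter(Mandatory)][string]$Entry)
    $ip = '(\d{1,3}(\.\d{1,3}){3}|[0-9A-Fa-f:]*:[0-9A-Fa-f:]+)'
    ($Entry -match "^$ip$") -or                        # single address
    ($Entry -match "^$ip/\d{1,3}$") -or                # CIDR
    ($Entry -match "^$ip-$ip$") -or                    # high-low range
    ($Entry -match "^$ip\(\d{1,3}(\.\d{1,3}){3}\)$")   # subnet mask
}

Test-BlockListEntry '2001:0DB8::CD3/60'            # True  (CIDR)
Test-BlockListEntry '192.168.0.1-192.168.0.254'    # True  (high-low)
Test-BlockListEntry '192.168.8.2(255.255.255.0)'   # True  (subnet mask)
Test-BlockListEntry 'not-an-ip'                    # False
```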
Here are the steps for self-service management of EXO IP Block list.
  1. If you haven't already done so on the PowerShell client being used, run the following once: Set-ExecutionPolicy RemoteSigned
  2. Connect to EXO using steps 1-3 in https://technet.microsoft.com/en-us/library/jj984289(v=exchg.160).aspx using a Global Admin or Exchange admin security context. If you have enabled MFA for your administrator, please follow the steps in this document to connect to EXO PowerShell: https://technet.microsoft.com/en-us/library/mt775114(v=exchg.160).aspx
  3. Enable the block list for the IP address assigned to the test device used in step 1.
     To enable the IP block on a single IP, use the syntax:
       Set-OrganizationConfig -IPListBlocked 127.0.0.2
     To enable the IP block on multiple IPs using an array of the IP formats shown above, use the syntax:
       Set-OrganizationConfig -IPListBlocked @{add="198.76.9.23", "172.16.0.0-172.31.255.255", "2001:db8:0:1234:0:567:8:1", "2001:0DB8::CD3/60", "192.168.8.2(255.255.255.0)", "2001:db8::1", "2001:0DB8:0000:CD30:0000:0000:0000:0000/60", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789"}
  4. Verify your block list submission: Get-OrganizationConfig | select -ExpandProperty IPListBlocked
    1. 2001:0DB8:0000:CD30:0000:0000:0000:0000/60
    2. 2001:db8::1
    3. 192.168.8.2(255.255.255.0)
    4. 2001:0DB8::CD3/60
    5. 2001:db8:0:1234:0:567:8:1
    6. 172.16.0.0-172.31.255.255
    7. 198.76.9.23
    8. ABCD:EF01:2345:6789:ABCD:EF01:2345:6789
    9. 127.0.0.2
  5. Wait 4 hours for the IP Block list change to fully propagate in the Exchange Online environment. You may see partial authentication requests from blocked IPs until the blocked IP configuration is fully propagated.
  6. Note: If you add and subsequently remove IP Block list entries, the change requests are processed sequentially. Please take proper precautions so that the IP(s) you are blocking are not part of your organization.
  7. To close the session, run the cmdlet below.
    Remove-PSSession $Session

For federated single sign-on, the best practice is to have the proxy servers for ADFS (or any federation service) in a DMZ. Since DMZs have network traffic rules, it makes sense to add a blocking rule at the DMZ to prevent the traffic from the suspect IPs from ever reaching the Web Application Proxy (WAP) servers. In some environments, however, this is easier to do in the load balancer, or even at the internet service provider (ISP). The net result is to prevent the brute force traffic from ever reaching the ADFS servers in the first place.

The only fly in the ointment in the traffic blocking scenario is the "legacy" Exchange Online authentication mentioned above, which proxies the authentication of the thick "Basic Authentication" Outlook clients to the on-premises ADFS servers. This sounds complicated, but it really only means that Outlook connects to the Exchange Online servers, and then the Exchange Online servers authenticate to your on-premises ADFS servers on the user's behalf. So the connection appears to come from an Exchange server's IP address as the client.

In that instance, blocking the malicious IP address at the DMZ perimeter won't work, since the connecting IP address is actually a Microsoft Exchange Online one. Instead, you would need to either examine your ADFS servers' events for the IP address in the x-ms-forwarded-client-ip value, or do SSL termination on a network device placed before the Web Application Proxy servers and review the data there.

If your environment is seeing Exchange Online Basic Authentication being used to pass along brute force attempts then you should also consider disabling POP and IMAP protocols for the targeted users.

In addition to the account lockouts caused by constant Exchange ActiveSync authentication requests, similar types of requests may also be occurring over the IMAP, SMTP, and POP protocols.

Our recommendation is to disable these protocols if they are not needed for the users. This will block the requests at the service side, preventing them from being forwarded to your ADFS servers, which reduces the available attack surface.

Unless you have an account that specifically relies on them, you can disable the IMAP and POP protocols across all your mailboxes by running the following command: Get-Mailbox | Set-CASMailbox -PopEnabled $False -ImapEnabled $False

This action should also help to mitigate issues related to these protocols, which also rely on basic authentication and may not be needed in your environment.

The GUI method in the Office 365 Portal is as follows:

Mitigation: Refine Sign In Security

Review AD Account Lockout Threshold, Implement ADFS Extranet Lockout feature, Enable MFA

Password security is important, so this should start with a review of your organization's password complexity requirements to make sure passwords are sufficiently complex and cannot be quickly guessed via brute force attempts.

Once that is done, review the current Active Directory on-premises account lockout threshold and consider whether it provides sufficient security to prevent password guessing while at the same time preventing intentional or unintentional account lockouts from bad password attempts, which result in denial of service. This topic is discussed on TechNet here.

If your organization is seeing targeted account lockouts against a specific subset of users, you can be very granular in addressing the concern by using Fine Grained Password Policy settings. This feature essentially allows a different password complexity and account lockout configuration to be assigned to a security group or to specific users.

In my experience, organizations discover that the on-premises Active Directory account lockout threshold is set too low when there is a rash of malicious or unintentional account lockouts from an application. In this scenario, I encourage reviewing the balance between preventing password guessing via brute force and avoiding denial of service for users whose accounts are locked out by too many bad password attempts.

When using ADFS, it is also important to implement the ADFS 2016 Extranet Smart Lockout feature. This feature helps prevent large numbers of attempts from locking out accounts: its threshold is lower than the one defined in AD, so ADFS stops forwarding the bad password attempts to AD before the AD threshold is reached.
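Enabling extranet lockout on an ADFS farm can be sketched with Set-AdfsProperties; the threshold and window values below are illustrative placeholders, not recommendations:

```powershell
# Run on the primary ADFS server. Set the ADFS threshold *below* the AD
# account lockout threshold so ADFS stops forwarding bad password attempts
# to AD before AD itself would lock the account.
Set-AdfsProperties -EnableExtranetLockout $true `
                   -ExtranetLockoutThreshold 10 `
                   -ExtranetObservationWindow (New-TimeSpan -Minutes 30)

# Verify the current configuration:
Get-AdfsProperties | Select-Object EnableExtranetLockout, ExtranetLockoutThreshold, ExtranetObservationWindow
```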

Additionally, it's important to require Multi Factor Authentication (MFA) sign in for users. This can't be stressed enough. The MFA requirement mitigates the risk of a password being guessed or phished, since MFA will still be required to complete the authentication and a malicious user will lack the required additional factor of a phone call, a text message or other method.

Office 365 MFA or Azure AD Multi Factor Authentication (MFA) may be used. If you have your own MFA solution then that works too. In any case, MFA can be very quickly implemented via your cloud services. For Azure AD MFA the setup would require:

  • Assigning an Azure AD Premium license to the user(s)
  • Deciding on MFA settings for complexity, duration of MFA authentication, and a review of the user's device(s) and applications to ensure their device apps support MFA.

For organizations which need to decide which MFA solution to use, information on the differences between Office 365 and Azure AD MFA can be found online at this link, and general pricing information for Azure AD MFA is documented here. Some additional MFA information can be found in Channel 9's MFA Overview, in Azure online documentation here, and finally in the MFA Deployment Guide.

If your ADFS farm is Server 2012 R2 then you do not have to take additional steps to simply use Azure AD MFA. However, ADFS allows for on premise control of MFA via claims rules if you would like to implement them. For example, if a user is a member of a security group and they are signing in from the extranet, you can require MFA from them. More information on MFA related claims rules can be found in the blog posts below:
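As an illustration of that example, an ADFS additional authentication rule that requires MFA for members of a security group signing in from the extranet might look like the following (the group SID is a placeholder; the claim types are the standard ADFS ones):

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
   Value == "S-1-5-21-XXXXXXXX-XXXXXXXX-XXXXXXXX-1234"]
&& c2:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork",
   Value == "false"]
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod",
   Value = "http://schemas.microsoft.com/claims/multipleauthn");
```

A rule like this would be applied with Set-AdfsAdditionalAuthenticationRule (globally) or per relying party trust.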

Review, Report, Monitor

It's important to review your online reports for Office 365 and Azure AD to see what impact (if any) the incident has had on online services and resources. This entails a review of Azure AD Security Reports (for alerts on activity deemed to be malicious), Azure AD Audit Reports (for details on changes to anything in Azure AD), and Sign In Activity (which will show who attempted to sign in, to what, and the end result). A review of these reports for activity for and by the affected VIP users, to gauge the impact of the event, is vital to addressing the concern.

More information on Azure AD reports is online here: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-view-access-usage-reports

Outside of the immediate review it's also important to implement a consistent routine of automatic download of Azure AD Security Reports, Audit Reports, and Sign In Activity reports.  This can be done by setting a scheduled task on a Windows computer to run PowerShell scripts to pull the reports down periodically for reference.
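One way to set up such a recurring pull (the script path, task name and run time below are placeholders for your own) is with the built-in ScheduledTasks module:

```powershell
# Register a daily task that runs a report-download script at 02:00.
# C:\Scripts\Get-AzureADReports.ps1 is a placeholder for your own script
# that pulls the Security, Audit and Sign In Activity reports.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Get-AzureADReports.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2:00AM
Register-ScheduledTask -TaskName "Download Azure AD Reports" -Action $action -Trigger $trigger
```

Running the task under a dedicated account (rather than an interactive user) keeps the download working whether or not anyone is logged on.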

Just as important is to configure Azure AD to notify global administrators if anomalous sign ins are seen in the future.  This is a switch setting in Azure AD (screenshot below) in the Azure AD web portal:

Long Term Recommendation

Consider moving to a Windows Server 2016 ADFS farm in order to mitigate bad password submission attempts by using the Multi Factor Authentication feature in 2016. This will allow Azure MFA as the primary authentication method.  This prevents the scenario of a low account lockout threshold and malicious bad password attempts via brute force from being a concern.  This is discussed in more detail on TechNet here.

Note that a Server 2016 ADFS farm could be put in place as a parallel switch-over upgrade or as an in place farm upgrade.

To sum things up, scenarios where malicious entities can submit bad password attempts can be a challenge at first. However, like any challenge you may face in life, you can build strength and capability in how to address the problem and overcome it. Once you've overcome the challenge, the next one won't be nearly as impactful or difficult.

]]>
Easy Parsing of ADFS Security Audit Events https://docs.microsoft.com/archive/blogs/tspring/easy-parsing-of-adfs-security-audit-events Wed, 17 Feb 2016 05:35:00 GMT https://blogs.technet.microsoft.com/tspring/2016/02/17/easy-parsing-of-adfs-security-audit-events/

I recently saw an internet meme going around that showed a sticker shaped like a cloud, and in the cloud were the words “The Cloud is just someone else’s computer.” This is accurate on so many levels. Any company’s cloud solution is simply a series of data centers, geographically distributed on the internet so that you reach the one closest to you network-wise.

The idea that the cloud is someone else’s computer is exactly why many companies use federation sign on services. A federated sign on configuration in your cloud service simply redirects the user from the cloud sign in page back to the on premise federated sign in servers. 

The configuration, setup and maintenance of federated single sign on to the cloud requires a significant amount of work.

Why would you want to do that?

The answer is simple: control. Requiring that the cloud redirect the user to your premises for sign in to cloud services gives a company a great deal of control over who gets access to the cloud services, where they are allowed to gain that access from, what additional authentication may be required, and many more aspects. This measure of control is quite a discussion point when you are using “someone else’s computer”.

It also allows a great deal of on-premise auditing if you are using Active Directory Federation Services (AD FS). ADFS can be configured to audit the user logon at a level of detail any national intelligence service would envy. You will learn the type of device a user is connecting from, the application they are using, the IP address of the originating client, the user name and more.

The “how to” for ADFS auditing can be found in the TechNet article Configuring ADFS Servers for Troubleshooting, under the “Configuring ADFS Servers to Record Auditing of ADFS Events to the Security Log” heading.

These audit data points are most commonly used in setting ADFS Client Access Policies. In fact, in the past few years a nice collection of blog and other content has sprung up around configuring those client access policies and claims processing rules. Client access policies can take the same values which appear in the Security event log as auditing details and make decisions about whether to issue a token to a client, to transform a value received from the client into another value issued in the token, or to ask the user to “proof up”, essentially a challenge for the user to provide multi factor authentication.

Greater information on scenarios where ADFS client access policies could be used and the values which can be seen in them can be found on TechNet at this link: https://technet.microsoft.com/en-us/library/dn592182.aspx .

The only catch is that the auditing provides approximately 78 events per user logon to ADFS in Server 2012 R2 and earlier versions. Word on the street is that Server 2016’s ADFS verbose auditing will be less, er, verbose and give fewer events but in the meantime 78 is what we get.

How do you make sense of so many events so you can tune your client access policies or tie together your security forensics?

I’ve written a PowerShell script to make it easier to parse through an ADFS server’s Security event log for these events.

The script was written for and tested on Server 2012 R2 ADFS. It is likely to work on prior versions of ADFS, but since it hasn’t been tested on them that’s not certain.

You can download the PowerShell script here:

ADFS Security Audit Events Parser (ADFSSecAuditParse.ps1)

https://gallery.technet.microsoft.com/scriptcenter/ADFS-Security-Audit-Events-81c207cf/

More details about the script:

  • The script is intended to run against a “live” Security event log on the ADFS server. It is not written to run against saved logs; though that is possible, it is very slow and resource intensive.
  • If you have an ADFS farm the script needs to be run against each server in the farm (not the WAPs) in order to reliably collect all of the data you need, unless you can trick your load balancer with a HOST entry for testing.
  • The script has three switches: SearchCriteria, PastDays and PastHours.
  • SearchCriteria should be a string of what you want to search the Security event log for. For example, if I want to see whether a user with the user principal name (UPN) of joebob@contoso.com has logged on, and all of the details regarding that logon, I would pass in “joebob@contoso.com” or that same string in a variable.
  • Only one SearchCriteria string can be specified at a time.
  • PastDays specifies how many days in the past from current time to search the log. The default is 1.
  • PastHours specifies how many hours back from the current time to search the log. Less than an hour can be specified if you’d like; just use a decimal. For example, a half hour would be .5.
  • The script searches for instanceIDs which match the SearchCriteria and then searches for all of the matches to that instanceID.
  • The script will find all instanceIDs (token requests) which take place during the specified time and get the event details.
  • The script will find any Security event which contains the instanceID in the event details. For ADFS token requests this is typically events 500, 501 and 299.
  • Each result set based on instanceID is displayed in the PowerShell console and also piped out to a text file.
  • Each output text file is named %SearchCriteria%-ADFSSecAudit_%Counter%.txt. The counter value is in lieu of the instanceID since instanceIDs are too large for practical file names.
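Putting those switches together, a run against the live log might look like this (the UPN is the same illustrative one used above):

```powershell
# Search the ADFS server's live Security log for all events tied to this
# user's token requests over the past half hour.
.\ADFSSecAuditParse.ps1 -SearchCriteria "joebob@contoso.com" -PastHours .5

# Or search back over the past two days:
.\ADFSSecAuditParse.ps1 -SearchCriteria "joebob@contoso.com" -PastDays 2
```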

Example text output:

Security Audit Events which match joeuser@contoso.com and instance 3a152fbb-6cda-495a-ac9d-98ce2b98631c in Security event log.

Event ID : 501

Provider : AD FS Auditing

Machine Name : adfsserver1.contoso.com

User ID : S-1-5-21-<snip>

Time Created : 2/4/2016 8:38:18 PM

Value

-----

19d6868b-c074-4afd-990e-d237b4aabb1b

http://schemas.microsoft.com/2012/01/requestcontext/claims/client-request-id

00000000-0000-0000-6300-0080000000d3

http://schemas.microsoft.com/2012/01/requestcontext/claims/relyingpartytrustid

https://login.microsoftonline.com/login.srf

http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip

192.168.1.23

http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip

192.168.1.23

http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-ip

10.0.0.9

Event ID : 501

Provider : AD FS Auditing

Machine Name : adfsserver1.contoso.com

User ID : S-1-5-21-<snip>

Time Created : 2/4/2016 8:38:18 PM

Value

-----

19d6868b-c074-4afd-990e-d237b4aabb1b

http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid

S-1-1-0

http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid

S-1-5-32-545

http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid

S-1-5-2

http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid

S-1-5-11

http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid

S-1-5-15

http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid

S-1-18-2

http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-user-agent

Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko

http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-endpoint-absolute-path

/adfs/ls/

http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork

false

http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-proxy

contosowap1

Event ID : 501

Provider : AD FS Auditing

Machine Name : adfsserver1.contoso.com

User ID : S-1-5-21-<snip>

Time Created : 2/4/2016 8:38:18 PM

Value

-----

19d6868b-c074-4afd-990e-d237b4aabb1b

http://schemas.xmlsoap.org/ws/2005/05/identity/claims/implicitupn

joeuser@tspring.com

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant

2016-02-04T20:38:18.701Z

http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn

joeuser@contoso.com

http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid

S-1-5-21-<snip>-513

http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid

S-1-5-21-<snip>-1107

http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name

CONTOSO\joeuser

http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname

CONTOSO\joeuser

http://schemas.microsoft.com/claims/authnmethodsreferences

urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport

http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid

S-1-5-21-<snip>-513

I sincerely hope this PowerShell script helps you in your day to day activities, in setting up a new client access policy, or in your forensic peek under the covers.

]]>
Checking for SHA1 Signatures using PowerShell https://docs.microsoft.com/archive/blogs/tspring/checking-for-sha1-signatures-using-powershell Mon, 07 Dec 2015 05:01:00 GMT https://blogs.technet.microsoft.com/tspring/2015/12/07/checking-for-sha1-signatures-using-powershell/  

Certificates are complex. They can be tough to view and difficult to understand. This is even more true when the topic is determining certificate signature details. In the past there was really no reason to look so closely at an issued certificate’s signature details. That has all changed now, in conjunction with an ever changing security landscape.

Microsoft has plans to deprecate the use of some certificates which have SHA1 signatures. The plans are nuanced by date and the deprecation strategy does not apply to just any certificate. The current plans apply to server authentication certificates (certificates which are used to secure network communication using TLS) as well as to the certificates used to sign executables in Windows (code signing and time stamping certificates).

Of course, the authoritative place to find out about the Microsoft strategy for SHA1 deprecation is https://aka.ms/sha1 .

We’ve done a pretty good job of preparing people for the scenario of getting a Microsoft certificate authority (Windows Server with the Active Directory Certificate Services role) updated to not issue SHA1 signatures.

Earlier this year my colleague Rob Greene published a blog post about migrating certificate authorities from SHA1 to SHA2 hashing algorithms for future issued certificates. That blog post can be found at the AskDS TechNet blog here.

The migration can be complex so my colleague Jim Tierney wrote a detailed blog on migrating a two tier CA hierarchy from SHA1 to SHA2 on the AskDS blog here.

We are left with the question of “does the certificate I’m using have a SHA1 signature?”. That’s a tough question to answer.  Due to the complexity of certificates and PKI in general it can be confusing and difficult to determine whether the SHA1 deprecation applies to a particular certificate or code signed application.

To help make that easier I wrote a PowerShell script. The script is available in the TechNet script gallery at this link.

There are two scenarios this script checks. One is a server auth (TLS) certificate which is signed with SHA1, and the other is an application (EXE) which is signed with SHA1.

More details of the script:

  • The script can check either an exported certificate (filename.cer) or a signed executable but not both at once.
  • The script checks only one certificate or executable at a time.
  • The certificate to check in the Server Auth scenario must be exported to a file and the file path must be specified.
  • The script reviews the specified certificate to make sure it is a Server Auth one.
  • The script checks the signature to make sure that the signature algorithm matches one of the OIDs below:
  • 1.3.14.3.2.29, 1.2.840.10040.4.3, 1.2.840.10045.4.1, 1.2.840.113549.1.1.5, 1.3.14.3.2.13, 1.3.14.3.2.27
  • The script has two switches and a Path parameter
  • File: This is the default. It tells the script you will be checking an exported certificate file.
  • EXE: This specifies that you will be checking an executable’s code signing and time stamping certificate(s).
  • Path: The path to the exported certificate or executable to check.
  • The script checks for both code signing and time stamping certificates and will display either or both if found.
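Based on the switches above, usage might look like the following; the script file name and paths are placeholders (check the gallery download for the actual script name):

```powershell
# Check an exported certificate file (File is the default switch):
.\CheckSHA1.ps1 -File -Path C:\Temp\webserver.cer

# Check the code signing and time stamping certificates on an executable:
.\CheckSHA1.ps1 -EXE -Path C:\Temp\setup.exe
```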

Here’s an example of using the script when SHA1 was not found:

[screenshot: example script output, SHA1 not found]

And an example if using the script where SHA1 was found:

[screenshot: example script output, SHA1 found]

I hope this script helps you gain some clarity on whether your environment or services are at risk from SHA1 so you can prepare for the SHA1 deprecation as easily as possible.

 

]]>
Authorization and Getting User Group Memberships https://docs.microsoft.com/archive/blogs/tspring/authorization-and-getting-user-group-memberships Mon, 11 May 2015 09:14:50 GMT https://blogs.technet.microsoft.com/tspring/2015/05/11/authorization-and-getting-user-group-memberships/ It’s a pretty common occurrence where I have to help determine why a particular user or users are getting an access denied to a resource I know little about. One side of that equation is seeing the object permissions.

The other side of the equation for determining access  for a user to a resource is determining what the user would have in their access token. If you are logged on as the user this is easily done by using “Whoami.exe”. If you are not logged on as that user or if the user is a service account this can be difficult and require changing user rights or overcoming other hurdles.

Viewing the authorization blocker is something made much easier with the advent of the Windows Server 2012 and later feature called Access Denied Assistance (ADA). ADA makes the failure easy to discover for people who don’t necessarily know the ins and outs of how permissions are assessed and compared to allow or disallow access, and it’s a graphical user interface plugged into the object properties (file or folder). It’s also well documented on TechNet at this link and this link.

But what if you are not using Windows Server 2012 or later, or the access denied or authorization failure you are looking into is not for a file or folder object? In those cases you need to compare the identity that is requesting access against an Active Directory, service control manager or other object.

For the user group membership side of the concern I’ve written a script to make life easier in those scenarios. You can download it from the TechNet Script Gallery here: https://gallery.technet.microsoft.com/Get-User-Group-Memberships-b5930288 . The script can be run against any user in your domain and gives you a result file which contains their group memberships and any SIDHistory memberships they have.

More info about this script:

  • The script does not require the Active Directory PowerShell module
  • The script can be run as a non-Domain Admin
  • The script's results are relative to the domain where you run it. This is significant because group scopes do not traverse all boundaries.
  • The script will place results in a text file at the location of the prompt. The text file will be named after the username.
  • This script will simply obtain all of the Windows token related items which can be used for authorization decisions. This includes groups as well as SIDhistory entries on the user in AD or on the group the user is a member of.
  • This script is not a substitute for the Server 2012 Access Denied Assistance feature. If you have it use that instead! It's awesome.
  • Note: there is a known bug where local Administrators membership may appear as an incorrect group scope (shown below). The SID will be a local well known identifier however.
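The core of what the script gathers can be sketched without the Active Directory module by reading the constructed tokenGroups attribute, which holds every security group (including nested ones) that would appear in the user's token. The distinguished name below is a placeholder:

```powershell
# Bind to the user and ask AD to compute the tokenGroups attribute.
$user = [ADSI]"LDAP://CN=tspring,OU=Users,DC=contoso,DC=com"   # placeholder DN
$user.psbase.RefreshCache(@("tokenGroups"))

# Each tokenGroups value is a raw SID (byte array); translate it to a name.
foreach ($sidBytes in $user.psbase.Properties["tokenGroups"]) {
    $sid = New-Object System.Security.Principal.SecurityIdentifier($sidBytes, 0)
    try   { $name = $sid.Translate([System.Security.Principal.NTAccount]) }
    catch { $name = "(unresolved)" }
    "{0}  :  {1}" -f $name, $sid.Value
}
```

Note this sketch does not cover SIDHistory, which the full script also reports on.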

Here’s a sample command line result of running it:

Getting group membership details for user tspring...

User group collection complete. Results are at C:\test\tspringGroupList.txt.

Here’s a sample result from that text file:

Friday, May 8, 2015 10:46:55 AM

Groups for user tspring

Domain\Name: NA\tspring

User SID: S-1-5-21-2255868-8675309-8675309-995152

Domain Name: northamerica.contoso.com

Forest Name: contoso.com

***************

Domain Users (Domain Global Group)                                                       : S-1-5-21-2255868-8675309-8675309-513

Administrators (Domain Local Group)                                                      : S-1-5-32-544

Read-OnlyNADFSGroup  (Domain Global Group)                                                     : S-1-5-21-2255868-8675309-8675309-5555671

NA-BBGHT-445  (Domain Global Group)                                         : S-1-5-21-2255868-8675309-8675309-5555671

Northamerica_PastaLovers (Domain Global Group)                                         : S-1-5-21-2255868-8675309-8675309-5555671

SIDHistory Group Details

***************

[NONE FOUND]

User SIDHistory Details

***************

[NONE FOUND]

]]>
Making Secure Administration Work: StartScriptAsProcess.ps1 https://docs.microsoft.com/archive/blogs/tspring/making-secure-administration-work-startscriptasprocess-ps1 Mon, 27 Apr 2015 05:52:41 GMT https://blogs.technet.microsoft.com/tspring/2015/04/27/making-secure-administration-work-startscriptasprocess-ps1/ A recommended security practice nowadays is to use a less privileged account when logging into domain joined computers. The idea behind this is that if the session becomes compromised (such as from social attack like “you may already be a winner! emails) the compromise does not have the user’s more privileged credentials to do bad things with.

The challenge of minimizing exposure to compromise while still being able to administer and troubleshoot an Active Directory domain environment is a tough one. Since the user is not logged on as a privileged one, many of the AD specific tasks they may need to do will simply not work as expected.

An additional related scenario is service accounts: the identities which application servers run as are an explicit security barrier. As an administrator, though, it is common to need to check access or other settings as that service. Without a method to do so as the service identity there’s a lot of guesswork involved. A common workaround in the past was to log on interactively as the service identity, which increases the security exposure of that identity to all of the applications and services which run in an interactive session but do not in service sessions.

I’ve written this PowerShell script in order to overcome these scenarios and allow running PowerShell scripts as a specific identity. In Windows the identity is tied to a Windows process. In order to start a script or other item as a different identity, rather than the implicit process identity you are logged on as, we need to create the process and explicitly tell Windows to use an alternate identity. That’s what this script does, using the System.Diagnostics.Process.Start method.
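The technique can be sketched as follows; this is a minimal illustration of the Process.Start approach the post describes, not the script itself, and the target script path is a placeholder:

```powershell
# Prompt for the alternate (e.g. service) identity.
$cred    = Get-Credential
$netCred = $cred.GetNetworkCredential()   # splits DOMAIN\user into parts

# Build a process that runs powershell.exe as that identity.
$startInfo = New-Object System.Diagnostics.ProcessStartInfo
$startInfo.FileName        = "powershell.exe"
$startInfo.Arguments       = "-NoProfile -File C:\Scripts\CheckCerts.ps1"  # placeholder path
$startInfo.UserName        = $netCred.UserName
$startInfo.Domain          = $netCred.Domain
$startInfo.Password        = $cred.Password   # SecureString from Get-Credential
$startInfo.UseShellExecute = $false           # required when supplying credentials

$proc = [System.Diagnostics.Process]::Start($startInfo)
$proc.WaitForExit()   # wait until the called script completes
```

Because UseShellExecute is false and no output redirection is set, the called script's output and errors appear in the same console window.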

The specific scenario that brought this script about was a need to test and view certificates for a service identity. Since certificate store objects can only be accessed and reliably tested as the identity that should have access to them we needed a way to run PowerShell scripts as that service identity. To that end I wrote this script to call one of my other scripts for certificate chaining (which is here on TechNet).

A few things about this script:

· You can download the script from the TechNet Script Gallery at this link: https://gallery.technet.microsoft.com/Run-Script-As-A-Process-53e0c56a

· The script takes one parameter: ScriptPath. This needs to be the full local path to the script along with script name.

· The script will not work against remote destinations (where script is on a share and not on local computer the PowerShell console is on).

· The script will display the results in the same PowerShell window the script was called from.

· The script will have the console wait until the called script completes before returning to the prompt.

· Any error messages received by the called script will be shown in the PowerShell console. This was a common pain point that I was happy to resolve for when I needed to troubleshoot the script processing or why the called script was failing.

· The script will prompt for the credentials to use:

[screenshot: credential prompt]

· When complete a message of “Script $Scriptname has finished.” will be shown.

· The script has been tested on Windows Server 2008 and later.

Here’s an example of running the script:

PS C:\Scripts> .\startscriptasprocess.ps1 -ScriptPath "c:\scripts\isdomainadmin.ps1"

User CONTOSO\jimbob is not member of the built in Domain Admins group in domain contoso.com.

Script .\isdomainadmin.ps1 has finished.

Hopefully this script helps make your life easier as you securely administer your environment.

]]>
A Day at the SPA https://docs.microsoft.com/archive/blogs/tspring/a-day-at-the-spa Mon, 23 Mar 2015 05:33:18 GMT https://blogs.technet.microsoft.com/tspring/2015/03/23/a-day-at-the-spa/ Note: “A Day at the SPA” is the first in series for updates and republish of “Tspring’s Greatest Hits” blogs from https://blogs.technet.com/ad . Updates for applicability in newer products added.

Ah, there’s nothing like the stop-everything, our-company-has-come-to-a-complete-halt emergency call we sometimes get where the domain controllers have slowed to a figurative crawl, with nearly all other business likewise emulating a glacier owing to logon and application failures and the like.

If you’ve had that happen to one of your domain controllers then you are nodding your head now and feeling some relief that you are reading about it and not experiencing that issue right this moment.

The question for this post is: what do you do when that waking nightmare happens (other than consider where you can hide where your boss can’t find you)?

Well, you use my favorite, and the guest of honor for this post: Server Performance Advisor. Otherwise known as SPA.

The original “SPA” was not installed or available in Windows. Instead, it needed to be downloaded (link below) for Windows Server 2003. Windows Server 2008 and later come with the functionality of Server Performance Advisor baked into Performance Monitor (easily launched from Start—>Run and entering PerfMon.msc). When Active Directory Domain Services (AD DS) is installed on a server, the “Active Directory Diagnostics” data collector set in Performance Monitor is also installed automatically.

The functionality in Performance Monitor’s “Data Collector Sets” is basically the same as in SPA. For the purposes of this blog post I’ll use acronym “SPA” to mean Server 2003 SPA or the AD/ADLDS Data Collector Sets in Server 2008 and later Performance Monitor.

Think of SPA as a distilled and concentrated version of the Perfmon performance logging and tracing data you might review in this scenario. Answers to your questions are boiled down to what you need to know; things that are not relevant to Active Directory performance aren’t gathered, collated or mentioned. SPA may not tell you the cause of the problem in every case, but it will tell you where to look to find that cause.

Furthermore, the Active Directory Diagnostics data collector set has heuristics which will review what your server is doing versus what would be considered excessive given the performance capabilities of the hardware or allocated resources. For example, understanding if your server is running out of ATQ threads to handle LDAP queries is a calculated thing and SPA can sum things up nicely for you.

To start SPA simply right click the data collector set and choose Start. [image] The data collector set will show a green “Play” symbol on its icon while it is running. [image] When the data collection is finished running, and after the report is finished compiling, you can find the viewable report in the Reports node in the left hand tree. [image] The report will be named after the date it was run and an incremental number for how many were run that day. To export the report for viewing simply click the folder icon above (it says “Open Data Folder”) and zip up all of the files in that directory. [image]

So I’ve talked about the generalities of SPA, now let’s delve into the specifics. Well, not all of them, but an overview and the highlights which will be most useful to you.

SPA’s AD data collector is comprised of sections called Performance Advice, Active Directory, Application Tables, CPU, Network, Disk, Memory, Tuning Parameters, and General Information. For the 2008 and later data collectors the categories are Performance, Active Directory, CPU, Network, Disk, Memory, Hardware Configuration and Report Statistics.

Before you reach all of the hard data in those sections, though, SPA gives you a summary at the top of the report. It’ll look something like this:

Performance Advice is pretty self explanatory and is one of the big benefits of SPA over other performance data tools. It’s a synopsis of the more common bottlenecks, with an assessment of whether they are a problem in your case. Very helpful. It looks at CPU, Network, Memory and Disk I/O and gives a percentage of overall utilization, its judgment on whether the performance seen is idle, normal or a problem, and a short detail sentence that may tell more.

[image]

The Active Directory portion gives good collated data and some hard numbers on AD specific counters. These are most useful if you already have an understanding of what that domain controller’s baseline performance counters are; in other words, what the normal numbers would be for that domain controller based on the role it has and the services it provides day to day. Generally speaking, though, SPA is most often used when a sudden problem has occurred, and at that point it should not be used for establishing a baseline.

The good collated data includes a listing of clients with the most CPU usage for LDAP searches. Client names are resolved by FQDN and there is a separate area which gives the result of those searches.

AD has indices for fast searches, and those indices can get hammered sometimes. The Application Tables section gives data on how those indices are used. This information can be used to refine queries being issued to the database (if, for example, they traverse too many entries to return a result), to suggest that you need to index something new, or to indicate that you need to examine and perhaps fix your database using ntdsutil.exe.

The CPU portion gives a good snapshot of the busiest processes running on the server during the data gathering. Typically this would show LSASS.EXE as the busiest on a domain controller, but not always, particularly where the domain controller has multiple jobs (file server, or application server of some kind, perhaps). Generally speaking, having a domain controller be just a domain controller is a good thing.

Note: If Idle has the highest CPU percentage then you may want to make sure you gathered data during the problem actually occurring.

The Network section is one of the most commonly useful ones. Among other things, this summarizes the TCP and UDP client inbound and outbound traffic by computer. It also tells which processes on the local server were associated with that traffic. Good stuff which can give a “smoking gun” for some issues. The remaining data in the Network section is also useful, but we have to draw the line somewhere or this becomes less of a blog post and more like training.

The Disk and Memory sections provide very useful data, more so if you have a baseline for that system to tell you what is out of the ordinary for it.

SPA is a free download from our site, and installs as a new program group. Here’s where you can get it (install does not require a reboot):

https://www.microsoft.com/downloads/details.aspx?familyid=09115420-8c9d-46b9-a9a5-9bffcd237da2&displaylang=en

A few other things to discuss regarding SPA.

  • For the Server 2003 Server Performance Advisor: It requires Server 2003 to run.
  • As I stated above, when you have a problem is the worst time to establish a baseline. SPA and the data collector sets should be used to look for problems which are underway at the current time.
  • The default duration of the data collection is 300 seconds (5 minutes). The duration of the test can be altered depending on your issue. Keep in mind that if you gather data a great deal longer than the duration of the problem then you run the risk of averaging out the data and making it less useful for troubleshooting.
  • For Windows Server 2008 and later Data Collector Sets: Once the Active Directory Domain Services (AD DS) role is installed the Active Directory Diagnostics data collector set is also installed and available.
  • For the Server 2003 Server Performance Advisor: In the same way that there are ADAM performance counters, SPA has an ADAM data collector.
  • For Windows Server 2008 and later Data Collector Sets: When Active Directory Lightweight Directory Services is installed an additional ADLDS data collector is also installed.
  • For the Server 2003 Server Performance Advisor: The latest version (above) includes an executable that can kick this off from a command line and it can be run remotely via PsExec or similar.
  • SPA will not necessarily be the only thing to do in emergency scenarios but it’s a great starting place to figure out the problem.
  • When a data collection is finished it will then compile the data report for review. If the server was very busy during the data collection (as is likely, since why else would you be running it?) it may take a while for the report to be compiled. On busy servers the time to compile the report often takes longer than the data collection duration itself.

See? A day at the SPA can really take the edge off of a stop-everything, our-company-has-come-to-a-complete-halt emergency kind of day. Very relaxing indeed.

]]>
Discovering AD Trust Topology https://docs.microsoft.com/archive/blogs/tspring/discovering-ad-trust-topology Tue, 10 Mar 2015 05:53:37 GMT https://blogs.technet.microsoft.com/tspring/2015/03/10/discovering-ad-trust-topology/ Though many of today’s information technology topics revolve around “the cloud” it’s still very common to be looking at Active Directory trusts.  Active Directory (AD) trusts are the method by which one AD domain can allow access to resources joined to it from identities in other AD domains.  The value in setting up an AD trust versus setting up “pass through authentication” (identities in both domains with the same name and password) is that you can use the integrated security mechanisms of Kerberos and NTLM when authenticating to resources. The AD trust also provides for single sign-on in the traditional sense of not prompting anyone for credentials.

Ultimately AD trusts boil down to allowing access to resources in an authenticated way. The details of how, and what level of access, can be configured somewhat on the trust but mostly on the resources themselves once the authentication across the trust has happened. It’s that “somewhat” configuration of security on the trust, and how AD trusts behave, that drives an administrator to get a better understanding of how the trust is configured.

There are several kinds of AD trusts and each may have its own unique properties. These properties can affect authentication and authorization to resources across those trusts. For the most part trust security and configuration hasn’t changed much since Windows Server 2003 days. For that reason the old TechNet reference “Administering Domain and Forest Trusts” is still applicable.

Let’s go over trusts at a high level. There are three categories of trust: Forest, External and Internal.

Forest trusts are explained pretty well in the link above; however, I will add that a forest trust is the kind where you should expect Kerberos authentication to work. Forest trusts were created and tested explicitly to make sure Kerberos would work (among other things, like transitivity). Forest trust configuration can be a major factor in authentication if the trust is configured for DNS suffix inclusion or exclusion due to overlapping namespaces at source or destination.

External trusts are a lot like Windows NT style trusts. They do not allow for transitivity or customization with respect to namespace routing. They were not created or tested with Kerberos in mind, even though it will often work if certain details and name resolution are configured correctly.

Internal trusts are trusts within a forest. In forests with more than one domain this can be a ParentChild, CrossLink (also known as a shortcut trust) or TreeRoot trust type.

Trust types are confusing in TechNet documentation. The best reference is really MSDN at this link:  https://msdn.microsoft.com/en-us/library/system.directoryservices.activedirectory.trusttype(v=vs.110).aspx 

In addition to the different kinds of trusts, trust security can be configured on the trust itself for Selective Authentication and Quarantine (also known as SIDFiltering). These security settings may restrict access to resources across the trust, so having the details of whether they are enabled is very helpful. Trust security is a hot topic if you have common migrations, or plan to migrate objects from one domain to another and rely on SIDHistory to help maintain continuity of access to resources for users.

I’ve written a PowerShell script to help give you a quick understanding of the trust layout in your environment. You can download it from the TechNet Script Center at this link: https://gallery.technet.microsoft.com/scriptcenter/Get-AD-Trust-Topology-f8f2d1d7.

Script Details

  • You do not need to be a domain or enterprise administrator to run this script and get results.
  • You do not need the AD PowerShell module, or to load any module for that matter.
  • Trust type and scope are shown.
  • Trust direction is shown.
  • For forest trusts, DNS suffix inclusions and exclusions are shown as “Trust TopLevelNames (Name Suffix Routing)” and “Trust Excluded TopLevelNames(Name Suffix Routing)”.
  • For forest trusts, transitively trusted domains are listed as “trusted domain info”. This will show the NetBIOS domain name, domain SID and fully qualified DNS name of the domain.
  • Selective Authentication status for the trust is given.
  • SIDFiltering (also known as Quarantine) status for the trust is given.
  • To run the script open an elevated PowerShell prompt and enter “.\GetTrustTopology.ps1” (without the quotes).
  • The script will create a file %systemroot%\temp\TrustTopology.txt. All details will go into it.
  • The trust details are contextual. In other words, they are taken from the forest and domain of the computer on which GetTrustTopology.ps1 is run.
  • The script will provide a time estimate for how long it will take to complete, since each trust query takes approximately 1 second.
    • Example:

Trust data collection from the domain is expected to take approximately 1 minute(s) and 53 seconds.

Working...

  • A summary of how many trusts are configured and what kind they are is given.
  • In the text output file the trusts are categorized by Internal, External and Forest.
  • The script uses the .Net methods below:
    • [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
    • [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()
    • [System.DirectoryServices.ActiveDirectory.Domain]::GetAllTrustRelationships()
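The time estimate is simple arithmetic: at roughly one second per trust, the 113 trusts shown in the sample output below (41 forest + 8 internal + 64 external) work out to 1 minute and 53 seconds. A quick illustration in Python (the script itself is PowerShell; this is just the math):

```python
# Rough runtime estimate: the script queries each trust in turn,
# at approximately one second per trust.
SECONDS_PER_TRUST = 1

forest, internal, external = 41, 8, 64   # trust counts from the sample output
total_seconds = (forest + internal + external) * SECONDS_PER_TRUST

minutes, seconds = divmod(total_seconds, 60)
print(f"Trust data collection is expected to take approximately "
      f"{minutes} minute(s) and {seconds} seconds.")
```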

Here’s an example snippet from a trust output (which has been edited for length and content):

Trust topology information obtained from the computer TSPRING1 in the domain na.contoso.com on 03/09/2015 16:04:33.

This text file contains information on all trusts: Forest, External, Shortcut and ParentChild.

There are 41 forest trusts.

There are 8 internal (intra forest) trusts.

There are 64 external trusts.

**********************************************************************************************

Active Directory Trusts for Trusts of Type internal

*********************************

Trust Name : Trust Details for na.contoso.com | af.contoso.com.com

Local Domain (Source) : na.contoso.com

Trusted Domain (Target) : af.contoso.com.com

Trust Direction : Bidirectional

Trust Type : CrossLink

Quarantine (SIDFiltering) : False

Selective Authentication : False

Active Directory Trusts for Trusts of Type forest

*********************************

Trust Name : Trust Details for corp.contoso.com | zeem.contoso.com

Local Domain (Source) : corp.contoso.com

Trusted Domain (Target) : zeem.contoso.com

Trust Direction : Bidirectional

Trust Type : Forest

Trusted Domain Info: ZEEM : DNSName zeem.contoso.com | Domain SID: S-1-5-21-54554650-542264547-849213213

Trusted Domain Info: CORP : DNSName z.corp.contoso.com | Domain SID: S-1-5-21-126545641-121658548-89421321

Trust TopLevelNames (Name Suffix Routing) : {corp.contoso.com}

Trust Excluded TopLevelNames (Name Suffix Routing) : {sub.corp.contoso.com}

Quarantine (SIDFiltering) : True

Selective Authentication : False

Active Directory Trusts for Trusts of Type external

*********************************

Trust Name : Trust Details for na.contoso.com | manuf.tailspintoys.com

Local Domain (Source) : na.contoso.com

Trusted Domain (Target) : manuf.tailspintoys.com

Trust Direction : Bidirectional

Trust Type : External

Quarantine (SIDFiltering) : True

Selective Authentication : True

*****************
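Because every record in TrustTopology.txt is a series of “Key : Value” lines beginning with a “Trust Name” line, the output is easy to post-process if you want the trusts in a structured form. Here is a hypothetical Python sketch (illustrative only, not part of the script; the field names are taken from the sample above):

```python
def parse_trusts(text):
    """Parse 'Key : Value' trust records from the script's text output.
    A new record starts at each 'Trust Name' line."""
    trusts, current = [], None
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip separators and blank lines
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Trust Name":
            current = {}
            trusts.append(current)
        elif current is not None:
            current[key] = value
    return trusts

sample = """Trust Name : Trust Details for na.contoso.com | manuf.tailspintoys.com
Local Domain (Source) : na.contoso.com
Trusted Domain (Target) : manuf.tailspintoys.com
Trust Direction : Bidirectional
Trust Type : External
Quarantine (SIDFiltering) : True
Selective Authentication : True"""

trusts = parse_trusts(sample)
print(trusts[0]["Trust Type"])  # External
```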

I hope this script and blog post help folks as much as they help me when I use the script to discover a new environment’s trust topology.

]]>
Poor Man’s Guide to Troubleshooting TLS Failures https://docs.microsoft.com/archive/blogs/tspring/poor-mans-guide-to-troubleshooting-tls-failures Mon, 23 Feb 2015 09:49:00 GMT https://blogs.technet.microsoft.com/tspring/2015/02/23/poor-mans-guide-to-troubleshooting-tls-failures/ Network security has never been more of a hot topic than it is now. There are many different driving forces making network security an ever increasing topic for discussion and review. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) get particular focus since they are a layer of network security which can traverse a corporate on-premises network and provide security from that network to a cloud hosted service. Or online shopping ;)

In today’s post we will be focusing on how to determine

  • what version of SSL or TLS is in use in a network trace
  • how to quickly identify which is the client and which is the server in the interaction
  • how to find an RFC compliant TLS error, if present, in a network trace
  • how to find what the client and server are exchanging in the TLS negotiation with respect to certificates
  • how to find what the client and server are exchanging in the TLS negotiation with respect to supported ciphers
  • how to view unexpected TLS session shutdowns from client or server

This blog post will make assumptions about a basic understanding of SSL and TLS. That may seem like a huge base of knowledge (and it is!) but you can apply Occam’s Razor as a framework for making things understandable: there’s a client and a server, and they must negotiate the best mutually supported security for the session. That’s it in a nutshell for most network security and authentication; everything else is just filling in the blanks.

MSDN has a detailed explanation of how SSL and TLS work at this link.

Getting a network capture

Though I love the WireShark tool like a long lost brother I will be talking about using NetMon 3.x for the purposes of this blog post.  I’ve used Message Analyzer a bit and I suspect the same filters and techniques will work with it as well.

If you have NetMon installed then getting a capture is very easy. What many don’t know is that getting a network capture on a Windows computer even without NetMon is easy and scriptable. NetSh.exe, the “Swiss Army knife” of Windows networking, can be used to collect a network trace. More on NetSh.exe can be found on TechNet at this link.

The command I usually use is (at an elevated command prompt):

netsh trace start tracefile=<filenameandpath> capture=yes maxsize=200 filemode=circular overwrite=yes report=no

If you want to include this in a PowerShell script the command translates to (at an elevated PowerShell prompt):

$NetCapFile = $env:SystemRoot + '\temp\netcap.etl'
$StartNetCap = "netsh trace start traceFile=" + $NetCapFile + " capture=yes maxsize=200 filemode=circular overwrite=yes report=no"
Invoke-Expression $StartNetCap
# Stop the capture later with: Invoke-Expression "netsh trace stop"

Locating SSL or TLS in Network Trace

This is the easiest of filters in NetMon. For SSL of any version the filter is simply “ssl” (without the quotes) and for TLS it is “tls” (also without quotes).

Finding the SSL or TLS Version Used

Finding the SSL or TLS version is very easy since it is exchanged in the Client and Server Hellos. Just expand the packet to view using the NetMon parsers. Keep in mind that this is negotiated between client and server.


Determining Client and Server in Trace

This is the second easiest filter you will use when looking at SSL or TLS. If you filter for SSL or TLS you can identify the client which is initiating the secure session, because that is the computer which is the source of the “Client Hello” message. That’s the quick and easy way to understand who is initiating the conversation and who is the server side of the session, every time.

If you want to get an idea of all of the client connections by filtering just for the Client Hello messages you can apply the filter

TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake.ClientHello //TLS 1.x Client Hello filter

Finding SSL and TLS Negotiation Errors

SSL and TLS have a set client-to-server exchange in which the details of how the secure session will take place are ironed out and mutually agreed upon. This negotiation can go wrong for various reasons, and the client or the server is allowed to send an error message to the other side of the conversation detailing that things went wrong. In SSL/TLS parlance this is known as an “alert” message. It invariably means that something went wrong. An example would be that a server side (Server Auth) certificate may be expired, or not trusted by the client; the result is that the client would send a TLS alert message to the server. Servers can send requesting clients TLS alerts for a variety of reasons as well.

This capture filter will show only the TLS alerts in the capture, so you can quickly see if any are present at all…

TLS.TlsRecLayer.TlsRecordLayer.ContentType== 0x15 //This filter will show TLS Alerts
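As a cross-check outside NetMon, the TLS record header itself is easy to decode by hand: one content type byte (0x16 handshake, 0x15 alert, 0x17 application data), two version bytes, and a two byte length; an unencrypted alert body is just a level byte and a description byte. A small illustrative Python sketch, with the caveat that alerts sent after encryption starts will not be readable this way:

```python
CONTENT_TYPES = {0x14: "change_cipher_spec", 0x15: "alert",
                 0x16: "handshake", 0x17: "application_data"}
VERSIONS = {(3, 1): "TLS 1.0", (3, 2): "TLS 1.1", (3, 3): "TLS 1.2"}
ALERT_LEVELS = {1: "warning", 2: "fatal"}

def describe_record(data):
    """Decode a TLS record header (and the alert body, if plaintext)."""
    ctype = CONTENT_TYPES.get(data[0], "unknown")
    version = VERSIONS.get((data[1], data[2]), "unknown")
    info = f"{version} {ctype}"
    if ctype == "alert" and len(data) >= 7:
        level = ALERT_LEVELS.get(data[5], "unknown")
        info += f" (level={level}, description={data[6]})"
    return info

# A fatal handshake_failure(40) alert on a TLS 1.2 connection:
print(describe_record(bytes([0x15, 0x03, 0x03, 0x00, 0x02, 0x02, 0x28])))
```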

Viewing Exchanged Certificates

SSL and TLS rely on Public Key Infrastructure (PKI), in other words certificates, and specifically certificates which have the Server Authentication usage (identified by OID 1.3.6.1.5.5.7.3.1) or the Client Authentication usage (identified by OID 1.3.6.1.5.5.7.3.2). The server side application decides whether both client and server certificates are required, or only the server side certificate.

If the certificate’s issuer is not trusted, or a specified URI in the certificate cannot be checked, or it is expired or not yet valid then the certificate cannot be used. SSL or TLS will fail at that point.

During the SSL/TLS session setup the certificates are exchanged. The server will offer its certificate, and the client will offer its certificate if the server tells it one is needed. To find those certificates you can use the filter below.

TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake.HandShakeType == 0xb //This filter will show packets which contain certificates exchanged in TLS negotiation

The things which you can see about the certificate are really helpful to tell you what is actually being offered by the client and server applications:

  • Subject Name
  • Version (usually indicative of the template version the certificate was initially based on)
  • Serial Number
  • Issuer (certificate authority which issued the certificate)
  • Validity (NotAfter and NotBefore times)
  • Public Key Info (cipher)
  • Extensions like Subject Alternate Name (SAN), CRL Distribution Points, Key Usage and more.

Determining the Supported Ciphers

With the added attention to transport layer security comes more attention to the ciphers used by the client and server for the security.  Not all TLS servers and clients support the same ciphers and it is possible to see incompatibilities in TLS negotiation. This is true within the Windows versions from over the years, but is equally true when looking at network captures between other operating systems or network devices and Windows clients or servers.

In network captures we can filter for specific ciphers if we know that they may be problematic for either the client or the server. When doing so you simply add the hexadecimal value of the offered cipher to the filter below.

In the NetMon parser the offered cipher suites appear in the Client Hello under TLSCipherSuites, with the hexadecimal value listed beside each suite name. For example:

TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake.ClientHello.TLSCipherSuites.Cipher == <hexadecimal cipher value>

In support we talked to some customers who saw issues related to newer ciphers after they installed the Microsoft security fix MS14-066 (https://technet.microsoft.com/en-us/library/security/ms14-066.aspx). Some customers reported performance related concerns on the servers, or even failed TLS session setups. This was by no means universally seen by all customers in all scenarios; it was a complex, scenario-specific thing in each and every case.

One of the more difficult things to do is to identify whether that is even a potentially related concern when troubleshooting. To quickly rule in or rule out whether those newer ciphers are being used simply add both lines below to a filter with an “or” between them:

TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake.ClientHello.TLSCipherSuites.Cipher == 0xc014 //Filter to find TLS Client Hellos which are offering ECDHE_RSA_WITH_AES_256_CBC_SHA as an available cipher

TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake.ClientHello.TLSCipherSuites.Cipher == 0xc013 //Filter to find TLS Client Hellos which are offering ECDHE_RSA_WITH_AES_128_CBC_SHA as an available cipher

TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake.ClientHello.TLSCipherSuites.Cipher == 0xc00a //Filter to find TLS Client Hellos which are offering ECDHE_ECDSA_WITH_AES_256_CBC_SHA as an available cipher

TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake.ClientHello.TLSCipherSuites.Cipher == 0xc009 //Filter to find TLS Client Hellos which are offering ECDHE_ECDSA_WITH_AES_128_CBC_SHA as an available cipher
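Going the other direction, when you see a hexadecimal cipher value in a capture you can translate it back to a suite name with a small lookup table. The values below are the registered TLS cipher suite numbers for the four ECDHE suites discussed here; the Python sketch is illustrative only:

```python
# Registered TLS cipher suite values for the ECDHE suites discussed above.
CIPHER_SUITES = {
    0xC013: "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
    0xC014: "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
    0xC009: "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
    0xC00A: "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
}

def cipher_name(value):
    """Map a cipher suite value from a capture to its name."""
    return CIPHER_SUITES.get(value, f"unknown (0x{value:04x})")

print(cipher_name(0xC014))  # TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
```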

Does simply finding these ciphers in the TLS Client Hello mean that there is a problem? No, it doesn't. (I put that in bold to make sure it is not missed.) Things can work perfectly well with these advanced ciphers in place, and in fact they can offer better security.

There are some additional techniques which you can apply to find out whether the presence of those newer ciphers is leading the server side of the negotiation to close or reset the connection before any real data can be transmitted.

First filter for the TLS traffic you are concerned about. If the capture was taken using the NetSH.exe commands earlier in this blog post and it was taken on the client side you can identify your application in a column named “UTProcessName”. An example of one for Outlook.exe is

UTProcessName == "OUTLOOK.EXE (3014)"  //filter for process

Keep in mind that the process ID number above (3014) changes client to client and reboot to reboot. Also, if you filter only for that then you may miss some of the conversation, since not all network traffic for an exchange “goes in or out” of a single process. However, it is a very useful filter to know if you need to tune into a specific application and how it is behaving on the wire.

If the server is perhaps closing the TLS session with a TCP reset after negotiation, without failing with a TLS alert, then you can tune into that with a filter like this one, where we look for server side TCP resets and also show TLS negotiations.

TCP.Flags.Reset == 0x1 and ipv4.address==<IP of Server side> or TLS.TlsRecLayer.TlsRecordLayer.SSLHandshake.HandShake

Keep in mind that TCP resets should always be expected at some point as the client closes out the session to the server. However, if there is a high volume of TCP resets with little or no “Application Data” (traffic which contains the encapsulated, encrypted data between client and server) then you likely have a problem, particularly if the server side is resetting the connection as opposed to the client.

Also, remember that you can quickly find the IP addresses of the client and the server in the TLS exchange by simply filtering for Client Hellos.

Finding the Unexpected

It is very common for people on the Identity support team to be engaged with a vague problem description that is thought to be related to SSL or TLS. It is truly a Monty Python-esque “rumors and portents of things going on” ambiguity, where the people asking us to look for “problems” aren't really sure whether the reason they are asking is even remotely related to SSL or TLS.

The right thing to do is simply look and see what is in the network traces.

Sometimes we discover that the connection is not even using SSL or TLS. Conversely, sometimes we are engaged on a general problem with authentication only to discover that the failure is SSL/TLS related. The key is to approach it with an open mind and simply see what happens on the wire during the failure.

I usually start with the filter captures discussed in this blog post and simply go from there.  If I do not find any TLS alerts then I “widen the net” to look for how the TLS sessions are setting up (what version, what ciphers, what certificates) and contextually where the session setups fit in with the client to server scenario.

If nothing SSL or TLS related stands out then I simply select the built-in “authentication filter” and apply it to start looking for things unrelated to SSL and TLS.

This allows us to quickly look for anything related to authentication between a client and a server, including NTLM, Kerberos and transport layer protocols which are authenticated using them.

Thanks for reading, and I hope the “Poor Man’s Guide…” helps you better understand, and perhaps even fix, transport layer security on your network.

]]>
Golden Ticket! You lose! Good day, sir! (Updated) https://docs.microsoft.com/archive/blogs/tspring/golden-ticket-you-lose-good-day-sir-updated Fri, 30 Jan 2015 11:08:26 GMT https://blogs.technet.microsoft.com/tspring/2015/01/30/golden-ticket-you-lose-good-day-sir-updated/ In unique situations it is possible for a malicious person, one who has already compromised a computer using social methods, to craft a Kerberos ticket granting ticket. This ticket granting ticket can then be used to request service tickets in the domain environment, and those service tickets could then be passed to services for authorization.

Though very rare, these attacks are possible and are difficult to detect.

To try and help give a basic insight into whether an odd looking ticket granting ticket is on a computer I’ve written a PowerShell script. You can download the PowerShell script from the link below

Kerberos Golden Ticket Check

https://gallery.technet.microsoft.com/scriptcenter/Kerberos-Golden-Ticket-b4814285 

This PowerShell script is designed to query through the Kerberos ticket caches on a computer and look for Ticket Granting Tickets which have a duration (lifetime) that is different than the 10 hour default or the script-running user's specified duration (since the value can be changed per domain).

This script is not a security method in itself. Neither is it an antimalware tool. It is simply a script that may be helpful in quickly examining a specific computer's Kerberos ticket caches for anomalous tickets.

Essentially, the script compares the duration (aka lifetime) of the TGT against the expected TGT expiry that the domain KDCs are set to issue. That duration is only changeable at domain controllers via policy, and it is always a per domain setting, so a TGT will always have the domain's duration.
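The logic of the check is easy to illustrate: compute the ticket's duration from its StartTime and EndTime and flag anything that does not match the expected TGT lifetime. A rough Python sketch of the same idea (the actual script is PowerShell; the function name here is hypothetical):

```python
from datetime import datetime

def is_anomalous(start, end, expected_hours=10):
    """Flag a ticket whose lifetime differs from the domain's TGT lifetime."""
    duration_hours = (end - start).total_seconds() / 3600
    return duration_hours != expected_hours

start = datetime(2015, 3, 23, 12, 0, 0)
end = datetime(2015, 3, 23, 22, 0, 0)         # 10-hour ticket: normal
print(is_anomalous(start, end))               # False

forged_end = datetime(2015, 3, 24, 12, 0, 0)  # 24-hour ticket: suspicious
print(is_anomalous(start, forged_end))        # True
```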

The script will take one parameter, which is the Ticket Granting Ticket lifetime. If not specified the default will be 10 (the same as the domain default in Active Directory). This setting is discussed on TechNet here. Here’s the detailed description of the setting:

Maximum lifetime for user ticket

Description

This security setting determines the maximum amount of time (in hours) that a user's ticket-granting ticket (TGT) may be used. When a user's TGT expires, a new one must be requested or the existing one must be "renewed."

Default: 10 hours.

  • The script will take one parameter which is the Ticket Granting Ticket lifetime. If not specified the default will be 10 (same as for the domain default in Active Directory).
  • The script will alert if any anomalous TGTs or service tickets are found and then display pertinent details about the TGT or service tickets in the PS prompt.
  • The script will give a message if none are found at all-basically an "all clear".
  • The script will place the returned information to a text file at %systemroot%\temp\KerberosGoldenTicketChecks.txt for review.
  • There is a known data return/formatting problem where the data returned for some impersonated TGTs does not show the details correctly. This will be fixed in a future update.

Here's a sample result where I specified the expiry time of the tickets as "2" instead of the 10 hours actually issued.

Monday, March 23, 2015 2:39:44 PM
Review of local Kerberos ticket caches for ticket granting tickets (TGTs) or service tickets which have durations which differ from the domain specified ticket duration and hence may be maliciously created.

We have one or more potential Golden Ticket service tickets here folks.
Listing session information and ticket details...
SessionID  : 0x3e7
Identity   : CONTOSO\COMPUTER1$
AuthMethod : Negotiate
Logon Type : (0)


SessionID  : 0x3e7
Identity   : CONTOSO\COMPUTER1$
AuthMethod : Negotiate
Logon Type : (0)
Note: LogonID may not match Session info if the cache is for Kerberos delegation or services for user.
Session (LogonID) : 0xb9417
Client            : COMPUTER1$@ CONTOSO.COM
Server (Service)  : krbtgt/CONTOSO.COM @ CONTOSO.COM
Encryption Type   : RSADSI RC4-HMAC(NT)
Ticket Flags      : 0x40a50000 -> forwardable renewable pre_authent ok_as_delegate name_canonicalize
StartTime         : 3/23/2015 12:53:59 (local)
EndTime           : 3/23/2015 16:00:18 (local)
RenewUntil        : 3/30/2015 6:00:18 (local)
KDC Called        : DC5.CONTOSO.COM

Though not a complete or comprehensive solution by any means, I hope this script helps folks out when looking for suspicious TGTs.

]]>