Maintaining Version Control Over Distributed PowerShell Scripts


This is the first in a two-part blog post covering the topic of how to maintain version control over PowerShell scripts and Exchange 2016 deployment files in a large distributed environment (the second post, titled Leveraging the PowerShell Script Version Control Process for Exchange Cumulative Update Distribution, is now live). I say large environment because it's usually not too difficult to keep a tight rein on PowerShell script versions or Exchange deployment files in small environments; however, this functionality can be used in environments of any size.

To be clear, the issue of maintaining version control on PowerShell scripts is not unique to Exchange deployments, and as such the solution covered here can be leveraged by/applied to any PowerShell script regardless of its function. The focus on Exchange deployment scripts in this blog post is merely to provide context as to how/why this solution was created.

NOTE: There are code snippets used in these blog posts, but PLEASE do not copy and paste them as that can lead to unexpected results. Instead please download this sample script from the TechNet Gallery where I have provided the same code used in both blog posts.

DISCLAIMER: This is not an official Microsoft offering nor is it supported by Microsoft. This is a solution I came up with and am sharing with the community.

Background/Challenge

Exchange 2016 deployments are ever changing/evolving, from quarterly releases of new Exchange 2016 Cumulative Updates (CUs) and changes in supporting files (.NET versions, for example), to changes in the existing Exchange environment that need to be incorporated into the scripts used to deploy and maintain Exchange.

In a large environment, this constant evolution represents a challenge when it comes to maintaining version control as copies of those various files get distributed to various locations, but they still need to be updated/changed over time. Consider the following scenario:

  • A company uses scripts to deploy Exchange 2016 in their existing Exchange 2013 infrastructure.
  • Those scripts are copied to various file shares and local hard drives as different admins begin to deploy Exchange 2016 servers to different parts of the company.
  • In the middle of the Exchange 2016 deployment, a new company policy mandates that the maximum message size be increased from 25 to 50 MB.
  • An admin uses the Exchange Management Shell to make a change to all existing Exchange servers to incorporate the new message size.

How does the company ensure the new message size is used as the standard size for all future Exchange 2016 server builds, even though several copies of the deployment scripts with the older message size have been distributed throughout their environment?

Solution

To address the challenge of maintaining version control over scripts as they proliferate throughout a customer's environment, my colleague Daniel Hibbert and I got together to formulate a plan. One initial thought was to hard code a specific file share in all the scripts, where they would look for newer versions of themselves based upon the file's time and date stamp. However, file servers come and go, and we didn't want to be locked into a specific server name and file path forever, much less deal with potential time zone discrepancies between servers.

What we ultimately came up with was the idea to track script versions via Active Directory (AD) because AD is the one service that is guaranteed to be accessible in every site where Exchange will be installed, and because it has the native capability to store lots of information on objects it contains. Next, we decided to use a service account to store the information because the Custom Attributes on the account could be used for that purpose. This approach also allows for the file server/share/folder to move/change over time, as it is just information stored in the service account.

I took that idea and ran with it, creating what I call the "PowerShell Script Version Control Process" (PSVCP), where the Exchange deployment scripts would dynamically self-update whenever they detected through the service account that newer versions of themselves were "published" to a centralized file share. The published information is stored in Custom Attribute 1 through 15 on the service account, which means initially this solution can track up to 15 different files with their version information.

Here is an example of the code in action. A sample local script is older than a newer published version, so it prompts to update itself and then launches the new script while passing through the originally supplied parameters it is authorized to pass (notice 3 parameters were passed into the older script, but the SourceFiles parameter is deliberately skipped when relaunching the newer script):

Here the updated script is kicked off automatically in a newly launched PowerShell window on the same host (notice only the 2 intended parameters were passed through and that the version matches the published version this time):

Later, this process was expanded to include tracking the latest published Exchange CU, to help ensure that as new servers were deployed they weren't accidentally using an older CU. Additional detail on leveraging this process to publish the latest Exchange CU will be covered in the second of this two-part blog post.

The information below covers how to update/manage script versions over time, how to set up the service account to track script versions (including the formatting of the information stored in the Custom Attributes), and the actual PowerShell self-updating code to incorporate into scripts.

Script Version Numbering

Traditionally most people, myself included, use a standard numbering method of Version 1.0, 2.1, 4.4b, etc. to establish a numerical version for files that undergo changes over time. It is subjective when a major (before the ".") versus a minor (after the ".") version number is incremented in this method, and oftentimes letters are added to indicate a sub-version of a major.minor file version.

This methodology doesn't scale well when a file will undergo a large number of iterations over time, the inclusion of letters for sub-versions doesn't lend itself to programmatically comparing version numbers, and the version number doesn't identify at a quick glance how old a particular version of a file is.

To address all of these concerns, this process was designed to use the numerical format for the script version consisting of Year, Month, Day, and Iteration in the following format:

YYYYMMDD.##

The ".##" iteration number should always start at .01 for the first script published on a day, but can be incremented to allow for multiple versions/iterations of a script in the same day. For example, the second iteration of a script on the same day would end in .02, the third would end in .03, and so on. Normally the iteration number stays at .01 because there is only one release of a script for any particular day, but it is available for when a script has to be changed and published multiple times in a day.
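As a quick sketch (not part of the published scripts), this format also compares correctly when cast to a Double, which is how the self-updating code evaluates versions later:

```powershell
# Sketch: YYYYMMDD.## version strings compare correctly as Double values.
[Double]$OlderVersion = "20170708.02"   # second iteration from July 8th
[Double]$NewerVersion = "20170709.01"   # first iteration from July 9th

# A later date always wins, and same-day releases are ordered by the .## iteration.
$NewerVersion -gt $OlderVersion   # True
```

Because the whole version is a single decimal number, there is no need for letter suffixes or multi-part version parsing.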

The PowerShell code part of this process, which is plugged into every participating script, is designed to look for each script's version number in the header section at the beginning of the script on the following line:

SCRIPT VERSION: 20170709.01

The code looks specifically for this line by searching for "SCRIPT VERSION:" to locate the script's own version number, and takes everything after the ":" as the actual version number. Using the example directly above, the script version number is 20170709.01, which means the script is the first iteration (.01) for July 9th, 2017.

Service Account Configuration

All participating scripts are published and tracked using a service account in AD. A disabled computer account was chosen to act as the AD-based service account, which meets the criteria of having 15 Custom Attribute fields on it. It's disabled because it does not need to be enabled in order to track the various published files. It's also a computer object because if the account is ever accidentally enabled, it cannot be used to interactively log on to any system.

As previously mentioned, the scripts are tracked using Custom Attribute 1 through 15 on the service account, with one file entry published per Custom Attribute, using the following field names:

  • FileID - This field, which is used to track the identity of a file, is optional for scripts but mandatory for Exchange CUs. It is used when the actual file name will change over time (such as with Exchange CUs) and a constant name is needed to identify the file entry.
  • FileName - Mandatory field to track the actual file name.
  • FileVersion - Mandatory field to track the version number of the file.
  • FilePath - Mandatory field to track the access path to the file.
  • FileHash - This field, which is used to track the hash of the file, is optional for scripts but mandatory for Exchange CUs. Hashing a file provides a mechanism to ensure it has not been modified/corrupted when copied over a network link.

All the fields listed above that are populated for an individual file entry are stored together in a single Custom Attribute, where the fields (along with their values) are separated from each other by a comma "," delimiter character. The value for each field is stored after the field name, separated by a colon ":" delimiter character. For example, an Exchange 2016 installation script would be stored in a single custom attribute as the following string:

FileID:Exchange2016InstallScript,FileName:Install-Exchange2016.ps1,FileVersion:20170709.01,FilePath:\\SERVER2.contoso.com\Software\Microsoft\Exchange\2016\Install\Scripts,FileHash:2AACABB7DB9AC691D866BAD947AEBB6DE0A95D54BA48D784DB9F16AA0B3CC274

Which when broken out to a more user readable version looks like the following:

  • FileID - Exchange2016InstallScript
  • FileName - Install-Exchange2016.ps1
  • FileVersion - 20170709.01
  • FilePath - \\SERVER2.contoso.com\Software\Microsoft\Exchange\2016\Install\Scripts
  • FileHash - 2AACABB7DB9AC691D866BAD947AEBB6DE0A95D54BA48D784DB9F16AA0B3CC274
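The self-updating code covered later parses these strings back into usable objects. As a standalone sketch of that parsing, using the sample entry above:

```powershell
# Sketch: split a published file entry string into its fields and store them on an object.
$Entry = "FileID:Exchange2016InstallScript,FileName:Install-Exchange2016.ps1,FileVersion:20170709.01,FilePath:\\SERVER2.contoso.com\Software\Microsoft\Exchange\2016\Install\Scripts,FileHash:2AACABB7DB9AC691D866BAD947AEBB6DE0A95D54BA48D784DB9F16AA0B3CC274"

$FileObject = New-Object -TypeName PSObject
ForEach ($Field in $Entry.Split(",")) {
	# Split only on the first ":" so the value portion is kept intact.
	$Name, $Value = $Field.Split(":", 2)
	$FileObject | Add-Member -NotePropertyName $Name -NotePropertyValue $Value
}

$FileObject.FileName      # Install-Exchange2016.ps1
$FileObject.FileVersion   # 20170709.01
```

Note that neither the "," nor the ":" delimiter characters can appear inside a field value, which is why UNC paths (no drive-letter colon) work cleanly in the FilePath field.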

Publishing New Script Versions

After the non-version related changes to a script are tested and implemented, such as the example above where the message size is changed from 25 to 50MB, perform the following steps to update the script's version number and then deploy it:

  1. Update the "SCRIPT VERSION:" line in the modified script to reflect the current date and iteration of the script for that day. Again, in the example above the version number 20170709.01 is the first iteration (.01) for July 9th, 2017.
  2. Copy the updated script to the appropriate folder in the established file share. In the example above the script is being copied to the Microsoft\Exchange\2016\Install\Scripts folder on the Software share of SERVER2.
  3. Update the relevant Custom Attribute entry on the service account to reflect the new version number of the script.

While manually updating the individual file entries on the service account is possible, it isn't easy and is subject to human error. To address this, I created the Update-FileEntries.ps1 script to provide a PowerShell based GUI that assists with publishing new or updating existing file entries as needed. A special thanks to another colleague, Chris Lewis, who helped me when I hit a few stumbling points getting the script working.

To modify file entries using the script, use the following steps:

  1. Run the Update-FileEntries.ps1 script - it does not require the PowerShell session to be run as an Administrator, but it does require that the user account running it have modify rights on the service account in AD.
  2. When the GUI is first loaded it should look like this:
  3. Confirm the "LDAP Domain" field points to the LDAP name of the domain where the service account is stored, and confirm the "Service Account" field points to the "SAM Account Name" of the disabled computer service account. Change as necessary.
    NOTE: The default LDAP Domain and Service Account are pre-defined at the top of the Update-FileEntries script, and they should be modified for your environment so you don't have to change them every time you run the script.
  4. Click the "Get File Entries" button to retrieve all the file entries from the 15 Custom Attributes on the service account:

    NOTE: If this is the first time running the script for a service account, all columns other than "Entry #" will be blank. The example above shows a service account that has already been populated with information for both Exchange 2013 and Exchange 2016 deployment scripts.
  5. Select the file "Entry #" row that needs to be modified and click the "Edit File Entry" button (double clicking a file entry works as well) to bring up the Edit File Entry window:

    NOTE: The Entry # value correlates to the actual Custom Attribute number the data is stored in.
  6. Make the necessary changes to the file entry as needed:
    1. File ID - Optionally enter a name (no spaces) which is used to identify a file entry separately from its file name.
      NOTE: This is currently only required for the Exchange CU ZIP file as its File Name is expected to change over time.
    2. File Name & File Path - To change the file name or location of the file, click the "Browse" button, navigate to and select the file, and press the "Open" button:

      NOTE: The Update-FileEntries script is intentionally hard coded to only allow selecting .PS1 PowerShell script files or .ZIP files, changeable by the pull down in the bottom right hand corner.
    3. File Version - To update the version number of the file, click the "Retrieve" button.
      NOTE: File version information for scripts will be extracted from them in the same manner described in the Self-Updating Script Code section below.
    4. File Hash - To optionally generate a new SHA256 file hash for the file, click the "Generate" button. This is currently only required for the Exchange CU ZIP files.
      NOTE: Generating a file hash for large files such as the Exchange CU ZIP can take a little while, and the PowerShell window may appear to hang briefly during the process.
  7. Review the Edit File Entry window and confirm all the necessary changes were made. In the following example, the Install-Exchange2016.ps1 script file entry was updated on July 9th 2017 as the first iteration, and the Retrieve button was used to update the File Version to reflect that:
  8. To save changes, select the "Save Entry" button.
  9. To reverse any changes made prior to saving, select the "Reset Entry" button.
  10. To completely remove the file entry from the service account, select the "Clear Entry" and then the "Save Entry" button.
    NOTE: This will wipe all data from only the selected Entry (Custom Attribute) #.
  11. Once the Save Entry button is used, or the red X is clicked to cancel the changes, the Edit File Entry window will close, and the focus will return to the Service Account File Entry Editor main window.
  12. When done, close the Service Account File Entry Editor main window by clicking the red X.
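The hash stored in the FileHash field is a standard SHA256 hash, so it can also be generated or verified manually with the built-in Get-FileHash cmdlet (available in PowerShell 4.0 and later). This sketch uses a temp file in place of an actual Exchange CU ZIP so it is self-contained:

```powershell
# Sketch: SHA256 hashing as used for the FileHash field.
# A temp file stands in for the Exchange CU ZIP so this sketch is runnable anywhere.
$SampleFile = [System.IO.Path]::GetTempFileName()
Set-Content -LiteralPath $SampleFile -Value "sample content"

# Generate the hash to publish in the FileHash field.
$PublishedHash = (Get-FileHash -LiteralPath $SampleFile -Algorithm SHA256).Hash

# After copying the file, hash the copy and compare it to the published value.
$CopyPath = "$SampleFile.copy"
Copy-Item -LiteralPath $SampleFile -Destination $CopyPath
$HashesMatch = (Get-FileHash -LiteralPath $CopyPath -Algorithm SHA256).Hash -eq $PublishedHash
$HashesMatch   # True - the copy arrived intact

# Clean up the sketch's temp files.
Remove-Item $SampleFile, $CopyPath
```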

Self-Updating Script Code

Up to this point we have covered the version number formatting for scripts, where and how the version number should be stored in the scripts, how the file entries for scripts are stored on a service account in AD, and even how to use the Update-FileEntries script to make populating and updating file entries on the service account easier. That is all the foundation of this new process, but none of that actually causes the scripts to start self-updating.

To make a script leverage the aforementioned process components and self-update when it finds a newer version published on the service account, actual PowerShell code needs to be injected at the top of the main body of the script.

The code example provided here is broken up into two parts, the first performing the following:

  • Defines the service account and domain the script should use to perform the version lookup.
  • Establishes an array of user provided parameters that should be omitted when the script is relaunched. More on this later.
  • Extracts the script's version as a "Double" numerical value (otherwise known as a decimal number).
  • Extracts the name of the script (ignoring the current path of the script).

#################################DISCLAIMER#################################

The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

############################################################################

I highly recommend placing this at the very top of your main code section just after your script level parameters, so it is easily found and changed in the future if needed:

# Service account SAMAccountName that holds all the file version information, and its LDAP domain.
$SvcAccountName = "FileVerTracker"
$SvcAccountDomain = "LDAP://DC=contoso,DC=com"
# Establish a list of parameters that should not persist when relaunching the script.
$SkippedParameters = @()
# Find the first instance of the "SCRIPT VERSION:" at the top of the executing script, and use what's after the colon as the ScriptVersion.
[Double]$ScriptVersion = (Select-String -Pattern "SCRIPT VERSION:" -LiteralPath $PSCommandPath -SimpleMatch).Line.Split(":")[1]
# Extract the file name of this script.
$ScriptName = Split-Path $PSCommandPath -Leaf
# Base installation location for all of the source files.
$InstallDir = "C:\Software\Microsoft\Exchange\2016\Install"
# Location of the scripts directory in the installation package.
$InstallScriptsDir = "$InstallDir\Scripts"

Just like with the Update-FileEntries script, the first two variables should be modified to reflect the SAM account name and the LDAP domain of the service account being used to publish all of the file entries. The SkippedParameters variable is for cases when a parameter shouldn't be passed through every time the script relaunches, such as when a value is recorded to a file before the script version checking code executes. The last two variables are the hard-coded paths of where to install the Exchange deployment files and scripts when they are pulled down/updated.
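For example, in the scenario shown earlier where the SourceFiles parameter was deliberately not passed through to the relaunched script, the array would simply be populated like so:

```powershell
# Don't pass the SourceFiles parameter through when the script relaunches itself.
$SkippedParameters = @("SourceFiles")
```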

The second code example should be injected into the main body of the script before any other scripted tasks activate. The reason to have this code block be the first real action the script takes is to ensure that if one of the other actions later in the script is what needs to be changed through a newer version of the script, the script is updated with those changes before the older versions of those actions can execute.

I break down the actions the code is taking in a step-by-step format below, but here is the actual code block I use with the Exchange deployment scripts:

#region Confirm-ScriptVersion
# Query the specified service account in its specified domain.
$ADSearch = New-Object System.DirectoryServices.DirectorySearcher
$ADSearch.SearchRoot = $SvcAccountDomain
$ADSearch.SearchScope = "Subtree"
$ADSearch.Filter = "(&(objectCategory=Computer)(sAMAccountName=$SvcAccountName$))"
# Add all 15 custom (extension) attributes to the ADAttributes array, and include them in the search.
$ADAttributes = @()
For ($Num = 1; $Num -le 15; $Num++) {
	$ADAttributes += ("extensionAttribute" + [String]$Num)
}
$ADSearch.PropertiesToLoad.AddRange($ADAttributes)

# Set UpdateScript and DomainLookupError to False and then try to find the computer object service account in the specified domain.
$UpdateScript = $False
$DomainLookupError = $False
Write-Host ""
Write-Host ""
Try {
	$SvcAccount = $ADSearch.FindOne()
} Catch {
	# There was a problem contacting/searching the domain so mark the $DomainLookupError as True.
	# NOTE: This usually only occurs here when the domain wasn't specified correctly as an incorrect account name will just result in an empty $SvcAccount.
	$DomainLookupError = $True
}

# Check to see if the service account was found.
If ($SvcAccount) {
	# It was so extract all the populated custom attributes from it, and store the values as objects.
	$FileEntries = @()
	ForEach ($ADAttribute in $SvcAccount.Properties.PropertyNames) {
		If ($ADAttribute -match "extensionattribute") {
			$FileObject = New-Object -TypeName PSObject
			# Split the custom attribute entry up into 3 properties separated by ",", and then split each property up into name and value separated by ":".
			ForEach ($FileProperty in ($SvcAccount.Properties.$ADAttribute).Split(",")) {
				$FileObject | Add-Member -NotePropertyName $FileProperty.Split(":")[0] -NotePropertyValue $FileProperty.Split(":")[1]
			}
			$FileEntries += $FileObject
		}
	}

	# Check to see if this script was found in the $FileEntries array.
	If ($FileMatch = $FileEntries | Where-Object {$_.FileName -eq $ScriptName}) {
		# It was so record the FilePath as a script variable, and then compare this script's version against the recorded FileVersion.
		$FileMatchPath = $FileMatch.FilePath
		[Double]$FileMatchVersion = $FileMatch.FileVersion
		If ($ScriptVersion -gt $FileMatchVersion) {
			# This script version is greater/newer than the entry in the service account, so throw a warning and prompt to proceed.
			Write-Warning "This script's version #$ScriptVersion is newer than the version #$FileMatchVersion defined on the service account."
			Write-Host "NOTE: This script may be an unsupported test version, or AD replication might be slow and not reflect a recently updated script."
			Write-Host ""
			$Host.UI.RawUI.FlushInputBuffer()
			$Proceed = Read-Host "Do you want to proceed with running this newer script version anyway? (Y/N)"
			If ($Proceed -eq "Y") {
				# Note this script will proceed.
				Write-Host "Continuing with script execution..."
			} Else {
				# Otherwise exit this script.
				Write-Host "Exiting this script due to it being newer than the version defined on the service account." -ForegroundColor Red
				EXIT
			}
		} ElseIf ($ScriptVersion -lt $FileMatchVersion) {
			# This script version is lesser/older than the entry in the service account, so throw a warning and prompt to proceed.
			Write-Warning "This script's version #$ScriptVersion is older than the version #$FileMatchVersion defined on the service account."
			Write-Host ""
			$Host.UI.RawUI.FlushInputBuffer()
			$Proceed = Read-Host "Would you like to continue using those files and then have this script automatically update and relaunch itself with the latest version? (Y/N)"
			If ($Proceed -eq "Y") {
				Write-Host "This script will update and relaunch itself."
				$UpdateScript = $True
			} Else {
				# Otherwise exit this script.
				Write-Host "Exiting this script due to it being older than the version defined on the service account." -ForegroundColor Red
				EXIT
			}
		} Else {
			# Otherwise this script was the same version as entry on the service account, so note that and proceed.
			Write-Host "This script is verified as the latest version. Continuing with script execution..."
		}

	} Else {
		# This script wasn't found on the service account so report that and give the option to continue anyway.
		Write-Warning "This script's file name `"$ScriptName`" was not found in a custom attribute of the service account `"$SvcAccountName`"."
		Write-Host ""
		$Host.UI.RawUI.FlushInputBuffer()
		$Proceed = Read-Host "Do you want to proceed with running this script anyway? (Y/N)"
		If ($Proceed -eq "Y") {
			Write-Host "Continuing with script execution..."
		} Else {
			Write-Host "Exiting this script due to being unable to find the proper version information for it in AD." -ForegroundColor Red
			EXIT
		}
	}
# If there was no service account found, check to see if there was a problem with contacting the domain above.
} ElseIf ($DomainLookupError) {
	Write-Warning "There was a problem contacting the domain `"$SvcAccountDomain`". Please ensure the LDAP domain name is formatted correctly. Skipping script version check."
# Otherwise there was no problem contacting the domain, but no service account was found, note that and move on.
} Else {
	Write-Warning "The service account `"$SvcAccountName`" was not found in the script specified domain. Please ensure the correct service account SAMAccountName is used. Skipping script version check."
}

# Check to see if an update of this script is required.
If ($UpdateScript) {
	# It is so try to copy the updated script to the scripts folder, overwriting the existing script.
	Write-Host "Updating this script with the latest version."
	Try {
		Copy-Item ($FileMatchPath + "\" + $ScriptName) -Destination $InstallScriptsDir -Force -ErrorAction Stop
	} Catch {
		# Exit this script if there was a problem copying the file for any reason to prevent the old script from going into an update loop.
		Write-Host "There was a problem copying `"$ScriptName`" from `"$FileMatchPath`". Copy it manually and re-run this script." -ForegroundColor Red
		EXIT
	}
	Write-Host "Restarting using the updated script..."
	# Create a string to hold the arguments to be passed to PowerShell.exe, and start with the basic script execution information.
	$ArgumentList = "-ExecutionPolicy Unrestricted -NoExit -Command `"$InstallScriptsDir\$ScriptName`""
	# Loop through each manually specified parameter for this script, and add them to pass through to the new PowerShell instance as long as they are not marked to be skipped.
	ForEach ($Parameter in $Script:PSBoundParameters.Keys) {
		If ($SkippedParameters -notcontains $Parameter) {
			# Try to grab the parameter value and type, and then add special formatting if the type is a String or Switch/Boolean.
			$ParamValue = $Null
			If ($TempValue = Get-Variable $Parameter -ValueOnly -ErrorAction SilentlyContinue) {
				$TempValueType = $TempValue.GetType().Name
				If ($TempValueType -eq "String") {
					$ParamValue = "`"$TempValue`""
				} ElseIf (($TempValueType -eq "SwitchParameter") -or ($TempValueType -eq "Boolean")) {
					$ParamValue = "`$$TempValue"
				} Else {
					$ParamValue = $TempValue
				}
				# Add the parameters on to the end of the ArgumentList after the PowerShell script.
				$ArgumentList += " -$Parameter`:$ParamValue"
			# Otherwise the parameter must be a common PowerShell parameter, so just add it to the argument list by itself.
			} Else {
				$ArgumentList += " -$Parameter"
			}
		}
	}
	Write-Verbose "Start-Process Arguments are: $ArgumentList"
	# Launch the new script and then exit the current older script.
	Start-Process "$PSHOME\powershell.exe" -Verb Runas -ArgumentList $ArgumentList
	EXIT
}
#endregion Confirm-ScriptVersion

Here are the steps the code is executing:

  1. A .NET call to AD is established.
    1. A .NET call is used to talk to AD in lieu of an AD cmdlet because there is no guarantee the computer running the script has the Active Directory module loaded, and there is overhead loading up the module for what is otherwise a quick AD search.
    2. The AD query specifies the SAM account name of the service account and that it's a computer. It also specifies that the search should go through the entire domain, throughout any OU or container (that's the use of "subtree"), so it doesn't matter where in the domain the service account is stored.
    3. All 15 custom attributes are added as attributes to be returned as part of the query.
  2. The AD query for the service account is performed in a Try/Catch statement, where only one account matching the filters is expected (that's the use of "FindOne").
  3. If there was an error contacting the domain, such as it was mistyped, record that for later.
  4. If the service account was found, all the custom attribute properties that have values in them are extracted out. For each custom attribute entry, which represents a unique file entry, the following is performed:
    1. Create a custom PowerShell object for the unique file entry.
    2. Separate out the file entry fields by splitting up the string using the "," character.
    3. Next split each file entry field into a name and value splitting the string using the ":" character.
    4. Store each file entry field name and value as a pair in the custom PowerShell object. This means at a minimum the PowerShell object will have a name and value pair for the File Name, File Path, and File Version fields.
    5. Store the custom PowerShell object for the entire file entry in an array of file entries.
  5. The file entries array is searched for the name of the script currently executing. If a matching script name is found, then the following is performed:
    1. Extract out the File Path and File Version from the matching entry (the File Name is not needed as it was already matched).
    2. Compare the current script's version number against the version extracted from the service account.
      1. If the local script is newer than the entry on the service account, which would be highly unusual except in testing scenarios, prompt to proceed with executing the newer local script.
      2. If the local script is older than the entry on the service account, prompt to update the local copy of the script, with the actual update action taking place further below.
        NOTE: In some of the scripts I have switched to bypassing the prompt and automatically upgrading the script, as I have yet to run into a scenario where I didn't want the script to update to a newer published version. It's a good idea to leave the prompt in place until you reach a comfort level with this new process.
      3. Otherwise, the local script is the same version as the entry on the service account, so just note that and continue.
  6. If the script name wasn't found in the file entries array created from the service account, then note that and prompt to continue running the script. Normally this should never happen if the service account was found unless someone accidentally deleted the file entry from the service account.
  7. If the domain couldn't be contacted earlier, then note that and prompt to continue.
  8. If the service account wasn't found in the search, then note that and prompt to continue.
  9. If the script was marked to be updated, then try to copy the script from the path recorded on the service account to the designated local scripts directory, exiting out of the script if it fails.
    1. The local scripts directory $InstallScriptsDir is a hard-coded path in the Exchange deployment scripts, but you could swap the path out for the directory of the currently executing script by using:
      $InstallScriptsDir = Split-Path $PSCommandPath -Parent
  10. Extract all of the parameters the user supplied when the script was executed, so that they can be passed through the "ArgumentList" for when the updated script is launched.
    1. If the parameter's name is listed in the $SkippedParameters array (defined earlier in the Exchange deployment scripts) then skip including it in the launching of the updated script.
    2. If the parameter is a String value, then include its value inside escaped quotes so the string integrity is maintained.
    3. If the parameter is a Switch or a Boolean value, then set its value to $True or $False according to what was specified.
    4. If the parameter is anything else, include it as is. This seems to work for Arrays, PowerShell Objects, Integers, etc... but you may have a parameter value that needs special handling, which should be added to this section of code.
    5. Also, "common" parameters such as -Verbose and -Debug don't register as any of the types listed above, so if they were used with the script they are recorded as well with just their parameter name.
  11. Launch the updated script in its designated directory with all of the parameters supplied to the old script that weren't deliberately skipped.
    1. The Start-Process cmdlet will launch the updated script in a new window, and the old script will exit in the current window, so the new script starts cleanly in its own PowerShell instance. The added value to this approach is if there are any issues you can still see what happened in both windows separately.
    2. The new window will be "Run as" an Administrator only because that's a requirement of the actions being taken in the Exchange deployment scripts.
    3. If you would rather not launch a new window for the new script, then you could use the following Start-Process line instead:
      Start-Process "$PSHOME\powershell.exe" -ArgumentList $ArgumentList -NoNewWindow -Wait

      NOTE: Don't forget to keep the EXIT after the Start-Process line so the old script quits immediately after running the new "embedded" script in the same window.
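
The parameter pass-through described in steps 10 and 11 can be sketched as a small helper. This is a minimal sketch and not the exact code from the sample script on the TechNet Gallery; the function name ConvertTo-ArgumentList is hypothetical, and in the real scripts you would feed it $PSBoundParameters along with the $SkippedParameters array described earlier.

```powershell
# Hypothetical helper sketching step 10: rebuild an argument string from the
# parameters the user originally supplied (e.g. $PSBoundParameters).
function ConvertTo-ArgumentList {
    param(
        [System.Collections.IDictionary]$BoundParameters,
        [string[]]$SkippedParameters = @()
    )
    $ArgumentList = ""
    foreach ($Parameter in $BoundParameters.GetEnumerator()) {
        # Step 10.1: deliberately skipped parameters are not passed through.
        if ($SkippedParameters -contains $Parameter.Key) { continue }
        switch ($Parameter.Value.GetType().Name) {
            # Step 10.2: strings go inside escaped quotes so spaces survive.
            "String"          { $ArgumentList += " -$($Parameter.Key) `"$($Parameter.Value)`"" }
            # Step 10.3: switches and booleans become -Name:$True / -Name:$False.
            "SwitchParameter" { $ArgumentList += " -$($Parameter.Key):`$$([bool]$Parameter.Value)" }
            "Boolean"         { $ArgumentList += " -$($Parameter.Key):`$$($Parameter.Value)" }
            # Step 10.4: everything else (arrays, integers, etc.) passes as-is.
            default           { $ArgumentList += " -$($Parameter.Key) $($Parameter.Value)" }
        }
    }
    $ArgumentList.Trim()
}
```

The resulting string is what gets appended after the script path when building the -ArgumentList value for Start-Process in step 11; the -Verbose/-Debug handling from step 10.5 would still need to be layered on top.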

There you have it: that code will update the current script to a newer version published on the service account, pass through all desired parameters from the original script, and do so in a new window so the new script starts up cleanly.

To recap you might need to do the following to customize this code to your environment:

  1. Leverage the provided hard-coded $InstallScriptsDir variable path for where scripts should be downloaded to and launched from, or switch to extracting the path from the currently executing script so that path is re-used. In either case the update process will overwrite any previous version of the scripts in the target directory.
  2. Define any parameters you don't want to be passed through a re-launching of the script by adding the parameter names to the $SkippedParameters array earlier in the script like this:
    $SkippedParameters = @("StartAtStep","SourceFiles")
  3. Decide whether you want to have the script launch in a new window or use the existing one, and if it is launched in a new window decide if you need it to "Run as" an Administrator.
  4. Test all your script's parameters to ensure they are passed through to the new script correctly, adding additional special parameter handling code as necessary.
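
One detail worth getting right when customizing the version check itself: compare versions as [version] objects rather than as strings. A minimal sketch with illustrative values (the published version would come from the file entry on the service account, the local one from a version constant inside the script):

```powershell
# Illustrative values, not from the sample script.
[version]$LocalVersion     = "1.9"
[version]$PublishedVersion = "1.10"

# [version] compares numerically, so 1.10 is correctly newer than 1.9;
# a plain string comparison would rank "1.10" BEFORE "1.9".
$UpdateNeeded = $LocalVersion -lt $PublishedVersion
```

When $UpdateNeeded is $True, the actual update is just the Copy-Item described in the walkthrough: copy from the path recorded on the service account to $InstallScriptsDir with -Force so any previous version is overwritten.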

Security Controls

This process is only as secure as you make it, and to that end I wanted to highlight some important points to make sure people implement this process in a secure fashion.

  1. Only the necessary Domain accounts should have access to modify the custom attributes of the service account. Therefore, make sure to secure the OU the service account lives in or the permissions on the service account itself.
  2. The Domain account running a script using this process only needs read rights to the service account and its custom attributes. By default, all Domain authenticated users can read custom attributes from all Domain accounts (including computer accounts). Therefore, this process should work for any Domain account, unless special restrictive permissions have been used on the service account.
  3. This should go without saying, but lock down permissions on the file shares where the files referenced on the service account are stored, so only the authorized users running the scripts can get to the updated versions.
  4. Authenticated Domain accounts being able to see file names and paths stored on the service account is not a risk in and of itself. Also, custom attributes aren't normally used, much less displayed, on computer objects so someone would have to go looking for them. The risk would be if unauthorized users went and accessed those files, but this concern should be addressed as per the bullet above.
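
To put point 2 in concrete terms, a read of the service account's custom attributes typically needs nothing more than the RSAT ActiveDirectory module and default authenticated-user access. The computer account name and attribute number below are purely illustrative; substitute whichever account and attribute your environment publishes the file entries to.

```powershell
# Requires the RSAT ActiveDirectory module (usually auto-loaded on admin hosts).
Import-Module ActiveDirectory

# Illustrative names: swap in your actual service (computer) account and the
# custom/extension attribute your file entries are published to.
$ServiceAccount = Get-ADComputer -Identity "SVC-ScriptVersions" -Properties extensionAttribute1

# The read itself needs no special rights unless restrictive ACLs were applied.
$ServiceAccount.extensionAttribute1
```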

Closing Thoughts

I'm sure as others discover and test this code they will have feedback and other ideas, so if you start to use this code consider checking back occasionally to see what changes to the code are made as I will update this blog post over time.

Meanwhile I encourage you to check out my next blog where I cover using this process to ensure the Exchange deployment scripts are using the latest published Exchange CU. If you have no interest in Exchange file distribution, or any large application package distribution, then you can stop with this post. 😊

Please feel free to leave me comments here if you wish, I promise I will try to respond to each in kind.

Thanks!

Dan Sheehan

Senior Premier Field Engineer

Comments (3)

  1. Bill Neumann says:

    Wouldn’t Visual Studio Team Services (VSTS) be equally effective for doing version control over PowerShell scripts?

    1. You are thinking of a script/code repository revision control system, where code changes are checked in and different revisions of a file are kept for archival/historical purposes. VSTS is a great solution for that, and in fact I use it for that very purpose.

      If you read the entire blog you will see this solution is for scripts as they are scattered out in the wild, and they need to be updated as they are being executed, which a solution like VSTS wasn’t designed to address. Plus this solution doesn’t require anything fancier other than a file share and a service account.

  2. Jason wheeler says:

    Very nice, thank you for your time and effort
