IMPORTANT ANNOUNCEMENT FOR OUR READERS!
AskPFEPlat is in the process of a transformation to the new Core Infrastructure and Security TechCommunity, and will be moving by the end of March 2019 to our new home at https://aka.ms/CISTechComm (hosted at https://techcommunity.microsoft.com). Please bear with us while we are still under construction!
We will continue bringing you the same great content, from the same great contributors, on our new platform. Until then, you can access our new content on either https://aka.ms/askpfeplat as you do today, or at our new site https://aka.ms/CISTechComm. Please feel free to update your bookmarks accordingly!
Why are we doing this? Simple really; we are looking to expand our team internally in order to provide you even more great content, as well as take on a more proactive role in the future with our readers (more to come on that later)! Since our team encompasses many more roles than Premier Field Engineers these days, we felt it was also time the name reflected that expansion.
If you have never visited the TechCommunity site, it can be found at https://techcommunity.microsoft.com. On the TechCommunity site, you will find numerous technical communities across many topics, which include discussion areas, along with blog content.
NOTE: In addition to the AskPFEPlat-to-Core Infrastructure and Security transformation, Premier Field Engineers from all technology areas will be working together to expand the TechCommunity site even further, joining together in the technology agnostic Premier Field Engineering TechCommunity (along with Core Infrastructure and Security), which can be found at https://aka.ms/PFETechComm!
As always, thank you for continuing to read the Core Infrastructure and Security (AskPFEPlat) blog, and we look forward to providing you more great content well into the future!
Hello everyone! I’m Preston K. Parsard, Platforms PFE, and I’d like to talk about running PowerShell and Desired State Configuration (DSC) on Linux in Microsoft Azure. Just in case you haven’t heard yet, PowerShell has been open sourced and available for Linux since 18 August 2016. More details and other resources for the announcement can be found here. If you are new to DSC in general, you may want to first hop over to one of the posts my colleague Dan Cuomo wrote for a quick intro and a plethora of online training resources here, then pop back to this post and continue reading.
What challenges does this address?
Different operating system platforms require different tools, standards, and procedures, and in many cases, multiple teams of IT Pros. This requires more resources and can create silos within the IT department. DevOps aims to enhance business value by improving operations for people, processes, and technology, encouraging cross-platform collaboration and integration to simplify IT operations.
What are the benefits?
For teams with Windows administrators in a predominantly Windows environment who manage a few Linux systems, or Linux administrators in largely Linux shops who must also administer some Windows machines, both can now realize the benefit of leveraging a single set of tools and standards such as PowerShell and DSC.
1. Efficiency: With this script, you can quickly create a basic test environment infrastructure in as little as about 30 minutes and begin exploring the features of PowerShell and DSC on Linux.
2. Cost: There is no capital investment required to set up storage, networking, compute, a home lab, or a physical facility, since this is all done in Azure. An Azure subscription is required though, which you can try free for 30 days here.
3. Administration: The administrative requirements to create this environment are much lower than managing physical infrastructure directly, and the complexity of the setup is handled automatically in the code.
4. Training & Skills Development: PowerShell and DSC on Linux is still a fairly new concept, but now you can review this post and use the script as a detailed working reference, in the spirit of the “Infrastructure as Code” idea, to develop these cross-platform skills. You may even decide later to contribute back to the PowerShell Core community to make things better and easier for all of us in the spirit of collaboration and continuous improvement.
As an IT Pro, knowledge of a widely accepted technology implemented across multiple platforms means you can also offer and demonstrate more valuable skills for your current or future employers, plus it’s just more fun anyway!
What is the script overview?
We’ll cover creating a small lab to support running PowerShell and DSC on Linux, and since this will be created in Microsoft Azure, there are no hardware requirements except the machine you will be running the script from. When you’re finished, you can easily remove all the resources to reduce cost and rebuild again whenever you need to.
The script will deploy all the infrastructure components required, including network, storage, and the VMs themselves. One of the cool features of this script is that you don’t have to worry about downloading the additional files from GitHub, such as the DSC configuration script or the shell script that installs PowerShell Core on each Linux distro. The main script includes functions that will auto-magically download, stage, and copy these files for you, in addition to any DSC resources from www.powershellgallery.com! You have the option of creating between one and three Windows VMs, and exactly three Linux VMs. By default, one Windows Server 2016 Datacenter VM will be created, but you can use this command to specify a parameter value for more Windows VMs when you run the script.
.\New-PowerShellOnLinuxLab.ps1 -WindowsInstanceCount <#>
Where <#> is a number from 1-3
Example: .\New-PowerShellOnLinuxLab.ps1 -WindowsInstanceCount 2
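The 1-3 range can be enforced with a standard parameter validation attribute. This is just an illustrative sketch; see the actual New-PowerShellOnLinuxLab.ps1 for the real parameter definition.

```powershell
# Illustrative sketch only -- not the actual parameter block from the script.
param(
    # Restrict the number of Windows VMs to the supported range of 1-3.
    [ValidateRange(1,3)]
    [int]$WindowsInstanceCount = 1
)
```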
The Windows VMs are really placeholders at this point, since the script will eventually be enhanced to integrate the Windows servers into PowerShell and DSC operations with the Linux VMs. For now, the focus is on configuring the Linux distros for Azure Automation DSC and running PowerShell, though you can always be creative and continue experimenting with Windows/Linux PowerShell and DSC integration yourself in the meantime, then give us your feedback so we can all make the experience even better!
The 3 Linux VMs deployed will have these distros:
1) UbuntuServer LTS 16.04
2) CentOS 7.3, and
3) openSUSE-Leap 42.2.
On each of these distros, PowerShell Core will be installed by a single shell script launched through the Linux custom script extension. For the DSC prerequisites and installation, the DSC for Linux VM extension will be configured for each machine as it is provisioned by the main PowerShell script, New-PowerShellOnLinuxLab.ps1. An Automation account will also be instantiated by the main script, and a DSC configuration file will be imported and compiled in this Azure Automation account. Since the configuration will use the pull method in Azure, each Linux system will also require its local configuration manager properties to be modified in order to pull the compiled configuration from Azure Automation. Ok, so now the stage is set and the configuration is staged (ha!). Each Linux VM will apply the configuration compiled in the Azure Automation account to reach its specified desired state. Great, so what is the desired state in this case?
The desired state is simply to create a new text file at the path /tmp/dir/file, with the content of the file being … hello world. No prizes for guessing that one, right? While this may seem simplistic, remember that you can continue to experiment with more sophisticated configurations and resources; this is just to get started and introduce you to the basic concepts for now. Who knows, we may even update the script for more complex scenarios in the future. We can test this simple configuration after the deployment using the following commands:
- (Ubuntu) linuxuser@AZREAUS2LNX01:~$ sudo cat /tmp/dir/file
- (CentOS) linuxuser@AZREAUS2LNX02:~$ sudo cat /tmp/dir/file
- (openSUSE) linuxuser@AZREAUS2LNX03:~$ sudo cat /tmp/dir/file
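For reference, a configuration that produces this desired state can be sketched with the nxFile resource from the nx module. The resource properties below follow the documented nxFile schema, but this is only a sketch; the actual AddLinuxFileConfig.ps1 in the repository is the authoritative version.

```powershell
Configuration AddLinuxFileConfig
{
    # The nx module supplies the Linux DSC resources (nxFile, nxService, etc.).
    Import-DscResource -ModuleName nx

    Node "localhost"
    {
        # Ensure /tmp/dir/file exists with the expected contents.
        nxFile HelloWorldFile
        {
            DestinationPath = "/tmp/dir/file"
            Contents        = "hello world"
            Ensure          = "Present"
            Type            = "File"
        }
    }
}
```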
We can also test running PowerShell in each of these builds. For convenience, I’ve included a few sample commands within comments at the end of the main script, labeled as:
#region POST-DEPLOYMENT TESTING
What are the requirements?
Before diving in any further, let’s cover what we’ll need first.
- A Microsoft Azure subscription.
- Subscription owner or contributor permissions.
- Resource capacity to deploy 4 to 6 VMs, each of size Standard_D1_v2.
- Windows PowerShell version 5.0 on the on-premises machine from which you run the PowerShell script.
- An SSH Key-Pair for secure Linux authentication.
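If you don’t already have a key pair, one way to create it is shown below. The file name and comment here are just example values; use whatever naming you prefer.

```shell
# Generate a 2048-bit RSA key pair with no passphrase (example values).
ssh-keygen -t rsa -b 2048 -N "" -C "linuxuser@azure-lab" -f ./azure_lab_key
# The script will later prompt for the path to the public key (azure_lab_key.pub).
ls -l azure_lab_key azure_lab_key.pub
```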
During the script execution, you will be prompted for the following information:
- Your Microsoft Azure subscription credentials.
- The name of your subscription.
- A resource group name, using the convention rg##, where ## is a two-digit number.
- The Azure region for the deployment.
- A Windows user password. The username will be ent.g001.s001 (ent stands for enterprise administrator, for when we eventually promote the Windows VM to a domain controller; the g in g001 is for the given name, and the s in s001 is for the surname property. How did you guess? 😉)
- The path for the SSH public key file that will be used to securely authenticate you to an SSH session for each Linux VM, so remember to create your key-pairs before running the script.
- A review of the deployment summary will be presented at the next prompt, at which time you may elect to proceed with the deployment if you are satisfied with the values. Enter yes to continue or no to quit.
- Finally, at the end of the deployment, you will be asked whether you would like to open the logs for review. Select yes to open the logs now, or no to quit.
What does the environment and process look like?
To make it easier to visualize, here is the diagram of the solution. Just follow the index numbers and match them up with the sequence outlined on the right to get a general idea of the process.
Figure 1: New-AzureRmLinuxLab.ps1 Script Summary Sequence of Activities
What are the details for each step?
In the following list, more details of each step outlined in the diagram are covered.
Step 1: The script downloads, installs, and imports the required modules from www.powershellgallery.com and saves them to the default PowerShell module path. These modules include a custom logging module, the required DSC module for Linux, and the Azure modules. The nx.zip Linux module is also copied to a local staging folder and will be uploaded to a storage account resource later in the script.
Step 2: A custom logging file is created to record script activity using the logging module that was just downloaded in step 1.
Step 3: For more extensive logging, a transcript to track all PowerShell transactions is also started using the Start-Transcript cmdlet. This information will be saved to a log file as the script continues to execute.
Step 4: The user is now prompted to authenticate to an Azure subscription with their registered credentials.
Step 5: The script will then prompt the user for the name of the subscription that will be used for this deployment.
Step 6: The script then asks for a new resource group name. The format must use rg, followed by a two-digit number, such as rg01.
Step 7: In this step, the script requests the Azure region where this deployment will be built. As a convenience, the region name will also be reused as the name of the virtual network later in the script without additional user intervention. After the region name is supplied, the script creates the resource group, since the region is a required parameter for the resource group.
Step 8: An Azure Automation account will then be created by the script, using the prefix AzureAccount followed by a randomly generated 4-digit number, such as AzureAccount3425.
Step 9: Since there will be between one and three Windows servers, the script now proactively creates an availability set named AvSetWNDS for these machines, to provide resiliency during faults and planned updates. Managed storage will be used for the operating system and data drives of each Windows machine, so this parameter is also specified by the script when creating the availability set.
Step 10: For the three Linux distros, an availability set is also created to cover planned and unplanned downtime and to support managed disks. The name of this availability set is AvSetLNUX.
Step 11: The Windows administrator username is pre-defined in the script as ent.g001.s001, but a password is also required. In this step, the user is prompted for a Windows password that will, together with the username, make up the credentials for building the Windows machines.
Step 12: Next, the script creates the username linuxuser for the initial user account on each distro.
Step 13: For the Linux machine credentials, an SSH key pair will be used. This means an SSH public key must already exist as a prerequisite before the script is executed. The user is now prompted to specify the path of the SSH public key file that the script will use to configure the credential parameter for each Linux distro.
Step 14: The script will now configure the subnet for the Windows servers, named WS, with an address space of 10.10.10.0/28.
Step 15: The next subnet created by the script is LS, for the Linux servers, with an address space of 10.10.10.16/28 to align next to the WS subnet.
Step 16: Now that both required subnets are defined, the virtual network (VNET) is created, using the WS and LS subnets as parameters to complete the configuration. The name of the VNET will be inherited from the region name supplied earlier by the user, e.g. EASTUS2.
Step 17: For security, it is a best practice to apply network-level firewall rules in Azure using Network Security Groups (NSGs). Here, an NSG named NSG-WS is created, rules are defined to allow RDP to each Windows VM, and the NSG is then associated with the WS subnet.
Step 18: Another firewall is configured in the same manner for the LS subnet, named NSG-LS. The rules are slightly different, however: they allow SSH and WSMAN (TCP ports 22 and 5986) inbound for all the Linux servers in this subnet.
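As a sketch of the second NSG and its inbound rules, using cmdlets from the AzureRM.Network module (the resource group and location values here are placeholders, not taken from the script):

```powershell
# Allow SSH (22) and WSMAN (5986) inbound to the Linux subnet (placeholder values).
$ssh = New-AzureRmNetworkSecurityRuleConfig -Name Allow-SSH -Protocol Tcp `
    -Direction Inbound -Priority 100 -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 22 -Access Allow
$wsman = New-AzureRmNetworkSecurityRuleConfig -Name Allow-WSMAN -Protocol Tcp `
    -Direction Inbound -Priority 110 -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 5986 -Access Allow
New-AzureRmNetworkSecurityGroup -Name NSG-LS -ResourceGroupName rg01 `
    -Location eastus2 -SecurityRules $ssh, $wsman
```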
Step 19: At this point, the script will generate a summary of the properties that have been set for the deployment. Some properties, such as the resource group and the region, were supplied by the user, but others, like the VNET and VM names, were constructed by the script. The user reviews the summary and types yes or no to continue or stop the deployment.
Step 20: The next resource the script creates is the storage account. It uses a random string generator to create the storage account name, because the name must be globally unique in DNS and consist of 3 to 24 characters that are lowercase letters and numbers only. The script then queries DNS to ensure that the randomly generated name is globally unique. If it isn’t, a loop generates a new random string until the name is unresolvable in DNS, meaning that it is available for creating the storage account resource.
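The naming logic can be sketched in shell form as follows. The real script implements this in PowerShell and loops while it checks DNS availability; that check is omitted here, so this only shows the character-set and length constraints.

```shell
# Build a random 10-character name from lowercase letters and digits
# (storage account names allow 3-24 such characters).
chars="abcdefghijklmnopqrstuvwxyz0123456789"
name=""
for _ in $(seq 1 10); do
  name="${name}${chars:$((RANDOM % 36)):1}"
done
echo "Candidate storage account name: ${name}"
```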
Step 21: The script will now check for the existence of a custom directory within the user profile at <$env:Home>/New-AzureRmLinuxLab/Scripts and, if it isn’t available, create it. The <$env:Home> environment variable resolves to the currently logged on user’s home profile, as in c:\users\<userid>. This folder, named Scripts, will be the download target for scripts that are hosted in the New-AzureRmLinuxLab GitHub repository.
Step 22: The script then uses a custom function to download the files AddLinuxFileConfig.ps1 and Install-OmiCimServerOnLinuxAndConfigure.sh from the public New-AzureRmLinuxLab GitHub repository to the <$env:Home>/New-AzureRmLinuxLab/Scripts directory created in step 21.
Step 23: The nx.zip Linux module is retrieved by the script from the public online repository at www.powershellgallery.com, then installed and imported as a module on the deployment machine. It is also copied to a staging area in the local file system at <$env:Home>/New-AzureRmLinuxLab/Modules. If the Modules subfolder didn’t exist before, it is created now. From here, this module will be uploaded to another intermediate area before being imported into Azure Automation modules.
Step 24: Next, the script generates the onboarding meta-configurations from the Azure Automation account for each of the Linux VMs. The resulting *.mof files are first placed in a new DscMetaConfigs directory at <$env:Home>/New-AzureRmLinuxLab for subsequent upload to the powershell-dsc staging container in the new storage account. These metaconfig files, AZREAUS2LNX01.meta.mof, AZREAUS2LNX02.meta.mof and AZREAUS2LNX03.meta.mof as shown in the diagram, specify the URL, registration key, and refresh frequency used to pull and apply the Azure Automation DSC node configuration later in the deployment process.
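The meta-configuration pattern follows the Azure Automation DSC onboarding documentation. The sketch below is illustrative; the registration URL and key come from your own Automation account at run time.

```powershell
# Hedged sketch of an onboarding meta-configuration (placeholder values).
[DscLocalConfigurationManager()]
Configuration DscMetaConfigs
{
    param (
        [string]$RegistrationUrl,
        [string]$RegistrationKey,
        [string[]]$ComputerName
    )

    Node $ComputerName
    {
        Settings
        {
            # Pull mode: nodes poll Azure Automation for their configuration.
            RefreshMode          = 'Pull'
            RefreshFrequencyMins = 30
            ConfigurationMode    = 'ApplyAndMonitor'
        }
        ConfigurationRepositoryWeb AzureAutomationDSC
        {
            ServerUrl       = $RegistrationUrl
            RegistrationKey = $RegistrationKey
        }
    }
}
```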
Step 25: To host the custom Linux extension script that will install and configure PowerShell, the script now creates a staging container named staging in the storage account.
Step 26: For the nx.zip Linux DSC module, the script creates a container, also in the same storage account, named powershell-dsc.
Step 27: The Install-OmiCimServerOnLinuxAndConfigure.sh script is now uploaded to the staging container that was created in step 25. From here, this script will later be consumed and executed by each Linux VM as it is being built.
Step 28: The nx.zip Linux DSC resource that was previously staged locally in step 23 is now uploaded to the powershell-dsc container that was created in step 26.
Step 29: The meta.mof files created in step 24 are now uploaded to the powershell-dsc container as well.
Step 30: The nx.zip Linux DSC resource is then imported into Azure Automation so that when the Linux VMs pull the DSC node configuration, each local configuration manager will refer to the resources contained in this nx module to apply the configuration.
Step 31: The AddLinuxFileConfig.ps1 DSC configuration script is now imported into the Azure Automation DSC configuration.
Step 32: After the *.ps1 script is imported, it must then be compiled into *.mof format to be consumable by target nodes, in this case the Linux servers.
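Steps 31 and 32 map to two AzureRM.Automation cmdlets. A hedged sketch, with placeholder resource group, account, and path values:

```powershell
# Import the configuration script into Azure Automation (placeholder names).
Import-AzureRmAutomationDscConfiguration -ResourceGroupName rg01 `
    -AutomationAccountName AzureAccount3425 `
    -SourcePath "$env:HOME/New-AzureRmLinuxLab/Scripts/AddLinuxFileConfig.ps1" `
    -Published -Force

# Compile it so target nodes can pull the resulting *.mof node configuration.
Start-AzureRmAutomationDscCompilationJob -ResourceGroupName rg01 `
    -AutomationAccountName AzureAccount3425 `
    -ConfigurationName AddLinuxFileConfig
```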
Step 33: The Windows VMs are now built based on the value of the $WindowsInstanceCount variable, so if this value is 3, the loop will create three new Windows Server 2016 Datacenter VMs of size Standard_D1_v2. Each will have public and private IP addresses and two drives, a system drive and a data drive, both of which use managed disks.
Step 34: The Linux VMs are built by iterating three times through a loop in the script. Like the Windows VMs, these also use the Standard_D1_v2 size, have public and private IP addresses, and host two managed disks: one for the operating system and the other for data. As each VM is built, a custom script extension is applied to execute the Install-OmiCimServerOnLinuxAndConfigure.sh script. Since the installation of PowerShell and the configuration of iptables and newer-style firewalls varies by distro, this single *.sh (shell) script first checks the distro of the machine and, based on that distro, uses an if/fi condition with the appropriate syntax to perform the installation and firewall configuration. Also during creation, a Linux DSC extension is added to each Linux VM. This is where the *.meta.mof file for each machine is applied, so that after the system is built, it is configured with the necessary properties for the local configuration manager to pull the DSC node configuration from Azure Automation during first startup.
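The distro branching can be sketched like this. The function name and output strings are illustrative only; the real Install-OmiCimServerOnLinuxAndConfigure.sh uses if/fi blocks and the actual package and firewall commands for each distro.

```shell
# Pick the right package manager based on the distro ID (illustrative only).
install_hint() {
  case "$1" in
    ubuntu)    echo "apt-get" ;;
    centos)    echo "yum" ;;
    opensuse*) echo "zypper" ;;
    *)         echo "unknown" ;;
  esac
}

install_hint ubuntu    # -> apt-get
install_hint centos    # -> yum
install_hint opensuse  # -> zypper
```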
Step 35: The script will now prompt the user whether they want to open the log files. If yes is specified, then two files will open, but they will appear as only one, because the transcript file will be superimposed on the custom log file. To see both files, just drag the transcript file away from the log file, and the script completes. If the user responds by typing no, the script also finishes, but no logs are displayed. To see the logs later, just navigate to <$env:Home>/New-AzureRmLinuxLab. The log and transcript files will be shown as displayed below. Also note that each file is created with a time-stamp.
How do I test the deployment? (Step 36)
To test the results, follow the steps outlined here for each Linux VM. For brevity, only the UbuntuServer LTS 16.04 distro will be shown in this example, but you should test all three distros to ensure that the results are consistent.
At the shell, run the command linuxuser@AZREAUS2LNX01:~$ sudo cat /tmp/dir/file
Type powershell to start the PowerShell core engine.
Start typing PowerShell cmdlets and observe the results.
Where’s the Script?
You can review or download the script from GitHub here. I’ve included comments where I thought it was necessary to make the script readable. Remember that the only file you need to download and run is this one:
Figure 2: Script in GitHub
What did we just cover?
So we’ve outlined some key benefits of cross-platform tools like PowerShell and DSC for heterogeneous environments, provided a quick overview of the script and its requirements, and included a diagram, some details, and the link to the script itself in GitHub. Feel free to submit any issues in GitHub for the script, or contribute by forking the repository and making a pull request.
That’s it friends. Thanks for reading and hanging in there with me! Happy automating!
Where can I get more information?
| # | Title | Source |
|---|-------|--------|
| 1 | DSC for the PFE: A Training Montage | TechNet Blog: Ask PFE Platforms |
| 2 | PowerShell DSC for Linux is now Available | MSDN Blog |
| 3 | Install .NET Core and PowerShell on Linux Using DSC | TechNet: Hey Scripting Guy! Blog |
| 4 | Get Started with Desired State Configuration for Linux | Microsoft Docs |
| 5 | Using DSC on Microsoft Azure | Azure Documentation |
| 6 | Passing credentials to the Azure DSC extension handler | Azure Documentation |
| 7 | PowerShell-DSC-for-Linux | Microsoft GitHub Repo |
| 8 | Azure Automation DSC Overview | Azure Documentation |
| 9 | PowerShell DSC for Linux Step by Step | TechNet Building Clouds Team |
| 10 | Monad Manifesto | Jeffrey P. Snover |
| 11 | PowerShell DSC for Linux Installation | PowerShell Magazine |
| 12 | Posh-SSH: Open Source SSH PowerShell Module | PowerShell Magazine |
| 13 | Running DSC Cmdlets through CIM Session on Linux | Stack Overflow |