DSC for the PFE: The Good, the Bad, the Ugly, and the Solution with Configuration Data

Hi Folks,

Platforms PFE Dan Cuomo here to chat about Desired State Configuration, AKA “the hotness.” DSC has a few value propositions and differs from general PowerShell scripting in that it’s declarative, idempotent, and remediable in nature. If you’re not too familiar, check out last week’s post (A Training Montage), which describes DSC and includes some recommendations for available training at the bottom of that article. If you have a basic understanding of DSC, what a MOF file is, and configuration data – please continue reading!

We’ll begin with a brief overview on why we use configuration data, discuss some challenges you might encounter, and some tricks learned along the way to simplify the management of larger configurations.

Of course, if you have some questions, comments, or ideas on clever configurations, we’d love to hear them so drop us a comment at the bottom of the article!

The Good

Using configuration data in DSC is one of the ways in which you can parameterize your configuration. It helps you separate the custom data (what changes) from the configuration logic which doesn’t change.

Parameterized configurations = Automation for automation = PFE Paradise :-)

The documentation on MSDN adds,

“…consider the structural configuration (for example, a configuration might require that IIS be installed) as separate from the environmental configuration (that is, whether the situation is a test environment vs. a production one—the node names would be different, but the configuration applied to them would be the same).”

Consider the following DSC configuration:
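The screenshot that originally appeared here is gone, but based on the notes that follow, it likely showed something along these lines. This is a sketch: the role names (WebServer, FileServer) and the specific features are illustrative, not the original values.

```powershell
$ConfigData = @{
    AllNodes = @(
        @{ NodeName = 'Node1' ; Role = 'WebServer'  }
        @{ NodeName = 'Node2' ; Role = 'FileServer' }
        @{ NodeName = 'Node3' ; Role = 'WebServer'  }
        @{ NodeName = 'Node4' ; Role = 'FileServer' }
    )
}

Configuration FeatureBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # Target only the nodes whose Role matches
    Node $AllNodes.Where({ $_.Role -eq 'WebServer' }).NodeName {
        WindowsFeature WebServer {
            Name   = 'Web-Server'      # Name values come from Get-WindowsFeature
            Ensure = 'Present'
        }
        WindowsFeature TelnetClient {
            Name   = 'Telnet-Client'
            Ensure = 'Absent'
        }
    }

    Node $AllNodes.Where({ $_.Role -eq 'FileServer' }).NodeName {
        WindowsFeature FileServer {
            Name   = 'FS-FileServer'
            Ensure = 'Present'
        }
        WindowsFeature TelnetClient {
            Name   = 'Telnet-Client'
            Ensure = 'Absent'
        }
    }
}

# Compile one MOF per node in $ConfigData
FeatureBaseline -ConfigurationData $ConfigData
```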


Note 1: Every Name parameter value (e.g. Telnet-Client) comes from the Name property returned by the Get-WindowsFeature cmdlet

Note 2: In the above example, I use a variable named $ConfigData to store the configuration data. However, you could also separate this out into its own .psd1 file. You can see an example here.

The configuration above generates one MOF per node. Note that Node1 and Node3 are the same size (Length), and Node2 and Node4 are the same size, as they share the same Role in the configuration data.


The “Name=” in the generated MOFs are different for Node1 (left) and Node2 (right).



Without the configuration data ($ConfigData), editing becomes a bit of a challenge. So long as you never change anything, this isn’t too bad. However, if you later decided to add or subtract Nodes from the configuration or change the desired state, you could have a lot of editing ahead of you. Scale only compounds the awkward editing problem and increases your chances of errors (we’ll see an example coming up).

Instead, utilizing configuration data to separate the nodes from the configuration helps build reusable code and parameterize our configurations. Editing becomes a simpler task as we only have to go to one place ($ConfigData), lowering our chance of errors if we need to modify our configuration.


The Bad

One of the first major tasks I was asked to tackle with DSC was to implement a Windows role and feature baseline for some Server 2012 R2 systems. The nodes in question were in a Hyper-V failover cluster. You may already know that each node in a cluster should have the same roles and features installed; however, configuration drift is regularly a problem between clustered systems such as these.

For our purposes, imagine a private cloud environment like this:

– Two Fabric Management Clusters

  o Hyper-V clusters that run the virtualization environment (e.g. VMM, SQL, Azure Pack, etc.)

  o Each cluster contained eight nodes (2 * 8 = 16)

– Ten Tenant Clusters

  o Hyper-V clusters that ran the tenant virtual machines

  o Each cluster contained 32 nodes (32 * 10 = 320)

If I modeled the first example to fit this scenario, I would need to add one entry for each of the 336 Hyper-V compute nodes. Harrumph!!!

$ConfigData = @{
    AllNodes = @(
        @{ NodeName = 'Node101' ; Role = 'Cluster1' }
        @{ NodeName = 'Node102' ; Role = 'Cluster1' }
        @{ NodeName = 'Node103' ; Role = 'Cluster1' }
        @{ NodeName = 'Node104' ; Role = 'Cluster1' }
        @{ NodeName = 'Node201' ; Role = 'Cluster2' }
        @{ NodeName = 'Node202' ; Role = 'Cluster2' }
        @{ NodeName = 'Node301' ; Role = 'Cluster3' }
        @{ NodeName = 'Node302' ; Role = 'Cluster3' }
        @{ NodeName = 'Node401' ; Role = 'Cluster4' }
        @{ NodeName = 'Node402' ; Role = 'Cluster4' }
    )
}

Not shown above: 326 additional lines of configuration data!

While configuration data simplifies our configuration, the scale of the configuration data itself creates another problem. How would this be affected by adding or removing nodes from a cluster? What about hostname changes or migrations to new nodes? I think we can all agree that just editing the configuration data in this scenario could become a little unwieldy.

Even if you never shy away from a little work, you might still find yourself copying and pasting enough lines of configuration data, then spending a good amount of time CTRL+H’ing (find/replace) through all the cluster names and editing each NodeName (hopefully they’re at least sequential).

But wait! There’s more!


The Ugly

All this effort was to ensure the Install State for every role and feature on each cluster node.



Wow! Two hundred sixty-seven roles and features? There are a couple of ugly solutions to this problem. Since you associated each of the 336 nodes with a cluster in your configuration data, you could target each cluster node using:
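The lost snippet here would filter $AllNodes with the Where method, something like this sketch (Cluster1 shown):

```powershell
# Select only the nodes whose Role matches a given cluster
Node $AllNodes.Where({ $_.Role -eq 'Cluster1' }).NodeName {
    # ...one WindowsFeature block per role/feature would follow here...
}
```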

However, there are 12 clusters in the environment and you probably know what’s next. Now you’ll need to enter in the features and the parameters for each. It would look something like this, only far more unruly:
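A sketch of what that unruly configuration would look like, with illustrative feature names and only two of the 267 resource blocks shown:

```powershell
Node $AllNodes.Where({ $_.Role -eq 'Cluster1' }).NodeName {
    WindowsFeature FailoverClustering {
        Name   = 'Failover-Clustering'
        Ensure = 'Present'
    }
    WindowsFeature TelnetClient {
        Name   = 'Telnet-Client'
        Ensure = 'Absent'
    }
    # ...repeated for up to 267 roles and features...
}
# ...and then the whole Node block repeated for Cluster2 through Cluster12...
```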

That’s a lot of editing the “Name=‘The FeatureName’” and “Ensure=‘Absent or Present’” parameters.


Not only does this become an extremely long and unmanageable configuration, it’s error prone and not what DSC promised us. Let’s not do that.

Ugly Option #2

A slightly more clever approach would be to include some additional parameters in your configuration data to indicate the features to install vs. uninstall. This approach allows you to parameterize the installation and removal of features, and will automatically build out your MOF configuration saving you some repetitive coding. In addition, it is a little less error-prone than what shall henceforth be named UO#1 (Ugly Option #1).

This is one method I attempted at one point; however, I do not recommend it. Below, we create an array of comma-separated features in the respective parameters (IncludedFeatures or ExcludedFeatures). Again, I’ve only shown a couple of features for installation, but there could be up to 267!
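The configuration data for this approach would look roughly like the following sketch (feature names are illustrative):

```powershell
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName         = 'Node101'
            Role             = 'Cluster1'
            # Features this cluster should install...
            IncludedFeatures = 'Failover-Clustering', 'Hyper-V', 'Multipath-IO'
            # ...and features it should remove
            ExcludedFeatures = 'Telnet-Client', 'XPS-Viewer'
        }
        # ...335 more node entries, each repeating the feature lists...
    )
}
```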


As an aside, do your best to make sure your parameters line up (this goes for configuration data and the structural configuration). When you’re trying to troubleshoot or identify a misconfiguration, especially a long configuration, it’s much easier to pick out the culprit if everything lines up.

However, you still need to hard-code each Role in multiple places, which means that every time you add or remove a cluster from the configuration data, you also need to modify the structural configuration below to match. There’s a lot going on in that configuration.
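The structural configuration being described likely resembled this sketch, pivoting off the per-node feature lists (role names illustrative):

```powershell
Configuration FeatureBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # Each Role is hard-coded here and must stay in sync with the configuration data
    foreach ($Role in 'Cluster1','Cluster2','Cluster3','Cluster4') {
        Node $AllNodes.Where({ $_.Role -eq $Role }).NodeName {

            # Install everything this node's IncludedFeatures lists...
            foreach ($Feature in $Node.IncludedFeatures) {
                WindowsFeature "Add_$Feature" {
                    Name   = $Feature
                    Ensure = 'Present'
                }
            }

            # ...and remove everything in ExcludedFeatures
            foreach ($Feature in $Node.ExcludedFeatures) {
                WindowsFeature "Remove_$Feature" {
                    Name   = $Feature
                    Ensure = 'Absent'
                }
            }
        }
    }
}
```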



This is still a bit clunky. It’s very easy to accidentally mistype one of the Role names (e.g. Cluster1, Cluster2, etc.) and proper validation requires you to review the generated MOF files which could be a considerable amount of work depending on how different each cluster is from one another.

Also, in my opinion, the editing workflow is “screen-punch” provoking (a lot like editing this article as I scroll up and down checking things ;-). Configuration Data sits in one location in your script (or a separate file), while the structural configurations are sometimes many lines below it. Scrolling back and forth while you edit and verify things can be extremely cumbersome.

Mostly, however, it’s the MSDN documentation we looked at earlier ringing in my head:

“…consider the structural configuration (for example, a configuration might require that IIS be installed) as separate from the environmental configuration (that is, whether the situation is a test environment vs. a production one—the node names would be different, but the configuration applied to them would be the same).”

The configuration above has it backwards. It pivots off the included or excluded features in the configuration data to pass in what is being applied, which in turn makes the configuration data quite a bit chunkier. While it would work, it’s probably not my best idea…

Alright, enough about the ugly options. Let’s talk about some of the options available to simplify some of this mess.


The Solution

Early on in my DSC journey I found myself saying quite often, “can I do that in DSC?” I wasn’t questioning whether DSC had the capability to reach a specific goal, but I had worked with technologies like SMA and PowerShell workflows before and I knew that once inside those Narnia-like keywords (Configuration), things could get a lil’ rowdy. The answer to this question is, “It’s just PowerShell.” Let me explain…

At the end of the day, you create and edit your configuration in PowerShell. Sure, the Configuration keyword is different – the $AllNodes and $Node variables never existed before, and a configuration doesn’t seem to do anything but create a MOF file, whatever that is 😛

But everything I’m about to show you is all possible because at the end of the day (say it with me now):

“It’s just PowerShell”

Configuration Data Targeting

The first challenge with this configuration is the sheer quantity of nodes that need to be configured. The specific scenario outlined earlier would require 336 separate configuration data node entries. That’s a lot of copy/paste/edit and makes for frustratingly error-prone configuration data that isn’t easy to modify.

Instead, it’s just PowerShell, and $ConfigData is just a hashtable, which means we can do things that hashtables allow. Instead of statically adding each node, we can dynamically create our list like this:
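A sketch of the dynamic approach (node names illustrative); each cluster gets a single line rather than one line per node:

```powershell
$NodeData = @()

# One line per cluster instead of one line per node
$NodeData += 'Node101','Node102','Node103','Node104' | ForEach-Object { @{ NodeName = $_ ; Role = 'Cluster1' } }
$NodeData += 'Node201','Node202','Node203','Node204' | ForEach-Object { @{ NodeName = $_ ; Role = 'Cluster2' } }

$ConfigData = @{ AllNodes = $NodeData }
```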



This is a little better. We no longer have one line per node. Rather, we have one line per cluster, with all the nodes associated with the appropriate role. Below we see the full hashtable in $ConfigData.AllNodes.


By specifying a particular index, I can verify that there is a relationship between the appropriate keys and values. This is indeed a hashtable:
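For example, a quick spot-check of a single entry (values shown are illustrative):

```powershell
# Index into AllNodes and confirm the keys and values line up
$ConfigData.AllNodes[0].NodeName   # e.g. Node101
$ConfigData.AllNodes[0].Role       # e.g. Cluster1
```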


Of course, I still have to enter that initial array of nodes manually and have to worry about the lifecycle of the nodes. As we mentioned before, a node could be moved to a different cluster, change hostnames, etc. Well, it’s just PowerShell after all – and since this is a failover cluster, I have the cmdlet Get-ClusterNode available. I can use that to dynamically generate an up-to-date list of cluster nodes at configuration creation time.
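Using Get-ClusterNode from the FailoverClusters module, the node generation can be made fully dynamic; a sketch (cluster names illustrative):

```powershell
$NodeData = @()

foreach ($Cluster in 'Cluster1','Cluster2','Cluster3','Cluster4') {
    # Ask each cluster for its current membership at MOF-generation time
    foreach ($Node in (Get-ClusterNode -Cluster $Cluster).Name) {
        $NodeData += @{ NodeName = $Node ; Role = $Cluster }
    }
}

$ConfigData = @{ AllNodes = $NodeData }
```

Because the list is built when the configuration is compiled, renamed or migrated nodes are picked up automatically the next time you generate MOFs.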


Now that’s a bit more manageable! What if some clusters should share the same configuration? For example, perhaps our fabric management clusters, Cluster1 and Cluster2, need a GUI while Cluster3, Cluster4, Cluster5, Cluster6, etc. do not.

To save this from getting too clunky again, we can call some functions. In the #Generate Nodes section at the bottom of the script block, notice $NodeData is receiving node names returned by the HasGUI or Headless functions. Function HasGUI and Function Headless iterate through the $Clusters and return the contents of the $NodeList variable from the respective function containing the appropriate target nodes.
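The lost screenshot likely showed something along these lines (the cluster-to-function assignments are illustrative):

```powershell
#region Targeting functions
function HasGUI {
    # Fabric management clusters keep the GUI
    $NodeList = @()
    foreach ($Cluster in 'Cluster1','Cluster2') {
        foreach ($Node in (Get-ClusterNode -Cluster $Cluster).Name) {
            $NodeList += @{ NodeName = $Node ; Role = 'HasGUI' }
        }
    }
    $NodeList
}

function Headless {
    # Tenant clusters run headless
    $NodeList = @()
    foreach ($Cluster in 'Cluster3','Cluster4','Cluster5','Cluster6') {
        foreach ($Node in (Get-ClusterNode -Cluster $Cluster).Name) {
            $NodeList += @{ NodeName = $Node ; Role = 'Headless' }
        }
    }
    $NodeList
}
#endregion

# Generate Nodes
$NodeData  = @()
$NodeData += HasGUI
$NodeData += Headless

$ConfigData = @{ AllNodes = $NodeData }
```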


Note 1: If you use this like me, you’ll rarely modify these functions once they’ve been configured. I like to use #region/#endregion to “bucketize” sections so that they’re easily collapsible.

Note 2: At this point, I would also recommend that you move this out of the configuration file into a separate .psd1 as mentioned earlier so that it can be checked into a version control system. This allows you to look back and “see” what was actually configured where and why – e.g. Cluster2 has a GUI; Cluster3 is headless.

Nice… Once we’ve set up the structural configuration (shown below), we need only make sure that the appropriate clusters are added to the appropriate targeting functions (HasGUI and Headless).
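A minimal sketch of that structural configuration, using the Server-Gui-Shell feature as the illustrative difference between the two roles:

```powershell
Configuration FeatureBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # Nodes returned by the HasGUI targeting function
    Node $AllNodes.Where({ $_.Role -eq 'HasGUI' }).NodeName {
        WindowsFeature GuiShell {
            Name   = 'Server-Gui-Shell'
            Ensure = 'Present'
        }
    }

    # Nodes returned by the Headless targeting function
    Node $AllNodes.Where({ $_.Role -eq 'Headless' }).NodeName {
        WindowsFeature GuiShell {
            Name   = 'Server-Gui-Shell'
            Ensure = 'Absent'
        }
    }
}

FeatureBaseline -ConfigurationData $ConfigData
```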


In the section above, we simplified our configuration data by dynamically generating our nodes with some functions and PowerShell cmdlets. This approach allows us to scale without sacrificing “The Good” that comes with configuration data.

Composite Configurations and Resources

The last big problem we have to resolve is this long and arduous process of adding the resource blocks for each Windows role and feature. As we mentioned earlier, there are 267 onboard!

The Configuration keyword is a special function. Among other similarities to functions, configurations allow you to call other configurations, passing in the needed parameters for the “worker-bee” to do its work. The “called” configuration is known as a composite configuration.

There are two methods you might use to pass in the parameters. The first has a few moving parts and while it works, it’s not the easiest to follow. The second method is a bit easier to read. Of course, the majority of this article is about simplifying your configurations and editing experience, so I’d recommend GO #2 (Good Option #2).

Good Option #1

In the next section we’ll use two configurations. One configuration (Caller) will declare the parameters to be passed in to the composite configuration (FeatureConfig). The composite configuration will utilize the data passed in to create the MOF.

When the composite configuration completes its tasks, it returns to the calling configuration (Caller) which continues on with the rest of its work – in this case, calling the composite configuration (FeatureConfig) again, this time with new parameters. It repeats this process until the calling configuration (Caller) is complete.
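A sketch of the Caller/FeatureConfig pair, with only a few illustrative features per list:

```powershell
# The "worker-bee": builds one WindowsFeature block per feature passed in
Configuration FeatureConfig {
    param (
        [string]   $Role,
        [string]   $State,
        [string[]] $FeatureList
    )
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    foreach ($Feature in $FeatureList) {
        # Resource names cannot contain '-', so swap it for '_'
        WindowsFeature ($Feature -replace '-', '_') {
            Name   = $Feature
            Ensure = $State
        }
    }
}

# The calling configuration invokes FeatureConfig like a resource
Configuration Caller {
    Node $AllNodes.Where({ $_.Role -eq 'Headless' }).NodeName {
        FeatureConfig Headless_IncludedFeatures {
            Role        = 'Headless'
            State       = 'Present'
            FeatureList = 'Failover-Clustering', 'Hyper-V'
        }
        FeatureConfig Headless_ExcludedFeatures {
            Role        = 'Headless'
            State       = 'Absent'
            FeatureList = 'Server-Gui-Shell', 'XPS-Viewer'
        }
    }
}

Caller -ConfigurationData $ConfigData
```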


The process above continues until all of the work listed in Configuration Caller is completed.


For brevity I didn’t show all 267 features in FeatureList however, the concept is still the same. You can add each of the included or excluded features to their appropriate sections.

Let’s see what we can learn from reviewing the generated MOF.

The MOF below is for Node301 in Cluster3 as targeted by our Headless function. As you can see the MOF contains the extrapolated role and feature installation list.



For brevity I showed only one example of included or excluded features, however you would see an entry for each of the named roles and features you listed.

Now this is interesting. A regular ResourceID takes the structure [WindowsFeature]Failover_Clustering; however, this MOF combines [WindowsFeature]Failover_Clustering with the configuration name [FeatureConfig] and the instance name Headless_IncludedFeatures.

The Configuration Caller is identifying FeatureConfig Headless_IncludedFeatures as its resource, similar to WindowsFeature. As such, you might notice that you can add the ‘DependsOn’ parameter to one of them, making each of the features listed in its FeatureList parameter dependent on those in the other. For example, the features listed below in FeatureConfig Headless_ExcludedFeatures are dependent on those features listed in FeatureConfig Headless_IncludedFeatures.
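In a sketch, that dependency would look like this (feature names illustrative):

```powershell
FeatureConfig Headless_ExcludedFeatures {
    Role        = 'Headless'
    State       = 'Absent'
    FeatureList = 'Server-Gui-Shell', 'XPS-Viewer'
    # Wait for the included features before removing anything
    DependsOn   = '[FeatureConfig]Headless_IncludedFeatures'
}
```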



Now before we go any further it’s probably important to point out that one of DSC’s strengths is its simplicity. However, the configuration above has a lot of moving parts and is not overly simple. Give this configuration to someone who does not know much about DSC and you could probably read the code reflecting off their glazed-over eyes. So while this is a valid approach and actually pretty easy to edit (you only have to change the passed in Role, State, and FeatureList parameters), it may be difficult for anyone who didn’t write it to figure out what’s going on.

Good Option #2

It’s at this point that I would recommend separating out the composite configurations (e.g. Configuration FeatureConfig) into their own composite resources. Composite resources sit in their own separate file and allow you to call the configuration as you would any other resource.

They’re also very discoverable – like other resources, you can see composite resources with the Get-DSCResource cmdlet. It’s simple to set up and goes a long way towards solving the problems we’ve worked to alleviate so far. Most importantly, once you get promoted for being the automation machine that you are, the person who has to take all of this over will appreciate how easy it is to read.

If you want more information on using composite resources, check out the great information in the MSDN documentation and the blog article from our friends on the PowerShell team.

And those are only a couple of links available out there on setting up composite resources, so I won’t bore you with any of that. Instead, here’s what the previous configuration looks like when using a composite resource instead of composite configuration.

Note that the Configuration FeatureConfig has been removed from the configuration below. Also, please remember that at scale, with many different settings to configure (this example only covers Windows features), this can be very challenging to manage without a composite resource.
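A sketch of the composite-resource version; the module name and folder layout below are illustrative, following the standard DSCResources/schema.psm1 convention:

```powershell
# Composite resource layout (names are illustrative):
#   MyDscResources\
#     MyDscResources.psd1              (module manifest)
#     DSCResources\
#       FeatureConfig\
#         FeatureConfig.psd1           (RootModule = 'FeatureConfig.schema.psm1')
#         FeatureConfig.schema.psm1    (contains Configuration FeatureConfig { ... })

Configuration Caller {
    # The composite resource is imported like any other resource module
    Import-DscResource -ModuleName MyDscResources

    Node $AllNodes.Where({ $_.Role -eq 'Headless' }).NodeName {
        FeatureConfig Headless_IncludedFeatures {
            Role        = 'Headless'
            State       = 'Present'
            FeatureList = 'Failover-Clustering', 'Hyper-V'
        }
        FeatureConfig Headless_ExcludedFeatures {
            Role        = 'Headless'
            State       = 'Absent'
            FeatureList = 'Server-Gui-Shell', 'XPS-Viewer'
        }
    }
}

Caller -ConfigurationData $ConfigData
```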


For brevity I didn’t show all 267 features in FeatureList however, the concept is still the same. You can add each of the included or excluded features to their appropriate sections.

Summary of Composite Configurations

As we showed in this section, we are able to use composite configurations and pass in the necessary parameters to reach our “desired state.” We can also leverage composite resources to further simplify the readability of our configuration.

One Last Alternative

Overall, the composite configuration approach allowed us to simplify and parameterize our structural configuration. However, just as we wanted to stay dynamic with the node generation, we may want to get a little more “automated” in the generation of the list of features.

If you have a “pristine” node that is your “master” of sorts, you could use that machine to automatically gather the configuration to apply to the others. This would slim down the configuration even more by allowing you to remove the manually entered list of features in the FeatureName parameter and automatically generate it using a PowerShell command. It’s also a lot less work, as we no longer have all those features to track by hand!
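A sketch of this approach, assuming the FeatureConfig composite configuration from the previous section and Node301 as the pristine node:

```powershell
# Query the pristine node once at compile time and split features by InstallState
$Installed = (Get-WindowsFeature -ComputerName 'Node301' |
    Where-Object { $_.InstallState -eq 'Installed' }).Name
$Available = (Get-WindowsFeature -ComputerName 'Node301' |
    Where-Object { $_.InstallState -eq 'Available' }).Name

Configuration Caller {
    Node $AllNodes.Where({ $_.Role -eq 'Headless' }).NodeName {
        # Whatever Node301 has installed becomes Present everywhere...
        FeatureConfig Headless_IncludedFeatures {
            Role        = 'Headless'
            State       = 'Present'
            FeatureList = $Installed
        }
        # ...and whatever it lacks becomes Absent
        FeatureConfig Headless_ExcludedFeatures {
            Role        = 'Headless'
            State       = 'Absent'
            FeatureList = $Available
        }
    }
}

Caller -ConfigurationData $ConfigData
```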


Each node with the appropriate role (in this case, ‘Headless’) will have a generated MOF that mimics Node301: every feature currently installed on Node301 becomes Ensure = ‘Present’, and every feature currently available (not installed) on Node301 becomes Ensure = ‘Absent’.

There is a downside: if you keep your configuration in a version control system (and you absolutely should :-), this configuration alone will not tell you how the nodes were configured. You would also have to save the generated MOFs to tell that, at the time of generation, Node301 had a given feature (e.g. XPS-Viewer) uninstalled.


The Showdown

Configuration data plays a vital part in Desired State Configuration by allowing you to separate the nodes from the configuration itself. The larger a configuration becomes, the more unrealistic it is to manage. There are a number of options to simplify configurations, and it is tempting to use configuration data for this purpose.

If, however, you introduce additional complexities by using configuration data, you may be better served by using the inherent PowerShell capabilities that we already know and love. It’s just PowerShell after all!

What I’ve shown in this article is sure to be just one of many ways you might find to “skin the cat.” I’d love to hear what kind of creative configurations you’ve made or if you’ve used any clever approaches to make your configurations easier to manage! Please feel free to toss a comment in with your questions or experience, or send us an email using the links at the top of the page.

And before we go, a very special “Thank you” to Mark Gray of the PowerShell product team for his help with this article!


Dan “The Waco Kid” Cuomo