Microsoft Windows has long supported standards-based management. We were one of the founding members of the Distributed Management Task Force (DMTF) and shipped the first, and richest, Common Information Model (CIM) Object Manager (CIMOM) we all know as Windows Management Instrumentation (WMI). While WMI has served our customers and partners well, the true promise of standards-based management has not yet been realized. By and large, it is still a world where you have vendor-specific tools – Windows managing Windows, Linux managing Linux, and network & storage vendors managing their own stuff. Customers still have islands of management. There are examples of products which bridge these worlds but often they require bogging down the managed resource with extra vendor-specific agents. Lack of standards-based management is a major pain point for customers.
We spent a lot of time talking to partners and customers to understand what they needed to succeed with Windows Server “8”. We paid particular attention to Cloud OS scenarios, and from there it was clear that we needed a major investment in standards-based management. The shift to a Cloud OS focus significantly increased the scope of the management problem. Not only did we shift our focus from managing a server to managing lots of servers, but we also needed to manage all the heterogeneous devices necessary to bring those servers together into a comprehensive and coherent computing platform. Today, cloud computing works by picking and qualifying a very small set of components and having a large staff of developers write component-specific management tools. Generalized cloud computing requires standards-based management. This is why we made a major investment in standards-based management in Windows Server “8”.
The heart of the management problem is that it requires a distributed group of people, often with conflicting interests, to make a common set of decisions. Our approach to this is simple: create a value-chain that makes it easy and rational to do the right thing. Development organizations look at how much it costs to develop something and what benefits it brings them. If the ratios work out, then they do the work, otherwise they don’t. So our job is to minimize the cost and effort to implement standards-based management and to maximize the benefit. This blog post describes how we accomplished that. It does not discuss our other major standards-based management initiative: Storage Management Initiative Specification (SMI-S) which allows Windows Server “8” to discover and manage external storage arrays. We’ll discuss that in a future blog post.
This blog post contains content appropriate for both IT Pros and developers. It contains both code and a schema example to highlight how easy we made things for developers. If you are an IT Pro, you might find this valuable in making good deployment and architectural decisions for your IT infrastructure.
Wojtek Kozaczynski, a Principal Architect in the Windows Server team, wrote this blog.
WMI first shipped in Windows 2000 (and was available down-level to NT4). It used a COM-based development model, and writing class providers was not for the faint of heart. Frankly, it was difficult to write them and more difficult to write them correctly. In addition to a difficult programming model, teams also had to learn the new world of standards-based management with CIM schemas, the Managed Object Format (MOF) language, and other new terms, mechanisms and tools. We got quite good coverage in the first few releases, but many teams were not satisfied with the effort-to-benefit ratio.
A big part of that equation is the benefit side. Starting a management ecosystem from scratch is incredibly difficult. If you write a provider and no one calls it, what value was generated? None. This is why Systems Management Server (SMS), now known as the System Center Configuration Manager (SCCM), added support for WMI around the same time we released it (WMI was actually spawned out of the SMS team.) This was great, but it had two problems:
- It created an incentive to produce WMI providers which were largely focused on inventory and monitoring of the managed resources (vs. command and control), and
- SMS was not widely deployed so the teams that wrote WMI providers didn’t see the customer impact of their investments.
Since the release of WMI there has been a steady increase in the number of management products and tools that consume its providers, but for a long time this was not matched by a proportional increase in coverage. Then things started to change with Hyper-V. The original management schemas defined by the DMTF were focused on what existed in the world, as opposed to focusing on management scenarios. There are things to be said for both approaches, but when it came to managing virtual environments the DMTF got a team of pragmatists involved, and when that schema came out, it was quickly adopted. The Hyper-V team developed providers that implemented the schema classes and the System Center Virtual Machine Manager (SCVMM) team produced a management tool which consumed them. This worked really well and it turned some heads because it demonstrated that WMI was good for more than just inventory and monitoring. WMI could effectively support configuration, command and control as well. To be fair, a number of other teams had done similar things before, but none of them had the visibility or impact that Hyper-V and SCVMM had.
The other big change in standards-based management was the definition and availability of a standard management protocol. WMI was a standard CIMOM that hosted many standard class providers, but at the time there wasn’t an interoperable management protocol, so WMI used DCOM. This, however, made it an island of management for Windows managing Windows. It worked well, but it did not deliver on the vision of standards-based management. That changed with the DMTF’s definition and approval of WS-Management (WS-MAN), a SOAP-based, firewall-friendly protocol that allows a client on any OS to invoke operations on a CIMOM running on any platform. Microsoft shipped the first partial implementation of WS-MAN in Windows Server 2003 R2 and named it Windows Remote Management (WinRM). It interoperated with a number of CIMOM/WS-MAN stacks available on other platforms including Openwsman (Perl, Python, Java and Ruby bindings), Wiseman (a Java implementation), and OpenPegasus.
Once standards-based management clients and CIMOMs could interoperate, the ball started rolling. However, it also started stressing the seams in the increasingly heterogeneous world as vendors used the protocols to develop truly agentless management solutions. Differences in the ways the specifications got implemented meant that the tools needed to write special-case code. Difficult APIs made it hard to write serious applications. Gaps in coverage meant that vendors still had to install agents, and vendors and customers hate having extra agents on the machines. Vendors hate them because they require a lot of work to write and keep up to date with OS releases. Customers hate them because they complicate provisioning processes and introduce yet another thing that consumes precious resources and can go wrong.
Early in the Windows Server “8” planning process we realized that we could not deliver a Cloud OS without a major investment in standards-based management. There are simply too many things to manage in the cloud to manage each of them differently. Considering the situation I described above, we concluded that we needed to:
- Dramatically reduce the effort required to write WMI providers and standards-based management tools
- Substantially improve manageability coverage particularly in the areas of configuration, command and control
- Update our code to comply with the latest DMTF standards
- Tightly integrate WMI and Windows PowerShell
- Provide a clear and compelling value proposition for everyone to use standards-based management on Windows or any other platform
Summary of what we have done
Let’s take a look at what we’ve done from two perspectives: the IT Pro perspective and the Windows/device developer perspective.
Our goal for IT Pros is to let them manage everything using Windows PowerShell, so we needed to give them simple-to-use cmdlets to remotely manage resources with standard interfaces on remote machines or heterogeneous devices. This, in turn, allows the IT Pros to script against those resources and write workflows which tie together tens, or tens of thousands of servers and devices without having to learn, configure and operate separate technologies and toolsets for each resource type.
Our goal for Windows/device developers is to make it simple and easy to define and implement standards-based management interfaces, and then expose them through client APIs, cmdlets and REST endpoints. For developers writing management tools, we wanted to make it simple and easy to manage all the components of a solution, including down-level DCOM-based Windows servers and standards-based operating systems, servers and devices. For web developers, we wanted to make it simple and easy to manage Windows via REST APIs.
Let’s look at what we have done, starting from the developer’s perspective. The picture below shows the components of what we call the CIM stack.
- In the area of provider development we introduced a new Management Infrastructure (MI) API for WMI, which significantly simplifies development of new providers (MI Providers in the picture). New tools generate skeleton providers from the CIM class specifications. The new API supports the rich cmdlet semantics that IT Pros have come to expect: -WhatIf, -Confirm, -Verbose, as well as progress bars and the other cmdlet behaviors. When a new provider that supports the rich semantics is called by an old CIM client, these APIs do nothing. However, new clients and Windows PowerShell can request these semantics and the APIs “light up” to deliver a rich experience.
- We made WS-MAN the primary protocol for management of Windows Servers and kept the COM and DCOM stacks for backwards compatibility. We have completed the full set of WS-MAN protocol operations and optimized our implementation for performance and scale. We also added support for handling connection interruptions to make management more robust. This simplifies the task of managing large sets of machines, where interruptions are sure to occur.
- For client developers we created a new MI Client API and stack that can communicate with WMI over COM locally, and over DCOM and WS-MAN remotely. It can also communicate with any CIM-compliant server via WS-MAN. The client API, in both its C/C++ and .NET flavors, is consistent with the provider MI API (they share the main header file).
The above gave us the foundation on which we have built Windows PowerShell access to CIM classes implemented in any CIMOM, which is illustrated in the picture below.
- We created a Windows PowerShell module called CIM Cmdlets with tasks that directly correspond to the generic CIM operations. The module is built on top of the client .Net API and can manage any standards-based management resource.
- We modified Windows PowerShell to be able to generate resource-specific CIM-based cmdlets at run-time. These cmdlets are generated from a declarative XML specification (CDXML) of the Windows PowerShell-to-CIM mapping and can call CIM class providers locally or remotely. This allows a developer to write a CIM provider, write the CDXML mapping file, and make the generated cmdlets available on every Windows device running Windows PowerShell 3.0. This works for non-Windows providers as well. Now imagine the value of this to device vendors. If they implement a standards-compliant provider and include this CDXML mapping file, then a couple hundred million Windows machines will be able to manage that device without the vendor having to write, debug, distribute or support any additional code. When a new version of Windows comes out, their device is supported without them having to write any code. This alone gives a huge incentive to the device vendors to support standards-based management.
In the picture above you may have noticed a box labeled “NanoWBEM”. Let’s talk about that now. As we engaged our partners and the community in our plans to pursue standards-based management, we got mixed reactions. Some felt it was the right thing to do and understood the business opportunities it could create, but were skeptical about whether it would really work. When we drilled into that, we discovered that the partners did not feel they could succeed using the existing open-source CIMOMs. At the same time, our own System Center team encountered similar problems as it expanded its capabilities to manage Linux servers. To address these problems, the team started a project to build a portable, small-footprint, high-performance CIMOM, and the result is NanoWBEM. NanoWBEM is written in portable C and runs on Linux, UNIX and Windows. Because of its very small size it is suitable for running on small devices such as network devices, storage controllers and phones. NanoWBEM uses the same MI provider APIs as WMI, so the same tools that developers use to create Windows providers can be used to develop providers for other platforms.
Now, to address the original concerns of our partners and the community, we are planning to make NanoWBEM available to the open-source community.
With the things I described above we have the best of both worlds:
- We give IT Pros powerful tools to access the standards-based management APIs realized by CIM class providers. If those classes are implemented by MI providers, they can support the extended Windows PowerShell semantics like progress, -WhatIf, -Confirm and -Verbose.
- We also gave the managed software and device developers tools to create new MI providers at a significantly lower cost than before, and make them manageable by IT Pros via Windows PowerShell modules at a very small incremental cost.
Finally, for the Web developers that want to manage Windows from non-Windows platforms we have developed the Management OData IIS Extension. This contains tools and components that simplify building REST APIs (OData Service endpoints).
OData is a set of URI conventions, tools, components and libraries for building REST APIs. What makes OData services stand out is that they are based on explicit domain models, which define their data content and behavior. This allows rich client libraries (e.g. Windows/iOS/Android phones, browsers, Python, Java, etc.) to be generated automatically to simplify developing solutions on a wide range of devices and platforms.
There are a number of products that have full Windows PowerShell APIs and need REST APIs now that they are being hosted in the cloud. This is why our first use of OData focused on exposing a set of Windows PowerShell cmdlets.
However, we have architected a general-purpose solution for future releases. REST APIs, and OData in particular, map very well to CIM data models, so what we did was provide a mechanism to map sets of cmdlets into a CIM data model and then expose that data model as an OData service endpoint.
A shallow dive into the CIM stack
In the preceding section, I showed a high-level overview of what we have done. This inevitably left many of you asking: so how does it work in practice? In the team blog we will take deep dives into all features and components of the Windows Server “8” management platform. In the meantime, for the impatient among us, I will do a “shallow dive” into the features, starting with the IT Pro experience.
IT Pros have two mechanisms to manage CIM classes. The first option is to use the generic CIM cmdlets from the CimCmdlets module, which is imported into PowerShell_ISE and PowerShell by default. These cmdlets should look quite familiar to IT Pros who know CIM, because they map very directly to the generic CIM/WS-MAN operations. For example, three different parameter sets of the Get-CimInstance cmdlet map directly to the CIM/WS-MAN generic operations GetInstance, EnumerateInstances and QueryInstances. The module also includes cmdlets to create remote server connections (sessions) and to inspect definitions of classes registered with the CIMOM.
The new CIM cmdlets are a replacement for the *-Wmi* cmdlets, which only worked Windows-to-Windows. The new cmdlets are optimized to work over WS-MAN and will continue to work seamlessly over DCOM, so as an IT Pro you no longer need to use two sets of commands to manage Windows and non-Windows machines.
The following example shows getting the names and types of the properties of the Win32_Service class registered in the WMI root\cimv2 namespace on the local computer.
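The original example appeared as a screenshot; in script form, a sketch of the same query using the generic CIM cmdlets looks like this:

```powershell
# Inspect the Win32_Service class definition and list its property
# names and CIM types.
Get-CimClass -ClassName Win32_Service -Namespace root/cimv2 |
    Select-Object -ExpandProperty CimClassProperties |
    Select-Object Name, CimType
```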
Getting the names of the Win32 services from a remote server is as simple as this.
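A sketch of that command (with 'Server01' standing in for your remote server name):

```powershell
# Enumerate the Win32_Service instances on a remote machine over WS-MAN
# and project just the service names.
Get-CimInstance -ClassName Win32_Service -ComputerName Server01 |
    Select-Object -ExpandProperty Name
```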
CIM-Based Cmdlets generated from CDXML
IT Pros can also use cmdlets that Windows PowerShell generates using a CDXML mapping file. This model allows developers to write one piece of code and get the benefits of both the Windows PowerShell and WMI ecosystems. It also allows cmdlets to be written in native code, which was of particular interest to some of the OS feature teams. The CIM-based cmdlets, although written as WMI providers, look and feel just like Windows PowerShell cmdlets:
- They provide task-oriented abstractions that hide implementation details like namespace, class name, method names, etc.
- They support full Windows PowerShell semantics: -WhatIf, -Confirm, etc…
- They have uniform support for rich operational controls: -AsJob, -ThrottleLimit, etc…
- They are packaged as Windows PowerShell modules and are discoverable using the Get-Module and Import-Module cmdlets.
The CDXML file used to generate the CIM-based cmdlets maps a cmdlet verb, noun and parameters to a Cmdlet Adapter. A Cmdlet Adapter is a .NET class which maps the requirements of a Windows PowerShell cmdlet into a given technology. We ship a Cmdlet Adapter for CIM classes but anyone can write their own (e.g. to map cmdlets to a Java classes). The file extension of the mapping file is .CDXML (Cmdlet Definition XML). A number of related CDXML files can be combined into a Windows PowerShell module together with files which describe the returned objects and how to format them.
The beauty of this mechanism is that Windows PowerShell can import such a .CDXML module from a remote CIMOM, and then create cmdlets which manage the classes on that server without any prior knowledge about them. In other words a CIMOM can expose a server-specific Windows PowerShell interface to its classes at run-time, without a need for any software installation!
Authoring CDXML files requires a level of detail comparable with specifying any other cmdlet, plus information about mappings to the CIM class functions. To simplify that task we developed CDXML editing tools that we will detail in a separate blog. Without going into details let me illustrate the idea behind generated CIM-based cmdlets with a simple example. Above I showed how to access the Win32_Service class and its instances using the CIM cmdlets. Below is a .CDXML file that defines the Get-Win32Service generated CIM-based cmdlet that will call the enumeration method on the same class. You will not find the name of that Get-Win32Service cmdlet in the file because it is generated by default from the default noun Win32Service and the verb Get. What is in the file is the <QueryableProperties> element which defines the properties that Windows PowerShell will use to query for the instances of the Win32_Service class. In our case the property we want to query on is service Name.
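A sketch of such a file follows; the element names and casing here follow the pre-release schema and may differ slightly in the shipped version, and the namespace URI and version number are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<PowerShellMetadata xmlns="http://schemas.microsoft.com/cmdlets-over-objects/2009/11">
  <Class ClassName="root/cimv2/Win32_Service">
    <Version>1.0</Version>
    <!-- The cmdlet name Get-Win32Service is generated from this default noun
         and the implicit Get verb. -->
    <DefaultNoun>Win32Service</DefaultNoun>
    <InstanceCmdlets>
      <GetCmdletParameters DefaultCmdletParameterSet="ByName">
        <!-- Properties Windows PowerShell will use to query for instances. -->
        <QueryableProperties>
          <Property PropertyName="Name">
            <Type PSType="System.String" />
            <RegularQuery AllowGlobbing="true">
              <cmdletParameterMetadata IsMandatory="true" cmdletParameterSets="ByName" />
            </RegularQuery>
          </Property>
        </QueryableProperties>
      </GetCmdletParameters>
    </InstanceCmdlets>
  </Class>
</PowerShellMetadata>
```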
The following sequence of Windows PowerShell commands imports the .CDXML file as a module, lists our new cmdlet as defined in the module, and then shows its signature. Notice that because in the .CDXML file we say that the parameter Name is mandatory (<cmdletParameterMetadata IsMandatory="true" cmdletParameterSets="ByName" />), Name is shown as the only mandatory parameter in the Get-Win32Service cmdlet signature.
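A sketch of that sequence, assuming the file is saved as Win32Service.cdxml:

```powershell
# CDXML files import directly as modules; the module name comes from the file name.
Import-Module .\Win32Service.cdxml

# List the cmdlets the module defines (Get-Win32Service in our case).
Get-Command -Module Win32Service

# Show the generated signature, including the mandatory Name parameter.
Get-Command Get-Win32Service -Syntax
```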
Note: If you are curious about how we accomplish this you can see the cmdlet we generate on the fly using the following command.
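For example:

```powershell
# The generated cmdlet is a PowerShell function, so its source is visible.
(Get-Command Get-Win32Service).Definition
```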
Our newly created cmdlet (Get-Win32Service) behaves like any other cmdlet and as I mentioned above, can be executed as a background job. Throttling (-ThrottleLimit) is useful when executing the command against a large set of servers. You can run the command against a few hundred or thousand servers and throttle how many concurrent outstanding requests are allowed to run.
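A sketch of such a call, assuming $sessions holds CIM sessions to a set of servers (the file name servers.txt is a placeholder):

```powershell
# Create one CIM session per server listed in a text file.
$sessions = New-CimSession -ComputerName (Get-Content .\servers.txt)

# Query all of them as a background job, with at most 32 concurrent requests.
Get-Win32Service -Name 'BITS' -CimSession $sessions -AsJob -ThrottleLimit 32
```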
We shipped Windows PowerShell V1 with 130 cmdlets and Windows PowerShell V2 with 230 cmdlets. With Windows PowerShell V3, Windows Server “8” ships with over 2,300 cmdlets. A large percentage of those cmdlets are written using the new WMI MI providers and .CDXML files. What that means is that those functions are both available via Windows PowerShell and via standards-based management. We recently shocked an audience by demonstrating the ability to install a set of Windows Server Roles from a Linux client machine using WS-MAN!
The CIM Client .Net API
Both CIM cmdlets and CIM-based cmdlets in Windows PowerShell are implemented on top of the new MI .Net Client API. Although it is unlikely that IT Pros will write C# client code, management tools developers certainly will, so let’s take a look at a simple example.
The client API supports both synchronous and asynchronous interactions with the server. The synchronous operations return IEnumerable<CimInstance> collections of CIM class instances. The asynchronous operations use the concept of asynchronous, observable collections from the Reactive Extensions, which results in efficient, simple and compact client code. Below is an example of a simple command-line program that enumerates instances of the Win32_Service class on a remote computer. My objective is not to discuss the client API, but to illustrate how compact and clean the resulting program is.
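The original program appeared as a screenshot; a minimal reconstruction using the Microsoft.Management.Infrastructure API might look like the following sketch, where the server name 'Server01' is a placeholder and a hand-written observer class stands in for the Reactive Extensions helpers:

```csharp
using System;
using System.Threading;
using Microsoft.Management.Infrastructure;

// Observer with the three callbacks: returned instances, errors, completion.
class ServiceObserver : IObserver<CimInstance>
{
    private readonly ManualResetEvent _done;
    public ServiceObserver(ManualResetEvent done) { _done = done; }

    public void OnNext(CimInstance instance)
    {
        Console.WriteLine(instance.CimInstanceProperties["Name"].Value);
    }

    public void OnError(Exception error)
    {
        Console.Error.WriteLine(error.Message);
        _done.Set();
    }

    public void OnCompleted() { _done.Set(); }
}

class Program
{
    static void Main()
    {
        var done = new ManualResetEvent(false);
        using (CimSession session = CimSession.Create("Server01"))
        {
            // The asynchronous enumeration is exposed as an observable collection.
            IObservable<CimInstance> services =
                session.EnumerateInstancesAsync(@"root\cimv2", "Win32_Service");
            services.Subscribe(new ServiceObserver(done));
            done.WaitOne();
        }
    }
}
```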
The code that handles the enumeration turns the result of enumerating instances of Win32_Service into an observable collection of CimInstance objects and associates the consumer observer object with that collection. The observer contains three callbacks that handle returned instances, the final result, and errors. This makes it simple and easy to perform rich management functions against a remote CIM server in just a few lines.
The New WMI Providers
I said earlier that we significantly simplified development of the MI providers. There are a number of things that contributed to that simplification.
The picture below shows the steps involved in writing a provider. A provider can implement one or more CIM classes, and the first step is to describe them in the MOF specification language. The next step is to generate a provider skeleton to implement the CIM classes. We provide a utility, Convert-MofToProvider.exe, to automate this step. This utility takes the class definition(s) as input and creates a number of C header and code files. Two of these files are worth mentioning:
- The first one, called the schema file, contains definitions of all the data structures, including the CIM class instances, which the provider uses. It makes the provider strongly typed and a pleasure to work with in Visual Studio, thanks to IntelliSense and auto-completion. This file should never be edited by hand.
- The other file is the provider code file, which contains the skeleton of the provider. This is the only file that should be edited. It contains the code of all the CIM class methods, with the method bodies returning the not-implemented result. So the generated provider is buildable, can be registered and will run, but will do nothing.
The next step is to fill in the CIM class methods with their respective logic. Once that is done, the provider can be registered and tested. We also greatly simplified provider registration by building a new registration tool that takes only one input: the provider DLL. We could do that because the MI providers have their class definitions compiled into them in the schema file.
In order to make the new providers work well with Windows PowerShell, we added the extended Windows PowerShell semantics API. The essence of that feature is that a provider can obtain input from the user while an operation is executing, if the cmdlet that invoked the operation contains the -Confirm or -WhatIf parameter. The following code snippet is from a test provider that enumerates, stops and starts Win32 services, and illustrates how the feature works. The code is part of the operation that stops the service; it asks the user (the MI_PromptUser() function) whether she wants to stop the service with the name that was given to the operation as the Name argument. If the answer is No (bContinue == FALSE) or the function failed, which means the provider could not reach the user, the provider does not stop the service but writes a message back to the user (the MI_WriteVerbose() function) and terminates the operation.
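The original snippet appeared as an image; the following self-contained C sketch simulates the same pattern, with simplified stand-ins for the MI API types and functions (the real definitions live in mi.h and have richer signatures, including an MI_Context argument):

```c
#include <stdio.h>

/* --- Stand-in stubs for the MI provider API (simplified for illustration). --- */
typedef int MI_Boolean;
typedef int MI_Result;
#define MI_TRUE       1
#define MI_FALSE      0
#define MI_RESULT_OK  0

/* Simulated prompt: in a real provider, the prompt round-trips to the client
   so that -Confirm and -WhatIf can be honored. Here we return a canned answer. */
static MI_Boolean g_promptAnswer = MI_TRUE;

static MI_Result MI_PromptUser(const char* message, MI_Boolean* bContinue)
{
    printf("PROMPT: %s\n", message);
    *bContinue = g_promptAnswer;
    return MI_RESULT_OK;
}

static void MI_WriteVerbose(const char* message)
{
    printf("VERBOSE: %s\n", message);
}

/* Core of the StopService operation: ask before acting, and bail out politely
   if the user says No or cannot be reached. Returns 1 if the service was
   stopped, 0 otherwise. */
static int StopServiceWithConfirmation(const char* serviceName)
{
    MI_Boolean bContinue = MI_FALSE;
    char prompt[256];
    snprintf(prompt, sizeof prompt, "Stop service '%s'?", serviceName);

    MI_Result r = MI_PromptUser(prompt, &bContinue);
    if (r != MI_RESULT_OK || bContinue == MI_FALSE)
    {
        MI_WriteVerbose("Operation cancelled; service not stopped.");
        return 0;
    }
    /* A real provider would call the service control APIs here. */
    return 1;
}
```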
The last feature I want to briefly describe is the IIS extensions for building OData-based RESTful management endpoints. The idea behind the feature is that we can declaratively configure how to dispatch the endpoint service requests to cmdlets in a Windows PowerShell module. Let me explain how it works using an example of a very simple endpoint that can be used to enumerate, create and terminate Windows processes.
The endpoint’s directory includes the three files shown in the picture below.
The schema file contains the definitions of the endpoints’ entities and is loaded when the endpoint is loaded. The module file defines the Windows PowerShell cmdlets that will be called to handle the endpoint requests. The dispatching file ties the two other artifacts together by defining what cmdlets are called for different client requests. In our example it maps the query for Win32 processes onto the Get-Win32_Process cmdlet, which then uses a generic CIM cmdlet to talk to the WMI. The result of this endpoint configuration is shown in the screenshot below, which is a response to the URL
http://wojtek8srv3:8000/Management/Process.svc/Win32Process?$filter=Name eq 'svchost.exe'&$select=ProcessId.
Jeffrey often makes the point that “nobody cares about your first million lines of code”. The “million” is an arbitrarily picked, very large number, but the idea of the metaphor is that every software product must accumulate a critical mass of foundational components before delivering meaningful value. However, once that critical mass is reached, even a few additional lines can make a significant difference.
Following that metaphor, I feel that in Windows Server “8” we have written our “first million” lines of standards-based management platform code. We bridged the gap between the IT Pros who are managing increasingly complex cloud infrastructures and the Windows and device developers who build the things that must be managed. We have laid a consistent foundation that spans from the low-level CIM interfaces on one end to the IT Pro-oriented Windows PowerShell and OData interfaces on the other. We’ve created a clear and compelling value for heterogeneous devices to implement standards-based management, and we’ve delivered comprehensive coverage so that management tool providers can use standards-based management to manage Windows.