Monitor an agent – but run response on a Management Server

This post has been updated.


Comments (13)

  1. Raphael Burri says:

    Hi Kevin

    Thank you for writing this up as a reference. I have been using this now and again for the last years and it is a great enabler for complex workflow scenarios.

    However, I came to understand that it is often better to use the more specialized target class Microsoft.SystemCenter.CollectionManagementServer. That way the response will run on a "true" Management Server with database and SDK capabilities. This allows using
    PowerShell scripts (or modules) that require the SCOM SDK.

    Using the Microsoft.SystemCenter.ManagementServer target class will include Gateways. Those do not allow access to the SDK and/or the database. Hence advanced rule actions may fail when running for agents connected through gateways.

    When using Microsoft.SystemCenter.CollectionManagementServer instead, the rule action will be executed on the MS that is currently serving the gateway to which an agent is connected. More versatile, in my opinion.
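    The re-targeting being discussed might be sketched like this — a purely illustrative fragment (the rule ID, event number, and script details are invented, not from the post): the DataSource runs on the agent as usual, while the Target attribute on the WriteAction redirects the response to the Collection Management Server currently serving that agent.

    ```xml
    <!-- Illustrative sketch only: agent-side DataSource, MS-side WriteAction -->
    <Rule ID="Example.ReTargeted.Rule" Enabled="true"
          Target="Windows!Microsoft.Windows.Computer" ConfirmDelivery="true">
      <Category>Custom</Category>
      <DataSources>
        <!-- Runs on the agent: watch for a specific event -->
        <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.EventProvider">
          <ComputerName>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
          <LogName>Application</LogName>
          <Expression>
            <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="UnsignedInteger">EventDisplayNumber</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                <Value Type="UnsignedInteger">999</Value>
              </ValueExpression>
            </SimpleExpression>
          </Expression>
        </DataSource>
      </DataSources>
      <WriteActions>
        <!-- The Target attribute here redirects execution to the MS
             serving this agent (also works through gateways) -->
        <WriteAction ID="WA" TypeID="Windows!Microsoft.Windows.PowerShellWriteAction"
                     Target="SC!Microsoft.SystemCenter.CollectionManagementServer">
          <ScriptName>ServerSideResponse.ps1</ScriptName>
          <ScriptBody><![CDATA[
            # This script executes on the management server, so it can
            # safely load the SCOM SDK / OperationsManager module.
          ]]></ScriptBody>
          <TimeoutSeconds>300</TimeoutSeconds>
        </WriteAction>
      </WriteActions>
    </Rule>
    ```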

    The other remark when using this great rule re-targeting: one has to be careful with variables. The usual $Target$ replacement will show unexpected results when used on the rule action. This is because it is not the triggering agent’s target object properties that are
    being evaluated, but the properties of the MS the action has been redirected to. If you need to know e.g. which agent the rule was triggered on, one possible workaround is:

    Include the agent’s name (or other properties) as parameters in the event you’re triggering the rule on. Then use $Data$ replacement when calling the action script. E.g.: $Data/Context/DataItem/Params/Param[3]$ (getting the 3rd parameter from the collected
    event coming from a consolidated rule).
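    Passing an event parameter into the server-side script via $Data$ might look like the following illustrative fragment (the script and parameter names are invented for the sketch; note the $Data$ path is the one quoted above for a consolidated rule):

    ```xml
    <!-- Illustrative: $Target$ would resolve to the MS's own properties here,
         so the agent name is carried inside the DataItem and read via $Data$ -->
    <WriteAction ID="WA" TypeID="Windows!Microsoft.Windows.PowerShellWriteAction"
                 Target="SC!Microsoft.SystemCenter.CollectionManagementServer">
      <ScriptName>HandleAgentEvent.ps1</ScriptName>
      <ScriptBody><![CDATA[
        param($AgentName)
        # $AgentName now holds the 3rd parameter of the event that the
        # agent-side rule collected and forwarded to this MS.
      ]]></ScriptBody>
      <Parameters>
        <Parameter>
          <Name>AgentName</Name>
          <Value>$Data/Context/DataItem/Params/Param[3]$</Value>
        </Parameter>
      </Parameters>
      <TimeoutSeconds>300</TimeoutSeconds>
    </WriteAction>
    ```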


  2. Kevin Holman says:

    Thanks Raphael – excellent comment and spot on. I will update the post to reflect that. I don’t have any GWs at the customer where I was developing this, but it certainly is a possibility. As to the use of $Target$ – I am not using it anywhere above where that’s
    an issue, correct? You mean when passing something like the computer name of the agent as a parameter to the script? For a simple event (rule) datasource I would typically pass $Data/LoggingComputer$.

  3. Raphael Burri says:

    Hi Kevin
    Indeed; when I need to pass anything re-usable from the agent to the MS-side action, only $Data…$ can be used. Everything that is contained in the DataItem built up by a datasource is fine.

    What I wanted to emphasize is that initially I often failed because I went the seemingly more obvious way of using $Target$ – only to discover that those workflows would most often fail silently if different base classes were used on the datasource and action
    sides. When using the same base class (Microsoft.SystemCenter.HealthService and Microsoft.SystemCenter.CollectionManagementServer, for example), they ran but contained the wrong information – much easier to troubleshoot.

    Let me brew up a quick scenario:
    – Agent rule targeted at Microsoft.Windows.Computer (my DataSource)
    – I need to re-use the value of the OrganizationalUnit property for SDK-scripts run as responses (my WriteAction)

    When using $Target/Property[Type="Windows!Microsoft.Windows.Computer"]/OrganizationalUnit$ as a WriteAction parameter, the server-side response will fail quietly. The action attempts to evaluate the $Target$ expression on the MS, whose base class is Microsoft.SystemCenter.HealthService.
    My $Target$ expression cannot be resolved against that class – hence the action will not run. This is awfully difficult to troubleshoot, as it requires a deep understanding of the workflows and the DataItems passed around.

    Workaround: build a DS that puts the $Target/Property[Type="Windows!Microsoft.Windows.Computer"]/OrganizationalUnit$ value into the DataItem (e.g. System.Event.GenericDataMapper to add the value to the Params, or a PropertyBag creation script), then extract
    it in the action using the corresponding $Data$ expression.
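    That workaround could be sketched roughly as follows — an illustrative, abbreviated fragment (IDs and event details invented): a condition detection evaluates $Target$ on the agent side, where it still resolves correctly, and stamps the value into the event’s Params collection for the MS-side action to read back via $Data$.

    ```xml
    <!-- Illustrative sketch: map the agent-resolved $Target$ value into
         an event's Params on the agent side (module config abbreviated) -->
    <ConditionDetection ID="MapToEvent" TypeID="System!System.Event.GenericDataMapper">
      <EventOriginId>$MPElement$</EventOriginId>
      <PublisherId>$MPElement$</PublisherId>
      <PublisherName>CustomPublisher</PublisherName>
      <Channel>Application</Channel>
      <LoggingComputer>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</LoggingComputer>
      <EventNumber>999</EventNumber>
      <Category>0</Category>
      <EventLevel>0</EventLevel>
      <UserName/>
      <Params>
        <!-- Evaluated on the agent; the MS-side action can then read it as
             $Data/Params/Param[1]$ (exact path depends on the composition) -->
        <Param>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/OrganizationalUnit$</Param>
      </Params>
    </ConditionDetection>
    ```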

    Bottom line: may I suggest adding a few extra words to the post that inform about this limitation around replacement and give a few hints on how to successfully use $Data$ instead? Plus some hints on where to investigate when things do not work out.

  4. TheSilverCanuck says:

    Dude, you just rocked my world! 🙂

  5. rob1974 says:

    Is there any reason to use the "collection server" class over the All Management Servers Pool (Target="SC!Microsoft.SystemCenter.AllManagementServersPool")? I have only tested the Microsoft.SystemCenter.AllManagementServersPool class in a test environment
    with 1 MS, so I want to know, before I import into production, whether the collection server class is better or not.

  6. Kevin Holman says:

    @rob1974 –

    Yes. You should use Microsoft.SystemCenter.CollectionManagementServer. If you use the AMSRP – that is an object itself, a singleton class like a group, that technically only exists on, or is hosted by, a single MS in the pool. Therefore if you used that class,
    all workflows would try to execute on the one management server hosting that object. This won’t work, as this channel runs the response on the management server that the agent is assigned to and communicating with.

    When you target the AMSRP object – this does give a workflow "high availability", because the host will move if a specific MS is down, but this would be a poor choice for this scenario in my opinion. In fact – I believe it won’t work, unless the agents are assigned
    to the specific MS hosting that class.

  7. Martins says:

    Is there any possibility to use this for an agent-based task which is targeted at a class hosted by an agent but needs to run a script on the management server?

  8. Ludik D says:

    Can this target modification be used on all WriteAction instances – for example, a WriteAction within a monitor? Or will this only work for rules?

  9. Kevin Holman says:

    @MartinS – I don’t know – what’s the scenario?

    @Ludik D – WriteActions aren’t a component of monitors. Monitors output a monitor state, and optionally an alert. But I don’t see why this couldn’t be used in a recovery write action triggered by a monitor state change.

    1. Stephen says:

      Hi Kevin

      I’ve done this with a rule before. What about a WriteAction in an Agent Task? Have you tried this? I’m not having much luck.

  10. Alex P says:

    Hey @Kevin Holman!

    I’m trying to understand what data I have accessible in the WriteAction part of a rule – for example, how would I return the source computer name in the Event Viewer on the MS? I’ve been struggling with it for some time already, and it all broke down due to the lack of convenient debugging tools available – I could’ve missed it, but how do you debug workflows? The Authoring Extensions were last released in 2007 and they don’t initially seem to like my 2012 environment. Any advice? 🙂

    1. Kevin Holman says:

      I’ll have to admit, I do VERY little workflow debugging. I try to do lots of research and find working examples of workflows to modify and adapt, and I get very lucky. I also follow the KISS principle, and try to create as simple monitoring as possible that meets the customer’s needs, so years down the road they can see and understand the XML. I don’t write a lot of complex composite datasources, which is where the majority of debugging/analyzer stuff comes in. I just don’t have the time with my customers to do this, or I’d be spending all my time doing MP authoring. That said – I heard the WF analyser works fine if you use a SCOM 2012 SP1 agent.

  11. Anonymous says:
    (The content was deleted per user request)