OpsMgr: AD Client monitoring – There are not enough GC’s available, and other troubleshooting issues

Ran into this one recently during a Proof of Concept at a customer using the ADMP.

You implement the AD Management Pack, then optionally enable AD Client monitoring per the ADMP Guide.  Almost immediately, you start getting a lot of alerts with high repeat counts stating “There are not enough GC’s available”.




No problem – we likely just need to adjust the number of GCs expected for the site (the default is 3).  So you go find the monitor that you assume is running this script, by scoping the authoring pane to “Active Directory Client Perspective” and viewing monitors.  What we find is that the monitor for “Active Directory Global Catalog Availability” is DISABLED by default:




So why are we generating this alert?


One concept in MP authoring is that multiple workflows (including multiple rules and monitors) can share a common datasource.  A datasource might be something as simple as a script and a timed schedule.  Multiple rules and monitors can all call that same script, pass the same parameters, and a single run of the script on a schedule can “feed” all the rules and monitors with the data they are expecting.  This is exactly the case here.
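To make the concept concrete, here is a minimal sketch in plain Python (illustrative only – SCOM implements this inside the agent, and the workflow names are just stand-ins for the ones discussed below):

```python
# Illustrative sketch of a shared datasource: one scheduled script run
# feeds every workflow that references the datasource with identical
# parameters. Not actual SCOM internals.

script_runs = 0  # counts executions of the "script" datasource

def run_datasource(site, expected_gcs):
    """Stands in for one scheduled execution of the shared VBScript."""
    global script_runs
    script_runs += 1
    return {"site": site, "expected_gcs": expected_gcs, "gcs_found": 2}

def feed_workflows(workflow_names, site, expected_gcs):
    """All workflows sharing the datasource get data from ONE script run."""
    data = run_datasource(site, expected_gcs)
    return {name: data for name in workflow_names}

results = feed_workflows(
    ["AD_Client_GC_Availability.Monitor",
     "AD Client GC Availability PerformanceCollection"],
    site="Default-First-Site-Name", expected_gcs=3)

print(script_runs)  # 1 - a single run fed both workflows
```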

There is a datasource in the AD Client MP, “AD_Client_GC_Availability.DataSource”.  This is simply a script, plus a timer for the interval at which to run it.  The disabled monitor “AD_Client_GC_Availability.Monitor” uses a MonitorType of “AD_Client_GC_Availability.Monitortype”, which references this datasource.

However – the monitor is disabled, so this script datasource should not be executing.

UNLESS there is some other rule or monitor that is executing it.  We can find a rule in the MP called “AD Client GC Availability PerformanceCollection”.  This rule is ENABLED and calls the same datasource, passing the same parameters to the script.  What this means is: if you are going to place an override on a monitor or rule, and that monitor or rule shares a script datasource with OTHER monitors or rules, you MUST override ALL the monitors and rules that share the datasource with the same values.  This ensures that we do not break cookdown.  Situations like this SHOULD be documented in any MP guide, but they are often overlooked.
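The pitfall can be sketched the same way (plain Python, illustrative names only): workflows are grouped by their exact datasource parameters, so an override applied to only one of them splits the group and forces a second, redundant script execution:

```python
# Illustrative sketch - workflows whose datasource parameters match exactly
# share one script run; an override on only one workflow creates a second
# distinct parameter set, and therefore a second script execution.

def count_script_runs(workflows):
    """workflows: list of (name, params). One run per distinct params tuple."""
    return len({params for _, params in workflows})

# Both workflows at defaults -> parameters match -> one script run:
matched = [
    ("AD_Client_GC_Availability.Monitor", ("Site1", 3)),
    ("AD Client GC Availability PerformanceCollection", ("Site1", 3)),
]
print(count_script_runs(matched))    # 1

# Override the expected GC count on the monitor only -> cookdown breaks:
mismatched = [
    ("AD_Client_GC_Availability.Monitor", ("Site1", 1)),
    ("AD Client GC Availability PerformanceCollection", ("Site1", 3)),
]
print(count_script_runs(mismatched)) # 2 - the script now runs twice
```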


The easiest solution to this issue is to disable the performance collection rule, especially since it is creating alert noise.  Or, you can override this rule (and the aforementioned monitor) to use a more reasonable number of GCs.  You might consider enabling the GC availability monitor after configuring this, if this is something you are concerned with monitoring.




I just turned off the performance collection rule, since this wasn’t a valuable perf collection to the customer, and it gets rid of another running script.



Here is another example of something similar when you enable AD Client monitoring.  There is a monitor for “AD Client Connectivity”.  This monitor uses a shared script datasource of “AD_Client_Connectivity.Datasource”.  This datasource has a required parameter, “LogSuccessEvent”, expecting “True” or “False”.

This monitor is set to a default value of “false”.  HOWEVER – you find the event logs of your AD Client machines flooded every minute with event ID 5000: “AD Client Connectivity : The script 'AD Client Connectivity' has completed in n seconds.”

Log Name:      Operations Manager
Source:        Health Service Script
Date:          6/4/2012 4:10:42 PM
Event ID:      5000
Task Category: None
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      RD01.opsmgr.net
AD Client Connectivity : The script 'AD Client Connectivity' has completed in 3 seconds.

Why is this success event being logged, even when the monitor is set to false?  Again – the answer is multiple rules and monitors sharing a single datasource, but passing different parameters.  And again – if you are going to make a change to ONE monitor or rule, you MUST make the same change on ALL that share the same datasource.

After a little review of the XML – we can see that the following rules and monitors reference this shared datasource:



    • AD Client AD Client LDAP Bind Time Collection
    • AD Client AD Client LDAP Ping Time Collection
    • AD Client ADSI Client Search Time Collection
    • AD Client Connectivity Monitor


What we can easily see is that the “log success event” on the perf collection rules is set to True!  What I generally recommend, in order to support cookdown and allow a shared datasource to work properly, is to configure all overrides on the rules and monitors the same.  If you need different settings, such as intervals, understand that these will likely not cook down and you will have additional simultaneous scripts running, which may impact performance in cases where a script uses considerable resources.


***Note – after further review, there is a bug in the AD_Client_Connectivity.Monitortype which causes the script to log a success event on every run, no matter what you input on the AD_Client_Connectivity.Monitor.  This is also why it logs an event every time: the parameter is screwed up, and instead of passing “False” it passes the interval in seconds.  I think it might be interesting to some to see how to troubleshoot this, so I will include my steps:

Troubleshooting a SCOM VBscript using RegMon/ProcMon:


I start by looking at the script itself; hopefully, if it is simple enough, I can figure out what it is doing.  To find the scripts on a SCOM agent, browse to the \Program Files\System Center Operations Manager\Agent\Health Service State\ folder.  In here there will be one or more “Monitoring Host Temporary Files xx” folders.  Search the top-level “Health Service State” folder in Windows Explorer for the name of the file, or for “*.vbs”, and scroll around until you find the script you are looking for:




COPY THE SCRIPT elsewhere to review it.  Editing the script in place will not fix ANYTHING, because these folders are torn down and rebuilt, with the scripts re-extracted from the management packs, on each launch of the MonitoringHost.exe process.

I found in the script where the LogSuccessEvent gets evaluated:

If CBool(GetTestParameter(SCRIPT_NAME, LOG_SUCCESS, bLogSuccess)) Then
    ' ...logs the informational event (call truncated here), whose message
    ' ends with:
    '   "' has completed in " & DateDiff("s", dtStart, Now) & " seconds."
End If

So next I need to understand what this function is doing:  GetTestParameter(SCRIPT_NAME, LOG_SUCCESS, bLogSuccess)

In that function, I can see that the script checks a specific location in the registry; if the value isn’t there, it falls back to the parameters passed to the script by SCOM.  This is COOL, because it gives us the ability to manually override the script without having to change anything in SCOM.  Normally I wouldn’t consider this a good thing, as it adds a lot of code to your scripts, but in this case it is fantastic, because the MP has a bug and is sealed, so we don’t have any easy way to fix a busted monitortype.
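The pattern itself is easy to sketch (illustrative Python only – the real script is VBScript, and a dict stands in for the registry key so the sketch runs anywhere):

```python
# Sketch of the "registry override with fallback" pattern the MP script uses.
# A dict simulates the ...\Tests\AD Client Connectivity registry key.

registry_overrides = {}

def get_test_parameter(name, passed_value):
    """Return the manual registry override if present, else the value SCOM
    passed to the script on its command line."""
    return registry_overrides.get(name, passed_value)

# The buggy monitortype passes the interval (e.g. "900") instead of "False",
# and VBScript's CBool("900") evaluates True - so the event logs every run:
print(get_test_parameter("LogSuccessEvent", "900"))   # 900

# A registry override wins regardless of what SCOM passes:
registry_overrides["LogSuccessEvent"] = "False"
print(get_test_parameter("LogSuccessEvent", "900"))   # False
```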

I load up my trusty ProcMon  http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx

I configure it so that only Registry access is enabled:


I configure a filter so that I only see registry access from my specific VB script by name, and where the path is only near the registry location that I care about:


I enable Auto-scroll, then wait for my script to run.  (Hint – to speed things up, you can run the script manually, or override the workflow that runs it to run very often for testing.)

From looking at the script, and the ProcMon output – I can see that we are looking for a non-existent registry key:

HKLM\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Modules\{management group GUID}\S-1-5-18\Script\AD Management Pack\Tests\

What I can tell is that the script is looking for these registry overrides in the following location:

HKLM\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Modules\{management group GUID}\S-1-5-18\Script\AD Management Pack\Tests\AD Client Connectivity\

The following parameters are supported (you can see this from the script, or from the ProcMon output):

    • FailureThreshold
    • LDAPPingTimeout
    • BindThreshold
    • SearchThreshold
    • LogSuccessEvent


Voila!  So I create a String Value for LogSuccessEvent = False
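For reference, the equivalent change as a .reg import would look like the fragment below (a sketch of the manual step above; the {management group GUID} placeholder must come from your own environment, e.g. from the ProcMon output):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Modules\{management group GUID}\S-1-5-18\Script\AD Management Pack\Tests\AD Client Connectivity]
"LogSuccessEvent"="False"
```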



And I no longer see these success events logged on the AD Client Perspective machines.  Normally, I’d recommend leaving these events alone, as they are helpful for troubleshooting.  Turn them off only if they are filling your logs and blocking your ability to see other SCOM events.

Comments (4)

  1. Kevin Holman says:

    @Jan –

    No, not exactly.  It does not force all rules and monitors to go live – even when disabled.  In my example above – there was a different rule, which looked for the event created by the script.  So that rule had a different datasource (event based) and that is what was being triggered by the script.  The script was the shared datasource – but the alert was coming from a different rule.

    However, properly authoring for cookdown is always going to be a challenge.  The author MUST document which rules and monitors leverage a shared datasource, so the end user can keep all params the same when placing overrides, and the author must also document which additional rules and monitors depend on this datasource, such as rules and monitors that look for script-generated events.  This latter part doesn't affect cookdown, but it's good for the end user to understand the relationships/dependencies.

  2. Jan says:

    Nice article!

    I always thought that as long as a rule/monitor is disabled, it won't process anything coming from the referenced datasources.

    So actually, when a datasource runs it forces all rules/monitors to go live, even when disabled?

    If so, it becomes very hard to properly author cookdown, as you have to group your rules/monitors and datasources in logical sections which you'll expect to be either left on or off entirely.

  3. David says:

    Can I disable the SCOM AD Client Connectivity event rule and the AD Client GC Availability event rule?  These rules store a lot of events in the data warehouse and consume more space.

    So if I disable these rules, will it cause any issues?
