The Anatomy of a Good SCOM Alert Management Process – Part 3: Completing the Alert Management Life Cycle.


This is the final article in a three-part series about alert management.  Part 1 is here, and Part 2 is here.

In the first two parts, we have already discussed why alert management is necessary and what tends to get in the way.  The final article in this series will cover what processes need to change or be added in order to facilitate good alert management.

The information below can be found in a number of places.  It is part of the health check Microsoft provides for SCOM, it appears in presentations by a number of different Microsoft PFEs, and it shows up on plenty of blogs as well.  Simply put, there's plenty out there to point you in the right direction, though sometimes the WHY gets left out.

Daily Tasks

  • Check, using the Operations Manager management packs, that the Operations Manager components themselves are healthy
  • Check that alerts from the previous day are not still in a resolution state of 'New'
  • Check for any unusual alert or event noise; investigate further if required (e.g., failing scripts, WMI issues, etc.)
  • Check the health state of all agents for any that are not green/healthy (a PowerShell sketch of these daily checks follows this list)
  • Review nightly backup jobs and database space allocation
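
As a rough illustration of the first two daily checks, the OperationsManager PowerShell module can pull both the alerts still sitting in 'New' and any agents that aren't healthy.  This is a minimal sketch, not a drop-in solution: it assumes the module is available where you run it, and "MS01.contoso.com" is a placeholder for one of your management servers.

    # Connect to the management group (MS01.contoso.com is a placeholder management server)
    Import-Module OperationsManager
    New-SCOMManagementGroupConnection -ComputerName "MS01.contoso.com"

    # Alerts raised more than a day ago that are still in resolution state 'New' (0)
    Get-SCOMAlert -ResolutionState 0 |
        Where-Object { $_.TimeRaised -lt (Get-Date).AddDays(-1) } |
        Sort-Object TimeRaised |
        Format-Table TimeRaised, MonitoringObjectDisplayName, Name -AutoSize

    # Agents whose health state is anything other than green/healthy
    Get-SCOMAgent |
        Where-Object { $_.HealthState -ne 'Success' } |
        Format-Table DisplayName, HealthState, PrimaryManagementServerName -AutoSize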

Weekly Tasks

  • Schedule a weekly meeting with IT operational stakeholders and technical staff to review the previous week's most common alerts
  • Run the 'Most Common Alerts' report and investigate where necessary (see the previous bullet; a rough PowerShell equivalent follows this list)
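
If you want a quick feel for the noisiest rules and monitors between reporting runs, grouping open alerts by name gives a rough approximation of the 'Most Common Alerts' report.  A sketch, assuming a management group connection like the one in the earlier example:

    # Top ten alert names by count across open (not closed) alerts - a rough stand-in for the report
    Get-SCOMAlert |
        Where-Object { $_.ResolutionState -ne 255 } |
        Group-Object Name |
        Sort-Object Count -Descending |
        Select-Object -First 10 Count, Name |
        Format-Table -AutoSize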

Monthly Tasks

  • Check for updated versions of the management packs you have installed (a sketch for listing installed versions follows this list), and evaluate newly released management packs for suitability in your monitored environment
  • Run the baseline performance counters to assess the ongoing performance of the Operations Manager environment as new agents and new management packs are added
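
Comparing installed management pack versions against the latest releases is easier with a list in hand.  A minimal sketch, again assuming an existing management group connection:

    # List installed management packs with their versions for comparison against current releases
    Get-SCOMManagementPack |
        Sort-Object Name |
        Format-Table Name, Version, Sealed -AutoSize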

The task list doesn't necessarily say WHO is responsible for completing these items, but I can say with reasonable certainty that if the SCOM administrator is the only one expected to do them, he or she will fail.  Alert noise in particular is a team effort.  It needs to be handled directly by the people responsible for maintaining the systems being monitored.  That means your AD team should be watching the AD management pack, the SQL team needs to be watching for SQL alerts, and so on and so forth.  They know their products better than the SCOM administrator ever will.

Tier one (and by proxy tier two) can certainly be the eyes and ears on the alerts that come through, but they need clearly defined escalation paths so that issues they can't easily resolve are sent on to the correct tier-three teams.  SCOM does a lot of self-alerting, so that escalation needs to include the SCOM administrators: issues such as WMI scripts not running, failing workflows, and various management-group-related alerts need to eventually make it to the SCOM administrator.  Issues such as health service heartbeat failures (and, by proxy, gray agents once that heartbeat threshold is exceeded) need to be looked at right away.  Those indicate that, at the very least, an agent is no longer being monitored.  There are a number of reasons why that could be the case, ranging from down systems (which you want to address), to bad processes, to some sort of client issue preventing communication.
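
One way to keep heartbeat problems from going unnoticed is to pull the open heartbeat and unreachable-computer alerts on a schedule.  A sketch, assuming an existing management group connection; the alert names below are the defaults from the core management packs, so verify them against your own environment:

    # Open heartbeat / "failed to connect" alerts (default alert names - verify in your environment)
    Get-SCOMAlert -Name 'Health Service Heartbeat Failure', 'Failed to Connect to Computer' |
        Where-Object { $_.ResolutionState -ne 255 } |
        Format-Table TimeRaised, MonitoringObjectDisplayName, Name -AutoSize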

Finally, all of this requires some sort of accountability.  Management doesn't necessarily need to know why system X is red; that's usually the wrong question.  What management needs to ensure is that when there's an alert from SCOM, SOMEONE is addressing it, and that someone also has a clear escalation path for when they reach a point where they aren't sure what's going on.  To be clear, there's going to be A LOT of this at first.  That's normal, and it also gets us into the other key processes that need to be created or adjusted in order to make this work.

  1. Server commission/decommission:  The most common cause of gray agents in SCOM is the failure to remove the agent from SCOM when a server is retired.  It's a simple change, but it has to be worked into your organization's existing decommissioning process.  On the flip side, ensuring that new servers are promptly added to SCOM is also important.  How that is managed is more organization specific: you can auto-deploy via SCCM or AD (though don't forget to change the 'remotely manageable' setting if you do), you can deploy manually through the SCOM console, or you can pre-install the agent in your server image and use AD assignment.  Keep in mind that systems in a DMZ will require certificates or a gateway to authenticate, which will further affect these processes.  You may also want to think about whether your development systems should be monitored by the production environment (as these will usually generate more noise); consider putting them in a dev SCOM environment instead (you'll likely incur no additional cost).  A PowerShell sketch of agent install and removal follows this list.
  2. Development Environment:  The dev SCOM environment is also something that will have its own processes.  It is used mostly for testing new MP rollouts; rather than being watched by your day-to-day support operations, it really only needs to be watched by the engineers responsible for their products and by the SCOM administrator.
  3. Maintenance:  Server maintenance will need to be adjusted as well.  This might be the biggest process change (or in most cases, a new process altogether).  Rebooting a DC during production hours (for example) is fairly normal since it usually won't cause an outage, but if that DC happens to hold the PDC emulator role, every other DC in SCOM will generate an alert when it goes down.  Domain controllers aren't the only example; any server reboot can generate a health service heartbeat alert if the server misses its ping, or even a gray server if the reboot takes a while.  Application-specific alerts can be generated as well, and SCOM-specific alerts will fire when workflows are suddenly terminated.  This process is key, as it's a direct contributor to the daily noise that SCOM typically generates.  SCOM isn't smart enough to know which outages are acceptable to your organization and which aren't; it's up to the org to tell it.  SCOM includes a nice tool called maintenance mode to assist with this (though it's worth noting that this is a workflow that the management server orders the agent to execute, so it can take a few minutes to go into effect).  System Center 2016 also added the ability to schedule maintenance mode, so that noisy objects can be put into maintenance mode automatically when that 2:00 AM backup job runs (a PowerShell sketch for starting maintenance mode follows this list).  If there's a place for accountability, this one is key, as the actions of the person doing the maintenance rarely get back to him or her; that same person is often not responsible for the alert that is generated.  Don't assume this one will define itself organically.  It probably won't, and it may need some management oversight to get it working well.
  4. Updates:  The update process is also one that will need adjusting.  It's a bit of a dirty little secret in the SCOM world, but simply using WSUS and/or SCCM will not suffice.  There's a manual piece too, involving running SQL scripts and importing SCOM's updated internal MPs.  The process hasn't changed in all the time I've been doing it, but if you aren't sure, Kevin Holman writes an updated walkthrough with just about every release (such as this one).
  5. Meeting with key teams:  This is specified as a weekly task, though as the environment is tuned (see below) and better maintained, it can happen less frequently.  The bottom line is that SCOM will generate alerts.  Some are easy to fix, such as the SQL SPN alerts that usually show up in a new deployment.  Some are not.  If the SQL team doesn't watch SQL alerts, they won't know what is legitimate and what isn't.  If they aren't meeting with the SCOM admin on a reasonably consistent basis, the tuning process won't happen.  The tier 1 and 2 staff start ignoring alerts when they see the same ones over and over again with no guidance or attempt to fix them.  This process is key, as that communication doesn't always happen organically.  SCOM also gives us some very nice reports in the 'Generic Report Library' to help facilitate these meetings.  The 'Most Common Alerts' report mentioned above is a great example, as you can configure it to give you a good top-down analysis of what is generating the most noise and which management packs are generating it.  Most importantly, what invariably happens is that the top 3-4 items account for 50-70% of your alert volume, so much of the tuning process can be accomplished simply by running this report and sitting down with the key teams.
  6. Tuning:  This ties into those meetings, but the tuning process also needs its own process flow.  Noise needs to be escalated by the responsible teams to the SCOM administrator so that it can be addressed, whether by threshold changes or by turning off certain rules and monitors.  To an extent, the SCOM administrator should push back here as well.  In a highly functional team that isn't necessary, but the default reaction so many people have is simply 'turn it off.'  That's not always the right answer, though in the right situation it can be: SCOM will tell you, for example, that website X or app pool Y is not running, and in a lot of organizations that's normal.  But a lot of alerts aren't that simple, and all of them need to be investigated, as some are caused by events such as reboots, and many (such as the SQL SPN alerts) are being ignored simply because the owner isn't sure what to do.  This is not always readily apparent, and some back and forth here is healthy.
  7. Documentation:  In any health check, Microsoft asks if SCOM changes are documented.  I've yet to see a 'yes' answer here.  Truthfully, most organizations don't handle change control all that well, and IT people tend to be averse to documentation.  I'm sure part of that is that there's already so much of it that it rarely gets read or ever makes sense.  Another part is that change management isn't usually a daily event, while SCOM alert changes need to happen frequently.  You really don't need a change management meeting to facilitate those types of changes, as the only people really affected are the SCOM admin and whoever owns the system or process in question, and waiting for those meetings is painful for everyone responsible for dealing with said alerts.  I've always used a poor man's implementation here.  Each management pack has a description field and a version field that are easily editable.  Each time I make a change to a customization MP, I increment the version and record the new version number in the description field along with the change(s) made, who made them, why, and who else was involved.  This is worthwhile for CYA, as management may occasionally ask whether SCOM picked up on a specific event, and you don't want to be explaining why the alert for said event was turned off.  It's also useful for role changes: a new SCOM administrator tends to want to redo the environment because they have no clue what their predecessor(s) did and why, and that little history provides a quick rundown of the what and the why.  This assumes, of course, that best practices are followed for customizations (don't use the default MP, and by all means do not simply dump all your changes into one MP), and that all of this is actually communicated.
  8. Backups:  This can be org specific, as spinning up a new SCOM environment might be preferable to maintaining terabytes of backup space.  That is a perfectly reasonable choice, but the org needs to actually make a decision here (and in my opinion this one is a management decision).  That said, if the other practices are being followed, those customizations suddenly become much more important.  Customized (unsealed) MPs can be backed up via a script or an MP, and they are usually the most important item to back up, as they take the most work to restore manually (a PowerShell sketch follows this list).
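
For item 1, the commission and decommission steps can be scripted and bolted onto an existing build or retirement checklist.  A minimal sketch, assuming the OperationsManager module and placeholder server names; note that Uninstall-SCOMAgent performs a push uninstall, so the record for a machine that is already gone still has to be deleted from the Administration pane (or via the SDK):

    Import-Module OperationsManager
    New-SCOMManagementGroupConnection -ComputerName "MS01.contoso.com"

    # Commission: push the agent to a newly built server (server names are placeholders)
    $ms = Get-SCOMManagementServer -Name "MS01.contoso.com"
    Install-SCOMAgent -DNSHostName "newserver01.contoso.com" -PrimaryManagementServer $ms

    # Decommission: uninstall the agent from a server that is being retired
    $agent = Get-SCOMAgent -DNSHostName "oldserver01.contoso.com"
    Uninstall-SCOMAgent -Agent $agent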
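
For item 3, maintenance mode can be started from the patching or change script itself rather than relying on whoever is doing the work to remember the console.  A sketch, assuming an existing management group connection and a placeholder server name; it puts the Windows Computer object (and everything hosted on it) into maintenance mode for the duration of the work:

    # Put a server into maintenance mode for 90 minutes ahead of planned work (name is a placeholder)
    $instance = Get-SCOMClassInstance -Name "server01.contoso.com" |
        Where-Object { $_.FullName -like "Microsoft.Windows.Computer:*" }

    Start-SCOMMaintenanceMode -Instance $instance `
        -EndTime (Get-Date).AddMinutes(90) `
        -Reason PlannedOther `
        -Comment "Planned patching window (placeholder comment)"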
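
For item 8, the unsealed (customization) management packs are the piece that is painful to recreate by hand, and they can be exported on a schedule in a couple of lines.  A sketch, assuming an existing management group connection and a backup folder that already exists:

    # Export all unsealed (customized) management packs to a backup folder (path is a placeholder)
    $backupPath = "D:\SCOMBackup\UnsealedMPs"
    Get-SCOMManagementPack |
        Where-Object { -not $_.Sealed } |
        Export-SCOMManagementPack -Path $backupPath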

I hope at this point it is clear that rolling out SCOM is an organizational commitment.  A 'check the box' mentality won't work here (though that's probably true for all software).  There's too much that needs to be discussed, and there are too many processes that will require change.  If anything, this should give any SCOM admin or member of management a good starting point for making these changes.
