Patching – how do you cope?

Patching is a necessary evil.

A lot of IT pros don’t like to talk about patching due to the amount of work it can generate in their already PACKED jobs. It came up as a heated topic of discussion at a user group event I attended a while back – so much so that I decided to produce a monthly podcast (“Security Bulletins for the Regular IT Guy” – go check it out!) to help with the “Patch Tuesday” information overload. Its goal is to cut through the jargon and corporate speak in the security bulletins and give you what you need to know to get started on your patch plans. While I was editing the last episode, it got me thinking that as an IT community, we need to start a conversation around the processes we use to handle patching in our own environments.

Back when I was jiggling cables under desks and looking after my LAN (after I ripped and replaced the ARCnet), I remember a random, chaotic mess of patches that needed to be deployed – and they always seemed to pop up when you absolutely did NOT have the time to look at them. I remember the days before automatic “let me check for you” services and regularly scheduled updates. Luckily, things have gotten better – but patching is still something where we each have our own process and our own tolerance for how long a patch can wait before deployment.

If you’re an old IT hack, patching is probably second nature and ingrained in your DNA. If you are new to the game of IT – get used to it and put a process in place to handle it. Your best bet is (to quote the Boy Scouts): be prepared.

I thought I would share some things that I know have helped other IT pros out there with their patching routines. Some of these recommendations will take time to implement, but trust me – once you do, you won’t approach patching with that knot of nerves in your gut. You know what? Send me your ideas or comment below, and I will expand on them so everyone can benefit from your experiences:

  • Awareness – what do you have to watch for?

• By this I mean: what’s your inventory like for your patch checks? If your software vendors don’t have an auto-update service or a regularly scheduled release process, you’re going to want to find out proactively when something new comes out, instead of reactively.

• Are you including driver updates, firmware updates and BIOS updates?

• What about the software that ends up on your machines because your users installed it – the stuff you don’t find out about until there is a problem?

• Don’t get overwhelmed – start small and focused so you can properly document your process for others to follow, THEN gradually expand. Remember: you eat an elephant one bite at a time.

• For your Microsoft applications and operating systems, don’t forget the free Microsoft Baseline Security Analyzer (MBSA). It can be run as a one-off local scan or a network scan, or rolled into larger reporting and monitoring solutions like System Center Operations Manager.

  • Automation is your friend. Look into it.

• If you work in a large organization, you probably already have an automated software distribution process in place. Check into its patch/update reporting capabilities and start requesting regular scans and checks on your systems. The System Center family of products has you covered here.

• If you don’t have something in place, definitely look at your options. Check out WSUS for your Microsoft operating systems and applications – regardless of your company size – IT’S FREE, and it handles reporting, approvals and deployment centrally (or distributed) across your environment.

  • Test, Test and Test again.

• This used to be really hard and required A LOT of cycles to stand up a mocked-up environment. Now it is easier, with virtualization taking place all around you. Create a virtual lab of representative systems (within reason – how complex is your environment?) using Virtual PC, Hyper-V or VMware. Include your baseline of applications from the awareness phase and create a baseline test of what needs to work both before and after patching.

• Boot up your virtualized environment. Take a snapshot. Run your application test to validate functionality pre-patch. Apply your patch. Run your application test again to validate functionality post-patch. Does it pass – do the results match?

    • Document testing results and patch approval.

  • Deployment – get ‘er done!

• Use your weapon of choice. At a minimum, you should ABSOLUTELY have a WSUS implementation in your environment, with your desktops and servers configured to point to it for updates. It’s free, can coexist on existing servers if required, and centralizes your authorization of patches so machines don’t randomly download patches that have not undergone testing. If you haven’t already implemented it, I strongly suggest you check it out – Microsoft publishes both an overview page and a step-by-step deployment and implementation guide for WSUS.

• If you are already using a larger solution for software deployment, ensure it works correctly with your patching strategy and is used to the best of its abilities.

  • Reporting – Otherwise known as C.Y.B. (Cover Your Butt) 

• It all comes full circle after a round of patching. You now have a new baseline for your systems. You will need to update your documentation and be able to report on which systems have been patched, which ones have not, and why they are in the state they’re in.
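The snapshot / pre-patch test / patch / post-patch test loop above can be sketched in a few lines of Python. This is a minimal sketch, not a real tool: the VM, its application checks, and `apply_patch` are all hypothetical stand-ins you would wire up to your own lab (Hyper-V, VMware, etc.), but the approve-or-roll-back logic – and the report it leaves behind for the C.Y.B. phase – is the point.

```python
# Hypothetical sketch of the test-then-approve patch loop described above.
# The "VM" here is just a dict, and app_checks are stand-ins for your real
# baseline application tests from the awareness phase.

def run_app_tests(vm):
    """Run every baseline application check; returns {test_name: passed}."""
    return {name: check(vm) for name, check in vm["app_checks"].items()}

def validate_patch(vm, apply_patch):
    """Snapshot, test, patch, retest; approve only if results still pass and match."""
    snapshot = dict(vm)                  # take a snapshot (stubbed as a shallow copy)
    before = run_app_tests(vm)           # validate functionality pre-patch
    apply_patch(vm)                      # deploy the patch under test
    after = run_app_tests(vm)            # validate functionality post-patch
    approved = all(after.values()) and after == before
    if not approved:
        vm.clear()
        vm.update(snapshot)              # roll the lab VM back to the snapshot
    # document the result for the reporting / patch-approval phase
    return {"before": before, "after": after, "approved": approved}
```

For example, a patch that breaks the (pretend) email app would come back with `approved` set to `False` and the VM rolled back, and the returned dict is exactly the kind of per-system record the reporting phase needs.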

The worst possible thing you can do is to NOT patch your systems. You are putting your organization (and your job security) at risk by not staying up to date. If you simply cannot patch something, you are going to want to at least research mitigation steps to keep the system safe until you can apply the patch. This will require digging further into the patch itself, and ultimately it is only a short-term Band-Aid fix.

As I mentioned at the beginning of this post, this is part of a larger conversation we need to have as a community in order to help each other out. What do you do, and how has it helped you cope with patching? Comment below or email me, and I’ll make sure to include a follow-up post here – and quite probably a special “out of band” podcast episode with your learnings.


Comments (3)

  1. Sean Kearney says:

And don’t forget: if you’re trying to control what does and doesn’t go in network-wide, just download and use the free WSUS (Windows Server Update Services). It also helps control Internet bandwidth by having only ONE device download the updates, and I do believe you can still set up Windows Update / Microsoft Update as a fallback on the workstations.


  2. Ed says:

    I used WSUS for a couple of years. When updates came out on Tuesdays, I would get them out on my system and a few others while monitoring the Windows Update newsgroup and other sites to see if there were any issues. We had really just one critical application. If there was no common thread and the select few PCs had no issues with the patches, most of the time, the patches went out company-wide on the Thursday night. All but a handful of systems were considered critical.

As for servers, unless there was something very critical, they were done manually [not via WSUS] and rebooted during off hours, but not immediately.

3. Thanks for sharing, Ed. Glad to hear that you have a plan to tackle Patch Tuesday / Update Tuesday.

Keep ’em coming.
