Would a properly managed IT environment have withstood Conficker?

Before I start here: Let's be clear that I will not say (and will never say) that if a customer was infected with Conficker they had a poorly managed network!

I have had a lot of discussions over time about the reasons customers got infected. We all know Conficker's attack vectors, but what are the real reasons behind the infections?

  • Poor or no Patch Management: This brings us back to my Russian Roulette post from January. If you decide not to patch, or leave it to each admin to decide, that is, in my opinion, negligent. And please do not tell me that this is a Microsoft-only problem. Based on the Security Intelligence Report, we are responsible for 3% of the industry-wide vulnerabilities. So do not forget to patch the other 97% as well!
  • Unmanaged Machines: This comes down to compliance management as well and is not an easy problem to solve. But we have seen customers who thought they were fully patched, only to find some unmanaged machines which were not – and these machines were the starting point of the infection. Let's be clear on this one: this is a problem which can be addressed, but it is a project you have to run, with the corresponding investment. There is technology out there, like 802.1x to keep such machines off the network, or IPsec authentication to make sure they cannot talk to your key machines – but you have to deploy it.
  • Weak (or nonexistent) passwords: Once you had (or have) Conficker on the network, weak passwords are a pretty good vector for it to spread further. Again, it is about compliance management.
  • Everybody is an Admin: We can debate again about who is to blame for this. It is a fact that a lot of users run as admins (yes, in enterprises as well) because certain applications do not run without admin privileges. The virtualization features of Windows Vista definitely helped to reduce this. Since Windows Vista (and Windows 7), I no longer run as admin, nor does anybody else in my network! This is one of the key achievements of UAC.
  • Unsupported Operating Systems: It is really unbelievable how many NT4 machines we still find out there. We retired NT4 SP6a on December 31, 2004. Please do not blame us for our policy now: today we support our products for 10 years at the supported service pack level (for Business and Developer products). I know there are reasons why you cannot upgrade everything – but there are a lot of machines out there which could be upgraded! Additionally, it sometimes worries me how often I see old operating systems and unsupported applications connected to the network without any further protection and/or shielding.
  • Anti-Malware Protection: This is a very difficult area. There are customers who had anti-malware in place and did their best to keep the signatures updated – yet their AV vendor failed to protect them. We have had a signature for Conficker.B out since December 29th – one that not only detects Conficker.B but removes it. And some of the vendors talking very loudly about Conficker did a rather doubtful job themselves.
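On the weak-password point: Conficker carried a built-in dictionary of a few hundred common passwords and tried them against admin shares to spread. As a minimal sketch of why a compliance check helps here (the password list and account data below are purely illustrative, not Conficker's actual dictionary), a dictionary-style audit can be a few lines:

```python
# Illustrative sketch: flag accounts whose passwords a dictionary
# attack of the kind Conficker used would guess quickly.
# The word list here is a made-up sample, not the worm's real list.

WEAK_PASSWORDS = {
    "password", "123456", "12345678", "admin", "letmein",
    "qwerty", "abc123", "test", "server", "changeme",
}

def is_weak(password: str) -> bool:
    """True if the password would fall to a trivial dictionary attack."""
    candidate = password.lower()
    return (
        candidate in WEAK_PASSWORDS        # direct dictionary hit
        or (candidate.isdigit() and len(candidate) <= 8)  # short digit runs
        or len(password) < 8               # too short to resist guessing
    )

# Hypothetical account inventory; in practice this would come from
# your compliance or identity management tooling, never plain text.
accounts = {"svc_backup": "Server01!", "helpdesk": "password", "dba": "12345678"}
flagged = [user for user, pw in accounts.items() if is_weak(pw)]
print(flagged)  # the helpdesk and dba entries are flagged
```

The point is not the script itself but the process around it: a policy that defines what "weak" means, and a compliance mechanism that actually checks it.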

I know that it is not easy but looking at the reasons above, I am convinced that a well-managed environment would have had a good chance to withstand Conficker. Well-managed meaning:

  • Having proper policies in place, where Business Risk Management is seen as a fundamental part of IT management.
  • Enforcing these policies through administrative measures and audits as well as through technical means (yes, the auditor can be your friend).
  • Punishing violations of policy.
  • Applying not "best practices" but good practices to your environment.

It showed us once more that running a network of a certain size is an engineering practice, not an art. Today's economic situation does not help either, as a lot of companies want to save costs. However, to me a well-managed network is an inexpensive network as well – and a secure one! So we should definitely think about this further. I am convinced that today we have to move from "best of breed" to "best of need", and in addition we have to make sure that we deliver, and you deploy, a "best of need" integrated platform to address the challenges outlined above – so that you can concentrate on your business strategy as well as on your processes!

This raises the million-dollar question: how the hell can you make sure that you know what you need? Well, you have to do Business Risk Management. A lot of companies – to me – miss the "business" in that statement, if they do Risk Management at all. From my point of view, it is not the CSO's job to decide which risks are acceptable for a company; that is the Management Board's job. At the end of the day it is a business decision, not an IT decision! However, it is the CSO's job to make sure the Management Board understands the risks they are taking, on a level which is understandable for a business leader.

Let me add one final remark. Microsoft IT has a pretty tough job, with all these geeks connected to the network running all sorts of beta software. Yet I did not feel any disruption from Conficker. So there is a good chance they did an excellent job keeping it out. It is doable.

I will try to blog more about the platform in the near future. I have started to bring these pieces together in my test environment to get some hands-on experience, and I want to share more of this with you.


Comments (1)
  1. Shoaib Yousuf says:


    This is a topic you can debate for hours. I think this has been happening for several years: organizations don't follow proper patch and compliance management procedures, and when they get hit by viruses / worms – they know who to blame!!

    After all, even when we buy a laptop it only comes with a two-year warranty – it doesn't give you a lifetime guarantee against breaking down.

    I agree to a certain point that patch management is really hard to implement. Most organizations find it very hard to patch their servers / clients straight away as soon as a patch is released. There are heaps of reasons, but the more prominent ones are:

    1) Change management – the patch has to go through testing, approval from the change management team and so forth – which takes weeks (in most cases)

    2) Organizations hate rebooting their servers

    3) Most of their applications are out of date, and they are not sure whether a new patch will affect them or not.

    This is a huge problem, and the bad guys will continue to take advantage of it. Alas, people will continue blaming Microsoft and other vendors for not providing secure software.


Comments are closed.