Is Security Research Ethical?

Shoaib's blog pointed me to a pretty interesting article called "Face-Off: Is vulnerability research ethical? - Security Experts Bruce Schneier & Marcus Ranum Offer Their Opposing Points of View". Not surprisingly, Bruce says "yes" and Marcus says "no". If you read through their points, you might even agree partly with each of them:

  • Bruce Schneier: Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible--and more and less legal--ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent.
  • Marcus Ranum: Trust model? What's that? The so-called vulnerability "researchers" are already sharpening their knives for the coming feast. If we were really interested in making software more secure, we'd be trying to get the software development environments to facilitate the development of safer code--fix entire categories of bugs at the point of maximum leverage.

But to me, this is the wrong question: it is not so much about whether security research itself is ethical. To me, it comes down to two things:

  • Once you find a security vulnerability, what do you do with it? Do you practice responsible or irresponsible disclosure?
  • And then, what does the vendor do with it? Does the company act on it? And does it provide security updates for free over the support lifecycle of a product? The following quote from Marcus scares me: "The vulnerability game has given vendors a fantastic new way to lock in customers--if you stop buying maintenance and get off the upgrade hamster wheel, you're guaranteed to get reamed by some hack-robot within six months of your software getting out of date."

That would be really bad: vendors making money by selling security updates…

Roger