Matthew Fisher has written an interesting article for the Industry Insiders' blog, which is hosted on TechNet. We're getting quite a few submissions from people like Matt who have best-practice advice for you based on their practical experience. If you have something you'd like to share, then browse here for details on how to contribute. You don't have to be a security guru/expert - so don't be put off - you are bound to have solved a problem that others are still struggling with - that's the kind of information we're looking for. Articles can be as short as a couple of paragraphs, so you don't need to spend ages writing - I understand that you're busy already. The benefit to you of posting an article is the exposure you'll receive - the benefit to us is that good advice will be shared with the community, enabling us all to "get on with our work" rather than wasting time trying to solve problems that you've already overcome. Industry Insiders have posted articles on a wide range of subjects too - not just security - topics such as messaging, management and platform are popular as well.
Matt shares some of his experience "from the coal face", gained during years of experience advising the military and leading companies on writing more secure software. I share Matt's view that it's far less expensive to fix security at the initial development stage than to retro-fit it. IMHO there are direct parallels with the cost of changing functionality once the code is complete - clearly it's a nightmare if your customer mandates a change during acceptance testing! Here's a really rough sketch to show what I mean:
This article is not purely for the attention of developers, though. Matt's informal style lends itself well to informing business and IT professionals of the reasons WHY much code is inherently insecure - just blaming the dev guys (and gals!) isn't fair. For security to be effective it has to be part of the culture of all involved, from the end user through to the implementation team, operations and development. Just think about it for a moment: if the user is asked a "security question" by the application but isn't given enough information (in their terms) to make a sensible decision, then it's hardly surprising if they make a bad choice.
Something that I know Matt's passionate about is incorporating assessment of code security at user acceptance testing, and indeed specifying it at the functional definition stage too. Personally I think this is essential - consumers have the right to expect code to be secure, BUT they should be able to clearly state what they mean by "secure" and therefore enable the developers to incorporate the appropriate security controls to mitigate the expressed risk.