Time to look at one of those annoying but important parts of programming: security. Over on the MITRE site, they've released a new Top 25 Most Dangerous Programming Errors list. I think it's both important and useful, so let's talk about it a bit.
It's basically the most-wanted list from the Common Weakness Enumeration (CWE), "A Community-Developed Dictionary of Software Weakness Types". Those of you who are used to Patterns should think of this as a formalized list of security anti-Patterns. The full list is pretty huge (hundreds of entries), but this is a pretty good stab at the 25 that you should absolutely be paying attention to. Almost all apply at least loosely to all online software; most apply to almost all software. They also have a pile of "focus profiles", side topics like: the ones that almost made it into the list; which languages are particularly prone to which problems; and which weaknesses customers should be particularly watching out for.
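To make "security anti-Pattern" concrete, here's a minimal sketch (in Python) of one perennial Top 25 entry, SQL injection (CWE-89). The table and names are invented for illustration, but the shape of the bug is exactly what the list describes:

```python
# A minimal sketch of CWE-89 (SQL injection); table and names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_unsafe(name):
    # The anti-Pattern: splicing user input directly into the SQL string.
    # Input like "' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # The fix: a parameterized query, so the input is treated purely as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_unsafe("' OR '1'='1"))  # [('admin',)] -- the injection leaks rows
print(lookup_safe("' OR '1'='1"))    # [] -- no user has that literal name
```

A surprising number of the entries boil down to the same root sin in one costume or another: treating untrusted input as trusted.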
My recommendation is that, if you're involved with software creation in any way, you bookmark this page. (There is a convenient guide at the top describing how it applies to programmers, managers, customers and so on.) Skim it now to get started; read it in depth at whatever pace suits you, and make sure you actually understand what each section is talking about. Re-read it occasionally, to remind yourself of what the traps are. Seriously: while it's not a substitute for an in-depth course on software security, it's the closest I've seen a single (admittedly long) page get.
Also, I call your attention to a related page that may affect your life in the future: SANS' Application Security Procurement Language. This is suggested contract boilerplate that they recommend customers consider writing into software contracts. It calls for projects to have official security leads, and for formal sign-off on security as part of project acceptance. Obviously SANS is rather self-interested here (they teach software security), but by providing this boilerplate they make it easier for companies to simply write this language into every contract they issue. My guess is that this is going to become increasingly common over the next five years, so if you do any sort of contract work (either individually or as part of a team), I recommend thinking about how you're going to handle this.
Of course, one thing they *don't* talk about is cost. Doing security right isn't free -- there's a fair amount of overhead, simply because you have to put real work into being rigorous about security. I'm curious whether anyone has done a formal study of how the costs line up: the upfront development cost of doing security right vs. the risks imposed by not doing it. My guess is that it's generally well worthwhile, but that a lot of people will be tempted to sweep those upfront costs under the rug rather than accounting for them properly. It's very much like QA in that respect: too often under-planned, and therefore too often skimped on, with resulting costs after the fact.