I'd be a little cautious about this, as it seems to be plug-and-chug from CWE, followed by a breakdown of the "CWEs least and most likely to be detected in code review". I don't think CWE is a particularly good taxonomy, and, in any case, apart from obvious cases (yes, use of banned functions is easy to spot), it's pretty much orthogonal to the real factors that make vulnerabilities hard to spot in code review.
I haven't, like, done a study, but I have done a lot of security code review, and my instinct is that the basic issues here are straightforward: bugs are hard to spot when they're subtle (ie, they involve a lot of interactions with other systems outside the one under review) and when they involve a large amount of context (like object lifecycle bugs).
Which brings up another issue with this study: I get why Chromium OS is an interesting data set, but I don't think it's a particularly representative case study. Another factor in effectiveness of security review: how much of the whole system you can fit in your head at once.
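To make the "large amount of context" point concrete, here's a rough, hypothetical sketch (TypeScript, nothing from the study itself) of the kind of lifecycle bug I mean: the diff under review looks fine, and the bug only exists because some other subsystem can close the connection in between.

```typescript
// Hypothetical sketch: a lifecycle bug that is invisible in the diff under review.

class Connection {
  private closed = false;

  close(): void {
    this.closed = true;
  }

  send(msg: string): void {
    if (this.closed) throw new Error("send on closed connection");
    console.log(`sending: ${msg}`); // stand-in for the real socket write
  }
}

class Session {
  private pending: string[] = [];

  constructor(private conn: Connection) {}

  // The change under review: batch messages instead of sending immediately.
  queue(msg: string): void {
    this.pending.push(msg);
  }

  flush(): void {
    // Looks correct in isolation, but another subsystem may call
    // conn.close() between queue() and flush(); spotting that requires
    // context well outside this file.
    for (const msg of this.pending) this.conn.send(msg);
    this.pending = [];
  }
}
```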
> Another factor in effectiveness of security review: how much of the whole system you can fit in your head at once.
This is why I think using pure functions as much as possible, i.e. part of the functional programming mindset, is so important for making code reviewable. Yes, you can make nice abstractions in OOP, but at least in my experience OOP, with its stateful objects interacting, makes you need to know a lot more about the system than pure functions do.
And yes, it's not a panacea: large allocations may take too long to copy, which is why the next best thing is "mostly functional". Most techniques don't work in every case.
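As a toy illustration (hypothetical code, not from the article), compare how much context each version demands from a reviewer:

```typescript
// Stateful version: to review applyDiscount() you need to know everything
// that can mutate `total` and `discountRate` over the object's lifetime.
class Cart {
  total = 0;
  discountRate = 0;

  addItem(price: number): void {
    this.total += price;
  }

  applyDiscount(): void {
    this.total -= this.total * this.discountRate;
  }
}

// Pure version: every input is in the signature, so the function can be
// reviewed (and tested) entirely on its own.
function discountedTotal(prices: readonly number[], discountRate: number): number {
  const total = prices.reduce((sum, p) => sum + p, 0);
  return total * (1 - discountRate);
}
```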
+1. I got asked to focus on the CWE top 25 in a node app. Problem is, half of the top 25 aren't applicable to node (buffer overflows etc., which only matter in memory-unsafe languages).
Uncontextualised security is bad security. Frequency of bugs is not a good indicator of severity.
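For example (a minimal sketch, assuming Node's standard Buffer API), the classic out-of-bounds write just doesn't exist as a silent memory-corruption bug in node:

```typescript
// Node's Buffer API is bounds-checked, so an out-of-bounds write throws
// instead of silently corrupting adjacent memory.
const buf = Buffer.alloc(4);

try {
  buf.writeUInt32BE(0xdeadbeef, 4); // offset 4 is past the end of a 4-byte buffer
} catch (err) {
  console.error(err); // RangeError [ERR_OUT_OF_RANGE], not an exploitable overflow
}
```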
> how much of the whole system you can fit in your head at once.
That reminds me of an old saying on complexity: a complex system has no obvious bugs; a simple system obviously has no bugs.
Although a lot of people seem to reach for abstraction to "solve" complexity, I don't think that's the solution either --- because it can hide bugs too: https://news.ycombinator.com/item?id=25857729
IMHO abstraction is really a tool of last resort, for when you can't reduce the overall complexity of the system any further.
> how much of the whole system you can fit in your head at once.
It's an issue that comes up consistently: the majority of bugs are found in large releases. The smaller the release, the less likely you are to introduce a bug (this is part of why some methodologies moved away from waterfall).
I wonder if the future of security practices will just be "deliver small pieces of functionality / changes"
Doesn't that seem like a tautology? I think the metric needs to be "bugs per feature" rather than "bugs per release": it seems obvious that more changes will be buggier than fewer changes.
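To put made-up numbers on it: a release with 20 changes and 2 bugs (0.1 bugs per change) is doing better than a release with 5 changes and 1 bug (0.2 bugs per change), even though it shipped more bugs in absolute terms.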