Today I stumbled upon Fortify’s Java Open Review Project, whose goal is to count security defects in popular Java projects.
I’d like to tip my cap to Brian Chess and the folks at Fortify for this. It’s not quite a proper benchmark, but it is very interesting indeed. I’d also like to ask him (or others associated with the project) for their perspective: how it came about, where it’s going, and what the feedback has been like.
Also — and Brian, feel free to file this in the “unsolicited advice” drawer — I think Fortify could turn the crank on this a little and get some really interesting insights. For example:
- It would be great to have a bivariate plot showing size of code base (KLOC) versus the defect rate
- Maybe the plot divides itself into a 2x2 grid (“complex, hairy, ugly, big” v. “simple and secure” v. “large-scale engineered” v. “small and sloppy”)
- I’d like to see a friendlier format for the defect rate (defects/KLOC is OK, but defects per million might be better); a rough sketch of this, together with the quadrant idea, follows the list
- I’d love to see Coverity, Secure Software, Ounce Labs, Klocwork, and others run their tools on the same code bases so we can compare what they find
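
To make the quadrant and rate ideas concrete, here’s a minimal sketch of the kind of table I have in mind. Everything in it is hypothetical: the project names, KLOC figures, and defect counts are invented, the quadrant cutoffs are arbitrary, and I’m reading “defects per million” as defects per million lines of code.

```python
# Hypothetical (project, KLOC, reported defects) tuples -- not Fortify's data.
PROJECTS = [
    ("projA", 1200, 85),
    ("projB", 45, 2),
    ("projC", 800, 12),
    ("projD", 30, 9),
]

def defects_per_million_loc(kloc: float, defects: int) -> float:
    """Convert a raw defect count into defects per million lines of code."""
    return defects / (kloc * 1000) * 1_000_000  # same as defects/KLOC * 1000

def quadrant(kloc: float, rate: float,
             kloc_cutoff: float = 100, rate_cutoff: float = 50) -> str:
    """Place a project in the 2x2 grid of code-base size vs. defect rate.

    The cutoffs are placeholders; real ones would come from the data itself.
    """
    if kloc >= kloc_cutoff:
        return "complex, hairy, ugly, big" if rate >= rate_cutoff else "large-scale engineered"
    return "small and sloppy" if rate >= rate_cutoff else "simple and secure"

if __name__ == "__main__":
    for name, kloc, defects in PROJECTS:
        rate = defects_per_million_loc(kloc, defects)
        print(f"{name:6s} {kloc:6.0f} KLOC  {rate:7.1f} defects/MLOC  -> {quadrant(kloc, rate)}")
```

Swap in the real Java Open Review numbers and the quadrant labels fall out immediately; that table is exactly what I’d want to see plotted as the bivariate chart above.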
But these are nits. Overall, I am a big fan of public data. Nice work!
I nominate Brian to present a prettified version of this work at mini-Metricon at RSA 2007.