Your criticism is valid. Adding context is very subjective, and getting objective metrics for some of these questions is an open problem for the software world.

I don't think it matters whether the vulnerabilities are JIT-related - a process that can JIT can create code, so any exploitable (controllable) overflow or memory-corruption vulnerability CAN be pivoted into arbitrary code execution.
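
To make that concrete, here is a minimal sketch (my own illustration, assuming Linux on x86-64 with no W^X enforcement) of the primitive every JIT engine holds: a page that is both writable and executable. An attacker who controls writes inside such a process already has everything needed to plant and run native code:

    import ctypes
    import mmap

    # x86-64 machine code for: mov eax, 42 ; ret
    CODE = b"\xb8\x2a\x00\x00\x00\xc3"

    # The primitive a JIT engine needs: an anonymous page that is
    # readable, writable, AND executable at the same time.
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(CODE)

    # Call the freshly written bytes as a C function returning int.
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
    print(func())  # prints 42

Hardened platforms (W^X, SELinux execmem, the macOS hardened runtime) will refuse that RWX mapping, which is exactly why real JITs have to negotiate around those mitigations - and why a memory bug in a JIT-capable process is such a valuable pivot.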

The problem with CVEs is that proving exploitability is not required for one to be assigned, and proving it can take a lot of effort from one or more people. Earlier this week some researchers quoted "weeks" to me per bug - they were quoting numbers for some of the Chrome bugs - and said it was not possible to keep up with the number of bugs being found.

I believe (but cannot back it up) that security bugs follow a bathtub curve for each change set. If you've got a lot of change in your code-base, you'll sit near the high early point of the curve for the whole project. It also probably matters quite a bit what sort of changes are being made: working to get high performance seems (again, a feeling) to increase the chance of creating a security vulnerability.
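
For illustration only, here is one common way to model a bathtub curve: the sum of a decreasing Weibull hazard (bugs shaken out of new code), a constant background hazard, and an increasing hazard (accumulated complexity and bit-rot). Every parameter value below is invented; only the shape matters:

    # Toy bathtub hazard - all parameter values are made up for illustration.
    def weibull_hazard(t, shape, scale):
        # Standard Weibull hazard: (k/s) * (t/s)^(k-1)
        return (shape / scale) * (t / scale) ** (shape - 1)

    def bathtub_hazard(t):
        infant  = weibull_hazard(t, shape=0.5, scale=10.0)   # early churn, decreasing
        random  = 0.02                                       # steady background rate
        wearout = weibull_hazard(t, shape=3.0, scale=100.0)  # late growth, increasing
        return infant + random + wearout

    for t in (1, 10, 50, 100, 150):
        print(t, round(bathtub_hazard(t), 4))  # high, dips, then rises again

The claim above then reads: heavy churn keeps resetting t toward the left edge of the curve, so a fast-moving code-base never gets to enjoy the flat bottom.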

The level of public research is a tough metric; the reward/motivation factors are not the same across projects. There is also the issue of internal research teams: they find bugs before the code ships, so those bugs never publicly "exist". Does measuring the number of CVEs issued indicate the quality or level of internal research? What is a "good" metric for any of this?
