
For those complaining about 3rd-party JIT engines not being allowed for 3rd-party browsers, please consider the vulnerability track records of Google Chrome, Mozilla Firefox, and Safari:

I'm taking the best year for all of these (2024) - there are far worse years in the past 5 that could have been picked.

Chrome had 107 vulnerabilities that were Overflow or Memory Corruption. That is a vulnerability every 3.4 days. [0]

Mozilla had 52 vulnerabilities that were Overflow or Memory Corruption. This is a vulnerability about every 7 days. [1]

Safari had 10 vulnerabilities that were Overflow or Memory Corruption. This is a vulnerability about every 36 days. [2]
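
For what it's worth, the per-day figures are just those CVE counts divided into a year. A tiny C sketch of the arithmetic (counts taken from the cvedetails pages cited below):

    #include <stdio.h>

    int main(void) {
        /* 2024 Overflow / Memory Corruption CVE counts from cvedetails */
        const char *browser[] = { "Chrome", "Firefox", "Safari" };
        const int cves[] = { 107, 52, 10 };

        for (int i = 0; i < 3; i++)
            printf("%-8s 365 / %3d = one CVE every %.1f days\n",
                   browser[i], cves[i], 365.0 / cves[i]);
        return 0;
    }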

Sources:

[0] <https://www.cvedetails.com/product/15031/Google-Chrome.html?...>

[1] <https://www.cvedetails.com/product/3264/Mozilla-Firefox.html...>

[2] <https://www.cvedetails.com/product/2935/Apple-Safari.html?ve...>






How many of those vulnerabilities were related to JITs? How many were actually feasibly exploitable, and not just theoretical? How many would have resulted in something actually dangerous (code execution, privilege escalation) and not just something annoying (denial of service)?

How many people are actively doing security research on each browser? Is the number of finds per browser more a function of how many eyeballs are on it than how many issues actually exist?

I don't doubt that there are actual, real differences here, but presenting context-free stats like that is misleading.


Your criticism is valid. Adding context is very subjective, and getting objective metrics for some of these questions is an open problem for the software world.

I don't think it matters whether the vulnerabilities are JIT-related - a process that can JIT can create code, so any exploitable (controllable) overflow or memory-corruption vulnerability CAN be pivoted into arbitrary code execution.
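
To make that concrete, here is a minimal C sketch (my own illustration, nothing from any of these browsers) of why JIT capability matters: a process allowed to create writable-and-executable memory can turn bytes it writes into running code, so an attacker with a controllable corruption bug only needs to redirect control flow into such a region. It assumes x86-64 and a POSIX system that still permits RWX mappings - which is exactly what W^X rules and JIT entitlements try to forbid:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42; ret */
        unsigned char payload[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        /* A JIT-capable process can request memory that is both
         * writable and executable. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        /* "Data" becomes code: copy bytes in, then jump to them. An
         * exploited overflow would do the redirect instead of this call. */
        memcpy(page, payload, sizeof payload);
        int (*fn)(void) = (int (*)(void))page;
        printf("jitted code returned %d\n", fn());

        munmap(page, 4096);
        return 0;
    }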

The problem with CVEs is that proving exploitability is not required for one to be assigned, and proving it can take a lot of effort from one person or several. Earlier this week a researcher quoted "weeks" to me per bug - they were talking about some of the Chrome bugs, and said it was not possible to keep up with the number of bugs being found.

I believe (but cannot back it up) that security bugs follow a bathtub curve for each change set. If there is a lot of change in your code-base, you will sit near the high point of that curve for the life of the project. It also probably matters quite a bit what sort of changes are being made: working to get high performance seems (again, a feeling) to increase the chance of creating a security vulnerability.
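
As a toy illustration of that intuition (my own sketch, made-up parameters, and only the "early failure" side of the bathtub): if every change set ships with an elevated bug-discovery rate that decays over time, continuous churn means the project-wide rate never gets a chance to fall back down:

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical per-change-set discovery rate: high right after
     * shipping, decaying toward a small residual. */
    static double per_changeset_rate(double days_since_shipped) {
        return 1.0 * exp(-days_since_shipped / 30.0) + 0.05;
    }

    int main(void) {
        const int ship_interval = 14;   /* a change set lands every 2 weeks */

        for (int day = 0; day <= 365; day += 30) {
            double total = 0.0;
            /* Sum the contribution of every change set shipped so far. */
            for (int shipped = 0; shipped <= day; shipped += ship_interval)
                total += per_changeset_rate(day - shipped);
            printf("day %3d: relative bug-discovery rate %.2f\n", day, total);
        }
        return 0;
    }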

The level of public research is a tough metric - the reward and motivation factors are not the same for every browser. There is also the issue of internal research teams: they find bugs before release, so those bugs never really "exist". Does counting issued CVEs tell you anything about the quality or level of internal research? What is a "good" metric for any of this?



