> Those protocols are a lot more complicated than HTTP. So it's much harder for a small group to implement them.

Why does a small group need to reimplement HTTP/2 and HTTP/3? It's important that we have more than one or two implementations, but we don't need more than a small handful, and we definitely don't need every independent group reimplementing them. We just need enough that anyone who needs one has access to an implementation that's usable for them, whether it's bundled with the OS (such as Apple's Foundation framework, whose network stack supports HTTP/2) or available as a library (such as Hyper for Rust, or libcurl, which I believe supports HTTP/2).
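
To illustrate how little protocol code a consumer has to write when an implementation already exists, here's a minimal sketch in Go, whose standard library happens to bundle an HTTP/2-capable client (the URL is just a placeholder):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// The default transport negotiates HTTP/2 over TLS via ALPN,
    	// so no protocol-level code of our own is needed.
    	resp, err := http.Get("https://example.com/") // placeholder URL
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	// Typically prints "HTTP/2.0" when the server supports it,
    	// and falls back to "HTTP/1.1" otherwise.
    	fmt.Println(resp.Proto)
    }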




Because then you end up with more parts of your stack that you don't really understand and are unable to audit.

We're basically already doing that with TLS. Which went fine, until people realized that one of the major go-to TLS implementations contained years-old unfixed bugs that could be remotely exploited.


I'm not sure TLS would have been better off if everyone had rolled their own implementation instead.

Nor do I think that a more diverse world of TLS implementations would've led to better auditing of OpenSSL. We had barely enough eyeballs to audit OpenSSL, let alone to audit more stuff.

The issue with OpenSSL was that the protocol was sufficiently complicated and sufficiently critical that people just picked the available option. Perhaps those who did look into the code they were running concluded it was bad, but they weren't willing to create a new library. Besides, any new library would carry the stigma of "they're using a new, non-standard crypto library".

In that case, the solution would've been louder complaints about the code quality of OpenSSL.


Better for everyone to be using a small handful of battle-tested implementations written by experts than for everyone to roll their own implementation. The latter may mean that people have a better understanding of the component, but it's also pretty much guaranteed to mean the various implementations are buggy. Even very simple protocols are easy to introduce bugs into.

For example, it's pretty easy to write an HTTP/1.0 implementation, but it's also easy to open yourself up to DoS attacks if you do so. If you're writing a server, did you remember to put a limit on how large a request body can be before you shut down the request? Great! Did you remember to do that for the headers too? Limiting request bodies is an obvious thing to do. Limiting the size of headers, not so much. But maybe you thought of that anyway. What about dealing with clients that open lots of connections and veeery sloowly feed chunks of a request? The sockets are still active, but the connections are so slow you can easily exhaust all your resources just tracking sockets (or even run out of file descriptors). And this is just plain HTTP, without even considering interacting with TLS.
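
To make that concrete, here's a minimal sketch of the kinds of limits described above, using Go's net/http server as one example of a battle-tested implementation that exposes them as plain settings (the specific values are illustrative):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
    	// Cap the request body at 1 MiB; reads past the limit fail
    	// and the server closes the connection.
    	r.Body = http.MaxBytesReader(w, r.Body, 1<<20)
    	if _, err := io.ReadAll(r.Body); err != nil {
    		http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
    		return
    	}
    	fmt.Fprintln(w, "ok")
    }

    func main() {
    	srv := &http.Server{
    		Addr:    ":8080",
    		Handler: http.HandlerFunc(handler),

    		// Cap the total size of the request line plus headers.
    		MaxHeaderBytes: 16 << 10, // 16 KiB

    		// Bound how long a client may take to send the headers,
    		// which blunts the slow-trickle attack described above.
    		ReadHeaderTimeout: 5 * time.Second,

    		// Bound the whole request read and the response write.
    		ReadTimeout:  30 * time.Second,
    		WriteTimeout: 30 * time.Second,

    		// Drop idle keep-alive connections so they don't pin
    		// sockets and file descriptors indefinitely.
    		IdleTimeout: 60 * time.Second,
    	}
    	if err := srv.ListenAndServe(); err != nil {
    		fmt.Println(err)
    	}
    }

Getting the equivalent behavior right in a hand-rolled server is exactly the kind of work that's easy to skip or get subtly wrong.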



