
HTTP/2 was a prototype designed by people who either assumed that mobile internet would get better much faster than it did, or who didn't understand what packet loss does to throughput.

I suspect part of the problem is the rush: people at major companies get a promotion if they do "high impact" work out in the open.

HTTP/2 "solves head of line blocking" which is doesn't. It exchanged an HTTP SSL blocking issues with TCP on the real internet issue. This was predicted at the time.

The other issue is that instead of keeping it a simple protocol, the temptation to add complexity to aid a specific use case becomes too strong. (It's human nature, I don't blame them.)




H/2 doesn't solve blocking at the TCP level, but it solved another kind of blocking at the protocol level by adding multiplexing.

H/1 pipelining was unusable, so H/1 had to wait for a response before sending the next request, which added a ton of latency and made server-side processing serial and latency-sensitive. The workaround was to open a dozen separate H/1 connections, but that multiplied setup cost and made congestion control worse across many connections.
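To make the contrast concrete, here's a rough Go sketch (placeholder URLs, assuming an HTTPS server that negotiates h2 via ALPN): the requests are fired concurrently through one client, and over HTTP/2 they ride a single TCP connection as separate streams, whereas an HTTP/1.1 client has to queue them or open extra connections.

    // Rough sketch, not from the thread: several concurrent requests through
    // one client. When the server negotiates HTTP/2, they are multiplexed as
    // streams on a single TCP connection; over HTTP/1.1 the client queues them
    // or opens extra connections instead.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "sync"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                ForceAttemptHTTP2: true, // use h2 when the server offers it
            },
        }

        urls := []string{ // placeholder URLs
            "https://example.com/a",
            "https://example.com/b",
            "https://example.com/c",
        }

        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) {
                defer wg.Done()
                resp, err := client.Get(u)
                if err != nil {
                    fmt.Println(u, "error:", err)
                    return
                }
                defer resp.Body.Close()
                io.Copy(io.Discard, resp.Body)
                // resp.Proto shows whether the request actually went over HTTP/2.
                fmt.Println(u, resp.Proto, resp.Status)
            }(u)
        }
        wg.Wait()
    }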


> it solved another kind of blocking on the protocol level

Indeed! And it works well on low-latency, low-packet-loss networks. On high-packet-loss networks it performs worse than HTTP/1.1, and it gets increasingly worse the larger the page the request is serving.
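Back-of-the-envelope, in Go (illustrative numbers, not measurements): the classic Mathis et al. approximation caps a single TCP connection at roughly MSS/RTT x 1.22/sqrt(p). Every HTTP/2 stream shares that one cap, while a browser's ~6 HTTP/1.1 connections each get their own, so under loss the aggregate gap is large.

    // Back-of-the-envelope sketch (illustrative, not from the thread): the
    // Mathis et al. approximation bounds steady-state TCP throughput at about
    // MSS/RTT * 1.22/sqrt(p). All HTTP/2 streams share one such bound; a
    // browser's ~6 HTTP/1.1 connections each get their own, so the aggregate
    // is roughly 6x under the same loss rate.
    package main

    import (
        "fmt"
        "math"
    )

    // mathisBps returns an approximate throughput ceiling in bits per second.
    func mathisBps(mssBytes int, rttSeconds, loss float64) float64 {
        const c = 1.22 // ~sqrt(3/2)
        return float64(mssBytes*8) / rttSeconds * c / math.Sqrt(loss)
    }

    func main() {
        mss := 1460  // bytes
        rtt := 0.080 // 80 ms, a plausible mobile RTT
        for _, loss := range []float64{0.001, 0.01, 0.02} {
            one := mathisBps(mss, rtt, loss)
            six := 6 * one // six independent HTTP/1.1 connections
            fmt.Printf("loss %.1f%%: 1 conn ~ %.1f Mbit/s, 6 conns ~ %.1f Mbit/s\n",
                loss*100, one/1e6, six/1e6)
        }
    }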

We pointed this out at the time, but were told that we didn't understand the web.

> H/1 pipelining was unusable,

Yup, but think how easy it would have been to create HTTP/1.2 with a better spec for pipelining (but then why not change other bits as well, and soon you get HTTP/2!). Of course pipelining only really works on a low-packet-loss network, because otherwise you get head-of-line blocking.
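For what it's worth, here's roughly what pipelining amounts to on the wire (Go sketch, placeholder host and paths): both requests leave immediately, but the responses can only be read back in the order the requests were sent, so anything slow at the front holds up everything behind it.

    // Sketch of HTTP/1.1 pipelining on the wire (host and paths are
    // placeholders): both requests are written back to back, but the server
    // must answer in order, so a slow first response blocks the second --
    // the head-of-line blocking mentioned above.
    package main

    import (
        "bufio"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Send two requests without waiting for the first reply.
        fmt.Fprint(conn, "GET /slow HTTP/1.1\r\nHost: example.com\r\n\r\n")
        fmt.Fprint(conn, "GET /fast HTTP/1.1\r\nHost: example.com\r\n\r\n")

        // Responses come back strictly in request order.
        r := bufio.NewReader(conn)
        for i := 0; i < 2; i++ {
            resp, err := http.ReadResponse(r, nil)
            if err != nil {
                panic(err)
            }
            io.Copy(io.Discard, resp.Body) // drain before reading the next one
            resp.Body.Close()
            fmt.Println("response", i+1, resp.Status)
        }
    }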

> open a dozen separate H/1 connections, but that multiplied setup cost

Indeed, that SSL setup is a pain in the arse. But connections are cheap to keep open, so with persistent connections and pooling it's possible to really nail down the latency.

Personally, I think the biggest problem with HTTP is that it's a file access protocol, a state interchange protocol and an authentication system. I would tentatively suggest that we adopt websockets to do state (with some extra features like optional schema sharing {yes, I know that's a bit of an anathema}), make HTTP/4 a proper file sharing protocol, and have a third system for authentication token generation, sharing and validation.

However, the real world says that'll never work. So connection pooling over TCP with quick-start TLS would be my way forward.
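Something along these lines with Go's net/http, as a minimal sketch (knob values and URL are illustrative): keep idle connections pooled so repeat requests skip the TCP and TLS handshakes, and cache TLS sessions so reconnects can resume rather than do a full handshake.

    // Minimal sketch of "persistent connections + pooling + cheap TLS" with
    // Go's net/http. The knob values are illustrative, not recommendations.
    package main

    import (
        "crypto/tls"
        "net/http"
        "time"
    )

    func newPooledClient() *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                // Keep idle connections around so repeat requests to the same
                // host skip the TCP and TLS handshakes entirely.
                MaxIdleConns:        100,
                MaxIdleConnsPerHost: 10,
                IdleConnTimeout:     90 * time.Second,
                // Cache TLS sessions so a reconnect can resume rather than do
                // a full handshake (the "quick start TLS" part).
                TLSClientConfig: &tls.Config{
                    ClientSessionCache: tls.NewLRUClientSessionCache(128),
                },
            },
        }
    }

    func main() {
        client := newPooledClient()
        // Reuse this one client everywhere; requests after the first to the
        // same host should come off the idle pool.
        resp, err := client.Get("https://example.com/") // placeholder URL
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
    }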


> Personally, I think the biggest problem with HTTP is that it's a file access protocol, a state interchange protocol and an authentication system.

HTTP is a state interchange protocol. It's not any of the other things you mention.


Ok, if you want to be pedantic:

"HTTP is being used as a file access, state interchange and authentication transport system"

Ideally we would split them out into a dedicated file access protocol, a generic state pipe (i.e. websockets), and some sort of well-documented authentication mechanism that is easy to understand, implement and secure (how hard can that be!?).

But to your point: HTTP was always meant to be stateless. You issue a GET request to find an object at a URI, and that object was envisaged to be a file (at least in HTTP/1.0 days). Only with the rise of cgi-bin in the mid-90s did that meaningfully change.

However I'm willing to bet that most of the traffic over HTTP is still files. Hence the assertion.


What?

HTTP is just a protocol. Stateful or stateless is orthogonal. HTTP is both and neither.

Also, HTTP has no concept of files (in general), only resources. Files can be resources! Resources are not files.



