> H2 attempts to solve a number of problems present in HTTP 1.1 when your connection _isn't_ perfect by multiplexing many pseudo-connections down one TCP pipe. I still don't quite understand how that was meant to improve performance over a lossy link.
> It seemed to me that HTTP/2 was meant to be a file-transfer protocol, designed by people who don't really understand how TCP performs in the real world (especially on mobile).
> I still don't quite understand how that was meant to improve performance over a lossy link.
It does!
First, consider the case that HTTP pipelining addresses: lots of small independent requests. By packing multiple requests into the same connection, you can avoid handshakes, TCP slow start, etc. Perhaps more importantly, if your connection has more bytes in it and therefore more packets to ACK, then you can actually trigger TCP's detection of packet loss via duplicate acknowledgments, as opposed to waiting for retransmission timeouts.
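The duplicate-ACK point can be made concrete with a toy model (not a real TCP stack; the function names here are made up for illustration). A sender treats three duplicate ACKs as a loss signal and fast-retransmits (RFC 5681); the receiver generates one duplicate ACK for each segment it receives after the hole. So a single fat connection with many segments in flight detects the loss quickly, while a half-empty connection has to sit out a retransmission timeout:

```python
# Toy illustration: why more packets in flight helps TCP detect loss.
# Each segment arriving after a dropped one makes the receiver re-ACK the
# last in-order byte, producing one duplicate ACK per segment.

def dup_acks_after_loss(packets_in_flight: int, lost_seq: int) -> int:
    """Duplicate ACKs generated when segment `lost_seq` is dropped."""
    return max(0, packets_in_flight - lost_seq)

def loss_detected_fast(packets_in_flight: int, lost_seq: int) -> bool:
    # Fast retransmit fires on the third duplicate ACK (RFC 5681).
    return dup_acks_after_loss(packets_in_flight, lost_seq) >= 3

# One multiplexed connection, 10 segments in flight, segment 4 lost:
# segments 5..10 each produce a dup ACK -> fast retransmit, quick recovery.
assert loss_detected_fast(packets_in_flight=10, lost_seq=4)

# Many tiny parallel connections, ~2 segments each, loss on the last
# segment: zero dup ACKs -> the sender is stuck waiting for the RTO.
assert not loss_detected_fast(packets_in_flight=2, lost_seq=2)
```

The RTO case is the killer: a timeout is typically hundreds of milliseconds at minimum, versus roughly one RTT for fast retransmit.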
Further, browsers typically limit the number of parallel requests, so you potentially have another source of HOL blocking if you've exhausted your connection pool (as happens much more quickly without pipelining).
That said, pipelining is also subject to HOL blocking: let's say that the first external resource to be requested is some huge JS file (not all that uncommon if you're putting `<script>` tags in your `<head>`). That needs to finish loading before we can fetch resources that are actually needed for layout, e.g. inline images.
H2 provides a handful of innovations here: one, it allows for prioritization of resources (so you can say that the JS file isn't as important); two, multiplexing allows multiple underlying responses to progress even if that prioritization isn't explicitly in place.
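The multiplexing half of that is easy to sketch. Below is a toy round-robin frame scheduler (the real thing is H2 DATA frames over one TCP connection, with a much richer priority scheme; file names and frame sizes here are invented). The point is that a small response completes early even though a huge one started first, which pipelining can't do because pipelined responses must come back whole and in order:

```python
from collections import deque

# Toy H2-style multiplexer: split each stream's response body into frames
# and interleave frames round-robin on the single connection.
def multiplex(streams: dict[str, bytes], frame_size: int = 4) -> list[str]:
    queues = {name: deque(body[i:i + frame_size]
                          for i in range(0, len(body), frame_size))
              for name, body in streams.items()}
    finish_order = []
    while queues:
        for name in list(queues):
            queues[name].popleft()          # "send" one frame of this stream
            if not queues[name]:
                del queues[name]
                finish_order.append(name)
    return finish_order

# huge.js is 40 bytes (10 frames); small.css is 8 bytes (2 frames).
order = multiplex({"huge.js": b"x" * 40, "small.css": b"y" * 8})
assert order == ["small.css", "huge.js"]   # the small file completes first
```

With explicit prioritization you'd additionally weight the per-stream queues instead of strict round-robin, but even the naive interleaving above already breaks the HOL dependency.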
Yes, it still runs into problems with lossy links (as that's a problem with TCP + traditional loss-based CC algs like New Reno / CUBIC). But it's also better than the status quo either with HTTP/1.1 + connection pooling or with pipelining. And it has the noted advantage over QUIC in that it looks the same as H1 to ISPs.
> It seemed to me that HTTP/2 was meant to be a file-transfer protocol, designed by people who don't really understand how TCP performs in the real world (especially on mobile).
And not only on mobile: the same applies to server-to-server traffic over near-perfect inter-DC links.
Even gigabit-and-above links drop packets... because that's how TCP is supposed to work!
The higher the speed, the more aggressive the curve used by the congestion control algorithm.
In other words, it hits drops faster, stays in the drop zone longer, and cycles through this more frequently than on slower links.
In fact, congestion control algorithms tuned for speeds above 500 Mbps get their speed by having _a lot more_ drops, not fewer.
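You can see the "drops are the steady state" behavior in CUBIC's window curve (RFC 8312; `C = 0.4` and `beta = 0.7` are the RFC's default constants, the rest of the numbers below are illustrative). After a loss at window `W_max`, the window grows back along a cubic and then deliberately keeps probing past `W_max`, so on a fast link the sender is designed to drive itself back into loss:

```python
# CUBIC window growth after a loss event (RFC 8312 defaults).
C = 0.4
BETA = 0.7  # multiplicative-decrease factor: window shrinks to 0.7*W_max

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window t seconds after a loss at window size w_max."""
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)  # time to climb back to w_max
    return C * (t - k) ** 3 + w_max

w_max = 1000.0  # packets; a high-bandwidth path has a large w_max
k = ((w_max * (1 - BETA)) / C) ** (1 / 3)

assert abs(cubic_window(0, w_max) - BETA * w_max) < 1e-6  # starts at 0.7*W_max
assert abs(cubic_window(k, w_max) - w_max) < 1e-6         # back at W_max at t=K
assert cubic_window(2 * k, w_max) > w_max                 # then probes past it
```

The larger `w_max` is, the more packets the overshoot past `W_max` costs per cycle, which is the "more aggressive curve, more drops" trade-off in miniature.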
Any protocol designed for the TCP transport must be designed to work in concert with TCP's mechanics, not fight them or try to work around them.
This is a perfect example of how A/B-test voodoo leads people down the rabbit hole.
> Any protocol designed for the TCP transport must be designed to work in concert with TCP's mechanics, not fight them or try to work around them.
There is a reality to running code on other people's hardware: it imposes limitations on network traffic that you cannot change. For example, home-grade routers do NAT but have connection-table size limits that make it hard to keep many connections open. Connections get dropped and users complain, not realizing it's because they bought a cheap router. Other actors in the network path between client and server impose their own limitations that make multiple connections hard (corporate firewalls being another big one).
HTTP/2 is a compromise, an admission that it is not possible to go fix or replace all this hardware. I worked on a few HTTP/2 implementations, and the complexity it incurs is significantly less than that of the alternatives.