I am a little confused about the test methodology.
The post clearly explains that the big advantage of HTTP/3 is that it deals much better with IP packet loss. But then the tests are done without inducing (or at least measuring) packet loss?
I guess the measured performance improvements here are mostly for the zero round-trip stuff then. But unless you understand how to analyze the security vs performance trade-off (I for one don't), that probably shouldn't be enabled.
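On the packet-loss point: if anyone wants to redo these tests with loss actually in the picture, here's a rough sketch of how I'd induce it with netem. This is my own addition, not from the article, and it assumes Linux, root privileges, and that "eth0" is the interface the benchmark traffic actually goes through.

    # Add artificial loss and delay on eth0 with netem, run the benchmark, then clean up.
    # Assumes Linux, root, iproute2 installed, and that "eth0" is the right interface.
    import subprocess

    IFACE = "eth0"  # hypothetical interface name; adjust to taste

    def set_netem(loss_pct=2, delay_ms=50):
        # 2% random loss plus 50 ms of added delay
        subprocess.run(
            ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
             "loss", f"{loss_pct}%", "delay", f"{delay_ms}ms"],
            check=True,
        )

    def clear_netem():
        subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"], check=True)

    if __name__ == "__main__":
        set_netem()
        try:
            pass  # run the HTTP/1.1 vs HTTP/2 vs HTTP/3 measurements here
        finally:
            clear_netem()

Anything short of that and you're mostly measuring handshake differences, not the loss-recovery behaviour HTTP/3 is supposed to be good at.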
For desktops HTTP/2 is mostly OK, possibly an improvement. For mobile it wasn't. I raised this when we were trialling it at $financial_media_company. Alas, the problems were ignored because HTTP/2 was new and shiny, and Fastly at the time was pushing it. I remember being told by a number of engineers that I wasn't qualified to make assertions about latency, TCP and multiplexing, which was fun.
I'm still not convinced by QUIC. I really think we should have gone for a file exchange protocol with separate control, data and metadata channels, rather than this complicated mush of half-remembered HTTP snippets transmuted into binary.
We know that, despite best efforts, websites are going to keep growing, in both file size and file count. Let's just embrace that and design HTTP to be a low-latency file transfer protocol, with extra channels for real-time general-purpose comms.
Isn't HTTP/3 a low-latency file transfer protocol, and WebTransport over HTTP/3 the extra channels for real-time general-purpose comms? It's also worth noting that HTTP/3 actually does use a separate QUIC stream for control.
I don't understand the "HTTP snippets transmuted into binary" part.
QUIC itself doesn't have the request/response style of HTTP, it doesn't know anything about HTTP, it's just datagrams and streams inside the tunnel.
So you could use QUIC to build a competitor to HTTP/3, a custom protocol with bi-directional control, data, and metadata streams.
In fact, I'm looking forward to when Someone Else writes an SFTP / FTP replacement in QUIC. HTTP is already a better file transfer protocol than FTP (because HTTP has byte-range headers, which AIUI are not well supported by FTP servers). Think how much we could do if multiple streams and encryption were as simple as importing one library.
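To illustrate the byte-range point, a minimal sketch using only the Python stdlib (the URL is a placeholder, and a server is free to ignore the Range header entirely):

    # Fetch only the first 1024 bytes of a resource; a server that honours Range
    # replies 206 Partial Content, one that ignores it replies 200 with the full body.
    import urllib.request

    req = urllib.request.Request(
        "https://example.com/big-file.bin",   # placeholder URL
        headers={"Range": "bytes=0-1023"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)       # 206 if the range was honoured
        print(len(resp.read()))  # 1024 if it was

That's the whole resume/partial-download story in HTTP: nothing but a request header.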
> I don't understand the "HTTP snippets transmuted into binary" part.
Yup, my mistake, I meant to say HTTP/3 over QUIC.
At a previous company (many years ago), I designed a protocol that was a replacement for Aspera. The idea being that it could allow high-speed transfer over long distances (think 130-150 ms ping) with high packet loss. We could max out a 1 Gbit link without much effort, even with >0.5% packet loss.
In its present form it's optimised for throughput rather than latency. However, it's perfectly possible to tune it on the fly to optimise for latency.
>So the fairer comparison might be TLS 1.3 under all three, but if you need to upgrade why not upgrade the whole HTTP stack?
Because it's a benchmark of HTTP/3 and not a comparison of "as it might be" stacks.
It would be a bit like benchmarking HTTP/1 with RHEL 7 and Apache 2, HTTP/2 with RHEL 8 and NGINX, and HTTP/3 with, let's say, Alpine and Caddy. It's just not a clean benchmark if you change more than one component and then try to prove that this single component is faster.
Especially when their benchmark scenario is something that plays to the strengths of TLS 1.3 and would probably be only mildly improved (if at all) by HTTP/3.
> HTTP/3 is a way of spelling the HTTP protocol over QUIC
HTTP/3 could perhaps be described as HTTP/2 over QUIC. It's still a very different protocol from HTTP/1.1, even if you were to ignore the transport being used - the way connections are managed is entirely different.
It's just spelling. It certainly wouldn't be better to think of it as HTTP/2 over QUIC, since it works quite differently: it doesn't have an in-order protocol underneath it.
HTTP has a bunch of semantics independent of how it's spelled, and HTTP/3 preserves those with a new spelling and better performance.
QUIC does require TLS 1.3, but as far as I can tell HTTP/2 over TLS 1.3 is perfectly viable, and is likely a common deployment scenario.
For most people, upgrading just TLS to 1.3 likely means upgrading to a newer OpenSSL, which you probably want to do anyway. In many web server deployment scenarios, deploying HTTP/3 is likely to be more involved. Apache httpd doesn't support H3 at all, and I don't know if nginx has it enabled by default these days?
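If you want to check whether a given server already negotiates TLS 1.3 before worrying about HTTP/3 at all, a quick sketch (Python 3.7+ linked against a reasonably recent OpenSSL; the hostname is a placeholder):

    # Require TLS 1.3 and print the negotiated protocol version.
    import socket
    import ssl

    HOST = "example.com"   # placeholder hostname

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(tls.version())   # expected: "TLSv1.3"

If that fails, the handshake either fell back below 1.3 or was refused, which tells you the TLS upgrade is the first thing to sort out.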
NGINX has an experimental QUIC branch [1], but in my experience it is buggy and currently has lower throughput than using Quiche [2] with NGINX. I do the latter for my fork of NGINX called Zestginx [3], which supports HTTP/3 amongst a bunch of other features.
NGINX's QUIC implementation also seems to lack support for some QUIC and/or HTTP/3 features (such as Adaptive Reordering Thresholds and marking large frames instead of closing the connection).
EDIT: A friend of mine who works at VK ("the Russian Facebook") informed me that they're helping out with the NGINX QUIC implementation which is nice to hear, as having a company backing such work does solidify the route a little.
This seems like it could lead to networks letting packet delivery worsen, which would hurt other protocols, since so far TCP has needed, and IP networks have delivered, near-zero packet loss.
I will be interested to see how HTTP/3 fares on Virgin in the UK. I believe the Superhub 3 and Superhub 4 have UDP offload issues, so it's all dealt with by the CPU, meaning throughput is severely limited compared to the linespeed.
Waiting for Superhub 5 to be officially rolled out before upgrading here!
CPU on the client or the server? Client-side it's probably negligible overhead, while server-side it's something that needs to be dealt with at the ISP level. Linespeed and latency live in different orders of magnitude from client CPU processing / rendering.
> I am a little confused about the test methodology.
Yeah, I remember Google engineers steamrolled HTTP/2 through W3C with equally flawed "real world data."
In the end it came out that HTTP/2 is terrible in the real world, especially on lossy wireless links, but it made CDNs happy, because it offloads them more than it does the client terminals.
Now Google engineers again want to steamroll a new standard with their "real world data." It's easy to imagine what people think of that.
HTTP is maintained by the IETF, not the W3C. You make it sound like Google sends an army to the meetings. They don't. I didn't sit in on the QUIC meeting, but in HTTP there's a small handful of people who ever speak up. They work at lots of places: Facebook, Mozilla, Fastly, etc. And they all fit around a small dining table. I don't think I met any Googlers last time I was at httpbis, though I ran into a few in the hallways.
You can join in if you want - the IETF is an open organisation. There’s no magical authority. Standards are just written by whoever shows up and convinces other people to listen. And then they’re implemented by any person or organisation who thinks they’re good enough to implement. That’s all.
If you think you have better judgement than the working groups, don’t whinge on hacker news. Turn up and contribute. We need good judgement and good engineering to make the internet keep working well. Contributing to standards is a great way to help out.
I am saying this knowing full well the conduct of Google at the W3C and WHATWG, which I believe you know too.
Now, nobody at httpbis raised a red flag or challenged the performance figures of HTTP/2 before it became a standard.
OK, I'll grant that grumbling on internet forums is a non-solution. How would you suggest contributing to the httpbis process without flying engineers around the world all year long to attend IETF meetings?
On the matter of QUIC: my biggest discontent with it is that these guys basically recreated SCTP (and botched it at that), but did it on top of UDP, without taking advantage of most existing OS-level and hardware-level performance optimisations. There is no chance at all that hardware makers will put any effort into offloading somebody's proprietary weekend project into hardware, and without that it has no chance at adoption; everybody will be stuck on HTTP/2, because CDNs are very happy with it and browsers can't roll back their support.
HTTP/4 is needed now, and it needs to be built over SCTP to have any chance of getting hardware offloading.
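For what it's worth, the OS-level support being alluded to is already reachable from userspace on Linux; a minimal sketch, assuming the kernel SCTP module is loaded (the stdlib only gets you a basic one-to-one socket; per-stream features need a third-party binding such as pysctp):

    # Open a one-to-one style SCTP listening socket on Linux.
    # Assumes kernel SCTP support (e.g. "modprobe sctp"); socket.IPPROTO_SCTP is only
    # exposed where the platform defines it, and multi-streaming is out of scope here.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    sock.bind(("0.0.0.0", 5000))
    sock.listen(1)
    print("listening for SCTP associations on port 5000")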
I don’t know Google’s conduct at whatwg / w3c. I’ve never been to either of them. (My understanding is whatwg is invite only or something? Is that right?)
As for flying people around the world, most of the actual work of the IETF happens on the mailing lists and (in the case of httpbis) on the http GitHub issue tracker. You can attend the meetings virtually, and they go to great lengths to include virtual attendees, though it's never quite the same as talking to people in person over drinks or in the corridors. If you think the http working group isn't taking performance metrics seriously enough, it sounds like you have something really important to contribute to the standards group. That voice and perspective is important.
I agree with you about SCTP being a missed opportunity - though I suspect quic will get plenty of adoption anyway. And I’m sure there’s a reason for not using sctp - I think I asked Roberto Peon a couple of years ago at IETF but I can’t remember what he said. He’s certainly aware of sctp. (For those who don’t know, he’s one of the original authors of quic/spdy from when he was at Google.)
I agree that quic will probably never hit 100% of global web traffic, but I wouldn’t be surprised if it surpassed 50% within a decade or so. And there’s some positives from that - it’s nice to put pressure on internet vendors to allow opaque udp packets to float around the net. Hardware offload aside, that increases the opportunity for more sctp-like protocols on top of udp in the future. It’s just a shame any such attempts will need to layer on top of udp.
I am very serious about hardware offload being supercritical for adoption.
Not having it means CDNs must have 4-5 times more CPU power, on top of natural internet traffic growth. Saying "buy 5 times more servers" will not fly.
HTTP/2 is such a hit with CDNs exactly because it lets them serve more traffic with fewer servers, though with a worse end-user experience, except for the kind of people who get gigabit at home.