By the way, even though it is not yet turned on by default, HTTP/3 support is present in Firefox and can be activated by setting 'network.http.http3.enabled' to true in about:config. I have had it enabled for a few weeks and everything seems okay, except that occasionally a site won't load the first time I visit it and needs a refresh. I'm not sure whether that's the new QUIC code or just an ordinary connection glitch.
Beware that there have been some issues with Google sites not loading or being very slow with Firefox's HTTP/3 implementation. Some of these were Firefox bugs and some were Google server bugs.
But I use HTTP/3 in Firefox Nightly and use Gmail and Google Docs all day long without major breakage these days.
It likely is. I played around with the implementation earlier on (more than six weeks back now), and it still had some issues. For example, the QUIC connection would time out on the server side without the browser knowing about it, and all follow-up requests silently failed. A page refresh fixed that.
A full page refresh also helps when the client is inconsistent about whether HTTP/3 is available, because it may fall back to another HTTP/2 request and refresh the Alt-Svc information in the process.
I had constant problems loading YouTube. Finally someone on r/Firefox helped me solve it by comparing my config against the defaults. This value was the culprit; everything worked well after I reverted the setting and restarted.
I know you were just making a point, but I want to point out that the “nobody” part is certainly not true, as sad as that is. Generally something like that is considered fraud/deception, but it does happen.
My understanding is that HTTP/3 always means QUIC, per the standard, but QUIC can be used for other protocols as well. FB's terminology seems to be backwards.
It's rather frustrating when people do this; for the rest of the article, when they say QUIC, do they mean their own terminology (which is actually QUIC plus HTTP/3) or QUIC the TCP alternative? They use both.
As I said below, the blog is more focused on QUIC itself rather than HTTP/3. The bulk of the improvements we see are from QUIC as a transport layer, rather than the changes in HTTP/3.
Some years back, Google had two experimental projects aimed at speeding up HTTP, named QUIC and SPDY (as in "Quick" and "Speedy").
The output from the experiments was new standardization efforts at the IETF. Google's SPDY, which was a binary HTTP protocol over TLS, eventually resulted in HTTP/2 which there's a fair chance you use today.
The other idea, QUIC, is a much bigger lift. It replaces not just the HTTP protocol and TLS but the whole stack, even TCP. At the IETF this work was split into two pieces: the IETF's QUIC is just the TCP replacement, an encrypted, connection-oriented, reliable protocol, while the HTTP part of the problem is being standardised as HTTP/3.
Google's QUIC ("gQUIC") will be obsolete once the standardized protocol is finished. Right now Chrome talking to e.g. Gmail uses gQUIC; once the standards work is firmed up it'll speak HTTP/3, and then maybe a year later Google's sites will discontinue gQUIC, because it's just maintenance effort with no residual value.
Here and in the article it sounds as if QUIC or HTTP/3 uses or builds a new Internet Protocol, while my understanding was that it _simply_ uses UDP to create a better version of TCP.
Of course an all-new IP would be great, but the issue is that it would take a very long time for all the software and hardware to support it, and UDP is simply already there.
I would prefer a new HTTP to go an entirely new way, leaving behind UDP and TCP and building something new, then supporting fallback over HTTP/1 or HTTP/2 until everything else caught up.
I don't think it's unreasonable to state that we will never deploy a new IP protocol again.
At least not one that works over the general Internet - controlled intra-organisation networks might possibly be able to do so, but very rarely.
There are too many broken machines on the Internet that assume all IP traffic is one of TCP, UDP and ICMP. And far too many are configured to screw up ICMP too.
So new protocols MUST use TCP or UDP as their base layer instead of raw IP.
So yes, QUIC uses UDP, but that should be considered an implementation detail. A hack for the lack of IP support on the Internet.
HTTP/3 uses QUIC like HTTP/1 and HTTP/2 use TCP + TLS, but QUIC is not limited to use by HTTP/3.
QUIC's development has been, basically, paused while HTTP/3 is finalised and then the IETF will pick up where it left off and work out how other higher-layer protocols will work using QUIC as the transport layer.
For a typical connection-oriented protocol the ports are part of a 4-tuple (my-address, my-port, your-address, your-port), so the 16-bit port isn't a big problem there. It doesn't matter whether the same port is also being used with some other combination of remote address and port, since that's a different 4-tuple and won't match.
And for fancier protocols it doesn't matter anyway, because they have their own concept of a connection identifier. WireGuard, for example, doesn't care at all: packets arrive and either they're authenticated or they aren't, and it silently discards the ones that aren't. QUIC, as far as I remember, optionally has a connection ID that can survive a change of 4-tuple.
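A toy sketch of the two lookup styles in Python (just the bookkeeping idea, nothing like a real stack):

  # TCP-style: a flow is identified by the full 4-tuple, so one local port
  # can serve any number of distinct remote (address, port) pairs.
  tcp_flows = {}

  def tcp_lookup(my_addr, my_port, your_addr, your_port):
      return tcp_flows.get((my_addr, my_port, your_addr, your_port))

  # QUIC/WireGuard-style: the packet itself carries a connection ID (or is
  # authenticated against known keys), so the flow survives a 4-tuple
  # change, e.g. a phone moving from Wi-Fi to LTE.
  quic_flows = {}

  def quic_lookup(connection_id):
      return quic_flows.get(connection_id)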
If you do wish you had more ports, the IPv6 address space makes it pretty cheap to just acquire more addresses on your network and use those, but I don't sense much appetite for that, because people don't feel like they're short of ports.
You do see people spinning up more addresses to avoid needing SNI or similar. If you have sixty virtual machines on one hardware box, having sixty IPv6 addresses, one per VM, means the packets for VM #4 and VM #18 are separated on the wire. That can be convenient, but it doesn't feel like it's driven by running out of ports.
I think they're talking about Google QUIC, which formed the basis for both IETF QUIC and HTTP/3; Facebook has been deploying Google QUIC for a long while now, AFAIK. The people at fault here are probably the IETF lol.
We never deployed Google QUIC. It was always IETF QUIC. Referring to them together as QUIC was just for expedience, since the main benefits came from the QUIC layer, not the HTTP layer.
HTTP/3 implies IETF QUIC. IETF QUIC itself can be used for non-HTTP protocols, though, just like TCP can be used for protocols that aren't HTTP/2.
Thank you for the clarification. I'm sure you're looking forward to WebTransport+QUIC as well, I know I am. I'm already doing some little tweaks to our WebSocket RPC and bulk transfer layers to resemble WebTransport so it's an easy drop-in when the QUIC transport becomes available, or when there's a reasonable implementation of WebTransport+HTTP/3 (the other middle ground).
Maybe I can publish the virtual “WebSocketTransport” thing once I'm satisfied with it.
Unfortunately, we may yet see networks that block or slow down QUIC, in a paradoxical attempt to improve performance.
QUIC is designed to hide a lot from the network. But some network nodes use visibility into things like round-trip time, data-in-flight and packet loss for each flow, so they can adjust queuing parameters to optimise for each user. These measurements are easy to get from monitoring TCP, but not QUIC.
The designers of those network nodes may conclude that blocking QUIC (UDP on port 443) and forcing fallback to HTTP/2 over TCP results in better ability to optimise network flows than allowing QUIC to go ahead. All browsers race TCP against QUIC, so a network blocking QUIC shouldn't significantly slow performance compared with just HTTP/2.
This is a hefty bet. The gamble is that you're very good at this, and so you'll deliver better performance with HTTP/2 than the client could have got from HTTP/3, but even if you actually are right by a slim margin you're likely to suffer the same pushback from users as if you were wrong. The only scenarios where this is a smart bet are where competitors allow HTTP/3 and have noticeably worse performance.
If it's close, regardless of which way, you'll get beaten up for disabling HTTP/3, maybe that's unjust but that's how it is.
Which users are going to notice? Browsers and almost everything else will fall back to TCP and move on, because there are too many broken networks to bother about.
Kind of like how most big sites clamp TCP MSS at 1440 instead of 1460; there are too many PPPoE or IP-in-IP tunneled links with broken path MTU discovery, and too many clients without working/enabled path MTU blackhole detection, to bother making a fuss about it. Just move on and cry on the inside.
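(For reference, the router-side version of that clamp is the classic iptables TCPMSS rule; 1440 here is just an illustrative value:)

  # Rewrite the MSS option on forwarded SYNs so segments fit the tunnel MTU
  iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
  # or pin it to a fixed value:
  iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --set-mss 1440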
I think people will notice as BYOD gets more and more standard. We already have a situation at my company where everyone in the BYOD initiative has stopped using their company laptop, because everything on it is so much slower.
They might notice and complain if UDP in general is blocked, but if their webpages load, they won't complain that UDP to port 443 is degraded or blocked.
I expect HTTP/2 and HTTP/1.1 will still be with us for decades; orgs that block UDP will be able to continue to do so, and the rare HTTP/3-only website will just be inaccessible.
Even if there's a new killer app that requires QUIC, I imagine an org backward enough to disallow UDP will just not care about that app.
I expect HTTP/2 to disappear completely, to the point of browsers removing support for it probably even within a decade. Everything that works over HTTP/2 should work over HTTP/1.1, even if at a slight cost, and HTTP/3 should be uniformly superior to HTTP/2, except in those rare cases of UDP-blocking firewalls, which situation I expect to improve over time. Given the complexity and maintenance burden of the protocol and more importantly the upgrade mechanism, I think browsers will be happy to remove HTTP/2 in favour of the old reliable HTTP/1.1 (which is certainly not going anywhere) and HTTP/3 once they see very little using it any more, and absolutely nothing needing to use it.
I should also write a dissenting opinion on this point—for each client, the upgrade to HTTP/2 is free, as it’s negotiated via ALPN during the TLS handshake; whereas the HTTP/3 upgrade mechanism isn’t particularly solid yet, requiring an Alt-Svc header on a response from the server, meaning you’ve already had to connect over HTTP/1 or HTTP/2 before you learn you can connect over HTTP/3. There’s also work going on to add a new type of DNS record to indicate that the server is h3-capable, which would allow you to connect over HTTP/3 immediately, but that’s opt-in, and loads of sites that support HTTP/3 will never add that record. So until browsers switch to defaulting to trying HTTP/3, HTTP-over-TCP will continue to be used regularly even when HTTP-over-QUIC is supported, and that means plenty of HTTP/2 will be used until that time. This could keep HTTP/2 alive for a lot longer than the decade I posited. I don’t know how it’ll play out, but I think it will depend on how much browsers decide they want to push HTTP/3; if they push it hard and try it more optimistically, HTTP/2 will die more quickly.
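For anyone who hasn't seen them, the two discovery mechanisms look roughly like this (the h3-29 draft token becomes plain "h3" once the RFC ships, and the HTTPS record syntax is still a draft, so treat both as sketches):

  # Header on an HTTP/1.1 or HTTP/2 response, advertising HTTP/3:
  Alt-Svc: h3-29=":443"; ma=86400

  # Proposed HTTPS (SVCB) DNS record advertising h3 before any connection:
  example.com. 3600 IN HTTPS 1 . alpn="h2,h3"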
I'm totally on board with having more than one protocol available (especially since UDP is so much more flexible), but what is it about TCP specifically that precludes its use for p2p? Is it easier to get through NATs et al with UDP?
TCP has an explicit client and server (some caveats here). The userspace API for TCP requires you to do a connect, which does the TCP handshake in the TCP stack, before you can send traffic.
For UDP, you just specify the destination IP and port, and send packets.
For TCP, new incoming SYNs to most (S)NAT addresses will just get dropped - especially CGNAT - making it impossible to communicate in that direction. If you're both in that situation (really common, actually), you just can't talk to each other.
For UDP, the packets will also get dropped on the receiver side. However, the act of sending the packet will often cause the originating side's NAT to register that five-tuple (source and destination IP and port + UDP proto), which allows the other side to reply. If both sides do this with the same IP/port pairs, then magically they can communicate - some of the time, at least. There's a lot more involved (for example, how do you know your own external IP address behind NAT?) - read up on STUN - https://en.wikipedia.org/wiki/STUN - for more details.
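A minimal sketch of the UDP side of this in Python - the peer address and port here are made-up examples, and in reality you'd learn them via STUN or some rendezvous server:

  import socket

  LOCAL_PORT = 40000                 # arbitrary port for this sketch
  PEER = ("203.0.113.7", 40000)      # hypothetical peer's public address

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", LOCAL_PORT))

  # Punch: this packet may be dropped by the peer's NAT, but sending it
  # creates the mapping in ours that lets the peer's packets in.
  sock.sendto(b"punch", PEER)

  # If the peer does the same towards us, its packets can now get through.
  sock.settimeout(5)
  try:
      data, addr = sock.recvfrom(1500)
      print("hole punched, got", data, "from", addr)
  except socket.timeout:
      print("no reply (yet) - retry or fall back to a relay")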
Third party mediated hole punching really only works with UDP. TCP is stateful and thus negotiating a hole punch requires incredibly precise timing, which is hard to achieve in practice.
Without UDP we have a purely "cloud-to-ground" Internet unless we can convince router/NAT makers to always include and always turn on UPnP/NAT-PMP. Not likely, and those protocols suck anyway.
Tor, at the moment, is based on TCP, and since the whole architecture assumes TCP, I don't see it changing anytime soon. QUIC means Tor is now "legacy", unless we just do the right thing and boycott QUIC.
For anyone interested, here's the original design doc for QUIC from 2013 [0]. Really good writeup, both as an engineering spec and as an architectural design. I recommend reading through it if you have the time.
How does it compare to the February 2020 standard draft from IETF? [1] I was following the development of the Websocket protocol pretty closely when it happened and had to update my implementation[2] multiple times as the design kept changing. Did this also happen here, or was QUIC pretty much done and standardized straight from the Google design?
There are a lot of significant differences between the original Google QUIC design and what we currently have in the IETF drafts. They are very much wire-incompatible. That's why it took the working group several years to get to a point of near-completion.
Can HTTP/3 be enabled in nginx now? Should it be? Is it a simple config change? I assume that would take care of serving static assets, but would reverse-proxied apps behind nginx also need to be upgraded to HTTP/3?
There are several efforts to implement QUIC and HTTP/3 in nginx. Cloudflare has it deployed in production with quiche[1], and Nginx themselves are developing one[2].
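For reference, a quiche-patched nginx config looks roughly like this (directive names and the h3 draft token vary between patch versions, and the certificate paths and backend address are just placeholders, so treat it as a sketch rather than copy-paste):

  server {
      # QUIC/HTTP/3 on UDP 443, plus TCP 443 for HTTP/1.1 and HTTP/2 fallback
      listen 443 quic reuseport;
      listen 443 ssl http2;

      ssl_certificate     /etc/nginx/cert.pem;
      ssl_certificate_key /etc/nginx/key.pem;
      ssl_protocols       TLSv1.2 TLSv1.3;   # QUIC requires TLS 1.3

      # Advertise HTTP/3 to clients that arrived over TCP
      add_header Alt-Svc 'h3-29=":443"; ma=86400';

      # The backend stays plain HTTP/1.1 - it doesn't need to know about QUIC
      location / {
          proxy_pass http://127.0.0.1:8080;
      }
  }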
Applications sitting behind a proxy wouldn't need to be updated. The core protocol semantics of HTTP are relatively unchanged between HTTP/1.1, HTTP/2, and HTTP/3.
Pretty much any HTTP proxy supports talking a set of negotiated protocol versions/features with the client, and a potentially different set of protocol versions/features with the server behind it.
The high-level flow is pretty much:
1) Set up client connection/negotiate stuff (TLS, alpn, NPN, blah blah)
2) Process requests from that client connection
2.1) Decode request from client connection
2.2) Manipulate request (add/remove headers, ...)
2.3) Send request to server
2.3.1) If necessary, create a new connection to server (TLS, alpn, NPN, ...)
2.3.2) Encode request to server connection
2.4) Decode response from server connection
2.5) Manipulate response (add/remove headers, ...)
2.6) Encode response to client connection
You can talk totally different protocols from the Internet-side client to the proxy, and from the proxy to the server - with multiple layers of proxies in between if you like. From an app point of view, there's essentially no difference. If you want to, for some reason, you can use various headers to try to signal to the client that it should upgrade or downgrade to particular protocols, but most apps won't care about that.
I think it's extra slow in Safari, for some reason. Just like Google Maps is also slower in Safari than in Chrome. However, overall I would say Safari is the fastest browser out there, based on my own browsing habits. Also most memory efficient.
I have a feeling this may be React-related. Safari is on average far, far faster on any site, but I have a larger React site that slows down quite a bit more in Safari than in Chrome.
Perhaps related to the const+let vs var bug the other day.
I usually get the “this page is using too much memory” warning repeatedly using safari. And apparently there’s no way to make it go away, which is super annoying.
I feel the same about the Twitter redesign too. Everyone was hating the redesign when it launched, but my Twitter use increased from once or twice a week to a couple of times a day after the new design.
It is amazingly fast (on desktop at least), behaves in all the right ways you would expect, you never lose your position anywhere, the back button always works properly, it gracefully handles connection loss, and again, it's so fast and pleasurable to use. Too bad I can't say the same for the content on the site.
People dislike change, but I don't believe big companies change their design on a whim; they do a ton of research, and while some may complain, they know it'll be an improvement over their existing designs.
I get notifications popping up but there's nothing there. Same thing with the messages jewel.
I'll get a notification about a reply to my comment, I'll go to the comment, like it, and then it will scroll me down to the same reply as a top level comment at the bottom of the post.
I've reported all the weirdness I've come across but nothing's changed since I was switched over a month ago.
On Safari it feels fast in some places, but there is still a lot of jank, and it's nowhere near as good as the app experience, especially when scrolling and with lots of images.
Unfortunately, permissions to access everything on every website are way too broad for a niche extension like this. There's no guarantee it won't be sold to a malware developer in a month. If you want to use it, I suggest cloning the repo and loading it as an unpacked extension to avoid auto-updates.
Also be sure to audit it yourself beforehand if you're going to go that route; there's no use tilting against auto-updates if you haven't gone to the trouble of making sure that malicious code isn't already present.
Can anyone write a tidy article/post that outlines exactly how each protocol differs (v1 vs v2 vs v3) at the network level? Obviously most of us understand the high-level differences, but what do the mechanisms and payloads look like at lower levels? A side-by-side comparison with the pros and cons of each would be great. Would older hardware (say a 10-year-old Netgear router) be able to use v3 without pain?
HTTP/1 is a plain text protocol over TCP. Requests are sequential per connection. Browsers tend to use up to six connections at a time to work around this.
HTTP/2 is a binary protocol over TCP. Requests are multiplexed. It’s normally better than HTTP/1, but in environments with higher packet loss rates (e.g. remote mobile networks) it can perform markedly worse than HTTP/1, because of TCP head-of-line blocking and the fact that the browser is now using only one connection.
HTTP/3 is a binary protocol over QUIC which is over UDP. Requests are multiplexed. If it works, it should be fairly uniformly better than both HTTP/1 and HTTP/2, because you can think of it as roughly HTTP/2 minus the bad parts of TCP. However, on a few networks (business networks typically, I think) it won’t work because they have firewalls that hate unknown UDP traffic.
(HTTP/3 is not actually just HTTP/2 over QUIC; HTTP/2’s header compression scheme HPACK is stateful in a way that depends on TCP’s sequential nature so that it couldn’t work over QUIC without completely reintroducing the head-of-line blocking problem, so it’s replaced with a variant that mitigates that problem substantially, called QPACK. But other than that, I think they’re roughly the same. Don’t quote me on that, though, it’s a few years since I read the specs and I’ve forgotten it all, not to mention that the specs have changed plenty in that time.)
I think this is a fair summary, but I don’t have any practical experience with HTTP/3, so I welcome any corrections.
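If you want to poke at the differences yourself, curl can force each version (the --http3 flag assumes a curl build with HTTP/3 support, which most distro packages don't ship yet):

  curl -sI --http1.1 https://example.com/
  curl -sI --http2   https://example.com/
  curl -sI --http3   https://example.com/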
This was a remarkably concise summary, nicely done. One clarification/question I had: I think all connections start as HTTP/1 by default, and then a request to upgrade to HTTP/2 is sent. Does the same happen for HTTP/3? Does it go from 1 to 3, or from 1 to 2 to 3?
HTTP/2 over cleartext uses an HTTP/1.1 Upgrade header field, yes, so that it’s an extra request. But HTTP/2 over TLS can use application-layer protocol negotiation (ALPN) so that the HTTP/2 negotiation piggybacks on the TLS handshake: the TLS ClientHello includes “I speak h2”, and the ServerHello response includes “OK, let’s do h2”.
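You can watch that negotiation from Python's ssl module, for example (example.com stands in for any h2-capable server):

  import socket, ssl

  ctx = ssl.create_default_context()
  ctx.set_alpn_protocols(["h2", "http/1.1"])  # the "I speak h2" part of the ClientHello

  with socket.create_connection(("example.com", 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
          # Prints "h2" if the server selected HTTP/2 in its ServerHello
          print(tls.selected_alpn_protocol())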
HTTP/3 negotiation I’m not certain about. Its existence can be broadcast with an Alt-Svc header on an HTTP response, but that means that the browser won’t use HTTP/3 for the first request. I think that might be what browsers are doing now (rather than racing TCP and UDP). But I think the direction things are heading is to optionally advertise this stuff over DNS as well, which would allow browsers to go straight to HTTP/3 if the relevant DNS records say it’s OK to: https://blog.cloudflare.com/speeding-up-https-and-http-3-neg....
The HTTP/2 specification tells you how you could in principle do this, but no popular browsers implement it and so far as I know no popular servers do so either.
So in practice it's always ALPN, a modern browser specifies that it would prefer h2 and servers that speak HTTP/2 select that during TLS handshaking. As with SNI this means it is not secret yet (but it could be protected by Encrypted Client Hello)
Browsers did initially implement h2c (HTTP/2 cleartext), but before the dust had really settled, they decided “no, let’s use this opportunity to keep pushing people to adopt TLS” (if you want the performance improvements of HTTP/2, you have to go secure first) and so they ripped it out again. So h2c is much more an academic concept than a real-world thing.
Closely related is the general trend browser makers have agreed on to prefer to make new things only available in secure contexts, with https://w3c.github.io/webappsec-secure-contexts/ explaining and providing various rationale. There’s a genuine security aspect to it, but it’s also definitely about pushing people to do security properly. (“If you want to use this new feature, switch your site to HTTPS first.”)
I was going to say "of course, why would the router care about the layer-4 protocol", but then again, Netgear is practically a synonym for "unpredictable crappy middlebox".
I guess NAS (network-attached storage) would really benefit from QUIC; one could treat each file transfer as a separate session, so that a lost packet would not cause all the sessions to stall. Have any of the NAS protocols adopted QUIC already? Are they working on it?
There's been some interest among the Samba hackers, including some discussion with Microsoft. They seem quite excited about SMB-on-QUIC as an Internet-capable network filesystem protocol.
Indeed. I believe the latest Windows Insiders builds have SMB over QUIC support (via msquic). I think QUIC brings a lot of potential to finally run network filesystem protocols over the Internet.
Most implementations[1] implement it in userland, but this is by no means a requirement. There is no implementation for the Linux kernel presently, but both msquic and F5's QUIC implementation can run in their respective kernels.
QUIC is indeed built on top of UDP datagrams, much in the same way TCP is built (typically) on top of IP datagrams.
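A rough sketch, in Python, of why userland works: the kernel only ever sees ordinary UDP datagrams, and everything QUIC-specific happens in the process that owns the socket. handle_quic_packet here is a hypothetical stand-in for a real library such as quiche, msquic or aioquic:

  import socket

  # One UDP socket is all the kernel involvement a QUIC stack needs.
  # (Binding to 443 needs privileges; any UDP port works for QUIC.)
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", 4433))

  def handle_quic_packet(datagram, peer):
      # Hypothetical: a real stack parses the header, looks up the
      # connection ID, decrypts, does loss recovery, feeds streams, etc.
      pass

  while True:
      datagram, peer = sock.recvfrom(65535)
      handle_quic_packet(datagram, peer)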
Could QUIC also have been built on top of IP datagrams instead of UDP datagrams? Or does our crufty Internet mean only UDP and TCP are viable Internet protocols?
It is as you say. In theory you could build this as a new IP protocol next to TCP but in practice that would be blocked by default and can't be deployed.
Node.js v15.0 now has experimental QUIC support. For production usage, I wouldn't expect anything usable before the v16 LTS release in October 2021 (and it might still be feature flagged at that point).
But that's pretty irrelevant for most production deployments, as you'll almost certainly want to have some type of AWS ALB, nginx, Cloudflare etc. load balancer in front of your Node services.
Node.js has built-in HTTP/2 support but no HTTP/3 support, IIRC. That's not a deal breaker, because any production app should already have a load balancer/proxy in front of it, and those do have QUIC support.
Yes. The Internet-side client and the proxy server can use different protocols and versions that they can negotiate however they like, receive the HTTP request over those, and the proxy can use a separately-negotiated connection to your server to send that request over.
Essentially, the core of standard HTTP requests and responses (method, path, headers, response code) is fairly unchanged since HTTP/1.0, and what's changed is how those requests/responses are encoded and carried.
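As a concrete illustration, an arbitrary request like the one below is the same logical thing whether it travels as HTTP/1.1 text or as binary HEADERS frames in HTTP/2 and HTTP/3 (where the method and path become :method and :path pseudo-header fields):

  GET /profile/photos HTTP/1.1
  Host: example.com
  Accept: text/html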
Why is the response time chart missing an x axis? What competitive advantage in the social networking space do they gain by not telling me their p99 request time?
That seems normal... but in this particular case, it's kind of weird because anyone can make requests and observe how long they take. So nothing is really being hidden.
This is how the IETF and protocol development works[1]. In fact, it is strongly encouraged for participants to implement and deploy drafts before they are finalized. Otherwise you risk finalizing something with a lot of deployment problems.
You are reading it correctly. This is a draft, and I presume Facebook's engineering team are confident that they have the resources (time/ money/ whatever) to stay reasonably up to date on any subsequent changes.
Generally the way this work goes, at first things change pretty violently: maybe it goes from "let's have a 1-byte version number, ASN.1 OIDs for everything and some JSON" to "actually it's always the five bytes 'QUICK' then a 4-byte version number, we're doing CBOR instead of JSON, and no OIDs, now it's all URNs", in like two weeks of git pulls and mailing list posts.
But after a while the drafts start to settle down. On some topics everybody is satisfied that there's a good argument for why we do this and not that, on others it's a coin toss and it's just easier not to change it than argue constantly. Do I like OIDs? Eh, they're better than URNs but I can live with either, so fine, have URNs if you insist.
QUIC has largely settled down. Google's systems for example will tell you they speak draft 29 of QUIC. Do they? Well, more or less. Maybe it's sort of draft 32 really. But drafts 29 and 32 are pretty similar, and draft 32 is going to Last Call now, if nobody raises any issues it's done.
Is it conceivable that somebody discovers a grave problem in QUIC and it has to be substantially revised? Yes. But it's not very likely. So it's easily possible that a draft 29 QUIC implementation like Google's (or Facebook's) will mostly interoperate with an actual QUIC standard next year after just some small tweaks to tell it this isn't a draft any more. Why wait?
HTTP/3 is a bit more than just doing HTTP over QUIC, and it's slightly less finished than QUIC is, but it's also pretty stable and there's less that might change anyway.
Somebody has to try this stuff out, and at scale if we're to learn much more than "It might work" before it finishes that Last Call. Facebook, Google, Cloudflare, Mozilla, Netflix and so on are able to do that.
Facebook is one of the biggest technology companies in the world and serves an unbelievable amount of traffic using the best technology available to them, and actively works to shape the technology standards they use. Facebook is squarely in the ideal HTTP/3 use case, why would they not experiment with the new standard that is supposed to help them if for no other reason than to validate that it delivers on its promise? (or not)
It's acceptable to "degrade" a protocol that is only used as a fallback on broken networks if the new protocol is better for your use-case and is available to 99% of users.
Nobody is trying to make v3 "look better." If you use v1, you can continue to use it for a very, very, very long time.
So are the QUIC/HTTP/3 connection migration IDs an end run around ad blockers, MAC spoofing, and other anti-tracking measures? A while ago I read that these IDs are persistent across networks, etc.
> Facebook has a mature infrastructure that allows us to safely roll out changes to apps in a limited fashion before we release them to billions of people.
When you put it like that, the scale of it is scarily comprehensible.
Nice, can we now make Facebook not look like it was created in the early 2000s?
Am I the only one bothered by that?
I rarely use Facebook (I used it only when I had to, and never created my own profile, always some temporary ones), so maybe the content is king there, but the UI is very dated.
Well, I feel old now... to me FB is one of those sites with tons of heavy UI that make me yearn for older sites.
Keep in mind also that https://nostalgia.wikipedia.org is what Wikipedia looked like at the beginning of the 2000s, so I think you might just be misremembering how plain many sites were back then.