The majority of Facebook's traffic now uses QUIC and HTTP/3 (fb.com)
414 points by mostdefinite1 on Oct 21, 2020 | 139 comments



By the way, even though it is not yet turned on by default, HTTP/3 support is present in Firefox and can be activated by setting 'network.http.http3.enabled' to true in about:config. I have had it enabled for a few weeks and everything seems okay, except that occasionally a site fails to load the first time I visit it and needs a refresh. I'm not sure whether that is the new QUIC code or just an ordinary connection glitch.


Beware that there have been some issues with Google sites not loading or being very slow with Firefox's HTTP/3 implementation. Some of these were Firefox bugs and some were Google server bugs.

But I use HTTP/3 in Firefox Nightly and use Gmail and Google Docs all day long without major breakage these days.


I would avoid enabling QUIC manually anywhere for now anyway; Google controls both endpoints and steers the RFC, so it clearly has an advantage there.


It likely is. I played around with the implementation earlier on (it's now > 6 weeks back), and it still had some issues. E.g. the QUIC connection would time out at the server without the browser knowing about it, and all follow-up requests silently failed. A page refresh fixed that.

Another thing a full page refresh helps with is getting consistency on whether HTTP/3 is available or not, because the client might make another HTTP/2 request instead and refresh the Alt-Svc information.
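
For reference, the Alt-Svc mechanism is just a response header; a server advertising HTTP/3 sends something along these lines (the draft version token and max-age here are only illustrative):

    Alt-Svc: h3-29=":443"; ma=86400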


Had constant problems loading YouTube. Finally someone on r/Firefox helped me solve it by looking at what in my config differed from the defaults. This value was the culprit, and everything worked well after reverting the setting and restarting.


Thanks, this helped. :) I had it enabled too.


Where can I put my money on cache coherency?


I’ll take that bet and I will give you a generous 1:10 bet odds ratio.

So, if you give me $1000 as your stake then:

- If you are wrong I get to keep the whole $1000

- If you are correct then I will give you $100 of your money back. (Leaving me with $900 of your money still hehe).

In other words what I am saying is, I think you are correct in what you said ;)


FYI, what you wanted to say was 1 for 10, not 1 to 10.

Like craps odds when betting on the don't side, where you win less than what you bet.

But it's important to point out that even with 1 for 10 odds, you still get your original stake back plus the extra 1.


Thanks for clarifying. Guess it's a good thing I never tried to enter the betting industry haha


Yeah, nobody would ever place a bet if the best payout was less than what you put in...


I know you were just making a point, but I want to point out that the “nobody” part is certainly not true, as sad as that is. Generally something like that is considered fraud/deception, but it does happen.


I see stalls too. I just turned it off and everything seems to work better now.


Ah, the race to the bottom has begun with http too.

Can't wait for http 4, 5 & 6 to be released next year so that a few companies can fight to steer the direction of the Web.


> we refer to QUIC and HTTP/3 together as QUIC

My understanding is that, according to the standard, HTTP/3 always means QUIC is used, but QUIC can be used for other protocols as well. FB's terminology seems to be backwards.


It's rather frustrating when people do this: for the rest of the article, when they say QUIC, do they mean their umbrella term that is actually QUIC and HTTP/3, or do they mean QUIC the TCP alternative? They use both.


As I said below, the blog is more focused on QUIC itself rather than HTTP/3. The bulk of the improvements we see are from QUIC as a transport layer, rather than the changes in HTTP/3.


Why are there two meanings of QUIC in the first place?


Some years back, Google had two experimental pieces of work to try to speed up HTTP named QUIC and SPDY (as in "Quick" and "Speedy").

The output from the experiments was new standardization efforts at the IETF. Google's SPDY, which was a binary HTTP protocol over TLS, eventually resulted in HTTP/2 which there's a fair chance you use today.

The other idea, QUIC, is a much bigger lift. It replaces not just the HTTP protocol and TLS but the whole stack, even TCP. At the IETF this work was split into two pieces: the IETF's QUIC is just the TCP replacement, an encrypted connection-oriented reliable protocol, and the HTTP part of the problem is being standardised as HTTP/3.

Google's QUIC ("gQUIC") will be obsolete once the standardized protocol is finished. Right now Chrome talking to e.g. Gmail uses gQUIC; once the standards work is firmed up it'll speak HTTP/3, and then maybe a year later Google's sites will discontinue gQUIC because it's just maintenance effort with no residual value.


Good and concise explanation thank you


In here and in the article it sounds as if QUIC or HTTP/3 uses or builds a new Internet Protocol, while it was my understanding it _simply_ uses UDP to create a better version of TCP.

Of course an all-new IP protocol would be great, but the issue is that it would take a very long time for all the software and hardware to support it, and UDP is simply already there.

I would prefer a new HTTP to go an all-new way, leaving behind UDP and TCP and building something new, then support fallback over HTTP/1 or /2 until everything else caught up.


I don't think it's unreasonable to state that we will never deploy a new IP protocol again.

At least not one that works over the general Internet - controlled intra-organisation networks might possibly be able to do so, but very rarely.

There are too many broken machines on the Internet that assume all IP traffic is one of TCP, UDP and ICMP. And far too many are configured to screw up ICMP too.

So new protocols MUST use TCP or UDP as their base layer instead of raw IP.

So yes, QUIC uses UDP, but that should be considered an implementation detail. A hack for the lack of IP support on the Internet.

HTTP/3 uses QUIC like HTTP/1 and HTTP/2 use TCP + TLS, but QUIC is not limited to use by HTTP/3.

QUIC's development has been, basically, paused while HTTP/3 is finalised and then the IETF will pick up where it left off and work out how other higher-layer protocols will work using QUIC as the transport layer.


Well, I think UDP is about as minimal as can be when using IP, so it makes sense to use it to design a potential TCP replacement.

Having said that, I wish we could replace the "port" concept. With the size of address space IPV6 allows, 16 bits for ports is looking a bit small.


For a trivial connection-oriented protocol the ports are part of a 4-tuple (my-address, my-port, your-address, your-port) so the 16-bit port isn't a big problem there. It doesn't matter at all whether this port is being used for something else in regards to any other combination of remote address and port since that's not a match.

And for fancier protocols it doesn't matter anyway because they have their own concept of a connection identifier. WireGuard for example doesn't care at all, packets arrive and either they're authenticated or they aren't, it silently discards all packets that aren't authenticated, QUIC optionally has a connection ID that can survive changing the 4-tuple as far as I remember.

If you wish you did have more ports, the IPv6 address space makes it pretty cheap to just acquire more addresses on your network and use those, but I do not sense much appetite because people don't feel like they're short of ports.

You do see people spinning up more addresses to avoid needing SNI or similar. If you have sixty virtual machines on one hardware box, having sixty IPv6 addresses, one per VM, means the packets for VM #4 and VM #18 are now separated on the wire. That's convenient, but it doesn't feel like it's due to running out of ports.


I think they're talking about Google QUIC, which formed the basis for both IETF QUIC and HTTP/3; Facebook has been deploying Google QUIC for a long while now, AFAIK. The people at fault here are probably the IETF lol.


We never deployed Google QUIC. It was always IETF QUIC. Referring to them together as QUIC was just for expedience, since the main benefits came from the QUIC layer, not the HTTP layer.

HTTP/3 implies IETF QUIC. IETF QUIC itself can be used for non-HTTP protocols, though, just like TCP can be used for protocols that aren't HTTP/2.


Thank you for the clarification. I'm sure you're looking forward to WebTransport+QUIC as well, I know I am. I'm already doing some little tweaks to our WebSocket RPC and bulk transfer layers to resemble WebTransport so it's an easy drop-in when the QUIC transport becomes available, or when there's a reasonable implementation of WebTransport+HTTP/3 (the other middle ground).

Maybe I can publish the virtual “WebSocketTransport” thing once I'm satisfied with it.


I am really happy Google and others are pushing QUIC, but for only one main reason: networks that disallow UDP will now be considered "broken."

Without something to push UDP usage, it's possible we could end up with a TCP-only Internet that would make P2P connectivity more or less impossible.


Unfortunately, we may yet see networks that block or slow down QUIC, in a paradoxical attempt to improve performance.

QUIC is designed to hide a lot from the network. But some network nodes use visibility into things like round-trip time, data-in-flight and packet loss for each flow, so they can adjust queuing parameters to optimise for each user. These measurements are easy to get from monitoring TCP, but not QUIC.

The designers of those network nodes may conclude that blocking QUIC (UDP on port 443) and forcing fallback to HTTP/2 over TCP results in better ability to optimise network flows than allowing QUIC to go ahead. All browsers race TCP against QUIC, so a network blocking QUIC shouldn't significantly slow performance compared with just HTTP/2.
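
Concretely, the kind of policy being described is often a one-line firewall rule that drops UDP to port 443, e.g. with iptables (an illustrative example, not anything from the article):

    # Drop QUIC so clients fall back to HTTP/2 over TCP
    iptables -A FORWARD -p udp --dport 443 -j DROP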


This is a hefty bet. The gamble is that you're very good at this, and so you'll deliver better performance with HTTP/2 than the client could have got from HTTP/3, but even if you actually are right by a slim margin you're likely to suffer the same pushback from users as if you were wrong. The only scenarios where this is a smart bet are where competitors allow HTTP/3 and have noticeably worse performance.

If it's close, regardless of which way, you'll get beaten up for disabling HTTP/3, maybe that's unjust but that's how it is.


Which users are going to notice? Browsers and almost everything else will fall back to TCP and move on, because there are too many broken networks to bother.

Kind of like how most big sites clamp TCP MSS at 1440 instead of 1460; there are too many PPPoE or IPIP-tunneled links with broken path MTU, and too many clients without working/enabled path MTU blackhole detection, to bother making a fuss about it. Just move on and cry on the inside.


I think people will notice as BYOD gets more and more standard. There is already a situation in my company where everyone who is part of the BYOD initiative doesn't even use their company laptop anymore because everything is so much slower.


They might notice and complain if UDP in general is blocked, but if their webpages load, they won't complain that UDP to port 443 is degraded or blocked.


I expect HTTP/2 and HTTP/1.1 will still be with us for decades; orgs that block UDP will be able to continue to do so, and the rare HTTP/3-only website will just be inaccessible.

Even if there's a new killer app that requires QUIC, I imagine an org backward enough to disallow UDP will just not care about that app.


I expect HTTP/2 to disappear completely, to the point of browsers removing support for it probably even within a decade. Everything that works over HTTP/2 should work over HTTP/1.1, even if at a slight cost, and HTTP/3 should be uniformly superior to HTTP/2, except in those rare cases of UDP-blocking firewalls, which situation I expect to improve over time. Given the complexity and maintenance burden of the protocol and more importantly the upgrade mechanism, I think browsers will be happy to remove HTTP/2 in favour of the old reliable HTTP/1.1 (which is certainly not going anywhere) and HTTP/3 once they see very little using it any more, and absolutely nothing needing to use it.


I should also write a dissenting opinion on this point—for each client, the upgrade to HTTP/2 is free, as it’s negotiated via ALPN during the TLS handshake; whereas the HTTP/3 upgrade mechanism isn’t particularly solid yet, requiring an Alt-Svc header on a response from the server, meaning you’ve already had to connect over HTTP/1 or HTTP/2 before you learn you can connect over HTTP/3.

There’s also work going on to add a new type of DNS record to indicate that the server is h3-capable, which would allow you to connect over HTTP/3 immediately, but that’s opt-in, and loads of sites that support HTTP/3 will never add that record.

So until browsers switch to defaulting to trying HTTP/3, HTTP-over-TCP will continue to be used regularly even when HTTP-over-QUIC is supported, and that means plenty of HTTP/2 will be used until that time. This could keep HTTP/2 alive for a lot longer than the decade I posited. I don’t know how it’ll play out, but I think it will depend on how much browsers decide they want to push HTTP/3; if they push it hard and try it more optimistically, HTTP/2 will die more quickly.


"Science progresses one funeral at a time."

https://en.m.wikipedia.org/wiki/Planck%27s_principle


I'm totally on board with having more than one protocol available (especially since UDP is so much more flexible), but what is it about TCP specifically that precludes its use for p2p? Is it easier to get through NATs et al with UDP?


TCP has an explicit client and server (some caveats here). The userspace API for TCP requires you to do a connect, which does the TCP handshake in the TCP stack, before you can send traffic.

For UDP, you just specify the destination IP and port, and send packets.

For TCP, new incoming SYNs to most (S)NAT addresses will just get dropped - especially CGNAT - making it impossible to communicate in that direction. If you're both in that situation (really common, actually), you just can't talk to each other.

For UDP, the packets will also get dropped on the receiver side. However, the act of sending the packet will often cause the originating side's NAT to register that five-tuple (source and destination IP and port + UDP proto), which would allow the other side to reply. If both sides do this with the same IP/port pairs, then magically they can communicate - some of the time, at least. There's a lot more involved (for example, how do you know your own external IP address behind NAT?) - read up on STUN - https://en.wikipedia.org/wiki/STUN - for more details.
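
A rough Node.js sketch of the mechanism, assuming both peers have already learned each other's public IP:port from a rendezvous/STUN-style server (the addresses and ports below are made up):

    const dgram = require('dgram');

    const PEER_ADDR = '203.0.113.7'; // hypothetical peer's public address
    const PEER_PORT = 40000;         // hypothetical peer's public port
    const LOCAL_PORT = 40000;        // the local port our NAT mapping uses

    const sock = dgram.createSocket('udp4');
    sock.bind(LOCAL_PORT, () => {
      // Sending outbound packets makes our NAT register the 5-tuple,
      // so the peer's packets toward us stop being dropped.
      const punch = setInterval(() => sock.send('punch', PEER_PORT, PEER_ADDR), 500);
      sock.on('message', (msg, rinfo) => {
        clearInterval(punch);
        console.log(`hole punched: got "${msg}" from ${rinfo.address}:${rinfo.port}`);
      });
    });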


Third party mediated hole punching really only works with UDP. TCP is stateful and thus negotiating a hole punch requires incredibly precise timing, which is hard to achieve in practice.

Without UDP we have a purely "cloud-to-ground" Internet unless we can convince router/NAT makers to always include and always turn on UPnP/NAT-PMP. Not likely, and those protocols suck anyway.


Who disables UDP in 2020?


Schools tend to have it blocked.


Maybe someday a giant will push UDP broadcast. That would be neat.


That would require a ton of core upgrades and would be hard to implement without enormous abuse potential, so not very likely.


Tor, at the moment, is based on TCP, and since the whole architecture is built on that assumption, I don't see it changing anytime soon. QUIC means Tor is now "legacy", unless we just do the right thing and boycott QUIC.


Why does Tor have to be used to access HTTP services? The web is crap for privacy already.


For anyone interested, here is the original design doc for QUIC from 2013 [0]. Really good writeup, both as an engineering spec and as an architectural design. I recommend reading through it if you have the time.

[0]: https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqs...


Looks great, thanks for sharing!

How does it compare to the February 2020 standard draft from IETF? [1] I was following the development of the Websocket protocol pretty closely when it happened and had to update my implementation[2] multiple times as the design kept changing. Did this also happen here, or was QUIC pretty much done and standardized straight from the Google design?

[1] https://tools.ietf.org/html/draft-ietf-quic-transport-27

[2] https://github.com/nicolasff/webdis#websockets


There are a lot of significant differences between the original Google QUIC design and what we currently have in the IETF drafts. They are very much wire-incompatible. That's why it took several years for the working group to get near completion.


Can HTTP/3 be enabled in nginx now? Should it be? Is it a simple config change? I assume that would take care of serving static assets, but would reverse-proxied apps behind nginx also need to be upgraded to HTTP/3?


There are several efforts to implement QUIC and HTTP/3 in nginx. Cloudflare has it deployed in production with quiche[1], and Nginx themselves are developing one[2].

Applications sitting behind a proxy wouldn't need to be updated. The core protocol semantics of HTTP are relatively unchanged between HTTP/1.1, HTTP/2, and HTTP/3.
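
As a rough illustration, an nginx built with the Cloudflare quiche patch was configured along these lines at the time (directive names and the h3 draft token follow the quiche instructions and may differ in other implementations or later versions):

    server {
        # TCP listener for HTTP/1.1 and HTTP/2
        listen 443 ssl http2;
        # UDP listener for HTTP/3 (quiche patch)
        listen 443 quic reuseport;

        ssl_certificate     /etc/nginx/cert.crt;
        ssl_certificate_key /etc/nginx/cert.key;
        ssl_protocols       TLSv1.3;   # QUIC requires TLS 1.3

        # Tell clients that connected over TCP that HTTP/3 is available
        add_header Alt-Svc 'h3-29=":443"; ma=86400';
    }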

[1] https://github.com/cloudflare/quiche

[2] https://www.nginx.com/blog/introducing-technology-preview-ng...


Pretty much any HTTP proxy supports talking a set of negotiated protocol versions/features with the client, and a potentially different set of protocol versions/features with the server behind it.

The high-level flow is pretty much:

1) Set up client connection / negotiate stuff (TLS, ALPN, NPN, blah blah)
2) Process requests from that client connection
2.1) Decode request from client connection
2.2) Manipulate request (add/remove headers, ...)
2.3) Send request to server
2.3.1) If necessary, create a new connection to the server (TLS, ALPN, NPN, ...)
2.3.2) Encode request to server connection
2.4) Decode response from server connection
2.5) Manipulate response (add/remove headers, ...)
2.6) Encode response to client connection

You can talk totally different protocols from Internet-side client to the proxy, and from the proxy to the server - and multiple layers of proxies in between if you like. From an app point of view, there's essentially no difference. If you want to for some reason, you can use various headers to attempt to indicate to the client to upgrade/downgrade to particular protocols, but most apps won't care about that.


I feel like I belong in the minority who like the new Facebook SPA. It loads shockingly fast on my computer, almost instantaneous even


It's almost like Windows 95! I click something and then it just happens, immediately!

Funny how people forgot that you actually can make (web)apps fast.


But being an SPA I'm sure it's a billion times more manageable for them to maintain instead of a javascript hack nightmare.

So, yeah of course you can always make something lightning fast.. but can you manage it, or even develop it properly in the first place?


SPA is not synonymous with slowness, this meme needs to die already.


not sure where you think this meme (never heard of it) was brought up itt.


Interesting, it's very slow for me, and usually triggers warnings from Safari for excess resource consumption.


I think it's extra slow in Safari, for some reason. Just like Google Maps is also slower in Safari than in Chrome. However, overall I would say Safari is the fastest browser out there, based on my own browsing habits. Also most memory efficient.


I've personally experienced this many times using Safari. Pages that take up a lot of memory just straight up freeze and start chugging.


I have a feeling this may be React related. Safari is on average far, far faster on any site. But I have a larger React site that slows down quite a bit more than it does in Chrome.

Perhaps related to the const+let vs var bug the other day.


unlikely - any production build of React would transpile down to var for IE support


I usually get the “this page is using too much memory” warning repeatedly using safari. And apparently there’s no way to make it go away, which is super annoying.


Come to think of it, even Google Calendar does this if you leave the tab open in Safari for more than a day. Is this Safari's fault or Google?


Heavy JS apps (e.g. React) have all had perf issues in my Safari. Not just FB; anything using React has caused this issue on my machine.

Chrome and FF also use more resources for heavy JS (though not as much as Safari), but maybe they just have better engines.

I loathe the over-js'd web.


React apps are generally slow in Firefox, as well.


It hangs all the time for me, video scrubbing halts the entire tab, and it uses a lot of CPU sometimes.


That's not expected. If you can send me a performance profile I'll look into it. My nick @fb.com.


Any chance you're using Safari?


It loads very fast for me with Safari, but I regularly get an empty newsfeed or only 1 or 2 posts.


I feel the same about the Twitter redesign too. Everyone was hating the redesign when it launched, but my Twitter use increased from once or twice a week to a couple of times a day after the new design.

It is amazingly fast (on desktop at least), behaves in all the right ways you would expect, you never lose your position anywhere, the back button always works properly, it gracefully handles connection loss, and again, it's so fast and pleasurable to use. Too bad I can't say the same for the content on the site.


People dislike change, but I don't believe big companies change their design on a whim; they do a ton of research, and while some may complain, they know it'll be an improvement over their existing designs.


Weird. It's incredibly buggy for me.

I get notifications popping up but there's nothing there. Same thing with the messages jewel.

I'll get a notification about a reply to my comment, I'll go to the comment, like it, and then it will scroll me down to the same reply as a top level comment at the bottom of the post.

I've reported all the weirdness I've come across but nothing's changed since I was switched over a month ago.


On Safari, it feels fast in some places, but there is still a lot of jank, and it is nowhere near as good as the app experience, especially in scrolling and image-heavy views.


SPA? Ah, single page app


For Chrome users, this open source extension will tell you what protocol the browser is using for each website being accessed:

https://chrome.google.com/webstore/detail/http-indicator/hgc...


Unfortunately, permissions to access everything on every website are way too broad for a niche extension like this. There's no guarantee it won't be sold to a malware developer in a month. If you want to use it, I suggest cloning the repo and loading it as an unpacked extension to avoid auto-updates.


Also be sure to audit it yourself beforehand if you're going to go that route; there's no use tilting against auto-updates if you haven't gone to the trouble of making sure that malicious code isn't already present.


Essentially it is

    performance.getEntriesByType("navigation")[0].nextHopProtocol
and a background task to update the UI, plus a link to chrome://net-export/

This one is not hard to audit, but in the current model that has to be done by each user.


If that's it, you could just save it as a bookmark:

    javascript:alert(performance.getEntriesByType("navigation")[0].nextHopProtocol)


That works in Firefox, too.



Chrome's own developer tools can tell you this easily. Check out the network pane.


Can anyone make a tidy article/post that outlines exactly how each protocol differs (v1 vs v2 vs v3) at the network level? Obviously most of us understand the high-level differences, but what do the mechanisms and payloads look like at the lower levels? A side-by-side comparison would be great, as would pros & cons of each. Would older hardware (say a 10-year-old Netgear router) be able to use v3 without pain?


HTTP/1 is a plain text protocol over TCP. Requests are sequential per connection. Browsers tend to use up to six connections at a time to work around this.

HTTP/2 is a binary protocol over TCP. Requests are multiplexed. It’s normally better than HTTP/1, but in environments with higher packet loss rates (e.g. remote mobile networks) it can perform markedly worse than HTTP/1, because of TCP head-of-line blocking and the fact that the browser is now using only one connection.

HTTP/3 is a binary protocol over QUIC which is over UDP. Requests are multiplexed. If it works, it should be fairly uniformly better than both HTTP/1 and HTTP/2, because you can think of it as roughly HTTP/2 minus the bad parts of TCP. However, on a few networks (business networks typically, I think) it won’t work because they have firewalls that hate unknown UDP traffic.

(HTTP/3 is not actually just HTTP/2 over QUIC; HTTP/2’s header compression scheme HPACK is stateful in a way that depends on TCP’s sequential nature so that it couldn’t work over QUIC without completely reintroducing the head-of-line blocking problem, so it’s replaced with a variant that mitigates that problem substantially, called QPACK. But other than that, I think they’re roughly the same. Don’t quote me on that, though, it’s a few years since I read the specs and I’ve forgotten it all, not to mention that the specs have changed plenty in that time.)

I think this is a fair summary, but I don’t have any practical experience with HTTP/3, so I welcome any corrections.
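
One easy way to compare them in practice is to force each version with curl (this assumes a curl build with HTTP/3 support, e.g. compiled against quiche or ngtcp2, and a test host that speaks all three):

    curl --http1.1 -sI https://cloudflare-quic.com | head -1   # HTTP/1.1 200
    curl --http2   -sI https://cloudflare-quic.com | head -1   # HTTP/2 200
    curl --http3   -sI https://cloudflare-quic.com | head -1   # HTTP/3 200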


I'm wondering if we're giving up too much with the move away from clear text to binary; it reminds me of systemd.

Considering so much of Internet traffic is not text media, was it necessary?

I feel there is a slight possibility this is a Google optimisation for a problem they perceive, and then no one questions it because brrr faster brrr.

I'm conscious I might come across as a bit of a Luddite, but it's mostly musing.


This was a remarkably concise summary, nicely done. One clarification/question: I think all connections start as HTTP/1 by default, then a request to upgrade to HTTP/2 is sent. Does the same happen for HTTP/3? Does it go from 1 to 3, or 1 to 2 to 3?


HTTP/2 over cleartext uses an HTTP/1.1 Upgrade header field, yes, so that it’s an extra request. But HTTP/2 over TLS can use application-layer protocol negotiation (ALPN) so that the HTTP/2 negotiation piggybacks on the TLS handshake: the TLS ClientHello includes “I speak h2”, and the ServerHello response includes “OK, let’s do h2”.
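
You can watch the ALPN step happen with openssl's s_client against any HTTP/2-capable host (example.com is just a placeholder):

    # Offer h2 and http/1.1 in the ClientHello; the server's pick is reported
    # as an "ALPN protocol: ..." line in the handshake output.
    openssl s_client -connect example.com:443 -alpn h2,http/1.1 < /dev/null 2>/dev/null | grep -i alpn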

HTTP/3 negotiation I’m not certain about. Its existence can be broadcast with an Alt-Svc header on an HTTP response, but that means that the browser won’t use HTTP/3 for the first request. I think that might be what browsers are doing now (rather than racing TCP and UDP). But I think the direction things are heading is to optionally advertise this stuff over DNS as well, which would allow browsers to go straight to HTTP/3 if the relevant DNS records say it’s OK to: https://blog.cloudflare.com/speeding-up-https-and-http-3-neg....


HTTP/2 without TLS is basically not a thing.

The HTTP/2 specification tells you how you could in principle do this, but no popular browsers implement it and so far as I know no popular servers do so either.

So in practice it's always ALPN, a modern browser specifies that it would prefer h2 and servers that speak HTTP/2 select that during TLS handshaking. As with SNI this means it is not secret yet (but it could be protected by Encrypted Client Hello)


You are quite correct.

Browsers did initially implement h2c (HTTP/2 cleartext), but before the dust had really settled, they decided “no, let’s use this opportunity to keep pushing people to adopt TLS” (if you want the performance improvements of HTTP/2, you have to go secure first) and so they ripped it out again. So h2c is much more an academic concept than a real-world thing.

Closely related is the general trend browser makers have agreed on to prefer to make new things only available in secure contexts, with https://w3c.github.io/webappsec-secure-contexts/ explaining and providing various rationale. There’s a genuine security aspect to it, but it’s also definitely about pushing people to do security properly. (“If you want to use this new feature, switch your site to HTTPS first.”)


you have a gift for explaining these things. thank you!


Was going to say "of course, why would the router care about the layer-4 proto", but then again, Netgear is like a synonym for "unpredictable crappy middlebox".


I guess NAS (network-attached storage) would really benefit from QUIC; one could treat each file transfer as a separate session, so that a lost packet will not cause all the sessions to stall. Did any of the NAS protocols adopt QUIC already? Are they working on it?


There's been some interest among the Samba hackers, including some discussion with Microsoft. They seem quite excited about SMB-on-QUIC as an Internet-capable network filesystem protocol.


Indeed. I believe the latest Windows Insider builds have SMB over QUIC support (via msquic). I think QUIC brings a lot of potential to finally run network filesystem protocols over the Internet.


rsync, please and thank you.


Wow. Now if they could just figure out why my news feed has said "Something Went Wrong" at the bottom for about the last year or so.


Damn, I thought it was just me. I assumed the Facebook algorithm didn't have content for me because I don't have friends.


I don't think they test at all because... who cares? It's Facebook. You get what you pay for.


Is QUIC implemented at the kernel level like TCP/UDP, or is it entirely userland? Is it encapsulated in UDP packets?


Most implementations[1] implement it in userland, but this is by no means a requirement. There is no implementation for the Linux kernel presently, but both msquic and F5's QUIC implementation can run in their respective kernels.

QUIC is indeed built on top of UDP datagrams, much in the same way TCP is built (typically) on top of IP datagrams.

[1] https://github.com/quicwg/base-drafts/wiki/Implementations


Could QUIC also have been built on top of IP datagrams instead of UDP datagrams? Or does our crufty Internet mean only UDP and TCP are viable Internet protocols?


It is as you say. In theory you could build this as a new IP protocol next to TCP but in practice that would be blocked by default and can't be deployed.


Thanks for the info :)


Is this possible with NodeJS + express already? I didn't see any package for that or config


Node.js v15.0 now has experimental QUIC support. For production usage, I wouldn't expect anything usable before the v16 LTS release in October 2021 (and it might still be feature flagged at that point).

But that's pretty irrelevant for most production deployments, as you'll almost certainly want to have some type of AWS ALB, nginx, Cloudflare etc. load balancer in front of your Node services.


Caddy does support http/3.


Node.js has built-in HTTP/2 support but no HTTP/3 support, IIRC. Which is not a deal breaker, because any production app should already have a load balancer/proxy in front of it, many of which do have QUIC support.


QUIC support has landed behind an experimental flag https://github.com/nodejs/node/issues/23064

As for HTTP/3 that’s a bit further down the line


For people running apps behind Cloudfront/Fastly/Cloudflare, is it similar to h2 where it can be enabled at the CDN level, or load balancer?


Yes. The Internet-side client and the proxy server can use different protocols and versions that they can negotiate however they like, receive the HTTP request over those, and the proxy can use a separately-negotiated connection to your server to send that request over.

Essentially, the core of standard HTTP requests and responses (method, path, headers, response code) is fairly unchanged since HTTP/1.0, and what's changed is how those requests/responses are encoded and carried.


Yes, you can enable H3 support in the Cloudflare dashboard. We serve... lots... of H3 requests every second on behalf of our customers.

More on our blog https://blog.cloudflare.com/http3-the-past-present-and-futur...


We are working on the future of http1, 2 & 3 in the web-server-frameworks team. You can find our progress here: https://github.com/nodejs/web-server-frameworks

TLDR: we are a few years out from this being anything but experimental in node.


Why is the response time chart missing an x axis? What competitive advantage in the social networking space do they gain by not telling me their p99 request time?


Policy reasons is all I can say unfortunately.


That seems normal... but in this particular case, it's kind of weird because anyone can make requests and observe how long they take. So nothing is really being hidden.


Am I reading the IETF status incorrectly, or is HTTP/3 still in draft status? Why on earth are they implementing a protocol that is in draft status?


This is how the IETF and protocol development works[1]. In fact, it is strongly encouraged for participants to implement and deploy drafts before they are finalized. Otherwise you risk finalizing something with a lot of deployment problems.

[1] https://www.ietf.org/how/runningcode/


You are reading it correctly. This is a draft, and I presume Facebook's engineering team are confident that they have the resources (time/ money/ whatever) to stay reasonably up to date on any subsequent changes.

Generally the way this work goes, at first things change pretty violently, maybe it goes from let's have a 1 byte version number, ASN.1 OIDs for everything and some JSON, to actually it's always the five bytes 'QUICK' then a four byte version number, we're doing CORS instead of JSON, and no OIDs now it's all URNs, in like two weeks of git pulls and mailing list posts.

But after a while the drafts start to settle down. On some topics everybody is satisfied that there's a good argument for why we do this and not that, on others it's a coin toss and it's just easier not to change it than argue constantly. Do I like OIDs? Eh, they're better than URNs but I can live with either, so fine, have URNs if you insist.

QUIC has largely settled down. Google's systems for example will tell you they speak draft 29 of QUIC. Do they? Well, more or less. Maybe it's sort of draft 32 really. But drafts 29 and 32 are pretty similar, and draft 32 is going to Last Call now, if nobody raises any issues it's done.

Is it conceivable that somebody discovers a grave problem in QUIC and it has to be substantially revised? Yes. But it's not very likely. So it's easily possible that a draft 29 QUIC implementation like Google's (or Facebook's) will mostly interoperate with an actual QUIC standard next year after just some small tweaks to tell it this isn't a draft any more. Why wait?

HTTP/3 is a bit more than just do HTTP on QUIC, and it's slightly less finished than QUIC is, but it's also pretty stable and there's less that might change anyway.

Somebody has to try this stuff out, and at scale if we're to learn much more than "It might work" before it finishes that Last Call. Facebook, Google, Cloudflare, Mozilla, Netflix and so on are able to do that.


Facebook is one of the biggest technology companies in the world and serves an unbelievable amount of traffic using the best technology available to them, and actively works to shape the technology standards they use. Facebook is squarely in the ideal HTTP/3 use case, why would they not experiment with the new standard that is supposed to help them if for no other reason than to validate that it delivers on its promise? (or not)


Fait accompli?


Could that explain this: the new UI, apart from being terrible, often sees slowdowns for me?


Possibly, if something in your network is disallowing UDP traffic and the system has to fall back to HTTP over TCP.


Oh, so it's fine to degrade HTTP v1 to make v3 look better?


It's acceptable to "degrade" a protocol that is only used as a fallback on broken networks if the new protocol is better for your use-case and is available to 99% of users.

Nobody is trying to make v3 "look better." If you use v1, you can continue to use it for a very, very, very long time.


So are the QUIC HTTP/3 connection migration IDs an end run around ad blockers, MAC spoofing, and other forms of anti-tracking measures? Long ago, I read these IDs are persistent across networks, etc.


facebook and porn are still driving tech forward


I wouldn't put Facebook anywhere near something good like porn.


Thanks. that's going in my quotes.txt file


Is this down for anyone else?


Have you blocked facebook network wide?

Check this - https://web.archive.org/web/20201021174206/https://engineeri...


Nope.


Bit off-topic and dumb, but...

> Facebook has a mature infrastructure that allows us to safely roll out changes to apps in a limited fashion before we release them to billions of people.

When you put it like that, the scale of it is scarily comprehensible.


Nice, can we now make Facebook look like it wasn't created in the early 2000s?

Am I the only one bothered by that? I rarely use Facebook (I used it only when I had to, and never created my own profile, always some temporary ones), so maybe the content is king there, but the UI is very dated.


Well, I feel old now... to me FB is one of those sites using tons of heavy UI that makes me yearn for older sites.

Keep in mind also that https://nostalgia.wikipedia.org is what Wikipedia looked like at the beginning of the 2000s, so I think you might just be misremembering how plain many sites were back then.


On which platform is the ui outdated? IMO it looks pretty good


Settings menus on desktop browsers look like they haven't been touched since the Windows XP era.


Here is my screenshot: https://imgur.com/17OxohL

It is as if the whole Web 2.0 era forgot about it, all those icons on the left side.

The main part takes up only about 30% of the screen real estate.

On the other hand, reddit looks good, Google+ looked good. HN looks better.


They just updated it about a month ago.

Besides the dark theme, it is super crowded and feels like using a low-res phone on a 4k monitor—a step backwards.



