Having lived through a production datacenter meltdown caused by software engineers throwing away TCP, and with it things like slow start and congestion control, and then obliterating Ethernet switch buffers with UDP, I'm a bit concerned about how QUIC is supposed to be used in order to avoid that.
If your fabric has end-to-end flow control this would be fine, but this seems like it could be greatly misused and should come with some warning labels.
That’s the hope, anyway. It’s hard to compete with decades of TCP refinement but at least there’s some big guns behind making this work.
There’s a lot of thought that has gone into QUIC, and the protocol has several rules and mechanisms to prevent problems, such as the 3x amplification limit. Both the browser authors and the server side (e.g. Google) are on the lookout for problems and for ways to improve on them.
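To make the 3x amplification limit concrete, here's a minimal sketch of the check a server performs before the client's address is validated; the types and names are made up for illustration, not taken from any real QUIC stack:

```go
// Sketch of QUIC's anti-amplification limit (RFC 9000 §8.1): before the
// client's address is validated, a server may send at most three times
// the number of bytes it has received from that address.
// All names here are hypothetical, not from a real QUIC library.
package sketch

type path struct {
	bytesReceived int  // bytes received from this (unvalidated) address
	bytesSent     int  // bytes already sent back to it
	validated     bool // set once address validation completes
}

// mayScheduleSend reports whether another packet of size n can be sent
// without exceeding the 3x limit.
func (p *path) mayScheduleSend(n int) bool {
	if p.validated {
		return true // the limit no longer applies after validation
	}
	return p.bytesSent+n <= 3*p.bytesReceived
}
```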
A protocol that only works on the whim of a third party corporate person is not optimal for human people. Baking in a requirement of CA based TLS is not great.
>TLS: Certificate, the certificate for this host, containing the hostname, a public key, and a signature from a third party asserting that the owner of the certificate's hostname holds the private key for this certificate
Do mainline browsers (Chrome/Firefox/Safari) accept this? Do they do it without the scaremongering warnings that prevent access, as with normal TLS and self-signed certificates?
I hope to god this catches on. At this rate it won't, because no FAANG company is backing it. But I still hope somehow it will catch on. DJB's work is fantastic and this is no exception.
It's almost identical to TLS over TCP; the main difference is that QUIC is responsible for reliable delivery of packets/frames once the TLS handshake is complete. The handshake itself is carried in special CRYPTO frames [1].
QUIC can have multiple streams. Every stream is guaranteed to be ordered, like TCP.
The TLS handshake is wrapped in CRYPTO frames, whose content is passed verbatim to TLS.
QUIC does require collaboration with TLS. There are a couple of events in which TLS calls back into the QUIC implementation, for example on TLS state changes.
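To illustrate the per-stream ordering guarantee, here is a rough sketch (not any real library's API, just the idea behind RFC 9000's STREAM frames) of a receiver reordering frames by offset, independently per stream ID:

```go
// Rough sketch of per-stream reordering in a QUIC receiver. Frames for
// different streams are independent; within a stream, data is delivered
// to the application in offset order, like TCP. Illustrative only.
package sketch

type streamFrame struct {
	streamID uint64
	offset   uint64
	data     []byte
}

type reassembler struct {
	next    uint64            // next in-order byte offset to deliver
	pending map[uint64][]byte // out-of-order segments keyed by offset
}

type receiver struct {
	streams map[uint64]*reassembler
	deliver func(streamID uint64, data []byte) // app callback, in order per stream
}

func newReceiver(deliver func(uint64, []byte)) *receiver {
	return &receiver{streams: map[uint64]*reassembler{}, deliver: deliver}
}

func (r *receiver) onStreamFrame(f streamFrame) {
	st, ok := r.streams[f.streamID]
	if !ok {
		st = &reassembler{pending: map[uint64][]byte{}}
		r.streams[f.streamID] = st
	}
	st.pending[f.offset] = f.data
	// Deliver any contiguous run starting at the next expected offset.
	for {
		data, ok := st.pending[st.next]
		if !ok {
			return
		}
		delete(st.pending, st.next)
		r.deliver(f.streamID, data)
		st.next += uint64(len(data))
	}
}
```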
What I don't quite get is how the individual streams are encrypted. As I understood it, the initialization vector (IV) is generated during the initial handshake, so all the streams would reuse the same IV, which could leak the requested file.
Each endpoint maintains a packet number, starting at 0 and increasing by one for each packet sent. This is an input to the packet encryption IV.
Note that unlike TCP, retries do not re-use the packet number; they get a higher packet number on re-transmit. This lets endpoints differentiate packet retries from delayed packets, which isn’t possible in TCP (and that ambiguity is a problem for TCP stacks that want to react differently to each).
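Concretely, QUIC derives a fresh AEAD nonce for every packet by XORing the packet number into the static IV from the handshake (RFC 9001 §5.3), so encryption never reuses a nonce even though all streams share the same keys. A minimal sketch:

```go
// Sketch of per-packet nonce construction (RFC 9001 §5.3): the packet
// number, left-padded to the IV length in network byte order, is XORed
// with the static IV negotiated during the handshake.
package sketch

import "encoding/binary"

// nonce derives the AEAD nonce for one packet. iv is the static,
// per-direction IV from the key schedule (typically 12 bytes); pn is
// the packet number, which is never reused within a connection.
func nonce(iv []byte, pn uint64) []byte {
	out := make([]byte, len(iv))
	binary.BigEndian.PutUint64(out[len(out)-8:], pn) // rightmost 8 bytes
	for i := range out {
		out[i] ^= iv[i]
	}
	return out
}
```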
Ironically, TLS is not well suited to TCP: TLS is record-based, and TCP's byte stream has no record boundaries.
TLS-over-TCP solves this by prefixing each TLS record with a type-and-length header. QUIC uses UDP, which is datagram-based, and sends TLS records encapsulated in QUIC packets.
Yes and no. The TLS records are transmitted in CRYPTO frames, which form a crypto stream (much like a TLS stream) that is more or less transparent to the QUIC stack. QUIC implementations forward an arbitrary byte stream from the peer into the TLS library, and place the data that library produces back onto the ordered byte stream. There's no record layer for CRYPTO streams; that again lives above, at the TLS layer.
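A hand-wavy sketch of that plumbing; the tlsStack interface below is hypothetical, standing in for whatever hooks your TLS library actually exposes:

```go
// Sketch of how a QUIC stack shuttles CRYPTO-frame data to and from a
// TLS library. The tlsStack interface is hypothetical; real TLS
// libraries expose their own (richer) hooks.
package sketch

type tlsStack interface {
	// HandleHandshakeBytes consumes in-order handshake bytes from the
	// peer and returns handshake bytes to send back, if any.
	HandleHandshakeBytes(in []byte) (out []byte, err error)
}

type cryptoStream struct {
	tls     tlsStack
	sendBuf []byte // bytes queued for outgoing CRYPTO frames
}

// onCryptoData is called with already-reordered CRYPTO frame payloads.
func (c *cryptoStream) onCryptoData(data []byte) error {
	out, err := c.tls.HandleHandshakeBytes(data)
	if err != nil {
		return err // surfaces as a connection error carrying a TLS alert
	}
	// Whatever TLS produced goes back out in CRYPTO frames, framed and
	// retransmitted by QUIC itself rather than by a TLS record layer.
	c.sendBuf = append(c.sendBuf, out...)
	return nil
}
```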
QUIC is a great protocol, but IIUC you have to use a user-space library for things that normally (with TCP) live in the kernel/hardware. At least in my own experiments with a Golang implementation, I've seen CPU be the bottleneck for high-bandwidth transfers (MTU is ~1300 bytes). Add to that that common OSes don't give you a particularly large UDP receive buffer by default, which can cause a BDP bottleneck for high-latency connections.
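For what it's worth, the receive buffer can usually be enlarged; a sketch of asking for a bigger one from Go (whether you get the full size depends on OS limits such as net.core.rmem_max on Linux, and the sizes here are only examples):

```go
// Sketch: enlarge the UDP receive buffer a QUIC endpoint reads from.
// Whether the full size is granted depends on OS limits (e.g. the
// net.core.rmem_max sysctl on Linux); the numbers here are examples.
package sketch

import (
	"log"
	"net"
)

func listenUDPWithBigBuffer(addr string) (*net.UDPConn, error) {
	udpAddr, err := net.ResolveUDPAddr("udp", addr)
	if err != nil {
		return nil, err
	}
	conn, err := net.ListenUDP("udp", udpAddr)
	if err != nil {
		return nil, err
	}
	// Ask for a few MB so a high bandwidth-delay-product path doesn't
	// stall on the kernel's (often small) default.
	if err := conn.SetReadBuffer(8 << 20); err != nil {
		log.Printf("could not enlarge UDP receive buffer: %v", err)
	}
	return conn, nil
}
```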
That's true, and maybe lower-level optimizations will start to emerge. But I suspect QUIC was intentionally left in userspace to allow for more rapid development. Indeed there are already many libraries available which means you can use it today. If we had to wait for kernels and middleware to support it, that might still be years out.
Make sure the server sends h3 in the Alt-Svc (HTTP) header and adds h3 to the (TLS 1.3) ALPN. The browser may connect to h1 or h2 endpoints at first, then upgrade to h3 for subsequent connections where supported (provided the firewall isn't blocking QUIC).
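For example, a sketch of advertising h3 from an ordinary net/http handler; the port and max-age are placeholders, and the QUIC/UDP listener that actually serves HTTP/3 is assumed to exist separately:

```go
// Sketch: advertise HTTP/3 via the Alt-Svc response header from an
// ordinary HTTP/1.1 or HTTP/2 handler. The UDP/QUIC listener that
// actually serves h3 on port 443 is assumed to exist elsewhere.
package sketch

import "net/http"

func withAltSvc(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ma (max-age) tells the browser how long to remember the hint.
		w.Header().Set("Alt-Svc", `h3=":443"; ma=86400`)
		next.ServeHTTP(w, r)
	})
}
```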
If you do an HTTP request in Javascript using any of the available APIs (fetch, XHR), it will automatically use HTTP/3 for those requests if the peer is known to understand HTTP/3. No special APIs are needed unless the protocol you care about sits on top of QUIC but doesn't actually use HTTP/3. As the sibling comment mentions, WebTransport might help with non-HTTP use cases.
Chrome actually has deployed a version of WebTransport, which is basically a way of using QUIC directly from the browser for datagram/stream connections (e.g. to implement game networking in the browser). But it's relatively new, and you need to modify the QUIC stack in order to accept and handle WebTransport connections, so it doesn't "just work" out of the box with any popular QUIC libraries I know of.
It's unclear where this will all end up right now; you'll probably have to wait a bit for it to be production-ready. But there's clearly broad interest in the topic.