Identifying QUIC deliverables (ietf.org)
253 points by arusahni on Oct 29, 2018 | 182 comments



"confusion between QUIC-the-transport-protocol, and QUIC-the-HTTP-binding. I and others have seen a number of folks not closely involved in this work conflating the two, even though they're now separate things."

Well, yeah, they have the same name. My first reaction to the headline was that lots of software isn't ready for http over udp.


The protocol differences are minuscule as well, pretty much a version bit and ever-so-slightly slimmed header requirements IIRC. I remember reading it and thinking "cool, generalized QUIC... looks like I can build an HTTP-like transport with it and that's it".

As for HTTP over UDP, indeed - there is a reason Chrome has the ability to race a TCP connection alongside its QUIC connection attempts :).
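
For the curious, the race itself is a small pattern. A rough sketch in Go - the dialers are parameters, since the QUIC side would come from whatever QUIC library you use; this shows only the "start both, take the first winner" part, nothing that Chrome actually ships:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "net"
    )

    // dialer is anything that can try to establish a connection.
    type dialer func(ctx context.Context) (net.Conn, error)

    // raceDial starts every dialer concurrently and returns the first
    // connection that succeeds, cancelling the rest. (A production version
    // would also close any connection that completes after the race is over.)
    func raceDial(ctx context.Context, dialers ...dialer) (net.Conn, error) {
        ctx, cancel := context.WithCancel(ctx)
        defer cancel()

        type result struct {
            conn net.Conn
            err  error
        }
        results := make(chan result, len(dialers))
        for _, d := range dialers {
            go func(d dialer) {
                c, err := d(ctx)
                results <- result{c, err}
            }(d)
        }

        var lastErr error
        for range dialers {
            r := <-results
            if r.err == nil {
                return r.conn, nil // winner; the deferred cancel() aborts the losers
            }
            lastErr = r.err
        }
        if lastErr == nil {
            lastErr = errors.New("no dialers provided")
        }
        return nil, lastErr
    }

    func main() {
        tcp := func(ctx context.Context) (net.Conn, error) {
            var d net.Dialer
            return d.DialContext(ctx, "tcp", "example.com:443")
        }
        // A real client would pass a QUIC dialer alongside the TCP one;
        // if QUIC is blocked by a middlebox, TCP still wins and the
        // request proceeds.
        conn, err := raceDial(context.Background(), tcp)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected via", conn.RemoteAddr())
    }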


So what exactly is QUIC-the-HTTP-binding?

Everything on the web about QUIC is about UDP. It doesn't talk about the difference between HTTP/3 and /2. /2 is already complex enough as it is.


AIUI, the QUIC protocol [1] will be a general-purpose transport protocol built on top of UDP that offers multiple independent streams with encryption. For the application it's like it can open as many TCP connections as it wants, without the extra overhead from handshakes and rate control start.

HTTP-over-QUIC [2] specifies how to talk HTTP 2 over the QUIC protocol, with some adjustments like using "native" QUIC streams instead of multiplexing streams over a TCP stream, and adjusting the header compression to allow more out-of-order operations.

[1] https://datatracker.ietf.org/doc/draft-ietf-quic-transport/

[2] https://datatracker.ietf.org/doc/draft-ietf-quic-http/


> For the applications it’s like you can open as many TCP connections as you want without extra overhead from handshakes and rate control start.

This sounds like a fascinating claim — is the idea that the programming model when implementing QUIC server applications will allow one to easily aggregate distinct server endpoints which automatically share or amortize the overhead of establishing encryption and congestion detection between all the logical streams used by the application, across all the involved nodes?


I mean just between two endpoints, sorry for being imprecise. So instead of opening X TCP connections to a server to run requests in parallel, you'd use one QUIC connection with multiple streams. Much like HTTP 2 does, but using UDP instead of TCP so streams can be ordered and retransmitted separately.
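
If it helps to make that concrete, here's roughly what it looks like with the quic-go library (from memory - the exact signatures have shifted between versions, so treat this as a sketch rather than a reference):

    package main

    import (
        "crypto/tls"
        "fmt"
        "sync"

        quic "github.com/lucas-clemente/quic-go"
    )

    func main() {
        // One connection: a single handshake and one congestion controller
        // shared by everything below. (TLS config is stripped down for the
        // sketch; a real client needs certificate verification and the
        // right ALPN value for the server it talks to.)
        sess, err := quic.DialAddr("example.com:4433",
            &tls.Config{NextProtos: []string{"example-proto"}}, nil)
        if err != nil {
            panic(err)
        }

        // Many independent streams over that one connection. A lost packet
        // only stalls the stream it belonged to; the others keep flowing.
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            stream, err := sess.OpenStreamSync()
            if err != nil {
                panic(err)
            }
            wg.Add(1)
            go func(id int, s quic.Stream) {
                defer wg.Done()
                defer s.Close()
                fmt.Fprintf(s, "request %d\n", id)
                // ... read this stream's response here ...
            }(i, stream)
        }
        wg.Wait()
    }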


"handshakes and rate control start"

This is a common way to describe congestion control, but it's a bit simplistic and reductive.

Given 2 endpoints on the network, you want to compute a congestion window for the entire flow rather than per stream, because: 1) you can more accurately compute a single congestion window from aggregate stats across N streams than N congestion windows from the stats of 1 stream each, and 2) the congestion window is a shared resource across streams, which permits better network utilization.

The other benefit of QUIC, coming from multiplexing streams over UDP rather than TCP, is that the "blast radius" of a lost packet is better contained. It only impacts the stream that contained the packet; the rest of the flow can make progress uninhibited. So head-of-line blocking is not as severe.


QUIC offers a bunch of features HTTP wants. So this binding is about how you use those features. In some places HTTP/2 had a feature that QUIC now does better, in others QUIC offers something completely new.

Rebinding is an example of the latter. When you change networks (and thus addresses), QUIC can migrate the ongoing connection seamlessly.


Seeing how long we have used HTTP 1.0/1.1 (and still do), and with HTTP/2 far from fully adopted, I'm somewhat surprised to see v3 being discussed already and v4 being mentioned in there.

Though maybe it requires way more future vision and planning than I expect.


What you are seeing is someone picking up the baton and running with it. HTTP 1.1 has drawbacks which we've been living with for far too long. Getting people used to change was step one, that was HTTP/2, and now we're in a position to fix pain points.

I'd also like to see updates to IMAP, SMTP and FTP to name a few.


The most reasonable "update" to FTP would be to formally replace it with HTTP, because FTP is an awful protocol --- maybe the worst one in common use --- that deserved to die off decades ago.


FTP includes support for record-oriented files (STRU R, MODE B). This mostly isn't supported by FTP clients/servers on Unix-like platforms or Windows, but it is on those platforms which have record-oriented filesystem support (IBM mainframes, IBM i, Unisys mainframes, OpenVMS RMS, etc.) Although one could standardise a mechanism for transferring record-oriented files over HTTP, no such standard has been widely adopted. If someone wants to transfer a record-oriented file from e.g. VMS to z/OS and have the record boundaries kept (and with the necessary ASCII-EBCDIC conversion applied), FTP is the only widely adopted standard that can do that.

This is also why these platforms often use FTPS (FTP over TLS) instead of SFTP (SSH-based) – SFTP doesn't include any support for record-oriented files, only the stream-oriented files used on Unix and Windows.


> Although one could standardise a mechanism for transferring record-oriented files over HTTP

Or you could just provide a simple API over HTTP, whether an actual REST system or a single endpoint with one or two well-defined params (a CGI, basically). Why bother formalizing some standard when the tools to handle this are so ubiquitous (Apache+$LANG on the server, cURL or wget on the client)?

> SFTP doesn't include any support for record-oriented files, only the stream-oriented files used on Unix and Windows.

That's because SFTP isn't really FTP (in the protocol sense) at all. It's just a specialized shell started after an SSH session/tunnel is created.

That it includes FTP in the name is really just marketing because they wanted to supersede the real FTP. In that respect, it makes sense for them to just cover what 99% of the users of FTP needed and stop.


curl and wget on the client can't easily do this, when the client is another mainframe/minicomputer OS with a record-oriented filesystem. I don't believe they have any platform-specific code to support record-oriented files. You probably can get it to work with external configuration (e.g. on z/OS, using JCL to invoke curl/wget with a DD statement which sets the necessary dataset parameters.) But FTP-over-TLS is already a well-documented and well-understood technology in mainframe environments. What possible advantage could one get by replacing it with something hacked together with Apache/curl/wget?


> I don't believe they have any platform-specific code to support record-oriented files.

Yes, I was assuming a more traditional client, which is probably a mistake on my part.

> What possible advantage could one get by replacing it with something hacked together with Apache/curl/wget?

Well, there's always the obvious one, which is firewalls are much easier to deal with, since there aren't two separate ports in use, so you get rid of a whole class of network and firewall errors that are quite common.


> Well, there's always the obvious one, which is firewalls are much easier to deal with, since there aren't two separate ports in use, so you get rid of a whole class of network and firewall errors that are quite common.

If possible, one should use extended passive mode (EPSV) over TLS. Then, we are just talking about two ports instead of one, without any of the connection tracking complexity on middle-boxes that active mode or non-extended passive mode can require (such as rewriting the IP address in the PASV command response to implement NAT). And then, you have to wonder, if you have to substantially change the software in use at the client and server (and possibly even write custom code, per your Apache+$LANG suggestion), are those significant changes really worth it just to save on one extra port open on the firewall?


> And then, you have to wonder, if you have to substantially change the software in use at the client and server (and possibly even write custom code, per your Apache+$LANG suggestion), are those significant changes really worth it just to save on one extra port open on the firewall?

Oh, sorry, I didn't make it clear before. I'm fully admitting that in the case of record-oriented file requirements, especially on the client side, FTPS probably doesn't have a better solution available.

I was just noting that a firewall configuration that doesn't require multiple ports to be opened (or somehow reading the PORT/PASV or EPRT/EPSV commands and opening ports dynamically) is vastly simpler, which is an advantage. It doesn't necessarily make up for the record-oriented storage problems that would have to be dealt with in those cases, though.


There's all kinds of things that can be done to improve FTP but the single best would be to shoot it and shovel dirt over it.

- Marcus Ranum


> The most reasonable "update" to FTP would be to formally replace it with HTTP, because FTP is an awful protocol --- maybe the worst one in common use --- that deserved to die off decades ago.

The biggest problem with FTP[S] is that it interacts poorly with NAT. So maybe the problem is IPv4 and the solution is IPv6.


FTP also interacts badly with firewalls and network administrators (both the competent and not competent versions of them).

It also interacts way too well with the NSA and ISP eavesdropping.


> FTP also interacts badly with firewalls and network administrators (both the competent and not competent versions of them).

Passive mode FTP works fine with firewalls that allow outgoing connections. Even active mode can work using something like Port Control Protocol.

It's only a problem when the network is tightly locked down. And when everyone's answer to that is to use TLS/443 for everything, how is that better than just leaving the unprivileged outgoing ports open? You end up allowing everything either way, but with the everything-over-port-443 outcome you can't even selectively block things anymore.

> It also interacts way too well with the NSA and ISP eavesdropping.

You could say the same about HTTP, which is why they both support TLS.


I agree with your sentiment, but have found that enterprise partners are flabbergasted when I recommend using HTTPS for file exchanges, because they believe SFTP is the only way to accomplish this. When we control both sides of the communication (via an on-premise virtual appliance) we always use HTTPS, and only support SFTP where a customer is wanting to push data to us and this is the only way they can do it (because there's usually an ops-adjacent team that manages all file exchanges and they use software that only does SFTP--as far as they know).

In your (Matasano/Latacora) experience, does SFTP register as a security risk / exposure, or does it check the box as long as they aren't using FTP/FTPS?


Exposing SFTP is like exposing SSH. We're generally not happy about it.


How about when SFTP is treated as an alternative exposed protocol for your existing daemon to speak, with your program embedding an "SFTP-app framework" and giving it a delegate router/handler module—much like you'd embed a web-app framework and give it a delegate router/handler module?

I'm mostly thinking of Erlang's `ssh_sftpd` application here, where your application can "drive" it by exposing a "file server" for it to talk to as if it was the filesystem.


That seems like the worst of both worlds: a custom application and an SFTP server exposed to the world. We would not be thrilled about this. We'd secure it if it had to exist and be secured (sometimes, financial providers use SFTP to do batch job submissions), but would never recommend it.


SSH is harder to secure than TLS, I admit.

Sometimes security doesn't matter, though.

Picture a "web-app" like an APT package repo host. (I know the reference impls of APT repo hosts are essentially static-site-generators, but ignore that, and imagine an impl that's a web-app with dynamic business logic to expose the relevant routes.)

Such a web-app can be served over plain HTTP. Not even an MITM can exploit an APT client, since it looks for specific files on the server and then checks those files' signatures against a local GPG database.

What would be wrong with having this web-app also be an "SFTP-app"? Or an "FTP-app"? Or even a "telnet-app"?

In all these cases, it'd just be a virtual filesystem being exposed over a wire protocol where the security of the wire protocol is immaterial.

That's more in line with what I was picturing, here. You know how (public) S3 files can be retrieved over HTTP, but also over BitTorrent? Picture adding SFTP to that, as another option. That kind of API.

---

For the situations where the service would benefit from security, though, you can still just obviate the need for relying on SSH's security specifically.

For example, you can expose your SSH server only over a VPN. (I know a lot of people do this, though I personally find it to be more of a hassle than just correctly configuring SSH.)

Or you can expose your SSH server behind an stunnel instance, thereby reducing your configuration work down to just "setting up a TLS-terminating load-balancer", that can then be shared between HTTPS and SSH traffic.

---

Or, if it's just that you don't trust the Erlang SSH impl, you could just hack up OpenSSH to act as an "SSH terminator", forwarding the SSH-channel flows out of each SSH connection, as regular (auth-headered) TCP connections to your backend. Sort of like an fCGI server for SSH. I'm pretty sure the only reason nobody has done this yet is that no language (other than Erlang) has batteries-included support for speaking the server-side of SFTP.


We have customers that batch upload data to our cloud service. We originally only offered HTTPS uploads but found a significant number of our customers really preferred SFTP workflows. My best guess as to why is that WinSCP provides a comfortable UI paradigm and there isn't an equivalent (well known) windows GUI for doing HTTPS uploads.

Our SFTP server is written in go and exposes a VFS that has nothing to do with anything on the local file system. It doesn't even allow you to download the files you just uploaded in the same session.

Of course things like the recent libssh CVEs should make anyone exposing non-standard ssh services to the internet a bit nervous.


Mandate SFTP instead?


SFTP is far more complicated than HTTP file upload, which doesn't require support for SSH to function.


That's a bit of an oversimplification I think. That HTTP upload is/should be wrapped in a TLS connection.

Now, I'm not familiar enough with either of those in such detail to be able to say which one is more complicated, but I have a suspicion.


Modern TLS is less complicated than SSH, but even if it weren't, running two competing cryptographic transports is more complicated than just running one.

SFTP also provides what is in effect a shell (or, charitably, a remote filesystem), baked into the protocol. For example, SFTP has SETSTAT, which lets you set file attributes. None of what SFTP allows you to do is particularly difficult in HTTP (some of it is made harder by browsers, but that argument doesn't carry any weight since SFTP already requires you to use something other than a browser), but at the same time, none of it needs to be baked into a file transfer protocol, either.


If you only want file uploads, sure, but that's not all FTP is used for - if you do just want file uploads, then sure, POST to your heart's content.

But designing a HTTP API to implement the function that FTP gives you sounds like madness.


FTP is not only file uploads, you know. What about all the other filesystem manipulation? You can't simplify it to uploads.


The conversation about FTP always revolves around uploads because that's the thing browsers didn't actually do in the 1990s. (20 years later, browsers can pretty much do everything FTP does.)


I'd also like to see updates to IMAP, SMTP and FTP to name a few.

I would really like to see each of these disappear, to be replaced by new ways of doing things. The whole concept of e-mail needs a rethink. FTP should have already gone the way of the dodo, if for nothing else than the firewall issues it has.


Apart from mandating TLS, what's wrong with email?

FTP I agree - SFTP is an existing better option.


The email protocol is so dated that people can only keep patching it with hacks.

It may look like it's working from the users' point of view, but check how encoding is handled - it's a mess. And why is there still no widely deployed end to end encryption for such a core protocol?

DNS and domains are broken too. There's no point in using UDP, which only brings security problems, and there's still no decent way to encrypt which domain you're querying through your browser. And look at how domains are governed: a structure that makes crappy rules like who owns new TLDs, with some absurd "base" price for each domain instead of something more transparent and public. I wish that if you wrote a book you could easily have 'some-title.book', or when releasing a movie you could get 'some-title.movie', but somehow it went wrong and no one can fix it.

The lack of security improvements on these points makes me wonder if agencies just want to keep them dated so they can tap into people's communications easily, instead of making the world a better place where everything is p2p encrypted.


For DNS: the DPRIVE protocols, including DoH and DoT, prevent eavesdroppers from seeing your queries or tampering with them. DNSSEC lets your client ensure the answers you see are genuine.

For hiding hostnames, eSNI is under development. Cloudflare with a recent Firefox nightly lets you see this for yourself.


There is no reason DNS should be centralized. DoH and DoT create concentrators that can break the privacy of many people (that's one of the reasons people don't use them: why should I care if my ISP can read my queries in transit, when the alternative is making those queries against my ISP's own servers anyway?)

(That said, DoH solves a very important, but different, problem, namely: I want to provide a service with authenticity assurances, but DNSSEC is broken for the users, so what can I do? Not many people seem interested in providing that kind of service.)


Of all the complaints I have about email, encoding handling is not really one of them. Everybody decided long ago to ignore binarybody, and just send everything base64 encoded. I really think the result is much simpler than HTTP.


The "encoding" here is I think charset encoding, not MIME encoding. The determination of charsets is basically a game of roulette; the last time I looked, only somewhere around the region of 40% of messages actually adhered to the charset they declared themselves as. A classic (and simple) error case is people who write Windows-1252 and then call it ISO-8859-1, not realizing that there actually is a difference between the two.

(Email could use a better binary encoding for attachments than base64, though, since transports are basically 100% 8-bit safe, even if not binary safe. Usenet went with yEnc, which IETF balked at in what is a case of perfect being an enemy of the good).


Email was actually the first network protocol to successfully deal with this problem, by creating the standard that every other protocol uses today to declare your encoding. But broken clients will send broken messages, like they do in any protocol.

I have seen plenty of web sites broken by it too, and it's a problem when moving files between Linux and Windows.

(And yes, things would be better if people standardized on binarybody instead of 7bitmime. Unfortunately, the Microsoft server announces its support everywhere, but it's broken, so nobody can rely on that one extension. (As a nested parenthetical: there is a work-around that works everywhere, but it goes against the standard.))


Charset encoding is definitely a pain everywhere. Email's specific problem is that the charset is essentially mandatory in terms of labeling, but the label is often incorrect. The light at the end of the tunnel is that there is general agreement that the future of charsets is "use UTF-8 everywhere," so it's just a matter of waiting a century to kill off all the legacy stuff.


> why is there still no widely deployed end to end encryption for such a core protocol?

Perfectionism & cert vendors / CA mafia.


In most ways, other messaging technologies are superior in user interface and ease of use. The only real advantage of e-mail is its universal nature. I keep hoping we get an open source server that can federate just like e-mail, has a decent client on a couple of platforms, can handle calendars, and has some form of greeting to ask permission to send messages to a specific address (obviously override-able by a sysadmin on a "two people on same server" basis).

I have been playing with workflows lately (in relation to agents) and wonder if that would be an e-mail replacement.


Most people now expect one protocol (a la ActiveSync) to combine their contacts/calendar/notes, provide push notifications, send email, etc. - sort of like a universal PIM protocol (ActiveSync is patent-encumbered and largely driven by Microsoft). Luckily, JMAP by Fastmail looks promising in this regard.


I'm with you on IMAP and SMTP, as they serve a specific purpose. One thing I don't understand is why anyone is holding on to FTP at this stage over HTTP. Outside of legacy enterprise "file transfer" appliances, which I deal with all day and can appreciate the inertia around, I just don't understand what the benefit is over HTTP.


There’s no standard for HTTP file uploads, with all the permissions that come along with it. Eg you can POST a file to a path, but the behavior is undefined, as is authorization, etc. S3 is probably the closest thing to replacing FTP, and I would hazard a guess that there just isn’t sufficient reason to dump ftp and upgrade infrastructure.

Ftp is surprisingly common when integrating between parties, and this can be expensive to change for non technical reasons.


> There’s no standard for HTTP file uploads

There is a well-established and widely-supported IETF standard mechanism (consisting of several IETF standards) for that; it's called WebDAV.

> S3 is probably the closest thing to replacing FTP,

SFTP is much closer to replacing FTP, to the point that people often say FTP when they mean SFTP, which underlies most non-HTTPS enterprise integrations I've seen in the last decade. But SFTP isn't FTP (not even FTP-over-TLS, which is FTPS, which has much less use.)

> Ftp is surprisingly common when integrating between parties

SFTP is very common. FTP (including FTPS) is surprisingly common in the sense that any use of it is surprising given the well-established, battle tested, and superior in every way alternatives that are readily available.


The standard to replace FTP is/was WebDAV. It's not a very modern standard, but it's a standard.
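
And standing one up is almost no code these days. A minimal sketch in Go using the golang.org/x/net/webdav package (in-memory filesystem, no auth or TLS, so purely illustrative):

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/webdav"
    )

    func main() {
        h := &webdav.Handler{
            // Swap NewMemFS for webdav.Dir("/srv/files") to expose a real directory.
            FileSystem: webdav.NewMemFS(),
            LockSystem: webdav.NewMemLS(),
        }
        // Any WebDAV client (or plain curl issuing PUT/GET/MKCOL/PROPFIND)
        // can now upload, download, list and delete files over HTTP.
        log.Fatal(http.ListenAndServe(":8080", h))
    }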


Microsoft kinda broke WebDAV by adding various incompatible extensions and building a horribly broken client into Windows.

When users found out about all the issues, they preferred sticking to the old FTP, having a desktop sync app (eg. Google Drive), or using a WebUI for managing files (eg. Dropbox's website) etc.


Good point; I never used that directly. Why do you think that hasn’t supplanted FTP?


Because SFTP is a better fit for simple (but secure) file transfer. WebDAV handles a lot more, but it's overkill for most of the things FTP was used for.

OTOH, AFAICT, SFTP has largely replaced FTP for the system-to-system role (though ad hoc HTTP-based protocols are also common.)


I agree re: the sftp point; people often refer to it as ftp given the interface mimicry.


For me, WebDAV to my ownCloud server from Windows and Mac has always been dreadfully slow. The sync client itself that uses WebDAV is just fine, so I'd put my guess on poor integration and client implementations.


Or the S3 protocol, if de facto standards count.


PUT request with HTTP authentication is quite standardized, isn't it?

The problem isn't that there is no standardized way of doing it. The problem is that the existing standardized way of doing it has no adoption, because it's not really optimal for current needs.
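
For what it's worth, wiring up the standardized way server-side is genuinely small. A rough sketch in Go, stdlib only (hard-coded credentials and naive path handling, purely to show the shape of it):

    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
        "path/filepath"
    )

    func upload(w http.ResponseWriter, r *http.Request) {
        // Illustrative credentials; real code would use constant-time
        // comparison and a proper credential store.
        user, pass, ok := r.BasicAuth()
        if !ok || user != "alice" || pass != "secret" {
            w.Header().Set("WWW-Authenticate", `Basic realm="uploads"`)
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        if r.Method != http.MethodPut {
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
            return
        }
        // NOTE: filepath.Base is a crude guard against path traversal;
        // a real server needs proper path handling.
        dst := filepath.Join("/srv/uploads", filepath.Base(r.URL.Path))
        f, err := os.Create(dst)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer f.Close()
        if _, err := io.Copy(f, r.Body); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusCreated)
    }

    func main() {
        http.HandleFunc("/", upload)
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }

The client side is then just curl -T file.bin -u alice https://host/file.bin, which is arguably simpler than any FTP client invocation.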


With FTP you log in and are then in your home directory. Then you upload. With HTTP authentication, is a home directory required? Can you do `pwd` to query your home folder? Some FTP servers allow sandboxing where you are in the /home/user folder but `pwd` says `/`.


There are like at least 3 different mainstream ways to "log in" to a web server, and all of them are more straightforward than FTP's insane mainframe-era control-channel/data-channel design.

The "sandboxing" you're referring to is a serverside chroot, for what it's worth. And, of course, web servers have been doing that since NCSA httpd.


Eh. When you realize that FTP supported opening two control channels to different systems and having them do a file transfer without involving the client, it makes more sense.

It was also possible for the client and server to fork off data handling processes for the data channel and continue to use the control connection without needing async poll/select loops.

Anyway, it wasn't insane. At the time anyway.


It's the "without needing async poll/select loops" thing that makes it so crazy. It's a protocol that encodes the multiprocessing limitations of its hosts.


In this case, choice may not be a benefit for coordination. If there’s one way to do it, there’s much less to communicate and debug.


Well, HTTP is still missing listings, and broadly HTTP seems a layer under where you would implement file semantics. But you aren’t wrong.


Could you say more about that? Is it just a question of integrating smoothly with websites and cookies, or is it a performance issue?


Frankly, I don't know why most people have chosen to use POST requests instead of PUT. Browsers and servers usually support both PUT and HTTP authentication, and when used over HTTPS it should be quite safe.


> HTTP 1.1 has drawbacks which we've been living with for far too long.

No it doesn't. Don't fix what's not broke, and doubly so if the only motivation for the change is "well, Megacorp (c) says it's the bee's knees".


I find "you can't reliably use pipelining" to be a pretty significant drawback.


> Don't fix what's not broke

Yeah, the world's communication before the Internet existed wasn't exactly broken either, so let's not invent anything? Poor choice.


> No it doesn't.

Saying that anything has no drawbacks is pretty much always going too far.


While HTTP/2 has very clear advantages in many areas, it's losing out on some major HTTP/1 advantages that have allowed it to last so long. Most importantly: simplicity. One can create an HTTP/1 server in just a few lines of code. It's the same sort of reason AT commands are still used over simple serial devices.

While a lot of the modern web is great, it also has drawbacks that should be looked at very carefully: a simple HTTP page from years ago can still be rendered just fine. A modern page can't be rendered properly a year or so down the line. We need modern protocols to solve modern problems we've created, such as huge pages with tons of links and back-and-forth traffic to many servers. While I think work needs to be done in both directions, I don't know that we really need to go so far as to push down yet another HTTP/x protocol this soon. HTTP/1 is going to be used for quite a while still, HTTP/2 solves a lot, and where it matters (i.e. dealing with specialized servers) QUIC is already and has been in use.
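
For reference, this is the sort of "few lines" I mean - a deliberately naive HTTP/1.0 toy in Go that cuts every corner a conforming server can't:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(c net.Conn) {
                defer c.Close()
                // Read just the request line and ignore the rest - exactly
                // the kind of corner-cutting a toy server gets away with.
                reqLine, _ := bufio.NewReader(c).ReadString('\n')
                body := "hello\n"
                fmt.Fprintf(c, "HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n%s",
                    len(body), body)
                log.Printf("served %q", reqLine)
            }(conn)
        }
    }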


The main reason it's simple is that someone else wrote the most complex code for it: a TCP stack in the OS. Without it, it would be quite painful. With HTTP/2 one can do the same and rely on other people's implementations to deal with the complexity. By now there are good implementations for nearly every ecosystem.

What I would agree with is that HTTP/2 is a pain for exotic systems, e.g. microcontrollers, due to the window size and buffering requirements that the protocol imposes. But those systems are still free to stay with HTTP/1.1 for as long as they like. Nobody is going to remove support for it anytime soon.


> A modern page can't be rendered properly a year or so down the line.

Examples?


> One can create an HTTP/1 server in just a few lines of code.

One can create a non-conforming server in a few lines of code. The actual protocol has a horrendous number of corner cases and weirdness. E.g., determining a message body's size depends on whether content-length is sent, is it chunked, was the request a HEAD request, etc. There's also header folding (whereby a header can be split over multiple lines) though one has the out that the standard allows that to be optional (you can respond w/ 400). Trailers (additional headers after the body) and continuations (allows the client to request a potentially large upload, and the server can reject it prior to the client transmitting the body). Actually parsing headers beyond just a KV mapping, that is, trying to work with any of the actual data in a header is fraught with error: many of them can be specified multiple times, and can thus be in multiple occurrences of the header, or all in a single header but separated by commas, or some mix of the two. (Except cookie headers, those are special.) Some also allow a comma to appear in the value itself, meaning a parser that first splits on comma and then parses the resulting parts is broken.

These text protocols, while "simple" in the sense that a human can generally look at the on-the-wire data and parse with their eyeballs, require complex FSMs to parse correctly. (Complex, relative to an equivalent binary protocol with binary-encoded length-prefixed data or even a text protocol that didn't accommodate every random slightly different way of doing it.) Now, HTTP/2 also adds a lot of functionality above and beyond what HTTP/1 has, so that certainly eats up some of the gains in parsing simplicity (and I think it would be fair to say that the outcome is overall more complex).

I'd much rather have a binary protocol and good tooling. (And that's much more feasible with today's browser's devtools. I would not want to attempt this with the lame excuse for tooling IE 6 had back in the 90s.)

MIME's header encoding of non-ASCII characters is another abomination that I'd toss in this bucket. E.g.,

    Subject: =?ISO-8859-1?B?SWYgeW91IGNhbiByZWFkIHRoaXMgeW8=?=
     =?ISO-8859-2?B?dSB1bmRlcnN0YW5kIHRoZSBleGFtcGxlLg==?=
(Yes, that's a valid email Subject line.) HTTP supported this at one time, though nowadays the standard has mostly deprecated this:

   Historically, HTTP has allowed field content with text in the
   ISO-8859-1 charset [ISO-8859-1], supporting other charsets only
   through use of [RFC2047] encoding.  In practice, most HTTP header
   field values use only a subset of the US-ASCII charset [USASCII].
   Newly defined header fields SHOULD limit their field values to
   US-ASCII octets.  A recipient SHOULD treat other octets in field
   content (obs-text) as opaque data.
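
At least decoding that mess has library support nowadays. A sketch in Go: the stdlib's mime.WordDecoder only knows UTF-8, US-ASCII and ISO-8859-1 on its own, so the ISO-8859-2 half needs a CharsetReader wired up via golang.org/x/text (which rather proves the point about the complexity):

    package main

    import (
        "fmt"
        "io"
        "mime"

        "golang.org/x/text/encoding/ianaindex"
    )

    func main() {
        subject := "=?ISO-8859-1?B?SWYgeW91IGNhbiByZWFkIHRoaXMgeW8=?=" +
            " =?ISO-8859-2?B?dSB1bmRlcnN0YW5kIHRoZSBleGFtcGxlLg==?="

        dec := &mime.WordDecoder{
            // Anything beyond UTF-8/US-ASCII/ISO-8859-1 needs a charset
            // lookup supplied by the caller.
            CharsetReader: func(charset string, input io.Reader) (io.Reader, error) {
                enc, err := ianaindex.MIME.Encoding(charset)
                if err != nil {
                    return nil, err
                }
                if enc == nil {
                    return nil, fmt.Errorf("unsupported charset %q", charset)
                }
                return enc.NewDecoder().Reader(input), nil
            },
        }
        decoded, err := dec.DecodeHeader(subject)
        if err != nil {
            fmt.Println("decode error:", err)
            return
        }
        fmt.Println(decoded) // If you can read this you understand the example.
    }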


All this complexity is true of a number of legacy (bad) text protocols.

DNS zone files for example are specified incredibly poorly despite their apparent simplicity (multiple optional ambiguous fields, significant whitespace, poorly specified line-folding and paren semantics, DNS record-type dependent record value grammars, vaguely specified length limits, lexical includes, embedded parsing directives, it goes on...). You won't find any 2 implementations that behave the same.

It's typical to talk to engineers who haven't been through serious parser work and get responses like "well, it's just splitting a few strings isn't it?"


I don’t think that there’s any reason we should expect HTTP/2 to ever be fully adopted, and it’s not really necessary. I am excited to see TLS adopted everywhere and ALPN, SNI available to everyone who needs it, but once you have ALPN you don’t need to adopt HTTP/2. You can just choose to make HTTP/2 available if you think it will help, and in the future you could choose to make HTTP/3 available as well.

As a website operator, the hard part of HTTP/2 was getting TLS certificates. As an end-user, I'm just happy that popular web pages load faster. And if most of the world still uses HTTP 1.1 I won't care.

Making a datagram-based protocol available will open up a ton of possibilities for cool realtime apps over HTTP. Currently you’re stuck with WebSockets or WebRTC and for some applications these solutions aren’t good enough.
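
To make the ALPN point concrete, this is what the negotiation looks like from a client's perspective - a small Go example (h2 and http/1.1 are the registered ALPN protocol IDs):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        conn, err := tls.Dial("tcp", "www.google.com:443", &tls.Config{
            // ALPN: the client offers the protocols it can speak, in
            // preference order, inside the TLS handshake itself...
            NextProtos: []string{"h2", "http/1.1"},
        })
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // ...and the server picks one, so finding out "can we do HTTP/2
        // here?" costs no extra round trips. (HTTP/3 discovery works
        // differently, via Alt-Svc, but the opt-in spirit is the same.)
        fmt.Println("negotiated:", conn.ConnectionState().NegotiatedProtocol)
    }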


WebRTC datagrams are very 'close to the wire'. Size them correctly, and they map 1:1 with UDP packets.

I can't think of many network applications you couldn't build on top of them.


The problem with WebRTC is software support. It's not an underlying technical issue with the protocol itself.


You can do the same with QUIC streams (size them appropriately and get a 1:1 mapping with UDP packets).


It's all mostly driven by Google pushing through with Chrome's constant updates. They can update Chrome tomorrow to v3, v4, v10... it's all the same.

With HTTP/2/SPDY, they at least had the decency to publish something before forcing everyone into adoption.


It helps that this also pushed the other browser vendors to step up their games. Firefox switched to a Rapid Release cycle. Even Microsoft is now on board with Edge updating fairly regularly (at least compared to IE).

A few years ago, web sites and infrastructure had to be built to serve decade-old versions. Now it's reasonable to support only a few years-old versions, and this time frame is continuously shrinking as the current generation of evergreen auto-updating browsers takes more market share.

Though I don't like the bit about forcing adoption onto others, I do think that advancing the market to allow standards changes like this to happen in years instead of decades is a good thing.


Don't drink the Google kool-aid!

They all already have ways to release security patches. Firefox has a semi-secret/invisible system add-on for that - so much so that this is still the current way to release security patches and fix internet-breaking bugs.

Rapid release for new features was invented by Microsoft with IE, to push their agenda on standards bodies. It moved so fast that nobody even remembers IE 5 or 5.5... and that's the reason Google has rapid releases. Nothing to do with security, as argued in the first paragraph.


Too bad Windows 10 isn't exactly everyone's favorite and thus IE is still around for some time.


Don't people read about the OSI/ISO 7 layer model any more?

The biggest deliverable (to me at least) of QUIC is the formalism of a "session" layer which is transport binding agnostic: If you move to alternate underlying IP addresses, it can recover and continue.

Sessions were big in OSI. The Internet application/protocol stack basically ignored them, for reasons of NIH, complexity, and place-and-time. But the session layer is inherently useful.

UDP? Schmoo Dee Pee. What matters to me is a session layer with cryptography.

(OK, taste-testing the underlying fragmentation barriers, which was in an earlier QUIC, that was interesting because pMTU is broken.)


Seems to me there is a difference between "we're gonna propose that naming" and "will most likely be"?

That said, it doesn't seem like a good idea to me, since it's less of a clear protocol evolution. A variation of the http/2 name would IMHO make more sense, e.g. http/2q or something, since it's not changing the HTTP semantics.


To be fair, it helps that this is coming from the chair of the IETF HTTP working group.


That was my thought in constructing the title for this post. Apologies if I failed to capture the appropriate level of nuance.


I might very well be wrong with my interpretation of the politics involved, and you right :D


> A variation of the http/2 name would IMHO make more sense, e.g. http/2q or something, since it's not changing the HTTP semantics.

HTTP/2 didn't change semantics, either. We literally only have one version of HTTP semantics, and a variety of transports for it.


> A variation of the http/2 name would IMHO make more sense

Why? The core of HTTP/2 is defining how individual HTTP streams get multiplexed across a single TCP connection. That part is not applicable to HTTP over QUIC; there, HTTP streams get mapped on top of QUIC streams. Some things might be similar between the HTTP/2 and HTTP/QUIC mappings, but in the end they are separate definitions. They both define how HTTP semantics are carried over a different lower-layer mechanism. So HTTP/3 or HTTP/QUIC seem reasonable.


Please keep it to simple numbers; we don't want to go the Wi-Fi route of a/b/g/n/ac, where you couldn't tell what was more modern (which they have since fixed).


https://en.wikipedia.org/wiki/QUIC, for those like me wondering what it is.

I think it's funny that the Q in QUIC stands for Quick. Naming things is hard.


Recursive acronyms are a staple of hacker culture by now. There's of course GNU (and many GNU projects) but also PHP, PINE and many backronyms like XBMC: https://en.wikipedia.org/wiki/Recursive_acronym


You mean "Personal Home Page"? :)


Recursive acronyms often get redefined down the line


PHP was originally “Personal Home Page”. It wasn’t until version 3 that it became a recursive acronym in an attempt to legitimize it for more than personal projects.


PHP: Hypertext Preprocessor, I believe?

PHP 3 was the first "real" version of PHP; it came out in '97 and wasn't called Personal Home Page Tools then. O'Reilly didn't have a PHP book until '99, if I remember rightly.


Not technically recursive because the "Q" does not stand for "QUIC".


It does, if you get rid of the redundant K, but I don't think they were thinking that deeply about it when they arrived at QUIC. It's an attempted recursive acronym.


That reminds me of recursive acronyms[0].

Some examples:

* GNU: GNU's Not Unix

* cURL: Curl URL Request Library

* WINE: WINE Is Not an Emulator

[0]: https://en.wikipedia.org/wiki/Recursive_acronym


From the old days when it ran on slow hardware: Emacs Makes A Computer Slow


No, that’s a joke. The ones you commented on are all the official definitions.


One of my all time favourites was Xinu (Xinu is not Unix), much more so than the doubly-weird Hurd.


GNOME - "GNU's Not Unix" Network Object Modelling Environment!


> QUIC, Quick UDP Internet Connections

looks like they forced the Internet word in there to make it work...


From what I understand, this email is proposing to update the name of the RFC for HTTP over QUIC, not QUIC itself.


This was the first I'd heard of QUIC - having dealt a lot with crypto libraries, HTTPS, TLS etc., it sounds super promising. It gets rid of a lot of old-cipher cruft and TLS latency, bakes security into the transport, multiplexes connections, etc. Some really cool ideas there.


Let's see a few independent implementations before we start using it. As far as I can tell, this is also horrendously complicated, and I'm not aware of any implementations outside of Google's.



This one wins the best name competition https://github.com/ghedo/quiche


I heard of it before and later also covered it in school (wouldn't recommend the programme, but this one subject by Paola Grosso was superb). Definitely cool stuff.


QUIC uses TLS.


I had to disable QUIC in Chrome a few days ago - I was completely unable to access YouTube, and google.com search only worked intermittently.

maybe they need to make it work first


One of the ways to get it working is to get more people using it... QUIC is implemented with a number of fallbacks to deal with broken networks, but at some point it's probably better to just let things break and force network operators to fix their broken networks.


Web proxy software isn't quite ready for it AFAICT.


Google Cloud introduced QUIC support recently for load balancing

https://cloudplatform.googleblog.com/2018/06/Introducing-QUI...

Adoption of cloud gaming is accelerating. And we are already starting to see custom protocols in the wild

https://blog.parsecgaming.com/a-networking-protocol-built-fo...


QUIC is effectively TCP 2.0. It is not a lossy protocol, which is what games need.


I'm not familiar at all with QUIC, other than that it has something to do with UDP. Would this have any impact on being able to send/receive UDP from the browser for games?


QUIC is like TCP 2.0. So it won't help games


It'll probably get a non-blocking option. And some games need TCP anyway.


Even if QUIC brought along an unreliable stream option, the browser APIs still only expose HTTP semantics via XHR and fetch. So those would need to get extended too.


I would imagine it as something like an option on websockets, so it wouldn't take much API work.

Actually doesn't webRTC already have unreliable channels?


Sure there are possibilities, but they all require additional standardization (which takes time and effort).

WebRTC has unreliable data channels, but they are based on sctp/dtls/udp. Making them work over quic streams might be possible, but also requires a new standardization effort.

If someone does it I would actually prefer an API outside of webrtc, since that one carries a lot of complexity for signaling and requires again a few extra protocols (ice, stun, turn, etc). An unreliable/unordered client to server protocol and api could potentially be much simpler.


https://w3c.github.io/webrtc-quic/ is "outside" of webrtc in the sense that it doesn't use PeerConnection and doesn't require signaling. But it currently does require some form of ICE (not STUN or TURN, but ICE lite on a server). We're considering adding support for an ICE-less version, though, and I've already started a pull request to add that.


There is a WebRTC over QUIC experiment, the standard document can be found here: https://w3c.github.io/webrtc-quic/ It is currently not a work item of the W3C WebRTC WG, but we hear about it every time we meet.

The current focus right now is to deliver WebRTC 1.0, when this is done, we may reconsider it.


At TPAC last week the WebRTC WG decided to adopt the spec, although it needs to be verified on the mailing list, as there were some members present not in favor of adopting it.

(I'm an editor of the document and am in favor of it).


That's cool to hear! Thanks for sharing the info and link.


Of course it will. You can have control over retransmission.


Somewhat OT, if v3 has not yet been ratified, is there a chance to get SRV record support added for http? That could indirectly tie into QUIC optimization.


There is a 19-year old bug open for SRV support in Firefox [1] and it recently got updated (marked as blocking #1435798 APIs for p2p web applications)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=14328


The goal would be to use an SRV record rather than an A record for HTTP?

What's the benefit?


Using ports other than 443.


And load balancing without a separate server to act as a load balancer in front. Just announce a bunch of servers and clients will just pick one, with a ratio of servers that you adjust! No more proxy!
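
If clients ever grew support for it, the lookup-and-pick logic is tiny. A sketch in Go of what a client could do (the _https._tcp record is hypothetical - browsers don't consult SRV for HTTP today, which is exactly the complaint):

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // Hypothetical record: _https._tcp.example.com -> host:port targets.
        cname, addrs, err := net.LookupSRV("https", "tcp", "example.com")
        if err != nil {
            log.Fatal(err)
        }
        // LookupSRV already returns targets sorted by priority and shuffled
        // according to weight, so client-side "load balancing" is just:
        // take the first candidate that answers.
        for _, srv := range addrs {
            addr := fmt.Sprintf("%s:%d", srv.Target, srv.Port)
            conn, err := net.Dial("tcp", addr)
            if err != nil {
                continue // dead target: fall through to the next candidate
            }
            defer conn.Close()
            fmt.Printf("resolved %s, connected to %s\n", cname, addr)
            return
        }
        log.Fatal("no SRV target reachable")
    }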


Exactly! This also means the clients become more self healing. Throw a couple Anycast IP's out there, add some SRV records, and entire datacenter routing issues could be side-stepped by the client. That is something load balancers can't even directly handle. Today we hack around that using GSLB and short lived DNS, which makes DNS DDoS more effective.


You'd need Chrome to do some SRV experiments for a few years before proposing it.


It would be wonderful, but not likely, since Google is pushing it, and they gave completely spurious reasons for not including it in HTTP/2. I see no reason for this to be any different.


Why does it still adhere to the weird semantics of HTTP, when in reality HTTP has evolved into a file delivery protocol, with a rich inband messaging system?

Surely we have an opportunity to split HTTP into a file delivery system and a socket-like streaming system?


Does QUIC provide any advantage for wireless connections over TCP/TLS? Also, does QUIC use DTLS or something that's more customized?


It is especially beneficial for wireless connections, since it handles packet loss better than http/2, and flow control better than many http/1.1 connections.


It uses TLS 1.3. No need for DTLS, as QUIC is an in-order protocol (even though it's built on top of UDP). It is supposed to be faster than TCP+TLS.


Meanwhile we are still waiting for http 1.1 pipelining to be enabled in browsers.


Stop waiting; there's no reason to work on this any more. If you want pipelining that bad, use HTTP/2.


It seems they are getting ahead of themselves... why would it be a good idea to burn the HTTP/3 name now, given the stage of development QUIC is in?


Names aren't sacred. Windows skipped 9 for technical reasons, iPhone skipped 9 for marketing reasons. The only people impacted by a skipped HTTP/3 would be technical people upgrading from /2 to /4 in 5 years.


I really, really hope we're not going to be replacing fundamental low-level infrastructure code every five years.

Keep the web frontend culture contained where it is now, don't let them ruin everything else too.


I don't think of subsequent HTTP versions as replacing the previous ones - merely augmenting.


So we'll need 3 servers instead of 1 just for serving HTTP?


Nginx can support HTTP 1.0, 1.1, and 2 all at once. I'd suspect one day it will be able to support 3 as well. And I'd imagine that any other server software of consequence is going to as well.
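
Same story with a few lines of Go, for what it's worth - one TLS listener ends up speaking all of them at once (cert files assumed to exist):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // r.Proto reports whatever this particular client spoke:
            // "HTTP/1.0", "HTTP/1.1" or "HTTP/2.0" - all on one port.
            fmt.Fprintf(w, "you spoke %s\n", r.Proto)
        })
        // Over TLS, HTTP/2 is negotiated via ALPN with automatic fallback
        // to HTTP/1.x; an eventual HTTP/3 listener would sit alongside on UDP.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", handler))
    }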


Yes, and the more protocols running, the less likely it is to ever be replaced some day. Who has the time to write 2 (1.0 and 1.1 are basically equal) servers for a minimal package?

If somebody still sold HTTP servers, I would look at how this one was bribing the standardization process. But it looks like people are doing it freely.


I really can't figure out what you are saying. I don't see any evidence of bribery. And the technologies we're talking about have pretty legitimate reasons for existing. So, ...?


Unfortunately, the low-level infrastructure of the web is now being controlled by a company which offers significant internal performance incentives for releasing new products.


QUIC is hardly a product - very few users will ever know what it is. That Google has significant influence on internet infrastructure is hard to deny - but saying they are in control is ridiculous. And a lot of that influence comes from hiring really knowledgeable people - ignoring them just because they work for Google would be silly, especially for something like QUIC that is open and generally useful.


> Windows skipped 9 for technical reasons

Really? I thought it was because they don't plan to release another major version of windows, and wanted the number to have a nice solid "10", rather than a "9"


Idiots wrote code that says if osversion.startsWith('Windows 9') this is some crufty old Win9x system.

To revert to the actual topic, to defend against this type of thing the QUIC binding for HTTP assigns an arbitrary non-contiguous group of identifiers as reserved to be ignored in various places, with the intent that some real systems will prod these once in a while, to discourage idiots hard-coding checks that will become obsolete.
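
For anyone who hasn't seen the pattern, it's a one-liner; in Go it would look like this, and it matches the hypothetical new name just as happily as the old ones:

    package main

    import (
        "fmt"
        "strings"
    )

    func isWin9x(osName string) bool {
        // Meant to catch "Windows 95"/"Windows 98", but it would have
        // caught a hypothetical "Windows 9" just the same.
        return strings.HasPrefix(osName, "Windows 9")
    }

    func main() {
        for _, name := range []string{"Windows 95", "Windows 98", "Windows 9", "Windows 10"} {
            fmt.Printf("%-12s -> crufty old Win9x? %v\n", name, isWin9x(name))
        }
    }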


Windows wouldn't report any version beyond 8.1 (well 6.2) to old applications anyway - https://arstechnica.com/information-technology/2014/11/why-w... explains how they do it.

startsWith('windows 9') was never an issue


Urban legend.

Windows is already full of version APIs that lie to old programs.

It could also format the name differently. And people like to quote broken java code, but old java would just say "Windows NT (unknown)".

As a technical issue, it was small and easily avoided.

It was a marketing decision.


As far as I've seen, there was never any proof that Microsoft chose the Windows 10 name for that reason. I find it especially unlikely considering Vista broke even more recent compatibility.


My recollection is that MS did confirm that Windows 10 was chosen for some vague mention of technical reasons, which most people latched on to as being explained by the "Windows 9" check.

I have seen code in the 2010s that did explicitly check for "Windows 9" as a means to check if it was running on Windows 95 or 98. That doesn't mean that the code would actually work on those systems (it could well be a prelude to saying "your system is too ancient, we don't support you"). But such kind of cruft tends to last a very, very long time without active maintenance.


Is QUIC what is disrupting my video calls when I type into a google doc from chrome, or any other chrome->google interaction?


No.


Interesting. Google has been pushing TLS on the world, with its additional 750ms round-trip time to clients on the other side of the world, while taking advantage of their ownership of Chrome to use QUIC to deliver very fast connectivity for their users. That is a sweet competitive advantage that they have over competing services. EDIT: Am I misunderstanding this?


If you are False Start compatible (allow modern ECDHE, etcetera and speak HTTP/2, don't use crappy middleboxes from Cisco, Palo Alto) you get 1RTT with TLS 1.2. If you do TLS 1.3 you always get 1RTT.

You can't avoid the one round trip on first connections, you pay that in QUIC too.


There's 0-RTT, but once the connection setup cost is amortized by preconnects and multiplexing, being 0 or 1 RTT is not important enough to justify its engineering ugliness.


Isn't QUIC combining the transport handshake with the TLS handshake? That should make it faster.


If you trust people like cloudflare to do CDN termination for you, that time penalty is far lower.

There is also a proposal to allow connection sharing for different domains, which would reduce the connection time to 0-rtt even if the user has never visited your site.


When HTTP/2 is widely adopted by most small companies, and not just big ones, yeah, let's talk about v3.

But right now, HTTP/2 adds complexity, a lot of frameworks are not compatible with it and must be fronted by a proxy that is, and the perf benefit is not obvious at all.


The perf benefits are mainly for the big companies [namely, Google, Mozilla, and few others]. Hard to see any real ones for the rest of the world.


The problem is that QUIC, due to using UDP and not trusting the NIC to do segmentation or crypto offload, is incredibly expensive on the server side. You're paying costs upwards of a factor of 4 compared to TCP, for quality-of-experience gains that are questionable.

This doesn't matter for small sites, or for clients. But when you're serving 190Gb/s from a single box, paying 4x per byte really hurts.


Which Google can afford. Oh look, your small independent cloud provider is slow, come to Google Cloud. They've been subverting standards to push up the cost of entry and very few people have been calling them out...


The article is literally about the IETF standards body making a decision. If working with the IETF to propose and refine a new technology is, by your definition, "subverting standards" I really am at a loss for what a company should be doing.


It's not one standard. It's about having a huge head start on all the important internet standards and pushing their cloud interests. It's designed to work well at their scale, on their network, which you connect to directly. The standards group should be taking small and medium providers into account, but they're not really. Who has the resources to follow Google's multi-azimuth "standardisation" pace?


> It's not one standard.

Do you have specifics?

> It's about having a huge head start on all the important internet standards and pushing their cloud interests

Is Google not supposed to be trying to improve the services they offer?

> It's designed to work well at their scale, on their network, which you connect to directly.

I don't see any reason why QUIC won't be perfectly usable on most networks, and that is what the article was about. I'm not sure what standard Google is spearheading which is only going to be useful on networks their size and will somehow harm other networks.

> The standards group should be taking small and medium providers into account, but they're not really.

Again, what is an example of this?

> Who has the resources to follow Google's multi-azimuth "standardisation" pace?

I really don't see what the issue here is - Google is spearheading new standards. In doing so, it's working with the IETF to develop those standards. I'm not aware of any indication that it's working in bad faith or subverting IETF processes. The IETF process is fairly open. No one can follow everything happening in technology all the time - too much is going on. It seems like you are saying that Google should stop or artificially slow its work with standards bodies so that outside observers unaffiliated with those bodies can follow more closely?


The punishments for not "building/packaging/spriting" your website/app are way lower thanks to the multiplexing. Smaller shops either didn't know or didn't have the time to care about this and it added seconds to the page load in many cases. Now it's fixed (not that you shouldn't still build/package/sprite in appropriate cases) by just using HTTP/2.


A shop that doesn't do packaging/spriting is unlikely to even know about HTTP/2, let alone be able to deploy it. I doubt you can set up nginx if you can't even run a minifier from the command line.


They typically use web hosts who will deploy http/2 for you.


CPanel hosted websites... Hold my css


My (already minimal) blog's load time was almost halved by using HTTP2. There are real benefits for small sites/companies.


Not here. I've seen significant performance improvements from HTTP/2 across dozens of small sites.


If we're really looking to start from scratch with a modern protocol and break compatibility then HTTP v3 should be something like http://noiseprotocol.org instead.

I assume the QUIC protocol also comes with some "happy accidents" that allows Google to more easily collect data on us, too.


> I assume the QUIC protocol also comes with some "happy accidents" that allows Google to more easily collect data on us, too.

This is just conspiracy theory nonsense. If QUIC has such a flaw, it has an open spec and you are free to go find it and point it out.


Pointing it out would change nothing; Google would implement it anyway.


So? If what is claimed to be true is true, maybe Apple, Mozilla, and Microsoft won't implement it - and that would be a big deal to the alleged nefarious plot. But, that won't happen if no one speaks up. That is, of course, if this isn't just nonsense speculation.


This is exactly what we've been working on (QUIC with Noise).

As for the happy accidents, none have been spotted so far. It is an open and collaborative project, so feel free to check it yourself and chime in before its finalization.



