The HTTP crash course nobody asked for (fasterthanli.me)
902 points by g0xA52A2A on Oct 21, 2022 | 141 comments



> HTTP/1.1 is a delightfully simple protocol, if you ignore most of it.

As someone who has had to write a couple of proxy servers, I can't express how sadly accurate that is.


And this is why I expect HTTP/2 and HTTP/3 to be much more robust in the long term: the implementations are harder to write, and you won’t get anywhere without reading at least some of the spec, whereas HTTP/1 is deceptively simple and therefore has a lot of badly incorrect implementations, often with corresponding security problems.


HTTP/3 is written for the use case of large corporations and does not even allow for human persons to use it alone. It requires CA based TLS to set up a connection. So if you want to host a website visitable by a random person you've never communicated with before you have to get continued permission from an incorporated entity running a CA to do so.

This is far more of a security problem than all of the bad HTTP 1.1 implementations put together. It is built-in corporate control that cannot be bypassed except by not using HTTP/3. It is extremely important that we not let the mega-corp browsers drop HTTP 1.1, and that we continue to write our own projects for it.


Your complaint is strictly social, and quite irrelevant here.

Look, cleartext internet protocols are on the way out, because their model is fundamentally broken. For security reasons, I will note, and privacy. There, we joust security against security. Cleartext HTTP/1 is strictly a legacy matter, retained only because there’s still too much content stuck on it. But browsers will be more aggressively phasing it out sooner or later, first with the likes of scary address bar “insecure” badges, and probably within a decade by disabling http: by default in a way similar to Firefox’s HTTPS-Only Mode (puts up a network error page with the ability to temporarily enable HTTP for the site), though I doubt it’ll be removed for decades. And HTTP/1 at least over TLS will remain the baseline for decades to come—HTTP/2 could conceivably be dropped at some point, but HTTP/3 is very unlikely to ever become the baseline because it requires more setup effort.

You can still use cleartext HTTP/1 at least for now if you want, but this functionality was rightly more or less removed in HTTP/2, and fully removed in HTTP/3. Pervasive monitoring is an attack (https://www.rfc-editor.org/rfc/rfc7258.html), and HTTP/2 and HTTP/3 are appropriately designed to mitigate it.

Look, be real: the entire web is now built heavily on the CA model. If free issuance of certificates falters, the internet as we know it is in serious trouble. Deal with it. Social factors. This might conceivably happen, and if it does, HTTP/1 will not save you. In fact, cleartext HTTP/1 will be just about the first thing to die (be blocked) in the most likely relevant sequence of events.


It is a layering violation though. Not all HTTP usage is through a browser, and not all routes go over the plaintext Internet. Browsers or clients can still require HTTPS at the application layer, but it shouldn't be part of the protocol spec.

Suppose I have an app within an intranet that's secured with, say, WireGuard or an application-layer tunnel (e.g., SSH or OpenZiti).

Bringing HTTP/3 into the picture means dealing with CAs and certs on top of the provisioning I've already done for my lower layers, possibly leaking information via Certificate Transparency logs. Then the cost of double-encryption, etc.


I agree that sending cleartext over the internet in this day and age is a bad idea. But "encrypt all communication at the application layer" doesn't have to be the only solution. There's also "encrypt communication at the *network* layer," as discussed here for example: https://tailscale.com/blog/remembering-the-lan/

I have a suspicion that this will prove to be a better abstraction than application-level encryption for everything. If I'm right, I would expect things to naturally start migrating in that direction over time. We'll see!


Thinking that social issues can't and shouldn't shape technical discussions, especially when talking about one of the most important technological platforms for society, is rather limited and short-sighted.


> If free issuance of certificates falters, the internet as we know it is in serious trouble. Deal with it. Social factors. This might conceivably happen, and if it does, HTTP/1 will not save you.

This (HTTP/1 won't save us) doesn't seem entirely accurate to me.

I can run free, untrusted HTTPS easily using self-issued certificates. It's relatively simple to think of mechanisms where trust can be layered on top of that outside the traditional CA mechanisms (think Keybase derivatives like DID-systems). It's a small patch to allow that alternative trust framework to be used for HTTPS.

I don't know HTTP/3 at all, but if it is more tightly tied to CA infrastructure that is a problem.


> I can run free, untrusted HTTPS easily using self-issued certificates.

At present you can. But think about what conditions might lead to free issuance faltering: it will almost certainly boil down to pressure from governments. And do you think that such governments will lightly allow you to bypass their measures? No; once the dust settles, no technical measures will be effective: the end result will be mandatory interception of all traffic, with TLS proxying and similar, and any other traffic blocked. Countries have even done this at times, requiring anyone who wants to access the internet to install their root certificate.

The internet is designed to be comparatively robust against sociopolitical attack, but if a sufficiently powerful government decides to concertedly attack the internet as we know it, the internet will not win the conflict.

> I don't know HTTP/3 at all, but if it is more tightly tied to CA infrastructure that is a problem.

As clarified elsewhere in this thread, HTTP/3 changes absolutely nothing about certificate verification; superkuh appears to have misunderstood the meaning of the text in the spec.


>Look, be real: the entire web is now built heavily on the CA model.

No. You've just got your commercial blinders on. The entire *commercial web* is built on the CA model. But the commercial web is hardly all there is. There is a giant web of actual websites run by human persons out there that do not depend on CA TLS and whose use cases do not require dropping cleartext connections. That's only a need for for-profit businesses and institutions.

I agree that the mega-corp browsers will drop support for any protocol that does not generate them profit. The consequences of this action will be dire for everyone. But you can't convince people of this. You just have to let it happen and let people learn from the pain. Just like with the social networks.


Pervasive monitoring is an attack. No public internet traffic of any character should be cleartext. The actual websites run by human persons that you speak of (such as myself) are not exceptions to this.


That seems like a societal problem.


As others said, it is a layering violation. What does that commandment mean in practice? Essentially, you can't just UDP your way around the protocol and do frame comparison to test the robustness of the protocol; you have to care what it looks like when encrypted. And you now need to use a subset of the TLS spec which the most widely used implementations in the wild consider private API. So most QUIC implementations are built on some broken fork of OpenSSL. This leads to fewer implementations, which means concentration of power (the spec is not king, the implementations rule the protocol) and a narrower attack surface for exploiters. And we all lose.


Can you not just have it use a self signed certificate? I don't see why a CA would need to be involved at all, nor can I even imagine how that could be enforced at the protocol level.

This sounds like a red herring to me.

edit: Yeah I've more or less confirmed that self signed certs are perfectly fine in HTTP3. This is a big ball of nothing.


About half my personal (private, in my own home LAN) sites use self-signed certs that Chrome flat out won't accept. I have to type the magic key sequence to bypass the error. I do wish we could come up with something better for this kind of use case than having to set up Let's Encrypt on my public domain and issue a wildcard cert to use with RFC 1918 web sites.

And that's without worrying about HTTP/3.


I wish there were an intermediate AH-style mode: the page is signed but not encrypted.

Or barring that, I wish browsers would ease up a bit and make TOFU-style self-signed certs acceptable.

I really don't like how there is an expiry time built into TLS sites. Have you ever found someone's old site, usually hosted by a university, that just lives on year after year like a time capsule? Well, that's not gonna happen with TLS.

And on the subject of CAs, I don't think I trust them any more than a TOFU model. Have you looked at and verified every authority in your CA file? Do you really trust the Turkish government to be able to sign for any web site?

Aha! you say, this is why we have cert pinning.

To which my reply is: cert pinning is the TOFU model where you have removed all user agency. It is better than the CA model but really sucks from an end-user perspective. When things go wrong, there is no easy way to fix it.


Add the certificate to your trusted store? HTTP3 will change nothing about this.


Yeah, this is just a browser setting - the complaint sounds like it's made in bad faith, coming from someone who apparently knows about all the other aspects of using certs?


If you control both the server and the client, you can make yourself your own private CA, issue all the certs you need, and have no browser errors anywhere.


> I have to type the magic key sequence to bypass the error.

8 keys? f + i + r + e + f + o + x + Enter?


'thisisunsafe' to answer the actual question


I suspect you're missing the subjectAltName field from your certificate(s).

https://developer.chrome.com/blog/chrome-58-deprecations/#re...
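If that's the issue, something along these lines generates a self-signed cert Chrome will accept once it's in your trust store. This is just a sketch: the hostname and IP are placeholders, and -addext needs OpenSSL 1.1.1 or newer:

    openssl req -x509 -newkey rsa:2048 -sha256 -days 825 -nodes \
      -keyout myhost.key -out myhost.crt \
      -subj "/CN=myhost.lan" \
      -addext "subjectAltName=DNS:myhost.lan,IP:192.168.1.10"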


Use an empowered browser that lets you install your own private CA root.


Chrome does let you install your own private CA root. GP's problem sounds like a misconfiguration on the certificate-generation side.


I mean, if it's on the open web you can use Let's Encrypt. If it's on your private network, you can make whatever keys you want with XCA and trust your self-made CA in browsers.


Is there anything in the spec that actually requires that? AFAIK it's just that major implementers (browsers) have chosen to enforce TLS.


You're thinking of HTTP/2. The QUIC protocol that underlies HTTP/3 requires SSL all the time.


The language in the RFC that Google and Microsoft forced through the IETF to open-wash it uses "MUST" in capital letters when talking about setting up the HTTP3 endpoint and verifying the cert. https://datatracker.ietf.org/doc/rfc9114/

I would be extremely relieved if I am wrong and someone could explain how I am wrong. Like... maybe there's some mechanism to self-sign without a CA and use a null cipher? So even if most users would be scared away, geeks could click through (like today's status quo with self-signed SSL certs).


> the RFC that Google and Microsoft forced through the IETF to open-wash it

This is a gross misrepresentation of the situation. Yes, Google played a significant role in the development of HTTP/2, QUIC and HTTP/3, providing the starting point for the development work in each case, but there was no open-washing: there was a collaborative process with the involvement of many interested parties, and the end result was significantly different from what was first proposed, and significantly better. This is how IETF works. Google did not control matters in any way, nor Microsoft.


The TLS verification requirements laid on HTTP/3 are described in RFC 9114 §3.1 ¶2, https://www.rfc-editor.org/rfc/rfc9114#section-3.1-2:

> The "https" scheme associates authority with possession of a certificate that the client considers to be trustworthy for the host identified by the authority component of the URI. Upon receiving a server certificate in the TLS handshake, the client MUST verify that the certificate is an acceptable match for the URI's origin server using the process described in Section 4.3.4 of [HTTP]. If the certificate cannot be verified with respect to the URI's origin server, the client MUST NOT consider the server authoritative for that origin.

This boils down to “this is HTTPS, so the same rules as ever apply for matching the certificate and origin”. I suspect you’ve misunderstood what authoritativity conveys. The last sentence is saying “… and if verification fails, don’t trust the connection”—and it’s up to each app to decide what to do about that; browsers put up a scary warning error page that you can normally click through (depending on server configuration). Note that it doesn’t even hardcode the CA model; I like the way RFC 9110 §4.3.3 ¶1 puts it: “The client usually relies upon a chain of trust, conveyed from some prearranged or configured trust anchor, to deem a certificate trustworthy.”

You can read more about the rules of HTTPS in https://www.rfc-editor.org/rfc/rfc9110#section-4.3.3 (sections 4.3.3 and 4.3.4). Certificate verification is the same as ever, and the only difference between HTTP/1 and HTTP/2 and HTTP/3 is that HTTP/1 has connection-per-origin, where 2 and 3 can use a connection for multiple origins (§4.3.3 ¶2–3 spells it out).


You can still use a self-signed cert with HTTP3 (including rightly scary warnings for visitors) or you can make your own CA and distribute the cert (no scary warnings when people visit your site).


I will believe this when I see it, thank you

the zealous "you must obey the law" tone of SOME comments here reinforces the worst stereotypes of corporate apparats.. individuals doing the bidding of institutions based on the letter of their "laws"

Human history has shown again and again that this ends badly .. HTTP is OK with ME


> I will believe this when I see it, thank you

I'm on my phone so I can't confirm this is http3, but how about https://self-signed.badssl.com/


ok

    $ curl -v https://self-signed.badssl.com/
    *   Trying 104.154.89.105:443...
    * Connected to self-signed.badssl.com (104.154.89.105) port 443 (#0)
    * ALPN, offering h2
    * ALPN, offering http/1.1
    *  CAfile: /etc/ssl/certs/ca-certificates.crt
    *  CApath: /etc/ssl/certs
    * TLSv1.0 (OUT), TLS header, Certificate Status (22):
    * TLSv1.3 (OUT), TLS handshake, Client hello (1):
    * TLSv1.2 (IN), TLS header, Certificate Status (22):
    * TLSv1.3 (IN), TLS handshake, Server hello (2):
    * TLSv1.2 (IN), TLS header, Certificate Status (22):
    * TLSv1.2 (IN), TLS handshake, Certificate (11):
    * TLSv1.2 (OUT), TLS header, Unknown (21):
    * TLSv1.2 (OUT), TLS alert, unknown CA (560):
    * SSL certificate problem: self-signed certificate
    * Closing connection 0
    curl: (60) SSL certificate problem: self-signed certificate
    More details here: 
    https://curl.se/docs/sslcerts.html
"curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above."

    $ curl --version
    curl 7.81.0 (x86_64-pc-linux-gnu) libcurl/7.81.0 OpenSSL/3.0.2 zlib/1.2.11 brotli/1.0.9 zstd/1.4.8 libidn2/2.3.2 libpsl/0.21.0 (+libidn2/2.3.2) libssh/0.9.6/openssl/zlib nghttp2/1.43.0 librtmp/2.3 OpenLDAP/2.5.13
    Release-Date: 2022-01-05
    Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp 
    Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd


        curl -kv https://self-signed.badssl.com/
        *   Trying 104.154.89.105:443...
        * TCP_NODELAY set
        * Connected to self-signed.badssl.com (104.154.89.105) port 443 (#0)
        * ALPN, offering http/1.1
        * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
        * successfully set certificate verify locations:
        *   CAfile: /opt/local/share/curl/curl-ca-bundle.crt
        CApath: none
        * TLSv1.2 (OUT), TLS header, Certificate Status (22):
        * TLSv1.2 (OUT), TLS handshake, Client hello (1):
        * TLSv1.2 (IN), TLS handshake, Server hello (2):
        * TLSv1.2 (IN), TLS handshake, Certificate (11):
        * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
        * TLSv1.2 (IN), TLS handshake, Server finished (14):
        * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
        * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
        * TLSv1.2 (OUT), TLS handshake, Finished (20):
        * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
        * TLSv1.2 (IN), TLS handshake, Finished (20):
        * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
        * ALPN, server accepted to use http/1.1
        * Server certificate:
        *  subject: C=US; ST=California; L=San Francisco; O=BadSSL; CN=*.badssl.com
        *  start date: Aug 12 15:59:10 2022 GMT
        *  expire date: Aug 11 15:59:10 2024 GMT
        *  issuer: C=US; ST=California; L=San Francisco; O=BadSSL; CN=*.badssl.com
        *  SSL certificate verify result: self signed certificate (18), continuing anyway.
        > GET / HTTP/1.1
        > Host: self-signed.badssl.com
        > User-Agent: curl/7.65.1
        > Accept: */*
        > 
        * Mark bundle as not supporting multiuse
        < HTTP/1.1 200 OK
        < Server: nginx/1.10.3 (Ubuntu)
        < Date: Fri, 21 Oct 2022 18:41:58 GMT
        < Content-Type: text/html
        < Content-Length: 502
        < Last-Modified: Fri, 12 Aug 2022 15:59:21 GMT
        < Connection: keep-alive
        < ETag: "62f678d9-1f6"
        < Cache-Control: no-store
        < Accept-Ranges: bytes
        <


Yes, -k says "less checking, generally proceed", but does it remember that certificate? Maybe not.


I wouldn't want curl to remember the exception. It's not like a browser: just because I'm currently testing a site with -k does not mean I never want it to perform the normal careful checks.


This seems like it... works exactly as intended?

If you decide you trust that certificate (which can be a legitimate thing to do - the cert signature could be communicated to you via out-of-band trusted mechanisms) then https://curl.se/docs/sslcerts.html explains how to trust it.


Among other things that's saying it's a self-signed cert and can do HTTP2. So that Chrome on my phone will connect to it does confirm that you can do self-signed certs with HTTP2 at least.


> HTTP is OK with ME

How do you propose to secure user sessions and prevent MITM or tracking otherwise?


That battle is mostly lost anyway - the web is what Google and Apple let you see for the most part for most users.


I encourage you to take solace in stories of courage and invention at this challenging time in history.


> whereas HTTP/1 is deceptively simple with therefore a lot of badly incorrect implementations

Doesn't that imply that HTTP/1 is deceptively complex?


I think the idea is that HTTP/1 is simple in the hello-world 5th-percentile-complexity case, which deceives people into thinking that it's also simple in the real-world 99.9th-percentile-complexity case, which it's not at all.


It's, like, simple in about the 80th-percentile-complexity case. But the remaining 20% takes 80% of the work (and re-architecting). For example, 1xx responses break the 1-1 correspondence between requests and responses. Then an "Upgrade" header may mean you need to turn a connection into a dumb byte pipe, ditto for "CONNECT" requests. Then there is the whole business of end-to-end vs. hop-by-hop headers: some of the latter will be listed in the "Connection" header (did you know that that is its original purpose, and the "close" option is but a hack?) but some headers are always hop-by-hop and the proxy is expected to filter them even if they're not listed in the "Connection" header, though of course a comprehensive list of such hop-by-hop headers doesn't exist. Then there is pipelining. And handling HTTP/1.0 clients (yep, one of the reasons why OP has "IT'S SET TO HTTP/1.1 AND NOTHING ELSE" in his article), who by their nature cannot support replies in "chunked" transfer-encoding. And handling POST bodies in "chunked" transfer-encoding. And handling trailers if you did not strip "TE: trailers" from the client's request. And there may be chunk extensions in "chunked" encoding. And... multiline headers?.. The list goes on and on.

And a decent HTTP-proxy must handle all of that stuff or at least fail gracefully without affecting other clients.
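For a taste of just the chunked part, here's roughly what a chunked response with a trailer looks like on the wire (every line ends with CRLF, and the message ends with one more empty line after the trailers; the trailer name and chunk extension here are made up for illustration):

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked
    Trailer: X-Checksum

    4;note=a-chunk-extension
    Wiki
    5
    pedia
    0
    X-Checksum: abc123

A proxy has to re-frame all of that faithfully, and decide what to do with the trailer, before it can even think about the next request on the connection.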


I get what you're saying, but robustness through complexity feels like an odd argument nonetheless.


Its counterintuitivity is why I like bringing it up. :-)


As someone who has not read the HTTP/1.1 spec, what are some pitfalls that could actually become security issues?


The most common proximate cause of security issues in format handling (parsing and emitting) comes from implementations differing in their parsing, or implementations emitting invalid values in a way that will be parsed differently. Probably the most common type of security issue then comes from smuggling values through, bypassing checks or triggering injection. (This is the essence of injection attacks as a broad class.) One of the easiest demonstrations of this in HTTP specifically is called HTTP request smuggling: https://portswigger.net/web-security/request-smuggling. And the solution for that is pretty much: “stop using a text protocol, they’re too hard to use correctly”.
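The classic CL.TE variant is a good illustration (the host name is a placeholder): the front end frames the body by Content-Length, the back end by Transfer-Encoding, and they disagree about where it ends:

    POST / HTTP/1.1
    Host: vulnerable.example
    Content-Length: 13
    Transfer-Encoding: chunked

    0

    SMUGGLED

The front end forwards all 13 body bytes; the back end stops at the empty chunk and treats "SMUGGLED" as the start of the next request on that connection.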


One of the simplest issues is that headers end with a newline. Most code will not generate a header with an embedded newline, so it's common that software doesn't handle this case and passes the newline through unmodified. This means that if someone is able to set a custom value for part of a header, they can often use that to inject their own custom response header. Or even their own custom response body, since that is also set off with newlines.
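Purely as an illustration: if an app echoes a query parameter into a response header without stripping CR/LF, a value like `en%0d%0aSet-Cookie:%20session=attacker` turns one header into two:

    HTTP/1.1 200 OK
    Content-Language: en
    Set-Cookie: session=attacker
    Content-Type: text/html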


Being text-based. Which leads to people constructing protocol messages by printf and therefore tons of injection bugs.


I don't think I could implement a correct HTTP 1 agent even if I read the specs.


But for backward compatibility, implementers will still have to support HTTP/1, which will likely take more than 50% of the total effort.


HTTP/2 makes no sense at all. HTTP/3 is just a fix to HTTP/2 so that it makes some sort of sense.

Both of these are only concerned with reducing the latency of doing lots of requests to the same server in parallel.

Which is only needed by web browsers and nothing else.


I feel like this applies to many technologies. Made me think of the bootstrapping, “I-can-build-that-in-a-weekend” crowd.

The initial problem is usually easy to solve; it's all the edge cases and other details that make something complex.


> As someone who had to write a couple of proxy servers, I can't express how so sadly accurate it is.

Chunked transfer/content encoding problems still give me nightmares...


“By contrast, I think about Bluetooth a lot. I wish I didn't.”

LOL, yes same here. Can’t wait for Bluetooths b̶a̶l̶l̶s̶ baggage to be chopped.


How is WiFi so much more reliable than Bluetooth?

I installed a web server on my phone and send files this way much faster (and Android -> Apple works):

https://f-droid.org/en/packages/net.basov.lws.fdroid/

I wish there were a standard for streaming (headphones could connect to your network via WPS, and stream some canonical URL with no configuration needed).


> How is WiFi so much more reliable than Bluetooth?

WiFi uses nearly 10x the power Bluetooth does when active (and that's before factoring in BLE, which cuts that in half). WiFi also has access to the much less crowded 5GHz band.

IIRC WiFi is also a much simpler protocol, it’s just a data channel (its aim being to replace LAN cables).

Plus, in order to support cheap and specialised devices, Bluetooth supports all sorts of profiles and applications. This makes the devices simpler, and means all the configuration can be reduced to pairing, but it makes the generic hosts a lot more complicated.


>IIRC WiFi is also a much simpler protocol, it’s just a data channel (its aim being to replace LAN cables).

I'm not sure what you mean, but Wi-Fi covers the PHY and MAC layers. It's not "only" a data channel. Modern Wi-Fi uses OFDMA, which is arguably more complex than what Bluetooth uses (without even talking about the MAC).


I think WiFi is better abstracted and layer-delineated though. Wifi has a lot of complexity but it largely feels like necessary complexity, and the physical layer, data layer, and application layer all mostly stay in their lane. Bluetooth is a mishmash of accidental complexity with physical layer leaking data layer abstraction, and the application/data boundary is even more blurred. Instead of dumb pipes, you have to worry about codecs and the like, of which there are myriad vendor specific ones.


Bluetooth is massively more complicated than WiFi. It has a whole service enumeration/discovery layer baked in that IMHO tried to cram way too much into the spec. What's even more amazing is that some of the hardware vendors at the table during the development went "F that" and added some side-channel audio stuff that bypasses most of the stack.

But mostly the problem is that too much of this complexity fell on hardware vendors and they suck at writing software. There are umpteen bajillion different bluetooth stacks out there and they're all buggy in new and exciting ways. Interoperability testing is hugely neglected by most vendors. The times where Bluetooth works well are typically where the same vendor controls both ends of the link, like Airpods on an iPhone.

In 2020 I tried buying some reputable brand Bluetooth headphones for my kids so they could do home-schooling without disturbing each other. It was a total failure. Every time their computer went to sleep the bluetooth stack would become out of sync and attempts to reconnect would result in just "error connecting" messages, requiring you to fully delete the bluetooth device on the Windows side and redo the entire discovery/association/connection from scratch. The bluetooth stack on Windows would crash halfway through the association process about half of the time forcing you to reboot the computer to start over. Absolutely unusable. I tried the same headphones on a Linux host and they worked slightly better, but were still prone to getting out of sync and requiring a full "forget this device" and add it again cycle every few days for no apparent reason.


> Bluetooth is massively more complicated than WiFi. It has a whole service enumeration/discovery layer baked in that IMHO tried to cram way too much into the spec. What's even more amazing is that some of the hardware vendors at the table during the development went "F that" and added some side-channel audio stuff that bypasses most of the stack.

I seriously think you underestimate the complexity in Wi-Fi networks. The 802.11-2020 standard is 4379 pages long. And I'm not even counting the amendments ( https://www.ieee802.org/11/Reports/802.11_Timelines.htm ) that are in development.


Yep, had a similar annoyance using my AirPods with my gaming laptop. The laptop wouldn't reconnect after going to sleep for an extended period of time. I ended up replacing the stock wireless card for an Intel AX210 based one and then it was fine.


WiFi supposedly needs more power and has higher latency. Not sure how true that remains post WiFi6 though


Range and bandwidth are orders of magnitude larger, and both have direct implications for the energy budget.


The humorous style is very refreshing; if only my networking lecturers had been more witty, I might remember more of this.


> This is not the same as HTTP pipelining, which I will not discuss, out of spite.

That's because HTTP pipelining was and is a mistake, and it is responsible for a ton of HTTP request smuggling vulnerabilities, because the HTTP/1.1 protocol has no framing.

No browser supports it anymore, thankfully.


Isn't "HTTP pipelining" just normal usage of HTTP/1.1?

Anyone that doesn't support this is broken. My own code definitely does not wait for responses before sending more requests, that's just basic usage of TCP.


HTTP Pipelining has the client sending multiple requests before receiving a response. It turns it into Request, Request, Request, Response, Response, Response.

The problem is that if Request number 1 leads to an error whereby the connection is closed, those latter two requests are discarded entirely. The client would have to retry request number two and three. If the server has already done work in parallel though, it can't send those last two responses because there is no way to specify that the response is for the second or third request.

The only way a server has to signal that it is in a bad state is to return 400 Bad Request and to close the connection because it can't keep parsing the original requests.

There is no support for HTTP pipelining in current browsers.

What you are thinking about is probably HTTP keep alive, where the same TCP/IP channel is used to send a follow-up request once a response to the original request has been received and processed. That is NOT HTTP pipelining.
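Roughly, on the wire (the host is a placeholder), pipelining means all the requests go out before any response comes back:

    GET /a HTTP/1.1
    Host: example.com

    GET /b HTTP/1.1
    Host: example.com

    GET /c HTTP/1.1
    Host: example.com

The server has to answer strictly in order, and if the connection dies after the first response, the client can't tell whether /b or /c were ever acted on.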


> Isn't "HTTP pipelining" just normal usage of HTTP/1.1?

> Anyone that doesn't support this is broken. My own code definitely does not wait for responses before sending more requests, that's just basic usage of TCP.

Yep.

There is some "support" a server could do, in the form of processing multiple requests in parallel¹, e.g., if it gets two GET requests back to back, it could queue up the second GET's data in memory, or so. The responses still have to be streamed out in the order they came in, of course. Given how complex I imagine such an implementation would be, I'd expect that to be implemented almost never, though; if you're just doing a simple "read request from socket, process request, write response" loop, then like you say, pipelined requests aren't a problem: they're just buffered on the socket or in the read portion's buffers.

¹this seems fraught with peril. I doubt you'd want to parallelize anything that wasn't GET/HEAD for risk of side-effects happening in unexpected orders.


HTTP pipelining is not normal usage of HTTP/1.1. And it means that if request number 1 fails, requests 2 and 3 are usually lost, because servers will slam the door shut: given the lack of framing around HTTP, it is too dangerous to try to continue parsing the incoming text stream without potentially ending up in territory where it gets parsed wrong.

This is what led to the many request smuggling attacks: the front-end proxy treats the request differently from the backend proxy and parses the same HTTP text stream differently.

Since there is no framing there is no one valid way to say "this is where a request starts, and this is where a request ends and it is safe to continue parsing past the end of this request for the next request".

Servers are also allowed to close the connection at will. So let's say I pipeline Request 1, 2, and 3.

The server can respond to Request 1 with Connection: close, and now request 2 and 3 are lost.

That's the reason HTTP pipelining is not supported by browsers/most clients.

Curl removed it and there's a blog post about it: https://daniel.haxx.se/blog/2019/04/06/curl-says-bye-bye-to-...


There is clear request framing in HTTP/1.1, it's mandated by keep-alive.


There is not, which is what leads to vulnerabilities where two HTTP protocol parsers will parse the same request in two different ways (which led to the HTTP desync attacks).

https://portswigger.net/research/http-desync-attacks-request...

There's a reason why web servers will slam the door shut even when the client requests HTTP Keep Alive because they are unable to properly parse a request in a way that makes it safe to parse a follow-up request on the same TCP/IP connection.


The link shows how to exploit certain bugs in some bad implementations.

That doesn't change the fact the protocol itself is quite well-defined.

There is no serious HTTP server that wouldn't support keep-alive, this is just FUD.


You are conflating keep-alive with HTTP pipelining; they are not one and the same. Keep-alive may be supported, and servers may claim to have fully parsed request 1 correctly so they can be fairly confident request 2 can be parsed correctly, but read the spec one way or another and that is no longer a guarantee that holds.

Keep alive and http pipelining are supported by major servers, some with bugs or issues, but no clients pipeline requests (at least not the major browsers, curl and other popular tooling).

It's not FUD. Pipelining and reuse of an existing connection are broken in the face of trying to parse text protocols that don't have well-defined semantics, where implementations reading the same documentation produce different results because it's not black and white; it's fuzzy around the edges.


Keepalive mandates that the TCP connection stays open to parse further requests and send further responses and requires the support of the two mechanisms to distinguish the boundaries between different requests or responses (explicit content length and chunked encoding).

Pipelining is just normal usage of TCP, which is a mechanism to establish two queues of bytes between two endpoints on a network.

There is no difference between sending data before or after having received data from the other party. The two directions are logically independent, even if at the transport level data from one direction contains acks of the other direction.

Now, some servers will start processing requests on a given connection in parallel, and will not correctly synchronize the multiple threads trying to write back their respective response to the client. This is just a bug on the server doing multithreading incorrectly, and has nothing to do with any framing problems in the protocol.

I suppose HTTP/2 supports that use case better, since it can multiplex the concurrent responses, but the correct thing to do is to simply treat each request synchronously one after the other, and not parallelize the processing of multiple requests on a given TCP connection.
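As a concrete sketch of the keep-alive case being described (host and sizes are placeholders; borrowing curl's >/< markers for client-sent and server-sent bytes): the client knows exactly where the response ends from Content-Length, and only then does the next request follow on the same connection:

    > GET /one HTTP/1.1
    > Host: example.com
    >
    < HTTP/1.1 200 OK
    < Content-Length: 5
    <
    hello
    > GET /two HTTP/1.1
    > Host: example.com
    >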


Not even just TCP, basic usage of message passing and any data flow.


> We're not done with our request payload yet! We sent:

> Host: neverssl.com

> This is actually a requirement for HTTP/1.1, and was one of its big selling points compared to, uh...

> AhAH! Drew yourself into a corner didn't you.

> ...Gopher? I guess?

I feel like the author must know this.. HTTP/1.0 supported but didn't require the Host header and thus HTTP/1.1 allowed consistent name-based virtual hosting on web servers.

I did appreciate the simple natures of the early protocols, although it is hard to argue against the many improvements in newer protocols. It was so easy to use nc to test SMTP and HTTP in particular.

I did enjoy the article's notes on the protocols; however, the huge sections of code snippets lost my attention midway.


> I feel like the author must know this

The author does know this, it's a reference to a couple paragraphs above:

> [...] and the HTTP protocol version, which is a fixed string which is always set to HTTP/1.1 and nothing else.

> (cool bear) But what ab-

> IT'S SET TO HTTP/1.1 AND NOTHING ELSE.


Thanks, missed that.


You know how some movie fans will sometimes pretend the sequels to some franchise don't exist? HTTP is the opposite.


So HTTP is the old school Star Wars fans pretending the prequels don't exist?


Nah it's the Buffy TV show fans ignoring the movie


That was an excellent, well-written, well-thought out, well presented, interesting, humorous, enjoyable read. Coincidentally I recently did a Rust crash course so it all made perfect sense - I am not an IT pro. Anyhows, thanks.


I highly recommend taking a look at the other writeups on fasterthanli.me; they're almost all excellent.


I'd like to ask what crash course on Rust you took, as there are quite a few out there, and it would help if someone recommended a specific course.


Try https://fasterthanli.me/articles/a-half-hour-to-learn-rust which is also written by the same author.


YouTube: Let's Get Rusty - ULTIMATE Rust Lang Tutorial! - Getting Started



After the string of positive adjectives, I was expecting the second half of your comment to take a sharp turn into cynicism. Thank you for subverting my expectations by not subverting my expectations!


I will piggyback on your comment as I totally agree. I am amazed at the amount of work that must go into not just writing the article itself but all the implementations along the way. Really amazing job!


I learned HTTP1 pretty well but not much of 2.

Since playing with QUIC, I've lost all interest in learning HTTP/2, it feels like something already outdated that we're collectively going to skip over soon.


I tend to agree with you there, however the thing I'm replacing does HTTP/2, and HTTP/3 is yet another can of worms as far as "production multitenant deployment" goes, so, that's what my life is right now.

As far as learning goes, I do think HTTP/2 is interesting as a step towards understanding HTTP/3 better, because a lot of the concepts are refined: HPACK evolves into QPACK, flow control still exists but is neatly separated into QUIC, I've only taken a cursory look at H3 so far but it seems like a logical progression that I'm excited to dig into deeper, after I've gotten a lot more sleep.


FWIW HTTP/3 very much builds upon / reframes HTTP/2’s semantics, so it might be useful to get a handle on /2, as I’m not sure all the /3 documentation will frame it in /1.1 terms.


HTTP1 is definitely outdated (it was expeditiously replaced by HTTP 1.1), but I'd argue ignoring HTTP/2 might be more like ignoring IPv4 because we have IPv6 now


It's pretty much a transport-level protocol, just like QUIC.


Amos' writing style is just so incredibly good. I don't know anyone else doing these very long-form, conversational style articles.

Plus, you know, just an awesome dev who knows his stuff. Huge fan.


https://xeiaso.net/ is equally great content in a similar style in my opinion. Different area of topics a bit, but I enjoy both very much


Oh, this looks very promising. Thanks for the recommendation!


If you're using OpenBSD nc already, just use nc -c for TLS.


Depending on your version of nc, -c is for sending CRLFs or executing sent data as commands. You might be looking for ncat instead.


In OpenBSD nc (as GP mentioned), -c is for a TLS connection: https://man.openbsd.org/nc.1


Reminder: there are many different netcats; here are some of the most common ones:

- netcat-traditional http://www.stearns.org/nc/

- netcat-openbsd : https://github.com/openbsd/src/blob/master/usr.bin/nc/netcat... (also packaged in Debian)

- ncat https://nmap.org/ncat/

- netcat GNU: https://netcat.sourceforge.net/ (quite rare)

To prevent any confusion, I like to recommend socat: http://www.dest-unreach.org/socat/
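For the TLS-client case specifically, a couple of rough one-liners (the hostname is a placeholder; -c is the OpenBSD netcat flag mentioned above, and verify=0 disables certificate checking, so only use it for poking around):

    # OpenBSD netcat: TLS connection to port 443
    nc -c example.com 443

    # socat equivalent, with certificate verification disabled for testing
    socat - OPENSSL:example.com:443,verify=0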


My nc has that as -C, no -c option.


What a great overall site. Hopping down the links I found the section on files with code examples in JS, Rust and C, plus strace, really the best short explanation I've ever found online.

https://fasterthanli.me/series/reading-files-the-hard-way/pa...


This is awesome, didn't read all of it yet, but I will for sure, I use HTTP way too much and too often to ignore some of these underlying concepts, and when I try to look it up, there's always way too much abstraction and the claims aren't proven to me with a simple example, and this article is full of simple examples. Thanks Amos!


I hope there's a h2 or TLS crash course.


Against my better judgement, the article /does/ go over H2 (although H3 is all the rage right now).

For TLS, I recommend The Illustrated TLS 1.3 Connection (Every byte explained and reproduced): https://tls13.xargs.org/


I'd like to thank you for the time and effort it must take to research, write and edit these articles. The tone you strike with these articles is a delight to read, and I find myself gobbling these things up even for topics about which I (falsely, it usually turns out) consider myself fairly knowledgeable.


Thanks for the link! Are there other good crash courses on various protocols and standards? Directly jumping into the dry official specs is just too overwhelming sometimes.


I recently crammed a bunch on DNS for an interview, and I can recommend the cloudflare blogs on that topic as being quite good.


> Where every line ends with \r\n, also known as CRLF, for Carriage Return + Line Feed, that's right, HTTP is based on teletypes, which are just remote typewriters

Does it need to be pointed out that this is complete bullshit?


Well, I've definitely seen a lot of people claim (generally not word-for-word) that using a pointlessly-overlong encoding of newline that exists to cater to the design deficiencies of hardware from the nineteen-sixties is not bullshit, so... maybe? But only for rather mushy values of "need".


It's not totally right, but it's not totally wrong, either, kind of like the way the dimensions of the space shuttle booster are directly affected by the size of a pair of Roman war horses' asses.

CRLF was used very heavily and thus got baked into a lot of different places. Namely, it conveniently sidesteps the ambiguity of "some systems use CR, others use LF" by just putting both in, and since they are whitespace, there's not much downside other than the extra byte.

Beyond that, there are many other clear and obvious connections between Hypertext Transfer Protocol and teletype machines. Many early web browsers were expected to be teletype machines [0]. So while it might be a bit of a stretch, I'd say this is far from "complete bullshit".

[0] - http://info.cern.ch/hypertext/WWW/Proposal.html#:~:text=it%2...


> kind of like the way the dimensions of the space shuttle booster are directly affected by the size of a pair of Roman war horses' asses.

I agree the two are similar, but the space shuttle story is also bullshit. See e.g. Snopes: https://www.snopes.com/fact-check/railroad-gauge-chariots/

People are suckers for plausible-sounding and amusing stories, that one's classic bait for people's lack of critical thinking skills.

> CRLF was used verily heavily and thus got baked into a lot of different places.

Well, exactly. Which is precisely why it's bullshit to claim that HTTP was "based on teletypes". It was based on technical standards at the time, that originally derived from teletypes, but there was no consideration of teletypes in the development of HTTP that I'm aware of:

> Many early web browsers were expected to be teletype machines [0].

Could you quote a relevant part of your reference? Because I don't see it. Perhaps you're confusing "dumb terminal" with "teletype"? Or confusing the Unix concept of tty, a teletype abstraction, with the electromechanical device known as a teletype - the "remote typewriters" mentioned in the original comment?

By the time that WWW spec was written in 1990, teletypes were decades out of date and not commonly used at all. PCs had existed for over a decade, and video display terminals for mainframes and minicomputers had been around for nearly three decades. No-one was using actual teletypes any more.

> So while it might be a bit of a stretch, I'd say this is far from "complete bullshit".

This conclusion would work if any of your claims had survived scrutiny.


Kind of.

Which part of it do you think is wrong?


HTTP is not “based on teletypes”. That’s just nerd hyperbole for a technical choice they don’t like, for irrational reasons.


Is HTTP always the same protocol as HTTPS - given the same version - and ignoring the encryption from TLS?

Theoretically yes, but in practice?

I've done my share of nc testing even simpler protocols than HTTP/1.1

For some reason the migration to HTTPS scared me despite the security assurances. I could not see anything useful in wireshark anymore. I now had to trust one more layer of abstraction.


> Is HTTP always the same protocol as HTTPS - given the same version - and ignoring the encryption from TLS?

> Theoretically yes, but in practice?

Yes, that's the whole point of encapsulation. The protocol is blissfully unaware of encryption and doesn't even have to be. It has no STARTTLS mechanism either.

Your HTTPS traffic consists of a TCP handshake to establish a TCP connection, a TLS handshake across that TCP connection to exchange keys and establish a TLS session, and the exact same HTTP request/response traffic, inside the encrypted/authenticated TLS session.

The wonderful magic of solving a problem by layering/encapsulating.
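You can see this for yourself by letting openssl s_client do the TLS handshake and then typing the exact same request you'd type into nc (hostname is a placeholder):

    $ openssl s_client -connect example.com:443 -servername example.com -quiet
    GET / HTTP/1.1
    Host: example.com
    Connection: close

The request and response are byte-for-byte the same HTTP/1.1 you'd see in cleartext; only the wrapper changed.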

> I could not see anything useful in wireshark anymore

Wireshark supports importing private keys for that, see: https://wiki.wireshark.org/TLS
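Concretely (paths are placeholders), most curl builds and the major browsers will dump per-session secrets if SSLKEYLOGFILE is set, and Wireshark can read that file via Preferences → Protocols → TLS → "(Pre)-Master-Secret log filename":

    $ export SSLKEYLOGFILE=/tmp/tlskeys.log
    $ curl -s https://example.com/ -o /dev/null
    # point Wireshark at /tmp/tlskeys.log and the HTTP inside the TLS session becomes readable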


The article covers using Wireshark to decrypt TLS traffic using Pre-Shared Master Secrets!


The encapsulation isn't complete because of SNI.


For 1.1 and 2, the byte stream is the same for TCP vs TLS over TCP. For 3, it uses one stream per request over a QUIC connection which is always encrypted.


The protocol is the same, but semantics in the applications can differ. Secure cookies only working on https to give one example.


As far as I can tell the Host header is pointless, because if it's SSL/TLS you won't be able to read it and route on it. That's what SNI is for. If you aren't using TLS then you don't need it, unless you hit the server as an IP. But then why would you do that?


It's for one server/IP serving multiple hostnames. For instance, the same physical server at 45.76.26.79 serves both www.lukeshu.com and git.lukeshu.com with the same instance of Nginx. Once Nginx decrypts the request, it needs to know which `server { … }` block to use to generate the reply.

With TLS+SNI, this is redundant to the name from SNI. But we had TLS long before we had SNI, and we had HTTP long before we had TLS, and both of those scenarios need the `Host` header.
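A minimal sketch of that setup (the cert paths are made up; the hostnames are the ones from above):

    server {
        listen 443 ssl;
        server_name www.lukeshu.com;
        ssl_certificate     /etc/ssl/www.lukeshu.com.crt;
        ssl_certificate_key /etc/ssl/www.lukeshu.com.key;
        # ... site config ...
    }

    server {
        listen 443 ssl;
        server_name git.lukeshu.com;
        ssl_certificate     /etc/ssl/git.lukeshu.com.crt;
        ssl_certificate_key /etc/ssl/git.lukeshu.com.key;
        # ... site config ...
    }

Nginx picks which block's certificate to present based on SNI, and then which block serves the request based on the Host header (or :authority in HTTP/2+); without SNI it presents the default certificate and only the Host header is left to route on.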


Proxies doing TLS termination, with multiple servers behind.


I didn't ask but I needed it.


Also, never trust the content length. It's been that way since before http was finalized. Use it as guidance, but don't treat it as canonical.


When doing http by hand, it's better to do http/1.0 because that tells the server you (and it) can't do anything exciting.


Yay! this is going to be a great read for the weekend!


More articles should be written in the style of this article. Thank you for this.


most of his articles are written in this style. they're great!


    GET / HTTP/1.0\r\n\r\n 
Still works with many websites.
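E.g. (neverssl.com is the plain-HTTP host used in the article; depending on your netcat variant you may need a -w timeout to make it exit):

    $ printf 'GET / HTTP/1.0\r\n\r\n' | nc neverssl.com 80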


Is there a way to get this guide without the annoying side-commentary?


The RFCs themselves are pretty dry, if that's your thing — https://httpwg.org/ has the freshest ones.


Funny and very helpful. Thank you.


For a crash course would the code examples have been better in something like Python rather than Rust?


My whole thing is that I'm teaching Rust /while/ solving interesting, real-world problems (instead of looking at artificial code samples), so, if someone wants to write the equivalent article with Python, they should! I won't.


Nope, that's the author's favourite language. A regular reader would expect Rust to be used, as in previous articles.


This is gold.



