And this is why I expect HTTP/2 and HTTP/3 to be much more robust in the long term: the implementations are harder to write, and you won’t get anywhere without reading at least some of the spec, whereas HTTP/1 is deceptively simple, and so there are a lot of badly incorrect implementations, often with corresponding security problems.
HTTP/3 is written for the use case of large corporations and does not even allow for human persons to use it on their own. It requires CA-based TLS to set up a connection. So if you want to host a website that a random person you've never communicated with before can visit, you have to get continued permission from an incorporated entity running a CA.
This is far more of a security problem than all of the bad HTTP/1.1 implementations put together. It is built-in corporate control that cannot be bypassed except by not using HTTP/3. It is extremely important that we not let the mega-corp browsers drop HTTP/1.1, and that we continue to write our own projects for it.
Your complaint is strictly social, and quite irrelevant here.
Look, cleartext internet protocols are on the way out, because their model is fundamentally broken. For security reasons, I will note, and privacy. There, we joust security against security. Cleartext HTTP/1 is strictly a legacy matter, retained only because there’s still too much content stuck on it. But browsers will be more aggressively phasing it out sooner or later, first with the likes of scary address bar “insecure” badges, and probably within a decade by disabling http: by default in a way similar to Firefox’s HTTPS-Only Mode (puts up a network error page with the ability to temporarily enable HTTP for the site), though I doubt it’ll be removed for decades. And HTTP/1 at least over TLS will remain the baseline for decades to come—HTTP/2 could conceivably be dropped at some point, but HTTP/3 is very unlikely to ever become the baseline because it requires more setup effort.
You can still use cleartext HTTP/1 at least for now if you want, but this functionality was rightly more or less removed in HTTP/2, and fully removed in HTTP/3. Pervasive monitoring is an attack (https://www.rfc-editor.org/rfc/rfc7258.html), and HTTP/2 and HTTP/3 are appropriately designed to mitigate it.
Look, be real: the entire web is now built heavily on the CA model. If free issuance of certificates falters, the internet as we know it is in serious trouble. Deal with it. Social factors. This might conceivably happen, and if it does, HTTP/1 will not save you. In fact, cleartext HTTP/1 will be just about the first thing to die (be blocked) in the most likely relevant sequence of events.
It is a layering violation though. Not all HTTP usage is through a browser, and not all routes go over the plaintext Internet. Browsers or clients can still require HTTPS at the application layer, but it shouldn't be part of the protocol spec.
Suppose I have an app within an intranet that's secured with, say, WireGuard or an application-layer tunnel (e.g., SSH or OpenZiti).
Bringing HTTP/3 into the picture means dealing with CAs and certs on top of the provisioning I've already done for my lower layers, possibly leaking information via Certificate Transparency logs. Then there's the cost of double encryption, etc.
I agree that sending cleartext over the internet in this day and age is a bad idea. But "encrypt all communication at the application layer" doesn't have to be the only solution. There's also "encrypt communication at the *network* layer," as discussed here for example: https://tailscale.com/blog/remembering-the-lan/
I have a suspicion that this will prove to be a better abstraction than application-level encryption for everything. If I'm right, I would expect things to naturally start migrating in that direction over time. We'll see!
Thinking that social issues can't and shouldn't shape technical discussions, especially when we're talking about one of the most important technological platforms for society, is rather limited and short-sighted.
> If free issuance of certificates falters, the internet as we know it is in serious trouble. Deal with it. Social factors. This might conceivably happen, and if it does, HTTP/1 will not save you.
This (HTTP/1 won't save us) doesn't seem entirely accurate to me.
I can run free, untrusted HTTPS easily using self-issued certificates. It's relatively simple to think of mechanisms where trust can be layered on top of that outside the traditional CA mechanisms (think Keybase derivatives like DID-systems). It's a small patch to allow that alternative trust framework to be used for HTTPS.
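To make the first point concrete, here's a minimal sketch of the "free, untrusted HTTPS" case in Python, assuming the `cryptography` package is installed; the name "example.test" and port 8443 are placeholders. It generates a self-signed cert and serves HTTPS with the standard library, with no CA involved anywhere:

    import datetime, http.server, ssl
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)   # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

    # Serve HTTPS with the self-issued cert, standard library only.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")
    httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()

Clients then decide for themselves whether and how to trust that cert, which is exactly where a layered trust mechanism like the ones above would slot in.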
I don't know HTTP/3 at all, but if it is more tightly tied to CA infrastructure that is a problem.
> I can run free, untrusted HTTPS easily using self-issued certificates.
At present you can. But think about what conditions might lead to free issuance faltering: it will almost certainly boil down to pressure from governments. And do you think that such governments will lightly allow you to bypass their measures? No; once the dust settles, no technical measures will be effective: the end result will be mandatory interception of all traffic, with TLS proxying and similar, and any other traffic blocked. Countries have even done this at times, requiring anyone who wants to access the internet to install their root certificate.
The internet is designed to be comparatively robust against sociopolitical attack, but if a sufficiently powerful government decides to concertedly attack the internet as we know it, the internet will not win the conflict.
> I don't know HTTP/3 at all, but if it is more tightly tied to CA infrastructure that is a problem.
As clarified elsewhere in this thread, HTTP/3 changes absolutely nothing about certificate verification; superkuh appears to have misunderstood the meaning of the text in the spec.
>Look, be real: the entire web is now built heavily on the CA model.
No. You've just got your commercial blinders on. The entire *commercial web* is built on the CA model. But the commercial web is hardly all there is. There is a giant web of actual websites run by human persons out there that do not depend on CA TLS and whose use cases do not require dropping cleartext connections. That's only a need for for-profit businesses and institutions.
I agree that the mega-corp browsers will drop support for any protocol that does not generate them profit. The consequences of this action will be dire for everyone. But you can't convince people of this. You just have to let it happen and let people learn from the pain. Just like with the social networks.
Pervasive monitoring is an attack. No public internet traffic of any character should be cleartext. The actual websites run by human persons that you speak of (such as myself) are not exceptions to this.
As others said, it's a layering violation. What does that commandment mean in practice? Essentially, you can't just UDP your way around the protocol and do frame comparisons to test its robustness; you have to care what it looks like when encrypted. And you now need to use a subset of the TLS spec which the most widely used implementations in the wild consider private API. So most QUIC implementations are built on some broken fork of OpenSSL. This leads to fewer implementations, which means concentration of power (the spec is not king, the implementations rule the protocol) and a narrower attack surface for exploiters to concentrate on. And we all lose.
Can you not just have it use a self signed certificate? I don't see why a CA would need to be involved at all, nor can I even imagine how that could be enforced at the protocol level.
This sounds like a red herring to me.
edit: Yeah, I've more or less confirmed that self-signed certs are perfectly fine in HTTP/3. This is a big ball of nothing.
About half my personal (private, in my own home LAN) sites use self-signed certs that Chrome flat out won't accept. I have to type the magic key sequence to bypass the error. I do wish we could come up with something better for this kind of use case than having to set up Let's Encrypt on my public domain and issue a wildcard cert to use with RFC 1918 websites.
I wish there were an intermediate AH mode: the page is signed but not encrypted.
Or barring that, I wish that browsers would ease up a bit and make TOFU-style self-signed certs acceptable.
I really don't like how there is an expiry time built into TLS sites. Have you ever found someone's old site, usually hosted by a university, that just lives on year after year like a time capsule? Well, that's not gonna happen with TLS.
And on the subject of CAs, I don't think I trust them any more than a TOFU model. Have you looked at and verified every authority in your CA file? Do you really trust the Turkish government to be able to sign for any website?
"Aha!" you say, "this is why we have cert pinning."
To which my reply is: cert pinning is the TOFU model with all user agency removed. It is better than the CA model but really sucks from an end-user perspective; when things go wrong, there is no easy way to fix it.
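For what it's worth, TOFU pinning is easy enough to sketch outside the browser. A rough Python sketch (the pin-store path and the refuse-on-change policy are made up for illustration, not any real tool's behavior): remember the server certificate's fingerprint on first contact and refuse to proceed if it later changes.

    import hashlib, json, os, socket, ssl

    PIN_STORE = os.path.expanduser("~/.cert-pins.json")

    def connect_with_tofu(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False      # trust comes from the pin, not a CA
        ctx.verify_mode = ssl.CERT_NONE
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der).hexdigest()

        pins = json.load(open(PIN_STORE)) if os.path.exists(PIN_STORE) else {}
        key = f"{host}:{port}"
        if key not in pins:
            pins[key] = fingerprint     # first use: trust and remember
            with open(PIN_STORE, "w") as f:
                json.dump(pins, f)
        elif pins[key] != fingerprint:  # cert changed: this is where user agency matters
            sock.close()
            raise ssl.SSLError(f"certificate for {key} changed; refusing to connect")
        return sock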
Yeah, this is just a browser setting - this complaint sounds like it's in bad faith, coming from someone who apparently knows about all the other aspects of using certs?
If you control both the server and the client, you can make yourself your own private CA, issue all the certs you need, and have no browser errors anywhere.
I mean, if it's on the open web you can use Let's Encrypt. If it's on your private network, you can make whatever keys you want with XCA and trust your self-made CA in browsers.
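Outside the browser, trusting your self-made CA is one line of configuration in most clients. A small sketch with the Python standard library, assuming "my-own-ca.pem" is the CA certificate you created (e.g. with XCA) and intranet.example resolves to your server:

    import http.client, ssl

    ctx = ssl.create_default_context(cafile="my-own-ca.pem")   # trust only this CA
    conn = http.client.HTTPSConnection("intranet.example", 443, context=ctx)
    conn.request("GET", "/")
    print(conn.getresponse().status)    # chain is verified against my-own-ca.pem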
The language in the RFC that Google and Microsoft forced through the IETF to open-wash it uses "MUST" in capital letters when talking about setting up the HTTP/3 endpoint and verifying the cert. https://datatracker.ietf.org/doc/rfc9114/
I would be extremely relieved if I am wrong and someone could explain how I am wrong. Like... maybe there's some mechanism to self-sign without a CA and use a null cipher? So even if most users would be scared away, geeks could click through (like today's status quo with self-signed SSL certs).
> the RFC that Google and Microsoft forced through the IETF to open-wash it
This is a gross misrepresentation of the situation. Yes, Google played a significant role in the development of HTTP/2, QUIC and HTTP/3, providing the starting point for the development work in each case, but there was no open-washing: there was a collaborative process with the involvement of many interested parties, and the end result was significantly different from what was first proposed, and significantly better. This is how the IETF works. Google did not control matters in any way, and neither did Microsoft.
> The "https" scheme associates authority with possession of a certificate that the client considers to be trustworthy for the host identified by the authority component of the URI. Upon receiving a server certificate in the TLS handshake, the client MUST verify that the certificate is an acceptable match for the URI's origin server using the process described in Section 4.3.4 of [HTTP]. If the certificate cannot be verified with respect to the URI's origin server, the client MUST NOT consider the server authoritative for that origin.
This boils down to “this is HTTPS, so the same rules as ever apply for matching the certificate and origin”. I suspect you’ve misunderstood what authoritativity conveys. The last sentence is saying “… and if verification fails, don’t trust the connection”—and it’s up to each app to decide what to do about that; browsers put up a scary warning error page that you can normally click through (depending on server configuration). Note that it doesn’t even hardcode the CA model; I like the way RFC 9110 §4.3.3 ¶1 puts it: “The client usually relies upon a chain of trust, conveyed from some prearranged or configured trust anchor, to deem a certificate trustworthy.”
You can read more about the rules of HTTPS in https://www.rfc-editor.org/rfc/rfc9110#section-4.3.3 (sections 4.3.3 and 4.3.4). Certificate verification is the same as ever, and the only difference between HTTP/1 and HTTP/2 and HTTP/3 is that HTTP/1 has connection-per-origin, where 2 and 3 can use a connection for multiple origins (§4.3.3 ¶2–3 spells it out).
You can still use a self-signed cert with HTTP/3 (including rightly scary warnings for visitors), or you can make your own CA and distribute its cert (no scary warnings when people visit your site).
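To illustrate that the trust decision sits with the client in HTTP/3 just as it always has, here's a sketch using the aioquic Python library; only the trust configuration is shown, not a full HTTP/3 request, and the CA file name is a placeholder:

    import ssl
    from aioquic.quic.configuration import QuicConfiguration

    config = QuicConfiguration(is_client=True, alpn_protocols=["h3"])

    # Default: verify the server cert against the usual CA bundle.
    # Your own CA / a specific self-signed cert: add it as a trust anchor.
    config.load_verify_locations("my-own-ca.pem")
    # Or skip verification entirely (the "click through the warning" case):
    # config.verify_mode = ssl.CERT_NONE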
The zealous "you must obey the law" tone of SOME comments here reinforces the worst stereotypes of corporate apparats: individuals doing the bidding of institutions based on the letter of their "laws".
Human history has shown again and again that this ends badly. HTTP is OK with ME.
"curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above."
I wouldn't want curl to remember the exception. It's not like a browser: just because I'm currently testing a site with -k does not mean I never want it to perform the normal careful checks.
If you decide you trust that certificate (which can be a legitimate thing to do - the cert signature could be communicated to you via out-of-band trusted mechanisms) then https://curl.se/docs/sslcerts.html explains how to trust it.
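A rough Python equivalent of what that curl page describes, using the `requests` library: trust exactly one certificate that was obtained and checked out of band, instead of passing -k. The file name and URL are placeholders:

    import requests

    resp = requests.get("https://intranet.example:8443/",
                        verify="that-servers-cert.pem")   # PEM file to trust
    print(resp.status_code)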
Among other things, that's saying it's a self-signed cert and can do HTTP/2. So the fact that Chrome on my phone will connect to it does confirm that you can do self-signed certs with HTTP/2 at least.
I think the idea is that HTTP/1 is simple in the hello-world 5th-percentile-complexity case, which deceives people into thinking that it's also simple in the real-world 99.9th-percentile-complexity case, which it's not at all.
It's, like, simple in about the 80th-percentile-complexity case. But the remaining 20% take 80% of the work (and re-architecting). For example, 1xx responses break the 1-1 correspondence between requests and responses. Then an "Upgrade" header may mean you need to turn a connection into a dumb byte pipe, ditto for "CONNECT" requests. Then there is the whole business of end-to-end vs. hop-by-hop headers: some of the latter will be listed in the "Connection" header (did you know that that is its original purpose, and the "close" option is but a hack?), but some headers are always hop-by-hop and the proxy is expected to filter them even if they're not listed in the "Connection" header (and of course a comprehensive list of such hop-by-hop headers doesn't exist). Then there is pipelining. And handling HTTP/1.0 clients (yep, one of the reasons why OP has "IT'S SET TO HTTP/1.1 AND NOTHING ELSE" in his article), who by their nature cannot support replies in "chunked" transfer-encoding. And handling POST bodies in "chunked" transfer-encoding. And handling trailers if you did not strip "TE: trailers" from the client's request. And there may be comments (chunk extensions) in "chunked" encoding. And... multiline headers?.. The list goes on and on.
And a decent HTTP-proxy must handle all of that stuff or at least fail gracefully without affecting other clients.
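To give a feel for just one of those items, here's a minimal Python sketch of decoding a "chunked" body, covering the hex sizes, chunk extensions and trailers mentioned above. Not hardened: no size limits, no handling of partial reads or malformed input.

    import io

    def read_chunked(stream):
        body = bytearray()
        while True:
            size_line = stream.readline()                        # e.g. b"4;note=x\r\n"
            size = int(size_line.split(b";", 1)[0].strip(), 16)  # size is hex; drop extensions
            if size == 0:
                break
            body += stream.read(size)
            stream.readline()                                    # consume CRLF after the data
        trailers = []                                            # optional trailer headers
        while True:
            line = stream.readline()
            if line in (b"", b"\r\n", b"\n"):
                break
            trailers.append(line.rstrip(b"\r\n"))
        return bytes(body), trailers

    raw = b"4;note=x\r\nWiki\r\n5\r\npedia\r\n0\r\nX-Checksum: abc\r\n\r\n"
    print(read_chunked(io.BytesIO(raw)))   # (b'Wikipedia', [b'X-Checksum: abc'])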
The most common proximate cause of security issues in format handling (parsing and emitting) comes from implementations differing in their parsing, or implementations emitting invalid values in a way that will be parsed differently. Probably the most common type of security issue then comes from smuggling values through, bypassing checks or triggering injection. (This is the essence of injection attacks as a broad class.) One of the easiest demonstrations of this in HTTP specifically is called HTTP request smuggling: https://portswigger.net/web-security/request-smuggling. And the solution for that is pretty much: “stop using a text protocol, they’re too hard to use correctly”.
One of the simplest issues is that headers end with a newline. Most code will not generate a header with an embedded newline, so it's common that software doesn't handle this case and passes the newline through unmodified. This means that if someone is able to set a custom value for part of a header, they can often use that to inject their own custom response header. Or even their own custom response body, since that is also set off with newlines.
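A tiny Python demonstration of that (a made-up naive handler, not any real framework): reflect user input into a header without validation, and the attacker gets to write their own header line.

    def build_redirect(location: str) -> bytes:
        # naive: no validation of the header value
        return ("HTTP/1.1 302 Found\r\n"
                "Location: " + location + "\r\n"
                "\r\n").encode()

    evil = "https://example.com/\r\nSet-Cookie: session=attacker-chosen"
    print(build_redirect(evil).decode())
    # The response now carries an attacker-controlled Set-Cookie header.
    # The fix is boring but essential: reject or strip CR/LF in anything
    # that ends up on a header line.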
As someone who had to write a couple of proxy servers, I can't express how so sadly accurate it is.