Your complaint is strictly social, and quite irrelevant here.
Look, cleartext internet protocols are on the way out, because their model is fundamentally broken. For security reasons, I will note, and privacy. There, we joust security against security. Cleartext HTTP/1 is strictly a legacy matter, retained only because there’s still too much content stuck on it. But browsers will be more aggressively phasing it out sooner or later, first with the likes of scary address bar “insecure” badges, and probably within a decade by disabling http: by default in a way similar to Firefox’s HTTPS-Only Mode (puts up a network error page with the ability to temporarily enable HTTP for the site), though I doubt it’ll be removed for decades. And HTTP/1 at least over TLS will remain the baseline for decades to come—HTTP/2 could conceivably be dropped at some point, but HTTP/3 is very unlikely to ever become the baseline because it requires more setup effort.
You can still use cleartext HTTP/1 at least for now if you want, but this functionality was rightly more or less removed in HTTP/2, and fully removed in HTTP/3. Pervasive monitoring is an attack (https://www.rfc-editor.org/rfc/rfc7258.html), and HTTP/2 and HTTP/3 are appropriately designed to mitigate it.
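If you want a concrete sense of how tightly the newer versions are tied to TLS, here's a rough sketch using Python's ssl module (the hostname is just an example): for browsers, HTTP/2 is only ever negotiated via TLS ALPN, so the version choice happens inside the handshake itself.

    import socket, ssl

    # HTTP/2 in browsers is negotiated via TLS ALPN; there is no cleartext path.
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.selected_alpn_protocol())  # "h2" if the server offers HTTP/2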
Look, be real: the entire web is now built heavily on the CA model. If free issuance of certificates falters, the internet as we know it is in serious trouble. Deal with it. Social factors. This might conceivably happen, and if it does, HTTP/1 will not save you. In fact, cleartext HTTP/1 will be just about the first thing to die (be blocked) in the most likely relevant sequence of events.
It is a layering violation though. Not all HTTP usage is through a browser, and not all routes go over the plaintext Internet. Browsers or clients can still require HTTPS at the application layer, but it shouldn't be part of the protocol spec.
Suppose I have an app within an intranet that's secured with, say, WireGuard or an application-layer tunnel (e.g., SSH or OpenZiti).
Bringing HTTP/3 into the picture means dealing with CAs and certs on top of the provisioning I've already done for my lower layers, possibly leaking information via Certificate Transparency logs. Then there's the cost of double encryption, and so on.
I agree that sending cleartext over the internet in this day and age is a bad idea. But "encrypt all communication at the application layer" doesn't have to be the only solution. There's also "encrypt communication at the *network* layer," as discussed here for example: https://tailscale.com/blog/remembering-the-lan/
I have a suspicion that this will prove to be a better abstraction than application-level encryption for everything. If I'm right, I would expect things to naturally start migrating in that direction over time. We'll see!
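As a concrete (if simplified) sketch of what I mean, assuming a made-up WireGuard address and port: the app can speak plain HTTP/1.1 and still never appear in cleartext outside the tunnel, because it only listens on the tunnel interface.

    # Minimal sketch: a plain-HTTP/1.1 service that binds only to a (made-up)
    # WireGuard interface address, so confidentiality comes from the network
    # layer rather than from TLS inside the app.
    from http.server import HTTPServer, BaseHTTPRequestHandler

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"hello from inside the tunnel\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Binding to the tunnel address (not 0.0.0.0) keeps the cleartext listener
    # unreachable from outside the WireGuard network.
    HTTPServer(("100.64.0.2", 8080), Hello).serve_forever()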
Thinking that social issues can't and shouldn't shape technical discussions, especially when talking about one of the most important technological platforms for society, is rather limited and short-sighted.
> If free issuance of certificates falters, the internet as we know it is in serious trouble. Deal with it. Social factors. This might conceivably happen, and if it does, HTTP/1 will not save you.
This (HTTP/1 won't save us) doesn't seem entirely accurate to me.
I can run free, untrusted HTTPS easily using self-issued certificates. It's relatively simple to think of mechanisms where trust can be layered on top of that outside the traditional CA mechanisms (think Keybase derivatives like DID-systems). It's a small patch to allow that alternative trust framework to be used for HTTPS.
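For illustration (a rough sketch with Python's cryptography package; the name and lifetime are placeholders, not recommendations), self-issuance is just signing a certificate with its own key; the trust layer comes from somewhere else entirely.

    # Sketch: generate a self-issued certificate with no CA involved.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.internal")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-issued: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))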
I don't know HTTP/3 at all, but if it is more tightly tied to CA infrastructure that is a problem.
> I can run free, untrusted HTTPS easily using self-issued certificates.
At present you can. But think about what conditions might lead to free issuance faltering: it will almost certainly boil down to pressure from governments. And do you think that such governments will lightly allow you to bypass their measures? No; once the dust settles, no technical measures will be effective: the end result will be mandatory interception of all traffic, with TLS proxying and similar, and any other traffic blocked. Countries have even done this at times, requiring anyone who wants to access the internet to install their root certificate.
The internet is designed to be comparatively robust against sociopolitical attack, but if a sufficiently powerful government decides to concertedly attack the internet as we know it, the internet will not win the conflict.
> I don't know HTTP/3 at all, but if it is more tightly tied to CA infrastructure that is a problem.
As clarified elsewhere in this thread, HTTP/3 changes absolutely nothing about certificate verification; superkuh appears to have misunderstood the meaning of the text in the spec.
>Look, be real: the entire web is now built heavily on the CA model.
No. You've just got your commercial blinders on. The entire *commercial web* is built on the CA model. But the commercial web is hardly all there is. There is a giant web of actual websites run by human persons out there that do not depend on CA TLS and who's use cases do not require dropping clear text connections. That's only a need for for-profit businesses and institutions.
I agree that the mega-corp browsers will drop support for any protocol that does not generate them profit. The consequences of this action will be dire for everyone. But you can't convince people of this. You just have to let it happen and let people learn from the pain. Just like with the social networks.
Pervasive monitoring is an attack. No public internet traffic of any character should be cleartext. The actual websites run by human persons that you speak of (such as myself) are not exceptions to this.
As others said, it's a layering violation. What does that commandment mean in practice? Essentially, you can't just UDP your way around the protocol and do frame comparison to test its robustness; you have to care what it looks like when encrypted. And you now need to use a subset of the TLS spec that the most widely used implementations in the wild treat as private API. So most QUIC implementations are built on some broken fork of OpenSSL. This leads to fewer implementations, which means concentration of power (the spec is not king; the implementations rule the protocol) and a narrower attack surface for exploiters to concentrate on. And we all lose.
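To make that concrete (a loose sketch of just the first step, using the QUIC v1 constants from RFC 9001): even the client's very first Initial packet is encrypted, with its keys derived from the Destination Connection ID through TLS 1.3-style HKDF, so there's no plaintext frame to poke at without doing the crypto.

    # Sketch: deriving the QUIC v1 Initial secret (RFC 9001, Section 5.2).
    # The salt is the fixed QUIC v1 value; the DCID is the example one from
    # RFC 9001 Appendix A. Everything after this (HKDF-Expand-Label,
    # AES-128-GCM packet protection, header protection) is omitted here.
    import hmac, hashlib

    INITIAL_SALT_V1 = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")
    client_dcid = bytes.fromhex("8394c8f03e515708")

    # HKDF-Extract(salt, IKM) is HMAC-SHA-256 keyed with the salt.
    initial_secret = hmac.new(INITIAL_SALT_V1, client_dcid, hashlib.sha256).digest()
    print(initial_secret.hex())

The client and server packet-protection keys are then expanded from that secret, and the handshake keys come straight out of the TLS stack, which is exactly the kind of internal plumbing a QUIC implementation needs its TLS library to expose.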