> it cannot establish a connection to a non-CA TLS endpoint
That's plain wrong. Self-signed certificates and internal CAs work just fine.
The whole point of HTTP/3 is: You can't establish an unencrypted connection. And that's a very good idea!
I'm running an HTTP/3 development server on this machine at this very moment. I did not have to ask anybody for permission to do so.
(Actually I'm right now building a WebTransport server. WebTransport has even stricter rules for certificates, but even there it's still possible to connect to an endpoint that uses a self-signed cert not signed by any CA cert.)
Inform me then. Did you compile boringssl or openssl+quic or whatever TLS lib yourself and enable the proper flags so you could do this? You and I both know that doesn't count. You certainly can't if you're using a binary distributed browser made by Microsoft, Google, Apple, or even Mozilla. If you look at the traffic you're sending, it's probably hitting the HTTP/1.1 endpoint first, then switching to HTTP/3 for further traffic.
Internal CAs work, but that's internal and irrelevant to being able to host a website visitable by random people.
>You can't establish an unencrypted connection. And that's a very good idea!
That's a very good idea for incorporated persons and their websites that involve transfers of money and other private details. The trade-off is that the entire system is more fragile and complex and needs continuous updating and approval from a third-party corporation. These are very bad traits for a personal website meant to last more than a few years. It's bad for the longevity of the web, and thus its health.
> Did you compile boringssl or openssl+quic or whatever TLS lib yourself and enable the proper flags so you could do this?
No, I did not.
> You certainly can't if you're using a binary distributed browser made by Microsoft, Google, Apple, or even Mozilla.
That's the part that simply isn't true.
For testing, I'm using a Chromium derivative with a stock Blink engine (Vivaldi).
You can use self-signed certs just fine. (But using a custom CA set up by `mkcert` is actually the simplest way for a dev setup.)
Chrome has an `--ignore-certificate-errors-spki-list=${CERT_HASH}` flag enabling the use of self-signed certs for HTTP/3, given the right cert hash.
I admit that this isn't something an average user could do. You need to invoke some `openssl` voodoo to generate, from a given cert, the hash appropriate for use with the mentioned browser flag. But that's actually a feature, imho, as it makes talking someone into casually starting their browser with this flag for some arbitrary domain quite difficult, or in a lot of cases even impossible. (And the addition of CA certs by an unauthorized user can be prevented by other means.)
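For the record, the `openssl` voodoo looks roughly like this (file names are arbitrary; the flag expects the base64-encoded SHA-256 digest of the cert's SubjectPublicKeyInfo):

```shell
# Generate a throwaway self-signed cert for localhost
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 14 -nodes -subj "/CN=localhost"

# Extract the public key, DER-encode it, SHA-256 it, base64 the digest:
# this is the value --ignore-certificate-errors-spki-list expects
CERT_HASH=$(openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64)

echo "$CERT_HASH"
```

The browser is then started with `--ignore-certificate-errors-spki-list="$CERT_HASH"` (plus, in my setup, the force-QUIC flag mentioned below).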
For WebTransport (where it's anticipated that the endpoints could well be ephemeral machines, maybe even without DNS records) you can pass just a cert's SHA-256 fingerprint to the `WebTransport` constructor in the user agent. This will make the browser accept the designated (self-signed) cert without any further checks.
> If you look at the traffic you're sending it's probably hitting the HTTP/1.1 endpoint first then going to http/3 for further traffic.
No, I don't even have an HTTP/1 (or /2) endpoint in this setup. I need to pass `--origin-to-force-quic-on=localhost` explicitly, as Google's engine is otherwise too stupid to recognize the HTTP/3 server. (Besides that, ALPN also seems to currently have issues, judging from some comments online. But I'm not using that mechanism anyway and have just a pure HTTP/3 endpoint.)
> That's a very good idea for incorporated persons and their websites that involve transfers of money and other private details.
It's a good idea in general.
Have you ever considered that merely knowing who visits which website, and when, is privacy-relevant information?
Metadata is often considered even more interesting than the actual data. US officials have boldly stated things like: "We kill people based on metadata"…
Encrypting just everything is the only way forward! Otherwise, sending or receiving encrypted traffic would already be a data point as such: a data point which could be used against someone.
> The trade-off is that the entire system is more fragile and complex […]
Yes, it's a trade-off.
Also, I think that the CA system is fundamentally broken.
But this is nothing new coming with QUIC!
> […] and needs continous constant updating and approval from a third party corporation. These are very bad traits for making a personal website that can last more than a few years.
I'm not buying this argument any more. Before something like Let's Encrypt existed, you would have been right. But since then this point is moot.
You don't need any "approval", you just need to prove that you own the domain for which you'd like a cert. This is a completely automatic and anonymous process. Once set up, it will work as long as something like Let's Encrypt exists. (And Let's Encrypt very likely won't disappear anytime soon!)
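To illustrate the "set up once" part (certbot is just one ACME client; the domain and webroot path are placeholders):

```shell
# One-time: prove control of the domain via an HTTP challenge and get a cert
certbot certonly --webroot -w /var/www/example -d example.com

# Renewal is typically handled by a distro-installed systemd timer;
# a plain cron entry works just as well:
# 0 3 * * * certbot renew --quiet
```

After that there is no human in the loop; the client renews the cert automatically before it expires.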
> It's bad for the longevity of the web, and thus its health.
No, it makes no difference.
An unmaintained website will go away sooner or later anyway. You need at least to pay bills to keep it up. At least…
But besides that, this point is also moot. Nothing of this whole digital stuff will last very long. Ever tried to open an ancient file format? By ancient I don't mean 20 000 years old like some stone carvings, I don't mean 2 000 years like some papyrus scroll, not even 200 years like a somewhat older book; I mean a file as "ancient" as something made by some firm that went out of business 20 years ago…
And talking about "health" in the context of the web, given the current state of the internet, is a joke in its own right. I don't want to offend anybody, but that's just the truth. The web is broken beyond repair. And that's mostly not even for technical reasons. (Although there would be more than enough of those, too. But QUIC is actually one of the technologies that are more or less sane, even if complex, and a step in the right direction. Every middle-box it kills on its way is a huge win for the net!)
>Have you ever considered that merely knowing who visits which website, and when, is privacy-relevant information?
I get this a lot. To be clear, I'm not against encryption. I am against only allowing connections to sites which are encrypted using a third-party incorporated entity's tools. HTTP+HTTPS is definitely the way to go so that people can choose the HTTPS endpoint if they want, but still access the site if that fails for technical reasons (lack of maintenance when ACMEv2 came around and ACMEv1 was dropped, etc.) or malicious ones. The problem is that HTTP/3 only allows the one mode.
> But this is nothing new coming with QUIC!
Correct. But HTTP/3 on QUIC does make it much, much more of a problem, because only 0.000001% of worldwide users are going to be passing `--ignore-certificate-errors-spki-list=${CERT_HASH}` to Chrome after their browser first prevents the link from working.
>An unmaintained website will go away sooner or later anyway. You need at least to pay bills to keep it up. At least…
I know too many late-90s/early-2000s websites to count that haven't been touched in the last decade+. And I know they would not exist if they had relied on HTTPS-only or HTTP/3.
> HTTP+HTTPS is definitely the way to go so that people can choose the HTTPS endpoint if they want,
This opens up the way to downgrade attacks.
Imho there should not be any unencrypted traffic on the net. Not even the technical possibility for it, as long as you're using standard software. Call me an old-school crypto-nerd, but I just don't see any alternative. Everything else is going to get exploited. There is just too much initiative from very powerful factions. So crypto needs to be enforced at a very fundamental level. Security by design, privacy by design!
> HTTP/3 on QUIC does make it much, much more of a problem because only 0.000001% of worldwide users are going to be passing `--ignore-certificate-errors-spki-list=${CERT_HASH}` to chrome after their browser first prevents the link from working.
I agree that the requirement for CA signed certs is sub-optimal.
I for my part only care about the (forced) encryption.
And that's actually all QUIC requires. The protocol does not force checking certificate chains. (Otherwise the above-mentioned switch wouldn't be possible in a compliant implementation.)
The check of the cert chain is an implementation detail of the HTTP/3 stack in browsers, AFAIK. (I could be wrong here and HTTP/3 may require "WebPKI"-trusted certs; I haven't read all of the spec yet.)
I would love it, of course, if the CA-based "WebPKI" got replaced by something decentralized with self-service options.
Having CA certs you didn't install yourself in your browser (after careful consideration!) is a major security risk, imho. Just look at the list… You're "trusting", in the end, more or less everybody with money and power on this planet. That's not how it should be. But I don't know what an alternative could look like. (And I guess nobody really knows. Maybe you?)
But something like Let's Encrypt, which only checks the part that actually matters (namely whether you own the domain for which you'd like a cert; no further questions asked), is imho as close to "decentralized self-service" as it gets at the moment.
> I know too many late-90s/early-2000s websites to count that haven't been touched in the last decade+. And I know they would not exist if they had relied on HTTPS-only or HTTP/3.
Why do you think HTTP/3 would have prevented those sites from lasting as long as they did?
Getting and renewing certs is a one-time setup. I'm pretty sure it will just work for years to come once up and running.
Yes. A website run by a human person is vulnerable to downgrade attacks in the same way that a human person is vulnerable to rocket artillery. In some contexts, like, say, active war zones or hosting a cryptocurrency market, it matters. But in most cases human people don't actually have to worry at all. Especially since the downgrade "attack" is not really an attack at all. And you're only "vulnerable" to it if you execute javascript; otherwise there's no intrinsic damage to using HTTP. That only applies to commercial/money-exchanging contexts and things like hospitals.
>Why do you think HTTP/3 would have prevented those sites from lasting as long as they did?
If the site was HTTP/3-only, then their cert would have expired or their update system would have broken, and browsers would not be able to access the site.