RFC9460: Service Binding via the DNS (SVCB and HTTPS Resource Records) (rfc-editor.org)
88 points by teddyh 10 months ago | 36 comments



Reasons this is significant, IMHO:

• Most importantly, it makes fallback and load-balancing available for regular people; just spin up an extra web server somewhere else and add it to the DNS, just like a secondary MX. Mostly no need for a CDN anymore.

• The importance of SNI is lessened, since the DNS can now point to a port number as well as a host name. This makes it possible to have web server configurations without an enumeration of all server names; just have the web server use a certificate from a file and listen to the correct port number – no need to configure host names.

• The possibility of having separate web sites use different ports makes process isolation easier without having to go through a gateway host.

• It frees the apex DNS domain name from having to have an IP address; an IP address is harder to keep updated than a simple host name of a web server.

In summary, these things either make it easier to live on today’s internet without Google, Cloudflare, or other centralizing actors, or they make it easier for the smaller web hosting providers, or both.
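To make the first two points concrete, a zone could look roughly like this (all names, addresses, and port numbers are made up for illustration):

    ; Two ServiceMode HTTPS records; clients prefer the lower SvcPriority
    ; and fall back to the higher one, much like MX preference values.
    example.com.       3600 IN HTTPS 1 www1.host-a.example.net. alpn="h2,h3"
    example.com.       3600 IN HTTPS 2 www2.host-b.example.net. alpn="h2"

    ; A site served from a non-default port; the target "." means
    ; "this same owner name", so no SNI-based virtual hosting is needed.
    blog.example.com.  3600 IN HTTPS 1 . port=8443 alpn="h2"

(The targets still need ordinary A/AAAA records, or ipv4hint/ipv6hint parameters, for clients to get addresses.)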


I don't get your first point. CDNs are about content distribution, not load balancing. And you can already add multiple web servers to your A/AAAA records; that is already available to regular people. I believe different DNS providers even shuffle the order of the results to get clients to vary their connections.

The different port thing is something that won't ever happen because the big players in the industry are petrified of the "middlebox bogeyman" and they won't allow any changes to web infrastructure that would make their jobs difficult, even if it would work fine for everyone else. We are doomed to use 443 for eternity.

We could just change the DNS spec to stop the apex madness. I don't know why nobody ever did, other than keeping the status quo.


> We could just change the DNS spec to stop the apex madness. I don't know why nobody ever did

If you mean changing how CNAME works, that can't be done in a backwards compatible way. If you mean adding a new record type, well that is exactly what this RFC does. There have been previous attempts to make RFCs that fix the apex problem, but this is the only one to get beyond a draft. I do think it is odd that it was grouped together with a bunch of other functionality instead of having a dedicated ANAME record type.

I don't know why it took so long to fix the apex problem.
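For reference, the apex fix in RFC 9460 is the AliasMode form (SvcPriority 0), which acts much like a CNAME that is permitted at the apex. A made-up example:

    ; Not allowed: a CNAME at the apex alongside the SOA/NS records
    ; example.com.  3600 IN CNAME lb.hosting.example.net.

    ; Allowed: an AliasMode HTTPS record at the apex
    example.com.  3600 IN HTTPS 0 lb.hosting.example.net.

AliasMode records are not supposed to carry SvcParams; they only point the client at the target name.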


> CDNs are about content distribution, not load balancing.

For you, maybe, but many people use a CDN and related services as a form of protection from web server overload.

> We are doomed to use 443 for eternity.

Do you write your applications for Win32? There have been many such proclamations made in the past, and they have all, slowly but surely, mostly gone away. Just try to browse the web using a really old browser or operating system. You can’t do anything, since most web sites require TLS 1.0 or later.

> We could just change the DNS spec to stop the apex madness.

We did. This is the RFC which did it.


I’m assuming the OP is referring to relying on a commercial CDN provider’s anycast implementation, but I could be mistaken.


I'm not convinced about the port stuff.

Until support is well over 99%, using a custom port will mean your website is simply inaccessible to some visitors. Even worse, a link which works perfectly fine on one computer will simply be completely broken on another, without any kind of warning or explanation. It's like trying to host a website on IPv6-only in 2023: nice idea, just not viable in practice.

Besides, weren't the issues with SNI supposed to be solved with ESNI and Encrypted ClientHello?


It seems to me this could also be an advantage for websites which want to make themselves inaccessible in these circumstances. An enterprise which firewalls things so only port 443 is allowed out is also more likely to do corporate MitM of all traffic. Historically I was definitely annoyed that browsers chose to make custom root CAs an implicit exemption from HPKP (even if HPKP is now history). I'd certainly consider DoSing my own site under likely MitM conditions. Using a different port also makes it harder for organisations to successfully do this kind of filtering in the future if it becomes widespread.

Another advantage is to prevent other kinds of blocking, which are based on blacklisting specific ports rather than only allowing port 443. Many residential ISPs and even hosting providers now engage in the flagrant net neutrality violation of blocking port 25; if use of SVCB records became common to allow a different port to be used, it would let other ports be used for site-to-site email, allowing email domains to contribute to rendering this blocking ineffective. Of course I'm not aware of any plan to adopt SVCB for email yet. But TBQH as someone who's always lamented the Internet's lack of a decent service lookup mechanism and the limitations (and lack of adoption) of SRV records, I'd like to see SVCB adopted for pretty much everything in future.

Fundamentally the use of well known ports was always a mistake, and really was just a crutch which was entirely the product of the internet's lack of any real service discovery mechanism, aside from the late, extremely underused and very limited DNS SRV mechanism. Ports are a mere implementation/routing detail and should not have any semantic meaning whatsoever. The time when organisations can do port-based filtering is coming to an end, fortunately. (Even if they do continue with it, the ability to now move practically everything to port 443 makes it increasingly meaningless in terms of achieving whatever they thought they were hoping to achieve.)


Of course you have to maintain a backwards-compatible gateway proxy. But since most clients will not use it, and since it is obvious that it will be possible to remove it at some time in the future, this is, IMHO, acceptable.

> Besides, weren't the issues with SNI supposed to be solved with ESNI and Encrypted ClientHello?

Has that been implemented and/or standardized yet?


AFAIK that proposal got dropped in favor of ECH, which isn't standardized yet [0].

[0] https://blog.cloudflare.com/announcing-encrypted-client-hell...


Although it is not fully standardised, Firefox 119 and later should have it working [1]. It's now up to server implementations to get there as well.

[1] https://defo.ie/ech-check.php


The RFC generally looks to be so substantially useful that I'd expect pretty swift adoption in platforms/browsers, so while the custom port thing may not be useful immediately, that will probably change within a few years (which is the blink of an eye in RFC time ;)


> I'd expect pretty swift adoption in platforms/browsers

I am hopeful but sceptical. No browser implements it fully yet, AFAIK.


I am not that optimistic.

HTTP has lots of embedded usage.


> Until support is well over 99%, using a custom port will mean your website is simply inaccessible to some visitors. Even worse, a link which works perfectly fine on one computer will simply be completely broken on another, without any kind of warning or explanation.

Welcome to my world. Some websites are blocked by my provider. Some websites are blocking my provider and are inaccessible to me. And I'm not talking about anything weird; things like w3c.com and istio.io were blocked for me at various times.

You want a working Internet? You need to use a VPN. I don't think anyone can avoid that.


> • Most importantly, it makes fallback and load-balancing available for regular people; just spin up an extra web server somewhere else and add it to the DNS, just like a secondary MX. Mostly no need for a CDN anymore.

That's IF browsers implement this. Browser devs are typically very reticent to add yet more DNS lookups (latency!) to the page loading process.


That was the argument against SRV records. But SVCB/HTTPS records can have extra information in them which can shortcut the HTTP/3 handshake, which will save time.


My first thought was to SRV records:

> C.1. Differences from the SRV RR Type

> An SRV record [SRV] can perform a function similar to that of the SVCB record, informing a client to look in a different location for a service. However, there are several differences:

> * SRV records are typically mandatory, whereas SVCB is intended to be optional when used with pre-existing protocols.

> * SRV records cannot instruct the client to switch or upgrade protocols, whereas SVCB can signal such an upgrade (e.g., to HTTP/2).

> * SRV records are not extensible, whereas SVCB and HTTPS RRs can be extended with new parameters.

> * SRV records specify a "weight" for unbalanced randomized load balancing. SVCB only supports balanced randomized load balancing, although weights could be added via a future SvcParam.

* https://datatracker.ietf.org/doc/html/rfc9460#appendix-C.1
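Side by side, the two shapes look like this (made-up names; the SRV fields are priority, weight, port, and target, while an SVCB/HTTPS record has a priority, a target, and an extensible list of key=value parameters):

    _imaps._tcp.example.com. 3600 IN SRV   10 5 993 mail.example.net.
    example.com.             3600 IN HTTPS 1 svc.example.net. alpn="h2" port=8443

That extensible parameter list is what most of the differences above come down to.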


I was always a huge fan of SRV records; they give you the ability to easily and quickly shape traffic centrally when the clients and protocol are not under your control.

However, I found very few allies in this. Most people that I talked to disliked the fact that networking/DNS teams were organizationally far removed from application teams, and often imposed slow, onerous change processes.

I don’t see any reason why that will have changed, even though my experience is 15+ years ago.


The whole devops ("microservices") culture is about giving teams autonomy and more project/product ownership. Sure, many companies hire "devops engineers" and then require them to file Jira tickets with the networking/provisioning/cloud team, but that's not the fault of the protocol :)


Ooh, it’s finally published, huh?

Not certain about timing, but I believe browsers have all supported it for 2–4 years.

Simplifying beyond the point of strict accuracy, this is a signal so that when your browser asks DNS “where’s the example.com server?” the response mentions “oh BTW it supports HTTP/3” and so the browser goes straight to https://example.com over HTTP/3. Otherwise, first-time visitors will commonly go to http://example.com over HTTP/1.1, and then be redirected to https://example.com over HTTP/1.1 or HTTP/2 (negotiated at TLS handshake time), and only use HTTP/3 on subsequent visits.

It’s a pity that supporting HTTP/3 takes extra effort compared to HTTP/1 or HTTP/2, but it’s not really avoidable.
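You can see the signal with a reasonably recent dig (output abbreviated, addresses made up):

    $ dig +short example.com HTTPS
    1 . alpn="h2,h3" ipv4hint=192.0.2.1 ipv6hint=2001:db8::1

An answer whose alpn list includes "h3" lets the browser open a QUIC connection on the very first visit, skipping the redirect-then-Alt-Svc dance described above.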


Do you happen to know what this is called in CanIUse & friends?

It would be neat to see what the actual support is. "All browsers" usually means "Chrome and maybe Firefox if you're lucky", unfortunately.


Here's a request discussion to add support in Can I use:

https://github.com/Fyrd/caniuse/issues/6091


Maybe they shouldn't have used http:// for something that isn't http.

Nah, users are too dumb and "my precious adoption rates"


I guess it's more "my precious marketing".

General consensus in marketing seems to be that people can't memorize anything more than domain names, and maybe also a "www." or a single word path behind the domain - but the scheme seems to be universally seen as "technical cruft that non-technical users can't be expected to deal with". (with the one exception that "http:// = bad and https:// = good")

So I guess companies don't want a situation in which they suddenly have to teach their users that, no, it's not "awesomecorp.com", it's actually "https3://awesomecorp.com" (but NOT https://3awesomecorp.com, that's a scammer) etc etc


The main motivation here is the desire to have a seamless transition to newer protocols without having to change the site. This way, your hosting service can enable HTTP/3 and it will just work with an HTTP/3 compatible browser. This is the way every upgrade in TLS and HTTP has worked for quite some time.

It's especially important with HTTP/3 because there are network environments where HTTP/3 doesn't work and so you need to fall back to HTTP/2, and you obviously want that to work seamlessly.

In addition, the basic security structure in the Web is the origin, defined in RFC 6454, and origins are defined by a triplet of [scheme, host, port]. It would not be good to have HTTP over TLS and HTTP/3 be different origins. For instance, you don't want to lose all your cookies if the site turns HTTP/3 on or off.


Reducing the number of hops needed before first data bytes is really cool. We cannot change the speed of light but we can design protocols which need to traverse the medium fewer times.


So, how is this different from SRV records?

They work fine for my limited use...


- it allows specifying that HTTP/3 or HTTP/2 should be used. The former is especially important, since HTTP/3 runs over UDP instead of TCP

- it can include the configuration needed for ECH (Encrypted Client Hello); see the sketch below

- it can be used as an alias record at the apex of a zone, which sort of solves the problem of not being able to have a CNAME at the apex... once this is widely deployed.
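A sketch of the ECH point (the ech value here is only a placeholder; in practice it is a base64-encoded ECHConfigList published by the server operator):

    example.com.  3600 IN HTTPS 1 . alpn="h3,h2" ech="...base64 ECHConfigList..."

The client fetches that parameter along with the rest of the record and uses it to encrypt the real server name inside the TLS ClientHello.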


Did you read Appendix C.1 of RFC 9460? It’s addressed in that very document. If implemented, it can reduce the time to first byte (it saves round trips to negotiate the next protocol), and it can be retrofitted onto existing protocols like HTTP. If this is implemented by most browsers, the SNI reverse proxy only has to handle the remaining load.


Thank you for this. No, I did not, though now, I have. :)


I like the idea of SVCB, but I hate the idea of there being an RR for one specific other protocol (HTTPS). DNS and HTTP[S] are complicated enough without building cyclical dependencies between the two protocols. Less coupling is better.

However, they don't make a good case for why SVCB is needed over SRV. SRV could be modified in a different spec instead.


> SRV could be modified in a different spec instead.

I don't think it could be, at least not without breaking things. Unlike SVCB, SRV wasn't designed to be extensible, so adding new functionality can't really be done without breaking things that currently parse SRV responses.


there is no cyclical dependency for the HTTPS record


There is if you use DoH.


This (DNS) approach does not authenticate. SRV and TXT records already publicize too much information. A RESTful API would be ideal, or adding the equivalent of bearer-token auth to DNS.


Since the DNS record is extensible, it can be extended to contain any extra parameters deemed necessary in the future.
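For instance, RFC 9460 defines a generic "keyNNNNN" presentation syntax and reserves keys 65280–65534 for private/experimental use, so something like this (made-up key and value) is already well-formed before any new parameter is registered:

    example.com.  3600 IN HTTPS 1 . alpn="h2" key65400="experimental-value"

Registered parameters can then be introduced later without changing the record format.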



