> The DNS Long-Lived Queries (LLQ) mechanism [RFC8764] is an existing deployed solution to provide asynchronous change notifications; it was used by Apple's Back to My Mac [RFC6281] service introduced in Mac OS X 10.5 Leopard in 2007. Back to My Mac was designed in an era when the data center operations staff asserted that it was impossible for a server to handle large numbers of TCP connections, even if those connections carried very little traffic and spent most of their time idle. Consequently, LLQ was defined as a UDP-based protocol, effectively replicating much of TCP's connection state management logic in user space and creating its own imitation of existing TCP features like flow control, reliability, and the three-way handshake.
I can't help but hear the skepticism in "asserted"!
Also, fun that Apple want to move a service from UDP to TCP at a time when Google are trying to move another service from TCP to UDP.
It’s still tricky to handle more than, say, a million or so TCP connections per box, but moving the connections to UDP with a TCP-like state machine in user space solves none of the issues.
It depends on how TCP-like you end up being. In TCP, all outgoing data is stored for retransmit if not acked; in this application, a server wouldn't need its responses acked or stored (if the client doesn't get the response, it will resend the request), and push messages could be regenerated if not acked, rather than stored and retransmitted as-is.
That reduction in required storage could be significant. User space connection tracking might also use less memory than kernel connection tracking, depending on how many non-essential things each implementation tracks. Colocation of the connection tracking data and the application level data might be beneficial as well.
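To make the distinction concrete, here's a minimal sketch of that regenerate-on-timeout idea. It's purely illustrative: `send_push`, the timeout, and the dictionaries are hypothetical placeholders, not anything from the RFC or a real implementation.

```python
import time

# Keep only the latest state per subscriber and regenerate the push message on
# timeout, instead of buffering every unacked message for byte-exact
# retransmission the way TCP does.

latest_state = {}    # client_id -> most recent record set for that client
unacked_since = {}   # client_id -> time the last push was sent

def send_push(client_id, state):
    pass  # placeholder for the actual UDP (or TCP) send

def push(client_id, state):
    latest_state[client_id] = state
    unacked_since[client_id] = time.time()
    send_push(client_id, state)

def on_ack(client_id):
    unacked_since.pop(client_id, None)

def check_retransmits(timeout=2.0):
    now = time.time()
    for client_id, sent_at in list(unacked_since.items()):
        if now - sent_at > timeout:
            # Regenerate from current state; no per-message send buffer needed.
            push(client_id, latest_state[client_id])
```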
I didn't read the prior RFCs to see what kind of hoops they jumped through to make the connections long-lived, though. My guess is that TCP session timeouts on NAT boxes are longer than UDP ones, and there are more networks that disallow UDP than TCP, so TCP is probably a good idea from that standpoint.
yeah really, aren't you just moving the buffers that track the TCP connection states from the kernel to some program in user space? how does that solve anything
Does anyone know what mechanism is used between Route 53 and Google DNS? When I update a record in Route 53, there seems to be 0 delay in the updated records being present in 8.8.8.8, even if I've recently requested the old value. I've been imagining that they had set up some sort of "cache invalidate" message that AWS could send Google, but I haven't done any investigation.
which suggests the IP addresses share infrastructure (at least near London!)
You can try this sort of thing with other providers to try and map out their internal infrastructure. (1.1.1.1/cloudflare has around 22 machines near me; quad9 has 16; opendns also has 16; verisign has 31; etc).
One thing I tried was making an ad that recorded the cookie id in the impression tracker so I could record the connecting IP addresses in our DNS server. I could then target users who have a particular network provider (or use a particular DNS provider) which could be useful if I want a large number of users who (effectively) ignore DNS caches.
8.8.8.8 is a bunch of servers, and you may be getting a different server to service your request than the one that cached your recently requested old value. Make several requests and observe the TTL -- you may notice that it jumps around as you get different servers, especially if you space your requests a few seconds apart.
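One way to watch the TTL bounce around, roughly sketched with dnspython (the domain, loop count, and interval here are arbitrary placeholders):

```python
import time
import dns.resolver  # dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]

# Repeatedly ask the same question and print the remaining TTL each answer
# reports; a TTL that jumps around suggests different backend caches answering.
for _ in range(10):
    answer = resolver.resolve("example.com", "A")
    print(answer.rrset.ttl, sorted(r.address for r in answer))
    time.sleep(3)
```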
Perhaps Route 53 and Google have set up a system to notify each other when a record changes, so they can then request a transfer and have near-zero propagation delay.
Note that this notify system is not the same as that proposed in the RFC. This notify system is configured by the admin and is meant to keep secondary servers updated with changes in the primary server, not for general notification of changes to anyone who is interested.
I haven't noticed this with Route 53 (but I also haven't used it for a year). My experience was that even Amazon's resolvers didn't update with my changes immediately. So if I added a record, then immediately did "dig @8.8.8.8 my.new.domain", I would get NXDOMAIN cached for the time on my SOA record. To avoid that wait, I had to be careful to not try resolving it for a couple minutes, so that the NXDOMAIN wouldn't be cached.
Maybe things have changed, which would be nice. Nowadays I'm on DigitalOcean and they don't let you control the TTL on the SOA record, so you have to be even more careful than with Amazon. Very annoying.
You could query your authoritative name server(s) first and only query other resolvers once you see your new record. Shouldn't be too hard to automate that using something like dnspython.
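Something along these lines should do it; this is only a sketch, and the authoritative server name, record name, and expected address are made-up values:

```python
import time
import dns.resolver  # dnspython

# All of these values are invented for illustration.
AUTH_SERVER = "ns-123.awsdns-45.com"   # one of the zone's authoritative servers
NAME = "my.new.domain.example.com"
EXPECTED = "192.0.2.10"

auth = dns.resolver.Resolver(configure=False)
auth.nameservers = [list(dns.resolver.resolve(AUTH_SERVER, "A"))[0].address]

# Keep asking the authoritative server directly; don't touch any public
# resolver until the new record shows up, to avoid caching an NXDOMAIN.
while True:
    try:
        answers = auth.resolve(NAME, "A")
        if any(r.address == EXPECTED for r in answers):
            break
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass
    time.sleep(10)

print("authoritative server is serving the new record; safe to query resolvers")
```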
If the authoritative is using Anycast (which it should), then you run into similar problems to those you get with Anycast resolvers, which is to say you might get different results depending on distribution of data. Due to this we currently provide an API for checking distribution of zone changes with our authoritative name servers: (https://developer.dnsimple.com/v2/zones/#checkZoneDistributi...).
Thanks for the replies on this. Looks like most of this was luck of the draw and not notifications from AWS to Google. I had a change to make today, and made 100 requests to 8.8.8.8 before, then made the change, and it hadn't updated after a couple minutes. In Denver, it looks like we have 15 different servers answering the 8.8.8.8 requests. I then hit the "clear cache" URL someone mentioned here, and it updated quickly after that. Great discussion, thanks everyone!
This is what I immediately thought too. I don't think even NTP servers screamed this much potential for abuse by the time the standards were being drafted. However, I imagine this uses TCP and not UDP (I didn't read the whole RFC yet), which mitigates some of the attacks.
Browsers will need to update their same-origin policy so that a change in IP address blocks further same-origin requests, since the same name may now be serving a different site.
This would mean that long-lived single page web apps would need to be hard-refreshed every once in a while when, through no fault of the app developer, all the IP addresses that their domain name resolves to have rotated.
The DNS already supports NOTIFY, which is a push notification for updates to a zone (this is something set up by the operators of the auth servers, typically for mirrors/secondaries so that they know when to request a zone transfer); the alternative, polling, requests the SOA RR for a zone and compares serial numbers.
Didn't give it a detailed read, but this looks like a more granular proposal, an example given being printer discovery.
They'd also be used by any clients that don't understand DNS push notifications. That includes a lot of networking hardware, industrial hardware, medical devices, etc.
TTL tells servers and clients in the wild how long to hold on to a query result. You'll want to set this very high if you expect a nuclear war soon. There is no push notification for this.
Push notifications occur when the primary is HUP'd or restarted, telling the secondaries to pull fresh zones so that every server is known to have the same serial. After this, the secondaries poll the primary every 'refresh' seconds to check for a newer copy of the zone.
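For reference, the serial comparison that this polling relies on is easy to do by hand; here's a rough dnspython sketch (the zone name and server addresses are invented examples):

```python
import dns.resolver  # dnspython

ZONE = "example.com"
PRIMARY = "192.0.2.1"     # invented addresses for a primary and a secondary
SECONDARY = "192.0.2.2"

def soa_serial(server, zone):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    rdata = list(r.resolve(zone, "SOA"))[0]
    # The SOA rdata also carries refresh/retry/expire, which drive the polling.
    return rdata.serial

# A secondary is caught up when its serial matches the primary's.
print(soa_serial(PRIMARY, ZONE), soa_serial(SECONDARY, ZONE))
```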
After reading the spec, it looks like there are a lot of differences... this RFC essentially gives complete control of the resource record set lifetime to the DNS server. This would require major changes on the DNS client side.
This getting released during WWDC does not feel like a coincidence. I wonder what Apple will be using it for, it’ll probably be mentioned in a session sometime this week.
Just speculating here, but they still have a Back-to-my-Mac-like feature: The HomePod allows for remote access to things in your home WiFi, e.g. IoT devices.
As IP addresses of home routers change often enough, this might be a use case for DNS push. Access has to work across carrier-grade NAT though, so they might still need more than DNS.
The more I think about this the less it makes sense to me.
My understanding of UDP is it's supposed to be for heavy, lossy traffic where late traffic is pointless or harmful (like video streaming frames, if your frame is late better toss it out than keep it). But I kind of want my notifications even if they're late. I think I'm missing something in my understanding. I was thinking if you can stream using DNS, and cut out something like Kafka, that might be a big deal, but on second thought it doesn't make sense because DNS is more about service discovery than it is about piping load; you want an alias to a server that does the heavy lifting.
Haven’t read the whole proposal, but some concerns that stand out to me are: 1 - impact on DNS system performance; and 2 - what infrastructure does this proposal rely upon?
You need both for a robust system. Simple example: On system startup, a system should poll for the current config. It should keep polling every few minutes to verify that it hasn't missed any pushes.
But it should also register for push notifications of config changes so it can get them faster.
It should also renew its subscription if it finds that it missed an update during polling.
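A bare-bones skeleton of that poll-plus-push pattern might look like this; `fetch_config` and `subscribe_to_pushes` are hypothetical stand-ins for whatever the real transport is, and the interval is arbitrary:

```python
import threading
import time

POLL_INTERVAL = 300  # seconds; arbitrary for the sketch

def fetch_config():
    """Hypothetical: pull the full current config from the source of truth."""
    return {}

def subscribe_to_pushes(on_change):
    """Hypothetical: register on_change to be called with each pushed config."""
    pass

state = {"config": None}

def apply_config(config):
    state["config"] = config

def poll_loop():
    while True:
        # The periodic poll repairs any drift caused by a missed push,
        # and is a natural place to re-register the push subscription too.
        apply_config(fetch_config())
        time.sleep(POLL_INTERVAL)

subscribe_to_pushes(apply_config)     # fast path: apply changes as they arrive
threading.Thread(target=poll_loop, daemon=True).start()  # safety net
```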
I mostly agree with you, but having spent a good part of my career dealing with data integration patterns, I'm all too familiar with the problem of missed messages from the consumer (or failed-to-send messages from the producer) that results in something downstream being in an incoherent state. The simplest fix for that incoherent state is often to also poll periodically, or something similar.
Push requires more engineering effort to build the server-side of the system. Poll is easier to implement for consumers, and probably also easier to scale for most engineering teams (scaling a central DB is a well understood engineering problem).
I agree that push is better overall, but… tradeoffs