But we're sitting here misunderstanding what the "copy the hosts file" people actually want from DNS and why they're frustrated with the current state of it.
Failure to contact the name server should not cause a failure in name resolution. It should mean that the server isn't getting record changes.
Caches suck for this use-case. Great for the public internet but terrible for intranets. I want DNS clients to replicate all my internal zones from the masters and then serve their own queries. Records don't change that often so "bad things" would happen far less often if the failure mode was the client was behind on replication.
It's the same story with DHCP and AD. If DHCP worked so that leases were basically indefinite and servers would keep using the last address they were given we would end up with fewer total problems. All server addresses are reserved anyway and change on human time. Just propagate the changes when they happen instead of having the client ask every time.
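The "reserved anyway, changes on human time" point is roughly what a static reservation plus a long lease already expresses. A sketch in ISC dhcpd syntax (the hostname, MAC, and addresses are made up for illustration):

```
# Hypothetical ISC dhcpd config -- host name, MAC, and IPs are made up.
# A fixed-address reservation plus long lease times gets close to an
# "indefinite lease": the server always hands this host the same address,
# and the client keeps using it between renewals.
default-lease-time 604800;    # 7 days
max-lease-time 2592000;       # 30 days

host app-server-01 {
    hardware ethernet 00:16:3e:aa:bb:cc;
    fixed-address 10.0.0.42;
}
```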
This is what all the rsync people are ranting about. They want a DNS system where the server pushes record updates to the clients. We're over here building chat systems based on message brokering and WebSockets but doing it for name resolution is suddenly weird?
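To be fair, DNS already has a push-shaped mechanism: NOTIFY (RFC 1996) plus incremental zone transfer (IXFR). On a record change the primary pings its secondaries, which pull the delta instead of polling. A minimal BIND-style sketch, with a made-up zone name and addresses:

```
// Hypothetical BIND primary config -- zone name and addresses are made up.
// NOTIFY is the "push": on a record change the primary notifies its
// secondaries, which then fetch the update via IXFR/AXFR.
options {
    notify yes;
};

zone "internal.example" {
    type primary;                           // "type master" on older BIND
    file "zones/internal.example.db";
    allow-transfer { 10.0.0.0/8; };         // let internal resolvers replicate
    also-notify { 10.0.0.53; 10.0.1.53; };  // push change notices to them
};
```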
> "Records don't change that often so "bad things" would happen far less often if the failure mode was the client was behind on replication."
That's... entirely environment-specific, whether it's true or not.
If you're doing modern CI/CD practices with k8s, etc - those records might be valid for minutes at a time.
In many situations it's better to specifically get a DNS Resolution failure than to get out of date records.
That at least should be noticeable and obvious to the application and anything/anyone monitoring/using it.
> They want a DNS system where the server pushes record updates to the clients.
This is what zone transfers between DNS servers are.
Running a DNS Server on each client in your network seems like an overly complicated situation but hey, if you want to do that - you can. I suspect it'll cause more problems than it solves by just having a set of well monitored/configured DNS servers.
But this is just as env specific! The point is that the “copy hosts” people are basically saying in their env that serving potentially stale records is the preferred failure mode.
> Zone xfer!
Yes! I just wish the software support were better/more mature for "caching servers" (i.e. clients) to act as slaves for zones rather than caching requests in-flight.
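The "client as zone slave" setup is expressible in stock BIND, for what it's worth. A sketch of a client-side resolver holding a secondary copy of an internal zone (zone name and primary address are made up):

```
// Hypothetical BIND config for a client-side resolver acting as a
// secondary for an internal zone -- zone name and address are made up.
// The box answers queries from its local copy; if the primary is
// unreachable it keeps serving the last replicated data until the
// zone's expiry timer runs out.
zone "internal.example" {
    type secondary;            // "type slave" on older BIND versions
    primaries { 10.0.0.53; };  // "masters" on older BIND versions
    file "zones/internal.example.db";
};
```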
> Running a DNS server on each client...
This is what Ubuntu has done for ages with dnsmasq and now systemd-resolved. Every Linux server these days is running a DNS server.
This isn’t a “solving problems” thing. This is a “what should happen in the event of a failure of those well-configured and monitored DNS servers” thing. You can’t just offer “just never fail” as a solution.