
From experience with multi-datacenter setups, if you set a 60-second TTL on your DNS records, you'll see 95%+ of traffic get the update within 5 minutes.
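
You can watch how quickly that propagates by polling a resolver and looking at the remaining TTL it reports. A minimal sketch, assuming the third-party dnspython package, a public resolver at 8.8.8.8, and example.com as a stand-in domain:

    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]  # ask one specific public resolver

    answer = resolver.resolve("example.com", "A")
    # The TTL here is the *remaining* cache lifetime on that resolver;
    # with a 60-second TTL at the authority it will never exceed 60.
    print("addresses:", [r.address for r in answer])
    print("ttl remaining:", answer.rrset.ttl)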

Also, you can associate multiple addresses with a single record. It's up to the client to retry on failure, but all browsers do, as far as I know.
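
That client-side fallback looks roughly like this, using only the Python standard library (example.com and port 443 are placeholders):

    import socket

    def connect_any(host: str, port: int, timeout: float = 3.0) -> socket.socket:
        """Try every address the resolver returned until one accepts a connection."""
        last_err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM
        ):
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            try:
                s.connect(sockaddr)
                return s
            except OSError as err:
                s.close()
                last_err = err  # this address failed; fall through to the next one
        raise last_err or OSError(f"no addresses found for {host}")

    conn = connect_any("example.com", 443)
    print("connected to", conn.getpeername())
    conn.close()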




Wouldn't that kill DNS if everyone did it, considering DNS relies on caching for performance across the world?


I'm no expert, but no. Most big sites rely on a fairly short TTL.

It's a thick stack of caches. Your browser, your OS, your router, your ISP's resolver, and a bunch of intermediaries can all cache DNS responses. So even at 60s you get good cache hit rates (and the busier the site, the truer that is, of course).

Also, the update can always happen asynchronously. You and 9,999 other people ask your ISP's resolver for Facebook's IP. It serves all of you a slightly stale IP and asynchronously fetches a fresh one, thus turning 10,000 upstream requests into 1. That's how resolvers sidestep the thundering herd problem.
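
A toy version of that serve-stale-and-refresh pattern, just to make the idea concrete (illustrative only; the lookup callable is a placeholder for the real upstream query):

    import threading
    import time

    class StaleDnsCache:
        """Serve a possibly stale answer immediately; refresh in the background once."""

        def __init__(self, lookup, ttl=60):
            self._lookup = lookup        # the real upstream resolver call
            self._ttl = ttl
            self._entries = {}           # name -> (answer, fetched_at)
            self._refreshing = set()     # names with a refresh already in flight
            self._lock = threading.Lock()

        def get(self, name):
            with self._lock:
                entry = self._entries.get(name)
                if entry is None:
                    # Nothing cached yet: the very first request pays the upstream cost.
                    answer = self._lookup(name)
                    self._entries[name] = (answer, time.monotonic())
                    return answer
                answer, fetched_at = entry
                expired = time.monotonic() - fetched_at > self._ttl
                if expired and name not in self._refreshing:
                    # One background refresh for the 10,000 waiting clients, not 10,000.
                    self._refreshing.add(name)
                    threading.Thread(target=self._refresh, args=(name,), daemon=True).start()
                return answer  # possibly a few seconds stale

        def _refresh(self, name):
            try:
                answer = self._lookup(name)
                with self._lock:
                    self._entries[name] = (answer, time.monotonic())
            finally:
                with self._lock:
                    self._refreshing.discard(name)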

DNS mostly uses UDP, which is more efficient for the server and harder to DoS (the server doesn't have to maintain state per request).
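
The whole exchange is one stateless round trip: a single datagram out, a single datagram back. With dnspython (again a third-party package, and Google's public resolver just as an example) that looks roughly like:

    import dns.message
    import dns.query  # pip install dnspython

    # Build one query packet and fire it at a resolver over UDP.
    # No connection, no handshake: the server answers and forgets us.
    query = dns.message.make_query("example.com", "A")
    response = dns.query.udp(query, "8.8.8.8", timeout=2.0)

    for rrset in response.answer:
        print(rrset)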

Finally, the number of requests is usually (always?) a factor in the price of DNS services, so the cost of extra queries is borne by the customers, not the providers. And since DNS hosting is seemingly profitable, I assume providers are more than happy to build out the infrastructure to handle additional requests.


But there is a difference between a few big sites and many small sites when it comes to DNS caching. If 10 big sites get 1 million requests each within an hour, most of those requests will be served from cache. If 1 million small sites get 10 requests each within an hour, most of those requests will NOT be cached and will each force a cache refill. I think that might strain the DNS infrastructure; a rough calculation is sketched below.
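
Back-of-the-envelope version, assuming a single shared resolver, a 60-second TTL, and the request counts from the comment above:

    TTL = 60        # seconds
    WINDOW = 3600   # one hour

    # 10 big sites, 1,000,000 lookups each per hour through the same resolver:
    # each site's entry can expire at most 3600 / 60 = 60 times per hour,
    # so at most 60 upstream queries per site regardless of client load.
    big_sites, big_lookups = 10, 1_000_000
    big_misses = big_sites * (WINDOW // TTL)   # 600 upstream queries total
    print("big-site hit rate:", 1 - big_misses / (big_sites * big_lookups))  # ~99.99%

    # 1,000,000 small sites, 10 lookups each per hour: lookups arrive about
    # 6 minutes apart on average, so nearly every one finds an expired entry
    # (treated here as a worst case where every lookup goes upstream).
    small_sites, small_lookups = 1_000_000, 10
    small_misses = small_sites * small_lookups
    print("small-site hit rate:", 1 - small_misses / (small_sites * small_lookups))  # ~0%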



