DNS server in Go - Big NTP Pool upgrade (ntppool.org)
106 points by tshtf on Oct 9, 2012 | 27 comments



This seems like a great time to prod more folks into joining the pool. If you have a static IP and a stable machine, please consider becoming part of the pool:

http://www.pool.ntp.org/en/join.html

My firewall at home is part of the pool. You can configure how much traffic you can handle in the "manage servers" interface. With any decent cable connection you can set the speed to 768Kbit and never notice the traffic (NTP is just tiny UDP datagrams).

While we're at it, let's go through /etc/ntp.conf and remove any references to stratum-one servers. With a recent version of ntp you can replace all of your server lines with:

  pool 0.CC.pool.ntp.org iburst
  pool 1.CC.pool.ntp.org iburst
  pool 2.CC.pool.ntp.org iburst
  pool 3.CC.pool.ntp.org iburst
Where CC is your country code {us,ca,de,mx,fr,etc}. I decided to err on the side of caution and give an example that will work with any reasonably recent version of ntp. With the most recent stable release, the following single line will suffice:

  pool CC.pool.ntp.org iburst
That one line works on Mountain Lion, Debian unstable, and the most recent Ubuntu release, and ntp will automatically poll more servers as needed.
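
If you want to confirm that the pool servers were picked up after restarting ntpd, running ntpq -p will print the peer table; with the pool directive the list fills in over the first few minutes:

  ntpq -p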

More info on using the pool can be found here:

http://www.pool.ntp.org/en/use.html


Thanks, dfc. I made a note for myself to update the site to mention/recommend the pool statement as well:

https://github.com/abh/ntppool/issues/69


Miek Gieben's [1] DNS library [2] is truly excellent. I use it at Port 6379 for an authoritative DNS server that looks up Redis instances [3]. I too have found Go surprisingly productive and enjoyable to code in, and our servers seem very stable so far. (A minimal sketch of serving a record with the library follows the links below.)

[1] http://www.miek.nl/

[2] https://github.com/miekg/dns

[3] https://port6379.com/blog/2012/09/03/using-dns-to-find-insta...
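
For anyone curious, here's a minimal sketch of an authoritative responder built on miekg/dns. The zone name, listen address, and the 192.0.2.1 answer are placeholders for illustration, not anything from Port 6379's actual setup:

  package main

  import (
      "log"
      "net"

      "github.com/miekg/dns"
  )

  // handle answers every A query in the zone with a fixed address.
  func handle(w dns.ResponseWriter, r *dns.Msg) {
      m := new(dns.Msg)
      m.SetReply(r)
      for _, q := range r.Question {
          if q.Qtype == dns.TypeA {
              m.Answer = append(m.Answer, &dns.A{
                  Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA,
                      Class: dns.ClassINET, Ttl: 60},
                  A: net.ParseIP("192.0.2.1").To4(),
              })
          }
      }
      w.WriteMsg(m)
  }

  func main() {
      dns.HandleFunc("example.com.", handle) // placeholder zone
      log.Fatal(dns.ListenAndServe(":5353", "udp", nil))
  }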


> Since mid-September, twenty of the name servers have been running the new software and except for a bit of trouble on i386 (32-bit) and low-memory systems, it's been running very smoothly.

So is http://code.google.com/p/go/issues/detail?id=909 a real-world problem?


If 32-bit, low-memory machines are part of your world, I would say obviously yes.


4GB of addressable memory per process is hardly low memory; x32 was recently merged into Linux, so I doubt it's going away anytime soon.


How can this even exist without generics, template meta-programming, or a Turing-complete type system? I doubt that they know what they're doing.


The first time I read one of these comments of yours on reddit [1], I thought it was sincere; replies to it suggested I was not alone, though I was surprised that one was on your side.

So, I guess your sarcasm was a bit too subtle. This one is better :)

[1] http://www.reddit.com/r/programming/comments/10zsz4/i_wrote_...


FYI, I'm the guy who replied to that; I was really interested to see how the OP of that post got around it.


Oh, and don't forget "Go Error Handling Sucks".


Touché!


For people searching for performance benchmarks like me, here are the numbers I dug out of the post (http://geo.bitnames.com):

> If you need less than 200 DNS lookups per second per DNS server, the Perl version is fine and might be a little easier to set up. The Go version is much faster (in production we've seen it do 5-6000 requests a second on commodity hardware and even virtual servers).

Is 6000 rps a high number for a DNS server? Let me do a back-of-the-envelope calculation: assuming UDP and a response size of about 100 bytes, 6000 rps x 100 bytes x 8 bits is roughly 4.8 Mbit/s. Even if we assume the requests are the same size as the responses, the throughput is still only about 9.6 Mbit/s, far from saturating the network pipe. It sounds to me like there is still room for improvement.
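
A quick sanity check of that arithmetic (the 100-byte message size is just an assumption):

  package main

  import "fmt"

  func main() {
      const qps = 6000     // requests per second, from the post
      const msgBytes = 100 // assumed size of one DNS message
      oneWay := float64(qps*msgBytes*8) / 1e6 // Mbit/s
      fmt.Printf("responses only:     %.1f Mbit/s\n", oneWay)   // ~4.8
      fmt.Printf("requests + replies: %.1f Mbit/s\n", 2*oneWay) // ~9.6
  }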


High-performance DNS software can handle well over 100k QPS on a single core on commodity hardware, but that's not an apples-to-apples comparison, since the benchmarks I'm remembering didn't include any weighting or geo-distribution special sauce.


Was it implied that the 6000 requests a second was the peak performance of the software, or was it just the highest usage they've seen?


5-6000 requests was just the number I remembered seeing for sure; I haven't tried pushing the software to get a benchmark number.

It was just on one core (all the production servers are currently using just one core for the geodns process), but the response time was similar to idle load, so I think there's lots of headroom.

The monitoring doesn't log data at one-second granularity and the traffic is very bursty, so the number was just from staring at the real-time monitoring dashboard I have.


I suppose it's the latter. That number is pretty much hardware-specific, though, so it would tell us a bit more if they mentioned the hardware profile.


The DNS servers are (mostly) virtual machines on all sorts of hardware. In many cases I don't actually know the exact hardware specs or how much other load the boxes have.

http://www.pool.ntp.org/dns-server.html


I'm in love with Go. The more I use it, the more I fall in love with it.


I've seen a number of near-identical comments, but they're not interesting unless you expand on why.


In no particular order:

   - new, shiny, toy (boys will be boys)
   - sufficiently different (allows for think different moments)
   - sufficiently subtle (does promote deep think moments)
   - bipolars are sexy (Go is seriously bipolar.)
   - pedigree (Go is something of a pedagogue)


Source code linked by the OP: https://github.com/abh/geodns/


Pretty neat. Hopefully this will get some traction.

Thanks for sharing.


[deleted]


It is a well-known practice. In this case, you can easily load the same configuration and replay the same queries on the Perl and Go servers, then compare: that way there is only a single component to test. Had the author selected a different configuration format, he would also have had to check that the configuration was correctly created and loaded. It also allowed the author to put a single server into production as a load test once the preliminary tests gave good results. (A sketch of such a replay check is below.)
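
A minimal sketch of that kind of replay check using the miekg/dns client (the server addresses and query name here are made up for illustration):

  package main

  import (
      "fmt"
      "log"

      "github.com/miekg/dns"
  )

  func main() {
      // hypothetical addresses for the old (Perl) and new (Go) servers
      oldAddr, newAddr := "198.51.100.1:53", "198.51.100.2:53"

      q := new(dns.Msg)
      q.SetQuestion("0.pool.ntp.org.", dns.TypeA)

      c := new(dns.Client)
      oldR, _, err := c.Exchange(q, oldAddr)
      if err != nil {
          log.Fatal(err)
      }
      newR, _, err := c.Exchange(q.Copy(), newAddr)
      if err != nil {
          log.Fatal(err)
      }
      // with weighted/geo answers an exact match isn't expected,
      // so start by comparing rcodes and answer counts
      fmt.Println(oldR.Rcode, len(oldR.Answer))
      fmt.Println(newR.Rcode, len(newR.Answer))
  }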


It also makes it much easier to roll back to the previous implementation. If he had created a new configuration file format, he'd have had to keep the old and new config files in sync until the new implementation had proven itself. Speaking from similar experience, that's no fun.


You read it wrong.

Try this: the new software reads the old config file format, so we can stagger the roll-out (install the new code on some servers while keeping the old code on others) and expect things to keep humming along with no downtime and a simple fallback procedure if we see any problems. The new code should perform better than the old code; that's why we wrote it.


Why use such a badly performing server in the first place? Even 6000 qps is not all that good for a DNS server, actually.

Even if you do the geo lookup on a per-/24 basis, surely that doesn't slow it down that badly?


It's fast enough for now, so investing more time in making it faster would just be spinning wheels.

I haven't done detailed profiling, but the geoip lookup is pretty fast. I think more time is spent picking which IPs to return (a weighted selection from a list of sometimes thousands of IPs), and likely more time than that is spent in the underlying DNS library (which hopefully will get better over time without me doing anything!). A sketch of that kind of weighted pick is below.
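
Something like this is roughly what a weighted pick looks like; the names and weights here are hypothetical, not geodns's actual code:

  package main

  import (
      "fmt"
      "math/rand"
  )

  type server struct {
      IP     string
      Weight int // assumed positive
  }

  // pickWeighted returns one server with probability proportional
  // to its weight, in a single O(n) pass over the list.
  func pickWeighted(servers []server) server {
      total := 0
      for _, s := range servers {
          total += s.Weight
      }
      n := rand.Intn(total)
      for _, s := range servers {
          n -= s.Weight
          if n < 0 {
              return s
          }
      }
      return servers[len(servers)-1] // not reached with positive weights
  }

  func main() {
      pool := []server{{"192.0.2.1", 3}, {"192.0.2.2", 1}}
      fmt.Println(pickWeighted(pool).IP)
  }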

Both the Perl and the Go versions are optimized for developer time, correctness, and robustness rather than raw performance. I only have so much time to work on it, and lots of you depend on it working, so I think those are appropriate trade-offs.

If it were a full-time job rather than a hobby, maybe it'd make sense to do the work in C instead of Go (but probably not).



