
It most likely does, but blink and you'll miss it.



The answer is: it is complicated.

POSIX:2001 specifies:

"As represented in seconds since the Epoch, each and every day shall be accounted for by exactly 86400 seconds."
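A quick way to see this rule in action: a sketch in Python, where `datetime.timestamp()` yields POSIX time. 2016-12-31 actually contained 86401 SI seconds (it ended with a leap second), but the epoch counter still advances by exactly 86400 across it.

```python
import datetime

# POSIX time accounts for every day with exactly 86400 seconds,
# so a day that really had 86401 SI seconds (2016-12-31 ended with
# a leap second) still moves the epoch counter by only 86400.
day_start = datetime.datetime(2016, 12, 31, tzinfo=datetime.timezone.utc)
next_day = datetime.datetime(2017, 1, 1, tzinfo=datetime.timezone.utc)

elapsed = next_day.timestamp() - day_start.timestamp()
print(elapsed)  # 86400.0, not 86401.0 -- the leap second is invisible
```

In other words, the leap second is absorbed: Unix timestamps are a calendar encoding, not a count of elapsed SI seconds.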

And that's typically no problem for normal use. Specialists know that there are different time standards and that for the "real" number of elapsed seconds one has to use TAI, not UTC.
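Concretely, converting a Unix (UTC-based) timestamp to elapsed TAI seconds just means adding the accumulated leap seconds. A minimal sketch: the function name `unix_to_tai_seconds` is made up here, and the constant 37 (TAI − UTC, in effect since 2017-01-01) has to come from a leap-second table such as IERS Bulletin C, since the system clock doesn't know it.

```python
# TAI - UTC offset in seconds; 37 has been the value since 2017-01-01.
# In real code this must be looked up in a leap-second table (e.g. the
# one distributed with tzdata), not hard-coded.
TAI_MINUS_UTC = 37

def unix_to_tai_seconds(unix_ts: float) -> float:
    """Elapsed TAI-based seconds for a post-2017 Unix timestamp (sketch)."""
    return unix_ts + TAI_MINUS_UTC

print(unix_to_tai_seconds(1600000000))  # 1600000037
```

This only holds between leap-second insertions; a correct implementation consults the full table for timestamps on either side of an insertion.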

The problem in the Unix world with NTP and the datetime algorithms was that some programmers believed they had to actually see the leap second in their own kernel timestamps, to the point of the kernel intentionally producing discontinuities in kernel time (behavior that never made sense for timestamping purposes but was implemented anyway). So now we have configuration variations like this:

https://access.redhat.com/articles/15145

and, to avoid Linux kernel discontinuities:

https://developers.google.com/time/smear
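The idea behind such a smear is simple: instead of a one-second jump, the correction is spread linearly over a long window (Google uses 24 hours, noon to noon around the leap). A sketch of the offset calculation, under the assumption of a linear smear; `smeared_offset` and its parameters are illustrative names, not an API from the linked page:

```python
def smeared_offset(t: float, leap_start: float,
                   leap_seconds: float = 1.0,
                   window: float = 86400.0) -> float:
    """Fraction of the leap correction applied at time t (sketch).

    Spreads `leap_seconds` linearly over `window` seconds starting
    at `leap_start`; a smeared clock reports t minus this offset.
    """
    if t <= leap_start:
        return 0.0
    if t >= leap_start + window:
        return float(leap_seconds)
    return leap_seconds * (t - leap_start) / window

# Halfway through a 24 h smear, half the leap second has been applied.
print(smeared_offset(100.0 + 43200.0, leap_start=100.0))  # 0.5
```

Every clock tick during the window is slightly shorter than an SI second, so time stays monotonic and no process ever sees a discontinuity.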

In fact, the smoothing of UTC and using TAI for those who need the "real number of seconds" since point x was known as the reasonable approach long ago:

https://www.cl.cam.ac.uk/~mgk25/time/utc-sls/draft-kuhn-leap...

Now it's clearer why it's complex: too many people locally "assumed" what should not have been assumed and didn't understand the effects of their local decisions in the global context.

Hopefully "smearing" will get standardized and accepted, and nobody will have to care except the specialists who really need TAI. Leap-second corrections should be invisible for normal uses, just as nobody cares about the much bigger corrections routinely applied to clocks.





