How Google handles leap seconds (googleblog.blogspot.com)
138 points by cientifico on June 30, 2012 | 28 comments



Does anyone know what the original reasoning was for Unix timestamps to not account for leap seconds? So that one timestamp can actually point to two physical times a second apart?

I mean, I know leap seconds aren't scheduled, and it's convenient to find a day by dividing by 86400, but it really seems like "physical seconds since the epoch" is the "fundamental" amount of time, as opposed to physical days, and the function that calculates datetimes (including time zones and DST) could just handle the leap seconds too.

It's obviously not changing now; I was just wondering about the historical context of it. It seems like Unix time and leap seconds both come from the beginning of the 1970s... was Unix time defined before the concept of leap seconds existed?


From the Wikipedia article on Leap Seconds:

> Because the Earth's rotation speed varies in response to climatic and geological events, UTC leap seconds are irregularly spaced and unpredictable. Insertion of each UTC leap second is usually decided about six months in advance by the International Earth Rotation and Reference Systems Service (IERS)

I don't see how applications could make unix second <-> day conversions without downloading a map from the IERS, if leap seconds were included.
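
Rough sketch of what I mean (Python; the leap table below is made up for illustration, not the real IERS list):

    import datetime

    SECONDS_PER_DAY = 86400

    # POSIX convention: every day has exactly 86400 seconds, so the
    # calendar day falls out of a single integer division.
    def posix_day(ts):
        return datetime.date(1970, 1, 1) + datetime.timedelta(days=ts // SECONDS_PER_DAY)

    # If timestamps counted physical seconds instead, you'd first have to
    # subtract every leap second inserted before the timestamp, data that
    # only exists as IERS announcements.
    LEAP_INSERTIONS = [78796800, 94694400, 126230400]  # placeholder values

    def physical_day(ts):
        leaps = sum(1 for boundary in LEAP_INSERTIONS if ts > boundary)
        return posix_day(ts - leaps)

    print(posix_day(1341100800))  # 2012-07-01, by pure arithmetic

The second version has to chase a table that grows every time the IERS makes an announcement; the first never does.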


The real solution is not to use broken NTP: http://cr.yp.to/proto/utctai.html


I was disappointed to read that their solution didn't involve switching to TAI. The kernel should use seconds since epoch and leap seconds should be a user space issue, just like timezones.
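
Sketch of what I mean (Python; a single made-up leap entry stands in for the real table, and the uniform tick count is hypothetical): the kernel hands out a uniform count of seconds, and a user-space library renders UTC from it, including the 23:59:60 that time_t can't represent.

    import datetime

    # Hypothetical table, in the uniform count, of ticks that are inserted
    # leap seconds. Real code would carry the full published list.
    LEAP_INSERTIONS = [1341100800]

    def uniform_to_utc(ts):
        leaps_before = sum(1 for t in LEAP_INSERTIONS if ts > t)
        if ts in LEAP_INSERTIONS:
            # We are inside an inserted second: render it as :60.
            base = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=ts - leaps_before - 1)
            return base.strftime("%Y-%m-%d %H:%M:") + "60"
        base = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=ts - leaps_before)
        return base.strftime("%Y-%m-%d %H:%M:%S")

    for t in (1341100799, 1341100800, 1341100801):
        print(t, uniform_to_utc(t))  # 23:59:59, then 23:59:60, then 00:00:00

Updating that table would be a routine data update, like a tzdata release, rather than a kernel or NTP concern.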


This method lets you isolate the changes to this level of the stack; everything above it can make the convenient, if slightly incorrect, assumptions about time.


How is handling a leap second any different from dealing with Daylight Savings Time, when a whole hour can skip or repeat itself? Wouldn't you just use the same logic?

Or is it the fact that servers tend to ignore DST, being set to GMT and using timezone+DST only for datetime rendering/parsing, like Unix timestamps? While leap seconds actually affect the clock itself?


Yep, it's the latter. Leap seconds actually affect time_t values, whereas daylight savings does not.

I think it's simpler to think of time_t (or "unix time") as independent of any time zone. It's the number of seconds since an arbitrary "epoch" that happened simultaneously everywhere in the world. It so happens that the epoch happened at midnight GMT.

Of course it's not literally the number of seconds since the epoch because of leap seconds.


It is literally the number of seconds because of leap seconds :)


Huh? When a leap second is added, a day has 86401 seconds, but time_t says it has only 86400. So the true number of seconds since the epoch drifts from time_t every time a leap second is added.
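
Concretely (Python, using the timestamps around the 2012-06-30 leap second from this article's context):

    import calendar

    # POSIX timestamps for the instants just before and just after the
    # leap second inserted at the end of 2012-06-30.
    before = calendar.timegm((2012, 6, 30, 23, 59, 59))
    after = calendar.timegm((2012, 7, 1, 0, 0, 0))

    print(after - before)  # 1, even though 23:59:60 passed in between,
                           # i.e. 2 physical seconds elapsed

So time_t falls one more second behind the true count at every insertion.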


> servers tend to ignore DST, being set to GMT ... While leap seconds actually affect the clock itself?

Yes.


Former discussion: http://news.ycombinator.com/item?id=3002009

A much longer discussion, on a different link, from 16 days ago: http://news.ycombinator.com/item?id=4112002


There is a dead link in the blog entry to the Chubby paper; it's available at http://research.google.com/archive/chubby.html


Markus Kuhn proposed something similar:

"UTC with Smoothed Leap Seconds (UTC-SLS)": http://www.cl.cam.ac.uk/~mgk25/time/utc-sls/


I hate that the header stays fixed and takes up like half the screen.


Yeah, I never thought I'd see the day I'd have to view -> page style -> no style on a Google site.


Strange, mine condenses into a small line at the top as soon as I scroll. (Chrome on OSX)


It seems to me that the problem is in all kinds of code that relies on time to do some critical operation when it really should not. Time is for people.

Computers should use a separate "time" that only moves forward. A numbered pulse.


I wonder why it's a cosine-based formula, rather than linear.


Cosines are smooth at every level of differentiation.


Maybe they chose it because they wanted to "ease" in the change to prevent the machines from distrusting the time server?


For smoothness, I presume, to avoid a sudden change in the rate of time passage at the start and end of the day.


I don't follow -- wouldn't a linear approach yield a maximally even rate of adjustment?


No, with a purely linear ramp the clock rate would change abruptly at the beginning and end of the window where the smear occurs. I think the goal here is a kind of "ease-in-out" transition.
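
Quick numeric sketch (Python; a cosine ramp of the kind the post describes, but the window length here is my guess):

    import math

    W = 10 * 3600.0   # assumed smear window, in seconds

    def cosine_smear(t):
        # Offset ramps smoothly from 0 to a full second over the window.
        return (1.0 - math.cos(math.pi * t / W)) / 2.0

    def linear_smear(t):
        return t / W

    # Extra offset accumulated during the first second of the window,
    # i.e. how abruptly the clock rate changes at the boundary:
    print(cosine_smear(1.0) - cosine_smear(0.0))  # ~2e-9 s: rate eases in from zero
    print(linear_smear(1.0) - linear_smear(0.0))  # ~2.8e-5 s: rate jumps immediately

With the cosine, the rate of adjustment starts and ends at zero, so machines never see a sudden change in how fast their upstream clock runs.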


Is that part of Google's "TrueTime" project? I heard about it at a Google Spanner presentation. They use GPS receivers in their DCs to get an exact time.


(2011)


Has this been contributed back to the community?


Yes. I don't know about code, but it's available in the form of an article that is detailed enough to implement it in any system.


Smart. Spoiler/summary: they use a "leap smear" to keep code logic from breaking.

Instead of making code encounter the same second twice or miss a second entirely, they smear the extra second over several hours beforehand through the central time server; by the time the leap second comes you're already sufficiently ahead/behind. (My comment: this works because the clock's granularity and accuracy aren't guaranteed at that level anyway, so no code can rely on them. Therefore, if code is correct without the smear it will be correct with the smear.)
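
Toy check of that last claim (Python; cosine shape per the post, window length assumed; whether the offset is added or subtracted doesn't change the point):

    import math

    W = 10 * 3600.0   # assumed smear window, in seconds

    def smeared(t):
        # Reported time = true elapsed time plus a smooth offset that has
        # absorbed the whole leap second by the end of the window.
        return t + (1.0 - math.cos(math.pi * t / W)) / 2.0

    samples = [smeared(t) for t in range(0, int(W) + 1, 60)]
    assert all(b > a for a, b in zip(samples, samples[1:]))  # strictly monotonic
    print(smeared(W) - W)  # 1.0: the full second, with no repeated or skipped second

So clients see a clock that always moves forward at very nearly the right rate, never one that steps.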



