Does anyone know what the original reasoning was for Unix timestamps to not account for leap seconds? So that one timestamp can actually point to two physical times a second apart?
I mean, I know leap seconds aren't scheduled far in advance, and it's convenient to find a day by dividing by 86400, but it really seems like "physical seconds since the epoch" is the "fundamental" measure of time, as opposed to physical days, and the function that calculates datetimes (including time zones and DST) could just handle the leap seconds too.
It's obviously not changing now; I was just wondering about the historical context. It seems like Unix time and leap seconds both date from the beginning of the 1970s... was Unix time defined before the concept of leap seconds was?
> Because the Earth's rotation speed varies in response to climatic and geological events, UTC leap seconds are irregularly spaced and unpredictable. Insertion of each UTC leap second is usually decided about six months in advance by the International Earth Rotation and Reference Systems Service (IERS)
I don't see how applications could convert between Unix seconds and days without downloading a leap second table from the IERS, if leap seconds were included.
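To make that concrete, here's a rough Python sketch of the conversion under today's POSIX rules versus what it would need if leap seconds were counted. The leap second list is illustrative only, not a real IERS table:

    SECONDS_PER_DAY = 86400

    def posix_day(posix_timestamp):
        # Day number since the epoch under POSIX rules: pure arithmetic,
        # because every POSIX day is exactly 86400 seconds long.
        return posix_timestamp // SECONDS_PER_DAY

    # Hypothetical alternative: if timestamps counted *physical* seconds,
    # every conversion would need a table of when leap seconds were inserted
    # (expressed here in physical seconds since the epoch). The values below
    # are illustrative, not a complete or authoritative table.
    LEAP_INSERTIONS = [78796801, 94694402]  # grows by one entry per leap second

    def physical_day(physical_timestamp):
        # Same conversion if leap seconds were counted: subtract however many
        # have been inserted so far, which requires consulting (and keeping
        # current) an externally published table.
        leaps = sum(1 for t in LEAP_INSERTIONS if t <= physical_timestamp)
        return (physical_timestamp - leaps) // SECONDS_PER_DAY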
I was disappointed to read that their solution didn't involve switching to TAI. The kernel should use seconds since epoch and leap seconds should be a user space issue, just like timezones.
This approach lets you isolate the changes to that level of the stack; everything higher up can keep making the convenient, if slightly incorrect, assumptions about time.
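For what it's worth, the Linux kernel can already expose both views if an NTP daemon feeds it the TAI offset. A quick sketch (Linux only, Python 3.9+; this is just an illustration, not necessarily what the article describes):

    import time

    # CLOCK_TAI is CLOCK_REALTIME plus the kernel's TAI-UTC offset (37 s since
    # the 2017 leap second) -- but only if a daemon such as chrony or ntpd has
    # actually set that offset; on an unconfigured box the two clocks agree.
    utc_like = time.clock_gettime(time.CLOCK_REALTIME)  # POSIX time: leap seconds folded in
    tai_like = time.clock_gettime(time.CLOCK_TAI)       # uniform count, no leap handling
    print("kernel's idea of TAI - UTC: %.0f s" % (tai_like - utc_like))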
How is handling a leap second any different from dealing with Daylight Saving Time, when a whole hour can be skipped or repeated? Wouldn't you just use the same logic?
Or is it the fact that servers tend to ignore DST, being set to GMT and applying timezone+DST only when rendering/parsing datetimes, like Unix timestamps, while leap seconds actually affect the clock itself?
Yep, it's the latter. Leap seconds actually affect time_t values, whereas daylight savings does not.
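A quick illustration of the difference (Python 3.9+ with zoneinfo; the timestamps happen to fall on the night US DST ended in 2023):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # DST only affects rendering. These two time_t values are a real hour
    # apart, yet both display as 01:30 on the wall clock in New York on
    # 2023-11-05; the underlying numbers never change. A leap second, by
    # contrast, changes how time_t itself is counted.
    ny = ZoneInfo("America/New_York")
    for t in (1699162200, 1699165800):   # 05:30 and 06:30 UTC that morning
        print(t, datetime.fromtimestamp(t, tz=ny))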
I think it's simpler to think of time_t (or "unix time") as independent of any time zone. It's the number of seconds since an arbitrary "epoch" that occurred simultaneously everywhere in the world. It so happens that the epoch was midnight GMT.
Of course it's not literally the number of seconds since the epoch because of leap seconds.
Huh? When a leap second is added, a day has 86401 seconds, but time_t says it has only 86400. So the true number of seconds since the epoch drifts from time_t every time a leap second is added.
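Concretely, here is the formula POSIX itself prescribes for "seconds since the epoch", applied to the leap second at the end of 2016 (a small Python check):

    def posix_time(year, yday, hour, minute, sec):
        # The POSIX formula (tm_year is years since 1900, tm_yday is the
        # zero-based day of the year).
        tm_year = year - 1900
        return (sec + minute * 60 + hour * 3600 + yday * 86400
                + (tm_year - 70) * 31536000
                + ((tm_year - 69) // 4) * 86400
                - ((tm_year - 1) // 100) * 86400
                + ((tm_year + 299) // 400) * 86400)

    # Two distinct physical instants, one second apart, land on the same
    # number -- which is why the true seconds-since-epoch count gains a
    # second on time_t at every insertion.
    print(posix_time(2016, 365, 23, 59, 60))  # 2016-12-31 23:59:60 UTC -> 1483228800
    print(posix_time(2017, 0, 0, 0, 0))       # 2017-01-01 00:00:00 UTC -> 1483228800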
It seems to me that the problem is in all kinds of code that relies on time to do some critical operation when it really should not. Time is for people.
Computers should use a separate "time" that only moves forward. A numbered pulse.
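Something like that mostly exists already in the form of monotonic clocks; a minimal sketch:

    import time

    # A monotonic clock only moves forward and ignores wall-clock adjustments
    # (NTP steps, leap handling, manual changes). Good for measuring durations
    # and ordering events; useless for telling humans the date.
    start = time.monotonic()
    time.sleep(0.5)                       # stand-in for the "critical operation"
    elapsed = time.monotonic() - start    # correct even if the wall clock jumps
    print("elapsed: %.3f s" % elapsed)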
No; a purely linear smear would have rate discontinuities at the beginning and end of the smear window. I think the goal here is a kind of "ease-in-out" transition.
Is that part of Google's "TrueTime" project? I heard about it at a Google Spanner presentation. They use GPS receivers in their DCs to get an exact time.
Smart. Spoiler/summary: they use a "leap smear" to keep code logic from breaking.
Instead of making code encounter the same second twice or skip a second entirely, they smear the extra second over several hours beforehand through the central time server; by the time the leap second arrives you're already sufficiently ahead/behind. (My comment: this works because the served time was never that precise to begin with, so no correct code can rely on it at that level. Therefore, if code is correct without the smear, it will be correct with the smear.)
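If anyone's curious, here's roughly what a smear function can look like. This is my own sketch, not Google's actual implementation, and the 20-hour window is made up; the cosine shaping is the "ease-in-out" mentioned upthread:

    import math

    # Instead of repeating a second, the time server gradually absorbs up to
    # one extra second ahead of the leap, with a cosine ramp so the clock
    # rate has no sudden jumps at the window edges.
    WINDOW = 20 * 3600  # hypothetical smear window, in seconds

    def smeared_fraction(seconds_until_leap):
        # Fraction of the extra second already absorbed: 0 before the window,
        # 1 once the leap second has fully arrived, smooth in between.
        if seconds_until_leap >= WINDOW:
            return 0.0
        if seconds_until_leap <= 0:
            return 1.0
        progress = 1.0 - seconds_until_leap / WINDOW
        return 0.5 * (1.0 - math.cos(math.pi * progress))  # smooth 0 -> 1

    def smeared_time(true_elapsed_seconds, leap_instant):
        # What the smearing time server reports: true elapsed seconds minus
        # the portion of the leap second smeared in so far. Monotonic, never
        # repeats a value, and agrees with ordinary clocks outside the window.
        return true_elapsed_seconds - smeared_fraction(leap_instant - true_elapsed_seconds)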