Hacker News new | past | comments | ask | show | jobs | submit login

From [1], it is likely messier than that:

> The time() function will resolve to the system time of that server. If the server is running an NTP daemon then it will be leap second aware and adjust accordingly. PHP has no knowledge of this, but the system does.

> It took me a long time to understand this, but what happens is that the timestamp increases during the leap second, and when the leap second is over, the timestamp (but not the UTC!) jumps back one second.

[1] http://stackoverflow.com/questions/7780464/is-time-guarantee...
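The "jumps back one second" behaviour follows directly from how POSIX defines time_t. A minimal Python sketch of the arithmetic around the 2016 leap second (the timestamp values are the well-known POSIX values for those instants):

```python
import calendar

# POSIX time assigns every UTC day exactly 86400 seconds, so the leap
# second 2016-12-31T23:59:60Z has no timestamp of its own: the count
# effectively repeats a second at midnight.
t_before = calendar.timegm((2016, 12, 31, 23, 59, 59, 0, 0, 0))
t_after = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))
print(t_before, t_after)  # 1483228799 1483228800
assert t_after - t_before == 1  # but two real seconds elapsed between them
```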




It would seem much simpler to just use TAI for timestamps, and get civil datetimes from UTC. Why didn't people do this? Why use UTC for timestamps??


That doesn't seem simpler at all.

- UTC timestamps can unambiguously refer to times years in the future; TAI timestamps cannot, because it is unknown how many seconds will be in each year.

- Converting UTC timestamps to human-readable UTC times is simple modular arithmetic. A beginner in any programming language can do it. Converting TAI to UTC requires a lookup table, and it must be updated after the software is released.
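The asymmetry in the second point can be sketched in Python. The leap-second table below is truncated and illustrative; real values come from IERS Bulletin C:

```python
# UTC-based POSIX timestamp -> human-readable time of day: pure modular
# arithmetic, precisely because leap seconds are ignored.
def hms(ts):
    secs = ts % 86400
    return (secs // 3600, (secs % 3600) // 60, secs % 60)

print(hms(1483228799))  # (23, 59, 59)

# TAI -> UTC needs a lookup table of (UTC timestamp, TAI-UTC offset)
# pairs, and the table must be updated after the software ships.
# Truncated, illustrative entries:
LEAP_TABLE = [(63072000, 10), (1435708800, 36), (1483228800, 37)]

def tai_to_utc(tai_ts):
    offset = 10
    for boundary, off in LEAP_TABLE:
        if tai_ts - off >= boundary:
            offset = off
    return tai_ts - offset
```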

What would be simpler would be ending the use of leap seconds for a millennium or so.


TAI timestamps can unambiguously refer to times years in the future. You can subtract the TAI timestamp from the current time, set an alarm for that number of seconds and it will actually occur on cue.

TAI timestamps don't refer unambiguously to UTC timestamps or calendar dates in the future, because the latter two depend on the variable rotation of the earth and (for zoned times) geopolitical whimsy.

I don't see why this matters though - most "timestamps" are for events in the past. The proper representation for events in the future will depend on your application (e.g. are you writing a calendar for humans, or a spacecraft guidance system - does the event happen at a fixed point in time or at a fixed point in the human work day?).


Think through your proposal to track all times in TAI and then convert them for display. What TAI time do you pick to represent the time that will be displayed as "00:00:00 on January 1, 2020"? Are you going to be okay with it changing to "23:59:58 on December 31, 2019" due to the geopolitical whimsy you mention? What if you just wanted it to represent the day "January 1, 2020"? Do you not use timestamps for anything but exact seconds anymore?

UTC doesn't know how long a second is, and TAI doesn't know how long a day or a year is. But most people need to specify "10 years from now" more often than they need to specify "300 megaseconds from now".

Banning leap seconds would be fine. UTC leap seconds are messy but at least we get by. But updating all time conversion software (instead of just updating authoritative clocks) every time there's a leap second is ludicrous.

It's true that applications should use the proper representation for what they're intended to do. And they do. Most applications use UTC because they describe human-centric events. Astronomers use TAI.


"00:00:00 on January 1, 2020" UTC is a rather useless timestamp for most practical purposes, unless you live close to th zero meridian. In most places you want to talk about 00:00:00 on January 1, 2020 localtime, which depends on what timezone you are in, and of course the daylight savings rules. Both can change. In some countries, daylight seems to be even more random than leap seconds.

In my opinion, the real problem is that TAI is not an option in most current systems. There is no way to get the time in TAI, no way to convert between TAI and UTC etc.

So even in applications where it makes sense to use TAI (think logging and billing) we don't do that because the necessary infrastructure is not available.

I think it is time that the technical community gets together and makes TAI a first-class citizen.

TAI doesn't help with timestamps in the future, but usually those applications don't need second level granularity anyhow. The applications that break during a leap second are the ones that need to track the current time or passage of time with sub-second accuracy. And those can be served perfectly with TAI.
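For what it's worth, Linux does expose a TAI clock (CLOCK_TAI, reachable from Python 3.9+ on Linux), though it is only meaningful if something like chrony or ntpd has loaded a leap-second table into the kernel. A hedged sketch:

```python
import time

# CLOCK_TAI only differs from CLOCK_REALTIME if the kernel's TAI offset
# has been set (e.g. by chrony/ntpd from a leap-second table); on an
# unconfigured machine the reported offset is often 0 rather than 37.
if hasattr(time, "CLOCK_TAI"):
    tai = time.clock_gettime(time.CLOCK_TAI)
    utc = time.clock_gettime(time.CLOCK_REALTIME)
    print("kernel TAI-UTC offset:", round(tai - utc))
else:
    print("CLOCK_TAI not available on this platform")
```

Which rather proves the point: the mechanism exists, but the surrounding infrastructure to make it reliable mostly doesn't.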


"00:00:00 on January 1, 2020, local time", if it is to represent a calendar event in the human work day in the future, should be represented as neither a TAI "timestamp" nor a UTC "timestamp", but as a data type containing exactly "00:00:00 on January 1, 2020, local time".

> But updating all time conversion software (instead of just updating authoritative clocks) every time there's a leap second is ludicrous.

All time conversion software is already updated every time a government changes a time zone - by downloading the most recent tzdata. All software that needs second-level granularity is constantly updated, by synchronizing with NTP. There's nothing at all ludicrous about distributing leap second tables instead of mutilating the NTP time signal.
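And the table in question is tiny and trivially parsed. The two sample lines below are real entries in the IERS/NIST leap-seconds.list format (seconds since the 1900 NTP epoch, then the TAI-UTC offset that takes effect):

```python
# Two real entries from the leap-seconds.list format: 1972-01-01 (offset
# becomes 10 s) and 2017-01-01 (offset becomes 37 s).
SAMPLE = """\
# comment lines start with '#'
2272060800 10
3692217600 37
"""

NTP_TO_POSIX = 2208988800  # gap between the 1900 NTP epoch and the 1970 POSIX epoch

def parse_leaps(text):
    table = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        ntp, offset = line.split()[:2]
        table.append((int(ntp) - NTP_TO_POSIX, int(offset)))
    return table

print(parse_leaps(SAMPLE))  # [(63072000, 10), (1483228800, 37)]
```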


"- UTC timestamps can unambiguously refer to times years in the future; TAI timestamps cannot, because it is unknown how many seconds will be in each year."

Isn't it the other way around, since TAI has no leap seconds?

"What would be simpler would be ending the use of leap seconds for a millennium or so."

That effectively means using TAI consistently, which is what software not aware of leap seconds would be doing anyway (despite the fact that it's actually working with UTC).


TAI has no leap seconds, but years, as currently defined, do.

And yes, I'm advocating for changing that definition, and making UTC a constant offset from TAI that works the same as TAI for the foreseeable future.


Read what I wrote carefully. For things that you need to convert to human-readable dates, by all means use UTC.

But for timestamps that are used to compute a time difference, TAI should be readily available. Most timestamps are for figuring out time differences here on Earth, not relative to astronomical events; they are for knowing what came before what, etc. They are not for generating human-readable dates. It's silly that such a major use case is hardly implemented on major systems, and instead an unreliable Unix time is used, which can "at any moment" have the same second twice.
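In the meantime, the usual workaround for the "same second twice" problem when measuring durations is a monotonic clock, which, like TAI, never repeats or steps backwards - though its readings are only meaningful within one boot of one machine:

```python
import time

# time.monotonic() is immune to leap seconds and NTP steps, so it is
# safe for measuring elapsed time - but its absolute value means
# nothing outside the current boot, unlike a real TAI timestamp.
start = time.monotonic()
time.sleep(0.05)
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f} s")
```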



