UTC Is Enough for Everyone, Right? (zachholman.com)
757 points by bpierre on May 29, 2018 | 304 comments



So the article seems to imply you should store all timestamps as UTC (with an additional timezone string ID). But for events in the future that need to happen at a specific "wall clock point in time", it might be better to actually store the yyyy-mm-dd hh:mm:ss as a string with a timezone next to it, because timezones can and do change, often unpredictably. If you pre-calculate what "4.00pm next August 1st" is as a UTC timestamp today, and the timezone rules are updated between now and then, your UTC timestamp may end up being incorrect. I guess you could have an additional "precalculated-utc-timestamp" column but regularly re-calculate it from the "yyyy-mm-dd hh:mm:ss + timezoneID" (especially when you upgrade your Olson tzdata).


We spent weeks on this for our new conference calling app: making sure that when a user says "Set up a call at 10am for my group every week", come October 29th 2018 the call takes place at 10am, not 9am, following a DST change.

After a heckuva lotta research and reading we determined that we needed to store the scheduled meeting time using two values: the local datetime and the desired timezone, e.g.

    scheduledAt:     2018-05-29T23:41:16.167
    scheduledAtZone: Europe/London

Everything else in the system, like created, updated, and ended data, is stored using UTC and converted on the client using the user's stored time zone preference, e.g.

    createdAt: 2018-05-29T22:41:16.167Z

The application server your app is running on of course needs to run on Etc/UTC, but with that in place (so far... touch wood) we haven't had a problem in our usage.
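
A minimal sketch of that scheme in java.time (the names are just the ones from this comment): persist the two values verbatim, and resolve them to an instant only when the call actually needs to fire, so a tzdata update in the meantime is picked up automatically.

    import java.time.Instant;
    import java.time.LocalDateTime;
    import java.time.ZoneId;

    class ScheduleSketch {
        public static void main(String[] args) {
            // Stored verbatim in the database:
            LocalDateTime scheduledAt =
                    LocalDateTime.parse("2018-05-29T23:41:16.167");
            ZoneId scheduledAtZone = ZoneId.of("Europe/London");

            // Resolved lazily, at fire time, against the current tz rules:
            Instant fireAt = scheduledAt.atZone(scheduledAtZone).toInstant();
            System.out.println(fireAt); // 2018-05-29T22:41:16.167Z under today's rules
        }
    }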


This kind of thing becomes a lot of fun when you're trying to schedule a regular international call between e.g. the US and the UK, where the offset in hours changes four times per year because their daylight saving adjustments are not simultaneous.

When I say "becomes a lot of fun" what I actually mean is that this observation lets us understand very quickly that the problem is not generally solvable. One way or another, somebody's 9 o'clock meeting is going to suddenly take place at 10. Having accepted that, we can push through the idealism and start working on actual solutions like the one you described - have the user attach the regularly scheduled meeting time specifically to the UK or to the US.


> [..] regularly scheduled meeting time specifically to the UK or to the US

very funny things happen when there is a shared physical resource, e.g. a booked meeting room, and you have a bunch of people scheduling meetings with different reference time zones.

ah, and if you think this creates havoc only for the few weeks while the TZs are out of sync, remember that in the southern hemisphere DST is applied in "reverse"


Oh, that's fun: "A change in the definition of a timezone has caused previously non-conflicting bookings to conflict."


Yep, time zones are lame and we’d be much better off if everyone just got used to saying the time in UTC. But then a “9 to 5” would become a “regular 8-hour shift” and be less fun to talk about, school might let out at 0300, and the times would vary when it’s dark / light out, which would probably be too much for the average joe to handle.


Everyone on UTC is really only useful for scheduling stuff internationally, and telling everyone to change their daily life just so you have it a bit easier doesn't seem like the right plan. Additionally, timezones aren't really the big issue; it's the changing of offsets due to DST and the like that causes the biggest problems. Getting rid of DST looks like a much better solution for everyone.


The worst thing about DST is that it breaks the linearity of time, allowing it to skip back and forth and causing all sorts of problems such as "missing" and "empty" hours.

In my country the train stops for one hour at the spring DST change and just waits for time to pass to account for the "missing" hour and not mess up the schedules by arriving one hour earlier.


Also depending where you live, it might become the next calendar day in mid-afternoon. Imagine: "Want to meet up Friday?" ... "You mean tomorrow afternoon, or the morning after that?"


Eh, people would keep saying 9-to-5 and post in Future-Reddit "TIL 9-to-5 jobs are called that because in the early 21st century 9am was defined as when the work-day started" or something else equally half-right.


Then someone else would reply that it was actually because Government workers traditionally finished their work day at 4:51pm.


well, technically the "funny" aspect I was talking about was what happens once the meeting room booking has conflicted and two groups of people clash, both claiming that their booking was more right (as opposed to "how funny this can happen" in the first place).

It's also interesting that since this doesn't happen frequently enough it's usually hard to develop a good way to get out of it.


I think I'd resolve it by saying that any bookings made in a timezone that has changed its definition will be cancelled if the new definition causes the booking to conflict with another (including the case where two or more timezones have changed simultaneously: all involved bookings attached to changing timezones are cancelled).

"The timezone you have used to define this booking has been altered, and the booking cannot be updated due to unavailability of the resource XXXX. Please choose a new time for this booking."


yeah, and given that such changes are usually well known in advance, the software could help deal with them in the least disruptive way!


> This kind of thing becomes a lot of fun when you're trying to schedule a regular international call

Heheh, exactly. We're currently debating how best to implement this since this one is more a UI issue than a UTC/Time issue.

For example if the user wants to schedule a call at 2pm New York time (UTC-4) we need to show them the consequences of dialling a participant in the UK (UTC+1) and even more so someone in Sydney (UTC+10).

If you use your ZonedDateTime classes carefully in your code (or equivalents if not using Java) then your app does take care of the DST changes. You can then show the scheduled time not just to the person creating the event but to every participant - ultimately the user has to decide what compromise to make when scheduling events across time zones and DST changes.
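
A rough java.time sketch of that (the zones here are just example participants): pin the event to one reference zone, then project the same instant into each participant's zone for display.

    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    class ParticipantView {
        public static void main(String[] args) {
            // The organiser's 2pm New York call:
            ZonedDateTime call = LocalDateTime.of(2018, 10, 29, 14, 0)
                    .atZone(ZoneId.of("America/New_York"));

            // The same instant on each participant's wall clock:
            System.out.println(call.withZoneSameInstant(ZoneId.of("Europe/London")));
            System.out.println(call.withZoneSameInstant(ZoneId.of("Australia/Sydney")));
        }
    }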


I live near a border crossing whose opening hours are set in the time zone on the other side.

My side doesn't observe DST. The other side does. The border keeps different Summer and Winter hours.

Trying to figure out when I need to leave to make sure I get to the border during its open hours to get through, and then when I need to leave (in local time on the other side) to get back through during the open hours is an extremely tedious problem.

Worst part is crossing the border is just me driving South. My longitude doesn't really change.


I've generally decided to just avoid scheduling calls for the working week before and after the DST switchover, because I never remember which of the UK and US switches first, and in which direction, and so many other people mess it up too. Of course, not everyone has that luxury of avoiding calls...


Well, Morocco changes its DST four times per year so that during Ramadan the night (and consequently lunchtime) falls earlier.

    Time is an illusion. 
    Lunchtime doubly so.
        — Douglas Adams


Yep, that's my conclusion too: for events in the past and up to the present time, store a Unix timestamp with an optional tzid, and re-calculate the local time formatting from scratch for display every time. For events in the future, do the opposite: store yyyymmddhhiiss + tzid, and re-calculate UTC from scratch for calculations every time. As a rule of thumb, at least.


Interesting. And at some point the future becomes the past and you have to switch the representation and freeze it. Not trivial.


You also need to periodically compute the next event, but you can't do it too far in advance. You have to do it close enough to the event that you can adjust to timezone definition changes. Hopefully you don't have to think about leap seconds.
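
A hedged sketch of that pattern (the 30-day horizon and the names are made up): keep the wall-clock spec as the source of truth and only materialize concrete triggers inside a short window, re-deriving them on every pass so tzdata updates get absorbed.

    import java.time.Duration;
    import java.time.Instant;
    import java.time.LocalDateTime;
    import java.time.ZoneId;

    class TriggerWindow {
        static final Duration HORIZON = Duration.ofDays(30); // arbitrary choice

        // Run periodically (e.g. daily, and after tzdata upgrades); returns a
        // concrete trigger only if the event falls inside the horizon.
        static Instant materialize(LocalDateTime wallClock, ZoneId zone) {
            Instant fireAt = wallClock.atZone(zone).toInstant(); // current rules
            return fireAt.isBefore(Instant.now().plus(HORIZON)) ? fireAt : null;
        }
    }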


Why would anybody schedule events in local time to happen on leap seconds?


If it's say a scheduler for an application, you might have an activity that occurs every x seconds. At a previous job events like that were often scheduled such that x was a different prime number for each task, in order to minimize regular simultaneous executions. A system like that could easily end up with tasks running on a leap-second.


Your example isn’t scheduling in local time but in system time.

In local time would be “my meeting starts at xxx”. Leap seconds should not be involved there, and neither should the system time. The users aren’t going to be glad you’re doing that.


To make sure you covered those edge cases correctly!


The best coverage is not introducing special cases where they aren’t relevant.


> store the scheduled meeting time using two values; the local datetime and the desired timezone

How does that work when the same local datetime happens twice during a DST switch, or when a local datetime is skipped during the other side of a DST switch?
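
For what it's worth, java.time documents one deterministic policy for both cases: in a spring-forward gap the local time is shifted later by the length of the gap, and in a fall-back overlap the earlier offset wins unless you explicitly ask for the later one.

    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    class DstEdges {
        public static void main(String[] args) {
            ZoneId ny = ZoneId.of("America/New_York");

            // Gap: 2018-03-11 02:30 never existed; resolves to 03:30-04:00.
            System.out.println(LocalDateTime.of(2018, 3, 11, 2, 30).atZone(ny));

            // Overlap: 2018-11-04 01:30 happened twice; defaults to the first
            // pass (-04:00), and the repeat must be requested explicitly.
            ZonedDateTime first = LocalDateTime.of(2018, 11, 4, 1, 30).atZone(ny);
            System.out.println(first.withLaterOffsetAtOverlap()); // -05:00
        }
    }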


IMO the biggest problem isn't how you store information or even how you do the date math. The most frustrating thing is drawing out user intent: distinguishing between two use-cases which are so similar that most users won't even know what they want.

Specifically, future events which aren't tied to a geographical location, and ones which are. And in the latter case, determining which single location it should be pegged to.

A: "We'll have a global phone call to discuss this crisis in exactly X hours from now."

B: "The keynote for the convention in CityName will occur at 3pm."


Another example of confusing user-intent is "last X days".

I.e., does "Last 2 days" indicate: A. the last 48 hours? B. yesterday + today up to now? C. yesterday + the day before?
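
The three readings diverge in code, too; here's a quick java.time sketch of each (and the zone you evaluate "today" in is yet another ambiguity):

    import java.time.Duration;
    import java.time.LocalDate;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    class LastTwoDays {
        public static void main(String[] args) {
            ZoneId zone = ZoneId.systemDefault(); // whose zone? also ambiguous
            ZonedDateTime now = ZonedDateTime.now(zone);
            LocalDate today = now.toLocalDate();

            ZonedDateTime a = now.minus(Duration.ofHours(48));       // A: last 48h
            ZonedDateTime b = today.minusDays(1).atStartOfDay(zone); // B: yesterday 00:00 -> now
            LocalDate cStart = today.minusDays(2);                   // C: the two whole
            LocalDate cEnd = today.minusDays(1);                     //    days before today
        }
    }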


"One month from now" is horrible. Is that "30 days from now" or "same day number next month"? If the latter, then how do you handle a 31st when the next month has 30 days or fewer? (Or a 29th when the next month is February and this is not a leap year?)

This is a difficult subject.

Recurrence is especially tough to model in a database.
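
For the record, java.time's documented policy for the 31st-plus-one-month case is to clamp to the last valid day of the target month - one defensible answer, but not the only one:

    import java.time.LocalDate;

    class MonthMath {
        public static void main(String[] args) {
            System.out.println(LocalDate.of(2018, 1, 31).plusMonths(1)); // 2018-02-28
            System.out.println(LocalDate.of(2020, 1, 31).plusMonths(1)); // 2020-02-29 (leap year)
        }
    }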


It depends on the context. I've programmed date logic for holiday calculation, days open for business and deadlines. I like working with dates because it's always well specified.


How about adding the number of days in the current month?


It depends on user intent.

Often the user means "same day of the week, four or five weeks from now".


Even something as simple as "tomorrow" can be ambiguous. I often ask my phone to set a reminder for tomorrow. If I'm asking after midnight, what I really mean is "today" since I haven't gone to sleep yet. Google seems to get this right, thankfully.


> Google seems to get this right, thankfully.

Gets it right for you. If I mean today, I say today. This is especially true when I'm talking to a computer.


Both the words "today" and "tomorrow" ("morrow" being related to the word "morning") are defined in terms of the day, not whether it's before or after midnight. "Noon tomorrow" spoken at 23:55 or 0:05 refers to the same point in time (~12 hours later) in human language. I would argue that "today" is basically unassigned at night - there is no current day in scope.

Sometimes we need to use non-human language to communicate intent correctly to a computer, but we shouldn't let that redefine perfectly good and well established human language.


This is how you use today and tomorrow. It's not universal to all human language or even to English speakers. Words have different meanings depending on lots of factors including context and region.

Your definition has the same issue that mine does, just with sunrise being the time around which the meaning of tomorrow is unclear. How high does the sun have to be before "today" becomes defined and the meaning of "noon tomorrow" shifts by 24 hours? If I wake up before sunrise, does tonight refer to now or to after the next sunset?

There's a certain amount of ambiguity inherent to the English language.


Yes, there's ambiguity, but there's no reason to introduce new ambiguity where none existed before by trying to reason about these words in terms of midnight (possibly for the benefit of computers), when that was never where these words were anchored. That was the point I was trying to make.


I've always reasoned in terms of midnight because that's when the date changes. I use tomorrow to mean the next date and today to mean the current date. I'm not doing it for the sake of computers, that's just what I've always understood these words to mean.

I think using midnight as the anchor for the today/tomorrow distinction is a lot less confusing and ambiguous than having today/tomorrow not tied to the current date.


IIRC Google uses 4am as the cutoff. So saying "tomorrow" after that cutoff will mean the following calendar day.


Google also says that an event that took place 47 hours ago is "yesterday" which is hilariously bad.

Just print the date and time.


Tomorrow is tied to the morrow, so it isn't tomorrow until sunrise.


So in roughly 1200 hours then. Check.

/folks up north


Seems like the only correct behaviour, really, would be to ask the user for clarification.


Exactly that. It's like when people ask the difference between crond and atd: one fires events at a defined date/time in the future, and the other fires events at a count of time from the present.


Normalizing temporal expressions, that is, putting human language onto a strict timeline, is a hard problem in natural language processing. There's some literature on it, which may or may not be helpful, at least in establishing the most likely intent.


Time is difficult and system design depends on the intended consumer.

For the case you propose, I read it as a "meeting" planned at a point in the future. In this case storing the start of the meeting as a bare timestamp is inappropriate design. What really should be encoded is the set of rules for identifying the proper timestamp. There was an existing standard mentioned for that which I'd use as a starting point if designing or implementing (if I chose their design) such a system.


Thank you. There seems to be some serious overdesign in the original article and many of the follow-up comments. Design should follow specific classes of users’ goals and intent, not an attempt to craft a universal solution for everyone, across all time.

Form follows function. If the concept of “10 AM on Saturday” is important to your users, model the software in line with that domain knowledge. All problems are bounded by context, and knowing the boundaries makes designs simpler and easier to manage.


> "wall clock point in time"

The tricky thing to realize is that this is not a datetime. This is more of a contract/condition saying "when localtime will be at this value" or better yet "when localtime will have exceeded this value for the first time" because you know someone will put a date on a leap second or missing DST hour at one point.


It depends on the application and what the customer need is.

My son’s friend in school was born on February 29. We could go through the time pedantry and claim that she is 1 year old (she is 6), or pick an arbitrary moment in time (say March 1, except in leap years) and get on with the party.

If you’re plotting a satellite course, the details matter, but many, if not most use cases require consistency over precision.


If you are dealing with physics, like a satellite course, "wall clock time" is irrelevant. Wall clock time is a human construct that is ambiguous in many edge cases, dependent on position and local politics.

Indeed, your son's friend's age is an ambiguous notion, hence the need for a more precise contract than a simple date.


All of calendaring is essentially conditions, though. Your point about this being a separate concept still stands.

It's unfortunate how few libraries exist to handle abstract datetime constructs, and how frequently programming environments screw this up. The only one I know of that doesn't mince concepts is Java 8 Time [1], where points-on-timeline timestamps are clearly separate from human-centric calendrical concepts like Time-of-Day (LocalTime) and Month-Day without a year, so there's an appropriate datatype to model lots of common use-cases of date math.
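
A quick illustration of that separation (values made up): each concept gets its own type, and combining them into a point on the timeline has to be explicit, zone and all.

    import java.time.Instant;
    import java.time.LocalTime;
    import java.time.MonthDay;
    import java.time.ZoneId;

    class SeparateConcepts {
        public static void main(String[] args) {
            Instant point = Instant.parse("2018-05-29T22:41:16Z"); // point on the timeline
            LocalTime opens = LocalTime.of(9, 0);                  // time-of-day, no date/zone
            MonthDay birthday = MonthDay.of(2, 29);                // recurring date, no year

            // Only an explicit combination yields an instant again:
            Instant opening = opens.atDate(birthday.atYear(2020))
                    .atZone(ZoneId.of("Europe/London"))
                    .toInstant();
        }
    }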

[1] https://docs.oracle.com/javase/8/docs/api/java/time/package-...


When we were building a golf tee time web site we eventually decided on this as well: always store the date-time as a string in the timezone local to the event. Convert to a date-time object in whatever timezone the user prefers (or golf tee time service APIs prefer :-P) on demand.

There is still a caveat, though. Locations can change timezone. Sometimes they do it fairly often (like every few years). You need access to a good timezone/location database that is updated regularly :-(

When we were doing this project, I was really surprised at how often we got problems. If you sell things all over the world (there are golf courses in ridiculous places), the probability that you will stumble into an anomaly is surprisingly high. Alas, this project was not a commercial success as the margins on tee times are quite small (and the tee time services you have to work with are often quite badly written). But it was quite fun to work on.


So to be absolutely correct, you store future timestamps in localtime, along with GPS coordinates of the location for the event. Then you determine the time zone of the physical locale based on a GIS lookup. Then when rendering, you offset that against the timezone of the user’s current location. I think.

I hate time.


Yep. That's exactly what we did :-) For the short time we were running the service (about a year or so) it worked great ;-)


Some timezones change twice a year. Seriously![1] A fellow engineer was responsible for the web front-end framework for dealing with time (back when we built it all here) and he spent months working through some of the strangest crap, some JS, some weird crap humans dream up.

Good example: you, user A, are in a TZ that switches biannually - at one point in the year you lose an hour, at another you experience an hour twice. Now, another user B is also in one of these whacko places and creates a meeting (while experiencing an hour for the second time) or what-have-you for a future time where user A goes through their local groundhog hour. You have a backend server that, for legacy reasons, simply truncates TZ and goes with its own local TZ. That server is in another TZ that has an identity crisis. JS is doing dumb stuff that only JS could do. Does your brain hurt yet? I'm pretty sure that this engineer can now perceive four dimensions.

He didn't receive a bug report until we migrated to UTC. UTC made things worse, somehow. As it turns out UTC is only good for when you care about a machine doing something at some time, or when working relationally.

Store TZ (ideally location, an offset is not a TZ) along your dates. When presenting them include that information, as well as relational ("3 days, 4 hours ago at 00h15 in WA").

Time is nuts.

[1]: https://en.wikipedia.org/wiki/Daylight_saving_time


> "3 days, 4 hours ago at 00h15 in WA"

This sounds exactly like a line out of some Douglas Adams book. Which is perfectly fine as long as no one booking meetings on that app has read any of his books.


It gets even more fun if you need to answer questions like: "When this event was scheduled 6 months ago for today at 2pm, what UTC time did that represent then vs now?". Timezone rules also change. So when you're dealing with past and future you might also need the timestamp of the timestamp!


Exactly, storing both the universal time and the legal local time.


Yes, you have to periodically recompute times.

For example, if you store time in TAI then you'll have to recompute all other future times as leap seconds are announced.


There's "Google time", their approach to leap seconds. 12 hours before a leap second, Google's version of UTC time starts running slightly slow, so that it loses a second over the next 24 hours.[1]

Some Google database systems rely on tight time synchronization to order update events, so they had to have a global monotonic clock.
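
A back-of-the-envelope sketch of that linear smear (per the linked page; the helper is made up): every second in the 24-hour window around the leap second is stretched by 1/86400, so the smeared clock drifts a full second across the window and never shows 23:59:60.

    class LeapSmear {
        // Seconds the smeared clock lags true UTC, t seconds into the
        // 24-hour smear window bracketing the leap second.
        static double smearOffset(long t) {
            return t / 86_400.0; // grows linearly from 0.0 to 1.0
        }
    }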

[1] https://developers.google.com/time/smear


I don't think I've ever heard of anyone (besides scientists) storing timestamps in TAI. Though Ada supports leap second calculations in the standard library: http://www.adaic.org/resources/add_content/standards/05rat/h...


Dealing with time well into the future is rough because of the unknown timezone changes and unknown leap seconds. There's not much you can do.

Computing future times in TAI from user inputs in wall clock time is fraught, so I guess you can't store TAI in those cases.

For a calendar app, since users want to deal in wall clock time, you have to store time with timezone and with leap seconds (so UTC + time zone). And you have to store timezone, not offset to UTC -- timezones can change.

At least internally, however, dealing in TAI can be helpful[0].

Also, IIRC there's a proposal to redefine UTC without leap seconds[1]... That scares me. Users really need wall clock time.

    The continued existence of TAI was questioned in a 2007 letter from the BIPM to the ITU-R which stated "In the case of a redefinition of UTC without leap seconds, the CCTF would consider discussing the possibility of suppressing TAI, as it would remain parallel to the continuous UTC."[16]
[0] https://cr.yp.to/time.html

[1] https://en.wikipedia.org/wiki/International_Atomic_Time


"That scares me. Users really need wall clock time."

Why would getting rid of leap seconds from UTC be a problem for that?

The reason there are leap seconds is nothing to do with wall clock time, it's because some people feel the time ought to be intimately connected to the Earth's rotation, but the Earth doesn't oblige by rotating steadily.

In my view the people demanding this relationship be maintained ought to take responsibility for fixing it from their side. Speed up or slow down the Earth. Can't? Too bad then, but don't expect us to keep fiddling with the clocks.


I don't understand why you say leap seconds have nothing to do with wall clock time. Don't wall clocks include leap seconds?


Wall clocks just track UTC. Today they have leap seconds because we defined UTC to include leap seconds; tomorrow, if UTC or a replacement universal time no longer has leap seconds, then wall clocks won't have leap seconds either.


But then people will notice a few decades later that noon in UTC-minus-leap-seconds is not noon.


But it _is_ noon.

You are probably thinking of sun transit time, "solar noon", the moment in each day when the sun appears to be "highest" in the sky. This varies of course by position on the Earth, it's how people set "noon" when they didn't need to agree with anybody more than a horse ride's distance away what the time was. So, let's say at least 200 years or more ago.

Do you know when solar noon is where you live now? No? Because it's irrelevant. Huge numbers of people live in places where the solar noon changes by an entire hour twice a year for no sensible reason. Does this cause a huge problem? No, there's a slightly elevated rate of road accidents and things like that, but nothing major. A few seconds per decade is _nothing_.


What's noon? Depends on who defines it. This is the problem with time.


Times in the future being problematic reminds me of the quote attributed to Niels Bohr:

"it is very difficult to predict — especially the future."


That's what we do in the Cyrus IMAP scheduler for future alarms. We set the trigger at most 30 days in the future, and at that point it will re-calculate when the alarm should fire, using the tz database as it exists at that time.

Of course, calendar events in icalendar have the tzdata stored along with them, in theory, so they should never change unless your client updates them. Which leads to its own world of fun.


Depends. Some things need to happen N seconds from now. Other things (usually things involving humans) need to happen on a day that those humans will look at their calendars and say "oh, it's [that day]! Better do [that thing]!"


The comment above was referring to the second kind, but it's true that it encompassed too much.


I might not be understanding your scenario correctly, but isn't it stated that UTC has 00:00 offset and should be stored that way? You would then add an offset based on current tzdata for that timezone string which would make it dynamic and accurate for the situation you described.


There are two distinct things at play here - storing times in the past and present, and times in the future.

For the past and present, UTC is fine, plus a source timezone if necessary. To track the exact instant when something happened to me, just storing UTC is fine. If I want to know where the hands on my clock were when that happened, you also store which clock I was using separately (timezone), but this is optional and doesn't prevent accuracy.

For future times, relating to humans, like on a calendar, there's a very important distinction - I want to set appointment when my clock shows a certain time, irrespective of what the timezone rules (Daylight saving time rules can change) are at that point. Here, storing the UTC time for my appointment according to today's rules is a problem - you must simply store what I expect to see on the clock and apply the timezone rules lazily at the last possible minute.


Actually the meaning of a date like:

2018-05-26T13:45:21+02:00

Will never change, regardless how many time zone changes you may have. What does change is the method to compute the time difference. So can you say how many days, hours, minutes, seconds ago that was from:

2043-02-12T11:15:16+02:00

You won’t be able to, because then you have to account for leap seconds, etc.

Also “next Tuesday” does have to take into account the time zone as you may need to deal with daylight saving time.

However “next Tuesday” is not a date-time, but a relative difference from where you are now and by pre-calculating you lose information.

Also relevant, in such cases storing the time zone doesn’t help because the user himself might travel and next Tuesday at 7:00 can remain the same, no matter what time zone you’re talking about.


It depends. It could be a meeting in a particular city at 4pm local time. A "+02:00" or "-05:00" specifier is insufficient because the government of the particular city/country might change the time zone rules between now and when the event is to occur. So you really need to store "2018-08-01 16:00:00 America/Argentina/Buenos_Aires", for example, because maybe after 2018-06-01 they decide to go for "-05:30".


Indeed. There's a concept of relative physical time

a) "I'll be there in 12 hours", implication being that if there happens to be a DST change, it doesn't matter.

and relative calendar time

b) "I'll meet you 15:00 next Monday".

Both of which are valid.

What's interesting is: which of these is most useful in programming, generally? Which should the APIs make easier, or even cater for at all?

I would submit that most applications only care about "physical time"[1], but *logging* specifically is actually really interested in calendar time. Fortunately, with logging there's so much volume that you can usually tell pretty easily when there's been a discontinuous time event -- it's a pain in the ass to rebuild timestamps post hoc, though.

[1] Calendar-type applications are actually pretty niche IME. Opinion may vary.


The meaning of the date will not change compared to the UTC absolute coordinates, but usually users care only about their local time. If you set an alarm to wake you up at 8:00, you expect it to ring at 8am your time every day. What if in the future the user's local time zone gets changed - say if daylight saving is abandoned, as many countries are considering? The user might not be at a +2 offset anymore, and when you convert to local time the alarm will ring at 7am or 9am instead of 8:00. That's definitely not what the user would expect.


what the user expects is of course completely different if there’s not one but several users, or several frames of reference.

Then adapting for local time will screw any schedule not pegged to local time (even for stuff as trivial as following an international event)

Time is hard.


The meaning of “2018-05-26T13:45:21+02:00” may not change, but if 2018-05-26 rolls around and the restaurant where you have a lunch reservation at 13:45 local time is no longer in the +02:00 time zone, it’s not the same time that your reservation is scheduled for anymore.


But users want wall clock time. That you stored 2018-05-26T13:45:21+02:00 is irrelevant. The user wants that time to change if the original timezone's definition changes.

So you really need to store {datetime, timezone_name}.

It gets tricky when you don't know the timezone... For example, you're on a plane with your devices in airplane mode and no wifi connectivity, and you want to create a calendar event in destination time... The UI had better let you specify a TZ, and you'll have to know what it is. Whereas if you wait until you land you can just let your devices figure out what TZ you're in and spare you the bother of thinking about it. That's an unlikely edge case, but the UI should really let you input a TZ if you want to. Perhaps the UI should insist on you entering a TZ if you're in airplane mode.


> Actually the meaning of a date like 2018-05-26T13:45:21+02:00 will never change,

It will change relative to the local time, though, if a state decides to stop honoring DST or changes TZ.


I once ran across a book of changes over time to local time zones, I believe for historical research. It was the size of a phone book.

Ah yes: American Atlas: United States Latitudes, Longitudes, Time Changes and Time Zones, 5th Edition, by Thomas G. Shanks. 448 pages.

Covers probably well over 100K locations each with its own history of timekeeping.


If you had UTC and the original timezone code, you could run a job that updates future timestamps according to timezone changes. This might keep things cleaner.


There is a solution for this, Unix timestamps.


Looks like someone didn't read the article :) If you normalize to UTC for some future date with a timezone in Lower Left Elbonia Standard Time (GMT+7:42), and the Elbonian Revolutionary Army successfully form an overnight junta, retroactively declaring the country's timezone to be Free Peoples' Elbonia Revolutionary Time (GMT-12:00), suddenly your DB rows make no sense whatsoever.

The absolute point in time relied on future knowledge that did not yet exist, which invalidated any previous estimate of when it would occur. An external timezone reference (the Olson DB) must be continually updated and separately referenced.


This is exactly wrong I think:

If the timezone changes under my feet, the unix timestamp might suddenly not refer to my birthday anymore.

Watching people struggle with this is frustrating.

Also, I currently work in frontend, and while it is no surprise that JavaScript has messed this up badly [0], it surprised me when I realised that a certain large UI framework had a) messed it up and b) didn't realize it, and tried to close the issue down when people tried to help.

[0]: they copied Java's first, broken Date handling and suddenly they were stuck with it IIRC.

Edit: heh, double-ninjaed


That fails for the case that you have an appointment at 3pm in 30 days, and 15 days later the timezone's offset changed.


How does that fail? I have an integer for an appointment. I am at that appointment at that given integer. This applies for all timezones. If the timezone changes, the integer does not. What you are saying is that the timezone changes and the integer changes as well, but then it's not the same appointment anymore.


Your integer is wrong because it was computed by translating from the current offset of the timezone. What we are saying is that the timezone definition might change in the future (e.g. a government no longer honors DST). At that point, when you translate your timestamp back into local time, it no longer matches the original schedule that the user set.


But the integer stays the same. That's the main point here. Everything else is just converting it to local time. Sure timezones might change that.


From the user's perspective it is the same appointment. They think they have an appointment at 3:00 PM on Thursday. If there's a timezone change, and you just stored an integer, then from the user's perspective the meeting changed from 3:00 PM to 2:00 PM.

Of course, if you have a meeting that spans timezones, this is unavoidable. I think the real solution actually lies in how the meeting time is expressed to the user so they don't have misconceptions about it.


> If you're like me and immediately said ohhhhhhhhhhhh, so THAT'S why there's twelve hours in a day! and immediately followed it up with: wait, why the fuck are they using twelve instead of ten? ...

> It's most likely because you have twelve joints in your hands: three in each of the four fingers, excluding the thumb. I thought that was pretty nifty to discover. Like, I had never looked at my hands before, really. Hands are really wild, when you think about it.

Probably not. This explanation of duodecimal is really farfetched and I’m pretty sure it arose from someone staring at their hands desperately trying to come up with a way to count 12 on their fingers to explain this. (Ten fingers and two feet seems as likely.)

Clocks are (probably) divided into (two sets of) 12 hours for the same reason feet are divided into 12 inches. Because it’s convenient for everyday use. Subdivisions of 1/12th suck for complex math (if your number system is decimal anyway), but are great for lay use. 12 subunits captures halves, thirds, and quarters, all of which are very intuitive and very common in everyday use.

12ths are handy enough that we have a special word for “12 of something” (dozen) even though our numbering system is decimal.


> Probably not. This explanation of duodecimal is really farfetched and I’m pretty sure it arose from someone staring at their hands desperately trying to come up with a way to count 12 on their fingers to explain this.

You are mistaken. Finger counting by using your thumb against one of the 12 segments of your fingers is common in many Asian cultures. It's utterly simple and immensely practical as it allows you to count up to 144 with both hands.


That is an interesting fact I wasn't aware of. I have doubts about the actual relevance of this, though. e.g. Chinese seems (so far as I can tell) to have always used decimal numbering systems, making this essentially a clever counting trick.


"I had no idea this thing even existed but let me continue to tell you why it's inferior"


I didn’t say it’s “inferior”. I said it’s not the foundation of their numbering system because their numbering system is not duodecimal.


I might not be remembering my History of Math correctly, but my recollection is that the Babylonians used Base 60, and 12 is an even subdivision of that. I also recall other Asian number systems having base 12 in some capacity, so there may be something to the counting methodology he outlined if it showed up in a few separate parts of the world.

Counting that way on one hand and then using the other hand to tally the groups would give you 5 sets of 12 or 60.


Wiki[1] seems to support your last sentence's description of how base 60 appeared.

Now, that approach would guarantee that your base is a product of two numbers (# fingers * # bones on non-thumb fingers). And, if your non-thumb fingers are roughly equivalent to each other, then that would guarantee it's a product of three numbers (# fingers * # non-thumb fingers * # bones per non-thumb finger). So I would say "it's a product of lots of numbers" does have a lot to do with the origins of base 60. Base 10, of course, is (# hands * # fingers per hand). Base 12 being more factorable than base 10 is just the luck of getting 3 * 4 vs 2 * 5.

[1] https://en.wikipedia.org/wiki/Sexagesimal


Wait, doesn’t that just give you 25?


No, counting to 12 using the sections between the knuckles of your 4 fingers with the thumb of the same hand gives you 3x4, and then 5 of those is 60. If you were counting fingers as single digits, you'd get 25, though.


> 12ths are handy enough that we have a special word for “12 of something” (dozen) even though our numbering system is decimal.

My understanding is that the etymology of dozen is originally traced back to duodecim (Latin for twelve) via French dozeine. Which is interesting, but doesn't really explain why it's special beyond your observation that it's a highly composite, but comfortably small, number.


> dozeine.

French nitpicker here. We now write it "douzaine" but spellings have changed through the ages.


Yep. Most of the "French" words in English (beef, butcher, danger, very, act...) actually come from dialects brought over by the Normans when they conquered England in 1066. That's closer on the timeline to Latin than modern French.


It's special because we don't have a word for, e.g., "13 of something", say, "trezen".


Modern French allows the "aine" suffix to be applied to many numbers, meaning "a group of X." I've mostly heard it used with 10 ("dizaine") or 12 ("douzaine"), though if you asked for a "treizaine" (13) of something, you would be understood.

It is interesting that "dozen" made it into everyday English but other group sizes did not.


> It's special because we don't have a word for, e.g., "13 of something", say, "trezen".

Hmm, how about 14 of something?

You have a fortnight to respond. Though if you take about a sennight longer to think about it, you'll still get a good score.


"baker's dozen"


That's two words.


Yes! 12-column UI layout templates caught on in the early days of RWD for good reason.


"Did you hear about the clock maker who was the first to add a second hand to a clock? His first prototype was a complete failure, but he got it working the second time."

Very funny, but here's the real reason (https://www.etymonline.com/word/second):

> second (n) from Old French seconde, from Medieval Latin secunda, short for secunda pars minuta, "second diminished part," the result of the second division of the hour by sixty (the first being the "prime minute," now called the minute).

Etymonline.com is awesome. I use it several times a day.


One of my most visited and favorited websites. Donate to the author.



Having dealt with that crap… that should be an interesting read

scroll scroll scroll nod nod nod scroll scroll scroll

> Recurring events

Whelp, shit just got very very real, and I actually disagree with Zach here:

> If someone held a gun to your head and demanded in the next week you either 1) programmed a comprehensive system that included full support for recurring events, or 2) invent full-scale ready-to-go-to-market cold fusion, then you should absolutely start brushing up on atomic physics.

You should just beat yourself with the gun (or whatever the nearest heavy implement is) immediately, no cause to prolong the pain, you're not Harold.


Yeah. I've lived through this scenario (implement recurring events based on a calendar system in very little time). And even though I definitely had a longer runway than a week (a whole month! on top of the work that I already had on my plate), I would still agree to take the beating immediately if I was presented with that situation again.

Yeah, sure, I "got it done" but it was very challenging to maintain for a while. And, of course, I learned well after the fact that what I had designed and implemented had basically, rather poorly, implemented some existing solution / FOSS library (I had essentially written some pieces of the Ruby gem ice_cube without realizing it) that I hadn't been able to find via Google at the time.


Postgres has done a really good job here. It not only understands offsets (which most devs think are timezones), it also understands timezones. What's the difference?

GMT-8 is an offset. America/New_York is a timezone.

This becomes important when you want to schedule a meeting on the east coast of the USA in April - Europe and the US change DST on different dates. But if you tell postgres you want a time in a particular zone it will do the calculation correctly. If you use an offset you might turn up an hour late.


What's the point of unqualified offsets like that anyway? You either use UTC or a local time in the appropriate tz.


They are useful for when you want the semantics of UTC but still a human readable time. One example is logs that people will actually read. People have a hard time doing mental math but 14:00-05:00 is just as valid as 19:00+00:00 and easier for people in Eastern Time.

It's also useful for disambiguating a local time:

    2018-11-04T01:45-05:00 America/New_York
    2018-11-04T01:45-04:00 America/New_York

That said, I do think they are mostly around for less noble reasons: they are way easier to serialize than actual time zone rules.


Useful when you need to store the offset of a specific event in the past where the timezone data doesn't matter. Especially logs: you want to know when something occurred, so you store the unqualified offset so that the DoS attack doesn't suddenly appear to have happened an hour earlier than the logs say if a DST change happens between the attack and the admin reading the log.


How do you update the tzdb?


It looks like it's embedded in the PostgreSQL source, tho I imagine there's probably also a way to update it without upgrading PostgreSQL itself:

- [postgres/src/timezone at master · postgres/postgres](https://github.com/postgres/postgres/tree/master/src/timezon...)


[postgresql - How to import tz data (IANA) into Postgres server - Stack Overflow](https://stackoverflow.com/questions/26560003/how-to-import-t...)


> Properly storing timezone-aware times

I'd just like to point out that a lot of RDBMSs support storing date and time alongside timezone information directly without using two separate fields. SQL Server has datetimeoffset, PostgreSQL has timestamp with time zone, and Oracle has timestamp with time zone. I think MySQL does as well, but I seem to recall something strange about it. Or maybe that's just me expecting MySQL to do something weird.

Similarly, most programming languages have support for datetimes with time zones.

It's still all a huge pain in the butt, but you don't need to use two fields and ensure that you use both all the time. Also, you can meaningfully compare datetimes within the same database without translating the timezone if it's all in one field.

Also, of course, remember that even a time with timezone without a date attached to it is inherently ambiguous.


> I'd just like to point out that a lot of RDBMSs support storing date and time alongside timezone information directly without using two separate fields. SQL Server has datetimeoffset, PostgreSQL has timestamp with time zone, and Oracle has timestamp with time zone.

PostgreSQL's "timestamp with time zone" doesn't store timezone, it converts the time to UTC and stores that, and on retrieval converts the value to the connection's time zone.


Part of the problem is the ANSI SQL standard specifies that 'TIMESTAMP' does not have any offset/timezone applied. The postgres docs pretty much admit this is a bad idea and recommend all timestamps be stored as 'TIMESTAMPTZ' (aka TIMESTAMP WITH TIME ZONE).

It's problematic when moving data between systems with different Locale settings. Because ANSI timestamps are stored as 'local' time, timestamps will shift if you read from a DB in New York and write to one in San Francisco. Both will interpret an ANSI SQL timestamp as being in their locale unless told otherwise.


Yeah, that's true. That's closer to Oracle's TIMESTAMP WITH LOCAL TIME ZONE data type. In PostgreSQL you have to use the AT TIME ZONE clause to override the output time zone, so the client has to know which time zone they want. And I'm sure there are weird corner cases where information is lost.


That sounds dumb (if the column type is really named like the text in the quotes)


Might sound "dumb" or confusing (which it is), but makes a lot of sense. I've written a post explaining the difference between "timestamp with/without time zone": http://phili.pe/posts/timestamps-and-time-zones-in-postgresq...


Why? If you're doing things correctly, you don't need to specify data types particularly often in SQL. You specify it once when the table is created and that's it. All you care about on the programming interface is that your application knows what data type of your language to use with each data column. What's wrong with a verbose and descriptive name for a data type?


I don't think they're saying that having verbose name is dumb, I think it's that a type called "timestamp with time zone" doesn't actually store a time zone.


Yeah, that's what I initially thought, too, but some of the other responses have made me question that.


SQL data types have some verbose names, like "NATIONAL CHARACTER VARYING (20)". PG adds a "TIMESTAMPTZ" as alias for "TIMESTAMP WITH TIME ZONE" (which I'm not sure whether is a sql ansi standard type or just convention).


It is, and it is.


Erp. This sentence made me wince.

Times don't have timezones - places have timezones. The idea that time handling is made easier by attaching timezones to times is the cause of so many headaches. Correct time handling involves understanding place - the place where things happen, the place where the user is observing them from, the place where a clock displays a particular time.


Yeah, that's kind of an issue with the ANSI standard. It specifically allows time with time zone.

Some RDBMSs only support datetimes (timestamps) with time zone, but if you want to aim for complete ANSI compliance you're supposed to allow time with time zone. PostgreSQL, which allows that data type, specifically points out in the doc that you shouldn't use it:

> The type `time with time zone` is defined by the SQL standard, but the definition exhibits properties which lead to questionable usefulness. In most cases, a combination of `date`, `time`, `timestamp without time zone`, and `timestamp with time zone` should provide a complete range of date/time functionality required by any application.

https://www.postgresql.org/docs/current/static/datatype-date...


As well as when these observations happened or will happen in and from these places, as the rules themselves will change over time.


I guess it depends on if the event concerned only happens at one place, such as a clinical site visit, or many places, such as a global call


Yes! I have an old rant about iCalendar being wrong for not modelling place: https://fanf.livejournal.com/104586.html


You're confusing time zone with UTC offset. The two are not the same concept. You can determine a UTC offset given a time zone ID and a date/time but the opposite is not necessarily true. A time zone ID cannot be reliably discovered given only its UTC offset; the mapping is not unique.


Yeah! We work around that in our logic at FastMail by choosing the largest-population area which exactly matches the tzmapping data (generally it's from an icalendar file with some weird name and some rules which show the next couple of timezone transitions)

It's ugly and horrible, but that's life.


Implementations of time-and-time-zone are different enough between databases, and languages, and libraries, etc. that I believe it's worth storing them separately as the author suggests, both for simple clarity and to force the developer dealing with them to actually think about timezones. Even if the database gets it right in a single value, it's way too easy for the developer to read the field and ignore the timezone information without consideration.


> time with timezone without a date attached to it is inherently ambiguous

I'd view that as incomplete (rather than ambiguous) until applied to a specific date. "You people in the Hawaii office feel free to join the Boston 2PM call whenever you want" is a fine thing to say even though you don't know what the Hawaii time will be until you know the date.


> Similarly, most programming languages have support for datetimes with time zones

They do, but in many languages they're easy to misuse, and I think this is an underappreciated part of the problem. I'd even argue that timezones really aren't that hard; they're hard when you use the wrong abstractions or try to tack them on as an afterthought. Ultimately there's a big lookup table that someone else manages for you (the Olson database) that gives the current definitions of timezones and how they relate to UTC at various times of the year due to daylight saving changes. Given a local wall-clock time and a timezone name, you can use this to unambiguously get the global instant in time it corresponds to.

But a lot of libraries let you ignore or mix up these underlying types, or implicitly convert between them in a way that you may not realize is happening. The biggest problem I've seen by far is that a timezone like America/Los_Angeles is not the same as an offset like -07:00 or a named offset like PDT (the latter is the most confusing and really just needs to stop being used. It has the definition of a static offset in that it means 7 hours behind UTC, but also corresponds to a place which is sometimes on PDT and sometimes PST, and also is usually referred to as just a timezone with nothing to distinguish it from timezones from the Olson database like America/Los_Angeles. And I think it also has a name collision with a timezone in Asia).

As an example of the problems this can cause, say you start with a time like 0:00 March 11 2018 in America/Los_Angeles. If you call the "to_iso8601" method in any given time library you may or may not be implicitly converting the timezone to an offset, as it'll return "-08:00" or maybe "PST" (ISO 8601 doesn't have anything to say about what a timezone should be, so these are all valid). But now when you parse that again and add 4 hours to it, you'll still have a time with a -8 hour offset even though the place you were originally referring to has changed and now has a -7 hour offset.

The javascript Date type is notoriously bad on its own as well, since it implicitly converts everything to the browser's local timezone offset. And moment isn't a whole lot better, the timezone support is tacked on as a separate library and it will happily let you mix up offsets and timezones as well, or let you go back and forth to a Date object which can have that same implicit conversion problem.

The only library I've seen that actually requires you to treat these concepts separately is Joda, which has a port as js-joda. The syntax can be a little confusing and the js port is missing good builtin support for locale-aware formatting, but those are all minor annoyances and well worth it for the better abstractions it gives you. The good news is there's a new builtin time library on the horizon for JS as well which seems to be borrowing from both joda and moment, along with browser support for Olson timezones and locale-aware time formatting, so I'm hopeful it will be easier to just "do the right thing" by default in the not too distant future.
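
To make the offset-vs-zone trap concrete, a small java.time sketch (the same trap exists in most languages): arithmetic on a zone tracks the DST transition, arithmetic on a bare offset silently does not.

    import java.time.LocalDateTime;
    import java.time.OffsetDateTime;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    class OffsetVsZone {
        public static void main(String[] args) {
            // Zone-aware: adding 4h crosses the spring-forward gap,
            // so the wall clock reads 05:00 at offset -07:00.
            ZonedDateTime zoned = LocalDateTime.of(2018, 3, 11, 0, 0)
                    .atZone(ZoneId.of("America/Los_Angeles"))
                    .plusHours(4);

            // Offset-only (what naive ISO 8601 round-tripping leaves you with):
            // still -08:00, reading 04:00 - the same instant, but no longer
            // what clocks in Los Angeles actually show.
            OffsetDateTime offset = OffsetDateTime
                    .parse("2018-03-11T00:00:00-08:00")
                    .plusHours(4);
        }
    }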


Slightly off topic but I wonder how you handle time where dilation is present. For example if a spacecraft travels to mars and there is mars local time, plus time dilation from the journey, when does a transmission start from a reference frame? Not UTC related for sure! Or is it? And then what happens if someone else goes there in a different orbital pattern and the dilation is different.

and then there is the question of how you would represent a recurring calendar entry between both places.


This is a very old problem and not limited to spacecraft. If you sail around the world eastward, each of your days is slightly shorter than the days of your countrymen and by the time you get home, you will have experienced a full additional day-night cycle. I think the solutions would be the same: separate timekeeping standards where it makes sense, like Mars, and frequent corrections to spaceships' clocks to keep in step with a universal standard like UTC as sailing ships used to do[1].

1. https://en.wikipedia.org/wiki/Noon_Gun#Time_signalling


That was part of the plot of Around the World in 80 Days...


That's not time dilation, that's... I guess you would call it clock dilation.


If two spacecraft start going fast enough in different directions, they can disagree about what order two different events happened in. At that point, it's not enough to give an event a single time coordinate in some reference frame; you also need to know its position in space.


If you can compute time dilation, then you can map back to Earth time (UTC) or whatever, and use that in your communications. Otherwise you give up and use only local time in your communications. And if ever our intrepid travelers meet again they can compare their clocks and determine dilation (which will not change the fact that they'll have observed events in different orders, but they'll understand this, hopefully).


So you would need to know both the position and momentum? Good luck with that.


At the scale of a spaceship, it's easy.


With care you can calculate UTC independent of any time dilation. Local system is mostly irrelevant in terms of UTC and we already deal with systems with different clocks due to leap seconds.

Now you might care about communication lag, but nothing makes earth unique so you need to handle lag with arbitrary endpoints anyway.


Speaking of physics, I just read The Order of Time by Carlo Rovelli. The book is about time in physics, and the title refers to the fact that there is no real ordering of events.

Good read.


Love this article, but one thing it skipped on was more in-depth on Leap Seconds.

You see, UTC is kinda like another human-made-up timezone. Humans made up some rules, and UTC is a 37 second offset from TAI / International Atomic Time:

https://en.wikipedia.org/wiki/International_Atomic_Time


Right? And that 37 second offset will change from year to year? I was waiting for the bombshell that some minutes actually have 61 seconds in them, and some have 59 seconds (though it occurs to me that I don't know if astronomers ever add or subtract more than a second for any given clock adjustment).


The goal is to always keep UTC within one second of UT1. Short of very drastic events I don't think it will ever be necessary to introduce more than one leap second in a fairly long time interval. And even if it were, I would think that they would do separate leap second events some time apart.


There are two official primary times a Leap Second can occur: December 31 and June 30. There are provisions for more slots, but they're not really expected to ever be used.

Like most things associated with time Leap Seconds are a huge headache to implement properly on computers. If you want some fun watch what various NTP servers around the world do when Leap Seconds roll around. You will see clocks that start to drift for 15 minutes before jumping to the right time, some that wait until 8AM local time on the following day to correct, and some that just go crazy. And of course you have Google's clock smearing across most of a day.

IIRC there was one major earthquake that caused a sooner than expected Leap Second adjustment.

Fun fact: GPS, NTP, and many similar timing formats have flags in the signal that warn clients of upcoming leap second events. Few receivers pay attention to them however.


> If you want some fun watch what various NTP servers around the world do when Leap Seconds roll around. You will see clocks that start to drift for 15 minutes before jumping to the right time, some that wait until 8AM local time on the following day to correct, and some that just go crazy. And of course you have Google's clock smearing across most of a day.

As a note, clock smearing is a non-standard hack invented by Google, because most programs are too broken to handle the leap second warning event and insert/delete a second properly, as designed. And you shall never add a clock-smearing server to www.pool.ntp.org.


The Earth's rotation is slowing down. In a few hundred years with the current system, we will need more than 1 leap second per 6 months. How many centuries is hard to tell, since it is affected by many things, including global warming.


source?


https://www.seeker.com/ice-age-clues-help-explain-mysterious... is one of many.

Note that if the day slows to 5.5 ms longer than 24 hours, then we need an average of more than one leap second per 6 months to keep up. Per https://en.wikipedia.org/wiki/Earth%27s_rotation the Earth's rotation changed by 1.7 ms in the last century, an average of 2.3 ms over the last thousand years, and it is being affected by a variety of causes right now.


Our moon is slowing down Earth's rotation by tidal friction (angular momentum is transferred from the Earth to the Moon). The effect is rather small with the period only slowing down by around 2ms per 100 years currently (depends on the configuration of the oceans and the orbit of the moon).

But internal changes in the Earth can also cause changes in the rotation period. For instance the 2011 Tohoku earthquake shortened the day by 1.8 µs [1].

[1] https://www.earthobservatory.sg/blog/how-did-2011-tohoku-ear...


The 61st second is even in the standards, but it will always be right before UTC midnight: YYYY-MM-DDT23:59:60Z

https://tools.ietf.org/html/rfc3339#page-15



We don't need more standards. We have UTC and TAI. Pick one, use that one.


Leap smearing isn't related to TAI, it's just a way to minimize the effect of leap seconds on machines using NTP servers.


Ah, I confused this with a proposal (IIRC discussed on the ietf@ietf.org list a while back) for a new time standard that involves guesstimating leap seconds.


I wonder if it would be possible to just straight-up use UT1 instead of UTC (or is UT1 only available post-facto?). This would essentially be near-infinite smear.


... and UT1


Really, it's TAI that should be enough for everyone.


Yes. Just count seconds and have a handy database of timezones and another of leap seconds. Yes, your computations of future times will be wrong if you can't predict leap seconds accurately (indeed, you cannot!). But if we're talking about a calendar then you want to store wall-clock times and every so often recompute a near-in-the-future time at which to resolve (into TAI) all relevant near-in-the-future times in the calendar so you can fire off reminders/alerts/whatever at the right times.
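A minimal sketch of that resolve-late approach, using Python's zoneinfo (the function name and example values are mine):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    def resolve(wall: str, zone: str) -> float:
        """Turn a stored wall-clock time + zone into a Unix timestamp using
        today's tzdata -- call this shortly before firing, not at save time."""
        naive = datetime.fromisoformat(wall)  # e.g. "2018-08-01T16:00:00"
        return naive.replace(tzinfo=ZoneInfo(zone)).timestamp()

    # The stored form survives tz rule changes; the instant is derived late.
    fire_at = resolve("2018-08-01T16:00:00", "America/New_York")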


Four days ago I saw an excellent presentation about the trickeries of time, dates, and timezones at DjangoCon Europe. It was by Russell Keith-Magee and is short, to the point, and super informative: https://www.youtube.com/watch?v=qabriMQ1SYs

I recently ran into a completely novel time bug while working with Excel files for GoodGrids. In Excel files, date and time values are stored as days elapsed since an epoch. But you cannot tell what time a particular number represents until you check whether the file was made with Excel on Windows or on the Mac. It turns out the developers chose a different start of the epoch on these two platforms.
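To make the quirk concrete, the conversion looks roughly like this (the usual 1900-system epoch constant is 1899-12-30 because it also has to absorb Excel's phantom 1900-02-29 leap day):

    from datetime import datetime, timedelta

    # Excel stores dates as fractional days since an epoch -- which epoch
    # depends on the file's date system, not on where you open the file.
    EPOCH_1900 = datetime(1899, 12, 30)  # Windows default
    EPOCH_1904 = datetime(1904, 1, 1)    # classic Mac default

    def excel_serial_to_datetime(serial, uses_1904_system):
        epoch = EPOCH_1904 if uses_1904_system else EPOCH_1900
        return epoch + timedelta(days=serial)

    # The same serial number, two different instants:
    print(excel_serial_to_datetime(43249.5, False))  # 2018-05-29 12:00:00
    print(excel_serial_to_datetime(43249.5, True))   # 2022-05-30 12:00:00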

A while ago I came to the conclusion that time is simple from any one perspective, but complicated when you try to handle all possible perspectives. Try to explain the complexities of time to someone who does not travel, resides in the same timezone all their life, and does not need to coordinate with anyone outside of their timezone. For them, the rules that govern time are pretty easy.


I have been thinking a lot lately that we are trying to solve a class of XY problem.

Universal time is in part about trying to order events in a strict order. Problem is, nobody observed that order. Later we infer things about the system based on this chronology, but it's completely fictitious, and we have to stretch our brains to explain the state of the system.

The Java Memory Model, and subsequently several other languages, agreed on “happens before” semantics. We don’t care what order certain things occurred in as long as these key events happen in the right order. Because those events cause the other events or are caused by them. Some transactional models have a flavor of this as well. This write happened based on a decision made by observing all of the transactions up to N, which may or may not result in a rollback because N+1 overwrote a read value.

I can’t help thinking that our logs should show something like “customer saw their balance was $102.75 and withdrew $100, and on another server a payment for $55 was honored around the same time, which is why they are now overdrawn”. The two events were independent, and maybe we should remember them the way the distributed system experienced them.

Or maybe that’s even harder to reason about than the fiction we use now...


If people could regularly reason like that, asynchronous programming wouldn't be such a pain in the ass.


My knee jerk response is “Events in a git repository are partially ordered. And we have a visualization for that.” And then I recall how often people misunderstand a git diagram with more than three independent chains of events.

Still doesn’t rule out bad tools but it doesn’t paint a good prognosis.


Every bank I've ever been a sucker-err-customer of seemed to use whichever order results in the most fees.

Not the only reason I use a credit union these days.


Yep, transaction reordering to optimize fees is definitely a thing in the finance world. At least some Credit Unions do it too.


There used to be a law against that in the US, but the GOP repealed it early in Trump's first year.


What was the law's name or number?


Federal Reserve rules published under regulation E.


My favorite date quirk not mentioned in the article or here yet is the month of September 1752. If you're on a *nix/Mac machine, try this command:

  $ cal 9 1752
What you'll see is:

     September 1752
  Su Mo Tu We Th Fr Sa
         1  2 14 15 16
  17 18 19 20 21 22 23
  24 25 26 27 28 29 30
No, there's not a bug in `cal`, this is correct ;)


That was fun. So what happened in 1752?

edit: I should have searched (http://mentalfloss.com/article/51370/why-our-calendars-skipp...)


> Probably around this time your first boss's ancestors were also discovering minutes as a good way to make sure your ancestors got to work on time: "You're exactly 56 minutes late today, Holman, what the fuck is wrong with you? Go shave the sheep!"

OK, I guess this is a joke, but this actually began to happen during the Industrial Revolution, when factory owners were obliged, as part of worker training, to teach workers how to read a clock.


> [1883] ... a bunch of rich, white railroad tycoons met at a fancy Chicago hotel to agree on a standard timezone so their trains would work better together...

The Brits did this for their railroad in 1847. Time balls were used to synchronise clocks before telegraph and radio.

https://en.wikipedia.org/wiki/Time_ball

On clock accuracy: I was reading last night about time dilation due to gravity, and the difference in time rates between points less than one meter apart in height on Earth has been experimentally verified.


Indeed, I thought it was well known that Britain was the first to standardise time and that it was done for the railway.


One thing I’d love to know is how the heck we as software devs will manage a future Moon or Martian colony with its own set of discrete time zones...

Admittedly you can’t schedule a phone call or chat, since the latency is too great, but if you wanted to schedule an event to occur in, say, a software system on both planetary bodies at the same time, your software or time library would need awareness of celestial mechanics... sounds like a fun problem :)


In my 20 years of working with software and distributed architectures, no problems have been harder than working with time zones and calendars.

It is truly nightmarish stuff. I am convinced you could run a very profitable consultancy specializing in debugging date and time related problems in people's systems.


That would be profitable, but I'm not convinced anyone could stomach that job without going insane. Maybe a year if you were extremely hardy, but I'd be running for the padded cell after 3 months of dealing with nothing else.


Character encoding can be equally scary.


> Egypt was the first ones to really start doing this: they had a duodecimal system already, so that's why it was split as base twelve, or our two parts of twelve hours in a day. If you're like me and immediately said ohhhhhhhhhhhh, so THAT'S why there's twelve hours in a day!

The reason we use numbers like 12 and 60 and 360 in time and length, from ancient times onwards to today, is that these numbers have many divisors, meaning that they can be divided many different ways without needing decimals.

12 can be divided by 1, 2, 3, 4, 6, 12 (i.e., it can be divided in half, into thirds, fourths, sixths). By comparison, you can't cleanly divide 10 into thirds or fourths without using decimals.

60 can be divided a whole bunch of ways: it can be divided by 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60. The same is true for 360 (e.g. degrees in a circle).

Even in the modern era, where we understand decimals and everyone is trained on them, it's convenient to be able to perform common operations on common units without having to involve decimals. It's convenient that an hour can be divided into three 20-minute periods, or two 30-minute periods, or six 10-minute periods, and so on.
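A quick way to check the claim:

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    for n in (10, 12, 60):
        print(n, divisors(n))
    # 10 [1, 2, 5, 10]
    # 12 [1, 2, 3, 4, 6, 12]
    # 60 [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]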


Ironically, the article is dated "Spring 2018", which for half the planet doesn't happen yet :)


There are only two types of time systems:

1. TAI, and systems that have a fixed 1-to-1 mapping to it (e.g., GPS).

2. Systems that are generally unable to calculate how many seconds will elapse until your next birthday. UTC is in this category.

Articles like this spend a lot of time on the varying degrees of insanity in category 2. But I would rather start with category 1 and negotiate any additional complexity from there.


The exact start of my birthday depends on where I will be at the time. It starts and ends at midnight in the current timezone. With some clever time zone jumping my birthday could start twice or more in the same year. Also, if I was born on February 29, you need to move it to Feb 28 or March 1 in non-leap years.


> This stuff moves a lot, too: the tz database (also known as the Olson database), which is the listing of timezone rules we use as programmers to calm this chaos, gets updated many times a year.

Without looking it up, I would say the last change happened when North Korea decided to sync its timezone with South Korea... or have they reverted that since Trump cancelled their summit?


I think it's one of the more recent changes.

It's actually pretty cool- all of this is discussed on the tz mailing list, so you can see how it all works yourself! https://mm.icann.org/pipermail/tz/


Thanks that's a great link. Indeed as you said I'm learning quite a lot about how it works just from this thread which started about the Western Greenland timezone!

https://mm.icann.org/pipermail/tz/2017-December/025606.html


Checking 2018e, that is in fact the last change to the database, though they also updated some past timestamps:

> From 1994 through 2017 Namibia observed DST in winter, not summer.

> In 1946/7 Czechoslovakia also observed negative DST in winter.

2018d had a change to Palestine's DST date (a week earlier than usual), Casey Station moving from +11 to +08, and a few other historical changes/fixes.


To clarify: Namibia had negative DST in (southern hemisphere) winter. They had UTC+1 in the winter (April to September) and UTC+2 in the summer. Now they're UTC+2 year-round, aligned with South Africa. The reason seems to be historical - if you're on year-round UTC+2, and you want to be on UTC+1 in winter/UTC+2 in summer, it's easier to say "we're setting the clocks back an hour in winter" than to say "we're setting the clocks back an hour year-round... and then forward an hour in summer."


I really enjoyed this.

And I think no one has mentioned the very related Computerphile video yet: "The Problem with Time & Timezones" https://www.youtube.com/watch?v=-5wpm-gesOY

Same topic, equally entertaining (not AS geeky though).


We could've all switched to Swatch™ Internet .Beat Time™ by now, but nooooo, we wanna stick with all this mess instead!

:P


If you think calendar apps are a pain, try television broadcast automation... frame accuracy instead of per-second, with fractional frame rates (29.97, 23.98 fps), a mix of content at different frame rates on the same playlist, conversion to a different frame rate on the way out, plus operators in a different timezone than the channel, and the need to handle timezone changes without messing things up on air.

I have nothing to do with it, but this is an open-source automation project I have bookmarked that has some of those functions:

https://github.com/jaskie/PlayoutAutomation/blob/develop/TAS...

Lots of hassles.
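To give a flavor of the frame-accuracy fun: 29.97 fps "drop-frame" timecode skips the labels ;00 and ;01 at the start of every minute except every tenth one, so even "what do I call this frame?" takes real code. A sketch of the standard conversion (Python; this is not the linked project's code):

    def frames_to_dropframe_tc(n):
        """Frame count -> SMPTE drop-frame timecode at 29.97 fps."""
        drop, per_min, per_10min = 2, 1798, 17982
        d, m = divmod(n, per_10min)
        if m > drop:
            n += drop * 9 * d + drop * ((m - drop) // per_min)
        else:
            n += drop * 9 * d
        hh, mm = n // 108000 % 24, n // 1800 % 60
        ss, fr = n // 30 % 60, n % 30
        return f"{hh:02d}:{mm:02d}:{ss:02d};{fr:02d}"

    print(frames_to_dropframe_tc(1800))  # 00:01:00;02 -- two labels skipped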


If you want a technical audience to read your page, get rid of the videos and unnecessary animations.


> It’s so predictable that developers will pooh-pooh having to write timezone code, almost as much as it is predictable that some clueless commenter on Hacker News will complain that this page has autoplaying video on it. And then someone will calmly quote this passage in response, quietly pleased with themselves that the initial commenter was rude and certainly didn’t read the post at all. Then a third person will chime in on the thread saying the author was playing you all like a fiddle anyway, and the real problem is that the post was way too long to start with.

I guess I get to play the part of the second person…


The author played us all like a fiddle.


He's playing my laptop like a fiddle, too, a really high-pitched one: all the videos are keeping the CPU at 100%, so I had to downclock everything to keep the system below 64°C.

So: the first time I've ever encountered a paragraph like the one the GP quoted happens to be on the one site whose autoplaying videos are making my whole system (including typing this comment) lag worse than any other site I've been on.

At least it gave me one good idea for the Extension I'm Making One Day™: watch the CPU and if the current page is killing it and it has autoplaying videos and/or CSS animations, automatically kill both.

In the meantime, I had to close the article as it was unreadable. (Wow, getting my low typing latency back is wonderful!)


It was too long to start with


Isn't that exactly what reader mode is for?


Implementing recurring events is a nightmare. Also, working with potentially 4 different timezones in a single situation is terrible. We have instances where we need to display a time in a browser but have to take into account the browser's timezone (fuck JS Date objects), the call center agent's timezone, the timezone of the supervisor they're talking to, and the timezone of the participant they're talking about. I hate time. I wish we would just go to a seconds-based time system ("I'll be done with this in about a kilo-second", "I work just under 28 kilo-seconds a day").
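FWIW, the least painful pattern I've found for the display side is to keep exactly one UTC instant and render it once per viewer (Python sketch; the zone assignments are just examples):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    instant = datetime(2018, 5, 29, 22, 41, tzinfo=timezone.utc)  # the event

    # One instant in, one rendering out per participant -- never the reverse.
    for who, zone in [("agent", "America/Chicago"),
                      ("supervisor", "America/New_York"),
                      ("participant", "Europe/London")]:
        print(who, instant.astimezone(ZoneInfo(zone)).isoformat())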


Lemme tell you about a fun little bug in the TFS Build system. You can create build definitions, you can set the schedule for those builds. You can, for example, schedule builds that run at a certain time of day every day (or every weekday or every Tuesday, or whatever). That build information is then stored in an internal representation which is actually used for scheduling. And that internal representation, it uses UTC times. Can you see the bug here already?

So, your build system will be scheduling daily builds at the same time every day, and then suddenly after a DST change those builds will be an hour off from when you expect them. But here's the even more fun part. If you simply happen to edit and then save one build definition after the DST switch then it'll be saved with a different UTC time (due to the different UTC offset) and it'll run at the correct local time of day.
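The fix is the usual one: store the local time of day plus a zone, and derive the UTC instant per occurrence instead of once. A sketch of what the scheduler could do (Python; the zone and time are made up):

    from datetime import date, datetime, time, timezone
    from zoneinfo import ZoneInfo

    LOCAL = ZoneInfo("Europe/London")
    BUILD_TIME = time(9, 0)  # "build daily at 09:00 local"

    def run_at_utc(day):
        # Reattach the zone for every occurrence so DST is resolved per day
        # instead of being frozen into a single stored UTC time.
        local = datetime.combine(day, BUILD_TIME, tzinfo=LOCAL)
        return local.astimezone(timezone.utc)

    print(run_at_utc(date(2018, 3, 24)))  # 2018-03-24 09:00:00+00:00 (GMT)
    print(run_at_utc(date(2018, 3, 26)))  # 2018-03-26 08:00:00+00:00 (BST)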


Probably not a good time to propose decitime, I guess. (No pun intended.)

Day is the basic measure. Deciday = day/10 (=~ 2.4 hours). Centiday = day/100 (=~ 14.4 minutes).

Either could be used in place of the old fashioned hour.

Milliday = day/1000 (=~ 86.4 seconds)

Milliday seems like a good replacement for the traditional minute.

Centimilliday to replace seconds? It's a bit unwieldy but could be abbreviated as cmd (similar to the commonly used 'sec.')

Years are a little troublesome. However, with sufficient energy input a kiloday could be made to be 3 years. With sufficient precision (and perhaps occasional correction) it could resolve the leap year and occasional leap second issues.

There would be some ramifications. For example the Newton is defined in terms of seconds so there are some units beyond pure time that would also require revision.

Let's get rid of this madness of 24 hours, 60 minutes and 60 seconds. And 365.2425 days was just plain wrong from day 1.
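For the curious, the conversion is at least trivial (a toy, obviously):

    def clock_to_decitime(hh, mm, ss):
        frac = (hh * 3600 + mm * 60 + ss) / 86400  # fraction of the day
        md = round(frac * 1000)                    # whole millidays
        return f"{md // 100} deciday {md // 10 % 10} centiday {md % 10} milliday"

    print(clock_to_decitime(17, 0, 0))  # 5pm is ~708 millidays into the day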


I can't tell if you're being serious...


Only slightly. Changing the orbit of the earth to achieve a 333.333-day year is not feasible by any foreseeable technology.


Great description of pains I've been feeling the last 5 years in my daily life. Just one comment on colors: I can read the whole post, but when I switch or look somewhere else, my eyes have a hard time adjusting after all those crazy colors I just saw. Maybe I'm broken.


Cone receptors saturate when you look at the same hue for a period of time and lose sensitivity for a while. It's known as retinal fatigue or cone fatigue and causes afterimages. It's normal and used in several optical illusions.


> (My all-time fave is RFC 2606, thanks for asking! I’m in awe of that absolute unit. Where would we be without that banger? We’d be in complete fucking chaos, that’s where.)

RFC 2606 ("Reserved Top Level DNS Names") not RFC 2616 ("Hypertext Transfer Protocol -- HTTP/1.1")? I would have gone with the latter.

Related: I just noticed Cloudflare's DNS service (1.1.1.1) follows the suggestion in RFC 2606 and resolves *.localhost to 127.0.0.1. That can be handy for local web development etc. There are some other wildcard domains that resolve to 127.0.0.1 but I usually can't remember them.


xip.io is one service that will wildcard for any IP address.


Enjoy RFC 6761.


That was an efficient way to heat a battery.


Same here. About a quarter of the way through the article I figured out what the hell my system was struggling with; then, halfway through, I had enough. I disabled Javascript and reloaded the page. No dice: the page renders perfectly, videos included. I scrolled a little faster and skipped parts from there on.


Obviously it all depends on the application:

Birthdays (as the article mentioned) are not points in time and mean different things depending on where you are, so best stored as a year-month-day combo without any zone information.

Past times are not that big of an issue actually, just use UTC and convert to local time to display, but calendars schedule for the future, and you can't always calculate a future time in UTC.

In e.g. a distributed system where monotonicity and ordering are important, UT1 would actually be even better than UTC, as UTC has occasional jumps (hence the workarounds, like Google's smeared NTP)
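The birthday / past / future split above sketches out as a simple storage rule of thumb (Python 3.10+ sketch; the field names are mine):

    from dataclasses import dataclass
    from datetime import date, datetime

    @dataclass
    class StoredTime:
        # Past instants: one unambiguous UTC datetime, converted on display.
        occurred_at_utc: datetime | None = None
        # Future wall-clock events: local time + zone, resolved late.
        scheduled_local: datetime | None = None
        scheduled_zone: str | None = None  # e.g. "Europe/London"
        # Birthdays and the like: a bare calendar date, no zone at all.
        anniversary: date | None = None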


The Olson database is great, but I wish it included more cities. I hate having to know the mapping from city to time zone and time zone to city. Quick, what's the zone to use for Columbus, Ohio?


Wow... this site completely freezes my mobile browser dead in its tracks.


> The Happy Monday System, which honestly I just loved based on the name of it alone. Shows how Japan has moved their holidays schedules around just to make people happier with a longer weekend.

In countries with significant amounts of legal leave (e.g. most of Europe), Tuesdays and Thursdays are also very nice, as "burning" a single leave day provides for a 4-day weekend (and a 3-day week). French even has an expression for this: "faire le pont" (bridging between the holiday and the weekend).


Independence Day, in the USA, is always celebrated on July 4. One year in seven this creates some awkwardness: which weekend is "Fourth of July weekend" if the 4th of July is a Wednesday?

2012: https://www.theatlantic.com/national/archive/2012/07/serious...

2007: https://www.inc.com/news/articles/200706/independence.html

1990: https://www.nytimes.com/1990/07/04/garden/and-thank-god-it-s... (NYT)


In Portugal we have the same expression (probably copied from French). It gives rise to some conversations that sound funny when translated, such as "do you have bridge this Friday?"


This article covers a lot of important points, but I must nitpick one in particular. GMT is not a timezone. Europe/London is a timezone. GMT is solar time, which is distinct from UTC as a time system. The purpose of leap seconds is to track the difference between TAI and GMT, which results in UTC (i.e. UTC = TAI + leap seconds).

Since everything is relative, my preference for recording accurate historical time is microseconds since TAI0. Everything else prior and future is an estimate.


I've seen a few systems mixing up their clocks, sometimes using the database server clock, sometimes the web server clock, sometimes the user's local clock. Whatever clock you're using, be consistent about it.

Users can and do configure different time zones than where they're actually located, e.g. if someone in India is working with a team in the UK, they'll probably have Indian time on their local clock, but select UK in their user profile.

Also, some people travel.


I really think we need a standard way to handle linear time that is not affected by leap seconds but can be mapped back to calendar time (UTC).

The Unix Epoch does not handle leap seconds (time just rewinds). Google's time smearing approach is a great hack, but trades one issue for another.

This is a niche case in high resolution time, auditing and security considerations.
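A minimal sketch of such a mapping: carry a table of leap seconds and add the accumulated TAI-UTC offset to POSIX time (the table is truncated here; the full post-1972 list has 27 entries, and TAI-UTC has been 37 s since 2017-01-01):

    # (Unix timestamp when the new offset took effect, TAI-UTC after it)
    LEAP_TABLE = [
        # ...earlier entries omitted...
        (1435708800, 36),  # 2015-07-01
        (1483228800, 37),  # 2017-01-01
    ]

    def unix_to_tai_seconds(ts):
        """Map POSIX time (which pretends leap seconds don't exist) onto a
        linear TAI-like scale. Only valid from the first table entry on."""
        offset = 0
        for effective, tai_minus_utc in LEAP_TABLE:
            if ts >= effective:
                offset = tai_minus_utc
        return ts + offset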


Shouldn't past datetimes be stored as nanosecond offsets from the epoch, and future datetimes be stored as local time?

That way, we can tell the exact time something happened in the past, no matter how timezones/DST change, but we can also have future events stored according to people's expectations.


"Physicists are still debating on whether or not time actually exists in the universe."

I'm not aware of any physicists debating this. If time didn't exist, there wouldn't be a physical dimension for it, and everything would be at a fixed position in 3-dimensional space.


>If time didn't exist, there wouldn't be a physical dimension for it, and everything would be at a fixed position in 3-dimensional space.

It's not necessarily the case. Maybe the universe is just a tangled graph of events. Maybe the dimensionality of spacetime only emerges as an asymptotic behavior of this graph and breaks down at small scales. In this view time doesn't exist at a microscopic level.


I would be interested to see a proposal for a standard for establishing planetary time with the feature that you can easily convert from one planet to the other, but I have no idea how you would do that. What happens when we move to Mars with Elon? UTC would not be practical.


> Definitely check it out if you get really excited about reading whitepapers. Also get yourself checked out if that’s actually the case.

Highlight of the article! Even though I disagree with certain bits, the article itself helps explain how to think about handling time in a project.


A tiny bit of good news recently was North Korea getting rid of their fraction-of-an-hour different time zone and harmonizing with South Korea. Sadly, Australia and New Zealand are still some of the few holdouts with stupid pointless not-on-the-hour time zones :(


> UTC isn’t directly used by people (unless they’re really weird).

I find it best to stick with UTC. Otherwise there's too much ambiguity. Especially when we're all pretending to be somewhere that we're really not.


My current time is:

    1527698540
That seems easy enough for any application to deal with. Perhaps let each app or user convert that back to their ISO 8601 of preference.
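Which the stdlib will happily do, e.g.:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1527698540, tz=timezone.utc).isoformat())
    # 2018-05-30T16:42:20+00:00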


The one that has been a problem for me is comparing hourly breakdowns across DST. How do you compare last week vs a day with 25 hours in it?


There's a lot of really weird edge cases like that. A lot of the time you sort of have to make a judgement call.

There are cases where you have to make a judgement call, one way or another. Take the case where I invite you to a weekly meeting at 2pm. You live in a place with DST, I don't. DST happens and... is the meeting at 2pm, or 3pm, or what? Google Calendar (and many others) basically say whoever owns the event wins, so our "regular" meeting time would track however you go through life (at least in terms of DST), so times might change for you but not me, and vice versa.

Anyway, yeah, there's some really odd interactions once you start digging deeper.


We’re going to need a new data type: spacetime


I don't know if you were trying to make a joke or not, but you are absolutely right! With space travel, both are needed to tell time. Things get even more complicated quickly. Although at that point, things like 15-minute timezone offsets from UTC will seem like a merely terrestrial problem.


We have had some issues with timezones and DST... Time is not fun. Funny well written article - thanks!


Full screen images as a lead are the most annoying thing that can happen, right?

Wrong:

> zachholman.com/video/utc-title.mp4


Could you start making websites which are readable? Jumping icons, moving backgrounds, flickering colors, gray letters on colorful backgrounds - all of that makes it hard for me to read... so I'm sorry, I'm not going to read this.


I thought the 12-hour thing came from Mesopotamia, not Egypt.


I was blown away by the design on this blog, super slick!


Yet over here I find the design terrible: it lacks good user experience and is overall painful to use.


This is tougher to get than time in general relativity...


I always figured if spacetime was relative, why don't we just report the location we are at and that local time? If I know I was in Lower Manhattan at 2:00pm on November 19th, 2007, somebody else can just do a little math to figure out when that actually was compared to their own local time and place. All times are interpretive. UTC just seems like a giant hack to try to avoid the interpretation.


Whose clock is correct, though?


Whatever the town clock is set to.

I'm surprised the author didn't mention Railway time. The standardization of time did not begin in the USA in 1883; it began in England in 1840. Before then, sundials, and later local mean time, were used to determine local time. Railroads published almanacs with different local times and instructions on how to reset a watch along a route to know what time it was. However, the small changes in time, and even differently timed sunsets, created problems and accidents, so to solve this they used the newly invented telegraph service along the railroads' telegraph lines to synchronize clocks to the official time at Greenwich. This method of synchronizing time then spread to India and then the United States. It was in 1880 that a law passed in Great Britain finally made the new time official across the whole country.

However, as the attached article shows, neither GMT nor UTC solve everything. When humans are involved there will always be mistakes unless things are simplified for them.

