1. Time never goes backwards (as other people have pointed out, time zones break this).
2. UTC time never goes backwards (as other people have pointed out, leap seconds break this).
3. The system boot time never changes. On most platforms, the current time is defined as "boot time plus uptime", and setting the current time is performed by changing the boot time.
4. System uptime never goes backwards. Some platforms handle setting the current time by changing the system uptime.
5. POSIX's CLOCK_MONOTONIC never goes backwards. On some platforms and virtualization environments this can break with CPUs shared between virtual machines.
6. On systems without virtualization, CLOCK_MONOTONIC never goes backwards. On some platforms this can occur due to clock skew between CPUs.
Ok, I oversimplified. UTC time does not go backwards, but the value returned by POSIX time(3) -- which is supposed to be the number of seconds since 1970-01-01 00:00:00 UTC -- does. (Assuming you have sub-second precision, of course; time_t isn't required to be an integer type, and there are other APIs which access the same UTC-seconds clock and provide microsecond or nanosecond precision.)
The value returned by POSIX time is commonly said to be "POSIX time" or "UNIX time". This names a time system other than UTC. In my experience, programmers don't seem to confuse these two systems.
I have seen good programmers be surprised by the fact that POSIX time goes backwards in the event of leap seconds, as I was when I learned it. I think it would have at least as much punch if you edited your "falsehood 2" to be about UNIX time instead of UTC. As a bonus, you would also be correct ;)
> 5. POSIX's CLOCK_MONOTONIC never goes backwards. On some platforms and virtualization environments this can break with CPUs shared between virtual machines.
> 6. On systems without virtualization, CLOCK_MONOTONIC never goes backwards. On some platforms this can occur due to clock skew between CPUs.
Could you explain these situations in more detail? Or cite a source I can take a look at?
CLOCK_MONOTONIC is what I use for timing quite often. I tend to do soft real time stuff and that clock seems the best suited for my tasks.
(5) is just a specific instance of the general principle "virtualization screws everything up". The most common issue is with virtualization systems trying to hide the fact that time is being "stolen" by the hypervisor and/or other domains.
(6) is a case of "synchronization is really hard" combined with "benchmarks measure system performance, not system correctness". Most high-performance timing these days involves reading an on-die clock counter, scaling, and adding a base (boot time) value. For that to work on SMP, the clocks need to be synchronized -- and they don't start that way, since CPU #0 is enabled first and does some hardware probing before it turns the other CPUs on. Even worse, on many platforms, power-saving features will slow down the clock, resulting in the counters getting out of sync.
As alexs says, CLOCK_MONOTONIC should be monotonic... but in reality, it's much faster to return a mostly-good-enough value. In FreeBSD, in addition to CLOCK_{UPTIME,REALTIME,MONOTONIC}, we have _FAST and _PRECISE variants of each so that applications can choose between accuracy and performance.
> CLOCK_MONOTONIC is what I use for timing quite often. I tend to do soft real time stuff and that clock seems the best suited for my tasks.
As long as you avoid virtualization, turn off all power-saving features, and your "soft real time" can tolerate non-monotonicity on the sub-microsecond scale, you should be safe.
If CLOCK_MONOTONIC goes backwards your platform's implementation is broken. As defined in POSIX it does not ever go backwards. It counts the time since an unspecified point in the past that never varies after system start-up.
If your process is rescheduled to a different CPU, it must still go forwards regardless of TSC variance between the CPUs.
Of course if your uptime hits 68 years or so, the clock will wrap. If your app can't have any downtime in 68 years though I hope you've got the budget to think about this sort of thing :)
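For concreteness, here's a minimal sketch of reading the two POSIX clocks this sub-thread is about; it only assumes clock_gettime(2) is available:

    /* CLOCK_REALTIME is the settable wall clock; CLOCK_MONOTONIC counts
       time since an unspecified point and is the one intended never to
       go backwards. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec wall, mono;
        clock_gettime(CLOCK_REALTIME,  &wall);   /* may jump if someone sets the clock */
        clock_gettime(CLOCK_MONOTONIC, &mono);   /* time since an unspecified start */
        printf("realtime:  %lld.%09ld\n", (long long)wall.tv_sec, wall.tv_nsec);
        printf("monotonic: %lld.%09ld\n", (long long)mono.tv_sec, mono.tv_nsec);
        return 0;
    }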
And it doesn't even mention all the bizarre things that have been done (for reasons good and bad) to time by various governments. Like adjusting from local solar time to standard GMT offset timezones (which involves skipping a given number of minutes and seconds or having them twice). Or introducing/abolishing/moving around daylight savings time. Or "super daylight savings time" with a 2 hour offset. Or moving from one side of the international date line to the other. And of course the real biggie: the Gregorian calendar reform that various countries adopted at different times between 1582 and the 1920s, skipping between 10 and 13 days depending on when they adopted it.
Oh man, I read that article years ago, lost the link and have been unable to find it since. Thanks! Any technical document which starts with "The measurement of time has a very long history, dating back to the first records of human civilization" and has a section titled "Political Time" gets a special place in my heart.
Also, computer systems have different Gregorian calendars. Your system could use the proper Gregorian calendar while a system you're communicating with uses the proleptic Gregorian calendar. Most people don't notice the difference because dates before 1582 aren't frequently slung around in the modern world, but if one system treats an "unset" date as 1/1/1, transmitting it to the other can produce an invalid date like 12/30/0.
Not to mention dealing with the numerous differences in when the Gregorian calendar was adopted, which is great fun if you need to deal with dates even back to the early 1900's across borders.
E.g. the "October revolution" falls in November - Russia didn't change until the Bolcheviks took power in 1917. And Greece didn't switch until 1923... China switched in 1912, but different factions in the civil war used different systems and it wasn't until 1929 they got a single (Gregorian) calendar again.. And there are many other countries that switched "recently".
My favorite is when Daylight Savings Time started after an election in Brazil, but before the date for the election runoffs. Turned out the voting machines couldn't be changed to handle the time change. Solution? They just pushed back the date when DST started until after the runoffs.
http://statoids.com/tbr.html -- "Note that the government frequently changes its mind [about DST] at the last minute."
Let's keep some perspective. The vast majority of software applications don't use dates from the 18th century, so it's fine for your app to assume that the Gregorian calendar always was and always will be.
...except that there were countries that weren't using the Gregorian calendar until well into the 20th Century, so you can't even reliably deal with dates as recent as the mid-1900s across borders under that assumption.
Imagine a historical database of Russian birthdays, copied from historical archives by data-entry clerks who had no idea when Russia switched from the Julian to the Gregorian calendar. (For extra fun, imagine that half of the clerks doing the data entry converted the dates before entering them, and half didn’t.)
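If you ever have to write a converter for archives like that, the size of the Julian-to-Gregorian gap follows a simple rule. A rough sketch (the function name is just illustrative, and it glosses over the edge case of late February in Julian century years):

    /* Days to add to a Julian date to get the Gregorian date, valid for
       years from 1582 onward: 10 days in 1582, 11 in 1752, 13 in 1918. */
    int julian_gregorian_gap(int year) {
        int c = year / 100;        /* century number */
        return c - c / 4 - 2;
    }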
Software makes assumptions, though most decent time libraries will correctly combine locale and date to show the skipped days. It might be a bit lazy though; running "cal sept 1752" will show the same output for all locales as far as I can tell from the man page.
The comment about KVM in CentOS is probably inaccurate--not sure what it's referring to.
In the CentOS 5.4-ish timeframe, there were a lot of clock-related problems. One of them had to do with frequency scaling...
PCs have many time sources. The processor has its own internal clock that ticks at a very fast rate (nanoseconds). There's the wallclock time, which ticks at a slow rate (seconds). The internal clock starts at 0 when the system boots, so it can't be used for wallclock time without adjustment.
Some operating systems (like Linux) get the boot time from the real-time clock (slow tick rate) but then compute the current time by adding the CPU internal clock to it.
The CPU internal clock (TSC) can be wildly inaccurate for various reasons. One of them is frequency scaling which actually changes the frequency of the TSC dynamically. Unfortunately, if you're changing the frequency of the TSC on the host, guests that are running and accessing the TSC directly don't realize this has happened.
So if you scale the TSC frequency by 50%, time starts moving 50% more slowly. BIOS can also scale processor speed on some servers without the OS knowing which can lead to the same problem on bare metal.
More modern processors now have fixed TSC frequencies and KVM now has a paravirtual clock source both which address this problem.
BTW, Windows does not use the TSC as a time source so Windows typically won't have this problem (although it has other problems).
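On Linux you can at least check which of these time sources the kernel settled on. A minimal sketch -- the sysfs path is the standard Linux location and won't exist on other systems:

    #include <stdio.h>

    int main(void) {
        /* "tsc", "hpet", "kvm-clock", ... depending on hardware and hypervisor */
        FILE *f = fopen(
            "/sys/devices/system/clocksource/clocksource0/current_clocksource", "r");
        if (!f) { perror("clocksource"); return 1; }
        char buf[64];
        if (fgets(buf, sizeof buf, f))
            printf("current clocksource: %s", buf);
        fclose(f);
        return 0;
    }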
N. The offsets between two time zones will remain constant.
N+1. OK, historical oddities aside, the offsets between two time zones won't change in the future.
N+2. Changes in the offsets between time zones will occur with plenty of advance notice.
N+3. Daylight savings time happens at the same time every year.
N+4. Daylight savings time happens at the same time in every time zone.
N+5. Daylight savings time always adjusts by an hour.
N+6. Months have either 28, 29, 30, or 31 days.
N+7. The day of the month always advances contiguously from N to either N+1 or 1, with no discontinuities.
Explanations:
(N)-(N+2): There exist enough jurisdictions in the world that time zone changes occur often enough to require regular updates to the time zone database -- more frequently than many distributions make releases.
(N+3)-(N+5): Jurisdictions change DST policies even more frequently than they change time zones.
(N+6)-(N+7): September 1752 had 19 days: 1, 2, 14, 15, ..., 29, 30.
> September 1752 had 19 days: 1, 2, 14, 15, ..., 29, 30.
That's only in the British Empire. Other countries moved to the Gregorian Calendar at different times. It began being adopted on 15 October 1582 (the day after 4 October 1582) in some Roman Catholic countries (Spain, Portugal, Italy, Poland). Russia didn't switch until 1918.
So, you could also have:
N+8. There is only one calendar system in use at one time.
And the list hasn't actually covered the Gregorian calendar at all, since many people may not know the actual leap year rule.
N+9. There is a leap year every year divisible by 4.
The actual rule is that there is a leap year every year divisible by 4, except those divisible by 100, with a further exception to that exception for years divisible by 400 (so those years are leap years after all). The year 2000 is one such year divisible by 400, so since the invention of computers we have not yet passed a year that is divisible by 4 but is not a leap year; 1900 was the last and 2100 is the next.
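The full rule fits in one line of code; a minimal sketch:

    #include <stdbool.h>

    /* Gregorian leap year: divisible by 4, except centuries,
       except centuries divisible by 400. */
    bool is_leap_year(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }
    /* is_leap_year(1900) == false, is_leap_year(2000) == true */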
It was a bug in Lotus 1-2-3 that was intentionally implemented into Excel, so technically it was a backwards-compatibility "feature". And some items from the original list aren't exactly "truths" as much as implementations or configurations, e.g.
> 19. The system clock will never be set to a time that is in the distant past or the far future.
N+8. It will be easy to calculate the duration of x number of hours and minutes from a particular point in time.
The reason this is impossible on Windows and downright awkward on Unix is that DST rules change from time to time. Unless you have something like tzinfo, you cannot work out a particular date and time by merely adding a duration to another date and time in the past.
Instead, you must work out the timezone you are doing the calculation in, then work out whether the duration needs to incorporate a DST change, which involves working out what that DST changeover date and time was...
Of course, it's impossible on Windows because Microsoft only gives the start and end datetime offsets for the current year. Amazingly, after all the patches they've had to issue to fix DST changes over the years, they still haven't implemented a Windows equivalent of tzinfo. And may never do so, even though they really ought to, given they are one of the leaders in calendaring software.
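Back on the POSIX side, here's a minimal sketch of why "three hours later" has two defensible answers across a DST change. The America/New_York zone and the 2012 spring-forward date are only illustrative assumptions; any tzdata-backed libc should behave the same way:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        setenv("TZ", "America/New_York", 1);   /* assumption: a zone that observes DST */
        tzset();

        /* 00:30 local on 2012-03-11; clocks jump from 02:00 to 03:00 that night */
        struct tm start = {0};
        start.tm_year = 2012 - 1900;
        start.tm_mon  = 2;      /* March (0-indexed) */
        start.tm_mday = 11;
        start.tm_hour = 0;
        start.tm_min  = 30;
        start.tm_isdst = -1;    /* let the library resolve DST */
        time_t t = mktime(&start);

        /* (a) elapsed time: add 3*3600 seconds to the time_t */
        time_t elapsed = t + 3 * 3600;

        /* (b) wall-clock arithmetic: add 3 to tm_hour and renormalize */
        struct tm wall = start;
        wall.tm_hour += 3;
        wall.tm_isdst = -1;
        time_t wallclock = mktime(&wall);

        char a[64], b[64];
        strftime(a, sizeof a, "%F %T %Z", localtime(&elapsed));
        strftime(b, sizeof b, "%F %T %Z", localtime(&wallclock));
        printf("elapsed:    %s\n", a);   /* 04:30 EDT -- three real hours later */
        printf("wall clock: %s\n", b);   /* 03:30 EDT -- only two real hours later */
        return 0;
    }

With a tzinfo-backed libc, mktime with tm_isdst = -1 does the changeover lookup for you; without that data you have to supply it yourself.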
My favorite eye-stabby example of this was parsing RSS feeds (with meaningless timezone abbreviations!) from sites in the southern hemisphere. Depending on the date and various last-minute laws (thank you, President Chavez), the differential would be 0, +1, +2, +2.5 or +3 hours between various points.
Actually, almost no country outside the British Empire (and not even Scotland inside it) implemented the Gregorian reform at the same time. Britain was pretty late in doing so, too.
As for "all modern computer systems", Unix time is completely ignorant about anything except seconds.
> Unix time is completely ignorant about anything except seconds
That's not correct, actually: UNIX time tracks UTC rather than TAI, meaning it "corrects" for leap seconds. As a result, UNIX time is not "the number of seconds since the epoch" but "86400 * (number of whole days since the epoch) + (number of seconds since midnight)". UNIX time can jump forwards (on a negative leap second, which has never happened so far) and goes backwards on positive leap seconds: in most implementations a second repeats as the day goes from 23:59:60 to 00:00:00, since both have the same timestamp.
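The arithmetic is easy to check for the 2012-06-30 leap second; a quick sketch (the day count is just the number of whole days from 1970-01-01 to 2012-06-30):

    #include <stdio.h>

    int main(void) {
        long days = 15521;   /* whole days from 1970-01-01 to 2012-06-30 */
        long leap_second   = days * 86400 + 23*3600 + 59*60 + 60;  /* 23:59:60 UTC */
        long next_midnight = (days + 1) * 86400;                   /* 00:00:00 UTC */
        printf("%ld %ld\n", leap_second, next_midnight);           /* both 1341100800 */
        return 0;
    }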
I'm not sure this counts as a constructive comment, but an interesting anecdote anyway - an early coding partner of mine once wrote a procedure he expected to run once exactly every second by continuously polling the time in a while/do_nothing loop until exactly one second had passed. It took some convincing to get him to accept that try as he might, it was very unlikely that "==" was what he wanted there.
The best part was that this was in Javascript (and obviously he was not using a timeout). The entire page would lock up while it waited for the second to elapse. He never even figured out the obvious usability concern because he was so confounded by the fact that the procedure wasn't getting called after a second had passed.
Ok, to be fair, it was a freshman year programming class - not an unexpected or even unusual mistake. But it gave me a chuckle just now remembering it.
Somewhat related, but I've seen a lot of retry-with-a-timeout loops that will either throw a spurious timeout error or wait for a very long time if the system clock just happens to jump at the right time.
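The usual fix is to measure the deadline against a monotonic clock rather than the wall clock; a minimal sketch (POSIX, and op() stands in for whatever operation you're retrying):

    #include <time.h>

    /* Retry op() until it returns 0 or timeout_sec of monotonic time has
       elapsed.  A wall-clock step (NTP, manual set) neither cuts the wait
       short nor stretches it out. */
    int retry_for(int (*op)(void), double timeout_sec) {
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;) {
            if (op() == 0)
                return 0;                                  /* success */
            clock_gettime(CLOCK_MONOTONIC, &now);
            double elapsed = (now.tv_sec - start.tv_sec)
                           + (now.tv_nsec - start.tv_nsec) / 1e9;
            if (elapsed >= timeout_sec)
                return -1;                                 /* timed out */
            struct timespec backoff = { 0, 100 * 1000 * 1000 };  /* back off 100ms */
            nanosleep(&backoff, NULL);
        }
    }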
Another amusing anecdote: I got an old PowerBook out of storage and booted it up. The battery had run flat, so the clock reset to 1970. Spotlight noticed this and decided it had to reindex the entire drive. At some point during all of this, NTP kicked in and reset the clock to the correct year. I wanted to see how long Spotlight would take to finish indexing, so I popped down the menu. The progress bar indicated that it was about halfway done, and based on how long it had taken so far, it estimated that only another 40 years would be required to finish!
This was hilarious for people who had booked tickets on flights out on the 30th. Do you leave on the 29th or the 31st...? This was a real problem for the airlines and headache for a number of travellers.
36: Contiguous timezones are no more than an hour apart. (aka we don't need to test what happens to the avionics when you fly over the International Date Line)
If you really want to stress test time handling code, try to get it to accept February 29, 1900 and February 29, 2000. The first doesn't exist but the second does.
But really, with regards to time, use whatever library came with your programming language.
Well, you don't know if 31.12.2015 will have a 23:59:60 or not. These seconds are announced just prior to being added; the system couldn't possibly know it.
June is already decided. We'll know either way about the December second when the next bulletin C is published, sometime next July: http://hpiers.obspm.fr/iers/bul/bulc/
I was reading this and realizing that he was mixing two very different things: design considerations and design errors. Treating every year as 365 days is a design error: leap years will break it. "Timezones next to each other don't require changes of more than 1 hr" might also doom that F22 flying across the international date line (ok, I am assuming that was more of a test assumption than a code assumption).
On the other hand, requiring that clocks be set to within, say, five minutes (Kerberos 5) is a design consideration. This is why Kerberos is usually used with something like NTP.
But beyond this there are a lot of things you can't do (like expire cookies) if you don't assume that client and server have similar times on their clocks. Sometimes it makes sense to require things you can't assume to always be true.
> requiring that clocks be set to within, say, five minutes (Kerberos 5) is a design consideration.
Right. I love this. You can't assume it unless you document it as a requirement and make someone else make it true for all the systems your software runs on.
Never forget that specifications are a contract, an agreement entered into between the implementer and the user. If either side lets down their end, the agreement is void and the software can only fail. Either.Side.
Even if you have a mathematical proof that your software is correct, it's still only correct given certain assumptions taken as axioms in the proof. Violate those axioms and your software can't be held responsible.
> Even if you have a mathematical proof that your software is correct, it's still only correct given certain assumptions taken as axioms in the proof. Violate those axioms and your software can't be held responsible.
"Beware of bugs in the above code; I have only proved it correct, not tried it." - knuth
This sort of stuff is why I scream "NNNNOOOOOOOOOOOOOO!!!!!!" when people store date/time as integers (i.e. "unix timestamps") when almost every database and programming language has a proper date or datetime data type, and a boatload of library functions that correctly compute differences between dates, handle leap years and time zones, etc. Well maybe not always correctly, but it's more likely that YOU will screw up your date math than it is that the well-tested date library will.
IMHO you will change your tune with more time and experience; numerics are often more portable and unambiguous than string or serialized-object alternatives... e.g., if you pass around datetimes as "int64 count of 100-ns intervals since 1-1-1601 UTC" there is little opportunity for someone who doesn't know how to use it to get a quasi-usable yet incorrect datetime out of it.
Also note, there are plenty of database systems that either have no proper datetime datatype, or have primarily datetimes that include no TZ info.
The '100ns ticks' example is actually the Windows FILETIME, not my own invention. ISO 8601 is fine and dandy for stringified datetimes, although it's my position that strings open the door to more errors when consumed or produced by lazy/poor programmers.
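For what it's worth, the FILETIME-to-Unix conversion is just a fixed offset and a scale; a minimal sketch (the 11644473600-second gap between the 1601 and 1970 epochs is the standard constant):

    #include <stdint.h>
    #include <stdio.h>

    #define TICKS_PER_SECOND   10000000ULL      /* 100-ns ticks per second */
    #define EPOCH_DIFF_SECONDS 11644473600ULL   /* 1601-01-01 -> 1970-01-01 */

    int64_t filetime_to_unix(uint64_t ticks) {
        return (int64_t)(ticks / TICKS_PER_SECOND) - (int64_t)EPOCH_DIFF_SECONDS;
    }

    int main(void) {
        /* 1970-01-01 00:00:00 UTC expressed as 100-ns ticks since 1601 */
        uint64_t ticks = EPOCH_DIFF_SECONDS * TICKS_PER_SECOND;
        printf("%lld\n", (long long)filetime_to_unix(ticks));   /* prints 0 */
        return 0;
    }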
> when almost every database and programming language has a proper date or datetime data type, and a boatload of library functions that correctly compute differences between dates, handle leap years and time zones, etc.
In Unix-land, this data type and boatload of library functions is the operating system. The system provides and deals with local time conversion when necessary. If your application isn't very involved with time (eg. it's not a calendar or scheduling application) then it is sensible to use Unix timestamps.
This ties you to a Unix OS, which isn't usually too bad a decision since other more important things do as well. On the other hand, using your programming language or database ties you to that database or language, which is arguably worse.
> In Unix-land, this data type and boatload of library functions is the operating system.
Which can lead to problems as well. Basically when dealing with "local" time (DST), you need a reliable source of data and the question boils down to whether you want system administrators to keep OSes patched and up-to-date every time tzdata is updated, or whether that should be handled at the application layer.
Java chooses to bake in tzdata, and so do a number of other app-layer platforms. JS in the browser relies on the OS instead of bundling binary tzdata. Windows and a few commercial *nix platforms do not bundle tzdata either and maintain their own definitions. The crowd-sourced tzdata has demonstrated itself better than even what Microsoft ships in Windows. I don't know anyone who would claim a dataset other than tzdata is "better".
There might come a point in time where browsers have sufficiently advanced automatic patching invisible to the end user that they would be better off bundling tzdata internally rather than relying on the OS because they can guarantee their data is better in a higher percentage of cases. It all comes down to whose update system is more seamless and more likely to occur given all the external factors that come into play. (e.g., a user might have local machine privileges to apply browser updates, but not OS updates)
I've always thought that the GNU date command captured this confusion well.
On Linux, if you run `info date` and go to "input date formats":
Our units of temporal measurement, from seconds on up to months,
are so complicated, asymmetrical and disjunctive so as to make
coherent mental reckoning in time all but impossible. Indeed, had
some tyrannical god contrived to enslave our minds to time, to
make it all but impossible for us to escape subjection to sodden
routines and unpleasant surprises, he could hardly have done
better than handing down our present system. It is like a set of
trapezoidal building blocks, with no vertical or horizontal
surfaces, like a language in which the simplest thought demands
ornate constructions, useless particles and lengthy
circumlocutions. Unlike the more successful patterns of language
and science, which enable us to face experience boldly or at least
level-headedly, our system of temporal calculation silently and
persistently encourages our terror of time.
... It is as though architects had to measure length in feet,
width in meters and height in ells; as though basic instruction
manuals demanded a knowledge of five different languages. It is
no wonder then that we often look into our own immediate past or
future, last Tuesday or a week from Sunday, with feelings of
helpless confusion. ...
-- Robert Grudin, `Time and the Art of Living'.
How about some calendaring issues I'm sure any Israeli is familiar with:
1. Weeks start on Monday.
2. Days begin in the morning.
3. Re: 2, holidays span an integer number of whole days.
Explanations:
1: In Israel, the week starts on Sunday. Most programs have support for changing the "start of week day". Most programs.
2-3: In the Jewish calendar, the day starts when the moon comes out. This means that holidays that most calendars write as "Wednesday" will actually start on Tuesday night, and last until Wednesday night.
Or anyone living in the Arab Middle East where:
1. Weeks start on a Sunday
2. Friday and Saturday are the weekend, except where it's Thursday and Friday
3. Some countries have changed their weekends in the last 10 years to Fri/Sat to make doing business internationally easier
4. Not everyone has a two-day weekend
5. Religious holidays depend on moon sightings and cannot be precisely predicted ahead of time
Actually, in Jewish tradition, children are taught that the day begins at sunset. The justification comes from Genesis... "And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day."
The terminator is only that fast near the equator. Further north, " ... it is possible to walk faster than the terminator at the poles, near to the equinoxes. The visual effect is that of seeing the Sun rise in the west."
N. The local time offset (from UTC) will not change during office hours.
I got bitten by this once when developing a scheduling tool for project management. Since daylight-saving offset changes were always close to midnight, I assumed they would not occur while anyone was using my program (and there was no point in using it outside "office hours.")
Then a technician on a night shift used my program on the night DST kicked in, it went into an infinite loop and took down the CRM database server, disabling automatic software updates and an (unrelated) DRM system.
In Vernor Vinge's A Deepness in the Sky, the Unix epoch is still used thousands of years in the future, but "programmer archaeologists" of the time mistakenly believe that was the date when humans first landed on the moon. They also measure time in kiloseconds and megaseconds because "day" or "week" don't mean much for interstellar travelers.
* floats are ever a good idea for storing time. a system i use (no, not excel) has one of its time types defined as a double of days since their epoch. the problem is that it's universally represented in the interface as a timestamp to millisecond precision, and many different values may have the same string representation.
i just ran a quick test, and for a specific millisecond around now, about 13,000 distinct timestamps have the same string representation. if you use that string representation as a serialization of any one of those timestamps, it will always map back to a single float value, which will be only one of those 13,000, meaning the others aren't round-trippable.
the system implements a comparison tolerance for floating point numbers, but this helps only slightly, as only about 1100 of those 13,000 test as equal to the one you get if you enter it as a string.
the end result is that you can have data printing to the screen that you can't actually find in the system because its string representation doesn't match its internal one due to precision issues.
(the solution is not to use the type--they deprecated it in favor of one based on longs of nanos several years ago.)
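you can reproduce a count in that ballpark by walking adjacent doubles across one millisecond; a quick sketch (the ~7500-day value is just an assumed distance from that system's epoch, not anything measured from the real system):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double t = 7500.0;                   /* assumed days since the epoch */
        double one_ms = 1.0 / 86400000.0;    /* one millisecond, in days */
        long n = 0;
        for (double x = t; x < t + one_ms; x = nextafter(x, INFINITY))
            n++;
        printf("%ld distinct doubles in one millisecond\n", n);  /* roughly 12,700 */
        return 0;
    }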
Who believes these things? February is always 28 days long? Any 24-hour period will always begin and end in the same day (or week, or month)? A week always begins and ends in the same month?
I thought about the same thing. I choose to read the title as "Falsehoods programmers seem to believe about time". Or "act as if they believe" in place of "seem to believe".
I can easily imagine somebody introducing a bug that relies on a 24 hour period ending in the same month as it began.
He's talking about test code. So if those test cases are missing, but the programmer thinks the tests are thorough, then it looks like the programmer believes those things (or forgot that they weren't true).
I work on a calendar application and I can attest to all kinds of issues dealing with time, dates, and time zones. It's very difficult to get right and most of the time we just hope that for our purposes it's close enough.
A big issue is dealing with timezone conversions, especially because different applications represent time zones with different English-language versions of the names, like "US Mountain Standard Time" (used by Outlook) being the same as "US/Arizona" (used by PHP among others).
> "US Mountain Standard Time" (used by Outlook) is the same as "US/Arizona" (used by PHP among others).
It's not quite that simple. During Daylight Saving Time, Arizona stays on US Mountain Standard Time; i.e., it's the same as Pacific Daylight Time and one hour behind Mountain Daylight Time. During the rest of the year, Arizona is on the same time as the rest of Mountain Standard Time, or one hour ahead of Pacific Time.
the "country/(city/region)" notation is from zoneinfo, the standard unix timezone system. "X standard time" (and "X daylight time") are the common names most people in america use when referring to timezones.
zoneinfo's form has the benefit of having a nice, unambiguous way to refer to the various daylight-saving time exceptions (arizona, indiana, hawaii, etc.).
recently i've seen huge timezone lists that basically throw in the kitchen sink--they'll have the entire zoneinfo set, the common american names, miscellaneous other common regional names (euro, australia, etc.) and raw whole-hour offsets as well. makes for a long drop-down to navigate....
I'd question #34 on this list somewhat: while formats like mm/dd/yyyy and dd/mm/yyyy definitely allow ambiguous interpretations, the ISO format yyyy-mm-dd seems fairly unambiguous; I've never seen any instance of yyyy-dd-mm floating around to confuse it with.
It's ambiguous for human users who are not familiar with the format. Some people may not even recognise it as a date, in some contexts. (2012-06-19, that's 1,987 right?)
35. People will just assume all APIs and standard libs that deal with dates are sane
Adobe managed to build a pretty shitty Date object for ActionScript that takes days as a 1-indexed parameter, but months as 0-indexed. Hopefully ActionScript is not used in banking [1]
This is not that insane, or at least relatively insane, because this matches the behavior of localtime(). I have heard the reason for using a 0-indexed month is for the convenience of mapping to an enum of month names.
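For reference, a minimal sketch of what localtime() hands back -- tm_mon is 0-11 while tm_mday is 1-31, which is exactly the mismatch being complained about:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *tm_now = localtime(&now);
        /* tm_mon is 0-indexed, so add 1 for display; tm_mday is already 1-indexed */
        printf("%04d-%02d-%02d\n",
               tm_now->tm_year + 1900, tm_now->tm_mon + 1, tm_now->tm_mday);
        return 0;
    }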
IMHO a default month of January makes rather little sense, and would tend to facilitate subtle bugs... it seems you'd want the user to specify a month always, or maybe have a default of "indeterminate" or "Nevember"; the first position of an enum is very often the "nothing" case.
The way you're thinking about it, yes -- but the passage of a month may take you from December the [mumbleth] to January the [mumbleth], which are in different years. For some strange reason, code in the wild doesn't always account for that, so the due date for an item may well be eleven months before the request was entered.
If the tests are failing because the production code is confused about daylight savings time, then you may have a serious problem.
If on the other hand it's just the test code that's flaky, usually such problems can be alleviated by refactoring such that one passes the time function as a parameter. This is in preference to hard-coding calls to the system time in test, which imho one should never do.
Once an arbitrary time function can be passed as a parameter, one can provide a mock or fake system clock for test purposes. Ideally one would still want to test under daylight savings' conditions. But at the least this approach leaves one in a place where one can test the common 24-hours-in-a-day case without having the tests spuriously fail two days out of every year.
Between 1582 and 1752 England had two different New Year's Days, January 1 and March 25, used for different purposes. The same day had different years depending on who you were talking to. See http://en.wikipedia.org/wiki/Dual_dating.
You can sum most of it up with: the belief that calendrical (human-conventional) time is identical to, or acts the same as, or is inherently related to, physical time.
"7. 7.A week (or a month) always begins and ends in the same year."
Not sure what the (or a month) is doing there. I'm pretty sure the end of december is the end of the year and the beginning of January is the beginning of the year. A month can't span multiple years...can it?
You are right. I said "a month" but I meant something more like "a period of 28 days" or "a month-long period." I'll think about how to word this more clearly.
1) You're (possibly) assuming that both timestamps come from the same machine. They could be from two machines (Server timestamps a transaction start, client timestamps the end.) Clocks are not accurate, so the delta time is not correct.
2) Time A is before a DST change forward or back, Time B is after. Delta would be wrong by +/- 1 hour (assuming all other factors are tracking with accuracy).
3) Both times are taken on the same machine, but far enough apart that clock drift plays a factor.
4) Both times are taken on the same machine, but an ntptimesync cron job kicked off in between them and adjusted the system clock.
Timestamps are not affected by (or aware of) DST. In fact that's one reason why it's so easy to compute the time elapsed between two timestamps. From [0]:
$ ./timetool 1130647000
1130647000 = Sunday Oct 30, 2005 00:36 EDT
1130647600 = Sunday Oct 30, 2005 00:46 EDT
1130648200 = Sunday Oct 30, 2005 00:56 EDT
1130648800 = Sunday Oct 30, 2005 01:06 EDT
1130649400 = Sunday Oct 30, 2005 01:16 EDT
1130650000 = Sunday Oct 30, 2005 01:26 EDT
1130650600 = Sunday Oct 30, 2005 01:36 EDT
1130651200 = Sunday Oct 30, 2005 01:46 EDT
1130651800 = Sunday Oct 30, 2005 01:56 EDT
1130652400 = Sunday Oct 30, 2005 01:06 EST
1130653000 = Sunday Oct 30, 2005 01:16 EST
1130653600 = Sunday Oct 30, 2005 01:26 EST
1130654200 = Sunday Oct 30, 2005 01:36 EST
1130654800 = Sunday Oct 30, 2005 01:46 EST
1130655400 = Sunday Oct 30, 2005 01:56 EST
1130656000 = Sunday Oct 30, 2005 02:06 EST
1130656600 = Sunday Oct 30, 2005 02:16 EST
1130657200 = Sunday Oct 30, 2005 02:26 EST
$
So it's down to the accuracy (and synchronicity) of the clocks used to measure the timestamps: The difference between two timestamps is an accurate measure of the time elapsed (but see below), but that's probably only useful to you if the timestamps themselves are accurate.
However, leap seconds -- during which time passes, but the timestamps typically do not -- and numerical issues stemming from truncation and subtraction do have a systematic impact and reduce the accuracy of the difference. You can address the former for timestamps in the past by simply taking into account the leap seconds; you can address the latter by using higher resolution timestamps, ie. using millisecond timestamps if you need better-than-second-accuracy for the amount of time elapsed.
True, but that depends on what exactly you're reading as a timestamp.
Using localtime in perl, for example (a very common method to read the system time), does not return a timestamp; in scalar context it returns a formatted string (see: http://perldoc.perl.org/functions/localtime.html) that could easily be thrown off by a DST changeover.
#4 is why you really should be using ntpd(8): after synchronization, NTP will try to slew the clock if the offset is less than 128ms, step it if the offset is between 128ms and the panic threshold of 1000s, and will exit with an error if the offset grows beyond 1000s.
Also, any other change to the system clock. NTP will either slew or step the system clock to keep it synchronized with the upstream time source. And the user can explicitly set the clock, as well. See also assumptions 9 and 10.
What moskie explained, but it got me thinking - what if your laptop somehow always synchronizes with the current zone, and you are in an airplane flying east->west, or west->east?
points in time are relative to some fixed point, and are (more or less) dimensionless. durations are not, and are (more or less) vectors. this has a couple consequences: durations are independent of epoch, while points aren't, and only certain types of math make sense with each.
basically, the only thing you can do with two points is subtract them (yielding a duration)--the rest of arithmetic (including addition) is meaningless. the only things you can do with a point and a duration is add or subtract them (yielding a point). you can't do anything at all with a point and a dimensionless scalar. the only things you can do with two durations is add or subtract them (yielding a duration) or divide them (yielding a scalar). the only things you can do with a duration and a scalar is multiply or divide them.
(personally i'd say that even the commutative operations shouldn't necessarily be commutative--i'd say point+duration->point, but duration+point->undefined--but that may be a bit too strict.)
as a quick rule of thumb, if your code would break if you changed epochs, it's already broken.
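a sketch of that rule of thumb as types -- separate wrappers so the compiler rejects the meaningless combinations (names here are purely illustrative):

    #include <stdint.h>

    typedef struct { int64_t ns_since_epoch; } instant;   /* a point in time */
    typedef struct { int64_t ns; }             duration;  /* a span of time  */

    /* point - point -> duration */
    static duration between(instant a, instant b) {
        return (duration){ a.ns_since_epoch - b.ns_since_epoch };
    }
    /* point + duration -> point */
    static instant shift(instant p, duration d) {
        return (instant){ p.ns_since_epoch + d.ns };
    }
    /* duration * scalar -> duration */
    static duration scale(duration d, int64_t k) {
        return (duration){ d.ns * k };
    }
    /* deliberately no (instant, instant) -> instant and nothing taking (instant, scalar) */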
If you are moving faster than the speed of light, you perceive whatever you are moving away from in reverse, because the light you are seeing was generated before you left (you are passing into light that is older than the light you started with). In terms of space-time, you would then be going backwards in time, if your point of reference for time was the Earth (and hint, given that we have all these nuanced time thingamajigs, it is).
Heh for this I like Cheops' Maxim: "nothing is ever built on time or within budget." There's also Hofstadter's Law: "it always takes longer than you expect, even when you take into account Hofstadter's law."
This is why I really like logical clocks: by abandoning the primitive and outdated concept of "time" altogether, they allow you to deal reliably with causality, which is often all you're really interested in.
That said, UTC timestamps since the epoch are one of the more straightforward ways of dealing with time, if you must sully your hands with the foul concept of time at all.
Since we add leap seconds, and UTC includes those leap seconds, it's hard to know "time since the epoch". For example, unix/posix time is not the number of seconds since the epoch.
> That said, UTC timestamps since the epoch are one of the more straightforward ways of dealing with time, if you must sully your hands with the foul concept of time at all.
Definitely. Leave their rendering to the experts, but to store times and dates in any other way is indefensible.
Can someone explain when this isn't true? Is he referring to leap seconds, or local timezone DST changes? Or something more interesting I'm failing to think of?
DST changes are a big issue. A simple example from real life - there is a system that takes a measurement every hour, 24/7. Make a report that prints a table of the historical measurements for each day. Does your report show correctly that some days have 24 rows, some have 25 rows and some 23?
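A minimal sketch of getting the row count right -- the length of a local calendar day computed from midnight-to-midnight time_t values (America/New_York is just an example zone with DST):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static time_t local_midnight(int y, int m, int d) {
        struct tm tm = {0};
        tm.tm_year = y - 1900;
        tm.tm_mon  = m - 1;
        tm.tm_mday = d;
        tm.tm_isdst = -1;                       /* let the library resolve DST */
        return mktime(&tm);
    }

    int main(void) {
        setenv("TZ", "America/New_York", 1);    /* assumption: a zone with DST */
        tzset();
        /* 2012-03-11 (spring forward) and 2012-11-04 (fall back) */
        printf("%ld hours\n", (long)(local_midnight(2012, 3, 12)
                                   - local_midnight(2012, 3, 11)) / 3600);  /* 23 */
        printf("%ld hours\n", (long)(local_midnight(2012, 11, 5)
                                   - local_midnight(2012, 11, 4)) / 3600);  /* 25 */
        return 0;
    }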
Also don't forget that different time zones convert to DST at different offsets, and some places don't observe DST altogether. Oh, and some places go backwards.
15.9.1.2 Day Number and Time within Day
A given time value t belongs to day number
Day(t) = floor(t / msPerDay)
where the number of milliseconds per day is
msPerDay = 86400000
N+?? The time library in your programming language is correct.
Every time library I've ever dealt with will have serious problems with at least one issue listed on the original article or in the comments here. JodaTime (on the JVM) is by far the best, but even they have problems, and are creating a new library to solve those.
I think that for the vast majority of real-world software applications JodaTime is bloated and unnecessary.
Most applications need only three 'classes' to represent time:
1. A timestamp (ie. number of milliseconds since midnight on 1 January 1970 GMT)
2. A Gregorian Date (ie. three numbers representing day, month, year)
3. A TimeZone (to convert between 1 and 2)
In Java the first and third types are perfectly represented by java.util.Date and java.util.TimeZone. The second class can be represented by something like this:
http://calendardate.sourceforge.net/
Hmm...I did not know that! I should really have said 'eg.' instead of 'ie.'
The point of the above 'timestamp' is to represent a specific instant in time, independent of any time zone or calendar system. You can use Unix time for this purpose.
While these are all valid issues, and good to be aware of, don't rush out and fix all your "broken" code. Plus, don't climb up on a pedestal and lecture your fellow developers on their "falsehoods" about time. Use your experience to know when exactness is important and when it's not.
DST is an evil perpetrated by curtain and blind manufacturers so that their products need replacing more often due to UV damage. Oh yeah, its the cause of global warming as well.
"The smallest unit of time is a milli/second" seems a bit silly, as it really depends on your application. Is he expecting programmers to implement time in Planck units?
The specific issue that bit me wrt units of time was seconds vs. milliseconds. This came up the first time I needed to mix PHP time stamps with Jenkins build log time stamps. One is in seconds, the other in milliseconds. Unfortunately the PHP date() function exhibits odd behavior when a time stamp contains two extra digits. I could have saved myself the debugging time had I not been so attached to the assumption that time stamps are always in seconds rather than sometimes in milliseconds.
It's not uncommon for languages and libraries to assume that millis are the smallest resolution. Java is getting better, but is still pretty hit or miss. But if you're writing performance sensitive code you're definitely in micros and maybe in high nanos. There's a lot of those in a millisecond.
Missed the most important one for this month: minutes always have 60 seconds. Except for the end of this month, when one minute will have 61 seconds, i.e., a leap second.
Checking the terminal, I don't see any corporate bonds maturing in 2079 yet. But I do see a handful in every year >= 2070 and <= 2077. So maybe a few more years and some people / firms will start hitting bugs...
If the day is one on which we either spring forward or fall back, and the time the clock reads is somewhere in the shifted hour, then the clock will be right either 1 or 3 times that day. And it's different in different countries.
A meta-misconception: It's possible to establish a total ordering on timestamps that is useful outside your system.
Virtual machines can be screwed with comprehensively. So can non-virtual ones, come to that, and your software likely isn't in a position to tell that it has seen 02:34:17 GMT 2013-05-25 five times already.
So what can you do about that? Bloody nothing. Absolutely nothing whatsoever. You're screwed. Everything you could do can be screwed with in ways your program can't defend against.
The lesson: Don't worry about bugs you can't fix. Refusing to play games you can't win will keep your hair in your head a lot better than knowing the intricacies of human timekeeping systems past and present.