That's a long page, but basically the answer is no. We don't randomly have a 61 second minute in "clock" units; instead we have a day that takes 86401 _real_ seconds but only the standard 86400 clock seconds.
By way of analogy, imagine we implemented leap years by slowing our clocks to half speed on Feb 28 instead of adding a Feb 29.
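A quick sketch of that in JavaScript (whose Date, like Unix time, pretends leap seconds don't exist), using the known leap-second day at the end of 2016:

// The leap-second day 2016-12-31 contained 86,401 real (SI) seconds,
// but the timestamp only advanced by the standard 86,400 across it:
const secs = (iso) => Date.parse(iso) / 1000;
console.log(secs('2017-01-01T00:00:00Z') - secs('2016-12-31T00:00:00Z')); // 86400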
In Europe, some weeks before. Cuz why shouldn’t we have a time difference that’s different for a couple of weeks compared to the usual, it’s not like ppl in CET and EST ever have meetings together.
Unix time is the number of seconds since Jan 1, 1970 00:00:00 UTC. This number never goes back or forward, it's simply the number of Mississippis you would have counted if you started Jan 1, 1970 at 00:00:00 UTC.
When you run "date", it asks the system for this number of seconds, then uses a database to look up what that number corresponds to in the local timezone, accounting for leap years and seconds, and spits out that value. So when (some of us) recently "fell behind", the Unix time incremented by 1 second, but that database told us that the time had decreased by an hour.
Not exactly sure how Windows does it, but I recently set up monitoring of a job that Task Scheduler runs every 5 minutes, starting at midnight and running for 1 day. I got paged at 11:30 because the scheduler decided "1 day" means "24 hours": it stopped running the job at 11pm because of the time change, but didn't start the one for the next day.
I know cron can be confusing, but I'll take it any day over this.
> Unix time is the number of seconds since Jan 1, 1970 00:00:00 UTC. This number never goes back or forward, it's simply the number of Mississippis you would have counted if you started Jan 1, 1970 at 00:00:00 UTC.
Wrong. UNIX time is the number of days since the epoch × 86400, plus the number of seconds since the beginning of the day.
In the real world, some days have actually been 86401 seconds long, which means that the UNIX timestamp is (currently) 37 less than the number of seconds since the epoch.
Go ahead and correct it on Wikipedia, I double-checked my understanding against the entry there before posting my message, and re-reading it I think my description matched Wikipedia.
You'll also need to correct it in the time(2) man page, which says:
"number of seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC)." (ubuntu 22.04)
POSIX.1 defines "seconds since the Epoch" using a formula that approximates the number of seconds between a specified time and the Epoch. [...] This value is not the same as the actual number of seconds between the time and the Epoch, because of leap seconds and because system clocks are not required to be synchronized to a standard reference [...] see POSIX.1-2008 Rationale A.4.15 for further rationale
The word "approximates" is carrying a lot of weight here. So basically "seconds since the Epoch" is a POSIX code phrase that doesn't actually mean what you'd naively think it does. Why this useful info is buried in the VERSIONS section instead of the main DESCRIPTION, I don't know.
For Wikipedia, there is already discussion about the confusing wording of the opening paragraph: https://en.wikipedia.org/wiki/Talk:Unix_time#Flat_out_wrong? but the body of the article is mostly better, and especially the table showing what happens during leap second should clarify matters.
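For the curious, here's a sketch of that POSIX "Seconds Since the Epoch" formula transcribed into JavaScript (field names follow C's struct tm: tm_year is years since 1900, tm_yday is zero-based; Math.floor stands in for C integer division, which agrees for these positive values). It shows how the leap second 2016-12-31 23:59:60 UTC collapses onto the same value as 2017-01-01 00:00:00 UTC:

function posixSecondsSinceEpoch({ tm_sec, tm_min, tm_hour, tm_yday, tm_year }) {
  const div = (a, b) => Math.floor(a / b); // integer division, as in the C expression
  return tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400 +
    (tm_year - 70) * 31536000 + div(tm_year - 69, 4) * 86400 -
    div(tm_year - 1, 100) * 86400 + div(tm_year + 299, 400) * 86400;
}

// The leap second 2016-12-31 23:59:60 UTC (tm_year 116, tm_yday 365)...
console.log(posixSecondsSinceEpoch({ tm_sec: 60, tm_min: 59, tm_hour: 23, tm_yday: 365, tm_year: 116 }));
// ...and 2017-01-01 00:00:00 UTC both come out as 1483228800:
console.log(posixSecondsSinceEpoch({ tm_sec: 0, tm_min: 0, tm_hour: 0, tm_yday: 0, tm_year: 117 }));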
Exactly. UNIX time, GPS time, and TAI are a fixed offset from one another. They're a variable offset from UT1 (based on earth's angle with respect to some distant quasars). If you care about things like where the stars are, use UT1. If you want monotonic time, use TAI or one of the time standards that's a fixed offset from it. If you want to plan a spacecraft trajectory through the solar system, use barycentric coordinate time (TCB). If you want something that approximates UT1 but otherwise works like TAI, use UTC.
If you're dealing with human activity, UTC will usually be the easiest choice. If you're just dealing with computers, TAI (or one of its fixed-offset variants like UNIX time) will usually be the easiest choice.
That would be even more wrong. There have been some fringe suggestions to run the UNIX clock on TAI, but that hasn't caught on (probably because it would be contrary to POSIX). So we are stuck with this mangled UTC-based monstrosity.
It’s funny how relevant this niche fact is for me. When I started my last job it was at 1.3 and I remember seeing it go through 1.4, 1.5 and 1.6 since I debugged a lot of data with timestamps. I remember commenting to my team about the 1.5 change and got some “so what” faces so I’m glad someone else looks at these changes as a sort of long term metronome like I did.
Such an excellent coincidence that it happens to be on my birthday! In fact to celebrate, I did set up the only livestream in existence on YouTube (afaik) to capture this: https://www.youtube.com/live/DN1SZ6X7Vfo
whew, seeing that makes me have bottomless sympathy for ladybird since I (quite literally) cannot imagine the amount of energy it would take to implement CSS in a modern browser
put another way: I'd get great thrills if any proposal to whatwg had to come bundled with the assembly(perl? bash? pick some language) implementation of any such proposal so the authors share the intellectual load of "but, wait, how would we _implement this_?!"
These are important typographical features and I'm fairly sure this is handled by font rendering libraries, not the browser (other than handling the CSS rules), so I can't imagine it's that difficult to implement? Correct me if I'm wrong!
It’s handled by text shaping libraries (e.g. Harfbuzz in Chrome and Firefox), not font rendering APIs (usually provided by the OS), but you have the right idea.
I still remember when we were at 1.2 billion seconds. Time flies.
While we're still here: my favorite way to appreciate the scale of a million and a billion is with seconds: 1 million seconds is approximately 12 days, whereas 1 billion seconds is approximately 31 years.
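The back-of-the-envelope version, if anyone wants to check:

// 1 million vs. 1 billion seconds, converted to days and years
console.log(1e6 / 86400);            // ≈ 11.6 days
console.log(1e9 / (86400 * 365.25)); // ≈ 31.7 years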
That seems rather convoluted. The generator function is gratuitous, and the promise stuff is unfortunately verbose (pity there’s no async version of setTimeout built in). I’d write it with a recursive function, this way:
function t() {
  let m = Date.now(),
      s = Math.ceil((m + 10) / 1000);
  setTimeout(() => { console.log(s); t() }, s * 1000 - m);
}
t()
That maintains the accuracy, but frankly just a 1000ms interval will probably be good enough in most cases, though it’ll drift all over the second (e.g. it loses about a millisecond each tick on my machine under Node), and it’s certainly much simpler to reason about.
I mixed up two things I was trying to do. I was writing an async generator and making it so inside the loop it could decide what to do with the delay. That way you could add a half tick if you wanted. :) But here's how I would rewrite it:
async function* tickSeconds() {
  while (true) {
    const m = new Date().valueOf()
    const s = Math.ceil((m + 10) / 1000)
    await new Promise((resolve, _) => {
      setTimeout(() => resolve(), s * 1000 - m)
    })
    yield s
  }
}

for await (const s of tickSeconds()) {
  console.log(s)
}
I prefer to inline some obvious stuff that is missing rather than do a paradigm shift to using callbacks or build up an ad hoc library of misc functions.
Still not convinced the generatorness is worthwhile; I'd split out the Promise/setTimeout dance into a `sleep(ms)` or `sleepUntil(date)` function before replacing a simple while loop that logs with an infinite generator and for loop that logs. But certainly if you were scaling it up further you'd want to head in this general direction.
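Roughly what I have in mind, as a sketch (`sleep` here is my own helper, not a built-in):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function logSeconds() {
  while (true) {
    const m = Date.now();
    const s = Math.ceil((m + 10) / 1000); // next whole second, with 10 ms of slack
    await sleep(s * 1000 - m);            // wait until that second boundary
    console.log(s);
  }
}
logSeconds();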
This brings back memories of all of the Y2K doomsayers about airplanes falling out of the sky, gas pumps not working, and all of the other just wildly bonkers theories people came up with. Even without the help of social media, these notions spread like wildfire.
I wasn't suggesting it wasn't a real issue that needed solving. Just that the end of days scenarios were blown well out of proportion. The preppers community really came into their own and out of the shadows. Even my ex-in-laws were buying drums full of flour and what not storing them in their suburban backyard.
If they weren't blown out of proportion, they might not have been fixed and the bad situations might have happened. Sometimes you need to spread fear to get stuff fixed. But then people think "oh it wasn't that bad, those engineers are just the boy who cried wolf" and then next time they won't take us seriously
And what not. You missed that bit trying to be smart. Their whole 8' wooden privacy fence was lined with weatherproof blue drums full of various foodstuffs. Yes, they had rice, beans, various grains. I'm not a damn prepper, so I just rolled my eyes at the whole thing. I always assumed the weatherproofing wasn't really weatherproof and it would all be ruined after the first rain. But yes, smarty pants, they had all of that stuff. Don't you feel good about yourself now?
Planes will be fine; but many of those things actually will happen as a result of Y2038. Embedded systems with ancient versions of Linux will stop being able to verify certificates, and I bet there are some digitised pumps that rely on that.
I work with an electronic health records system. Recently they delivered a patch to customers because, on Nov 8th 2023 around 0100 in the morning, their internal time field would exceed its allowed number of characters.
This was weird to hear about nowadays but I was still not surprised.
They botched the patch and it was not a good time when it happened.
A big chunk of their older code-based systems use an epoch of Mar 1, 1980; another, newer chunk of code uses Mar 1, 1992 as its epoch. And I've been told their newest platform uses yet another epoch, or calculates times in such a way as not to be concerned.
So while their patch increased the field size for the seconds counter, it has done so a few times, only fixing the fields that use the epoch about to be hit rather than all of them. On top of that, after the counter grew by a digit on Nov 8th, 2023, they realized, much to the displeasure of many hospital customers, that not all the code that displayed medical documentation/results (often sorted by most recent) accounted for the field size correctly, requiring two more patches last week.
Math with time is hard, and it's even harder to know which systems have moved forward with enough forethought to not have these surprises lying in wait deep in their codebase.
The replacement is not without issues: each package can pick an ABI, and nothing is enforced. A define in C toggles time_t between 32 and 64 bits, so if a struct containing time_t is exposed by a library, dependents have to deal with it.
$ date --date="@1800000000"
Fri Jan 15 03:00:00 AM EST 2027
$ date --date="@1900000000"
Sun Mar 17 01:46:40 PM EDT 2030
$ date --date="@2000000000"
Tue May 17 11:33:20 PM EDT 2033
I want to use this opportunity to flog one of my favorite topics: whether or not to store epoch time using variable-length numbers in protobuf.
TL;DR: never do this
If you are storing the epoch offset in seconds, you could store it as int32 or fixed32 (assuming the range is adequate for your application). But int32 will need 5 bytes, while the fixed32 field would only use 4. So you never save space and always spend time using int32.
Similarly, if you are storing the offset as nanoseconds, never use int64. Except for a few years on either side of the epoch, the offset is always optimal in a fixed64. int64 will tend to be 9 bytes. Fixed64 nanoseconds has adequate range for most applications.
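A quick sanity check of those byte counts, as a sketch (the helper just counts base-128 varint bytes; BigInt keeps the nanosecond value exact):

// Protobuf varints carry 7 payload bits per byte.
const varintBytes = (n) => {
  let bytes = 1;
  for (let v = BigInt(n); v >= 128n; v >>= 7n) bytes++;
  return bytes;
};

const seconds = 1700000000n;               // roughly the current epoch offset in seconds
console.log(varintBytes(seconds));         // 5 bytes as a varint, vs. 4 as fixed32
const nanos = seconds * 1000000000n;       // the same instant as nanoseconds since the epoch
console.log(varintBytes(nanos));           // 9 bytes as a varint, vs. 8 as fixed64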
You'll note that the "well-known" google.protobuf.Timestamp message commits both of these errors. It stores the seconds part as a varint, which will usually be at least 5 bytes when it could have been 4, and it stores the nanoseconds separately in an int32, even though this is more or less an RNG and is virtually guaranteed to need 5 bytes, if present. So nobody should use that protobuf.
Thus ends this episode of my irregular advice on how to represent the time.
I don't use Unix time. If someone gives you a Unix time timestamp x, it doesn't mean much unless you check it against a list of leap seconds. By default, your fancy Unix time timestamp x doesn't point to a unique second in history; for dozens of values, it points to a fairly arbitrary 2-second interval. TAI is the only sane choice if you understand what you are doing.
Btw, if you already know that the leap second is dead and are wondering what happens next: they are going to implement a leap minute. The good news is you are unlikely to see one in your lifetime. They are meeting next week to decide on this leap minute proposal.
And yet... the Sun stubbornly insists on coming up each morning. Therefore, TAI cannot be "the only sane choice" or else the conversion would have happened by now.
Either off-by-n leap seconds don't matter enough, or periodically syncing with good NTP authorities mitigates it without undue harm. Or...?
Use the time standard that's best suited to your use case. Of course that requires understanding time standards at least a bit. Usually the differences will be minor, but they can matter for things like navigation.
Use TAI if you want a monotonic time. UNIX time and GPS time work too, they're fixed offsets from TAI.
Use UTC if you want a calendar date/time. Most human interactions are with time zones relative to UTC.
Those two cover almost all the cases you'll need.
Use UT1 if you want time that roughly tracks the solar time, averaging out the differences in day length due to earth's tilt & elliptical orbit.
Use apparent solar time if you really care about calculating where the sun is or will be for some reason and can't just point your solar telescope at it.
Use sidereal time if you're trying to calculate where extrasolar objects will be.
Use barycentric coordinate time (TCB) if you're plotting the trajectories of interplanetary spacecraft missions or the ephemerides of planets.
Various religions have their own time standards used to calculate when holidays occur.
There are a few others still usable, even more niche than the above. Use one of the first two and you'll probably be fine.
Unix time isn't a fixed offset from TAI: it stops the clock during leap seconds (the timestamp for a leap second is the same as for the next second), so the offset from TAI changes at every leap second.
Strangely enough, I once read a comment about a kid saying to himself that he will remember a specific but insignificant moment, and ever since then (maybe 8 years) I can recall saying to myself "I will remember this".
Since we're less than a decade away from the doomsday point, I wonder if it would be easier to transition from signed to unsigned 32 bits, as it would buy everyone multiple decades to transition to something else.
Also, this first transition should be less disruptive than any other one, since the unsigned format is backwards compatible with the current signed one in its current usage (positive numbers).
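For reference, a rough check of where each width runs out (dates in UTC):

// Where signed and unsigned 32-bit counters of seconds since the epoch overflow
const asDate = (secs) => new Date(secs * 1000).toISOString();
console.log(asDate(2 ** 31 - 1)); // 2038-01-19T03:14:07.000Z  (signed limit)
console.log(asDate(2 ** 32 - 1)); // 2106-02-07T06:28:15.000Z  (unsigned limit)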
> since the unsigned format is backwards compatible with the current signed one in its current usage (positive numbers).
For existing binaries (many of them burned into ROM), it is "compatible" only in the sense that it will break at the same time as the current system does.
So, you need new binaries. If you do, why limit yourself to 32 bits?
Also, I'm not sure all current usage is for positive numbers only.
Using timestamps for datetimes isn't what you should do, but chances are older software does it anyway, because memory was expensive up to the 1980s, and those dates may have been before 1970.
OT, but may be of interest to folks who find this kind of numerology fun.
Your 10_000th day passes when you're 27.x years old (IIRC). I had a celebration with friends, as it seemed more significant than any of the other milestones that are usually celebrated, because you won't reach 100_000 and don't remember 1_000. Can recommend!
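The quick check (using 365.25-day years):

console.log(10000 / 365.25); // ≈ 27.4 years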
Dividing the Unix time by 100,000 produces a (currently) 5-digit number that increments every ~28 hours, and behaves pleasingly like a stardate.
(It doesn't align with any of the various stardate systems actually used in the different Star Trek series and films, but aesthetically it's similar enough to be fun.)
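A sketch, if you want one from your own clock:

// Unix time divided by 100,000 as a pseudo-stardate
console.log(Math.floor(Date.now() / 1000 / 100000)); // a 5-digit number, e.g. 17001 near 1.7 billion seconds
console.log((100000 / 3600).toFixed(1));             // "27.8" hours per increment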
It should be 63835596800 (63.8 billion) because it was kind of self-centred to start counting from 1970 instead of year 1. It doesn't make sense to make memorizing a 4 digit number a prerequisite to becoming a programmer.
The year 1 is also completely arbitrary though? Humans existed long before then too. It doesn't really matter when the epoch is, as long as everyone uses the same one, and negative values are supported as appropriate
The internet is telling me that Taiwan uses both calendars. Is that not true? Surely even your grandparents know that it's 2023 in the rest of the world.
It's irrelevant to my point though. Just because the standard is used by 99% of people instead of 100% doesn't mean programmers should invent a new one, for the same reason they chose January 1st 1970 and not November 15th 1970 or any other day of the year. There's only one obvious choice.
That list includes things like the "British Regnal year", and it's mostly historical calendars that aren't used at all... The real items on it are almost all ceremonial calendars, and everyone (who isn't a priest) uses the Gregorian calendar, or at least most people know it. It's the obvious choice for the standard; I'm not sure why people are pretending it's not.
Notably, the newest entry on that list is "Unix time". Just because there's already more than one entry on the list doesn't mean a small group of people should add another. That's not even "there are now 14 standards" that's just deliberately adding to the pile.
> The real items on it are almost all ceremonial calendars and everyone (who isn't a priest) uses the Gregorian calendar, or at least most people know it.
Isn’t this just the “no true scotsman” defense, but for calendars?
No, since year 1 being 2023 solar years ago is a Christian thing, humanity does not agree on this. Among others, Japan, Korea, China as well as a whole host of Muslim countries do not.
I would say it's a secular international standard of Christian origin, similar to how "information" is an English word even if it's actually a French word. The origin of concepts is important but usage is what ultimately defines them, not etymology. The exceptions prove the rule and most of those countries only use the other calendar ceremonially, and the ones that don't are still familiar with the Gregorian calendar.
Regardless, just because the standard is not used by 100% of humanity doesn't justify programmers coming up with their own calendar instead of using the obvious choice. It's unnecessary.
Arbitrariness aside, some other complications would be (A) having to store 64-bit numbers on very early PDP machines; and (B) the Julian/Gregorian transition
Can you please not post in the flamewar style to HN? It's not what this site is for, and destroys what it is for. We want thoughtful, respectful, curious conversation, so if you'd please make your substantive points in that style instead, we'd be grateful.
I'll just utilize my skills as a programmer to rewrite the script. I never reuse code written for a different employer. That's unethical. If you're the type to use other people's code, I'm sure you could just npm install your way through it, though (or whatever the package manager du jour will be).
Not to be a wet blanket, but I'm really surprised to see posts about this all over social media today. This isn't even a nice round number, and we hit a new decimal value every 5 years or so.