$ date -u; date; date +%s
Sun Sep 13 12:26:39 UTC 2020
Sun Sep 13 17:56:39 IST 2020
1599999999
$ date -u; date; date +%s
Sun Sep 13 12:26:40 UTC 2020
Sun Sep 13 17:56:40 IST 2020
1600000000
An important point worth noting from the POSIX.1-2008 specification:
"Coordinated Universal Time (UTC) includes leap seconds. However, in POSIX time (seconds since the Epoch), leap seconds are ignored (not applied) to provide an easy and compatible method of computing time differences. Broken-down POSIX time is therefore not necessarily UTC, despite its appearance."
By the way, in case you got curious like me about whether this Hacker News story (the original post) was posted at exactly this time: it was posted at Unix timestamp 1599999975. See https://hacker-news.firebaseio.com/v0/item/24460382.json for the details.
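For anyone who wants to check a raw timestamp themselves, GNU date can convert it back to a calendar time (BSD/macOS date takes -r 1599999975 instead of -d @1599999975):
$ date -u -d @1599999975
Sun Sep 13 12:26:15 UTC 2020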
Over long periods of time the approximation that a day takes 86400 SI seconds will become less and less accurate as the rotational period of the Earth changes. I wish calendars would be either purely astronomical in nature or purely SI in nature. Hybrid systems like UTC become more and more messy over time as the amount of adjustment needed increases. We've had ~25 leap seconds in UTC already, and it's a relatively young calendar system.
EDIT: I also wish we would change the name of the SI measurement "second". An SI second and an astronomical second are two different things, and deserve two different names.
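To make the 86400-second approximation concrete: POSIX broken-down time is pure integer arithmetic on the count, with no leap-second table consulted anywhere, which is why the conversion below lands exactly on the 12:26:40 UTC shown above.
$ echo $(( 1600000000 / 86400 )) days and $(( 1600000000 % 86400 )) seconds
18518 days and 44800 seconds
$ echo $(( 44800 / 3600 )):$(( 44800 % 3600 / 60 )):$(( 44800 % 60 ))
12:26:40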
Oh my. Why are people matching an epoch with a regex?! Or, you know, just think a couple of years ahead.
Funny how the comment says "replace this before 2017". Oh well.
> The app executes bash/Powershell at Splunk startup to check for the above regex and add a '6' if needed. It may not be the best fix, but it does kick-the-can on the problem for another 3 years, at least until epoch time reaches 1.7e10.
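For context, the pattern being described is presumably something along these lines (an illustrative guess, not the actual contents of Splunk's datetime.xml):
# Hypothetical shape of the pattern: 10-digit epochs whose second digit tops out at 5,
# i.e. values up to 1599999999, exactly the rollover that just happened.
old='1[0-5][0-9]{8}'
# The "add a '6'" workaround widens the class, which holds until 1699999999 (roughly November 2023).
new='1[0-6][0-9]{8}'
echo 1600000000 | grep -E "^(${new})$"   # matches now, where the old pattern stops at 1599999999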
I’ve never used this piece of software (and their web site is remarkably uninformative about what it does, besides transform my enterprise), but from the comments it looks like some kind of search engine. It wouldn’t be too surprising to have a bunch of heuristics that try to extract meaning from unlabeled strings. Indeed, while viewing that very page, my iPad turned several of the epoch timestamps into telephone numbers.
So I don’t know if this is what actually happened, but this is a plausible reason for having such a regex that does not depend on stupidity: a “guess what this number could mean” routine written around 2015 might include finding Unix timestamp-ish values not more than a few years into the future.
I’d still be pretty unhappy with the hard-coded magic number approach and wish to see the specific requirements documented, along with some sort of tunable parameter for the range, as well as a test that verifies it works for the given requirements. That gets into a fun and potentially philosophical exercise as well, since a test that passed yesterday should pass today, but this might be one case where “doesn’t fail with dates less than 5 years into the future” is a reasonable request.
> I’ve never used this piece of software (and their web site is remarkably uninformative about what it does, besides transform my enterprise), but from the comments it looks like some kind of search engine.
It is an 'enterprise' log aggregator, storage system, and log search engine/alert generation engine.
One sets one's Java code (remember: "enterprise") to stream log output to Splunk, and Splunk handles receiving, storage, alert generation from programmed pattern matching, and archived log data search.
> It wouldn’t be too surprising to have a bunch of heuristics that try to extract meaning from unlabeled strings.
That is a very accurate description of just what it does.
There's an example of the sort of chronic (pun intended) patch-driven-development, YAGNI(Y) thinking that leads to this sort of thing in the last paragraph:
"For instances that can't/won't get updated in time, this Splunk app can be deployed as a workaround. The app executes bash/Powershell at Splunk startup to check for the above regex and add a '6' if needed. It may not be the best fix, but it does kick-the-can on the problem for another 3 years, at least until epoch time reaches 1.7e10."
Someone, somewhere is saying "but it passed all the unit tests..."
Am I the only one who thinks this isn't completely unreasonable? Is it a hack? Definitely, but without more context I don't see a much better way of determining whether some number is likely to be a timestamp. Should it base the check on a range of numbers determined at startup? Probably, but that's not fundamentally much different.
A much saner heuristic could use a reasonable range (for some value of reasonable) around the current timestamp to decide whether a given number is also one, the assumption being that logs will be streamed, and hence any timestamp will likely refer to something that happened in the recent past.
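A minimal sketch of that idea in shell; the five-year window is an arbitrary stand-in for "reasonable", not anything Splunk is known to do:
now=$(date +%s)                      # taken once, e.g. at startup
window=$(( 5 * 365 * 86400 ))        # roughly five years, in seconds
looks_like_timestamp() {
    [ "$1" -ge $(( now - window )) ] && [ "$1" -le $(( now + window )) ]
}
looks_like_timestamp 1600000000 && echo "probably a timestamp"
looks_like_timestamp 8675309    || echo "probably not"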
I celebrated the Billenium with my brother (two days before 9/11, actually). He's dead now, and we didn't have a lot of shared interests, so it'll always be a nice memory for me.
Sweet 16 seems fitting here, considering that Unix/Linux systems have gained so much momentum over the past decade (even Lenovo started shipping Fedora on laptops this month).
"There were programs here that had been written five thousand years ago, before Humankind ever left Earth. The wonder of it—the horror of it, Sura said—was that unlike the useless wrecks of Canberra’s past, these programs still worked! And via a million million circuitous threads of inheritance, many of the oldest programs still ran in the bowels of the Qeng Ho system. Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely…the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems."
Look at it yet more closely even than that, however, and you'll find that it isn't the year that that operating system was invented, nor the year of its 1st Edition, nor even the way that it counted time before 1974. It was simply a year that, in the words of one of its creators, "seemed to be as good as any" (dmr, Wired, 2001).
Sort of like how I remember the rollover from 999999999 to 1000000000 in 2001 as if it wasn't all that long ago, since I was well into my career by then. It must have been a notable moment, as I remember I was doing some consulting work hooking up remote data sources for some telecom apps.
I use BitBar on my MacBook to display the epoch second in my menu bar in a format like this: { 1,600,003,345 }
The script to do that is:
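A minimal sketch of such a plugin, assuming BitBar's convention of an executable shell script whose filename encodes the refresh interval (the name epoch.1s.sh and the use of printf's locale-dependent grouping flag are assumptions, not the exact original):
#!/bin/bash
# epoch.1s.sh - BitBar runs this every second and shows its output in the menu bar.
# printf's ' flag groups digits according to LC_NUMERIC, so pick a locale with commas.
export LC_NUMERIC=en_US.UTF-8
printf "{ %'d }\n" "$(date +%s)"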
Separating the epoch into thousands was done with the help of this Stack Exchange question: https://unix.stackexchange.com/questions/113795/add-thousand...
EDIT: This is a much simpler way of writing it, thanks to GNU Coreutils' "numfmt".
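The simpler version is presumably along these lines, assuming numfmt's --grouping option (which also follows the locale; on macOS the Homebrew coreutils package installs it as gnumfmt unless its gnubin directory is on PATH):
echo "{ $(date +%s | numfmt --grouping) }"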