Unix Time reaches 1.7 billion (epochconverter.com)
298 points by AlwaysNewb23 on Nov 14, 2023 | 165 comments



Actually a cool date pivot point. Did not realize it was coming up (and has now passed). Thanks.



Thanks! Macroexpanded:

The Unix timestamp will begin with 17 this Tuesday - https://news.ycombinator.com/item?id=38222909 - Nov 2023 (75 comments)

Let's restart counting Unix timestamp from 2020 - https://news.ycombinator.com/item?id=35202256 - March 2023 (21 comments)

Tomorrow the Unix timestamp will get to 1,666,666,666 - https://news.ycombinator.com/item?id=33316429 - Oct 2022 (116 comments)

Happy 1600M epoch second - https://news.ycombinator.com/item?id=24460382 - Sept 2020 (48 comments)

The Unix timestamp will begin with 16 this Sunday - https://news.ycombinator.com/item?id=24452885 - Sept 2020 (203 comments)

Unix Time 1500M – Friday July 14 02:40UTC - https://news.ycombinator.com/item?id=14758615 - July 2017 (99 comments)

Today at 16:53:20 GMT, it'll be 1400000000 in Unix time. - https://news.ycombinator.com/item?id=7736739 - May 2014 (57 comments)

Ask HN: What will you be doing when the unix timestamp reaches 1234567890, next friday? - https://news.ycombinator.com/item?id=475437 - Feb 2009 (3 comments)


> Feb 2009 (3 comments)

Interesting. And one of those comments was mentioning an EEEpc. Simpler times indeed.

Seems like HN grew the most between then and 2014, continuing up to 2020


next one is in 3 years


Set a calendar notification for Friday, January 15, 2027 8:00:00 AM GMT.

Woah, is that weird that it's exactly on the hour? ...I guess not so weird, since 180 is divisible by 60.


Does that mean unix date calculations do not take into account the leap second?

https://en.m.wikipedia.org/wiki/Leap_second


That's a long page, but basically the answer is no. We don't randomly have a 61 second minute in "clock" units; instead we have a day that takes 86401 _real_ seconds but only the standard 86400 clock seconds.

By way of analogy, imagine we implemented leap years by slowing our clocks to half speed on Feb 28 instead of adding a Feb 29.
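You can see the 86400-second fiction directly from JavaScript, whose Date is Unix-time based. A minimal sketch, spanning the leap second inserted at the end of 2016:

    // 2016-12-31 really lasted 86401 SI seconds (a leap second was inserted),
    // but Unix-style time still counts it as exactly 86400:
    const ms = Date.UTC(2017, 0, 1) - Date.UTC(2016, 11, 31);
    console.log(ms / 1000); // 86400, not 86401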


> imagine we implemented leap years by slowing our clocks to half speed on Feb 28 instead of adding a Feb 29

Thanks I will imagine this! Too bad we can’t slow the real Earth time down and have a bizarrely slow and long day every 4 years.


Leap hours would be neat!


Not if you write code that calculates time.


I guess it wouldn’t matter if the code was calculating time offsets and so on.

But it would matter if I wanted something to take 60mins and then set a start time and an end time.

Dates and timezones are the worst! :-)


I guess that would still be neat when finished.


We had one of those last weekend (At least in most of the US)


In Europe, some weeks before. Cuz why shouldn’t we have a time difference that’s different for a couple of weeks compared to the usual, it’s not like ppl in CET and EST ever have meetings together.


And it's intentional, so each day has exactly 86400 seconds, even though it doesn't correspond to UTC.

In fact, UTC will change its leap second logic much sooner than Unix time logic, so UTC logic will be more like Unix time.


> In fact, UTC will change its leap second logic much sooner than Unix time logic, so UTC logic will be more like Unix time.

This. If anyone is interested in a source:

https://www.scientificamerican.com/article/the-leap-seconds-...



Unix time is the number of seconds since Jan 1, 1970 00:00:00 UTC. This number never goes back or forward, it's simply the number of Mississippis you would have counted if you started Jan 1, 1970 at 00:00:00 UTC.

When you run "date", it asks the system for this number of seconds, then uses a database to look up what that number corresponds to in the local timezone, accounting for leap years and seconds, and spits out that value. So when (some of us) recently "fell behind", the Unix time incremented by 1 second, but that database told us that the time had decreased by an hour.

Not exactly sure how Windows does it, but I recently set up monitoring of a job that Task Scheduler runs, starting at midnight and running for 1 day, every 5 minutes. I got paged at 11:30 because the scheduler decided "1 day" is "24 hours", and it stopped running it at 11pm because of the time change, but didn't start the one for the next day.

I know cron can be confusing, but I'll take it any day over this.
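You can watch that timezone-database lookup from JavaScript, which also counts Unix-style. A sketch: two epoch values a full hour apart both render as 1:30 AM in New York, once in EDT and once in EST, because of the fall-back:

    // 2023-11-05 05:30 UTC and 06:30 UTC are 3600 Unix seconds apart,
    // yet map to the same local wall-clock time across the DST change.
    for (const t of [1699162200, 1699165800]) {
      console.log(t, new Date(t * 1000).toLocaleString("en-US", {
        timeZone: "America/New_York",
        timeZoneName: "short",
      }));
    }
    // 1699162200 11/5/2023, 1:30:00 AM EDT
    // 1699165800 11/5/2023, 1:30:00 AM EST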


> Unix time is the number of seconds since Jan 1, 1970 00:00:00 UTC. This number never goes back or forward, it's simply the number of Mississippis you would have counted if you started Jan 1, 1970 at 00:00:00 UTC.

Wrong. UNIX time is the number of days since the epoch × 86400, plus the seconds since the beginning of the day.

In the real world, some days have actually been 86401 seconds long, which means that the UNIX timestamp is (currently) 27 less than the true number of seconds since the epoch.


Go ahead and correct it on Wikipedia; I double-checked my understanding against the entry there before posting my message, and re-reading it, I think my description matched Wikipedia.

You'll also need to correct it in the time(2) man page, which says:

"number of seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC)." (ubuntu 22.04)

https://en.wikipedia.org/wiki/Unix_time


The time(2) manpage also states

       POSIX.1 defines "seconds since the Epoch" using a formula that
       approximates the number of seconds between a specified time and
       the Epoch. [...] This value is not the same as the actual number of
       seconds between the time and the Epoch, because of leap seconds
       and because system clocks are not required to be synchronized to
       a standard reference [...] see POSIX.1-2008
       Rationale A.4.15 for further rationale
https://man7.org/linux/man-pages/man2/time.2.html#VERSIONS

The word approximates is carrying a lot of weight here. So basically "seconds since the epoch" is a POSIX code phrase which doesn't actually mean what you'd naively think it does. Why this useful info is buried in the VERSIONS section instead of the main DESCRIPTION, I don't know.

For Wikipedia, there is already discussion about the confusing wording of the opening paragraph: https://en.wikipedia.org/wiki/Talk:Unix_time#Flat_out_wrong? but the body of the article is mostly better, and especially the table showing what happens during leap second should clarify matters.
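For the curious, here is that POSIX formula transcribed into JavaScript (a sketch; tm_year counts years since 1900, tm_yday is the zero-based day of the year, and the C integer divisions become Math.floor):

    const div = (a, b) => Math.floor(a / b);

    // POSIX.1 "Seconds Since the Epoch": every day counts as 86400 seconds.
    function posixSeconds(tm_year, tm_yday, tm_hour, tm_min, tm_sec) {
      return tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400 +
        (tm_year - 70) * 31536000 + div(tm_year - 69, 4) * 86400 -
        div(tm_year - 1, 100) * 86400 + div(tm_year + 299, 400) * 86400;
    }

    // 2023-11-14 22:13:20 UTC: tm_year = 123, tm_yday = 317
    console.log(posixSeconds(123, 317, 22, 13, 20)); // 1700000000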


Then parent was correct but `s/UTC/TAI/g`; no counting of days nor multiplication required.


Exactly. UNIX time, GPS time, and TAI are a fixed offset from one another. They're a variable offset from UT1 (based on earth's angle with respect to some distant quasars). If you care about things like where the stars are, use UT1. If you want monotonic time, use TAI or one of the time standards that's a fixed offset from it. If you want to plan a spacecraft trajectory through the solar system, use barycentric coordinate time (TCB). If you want something that approximates UT1 but otherwise works like TAI, use UTC.

If you're dealing with human activity, UTC will usually be the easiest choice. If you're just dealing with computers, TAI (or one of its fixed-offset variants like UNIX time) will usually be the easiest choice.


> UNIX time, GPS time, and TAI are a fixed offset from one another.

Wrong again. UNIX time definitely is not a fixed offset from GPS time. Here is a handy-dandy conversion tool for the two: https://bag-of-tools.com/gps-time-converter/

1700000000 UNIX = 1384035218 GPS

700000000 UNIX = 384035207 GPS
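The changing offset is exactly the leap second count. A sketch of the conversion (leap second counts hardcoded here for illustration; a real converter needs the full table):

    // GPS time starts at 1980-01-06 UTC and, unlike Unix time, keeps
    // counting through leap seconds, so the offset grows with each one.
    const GPS_EPOCH_IN_UNIX = 315964800; // 1980-01-06 00:00:00 UTC

    function unixToGps(unix, leapSecondsSinceGpsEpoch) {
      return unix - GPS_EPOCH_IN_UNIX + leapSecondsSinceGpsEpoch;
    }

    console.log(unixToGps(1700000000, 18)); // 1384035218 (18 leap seconds by 2023)
    console.log(unixToGps(700000000, 7));   // 384035207  (7 leap seconds by early 1992)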


That would be even more wrong. There have been some fringe suggestions to run the UNIX clock on TAI, but that hasn't caught on (probably because it would be contrary to POSIX). So we are stuck with this mangled UTC-based monstrosity.


They do not, no.

You can do things like find midnight GMT by checking (t % 86400 == 0) for example.


It’s funny how relevant this niche fact is for me. When I started my last job it was at 1.3 and I remember seeing it go through 1.4, 1.5 and 1.6 since I debugged a lot of data with timestamps. I remember commenting to my team about the 1.5 change and got some “so what” faces so I’m glad someone else looks at these changes as a sort of long term metronome like I did.


Same here, I started working at 1.4 and thought it'd never change.

Now I think it should be an occasion to open champagne with fellow developers, as it's rarer than New Year's!


Such an excellent coincidence that it happens to be on my birthday! In fact to celebrate, I did set up the only livestream in existence on YouTube (afaik) to capture this: https://www.youtube.com/live/DN1SZ6X7Vfo


Happy birthday! Here's a gift to help with numbers shifting place when updating:

    font-variant-numeric: tabular-nums;
https://developer.mozilla.org/en-US/docs/Web/CSS/font-varian...


whew, seeing that makes me have bottomless sympathy for ladybird since I (quite literally) cannot imagine the amount of energy it would take to implement CSS in a modern browser

put another way: I'd get great thrills if any proposal to whatwg had to come bundled with the assembly(perl? bash? pick some language) implementation of any such proposal so the authors share the intellectual load of "but, wait, how would we _implement this_?!"


These are important typographical features and I'm fairly sure this is handled by font rendering libraries, not the browser (other than handling the CSS rules), so I can't imagine it's that difficult to implement? Correct me if I'm wrong!


It’s handled by text shaping libraries (e.g. Harfbuzz in Chrome and Firefox), not font rendering APIs (usually provided by the OS), but you have the right idea.


I can one-up you there - not only was it my birthday, it was my 17th as well :-)


Dang, I missed this by roughly an hour :(

I still remember when we were at 1.2 billion seconds. Time flies.

While we're still here: my favorite way to appreciate the scale of million and billion is with seconds: 1 million seconds is approximately 12 days, whereas as 1 billion seconds is approximately 31 years.
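Easy to sanity-check in any REPL:

    1e6 / 86400;           // ≈ 11.6 days
    1e9 / 86400 / 365.25;  // ≈ 31.7 years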


Another fun one: pi * 1e7 seconds is approximately one year.


Easier to remember: “pi seconds in a nanocentury”
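Both versions check out to within about half a percent:

    const yearSeconds = 365.25 * 86400;  // ≈ 3.156e7
    Math.PI * 1e7 / yearSeconds;         // ≈ 0.995 years
    yearSeconds * 100 * 1e-9;            // ≈ 3.156 s per nanocentury, vs pi ≈ 3.142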


Lucky you: when I finally got onto HN, it had already been 2 hrs since then :(


I watched in `deno repl` neatly sandboxed :)

    new Date().valueOf() / 1000
I was counting down by thousands of seconds, rather than millions of milliseconds, which is why I divided instead of using the native js value.

Happy 1.7 gigaseconds!


Whoa didn't know about deno repl. This is awesome. Happy 1.7 gs!


Thanks! You can paste this into deno repl and watch the seconds count up :)

    for (const [s, d] of (function*() { while (true) { const m=new Date().valueOf(), s=Math.ceil((m+10)/1000); yield [s, s*1000-m] } })()) { await new Promise(r => { setTimeout(()=>r(), d) }); console.log(s) }


That seems rather convoluted. The generator function is gratuitous, and the promise stuff is unfortunately verbose (pity there’s no async version of setTimeout built in). I’d write it with a recursive function, this way:

  function t(){let m=Date.now(),s=Math.ceil((m+10)/1000);setTimeout(()=>{console.log(s);t()},s*1000-m)}t()
That maintains the accuracy, but frankly just a 1000ms interval will probably be good enough in most cases, though it’ll drift all over the second (e.g. it loses about a millisecond each tick on my machine under Node), and it’s certainly much simpler to reason about.


I mixed up two things I was trying to do. I was writing an async generator and making it so inside the loop it could decide what to do with the delay. That way you could add a half tick if you wanted. :) But here's how I would rewrite it:

    async function* tickSeconds() {
      while (true) {
        const m = new Date().valueOf()
        const s = Math.ceil((m + 10) / 1000)
        await new Promise((resolve, _) => {
          setTimeout(() => resolve(), s * 1000 - m)
        })
        yield s
      }
    }
    
    for await (const s of tickSeconds()) {
      console.log(s)
    }
I prefer to inline some obvious stuff that is missing rather than make a paradigm shift to callbacks or build up an ad hoc library of misc functions.


Still not convinced the generatorness is worthwhile; I’d split out the Promise/setTimeout dance into a `sleep(ms)` or `sleepUntil(date)` function before replacing a simple while loop that logs with a infinite generator and for loop that logs. But certainly if you were scaling it up further you’d want to head in this general direction.


Is that like a reverse codegolf?

setInterval(() => console.log(Math.ceil(Date.now()/1000)), 1000);0

This could be 0.999s late, but seems good enough for REPL usage


It doesn't just handle the interval; it also controls when the first tick fires.

Suit yourself, though.


We draw close now to the (i32) end-times.


I wonder about that. I expect the replacement system will allow timestamps after 2038, but will we also be able to timestamp files from pre-1970?


The replacement system's already here; modern OSes and software tend to use 64-bit ints for timestamps.

The trouble is all the old embedded systems, and time_t->i32 casts which are currently fine, but lying in wait…
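The cutoff itself is easy to compute; the largest signed 32-bit second count lands in January 2038:

    // 2^31 - 1 seconds past the epoch: where a signed 32-bit time_t runs out.
    console.log(new Date(2147483647 * 1000).toUTCString());
    // Tue, 19 Jan 2038 03:14:07 GMT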


>but lying in wait…

This brings back memories of all of the Y2K doomsayers about airplanes falling out of the sky, gas pumps not working, and all of the other just wildly bonkers theories people came up with. Even without the help of social media, these notions spread like wildfire.


A lot of stuff would have broken during y2k if a lot of engineers hadn't put in a massive effort.

We know this because when test clocks were simply set forward to 2000, things broke... badly.


I wasn't suggesting it wasn't a real issue that needed solving. Just that the end of days scenarios were blown well out of proportion. The preppers community really came into their own and out of the shadows. Even my ex-in-laws were buying drums full of flour and what not storing them in their suburban backyard.


If they weren't blown out of proportion, they might not have been fixed and the bad situations might have happened. Sometimes you need to spread fear to get stuff fixed. But then people think "oh it wasn't that bad, those engineers are just the boy who cried wolf" and then next time they won't take us seriously


Flour? That sounds unoptimal. Probably rice would be better?


And what not. You missed that bit trying to be smart. Their whole 8' wooden privacy fence was lined with weatherproof blue drums full of various foodstuffs. Yes, they had rice, beans, various grains. I'm not a damn prepper, so I just rolled my eyes at the whole thing. I always assumed the weatherproofing wasn't really and it would all be ruined after the first rain. But yes, smarty pants, they had all of that stuff. Don't you feel good about yourself now?


Planes will be fine; but many of those things actually will happen as a result of Y2038. Embedded systems with ancient versions of Linux will stop being able to verify certificates, and I bet there are some digitised pumps that rely on that.


Y2K's kin are still out there...

I work with an electronic health records system. Recently they delivered a patch to customers because, on Nov 8th, 2023, around 0100 in the morning, their internal time field would exceed its allowed number of characters.

This was weird to hear about nowadays but I was still not surprised.

They botched the patch and it was not a good time when it happened.

A big chunk of their older code-based systems uses an epoch of Mar 1, 1980; another, newer code chunk uses Mar 1, 1992 as its epoch. And I've been told their newest platform uses yet another epoch, or calculates times in such a way as to not be affected.

So while their patch increased the field size for the seconds counter, they have done this a few times before, fixing only the fields that use the epoch about to be hit, rather than all of them. On top of that, after the counter grew by a digit on Nov 8th, 2023, they realized, much to many hospital customers' displeasure, that not all of their code that displays medical documentation/results, often sorted by most recent, accounted for the field size correctly, requiring two more patches last week.

Math with time is hard, and it's even harder to know which systems have moved forward with enough foresight to not have these surprises sitting in wait deep in their codebase.


The replacement is not without issues: each package can pick an ABI; nothing is enforced. A define in C toggles time_t between 32 and 64 bits, so if that type is exposed in a library's interface, dependents have to deal with it.

Similar of course to 32 vs 64 bit file offsets.


I think the easiest thing to do is to move to 64-bit timestamps, not changing the epoch.


2038 is sure going to be an interesting year.


It's our Mayan calendar moment :)


Relevant username


The Y2K moment for the new generation.


I sure hope I retire before then. Late 50s for me. New generation, indeed!


At least on my system, it's not an issue:

    $ date --date="@2200000000"
    Sun Sep 18 07:06:40 PM EDT 2039


You are probably not running a 32-bit IoT device.


It came fast, I’m barely done celebrating 1696969420


How do you celebrate that number?

Edit: nm, I don't want an answer.


Wish you all good health and fortune until 2.0 short billion

> Wednesday May 18 2033 03:33:20 GMT


As for what the future holds:

    $ date --date="@1800000000"
    Fri Jan 15 03:00:00 AM EST 2027

    $ date --date="@1900000000"
    Sun Mar 17 01:46:40 PM EDT 2030

    $ date --date="@2000000000"
    Tue May 17 11:33:20 PM EDT 2033


I opened Node.js and did

  setInterval(() => console.log(Date.now()), 1);
to watch the transition. Happy 1.7B seconds since Jan 1 1970!


hmm, it didn't occur to me to use a timer rather than hit the up arrow and enter. Now I have this to watch the time count up :)

    async function* seconds() {
      while (true) {
        const millis = new Date().valueOf()
        // The next whole second (10ms of slack so we never fire just short of it)
        const seconds = Math.ceil((millis + 10) / 1000)
        // Sleep until that second boundary, then yield it
        const delay = (seconds * 1000) - millis
        await new Promise(r => { setTimeout(() => r(), delay) })
        yield seconds
      }
    }
    for await (const s of seconds()) {
      console.log(s)
    }


>async generator

>for await (const s of seconds())

I'd hate to be your code reviewer!


We had our game of Codenames self-destruct at the exact moment, using very similar code. Good times


Fun fact: setInterval() might not be the most accurate timer, as it might drift after a while.


Tue 14 Nov 2023 02:13:17 PM PST = 1699999997

Tue 14 Nov 2023 02:13:18 PM PST = 1699999998

Tue 14 Nov 2023 02:13:19 PM PST = 1699999999

Tue 14 Nov 2023 02:13:20 PM PST = 1700000000

Tue 14 Nov 2023 02:13:21 PM PST = 1700000001

Tue 14 Nov 2023 02:13:22 PM PST = 1700000002

Tue 14 Nov 2023 02:13:23 PM PST = 1700000003


Are we supposed to clink our calculators now?


Doesn't the 100M mark rollover every 3 yrs and change? What's the big significance of the 1.7B mark?


If it only comes once every 3 years, it's at least as notable as a new software release, which is something that often gets posted here.


I want to use this opportunity to flog one of my favorite topics: whether or not to store epoch time using variable-length numbers in protobuf.

TL;DR: never do this

If you are storing the epoch offset in seconds, you could store it as int32 or fixed32 (assuming the range is adequate for your application). But int32 will need 5 bytes, while the fixed32 field would only use 4. So you never save space and always spend time using int32.

Similarly, if you are storing the offset as nanoseconds, never use int64. Except for a few years on either side of the epoch, the offset is always optimal in a fixed64. int64 will tend to be 9 bytes. Fixed64 nanoseconds has adequate range for most applications.

You'll note that the "well-known" google.protobuf.Timestamp message commits both of these errors. It stores the seconds part as a varint, which will usually be at least 5 bytes when it could have been 4, and it stores the nanoseconds separately in an int32, even though this is more or less an RNG and is virtually guaranteed to need 5 bytes, if present. So nobody should use that protobuf.
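If you want to check the byte counts yourself, here's a sketch of the varint size calculation (7 payload bits per byte; BigInt to stay safe past 2^53):

    // Bytes needed to encode a non-negative integer as a protobuf varint.
    function varintSize(n) {
      let bytes = 1;
      while (n >= 128n) { n >>= 7n; bytes++; }
      return bytes;
    }

    console.log(varintSize(1700000000n));          // 5 — vs. 4 bytes as fixed32
    console.log(varintSize(1700000000000000000n)); // 9 — vs. 8 bytes as fixed64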

Thus ends this episode of my irregular advice on how to represent the time.


The REAL big non-event that no one cares about is this one:

https://www.epochconverter.com/countdown?q=2000000000


I'll remember to post that in 2033.


>In human years, the UNIX timestamp is about 80.

It's 53, right? Since 1 Jan 1970?


it aged quickly during the stressful period of Y2K


I don't use Unix time. If someone gives you a Unix time timestamp x, it doesn't mean much unless you check it against a list of leap seconds. By default, your fancy Unix time timestamp x doesn't point to a unique second in history; dozens of times, it has pointed to some pretty arbitrary 2-second intervals. TAI is the only sane choice if you understand what you are doing.

Btw, if you already know that the leap second is dead and are wondering what happens next: they are going to implement the leap minute. The good news is you are unlikely to see one in your lifetime. They are meeting next week to decide on this leap minute proposal.

https://www.nytimes.com/2023/11/03/science/time-leap-second....


And yet... the Sun stubbornly insists on coming up each morning. Therefore, TAI cannot be "the only sane choice" or else the conversion would have happened by now.

Either off-by-n leap seconds don't matter enough, or periodically sync'ing with good NTP authorities mitigates it without undue harm. Or ?


Use the time standard that's best suited to your use case. Of course that requires understanding time standards at least a bit. Usually the differences will be minor, but they can matter for things like navigation.

Use TAI if you want a monotonic time. UNIX time and GPS time work too, they're fixed offsets from TAI.

Use UTC if you want a calendar date/time. Most human interactions are with time zones relative to UTC.

Those two cover almost all the cases you'll need.

Use UT1 if you want time that roughly tracks the solar time, averaging out the differences in day length due to earth's tilt & elliptical orbit.

Use apparent solar time if you really care about calculating where the sun is or will be for some reason and can't just point your solar telescope at it.

Use sidereal time if you're trying to calculate where extrasolar objects will be.

Use barycentric coordinate time (TCB) if you're plotting the trajectories of interplanetary spacecraft missions or the ephemerides of planets.

Various religions have their own time standards used to calculate when holidays occur.

There are a few others still usable, even more niche than the above. Use one of the first two and you'll probably be fine.


Unix time isn't a fixed offset from TAI: it stops the clock during leap seconds (the timestamp for a leap second is the same as for the next second), so the offset from TAI changes at every leap second.


Feels weird to be almost 1900 seconds into this bold new number right now!


Oh wow. Is it strange that I remember the 1600M epoch second?


I remember where I was when we hit 1,000,000,000. My coworkers thought I was strange for thinking it was significant.

... two days later all hell broke loose.


I thought it was sept 11 and went on to check. Indeed it was.


It was only 100M seconds ago…


100,000,000 / 60 / 60 / 24 / 365 = 3.17 years


not at all, I remember 1234567890


I remember you remembering that when we celebrated 1500000000!


Strangely enough, I once read a comment about a kid saying to himself that he would remember a specific but insignificant moment, and ever since then (maybe 8 years) I can recall saying to myself "I will remember this".


Since we're less than a decade away from the doomsday point, I wonder if it would be easier to transition from signed to unsigned 32 bits, as it would buy everyone multiple decades to transition to something else.

Also, this first transition should be less disruptive than any other one, since the unsigned format is backwards compatible with the current signed one in its current usage (positive numbers).
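The extra headroom is substantial; an unsigned 32-bit seconds counter runs until 2106:

    // 2^32 - 1 seconds past the epoch:
    console.log(new Date(4294967295 * 1000).toUTCString());
    // Sun, 07 Feb 2106 06:28:15 GMT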


> since the unsigned format is backwards compatible with the current signed one in its current usage (positive numbers).

For existing binaries (many of them burned into ROM), it is compatible only in the sense that it will also break at the same time as the current system.

So, you need new binaries. If you do, why limit yourself to 32 bits?

Also, I’m not sure all current usage is for positive numbers, only.

Using time stamps for datetimes isn’t what you should do, but chances are older software does it, anyways, because memory was expensive up to the 1980s, and those dates may have been before 1970.


The epochalypse is well over a decade away.


OT, but this may be of interest to folks who find this kind of numerology fun.

Your 10_000th day passes when you're 27.x years old (IIRC). I had a celebration with friends, as it seemed more significant than any of the other milestones that are usually celebrated, because you won't reach 100_000 and don't remember 1_000. Can recommend!


Interesting, if morbid, bit of serendipity that it should occur at age 27, given that the 27 Club also exists.


i have a countdown on my phone that counts the days left until i am 80, when i expect to die. Makes life more interesting


But what if your phone goes first?


gonna live forever


Dividing the Unix time by 100,000 produces a (currently) 5-digit number that increments every ~28 hours, and behaves pleasingly like a stardate.

(It doesn't align with any of the various stardate systems actually used in the different Star Trek series and films, but aesthetically it's similar enough to be fun.)
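One line in a JS console gives you today's reading:

    // Unix seconds / 100,000: increments roughly every 27.8 hours.
    console.log((Date.now() / 1000 / 100000).toFixed(1)); // 17000.0 on Nov 14, 2023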


Okay, so at what value will the Unix timestamp equal the human population? I should start a pool…


It should be 63835596800 (63.8 billion) because it was kind of self-centred to start counting from 1970 instead of year 1. It doesn't make sense to make memorizing a 4 digit number a prerequisite to becoming a programmer.


The year 1 is also completely arbitrary though? Humans existed long before then too. It doesn't really matter when the epoch is, as long as everyone uses the same one, and negative values are supported as appropriate


> as long as everyone uses the same one

Exactly my point. The rest of humanity uses year 1 as the epoch.


Calling in from year 112 in Taiwan currently, hello, hello.


The internet is telling me that Taiwan uses both calendars. Is that not true? Surely even your grandparents know that it's 2023 in the rest of the world.

It's irrelevant to my point though. Just because the standard is used by 99% of people instead of 100% doesn't mean programmers should invent a new one, for the same reason they chose January 1st 1970 and not November 15th 1970 or any other day of the year. There's only one obvious choice.


> Exactly my point. The rest of humanity uses year 1 as the epoch.

The rest of humanity uses a variety of reckoning systems:

https://en.m.wikipedia.org/wiki/Calendar_era


That list includes things like the "British Regnal year" and its mostly historical calendars that aren't used at all... The real items on it are almost all ceremonial calendars and everyone (who isn't a priest) uses the Gregorian calendar, or at least most people know it. It's the obvious choice for the standard, I'm not sure why people are pretending it's not.

Notably, the newest entry on that list is "Unix time". Just because there's already more than one entry on the list doesn't mean a small group of people should add another. That's not even "there are now 14 standards"; that's just deliberately adding to the pile.


> The real items on it are almost all ceremonial calendars and everyone (who isn't a priest) uses the Gregorian calendar, or at least most people know it.

Isn’t this just the “no true scotsman” defense, but for calendars?

https://en.m.wikipedia.org/wiki/No_true_Scotsman


No, since year 1 being 2023 solar years ago is a Christian thing, humanity does not agree on this. Among others, Japan, Korea, China as well as a whole host of Muslim countries do not.


I would say it's a secular international standard of Christian origin, similar to how "information" is an English word even if it's actually a French word. The origin of concepts is important but usage is what ultimately defines them, not etymology. The exceptions prove the rule and most of those countries only use the other calendar ceremonially, and the ones that don't are still familiar with the Gregorian calendar.

Regardless, just because the standard is not used by 100% of humanity doesn't justify programmers coming up with their own calendar instead of using the obvious choice. It's unnecessary.


Advocating for arithmetic in a system without a zero is bold.


We count years from year 1, but seconds from 1970? It's just illogical, arithmetic has nothing to do with it.


Which year 1?

There's only two calendars where it's been 2023 years.


The one that everyone uses.


Unix Time?


Go counts time from year 1. See also https://en.wikipedia.org/wiki/Epoch_(computing)#Notable_epoc...

Arbitrariness aside, some other complications would be (A) having to store 64-bit numbers on very early PDP machines; and (B) the Julian/Gregorian transition


Do you not understand what comes after zero? Yes, you can have a negative epoch and I’ll let you ponder what that signifies.


Can you please not post in the flamewar style to HN? It's not what this site is for, and destroys what it is for. We want thoughtful, respectful, curious conversation, so if you'd please make your substantive points in that style instead, we'd be grateful.

https://news.ycombinator.com/newsguidelines.html


$ date -r 1700000000

Tue Nov 14 14:13:20 PST 2023


see you in 4 years...

$ date -d @1800000000

Fri 15 Jan 2027 12:00:00 AM PST


is that really 4 years


What's with the fetish for round numbers in, of all things, base 10?


Yes, truly curious. It's not like base 10 is used anywhere in our society.


In our society, the HN community, getting an extra bit at 16, 32 or 64 is more exciting than turning say 30 or 60.

At least in my household this is the case…


To all of you that wrote adhoc scripts to “see” this happen, what are you doing to preserve them so they work for 1.8 gigaseconds on January 15, 2027?

I lost my 1.6 gigasecond script because it was on a work laptop at a previous role.


I think the best solution to that is to create an appointment in you calendar or a reminder in your todo-app with that ad hoc script in a comment.


I'll just utilize my skills as a programmer to rewrite the script. I never reuse code written for a different employer; that's unethical. If you're the type to use other people's code, I'm sure you could just npm install your way through it though (or whatever the package manager du jour will be).


Don’t forget to update your regex appropriately (/s hopefully)






Had a weird timestamp bug yesterday, and I did (stupidly) consider if it was an artifact of hitting 1.7 billion seconds.


while date '+%s'; do sleep 1; done


watch -n 0.5 date +%s


watch -n1 date +%s


hah! noticed this because I was testing some logic and a timestamp popped up at exactly 1.7 billion and some decimal.


I told myself I wouldn't miss it this time D:

Welp, time to wait another 3 years. No way I'm missing the Epochalypse.


1.7e9 is a rather crooked time stamp. 1.8e9 is much neater.


I'm looking forward to "The 2038 Problem"


I filmed it on my phone so I can watch it roll over later...


I was in college when it hit 1 billion. I feel old now.


Not to be a wet blanket, but I'm really surprised to see posts about this all over social media today. This isn't even a nice round number, and we hit a new 100M milestone every 3 years or so.


We get to celebrate every three years? Nice


In 5180 days, unix time will overflow.


Respectable reason to drink :)))


And .... ??

Not only that, it's almost 9AM.

Sorry, but I can't see what the big deal is supposed to be. Maybe I'm missing something.


This guy must be fun at new year parties.


Lots of zeroes my guy


Happy 1.7 and prosper next!


1699996851


1699999000


1699999900

Is live blogging still a thing?


Neat!


in decimal


Not sure why this is significant


Humans love repeating digits.





