Did anybody notice Sun, 09 Sep 2001 01:46:40 GMT, when the Unix timestamp hit 1000000000? And since we're all looking at decimal values, has anybody calculated when we get some nice binary timestamps?
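For the curious, the nice binary boundaries take one line each to compute. A quick sketch with plain java.time (nothing assumed beyond the standard library):

    import java.time.Instant;

    public class BinaryTimestamps {
        public static void main(String[] args) {
            // Print a few powers of two interpreted as seconds-since-epoch.
            for (int bit : new int[] {30, 31, 32}) {
                long t = 1L << bit;
                System.out.printf("2^%d = %d -> %s%n", bit, t, Instant.ofEpochSecond(t));
            }
        }
    }

That gives 2^30 -> 2004-01-10T13:37:04Z (note the time of day), 2^31 -> 2038-01-19T03:14:08Z (the signed 32-bit rollover), and 2^32 -> 2106-02-07T06:28:16Z (the unsigned one).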
The FreeBSD Project actually had a gigasecond bug -- the cvsup protocol (used for CVS tree replication and checkouts) transmitted time as an ASCII seconds-since-epoch value, and when September 2001 pushed that value from nine digits to ten, the changed string length caused a protocol sanity check to fail.
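I haven't gone back to the actual cvsup source, but the failure mode is easy to reconstruct: if the receiver validates the ASCII timestamp field against the nine-digit width it had always seen, the check holds for years and then fails one second after the rollover. A hypothetical sketch (not the real cvsup code):

    // Hypothetical: receiver expects the epoch-seconds field to be
    // exactly 9 ASCII digits, which held from March 1973 to September 2001.
    static long parseTimestampField(String field) {
        if (field.length() != 9) {
            throw new IllegalArgumentException("malformed timestamp: " + field);
        }
        return Long.parseLong(field);
    }

    // parseTimestampField("999999999")  -> 999999999 (last valid second)
    // parseTimestampField("1000000000") -> throws: ten digits now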
Yes. In Java (then as now) a numeric literal is treated as an int unless you end it with an L (e.g. 1000000000L), and the compiler rejects any literal too big for an int -- the real limit being Integer.MAX_VALUE, 2147483647, rather than a digit count -- unless it has that suffix.
We had a trial version of our software that would expire after 30 days. To implement that, we had a script that inserted the expiry date into the source and recompiled every night. Around 30 days before the timestamp passed 1 billion, the compiler started giving an error and the script crashed -- the embedded expiry date sat 30 days in the future, so it crossed the boundary first.
(It may actually have been 12 -> 13 digits, since Java uses milliseconds since the epoch, but I'm not sure this many years later.)
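You can still reproduce it today. Assuming the nightly script spliced the raw epoch value into a constant (names made up here), the generated source stops compiling the moment the literal outgrows int range:

    public class Expiry {
        // Fits in an int (max 2147483647), so no suffix needed:
        static final long SMALL = 999999999;

        // 13 digits of epoch milliseconds overflows an int; without the
        // L suffix javac rejects it with "integer number too large":
        // static final long EXPIRES_AT = 1000000000000;  // compile error
        static final long EXPIRES_AT = 1000000000000L;    // fine
    }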
OpenLDAP got hit by the billennium bug. I remember because we told our NOC to keep an eye open (it was Sunday afternoon where we were), and then we started getting alerts that all LDAP replication was broken.
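I never dug into exactly which code path broke, but the classic digit-rollover trap is comparing decimal timestamps as strings -- fine while every value has the same width, suddenly backwards at the boundary. A minimal illustration (not OpenLDAP's actual code):

    public class DigitRollover {
        public static void main(String[] args) {
            String before = "999999999";  // last nine-digit second
            String after  = "1000000000"; // one second later
            // Lexicographic comparison calls the newer timestamp "smaller",
            // because '1' sorts before '9':
            System.out.println(before.compareTo(after) > 0); // prints true
        }
    }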