This is (seriously) part of my retirement planning. I'll be mid-50s when this hits, and I have enough low-level systems knowledge to be dangerous. In about 15 years, I'll start spinning up my epochalypse consultancy, and I'm expecting a reasonable return on investment in verifying systems as 2038 compliant.
My retirement planning is to work on the Y10K problem. When the year 9997 comes along, everyone is going to start worrying about the rollover to five digit years. In about the year 9995 I will start seriously brushing up on my COBOL.
Yes, I know some of us will never retire. Career planning 8000 years into the future is an interesting thing to think about, though. 8000 years ago the only thing that was going on was some neolithic agriculture. Domestication of the Jungle Fowl (the modern-day chicken) in India and the beginning of irrigated agriculture in Sumeria were probably the biggest news items of that millennium. I guess someone from 8000 years ago could have said he'd be raising chickens or doing irrigated agriculture in 8000 years and wouldn't have been wrong had he lived that long. Makes me think of F. H. King's book "Farmers of Forty Centuries", about how Chinese agriculture has farmed the same fields without artificial fertilizer for 4000 years.
There were almost certainly some forms of relatively advanced seafaring 8000 years ago, possibly including skin boats, sails and paddles, ropes, sealants, and astronomy. Also, fairly sophisticated metallurgy was widespread with at least silver/iron/gold, possibly bronze. Writing was known to some cultures. Horses, camels and water buffalo were likely all domesticated. Drying and smoking were used for preserving and curing meat. Yogurt may have been known in some areas. Advanced pottery. Probably nontrivial herbalist / medicinal / architectural / construction knowledge. Plus of course trapping, fishing, textiles, stonework, etc.
I don't know how much you know about yogurt, so I apologize in advance if this sounds condescending; but yogurt has been eaten since at least the 5000s BC, and is easy to produce, probably even by accident. It's obtained by controlled souring of unpasteurized milk; clabber, which is almost too sour to be edible but is safe to eat if you can stand the taste, comes from spontaneous souring.
Fernand Braudel (in The Structures of Everyday Life) talks of how it was the staple food of the poor in Turkey, and I think in Persia. US commercial yogurt is weak and sugary; the Eastern variety is much more lifelike.
This is obviously a joke, but here's a serious reply to it: There will already be serious Y10K problems in 9997, just like there were Y2K problems earlier than 1997.
Epochs and date formats aren't only used to represent and display the current date; they also represent dates in the future - think reservation systems, graphing libraries that plot things 10, 15, or 50 years out, etc.
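To make that concrete, here's a minimal C sketch (the 25-year horizon and names are just illustrative) of how a far-future date overflows a 32-bit time_t today, long before 2038 actually arrives:

    /* Toy demo: a date ~25 years out already doesn't fit in a signed 32-bit
       seconds-since-1970 counter. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);                                /* seconds since 1970-01-01 */
        int64_t future = (int64_t)now + 25LL * 365 * 24 * 3600; /* roughly 25 years ahead */
        int32_t narrowed = (int32_t)future;                     /* what a 32-bit time_t would hold */

        printf("64-bit seconds: %lld\n", (long long)future);
        printf("32-bit seconds: %d%s\n", (int)narrowed,
               (int64_t)narrowed != future ? "  <-- already wrapped" : "");
        return 0;
    }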
People get bans on Xbox Live until 9999-12-31.
I guess it's probably just an input validation thing. Also, I think it's fair to give most people their account back after that amount of time.
Another idea that'd be way cheaper - start writing books on how to survive the epochalypse. You can crib a lot of material out of the books written in '99.
There is a certain amount of snark embedded in this post, but it's not a bad idea at all - I'll be in much the same boat at that time - perhaps I can borrow the same idea...
It wasn't intended to be sarcastic. I suspect there will be a lot of businesses (especially finance) interested in verifying that the "ancient" Linux system they set up to replace their mainframe 25 years "ago" is safe. I lived through the Year 2000, and watched the same thing happen there.
With suitable groundwork, there will be a willing and wealthy market looking for people to assuage their fears - a service I see myself as happy to provide. Much as it was in the "Millennium bug" area, a lot of the effort in getting the business is PR spadework, but I've got 15-20 years of prep time to position myself suitably. I also hope to provide a slightly more useful service than many Millennium consultancies did at the time.
Surely these sorts of errors in major systems are already being/have already been addressed - for healthcare for example a search for which members of a GP's practice are going to be pensioners in 2040 is going to error badly? Strikes me that the time to address them was shortly after the millennium bug.
If you were around for the millennium bug, then surely you remember how many people waited until about October, '99 to start looking for problems. If you think those people went looking for another problem 38 years away to fix...
If this class of business is not seeing a problem this minute, it isn't a bug. And it won't be a serious-enough bug to spend money on until whatever workarounds they can think of start having negative effects on income.
Sources: experience with Y2K remediation, experience with small business consulting & software development, experience with humans.
So true. That said, thinking 38 years into the future is usually not sensible for most businesses, because it's very possible that they're bankrupt before then. Thinking 21 years into the future also offers poor ROI for the same reason.
I was thinking large companies, banks, etc. I'm not in IT/programming but remember (despite being young) that two other calendar-related 'bugs' were mentioned at the time. IIRC we've had one and the epoch bug is the second.
Strikes me that big businesses would have thought "will we need to do this again in a few years?" and acted appropriately, and that there should be a trickle-down effect as large corps demand future-proofing in IT products.
Yes, negligence, ignorance, lack of foresight, corner cutting, and other human traits feed into that.
> Surely these sorts of errors in major systems are already being/have already been addressed [...] search for which members of a GP's practice are going to be pensioners in 2040
If you know anything about programmers, you know they found that error at some point during development, thought about how other people dealt with it, remembered that Windows used to treat >30 as 1900s and <30 as 2000s, and did the same. So probably the people in this thread planning their retirement around solving this will have to figure out which random Unix timestamp is treated as pre-2038 and which is after. And then they will have to undo all the last-minute spaghetti code tying it up in the original program.
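For the curious, the pivot hack described above looks roughly like this in C (the cutoff of 30 is an assumption; every system picks its own):

    /* Two-digit-year "windowing": years below the pivot are treated as 20xx,
       the rest as 19xx.  Finding and undoing the pivot is the future cleanup job. */
    static int expand_two_digit_year(int yy)    /* yy in 0..99 */
    {
        const int pivot = 30;                   /* assumed cutoff; varies per system */
        return (yy < pivot) ? 2000 + yy : 1900 + yy;
    }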
That's not been my experience. A lot of companies don't address needs that are farther in the future, because there are nearer term needs and a fixed budget. Also, legacy systems that haven't been patched or updated in decades aren't that uncommon.
And, even if they happen to be safe, they do tend to pay well just for an audit to confirm that.
Those types of systems use various date representations instead of datetime/timestamp fields, so they should be mostly immune. Well, until 2038 arrives and some of these dates start being compared with the current datetime.
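A quick sketch of that failure mode (the function name and date layout are mine, just to illustrate): the stored date never overflows, but the comparison drags in time() and a 32-bit time_t anyway.

    #include <time.h>

    /* The stored value is a plain YYYYMMDD integer and is fine forever;
       the trouble is the "current datetime" side of the comparison. */
    int is_expired(long stored_yyyymmdd)
    {
        time_t now = time(NULL);                /* wraps in 2038 on a 32-bit time_t */
        struct tm *t = localtime(&now);
        long today = (t->tm_year + 1900L) * 10000L
                   + (t->tm_mon + 1) * 100L
                   +  t->tm_mday;
        return stored_yyyymmdd < today;         /* gives nonsense once `now` wraps */
    }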
>verifying that the "ancient" Linux system they setup to replace their mainframe 25 years "ago" is safe.
The problem with mainframes is that they can't be trivially upgraded or migrated to 64-bit like modern OS's on x86 hardware can be. Vendor lock-in, retirement of the OS, bare-to-the-metal coding, etc., caused this. If these mainframes were running a modern OS, it would have been trivial to upgrade them to a 64-bit version and make whatever small changes are needed to date storage in the old 32-bit apps. You won't need a wizened COBOL guy for this. A first-year CS student would be able to look at C or C++ code and figure this out. Modern languages are far more verbose, and OO programming makes this stuff far easier to work with.
Comparing mainframes to Unix systems really doesn't make sense. It's two entirely different designs. Not to mention, the idea of running a 32-bit OS today is odd, let alone 20+ years from now, especially with everything being cloudified. You'd be hard pressed to even find a 32-bit Linux system in 20+ years, let alone be asked to work on one. That's like being asked to set up 1000 Windows 98 workstations today.
Pretty much everything in your post is wrong. IBM mainframes are heavily virtualized and have very good support for moving to larger address spaces. VM and MVS moved from 24-bit to 31-bit to 64-bit address spaces. You can run the old 24-bit applications and upgrade them as needed. Even assembly programs - the old assemblers and instructions are supported on newer hardware. System i (System/38-AS/400) was built around a 128-bit virtual address space from the start. There is much more support for fixing old software on mainframes than there is for proprietary 1980s-era PC and Unix applications.
I have no idea why you think running 32-bit today is "odd." 32-bit desktops and small servers are still perfectly usable today. 32-bit microcontrollers are going to be around for a very long time (just look at how prevalent the 8-bit 8051 remains), and a lot of them are going to be running Linux. It also makes a lot of sense to run 32-bit x86 guests on AMD64 hypervisors - your pointers are half the size, so you can get a lot more use out of 4GiB of memory.
Also note that IBM mainframes can run 64-bit Linux just fine. Indeed, IBM's been marketing its LinuxONE mainframe line as a z series machine that doesn't run z/OS at all.
We're talking mainframe system designs and code from the 70s and 80s. No, they aren't running 64-bit Linux. I think you guys need to re-read my post. The legacy systems of Y2K had none of these features.
It isn't 1000 seats, but I know of a Win98/NT4 shop. It is an isolated network supporting mostly phone sales and pick-and-pack, runs some ancient copy of MAS90 and some home-grown software.
These installations exist, and (outside of tech startupland) aren't even that strange, although he is probably pushing things. The owner of that business is proud of how long he's made his IT investment last; his main concern is that dirt-cheap second-hand replacements that can run 98 are apparently getting harder to find.
Be careful about your definition of absurd. Somewhere, right now, some poor bastard is building an NT 3.51 workstation for some stupid reason. I'll bet you $0.05 that some future poor bastard will be building NT4 or 2000 devices in 2038. :)
I built an NT 3.5 VM in 2012 to run some ancient bookbinding/publishing junk that relied on an ancient version of Access... might still be in production, no idea.
That is a great term. Start investing time in static analysis tools in FOSS that find it for you or your customers. Then extend refactoring tools ("source-to-source translators") to automate the job. Apply for YC for growth. Get acquired by IBM who saw all kinds of adaptations for the tool in their mainframe offerings.
There's always someone on HackerNews who has done something pertinent to almost any discussion (it's one of the best things about being here, in fact) - are there any COBOL guys or gals who made a fortune fixing Y2K bugs who'd like to share their story?
I never quite understood why the transition from 1999 to 2000 should be a big deal for a computer system, until I learned how COBOL works: it stores numbers as their string representation unless told otherwise, and trying to store 100 in a two-byte field will happily give you "00". Of course we had other bugs, with the same cause, well after the Y2K period.
Not quite literally strings: COBOL typically uses binary-coded decimal (BCD) format for numbers, but it has the same effect when years are stored as two digits.
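A C-flavoured sketch of the same effect (not actual COBOL, just the analogous truncation): squeeze a year into two characters and 2000 quietly becomes "00".

    #include <stdio.h>

    int main(void)
    {
        char yy[3];
        int year = 2000;
        snprintf(yy, sizeof yy, "%02d", year % 100);  /* keep only the last two digits */
        printf("stored year: \"%s\"\n", yy);          /* prints "00" */
        return 0;
    }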
I work with a database that has its origins in the COBOL era. All of the date fields are specified in the copybooks as four PIC 99 (i.e. two decimal digits) subfields, CCYYMMDD. This separation of CC and YY surprised me until I realized that it allowed them to add Y2K support by setting the default for CC to '19' and switching it later.
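A rough C analogue of that copybook layout (field and macro names are mine): keeping CC as its own subfield is what made the later century-default switch possible.

    /* Mirrors the four PIC 99 subfields: century, year, month, day. */
    struct ccyymmdd {
        unsigned char cc;   /* century, e.g. 19 or 20 */
        unsigned char yy;   /* year within the century */
        unsigned char mm;   /* month */
        unsigned char dd;   /* day */
    };

    #define DEFAULT_CENTURY 19   /* the pre-Y2K default, later switched to 20 */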
It's not just COBOL; there are still plenty of devs out there storing dates as strings. Aside from using more space, most won't notice until they try to filter on a date range.
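A tiny illustration, assuming MM/DD/YYYY strings (ISO 8601 strings would happen to sort correctly): lexicographic comparison gets the range filter backwards.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *older = "12/31/2015";
        const char *newer = "01/01/2016";
        /* strcmp() says the newer date sorts *before* the older one,
           so a naive string range filter returns the wrong rows. */
        printf("strcmp(\"%s\", \"%s\") = %d\n", older, newer, strcmp(older, newer));
        return 0;
    }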
I would guess pacemakers don't run Linux. I would be surprised if they run an OS at all. I work on devices that have to survive 20 years on one non-rechargeable and non-serviceable battery, and there's at most a simple scheduler in place to control tasks. We use 32 bits for epoch time, but our epoch starts Jan 1, 2000, so we have 30 years more headroom than Linux before this becomes a problem.
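Back-of-the-envelope version of that custom-epoch trick (946684800 is the well-known Unix value for 2000-01-01; the rest is arithmetic): counting 32-bit signed seconds from 2000 instead of 1970 pushes the rollover out to roughly 2068.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const int64_t unix_at_2000 = 946684800;             /* 2000-01-01T00:00:00Z in Unix seconds */
        const int64_t rollover     = unix_at_2000 + INT32_MAX;
        printf("Unix seconds at rollover: %lld\n", (long long)rollover);
        /* 946684800 + 2147483647 = 3094168447, i.e. around 2068-01-19 */
        return 0;
    }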
This is really interesting! Obviously it is a topic in itself, but do you mind sketching how one develops software in an environment that requires this extreme amount of reliability? If you do not use an existing general-purpose OS, do you even use an MMU? Is it hard real-time?
E.g. what language do you use? Is it SW or HW that you "ship"? You probably perform some kind of verification and/or validation - what does the tool chain look like?
Do you perform model-checking on all possible inputs?
Lots of questions, and you do not have to go into detail, but I would appreciate your input, as it is an interesting topic.
No MMU. It is hard real-time in the sense that there are events that need to be processed within a small time window (a few microseconds, with help from hardware typically, to milliseconds).
The product is custom hardware built with off-the-shelf parts like microcontrollers, power converters, sensors, and memory. Texas Instruments' MSP430 family of microcontrollers [1] is popular for this type of application. They are built around TI's own 16-bit RISC CPU core with a bunch of peripherals like analog-to-digital converters, timers, counters, flash, RAM, etc.
I don't work on medical devices, so validation is more inline with normal product validation. We certainly have several very well staffed test teams: one for product-level firmware, one for end-to-end solution verification, others for other pieces of the overall solution. We are also heavy on testing reliability over environmental conditions: temperature, pressure, moisture, soil composition, etc.
The firmware is all done in-house, written in C. Once in a while someone looks at the assembly the compiler generates, but nobody writes assembler to gain efficiency. We rely on the microcontroller vendor's libraries for low-level hardware abstraction (HAL), but other than that the code is ours. The tool chain is based on GCC, I believe, but the microcontroller vendor configures everything so that it cross-compiles to the target platform on a PC.
Debugging is done by attaching to the target microcontroller through a JTAG interface and stepping through code, dumping memory, and checking register settings. We also use serial interfaces, but the latency introduced by dumping data to the serial port can be too much for the problem we're trying to debug, and we have to use things like toggling IO pins on the micro.
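For anyone who hasn't seen the pin-toggle trick, a minimal sketch, assuming an MSP430-class part and TI's msp430.h register names (the pin choice is arbitrary): put a scope or logic analyzer on the pin and you get timing information without the serial-port latency.

    #include <msp430.h>

    /* Toggle P1.0 around the code being measured; entry/exit shows up on the
       pin with negligible overhead. */
    void mark_event(void)
    {
        P1OUT ^= BIT0;
    }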
We don't model the hardware and firmware, and we don't do exhaustive all-possible-inputs testing like one would do in FPGA or ASIC verification.
I need to go, but if you have more questions, feel free to ask, and I'll reply in a few hours.
I am surprised that you do not apply some kind of verification or checking using formal methods; however, it might be the case (at least that is my experience) that this is still too inconvenient (and so too expensive) to do for more complex pieces of software.
Actually, the high-assurance field that does such things is very small - a tiny niche of the overall industry. Most people doing embedded systems do things like the parent described. The few doing formal methods usually are trying to achieve a certification that wants to see (a) specific activities performed or (b) no errors. Failures mean expensive recertifications. Examples include EAL5+, especially DO-178B or DO-178C, SIL, and so on. Industries include aerospace, railways, defense, automotive, and supposedly medical, but I don't remember a specific one. CompSci people try formal methods on toy, industrial, and FOSS designs all the time, with some of their work benefiting stuff in the field. There's barely any uptake despite proven benefits, though. :(
For your pleasure, I did dig up a case study on using formal methods on a pacemaker since I think someone mentioned it upthread.
One important thing to note is that the 20-year life expectancy includes several firmware updates. An update may take several hours to several days to complete, so, it's not something that is commonly done, but it's an option.
I am fairly new to this field and I share your surprise that more formal methods are not used in development. To be honest, the development process in my group and others I'm familiar with can be improved tremendously with just good software development practices like code reviews and improved debugging tools.
I see "wireless", but not "WiFi" or "802.11*" in the PDF.
For what it's worth, the devices I work on have a few wireless interfaces while guaranteeing a 20-year lifetime: one interface is long-range (on the order of 10 km), two are short-range (on the order of a few mm). There is no way we can get to a 20-year lifetime doing WiFi (maintaining current battery size/capacity) for the long'ish range, and maybe not even BT for the shorter range.
The microcontrollers on these devices don't have MMUs. There is typically not even a USB interface. The microcontroller is in deep sleep mode saving power 99.9% of the time. During that time only essential peripherals are powered on and no code is executing.
A RasPi has no chance of running for 20 years off a single A-size non-rechargeable non-serviceable battery.
Same, I'll be 55. My son will be 23. Either our generation fixes it, or his generation will have to fix it in their first jobs out of college (sorry kids!)
It's an issue now, and it will be even more urgent as the deadline approaches.
Once we hit the ten-years-out mark, you're going to see things like expiry dates for services roll over the magic number. The shit will hit the fan by degrees.
Not a bad plan. I did something similar in 1998-1999 as a Y2K consultant. Companies at the time wanted to be certified Y2K compliant - those were some good years. I certified a lot of companies using medical equipment from Perkin Elmer and Kronos time clocks.
I hate to be a buzzkill, but more than the computer systems, it's the food supply... climate change is going to wreak havoc on our "retirement"; we will likely die young, starving and thirsty.
No one is deploying 32-bit Linux now, outside of tiny edge cases and mobile - and mobile devices go in the trash every 2 years. What do you reasonably expect to be around in 2038 in 32-bit form?
Once 64-bit processors became mainstream, the 2038 problem pretty much solved itself. There's only disincentives to building a 32-bit system today let alone in 20+ years.
Unlike with Y2K, where there was nothing but incentives to keep using the Windows and DOS systems where the 2000 cut-over was problematic. The non-compliant stuff was being sold months before Jan 1, 2000. 32-bit Linux systems have been old hat for years now, let alone 20+ years from now.
Not to mention that those old COBOL programs were nightmares of undocumented messes and spaghetti code no one fully understood, even the guys maintaining them at the time. Modern C or C++ or Java or .NET apps certainly can be ugly, but even a second year CS student can find the date variables and make the appropriate changes. They won't be calling in $500/hr guys for this. Modern systems are simply just easier to work with than proprietary mainframes running assembly or COBOL applications that have built up decades of technical debt.
There are, in fact, a lot of 32-bit ARM chips still being deployed today. Yes, arm64 is usable, but using e.g. a Beagleboard- or even Raspberry Pi-class device still often makes sense (for cost or compatibility reasons.)
No 2016 Beagleboard implementation will be running BigCo's finances and be irreplaceable in 2038. Let's be realistic here.
Those 70s and 80s programmers were working on mainframes with multi-decade depreciation. We work on servers and projects with 3-5 year depreciation when we aren't working on evergreen cloud configurations. Not to mention we've already standardized on 64-bit systems, outside of mobile, which is soon following and typically has a 2-year depreciation anyway.
But some industrial- or military-spec ARMv7 core running a critical embedded system or two, in 2038? Twenty-year design lifespans (often with servicing and minor design updates) are definitely not unheard of, and successful systems often outlive their design lifespan.
It won't be the same magnitude of issues. However, I'm sure there will be plenty of apps on said 64-bit Linux that have issues. I commented about a MySQL problem here that exists on 64-bit MySQL, on 64-bit Linux. It's not much of a stretch that some internal apps at a company would have similar issues.
Edit: NTP has something of a protocol issue to be addressed as well.
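For reference, a quick sketch of the NTP wrinkle (the only constant is the standard 1900-to-1970 offset): the on-the-wire timestamp counts unsigned 32-bit seconds from 1900-01-01, so era 0 runs out in early 2036, two years before the Unix 2038 limit.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const int64_t ntp_to_unix = 2208988800LL;             /* seconds from 1900-01-01 to 1970-01-01 */
        const int64_t era0_end    = (int64_t)UINT32_MAX - ntp_to_unix;
        printf("Unix seconds when NTP era 0 wraps: %lld\n", (long long)era0_end);
        /* 4294967295 - 2208988800 = 2085978495, i.e. around 2036-02-07 */
        return 0;
    }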
Yeah. I was working on an IoT system that uses NTP to set time. I generally plan a 20-year life span for things like this. There's S/W I wrote over 20 years ago that is in devices still in use. The only good thing about this is that I'll not likely be around when trouble crops up. I didn't see any way to accommodate, in code I write today, changes that are decades away.
Yeah but that's a protocol issue. Single graybeard devs aren't going to be paid to fix that. The people who run ntp are going to push out a new protocol way before 2038.
Even if there are issues, more than likely they'll be able to handle it internally. OO programming isn't going anywhere, and modern languages and concepts are easier to work with than piles of undocumented COBOL from Y2K. They won't be calling you for help changing date fields. That's trivial stuff.
Right, but at a high level, I'm answering why there might be money in consulting on this later.
There will be Fortune 500 companies that have old NTP clients running somewhere in 2038... pretty much guaranteed. They'll also have apps with 32-bit time_t values running as well, database columns that overflow, etc. Or maybe they won't, but they aren't sure. You sell them a service that audits all of those things: scripts that look for troublesome stuff using source code greps, network sniffing for old protocols, static analysis, simplistic parsing of ldd, etc. And a prepackaged methodology, spreadsheets, PowerPoint to socialize the effort, and so on.
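A toy version of one check such an audit might script up (details are illustrative, not a product): is time_t still 32 bits on this box, and does a post-2038 date survive mktime()?

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));

        struct tm tm = {0};
        tm.tm_year  = 2040 - 1900;   /* a date safely past the 2038 boundary */
        tm.tm_mon   = 0;
        tm.tm_mday  = 1;
        tm.tm_isdst = -1;

        time_t t = mktime(&tm);
        if (t == (time_t)-1)
            printf("mktime() cannot represent 2040 here -- 32-bit time_t suspected\n");
        else
            printf("2040-01-01 round-trips fine (t = %lld)\n", (long long)t);
        return 0;
    }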
It was the same for Y2K. Fixes for many things were available well ahead of time. Companies had no methodology or tools to ensure that the fixes were in place.