Feds spend billions to run museum-ready computer systems (ap.org)
96 points by twakefield on May 26, 2016 | 98 comments



Using floppy disks and old hardware and software doesn't sound like a problem if it still runs and does what it's supposed to do. I'm skeptical that building a modern system would really save money since the temptation for feature creep is too great.


> Using floppy disks and old hardware and software doesn't sound like a problem if it still runs and does what it's supposed to do.

It sounds like a problem to me for the simple fact that replacement parts are nearly impossible to find, and it's clearly costing taxpayers a lot of money. That alone should be reason enough to upgrade.

It's also a huge security problem because many of these machines were designed before modern security procedures were invented. How am I supposed to maintain a cryptographically secure password system on a machine with a processor too slow to run a hashing algorithm?
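To make that concrete, here's a minimal sketch (the parameters and timing are my own illustration, not from the article): modern password hashing is deliberately tuned to burn significant CPU time per guess on current hardware, which puts it entirely out of reach of a decades-old processor.

    import hashlib, os, time

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # Modern KDFs are tuned to take a noticeable fraction of a second
    # per guess on *current* hardware; the iteration count here is
    # illustrative, not a recommendation.
    start = time.time()
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
    print("600k PBKDF2 iterations took %.2fs" % (time.time() - start))
    # Scale that to a CPU thousands of times slower and a single login
    # check takes minutes; the old machine simply can't afford it.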

> I'm skeptical that building a modern system would really save money since the temptation for feature creep is too great.

So, because we might be tempted to add a few features, we shouldn't upgrade technology? I don't understand where this weird pseudo-luddite mentality in the tech world comes from. I see it all the time in tech forums, and I just don't get it.

"If it's not broke, don't fix it" is a fun, catchy phrase, but it really breaks down as soon as you try to apply it anywhere. "Hey, you should really change the oil in your car." "If it's not broke, don't fix it."


That's because a lot of the time the suggestion is more along the lines of "Why bother changing the brake pads on your old car when you can just buy a new car with new brake pads?" than "You should really change the oil in your car." Never mind that your old car is a truck you need for hauling things, is easily repairable, and is something you know inside and out, and people are telling you to replace it with a SmartCar when you really just need new brake pads.


But now your old truck is 70 years old and all the replacement parts have to be hand-made by people you called back from retirement. It's okay to keep it running to show off from time to time, but it's in no way cost-effective for doing any real work.


> How am I supposed to maintain a cryptographically secure password system on a machine with a processor too slow to run a hashing algorithm?

Counter-argument: How are you supposed to develop a virus for such obsolete technology, and even then, how are you supposed to infect a system without it being networked in a meaningful fashion?



I can't read the article (ad-blocker blocker), but those don't look like "50-year-old computers"; they were actually running some version of Windows, making them relatively new and easy to infect.


That's my point: that's exactly what happens with new technology.


Or, solve just what you need to. With floppies getting hard to find, but old industrial systems still working just fine, people designed and built floppy emulators:

https://en.wikipedia.org/wiki/Floppy_disk_hardware_emulator

It's often a lot cheaper to deploy a fix for what's actually broken, than it is to completely replace a business critical system.


Also, mentally replace every instance of "museum ready" in this article with "mission critical" (which most of them probably are), and it'll seem a lot less ridiculous that they maintain them.


I think the article is pretty light on the specifics.

There are parts of technology where you're absolutely right - if some mission-critical system is running on some old circuit, but it works reliably and maintenance is feasible, don't change it!

But other pieces of tech include the Windows 95 computer that is the only one anyone can do _____ with, because of some complex system that only runs on it. And anytime you do _____, you need to copy your data onto a file, and exclude some specifically formatted config file that you write in notepad, in order to get it done. And the whole system is a small project that could be feasibly implemented quickly and cheaply to run on modern computers.

So there are two sides to this, and arguing absolutes doesn't get us much closer to the truth.

From the article, it mentions that Social Security has a variety of legacy systems, and updates the ones that it thinks are the slowest and costliest. Which is the right way to think about it, so long as they have the technical expertise to make those judgements correctly.


Agreed. Plus, even if it is decided to modernize any of these systems, we would likely have to run an old system with the new one in parallel for quite some time to ensure a properly smooth transition. Air traffic control is an example of where such practices occur.


Even if the software never needs an update (possible for isolated process-control systems, less so for central record-keeping at the IRS), there's hardware to consider -- where, nowadays, do you get a replacement 8-inch floppy drive? Or, say, new read heads for one?



> Using floppy disks and old hardware and software doesn't sound like a problem if it still runs and does what it's supposed to do.

That's great reasoning, until the day it stops working.


Yeah, but what happens further down the road when it doesn't run well anymore? My old company was struggling to find SMEs (subject-matter experts) for its legacy systems.


The real problem is that there are huge legacy systems tied to these platforms that nobody fully understands and that are too risky to port or re-engineer. Think of our military systems or payroll going down because software was ported incorrectly or relied on an underdocumented assembler or compiler feature.

There is some hope of reducing costs, at least. Look up NuVAX for an example of emulators designed to work exactly like the old hardware at a fraction of the price, space, energy, and so on. I haven't heard of any attempts yet, but the next step might be instrumenting them to trace program code and data for porting. Or binary translation to modern architectures; I know DEC did the latter for the VAX-to-Alpha transition.


NuVAX or equivalent if you need real hardware compatibility and physical to virtual for everything else. Seems like a golden consulting opportunity to leverage SIMH.


The private sector usually manages to keep its systems updated. This seems like the typical incentive and accountability problems of government bureaucracy.


There are similar analogs in the private sector, such as the massive prevalence of COBOL and IBM mainframes even in technology-heavy sectors like finance.

One factor affecting the public sector that private industry typically doesn't face, though, is that most federal contracts basically require the entire supply chain to be "Made in America." When the federal government buys something, it wants as much of it as possible made by US citizens, on US land, and with US investors, unless there's no viable alternative that meets the criteria (there are hardly any fabrication facilities in the US, for example). Outsourcing labor and not owning much of your supply chain is profitable, whereas the federal government operates somewhat like the Carnegie model of vertical integration in the classic sense. Obviously, that model does not hold up well in today's business environment except for a select few companies (Apple being the notable one), because the cost models are so different.

There are accountability, alignment, and bureaucracy problems aplenty across the Fortune 100, even excluding the parts that sell to the federal government. It's tough to scale organizations as large as the Fortune 10; I don't see how the federal government's millions of workers, plus perhaps even more millions of contractors, would be any easier.


Dead wrong. In safety-critical or mission-critical environments, it's not at all unusual to see systems that are decades out of date. We just recently retired the last of our VAX machines, and we still run a large part of our software-build infrastructure on Windows XP machines (airgapped). Our software configuration for these machines is certified, extensively tested, and has heaps of documentation proving that it does exactly what we say it does. Unimaginably large heaps. Moving to new hardware/software is incredibly expensive in some environments (in this case, I'm referring to a build and test platform for DO-178B Level A certified software).


You mean the private sector that spends billions annually maintaining and updating old mainframe, COBOL, minicomputer, DOS, OS/2, Windows NT, and proprietary UNIX systems and apps? The two sectors are more similar than different on the legacy-system end.


I'm not sure what private sector you're talking about, but the healthcare industry keeps systems updated only due to regulatory requirements, and even then there are a fair number of practices that stick with old systems and just have their upstream providers (e.g. billing services) massage the data on the way to its destination.


Take just a quick look at MUMPS. And they start new deployments, new projects even today. With my tax money, here in Europe.


Use floppy disk


Most federal systems that are anywhere over ten years old (and that's most of them) are complete mysteries to the people who both use and maintain them.

A long time ago, I was responsible for such a system. I didn't ask for the job; I simply was the smartest person in the room for too long.

I vividly remember one day we had a problem with folks in a remote location entering things and those things getting mangled and/or lost on the way to the system-of-record.

For one system, with maybe five thousand users and perhaps a few gigabytes of traffic a month, I was on a call with 30 people spanning most of the Earth. I learned that there were at least a dozen separate systems at that location between the person entering the data and the data being sent to HQ. Each system was old. Each system had a separate vendor, each claiming to be the only vendor that understood that system. (Sometimes this was true; many times they were just bluffing.)

And -- and this was the kicker -- for each of our dozens of locations, each location manager, because of their friendship with politicians, made their own decisions about how machines were configured and which programs were installed. They were complaining to us because things were bad, but they did not feel like they answered to us.

I was responsible for fixing it.

At the end of that call, I was reminded of Arthur C. Clarke's quip: Any sufficiently advanced technology is indistinguishable from magic.

But I doubt I thought of it in the way he meant it.


As a younger developer, I can't emphasize enough how much I enjoy hearing these types of stories. :)


To be fair, my last employer (aerospace manufacturing) ran an incredibly dated OS with pretty decent results. It was simple, to the point, and ugly as hell, but it got the job done without needing constant updates and so on. Also, we never had a problem with malware (because who writes malware for a 30-year-old OS?)

I understand this article mentions many different sectors and functions for antiquated systems but sometimes an update simply isn't needed.


I worked, once, at a large, national (US) company that will not be named.

They had an old database from the early 70s that stored _all_ of their data: contacts, billing, everything.

That was accessible through a special proprietary program, overseen by one college kid after the rest of his team was let go.

That proprietary program was essentially an old DOS-style prompt, which was connected to via a Java applet (for an early version of Internet Explorer, < 7) that emulated the old DOS-like user interface.

That Java applet was connected to via a Java web service, running on machines that had the Java applet and IE installed.

The Java web service was accessed by tens of thousands of people nationally, possibly a hundred thousand, through a few different interfaces.

The one I was aware of (probably the largest) connected to the Java web service through a C# WCF service.

The C# WCF service was built as the backend for a new JavaScript/HTML4 front end for the company.

That new UI was intended to partially (BUT ONLY PARTIALLY) replace a strange, 100% ActionScript web UI made previously.

Learning about their system architecture was like stepping back through time. I felt like an archaeologist uncovering layers of an ancient city.

It was also amazing how many people were employed supporting each system, each of which could have replaced the lower tiers if they had just upgraded at the time.

It was similarly amazing how the company had laid off everyone in entire layers when management arbitrarily decided it needed to make cuts, while other layers doing the _exact_same_thing_ were staffed with huge numbers of people, completely unaware that the layer beneath them was no longer overseen or supported, and that a single bug in it could bring the whole house down at any time.


These two anecdotes (parent and grand-parent) show that there are two ways to manage legacy systems: a mindful, efficient way and a wasteful, risky way. Come to think of it, those same two ways apply to any project, old or new. So it all really boils down to project management and corporate practices.

I agree with all the comments saying that legacy systems, government or corporate, are not inherently bad, inefficient, insecure, or in need of replacement.


Alternatively, the grandparent post's mindset becomes the parent post's when the application needs to be extended; the grandparent's simply hasn't had that need yet : )


> because who writes malware for a 30-year-old OS?

Careful. I believe I just read an article (Wired?) about some hackers doing exactly this.

Security through Seniority might be a good name for the mindset. :-)


Is there a term for when you say something knowingly broad but ALSO understand there are specific cases where it doesn't apply? Like a simple term I can paste onto the end of statements like (sic) or similar so that I can avoid the nitpickers?

EDIT: honestly not being facetious just curious because this exact thing happens constantly.


That's how models and rules work in general: there's a rule or pattern that mostly applies, but there are always exceptions.

In this case, the old systems were all torn apart by older hackers. There are young ones too, just a few, hacking mainframes and the like now. The ease of finding the first flaws showed that the only reason they weren't found sooner is that nobody cared enough to look, or couldn't afford the relics to play with. So believing you're safe just because your computers are old is akin to Security via Seniority.

Now, there are older approaches, like tagged hardware or dedicated lines, whose intrinsic properties reduce the odds of attack. Definitely apply any good techniques from the past to the present situation. But merely being old... especially being older than INFOSEC itself... doesn't make a system safer.


The rebuttal was pertinent and polite, and (almost) provided a reference, so I really don't think it can be qualified as nitpicking. Your statement was a bit exaggerated for effect, so you can hardly criticize someone who responds to it, for effect.

I suggest you avoid the knowingly broad and include nuance in your arguments. I like the suggestion of adding "generally".


That's an interesting question. I can't think of one off the top of my head. You should try http://english.stackexchange.com/

Questions like this are pretty common there.

As an added bonus, Peter Shor (famous for Shor's algorithm) just might answer your question.


"(generally)" ?

But you're right, it's sad how many people insist on feeling smart by taking a general statement as a universal statement and nitpicking it. It's so tiring.


an exaggeration?


I remember a story of someone who was decommissioning an HP-UX box at a university after years of service doing... not much, it seems. While shutting it down they noticed that the system had been compromised, and an investigation began. It turns out the intruder had exploited a remote vulnerability, logged in as a regular user, and tried to compile a rootkit. After days of failed attempts to compile the software, the intruder seems to have found a way to patch the remote vulnerability, then logged out, never to return.

This is the same thing as security through obscurity or security through lack of usability IMO.


That is pretty funny, because decades ago we used to remotely compromise university HP 9000 systems, create new privileged accounts, and apply the latest patch sets to close up any security issues. I would assume that was a pretty common practice; you don't want someone else messing with your system. I would bet the investigation didn't go far enough.


That's great. I'm going to have to remember that phrase. Another I considered is Security through Obsolescence. Makes it sound more retarded. ;)


Security through unplanned obsolescence.


I remember reading a counterpoint about this a while back -- sometimes for critical systems, the risk of updates is really high.

For example, NASA still uses hardened 808x systems. On top of that, for space-based systems in an ionizing-radiation environment, the risk of hardware developing faults is non-zero, and the kinds of things people do for error correction in that environment are insane.

The flip side of this is that, when a technology is widely used, there is scaling that happens. If you are the sole user of a technology, you bear the entire maintenance cost that used to be shared by all of its other users.

And then there's the bigger question: how effective is nuclear deterrence? And I don't mean for the United States of America vs. the rest of the world. I mean for the global, human civilization, and homo sapiens as a species.


W/ regard to space-based and airborne systems (commercial jets also actually operate in a fairly high radiation environment):

The risk of faults is not only non-zero, it's expected. Google 'single event upset' and 'NOR flash soft error'. Short version: (SEU) any bit in your RAM, CPU, peripherals, or data bus can (and eventually will) be randomly flipped in a high-radiation environment. (Soft error) A random cell in your flash may be pushed into an indeterminate voltage range and will return a different value every time you read it. So, for instance, you may CRC it, think it's good, copy it to RAM, and then the CRC of the RAM copy will fail because you got a different value the second time you read it.
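The flash failure mode above implies a simple software-side mitigation: never trust a single read. A rough sketch in Python (the flash-read interface is hypothetical, purely for illustration):

    import zlib

    def read_flash_block(address, length):
        # Hypothetical raw flash read; stands in for real driver code.
        raise NotImplementedError

    def stable_read(address, length, retries=3):
        # Read the block twice and accept it only if both reads agree,
        # guarding against a cell at an indeterminate voltage that
        # returns a different value on every read.
        for _ in range(retries):
            first = read_flash_block(address, length)
            second = read_flash_block(address, length)
            if zlib.crc32(first) == zlib.crc32(second):
                return first
        raise IOError("flash block unstable after %d paired reads" % retries)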

There are ways to deal with this. Google 'lockstep CPU' for information on a common, fairly hardcore approach. Basically you replicate the CPU (and the rest of the hardware) and cross-check every single clock cycle; you essentially have two or more computers in one box doing the work of a single computer.
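The cross-check itself happens in silicon on every clock edge, but the voting logic at the heart of the redundant approach can be sketched in a few lines (a toy model of triple modular redundancy, not how real lockstep hardware is built):

    def majority_vote(a, b, c):
        # Return the value at least two of three replicas agree on.
        # A single corrupted replica is outvoted; if all three disagree,
        # the fault is uncorrectable and the system must fail safe.
        if a == b or a == c:
            return a
        if b == c:
            return b
        raise RuntimeError("no two replicas agree; uncorrectable fault")

    # One replica took a bit flip; the other two outvote it.
    assert majority_vote(0b1010, 0b1010, 0b1110) == 0b1010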

As you can imagine, this hardware is typically entirely custom, tested more thoroughly than anything you've ever imagined (unless you're in a safety critical industry yourself), and very expensive to design & test. Hardware refresh cycles are typically measured in decades, often governed by component obsolescence.


>If you are the sole user of a technology, you bear the entire maintenance cost that used to be shared by all of its other users.

Yeah, the guy who maintained the OS was basically the only guy in the country who could, so obviously we had to put up with his bullshit, lol.


Space Cowboys (2000) was along these lines. The cranky guy who was the only one who understood the ancient "OS", being its creator, was played by Clint Eastwood.


    > who writes malware for a 30-year-old OS?
This may be relevant ... https://en.wikipedia.org/wiki/Stuxnet


>> because who writes malware for a 30-year-old OS?

Sounds like a taunt. Almost makes me want to write some code


I'm interested in your idea for the delivery mechanism.


Just leave some 8" floppies in the parking lot ;)


eBay. It'll take a bit of effort to gather up the obsolete materials yourself, but get a few obsolete drives, put a backdoor in the firmware, and wait until some .gov clicks Buy It Now.


An old joke about the Pentagon goes along the lines of "we don't need to worry about security because our systems are too old, rare, and proprietary to find."

As someone else said, security via age and rarity.


Control of the network helps a lot too.


On the other hand, they're getting decades out of the software. Use the bleeding-edge stuff, and it's obsolete in two or three years now. Use a "cloud" service, and the service probably goes away within five years. The new stuff has too much churn. Where will Rails be in ten years? Python? Java will still be around; it's the successor to COBOL.


Python's been around for like 27 years, so 10 more doesn't seem that tough.


2.x or 3.x?


I understand a lot of people whinge about this, but converting most of my 2.x code to 3.x code involved nothing more than adding brackets to my print statements.
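For anyone who hasn't done such a port, the change really can be that mechanical (a made-up example of the before and after):

    # Python 2 print statement (a syntax error under Python 3):
    #     print "status:", code
    # Python 3 print function; often the only edit a small script needs:
    code = 200
    print("status:", code)

The 2to3 tool that ships with Python automates exactly this class of rewrite.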


Not meaning to trivialize your experience, and I'm sure that's true for a lot of projects. But on the other hand, big enterprise and government tend to accumulate projects that were written over decades of change cycles.

Porting something like that across a non-backwards-compatible version bump while preserving identical behavior which is sparsely documented and uncovered by unit tests? Starts to be a serious amount of effort quickly.


> The new stuff has too much churn.

<3 <3 <3

Indeed. Basically anything in the JS world, except maybe Node.js, will vanish in 5 years. (I'm looking at you, Bower, Gulp, EmberJS; even the giant dinosaur jQuery is on the decline, given that everyone and their dog is shifting to Angular.)

Seriously, if one is after long-lived software, choose a solid PHP framework (Drupal 7, for example, has been around for 5 years now and will probably be supported for another two or three) or a Java framework, together with a rock-solid database (i.e. no NoSQL crap)...

Hate PHP and Java all you want, but those two languages treat backwards compatibility as a priority.


Ember is going to be 5 years old this year and has a team dedicated to backwards compatibility. It has the least "churn" of the frameworks.

And the government could just choose not to use any third party framework.

But to prove your point, people are moving to React, not Angular :-) jQuery is used in almost all Angular apps anyway.


Nassim Taleb describes a heuristic reflective of this concept in Antifragile -- basically, technologies that have been around for x years are likely to remain in use for another x years. Think about the wheel. Or paper. Or (as cited above) COBOL. Interesting food for thought. I use this when talking to folks who insist that client-side file storage is going away (e.g. "but everything is going to move to the cloud!"). Explicit filenames on the desktop have been around for decades -- and are likely to remain a fundamental part of our system structures for a long time to come (though they are likely to be joined by documents stored in the cloud).


They're getting decades out of the software, but "about three-fourths of the $80 billion budget goes to keep aging technology running". Is that worth it to you?


That isn't a useful metric without details, as it probably includes such sundries as licensing, maintenance agreements, and salaries. A brand-new suite of Oracle enterprise systems would probably have just as unfavorable a cost profile.


"75% of the budget to maintenance, and 25% to new system acquisitions", which is what I suspect that statistic really means, isn't that bad.

Honestly it might be heavy on new system acquisitions. I tend to think that enterprise lifecycles are too fast, most of the time, but that's because they're frequently driven by vendor licensing policies that make old-but-working systems prohibitively expensive on purpose. (IBM is notorious for this.)

I don't know if there's a name for this already, but I think there's a common fallacy in which people think that a "chuck it and redo it from the ground up" effort will be easier than fixing their existing system's bugs and/or will result in a less-buggy system. But there's a danger in that you're just moving from a situation where you know where all the bugs are, to a situation where you haven't found them yet. I've seen companies that seem to be stuck endlessly in this cycle.


Depends. Where would that 3/4 be going if they invested in new technology? Training? Support?


Nothing, if they keep the interfaces the same. The real cost, immediate and over time, is porting the applications or systems to new hardware and toolchains. I can't overstate how many problems this can cause, especially if they don't have source for these apps or the source is barely commented.


Debugging, constant break-fix, and, given the typical quality of modern software, bribing the international community into ignoring the occasional unintended ICBM launch. :P


Yeah, it's not as if all the Python code I've written is just going to expire in 10 years. Popular languages rarely ever die; there is clearly an extraordinarily long tail.


Does your code all work in Python 3? I know none of my old Ruby 1.8.3 code still works.


I don't think your Ruby 1.8.3 code is supposed to work with Python 3.

Seriously, though, as far as I can tell, the transition to Python 3 is extremely low on pain. My Perl 5 to Perl 6 transition will be much more interesting, however.


I thought this too back when I wrote Perl for a living. Still scratching my head over how PHP basically won that fight.


Perl 5 has maintained almost perfect backward compatibility for decades (going back to Perl 4 in some cases...some of our code has some Perl 4-isms, like use of "local", that still work fine). I guess they wanted to wait until they really figured out how to completely break backward compatibility to break it (and they did, of course...nobody's gonna be able to transition to Perl 6 without a rewrite, though it'll be possible to do a transition process with some Perl 5 and some Perl 6, which is cool, I guess).


I'm amazed at that. I have a Perl program running on a server that's been running since 1999, quietly updating a MySQL database of SEC filings. It's been moved to a new hosting company twice, as old ones went out of business, but it's still doing its job.

I hate Perl as a programming language, but it does provide stability.


New stuff only churns as fast as you churn it. Why wouldn't Rails from today, or your other examples, still run in 10 years?


I wonder what your answer would be if someone came up to you and said, "We need a computer that we can maintain and keep in service for 50 years or more. What should we do?"


Take the IBM route: write to a VM spec. Hardware comes and goes, but VMs last forever. The System/38's virtual architecture is still used on modern AS/400 machines, so programs written in the late 70s still work today without anyone lifting a finger.
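As a toy illustration of the principle (my own sketch, far simpler than IBM's actual TIMI layer): if programs target a small, frozen instruction set, only the interpreter ever needs porting when the hardware changes.

    # A deliberately tiny stack machine: the "spec" is four opcodes.
    # Re-implement run() on new hardware and every old program still works.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "mul":
                stack.append(stack.pop() * stack.pop())
            elif op == "print":
                print(stack.pop())
            else:
                raise ValueError("unknown opcode: %s" % op)

    # A "late-70s program" that survives every interpreter port: prints 42.
    run([("push", 6), ("push", 7), ("mul",), ("print",)])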


Yeah, that makes quite a lot of sense and is a proven technique. On a side note, did they ever build a System/38 VM for something like VMware, VirtualBox, etc.?


If you know up front that it needs to last for 50 years, and the requirements are unlikely to change, it's easy to justify the expense of getting lots of spare hardware to be able to handle component failure. That's almost certainly going to be cheaper than trying to port the software forward every decade as technology changes.

Besides that, use open protocols and open source for the whole stack. Use open source hardware too, but don't depend on being able to order new hardware. Processes could change (we could move away from silicon to something else), or file formats for schematics can change so that you can't find anyone to produce the product.

Also helpful would be to target a common VM like the JVM instead of relying on hardware-specific abilities.


This is exactly what you do -- lots of spares. If you're big enough, and have a big enough need, you can also design your own hardware... although even that has limits, since foundries do eventually retire process nodes.


Build it with React, Node, and MongoDB on top of AWS, of course... oh, and toss in DSL-of-the-month for fun.


I've read that the answer to this question is why a large part of our military-industrial complex exists: back in WW2 the US government gave Lockheed and others 50+ year non-cancellable contracts to build equipment, because the companies couldn't justify the business risk otherwise (similar to why Fannie Mae and Freddie Mac were created) and it was deemed in the government's "best interest" to keep them afloat in some capacity. Aircraft carriers we don't need, guns that are long obsolete, tanks the Pentagon doesn't want anymore -- the list goes on of excess manufacturing the government seems to be roped into buying because of... "reasons."

I've never read through those contracts myself but I would hope that they're public record if they're so old and important.


That sounds a bit hard to believe in the specifics, but in general I don't think it's particularly controversial that the US government has something of a vested interest in keeping the defense industry from contracting too much during peacetime, and structures contracts and purchasing in such a way that companies are kept around who might otherwise go out of business or get acquired. E.g. I think it's pretty much accepted that the Pentagon has gone out of its way to keep the market for military aircraft from collapsing too much. That's an industry that would be difficult to rebuild once the expertise and tooling is gone. (Personally I was surprised when they let McD-D merge in the late 90s.)

Interestingly, there are case studies in the private sector of doing exactly this sort of thing in cyclical industries. Toyota is famous (within the rarefied air of supply-chain textbooks, anyway) for supposedly buying up its suppliers' parts and stockpiling them, or even just paying the suppliers to sit idle, so that they don't go out of business (or start retooling for other clients) during lean times and are ready to go when demand returns. If you view the military-industrial complex as basically a "supplier" to the US government, and view the demand for military equipment somewhat cynically as a cyclical demand curve, the government's behavior makes sense. You don't let your suppliers go out of business if you think you might need them again...


Tell them to buy a factory that makes computers.

That's really the best way to do it; you have to own (or control) as much of the supply chain as you can. Not really practical if you're a private company who has to compete on thin margins, but not really that hard to imagine for a government.

As far as impracticality arguments: keep in mind that each government that maintains a nuclear arsenal (rogue/client states excepted) already has a completely captive technological supply chain, generally at least at a 1950s level, in order to manufacture the weapons in the first place. So if you wanted to build secure nuclear C2 systems, it would make sense to basically build them with the same degree of security, and using the same presumptively-secure supply chains, that are used to create the weapons themselves.


Honestly, I don't see a good solution to this until the rate of technological change really slows down. It seems like your options are either to pay to periodically re-engineer every system or to pay to maintain obsolete hardware.


Maybe pay for VMware or similar to run your old systems on modern hardware?


VMware virtualises 'modern' hardware.


"Feds spend billions to maintain museum-ready buildings"

"Feds spend billions to enforce museum-ready laws"

These computer systems aren't even that old compared to many things the government spends money on.


Anecdotal: in the early-to-mid eighties I was in the US Air Force. The machine I was first assigned to watch over was in the secure comm center. This Burroughs machine was the first non-tube computer Burroughs made; it could boot from paper tape or cards and was replete with blinking lights.

Later I moved up in tech to a Sperry/Unisys system. All our personnel data and such was loaded via cards -- physical cards in multiple boxes -- until nearly '88.

So honestly, I don't doubt they still do similar things. I was just so glad we got out of the boxes of cards, because having to fix runs each night got old, and all for a bent card.

It got me into programming -- Turbo Pascal at the time. When we moved off physical cards it was onto 360K floppies. The problem was, the provided upload/download programs could take half an hour or more per transfer to the 1100/70. The Turbo Pascal program I wrote did it in five minutes or less per disk without issue.


On one hand it seems inefficient and perhaps dangerous to be reliant on such old systems. On the other hand, the idea of a new software project to replace it also sounds at risk of being extremely expensive and overly complicated. Because of all the government contracting anti-patterns.

In theory there's a middle ground that avoids both these extremes. In reality, with government software... I'm skeptical it will happen.


I want to know only one thing: COBOL is named under Social Security, so I suspect the "outdated computer language that is difficult to write and maintain" that Treasury uses is not COBOL -- but oh god, then what is it??


Dating from 1960 ("the systems are about 56 years old"), and especially given that the system in question is likely an IBM mainframe, Fortran would be my guess.

Fortran can actually be surprisingly pleasant, at least as pleasant as C, but I'm guessing their particular code is not.


Could be PL/I


One of the many other articles that have been floating around the past few days mentioned that Treasury has a bunch of programs written in an old IBM architecture's version of assembler.


Bingo! I just found http://arstechnica.com/information-technology/2016/05/govern... mentioning both Individual Master File and Business Master File written in IBM mainframe assembler 56 years ago. Joy.

Although I suspect it's a bit of an exaggeration, because it seems the IRS is using a System/360, which didn't arrive until 1965: https://books.google.ca/books?id=HwniRloeB6cC&pg=PA2&lpg=PA2...


I'd guess these are coded in assembler, or possibly Fortran.


Despite all the iPads and whatnot, I have found a pen and notebook to be better note-taking equipment than anything else.

If it gets the job done cheaply and efficiently, as required, and better than the alternatives, then it is the best technology to use.


Seems like a great opportunity for virtualization...


I think this deserves its own term - porkware



