To be fair, my last employer (aerospace manuf.) ran an incredibly dated OS with pretty decent results. It was simple, to the point, ugly as hell, but it got the job done without needing constant updates. Also, we never had a problem with malware (because who writes malware for a 30 year old OS?)

I understand this article mentions many different sectors and functions for antiquated systems, but sometimes an update simply isn't needed.




I once worked at a large national (US) company that will not be named.

They had an old database from the early 70s that stored _all_ of their data: everything, contacts, billing, etc.

That database was accessible through a special proprietary program, overseen by one college kid after the rest of his team was let go.

That proprietary program was essentially an old DOS prompt, accessed via a Java applet (for an early version of Internet Explorer, < 7) that emulated the old DOS-like user interface.

That Java applet was itself accessed through a Java web service, running on machines that had the applet and IE installed.

The Java web service was accessed by tens of thousands of people nationally, possibly a hundred thousand, through a few different interfaces.

The one I was aware of (probably the largest) connected to the Java web service through a C# WCF service.

The C# WCF service was built as a backend for a new JavaScript/HTML4 front end for their company.

That new UI was intended to partially (BUT ONLY PARTIALLY) replace a strange, 100% ActionScript web UI made previously.

Learning about their system architecture was like stepping back through time. I felt like an archaeologist uncovering layers of an ancient city.

It was also amazing how many people were employed supporting each system, each of which could have replaced the lower tiers if they had just upgraded at the time.

It was similarly amazing how the company had laid off everyone at entire layers when management arbitrarily decided they needed to make cuts, while other layers doing the _exact_same_thing were staffed with huge numbers of people, completely unaware that a single bug in the layer beneath them was not being overseen or supported, and could bring the whole house down at any time.


These two anecdotes (parent and grand-parent) show that there are two ways to manage legacy systems: a mindful, efficient way and a wasteful, risky way. Come to think of it, those same two ways apply to any project, old or new. So it all really boils down to project management and corporate practices.

I agree with all the comments that say that legacy systems, government or corporate, are not inherently bad, inefficient, insecure, or in need of replacing.


Alternatively, the grandparent post's mindset becomes the parent post's when the application needs to be extended; the grandparent's simply hasn't had that need yet : )


> because who writes malware for a 30 year old OS?

Careful. I believe I just read an article (Wired?) about some hackers doing exactly this.

Security through Seniority might be a good name for the mindset. :-)


Is there a term for when you say something knowingly broad but ALSO understand there are specific cases where it doesn't apply? Like a simple term I can paste onto the end of statements like (sic) or similar so that I can avoid the nitpickers?

EDIT: honestly not being facetious just curious because this exact thing happens constantly.


That's how models and rules work in general. There's a rule or pattern that mostly applies, but there are always exceptions.

In this case, the old systems were all torn apart by older hackers. There are young ones, just a few, hacking mainframes and such now. The ease of finding the first flaws showed the only reason they weren't found sooner is that nobody cared enough to look, or couldn't afford the relics to play with. So, believing one is safe just because the computers are old is akin to Security via Seniority.

Now, there are older approaches, like tagged hardware or dedicated lines, whose intrinsic properties reduce the odds of attack. Definitely apply any good techniques from the past to the present situation. But the mere fact that something is old... especially since it's older than INFOSEC itself... doesn't make it safer.


The rebuttal was pertinent and polite, and (almost) provided a reference, so I really don't think it can be qualified as nitpicking. Your statement was a bit exaggerated for effect, so you can hardly criticize someone who responds to it for effect.

I suggest you avoid the knowingly broad and add nuance to your arguments. I like the suggestion of adding "generally".


That's an interesting question. I can't think of one off the top of my head. You should try http://english.stackexchange.com/

Questions like this are pretty common there.

As an added bonus, Peter Shor (famous for Shor's algorithm) just might answer your question.


"(generally)" ?

But you're right, it's sad how many people insist on feeling smart by taking a general statement as a universal statement and nitpicking it. It's so tiring.


an exaggeration?


I remember a story of someone who was decommissioning an HP-UX box at a university after years of service doing... not much, it seems. While shutting it down they noticed that the system had been compromised, and an investigation began. It turned out the intruder had exploited a remote vulnerability, logged in as a regular user, and tried to compile a rootkit. After days of attempts to compile the software, the intruder seemed to have found a way to patch the remote vulnerability, and logged out, never to return.

This is the same thing as security through obscurity or security through lack of usability IMO.


That is pretty funny, because decades ago we used to remotely compromise university HP 9000 systems, create new privileged accounts, and apply the latest patch sets to the system to close up any security issues. I would assume that is a pretty common practice; you don't want someone else messing with your system. I would bet the investigation didn't go far enough.


That's great. I'm going to have to remember that phrase. Another I considered is Security through Obsolescence. Makes it sound more retarded. ;)


Security through unplanned obsolescence.


I remember reading a counterpoint about this a while back -- sometimes for critical systems, the risk of updates is really high.

For example, NASA still uses hardened 808x systems. On top of that, for space-based systems in an ionizing radiation environment, the risk of hardware developing faults is non-zero, and the kinds of things people do for error correction in that environment are insane.

The flip side of this is that when a technology is widely used, there is a scaling effect. If you are the sole user of a technology, then the entire cost of maintaining it falls on you, a cost that used to be shared by all the other users of that technology.

And then there's the bigger question: how effective is nuclear deterrence? And I don't mean for the United States of America vs. the rest of the world. I mean for the global, human civilization, and homo sapiens as a species.


W/ regard to space-based and airborne systems (commercial jets also actually operate in a fairly high radiation environment):

The risk of faults is not only non-zero, it's expected. Google 'single event upset' and 'NOR flash soft error'. Short version: (SEU) any bit in your RAM, CPU, peripherals, or data bus can (and will, eventually) be randomly flipped in a high-radiation environment. (Soft error) A random cell in your flash may be pushed into an indeterminate voltage range, and will return a different value every time you read it. So, for instance, you may CRC it, think it's good, copy it to RAM, and then the CRC of the RAM copy will fail because you got a different value the second time you read it.
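
To make that last scenario concrete, here's a minimal C sketch of the read/CRC/copy/re-CRC pattern, with an ordinary buffer standing in for the memory-mapped flash (the names and sizes are mine, not from any particular flight codebase):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Stand-in for a memory-mapped NOR flash region (hypothetical;
       on real hardware this would be a fixed address, not an array). */
    static uint8_t flash[4096];

    /* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320). */
    static uint32_t crc32(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }

    /* Copy an image out of "flash", then check it again: a cell stuck
       at an indeterminate voltage can return a different value on each
       read, so the CRC computed over flash and the CRC of the RAM copy
       can disagree even though each individual read looked fine. */
    static int copy_image_to_ram(uint8_t *ram, size_t len)
    {
        for (int attempt = 0; attempt < 3; attempt++) {
            uint32_t flash_crc = crc32(flash, len);
            memcpy(ram, flash, len);
            if (crc32(ram, len) == flash_crc)
                return 0;   /* both reads agreed */
            /* mismatch: likely a soft error; re-read and retry */
        }
        return -1;  /* persistent disagreement: fail safe, use a backup image */
    }

    int main(void)
    {
        static uint8_t ram[sizeof flash];
        return copy_image_to_ram(ram, sizeof flash) ? 1 : 0;
    }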

There are ways to deal with this. Google 'lockstep CPU' for information on a common, fairly hardcore approach. Basically you replicate the CPU (and the rest of the hardware) and cross-check every single clock cycle; you essentially have two or more computers in one box doing the work of a single computer.
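
Real lockstep parts do the comparison in silicon on every clock cycle, but a rough software analogy (again, a sketch with hypothetical names, not how the hardware is actually built) shows the replicate-and-compare shape of the idea:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Replicated state for two "cores". A real lockstep CPU duplicates
       the whole pipeline and compares outputs in hardware every cycle;
       this only mimics that replicate-and-compare structure. */
    typedef struct { uint32_t acc; uint32_t step; } state_t;

    /* The deterministic work both replicas perform (arbitrary here). */
    static void step(state_t *s, uint32_t input)
    {
        s->acc = s->acc * 1664525u + input;
        s->step++;
    }

    int main(void)
    {
        state_t a = {0, 0}, b = {0, 0};

        for (uint32_t i = 0; i < 1000; i++) {
            step(&a, i);
            step(&b, i);
            /* Comparator: any divergence means one replica took a hit
               (an SEU, say) and the pair can no longer be trusted. */
            if (memcmp(&a, &b, sizeof a) != 0) {
                fprintf(stderr, "lockstep mismatch at step %u\n",
                        (unsigned)i);
                return 1;  /* real systems reset or fail over here */
            }
        }
        printf("replicas agreed for %u steps\n", (unsigned)a.step);
        return 0;
    }

On a real part the comparator is hardware and the recovery path is typically a reset into a known-safe state rather than an error message.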

As you can imagine, this hardware is typically entirely custom, tested more thoroughly than anything you've ever imagined (unless you're in a safety critical industry yourself), and very expensive to design & test. Hardware refresh cycles are typically measured in decades, often governed by component obsolescence.


>If you are the sole user of a technology, then part of your cost is maintaining that technology that used to be shared by all the other users of that technology.

Yeah, the guy who maintained the OS was basically the only guy in the country who could, so obviously we had to put up with his bullshit lol


Space Cowboys (2000) was along these lines. The cranky guy who was the only one who understood the ancient "OS", being its creator, was played by Clint Eastwood.


    > who writes malware for a 30 year old OS?
This may be relevant ... https://en.wikipedia.org/wiki/Stuxnet


>> because who writes malware for a 30 year old OS?

Sounds like a taunt. Almost makes me want to write some code


I'm interested in your idea for the delivery mechanism.


Just leave some 8" floppies in the parking lot ;)


eBay. It'll take a bit of effort to gather up the obsolete materials yourself, but get a few obsolete drives, get a backdoor firmware in there, and wait until some .gov clicks Buy It Now.


An old joke about the Pentagon goes along the lines of "we don't need to worry about security because our systems are too old, rare, and proprietary to find."

As someone else said, security via age and rarity.


Control of the network helps a lot too.



