Hacker News

A few years ago, I had the good fortune of working at IBM's Poughkeepsie location on a mainframe subsystem team. What everyone is saying is technically correct, but it's not completely accurate either. Note that I do not work there anymore; these are thoughts I had when I was there and since then.

A large entity purchases 12 fridge-sized mainframes from IBM for over $100 million. Who might do that? Airlines, banks, governments, logistics, and others needing high levels of reliability.

To understand why this clientele would use a Z-series mainframe, first consider what the "z" in the name stands for: "zero," as in zero downtime. Typical compute providers express their availability as some number of nines. For example, 5-nines (99.999%) availability means you're down for roughly five minutes per year, on average. The Z-series mainframes are sold as having zero downtime, period. A remarkable amount of research, development, and engineering effort goes into achieving that level of reliability. Now, these clients usually run jobs which are not computationally difficult (validating a credit card transaction, for example) but must work, since the economy depends on the availability of these services. The Z-series mainframe shines at processing these workloads of many short jobs.
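To make the "#-nines" arithmetic concrete, here's a small sketch (my own illustration, not anything IBM publishes) converting a nines count into downtime per year:

```python
# Downtime per year implied by "N nines" of availability.
# E.g. 5 nines = 99.999% uptime = 0.001% allowed downtime.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_seconds(nines: int) -> float:
    """Seconds of downtime per year at the given number of nines."""
    unavailability = 10 ** (-nines)      # fraction of time allowed down
    return SECONDS_PER_YEAR * unavailability

for n in range(3, 7):
    print(f"{n} nines: {downtime_seconds(n):8.1f} s/year")
```

Each extra nine cuts the allowed downtime by a factor of ten: 5 nines works out to about 316 seconds (just over five minutes) a year, and even 6 nines still allows about half a minute. "Zero," as marketed, sits beyond the end of that scale.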

There's a security angle to mainframes as well. Commodity hardware allows for fast scaling and redundancy. However, commodity hardware also allows exploits to be shared easily. Once those exploits are discovered, companies need to patch, and there's no guarantee the patch will happen. Now, imagine trying to develop exploits for a system which is not commercially available (governments could still presumably acquire one), is a completely custom computer architecture (Z/Architecture, custom compiler, Z/OS, pretty much every layer below the JVM), and has very few design documents available online. Oh, and consider that, from the z14 onwards, any data in the mainframe is encrypted at rest. (Decryption/encryption is handled beneath the ISA; when an instruction runs, the mainframe uses the central key-management chip (tamper-resistant, designed to survive natural disasters, etc.) to decrypt the necessary information. The information is processed, then encrypted again before the instruction completes.) The odds of a script-kiddie getting into one of these things and exfiltrating data are very low. Hacking one of these mainframes would take an intense, coordinated effort.
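The encrypt-at-rest / decrypt-in-use flow can be sketched in miniature. This is purely a conceptual toy of my own, not the actual Z hardware design: the class names are invented, and a trivial XOR cipher stands in for the AES done in hardware, with the key held inside a stand-in for the tamper-resistant key-management chip:

```python
# Conceptual sketch only: data lives encrypted at rest and is decrypted
# just for the duration of an operation, with a key that never leaves
# the "key manager". XOR stands in for real hardware AES.
import os

class KeyManager:
    """Stand-in for the central key-management chip (hypothetical API)."""
    def __init__(self):
        self._key = os.urandom(32)  # key never exposed outside this object

    def transform(self, data: bytes) -> bytes:
        # XOR with the key stream; XOR is its own inverse, so the same
        # call both encrypts and decrypts. Illustration only, not secure.
        return bytes(b ^ self._key[i % 32] for i, b in enumerate(data))

class Record:
    """Stored data is always the encrypted blob; plaintext exists only
    transiently inside process()."""
    def __init__(self, km: KeyManager, plaintext: bytes):
        self._km = km
        self._blob = km.transform(plaintext)  # encrypted at rest

    def process(self, fn):
        plaintext = self._km.transform(self._blob)  # decrypt for use
        result = fn(plaintext)
        # plaintext goes out of scope here; only the blob persists
        return result

km = KeyManager()
rec = Record(km, b"amount=42.00;card=TEST")
print(rec.process(lambda p: b"amount=42.00" in p))  # True
```

The point of the shape is that callers only ever hold `Record`/`_blob`; there's no API that hands back the key or leaves plaintext lying around, which is roughly the property the hardware enforces below the ISA.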

Another important component is backward compatibility. Take IBM's two main in-house storage protocols, FICON and FCP (FCP is essentially FICON minus most of the legacy support, in exchange for higher throughput). FICON connects mainframes to giant storage arrays from EMC, Teradata, and others. FICON replaced ESCON, which in turn replaced the parallel data-communication system from the System/360 era. When a company upgrades its mainframe, knowing that a 20-year-old storage unit can still talk to the new machine relieves a lot of stress. Companies WILL pay for this level of backward compatibility, and there's no reason to hate them for it.

Supporting backward compatibility has historically not been too much of a problem for IBM. I worked with a person who took a class in IBM Poughkeepsie's now-abandoned Education Building on this hot new programming language called C (this was sometime in the '80s). Multiple people in my department were around for the development of not just the current generation of IBM tech but the ones before it as well. The depth of technical knowledge they had was immense. I've heard people say, "oh, but that depth is narrow and won't get them jobs outside IBM mainframes." Perhaps, but in my experience, they don't care. They build systems the world depends on, whether the users of those systems realize it or not. I'll also add that in the days of Big Blue, your job was basically secure. Even after the layoffs of the '90s, IBM still needed to retain the old talent. (Imagine a company with lots of employees who've worked there less than 10 years and lots who've worked there more than 30 years. That would describe IBM's mainframe division well.) It makes me sad to hear that IBM is discriminating against its older employees to push them out.

One commenter asks why IBM doesn't have "micro-mainframes" for smaller companies. For all I know, they could be moving in this direction. At the same time, it wouldn't seem to make much sense for IBM to do this. Why deal in thousands of dollars when you can deal in millions? Why put engineering effort into building computers for non-critical companies when, as long as you keep advancing performance and capabilities, your mainframes will provide you one of the best long-term cash flows possible?

Another commenter said new companies don't consider mainframes because they aren't cost-effective. I think it's for a different reason: new companies come and go. Their services aren't that important to the world yet, but they're trying to show the world their importance. Because of that, startups whip up an infrastructure concoction which is inefficient, but that's OK because 1) they aren't encountering problems of scale and 2) their workload and data can run anywhere. They just don't need a mainframe, because they don't need that level of reliability.

Happy to answer other relevant questions you might have.
