I agree, it's obviously very hard to compare transaction rates, and I also agree that I have a hard time seeing companies currently using mainframes recouping the cost of migrating. If it works, it works.

But.

> Rewriting your system to a mainframe architecture is equally as expensive.

There was a new bank mentioned in this thread that actually started on mainframes from scratch, but other than that I've never heard of any "modern" fintech (or really any) company introducing mainframes. Organisations actually rewriting functioning systems TO a mainframe must be almost unheard of (in the last 10-20 years at least).

If System Z, Cobol and DB2 are so obviously superior, why are so many successful new competitors, in industries where those technologies are the norm among older companies, choosing not to use them?

I'm not saying banks should rewrite their stuff in node.js (or deno, even better of course); it makes sense for them to stay.

I just have a hard time believing that mainframe systems are so technically impressive that, as some people claim, it's almost impossible to build a similar system on non-mainframe technologies.




The software on mainframes shines mainly in reliability and in the fact that the machines have been built for monetary transactions from the start. For example, doing decimal math (think Python's decimal module) is as inexpensive as doing float math, thanks to hardware support.
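
A quick Python illustration of why that matters for money (here the decimal arithmetic is emulated in software and correspondingly slower; on the mainframe the equivalent runs as hardware instructions):

    from decimal import Decimal

    # Binary floats can't represent most base-10 fractions exactly:
    print(0.1 + 0.2)                # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)         # False

    # Decimal arithmetic is exact for base-10 fractions, which is
    # what you want when the values are account balances:
    print(Decimal("0.1") + Decimal("0.2"))                     # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True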

The machines themselves are impressive, both hardware-wise and reliability-wise; for example, you can swap mainboards one by one in a full frame without ever taking the machine down (think RAID at the mainboard level, RAIMB?).

But the high start-up cost makes most startups go the other way. I am not convinced that scaling vertically is cheaper than scaling horizontally if you need the ACID guarantees... but it is hard to say.

The reason us old dogs say it is hard (not impossible) is the single-image and ACID requirements. There is no good way to do that distributed (look up the CAP theorem).
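
To get a feel for where the pain comes from, here's a toy two-phase-commit sketch in Python (all names hypothetical, not any real product's protocol): every distributed transaction pays for extra network round trips, and if the coordinator dies between "prepare" and "commit", participants sit on their locks. A single-image machine simply doesn't have this failure mode.

    # Toy two-phase commit: illustrates the coordination cost of
    # distributed ACID, nothing more.

    class Participant:
        def __init__(self, name):
            self.name = name
            self.staged = None          # work held (and locked) until phase 2

        def prepare(self, work):
            # Phase 1: stage the work, acquire locks, vote yes/no.
            self.staged = work
            return True                 # vote "yes"

        def commit(self):
            # Phase 2: make the staged work durable, release locks.
            print(f"{self.name}: committed {self.staged}")
            self.staged = None

        def abort(self):
            print(f"{self.name}: aborted {self.staged}")
            self.staged = None

    def coordinator(participants, work):
        # Phase 1: every participant must vote yes, over the network.
        votes = [p.prepare(work) for p in participants]
        # If the coordinator crashes HERE, participants are stuck
        # holding locks on an in-doubt transaction.
        if all(votes):
            for p in participants:      # Phase 2: a second round trip
                p.commit()
        else:
            for p in participants:
                p.abort()

    coordinator([Participant("node-a"), Participant("node-b")],
                "debit $100 / credit $100")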

So having one massive computer (with double-digit terabytes of memory AND cache, and truly massive I/O pipes) just makes building the need-to-work stuff simpler.

As an example, a few years ago I attended a mainframe conference (on my own money; I don't do mainframe work in my day job). At that time the machine had more bandwidth to its roughly 200 PCIe adapters than a top-of-the-line Intel CPU had between the L1 cache and the computing cores. That meant that, given enough SSDs, you could move more data into the system from disk than you could move into an Intel CPU from cache...
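
Rough arithmetic to put the aggregate in perspective (the adapter count is from above; the per-adapter figure is my assumption, roughly a PCIe Gen3 x16 link, so treat the result as order-of-magnitude only):

    # Back-of-envelope only; 16 GB/s is an assumed ~PCIe 3.0 x16
    # one-direction figure, not a measured mainframe number.
    adapters = 200
    gb_per_s_per_adapter = 16
    total = adapters * gb_per_s_per_adapter
    print(f"aggregate I/O bandwidth ~ {total} GB/s ~ {total/1000:.1f} TB/s")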

Also, you can run two mainframes in lockstep (as long as they are less than 50 km apart); that means if one of them dies during a transaction (which in itself is extremely rare), the other can complete it without the application being any the wiser... Try that in the cloud :)
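
The distance cap isn't arbitrary, by the way: the remote machine has to acknowledge every write synchronously, and signal propagation in fiber puts a hard floor on that. My arithmetic, using the usual ~2/3-of-c figure for light in glass:

    # Latency floor for synchronous replication at 50 km.
    distance_km = 50
    fiber_speed_km_s = 200_000          # ~2/3 of c in fiber
    round_trip_ms = 2 * distance_km / fiber_speed_km_s * 1000
    print(f"added latency per synchronous write ~ {round_trip_ms} ms")  # 0.5 ms

Half a millisecond per transaction is tolerable; stretch the distance much further and it stops being so.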



