It's because they're not cost-competitive. For the same amount of money you can get a whole heck of a lot more servers (CPU and memory!) running Intel, AMD, or more specialized hardware. That's true even if we're only talking about IBM's own pricing!
If mainframes were actually competitive with modern server hardware, everyone would be using them. Even IBM uses regular Intel hardware in its own cloud offerings!
Mainframes aren't even fast... IBM will make all sorts of BS claims about memory speed, interconnect bandwidth, and whatnot, but all of it is artificial benchmark nonsense that doesn't translate into real-world usefulness, because nobody is rewriting their mainframe code to take advantage of it.
I don't know about you, but in the time I've been in IT, "hardware failures" that actually had any sort of serious impact on operations were few and far between. The whole point of modern solutions (everything from N-tier architecture to containers and on-demand compute/function stuff) is to make the hardware irrelevant. At my work we had a whole data center taken down as part of a planned test and I doubt that any end users even noticed (and it was down for hours because they screwed something up when bringing things back online, hehe). I think something like 6,000 servers and a large amount of networking equipment were completely powered down? I don't know the specifics (and probably shouldn't give them out anyway).
The whole point of a mainframe is to serve one function: all the hardware inside the box is redundant and super robust. That function is mostly meaningless in today's IT infrastructure world.
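To make the "redundancy in software instead of in the box" point concrete, here's a minimal sketch of the idea: the client just tries several replicas in different zones and skips any host that's dead. The hostnames and the health endpoint are hypothetical, and a real setup would sit behind a load balancer or service mesh rather than a hand-rolled loop, but the principle is the same.

```python
# Minimal sketch: redundancy handled above the hardware, not inside it.
# The replica URLs below are hypothetical placeholders; any service with
# instances spread across hosts/zones works the same way.
import urllib.request
import urllib.error

REPLICAS = [
    "http://app-zone-a.internal/health",
    "http://app-zone-b.internal/health",
    "http://app-zone-c.internal/health",
]

def fetch_from_any(urls, timeout=2):
    """Try each replica in turn; a dead host just gets skipped."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this box is down -- move on to the next one
    raise RuntimeError(f"all replicas failed: {last_error}")

if __name__ == "__main__":
    print(fetch_from_any(REPLICAS))
```

Lose a server, a rack, or a whole data center and the caller never notices, which is exactly why per-box hardware redundancy buys you so little these days.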
Mainframe performance is very fast within the mainframe itself. As with every other platform, performance drops quite a lot once you escape into the real world.