Though nowadays, if AWS, Azure, or GCP finds a way to run some proprietary database more efficiently using a new kind of language and CPU architecture, their scale somewhat changes the playing field.
The issue with Itanium was not that the market didn't care about superior performance. The issue was that Itanium was slow. The market didn't value theoretical beauty over actual performance.
It's not the first time Intel has done something similar. In the '80s they tried to introduce an "object-oriented" processor, the iAPX 432. It was ridiculously slow and no one cared about it.
I'm not in the field, but I'd wager we're due for a shift in the current software-hardware relationship. With the end of Moore's law, continuing to improve practical computing performance will require making more efficient use of transistors. Or maybe processors will go 3D and we'll have a few decades of relative status quo - I'm hardly Nostradamus.
Plus, even at AWS scale, I imagine HW homogeneity of the fleet is more important than the efficiency of a single application, unless it's insanely more efficient for a top-10 service. Homogeneity improves predictability, load balancing, operations, security, and all sorts of other things.
On the other hand, AWS offers AMD machines and ARM machines in addition to the typical Intel stuff. If there were some new architecture that offered a 50% speedup, and all you had to do was recompile your code (as sketched below), I assume they'd support it in a heartbeat.
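To make the "just recompile" point concrete, here's a minimal sketch of what that looks like for portable C: the same source builds unchanged for x86-64 or ARM64, and only the compiler invocation differs. (The aarch64-linux-gnu-gcc cross-toolchain name is an assumption about what's installed on the build host; package names vary by distro.)

    /* Hypothetical illustration: nothing in this source assumes a
     * particular ISA, so the same file builds for either target:
     *   x86-64: gcc -O2 -o sum sum.c
     *   ARM64:  aarch64-linux-gnu-gcc -O2 -o sum sum.c
     */
    #include <stdio.h>

    int main(void) {
        long long total = 0;
        /* Sum 1..1000000; long long avoids overflow regardless of
         * the platform's choice of long width. */
        for (long long i = 1; i <= 1000000; i++)
            total += i;
        printf("%lld\n", total);
        return 0;
    }

Of course, this only holds for code that sticks to portable constructs; anything with inline assembly, intrinsics, or ISA-specific assumptions needs real porting work.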
In the context of the article, the changes would by necessity be far more invasive than "recompile your code". The premise is that C cannot be an effective implementation language. A full rewrite would be required.