I'm certainly not going to claim that x86, with its irregularities and extensions of extensions, is in _any way_ a good choice for the lingua franca instruction set (or IR, in this way of thinking). Its aggressively strict, totally ordered memory model likely even makes it particularly unsuitable; it just had good inertia and an early entrance on its side.
The RISC principles of the '80s and '90s were that you exposed your actual hardware features and didn't microcode, to keep circuit paths short and simple and let the compiler be clever, so at the time it really did imply you couldn't make dramatic changes to your execution model without exposing them in the instruction set. It was about '96 before RISC designs (PA-RISC 2.0 parts, the MIPS R10000) started extensively hiding behavior behind the interface so they could go out of order.
That changed later, and yeah, modern "RISC" designs are rich instruction sets being picked apart into whatever micro-ops are locally convenient by deep out-of-order dynamic decoders in front of very wide arrays of micro-op execution units (e.g. the ARM A77, https://en.wikichip.org/wiki/arm_holdings/microarchitectures... ), but it took a later change of mindset to get there.
Really, the A64 instruction set is one of the few in wide use that is clearly _designed_ for the paradigm, and that has probably helped with its success (and should continue to, as long as ARM, Inc. doesn't squeeze too hard on the licensing front).
Seems to me that you just have to be careful when bringing out a new version. You can't change the memory model from chip to chip, but that goes for x86 too. Not sure what other behaviors are similarly unchangeable.
Can you give me an example of this? SPARC of the late '90s ran 32-bit SPARC.
However, I disagree that it's the best of both worlds.
RISC doesn't necessarily require changing the ISA, any more than x86 does.