Today, learned heuristics have a couple of pitfalls that make them hard to add to such systems:
1. they are usually hard to run efficiently
2. they are usually hard to explain
The former is definitely changing, with low-precision formats like fp16 and coprocessors that can do matrix multiplication efficiently (Apple's M1, Intel). The latter hasn't seen much progress, and unless you're training a model to memorize the entire space the heuristic operates in, it can be scary to trust it on unseen data.
Most interesting cases don't really look like this. The heuristic is applied to the user's code; it's not a one-time knob set inside the compiler. If it were, you would likely be able to afford an exhaustive search to pick it and wouldn't need ML at all.
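To make the one-time-knob case concrete, here's a minimal sketch of what "exhaustive search instead of ML" means for a single compiler knob. Everything here is hypothetical: `run_time` is a stand-in for actually compiling and benchmarking with each candidate value.

```python
# Hypothetical cost function: in reality this would compile the
# compiler (or a benchmark suite) with the given tile size and
# measure wall-clock time. Here it's a toy stand-in.
def run_time(tile_size):
    return abs(tile_size - 48) + 1.0

# For a fixed, one-time knob the candidate space is small enough
# to just try every value and keep the fastest.
candidates = [8, 16, 32, 48, 64, 128]
best_tile = min(candidates, key=run_time)
```

Since the knob is chosen once, offline, the cost of trying every candidate is paid once too, which is why a learned model buys you little here.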
The analogy I'm making doesn't apply especially well to compilers, which ship with human-defined defaults in the first place.
What I've found is that it's important not to run the ML model and treat its output as the default state. Keep a human-written heuristic choice as the default, and let the model override it.
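One way that fallback structure can look in code, as a sketch: the names (`pick_unroll_factor`, the trip-count threshold, the confidence floor) are all invented for illustration, and `model` stands in for whatever learned predictor you'd plug in.

```python
def pick_unroll_factor(loop, model=None, confidence_floor=0.9):
    # Human-written heuristic: always available, cheap, explainable.
    # This is the default state of the system.
    default = 4 if loop["trip_count"] >= 16 else 1

    if model is None:
        return default

    # The model returns a suggestion plus a confidence score
    # (a hypothetical interface). It only gets to override the
    # default when it is confident; otherwise we fall back to
    # the human-defined choice.
    suggested, confidence = model(loop)
    return suggested if confidence >= confidence_floor else default
```

The point of the shape is that removing the model leaves a working, predictable system, rather than the model's output being load-bearing.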
Also, they are not stable: if you have a fast program and you change a tiny detail, there is no guarantee the program remains fast. The same goes for moving between versions of the compiler.