You don't need a VM. You just push the last stage of compilation to the user, who can then compile for their architecture before starting execution. Optimal compilation involves some NP-complete problems, so depending on how good their heuristics are, this could cause a performance issue either in loading code or in running under-optimized code. However, these issues could be solved by moving compilation to install time (sketched below).
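For concreteness, here's a minimal sketch (in Python) of what install-time specialization could look like. The "mill-specialize" tool, its flags, and the "app.ir" filename are all hypothetical, invented purely for illustration:

    # Install-time specialization sketch: the package ships a portable IR,
    # and the installer runs the final compilation stage once, targeting
    # the host CPU. "mill-specialize" and its flags are made up.
    import platform
    import subprocess
    from pathlib import Path

    def install(ir_path: Path, out_path: Path) -> None:
        # Detect the host architecture so the back end can schedule
        # code for this specific machine.
        target = platform.machine()  # e.g. "x86_64"
        subprocess.run(
            ["mill-specialize", str(ir_path),
             "--target", target, "-o", str(out_path)],
            check=True,
        )

    install(Path("app.ir"), Path("/usr/local/bin/app"))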
Pushing a final compilation to the user is not a good idea: it happens either every time you run, or at install time. I have a lot of programs that are just executables and will never be installed; having that overhead every time something is run is just a waste. Install-time compilation means I can't expect reasonable performance if I upgrade the CPU, if the binary works at all.
So for any reasonable implementation we're looking at JIT with caching for running platform-independent code. The very definition of a process VM is "to execute computer programs in a platform-independent environment"[0].
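A rough sketch of that JIT-with-caching model, reusing the same hypothetical "mill-specialize" tool: specialize on first run, cache the binary keyed by a hash of the portable code plus a CPU identifier, and recompile only when either changes, so a CPU upgrade invalidates the cache automatically instead of leaving you with a stale binary:

    # JIT-with-caching sketch: specialize on first run, then reuse the
    # cached binary until the portable code or the CPU changes.
    import hashlib
    import platform
    import subprocess
    from pathlib import Path

    CACHE = Path.home() / ".cache" / "specialized"

    def run(ir_path: Path) -> None:
        # platform.processor() stands in for a real microarchitecture
        # probe (it can be empty on some systems).
        cpu = platform.processor().encode()
        key = hashlib.sha256(ir_path.read_bytes() + cpu).hexdigest()
        binary = CACHE / key
        if not binary.exists():
            CACHE.mkdir(parents=True, exist_ok=True)
            subprocess.run(
                ["mill-specialize", str(ir_path), "-o", str(binary)],
                check=True,
            )
            binary.chmod(0o755)
        subprocess.run([str(binary)], check=True)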
You've really missed the point. The point is to lower power consumption and increase performance at the CPU level. Running a VM on top of x86 does not achieve that goal, because you are still on x86.
I know very well that that is the goal; I've been following this project for years now. My point still stands: you either need to compile specifically for the CPU the code runs on, or you need to move the final compilation to the end user. The only way to achieve reasonable performance while distributing platform-independent code is with a process VM.
I'm not saying it won't be faster, just that this is a weak point in the Mill architecture compared to x86.