Unfortunately this isn't so simple. Autocorrecting could pick a fix that's actually wrong and change whole-program semantics, either silencing errors that should be reported or reporting errors where there are none. That could be extremely confusing for new developers.
Instead, compiler authors need to understand and prioritize good ergonomics. Diagnostics should be accurate, come with suggestions, have unique error codes you can look up, and follow patterns you can predict over time.
I wasn’t claiming it’s simple. Writing a compiler is simple (1); writing a compiler that produces good code is hard; writing a compiler that produces good error messages is very hard. Once you can produce good error messages, error recovery isn’t that hard anymore.
I think languages that use distinct delimiters for different constructs (do…od, if…fi, while…wend, repeat…until) make it easier to do error recovery on “almost compilable” source than C-style languages that use {…} everywhere. In general, redundancy improves the ability to do error recovery.
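To illustrate why distinct closers help, here is a minimal sketch of panic-mode recovery in a toy parser. The names (`recover`, `CLOSERS`) and the token stream are purely illustrative, not from any real compiler; the point is that `wend` can only ever close a `while`, so the parser can resynchronize there with confidence, whereas a bare `}` gives no hint which block it ends.

```python
# Map each opening keyword of the toy language to its unique closer.
CLOSERS = {"if": "fi", "while": "wend", "repeat": "until"}

def recover(tokens, start, opener):
    """After a parse error inside `opener`, skip ahead to its distinct closer.

    Returns the index at which parsing can safely resume. With C-style
    braces this skip would be ambiguous; here the closer is unmistakable.
    """
    closer = CLOSERS[opener]
    for i in range(start, len(tokens)):
        if tokens[i] == closer:
            return i + 1          # resume just past the unambiguous closer
    return len(tokens)            # closer missing: give up at end of input

# A while loop whose body is garbage; recovery lands cleanly after "wend".
tokens = ["while", "x", "<", "10", "do", "???", "garbage", "wend", "print", "x"]
resume = recover(tokens, 5, "while")
print(tokens[resume])  # parsing resumes at "print"
```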
(1) The trick is to not think about code quality at all: emit assembly for every individual statement, never inline functions, and feel free to write a load from/store to memory for every variable read or write. You’ll get slow code, but still code that’s faster than an interpreter for the same language. (Your version 2 could post-process to eliminate superfluous loads and stores; even removing only the loads gives a speedup and a code-size decrease.)
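As a concrete sketch of that strategy: compile each statement in isolation, loading every operand from memory and storing every result straight back. The instruction names below are a made-up accumulator-style pseudo-assembly, and `compile_stmt` is a hypothetical helper, not any real compiler’s API.

```python
# Compile `dest = lhs op rhs` with no register allocation whatsoever:
# every read is a LOAD, every write is a STORE, every statement stands alone.
def compile_stmt(dest, lhs, op, rhs):
    opcode = {"+": "ADD", "-": "SUB", "*": "MUL"}[op]
    return [
        f"LOAD  {lhs}",       # read the first operand from memory
        f"{opcode}   {rhs}",  # operate against the second memory operand
        f"STORE {dest}",      # write the result straight back to memory
    ]

program = [("t", "a", "+", "b"), ("x", "t", "*", "c")]
asm = [line for stmt in program for line in compile_stmt(*stmt)]
for line in asm:
    print(line)
# Note that `t` is stored and then immediately reloaded: exactly the
# superfluous load/store pair a version-2 peephole pass would remove.
```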