My second CS course in college was (I'm showing my age here) PL/I programming. The Watfiv compiler would correct obvious parse errors, which often led to much more insidious, not-so-obvious bugs down the line.
I think you're thinking of Cornell's PL/C compiler, circa 1970 (Watfiv was for Fortran, and didn't do much error correction). PL/C would famously convert
PTU LIST('Hello, world!
into a valid program (in fact, the claim was that it would never fail to convert any string of text into a valid program).
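PL/C's actual recovery machinery was far more elaborate, but the flavor of that kind of repair can be sketched as two passes: substitute near-miss keywords, then insert whatever closing tokens are missing. This is a toy illustration, not PL/C's algorithm; the keyword list, edit-distance cutoff, and repair rules are all invented here:

```python
import re
from difflib import get_close_matches

# Hypothetical keyword subset for illustration; real PL/I has many more.
KEYWORDS = ["PUT", "GET", "LIST", "EDIT"]

def repair(stmt: str) -> str:
    """Repair one statement: substitute near-miss keywords, then
    insert a missing closing quote, parenthesis, and semicolon."""
    fixed = []
    for token in stmt.split():
        m = re.match(r"[A-Za-z]+", token)  # leading identifier, if any
        if m and m.group(0).upper() not in KEYWORDS:
            # Substitution repair: swap a near-miss for the closest keyword.
            close = get_close_matches(m.group(0).upper(), KEYWORDS,
                                      n=1, cutoff=0.6)
            if close:
                token = close[0] + token[m.end():]
        fixed.append(token)
    out = " ".join(fixed)
    # Insertion repairs: close an open string literal, balance
    # parentheses, and terminate the statement.
    if out.count("'") % 2 == 1:
        out += "'"
    out += ")" * (out.count("(") - out.count(")"))
    if not out.endswith(";"):
        out += ";"
    return out

print(repair("PTU LIST('Hello, world!"))
# → PUT LIST('Hello, world!');
```

As the anecdotes in this thread suggest, a repair like this always yields *a* valid program, but not necessarily the one the programmer meant, which is exactly how the insidious bugs crept in.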
PL/C made a lot of sense when short student programs were entered on punched cards (so trivial typos were tedious to correct) and batch turnaround times were measured in hours. It makes much less sense now that (a) editors can flag likely typos right away, e.g., by indenting in a surprising way, and (b) compile times for short modules are very short.
Yes, you’re right - it’s been a long time. So long, in fact, that I was using punch cards at the time. I remember getting my printouts and puzzling over the results, only to realize that it had converted bad input text into a “valid” program. Good times.
I also remember when compilers made more of an effort to fix trivial errors. It was worth it on a batch system, where you could only run a few compilations each day.