Hacker News

> It was only a few years back and we were building C++ compilers on machines with barely 16MB.

"A few years" seems like a serious understatement, but beyond that... compilers in the time period you're talking about weren't really doing optimizations; they were just shoveling assembly out the door. Doing very little work takes very few resources, but the resulting binaries were way slower than they could have been. Any compiler that uses LLVM tends to be slow, in my experience, but it does offer excellent optimizations.




Just wait until you get old... a "few" years ago starts becoming a larger number...


This is not the case; optimizing compilers were quite far along by the 16MB era. RISC relied on them, GCC was widely adopted because of its optimizations, etc.


The level of optimization available back then is a far cry from what we have today, as far as I've ever heard or seen. Based on looking at some histories of GCC, the EGCS fork in 1997 was responsible for starting to introduce meaningful optimizations into GCC, such as global CSE (common subexpression elimination), and that wasn't merged back into mainline GCC until years later. LLVM didn't really hit its stride until the late 2000s, and I believe it was used as a research platform to develop a lot of the modern optimizations that exist today.

Do you want to be more specific about the kinds of optimizations that existed in GCC in ~1995? My understanding could be incomplete or wrong, but today's compilers aren’t just twiddling their thumbs and using tons of RAM. They’re doing useful work that simply didn’t happen back then. As with all software, there is surely room for improvement even today, both with the implementations of the compilers and with the optimizations that are being done.

Peephole optimizations (and efficient register allocation) would probably have been very helpful for RISC, and I’m guessing that’s what you’re referring to, but that’s a comparatively simple level of optimization that requires little RAM or heuristic analysis.


https://ftp.gnu.org/old-gnu/gcc/ has old GCC versions. Eg checking the manpage gcc.1 from the 2.7 tarball, there's inlining, loop unrolling, strength reduction, cse, peephole, instruction scheduling, jump threading, and others, probably better described in the texinfo doc.

There are opts not covered in the command line options. Eg see this discussion of tail call optimizations already in GCC 1.x: https://groups.google.com/g/gnu.gcc.bug/c/Zzbfyvi2uAM/m/GVDI...

Sure, later compilers did more, but the above-mentioned opts made GCC known as an optimizing compiler. And there were other optimizing compilers too, for C but also other languages.

Fortran compiler histories are interesting; they were doing their own thing on numerical code, SIMD (what were called vector processors back then), etc. Eg https://www.deepdyve.com/lp/wiley/evaluation-of-fortran-vect...


Happened to continue my browsing of compiler history and also found this interesting rationale for a new version of the Dhrystone benchmark from 1988, where they say that the previous version was too badly broken by optimizing compilers: https://dl.acm.org/doi/10.1145/47907.47911

The languages are interesting as well (versions for C, Pascal and Ada).

So this places the wide use of optimizing compilers earlier than the 16MB era: 1988 was the sub-1MB era for personal computers, and VAX-class minis might have had 16MB (but in a multiuser context, compilers probably couldn't have used nearly that much memory).

But of course the later Dhrystone version was itself eventually broken by improved compiler optimizations, supporting the notion that compilers did keep improving as well.

Also as another tangent, according to https://en.wikipedia.org/wiki/CMU_Common_Lisp the CMUCL Python compiler (famous for its optimizations) was started in 1985 and had to be fairly good at optimizing already by the mid-90s 16MB era. I found some release notes from 1993 that talk about optimizations: https://trac.common-lisp.net/cmucl/wiki/Release17c (summarized as "Improvements in compiler source-level optimization, inline expansion and instruction scheduling")





