
So if you ran benchmarks compiled with the best C compiler from 2004 against the same benchmarks compiled with the best current C compiler, both on 2004-era hardware, you'd see a factor-of-two performance gain? That's possible, I suppose, but I doubt it.



I have seen that kind of thing happen, yeah. I used to use dumb Fibonacci as an easy microbenchmark for getting a rough idea of language implementation efficiency:

    #include <stdio.h>
    #include <stdlib.h>

    /* naive doubly-recursive Fibonacci */
    __attribute__((fastcall)) int fib(int n)
    {
        return n < 2 ? 1 : fib(n-1) + fib(n-2);
    }
    int main(int c, char **v) { printf("%d\n", fib(atoi(v[1]))); return 0; }
This gives a crude idea of the performance of some basic functionality: arithmetic, (recursive) function calls, conditionals, comparison. But on recent versions of GCC it totally stopped working, because GCC unrolls the recursion several levels deep, doing constant propagation through the near-leaf calls and yielding more than an order of magnitude of speedup. It still prints the same number, but it's no longer a useful microbenchmark; its speed is mostly determined by how deep the unrolling goes.
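A sketch of one way to keep it usable as a microbenchmark on newer GCC (this is my own workaround, not something from the original benchmark, and I haven't verified it against every GCC version): mark fib noinline, which is a GCC/Clang extension, so the recursive calls stay real calls instead of being unrolled and constant-folded.

    #include <stdio.h>
    #include <stdlib.h>

    /* noinline keeps GCC from unrolling the recursion and
       constant-folding the near-leaf calls away. */
    __attribute__((noinline)) int fib(int n)
    {
        return n < 2 ? 1 : fib(n-1) + fib(n-2);
    }
    int main(int c, char **v) { printf("%d\n", fib(atoi(v[1]))); return 0; }
Compiling with -fno-inline should have a similar effect without touching the source.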

It's unusual to see such big improvements on real programs, and more recent research has shown that Proebsting's flippant "law" was too optimistic.


Current compiler optimisations are written with current hardware in mind, and I doubt older optimisations become pessimisations on newer hardware, so I'd instead compare the best C compiler from 2004 against the current best C compiler on today's hardware.


Turn off optimizations and find out.
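For example (assuming GCC and the fib program above saved as fib.c; the file names are just illustrative):

    gcc -O0 fib.c -o fib-O0    # optimizations off
    gcc -O2 fib.c -o fib-O2    # optimizations on
    time ./fib-O0 40
    time ./fib-O2 40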


Compiler optimizations existed 18 years ago.



