It seems the trend is toward increased hardware diversity, with the dominance of x86 being challenged by smartphones, energy-efficient cloud servers, GPU computing, and ML accelerators.

LLVM makes it much easier for hardware manufacturers to provide mainstream language support for their platforms than before, but the obstacle it removed was mostly GCC's hostility to modular design, not a lack of theoretical advances.

In terms of frontends, I guess we're seeing more languages reach C-level performance, again thanks to LLVM.

But in terms of optimizations driven by theory? There were some significant advances in generic auto-parallelization for imperative languages, about a decade ago I think. That said, it doesn't magically solve the codegen problem, and it remains hampered by language semantics that are not always parallelization-friendly.
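
A concrete example of the semantics problem (just a sketch, function names made up): in C and C++ the compiler usually can't prove that two pointer parameters don't alias, so it has to assume every store may feed every load and can't safely parallelize or vectorize the loop. __restrict (a widespread compiler extension, not standard C++) is the usual escape hatch:

    // The compiler must assume dst and src may overlap, so writing
    // dst[i] could change some src[j]; iterations can't safely be
    // reordered or run in parallel.
    void scale(float* dst, const float* src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = 2.0f * src[i];
    }

    // __restrict promises the arrays don't overlap, which lets the
    // optimizer vectorize or parallelize the loop.
    void scale_noalias(float* __restrict dst,
                       const float* __restrict src, int n) {
        for (int i = 0; i < n; ++i)
            dst[i] = 2.0f * src[i];
    }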

There were a bunch of improvements driven by making languages more hardware-aware, e.g. the C++11 memory model for concurrency, which was widely copied by other low-level programming languages.
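
For anyone who hasn't seen it, this is the kind of thing the C++11 memory model pinned down (a minimal sketch): a release store paired with an acquire load gives you a portable happens-before edge, instead of relying on whatever the hardware happens to do:

    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;
    std::atomic<bool> ready{false};

    void producer() {
        data = 42;                                     // plain write
        ready.store(true, std::memory_order_release);  // publish
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {}  // wait
        assert(data == 42);  // guaranteed by the release/acquire pair
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }

Rust's std::sync::atomic orderings, for example, are defined in terms of the same model.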

We're also seeing more and more libraries specifically designed to exploit hardware features.
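
E.g. cache-line awareness showing up directly in standard library APIs (a sketch; compiler support for the C++17 constant is still uneven, hence the fallback):

    #include <cstddef>
    #include <new>

    // Pad per-thread counters onto separate cache lines so two
    // threads incrementing neighboring counters don't fight over
    // the same line (false sharing).
    #ifdef __cpp_lib_hardware_interference_size
    constexpr std::size_t kLine = std::hardware_destructive_interference_size;
    #else
    constexpr std::size_t kLine = 64;  // common cache-line size fallback
    #endif

    struct alignas(kLine) PaddedCounter {
        long value = 0;
    };

    PaddedCounter counters[8];  // one per thread, one per cache line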

So ultimately it looks like most of the advances come from better integrating an awareness of how the hardware works throughout the compiler, the language, and the community.

> I guess we're seeing more languages reach C-level performance, again thanks to LLVM.

If they made JavaScript as fast as C, the software industry would become a cheap perversion of what it once was, and I'd be picking up my toys and going home.
