
This was the idea of Itanium. It failed mostly because of economics.

It turns out programmers, or rather their employers, don't really care about using hardware efficiently. They care about shipping things yesterday, because that's how business deals get closed, and making software efficient is secondary to money changing hands. Performance never really matters to business people after the checks are cashed.

Multicore computers have been ubiquitous for more than a decade, yet the overwhelming majority of software built today is single-threaded microservices that spend most of their time serializing and deserializing message payloads.

All of which is to say: most performance is already being left on the table for the majority of what computers are used for today.




I do want to say that I think the Itanic would have fared way, way better in a post-LLVM world, where smart optimizing compilers are much better valued and understood, and where language designers actively work hand-in-hand with compiler devs far more often (with much more significant back-and-forth from hardware manufacturers).


I don't think LLVM is particularly good at optimizing VLIW code.

Very good optimizing compilers existed before LLVM. Intel had one specifically for Itanium. It wasn't enough.


Why would LLVM be particularly good at optimizing VLIW code when there's no demand for it to be? You can't assume everything else would remain the same in the hypothetical I posed.


A) optimizing for VLIW is hard. B) the null hypothesis would be no change.


Given how much of today's computing depends on a database query, this is no surprise. Who cares about the microseconds you gain from added efficiency when there's a 100 ms DB query in the path?
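
A rough back-of-envelope in Python, with made-up numbers just to illustrate the proportions:

    # Hypothetical numbers: a 100 ms DB round trip plus 0.5 ms of CPU work
    # per request, and a 2x speedup of the CPU-side work.
    db_query_ms = 100.0
    cpu_work_ms = 0.5
    speedup = 2.0

    before = db_query_ms + cpu_work_ms            # 100.50 ms end to end
    after = db_query_ms + cpu_work_ms / speedup   # 100.25 ms end to end

    print(f"end-to-end gain: {100 * (1 - after / before):.2f}%")  # ~0.25%

Even doubling the CPU-side efficiency barely moves the request latency; that's the Amdahl's-law shape of the argument.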


Apparently Apple and Intel do, since they introduced those changes into their silicon.


Not every pipeline involves a DB query.


Where do you think DBs run?


I mean sure, I don't doubt 99% of end-user programmers wouldn't look twice at something like this, but compiler designers probably would care.

And it's not like the companies aren't trying this idea (again, the M1 exploit). But for whatever reason they want to keep CPUs as black-box, perfect abstract machines, even though we know they aren't.



