
Not sure what you mean here.

Itanium was a VLIW, and the Yale Bulldog compiler solved the VLIW code-generation problem with trace scheduling back in the '80s. At this point, that's standard Dragon Book stuff.
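For the curious, the heart of trace scheduling is picking a hot straight-line path (the "trace") and greedily packing independent operations into wide instruction words. Here's a toy Python sketch of just the bundling step; the 3-slot machine, the op format, and the dependency test are illustrative assumptions, and it omits the compensation code a real trace scheduler emits for off-trace entries and exits.

```python
# Toy sketch of VLIW bundle packing along one trace.
# Assumed machine: 3 issue slots per bundle (not any real ISA).

ISSUE_WIDTH = 3

def pack_trace(ops):
    """Greedily pack a trace of ops into VLIW bundles.

    ops: list of (name, reads, writes) in program order.
    Returns a list of bundles, each a list of op names.
    """
    bundles = []  # each bundle: [names, read_set, write_set]
    for name, reads, writes in ops:
        reads, writes = set(reads), set(writes)
        # Earliest legal bundle: one past the last bundle this op
        # depends on (RAW, WAR, or WAW hazard).
        earliest = 0
        for i, (_, b_reads, b_writes) in enumerate(bundles):
            if (reads & b_writes) or (writes & b_reads) or (writes & b_writes):
                earliest = i + 1
        # Take the first bundle from there with a free slot.
        while earliest < len(bundles) and len(bundles[earliest][0]) >= ISSUE_WIDTH:
            earliest += 1
        if earliest == len(bundles):
            bundles.append([[], set(), set()])
        bundles[earliest][0].append(name)
        bundles[earliest][1].update(reads)
        bundles[earliest][2].update(writes)
    return [names for names, _, _ in bundles]

trace = [
    ("load r1", ["a"], ["r1"]),
    ("load r2", ["b"], ["r2"]),
    ("add r3",  ["r1", "r2"], ["r3"]),
    ("load r4", ["c"], ["r4"]),
]
# The independent "load r4" is hoisted into the first bundle,
# ahead of the add that must wait on r1 and r2.
print(pack_trace(trace))
```

The whole point, and the whole bet of VLIW/EPIC, is that this packing happens at compile time rather than in out-of-order hardware.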

Itanium failed, to be sure, but it wasn't the compiler's fault. I'll grant, however, that the workload didn't match the EPIC architecture and the compiler. EPIC did succeed at scientific workloads.

As to whether the Mill respects the current state of compiler technology a lot more, I'd like to know more about that. They need to get their LLVM backend up and running.




Agreed: 'compiler technology' was a marketing promise/excuse covering the fact that for any real-world problem, even the best possible code couldn't come close to the theoretical hardware performance. Intel had previously made the same mistake with the i860 but didn't learn from it.


So the hardware didn't match the workload and they blamed the compiler? That sounds plausible. I do think they oversold the Itanium's use cases, though; they made it sound like a general-purpose server CPU. The Mill might not be general-purpose either, but they seem more honest about its niche (it's basically a DSP that happens to support general computation). Maybe that will be enough to keep it from being a disappointment.



