A frequent thing I see here on HN and many other sites is the immediate dismissal of VLIW... This sort of architecture has had a bad rap since it was first talked about (over 30 years ago now) right up to the present day. This is really encouraging me to write a post on why VLIWs have been unjustly shit on, and why what most people think of as VLIW (e.g. Itanium) is not a true VLIW.

At least the linked article gives a mostly fair (but very brief) description of VLIW, but then inappropriately compares it to Intel/HP's EPIC, which, while inspired by VLIW, went so far off the rails that, in my opinion, it was doomed to fail as an architecture.
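
For anyone who hasn't stared at one up close, here's a minimal sketch of what I mean by a "true" VLIW, in C, using a hypothetical 3-slot machine I made up for illustration: the compiler statically decides which operation goes in which functional-unit slot, and the hardware issues all slots in lockstep with no reordering, interlocks, or dependency checking.

    #include <stdio.h>

    /* Hypothetical 3-slot VLIW instruction word: one slot per functional
     * unit. All three operations issue together every cycle; the compiler,
     * not the hardware, guarantees the slots are independent. */
    typedef struct {
        unsigned alu_op;  /* integer ALU slot */
        unsigned mem_op;  /* load/store slot  */
        unsigned br_op;   /* branch slot      */
    } vliw_bundle;

    /* A slot with no useful work gets an explicit NOP; classic VLIW has
     * no hardware to compress or skip an empty slot. */
    #define NOP 0u

    int main(void) {
        /* The "program" is just an array of bundles; the whole schedule
         * is fixed at compile time. */
        vliw_bundle program[] = {
            { /* add */ 0x01, /* load */ 0x10, /* no branch */ NOP },
            { /* sub */ 0x02, NOP, NOP },  /* two of three slots wasted */
        };

        for (size_t i = 0; i < sizeof program / sizeof *program; i++)
            printf("cycle %zu: alu=%#x mem=%#x br=%#x\n",
                   i, program[i].alu_op, program[i].mem_op, program[i].br_op);
        return 0;
    }

EPIC kept the explicit grouping but, roughly speaking, layered predication, speculation support, and variable-length instruction groups on top of it, so the hardware stopped being simple. That is a big part of what I mean by "off the rails".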




Please do; I would love to read a piece with a pro-VLIW perspective.

I always found Transmeta's approach to VLIW very interesting. In a way, it did for VLIW what current GPUs are doing for SIMD: take the dispatching away from the compiler and use the richer runtime information to fill in the slots.
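
To make that concrete, here's a toy sketch in C of the basic idea (everything here, including the 'conflicts' and 'pack' helpers, is invented for illustration; it is not Transmeta's actual Code Morphing software): a runtime translator walks the instruction stream and greedily packs independent ops into the slots of each bundle, using dependency information visible at run time.

    #include <stdbool.h>
    #include <stdio.h>

    #define SLOTS 3  /* hypothetical 3-slot VLIW target */

    typedef struct { int dst, src1, src2; } op;  /* register numbers */

    /* True if b can't share a bundle with a: b reads a's result (RAW),
     * or b's write collides with a's output or inputs (WAW/WAR). */
    static bool conflicts(op a, op b) {
        return b.src1 == a.dst || b.src2 == a.dst   /* read-after-write  */
            || b.dst  == a.dst                      /* write-after-write */
            || b.dst  == a.src1 || b.dst == a.src2; /* write-after-read  */
    }

    /* Greedy in-order packer: keep taking the next op as long as it is
     * independent of everything already in the current bundle. A real
     * translator would also reorder across bundles, track memory ops,
     * and use profile feedback. */
    static void pack(const op *ops, int n) {
        int i = 0, cycle = 0;
        while (i < n) {
            int start = i, filled = 0;
            while (i < n && filled < SLOTS) {
                bool ok = true;
                for (int j = start; j < i; j++)
                    if (conflicts(ops[j], ops[i])) { ok = false; break; }
                if (!ok) break;
                i++; filled++;
            }
            printf("bundle %d: ops %d..%d (%d slot%s used)\n",
                   cycle++, start, i - 1, filled, filled == 1 ? "" : "s");
        }
    }

    int main(void) {
        op stream[] = {
            {1, 2, 3},  /* r1 = r2 + r3 */
            {4, 5, 6},  /* r4 = r5 + r6 -- independent, shares the bundle */
            {7, 1, 4},  /* r7 = r1 + r4 -- depends on both, new bundle    */
        };
        pack(stream, 3);
        return 0;
    }

Run it and the two independent adds share a bundle while the dependent third op starts a new one. The GPU analogy holds in the sense that the scheduling decision moves from a static compile step to something that can see actual run-time behavior (hot traces, real dependencies), at the cost of doing that work on the fly.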

On a side note, how does Rex's architecture compare to earlier tiled, mesh-based VLIW architectures (e.g. Tilera)?


OK. Just link to examples where it beats modern processors while avoiding the failures of Itanium etc., outside the embedded space and its constraints: HPC, or at least desktop-grade stuff. Depending on the quality of the evidence, that should either strongly support your claim or settle the debate.


I'm actually going to commit to writing this over the weekend... I'll post it on HN, aiming for Monday.


Well, that and what it says in your profile should make for a popular topic.

(Although demanding audiences want 'two orders of magnitude', perhaps 25x can squeak by ;-)



