
This is a bit snarky — but will Intel actually keep this product line alive for more than a few years? Having been bitten by building products around some of their non-x86 offerings where they killed good IP off and then failed to support it… I’m skeptical.

I truly do hope it is successful so we can have some alternative accelerators.




The real question is, how long does it actually have to hang around? With the way this market is going, it probably only has to be supported in earnest for a few years, by which point it'll be so far obsolete that everyone who matters will have moved on.


We're talking about the architecture, not the hardware model. What people want is to have a new, faster version in a few years that will run the same code written for this one.

Also, hardware has a lifecycle. At some point the old hardware isn't worth running in a large scale operation because it consumes more in electricity to run 24/7 than it would cost to replace with newer hardware. But then it falls into the hands of people who aren't going to run it 24/7, like hobbyists and students, which as a manufacturer you still want to support because that's how you get people to invest their time in your stuff instead of a competitor's.
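
A rough back-of-the-envelope sketch of that electricity-vs-replacement tradeoff; every number here (power draw, utility rate) is a hypothetical assumption for illustration, not anything from a spec sheet:

    # All figures below are hypothetical assumptions for illustration only.
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.12      # assumed USD per kWh

    def yearly_power_cost(watts: float) -> float:
        """Electricity cost of running a device 24/7 for a year."""
        return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

    old_card_watts = 600      # assumed draw of the older accelerator
    new_card_watts = 300      # assumed draw of a newer card doing the same work

    savings = yearly_power_cost(old_card_watts) - yearly_power_cost(new_card_watts)
    print(f"old: ${yearly_power_cost(old_card_watts):.0f}/yr")   # ~$631
    print(f"new: ${yearly_power_cost(new_card_watts):.0f}/yr")   # ~$315
    print(f"savings: ${savings:.0f}/yr")                         # ~$315
    # Once a few years of that saving exceed the price of the replacement,
    # the 24/7 operator upgrades and the old card ends up with hobbyists.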


What’s Next: Intel Gaudi 3 accelerators' momentum will be foundational for Falcon Shores, Intel’s next-generation graphics processing unit (GPU) for AI and high-performance computing (HPC). Falcon Shores will integrate the Intel Gaudi and Intel® Xe intellectual property (IP) with a single GPU programming interface built on the Intel® oneAPI specification.


I can't tell if your comment is sarcastic or genuine :). It goes to show how out of touch I am on AI hw and sw matters.

Yesterday I thought about installing and trying https://news.ycombinator.com/item?id=39372159 (Reor is an open-source AI note-taking app that runs models locally.) and feeding it my markdown folder, but I stopped midway, asking myself "don't I need some kind of powerful GPU for that?". And now I am thinking "wait, should I wait for a `standard` pluggable AI computing hardware device? Is that Intel Gaudi 3 something like that?".


I think it's a valid question. Intel has a habit of quietly killing off anything that doesn't immediately ship millions of units or isn't something they're contractually obligated to support.


Long enough for you to get in, develop some AI product, raise investment funds, and get out with your bag!


I hope it pairs well with Optane modules!


I'll add it right next to my Xeon Phi!


I'm not very involved in the broader topic, but isn't the shortage of hardware for AI-related workloads intense enough to grant them the benefit of the doubt?


Itanic was a fun era


Itanium only stuck around as long as it did because they were obligated to support HP.


Itanium only failed because AMD was allowed to come up with AMD64. Had there been no 64-bit alternative compatible with x86, Intel would have managed to push Itanium no matter what.


Itanium wasn't x86 compatible; it used the EPIC VLIW instruction set. It relied heavily on compiler optimization that never really materialized. I think it was called speculative precompilation or something like that. The Itanium suffered in two ways that had interplay with one another. The first is that it was very latency sensitive, and non-deterministic fetches stalled it. The second is that there often weren't enough parallel instructions to execute simultaneously. In both cases the processor spent a lot of time executing NOPs.
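
A toy sketch of that second problem, assuming nothing about the real IA-64 bundle formats or templates: a static scheduler that has to fill fixed-width bundles with mutually independent instructions ends up padding with NOPs whenever the next instruction depends on a result already in the bundle.

    # Toy static VLIW-style bundler (illustrative only, not real IA-64 encoding).
    # Instructions that depend on a result already in the current bundle force
    # the bundle to be closed early and padded with NOPs.
    from dataclasses import dataclass

    @dataclass
    class Insn:
        dst: str
        srcs: tuple

    program = [
        Insn("r1", ("a",)),        # load-like op
        Insn("r2", ("r1", "b")),   # depends on r1
        Insn("r3", ("r2", "c")),   # depends on r2
    ]

    BUNDLE_WIDTH = 3
    bundles, current = [], []
    for insn in program:
        depends = any(src in [i.dst for i in current] for src in insn.srcs)
        if depends or len(current) == BUNDLE_WIDTH:
            bundles.append(current + ["nop"] * (BUNDLE_WIDTH - len(current)))
            current = []
        current.append(insn)
    if current:
        bundles.append(current + ["nop"] * (BUNDLE_WIDTH - len(current)))

    for b in bundles:
        print([x if isinstance(x, str) else x.dst for x in b])
    # A dependent chain like this fills two of every three slots with NOPs.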

Modern CPUs have moved towards becoming simpler and more flexible in their execution with specialized hardware (GPUs, etc) for the more parallel and repetitive tasks that Itanium excelled at.


I didn't say it was, only that AMD allowed an escape hatch.

Had it not happened, PC makers wouldn't have had any alternative other than buying PCs with Windows / Itanium, no matter what.


I doubt that Itanium would have ever trickled down to consumer-level devices. It was ill suited for that workload because it was designed for highly parallel workloads. It was still struggling with server workloads at the time it was discontinued.

At Itanium's launch, an x86 Windows Server could use Physical Address Extension to support 128 GB of RAM. In an alternate timeline where x86-64 never happened, we'd likely have seen PAE trickle down to consumer-level operating systems to support more than 4 GB of RAM. It was supported on all popular consumer x86 CPUs from Intel and AMD at the time.
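
For concreteness, the addressing arithmetic behind that (the 64 GB figure is the original 36-bit PAE limit; reaching 128 GB assumes hardware exposing at least 37 physical address bits):

    # Address-space arithmetic behind PAE (Physical Address Extension).
    # Virtual addresses stay 32-bit, so each process still sees at most 4 GiB,
    # but wider page-table entries let the OS address more physical RAM.
    GiB = 2**30
    print(2**32 // GiB)   # 4   GiB: per-process virtual address space
    print(2**36 // GiB)   # 64  GiB: physical RAM with the original 36-bit PAE
    print(2**37 // GiB)   # 128 GiB: needs 37+ physical address bits in hardware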

The primary reasons we have the technologies we have today were wide availability and wide support. Itanium never achieved either. In a timeline without x86-64 there might have been room for IBM Power to compete with Xeon/Opteron/Itanium. The console wars would still have developed the underlying technologies used by Nvidia for its ML products, and Intel would likely be devoting resources to making Itanium an ML powerhouse.

We'd be stuck with x86, ARM or Power as a desktop option.


But Itanium was not compatible with x86; it used emulation to run x86 software.


I didn't say it was, only that AMD allowed an escape hatch.


I haven’t read the article, but my first question would be “what problem is this accelerator solving?” and if the answer is simply “you can do AI without Nvidia”, that’s not good enough, because that’s the pot calling the kettle black. None of these companies is “altruistic”, but between the three of them I expect AMD to be the nicest to its customers. Nvidia will squeeze the most money out of theirs, and Intel will leave theirs out to dry when corporate leadership decides it’s a failure.



