
It will be sticky as long as there's a Cambrian explosion of AI innovation happening. NVIDIA built the best Swiss Army knife for GPGPU problems in general, spent 15 years building the ecosystem and adoption around it, and then tailored it to AI specifically.

Once the tech settles down a bit, Google and Amazon and others can absolutely snipe the revenue at lower cost, just as they did with previous generations of TPUs and Gravitons. But then some new innovation comes out that the ASICs (or ARM-plus-accelerator designs) don't handle, and everyone's back to using NVIDIA because it just works.

AMD potentially has a Swiss Army knife too, but they also have a crap software stack that segfaults just running demos on supported OS/ROCm configurations, and a runtime with a lot of paper features and feature gaps. NVIDIA's stack, by contrast, just works and has a massive ecosystem of libraries and tools available. Moreover, they have the mindshare advantage: innovation happens on NVIDIA's platform, because NVIDIA spent billions of dollars building the ecosystem to make sure it happens there. Sure it's a cage, but it's got golden bars and room service.

https://github.com/RadeonOpenCompute/ROCm/issues/2198
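To be clear about the bar here: the "demos" in question are introductory programs on the order of a vector add. Here's a minimal HIP sketch of that level of program (illustrative only, not the reproducer from the linked issue) -- the kind of thing that should run without drama on any supported configuration:

    // vadd.hip.cpp -- minimal HIP vector add; build with: hipcc vadd.hip.cpp -o vadd
    // Illustrative sketch only, not the code from the linked issue.
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void vadd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
        float *da, *db, *dc;
        hipMalloc((void**)&da, n * sizeof(float));
        hipMalloc((void**)&db, n * sizeof(float));
        hipMalloc((void**)&dc, n * sizeof(float));
        hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);
        // One thread per element, 256 threads per block.
        hipLaunchKernelGGL(vadd, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);
        hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("hc[0] = %f\n", hc[0]); // expect 3.0
        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }

When twenty-odd lines of hello-world GPU code can't be counted on to run, nobody builds the next research framework on top of it.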

So I guess I'd say it's sticky until the technology settles. At steady state, I think competitors will capture a lot of that revenue, but during periods of innovation everyone flocks back to NVIDIA. AMD could maybe break that pattern, but they'll have to actually do the work first; they've tried the "do nothing and let the community write it" strategy for the last 15 years and it hasn't worked. You gotta get the community to the starting line, at least. Writing a software ecosystem is one thing; writing runtimes and drivers is another.
