
The CUDA lock-in is overplayed. TensorFlow, PyTorch, and every other large framework support multiple hardware backends, including Google TPUs. Any company making a significant investment will steer some of it toward hardware support in the software they need.
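For what it's worth, the backend-agnostic story is real at the API level. A minimal PyTorch sketch of device-agnostic code (torch_xla is the separate package Google ships for TPU support; everything else is stock PyTorch):

    import torch

    # Pick whichever accelerator is present; the code below is
    # identical regardless of vendor.
    if torch.cuda.is_available():                # NVIDIA GPUs via CUDA
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():     # Apple M-series via Metal
        device = torch.device("mps")
    else:
        try:
            import torch_xla.core.xla_model as xm   # Google TPUs via XLA
            device = xm.xla_device()
        except ImportError:
            device = torch.device("cpu")

    model = torch.nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)
    model(x).sum().backward()   # autograd works the same on every backend

The same forward/backward code runs on CUDA, Metal, XLA, or CPU; the vendor-specific work lives behind the device abstraction.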



Name one model (besides Gemini, obviously) that was trained on non-Nvidia hardware.


Who knows, likely not many aside from folks training on TPUs in GCP. But any well-funded corporation has a path laid out by Google, and by Apple with its M-series: you can build dedicated ML chips, and if you can do that, the software ecosystem knows how to handle them. CUDA isn't the moat; NVIDIA's moat is still the chips. Building huge systems and ecosystems is a game for only the most capitalized entities, but all of them can play it. The software part is already a solved problem, at the cost of writing a new compiler (see the sketch below).
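To make the "new compiler" point concrete: PyTorch exposes exactly that seam through custom torch.compile backends. A toy sketch (vendor_backend is a made-up name; a real port would lower the captured FX graph to the new chip instead of running it unchanged):

    import torch

    # A toy TorchDynamo backend: it receives the captured op-level graph,
    # prints it (this is what a vendor compiler would consume), and
    # returns a callable that executes it as-is.
    def vendor_backend(gm: torch.fx.GraphModule, example_inputs):
        gm.graph.print_tabular()
        return gm.forward

    model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
    compiled = torch.compile(model, backend=vendor_backend)
    print(compiled(torch.randn(4, 8)))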


That probably had less to do with CUDA and more to do with the fact that Nvidia dominates the high end of the market.



