
I keep hearing conflicting accounts of ROCm. Either it is deprecated or abandoned, or it is going to be (maybe, someday) the thing that lets AMD compete with CUDA. Yet the hardware to buy today if you're training LLMs or running diffusion models is Nvidia GPUs with CUDA and tensor cores. Very little of the LLM software out in the wild runs on anything other than CUDA, though some is now targeting Metal (Apple Silicon).
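
To make that concrete, here's a minimal sketch of how device selection typically looks in PyTorch (assuming a wheel built for CUDA, ROCm, or Apple's MPS backend; the framework and variable names are just illustrative). ROCm builds expose AMD GPUs through the same torch.cuda API, which is why a lot of nominally CUDA-only code can run on AMD hardware when the framework itself supports it:

    import torch

    # On ROCm builds of PyTorch, AMD GPUs show up through the torch.cuda API;
    # torch.version.hip is a version string on ROCm builds and None on CUDA builds.
    if torch.cuda.is_available():
        backend = "ROCm/HIP" if torch.version.hip else "CUDA"
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        backend = "Metal (MPS)"       # Apple Silicon
        device = torch.device("mps")
    else:
        backend = "CPU"
        device = torch.device("cpu")

    print(f"Using {backend} on {device}")
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # the matmul runs on whichever accelerator was selected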

Is ROCm abandonware? Is it AMD's platform to compete? I'm rooting for AMD, and I'm buying their CPUs, but I'm pairing them with Nvidia GPUs for ML work.




They released an SDK with some Windows support a month ago. As far as I understand, it's still being developed. A bit slow, but it's not abandoned.



