Don't agree at all. PyTorch is one library - yes, it's important that it supports AMD GPUs, but it's not enough.

The ROCm libraries just aren't good enough currently. The documentation is poor. AMD need to invest heavily in the software ecosystem around ROCm, because library authors need decent support to adopt it. If you need to be a Facebook-sized organisation to write a library that works on both AMD and CUDA, then the barrier to entry is too high.
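
For context, the single-source route AMD pushes is HIP, which mirrors the CUDA runtime API almost call-for-call, and hipcc can compile the same file for either vendor. A minimal, untested sketch of what that looks like (the scale kernel is just illustrative):

  // Minimal HIP sketch: one source file targets AMD (ROCm) and,
  // via HIP's CUDA backend, Nvidia as well. Untested illustration.
  #include <hip/hip_runtime.h>
  #include <cstdio>

  __global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
  }

  int main() {
    const int n = 1 << 20;
    float* d = nullptr;
    hipMalloc(&d, n * sizeof(float));   // same call on either vendor
    hipMemset(d, 0, n * sizeof(float));
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                       d, 2.0f, n);
    hipDeviceSynchronize();
    hipFree(d);
    printf("done\n");
    return 0;
  }

The runtime API layer isn't really the problem; it's everything above it (the equivalents of cuDNN, cuFFT, cuBLAS and their docs) where the support gap bites.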

Disagree that the ROCm libraries are poor. Their integration with everything else is poor because everything else is so heavily Nvidia-centric, and AMD can't just write to the same API because Nvidia holds the copyright (see Oracle's Java API case).

The adoption of CUDA has been such a coup for Nvidia; it's going to take some time to dismantle it.


I don't use high-level frameworks like PyTorch because my work is in computational physics, so I actually use the lower-level libraries. The documentation doesn't even come close, although it has got better. But the libraries are just not at feature parity, and that's not on anyone but AMD currently. They need to invest more in the core libraries.

Just look at cuFFT vs rocFFT, for example: they aren't even close to feature parity. Things like multi-GPU transforms are totally missing, and callbacks are still "experimental". These are pretty basic features; bear in mind that when people ported from CPU codes, CUDA had to support them because they already existed in FFTW (transforms over multiple CPUs rather than GPUs, via MPI).
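
To make the gap concrete, here's roughly what the missing feature looks like on the CUDA side: cuFFT can spread a single plan across GPUs through its cufftXt extensions. A rough, untested sketch (error checks dropped, assumes two visible GPUs; rocFFT has no equivalent path):

  // Multi-GPU 1D complex-to-complex FFT via cuFFT's cufftXt API.
  // Untested sketch; every call here returns a cufftResult that
  // real code should check.
  #include <cufftXt.h>

  void fft_across_two_gpus(cufftComplex* host_data, int nx) {
    cufftHandle plan;
    cufftCreate(&plan);

    int gpus[2] = {0, 1};
    cufftXtSetGPUs(plan, 2, gpus);   // spread the plan over two GPUs

    size_t work_sizes[2];            // one workspace size per GPU
    cufftMakePlan1d(plan, nx, CUFFT_C2C, 1, work_sizes);

    cudaLibXtDesc* desc;             // library-managed multi-GPU buffer
    cufftXtMalloc(plan, &desc, CUFFT_XT_FORMAT_INPLACE);
    cufftXtMemcpy(plan, desc, host_data, CUFFT_COPY_HOST_TO_DEVICE);

    cufftXtExecDescriptorC2C(plan, desc, desc, CUFFT_FORWARD);

    cufftXtMemcpy(plan, host_data, desc, CUFFT_COPY_DEVICE_TO_HOST);
    cufftXtFree(desc);
    cufftDestroy(plan);
  }

With rocFFT you'd be hand-rolling the data distribution and inter-GPU exchanges yourself.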
