CUDA runs on most recent Nvidia GPUs, which are plentiful on college campuses and well supported in server software. AMD's GPGPU compute support varies from GPU to GPU, and Apple didn't start contributing acceleration patches to PyTorch and TensorFlow until models like Llama and Stable Diffusion took off.
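
To give a rough sense of what that fragmentation looks like in practice, here's a minimal PyTorch sketch that just picks whichever accelerator backend happens to be available; the fallback order and tensor sizes are arbitrary choices for the example:

```python
import torch

def pick_device() -> torch.device:
    """Choose an accelerator backend if one is available, else fall back to CPU."""
    if torch.cuda.is_available():          # Nvidia GPUs via CUDA
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon via Metal Performance Shaders
        return torch.device("mps")
    return torch.device("cpu")             # portable but slow fallback

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x  # the matrix multiply runs on whichever backend was selected
print(f"ran on: {y.device}")
```

As I understand it, ROCm builds of PyTorch surface AMD GPUs through the same `cuda` device name, so the same check covers them on the cards ROCm actually supports.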