https://github.com/InternLM/lmdeploy
https://github.com/vllm-project/vllm
https://github.com/OpenNMT/CTranslate2
You know what’s missing from all of these, and many more like them? Support for ROCm. And that’s before you even get to the wildly performant stuff like Triton Inference Server, FasterTransformer, TensorRT-LLM, etc.
ROCm is at the “get it to work” stage (see the top comment, the blog posts everywhere celebrating minor successes, etc.). CUDA is at the “wring every last penny of performance out of this thing” stage.
In terms of hardware support, I think that one is obvious. The U in CUDA originally stood for unified. Look at the list of chips supported by Nvidia drivers and CUDA releases. Literally anything from at least the past 10 years that has Nvidia printed on the box will just run CUDA code.
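To give a feel for how uniform that surface is, here’s a minimal sketch (nothing project-specific, just the stock runtime API) that queries every visible device. The same binary and the same two calls behave identically on a 2014 Maxwell laptop part and on an H100:

    // caps.cu -- list every visible GPU and its compute capability.
    // Build: nvcc caps.cu -o caps
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::printf("no CUDA devices visible\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // prop.major/minor: 5.x = Maxwell, 6.x = Pascal, 9.0 = Hopper (H100)
            std::printf("device %d: %s (compute %d.%d)\n",
                        i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }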
One of my projects specifically targets Pascal and up, and even Pascal felt like a stretch when I set that floor. Cue my surprise when I got a report of someone casually firing it up on Maxwell, when I was pretty certain there was no way it could work.
A Maxwell laptop chip. It also runs just as well on an H100.
THAT is hardware support.
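For what it’s worth, the mechanism behind that spread, as I understand it, is nvcc fat binaries: you embed native machine code (SASS) for the architectures you explicitly target, plus PTX that the driver JIT-compiles at load time for anything newer. The arch list below is illustrative, not my project’s actual flags; presumably a compute_50 target like the one here is what let it come up on Maxwell at all:

    // kernel.cu -- trivial kernel, just to show the fat-binary mechanism.
    // Illustrative build line (example arch list, not my real flags):
    //
    //   nvcc kernel.cu -o kernel \
    //       -gencode arch=compute_50,code=sm_50 \
    //       -gencode arch=compute_60,code=sm_60 \
    //       -gencode arch=compute_50,code=compute_50
    //
    // The sm_50/sm_60 entries embed native SASS for Maxwell and Pascal.
    // The last entry embeds compute_50 PTX, which the driver JIT-compiles
    // for anything newer -- which is how a binary built before Hopper
    // existed still runs on an H100.
    #include <cstdio>

    __global__ void hello() {
        printf("hello from the GPU\n");
    }

    int main() {
        hello<<<1, 1>>>();
        cudaDeviceSynchronize();  // wait so device-side printf is flushed
        return 0;
    }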