That is what it is, afaict, but NVIDIA says this in their 2019 post "Machine Learning Acceleration in Vulkan with Cooperative Matrices"[0]:
> Additionally, if the GPU includes dedicated hardware for high-speed matrix operations, such as the Tensor Cores on Turing GPUs, then the Cooperative Matrix extension can tap into the power of this acceleration with no application changes.
The benchmark graph in that post doesn't look too great, though - it lands at around half of the "theoretical peak tensor core performance".
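For what it's worth, the shader-side interface is just a handful of new GLSL types and built-ins. Here's a minimal sketch of a single 16x16 FP16 tile multiply-accumulate using GL_NV_cooperative_matrix (the extension the post is about); the buffer bindings, tile size, and strides are made up for illustration, and whether coopMatMulAddNV actually lands on Tensor Cores is up to the driver:

    #version 450
    #extension GL_NV_cooperative_matrix : enable
    #extension GL_KHR_memory_scope_semantics : enable
    #extension GL_EXT_shader_explicit_arithmetic_types_float16 : enable
    #pragma use_vulkan_memory_model

    // One subgroup (32 lanes on NVIDIA) cooperatively owns each 16x16 tile.
    layout(local_size_x = 32) in;

    // Illustrative bindings: one 16x16 row-major fp16 tile per buffer.
    layout(set = 0, binding = 0) buffer BufA { float16_t a[]; };
    layout(set = 0, binding = 1) buffer BufB { float16_t b[]; };
    layout(set = 0, binding = 2) buffer BufC { float16_t c[]; };

    void main() {
        // Fragments of a 16x16 fp16 matrix, distributed across the subgroup.
        fcoopmatNV<16, gl_ScopeSubgroup, 16, 16> matA;
        fcoopmatNV<16, gl_ScopeSubgroup, 16, 16> matB;
        fcoopmatNV<16, gl_ScopeSubgroup, 16, 16> acc =
            fcoopmatNV<16, gl_ScopeSubgroup, 16, 16>(0.0);

        // element offset 0, stride of 16 elements, row-major (colMajor = false)
        coopMatLoadNV(matA, a, 0, 16, false);
        coopMatLoadNV(matB, b, 0, 16, false);

        // acc = matA * matB + acc -- the op the driver can map onto Tensor Cores
        acc = coopMatMulAddNV(matA, matB, acc);

        coopMatStoreNV(acc, c, 0, 16, false);
    }

A real kernel would loop over K in tile-sized chunks; this just shows the types and built-ins involved.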
[0]: https://developer.nvidia.com/blog/machine-learning-accelerat...