I thought Nvidia cards already had some kind of ML hardware for upscaling video games? Which seems like going full circle, since large AI models (except those trained on Google's TPUs) are usually trained on GPUs, which were originally intended for games.
The Tensor cores on Nvidia GPUs are effectively "a separate chip", they're just on the same die. That's where DLSS and frame-generation ML inference run.
Separating them out would hamstring them, since they would no longer be able to process frames as they're being rendered without a performance penalty.
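A rough back-of-envelope sketch of why that penalty matters: shipping each frame to an off-die accelerator and back over a PCIe-class link eats a big chunk of the frame budget, whereas on-die Tensor cores read the frame out of local VRAM. All the numbers below (frame format, link bandwidth, target frame rate) are illustrative assumptions, not measurements:

```python
# Back-of-envelope cost of moving a rendered frame to a hypothetical
# off-die ML accelerator and back. All figures are rough assumptions.

FRAME_W, FRAME_H = 3840, 2160      # 4K render target
BYTES_PER_PIXEL = 8                # assuming an RGBA16F buffer
LINK_BYTES_PER_SEC = 32e9          # ~nominal PCIe 4.0 x16 bandwidth

frame_bytes = FRAME_W * FRAME_H * BYTES_PER_PIXEL     # ~66 MB per frame
one_way_ms = frame_bytes / LINK_BYTES_PER_SEC * 1e3   # copy frame out
round_trip_ms = 2 * one_way_ms                        # copy result back too
frame_budget_ms = 1000 / 120                          # budget at 120 fps

print(f"transfer: {round_trip_ms:.1f} ms of a "
      f"{frame_budget_ms:.1f} ms frame budget")
```

Under these assumptions the copy alone burns roughly half the frame budget at 120 fps, before the accelerator does any actual inference, which is why keeping the Tensor cores on-die next to the render pipeline matters.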