
I thought Nvidia cards already had some kind of ML hardware for upscaling video games? That seems like coming full circle, since large AI models (except those trained on Google TPUs) are usually trained on GPUs, which were originally intended for games.



I was thinking that you’d need a separate chip so that there are no memory issues when rendering, but I don’t have enough expertise in this area.


The Tensor cores on Nvidia GPUs are effectively "a separate chip"; they're just on the same die. That's where DLSS and frame-generation ML inference run.

Separating them out would hamstring them, since they wouldn't be able to process the frames as they're being rendered without a performance penalty.
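
As a rough illustration of that point (just a sketch, not how DLSS actually works; the frame size, the bilinear "upscaler", and the timing here are all made up for the example), here's a small PyTorch snippet that compares upscaling a frame that stays in GPU memory against first shipping it over the bus to a stand-in for a separate chip:

    import time
    import torch

    assert torch.cuda.is_available()

    # A 4K RGBA frame in half precision, already resident in GPU memory,
    # roughly what you'd have right after rendering.
    frame = torch.rand(1, 4, 2160, 3840, dtype=torch.float16, device="cuda")

    # Stand-in for an on-die ML upscaling pass: a plain bilinear 2x upscale.
    def upscale_on_gpu(f):
        return torch.nn.functional.interpolate(
            f, scale_factor=2, mode="bilinear", align_corners=False)

    def timed_ms(fn):
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        fn()
        torch.cuda.synchronize()
        return (time.perf_counter() - t0) * 1000

    upscale_on_gpu(frame)  # warm-up so the first timed call isn't dominated by kernel init

    # Case 1: the frame never leaves the GPU (Tensor cores on the same die).
    on_die_ms = timed_ms(lambda: upscale_on_gpu(frame))

    # Case 2: the frame is copied off the device and back, standing in for a
    # separate chip that doesn't share the GPU's memory.
    def round_trip():
        host = frame.to("cpu")    # device -> host over the bus
        back = host.to("cuda")    # host -> device again
        upscale_on_gpu(back)

    off_die_ms = timed_ms(round_trip)

    print(f"frame stays on GPU:  {on_die_ms:.2f} ms")
    print(f"with off-die copies: {off_die_ms:.2f} ms")

On typical hardware the copies alone can eat a noticeable chunk of a 16 ms frame budget, which is the penalty being described above.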



