Nvidia isn't specifically on the LLM train: almost all forms of AI can use parallel compute chips to great effect. Their own AI products, like DLSS, have nothing to do with LLMs.
I do not believe this is correct. Sure, you can run Llama or something similar locally on a CPU or an Apple ARM chip, but it's not even close to the speed of running on an Nvidia card.
Do you think Nvidia's trillion-dollar valuation is because entire governments are buying Nvidia cards to run Stable Diffusion?!