Hacker News

Nvidia is specifically not tied to the LLM train - almost all possible forms of AI can use parallel compute chips to great effect. Their own AI products, like DLSS, have nothing to do with LLMs.



DLSS, sure, but they also have products targeted squarely at model training, e.g. the H100. https://www.nvidia.com/en-gb/data-center/dgx-h100/?ncid=pa-s...


I do not believe this is correct. Sure, you can run Llama or something similar locally on a CPU or your Apple ARM chip, but it's not even close to the speed of running on an Nvidia card.
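The speed gap is easy to ballpark: token generation is largely memory-bandwidth bound, since each generated token requires streaming roughly all the model weights once. A minimal sketch of that back-of-envelope calculation, using illustrative (assumed, not measured) bandwidth and model-size figures:

```python
# Back-of-envelope: token generation is roughly memory-bandwidth bound,
# since each token reads (approximately) every weight once.
# All figures below are illustrative assumptions, not benchmarks.

def tokens_per_sec(model_bytes, bandwidth_bytes_per_sec):
    """Rough upper bound on tokens/sec if all weights are read per token."""
    return bandwidth_bytes_per_sec / model_bytes

model_bytes = 7e9 * 2  # assumed: 7B params at fp16 ~ 14 GB of weights
cpu_bw = 50e9          # assumed: typical desktop DRAM, ~50 GB/s
gpu_bw = 2000e9        # assumed: H100-class HBM, ~2 TB/s

print(f"CPU ceiling: {tokens_per_sec(model_bytes, cpu_bw):.1f} tok/s")
print(f"GPU ceiling: {tokens_per_sec(model_bytes, gpu_bw):.1f} tok/s")
```

Under those assumed numbers the ceiling is a few tokens per second on CPU versus well over a hundred on a data-center GPU, which matches the order-of-magnitude gap people see in practice.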

Do you think Nvidia's trillion-dollar valuation is because entire governments are buying Nvidia cards to run Stable Diffusion?!




