
>One possibility is simply cost: if your device does it, you pay for the hardware; if a cloud does it, you have to pay for that hardware again via subscription.

Yeah, but in the cloud that cost is amortized across everyone using the service. If you as a consumer buy a GPU to run LLMs for personal use, then the vast majority of the time it will just sit there depreciating.
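A rough back-of-envelope sketch of that utilization argument (every figure below is a hypothetical assumption, not a real price):

```python
# Back-of-envelope comparison: effective cost per hour of compute you
# actually use, owned GPU vs. cloud. All numbers are made-up assumptions.
GPU_PRICE = 2000.0                # assumed purchase price of a consumer GPU (USD)
LIFETIME_HOURS = 3 * 365 * 24     # assume ~3 years before obsolescence
UTILIZATION = 0.02                # assume personal LLM use occupies ~2% of that time
CLOUD_RATE = 1.50                 # assumed cloud price per GPU-hour (USD)

hours_actually_used = LIFETIME_HOURS * UTILIZATION
owned_cost_per_used_hour = GPU_PRICE / hours_actually_used

print(f"owned: ${owned_cost_per_used_hour:.2f} per hour actually used")
print(f"cloud: ${CLOUD_RATE:.2f} per hour (hardware amortized across tenants)")
```

Under these (entirely hypothetical) numbers the owned GPU works out to a few dollars per hour of actual use, worse than the assumed cloud rate; the comparison flips as personal utilization rises.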

But then again, every Apple silicon user has a mostly unused Neural Engine sitting in the SoC and taking up a significant amount of die space, yet people don't seem to worry too much about its depreciation.



