
All true, but the nature of those models means that a consumer-grade experience while running locally is still perfectly doable. Imagine a black box with the appropriate hardware, preconfigured to run an LLM with chat-centric and task-centric interfaces. You just plug it in, connect it to your wifi, and it "just works". Implementing this would be a piece of cake, since it doesn't require any fancy network configuration or the like.
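
To illustrate how little software glue such a box would need: if the appliance ships a local inference server exposing an OpenAI-compatible endpoint (llama.cpp's llama-server and Ollama both do this today), the entire "chat-centric interface" is roughly a loop like the sketch below. The endpoint URL, port, and model name here are placeholders, not anything a specific product ships with:

    # Minimal local chat loop against an OpenAI-compatible endpoint on the box.
    # ENDPOINT and MODEL are assumptions -- whatever the appliance is preconfigured with.
    import requests

    ENDPOINT = "http://localhost:8080/v1/chat/completions"  # hypothetical default port
    MODEL = "local-model"  # placeholder; the appliance would pin this

    history = []
    while True:
        user = input("> ")
        history.append({"role": "user", "content": user})
        resp = requests.post(ENDPOINT, json={"model": MODEL, "messages": history})
        reply = resp.json()["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        print(reply)

Everything stays on localhost, which is the point: no port forwarding, no cloud account, no network configuration beyond joining the wifi.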

So the only real limiting factor is hardware cost. But my understanding is that there's already a lot of active R&D into hardware optimized specifically for LLM inference, and that it could be made quite a bit simpler and cheaper than modern GPUs, so I wouldn't be surprised if within a few years we can run something on par with GPT-4 locally for the price of a high-end iPhone.



