
I think local AI systems are inevitable. Compute keeps getting better, and even today we can run smaller, more primitive models directly on an iPhone. The future is low-power hardware running models of GPT-4's caliber with near-real-time inference.
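
To give a sense of what "run directly on a device" already looks like today, here is a minimal sketch using the llama-cpp-python bindings with a quantized GGUF model. The model filename is a placeholder of mine, and this assumes you've downloaded a model small enough to fit in the device's RAM.

    # Minimal on-device inference sketch.
    # Assumes: pip install llama-cpp-python, plus a quantized GGUF model on disk
    # (the path below is a placeholder, not a specific recommended model).
    from llama_cpp import Llama

    # Load a small quantized model entirely locally; no network calls involved.
    llm = Llama(model_path="./models/small-model-q4_k_m.gguf", n_ctx=2048)

    # Run one completion on-device.
    out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
    print(out["choices"][0]["text"].strip())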



The technical capability is inevitable, but remember that people hate doing things themselves, and have proven time and time again that they will overlook all kinds of nasty behavior in exchange for consumer-grade experiences. The marketplace loves centralization.


All true, but the nature of these models means that a consumer-grade experience while running locally is still perfectly doable. Imagine a black box preconfigured to run an LLM behind chat-centric and task-centric interfaces: you just plug it in, connect it to your Wi-Fi, and it "just works". Implementing this would be straightforward since it doesn't require any fancy network configuration.

So the only real limiting factor is hardware cost. But my understanding is that there's already a lot of active R&D into hardware optimized specifically for LLMs, and that it could be made quite a bit simpler and cheaper than modern GPUs, so I wouldn't be surprised if, within a few years, we have hardware capable of running something on par with GPT-4 locally for the price of a high-end iPhone.
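
For a concrete sense of why no fancy network configuration is needed, here's a minimal sketch of what such a box's chat interface could look like: one process that loads a local model and serves a plain HTTP chat route on the LAN. The llama-cpp-python bindings and the model path are assumptions on my part just to make the idea concrete, not a claim about what a real product would ship.

    # Sketch of the "plug it in and it just works" box: a local-only chat endpoint.
    # Assumes llama-cpp-python and a quantized GGUF model on disk (placeholder path);
    # everything runs on the box itself, so no cloud account or port forwarding needed.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from llama_cpp import Llama

    llm = Llama(model_path="./models/small-model-q4_k_m.gguf", n_ctx=2048)

    class ChatHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Expect {"message": "..."} from a phone or browser on the same LAN.
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            reply = llm.create_chat_completion(
                messages=[{"role": "user", "content": body["message"]}]
            )["choices"][0]["message"]["content"]
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps({"reply": reply}).encode())

    # Serve only on the local network; clients just point at http://<box-ip>:8080
    HTTPServer(("0.0.0.0", 8080), ChatHandler).serve_forever()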


I don't believe that local AI implies a bad experience. I believe the local AI experience can fundamentally be better than what runs on servers. Average people won't have to do it themselves; that's the whole point. The two worlds are not mutually exclusive, in my opinion.



