
As someone who is very interested in decentralized services (my day job involves decentralized databases, and I'm actively working on WebGPU support for training), I'd say the browser-based vision is a fair way off.

The software ecosystem is pretty immature, and numerous things need to change before the core technologies are good enough to fine-tune competitive LLMs.

I do think fine-tuning moderate-sized LLMs on your own (pretty expensive) consumer GPUs may be possible this year.

Unfortunately, all the evidence is that training (as opposed to inference) requires high precision, and hence a lot of memory. This is something that consumer GPUs for the most part lack. New techniques are likely to be required (e.g. better sharding of training across low-memory GPUs), but it's hard to predict how they will develop.
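To make the memory gap concrete, here's a back-of-the-envelope sketch. The per-parameter byte counts are the standard mixed-precision Adam breakdown (fp16 weights and gradients plus fp32 master weights and optimizer states), not measurements of any particular framework, and the 7B model size is just an illustrative choice:

```python
# Rough memory-per-parameter arithmetic for inference vs. training.
# Numbers are the commonly cited mixed-precision Adam breakdown,
# and exclude activations, which add more on top for training.

def inference_bytes(n_params, bytes_per_weight=2):
    # fp16 weights only
    return n_params * bytes_per_weight

def training_bytes(n_params):
    # fp16 weights (2) + fp16 gradients (2) + fp32 master weights (4)
    # + fp32 Adam momentum (4) + fp32 Adam variance (4) = 16 bytes/param
    return n_params * (2 + 2 + 4 + 4 + 4)

n = 7_000_000_000  # a hypothetical 7B-parameter model
gib = 1024 ** 3
print(f"inference: {inference_bytes(n) / gib:.0f} GiB")  # ~13 GiB
print(f"training:  {training_bytes(n) / gib:.0f} GiB")   # ~104 GiB
```

So a model that squeaks onto a 16-24 GB consumer card for inference needs on the order of 100 GB of optimizer-related state alone to train naively, which is why sharding and low-precision training tricks matter so much here.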



