> "Training costs scale with the number of researchers, inference costs scale with the number of users."

This is interesting, but I think I disagree. I'm most excited about a future where personalized models are continuously training on my own private data.




How can you disagree with that statement? Training takes significantly more processing power than inference, and typically only researchers do the training. So it makes sense that training costs scale with the number of researchers: each researcher needs access to a system powerful enough to perform training.

Inference costs scaling with the number of users is a no-brainer.
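
To make the scaling concrete, here's a toy sketch; every constant below is an illustrative assumption, not a real price:

    # Toy model of the quoted claim: total training cost grows with the
    # number of researchers, total inference cost with the number of users.
    # All constants are made-up, illustrative assumptions.
    def training_cost(researchers, runs_per_researcher=10, cost_per_run=1_000):
        return researchers * runs_per_researcher * cost_per_run  # dollars

    def inference_cost(users, queries_per_user=100, cost_per_query=0.01):
        return users * queries_per_user * cost_per_query  # dollars

    print(training_cost(researchers=50))      # 500000
    print(inference_cost(users=1_000_000))    # 1000000.0

Double the researchers and the training bill doubles; double the users and the inference bill doubles. That's all the quote is saying.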

I'm pretty dumbfounded that you can just dismiss both statements without giving any reasoning.

EDIT:

> I'm most excited about a future where personalized models are continuously training on my own private data.

This won't be as common as you think.


> typically only the researchers will be doing the training

Citizen LLM developers are becoming a thing. Everyone trains (mostly fine-tunes) models today.
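
For what it's worth, the barrier really is low now. Here's a minimal sketch of the kind of fine-tune a citizen developer might run, using the Hugging Face transformers and peft libraries with LoRA; the base model and hyperparameters are just placeholder assumptions:

    # Minimal LoRA fine-tuning setup; "gpt2" and the LoRA hyperparameters
    # below are placeholder assumptions, not a recommendation.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of weights

    # From here it's a standard transformers Trainer loop over your dataset.

Only a tiny adapter is trained, which is exactly why hobbyists can do this on consumer hardware.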


Non-technical people will not be fine-tuning models. A service targeted at the masses is unlikely to fine-tune a per-user model. It wouldn't scale without being astronomically expensive.
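
A quick back-of-envelope shows why; every figure here is an assumption pulled out of the air for illustration:

    # Why per-user fine-tuning is hard to scale. Every figure below is
    # an illustrative assumption, not a measured cost.
    users = 10_000_000
    gpu_hours_per_finetune = 2      # assumed cost of one small LoRA-style fine-tune
    dollars_per_gpu_hour = 2.0      # assumed cloud GPU price
    print(users * gpu_hours_per_finetune * dollars_per_gpu_hour)  # 40000000.0

That's forty million dollars every time you refresh the adapters, before you even pay to store and serve millions of per-user checkpoints.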


We will need at least one, if not several, research and data-capture breakthroughs to get to that point. One person just doesn't create enough data to effectively train models with current techniques, no matter what kind of silicon you have. It might become possible, but research and data breakthroughs are much harder to predict than improvements in chips or software-developer ergonomics. Sometimes the research breakthroughs just never happen.
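
To put rough numbers on the data gap (all figures below are loose assumptions):

    # Back-of-envelope on how little data one person produces versus
    # what current pretraining consumes. All figures are loose assumptions.
    tokens_per_person_per_day = 5_000          # generous estimate for typed text
    personal_corpus = tokens_per_person_per_day * 365 * 10  # a decade of writing
    pretraining_corpus = 1_000_000_000_000     # ~1T tokens, the order used today
    print(personal_corpus)                        # 18250000
    print(pretraining_corpus // personal_corpus)  # ~54794x gap

Even a decade of one person's writing is tens of millions of tokens, several orders of magnitude short of what current training recipes consume.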



