
I have been curious about this for a while, particularly in relation to the increasing cost of training LLMs.

I was recently talking to a friend who works on the AI team at one of the large tech companies, and I asked him this directly. He said that each generation costs ~10x as much to train for a ~1.5x improvement in performance (and the rate of improvement is tapering off). The current generation costs ~$1 billion to train, and the next generation will cost about $10 billion.

The question is whether anyone can afford to spend $100 billion on the generation after that. Maybe a couple of the tech giants can, but you rapidly reach costs that are unaffordable for anyone smaller than the government of a rich country.
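
To make the arithmetic concrete, here is a tiny sketch of where that rule of thumb leads (the 10x/1.5x multipliers and the $1 billion base are just the secondhand figures above, not real data):

    # Illustrative arithmetic only: each generation is assumed to cost ~10x
    # more to train and deliver ~1.5x the performance, starting from a ~$1B
    # current generation. These are the hearsay numbers from the comment above.

    COST_MULTIPLIER = 10.0   # assumed cost growth per generation
    PERF_MULTIPLIER = 1.5    # assumed performance gain per generation
    BASE_COST_USD = 1e9      # assumed training cost of the current generation

    def project(generations: int):
        """Yield (generation, training cost in USD, cumulative performance vs. today)."""
        cost, perf = BASE_COST_USD, 1.0
        for g in range(generations + 1):
            yield g, cost, perf
            cost *= COST_MULTIPLIER
            perf *= PERF_MULTIPLIER

    if __name__ == "__main__":
        for g, cost, perf in project(3):
            print(f"gen +{g}: ~${cost:,.0f} to train, ~{perf:.2f}x today's performance")

Under those assumptions, spending 100x more than today buys only about 2.25x today's performance, which is why the asymptote question below matters.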

It will likely be possible to continue optimizing models for a while after that, and there is always the possibility of new technology that creates a discontinuity. I think the big question is whether AI is "good enough" by the time we hit the asymptote, where "good enough" depends on the use case but roughly corresponds to whether AI can either replace humans or improve human efficiency by an order of magnitude.

This is the Facebook/Zuck/Yann view. I wonder what Anthropic and OpenAI see as the future.
