
Do you understand the difference between training and inference?

Yes, it costs a lot to train a model, and those costs go up. But once you've trained it, it's done. At that point the cost you worry about is inference, the actual execution/usage of the model.

Inference cost drops rapidly after a model is released, as new optimizations and more efficient compute come online.




That’s precisely what’s different about this approach. Now the inference itself is expensive because the system spends far more time coming up with potential solutions and searching for the optimal one.
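
Back-of-envelope, using the usual ~2 FLOPs per parameter per generated token approximation (the model size, token counts, and candidate count below are purely illustrative assumptions, not anyone's published figures):

    # Rough sketch: why test-time search multiplies inference cost.
    # All numbers are illustrative assumptions, not published figures.

    def inference_flops(params, tokens):
        # Common approximation: ~2 FLOPs per parameter per generated token.
        return 2 * params * tokens

    PARAMS = 200e9         # hypothetical model size
    ANSWER_TOKENS = 1_000  # tokens in one final answer

    single = inference_flops(PARAMS, ANSWER_TOKENS)

    # A search-style system samples many candidates, each with a much
    # longer reasoning trace, then scores them and keeps the best one.
    CANDIDATES = 64
    TRACE_MULTIPLIER = 10
    search = CANDIDATES * inference_flops(PARAMS, TRACE_MULTIPLIER * ANSWER_TOKENS)

    print(f"one answer:  {single:.2e} FLOPs")
    print(f"with search: {search:.2e} FLOPs ({search / single:.0f}x)")

Same model, same question, ~640x the compute per answer under those assumptions.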


I feel like I’m taking crazy pills.

Inference always starts expensive. It comes down.


And again, no. The cost of inference is a function of the size of the model, and if models keep getting bigger, deeper, badder, the cost of inference will keep going up. And if models stop getting bigger because improved performance can be achieved just by scaling inference without scaling the model, well, that's still more inference. Even if the per-unit cost falls, there will need to be so much more inference, just to keep AI companies in competition with each other, that the money they have to spend will keep increasing. In other words, it's not how much it costs but how much you need to buy.

This is a thing, you should know. It's called the Jevons paradox:

In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced.

https://en.wikipedia.org/wiki/Jevons_paradox
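
You can put toy numbers on it; the elasticity value here is a pure assumption for illustration:

    # Toy Jevons arithmetic: an optimization halves the unit cost of
    # inference, but demand is price-elastic enough that total spend rises.

    unit_cost = 1.00      # $ per 1M tokens, before the optimization
    demand = 100e6        # tokens demanded per day at that price
    elasticity = -1.8     # assumed price elasticity of demand (< -1)

    new_unit_cost = 0.5 * unit_cost            # cost per use halved
    price_ratio = new_unit_cost / unit_cost
    new_demand = demand * price_ratio ** elasticity

    print(f"old daily spend: ${unit_cost * demand / 1e6:,.2f}")
    print(f"new daily spend: ${new_unit_cost * new_demand / 1e6:,.2f}")

Halve the price and, with that elasticity, total spend still rises by roughly 74%.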

Better check those pills then.

Oh but, you know, merry chrimbo to you too.


>> Do you understand the difference between training and inference?

Oh yes indeed-ee-o, and I'm referring to training, not inference, because the big problem is the cost of training. That cost has increased steeply with every new generation of models, because it has to in order to improve performance. The process has already reached the point where training ever larger models is prohibitively expensive even for companies with the resources of OpenAI. For example, the following is from an article posted on HN a couple of days ago that is basically all about the overwhelming cost of training GPT-5:

In mid-2023, OpenAI started a training run that doubled as a test for a proposed new design for Orion. But the process was sluggish, signaling that a larger training run would likely take an incredibly long time, which would in turn make it outrageously expensive. And the results of the project, dubbed Arrakis, indicated that creating GPT-5 wouldn’t go as smoothly as hoped.

(...)

Altman has said training GPT-4 cost more than $100 million. Future AI models are expected to push past $1 billion. A failed training run is like a space rocket exploding in the sky shortly after launch.

(...)

By May, OpenAI’s researchers decided they were ready to attempt another large-scale training run for Orion, which they expected to last through November.

Once the training began, researchers discovered a problem in the data: It wasn’t as diversified as they had thought, potentially limiting how much Orion would learn.

The problem hadn’t been visible in smaller-scale efforts and only became apparent after the large training run had already started. OpenAI had spent too much time and money to start over.

From:

https://archive.ph/L7fOF

HN discussion:

https://news.ycombinator.com/item?id=42485938

"Once you trained it it's done" - no. First, because you need to train new models continuously so that they pick up new information (e.g. the name of the President of the US). Second because companies are trying to compete with each other and to do that they have to train bigger models all the time.

Bigger models mean more parameters and more data (assuming there is enough data, which is a whole other can of worms); more parameters and data mean more compute, and more compute means more millions, or even billions. Nothing in any of this suggests that costs are coming down in any way, shape or form, and yes, that's absolutely about training, not inference. You can't do inference before you do training, and you need to train continuously, so you can't ignore the cost of training and consider only the cost of inference. Inference is not the problem.
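
For a sense of scale, here's the standard ~6 * params * tokens FLOPs rule with made-up inputs; the model sizes, token counts, and $/FLOP are my assumptions, loosely tuned so the first row lands near Altman's ">$100 million" figure:

    # Back-of-envelope training cost. The 6 * params * tokens FLOPs
    # approximation is standard; every number below is an assumption.

    DOLLARS_PER_FLOP = 5e-18   # assumed all-in cost of GPU compute

    def training_cost(params, tokens):
        flops = 6 * params * tokens    # ~6 FLOPs per parameter per token
        return flops, flops * DOLLARS_PER_FLOP

    for label, params, tokens in [
        ("GPT-4-scale (assumed)", 280e9, 13e12),
        ("next gen (assumed)",    1e12,  40e12),
    ]:
        flops, cost = training_cost(params, tokens)
        print(f"{label}: {flops:.1e} FLOPs ≈ ${cost / 1e6:,.0f}M")

Scale parameters and data up a few times each and the same arithmetic blows past $1 billion, which is exactly the trajectory the article describes.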



