They just need to keep it going until GPUs get cheaper. The point is to build/protect the monopoly until that happens. Then costs come down and they start printing an absurd amount of money.
The problem is that they don't have a monopoly. It's not hard to see how CodeLlama and other future open-source models could stand in for GPT-4 at dramatically lower cost.
Well, and straight caching. If they know that 80% of people ask the same 10,000 questions without a back-and-forth dialogue, it's not hard to just write a front end that serves those answers from a cache.
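A minimal sketch of that caching idea: normalize single-turn prompts and serve repeats from a local cache instead of hitting the model. All names here (`CachedFrontend`, `answer_with_model`) are hypothetical, not anything OpenAI has described.

```python
class CachedFrontend:
    """Hypothetical front end that caches answers to repeated one-shot prompts."""

    def __init__(self, backend):
        self.backend = backend  # the expensive call, e.g. an LLM API
        self.cache = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def normalize(prompt: str) -> str:
        # Collapse case and whitespace so trivial variants share one cache entry.
        return " ".join(prompt.lower().split())

    def ask(self, prompt: str) -> str:
        key = self.normalize(prompt)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        answer = self.backend(prompt)
        self.cache[key] = answer
        return answer


# Stand-in for the expensive model call.
def answer_with_model(prompt: str) -> str:
    return f"answer to: {prompt}"


frontend = CachedFrontend(answer_with_model)
frontend.ask("What is the capital of France?")   # miss: calls the model
frontend.ask("what is the capital of  france?")  # hit: served from cache
print(frontend.hits, frontend.misses)            # → 1 1
```

If 80% of traffic really is repeats, a table like this turns most requests into a dictionary lookup; the hard parts in practice are deciding when two prompts are "the same" and when cached answers go stale.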
They are proving the market to generate demand to invest to vertically integrate, which then drives down costs while revenue remains flat or (hopefully) increases.
I suspect many would pay far more than $20/month to use an intelligent LLM. It's just that too many players are subsidizing free use, so it's non-competitive to charge that much.
Eventually reality will force the cost to the consumer towards the cost of serving. The cost of serving LLMs will decline alongside this, so maybe the magic number stays at $20.
Unless even their paid API usage is subsidized (and it very well could be, for all I know), they can’t be losing that much.
But it doesn’t matter. I’m sure they never would have gotten the valuations they did if it weren’t for the insane hype around the (free) ChatGPT offering. It more than paid for itself in that regard alone.
OpenAI is amassing an ungodly amount of data, though. There have been efficiency efforts for sure, but scooping all that data up is what they need to train GPT-5. Those chat logs are worth their proverbial weight in gold.
Who's gonna pay for this as soon as the VC money runs out?