You _do_ understand that everything we've seen from OpenAI, Google has already shown us they have? Not to mention the original research, and being the primary R&D force behind the vast majority of the AI you're seeing. They just haven't put it in the hands of users as directly yet, for reasons one can only speculate on.
I have the feeling that smaller players are about as likely to get past the GPT-n family in the next 2-3 years as I am to turn a Farnsworth Fusor into a useful power source.
Both face major technical challenges that might conceivably be solved by a lone wolf: in the former case, reducing the data/training requirements; in the latter, stopping ions from wastefully hitting the grid.
But in 10 years the costs should be down by about 99%, which turns AI training costs from "major investment by a megacorp or the super-rich" into "a lottery winner might buy one".
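For scale, here's a quick back-of-the-envelope sketch of what that implies (my own arithmetic, not a claim about actual hardware trends): a 99% drop over 10 years works out to roughly a 37% cost reduction per year, i.e. costs halving about every 18 months.

```python
import math

# What annual decline does "99% cheaper in 10 years" imply?
# Illustrative arithmetic only.
years = 10
remaining_fraction = 0.01  # 1% of today's cost after 10 years

annual_factor = remaining_fraction ** (1 / years)  # ~0.63x per year
annual_decline = 1 - annual_factor                 # ~37% cheaper per year

# Halving time: solve annual_factor**t == 0.5 for t
halving_years = math.log(0.5) / math.log(annual_factor)  # ~1.5 years

print(f"cost multiplier per year: {annual_factor:.2f}")
print(f"annual decline: {annual_decline:.0%}")
print(f"cost halves every {halving_years:.1f} years")
```

Whether hardware and algorithmic efficiency together can actually sustain that rate is, of course, the speculative part.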
Isn't that quite a lot of non-personnel cost for a software startup? And how many iterations do you throw away before you get one that generates income?