Also, even if that were the case, is it worth diverging from the company's founding mission to do it? If they created the company as a non-profit (since the consequences of a profit-maximizing AI organization are exactly the risk they wanted to avoid), then if you have to relax that constraint in order to build AGI, is that really a trade you should make? (Vs. just being smarter with algorithms and more resourceful with resources. Or tackling something other than LLMs.)
>The idea that massive compute would lead to massive improvement was non-obvious.
Honestly, no, it was obvious, but only if you listened to those pie-in-the-sky singularity people. It was quite common for them to say: add lots of nodes and transistors and a bunch of layers, stir in some math, and intelligence will pop out.
The groups talking about minimal data and processing haven't had any breakthroughs in, like, forever.
Google and all the big players in AI have known for a very long time, way before OpenAI even existed, that they need tons of data and hence the compute power to process it. Anyone getting into that game would definitely have known.
Well, John Carmack is trying to make inroads toward AGI without going the huge-compute route, so I don't think it's inherently obvious that it's the only game in town.
I think he'll be able to do some good stuff on the software side (the industry is full of AI cowboys who can't code), but on the fundamental side it's hard to see him doing much.
In terms of research background, you're right. But he's someone with a history of original thought, and, as he says, it's not clear that machine learning has reached the stage where useful contributions from newcomers taking a different direction are vanishingly unlikely.
I'm sure OpenAI wouldn't have offered him a job if they thought he couldn't contribute anything of value.
Next Sam will tell us that farmers need a lot of land to grow crops.
I'm calling BS on this. It's an excuse, not an explanation.