If the barrier to entry is low enough for several players to enter the field this fast, I wonder what could raise it again? The models getting bigger, I suppose.
A few months (weeks?) ago I would've said that this was already the case for language models. It's absolutely mind-blowing to me what is happening here - same with Stable Diffusion. Once DALL-E was out, I was sure there was no way anything like it could be run on consumer hardware. I'm very happy to be proven wrong.
In a way, things are still moving in that direction, though. Eight or so years ago it was still more or less possible to train such models yourself to a useful degree, and I think we've now moved well past the point where that's feasible.
Fortunately, there are still ways to improve training efficiency and reduce model size through more guided attentional learning.
This would make it feasible to train models at least as good as the current batch (though the big players will probably use those same optimizations to create much better large models).
Our saving grace seems to be the gaming industry's insatiable push for better graphics at higher resolutions. Their vision of real-time path-traced graphics can't happen without considerable ML horsepower on consumer-level graphics cards.
The Vice Chairman of Microsoft has already mentioned that he is open to regulation. The EU is also working on plans to regulate AI. So in the future you'll probably only be allowed to use AI if it's approved by something like the FD(A)A.
Maybe I have a skewed view of this, but I fail to see how regulation wouldn't do more harm than good here. The truly dangerous actors wouldn't care, or would be based in some other country. Having a large diversity of actors seems like the best way to ensure resilience against whatever threats might arise from this.