I think you are wrong. The risks are real and, while I am sure OpenAI and others will position themselves to take advantage of regulations that emerge, I believe that the CEOs are doing this at least in part because they believe this.
If this was all about regulatory capture and marketing, why would Hinton, Bengio and all the other academics have signed the letter as well? Their only motivation is concern about the risks.
Worry about AI x-risk is slowly coming into the Overton window, but until very recently you could get ridiculed for saying publicly that you took it seriously. Academics knew this and still came forward - all the people who think it's nonsense should at least try to consider that they are earnest and could be right.
The risks are real, but I don't think regulations will mitigate them. It's almost impossible to regulate something you can develop in a basement anywhere in the world.
The real risks are being used to try to build a regulatory moat for a young industry that famously has no moat.
You can't build GPT-3 or GPT-4 in a basement, and you won't be able to without several landmark advances in AI or hardware architectures. The list of facilities able to train a GPT-4 in the next 5 years can fit on a postcard. The list of facilities producing GPUs and AI hardware is even shorter. When you have bottlenecks, you can put up security checkpoints.
I'm very confused that this point is being ignored so heavily on HN of all places. If ASML and TSMC were struck by a meteor tomorrow, or indeed controlled/sanctioned, it would take either the US or China trillions of dollars and many years to rebuild that capacity. It's not something that can be done in secret either.
State of the art AI models are definitely not something you can develop in a basement. You need a huge number of GPUs running continuously for months, huge amounts of electrical power, and expensive-to-create proprietary datasets. Not to mention a large team of highly in-demand experts with very expensive salaries.
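To put rough numbers on "huge amounts of electrical power" (every figure below is an illustrative assumption about a hypothetical frontier cluster, not a published spec):

```python
# Rough sense of scale; all figures are illustrative assumptions
# about a hypothetical frontier training cluster.

GPU_COUNT = 10_000        # assumed cluster size
WATTS_PER_GPU = 700       # H100-class TDP
PUE = 1.4                 # assumed datacenter overhead (cooling, networking)

megawatts = GPU_COUNT * WATTS_PER_GPU * PUE / 1e6
print(f"~{megawatts:.1f} MW of continuous draw")  # ~9.8 MW, not basement-scale
```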
Many ways to regulate that. For instance, require tracking of GPUs and that they must connect to centralized servers for certain workloads. Or just go ahead and nationalize and shut down NVDA.
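As a purely illustrative sketch of how a compute-threshold rule could work (the threshold and function names here are hypothetical, not any actual regulation; the 6·N·D FLOP estimate is the standard back-of-envelope for dense transformer training):

```python
# Hypothetical reporting rule: flag training runs whose estimated
# compute crosses a threshold. Threshold value is made up for illustration.

REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical "large training run" cutoff

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate training compute as ~6 * parameters * tokens."""
    return 6 * params * tokens

def requires_reporting(params: float, tokens: float) -> bool:
    return estimated_training_flop(params, tokens) >= REPORTING_THRESHOLD_FLOP

print(requires_reporting(params=70e9, tokens=2e12))   # False: roughly LLaMA-scale
print(requires_reporting(params=1e12, tokens=15e12))  # True: frontier-scale run
```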
(And no, fine-tuning LLaMA-based models is not state of the art, and is not where the real progress is going to come from)
And even if all the regulation does is slow down progress, every extra year we get before recursively self improving AGI increases the chances of some critical advance in alignment and improves our chances a little bit.
> State of the art AI models are definitely not something you can develop in a basement. You need a huge number of GPUs running continuously for months
This is changing very rapidly. You don’t need that anymore
Roll to disbelief. That tweet is precisely about what I mentioned in my previous post that doesn't count: fine-tuning LLaMA-derived models. You are not going to contribute to the cutting edge of ML research doing something like that.
For training LLaMA itself, I believe Meta said it cost them $5 million. That is actually not that much, but I believe that is just the cost of running the cluster for the duration of the training run, i.e., it doesn't include the cost of the cluster itself, salaries, data, etc.
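As a rough sanity check on that figure (the GPU-hour count and cluster size are the order of magnitude reported in the LLaMA paper; the hourly rate is my assumption):

```python
# Sanity check on the ~$5M figure. GPU-hours and cluster size are
# ballpark figures from the LLaMA paper; the rate is an assumption.

GPU_HOURS = 1_000_000     # ~1M A100-hours reported for the 65B model
RATE = 4.0                # assumed blended $/A100-hour
CLUSTER_SIZE = 2048       # A100s reported for the training runs

print(f"compute: ${GPU_HOURS * RATE:,.0f}")                      # compute: $4,000,000
print(f"wall clock: ~{GPU_HOURS / CLUSTER_SIZE / 24:.0f} days")  # wall clock: ~20 days
```

Which lands in the same ballpark as the quoted $5 million, and only for the compute itself.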
Almost by definition, the research frontier will always require big clusters. Even if in a few years you can train a GPT-4 analogue in your basement, by that time OpenAI will be using their latest cluster to train a 100-trillion-parameter model.
Academics get paid (and compete hardcore) for creating status and prominence for themselves and their affiliations. Suddenly 'signatory on XYZ open letter' is an attention source and status symbol. Not saying this is absolutely the case, but academics putting their name on something surrounded by hype isn't the ethical check you make it out to be.
This is a letter anyone can sign. As someone pointed out, Grimes is one of the signatories. You can sign it yourself.
Hinton, Bengio, Norvig and Russell are most definitely not getting prestige from signing it. The letter itself is getting prestige from them having signed it.
Nah, they're getting visibility from the topic of 'AI risk'. I don't know who those people are, but this AI risk hype is everywhere I look, including in congressional hearings.