I think he understands a lot about ML. But he doesn't give a shit about how actual brains work. For dumb reasons, ideological and personal, he has convinced himself that machine learning is a plausible model of intelligence.
A common thread among both the doom and utopia folks is a sneering contempt for the intelligence of nonhuman animals. They refuse to accept GPT-4 is very stupid compared to a dog or a pigeon - in their world, it's a ridiculous thing to consider. ("Show me the dog who can write a Python program!")
He's funded by the Centre for the Study of Existential Risk, which is one of the main organizations the EA community funds.
This article itself was published during an EA-supported AI Safety Conference cosponsored by the British and Korean governments, and Sunak has been a first- or second-degree connection to most people in the space since well before he became politically prominent (Stanford GSB is a top B-school for a reason).
It's a social movement with a cult-like vibe among a subset of AI/ML enthusiasts that became popular after an infusion of funding from SBF, Tallinn (of Skype fame), Dustin Moskovitz, and a couple other techies.
I know a lot of people in the scene who think it's dumb as well, but they stay involved for the professional side of it.
GPT-4 as a brain is already quite capable. It needs a body, be it virtual or real. It needs arms and legs. And that's just code, which GPT-4 is already quite good at writing. An integrated system where GPT-4 can run commands and receive feedback on them is quite smart.
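Roughly, something like the sketch below is what that kind of integrated system looks like. To be clear, this is a toy: the llm() function is just a stub standing in for an actual GPT-4 API call, the prompt format is made up, and running model-generated shell commands without a sandbox is obviously a bad idea outside of an experiment.

```python
import subprocess

def llm(prompt: str) -> str:
    """Stub standing in for a call to a language model (e.g. GPT-4).
    A real system would hit an API here; this is just a placeholder."""
    raise NotImplementedError("wire this up to your model of choice")

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Minimal 'body' for the model: it proposes shell commands,
    we run them, and the output is fed back in as context."""
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        command = llm(history + "\nNext shell command to run (or DONE):").strip()
        if command == "DONE":
            break
        # Execute the proposed command and capture its output.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=30
        )
        # Feed stdout/stderr back so the model can react to what happened.
        history += (
            f"\n$ {command}\n"
            f"stdout: {result.stdout}\n"
            f"stderr: {result.stderr}\n"
        )
    return history
```

The loop itself is just plumbing; whatever interesting behaviour shows up comes from what the model does with the stdout/stderr it gets back each step.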