Conversely, why should we expect an expert in "human risk" with no understanding of AI to have anything productive to add to the conversation? We should be looking for people who are well-grounded in both areas, if any exist.
What additional information would understanding AI actually bring to the table? Worst case, you just assume the AI can become human-like.
Certainly all the vocal AI experts can come up with is things that humans already do to each other, only noting that AI enables it at larger scale. Clearly there is no benefit in understanding the AI technology with respect to this matter when the only thing you can point to is scale. The concept of scalability doesn't require an understanding of AI.
Worst case, you can expect AI to become alien; becoming human-like is probably one of the better outcomes.
We as humans only have human intelligence as a reference for what intelligence is, and in doing so we commonly cut off other branches of non-human intelligence in our thinking. Human intelligence has increased greatly as our sensor systems have increased their ability to gather data and 'dumb' it down to our innate senses. Now imagine an intelligence that doesn't need the data-type conversion. Imagine a global network of sensors feeding a distributed hivemind. Imagine wireless signals just being another kind of sight or hearing.
No, if one can conceive of such aliens, that would be the best case. The worst case is falling back on our understanding of humans, for the reasons you describe.
However, as you point out, one does not need to have any understanding of AI to conceive of what AI can become, as natural creatures could theoretically evolve into the same thing. Experts will have been thinking about what problems might arise if alien life arrived on Earth long before AI even existed. AI doesn't change anything.
That's like saying that you can regulate nuclear materials without input from nuclear physicists, or regulate high-risk virology without input from virologists.
The goal is (or should be) to determine how we can get the benefits of the new technology (nuclear energy, vaccines, AI productivity boom) while minimizing civilizational risk (nuclear war/terrorism, bioweapons/man-made pandemics, anti-human AI applications).
There's no way this can be achieved if you don't understand the actual capabilities or trajectory of the technology. You will either over-regulate and throw out the baby with the bathwater, stopping innovation completely or ensuring it only happens under governments that don't care about human rights, or you will miss massive areas of risk because you don't know how the new technology works, what it's capable of, or where it's heading.
Experts are not lawmakers. We aren't looking for them to craft regulation, we're looking to hear from them about the various realistic scenarios that could play out. But we're not hearing anything.
...probably because there isn't much to hear. Like, Hinton's big warning is that AI will be used to steal identities. What does that tell us? We already know that identity isn't reliable. We've known that for centuries. Realistically, we've likely known that throughout the entirety of human existence. AI doesn't change anything on that front.
I guess my experience is different. I've heard plenty about realistic scenarios. It's out there if you look for it, or even if you just spend time thinking it through. Identity theft is far from the biggest danger even with current capabilities.
Though to your point, I think part of the issue is that people who study this stuff are often hesitant to give too much detail in public because they don't want to give ideas to potentially nefarious actors before any protections are in place.
Of course. Everyone has an opinion. Some of those opinions will end up being quite realistic, even if just by random chance. You don't have to be an expert to come up with the right ideas sometimes.
Hinton's vision of AI being used to steal identities is quite realistic. But that doesn't make him an expert. His opinion carries no more weight than that of any other random hobo on the street.
> I think part of the issue is that people who study this stuff are often hesitant to give too much detail in public because they don't want to give ideas to potentially nefarious actors
Is there no realistic scenario where the outcome is positive? Surely they could speak to that, at least. What if, say, AI progressed us to post-scarcity? Many apparent experts believe post-scarcity will lead us away from a lot of the nefarious activity you speak of.
Oh, I've heard plenty of discussion of positive scenarios as well, including post-scarcity.
If you just look for a list of all the current AI tools and startups that are being built, you can get a pretty good sense of the potential across almost every economic/industrial sphere. Of course, many of these won't work out, but some will and it can give you an idea of what some of the specific benefits could be in the next 5-10 years.
I'd say post-scarcity is generally a longer-term possibility unless you believe in a super-fast singularity (which I'm personally skeptical about). But many of the high-risk uses are already possible or will become possible soon, so they are more front-of-mind I suppose.
What makes him an expert in the subject matter? A cursory glance suggests that his background is in CS, not anything related to social or humanitarian issues.
It is not completely impossible for someone to have expertise in more than one thing, but it is unusual as there is only so much time in the day and building expertise takes a lot of time.
I don’t mean Dwarkesh himself, though he asks great questions. He’s had some very knowledgeable guests.
The most recent episode with Paul Christiano has a lot of good discussion on all these topics.
I’d suggest evaluating the arguments and ideas more on the merits rather than putting so much emphasis on authority and credentials. I get there can be value there, but no one is really an “expert” in this subject yet and anyone who claims to be probably has an angle.
> I’d suggest evaluating the arguments and ideas more on the merits rather than putting so much emphasis on authority and credentials.
While I agree in general, when it comes to this particular topic, where AI presents itself as being human-like, we all already have a surface-level understanding simply from being human and spending our lives around other humans. There is nothing that other people with the same surface-level knowledge will be able to tell you that you haven't already thought up yourself.
Furthermore, I'm not sure it is ideas that are lacking. An expert goes deeper than coming up with ideas, and that kind of depth is something people with other things going on in life are highly unlikely to engage in.
> no one is really an “expert” in this subject yet
We've been building AI systems for approximately a century now. The first language models were developed before the digital computer existed! That's effectively a human lifetime. If that's not sufficient to develop expertise, it may be impossible.
In what way? The implementation is completely different, if that is what you mean, but the way humans interpret AI is the same as far as I can tell. Hell, every concern that has ever been raised about AI is already a human-to-human issue, only imagining that AI will take the place of one of the humans in the conflict/problem.
> but we only just figured out how to build AI that actually works.
Not at all. For example, an AI first beat a human chess player in tournament play in 1967. We've had AI systems that actually work for a long, long time.
Maybe you are actually speaking to what is more commonly referred to as AGI? But there is nothing to suggest we are anywhere close to figuring that out.
Well, to state the obvious, a model trained on much of the internet by a giant cluster of silicon GPUs is fundamentally different than a biological brain trained by a billion years of evolution. I'm not sure why anyone should expect them to be similar? There may be some surface-level similarities, but the behavior of each is clearly going to diverge wildly in many/most situations.
I wouldn't really say an AI beat a human chess player in 1967--I'd say a computer beat a human chess player. In the same way that computers have for a long time been able to beat humans at finding the square roots of large numbers. Is that "intelligence"?
I grant you though that a lot of this comes down to semantics.
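To put a finer point on the square-root comparison: "finding" a square root is just mechanically repeating one update rule until the error is small. A rough sketch in Python (purely illustrative; the function name and tolerance are my own choices, using Newton's method) might look like this, and there is no understanding anywhere in it:

    # Purely illustrative sketch: Newton's method for square roots.
    # Each pass averages the guess with n/guess; the error shrinks
    # mechanically, with no "insight" involved.
    def sqrt_newton(n: float, rel_tol: float = 1e-12) -> float:
        if n == 0:
            return 0.0
        guess = n if n >= 1 else 1.0      # any positive starting guess works
        while abs(guess * guess - n) > rel_tol * n:
            guess = (guess + n / guess) / 2.0
        return guess

    print(sqrt_newton(2.0))   # ~1.4142135623730951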
> but the behavior of each is clearly going to diverge wildly in many/most situations.
I expect something akin to "Why the fuck did he do that?" is a statement every human has uttered at some point. In other words, human behaviour can be a complete mystery to the outside observer. Your suggestion that an AI model will diverge in a way that a human would not is reasonable, but as outside observers are we able to tell the difference between an AI model going off the rails and a human going off the rails? I suspect not. At least not when AI is at a sufficiently advanced level.
> Is that "intelligence"?
No. But it is what is labelled artificial intelligence – AI for short. Maybe someday we'll be able to create machines that are intelligent, but that's still on the level of complete science fiction. Our best work thus far is just computers running particular algorithms that appear to exhibit some limited qualities similar to qualities we consider to be a product of intelligence in humans. Hence the artificial moniker.