I don’t mean Dwarkesh himself, though he asks great questions. He’s had some very knowledgeable guests.
The most recent episode with Paul Christiano has a lot of good discussion on all these topics.
I’d suggest evaluating the arguments and ideas more on the merits rather than putting so much emphasis on authority and credentials. I get there can be value there, but no one is really an “expert” in this subject yet and anyone who claims to be probably has an angle.
> I’d suggest evaluating the arguments and ideas more on the merits rather than putting so much emphasis on authority and credentials.
While I agree in general, when it comes to this particular topic where AI presents itself as being human-like, we all already have a surface-level understanding from being human and spending our lives around other humans. There is little that other people with the same surface-level knowledge can tell you that you haven't already thought of yourself.
Furthermore, I'm not sure it is ideas that are lacking. An expert goes deeper than just coming up with ideas, and that deeper work is exactly what people who have other things going on in life are highly unlikely to engage in.
> no one is really an “expert” in this subject yet
We've been building AI systems for the better part of a century now. The first statistical language models were developed before the digital computer existed! That's effectively a human lifetime. If that's not sufficient to develop expertise, it may be impossible.
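To make that concrete: the usual candidate for the first statistical language model is Markov's 1913 analysis of Pushkin's text, done entirely by hand. Here is a toy bigram model of the same flavour, sketched in Python just to illustrate the idea (the corpus and names are my own, not anything standardized):

```python
import random
from collections import defaultdict

# A bigram (first-order Markov) model of text: the kind of statistical
# language model that predates digital computers.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which words follow which other words.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

# Generate text by repeatedly sampling a next word given the current one.
def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # dead end: the last word never appeared mid-corpus
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```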
In what way? The implementation is completely different, if that is what you mean, but the way humans interpret AI is the same as far as I can tell. Hell, every concern that has ever been raised about AI is already a human-to-human issue, just with AI imagined in the place of one of the humans in the conflict/problem.
> but we only just figured out how to build AI that actually works.
Not at all. For example, an AI program first beat a human player in tournament chess back in 1967. We've had AI systems that actually work for a long, long time.
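Those early chess programs were essentially game-tree search plus a hand-tuned evaluation function. A toy sketch of the minimax idea, over a made-up tree rather than real chess positions (the tree and scores are invented for illustration):

```python
# Toy minimax over a hand-built game tree: leaves are scores from the
# maximizing player's point of view, internal nodes are lists of children.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: just return its score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The root player picks the branch whose worst case is best.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # -> 3 (the left branch guarantees at least 3)
```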
Maybe you are actually speaking to what is more commonly referred to as AGI? But there is nothing to suggest we are anywhere close to figuring that out.
Well, to state the obvious, a model trained on much of the internet by a giant cluster of silicon GPUs is fundamentally different from a biological brain trained by a billion years of evolution. I'm not sure why anyone should expect them to be similar? There may be some surface-level similarities, but the behavior of each is clearly going to diverge wildly in many/most situations.
I wouldn't really say an AI beat a human chess player in 1967; I'd say a computer beat a human chess player. In the same way that computers have for a long time been able to beat humans at finding the square roots of large numbers. Is that "intelligence"?
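For comparison, the square-root case really is just a few lines of arithmetic. Here is a sketch of Newton's method, which is one standard way to do it (the tolerance and names are my own choices):

```python
# Newton's method for square roots: repeatedly average the guess with n/guess.
def newton_sqrt(n, tolerance=1e-12):
    guess = n if n > 1 else 1.0
    while abs(guess * guess - n) > tolerance * max(n, 1.0):
        guess = (guess + n / guess) / 2.0
    return guess

print(newton_sqrt(2.0))           # ~1.41421356...
print(newton_sqrt(1234567890.0))  # ~35136.418...
```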
I grant you though that a lot of this comes down to semantics.
> but the behavior of each is clearly going to diverge wildly in many/most situations.
I expect something akin to "Why the fuck did he do that?" is a statement every human has uttered at some point. In other words, human behaviour can be a complete mystery to the outside observer. Your suggestion that an AI model will diverge in a way that a human would not is reasonable, but as outside observers are we able to tell the difference between an AI model going off the rails and a human going off the rails? I suspect not. At least not when AI is at a sufficiently advanced level.
> Is that "intelligence"?
No. But it is what is labelled artificial intelligence – AI for short. Maybe someday we'll be able to create machines that are intelligent, but that's still on the level of complete science fiction. Our best work thus far is just computers running particular algorithms that appear to exhibit some limited qualities similar to qualities we consider to be a product of intelligence in humans. Hence the "artificial" moniker.