
>> “AI models that can pass the Turing test [where someone cannot tell in conversation that they are not speaking to another human] are the same warning for the kind of AI that you can lose control over. That’s why you get people like Geoffrey Hinton and Yoshua Bengio – and even a lot of tech CEOs, at least in private – freaking out now.”

I don't understand that. Tegmark, let alone Hinton and Bengio, should be enough of an expert on AI to recognise that "the Turing test" (which Turing didn't even intend as a test) is no test of artificial intelligence, let alone the equivalent of finding out that Fermi had built a self-sustaining nuclear chain reaction.

There are much better "A-IQ tests", like Winograd Schemas or Bongard Problems, or more recent ones like the Abstraction and Reasoning Corpus. While the latter two still stand for now, Winograd Schemas have been defeated, and I don't see Tegmark or anyone else on his side of the existential risk debate "freaking out" about that. And with good reason: those tests were beaten by sheer force of data and compute, not because some AI system entered anything comparable to a self-sustaining reaction.
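For anyone unfamiliar with the format, here is a minimal sketch of what a Winograd schema item looks like, using Winograd's classic councilmen/demonstrators example. The data structure and scoring helper are illustrative only, invented for this comment, and not any official benchmark harness:

  # Illustrative sketch of a Winograd schema item (hypothetical structure,
  # not an official benchmark harness). A schema is a sentence pair that
  # differs in one word; flipping that word flips which noun phrase the
  # ambiguous pronoun refers to, so surface statistics alone shouldn't help.

  from dataclasses import dataclass

  @dataclass
  class WinogradSchema:
      sentence: str          # sentence containing the ambiguous pronoun
      pronoun: str           # the pronoun to resolve
      candidates: tuple      # the two possible referents
      answer: str            # the correct referent

  SCHEMAS = [
      WinogradSchema(
          sentence="The city councilmen refused the demonstrators a permit "
                   "because they feared violence.",
          pronoun="they",
          candidates=("the city councilmen", "the demonstrators"),
          answer="the city councilmen",
      ),
      WinogradSchema(
          # Swapping "feared" for "advocated" flips the correct referent.
          sentence="The city councilmen refused the demonstrators a permit "
                   "because they advocated violence.",
          pronoun="they",
          candidates=("the city councilmen", "the demonstrators"),
          answer="the demonstrators",
      ),
  ]

  def score(resolver, schemas):
      """Fraction of schemas whose pronoun the resolver maps to the right referent."""
      correct = sum(resolver(s.sentence, s.pronoun, s.candidates) == s.answer
                    for s in schemas)
      return correct / len(schemas)

The point of the paired construction is that a system has to use commonsense knowledge (who is likely to fear vs. advocate violence), which is exactly the kind of test that was eventually cracked by scale rather than by any qualitative leap.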

Anyway, the AI world freaks out every few years whenever a deep learning researcher beats some benchmark. Then everyone forgets about it five years later when the Next Big Thing in AI comes along. It's all par for the course.
