But he also has an incentive to exaggerate the AI's abilities.

The whole idea of a double-blind test (and really, of scientific methodology as a whole) is based on one simple thing: even the most experienced and informed professionals can be comfortably wrong.

We'll only know when we see it. Or at least when several independent research groups see it.




> even the most experienced and informed professionals can be comfortably wrong

That's the human hallucination problem. In science it's a very difficult issue to deal with; only in hindsight can you tell which papers from a given period were the good ones. It takes a whole scientific community to come up with the truth, and sometimes we fail.


No. It takes just one person to come up with the truth. It can then take ages to convince the "scientific community".


Well, one person will usually add a tiny bit of detail to the "truth". It's still a collective task.


I don't think so. The truth is advanced by individuals, not by the collective. The collective is usually wrong about things for as long as it possibly can be. Usually the collective first has to die before it accepts the truth.


I thought (and could be wrong) that all of these concerns were based on a very low probability of a very bad outcome.

So: we might be close to a breakthrough, that breakthrough could get out of hand, and then it could kill a billion-plus people.


> I thought (and could be wrong) that all of these concerns were based on a very low probability of a very bad outcome.

Among knowledgeable people who have concerns in the first place, I'd say putting the probability of a very bad outcome from cumulative advances at "very low" is a fringe position. Estimates seem to range from "significant" to "close to unity".

There are some knowledgeable people, like Yann LeCun, who have no concerns whatsoever, but they seem singularly bad at communicating why this would be a rational position to take.


Given how dismissive LeCun is of the capabilities of SotA models, I think he believes the state of the art is very far from human-level and will never be human-like.

Myself, I think I count as a massive optimist, as my P(doom) is only about 15% (basically the same odds as Russian roulette), half of which is humans using AI to do bad things directly.
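
For reference, that comparison roughly checks out (assuming a standard six-chamber revolver with one round loaded):

  P(fire) = 1/6 ≈ 16.7%

so a 15% P(doom) sits just under Russian-roulette odds, and the stated split puts about 7.5% on direct human misuse and 7.5% on everything else.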


Unlikely. We'll know when OpenAI declares itself ruler of the new world, imposes martial law, and takes over.


Why would you ever know? Why would the singularity reveal itself in such an obvious way (until it's too late to stop it)?



