Hacker News




Eliezer will say anything, for conscious or unconscious reasons, to further spread his opinions as fact. There is also a large profit incentive for him to do so, since his foundation and personal income depend on donations from people who are concerned about AI safety. I've seen him imply some pretty ridiculous things that amount to hyperbole.

He is ultimately doing nothing on the engineering and development side of AI, and his predictions about this technology are based on armchair philosophy exercises, not reality.

As an aside, I love this gem from the transcript: "We're all crypto investors here. We understand the efficient market hypothesis for sure"

I'm fairly confident most crypto investors have never even heard that phrase.


Eliezer started his career focused on the benefits of AI. It was not until the non-profit he founded had been running for five years that he started talking about the dangers of AI.


Yudkowsky is one of those people who is all IQ and no judgement. He rationalizes on top of what is basically a fear of the dark, and unfortunately does it well enough to persuade himself and a great number of other people.

A good corrective exercise is to go back and look at his early writing, and evaluate how well his judgement holds up in hindsight. My favorite is the idea that XML-based programming languages were the future, but really, pick your poison.



