Biggest populariser of the idea of existential risks, founder of the field of Friendly AI research, founder of MIRI, an organisation dedicated to its research, author of a number of published articles on same. Better than most ever do, but if that's all, it's not enough given his ambitions.
Founder of a bunch of organizations that have done what exactly?
I like his writings, both fiction and not. At one point I was, I guess, kind of a fan, and I wanted to look up what progress he'd made toward his self-assigned goal of Friendly AI, and I couldn't find anything besides a few cute papers.
I was unimpressed. No doubt Eliezer is smart, but contrary to what he seems to think, there are hundreds of thousands of people in the world just as smart, though maybe in different ways. In the scheme of things, he's not that unique. I think Eliezer's ego would be appropriate for someone who had made some progress toward those goals. Presently it's a little cringy... but I still hope he surprises us.
> Biggest populariser of the idea of existential risks,
Hardly. Even EY points to science fiction as what inspired him in a lot of ways. Probably the biggest mainstream popularizer of the idea of existential risk these days is the History/Discovery Channel with the nonsense it puts out. Actually, you could probably just go with the movie/tv industry in general.
On the more scientific side, worries about asteroid impacts, supernova radiation, grey goo, etc. have been around longer than EY has been alive, and those ideas were "popular" and in the mainstream consciousness in a way that EY and his ideas are not and probably never will be. EY and MIRI are unknowns outside of a very narrow field.
> founder of the Field of Friendly AI research,
I am not really sure how much to credit him with this, but I suppose it is true that most AI research pre-EY consisted of trying to develop AI, with discussions of "friendliness" being more informal.
> founder of MIRI, an organisation dedicated to its research,
An unknown.
> author of a number of published articles on same.
Articles with virtually nonexistent circulation outside MIRI and LessWrong. How many citations of EY's published articles exist outside of those communities? Being self-published is not exactly extraordinary.
Again, I don't have anything against EY. He's simply not that significant a figure. Maybe he will be in the future - he certainly thinks MIRI is the only organization worth donating money to, because it is the only way to save mankind - but he isn't now. I would not be surprised if most AI people regarded him mostly as a crank. (I don't think that's so, but I think EY's circumstances make him somewhat antithetical to the mainstream scientific community.) To be sure, I haven't founded anything even as successful as LessWrong, and I certainly haven't convinced anyone to pay me to think and formalize my ideas. By most measures EY is more successful than I am.
Sidenote: Don't google MIRI at work. The Machine Intelligence Research Institute is not the first result.
But I too wonder what on Earth he's actually accomplished other than just words about existential risks.