
I appreciate your reasonableness.

I follow LW to some degree, but even the best of it (like the post you link) feels heavily centered on in-group confirmation.

That post is long and I have not read it all, but it seems to be missing any consideration of AGI upside. It’s like talking about the risk of dying in a car crash with no consideration of the benefits of travel. If I ask you “do you want to get in a metal can that has a small but non-zero chance of killing you”, of course that sounds like a terrible idea.

There is risk in AGI. There is risk in everything. How many people are killed by furniture each year?

I’m not dismissing AGI risk, I’m saying that I have yet to see a considered discussion that includes important context like how many people will live longer/happier because AGI helps reduce famine/disease. Somehow it is always the wealthy, employed, at-risk-of-disruption people who are worried, not the poor or starving or oppressed.

I’m just super not impressed by the AI risk crowd, at least the core one on LW / SSC / etc.




While I agree that the rhetoric around AI Safety would be better if it tried to address some of the benefits (rather than embodying the full doomer vibe), I don't think many of the 'core thinkers' are unaware of the benefits of AGI. I don't fully agree with the paper's conclusions, but I think https://nickbostrom.com/astronomical/waste is one piece that embodies this style of thinking well!


Thanks for the link -- that is a good paper (in the sense of making its point, though I also don't entirely agree), and it hurts the AI risk position that this kind of thinking doesn't get airtime. It may be that those 'core thinkers' are aware, but if so it's counter-productive and of questionable integrity to sweep that side of the argument under the rug.



