
> borderline offensive to question them

I would be happy to politely discuss any proposition regarding AI Risk. I don't think any claim should go unquestioned.

I can also point you to much longer-form discussions. For example, this post, which has 670 comments, discussing various aspects of the argument: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...




I appreciate your reasonableness.

I follow LW to some degree, but even the best of it (like the post you link) feels very in-group and confirmation-centric.

That post is long and I have not read it all, but it seems to be missing any consideration of AGI upside. It’s like talking about the risk of dying in a car crash with no consideration of the benefits of travel. If I ask you “do you want to get in a metal can that has a small but non-zero chance of killing you”, of course that sounds like a terrible idea.

There is risk in AGI. There is risk in everything. How many people are killed by furniture each year?

I’m not dismissing AGI risk; I’m saying that I have yet to see a considered discussion that includes important context, like how many people will live longer/happier because AGI helps reduce famine/disease. Somehow it is always the wealthy, employed, at-risk-of-disruption people who are worried, not the poor or starving or oppressed.

I’m just super not impressed by the AI risk crowd, at least the core one on LW / SSC / etc.


While I agree that the rhetoric around AI Safety would be better if it tried to address some of the benefits (and not embody the full doomer vibe), I don't think many of the 'core thinkers' are unaware of the benefits of AGI. I don't fully agree with this paper's conclusions, but I think https://nickbostrom.com/astronomical/waste is one piece that embodies this style of thinking well!


Thanks for the link -- that is a good paper (in the sense of making its point, though I also don't entirely agree), and it hurts the AI risk position that that kind of thinking doesn't get airtime. It may be that those 'core thinkers' are aware, but if so it's counter-productive and of questionable integrity to sweep that side of the argument under the rug.


That link is about the risks of AGI, which doesn't exist, and there's no reason to believe that it ever will exist.

(If I'm wrong about AGI then I'm open to being convinced, but that's a different conversation as the topic here is non-general AI, is it not?)


I disagree that there's no reason to believe it will ever exist. For one thing, many smart people are trying to build the technology right now, and they believe it to be possible. I see no compelling case that the intelligence scale simply tops out at humans, or that a more intelligent system is ruled out by the laws of physics.

The topic here is human extinction caused by AI. I don't know of any serious argument for why a non-general intelligence (that is, a system less intelligent than a human) would pose an extinction risk to humanity.

Plus, my background understanding of the people who signed this is that they're worried about AGI, not present-day systems, but that's an inference.


Maybe these AI Apocalypse articles published for general consumption would be justified if there were any signs whatsoever that we were on a path towards AGI, but there are none, are there? Even the best we have today are still just machines. They are clearly not really intelligent. At best they simulate intelligence, and poorly at that (they still make ridiculous mistakes). Just because there are no physical limits to intelligence doesn't mean it's possible for beings with finite intelligence to create infinite intelligence. It all just seems extremely premature to me.



