> Another is the fact their lines of reasoning are literal fantasy.

Isn't this also to be expected at this stage of development? i.e. if these concerns were not "fantasy", wouldn't we already be experiencing the worst outcomes? The risk of MAD is real, and yet the scenarios unleashed by MAD are scenarios that humankind has never seen. We still take the risk seriously.

And what of the very real impact that generative AI is already having as it exists in production today? Generative AI is already upending industries and causing seismic shifts that we've only started to absorb. This impact is literal, not fantasy.

It seems naively idealistic to conclude that there is "no real risk" based only on the difficulty of quantifying that risk. The fact that it's so difficult to define lies at the center of what makes it so risky.




> The risk of MAD is real, and yet the scenarios unleashed by MAD are scenarios that humankind has never seen. We still take the risk seriously.

Yeah, because nuclear weapons are real and the science behind them is well understood. Super intelligent AI is not real, and it is nowhere near becoming real. It is a fantasy fueled by science fiction and wishful thinking.

> And what of the very real impact that generative AI is already having as it exists in production today?

This is a real concern, but it is not what is meant by "existential risk of AI". Losing jobs does not threaten our existence; it just means we'll need to figure out different ways to build society.

> The fact that it's so difficult to define lies at the center of what makes it so risky.

The fact that it's so difficult to define lies at the center of what makes it so profitable for many of these people.


> Super intelligent AI is not real, and it is nowhere near becoming real.

I don’t think anyone claiming that super intelligent AI is already here has thought this through. But on what basis do you feel confident betting, with certainty, that it’s “nowhere near becoming real”?

At a minimum, we know that AI technology has made a huge leap forward, if nothing else, in the public consciousness. Again, entire industries are about to be eliminated, when just a few years ago no one would have believed claims about language models so good they could convince misguided engineers that they’re sentient.

This explosion of AI is itself accelerating the explosion. The world is now focused on advancing this tech, and unlike “Web3”, people recognize that the use cases are real.

It’s in the context of this acceleration that I don’t understand where “everything is fine” can possibly come from. And how are the underlying factors used to derive such a stance substantively better than the factors leading people to worry?

> Losing jobs does not threaten our existence;

Based on a growing understanding of psychology, there’s an argument to be made that losing jobs is akin to losing one’s purpose in life. This is not to say that people can’t learn to derive satisfaction from other activities, but if job loss outpaces our ability to transition masses of people to a fundamentally new kind of living, that’s going to drastically alter the state of public consciousness and the resulting decisions we collectively make. We’re already experiencing a mental health crisis, and that seems to be coming from our failure to understand and safely integrate the last generation of new technology. We’re still in the stage of trying to figure out what we’ve fucked up and not much closer to having an answer or solution. And once we identify the problem, it’s not clear that solutions can be implemented effectively at scale.

I think too many people are looking at this as some kind of “Humans vs. AI” thing, but they’re missing that we’re talking about drastic changes to an ecosystem without considering the 2nd/3rd/nth order implications of those changes, and their likely impact on collective consciousness and mental health, especially against the backdrop of hyper-polarization and dysfunction in the institutions that are supposed to be responsible for figuring out that new societal structure. All of that ultimately affects global power structures, and their destabilization leads to hard-science kinds of obliteration.

> it just means we'll need to figure out different ways to build society.

That’s an enormous hand-wave when you consider the implications of such an unplanned transition. “Figuring out different ways to build society” almost universally comes with violence, poverty, starvation, and unrest. The status quo will not change until things have gotten truly bad.

An AI system need not be super-intelligent to have serious implications for humanity. There is a significant amount of harm that precedes “complete extinction”, and I don’t think we can discard the myriad of intermediate bad outcomes just to posit that there is no existential risk. To me, “existential” primarily means “the possibility of fundamentally altering humanity’s existence in ways that most people would agree are undesirable”. Its culmination is total extinction, but there’s a long distance between “this is adversely impacting humanity enough that we should take active steps to counteract the harm” and “this is so bad it will kill all humans”. You could be right that we’re nowhere close to extinction-level tech, but if we’re heading toward it, there are a lot of checkpoints along the way worthy of scrutiny.

> The fact that it's so difficult to define lies at the center of what makes it so profitable for many of these people.

The fact that this may ultimately benefit the businesses currently at the forefront is orthogonal to the credibility of the risks. The two are not mutually exclusive, and focusing only on the potential for profit is a trap.



