>Very much agree with this. If the signatories believed this, they would shut down development.

This just ignores the very real coordination problem. The signatories do not represent the entirety of AI development, nor do they want to unilaterally forgo business opportunities that the next man will exploit. Government is the proper place to coordinate these efforts, and so that is where they appeal.




It’s a point about wording and media frenzy. Personally, if I thought I was doing something that was going to wholly or partly cause the “extinction” of the human race, I would stop doing it. The CEOs signing this statement and running these companies are not despotic psychopaths, and they have the ability to stop what they’re doing. So to me this wording seems like hyperbole, and it will cause us to miss some of the very real, very present and very large risks of AI. Those risks, as you say, can and should be dealt with through government coordination, but attention is pulled away from them if the media only talk about extinction.


>personally if I thought I was doing something that was going to wholly or partly cause the “extinction” of the human race, I would stop doing it

There are so many things wrong with this line of thinking. First, it mischaracterizes the issue. Few people believe AGI guarantees the extinction of humanity. The issue is that there is a significant potential for extinction, and thus we need a coordinated effort to either manage this risk or prevent its creation. Single-handedly abstaining from continuing to build does little to stop the coming calamity. Coordination is a must. Besides, most people will think they stand a greater chance of building it safely than the next guy. The coordination is required to keep other people from being irresponsible. Human hubris knows no limits.

The other mistake is misjudging nerd psychology. You can believe there's a high chance of what you're working on being dangerous and still be unable to stop working on it. As Oppenheimer put it, "when you see something that is technically sweet, you go ahead and do it". It is a grave error to discount the motivation of trying to solve a really sweet technical problem.

Ultimately these kinds of claims are self-serving: they provide rational cover for the predetermined belief that those calling for regulation are trying to stifle competition. Folks don't want to be left out of the fun. The justification is in service to the motivation.


> Few people believe AGI guarantees the extinction of humanity.

“Artificial intelligence could lead to extinction, experts warn” - BBC front page news headline earlier today in reaction to this story.

This is the problem. Reading that BBC article will leave your average joe petrified. “Extinction” is a much catchier headline than the slow creep of automation replacing or changing jobs. The latter is literally already happening around us right now, and it serves some of those signatories if those issues aren’t regulated against [1]. I’m not saying we should forget about the extinction threat. Clearly that’s an important risk to manage, but let’s not ignore these huge near-term disruptions because policymakers are busy reacting to distracting headlines.

Edit: added reference. [1] https://www.reuters.com/technology/openai-may-leave-eu-if-re...


This is what bothers me. The risks they're talking about are the most unrealistic ones you can think of. In the meantime, they're completely ignoring much more likely, devastating (though not extinction-level) risks.

It smells like bullshit being espoused to push an agenda, but I can't tell what the agenda is. My guess: play up the huge unrealistic risks in order to distract from the more realistic ones.

Two things about this tech that I'm personally not worried at all about: "evil agi" and "only the elites will control this".



