
> Sure, but for s-risk-caused-by-human-intent scenario to become an issue the x-risk problem has to be solved or negligible.

Sure. I can chew gum and walk at the same time. s-risk comes after x-risk has been dealt with. Doesn't mean that we can't think of both.

> seems more like a political problem than one of feasibility

Don't know what to tell you, but the "political problem" is not 1% of the solution. The political problem is where things really get stuck. Even when the tech is easy, the political problem is often intractable. There is no reason to think this political problem will be 1%.

> Not sure what level of precision you were expecting?

I provided a variant of the sentence which I can agree with. I will copy it here in case you missed it: "AI is not safe if it causes extinction of humanity." (noticed and fixed a typo in it)

> They lack the unity of will and high-bandwidth communication between their parts that I'd expect from a real superintelligence.

Sure. If you know the meme[1]: when the kids want AGI, corporations are the "food we have at home". They are not the real deal and they kind of suck. They are literally made of humans, and yet we are really bad at aligning them with the good of humanity. They are quite okay at making money for their owners though!

> A superintelligence wouldn't be dumb.

Yes.

> That means "kinda works" is not the same as "selected for being compatible with human existence".

Someone made the AGI during its infancy. That someone spent a lot of resources on it, and they have some idea of what they want to use it for. That initial "prompting" or "training" will leave an imprint on the goals and values of the AGI. If it escapes and disassembles all of us for our constituent carbon, then we run into the x-risk and we don't have to worry about s-risk anymore. What I'm saying is that even if we avoid the x-risk, we are not safe yet. There is still a gaping chasm of s-risk we can fall into.

If the original makers created it to make them rich (a very common wish), we can fall into some terrible future where everyone not recognised by the AGI as a shareholder is exploited by it to the fullest extent.

If the original makers created it to win some war (another very common wish), the AGI will protect whoever it recognises as an ally and subjugate everyone else to the fullest extent.

These are not movie scenarios, but realistic goals organisations wishing to create an AGI might have.

Have you heard the saying "what doesn't kill you makes you stronger"? There is a less often repeated variant: "what doesn't kill you sometimes makes you hurt so bad you wish it did".

1: https://knowyourmeme.com/memes/we-have-food-at-home




Tbh, if you replaced the word "AI" with the word "technology", this sounds more like overwhelming paranoia about power.

As technology progresses, there's also not much difference if the "creators" you listed pursued their goals with "dumb" technologies. People and entities with differing interests will cross your interests at some point, and somebody will get hurt. The answer to such situations is the same as in the past: you establish deterrence, and you adopt those technologies or AGIs yourself to serve your interests against theirs. And so balance is established.


> this sounds more like an overwhelming paranoia of power

You call it overwhelming paranoia; I call it well-supported skepticism about power, based on the observed history of humankind so far. The promise, and danger, of AGIs is that they are intellectual force multipliers of great power. So if not handled properly, they will also magnify inequalities in power.

But in general your observation that I'm not saying anything new about humans is true! This is just the age-old story applied to a new technological development. That is why I find it strange how much pushback it has received.



