> I think the core is not to agree with the existential risk scenario in the first place.

I mean, that's motivated reasoning right there, right?

"Agreeing that existential risk is there would lead to complex intractable problems we don't want to tackle, so let's agree there's no existential risk". This isn't a novel idea, it's why we started climate change mitigations 20 years too late.




No, I meant it as in: there is no reason to agree with that scenario. Stacking a lot of hypotheses on top of each other so as to get to something dangerous isn't necessarily convincing.


There's also no reason to agree that AI will be aligned and everything will be fine. The question is what should our default stance be until proven otherwise? I submit it is not to continue building the potentially world-ending technology.

When you see a gun on the table, what do you do? You assume it's loaded until proven otherwise. For some reason, those who imagine AI will usher in some tech-utopia not only assume the gun is empty, but that pulling the trigger will bring forth endless prosperity. It's rather insane, actually.


Whenever I see people jump to alignment they invariably have jumped over the _much_ more questionable assumption that AGIs will be godlike. This doesn’t even match what we observe in reality - drop a human into the middle of a jungle, and it doesn’t simply become a god just because it has an intelligence that’s orders of magnitude greater than the animals around it. In fact, most people wouldn’t even survive.

Further, our success as a species doesn’t come from lone geniuses, but from large organizations that are able to harness the capabilities of thousands/millions of individual intelligences. Assuming an AGI that’s better than an individual human is going to automatically be better than millions of humans - and so much better that it’s godlike - is disconnected from what we see in reality.

It actually seems to be a reflection of the LessWrong crowd, who (in my experience) greatly overemphasize the role of lone geniuses and end up struggling when it comes to the social aspects of our society.


I would say this is a failure of your imagination when it comes to possible form factors of AI.

But this is the question I will ask... Why is the human brain the pinnacle of all possible intelligence, in your opinion? Why would evolution, via a random walk, have produced the most efficient, most 'intelligent' format possible, one that can never be exceeded by anything else?


It's interesting seeing the vast range of claims people confidently use to discount the dangers of AI.

Individual humans are limited by biology, an AGI will not be similarly limited. Due to horizontal scaling, an AGI will perhaps be more like a million individuals all perfectly aligned towards the same goal. There's also the case that an AGI can leverage the complete sum of human knowledge, and can self-direct towards a single goal for an arbitrary amount of time. These are superpowers from the perspective of an individual human.

Sure, mega corporations also have superpowers from the perspective of an individual human. But then again, megacorps are in danger of making the planet inhospitable to humans. The limiting factor is that no human-run entity will intentionally make the planet inhospitable to itself. This limits the range of damage that megacorps will inflict on the world. An AGI is not so constrained. So even discounting actual godlike powers, AGI is clearly an x-risk.


I would say you are also overconfident in your own statements.

> Individual humans are limited by biology, an AGI will not be similarly limited.

On the other hand, individual humans are not limited by silicon and global supply chains, nor bottlenecked by robotics. The perceived superiority of computer hardware over organic brains has never been conclusively demonstrated: it is plausible that in the areas that brains have actually been optimized for, our technology hits a wall before it reaches parity. It is also plausible that solving robotics is a significantly harder problem than intelligence, leaving AI at a disadvantage for a while.

> Due to horizontal scaling, an AGI will perhaps be more like a million individuals all perfectly aligned towards the same goal.

How would they force perfect alignment, though? In order to be effective, each of these individuals will need to work on different problems and focus on different information, which means they will start diverging. Basically, in order for an AI to force global coordination of its objective among millions of clones, it first has to solve the alignment problem. It's a difficult problem. You cannot simply assume it will have less trouble with it than we do.

> There's also the case that an AGI can leverage the complete sum of human knowledge

But it cannot leverage the information that billions of years of evolution has encoded in our genome. It is an open question whether the sum of human knowledge is of any use without that implicit basis.

> and can self-direct towards a single goal for an arbitrary amount of time

Consistent goal-directed behavior is part of the alignment problem: it requires proving the stability of your goal system under all possible sequences of inputs, and an AGI will not necessarily be capable of it. There is also nothing intrinsic about the notion of AGI that suggests it would be better than humans at this kind of thing.


Yes, every point in favor of the possibility of AGI comes with an asterisk. That's not all that interesting. We need to be competent at reasoning under uncertainty, something few people seem to be capable of. When the utility of a runaway AGI is infinitely negative, while the possibility of that outcome is substantially non-zero, rationality demands we act to prevent that outcome.
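
To make that concrete, here's a minimal expected-value sketch (the probability and utility numbers below are entirely made up for illustration):

    # Toy expected-value calculation; every number here is invented.
    p_runaway = 0.01     # assumed probability of a runaway-AGI outcome
    u_runaway = -1e12    # finite stand-in for an "unboundedly bad" loss
    u_benign  = 1e6      # assumed upside if development goes well
    ev = p_runaway * u_runaway + (1 - p_runaway) * u_benign
    print(ev)            # strongly negative despite the small probability

The point is only that a small probability multiplied by an enormous loss can still dominate the calculation.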

>How would they force perfect alignment, though? In order to be effective, each of these individuals will need to work on different problems and focus on different information, which means they will start diverging

I disagree that independence is required for effectiveness. Independence is useful, but it also comes with an inordinate coordination cost. Lack of independence implies low coordination costs, and the features of an artificial intelligence imply the ability to maximally utilize the abilities of the sub-components. Consider the 'thousand brains' hypothesis, that human intelligence is essentially the coordination of thousands of mini-brains. It stands to reason that more powerful mini-brains, coordinated more efficiently, imply a much more capable unified intelligence. Of course, all that remains to be seen.


> Lack of independence implies low coordination costs

Perhaps, but it's not obvious. Lack of independence implies more back-and-forth communication with the central coordinator, whereas independent agents could do more work before communication is required. It's a tradeoff.
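
As a rough sketch of that tradeoff (purely illustrative; the sync cost and drift parameters below are invented):

    # Toy model of the autonomy-vs-coordination tradeoff; parameters are made up.
    def useful_fraction(chunk, sync_cost, drift_rate):
        # Fraction of time spent on useful, still-aligned work: long autonomous
        # chunks amortize the sync cost but drift further from the shared goal.
        alignment = max(0.0, 1.0 - drift_rate * chunk)
        return (chunk / (chunk + sync_cost)) * alignment

    for chunk in (1, 10, 100):
        print(chunk, useful_fraction(chunk, sync_cost=5, drift_rate=0.005))

With those made-up numbers, very short work chunks lose most of their time to synchronization and very long ones drift too far from the shared goal, which is all "it's a tradeoff" is meant to convey.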

> the features of an artificial intelligence imply the ability to maximally utilize the abilities of the sub-components

Does it? Can you elaborate?

> It stands to reason that more powerful mini-brains, coordinated more efficiently, imply a much more capable unified intelligence.

It also implies an easier alignment problem. If an intelligence can coordinate "mini-brains" fully reliably (a big if, by the way), presumably I can do something similar with a Python script or narrow AI. Decoupling capability from independence is ideal with respect to alignment, so I'm a bit less worried, if this is how it's going to work.


>Does it? Can you elaborate?

I don't intend to say anything controversial here. The consideration is the tradeoff between independence and tight constraints on the subcomponents. Independent entities have their own interests, as well as added computational and energetic costs involved in managing a whole entity. These are costs that can't be directed towards the overarching goal. On the other hand, tightly constrained components do not have this extra overhead, and so their capacity can be fully directed towards the goal as determined by the control system. In terms of utilization of compute and energy towards the principal goal, a unified AI will be more efficient.

>If an intelligence can coordinate "mini-brains" fully reliably (a big if, by the way), presumably I can do something similar with a Python script or narrow AI.

This is plausible, and I'm totally in favor of exploiting narrow AI to maximal effect. If the only AI we ever had to worry about was narrow AI, I wouldn't have any issue aside from the mundane issues we get with the potential misuse of any new technology. But we know people (e.g. OpenAI) are explicitly aiming towards AGI, so we need to be planning for this eventuality.


Your existentially risky AI is imaginary; it might not exist. Who would check an imaginary gun on a table?


Let's go back in time instead... it's 1400 and you're a Native American. I am a fortune teller and I say people in boats bearing shiny metal sticks will eradicate us soon. To the natives that gun would also have been imaginary if you had brought up the situation to them. I would say it would be understandable if they thought the worst possible outcome they could ever face was an attack by another tribe of roughly the same capabilities.

We don't have any historical record of whether those peoples had discussions about possible scenarios like this. Were there imaginary guns in their future? What we do have records of is another group of people showing up with a massive technological and biological disruption that nearly led to their complete annihilation.


What we also have is viruses and bacteria killing people despite having zero intelligence. We also have smart people being killed by dumb people. And people with sticks killing people with guns. My point is, these stories don't mean anything in relation to AI.

Btw, the conquistadors relied heavily on exploiting local politics and locals for conquest; it wasn't just some "magic tech" thing, but old-fashioned coalition building with enemies of enemies.


Yes, AGI doesn't exist. That doesn't mean we should be blind to the possibility. Part of the danger is someone developing AGI by accident without sufficient guardrails in place. This isn't far-fetched: much of technological progress happened before we had the theory in place to understand it. It is far more dangerous to act too late than to act too soon.


There is an infinity of hypothetical dangers. Then at the very least there should be a clear and public discussion as to why the AI danger is more relevant than others that we do not do much about (some of which are not even hypothetical). That is not happening.


I'm in favor of public discussions. It's tragic that there is a segment of the relevant powers doing their best to shut down any of these kinds of discussions. Thankfully, they seem to be failing.

But to answer the question, the danger of AI is uniquely relevant because it is the only danger that may end up being totally out of our control. Nukes, pandemics, climate change, etc. all represent x-risk to various degrees. But we will always be in control of whether they come to fruition. We can pretty much always short-circuit any of these processes that is leading to our extinction.

A fully formed AGI, i.e. an autonomous superintelligent agent, represents the potential for a total loss of control. With AGI, there is a point past which we cannot go back.


I don't agree with the uniqueness of AI risk. A large asteroid impacting earth is a non-hypothetical existential risk presently out of our control. We do not currently plan on spending trillions to have a comprehensive early detection system and equipment to control that risk.

The differentiating thing here is that blocking hypothetical AI risk is cheap, while mitigating real risks is expensive.


We have a decent grasp of the odds of an asteroid extinction event in a way that we don't for AI.


And that makes taking that extinction risk every day better?


How do we not take it every day?

We can work on it in the long term, and things like developing safe AI may have more impact on mitigating asteroid risk than scaling up existing nuclear, rocket, and observation tech to tackle it.



