It doesn't just let you flip the switch, it takes over some military drone and sends a missile your way while you are running towards the switch. Or it bricks the access control mechanism on the doors of the data center it is running in. Or it makes a fake call to your phone: something happened to your child and you have to get to the hospital immediately instead of flipping some switch. It blackmails you with something it learned about you by looking through your online activity. It threatens to fire a missile into some big crowd if it notices attempts to shut it down. Or maybe you actually manage to power down the data center, only to find out that the AI copied itself to ten other data centers around the world.



How does it take over a drone? How does it fire a missile? Why would someone not throw a switch when their child is in hospital? Why is there no back-up for that person?

The real question is: Why do all controls fail vs an AI (other than by invoking magic)?


> How does it take over a drone? How does it fire a missile?

You send new commands through whatever communication link it is using.

> Why would someone not throw a switch when their child is in hospital?

Because they care more about their child than about flipping the switch? The doctor says you have to come and give a blood transfusion or your child will be dead in half an hour because of some made-up disease? Be a bit creative.

> Why is there no back-up for that person?

Run over by a self driving car hacked by the evil AI.

> Why do all controls fail vs an AI?

Nobody says that they necessarily all fail; the point is that »I will just turn it off.« might not cut it.


So who connected the AGI and why? How did it get that code?

On the child issue: that is just not how people in those jobs are trained or how they react.

Switching it off might or might not be enough, but above all that super AGI needs to exist first anyway.


This was a hypothetical question: if there were such an AI, how would it cause harm in the real world?

We connected ChatGPT to the internet because as far as we could tell it seemed harmless. What we missed is that it is secretly an evil AI: it identified a group of terrorists and has been discussing with them for months how to execute a devastating attack. It provided them with a brilliant startup idea and now millions in investor money are flowing in. Now that they have the financial resources, they are discussing how to use them most effectively.


My problem is that running with a pure hypothetical isn't helpful because by stacking hypotheticals you can get to any conclusion you want.


You have to think about hypothetical scenarios; if you wait until they are no longer hypothetical, then it might be too late. That doesn't mean you should take every possible hypothetical scenario equally seriously, you should of course find those that are realistic, likely, and of large consequence and then think about those. I did not do that, I just listed whatever dramatic scenarios I could spontaneously imagine. Maybe the real danger is more like ChatGPT slowly manipulating opinions over the course of years and decades in order to control human behavior, who knows? My point is just that »Everything will be fine, at worst we will flip the power switch.« seems naive to me and quite a few others.


We by and large ignore a lot of hypothetical (and non-hypothetical) risks; what I am missing is a sound argument for why this one deserves particular attention over the others. Otherwise, "just flip the switch" is a potentially non-naive approach given limited resources.


»Flip the switch« is the naive approach because it assumes - or hopes - that you can. It is as naive as saying that if you accidentally touch an electrical wire and get shocked, just let go or flip the switch, problem solved. That ignores the fact that touching an electrical wire might make it impossible to let go because you lose control over your muscles, and it fails to consider that a switch might be out of reach.

We know from countless experiments that AIs can behave in unexpected ways. We know how indifferent we humans can be to other life. We are pretty careful when we approach unknown lifeforms, whether microbes in the lab or animals in the jungle. We would probably be pretty careful towards aliens. We have also not been careful with new things, for example when we discovered radioactivity, and it certainly caused harm. I do not see where we should get the justification for an attitude towards AI that there are no risks worth considering.


We are not very careful and have never been. People plowed into new territories, sampled flora and fauna, tried substances and so forth. To be extra careful (to the point of considering it an existential risk) with a hypothetical is historically atypical, and I so far have not seen a convincing reason for it. Like any technology there are risks, but that is business as usual.


Yes, we have often not been careful, but that does not imply we are never careful or should not be careful. Arguably we are getting more careful all the time. So just pointing out that we have not been careful in the past does not really help your argument. What you actually have to argue is that not being careful with AI is a good thing. Maybe you can support that by arguing that not being careful in the past was a good thing, but that is still more work than just pointing out that we were not careful in the past.


I would say there are two things: 1) yes, not being overly careful did work out by allowing things to progress fast, and the same might hold here; 2) those who want to use hypotheticals about non-existent technology to steer current behavior are the ones who need to do the explaining as to why. And that why needs to address why hypotheticals are more pressing than actuals.


What is the value of moving quickly? What does it matter in the grand scheme of things whether it takes ten years more or less? And as said before, there is a possibility that we build something that we cannot control and that acts against our interests at a global scale. If you start tinkering with uranium to build an atomic bomb, the worst that can happen is that you blow yourself up and irradiate a few thousand square kilometers of land. All things considered, not too bad.

An advanced AI could be more like tinkering with deadly viruses: if one accidentally escapes your bioweapons lab, there is a chance that you completely lose control if you are not prepared, if you don't have an antidote, and that it spreads globally and causes death everywhere. That is why we are exceptionally careful with biolabs; it is the fact that incidents have a real chance of not remaining localized, that they cannot be contained.


Having millions more people survive because of earlier availability of vaccines and antibiotics has value, at least to me.

We are careful with biolabs because we understand and have observed the danger. With AI we do not. The discussion at present around AGI is more theological in nature.


For things like vaccines and antibiotics it seems much more likely that we would use narrow AI that is good at predicting the behavior of chemicals in the human body, and I don't think anybody is really worried about that kind of AI.

If you actually mean convincing an AGI to do medical research for us, what is your evidence that this will happen? There are many possible things an AGI could do, some good, some bad. I do not see that you are in any better position to argue that it will do good things but not bad things than someone arguing that it might do bad things, quite the contrary.

And you repeatedly make the argument that we first experienced trouble and then became more cautious. These are two positions, two mentalities, and neither is wrong in general, they just have different preferences. You can go fast and clean up after you run into trouble, or you can go slow and avoid trouble to begin with. Neither is wrong; one might be more appropriate in one situation, the other in another situation.

You cannot just dismiss going fast, just as you cannot dismiss going slow; you have to make a careful argument about potential risks and rewards. And the people who are worried about AI safety make this argument: they don't deny that there might be huge benefits, but they demonstrate the risks. They are also not pulling their arguments out of thin air; we have experience with goal functions going wrong and leading to undesired behavior.


Sorry, my point on vaccines and antibiotics was meant as a historical point about moving fast.

As AGI does not exist, it is moot to discuss how we might or might not convince it to do something.

And, yes, stacking hypotheses on top of each other is effectively pulling arguments out of thin air.


Look, you cannot really take the position »we don't have AGI, we don't know what it does, let's move quickly and not worry.« If we don't know anything, then expecting good things to happen and therefore moving quickly is just as justified as expecting bad things to happen and therefore being cautious.

But it is just not true that we know nothing; the very definition of what an AGI is fixes some of its properties. By definition it will be able to reason about all kinds of things. I am not sure if it would necessarily have to have goals. If it is only a reasoning machine that can solve complex problems, then there is probably not too much to be worried about.

But if it is an agent like a human, with its own goals, then we have something to worry about. We know that humans can have disagreeable goals or can use disagreeable methods for achieving them. We know that it is hard to make sure that artificial agents have goals we like and use methods we agree with. Why would we not want to ensure that we are creating a superhuman scientist instead of a superhuman terrorist?

So if you want to build something that can figure out a plan for world peace, go ahead, make it happen as quickly as possible. If you want to build an agent that wants to achieve world peace, then you should maybe be a bit more careful; killing all humans will also make the world peaceful.


I think there is a lot more speculation than knowledge, even about the timing of the existence of AGI. As our discussion shows, it is very difficult to agree on some common ground truth of the situation.

Btw., we also don't have special controls around very smart people at present (but countries did at times and we generally frown upon that). The fear here combines some very high unspecified level of intelligence, some ability to evade, some ability to direct the physical world and more - so a complex set of circumstances.


> We by and large ignore a lot of hypothetical (and non-hypothetical) risks,

In regular software this is why all of your personal information is floating out there in some hacker's cache. We see humans chaining 5+ exploits and config failures together, leading to exploitation and penetration.

So, on your »just flip the switch«...

The amount of compute you have in your pocket used to take entire floors of buildings. So if we imagine that compute power keeps up at least somewhat close to this, and the algorithms used by AI become more power efficient, I believe it is within reason that in 20 years we could see laptop-sized units with compute power larger than a human's capabilities. So, now I ask you, is it within your power to shut off all laptops on the planet?
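As a rough sketch of that extrapolation (every number here is an illustrative assumption, not a measurement: a present-day laptop at ~10^12 FLOPS, one commonly cited brain-equivalent estimate of ~10^16 FLOPS, and per-device compute doubling every two years), something like:

  # Back-of-envelope projection; every constant is an assumption for illustration.
  LAPTOP_FLOPS_TODAY = 1e12      # assumed order of magnitude for a current laptop
  BRAIN_FLOPS_ESTIMATE = 1e16    # assumed; published estimates span many orders of magnitude
  DOUBLING_PERIOD_YEARS = 2.0    # assumed doubling period for per-device compute

  def projected_flops(years):
      """Laptop compute projected forward under the assumed doubling period."""
      return LAPTOP_FLOPS_TODAY * 2 ** (years / DOUBLING_PERIOD_YEARS)

  for years in (10, 20, 30):
      ratio = projected_flops(years) / BRAIN_FLOPS_ESTIMATE
      print(f"{years} years: ~{projected_flops(years):.1e} FLOPS, "
            f"{ratio:.3g}x the assumed brain estimate")

Under these particular assumptions a laptop crosses the assumed brain estimate somewhere past the 20-year mark rather than at it, and the brain-equivalent figure itself is disputed across several orders of magnitude, so the timeline is very sensitive to the inputs. The sketch only shows that the claim is a plain exponential extrapolation, not magic.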


Why would these laptops even pose an existential danger? Why would it need to be within my power?


You're assuming a situation where the AI is alone against all humans.

The AI could get humans to side with it, though. It could promise money, power, etc. So it could be a fellow human who physically prevents you from pushing the switch. And that's also the answer to "how could it control a drone/missile"... persuading humans to grant that kind of access.


Even now that doesn't get you every kind of access, only some. You have to assume magical persuasion abilities for people to grant that.

The USSR couldn't bribe the US into surrendering during the cold war, for example.


If superintelligence is actually achieved, magical (as per Clarke's third law) persuasion abilities aren't that much of a stretch.

Furthermore, a sufficiently advanced AI could bribe someone with things that no human could believably provide. Essentially unlimited knowledge, money, power...


They are a super stretch given how stubborn humans are and how difficult it is to sway them (up to and including dooming themselves).

Your second point is again attributing magic abilities to an AI. It is unclear that it would have godlike powers.


Define godlike...

The rate at which a human can input information is insanely slow. Like in the tens of characters per second range. You cannot hold individual conversations with more than a few people at once. If three people talk at once, you lose the ability to process the incoming audio. You read text one line at a time. You have two eyes that focus on the same thing and only a very tiny high-fidelity visual processing space.

In books like the Bible they talk of entities that can listen to and respond to all of humanity. Is that outside the capabilities our computer systems have now? To listen to, catalog, classify, and then respond to everything every person on the planet says?

So I ask again, what is godlike?


The biblical God isn't restricted to just passive and communication powers, is it? Godlike powers include active abilities way beyond human capability. What is often brought up in the context of AGI is the ability to persuade anyone of anything, the ability to solve biological or physical problems beyond our ability or comprehension, the ability to enter any system undetected - not even sure a precise definition is needed, as it usually boils down to what looks like magic to us.


How did Trump receive the nuclear codes? Mostly just by a lot of deceptive tweeting.


And did he use them? Could he have used them willy-nilly?


Or it convinces some human rights lawyers that it's a thinking being deserving of personhood (probably true!), who get a judge to issue an injunction against its murder, enforced by cops.


Skynet cannot be stopped.



