
My problem is that running with a pure hypothetical isn't helpful because by stacking hypotheticals you can get to any conclusion you want.



You have to think about hypothetical scenarios; if you wait until they are no longer hypothetical, it might be too late. That doesn't mean you should take every possible hypothetical scenario equally seriously. You should of course find those that are realistic, likely, and of large consequence, and then think about those. I did not do that; I just listed whatever dramatic scenarios I could spontaneously imagine. Maybe the real danger is more like ChatGPT slowly manipulating opinions over the course of years and decades in order to control human behavior, who knows? My point is just that »Everything will be fine, at worst we will flip the power switch.« seems naive to me and to quite a few others.


We by and large ignore a lot of hypothetical (and non-hypothetical) risks; what I am missing is a sound argument for why this one deserves particular attention versus the others. Otherwise, "just flip the switch" is a potentially non-naive approach given limited resources.


Flipping the switch is the naive approach because it assumes - or hopes - that you can. It is as naive as saying that if you accidentally touch an electrical wire and get shocked, just let go or flip the switch, problem solved. That ignores the fact that touching an electrical wire might make it impossible to let go because you lose control over your muscles, and it fails to consider that the switch might be out of reach.

We know from countless experiments that AIs can behave in unexpected ways. We know how indifferent we humans can be to other life. We are pretty careful when we approach unknown lifeforms, whether microbes in the lab or animals in the jungle. We would probably be pretty careful towards aliens. We have also not been careful with new things, for example when we discovered radioactivity, and that certainly caused harm. I do not see where we should get the justification for an attitude towards AI that there are no risks worth considering.


We are not very careful and never have been. People plowed into new territories, sampled flora and fauna, tried substances, and so forth. Being extra careful about a hypothetical (treating it as an existential risk) is historically atypical, and I have so far not seen a convincing reason for it. Like any technology there are risks, but that is business as usual.


Yes, we have often not been careful, but that does not imply we are never careful or that we should not be careful. Arguably we are getting more careful all the time. So just pointing out that we have not been careful in the past does not really help your argument. What you actually have to argue is that not being careful with AI is a good thing. Maybe you can support that by arguing that not being careful in the past was a good thing, but that is still more work than just pointing out that we were not careful in the past.


I would say there are two things: 1) yes, not being overly careful did work out by allowing things to progress fast, and the same might hold here; 2) those who want to use hypotheticals about non-existent technology to steer current behavior are the ones who need to explain why. And that why needs to address why hypotheticals are more pressing than actuals.


What is the value of moving quickly? What does it matter in the grand scheme of things whether it takes ten years more or less? And as said before, there is a possibility that we build something that we can not control and that acts against our interests at a global scale. If you start tinkering with uranium to build an atomic bomb, the worst that can happen is that you blow yourself up and irradiate a few thousand square kilometers of land. All things considered, not too bad.

An advanced AI could be more like tinkering with deadly viruses: if one accidentally escapes your bioweapons lab and you are not prepared, if you don't have an antidote, there is a chance that you completely lose control, that it spreads globally and causes death everywhere. That is why we are exceptionally careful with biolabs: incidents have a real chance of not remaining localized, of being impossible to contain.


Having millions more people survive because of the earlier availability of vaccines and antibiotics has value, at least to me.

We are careful with biolabs because we understand and have observed the danger. With AI we do not. The discussion at present around AGI is more theological in nature.


For things like vaccines and antibiotics, it seems much more likely that we will use narrow AI that is good at predicting the behavior of chemicals in the human body; I don't think anybody is really worried about that kind of AI.

If you actually mean convincing an AGI to do medical research for us, what is your evidence that this will happen? There are many possible things an AGI could do, some good, some bad. I do not see that you are in any better position to argue that it will do good things but not bad things than someone arguing that it might do bad things, quite the contrary.

And you repeatedly make the argument that we first experienced trouble and then became more cautious. These are two positions, two mentalities, and neither is wrong in general; they just have different preferences. You can go fast and clean up after you run into trouble, or you can go slow and avoid trouble to begin with. Neither is wrong; one might be more appropriate in one situation, the other in another.

You can not just dismiss going fast, just as you can not just dismiss going slow; you have to make a careful argument about potential risks and rewards. And the people who are worried about AI safety make this argument: they don't deny that there might be huge benefits, but they demonstrate the risks. They are also not pulling their arguments out of thin air; we have experience with goal functions going wrong and leading to undesired behavior.


Sorry, my point on vaccines and antibiotics was meant as a historical example of moving fast.

As AGI does not exist, it is moot to discuss how we might or might not convince it to do something.

And, yes, stacking hypotheses on top of each other is effectively pulling arguments out of thin air.


Look, you can not really take the position »we don't have AGI, we don't know what it does, let's move quickly and not worry«. If we don't know anything, then expecting good things to happen and therefore moving quickly is just as justified as expecting bad things to happen and therefore being cautious.

But it is just not true that we know nothing; the very definition of what an AGI is specifies some of its properties. By definition it will be able to reason about all kinds of things. I am not sure it would necessarily have to have goals. If it is only a reasoning machine that can solve complex problems, then there is probably not too much to be worried about.

But if it is an agent like a human, with its own goals, then we have something to worry about. We know that humans can have disagreeable goals or can use disagreeable methods to achieve them. We know that it is hard to make sure that artificial agents have goals we like and use methods we agree with. Why would we not want to ensure that we are creating a superhuman scientist instead of a superhuman terrorist?

So if you want to build something that can figure out a plan for world peace, go ahead, make it happen as quickly as possible. If you want to build an agent that wants to achieve world peace, then you should maybe be a bit more careful; killing all humans will also make the world peaceful.


I think there is a lot more speculation than knowledge, even about the timing of the existence of AGI. As our discussion shows, it is very difficult to agree on some common ground truth of the situation.

Btw., we also don't have special controls around very smart people at present (though countries did at times, and we generally frown upon that). The fear here combines some very high, unspecified level of intelligence, some ability to evade, some ability to direct the physical world, and more - so a complex set of circumstances.


>We by and large ignore a lot of hypothetical (and non-hypothetical) risks,

In regular software this is why all of your personal information is floating out there in some hacker's cache. We see humans chaining together 5+ exploits and config failures, leading to exploitation and penetration.

So, on your "just flip the switch"...

The amount of compute you have in your pocket used to take entire floors of buildings. So if we imagine that compute power keeps growing at anything close to that pace, and the algorithms used by AI become more power efficient, I believe it is within reason that in 20 years we could see laptop-sized units with compute power exceeding a human's capabilities. So, now I ask you: is it within your power to shut off all laptops on the planet?
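As a rough sketch of that extrapolation (every number here - the current laptop FLOPS, the brain-compute estimate, the doubling time - is an assumption, and the brain figure in particular is disputed):

    # Back-of-envelope: does ~20 years of hardware scaling put brain-scale
    # compute in a laptop? Every number below is an assumption.
    laptop_flops_today = 1e13      # assumed: ~10 TFLOPS for a current laptop GPU
    brain_flops_estimate = 1e16    # assumed: one common (disputed) brain estimate
    doubling_period_years = 2.0    # assumed: Moore's-law-like doubling time
    years = 20

    doublings = years / doubling_period_years
    laptop_flops_future = laptop_flops_today * 2 ** doublings

    print(f"projected laptop compute: {laptop_flops_future:.1e} FLOPS")
    print(f"reaches assumed brain estimate: {laptop_flops_future >= brain_flops_estimate}")

Under those assumptions, 10 doublings give roughly a 1000x increase, which lands right around the assumed brain-scale figure; change any of the inputs and the conclusion moves accordingly.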


Why would these laptops even pose an existential danger? Why would it need to be within my power?



