
Think of corporations as analogue versions of digital AI. Instead of silicon, the thinking is done by humans executing algorithms. How effective are humans at stopping the series of wars that are likely being triggered for the benefit of the US military-industrial complex? The answer is: not very. Disbanding a corporation should be about as easy as turning off a computer; and yet here we are.

AI is the same, except there is less need for any human survivors on the other side of a conflict. It isn't the endgame yet, but if we don't need humans for thinking jobs, and humans are no longer so useful in wars either (it seems to be missiles, drones and artillery that matter these days), then it really comes down to the edge we have over machines in manual dexterity and labour tasks. Which is not nothing, but it is an edge that could plausibly be overcome in a matter of decades. Then the future gets really difficult to predict.




Corporations are disbanded all the time. Wars have started and ended all the time throughout history.


Sure, but there is still a risk that World War 3 will only end when all humans are dead. There is still a risk that one side will start to use nukes and provoke a response. And that's in a human-vs-human war, where we generally do want each other to survive.

More and more military systems are under software or AI control (drones, etc.). If these do get too dangerous, I doubt all current superpowers could be persuaded to stop using them.

In an AI vs humans war, the AI probably doesn't care if all humans die from nuclear winter.

(I'm not saying this is likely, I'm just saying that it's not impossible, and that the fact that some corporations are disbanded doesn't really matter for AIs.)


That is one of infinitely many hypotheticals, sure. It could also be that everyone agrees to put a failsafe in place, in the form of EMP weapons, to destroy a rogue AI - it's all super speculative.


You dying in a car accident is super speculative too, yet we have thousands of different actions and regulations in place to reduce the chance of it happening.

If I had to make a bet, I would say the airbag in your car is never going to go off. And yet we engineer these safety devices to ensure the most likely bad outcomes don't kill you. This is the point of studying AI safety: to understand the actual risks of these systems, precisely because some low-probability but existential outcomes are possible.

>It could also be that everyone agrees to put some failsafe in the form of EMP weapons in place to destroy a rogue AI

So we would commit suicide? Are we talking about EMPs in data centers that could run AI? Oops, there goes the economy. And that doesn't address the miniaturization of AI into much smaller form factors in the future. Trying to build it safe in the first place is a much better bet than picking up what remains from the ashes because we were not cautious.


A car accident isn't super speculative, and neither is the way these accidents happen, the injuries they can cause, and so forth. There is nothing speculative or hypothetical about them.

We don't know the actual risks of something that does not exist and is only vaguely characterized. Any number of hypotheticals can carry existential risk with some probability; that alone is not enough to warrant study.



