Another explanation is that there are those who considered and thoughtfully weighed the ramifications, but came to a different conclusion. It is unfair to assume the decision process was indifferent to harm or simply ignorant.

For example, perhaps a lesser-evil argument played a role in the decision: would a world where deep fakes are ubiquitous and well known to the public be better than one where deep fakes have a potent impact because they are generated rarely and strategically by a handful of (nefarious) state sponsors?




there's also the issue that most of the AI catastrophizing is a pretty clear slippery-slope argument:

if we build AI AND THEN we give it a stupid goal to optimize AND THEN we give it unlimited control over its environment, something bad will happen.

the conclusion is always "building AI is wrong" and not "giving AI unrestricted control of critical systems is wrong"


The massive flaw in your argument is your failure to define "we".

Replace the word "we" with "a psychotic group of terrorists" in your post and see how it reads.


If you’re talking about some group of evildoers that deploys AI in a critical system to do evil… the issue is why they have access to the critical system in the first place. Surely they could jump straight to their evil plot without the AI at all.


Your question is equivalent to "if you have access to the chessboard anyway, why use Stockfish, just play the moves yourself."


Or "board of directors beholden to share-holders".


I completely agree that's a valid argument. I just think it is rational for someone to come to a different conclusion, given identical priors.


If it wasn’t clear, I agree with your parent comment


My main takeaway from Bostrom's Superintelligence is that a super intelligent AI cannot be contained. So, the slippery slope argument, often derided as a bad form of logic, kind of holds up here.



