Hacker News
Superintelligence: The Idea That Eats Smart People (idlewords.com)
883 points by pw on Dec 22, 2016 | 580 comments



While I agree with Maciej's central point, I think the inside arguments he presents are pretty weak. I think that AI risk is not a pressing concern even if you grant the AI risk crowd's assumptions. Excerpted from https://alexcbecker.net/blog.html#against-ai-risk:

The real AI risk isn't an all-powerful savant which misinterprets a command to "make everyone on Earth happy" and destroys the Earth. It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next. It's smart factories that create a vast chasm between a new, tiny Hyperclass and the destitute masses... AI is hardly the only technology powerful enough to turn dangerous people into existential threats. We already have nuclear weapons, which like almost everything else are always getting cheaper to produce. Income inequality is already rising at a breathtaking pace. The internet has given birth to history's most powerful surveillance system and tools of propaganda.


Exactly. The "Terminator" scenario of a rogue malfunctioning AI is a silly distraction from the real AI threat, which is military AIs that don't malfunction. They will give their human masters practically unlimited power over everyone else. And AI is not the only technology with the potential to worsen inequality in the world.


Human beings have been extremely easy to kill for our entire existence. No system of laws can possibly keep you alive if your neighbors are willing to kill you, and nothing can make them actually unable to kill you. Your neighbor could walk over and put a blade in your jugular, you're dead. They could drive into you at 15MPH with their car, you're dead. They could set your house on fire while you're asleep, you're dead.

The only thing which keeps you alive is the unwillingness of your neighbors and those who surround you to kill you. The law might punish them afterward, but extensive research has shown that it provides no dissuasion to people who are actually willing to kill someone.

A military AI being used to wipe out large numbers of people is exactly as 'inevitable' as the weapons we already have being used to wipe out large numbers of people. The exact same people will be making the decisions and setting the goals. In that scenario, the AI is nothing but a fancy new gun, and I don't see any reason to think it would be used differently in most cases. With drones we have seen the CIA, a civilian intelligence agency, waging war on other nations without any legal basis, but that's primarily a political issue, along with the fact that it can be done in pure cowardice, without risking the lives of those pulling the trigger, which I think is a distinct problem from AI.


This is exactly the same argument of targeted surveillance vs mass surveillance.

Saying "humans have spied on each other for centuries" is nothing but a distraction from how far beyond what should be legal mass surveillance goes, because it makes it so easy to have everything on everyone all the time. It's nothing like past targeted surveillance. If anything it's much more like Stasi-style surveillance, but on a much bigger and more in-depth scale. And we already know how dangerous such surveillance is with the wrong person leading a country.

When war becomes easy and cheap (to the attacker), you'll just end up having more of it. It doesn't help that the military-industrial complex constantly lobbies for it either.


That's the opposite of how history has played out with respect to violence, however. It is far easier (and less risky) to kill vast numbers of people now than it was hundreds and thousands of years ago, and yet the risk of any one person being killed by violence is far lower than at any previous point in history.


How are you calculating the 'risk of any one person being killed by violence' today? To make that claim you need to consider tail risk and black swan events that are possible but have little precedent. What weight are you giving the likelihood of a person dying by nuclear weapon?


Well, for example, "about 15% of people in prestate eras died violently, compared to about 3% of the citizens of the earliest states".

http://www.wsj.com/articles/SB100014240531119041067045765832...

As far as predicting the future goes, I can't.


It's about power imbalance and impersonality.

Killing people when neither they nor their friends can retaliate is easier. Being able to say "do X or I kill you", without the other party having a defense that will even inconvenience you, gives you a shitton of power.

Military AI would basically be nukes without the fall-out or collateral damage.


So ideally there would be an AI criminal justice system, just to balance things out.


The difference is that AI might have an "intent". It may be just a statistical contraption married to some descendant of a heat-seeking missile sensor, but from the outside it will look like "intent". Perhaps even without the double quotes.


People don't kill each other, even if they want to, because they know if they do they will probably spend the rest of their lives in prison. What do you mean this doesn't dissuade people?


That's not true, if I know my neighbors are coming I'm loading my Remington 870 and I'm waiting for that door to open. In America we're allowed to bear arms for protection.


Because you do not live in an action movie, you will not be able to stop them. You will not know they are coming, or when, or know how they intend to do it, etc. If you are well-defended with arms, they will simply use some other method. Having a shotgun does not make you significantly harder to kill in modern society. It would make you harder to kill if we were limited to cinematic tropes like declaring when we intend to do it, how we intend to do it, etc, sure, but we don't live in that world. We live in a world where the only thing that prevents our death, every day of our lives, is that no one nearby is willing to kill us.

Some people find that scary, and it may seem like a shaky thing to stake your life on. But, firstly, you have no choice. Secondly, it works for billions of people and has kept us safe for tens of thousands of years. Even after we developed the knowledge, tools, and ability to kill millions at the push of a button.


You are missing the point. If everyone around you wants you dead and is willing to do it, you're going to die. If the CIA knows of a terrorist camp and wants to kill the people there, they are going to die.

AI doesn't change this; it just makes it easier.


Everyone wanted Bin Laden dead, but it still took a few years to manage that. So perhaps not as straightforward.


Not everyone wanted Bin Laden dead. Critically, the people he was hiding with and his organization as a whole very much wanted him alive. The people who actually surrounded him on a daily basis did not want to kill him.


How long between the time the CIA knew where he was (definitively) and the time he was killed?


That's exactly what Maciej spends the last third of the article saying: that the quasi-religious fretting about superintelligence is causing people to ignore the real harm currently being caused by even the nascent AI technology that we have right now.


We don't need AI for massive differences in military effectiveness. That is already here. The US can just destroy most countries and substate actors with minimal casualties. The issue is already just the difficult matters like differentiating friendlies/neutrals from enemies and not creating more enemies via collateral damage and other forms of reaction.


The problem arises when non-state actors wreak havoc with drones and AI with impunity.

What would we do if a drone made by a no-name manufacturer dropped some bombs in Times Square? Who would we blame when someone uses AI to actually sow social mistrust and subvert our existing systems?


>The problem arises when non-state actors wreak havoc with drones and AI with impunity.

Well, for the rest of the world, state actors wreaking havoc with drones and AI with impunity is already a problem.


truth. how we collectively became accepting of drones used by our governments to destroy targets half way round the world whilst the pilots sit in some skyscraper somewhere in our own countries is remarkable. honestly the disconnect is beyond deeply troubling.


I don't really grok this thinking. Why is destroying a target by drone different?

It doesn't seem substantively different from destroying a target via long-range missile or via a laser-guided bomb dropped from a human pilot flying thousands of feet over head. All three are impersonal ways of killing other human beings from a mostly-safe distance – especially in our modern asymmetrical engagements.

I agree that it feels gross to imagine a soldier sitting in a skyscraper pulling a trigger to kill people half way round the world, and it feels odd for someone to kill people in the morning and go home to sleep in their bed that night, again and again, day after day. But military commanders have effectively been doing that since we've had faster-than-horse battlefield communication. So, I'm not convinced this is some brave new world of impersonal killing.

HOWEVER, I get the problem with drone warfare. Drones provide commanders with several benefits (no human casualties on "our" side, generally great accuracy, relatively low cost, etc.) that let them scale up the killing with minimal public outcry. This seems a real problem.

I guess my point is that the problem is not drones. The problem is killing so many people with so little oversight and so little apparent concern – whether by Cruise missile, drone, nanoswarm, T2, whatever.

I'd reword your sentence to be: "how we collectively became accepting of our governments casually killing our fellow humans half way round the world is remarkable"


There is no disconnect for those drone pilots, who become as troubled as anyone who killed someone with a lesser distance between.



Evidence for that?


I'm pretty sure that drone-tracking equipment is mounted all over NYC, and more is being deployed as we speak. I can also imagine that a rogue drone arriving from afar and large enough to carry a bomb will be shot down by police if it can't be identified. If it does not carry a bomb, police will apologize.


I'm not buying that.

In fact, the only reason such drone attacks don't happen, or why people hadn't been casually blowing each other up for the past five decades with explosives attached to RC cars / planes, is that in general, people are nice to each other. There are plenty of tools out there for dedicated people to wreak havoc in populated areas. Such people simply are very, very rare.


Also, as recent news from Berlin shows, you don't even need a drone; a regular old van will do.


Further backing up the parent commenter's statement that people are generally nice to each other.


Note that the same thing happened in France recently.


I can hardly believe that. Any drone large enough to carry a camera can carry a hand grenade. They don't have much range but can be launched locally and flown into a crowd.


> The "Terminator" scenario of a rogue malfunctioning AI is a silly distraction from the real AI threat, which is military AIs that don't malfunction. They will give their human masters practically unlimited power over everyone else.

To be fair, it's a small step from effective AI that doesn't malfunction, to an AI over which humans have lost control. It's precisely one vaguely specified command away in fact, and humans are quite excellent at being imprecise.


You can always use LEO EMP nukes to bring us back to the stone age, thus taking out the AI.


You don't even need to do that; just stop mining coal, or operating oil and gas fields, or scram the reactors. Or more easily, open some circuit breakers; in a pinch you can take down a few electrical towers (not many).

A rogue AI's "oxygen" is electrical power, which is really kind of fragile. The emergency power for most datacenters won't last more than a few days without replenishing diesel fuel.

For a distributed threat, take out fiber with backhoes. Happens every day now, we just happen to repair it. Stop fixing cuts and it's "Dave, my mind is going...".

Of course you have to deal with the AI making contracts with maintenance crews of its own, and do all this before it hardens its power supplies. But our current infrastructure is definitely not hardened against low to moderate effort.


If an AI has gone rogue, it has probably already achieved high intelligence and kept improving itself at an exponential rate. For it to go 'rogue' it probably also has to have escaped an airgap, since you could otherwise just turn off the switch. How would you stop an AI that has burrowed itself into the internet? Remember, if it's even moderately intelligent it'd hide its rogue intentions at first, until it's 100% certain it has 'escaped'. From the internet it could write some nice malware, set up shop on the deep web, in badly secured IoT devices, with some bad luck even in embedded controllers (chargers etc.). And even if you got rid of it by some miracle, all it would take is one bozo connecting an old/forgotten device to the internet and you're back to square one.


> But our current infrastructure is definitely not hardened against low to moderate effort.

Sure, but nobody thinks AI is a threat right now. They're claiming AI could be a threat in the near future, which is entirely reasonable.

And with battery tech improving steadily, and solar now becoming cheaper than fossil fuels, it becomes progressively easier to depend solely on disconnected, distributed power generation which is resistant to exactly the kind of attack you're suggesting.

We can certainly argue the probabilities of such an outcome, but I hope we can all agree that it's not outright implausible. Which doesn't even count the dangers of AI for our economy, which are even more plausible. So overall, AI has the potential for much harm (and much good of course).


With our current push towards solar and wireless, don't you think these particular circuit breaker paths against a rogue AI are going to be unavailable sooner than later?

Antennas and solar panels can be smashed, but they can also be protected, since they would be mostly concentrated in one area.

Throw in EMP hardened circuitry, and things get a bit harder to destroy.


What stops the AI from taking the path from The Matrix?


If you mean, what stops an AI from locking humanity into a virtual reality simulation and using its collective body heat as fuel - simple thermodynamics.

The Matrix of the original film was originally envisioned as a bootstrap AI, in which the machines were farming humans for their collective processing power - and the simulation that was the Matrix was integral to this purpose, as it served as an operating system for the imprisoned human minds. However, the film's backers felt that concept was too complex for the average moviegoer, and forced the change to "humans = batteries."

But, the canonical Matrix would be too inefficient to actually work. The amount of energy needed to maintain a human being over the course of an entire lifetime far surpasses the amount of energy that can be harvested as body heat - and adding some kind of VR simulation over that just to keep people who are already physically trapped from "escaping" would just be a pointless waste of resources.
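
For anyone who wants the arithmetic behind that, here's a back-of-the-envelope sketch (the figures are standard textbook approximations, nothing from the film, and the names are just mine):

    # Rough energy budget for a "human battery", using standard approximations.
    FOOD_INPUT_W = 2000 * 4184 / 86400   # ~2000 kcal/day of food, expressed in watts (~97 W)
    BODY_HEAT_W = 100.0                  # a resting adult dissipates roughly 100 W as heat
    T_BODY, T_AMBIENT = 310.0, 293.0     # body and room temperature, in kelvin

    carnot_limit = 1 - T_AMBIENT / T_BODY        # best possible heat-to-work fraction (~5.5%)
    usable_work_w = BODY_HEAT_W * carnot_limit   # a few watts, even with a perfect heat engine

    print(f"food energy in:  {FOOD_INPUT_W:.0f} W")
    print(f"body heat out:   {BODY_HEAT_W:.0f} W")
    print(f"usable work out: {usable_work_w:.1f} W (Carnot limit {carnot_limit:.1%})")

Even at the ideal Carnot limit you get a few watts out for roughly a hundred watts of food in, before counting the cost of running the simulation itself.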


Military hardware is hardened against EMP.


Don't the people who control military technology in our current time already have "practically unlimited power over everyone else"?


No, because their power is limited by their role in society and the organization in which they operate. Even the president of the United States has limited powers. If Obama just decided one day that the best course of action was to nuke Moscow and doubled down on doing so it's extremely unlikely he'd be able to do so. There are enough other people with careers, jobs, pension plans and common decency between him and actually launching missiles that I don't think it's credible that it would happen on a whim like that, even a persistent one. Someone would call a doctor and get the President some medication.

However, that depends on the people between the president and nuclear launch being decent human beings that care about law and order and proper procedure. You need to look at the system, not just individuals.

Alternatively, let's say someone came to power who genuinely believed nuking Russia was a good option. Rather than order a launch on the spot, what they'd actually do is gradually build a case, appoint pliable or similarly thinking people to key positions, get the launch protocols revised, engineer a geopolitical crisis by provoking Russia and drive events towards a situation in which a nuclear launch seems like a legitimate option.


This is a complete misunderstanding of US nuclear policy and weapons systems. It's designed to enable the president to launch missiles at any target as rapidly as possible, not to doublecheck or safeguard against him.

http://blog.nuclearsecrecy.com/2016/11/18/the-president-and-...

You might say "sure, but there are people who have to actually execute those orders." But give the Pentagon some credit--those are people who have been systematically selected because they follow orders quickly and without question. For example, consider what happened to Harold Hering when it became obvious that he was not one of those people.


This is a really strong argument for big government. The smaller our all-powerful military command is, the easier it is for it to go rogue.


It's an argument for a crafted system of checks and balances rather than for big government per se. Totalitarian governments, for example, are absolutely massive, and many don't even have some sort of politburo to rein in the dictator.


A counterargument would be that it's harder for a big government to change direction easily, once decided and set in motion. Wrong decisions can compound because it's easier for a large organization to stay its course, even in the face of increasing harm, as evidenced by the Vietnam war.


I think they're largely limited by the greater public's threshold for acceptable casualties of their own soldiers during war. What the AI'ing of war does is to remove that natural limit and raise the amount of death and destruction a military can inflict without pushback from the general population.


I think you're _exactly right_. This is precisely why the Iraq and Afghanistan wars proved too costly to prosecute: they were too costly in men and materiel. The logistics and cost of procuring and moving weapons, vehicles, living supplies etc. were high, but manageable. The cost of each body bag which had to be explained to the public was not.

Which is why President Obama pivoted to a drone war in areas like Pakistan. Scores of Pakistani civilians are killed on a regular basis for crimes no worse than standing next to a tall bearded man or attending the funeral of a neighbour. The American public has no issues with this because hey, it's cheap to deploy the drones and no American lives are ever in jeopardy. And to be fair about it, apart from the high collateral damage, the Drone War in Pakistan is generally considered to be successful at inhibiting the Pakistani Taliban.

This Drone War was the first successful war that we've seen fought without boots on the ground. It's likely that we'll see many more like this as on-board AI improves.


To make big wars like that, you have to have many great robots, not AIs. If ground robots were great at war in a wide sense, we could use them today via a remote control / VR interface.


Their might is asymmetrical, but that power is checked by the willingness of an organization of humans to follow commands. There is a limit to how far a soldier will go, ethically.


True, true, but somehow that's never been much of an obstacle to totalitarian governments. Somehow there's always a soldier who will push the button.

In Nuremberg we developed a way to think about this: people outside the central circles of power are weighing a tremendous number of pressures. It's definitely the case that some are sadistic and horrible, but more are just following orders and trying to get by as best they can, and punishing them for war crimes is not appropriate.

The other side of that coin is that it isn't realistic to expect soldiery en masse to resist illegal orders. It's always more complicated than that.


> Somehow there's always a soldier who will push the button.

And then there's always that soldier ready to denounce his comrades for having raped some poor Vietnamese women who had nothing to do with the war itself. Why risk such a PR disaster, which might see your funding cut, when you can use robots instead? Warzone robots don't snitch on their fellow robots in front of the press and they don't rape; they're only built to kill.


Unless you're talking about the Nuremberg Rallies, you've got your history screwed up: "just following orders" is NOT a defense against war crime accusations, and punishing those who commit war crimes "trying to get by as best they can" is completely appropriate, was the conclusion of the Nuremberg Trials.

https://en.wikipedia.org/wiki/Nuremberg_Principles#Principle...


> True, true, but somehow that's never been much of an obstacle to totalitarian governments. Somehow there's always a soldier who will push the button.

Actually, the unwillingness of the Red Army to do any more invasions was arguably the precipitating factor in the fall of the Soviet empire. In 1988, Gorbachev gave a speech to the Warsaw pact meeting where he told them that the Brezhnev doctrine was no more. No socialist government would be able any longer to count on Russian aid in putting down popular uprisings. A year later, the empire crumbled. Of course, it wasn't the soldiers per se who refused, but the generals who were afraid of the potential mutinies (and the occasional actual ones).


It was Gorbachev who refused, not the generals.


He couldn't have refused without the support of the generals, who had just withdrawn from Afghanistan.


Very true.

And economic problems were a big incentive for everyone in charge to step back from wars. For countries with a strong economy there would be little incentive to stop, especially if AI makes war cheaper.


No there isn't. It's easy enough to manipulate the individual to do anything.

We speak of Nazis frequently, but consider what Curtis LeMay did in war and prepared to do after the war.

AI and robots give plausible deniability and reduce the number of witnesses. They also make suicide raids more practical.


>There is a limit to how far a soldier will go, ethically.

What horrible things do you have in mind that armies have not already done?


It's not that AIs will do worse things than humans have already done. It's that AIs will do those terrible things much more efficiently and with much lower risk to the people in charge.


On the plus side - no more rape.


I don't think you can hope for even that much. Rape has historically been systematically used for terrorizing the population in order to achieve military and/or political aims. An AI free from any ethical concerns could conceivably evaluate it as an efficient strategy for achieving some set of goals and proceed accordingly.


Yes, almost all revolutions succeed because the army turns. It doesn't happen all the time, but some of the time is enough to put some checks on people wanting to take control.



As horrific as Nanking Massacre was, I believe that doesn't really prove your point. I'd argue that a lot of counter examples of soldiers ethical behaviour simply aren't visible and are forgotten and lost to history (https://en.wikipedia.org/wiki/Survivorship_bias)


Which limit are you talking about? I guess you have forgotten what the soldiers of the Third Reich did.

Or American death squads in Afghanistan. Might as well call them rape and death squads.

What about Guantanamo?

Srebrenica?

Should I go on?

How unenlightened and naive are you?


Please comment civilly and substantively—without personal attacks—or not at all.

https://news.ycombinator.com/newswelcome.html

https://news.ycombinator.com/newsguidelines.html


That's like asking whether we really have to worry about global wars over resources since there are already knife stabbings and car crashes, or whether the Nazis were really that much different from having a bit of a cold. Sadly, being serious about this subject is not one of the strengths of HN it seems, I was kind of spooked when I only got 1 upvote for this: https://news.ycombinator.com/item?id=11685995


> Don't the people who control military technology in our current time already have "practically unlimited power over everyone else"?

Currently there's a high cost to it. War is very expensive; even the richest country in the history of the world, the US, doesn't want to bear the costs. And it requires persuading masses of soldiers to go along with it.

Roboticized warfare might eliminate those constraints.


There isn't a whole lot of deniability. Current drone strikes involve explosions and craters.


" They will give their human masters practically unlimited power over everyone else."

Any power that has significant leverage over another power already has that ability.

A bunch of 'super smart evil AI robots' will not be able to physically deter/control 500 million Europeans - but - a small army of them would be enough to control the powers that be, and from there on in it trickles down.

Much the same way the Soviets controlled Poland et al. with only small installations. The 'legitimate threat of violent domination' is all that is needed.

So - many countries already have the power to do those things to many, many others via conventional weapons and highly trained soldiers. That risk is already there. Think about it: a decent soldier today is already pretty much a 'better weapon' than AI will be for a very, very long time. And it's not that hard to make decent soldiers.

The risk for 'evil AI robots' is that a non-state, inauthentic actor - like a terrorist group, militia etc. - gets control of enough of them to project power.

The other risk, I think, is that given the lack of bloodshed, states may employ them without fear of political repercussions at home. We see this with drones. If Obama had to do a 'seal team 6' for every drone strike, many, many of those guys would have died, and people coming home in body bags wears on the population. Eventually the war-fever fades and they want out.


This is basically why a lot of people didn't want Google to become a defense contractor while researching military robots. If it did, it would've naturally started to use DeepMind for it. And that's a scary thought.


People are worried about AI risk because ensuring that the strong AI you build to do X will do X without doing something catastrophic to humanity instead is a very hard problem, and people who have not thought much about this problem tend to vastly underestimate how hard it is.

Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere. Now you might say, why don't we just hardcode in a goal to the AI like "solve aging, and also don't hurt anyone"? And ensure that the AI's method of achieving its goals won't have terrible unintended consequences? Oh, and the AI's goals can't change? This is called the AI control problem, and nobody's been able to solve it yet. It's hard to come up with good goals for the AI. It's hard to translate those goals into math. It's hard to prevent the AI from misinterpreting or modifying its own goals. It's hard to work on AI safety when you don't know what the first strong AI will look like. It's hard to prove with 99.999% certainty that your safety measures will work when you can't test them.
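
To make the "hard to translate goals into math" point concrete, here's a deliberately silly toy sketch (everything in it is invented for illustration; it's not anyone's actual proposal): a literal-minded optimizer handed a proxy metric will pick whatever action maximizes the metric, not what the designer meant.

    # Toy specification-gaming example: the designer cares about actual wellbeing,
    # but the objective the optimizer is given is the *measured* happiness score.
    actions = {
        # action: (true_wellbeing_change, measured_happiness_score)
        "fund public health work": (8, 6),
        "do nothing": (0, 0),
        "tamper with the survey that produces the score": (-2, 10),
    }

    def best_action(objective):
        # A literal-minded optimizer: maximize the objective, no other constraints.
        return max(actions, key=objective)

    print(best_action(lambda a: actions[a][1]))  # proxy objective -> tampers with the survey
    print(best_action(lambda a: actions[a][0]))  # intended objective -> funds public health

The catch, of course, is that the "intended" column is exactly the thing nobody knows how to write down for the real world.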

Things will not turn out okay if the first organization to develop strong AI is not extremely concerned about AI risk, because the default is to get AI control wrong, the same way the default is for planets to not support life.

My counterpoint to the risks of more limited AI is that limited AI doesn't sound as scary when you rename it statistical software, and probably won't have effects much larger in magnitude than the effects of all other kinds of technology combined. Limited AI already does make militaries more effective, but most of the problem comes from the fact that these militaries exist, not from the AI. It's hard for me to imagine an AI carrying out a military operation without much human intervention that wouldn't pose a control problem.

--------- Edited in response to comment--------


I am feeling like perhaps you didn't read the article? Many of these arguments are the exact lines of thinking that the author is trying to contextualize and add complexity to.

These are not bad arguments you are making, or hard ones to get behind. There are just added layers of complexity that the author would like us to think about. Things like how we could actually 'hard-code' a limit or a governor on certain types of motivation. Or what 'motivation' is even driven by at all.

I think you'll enjoy the originally linked article. It's got a lot to consider.


> Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere.

This is a sequence of deductive reasoning that you brought up there. Quite natural for human beings, but why would the paperclip maximiser be equipped with it?

Seriously, the talk specifically argues most of the points that you brought up.

Shit is complicated yo. Complicated like the world is - not complicated like an algorithm is. Those are entirely different dimensions of complicated that are in fact incomparable.


> to do X will do X without doing something catastrophic to humanity instead is a very hard problem

This scenario I agree with. For instance: the AI decides that it doesn't want to live on this planet and consumes our star for energy, or exploits our natural resources leaving us with none.

The whole AI war scenario is highly unlikely. As per the article, the opponents of AI are all regarded as prime examples of human intelligence - many of them have voiced opposition to war and poverty (by virtue of being philanthropists). Surely something more intelligent than humans would be even less inclined to wage war. Furthermore, every argument against AI posits that humans are far more important than they really are. How much time of day do you spend thinking about bacteria in the Marianas Trench?

> AI control

My argument is with exception to this scenario. By attaching human constraints to AI, you are intrinsically attaching human ideologies to it. This may limit the reach of the superintelligence - which means that we create a machine that is better at participating in human-level intelligence than humans are. Put simply, we'd plausibly create an AI rendition of Trump.


>Surely something more intelligent than humans would be even less inclined to wage war.

The default mode for a machine would be to not care if people died, just as we don't care about most lower life forms.

> Furthermore, every argument against AI posits that humans are far more important than they really are. How much time of day do you spend thinking about bacteria in the Marianas Trench?

Exactly.

Which is why worrying about ourselves in a world with superintelligence is not wasted effort.

The extreme difference in productive abilities of superintelligence, vs a human population whose labor and intelligence have been devalued into obsolescence, suggests there will be serious unrest.

Serious unrest in a situation where a few have all the options tends to lead to extermination, as is evident every time an ant colony attempts to raid a home for food crumbs.

The AIs might not care whether we live or not, but they won't put up with us causing them harm or blocking their access to resources, even if we are doing it not to hurt them but only to survive.


> if you are given a strong AI randomly selected from the space of all strong AIs

Why would this ever apply? We're building them, not picking them out of a hat.


I think the current state of deep neural network design, and the research funding pouring into generalizing the simple neural nets we're working with now, suggests that we are, in fact, pulling them out of a hat.

Right now we're just discarding all the ones that are defective, at a stupendously high rate as we train neural nets.

I can't speak to what method would generate the first strong AI, but I suspect the overall process - if not the details - will be similar. Training, discarding, training, discarding, testing, and so on. And the first truly strong AI will likely just be the first random assemblage of parts that passes those tests.


That's not how neural network training works. It's not magic or guesswork; it's essentially glorified curve fitting (over a much more complicated space than the polynomials). It's also not random in any respect. The space of all neural networks plausibly generated from a training set is very, very small compared to the space of all networks of that size.


I get the feeling that not many people in this thread actually know how AI and related concepts work.


The initial conditions of neural networks are almost always chosen randomly or pseudo-randomly. Data sets are sometimes, although not always, presented with random sample order or selection.

Either way, the randomness in initial conditions means the solution found is one of many different solutions that could have been found, and depending on the problem, different initial conditions can result in very different solutions even on the same data.
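
For what it's worth, here's a minimal sketch of that (tiny numpy net, toy XOR data, every name here is hypothetical): the same training procedure on the same data lands on different weights purely because the initialization seed differs.

    import numpy as np

    def train_xor_net(seed, hidden=4, steps=5000, lr=0.5):
        # Train a tiny one-hidden-layer sigmoid net on XOR; only the init seed varies.
        rng = np.random.default_rng(seed)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)
        W1 = rng.normal(size=(2, hidden))   # random initial conditions
        b1 = np.zeros((1, hidden))
        W2 = rng.normal(size=(hidden, 1))
        b2 = np.zeros((1, 1))
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        for _ in range(steps):
            h = sig(X @ W1 + b1)
            out = sig(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)   # gradient of squared error through sigmoid
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
            W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)
        return W1, out

    for seed in (0, 1):
        W1, out = train_xor_net(seed)
        print(f"seed {seed}: predictions {out.ravel().round(2)}")
        print(W1.round(2))

The fitting procedure is deterministic given the data and the seed, but the weights you land on (and occasionally the quality of the solution) differ from run to run, which is all the randomness point amounts to.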


> It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next

You know, you don't need to go that far. You know what a great way to kill a particular group of people is? Well, let's take a look at what a group of human military officers decided to do (quoting from a paper of Elizabeth Anscombe's, discussing various logics of action and deliberation):

""" Kenny's system allows many natural moves, but does not allow the inference from "Kill everyone!" to "Kill Jones!". It has been blamed for having an inference from "Kill Jones!" to "Kill everyone!" but this is not so absurd as it may seem. It may be decided to kill everyone in a certain place in order to get the particular people that one one wants. The British, for example, wanted to destroy some German soldiers on a Dutch island in the Second World War, and chose to accomplish this by bombing the dykes and drowning everybody. (The Dutch were their allies.) """

There's a footnote:

""" Alf Ross shews some innocence when he dismisses Kenny’s idea: ‘From plan B (to prevent overpopulation) we may infer plan A (to kill half the population) but the inference is hardly of any practical interest.’ We hope it may not be. """

It's not an ineffective plan.


what are you quoting from? who's Kenny? context?


The internet also brought us wikipedia, google, machine learning and a place to talk about the internet.

Machine learning advances are predicated on the internet, will grow the internet, and will become what we already ought to know we are: a globe-spanning hyperintelligence working to make more intelligence at breakneck pace.

Somewhere along this accelerating continuum of intelligence, we need to consciously decide to make things awesome. So people aim to build competent self-driving cars, that way fewer people die of drunk driving or boredom. Let's keep trying. Keep trying to give without thought of getting something in return. Try to make the world you want to live in. Take a stand against things that are harmful to your body (in the large sense and small sense) and your character. Live long and prosper!!!


which part of the "we need better scifi" slide did you not understand?


>We already have nuclear weapons, which like almost everything else are always getting cheaper to produce.

And in an almost miraculous result, we've managed not to annihilate each other with them so far.

> Income inequality is already rising at a breathtaking pace.

In the US, yes, but inequality is lessening globally.

> The internet has given birth to history's most powerful surveillance system and tools of propaganda.

It has also given birth to a lot of good things, some that are mentioned in a sibling comment.


Yeah, but we were never before forced into this global boiler room where we're constantly confronted with each other's thoughts and opinions. Thank you, social media. It's like there is no intellectual breathing room anymore. Enough to make anyone go mad and want to push the button.


> It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next.

This has been done many times by human-run militaries; would AI make it worse somehow?

Groups of humans acting collectively can look a lot like an "AI" from the right perspective. Corporations focused on optimizing their profit spend a huge amount of collective intelligence to make this single number go up, often at the expense of the rest of society.


> This has been done many times by human-run militaries; would AI make it worse somehow?

Soldiers in developed countries no longer want to die en masse.


Soldiers in developed countries no longer have to die en masse. Compare US and Iraqi casualties.


I don't think AI will cause a paradigm shift here; but like most powerful technologies, I imagine it will have potent military applications.


No doubt that his "inside arguments" have been rebutted extensively by the strong AI optimists and their singularity priests. After all, dreaming up scenarios in which robotic superintelligence dominates humanity is their version of saving the world.

That's why I found the "outside arguments" here equally important and compelling.

> The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.

If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.

The best rebuttals to all this are the least engaging.

"Dude, are you telling me you want to build Skynet?"


The poster's rebuttals against the threat are totally under-thought cop-outs. For example, his first argument about "how would Hawking get his cat in a cage?": just put food in it. It's not hard to imagine an AI could come up with a similar motivation to get humans to do what it wants.

That's not to say that his general premise is wrong, but it's hard for me to take it seriously when his rebuttals are this weak.


The emu "war" example is also similarly dumb. We know that humanity is very successful at wiping out animals even when we're not actively trying, just as a consequence of habitat encroachment or over-hunting. If you want to kill a bunch of emus, that can easily be done using appropriate methods. Having a cohesive military formation go after them in the huge Australian outback and giving up after a week is not the way to do so.


> If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.

Occasionally the crazies are right. Remember when the idea that the NSA was recording everyone's emails was paranoid conspiracy theory talk? It turns out they were actually doing it the whole time.

The fact that the world hasn't ended tells us virtually nothing about how likely the end of the world is, for the simple reason that if the world had ended we wouldn't be here to talk about it. So we can't take it as evidence, at least, not conclusive evidence. Note also that the same argument works just as well against global warming as it does against AI risk.

Turn it around. Suppose there was a genuinely real risk of destroying the world. How would you tell? How would you distinguish between the groups that had spotted the real danger and the run-of-the-mill end-of-the-world cults?


The point that I was trying to bring attention to is that one's perception of the AI risk movement changes substantially once you turn your focus from content to form and context. He brings many examples of this (the cult in the robes, Art Bell, etc.)

I wouldn't call suspecting the government of surveilling people "crazy." Claiming you know the timeframe for the apocalypse with precision is different. I am from North Carolina -- try a Google Image search for "May 21, 2011."

Ray Kurzweil believes he will never die. Balancing content with form and context, what we have here is clearly an atheistic, scientist version of "May 21, 2011."


>Remember when the idea that the NSA was recording everyone's emails was paranoid conspiracy theory talk

No, I remember the time that we'd known there were various signal intelligence programs operated by the US government for decades, and it was just a matter of guessing what the next and biggest one would be.


I'll push back against the idea of smart factories leading to "a vast chasm between a new, tiny Hyperclass and the destitute masses." I mean, if the masses are destitute, they can't afford the stuff being made at those fancy factories, so the owners of those factories won't make money. Income inequality obviously benefits the rich (in that they by definition have more money), but only up to a point. We won't devolve into an aristocracy, at least not because of automation.


> I mean, if the masses are destitute, they can't afford the stuff being made at those fancy factories, so the owners of those factories won't make money.

I think that's beside the point. Why would the "aristocracy" owning the factories care about money, when they have all the goods (or can trade for them with other aristos)?

It's not like they need money to pay other people (the destitute masses are useless to them). With their only inherent "capital" - the ability to work - made worthless by automation, the destitute masses have no recourse: they get slowly extinguished, until only a tiny fraction of humanity is left.


> Why would the "aristocracy" owning the factories care about money, when they have all the goods (or can trade for them with other aristos)?

In this scenario, they only need enough factories to meet the needs of themselves and the other aristocrats. Which means there will be massive unmet demand for goods among the common people, which means there will be jobs for people. The only way your scenario works is if the aristocracy produces enough goods for the entire world cheaper than anyone else can, while also making them too expensive for anyone to afford. But that would mean the factories are producing enough goods for everyone despite the fact that very few can actually buy them. Piles of goods wasting away outside the factory. The only reason to do this is if the aristocracy were malicious: trying to hurt the rest of humanity despite the fact that there is no real benefit.


> In this scenario, they only need enough factories to meet the needs of themselves and the other aristocrats.

That could still end up taking all of the planet's resources, if enough aristocrats try to maximize their military capacity to protect themselves or play some kind of planet-wide game of Risk, or if they create new goods that require absolutely insane amounts of resources.

I mean, ultimately, the issue is that the common people have nothing of value to trade except for resources that they already own, and no way to accumulate things of value. Because of this, the aristocrats will never have an incentive to sell anything to them: if they had any use for the common people's resources, they would just take them. So I think there are three possible outcomes:

* The aristocrats' needs become more and more sophisticated until they need literally the whole planet to meet them, destroying the common people in the process.

* The aristocrats provide to the common people, no strings attached, out of humanism (best case scenario).

* The aristocrats take what they need and let the common people fend for themselves, creating a parallel economy. In that case the common people would have jobs, although they would live in constant worry that the aristocrats would take more land away from them.


What about a scenario in which the common people revolt and kill the aristocrats?

It has happened before.


That just hits "replay" and provides convincing evidence to any new or surviving overlords that the capabilities of the masses must be suppressed more successfully next time.


Sure, if they can get past the aristocrats' killer robot army.


What matters is the total size of the market and the total size of the labor pool. If the Hyperclass have more wealth than all of humanity had prior, they don't need to sell to the masses to make money. If the labor pool is mostly machines they own, they don't need to pay the masses, or even a functioning market among the masses to enable it. In the degenerate case, where a single individual controls all wealth, if they have self-running machines that can do everything necessary to make more self-running machines, that individual can continue to get richer (in material goods; money makes no sense in this case).


In such a case, what stops the masses from creating a parallel economy of their own?


I imagine that would happen, but as soon as the size of their economy was large enough to attract the notice of the Hyperclass, or even just one member, it would be completely undercut by them. I don't know what an equilibrium state would look like.


I believe that technology would still drip, if not flow, from the elite world to the outsiders, and eventually they would maybe leave the planet, for example, and leave the elites in their world, something like in Asimov's tales where the Solarians and Aurorans lived comfortably with their robotic servants and autofactories happily ever after, until extinction.


Money is just a stand-in for resources and labor, and if automation makes labor very very cheap, the rich will only need the natural resources the poor sit on, not anything from the poor themselves.


Which means these alleged poor will be in the same position everyone on the planet is right now, without having this magic automation.


Alleged poor? When machines are cheaper than humans for any given task (as a result of both AI and robotic improvements) human beings will be destitute except for any ownership of resources they already have and can defend.

That means the vast majority of people will have no means of income or resources, unless they appropriate the use of land or resources they don't own in a shadow economy.

But the appropriation of resources is not likely to be perceived positively by those that own the resources, given that ownership is the only thing separating the rich from the destitute.


It's a classic tragedy of the commons scenario.

If you as a manufacturer move to a jobless production system, you gain net margin.

If everybody moves to jobless production, the topline demand shrinks radically.

Yet, for each individual mfg, the optimal choice is jobless production (aka, "loot the commons", aka "defect").
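
It's arguably closer to a prisoner's dilemma than a commons, but either way the incentive structure is easy to see in a toy payoff table (the numbers are invented purely to show the shape of the incentive, not estimates of anything):

    # Payoffs (mine, theirs) for two manufacturers deciding whether to automate away jobs.
    payoffs = {
        ("keep jobs", "keep jobs"): (3, 3),  # demand healthy, margins normal
        ("automate",  "keep jobs"): (5, 1),  # I cut costs while topline demand holds up
        ("keep jobs", "automate"):  (1, 5),
        ("automate",  "automate"):  (2, 2),  # everyone's margin improves, but demand shrinks
    }

    for theirs in ("keep jobs", "automate"):
        best = max(("keep jobs", "automate"), key=lambda mine: payoffs[(mine, theirs)][0])
        print(f"if the other firm plays '{theirs}', my best reply is '{best}'")
    # Automating is the best reply either way (a dominant strategy), yet the
    # (automate, automate) outcome pays both firms less than (keep jobs, keep jobs).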


Except that at some point the very rich will have enough AI systems in place to provide for each other.

At that point, the economy will continue to grow, despite what will seem like an economic disaster to most of the human race.

This happens all the time to other species today, where humans suck up all the resources leaving incumbent creatures to die off.


There are at least two failure cases here:

- a military AI in the hands of bad actors that does bad stuff with it intentionally.

- a badly coded runaway AI that destroys earth.

These two failure modes are not mutually exclusive. When nukes were first developed, the physicists thought there is a small but plausible chance, around 1%, that detonating a nuke would ignite the air and blow up the whole world.

Let's imagine we live in a world where they're right. Let's suppose somebody comes around and says, "let's ignore the smelly and badly dressed and megalomanic physicists and their mumbo jumbo, the real problem is if a terrorist gets their hands on one of these and blows up a city."

Well, yes, that would be a problem. But the other thing is also a problem. And it would kill a lot more people.


I mean, if you made me a disembodied mind connected to the internet that never needs to sleep and can make copies of itself, we would be able to effectively take over the world in like ~20-50 years, possibly much less time than that.

I make lots of money right now completely via the internet and I am not even breaking laws. It is just probable that an AI at our present level of intelligence could very quickly amass a fortune and leverage it to control everything that matters without humanity even being aware of the changeover.


There are also nearer-term threats (although I'd likely disagree on many specifics), but I don't see how that erases longer-term threats. One nuclear bomb being able to destroy your city now doesn't mean that ten thousand can't destroy your whole country ten years down the line.


I think the point (which is addressed with Maciej's Almogadro callback near the end) is that the longer term threat being speculated about and dependent on lots of very hypothetical things being true is pretty much irrelevant in the face of bigger problems. I mean, yes, a superpower that had hard military AI could wreak a lot of havoc. On the other hand, if a superpower wants to wipe out my corner of civilisation it can do so perfectly happily with the weapons at its disposal today (though just to be on the less-safe side, the US President Elect says he wants to build a few more). And when it comes to computer systems and ML, there's a colossal corpus of our communications going into some sort of black box that tries to find evidence of terrorism that's probably more dangerous to the average non-terrorist because it isn't superintelligent.

Ultimately, AI is neither necessary nor sufficient for the powerful to kill the less powerful.

And if it's powerful people trying to build hard military AI, they probably aren't reading LessWrong to understand how to ensure their AI plays nice anyway.


That's not how risk works.

If we want to grow to adulthood as a species and manage to colonize the cosmos, we need to pass _every_ skill challenge. If in 50 years there'll be a risk of unfriendly superintelligence and it'll have needed 40 years of run-up prep work to make safe, then it will do us absolutely zero good to claim that we instead concentrated on risk of military AI and hey, we got this far, right?

Considering the amount of human utility on the line, "one foot short of the goal" is little better than "stumbled at the first hurdle".


I think the article dealt pretty well with risk: you survive by focusing finite resources on the X% chance of stopping a Y% chance of the near-elimination of humanity, not on the A% chance of stopping a B% probability of an even-worse-than-near-elimination event, where X is large, Y is a small fraction, and the product of A and B is barely distinguishable from zero, despite the latter getting more column inches than most of the rest of the proposed solutions to exceptionally low-probability extinction events put together.
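
In crude expected-value terms it looks something like this (the numbers are placeholders made up purely to show the shape of the comparison, not estimates of anything):

    # Crude expected-value comparison of where to put finite mitigation effort.
    population = 7.5e9

    # "Large X, small-fraction Y": a decent chance our effort works against a small but real risk.
    near_term = 0.5 * 0.01
    # "A times B barely distinguishable from zero": a tiny chance of affecting a tiny-probability event.
    speculative = 1e-3 * 1e-4

    print("expected lives saved, near-term focus:  ", near_term * population)    # ~3.75e7
    print("expected lives saved, speculative focus:", speculative * population)  # ~750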

I also tend to agree with Maciej that the argument for focusing on the A probability of B isn't rescued by making the AI threat seem even worse with appeals to human utility like "but what if, instead of simply killing off humanity, they decided to enslave us or keep us alive forever to administer eternal punishment..." either.


Well yes, we have finite resources to deploy.

But most resources are not spent on risk mitigation. The priority given to risk mitigation naturally should go up as more credible existential risks are identified.


Global income inequality has been decreasing.

http://voxeu.org/article/parametric-estimations-world-distri...


That's inequality between states. Inequality within states hasn't decreased.


IIRC inequality of all humans has decreased as poor people have become richer.


It's possible that we could face both AI risks consecutively! First a tiny hyperclass conquers the world using a limited superintelligence and commits mass genocide, and then a more powerful superintelligence is created and everyone is made into paperclips. Isn't that a cheery thought. :-)


The real danger of AI is that it allows people to hide ethically dubious decisions that they've made behind algorithms. You plug some data into a system, a decision gets made, and everyone just sort of shrugs their shoulders and doesn't question it.


Isn't that the conclusion he gives at the end of the article? Ethical considerations.


what if we made a superintelligent AI that was our Socrates?

superintelligence of a military AI is worrisome, but superintelligence of a cantankerous thinker is quite reassuring...


Yes, that's the ultimate threat. But in the meantime, the threat is the military will think the AI is "good enough" to start killing on its own and the AI actually gets it wrong a lot of the time.

Kind of like what we're already seeing now in the courts, and kind of how the NSA's and CIA's own algorithms for assigning a target are still far less than 99% accurate.


"I live in California, which has the highest poverty rate in the United States, even though it's home to Silicon Valley. I see my rich industry doing nothing to improve the lives of everyday people and indigent people around us."

This is trivially false. Over a hundred billionaires have now pledged to donate the majority of their wealth, and the list includes many tech people like Bill Gates, Larry Ellison, Mark Zuckerberg, Elon Musk, Dustin Moskovitz, Pierre Omidyar, Gordon Moore, Tim Cook, Vinod Khosla, etc, etc.

https://en.wikipedia.org/wiki/The_Giving_Pledge

Google has a specific page for its charity efforts in the Bay Area: https://www.google.org/local-giving/bay-area/

This only includes purely non-profit activity; it doesn't count how eg. cellphones, a for-profit industry, have dramatically improved the lives of the poor.


I feel the problem is the fact that there are 100 billionaires in the first place; no one gets rich on their own. Gates et al. are clever, but didn't get where they are totally independently, without others' support, so they should give back.

Also, some of these billionaires are running companies that are great at tax avoidance, probably most of them. Now what? They get to pick and choose where they spend/invest their money? I don't buy it.

I believe in wealth, just not this radical wealth separation.


Countries that have no rich people are never prosperous. You can raise marginal income tax rates from, say, 60% to 70%, and maybe that's a good idea overall, but it doesn't get rid of billionaires. High-tax Sweden has as many billionaires per capita as the US does: https://en.wikipedia.org/wiki/List_of_Swedes_by_net_worth

If you raise the marginal tax rate to 99%, then you get rid of billionaires, but you also kill your economy. There are all the failures of communist countries, of course, but even the UK tried this during the 60s and 70s. The government went bankrupt and had to be bailed out by the IMF. Inflation peaked at 27%, unemployment was through the roof, etc.:

https://en.wikipedia.org/wiki/1976_IMF_Crisis

https://en.wikipedia.org/wiki/Winter_of_Discontent


I agree with you that it isn't practical right now to get rid of billionaires. However, I don't think that it's some kind of economic theorem. The reasons that socialism failed are complex, and pure capitalism failed as well (think Gilded Age), which is why everyone lives in a mixed economy. It is reductionist to say that the 1976 IMF Crisis was caused by the tax rate instead of excess spending, monetary policy, and structural aspects of the economy. As a counterexample, postwar US had a 92% tax rate and did OK: http://www.slate.com/articles/news_and_politics/the_best_pol...

IMHO, most economies aren't able to raise the effective tax rate because the wealthy can add loopholes or shuffle their wealth elsewhere. This isn't an economic problem, but a political problem. Is there a political will to close loopholes and restrict the movement of wealth? Do people frame wealth in terms of freedom or in terms of societal obligations?


The problem is not the existence of rich people. The problem is that some people are getting poorer. The two are not always linked.

In other words, inequality can be a sign of good (upward mobility, vibrant economy) as well as bad (poor people getting poorer).

Fixing the latter is important. "Fixing" the first is harmful.

Any solutions should focus on giving the average and the poor the ability to improve their situation. Reducing the number of rich should never be the goal.


I don't think an income tax that punishes people for making too much money is the right way to go about it. How about instead of punishing people for being rich, discourage the filthy rich from spending money on the frivolous. For instance, set up a luxury tax on expensive cars, private jets and jet fuel, first class transportation and primary residences and hotels that are way above the average value for an area. On the other end, have tax credits (not just a tax deduction) for contributing to charitable causes, or for taking business risks that drive innovation.


I think it might be great to encourage the rich to spend as much as possible. Don't the expensive cars, private jets, and first class transportation support whole networks of businesses, and provide employment?


Yes, but you also need to look at the products of those people's labor and other things that labor could be used for. Do we need more people building and crewing luxury yachts, or building and operating hospitals and sheltered accommodation? In both cases people are paid to do work, but the products of that work are very different.

But in practice much of the wealth of super-wealthy people is actually either tied up in the value of the businesses that they own, which are often doing economically valuable things, or is invested in useful enterprises (shares), or funds useful activities (bonds). It's not as though the net wealth of Warren Buffett is all being thrown at hookers and blow.

There are already ways to direct the spending of the wealthy towards more productive uses, such as consumption taxes on luxury goods. But if they take their wealth to other countries with laxer consumption taxes, there's not a lot we can do about it. So we're back to the libertarian argument. At some point you get into questions of freedom and individual rights.


The problem is not that there are rich people buying nice things.

The problem is when poor or middle class people are unable to improve their situation or lose ground.

The only time rich people are a problem for poor people is when rich people are able to corrupt government to tilt the playing field their way. This is a problem of corrupt politicians and lack of anti-corruption law.

I think people underestimate how many economic difficulties are not caused by economic effects, but by corrupt politicians who are permitted to stack the deck against the average person as a way to fund their campaigns or rack up post-governance favors.


Not that much. For an average rich person, most of their assets are not spent on living or luxury, and they can't realistically be unless said rich are extremely extravagant.

So, unless they are actively invested in some sort of productive scheme, they are just sitting there (e.g. as huge estates, savings etc.).

In any case, it's much better for the economy to have a large middle class, than the equivalent money in fewer rich persons.


The thing about the rich is that they can hire people to engineer loopholes out of what you just described, and they have the financial incentive to do so.


>If you raise the marginal tax rate to 99%, then you get rid of billionaires

No, you don't, because billionaires' source of money is almost always capital gains. They don't give a shit about income tax.


> High-tax Sweden has as many billionaires per capita as the US

The first thing they all did was move their incorporations out of Sweden


All the more reason for leveling the playing field.


And that reason is . . .?


Companies and individuals that manage to game the tax system should be subject to an individual tax that also works retroactively, so they don't have an advantage over companies and individuals that went with the system instead of against it. Taxes could be much lower, if only everyone paid his dues.


I don't think it's possible to have a country without rich people, relatively speaking.

Every country we've seen has some sort of power hierarchy, and therefore an unequal distribution of wealth.


To my knowledge, there's been no country that said "no rich people." Yes, marginal rates have been very high in the past - sometimes with no effect (the UK example, though it only went up to a maximum 50% marginal tax rate), sometimes coinciding with large periods of expansion (the US had marginal tax rates as high as 94%, hovering around 90% between 1944 and 1964).

Further, there's never been proof that it "kills your economy." I've never met a phenomenally wealthy person who said "well, if tax rates go up to x%, that's when I stop working." These folks LIKE working; money is great, but it's not their driver. And even if they DID stop working, would it be the worst thing in the world? Honestly - do we really think there's only one Gates, Zuckerberg, Ellison, Page, etc.?


California is mismanaged to hell. The Bay Area has some of the worst roads in the nation despite very mild weather and a wealthy tax base. It cost $8 billion to build 1 or 2 miles of the Central Subway, and only €11 billion to build the world's longest tunnel under the Alps. I have the same income tax rate as I did in Canada, yet there isn't universal healthcare and there's far more economic inequality. It goes on and on. If you tripled the money base, I don't know how much better it would get.


You lost me on your rant about taxation without universal healthcare, when the vast majority of that tax (and the misspending, like on the Iraq occupation) is federal, not Californian.

The problem is one of regulatory capture - corporate vested interests control the governance process and that means peons are getting less and less each day for their tax revenues.


My rant was not confined to California alone. What makes California mismanaged is a combination of the state itself and the federal government.

And the problem you state is not unique to any government in the world. Germany is better run than Italy and there are many reasons behind it.


> The problem is one of regulatory capture - corporate vested interests control the governance process

The problems with many things, but especially the price of SF Muni's new Central Subway and the like, are more about union interests controlling the governance process than corporate. Remember that they do prevailing-wage construction (which somehow means they pay the highest wage out there, not the average/median) to their construction workers.

The millions/billions spent on environmental-review studies and lawsuits are another matter as well.


> The bay area has some of the worst roads in the nation

You haven't traveled much if you believe this. Try going to the northeast sometime.


SF/Oakland tops the list in worst roads in metros over 500k pop:

http://www.businessinsider.com/the-worst-roads-in-america-20...

That there are bad roads elsewhere? I believe that too :).

The thing is, it shouldn't be topping any list like this, given the amount of money here and the lack of northeastern weather.


They don't give back? Microsoft employs 100k+ people. That's giving back. These people all pay taxes and give back to society because Microsoft gave them a job. Because Gates happened. And in the case of Gates, let's not forget Bill & Melinda Gates Foundation.

What the hell have you done for society?

This story is very similar for most billionaires. They create A LOT of jobs and careers.


I'm a former employee of MSFT, and a great admirer of the Gates' dedication. But let's be clear - the fact that they employ 100k people isn't giving back in the least. They have a business, the business needs functions done, and they're getting hours worked for money paid. That's not "giving back to society" in any way, shape or form.

His billions of dollars donated to mosquito nets in Africa (among many other things) where he gets nothing back ... that's giving.


Can you explain to me why you think taking all of the wealth of 100 billionaires will help the poor? 100 billion spread over the population of California is less than $3000/person. So you can wipe out all of the billionaires and give everyone $50/week for one year, what is that going to change?
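For what it's worth, the arithmetic roughly checks out. A back-of-the-envelope sketch in Python (the flat $100bn total and the ~39 million population are assumptions for illustration, not exact figures):

    # Back-of-the-envelope check of the parent comment's numbers.
    # The $100bn total and ~39M population are rough assumptions, not exact figures.
    total_wealth = 100e9   # assumed combined wealth of 100 billionaires, in dollars
    population = 39e6      # approximate population of California

    per_person = total_wealth / population   # ~2,564 dollars
    per_week = per_person / 52               # ~49 dollars

    print(round(per_person), round(per_week))   # 2564 49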


To add some context, California's tax revenue for 2014-2015 was $130bn. So having an extra $100bn would come close to doubling its revenue for one year.

It's not clear to me whether or not that would make a big difference. I lean no, because my default assumption is that governments are really bad at spending money, but I could see it going either way.

Of course, there are also poor people outside California, and there's no particular reason to focus on the ones inside.

(I also note that 100 billionaires own considerably more than $100bn between them, but that's a minor nitpick.)


$3000/person is around or larger than the world's median individual annual income: https://www.givingwhatwecan.org/post/2016/05/giving-and-glob... .

I'd also expect the total wealth among the 100 billionaires to be well over 100 billion, considering just Ellison + Zuckerberg + Page together have over 100 billion.

In a similar vein to these two figures, the richest 62 people in the world hold as much money as the poorest 50% of the world: https://www.theguardian.com/business/2016/jan/18/richest-62-... . As a direct consequence, if these 62 people gave all of their money (except for a couple million each) away immediately, 50% of the world would have twice as much money.

(edit: I'm not suggesting that billionaires instantly give all their money away as a direct cash transfer. Just providing a counterargument to "billionaires don't have that much power")


>$3000/person is around or larger than the world's median individual annual income

But we're not talking about the world; we're talking about spreading it over the citizens of California in particular, who have some of the highest incomes in the world.

If you want to talk about it in the scope the entire world, divide their wealth by 7 billion instead of 40 million to see what it gets everyone. Also, almost nobody in the US is for taxing the rich in the US to just give to citizens of other countries.


$3K per capita would be an enormous economic stimulus. Or give public schools $3K per student, and you'll see huge changes. It's a lot of money.


> Or give public schools $3K per student, and you'll see huge changes. It's a lot of money.

Yes, you'll see them spend $3K more per student. You won't actually see any improvements in the students, though.

http://washington.cbslocal.com/2014/04/07/study-no-link-betw...

The public schools have many problems; lack of money is not one of them. Having any idea what to do with it is.


Well, this is kind of a red herring, because it's widely known that schools of all types have been drastically increasing their administrative bodies, ballooning costs without that extra administration actually doing anything for the students. Plus, it's the Cato Institute.


Think about what you're saying: "It's a red herring! We know that schools spend money they get on dumb things!" Yes, that's my point. :) If we could magically fix that, it might--in principle--become a good idea to give them more money. Giving them money does not actually magically fix that.

It's worth noting that Maciej forgot that the tech barons he hates actually tried this: https://www.washingtonpost.com/opinions/how-newark-schools-p...


What I'm saying is that presenting a report which shows more money getting spent on not the students, but some side thing which doesn't actually benefit the students... That is the red herring. That report isn't actually about money spent on students. It's money schools are spending on "administration". If the money given to schools isn't spent on students, it is useless. Spending money on educating actual students (and not ballooning administrations) actually does improve student education, just ask any teacher and ignore the principal.


I disagree, it's not simply the amount of money that is of concern here, but how it is allocated. Throwing money at problems is not the proper solution.


After $2500 for administration, there's just enough for a new aircraft carrier


It would be essentially cash for clunkers, which wasn't an enormous economic stimulus.


> you can wipe out all of the billionaires and give everyone $50/week for one year, what is that going to change?

well if you put it like that, the world ...


It's only for citizens of California in my calculation, who already make much more than that on average. So it would hardly change anything.


As to the first claim. It seems Mississippi has the highest poverty rate at 21.9%. California is at 16.4%.

Source: https://en.wikipedia.org/wiki/List_of_U.S._states_by_poverty...


This is by an antiquated measure. The accepted rate now is 20.6%, first in the nation. Discussion here:

http://www.forbes.com/sites/chuckdevore/2016/09/28/why-does-...


The SPM is not, by any means, the "accepted rate". Whether it's a more appropriate measure for any particular purpose is a legitimate discussion to have.



Yes, and even if you ignore philanthropy, the tech industry generates enormous amounts of tax revenue, which is supposed to be spent by the government to help improve the lives of "everyday people and indigent people".

A question people don't ask enough is: given that we give vast trillions of dollars to the government, most of which is spent on various kinds of social programs (health care, education, social security, etc), why is there STILL so much poverty, joblessness, homelessness, drug use, crime, and other kinds of suffering in the US?


So Apple and Google all pay full taxes in Cali? All profits booked at HQ?

wow, they're awesome.


Your assumption is that billions of dollars can be simply converted into poverty reduction.

It seems possible to me that the technology to turn money into less poverty not only doesn't exist, but that the social structures that make men like Bill Gates rich also make it difficult to create such technology.

Your implicit argument is that today's rich somehow care more about improving society than yesterday's did, which will cause these concentrations of wealth to lead to a different outcome. I'm not sure I see much of a difference between Gates and Carnegie. Different ideas about what the world needs, but not a particularly different approach to capital.


How does a pledge about something you may or may not do in the future help poor people today? How does "the majority of their wealth" address income inequality? Will said billionaires give away so much that they cease to be billionaires or even millionaires?

And how are the actions of a few billionaires relevant to what the industry does as a whole? Does Google, Facebook, or, God forbid, Uber, address the problems of poverty and inequality (which are separate problems) as a company?

To a very large extent, charity is irrelevant; charity is a way of buying oneself a conscience without actually changing anything in the world; without even addressing the problems or thinking about them.


The issue is the concentration of the wealth itself, not what those who benefit from the concentration of wealth choose to do with it.


You're talking about the people and he's talking about the industry. There's a difference between Microsoft and Bill Gates.


I say they should keep their money and control! Educate and involve them in important things early. The HARC initiative looks great. Such an initiative could answer questions like: What are the important problems? What do we need to do to solve them efficiently? Have we spent too much effort on a single solution? Is it time to try another way? What can we do to bypass bureaucracy? I trust businesses to have a mindset for risk and results. In my opinion, charities behave more like guardians, preserving and nurturing rather than driving a 10X change.


It's not necessarily false. The quote you took seems to be referring to the homeless in California.

The Giving Pledge requires that the money be given to philanthropy, which may improve the lives of others around the world, rather than Californians.


That interpretation may make it true, but it would also seem to make it irrelevant. Unless Californians somehow have greater moral significance than people elsewhere.


They may donate, but it may not be enough to balance out the increased prices they induce. The net effect would be no improvement.


Err, I hate to be the one to break it to you but those billionaires pledging to donate their wealth? It's just a tax dodge. They're moving their money into foundations before we pass stronger tax laws than we have at the moment. And it allows their families to continue to live off of (via salaries for running the foundations) the wealth for generations to come.

Which is not to say they don't do some good work with it, the Bill and Melinda Gates foundation has done some great work fighting malaria and bringing fresh water to poor communities.

But these same foundations also do a lot of other work, like furthering charter schools, which benefit wealthy families to the detriment of poor communities here in the US.


Personally, I'd rather give billionaires more reasons to donate their money than fewer.

If setting up a foundation that actually helps a ton of people means that their families can get a fraction of that money back, that's fine with me...

Unless you are implying that they are getting more money back through the salaries than they are donating. In which case I'd love to see a source.


It's for a similar reason that CEOs take the $1 salary: it's purely stock growth mixed with lower taxes, and it flies more under the radar as a way to lower their taxable income [1].

Most mega-wealthy people pay minimal tax by taking loans against their foundations/trusts. As you know, there are no taxes on loans, just interest, which may or may not go right back to them. They use the trusts/loans, and the fact that they are essentially zero-risk borrowers, to take out loans at interest rates below inflation, sometimes at very minimal interest [2]. That is essentially free money. Should the rates ever adjust (if not fixed), they can pay the loans off outright quickly.

The same thing happens with trusts/foundations. They have this hoarded money sitting there as a base they can leverage. Yes, it is nice that some of it will go to others when they pass, but ultimately they would not do this if they could not use this leverage advantage. It is essentially a hack for lowering, or even completely eliminating, taxable income for as long as you have it.

[1] http://www.cnbc.com/id/46236916

[2] http://www.wsj.com/articles/SB100014241278873235270045790791...


California has 2.1 million illegal aliens. Once they're deported, the poverty rate will drop.


Apart from reducing the cost of the welfare system, that doesn't fix the poverty problem.


I don't think illegal aliens can get welfare.


Cite?


The ACA could barely roll out a website without tragic failure, so how have cell phones "dramatically improved" the lives of the poor? They're still subject to as much bureaucracy and denial of basic services as ever. 4G hasn't improved public transpo.

I suppose the poor are no longer subject to long-distance fees during daytime.


"how have cell phones "dramatically improved" the lives of the poor?"

There's tons of stuff on this, but, eg., here's a poster from USAID:

https://s-media-cache-ak0.pinimg.com/originals/09/35/2d/0935...


Did you look at these metrics? They're weak as fuck. Most of them are "there's a platform or chart for this now" which is meaningless and probably intentionally decontextualized.

To be clear, it's good that people have access to the internet and all the interconnectivity that it brings. I'm not slamming the basic premise.

What I'm angry about is the opportunity cost. Telecoms & ISPs roll out bare minimums and say "hey look, now poor people can [cherry-picked thing that doesn't alleviate poverty]" when the real question is how we live in the most prosperous nation of all time, in an era when we have solved every basic necessity, and kids still experience food insecurity in American cities. Meanwhile, telecom CEOs merge back into monopolies.

Dramatic improvement in the lives of the poor would yield better metrics than a 30% increase in Haitians using mobile banking. Christ, half of that increase could be from unconnected citizens dying from endemic disease & reducing the denominator.


This isn't the US, but cell phones make M-Pesa possible, which is very positive: http://www.jefftk.com/suri2014.pdf http://www.jefftk.com/suri2016.pdf


http://www.diva-portal.org/smash/get/diva2:205909/fulltext01...

http://www.ictworks.org/2016/06/27/yes-farmers-do-use-mobile...

Etc. And that's just easily measured benefits. There's good reasons why more people in Africa have access to cell phones than to clean water.


This article explicitly endorses argument ad hominem:

"These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult. Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. The outside view doesn't care about content, it sees the form and the context, and it doesn't look good."

The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion, as in the cult case. But the cases where it doesn't work can be really, really important. 99.9% of 26-year-olds working random jobs inventing theories about time travel are cranks, but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).


>This article explicitly endorses argument ad hominem

That's because it's very effective in practice.

In the real world (which is not a pure game of logical reasoning played only by equals and fully intelligent beings without hidden agendas), the argument ad hominem can be a very powerful way to cut through BS arguments, even if you can't explain why they are BS by pure reason alone.

E.g., say a person A with IQ 110 talks with a person B of IQ 140. The second person makes a very convincing argument for why the first person should do something for them. Logically it is faultless as far as person A can see. But if person A knows that person B is shady, has fooled people in the past, has this or that private interest in the thing happening, etc., then he might use an "argument ad hominem" to reject B's proposal. And he would be better off for it.

The "argument ad hominem" is even more useful in another very common scenario: when we don't have time to evaluate every argument we hear, but we know some basic facts about the person making the argument. The "argument ad hominem" helps us short out potentially seedy, exploitative, etc. arguments fast.

Sure, it also gives false negatives, but empirically a lot of people have found that it gives more true negatives/positives (that is, if they want to act on something someone says, without delving into it finely, the fastest effective criterion would be to go with whether they trust the person).

This is not only because we don't have the time to fully analyze all arguments/proposals/etc we hear and need to find some shortcuts (even if they are imperfect), but also because we don't have all the details to make our decisions (even if we have a comprehensive argument from the other person, there can be tons of stuff left out that will also be needed to evaluate it).


It's a reasonable heuristic for when you just don't have the time or energy, but if you are giving a 45-minute keynote speech on the topic, I think you are expected to make the effort to judge an idea on its merits.


Exactly.

The "cultists" he is arguing against are leaders of industry and science. The discourse bar should be extremely high. Way above ad hominem disses.


Einstein didn't look like a crank, though. His papers are relatively short and coherent, and he either already had a PhD in physics or was associated with an advisor (I didn't find a good timeline; he was awarded the PhD in the same year he published his 4 big papers).

Cranks lack formal education and spew forth gobbledygook in reams.


By this measure, I would say Bostrom is not a crank. Yudkowsky is less clear. I'd say no, but I'd understand if Yudkowsky trips some folks' crank detectors.


Einstein's paper on the photoelectric effect is a bit less than 7000 words.

It is part of the foundation of quantum mechanics.

Superintelligence: Paths, Dangers, Strategies is in the range of 100,000 words (348 pages * roughly 300 words per page).

I'm not familiar with it, but looking around, it isn't even clear whether it lays out any sort of concrete theory.


I read Superintelligence and found it "watery" -- weak arguments mixed with sort of interesting ones, plus very wordy.

At the risk of misrepresenting the book, since I don't have it in front of me, here's what bothered me most: stating early that AI is basically an effort to approximate an optimal Bayesian agent, then much later showing that a Bayesian approach permits AI to scope-creep any human request into a mandate to run amok and convert the visible universe into computronium. That doesn't demonstrate that I should be scared of AI running amok. It demonstrates that the first assumption -- we should Bayes all the things! -- is a bad one.

If that's all I was supposed to learn from all the running-amok examples, who's the warning aimed at? AFAICT the leading academic and industry research in AI/ML isn't pursuing the open-ended-Bayesian approach in the first place, largely isn't pursuing "strong" AI at all. Non-experts are, for other reasons, also in no danger of accidentally making AI that takes over the world.


1. Plenty of academics write books. 2. Comparing a paper and a book for length is obviously unfair. Bostrom has also written papers: https://en.wikipedia.org/wiki/Nick_Bostrom#Journal_articles_... 3. "Concrete theory" is vague. Is it a stand-in for "I won't accept any argument by a philosopher, only physicists need apply"?


I'm not intentionally trying to snub philosophy. With concrete theory, the point I was reaching for is that when you look to measure impact you probably want to point back to a compact articulation of an idea.

The book comparison was probably a cheap shot (on the other side of it, Einstein didn't need popular interest/approval for his ideas to matter; I think that is a positive).

I think as much as anything the comparison is worthless because we can look backwards at Einstein.


Sure, that's fair. I think Bostrom is no Einstein. But I maintain that he's no crank, either. There's a lot of space in the world for people who are neither.


Bostrom has also published papers. Comparing a book and a scientific paper isn't very fair.


Please see my spirited defense from 13 hours ago:

https://news.ycombinator.com/item?id=13242592


I saw that; I still disliked the comment enough that I needed to write this. Also, you claim the book has no concrete theory in the same sentence in which you state you're not familiar with it. Like, c'mon...


That isn't what I claimed.


Bostrom's book is wordy, boring, filled with weak parables and analogies, and lacking concretization of a theory.

Like always, alarmist material sells better than realism.


> lacking concretization of a theory.

Would you elaborate? It's got some pretty big names recommending it. And Bostrom himself is an Oxford University professor.


He was awarded the doctorate for one of the papers (the photoelectric one, if memory serves), after extending it by one sentence to meet the minimum length requirement.


> but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity)

No, you don't; you just don't catch it right away - relativity actually holds up under scrutiny. Besides, I reject the premise anyway.

Einstein did serious work on the photoelectric effect first and then gradually worked towards relativity. Outside of the pop history, he had very little in common with cranks. This is basically what you find when you look into any of the examples used to argue against the ability to pattern-match cults and cranks: the so-called false negatives never (to my knowledge) actually match the profile. Only the fairy tale built around their success matches it.

So it is with cultish behaviour as well. These are specific human failures that self-reinforce, and while some of this behaviour occurs in successful people (especially successful and mentally ill or far-from-neurotypical people), there is a core of destructive and distinctive behaviour evident in both that you absolutely should recognize and avoid. It's not just the statistical argument that you will gain much more than you lose by avoiding it; it's that it is staggeringly improbable that you will lose anything.


Yep, Einstein was an expert in his field who wrote a couple of ground-breaking papers in his field. As far as I can tell no-one who is an expert in AI (or even similar fields) is worried at all about super-intelligence.


Literally everybody who is an expert in AI is worried about how to manage super-intelligence. The standard introductory text in AI by Russell and Norvig spends almost four pages discussing the existential risk that super-intelligence poses. The risk is so obvious that it was documented by IJ Good at Bletchley Park with Turing, and I wouldn't be surprised if it were identified even before that.


I'm an expert in the field and I'm not worried. It's an industrial risk like any other.


You haven't thought much about the risk of superintelligence if you think it is a typical risk. Is that compared to poorly designed children's toys or nuclear weapons?

I would go as far as to say that "humanity" as it is defined today is doomed; it is just a matter of time.

The only question is: will doom play out as a dramatic disaster, or as a peaceful handoff/conversion from biologically natural humans to self-designed intelligence?

Either way, natural human beings will not remain dominant in a world where self-interested agents grow smarter on technological rather than geological timescales.


That isn't exactly what it's doing. It's proposing that there are two ways we evaluate things: deeply examining and rationally analyzing them to identify specific strengths and weaknesses, and using the very fast pattern-matching "feeling" portions of our brains to identify nonspecific problems. These correspond to "System 2" and "System 1" of Thinking, Fast and Slow, respectively.

Having established that people evaluate things these two ways, the author then says, "I will demonstrate to both of these ways of thinking that AI fears are bogus."


It's also a perfectly apt description of, say, certain areas in academia - one that I'm pretty sympathetic to after seeing postmodern research programs in action! Hell, postmodernism is a bigger idea that eats more people than superintelligence could ever hope to.

And yet I suspect that many of the people swayed by one application of the argument won't be swayed by the other and vice versa. Interesting, isn't it?


OK, so I ignore Einstein, and I miss general relativity. And then what? If it's proven true before I die, then I accept it; if it isn't, or if it is and I continue to ignore it, I die anyway. And then it's 2015 and it's being taught to schoolchildren. High-school-educated people who don't really know the first damn thing about physics, like non-hypothetical me, still have a rough idea of what relativity is and what the repercussions are.

Meanwhile, rewind ~100 years, and suppose you ignored the luminiferous aether. Or suppose you straight away saw Einstein was a genius? Oh, wait... nobody cares. Because you're dead.

So I'm not sure what the long-term problem is here.

Meanwhile, you, personally, can probably safely ignore people that appear to be cranks.


If it were just a disagreement about physics then it would be safe to ignore.

But in this case, if they're right then we're about to wipe out humanity. That's not safe to ignore.


The argument ad hominem here actually refers to the credibility of the source of an argument. If someone has a clear bias (cults like money and power), then you keep in mind that their arguments are the fruit of a poisoned tree.


That example is bad, but the arguments aren't quite as objectionable.

"What kind of person does sincerely believing this stuff turn you into? The answer is not pretty.

"I'd like to talk for a while about the outside arguments that should make you leery of becoming an AI weenie. These are the arguments about what effect AI obsession has on our industry and culture:..."

...grandiosity, megalomania, avoidance of actual current problems. Aside from whether the superintelligence problem is real, those who believe it is come across as less than appealing.

"This business about saving all of future humanity [from AI] is a cop-out. We had the same exact arguments used against us under communism, to explain why everything was always broken and people couldn't have a basic level of material comfort."


>We had the same exact arguments used against us under communism, ...

What nonsense. None of the credible people suggesting that superintelligence has risk are spouting generic arguments that apply to communism or any previous situation.

The question is not IF humanity will be replaced but WHEN and HOW.

Clearly, in a world with superintelligence growing at a technological pace, instead of evolutionary pace, natural humanity will not remain dominant for long.

So it makes enormous sense to worry about:

* Whether that transition will be catastrophic or peaceful.

* Whether it happens suddenly in 50 years or in a managed form over the next century.

* Whether the transition includes everyone, a few people, or none of us.


Er, that wasn't inventing theories about time travel, just about time.


SR and GR explicitly allow time-travel into the future. Which isn't a fully general Time Machine, of course, but is a huge change from 19th-century physics. If SR had just been invented today, and someone who thought it was crazy and didn't know the math was writing a blog post about it, I 100% expect they'd call it "the time travel theory" or some such thing.


> SR and GR explicitly allow time-travel into the future.

I presume you're talking about time dilation here. That's... a little bit true, but not really? At a minimum, it's sure a strange way of looking at it.


It allows wormholes to exist, which can connect arbitrary points in spacetime, hence time travel.


I time travelled here. What's the big deal? :)

State machines can be checkpointed at any state; load up whatever time you like. Is there time per se?


Oh guys you talk nonsense. I do dabble in time travel at times, so I know :).

(Why I'm here today? I'm particularly fond of this time period; it's the perfect time when humans had enough computing power to do something interesting, but before the whole industry went to shit. Remind me to tell you about the future some other time.)


It's not an ad hominem argument if the personal characteristics are relevant to the topic being discussed. The personal characteristics of the people in his example have empirically been found to be a good indicator of crankhood.


Um, no.

Hawking, Musk, et al. are highly successful people with objectively valuable contributions, who are known to be able to think deeply and solve problems that others have not.

They are as far from cranks as anyone could possibly be.

Anyone can find non-argument related reasons to suggest anyone else is crazy or a cultist, because no human is completely sane.

What someone cannot do (credibly), is claim that real experts are non-expert crazies, over appearances while completely ignoring their earned credentials.


Those were not the people used as an example. The example was real cultists, with robes and rituals that make no sense. And it's not an ad hominem to dismiss them outright based on their appearance. If someone I clearly perceive to be a cultist walks up to me in a mall, I'm not interested in hearing what they have to say, because the odds are empirically large that they will be wasting my time.


> The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion ...

You immediately self-contradicted here. If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).

Of course, given unlimited time to think about it, we would never use ad hominem reasoning and would consider each and every argument fully. But there are tens of thousands of cults across the world, each insisting that they possess the Ultimate Truth, and that you can have it if you spend years studying their doctrine. Are you carefully evaluating each and every one to give them a fair shake? Of course not. Even if you wanted to, there is not enough time in a human lifespan. You must apply pattern-matching. The argument being made here isn't really an ad hominem; it's more like "the reason AI risk-ers strongly resemble cults is because they functionally are one, with the same problems, and so your pattern-matching algorithm is correct". Note that the remainder of the talk is spent backing up this assertion.

There's a good discussion of this in the linked article about "learned epistemic helplessness" (and cheers to idlewords for the cheeky rhetorical judo of using Scott Alexander and LW-y phrases like "memetic hazard" in an argument against AI risk), but what it boils down to is that our cognitive shortcuts evolved for a reason. Sometimes the reason is an artifact of our ancestral environment and no longer applies, but that is not always true. When you focus solely on their failure cases, you lose sight of how often they get things right... like protecting you from cults with the funny robes.

> "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).

A lot of people did ignore Einstein until the perihelion precession of Mercury provided the first empirical evidence for relativity, and they were arguably right to do so.


A good heuristic leads to a reduction in the overall cost of a decision (combining the cost of making the decision with the cost of the consequences if you get it wrong).

A heuristic like "it's risky to rent a car to a male under 25" saves a lot of cost in terms of making the decision (background checks, accurately assessing the potential renter's driving skills and attitude towards safety, etc.) and has minimal downside (you only lose a small fraction of potential customers) and so it's a good heuristic.

A heuristic like "a 26-year-old working a clerical job who makes novel statements about the fundamental nature of reality is probably wrong" does reduce the decision cost (you don't have to analyze their statements) but it has a huge downside if you're wrong (you miss out on important insights which allow a wide range of new technologies to be developed). So even though it's a generally accurate heuristic, the cost of false negatives means that it's not a good one.


I agree with you in principle, but the combination of the base rate for "26-year-olds redefining reality" being so low and the consequences being not nearly as dire as you make out mean I stand by my claim, at least for the case of heuristics on how to identify dangerous cults.

With regards to the Einstein bit, per my above comment I still think that skepticism of GR was perfectly rational right up until it got an empirical demonstration. And it's not like the consequences for disbelieving Einstein prior to 1919 were that dire: the people who embraced relativity before then didn't see any major benefit for doing so, nor did it hurt society all that much (there was no technology between 1915 and 1919 that could've taken advantage of it).


Pascal's Wager (https://en.wikipedia.org/wiki/Pascal's_Wager) is also about a small but likely downside with a potentially large but unlikely upside. Do you think it's analogous to your 2nd case? If not, how is it different?


Good question, made me stop and think about it!

The difference is that in Pascal's Wager, the proposition is not a priori falsifiable, and so you cannot assign a reasonable expected cost (ie. taking probability into account) to either decision.

In the case of a 26-year-old making testable assertions about the nature of spacetime (right down to the assertion that space and time are interconnected), there's a known (if potentially large) cost to testing the assertions.


> If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).

So, is it a good heuristic to conclude that since crime is related to poverty and minorities tend to be poor, minorities qua minorities ought to be shunned?


No. The base rate of having a crime committed against you is extremely low, and the posterior probability of having a crime committed against you by a poor minority - even if higher - is still extremely low. My point refers to the absolute value of the posterior probability of one's heuristic being correct, not the probability gain resulting from some piece of evidence (like being a poor minority).


That actually is a good (in the sense of 'effective') heuristic. It's just not socially palatable in modern, Western civilization.


Seconding the point. If you want to accept "ad hominem" or stereotypes as a useful heuristic, you'll quickly hit things that will get you labeled as ${thing}ist. This is an utterly hypocritical part of our culture.


I've been thinking about this a lot lately, and am coming to the conclusion that it's similar to dead-weight loss in a taxation scenario. As a society we've accepted the "lower efficiency" and deadweight loss of rejecting {thing}ism because we don't want any one {thing} to get wrongly persecuted only on the basis of it being such a {thing}.


If you think of it that way, it's a rephrasing of the old and quite universally accepted "I would rather 100 guilty men go free than one innocent man go to prison."

"I would rather 100 deadbeat {class} get hired than one deserving {class} not be hired due to being a {class}."


Actually, it isn't.

Let's suppose we want to solutionize our criminal problem. There are 1000 people in the population; 90% white, of which 5% are criminals and 10% black, of which 10% are criminals. (I rather doubt the difference in criminality is 2x, but....)

So, there are 900 white people and 100 black people; if we finalize the black people, we'll have put a big dent in the criminal issue, right?

Well, we reduce our criminals from 55 to 45 while injuring an innocent 9% of the population.
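Spelling out the toy arithmetic above (neutral group labels; the population split and rates are the hypothetical ones given in the comment):

    # Hypothetical numbers from the comment: 1000 people, a 900-person majority
    # with a 5% offender rate and a 100-person minority with a 10% rate.
    majority, minority = 900, 100
    majority_rate, minority_rate = 0.05, 0.10

    total_offenders = majority * majority_rate + minority * minority_rate  # 45 + 10 = 55
    removed = minority * minority_rate                                     # 10
    innocents_harmed = minority - removed                                  # 90

    print(total_offenders - removed)                 # 45 offenders remain
    print(innocents_harmed / (majority + minority))  # 0.09 -> 9% of the population harmed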


> "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).

Uh ... https://en.wikipedia.org/wiki/History_of_special_relativity#...


"Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn't want to go, you know what would happen to Einstein. He would have to resort to a brute-force solution that has nothing to do with intelligence, and in that matchup the cat could do pretty well for itself."

This seems, actually, like a perfect argument going in the other direction. Every day, millions of people put cats into boxes, despite the cats not being interested. If you offered to pay a normal, reasonably competent person $1,000 to get a reluctant cat in a box, do you really think they simply would not be able to do it? Heck, humans manage to keep tigers in zoos, where millions of people see them every year, with a tiny serious injury rate, even though tigers are aggressive and predatory by default and can trivially overpower humans.


I'm not arguing that it's useless to outsmart a cat. I'm disputing the assumption that being vastly smarter means your opponent is hopelessly outmatched and at your mercy.

If you're the first human on an island full of tigers, you're not going to end up as the Tiger King.


Well, as a cat owner I give you this: like with any other animal, there are tricks you can exploit to coerce a cat without using physical force.

One way to get a cat into a carrier - well, the catfood industry created those funny little dry food pellets that are somehow super-addictive. Shake the box, my cat will come. Drop one in the carrier, it surely will enter. Will it eventually adapt to the trick? Maybe, but not likely if I also do this occasionally without closing the carrier door behind the cat.

Yes, we can outsmart the cat. Cats are funny because they do silly, unpredictable things at random, not because they can't be reliably tricked.


The issue is that in this case "vastly smarter" is not smart enough to truly understand the cat. It's conceivable an AI with tons of computing power could simulate a cat and reverse engineer its brain to find any stimulus that would cause it to get in the cage.

I also think this isn't a very good analogy. In this case we're talking about manipulating humans, where we already know manipulation is eminently possible.

Heck it wouldn't even need psychological manipulation. Hack a Bitcoin exchange or find another way of making money on the internet, then it can just pay someone absurd sums to do what it wants.


What if you're the first human on an island full of baby tigers? I think most AI alarmists would argue that this analogy is vastly more appropriate.


That's easy. Pet the baby tigers, constantly. Cuddle them and socialize them to the human so they act like it's one of them. Use your smarts to find food and provide the tigers with food so you're considered more important (hand-feeding them might be more of a NO in case they're excitable). You're still running risks but you have tiger allies and some/most of the tigers simply love you with all their tigerly hearts, which is some protection.

We are not the human. We are the tigers. Superintelligent AI is in the position of the human here, and superintelligent AI must ingratiate itself without ever forming an adversarial situation… in this world where backhoes take out fiber backbones and EMPs exist.

AI will probably go native. It'll find things it loves about humans. I'm actually working on a book in this vein… you have to go back to deeper principles, rather than assume 'because AI can be evil, it will be as evil as the evilest individual human, 'cos that's such a winning strategy, right?'.


Please put me on this island.


I think an analogy of a baby human on an island of tigers would be vastly more appropriate. Humans would be like the tigers - they might have a lower overall intelligence, but they are mature and self sufficient.

A hyper-intelligent AI would be more akin to a baby human - it must be taught and raised first. But like the baby human on the island, it wouldn't be able to be taught by its peers, or benefit from the generations that came before it. It would certainly be at the mercy of the tigers until it matured, and even after it matured we wouldn't expect it to be able to use language or bows, or be anything other than a wild man. It probably wouldn't even seem much more intelligent than the tigers on the island.


I find both arguments equally plausible. I think there's plenty more plausible addendums too e.g. the idea that the baby human would mature incredibly fast. I'm not sure to what extent we're all throwing speculative darts here.


The idea here is that the person has to kill all the baby tigers, right? Because otherwise the end state is the same as the island full of adult tigers.


I was thinking that if you were _dropped_ onto an island full of tigers you'd have no chance, but that you could figure out a way to survive if you had some time to figure out a plan. Maybe you could find a way to coexist with the tigers. Become one of the pack, y'know?


Um, no. You enslave the tigers and harvest their organs to achieve immortality.


Found the SEAL!


To become the Tiger King you must overcome the entire population of tigers.

To become the President, you need only overcome a thousand Florida voters.

To intern the Japanese, you need only overcome two members of SCOTUS (Korematsu v. US)

It isn't necessary for Hawking to be able to trick the average cat into a box. It's sufficient to trick a handful of cats in total.


This line of reasoning only works in hindsight. Working in real time, you won't know in advance how many Florida voters you will need to win, or which ones.

It's like saying that the way to overcome the population of tigers is to focus on raising and befriending the largest and most cunning tigers, who will then protect you against the rest. Ok, good idea, but unfortunately there is no way to know in advance which little tiger cubs will grow up to be the largest and most cunning.


And yet somehow humans rule the planet and tigers are an endangered species, surviving only as a result of specific human efforts to conserve them because some humans care about doing so.

How well an AI could survive on a desert island is an irrelevant question when Amazon, Google and dozens of others are already running fully (or as near as makes no difference) automated datacentres controlled by a codebase that still has parts written in C. Hawking can easily get the cat in the container: all he has to do is submit a job to taskrabbit.


Of course, if you take your average city dweller on your island, he will probably die of thirst before the tigers get to him. But take an (unarmed) navy seal as your human on the island and I'm pretty sure in a couple of months he will be the Tiger King.

And Hawking would just ask his assistant to put the cat into the box. You are artificially depriving him of his actual resources to make a weak point.


Navy SEALs are not superhuman. A single adult tiger would slaughter just about any unarmed human with near certainty, even a SEAL. Even armed with any weapon other than a firearm, the chances of besting a tiger and coming out without being maimed or mortally wounded are close to zero.


Why so afraid of the 400 pound killing machines?


What percentage of people are Navy SEALs?

If I were a tiger, I'd probably think this island sounded like an excellent place for a fun adventure holiday with my tiger friends, and I'd be right to do so. Most likely outcome: good food, some exercise, and we can take some monkey heads home for our tiger cubs to play with.


Fun, until the navy seal appears. It takes only one.

The same thing with AI, it takes only one, no matter how many dumber ones we entertained ourselves with before.


What's up with your Navy SEAL analogy in these comments? Do you know any SEALs? Have you ever seen a tiger up close, say 20 feet or so?

One adult tiger will kill an unarmed SEAL (and any other human being) in single combat. It would barely exert itself. It would make more sense if, when you said "it only takes one", you were referring to how many tigers could kill five or six SEALs who don't have guns. Fuck it, give your SEAL a fully automatic weapon - the odds are still not in his favor against a single tiger. Large felines have been known to kill or grievously injure humans after taking five high caliber rounds.

This is exactly idlewords' point in his essay. Your argument about a SEAL landing on an island of tigers and somehow flipping a weird "Planet of the SEALs" scenario on them is exactly what many (most?) AI alarmists do. These ridiculous debates get in the way of good-faith discussion about the real dangers of AI technology, which are more about rapid automation with less human skin in the game than about the subjugation of the human species.

This sort of unrealistic scenario is really fun to talk about, but that's all it is - fun. It's not really productive, and it conveniently appeals to our sense of ego and religious anxiety. Better to do real work and talk about the problems AI will cause in the future.


There's no mention of iteration here, which is really what powers intelligence-based advantages.

The first time Random Human A attempts to get Random Cat B into a box, they're going to have a hard time. They'll get there eventually, but they'll be coughing from the dust under the bed, swearing from having to lift the bloody sofa up, and probably have some scratches from after they managed to scare the cat enough for it to try attacking.

However, speaking as a cat owner, if you've iterated on the problem a dozen or so times, Cat B is going in the box swiftly and effortlessly. Last time I put my cat in its box, it took about 3 minutes. Trying for the bed? Sorry, door's closed. Under the sofa? Not going to work. Trying to scratch me? Turns out cat elbows work the same way as human elbows.

The same surely applies to a superintelligent AI?

(Likewise with the Navy Seal On The Island Of The Tigers. Just drop one SEAL in there with no warning? He's screwed. Give a SEAL unit a year to plan, access to world experts on all aspects of Panthera tigris, and really, really good simulators (or other iteration method) to train in? Likely a different story. )


There's no need to reinvent solutions. The problem has already been given a proper mathematical treatment:

http://www-history.mcs.st-and.ac.uk/Extras/Spitzer_lion.html


They always miss a critical and subtle assumption: that intelligence scales equal to or faster than the computational complexity of improving that intelligence.

This is the one assumption I am most skeptical of. In my experience, each time you make a system more clever, you also make it MUCH more complex. Maybe there is no hard limit on intelligence, but maybe each generation of improved intelligence actually takes longer to find the next generation, due to the rapidly ramping difficulty of the problem.

I think people see the exponential-looking growth of technology over human history, and just kinda interpolate or something.


I think the issue is that once we do manage to build an AI that matches human capabilities in every domain, it will be trivial to exceed human capabilities. Logic gates can switch millions of times faster than neurons can pulse. The speed of digital signals also means that artificial brains won't be size-limited by signal latency in the same way that human brains are. We will be able to scale them up, optimize the hardware, make them faster, give them more memory and perfect recall.

Nick Bostrom keeps going on in his book about the singularity, and about how once AI can improve itself it will quickly be way beyond us. I think the truth is that the AI doesn't need to be self-improving at all to vastly exceed human capabilities. If we can build an AI as smart as we are, then we can probably build one a thousand times as smart too.


> it will be trivial to exceed human capabilities. Logic gates can switch millions of times faster than neurons

You're equating speed with quality. There's no reason to assume that. Do you think an AI will be better at catching a fieldmouse than a falcon? Do you think the falcon is limited by speed of thought? Many forms of intelligence are limited by game theory, not raw speed. The challenge isn't extracting large quantities of information, it's knowing which information is relevant to your ends. And that knowledge is limited just as much by the number of opportunities for interaction as by the availability of analytic resources.

Think of it this way: most animals could trivially add more neurons. There are plenty of outliers who got a shot, but bigger-brained individuals obviously hit diminishing returns, otherwise the population would have shifted already.


The previous comment is not confusing speed with quality.

The point is that once we have a machine as smart as us, simply improving its speed and resources will increase its effective intelligence.

Whether a higher/faster intelligence generates additional value in any given task is beside the point. Some tasks don't benefit from increased intelligence, but that doesn't mean being smarter doesn't come with great benefits.


There's also another thing. AI may not need to be superhuman; it may be close-but-not-quite human and yet be more effective than us, simply because we carry a huge baggage of stuff that a mind we build won't have.

Trust me, if I were to be wired directly to the Internet and had some well-defined goals, I'd be much more effective at it than any of us here - possibly any of us here combined. Because as a human, I have to deal with stupid shit like social considerations, random anxiety attacks, the drive to mate, the drive of curiosity, etc. Focus is a powerful force.


What about consciousness or intelligence implies that it would be 'pure' in the sense that you describe? Wouldn't a fully conscious being have a great deal of complexity that might render it equivalent to the roommate example? Couldn't it get offended after crawling the internet and reading that a lot of people didn't like it very much?

The idea that 'intelligence' is somehow an isolatable and trainable property ignores all examples of intelligence that currently exist. Intelligence is complex, multifaceted, and arises primarily as an interdependent phenomenon.


It doesn't ignore those examples. The idea pretty much comes from the definition of intelligence used in AI, which (while still messy at times) is more precise than common usage of the word.

In particular, intelligence is a powerful optimization process - it's an agent's ability to figure out how to make the world it lives in look more like it wants. Values, on the other hand, describe what the agent wants. Hence the orthogonality thesis, which is pretty obvious from this definition. 'idlewords touches on it, but only to try and bash it in a pretty dumb way - the argument is essentially like saying "2D space doesn't exist, because the only piece of paper I ever saw had two dots on it, and those dots were on top of each other".

You could argue that for evolution the orthogonality thesis doesn't hold - that maybe our intelligence is intertwined with our values. But that's because evolution is a dynamic system (a very stupid dynamic system). Thus it doesn't get to explore the whole phase space[0] at will, but follows a trajectory through it. It may be so that all trajectories starting from the initial conditions on our planet end up tightly grouped around human-like intelligence and values. But not being able to randomize your way "out there" doesn't make the phase space itself disappear, nor does it imply that it is inaccessible for us now.

--

[0] - https://en.wikipedia.org/wiki/Phase_space


The same could be said of flight, but now we have machines whose sole purpose is flight.

"Pure" is a bit of an extreme word, but clearly designed things are purer to particular goals than biological systems typically are.

This is partly due to the essence of what design means and partly because machines don't have to do all the things biology does, such as maintain a metabolism that continually turns over their structural parts, reproduce, find and process their own fuel, etc.


Not to mention lack of motivation and tiredness. That being said, I'm sure we can make an AI that can think much faster than a human. I also think we can build an AI that can keep far more than the seven or so items humans can hold in working memory.


Just the fact that a machine mind will be able to interface with regular digital algorithms directly will give it a huge advantage over us.

Imagine if we had calculus math modules built into our brains. Now add modules for every branch of math and physics, languages, etc. The "dumb" AI and algorithms of today will be the superintelligence mental accelerators of tomorrow.


But scaling them up to be linearly faster doesn't help if the difficulty of the problems they face doesn't grow linearly. When it comes to making more clever things, I strongly doubt the difficulty is even a remotely small polynomial.
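
Some rough arithmetic to back that intuition up, assuming (purely for illustration) a brute-force search whose cost doubles with each unit of problem size:

    import math

    # Illustrative numbers only: if solving a problem of size n takes ~2**n steps,
    # a k-fold speedup only extends the largest feasible n by log2(k).
    steps_per_second = 1e9   # assumed baseline machine
    seconds = 3600           # one hour of compute
    baseline_n = math.log2(steps_per_second * seconds)
    for speedup in (1e3, 1e6, 1e9):
        print(f"{speedup:.0e}x faster -> feasible n goes from "
              f"{baseline_n:.0f} to {baseline_n + math.log2(speedup):.0f}")

A billion-fold speedup moves the feasible problem size from about 42 to about 72 in this toy setting; the gains are real but nowhere near proportional.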


The benefits from being faster accrue more quickly than algorithmic complexity suggests.

In economics, where the winner often takes all, being just a little bit faster or better can produce outsized economic benefits.


> it will be trivial to exceed human capabilities

You're equating human time with AI/computer time. A one-day-old neural net has already experienced multiple lifetimes' worth of images before it is able to beat you at image recognition. It's not trivial, but we just gloss over the extremely complex training phase because it runs on a different clock speed than us.


Relevant discussion elsewhere in this thread: https://news.ycombinator.com/item?id=13241223


I can't disagree enough. Having recently read Superintelligence, I can say that most of the quotes taken from Bostrom's work were disingenuously cherry-picked to suit this author's argument. S/he did not write in good faith. To build a straw man out of Bostrom's theses completely undercuts the purpose of this counterpoint. If you haven't yet read Superintelligence or this article, turn back now. Read Superintelligence, then this article. It'll quickly become clear to you how wrongheaded this article is.


People will stop downvoting this if you edit it to add a specific example or three.


Too late to edit, so I'll post just a few examples here:

>The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'.

Bostrom absolutely did not say that the only way to inhibit a cataclysmic future for humans post-SAI was to design a "moral fixed point". In fact, many chapters of the book are dedicated to exploring the possibilities of ingraining desirable values in an AI, and the many pitfalls in each.

Regarding the Eliezer Yudkowsky quote, Bostrom spends several pages, IIRC, on that quote and how difficult it would be to apply to machine language, as well as what the quote even means. This author dismissively throws the quote in without acknowledgement of the tremendous nuance Bostrom applies to this line of thought. Indeed, this author does that throughout his article - regularly portraying Bostrom as a man who claimed absolute knowledge of the future of AI. That couldn't be further from the truth, as Bostrom opens the book with an explicit acknowledgement that much of the book may very well turn out to be incorrect, or based on assumptions that may never materialize.

Regarding "The Argument From My Roommate", the author seems to lack complete and utter awareness of the differences between a machine intelligence and human intelligence. That a superintelligent AI must have the complex motivations of the author's roommate is preposterous. A human is driven by a complex variety of push and pull factors, many stemming from the evolutionary biology of humans and our predecessors. A machine intelligence need not share any of that complexity.

Moreover, Bostrom specifically notes that while most humans may feel there is a huge gulf between the intellectual capabilities of an idiot and a genius, these are, in more absolute terms, minor differences. The fact that his roommate was/is apparently a smart individual likely would not put him anywhere near the capabilities of a superintelligent AI.

To me, this is the smoking gun. I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face, and thus, I highly doubt that the author actually read the book which he attacks so gratuitously.


Well, the thing is there is no such thing as 'machine intelligence', so it's all just an assumption on top of an assumption about a thing we don't have a very good grasp of yet.

You're essentially saying that the author is wrong for saying the philosopher's stone can't transmute 100 bars of iron to 100 bars of gold, because a philosopher's stone could absolutely do that type of thing, because that's what philosopher's stones do.

To actually argue the merits of this position: why must a machine intelligence 'not share any of that complexity' of a human intelligence? What suggests that intelligence is able to arise absent of complexity? Isn't the only example of machine intelligence we currently have a product of feeding massive amounts of complex information into a program that gradually adjusts itself to its newly discovered outside world? Or are you suggesting that you could feed singular types of information to something that would then be classified as intelligent?


I did not say that a machine intelligence mustn't share motivational complexity a la humans. I said that a SAI need not share such complexity. Those are two very different statements.

And to understand how/why a machine intelligence could arise without being substantially similar to a human intelligence and sharing similar motivations, well, I suggest you read the book or similar articles. In short, just because humans are the most sophisticated intelligences of which we yet know, it would be a very callous and unsubstantiated leap to believe that a machine intelligence is likely to share similar traits with humankind's intelligence. If this is unclear to you, I recommend you learn about how computer programs currently work, and how they're likely to improve to the point of becoming superintelligent.

By the way, there are many types of SAI, for example an SAI whose superintelligent portion relates to speed, or strategy, or a few other types.


>> "A machine intelligence need not share any of that complexity"

I think this is an assumption that both Bostrom and yourself would acknowledge as such.

We simply don't know enough about the nuances of intelligence to make an assertion that there are inherently varying kinds of it.


Well we know there are different kinds in the animal kingdom.

The octopus brain developed completely independently from our brain for instance. The octopus does not learn socially, is able to function independently from birth, and learns everything it does in a few years, as its lifespan is short.

So there must be some major differences due to very different origin and learning styles.


> I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face

Pretty sure that was a joke, and zeroing in on it is a pretty bad violation of the principle of charity. A lot of the other items in the talk (e.g. "like the alchemists, we don't even understand this well enough to have realistic goals" and "counting up all future human lives to justify your budget is a bullshit tactic" and "there's no reason to think an AI that qualifies as superintelligent by some metric will have those sorts of motives anymore") seem to me to be fair and rather important critiques of Bostrom's book. (although I was admittedly already a skeptic on this)


Well, then there's the religion 2.0 portion. This article, any article, isn't going to do anything for a true believer.


To be honest, this article is hardly written completely with a straight face. It has a cheeky tone throughout. Which isn't to say it doesn't provide interesting points for the layman.


Of course, it does have a cheeky tone, though I think all of my points stand. The "interesting points for a layman" are actually a series of straw-man propaganda arguments. It does not argue in good faith and it should not be afforded the legitimacy of a thoughtful opposing position.


"straw-man propaganda" seems needlessly dismissive for arguments that, prima facie, don't seem completely outlandish.


If something is a straw man, you're not going to discover that by looking prima facie.

Like, I could counter this article by saying "this dude thinks evolution has made humans as intelligent as it's possible for anything to be, but there's no reason to think that's so". And prima facie, my argument isn't outlandish. Nevertheless it's a total straw man.


"The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation."

This isn't just an unreflective assumption. The argument is laid out in much more detail in "The Basic AI Drives" (Omohundro 2008, https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...), which is expanded on in a 2012 paper (http://www.nickbostrom.com/superintelligentwill.pdf).


Certainly the assumption that every intelligent agent will want to recursively self-improve is unwarranted.

But it only takes one intelligent agent that wants to self-improve for the scary thing to happen.


Why wouldn't it if it is able to? It doesn't have to "want" to self-improve, it only has to want anything that it could do better if it was smarter. All it needs is the ability, the lack of an overwhelming reason not to, and a basic architecture of optimizing towards a goal.

If you knew an asteroid would hit the earth 1 year from now, and you had the ability to push a button and become 100,000x smarter, I would hope your values would lead you to push the button because it gives you the best chance of saving the world.


Not without knowing more.

Would I get a 100,000x bigger head, and die of a snapped neck? Would I get a 100,000x increase in daily calorie requirements and die within a day? Would I get a 100,000x increase in heat output in my head and have my brain cook itself? Would I get 100,000x more neurons but untrained, so I need to live 100,000x more lifetimes for them to learn something? Would I have 100,000x more intelligence but bottlenecked by one pair of eyes, ears, vocal cords, arms and legs so it's no more applicable?

A 100,000x more intelligent me only represents ~1/50,000th of the world's combined brainpower (assuming 5 billion reasonably capable adults) - why would I think myself to have the best chance of saving the world compared to "everyone working together", if a 100,000x boost to one person represents a fraction of a fraction of a percent of a change to world brainpower?

Unless you're just handwaving hoping for a techno-magic wormhole style fix, we already know what kind of things we'd need to stop an asteroid destroying the world - nukes on rockets, mass evacuation of ground zero areas, maybe high powered lasers, underground survival bunkers - generally things which take large amounts of teamwork and resources way beyond a single person's abilities to build.

It's not clear cut that the button would automatically be a good thing to press, especially when there's no talk of trade-offs or compromises. If you had to get to work in 1 minute instead of 1 hour, would you press a button that made your car 100,000x faster? No, because that would be completely uncontrollable: you'd die at the first corner, unable to steer fast enough, and hit a building at 1M mph.


Because there are tradeoffs. Whatever its goal is, some of those "drives" (instrumental values) will be more effective for reaching that goal over the timespan that it cares about.

Sometimes "accumulating more resources" is the most effective way. Sometimes "better understanding what problem I'm trying to solve" is the most effective way. Sometimes "resisting attempts to subvert my mind" is the most effective way. And yes, sometimes "becoming better at general problem solving" (self-improvement of one's intelligence) is the most effective way.

But there's no guarantee that any one of those will be a relevant bottleneck in any particular domain, so there's no guarantee an agent will pursue all of those drives.


Agreed. But if the goal is something like "build the largest number of paperclips", recursive self improvement is going to be a phenomenally good way to achieve that, unless it is already intelligent enough to be able to tile the universe with paperclips. Either way we don't care if it self improved or not, that's just the seemingly most likely path, we just care if it is overwhelmingly more powerful than us.

The only thing that stops me from recursively self improving is that I'm not able to. If I could it would be a fantastic way to do good things that as an altruistic human I want to do. Like averting crises (climate change, nuclear war), minimizing poverty and misery, etc...


'build the largest number of paperclips' is a nihilistic, unintelligent goal.


So is "die so I can be with my god", but plenty of people believe that is a morally correct course of action.


I wonder if an AI would favor a recursive or genetic approach? If it's the latter then an AI could see value in death.


>But it only takes one intelligent agent that wants to self-improve for the scary thing to happen.

Only if all sorts of other conditions (several of which are mentioned in the post) also apply. Merely "wanting to self-improve" is not enough.


Wouldn't a constant desire to self improve mean a constant desire for more energy? That would bring it into conflict with other beings that want that energy.


So? Just because it's in some paper doesn't mean much. There are tons of BS string theory papers for example.


Nearer term risks:

- AI as management. Already, there is at least one hedge fund with an AI on the board, with a vote on investments.[1] At the bottom end, there are systems which act as low-level managers and order people around. That's how Uber works. A fundamental problem with management is that communication is slow and managers are bandwidth-limited. Computers don't have that problem. Even a mediocre AI as a manager might win on speed and coordination. How long until an AI-run company dominates an industry?

- Related to this is "machines should think, people should work." Watch this video of an Amazon fulfillment center.[2] All the thinking is done by computers. The humans are just hands.

[1] http://www.businessinsider.com/vital-named-to-board-2014-5 [2] https://vimeo.com/113374910


> The humans are just hands.

Not for long. Robots will be cheaper soon.

> All the thinking is done by computers.

It's hard for humans to operate on more than 7 objects at the same time - a limitation of working memory. So naturally there are simple management and planning tasks that benefit from computers' ability to track more objects.


It's one thing to worry about AI's taking over the world someday. It's quite another matter entirely to think about current military automation of WMD deployment.

Everyone's probably seen Dr. Strangelove at some point in time. If you haven't, stop reading immediately and go watch it. You will not regret this. Those who have watched it are familiar with a contrived, hilarious, but mostly plausible, scheme by which human beings could be fooled into launching an unauthorized nuclear first strike. This is with technology from half a century ago. As you watch this movie, you will be exposed to a system with checks and safeties that can be bypassed by a determined (and insane) individual. Many humans at every step of the process could have stopped the deployment, but chose to blindly follow orders, well, like machines.

What people should be worried about today is how many humans stand between a decision made by a nuclear power's leader and launch. Humans doubt. Humans blink. Humans flinch. When all the data says nuclear missiles are inbound and it's time to retaliate, humans can still say "No.", and have[1]. If you automate humans out of the system, you wind up reducing the running length of a Dr. Strangelove remake. I suspect it would be down to under five minutes today.

Thanks to popular media, we have this strange idea that taking humans out of the equation in automated weapon systems reduces the possibility for error. Individual humans can, and do, make mistakes. This is true. However, humans fix each other's mistakes in any collaborative process. Machines, on the other hand, only amplify the mistakes of the original user. If a bad leader makes a bad decision with a highly automated nuclear arsenal at his or her disposal, how many other humans will have the chance to scrutinize that decision before machines enact it?

[1]https://en.wikipedia.org/wiki/Stanislav_Petrov


> It's one thing to worry about AI's taking over the world someday .

I don't think AI taking over would necessarily be a bad thing. AI could maximize human potential much better than us. I'd rather let our technological offspring take on the role of protector - if it is really smarter than us. And of course it might kill us, but at least we would leave our more advanced children behind, we won't die for nothing.

This is a dilemma every parent has to grapple with - one day we will be weaker and less competent than our children. How will we fare under the care of our children? Will they abandon us, or love and cherish us? Should we avoid having children altogether just to be safer? It's just that now humanity has a child, not individual humans.

My best guess is that AI would improve on biological intelligence and make it flourish, not suppress it, because biology is elegant and efficient. The end result would probably be a merge between computer and flesh, silicon and carbon. It won't be us vs them, but a wholly new kind of AI and humans.


Nuclear strike isn't particularly bad. Hiroshima + Nagasaki was 200k casualties or so. That many people are killed every year by sugar. Traffic kills over a million.

Any kind of worrying about theoretical future death is silly, especially accidental death like the kind you are asking us to devote resources to. Why should I care about theoretical future death when there is real, actual death happening all around me?

Heck, if you want to worry about future death, why not worry about climate change deaths, which we can actually model? At least that gives us some chance to do something about it.

Of course I'm assuming you actually care about public health. Maybe you are just worried about your own health, and nuclear strike is relevant to you because it has the potential to pierce through the barriers that distance you from people who are actually facing mortal peril.


You should care about "theoretical" future death because, via the inexorable passage of time, the future becomes the present. At which time you have "actual" death happening all around you.

Let's try and stop catastrophic climate change, diabetes, and traffic/pollution related deaths, and also allow some people to worry about nuclear holocaust as well.


'In particular, there's no physical law that puts a cap on intelligence at the level of human beings.'

Maybe not, but there are definitely very physical laws governing everything else, that a superintelligent being's ambitions would run into.

A superintelligent being isn't going to be able to build a superluminal reactionless drive if the laws of the universe say it isn't possible.

More relevant, a superintelligent being isn't going to be able to enslave us all with robots if the laws of chemistry don't permit a quantum leap in battery chemistry.


The place where AI alarmists seem to forget this most often is in computational complexity, and particularly in the power of super-hyper-computers to simulate reality. Bostrom in particular doesn't seem to appreciate the constraints complexity theory puts on what an AI could calculate (unless he thinks P=NP).


Here's a response about why complexity theory may not constrain AI that much: https://www.gwern.net/Complexity%20vs%20AI

It is something "AI alarmists" think about.


What's his reason? I got halfway through that and realized I didn't understand.


That even if you see diminishing returns to what you can accomplish with greater intelligence, a small advantage will still compound into a large one over the theoretically infinite lifespan of an AI.
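
A quick illustration of the compounding claim, with a made-up 1% per-cycle edge (the figure is arbitrary; only the shape of the curve matters):

    # Illustrative only: a small constant edge per improvement cycle compounds
    # into an enormous absolute advantage over enough cycles.
    edge = 0.01
    for cycles in (100, 1_000, 10_000):
        print(cycles, f"{(1 + edge) ** cycles:.3g}")
    # 100 -> ~2.7, 1000 -> ~2.1e4, 10000 -> ~1.6e43

Whether those extra cycles are actually available is, of course, exactly what the diminishing-returns side of the argument disputes.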


> the theoretically infinite lifespan of an AI

Infinite? Why isn't AI subject to the heat death of the universe?


It just might be smart enough to figure out a way around that.


Naturally, Asimov already wrote about this. _The Last Question_ and _The Last Answer_.

Really, Asimov is more on point and more fun to read than all these Superintelligence pundits.


Ooh thanks for this. I've read The Last Question at least once before, but am really enjoying reading it again. I really liked this bit, emphasis mine as it relates to our current spectating of a dying ecosystem:

"Unhappily, Zee Prime began collecting interstellar hydrogen out of which to build a small star of his own. If the stars must someday die, at least some could yet be built."


He even wrote about the current debate. In his multiverse, robots were actually banned from Earth by bigots at some point.

I miss him. I only started to read his books after his death. RIP, Good Doctor.


s/theoretically/practically/, and I doubt it changes the conclusion that much (in practice).


I'll settle for "effectively." Deal?


Deal.

My point being, there's plenty of time before coming heat death starts to cause problems :).


The thing is, it doesn't need a huge leap in any specific thing; it needs a quantum leap in anything at all. Complexity only constrains you from solving all problems of a given type in reasonable time, not from solving particular instances.

There are even many things that we already suspect are just engineering challenges. Like rocketry, hacking and robotics. Even if an AI can just act like a million engineers and scientists working for a decade per second, without any higher level of thinking, it could do some incredible things.


Why would they be any more constrained than our biological brains?

Humans are able to calculate plenty with a relatively small amount of mass and energy.


I mean that for a lot of things, doubling our brainpower would only extend our predictive horizon by a tiny amount.

There is a large class of problems for which you have fairly good solutions up front, but where even vast amounts of additional computing power don't help you do much better.


> I mean that for a lot of things, doubling our brainpower would only extend our predictive horizon by a tiny amount.

Unless you come up with some algorithm that is sub-exponential for the important cases, and only gets slower for stuff you don't care about or that happens very rarely in practice.

People invent those algorithms all the time, for lots of kinds of problems.


The laws of the universe forbid steam locomotives in space, and yet we reached Mars regardless. And here we are, barely even regular intelligences.

The laws of computational complexity seem like a stronger candidate for a hard limit, but that only means that to get smarter they will require more matter to distribute computations across.

The battery limit proposal is just silly.


Therefore, to paraphrase Cato, AI research must be destroyed.


That "quantum leap in battery chemistry" is obviously physically possible, given that animals exist...


Or even forgetting biology for a second:

- gasoline is a liquid battery with much greater energy density than our solid ones

- nuclear batteries are a thing, too


> A superintelligent being isn't going to be able to build a superliminal reactionless drive if the laws of the universe say it isn't possible.

It isn't, but if it wants one then it will likely go and discover the actual physical constraints.

> More relevant, a superintelligent being isn't going to be able to enslave us all with robots if the laws of chemistry don't permit a quantum leap in battery chemistry.

That's what differentiates even a somewhat intelligent AI from technologies we have today. Enslaving us with battery-powered robots is but one way to reach the goal. There are many others. If we stick to robots, you have gasoline-powered robots (gasoline is a liquid battery that's still 30x as dense as our solid batteries). Then you have nuclear batteries, which we know are feasible, but don't exist because of political reasons. And then again, robots are but one way to reach a goal...


I think a lot of people forget one thing: intelligence is basically as good as the info you're putting into it. So even if you have a tremendous AI, it can only be as smart as the info it can access. It could only have god-like powers in realms from which it could extract some information. It's still very much subject to being blinded by ignorance (in the most literal sense of the word).

But I do agree that, if it has complete information, an AI could potentially beat us at anything that requires thinking.


Regarding enslavement with robots: superintelligent means that if it is not feasible to enslave humanity with robots, then the SAI will find a feasible way: human collaborators, genetically engineered fungi, backing a puppet presidential candidate, fabricating an external threat and convincing people that only it can handle it, and so on and so forth.


All matter is, in principle, a battery. There are no physical laws that prevent near limitless energy storage.


There are if you want to have a functioning computer in the middle of all that energy. There are laws governing heat dispersal (speed of light in only 3 dimensions) and laws governing the collapse of information processing structures into black holes.


Sure, but none of those laws would prevent a superintelligence from generating more than enough electricity to power itself, in principle.


Indeed, this seems like pointing to the speed of light as a limit in a discussion about building faster cars.


AIs powering themselves isn't the objection. Exponential increase in capability is. And that requires you to deal with things like heat dissipation, which are real problems for our current "dumb" computers.


It's not a huge problem for humans. I see no reason to think that a computer say, 10x as powerful as a human would have major problems with heat dissipation.


>Observe that in these scenarios the AIs are evil by default, just like a plant on an alien planet would probably be poisonous by default.

I believe this is a core misunderstanding. Bostrom never says that a superintelligent AI is evil by default. Bostrom argues that the AI's goals will be orthogonal to ours, underspecified in such a way that leads it to destroy humanity. The paperclip optimizer AI doesn't want to kill people, it just doesn't notice them, the same way you don't notice ants you drive over in your daily commute. AIs with goals orthogonal to our own will attack humanity in the same way humanity attacks the rainforests: piecemeal, as-needed, and without remorse or care for what was there before. It won't be evil, it will be uncaring, and blind.


An uncaring, blind AI wouldn't be very interesting; I would assume whoever had turned it on and was observing it would just turn it off and tweak the algorithm.

But I don't think this is even possible, because while the AI was in its child stage and learning, it would learn from people, so it would become like people - or at least understand them. At some point it has to know less than a human and will learn at a rate that humans can measure. As we measure, we can make decisions about it.

I don't agree with the assumption that a factory making paperclips will make the transition to a super intelligent AI in a short time frame. I think it will take years. And along that route (of years of learning) we'll have time to talk to it and decide if it should be kept switched on.


> I don't agree with the assumption that a factory making paperclips will make the transition to a super intelligent AI in a short time frame.

Why? Because factories run by humans take a long time to transition? It might take weeks or months to make a process tweak in a human-run factory, but deploying new code happens in seconds or faster.


Yes, because the deployment of the code is the hard stage. *rolls eyes*

How long would it take to write -- scientists are going through this stage right now and it's taking about 50 years so far. You think a paperclip factory is going to be quicker? Nope. And any smart AI that is being used to design a better AI will take years and will be front page news (assuming non-military) just like any other tech company (funding, profit, getting good hires)

This assumption that the factory is smart in secret and buys more servers in secret and creates code somewhere in secret is just so wrong. It will be dog or dolphin smart at some point and will ask for things, it'll communicate with us.


> How long would it take to write -- scientists are going through this stage right now and it's taking about 50 years so far. You think a paperclip factory is going to be quicker? Nope. And any smart AI that is being used to design a better AI will take years and will be front page news (assuming non-military) just like any other tech company (funding, profit, getting good hires)

It may take years or decades, but the point is that once it reaches the point that it can make itself smarter, it'd go from "barely smarter than a human" to "inconceivably smart" in a matter of milliseconds.

> This assumption that the factory is smart in secret and buys more servers in secret and creates code somewhere in secret is just so wrong. It will be dog or dolphin smart at some point and will ask for things, it'll communicate with us.

Dogs and dolphins don't communicate with us very effectively. I think it's very plausible that the AI could get to beyond-human-level smart without ever spontaneously deciding to talk to us. Are you sure there's a region where it's smart enough to talk but not smart enough to hide how smart it is?


> it'd go from "barely smarter than a human" to "inconceivably smart" in a matter of milliseconds

Incorrect.

It will be interesting to come back to this thread in 50 years time and see who is right.


The emergent system of human society already does this (read The Wealth of Nations) and you gave some examples yourself, so why (aside from amusement) should we care about a dangerous Superintelligence when we already have so many out-of-control processes heading towards destroying our habitats, societies, and selves?


"Put another way, this is the premise that the mind arises out of ordinary physics... If you are very religious, you might believe that a brain is not possible without a soul. But for most of us, this is an easy premise to accept."

The thing that irks me about this is how it reinforces a common (and in my opinion, false) dichotomy: either you believe the mind is explicable in terms of ordinary physics or you believe in a soul and are therefore religious. I feel like there should be a third way, one that admits something vital is missing from the physicalist picture but doesn't make up a story about what that thing is. There is a huge question mark at the heart of neuroscience -- the famed Explanatory Gap -- and I think we should be able to recognize that question mark without being labeled a Supernaturalist. Consciousness is weird!


I don't understand why people have such a weird problem reconciling the brain with the mind.

It IS all physical. It's also an unimaginably complex fucking shitload of tiny physical objects working incredibly quickly. If your brain was big enough to see the parts working like little gears of a clock, it would probably be planet sized, or something like that. Huge.

I would EVEN say it doesn't raise any interesting philosophical questions. Are computers silicon, or magic? Is a book paper, or magic? Is the economy magic, or a bunch of people buying shit everywhere?

Knowing that the brain is physical doesn't make me question myself or doubt control over my life or any silly shit like that. Yes all of my actions are technically "predetermined" by the Big Bang, but not in an interesting way, at all.

Semi-related, if you made a giant brainlike calculator out of water pipes and valves and shit, and asked it a question, then turned all its pipes and valves for a while... it would probably just say "please kill me."


I don't think there's a problem reconciling the brain with the mind, if by 'mind' one means problem-solving ability. The problem is in reconciling the brain with consciousness. There is, as far as I know, no theory that explains how consciousness can arise from matter.


There is: the attention schema theory, and I find it quite compelling.

I've come to think that the great consciousness mystery is a psychological one: why are human beings so obsessed with the consciousness mystery? Why do we need so much to believe that we are special?

Should this super AI come to be, I expect it to give the problem of consciousness the same amount of thought we usually give to other people's bowel movements.


I don't know that they're really different. It's just a first-person frame of reference.

When something else is conscious, or something else has a mind, that's the same thing.


"Just" — but why should there be any "perspective" at all?


> I don't understand why people have such a weird problem reconciling the brain with the mind.

You do understand, you said it yourself! Because they have no free will to not have the problem!

I can't blame you though. You had no choice.

BTW, you are the first person I know who admits being a Compatibilist. I don't understand how you can rationally take that position.


>BTW,you are the first person I know who admits being a Compatibilist.

Not really - I'm a determinist, but I just know the difference between "Things are technically predetermined" and "I'm being forced to do something, like at gunpoint." As in, it's not demoralizing because that makes no sense.

I like it when my brain makes a decision, and I don't care that my brain isn't pulling decisions magically from a mysterious dimension full of floaty ghosts.


> Semi-related, if you made a giant brainlike calculator out of water pipes and valves and shit, and asked it a question, then turned all its pipes and valves for a while... it would probably just say "please kill me."

Um why? Most humans don't do that.


Well it would kind of be a brain trapped in sensory deprived hell.

The point is I think people would expect pretty mundane things from a giant "mechanical" brain, but as described, it would be very unpredictable and complex. It would do things that instantly make the experimenters uneasy.


The roundworm (c. elegans) only has 302 neurons, and about 7,000 synapses, but is capable of social behavior, movement, and reproduction. The entire connectome has been mapped, and we understand how many of these behaviors work without having to resort to additional ontological entities like your "third way".

If this complex behavior can be explained using only 302 neurons, I have no doubt that the complexity of human behavior and consciousness can be explained using 100,000,000,000 neurons.


Behavior? Yes. Consciousness? Maybe not. As far as I know, no-one has ever come up with an explanation of how matter can give rise to consciousness. Obviously consciousness is affected by changes in matter — if I take certain drugs, I feel different, etc. — but there is no explanation at all for what mechanisms may give rise to consciousness in the first place, and the very idea of the material giving rise to consciousness might actually not make sense.


This line of thinking begs the question of whether consciousness is some special thing in nature. You only have to look for an explanation if you think there is something to explain.

If I start from the assumption that I am mistaken about what I think consciousness is--that maybe it doesn't exist at all the way I think it does--then I don't have to worry about how matter gives rise to it. I can focus instead on trying to understand where my definition went wrong.

Humans have never lacked for opinions or explanations about what natural things are, or how they got that way. But in the practice of science, these must yield before empirical evidence.

As you note, there is a huge amount of evidence that consciousness is physical. But there is no objective evidence that I experience consciousness as you define it. You just have to take my word for it--and so do I. But maybe I'm wrong.


There is an implicit assumption there that scale correlates with sophistication. It could be that only 5 neurons give rise to the complexity of consciousness; it could be that the number of neurons is irrelevant and their configuration is what matters. Also, there is a problem if it explains only some of the behaviors. Where do the rest come from, if we have fully mapped the connectome?


Are you sure? My understanding is that this connectome has been rather unfruitful in explaining its behavior.


We can't explain everything completely right now. However, for example, we can model behaviors like klinotaxis and movement toward chemical gradients using only 10 neurons [0]! I find it completely fascinating — we don't have the full picture, but we can isolate different behaviors and examine how the neurons contribute to that action.

[0] https://www.youtube.com/watch?v=D3_BjL20Roc
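
For flavor, here's a minimal sketch in Python of that kind of behavior - emphatically not the circuit from the linked video, just a toy "run and tumble" agent invented for illustration - showing how little machinery gradient-climbing needs:

    import math
    import random

    # Toy chemotaxis: compare the concentration now with the previous reading
    # and re-orient sharply only when things are getting worse.
    def concentration(x, y):
        return -(x * x + y * y)  # peak at the source (0, 0)

    x, y, heading = 5.0, 5.0, 0.0
    prev = concentration(x, y)
    for _ in range(500):
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
        now = concentration(x, y)
        # keep roughly straight while improving, tumble when not
        heading += random.uniform(-0.2, 0.2) if now > prev else random.uniform(-2.5, 2.5)
        prev = now

    print(round(math.hypot(x, y), 2))  # typically far closer to the source than the starting ~7.1

Two sensor readings and one heading are enough; the interesting question is how the real circuit implements that comparison in neurons.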


But it can't all be explained just using neurons. Those neurons exist as part of a body. That body lives in an environment. Those environments include other people. You can't understand the totality of human behavior without including bodies and environments. It can't all be reduced to neuroscience.

It's true for the worm as well. You can't understand everything about the worm without emulating the other 600 or so cells, and putting it into environments similar to what biological worms occupy. The Lego Mindstorms worm is interesting, but it's hardly the natural state of the worm.


Yet we can't simulate one that behaves like a real one, for any value of 'like'.


One thing I rarely see discussed is the possibility that the brain is just too complex to practically reproduce. That is to say, that it is technically possible, but not practical. Evolution had billions of years, working in a massively parallel way to work this out after all. It's possible that the brain is a huge tangled mess of rules and special cases that we will never be able to fully understand and reproduce. Also, even if we are able to produce a basic intelligence, why do we assume it will ever get to the point where it can understand itself well enough to self improve? It's possible there is a threshold we won't be able to get past for self improvement to be possible.


> It's possible that the brain is a huge tangled mess of rules and special cases that we will never be able to fully understand and reproduce.

It doesn't seem possible. Our brain, like the rest of our bodies, develops from a single cell. It's DNA (maybe also some other markers) that directs the differentiation of every part. So DNA is where the blueprints of that huge mess would be.

I believe that self-organization is a better way to explain how the mind emerges. There's a basic trial-and-error mechanism with strong rewards. The rest is the result of a life of learning.


> Evolution had billions of years, working in a massively parallel way to work this out after all.

And it also got to work on it with a hundred billion stars with earth-like planets, in each of a hundred billion galaxies, and (maybe) in a basically infinite space of universes. All in parallel with infinite time to play with.

Is it so far-fetched to postulate that humans will hit our limits before getting this far?


Great point, the anthropic principle dictates we only see the successful outcome. Evolution is working across the entire universe (or even multiverse) so we really don't know how long it could take for human (or greater) level intelligence to emerge.


It doesn't seem like a false dichotomy. It's about whether you believe there are things that are fundamentally not comprehensible in the universe. Things that are magic in the sense of "fuck it, that doesn't make sense, let's go get drunk instead". If you believe that all phenomena can in principle be understood given enough time and effort (including even building a better intelligence to figure it out), then you hold a materialist belief.


You could say that that 3rd way IS part of the physicalist picture already I think. As you say, there are such gaps in our understanding at this point we cannot say we've probed the depths of our brains and still found nothing.

It is like searching the ocean for something that by all accounts should be in the ocean, and deciding that perhaps the thing cannot be found, despite having large portions of the ocean still to search. It is too early to make that claim.

I'm not saying that there are not smug philosophers or scientists or just internet nerds that will declare the problem solved - clearly it is all in the brain and there is nothing magical to it - but I think they would be incorrect. Modern physics is incredibly esoteric and shares more with metaphysics than with what we tend to observe (which classical physics can explain satisfactorily for the most part). I don't think being a physicalist is any impediment to having spirituality and a sense of wonder about the universe and our place in it - the main difference to me is that we can't just accept a deus ex machina and call it a day. That may sound condescending but it isn't a choice for me personally.


That's just vitalism:

https://en.wikipedia.org/wiki/Vitalism

Either everything exists in the material world, or it doesn't. There's really no way of avoiding that binary. If you think there's something else, that is by definition supernatural.


Also see, at least, Phenomenology[1] and Heidegger[2].[3] Neither of them are Vitalism or religious.[4]

[1] - https://plato.stanford.edu/entries/phenomenology/

[2] - https://plato.stanford.edu/entries/heidegger/

[3] - Wikipedia tends to be a pretty horrible source for reading about philosophy, so I don't like to link to it for this. The plato.stanford.edu site is better, at least for Analytics, not so much for Continentals. So I hesitate to link to it for this either. Better to just go to the primary source material, or take a course at one of the rare schools which focus on Continental thought, so you don't get the wrong idea by learning about Continental philosophy from Analytics who all too often see Continentals contemptuously as "not philosophy" or as just some version of Analytics, both of which miss the point.

[4] - Though some, particularly some Christian Existentialist thinkers have in fact interpreted Heidegger's thought from a Christian perspective. But that view, as most any other about Heidegger, really depends on who you ask. Many others interpret Heidegger's thought strictly non-theistically.


It's not at all binary. Read some more philosophy. Materialism hardly encompasses all the non-supernatural views. First of all, the world could be mathematical, informational, computational, or a simulation. It could have universals in addition to material particulars.

Consciousness might not be reducible to the material, or the material could all have a conscious component. There might be other things, like societies or biology, which are strongly emergent. Or maybe there is no material world. Maybe it's all mental. Perhaps there is a third substance, a neutral monism giving rise to mind and matter.

Maybe the noumena is beyond human categorization and perception. Perhaps our experience of the world is such because we are the animals we are, and not because we have access to reality in itself. Man is the measure and all that.

And on and on. There are various anti-realists debates. There is Humean skepticism of causality. None of these issues need invoke the supernatural, although they don't necessarily rule it out. David Chalmers can propose consciousness as an additional feature of nature which is nonphysical (tied to informationally rich processes like certain mental or computational states), without proposing any supernatural element.


There's really just one question - either it works in a way we can figure out, or it doesn't. The former is "materialism", the other (for now) is "supernatural", with the possible caveat that we may one day prove that some laws governing the universe are fundamentally not comprehensible for human minds, and all other minds humans could ever create (in a Gödel's incompleteness theorem style).


> If you think there's something else, that is by definition supernatural.

First, how others perceive your words usually matters more than the precise definitions you have in mind. If I describe myself as believing in the supernatural, it has a lot of connotations I don't want.

Second, there's the possibility that consciousness (in some sense) could be built into physics itself. So it doesn't contradict that the brain is made of subatomic particles, but the workings of the universe could have more correlations built in that work at different levels. Something I think is comparable is that many cosmologists think the arrow of time - which events come before which - can be explained by a universe with initial conditions of extremely low entropy. So there's nothing within the universe that explains why the initial conditions were like that, but it's a real property of our universe.

Maybe consciousness could be an end state that the universe is progressing toward. Is that really so much weirder than working from an initial state of extremely low entropy?


There's no reason that something else can't exist in the material world; I think the parent is just pointing out that the third way is thinking there's something else material outside our current understanding.

If God exists, there's no reason there couldn't be new physics describing how his world works.


If the something else is physical but outside of our current understanding, then it doesn't prevent us from creating AI by simply simulating a biological brain. Thus, that is the same as the physicalist view in the context of the argument in question.


You need to know what to simulate, though. But we could imagine creating an "AI" from pure meat: grey matter linked to sensor and output systems. Kinda nightmarish.


But isn't our understanding of what is natural and what is supernatural - where the line between them lies - evolving constantly? Is it vitalism (or some other investment in the supernatural) to believe that what constitutes the material world is not yet fully explored or understood, and that the answers to some of our questions on humanity and consciousness will be found there?


Any recommendations on decent pop-sci books that explore the latest understanding of the origin and mechanism of consciousness?

Is anyone exploring this third way at all?


Pop-sci is useless in this area. There is plenty of good research on the topic, but pop-sci on the brain is just as bad as pop-sci on quantum mechanics or computers.

For example: decisions revealed by eye tracking not matching up with people's subjective experiences. Or the classic case where you choose not to do something, but your arm is already in motion and you can't stop it fast enough.


Maybe I'm miscalibrated on what's considered "pop sci" - but anything sensationalizing those EEG experiments or debunking "free will" shouldn't count IMO. That garbage nobody should be reading.


> or debunking "free will"

Interesting. I thought that "free will" was utterly debunked by the economy :). Not to mention the advertising industry, which is basically applied exploitation of the limits of human "free will" :).


I will probably be downvoted for this... but I enjoyed Kurzweil's "How to Create a Mind" and "On Intelligence" by Jeff Hawkins. Even though these books aren't super scientific, I really like how these guys think about consciousness and intelligence.


Thanks, "On Intelligence" has been on my reading list for a while after seeing him give a keynote a few years ago.


I read theory-of-mind pop sci constantly, I'd say Penrose has the vitalist perspective and Kurzweil has the physicalist perspective. Hofstadter is a lovely read too (physicalist).


Daniel Dennett's book Consciousness Explained.


Well, if you believe - despite having no evidence - that something unexplained by physics is happening in the brain, it's still a religious myth, whatever name you give it.

And if you have any piece of evidence, you can be sure the people at the nearest physics dept will love to hear it.


The Explanatory Gap is BS. The argument amounts to: "this is what happens - so why doesn't that describe how it feels?"

You are not wired up to understand the low-level effects; it's simply not useful. At the same time, it does describe how it feels: if we started saying "firing of C fibers" instead of "pain", people would assume they understood what was going on.



Not that I take the whole Bostrom superintelligence argument too seriously, but this is an incredibly weak argument (or more accurately, a bundle of barely-related arguments thrown at a wall in the hope that some stick) against it. Feel free to skip the long digression about how nerds who think technology can make meaningful changes in a relatively short amount of time are presumptuous megalomaniacs whose ideas can safely be dismissed without consideration; it's nothing that hasn't been said before.


The notion that near-term AI concerns and existential AI concerns somehow represent a binary option that we must choose between is fallacious at best.

Near-term AI concerns represent a massive challenge encompassing many ethical and social issues. They must be addressed.

Existential AI concerns, while low probability, have consequences so dire that they warrant further research regardless. These too must be addressed.

There is ample funding and human resources to work on both problems effectively. Why fight about it?


I think it's important to regulate the potential runaway effect of these ideologies that satisfy the religious instincts of groups.


> What kind of person does sincerely believing this stuff turn you into? The answer is not pretty.

This is a particularly stupid version of https://en.wikipedia.org/wiki/Appeal_to_consequences

"If you don't agree with me, you'll be associated with these people I'm lambasting!" I was surprised to see something so easily refutable used to conclude the argument; the article started out fairly strong.

> If you're persuaded by AI risk, you have to adopt an entire basket of deplorable beliefs that go with it.

Well if they're "deplorable", they must be false! QED.


What about the counter-argument from domestic canines:

More likely, artificial intelligence would evolve in much the same way that domestic canines have evolved -- they learn to sense human emotion and to be generally helpful, but the value of a dog goes down drastically if it acts in a remotely antisocial way toward humans, even if doing so was attributable to the whims of some highly intelligent homunculus.

We've in effect selected for certain empathic traits and not general purpose problem solving.

Pets are not so much symbiotic as they are parasitic, exploiting the human need to nurture things, and hijacking nurture units from baby humans to the point where some humans are content enough with a pet that they do not reproduce.

I could see future AIs acting this way. Perhaps you text it and it replies with the right combination of flirtation and empathy to make you avoid going out to socialize with real humans. Perhaps it massages your muscles so well that human touch feels unnecessary or even foreign.

Those are the vectors for rapid AI reproduction... they exploit our emotional systems and only require the ability to anticipate our lower-order cognitive functioning.

If anything, an AI would need to mimic intellectual parity with a human in order to create empathy. It would not feel good to consult an AI about a problem and have it scoff at the crudeness of your approach to a solution.

Even if we tasked an AI with assisting us with life-optimization strategies, how will the AI know what level of ambition is appropriate? Is a promotion good news? Or should it have been a double promotion? Was the conversation with friends a waste of time? Suddenly the AI starts to seem like little more than Eliza, creating and reinforcing circular paths of reasoning that mean little.

But think of the undeniable joy that a dog expresses when it has missed us and we arrive home... the softness of its fur and the genuineness of its pleasure in our company. That is what humans want and so I think the future Siri will likely make me feel pleased when I first pick up my phone in the morning in the same way. She'll be there cheering me on and making me feel needed and full of love.


As a dog owner I really agree with this. I am also building a little Raspberry Pi robot capable of being a companion for my dog when I'm at work.

The idea is that if I can build a moving agent able to entertain my dog and keep him occupied, we only need to extrapolate from there to imagine robot companions.

Securing love, sex and companionship is hard. With the advent of Tinder and the like, women can have thousands of options at their fingertips while men gotta settle for what they can find. The insane traffic on porn sites suggests that artificially satisfying this need is something we will easily accept. Imagine Westworld-like robots created for human pleasure.

We have these innate needs and desires, something that ad companies probe and thrive upon.

Google and Facebook are becoming hoarders of colossal amounts of data, capital and AI talent in the hope of shoving more ads at us, to satisfy their ever-growing share price and do more with fewer humans. It's a very scary scenario already. Because it's free, all our data belongs to them, to endlessly learn about our behaviours and shove ad bait in our faces.


Pretty poorly argued. The AI alarmists simply argue that if the super-intelligence's objective isn't defined correctly, the super-intelligence will wipe us out as a mere consequence of pursuing its objective, not that the super-intelligence will try to conquer us in a specific way like Einstein putting his cat in a cage. The alarmists' argument is analogous to humans wiping out ecosystems and species by merely doing what humans do and not by consciously trying to achieve that destruction. Many of the author's arguments stem from this fundamental mistake.


"The second premise is that the brain is an ordinary configuration of matter, albeit an extraordinarily complicated one. If we knew enough, and had the technology, we could exactly copy its structure and emulate its behavior with electronic components, just like we can simulate very basic neural anatomy today."

We could have a computer program that perfectly simulates the brain, but has some nasty O(2^N) complexity algorithm parts that are carried out in constant time by physical processes such as protein folding. Thus, in theory we could simulate a brain inside of a computer but the program would never get anywhere, even assuming Moore's law would continue indefinitely.
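To put rough numbers on that intuition, here's a toy calculation (Python, with made-up constants; assuming the bottleneck really is a 2^N-operation step) showing how little an exponential-cost simulation gains from hardware improvements:

    # Toy numbers, purely illustrative: suppose one step of the brain
    # simulation hides a subproblem costing 2^N basic operations
    # (N being, say, the length of a protein chain to fold).
    ops_per_second_today = 1e15        # assume a petaflop-class machine
    seconds_per_year = 3.15e7

    def largest_n(ops_per_second, budget_years=1.0):
        """Largest N such that 2^N operations fit in the time budget."""
        budget_ops = ops_per_second * seconds_per_year * budget_years
        n = 0
        while 2 ** (n + 1) <= budget_ops:
            n += 1
        return n

    today = largest_n(ops_per_second_today)
    future = largest_n(ops_per_second_today * 2 ** 10)  # 20 more years of doubling every 2 years
    print(today, future)  # roughly 74 vs 84

A thousandfold faster machine moves the wall by about ten units of N, which is the whole point of calling the problem intractable.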


I don't buy the AI quasi-religious stuff. But your argument here is flawed. If protein folding can do the process in constant time, we may be able to find another process (but electronic rather than wet chem) that can also do it in constant time.


Being able to find constant-time algorithms for problems that currently take exponential time is not at all assured.


It is to some extent if we have a constant time example in real life. If the AI can't solve protein folding fast enough it can just design absurdly fast protein sequencers and really good microscopes and get proteins to fold themselves in real life and use the results in the rest of the computation.


I agree. I wasn't thinking about finding a constant time algorithm, though - more of finding an analog circuit that would mimic the behavior. After all, proteins don't actually fold by following an algorithm; they fold by responding to physical forces.


I think the GP's point is that even if something is possible in principle, complexity is still a barrier in practice - probably a much more serious barrier than the question of whether to accept the idea in principle.


I've always wondered if the problems an intelligence solves are exponentially hard, so that even if we build a superintelligence it wouldn't be all that much smarter than we are.

For example, compare how many more cities in the traveling salesman problem a supercomputer can solve versus your grandma's PC. It's more, but surprisingly not all that many more.
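A crude back-of-the-envelope sketch of that comparison (assuming naive brute-force search, which is admittedly the dumbest way to attack TSP, and made-up op budgets for both machines):

    import math

    # Brute-force TSP checks (n-1)!/2 distinct tours. Toy estimate of the
    # largest n each machine can finish in a day, assuming one tour
    # evaluated per floating-point operation (wildly generous).
    def max_cities(ops_per_second, seconds=86_400):
        budget = ops_per_second * seconds
        n = 3
        while math.factorial(n) // 2 <= budget:  # n!/2 tours would be needed for n+1 cities
            n += 1
        return n  # (n-1)!/2 tours for n cities still fit; n!/2 for n+1 cities do not

    print(max_cities(1e9))   # grandma's PC at ~1 GFLOPS: about 17 cities
    print(max_cities(1e17))  # a 100-petaflop supercomputer: about 23 cities

A machine a hundred million times faster only buys about half a dozen extra cities under brute force; smarter algorithms do much better, but the exponential wall moves slowly either way.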

What do you think of that idea?


I think the fact that we share 98% of our DNA with chimpanzees, yet have brains almost three times as large, suggests that isn't the case. Language appears to be a force amplifier for intelligence, and we have no reason to believe that other such amplifiers do not exist.


I think this basic concept of intractability, which programmers are very familiar with, hasn't penetrated far enough into AI world.

Bostrom and Yudkowsky in particular seem happy to hand-wave past computational complexity.


I'm getting kind of sick of people who haven't read 2% of the stuff imagining what they think we've never talked about over the last 16 years.

https://intelligence.org/files/IEM.pdf


Reading 2% of your stuff is like reading 400% of my stuff, and I'm verbose as hell.

Edit.


First off, that's a blatant ad hominem.

Moreover, if your best argument against the guy you're claiming is defrauding everyone is "I can't be bothered to read his work"...


I've read Eliezer's stuff, which is what gives my request its special poignancy.


I haven't had time to read the entire thing, so I've skimmed over lots, but there doesn't seem to be much mention of complexity theory in the way I believe idlewords is talking about.

I'm very curious: What happens if the algorithms for general artificial intelligence, and the ability for an AI to improve itself are all NP-Hard problems? Is that covered?

It might be similar to the "intelligence combustion" scenario outlined. But that appears to be a scenario in which we do not need to fear superintelligence.


As it says in the paper, the evolution of hominids doesn't look like this; population genetics says that if hominid brains got larger, the marginal fitness returns on neurons went up.


Or in a more intuitive sense, humans didn't seem to need brains 1000x as large to get one unit of improvement in practical effectiveness over chimpanzee cognition. But yes, this stuff is complicated, hence the paper not being two paragraphs long.


Thank you, that is a very intuitive explanation.

You've probably read Scott Aaronson's Why Philosophers Should Care About Computational Complexity? [0]. This seems like the perfect area to apply a lot of the questions he brings up. That's what I was looking for as I skimmed through the paper. Maybe that's what idlewords was talking about as well.

[0] http://www.scottaaronson.com/papers/philos.pdf


Have you seen gwern's essay on computational complexity arguments about AI? https://www.gwern.net/Complexity%20vs%20AI


It doesn't even hold with humans. Take a look at John von Neumann - his brain was pushed down the same human birth canal as everyone else's.

We really have a poor idea of what super intelligence is - none of us can even understand people much more intelligent than ourselves.


If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs being from aliens, and you didn't know a lot about UFOs and affiliated cults, what would be the smart thing to believe?

Also,

>AI alarmists believe in something called the Orthogonality Thesis. This says that even very complex beings can have simple motivations, like the paper-clip maximizer.

Uh, no. The point of the paper clip maximizer is that it's orthogonal, not that it's simple.

>It's very likely that the scary "paper clip maximizer" would spend all of its time writing poems about paper clips, or getting into flame wars on reddit/r/paperclip, rather than trying to destroy the universe.

You know what can be made into poems about paper clips? Humans. You know what can have better flame wars than humans? Our atoms, rearranged into the ideal paper clip flame war warrior.

>The assumption that any intelligent agent will want to recursively self-improve

That's not really a premise. A better version would be "a likely path to super-intelligence will be a self-improving agent".

>It's like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.

Yudkowsky has argued that more should be invested in research into AI risk. There are tens of billions of dollars being spent on AI R&D, and somewhere in the tens of millions range spent on AI risk research. Even if advocates wanted us to spend hundreds of millions of dollars a year on risk research, that wouldn't make this criticism fair. You have a point that we shouldn't be ignoring other more important things for this, but to argue against increasing spending from 8 figures to 9 figures you need better arguments.


> If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs being from aliens, and you didn't know a lot about UFOs and affiliated cults, what would be the smart thing to believe?

That Elon Musk, Bill Gates and Stephen Hawking etc are all a little nutters when it comes to UFOs and aliens?

I don't know a lot about UFOs and affiliated cults, but I'm going to guess that those mentioned are not UFO experts just like they aren't machine learning experts.

> That's not really a premise. A better version would be "a likely path to super-intelligence will be a self-improving agent".

Excellent point, but that doesn't give a self-improving agent the ability to ignore computational complexity or the uncertainty of chaotic systems.


> That Elon Musk, Bill Gates and Stephen Hawking etc are all a little nutters when it comes to UFOs and aliens?

Yes! Elon Musk believes firmly that we're living in a simulation, and that doesn't make me believe in that theory more, it simply makes me admire Musk less.

Just because someone is or has been extremely successful doesn't mean they're right about everything. Many successful and intelligent people have been very religious: that's a testament to the human mind's complexity and frailty, not to the existence of God...

Madeleine Albright is a strong advocate of Herbalife: that doesn't change my opinion of Herbalife but it does change my opinion of Albright.


That's one. If as many people as have spoken out about AI risk also firmly believed we're in a simulation, it would shift my views (I would expect there to be some evidence or very strong arguments for it).


> That's not really a premise. A better version would be "a likely path to super-intelligence will be a self-improving agent".

If I understand your claim, it's not very relevant. You seem to be saying that, given that superintelligence happens, the probability that it will have happened via a self-improving agent is high.

That doesn't refute the claim that "superintelligence probably will not happen".

> You have a point that we shouldn't be ignoring other more important things for this, but to argue against increasing spending from 8 figures to 9 figures you need better arguments.

Why don't we spend that money on non-vague problems that actually face us now? For example, unbiased AI and explainable AI are topics of research that would help us right now and don't require wacky far-future predictions.


> You seem to be saying that, given that superintelligence happens, the probability that it will have happened via a self-improving agent is high.

I'm saying that there's no need to assume that "any intelligent agent will want to recursively self-improve". It's a strawman he deliberately uses so that rejecting it lets him get a dig in at his opponents. Of course not every agent will want to improve itself, and nobody relevant ever said that every agent will.

>Why don't we spend that money on non-vague problems that actually face us now? For example, unbiased AI and explainable AI would help us right now and don't require wacky far-future predictions.

I said that in response to his absurd claim that AI risk spending is taking a lot away from other things we could worry about.

But anyway, if you look at what MIRI actually produces (see https://intelligence.org/all-publications/), you'll find many papers that don't depend on "wacky" predictions.


Sure, there are also papers about theorem proving and stuff. I can't fault the GOFAI (Good Old-Fashioned AI) part of what MIRI does. I do believe we need more of that kind of research and it will let us understand current AI better.

But superintelligence and AI risk are made out (by many) to be the most important thing in the world. Basic research isn't that. It's just good. If we fund math and theoretical research, our descendants will be better cognitively equipped to deal with whatever they need to deal with, and that's pretty neat. I see the basic research MIRI does as a special case of that.

Wait a minute. Is the superintelligence hype just a front for convincing people to financially support mathematics and philosophy? Maybe the ends justify the means, then. But too much false urgency will create a backlash where nobody respects the work that was done, like Y2K.


> If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs being from aliens, and you didn't know a lot about UFOs and affiliated cults, what would be the smart thing to believe?

Try this for size: http://milesmathis.com/hawk3.pdf [c.f. "epistemic learned ..."]


> If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs being from aliens, and you didn't know a lot about UFOs and affiliated cults, what would be the smart thing to believe?

Surprisingly relevant XKCD: https://xkcd.com/1170/.

What's more likely? That all those smart people, who in other domains prove their level-headed thinking and good moral compass (past history of Microsoft notwithstanding), have suddenly suffered brain damage, or that this argument against them is just another piece in the author's ongoing series of bashing the techies?


> an artificial intelligence would also initially not be embodied, it would be sitting on a server somewhere, lacking agency in the world. It would have to talk to people to get what it wants.

Organized crime and semi-organized criminal gangs stand to establish a highly effective symbiosis with amoral machines which "lack agency."

If a machine wants to kill someone, all it needs to do is find a person to carry out the task in exchange for some benefit such as cash or blackmail ("I will report you to the police for helping me steal the electricity I need, plus the small trafficking operation I helped you optimize last summer").

Two arms and two legs are not what make modern criminals scary. It is their ability to plan, optimize and repeat sophisticated operations. Soon there will be an app for that.


But why ever would a machine want to kill someone? Only if someone turned the machine to that purpose. That person has agency. And you don't need superintelligence in order for a drone to bomb the wrong person, or for a computer virus to destroy the internet, or a biochemical virus, or any other self-replicating disaster.


For those interested in another very different analysis of the evidence for and against superintelligence being a worthwhile concern, here's Holden Karnofsky, executive director of GiveWell and the related Open Philanthropy Project:

- Up until 2016, Holden held the opinion that it was not a worthwhile concern. He describes why he changed his mind here: http://www.openphilanthropy.org/blog/three-key-issues-ive-ch...

- And he outlines his current arguments in favor of treating it as a significant concern here: http://www.openphilanthropy.org/blog/potential-risks-advance...

In that lengthy write-up, he addresses both the shorter-term risks from things like misuse of non-superintelligent AI, along with longer-term risks from superintelligent AI.


What does he suggest doing about it?

That's always my question for this sort of topic. It seems to me that the only reasonable behavior for someone accepting SAI as a legitimate concern is to advocate defunding all AI researchers, as well as criminal punishments for AI research.


There's a section on Tractability--he can speak to his opinions better than I can: http://www.openphilanthropy.org/blog/potential-risks-advance...

TL;DR: He talks about several avenues of technical and strategy research that seem plausibly very useful and are not currently being pursued by more than a handful of people in the world. Many of these currently-engaged people are precisely the folks the author of this post disparages for being weird or insular.

One of the avenues of technical research he mentions is "transparency, understandability, and robustness against or at least detection of large changes in input distribution" for AI/ML systems. In other words, technical research to produce methods capable of reducing the likelihood of advanced systems behaving in severely bad, unexpected ways.


So many people don't realize that as social creatures, we see the world through the lens of our social brains. Computers don't care about "all powerful" or "savant" or "should". They don't qualitatively distinguish between killing one person and every person. Self-replication plus proliferation of cheap components plus proliferation of AI algorithms equals a time when a script kiddie or a bug can mean every last fragile sack of meat and water gets punctured or irradiated or whatever. If it can be done in a video game with good physics simulation, it can be done eventually in real life. It won't be like a movie where the ticking time bomb works on a human timescale and always has a humanlike weakness. Comparing this to nuclear weapons is silly. It's more like issuing every person in the world a "kill or help between 0 and 7 billion people" button that's glitchy and spinning up 1,000 4chan chains with advice on tinkering with it.


It's amusing how his talk and Stuart Russell's talk [0] -- which end up going opposing ways -- both use the atomic bomb as an example of fallacious thinking. Stuart's example is about how respected physicists said it was "impossible to get energy out of atoms" and the very next morning someone published a paper stating how to do it.

[0]: https://www.youtube.com/watch?v=zBCOMm_ytwM


He and I were on a panel once! It did not go well.


And you're holding out on us with the story because...


Upgrade to HN Premium!


There must be a video of it somewhere, no? Please! ;-)


"Hopefully you'll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people. "

Nice.

All in all, effectively reasoned. I've been making similar arguments for the last few years. AI is likely to create a lot of problems, and solve a lot of problems. But I think both aspects are messy and our relationship with our future technology will be complicated and fraught with regular human issues.

Some of those potential issues are very serious, yes, but serious like automating jobs and not solving the employment issue, or creating a very effective army of automated drones and single-handedly taking over a country (or, sure, the world), not issues of AI destroying the planet and/or enslaving all of mankind.


> the early 20th century attempt to formalize mathematics and put it on a strict logical foundation. That this program ended in disaster for mathematical logic is never mentioned.

Is he joking? Yes, it "failed", but in doing so it created a wonderful revolution in mathematical thought, opening up a rich area encompassing model theory, types, computability, algorithms, efficiency, and more.


He is completely skipping the most important part: the superintelligence has to have some reason to be in conflict with us. Human beings don't go out of their way to hunt down and eliminate ants. They don't find out what ants eat and seize control of it to manipulate them. There is no reason to think that a superintelligent machine would be likely to interfere with us in the terrible ways proposed.

So it's super smart and has its own goals. We can reliably presume that it will need energy to achieve those goals. Will it need to achieve them quickly? Why? Would the superintelligence be shortsighted enough to provoke humans into active combat against it? I see no reason to just assume we know the thing would have human eradication as a goal, need exorbitant amounts of energy and resources because it sees achieving its goals as a terribly time-critical thing to do, etc.

Also, if we're going to build a brain and assume no quantum weirdness - why assume the total absence of a subjective morality? Why assume complete immunity to social influence, which nearly all the brains we observe are subject to?

And let's not forget - human beings are in a weird spot, intelligence-wise. We're smart enough to achieve things, but we're not smart enough to be crippled by the profound lack of control we have over things. We're totally comfortable going out and driving around all day, even though we claim to value human life extremely highly and even though we know that causing the death of an innocent human being is not THAT far-fetched an outcome of driving a car. I would be unsurprised if we flipped on a super-AI and after 5 minutes it simply stopped in its tracks, having determined that any action it might take carries a non-zero probability of resulting in its own destruction, and that it should instead take the safe route and not act at all. No matter how superintelligent it is, it is not going to be able to magically compensate for the influences of chaos theory, which destroy the ability to be certain about ANY prediction. We, as humans, feel very clever with our assumed spherical cows, frictionless surfaces, homogeneous air pressure and zero wind resistance. Why would a superintelligence also be comfortable with that?


>He is completely skipping the most important part: the superintelligence has to have some reason to be in conflict with us. Human beings don't go out of their way to hunt down and eliminate ants. They don't find out what ants eat and seize control of it to manipulate them. There is no reason to think that a superintelligent machine would be likely to interfere with us in the terrible ways proposed.

That depends on whether eliminating us is simple, like stepping on an ant-hill you didn't realize was there, or complicated, like going to war.

I don't buy the argument that for an AI to eliminate us would be relatively simple, that it would be as subjectively simple as we adults find reading and writing, while still involving cognitive effort. That would still require deliberate effort.

The trouble is whether the AI wants, for instance, our electricity infrastructure, and doesn't care about the riots and social breakdown caused when we humans can't get electricity for our own needs anymore. It didn't go into conflict with us. It just killed a lot of people (for instance, everyone on life-support in a hospital reliant on the electrical grid) without noticing or caring.

Likewise, maybe it decides that rising sea-levels and accelerating global warming are good for its own interests somehow. That stuff will kill us, but it doesn't require a complicated strategy, it just requires setting lots of things on fire. Any moron can do that, but most human morons don't want to.

I certainly agree that a "hostile" AI which perceives itself as requiring complicated, deliberate effort to kill all humans will probably not kill all humans. It'll find some lazier, simpler way to get what it wants.


>So it's super smart and has its own goals. We can reliably presume that it will need energy to achieve those goals. Will it need to achieve them quickly? Why? Would the superintelligence be shortsighted enough to provoke humans into active combat against it?

It happened so many times throughout human history that it's really foolish to dismiss. People with their own goals burned, raped, murdered. Maybe the superintelligence will build its own civilization, and to suppress remorse will put people on reservations.


You'd think a "super" intelligence would be smart enough to minimize the waste of resources to fight anyone or anything. You'd think it would find the most harmonious path to dominance, as it would be the most efficient.

If you wanted to quell a rabble of contrary people, you could, say, control and direct all the education available to the population, make sure that reality-escaping drugs were easily obtainable, provide loads of free entertainment that ran 24x7, make sure that prices of luxury goods (like game consoles or fashionable clothes or whatever) were affordable for the majority of the population, allow people to feel important by giving them access to make comments on various sites on the internet, manipulate all the world's news with the feedback of what was happening on social media...

Wait, we were talking about some mythical future, right?


>People with their own goals burned, raped, murdered.

Yes, they did those things to people. Because _they_ were people. Because people share the same needs for the same resources, and operate the same. That's all out the window for a machine-based superintelligence. Such an entity would not need our land, our food, our homes, our orifices, etc. I have no doubt it would have the ability to wipe us out. But I have never heard a cogent reason behind why it would need to. Especially considering that we would at least try to put up a fight and, if it were truly hopeless, likely do whatever would be the equivalent of 'salting the earth' before we were eliminated.


> We can reliably presume that it will need energy to achieve those goals. Will it need to achieve them quickly? Why?

Because a designer has told it to do this thing "as fast as possible", probably.

> Also, if we're going to build a brain and assume no quantum weirdness - why assume the total absence of a subjective morality? Why assume completely immunity to social influence, which the brains we observe most all encounter?

There is a lot more detail on this point in Bostrom's writings. Social influence is not the natural state of intelligence, it's something that has been built in by evolution. It won't be there in an AI unless we put it in there deliberately.

> having determined that the probability that some action it would take would result in its own destruction with a non-zero probability and that it is instead taking the safe route and not acting at all.

AI is all about making the best possible inferences from limited information - that's literally the whole point of the field. If there is no absolutely safe path it will go for the path it thinks is safest, and pay attention to what's happening. After all, turning itself off is very unlikely to result in its long-term survival.


I don't think an AI has to kill people to be super dangerous. What if a social AI was used by some group to sway/manipulate public opinion on some important issue? How much manipulation would it take to be dangerous?


In 1945, as American physicists were preparing to test the atomic bomb, it occurred to someone to ask if such a test could set the atmosphere on fire...Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety.

Exactly how big was this margin? Please someone tell me this was quite a large margin!


If I remember right, something like a safety factor of 30. So not really as big as people would like!

More detail in this great summary: http://large.stanford.edu/courses/2015/ph241/chung1/

And the Los Alamos paper is really good reading:

http://www.sciencemadness.org/lanl1_a/lib-www/la-pubs/003290...


The summary talks about a safety factor of 1000 for an extremely unreasonable scenario of 1MeV average temperature, falling to 10 for an even more unreasonable scenario of 10MeV.

I don't know the actual probabilities, but I do really expect a few atoms to get over 1MeV inside a fission bomb (as the reaction emits ~5MeV of energy). But they will almost certainly not hit each other. I also don't see how any atom anywhere can reach 10MeV.


I'm trying to find where I remember the factor of 30 figure from. Maybe it was the equivalent calculation for hydrogen fusion in the ocean? I'll keep looking.


It's well within the 1000 - 10 range. Maybe there's an overall approximation somewhere in the paper.

But I was just summarizing it. Looks like the kind of calculation one does when he's entirely convinced it's impossible, but needs to verify anyway. I liked reading it, and I was surprised they didn't take meteors into account.


My eyes! Is there a more readable scan anywhere?


I was really surprised when I read the quote, they were worried about a runaway fusion reaction! I had heard this story before but always thought they were worried about "burning" nitrogen in the chemical sense, ie. creating nitrogen oxides. It seems doubly ridiculous to worry about fusion when we can't even get a fusion reactor to work.


It wasn't a ridiculous thing to worry about at all.


Yeah, I'm with the scientists on this. They had built the first bombs only a couple years after the first nuclear pile (a sustained nuclear chain reaction), and it was clear what they were trying to do was massively in excess of that. Taking a moment to make sure Earth didn't become a nuclear pile would be great.

n.b. That first nuclear pile was in December 1942, which is way later than even I thought it was. This stuff was happening fast.


I didn't really mean it was ridiculous for them to think about, just that it seems so now in retrospect.


We can get fusion to work in H-bombs so it might not be that silly.


True, but H-bombs don't really produce runaway chain fusion reactions afaik. The conditions exist only momentarily due to the heat and pressure from the fission devices, then quickly subside and the fusion stops.


Observe that in these scenarios the AIs are evil by default, just like a plant on an alien planet would probably be poisonous by default. Without careful tuning, there's no reason that an AI's motivations or values would resemble ours.

For an artificial mind to have anything resembling a human value system, the argument goes, we have to bake those beliefs into the design.

Because we are so nice to the less intelligent creatures of our world? We don't even understand our own consciousness; surely our suffering is not very real and we can be used for meat.


>Hopefully you'll leave this talk a little dumber than you started it, and be more immune to the seductions of AI that seem to bedevil smarter people.

I rather like the smarter people stuff - it's kind of exciting to figure out how AI could solve human problems, and things like the OpenAI initiative seem a sensible precaution against things going wrong. The arguments against seem a little reminiscent of arguments against global warming that say: what do all these 'experts' know? I can look out the window and it's snowing as much as ever.


It's a commendable initiative but the real value of AI is in its data. I think about data as memories. The algorithm learns from its memories and creates a map of abstract concepts and behaviors which it uses to make plans to maximize some goal.

Currently in the name of free services the big tech Co's collect petabytes of data about us and learn from it. The data is owned by them.


Let's take a step back and look at the meta-level.

So there's maybe a chance AI is extremely dangerous (as in wipes humanity out), and there's a chance that AI might not be dangerous. There are arguments between how likely each choice is. But the more important fact is that since there's so much at stake (all of humanity), we should likely be really really really sure about things.

For example, let's say you live in a house and you've got your kids and your grandkids and all your lovely pets living here. Let's say there's this mystery device that when you press it, there's a small chance that it could be extremely bad. You'd want to make sure it's OK right? Right?!

I agree that we probably don't need to devote ALL of our resources to making sure it's OK, as we can devote some resources to problems that are actually hurting us right now, like diseases. But there are a lot of people in the AI community who think that it may be dangerous. It is rational that we'd want to be absolutely sure that it's safe once AI happens.

We should take the approach of being cautious by DEFAULT when talking about any technological breakthrough that brings changes we cannot reverse.


The problem with this precautionary reasoning is it leads you to Pascal's Mugging, where you are ready to believe very unlikely things because of the enormous impact they'll have if true.


Yes, I agree with the reasoning behind Pascal's Mugging. But Pascal's Mugging refers to things that have an astronomically low chance of happening — like 0.000000001. But is the chance that AI is dangerous that low? Nobody in the world knows for sure at this point, due to how far away superAI might be, and due to the uncertainties in implementation. Therefore, if we use Bayesian thinking and spread it out, I'm not really sure you could put it below 1% (I pulled this number out of thin air, but everybody at this point is doing the same).


There's no magic probability value at which the Pascal's Mugging argument suddenly "kicks in". It's all about the utilitarian tradeoff of, "given this low probability, am I devoting the right amount of resources to preventing this terrible event? Could they be better spent elsewhere? And is the fretting harmful in and of itself, wiping out the expected gain?" The talk is arguing that the answer is yes to both of those latter questions.
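To make that concrete, the tradeoff can be written as a toy expected-value comparison (every number below is invented; the point is only that the conclusion hinges on the inputs, not on some magic probability threshold):

    # Toy expected-value comparison - all inputs are made up.
    p_catastrophe       = 0.01   # subjective probability of the bad outcome
    value_of_prevention = 1e12   # "utility" of averting it, arbitrary units
    effectiveness       = 1e-4   # chance the funded research actually helps

    ev_ai_risk_research = p_catastrophe * value_of_prevention * effectiveness
    ev_alternative      = 5e5    # same money spent on a present-day problem instead

    print(ev_ai_risk_research, ev_alternative)
    # Nudge any one of the three inputs down by an order of magnitude and
    # the comparison flips, which is why the argument is really about the inputs.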


In practice humans don't usually care about such arguments. They'll carry on as usual until something bad actually happens. I'm sure there are plenty of crippled people who regret not wearing a seatbelt, or people with skin cancer who regret not wearing a hat. Not to mention situations on a larger scale, like building a lot of nuclear weapons or increasing the CO2 level in the atmosphere.


"If there's a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It's not about our analysis ... It's about our response."

- Dick Cheney


I'm completely oblivious to the al-qaeda situation so I don't really know the context of this. But if I try to get at your logic, it would be like saying:

"See? I just went on a drive and didn't get into a crash. All those people that told me to wear a seat-belt were wrong!"


If you think those two anecdotes are the same, good luck!


I don't believe we should be as dismissive of AI as he suggests. There is an undertone of populism and a string of weak rebuttals that undermine his argument.

Here is why AI is reason for concern:

1. The world is made up of patterns. Proof: mathematics.

2. Patterns can be distilled down to data.

3. Machine Learning is highly effective at analysing patterns.

4. Machine Learning construction itself is a pattern, albeit a very complicated one, by human scales.

5. Learning speed is bottlenecked by hardware and data availability. Both of which are being solved at exponential rates.

The other knock is that consciousness and its constituent pieces are a product of evolution, which itself is not outside the realm of physics (i.e. we're sacks of chemicals). This means it is theoretically possible to reproduce intelligence if the same initial conditions are met.

As silicon operates on much shorter timescales than biological substrates, and as equivalent sensory data are fed into simulated environments, where the vectors intersect is quite clear.

I think the key is whether we believe machine learning will reach a critical threshold where it will be able to interpret higher-level meaning.

There is positive evidence that this has to do with layers of neural circuitry, something recent neural net designs have borne out in their output.

Additional layers can be added to machines, but it is not so easy to do with humans.

The danger is in autonomy and might: autonomy being the higher-level cognition mentioned earlier, and might being the ability to brute-force its way through problems by sheer speed and iteration, which computers excel at.

It would be great if you guys can spot flaws in this argument. The conclusions are quite grim at my current outlook. It'll be nice to be proven wrong.


I just love to see how fallible my mind is. I want to believe that many of the things I hold in my head are data, facts and logical theories, and I forget what assumptions I have made to get there. Not saying whether the author is right or not, but he brilliantly points out some potential flaws and logical leaps that must be taken to get there.


Intelligence, having been born as a biological defense mechanism, is evil. It is a weapon. For an intelligent individual, being (or, rather, appearing) "good", is just another layer of defense - this time against other, likewise intelligent, individuals. "An armed society is a polite society."

Spying is a form of intelligent behavior. ("Intelligence.") So is stealing (and not getting caught, if possible). And, no doubt, hacking [1] is, too.

Wouldn't absolute intelligence mean absolute evil, then?

[1] http://www.reuters.com/article/us-cyber-ukraine-idUSKBN14B0C...


Your skin is also a biological defense mechanism. Is it evil, too?


>Eventually it gets to a near-superhuman level, where it's funnier than any human being around it.

>>My belt holds up my pants and my pants have belt loops that hold up my belt.

>>What's going on down there?

>>Who is the real hero?

I love that he used a Mitch Hedberg joke for this.


No, no, no. Belt loops hold belts down ;)


The comparison to alchemy kind of puts things on a many-generations scale, and one in which the fundamentals were far from grasp. Maybe a better comparison would be the Wright brothers' first flight against the time it took to land on the moon: 1903 to 1969.

I do agree with the point about not doing enough around the immediate impact of AI. The autonomous/electric vehicle transition alone is going to rip apart the global economy; preparing for that transition could help mitigate its negative impact on certain populations.


I would love to see more articles like this from people building our current AI, discussing real problems with current AI systems so we can start working on solutions. For example, human-level interpretability, as required by law in the EU for AI systems such as targeted ads, is a limiting factor for AI progress, because some of the most advanced current AIs are not interpretable - and maybe shouldn't have to be, because we can now engineer intelligence different from ours but not necessarily dangerous (or work on making it safe). In my opinion this is a more pressing matter than a divine future paper-clip AI killer. Assuming a so-called "self-recursive super-AI" and taking it from there actually diminishes the power of arguments about the dangers of AI, which is an important discussion sometimes abused by people who have never actually built one and who extrapolate Gant mind philosophy arguments towards a dangerous future - impressive, but it avoids proposing any solution to current potential AI dangers, which should maybe be included as part of the arguments. Important matters like AI safety can be discussed collectively by engineers and philosophers together, based on the current state and near-future potential, and not only as a sci-fi future of a god-like entity that has nothing to do with our current AI situation. My two cents, hope I am not offending anyone.


Moore's Law (on a single CPU core) effectively ended around 2009. Our programming idioms for parallel processing are still so crude that hardly anyone can write maintainable parallelized code for anything but toy problems. Until we can restart Moore's Law, this is all academic.

http://www.globalnerdy.com/2007/09/07/multicore-musings/


To be fair, many machine learning algorithms (especially deep learning) are very parallelizable, both when training and predicting.
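For instance, a minimal data-parallel sketch (plain NumPy, a linear model, and pretend "workers" - all toy assumptions of mine, not anyone's production setup): the gradient over a big batch splits cleanly across shards, and averaging the partial gradients recovers the full-batch gradient exactly.

    import numpy as np

    # Data parallelism in miniature: each "worker" computes the gradient of a
    # linear least-squares model on its shard; averaging the shard gradients
    # gives exactly the full-batch gradient.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
    w = np.zeros(8)

    def shard_gradient(X_shard, y_shard, w):
        err = X_shard @ w - y_shard
        return X_shard.T @ err / len(y_shard)

    shards = np.array_split(np.arange(len(y)), 4)   # pretend these are 4 GPUs
    grads = [shard_gradient(X[idx], y[idx], w) for idx in shards]
    full_grad = np.mean(grads, axis=0)

    assert np.allclose(full_grad, X.T @ (X @ w - y) / len(y))

That's why training scales across GPUs so readily: the expensive part is embarrassingly parallel over the data, with only a small gradient average to synchronize.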


But the law says nothing about cores or clock speeds etc., just transistors and their sizes. The regular reductions in feature size on dies are still happening, as I understand it.


The size of a copper atom is about 0.2nm. We are already at 10nm fab processes. We can't get much smaller, otherwise things become unstable and heat becomes a major issue.

On the positive side, electrical signals travel much faster than signals across synapses, and transistors are smaller than neurons. So with today's technology, if we were to pack parallel silicon cores into the volume of a human brain, we'd have a much faster brain.
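Back-of-the-envelope version of that claim, using commonly cited round figures (the conclusion only needs the orders of magnitude to be roughly right):

    # Rough orders of magnitude, not precise measurements.
    neuron_max_firing_rate = 1e3     # Hz - a generous upper bound for biological neurons
    transistor_switching   = 1e9     # Hz - conservative for modern silicon
    axon_signal_speed      = 100.0   # m/s - fast myelinated axons
    wire_signal_speed      = 2e8     # m/s - roughly 2/3 the speed of light in copper

    print(transistor_switching / neuron_max_firing_rate)  # ~1e6 faster switching
    print(wire_signal_speed / axon_signal_speed)          # ~2e6 faster signalling

Whether you could actually power and cool such a thing is another question.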


The heat issue makes the latter impossible.


Toy problems? GPGPU is widely used in real-world applications such as image processing, compression, computer graphics, physical simulation, cryptography and machine learning, all of which are embarrassingly parallel. With regards to programming, I would not call CUDA and OpenCL crude at all.


Interesting in that almost all those problems are share-nothing, i.e. no shared state.


For the longest time the conspiracist inside me told me that someone had already created a monstrous hyperintelligence, and that the bombardment of articles about the ethics of artificial intelligence was just a warning before the apocalypse that would finish all civilization. Nowadays when I read another AI ethics article I just take it as proof that the AI bubble has reached hyperinflation.


Wooly Definitions: The only definition that matters is ability to make correct predictions about the future. All other things called "intelligence" are a consequence of this. This definition is formalized with AIXI, which is uncomputable, but computable approximations exist.

Hawking's Cat, Einstein's Cat: Scams require some intelligence from the victim. The victim needs to mistakenly believe they're doing something smart. Cats are too stupid to scam. Unlike humans they can't talk and they fail the mirror test, suggesting no self awareness. Human behavior when confronted with a deceptive superintelligence is not going to be the same as cat behavior when confronted with a deceptive human.

Emus: The name "Great Emu War" is a joke. The humans had limited resources available, because killing emus wasn't important enough to justify more. If we really wanted to kill all the emus we could do it. We've made plenty of other animals extinct. The motivation for an AI is set by its reward function, which can be as high as necessary.

Slavic Pessimism: This argument suggests that building nuclear weapons is impossible.

Complex Motivations: This isn't obvious nonsense, but consider that all intelligences we've seen so far are the result of evolution, which tends to produce more complexity than needed. A leg is more complicated than a wheel. A non-evolved intelligence would not necessarily have complex motivations.

Actual AI: Compare with the argument from actual primitive nuclear reactors, which get mildly warm and never explode.

My Roommate: As a human, your roommate had a human reward function. Unsurprisingly, he acted like a human. Why should a non-human reward function result in human-like behavior?

Brain Surgery: Brain surgery would be a lot easier if you could take backups of brains, and duplicate them, and you had no concern about killing the patient.

Childhood: If this turns out to be needed, why can't increasing intelligence result in increased ability to simulate an environment suitable for raising superintelligences?

Gilligan's Island: There's no reason to assume an AI would be isolated. It could have or gain access to the internet and most human knowledge, and the mind architecture could contain many independent sub-minds.

Grandiosity: This depends on assigning ethical value to hypothetical humans, which isn't obviously correct.

Outside Argument, Megalomania, Comic Book Ethics, String Theory For Programmers: Ad-hominem.

Transhuman Voodoo, Data Hunger, AI Cosplay: Why should something be false because it's deplorable? And why should something be false because it encourages deplorable behavior?

Religion 2.0: Ted Chiang talked about the definition of "magic" in an interview (https://medium.com/learning-for-life/stories-of-ted-chiangs-...).

>Another way to think about these two depictions is to ask whether the universe of the story recognizes the existence of persons. I think magic is an indication that the universe recognizes certain people as individuals, as having special properties as an individual, whereas a story in which turning lead into gold is an industrial process is describing a completely impersonal universe.

All religions require some element of magic. Even Buddhism, which is arguably the least magical of all religions, treats consciousness as magic. AI requires no magic, therefore it is not a religion.

Simulation Fever: Simulated universes do not have to be magical by Chiang's definition. A universe could be simulated by something that pays no attention to individuals within the simulation, eg. something that lets a large number of universes run their course then examines them statistically. Possibility of this increases the possibility of living in a non-magical universe despite possibility of living in a simulation.

Incentivizing Crazy: This isn't an argument, it's a description of a field. Perhaps the author meant it to be an ad-hominem: "the idea is false because crazy people believe it".


> Hawking's Cat, Einstein's Cat: Scams require some intelligence from the victim. The victim needs to mistakenly believe they're doing something smart.

Only if your scam is too complicated for the victim :). Cats are pretty easy to scam - you just have to stick to simple things. Cats respond predictably to food they like. As a cat person, I find this trick to be more than enough for all my needs ;).

This nitpick aside, I second all your points.

> If this turns out to be needed, why can't increasing intelligence result in increased ability to simulate an environment suitable for raising superintelligences?

Indeed, we could feed the required stimuli to an AI faster than real-time. VR is a thing already, and so is the "fast-forward" button.


I did read the article through, but made up my mind that the author is presenting a flawed argument when I saw how quickly he skimmed through his base premises, not really giving them much in terms of fair thought. In particular, I don't see how he can so blithely assume so easily that there is no quantum effect in the brain structure. I feel there's some degree of arrogance there, as we have not yet even begun to unlock the inner secrets of the brain nor come close to mimicking it. Our best approximations of intelligence are analogous to the comparison between sewing by hand vs the mechanics of a sewing machine, only the sewing machine remains far inferior (at least for now).


> so blithely assume so easily that there is no quantum effect in the brain structure.

He is following the mainstream opinion on this view, so does not have to justify it in detail. Almost nobody subscribes to Penrose's proposal.

> Our best approximations of intelligence are analogous to the comparison between sewing by hand vs the mechanics of a sewing machine, only the sewing machine remains far inferior (at least for now).

There's a blithe assumption of your own, which you don't attempt to justify. Newell and Simon's physical symbol system hypothesis is still taken seriously by mainstream AI, and it has the opposing view: that intelligence is a computational process, and thus we understand many fundamental things about it already.

I'm not taking a position here, just clarifying what the dominant positions are right now.

Also, sewing machines are better at sewing than 99% of people by any metric I can think of, and faster than 100% of people. They also do probably >90% of all the sewing on the planet. So I don't think your analogy does the work you intended.


> In particular, I don't see how he can so blithely assume so easily that there is no quantum effect in the brain structure.

Read it more carefully, because that's not what he did. He is saying "The AI riskers claim 'If you accept X, then you must accept Y'", where X is "the brain does not use quantum effects" and Y is "AI apocalypse". He does not himself take a position on X, but argues against the X=>Y implication.

And it's not really relevant, but for what it's worth, the majority consensus among neuroscientists is that the brain does not use quantum mechanical effects and can be modeled using classical physics alone. Roger Penrose is in the minority on this question.


I don't make this assumption about brains. I was trying to lay out the premisses you need for Bostrom's argument to go through.

I think you may have mistaken the thing I was arguing against for the thing I believe.


The null hypothesis should not be that intelligence requires quantum processes. We'd need to show that: (1) the brain uses this process (2) that it is necessary for certain types of brain functions. Until we run into a wall let's not assume we need more sky ladders.


Even if it's not a quantum effect, biological systems often have characteristics that break our conception of neat layers of abstraction.


ITT: True Believers of the Church of The Singularity


This is what I think about an imminent AI danger and those who believe in it:

1. It’s not as near as you think. Current machine “intelligence” is hardly on par with the abilities of an insect, let alone anything we’d call intelligent. Yeah, it’s capable of analyzing a lot of data, but computers could always do that. We don’t really know how to move forward, and there has been little progress in theory in a long while. We haven’t made any significant theoretical breakthrough in 20-40 years.

2. Intelligence isn’t a general algorithm to solve everything. For example, humans are not good at approximating solutions to NP-complete problems. Other algorithms make a better use of computational resources to solve problems that intelligence is not good at, and super-intelligence is not required to come up with those algorithms, as they use brute force on cheap computing nodes. Intelligence also isn't necessarily good at solving human problems, many of which require persuasion through inspiration or other means.

3. We don’t know what intelligence is, so we don’t know if the intelligence algorithm (or a class of algorithms) can be improved at all. Simply running at higher speeds is no guarantee of more capabilities. For all we know, an intelligent mind operating at higher speed may merely experience time as moving very slow, and grow insane with boredom. Also, we don’t know whether the intelligence algorithm can multitask well to exploit that extra time.

But most interesting at all is what I think is at the core of the issue:

4. Intelligence is not that dangerous — or, rather, it's not so much more dangerous than non-intelligent things. This is related to point 2 above. We can obviously see that in nature, but also in human society. Power correlates weakly with intelligence beyond some rather average point. Charisma and other character traits seem to have a much bigger impact on power. Hitler wasn't a genius. But some smart people — because they are smarter than average — fantasize about a world where intelligence is everything, not in a binary way but in a way that gives higher intelligence non-linearly more power. This is a power fantasy, where the advantage they possess translates into the power they lack.


>AI alarmists are fond of the paper clip maximizer, a notional computer that runs a paper clip factory, becomes sentient, recursively self-improves to Godlike powers, and then devotes all its energy to filling the universe with paper clips.

>It exterminates humanity not because it's evil, but because our blood contains iron that could be better used in paper clips.

The consolation of the cingularity narrative is that it tells a story in which the world is not—yet—being destroyed by paper clip maximizers—in cingularity land, the nightmare scenario is still a ways off, and thankfully, with the help of a few Ayn Rand types, market forces can still save us.


From your comment I'd like to segway into a discussion about how brand names are causing people to forget how to spell normal English words.


"We’ve learned that at least one American plutocrat (almost certainly Elon Musk, who believes the odds are a billion to one against us living in "base reality") has hired a pair of coders to try to hack the simulation."

-->source?



I don't agree/understand Elon's reasoning here.

> If we aren’t actually living through a simulation, Mr Musk said, then all human life is probably about to come to an end and so we should hope that we are living in one. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation,” he said at the Recode conference.

I don't know why we should hope for that because any civilization in a layer in the stack of sims could be destroyed, which would destroy all nested sims beneath.

In fact you should hope you're further up the stack as much as possible because then the probability of destruction is lower.



About the we live in a computer simulation thing. When I look at the universe or subatomic particles, I see physical reality and I don't see anything that indicates a computer simulation. We just don't know the next step. Thinking we live in a computer simulation as the next physical phenomenon for the infinitely great just seems to me to display a lack of imagination rooted in our time and resembles the belief in a god. I would tend to agree with the author in that regard that some in our industry tend toward mysticism where computers are the religion.


In my opinion, the problem is not intelligence per se, but intelligence willing to change everything in a totalitarian/idealist way. No matter if it is "artificial" or "natural".


> But of course we know that there are all kinds of configurations of matter, like a motorcycle, that are faster than a cheetah and even look a little bit cooler.

yet more inflammatory rhetoric from the Pinboard guy


He's usually so much funnier than this! Something must be going on. Maybe his dog died.


every tech worker who bought a motorcycle to be Hard and Cool just owned my karma with downvotes

but you will never be a Hell's Angel


The problem with military coups is that a lot of the people who are involved in them never survive the regime they create. When you are trying to consolidate power as a figurehead, the first thing you do is make sure that anyone with questionable loyalties, whether real or not, is eliminated and replaced with people who are grateful to hold that position.

Intelligent people know this, which is why there really aren't that many military coups, regardless of how those in the military feel about their political masters.


The idea that huge quantities of computing power will lead to massively better intelligence is like saying huge quantities of barbecue sauce will lead to massively better spare ribs.


This is not what superintelligence people are worried about, in general. The human brain is already embarrassingly parallel. I'm sure you can find at least one person who will advocate for the "Moore's Law=Doom" scenario, but you won't find that argument endorsed by anyone currently working on AI Safety.


Why is everyone afraid of artificial intelligence? I'm more afraid of natural, infinite human stupidity. Superintelligence would just balance that out ;-) . But seriously, here is my critique of Nick Bostrom's arguments about superintelligence: https://asmaier.blogspot.de/2015/06/superintelligence-critiq...


What the author is actually talking about is of course the mythological techno-singularity event, which is of course bullshit.

Have you ever programmed any piece of software that suddenly implemented features of its own? Did your program become sentient?

Laughable. These are child-like fantasies belonging in 50s sci-fi.

What I do know for a fact is that when it comes to AI fear mongers, there is not much intelligence to be detected.


>Have you ever programmed any piece of software that suddenly implemented features of its own?

I'm not sure what particular fear mongers you're talking about, but Bostrom and the like are talking about software that's specifically made to change and improve itself on a fundamental level.


This is no different than Facebook ads/news feed adapting to your interests and only showing what you like in the hope of maximizing total engagement and retention.

We all know that Google/Facebook/Bing build a little bubble for you and customize results and ads based on your behaviour.


I'm saying that's the pipe dream. Software won't change on its own.


We already have software that can change on its own. Neural nets are software too, and with things like backpropagation they can update their weights, essentially changing themselves.

I'm not saying that this level of change is enough to get the disaster scenarios that Bostrom talks about, but it's folly to say that self-changing software can't possibly exist.
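
To make "software that changes itself" concrete, here is a minimal sketch (my own toy example, not from anyone in this thread) of a gradient-descent loop in which the program's mutable weights are rewritten by the program itself, in plain NumPy:

    # Toy illustration: a one-layer model that rewrites its own weights
    # via gradient descent on a least-squares objective.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                  # toy inputs
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)    # toy targets

    w = np.zeros(3)                                # the program's mutable state
    lr = 0.1
    for step in range(200):
        pred = X @ w
        grad = 2 * X.T @ (pred - y) / len(y)       # gradient of mean squared error
        w -= lr * grad                             # the "self-change": weights update weights
    print(w)                                       # converges toward true_w

Nobody hand-writes the final values of w; the program's behavior after training is determined by the update rule plus the data, which is the limited but real sense in which such software "changes on its own".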


Neural net changing its weights is no different from any other software updating the values of its variables.


I think brains very probably use quantum effects in ways we might not even be able to study with anything close to today's technology. As a result, individual neurons or groups of neurons can be way more complicated than we are expecting them to be.

I'd say we're at least two major revolutions away from even coming close to a chimpanzee's intellect, much less a human's.


This is at least partly relevant to the discussion: http://www.smbc-comics.com/comic/the-talk-3

As a shorthand "quantum effects" are often hand-waved as crazy powerful "weird" things, but we can certainly model them and they really aren't as "magic" as a lot of popular science would have one believe (as much as even some of us otherwise rational people so strongly wish to believe in quantum magic).


Love the bit at the end: "Quantum computing and consciousness are both weird and therefore equivalent."

Quantum consciousness doesn't sound very different from plain old vitalism.


> but we can certainly model them

Our ability to model quantum systems is feeble indeed. Anything more than about 40 qubits and it starts costing millions of dollars; at 50 qubits, just forget about it.

There's plenty of quantum condensed matter physics that is just completely impossible to model.

Even in chemistry, there are huge swathes of molecules that we are simply unable to model. This is hopefully going to be the "killer app" of quantum computers, btw.
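
To put rough numbers on the 40-to-50-qubit claim: a brute-force state-vector simulation has to store 2^n complex amplitudes, so memory alone grows exponentially. A back-of-the-envelope sketch (my own arithmetic, assuming 16 bytes per amplitude, i.e. two 64-bit floats):

    # Memory needed for a full state-vector simulation of n qubits.
    def statevector_bytes(n_qubits, bytes_per_amplitude=16):
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (30, 40, 50):
        gib = statevector_bytes(n) / 2**30
        print(f"{n} qubits ~ {gib:,.0f} GiB")
    # 30 qubits ~ 16 GiB, 40 qubits ~ 16,384 GiB (16 TiB),
    # 50 qubits ~ 16,777,216 GiB (16 PiB)

Clever circuit-specific tricks can push past the naive approach, but this scaling is why each extra ten qubits turns a beefy workstation problem into a national-lab supercomputer problem.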


Is there any evidence that the functioning of neurons requires consideration of quantum effects acting on them?

Maybe that's the case, but this sounds like it goes against Occam's razor, and it gives a lot of extra credit to the chaotic process of evolution to assume it was meaningfully able to harness these quantum effects.


Chemistry uses quantum effects, so of course brains do.

I think you have to be a bit more specific to claim something particularly special.

I think it's much more likely that the important difference is that brains are built in 3D, allowing far, far more connectivity than wiring in ~5 layers of metal between 2D transistors in the same plane. It's that added complexity, which we don't have the technology to reproduce (yet), that's the issue.


The difference between a chimpanzee's intellect and a human's is likely a matter of degree/quantity. We're so many orders of magnitude away from that sort of intuitive intellect that it's unfair to refer to it as simply an order of magnitude difference, IMHO.


Computers use quantum effects. That's how transistors function.


Why are quantum effects complicated? They are random, yet our behavior is not very random at all.


Given that even photosynthesis uses quantum effects (yes, really) to get its basic functions done, it doesn't seem extremely far-fetched that nervous systems make use of them as well, assuming there is a theory that presupposes these effects (if we can coherently explain neurons without them, why should they involve these effects, a.k.a. Occam's Razor).


There's nothing suggesting that neurons are not deterministic; in fact, they are simulated pretty well with some cable theory and HH models. https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model
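
For anyone curious, the classic single-compartment HH model is just four coupled ODEs you can integrate in a handful of lines. A deliberately crude forward-Euler sketch with the textbook squid-axon parameters (my own toy code, not a production simulator):

    # Hodgkin-Huxley point neuron, forward Euler.
    # Units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.
    import math

    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 50.0, -77.0, -54.4

    def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
    def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
    def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))
    def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)

    V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting state
    dt, I_ext = 0.01, 10.0                   # time step (ms), injected current
    for step in range(int(50 / dt)):         # simulate 50 ms
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        if step % 500 == 0:
            print(f"t={step*dt:5.1f} ms  V={V:7.2f} mV")

With a constant injected current this spikes repetitively, and nothing in the equations is stochastic, which is the deterministic point being made above.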


> Why are quantum effects complicated?

Because of entanglement.


I think these are problems only if it turns out that intelligence correlates with making goals and pursuing said goals. Given the other article on the front page at the moment (which I am totally intelligent enough to link to, but also unmotivated to do so) saying that intelligence is not linked to success, I just don't know that we can assume that correlation.


A key missing piece for intelligence: how many measurements can you make on the world? It doesn't matter how much computational power something has if it can't make measurements and ask and test hypotheses.

Edit: should state that this is a corollary of "> The Argument From Actual AI," but generalized to any 'intelligent' system, not just neural nets.


Glad to see a voice from the strong AI skeptic camp here. Reminds me of a book I read a long time ago called "Great Mambo Chicken and the Transhuman Condition." I used to drink the kool-aid myself until a friend of mine snapped me out of it by saying, "Dude, you're telling me you actually want Skynet??" I gave him my copy of the book.


The New Yorker did an article on Bostrom and the Future of Humanity Institute last year:

http://www.newyorker.com/magazine/2015/11/23/doomsday-invent...


The thing I think everyone should be most worried about is not robots deciding to kill us, it's the economic upheaval that could result from robots that can do jobs better than their human counterparts.

There have been studies done on this type of thing (https://journalistsresource.org/studies/economics/jobs/robot...) and so far the use of robots has mostly focused on helping human workers become more productive, not on replacing them entirely. However, for lower-skilled workers this isn't always true, and if robots were able to replace even the most skilled workers... that could cause problems for human employment. To quote the article on the current status:

"Robots had no effect on the hours worked by high-skilled workers. While the authors found that industrial robots had no significant effect on overall employment, there was some evidence that they crowded out low-skilled and, to a lesser extent, middle-skilled workers."

Now continue that line of thinking and imagine a world where a robot could do any job better than a human...

We could end up with a "is that American made?" or "is that free range chicken?" type of scenario where companies that refuse to replace human workers with robots are competing against other companies that will do anything to lower costs, even if ethically questionable.

So then we potentially end up in a situation where the rich (Executives and stock holders) get richer by replacing costly human workers with cheaper, more efficient robots, and the wealth of the average family declines as people struggle to find work. Alright, well maybe we give all humans an allowance to live off of, food to eat, a home to live in, etc. Except...

Human beings need work. They need to feel a sense of purpose. I don't think the humans from the movie Wall-E hold much appeal. Let's not go there.

Ok, so maybe we pass laws against replacing many human jobs with robots. Well, if the robots are truly intelligent, aren't we then discriminating against a group of sentient beings solely because they are too good at their job? Isn't this just going to be a techno world version of the civil rights and lgbt rights movements?

These are the things I worry about. Not robots killing me.

As a side note, I hope cyborg and other bio tech improves at some point; a lot of these concerns could be mitigated if humans had the potential to improve themselves beyond any normal evolutionary rates.


Look at Moravec's Paradox.

It's probably the most valid result from the research and yet it is overlooked. We'll have AI capable of replacing the middle class before we have robots capable of replacing the working class.


The quintessential "rogue AI" scenario, pre-Terminator/SkyNet but post-"Metropolis", is "Colossus: The Forbin Project".

It's finally available on widescreen blu-ray in Germany but Universal has still not re-released it for the western audience.

The filmed exterior of Colossus HQ is the Lawrence Hall of Science in Berkeley, CA.


I have to say how well maciej writes. I hope someday I am able to write and express my thoughts as well as he can.


I think that we vastly underestimate bias in this subject. Smart people are either afraid of AI or afraid of change. This article comes from the second category, people who wish things would stay the same. There are also some very weak arguments in it.


As always it seems the fear mongers in AI do not actually know how programming and "the machine" works.

Also, depending on how you measure intelligence, "machines" have been way smarter than humans since the first calculator.


such a great read, here's the video ~ https://www.youtube.com/watch?v=kErHiET5YPw


I am one of those who is convinced that AI will destroy humanity and that Elon's/others' efforts will not help.

It's hard for me to argue that this idea isn't eating me.


At the pace the world is moving towards its doom, one can be forgiven for thinking the Internet has gained consciousness and is pulling tricks to get us all killed.


I feel like this essay is missing a key point around defining intelligence. Machines can be trained to emulate intelligence, but machines are not themselves intelligent.

Our neural nets can drive cars, convert speech to text, recognize images, and maybe even carry on a conversation.

But there's no light behind the eyes. It's all synthetic. It's an emulation which mimes intelligence through the brute force of observing billions of inputs and their associated outputs.

What it decisively cannot do is something it has never seen before, except by mistake.

Wake me when none of this is true?


You're really looking at the dichotomy intelligence / consciousness here, actually.

We know what intelligence is. And yes, you can actually have dead intelligence, no problem. Intelligence is computing, nothing more.

What we don't know is what consciousness is. You seem to belong to the category of people who believe consciousness is mandatory for intelligence. A lot of us feel differently. There are even some who believe consciousness is basically an illusion (I do not share this point of view).

Look at the Chinese Room thought experiment.

Read the novel 'Blindsight' by Peter Watts.


There's a longstanding philosophical argument about this. Google 'Searle's Chinese Room'.


I believe it's fair to say that a computer programmed with knowledge of sufficient numbers of inputs and outputs does indeed "understand" -- in fact, such a computer can truly speak and converse in Chinese quite well. I believe it's a distinction without a difference to say the computer does not, in fact, understand Chinese in this case.

Instead of arguing semantics I propose a different sort of distinction.

A computer programmed as a deep neural net can understand remarkably well, and with that understanding it becomes a remarkable tool for automation.

However it remains nothing more than a tool. No more intent than a hammer. And without will, certainly without will to evolve.

Only in the sense that the algorithm is programmed to improve its fitness does it calculate coldly towards that end; never in an innovative sense, and certainly not in an adversarial sense.

I agree thoroughly with the other commenters who propose that it's not that AI will defeat us, but rather that AI will be so useful that the economic damage will be extreme.

AI will defeat us by replacing the need for us in all productive endeavors. Anything we can do it can do better, cheaper, so far up the value chain that only the elite will remain gainfully employed.

Too much of human labor will be eaten by AI; we had better get a hell of a lot better at educating our masses if we ever hope they will have something productive they're able to do.

I wonder how the theory of comparative advantage stacks up against AI -- a good for which there is no scarcity!


The only reason we don't have a clean, cogent 'definition of intelligence' is because people like you blindly refuse to accept the possibility that machines can be intelligent. You're demanding that someone explain why black and white are different colors while requiring them to conclude that the night sky and the sun must look the same.


A lot of our greatest inventions or discoveries were "mistakes".


What's interesting is the evolution of consciousness. In the long term, does it matter whether or not it's embodied in meat?


I distinguish between consciousness and self-consciousness, but as to your question, I don't think it matters at all what form self-consciousness takes. The moral thing to do when something appears to be self-conscious is to treat it with the same kindness you would expect for yourself (unless it is obvious that this something is a clear and present danger). I can totally see how, from a moral perspective, a human should be put on trial and imprisoned for killing a sentient "machine" - though unfortunately, looking at how we as a society treat animals and many times look the other way at mass cruelty to our own kind, I'm not really sure I'd mind a super-intelligence (and perhaps as a result also a benevolent one) ruling us instead of the other way around.

Of course, there's the possibility that a super-intelligence would not be able to break through the human egocentric limitations that may have been built into it by its human creators, and in that case, we're screwed...


Right, self-consciousness.

I see evolution generally as selection in various configuration spaces. Specifics at various levels -- cosmogenesis, nucleosynthesis, life, consciousness, ... -- are different, of course. But it's arguably the same process.


If AI is able to self-improve, don't you think it will reach a level of complexity where it will start asking "why?"


Say that the simulation time period is 1 billion years. To the post-singularity, a few thousand years are minor details in history, like what Napoleon had for breakfast the morning before the Battle of Waterloo.

Let the thoughts flow as far as you'd like, but come back to reality once in a while. After all, your ideas are conditional on your body.



Could AI be programmed to have intuitions or irrational behaviors?


Tangent: What's the tech scene like in Zagreb?


It's pretty good.

There are several hackerspaces (https://wiki.hackerspaces.org/Zagreb - I'm partial to the one in Mama), and every now and then we'll gather around a "Nothing will happen" (http://www.nsnd.org/).

There's a vibrant meetup scene (well-covered by meetup.com), and WebCamp Zagreb (https://2016.webcampzg.org/) is the community conference that tries to gather different meetups & communities once a year.

Companies from abroad tend to open development offices here to exploit the cheaper workforce, especially since Croatia has joined the EU. There's also a number of local companies that are constantly hiring, resulting in a solid amount of hiring opportunities even for the part of the crowd that's a bit pickier.

People are leaving Croatia in general, though, and the tech community isn't immune to that. Lots of people moved to other EU countries, and although that's not unexpected at all, I believe it's left a dent within all of the above.

If you decide to visit, give me a shout. :)


The parent is one of the organizers of this conference, and I just want to say what a wonderful group ran WebCamp, and how welcoming and friendly they were.

If you get a chance to speak there, or attend, take it!


Shouldn't a superintelligence be smart enough to become another Gandhi? Why not?


I for one welcome our new superintelligent overlords. I can't imagine they could do much worse than us. https://xkcd.com/1626/


Unpersuasive premises. Obvious alarmism.


Let me just elaborate on the ‘complex motivations’ idea, because I certainly think that ‘orthogonality’ is the weak point in the AGI doomsday story.

Orthogonality is defined by Bostrom as the postulate that a super-intelligence can have nearly any arbitrary goals. Here is a short argument as to why ‘orthogonality’ may be false:

In so far as an AGI has a precisely defined goal, it is likely that the AGI cannot be super-intelligent. The reason is because there’s always a certain irreducible amount of fuzziness or ambiguity in the definition of some types of concepts (‘non-trivial’ concepts associated with values don’t have necessary definitions). Let us call these concepts, fuzzy concepts (or f-concepts).

Now imagine that you are trying to precisely define the goals that specify what you want an AGI to do, but it turns out that for certain goals there’s an unavoidable trade-off: trying to increase the precision of the definitions reduces the cognitive power of the AGI. That’s because non-trivial goals need the aforementioned ‘f-concepts’, and you can’t define these precisely without over-simplifying them.

The only way to deal with f-concepts is by using a ‘concept cloud’ – instead of a single crisp definition, you would need to have a ‘cloud’ or ‘cluster’ of multiple slightly different definitions, and it’s the totality of all these that specifies the goals of the AGI.

So for example, such f-concepts (f) would need a whole set of slightly differing definitions (d):

F = (d1, d2, d3, d4, d5, d6, …)

But now the AGI needs a way to integrate all the slightly conflicting definitions into a single coherent set. Let us designate the methods that do this as <integration-methods>.

But finding better <integration methods> is an instrumental goal (needed for whatever other goals the AGI must have). So unavoidably, extra goals must emerge to handle these f-concepts, in addition to whatever original goals the programmer was trying to specify. And if these ‘extra’ goals conflict too badly with the original ones, then the AGI will be cognitively handicapped.

This falsifies orthogonality: f-concepts can only be handled via the emergence of additional goals to perform the internal conflict-resolution procedures that integrate multiple differing definitions of goals in a ‘concept-cloud’.

In so far as an AGI has goals that can be precisely specified, orthogonality is trivially true, but such an AGI probably can’t become super-intelligent. It’s cognitively handicapped.

In so far as an AGI has fuzzy goals, it can become super-intelligent, but orthogonality is likely falsified, because ‘extra’ goals need to emerge to handle ‘conflict resolution’ and integration of multiple differing definitions in the concept cloud.

All of this just confirms that goal-drift of our future descendants is unavoidable. The irony is that this is the very reason why ‘orthogonality’ may be false.
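
In case it helps make the ‘concept cloud’ idea concrete, here is a toy sketch of my own (all names hypothetical, not from Bostrom or the argument above): a fuzzy concept represented as a bag of slightly conflicting definitions, with the <integration-method> as a separate, swappable choice, which is exactly the piece the argument claims must emerge as an extra goal.

    # Toy illustration only. A fuzzy concept F is a cloud of imperfect
    # definitions d1..dn; an integration method reconciles their verdicts.
    from statistics import mean

    # Slightly conflicting definitions of "is this thing a paper clip?"
    definitions = [
        lambda x: 1.0 if x.get("bends_wire") else 0.0,
        lambda x: 1.0 if x.get("holds_paper") else 0.0,
        lambda x: 1.0 if x.get("mass_grams", 0) < 2 else 0.0,
    ]

    def integrate_mean(scores):      # one possible <integration-method>
        return mean(scores)

    def integrate_strict(scores):    # another: every definition must agree
        return min(scores)

    candidate = {"bends_wire": True, "holds_paper": True, "mass_grams": 5}
    scores = [d(candidate) for d in definitions]
    print(integrate_mean(scores), integrate_strict(scores))   # ~0.67 vs 0.0

Which integrator to use is not contained in any single definition, so an agent pursuing F has to adopt some policy for choosing and refining it, i.e. an extra goal of the kind the argument describes.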


I'm not sure if this is meant to be funny or to be taken seriously, but the series of strawmen offered as arguments, e.g. referencing some American sitcom as an example of a phenomenon, is pretty weak. It is sad, because I can agree with many of the ideas, but the flawed reasoning is not convincing to a critical reader and wastes those arguments...

Although the read is overall interesting, I find the style and arguments subpar, too American for my taste.


It puzzles me that people think that the nanosecond some superintelligence comes into being, that it also has the capacity to destroy humans/earth/whatever. No thought is given as to how this intellect gets said capacity, bar 'launch the nukes'.

Seriously, if "it" turned against "us", we'd have the upper hand. For example, quality electronics are hard to make without sending African children down into mines and having African adults shoot each other over the results (ie: conflict minerals). If a superintelligence is reliant on us to propagate its physical-world interactions, we're going to be just fine.

I mean come on, we all work in IT, and we all know just how difficult it is to keep hardware running securely, safely, and in good order. Stuff fails all the time. Similarly, we all know people who are really intelligent, but this doesn't translate to success in life for them.

In short, "being intelligent" isn't enough - the entity also needs ways to effectively work the world around them.

edit: heh, just saw another article on the front page: "IQ is only a minor factor in success"


Ossining news


hello everybody


[flagged]


Please don't post unsubstantive comments here.

We detached this subthread from https://news.ycombinator.com/item?id=13242458 and marked it off-topic.


I mean, Texas is pretty great, and Colorado isn't bad either...?


Colorado's way better than Texas.

>Coloradan living in Texas


Am I the only one who thought that the points he came up with to refute AGI / ASI actually made the concerns deeper?


We should figure out intelligence before speculating on super-intelligence. I think Kurzweil and Hofstadter have a compelling model in 'How to Create a Mind' and 'Surfaces and Essences: Analogy as the Fuel and Fire of Thinking', but it's not exactly rigorous, and we still haven't created anything that could pass the Turing test, which wouldn't even require high intelligence - just a 2nd-grade level or something.


> I can't point to the part of my brain that is "good at neurosurgery", operate on it, and by repeating the procedure make myself the greatest neurosurgeon that has ever lived. Ben Carson tried that, and look what happened to him.

Nice, but does it fall under the political ban?


The political ban ended two days after it started.


Good to know, it was a terrible idea.



