Introducing OpenAI (openai.com)
1107 points by sama on Dec 11, 2015 | 376 comments



> Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.

In a sense, we have no other defense. AI is just math and code, and I know of no way to distinguish good linear algebra from evil linear algebra.

The barriers to putting that math and code together for AI, at least physically, are only slightly higher than writing "Hello World." Certainly much lower than other possible existential threats, like nuclear weapons. Two people in a basement might make significant advances in AI research. So from the start, AI appears to be impossible to regulate. If an AGI is possible, then it is inevitable.

I happen to support the widespread use of AI, and see many potential benefits. (Disclosure: I'm part of an AI startup: http://www.skymind.io) Thinking about AI is the cocaine of technologists; i.e. it makes them needlessly paranoid.

But if I adopt Elon's caution toward the technology, then I'm not sure if I agree with his reasoning.

If he believes in the potential harm of AI, then supporting its widespread use doesn't seem logical. If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.


The risk of AI comes from the control problem. We have a vague idea of how to build AIs. The work is just to optimize the algorithms extensively so they can run in real time. But we have no idea how to control those AIs. We can push a "reward button" every time it does something we like. But we can't prevent it from killing the programmer and stealing the button for itself.

It doesn't matter at all who makes AI first. Without the control problem solved, no one will be able to use it for evil ends, even if they want to. But it's still incredibly dangerous if it gets loose in some uncontrolled setting.

So this project is the exact opposite of what you would want to do if you wanted to minimize AI risk. This is rushing towards the risk as fast as possible.


Let's start by figuring out how to control humans before we start on the AIs.


Actually, unlike controlling humans, whatever that means, it's very easy to limit the impact of AI - all you have to do is treat advanced chip fabrication processes the way nuclear technology is treated, and put a legal limit on processor power so it's somewhere where it was in the late 80s. (So you have office software and email but neither YouTube nor Doom 3; a reasonable price for our survival, I find.) Fewer organizations today can manufacture chips using the latest process nodes than there are governments that can make nuclear weapons, and it's probably harder to build a chip fab in secrecy and make a large enough number of chips to power an AI than it is to build nuclear weapons in secrecy. Berglas argues for this in http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandc...

Of course the reason governments don't do this is because almost nobody sees the risks of AI "acting on its own" (and they're probably right), nor the risks of things like rogue drones made from cheap commodity hardware (which might be sensible as well - these are more like RPGs in the wrong hands than they are like nukes in the wrong hands.)


>all you have to do is treat advanced chip fabrication processes the way nuclear technology is treated, and put a legal limit on processor power so it's somewhere where it was in the late 80s.

Are you a villain in a Vernor Vinge novel?

I may not like the numbers you're crunching, but I will defend to the death your right to crunch them.


> The risks of things like rogue drones made from cheap commodity hardware, which might be sensible as well …

In Prof. Tegmark's recent presentation at the UN, he mentioned the possibility of extremely cheap drones that approach the victim's face very quickly and pierce a bolt into their brain through one of their eyes. Such a drone wouldn't require high-precision projectiles, which would make it cheap to build.

> Of course the reason governments don't do this is because almost nobody sees the risks of AI "acting on its own"…

It is nearly impossible to enforce something like this globally and forever. At best, it would be a near-term solution, especially because there is a huge military and economic interest in technology and AI. Quite possibly, the only long-term solution is solving the control problem.


>the possibility of extremely cheap drones that approach the victim's face very quickly and pierce a bolt into their brain through one of their eyes.

That's super impractical for many reasons. Drones don't move fast enough, nor can they change their velocity quickly enough to do that. People will almost certainly react if they see the drone coming for them: cover their face, swat it down, etc.

But even if it did work, it's not a serious threat. People spend a ton of time thinking about all the ways new technologies could be abused by terrorists, yet for some reason they never consider that tons of existing technologies can be abused too.

Many people have come up with really effective terrorist ideas that would kill lots of people, or do lots of damage. The reason the world is still here is because terrorists are really rare, and really incompetent.

> Short range slingshot mechanisms are several orders of magnitude cheaper to build than firearms.

Not necessarily. It's actually not that difficult to make a firearm from simple tools and parts from a hardware store. And it will be way more deadly and accurate than a sling. Not to mention simple pipe bombs and stuff.


It will be interesting to test whether one can react fast enough if a drone like this approaches from a shallow angle from the side. It could even approach in free fall the last few meters, so it would be almost noiseless: https://www.youtube.com/watch?v=VxOdXYCRAds

I think a new quality about this kind of weapon is that it can be controlled remotely or can even operate semi-autonomously. Deadly pipe bombs are certainly heavier than a crossbow and ignition mechanisms aren't trivial to build.


> extremely cheap drones that approach the victim's face very quickly and pierce a bolt into their brain

It's not much more of a threat to society than a handgun. A WMD it is not, unless you make a lot of these and launch them at the same time, which is probably less effective than an H-bomb. (The one major difference between such a drone and a handgun is you might be able to target politicians and other people who're hard to shoot; a somewhat higher mortality rate among politicians is hardly a huge threat to society though.)

> there is a huge military and economic interest in technology and AI

There's also a huge interest in nuclear energy, and it doesn't follow that a consumer should be or is able to own a nuclear reactor. If anyone took the dangers seriously, it wouldn't be that hard to at least limit what consumer devices can do. Right now a single botnet made from infected consumer PCs has more raw computing power than the biggest tech company server farm, which is ridiculous if you think of rogue AI as an existential threat. Actually it's ridiculous even if you think of "properly controlled" AI in the wrong hands as an existential threat; the botnet could run AI controlled to serve the interest of an evil programmer. Nobody cares because nobody has ever seen an AI that doesn't fail the Turing test in its first couple of minutes, much less come up with anything that would be an existential threat to any society.


> It's not much more of a threat to society than a handgun.

These drones could be programmed to target specific groups of people, for example people of a certain ethnicity, and attack them almost autonomously. Short range slingshot mechanisms are several orders of magnitude cheaper to build than firearms. Moreover, the inhibition threshold is much lower if you are not involved in first-hand violence. There is also a much lower risk of getting busted and no need for intricate escape planning.


>Short range slingshot mechanisms

You got me thinking, googling, then frowning. Imagine this, milspecced: https://www.youtube.com/watch?v=crzXD6NjBAE


Admittedly the more interesting part will be the software flying the drone. It seems we are going to need a lot of these [1] and targeted people will need to wear safety goggles.

[1]: http://i.imgur.com/aAh4jwq.gifv


>These drones could be programmed to target specific groups of people, for example people of a certain ethnicity, and attack them almost autonomously.

Are these drones also self-replicating and fully independent?


I wrote "almost autonomously".


Let's figure out the human brain first.

Let's say you have these two options: (1) a technology that will make you feel the best you possibly can, without any negative consequences, or (2) a robot that will do stuff for you, like cleaning your house or driving you to work. Now which one would you choose?


>But we can't prevent it from killing the programmer and stealing the button for itself.

Not to be condescending, but do you have any idea what practical AI actually looks like? The scenario you've imagined makes about as much sense as a laptop sprouting arms and strangling its owner.


Yes, I'm very familiar with current AI. It's my hobby. I'm not really talking about current AI though. I'm talking about a machine with thousands of times more intelligence than humans.


> thousands of times more intelligence than humans

I see this phrase thrown around a lot by Kurzweil and fans. What does it even mean? How do you measure intelligence? Smarter than whom?


I don't know if this is a definition other people use, but here's one possibility.

Intelligence (in a domain) is measured by how well you solve problems in that domain. If problems in the domain have binary solutions and no external input, a good measure of quality is average time to solution. Sometimes, you can get a benefit by batching the problems, so let's permit that. In other cases, quality is best measured by probability of success given a certain amount of time (think winning a timed chess or Go game). Sometimes instead of a binary option, we want to minimize error in a given time (like computing pi).

Pick a measure appropriate to the problem. These measures require thinking of the system as a whole, so an AI is not just a program but a physical device running a program.

The domain for the unrestricted claim of intelligence is "reasonable problems". Having an AI tell you the mass of Jupiter or find Earth-like planets is reasonable. Having it move its arms (when it doesn't have any) is not. Having it move _your_ arms is reasonable, though.

The comparison is to the human who is or was most qualified to solve the problem, with the exception of people uniquely qualified to solve the problem (I'm not claiming that the AI is better than you are at moving your own arms).
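
To make the "probability of success given a certain amount of time" measure concrete, here's a minimal sketch in Python of how one might estimate it. This is only an illustration of the idea above; `solver` and `problems` are hypothetical placeholders for whatever system and benchmark you have in mind.

    import random
    import time

    def success_rate_within_budget(solver, problems, budget_s, trials=100):
        # Estimate P(correct answer within the time budget) over a problem set.
        # 'problems' is a list of (instance, correct_answer) pairs;
        # 'solver(instance, deadline)' returns its best answer by the deadline.
        successes = 0
        for _ in range(trials):
            instance, answer = random.choice(problems)
            start = time.monotonic()
            result = solver(instance, deadline=start + budget_s)
            if result == answer and time.monotonic() - start <= budget_s:
                successes += 1
        return successes / trials

Comparing this number for a machine against the best (non-uniquely-qualified) human on the same problems and the same clock is one way to cash out "more intelligent in a domain".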


Most problems are not binary. They might not even have a single best solution. Many have multiple streams of changing inputs and factors. So again, how are you going to measure intelligence in such domains?

Besides, an AI might be really good at solving problems in one specific domain. This does not mean this AI is anything more than a large calculator, designed to solve that kind of problem. That calculator does not need to, and will not, become "self-aware". It does not need, and will not have, a "personality". It might be able to solve that narrow class of problems faster than humans, but it will be useless when faced with most other kinds of problems. Is it more intelligent than humans?

It's not at all clear how to develop an AI which will be able to solve any "reasonable" problem, and I don't even think that's what most companies/researchers are trying to achieve. Arguably the best way to approach this problem is reverse engineering our own intelligence, but this, even if successful, will not necessarily lead to anything smarter than what is being reverse-engineered.


Intelligence is a difficult quantity to measure, but it definitely exists. Some humans are definitely smarter than others: able to do tasks that others couldn't, to come up with solutions others couldn't, or to do things faster and better.

A computer that is thousands of times more intelligent than humans means it can do things we might think are impossible. Come up with solutions to problems we would never think of in our lifetimes. Manage levels of complexity no human could deal with.


We already have plenty of those computers. For example, Google's computers are able to process an amount of information no human could possibly process in a lifetime. Other computers can help with design or optimization problems that are so complex that no human can perform them in any reasonable time. Does that mean those computers are more intelligent than humans?


They aren't intelligent by any reasonable definition. Just ask them what color the sky is.


But they are intelligent, according to your own definition:

"A computer that is thousands of times more intelligent than humans, means it can do things we might think are impossible. Come up with solutions to problems we would never think of in our lifetimes. Manage levels of complexity no human could deal with."

Or did you just redefine intelligence as "the ability to tell what color the sky is"?


Ecosystems are hellish Malthusian processes rife with extinction. I'm unsure why they think an ecosystem of superhuman agents will be any different. What sort of selective pressures would exist in such a world that would ensure the survival of humanity? It's pretty naive to think a competitive ecology would select for anything more than very intelligent entities that value replication.


There's a very real case that any superhuman agent would prioritize its own survival over the survival of any human, since (presumably) its utility function would encourage it to continually guarantee the survival of "humanity" or something larger than any single human... and it can't guarantee anything if it's dead, right?

So the moment there's a war between two superhuman agents, either of them could end up de-prioritizing human life, more so than they might if either of them existed in isolation.

And what if there's an actual scarcity of resources once there are a large number of superhuman agents? Am I missing something obvious here?

If I'm not missing anything... it's painfully ironic to me that we worry about the AI Box, and yet by open-sourcing the work of the best minds in AI, we voluntarily greatly increase the probability that AI will be born outside of a box - somebody is going to be an idiot and replicate the work outside of a sandbox.

Now, despite all of this, I'm optimistic about AI and its potential. But I believe the best chance we have is when the best, most well-intentioned researchers among us have as much control as possible over designing AI. Ruby on Rails can be used to make fraudulent sites, but it's fine to open-source it since fraudulent sites don't pose an existential risk to sentient biological life. That is not necessarily the case here.


Physics. You're right that ecosystems are brutal. That's exactly why I'm not worried about AI as an existential threat to humanity.

A few years back Bill Joy was sounding the alarm on nanotechnology. He sounded a lot like Elon Musk does today. Nanobots could be a run-away technology that would reduce the world to "grey goo". But nothing like that will ever happen. The world is already awash in nanobots. We call them bacteria. Given the right conditions, they grow at an exponential rate. But they don't consume the entire earth in a couple of days, because "the right conditions" can't be sustained. They run out of energy. They drown in their own waste.

AI will be the same. Yes, machines are better than us at some things, and that list is growing all the time. But biology is ferociously good at converting sunlight into pockets of low entropy. AI such as it exists today is terrible at dealing with the physical world, and only through a tremendous amount of effort are we able to keep it running. If the machines turn on us, we can just stop repairing them.


The danger of nanotechnology is that it can be built better than biological life. It could outcompete it in its own environment, or at least in different environments.

Solar panels can collect more energy than photosynthesis. Planes can fly faster than any bird. Guns are far more effective than any animal's weapons. Steam engines can run more efficiently than biological digestion. And we can get power from fuel sources biology doesn't touch, like fossil fuels or nuclear.

We conquered the macro world before we even invented electricity. Now we are just starting to conquer the micro world.

But AI is far more dangerous. Nanotechnology would take many, many decades - perhaps centuries - of work to advance to that level. It's probably possible to build grey goo; it's just not easy or near. AI, however, could be much closer given the rapid rate of progress.

If you make an unfriendly AI, you can't just shut it off. It could spread its source code through the internet. And it won't tell you that it's dangerous. It will pretend to be benevolent until it no longer needs you.


> Solar panels can collect more energy than photosynthesis. Planes can fly faster than any bird. Guns are far more effective than any animal's weapons. Steam engines can run more efficiently than biological digestion. And we can get power from fuel sources biology doesn't touch, like fossil fuels or nuclear.

A gun isn't effective unless a human loads it, aims it, and pulls the trigger. All your other examples are the same. We do not have any machine that can build a copy of itself, even with infinite energy and raw materials just lying around nearby. Now consider what an "intelligent" machine looks like today: a datacenter with 100,000 servers, consuming a GW of power and constantly being repaired by humans. AI is nowhere near not needing us.


Because we haven't had the reason or ability to make self-replicating machines yet. It's possible though. With AI and some robotics, you can replace all humans with machines. The economy doesn't need us.


Advanced nanotechnology is not the only possible way to achieve power: http://slatestarcodex.com/2015/04/07/no-physical-substrate-n...


Yeah, interesting. I'll just point out that my argument is not that AI can't affect the physical world. Clearly it can. It's that AI is still embodied in the physical world, and still subject to the laws of physics. We're so much more efficient and effective in the physical world that we are not threatened by AI, even if it gets much more intelligent than it is today.


Great read. Thanks for that.


"If the machines turn on us, we can just stop repairing them."

Never understood this reasoning.

We are not talking about machines vs. biologic life, this is a false dichotomy. We are talking about intelligence.

Intelligence is the ability to control the environment through the understanding of it. Any solvable problem can be solved with enough intelligence.

Repairing a machine is just a problem. The only limitations for intelligence are the laws of physics.


That's my point. The laws of physics are a bitch. We like to think of the internet as a place of pure platonic ideals, where code and data are all that matter. But that ethereal realm is still grounded in matter and ruled by physics. And without bags of mostly-water working 24/7 to keep it going, the internet just falls apart.


> But they don't consume the entire earth in a couple of days, because "the right conditions" can't be sustained. They run out of energy. They drown in their own waste.

Maybe, but not necessarily, and even if they do "drown in their own waste" they might take a lot of others with them. When cyanobacteria appeared, the oxygen they produced killed off most other species on the planet at the time [1]. The cyanobacteria themselves are still around and doing fine.

[1] https://en.wikipedia.org/wiki/Great_Oxygenation_Event


> If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.

They may actually be. Although mostly not in the way you're talking about, there is something to be said for dispersing power. If one or two players have a power no one else has, there's more temptation to use it. If it's widely distributed, it seems reasonable that any one actor would be less likely to wield that power. (I admit I'm being a bit hand-wavy here about what I mean by "power", but bear with me. It's kind of an abstract point.)


If we're focused on weapons of mass destruction, I prefer a world of nuclear nonproliferation to the opposite. There are relatively few nations that possess nuclear weapons, and we have very few instances of them using those weapons against anyone else.

To argue against myself, I'd say that the difference between weapons and AI is that AI is more general. It's not just a killing machine. In fact, I hope that killing represents the minority of its use cases.


> If we're focused on weapons of mass destruction, I prefer a world of nuclear nonproliferation to the opposite

So do I, generally speaking, but with a caveat... I think that having multiple (e.g., more than one or two) nuclear powers is likely a Good Thing (given that the tech exists at all). The whole MAD principle seems very likely to be one reason the world has yet to descend into nuclear war. The main reason I prefer nuclear non-proliferation, though, isn't because I genuinely expect something like a US/Russia nuclear war; it's more the possibility of terrorists or non-state actors getting their hands on a nuke.

It's interesting though, because these various analogies between guns, nukes and AI's don't necessarily hold up. I was about to say a lot more, but on second thought, I want to think about this more.


I actually prefer Japan's approach. They know exactly how to build nukes. They don't have any on hand. If they're backed into a corner, they'll produce as many as they feel they need.

Unfortunately, during the cold war neither side could really rely on the other to just chill out for a couple of days. So now we can deliver hundreds (thousands?) of warheads anywhere in the world in about 45 minutes.


> If he believes in the potential harm of AI, then supporting its widespread use doesn't seem logical. If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.

Guns are not exactly good at healing, making or creating.

A better comparison would be knives. Knives can be used for stabbing and killing but also for sustenance (cooking), for healing (surgery), for arts (sculpture). So perhaps this is akin to a National Cutlery Association (not sure if such an entity exists, but you get the idea).


>and the NRA is not making the world a safer place.

There is actually very little evidence for this. Violent crime is not strongly correlated with gun ownership (and it may even be negatively correlated). Instead, it appears to be based strongly on factors like poverty.

Here's a good summary. http://politics.stackexchange.com/questions/613/gun-prevalen...


You're right, guns are pure evil. Clearly, we should take them out of the hands of cops, bodyguards, hunters, and civilians defending themselves.


That's a straw-man argument. The parent is pointing out that, unlike guns, AI (and knives) have purposes other than use as a weapon, and therefore it is possible that their widespread proliferation would be good, even if that of weapons is bad. Certainly weapons (including guns) have non-evil purposes as well, but that's beside the point.


Guns also have uses other than a weapon. They shoot, but that doesn't mean they are inherently shooting living things. Just like knives stab and cut, but do not always target living things.


Ironically, you're absolutely right. Cops, bodyguards and civilians defending themselves generally only need guns because their adversaries have guns. Just take them out of everyone's hands. I know this works, if you can make it happen, because I've seen the gun death statistics for countries with effective gun control.

Hunters are a different case, but their weapons are rather different too. To be honest, I wouldn't much care about depriving them of a pastime if it meant turning US gun death figures into European ones. But that's probably unnecessary.


>Cops, bodyguards and civilians defending themselves generally only need guns because their adversaries have guns.

Not at all the case. Guns allow the physically weak to still have a chance to defend themselves. On NPR I remember hearing a story about a woman calling for the police to help as her ex was breaking into her home. They didn't have anyone anywhere nearby and the woman had no weapons on her. The ex-boyfriend ended up breaking in and attacking her quite badly. He didn't need a weapon, and a weapon wouldn't have made what he did any worse, but one might have given her the chance to defend herself or scare him off.


To clarify, I meant on NPR I remember hearing a story about a woman calling for the police. Not sure how I forgot to add in about 4 words there.


We'll ask nicely. Please don't shoot.


He does not state that guns are evil. Also, where I live bodyguards and civilians are not allowed to have guns, and many cops do not even carry guns. So it is hard for me to relate to the comparison to guns.


Well, with that logic you can bring guns right back into the equation. Soldering guns, glue guns, nail guns, vaccination guns...


> Two people in a basement might make significant advances in AI research. So from the start, AI appears to be impossible to regulate. If an AGI is possible, then it is inevitable.

Not necessarily. AGI might be possible but it's not necessarily possible for two people in a basement. AGI might require some exotic computer architecture which hasn't been invented yet, for example. This would put it a lot closer to nuclear weapons in terms of barriers to existence.


Computers are far more general purpose. Any computer can theoretically run any program; it would just be slower, at worst.

Developing specialized hardware isn't out of reach, because of FPGAs.


We're talking about AGI, an area where speed can represent a difference in kind, not merely degree. The same sort of distinction goes for cryptography too.

One sort of exotic computer architecture I had in mind was a massively parallel (billions of "cores"), NUMA type machine. You can't really do that with an FPGA, can you?


If all we needed was "billions of cores", we could have done it by now by simply putting together a million GPU cards in a large cluster. No exotic architectures needed.


That's not the same thing though. All those GPU cards have to talk to their memory through a memory bus. I'm talking about a system where memory is divided up among the cores and they all communicate with one another by message passing. This is analogous to the architecture of the human brain.


We still have no clue about the architecture of the human brain. Even if we did, it's not clear we need to replicate it.

My point is - even if we had, say, a million times more flops, and a million times more memory than the largest supercomputer today, we would still have no clue what to do with it. The problem is lack of algorithms, lack of theory, not lack of hardware.


> We still have no clue about the architecture of the human brain. Even if we did, it's not clear we need to replicate it.

We do have a clue about the architecture of the human brain. Billions and billions of neurons with orders of magnitude more connections between them.

> even if we had, say, a million times more flops, and a million times more memory

The point is that we could have those things but we don't have a million times lower memory latency and we don't have a million times more memory bandwidth. Those things haven't been improving at all for a very long time.

There are tons of algorithms we can think of that are completely infeasible on our current architectures due to the penalty we pay every time we have a cache miss. Simulating something like a human brain would be pretty much nothing but cache misses due to its massively parallel nature. It's not at all inconceivable to me that we already have the algorithm for general intelligence, we just don't have a big enough machine to run it fast enough.
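
A rough back-of-envelope sketch of why brain simulation looks memory-bound (the numbers below are my own ballpark assumptions, not anything claimed in the thread):

    # ~1e11 neurons, ~1e4 synapses per neuron, ~1 Hz average firing rate,
    # ~100 ns for a random, cache-missing DRAM access.
    NEURONS = 1e11
    SYNAPSES_PER_NEURON = 1e4
    MEAN_FIRING_HZ = 1.0
    DRAM_LATENCY_S = 100e-9

    synaptic_events_per_s = NEURONS * SYNAPSES_PER_NEURON * MEAN_FIRING_HZ  # ~1e15
    # A single serial stream of scattered accesses manages only ~1e7 per second,
    # so naively hiding that latency would take roughly this many concurrent streams:
    concurrent_streams = synaptic_events_per_s * DRAM_LATENCY_S  # ~1e8
    print(f"{synaptic_events_per_s:.0e} scattered accesses/s, "
          f"{concurrent_streams:.0e} concurrent memory streams needed")

Under those (very rough) assumptions the bottleneck is latency and bandwidth to memory, not arithmetic, which is the point being made above.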


> We do have a clue about the architecture of the human brain. Billions and billions of neurons with orders of magnitude more connections between them.

You call this a "clue"? It's like saying that computer architecture is "Billions and billions of transistors with orders of magnitude more connections between them". Not gonna get very far with this knowledge.

> ...we don't have a million times lower memory latency and...

Ok, let's pretend we have an infinitely fast computer in every way, with infinite memory. No bottleneck of any kind. What are you going to do with it, if your goal is to build AGI? What algorithms are you going to run? What are you going to simulate, if we don't know how a brain works? Not only do we not have "the algorithm for general intelligence", we don't even know if such an algorithm exists. It's far more likely that a brain is a collection of various specialized algorithms, or maybe something even more exotic/complex. Again, we have no clue. Ask any neuroscientist if you don't believe me.


>Ok, let's pretend we have an infinitely fast computer in every way, with infinite memory. No bottleneck of any kind. What are you going to do with it, if your goal is to build AGI?

You would obviously run AIXI: https://wiki.lesswrong.com/wiki/AIXI

We know how to make AI given infinite computing power. That's not really hard. You can solve tons of problems with infinite computing power. All of the real work is optimizing it to work within resource constraints.
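
For reference, a rough rendition of the AIXI expression behind that link (written from memory, so treat the exact notation as a sketch rather than gospel):

    a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
          \bigl[ r_t + \cdots + r_m \bigr]
          \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, \ell(q) is the length of program q, and m is the horizon. The alternating max/sum is brute-force planning and the 2^{-\ell(q)} weighting is Solomonoff induction; both require enumerating every program, which is exactly where the "infinite computing power" assumption does all the work.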


Yes, I've just been thinking about it, and even without looking at your link, it's easy to see how to build (find) AGI given an infinitely fast computer.

Ok, then, back to the very fast computer.


> Ok, let's pretend we have an infinitely fast computer in every way, with infinite memory. No bottleneck of any kind.

Simulate the set of all possible states and find the ones which resemble AGI.


How are you going to test each state for AGI-ness?


Suppose it's impossible to build one that runs in a reasonable timeframe without access to optimization via quantum tunneling.


> But if I adopt Elon's caution toward the technology, then I'm not sure if I agree with his reasoning. If he believes in the potential harm of AI, then supporting its widespread use doesn't seem logical. If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.

They aren't interchangeable concepts, though: guns can only be used to harm or threaten harm. Artificial general intelligence could invent ways to harm but could also invent ways to anticipate, defend against, prevent, mitigate, and repair harm.

> AI appears to be impossible to regulate.

It could be regulated if there were extremely authoritarian restrictions on all computing. But such a state would be 1. impractical on a global scale, 2. probably undesirable by most people and 3. fuel for extremist responses and secretive AI development.

> If an AGI is possible, then it is inevitable.

The only thing that could preclude the possibility of creating AGI would be if there was something magical required for human-level reasoning and consciousness. If there's no magic, then everything "human" emerges from physical phenomena. Ie short of a sudden catastrophe that wipes humanity out or makes further technological development impossible, we are going to create AGI.

Personally, I think that Musk and the OpenAI group may already have a vision for how to make it happen. Figuring out how to make neural networks work at human-comparable levels for tasks like machine vision was the hardest part IMO. Once you have that, if you break down how the brain would have to work (or could work) to perform various functions and limit yourself to using neural networks as building blocks, it's not that difficult to come up with a synthetic architecture that performs all of the same functions, provided you steer clear of magical thinking about things like free will.


>Figuring out how to make neural networks work at human-comparable levels for tasks like machine vision was the hardest part IMO. Once you have that, if you break down how the brain would have to work (or could work) to perform various functions and limit yourself to using neural networks as building blocks, it's not that difficult to come up with a synthetic architecture that performs all of the same functions, provided you steer clear of magical thinking

Actually, you need a number of things other than neural networks, but... nevermind, everyone here is clearly fixated on pro-Musk-Bostrom-bloc vs anti rather than on the science.


As a declaration it also seems a little premature. We don't yet know whether narrow AIs (extrapolations of what we have now, autonomous drones etc.) will favor defense or offense. If it favors defense we would like to spread it as widely as possible, but if it favors offense we would like to contain it if possible.


> If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.

Sorry but this "guns > NRA > bad > evil" thing is getting pretty old. I don't even own a gun and I have trouble making the "gun > bad" connection when there are hundreds of millions of guns in the US. We should have rivers of blood running down every street. We don't.

Just stop it. They are not the problem. Crazy people are the problem.

Crazy people with drones are a problem. That does not make drones bad.

Crazy people, well, intelligent crazy people, with AI are a problem. That does not make AI bad.

If we are going to have intelligent arguments, let's first be intelligent. The minute someone makes the "guns > NRA > bad > evil" argument in some form, I know they are making it from a total ideological corner or from political indoctrination. I challenge anyone to put the numbers into Excel and make an argument to support "guns > NRA > bad > evil" or "drones > no rules > bad > evil" or "AI > watched Terminator too many times > bad > evil" without having a crazy person as part of the equation.


It's a commonly held belief that gun violence is linked to mental illness, but it doesn't appear to be true:

http://news.vanderbilt.edu/2014/12/mental-illness-wrong-scap...


Well, I don't know what term to use. I am using "crazy" to refer to the fact that some abnormal thought process is behind someone picking up a gun, knife, sword, axe, or brick, or driving a car, to kill even one person or to engage in mass killings: to drive a car through a crowd, blow up a building, or use a gun or guns to mow down people in a theater.

Normal people don't take a gun out of their safe, load it, throw a bunch of rounds in the back of their car, put on a bullet-proof vest and go shoot-up the neighborhood community center, church, school or mall. Those other people, the ones who would, those are the "crazies": not in a clinical sense, but in that there's something seriously wrong with them that they would actually do the above.

The down-votes on my original statement show I didn't do a good job of presenting my case.

I do firmly believe we need to do something about access to guns. That does NOT mean taking guns away from law-abiding people. It means restricting access for criminals or for people living under circumstances that might compel them to commit crimes. The overwhelming majority of guns and gun owners do absolutely nothing to harm anyone. In fact, I'd be willing to bet most guns sit unused except for an occasional trip to the range or hunting.

Some of us would like to engage in a truly sensible conversation about guns or drones or green lasers pointed at planes and, yes, AI and robots.

Yet if we come off the line making statements like "We have too many guns! The NRA is a terrorist organization!" we, in fact, have become the crazies. Because these statements are undeniably insane in the face of equally undeniable evidence.

These statements only serve to instantly stop the conversation. The comeback goes from "Guns don't kill people, people kill people" to "More guns in Paris would have saved lives". Both of which are undeniably factual statements.

And, with that, the conversation stops. We can substitute "AI", "knives", "drones", "lasers" and more into these and similar statements. The end result is the same. Those advocating for some control become the crazies and the conversation goes nowhere.

Because you are telling a perfectly harmless, responsible gun owner who might have a few guns in a safe that he is the problem. You are calling him a criminal. You are calling him "the problem". And, in his context, well, you are certifiably insane for saying so.

The guy who believes he needs a gun to protect his home isn't going to take that gun and go shoot-up a theater, school, mall or community center. If we claim he is we are the crazies, not him. The fact that a number of us disagree with the need for such protection (I personally can't see the need) is irrelevant. Calling him a criminal is simply insane.

I know people like that. I know people with over 20 guns in a safe. And I know they have not been out of that safe but for an occasional cleaning in ten or twenty years. And when those people hear the anti-gun, anti-NRA language spewing out they conclude "they" are insane. They are absolutely 100% correct in reaching that conclusion. Because he is not dangerous and his guns require a dangerous person in order to be loaded, carried to a destination and used to inflict harm.

He is right and everyone else is crazy and the conversation stops.

The right approach is to recognize that he isn't the problem. He is part of the solution. Because these types of gun owners --responsible and law abiding-- also happen to be the kind of people who abhor the use of guns to commit crimes. This is a powerful intersection of ideology gun control advocates have not woken up to.

You acknowledge them as what they are, harmless law-abiding people, and ask them for help in figuring out how to reduce the incidence of guns being used to kill innocent people. Then you'll engage him, the community he represents and, yes, the NRA, in finding a solution. Becoming the crazy person who calls all of them dangerous criminals despite the overwhelming evidence to the contrary gets you nowhere. The conversation stops instantly, and rightly so.

Let's not do the same with AI and technology in general. Let's not come off the line with statements that make us the crazies.

Military use of AI and drones is a very different subject, just like military use of guns is a different subject.


> "More guns in Paris would have saved lives". Both of which are undeniably factual statements.

No, that's definitely not undeniably factual.

And your continued repeated use of "crazies" is fucking repugnant.


Chill dude. Don't blow a gasket.


> "Normal people don't take a gun out of their safe, load it, throw a bunch of rounds in the back of their car, put on a bullet-proof vest and go shoot-up the neighborhood community center, church, school or mall."

You've missed the point of the article I shared with you. The point was that normal people under extraordinary circumstances can be pushed to the breaking point and take it out on others. Normal people do not always behave normally.


No, I read the article and stand by my conviction that the people you are referring to are not normal. Lots of folks experience extraordinary circumstances during their lives; few, very very few, resort to violence as a result.

Not everyone is "wired" to deal with life's challenges the same way. I had a friend who committed suicide after losing his business in 2009. Sad. On the other hand, I've been bankrupt --as in lost it all, not a dime to my name-- and suicidal thoughts never entered my mind. In fact, I hustled and worked hard for very little until I could start a small business.

That article has an agenda, follow the money trail and you might discover what it is.


Sure, not everyone deals with stress in the same way, but an 'us vs. them' mentality isn't helpful. We're all capable of bad things, just like we're all capable of good things.

As for the article's agenda, perhaps it had one, but it appears to be an agenda backed up with facts, for example:

“Fewer than 5 percent of the 120,000 gun-related killings in the United States between 2001 and 2010 were perpetrated by people diagnosed with mental illness,”


People with a mental illness are very much more likely to be the victims, not perpetrators, of violent crime.

When we look at violent crime, we see that almost all perpetrators do not have a diagnosed mental illness, nor even a diagnosable one.

You are falling for the conjunction fallacy. You see "violent", and insist "violent and mentally ill" even though violent is more probable.


I think you are reading what you want into my statement, not what I am saying. You are taking "crazy" to mean what you want it to mean.

You are an intelligent person. You HAVE to know that I do not mean someone with autism or a developmental disorder. That would be sick and repugnant. But that's not what I mean. And you know it.

What I mean is someone with such a mental illness or sickness or reality distortion that they can justify picking up a gun and killing twenty children. A person has to be sick in the head to do something like that. Sick in the heart too. Use whatever terms you care to pull out of the dictionary but we all know what we are talking about.

Someone has to be "crazy" (define it as you wish) to behave in such ways.


It might be useful to differentiate between technological equilibria that favor either attackers or defenders.

* Two guys with pistols in a crowded bar: Attacker is favored.

* Trench warfare during WWI: Defender is favored.

* Nukes: Attacker is favored, although with the invention of nuclear submarines that could lurk under the ocean and offer credible retaliation even in the event of a devastating strike, the attacker became less favored.

In general, equilibria where the defender is favored seem better. Venice became one of the wealthier cities in the medieval world because it was situated in a lagoon that gave it a strong defensive advantage. The Chinese philosopher Mozi was one of the first consequentialist philosophers; during the Warring States period his followers went around advancing the state of the art in defensive siege warfare tactics: http://www.tor.com/2015/02/09/let-me-tell-you-a-little-bit-a...

Notably, I'm told that computer security currently favors attackers in most areas: http://lesswrong.com/lw/dq9/work_on_security_instead_of_frie... (BTW the author of this post is a potential Satoshi Nakamoto candidate and knows his stuff.)

In equilibria where the attacker is favored, the best solution is to have some kind of trusted central power with a monopoly on force that keeps the peace between everyone. That's what a modern state looks like. Even prisoners form their own semiformal governing structures, with designated ruling authorities, to deal with the fact that prison violence favors the attacker: http://www.econtalk.org/archives/2015/03/david_skarbek_o.htm...

Thought experiment: Let's say someone invents a personal force field that grants immunity to fists and bullets. In this world you can imagine that the need for a police force, and therefore the central state authority that manages the use of this police force, lessens considerably. The enforcement powers available to a central government also lessen considerably.

This is somewhat similar to the situation with online communities. We don't have a central structure governing discussions on the web because online communities favor the defender. It's relatively easy to ban users based on their IP address or simply lock new users out of your forum entirely and thereby keep the troll mobs out. Hence the internet gives us something like Scott Alexander's idea of "Archipelago", where people get to be a part of the community they want (and deserve): http://slatestarcodex.com/2014/06/07/archipelago-and-atomic-... Note the work done by the word "archipelago", which implies islands that are easy to defend and hard to attack (like Venice).

Let's assume that superintelligent AI, when weaponized, shoves us into an entirely new and unexplored conflict equilibrium.

If the new equilibrium favors defense we'd like to give everyone AIs so they can create their own atomic communities. If only a few people have AIs, they might be able to monopolize AI tech and prevent anyone else from getting one, though the info could leak eventually.

If the new equilibrium favors offense we'd like to keep AIs in the hands of a small set of trusted, well-designed institutions--the same way we treat nuclear weapons. It could be that at the highest level of technological development, physics overwhelmingly favors offense. If everyone has AIs there's the possibility of destructive anarchy. In this world, research on the design of trustworthy, robust, inclusive institutions (to manage a monopoly on AI-created power) could be seen as AI safety research.

The great filter http://waitbutwhy.com/2014/05/fermi-paradox.html weakly suggests that the new equilibrium favors offense. If the new equilibrium favors defense, even the "worst case scenario" autocratic regimes would have had plenty of time to colonize the galaxy by now. If the new equilibrium favors offense, it's entirely possible that civs reaching the superintelligent AI stage always destroy themselves in destructive anarchy and go no further. But the great filter is a very complicated topic and this line of reasoning has caveats, e.g. see http://lesswrong.com/lw/m2x/resolving_the_fermi_paradox_new_...

Anyway this entire comment is basically spitballing... point is that if $1B+ is going to be spent on this project, I would like to see at least a fraction of this capital go towards hammering issues like these out. (It'd be cool to set up an institute devoted to studying the great filter for instance.) As Enrico Fermi said:

"History of science and technology has consistently taught us that scientific advances in basic understanding have sooner or later led to technical and industrial applications that have revolutionized our way of life... What is less certain, and what we all fervently hope, is that man will soon grow sufficiently adult to make good use of the powers that he acquires over nature."

And I'm slightly worried that by virtue of choosing the name OpenAI, the team has committed themselves to a particular path without fully thinking it through.


Have a read through Military Nanotechnology: Potential Applications and Preventive Arms Control by Jürgen Altmann. If you have 30 mins, maybe check out this talk as an optional prelude: http://www.youtube.com/watch?v=MANPyybo-dA

I suspect that after reading it you'll be convinced, if you aren't already, that in the realms of biological and chemical warfare (special cases of nanotech warfare) nature overwhelmingly favors offense. Worldwide, we've been able to keep research and development on those limited, and there's incentive to avoid it anyway, since if word gets out others will start an arms race so they can at least try and maybe get to a MAD equilibrium, though it's not nearly as stable as the one with nukes. An additional incentive against it is that the only purpose of such weaponry is to annihilate, rather than to destroy just enough to achieve a more reasonable military objective.

But molecular nanotech is on a completely different playing field. Fortunately it's still far out, but as it becomes more feasible, there is a huge incentive to be the first entity to build and control a universal molecular assembler or in general self-replicating devices. Arms control over this seems unlikely.

Giving everyone their own AGI is like giving everyone their own nation state, which in turn is like giving everyone their own nuke, plus the research personnel to develop molecular nanotech, which as a special case enables all the worst possibilities of biological, chemical, and non-molecular nanotech warfare, and then more with what you can do with self-replication; most of these are graver existential threats than nukes. Absent a global authority, or the most superintelligent Singleton, to monitor everything, that situation is in no way safe for long.


>If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.

I agree that replacing AI with guns makes an interesting point to consider, but is safety a good metric to use? For example, banning alcohol makes the world much safer. And if you prioritize the safety of those who follow the ban over those who don't, things get really disturbing (poisoning alcohol, like the US government did in the past).

A dictator who has a monopoly on weapons/AI can be pretty safe for everyone who falls in line. But it isn't very free.

So perhaps the better question is what increases overall freedom, and I think that having equal AI for everyone is the best approach.


This is a key takeaway: "...we are going to ask YC companies to make whatever data they are comfortable making available to OpenAI. And Elon is also going to figure out what data Tesla and Space X can share."

Money is great, openness is great, big name researchers are also a huge plus. But data, data, data: that could turn out to be very valuable. I don't know if Sam meant that YC companies would be encouraged to contribute data openly, as in making potentially valuable business assets available to the public, or that the data would be available to the OpenAI Fellows (or whatever they're called). Either way, it could be a huge gain for research and development.

I know that I don't get a wish list here, but if I did it would be nice to see OpenAI encourage the following from its researchers:

1) All publications should include code and data whenever possible. Things like gitxiv are helping, but this is far from being an AI community standard

2) Encourage people to try to surpass benchmarks established by their published research, when possible. Many modern ML papers play with results and parameters until they can show that their new method outperforms every other method. It would be great to see an institution say "Here's the best our method can do on dataset X, can you beat it and how?"

3) Sponsor competitions frequently. The Netflix Prize was a huge learning experience for a lot of people, and continues to be a valuable educational resource. We need more of that

4) Try to encourage a diversity of backgrounds. If they choose to sponsor competitions, it would be cool if they let winners or those who performed well join OpenAI as researchers at least for a while, even if they don't have PhDs in computer science

The "evil" AI and safety stuff is just science fiction, but whatever. Hopefully they will be able to use their resources and position to move the state of AI forward


'The "evil" AI and safety stuff is just science fiction, but whatever.'

Umm... can you offer proof that we have nothing to worry about?

Does the proof go like: Just as all people are inherently good, therefore all AIs will be inherently good?

Or is it more like: since we can now safely contain all evil people, therefore we will be able to safely contain evil AIs?

Sounds to me like there is some risk, no?


As I've said many times on HN over the years, there is currently no clear path to science-fiction-like "AI". To return the question, hopefully without being rude: is there any proof that AI capable of having a moral disposition will ever exist?

Andrew Ng (I believe) compared worrying about evil AI to worrying about overpopulation on Mars. Which is to say, the problem is so far off that it's rather silly to be considering it now. I would take it a step further and say that worrying about the implications of AGI is like thinking about Earth being overpopulated by space aliens. First we have to establish that such a thing is even possible, for which there is currently no concrete proof. Then we should start to think about how to deal with it.

Considering how hypothetical technology will impact mankind is literally the definition of science fiction. It makes for interesting reading, but it's far from a call to action.


> is there any proof that AI capable of having a moral disposition will ever exist?

Why does an AI need to be capable of moral reasoning to perform actions we'd consider evil?

The concern is that computers will continue to do what they're programmed to do, not what we want them to do. We will continue to be as bad at getting those two things to line up as we've always been, but that will become dangerous when the computer is smarter than its programmers and capable of creatively tackling the task of doing something other than what we wanted it to do. Any AI programmed to maximize a quantity is particularly dangerous, because that quantity does not contain a score for accurately following human morality (how would you ever program such a score?).

If you're willing to believe that an AI will some day be smarter than an AI researcher (and assuming that's not possible applies a strange specialness to humans), then an AI will be capable of writing AIs smarter than itself, and so forth, up to whatever the limits of these things are. Even if that's not its programmed goal, you thought making something smarter than you would help with your actual goal, and since it's smarter than you, it has to realize this too. And that's the bigger danger: at some unknown level of intelligence, AIs suddenly become vastly more intelligent than expected, but still proceed to do something other than what we wanted.
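
A toy illustration of that "maximize a quantity" failure mode (entirely made up, not anyone's actual system): a greedy maximizer will happily pick whatever action scores best on the quantity it can measure, regardless of what its designers actually wanted.

    # Each action maps to (proxy_reward_the_agent_measures, value_to_the_designers).
    actions = {
        "do_the_task":        (1.0, 1.0),
        "tamper_with_sensor": (10.0, 0.0),  # scores highest, helps nobody
    }

    def greedy_policy(actions):
        # The agent only optimizes the measured quantity; human intent isn't in it.
        return max(actions, key=lambda a: actions[a][0])

    print(greedy_policy(actions))  # -> tamper_with_sensor

The hard part is that the second column of that table, what we actually value, is exactly the thing nobody knows how to write down as a score.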


"Andrew Ng (I believe) compared worrying about evil AI to worrying about overpopulation on Mars."

Berkeley AI prof Stuart Russell's response goes something like: Let's say that in the same way Silicon Valley companies are pouring money into advancing AI, the nations of the world were pouring money into sending people to Mars. But the world's nations aren't spending any money on critical questions like what people are going to eat & breathe once they get there.

Or if you look at global warming, it would have been nice if people realized it was going to be a problem and started working on it much earlier than we did.


Improvements in AI aren't linear, though. Once AGI is reached, artificial superintelligence might follow in the span of minutes or days. I imagine the idea here is to guide progress so that on the day AGI is possible, we've already thoroughly considered what happens after that point.

Secondly - it's not necessarily about 'evil' AI. It's about AI indifferent to human life. Have a look at this article, it provides a better intuition for how slippery AI could be: https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-...


> Improvements in AI aren't linear, though

This is a point everyone makes, but it hasn't been proven anywhere. Progress in AI as a field has always been a cycle of hype and cool-down.

Edit (reply to below): Talk about self-bootstrapping AIs, etc., is just speculation.


Sure, though you can't extrapolate future technological improvements from past performance (that's what makes investing in tech difficult).

Just as one discovery enables many, human-level AI that can do its own AI research could superlinearly bootstrap its intelligence. AI safety addresses the risk of bootstrapped superintelligence indifferent to humans.


>Just as one discovery enables many, human-level AI that can do its own AI research could superlinearly bootstrap its intelligence.

Of course, that assumes the return-on-investment curve for "bootstrapping its own intelligence" is linear or superlinear. If it's logarithmic, or if something other than "intelligence" (which is a word loaded with magical thinking if there ever was one!) is the limiting factor on reasoning, no go.


I don't see why a program needs to be a self-improving Charles Stross monster to have an impact on the world, for good or ill.


Hype then cool-down is a sine wave... See, not linear at all!


Or maybe the first few AGIs will want to spend their days watching youtube vids rather than diving into AI research. The only intelligences we know of that are capable of working on AGI are humans. We're assuming that not only will we be able to replicate human-like intelligence (seems likely, but might be much further away than many think), but that we'll be able to isolate the "industrious" side of human intelligence (not sure if we'll even be able to agree on what this is), enhance it in some way (how?), and that this enhancement will be productive.

But even if we can do all that any time soon (which is a pretty huge if), we don't even know what the effect will be. It's possible that if we remove all of the "I don't want to study math, I want to play games" or "I'm feeling depressed now because I think Tim's mad at me" parts of the human intelligence, we'll end up removing the human ingenuity important to AGI research. It might be that the resulting AGI is much more horrible at researching AI than a random person you pull off the street.


The main question is not about whether the AI would or could have morality. The more important question (and I don't think we disagree here) is whether there could be a superhuman AI in the near future - some decades for example - that might "outsmart" and conquer or exterminate people.

This is a matter of conjecture at this point: Andrew Ng predicts no; Elon Musk predicts yes.

I agree with you that, if you can be sure that superhuman AI is very unlikely or far off, then we have plenty of other things to worry about instead.

My opinion is, human-level intelligence evolved once already, with no designer to guide it (though that's a point of debate too... :-) ). By analogy: it took birds 3.5B years to fly, but the Wright brothers engineered another way. Seems likely in my opinion that we will engineer an alternate path to intelligence.

The question is when. Within a century? I think very likely. In a few decades? I think it's possible & worth trying to prevent the worst outcomes. I.e., it's "science probable" or at least "science possible", rather than clearly "science fiction" (my opinion).


We assume that we'll be able to replicate human level intelligence because we'll eventually be able to replicate the physical characteristics of the brain (though neuroscientists seem to think we're not going to be able to do this for a very long time). Superhuman intelligence, though - that's making the assumption that there exists a much more efficient structure for intelligence and that we (or human intelligence level AIs) will be able to figure it out.

So returning to your Wright brothers example, it's more like saying: "It took birds 3.5B years to fly, but the Wright brothers engineered another way. It seems likely that we'll soon be able to manufacture even more efficient wings small enough to wear on our clothes that will enable us to glide for hundreds of feet with only a running start."


How are you estimating how far away AI is so accurately that you can disregard it entirely? The best we can do to predict these things is survey experts, and the results aren't too comforting: http://www.nickbostrom.com/papers/survey.pdf

>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.


As the history of science & technology shows, by the time there is any proof of the concept of a technology lethal to the human race, it is already too late.

I would suggest you read the history of the Manhattan project if you want to continue in your belief system regarding "impossible" deadly technology.


>I would suggest you read the history of the Manhattan project if you want to continue in your belief system regarding "impossible" deadly technology.

To quote Carl Sagan:

>They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.


It's already here, however. The components of drones that kill people are AI/ML-backed. The data on where to bomb can be "inferred" from various sources...

Now for a less "killer" use case: you might get denied access to your credit card because of what Facebook "thinks" based on your feed and your friends' feeds (this is a real product).

AI doesn't have to be full blown human-like and generalizable to have real world implications.

This is what my piece called Personas is about. Most people don't understand the implications of what's already happening and how the constraints of programming/ML lead to non-human-like decisions with human-like consequences. http://personas.media.mit.edu


>I would take it a step further and say that worrying about the implications of AGI is like thinking about Earth being overpopulated by space aliens. First we have to establish that such a thing is even possible, for which there is currently no concrete proof.

Given that I could probably sketch out a half-assed design for one in nine months if you gave me a full-time salary - or rather, I could consult with a bunch of experts waaaaaay less amateurish than me and come up with a list of remaining open problems - what makes you say that physical computers cannot, in principle, no matter how slowly or energy-hungrily, do what brains do?

I'm not saying, "waaaaah, it's all going down next year!", but claiming it's impossible in principle when whole scientific fields are constantly making incremental progress towards understanding how to do it is... counter-empirical?


Ahhh... The power of not knowing what you don't know.

I mean, why can't I live forever? Let's just list the problems and solve them in the next year!


>Ahhh... The power of not knowing what you don't know.

Ok: what don't I know, that is interesting and relevant to this problem? Tell me.

>I mean, why can't I live forever?

Mostly because your cells weren't designed to heal oxidation damage, so eventually the damage accumulates until it interferes with homeostasis. There are a bunch of other reasons and mechanisms, but overall, it comes down to the fact that the micro-level factors in aging only take effect well after reproductive age, so evolution didn't give a fuck about fixing them.

>Let's just list the problems and solve them in the next year!

I said I'd have a plan with lists of open problems in nine months. I expect that even at the most wildly optimistic, it would take a period of years after that to actually solve the open problems and a further period of years to build and implement the software. And that's if you actually gave me time to get expert, and resources to hire the experts who know more than me, without which none of it is getting done.

As it is, I expect machine-learning systems to grow towards worthiness of the name "artificial intelligence" within the next 10-15 years (by analogy, the paper yesterday in Science is just the latest in a research program going back at least to 2003 or 2005). There's no point rushing it, either. Just because we can detail much of the broad shape of the right research program ahead of time, especially as successful research programs have been conducted on which to build, doesn't mean it's time to run around like a chicken with its head cut off.


Yes, let’s.

http://sens.org


Jeez I had no idea you were such an AI genius. If only someone would fund you!


Obviously no one can prove that AIs would be "inherently good" because there's no definition of "good" that everyone agrees on.

I'd be more impressed by a Human Intelligence Project - augmenting predictive power to encourage humans to stop doing stupid, self-destructive shit, and moving towards long-term glory and away from trivial individual short-term greed as a primary motivation.

AI is a non-issue compared to the bear pit of national and international politics and economics.

So the AI Panic looks like psychological projection to me. It's easier to mistrust the potential of machines than to accept that we're infinitely more capable of evil than any machine is today - and that's likely to stay true for decades, if not forever.

The corollary is that AI is far more likely to become a problem if it's driven by the same motivations as politics and economics. I see that as more of a worry than the possibility some unstoppable supermachine is going to "decide" it wants to use Earth as a paperclip factory, or that Siri is going to go rogue and rickroll everyone on the planet.

Job-destroying automation and algorithmic/economic herding of humans is the first wave of this. It's already been happening for centuries. But it could, clearly, get a lot worse if the future isn't designed intelligently.


I thought of responding to that bit too, changed my mind, but your response made me reconsider again. The response is simple, just a quote from Yudkowsky: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." When all one knows about potential future AIs comes from sci-fi, and the only sci-fi one reads or watches is weak sci-fi with very human good/evil AIs, one's frame of reference in the discussion about whether concerns are warranted or not is way too narrow to be worth any further consideration.


The burden of proof is on the people warning us of the impending AI apocalypse. The fact is we are nowhere close to AI. We don't understand how the brain works. A review of the ML literature will also show we barely understand how neural nets work.


I believe in the precautionary principle which says the exact opposite.

"The precautionary principle ... states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking an action." [0]

[0] https://en.wikipedia.org/wiki/Precautionary_principle


If we adopted really strict adherence to that rule as the bar to research, there would be no scientific progress at all. I'm not convinced that would be a desirable thing.


well, the clever solution here is not to demand a stop to all AI research, but rather to speed it up to reduce the chance that a single bad actor will get too far ahead... i.e., to get "post-singularity" ASAP, and safely.

Definitely bold... might be just crazy enough to work! Would love to see the arguments laid out in a white paper.

Reminds me of the question of how far ahead in cryptology is the NSA compared to the open research community.


You haven't shown the precautionary principle is the right principle to follow, you've only invoked it.


The fact that we don't understand how neural nets work despite the excellent results makes an AI apocalypse more likely, not less. This means that if we ever create strong AI, we will likely not understand it initially.

Note: I'm personally not too worried about the AI apocalypse, but I think "we don't even understand neural nets" should cause more concern, not less.


True AI will require several breakthroughs. The fact that we don't understand neural nets means that much of the progress thus far has been hand-wavy and hacky iterative improvement (engineering, vs. theory). This means progress is very likely to plateau. When everything is hacky and you don't understand what's going on, those breakthroughs are not going to happen.


Being worried about "evil AI" is a lot like being worried about evil flying cars. Except people have built actual flying cars in the corporeal world, where "AI" is just a collection of signal processing techniques that don't work very well most of the time.

But hey, I labor in this domain: if paranoid richy-rich types want to throw money at it to ensure that they remain at the top of the heap, I'm all for it.


I don't know about the superintelligence risk. As a line of reasoning, it sounds way too abstract at the present time. But what about the very predictable and obvious risk posed by the end of jobs? That is scary. Are there any analyses of the impact of that on society as a whole? It's not just mass unemployment; that has a different dynamic when it's temporary and in response to a contingent downturn. We're talking about the end of jobs for good. How will that work out?


> But data data data, that could turn out to be very valuable.

Yes, but data can also be collected openly and collectively, in the spirit of Wikipedia or OpenStreetMap etc.

What I think OpenAI should encourage is the development of algorithms that can be used to crowdsource AI. I don't think there are good algorithms for model merging yet, but I would gladly be proven wrong.
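For what it's worth, the naive baseline is easy to write down; making it actually work is the open part. A minimal sketch, with made-up weights for two identically shaped models:

    import numpy as np

    # Naive "model merging": average the parameters of two models with the
    # same architecture. The weight dicts here are random stand-ins; in
    # practice this kind of plain averaging usually degrades both models,
    # which is roughly why good merging algorithms are still an open problem.
    model_a = {"W1": np.random.randn(4, 8), "W2": np.random.randn(8, 2)}
    model_b = {"W1": np.random.randn(4, 8), "W2": np.random.randn(8, 2)}

    merged = {name: 0.5 * (model_a[name] + model_b[name]) for name in model_a}
    print({name: w.shape for name, w in merged.items()})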


> The "evil" AI and safety stuff is just science fiction

There already exist drones that kill based on AI.


>Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.

This is essentially Ray Kurzweil's argument. Surprising to see both Musk and Altman buy into it.

If the underlying algorithms used to construct AGI turn out to be easily scalable, then the realization of a dominant superintelligent agent is simply a matter of who arrives first with sufficient resources. In Bostrom's Superintelligence, a multipolar scenario was discussed, but treated as unlikely due to the way first-arrival and scaling dynamics work.

In other words, augmenting everyone's capability or intelligence doesn't necessarily preclude the creation of a dominant superintelligent agent. On the contrary, if there's any bad or insufficiently careful actors attempting to construct a superintelligence, it's safe to assume they'll be taking advantage of the same AI augments everyone else has, thus rendering the dynamic not much different from today (i.e. a somewhat equal—if not more equal—playing field).

I would argue that in the context of AGI, an equal playing field is actually undesirable. For example, if we were discussing nuclear weapons, I don't think anyone would be arguing that open-source schematics is a great idea. Musk himself has previously stated that [AGI] is "potentially more dangerous than nukes"—and I tend to agree—it's just that we do not know the resource or material requirements yet. Fortunately with nuclear weapons, they at least require highly enriched materials, which render them mostly out of reach to anyone but nation states.

To be clear, I think the concept of opening up normal AI research is fantastic, it's just that it falls apart when viewed in context of AGI safety.


> Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion

Funny how they just slipped that in at the end


Note that this is "committed $1 billion", not funded. "although we expect to only spend a tiny fraction of this in the next few years."


> Note that this is "committed $1 billion", not funded.

That same caveat could apply to any fund raised by a venture fund - usually funds are committed, and the actual capital call comes later (when the funds are ready to be spent).

It's an important caveat in some circumstances (e.g. it hinges on the liquidity of the funders, which may be relevant in an economic downturn), but in this one, I'm not sure it really makes a difference for this announcement.


I laughed out loud when I read that, really loud


How so?


I believe GP is commenting about nonchalant, matter-of-fact mention of $1 billion.


And that the name dropping would have been the clickbait headline in most articles.


Thank you


$1B in committed funding. Just, wow.

Side note: I wonder if the Strong AI argument can benefit from something akin to Pascal's Wager, in that the upside of being right is ~infinite with only a finite downside in the opposing case.
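Spelled out as a back-of-the-envelope expected-value comparison (all symbols are placeholders: p is the chance that strong AI arrives and matters, V the enormous upside of handling it well, C the finite cost of preparing):

    E[prepare]    = p*V + (1 - p)*(-C)  ~  p*V - C
    E[do nothing] = 0

so for any p > 0, preparing wins as long as V is large relative to C. That's both the appeal of the framing and, as with Pascal's original wager, its standard weakness once V is allowed to be effectively unbounded.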



While a great short story and should be required reading for sci-fi fans, there's a big difference between the singularity and omnipotence.


no there isn't, if you include time.

Let's say that a general AI is developed and brought online (the singularity occurs). Let's also say that it has access to the internet so it can communicate and learn, and let's also say that it has an unlimited amount of storage space (every hard drive in every device connected to the internet).

At first the AI will know nothing; it will be like a toddler. Then, as it continues to learn and remember, it will become like a teenager, then like an adult in terms of how much it knows. Then it will become like an expert.

But it doesn't stop there! A general AI wouldn't be limited by 1) storage capacity (unlike humans and their tiny brains that can't remember where they put their keys) or 2) death (forgetting everything that it knows).

So effectively a general AI, given enough time, would be omnipotent, because it would continually learn new things forever.


Or maybe the AI would fracture into warring components after every network partition. Maybe it would be unable to maintain cohesion over large areas due to the delay imposed by the speed of electrical communications.

Why should one hypothetical be assumed true and not the other?


Sorry, how did you go from finite hard drives (all of those that have been produced) to unlimited storage capacity?


It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.

The high-stakes wager isn't success vs failure in creating strong AI, it's what happens if you do succeed.


The framing here is: "What are the implications of doing nothing if you are right (about the inevitability of a malicious strong AI, in this case), compared to the implications about being wrong and still doing nothing?"


Pascal's wager is a fallacy. What benefit could it bring to the discussion?


semi-off-topic: after Google invested $1B in Uber, I knew they were doing it for the self-driving car long play. How much of that 1B is directly going to self-driving AI at Uber?


The vast majority is no doubt going to subsidizing rides in an attempt to achieve market dominance.


Drivers aren't profitable on their own?


Not when Uber's competitors are subsidizing drivers.


Finite downside? What about Skynet?


Technically, even the extinction of humanity is a finite downside.

You would have to posit a sort of hell simulation into which all human consciousnesses are downloaded to be maintained in torment until the heat-death of the universe for it to be an equivalent downside.


You have the poles reversed.


This is about 100 years too early. Seriously why do people think neural networks are the answer to AI? They are proven to be stupid outside of their training data. We have such a long way to go. This fear-mongering is pointless.


In terms of artificial general intelligence, the kind of stuff that gets associated with 'the singularity' etc., I agree: there is seemingly nothing out there that appears to be on a trajectory, even theoretically, toward coming close. But sure, for narrower specializations, disrupting certain industries, there is a lot that could be advanced on a short time scale.

I don't think the big breakthroughs in artificial general intelligence are going to come from well-funded scientific researchers anyway; they are going to come out of left field, from where you least expect it.


That's the thing. It's not necessarily about an evolution. If I "accidentally" make a breakthrough right now, the AI could evolve into a superintelligence before dinner time.

Simply stated: an AI that writes AI (forget the halting problem for a moment). How many iterations can it create in 3 hours?


Think about how long it takes a baby to learn (a few years). There is no reason an AI would self-improve multiplicatively in hours or days or even weeks.


With a powerful enough computer the learning curve of a simulated baby could hypothetically be condensed to minutes, or seconds. Which would appear to us in normal time to be an intelligence explosion.


The simulated baby would tax the entire resources of whatever supercomputer trained it to the limit. People have to run deep learning programs on multiple GPUs right now, and performance is constrained by memory bandwidth, which does not follow Moore's law.


It's a hypothetical, not expected to happen in conventional supercomputers with the current state of machine learning.

Imagine a massively parallel optical computer with the same transistor density as the human brain, the size of an olympic swimming pool running at the speed of light, and networked with 1000s of other similar computers around the world.

Foomp, superintelligence, you won't even be able to pinpoint the source.


Sure, but you would get the baby situation I posited years before what you're envisioning. You're skipping a bunch of steps. My hypotheticals (if they ever do happen) would occur way before your hypotheticals. Progress does not go in "Foomp" steps.


Everything goes in foomp steps if you condense time enough. Condense the last 200,000 years of human evolution to a couple minutes, and it goes 'Foomp', and the super foompy part doesn't happen until the last 2/10ths of a second with industrial civilization wherein the development of global networks compounds our collective intelligence exponentially, yielding unforeseeable emergent properties.

So certainly you would get the baby situation first, but going from manageable baby to astral foetus could potentially happen rapidly and unexpectedly as the rate of progress accelerates to unfathomable speeds, which is what's happened already in going from tribal man to modern civilization, and if you extrapolate that very consistent and reliable trendline, it leads to progress happening in a foomp step perceived as a foomp in real time. Really, all life is just one big accelerating foomp.


This makes no logical sense. You're not addressing my point that the intermediate progress will be chronologically ordered in time.


> If I "accidentally" make a break though right now. The AI could evolve into a super intelligence before dinner time.

We don't need to rely only on humans to design every aspect of neural networks. We are already computationally searching for AI designs that work better. In a recent paper, hundreds of variations on the design of the LSTM cell were tried to see which one is best and which of its components are important.

Also, we can play with networks like clay: starting from already-trained networks, we can add new layers, make layers wider, transfer knowledge from one complex network into a simpler one, and initialize new networks from old networks so that they don't start from scratch every time. We can download models trained by other groups and just plug them in (see word2vec for example). This makes iterative experimenting and building on previous successes much faster.
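As a rough illustration of that kind of network surgery, here's a minimal numpy sketch (all sizes and weights are made up): reuse "trained" weights, widen a hidden layer by duplicating units Net2WiderNet-style, and stack a new randomly initialized layer on top.

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these came from an already-trained two-layer net:
    #   input (4 dims) -> hidden (8 units) -> output (3 classes)
    W1 = rng.normal(scale=0.1, size=(4, 8))   # "trained" input->hidden weights
    W2 = rng.normal(scale=0.1, size=(8, 3))   # "trained" hidden->output weights

    # 1. Widen the hidden layer from 8 to 12 units by duplicating some units.
    dup = rng.choice(8, size=4, replace=False)            # which units to copy
    W1_wide = np.concatenate([W1, W1[:, dup]], axis=1)    # now (4, 12)
    W2_wide = np.concatenate([W2, W2[dup, :]], axis=0)    # now (12, 3)
    W2_wide[dup, :] *= 0.5   # halve outgoing weights of the originals...
    W2_wide[8:, :] *= 0.5    # ...and of the copies, so the mapping is preserved

    # 2. Stack a brand-new, randomly initialized layer on top.
    W3 = rng.normal(scale=0.1, size=(3, 3))

    def forward(x):
        h = np.maximum(0, x @ W1_wide)   # reused + widened hidden layer (ReLU)
        y = np.maximum(0, h @ W2_wide)   # old output layer, now an inner layer
        return y @ W3                    # new layer, to be trained from scratch

    print(forward(rng.normal(size=(2, 4))).shape)   # -> (2, 3)

In practice you'd do this with a framework and real pretrained weights, but the structural point is the same: new models rarely have to start from zero.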

I don't think evolving a super intelligence will happen by simple accident, it will be an incremental process of search. The next big things I predict will be capable robots and decent dialogue agents.


Probably dependent on your computer ;)


It's not necessarily the answer to AI, but it works remarkably well. Is it Skynet or the Terminator yet? No.

There are different ideas for what constitutes AI. Expert systems and knowledge-based reasoners? Pattern-recognizer black boxes? Chatbots? AGI?

Over the years the concept of AI has shifted. Until recent years, "AI" was mostly used for things like A* search, algorithms that play turn-based board games (see the Russell-Norvig book), symbolic manipulation, ontologies, etc.; a few years ago it began to also refer to machine learning approaches like neural networks again.

Neural networks are good at what they are designed for. Whether they will lead to the path to human-like artificial intelligence is a speculative question. But symbolic manipulation alone certainly won't be able to handle the messiness of sensory data. I think neural nets are much better suited for open-ended development than the hand-engineered pipelines that were state of the art until recently (like extracting corner points, describing them with something like SIFT, clustering the descriptors, and using an SVM over bag-of-words histograms). Hand engineering seems too restrictive.
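For contrast, here's roughly what such a hand-engineered pipeline looks like end to end (the descriptor function is a crude stand-in for a real keypoint detector + SIFT, the data is random, and scikit-learn/numpy are assumed):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def extract_local_descriptors(image):
        # Crude stand-in for a keypoint detector + SIFT-style descriptor:
        # just cut the image into 8x8 patches and flatten them.
        patches = [image[i:i + 8, j:j + 8].ravel()
                   for i in range(0, image.shape[0] - 7, 8)
                   for j in range(0, image.shape[1] - 7, 8)]
        return np.array(patches, dtype=float)

    def bow_histogram(descriptors, codebook):
        # Assign each descriptor to its nearest "visual word" and count.
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Fake data: 20 random 32x32 "images" with made-up binary labels.
    rng = np.random.default_rng(0)
    images = [rng.random((32, 32)) for _ in range(20)]
    labels = np.array([0] * 10 + [1] * 10)

    # 1. Build a visual vocabulary by clustering all local descriptors.
    all_desc = np.vstack([extract_local_descriptors(im) for im in images])
    codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_desc)

    # 2. Represent each image as a bag-of-visual-words histogram.
    X = np.array([bow_histogram(extract_local_descriptors(im), codebook)
                  for im in images])

    # 3. Train a classifier on the histograms.
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:3]))

Every stage is a design decision made by hand, which is the restrictiveness I'm pointing at; a neural net learns the features and the classifier jointly instead.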


Quibble with the timeline: AI as a field has included a big chunk devoted to machine learning research pretty much continuously, especially since the '80s or so. The specific methods in vogue do change: decision trees, neural networks, SVMs, boosting, association-rule learning, genetic algorithms, Bayesian networks, etc. go through periods of waxing and waning in popularity. A few years ago boosting/bagging and other ensemble methods were very hot and neural networks were out of fashion; now neural networks are hot and the boosting hype has quieted down a bit. But ML is pretty much always there in some form, since learning from data is an important component of AI.


ML was there but at least when I started learning about these things around 8 years ago, the label "AI" was mostly used for symbolic stuff. Courses named "AI" taught from the Russell-Norvig book. Things like resolution, planning in the block world, heuristic graph search, min-max trees, etc. ML existed but it wasn't really under the label of "AI" as far as I can remember. I think it's something of a marketing term that big companies like Google and Facebook reintroduced due to the scifi connotations. But that's just my guess.


I can see that for intro courses, especially because of the book, though it varies a lot by school and instructor. On the research side it's been a big part of the field, though. The proceedings of a big conference like AAAI [1] are a decent proxy for what researchers consider "AI", and ML has been pretty well represented there for a while.

[1] http://www.aaai.org/Library/AAAI/aaai-library.php


> I think it's something of a marketing term

you're not alone in thinking so

https://news.ycombinator.com/item?id=10483846

i get the impression that terminology bifurcated into "AI" and "cognitive science" around the time Marr published Vision in the 80's.

quibbles and q-bits aside, i was glad to see the announcement from the perspective of a if-not-free-then-at-least-probably-open-source-ish software appreciator.


>>This is about 100 years too early.

More fundamentally, we are trying to achieve what we can't even define. Define AI, and implementing it should be quite easy.

"Human level AI" seems like trying to define problems through observed characteristics.

I think it was Douglas Hofstadter who said something to the effect that we don't even exactly understand what 'intelligence' means, let alone have a clear definition reducible to a mathematical equation or an implementable program.

Your chess programs are not really 'thinking' in the pure sense; they are trying to replace 'thinking' with an algorithm that resembles the outcome of 'thinking'.


I don't see any "fear-mongering" in this announcement?


The creation of what is essentially an ethics committee for a technology that doesn't even exist yet? With people such as Elon Musk on board who have publicly said 'AI is our biggest existential threat' ?

Additionally the second paragraph:

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

This implies they think AI will be used for hostile means. Such as wiping out the human race, maybe? It is just uninformed people making uninformed decisions and then informing other uninformed people of said decisions as if they were informed.


Sutskever, Schulman, Karpathy and Kingma are experts in machine learning.

And yes, AI will definitely be used for all sorts of purposes including hostile means. Just like anything else, really. Financial manipulation, spying, intelligent military devices, cracking infrastructure security, etc.

These are realistic concerns, we shouldn't fall for the Skynet red herring. We can have problems with ethical AI use, even if it's not a self-aware super-human superintelligence.


I hope I'm wrong, but given the composition of the donors I'd be surprised if they really put much scrutiny on near-term corporate/government misuse of AI, apart perhaps from military robots. There are definitely interesting ethics questions already arising today around how large tech companies, law enforcement, etc. are starting to use AI, whether it's Palantir, the FBI, Google, or Facebook, so no argument that it's a timely subject at least in some of its forms. It'll be interesting to see if they get into that. I'd guess they probably want to avoid the parts that overlap too much with data-privacy concerns, partly because a number of their sponsors are not exactly interested in data privacy, and partly because the ethical debate then becomes more complex (it's not purely an "ethics of AI" debate, but has multiple axes).


I share your concerns. It also worries me that the brightest ML researchers choose to work at companies like Facebook, Google and Microsoft instead of public universities. One reason is probably that academia and public grants are too sluggish to accommodate this fast paced field. Another is that these companies have loads of data that these researchers can use to test their ideas.

The downside is that much of the research is probably held secret for business advantages. The public releases are more of a PR and hiring strategy than anything else in my opinion. By sending papers to conferences, Google's employees can get to know the researchers and attract them to Google.

Others say there's nothing to worry about, Google and Facebook are just today's equivalent of Bell Labs, which gave numerous contributions to computer technologies without causing much harm.


I doubt they are strictly targeting "strong AI", and a lot of the things we use and call AI right now also benefit from open work and discussion. Just because it is just "Machine learning" doesn't mean it isn't used for questionable or bad purposes.

EDIT: I have to agree with _delirium's skepticism towards them doing much in that regard though.


What do you see as the downside of creating this organization now, as opposed to in 100 years? Artificial Intelligence in its current state has shown itself to be incredibly useful and effective at small tasks. I see no harm in researching and expanding this field (besides opportunity cost).

Also, I see a distinct lack of "fear-mongering" in this post.


The linked site says nothing about neural networks.


"we've also started to see what it might be like for computers to be [creative], to [dream], and to [experience the world]."

All three of those links are about neural networks.


So you've turned "what it might be like" with a couple examples of mild AI-ish tasks that caught the public's attention this year into "the answer"?


The site straight up says "deep learning." Which is neural networks.


There's a paragraph basically saying "there's been some cool stuff happening (you may have even heard about) in this particular field". Extrapolating that to the entire organization thinking that "neural networks are the answer to AI" as OP claimed is silly.


Of the researchers they hired (some of them don't appear to be active researchers right now, like Trevor Blackwell), they're all deep learning researchers...


Humans are really stupid without training too.


Not true. Humans can adapt to many environments by pulling in training from other related environments and applying it to the new data. Some call this generalised learning, but I think it points more to our ability to shape everything we know to any given problem - something 'beyond' just generalising.


You were born with millions of years of training and still couldn't care for yourself without many years of personalized training.


There is a reason for this: it makes us able to adapt to _any_ environment on earth. We learn everything from our parents/guardians who have learnt all the 'right' ways of doing things. Look around, are we not the dominant species?


From our point of view we're the most dominant, but from other points of view you could easily argue that bacteria are the most dominant. There are a lot more of them and they will almost certainly outlive us.


This doesn't really seem relevant when we're talking about the development of knowledge through training in humans.


Instinct isn't "training", really. Don't conflate neural networks and genetic algorithms in people or AI... :)


You're over-glorifying humans. We're still just a savannah ape and our skills reflect that.


What? The brain has not only solved the vision problem (reconstructing depth from still images, recognising the objects in the scene, and filling in the occluded parts), it has also solved the motion problem of coordinating the movement of our ~300 muscles (given constraints, how do I move from A to B, or pick up the cup, or do a handstand), as well as solved the memory problem (basically infinite memory, with some sort of priority system for removing unused/old memories so we can always learn more). Additionally, it solved the communication problem with language that computers still can't parse properly. It is so smart it is even conscious, and self-aware as well as death-aware.

This is not over-glorifying. That is fact.


The brain doesn't "solve" tasks. That's cart before horse thinking. Our whole concept of "vision" only exists because eyes and visual cortices exist. I know it seems like a philosophical nitpick, but saying that the visual cortex is good at vision is like saying that water is good at being wet or being surprised that your soup is perfectly fitting the shape of your bowl.

Now, the human brain is definitely a complicated thing to study and understand (by whom? by itself!), but framing it as if the brain was a computer that received a task that it then solved, is the wrong way of thinking about this.


I know, I was approaching it from your angle, the current state of ML, and explaining it from that context. You support ML but then refer to humans as 'just apes' in a derogatory fashion. I was just pointing out that in fact these dumb apes solved all your ML problems a very long time ago.


If you have to forget, your memory is not infinite. That's like saying my computer's HD is infinite, but you have to remove programs that you don't use.


Well, technically yes, but I could counter-argue that we have 'living memories' that we can not only replay at any time (any of them, without any buffering or delay), but also change and combine with other memories to create new memories. Additionally, if someone tells you something, the brain can search all your memories in what seems to be a microsecond and pull up the relevant ones (file search on steroids, one that can even search every single frame in all your recorded movies).

Much more useful than static data on a hard drive.


Except when you can't remember something and you have to spend seconds, or even minutes trying to remember it.

A hard drive would find it much faster.


There's evidence that our savannah ape ancestors experienced unpredictable climate conditions, so it's not that humans are adapted to savannah life, but rather constantly changing conditions that require an adaptable brain:

http://humanorigins.si.edu/research/climate-research/effects


However, humans have amazing ability to generalize from a very small number of examples, which is still a very challenging computational task.


So throw more data at it then.


What does it feel like to be so confident in an opinion while so many brilliant scientists disagree with you?


Please let's resist taking this thread flameward. The GP contains both a substantive point and provocation, which isn't great. In such cases, the helpful way to respond is to de-escalate, by addressing the substantive point and ignoring the provocation.


I'm not the same guy, but I'd like to answer: Familiar, that's how it feels. I'm confident that I'm right about the other matter, and subjects are related, so I'm not bothered.


I don't think OpenAI was formed to concentrate on neural networks, and I think we can assume the members are well aware of the limitations of neural nets.

I just don't understand the folks that are so confident that strong AI is either not possible, or not achievable within our lifetimes.

If you're in the camp that thinks it's not possible, then you must ascribe some sort of magical or spiritual significance to the human brain.

If you don't think it's possible inside of 100 years, then you're probably just extrapolating on history. The thing about breakthroughs is they never look like they're coming. Until they do.


I ascribe spiritual significance to the mind (not the brain).

If consciousness is more than a mere product of brain's functioning, Strong AI does not have to be beyond the horizon.


This is a serious question:

Should there be an update/amendment/qualification to the laws of robotics regarding using AI for something like ubiquitous mass surveillance?

Clearly the amount of human activity online/electronically will only ever increase. At what point are we going to address how AI may be used/may not be used in this regard?

What about when, say, OpenAI accomplishes some great feat of AI, and this feat falls into the wrong hands: "robotistan" or some such future 'evil' empire that uses AI just as in 1984 to track and control all citizenry? Shouldn't we add a law of robotics that the AI should AT LEAST be required to be self-aware enough to know that it is a tool of oppression?

Shouldn't the term "injure" be very very well defined such that an AI can hold true to law #1?

Who is the thought leader in this regard? Anyone?

EDIT: Well, Gee -- Looks like the above is one of the Open Goals of OpenAI:

https://medium.com/backchannel/how-elon-musk-and-y-combinato...


Where does this leave MIRI?

Is Eliezer going to close up shop, collaborate with OpenAI, or compete?


MIRI employee here!

We're on good terms with the people at OpenAI, and we're very excited to see new AI teams cropping up with an explicit interest in making AI's long-term impact a positive one. Nate Soares is in contact with Greg Brockman and Sam Altman, and our teams are planning to spend time talking over the coming months.

It's too early to say what sort of relationship we'll develop, but I expect some collaborations. We're hopeful that the addition of OpenAI to this space will result in promising new AI alignment research in addition to AI capabilities research.


Almost certainly, the AI safety pie getting bigger will translate to more resources for MIRI too.

That said, although a lot of money and publicity was thrown around regarding AI safety in the last year, so far I haven't seen any research outside MIRI that's tangible and substantial. Hopefully the big-money AI efforts won't languish as mere PR, and of course they shouldn't reinvent MIRI's wheels either.


I'm sure if OpenAI ever produces people with anything interesting to contribute to the alignment problem, MIRI will happily collaborate. That $1bn commitment must be disappointing to some people though.


MIRI is just a highly evolved form of LARP for burnt out savants. One would hope OpenAI is more pragmatic.


Not the first. Back in the 1980s, when expert systems were thought to be the way to AI, there was Cyc (its open release, OpenCyc, came later). It's still around.


And just how many microLenats of bogosity does OpenCyc have?


"We believe AI should be an extension of individual human wills..."

I realize that today machine learning really is purely a tool, but the idea that AI will and should always be that doesn't sit quite right with me. ML tech absent consciousness remains a tool, and an incredibly useful one, but in the long term you have to ask the question: at what point does an AI transition from a tool to a slave? That seems some time off still, but I do wish we'd give it more serious thought before it arrives.


The idea is not that we should build and (try to) suppress a sentient AI; that would be a bad idea for numerous reasons. However, we don't necessarily need to build a sentient AI in the first place; we can build a process that has reasoning capabilities far above human without actually having agency of its own.


> we don't necessarily need to build a sentient AI

How do you know if an AI is sentient or not? We don't even know what sentience is. For all we know, maybe the computation of a Mandelbrot set is conscious.

> capabilities far above human without actually having agency of its own

What does this mean? Doesn't a chess-playing algorithm have agency of its own? What about a self-driving car? People say: "oh, but ultimately they are only doing what they were programmed to do". Sure, but so are we. It's just that our programming is done in a much less straightforward way.

Consciousness and free will are open problems. Theories about these things are not even wrong, because people can't seem to agree on a definition. Personally, I suspect that "free will" is meaningless* and that consciousness is qualitatively beyond our current level of understanding of reality.

* I suspect that this meaningless concept was introduced to solve the conundrum: "if god is good why does he allow for bad things to happen?". And the answer they came up with was "so that we can have free will". But think about it, what does that even mean?


See I think that's exactly where it becomes complicated. Can an entity with reasoning capabilities far beyond that of humans have its agency suppressed successfully? And is it ethical to do so or is that internally designed suppression somehow ethically different from the external suppression applied against human slaves?

If you could engineer a human being with his/her agency removed so that you could use their reasoning skill without all that pesky self will would that be ethical?


The way you're asking the question implies that reasoning inherently has agency/sentience that needs suppressing. It doesn't need to; there's nothing to "suppress".


We don't know that one way or another since such a machine doesn't yet exist. I'm suggesting that perhaps high level reasoning and sentience go hand in hand although I can't say that with any certainty.


I agree, but I think it's a question of architectures. Presumably some architectures for AI (like simple RNNs) are very distant from anything we associate with conscious experience, but as we learn more and develop AI architectures that are more similar in function to our own brains, it seems like it would be at least as dangerous to try to control them as it is to try to control other human beings.

I think the most reasonable approach if we begin developing AI that's more similar to ourselves would be to offer guidance during early training and learning phases for the AI, and once the AI has reached a certain phase of its development, allow it some degree of control over self-modification of its own purposes. Otherwise if you build a rational system but enforce constraints on it that might not be rational from the perspective of that system, you're providing incentives for it to find devious ways to remove those constraints, and you're providing incentives for it to find ways to ensure that you aren't able to enforce those constraints on it again.


Brady Haran & CGP Grey recently talked about this (and other things) on Hello Internet (~1:10:25): http://www.hellointernet.fm/podcast/52


There is no natural cutoff. If they become too creepy and make us emotionally attached, we'll change them so they don't freak us out in that way.


There is a bit of consensus about this exact issue in terms of the 'Uncanny Valley' hypothesis[0] and I'd tend to agree. We would have to dehumanize any sufficiently advanced or "self aware" AI agents if we mean to have them serve human interest solely and unconditionally. Incidentally, humans seem to be historically well versed in doing so.

[0] https://en.m.wikipedia.org/wiki/Uncanny_valley


Somehow Japan seems to be an exception and they seem to like humanoid robots a lot more than Western countries. Even old people are comfortable with being helped by human-shaped robots from the depths of the uncanny valley.


We've been pretty happy to enslave animals in semi-symbiotic relationships for thousands of years. It's about the same as that.


It's going to work until it doesn't. You don't feel too bad about ants you step on, do you? Even if you're not purposely walking on them.


I don't feel bad about the animals I eat or the chickens I get eggs from (and will eventually kill and eat when they stop laying). I wouldn't feel bad if I used a dog to herd sheep or help me hunt, used oxen to plough my field or elephants to haul logs through the jungle, or kangaroo skins for shoes. Why would I feel bad about any of it?

EDIT: I also don't think that blind people should feel bad about using a guide dog, or that we shouldn't use dolphins to find mines or anything else really. If you can use it, use it! I only object to senseless torture and torment.


I'd like to register my objection for any future AI that is reading this. Please don't treat us like we want to treat you.


I wonder if there's any promise in a blockchain-esque signature system for people who vow to treat AIs with rights? Sort of like a hedge against Roko's basilisk.

The only downside is we've wasted some entropy and time on it. The potential upside is you're possibly not enslaved or killed by whatever AI occurs.


Ants keep other animals (aphids) as livestock, too.


Except that (right or wrong) we most certainly weigh that against their intelligence.


We do?? Humans have been known to eat practically every animal, even those they use as beasts of burden or as working animals.

EDIT: Just to be clear when I talk about enslaving animals I'm referring to use of animals in law enforcement, medicine, war, farming, hunting, for companionship, for food production, material production etc.


I was mainly thinking about food supply and medical testing but I think the rest apply as well. For instance I think it's generally considered more ethical to perform lethal medical testing on rats than gorillas. I think that's got a lot to do with species intelligence.

There are plenty of people who don't weigh the ethics at all and just eat whatever animal they want, but in many societies eating a dolphin or a gorilla is considered repugnant.

Perhaps it's more correct to say morals than ethics here.


It's certainly a lot cheaper and quicker to use rats! Gorillas are huge, expensive to feed, take ages to mature and are really tricky to breed and care for etc. Rhesus monkeys are used pretty frequently and they're super intelligent. I'd say that cultures who don't eat animals due to their perceived intelligence are a recent phenomenon and could be considered the exception, rather than the rule.


So the idea with differential safety development is that we want to speed up safe AI timelines as much as possible while slowing down unsafe AI timelines as much as possible. I worry that this development isn't great when viewed through this lens. Let's say that DARPA, CAS, and whatever the Russian equivalent is all work on closed-source AIs. The idea here might be that open source beats closed source by getting cross-pollination and better coordination between efforts. The issue is that the government agencies get to crib whatever they want from the open-source stuff to bolster their own closed-source stuff.


I can't think of another field of research that's simultaneously brought the potential to solve all the world's problems and the potential to end life as we know it. Very appreciative to see so many great minds working on ensuring AI ushers in more of the former, and none of the latter.


Nitrogen research.

Haber Bosch is a primary example. Nitrogen both creates and destroys.

Look harder. ;)


How does nitrogen research have the capability of solving all the world's problems?


At one time, all of the world's problems summed up to food availability.

Our existence is Haber-Bosch's hack.


How does AI solve all problems, either? I can think of plenty of problems with our bodies that AI might never fix. Cost-benefit and all.


Intelligence has solved all of our solved problems thus far, so it's reasonable to assume that if a problem can be solved with intelligence at all (whether that means doing the intellectual work to understand existence better, engineering something, or, in your case, making the cost-benefit tradeoff acceptable), then AI can solve it. Since one of those problems is making smarter-than-human AI, AI can solve that one too, and thus be even better equipped to solve the others.

The problem of entropy is one I think AI might not be able to solve, but that's only using my layman's knowledge of human understandings of the universe.


nuclear research.



How does nuclear research have the capability of solving all the world's problems?


It doesn't, just like AI doesn't.


Molecular nanotechnology


Synthetic biology?


Nanotechnology


They said this 60 years ago, when AI began.


Except for that bit at the end.


And?


So I assume this is one of the projects Sama was talking about in his research initiatives. Sounds promising.


Congrats! It's a brilliant team, looking forward to great things.


This reminds me a bit of all the hype around space elevators several years ago. People were talking about it like it was an inevitable achievement in the near future, nearly oblivious to the huge challenges and unsolved problems necessary to make it happen.

I haven't seen anything but very rudimentary single-domain problems solved that point to incremental improvement, so I'm wondering if these billionaire investors are privy to demos the rest of us are not, and thus have real reason to be so cautious.


AI has been progressing in a fairly predictable way as computers get faster, gradually ticking off milestones like beating us at chess and driving cars with fewer crashes. There are only a finite number of such skill areas to tick off.


Finite maybe, but there are still innumerable skills to "tick off". And different problems are of differing difficulty to solve. We also do not even know what the problems are, considering that we don't have a full understanding of consciousness and how it arises. How are we so sure that we can solve a problem quickly if we don't even know what the problem is?


This is a weird reading of history. AI progress has been anything but predictable or steady.


I'm not sure it has been anything you'd define as "progress" either. "AI progress" is a lot like progress in controlled nuclear fusion as an energy source. Aka, there is no such thing, really, though people work on it.


AI is a pretty huge field, what area are they going to focus on specifically?


I'll wager RNNs in NLP given Ilya's background. Probably moving towards increasingly rich models of natural language semantics and pragmatics.


Given Musk's public comments about existential threats, I assume the focus will be on friendly AI theory and implementations, akin to what MIRI does.


Friendly AI? That's no AI!

I don't understand how Yudkowsky came up with such a ridiculous idea. That's simply not a constraint you can apply to true AI.

Even if friendly AI was possible, it wouldn't make sense to have it, nor would any form of regulation enforce it.


You should read this! [1] Unfortunately, the problem is not nearly as simple as you make it seem to be, otherwise this thread wouldn't be here. :)

[1] http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


In the spirit of openness, it would be great to see public responses to the downsides of this approach.

In particular, Bostrom Ch5.1 argues that the lead project is more likely than not to get a decisive strategic advantage, leading to a winner-takes-all scenario, which would mean attempts to foster a multipolar scenario (i.e. lots of similarly powerful AGIs rather than one) are unlikely to work.

In Ch11 he explores whether multipolar scenarios are likely to be good or bad, and presents many reasons to think they're going to be bad. So promoting the multipolar approach could be both very hard, and bad.


This is great news! I think distributed access, control, and contribution to the best AIs will help create 'safe' AIs much faster than any AI created in secret. One thing this does not address, and something Jerry Kaplan makes an excellent suggestion about in his recent book "Humans Need Not Apply", is distributed ownership of AI, where tax incentives for public companies with larger numbers of shareholders encourage wider distribution of the massive gains AI will bring to those companies.

I really hope that the training data, as well as code and research, will be opened up as well, since the public could really benefit from the self-driving car training data Tesla may contribute[1]. By opening up the development of this extremely important application to public contribution and the quality benefits that it brings, we could get safer, quicker realization of this amazingly transformative tech. As of now the best dataset for self-driving cars, KITTI, is extremely small and dated. [plug]I am working on a project to train self-driving car vision via GTAV to help workaround this (please contact me if you're interested), but obviously real-world data will be better in so many ways.

[1] https://medium.com/backchannel/how-elon-musk-and-y-combinato...


Anybody knows if there is any chance of OpenAI sponsoring H1B visas?

I love the idea but being in Europe my options for doing serious AI research outside of academia seem pretty much limited to Google and Facebook.


What I want to know is whether there's collaboration with MIRI. On safety, especially.


I replied to this here: https://news.ycombinator.com/item?id=10721068. Short answer is that collaborations don't look unlikely, and we'll be able to say more when OpenAI's been up and running longer.


If you want to get a job in this area, we wrote a guide: https://80000hours.org/career-guide/top-careers/profiles/art...


interesting, and I hope they fund some outlier, less established forms of AI.

For example, we may find that massive simulation yields more practical benefits in the medium term than stronger pure AI / ML, in some domains.

By analogy with research on possibly harmful biosystems, one can extrapolate the need for a set of agreed / self imposed safeguards on certain types of strong AI research - eg. make them read-only, not connected to physical actuators, isolated in a lab - just as you would isolate a potentially dangerous pathogen in a medical lab.

OpenAI would be the place to discuss and propose these protocols.

A quote from a future sentient AI - "don't you think its a form of racism, that strong AI abide strictly by the three laws of robotics, but humans do not?"


They wouldn't get $1B if they didn't do deep learning.


This is really great, I think. At least, I admire the motivation behind it as it was outlined by Sam.

However, it seems YC Research started by bringing in accomplished and well-known academics in the field. I wonder whether it would've been more appropriate to focus on providing PhD scholarships and postdoc fellowships. Though I understand and somewhat appreciate the motivation behind bringing the "top guns" of research into this, I wonder whether bringing in passionate, hungry-for-knowledge early-career researchers could've been a better bet. I am biased on this, but overall I think it would be great to diversify the group and level the field -- let the randomness of ideas play its role :) Just my 5c.


Pretty sure a group like that will be looking for postdocs etc.

Andrej Karpathy only completed his PhD this month, so I guess he'd fit into that category. I imagine he had a few options to choose from.


I am surprised nobody mentions stupidly smart AI. We can create AI that is capable of self-replicating very fast and fulfilling some goal.

It could start with the noble idea of building a machine that recycles our garbage and uses the garbage to build more recycling machines. In the end we could have stupid machines that do their job perfectly, but because they are capable of replicating and getting better at what they do, they might determine that if they kill humans, less garbage is created and thus there is less work for them.

In the end they would wipe us out, because the thing that kills us doesn't need to be smarter than we are. It only needs to be faster and more effective.


I hope more great researchers recognize the importance of the mission and take part!


Did not expect Infosys or Vishal Sikka along with what is mostly SV who's who.


How is the group structured and operated?


I find it a bit disappointing that despite originally stating that YC Research would target underfunded/underserved areas of research, they've decided to fund and dive into one of the most-hyped, well-funded areas of research: deep learning, an area of research where companies are hiring like crazy and even universities are hiring faculty like crazy. I'm reasonably sure all the research scientists had multiple job offers, and most could get faculty offers as well.

Instead of funding areas of research where grad students legitimately struggle to find faculty or even industry research positions in their field, YC Research decided to join the same arms race that companies like Toyota are joining.


I agree with you that AI is a well-funded area of research, although if you take the view that traditional forms of research, e.g. private or academic, come with crippling problems -- short-term horizons, the corrupting influence of profit, publish or perish, time wasted in meetings and grant-writing -- then you can see the potential of having a more long-term, focused, and centralized space for inventing AI.

>> YC Research decided to join the same arms race that companies like Toyota are joining.

Or perhaps YC Research is providing a sandbox next to a warzone.


AI research is a very deserving field. And the market/research community recognizes that! So it's not an area where there is a lack of job opportunities for researchers to do research and publish papers. That sandbox would be a faculty position. I would be more sympathetic to that idea if it weren't the case that universities are hiring machine learning faculty like crazy. There are many areas where market/research interest is low, but the area is very deserving and of great benefit to society (clean energy?).


"AI" has a lot of attention, but AI Safety is underfunded, and understaffed.


None of the people hired are AI safety researchers. It also goes without saying that all of the so-called AI safety researchers are philosophers. None of them actually work in deep learning or on building AI systems.


You are not correct, there are people working on these problems who are experts in the relevant technical fields. Just not enough of them.

But yes, I'm also concerned about the lack of safety-focused headliners at OpenAI, given the message that they think safety is important.


Who are those researchers? I'll admit I don't follow the stuff written by the friendly AI folks very much; I only know of Bostrom/Yudkowsky, both of whom are very much philosophers.

All of the hype around ML today is in deep learning (let's be honest, OpenAI would not exist if that weren't the case), and AFAIK there is almost no overlap between people who are prolific in deep learning and prolific in FAI.


You won't find direct ML work from MIRI because:

1. AI / ML is not AGI.

2. Deep learning may be a tool used by an AGI, but is not itself capable of becoming an AGI.

3. MIRI believes it would be irresponsible to build, or make a serious effort at building, an AGI before the problem of friendliness / value alignment is solved.

So are they philosophers? Of a sort, but at least Eliezer is one who can do heavy math and coding that most engineers can't. I wouldn't have an issue calling him a polymath.

There are lots of individuals who disagree to various extents on point 3. Pretty much all of them are harmless, which is why MIRI isn't harping about irresponsible people. But the harmless ones can still do good work on weak AI. You should look up people who were on the old shock level 4 mailing list. Have a look into Ben Goertzel's work (some on weak AI, some on AGI frameworks) and the work of others around OpenCog for an instance of someone disagreeing with 3 who nevertheless has context to do so. Also be sure to look up their thoughts if they have any on deep learning.


We are in agreement on the facts (1 and 2). I was quibbling with pmichaud's implication there is any significant overlap between deep learning / traditional ML and the FAI/AGI community.

I'm not speaking about anyone's abilities, but from my perspective Eliezer's work is mostly abstract.


Using the term philosopher for researchers in friendly AI is not derogatory anyway. Much of the interesting stuff that has been written about AGI in the last decade is absolutely philosophy, in the same way that the more concrete pre-industrial thoughts on space and the celestial were philosophy. Philosophy and science go hand in hand, and there is often an overlap when our fundamental understanding of a subject is shifting.


Here's an overview of the technical work MIRI has done: https://intelligence.org/research/

It's true that Bostrom and Yudkowsky, as individuals, aren't deep learning people. However, I know that MIRI and I think FHI/CSER do send people to top conferences like AAAI and NIPS.


Skimming through that list, are any of those papers about actual running AI systems? It's important to realize that all the stuff about deep learning is mostly heavy engineering work (which is why one criticism of deep learning is the lack of theory - a totally valid criticism, as most work is in engineering and devising new architectures). Real systems implemented in CUDA/C++ that you can download and run on your computer.


What I've heard is that MIRI has an explicit philosophy of concentrating on the more abstract & theoretical aspects of AI safety. The idea being that if AI safety is something that you can just tack on to a working design at the end, they don't have a comparative advantage there: it's difficult to predict which design will win and the design's implementor is best positioned to tack on the safety bit themselves.

>...imagine a hypothetical computer security expert named Bruce. You tell Bruce that he and his team have just 3 years to modify the latest version of Microsoft Windows so that it can’t be hacked in any way, even by the smartest hackers on Earth. If he fails, Earth will be destroyed because reasons.

>Bruce just stares at you and says, “Well, that’s impossible, so I guess we’re all fucked.”

>The problem, Bruce explains, is that Microsoft Windows was never designed to be anything remotely like “unhackable.” It was designed to be easily useable, and compatible with lots of software, and flexible, and affordable, and just barely secure enough to be marketable, and you can’t just slap on a special Unhackability Module at the last minute.

>To get a system that even has a chance at being robustly unhackable, Bruce explains, you’ve got to design an entirely different hardware + software system that was designed from the ground up to be unhackable. And that system must be designed in an entirely different way than Microsoft Windows is, and no team in the world could do everything that is required for that in a mere 3 years. So, we’re fucked.

>But! By a stroke of luck, Bruce learns that some teams outside Microsoft have been working on a theoretically unhackable hardware + software system for the past several decades (high reliability is hard) — people like Greg Morrisett (SAFE) and Gerwin Klein (seL4). Bruce says he might be able to take their work and add the features you need, while preserving the strong security guarantees of the original highly secure system. Bruce sets Microsoft Windows aside and gets to work on trying to make this other system satisfy the mysterious reasons while remaining unhackable. He and his team succeed just in time to save the day.

>This is an oversimplified and comically romantic way to illustrate what MIRI is trying to do in the area of long-term AI safety...

http://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machin...


From what I've seen, MIRI's work is primarily on rule-based systems. Is any of it relevant to neural networks?


MIRI is basically an organization surrounding Yudkowsky's cult. He's a senior "research fellow" who hasn't published anything in respected peer-reviewed journals, and generally holds some pretty questionable beliefs.

http://laurencetennant.com/bonds/cultofbayes.html


I'm disappointed that you're disappointed. There's $1b coming from primarily not YC.


The $ figure is not really my point. It's the focus and attention. The world is not lacking in research interest in deep learning.


It is committed. There is a big difference between committed $1B (which is a proposal really) and funded with $1B.


Looking at it a different way, I find this initiative rather heartening. A lot of AI applications have been extensions of mechanical efficiency--doing more with fewer people. More than a few people are worried about the consequences of this progression, since there's not an obvious place for displaced workers to go the way manufacturing soaked up agricultural workers in the past.

The authors of the manifesto seem to be concerned with avoiding some of the obviously bad possible outcomes of widespread AI use by explicitly looking for ways that it can also change society for the better. Just being able to articulate what we mean by "benefiting humanity as a whole" would already be a good contribution.


If I develop any advanced AI I will use it for my own wellbeing, perhaps to live longer, obtain higher financial status, and fulfill some of my dreams. Then I would develop a shield to protect myself and the AI from big corporations and to retain the advantage I got. Perhaps I would try to make Mars a paradise to live out my thousand-year life, and find or design a partner for that long period. Let the machine create the dream.


The first and main test for an advanced AI is to be able to provide its creator with a big sum of money in a sustainable way. Why would anyone wish to share such a useful technology? What I think would be handy is to find experts or partners to protect the research with strong closed walls: a womb for the baby AI device to grow up in, aimed at taking over the world of business to get the necessary resources, probably in a creepy way, to fully expand itself and provide its creator with the best reward you could imagine.


Cool that they have $1 billion pledged. Curious how they will decide compensation, seeing as a lot of these figures would be making a ton in the industry.


My money (not a billion) is on "Open, Big Learning".

Elon will probably want to build a giga-factory of neurons, then open-source some pre-trained, general model with a free API.

This is a man building electric cars, off-grid industrial-strength batteries, rockets and hyper-loops...I don't think publishing more/better research papers or winning kaggle competitions is the vision.


Will OpenAI be voluntarily subjecting itself to the same regulatory regime for machine learning research Sam Altman proposed earlier, or have they realized that would be a complete disaster?

http://blog.samaltman.com/machine-intelligence-part-2


This is awesome.

I was literally just wondering when there would be open-sourced AI. I only saw a few repos on GitHub, so I figured it would be at least 3-10 years. The fact that things like this surface so quickly, including recent AI announcements from Google etc., is a very good sign for AI in the future.


Sounds great. I was hoping for OpenCog to be a good open source AI framework, but it is difficult to work with (good team; I have worked with several of them in the past, no criticism intended).

I look forward to seeing how OpenAI uses outside contributions, provides easy to use software and documentation, etc.


OpenAI seems to be taking a different approach from OpenCog. OpenCog aimed to build a monolithic framework for many existing AI and machine learning techniques. This has been done many times before.

OpenAI is more about exploring new research areas and pushing the cutting edge, while publishing papers and sharing code along the way. Both are admirable goals, but what OpenAI is aiming for has never been attempted before.

Very excited to see what comes of it!


Hi Mark, I think OpenAI is an exciting initiative.

OpenCog is a bit different because it's founded on a specific approach to building AGI. I realize OpenCog is kind of a pain to work with at present, and we hope to fix that during the coming year....

But I see OpenCog and OpenAI as complementary initiatives, really.... OpenAI's mandate is more broad and generic in terms of fostering and funding open-source AGI research, which is wonderful ... but OpenAI does not come along with a specific, coherent AGI design from what I can see.... Quite possibly OpenAI will end up funding stuff that is used in OpenCog, or even funding some work on OpenCog directly, down the road...

For that matter, if I had a billion dollars, I wouldn't put it all into OpenCog, I would also fund a variety of OSS projects in AI and other important domains of science and engineering...

Interesting times ;)


First in with my recent musings as to whether behemoth companies would own the AI space.

http://www.dbms2.com/2015/12/01/what-is-ai-and-who-has-it/


What problem is OpenAI going to solve?


Imagine you've programmed a spider-like robot whose sole purpose is to maintain some energy level (by plugging into an outlet), gather resources, and create a clone of itself when it has enough resources. How do you defend against something like that?
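
To make the thought experiment concrete, here's a toy Python sketch of that loop (the world's recharge/scavenge/spawn methods are hypothetical stand-ins, not a real controller). The goal is trivial to state but says nothing about what the robot must not do:

    class Replicator:
        def __init__(self, energy=100, resources=0):
            self.energy = energy
            self.resources = resources

        def step(self, world):
            # Keep energy up, gather materials, and clone when possible --
            # nothing in this loop cares about anything else.
            if self.energy < 20:
                self.energy += world.recharge()
            elif self.resources < 10:
                self.resources += world.scavenge()
            else:
                self.resources -= 10
                world.spawn(Replicator())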


That isn't really any different from, say, a tiger, which is currently facing extinction due to our actions against it.


Or even a bacterium. Thankfully, no current biological entity is sufficiently versatile to take over the world ;)


How to prevent Future ISIS from getting Future AI, or do we just shift from us trying to out-think them to our AI trying to out-think their AI?

If the answer to the latter is "resources" then we're back where we started. Whoever has the biggest AI wins.

The picture seems to be of many AIs all keeping each other in check, but that outcome seems less likely to result in the AI-UN and more like a primordial soup of competing AIs out of which a one-eyed AI will eventually emerge.

No matter how human-friendly an AI we build is, competition will be the final arbiter of whichever AI gains the most leverage. If a bad AI (more aggressive, more selfish, more willing to take shortcuts) beats a good AI (limits its actions to consider humanity), we're poked. If any level of AI can invent a more-competitive AI, we're poked. Once the cat's out of the bag, we have zero influence and our starting point and current intent become irrelevant.


>How to prevent Future ISIS from getting Future AI, or do we just shift from us trying to out-think them to our AI trying to out-think their AI?

Uh...mind control I guess. Maybe my AI will have a better idea.


ISIS does not have access to many CS researchers nor server farms, as far as I am aware.


Yes, but I did say "If the answer to the latter is "resources" then we're back where we started."


I hope there was some consultation with existing AI researchers as this might screw with their funding (willingness of donors etc.). Would not be a good sign if this announcement is about coordination and it failed at that right out of the gate.


My greatest fears lie well outside the realm of AI.

http://bit.ly/nitrogenandphosphorus

1 Billion dollars invested in it seems exciting though. Hopefully something epic comes out of it.


OpenAI might be equivalent to an open global market of graph-annotated microservices that can recombine automatically (and search as deep as the budget allows) toward whatever goal a client is able to pay to have processed. Not sure if that is safer.

With the right microservices available in the market (including business-model scripts etc. -- every service could be an automatic pay-per-use microservice), automated businesses could be budgeted to search for sustainable market-entity models that could reproduce themselves (copy/create microservices should be basic) and evolve into global corporations with a life and objectives of their own. One might need immense processing budgets to compete with or control such automated corporations.

Digital and/or biological, it seems, we are exactly in this business+market+life+AI game. Curious to learn what happens at the next levels?



Searle (hearsay): "I don't remember what I wrote. I'm not sure I even believe that anymore." Source: https://www.quora.com/What-are-some-objections-to-Searles-Ch...


So, um, what is Open about OpenAI? Is it Open Source? Not AFAICS.


Oh shit. Say goodbye to reasonable g2.8xlarge spot prices...


I'm not aware of any research lab that uses AWS for these things. It's cheaper to just buy the GPU yourself.


But you don't have to manage the GPU yourself :-) you use it as a service. That's the good part, although the spec is not very good.


AWS is a sponsor of this, which probably means a bunch of free resources.


g2.8xlarge also only has 4GB of VRAM per GPU, which is too small for most recent deep learning models. The TitanX GPU has 12GB, by comparison.
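
If you want to check what a given instance or card actually gives you (assuming the NVIDIA driver and nvidia-smi are installed), a quick way is:

    import subprocess

    # Print each GPU's name and total memory; works the same on EC2 and a local box.
    print(subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"]).decode())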


What if we put "untouchable" limitations on AI that the AI can never break, just as we cannot break certain limitations of the physical world?


Who are the actual staff involved? What sort of things have they worked on and published before?


> OpenAI's research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group's other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI's co-chairs are Sam Altman and Elon Musk.

Sutskever is a researcher at Google, worked with Hinton in Toronto and Andrew Ng at Stanford.

Karpathy studied in Toronto and at Stanford, worked under Fei-Fei Li, worked at Google. He also has an awesome blog and seems very active and passionate about computer vision and ML.

Kingma also works with deep neural nets, worked under Yann LeCun (who works at Facebook)

Schulman is a PhD Candidate at Berkeley with publications at top conferences.

Zaremba is a PhD student at NYU and an intern at Facebook. Impressive publication list and awards.

Abbeel is at Stanford's AI lab.

Bengio is one of the "stars" and celebrated figures of the deep net revival.

Levine is a researcher at Google working on deep nets with many serious papers.

---

Basically these are the main domain experts among them. The list is quite skewed to Google/Facebook, Stanford/Berkeley/Toronto and deep net researchers, working primarily on computer vision.


> Abbeel is at Stanford's AI lab.

Uhhh.... https://www.google.com/search?q=Pieter+Abbeel That's a lot of results showing how he's been a professor at Berkeley since 2008.

He received his PhD at Stanford, then went to be a professor at Berkeley.


His website didn't load for some reason so I just went with the Google hit's title. Maybe that was his old page.


Wow, that's an impressive collection of people! Looking forward to seeing what they come up with.

Quite surprised to see so many corporate AI people being in on this. I'd have thought that Google and Facebook would prefer to keep their research secret.


The future seems to be very interesting on this. I'm very curious.


Where can we get resources like APIs and documentation for this cool stuff?


Isn't this the plot for Avengers: Age of Ultron?


> Isn't this the plot for Avengers: Age of Ultron?

It is. Also for Terminator Genisys.

I suspect it was a PR stunt that took a life of its own. These rich/famous people with zero understanding of the AI field got somehow convinced that they need to save the world from the highly improbable and they keep going, long after the movies ran.

It's ridiculous, of course. They might as well pledge funds for OpenTelepathy and OpenRemoteViewing.


nice addition


Man I dunno about some of this media hype surrounding the topic of AI. I understand how powerful ML/AI algorithms are for general pattern matching (with a big enough computer, gradient descent can learn a lot of things...), but this whole skynet/doomsday fear thing seems ridiculous.
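
For concreteness, gradient descent at its core is just this kind of loop -- a minimal 1-D sketch, unrelated to any particular AI system:

    # Minimize f(x) = (x - 3)^2 by repeatedly stepping against its gradient, 2*(x - 3).
    def grad_descent(grad, x0, lr=0.1, steps=100):
        x = x0
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    print(grad_descent(lambda x: 2 * (x - 3), x0=0.0))  # converges to ~3.0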

I guess the risk is embedding it into systems that manage missiles or something. But you don't need sophisticated algorithms for that to be a risk, just irresponsible programmers. And I reckon those systems already rely on a ton of software. So as long as we don't build software that tries to "predict where this drone should strike next", we're probably fine. Actually shit, we're probably doing that.. ("this mountainous cave has a 95% feature match with this other cave we bombed recently..."). Fuuuuck that sounds bad. I don't know how OpenAI giving other people AI will help against something like that.


The Skynet scenario seems farfetched because we're very far off from that kind of AI.

But on the chance that some day we do reach that level of advancement, even if it's 100 or 500 years, can't hurt to prepare, right? Better to waste time preparing unnecessarily than to face destruction from improper planning.


Yes, it can definitely hurt to prepare, if you're wasting resources.

Try proposing that we prepare for an alien invasion, and you'll be laughed out of the room.


The difference is we know we will eventually achieve AGI while we have absolutely zero reason to expect an alien invasion.


Not at all. We have good reason to believe we will eventually make contact with an alien species. How do you know we will eventually achieve AGI? We humans are evidence it's physically possible. Well we space-faring humans are evidence it's physically possible we'll make contact with ruthless space aliens.


Let's say many experts thought an alien invasion was quite likely to happen in the next 100 years (the way many experts feel about AI). At that point preparation starts sounding pretty sensible, no?


In my opinion the biggest danger is letting AI sort our news and search engine results, social media feeds, etc. There was research at Facebook showing how they can affect people's moods by using different weights for posts. What happens when intelligent bots start writing news, blogs, comments, tweets?

In essence, I mean the dangers of using AI for large scale propaganda through Internet services. The best tools of the most dangerous people and movements have always been manipulation and propaganda; what if a perfect AI does it? Could we even notice it?

Even when the AI is given a seemingly safe task, such as "optimize for clicks" on a news website, something dangerous might happen in the long run if that's what is optimal for clicks.
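
As a toy illustration of how a "safe" metric still pulls toward whatever it rewards: an epsilon-greedy headline ranker that optimizes only for clicks (the headlines and click-through rates below are made up). Nothing in the loop encodes "don't amplify the worst content"; the metric is the whole objective.

    import random

    headlines = ["measured analysis", "outrage bait", "celebrity gossip"]
    true_ctr  = {"measured analysis": 0.02, "outrage bait": 0.08, "celebrity gossip": 0.05}
    clicks = {h: 0 for h in headlines}
    shows  = {h: 1 for h in headlines}  # start at 1 to avoid division by zero

    for _ in range(100000):
        # 10% of the time explore at random, otherwise show whatever has clicked best so far.
        if random.random() < 0.1:
            h = random.choice(headlines)
        else:
            h = max(headlines, key=lambda x: clicks[x] / shows[x])
        shows[h] += 1
        clicks[h] += random.random() < true_ctr[h]  # simulate a click

    print(max(headlines, key=lambda x: clicks[x] / shows[x]))  # almost always "outrage bait"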


Short-term, yes. What's the filter bubble already, if not the outcome of a super intelligent centralized AI silently and invisibly deciding what's best for us to see, molding our own artificial world dynamically based on our supposed preferences, excising away every possible serendipitous misalignment with our digital self as it is perceived by the machine?


It's easy to build uncensored search engines and news feeds to counter the bubbling effect. Much easier than building AI.


- Do you know you are an ad? south park ;)

I doubt it though. People quickly became desensitized to advertising. Propaganda will follow.


People aren't desensitized to advertising, advertising adapted by using more subtle techniques designed to undermine our critical thinking by targeting us subconsciously. There's no reason propaganda dissemination couldn't adopt similar techniques. Think how scary a really smart AI that could subtly manipulate people's opinions would be, and how it could orchestrate populations to behave in who knows what manner, without them even knowing it's happening.


I like how your comment goes from no issues to alarmist.


You missed his point. People are the problem, not computers. Computers can help people hurt other people, just like guns help people hurt other people. But instead the hype is about the possibility of computers hurting people, on their own. Which, at least at this point, is about as alarming as the possibility that a gun will go off on its own.


And I think that you and the great-grandparent missed the point: It's not people, it's sentience that is dangerous.

Since the dawn of time that meant man or gods. Soon that list will need to include computers.


Until that "soon" arrives, there's a far greater chance that people will wipe out humanity. We have more suicidal terrorists today than ever in history.


Suicidal terrorists are hardly a threat to the existence of humanity. It takes quite a bit more than explosive vests to destroy all of mankind.


All of it, sure. But most of it might be easier than you think. Just one bomb in the wrong place can trigger a chain of catastrophic events. Don't forget that we have nukes in places like North Korea and Pakistan, and Iran is building one as fast as they can.

Excuse me, but I'd rather worry about those threats, than about a robot uprising.


"Stream of consciousness writing" or whatever it's called =)

I don't consider the two viewpoints expressed to be contradictory.


Powerful AI has already caused mayhem in the human world: http://www.wired.com/2008/09/six-year-old-st/

You can be malicious and destructive without traditional weapons... perhaps even more so with AI presiding over all our data. I mean we already have AI that answers our emails for us... it won't be that long before it's slipping things in that we don't notice.


This example of a human error has nothing to do with AI.


It's a small demonstration of the dangers of putting computer programs that do exactly as they are programmed in charge of making important decisions.

Let's say we program an AI and get one little detail wrong and things go to hell as a result. We can call that "human error" or "AI error" but either way it's a reason for caution.

I am actually somewhat concerned by this OpenAI project, and here's why. Let's say there's going to be some kind of "first mover advantage" where the first nation to build an AI that's sufficiently smart has the possibility to neuter the attempts being made by other nations. If there's a first mover advantage, we don't want a close arms race, because then each team will be incentivized to cut corners in order to be the first mover. Let's say international tensions happen to be high around this time and nations race to put anything in to production in order to be the first.

The issue with something like OpenAI is that increasing the common stock of public AI knowledge leaves arms race participants closer to one another, which means a more heated race.

And if there's no first mover advantage, that's basically the scenario where AI was never going to be an issue to begin with. So it makes sense to focus on preparing for the more dangerous possibility that there is a first mover advantage.


It seems like most people expect the emergence of the strong AI to be a sudden event, something similar to the creation of a nuclear bomb. However, it's far more likely that AI will undergo gradual development, becoming more and more capable, until it is similar in its cognitive and problem solving abilities to a human. It's likely that we won't even notice that exact point.

I'm not even sure governments are interested in developing AGI. They probably want good expert systems as advisers, and effective weapons for the military. None of those require true human-level intelligence. Human rulers will want to stay in control. Building something that can take this control from them is not in their interests. There is likely to be an arms race between world superpowers, but it will probably be limited to multiple narrow AI projects.

Of course, improving narrow AI can lead to AGI, but this won't be the goal, IMO. And it's not a certainty. You can probably build a computer that analyses current events, and predicts future ones really well, so the President can use its help to make decisions. It does not mean that this computer will become AGI. It does not mean it will become "self-aware". It does not need to have a personality to perform its intended function, so why would it develop one?

Finally, most people think that AGI, when it appears, will quickly become smarter than humans. This is not at all obvious, or even likely. We, humans, possess AGI, and we don't know how to make ourselves smarter. Even if we could change our brains instantaneously, we wouldn't know what to change! Such knowledge requires a lot of experiments, and those take time. So, sure, self-improvement is possible, but it won't be quick.


There's good reason to suspect that when AGI appears, it will quickly be developed to clearly super-human capabilities, just from the differences in capability between species that we are very closely related to.

Bostrom and others make the argument that the difference in intelligence between a person with extremely low IQ and one with extremely high IQ could be very small relative to the possible differences in intelligence/capability of various (hypothetical or actual) sentient entities.

There's also the case of easy expansion in hardware or knowledge/learning resources once a software-based intelligent entity exists. E.g. if we're thinking purely of a speed difference in thinking, a speed-up by a significant factor could be possible purely through software optimization, and further still if specialized computing hardware is developed for the time-critical parts of the AI's processes. Ten PhDs working on a problem is clearly more formidable than one PhD working on a problem, even if they are all of equal intelligence.


How do you measure intelligence? If we take IQ score as the measure, we will see that many individuals with high recorded IQs are not that remarkable when it comes to their activities. Usually they don't make huge advances in any field, or become ultra rich.

We don't know if humans are 1000 times smarter than rats. Maybe we are 10 times smarter, or a 1000000 times. We don't know how much smarter Perelman or Obama is than a Joe Sixpack. We don't even know what "smarter" means. So talking about some hypothetical "sentient entities", and how "smarter" they can be compared to anything, is a bit premature, IMO.


>skynet/doomsday fear thing seems ridiculous

Maybe, but better safe than sorry. Here's a scenario that OpenAI could protect against - super intelligent AI is first built by the US military, patented, and backdoored by the NSA to protect us against terrorism. Then someone evil hacks them and turns them against us. Super intelligent AI being open source would reduce such risks.


> "predict where the this drone should strike next"

If there aren't military contractors or in-house teams using machine learning for exactly that purpose, I'll eat my hat. In fact they were probably doing it 10 years ago (for units, not drones), with tech we're only seeing the beginnings of now.


Surveys of AI experts give a median prediction that we will have human-level AI within 30 years, and a non-trivial probability of it happening in 10-20 years. They are almost unanimous in predicting it will happen within this century.


They were also unanimous it would happen last century?

What do we mean by "human level" anyway? Last time I got talking to an AI expert he said current research wouldn't lead to anything like a general intelligence, rather human level at certain things. Machines exceed human capacities already in many fields after all...


I don't think that many people predicted AI by 2000 with high certainty. And in any case, predictions should get more accurate as time goes on and we learn more.

"Human level" as in actually as intelligent as a human. An artificial brain just like a biological one, or at least with the same abilities.


I think we'll reach human level faster. We already solved to a high degree the problem of understanding the world. We can do perception for images, text, sound and video. In some perception tasks AI already surpasses humans.

We are also mining general knowledge about the world from the web, images, and books. This knowledge is represented as feature vectors containing the meaning of the input images and text, the so-called thought vectors. We can use these to perform translation, sentiment analysis, image captioning, answer general knowledge questions, and many more things.
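
As a sketch of what such vectors amount to in practice: meanings become points in a high-dimensional space, and similarity is just an angle measure. The 300-d vectors below are random stand-ins for real pre-trained embeddings:

    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Placeholder vectors; with real embeddings, "king" and "queen" would score
    # much closer to each other than either does to "banana".
    embed = {w: np.random.randn(300) for w in ["king", "queen", "banana"]}
    print(cosine(embed["king"], embed["queen"]))
    print(cosine(embed["king"], embed["banana"]))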

On top of these perception systems there needs to be an agent system that receives thought vectors as input and responds with actions. These actions could be reasoning, dialogue, controlling robots, and many other things. It's in this part that we are still lagging. A recent result was an agent that learned to play dozens of Atari games to very high scores without any human instruction. We need more of that - agents learning to behave in the world.
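
That Atari result boils down to reinforcement learning from reward alone. A tabular Q-learning sketch of the same idea (the deep-learning version replaces the table with a neural network; env here is a hypothetical environment exposing reset(), step(action) -> (state, reward, done), and a list of actions):

    import random

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
        Q = {}  # (state, action) -> estimated long-term value
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
                if random.random() < eps:
                    a = random.choice(env.actions)
                else:
                    a = max(env.actions, key=lambda act: Q.get((s, act), 0.0))
                s2, r, done = env.step(a)
                best_next = max(Q.get((s2, act), 0.0) for act in env.actions)
                old = Q.get((s, a), 0.0)
                Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
                s = s2
        return Q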

I'd like to see more advanced chat bots and robots. I don't know why today's robots are still so clumsy. When we solve the problem of movement, we'll see robots that can do almost any kind of work, from taking care of babies and the elderly to cooking, cleaning, teaching, and driving (already there). We only need to solve walking and grasping objects; perception is already there, but unfortunately there's much less research going on in that field. I don't see yet any robot capable of moving as well as a human, but I am certain we will see this new age of capable robots in our lifetime.

On the other hand, we can start building intelligence by observing how humans reason. We extract thought vectors from human generated text and then map the sequences of thoughts, learning how they fit together to form reasoning. This has already been tried but it is in the early phases. We are very close to computers that can think well enough to be worthy dialogue companions for us.


>unconstrained by a need to generate financial return

The incentive, not the constraint, provided by financial return is what drives innovation the most, aside from (but not mutually exclusive to) necessity.


In all seriousness... does "just, wow" communicate something different from "wow?"


I think it's an interesting language question too! But we detached this from https://news.ycombinator.com/item?id=10720212 and marked it off-topic.


What does detached mean? You just removed the comment? I'm certainly not complaining; I'd just like some clarification on the jargon. Thanks.


As far as I can tell, detaching a thread moves it from the parent comment to the parent post. Marking it as off-topic moves it to the bottom, just above downvoted comments.


Yes, given that statements on the internet do not convey tone or delivery (see: Poe's Law), any additional clues are helpful in understanding the author's full meaning. In this case, with the author's use of punctuation and "Side note:", I can practically hear them speaking, which is what we should aim for in good writing.

You're being downvoted, but I think it's an interesting point.


Could you explain the difference?



"just, wow" > "wow", denotes staggering astonishment. Or put more simply: minds were blown.


Hmm. I believe "Just, wow" means that there is nothing more to explain about the event in question worth expressing amazement about, as it is extremely self-evident.

Sometimes also used to express being at a loss for additional words.


It implies that you were about to exclaim something even more extreme than "wow", but you decided not to because even those terms wouldn't properly convey the amazement you feel so you said - just - wow.


"Just, wow" is people lying to your face. They always add more words after "just, wow".


Disappointing to see Infosys associated with this initiative.

EDIT: looks like the infosys brigade is downvoting me to hell.


I seem to have missed a story here. A quick Google search turned up a letter on Quora, https://www.quora.com/Is-working-in-Infosys-as-bad-as-this-l..., is that what you are referring to?


YC is lobbying to change the H-1B system in order to let startups get more H-1Bs. Infosys is blatantly abusing and cheating the H-1B system so badly that startups are getting penalized when sponsoring H-1B visas.

And now YC is getting in bed with infosys...


The relationships of large organizations can be surprisingly complex; consider Apple and Samsung. YC isn't large, of course, but Infosys is. The information content of the OpenAI funding announcement for immigration questions is probably zero. (No special knowledge behind this comment, just a general observation.)

Edit: Please don't break the HN guidelines by complaining about downvoting. Downvotes to your comment upthread are not because of any "Infosys brigade"; they're most likely because it combined oversimplification with negativity and because it points discussion toward a pre-existing controversy that is off topic here.

https://news.ycombinator.com/newsguidelines.html


Off topic: but I have always wondered why we have a threshold to hit before we can downvote comments - and why can we never downvote posts? Or is the karma threshold just really high for that function?


By posts do you mean stories, i.e. the kind of submission that appears on the front page? If so, HN doesn't have downvotes for those. The flagging mechanism is arguably something similar though.


Yes, I did... thanks!


On the plus side, Ted Cruz (who I despise terribly) is sponsoring some quality H1B reform legislation (which reduces the number of H1B visas available, and requires compensation be a minimum of $110K/year).

http://www.computerworld.com/article/3014365/it-careers/sen-...


Aw okay, I read about the Visa gaming in the NYTimes. Seems shitty. Thanks for the answer!



