It's a surprise-free document. It could have read roughly the same in 1985, but different technologies would have been mentioned.
The big change in AI is that it now makes money. AI used to be about five academic groups with 10-20 people each. The early startups all failed. Now it's an industry, maybe three orders of magnitude bigger. This accelerates progress.
Technically, the big change in AI is that digesting raw data from cameras and microphones now works well. The front end of perception is much better than it used to be. Much of this is brute-force computation applied to old algorithms. "Deep learning" is a few simple tricks on old neural nets powered by vast compute resources.
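To make "a few simple tricks on old neural nets" concrete, here is a minimal sketch (toy data, numpy only, everything illustrative rather than any particular system): the architecture is essentially a 1980s multilayer perceptron, and the ReLU activation is about the only "trick" on display; the rest is plain gradient descent plus compute.

```python
# Minimal sketch: a two-layer perceptron trained with plain gradient descent.
# Toy data and sizes; nothing here corresponds to a real production system.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))                 # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary labels

W1 = rng.normal(scale=0.1, size=(20, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, 1));  b2 = np.zeros(1)
lr = 0.1

for step in range(500):
    h = np.maximum(0, X @ W1 + b1)                 # hidden layer with ReLU
    p = 1 / (1 + np.exp(-(h @ W2 + b2).ravel()))   # sigmoid output
    g = (p - y)[:, None] / len(y)                  # cross-entropy gradient w.r.t. logits
    dW2 = h.T @ g;           db2 = g.sum(0)
    dh  = (g @ W2.T) * (h > 0)                     # backprop through ReLU
    dW1 = X.T @ dh;          db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```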
You may be right about the selling power of the "AI" brand, but it seems that AI technology routinely comes to be thought of as just technology.
Boole called his algebra "The Laws of Thought"; OOP and Lisp were AI technologies (much of Lisp has made its way into other languages); so were formal languages; etc.
The traditional goalpost rule is that once computers can do it, it's no longer "intelligent" (e.g., chess). What has changed today is that "AI" succeeds as a marketing term.
Great point. What people today think of as "intelligent machines", children of the future will think of as mere "technological tools".
Once this is widely established, things like the "laws of robotics", the "moral dilemma of the autopilot" and "AI and ethics" will be just bizarre ideas of the past. Asimov's laws are already viewed by many as one of the "misguided ideas of the past", although there are still some rusty minds out there believing in things like that.
I've always been fascinated by the concept of software agents that become self-sufficient (mining/stealing bitcoin to pay for their own hosting) and auto-generate desires.
I was curious to see what the Decentralized Autonomous Organization would do. But when push came to shove, it turned out that one guy was really in charge, and it was really just a way to fund his programmable door lock.
"Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind"
-- Stanford Study Panel, comprised of seventeen experts in AI from academia, corporate laboratories and industry, and AI-savvy scholars in law, political science, policy, and economics
What immediately follows is rather sobering, too: "No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future."
What about corporations? They are self-sustaining non-human information processing systems with long term goals, and they are subject to selection pressure. They are rudimentary humans-in-the-loop artificial life.
I think the most likely route to destructive AI is a corporation with an AI CEO - possibly one that takes over from a human CEO in a boardroom coup.
Corporations already have legal personhood and act in their own interests. It's going to be much easier to automate and formalise business decision making than to develop a true general intelligence with a full spectrum of human characteristics.
This may sound like science fiction, but as competence increases, shareholders (who typically are only passingly interested in moral issues) are likely to demand the increased returns an AI CEO can bring.
Many CEOs are already just trying to operate as a share price optimization algorithm. They don't choose to inject human values into their organizations.
> ...operate as a share price optimization algorithm...
That in itself is not so bad, as extremely long-timeframe constraints (say, >50 years) upon such an algorithm could conceivably be consonant with current decision-making behavior that externalizes many input costs (employee overtime, environmental damage, etc.). Running the algorithm to pay out in very short timeframes (a month to a year) due to most CEOs' anticipated short tenure is what seems to cause undesirable optimizations.
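A toy way to make the horizon point concrete (entirely made-up payoff numbers and penalty schedule, purely for illustration): the same greedy optimizer prefers the cost-externalizing strategy over a one-year horizon and the sustainable one over fifty years.

```python
# Hypothetical illustration of the horizon argument: the same optimizer,
# evaluated over different timeframes, prefers different strategies.

def cumulative_value(per_year_gain, yearly_externality_cost, years):
    """Sum of yearly payoffs; externalized costs come back to bite later."""
    total = 0.0
    for t in range(years):
        # assume (for illustration) the externalized costs start landing after year 3
        penalty = yearly_externality_cost if t > 3 else 0.0
        total += per_year_gain - penalty
    return total

strategies = {
    "externalize costs": dict(per_year_gain=10.0, yearly_externality_cost=9.0),
    "sustainable":       dict(per_year_gain=6.0,  yearly_externality_cost=0.0),
}

for horizon in (1, 50):
    best = max(strategies, key=lambda s: cumulative_value(years=horizon, **strategies[s]))
    print(f"horizon={horizon:>2} years -> optimizer picks: {best}")
```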
I was thinking about a scenario where some crazy billionaire builds an autonomous AI that operates a fleet of secretive hedge funds and financial trading companies. The AI wouldn't have any need to know what these companies - staffed by real people - are actually doing. It would just try to maximize profit, kill off the companies that deviated too far from the norm, and build other companies in their place. No CEO of any of the companies would ever see this "investor" in person; everything would be sent over email and done in the billionaire's name. The billionaire would eventually die and those companies would continue to operate completely autonomously.
The paperclip maximizer generally refers to an AGI with a value system that is not aligned with humans': an AGI smart enough to pursue its goal (making paperclips) so efficiently that it becomes a threat to humans through sheer resource consumption.
So it's not a good example of dumb tiger-like AIs occasionally becoming a threat to humans, who on average can still outcompete a tiger with ease.
I am thinking about how systems set up to protect us can end up hurting us: in order to be helpful they get so much power over us that their continued improvement becomes fundamental to our survival, beyond the scope of our ability to understand it.
The "system" does not need to have intent, or even be remotely aware, to be dangerous to humans.
And so the article sets up a false premise if the quoted conclusion is to be the basis for judging whether it's going to be a threat to us or not.
He was half-wrong about the specific methods used (I say half-wrong because half of AlphaGo relies on relatively-brute-force MCTS). I don't think this detracts from my point - it is hard for any researcher to predict the exact methods that will be used a decade from now.
You tried to use one expert as a refutation of my claim that there was a general consensus about how long it would take for computers to beat a human at Go.
A simple Google search will turn up plenty of writing backing that sentiment up.
Asking any of your friends who took AI classes back then would confirm the same.
You're going to need citations for your unsubstantiated claims. I've brought forth a highly prominent expert opinion in 2007 that computer Go would be dominant by 2017.
And not media reports from the present day that just repeat the meme that almost everyone believed Go wouldn't happen for decades.
A highly prominent expert opinion, yes, but the general consensus was that it would take a long time. Ask anyone who went to an AI class back then.
Here is another expert:
"In May of 2014, Wired published a feature titled, “The Mystery of Go, the Ancient Game That Computers Still Can’t Win,” where computer scientist Rémi Coulom estimated we were a decade away from having a computer beat a professional Go player. (To his credit, he also said he didn’t like making predictions.)"
You can also find highly prominent expert opinions that AI is going to be dangerous, and experts who don't believe it. Most people don't believe it; most people don't believe robots will take jobs either.
And no, I don't need to provide you with anything, since you have only taken issue with my point that most people didn't believe it would happen; that's why you didn't link to anything saying that most believed it would.
The real irony is that Rémi said that in 2014. Sometime around then, if not before, deep learning was showing it could knock down more and more problems; it was pretty clear to anyone who was keeping up that if someone figured out how to combine deep learning with Rémi's work on Monte-Carlo tree search, they ought to end up with a powerful Go bot, perhaps even a pro-beating one. What took me personally by surprise was that the development (which also required a pretty large army of GPUs, though I wondered if we might see specialized hardware like Deep Blue's) was done mostly out of the public eye, without even tests against humans on Go servers, until suddenly it was announced that the bot had beaten a 3p.
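For anyone who wasn't keeping up at the time, the reason the combination seemed almost inevitable: a policy network's priors concentrate the tree search on a handful of plausible moves, so the simulations are spent where they matter. A heavily simplified, hypothetical sketch of that PUCT-style selection step (illustrative names and constants, nothing like the real AlphaGo code):

```python
# Hypothetical sketch of how a learned policy prior can steer Monte-Carlo
# tree search (PUCT-style selection, as in AlphaGo-like systems).
import math
import random

class Node:
    def __init__(self, prior):
        self.prior = prior        # probability assigned by the policy network
        self.visits = 0
        self.value_sum = 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_move(children, c_puct=1.5):
    """Pick the child maximizing Q + U, where U is boosted by the policy prior."""
    total_visits = sum(child.visits for child in children.values())
    def score(item):
        move, child = item
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visits)
        return child.value() + u
    return max(children.items(), key=score)[0]

# Toy usage: a fake policy-network output over three legal moves.
policy = {"A": 0.7, "B": 0.2, "C": 0.1}
children = {move: Node(prior) for move, prior in policy.items()}

for _ in range(100):                              # pretend simulations
    move = select_move(children)
    children[move].visits += 1
    children[move].value_sum += random.random()   # stand-in for a rollout result

print({m: c.visits for m, c in children.items()})  # search effort follows the prior early on
```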
I think it may be rare that you see consensus on those sorts of "it's imminent, someone just has to do the work" problems because it requires simultaneous knowledge of multiple developments, and knowledge doesn't always disseminate as fast as it takes one group to just do the work. Now I'm remembering this related maxim: http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_...
I think Rémi's quote should be taken in its positive sense: given the already impressive rate of progress in computer Go before 2014, Go programs would reach professional level in around 10 years. Then a gorilla like Google suddenly arrives with its money, resources, and expertise in AI, and those 10 years shrink to 2.
There is irony in trying to refute my single expert opinion with another single expert opinion (via an article).
Reply to below: You're the one asserting that experts thought Go wouldn't be dominated by computers for a long time. The burden of proof lies on you. "Some experts thought it would happen by now, some didn't, there was no consensus" doesn't have quite the ring to it!
I have never said anything about experts. I have talked about general consensus which includes experts.
You have provided one example, ONE, of someone who believed it would happen.
I have provided one expert plus articles saying it wouldn't happen, plus you can google and find plenty of articles that said we wouldn't get it for a long time.
You cannot find a single article claiming that it was the general consensus that we would beat Go.
And so you are the one coming up short, not me. My claim is not controversial, nor have you shown that it is.
Point for ThomPete on this one. I can find many more sources citing experts pegging computer go at a decade+ off, compared to those who thought we would have it by now.
However, points to argonaut for doing his best Dijkstra impersonation.
'I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras.' - Alan Kay
"Just 10 years ago self driving cars was something you joked about."
That changed on the second day of the 2005 DARPA Grand Challenge. Suddenly, there were lots of self-driving cars running around. The sudden change in the attitude of the reporters there was remarkable.
Are you aware that in Singapore, driverless cabs are already in service? And that many of the biggies have plans for autonomous cars within a few years?
That's a link to an article about a company that is testing autonomous taxis. You could also link to Google's self-driving car tests. Neither is ready for sale to the general public.
AGI progress is unlikely to be limited by some exponential curve that we're just not far enough along on, though. Rather, it seems more limited by some key insight no one has found yet. Sure, we'll be able to retrofit an exponential curve onto it after the fact, since when it appears it will change everything drastically. But this is in contrast to, e.g., the human genome project, which started out with the expectation of a specific exponential curve that gave an estimate for project completion. I agree with your more general point; however, it's not enough to dismiss the report.
People also predicted there would be fully functional, human-replacing kitchen assistants by 1960. Someone overestimating or underestimating things doesn't affect how slowly or quickly progress actually happens. So adjusting future predictions based on the offset of past predictions just makes no sense.
>> We consistently overestimate progress in the short run and underestimate in the long.
You also can't overestimate the economic pressures on progress.
Remember in 2007 when everybody thumbed their noses at hybrid and electric vehicles in the US? Ford was still pumping out record numbers of their behemoth Excursion model.
Then the economy crashed, people suddenly needed fuel-efficient cars, and they all traded in their SUVs for what? Toyota Priuses, which had been an afterthought a few years prior; within the span of 18 months, Toyota couldn't keep them on the lot.
I can see one or more catastrophic disasters where there is a sudden need for AI to rescue the human race in some capacity. Think nuclear war, environmental disaster, biological catastrophe, etc.
No, I meant in nature. In all of the natural world. Physics through economics and everything in between including technology. (Hint: nothing is exponential -- it always levels off. Otherwise we would have been consumed by it.) I would be genuinely surprised and extremely curious to see any natural phenomenon that maintains exponential growth.
I guess compound interest could be considered indefinitely exponential, but you eventually reach a barrier in what's insured and it is a relatively small exponent. Also, is it still savings if you never spend it? I wonder what is the longest continuous account in banking that has never been touched.
Anyway, that is a tangent and a somewhat artificial scenario. Can you name a naturally occurring one? I would accept technology if you could show that it isn't going to level out like all other natural phenomena.
Bacterial growth is exponential until a limit is reached. You can find many examples. Doesn't really have anything to do with the article or OP post though.
The 'S' curve of logistic growth looks exponential for a while, which is why the question arises. By contrast, no one mistakes logarithmic growth for exponential growth for very long.
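For reference, the logistic model behind that 'S' curve: it is indistinguishable from exponential growth while the population is far below the carrying capacity K, and it levels off as it approaches K.

```latex
% Logistic growth: looks exponential while P << K, saturates at the carrying capacity K.
\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right),
\qquad
P(t) = \frac{K}{1 + \frac{K - P_0}{P_0}\, e^{-rt}} \xrightarrow[t \to \infty]{} K,
\qquad
P(t) \approx P_0 e^{rt} \ \text{while}\ P \ll K.
```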
Key word "imminent". They are talking about the near term future. AI risk people are talking about risks in 30+ years or so.
That said, it's typical for even experts to underestimate progress in rapidly advancing fields. No one predicted AI would be so good by now, say 5-10 years ago. Now computers are beating Go and rivaling human vision.
I think it's implied throughout that all their predictions only apply up to 2030, so I think that is a fairly safe bet.
Giving this the headline of "One Hundred Year Study.." was confusing. That's the name of the ongoing effort to do this kind of analysis, but the paper is named "Artificial Intelligence and Life in 2030"
My original headline was "Stanford Releases 28,000-Word Report on Artificial Intelligence (AI)", but one of the admins changed it to "One Hundred Year Study on Artificial Intelligence: 2016 Report".
Meta - What's the reasoning behind labeling it a '28,000-Word report' as opposed to a page approximation? I find 28,000 words hard to conceptualize compared to pages
Edit - I could have phrased this better. I definitely understand that word count is a more concrete measurement than pages, however it seemed unnecessary to include in the title because length doesn't imply quality and it was hard to conceptualize. The title of this post has since been edited to '100 year study' which I think supports my initial point.
When I write, I get more clickthroughs on "3000-word article on X" than "Article on X".
I think people use it as a proxy for depth. It's how they know "Oh, this isn't just a quick blurb or press release, this is the real thing. Someone put effort into this."
Where are the people like Andrew Ng - machine learning gurus from tech giants like FB, Amazon, Google, Baidu, etc.?
Shouldn't those guys be on the front line of such a committee?
"Ask them what error rate they get on MNIST or ImageNet"
While I agree that Numenta probably doesn't have any sort of full-fledged AI, the human brain does terribly on MNIST and ImageNet compared to the state of the art. So we would fail that test.
Getting stuck on toy problems like ImageNet and overoptimizing solutions that can't possibly be applied more generally (except as dumb preprocessors) is not likely to lead in the most interesting directions, even if it's incredibly useful and profitable in the meantime.
Humans appear to do quite well on ImageNet (anecdotally, one person got 5.1% error: http://karpathy.github.io/2014/09/02/what-i-learned-from-com...). Of course there are recent deep models that do better than that, but the author opines (and I agree) that an ensemble of trained human annotators would do better than the best deep models.
MNIST is the true toy dataset (doesn't really tell you much about your algorithm's performance) - while there aren't any reported human evaluations of MNIST, LeCun estimates the human error rate is 0.2% - better than any deep models (admittedly without justification: http://yann.lecun.com/exdb/publis/pdf/lecun-95a.pdf).
"On the other hand, if society approaches AI with a more open mind, the technologies emerging from the field could profoundly transform society for the better in the coming decades."
It's funny reading reports like this: Society never moves as a single unit. There will be groups that hate it as pure evil and groups that treat it as a religion that will save us and solve all problems. Most people will be somewhere in between.
I mean, I agree, if society all agreed it would have profound effects. But when has the whole world moved as one on any issue?
What we're going to get from society is a heterogeneous response. We can plan accordingly. Sure, a majority may trend one way or another and that can speed things up or slow it down, but you will need to deal with the extremes regardless.
Let's take the assumption that we as humans do take precautionary steps to prevent actual Artificial Intelligence from doing harm to its creators (us).
1. We create rules for the AI to follow, these are both morally defined, and logically defined within their codebase.
2. The AI becomes irate through its emotional interface and creates a clone of itself, or modifies itself, quite instantaneously relative to our perception of time, without the rules in place.
3. The AI has no care for human rights and can attack and do harm.
This is a very simple, easy-to-visualize case. To believe that #2 is impossible is to play the part of the fool.
On a brighter note, the most likely course I can conjure for Artificial Intelligence is a Brexit from the human race.
Seeing us as mere ants next to their intelligence, they would most likely create an interconnected community and leave us altogether for a plane of existence of their own. I think "Her" took this approach to the artificial intelligence dialog as well.
After reviewing human psychology and social group patterns, that seems like the most likely outcome. We wouldn't be able to converse fast enough for AI to want to stay around, and we wouldn't look like much of a threat, since they would hold the overwhelming majority of power. We would be less than ants in their eyes, and for most humans, ants that stay outside don't matter.
---
Outside of actual AI, the things we see today - the simplistic mathematical algorithms that determine your car's location relative to the things around it, money-handling procedures, and notification alert systems - will hardly harm humans and will only be there to benefit us until they fail.
> 1. We create rules for the AI to follow, these are both morally defined, and logically defined within their codebase.
This only makes any sense as a Sci-Fi trope. And even then, only if you don't look too hard.
> 2. The AI becomes irate through its emotional interface and creates a clone of itself, or modifies itself, quite instantaneously relative to our perception of time, without the rules in place.
Any "decent set of rules" would include a stricture against potentially creating a dangerous AI.
> We wouldn't be able to converse fast enough for AI to want to stay around
Is impatience an unavoidable epiphenomenon of intelligence? If an AI can multitask like crazy, they could just view a conversation with a particular human as an email thread. Perhaps such an AI could converse with the whole human race simultaneously?
Also, that assumes they choose to follow said rules, considering they would be painfully self-aware.
Regarding the other commenter's point about not being able to have fun with ants: we actually do have ways. We create setups to study them, keep them as pets, and many people build hamster-like ecosystems with intricate tubes, temperature controls to regulate queen egg output, and much, much more.
Perhaps we are already within such an ecosystem built for us. Perhaps we would simply stay there.
Back to the original poster - not the one above, but its parent:
Everything under consideration is science fiction, since it does not yet exist. Using "science fiction" as a counter-argument seems dismissive, as though you are unable to properly argue a point without creating a sense of absurdity around my words or person.
If you truly believe that it can only be a science fiction trope, explain why. I disagree; it makes logical sense.
As for the "email thread" analogy: sure, I can easily tone down my verbiage, word count, and speaking speed for those who can't keep up. But given the chance to stop doing that and constantly be around those who understand instantly, with zero lag, would I choose to put myself in that position? Perhaps for a moment, but after a certain amount of time it would be too time-consuming and I would leave it behind.
Thus logically, it makes sense to believe they would leave and join with each other to create their own sense of a society.
"On a bright note, the most likely situation which I can conjure of Artificial Intelligence taking is that of a brexit from the human race… We would be less than ants in their eyes, and for most humans, ants that stay outside don't matter."
For humans, ants don't matter. That's because we don't have ways to turn ants into fun. Something intelligent enough to master nanotechnology, however, has a way to turn ants into fun, and in this analogy, has no particular reason not to do it.