He was half-wrong about the specific methods used (I say half-wrong because half of AlphaGo relies on relatively brute-force MCTS). I don't think this detracts from my point: it is hard for any researcher to predict the exact methods that will be used a decade from now.
You tried to use one expert as a refutation of my claim that there was a general consensus about how long it would take for computers to beat a human at Go.
A simple Google search will provide you with plenty of writing backing that sentiment up.
Ask any of your friends who took AI classes back then and they will confirm the same.
You're going to need citations for your unsubstantiated claims. I've brought forth a highly prominent expert opinion from 2007 that computer Go would be dominant by 2017.
And not media reports from the present day that just repeat the meme that almost everyone believed computer Go wouldn't happen for decades.
A highly prominent expert opinion, but the general consensus was that it would take a long time. Ask anyone who took an AI class back then.
Here is another expert:
"In May of 2014, Wired published a feature titled, “The Mystery of Go, the Ancient Game That Computers Still Can’t Win,” where computer scientist Rémi Coulom estimated we were a decade away from having a computer beat a professional Go player. (To his credit, he also said he didn’t like making predictions.)"
You can also find highly prominent expert opinions that AI is going to be dangerous, and experts who don't believe it. Most people don't believe it, and most people believe robots won't take jobs either.
And no, I don't need to provide you with anything, since you have only taken issue with my point that most people didn't believe it would happen, which is why you didn't link to anything saying that most believed it would happen.
The real irony is that Rémi said that in 2014. Sometime around then, if not before, deep learning was showing it could knock down more and more problems; it was pretty clear to anyone who was keeping up that if someone figured out how to combine deep learning with Rémi's work on Monte Carlo tree search, they ought to end up with a powerful Go bot, perhaps even a pro-beating one. What took me personally by surprise was that the development (which also required a pretty large army of GPUs, though I wondered if we might see specialized hardware like Deep Blue) was done mostly out of the public eye, without even tests against humans on Go servers, until suddenly it was announced that the bot had beaten a 3p.
I think it may be rare that you see consensus on those sorts of "it's imminent, someone just has to do the work" problems, because they require simultaneous knowledge of multiple developments, and knowledge doesn't always disseminate as fast as one group can just do the work. Now I'm remembering this related maxim: http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_...
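To make the "deep learning plus Rémi's Monte Carlo tree search" point concrete, here is a minimal sketch of that combination, roughly the PUCT-style search that AlphaGo-like programs use. It is not AlphaGo's actual code; policy_net, value_net, legal_moves, and play are hypothetical stand-ins the caller would supply (the first two representing trained networks):

    import math

    class Node:
        def __init__(self, prior):
            self.prior = prior      # P(s, a) suggested by the policy network
            self.visits = 0
            self.value_sum = 0.0
            self.children = {}      # move -> Node

        def q(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def puct_score(parent, child, c_puct=1.5):
        # Exploration bonus scales with the network prior and parent visit count.
        u = c_puct * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
        return child.q() + u

    def search(root_state, policy_net, value_net, legal_moves, play, n_sims=200):
        root = Node(prior=1.0)
        for _ in range(n_sims):
            node, state, path = root, root_state, [root]
            # 1. Selection: walk down the tree by PUCT score.
            while node.children:
                move, node = max(node.children.items(),
                                 key=lambda kv: puct_score(path[-1], kv[1]))
                state = play(state, move)
                path.append(node)
            # 2. Expansion: the policy net supplies priors over legal moves.
            for move, p in policy_net(state, legal_moves(state)):
                node.children[move] = Node(prior=p)
            # 3. Evaluation: the value net stands in for random rollouts.
            v = value_net(state)  # assumed to score the position for the root player
            # 4. Backup: propagate the evaluation along the selected path.
            #    (A full two-player version would flip the sign of v each ply.)
            for n in path:
                n.visits += 1
                n.value_sum += v
        # Play the most-visited move at the root.
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

The real systems add a lot on top (rollouts blended with the value net, parallel search, an enormous amount of training), but the skeleton is just a learned prior and evaluator bolted onto the tree search that already existed.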
I think Rémi's quote should be taken in its positive sense: given the already impressive rate of progress in computer Go before 2014, Go programs would reach professional strength in around 10 years. Then a gorilla like Google suddenly arrives, with its money, resources, and expertise in AI, and those 10 years shrink to 2.
There is irony in trying to refute my single expert opinion with another single expert opinion (via an article).
Reply to below: You're the one asserting that experts thought Go wouldn't be dominated by computers for a long time. The burden of proof lies on you. "Some experts thought it would happen by now, some didn't, there was no consensus" doesn't have quite the ring to it!
I have never said anything about experts. I have talked about the general consensus, which includes experts.
You have provided one example, ONE, of someone who believed it would happen.
I have provided one expert plus articles saying it wouldn't happen, plus you can Google and find plenty of articles that said we wouldn't get it for a long time.
You cannot find a single example of an article claiming that it was the general consensus that we would beat Go.
And so you are the one coming up short, not me. My claim is not controversial, nor have you shown that it is.
Point for ThomPete on this one. I can find many more sources citing experts pegging computer Go at a decade or more off, compared to those who thought we would have it by now.
However, points to argonaut for doing his best Dijkstra impersonation.
'I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras.' - Alan Kay
"Just 10 years ago self driving cars was something you joked about."
That changed on the second day of the 2005 DARPA Grand Challenge. Suddenly, there were lots of self-driving cars running around. The sudden change in the attitude of the reporters there was remarkable.
Are you aware that in Singapore, driverless cabs are already in service? And that many of the big players have plans for autonomous cars within a few years?
That's a link to an article about a company that is testing autonomous taxis. You could also link to Google's self-driving car tests. Neither is ready for sale to the general public.
AGI progress is unlikely to be limited by some exponential curve that we're just not far enough along on, though. Rather, it seems limited by some key insight no one has found yet. Sure, we'll be able to retrofit some exponential curve onto it after the fact, since when it appears it will change everything drastically. But this is in contrast to, say, the Human Genome Project, which started out with the expectation of a specific exponential curve to give an estimate for project completion. I agree with your more general point; however, it's not enough to dismiss the report.
People also predicted there would be fully functional, human-replacing kitchen assistants by the year 1960. Someone overestimating or underestimating things doesn't affect how slowly or quickly they actually progress. So adjusting future predictions based on the offset of past predictions just makes no sense.
>> We consistently overestimate progress in the short run and underestimate it in the long run.
The economic pressures on progress can't be overestimated either.
Remember in 2007 when everybody thumbed their noses at hybrid and electric vehicles in the US? Ford was still pumping out record numbers of their behemoth Excursion model.
Then the economy crashed, people suddenly needed fuel-efficient cars, and they all traded in their SUVs for what? Toyota Priuses, which had been an afterthought a few years prior. In the span of 18 months, Toyota couldn't keep them on the lot.
I can see one or more catastrophic disasters where there is a sudden need for AI to rescue the human race in some capacity. Think nuclear war, environmental disaster, biological catastrophe, etc.
No, I meant in nature. In all of the natural world. Physics through economics and everything in between, including technology. (Hint: nothing is exponential; it always levels off. Otherwise we would have been consumed by it.) I would be genuinely surprised and extremely curious to see any natural phenomenon that maintains exponential growth.
I guess compound interest could be considered indefinitely exponential, but you eventually reach a barrier in what's insured, and it is a relatively small exponent. Also, is it still savings if you never spend it? I wonder what the longest continuously held bank account that has never been touched is.
Anyway, that is a tangent and a somewhat artificial scenario. Can you name a naturally occurring scenario? I would accept technology if you could show that it isn't going to level out like every other natural phenomenon.
Bacterial growth is exponential until a limit is reached. You can find many examples. It doesn't really have anything to do with the article or the OP's post, though.
The 'S' curve of logistic growth looks exponential for a while, which is why the question arises. By contrast, no one mistakes logarithmic growth for exponential growth for very long.
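To put numbers on that, here is a quick sketch with made-up parameters (r, K, and x0 are purely illustrative): early on the logistic curve is nearly indistinguishable from the exponential, and then it flattens out at its ceiling.

    import math

    r = 0.5      # growth rate per step
    K = 1000.0   # carrying capacity (the limit the growth eventually hits)
    x0 = 1.0     # starting population

    for t in range(0, 21, 2):
        exponential = x0 * math.exp(r * t)
        logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
        print(f"t={t:2d}  exponential={exponential:12.1f}  logistic={logistic:7.1f}")

Through roughly t=8 the two columns track each other closely; by t=14 the exponential has already passed K, and by t=20 it is about 22 times K while the logistic value has leveled off just under the carrying capacity.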
http://assets.motherjones.com/media/2013/05/LakeMichigan-Fin...
Five years ago, it was thought it would take decades to beat a human at Go.
Just 10 years ago, self-driving cars were something you joked about.
We consistently overestimate progress in the short run and underestimate it in the long run.