AlphaGo beats the world champion Lee Sedol in first of five matches (twitter.com/mustafasuleymn)
1085 points by atupem on March 9, 2016 | hide | past | favorite | 573 comments



I was at the 2003 match of Garry Kasparov vs Deep Junior -- the strongest chess player of all time vs what was at that point the strongest chess playing computer in history. Kasparov drew that match, but it was clear it was the last stand of homo sapiens in the man vs machine chess battle. Back then, people took solace in the game of Go. Many boldly and confidently predicted we wouldn't see a computer beat the Go world champion in our lifetimes.

Tonight, that happened. Google's DeepMind AlphaGo defeated the world Go champion Lee Sedol. An amazing testament to humanity's ability to continuously innovate at a continuously surprising pace. It's important to remember, this isn't really man vs machine, as we humans programmed the algorithms and built the computers they run on. It's really all just circuitous man vs man.

Excited for the next "impossible" things we'll see in our lifetimes.


> Many boldly and confidently predicted we wouldn't see a computer beat the Go world champion in our lifetimes.

Sadly, as I write this, my uncle and personal hero, who spent 17 years of his life working towards a Ph.D. on abstraction hierarchies for use in Go artificial intelligence, has been moved into hospice care. I'm just glad that in the few days that are left he has a chance to see this happen, even if it is not by the good old-fashioned approach he took.

[1] He recently started rewriting the continuation of this research in Go (golang); it's available on GitHub: https://github.com/Ken1JF/ah


Thanks for the link. I started reading his thesis. While AlphaGo is obviously exciting, it seems like your uncle's approach could help us better understand how humans play Go, which seems hugely valuable. I look forward to exploring his thesis further.


> This project is the sixth attempt to implement the model:

Finally, the sixth attempt is written in the right language! Now it will succeed for sure.


The 2003 machine used a brute-force approach.

AlphaGo's architecture is much closer to how humans think and learn.

I initially learned Go to have some chance at writing an AI for it. I then had some transformative experiences that coincided with my early-kyu learning of basic Go lessons. One of the big lessons in Go is learning how to let go of something. Taking solace in anything on the Go board is one of the blocks you work through as you develop as a Go player.

I had already known, about two years ago, that the plain Monte Carlo approach was scalable. If Moore's Law continued, it was only a matter of time before the Monte Carlo approach would start challenging the professional ranks -- it had already gotten to the point where you just needed to throw more hardware at it.

AlphaGo's architecture adds a different layer to it. The deep learning isn't quite as flexible as the human mind, but it can do something that humans can't: learn non-stop, 24/7, on one subject. We're seeing a different tipping point here, possibly the same kind of tipping point as when we witnessed the web browser back in the early '90s and the introduction of the smartphone in the mid '00s. This is way bigger (to use Go terminology) than what happened with chess.


This isn't about Moore's Law though. From the AlphaGo paper:

    > During the match against Fan Hui, AlphaGo evaluated thousands of times
    > fewer positions than Deep Blue did in its chess match against Kasparov;
    > compensating by selecting those positions more intelligently, using the
    > policy network, and evaluating them more precisely, using the value
    > network—an approach that is perhaps closer to how humans play.
    > Furthermore, while Deep Blue relied on a handcrafted evaluation
    > function, the neural networks of AlphaGo are trained directly from
    > gameplay purely through general-purpose supervised and reinforcement
    > learning methods.


My understanding is that it is much more expensive for AlphaGo to evaluate a position than it was for Deep Blue. I'm not certain, but I would be surprised if AlphaGo did not need significantly more computation than Deep Blue.

edit: some actual estimates. Deep Blue had 11.38 GFLOPS [1]. According to the paper in Nature, distributed AlphaGo used 1202 CPUs and 176 GPUs. A single modern GPU can do between 100 and 2000 double-precision GFLOPS [2]. So from the GPUs alone, AlphaGo had access to roughly three to four and a half orders of magnitude more computing power than Deep Blue did.

[1] https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)

[2] https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...
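
Spelling that estimate out (a rough sketch using only the figures quoted above; real GPU throughput varies a lot by model, so treat it as a ballpark):

    deep_blue_gflops = 11.38                      # Deep Blue, per [1]
    gpus = 176                                    # distributed AlphaGo, per the Nature paper
    gpu_gflops_low, gpu_gflops_high = 100, 2000   # double-precision range, per [2]

    low_ratio = gpus * gpu_gflops_low / deep_blue_gflops    # ~1.5e3 (~3 orders of magnitude)
    high_ratio = gpus * gpu_gflops_high / deep_blue_gflops  # ~3.1e4 (~4.5 orders of magnitude)
    print(f"GPUs alone: {low_ratio:,.0f}x to {high_ratio:,.0f}x Deep Blue's throughput")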


A brute-force approach doesn't work for Go. It doesn't work for humans (who rely on deep reading skills instead) and it doesn't work for computers. The Monte Carlo approach was the first one that allowed Go AIs to scale with the hardware. That was at least two years ago.

AlphaGo went way beyond that. It actually learned more like how a Go player does. It was able to examine and play a lot of games. That's why it was able to beat a 2p pro, and within less than half a year, challenge a 9p world-class player at least on even terms.

The big thing isn't that AlphaGo is able to play Go at that level at all, but that it learned a specific subject much faster than a human.


Strong agreement that it wasn't purely about computational power, and that there were significant software advances. I just want to make the point that hardware has advanced considerably as well.


This, 1000 times. Extrapolating Deep Blue's 11 GFLOPS supercomputer to today with Moore's law would give a 70 TFLOPS cluster. AlphaGo is using 1+ PFLOPS of compute (280 GPUs referenced for the competition in [0]). That's an insane amount of compute.
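
Roughly, that extrapolation (a quick sketch, taking Deep Blue's 1997 match as the starting point and one doubling every 18 months):

    deep_blue_gflops = 11.38
    years = 2016 - 1997                    # Deep Blue vs. Kasparov to AlphaGo vs. Lee Sedol
    doublings = years / 1.5                # Moore's law: ~1 doubling per 18 months
    extrapolated_gflops = deep_blue_gflops * 2 ** doublings
    print(f"~{extrapolated_gflops / 1000:.0f} TFLOPS")   # ~74 TFLOPS, vs. ~1 PFLOPS for AlphaGo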

While it's fun to hate on IBM, it's not really fair to say Deep Blue was throwing hardware at the problem but AlphaGo isn't. Based on the paper, AlphaGo performs much worse in terms of Elo rating on a smaller cluster.

[0] http://www.economist.com/news/science-and-technology/2169454...


I know that. You didn't read my comment very thoroughly.


And just to emphasize the big point here:

The AlphaGo that beat the 2p European champion five months ago was not as strong as the AlphaGo that beat Lee Sedol (9p). I don't think this was just the AlphaGo team throwing more hardware at it. I think they had been constantly running the self-training during the intervening months so that AlphaGo was improving itself.

If that is so, then the big thing here isn't that AlphaGo is the first AI to win an official match against the world's strongest current Go player. It's that within less than half a year, AlphaGo was able to learn and grow from challenging a 2p to challenging the world's strongest player. Think about that.


I think it's fair to say that in the future, people will look back and wonder how it was possible to live without having a good AI, similar to how we look back at cavemen and wonder how they could live without electricity. AI is really just a tool that we leverage, the same way we leveraged the wheel or electricity.


Yep. I was just talking with the founder of a startup I work with. His son was born in the past 5 months or so. The son is never going to live in a world that doesn't have deep learning. Like the kids who never knew what the world was like before the smartphone. Like the kids who never knew what the world was like before the web browser.

And AI is just one strand. There are several strands that are just as deeply transformative, all happening simultaneously.

I remember someone speaking about the shift between classical hard sci fi and more current sci-fi authors like Neal Stephenson or Peter Hamilton. The classical authors like Heinlein or Asimov might do world building where they just change one thing. What would the world be like if that one thing changed? After a certain point though, things were changing so fast that later authors didn't do that. There were too many things that changed at the same time.


Here's a video where teens discover Windows 95: https://www.youtube.com/watch?v=8ucCxtgN6sc It gives a visual analogy of what you're saying about the new generation and AI!


> The son is never going to live in a world that doesn't have deep learning.

Except if a big solar flare hits us ;}


Or a bunch of other things. Maybe our civilization collapses from peak oil or something.


At the moment, we seem to have too much oil.


Correct. It played like a top level human player, pretty evenly matched with Lee Sedol. AlphaGo from yesterday would have wiped the floor with AlphaGo from 6 months ago.

Various commentators mentioned how both players, human and synthetic, made a few mistakes. Even I caught a slow move made by the AI. So whether Lee Sedol was at the top of his performance, or not, is a bit of a debate. But the AI was clearly on the same level, whatever that means.

It was an intense fight throughout the game, with both players making bold moves and taking risks. Fantastic show.


The AI only cares about winning, not about winning by a huge margin.

The slow move might just mean that it was big enough and safer.


Is there any evidence that Sedol was anywhere close to winning?


That could be true, but according to some articles I was reading at the time (sorry no src), the nature of their engine is that you cannot determine just how strong it is by playing one game because it plays to "just win", and not to win by as much as possible (paraphrase). So maybe it was barely good enough to be 2p, maybe it was already much stronger.


6 months ago it was clearly stronger than Fan Hui 2p. You don't beat someone at that level 5-0 without being consistently stronger.

Fan Hui said the machine played extremely consistently 6 months ago. He said playing the computer was "like pushing against a wall" - just very strong, very consistent performance.


This is misleading. AlphaGo beat a 2p player five months ago. Now it has beaten a 9p player. That tells us nothing about its improvement in the intervening time. Given only this information, however unlikely, AlphaGo could actually have been stronger before.


AlphaGo lost a few of the informal matches.

Also, the people working on it flat out told the world that today's version of AlphaGo beats October's version literally all the time.


Replying to scarmig, No. Player A may consistently beat Player B who may consistently beat Player C who consistently beats Player A.

There are different strategies depending upon how much emphasis is placed upon early territorial gains as opposed to "influence", which is used for later territorial gains.

Similarly, playing "passive" moves that make territory without starting a "fight" versus aggressively contesting every piece of territory available.


Question: is the ability to win Go matches ordered?


It's more that there are different styles of winning, and a human will tend to specialize in such a style. But generally, someone who is consistently stronger will win over someone who is consistently weaker, regardless of style.


What I always think about with AI, and speaking to the "man's programming vs man" point higher up, is what we'll get when we can teach computers how to solve these problems so that, e.g., they could come up with the solution for, and implement, something like this on their own.


Is the Monte Carlo approach specific to Go in terms of A.I. challenges? Or is the Monte Carlo approach gaining traction in other A.I. problems as well?

I am tremendously unfamiliar with recent A.I. developments.


Monte Carlo Tree Search is a key technique in previous computer Go programs (and other games). The improvement here was using the deep learning network as the value function to evaluate nodes in the tree and the policy network to determine the search order.
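
As a rough illustration of how those two networks slot into the tree search (a toy sketch with stand-in networks and a made-up game-state class, not DeepMind's code):

    import math
    import random

    class DummyState:
        """Toy stand-in for a board position, just so the sketch can be run."""
        def __init__(self, depth=0):
            self.depth = depth
        def legal_moves(self):
            return [] if self.depth >= 3 else ["a", "b", "c"]
        def play(self, move):
            return DummyState(self.depth + 1)

    def policy_network(state):
        """Stand-in for the policy net: return {move: prior probability}."""
        moves = state.legal_moves()
        return {m: 1.0 / len(moves) for m in moves} if moves else {}

    def value_network(state):
        """Stand-in for the value net: estimated win probability of the position."""
        return random.random()

    class Node:
        def __init__(self, state, prior):
            self.state, self.prior = state, prior
            self.children = {}                 # move -> Node
            self.visits, self.value_sum = 0, 0.0
        def q(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.0):
        """Pick the child maximising value plus an exploration bonus; the policy
        prior is what steers the search toward a handful of plausible moves."""
        total = sum(ch.visits for ch in node.children.values())
        score = lambda ch: ch.q() + c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return max(node.children.values(), key=score)

    def simulate(root):
        """One simulation: descend by select_child, expand the leaf, and back up
        the value-network estimate instead of playing a random game to the end.
        (Sign handling for alternating players is omitted to keep the sketch short.)"""
        path, node = [root], root
        while node.children:
            node = select_child(node)
            path.append(node)
        for move, prior in policy_network(node.state).items():
            node.children[move] = Node(node.state.play(move), prior)
        leaf_value = value_network(node.state)
        for n in path:
            n.visits += 1
            n.value_sum += leaf_value

    root = Node(DummyState(), prior=1.0)
    for _ in range(200):
        simulate(root)
    print({m: ch.visits for m, ch in root.children.items()})  # visit counts ~ move quality

In the real system the two networks are deep convolutional nets trained on professional games and self-play; everything around them is ordinary tree-search bookkeeping.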


MC tree search isn't specific to Go, no. It's been used for other games, including imperfect-information ones like poker. I believe the main reasons for using MC search are that it does not require an evaluation function and that it acts as an anytime algorithm, so you can get a "good enough" answer within arbitrary time constraints.
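
Those two properties are easy to see in a bare-bones version (a sketch assuming a hypothetical game-state interface with legal_moves(), play(), is_terminal(), winner(), and side_to_move(); only the rules and final results are used, no evaluation function):

    import random
    import time

    def random_playout(state):
        """Play uniformly random legal moves to the end and return the winner.
        No evaluation function is needed -- just the rules and the final result."""
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        return state.winner()

    def best_move_anytime(state, seconds=1.0):
        """Anytime flavour: keep sampling playouts until the clock runs out, then
        return whichever candidate move currently has the best win rate."""
        me = state.side_to_move()
        wins = {m: 0 for m in state.legal_moves()}
        plays = {m: 0 for m in state.legal_moves()}
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            move = random.choice(list(wins))
            wins[move] += (random_playout(state.play(move)) == me)
            plays[move] += 1
        return max(wins, key=lambda m: wins[m] / max(plays[m], 1))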


> Many boldly and confidently predicted we wouldn't see a computer beat the Go world champion in our lifetimes.

Can anyone provide some written references to this effect? Last time I searched (extensively), I couldn't really find anyone saying this.


What kind of source do you want? It's a saying in the Go community; people (me included) believed that a bot couldn't beat a top human in our lifetimes, and some people had a more extreme view and thought it would never be possible.


Anything written. I'll be particularly happy with higher "quality" sources -- books, quotations in newspapers, etc. -- but honestly, I'm not that picky and will accept an anonymous comment on a random forum.


"Fotland, an early computer Go innovator, also worked as chief engineer of Hewlett Packard’s PA-RISC processor in the 70s, and tested the system with his Go program. “There’s some kind of mental leap that has to happen to get you past that block, and the programs ran into the same issue. The issue is being able to look at the whole board, not the just the local fights.”

Fotland and others tried to figure out how to modify their programs to integrate full-board searches. They met with some limited success, but by 2004, progress stalled again, and available options seemed exhausted. Increased processing power was moot. To run searches even one move deeper would require an impossibly fast machine. The most difficult game looked as if it couldn’t be won."

http://www.wired.com/2014/05/the-world-of-computer-go/

The article then goes on to discuss how Monte Carlo was the real breakthrough.


Thank you for the source. I believe this is a good written example of how conservative estimates were as recently as May of 2014.

Nonetheless, the quoted estimate in the article (mentioned twice, including in the second sentence) is "I think maybe ten years", ie 2024, which while inaccurate is probably "in our lifetimes".


There are a lot of quotes in that article though. And a number are in the vein of not being sure how they were going to get from where they were to better-than-human. Not my field in any case, but I think it's fair to say there was a lot of skepticism, even relatively recently, about the general path forward.


It seems unlikely that a computer will be programmed to drub a strong human player any time soon, Dr. Reiss said. ''But it's possible to make an interesting amount of progress, and the problem stays interesting,'' he said. ''I imagine it will be a juicy problem that people talk about for many decades to come.''[1]

Not quite what you are after, but it's pretty clear that he didn't think it would be beating the world champion in 14 years.

[1] NY Times, 2002, http://www.nytimes.com/2002/08/01/technology/in-an-ancient-g...


That was before companies like Google were building datacenter-size computers for fun.


"Experts had predicted it would take another decade for AI systems to beat professional Go players."

http://www.weforum.org/agenda/2016/03/have-we-hit-a-major-ar...


FWIW, before AlphaGo defeated Fan Hui 2-dan last year, everyone was saying that would not be possible before 2025 or so. That was the consensus.


People who try to predict the future in AI are blowing hot air more often than not.


Serious predictions were actually inferred from the rate of progress of existing bots (MCTS and others before it), which was something like 1 stone every two years (I don't recall the details, but it's easy to find out there). Top professionals were estimated to be something like 10 stones stronger than the best bot in 2008, so 2025 wouldn't sound too conservative.

"At the US Congress 2008, he [Myungwan Kim] also played a historic demonstration game against MoGo running on an 800 processor supercomputer. With a 9 stone handicap, MoGo won by 1.5 points. At the 2009 congress, he played another demonstration game against Many Faces of Go running on 32 processors. Giving 7 stones handicap, Kim won convincingly by resignation."

(Kim Myung Wan (born 1978) is a 9d Korean professional who has taken up residence in the Los Angeles area as of 2008)

More information here, with a nice graph:

http://senseis.xmp.net/?ComputerGo

http://i.imgur.com/RvQsf6v.png

You can see progress seemed to be slow around 2012.


Go seemed to progress in a lot more fits and starts than did chess (which, admittedly, probably had a lot more effort put into it). Prior to 2005 or so, Go programs were relatively weak and there were people working on them who were saying that they didn't really see a path forward.

Then people hit on using Monte Carlo which was the big step forward you show in your graphs. But then, that progress seemed to stall to the degree that various people were quoted in a Wired article a couple years ago about how they weren't sure what was going to happen.

Yet, here we are today.


> Many boldly and confidently predicted we wouldn't see a computer beat the Go world champion in our lifetimes.

To add to that, in Gödel, Escher, Bach (1979), Hofstadter predicted that no chess engine would ever beat a human grandmaster. It just goes to show how hard it is to predict what is, and what will remain, impossible for machines!


Man building tool vs. man. :)


Francis Collins? said something like: people underestimate the change in the short term and overestimate it in the long term (there go my flying DeLorean and the hotel on the Moon).


I think you got it backwards. Bill Gates said

"We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don't let yourself be lulled into inaction."


I meant what I said (and I didn't mean the Gates' quote).

There are no flying DeLoreans (Back to the Future). There are no hotels on the Moon (Mad Man). People overestimate long-term (20+ years) change.

It just shows once more that for any maxim there is a maxim with the opposite meaning.


I've never felt playing against what is supposed to be an entire room of machines (whether Deep Blue or Watson) to be fair. What would be fair is to limit the total mass of the computer to, say, 200 kg and leave it at that. What is effectively happening is that AlphaGo is running on a distributed system of many, many machines. Even Watson took an entire room. Google is paying a premium to push AlphaGo to win.


It's a proof-of-concept. What they've proved is that the same kind of intelligence required to play Go can be implemented with computer hardware. Before now, software couldn't beat a ranked human player at Go no matter how much computing power we threw at it. Now we can. Give it ten years and, between algorithmic optimizations and advances in processing, you'll have an unbeatable Go app on your phone.


Indeed. The first time a computer defeated a human in Chess it was this[1] size (1997). In 2009 it became possible to fit a grandmaster into this[2].

> Pocket Fritz 4 won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009. Pocket Fritz 4 searches fewer than 20,000 positions per second. This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second. Pocket Fritz 4 achieves a higher performance level than Deep Blue.[3]

The first steps are always the most inefficient. Make it work, make it right, make it fast.

[1]: https://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29... [2]: http://cdn.slashgear.com/wp-content/uploads/2008/10/htc_touc... [3]: https://en.wikipedia.org/wiki/Human%E2%80%93computer_chess_m...


GM Michael Stean lost to Cyber 176 (a mainframe 'supercomputer') in 1977 (at blitz). AFAIK this was the first time a computer defeated a GM; they began defeating IMs and experts some ten years before that. Kasparov himself lost to Fritz 2 at blitz as early as 1992.


"Under tournament conditions" is the condition everyone forgets. Go AIs were competing with ranked players given handicaps of varying degrees of absurdity.


> Give it ten years and, between algorithmic optimizations and advances in processing, you'll have an unbeatable Go app on your phone.

I find this overly optimistic because of the huge amount of power required to run the Go application. Remember, we're getting closer and closer to the theoretical lower limit in the size of silicon chips, which is around 4nm (that's about a dozen silicon atoms). That's a 3-4x improvement over the current state of the art.

The computer to run AlphaGo requires thousands of watts of power. A smartphone can do about one watt. A 3-4x increase in perf per watt isn't going to cut it.

If there will be a smartphone capable of beating the best human Go players, my guess is that it won't be based on general purpose silicon chips running on lithium ion batteries.

On the other hand, a desktop computer with a ~1000 watt power supply (ie. a gaming pc) might be able to do this in a matter of years or a few decades.


As solid as your argument may be, everyone saw arguments like this over and over. Every single time they were solid. For a time, it was the high frequency noise that would not be manageable (80s), then heat dissipation (90s), then limits on pipeline optimization (00s) and now size constraints on transistors. They were all hard barriers, deemed impossible and all were overcome.

I already know that your answer will be: "but this time it is a fundamental physics limit". Whatever. I'm jaded by previous doomsday predictions. We'll go clockless, or 3D, or tri-state or quantum. It'll be something that is fringe, treated as idiotic by current standards and an obvious choice in hindsight.


This looks like a good example of normalcy bias: https://en.wikipedia.org/wiki/Normalcy_bias

That previous constraints have been beaten in no way supports the argument that we will beat the laws of physics this time.


Our brains use roughly ~20 watts though, so we know the power constraints can be overcome -- if not in silicon, then maybe with the biological machines we'll use in the future.


The previous problems were solved because people were willing to spend hundreds of billions of dollars to solve them. And they are still spending that kind of money.

If the normalcy bias was in effect, they wouldn't be spending that money.


Actually, normalcy bias may in fact feed that kind of money spending until such time as reality hits. Assuming that people will automatically act more logically when large amounts of money are in play flies in the face of recent history. Just look at the recent housing loan crisis. Normalcy bias played a part there.

It's certainly possible that we'll break more barriers with clever engineering and new scientific breakthroughs. But that doesn't mean the Normalcy Bias isn't in play here.


Normalcy bias may have people spending lots of money on fabs assuming that the problems would be solved by the time the fabs are built.

However, I'm talking about hundreds of billions spent on R&D specifically to solve problems associated with chip manufacture. It took on the order of 25 years to solve each of the problems listed in the grandparent's post. Nobody would spend that kind of money or time on something that they think somebody else would solve.


People have probably spent billions of dollars to find a cure for cancer, but there isn't one that works for all cancers and most are still very bad news.

Say you spent a hundred billion dollars to extinguish the sun -- that wouldn't work. How much money you spend is irrelevant when you're up against what people call "hard physical limits".


Isn't our inability to cure all cancers a limitation of our knowledge more than a hard physical limit?

I've read several articles saying that different cancers are not exactly the same disease, but more like different diseases with the same symptom (uncontrolled tumor growth) and different etiology, even sometimes different from person to person, not just from tissue to tissue. This was said to be a reason that a general cancer cure is so elusive. But is it really thought of as impossible, not just elusive?

Maybe our inability to extinguish the sun is also a limitation of knowledge more than a hard physical limit!

Even if I'm right about this, your description of the situation would still be accurate in that there would be no way to simply throw more money at the problems and guarantee a solution; there would need to be qualitative breakthroughs which aren't guaranteed to happen at any particular level of expenditure. If people had spent multiples of the entire world GDP on a space program in the 1500s, they would still not have been able to get people to the moon, though not because it's physically impossible to do so in an absolute sense.


>> there would need to be qualitative breakthroughs which aren't guaranteed to happen at any particular level of expenditure

Yep, that's my point, thanks. Sorry, I'm not at my most eloquent today :)


And the cost of building a fab is increasing exponentially; eventually that trend has to come to an end.


It also looks like a fully general argument against anything new ever being accomplished.


There is a lot of room for improvement in the implementation. The way we are using deep neural networks at the moment is excellent for prototyping, but far from optimal. For instance, this paper http://arxiv.org/abs/1511.00363 shows that you can replace floating point operations by simple bitwise operations without losing too much precision in DNNs for image recognition. Together with a better (that is, compiled, instead of interpreted) representation of the inference step I would expect an order of magnitude improvement at a small loss in precision. More software tuning, especially the kind of low-level optimizations that most chess programs do, should yield another big improvement.
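
To make the bitwise-ops idea concrete, here's a toy sketch (my own illustration, not the paper's code) of the usual trick in binarized networks: constrain weights and activations to ±1, pack them into machine words, and a dot product collapses to XNOR plus popcount.

    def pack_bits(values):
        """Pack a list of +1/-1 values into an int, one bit per value (1 means +1)."""
        word = 0
        for i, v in enumerate(values):
            if v > 0:
                word |= 1 << i
        return word

    def binary_dot(word_a, word_b, n):
        """Dot product of two packed +/-1 vectors of length n via XNOR + popcount:
        each matching bit contributes +1, each differing bit contributes -1."""
        matches = bin(~(word_a ^ word_b) & ((1 << n) - 1)).count("1")
        return 2 * matches - n

    # [+1, -1, +1, +1] . [+1, +1, -1, +1] = 1 - 1 - 1 + 1 = 0
    a = pack_bits([+1, -1, +1, +1])
    b = pack_bits([+1, +1, -1, +1])
    print(binary_dot(a, b, 4))  # 0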

Finally, the hardware we are using to run these programs is insane. Sure the silicon is approaching some hard physical limits, but your processor spends most of that power trying to make old programs run fast...

My prediction is that with enough resources it is possible to write a Go AI which runs on general-purpose hardware that's manufactured on current process nodes and fits in your pocket.


I don't think you appreciate how much of this is good algorithms, and how little you need sheer computing power to get good results.

If you look at http://googleresearch.blogspot.com/2016/01/alphago-mastering... you'll find that Google's estimate of the strength difference between the full distributed system and their trained system on a single PC is around 4 professional dan. Let's suppose that squeezing it from a PC to a phone takes about the same off. Now a pocket phone is about 8 professional dan weaker than the full distributed system.

If their full trained system is now 9 dan, that means that they can likely squeeze it into a phone and get a 1 dan professional. So the computing power on a phone already allows us to play at the professional level!

You can get to an unbeatable device on a phone in 10 years, if self-training over a decade can create about as much improvement as it has in the last 6 months, AND phones in 10 years are about as capable as a PC is today. Those two trade off, so a bigger algorithmic improvement gets you there with a weaker device.

You consider this result "overly optimistic". I consider this estimate very conservative. If Google continues to train it, I wouldn't be surprised if there is a PC program in a year that can beat any Go player in the world.


You're right, it won't be a general-purpose computing device the way we conceive of it with the von Neumann architecture.

It'll likely be hardware that can be generalized to run any kind of deep net. The iPhone 5S is already capable of running some deep nets.

As a friend mentioned, it isn't the running of the net, it's the training that takes a lot more computational power (leaving aside data normalization). A handheld device that is not only capable of running a deep net, but also training one -- yeah, that will be the day.

There are non-von Neumann architectures that are capable of this. Someone had figured out how to build general-purpose CPUs on silicon made for memory. You can shrink a full rack of computers down into a single motherboard, and use less wattage while you are at it.

This really isn't about having a phone be able to beat a Go player. Go is a transformative game that, when learned, teaches the player how to think strategically. There is value for a human in learning Go, but this is no longer about being able to be the best player in the absolute sense. Go will undergo the same transformation that martial arts in China and Japan went through with the proliferation and use of guns in warfare.

Rather, what we're really talking about is a shot at having AIs do things that we never thought they could do -- handle ambiguity. What I think we will see is not the replacement of blue-collar workers by robots, but the replacement of white-collar workers by deep nets. Coupled with the problems in the US educational system (optimizing towards passing tests rather than critical thinking, handling ambiguity, and making decisions in the face of uncertainty), we're on the verge of some very interesting times.


You're making the same assumption people made about computing in the 50s, then the 70s, then the 90s, etc.


Please do elaborate. I try to base my assumptions (which I accept may turn out to be completely wrong) on physics and experience in working in semiconductors.

I just don't see a 1000x+ decrease in the power required happening in a decade or two without some revolutionary technology I can't even imagine. Is this what you meant? I'm sure most people couldn't imagine modern silicon chips in the 1950s vacuum tube era. But now we're getting close to the theoretical, well-understood minimums in silicon chips, so another revolutionary step is required if another giant leap like that is to be achieved.


    > physics and experience in working in semiconductors

    > without some revolutionary technology I can't even
    > imagine
I suspect (in the nicest possible way) that in a lineup of your imagination (on current assumptions) vs the combined ingenuity of the human race driven by the hidden hand, the latter wins.


> > Give it ten years and […]

> I find this overly optimistic

exDM69 never said it's not gonna happen, he just said that it's not going to happen in ten years, and I agree with him. Revolutions never occur that quickly. To achieve that we don't just need an improvement of the current state of the art, we need a massive change, and we don't even know what it's going to look like yet! This kind of revolution may occur one day, but not in ten years.

And it could even never happen, remember that we don't have flying cars yet ;)


The thing is, though, we could already be 10+ years along the path to that next revolution; it won't start being talked about until it's basically here.


It seems to me that the people who say "it won't happen" do tend to have a much better reason to say it won't happen (or rather that it _probably_ won't happen) than the people who insist the next big revolution is just round the corner just because the last big revolution did happen.

The optimistic position is a bit like saying: "I've lived 113 years, I'm not going to die now!". It's entirely possible for a trend to reverse itself. If machine learning has taught us anything, it's that background knowledge (in this case, of processor technology) gives you much better results than just guessing based on what happened in the past.


Here's some possibilities:

Stacked 3D chips (HBM, etc), Heterogenous computing (OpenCL, Vulkan), Optical computing, Memristors, Graphene-based microchips, Superconductors, Spintronics, Quantum computers, Genetic computers (self-reconfigurable)


Heterogeneous computing is already used in AlphaGo (and your smartphone). 3D chips will come to mainstream devices in a few years, but will give "only" a modest performance boost, say 2x or so.

The rest of the technologies you mention have great potential but will they be available in a smartphone in one decade? I don't think so.


You might ultimately only need some specialized "neural processing instruction set" for either the GPU cores or for the CPU cores. Or at least, I don't see any obvious obstacles to that.


I feel the same way about chips reaching their physical limits. But I keep waiting for a new way to use them. We used to just churn out MHz and that was the metric. Then we got hyper-threading, multi-cores, GPUs and other specific processors, and new ways of programming to go with it all. I imagine we'll see the same. Just like the brain has different areas of processing, I'm hoping we'll see the same in silicon chips. Just like how we offload work from the general-purpose CPU to the more efficient purpose-built GPU or sound card, etc. Not saying every computer is going to have a Go chip in it, but maybe someday we'll have machine learning processors or who knows what. But yeah, the advancements will be new designs and new ways of processing instead of more power.


Sure. But so far, we've found that revolutionary step every time we've hit these sorts of walls, and if I was a betting man I'd wager we'll do the same again.


Right, but just as a contrast: technological progress has been at an all-time high since the beginning of the industrial revolution.

It might as well slow down again and we have to remember that most humans in history saw little to no advances in technology over their lifetime.

I'm excited for the possibilities modern science opens up but I also think we might reach a point where fundamental progress stalls for a century or two.


I guess free worldwide information transfer (aka the Internet) just opened this era and we are not close to seeing any kind of stalling (IMHO).


Amongst other things, you're assuming hardware is where the speed will come from. But it's as likely to come from better software.


How many watts does Lee Sedol's brain require?


About 25.

(2000 kilocalories / day -> ~100W; the brain uses about a quarter of your calories.)


A Go app likely wouldn't rely on the native processing power of the smartphone. An AlphaGo app could be created today for a smartphone. The bottleneck isn't the phone, it's the cost of the cloud computing resources behind it. Perhaps a combination of Moore's law and economies of scale will make it affordable sooner than we think. The Xbox One, for example, already subcontracts difficult problems out to Azure.


The unbeatable Go app on your phone doesn't have to do the processing locally.


Yes, but that's just a silly argument and definitely not what GP meant. You can go and play a Go bot on KGS network with your smartphone today.


No, they haven't shown that the same kind of intelligence required to play Go can be implemented in computer software. The methods AlphaGo uses are not the same as the intelligence a human uses at all. What they have done is prove that an implementation of computer Go in software is capable of beating a human player, not that they have implemented the same kind of intelligence as the human player.


"What they've proved is that the same kind of intelligence required to play Go can be implemented with computer hardware"

Not necessarily the same kind, and, if I had to make the call, I would say they aren't of the same kind.


> What they've proved is that the same kind of intelligence required to play Go can be implemented with computer hardware. Before now, software couldn't beat a ranked human player at Go no matter how much computing power we threw at it.

I don't think that's quite true as a description of what we knew about computer Go previously, though it depends on what precisely you mean. Recent systems (meaning the past 10 years, post the resurgence of MCTS) appear to scale to essentially arbitrarily good play as you throw more computing power at them. Play strength scales roughly with the log of computing power, at least as far as anyone tested them (maybe it plateaus at some point, but if so, that hasn't been demonstrated).

So we've had systems that can in principle play to any arbitrary strength, if you can throw enough computing power at them. Though you might legitimately argue: by "in principle" do you mean some truly absurd amount, like more computing power than could conceivably fit in the universe? The answer to that is also no; scaling trends have been such that people expected computer Go to beat humans anywhere from, well, around now [1], to 5 to 10 years from now [2].

The two achievements of the team here, at least as I see them, are: 1) they managed to actually throw orders of magnitude more computing power at it than other recent systems have used, in part by making use of GPUs, which the other strong computer-Go systems don't use (the AlphaGo cluster as reported in the Nature paper uses 1202 CPUs and 176 GPUs), and 2) improved the scaling curve by algorithmic improvements over vanilla MCTS (the main subject of their Nature paper). Those are important achievements, but I think not philosophical ones, in the sense of figuring out how to solve something that we previously didn't know how to solve even given arbitrary computing power.

While I don't agree with everything in it, I also found this recent blog post / paper on the subject interesting: http://www.milesbrundage.com/blog-posts/alphago-and-ai-progr...

[1] A 2007 survey article suggested that mastering Go within 10 years was probably feasible; not certain, but something that the author wouldn't bet against. I think that was at least a somewhat widely held view as of 2007. http://spectrum.ieee.org/computing/software/cracking-go

[2] A 2012 interview suggested that mastering Go would need a mixture of inevitable scaling improvements plus probably one significant new algorithmic idea, also a reasonably widely held view as of 2012. https://gogameguru.com/computer-go-demystified-interview-mar...


"Recent systems (meaning the past 10 years, post the resurgence of MCTS) appear to scale to essentially arbitrarily good play as you throw more computing power at them. Play strength scales roughly with the log of computing power, at least as far as anyone tested them (maybe it plateaus at some point, but if so, that hasn't been demonstrated)."

This is exactly the opposite of my sense based on following the computer go mailing list (which featured almost all the top program designers prior to Google/Facebook entering the race). They said that scaling was quite bad past a certain point. The programs had serious blindspots when dealing with capturing races and kos[1] that you couldn't overcome with more power.

Also, DNNs were novel for Go--Google wasn't the first one to use them, but no one was talking about them until sometime in 2014-2015.

[1] Not the kind of weaknesses that can be mechanically exploited by a weak player, but the kind of weaknesses that prevented them from reaching professional level.


> Play strength scales roughly with the log of computing power

That means that the problem is exponentially hard. EXPTIME, actually. You couldn't possibly scale it much.


> Play strength scales roughly with the log of computing power

To be fair, a lot of the progress in recent years has been due to taking a different approach to solving the problem, and not just due to pure computing power. Due to the way go works, you can't do what we do with chess and try all combinations, no matter how powerful of a computer you have. Using deep learning, we have recently helped computers develop what you might call intuition -- they're now much better at figuring out when they should stop going deeper into the tree (of all possible combinations).


There've definitely been algorithmic improvements, but from what I've read so far, the change in search algorithms, from traditional minimax search to MCTS, has been the biggest improvement, more than deep learning.


   Play strength scales roughly with the log 
   of computing power
The rumor I have heard is that the new Deep Mind learning algorithm really improves on this and scales linearly with computing power.


The game itself, however, scales exponentially, and there's nothing to do about that, so if you enlarge the board, no computer... and no human may be able to play it well.

The achievement was a leap towards the human level of play (and quite possibly over it). There might be additional leaps, which will take AIs WAY beyond humans, but none of those will scale linearly in the end. (And yeah, I guess you didn't want to say that either.)


Branch and bound my friend, branch and bound. If you can build an awesome bounding function, even exponentially large spaces can be manageable.


Then you can say that, in 10 years, if we indeed have reached that point. Otherwise it's just an empty prediction, and his perfectly valid point stands.


The real achievement is in the algorithm. To make an analogy, the accomplishment of putting a man on the moon required that we understand enough to make a rocket. We could have put hundreds of car engines together but that wouldn't ever have gotten us to the moon.


This.

AlphaGo utilizes the "Monte Carlo tree search" as its base algorithm[1]. The algorithm has been used for ten years in Go AIs, and when it was introduced, it made a huge impact. The Go bots got stronger overnight, basically.

What novel thing AlphaGo did, was a similar jump in algorithmic goodness. It introduced two neural networks for

1) predicting good moves at the present situation

2) evaluating the "value" of given board situation

Especially 2) has been hard to do in Go, without playing the game 'till the end.

This has a huge impact on the efficiency of the basic tree search algorithm. 1) narrows down the search width by eliminating obviously bad choices, and 2) makes the depth at which the evaluation can be done shallower.

So I think it's not just the processing power. It's a true algorithmic jump made possible by the recent advances in machine learning.

[1] http://senseis.xmp.net/?MonteCarlo
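
A rough back-of-the-envelope for why trimming width and depth matters so much (purely illustrative numbers, not AlphaGo's actual figures):

    import math

    # Go has roughly 250 legal moves per position and games run roughly 150 moves.
    # Suppose the policy network keeps ~20 candidate moves per position and the
    # value network lets the search stop ~8 plies deep instead of reading every
    # line to the end of the game.
    full_tree_log10 = 150 * math.log10(250)   # ~360: positions in the naive game tree
    trimmed_log10 = 8 * math.log10(20)        # ~10:  positions in the trimmed tree

    print(f"naive tree:   ~10^{full_tree_log10:.0f} positions")
    print(f"trimmed tree: ~10^{trimmed_log10:.0f} positions")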


> Especially 2) has been hard to do in Go, without playing the game 'till the end.

This is what struck me as especially interesting, as a non-player watching the commentary. The commentators, a 9-dan pro and the editor of a Go publication, were having real problems figuring out what the score was, or who was ahead. When Lee resigned the game, it came as a total surprise to both of them.

Just keeping score in Go appears to be harder than a lot of other games.


Score in Go is captured stones plus surrounded empty territory at the end of the game. Captures are well defined when they happen, but territory is not defined until the end.

The incentive structure of the game leads to moves that firmly define territory usually being weaker, so the better the players, the more they end up playing games where territory is even harder to evaluate.
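
To make "territory is not defined until the end" concrete, here is a minimal scoring sketch (a toy illustration, not a rules-complete scorer; it counts only surrounded empty territory, not captures): flood-fill each empty region and credit it to a colour only if the region borders stones of that colour alone.

    def score_territory(board):
        """board: list of strings, '.' = empty, 'b' = black stone, 'w' = white stone.
        Returns (black_territory, white_territory). Empty regions bordered by both
        colours (or by nothing) count for neither side."""
        rows, cols = len(board), len(board[0])
        seen = set()
        black = white = 0
        for r in range(rows):
            for c in range(cols):
                if board[r][c] != "." or (r, c) in seen:
                    continue
                # Flood-fill one empty region, remembering which colours border it.
                region_size, borders, stack = 0, set(), [(r, c)]
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region_size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if not (0 <= ny < rows and 0 <= nx < cols):
                            continue
                        if board[ny][nx] == "." and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                        elif board[ny][nx] != ".":
                            borders.add(board[ny][nx])
                if borders == {"b"}:
                    black += region_size
                elif borders == {"w"}:
                    white += region_size
        return black, white

    # Toy 5x5 "endgame": the left side is black's territory, the right side white's.
    toy = ["..bw.",
           "..bw.",
           "..bw.",
           "..bw.",
           "..bw."]
    print(score_territory(toy))  # (10, 5)

Midgame, most empty regions border both colours, so a naive count like this says almost nothing -- which is part of why even strong commentators struggle to call the score.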


Neural networks have been around for a long time. They basically took two existing concepts of AI and threw some money at them.


That's true, but the ways to train them and ways to apply them to real world problems have really improved.

It's obvious by just reading Hacker News.


> but that wouldn't ever have gotten us to the moon

Fitting analogy. There was a line in the film Blood & Donuts about the moon being ruined when they landed on it, which I couldn't really feel until today.


A top smartphone chess program can beat pretty much all but the best few players in the world. Do you think it's fair to pit a 150 gram device against a 70 kg human?


A chess program on your smartphone will obliterate even the world champion - http://en.chessbase.com/post/komodo-8-the-smartphone-vs-desk...


Although a fairer comparison would be against 1 kg of brain. Comparing against a human would need to include all the infrastructure for the device such as energy production or the manufacturing equipment required.

But nevertheless, fitting so much computing power in such a small device is a great achievement.


Not really: to operate, the brain also needs all the other systems, just as a CPU needs all the other parts to function. And including the manufacturing equipment is just as flawed, in the sense that you would then have to include his mother (as biological manufacturing).


The human can run off resources that are available "in the wild", self-repair, and self-replicate at better than 1:1 (that is, a group of n humans produce >n offspring), whereas the smartphone needs a huge amount of infrastructure to repair it and produce new ones.

I don't think any mass comparison is really meaningful, mind, but it's not that simple.


Advanced chess players require a society which produces enough surplus to afford enough leisure to allow someone to not only produce a brain not damaged by starvation, but to allow them to use that brain to learn chess at a high level. It took a very long time for humans to get to that level, even though chess is a fairly old game.

My point is, humans "in the wild" likely didn't have any equivalent to chess, because they didn't have sufficient leisure time. Chess is a product of an environment that's just as "artificial" as the one which produced cell phones.


Games, including complex ones, go back a long way in human history. I've seen various claims about how much free time people have in primitive societies and I don't know enough to really know which are correct. The modern style of chess play relies on having openings books and computer assistance, but that's less true of go, which AIUI is learned largely through practice and a cultivation of taste and instinct (and the pieces can just be a set of stones and a grid scratched in the dirt).


Games may have existed, but the relative skill level of the players was likely a lot lower when people weren't spending as much time mastering the game, spreading and consuming strategy knowledge, and constantly holding events to compete and refine the best players.

I think the analogy of the players requiring all of this is stretched a little thin, but I also think the original attack on the Go AI based on its mass is off base as well.


But you have to admit that it's easier to learn fuseki when you don't have to worry about being eaten by a tiger.


remember that chess is a war game and that war is most often fought over resources and territory, so they had their "chess" alright.


The story of Chess, according to Iranian mythological sources (recounted in Shahnameh) is that it was presented to the Iranian Court by the emissaries of the Indian Court, as a 'semantic puzzle' invented by Indian sages. (These games, it should be noted, were pedagogical in nature and used as symbolic means of training monarchs by the intellectual elites.)

The response of the Iranian sages was the invention of Backgammon, to highlight the role of Providence in human affairs.

[p.s. not all Iranians are willing to cede Chess to the sister civilization of India: http://www.cais-soas.com/CAIS/Sport/chess.htm] ;)


The origin is unclear. The fact that the war theme prevails now suggests to me that it might as well have been there in the beginning. Actually, it shows at least that those semantics are relevant to war, and to live, so what I was saying stands.


Of course it involves war, but note how they teach the young prince that it is (a) better to let the Vizier (your Queen) do all the heavy lifting, and (b) perfectly honorable to hide behind fortifications in a castle.


* to life


Self-repair and self-replication come at a very high cost, in the sense that they require food, water and oxygen, while machines only need electricity. And the replication is actually incomplete, in the sense that it starts in a very small state where it needs those three resources to become a complete human (adult), and dies if not taken care of by a third party in that early state (the parents).

Plus, not far in the future we will be able to connect a smartphone to a 3D circuit printer and print a new one, to achieve 'self-replication'.


Today a tiny $300 desktop computer can beat any human at chess. It only took a few years after the Deep Blue vs Kasparov game.


Not only a desktop computer. A two year old smartphone would do just fine.


Seems like a pretty arbitrary limitation. 70 years ago Colossus filled an entire room, now it can be emulated on a Raspberry Pi. The really groundbreaking part is the algorithm.


Has Google talked about the amount of computing resources they're throwing at this match? I'd be very interested to know.


1202 CPUs and 176 GPUs apparently

edit: according to the livestream


1202 CPUs and 176 GPUs is the figure mentioned in the Nature paper. But it's important to understand that this is the computer used to train the networks used by the algorithm. It took about 30+ days worth of wallclock to train it. That's about 110 megawatt-hours (MWh) worth of energy required!

During the play, the computational requirements are vastly less (but I don't know the figures). It's still probably more than is feasible to put in a smartphone in the near future. Assuming we get 3x improvement in perf per watt from going to ~20nm chips to ~7nm chips (near the theoretical minimum for silicon chips), I don't think this will work on a battery powered device. And CPUs are really bad at perf per watt on neural networks, some kind of GPU or ASIC setup will be required to make it work.


That's not correct; those numbers refer to the system requirements while actually playing. To quote from the paper:

> Evaluating policy and value networks requires several orders of magnitude more computation than traditional search heuristics. AlphaGo uses an asynchronous multi-threaded search that executes simulations on CPUs, and computes policy and value networks in parallel on GPUs. The final version of AlphaGo used 40 search threads, 48 CPUs, and 8 GPUs. We also implemented a distributed version of AlphaGo that exploited multiple machines, 40 search threads, 1202 CPUs and 176 GPUs.

In fact, according to the paper, only 50 GPUs were used for training the network.


For reference, it takes a little under 3 MWh to produce a car.


If your 110 MWh to train is accurate, and the 25 W used by the human brain reported elsewhere in this thread is as well, then this is equivalent to one person expending 500 years solely to learn Go.
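
The arithmetic, taking both figures upthread at face value:

    training_energy_wh = 110e6   # 110 MWh, the training estimate quoted upthread
    brain_power_w = 25           # rough human-brain power draw, also from upthread

    hours = training_energy_wh / brain_power_w   # 4.4 million brain-hours
    years = hours / (24 * 365)
    print(f"~{years:.0f} years")                 # ~500 years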


The cumulative amount of person-hours that went into training Lee Sedol (all the hours spent training his instructors, sparring partners, developing Go theory, playing out, and drawing inferences from the outcomes of long-dead expert players) is probably more than 500 years. AlphaGo, on the other hand, had to start from scratch.

Given the rules, and a big book containing every professional go game ever played, and no other instruction, it's not entirely clear to me that Lee Sedol would be able to reach his current skill level in 500 years.


And that's why we're not destined to compete with AI: that 110 MWh worth of training can be instantly available to all other Go bots. If only I could have access to a grandmaster's brain when I needed it!


Are they vastly less, though? The core of the algorithm is still a deep Monte Carlo Tree Search, which AlphaGo gets quite a computational boost on by being able to fire it off in parallel. It's obviously incorrect to take the training system and assume it's identical to the live system, but I think it's disingenuous to say the live system didn't have some serious horsepower.


Yes, for neural networks usually training them takes many orders of magnitude more resources than just using them.

For this particular example, training a system involves (1) analysis of every single game of professional go that has been digitally recorded; and (2) playing probably millions of games "against itself", both of which require far more computing power than just playing a single game.


I'm very aware of that. What I'm saying is that AlphaGo is not merely a neural net reporting best moves directly off the forward propagation. There are two nets which essentially act as proposal distributions for an exploration/exploitation tradeoff in the search space of game trees by which AlphaGo reads positions essentially out to the end of the game and ranks them by win rate (this is Monte Carlo Tree Search). The net moves are "nice" (I think they run at like 80% win rate against some other Go AIs? Maybe I'm misremembering) but the real heart of what makes AlphaGo play well is the MCTS which requires some vast resources to execute—live resources.


They did not say exactly but something like a couple hundred GPUs


I'd guess more on the order of 10,000 GPUs.


They only used 175 GPUs in the match 5 months ago.


I was actually thinking primarily of distributed training time for the networks and playing time for the system, rather than the number of GPUs running this particular match. Also, I thought the number of GPUs in October was more on the order of 1,000? Happy to be told I'm mistaken though.


They used ~1000 CPUs and ~200 GPUs 5 months ago.


What a strange sentiment. You would only delay the inevitable outcome. Sure, it wouldn't win now, but processing power will become stronger and machines get smaller. What was the point?


Chess engines and processing power have since then advanced to a point where my phone can now reliably beat Carlsen. There is no reason to suppose Go is different in that respect. In 10 years, DeepMind will fit into a phone.


It's way more about algorithmic improvements than hardware improvements though. Deep Blue evaluated 200 million positions per second. I don't think top programs of today could get to 2 million positions per second on a smartphone (I get about 10 million pos/second on my i7 3770 quad). It's all about improvements in search algorithms as well as position evaluation.


True. Without hardware improvements the processors that can evaluate 2 million positions per second while 'fitting' into a phone (qua processing power and power usage) would not exist though.


>my phone can now reliably beat Carlsen

I've seen this written by many people but is there any solid evidence/study that proves this?

Edit: seems like Pocket Fritz and Komodo are easily able to beat grandmasters.


Besides disagreeing with you, this actually isn't true at all. In competition, AlphaGo doesn't rely on particularly expansive hardware. For training, yes, but not for playing.


Considering the complexity of the human brain, it seems only fair to balance out a competitor's handicap in some way. Your idea seems to anticipate the logical progression of these tests: "Nature made this mind inside this small object, the brain, why don't we do that?" Regardless, the trend of course is toward miniaturization. I see news like this recent story: "Glass Disc Can Store 360 TB" http://petapixel.com/2016/02/16/glass-disc-can-store-360-tb-... to back up imagined futures like the film Her: https://youtu.be/WzV6mXIOVl4 (and that film doesn't even address whether the OSes are connected through a wireless network).

This stuff is happening fast, and we might have found ourselves, historically, in a place of unintelligible amounts of change. And possibly undreamt of amounts of self-progression.


It's the software that's impressive. Why does how many physical computers it takes to run the software matter? Its physical footprint will almost certainly shrink as computers get more powerful.


Are you suggesting to measure computing power by kilograms? That's even stupider than measuring software complexity by LOC.


It's not obviously stupid, as a bounding argument as we approach physical limits.


That's based on the assumption that computing must exist as silicon transistors. When would we have reached the bounds of computation based on physical limits if we had stuck with vacuum tubes? The point is that computation is an abstract concept and not tied to the physical medium that we use.


>The point is computation is an abstract concept and not tied to the physical medium that we use.

That's not exactly true

https://en.wikipedia.org/wiki/Limits_to_computation


Of course there are physical constraints on computing, but measuring by weight is rather stupid. Measuring the energy consumption seems to be a way better metric (even though "computation per energy" is clearly a human win).

Not to mention that we suddenly forgot that computers have their own units of measurement, such as clock speed (hertz) and memory size (bytes).


> (even though "computation per energy" is clearly a human win)

Is it? The problem here is that it's really hard to compare the TCO. For example, prime human computation requires years and years of learning and teaching, during which the human cannot be turned off (this kills the human). A computer can save its state and go into a low-power or even zero-power mode.

>such as clock speed (hertz) and memory size (bytes).

Which are completely meaningless, especially in distributed hybrid systems. Clock speed is like saying you can run at 10 miles per hour; it doesn't tell you how much you can carry. GPUs run at a far slower clock speed than CPUs, but they are massively parallel and much faster than CPUs on distributed workloads. Having lots of memory is important, but not all memory is equal, and the hierarchy matters even more.

Computer memory is (hopefully) bit-perfect, and a massive amount of power is spent keeping it that way. That is nice when it comes to remembering exactly how much money you have in the bank. Human memory is wonderful and terrible at the same time: there is no 'truth' in human memory, only repetition. A computer can take a picture and then hash the image, both of which can be documented and verified. A human can recall a memory, but the act of recalling it changes it, and the parts we don't remember well are filled in from our current state. It is this 'inaccuracy' that helps us use so little power for the amount of thinking we do.


TCO? I'm talking solely about the electricity the machine consumes (by machine I mean both the human brain and the computer).

Are the units I proposed perfect for the job? Of course not, just look how much you wrote. But I bet that if you did the same "thorough" analysis for measuring computing by weight, you could write not just a fat paragraph like your last one but a whole book on how wrong/meaningless/stupid it is (not that anyone would read such a book, though).


...it was the last stand of homo sapiens in the man vs machine...

But who made that machine?

I'd say a more precise evaluation would be that the ability to program a machine to assist in playing chess outdid the ability to play chess without such assistance.


Your point is entirely valid, but it was already made by the very comment you are replying to.


Indeed... Apologies to parent.


This is my generation's Garry Kasparov vs. Deep Blue. In many ways, it is more significant.

Several top commentators were saying how AlphaGo has improved noticeably since October. AlphaGo's victory tonight marks the moment that Go is no longer a human-dominated contest.

It was a very exciting game, incredible level of play. I really enjoyed watching it live with the expert commentary. I recommend the AGA youtube channel for those who know how to play. They had a 9p commenting at a higher level than the deepmind channel (which seemed geared towards those who aren't as familiar).


I know absolutely nothing about Go, and I enjoyed the deepmind channel and found the commenting very good.

I was actually thinking about playing a game with another total noob, just for fun, since the rules can be explained in 1 minute (unlike chess).


I totally recommend it. Learning Go is a beautiful experience.

It is indeed very interesting to play against another new player just to see what you come up with, then do some reading and solve some basic problems (it may even be a good idea to have a look at the easier problems before playing your first game), play more games, read more advanced books, join KGS... It is a very nice rabbit hole to fall into.


The rules can be explained in 1 minute, but the game takes some time to just start making sense.

I suggest starting on a 9x9 or 13x13 board. The regular 19x19 has too much strategic depth and noobs feel lost on it.


Many Go players suggest starting with Atari Go (aka Capture Go). It has the same rules as Go, but the starting position is predetermined and the winner is the first player to capture a stone.

You only need to play a few rounds of Atari Go, say 30 minutes to an hour, to get a grasp of the capturing rules, and then you can move on to a 9x9 or 13x13. I'd go straight for 13x13 because it's not that much bigger, but it has much more depth without being overwhelming. And many Go boards have 19x19 on one side and 13x13 on the other.

https://en.wikipedia.org/wiki/Capture_Go


The drawback with Capture Go is that it emphasises the capturing part perhaps a bit too much. I prefer introducing players to a variant with the same rules as Capture Go, except the winning condition is "having the most stones on the board". This is essentially the same thing as regular Go, but without all the faffing about with learning scoring and such.

When played on a small enough board, the games take about as long time as capture go games.


> The drawback with Capture Go is that it emphasises the capturing part perhaps a bit too much.

I definitely agree. Just a few games (i.e. just a few minutes) of Atari Go every now and then should be enough to teach that, and then move on to the real thing.

Your game variant sounds interesting, btw!


What do you do if they enter in ko?


Depending on which level you want to pitch the game at, you can either say that ko = draw or quickly explain how something like positional superko (PSK) works.
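
For anyone curious what a PSK check involves: positional superko just means the same whole-board position may never occur twice. Here's a rough, simplified Python sketch (toy board representation and helper names of my own, not taken from any real engine) of move legality with captures, suicide, and a superko check:

    from typing import Optional, Set, Tuple

    Board = Tuple[str, ...]  # each row is a string of '.', 'B', 'W'

    def neighbors(i, j, size):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:
                yield ni, nj

    def group_and_liberties(grid, i, j):
        """Flood-fill the group containing (i, j); return its stones and liberties."""
        color, size = grid[i][j], len(grid)
        stack, group, libs = [(i, j)], set(), set()
        while stack:
            p = stack.pop()
            if p in group:
                continue
            group.add(p)
            for ni, nj in neighbors(p[0], p[1], size):
                if grid[ni][nj] == '.':
                    libs.add((ni, nj))
                elif grid[ni][nj] == color:
                    stack.append((ni, nj))
        return group, libs

    def play(board: Board, move, color, history: Set[Board]) -> Optional[Board]:
        """Return the new position, or None if the move is illegal
        (occupied point, suicide, or a repeat of an earlier whole-board position)."""
        i, j = move
        size = len(board)
        if board[i][j] != '.':
            return None
        grid = [list(row) for row in board]
        grid[i][j] = color
        enemy = 'W' if color == 'B' else 'B'
        for ni, nj in neighbors(i, j, size):          # capture dead enemy groups first
            if grid[ni][nj] == enemy:
                grp, libs = group_and_liberties(grid, ni, nj)
                if not libs:
                    for gi, gj in grp:
                        grid[gi][gj] = '.'
        if not group_and_liberties(grid, i, j)[1]:    # suicide is illegal
            return None
        new_board = tuple(''.join(row) for row in grid)
        if new_board in history:                      # positional superko check
            return None
        return new_board

    # Tiny demo on a 5x5 board:
    empty = tuple('.' * 5 for _ in range(5))
    history = {empty}
    pos = play(empty, (2, 2), 'B', history)
    history.add(pos)

After every legal move you add the resulting position to `history`; any move that would recreate an earlier position is simply rejected, which also covers plain ko as a special case.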


Chess rules can be explained in 1 minute - you just have to talk really, really fast.


I like The Wire's explanation: https://www.youtube.com/watch?v=y0mxz2-AQ64


> I was actually thinking about playing a game with another total noob, just for fun, since the rules can be explained in 1 minute (unlike chess).

That's actually the recommended way to get started. Learn the rules, and then play a bunch of games with another beginner.


Yep, terrific commentary by Myungwan Kim 9p on the AGA channel.

For the folks who aren't as familiar with the game, how did you find the commentary (for any channel)? What would you be interested in hearing for events like these?


I watched most of the game on the DeepMind YouTube channel. Although I barely know the rules of Go, it was really nice that they explained a lot of the strategies; aside from the basic explanations, most of the rest still flew over my head. I was still hooked, though.

However it was infuriating that many times they switched randomly between video feeds, so I couldn't actually see what the commentators were talking about on their board. Once it even got stuck on "Match starts in 0 minutes" for a couple minutes!


I've been finding it pretty unwatchable. Does anyone know of a version that doesn't have the technical issues? (I'm very happy with the commentary, but the video keeps cutting to this 0-minutes screen and the audio is patchy.)


Same the "game will start in 0 seconds" thing kept cutting the audio in and out, and obstructing the board footage. Terrible for an uploaded youtube clip. I can understand issues with the live stream. But it's already over. Couldn't someone have edited that out?


Those technical issues were only a problem a few times at the beginning of the broadcast. 99% of the footage is fine, just stick with it. The problems disappear.


True, I skipped an hour in and watched from there and it was (mostly) fine.


I really enjoyed Myungwan's down-and-dirty commentary, and watching him get lost in some variations, and it was just incredibly exciting to see him get won over to AlphaGo during the game. From about move 50, I was just viscerally excited to see where things went, and the game did not disappoint in any way.

I've read a few different reviews and watched Michael Redmond's live commentary as well, who obviously has a slower Japanese style of play than Myungwan, and his variations all exhibited a very thorough style and sensibility, but I think he missed the key moment, and Myungwan called it -- the bottom right just killed Lee Sedol, and it was totally unexpected.

And Sedol was thinking about it too, because right after he resigned, he wanted to clear out that bottom right corner and rework some variations. I presume that's one frustration of playing against a computer -- they'll have to instrument AlphaGo to do a little kibitzing and talking after a game. That would be just awesome.

If you are very, very inspired by AlphaGo's side of this, it's really incredible to imagine, just for a moment, that building that white wall down to the right was in preparation for the white bottom right corner variation. The outcome of that corner play was to just massively destroy black territory, on a very painful scale, and it made perfect use of the white wall in place from much earlier in the game.

If AlphaGo was in fact aiming at those variations while the wall was being built, I would think at a fundamental level, Go professionals are in the position that chess grandmasters were ten years ago -- acknowledging they will never see as deeply as a computerized opponent. It's both incredibly exciting, and a blow to an admirable and very unusual group of worldwide game masters.

I loved every minute!!


Building the wall down to attack the bottom right corner isn't something outrageous, not to those at Sedol's level. AlphaGo definitely played amazingly, and the game was very technical in terms of fighting. But the "flow" (chase out a weak group, then invade a corner) is a fairly common situation. I don't think it's a matter of AlphaGo seeing further strategically than Sedol. It might have had much deeper calculation and reading than Sedol -- as shown in deflecting the attachment in the lower right -- but that's a bit of a different story.


I'm planning to watch the AGA coverage later, after watching the DeepMind coverage live. I found the DeepMind pair a bit underwhelming. Redmond was excellent at playing through some variations, but they did get very distracted at times, away from what was actually happening. His co-host was playing the 'I'm so nervous' line a little too strongly, I felt. So I didn't spot the significance of the pivotal moves in the bottom right. Thanks for the recommendation, I'm looking forward to the AGA coverage even more now.


As a person who knew only the basic rules beforehand, I wouldn't imagine it any better. Any more complicated and I'd get lost.

I'd love to see, one day, live commentary with an extra window showing what the computer is thinking at that moment.


Having worked on some code very similar to this, I think showing the computer's best moves would be quite artificial. Here are some thoughts as to why:

1. The computer can discard all its current best ideas and flip through new ones so fast, it would be a flickering blur to humans.

2. Even if we put a speed limit on it, the move being considered is itself the result of considering a lot of slight variations.

3. The ability to _articulate_ in a human language what makes the move nice is itself a "hard problem" closely related to natural language processing.

4. Even just having some color codes or symbols and grouping related ideas has some serious problems: now the visualization is pretty technical to begin with, the computer is still able to memorize and compare moves at an unbelievable rate, and it's still fundamentally not the same as the method Go masters use to find a solution.


I can back this up a bit. I created a strong chess-engine variant and had it visually show which moves the computer was considering, with move strength indicated by color intensity. It would even show what it considered your best counter-moves.

Even with all that thinking output on the screen, the computer would still soundly beat myself and another (intermediate) player.

Here are some screenshots to illustrate what I'm talking about:

http://fifthsigma.com/CoolStuff/DecachessThinking/


Regarding the first one, I don't think this would be a serious issue. It shouldn't show every move it's considering, just the current best option. It would probably converge on a good move within the first second. The free quick analysis on chess.com's iPhone app has a great visualization that rapidly updates the computer's scoring of the current position and shows what it thinks is the best move, as well as pointing out any previous moves it thinks were mistakes/blunders.


I watched the commentary on Youtube and it was fantastic! I don't play go myself but I was glued to the screen the whole way. I particularly enjoyed how the commentators demonstrated why the moves made sense by playing theoretical future moves right on the board they had up.


I really enjoyed the deepmind channel, but it's too long for me to enjoy in its entirety. I think a 15 minute video recapping the game and its crucial strategic moments would be fascinating.


I don't know much about go, so it was like a long Go lesson. That was interesting, but in terms of immediate gratification it was pretty dull. One very calm hyper-focused person describing what another very calm hyper-focused person is doing.


I don't think AGA had the rights to stream and comment this match.

Where did you find this 9p AGA commentary? I don't see it in the list of AGA videos on youtube.


It was live-streamed and the archive is now up on the official AGA YouTube channel: AlphaGo ?p vs Lee Sedol 9p, 0400 UTC (8pm PST)[1]

[1] https://www.youtube.com/watch?v=6ZugVil2v4w


Many thanks! I checked their youtube channel many times during the game, hoping they would cast this, as I found the Redmond commentary a little too shallow.

I don't understand how the AGA live stream didn't appear there for me?!


It took them an hour or two before they even got started, so you might not have checked late enough in the match.


Thanks for the link. Myungwan Kim's commentary is superb; am I the only one who thinks the American guy speaks a bit too much, though?


It can be hard to bite your tongue when you see the other (non-native) speaker struggling for words...

Andrew Jackson's role is invaluable in clarifying MyungWan Kim's thoughts: the infamously opaque "play this one, and then this one", or his white/black colour mix ups...

I personally think they're a good combo. Andrew is getting gradually better at only jumping in when necessary.


I don't mind Andrew. He's a strong player in his own right so he has questions a strong amateur would have.

He inevitably asks questions you want Myungwan to answer.


Andrew is an awesome polite guy, I'm giving him the benefit of the doubt that he was doing the right thing there.


The game was bound to be exciting given the matchup, but AlphaGo's moves themselves were not that exciting for much of the match, IMO. It seemed content to follow the outline that was unfolding rather than setting an agenda, and then it executed really well towards the end. I can't wait to see how it plays as black, moving first.

It seems like either the earlier match vs the European champion (Fan Hui, 2p) didn't show AlphaGo's full strength, or it has improved a lot in the interim. Other takes?


I was really hoping to see a more technical discussion than what I found here in the comments. It's too bad that such a cool accomplishment gets reduced to arguments about the implications for an AI apocalypse and "moving the goalposts". This isn't strong AI, and it was at least believed to be possible (albeit incredibly difficult), but it is still a remarkable achievement.

To my mind, this is a really significant achievement not because a computer was able to beat a person at Go, but because the DeepMind team was able to show that deep learning could be used successfully on a complex task that requires more than an effective feature detector, and that it could be done without having all of the training data in advance. Learning how to search the board as part of the training is brilliant.

The next step is extending the technique to domains that are not easily searchable (fortunately for DeepMind, Google might know a thing or two about that), and to extend it to problems where the domain of optimal solutions is less continuous.


> without having all of the training data in advance

What? They certainly trained the algorithm on a huge database of professional go games. It's even in the abstract. [1]

[1]: http://www.nature.com/nature/journal/v529/n7587/full/nature1...


> What?

Exactly

They used the game database to train the initial policy network (supervised learning); reinforcement learning was then performed on self-play games to improve that policy, and the value network was trained on positions from those self-play games. I.e., the machine first learned to imitate human play from existing data, then played against itself to improve its policy and learn a position evaluator, without needing further expert data.


Your claim still doesn't make sense. They either used expert data or they didn't. If the algorithm would lose when they remove the expert data, then they really do need expert data.

The tree search wasn't even the novel part of the algorithm... the authors even cite others who had used the identical technique in previous Go algorithms.


It seems that my original comment is unclear. My apologies for the ambiguity. I did not mean that they do not need any expert data, but that part of the training did not require a training data set.

They definitely need expert data to bootstrap the policy network, but the later stages of training (improving the policy through self-play and learning the value function from those self-play games) don't require an external data set. While MCTS is not new, I believe bootstrapping with supervised learning and then using self-play reinforcement learning to train the networks that guide the MCTS is novel.


They were strong amateur games from the KGS server.


I believe the October match used amateur games, but for this match, they added a professional database.


I posted in the earlier thread because this one wasn't up yet[1].

Some quick observations

1. AlphaGo underwent a substantial amount of improvement since October, apparently. The idea that it could go from mid-level professional to world class in a matter of months is kinda shocking. Once you find an approach that works, progress is fairly rapid.

2. I don't play Go, so it was perhaps unsurprising that I didn't really appreciate the intricacies of the match, but even being familiar with deep reinforcement learning didn't help either. You can write a program that will crush casual humans at chess with tree search + position evaluation in a weekend (a toy sketch of that skeleton is below the list), and maybe build some intuition for how your agent "thinks" from that, plus playing a few games against it. Can you get that same level of insight into how AlphaGo makes its decisions? Even evaluating the forward prop of the value network for a single move would take a substantial amount of time by hand.

3. These sorts of results are amazing, but expect more of the same, more often, over the coming years. More people are getting into machine learning, better algorithms are being developed, and now that "deep learning research" constitutes a market segment for GPU manufacturers, the complexity of the networks we can implement and the datasets we can tackle will expand significantly.

4. It's still early in the series, but I can imagine it's an amazing feeling for David Silver of DeepMind. I read Hamid Maei's thesis from 2009 a while back, and some of the results presented mentioned Silver's implementation of the algorithms for use in Go[2]. Seven years between trying some things and seeing how well they work and beating one of the best human Go players. Surreal stuff.
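
As promised in point 2, here's roughly what I mean by that skeleton: a toy material-counting negamax with alpha-beta pruning on top of the python-chess library. The depth, piece values, and crude evaluation are arbitrary choices just for illustration; at this size it wouldn't crush anyone, but it's the tree-search + position-evaluation shape being described.

    # pip install python-chess
    import chess

    VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def evaluate(board: chess.Board) -> float:
        """Crude material count from the side-to-move's perspective."""
        score = 0
        for piece in board.piece_map().values():
            value = VALUES[piece.piece_type]
            score += value if piece.color == board.turn else -value
        return score

    def negamax(board: chess.Board, depth: int, alpha: float, beta: float) -> float:
        if board.is_checkmate():
            return -1000                      # the side to move has been mated
        if depth == 0 or board.is_game_over():
            return evaluate(board)            # draws scored crudely; fine for a toy
        best = -float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1, -beta, -alpha))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:                 # alpha-beta cutoff
                break
        return best

    def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
        choice, choice_score = None, -float("inf")
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1, -float("inf"), float("inf"))
            board.pop()
            if score > choice_score:
                choice, choice_score = move, score
        return choice

    # Prints some legal opening move (with a material-only eval they all look equal at first).
    print(best_move(chess.Board()))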

---

1. https://news.ycombinator.com/reply?id=11251526&goto=item%3Fi...

2. https://webdocs.cs.ualberta.ca/~sutton/papers/maei-thesis-20... (pages 49-51 or so)

3. Since I'm linking papers, why not peruse the one in Nature that describes AlphaGo? http://www.nature.com/nature/journal/v529/n7587/full/nature1...


Regarding 2, the point is that a valid move in Go is to place a stone on any empty intersection of the 19x19 grid each turn, so the number of valid moves is not at all comparable with chess.

You can check these for more information: https://en.wikipedia.org/wiki/Go_and_mathematics

Related: https://en.wikipedia.org/wiki/Shannon_number

From these two links, the game tree complexity of chess is estimated at 10^120 while for Go it is 10^700.

Not really in the same ballpark.
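
For a rough sense of where those numbers come from, the usual back-of-the-envelope is branching factor raised to the game length. The inputs below are common ballpark averages of mine, and published estimates vary a lot depending on the assumed game length (which is why figures anywhere from ~10^360 up past 10^700 get quoted for Go):

    import math

    # Rough averages: ~35 legal moves and ~80 plies for chess,
    # ~250 legal moves and ~150 moves for a typical Go game.
    print(f"chess           ~ 10^{80 * math.log10(35):.0f}")    # ~10^124
    print(f"go (150 moves)  ~ 10^{150 * math.log10(250):.0f}")  # ~10^360
    print(f"go (300 moves)  ~ 10^{300 * math.log10(250):.0f}")  # ~10^719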


Since the European match went 5-0, how do we know the bot wasn't just as good months ago?



Just for context, this is the first of a five-game match. Next one tomorrow at the same time! (6am CEST, 8pm PT).


Thank you. The title on HN here didn't imply it was a 5 game series at all, nor did the tweet it linked to.

It's a cool win but despite the way the titles are being presented, this isn't over yet.


What an incredible moment - I'm so happy to have experienced this live. As noted in the Nature paper, the most incredible thing about this is that the AI was not built specifically to play Go as Deep Blue was. Vast quantities of labelled Go data were provided, but the architecture was very general and could be applied to other tasks. I absolutely cannot wait to see advancements in practical, applied AI that come from this research.


Here's the Nature article: http://www.nature.com/news/google-ai-algorithm-masters-ancie... (it has a link to the free paper, as well)

The position evaluation heuristic was developed using machine learning, but it was also combined with more 'traditional' algorithms (namely Monte Carlo tree search). So it was built specifically to play Go, in the same way Deep Blue used tree search specifically to play chess... though tree search is applicable in other domains.


I just wrote a blog post about this. I was up until 1am this morning watching the game live. I became interested in AI in the 1970s, when the game of Go was considered a benchmark for AI systems. I wrote a commercial Go-playing program for the Apple II that did not play a very good game by human standards, but it did play legally and understood some common patterns. At about the same time I was fortunate enough to get to play both the women's world Go champion and the national champion of South Korea in exhibition games.

I am a Go enthusiast!

The game played last night was a real fight in three areas of the board, and in Go, local fights affect the global position. AlphaGo played really well, and world champion (sort of) Lee Sedol resigned near the end of the game.

I used to work with Shane Legg, a cofounder of DeepMind. Congratulations to everyone involved.


I watched the commentary by Michael Redmond (9-dan professional), and he didn't point out a single obvious mistake by Lee Sedol the entire match. Just really high-quality play by AlphaGo.

Really amazing moment to see Lee Sedol resign by putting one of his opponent's stones on the board.


Yeah according to Redmond, it seemed that AlphaGo made a few "mistakes" whereas Sedol made none. And yet AlphaGo came out substantially ahead. So I'm not sure what that means. Perhaps we need to see more in-depth analysis of the moves, but it seems that AlphaGo just out-calculated Sedol.


I wonder if their move-selection algorithm takes into account the "surprise" factor: given two moves that are almost equal in strength when analyzed to a depth of N, choose the one that looks worst at depth N-1. That is, if all else is equal, assume that you can search deeper than your human opponent, and lay traps accordingly.
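
There's nothing in the published paper suggesting AlphaGo does this, but mechanically the idea is simple enough to sketch (toy Python, invented move names and scores: among moves that are nearly equal at the deeper search, prefer the one that looks worst at the shallower one):

    from typing import List, Tuple

    def pick_surprising_move(candidates: List[Tuple[str, float, float]],
                             tolerance: float = 0.02) -> str:
        """candidates are (move, deep_score, shallow_score) tuples.
        Among moves within `tolerance` of the best deep score, prefer the one
        with the lowest shallow score -- i.e. the move an opponent searching
        less deeply is most likely to underestimate."""
        best_deep = max(deep for _, deep, _ in candidates)
        near_best = [c for c in candidates if best_deep - c[1] <= tolerance]
        return min(near_best, key=lambda c: c[2])[0]

    # Made-up example:
    moves = [("D4", 0.61, 0.58), ("Q16", 0.60, 0.40), ("C3", 0.52, 0.55)]
    print(pick_surprising_move(moves))   # -> "Q16": nearly as strong, but looks worse shallow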


Besides trap-laying, there's also a second useful "surprise" factor: your opponent is likely to have spent time on your clock to read out follow-ups to your most likely move. By throwing in an unlikely (but still good!) move you're forcing them to expend time on their clock to re-think their follow-ups.


That's interesting. And a sign of truly understanding what a human would think.

Btw, there's a concept in Go called "overplaying". That means selecting a move that isn't objectively the best you could come up with, but that is the most confusing, considering the level of the opponent. It's generally thought of as bad practice, and if you misestimate the level of your opponent, they can punish you by exploiting the fact that you didn't play your best move.


If they do that, they didn't tell anyone at Google.


I don't know about surprise, but the AGA stream was pretty shocked when AlphaGo was playing aggressively from the 5th line/column from the right early in the game. Apparently going in on the 3rd is already very risky and the 4th is almost never done.


According to David Ormerod's commentary at https://gogameguru.com/alphago-defeats-lee-sedol-game-1/, Black (Lee Sedol) made plenty of mistakes from start to end, and they were seriously questioning his form. White (AlphaGo) made only a few mistakes in the middle, which allowed Black to catch up a bit, but he had no chance, especially in the endgame.


Maybe the mistakes were unrecognized brilliant moves (tesuji).


I would love to see an expert analysis of the game, but there were definitely a few moves where AlphaGo was "pushing from behind" that were probably not what an expert human would play.


I think that's the point of reacweb: AlphaGo might already be beyond expert human understanding.

TD-Gammon was at that point for a while in the early 90s, but the experts caught up, and this changed the generally accepted Backgammon strategies.


Lee had quite an advantage in the middle, then he made one bad mistake and that was it. DeepMind made some smaller mistakes too, but none as bad.

I am really excited about DeepMind, though. Looking forward to tomorrow's game!


What was the mistake that Lee made?

EDIT: good postmortem here https://gogameguru.com/alphago-defeats-lee-sedol-game-1/


Yeah, it was the bottom right corner that I talked about. But some speculation says it was part of AlphaGo's plan, so I'm really looking forward to today's game now.


According to Myungwan Kim, 9p, commenting on the game on the AGA channel [1], Lee Sedol made a mistake right at the beginning and spent the rest of the game hoping to claw it back; Kim predicted the resignation about 15 minutes before it happened.

EDIT: but this postmortem [2] of the game is far more nuanced and doesn't reach the same conclusion.

[1] https://www.youtube.com/watch?v=6ZugVil2v4w [2] https://gogameguru.com/alphago-defeats-lee-sedol-game-1/


I was really expecting Lee Sedol to win here. I'm very excited, and congratulations to the DeepMind team, but I'm a bit sad about the result, as a go player and as a human.


If it's any consolation, there are still tons of things humans are far better at than machines.


The only remaining ones are language-related. Natural language is the next focal point of AI research.


> The only remaining are language-related

That is a gigantic over-simplification. All machines are application specific, even machine-learning based ones. They all require human supervision, whether through goal setting or fixing errors.

There are some areas where machines are better than humans, and playing Go is now one of them, but that doesn't mean machines will replace humans in all facets at any given point in time. We grow, our tools grow, and the cycle repeats.


That, and towel-folding [1]. Humans have a pretty clear edge on that.

[1] https://www.youtube.com/watch?v=gy5g33S0Gzo&ab_channel=RLLbe...


That's pretty cool.

I wonder how it would deal with a teddy bear or stray piece of underwear in the pile of towels?


I look forward to a future when the only remaining occupation is hotel maid.


But, who's going to stay in the hotels? Will the robotic occupants even need towels folded? ;)


That is actually a really impressive robot.


The more recent results are even more impressive: https://news.berkeley.edu/2015/05/21/deep-learning-robot-mas...


Skill-related: it'd be interesting to see how long it takes driving AIs to beat the best human drivers in a weight-equal vehicle. An algorithmic competitor in Formula One would be interesting.



I feel an AI could take much more risk if there are no lives in danger, so that would give it an edge.


Would be a stomp for the computer; they don't have to respect G-forces.


Racing cars aren't limited by the driver's G tolerance, I don't think. They generate 4G-ish, from what I hear on the F1 coverage. Well within driver capabilities. Their G is limited by tyres.


Not true: humans are better than computers at StarCraft, despite AI's tremendous APM advantage.


How about walking and image recognition?


The recent video of Boston Dynamics Atlas robot looked like it could walk about as well as a human, maybe even better at recovering its balance (see it heroically walking through snow).


Not to downplay how amazing Atlas is but I don't think that it reaches "about as well as a human" yet.


I see "AI" doing well in games that have simple inputs, a limited range of legal outputs, and relatively easy "goodness" measures.

I see nothing that might be able to tell us why gravitational mass is the same as inertial mass, for example, or any moves in that direction. This "AI" is good at simple games.


As an addendum to my comment, a number of people working at DeepMind agree with me.


But now every amateur will have access to unlimited play against Lee Sedol-level opponents.


Yea, let me just go home and grab my hundreds of GPUs and CPUs.


Renting them in the cloud for a single game should actually not be all that expensive. Around 100 USD per game perhaps? And the price is only going to fall.

An amateur can learn plenty from a slightly weaker version on less hardware already.

