AlphaGo Can't Beat Me, Says Chinese Go Grandmaster Ke Jie (shanghaidaily.com)
172 points by xianshou on March 10, 2016 | 123 comments



The headline doesn't do justice to what he said. He actually said that, after watching the current games, he thinks he has a 60% chance of winning, but that AlphaGo will eventually beat him (perhaps in months, perhaps in a few years).

The headline makes it sound like he's being super-arrogant, but his actual words tell a very different story.


He said both, if you can read his posts on Weibo (roughly Chinese Twitter). And he said those two points in separate posts, so the headline quote isn't taking words out of context. It's Jie himself who used the clickbait strategy in the first place and the media happily followed :)

Edit: I just looked up his Weibo, and it turns out that he said only what is quoted in the headline on Weibo; that deeper analysis was, presumably, from a later interview which he didn't post on Weibo at all.

Also, I don't think it sounds super arrogant. The fact that AlphaGo beats someone beaten by Ke Jie 8 times out of 10 doesn't say enough about AlphaGo vs. Ke Jie. As Jie himself pointed out, eventually AlphaGo will be able to beat him, but claiming that it cannot do this now sounds pretty safe to me.


> The fact that AlphaGo beats someone beaten by Ke Jie 8 times out of 10 doesn't say enough about AlphaGo vs. Ke Jie.

8:2 implies the difference between them isn't so large. (If someone only beat me 8:2, I'd expect a professional to beat them 10:0.) Do we have any particular reason to think that AlphaGo would fall within that narrow skill band?

E.g. if Google specifically challenged Lee, thinking that AG had just about passed his level.

Alternatively by looking at the games and saying "Ke beat Lee by more than AG did, so Ke will beat AG as well". But many people have pointed out that AG doesn't optimize for winning by the greatest amount, so this probably isn't very informative.

Alternatively if AG's playstyle is strong against Lee but weak against Ke. But could Ke know that? AG has only played against two high-profile opponents; if it can adapt its playstyle, we won't have seen that in action.


A couple of points of background:

1) For people outside the Go community: Ke Jie is the new star of Go. His Elo rating is currently #1 (http://www.goratings.org/); Lee Sedol is #4. The media touted Lee Sedol as the greatest player of the past decade, which was true a couple of years ago, but he has clearly been in decline in recent years (as seen in the Elo ratings).

2) Ke Jie is 19, at the peak age for his game. Lee Sedol is 33 and seemingly past his peak, hence the 2:8 record vs. Ke Jie.

3) Ke Jie probably wanted to be challenged, since he is currently the world's #1 Go player. Sure, there isn't much difference in skill level between top 9-dan pros, but challenging the current world #4 is different from challenging the current world #1, isn't it?

It's a shame that the Western world doesn't know Ke Jie well (his Wikipedia page has very limited info beyond his award record). [1] But these are moot points. After the 2nd defeat, it's increasingly clear that AlphaGo will defeat any human player - it's a matter of time, or a matter of a couple more games.

[1] https://en.wikipedia.org/wiki/Ke_Jie


Not so sure about that extra mile. It could happen that the AI isn't able to go beyond a certain level; that is, this AI could hit a wall. We have a wonderful opportunity to see an example in which current AI is measured to be near, but not beyond, the best human competence.


> It could happen that the AI isn't able to go beyond a certain level.

That's your wishful thinking.

As we've seen from the matches with Fan Hui in the past, unfortunately and fortunately, the CNN will improve drastically game after game, making fewer mistakes, whereas humans always make them at a certain rate.

Another big point I want to make is human emotion and state of mind. Down 0:2, Lee Sedol has lost his confidence ("speechless" in his own words) [1] and quite possibly has a growing fear of losing in a 0:5 landslide. Losing confidence and fearing defeat are VERY TERRIBLE things in any competitive sport.

[1] http://www.theverge.com/2016/3/10/11191184/lee-sedol-alphago...


It is very interesting to measure the rate at which AI improves. If the AI can improve a lot, a very interesting problem is to develop a scale or Elo for measuring this progress. I am more interested in what lesson we can take home from this match; I don't have a wish here. We have many things to learn about this. In a situation of great uncertainty about the pace at which AI can improve its capabilities, measuring the velocity or rate of change can give a hint about the future and applications of this new technology.


I prefer the marketing narrative of beating a top pro and then challenging the #1... or better yet, having said #1 more or less ask for the challenge himself. It has more impact than going for the #1 directly and also allows for some extra optimization cycles.


Agreed. Each 100 points of Elo difference translates into a two-times-stronger player.

The one to beat is Ke Jie because he is currently two times stronger than Lee Sedol.
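For reference, here's a minimal sketch of the classic Elo expectation formula (goratings.org uses its own whole-history rating model, so treat this purely as an illustration). Under it, a 100-point gap works out to roughly a 64% expected score, i.e. close to 2:1 odds:

    # Classic Elo expected-score formula (logistic curve, 400-point scale).
    # Illustrative only; goratings.org computes its ratings differently.
    def elo_expected_score(rating_a, rating_b):
        """Expected score of player A against player B under classic Elo."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    for gap in (50, 100, 200, 400):
        print(f"{gap:>3}-point gap -> expected score {elo_expected_score(gap, 0):.2f}")
    # A 100-point gap comes out to about 0.64, i.e. winning roughly 2 games in 3.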


Transitivity doesn't necessarily apply in cases like this. Just because A can beat B and B can beat C, that doesn't mean A can beat C.

That's especially true when one of your players is a robot, which learns in a different way than humans. Yes, it's running a neural network, but a neural network still isn't a brain. Every person's brain is slightly differently organized on the neural level. On the other hand, every NN is running more or less the same structure and backpropagation algorithms. If you can find where those algorithms are strategically weak, you'll be able to beat them consistently.


In this case, we have A > C and it's looking like B > C. For these three players, transitivity holds whether A > B or B > A.

But yes, I spoke about playstyles. Do you think Ke can discover exploitable weaknesses in AG, given the games AG's played?


The actual values in the NN depend on both the algorithms and the data fed in during the training phase. You cannot find where these algorithms are strategically weak unless you've seen everything the NN has seen.


Not necessarily. Unlike the human brain, NNs are extremely organized. They can change their weights but they cannot easily change their fundamental structure. Their organization can make them bad at certain tasks.

For example, a CNN does well at object recognition from photos, but does poorly at recognizing a pencil sketch of an object it hasn't previously seen a sketch of, or at recognizing a scene at night that it has only seen daytime photos of. Humans do those tasks very well because the trained human brain combines cultural understanding, physics intuition, and the visual cortex at the same time, and can use this to easily beat, say, a massively trained CNN image-recognition program that lacks cultural understanding and physics intuition.

Some other more complex NN structure may be able to tackle these kind of tasks, but as long as its neural structure is rigid, it will have yet other deficiencies. The human brain is still able to structurally adapt in ways that NNs cannot, yet.


"8:2 implies the difference between them isn't so large. (If someone only beat me 8:2, I'd expect a professional to beat them 10:0.)" Not sure I follow the logic here. They're both professionals, and one was much better than the other in the 10 games they played!


My point is that 8:2 isn't much better. Not compared to the range of skills available. If someone beats me 8:2, that puts fairly strong bounds on their skill level, such that most players would either beat both of us or lose to both of us.

Another way to look at it, if Ke beats Lee 8:2 then Lee-plus-four-stones probably beats Ke more than half the time. (I'm pretty confident about four stones; I suspect two or three would suffice.) Four stones is significant, but it's still a fairly narrow gap to say "I think AG falls within this skill range" until you've seen it lose a game.


I get your point, but four stones is beyond ridiculous. Two stones is already a really large handicap. At two stones, Ke would have no chance whatsoever.

Games at this level are decided by a few points. The closer two players come to optimal play, the harder it is to be very much more efficient than your opponent over the course of a game. To be so much more efficient, at this level, that you could overcome a 4 stone handicap is unthinkable.

I would very much like to see computers improve to the level that they could take on a top professional giving a 2 stone handicap. That would be a sight!


Ke Jie would lose to, or at best be even with, Lee Sedol with a one-stone handicap (taking white with no komi), because his Elo is not THAT much higher. Since Lee Sedol has a better record against Park Junghwan, who also has wins vs. Ke Jie, and so on, Elo is a better predictor than just 10 games in a vacuum.


8:2 says a lot more when the players are the 2 best in the world (or close to that)


I agree with what you say, but disagree with what I think you mean.

Weaker players are more inconsistent. Someone else says Lee-plus-two stones beats Ke. I don't think me-plus-two-stones dominates someone who beats me 8:2.

So with weaker players, 8:2 is consistent with a wider range of skill gaps (measured in handicap stones). It's consistent with one player being only a little better, and it's consistent with one player being a lot better.

With two of the best players in the world, 8:2 isn't consistent with more than two stones difference between them.

So in this case, 8:2 rules out a lot of possible skill gaps, which does say a lot; in the same way that "beats me 8:2" says a lot more than "beats me 10:0".

(Alternatively we can measure skill gaps in points. Games between weak players will often be decided by tens of points. Games between strong players will be decided by a handful. 8:2 between Lee and Ke suggests their points gap is smaller than between two amateurs who get 8:2.)


Even if he did say that and mean it, it's probably natural and healthy for someone in his position to have a psychological bias towards predicting their own victory. To be an elite competitor, I think you have to stay in the mindset of winning. Thinking about losing too much can take away a little bit of edge (provided you don't get arrogant and lazy)


I can only agree: the headline is absolutely horrible. The actual quotes are way more nuanced and, importantly, show a downright eagerness to test himself against AlphaGo.


He's effectively saying as an outsider that he would have won the match just played. But he has no way of knowing that. It's easy to see mistakes from the outside. Seems like hubris to me.


Also, it's very likely that if the human had played stronger in the last match, the computer would have stepped up too. Computers tend to sacrifice points (i.e. "big leads") for assured victory, so unless the human is stronger than the computer, you cannot know its true potential.


That's a common strategy in human game play and sports as well. It has been observed many times that humans competing on their own do much worse than when they compete against a somewhat stronger party. As they say: one boat is a cruise; two boats is a race.


To be fair, he's 8-2 vs Lee Sedol head to head and he's only 19 years old.


He's great, to be sure, but only Sedol knows what it's like to be in this position. The pressure must be immense.


>Seems like hubris to me.

I assumed that, since this has been in the news, he's promoting himself to be next in line to take on the computer. The purpose being the publicity and earnings associated with participating in an event like this (it's not often Go makes it into the mainstream media).


Matches have been regularly televised for decades.


This is very interesting. We have a way to test what he says: a match between the machine and this player, with the game in the situation he describes. If he defeats the machine, he proves that the machine's game is not beyond our capabilities.


> but that AlphaGo will eventually beat him (perhaps in months, perhaps in a few years)

I think that part is irrelevant, because everyone knows that would be the case - that in a few months or years the AI would exponentially improve.


Before you can talk about "exponentially improve" you first have to define what kind of scale you want to use.



Given the talk about scales, I was expecting this one: https://xkcd.com/271/


It will be twice as fast for the same budget in about two years, maybe less. That'll allow it to better analyse outcomes.

And it will continue improving at about the same rate for the next few years, at least.

In the long term, meatware has no chance against it.


This is assuming the limitation is mostly processing power.

From what I've read, more processing power would allow the AI to calculate possible outcomes for each move further into the future but that does not mean the AI's overall success rate will go up at the same rate.


The AI seems to be improving linearly, actually.


He's generally agreed to be the world's best player (and only 20 years old), but he might not have seen the true strength of AlphaGo. After analyzing the games, it's apparent that AlphaGo only tries to win, and does not try to win by a large margin. Translated into game play, AlphaGo seems to always stay ahead of its opponent by a few points, and often trades an optimal move for a less profitable but much safer one. So even though the two games with Lee have been close, no one has any idea how strong AlphaGo really is.


I'd describe that a little differently: I think AlphaGo's choosing safer moves shows strength. In go you don't get any extra credit for winning by more; a win by 0.5 is equivalent to a win by 50.5. As such there is no incentive to fight for wins at a higher margin. If a more aggressive move is riskier or harder to read (it usually is), there is actually a disincentive to play it. I can't tell you how many games I've lost because I misread a complicated variation which I could have avoided entirely when I was ahead.
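A toy way to see the difference (made-up numbers, not AlphaGo's actual evaluation): if you rank candidate moves by estimated win probability instead of by expected margin, you naturally prefer the quiet, safe move.

    # Toy illustration with made-up numbers, not AlphaGo's real evaluation:
    # ranking candidate moves by estimated win probability vs. expected margin.
    candidate_moves = [
        # (move, estimated_win_probability, expected_point_margin)
        ("aggressive invasion", 0.78, 12.5),
        ("solid reinforcement", 0.91, 2.5),
    ]

    best_by_margin = max(candidate_moves, key=lambda m: m[2])
    best_by_win_prob = max(candidate_moves, key=lambda m: m[1])

    print("maximizing margin picks:", best_by_margin[0])            # the risky invasion
    print("maximizing win probability picks:", best_by_win_prob[0])  # the safe move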


>"a win by 0.5 is equivalent to a win by 50.5 It don't matter if you win by an inch or a mile. Winning's winning."

Which reminds me of some wisdom from Dominic Toretto "It don't matter if you win by an inch or a mile. Winning's winning."


“The secret [to motor racing]”, said Niki Lauda, “is to win going as slowly as possible”


>"I can't tell you how many games I've lost because I misread a complicated variation which I could have avoided entirely when I was ahead."

Isn't that Go in a nutshell though.


I'm sure this has been discussed, but why not have two AlphaGo play each other?


That's, in effect, how they train it -- they've done literally thousands of these. What they haven't done (yet) is publish the moves of those games for external review.
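Conceptually the self-play loop is simple. A rough sketch (with hypothetical helper names, not DeepMind's actual pipeline):

    # Rough sketch of a self-play training loop. `play_one_game` and
    # `update_from_games` are hypothetical placeholders, not DeepMind's API.
    def self_play_training(policy_net, iterations, games_per_iteration):
        for _ in range(iterations):
            # The current network plays both sides of every game.
            games = [play_one_game(policy_net, policy_net)
                     for _ in range(games_per_iteration)]
            # The network is then updated from the recorded moves and outcomes.
            policy_net = update_from_games(policy_net, games)
        return policy_net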


This was being discussed on the radio the other day. They stated it had played 30 million games against itself.


because it would create a singularity that would swallow the known universe.


To be pedantic - he was born in 1997 so he is 18 or 19 years old.


The question (as Ke Jie says, but the headline hides) is not if AlphaGo will beat him, but when.

I was taught a computer would not beat a professional player in my lifetime. Now, there is maybe one player who can beat AlphaGo. I guess this won't be true for too long. When? That would be an interesting bet.


Exactly. Constrained (most games) AI singularity is near, because chess was just the start.


You're reaching. Constrained AI singularity isn't a thing. It's singularity or it isn't.

AI is definitely getting better, but it is all application specific. AI is not at the point of setting its own goals or fixing all of its errors. We still need humans for that.


Bah, replace "singularity" with "superhuman", and we're set. We've had superhuman arithmeticians since the dawn of computing and superhuman chess players since Deep Blue or so; superhuman Go players and superhuman drivers are not too far off…

Superhuman programmer with intelligence explosion capabilities… yeah, that's a whole 'nother game.


What's an intelligence explosion? Intelligence dissemination or deletion?

EDIT: Nevermind, just looked it up and found this was a real thing. I thought you were joking.


It sounds less weird if you think: "AI has automated some jobs, and eventually it may automate away most of them, in the same way (pending AlphaGo victory) it's now automated winning at most board games... So what happens if the job of programmer gets automated away?"

(I'm convinced that programmers will take the AI-automating-all-the-jobs idea more seriously once it's their own jobs that are on the line)

There are a few ways I can think of to object to this line of reasoning:

1) You could argue that programming will be a job that never gets automated away. But this seems unlikely--previous intellectual tasks (Chess, Jeopardy, Go) were thought to be "essentially human", and once they were solved by computers, got redefined as not "essentially human" (and therefore not AI). My opinion: In the same way we originally thought tool use made humans unique, then realized that other animals use tools too, we'll eventually learn that there's no fundamental uniqueness to human cognition. It'll be matter & algorithms all the way down. Of course, the algorithms could be really darn complicated. But the fact the Deepmind team won at both Go and Atari using the same approach suggests the existence of important general-purpose algorithms that are within the reach of human software engineers.

2) You could argue that programming will be automated away but in a sense that isn't meaningful (e.g. you need an expensive server farm to replace a relatively cheap human programmer). This is certainly possible. But in the same way calculators do arithmetic much faster & better than humans do, there's the possibility that automated computer programmers will program much faster & better than humans. (Honestly humans suck so hard at programming https://twitter.com/bramcohen/status/51714087842877440 that I think this one will happen.) And all the jobs that we've automated 'til now have been automated in this "meaningful" sense.

Neither of these objections hold much water IMO, which is why I take the intelligence explosion scenario described by Oxford philosopher Nick Bostrom seriously: http://www.amazon.com/Superintelligence-Dangers-Strategies-N...


> (I'm convinced that programmers will take the AI-automating-all-the-jobs idea more seriously once it's their own jobs that are on the line)

I assure you we do not. We would all love to be the one to create such a program. But don't worry it isn't happening anytime soon in the form of singularity.

Just like other people in other jobs, we sometimes come up with more efficient ways of doing things than our bosses thought possible, and then we have some extra free time to do what we like.


"But don't worry it isn't happening anytime soon in the form of singularity."

Probably not soon. Worth noting that the Go victory happened a decade or two before it was predicted to though.

(To clarify, I was not trying to establish that any intelligence explosion is on the immediate horizon. Rather, I was trying to establish that it's a pretty sensible concept when you think about it, and has a solid chance of happening at some point.)

>Just like other people in other jobs, we sometimes come up with more efficient ways of doing things than our bosses thought possible, and then we have some extra free time to do what we like.

Yes, I'm a programmer (UC Berkeley computer science)... I know.

But don't listen to me. Listen to Stuart Russell: https://www.cs.berkeley.edu/~russell/research/future/


> (To clarify, I was not trying to establish that any intelligence explosion is on the immediate horizon. Rather, I was trying to establish that it's a pretty sensible concept when you think about it, and has a solid chance of happening at some point.)

Well first you were saying programmers may hesitate to create an AI singularity because it may cost them their job. I said we would love to but probably won't anytime soon. Now you're saying that it's likely that it will happen some day. I'm not sure these points follow a single train of thought.


>Well first you were saying programmers may hesitate to create an AI singularity because it may cost them their job.

The line about programmers fearing automation only once it affects them was actually an attempt at a joke :P

The argument I'm trying to make is a simple inductive argument. Once something gets automated, it rarely to never gets un-automated. More and more things are getting automated/solved, including things people said would never be automated/solved. What's to prevent programming, including AI programming, from eventually being affected by this trend?

The argument I laid out is not meant to make a point about wait times, only feasibility. It's clear people aren't good at predicting wait times--again, Go wasn't scheduled to be won by computers for another 10+ years.


The fact that programming is an exceptionally ill-defined task. Computers are great at doing well-specified tasks. In many ways, programming is the act of taking something poorly-specified and making it specific enough for a computer to do. Go, while hard, remains very well defined.

I hope for more automation in CS. It will help eliminate the boilerplate and let programmers focus on the important tasks.


> In many ways, programming is the act of taking something poorly-specified and making it specific enough for a computer to do.

Software development in the broad sense is that, sure; I'm not sure I'd say programming is. Taking vague goals and applying a body of analytical and social skills to gather information and turn it into something clearly specified and unambiguously testable is the requirements-gathering and specification side of systems analysis. That's certainly an important part of software development, but it's a distinct skill from programming (though, given the preference in many modern methodologies for a lack of strict functional distinctions within software development teams, it's often a skill needed by the same people who need programming skills).


There is one uniqueness to human cognition: the allowance to be fallible. AI will never be able to solve problems perfectly, but whereas we are forgiven for that, they will not be, because we've relinquished our control to them ostensibly in exchange for perfection.


It may interest you to know that machine learning algorithms often have an element of randomness. So they are allowed to explore failures. Two copies of the same program, trained separately and seeded with different random numbers, may come up with different results.

I'm not saying there will or won't be AI some day, I just thought that point was relevant to your comment.


Sorry, I found that a bit difficult to follow... it sounds like you think human programmers will beat AIs because we can make mistakes?

Well here's my counter: mistakes either make sense to make or they don't. If they make sense to make, they aren't mistakes, and AIs will make the "mistakes" (e.g. inserting random behavior every so often just to see what happens--it's easy to program an AI to do this). If they don't make sense to make, making them will not be an advantage for humans.

At best you're saying that we'll hold humans of the future to a lower bar, which does not sound like much of an advantage.


You ignored the "setting its own goals" part, which we haven't even started on yet. Humans define the arithmetic problems. Humans define what it means to win at chess and Go. Humans choose a specific destination to drive to.


Once humans have defined and programmed what to do, the machine happily does it much better than any human could (depending on the domain).

One day we'll be tempted to write a more serious, more complex "game", with real-world consequences. For those, we'd better specify our goals precisely, lest we face unforeseen and unfortunate consequences. There will be reasons why the machine does it better, ranging from faster computation to exquisite motor control to a total lack of ethics (assuming ethics wasn't part of the programming).

Next thing you know, you need to solve the Friendly AI problem.


Well, actually, this neural-network style of machine learning is not all that application specific. You create a general architecture and throw lots of data at it; it could be Go, it could be recognising pictures. You will need different general architectures, but the point is that this is a fundamentally different approach from the old-school chess algos.


Yeah I get that. I studied and have worked in machine learning. Neural networks are more general than previous approaches but they still need to be customized by humans for different applications. And none of these programs are going off and learning how to play other games on their own. They need to be led.


> And none of these programs are going off and learning how to play other games on their own

With the danger of sounding stupid, isn't that what Deep-Q did?

http://arstechnica.com/science/2015/02/ai-masters-49-atari-2...

> Scientists tested Deep Q’s problem-solving abilities on the Atari 2600 gaming platform. Deep-Q learned not only the rules for a variety of games (49 games in total) in a range of different environments, but the behaviors required to maximize scores. It did so with minimal prior knowledge, receiving only visual images (in pixel form) and the game score as inputs.

Sure, the problem space is still fairly limited, but the AI did learn new games without much guidance at all.


>They need to be led.

We should rejoice in that fact. We are woefully unprepared for true learning programs as a species. Let us hope that between now and the time we do manage to create one that we mature to the point where we don't create these thinking entities for malicious purposes.


> Let us hope that between now and the time we do manage to create one that we mature to the point where we don't create these thinking entities for malicious purposes

That's a long way off and we'll face a lot of other problems before then.

For instance, fear mongering of a looming AI. We're better off focusing on teaching kids computer science and allowing them to see for themselves how theoretical and unscary true AI remains.


>That's a long way off

People, up until very recently, said that computers being able to beat people at Go was a long way off too.


A singularity is a point where things are advancing so fast on different fronts it's impossible to make predictions about how fast things are going or where the next advance will come from. I think we're about there with "constrained AI", because everyone was looking at Facebook's bot which was playing with a four-stone handicap when Google skipped straight to champion-beating.


That is not the general understanding of singularity...

> The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself, up to the point of a runaway effect—an intelligence explosion [1][2]—that yields an intelligence surpassing all current human control or understanding.

- Wikipedia


I hear there's still lots of work to be done for real time strategy computer games such as Starcraft (and even those rely on a great deal of easy to compute mechanical skills).

How much time it will take however, I cannot begin to guess.


That's mostly a case of little research, not a hard problem.


80% of StarCraft is APM - actions per minute. Strategy may exist, but it's nothing a computer can't replicate with human-coded heuristics. If a computer doesn't have to worry about a limit on APM, I'm sure it would obliterate any human StarCraft player.


I regularly beat players with 2-3 times my APM. I always wonder what they are doing with all their clicks.


I'm just saying that the APM of the best players is also in the top percentile. It's a prerequisite that already culls people who might be better tacticians.


> because chess was just the start.

I don't believe that Deep Blue's victory in Chess has any similarity to AlphaGo's wins. DB was just brute-forcing its way through 100s of millions of positions. AlphaGo is using some learned strategy behind its moves.


From a go player's perspective, this would have been a more exciting match. Ke Jie is largely considered the strongest player in the world right now whereas Lee Sedol is marginally weaker but has better name recognition.


I'd very much like to see Ke Jie try (and also beat) Alphago while it appears to be in the realm of possibility.


He was making a joke on Weibo, since Google is banned in mainland China, not this "horrible" title :D


Hah, that would be funny. AlphaGo can't beat him because Google can't go to China? Touché


And Ke Jie was born in 1997. Wow.


He was born after I started playing go in the 1990s. Man, I'm old.


yeah, his picture was a little different from the Chinese GrandMaster I had in my head :D


I expected others to say that. Can't Google just allow anyone to play against it online after these 5 matches with Sedol conclude?


With the amount of computing power they throw at this, I doubt that's something even Google could easily fund.


Is running AlphaGo really that expensive? I get that training deep learning systems is very computationally expensive, but my understanding is that running them is orders of magnitude cheaper.

edit: table showing gains as CPUs are added - https://i.imgur.com/xxdWUtV.png


It's a huge cluster of machines.


Got any numbers?


The Economist says "The version playing against Mr Lee uses 1,920 standard processor chips and 280 special ones developed originally to produce graphics for video games"


If you price by the GCE calculator[0] it's $1920 to rent 1920 CPU cores for 20 hours. This doesn't include GPU costs as they don't seem to have GPUs available on cloud, but I could see that easily doubling the costs.

[0] https://cloud.google.com/products/calculator/#id=f63f1fe0-56...
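(Back-of-the-envelope check of that figure, using the roughly $0.05 per core-hour implied by those numbers; actual GCE pricing varies by machine type and region:)

    # Back-of-the-envelope check; the $0.05/core-hour rate is simply what the
    # $1920 / 1920 cores / 20 hours figure above implies, not an official price.
    cores = 1920
    hours = 20
    rate_per_core_hour = 0.05  # assumed; varies by machine type and region
    cpu_cost = cores * hours * rate_per_core_hour
    print(f"CPU-only estimate: ${cpu_cost:,.0f}")          # about $1,920
    print(f"If GPUs roughly double it: ~${2 * cpu_cost:,.0f}")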


I daresay the Economist is unclear about whether they're talking chips or cores, so it could be several times more cores.


So, that's actually not very expensive to rent for the duration of five games.


I don't know how long a game takes, but I don't think Google could afford to let anybody play, as stated in the original post.


Charge $1000 per game and pay out $10,000 if the human wins. Could be a nice earner. ;)


I'll pay for two simultaneous sessions of AlphaGo please :). I'll even let it go first in one of them!


It would be nice if they preserved a snapshot of AlphaGo as it was at the start of the Sedol games, as a historic thing. Then if they open sourced it, people could go play it too. They have mostly open sourced their code, but not, I think, the learning data.


They would only need to open the learned coefficients, not the data it learned from.


I like the way you think. :D


Clever, but they could easily detect this on their servers.


But that would prove that they are not overfitting their opponent. Which would be nice.


Not really. Even if it would only take a single dedicated GPU to win in real time against a human opponent, you could not offer a service like that for free. If Google massively overfits to their opponent, that equation still holds true, so it's not really proof. It would only be proof if they switched off 10% or so of the array that powers their offering right now and it lost to Lee Sedol consistently. For all you know, they have a huge margin of error.


I suspect that training the AI is the most CPU intensive part.

If they released the (trained) AI to everybody, that would prove that the training phase is general enough to beat any player, not just one.

Google doesn't have to pay for the CPU time. Ke Jie can find sponsors if he pretends he can beat AlphaGo.

(Edit: clarify)


Alas, the software is tied to Google's platform. Just releasing it as open source won't do. (That's why they didn't open source Google Reader. The source would have been useless for running a Reader clone, and it would have given away lots of trade secrets about Google infrastructure.)


I don't doubt there is an asymmetry between the number of machines/GPUs they throw at the problem during a match and during the run-up to a match, but even so they will have to have some margin of error if they expect to win in the first place. And besides that, whatever that pile of hardware is, the infrastructure required to run it and the people involved are not free.


Isn't one of the features of AlphaGo that it trains one part of the AI while playing the game?


I really wonder who would bother reading this entire article and still find any value in the last line:

> Go is a complex ancient Chinese mind game played on a board with a 19x19 grid of black lines.


I wonder whether, if you jacked the game up to 23x23 or larger, the computer or the human would have more difficulty, or whether it would make no difference.


The larger the board, generally, the harder for the computer to play it, because it can't just pre-calculate all of the possible games.

Computers have been beating humans at 9x9 Wei Qi for quite a while, because they can calculate all possible games just like Chess.

However, if Wei Qi machines are now being written to do pattern recognition, rather than move by move brute-forcing, it may be a different story entirely.
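To get a feel for how fast the space grows with board size, here's a crude count of raw board configurations (each point empty, black, or white). It ignores legality and ko, so it overstates the real numbers, but it shows the trend:

    # Crude upper bound on board configurations: 3 states per intersection.
    # Ignores legality and ko, so it overstates the true counts, but it shows
    # how quickly the search space grows with board size.
    import math

    for size in (9, 13, 19, 23):
        log10_positions = size * size * math.log10(3)
        print(f"{size}x{size}: ~10^{int(log10_positions)} raw configurations")
    # 9x9 is ~10^38, 19x19 is ~10^172, 23x23 is ~10^252.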


Computers cannot calculate all possible chess games. The game tree complexity of chess, 10^123, is greater than the number of atoms in the universe.


The problem space shrinks as the game develops though, so there does come a point where it is possible to calculate all possible moves/outcomes.


Of course, one can make the same argument about go. As I understand it computers are quite good at the endgame.


"Deep Blue is stupid, you know, it's a machine." Garry Kasparov before being defeated by Deep Blue.


Famous last words.


Read the article.


I did.


It's on, seriously. Challenge accepted.

I think what Ke is underestimating, however, is that the computer's learning rate is better than a human's. Even if the computer is not at his level now, in a little while it will exceed him.

Only a matter of time, Ke. Accept it.


If you had read the article Ke pretty much says this.


AHAHAHAHA. "pretty much", "had read" -- so you can read what others are doing across time and space? Riiiight. Moron. Why would I listen to someone whose joy comes from telling other people they were wrong on the internet. SO meaningful. hahaha.

This is worth revisiting -- like the absolute idiocy of someone who assumes that if another person, wholly unlike themselves, had read the same article, then they would reach the same conclusion. Group think loser. I mean, that's your assumption, right? Hahahah. Flubbing moron. Hahaha.


Are you OK? You seemed like a cool guy in the who's hiring comment but in this thread you sound nuts. Look for help if you need it.


???

"According to the pace of AI's progress, it won't be long for AlphaGo to beat all human players, it may happen a few years later, even a few months later," added Ke.


??? And? So from your point of view you just proved someone wrong on the internet, well done. If Ke actually appreciated this, he'd see there was no point to playing a computer. But I don't expect you to understand that...because then you'd have nothing to say " you were wrong " about. Whatever. Idiot.


I play against the computer in a variety of games for fun, I'm sure Ke would play Go with one for pretty much the same reason.



