Hacker News

I question how much of its success is down to the AI understanding the game and outplaying the opponent vs. simple culled brute force, especially when they can throw Google-level computing power at it, and they have mentioned using heat maps and looking at move sets.

It could be argued that it's only AI when it understands the game rules and plays to them, without iterating random choices until it finds a hit. Machine learning would sit between the two, but it's still not what many would consider true AI.




I played chess instead of Go, but I think they are similar enough ...

When you play, you consider a few possible moves, a few possible responses from the other player, and in each case a few possible responses of your own, and so on. I think amateur players like me consider only 3 or 4 levels (unless it's some easy but interesting situation like a multiple-capture chain), but professional players consider much deeper trees. So humans also iterate over possibilities, but we usually prune the tree more aggressively to fit the time and memory constraints of our current implementation.
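The "pruned tree" idea above can be sketched in a few lines. This is a toy alpha-beta search over a hypothetical pre-built game tree (nested lists whose leaves are position evaluations), not a chess engine; the human depth limit corresponds to where the tree is truncated.

```python
def alpha_beta(node, alpha, beta, maximizing):
    """Evaluate a game tree, skipping branches that cannot change the result."""
    if isinstance(node, (int, float)):  # leaf: a static evaluation of the position
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:  # prune: the minimizing opponent avoids this line
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alpha_beta(child, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:  # prune: the maximizing player avoids this line
            break
    return best

# Two-ply toy tree: after seeing the 2 in the right subtree, the 9 is never examined.
tree = [[3, 5], [2, 9]]
print(alpha_beta(tree, float("-inf"), float("inf"), True))  # → 3
```

The pruning is what makes deep human-style lookahead feasible: whole subtrees are discarded as soon as they are provably worse than a line already found.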

Unless you are Capablanca :). There is a famous quote attributed to him: "I see only one move ahead, but it is always the correct one." It's probably apocryphal, but it's funny. More info and similar quotes: http://www.chesshistory.com/winter/extra/movesahead.html


You are correct, but you are playing the game. You see and process the possibilities based on your understanding of the chess ruleset.

With machine learning plus brute force, you are simply trying X possibilities until something sticks and yields a high win percentage. That's different from playing the game using knowledge of the ruleset, even though, most of the time, the end result is the same.

This is what killed AI research in the 80s: that moment when everyone collectively saw they were simply working on a more powerful culled brute force (a pruned tree, as you call it), when they had all thought it was true AI.

True AI is hard. The required computational resources are immense even for something simple. Take a bishop on a chessboard. How would you teach an AI that the bishop moves only diagonally? It must first understand what it is looking at, then what "diagonally" means, then what "diagonally" means in this particular context. All with nodes of pattern matches and an input stream.
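For contrast, *hand-coding* the bishop's rule is trivial: it is one line of geometry once someone has already done the understanding. The hard problem the comment points at is having a system learn this from raw perception. A minimal sketch, with squares as hypothetical (file, rank) tuples indexed 0-7 and blockers ignored:

```python
def bishop_move_ok(src, dst):
    """A bishop move is legal (ignoring pieces in the way) iff the
    absolute file change equals the absolute rank change, and is nonzero."""
    df = abs(src[0] - dst[0])
    dr = abs(src[1] - dst[1])
    return df == dr and df != 0

print(bishop_move_ok((2, 0), (5, 3)))  # → True   (c1 to f4: a diagonal)
print(bishop_move_ok((2, 0), (2, 5)))  # → False  (straight up the file)
```

The gap between this one-liner and a system that *discovers* "diagonal" from pixels is exactly the gap the comment is describing.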

I feel these young guns are falling into the same trap of calling machine learning AI, without the benefit of the experience an older researcher would have, having been through this before.


Actually, teaching AlphaGo the rules was easy. And what you call brute force is in fact intuition-based search. It learns to guess by intuition (the policy net) which moves to try, and to give up (the value net) on the bad ones. It's far from brute-force search, and that's why AlphaGo is so much better than the other Go software.
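The shape of that intuition-guided search can be illustrated with a toy sketch. This is not AlphaGo's actual algorithm (which is Monte Carlo tree search guided by deep networks); here the "policy" and "value" are hypothetical placeholder functions, just to show how a policy narrows the candidates and a value function replaces exhaustive play-out:

```python
def heuristic(state, move):
    """Hypothetical learned prior over moves (placeholder: prefer larger)."""
    return move

def policy(state, moves, k=2):
    """'Intuition': keep only the k moves the policy likes best."""
    return sorted(moves, key=lambda m: -heuristic(state, m))[:k]

def value(state):
    """Hypothetical learned evaluation of a position (placeholder: sum)."""
    return sum(state)

def search(state, moves, depth):
    """Expand only the policy's candidates; score leaves with the value fn."""
    if depth == 0 or not moves:
        return value(state)
    best = float("-inf")
    for m in policy(state, moves):
        rest = [x for x in moves if x != m]
        best = max(best, search(state + [m], rest, depth - 1))
    return best

# Most of the 5 moves are never examined at each level: the policy culls them.
print(search([], [1, 2, 3, 4, 5], 2))  # → 9
```

The point of the sketch: the search never enumerates the full move set, so its cost is governed by how good the learned intuition is, not by the branching factor of the game.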


It's not a conscious entity, but it's a network good at what it's trained for. Since the network is frozen, it doesn't really learn anything or try to "outplay" anyone (it doesn't even know there's an opponent). While it's impressively crafted machine learning, we shouldn't rush to call it AI. It seems to be pointing in the right direction, though, and, who knows, the "intelligent conscious agent" part may one day be built on top of this, with the same building blocks.


If we made a Turing test based on Go, would a Go player be able to tell whether they were playing a human or an AI?

That's why I think AlphaGo does manifest consciousness, in its play. It is not conscious of what we are conscious of, but rather limited to the domain of Go play.

It might even have developed concepts about the game that are completely alien to us, maybe untranslatable to us.


I think it would be a stretch to say that now. The neural net acts like a big lookup table at the moment, even if its states contain high-level concepts. Consciousness, however, is a human concept, so the system would need to be coupled to another, expressive system that we could relate to and communicate with. That's why the Turing test is proposed through language communication and not some other limited vocabulary.


It is around 1p (professional 1 dan) without any Monte Carlo search.



