Seems unlikely. The focus appears to be on improving AI through "vision": the idea is to make the AI learn skills the same way a human would (at least in the first years of life). Google's AlphaGo also learned from screen pixels.
So these would be human-like bots rather than the bot-like bots you normally have in games. The bot would simply learn by doing until it masters the game, not by getting access to the game's internal algorithms.
> Google's AlphaGo also learned from screen pixels.
It did not. It received the state of the board as one array, plus additional arrays encoding global state like captures and komi (which is not visible purely from the board representation), and a few extra features to help it out with things like ladders. It was architected with convolutional layers, but those operated over the Go grid, not over pixels. See the AlphaGo paper, p. 11, for the exact structure of the input: http://www.postype.com/files/2016/04/08/16/05/03384c91046e8e...
Could it have learned from pixels (augmented by the additional necessary global state)? Sure. But that would've been a waste of computation since the visual layout of a Go board is fixed and static, unlike Atari games.
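To make the distinction concrete, here is a minimal sketch of what "the state of the board as one array" looks like as network input. The function name and the three-plane layout are my own simplification; the real AlphaGo used 48 feature planes (liberties, ladder status, move history, etc.), but the principle is the same: binary planes over the 19x19 grid, fed to conv layers exactly like image channels.

```python
import numpy as np

def board_to_planes(board, to_play):
    """Encode a 19x19 Go position as binary feature planes.

    A simplified sketch of AlphaGo-style input: the real network used
    48 planes (liberties, ladders, move history, ...), not just these 3.

    board: 19x19 array with 1 = black stone, -1 = white stone, 0 = empty.
    to_play: 1 if black to move, -1 if white to move.
    """
    board = np.asarray(board)
    own = (board == to_play).astype(np.float32)    # stones of the player to move
    opp = (board == -to_play).astype(np.float32)   # opponent's stones
    empty = (board == 0).astype(np.float32)        # empty intersections
    # Stack into a (3, 19, 19) tensor; a conv net consumes these planes
    # the same way it would consume the channels of an image.
    return np.stack([own, opp, empty])
```

Note there is no pixel decoding anywhere: the grid positions are already exact, which is why learning them from rendered pixels would have been wasted computation.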
> Google's AlphaGo also learned from screen pixels.
Source? That literally seems to make zero sense to me. Go can be represented in a super-simple state. Why make it spend millions of cycles learning to categorize pixels into that state you already have?
I would guess that they trained AlphaGo from many thousands of hours of match footage. Writing a computer vision script to segment / extract the data may cost cycles as you say, but would save many human hours by eliminating the need to re-watch the footage and literally type out state information for each move.
AlphaGo was actually trained directly on game state (plus some extra computed state like "how many liberties will I have if I play this move" or "will I win this ladder"). A huge number of pro games (and countless amateur games) are available on servers like KGS in a nice computer-digestible format.
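For reference, that "computer-digestible format" is SGF, and extracting moves from it is trivial compared to computer vision. A minimal sketch (my own function name; real SGF has many more property types, this handles only simple B[..]/W[..] move properties):

```python
import re

def sgf_moves(sgf_text):
    """Pull (color, col, row) moves out of an SGF game record.

    SGF encodes coordinates as letter pairs, 'a' = 0 through 's' = 18
    on a 19x19 board. An empty property like B[] is a pass.
    """
    coords = "abcdefghijklmnopqrs"
    moves = []
    for color, pos in re.findall(r";([BW])\[([a-s]{2})?\]", sgf_text):
        if pos:
            moves.append((color, coords.index(pos[0]), coords.index(pos[1])))
        else:
            moves.append((color, None, None))  # pass move
    return moves

sgf = "(;GM[1]SZ[19];B[pd];W[dp])"
# sgf_moves(sgf) -> [('B', 15, 3), ('W', 3, 15)]
```

So the training pipeline reads exact board states straight from records like this; no footage ever needs to be watched or transcribed.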
Again, that doesn't make sense because the moves have almost certainly already been typed out by somebody. It's the same in chess. There are databases containing millions of games.