
I'm very interested in what sort of adversarial gameplay can be used against these sorts of AIs. I've read that the chess versions all have weak points, where if you know you're playing an AI, you can easily exploit them and win. These exploits usually seem to involve strategies that unfold over many moves, since the AI's look-ahead cannot search very many plies deep.

I suspect that once a human can break the AI's micro-spamming tactics, they can consistently beat it with a longer-range strategy. And perhaps this approach is undefeatable by an AI in general, since AI play seems to be limited to either short-term spam attacks or long-range lookup tables. The search space for short-term play is tractable, but long-term play is exponential (a rough sketch of the blow-up is below), so AI brute forcing will lose out to human principled reasoning.

Thus, without human input in the form of game tables, the AlphaZero-variant AIs can only optimize for short-term spamming. And the AlphaGo/Deep Blue variants are always downstream of human play, since they depend on databases of human games for deriving long-term strategy and insight.
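To make the exponential claim concrete, here's a minimal back-of-the-envelope sketch in Python. The branching factor of ~35 is an assumption on my part (a commonly cited average number of legal moves in a chess position), and real engines prune the tree heavily with alpha-beta search and evaluation heuristics, so treat this as the naive worst case rather than what an engine actually does:

    # Naive game-tree size as a function of search depth (plies),
    # assuming roughly 35 legal moves per position on average.
    BRANCHING_FACTOR = 35

    for plies in range(1, 11):
        positions = BRANCHING_FACTOR ** plies
        print(f"{plies:2d} plies -> ~{positions:.1e} positions")

At 10 plies that's already on the order of 10^15 positions, which is why raw look-ahead alone can't reach the depths where long-range plans play out.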



> I've read that the chess versions all have weak points, where if you know you are playing an AI, you can easily exploit them and win.

That may have been true in the past, but it hasn't been the case for more than a decade. A laptop running Stockfish handily beats top human players, even if the players know a year in advance the exact hardware and software revision they will be playing against.

https://en.wikipedia.org/wiki/Human%E2%80%93computer_chess_m...

Kramnik, then still the World Champion, played a six-game match against the computer program Deep Fritz in Bonn, Germany from November 25 to December 5, 2006, losing 4–2 to the machine, with two losses and four draws. ... There was speculation that interest in human–computer chess competition would plummet as a result of the 2006 Kramnik–Deep Fritz match. According to McGill University computer science professor Monty Newborn, for example, "I don't know what one could get out of it [a further match] at this point. The science is done." The prediction appears to have come true, with no further major human–computer matches, as of 2019.



Judging a single board position better than Stockfish isn't enough to win a game, much less a tournament series.


But it seems there is a more general weakness in long-term strategy underlying that misevaluated board position.




