I really want to build a Lego Mindstorms robot (or similar) that watches my phone with a webcam, holds a stylus, and plays Flappy Bird. I have played enough (high score 194) that I am pretty sure I know the right strategy, I just want to take human error out of the equation.
Are there open source vision libraries that are low-latency enough to play this in real time? Assuming the robot could keep up with some regular servos. Would be a very interesting project.
Edit: I'm thinking I could run an Android emulator, and then use MonkeyRunner with Python OpenCV processing my monitor to do the work. Anyone who has any relevant experience please reply, I'd love to hear if this is feasible.
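Roughly the loop I'm picturing (completely untested sketch; bird.png, the emulator window box, and should_flap() are placeholders for whatever strategy ends up working, and I'm using PIL for the screen grab plus adb for the tap instead of MonkeyRunner):

    import subprocess
    import cv2
    import numpy as np
    from PIL import ImageGrab  # grabs the desktop, so the emulator has to be visible

    BIRD_TEMPLATE = cv2.imread("bird.png", cv2.IMREAD_GRAYSCALE)  # placeholder asset
    EMULATOR_BOX = (0, 0, 480, 800)  # left, top, right, bottom of the emulator window

    def find_bird(frame_gray):
        """Return (x, y) of the best template match, or None if the match is weak."""
        result = cv2.matchTemplate(frame_gray, BIRD_TEMPLATE, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        return max_loc if max_val > 0.7 else None

    def tap(x=240, y=400):
        """Send a tap to the device/emulator via adb."""
        subprocess.call(["adb", "shell", "input", "tap", str(x), str(y)])

    while True:
        frame = np.array(ImageGrab.grab(bbox=EMULATOR_BOX))
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        bird = find_bird(gray)
        if bird is not None and should_flap(bird):  # should_flap() = the strategy, TBD
            tap()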
I was wondering if this would work for Flappy Bird. Due to the speed of the game, I think it is going to be quite a challenge.
P.S. If you are interested in the mindstorms robot part, I'll put a shameless plug for a robot I built some time ago (based on some work by David Singleton). Code is on github but here is a pycon video: http://pyvideo.org/video/1195/self-driving-lego-mindstorms-r...
Ohai! (I presented that Angry Birds bot at PyCon.) My bot is now called "Tapster" and I continue to make improvements. It's completely open-source; I'd love to expand the community of game-playing bot fans. (http://tapsterbot.com) I even created a 3d-printable LEGO Technic-compatible building material to make the bot. (http://bitbeam.org)
I'm working on training Tapster to play Flappy Bird. I'm going down the route of webcam + OpenCV.
This is why I love Hacker News: I can ask a question about getting a Lego robot to beat my favorite cell phone game, and there's someone reading who has already done it.
Would love to hear how it goes! I'm sure a blog post about that would be front page HN material.
If you're going to go as far as running it in an emulator... then why not just write something that works in-process and avoids the whole visual feedback / physical input loop altogether? That's how bots are written for MMOs like WoW.
I'd like to do it without altering the app, which makes it more like the robot is playing the game, not cheating it. This limits me to processing the game's output in real time.
(only with flappy bird) - they say it's addictive but you're taking this to another level. "Man I need to do a Lego Mindstorms robot with a webcam and stylus and OpenCV - I've gotten to 194 but I NEED more. I need to take the human element out of this equation...."
Neat AI approach to play it. When this first blew up 5 or so days ago I threw together a simple javascript bookmarklet that played the game at: http://www.mrspeaker.net/dev/game/flappy/
It's really poorly formatted and has unused code in it, but it was fun to automate and watch. http://pastebin.com/yTmdWgfC
To use it just install the bookmarklet, click on it and then start the game. Press space to make it die.
Very cool! It's remarkable that you can get a good score with effectively only two features: vertical distance from the lower pipe and horizontal distance from the next pair of pipes.
I suspect you could cut the training time by an order of magnitude with a different optimization algorithm, or just by varying alpha. For example, have you considered trying something like a line search?
http://en.wikipedia.org/wiki/Line_search
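For example, a basic backtracking line search is only a few lines (generic sketch, not tied to his training code; it just shrinks the step until the objective actually improves):

    import numpy as np

    def backtracking_line_search(f, grad, x, step=1.0, beta=0.5, c=1e-4):
        """Shrink `step` until f decreases enough along the negative gradient (Armijo condition)."""
        g = grad(x)
        fx = f(x)
        while f(x - step * g) > fx - c * step * float(g @ g):
            step *= beta
        return x - step * g

    # Toy usage: one step toward the minimum of f(x) = ||x||^2
    x = backtracking_line_search(lambda v: float(v @ v), lambda v: 2 * v,
                                 np.array([3.0, -4.0]))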
Reinforcement learning is not an optimization algorithm, and his times are consistent with an off-the-shelf Q-learning approach.
Reinforcement learning tries to find the optimal policy of actions from any given position in a discrete state space.
The policy is roughly a map from state -> action. But it understands the temporal nature of the world.
It starts knowing nothing, then discovers that hitting a pipe is really bad, and therefore that states close to nearly hitting a pipe are almost as bad. Over tons of iterations the expected negative value of hitting a pipe bleeds across the expected values of the state space.
With regression and optimization, it's never clear what you are optimizing against. Obviously hitting a pipe is bad. But what about the states near a pipe? Are they good or bad? There is no natural metric in the problem that tells you the distance from the pipe or what the correct action should be.
So that's the major difference between reinforcement and supervised learning.
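For concreteness, the core of an off-the-shelf tabular Q-learning loop is tiny. A generic sketch (the env interface, state discretization, and reward numbers below are just illustrative, not taken from the article):

    import random
    from collections import defaultdict

    # `env` is an assumed interface: reset() -> state, step(action) -> (next_state, reward, done).
    # For Flappy Bird the state could be the discretized (dx, dy) to the next pipe gap
    # and the reward something like +1 per frame survived, a large negative on a crash.
    ACTIONS = [0, 1]  # 0 = do nothing, 1 = flap

    def q_learn(env, episodes=10000, alpha=0.7, gamma=0.9, epsilon=0.1):
        Q = defaultdict(float)  # Q[(state, action)] -> expected return
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                if random.random() < epsilon:
                    action = random.choice(ACTIONS)                       # explore
                else:
                    action = max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit
                next_state, reward, done = env.step(action)
                best_next = max(Q[(next_state, a)] for a in ACTIONS)
                # Bellman update: bad outcomes near pipes bleed backwards
                # through the state space over many episodes.
                Q[(state, action)] += alpha * (reward + gamma * best_next
                                               - Q[(state, action)])
                state = next_state
        return Q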
How hard would it be to solve this deterministically?
Can someone comment on the game's physics? I am assuming:
- constant horizontal velocity
- gravity
- flapping implemented via impulse
- collisions are handled with bounding boxes
Maybe someone who knows something about optimal control (ODEs) can say whether this is analytically solvable? Of course there's still the practical stuff (numerical integration, I/O lag) to deal with, but I'm optimistic.
If you play the game, it's actually quite trivial to program deterministically. :-) Flapping is not implemented as an impulse. It's more like jumping in Mario - it instantaneously fires a canned "flap" movement that is always the same.
I tried it out in my own HTML5/JS Flappy clone (mine's not at all interesting or even done, but I felt I had to give it a try), and the movement seems really accurate.
I have no idea how they got to those numbers, however.
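Something along these lines is probably all a deterministic controller needs (all the constants here are made up; you'd have to measure the real ones from the game):

    # Assumes the physics listed above: constant horizontal speed, constant gravity,
    # and a "canned" flap that simply resets the vertical velocity.
    GRAVITY = 0.25          # px/frame^2, pulls the bird down
    FLAP_VELOCITY = -4.5    # px/frame, applied instantaneously on a flap
    LOOKAHEAD = 5           # frames to simulate ahead

    def should_flap(bird_y, bird_vy, gap_bottom_y):
        """Flap if coasting for LOOKAHEAD frames would drop the bird below the gap."""
        y, vy = bird_y, bird_vy
        for _ in range(LOOKAHEAD):
            vy += GRAVITY
            y += vy
        return y > gap_bottom_y   # y grows downward in screen coordinates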
Neat. You could also probably do this with a genetic algorithm. Thinking out loud here:
- Have a neural network with 6 inputs for:
- player x
- player y
- acceleration speed
- acceleration direction
- next pipe mid-point x
- next pipe mid-point y
- Two outputs of 0 or 1 for click or no-click
The fitness score would be how close the player is to the pipe mid-point. Hopefully, this would cause the bird to stay as close as possible to fly between the pipes. The genetic algorithm would select the best neural network that knows when to flap based on the current input state.
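Rough sketch of the loop I have in mind (nothing here is tested; play_one_game() is a stand-in for hooking into the game or a clone of it and returning that distance-based fitness):

    import random
    import numpy as np

    N_IN, N_HID = 6, 8
    N_W = N_IN * N_HID + N_HID   # two layers: 6 inputs -> 8 hidden -> 1 output

    def decide(w, state):
        """Thresholded single output: True = click, False = no-click."""
        w1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
        w2 = w[N_IN * N_HID:]
        return float(np.tanh(state @ w1) @ w2) > 0.0

    def fitness(w):
        # play_one_game() is a stand-in: run one game with this decision function
        # and score how close the bird stayed to the pipe mid-points.
        return play_one_game(lambda s: decide(w, np.asarray(s, dtype=float)))

    def evolve(pop_size=50, generations=200, elite=10, sigma=0.1):
        pop = [np.random.randn(N_W) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:elite]
            # Keep the best networks, refill the rest with mutated copies.
            pop = parents + [random.choice(parents) + sigma * np.random.randn(N_W)
                             for _ in range(pop_size - elite)]
        return pop[0]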
I'm curious: I've thought about using genetic algorithms for training neural nets before, but haven't seen it mentioned much. I imagine PSO could work as well. What is the general consensus on using algorithms other than back-propagation for neural net learning? When is it appropriate / not appropriate?
Some people frown upon training neural networks with genetic algorithms, just because gen algs are so random and hard to dissect. I think it's silly to discard anything that is a potential solution to a problem.
I go by a rule of thumb like this:
- If you are able to collect lots of training data (inputs and valid outputs) then use back-propagation. It's faster and you might get better results.
- If you don't know the outputs for inputs or if there are simply too many possible combinations (such as in the case of a game), then use a genetic algorithm. It's effectively a search engine that finds the best solution within the problem space (the solution being the optimal weights for the neural network).
I'd like to see a two-dimensional chart that plots Click vs Do Nothing for the two input parameters (vertical distance from the lower pipe and horizontal distance from the next pair of pipes).
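If the learned policy were dumped as a dict keyed by that (horizontal, vertical) pair, something like this would draw it (the policy format here is just a guess on my part, not the article's data structure):

    import matplotlib.pyplot as plt

    def plot_policy(policy):
        """`policy` maps (horizontal_dist, vertical_dist) -> "click" or "none"."""
        clicks = [(dx, dy) for (dx, dy), a in policy.items() if a == "click"]
        idles  = [(dx, dy) for (dx, dy), a in policy.items() if a != "click"]
        if clicks:
            plt.scatter(*zip(*clicks), c="red", s=8, label="Click")
        if idles:
            plt.scatter(*zip(*idles), c="blue", s=8, label="Do nothing")
        plt.xlabel("Horizontal distance to next pipe pair")
        plt.ylabel("Vertical distance from lower pipe")
        plt.legend()
        plt.show()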
Reinforcement learning (RL) is the future of real-world AI. Mark my words ;) RL has been around for a long time; it is the mix of optimal (in the optimization sense) decision making with machine learning (ML), and it benefits from the most recent advances in ML. As such it is likely to power the next batch of 'intelligent' applications and services.
So was I. I remember a posting about deep learning on HN where they managed to get the algorithm to play an NES game purely by analyzing the video. So who is up for the challenge?