Hacker News

Anyone know how many GPUs will be used for the game vs. Lee Sedol? The fact that distributed AlphaGo really scales up with more GPUs is impressive and scary.



My understanding is that once the neural network is trained, you don't need as much computational power to put it to use.


In general, training neural networks requires far more computational power than using them does. However, AlphaGo combines neural networks with a traditional Monte Carlo search. This means that you can improve the performance of a trained AlphaGo just by giving it more processing power. Indeed, with a sufficient amount of processing power, AlphaGo would converge to fully optimal play.
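To see why more compute helps the search part even after training, here is a toy illustration (not AlphaGo's actual code, and using plain random playouts rather than a neural network): pure Monte Carlo move selection for the game of Nim, where the player who takes the last stone wins. More playouts per move mean better win-rate estimates and stronger play.

```python
import random

def random_playout(stones, to_move):
    """Finish the game with uniformly random moves; return the winner (0 or 1).
    Assumes stones >= 1 on entry."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return to_move  # this player took the last stone and wins
        to_move = 1 - to_move

def mc_move(stones, player, playouts_per_move=200):
    """Estimate each legal move's win rate with random playouts;
    return the move (number of stones to take) with the best estimate."""
    best_move, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        wins = 0
        for _ in range(playouts_per_move):
            rest = stones - take
            if rest == 0:
                wins += 1  # taking the last stone wins outright
            elif random_playout(rest, 1 - player) == player:
                wins += 1
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move

random.seed(0)
# With enough playouts the search reliably finds the known optimal Nim
# move from 5 stones: take 1, leaving a multiple of 4 for the opponent.
print(mc_move(5, 0, playouts_per_move=2000))  # -> 1
```

The same principle scales up: with more simulations the estimates sharpen, which is why throwing extra CPUs/GPUs at an already-trained AlphaGo still buys playing strength.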


It's not really the traditional Monte Carlo search; IIRC they're using UCT weighting for MCTS. Apologies if that's what you meant, but I think to most people "traditional Monte Carlo" means something different (probably uniform random playouts, i.e. "depth charges").
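For anyone unfamiliar with the UCT weighting mentioned above: at each node, UCT descends into the child maximizing the UCB1 score, which balances exploitation (average reward so far) against exploration (an uncertainty bonus for rarely visited children). A minimal sketch, with made-up numbers for illustration (AlphaGo itself uses a variant that also folds in the policy network's prior):

```python
import math

def ucb1(total_reward, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score used by UCT to pick which child to descend into."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = total_reward / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

# Example: a barely explored child (lower mean reward) can outscore a
# well-explored one, because its exploration bonus is larger.
children = [
    {"reward": 6.0, "visits": 10},  # mean 0.6, well explored
    {"reward": 1.0, "visits": 2},   # mean 0.5, barely explored
]
parent_visits = 12
scores = [ucb1(ch["reward"], ch["visits"], parent_visits) for ch in children]
best = max(range(len(children)), key=lambda i: scores[i])
print(best)  # -> 1
```

The exploration term shrinks as a child accumulates visits, so the search gradually concentrates simulations on the moves that look best.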


Not sure what they will use vs. Lee Sedol, but their optimal setup was 1202 CPUs and 176 GPUs, at least according to their paper (see page 17):

https://www.dropbox.com/s/vbv639tavdza2l3/2016-silver.pdf



