
My understanding is that once a neural network is trained, you don't need as much computational power to put it to use.



In general, training neural networks requires far more computational power than using them does. However, AlphaGo combines neural networks with a traditional Monte Carlo search. This means you can improve the performance of a trained AlphaGo just by giving it more processing power. Indeed, with a sufficient amount of processing power, AlphaGo would converge to fully optimal play.
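To make the compute/strength link concrete, here's a toy sketch (not anything from DeepMind; the move names and win rates are made up): each candidate move has a hidden win rate that we estimate by random playouts. With a small simulation budget the choice is noisy; with a larger budget it reliably picks the best move, which is the sense in which throwing more processing power at a Monte Carlo search buys stronger play.

    import random

    # Hypothetical hidden win rates for three candidate moves.
    TRUE_WIN_RATES = {"a": 0.52, "b": 0.55, "c": 0.48}

    def playout(move):
        """One random playout after `move`; returns 1 for a win, 0 for a loss."""
        return 1 if random.random() < TRUE_WIN_RATES[move] else 0

    def choose_move(num_playouts):
        """Spread a fixed simulation budget across moves, pick the best estimate."""
        per_move = num_playouts // len(TRUE_WIN_RATES)
        estimates = {m: sum(playout(m) for _ in range(per_move)) / per_move
                     for m in TRUE_WIN_RATES}
        return max(estimates, key=estimates.get)

    # More simulations -> the truly best move ("b") is found more reliably.
    for budget in (30, 300, 30000):
        picks = [choose_move(budget) for _ in range(20)]
        print(budget, picks.count("b"), "/ 20 correct")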


It's not really the traditional Monte Carlo search; IIRC they're using UCT weighting for MCTS. Apologies if that's what you meant, but I think to most people "traditional Monte Carlo" means something different (probably uniform depth charges).
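For reference, the plain UCT selection rule (UCB1 applied to tree nodes) looks roughly like this; the function names here are mine, and the published AlphaGo work describes a PUCT-style variant that additionally weights exploration by the policy network's prior, so treat this as the vanilla formula rather than their exact rule.

    import math

    def uct_score(child_wins, child_visits, parent_visits, c=1.414):
        """UCB1 applied to a tree node: exploitation term + exploration term."""
        if child_visits == 0:
            return float("inf")  # always expand unvisited children first
        exploitation = child_wins / child_visits
        exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
        return exploitation + exploration

    def select_child(children):
        """children is a list of (wins, visits) pairs; return the index to descend into."""
        parent_visits = sum(v for _, v in children)
        return max(range(len(children)),
                   key=lambda i: uct_score(children[i][0], children[i][1], parent_visits))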



