
The way in which it is superhuman matters. The calculator uses simple mechanical algorithms. AlphaGo uses a completely novel approach to deep learning that can likely be applied to many other systems/problems.



As long as those systems/problems involve a grid-based problem space where the goal is to successfully place stones under a limited set of rules.

OK, flippancy aside, there are two problems that make techniques like this single-domain: network design and network training.

The design uses multiple networks for different goals: board evaluation (which boards look good) and policy (which moves to focus on). Those two goals, evaluation and policy, are very specific to Go, just as classification layers are specific to vision and LSTMs are specific to sequence learning. A rough sketch of the two heads is below.
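To make the eval/policy split concrete, here is a minimal sketch in PyTorch. It is not AlphaGo's actual architecture (which used separate, much larger networks with many input feature planes); the class name, layer sizes, and single input plane are illustrative assumptions, and the two heads share one trunk only for brevity.

    import torch
    import torch.nn as nn

    class TwoHeadedGoNet(nn.Module):
        """Illustrative only: a shared trunk feeding a policy head and a value head."""
        def __init__(self, board_size=19, channels=32):
            super().__init__()
            # Shared convolutional trunk over the board grid (1 feature plane for illustration)
            self.trunk = nn.Sequential(
                nn.Conv2d(1, channels, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            # Policy head: logits over board points (which moves to focus on)
            self.policy_head = nn.Sequential(
                nn.Conv2d(channels, 2, kernel_size=1),
                nn.Flatten(),
                nn.Linear(2 * board_size * board_size, board_size * board_size),
            )
            # Value head: a scalar in [-1, 1] estimating how good the board looks
            self.value_head = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=1),
                nn.Flatten(),
                nn.Linear(board_size * board_size, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
                nn.Tanh(),
            )

        def forward(self, board):
            x = self.trunk(board)
            return self.policy_head(x), self.value_head(x)

    # Usage: one empty 19x19 board with a single feature plane
    net = TwoHeadedGoNet()
    policy_logits, value = net(torch.zeros(1, 1, 19, 19))

The point is that both outputs are defined in terms of the Go board itself, which is why they don't transfer to arbitrary domains.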

Network training is obviously hugely resource-intensive, and each significantly complex problem would need a similar investment.

It is amazing what a variety of problems DNNs have been able to do well on. However, the problems of network design and efficient training are significant barriers to generalization.

When network design can be addressed algorithmically, I think we may have an AGI. However, that is a significant problem in its own right: automating design adds another layer of computational complexity, so it is not on the immediate horizon and may be 50+ years down the road.



