This is not true. We can take a trained network and manually examine its weights to determine what features it has learned, even high-level features. You can do it right now by examining a pretrained network (see the sketch below).

People don't bother because:

1) it's a very boring problem (we already have a high-level view of what networks learn through various visualizations, and what you'd learn would be specific to one network trained on one dataset)

and 2) it's very tedious and the work doesn't carry over (you'd have to redo it for every new dataset and every new model).
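
To make "examining a pretrained network" concrete, here is a minimal sketch, assuming PyTorch and torchvision are installed; the model (ResNet-18) and the layer inspected are arbitrary examples, not anything specific to this thread:

    import torchvision.models as models

    # Load a pretrained ResNet-18 and pull out the weights of its first
    # convolutional layer, which are just a tensor you can inspect directly.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    filters = model.conv1.weight.detach()   # shape: (64, 3, 7, 7)

    # Each of the 64 filters is a 3x7x7 block of weights; rendered as small
    # images, many resemble oriented edge and color-blob detectors.
    for i, f in enumerate(filters[:5]):
        print(f"filter {i}: min={f.min().item():.3f}, max={f.max().item():.3f}")

Deeper layers can be probed the same way, e.g. by checking which inputs most strongly activate a given channel, which is where the usual visualizations come from.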




We might be thinking of different scopes of machine learning.

You could study the weights of AlphaGo's entire neural network for ten thousand years and become no better at Go. The only way AlphaGo can help us improve at Go is by showing us what it has learned through the games it plays.


You're dancing around, and haven't really defined, what it means to "learn Go."

Sure, humans wouldn't become better at Go. But that's a limitation of the human brain (we're not good at mathematical memorization and computation).

For all we know, what the network has learned about Go (a highly complex and interconnected set of statistical dependencies) is all there is to learn about Go. You're implicitly assuming that what the network learns about Go is guaranteed to be translatable into something humans can learn.

On the contrary, what the network learns is merely reducible, with loss of accuracy, to what humans can understand. That reduction is an active area of research (feature visualization and explanation methods), but it's tangential to your point.
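
For a concrete sense of what those feature visualizations involve, here is a minimal sketch of activation maximization (one common technique), again assuming PyTorch/torchvision; the layer and channel chosen are arbitrary illustrations, not anything from this discussion:

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)   # only the input image gets optimized

    # Capture the output of an intermediate layer with a forward hook.
    activations = {}
    model.layer3.register_forward_hook(lambda m, i, o: activations.update(out=o))

    # Start from noise and run gradient ascent on the input so one arbitrarily
    # chosen channel of that layer activates as strongly as possible.
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.05)
    channel = 10
    for _ in range(200):
        opt.zero_grad()
        model(img)
        loss = -activations["out"][0, channel].mean()
        loss.backward()
        opt.step()

    # `img` now roughly depicts the pattern that channel responds to.

Research-grade versions add regularization and image transformations so the result reads cleanly to a human, which is exactly the lossy reduction described above.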



