Hacker News

I think the concept of "interpretability" is what you are getting at. I group that in with automatic feature engineering, since they are the same idea from different perspectives. Sometimes that is a benefit, sometimes it's not: http://blog.keras.io/how-convolutional-neural-networks-see-t...
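The linked Keras post visualizes what a convolutional filter responds to by running gradient ascent on the input image to maximize that filter's activation. A minimal toy sketch of that idea in NumPy (one fixed 3x3 kernel standing in for a trained conv filter; the kernel, sizes, and step size here are illustrative assumptions, not the post's code):

```python
import numpy as np

# Toy version of filter visualization by gradient ascent:
# activation(x) = sum of the valid cross-correlation of x with a fixed kernel,
# and we ascend x toward the input pattern that maximizes it.

rng = np.random.default_rng(0)
kernel = np.array([[-1., 0., 1.],
                   [-2., 0., 2.],
                   [-1., 0., 1.]])  # Sobel-like vertical-edge filter (assumed example)

def activation(x, k):
    """Sum of filter responses over every valid window position."""
    h, w = k.shape
    out = 0.0
    for i in range(x.shape[0] - h + 1):
        for j in range(x.shape[1] - w + 1):
            out += np.sum(x[i:i+h, j:j+w] * k)
    return out

def grad(x, k):
    """Gradient of activation() w.r.t. x: the kernel summed into
    every window position it touches (activation is linear in x)."""
    g = np.zeros_like(x)
    h, w = k.shape
    for i in range(x.shape[0] - h + 1):
        for j in range(x.shape[1] - w + 1):
            g[i:i+h, j:j+w] += k
    return g

x = rng.normal(0.0, 0.1, (8, 8))  # start from a small random "image"
a_start = activation(x, kernel)

for _ in range(20):
    g = grad(x, kernel)
    x += 0.1 * g / (np.linalg.norm(g) + 1e-8)  # normalized gradient ascent step

a_end = activation(x, kernel)
# x has drifted toward the edge-like pattern this filter responds to most
```

In a real network the activation is a deep, nonlinear function of the input, so the gradient comes from backprop rather than a closed form, but the ascent loop is the same shape.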


