
How hard is it to determine why a probabilistic or "deep learning" system made a specific choice?



The hard part is defining "why". Machine learning methods can produce a model that fits the data very well, and you can easily demonstrate that fit, but understanding "why" the model works is much harder.

There is a tool called Eureqa which was specifically designed to produce understandable models, in the form of mathematical equations. A biologist used it on data from one of his experiments, and it produced a very simple equation that fit the data perfectly. But he couldn't publish it, because he couldn't understand or explain why the equation worked or what it meant.
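Eureqa itself is proprietary, but the underlying idea, symbolic regression, is easy to demo. Here's a minimal sketch using the open-source gplearn library (my choice of tool, not something from the story), with made-up toy data whose hidden law is y = x^2 + x:

    # Symbolic-regression sketch in the spirit of Eureqa,
    # using gplearn (pip install gplearn). Toy data only.
    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    rng = np.random.RandomState(0)
    X = rng.uniform(-2, 2, size=(200, 1))
    y = X[:, 0] ** 2 + X[:, 0] + rng.normal(0, 0.05, size=200)

    est = SymbolicRegressor(population_size=2000,
                            generations=20,
                            function_set=('add', 'sub', 'mul'),
                            random_state=0)
    est.fit(X, y)

    # The fitted model *is* an equation, readable by a human,
    # unlike a neural net's weight matrices.
    print(est._program)  # e.g. add(mul(X0, X0), X0)

Even then the thread's point stands: the recovered equation tells you what relationship holds in the data, not why it holds.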


With a PGM (probabilistic graphical model) you can just look at the graph to see how each node is weighted.

That is one of the advantages of PGMs: they tell you why they think something. Combining this with domain experts is a killer advantage of PGMs. For the soundbite: PGMs help the domain expert figure out where to go next.
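To make "just look at the graph" concrete, here is a toy Bayesian network built with the pgmpy library (my choice of tool; any PGM toolkit works, and the structure and numbers are invented for illustration). The conditional probability tables are exactly the node "weights" you can read off, and inference shows how evidence drives a conclusion:

    # Toy PGM with pgmpy (pip install pgmpy); network and
    # probabilities are made up for illustration.
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    model = BayesianNetwork([('Rain', 'WetGrass'),
                             ('Sprinkler', 'WetGrass')])

    cpd_rain = TabularCPD('Rain', 2, [[0.8], [0.2]])       # P(Rain)
    cpd_sprk = TabularCPD('Sprinkler', 2, [[0.6], [0.4]])  # P(Sprinkler)
    cpd_wet = TabularCPD('WetGrass', 2,                    # P(WetGrass | Sprinkler, Rain)
                         [[1.0, 0.2, 0.1, 0.01],
                          [0.0, 0.8, 0.9, 0.99]],
                         evidence=['Sprinkler', 'Rain'],
                         evidence_card=[2, 2])
    model.add_cpds(cpd_rain, cpd_sprk, cpd_wet)
    assert model.check_model()

    # The "why" is inspectable: print a node's table directly.
    print(cpd_wet)

    # Ask which cause best explains an observation.
    infer = VariableElimination(model)
    print(infer.query(['Rain'], evidence={'WetGrass': 1}))

Compare this with a deep net, where an individual weight means nothing in isolation: here each table row is a human-readable claim that a domain expert can check or veto.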



