
You cannot say: "This image was classified as a stop sign because this part recognized the shape, this part the color, and this part the text", which you could do with other approaches.

When it doesn't discover that it's a stop sign, how do you debug it? Did it recognize the shape... who knows?




> When it doesn't discover that it's a stop sign, how do you debug it? Did it recognize the shape... who knows?

Barring other analytic tools (like looking at which parts contribute the most to the wrong result), the same way you test other things when you have a (somewhat) black box:

Form hypotheses and test them.
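
(As an aside on the "which parts contribute the most" tools mentioned above: one very simple version is an input-gradient saliency map, i.e. which pixels most affect the winning score. The sketch below is purely illustrative — the pretrained ImageNet model, the preprocessing and the file name are placeholder assumptions, not the actual classifier under discussion.)

    import torch
    import torchvision.transforms as T
    from torchvision.models import resnet18, ResNet18_Weights
    from PIL import Image

    # Placeholder model and image -- swap in whatever classifier you're debugging.
    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("stop_sign.jpg").convert("RGB")   # hypothetical test image
    x = preprocess(img).unsqueeze(0).requires_grad_(True)

    logits = model(x)
    pred = logits.argmax().item()
    logits[0, pred].backward()                         # d(winning score) / d(pixels)

    # Per-pixel influence: largest absolute gradient across the colour channels.
    saliency = x.grad.abs().max(dim=1).values          # shape (1, 224, 224)
    print("predicted class index:", pred)
    print("strongest-influence pixel (flat index):", saliency.argmax().item())

In practice you'd overlay the saliency as a heatmap on the image, but even the raw numbers tell you whether the model is looking at the sign or at the background.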

Maybe it didn't recognise the shape, so try adjusting the image to clean it up, and once you have a version it recognises, gradually reduce and vary the differences between the two. Maybe it turns out that, e.g., the stop sign is slightly covered in the image, making the shape look wrong, and there's nothing like that in the training set.

Maybe the hue or brightness is off and the training set is mostly lit a certain way. Test it by adjusting the hue and brightness of the test image and seeing if it gets recognised (a sketch of that kind of sweep follows below).

And so on.
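
To make the hue/brightness test above concrete, here's a minimal sketch — it assumes an off-the-shelf pretrained ImageNet model as a stand-in for whatever classifier you're actually debugging, and a hypothetical "failing_stop_sign.jpg":

    import torch
    import torchvision.transforms as T
    import torchvision.transforms.functional as TF
    from torchvision.models import resnet18, ResNet18_Weights
    from PIL import Image

    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def predict(img):
        """Return (class index, confidence) for a PIL image."""
        with torch.no_grad():
            probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
        conf, idx = probs.max(dim=1)
        return idx.item(), conf.item()

    img = Image.open("failing_stop_sign.jpg").convert("RGB")   # hypothetical failing image
    print("original:", predict(img))

    # Hypothesis: the lighting is off. Sweep brightness up and down.
    for b in (0.5, 0.75, 1.25, 1.5, 2.0):
        print(f"brightness x{b}:", predict(TF.adjust_brightness(img, b)))

    # Hypothesis: there's a colour cast. Sweep the hue.
    for h in (-0.1, -0.05, 0.05, 0.1):
        print(f"hue shift {h:+}:", predict(TF.adjust_hue(img, h)))

If the label flips or the confidence jumps somewhere in the sweep, you've narrowed the hypothesis down to something you can go and check against the training set.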

There are plenty of other fields where we are similarly constrained from taking apart the thing we're observing, so this is the kind of problem scientists deal with all the time.

Within comp. sci. we're just spoiled in that so much of what we do can be easily instrumented, isolated, and tested in ways that often let us determine clear, specific root causes through analysis.


It's definitely possible to get insight into how a CNN would classify something like a stop sign.

This paper (Zeiler & Fergus, "Visualizing and Understanding Convolutional Networks") does a good job of showing how CNNs learn a hierarchy of increasingly complex features to classify images: http://arxiv.org/abs/1311.2901
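
One of the experiments in that paper is occlusion sensitivity: slide a grey patch across the image and record how much the score of the predicted class drops, which maps out which regions the network actually relied on. A rough sketch of that idea (the model, patch size, stride and file name here are arbitrary placeholder choices):

    import torch
    import torchvision.transforms as T
    from torchvision.models import resnet18, ResNet18_Weights
    from PIL import Image

    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    x = preprocess(Image.open("stop_sign.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image
    with torch.no_grad():
        base = model(x).softmax(dim=1)
    target = base.argmax(dim=1).item()

    patch, stride = 32, 16
    heatmap = torch.zeros(224 // stride, 224 // stride)
    for i in range(heatmap.shape[0]):
        for j in range(heatmap.shape[1]):
            occluded = x.clone()
            # Zero in normalised space is roughly the mean (greyish) colour.
            occluded[:, :, i*stride:i*stride+patch, j*stride:j*stride+patch] = 0.0
            with torch.no_grad():
                p = model(occluded).softmax(dim=1)[0, target]
            heatmap[i, j] = base[0, target] - p   # big drop => the model needed this region

    print("region with the largest score drop (row, col):",
          divmod(heatmap.argmax().item(), heatmap.shape[1]))

For the stop sign case, if the biggest drops show up nowhere near the sign itself, that's a strong hint the network latched onto something else in the scene.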




