Hacker News

I wonder. If they have an exhaustive vocabulary would it be possible to generate a picture of what the system believes an object to look like? I know that there is something called generative models in machine learning and my guess is that it could be applied here.



It's possible, but generative models generally have to be trained for that purpose from the start. Failing that, you could, for every layer of the neural net, train another NN that "predicts" the layer below it (that layer's input) from its output. Then you can work your way down layer by layer to find an input that would produce a given output.
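A minimal numpy sketch of that layer-inversion idea, under stated assumptions: the "trained layer" is a made-up single ReLU layer, and the per-layer inverse network is just a linear map fit by least squares on (activation, input) pairs. All names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single layer of a trained net: h = relu(x @ W)
d_in, d_hid, n = 8, 16, 500
W = rng.normal(size=(d_in, d_hid))
relu = lambda z: np.maximum(z, 0.0)

X = rng.normal(size=(n, d_in))   # sample inputs to the layer
H = relu(X @ W)                  # that layer's activations

# Train a linear "inverse" model by least squares: find M with H @ M ≈ X,
# i.e. predict the layer's input from its output
M, *_ = np.linalg.lstsq(H, X, rcond=None)

# Given only an activation pattern, work back toward an input producing it
X_rec = H @ M
mse = np.mean((X_rec - X) ** 2)
baseline = np.mean(X ** 2)       # error of the trivial all-zeros guess
print(mse < baseline)            # inversion should beat the trivial guess
```

Chaining one such inverse per layer, from the top down, gives the full reconstruction procedure the comment describes; ReLU discards sign information, so each step is only approximate.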

Another way is to use some kind of optimization to find an input which produces that pattern (e.g. backprop to the pixels themselves). This will give you the image that most strongly triggers that output, though not necessarily a typical example.
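That optimization can be sketched in a few lines of numpy: gradient ascent on the input itself against a tiny invented network, with an L2 penalty so the "image" stays bounded. The network and its weights are made up for illustration; in practice you would backprop through a real trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tiny trained net: score(x) = w2 . relu(x @ W1)
d, h = 10, 6
W1 = rng.normal(size=(d, h))
w2 = rng.normal(size=h)

def score(x):
    return np.maximum(x @ W1, 0.0) @ w2

def grad_score(x):
    # Gradient of the score w.r.t. the input pixels themselves
    hid = x @ W1
    return W1 @ ((hid > 0) * w2)

# Gradient ascent on the pixels, with an L2 penalty keeping x bounded
x = 0.01 * rng.normal(size=d)
lr, lam = 0.1, 0.05
for _ in range(200):
    x += lr * (grad_score(x) - lam * x)

print(score(x))   # the optimized input drives the output unit strongly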


Well, you'd have to build a probabilistic model for each concept, whether on pixels or on features, and you could use it to generate images randomly. It might turn up some recognizable shapes.
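One simple version of that per-concept model, sketched in numpy: fit a Gaussian to the feature vectors associated with a concept, then sample new feature vectors from it. The "concept features" here are synthetic stand-ins for real pooled activations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feature vectors for one concept (e.g. activations pooled
# over images labelled with that concept); synthetic data for illustration
d, n = 5, 400
prototype = rng.normal(size=d)
feats = prototype + 0.3 * rng.normal(size=(n, d))

# Fit a Gaussian model of the concept, then draw random samples from it
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False)
samples = rng.multivariate_normal(mu, cov, size=10)

print(samples.shape)   # 10 new feature vectors for the concept
```

Decoding such sampled feature vectors back to pixels would need an inverse mapping like the one discussed above; sampling directly on pixels works the same way but with a much higher-dimensional Gaussian.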



