
I have no doubt that neither Google nor HP made those errors maliciously. I was just curious as to whether it's possible to incorporate some sort of... tact?... into these recognition algorithms to avoid labeling people (or other things) offensively. Is it just a matter of a larger training set? It would be hard to cover all sorts of people in all sorts of poses, with all sorts of lighting conditions, etc.



It's not about tact, it's just the algorithm doing its best. For the algorithm to be capable of "tact", it would need to recognize that it's looking at a person (or whatever). And if it had recognized a person, then there wouldn't have been this problem, because it would simply have labeled it correctly.


You can certainly include tact. The algorithm might think it's a 51%-49% gorilla/person split, but a level above that chooses "person" as the answer: even though that choice will be wrong more often, the impact of the error is lower.
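A minimal sketch of the idea above, in Python: instead of taking the raw argmax of the model's probabilities, weight each candidate label's probability by the cost of wrongly assigning it. The labels, probabilities, and cost values below are all hypothetical, just to illustrate the decision layer.

```python
def pick_label(probs, error_costs):
    """Choose the label minimizing expected misclassification cost.

    probs: dict mapping label -> model probability
    error_costs: dict mapping label -> cost of wrongly assigning that label
    """
    # Expected cost of emitting a label = P(label is wrong) * cost of that error
    expected_cost = {
        label: (1.0 - p) * error_costs[label]
        for label, p in probs.items()
    }
    return min(expected_cost, key=expected_cost.get)

# The 51/49 split from the comment: the raw argmax says "gorilla",
# but the far higher cost of that error flips the decision to "person".
probs = {"gorilla": 0.51, "person": 0.49}
error_costs = {"gorilla": 100.0, "person": 1.0}  # offensive error weighted heavily
print(pick_label(probs, error_costs))  # -> person
```

With equal costs this reduces to the ordinary argmax; the asymmetry is what encodes the "tact".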

This is why you shouldn't just train your system to hit higher accuracy figures but should also investigate the kinds of errors it's making. This needs to be done while thinking about your specific use case and domain.
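One simple way to investigate error types, as suggested above, is to break misclassifications down by (true label, predicted label) pair rather than looking only at overall accuracy, so the high-impact mistakes stand out. A hypothetical sketch, with made-up labels:

```python
from collections import Counter

def error_breakdown(y_true, y_pred):
    """Count each kind of misclassification separately.

    Returns a Counter mapping (true_label, predicted_label) -> count,
    for pairs where the prediction was wrong.
    """
    return Counter(
        (t, p) for t, p in zip(y_true, y_pred) if t != p
    )

y_true = ["person", "person", "cat", "person"]
y_pred = ["person", "gorilla", "cat", "person"]
# Overall accuracy is 75%, but the breakdown shows exactly which
# error occurred -- the one whose impact matters most for this domain.
print(error_breakdown(y_true, y_pred))
```

Two systems with identical accuracy can have very different breakdowns, which is the whole point of examining errors per use case.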




