> Think of the classic stereotype that all Asian faces look alike
> to Europeans: that's OK for still labeling a human face, and useful.
> But for image compression to have different quality based on the
> subject would be useless!
You bring up a very interesting phenomenon, but I think your example actually supports my assertion. My understanding is that Europeans who (presumably having some initial difficulty telling Asian faces apart) go on to live in Asia (or live among Asians in other contexts) report that, after a while, Asian faces start to "look white" to them. I would suggest that plastic changes (i.e., learning) in the brain's facial recognition circuitry underlie this changing qualitative report. In other words, it's not that the faces are miscategorized as having more Caucasian features, but rather that they start to look more like "familiar faces".
An extreme case of this not happening can be found among persons with prosopagnosia (face blindness). These people have non- or poorly functioning facial processing circuitry and exhibit great difficulty distinguishing faces. Presumably to the extent they can distinguish people at all, they must use specific features in a very cognitive way ("the person has a large nose and brown hair") rather than being able to apprehend the face "all at once".
Incidentally, I think there are myriad other examples of this phenomenon, especially among specialists (wine tasters, music enthusiasts, etc.) who have highly trained processing circuitry for their specialties and can make discriminations that the casual observer simply cannot. Another example that just came to mind is that of distinguishing phonemes that are not typical in one's native tongue. One reason these sounds are so difficult for non-native speakers to produce is that they are difficult for non-native speakers to distinguish, and it simply takes time to learn to "hear" them.
All this is to say that your perceptual experience is not as stable as you think it is. Any sort of AI compression need only be good enough for you or some "typical person". If the compressor were trained on Asian faces (and others, besides) then it should be able to "understand" and compress them, perhaps even better than a naive white person could distinguish them. I could even imagine the AI being smart enough to "tune" its encoding to the viewer's preferences and abilities.
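The "trained compressor" idea is easy to demonstrate in miniature. Here's a toy sketch (my own illustration, not any real codec): stand in for "faces" with synthetic low-rank vectors, learn an eigenfaces-style PCA code from one population, and compare reconstruction error on the "familiar" training population versus a structurally different "unfamiliar" one. The populations, dimensions, and rank are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for face images: 64-dim vectors drawn from two
# different low-rank distributions ("familiar" vs "unfamiliar").
def make_population(rank, n, dim=64):
    basis = rng.normal(size=(rank, dim))   # population-specific structure
    coeffs = rng.normal(size=(n, rank))
    return coeffs @ basis

familiar = make_population(rank=5, n=500)      # training distribution
unfamiliar = make_population(rank=5, n=500)    # same size, different structure

# "Train" a compressor on the familiar population: keep the top-k
# principal components (eigenfaces-style PCA via SVD).
k = 5
mean = familiar.mean(axis=0)
_, _, Vt = np.linalg.svd(familiar - mean, full_matrices=False)
components = Vt[:k]                            # the learned k-number code

def reconstruction_error(x):
    code = (x - mean) @ components.T           # compress to k numbers
    recon = code @ components + mean           # decompress
    return np.mean((x - recon) ** 2)

err_familiar = reconstruction_error(familiar)
err_unfamiliar = reconstruction_error(unfamiliar)
# The learned code reconstructs its own training distribution far
# better than the distribution it never saw.
print(err_familiar < err_unfamiliar)
```

The same fixed bit budget (five numbers per "face") is nearly lossless for the population the compressor learned and badly lossy for the other one, which is exactly the asymmetry the analogy to face familiarity predicts.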