The researchers suggest that the current crop of machine learning architectures may be inferring something far more fundamental (or, at least, unexpected) from images than was previously thought...
Is it more plausible that this shows they are inferring something fundamental than that they are merely differentiating images on the basis of accidental (i.e. non-essential) features?