
This caught my attention:

The researchers suggest that the current crop of machine learning architectures may be inferring something far more fundamental (or, at least, unexpected) from images than was previously thought...

Is it really more plausible that this shows they are inferring something fundamental, rather than that they are simply differentiating images on the basis of some accidental (i.e. non-essential) features?
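
To make the "accidental features" worry concrete, here is a minimal, purely hypothetical sketch (it has nothing to do with the researchers' actual setup): synthetic 8x8 "images" in which the label happens to be correlated with a single corner pixel (a watermark), alongside a weaker genuine signal in the centre. A plain logistic-regression classifier trained on such data can score well by latching onto the accidental corner pixel, and its accuracy drops once that pixel is neutralised at test time.

    # Toy illustration of a classifier exploiting an accidental feature.
    # Everything here is synthetic and hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_images(n, with_watermark=True):
        # "Essential" content: class 1 has a slightly brighter centre patch.
        X = rng.normal(0.0, 1.0, size=(n, 8, 8))
        y = rng.integers(0, 2, size=n)
        X[y == 1, 3:5, 3:5] += 0.5          # weak genuine signal
        if with_watermark:
            X[y == 1, 0, 0] += 3.0          # strong accidental signal (corner pixel)
        return X.reshape(n, -1), y

    def train_logreg(X, y, steps=500, lr=0.1):
        # Plain logistic regression fitted by gradient descent (numpy only).
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            g = p - y
            w -= lr * (X.T @ g) / len(y)
            b -= lr * g.mean()
        return w, b

    def accuracy(w, b, X, y):
        return (((X @ w + b) > 0).astype(int) == y).mean()

    Xtr, ytr = make_images(2000, with_watermark=True)
    w, b = train_logreg(Xtr, ytr)

    Xte_wm, yte_wm = make_images(1000, with_watermark=True)
    Xte_clean, yte_clean = make_images(1000, with_watermark=False)

    print("accuracy with watermark:   ", accuracy(w, b, Xte_wm, yte_wm))        # high
    print("accuracy without watermark:", accuracy(w, b, Xte_clean, yte_clean))  # much lower

This says nothing about what the networks in the article are actually doing; it only illustrates that high accuracy by itself doesn't distinguish "inferring something fundamental" from exploiting a non-essential regularity in the data.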
