Can any deep learning experts answer a related question I have? What's the state of the art in recognising rotated 3D objects?
I know that deep learning systems can recognise, e.g., different animal toys held in the hand, at different distances from the camera. How well do they handle shapes that are rotated? (I assume shapes rotated with the same face showing to the camera - in-plane rotation - are fairly trivial to recognise; I mean the other two, out-of-plane rotations, 'pitch' and 'yaw'.)
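To make the distinction concrete, here's a minimal sketch (my own illustration, using NumPy; the function names and toy object are mine, not from any particular system). With a camera looking down the z-axis and an orthographic projection, a roll (in-plane rotation) only rotates the projected image, whereas pitch and yaw change which parts of the object face the camera and so produce a genuinely different 2D appearance:

    import numpy as np

    def rot_x(a):  # pitch: rotation about the camera's x-axis (out-of-plane)
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):  # yaw: rotation about the camera's y-axis (out-of-plane)
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):  # roll: rotation about the viewing (z) axis, i.e. in-plane
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def project(points):
        """Orthographic projection onto the image plane: just drop z."""
        return points[:, :2]

    # A toy asymmetric "object": four points in 3D.
    obj = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

    angle = np.pi / 4
    roll_view  = project(obj @ rot_z(angle).T)  # same silhouette, just rotated in 2D
    pitch_view = project(obj @ rot_x(angle).T)  # projected appearance actually changes
    yaw_view   = project(obj @ rot_y(angle).T)

    print("roll:\n", roll_view)
    print("pitch:\n", pitch_view)
    print("yaw:\n", yaw_view)

The roll case is what standard rotation augmentation of 2D training images covers; the pitch/yaw cases are the ones my question is about.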
I have a hunch that humans have a ton of perceptual hardware devoted to precisely this task, and that deep neural nets are going to have a hard time cracking it. Is my hunch accurate?