
We inherently know that cardboard boxes don't move on their own. In fact, any unusual inanimate object moving in an irregular fashion will automatically draw our attention. These are instincts that even mice have.



Yep, and humans will make good guesses about the likely cause of the moving box. These guesses factor in other variables, such as the context in which the event is taking place. We might be in a children's playroom, so the likely activity is play, or the box is probably part of the play equipment found in large quantities in the room, and so on.

"AI" is not very intelligent if it needs separate training specifically about boxes used potentially for games and play. If AI were truly AI, it would figure that out on its own.


We also make bad guesses, for instance seeing faces in the dark.


Yes, and when humans make bad guesses it's often seen as funny or nothing out of the ordinary. When AI makes bad guesses, it will be seen as a failure to meet some standard, but very few people will understand how to fix it. I'm not sure how much tolerance there will be for "allowable" mistakes made in the interest of AI learning once AI services are used for real-world purposes.

"This Bot is only 6 months old, give him a break". But will people give the Bot a break? Either way, blaming AI will be a popular way to pass the buck.


>We inherently know that cardboard boxes don't move on their own.

No. We don’t. We learn that. We learn that boxes belong in the class “doesn’t move on its own”. In fact, later when you encounter cars, you relearn that these boxes do move on their own. We have to teach kids “don’t run out between the non-moving boxes because a moving one might hit you”. We learn when things seem out of place because we’ve learned what their place is.





