
Yep, and humans will make good guesses about the likely cause of the moving box. These guesses will factor in other variables, such as the context of where the event is taking place. If we're in a children's playroom, the likely activity is play, or the box is probably part of the play equipment found in abundance in the room, etc.

"AI" is not very intelligent if it needs separate training specifically about boxes used potentially for games and play. If AI were truly AI, it would figure that out on its own.




We also make bad guesses, for instance seeing faces in the dark.


Yes, and when humans make bad guesses it's often seen as funny or nothing out of the ordinary. When AI makes bad guesses, it will be seen as a failure to meet some standard, but with very few people understanding how to fix it. I'm not sure how far "allowable" mistakes made in the interest of AI learning will be tolerated in AI services used for real-world purposes.

"This Bot is only 6 months old, give him a break". But will people give the Bot a break? Either way, blaming AI will be a popular way to pass the buck.



