
Right. But that happens with ML models during training too - the model makes a prediction given a training example's input, the prediction is evaluated against the expected output, rinse, repeat. A single example is very similar to doing something in response to a stimulus and observing the reaction. Here it's more like predicting a reaction and getting feedback on accuracy, but that's part of our learning too - we remember things that surprise us and tune out the things we can reliably predict.
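A minimal sketch of that loop, using a toy linear model and plain numpy (the model, learning rate, and data here are just placeholders, not anyone's actual setup): the weight update is scaled by the prediction error, so surprising examples move the parameters a lot while well-predicted ones barely register.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy "environment" the model is trying to predict.
  true_w = np.array([2.0, -1.0])
  X = rng.normal(size=(100, 2))                 # training example inputs
  y = X @ true_w + 0.1 * rng.normal(size=100)   # expected outputs

  w = np.zeros(2)   # model parameters
  lr = 0.05

  for epoch in range(50):
      for x_i, y_i in zip(X, y):
          pred = x_i @ w            # model makes a prediction
          error = pred - y_i        # feedback: how surprised were we?
          w -= lr * error * x_i     # big surprise -> big update

  # Examples the model already predicts well produce near-zero error,
  # so they barely change the weights - they're effectively "tuned out".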


