
What are the analogous components? Backpropagation is biologically implausible.



Dopamine released on task success primes long-term potentiation, effectively acting as the analog of the cost function's gradient: generally, the larger the dopamine hit, the bigger the learning step.
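
For anyone curious, this maps onto the "three-factor" / reward-modulated learning rules from the computational neuroscience literature. Here's a minimal toy sketch (node-perturbation style; the task, constants, and names are my own illustrative assumptions, not a claim about real circuitry):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear neuron learning to drive y = w.x toward a target,
    # using only a scalar reward signal -- no backward pass anywhere.
    w = rng.normal(scale=0.1, size=4)
    x = np.array([1.0, -1.0, 0.5, 0.2])   # fixed input pattern
    target = 0.5
    baseline = 0.0                         # running estimate of expected reward
    lr, noise_std, decay = 0.2, 0.2, 0.9

    for trial in range(2000):
        noise = rng.normal(scale=noise_std)   # exploratory perturbation
        y = w @ x + noise                     # post-synaptic activity
        reward = -(y - target) ** 2           # task success signal
        dopamine = reward - baseline          # reward prediction error
        baseline = decay * baseline + (1 - decay) * reward
        # Three-factor rule: a global "dopamine" scalar gates the local
        # (pre-synaptic input x perturbation) term, so a bigger dopamine
        # hit produces a bigger learning step.
        w += lr * dopamine * noise * x

    print(f"output {w @ x:.3f} vs target {target}")

The update only ever multiplies locally available quantities by one broadcast scalar, which is the usual argument for why something like this is more biologically plausible than backprop.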

You are correct that neurons can't do backprop. Keep in mind that the networks of the brain aren't straight feedforward; they're recurrent. To provide temporal control over how activation propagates through the network, inhibitory neurons are needed. So instead of "tweaking weights" backwards through the network, the brain learns to activate inhibitory neurons that push back against activation on the forward pass. The mechanism is different, but the learning effect is pretty similar.
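
A crude rate-model sketch of that last point, i.e. inhibitory units gating activity on the forward pass rather than errors flowing backwards. The sizes, weights, and dynamics here are illustrative assumptions, and it only shows the gating dynamics, not the learning rule:

    import numpy as np

    rng = np.random.default_rng(1)

    # Tiny excitatory/inhibitory rate network with leaky discrete-time dynamics.
    nE, nI, ext = 8, 2, 1.0
    W_EE = rng.uniform(0, 0.3, (nE, nE))  # recurrent excitation
    W_IE = rng.uniform(0, 0.5, (nI, nE))  # excitatory -> inhibitory drive
    W_EI = rng.uniform(0, 0.8, (nE, nI))  # inhibition onto excitatory units

    relu = lambda v: np.maximum(v, 0.0)

    def simulate(inhibition, steps=60):
        rE = np.full(nE, 0.1)
        for _ in range(steps):
            # Inhibitory units are driven feedforward by current activity...
            rI = relu(W_IE @ rE) if inhibition else np.zeros(nI)
            # ...and subtract from the next forward step, gating which
            # activations propagate instead of sending anything backwards.
            rE = 0.5 * rE + 0.5 * relu(ext + W_EE @ rE - W_EI @ rI)
        return rE.mean()

    print("mean rate with inhibition:   ", round(simulate(True), 3))
    print("mean rate without inhibition:", round(simulate(False)))

With the inhibitory pathway active the recurrent activity settles to a modest steady state; cut it and the same excitatory weights produce runaway activation.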





