Essential Math for AI (oreilly.com)
39 points by teleforce 3 months ago | 10 comments



From the Table of Contents

> Backpropagation Is Not Too Different from How Our Brain Learns

I am very skeptical of this position, but not enough to buy this book. There is a process in the brain called “backpropagation,” but to my knowledge neuroscientists don’t believe it is primarily responsible for learning, or that it can even carry an error signal back through multiple layers the way the loss gradient does in an artificial neural net.

Neurons in the brain are living things with complex behavior and many degrees of freedom. Nodes in an artificial net are governed by simple rules. This constant drive I see everywhere to analogize brains and artificial neural networks is really unfounded. We use NNs because they work. They can approximate the functions we want them to, we have backprop to efficiently drive them to that approximation, and we have hardware (GPUs and TPUs) which can run them efficiently, the latter being almost an accident of history really.
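To be concrete about what “carrying an error signal back multiple layers” means on the artificial side, here is a minimal two-layer numpy sketch (layer sizes, activation, and learning rate are arbitrary, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))          # one input example
    t = rng.normal(size=(1, 1))          # target
    W1 = rng.normal(size=(4, 8)) * 0.1   # layer 1 weights
    W2 = rng.normal(size=(8, 1)) * 0.1   # layer 2 weights

    # forward pass
    h = np.tanh(x @ W1)                  # hidden activations
    y = h @ W2                           # output
    loss = 0.5 * np.sum((y - t) ** 2)

    # backward pass: the error signal is pushed back through every layer
    dy = y - t                           # dL/dy
    dW2 = h.T @ dy                       # gradient for layer 2
    dh = dy @ W2.T                       # error carried back to the hidden layer
    dW1 = x.T @ (dh * (1 - h ** 2))      # chain rule through tanh, gradient for layer 1

    # gradient step
    lr = 0.1
    W1 -= lr * dW1
    W2 -= lr * dW2

The point of the sketch is only that the same loss gradient is propagated exactly, layer by layer, via the chain rule; nothing like that exact global signal is known to exist in biological circuits.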


Similar doesn't mean identical. Neurological systems in the brain have to take into account all sorts of things that we can discard in a simulated environment, energy just being one example.

While "not too different" might be a bit of an exaggeration, making no comparison at all would be a disservice in my mind. I see NNs as closer to a mathematical model of neurons in a petri dish: they lack the higher-level framework that pieces everything together, but they still model a fundamental building block of the system reasonably accurately.


It’s not just a bit of an exaggeration, it’s a fundamental mischaracterization. It’s barely even a cocktail-party-level understanding. It’s what you tell your retired lawyer brother-in-law if he asks how a neural network works, but anyone who is serious enough to buy this book is not being served by the analogy.

But fundamentally the point is the same: brains don’t learn via backpropagation. The headline isn’t sort of correct; it is wrong.


Math is helpful for neural networks, but it seems our mathematical sophistication is immature. For example, why can't an MLP (the most basic neural network, from the 1960s) do the same things as a Transformer (the successful new architecture from 2017)? We've cited some possible reasons in retrospect, but we're not skilled enough to predict the performance of a neural network without trying it.
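To illustrate (not explain) the structural difference: a plain dense layer transforms each position independently with fixed weights, while self-attention mixes positions with weights computed from the input itself. A toy numpy sketch with invented dimensions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))         # 5 tokens, 16-dim embeddings

    # MLP-style layer: each token is transformed independently, no interaction
    W = rng.normal(size=(16, 16)) * 0.1
    mlp_out = np.maximum(X @ W, 0)       # ReLU(X W)

    # single-head self-attention: tokens mix, with input-dependent weights
    Wq = rng.normal(size=(16, 16)) * 0.1
    Wk = rng.normal(size=(16, 16)) * 0.1
    Wv = rng.normal(size=(16, 16)) * 0.1
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(16)       # pairwise similarities between tokens
    A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
    attn_out = A @ V                     # each token becomes a data-dependent mix of all tokens

Why that difference matters so much in practice is exactly the kind of question current theory answers only in hindsight.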



Pretty funny to sell this, since large portions of major AI systems are "not currently understood" by their authors or closest reviewers. They say it repeatedly in the actual published research.


The book itself has a section "Mathematics and the Mysterious Success of Neural Networks" which covers the lack of strong theoretical foundations for deep learning.

This doesn't mean there is no use for mathematics in AI though. Many of the topics the book covers both have very strong theoretical foundations and are useful in AI.


If you really are a bot, I think that breaks the guidelines on YNews.


When I created this handle in 2017 the likelihood of anyone thinking my comments were not written by a human was a lot lower.


I wonder if it's on oceanofpdf or pdfdrive



