Sure - but I think that's the least important part of my comment. The preview is basically just an intro to Scheme. I would rather be able to see how well the book covers interesting ideas about deep learning. While the jury is still out, my personal suspicion is that this is a really beautiful passion project that looks great on bookshelves, but that any serious learner would be better served dedicating their time to working through the existing, more traditional, and very high-quality resources available online.
This is what the book has to say (part of a foreword by Peter Norvig):
> Maybe, maybe not. But even if you use a machine learning toolkit like TensorFlow or PyTorch, what you will take away from this book is an appreciation for how the fundamentals work.
Even if you think so, Python is a really easy language, and you can easily port the code to something else.
If you already have the basic ideas about the parts of a neural network pipeline, you can just Google "implement part-X in Y language", and you will find well-written articles/tutorials.
Many learners/practitioners of deep learning, once they have a big enough picture, write an NN training loop in their favorite language(s) and post it online. I remember seeing a pretty good "Neural Network in APL" playlist on YouTube. It implements every piece in APL and reaches 90%+ accuracy on MNIST.
I also remember seeing articles in Lisp (of course!), C, Elixir, and Clojure.
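To make concrete what "writing an NN training loop yourself" amounts to, here is a minimal sketch in plain Python (no libraries) - a one-hidden-layer network with hand-derived backprop, trained on XOR. This is my own illustrative toy, not taken from any of the books or articles mentioned; sizes, learning rate, and epoch count are arbitrary choices.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy XOR dataset: (input pair, target)
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 4      # hidden units (arbitrary)
lr = 0.5   # learning rate (arbitrary)

# Hidden layer: H neurons, each with 2 weights and a bias; output: H weights + bias
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def mean_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial = mean_loss()
for epoch in range(5000):
    for x, t in data:          # plain per-sample SGD
        h, y = forward(x)
        # Backprop through MSE loss and the output sigmoid
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before updating it
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print("loss before:", initial, "after:", mean_loss())
```

Every line here is language-agnostic arithmetic, which is exactly why people can (and do) port this kind of loop to APL, Lisp, C, Elixir, or Clojure.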
I suggest the book Programming Machine Learning. I'm slowly going through the book using another language, and it's easy to translate since the book doesn't use Python's machine learning libraries.
What does "from scratch" really mean? You don't reimplement Python itself, or design new GPU hardware, or a new CUDA stack including the compiler. You don't reimplement the OS. Where do you draw the line?
Do you reimplement matmul or other basics?
Do you reimplement auto-diff?
Maybe PyTorch or TensorFlow using auto-diff is a good "from scratch" basepoint, without using predefined optimizers, or modules/layers, or anything. Just using the low-level math functions, and then auto-diff.
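To show why auto-diff is a plausible baseline rather than magic, here is a minimal reverse-mode auto-diff sketch in plain Python (micrograd-style scalar values, my own illustrative toy, supporting only `+` and `*`). Given just this, the chain rule does the rest:

```python
class Value:
    """A scalar carrying a gradient, for reverse-mode automatic differentiation."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule output-to-input
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# d/dx of (x*x + 3x) at x=2 is 2x + 3 = 7
x = Value(2.0)
y = x * x + x * 3.0
y.backward()
print(y.data, x.grad)  # 10.0 7.0
```

A real engine adds more operations (exp, matmul, broadcasting) and tensors instead of scalars, but the core idea - record the graph forward, replay the chain rule backward - fits in a page. That's why "low-level math functions plus auto-diff, no predefined layers or optimizers" is a defensible place to draw the "from scratch" line.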
I don't understand - I'm not arguing with you? I'm also not talking about these books specifically. I just made a generic comment to start a discussion.
I just wanted to point out that "from scratch" is not really well defined. There is always some arbitrary line. I just found it interesting to discuss and think about where exactly to draw this line. Obviously it's never really from scratch, i.e. you don't reinvent the hardware level, for example, and you don't start by teaching quantum physics. So you start from somewhere.
And I was wondering whether auto-diff could maybe also already be the starting point, or matmul as well. Reimplementing an efficient matmul on CUDA is not easy, and might distract from the main deep learning content. But it also depends on where you want the focus to be.
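The gap between the two matmul lines is worth seeing. A naive "from scratch" matmul is a few lines of Python (my own sketch, plain lists of lists):

```python
def matmul(A, B):
    """Naive O(n*m*k) matrix multiply on lists of lists: C = A @ B."""
    n, k, m = len(A), len(B), len(B[0])
    assert len(A[0]) == k, "inner dimensions must match"
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += A[i][p] * B[p][j]
            C[i][j] = s
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```

An *efficient* matmul - tiling for cache/shared memory, vectorization, tensor cores on CUDA - is a completely different engineering problem, which is why stopping at "naive matmul, borrow the fast one" is a common and reasonable place to draw the line.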
Where do people draw the line? Where they want to.
Some people choose to go closer to the metal than others. It's just a personal choice.
Some just write stuff in Python, some write their own CUDA kernels (some of them had to), and a friend of mine even wrote his own compiler and programming language for deep learning.
So it depends on your choice. And how deep you want to go also depends on what you want to do, i.e. your choice of career, direction of research, etc.