
Sure - but I think that's the least important part of my comment. The preview is basically just an intro to Scheme. I would rather be able to see how well the book covers interesting ideas about deep learning. While the jury is still out, my personal suspicion is that this is a really beautiful passion project that looks great on a bookshelf, but that any serious learner would be better served spending their time working through the existing, more traditional, very high-quality resources available online.



This is what the book has to say (part of a foreword by Peter Norvig):

> Maybe, maybe not. But even if you use a machine learning toolkit like TensorFlow or PyTorch, what you will take away from this book is an appreciation for how the fundamentals work.


I think all serious researchers have implemented core Deep Learning algorithms from scratch. I know I have.

There are two books that do exactly this:

1. Deep Learning from Scratch

2. Data Science from Scratch

In these books, you implement each part of the ML/DL pipeline, from scratch, in Python.
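
To give a flavor (a toy sketch of my own, not code lifted from either book): a single dense layer with its backward pass is only a few lines of NumPy.

    import numpy as np

    # A minimal dense layer: the kind of building block these
    # books have you write yourself. Toy sketch, not book code.
    class Dense:
        def __init__(self, n_in, n_out):
            self.W = np.random.randn(n_in, n_out) * 0.01
            self.b = np.zeros(n_out)

        def forward(self, x):
            self.x = x                      # cache input for backward
            return x @ self.W + self.b

        def backward(self, grad_out, lr=0.1):
            grad_x = grad_out @ self.W.T    # gradient w.r.t. input
            self.W -= lr * (self.x.T @ grad_out)
            self.b -= lr * grad_out.sum(axis=0)
            return grad_x

Chain a few of these with nonlinearities in between and you have most of a feed-forward network.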

There is also a GitHub project called Minitorch that teaches you the inner workings of a framework like PyTorch.

And then there are several other good resources for exactly this.

What he claims to offer as content is neither new nor unique.


How tied are those two books to Python? If very much, are there books covering the same content in a different programming language?


They aren’t that tied to Python.

Even if you think they are, Python is an easy language, and you can port the code to something else without much trouble.

If you already have the basic ideas about the parts of a neural network pipeline, you can just Google "implement part-X in Y language" and find well-written articles/tutorials.

Many learners/practitioners of Deep Learning, once they have a big enough picture, write an NN training loop in their favorite language(s) and post it online. I remember seeing a decent "Neural Network in APL" playlist on YouTube that implements every piece in APL and reaches 90%+ accuracy on MNIST.
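
The whole thing fits on one screen. A rough sketch of the pattern those posts follow (my own toy example, a 2-layer net learning XOR in plain NumPy, not the APL code):

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    lr = 0.5

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)              # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output
        g = (p - y) / len(X)                  # grad of BCE w.r.t. logits
        gh = (g @ W2.T) * (1 - h ** 2)        # backprop through tanh
        W2 -= lr * (h.T @ g)
        b2 -= lr * g.sum(axis=0)
        W1 -= lr * (X.T @ gh)
        b1 -= lr * gh.sum(axis=0)

    print(p.ravel().round(2))  # should approach [0, 1, 1, 0]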

I also remember seeing articles in Lisp (of course!), C, Elixir, and Clojure.

I am writing one in J-lang in my free time.


I suggest the book Programming Machine Learning. I'm slowly going through the book using another language, and it's easy to translate since the book doesn't use Python's machine learning libraries.


Thanks for the suggestion, finally an ML book that isn't bound to Python frameworks!


They mostly use NumPy (a matrix math library).

So if you have a library with a nice syntax for matrix multiplication, inverse, transpose, and the like, you're good to go.
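
Concretely, the working set of operations is tiny. Something like this (illustrative, not code from the book):

    import numpy as np

    A = np.random.randn(3, 4)
    B = np.random.randn(4, 2)

    C = A @ B                      # matrix multiplication
    At = A.T                       # transpose
    relu = np.maximum(A, 0)        # element-wise ops + broadcasting
    sums = A.sum(axis=0)           # reductions
    inv = np.linalg.inv(A @ A.T)   # inverse (rarely needed in DL)

Any array library that covers roughly this much will do.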


What does "from scratch" really mean? You don't reimplement Python itself, or invent a new GPU hardware, a new CUDA including compiler, etc. You don't reimplement the OS. Where do you draw the line?

Do you reimplement matmul or other basics?

Do you reimplement auto-diff?

Maybe PyTorch or TensorFlow with auto-diff is a good "from scratch" basepoint: no predefined optimizers, no modules/layers, nothing like that. Just the low-level math functions, plus auto-diff.
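
For illustration, that basepoint might look like this in PyTorch (a sketch under my own assumptions: only tensor ops and .backward(), no torch.nn, no torch.optim):

    import torch

    # Linear regression with tensors + autograd only.
    X = torch.randn(100, 3)
    true_w = torch.tensor([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * torch.randn(100)

    w = torch.zeros(3, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    lr = 0.1

    for _ in range(200):
        loss = ((X @ w + b - y) ** 2).mean()
        loss.backward()           # auto-diff computes the gradients
        with torch.no_grad():     # manual SGD step, no optimizer class
            w -= lr * w.grad
            b -= lr * b.grad
            w.grad.zero_()
            b.grad.zero_()

Everything above the .backward() call is yours; everything below it is what an optimizer class would otherwise hide.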


I don't understand why you are arguing with me.

Yes, in those books, you do implement matmul, auto-diff, etc.
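
And the auto-diff core is smaller than it sounds: a value that remembers how it was computed. A micrograd-style toy (my sketch, not the books' code):

    # Scalar reverse-mode auto-diff in ~30 lines.
    class Value:
        def __init__(self, data, parents=()):
            self.data, self.grad = data, 0.0
            self._parents, self._backward = parents, lambda: None

        def __add__(self, other):
            out = Value(self.data + other.data, (self, other))
            def back():
                self.grad += out.grad
                other.grad += out.grad
            out._backward = back
            return out

        def __mul__(self, other):
            out = Value(self.data * other.data, (self, other))
            def back():
                self.grad += other.data * out.grad
                other.grad += self.data * out.grad
            out._backward = back
            return out

        def backward(self):
            topo, seen = [], set()
            def build(v):             # topological sort of the graph
                if v not in seen:
                    seen.add(v)
                    for p in v._parents:
                        build(p)
                    topo.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(topo):  # chain rule, outputs first
                v._backward()

    x, y = Value(2.0), Value(3.0)
    z = x * y + x
    z.backward()
    print(x.grad, y.grad)  # 4.0 2.0, i.e. y+1 and x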


I don't understand. I'm not arguing with you, and I wasn't talking about those books. I just made a generic comment to start a discussion.

I just wanted to point out that "from scratch" is not really well defined. There is always some arbitrary line, and I found it interesting to discuss where exactly to draw it. Obviously it's never really from scratch: you don't reinvent the hardware, for example, or start by teaching quantum physics. You have to start somewhere.

And I was wondering whether auto-diff, or even matmul, could already be the starting point. Reimplementing an efficient matmul in CUDA is not easy and might distract from the main deep learning content. But it also depends on where you want the focus to be.


Now it makes more sense.

Maybe I misread your comment.

Thanks for rewording.

Where do people draw the line? Where they want to.

Some people choose to go closer to the metal than others. It's just a matter of choice.

Some just write stuff in Python, some write CUDA kernels themselves (some of them had to), and a friend of mine even wrote his own compiler and programming language for Deep Learning.

So it depends on what you choose. And how deep you want to go also depends on what you want to do, i.e. your career path, direction of research, etc.


More than happy to leave the traditional, high-quality resources available online to the serious learners.


Can you suggest any?




