Surprised (and delighted!) to see this pop up on the front page of HN today! This course is designed to teach proficient coders the practice of deep learning, that is: how to train accurate models; how to test and debug models; and key deep learning concepts. It covers applications in vision, natural language processing, tabular data, and recommendation systems.
If you're interested in diving deeper into the papers and math behind the scenes, as well as coding from the lowest levels (right down to the compiler level), you'll be interested in "Deep Learning from the Foundations", which is coming out in 2 weeks. The last two lessons are co-taught with Chris Lattner (creator of Swift, LLVM, and Clang).
If you want to learn about decision trees, random forests, linear regression, validation sets, etc, try "Introduction to Machine Learning for Coders": https://course18.fast.ai/ml
(All are free and have no ads. They are provided as a service to the community.)
Jeremy, we met at Exponential Medicine a few years ago. You probably don't remember me, but I wanted to say thanks for the inspiration (both in person and via this course).
The course has been a fantastic resource. Thanks again, and keep up the good work. Looking forward to what comes out of the work with Swift.
Thanks again to both you and Rachel for your contributions, and for your attention to the ethics of AI as a central concern that's every bit as relevant as the technical details.
Is there a course, or part of a course, that focuses at a library level on semantic classification? For example, if we had a reference document containing the term "fruit", then if someone entered the string "grape" we would tie it to the "fruit" section(s) of the reference document.
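To make the idea concrete, here's the kind of matching I have in mind, sketched with word-vector similarity. This assumes spaCy with a vectors-equipped model such as en_core_web_md; the section data and the best_section helper are made up for illustration:

```python
import spacy

# Requires a model that ships word vectors, e.g.:
#   python -m spacy download en_core_web_md
nlp = spacy.load("en_core_web_md")

# Hypothetical reference document, split into named sections.
sections = {
    "fruit": "Apples, oranges, bananas and other produce...",
    "tools": "Hammers, saws, drills and other hardware...",
}

def best_section(query: str) -> str:
    """Return the name of the section whose text is closest to the query."""
    q = nlp(query)
    return max(sections, key=lambda name: q.similarity(nlp(sections[name])))

print(best_section("grape"))  # expected: "fruit"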
I really love this course. I have no deep learning experience except for this, but in a few hours of watching these videos I was literally building image classifiers that worked pretty well. Coincidentally, just today I started working on a hobby project using the stuff I learned here.
The lessons look interesting from a high-level perspective, and I think they could help guide people's applications.
I think there's also a need for a very low level course in deep learning. I.e. on the level of someone who wishes to write their own deep learning library. Because from high up, sure it all looks like the chain rule, but down low, it gets messy quickly if you want to write a high performance library on your own.
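To make the "chain rule" part concrete, here's a minimal hand-written forward/backward pass in NumPy (shapes and names are purely illustrative). Everything a real library adds below this level (batching rules, memory reuse, fused GPU kernels) is where the mess comes from:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))    # batch of 4 examples, 3 features
y = rng.standard_normal((4, 1))    # regression targets
W = rng.standard_normal((3, 1))    # the only parameter

# Forward pass: linear layer, then mean squared error.
h = x @ W
loss = ((h - y) ** 2).mean()

# Backward pass, applying the chain rule by hand.
dh = 2 * (h - y) / h.size          # d(loss)/dh
dW = x.T @ dh                      # d(loss)/dW

W -= 0.1 * dW                      # one plain SGD step
```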
If you want to take a look at a framework that isn't a terrible mess, and that one person can understand, both Darknet [1] and Flux [2] are really nice. I learned a lot from reading their source.
Part 2 of this course, which will be released later this month, is a step-by-step walkthrough of building the fastai and PyTorch libraries from scratch (minus autodiff).
I recently published a series of 16 blog posts, "Deep Learning from Scratch to GPU", that does exactly that! [0]
I also started working on a book titled "Deep Learning for Programmers"[1] that takes it even further! You can subscribe today and start reading the drafts as they are written.
Now, I don't know if you're implying that fast.ai specifically should include this in their course and are saying this as feedback, but there are tons of resources online for it. If you're looking for a guide, I highly recommend this: http://neuralnetworksanddeeplearning.com/chap1.html
You're talking about thousands of hours of PhD-level work in autodiff, optimization, and whatever else. You're not going to learn that in one PhD, let alone one class.
I've been writing a C-based deep learning library for the past year or so, which I've recently been trying to clean up to be a bit more presentable. It was pretty heavily inspired by Darknet, although it admittedly has far fewer features: https://github.com/siekmanj/sieknet
As a complete non-expert in ML, I have watched some of the fast.ai videos and dabbled around with their library, and generally I very much like their approach of trying to make state-of-the-art ML as approachable as possible for practical purposes.
However, what worries me a lot is the complete break-up of the API between versions, and some of the discussion about Swift. As a non-full-time expert, I need a robust and stable framework so that I can keep building knowledge without worrying that whatever I have learned about a framework (or which language to use!) will be obsolete in a couple of months, forcing me to start from scratch again.
So, does anyone know if fast.ai is going to stabilize anytime soon, or is it better for me to just spend the little time I have playing with ML and deep learning directly in, e.g., PyTorch?
It's a fast-moving field. In the past 5 years we've seen enthusiasm for Caffe, Lua Torch, Theano, TensorFlow, and most recently PyTorch. And TensorFlow 2, of course, which totally changes the programming model. Our own library changes to keep up to date with the field.
Having said that, v1 isn't changing much now - v2 (out soon) will be where new ideas go, and only bug fixes will go to v1, so you can stick with that as long as you like.
Obviously it is moving fast at the bleeding edge. But the basics of backprop haven't changed for a while, and I would expect a toolchain that's meant to be a serious tool for practical purposes, and not just a showcase of the bleeding edge, to at least be backwards compatible with previous versions.
(I hope I am not writing this in too harsh way. I actually liked your videos so much that I am seriously considering applying for your on site course if I one day just get all the other issues sorted out in my life so that I can relocate to SF for a couple of months)
I've gone through this guide and other guides before, as I often teach courses like "Introduction to Deep Learning".
The guide from fast.ai is very good and I highly recommend it. If you want to get into deep learning, I also recommend taking a numerical methods course or guide. Once you understand the basics, the hard part is understanding the pitfalls introduced by hardware (and some software) limitations.
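A tiny example of the kind of hardware pitfall I mean, in plain NumPy: float32, the default on most GPUs, silently drops updates that float64 would keep.

```python
import numpy as np

# Near 1e8, adjacent float32 values are 8 apart, so adding 1.0 is a no-op.
big, small = np.float32(1e8), np.float32(1.0)
print(big + small == big)                                    # True

# float64 has enough precision to resolve the same update.
print(np.float64(1e8) + np.float64(1.0) == np.float64(1e8))  # False
```

The same effect shows up in training when a tiny gradient update is added to a large weight or a large loss accumulator.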
Just a question for a project of mine: can tabular data be combined with NLP? Or should I train two separate nets (i.e. with ULMFiT) and try to combine the results?
You can one-hot encode categorical data like strings to build one big model. But if the data is independent, or the tasks are orthogonal, you might be better served by creating two models.
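A minimal sketch of the "one big model" route, using scikit-learn instead of a neural net to keep it short (the columns and data are made up; in a deep-learning setting you'd concatenate the text encoder's output with the tabular features in the same way):

```python
import pandas as pd
from scipy import sparse
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "color": ["red", "green", "red", "green"],     # categorical column
    "note":  ["sweet grape", "sour apple",
              "ripe berry", "bitter melon"],       # free-text column
    "label": [1, 0, 1, 0],
})

tab = pd.get_dummies(df["color"]).astype(float)    # one-hot tabular part
txt = CountVectorizer().fit_transform(df["note"])  # bag-of-words text part

# Concatenate both feature sets into a single matrix and fit one model.
X = sparse.hstack([txt, sparse.csr_matrix(tab.values)])
model = LogisticRegression().fit(X, df["label"])
```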
I think most people would categorize that as machine learning or data science, not deep learning; you might do better searching under those keywords.
There is only one global in the library, which is a singleton called `defaults`, and it contains 2 things: the default GPU to use, and the number of CPUs to use.
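Concretely, in fastai v1 that looks something like this (the attribute names below are from memory, so treat them as assumptions and check the docs for your version):

```python
import torch
from fastai.torch_core import defaults  # the singleton mentioned above

# Attribute names assumed from fastai v1; verify against your version.
defaults.device = torch.device("cuda")  # which device new learners train on
defaults.cpus = 8                       # how many CPU workers data loading uses
```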
Anyone have thoughts on the utility of the Intro to ML and Computational Linear Algebra courses? I've done Ng's ML course and was interested in the first of these as a more practice-oriented complement to it; the latter sounds interesting, but a bit more of an "elective" to me.
That'll work nicely as long as your local machine has a GPU that supports CUDA. If not, you can either train on CPU and wait an eternity, or follow the AWS EC2 setup steps further up on that page.
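A quick way to check which of those cases you're in, using plain PyTorch (which fastai runs on top of):

```python
import torch

# True means a CUDA-capable GPU (plus drivers) is visible to PyTorch,
# so training will run on the GPU rather than crawling on the CPU.
print(torch.cuda.is_available())
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```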
I was expecting this new course to also include a Swift section with Chris Lattner, but couldn't find it. Does anyone know by chance when that will be released (roughly)?
There was a lot of noise about this. But if you read the blog post carefully, Jeremy only said that the Swift thing is very interesting, that they will keep a close eye on it, and that they'll consider using it in the course.
I was in the live version of the most recent course. The last two weeks were guest lectured by Chris Lattner on Swift for TensorFlow.
Edit: this appears to be a link to Part 1 which was released in January (and is excellent). The s4tf portion was in part 2 which should be released publicly sometime this month.
"The combination of Python, PyTorch, and fastai is working really well for us, and for our community. We have many ongoing projects using fastai for PyTorch [...] This stack will remain the main focus of our teaching and development.
It is very early days for Swift for TensorFlow. We definitely don’t recommend anyone tries to switch all their deep learning projects [...]"
If you want to understand the underlying linear algebra implementation details, have a look at "Computational Linear Algebra for Coders": https://github.com/fastai/numerical-linear-algebra
Let me know if you have any questions!