Practical Deep Learning for Coders (fast.ai)
411 points by samrohn on June 9, 2019 | 40 comments



Surprised (and delighted!) to see this pop up on the front page of HN today! This course is designed to teach proficient coders the practice of deep learning, that is: how to train accurate models; how to test and debug models; and key deep learning concepts. It covers applications in vision, natural language processing, tabular data, and recommendation systems.

If you're interested in diving deeper into the papers and math behind the scenes, as well as coding from the lowest levels (right down to the compiler level), you'll be interested in "Deep Learning from the Foundations", which is coming out in 2 weeks. The last two lessons are co-taught with Chris Lattner (creator of Swift, LLVM, and Clang).

If you want to understand the underlying linear algebra implementation details, have a look at "Computational Linear Algebra for Coders": https://github.com/fastai/numerical-linear-algebra

If you want to learn about decision trees, random forests, linear regression, validation sets, etc., try "Introduction to Machine Learning for Coders": https://course18.fast.ai/ml

(All are free and have no ads. They are provided as a service to the community.)

Let me know if you have any questions!


Jeremy, we met at Exponential Medicine a few years ago. You probably don't remember me, but I wanted to say thanks for the inspiration (both in person and via this course).

I'm still chipping away at AI applications to mental health and the law, which sadly remains relevant: https://www.oregonlive.com/pacific-northwest-news/2019/06/ju...

The course has been a fantastic resource. Thanks again, and keep up the good work. Looking forward to what comes out of the work with Swift.

Thanks again to both you and Rachel for your contributions, and for treating the ethics of AI as a central concern that's every bit as relevant as the technical details.


Is there a course, or part of a course, that focuses, at a library level, on semantic classification? For example, if we had a reference document containing the term "fruit", then if someone entered the string "grape" we would tie it to the "fruit" section(s) of the reference document.
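
To make it concrete, here's a rough sketch of the kind of thing I mean, using off-the-shelf word vectors (spaCy's "en_core_web_md" model and the section names here are just stand-ins):

    import spacy

    # A model with word vectors (the small "en_core_web_sm" model ships without them).
    nlp = spacy.load("en_core_web_md")

    # Hypothetical section headings pulled from the reference document.
    sections = ["fruit", "vegetables", "dairy", "grains"]

    def best_section(query):
        # Pick the section whose embedding is most similar to the query term.
        q = nlp(query)
        return max(sections, key=lambda s: q.similarity(nlp(s)))

    print(best_section("grape"))  # ideally prints "fruit"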


Is there a link already where I'll be able to find the "Deep Learning from the Foundations" course?


I really love this course. I have no deep learning experience except for this, but in a few hours of watching these videos I was literally building image classifiers that worked pretty well. Coincidentally, just today I started working on a hobby project using the stuff I learned here.


The lessons look interesting from a high-level perspective, and I think they could help people guide their applications.

I think there's also a need for a very low-level course in deep learning, i.e. at the level of someone who wishes to write their own deep learning library. From high up, sure, it all looks like the chain rule, but down low it gets messy quickly if you want to write a high-performance library on your own.
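
Even a toy scalar autodiff sketch hints at where the bookkeeping starts, and a real library then has to do all of this over tensors, on GPUs, with memory management:

    class Var:
        # Minimal reverse-mode autodiff over scalars.
        def __init__(self, value, parents=()):
            self.value, self.parents, self.grad = value, parents, 0.0

        def __add__(self, other):
            return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            # Each parent records its local derivative for the backward pass.
            return Var(self.value * other.value,
                       ((self, other.value), (other, self.value)))

        def backward(self, grad=1.0):
            # Naive per-edge recursion: correct here, but a real library
            # topologically sorts the graph (and batches over tensors).
            self.grad += grad
            for parent, local in self.parents:
                parent.backward(grad * local)

    x, y = Var(3.0), Var(4.0)
    z = x * y + x          # z = x*y + x, so dz/dx = y + 1, dz/dy = x
    z.backward()
    print(x.grad, y.grad)  # 5.0 3.0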


If you want to take a look at a framework that isn't a terrible mess and that one person can understand, both Darknet [1] and Flux [2] are really nice. I learned a lot from reading their source.

[1]: https://github.com/pjreddie/darknet/

[2]: https://github.com/FluxML/Flux.jl


These are refreshing. Redmon's resume is the first of its kind, I suspect.


Part 2 of this course, which will be released later this month, is a step-by-step walkthrough of building the fastai and PyTorch libraries from scratch (minus autodiff).


I recently published a series of 16 blog posts, "Deep Learning from Scratch to GPU", that does exactly that! [0]

I also started working on a book titled "Deep Learning for Programmers"[1] that takes it even further! You can subscribe today and start reading the drafts as they are written.

[0] https://dragan.rocks [1] https://aiprobook.com


Not a complete answer to what you're looking for, but you might find this dive into PyTorch internals interesting: http://blog.ezyang.com/2019/05/pytorch-internals/


This is actually very close to the flavor I had in mind.


I don't know if you're implying that fast.ai specifically should include this in their course and offering that as feedback, but there are tons of resources online for it. If you're looking for a guide, I highly recommend this: http://neuralnetworksanddeeplearning.com/chap1.html


You're talking about thousands of hours of PhD-level work in autodiff, optimization, and whatever else. You're not going to learn that in one PhD, let alone one class.


I think that’s only necessary for a good library. I think you could probably make a bad or simple one with much less.


I've been writing a C-based deep learning library for the past year or so, which I've recently been trying to clean up to be a bit more presentable. It was pretty heavily inspired by Darknet, although it admittedly has far fewer features: https://github.com/siekmanj/sieknet


As a complete non-expert in ML, I have watched some of the fast.ai videos and dabbled around with their library, and I generally very much like their approach of trying to make state-of-the-art ML as approachable as possible for practical purposes.

However, what worries me a lot is the complete break of the API between versions, and some of the discussion about Swift. As a non-full-time expert, I need a robust and stable framework to keep on learning, so that I can keep building knowledge without worrying that whatever I've learned about how to use some framework (or which language to use!) will be obsolete in a couple of months, forcing me to start from scratch again.

So, does anyone know if fast.ai is going to stabilize anytime soon, or is it better for me to spend the little time I have playing with ML and deep learning directly in e.g. PyTorch?


It's a fast-moving field. In the past 5 years we've seen enthusiasm for Caffe, Lua Torch, Theano, TensorFlow, and most recently PyTorch. And TensorFlow 2, of course, which totally changes the programming model. Our own library changes to keep up to date with the field.

Having said that, v1 isn't changing much now - v2 (out soon) will be where new ideas go, and only bug fixes will go to v1, so you can stick with that as long as you like.


> It's a fast moving field.

Obviously it is moving fast at the bleeding edge. But the basics of backprop have not changed for a while, and I would expect a toolchain that's meant to be a serious tool for practical purposes, and not just a showcase of the bleeding edge, to at least be backwards compatible with previous versions.

(I hope I am not writing this in too harsh a way. I actually liked your videos so much that I am seriously considering applying for your on-site course, if I can one day get all the other issues in my life sorted out so that I can relocate to SF for a couple of months.)


I wrote a similar guide based on my experiences:

https://austingwalters.com/neural-networks-to-production-fro...

I've gone through this guide and other guides before, as I often teach courses like "Introduction to Deep Learning".

The guide from fast.ai is very good and I highly recommend it. If you want to get into deep learning, I also recommend taking a numerical methods course or working through a guide. Once you understand the basics, the hard part is understanding the pitfalls introduced by hardware (and some software) limitations.
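
One concrete example of the kind of hardware-level pitfall I mean (plain NumPy, nothing framework-specific):

    import numpy as np

    # float32 runs out of integer precision at 2**24 = 16,777,216; past that
    # point, adding 1.0 to a running float32 total does nothing at all.
    ones = np.ones(20_000_000, dtype=np.float32)
    running = np.cumsum(ones)            # sequential float32 accumulation
    print(running[-1])                   # 16777216.0, not 20000000.0
    print(ones.sum(dtype=np.float64))    # 20000000.0 -- accumulate in float64 instead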


Is there a catalog of things that will bite you in this field?


Much deep learning focuses on image classification (the first topic of the fast.ai course) and natural language processing.

What is a good course that focuses on "tabular data", in particular predicting continuous outputs from continuous inputs, aka regression?


It's not a full course, but since the fastai library does cover tabular data [1], they touch on the subject in lesson 4 [2].

[1]: https://docs.fast.ai/tabular.html

[2]: https://course.fast.ai/videos/?lesson=4
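
As a rough sketch of the regression case with the v1 data block API (adapted from those docs; the columns and data here are made up, and as discussed elsewhere in this thread the exact API does shift between versions):

    import numpy as np
    import pandas as pd
    from fastai.tabular import *

    # Synthetic data standing in for a real table: two continuous features,
    # one categorical feature, and a continuous target.
    n = 1000
    df = pd.DataFrame({'feat1': np.random.randn(n),
                       'feat2': np.random.randn(n),
                       'cat1': np.random.choice(['a', 'b', 'c'], n)})
    df['target'] = 3 * df['feat1'] + np.random.randn(n)

    data = (TabularList.from_df(df, path='.',
                                cat_names=['cat1'],
                                cont_names=['feat1', 'feat2'],
                                procs=[FillMissing, Categorify, Normalize])
            .split_by_rand_pct(0.2)
            .label_from_df(cols='target', label_cls=FloatList)  # FloatList => regression
            .databunch())

    learn = tabular_learner(data, layers=[200, 100], metrics=root_mean_squared_error)
    learn.fit_one_cycle(5)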


You might also be interested in the sister course to the one posted here: "Introduction to Machine Learning for Coders":

http://course18.fast.ai/ml
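
That course works largely with random forests, so the continuous-in/continuous-out case looks roughly like this (a generic scikit-learn sketch with synthetic data, not course code):

    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    # Synthetic continuous features and a continuous target.
    X, y = make_regression(n_samples=1000, n_features=10, noise=0.1)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)

    model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
    model.fit(X_train, y_train)

    preds = model.predict(X_valid)
    print(mean_squared_error(y_valid, preds) ** 0.5)  # validation RMSE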


Just a question for a project of mine: can tabular data be combined with NLP? Or should I train two separate nets (e.g. with ULMFiT) and try to combine the results?


You can one-hot encode categorical data like strings to build one big model. But if the data is independent, or the tasks are orthogonal, you might be better served by creating two models.
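
The encoding step for the one-big-model route is itself simple (a pandas sketch; the columns are made up):

    import pandas as pd

    df = pd.DataFrame({'color':  ['red', 'green', 'red'],   # categorical strings
                       'weight': [1.2, 0.8, 1.5]})          # already numeric

    # One-hot encode the string column so every feature is numeric and
    # can feed a single model alongside the continuous columns.
    encoded = pd.get_dummies(df, columns=['color'])
    print(encoded.columns.tolist())  # ['weight', 'color_green', 'color_red']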


I think most people would categorize that as machine learning or data science, not deep learning; you might do better searching under those keywords.


Just please do not adopt their coding style. Use this instead: http://google.github.io/styleguide/pyguide.html and try not to drag global state through your entire codebase with kwargs.


The fastai library uses a style guide specifically designed for data science libraries: https://docs.fast.ai/dev/style.html .

There is only one global in the library, a singleton called `defaults`, and it contains two things: the default GPU to use and the number of CPUs to use.


Anyone have thoughts on the utility of the Intro to ML and Computational Linear Algebra courses? I've done Ng's ML course and was interested in the first of these as a more practice-oriented complement to it; the latter sounds interesting, but a bit more of an "elective" to me.


Really awesome course! For those looking to run it locally, these seemed to be the most straightforward setup steps, assuming you have conda installed: https://course.fast.ai/start_aws.html#step-6-access-fastai-m...


That'll work nicely as long as your local machine has a GPU that supports CUDA. If not, you can either train on CPU and wait an eternity, or follow the AWS EC2 setup steps further up on that page.
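
A quick way to check before committing to a local setup (standard PyTorch calls):

    import torch

    if torch.cuda.is_available():
        print("CUDA GPU found:", torch.cuda.get_device_name(0))
    else:
        print("No CUDA device; training will fall back to the (slow) CPU.")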


I was expecting this new course to also include a Swift section with Chris Lattner, but couldn't find it. Does anyone know by chance when that will be released (roughly)?


There was a lot of noise about this, but if you read the blog post carefully, Jeremy only said that the Swift work is very interesting, that they'll keep a close eye on it, and that they'll consider using it in the course.

So basically, nothing promised for v4.


I was in the live version of the most recent course. The last two weeks were guest-lectured by Chris Lattner on Swift for TensorFlow.

Edit: this appears to be a link to part 1, which was released in January (and is excellent). The S4TF portion was in part 2, which should be released publicly sometime this month.


Interesting, do you have a link to the lecture?

IIRC fastai transitioned from Keras/TF to PyTorch a while back, so I don't understand what's happening now.


- the 2017 course was Keras/TF (+ a little PyTorch once Jeremy hit the limits of Keras)

- the 2018 course was PyTorch.

- the 2019 course (part 2) is vanilla Python, and then the last couple of classes rebuild everything using Swift.

The lectures will be released soon.


I'm not sure what to think:

"The combination of Python, PyTorch, and fastai is working really well for us, and for our community. We have many ongoing projects using fastai for PyTorch [...] This stack will remain the main focus of our teaching and development.

It is very early days for Swift for TensorFlow. We definitely don’t recommend anyone tries to switch all their deep learning projects [...]"


> - the 2019 course (part 2) is vanilla Python, and then the last couple of classes rebuild everything using Swift.

Implementation in vanilla Python sounds exciting. Do you have any idea when the lectures will be released?

I was a bit put off by the use of the fastai library in the 2019 part 1 course, mostly because the library makes it all sound too simple.


Per the closing paragraph of https://www.fast.ai/2019/03/06/fastai-swift/, part 2 is scheduled for sometime this month.



