I have no questions, but feel compelled to say thank you.
I've been an ML/DL practitioner for the past five-plus years, and first watched one of your lectures a bit over a year ago. All I remember thinking is that a wealth of practical knowledge that had taken me years to acquire was there for the taking, for free, for anyone who cared to look.
Since then, I have been recommending these courses to anyone who asks me for advice for learning about deep learning. You -- and Rachel Thomas -- have created by far the easiest and fastest path for a wide range of people to gain deep learning expertise.
In fact, I'm so sure the new lectures will contain valuable nuggets of know-how that even though I consider myself pretty knowledgeable about deep learning (and an expert in my narrow domain of interest), I will make it a point to find time to watch all the updated lectures.
I am a web dev, JS being my first lang of choice :-), and I have been trying to get into machine learning/deep learning/AI, but I am suffering from information overload.
I have zero knowledge of this field, so my question is: should I start with machine learning instead of jumping right into deep learning? Or is it OK to jump right into deep learning, and is it possible to do everything machine learning does with deep learning?
Just as I point a person who shows an interest in learning to code toward Python instead of C++, what do you recommend?
Just jump in with this course. Your background in web dev will be really helpful, but there will be plenty of new concepts to learn - don't try to understand everything perfectly the first time through; just try to complete each week's assignments as best you can. You'll need to learn Python and NumPy along the way, but there are plenty of free online resources you can refer to when you see something you don't understand. And use http://forums.fast.ai of course.
You can then go back through them a 2nd time and do a deeper dive. By that time, our Intro to Machine Learning course will be out too :)
So. Many. Resources. When I have the time I'm watching the first course with Keras and Theano; I haven't gotten to the second course with TensorFlow, and now there's this. My question is: is it worth my while to continue watching the Keras/Theano videos, or should I skip to the Keras/TensorFlow videos, or even to the fastai/PyTorch videos? Are the same concepts covered, or does each series cover different things?
Definitely skip to this new course. Sorry I should have provided a link to the FAQ where this and other questions are answered: http://forums.fast.ai/t/part-1-faq/10330
The concepts and skills you learned from the 2017 course are entirely transferable - learning the software packages is the easiest part, and the various libraries are similar enough that switching between them doesn't take much time.
Jeremy, I don't have a question, but just wanted to thank you guys for fastai. It is just the right mix of "throw you in the pool" and "I'm still here, you aren't going to drown"!
I too want to say thank you, even though I have only started with the material. I feel your philosophy on how to teach the subject will be much more useful to me than the classes I signed up for recently (and lost interest in rather quickly). It's a lot easier to get excited about the practical things surrounding the topic, which is why I'm looking forward to diving into the new content.
On a related note, one thing I do notice is a heavy emphasis in the material on the Computer Vision / Image Processing side of things, which is certainly understandable considering how popular that area is right now.
Something not really computer-vision related that I'm curious about (and I'm not sure if it's covered in the new/existing lessons) is how to craft a dataset from data I might have accessible to me that isn't necessarily image-based, and how to apply these techniques to that sort of dataset to come up with predictions. I bring this up because one of my goals in learning about this topic is to see how I might apply it to my job at a community college: whether I can pull historical data related to our students and use it for forecasting / recommendation purposes, and build some useful applications our students can utilize. As a simple example, I could use historical data about the current student, and maybe data from other similar students, to predict success in a student's upcoming courses.
Thank you and keep up the awesome work (and for sharing it freely :-)!
Just wanted to say thanks for all your work. I'm taking some time off from my regular job as an iOS developer to pursue an ML project of mine. I wouldn't have had the confidence to do that without your online course.
I wrote up a project I did after last year's class on Medium, also because of your guidance.
Thanks for sharing! FYI you may find resnet18 or (better still) densenet a good option for that dataset - you shouldn't need so many training images then.
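For anyone wondering what that swap looks like in code, here's a minimal sketch of my own (plain torchvision rather than the fastai library; `num_classes` is a hypothetical placeholder for your dataset) of fine-tuning a pretrained resnet18 so only a small new head needs training - which is why fewer images suffice:

    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained backbone. densenet121 works the same way,
    # except its head is `model.classifier` rather than `model.fc`.
    model = models.resnet18(pretrained=True)

    # Freeze the pretrained weights so only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False

    num_classes = 5  # hypothetical: the number of classes in your dataset
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head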
Jeremy, thanks for putting this out. I had issues with part 1, v1 with the AWS set up. But I had none of those issues with this course. As a beginner programmer the template provided via Paperspace is just what I needed. I've been waiting for this course to be released for a really long time now, so I am incredibly excited to have successfully set up the environment and am now FINALLY ready to learn. Thank you!
1. You've made an off-hand comment in one of your videos that a sequential dense network is just a generalization of any other type of neural network architecture. In theory you could re-create an RNN or CNN using just dense layers, but obviously it's not practical.
Why isn't it practical? Is it because the network would have to be too deep, or too wide? Would the optimizer just get stuck in a local minimum, or would overfitting be inevitable? Or perhaps some combination of issues?
What do you think is the best hope for a generalized network architecture, most similar to our brain?
2. On a somewhat related note, do you have strong enough faith in the current machine learning algorithms and architectures being used (RNNs, CNNs, capsule networks) that, given infinite resources (training time and network size), we would be able to create a meaningful general AI? Or do you think that our current approach is just incremental, and a truly different approach would be required to achieve meaningful AI?
> Why isn't it practical? Is it because the network would have to be too deep, or too wide? Would the optimizer just get stuck in a local minimum, or would overfitting be inevitable? Or perhaps some combination of issues?
Schmidhuber did a paper a few years ago showing near SoTA performance on computer vision using just a fully connected net. One of our students showed how a convolution is just a weight-tied matrix multiply here: https://medium.com/impactai/cnns-from-different-viewpoints-f...
So the issue is that without the weight-tying, you've got more parameters to regularize (which can decrease performance) and train (which takes longer). So you should use weight tying where you can - e.g. by using convolutions.
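To make the weight-tying concrete, here's a minimal sketch of my own (not from the Medium post linked above) showing that a small 1-D convolution produces exactly the same output as a dense layer whose weight matrix reuses the same three kernel values in every row:

    import torch
    import torch.nn.functional as F

    x = torch.randn(8)  # input signal of length 8
    w = torch.randn(3)  # convolution kernel of size 3

    # Standard 1-D convolution (shapes: batch=1, channels=1).
    conv_out = F.conv1d(x.view(1, 1, -1), w.view(1, 1, -1)).view(-1)  # length 6

    # The equivalent dense layer: a 6x8 matrix in which every row reuses
    # (is "tied to") the same three kernel weights, just shifted along.
    W = torch.zeros(6, 8)
    for i in range(6):
        W[i, i:i + 3] = w

    dense_out = W @ x
    print(torch.allclose(conv_out, dense_out))  # True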
In general, domain-specific architectures try to find structure in the underlying data and problem, and use that to decrease the number of parameters we need. The use of implicit factorizations in the Inception and Xception architectures is a good example.
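As an illustration of that kind of factorization (my sketch, assuming the depthwise-separable convolution that Xception is built on is the factorization meant), the parameter counts in the comments show how much it saves:

    import torch.nn as nn

    in_ch, out_ch, k = 64, 128, 3

    # Full convolution: 128 * 64 * 3 * 3 = 73,728 weights.
    full = nn.Conv2d(in_ch, out_ch, k, padding=1)

    # Factored (depthwise-separable) version:
    separable = nn.Sequential(
        # depthwise: one 3x3 filter per input channel -> 64 * 3 * 3 = 576 weights
        nn.Conv2d(in_ch, in_ch, k, padding=1, groups=in_ch),
        # pointwise: 1x1 conv to mix channels -> 128 * 64 = 8,192 weights
        nn.Conv2d(in_ch, out_ch, 1),
    )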
Really looking forward to getting stuck into this course. I have a background in fluid mechanics, in particular writing code (in MATLAB, but more recently in Python) to identify structures in experimental and numerical data. In essence this is very similar to classical image analysis, so moving on to machine learning seems a logical next step. I've been working through Andrew Ng's course, and while it is very good, it suffers from fundamentalitis to a degree. However, I do plan on completing Ng's course first.
Is the 'fastai' library on Pytorch something you'd recommend only for beginning/learning/this course, or is it intended to be used in production beyond the course? Is it being positioned more as a learning aid or as an open source library with a life beyond this course?
Definitely well beyond the course. It's designed to be the easiest way to create world-class models. I'll be providing a lot more information on how we're doing this in the next week or two.
PS: The focus of fastai is training, not production. The models you end up with are largely standard pytorch models, so standard pytorch approaches to production work fine. For most people, a simple flask endpoint with CPU inference is generally the best approach for this (e.g. all the deep learning web apps at http://allenai.org/ use this approach AFAIK.)
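For the curious, a CPU-inference Flask endpoint of the kind described can be this small. This is a minimal sketch of my own (not fastai's or AllenAI's code), assuming a torchvision ResNet as a stand-in for your trained model and a hypothetical `/predict` route that accepts an uploaded image file:

    import io

    import torch
    from flask import Flask, jsonify, request
    from PIL import Image
    from torchvision import models, transforms

    app = Flask(__name__)

    model = models.resnet34(pretrained=True)  # stand-in for your trained model
    model.eval()                              # inference mode

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                             std=[0.229, 0.224, 0.225]),
    ])

    @app.route("/predict", methods=["POST"])
    def predict():
        img = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
        batch = preprocess(img).unsqueeze(0)  # add a batch dimension
        with torch.no_grad():                 # no gradients needed at inference
            logits = model(batch)
        return jsonify({"class_id": int(logits.argmax(dim=1))})

    if __name__ == "__main__":
        app.run()  # CPU inference is fine for low-traffic apps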
I do, but at the moment the course is the documentation, on the whole - so it won't really be standalone until we've written proper docs! (There are basic docstrings for most functions, but they don't tell you how it all fits together.)
I thought the first version of the course was amazing, but this version unlocked a new level of brilliance - especially the fastai library. I'm counting the days to part 2. Thank you so much! You're making a huge difference around the world! This is education at its finest.
Started this weekend and finished the first two chapters. Loving it so far. This is the first course that makes me this excited, and I keep thinking about when I can start the next chapter. The forums and other resources are well documented and extremely useful.
Hi Jeremy, thank you so much for your great effort, it helps me a lot.
Will you release a new version of the part 2 course this year? I haven't started part 2 yet, but I heard that it is already using the new PyTorch-based library.
Very excited about the PyTorch version of this course. Will you offer more NLP-focused materials in future courses? I've found that the majority of deep learning materials gravitate towards vision and image classification.
We suggest Paperspace now. Lesson 1 walks you through getting set up. It's very easy - there's a fast.ai template there ready to go. They have the best price/performance ratio of anyone at the moment for GPUs.
Can I still use AWS though? Do the setup scripts remain the same? I had already set up an AWS server (my company has provided us some cash for personal learning) from the old lessons, but haven't started the lessons yet.
It'd be great if I could switch to the new lessons but stay on AWS.
Yup AWS is fine too. We have a new AMI available if you want to use it, and also a predefined conda environment to add fastai/pytorch to an existing server. Lesson 2 covers AWS usage, and there's also step by step instructions linked from the forum.
If you remember to shut down your machine when you're done, and you do the suggested 10 hours per week plus maybe an extra 5 hours of model training per week, at $0.45/hour for Paperspace's cheapest machine (which still works great): 15 hours/week x 7 weeks x $0.45/hour = $47.25.
Dan from Paperspace here. Our fast.ai template is Linux-based, but you can spin up a GPU-backed Windows instance on our cloud if you're interested in a remote option. Our streaming tech is GPU-accelerated, so it feels snappy.
Hi Jeremy,
I just recently "upgraded" from a MacBook Air to a Windoze gaming laptop with a GTX 1050 GPU, just so that one day I could hopefully do all the fast.ai assignments on my laptop. I think this announcement will spur me to take up this course and finish it. I have an actual need to automate some of our IQC/OQC (incoming and outgoing quality control) of sub-assemblies and parts that we purchase from our vendors, and I hope to leverage what I learn from this course. Thanks a ton for democratizing deep learning and ML for a large population.
Ananth
BTW the 2018 version of the course is being discussed in this forum, for those interested: http://forums.fast.ai/c/part1-v2