Stanford CS231n – Convolutional Neural Networks for Visual Recognition (cs231n.github.io)
125 points by dennybritz on Feb 9, 2015 | 11 comments



Hi HN, the course is still in progress, so the notes are not yet finished (I'm currently struggling to finish the ConvNet notes).

Our syllabus is here: http://cs231n.stanford.edu/syllabus.html, where you can also find the lecture slides, which contain some additional information.

Lastly, our assignments (which walk you through implementing a Softmax/SVM classifier, neural networks, and ConvNets in Python+numpy) are all on terminal.com. Terminal.com lets us set up a VM in the browser: you visit the assignment URL, fork the snapshot, and you can start working on the assignment right away in your browser in an IPython Notebook. The data is there, all dependencies are already installed, and everything is ready to go. We're also working with terminal.com to get access to GPU machines soon, which will let us set up assignments that use Caffe, efficient GPU code, etc.
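
For a rough sense of what the assignments build up to, here is a minimal numpy sketch of a linear Softmax classifier's loss (an illustration only, not the actual assignment code):

    import numpy as np

    def softmax_loss(W, X, y):
        # W: (D, C) weights, X: (N, D) data, y: (N,) integer class labels.
        scores = X.dot(W)                            # (N, C) class scores
        scores -= scores.max(axis=1, keepdims=True)  # shift for numerical stability
        exp_scores = np.exp(scores)
        probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
        return -np.log(probs[np.arange(len(y)), y]).mean()  # average cross-entropy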


I've been reading through the notes, and you present the material extremely well. I especially like how you discuss naive approaches before moving on to a better way of doing things (e.g., computing a gradient numerically vs. analytically). This is rare in teaching, but from a student's perspective it really helps fill in the gaps in knowledge as you try to reason through and understand the process on your own.
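
To make the contrast concrete, here's a quick sketch of my own (a toy function, not anything from the notes): estimate the gradient with centered differences, then check it against the analytic answer.

    import numpy as np

    def numerical_gradient(f, x, h=1e-5):
        # Estimate df/dx elementwise with centered differences: slow, but easy to trust.
        grad = np.zeros_like(x)
        it = np.nditer(x, flags=['multi_index'])
        while not it.finished:
            i = it.multi_index
            old = x[i]
            x[i] = old + h; f_plus = f(x)
            x[i] = old - h; f_minus = f(x)
            x[i] = old  # restore the original value
            grad[i] = (f_plus - f_minus) / (2 * h)
            it.iternext()
        return grad

    # Toy check: f(x) = sum(x**2) has analytic gradient 2*x.
    x = np.random.randn(3, 4)
    num = numerical_gradient(lambda v: np.sum(v**2), x)
    print(np.max(np.abs(num - 2 * x)))  # tiny: centered differences are exact on quadratics up to float error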


Thanks! Unfortunately, a lot of teaching is very relative and depends strongly on prior background. A different student gave me feedback on that section along the lines of: "Why are you expanding out all the random steps nonsense? Gradient descent takes one line to explain". It's the same with my lectures: no matter what I say or cover, at any point during the lecture some people are bored and some are completely lost. All you can hope for is to hit the median well, and then to learn to ignore (to some degree) the person who just asked a question indicating that they are not following at all, and the person next to them who is yawning and on their phone.
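
(For what it's worth, the "one line" that student had in mind is presumably just the update rule itself; a toy sketch:

    # Minimize f(w) = (w - 3)**2; its gradient is 2*(w - 3).
    w, lr = 0.0, 0.1
    for _ in range(100):
        w -= lr * 2 * (w - 3)  # the "one line": step against the gradient
    print(w)  # converges to ~3.0

The notes expand this out precisely because that one line hides all of the intuition.)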


I'd like to second karpathy here. When teaching a practical take on something like machine learning (and even deep learning!), I've had to cater to different tastes. People in these classes usually fall on either the more engineering-oriented side, where breaking down gradient descent step by step helps, or the more mathematical side, where they've already done convex optimization, know the trade-offs of L-BFGS vs. Conjugate Gradient, and find all the properties of parametric models obvious. The best thing you can do here is work with the students one-on-one to fill in the gaps; there's no silver bullet. That's why I'd say taking the class in person is always going to be better than the notes alone. I think karpathy is hitting a wider audience with the way he's handling the notes, though.

Props to the way you're handling this!


Confused as to the downvote, but maybe I can clarify here. People wanting to apply deep learning tend to fall into one of two camps: heavy CS people with some applied ML experience, and mathematicians who might not have as much experience building things. In karpathy's case, he's likely going to get a mix of students who have taken different classes. There's a lot of variance either way.


This is brilliant, thank you! Is there any audio/video of the lectures available anywhere? I've been enjoying some other Stanford courses on iTunes U and Coursera :)


Thanks! We ended up not recording the lectures this time around. I was playing with the idea of prerecording some videos MOOC-style, but then ended up completely swamped even without them. Between the notes, slides, lectures, office hours, midterm design, coding assignments, project design, message boards, meetings, and various miscellaneous tasks, running this class has turned out to be a stressful 100+ hours/week endeavor, and that's even with an all-star TA team by my side.

There have been some whispers of offering this class next year as a proper MOOC, in which case we'd definitely have videos. I'm just not sure if I'm up for it yet - I enjoy dissemination, but I'm also starting to miss research quite a bit, and a MOOC would likely be the same thing all over again, or worse.


Are you planning to offer it in non-MOOC form (i.e., teach it the same way, but presumably with less stress since you've already made the notes and assignments) at Stanford next year?


I've been following along, and this course has been great so far. The author has also made the code for one of his publications on image captioning publicly available: https://github.com/karpathy/neuraltalk

Along with various other nice work on neural networks:

http://cs.stanford.edu/people/karpathy/convnetjs/started.htm...

http://cs.stanford.edu/people/karpathy/recurrentjs/

http://karpathy.github.io/neuralnets/

There is also a nice visualization of linear classifiers (SVM, Softmax) from the class:

http://vision.stanford.edu/teaching/cs231n/linear-classify-d...


OT: It's fantastic that faculty at top universities like Stanford are increasingly making their content freely available on GitHub. It surely is an awesome time to be alive!


I wish I could take this class online (I'm not an alumnus, and not even in the US), but thanks for the free material!



