Learning about the basis of logistic regression in Ng's Stanford class was eye-opening. Also liked that he then motivated generalized linear models and why they're nice (e.g., the natural parameter is linear in the input data; the maximum-likelihood hypothesis is the expected value of the sufficient statistic), and he explains why we see the logistic function in so many damn places (it's the canonical response function when y|x is Bernoulli, the simplest member of the exponential family).
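
A minimal sketch of that punchline (my own code, not from the course; numpy assumed, names like fit_logistic are made up): writing Bernoulli(phi) in exponential-family form gives the natural parameter eta = log(phi/(1-phi)), and inverting that map gives the sigmoid, so the GLM hypothesis E[y|x] with eta = theta^T x is exactly logistic regression:

    import numpy as np

    def sigmoid(z):
        # Canonical response for the Bernoulli family: the inverse of
        # the natural-parameter map eta = log(phi / (1 - phi)).
        return 1.0 / (1.0 + np.exp(-z))

    def fit_logistic(X, y, lr=0.1, steps=2000):
        # Gradient ascent on the Bernoulli log-likelihood; the GLM
        # assumption eta = theta^T x makes the update linear in x.
        theta = np.zeros(X.shape[1])
        for _ in range(steps):
            h = sigmoid(X @ theta)  # h = E[y|x], the ML hypothesis
            theta += lr * X.T @ (y - h) / len(y)
        return theta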



100% agree.

It was great how he spent a lot of time on logistic regression before delving into SVMs or neural nets - it was much easier to understand the cost functions and regularization for other types of classifiers after having understood those for logistic regression.
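
For instance (a hedged sketch of my own, not code from the course), the L2-regularized logistic cost has the same "data loss + lambda * penalty" shape that later reappears as hinge loss plus regularization in SVMs and as weight decay in neural nets:

    import numpy as np

    def logistic_cost(theta, X, y, lam=1.0):
        # J(theta) = -(1/m) * sum(y*log(h) + (1-y)*log(1-h))
        #            + (lam / (2m)) * ||theta[1:]||^2
        m = len(y)
        h = 1.0 / (1.0 + np.exp(-(X @ theta)))
        data_loss = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
        # assumes column 0 of X is the intercept, which goes unpenalized
        penalty = lam / (2 * m) * (theta[1:] @ theta[1:])
        return data_loss + penalty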

My takeaway: if a simpler model does the job, use it; more complicated models add risk to your systems.



