Learning about the basis of logistic regression in Ng's Stanford class was eye-opening. I also liked that he then motivated generalized linear models and why they're nice (e.g., the natural parameter is linear in the input; the maximum-likelihood hypothesis predicts the expectation of the sufficient statistic), and he explains why we see the logistic function in so many damn places: it's the canonical response function when y|x is Bernoulli-distributed, a member of the exponential family.
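To make that last point concrete, here's a small sketch (my own, not from the course notes) of the derivation: writing the Bernoulli pmf in exponential-family form gives the natural parameter as the log-odds, eta = log(phi / (1 - phi)), and inverting that mapping is exactly the logistic function.

```python
import math

def sigmoid(eta):
    """Inverse of the Bernoulli natural-parameter (log-odds) mapping."""
    return 1.0 / (1.0 + math.exp(-eta))

# Bernoulli(phi): p(y) = phi^y * (1 - phi)^(1 - y)
#              = exp(y * log(phi / (1 - phi)) + log(1 - phi))
# so the natural parameter is the log-odds, eta = log(phi / (1 - phi)).
phi = 0.7
eta = math.log(phi / (1 - phi))

# Inverting the log-odds recovers phi via the sigmoid -- this is why
# the logistic function shows up as the response function.
print(round(sigmoid(eta), 6))  # recovers 0.7
```

In a GLM the further assumption is that eta is linear in the input (eta = theta^T x), which is what turns this identity into the logistic regression hypothesis.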
It was great how he spent a lot of time on logistic regression before delving into SVMs or neural nets; it was much easier to understand the cost functions and regularization for other types of classifiers after having understood those for logistic regression.
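As a sketch of what that cost function looks like in practice (my own toy example, not course code): gradient descent on the negative log-likelihood of logistic regression, with an optional L2 penalty standing in for regularization. A nice detail from the lectures is that the gradient has the same (h(x) - y) * x form as linear regression's.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, l2=0.0, steps=5000):
    """Batch gradient descent on the cross-entropy (negative
    log-likelihood) cost, with an optional L2 regularizer on w."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # same (h(x) - y) form as linear regression
            gw += err * x
            gb += err
        w -= lr * (gw / n + l2 * w)
        b -= lr * (gb / n)
    return w, b

# Toy 1-D data: label is 1 when x > 0.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
print(sigmoid(w * 2.0 + b) > 0.5)    # True: classified positive
print(sigmoid(w * -2.0 + b) < 0.5)   # True: classified negative
```

With l2 > 0 the weights are shrunk toward zero; the same penalty term carries over almost unchanged to SVMs and neural nets, which is why covering it here first helps.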
My takeaway: if a simpler model does the job, don't add risk to your systems with a more complicated one.