Linear models are simpler. GBMs are more powerful, more flexible, and faster.
Every ML course I took had 3 weeks of problem sets on VC dimension and convex quadratic optimization in Lagrangian dual-space, while decision tree ensembles were lucky to get a mention. Meanwhile GBMs continue to win almost all the competitions where neural nets don't dominate.
I suspect my professors just preferred the nice theoretical motivation and fancy math.
SVMs are, by default, linear models. The decision boundary in the SVM problem is linear, and because it's the max-margin boundary we may enjoy nice generalization properties (as you probably know).
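A minimal sketch of that distinction, with an illustrative dataset and hyperparameters I'm assuming (not from the original comment): a plain linear SVM draws a straight max-margin boundary, while swapping in an RBF kernel lets it bend.

```python
# Linear SVM vs RBF-kernel SVM on a toy non-linear dataset (illustrative only).
from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC, SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

linear_svm = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)     # linear boundary
rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)  # non-linear boundary

print("linear SVM accuracy:", linear_svm.score(X, y))
print("RBF-kernel SVM accuracy:", rbf_svm.score(X, y))
```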
You probably also know that decision tree boundaries are non-linear and piecewise, and it's not so straightforward to find good splits on continuous features.
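To make the split-finding point concrete, here's a rough sketch of the basic idea (my own illustration, with hypothetical names): sort the continuous feature, try midpoints between consecutive distinct values, and keep the threshold that minimizes weighted Gini impurity. Real tree libraries do this far more efficiently.

```python
import numpy as np

def gini(labels: np.ndarray) -> float:
    # Gini impurity of a set of class labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    # Exhaustive threshold search on one continuous feature.
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    best_thr, best_score = None, np.inf
    for i in range(1, len(x_sorted)):
        if x_sorted[i] == x_sorted[i - 1]:
            continue  # no threshold can separate equal values
        thr = (x_sorted[i] + x_sorted[i - 1]) / 2
        left, right = y_sorted[:i], y_sorted[i:]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score

x = np.array([2.1, 0.5, 3.3, 1.7, 2.9, 0.9])
y = np.array([1, 0, 1, 0, 1, 0])
print(best_split(x, y))  # threshold around 1.9, weighted impurity 0.0
```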
I.e., if the data is linearly separable, then why not use a linear model? Even using hinge loss with neural nets is not uncommon.
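For anyone who hasn't seen hinge loss outside the SVM setting, here's a minimal sketch of plugging it into a small neural net (my illustration, not from the comment): labels live in {-1, +1} and the model outputs one raw score per example.

```python
import torch
import torch.nn as nn

def hinge_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # max(0, 1 - y * f(x)), averaged over the batch
    return torch.clamp(1 - labels * scores, min=0).mean()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(128, 20)                         # fake batch of features
y = torch.randint(0, 2, (128,)).float() * 2 - 1  # labels in {-1, +1}

scores = model(x).squeeze(-1)
loss = hinge_loss(scores, y)
loss.backward()
optimizer.step()
print(float(loss))
```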
You probably see GBMs winning a lot of competitions compared to SVMs because many competition datasets are large and have non-linear decision boundaries. Some problems don't have those characteristics.
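As a rough sketch of the "lots of data, non-linear boundary" case where a GBM shines out of the box (illustrative dataset and default settings, my assumptions, not a benchmark):

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A larger non-linearly-separable toy dataset.
X, y = make_moons(n_samples=20_000, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbm = HistGradientBoostingClassifier().fit(X_tr, y_tr)
linear = LogisticRegression().fit(X_tr, y_tr)

print("GBM test accuracy:   ", gbm.score(X_te, y_te))
print("linear test accuracy:", linear.score(X_te, y_te))
```

On a genuinely linear problem the gap disappears, which is the point of the comment above.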