Interesting that the most common models in use are the simpler ones: logistic regression and decision trees. This is despite all the hype around more complicated techniques like neural nets and GBMs. Is it just that these models are faster to train and easier to interpret, or is something else going on?
in my experience, deep learning is a lot harder than building simpler ml models: training times are brutal, you need lots of data, overfitting is a constant battle, results are hard to interpret, and lots of things can silently go wrong. from a mathematical standpoint deep learning is very general (by the universal approximation theorem a neural net can approximate essentially any Borel-measurable function arbitrarily well, and simpler models like logistic regression are basically special cases of neural nets), but in practice it's definitely harder.
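to make the "special case" point concrete, here's a toy sketch (the dataset, learning rate, and iteration count are all made up) showing that logistic regression is literally a one-neuron neural net: one weight, one bias, a sigmoid activation, trained by gradient descent on log loss.

```python
import math

# logistic regression as a one-neuron "network" -- toy data, made-up hyperparams

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# tiny separable dataset: label is 1 when x > 0, else 0
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w, b = 0.0, 0.0   # one weight, one bias: the entire model
lr = 0.5          # learning rate (arbitrary)

for _ in range(200):             # plain stochastic gradient descent
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass of the one-layer net
        grad = p - y             # dLoss/dz for log loss + sigmoid
        w -= lr * grad * x
        b -= lr * grad

predictions = [int(sigmoid(w * x + b) > 0.5) for x, _ in data]
```

stack more of these neurons into layers and you get a full neural net; keep just the one and you're back to logistic regression.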