
Random forests aren't really in the same class of depth (no pun intended) as the other models you've mentioned; if you understand decision trees, random forests are just a way of combining independently trained decision trees that is less prone to overfitting.
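
To make that concrete, here's a rough sketch of the idea (assuming scikit-learn; the data is synthetic and the settings are arbitrary, not anything canonical): each tree is fit on a bootstrap sample with a random subset of features considered at each split, and the ensemble usually generalizes better than any single tree.

  # Hedged sketch: compare a single decision tree with a random forest
  # (an ensemble of bootstrapped, feature-subsampled trees) on toy data.
  from sklearn.datasets import make_classification
  from sklearn.tree import DecisionTreeClassifier
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import cross_val_score

  X, y = make_classification(n_samples=500, n_features=20, random_state=0)

  tree = DecisionTreeClassifier(random_state=0)
  forest = RandomForestClassifier(n_estimators=100, random_state=0)

  print("single tree:", cross_val_score(tree, X, y, cv=5).mean())
  print("forest:     ", cross_val_score(forest, X, y, cv=5).mean())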

I'd probably rank these areas in order of importance as follows: deep learning, PGMs, reinforcement learning. Deep learning as a framework is pretty general. PGMs, as I have seen them, don't really have any one killer domain area - maybe robotics and areas where you want to explicitly model causality? Applications for reinforcement learning seem more niche, but maybe that's because they haven't been explored to the extent that DL/CNNs/RNNs/PGMs have been.




Tree-based algorithms have their place. I dunno what your definition of depth is, but it all depends on your data.

I'm doing a thesis on tree-based algorithms and they work great for medical data.

Granted, I have little exposure to NNs, but you can't do that with NNs when clinical trial data is small as hell.
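
Roughly the kind of thing I mean (just a sketch with scikit-learn's built-in breast cancer data standing in for a small clinical dataset; the subsample size and settings are made up, and this is not my actual thesis code):

  # Hedged sketch: a tree ensemble still gives a usable cross-validated
  # estimate when you only have a couple hundred rows to work with.
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import cross_val_score

  X, y = load_breast_cancer(return_X_y=True)
  X, y = X[:200], y[:200]  # pretend we only have ~200 patients

  clf = RandomForestClassifier(n_estimators=200, random_state=0)
  print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())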

It all comes down to the type of data, your resources, and your criteria.

NNs are really hyped and people tend to overlook other algorithms, but NNs are not a silver bullet. I don't get how you ranked those by importance either; it could be bias.


Oh, there's no question that tree-based methods are effective - random forests/gradient boosted trees routinely win Kaggle competitions. But I was more referring to how random forests are learnable in a day, whereas deep learning would probably take at least a few weeks to learn properly.


In analysis of scientific data, causality is important. Typically in science you aren't really making a model to predict things (although that can be a good way to test the model) but rather to understand what is going on, and PGMs are great for that.
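
As a toy illustration (a hand-rolled three-variable Bayesian network with made-up numbers, not a real analysis): the graph structure itself encodes the assumed causal story - rain and a sprinkler both cause wet grass - and you can still answer diagnostic queries against it by enumeration.

  # Hedged sketch of a tiny discrete PGM: compute P(rain | grass is wet)
  # by summing the sprinkler variable out of the joint distribution.
  from itertools import product

  P_rain = {True: 0.2, False: 0.8}
  P_sprinkler = {True: 0.1, False: 0.9}
  P_wet = {  # P(wet | sprinkler, rain)
      (True, True): 0.99, (True, False): 0.9,
      (False, True): 0.8, (False, False): 0.0,
  }

  def joint(r, s, w):
      pw = P_wet[(s, r)] if w else 1 - P_wet[(s, r)]
      return P_rain[r] * P_sprinkler[s] * pw

  num = sum(joint(True, s, True) for s in (True, False))
  den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
  print("P(rain | wet grass) =", num / den)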



