Hacker News

Love each of the individual BellKor approaches (http://www2.research.att.com/~volinsky/netflix/ProgressPrize...) for finding recommendations in the space of movies or users -- an MDS embedding, a PCA whitening, an NMF factorization by alternating least squares. Each of those hunches seems like the true art in these problems. Blending 100 of them together is far less interesting to me, though.
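For anyone curious what "NMF factorization by alternating least squares" looks like mechanically, here's a toy sketch on synthetic data -- this is not BellKor's actual implementation (it omits the nonnegativity constraint, missing-entry handling, and regularization tuning they used), just the basic alternating-ridge-solve loop:

```python
import numpy as np

# Toy alternating least squares factorization of a ratings matrix.
# R, k, and lam are illustrative choices, not values from the contest.
rng = np.random.default_rng(0)
R = rng.uniform(1, 5, size=(20, 30))   # users x movies, fully observed toy matrix

k, lam = 4, 0.1                         # latent dimension, ridge term
U = rng.normal(size=(20, k))
V = rng.normal(size=(30, k))

for _ in range(20):
    # Fix V and solve a ridge regression for all user factors, then swap roles.
    U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = R.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

# Reconstruction error of the rank-k model.
rmse = np.sqrt(np.mean((R - U @ V.T) ** 2))
print(rmse)
```

Each half-step is a closed-form least squares solve, which is why the loop converges quickly even though the joint problem is non-convex.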

Yet that seems to be the sort of jockeying and tweaking these problems (seen now in Kaggle contests) require. Is there an art or science then to the subsequent blending? Does one develop a better intuition for the problem at that point, or am I entirely missing the point of most ensemble methods (predictiveness over parsimonious understanding)?




> Is there an art or science then to the subsequent blending?

You could regard this as an application of the "smoothed expectation theorem" (better known as the tower property, or the law of total expectation), which says E[X] = E[E[X|Y]]. That is, if you are trying to compute the expectation of something, you can condition it on anything else and compute the inner expectation with respect to that. It might seem trivial or useless, but it is widely applicable and often significantly simplifies computations.
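You can check the identity numerically on any small joint distribution -- the one below is an arbitrary example:

```python
# Numeric check of the tower property E[X] = E[E[X|Y]] on a small
# discrete joint distribution (values chosen arbitrarily).
p = {
    (0, 0): 0.10, (1, 0): 0.25, (2, 0): 0.15,
    (0, 1): 0.20, (1, 1): 0.10, (2, 1): 0.20,
}

# Direct expectation E[X].
e_x = sum(x * pr for (x, y), pr in p.items())

# Marginal p(y).
p_y = {}
for (x, y), pr in p.items():
    p_y[y] = p_y.get(y, 0.0) + pr

# Conditional expectation E[X|Y=y] for each y.
e_x_given_y = {
    y: sum(x * pr for (x, y2), pr in p.items() if y2 == y) / p_y[y]
    for y in p_y
}

# Outer expectation E[E[X|Y]] over Y.
e_e = sum(p_y[y] * e_x_given_y[y] for y in p_y)

print(abs(e_x - e_e) < 1e-12)  # True
```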

One of the practical implications is that if you're not sure about something (the underlying model, specific parameters), just place a prior distribution over it and compute the expectation with respect to that -- it is essentially guaranteed* to give a better result than trying to pick the single correct setup.

Although I'm not sure what the interpretation here would be.

* - so long as the entropy of your prior is not more wrong than the entropy of your hyper-parameters. This is often the case.


Yeah, the tower property! That made my day. Thanks for cleanly giving motivation and mathematical beauty to something that irked me up until now. Which is probably the problem of having your aesthetics drive you in the first place.


I don't know of any deeper interpretation of blending. I guess one could look to the 'wisdom of crowds' for anecdotal evidence. There might be connections with consensus and voting systems, but those are mostly discrete AFAIK. It is at least a pragmatic way of exploiting the different biases of the members in the ensemble. In our case, the Netflix predictions we made based on Wikipedia data scored worse on their own than ratings-only predictions, but they attracted some weight in the blend and made the overall score (marginally) better.
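That "worse alone, but helpful in the blend" effect is easy to reproduce with linear stacking on synthetic data -- the two base predictors below are invented stand-ins, not the actual ratings-only and Wikipedia models:

```python
import numpy as np

# Toy linear blending (stacking): fit blend weights for two base
# predictors against held-out targets. All data here is synthetic.
rng = np.random.default_rng(0)
true_ratings = rng.uniform(1, 5, size=200)

# Two imperfect predictors with different error characteristics.
pred_a = true_ratings + rng.normal(0.0, 0.6, size=200)
pred_b = 0.8 * true_ratings + 0.7 + rng.normal(0.0, 0.9, size=200)

# Fit blend weights (plus an intercept) by least squares.
X = np.column_stack([pred_a, pred_b, np.ones_like(pred_a)])
w, *_ = np.linalg.lstsq(X, true_ratings, rcond=None)
blend = X @ w

def rmse(p):
    return float(np.sqrt(np.mean((p - true_ratings) ** 2)))

# The weaker predictor still earns nonzero weight, and the blend's
# RMSE is no worse than either predictor alone.
print(rmse(pred_a), rmse(pred_b), rmse(blend))
```

Since "use predictor A alone" is itself one of the weight settings the least squares fit can choose, the blend can only match or beat each member on the data it's fit to -- the real question, as in the contest, is whether that holds up on the hidden test set.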



