
Good to see the Netflix prize paid off for Netflix. In pure hourly-rate terms, Netflix managed to get some of the smartest people in the world to work for less than chump change. On those terms alone it was a huge success, but the Netflix prize also pushed the field forward, so really no one was exploited.

The same idea has been commercialised by Kaggle (http://www.kaggle.com/), but there are several issues. Of course there is less uptake as the idea is no longer novel and the prizes are smaller. More than that, I think people are realising that winner-takes-all sucks, and the winning entries tend to combine so many different techniques that, as Netflix found, putting them into production is difficult. There is some interesting work on a better model here: http://arxiv.org/abs/1111.2664




The Netflix prize's criterion -- accurately predicting ratings across a pile of movies -- is a red herring, though. It may help some, but that broad accuracy comes at the expense of an algorithm that's optimal for what's really important.

What I want is a recommendation of what I'll probably like. It is absolutely irrelevant whether mid-range movies score a 2 or a 3. If Netflix can pick out a list of movies that I would rate a 5 (and maybe even a 4), they've got the holy grail. Nothing else matters. So why optimize your algorithm to capture the 2s and 3s as well?

For bonus points, it might be nice to be able to pick out the real dogs. If it could warn me that I'm about to rent a 1 or a 2, that would be cool. But it doesn't matter whether it can tell me which of the two it is. The precision is irrelevant; just tell me I won't like it.
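
To put numbers on that, here's a toy sketch in Python (the ratings and the >= 4 cutoff are invented purely for illustration) of the gap between the RMSE the prize optimized and the "did you surface my 4s and 5s" question I actually care about:

    import math

    # Hypothetical (true rating, predicted rating) pairs for one user.
    pairs = [(5, 4.6), (5, 3.9), (4, 4.2), (3, 2.4), (2, 2.9), (1, 1.8)]

    # RMSE, the prize metric: every error counts, including the 2-vs-3
    # distinctions that never change what I'd actually watch.
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in pairs) / len(pairs))

    # What matters to me: of my true 5s, how many land on the
    # "recommended" list (predicted >= 4)?
    fives = [p for t, p in pairs if t == 5]
    hit_rate = sum(p >= 4.0 for p in fives) / len(fives)

    print(f"RMSE = {rmse:.2f}, but only {hit_rate:.0%} of my 5s were surfaced")

A model could keep shaving the RMSE by nailing the 2s and 3s while that hit rate never moves.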

(If I've said this once, I've said it a hundred times. But I guess I'll keep on like a broken record as long as Netflix keeps trumpeting what an achievement the Prize's algorithm was.)


You're not alone. I don't know if you have an ML background, but it has been fairly widely discussed in the community that the way Netflix scored the competition -- RMSE -- isn't the best choice. E.g. http://hunch.net/?p=949 and http://andrewgelman.com/2008/11/netflix_prize_s/
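
To make the scoring complaint concrete, a toy case (numbers invented): two models that are indistinguishable under RMSE but disagree about which movie to put at the top of the list:

    truth   = [5, 4]   # what the user would actually rate two movies
    model_a = [4, 3]   # off by one on each, but ranks them correctly
    model_b = [4, 5]   # same per-movie error, yet recommends the wrong one first

    def rmse(pred):
        return (sum((t - p) ** 2 for t, p in zip(truth, pred)) / len(truth)) ** 0.5

    # Identical under the prize's metric; only model_a gets the ranking right.
    assert rmse(model_a) == rmse(model_b)

A rank-aware metric would separate the two immediately.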

In the OP's article they mention that they track whether a movie is watched to completion, which gives them a much better metric to optimise. The other issue is that this is really a sequential decision-making problem. Recommending a movie has an opportunity cost -- there are other movies you don't recommend -- and recommendation is an ongoing process, so it is probably best to spend some time exploring the user's taste on the assumption that this will let you make better recommendations in the future. Accounting for these issues is much harder in a competition format.
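
One standard way to model that is as a multi-armed bandit. A minimal epsilon-greedy sketch (the genres, finish rates, and epsilon are all made-up stand-ins) of trading exploration against exploitation, using "watched to completion" as the reward:

    import random

    # Hypothetical genres with finish probabilities unknown to the recommender.
    true_finish_rate = {"drama": 0.7, "horror": 0.2, "comedy": 0.5}
    plays = {g: 0 for g in true_finish_rate}
    finishes = {g: 0 for g in true_finish_rate}

    def recommend(epsilon=0.1):
        # Explore: sometimes try a random genre to keep learning the user's taste.
        if random.random() < epsilon or all(n == 0 for n in plays.values()):
            return random.choice(list(true_finish_rate))
        # Exploit: otherwise pick the genre with the best observed finish rate.
        return max(plays, key=lambda g: finishes[g] / plays[g] if plays[g] else 0.0)

    for _ in range(1000):
        g = recommend()
        plays[g] += 1
        # Simulated "watched to completion" signal stands in for the real reward.
        finishes[g] += random.random() < true_finish_rate[g]

    print({g: round(finishes[g] / plays[g], 2) for g in plays if plays[g]})

None of this fits neatly into a fixed train/test competition, because the reward you observe depends on which recommendations you actually made.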


And if you look at the winners of the Netflix prize, they were people working in heavyweight research labs (Yahoo, AT&T, etc.). Very few people can really compete.



