
Wouldn't this just encourage people to "pile on" by upvoting things which already have high scores, or downvoting things which already have low scores?



That's why I emphasize that the key is setting thresholds around it. For example, don't perform the calculation once the post's votes are split by more than 10%, or add a freshness component so that you only gain the karma when you were the first upvote on something that ended up being popular.
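A minimal sketch of that kind of guard, assuming a hypothetical eligible_for_bonus helper and reading "more than a 10% distribution" as the minority side of the vote holding more than 10% of the total:

    def eligible_for_bonus(upvotes, downvotes, was_first_upvote):
        # Hypothetical guard: skip the karma calculation once the vote split
        # is contested (minority side above 10% of the total), and only
        # consider voters who got in before the post took off.
        total = upvotes + downvotes
        if total == 0:
            return False
        minority_share = min(upvotes, downvotes) / total
        return minority_share <= 0.10 and was_first_upvote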


Counting 'freshness' is even worse - that just turns the whole thing into a game of speculation. "Piling on" to existing popular consensus does enough damage to originality, but what you propose would explicitly reward a proactive chase of the lowest common denominator.

For an exploration of similar ideas, take a clicky: http://www.nplusonemag.com/?q=node/473 (I actually think I might have seen it here first, but forgot to save it...)


It's worse only in the narrow context in which it's allowed to be dominant. This isn't a zero-sum game. Taking the time differential between when a contribution was posted and when it first met a "value" threshold (say, 5 upvotes), and rewarding one of the people who voted in the affirmative with, say, 0.25 karma points for it, does not reward the lowest common denominator. My hypothesis is that it might have exactly the opposite effect, by encouraging those who might otherwise be excessively frugal with their votes to use their voting opportunities in a meaningful way.
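A rough sketch of that calculation, keeping only the 5-upvote threshold and the 0.25-point reward from above; the names, the data layout, and the choice to reward the very first upvoter are all assumptions:

    from datetime import datetime, timedelta

    VALUE_THRESHOLD = 5       # upvotes needed before a post counts as "valuable"
    EARLY_VOTER_BONUS = 0.25  # karma awarded to one early affirmative voter

    def early_voter_reward(posted_at, upvote_times):
        # upvote_times: datetimes of the upvotes, in the order they arrived.
        # Returns (time_to_threshold, rewarded_voter_index, bonus), or None
        # if the post never reached the threshold.
        if len(upvote_times) < VALUE_THRESHOLD:
            return None
        crossed_at = sorted(upvote_times)[VALUE_THRESHOLD - 1]
        # Time differential between posting and first meeting the threshold.
        time_to_threshold = crossed_at - posted_at
        # Reward the first affirmative voter; a real system might scale the
        # bonus by how quickly the threshold was reached.
        return time_to_threshold, 0, EARLY_VOTER_BONUS

    # Example: a post that picked up five upvotes within an hour.
    posted = datetime(2011, 1, 1, 12, 0)
    votes = [posted + timedelta(minutes=m) for m in (3, 10, 22, 40, 55)]
    delta, voter, bonus = early_voter_reward(posted, votes)
    print(delta, voter, bonus)  # 0:55:00 0 0.25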

In other words, it would be a mistake to read my saying that a given variable is worth taking into account as a claim that the variable is free of all bias and in every other way perfect. You will never have high confidence in all of your variables when they describe human interaction, but that low confidence doesn't speak to the utility of the variable; it speaks to its importance in the overall calculation.


I understand that it's a small component of the overall system you envisage; I just believe that it's a small component pointing in the wrong direction.

If you boil it down, my point is that while "do people like what I say?" is a suboptimal scoring criterion, "do most other people like the same things I say I like?" is likely to be significantly worse.



