
I don't think trust can be reliably built with elements of gamification like this. The problem with gamification is that people learn to game it, which erodes the kind of sincere, honest interactions you're trying to cultivate.

In the olden times, blogs earned trust by cultivating a reputation: an audience that trusted them and recommended them to others. Cross-linking to other blogs, guest blogging, being included on another blogger's 'blogroll,' etc. were all ways they expanded that audience.

It was slower and had much less reach, but it also focused more on "building an audience" than on "driving traffic." We, fundamentally, don't trust content, so mechanisms that operate by validating atomized bits of content are going to fall flat. We trust people and institutions. If you want to build trust, it has to work on the agents producing the content rather than on the content itself. Segmenting content into atomized bits is what creates the erosion of trust in the first place. It's something timeline-driven social media feeds do specifically because it makes it difficult to parse genuine buzz from advertising, which makes the ads more effective. But that's the opposite of trustworthiness.




This is sort of a perfectionist perspective. Search engines use the same “gamification” and suffer the same problems you’re “predicting,” but that doesn’t mean you don’t use search. It does mean it’s an arms race between the engine and the abusers. Weighting the agents instead of the content is no different from a popularity contest and is essentially an “appeal to authority” (or a lot like cancel culture). Just because someone has weird opinions about X doesn’t mean they can’t be brilliant about Y. If you rank the content instead, their X content can sink and their Y content can rise. The problem with Twitter, etc. is that their incentives are not as aligned with their users’ goals as we would like. Probably the best part of blogging was that it wasn’t centralized, so it wasn’t subject to one person’s definition of what those trade-offs should be. But, of course, now we’re discussing how to fix one of its weaknesses without losing too many of its strengths.
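
To make the agent-vs-content distinction concrete, here is a toy sketch (all names and numbers are made up for illustration): ranking by an author-level weight gives everything they produce the same standing, while ranking per item lets their weak X content sink and their strong Y content rise.

    # Toy illustration: author-level weighting vs per-item ranking.
    author_weight = {"alice": 0.9}          # one reputation covers everything alice posts
    item_score = {
        "alice/hot-take-on-X": 0.2,         # judged on its own, this can sink...
        "alice/deep-dive-on-Y": 0.95,       # ...while this can rise
    }

    # Author-based ranking cannot separate the two items; item-based ranking can.
    by_author = sorted(item_score, key=lambda i: author_weight[i.split("/")[0]], reverse=True)
    by_item = sorted(item_score, key=item_score.get, reverse=True)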


> If you want to build trust, it has to work on the agents producing the content...

LinkLonk's algorithm works that way. It builds trust in sources of information (including users who rate content), and it does not and will not try to understand individual pieces of content.
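
For a minimal sketch of what source-level scoring could look like (my own guess at the shape of it, not LinkLonk's actual code; the source names and weights are hypothetical): an item's rank comes entirely from the trust accumulated in whoever endorsed it, with no analysis of the content itself.

    # Sketch: score items purely by trust in their endorsers, never by content.
    trust = {"feed:example-blog.com": 0.8, "user:alice": 0.5, "user:bob": 0.1}

    def score(endorsers, trust):
        """Total trust of every source that endorsed the item."""
        return sum(trust.get(source, 0.0) for source in endorsers)

    endorsements = {
        "post-123": ["feed:example-blog.com", "user:bob"],
        "post-456": ["user:alice"],
    }
    ranked = sorted(endorsements, key=lambda item: score(endorsements[item], trust), reverse=True)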

Unlike social media feeds powered by black-box neural networks, LinkLonk's algorithm is transparent: you know how your interactions with it will be interpreted. I hope that this transparency will help build trust in the system and in the sources of information you are connecting to.
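
As an example of the kind of transparent rule this implies (an assumption on my part, not the actual formula): a thumbs-up adds a fixed amount of trust to every source that recommended the item, a thumbs-down subtracts it, so you can see exactly how each rating will shape what gets surfaced next.

    # Hypothetical transparent update: each rating moves trust in the item's
    # recommenders by a fixed, predictable step; no hidden model in between.
    STEP = 0.1

    def rate(recommenders, trust, liked):
        """Shift trust in each recommender up or down by a fixed step."""
        for source in recommenders:
            delta = STEP if liked else -STEP
            trust[source] = max(0.0, trust.get(source, 0.0) + delta)

    trust = {"feed:example-blog.com": 0.8, "user:bob": 0.1}
    rate(["feed:example-blog.com", "user:bob"], trust, liked=True)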

Yes, bad actors will try to game any system to gain attention they don't deserve. I'm not claiming that LinkLonk is game-proof, but I think it has better feedback loops and incentives than other systems, such as popularity-based ranking (please don't take that as a challenge).



