I agree with all the takes about LinkedIn turning into a bad social network, but is that intrinsic to its design or is it something that happens no matter what?
Is there a way to design a site--at the UI/policy level--to end up with a community that doesn't suck?
For Hacker News, for example, is what we like about it just a consequence of the current set of members (in which case, it might change eventually), or is it the moderation that keeps it good? If the latter, is there some way to automate the moderation (to make it scalable to LinkedIn-sized networks)?
I'm also thinking of Quora, which was once an amazing community, but eventually (though it took years) degraded into just another meme-fest/click-bait social network.
I think it’s an intrinsic byproduct of any site/community that puts an algorithmic feed optimized for “engagement” front and center. By our nature, we engage with things that ping our extreme senses, which is the core reason Facebook and Twitter have gone to complete crap. Clickbait is truly bait and we are just angry raging fish if baited sufficiently.
I think it’s absolutely possible to have a community that doesn’t suck if sucky behavior is punished instead of rewarded.
Well said. I think HN has two qualities that make it work:
1. No need to drive engagement.
2. Heavy moderation to keep things on track.
#2 won't work without #1 because of what you said (the nature of getting engagement).
But is there some way for moderation to scale? Maybe this is where LLMs come in?
Ironically, LinkedIn should be the one experimenting with this. If they can get revenue from recruiters or job seekers, then they don't need the toxic kind of engagement.
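The LLM-moderation idea above could be sketched as a triage pipeline: a model scores every comment and only borderline cases go to human moderators, so human attention scales. Everything here is hypothetical; the scoring function is a crude keyword stand-in for an actual model call, and the thresholds are invented:

```python
# Toy moderation triage: an automated classifier scores comments and
# routes only borderline cases to human moderators. The scoring
# function is a stand-in for an LLM call; names and thresholds are invented.

def toxicity_score(text: str) -> float:
    """Stand-in for a real model; crude keyword heuristic for illustration."""
    bad_words = {"idiot", "scam", "clickbait"}
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in bad_words)
    return min(1.0, hits / max(len(words), 1) * 5)

def triage(text: str, auto_remove_at: float = 0.8, human_review_at: float = 0.4) -> str:
    """Return one of 'remove', 'human_review', 'publish'."""
    score = toxicity_score(text)
    if score >= auto_remove_at:
        return "remove"
    if score >= human_review_at:
        return "human_review"
    return "publish"

print(triage("Interesting paper, thanks for sharing"))  # publish
```

The point of the middle "human_review" band is that the model never gets the final word on close calls, which addresses the scaling problem without fully handing moderation to a bot.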
Upvotes and downvotes help, and I'd argue HN does a good enough job at preventing new accounts from being able to do disproportionate damage (discouraging bots, astroturfing, etc.)
Another factor is user-friendliness combined with engagement maximization. Whatever Facebook did around 2015 to make everything bloated, animated, and slower seemed to make the site more approachable for a subset of less technical users, so people who can't fact-check obviously fake articles and spam posts were enabled and incentivized to post as much as possible; layering engagement-maximizing algorithms on top of that makes these outcomes inevitable.
Twitter's chronological feed was turned off as the default at some point in favor of a similar algorithmic "for you" page, which I think has also made things worse for that platform.
No idea about Quora, but eventually, for ad-supported sites, you start maximizing for time on site and getting people to look at ads, rather than for the "quality" of the content on your social media site.
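The new-account gating mentioned above could look something like the sketch below. To be clear, these thresholds are made up for illustration and are not HN's actual values:

```python
# Hypothetical vote-gating rules, loosely inspired by how sites like HN
# limit new accounts; the thresholds here are invented, not documented values.

from dataclasses import dataclass

@dataclass
class User:
    karma: int
    account_age_days: int

DOWNVOTE_KARMA_MIN = 500   # assumed threshold for illustration
FLAG_AGE_MIN_DAYS = 30     # assumed threshold for illustration

def can_downvote(user: User) -> bool:
    """New accounts can't suppress content until they've earned karma."""
    return user.karma >= DOWNVOTE_KARMA_MIN

def can_flag(user: User) -> bool:
    """Flagging requires a minimum account age, discouraging throwaways."""
    return user.account_age_days >= FLAG_AGE_MIN_DAYS

new_account = User(karma=5, account_age_days=2)
veteran = User(karma=1200, account_age_days=900)
print(can_downvote(new_account), can_downvote(veteran))  # False True
```

The design intent is that destructive actions (downvoting, flagging) cost something to unlock, so bot farms and sock puppets can't do disproportionate damage on day one.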
The fact that YCombinator doesn't need to optimize the platform for profit/advertising for the business to run profitably means they can afford to maintain rules that encourage niche, high-quality discussion and to keep high standards for moderation.
HN engagement is not (for now?) directly related to any significant "real-world" metric, meaning that (for now?) people are not using it as an indicator in a CV or anything like that. Having a lot of points here also does not lead (at least as far as I know) to any significant "real-world" social engagement between its users. As soon as such a score exists, human beings will start to optimize it in some ridiculous and disgusting ways.
I also suspect that moderation is heavy, but I have absolutely no idea about how it works, who does it, etc. I've never seen moderators engaging with the community here, which means that probably moderating itself does not add to your social score.
You will frequently see that the top comment is by the moderator, describing merged/redirected threads, explaining title changes, or setting expectations for hot/contentious topics.
It's both. Human moderation will always be an important factor. If you want larger communities to work well, you need to put in a proportionate amount of human attention to maintain them, including bottom-up effort from users. Bot-autocracy is bad not just because it doesn't work well but also because it makes people feel alienated.
And the design is important too. There's no followers here, no monetization, no flashy upvote count, essentially no identity with a capital I. HN and sites like it work to a large part because there's no status or financial grift involved in posting, people largely just post to communicate. The exact opposite of what Twitter is turning into.
Adverse selection: a community is good, so it grows; then it becomes attractive to the mainstream, and to bad actors who exploit it for their own agendas, extracting value while making it worse.