> where participating in one community automatically bans you from other communities, regardless of the content and context of your reputation
This doesn't happen unless you make a moderator who also moderates other subreddits mad (though I haven't heard of that happening to anyone), or unless you violate the content policy: https://www.redditinc.com/policies/content-policy
>where participating in one community automatically bans you from other communities
This absolutely does happen.
I got banned from /r/offmychest, for example, despite never commenting or submitting anything to that subreddit.
When I asked for a reason for the ban, I was told there is a tool mods can use that automatically bans accounts that "participate" in specific subreddits the moderator deems "bad".
The rest of their reply to me was: "We are automatically banning participants in specific abusive hatereddits that have systematically harmed this support community."
Alongside /r/offmychest, I was also banned from /r/blacklivesmatter, /r/depression, /r/relationships, and who knows what else, because I got tired of trying to find out. I literally never interacted with any of these subreddits in any way, yet I'm already banned from them...
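The mechanism described above amounts to banning on provenance rather than content: the tool looks only at *where* an account has posted, never at what was actually said. A minimal sketch of that decision logic, with all names and data invented for illustration (this is not the actual tool):

```python
# Hypothetical sketch of a participation-based auto-ban rule.
# The subreddit names and comment data below are made up.

BANNED_SOURCE_SUBS = {"examplehatesub"}  # subs the mods deem "bad"

def should_autoban(comment_history):
    """comment_history: list of (subreddit, comment_text) pairs.

    The decision keys only on which subreddit each comment was made in;
    the comment text is never inspected, which is exactly why a critical
    comment triggers the same ban as a supportive one.
    """
    return any(sub in BANNED_SOURCE_SUBS for sub, _text in comment_history)

# A single years-old comment *criticizing* the sub still triggers a ban:
history = [("examplehatesub", "This subreddit is toxic and harmful.")]
print(should_autoban(history))  # True
```

Note that `_text` is unpacked and discarded: adding context-awareness would require actually reading it, which these tools, as described, do not do.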
>regardless of the content and context of your reputation
If they had bothered looking at the "content and context" of my singular, years-old comment then they would have realized I was actually criticizing the subreddit in question (and its users) rather than supporting them. Instead, I was preemptively banned from multiple communities.
I'd argue there's a huge responsibility on the folks writing moderation tools like this to ensure they don't have adverse side-effects like this.
In the sci-fi Matrix reputation world, I can absolutely see somebody curating a reputation list called #bad-people:example.com which they prime by finding every user ID in every room they don't like and blanket assigning them -1000 reputation.
If a moderator were then dumb enough to subscribe to #bad-people:example.com and use it to impose a ban list on their rooms, then I'd hope their community would arch an eyebrow at the crassness, treat them like a rogue moderator, and either get them removed or fork and go elsewhere... assuming it's possible to visualise the filters that have been put in place.
There's a huge responsibility on the tool author to ensure that the users can see what filters are in place, and what they do, and encourage the user to challenge them - but again, hopefully, the market will vote with its feet and users will adopt the best tools available, and avoid being trapped under primitive moderation systems like the ones you refer to here.
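To make the scenario concrete, here is a minimal sketch of a room subscribing to a curated reputation list and the transparency requirement argued for above. Every list name, user ID, and threshold is invented for illustration; this is not a real Matrix API:

```python
# Hypothetical reputation-list subscription model. All identifiers
# (#bad-people:example.com, user IDs, the -500 threshold) are invented.

reputation_lists = {
    "#bad-people:example.com": {
        "@alice:server": -1000,  # blanket-assigned, no context recorded
        "@bob:server": -1000,
    },
}

def effective_reputation(user_id, subscriptions):
    """Sum the score each subscribed list assigns to user_id (0 if absent)."""
    return sum(reputation_lists.get(lst, {}).get(user_id, 0)
               for lst in subscriptions)

def visible_filters(subscriptions):
    """The transparency requirement: users can see exactly which
    lists a room applies, so they can challenge or reject them."""
    return sorted(subscriptions)

subs = ["#bad-people:example.com"]
banned = effective_reputation("@alice:server", subs) < -500  # True
```

The failure mode is visible in the data itself: the list records only a score, not why it was assigned, so a moderator consuming it inherits the curator's judgments wholesale. Exposing `visible_filters` to room members is what lets a community spot and reject a crass list.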
tl;dr: we need better, morally relative reputation systems rather than no reputation system at all. liberal plurality >> anarchy ;)
> I'd argue there's a huge responsibility on the folks writing moderation tools like this to ensure they don't have adverse side-effects like this.
With all due respect, this is a naive take: the tools that are used on reddit for autobanning people from subreddits based on subreddits they've posted in are designed strictly to de-legitimize voices not in ideological lockstep with their own.
>If then a moderator was dumb enough to subscribe to #bad-people:example.com and use it to impose a ban list on their rooms, then I'd hope that their community would arch an eyebrow at the crassness and treat them like a rogue moderator and either get them removed, or fork and go elsewhere
The issue with forks is that they either unify the community (a successful fork) or completely divide the community (ffmpeg versus libav). It is more likely that such a system will divide communities and encourage infighting rather than consensus.
Question: did you ask anyone with experience with community building and community dynamics about this proposed reputation system, and if so, what were their comments?
Whether or not they have the effect of "[de-legitimizing] voices not in ideological lockstep with their own", the stated goal is often to prevent brigading. If you don't think that's a real problem, don't think that people are seeking solutions in good faith, or don't think these solutions are effective, please say that.
>I think brigading is a real enough problem that moderators/administrators need to at the very minimum be aware of it and ready to step in if needed.
I also think that brigading on reddit is an inherent problem with the way reddit is structured, and that no amount of stapled-on tooling will fix it in the general case, because such tools can't determine the intent of the person accused of brigading.
At best they can detect patterns of repeat behavior in an automated fashion.
> If you don't think that's a real problem, don't think that people are seeking solutions in good faith, or don't think these solutions are effective, please say that.
I don't think that the majority of people deploying these solutions on reddit are doing so in good faith.
>there's a huge responsibility on the folks writing moderation tools like this
I'm fatalistic about this: in my opinion, all moderation tools would eventually end up in situations like this.
The Rise of The Power-User just feels too inevitable to me. Once you have power-users you start to have clashing of egos. Once you have egos, you start to have censorship (as opposed to moderation).
I honestly think it's an unsolvable problem, via technical means anyway, as it seems to speak to the fundamental nature of human relationships and communities.
BLM is advocating for fundamental change to the nature of law enforcement and incarceration, the very apparatus of state power. Anyone who thinks this could possibly be apolitical is blind to what politics is.
To reiterate what you're saying in case you just had a typo or something, you're advocating for blanket censorship regardless of context. Is that correct?
Yes absolutely. You do know that this social website uses "blanket censorship regardless of context" right? Shadow bans, rate limiting, etc. I am 100% sure Hacker News has automated tasks for identifying who to shadow ban and rate limit, since I've tested this with multiple accounts and other tools like changing IPs, etc. and also by contacting them for reasoning for their decisions. I think these are perfectly fine and acceptable ways for people to manage their private communities. My only caveat would be that you can't discriminate against protected classes. I would prefer that these rules be transparent, but I think that's a bit of a straw man for this discussion.
Managing private communities of tens of thousands of people requires some disciplined rules in order for the entity's leaders to achieve their goals.
While I recognize that "private communities" have the ability to moderate and administer their communities whatever way they see fit (and subreddits [at least their initial iterations] certainly fit under this label), I do not agree that blanket censorship without contextual understanding is the right way to do things.
>My only caveat would be that you can't discriminate against protected classes
How do you define "protected classes", though? Is it your definition? The US Federal Government's? Who gets to make those decisions keeping in mind a niche subreddit (or even a full site like we are on here) will already have a very different demographic than the real world?
Not to mention, sorting and censoring people based on a classification that is outside of their control is quite a... controversial way to go about things.
>Managing private communities of tens of thousands of people require some disciplined rules in order for the entity's leaders to achieve their goals.
Establishing, enforcing, and maintaining "disciplined rules" should not, and do not need to, mean "shadow bans, rate limiting, etc". Certainly not without a human element capable of contextual analysis, at least.
--
I disagree with you, but I respect you for explaining your beliefs.
>I do not agree that blanket censorship without contextual understanding is the right way to do things.
I don't agree with it either, but it's impossible to do this at scale without hiring a large number of people, relative to community size, as full-time paid moderators. We have to accept the reality that these free communities need automation in order to maintain order, or else they would cost a lot of money to run.
>How do you define "protected classes", though?
This is a straw man and I'm not going to address it. You know exactly what I'm talking about.
So a single comment I made multiple years ago that was in criticism of /r/the_donald was actually "brigading" and worthy of a ban across multiple unrelated communities?
Several subreddits have policies that will ban you automatically, even if you've never participated in their subreddit, for participating in other subreddits. For instance, the most famous one in my memory is that posting in /r/the_donald got you automatically banned from a number of other subreddits.