Ok, hear me out: I am a mod of two very large subs. The larger one, and the one where I am mostly active, is HUGE. To give you an idea how large: there are anywhere between 7,000 and 15,000 comments per day, and in terms of active moderators it's somewhere between 5 and 6 (the others pop in and out for 5-6 actions a day).
First things first, beyond the reddit TOS, ultimately the moderators have the final word on what stays and what goes. And as such, some subs are moderated great, while others are a complete shithole. E.g. my country's sub is moderated by 3 far-right extremists. And I do mean that: some months ago a police officer shot at an illegal migrant, and a mod commented with "The only crime here is the cop only shot once and didn't kill him". Someone was kind enough to translate the text into English and message reddit, and he got suspended. Which, as reddit goes, means the suspension is not permanent and he will be back. How they have been getting away with stuff like that for 12-13 years is beyond me.
But more on the sub I moderate: with this much traffic and this much content, we need to have a very tight set of rules and we need to enforce them. Some of those rules include: no speculation, rumors, or "I heard that", not to mention obvious propaganda and spam. All of which, believe it or not, are rampant on reddit, and the platform gives mods little to no tools to fight them. Hence the reason I've made several bots that work around the clock to ease our pain.
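Just to give a flavor of what such a bot does at its core — here's a toy sketch of a "no speculation" check. The phrases, patterns, and logic are purely illustrative assumptions, not our actual bot's configuration:

```python
import re

# Illustrative phrases that would trip the "no speculation / rumors /
# 'I heard that'" rule. A real bot would use a much larger, tuned list.
SPECULATION_PATTERNS = [
    r"\bi heard that\b",
    r"\brumou?r has it\b",
    r"\bsomeone told me\b",
    r"\bapparently\b.*\bsources\b",
]

def flags_speculation(comment_body: str) -> bool:
    """Return True if the comment matches any speculation pattern."""
    text = comment_body.lower()
    return any(re.search(pattern, text) for pattern in SPECULATION_PATTERNS)

print(flags_speculation("I heard that the deal fell through"))   # True
print(flags_speculation("The official statement confirms it"))   # False
```

In practice a bot like this wouldn't auto-remove on a single match — it would queue the comment for a human mod to review, which is where the real time savings come from.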
Now I have no idea how the subs in question in the article are moderated or what their rules are. But from my experience, there are users that do exactly what the mods replied with. We have several users that use the platform and the sub to self-promote their youtube channels or websites, and we have had huge arguments with them about it. One notable example is a woman who is sort of an official, who went full Karen on us. We came to an agreement that we would let her post under the condition that her posts are strictly relevant to the sub. The next day, she posts a 30 minute video, and in our chat someone posts a message saying:
"Uugh, u/<redacted for privacy> posted another video. Does someone have the patience to watch through an annoying 30+ minute video?"
Now picture this happening in a sub with hundreds of daily submissions: forget about the fact that we are volunteers — even if this was our full time job, it's not humanly possible to go through all that. So if, indeed, the author has been posting their own content, then looking at it from my perspective, I can sympathize with the mods. Now is it a bit extreme to ban someone for it? Yeah, totally. Of course you should first try to reason with the author and set some boundaries. The fact of the matter is, moderators play a huge role in how active and alive a sub is. And for the sub I moderate, believe me, we go to great lengths to make sure it stays active and alive. Apart from being moderators, we constantly discuss the content and contribute ourselves.
And at that scale it is understandable if some content slips through the cracks, which is fine for the most part. We try to make sure this doesn't happen, so the bots I made include an NLP model (which, despite being the product of a weekend's worth of coding, has been doing an awesome job of giving us a list of users to be bonked). And looking at the subs in question, they are tiny. If you are a user who visits regularly and sees 5 frequent contributors, 3 of which use reddit for self-promotion, it's likely that you will bounce off. And moderators are well aware of that.
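For the curious, a weekend-sized model for this doesn't have to be fancy. Here's a toy stand-in (not our actual model — the phrase weights and threshold are invented for illustration) showing the general idea: score posts for self-promotional language and surface the users who accumulate the highest scores for mod review:

```python
from collections import Counter

# Illustrative phrase weights; a real model would learn these from
# labeled examples rather than hard-coding them.
PROMO_WEIGHTS = {
    "my channel": 2.0,
    "my new video": 2.0,
    "my website": 2.0,
    "subscribe": 1.5,
    "check out": 1.0,
}

def promo_score(text: str) -> float:
    """Sum the weights of every promotional phrase found in the text."""
    t = text.lower()
    return sum(w for phrase, w in PROMO_WEIGHTS.items() if phrase in t)

def users_to_review(posts: list[tuple[str, str]], threshold: float = 3.0) -> list[str]:
    """Given (username, post_text) pairs, return users whose cumulative
    promo score crosses the threshold, worst offenders first."""
    totals: Counter = Counter()
    for user, text in posts:
        totals[user] += promo_score(text)
    return [u for u, s in totals.most_common() if s >= threshold]

posts = [
    ("selfpromo_guy", "Check out my new video and subscribe to my channel!"),
    ("regular_user", "Great discussion in yesterday's thread."),
]
print(users_to_review(posts))  # ['selfpromo_guy']
```

The key design point is that the output is a review list, not an auto-ban list — the model narrows hundreds of daily submissions down to a handful of accounts a human actually looks at.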
As I said, banning seems extreme, and what we would have done is try and reason with the user first. Of course, if that fails to yield meaningful results, then...
What HN did reasonably well was to train its own users to do a bit of moderation themselves; "bad" behaviour has a good chance of just getting flagged or of some user complaining about your comment.
I've seen that behaviour a few times in some other forums, and for mid-sized communities it seems to work, provided you cultivate that culture in the members.
Oh no, our community has been incredibly helpful in that regard. They do report things if they see them. I was really pleasantly surprised by that, to be fair. And as time goes by, you learn who your most valuable contributors are and you start paying closer attention to them. Which is not to say that the automations and bots I made didn't take a ton of load off all the mods on the sub.