
"Moderation" is a terrible excuse used by the corporate types as a reason to stick in their safe walled gardens.

We had moderation on Usenet. It was simple, but damn, did it work. We had plonk lists (client-side lists of ignored users), keyword scoring, server-level scoring, and some anti-spam. There was EARLY work on Bayesian filters built into clients, but it came too late for the "big event" (when ISPs everywhere killed their Usenet servers, for what is pretty safely assumed to be anti-piracy reasons).
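The plonk-list-plus-scoring setup described above can be sketched in a few lines. This is an illustrative Python sketch, not any real client's killfile format; the addresses, keywords, and threshold are made up:

```python
# Sketch of client-side Usenet-style filtering: a plonk list of ignored
# senders plus keyword scoring. All names and numbers are illustrative.
PLONKED = {"spammer@example.com"}               # plonk list: always ignore
KEYWORD_SCORES = {"viagra": -100, "usenet": 10} # per-keyword score adjustments
HIDE_BELOW = 0                                  # hide anything scoring under this

def score(article):
    """Score an article dict with 'from', 'subject', and 'body' keys."""
    if article["from"] in PLONKED:
        return -1000                            # plonked senders always lose
    s = 0
    text = (article["subject"] + " " + article["body"]).lower()
    for word, points in KEYWORD_SCORES.items():
        if word in text:
            s += points
    return s

def visible(articles):
    """Return only the articles the client would actually display."""
    return [a for a in articles if score(a) >= HIDE_BELOW]
```

The point is that all of this runs in the reader's client, under the reader's control; the server never has to know what you filtered.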

But let's look at corporate moderation. It's done in secret, with vague rules that may or may not be stated, by content-moderation farms that use and abuse people. https://www.wired.co.uk/article/facebook-content-moderators-... . Or worse yet, you fool people into faux-ownership (Reddit) and get them to do your moderation for you. Well, that is, unless you do a bad job and get your subreddit banned for "low/no moderation".

I'd much rather see everything, and craft my own blocklists as I see fit. You know, just like Mastodon. Sure, there are some fediblocks at the admin level over pervasive content from a single server. But aside from admin tasks, it's pretty damn sweet. And it's loads better than the shit that happens over on Twitter, Reddit, or Facebook.




We, the technologically literate, are not the target audience. Client-side filtering requires awareness of client-side filtering. How many Reddit users want that? Facebook users? Oh, maybe they want to fine-tune their scoring rules library or tweak their Bayesian filters - so much fun! And lacking social intelligence, client-side filtering fails against organized harassment. By the way, who is liable for infractions of the local law?

Enter the party, meet the regulars, meet new faces, enjoy a drink, and don't even think about safety. Sure, you can manage it yourself for a birthday party at your home, but at larger scales even the wild rave party thrown in an abandoned hangar will have some form of institutionalized enforcement, even if it's not corporate security.

Yes, that means that successful large public places tend towards bland - the same reason why successful edgy underground dives are small affairs and their character can't survive growth.


There's no reason a private/paid service addon to a client can't work for filtering. Been thinking similarly for Mastodon, in that you can pretty easily have a gated experience, and get top ranking for Android/iOS client use with a monthly service that includes filtering with relatively sane defaults, but toggle-able filtering/reporting.

You can use F-Droid, web apps, or side-load the "full" unfiltered client, but the experience for the normies would be that paid-app experience. And I say paid app to avoid the pitfall that is advertising-fed results.


> There's no reason a private/paid service addon to a client can't work for filtering.

There should be no reason why a service can't provide that feature natively.


Cost.


> Client-side filtering requires awareness of client-side filtering. How many Reddit users want that ? Facebook users ?

Facebook[1] and Reddit[2] both have a block-user feature. It's a very rudimentary version of the killfile that Usenet clients have. If users didn't want features like that, they wouldn't be available on either platform.

> client-side filtering fails against organized harassment.

Usenet killfiles were advanced enough that you could easily filter out messages even from organized campaigns: for example, filtering on any number of conditions, such as the group header, the From header, a unique header, certain keywords in the body, etc.

[1] https://www.facebook.com/help/168009843260943

[2] https://support.reddithelp.com/hc/en-us/articles/214548323-H...
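The multi-condition killfile matching described above is easy to sketch. This is an assumed Python illustration (the rule set, header names, and `troll-farm` domain are invented), using the stdlib `email` parser since Usenet articles share the RFC 822 header format:

```python
# Sketch of killfile-style matching: each rule targets a header (or the
# body, when header is None) with a regex. All rules here are examples.
import email
import re

KILL_RULES = [
    {"header": "From", "pattern": r"@troll-farm\.example"},  # known bad domain
    {"header": "X-Campaign-Id", "pattern": r"."},            # any value at all
    {"header": None, "pattern": r"(?i)buy now"},             # body keyword
]

def killed(raw_article: str) -> bool:
    """True if any kill rule matches the raw article text."""
    msg = email.message_from_string(raw_article)
    for rule in KILL_RULES:
        if rule["header"] is None:
            target = msg.get_payload()               # match against the body
        else:
            target = msg.get(rule["header"], "")     # match against one header
        if target and re.search(rule["pattern"], target):
            return True
    return False
```

Because an organized campaign tends to share some fingerprint (a posting host, an injected header, a slogan), one rule can kill the whole wave.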


> We, the technologically literate, are not the target audience.

Highly disagree. It's the early adopters of ANY platform, hardware, or whatever that drive real organic demand and adoption. And those early adopters ARE exactly that: technologically literate.

Alienating that group is how you get relegated to the "Digg v4" era of history.

> How many Reddit users want that? Facebook users? Oh, maybe they want to fine-tune their scoring rules library or tweak their Bayesian filters - so much fun!

How many Facebook or Reddit users even HAVE the capability of client-side anything? The big threat is that Reddit's killing most API access unless you pay a pound of flesh. And Facebook does enough horrible things in their markup that it's nearly impossible even to screen-scrape: the text "Sponsored" shows up as shittery like "essprnodo".

For those gated islands, you'd need their permission to really do any sort of client side anything.

Places like Mastodon make blocking trivial. I've shared that link, and no, you don't need to know about Bayesian weights or whatever new mod scheme you want to make. It's there, the initial functions are easy-peasy, and the API hooks are there to add whatever you want.
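To make the "API hooks are there" point concrete: Mastodon's REST API exposes blocking as a plain endpoint (POST /api/v1/accounts/:id/block, authenticated with an OAuth bearer token), so a client or add-on can build on it directly. A minimal stdlib sketch; the instance URL, token, and account id below are placeholders:

```python
# Sketch: constructing a Mastodon block-account API call with urllib.
# INSTANCE and TOKEN are placeholders, not a real server or credential.
import urllib.request

INSTANCE = "https://mastodon.example"   # placeholder instance URL
TOKEN = "YOUR_ACCESS_TOKEN"             # placeholder OAuth bearer token

def block_request(account_id: str) -> urllib.request.Request:
    """Build (but don't send) the request to block one account."""
    return urllib.request.Request(
        f"{INSTANCE}/api/v1/accounts/{account_id}/block",
        method="POST",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )

# Passing the result to urllib.request.urlopen() would perform the block.
```

Anything fancier (keyword scoring, shared blocklists) is just client-side logic layered on calls like this one.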

> And lacking social intelligence, client-side filtering fails against organized harassment.

And that already is a major problem on the corporate side. Reddit calls it "brigading". HN has cool-down scripts to defuse flamewars. Twitter, Facebook, and Reddit all have mass-reporting used as harassment. There are even current 4chan threads to organize just that. Sure seems like an unsolved problem, not just "unsolved for federated systems".

> By the way, who is liable for infractions to the local law ?

Irrelevant. Someone in another area could read content that is illegal to them. That's *their* responsibility, not mine. As long as I follow the laws in my jurisdiction and where my server is located, I'm fine. And 17 USC 512 covers me as long as I do good-faith DMCA process.

> <example of comparing physical real world with raves/alcohol/drugs with online "dangers">

And in your latter half, your other problem is that you're equating online "safety" with in-person bodily safety around alcohol and drugs. Those two aren't even remotely the same, and it's laughable that you'd try to compare them.

If someone is coming on to you in a sexual manner, and your "No" isn't effectively heard or acknowledged, you have a big problem. If someone uploads a bad image (say, animal torture, to pick something abhorrent), you can block the user, block the server, or report the user to their/your server admins. You can also choose not to show images. It's terrible, but physical assault is no comparison to a bad image or terrible text.


> Someone in another area could read content that is illegal to them. That's *their* responsibility, not mine

Unless you serve people from the EU or sell things to US customers like bitcoin.


Moderation isn't only about removing bad content, it's about removing content that's not necessarily bad but is unwanted. /r/gaming and /r/games are entirely different subreddits, with different content focuses and different communities because of their moderation. If I really wanted junk food content consumption I can go to /r/gaming. If I want to see news about video games I can go to /r/games. How do you accomplish that with client side filtering? /r/games has Indie Sunday where developers can post their own games on Sunday without competition from news about the massive AAA games everyone's already heard of. How do you accomplish that with client side filtering?


Good point. The majority of my Usenet memories revolve around meta discussions: what is and what is not supposed to be posted. Mid-2000s and people still had flamewars about whether a four-line signature was too heavy.


Most meta discussions (beyond the basic etiquette and moderation, which were mostly worked out in the 1980s already) don't have objective answers, just subjective preferences. The solution is to keep aggressively iterating on splitting subreddits/communities/categories of users, until quality interaction is maximized. Obvious example: one category of user wants to discuss politics via memes, another category wants to discuss in long-form essays, another in threaded discussions, Canadians want to discuss Canadian politics without being flooded with US stuff, and plenty of others not at all i.e. they want any political posts off-topic and banned. The obvious way to keep all these groups maximally happy is to split into different groups.


Corporate/algorithmic moderation is almost entirely about removing "bad" content. You're talking about community moderation, which works exactly the same in decentralized systems.


> the "big event". (When ISP's everywhere killed Usenet servers, for what is pretty surely assumed as anti-piracy.)

I thought this was due to the deal[1] that Andrew Cuomo (New York's attorney general at the time) made with major ISPs to restrict access to child pornography. Many major ISPs discontinued their Usenet service in that time period.

[1] https://www.cnet.com/tech/tech-industry/n-y-attorney-general...



