
> hopefully their community would rise up and vote with their feet (just as you would today if a rogue moderator got trigger-happy with /ban)

This doesn't tend to happen though, in communities offline or online. My own experience (particularly in online communities) is that once groups achieve a certain momentum, the average participant in the group cares very little about policies that impact a tiny minority of users no matter how draconian they are.

If it were true in general that people would "vote with their feet" we wouldn't have autocrats in positions of power or much injustice in the world at all.

I love Matrix and have been running my own Synapse instance for friends and family for years -- I appreciate all the work that has gone into it. And further, I appreciate that this is a challenging problem to solve in general, and that you're unfortunately being forced to come up with "lesser evil" solutions to counter all of this talk of E2E encryption being the "tradecraft of terrorists and child pornographers".

I'm just deeply disappointed to see it -- social credit styled systems like this penalize people for having bad mental health days, or who have unusual interests or points of view, or who are LGBTQ+, or belong to some other vulnerable minority group, and I think generally have a chilling effect on communities -- or rather on these specific subcultures within those communities, clearly not all of whom are "bad faith" actors.




> My own experience (particularly in online communities) is that once groups achieve a certain momentum, the average participant in the group cares very little about policies that impact a tiny minority of users no matter how draconian they are.

Agreed this can be a problem - but I wonder if it can be solved with UX? If the filtering rules are really obvious, and the clients give you the ability to actually visualise and curate the filters being applied, I'd hope that it would be much easier to spot toxic moderation.

> I'm just deeply disappointed to see it -- social credit styled systems like this penalize people for having bad mental health days, or who have unusual interests or points of view, or who are LGBTQ+, or belong to some other vulnerable minority group, and I think generally have a chilling effect on communities -- or rather on these specific subcultures within those communities, clearly not all of whom are "bad faith" actors.

Hm, "social credit" implies an absolute reputation system (like Reddit, HN, China etc ;) which this categorically isn't.

The idea is that if you as a user want to subscribe to a reputation feed which prejudices against minority groups, then that's your bad choice to make. You'll find yourself having to explicitly remove the filter to follow the conversations around you. Alternatively, if you find yourself in a community which has applied a blanket ban or filter on minorities, you may want to find a different community.
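
To make that concrete, here's a rough client-side sketch of what "subscribing to a reputation feed" could look like. It's purely illustrative (the names are made up, not from the actual Matrix spec or this proposal); the key point is that every filter is an explicit, visible, removable subscription rather than a hidden global score:

    # Hypothetical sketch only; none of these names come from Matrix itself.
    from dataclasses import dataclass, field

    @dataclass
    class ReputationFeed:
        name: str                                     # e.g. "community-spam-list"
        flagged_users: set = field(default_factory=set)

    @dataclass
    class ClientFilter:
        subscriptions: list = field(default_factory=list)

        def subscribe(self, feed):
            self.subscriptions.append(feed)

        def unsubscribe(self, name):
            self.subscriptions = [f for f in self.subscriptions if f.name != name]

        def reasons_to_hide(self, sender):
            # Return which subscribed feeds flag this sender, so the client can
            # show *why* a message is hidden instead of silently dropping it.
            return [f.name for f in self.subscriptions if sender in f.flagged_users]

A client surfacing reasons_to_hide() next to hidden messages is the "visualise and curate the filters" part mentioned above.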

We get that there is a massive responsibility on the Matrix team to implement UX for this which is designed against factionalism, censorship, filter bubbles, absolutist social credit, persecution, polarisation, antagonism etc. But we also feel a massive responsibility to stop users getting spammed with invites to child abuse/gore/hate rooms, or from accidentally hosting content which could get them incarcerated.

Critically, this stuff doesn't really exist yet - the first developer hired to work full-time on it hasn't started yet. So this is the perfect time to give feedback, to help ensure this rep system actually works (at least as well as real-life society does) and doesn't go toxic like so many others have before it.


> I wonder if it can be solved with UX?

I saw an HN comment the other day talking about the problem of popular servers in the fediverse blocking polarizing servers. It proposed a solution at the client level: make it easy to switch between identities on different servers. My addition: UX along the lines of Firefox multi-account containers, where you can have multiple tabs (here: conversations/rooms) with different identities open alongside each other, rather than having to switch profiles.
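
As a tiny sketch of what I mean (names invented, not any existing client's API): the client would just keep a per-room identity mapping and send/receive in each room with whichever identity it's assigned.

    # Illustrative only: per-room identities, in the spirit of Firefox
    # multi-account containers. Not an actual Matrix client API.
    room_identity = {
        "#goldfish:bighub.example":   "@alice:bighub.example",
        "#airguns:smallhost.example": "@anon123:smallhost.example",
    }

    def identity_for(room_id, default="@alice:bighub.example"):
        # Losing federation with one server never costs access to rooms
        # reached through the other identity.
        return room_identity.get(room_id, default)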

I think making it easier to participate in other communities without losing access to the large one, should the large one stop federating with them, is a good strategy for encouraging users to vote with their feet. Otherwise, it's hard to justify losing access to the large community just to enable interaction with a small one.

I'm not sure how this solution squares with combating abuse, though; encouraging multiple accounts might make that harder.


Thanks for your response here. You mention that it isn't an "absolute reputation system" -- that makes sense. I'd like to know more about that because I think I'm misunderstanding the proposal.

I understand that it isn't a single dimension (like HN or Reddit karma) -- it seems your proposal is more or less a "tagging" system, but anonymized so that other users wouldn't be able to, say, look at my profile and see how other users have tagged me. But other users or operators could apply a filter to exclude comments from users based on those tags? I feel like my understanding must be incomplete, because that wouldn't be very anonymous -- e.g. if I applied a filter to exclude users tagged with "gopher-enthusiasts" and suddenly stopped seeing messages from certain active members of a community I was in, that would "out" those users as "gopher-enthusiasts". So I assume the system you're proposing is more sophisticated than that. Can you clarify?

Based on my reading of what was proposed, I'm particularly concerned about these scenarios:

1. Say you have a community centred on the goldfish keeping hobby. It happens to be the largest such community -- many thousands of members. And (to use the example given by OP), suppose a large contingent of moderators (or perhaps even the server operator themselves) are super averse to gun enthusiasts, to the point where they'll either ban anyone they learn to be a gun enthusiast, persistently tarnish such members' reputation score (for lack of a better word) until they leave, or simply refuse to let anyone who is also in a gun community participate (does this reputation system give others insight into what communities I'm in?). This is problematic because there aren't many other goldfish communities worth participating in -- all the experts participate in this one, and as a newcomer to the goldfish keeping hobby, I won't get very far "voting with my feet" here. Presumably the overwhelming majority of participants won't care, or even necessarily notice, that gun enthusiasts are "filtered" from the group.

Nothing stops this particular scenario from playing out today, but I guess my concern is that the proposed system would make it super easy for operators to implement -- I would be filtered from the group without anyone knowing anything about me other than that maybe I had some association with a gun-focused group at some point.

2. Say you're having a mental health crisis, and you end up saying some stuff in a channel with many participants that you regret later. How does that impact your global reputation, and for how long?

3. Say you're a Marxist and participate in Marxist discussion groups. What kind of metadata will the reputation system generate about you? Is it possible that it'll put you on the same filter/tag lists as "terrorists" and people who advocate for the overthrow of governments, even if you don't personally hold such beliefs?

4. (Mostly a rehashing of #3) Say you're involved in some (legal, consensual, 18+) fetish community. Are you now globally on a filter list for sexual deviants that will keep you from joining, say, a parenting community?


> 1. Say you have a community centred on the goldfish keeping hobby. It happens to be the largest such community -- many thousands of members.

[...]

> 4. (Mostly a rehashing of #3) Say you're involved in some (legal, consensual, 18+) fetish community. Are you now globally on a filter list for sexual deviants that will keep you from joining, say, a parenting community?

Would these be good use cases for having two identities?


This is a complex issue. I'm not sure if Matrix's reputation system can be used to "call out" individuals or groups the way Twitter callouts (or blocklists/blockchains) operate. However, I have seen someone on Discord get called out on Twitter for being a pedophile (though I didn't see that within Discord, because Discord doesn't let you subscribe to reputation services that comment on users as you see them), and then re-register under a different Discord identity and join servers without disclosing the original identity. So this is already happening.

Twitter now shows posts liked (not just retweeted) by those you follow. I've heard that has led to "like policing", in addition to avoiding people based on who they follow.

Are reputation feeds going to be subject to threats of libel lawsuits if used for false reporting?


As Yoric says (btw, Yoric: check your DMs ;P), #1 and #4 sound like a good case for maintaining different personae.

For #2, I guess you'd need to petition whoever maintains the blocklists that are blocking you. Or give up and start a new identity.

For #3: yeah, there's a risk here that somebody enthusiastically starts building a reputation list for The Best Marxist Content and puts the hash of your user ID on it. Someone then reverses the hash via dictionary lookup or similar, proves that you were on the list, and promptly tries to arrest you. Frankly, that risk already exists today. We may need to think of better pseudonymisation approaches than just hashing, though.
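
For anyone wondering why plain hashing is weak here: the set of publicly visible user IDs is easy to enumerate, so an attacker just hashes every ID they've seen and checks for matches. A tiny illustration (not code from any actual reputation-list implementation):

    # Dictionary attack on a "pseudonymised" list of hashed user IDs.
    import hashlib

    def h(user_id):
        return hashlib.sha256(user_id.encode()).hexdigest()

    reputation_list = {h("@alice:example.org")}      # the "anonymised" entry

    # Attacker hashes every user ID scraped from public rooms...
    known_users = ["@alice:example.org", "@bob:example.org"]
    recovered = [u for u in known_users if h(u) in reputation_list]
    print(recovered)  # ['@alice:example.org']: the hash reveals list membership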



