Combating abuse without backdoors (matrix.org)
208 points by Arathorn on Oct 20, 2020 | 169 comments



Presumably, the writing is on the wall. One of the driving forces behind spinning up my own Matrix community for me and my friends is that Facebook wouldn't let us discuss firearms without heavy censorship.

My fear with this is that it will turn into something like Reddit, where participating in one community automatically bans you from other communities, regardless of the content and context of your reputation. Especially when the predominant server is matrix.org. What, if any, measures are in place to prevent any one server from becoming overwhelmingly popular and then bullying everyone else? That could be detrimental to the discovery and growth of smaller communities that are viewed as controversial by bigger ones. If you associate with them, you risk being ostracized by other important communities, and barred from other important conversations. I think that's a net negative for the entire ecosystem.

I see the necessity of this, and I appreciate the Matrix team's attempt to provide an alternate solution and avoid governmental oversight, but this is just going to delay the issue, because we all know this isn't about catching child predators. Governments want to break encryption because they are after power. A reputation system won't stop them for long.

I'm very sad to see this development; to me it seems far from what I understood the mission of Matrix to be. I say all this with respect and gratitude for Matrix, which to me is one of the most important developments of the internet this century.


This is an excellent response, and thank you for writing it (and for considering Matrix so highly).

I think the main reason why this isn't going to turn into Reddit is that the reputation is strictly relative, and hopefully we can set a precedent for proportional behaviour.

So, while you will always have some set of people who think you shouldn't discuss firearms and assign folks in firearm communities negative reputation, hopefully that would never result in widespread bans or filtering. If a moderator got overenthusiastic and imposed a blanket ban, hopefully their community would rise up and vote with their feet (just as you would today if a rogue moderator got trigger-happy with /ban). Meanwhile, if someone got overenthusiastic with blanket filter rules (i.e. "automatically assign negative weight to content from users from these communities so it's hidden by default"), then individuals could and should override it by assigning their own filters.
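(To make the "strictly relative" point concrete, here is a minimal sketch in Python of how a personal override might beat a community-level filter weight. The names, data structures, and weights are illustrative assumptions, not the actual Matrix proposal.)

```python
# Illustrative sketch only: a strictly relative filter, where a community
# feed assigns a default weight and an individual user can override it.
# Names and structure are hypothetical, not the actual Matrix proposal.
from dataclasses import dataclass, field


@dataclass
class FilterRule:
    source: str   # who published the rule, e.g. "#cooking-mods:example.com"
    target: str   # the user, room, or server the rule applies to
    weight: int   # negative hides by default, positive promotes


@dataclass
class UserView:
    subscribed_rules: list = field(default_factory=list)
    overrides: dict = field(default_factory=dict)  # target -> weight this user chose

    def effective_weight(self, target: str) -> int:
        # A personal override always beats anything inherited from a feed.
        if target in self.overrides:
            return self.overrides[target]
        return sum(r.weight for r in self.subscribed_rules if r.target == target)


view = UserView(subscribed_rules=[
    FilterRule("#cooking-mods:example.com", "@gunfan:example.org", -50),
])
print(view.effective_weight("@gunfan:example.org"))  # -50: hidden by default
view.overrides["@gunfan:example.org"] = 0
print(view.effective_weight("@gunfan:example.org"))  # 0: this user opted to see them anyway
```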

We've tried to model it after real life, for better or worse. If someone chooses to use the same identity for discussing both firearms and (say) cooking, then they may get shunned by some cookery folks. But most of the cook group won't care and ignore it. You might get unlucky and discover the head chef is anti-firearms and kicks you out, but frankly that sounds like a good reason to find a more broadminded cook group.

You're right that a large server like Matrix.org could take an opinionated view and go and apply radical blanket bans for all sorts of stuff, and set a precedent that it's okay to throw your weight around. But a) we're not going to do that - for instance we haven't yet blocked any server from Matrix.org; b) the only blanket bans we're considering for Matrix.org at this time are against spam, child abuse, and folks conspiring to kill; c) we're pretty confident that if Matrix.org overstepped its bounds on moderation, the network would route around the damage anyway: there are something like 55,000 servers that we're aware of on the network, and we believe Matrix.org accounts for only about 20-30% of the traffic (and that includes IRC bridges etc). So folks would vote with their feet and shift server - especially once we have portable accounts.

Finally, P2P Matrix will change the dynamic entirely on this - no more servers (by default) means that users will be entirely making their own choices on where to hang out and who to hang out with.


> hopefully their community would rise up and vote with their feet (just as you would today if a rogue moderator got trigger-happy with /ban)

This doesn't tend to happen though, in communities offline or online. My own experience (particularly in online communities) is that once groups achieve a certain momentum, the average participant in the group cares very little about policies that impact a tiny minority of users no matter how draconian they are.

If it were true in general that people would "vote with their feet" we wouldn't have autocrats in positions of power or much injustice in the world at all.

I love Matrix and have been running my own Synapse instance for friends and family for years -- I appreciate all the work that has gone into it. And further, I appreciate that this is a challenging problem to solve in general, and that you're unfortunately being forced to come up with "lesser evil" solutions to counter all of this talk of E2E encryption being the "tradecraft of terrorists and child pornographers".

I'm just deeply disappointed to see it -- social credit styled systems like this penalize people for having bad mental health days, or who have unusual interests or points of view, or who are LGBTQ+, or belong to some other vulnerable minority group, and I think generally have a chilling effect on communities -- or rather on these specific subcultures within those communities, clearly not all of whom are "bad faith" actors.


> My own experience (particularly in online communities) is that once groups achieve a certain momentum, the average participant in the group cares very little about policies that impact a tiny minority of users no matter how draconian they are.

Agreed this can be a problem - but I wonder if it can be solved with UX? If the filtering rules are really obvious, and the clients give you the ability to actually visualise and curate the filters being applied, I'd hope that it would be much easier to spot toxic moderation.

> I'm just deeply disappointed to see it -- social credit styled systems like this penalize people for having bad mental health days, or who have unusual interests or points of view, or who are LGBTQ+, or belong to some other vulnerable minority group, and I think generally have a chilling effect on communities -- or rather on these specific subcultures within those communities, clearly not all of whom are "bad faith" actors.

Hm, "social credit" implies an absolute reputation system (like Reddit, HN, China etc ;) which this categorically isn't.

The idea is that if you as a user want to subscribe to a reputation feed which prejudices against minority groups - then that's your bad choice to make. You'll find yourself having to explicitly remove the filter to follow the conversations around you. Alternatively, if you find yourself in a community which has engaged a blanket ban or filter on minorities, you may want to find a different community.

We get that there is a massive responsibility on the Matrix team to implement UX for this which is designed against factionalism, censorship, filter bubbles, absolutist social credit, persecution, polarisation, antagonism etc. But we also feel a massive responsibility to stop users getting spammed with invites to child abuse/gore/hate rooms, or from accidentally hosting content which could get them incarcerated.

Critically, this stuff doesn't really exist yet - the first developer hired to work fulltime on it hasn't started yet. So this is the perfect time to give feedback to help ensure this rep system actually works (at least as well as real-life society does) and doesn't go toxic like so many others have before it.


> I wonder if it can be solved with UX?

I saw an HN comment the other day, talking about the problem of popular servers in the fediverse blocking polarizing servers. It proposed a solution at the client level: make it easy to switch between identities on different servers. My addition: UX along the lines of Firefox multi-account containers, where you can have multiple tabs (here: conversations/rooms) with different identities open alongside each other, rather than having to switch profiles.

I think making it easier to participate in other communities without losing access to the large one, should the large one stop federating, is a good strategy for encouraging users to vote with their feet. Otherwise, it's hard to justify losing access to the large community, just to enable interaction with a small one.

I'm not sure how this solution squares against combating abuse, though; encouraging multiple accounts might make this harder.


Thanks for your response here. You mention that it isn't an "absolute reputation system" -- that makes sense. I'd like to know more about that because I think I'm misunderstanding the proposal.

I understand that it isn't a single dimension (like HN or Reddit karma) -- it seems your proposal is more or less a "tagging" system, but anonymized so that other users wouldn't be able to, say, look at my profile and see how other users have tagged me. But other users or operators could apply a filter to exclude comments from users based on those tags? I feel like my understanding must be incomplete because that wouldn't be very anonymous -- e.g. if I applied a filter to exclude users tagged with "gopher-enthusiasts", and suddenly stopped seeing messages from certain active members of a community I was in, that would "out" those users as "gopher-enthusiasts". So I assume the system you're proposing is more sophisticated than that. Can you clarify?

Based on my reading of what was proposed, I'm particularly concerned about these scenarios:

1. Say you have a community centred on the goldfish keeping hobby. It happens to be the largest such community -- many thousands of members. And (to use the example given by OP), suppose a large contingent of moderators (or perhaps even the server operator themselves) are super averse to guns enthusiasts, to the point where they'll either ban anyone they learn to be a gun enthusiast, or persistently tarnish any members' reputation score (for lack of a better word) until they leave, or simply won't allow anyone who is also in a gun community to participate (does this reputation system give others insight into what communities I'm in?). This is problematic because there aren't many other goldfish communities worth participating in -- all the experts participate in this one, and as a newcomer to the goldfish keeping hobby, I won't get very far "voting with my feet" here. Presumably the overwhelming majority of participants won't care or even necessarily notice that guns enthusiasts are "filtered" from the group.

Nothing stops this particular scenario from playing out today, but I guess my concern is that the proposed system would make this scenario super easy for operators to implement -- I would be filtered from the group without anyone knowing anything about me other than that maybe I had some association with a gun-focused group at some point.

2. Say you're having a mental health crisis, and you end up saying some stuff in a channel with many participants that you regret later. How does that impact your global reputation, and for how long?

3. Say you're a Marxist and participate in Marxist discussion groups. What kind of metadata will the reputation system generate about you? Is it possible that it'll put you on the same filter/tag lists as "terrorists" and people who advocate for the overthrow of governments, even if you don't personally hold such beliefs?

4. (Mostly a rehashing of #3) Say you're involved in some (legal, consensual, 18+) fetish community. Are you now globally on a filter list for sexual deviants that will keep you from joining, say, a parenting community?


> 1. Say you have a community centred on the goldfish keeping hobby. It happens to be the largest such community -- many thousands of members.

[...]

> 4. (Mostly a rehashing of #3) Say you're involved in some (legal, consensual, 18+) fetish community. Are you now globally on a filter list for sexual deviants that will keep you from joining, say, a parenting community?

Would these be good usecases for having two identities?


This is a complex issue. I'm not sure whether Matrix's reputation system could be used to "call out" individuals or groups, the way Twitter callouts (or blocklists/blockchains) operate. However, I have seen someone on Discord get called out on Twitter for being a pedophile (though I didn't see it on Discord itself, because Discord doesn't let you subscribe to reputation services that comment on users as you see them), and then re-register under a different Discord identity and join servers without disclosing the original identity. So this is already happening.

Twitter now shows posts liked (not just retweeted) by those you follow. I've heard that has led to "like policing", in addition to avoiding people based on who they follow.

Are reputation feeds going to be subject to threats of libel lawsuits if used for false reporting?


As Yoric says (btw, Yoric: check your DMs ;P), #1 and #4 sound like a good case for maintaining different personae.

For #2, I guess you'd need to petition whoever maintains the blocklists that are blocking you, or give up and start a new identity.

For #3, yeah, there's a risk here that somebody enthusiastically starts building a reputation list for The Best Marxist Content and puts the hash of your user ID on it. Someone then reverses the hash via dictionary lookup or similar, proves that you were on the list, and promptly tries to arrest you. Frankly, that risk exists already today. We may need to think of better pseudonymisation approaches than just hashing, though.
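(As a concrete illustration of why bare hashing is weak pseudonymisation, here is a small Python sketch. The user IDs and list name are made up; the point is only that an enumerable ID space makes the hash reversible offline.)

```python
# Sketch of the pseudonymisation weakness mentioned above: if a reputation
# list publishes sha256(user_id), anyone with a list of candidate user IDs
# (e.g. scraped from public rooms) can recover who is on it offline.
import hashlib


def pseudonym(user_id: str) -> str:
    return hashlib.sha256(user_id.encode()).hexdigest()


published_list = {pseudonym("@karl:example.org")}          # "The Best Marxist Content"

candidates = ["@alice:example.com", "@karl:example.org"]   # scraped or guessed IDs
for user_id in candidates:
    if pseudonym(user_id) in published_list:
        print(f"{user_id} is on the list")                 # de-anonymised by dictionary lookup

# A keyed or interactive construction (e.g. HMAC with a secret held by the
# list maintainer, or an OPRF-style lookup) would at least stop offline
# dictionary attacks like this one.
```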


> If someone chooses to use the same identity for discussing both firearms and (say) cooking, then they may get shunned by some cookery folks.

It seems like you're conflating two different kinds of things under "reputation".

If I start talking about firearms in a cooking group, obviously I'm way off topic and I should expect my posts to be moderated. If I start spouting insults, that's not just off topic but abusive and I should expect to be treated accordingly. And if I do those sorts of things in multiple forums, I should expect to get a global reputation for not being a good participant and suffer the appropriate consequences.

But if I come into a cooking group and just talk about cooking, why should the information that I'm also in a completely different group talking about firearms even be relevant? Why should "being a member of a group that talks about firearms" be part of my global reputation at all?


I got hit by this with reddit and the masstagger. Normally, people aren't going to go through the post history of someone who says "look at these muffins I baked" and find they posted in an unsavory sub years ago. But with masstagger, anyone who posted in T_D got a big ol' flair calling them out. The only reason I went there was to say "hey, you guys are stupid racists" to some posts, but the actual content of my posts wasn't discussed, merely the fact that I filthied myself by even going near that sub. For Matrix, it would be like someone going into a firearms room to argue for increasing gun control laws in the US, but then all of a sudden your global reputation includes "firearms", and people in cooking rooms will call you an asshole who supports murdering children, even if you are the most anti-firearms person in the world.


> But if I come into a cooking group and just talk about cooking, why should the information that I'm also in a completely different group talking about firearms even be relevant?

Agreed, it shouldn't be relevant, but unfortunately it often is. People are so mired in identity politics that your affiliations outside of a particular group will often matter within that group. It's not fair and it's not rational, but that's what discourse has devolved to, sadly.


> People are so mired in identity politics that your affiliations outside of a particular group will often matter within that group.

I understand that people are often like this. What I'm wondering is why Matrix would include such information in the reputation system they say they're building, since that basically just encourages people to look at irrelevant information and engage in identity politics instead of discouraging it.


a filtering system which doesn't have the concept of identity sounds interesting; how would it work?


Yeah, without the ability to query a particular identity it's not clear (to say the least) how you would go about filtering out malicious identities.

To some extent, identity politics based on public associations is an unavoidable problem. Public data can be scraped so someone is probably going to aggregate and analyze it at some point.

That said, it seems important that a reputation system not facilitate making associations that otherwise wouldn't have been visible. To that end, it's important to take care not to accidentally incentivize community moderators to leak information that otherwise wouldn't have been discoverable by the general public.

In particular, it occurs to me that a naive reporting mechanism inherently reveals that the reported identity has associated with the reporting operator. I assume you've given this problem some thought - is there an obvious way around it? My concern would be that a more general use reputation system (ie one that goes beyond simple "illegal content" and "spam" event reports) would rapidly begin leaking association data on a broad scale, even from otherwise private communities.

I guess the goals here are at odds in a fundamental way. A server should be able to report scores for its participants. Querying an identity should reveal its various scores. Now repeat spammers (or those posting illegal content, or who are just generally assholes, or whatever) can be filtered out. But in being able to query scores for an identity, I don't see how you can avoid revealing the entity that reported any given score. If a broad set of categories are being reported (consider, for example, the birthday cake example from the article) then the information leakage seems like it would end up being quite broad.
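(To spell out the leak: here is a sketch of a naive report store in Python. The record structure is hypothetical, not any real Matrix API; the point is that exposing per-identity scores also exposes which communities reported them.)

```python
# Sketch of the leakage concern above: a naive report record carries both the
# reported identity and the reporting community, so anyone who can query a
# user's scores also learns where that user has been active.
from collections import defaultdict

reports = [
    {"subject": "@alice:example.com", "reporter": "firearms.example", "tag": "spam"},
    {"subject": "@alice:example.com", "reporter": "cooking.example", "tag": "birthday cake"},
]


def scores_for(subject: str) -> dict:
    by_tag = defaultdict(int)
    for report in reports:
        if report["subject"] == subject:
            by_tag[report["tag"]] += 1
    return dict(by_tag)


def inferred_associations(subject: str) -> set:
    # The side channel: the reporter field reveals which communities the
    # subject has touched, even if those rooms are otherwise private.
    return {report["reporter"] for report in reports if report["subject"] == subject}


print(scores_for("@alice:example.com"))            # {'spam': 1, 'birthday cake': 1}
print(inferred_associations("@alice:example.com")) # {'firearms.example', 'cooking.example'}
```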


Perhaps rather than filtering per se, focus on the economic equation of moderation. Fundamentally, moderation is about the time/resource cost of the moderator(s) vs the time/resource cost of evasion. Good moderation is probably an AI-complete problem, so it's hard to automate right now. Most efforts at improving that seem to either use broad-brush measures and heuristics on the mod side, or so-so proxy measures for the evasion side. From captchas to money to asking for ID, all at the end of the day are about trying to make it more 'expensive' to evade bans. If the expense is really high, then even a small bit of moderation can keep up.

But instead of any of that, why not just do a time token directly, with full pseudonymity? Matrix.org could ask people to do something like brute-force RSA, choosing key lengths based on how much time they want to represent, and then sign a "Time Level" certificate result. Community operators could then dynamically adjust how much "time investment" they wanted to require in order to participate, and would have a mechanism to ban independent of IP or anything else. And this could be expected to increase over time as people let their systems run a day or two a month. If in a few years it requires a token equivalent to a month of computation time, that would be a high bar to evasion. It would not require any money, identity, or knowledge of behavior elsewhere, but people would have strong incentives not to burn their Time Level tokens, or at least to comply with non-permanent bans. You could further tweak things by having per-community tokens which can only be issued once per cert, so identity can't be as easily tracked across communities while still stopping evasion. This should all be near completely automatable as well.

Anyway, just some musing. I guess the question is: if communities had effective cryptographically guaranteed rate controls and moderation stickiness, would they really need more in practice to keep up?
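(For illustration, a hashcash-style sketch of the time-token idea in Python. A partial hash preimage stands in for the RSA brute-forcing suggested above, purely because it's a simpler, well-known proof of work; all names and numbers are illustrative.)

```python
# Hashcash-style stand-in for the "Time Level" idea above: minting a token
# costs wall-clock time proportional to the difficulty, while verification
# stays cheap. All names and numbers are illustrative.
import hashlib


def mint_time_token(subject: str, difficulty_bits: int) -> tuple:
    """Grind until sha256(subject:nonce) has `difficulty_bits` leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{subject}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return subject, nonce
        nonce += 1


def verify_time_token(subject: str, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(f"{subject}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0


# A community could raise the difficulty during an influx and lower it later;
# binding the token to a per-community subject keeps it from being reused to
# track the same person across communities.
subject = "@newcomer:example.org|#goldfish:example.com"
subject_token = mint_time_token(subject, difficulty_bits=20)   # ~1M hashes on average
print(verify_time_token(*subject_token, difficulty_bits=20))   # True
```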


Don't measures based on computational difficulty have the effect of erecting an impossibly high barrier for those with limited access to such resources? (The poor, anyone using a mobile device, etc). Any system with a reasonably low bar can probably be worked around by a spammer at a low enough cost per account that it's unlikely to be of much use.

Some services (many subreddits, for example) use account age as a metric. That's easy to work around with mass registrations of zombie accounts though.

Some services (again many subreddits) use overall network reputation as a metric. That makes life difficult for new users though in addition to all the privacy issues surrounding centralized reputation and identity.

Some services use a phone number as a unique identifier that's more difficult to come by than an email or IP address. That still poses an accessibility issue and also introduces a privacy one.

Sorry, I don't actually have a solution here. Just a bunch of problems.


> Don't measures based on computational difficulty have the effect of erecting an impossibly high barrier for those with limited access to such resources?

I don't think this is so true any more, given the flattening of computational growth. These days a 10-15% general gain year over year is quite good, and we've seen generations with less. Order of magnitude is close enough in this case; there isn't that much practical difference between a week and two weeks.

>(The poor, anyone using a mobile device, etc).

Case in point, for much of the population their mobile devices may well be their most powerful ones. A 7 year old PC can still be extremely capable, etc.

Also again, this is just another tool idea. A community can use it to whatever degree they deem appropriate, and that can vary dynamically. So in regular times a low volume community might set a very low level, just a few minutes worth say. But if there was a sudden influx, they could temporarily ramp it up for new joiners.


>for instance we haven't yet blocked any server from Matrix.org

This is stretching the truth. I remember there was a time when Matrix.org attempted to purge a lot of channels related to image board communities. Channels that started with the format of /?/ were deleted from Matrix.org. This includes the federated version of channels from other homeservers. I think it is disingenuous to say that you do not block any other homeservers when you have deleted channels from other homeservers, preventing them from federating properly. Some users have been banned from official channels on the Matrix.org server because of the homeserver they were registered on. Perhaps things have changed since I was last involved with Matrix, but from what I saw the Matrix.org homeserver was to be avoided since it did not play nice.


I'm not aware of us ever having done any en-masse removal of /?/ style rooms from the Matrix.org server. However, it's true that we do remove individual rooms from the server if they break the server's T&Cs (https://github.com/vector-im/policies/blob/master/docs/matri...) - but that's completely different to unilaterally blocking other servers or shutting down rooms based on the pattern of their name(!), which we don't (so far, at least).

There's a whole bunch of conspiracy theories that we do block servers though - but ironically this tends to be due to federation problems (often the remote server hasn't tuned its rate limits, and so as its users get more active, busier servers trying to talk to it get rate limited. matrix.org is one of the busiest servers, therefore the first symptoms of the problem are that it looks like matrix.org is explicitly blocking the server. comedy, eh?).

However, if folks want to believe the conspiracy theory instead, we're not going to shed too many tears.


I wouldn't have written it if I hadn't seen that you frequently engage on these threads, so give the credit to yourself for actually building something for the community, instead of just talking about it in one-way blog posts.

>We've tried to model it after real life, for better or worse. If someone chooses to use the same identity for discussing both firearms and (say) cooking, then they may get shunned by some cookery folks. But most of the cook group won't care and ignore it. You might get unlucky and discover the head chef is anti-firearms and kicks you out, but frankly that sounds like a good reason to find a more broadminded cook group.

>You're right that a large server like Matrix.org could take an opinionated view and go and apply radical blanket bans for all sorts of stuff, and set a precedent that it's okay to throw your weight around

My biggest fear is both of these things in tandem. I agree that individually, they're solvable problems. For instance, I'm not actually worried about Matrix.org banning me by association, and in fact, welcome a reduction of child porn, spam, etc. And if there was a plethora of cooking groups, and getting banned by one still left me with others, it wouldn't be a huge deal.

But it's the potential combination of clout and curation that makes me concerned. For instance, email is a "federated" standard, in that anyone can get a domain and host their own email server. However, for all intents and purposes, it's not a federated standard, because nearly everyone uses Google, and getting Google to accept email from my domain is nearly impossible for a variety of factors that I don't have control over. That means that effectively, it's no longer federated, unless you play by Google's rules.

I don't know how to solve this problem, and I feel bad complaining about this solution without proposing a better one. I see the predicament that you're in, as well. Governments obviously want to be able to peek into conversations, and will use any excuse they can get to do it. If you try to play the game by their rules, and come up with a way to preserve privacy while stopping the spread of things like child porn by democratizing moderation of the entire ecosystem, then I see how that potentially solves the problem, or at least kills the child porn excuse. But I think the game is rigged, and this capitulation comes with costs to the platform; because the game is rigged, it ultimately won't protect what you're trying to protect. Governments will come up with some other reason why they need access to private conversations, and instead of a single death knell, it'll instead be a drawn-out one.

The only solution is to replace the game with something entirely different, the same way for instance, that cryptocurrency did with financial markets. If they had tried to play by the rules of the existing game, it'd have gotten nowhere, because the financial game is designed to specifically stop things like that.

>Finally, P2P Matrix will change the dynamic entirely on this - no more servers (by default) means that users will be entirely making their own choices on where to hang out and who to hang out with.

I'm very excited about this, because this is the type of outside of the box thinking that I think falls into the category of "whole other game where existing rules don't matter". I really hope you guys can get the UX good enough to pull it off with wide adoption.


> But it's the potential combination of clout and curation

Our plan for the Matrix.org server is, in an ideal world, to turn it off once we have portable accounts (and especially once we have P2P). Users can easily pick a set of other servers for home, or just use their devices. We have absolutely zero desire for any server to have clout or to end up as a Gmail-style centralisation point.

However, in an account-portable world, I suspect all we'll see is that communities (rather than servers) will emerge which have an equivalent risk of disproportionate social influence. All we can do then is arm the users with tools which allow them to visualise and curate that influence and make up their own minds, rather than accidentally getting trapped in someone else's filter bubble all over again.

> The only solution is to replace the game with something entirely different, the same way for instance, that cryptocurrency did with financial markets. If they had tried to play by the rules of the existing game, it'd have gotten nowhere, because the financial game is designed to specifically stop things like that.

From my perspective, introducing a morally relative reputation system as a core primitive in the protocol is very much replacing the game with something entirely different. Imagine if SMTP had had the concept of subjectively modelling spam built in from day 1. Or if the Web had had the concept of subjective search result quality.

Nobody has pulled this off before (as far as I know?) but we're having a go at it to see what happens. If it goes horribly wrong then worst case we just turn it off as a failed experiment.


Really hope it works out! Good luck to you and your team.


This is already the case for Mastodon instances. There's a list of servers that many instances refuse to federate with [1] because of hateful or plainly illegal content.

I think Matrix is more akin to email, though, whereas Mastodon is more like Twitter and Facebook. If you wish to contact someone on another server, then that's an active decision by you. If someone wants to force themselves into your server, the most they can do is spam chat invites. There's no illegal content or filtering policy on the server that can prevent that from happening in most cases. The fact that Matrix is e2ee helps; server moderators cannot see what you're discussing, let alone censor you.

The core problem is that one of the ways we as a species prevent bad behaviour from spreading is ostracising groups of people who exhibit such behaviour. I strongly believe that any chat client that is used by humans will eventually exhibit similar behaviour, splitting and ostracising groups that exhibit behaviour other groups of people object to.

[1]: https://github.com/Gargron/mastodon.social-misc


> Facebook wouldn't let us discuss firearms without heavy censorship.

I remember when Facebook censored a famous painting because it showed a naked woman’s breasts. That was so dumb.

Not being very knowledgeable about firearms, it’d help to better understand what sort of stuff in particular Facebook was censoring. May I ask?


Primarily links to legal online listings of firearms, such as Armslist. A friend was suspended multiple times and finally outright banned for continuing to link to Armslist.


I was surprised by this line of comments since I recall my feed having gun / hunting / shooting sports related content (including ads) so I looked into it.

Facebook has a policy banning private individuals from selling guns to each other on their platform - https://www.facebook.com/help/179037502478035 and "Armslist" appears to be a platform for doing just that: "Gun Classifieds, Guns for Sale, No Fees, 45000 guns for sale. The largest free gun classifieds on the web. Buy guns, sell guns, trade guns."

One could argue that the transaction isn't happening "on Facebook" since it's going through Armslist, but I don't find that convincing. If I take payment via Venmo, does that mean it's "not on Facebook" if everything else about the sale was?

Anyway, I fault Facebook for plenty and generally avoid it now, but I don't fault them for this policy. Private sales can get into sketchy territory really quick and I wouldn't want to mess with that on a public platform I created unless I was willing to invest in the nuance there which has basically no value proposition for Facebook.


Sure, I don't fault Facebook either. I think they're wrong, and making a mistake, but I understand why, and think it's a mistake they are absolutely allowed to make.

Technically, Armslist is legal. They don't even facilitate transactions on Armslist, and it's local laws that apply. Basically, it's a Craigslist for firearms. Armslist has also won numerous lawsuits[1] by claiming Section 230 protection. Facebook doesn't know this, and they have no reason to know this. Acquiring this information about Armslist would require someone high enough up at the company to impact US policy to understand this. That would cost money. It makes more sense to avoid it altogether, just like it makes sense to avoid pornography altogether, rather than watching every video that gets posted, in case a "model" is underage.

Matrix gives me the perfect opportunity to continue this conversation on my own terms, using my own knowledge to moderate it.

However, there's also another reason that inspired me to ditch Facebook, which is the fact that they are censoring my private messages at all, without any action by a human being to report them or anything like that. I just dislike that policy, and so I chose to stop using Facebook over it.

[1] https://www.armslist.com/blog/get-involved/armslist-defeats-...


Thanks for your response. Why do you think Facebook is wrong? Honest question.


Because there's no legal liability in the US for allowing links to Armslist. At least, no more legal liability than linking to a website like Craigslist. So they're censoring and driving away some small subset of users for a perceived benefit (less legal liability) that isn't actually real.


Thanks for your response. I'm basing this on what another commenter said, but, I take it your root issue then is Facebook's policy against private individuals selling each other guns on their platform?

Or, did I misunderstand something? : )


> where participating in one community automatically bans you from other communities, regardless of the content and context of your reputation

This doesn't happen unless you make a moderator mad who also moderates other subreddits (though I haven't heard of this happening to anyone), or unless you violate the content policy: https://www.redditinc.com/policies/content-policy


>where participating in one community automatically bans you from other communities

This absolutely does happen.

I got banned from /r/offmychest, for example, despite never commenting or submitting anything to that subreddit.

When I asked for a reason for the ban, I was told there is a tool that mods can use which will automatically ban accounts who "participate" in specific subreddits that the moderator deems "bad".

The rest of their reply to me was: "We are automatically banning participants in specific abusive hatereddits that have systematically harmed this support community."

Alongside /r/offmychest, I was also banned from /r/blacklivesmatter, /r/depression, /r/relationships, and who knows what other ones because I got tired of trying to find out. Literally never interacted with any of these in any way, yet I'm already banned from them...

>regardless of the content and context of your reputation

If they had bothered looking at the "content and context" of my singular, years-old comment then they would have realized I was actually criticizing the subreddit in question (and its users) rather than supporting them. Instead, I was preemptively banned from multiple communities.


I'd argue there's a huge responsibility on the folks writing moderation tools like this to ensure they don't have adverse side-effects like this.

In the sci-fi Matrix reputation world, I can absolutely see somebody curating a reputation list called #bad-people:example.com which they prime by finding every user ID in every room they don't like and blanket assigning them -1000 reputation.

If then a moderator was dumb enough to subscribe to #bad-people:example.com and use it to impose a ban list on their rooms, then I'd hope that their community would arch an eyebrow at the crassness and treat them like a rogue moderator and either get them removed, or fork and go elsewhere... assuming that it's possible to visualise the filters which have been put in place.

There's a huge responsibility on the tool author to ensure that the users can see what filters are in place, and what they do, and encourage the user to challenge them - but again, hopefully, the market will vote with its feet and users will adopt the best tools available, and avoid being trapped under primitive moderation systems like the ones you refer to here.

tl;dr: we need better, morally relative reputation systems - rather than no reputation system at all. liberal plurality >> anarchy ;)


> I'd argue there's a huge responsibility on the folks writing moderation tools like this to ensure they don't have adverse side-effects like this.

With all due respect, this is a naive take: the tools that are used on reddit for autobanning people from subreddits based on subreddits they've posted in are designed strictly to de-legitimize voices not in ideological lockstep with their own.

>If then a moderator was dumb enough to subscribe to #bad-people:example.com and use it to impose a ban list on their rooms, then I'd hope that their community would arch an eyebrow at the crassness and treat them like a rogue moderator and either get them removed, or fork and go elsewhere

The issue with forks is that they either unify the community (a successful fork) or completely divide the community (ffmpeg versus libav). It is more likely that such a system will divide communities and encourage infighting rather than consensus.

Question: did you ask anyone with experience with community building and community dynamics about this proposed reputation system, and if so, what were their comments?


Whether they have the effect of "[de-legitimizing] voices not in ideological lockstep with their own", the stated goal is often to prevent brigading. If you don't think that's a real problem, don't think that people are seeking solutions in good faith, or don't think these solutions are effective, please say that.


I think brigading is a real enough problem that moderators/administrators need to at the very minimum be aware of it and ready to step in if needed.

I think Matrix is seeking a solution in good faith. I don't think the Reddit situation I brought up was a solution founded in good faith.

I don't think these solutions are effective, however I unfortunately do not have an alternative to suggest which would be effective.


>I think brigading is a real enough problem that moderators/administrators need to at the very minimum be aware of it and ready to step in if needed.

I also think that brigading on reddit is an inherent problem with the way that reddit is structured and that no amount of stapled on tools will fix the problem in the most general case because they can't make the determination on the intent of the person accused of brigading.

At best they can detect patterns of repeat behavior in an automated fashion.


The parent comment is saying that: they're saying these tools are used to solve a problem other than the one they're stated to solve.


> the stated goal is often to prevent brigading

I believe the stated goal to be a fig leaf.

> If you don't think that's a real problem, don't think that people are seeking solutions in good faith, or don't think these solutions are effective, please say that.

I don't think that the majority of people deploying these solutions on reddit are doing so in good faith.

Hope that clarifies my position.


>there's a huge responsibility on the folks writing moderation tools like this

I'm fatalistic in my opinion that all moderation tools would eventually end up in situations like this.

The Rise of The Power-User just feels too inevitable to me. Once you have power-users you start to have clashing of egos. Once you have egos, you start to have censorship (as opposed to moderation).


I honestly think it’s an unsolvable problem, via technical means anyway. As it seems to speak to the fundamental nature of human relationships and communities.

God I hope I’m wrong, though!


Wow, that's insanely sad.


Seems like a sane way to handle brigading. I'm 100% sure /r/blacklivesmatter don't want people who post in /r/the_donald to participate.

Freedom of speech does not mean freedom from consequences.


Seems like a sane way to polarize people and build the perception that BLM is a partisan political movement more than a social one.


BLM is advocating for fundamental change to the nature of law enforcement and incarceration, the very apparatus of state power. Anyone who thinks this could possibly be apolitical is blind to what politics is.


Sure, I guess it's pretty political when Donald Trump advocates for racism in spite of anti-racist movements like black lives matter.


To reiterate what you're saying in case you just had a typo or something, you're advocating for blanket censorship regardless of context. Is that correct?


Yes absolutely. You do know that this social website uses "blanket censorship regardless of context" right? Shadow bans, rate limiting, etc. I am 100% sure Hacker News has automated tasks for identifying who to shadow ban and rate limit, since I've tested this with multiple accounts and other tools like changing IPs, etc. and also by contacting them for reasoning for their decisions. I think these are perfectly fine and acceptable ways for people to manage their private communities. My only caveat would be that you can't discriminate against protected classes. I would prefer that these rules be transparent, but I think that's a bit of a straw man for this discussion.

Managing private communities of tens of thousands of people requires some disciplined rules in order for the entity's leaders to achieve their goals.


While I recognize that "private communities" have the ability to moderate and administer their communities whatever way they see fit (and subreddits [at least their initial iterations] certainly fit under this label), I do not agree that blanket censorship without contextual understanding is the right way to do things.

>My only caveat would be that you can't discriminate against protected classes

How do you define "protected classes", though? Is it your definition? The US Federal Government's? Who gets to make those decisions keeping in mind a niche subreddit (or even a full site like we are on here) will already have a very different demographic than the real world?

Not to mention, sorting and censoring people based on a classification that is outside of their control is quite a... controversial way to go about things.

>Managing private communities of tens of thousands of people require some disciplined rules in order for the entity's leaders to achieve their goals.

Establishing, enforcing, and maintaining "disciplined rules" should not, and do not need to, mean "shadow bans, rate limiting, etc". Certainly not without a human element capable of contextual analysis, at least.

--

I disagree with you, but respect for explaining your beliefs.


>I do not agree that blanket censorship without contextual understanding is the right way to do things.

I don't agree with it either, but it's impossible to do this at scale without hiring a large number of people, relative to community size, as full-time paid moderators. We have to accept the reality that these free communities need automation in order to maintain order, or else they would cost lots of money.

>How do you define "protected classes", though?

This is a straw man and I'm not going to address it. You know exactly what I'm talking about.


So a single comment I made multiple years ago that was in criticism of /r/the_donald was actually "brigading" and worthy of a ban across multiple unrelated communities?


A ban you can circumvent in 30s mind you (make another account).


That isn't the point, mind you.


Several subreddits have policies that will ban you automatically, even if you've never participated in their subreddit, for participating in other subreddits. For instance, the most famous one in my memory is that posting in /r/the_donald got you automatically banned from a number of other subreddits.


I see quite a few open letters like this that I think mostly miss the point.

From the perspectives of the governments in question, this isn't (solely) about terrorism or law enforcement. It's about control of their citizens. They use terrorism as the bogeyman to try to get people comfortable with the concept, but really, they just want dragnet surveillance over everyone, period.

I think it's most useful to point out that banning or backdooring E2EE won't stop the terrorists, but it seems like the main thrust of these sorts of articles is about how backdoors are dangerous to everyone... which is exactly what these governments are ok with. And a disturbing number of people (outside of the HN crowd) are perfectly willing to give up their own privacy and freedom if it means catching the terrorists.

I think a better approach to getting the public to be against these encroachments would be to drive home that backdoors and E2EE bans don't actually catch terrorists or human traffickers or child pornographers.


I think the best approach may be a disingenuous one:

Anyone advocating for encryption back doors is a pedophile.

Encryption helps users keep their data private from others, including service providers and the government. DEEP STATE PEDOPHILES have infiltrated our government and want to have free rein to access pictures of your infant child in the bathtub for their disturbing desires. Pedophiles managing social media servers want to be able to snoop on the private media of anyone their sick desires are targeted at. They want to be able to blackmail your children who have taken selfies in the mirror into sending them more child pornography or even committing espionage against their own parents or families, consolidating power for the secret cabal of pedophiles at the top of our institutions.

Say NO to pedophiles, keep encryption secure!

Any politician who supports these measures should get the question, "Why do you support this legislation, are you a secret pedophile?"


There's no need to resort to lies like that. It's better to stay away from the crazy conspiracy theories. Even the following is sufficient and true:

"Imagine your child's calendar, real-time location, grades, medical history, selfies, search and video viewing history, chats with friends hackable by any predator. All of this is possible if we allow the government to force backdoors."

Just keep hammering on that point to any politician or voter who supports encryption backdoors. Ask them why they want to make children less safe. Stop saying "you can't outlaw math" and "terrorists will use encryption anyway" - people either don't understand, or don't care.


We treat wood to fight termites. The treatment is unnecessary if there’s no termites, but it fights them if they’re there so we do it.

In the same vein, we should spread “counter-conspiracy” theories to fight conspiracy theories. A kind of poisoning by noise.

Hopefully, net education levels will rise over time, increasing our resistance to and eventually destroying the harmful viruses that are unfounded conspiracy theories.


Spreading counter-conspiracy theories is like introducing a non-native species to fight termites. It might work but you will likely end up with a whole different set of problems.


Very good rundown of why encryption backdoors are fundamentally flawed (and why governments should stop pushing for them).

I'll be interested to see how the proposed reputation system rolls out. It sounds like a complex system to get right.


It's the Clipper chip all over again. They never learn.


I'm waiting to see somebody actually provoke any of the Five Eyes governments into exerting this power, and to see how it works.


I'm very excited to see this discussed! Even without pressure from government entities, I would certainly want some sort of way to prevent something like child abuse content from being uploaded to my server, so I'd love for a system like this to work well.

1. Would the system discussed have anyway to diminish "cancel culture" like attacks? For instance, if a user were to say something egregious like "Empire Strikes Back is bad", what is preventing especially enthusiastic fans from labeling them with having uploaded child abuse content. Presumably that would be a good way to get that user banned from most popular servers. Is there a way to "appeal" this is the moderator or server host is a bad actor? Is having a good enough reputation before an incident like that enough to not be blocked everywhere? 2. Is there a way we could test this outside of Matrix? It seems useful to social platforms, particularly other decentralized ones. If it could be done in a platform independent way, it could be a nice open source collaboration (granted, how people interact on different platforms might be enough to make that impossible to generalize effectively). I also ask, because this sort of thing is really dependent on how people interact, so "play testing" it might be beneficial to getting a more realistic design going early on. Even doing something low-tech might find some issues that should be addressed.

Is there a room discussing this in particular? I'd be interested in participating.


> For instance, if a user were to say something egregious like "Empire Strikes Back is bad", what is preventing especially enthusiastic fans from labeling them with having uploaded child abuse content. Presumably that would be a good way to get that user banned from most popular servers.

Suppose @alice:example.com says “Empire Strikes Back is bad”, and @bob:example.org labels Alice with “uploaded child abuse”.

The label attached to @alice:example.com is not “uploaded child abuse” but “uploaded child abuse, according to @bob:example.org”.

Perhaps @charlie:example.net discovers this and labels Bob with “false moderator”. Maybe @drew:example.horse labels Bob with “false moderator” and Charlie with “true moderator”. Each server can decide whether they trust Bob, Charlie and/or Drew.

Anyone who cares enough can check whether Bob's assertion about Alice was correct, and so they can learn whether to trust Charlie and Drew.

This is similar to how blocking works on [scuttlebutt](https://ssb.nz), which is subjective by design — if I can see that a few people I trust have blocked a certain identity, and they've left a message saying why, I can choose to trust them and block that identity myself.
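(A minimal sketch of that subjective-label model in Python, with made-up names. The structure is illustrative only: a label is always attributed to its reporter, and it only takes effect for reporters you have chosen to trust.)

```python
# Minimal sketch of the subjective-label idea above: a label is never a bare
# fact, it is always (subject, text, according_to), and each user or server
# decides which reporters it trusts. All names are made up.
from dataclasses import dataclass


@dataclass(frozen=True)
class Label:
    subject: str       # e.g. "@alice:example.com"
    text: str          # e.g. "uploaded child abuse"
    according_to: str  # e.g. "@bob:example.org"


labels = [
    Label("@alice:example.com", "uploaded child abuse", "@bob:example.org"),
    Label("@bob:example.org", "false moderator", "@charlie:example.net"),
]

trusted_reporters = {"@charlie:example.net"}  # this server's own, subjective choice


def visible_labels(subject: str) -> list:
    return [lbl for lbl in labels
            if lbl.subject == subject and lbl.according_to in trusted_reporters]


print(visible_labels("@alice:example.com"))  # [] -- Bob isn't trusted here, so Alice is unaffected
print(visible_labels("@bob:example.org"))    # Charlie's "false moderator" label is shown
```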


> 1. Would the system discussed have anyway to diminish "cancel culture" like attacks?

I would differentiate between "cancel culture", where folks gang up and react disproportionately to a misdeed - versus a smear campaign where people try to weaponise the moderation system by spreading malicious reputation (e.g. your ESB example).

The idea is that both could be solved by having an open ecosystem of relative reputation lists. If a trusted authority (e.g. iwf.org.uk) were to publish a vetted list of hashes of known bad content that you definitely don't want on your server, then you'd obviously expect them to review and verify entries to that list. If they got sloppy and let randoms try to smear each other by submitting ESB-haters to the list, then there would be a scandal and people would stop trusting that reputation list. Alternatively, if ESB-fans started framing ESB-haters with child abuse imagery, then obviously that's a problem for the cops rather than the moderation list verifiers.

Alternatively, if some random vigilante decided to start publishing a reputation feed of people they happen to think are child abusers, and then started to stuff it with other randoms they happen not to like (such as ESB-haters), then the hope is that, armed with appropriate visibility on the filters they are applying, people would spot that this is a disreputable source of reputation asap and run a mile. For instance, you might have accidentally subscribed to the #child-abuse-vigilante reputation feed, and your Matrix client might say "btw, 98% of the rooms in this community have been filtered out by the #child-abuse-vigilante feed data". You might click on the link to check the names of the rooms which have disappeared, and if they turn out actually to have names like "ESB was overrated" then you might choose to dig further, ring the alarm that #child-abuse-vigilante is not to be trusted, and unsubscribe. You could also publish a reputation feed for reputation feeds (no, really).
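(A sketch of what that client-side warning could boil down to, in Python. The threshold, feed name, and data structures are all illustrative assumptions, not a real client feature.)

```python
# Sketch of the "98% of rooms filtered" warning described above: per
# subscribed reputation feed, compute how much of your world it hides and
# surface the affected room names for review.
def feed_impact(feed_name: str, filtered_room_ids: set, all_rooms: dict) -> None:
    fraction = len(filtered_room_ids) / max(len(all_rooms), 1)
    print(f"{feed_name} hides {fraction:.0%} of the rooms in this community")
    if fraction > 0.5:  # arbitrary "this feed looks disreputable" threshold
        print("Rooms it hides (review before continuing to trust this feed):")
        for room_id in sorted(filtered_room_ids):
            print("  -", all_rooms[room_id])


all_rooms = {
    "!a:example.com": "ESB was overrated",
    "!b:example.com": "Goldfish keeping",
    "!c:example.com": "Original trilogy chat",
}
feed_impact("#child-abuse-vigilante", {"!a:example.com", "!c:example.com"}, all_rooms)
```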

> 2. Is there a way we could test this outside of Matrix?

Absolutely. There is nothing Matrix specific here, or even decentralisation-specific. We're just trying to prove it in Matrix because we have a relatively large community, containing a fairly representative mix of different actors, but we're not so big as to make it hard to experiment - and because this is frankly an existential threat to the long-term success of the project :)

The room for discussing this is #matrix-reputation:matrix.org (although public convo is a bit sporadic; this should change as we spin up more experiments in the coming weeks/months however).


Thank you for the response!

> I would differentiate between "cancel culture", where folks gang up and react disproportionately to a misdeed - versus a smear campaign where people try to weaponise the moderation system by spreading malicious reputation (e.g. your ESB example).

Good distinction!

> The room for discussing this is #matrix-reputation:matrix.org (although public convo is a bit sporadic; this should change as we spin up more experiments in the coming weeks/months however).

Great, I've joined!


> Alternatively, if ESB-fans started framing ESB-haters with child abuse imagery, then obviously that's a problem for the cops rather than the moderation list verifiers.

Also a problem with ESB fans.

Note: I came here because I assumed this was about the ancestor simulation we are all living in, I'm a bit disappointed.


Just a drunken musing here... what you describe as (implied to be) unlikely is real, in the form of the Anti-Defamation League for other platforms/social behaviors, which has listed individual numbers and letters as "hate symbols" due to obscure uses by extremists. If all it takes to become a hate symbol is for a small segment of the internet to say "what should we make the ADL freak out about today?", how would that sort of "anti-brigading" work? If I see the letters HH together I assume Hulk Hogan, but others (such as the ADL or SPLC) see Heil Hitler. Not sure how this would be defended against, if any group became the de-facto standard arbiter of what constitutes bad.


This proposal is clear and thoughtful, I like it. However, I didn't see an explanation of how this would help address abuse (or other illegal activity) directed at third parties - someone who is not participating in a particular group. E.g. a group that shares child pornography. It is obvious that the members themselves would not report it. Likewise, relying on reputation does not make sense - in fact, their reputation could be inflated by the satisfied members.


They do touch on this point:

"Meanwhile, communities which are entirely private and entirely encrypted typically still have touch-points with the rest of the world - and even then, the chances are extremely high that they will avoid any hypothetical backdoored servers. In short, investigating such communities requires traditional infiltration and surveillance by the authorities rather than an ineffective backdoor."


I found that part lacking as well, but then I remembered that it's not the protocol's job to solve crime. As Matrix indicated in their opening statement, trying to solve for the 0.1% could irrevocably damage the 99.9%.

It's not like law enforcement doesn't have options. I'll use cp as an example. Off the top of my head they could host honey pots and social engineer their way towards cp content creators, analyse cp media for artifacts that could lead them to a location. Legislators could create laws that throttle human trafficking by ending drug wars, opening borders, and providing universal social services. Etc.

But to drag an algorithm into this is the wrong approach for the reasons matrix listed.


I agree, I think crime fighting requires tools that operate at the user level.

So an infiltration bot: We have the technology today (gpt-3 level dialog) that could infiltrate criminal social networks, gather evidence, build credibility and power, and then help shut everything down.

The system itself cannot provide this, but an ai-human actor could.

Of course, this technology is scary: What I think is not a crime - like complaining about the government - is a crime in other places.


Thankfully we don't insist that every real life meeting room monitors and reports our activities. Kind of amazing to me that we don't insist on a better justification other than "because we can" when it comes to virtual spaces.


Without filtering rules and sub-communities that are following them, it's just one big soup of encrypted stuff. Once you have people self-selecting into groups, the suspicious ones are going to stand out more and then those can be infiltrated directly. Cops do this already, anyways, joining leftist/rightist organizations, offering things "for sale" on Facebook that will attract certain groups, joining tech communities via consultancies, &c.


This is a very well written post and I'm happy Matrix is taking this seriously.


in systems where reputation has significant monetary value, how do you stop people from exchanging money for reputation? it's a serious problem which i've never seen solved when participants don't have trust established with each other, e.g. amazon reviews are completely off the rails, but HN karma works quite well. slashdot has metamoderation which i haven't seen implemented anywhere else - it seemed to work 10 years ago or so, but times were different then.


That feels like you're asking about a different system, based on centralized authorities like HN or Reddit. Matrix is decentralized by design, similar to IRC, so you'd have to pay off a _lot_ of people to buy your way into every server's good side. Because no sane server is going to accept the trust list of any other server it interfaces with just because it happens to also be a Matrix instance.


HN reputation does not have any implications or monetary value - at least beyond a very early point.

Amazon review scores have obvious monetary value.


> HN reputation does not have any implications or monetary value

I would buy a slot in HN ranked posts for greater than $0.


Ranked posts? I'm not aware of any positioning of HN threads or posts that correlates with user reputation.

Personally I never check user reputation either, and the only 2 I recognize by name are dang and the digital minister of Taiwan.


Perhaps the logical next step is slashdot-like karma that reduces the visibility of low reputation users. Slashdot allowed users to choose their thresholds and, IIRC, whether they wanted funny comments or not.
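
Roughly, the client-side mechanics could look like this (a toy sketch in Python, not what Slashdot or any Matrix client actually ships): each user picks a score threshold and whether "funny" comments count, and the filtering happens locally:

    # Toy Slashdot-style client-side filtering: per-user threshold + "funny" toggle.
    comments = [
        {"text": "insightful take", "score": 4,  "funny": False},
        {"text": "pun thread",      "score": 5,  "funny": True},
        {"text": "low effort",      "score": -1, "funny": False},
    ]

    def visible(comments, threshold=2, show_funny=True):
        # Keep only comments at or above the user's chosen threshold,
        # optionally dropping anything tagged "funny".
        return [c for c in comments
                if c["score"] >= threshold and (show_funny or not c["funny"])]

    print(visible(comments, threshold=2, show_funny=False))  # hides the pun thread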


Reputation scores risk turning the rooms into echo chambers due to karma mining. It's good for forming consensus, but bad for debates. This can be bad for some communities, like FOSS groups, which new users and people with fresh ideas frequent.


Matrix will apparently invent UI to warn you if your filters are creating an echo chamber, but I have no idea how that would look or work.

EDIT: Fucking obnoxious post throttling. I don't know why HN doesn't want me posting anymore, so I guess I'll just abuse the edit feature to get my voice heard.

In reply to zuesflight:

> What if the feed manipulation was per-user?

My understanding from Matrix's blog post is that that's half of the filtering equation: 1. users and 2. servers can filter 3. other users via reputation. It sounds like these filters will be inspectable, so I would imagine that if you don't like how a server is filtering, you'd just not use it. I don't really have a strong understanding of Matrix (is it a protocol, a company?) so this is just me firing from the hip based on the blog.

I also think the filter "Merge" thing is all about making "multiple points of attack."

None of this is implemented, from what I understand; all they have now is a sort of binary banned/not-banned list they share with Mozilla. Very pie in the sky, but if it works as they describe, that'd be cool!


I missed that point earlier. Thanks! However, it still sounds complicated. Granted, it's open source. But I feel that the fundamental reason why we are in this social media conundrum is the on-cloud, platform-wide message feed manipulation. What if Matrix and other social media were just channels that didn't manipulate the feed in any way (no filtering or sorting)? What if the feed manipulation was per-user? On client-side like a personal rspamd, or as per-user on server profiles like sieve? There could be several competing filters, all of which could be adjusted by the user to varying degrees. Some filters could be learning ones and some could be for special cases (like child filters). There would be no single point of attack, and therefore harder to do social engineering by manipulating the feeds.


> What if the feed manipulation was per-user?

> Or client-side like a personal rspamd

> Or as per-user on server profiles like sieve?

> There could be several competing filters, all of which could be adjusted by user to varying degree.

> Some filters could be learning ones and some could be for special cases (like child filters). There would be no single point of attack, and therefore harder to do social engineering by manipulating the feeds.

This is precisely what we're trying to propose in the original article :)

In terms of UI, visualising the filters could be as simple as "98% of the rooms in this list have been hidden by your #nsfw filter", letting you peek behind the filter to see what you're missing, etc.
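
For the curious, here is a rough sketch in Python of what per-user subjective filters and that kind of "98% hidden" stat could look like (purely illustrative - none of these structures come from the actual proposal):

    # Purely illustrative: subjective, per-user filters over a room directory.
    # A "filter" here is just a label plus a predicate; nothing Matrix-specific.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Room:
        room_id: str
        tags: set

    @dataclass
    class Filter:
        name: str
        hides: Callable[[Room], bool]   # True => hide this room for this user

    def apply_filters(rooms: List[Room], filters: List[Filter]):
        visible, hidden_by = [], {f.name: 0 for f in filters}
        for room in rooms:
            hit = next((f for f in filters if f.hides(room)), None)
            if hit:
                hidden_by[hit.name] += 1
            else:
                visible.append(room)
        return visible, hidden_by

    # Each user composes their own filter list; nothing is globally imposed.
    my_filters = [Filter("#nsfw", lambda r: "nsfw" in r.tags)]
    rooms = [Room("!a:example.org", {"nsfw"}), Room("!b:example.org", {"cooking"})]
    visible, hidden_by = apply_filters(rooms, my_filters)
    print(f"{100 * hidden_by['#nsfw'] // len(rooms)}% of the rooms in this list "
          "have been hidden by your #nsfw filter")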


Great! Does that mean that I can design and plug in my own custom feed-filtering logic?


Sure! You could start experimenting with the current basics via https://github.com/matrix-org/mjolnir today.


(Answering the second part of your reply.) Either I read the article wrong or it wasn't obvious. I really hope that Matrix designs the controls to put the decision fully in the hands of the users.

> I don't really have a strong understanding of Matrix (is it a protocol, a company?)

Matrix is definitely a protocol. Element is the company driving its development.

(PS: I wonder why you are downvoted for a meaningful reply)


This would either have to have a miraculously intuitive UI, or it would have to be done not at the user level but at the Matrix server level, because otherwise regular users would never really start using it.

My money would be on the latter. :)


> We call on technology companies to [...] enable law enforcement access to content in a readable and usable format where an authorization is lawfully issued, is necessary and proportionate, and is subject to strong safeguards and oversight.

I believe the age of centralized services will soon begin to die out. Users are increasingly frustrated with major tech companies who host these services and laws like this continue to erode their potential. I don't know if the crown will be passed on to the likes of Matrix and Mastodon or new decentralized platforms I haven't heard of. Regardless, I think we're approaching a breaking point with centralized services and it's only a matter of time.


You've got a much more optimistic view than I do. That frustration is definitely there, but there needs to be a reasonable alternative for people to easily move to before you see any kind of exodus.

Right now, one doesn't exist. These also-ran projects designed by and for geeks don't really count.


Like I said, I don't know if the likes of Matrix and Mastodon will be our platforms of the future. However, they are blazing the trail. They demonstrate what is possible.

There is a lot of churn in the world of online services. It's only a matter of time before someone else follows in the footsteps of Matrix and Mastodon, but figures out how to take it even further. Or maybe someone else will break through by following in their footsteps. Or maybe Matrix or Mastodon are just dark horses who haven't hit their stride yet.

I don't know, but I do believe the future will be in decentralized services over centralized ones.


Isn't this very different from what they're requesting?

>something that empowers users and administrators to identify and protect themselves from bad actors

and

>enable law enforcement access to content in a readable and usable format

are completely different things, no? They're barely even related.


The idea is this:

* Enabling LE to read encrypted content == backdoor

* Backdoor == unviable, because it weakens encryption for everyone, and will be abused.

* Therefore, you need a way to help everyone (including LE) navigate/filter/investigate/discover/block content out there. Subjective reputation feeds give a way of doing so.


Maybe I'm missing something here: how does having hashed lists of "bad" message IDs allow LE to navigate/investigate/discover content? From what I can see, it only tells you there's something to block, and you can't even reverse it to "investigate" much of anything.


One approach could be to check public servers for "bad" content, and then use that as the entry point to investigate/infiltrate those communities.
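
For illustration only (the proposal talks about publishing reputation data as hashes, but the exact format isn't pinned down here): a crawler could hash the identifiers it sees in public rooms and check them against a published list of hashes of known-bad content:

    import hashlib

    # Hypothetical published reputation list: SHA-256 hashes of known-bad
    # content identifiers (event IDs, URLs, file digests, ...).
    bad_hashes = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256("test")
    }

    def sha256_hex(identifier: str) -> str:
        return hashlib.sha256(identifier.encode()).hexdigest()

    def flag_public_room(identifiers):
        # Returns identifiers whose hashes appear on the list: candidates for
        # further (human) investigation, without the list revealing its contents.
        return [i for i in identifiers if sha256_hex(i) in bad_hashes]

    print(flag_public_room(["test", "harmless-event-id"]))  # -> ['test']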


> Anyone who can determine the secret needed to break the encryption will gain full access

Without collections, just having the secret key[s] won't help you much. I don't like the idea of a backdoor any more than anybody else, but this notion that merely having a key gives you access to everything is strictly false.

This may only be true specifically for systems that allow anybody (or some broad set of untrusted people) to see other people's encrypted data. For most architectures backdoored E2EE is still more secure than no encryption. And obviously if there was a backdoor people wouldn't use/design systems that allow for easy collections.


Reminder that the NSA is basically collecting all of the internet traffic it can get its hands on. They didn't build a data centre in the middle of the desert for the fun of it.


Well, that's part of the point. Governments have been doing mass collections for decades, and a lot of that might be cleartext. As far as I know, this has not led to random hackers obtaining great access. So it's preposterous to claim that backdoored E2EE would universally enable that[0]. E2EE does in large part protect from governments and other powerful players, and it should be okay to say that.

Again, that said, if Facebook was to offer plaintext and backdoored E2EE messaging, the backdoored version would still be a strict improvement security wise.

[0] as noted, with exception of some systems that might provide easy access to everybody's data by design.


Why would hackers need to hack the NSA's methods of tapping the internet to obtain communications sent in cleartext? If you are sending everything unencrypted, there should be no expectation of secrecy. The problem is that we currently do send high-value information encrypted and this proposal wants to weaken that encryption; what happens to low-value information freely sent unencrypted doesn't seem relevant to me.


I'm arguing against the specific point that merely leaking secret keys would give anybody great access.

As far as I understand it the governments are not asking for backdoored transport layer encryption. They want the data one stores or transmits to be ultimately decryptable by them one way or another. Most services that use secure transports merely use it to transmit cleartext and store it as such. With E2EE messaging even if you have the server tapped you'll only capture encrypted binary blobs and perhaps some metadata. That's what the governments don't like.

With backdoored E2EE you'd still need to hack the server AND have the key to access the data.


The problem is that you wouldn't have to 'hack the server', at least for Matrix, given anyone can run a server - so all it would take is for one server in the room to be vulnerable to a social or technical attack (e.g. a nosey sysadmin).

If an escrow public key has been mixed into the e2ee encryption, then all it takes is for the private key to be leaked (e.g. for a price on a dark market) and the nosey sysadmin can go and break the encryption of all its users.

The same goes for a centralised service too (c.f. the twitter hack) - while it might be less likely given the smaller attack envelope, it's a much bigger prize.
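
To make the escrow failure mode concrete, here is a minimal sketch using PyNaCl. This is not how Olm/Megolm actually work; it just shows what happens when every message key is also wrapped for a mandated escrow key:

    # Sketch with PyNaCl; not Olm/Megolm, just the escrow failure mode.
    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox
    from nacl.utils import random

    recipient = PrivateKey.generate()
    escrow = PrivateKey.generate()          # mandated "lawful access" key

    def send(plaintext: bytes):
        msg_key = random(SecretBox.KEY_SIZE)
        ciphertext = SecretBox(msg_key).encrypt(plaintext)
        # The message key is wrapped for the recipient AND for the escrow key.
        return {
            "ct": ciphertext,
            "key_for_recipient": SealedBox(recipient.public_key).encrypt(msg_key),
            "key_for_escrow": SealedBox(escrow.public_key).encrypt(msg_key),
        }

    msg = send(b"private conversation")

    # Anyone who buys or leaks the escrow private key can now read every message
    # ever sent under this scheme - no need to compromise either endpoint.
    leaked = escrow
    msg_key = SealedBox(leaked).decrypt(msg["key_for_escrow"])
    print(SecretBox(msg_key).decrypt(msg["ct"]))  # b'private conversation'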


> Finally: we are continuing to hire a dedicated Reputation Team to work full time on building this.

This sucks. A group of friends and I have been running Matrix for a year, and we would gladly see more money dedicated to bug-fixing. Having to hire somebody to introduce more bugs to placate some goons from backward governments is just sad.


Abuse is a real problem for the matrix network. I've seen spam and borderline CSAC on the network.

Frankly I think this is a valuable addition. Although the 5 eyes should bankroll it if they are really so concerned about CSAC and terrorism.


CSAC?


Child Sexual Abuse Content or something along those lines, I imagine.


Something funny they don't even mention:

> We call on technology companies to [...] enable law enforcement access to content in a readable and usable format where an authorisation is lawfully issued, is necessary and proportionate, and is subject to strong safeguards and oversight.

Matrix.org is not a company and therefore not a target at this point.


> Matrix.org is not a company

Correct, there is no company by that name, but the website Matrix.org is indeed owned and operated by a company that would be targeted by this law. It is called New Vector Ltd.


Matrix is a project looked after by The Matrix.org Foundation (which is a company: https://matrix.org/foundation), but outsources running the matrix.org server and website to New Vector Ltd (nowadays known as Element - https://element.io). Both the Foundation and Element could be eligible for pressure if this legislation went through.


Ah right, I recall that now that you mention it. And IIRC it's vector.im that's running their public home and identity servers as well, so they would indeed be targeted.


Actually, it is called Element now. (Not just the new client, but also the company name)


Not true. The attack would be against any service provider. The fact that they are currently eyeing WhatsApp and Apple does not mean that they make a distinction between them and Signal or Telegram.


Again - WhatsApp, Apple, Telegram and Signal _are_ indeed service providers. Matrix is not.


Again, Matrix provides a server, matrix.org. Again, if e2e is outlawed, it would be illegal for them to even work on the code.


If e2e is outlawed, it would be illegal to operate the matrix.org server (which is one of a universe of federated Matrix servers) - but the code would still be legal.

Also, peer-to-peer Matrix is in the works, with working prototypes - the client sort of embeds a server... Enforcing a ban on e2e would require going after each user - lots of fun!


If e2e is outlawed, how will https work?


Some countries can have blurry lines between companies and non-profits; in this case, Matrix.org is a Community Interest Company in the UK (https://matrix.org/foundation), so I assume that'd apply to it as well.


Not sure how this reputation system solves the e2e encryption backdoor problem, but I would definitely adopt a WOT-like reputation system for SSL certificates.

The bundled-CA system is in shambles, and if it's still standing, it's only because of the deliberate ignorance of the general public.


I had an idea similar to this one some time ago. I called it "moderation feeds".


the idea also pops up somewhat unexpectedly in Neal Stephenson's "Fall, or Dodge in Hell". (although we were thinking about it first :D)


What a strange book. It's like two or three totally unrelated books mashed together. I felt like I was having a nice classic sci-fi read, then all of a sudden it's wild and insane speculation about KKK-redux fundamentalists, and then back into normal sci-fi, only to be replaced by what was ultimately a pretty puerile fantasy novel. Really not sure how to feel about it. Could have been a lot better.


:D it sticks in your head though... between Seveneves using Matrix-style decentralised communication as an (amazing) plot device, and Fall suddenly pursuing subjective reputation feeds as a (completely dead-end) plot device, it does make you wonder about living in a simulation.

(And yes, the fantasy quest was a bit surprising, to say the least.)


The Informo folks also thought about something similar (though maybe not as flexible) a couple of years ago https://specs.informo.network/unstable/trust-management/


What a disappointing read.

> What we really need is something that empowers users and administrators to identify and protect themselves from bad actors, without undermining privacy.

What if we had a standard way to let users themselves build up and share their own views of whether other users, messages, rooms, servers etc. are obnoxious or not?

> This forms a relative reputation system. As uncomfortable as it may be, one man’s terrorist is another man’s freedom fighter, and different jurisdictions have different laws - and it’s not up to the Matrix.org Foundation to play God and adjudicate.

And as uncomfortable as it may be, Matrix.org is itself part of a value system, however much they would like to bask in moral relativism.

Yes, adding backdoors to prevent bad actors is a flawed solution with lots of negative consequences. However, then we should think about ways to solve this problem that do not rely on ignoring half of the problem.

The whole idea of "relative reputation" assumes that any problem on the internet is a matter of subjective opinion and will go away if I could just convincingly enough pretend that the problem doesn't exist: If someone annoys you on the internet, block them and pretend they aren't there and everything will be fine.

We all know this is not how the world works.

There are situations where a conversation between persons A and B is actively harming C, even though C is not part of that conversation at all - and where C has a legitimate interest in preventing A and B from communicating.

Take cyberbullying for example (in addition to all the other examples usually given). In those cases, the whole point is that the victim is not part of the group. Nevertheless, the group is open to the public and may in fact gain high reputation and utility among those interested. This doesn't make it at all better for the victim.

Is this really the hill you want to die on?


Pardon me, but this looks like a longer version of a ban list with reasons?

What if spammers just create massive numbers of one-off accounts? A lot of computing power would be wasted comparing billions of rows of user IDs.

Another issue with MSC2313 is that the {"reason": "undesirable behaviour"} field is too terse. What if a legitimate user is wrongfully banned? What's the appeal process? What if the un-ban doesn't propagate to the proper servers?


Typo at “[…] European of Court of Justice to invalidate the Privacy Shield.” Change to “European Court of Justice to invalidate the Privacy Shield.”



What's the deal with the irritating "Click to rate this page" popup? It doesn't even go away when you click on a star.


It's an attempt to get a handle on "does our documentation suck?"; we'll exclude it from the blog. It should go away when you click on it, but looks like there's a bug on some browser/tracker-blocker combos.

Try to not let it distract from the fact that the governments of 2 billion people want to outlaw end-to-end encryption though...


I've found the documentation to be excellent for developers and operators (maybe except for SDK documentation last I checked, but the source code is readable enough in that case).

However to be blunt (and with the utmost respect for the work that you're doing) it is really, really poor when it comes to documenting client features, e.g. there's a poverty of "how-tos" for certain common things, which is especially crucial for nontechnical users.

For example, just the other day I had a group Jitsi call with everyone on my Matrix instance where I tried to onboard everyone on cross-signing. Most of the people in my community are non-technical, and a couple barely understand what E2E encryption means. Unfortunately I didn't anticipate that the Element app would have significantly different menus between Android and iOS, so I tried to consult the documentation... and couldn't find any whatsoever on cross-signing. I tried searching element.io and the `docs/` folders in each client's repo, and to be frank I ultimately gave up and turned it into a verification party.

I really think there needs to be more investment on this front -- as it stands, the onus is on operators to justify the overhead of E2E, and to figure out how to do XYZ on all the various clients from outdated blog posts, issue threads, and reading source code. Very rarely have I been able to find answers to questions in the actual documentation itself.

Anyway, again, I appreciate the work that you're doing, and I'm confident this experience will improve given time. I'd love to help with it -- I've been building my own documentation as I go along, but not sure how to even get started contributing.


So the good news here is we're working on "implementation guides" which are basically developer friendly documentation on how to do X rather than a raw technical spec. I don't recall if it's actually hosted anywhere just yet, but you can see the progress in https://github.com/matrix-org/matrix.org/tree/master/impleme...


So this sounds like they're saying "hey, instead of undermining end-to-end encryption, implement a social credit system".


no, "social credit system" sounds like China/HN/Reddit/Slashdot etc.

This is more "hey instead of undermining end-to-end encryption, empower users to filter out content they don't want to see, using whatever metric floats their boat".


Perhaps now may be the appropriate time to begin transitioning my friend group on Matrix over to XMPP.


/me solemnly adds you to his "people who'd prefer to use XMPP than filter out abuse with a reputation system" reputation list


“Without backdoors” wow, shots fired at Moxie “federation won’t work” Marlinspike’s secure honeypot Signal.


Digital Utopian1: Everyone is petting an unethical monkey because it is so cuddly. We must rescue them.

Digital Utopian2: I know, let's design our wire monkey to be 100% ethical and spread the word.

Digital Utopian1: Shots fired! Not only is your logic impeccable, there cannot possibly be any alternative to this course of action!

Digital Utopian2: Yay! I love our ethical technology.

Digital Utopian1: Yay! I love our love for our ethical technology.

Digital Utopian2: I love us.

Digital Utopian1: I love us more...

the two begin kissing, much to the chagrin of the other cafe patrons


Strawman 1: Everyone is using Signal, which I have no legitimate objections against, and only dislike because it's easy to use.

Strawman 2: Yes, there aren't any technologies which care more about privacy and freedom, while still valuing good usability, because Element doesn't exist.

Strawman 1: The only chat client I use is one I created myself, based on my cryptography PhD research. Using it requires memorising over 100 command-line options.

Strawman 2: I would communicate with you using a copy of that client, but a chat network having two users would be too much mainstream adoption, which I hate.

everyone in the cafe starts clapping, out of respect for their intelligence and principles

[I think my story is about as convincing as yours, but yours was way hotter, well done.]


> Strawman 2: Yes, there aren't any technologies which care more about privacy and freedom, while still valuing good usability, because Element doesn't exist.

From a chat-room creation dialog in Element:

> Enable end-to-end encryption

> You can’t disable this later. Bridges & most bots won’t work yet.

That's a great option for backwards compatibility-- in fact I may end up trying it out for a public chat group at some point.

But it's difficult to take seriously an argument that Element cares more about privacy than Signal when they have that option for the user.

Well, it's hard to take seriously anything under a GP that yelled "honeypot!" at one of the applications under discussion.


I keep seeing these comments against Moxie and I have never seen any backing for them; they seem like a gross misrepresentation of what he actually said.


Could you elaborate on what's wrong with Signal?


The problem with Signal is the savior complex of its creator, who insists he and only he can correctly implement secure messaging, and it can only be done if he's permitted to handle all your data himself.


This is a gross misinterpretation of his thoughts. It's not that he and only he can implement it. It's that he believes that security must be seamless and take basically no configuration in order to be used by the masses. Things like federation, key control, protocol control, and a million other things make the effort required to use the service far greater than most people would put forth. So he's made something seamless, and it's being used by more and more people every year, unlike, say, PGP, which is a UX nightmare.


> It's that he believes that security must be seamless and take basically no configuration in order to be used by the masses.

He is basically saying the same thing as people who said "Let's accept self-signed and broken SSL certificates, or 'the unwashed masses' will not understand." Remember how that went.


We ended up with a situation that is still far better than using http everywhere. Remember, IT security doesn't have to be absolute. What you do is largely dependent upon your threat model. Not everyone needs to be defended against three-letter agencies, but this is still probably good enough to defend against your local police department or a script kiddie.


No, this is nonsense.

> Remember, IT security doesn't have to be absolute.

The field of applied cryptography is absolutely reliant on the near-physical unbreakability of its algorithms, or it doesn't work at all (you need n times the lifetime of the universe for a working brute force, and as much overwhelming mathematical evidence as possible that non-brute-force approaches don't apply).

And it has actually proven extremely hard to make crypto algorithms that are only "slightly" unreliable: either they are a complete mathematical iron wall, or their weakness is too glaring to hide.


That's the wrong point. Key distribution is the weak point in many (most? any?) crypto systems (and, analogously, SSL certs), and that's where you have a trade-off between super-high security (the OpenPGP web of trust) and decent security (Let's Encrypt).


The clients are OSS. Does the protocol have any backdoor?


The protocol has a good reputation; the problem is that you can't tell that the app is identical to the source, and each new update has the possibility of breaking things.

Many people were angered by the recent update that forced users to have pins, which many didn't want. Doubly so since there's no recovery method.

The PIN issue highlights that future updates could make things less secure and/or add backdoors. Various legal efforts in various jurisdictions are trying to mandate backdoors, and it remains to be seen whether Whisper Systems will pull out of a jurisdiction if forced to compromise on security.


"The protocol has a good reputation, problem is that you can't tell that the app is identical to the source" Don't they have reproducible builds?


I can't speak to the iOS client but the Android one is only technically OSS in that it bundles proprietary binary blobs by default.


Who cares if the server is not?


OSS /= no backdoors.


> Who cares if the server is not ?

Well... if E2EE is properly implemented, you could be broadcasting your messages on twitter and you'd still be safe.


It annoys me that both sides lie/are wrong in this.

The statement's signatories [1] agree that having default E2EE (end-to-end encryption) on Facebook Messenger will stop around 12 million reports of child pornography per year. This is a lie/incorrect, because recipients of such material can manually report it [2], and/or it's possible to make the detection tool work on endpoints - it's a fancy hash-comparison algorithm [3].

The Matrix people respond to what they think "lawful access" is by saying that "lawful access" is an E2EE backdoor, so it's impossible to have both "lawful access" and "E2EE". It's an assumption that "lawful access" means a third decryption key, and it's possible to have E2EE and third-party access to it, as Facebook Messenger proves [2].

If all parties were honest, the governments would say "we want regular police work to have a way to lawfully access E2EE messages, and while you're at it, a backdoor for the intelligence agencies". The other side would have positions ranging from "no to both", but most would say "no backdoor, but OK to handing over past encrypted messages for police investigations and intelligence agencies, with transparency reports and oversight".

[1] https://www.gov.uk/government/publications/international-sta... [2] https://www.facebook.com/help/messenger-app/android/49882866... [3] https://en.wikipedia.org/wiki/PhotoDNA
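
To make the endpoint-detection point [3] concrete: PhotoDNA itself is proprietary, but the general shape of client-side "fancy hash comparison" can be sketched with a toy average-hash (assumes Pillow; purely illustrative, nothing like the real algorithm):

    # Toy perceptual hash (aHash) + comparison against a list of known-bad hashes.
    # PhotoDNA is proprietary; this only illustrates the shape of the idea.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > avg)   # 1 if pixel is brighter than average
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def matches_known_bad(path: str, known_bad_hashes, max_distance: int = 5) -> bool:
        h = average_hash(path)
        return any(hamming(h, bad) <= max_distance for bad in known_bad_hashes)

    # The client can run this check after decryption and offer a one-tap report,
    # so detection doesn't require weakening the E2EE transport at all.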


What's the distinction between "lawful access to E2EE messages" and a "backdoor for the intelligence agencies"? How is the service provider supposed to ever access an E2EE message without a backdoor?


The service provider, in most cases, is also the application provider which decrypts the message and holds the keys.

Most "lawful access" scenarios are met if the service/application provider requests/obtains the plain text and/or keys from the application.

Example: Police are investigating John Doe for involvement in child porn. They have his phone number, so they request a warrant for the contents of his chat-app messages. They get the warrant and send a request to Facebook. Facebook sends a request to his apps to upload last month's decrypted content. Facebook provides that to the police.


Do you not see how that is effectively identical to any other backdoor? Just because the mechanism you describe is slightly different -- it's not a third-party master key, nor a Clipper chip, but instead some kind of protocol for requesting the message history from the application -- doesn't change that the end result is the same: unilaterally being able to obtain any message sent by any user retrospectively.

Also such an active mechanism would be quickly thwarted by even moderately sophisticated criminals, either by not using backdoored communication software or by simply blocking requests for the keys. And if the goal is to only capture unsophisticated criminals, surely old fashioned police work is more than sufficient.


> Do you not see how that is effectively identical to any other backdoor?

Yes, but it's not the same as "a backdoor to which the authorities have a secret key, letting them view communication on demand". I oppose the notion that "lawful access" is the same as a backdoor/secret key that decrypts all E2EE.

> the end result is the same: unilaterally being able to obtain any message

It's not the same, since it restricts the scope (monitoring all communications vs specific users; indefinite decryption vs chat history). Keep in mind that some people will monitor whether they are getting those "send decrypted chat please" requests, so if weird stuff is being requested, word will get out. The E2EE backdoor is worse because it allows silent decryption of all communications forever.

> Privacy activists are objecting to the end result, not the technical mechanism by which it is implemented.

Many "privacy activists" actually object with the argument that it's impossible to have both E2EE and "lawful access", which is not true.

And I agree with all your concerns; I just think a middle ground (favouring privacy more) is possible.

> Also such an active mechanism would be quickly thwarted by even moderately sophisticated criminals, either by not using backdoored communication software or by simply blocking requests for the keys. And if the goal is to only capture unsophisticated criminals, surely old fashioned police work is more than sufficient.

That all might be true, but the consideration is also: will E2EE by default create unnecessary difficulties for the police? Are we willing to increase expenditure on police forces to keep the same level of protection? How many crimes are we willing to let go to keep E2EE (crimes where timing matters)? I would be fine if the discussion were about that.

---------------

Side note regarding this: be aware that OS updates and app updates are already the mechanism I described. An app update can be installed which decrypts everything and sends it somewhere. We're already trusting that the app/service provider is not doing that, or being mandated by the courts to do that.


You are confusing policy with mechanism. There is no good solution right now for lawful access to E2EE messages, but the blanket opposition to it prevents any work on finding such a solution.


There is no solution because by definition any solution must be some kind of backdoor -- the explicitly stated goal is to allow retrospective access to encrypted communication without the consent or knowledge of the parties involved. To me, that is almost a textbook definition of a backdoor in an encrypted communication protocol.

The blanket opposition isn't out of stubbornness and lack of understanding, it's because there is no way to simultaneously satisfy people who want a backdoor in every E2EE communication protocols and those who don't.


The two are inextricably intertwined.

Only the sender or recipient can provide lawful access to an end-to-end encrypted message. That's the whole point of end-to-end encryption. e.g. an informant can take screenshots or copy the message text or a defendant can be compelled to provide their fingerprint by subpoena.


That's a distinction without a difference. Why would anyone run closed-source clients for federated networks when there are open-source clients readily available?

One could imagine a scheme where endpoints run some kind of zk-SNARK-wrapped computation against a hash list of "unlawful" content, proving with mathematical certainty that one is not transmitting content that is on the list. But this is mountains of effort, both because of the complexity (the protocol needs to be structured in some other way) and the limitations of SNARKs - they do not handle this amount of computation well.


I'm not fully aware of how Matrix works; my comment is more related to their statement trying to equate "lawful access" with an E2EE backdoor.

> Why would anyone run closed-source clients for federated networks when there are open-source clients readily available?

It doesn't have to be closed-source; it can be open source and still have the functionality to send the plaintext to the server upon request.

Why would anyone spend time implementing it? Well, the law could define that for open-source projects the government would have to provide the pull request for it, or a fork.

Why would criminals use such software? Because many criminals aren't that operational-security aware, so they use SMS, Facebook Messenger, and other easily intercepted messaging software/channels. A popular open-source app with an E2EE decryption mechanism would still be used by criminals just because it is popular.


Having a button which people can use to report a message (or forward it) to the authorities - with a cryptographic guarantee of who it came from - would be useful. E2EE preserved, and in fact utilised.


Signal and other OTR-derived systems actually cannot cryptographically prove that someone sent a message (the recipient can forge the message because the message is authenticated via HMAC not a signature).
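
A toy illustration with Python's hmac module (not the real Signal/OTR transcript format): both ends hold the same MAC key, so a tagged transcript only proves that someone holding that key produced it:

    import hmac, hashlib, os

    # In OTR/Signal-style protocols both parties derive the same MAC key from the
    # shared secret, so authentication is symmetric - and therefore deniable.
    shared_mac_key = os.urandom(32)

    def tag(message: bytes) -> bytes:
        return hmac.new(shared_mac_key, message, hashlib.sha256).digest()

    # Alice really sends this:
    real = (b"see you at 6", tag(b"see you at 6"))

    # ...but Bob (or anyone who later learns the MAC key) can fabricate an equally
    # "valid" message and attribute it to Alice:
    forged = (b"I confess to everything", tag(b"I confess to everything"))

    for msg, t in (real, forged):
        print(msg, hmac.compare_digest(t, tag(msg)))  # both verify as True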


Yup. Matrix/Olm/Megolm similarly has deniability. You can prove in retrospect that someone in the conversation sent the message, but not who.


To be honest, I'm not convinced that this is practically true in Matrix except in cases where you wish to argue that the homeserver operator is "in on it" to create fraudulent messages (the same goes for Signal -- though in Matrix's case the homeserver usually keeps the entire conversation history, making it even harder to make that argument). So while it is technically true that you cannot cryptographically prove that a message was sent by an individual, practically speaking you probably can prove it.



