This proposal is clear and thoughtful; I like it. However, I didn't see an explanation of how this would help address abuse (or other illegal activity) directed at third parties - someone who is not participating in a particular group. E.g. a group that shares child pornography: it is obvious that the members themselves would not report it. Likewise, relying on reputation does not make sense - in fact, their reputation could be inflated by the satisfied members.



They do touch on this point:

"Meanwhile, communities which are entirely private and entirely encrypted typically still have touch-points with the rest of the world - and even then, the chances are extremely high that they will avoid any hypothetical backdoored servers. In short, investigating such communities requires traditional infiltration and surveillance by the authorities rather than an ineffective backdoor."


I found that part lacking as well, but then I remembered that it's not the protocol's job to solve crime. As Matrix indicated in their opening statement, trying to solve for the 0.1% could irrevocably damage the 99.9%.

It's not like law enforcement doesn't have options. I'll use CP as an example: off the top of my head, they could host honeypots and social-engineer their way towards CP content creators, or analyse CP media for artifacts that could lead them to a location. Legislators could create laws that throttle human trafficking by ending drug wars, opening borders, and providing universal social services. Etc.

But dragging an algorithm into this is the wrong approach, for the reasons Matrix listed.


I agree; I think crime fighting requires tools that operate at the user level.

So, an infiltration bot: we have the technology today (GPT-3-level dialogue) that could infiltrate criminal social networks, gather evidence, build credibility and power, and then help shut everything down.

The system itself cannot provide this, but an AI-human actor could.

Of course, this technology is scary: what I think is not a crime - like complaining about the government - is a crime in other places.


Thankfully, we don't insist that every real-life meeting room monitor and report our activities. It's kind of amazing to me that we don't insist on a better justification than "because we can" when it comes to virtual spaces.


Without filtering rules and sub-communities that follow them, it's just one big soup of encrypted stuff. Once you have people self-selecting into groups, the suspicious ones will stand out more, and those can then be infiltrated directly. Cops do this already anyway: joining leftist/rightist organizations, offering things "for sale" on Facebook that will attract certain groups, joining tech communities via consultancies, &c.
