Hacker News
Attorney General will ask Zuckerberg to halt plans for end-to-end encryption (buzzfeednews.com)
1325 points by minimaxir on Oct 3, 2019 | 558 comments



> We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety.

Oh, so you’re asking for more end-to-end encryption?

> While the letter acknowledges that Facebook, which owns Facebook Messenger, WhatsApp, and Instagram, captures 99% of child exploitation and terrorism-related content through its own systems, it also notes that "mere numbers cannot capture the significance of the harm to children."

This is such a lazy argument :/


The Second Great CryptoWar: https://reason.com/2013/03/12/the-second-great-crypto-war/

> "The proponents of this process use fear tactics to win support, what the four cypherpunks dub "The Four Horsemen of the Info-pocalypse: child pornography, terrorism, money laundering, and the War on Some Drugs." In other words, laws passed to go after child pornographers, terrorists, money launderers, and drug dealers end up chipping away at everyone's privacy. The classic example is the PATRIOT Act, passed to prevent terrorism but soon used to expand wiretapping and National Security Letter powers in other contexts."


I'm not entirely sure how those four examples are supposedly invalidated by giving them a clever nickname.

I'll take the privacy side in most any discussion of privacy-vs-security, but categorically denying the possibility that some crimes could be aided by encryption seems a step too far.

It's also bad PR strategy: anybody not already on your side will be put off by your apparent lack of reasoning skills.

Instead, acknowledge the possibility and show them why you consider the benefits to outweigh the risks.


> I'm not entirely sure how those four examples are supposedly invalidated by giving them a clever nickname.

They're not. A number of horrible things will happen as a result of pervasive end to end encryption.

However.

You have to run the numbers here. On the one hand, perhaps a bit more crime, some of it horrible. On the other hand, the privacy of everyone.

It's a hard sell. Just picture a politician on live television having to choose between having this cute little child being raped for years before they commit suicide (letting the perpetrators off the hook), or ramp up the surveillance a bit. Picture them choosing rape.

Nevermind the false dichotomy. The horrible fact is, the value of human life is not infinite. A mere inconvenience, suffered by enough people, is worth killing a few. Such situations rarely present themselves. (We rarely condone murder in the name of the betterment of humanity: some tried, didn't go so well.) End to end encryption (and metadata hiding while we're at it), is such a situation. The harm, though hard to perceive, is significant, affects everyone, and can potentially grow into full blown totalitarianism (possibly enforced by incentives rather than violent policing).

Preventing that is totally worth killing a few children… or at least spending resources on properly policing the problem, like going undercover.

Still, go say that on TV. I'm not even sure I'm safe writing it here.


> You have to run the numbers here. On the one hand, perhaps a bit more crime, some of it horrible. On the other hand, the privacy of everyone.

It's not just a matter of "many small harms > few large harms" though. Not having privacy can lead to severe harms.

If the bad guys break into a system that allows them to effectively wiretap everybody, now they can snoop around and find blackmail targets. "I know what you did, have sex with me or everyone will know. Or send money. Or give me your employee access badge."

Results: Rape, financing child sex trafficking, facilitating an act of terrorism. Or any of the less visceral but nonetheless widespread and significant consequences like major financial fraud or corporate espionage.

And that's just blackmail. What about the suicides of people who get doxxed? Or the people in violent relationships whose abuser is in law enforcement or in a criminal enterprise that has compromised the surveillance apparatus? Or the mental health epidemic which results when people know their communication is exposed to people they don't trust to see their true selves and then self-censor into performance-art conformists riddled with anxiety and loneliness?

Privacy is about keeping perverts in law enforcement from reading the sexting that should only be between you and your spouse, but it's also about keeping the country and the people safe from terrorists and foreign powers, keeping victims safe from abusers and allowing people to satisfy the human need to be themselves in communications with people they trust.

Privacy isn't a trade off against security, it's a necessary component of having security.


> The horrible fact is, the value of human life is not infinite. A mere inconvenience, suffered by enough people, is worth killing a few. Such situations rarely present themselves.

If you consider "not saving while you could" the same as "killing", it's very common. So much so, in fact, that it has its own Wikipedia page:

https://en.wikipedia.org/wiki/Value_of_life

With a cosy link to a letter by the Department of Transportation:

> this guidance identifies $9.6 million as the value of a statistical life to be used for Department of Transportation analyses assessing the benefits of preventing fatalities and using a base year of 2015

https://www.transportation.gov/sites/dot.gov/files/docs/2016...
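To make the "value of a statistical life" framework above concrete, here is a minimal sketch of how such a figure is applied in a cost-benefit analysis. The $9.6M VSL comes from the DOT guidance quoted above; the intervention and its numbers are entirely made up for illustration.

```python
# Hypothetical use of the DOT's value-of-statistical-life (VSL) figure.
# The VSL is from the quoted 2016 DOT guidance (2015 base year); the
# "barrier program" below and its numbers are invented for illustration.
VSL = 9_600_000  # USD per statistical life

def benefit_of_lives_saved(expected_lives_saved: float) -> float:
    """Monetized safety benefit under the VSL framework."""
    return expected_lives_saved * VSL

# A made-up highway barrier program: costs $50M, expected to prevent
# 4 fatalities over its lifetime.
cost = 50_000_000
benefit = benefit_of_lives_saved(4)

print(f"benefit = ${benefit:,.0f}")                   # benefit = $38,400,000
print(f"passes cost-benefit test: {benefit > cost}")  # passes cost-benefit test: False
```

Under this framework the program fails the test: agencies routinely decline safety measures whose cost per statistical life saved exceeds the VSL, which is exactly the "value of life is not infinite" point being made.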


All such value-of-life logic and cost figures go out the window with cars and alcohol. For our driving and drinks, we're pretty much A-OK with daily carnage.

Our society at large is just fine with drunk driving deaths; it seems killing people while drunk is less abhorrent to us than trading photos of a teen.

I’m guessing it has to do with a sense of purposefulness, malevolence vs. negligence, or identifiability with the commission of the crime?


> Still, go say that on TV. I'm not even sure I'm safe writing it here.

FWIW, I applaud your willingness to make such a statement. (And no, this is not sarcasm!) Self-censorship is a horrible thing, and I feel like I die inside a little bit every time I catch myself doing it. But more and more these days it begins to feel like even hinting at a willingness to engage in "thought-crime" is exceedingly dangerous. I mean, shit, a woman got passed over for a job because she had a picture of herself in a bikini on her Instagram. Imagine if a prospective employer found a comment online which could even remotely be twisted into saying that some child deaths are a sacrifice that may be inevitable in order to protect an abstract principle like Freedom, or "Free Speech". Zoinks!


Ok let's turn that the other way,

Are you saying you want to impose a dictatorial state of global surveillance because criminals are going to use the internet? What about other tech available to them? Are we going to set up surveillance mechanisms in those as well? Scooters? Cars?

Who's going to compile all that data? How is it protected?

It's about the Governance of Information,

and guess what,

I want to be Governing my Data, not some shady politician that got elected for a 5 or 7-year term in some other country.

Every actor in those systems has access to the data of every person whose info got collected.

And yet we are still here with that pedophilia-and-terrorism argument which, while those threats are very real, has only evolved and expanded, even with the Cloud Act and all of the shady agreements.

It's an evolution of our sick society: we had that kind of criminality rampant in 2000, and we saw it evolve with the net,

But just like the young gangsters that evolve with stolen scooters,

Criminality will always have its sick way.

But dictatorship and global surveillance?

We didn't accept that in the 2000s.

And we should never accept that at any cost,

even if it falsely promises to resolve the problem of pedophilia, terrorism, and criminality in those spaces by applying weird unknown algorithms.

Technology will always be used by criminals, because criminals are part of the society we build and evolve. They will change their behavior just like all of us; we cannot give up our freedom and ethics to help catch the few.

And that's the problem. I don't care about police retrieving the data of a criminal once a judge has agreed to it; I care when we give them the entire web's information and allow them to dictate the behavior of those networks.


Not sure why this reply is to what I posted, since it seems to be making a different point. But FWIW, I almost completely agree with what you said there.


>It's a hard sell. Just picture a politician on live television having to choose between having this cute little child being raped for years before they commit suicide (letting the perpetrators off the hook), or ramp up the surveillance a bit. Picture them choosing rape.

But the argument doesn't even make sense. It makes the horrible assumption that pedophiles will continue using a service they know to be insecure. That's literally a provably false assumption. The second they realize it's insecure (which will take exactly one raid), they'll switch to something else.

And ALL of the above assumes that there are no criminal programmers and thus they have no ability to just write their own tools if there isn't something sitting on the shelf. We know with 100% certainty this is also a false assumption.


  That's literally a provably false assumption. 
  The second they realize it's insecure (which 
  will take exactly one raid), they'll switch 
  to something else.
It is trivial to show that criminals do not stop using a communication mechanism the second they realize it is insecure, so long as it remains convenient.

Has every criminal in the world stopped using the phone after the first phone was ever tapped?

  And ALL of the above assumes that there
  are no criminal programmers and thus they
  have no ability to just write their own tools
But we know there are more than 0 non-programmer criminals who will not be writing their own tools.

You need better arguments than "if we can't stop all crime with an action it has no value" because it is a silly argument.


> You need better arguments than "if we can't stop all crime with an action it has no value" because it is a silly argument.

Arguments like "we all need to abandon privacy to maybe, potentially stop some crime somewhere" are equally silly.


Agreed.

Everyone should avoid silly arguments.


They don't need to write the tools, just use them; they are widely available.


You do know software can be downloaded without knowing how to program it yourself, right?


>But the argument doesn't even make sense. It makes the horrible assumption that pedophiles will continue using a service they know to be insecure. That's literally a provably false assumption. The second they realize it's insecure (which will take exactly one raid), they'll switch to something else.

It's not an assumption at all. The fact that people engaged in child pornography have had decrypted communications used against them in court demonstrates this to be true.

Ephemeral conversation is not new, and it is implicit in a right to privacy, assuming that right extends beyond your actual person.

The only way I understand your arguments to be a justification to break encryption is if you believe we should not have freedom to share privately.


> The fact that people engaged in child pornography have had decrypted communications used against them in court demonstrates this to be true

It only demonstrates that for some people, it's true. What about cases where they were using secure channels and thus haven't been caught/charged?


What about it? Every criminal ever that never got caught managed to do so through using "secure channels" (and thus not getting caught).

Perhaps we should aim for not leaving any piece of space on Earth unsurveilled?


> It's a hard sell. Just picture a politician on live television having to choose between having this cute little child being raped for years before they commit suicide (letting the perpetrators off the hook), or ramp up the surveillance a bit. Picture them choosing rape.

This was tried by a Canadian Minister at one point to sell surveillance in this extreme way:

> In February 2012, as Minister, Toews introduced the Protecting Children from Internet Predators Act (also known as Bill C-30).[118][119] The bill, which made no mention of children or "Internet predators" outside of its title,[120] would have granted police agencies expanded powers, mandated that internet service providers (ISPs) provide subscriber information without a warrant, and compelled providers to reveal information transmitted over their networks with a warrant. When criticised about privacy concerns, Toews responded that people "can either stand with us or with the child pornographers."[121]

* https://en.wikipedia.org/wiki/Vic_Toews#Federal_Minister_of_...

The legislation in question went down in flames as plenty of people sided "with the child pornographers":

* https://en.wikipedia.org/wiki/Protecting_Children_from_Inter...


> It's a hard sell. Just picture a politician on live television having to choose between having this cute little child being raped for years before they commit suicide (letting the perpetrators off the hook), or ramp up the surveillance a bit. Picture them choosing rape.

We see politicians regularly choosing to not enact overwhelmingly popular positions like universal background checks in the face of mass murders of children.

How many politicians have done anything to stop the Catholic church from hiding child rapists? This is a much more direct harm and no one who isn't a child rapist would lose anything if we fixed it.

Politicians are already choosing rape and murder when the stakes are much lower.


It is not that simple. Two potential unintended consequences of this no-encryption rule come to mind that might not just fail to help, but in fact make things worse for the victims.

1. They said that FB captures 99% of those illegal activities by themselves. What makes us think that government will be able to capture that last 1% on their own?

2. Encryption gets removed, potential perpetrators move to another platform that uses encryption and doesn't have the capability to catch those illegal activities that FB already catches using their internal systems. Congrats, now those perps that would have been caught (even if FB implemented the encryption) won't be caught at all.

And with the uncertainty that comes to mind with both of those points, one thing that is definitely guaranteed to come is further erosion of personal privacy.


> 1. They said that FB captures 99% of those illegal activities by themselves. What makes us think that government will be able to capture that last 1% on their own?

To stress this point even more (and I also FULLY agree with the second): what evidence is there that a lack of encryption will even make it easier to capture this last 1%? Pareto is a real thing. I'm not convinced there's an easy answer to capturing the last 1%, or that it can be done with minimal resource allocation. That's where I start being suspicious. The back-of-the-envelope math doesn't work out.


I agree, but many will not. So I figured we should address the central problem: increasing privacy automatically means increasing the ability for people to do whatever they want (that's the whole point). We'd be stupid not to anticipate bad consequences. It's just that the benefits are much greater, and we should be able to argue as much, even on TV in front of a potentially hostile journalist.


Facebook relies on the fact that most Facebook messages are not end-to-end encrypted to detect illegal activity. Forget the last 1% – how would Facebook continue to detect the 99% without access to decrypted message contents?


Client-side detection. You can probably argue that it can be evaded easily because it is client-side, but most people won't bother + it isn't like FB has an open API with a ton of third party FB clients.

I spent less than a minute thinking about this, but there are probably tons of other strategies they can employ. People familiar with the domain, I would actually love to hear your takes on this, as the topic is fairly fascinating.
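The client-side detection idea above can be sketched roughly as follows. This is not Facebook's actual system: real deployments (e.g. PhotoDNA-style scanning) use perceptual hashes that survive resizing and re-encoding, and the blocklist here is hypothetical. Plain SHA-256 is used only to keep the sketch self-contained.

```python
# A minimal sketch of client-side detection before end-to-end encryption.
# Assumptions: a hash blocklist distributed to clients (hypothetical),
# and exact-match hashing standing in for a real perceptual hash.
import hashlib

# Hashes of known illegal material, shipped to the client (hypothetical).
BLOCKLIST = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def scan_before_encrypting(attachment: bytes) -> bool:
    """Return True if the attachment may be sent (no blocklist match).

    Runs on the client, *before* end-to-end encryption, so the server
    never needs plaintext access to message contents.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in BLOCKLIST

print(scan_before_encrypting(b"holiday photo"))          # True
print(scan_before_encrypting(b"known-bad-image-bytes"))  # False
```

As the parent notes, this is evadable by a modified client, but it illustrates why the approach only requires cooperation from the official app, not from the server.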


It's an interesting idea. Assuming it's feasible, I wonder whether people would actually want to use a messaging service that's private, except where it thinks it detects illegal activity, in which case messages are copied to the police (and end-to-end encrypted on the way there, of course). That seems practically and logically equivalent to a messaging service which is not end-to-end encrypted but run by a trusted operator.


I feel like, to ensure privacy, we need an equivalent of Mutually Assured Destruction: the strategic introduction of a secondary Nash equilibrium that's so bad that it makes it obvious that we should choose the primary equilibrium. Right now, people have no clear sense of what "a gradual erosion of privacy at the hands of the government" really means. We haven't seen a 100% solid example of what any country would do with that power, yet (though China is getting close.)

But imagine if some crazy billionaire set up some sort of autonomous decentralized system that constantly attempted to MITM all the same systems the government likes wiretapping; and then, if it managed to extract any data from that attack, it would find names in that data, and put bounties out on those people on assassination markets (paid from anonymous accounts previously set up by the billionaire.)

Now the choice is between some people dead, and a lot of people dead! Everyone get on board the privacy train!


In a way isn't this what web technologies did?

That is-- pick an insanely fast development cycle where piecemeal/baroque security approaches simply cannot keep up. Eventually you arrive at a place where the whole endeavor becomes a giant tinder box just waiting to go down in flames. But that forced browser vendors to say, "Ok, let's just assume everything is constantly on fire, cordon everything off into flame-retardant boxes, and improve our response time by many orders of magnitude."

If anything ever gets too hot from the thousands of strangers I casually let in the front door I simply move to another building and recycle the burning one.

Meanwhile I treat my Debian install like a prized piece of Shaker furniture. You'd have to be a goddamned 19th-century furniture historian with your credentials showing before I let you anywhere near my brittle little museum.


But we aren't "ensuring" privacy; we're inventing whole new forms of privacy. There has never been a private and secure channel to communicate with thousands of strangers across the world. Simply acting like this brand new thing is an obvious and unquestioned good is naive.


This "never before seen" argument cuts both ways, though. There has also never been a full, verbatim record of all of one's communications, personal notes, searches of the equivalent of the world's largest encyclopedia, precise location over time, and myriad other details, carried in one's pocket and/or magically copied to a central location at which someone can perform instantaneous bulk keyword searches. The raw data and processing power over that data that governments have, now, is completely unprecedented in the history of the world.

We used to be protected by the ephemeral nature of 99% of life and by the inability for anyone to centrally view or mechanically process the other 1% (letters, diaries, etc). This is just as much argument for strong protection as your argument goes against strong protection.


At one point in time, simply writing in your village's native tongue and posting it around the world could have afforded the same benefits. Private communication has been a cornerstone of civilisation for hundreds if not thousands of years and will be for many more to come. I should not be vilified for my want of privacy.


I think the two key things to remember are that "failure will happen" and Blackstone's principle. We have to, as a society, decide which direction we want to fail (modes of failure). We also have to decide what amount is acceptable (having 0 crime is a good goal, but we will never get there and need to be realistic). I think it is clear which way the founders of the country wanted failure to happen, which is Blackstone's principle. "A hundred guilty men should go free before a single innocent man is deprived of his freedom."

I think everyone here is familiar with Pareto. Capturing 80% of criminals is easy. Catching the next 10% is harder than catching that first 80%. Catching the last 1% takes significant resources, way more than the previous 99%. So it just isn't economically viable to stop it all (the cost curve is exponential and we don't have infinite resources).

So I'm not sure I can agree with saying

> A number of horrible things will happen as a result of pervasive end to end encryption.

But I don't think it is necessarily wrong either. I think encryption enables these things, but the sentence implies causation (which is the rhetoric of those who want to remove encryption: causation).

How I read the horsemen comment is that these topics are used as boogie men. Because what sane person wants that stuff to exist? OF COURSE we want 0%. How can you be against stopping child trafficking? (Practically) no one is against that! These are also topics we care VERY much about, because frankly we should do everything we can to stop child exploitation. But we know the goal is unobtainable, and catching only a handful more than we currently catch (which is almost all of them!) requires huge violations of privacy and significantly more resources.

The horsemen are being used to get us on board and make us not ask whether these methods are meaningfully effective, or what the costs are. It also means that any time you hear officials invoke the horsemen, you should perk your ears up and start asking serious questions. How effective is this? Does it actually help? What are the costs? The Patriot Act is a good example: it is not clear that it meaningfully reduced terrorism in any way, and yet we gave up a significant amount of privacy (I can quote founding fathers on this too).

So it boils down to "if giving up the privacy does not do an effective job at making any meaningful reduction in actual child trafficking <insert horseman>, why should we give that privacy up? There's clearly benefits to privacy. So is this issue really about child trafficking or is something else at play here and are they just feeding off my emotions?"


> How I read the horsemen comment is that these topics are used as boogie men.

Yes they are. And it works because we can't multiply, emotionally. I tend to avoid LessWrong links around here, but this one is appropriate: https://wiki.lesswrong.com/wiki/Shut_up_and_multiply.

My first comment alluded to the torture vs dust speck dilemma. Horsemen vs privacy is nowhere near as extreme (especially at the dust speck end), but it has the same structure. If lay people realised this, the horsemen would not convince anyone.
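The "shut up and multiply" point can be made concrete with arithmetic. Every figure below is made up purely for illustration: the shape of the argument is that a tiny per-person harm, multiplied across a whole population, can aggregate to more total harm than a severe harm suffered by a few.

```python
# "Shut up and multiply" with hypothetical numbers: compare aggregate
# harms rather than reacting to the vividness of each individual harm.
# All values are invented, in arbitrary "harm units".
privacy_harm_per_person = 1        # tiny harm from lost privacy, per person
population = 300_000_000           # everyone affected by mass surveillance

severe_harm_per_victim = 1_000_000 # a million times worse, per victim
victims = 100                      # but suffered by very few people

aggregate_privacy_harm = privacy_harm_per_person * population
aggregate_severe_harm = severe_harm_per_victim * victims

print(aggregate_privacy_harm > aggregate_severe_harm)  # True
```

The emotional pull of each side is wildly different, but the multiplication can still come out in favor of the diffuse harm, which is exactly why the horsemen rhetoric works on intuition and fails on arithmetic.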


> (having 0 crime is a good goal, but we will never get there and need to be realistic).

Actually, please stop there and think about this for a second.

Having zero crime depends extremely strongly on the precise definition of "crime". Much more so than other, non-zero levels of crime.

This is only a good thing if you are absolutely sure that the definition of what constitutes a crime right now is perfect and immutable.

Extreme positions are almost always a bad idea and I think this is one of those cases.


Why does my point have to depend on immutables? I'm really pointing out Pareto, which will still exist if you add or remove crimes. The only way I can see your counterpoint being an actual rebuttal, and not just pedantic, is if you redefine crime to be something meaningless. But if we're allowed to change definitions, then we can't really have a conversation about anything. Language is imprecise; we can quibble over things that fall within that imprecision, or we can argue over the clear intent of statements.


It was not meant as a rebuttal of your overall point, which I agree with.

I'm adding to your point by saying that I don't consider "0% crime" as a desirable goal since crime is something defined by people and people are fallible, hence the definition is also fallible.

The problem is not in removing meaning from the word "crime". We all have an imprecise, handwavey meaning of the word in our minds when we use it. We usually mean something like "undesirable behaviour which benefits a single individual while harming others". The problem arises when you try to operationalize this loose notion into laws since this process frequently results in errors. Another way this can go sour is due to overreach by groups currently in power in an effort to stay in power, leading to loss of freedom (which is more applicable in this case).

In other words, some crime is actually people breaking the law for good reason, like rebelling against an unjust, inefficient or overreaching law.

Therefore, it's better to strive for a low crime rate than a zero crime rate. This is also the conclusion you arrive at through the application of the Pareto principle.


I agree with much of your argument, but what makes you say so confidently that we capture or punish almost all of the people who engage in child exploitation?


I'm going off of what is said in the article and determining whether it matches what I expect (which is based on Pareto). Also consider that the AG's statement does not counter FB's claim. That's what gives me confidence: one side says "we capture 99%" and the opposition doesn't disagree, instead falling back on an emotional statement. I think if FB were wrong, the AG would counter on that point.


> We rarely condone murder in the name of the betterment of humanity: some tried, didn't go so well.

sounds like "War" to me.


This whole privacy thing is pretty moot if the lack of privacy doesn't come with its own danger. The wider principle is likely based around a simpler question.

Who is more of a danger to you: a cabal of rich drug-dealing terrorist pedophiles, or your own government? The argument then becomes one of statistics, and then of the lesser harm.

I imagine the answer is obvious, with the government being significantly more harmful. You're likely comparing a few hundred deaths to tens of thousands.

You'll always find a subset of the population who thinks the death or harm of citizens is justified in some way, because they took some action that broke the law.

But find enough examples where the offender did no direct harm to anyone else, and I suspect you'll give most reasonable people pause.

And if the facts don't match your gut, then maybe you're wrong and privacy isn't all that important.


> Still, go say that on TV. I'm not even sure I'm safe writing it here.

The thing to point out is that we all make the implicit choice to let people die for convenience every single day: Almost all of us could choose to pay a bit more to charity to save a life, almost no matter how much we're currently giving or what we're currently doing, or could choose to take jobs that would do more to make the world a better place.

We just rarely have to face what choosing differently would have meant, so we get to pretend it doesn't have much to do with us. And granted, most of the time the consequences are many steps removed from our choices.


> The horrible fact is, the value of human life is not infinite. A mere inconvenience, suffered by enough people, is worth killing a few.

I think this kind of dark-sounding reasoning only holds up when you phrase it not as “a mere inconvenience” but as tyranny, which is what encryption and privacy in general protect against.

I support personal rights to privacy, and believe that we should find ways to enforce laws without violating privacy. Representative governments need the ability to alter their own power structure in order to function, and yet human power structures naturally resist change, including by spying on those who conspire against them: therefore, privacy is essential for representative government. Representative government’s alternative is tyranny, which we know creates incalculably-large-scale suffering.

Human nature is at the root of why privacy is costly and also why it is necessary; however, much evidence suggests that the set of aspects of human nature we express is mutable and dependent on environment.

While I would never argue that we can remove those aspects of our nature, I believe we can alter our environment to reduce how frequently we express the more sinister ones.

In short, I’d rather work on reshaping our environment, including and especially our culture, to make privacy less necessary and costly, than to debate whether it is either. It is both, and it will always be both, but we can make it less of both, or more of both, with our culture. Same for guns.


Anyone decently intelligent and motivated can utilize end-to-end encryption in myriad ways with relative ease. If anyone is going to organize a sufficiently dangerous security threat, they're not going to do it through unencrypted Facebook messages.

Sure, idiots do stuff like this all the time. Sure they get caught. But idiots usually get caught by other means, too. And you never catch all the idiots, there's just too many of them.


Honestly, on HN I feel there is much more disdain for my position. Given the world we're in, Facebook being able to assist law enforcement around child abuse, sexual slavery and human trafficking provides a lot of value. I feel people on HN are underestimating the extent of these issues, underweighting their severity, and overstating the dangers of letting Facebook store private data.

Estimates say there are something like 20-40 million slaves in the world, and that 50,000 people are trafficked per annum in the USA. Something like 20% of slaves are sold for sex. Facebook is believed to be a common platform to facilitate this, though I haven't been able to find numbers.

https://en.wikipedia.org/wiki/Slavery_in_the_21st_century

In contrast, despite the occasional news post of a particularly incompetent company leaking a database of plaintext passwords, Facebook's data storage is pretty safe, and encryption does still afford a lot of protection. The idea that hundreds of thousands of people are going to have their private data extracted from Facebook through individual attacks against the servers is not well corroborated.

Totalitarianism is an important long-tail risk, but I don't think it's reasonable to suggest that these programmes are a path to it. Overall privacy rights quite plausibly are, but that issue exists primarily as a matter of policy and law, not as a matter of technology. If the laws are bad, Facebook illegally preventing protection of trafficking victims will not help those laws change to a more moderate position, and if the laws are reasonable, cooperating to prevent trafficking victims would not be harmful to positive political outcomes.

The scale of abuses would have to be multiple orders of magnitude smaller, or detection mechanisms incredibly ineffective, before this tradeoff made sense to me.

<braces for backlash>


> A mere inconvenience, suffered by enough people, is worth killing a few.

Wow. I can't agree with that one even a little bit. The real concern is totalitarian abuse. But being inconvenienced allows us to kill? That rings as downright pathological.


That's because it only works on pathological cases. Like the dust speck vs torture dilemma. https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs...

In practice, the tradeoffs are never that extreme.


> I'm not even sure I'm safe writing it here.

Yep. I'm pretty sure you just torpedoed any hope you may have had at a political career.


So, how do you feel about gun control?


I used to have an opinion, but recently realised that I simply don't know the subject well enough. But I can see the tradeoff: on the one hand, guns facilitate accidents and murders. On the other hand, arming the population could ensure they can effectively rise up against their government if that's really needed. Not that they will actually do it, but the mere possibility likely changes the way things are done.

There's also personal defence, but that one has disadvantages too (there might be false positives, where a presumed mugger would get shot). Plus, from what I hear, guns have different effects in different countries.

It's a complicated subject. My opinion right now is not informed enough to be trusted.


I didn't really want to derail the discussion, it was just really striking how similar the arguments here in favor of privacy really are to the arguments in favor of gun control- but many people I have known are in favor of one and not the other.


What number of things will happen?

This argument is so unbelievably stupid that I don't even know where to begin.

First of all, encryption in WhatsApp is targeted towards consumers. You won't catch terrorists by limiting encryption. If terrorists and other criminals are not using encryption anyway, then they are so stupid that they deserve to be caught.

The only thing a lack of privacy does is throw us further into dictatorship. Just imagine a madman, like, uhm... say Trump, with the full power of the secret services behind him and no privacy for end users. Well, this is just great: all the dirt he can dig up about his opponents. He asks foreign governments to dig up dirt; okay, that means right now the NSA isn't too inclined to help this guy out, but what if they were?

If you water down privacy you are robbing the people of their only chance to organize protests and rise up against authoritarian governments.

Last but not least, the crimes prevented by not using encryption are absolutely negligible. The numbers are so freaking low that each day more people will die in car accidents in the US alone than would die globally because of pervasive usage of unbreakable encryption.

This analogy of `Uh a person could be prevented from getting raped is more important than preventing a fall into dictatorship` is so contrived that I don't even understand how anyone can eat this shit. Bad things happen all over the place and you are buying this sham argument that is preying on human psychology.


Some crimes can also be aided by whispered in-person conversation. Should we require all in-person conversation to be shouted near a government office?

The societal default used to be that substantially all conversations were inaccessible to the government except through testimony. Encryption does nothing to change the availability of information through testimony.

Previously, remote conspirators could collaborate through the post, and their conversations could only be accessed with a warrant specifically targeting those communicators. End-to-end encryption does little to change the availability of information in a targeted investigation; it just means it's a little more difficult to access the information than entering a phone number into XKeyscore. Investigators can install malware on the device, or microphones and video cameras in the suspect's home to hear or see what is being communicated.

Forbidding end-to-end encryption, in combination with our mass surveillance apparatus, changes the societal default to be that substantially all conversations are trivially and automatically accessible to the government.


I think your point is very important. Forbidding e2e encryption only enables mass surveillance. It does not affect traditional investigative work. There are other existing tools for the job.

Anyone remember the FBI and Apple case a few years back? How quickly the FBI hacked the phone after Apple wouldn't cave? Other tools exist to do targeted surveillance and targeted attacks. Only authoritarians want mass surveillance.


> Anyone remember the FBI and Apple case a few years back? How quickly the FBI hacked the phone after Apple wouldn't cave?

To be fair, Apple is working to ensure FBI won't be able to do that next time.


I'm not saying anything bad about Apple here... I'm not sure what your comment is about. I'm actually saying (as an anti-Apple person) "good job Apple".


*Should we require all in-person conversation to be shouted near a government office?*

"Why isn't your desk in front of the telescreen?" - 1984

The day will come when not having an Amazon Echo or Google Speaker will be considered probable cause for a search.


"The societal default used to be that substantially all conversations were inaccessible to the government except through testimony"

Sure. Before the telegraph was invented. And even before that people used security measures, such as wax seals on letters to prevent tampering or at least have an ability to detect whether or not the message has been tampered with.

You seem to think of messaging systems as if they were spoken conversations in private. They are not. Just because you are chatting with someone from the privacy of your own home, doesn't negate the fact that your words are traveling through a lot of wires and boxes belonging to all sorts of private and public entities.

During WW2, American federal government established Office of Censorship whose sole purpose was to review and censor all communications coming into and out of the country. Now imagine them finding that someone is mailing letters written in an unbreakable code. How long do you think it would take for the FBI to break down that person's door?

I am not saying we should not have end to end encryption. I am saying that government spying on citizens is nothing new, it is not something that was ever limited to totalitarian dictatorships and it should not be surprising to anyone that the government is trying to fight it. The news is not that the government wants to keep an option to read your communications. The news is that first time in history there is a chance people might want to and be able to stop them.


> During WW2, American federal government established Office of Censorship whose sole purpose was to review and censor all communications coming into and out of the country. Now imagine them finding that someone is mailing letters written in an unbreakable code. How long do you think it would take for the FBI to break down that person's door?

It wasn't done that way. Envelopes were opened, and stamped "Opened by Censor". Material was cut out of letters. Censorship was not concealed at all.[1][2]

[1] https://www.archives.gov/publications/prologue/2001/spring/m...

[2] https://sparks-auctions.com/SAN/28/281401a.jpg


I never said they were secretive about it. I said they opened, read and edited people's letters and the public was completely OK with it.


> End-to-end encryption does little to change the availability of information in a targeted investigation

A secure E2EE system must resist even targeted, legally authorised attacks. Are you suggesting that secure, practical E2EE systems do not currently exist? Would a world in which law enforcement must install physical listening devices and exploit unpublished software vulnerabilities to surveil suspects be more private than our current world, where E2EE is available but often disabled by default, and trusted intermediaries like Facebook and Google can be legally compelled to disclose non-E2EE messages?

> Forbidding end-to-end encryption, in combination with our mass surveillance apparatus, changes the societal default to be that substantially all conversations are trivially and automatically accessible

That is not (publicly acknowledged to be) the case today, even though non-E2EE platforms like Facebook are widely used. While Facebook could currently comply with laws authorising mass surveillance, and implementing secure E2EE would prevent them from doing so, the warrants under which they currently hand over non-E2EE messages are at least somewhat targeted. This controversy is not about a proposal to relax the need for warrants, it's about asking Facebook to preserve their ability to comply with them.


Sharing unlimited copies of child abuse videos is a new kind of crime, which whispering would not have aided.


Why should anyone give a shit? Sharing unlimited copies of child abuse videos does not actually hurt anyone.

I for one am not willing to give up any freedoms in order to prevent the sharing of such vile material. Perhaps the creation, but not the sharing.


>Sharing unlimited copies of child abuse videos does not actually hurt anyone.

This is contradicted by the evidence we have to the contrary, in particular the model of supply and demand identified in child porn markets.


The sharing creates demand. Producers increase supply. You should read more about this issue. NYT just did a bunch of great articles that will give you perspective.


Sharing only creates demand if money or other videos (or other things of interest to the original perpetrator) are given in exchange. And honestly, I expect the vast majority of abusers to be doing it either for fun (so they'd stop sharing videos, but keep raping) or for the money. Sometimes I wonder if what people really want is just to stop the transmission of child porn so they can go back to pretending it doesn't exist...

Child porn piracy is an essentially victimless crime.

Note: In the above I am considering the generic example of "child porn" to be broadcasting the forcible rape of prepubescents, not 17 year olds sending each other naughty pictures, which is what most content matching the US legal definition is (the UK one is just stupid as it includes drawings).


>And honestly, I expect the vast majority of abusers to be doing it either for fun (so they'd stop sharing videos, but keep raping) or for the money.

This runs contrary to the fact that people share things not just for monetary profit, but for other videos/pictures in exchange, and voyeuristic/exhibitionist reasons.


>Sharing unlimited copies of child abuse videos does not actually hurt anyone.

I'm sure some of the kids in those videos would have something to say about that.


How many thousands of people would you kill to save your own child? For many people the answer might well be all of them. Turns out emotional response isn't a reasonable way to come to a logical conclusion for how we ought to order society.

I'm sympathetic to those people's plight but ultimately I place a very high valuation on not only my privacy but everyone's privacy, a much lesser but still very great valuation on preventing such evil happening to the victims, and only a small valuation on preventing the ultimate sharing of those images between perverts, except insofar as it serves to lead us to the people doing the wrong.

I believe that via a substantial effort we can work to reduce the abuse of children, I think we can via a lot of work take down the groups of people sharing such data by infiltrating such groups and taking down the people participating. I believe that outright stopping all such sharing is probably impossible and I'm unwilling to implement 1984 to try.


Do you regularly run into child porn on the internet? I don't.

I don't see those kids getting hurt any further unless they happen to be the sort of people that go out of their way to seek out child abuse material. It's hard to imagine anyone ever accidentally running into images of themselves being abused. You can't possibly be re-victimized if you don't know.

I don't mean to sound heartless, obviously these are horrific acts which hurt people deeply. I simply don't think banning encryption or otherwise eroding our rights to go after distributors and consumers is really going to have much practical effect on how the victims end up feeling.

E: HN doesn't let me reply to selectodude below

>the mere knowledge that pictures of you being abused on the internet is extraordinarily difficult to deal with

I absolutely agree with this. I just know that nobody can ever go tell the victim that now those pictures are forever gone off the internet.


I don't disagree. We catch child pornographers all the time and they are on the bleeding edge of encryption. There's no practical benefit to banning encryption. The .gov just doesn't feel like going through the effort and would rather enjoy, say, the ancillary benefits to being able to read all of our messages forever.

But yes, the mere knowledge that pictures of you being abused are on the internet is extraordinarily difficult to deal with. Maybe not the best direction to go in when arguing.


If you don't see a reply link below on the main comments page, try clicking their comment's timestamp link, which looks like "7 seconds ago", and you should find a reply link there. I think Hacker News is designed to make it harder to have a quick back-and-forth discussion, for some reason.


It's not necessary to prove that the benefits outweigh the risks. It's trivial to show that in a free country, where people are allowed to control their own devices and create and install software of their choosing, bad guys can communicate in a fashion 100% immune to any sort of dragnet-style surveillance, while remaining equally vulnerable to targeted surveillance, whether or not end-to-end encryption is banished from mainstream platforms.

There is no benefit to outweigh. For there to be benefits, one must convince oneself that we will actively prosecute bad actors via intelligence gathered through holes we publicly announce we are drilling into previously secure platforms, and that drug dealers and terrorists will never figure out how to embrace free, open-source p2p software that doesn't share these disadvantages. That there isn't even one smart person among the bad guys, or even among the good guys, who will create a platform one can connect to with a few clicks, amenable even to techno morons. What could possibly go wrong.
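To make the point concrete, here's a minimal sketch (purely illustrative, in Python) of how little code unbreakable encryption actually takes — a one-time pad, which is information-theoretically secure as long as the key is random, never reused, and as long as the message:

```python
import secrets

def otp_encrypt(plaintext):
    # One-time pad: XOR the message with a random key of equal length.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key, ciphertext):
    # XOR with the same key recovers the plaintext.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"meet at dawn")
assert otp_decrypt(key, ct) == b"meet at dawn"
```

A dozen lines of standard-library code, and any two parties who can exchange a random key out of band get perfect secrecy against dragnet surveillance. Banning E2EE in mainstream apps does nothing to take this away from determined criminals.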


Basic human rights prevent people being jailed on sight. Some of them are criminals. Some of those rights are constantly challenged, yet no one questions their general value. Privacy, and thus end-to-end encryption, is a basic human right. There shouldn't be any reason to require proof of this. What we have here is simply the fear of the loss of an extremely powerful tool.


Privacy is a basic right; it is even constitutionally protected in the United States by the Fourth Amendment. However, it has always been accepted that gross invasions of privacy can be authorised in appropriate cases – for example, warrants to search houses, or warrantless searches of immigrants and prisoners. Why must the right to use end-to-end encryption, unlike the right to privacy in your home or at the border, be accepted without proof as inviolable?


> I'm not entirely sure how those four examples are supposedly invalidated by giving them a clever nickname.

It's not supposed to invalidate the reality of the matters concerned. It's supposed to suggest that they're not actually the issues at hand, with CP and drugs and terrorism being used as stalking horses, because being seen to defend the privacy rights of terrorists makes for terrible PR.


There are ways for them to investigate via warrants. That covers all use cases for FBI/CIA/cops. They don't need any other techniques. It's the only logical interpretation of the US constitution to modern electronic communications. Otherwise you become what China is currently.


The controversy is about Facebook's proposal to implement E2EE on Messenger, thus ending Facebook's ability to comply with warrants ordering them to disclose Facebook messages. Due to the lack of E2EE, Facebook can currently detect and report a lot of crime to law enforcement; some of this crime may go unpunished if E2EE is implemented.


> some crimes could be aided by encryption

Some crimes could be aided by virtually anything. E.g. let's destroy all the roads so criminals won't be able to travel.


Implementing end to end encryption in these messaging apps won't make any of these issues worse. The criminals this will supposedly help already have easy and convenient ways to do the same thing. The government has not caught anyone of any importance this way, so encryption of Facebook messages won't hinder any real investigation methods.

This won't increase the number of people affected or reduce the number caught. The only thing encryption can do in this case is make it more difficult for the government or foreign governments to read personal messages.

Not implementing encryption here can only make it easier to violate people's privacy; it cannot help the government do its job.


> The government has not caught anyone of importance this way

The letter specifically describes an offender who was sentenced to 18 years' imprisonment, detected when Facebook read his non-E2EE communications with a child victim. Are you suggesting this was an unimportant or unrepresentative case?


Unfortunately, vast swathes of the population are not able to process nuance. Even if you gently point out that North Korea is probably one of the few "crime free" societies, they don't seem to grasp that total security cannot coexist with a nation that values freedom.


The same crimes can be aided by accepting any other form of payment than credit card.


Those four examples are not sufficient reason to deny good encryption to anyone...and denying good encryption to anyone is denying good encryption (privacy) to everyone.


They are already using it and not on Facebook.

It's silly to think that terrorists use Facebook

Very silly


It's well-documented that terrorists use Facebook and other social media. There's a whole Wikipedia article about it. https://en.wikipedia.org/wiki/Terrorism_and_social_media


What about the four Trojan horses of the apocalypse?


Child pornography - Politicians and priests. Terrorism or Freedom Fighters? Money laundering - Capitalists .. harmless in the large scheme of things. Drugs - Yeah right.

How about 24x7 surveillance for politicians and priests and no surveillance for normal folks.


I feel like "money laundering" has given way to "human trafficking" among those horsemen, but this is otherwise right.


They did such a great job with Epstein too. Letting him go free for all those years and protecting him, and then he suddenly died when he was going to name people in government and the Royal family.


What's the fourth horseman?

edit: ok ok sorry I misread


1) child porn

2) terrorism

3) money laundering

4) war on [edit: some] drugs


It was specifically "war on some drugs". Americans love drugs, and we even love dangerous or mind altering drugs. But some drugs, for various socioeconomic reasons, we have declared evil.


more specifically, to keep black people from voting


Sure, for marijuana. But not all substances are the same. If you've ever gone through the hell of having a family member or close friend being addicted to heroin or coke or other physically-addictive "hard" drugs, you'll see that they're not in the same risk category at all.


But then you also had the whole "coke vs crack" thing in the 1980s, where crack was predominantly used by ethnic minorities, and cocaine was more likely to be used by "high rollers" who could afford it - usually white and rich, yuppies, etc. Wall Street types and the like.

But rarely did you see or hear about any of those people landing in jail or prison for long periods. If you heard anything it was more or less laughed about, and society forgot about it. The perps maybe would spend a day or two in jail, get bailed out, and have their lawyers negotiate and plead down to a misdemeanor or something like that. Rich guy goes back to party and life.

Crack users? Well - they all ended up in prison for super long stretches and/or died there. Nobody cared or cares. Certainly not the above cocaine users and abiders.

It wouldn't surprise me to learn that the people abusing the cocaine and making deals like that weren't also in on the production and selling of the crack made from a portion of that same cocaine...


I'm not denying the double standard, just reiterating that some drugs are extremely harmful to people and their use should be discouraged/banned/whatever, no matter who is using them.


"Facebook, which owns Facebook Messenger, WhatsApp, and Instagram, captures 99% of child exploitation"

Reminds me of this story: "Back during World War II, the RAF lost a lot of planes to German anti-aircraft fire. So they decided to armor them up. But where to put the armor? The obvious answer was to look at planes that returned from missions, count up all the bullet holes in various places, and then put extra armor in the areas that attracted the most fire. Obvious but wrong. As Hungarian-born mathematician Abraham Wald explained at the time, if a plane makes it back safely even though it has, say, a bunch of bullet holes in its wings, it means that bullet holes in the wings aren't very dangerous. What you really want to do is armor up the areas that, on average, don't have any bullet holes. Why? Because planes with bullet holes in those places never made it back."

Quote from https://www.motherjones.com/kevin-drum/2010/09/counterintuit...


How does this analogy apply to the competing goals of protecting privacy vs. detecting and preventing crime by monitoring private conversations? Which one corresponds to a plane getting shot down?


"captures 99% of child exploitation"

The majority of bullet holes are in the wings, so those need armor; the majority of child exploitation is through Facebook, so they need access to the content of the messages. It's an assumption made on incomplete information. The engineers only saw bullet holes in the planes that returned. And lawmakers only have information about the criminals who get caught. At least that's my understanding of the analogy.


The wrong assumption is in BuzzFeed's reference to "99% of child exploitation." The letter, which they deserve credit for quoting in full, says that "more than 99% of the content Facebook takes action against – both for child sexual exploitation and terrorism – is identified by Facebook's safety systems" (which rely on Facebook being able to automatically analyse unencrypted messages), "rather than by reports from users" (which could still be submitted if Facebook implemented E2EE by default).


Assuming you discover child exploitation on Facebook, why would you report it to Facebook, rather than the police? The police can actually arrest people.


I don't think it is related but it is still interesting!


Imagine being able to design a plane, but not immediately recognizing that.


It was probably not the engineers doing this, but the politicians, officers and committees.


99% of domestic disturbances happen in private homes. Should they put cameras there, too?


It's called "Telescreen" and will be soon available.


Doesn't everyone want to catch guys like this [1]?

If not, you're a bad guy and we can discount any argument you put forth am I right?

Am I right?

High five? Anyone?

Anyone?

Point being that the enemies of privacy have been waging an extremely successful propaganda war against our position for longer than most privacy advocates have been alive. A campaign that is at once massive and specially calibrated to go unnoticed. (How many connect the 'Telescreen' to the example in [1] in their minds?) They have the power to ensure that hand-picked, specific examples, calibrated to inflame, are used as flag bearers in the zeitgeist. They have a long game of very specific goals, coupled with an army of natural language, behavioral psychology, and media experts all dedicated to that singular purpose. They have decades of experience at manipulating populations at scale, and a track record replete with successes.

Meanwhile, we sit on HN and social media talking to each other about how our side will be proven right in the end. You know, 'cuz "freedoms".

We have to start addressing the legitimate points that government and law enforcement types are making, because not doing so cedes more and more ground in public behavior and public opinion to those enemies of privacy. The great irony here is that we ourselves have helped them manage to cast themselves in the public eye as the "champions of safety", due largely to the poorly chosen nature of our bedfellows. We need to start getting out in front of a lot of the legitimate arguments that law enforcement is making, and we need to push back against professionally subtle narratives in the national media that attempt to attach us to yet more unsavory bedfellows.

----

[1] - https://www.cnn.com/2019/10/02/us/woman-assaulted-boyfriend-...


because nobody seemed to get the reference:

https://en.wikipedia.org/wiki/Telescreen



Call me crazy but I'm thinking of buying one. The fact it has a camera cover and a microphone switch is enough to make me comfortable. I'm just waiting to find out if it's a software switch or hardware switch.


Should be pretty easy to install your own hardware switch.


The way electronics are put together today (lots of glue, e.g. the Surface Book) I'm not so sure about how easy that'd be.


It's called a laptop, a smartphone and a smart TV; it's already available, and we know it's already being watched and listened to.


^^^ Source Please.


How about the (now former) head of the FBI recommending we cover our front-facing cameras?

https://www.engadget.com/2016/09/23/the-fbi-recommends-you-c...

and, mostly because it's mentioned in the engadget article:

https://twitter.com/topherolson/status/745294977064828929/ph...


hey alexa !


Got any sources to cite for that watching and listening?

Specifically Apple-related and North American examples?


It's hard to take that question seriously. https://en.wikipedia.org/wiki/PRISM_(surveillance_program)#E... explains why. Snowden leaked it years ago now. There is even a scene about that in the movie.

And Apple-related? North American? That's not relevant at all.


It's relevant to what I want to see.

You boldly claimed that the government is observing me in my home through my devices, and I want you to show your work.

PRISM happens at the ISP level, and doesn't mention directly using iPhones, Macs, HomePods or tv as surveillance endpoints... So, where's your proof showing that they're surveilling US Citizens through those specific devices?



>Specifically Apple-related and North American examples?

That's an oddly narrow goalpost. Apple shares user encryption keys with the Chinese government, for example, giving the Chinese government access to all iCloud pictures, messages, documents, videos, etc.


> That's an oddly narrow goalpost.

I didn't want to get accused of moving it later. Also, I'm only interested if there's evidence that applies to me. I'm 100% Apple and 100% USA-based, so if there's proof that "the government is listening!" then I want to see it.


Do they? Shit, I thought they'd set things up so that sharing the key was extremely difficult, if not impossible, without access to the device? That's disappointing if true; is there a good article on this?


AFAIK this is only true for Chinese citizens or people who bought their phone in China or something but it is still scary.


Yep, Chinese users only. The odd part is that the internet blew up at Google for even proposing it and later cancelling it with project Dragonfly (also only for Chinese users), but Apple quietly went ahead and just did it without many people noticing.


That’s not accurate at all.


In early 2018, Apple forced their users to opt-in to migrating iCloud encryption keys and data to Chinese data centers:

https://www.reuters.com/article/us-china-apple-icloud-insigh...

Chinese government nationalized the data centers six months later, gaining access to all the encryption keys and user iCloud data at rest:

https://mashable.com/article/china-government-apple-icloud-d...


You've missed the fact that the OP said ALL.

Apple does business in China the way they have to do business in China. I don't agree with that, but I'm also quite sure they have not given my iCloud keys to China.


And you missed the point where I said that it was an oddly narrow goalpost.

Also, how exactly was what I said "not accurate at all"? I accurately said Apple was sharing iCloud data and encryption keys with the Chinese government.


You claimed Apple gives access to ALL iCloud data to China. That is not correct, which is why I pointed that out.


No I did not. You can read the comment thread and see that. That's your own straw man.


"..giving the Chinese government access to all iCloud pictures, .."

Direct quote. Emphasis mine.


>"Apple shares user encryption keys with the Chinese government, for example, giving the Chinese government access to all iCloud pictures, messages, documents, videos, etc."

Is the full sentence with context, meaning access to all the iCloud data of the users whose encryption keys were shared. Don't mince words because you misunderstood the sentence.


There is no grammatical foundation for your parsing of the sentence. If you meant it that way, you wrote it incorrectly.


People are installing and paying for them already. IoT / Alexa / Ring / etc...


Alexa should be enough…


100%. There is no convenience that would ever convince me to have an always on listening device in my home.

PRISM was proof that governments pressure tech companies to become sources.

How to build a program to retain power:

1) Make new tech that is very convenient but has potential to become a surveillance vector - e.g. a phone with voice recordings, fingerprint/Face ID, location data, a network of friends; smart TV, smart watch, smart locks, Alexa, etc.

2) Support consumer adoption.

3) Keep public attention on the evils of foreign states and domestic terrorism.

4) Convert devices into active surveillance sources citing home security, public safety and the classic "what do you have to hide?".

5) Do so as fast as possible without creating a revolt.

6) Have such pervasive surveillance that it becomes increasingly difficult to discuss, assemble and revolt.

7) Continue to introduce more controls, reduce freedoms, increase the work week, taxes.


> There is no convenience that would ever convince me to have an always on listening device in my home.

Many smartphones have this, and will respond to "Ok Google" or "Hey Siri". Do you view this differently?


I do see some difference. Home, always on, always capable of transmitting, unlimited power means they can listen as much as they want.

The phone has to cherry-pick phrases and capture times. With the right codec, the phone could keep recording for a long time after the right phrase, or if your IMEI is targeted. Still sub-optimal, but not as easy as Alexa. My bigger concern around phones is GPS data. That's why I've not had a phone in my name in the last twenty years and never owned a smartphone.

Now put the data sets from your phone, Alexa, your credit/debit card together and that paints quite a full picture.

The Ring device is very interesting. It faces the street. I mean, the house on the other side of the street. A network of those can track the movement of anyone that lives near one.


They're both rich information gathering sources with potential for abuse.


But the specific question is, do you keep a phone in your home? If so, how is that better than having Alexa in your home?


I'm not suggesting one is better than the other. But two potential sources is surely better than one when it comes to surveillance.


Or worse, depending on the POV. :)


Use a custom rom without google apps. It's easy enough.


Doesn't even have to be a smartphone. Every telco is compromised and they can deploy whatever software they or their overlords want to the baseband processor.


Nah. Too expensive, requires warrants etc. They'll use this instead: https://news.ycombinator.com/item?id=21124963.


There are laws authorising this, and they are frequently used to investigate serious crimes (like murder). They are not used for less serious crimes which do not warrant such a serious invasion of privacy.


And Facebook Portal cameras are hoping to do exactly that! https://portal.facebook.com



We need to put them in the homes of politicians, judges, lobbyists, diplomats, etc., since they are the biggest threat to national security. Also their close ones, who might be even greater targets.


How can they know they capture 99% of such content given that WhatsApp is (if I understand correctly) encrypted fully end to end such that Facebook cannot access the messages passing through it?


Rephrase as "They are responsible for capturing 99% of the illegal content that is caught."

i.e., of the illegal stuff that goes through their system, some of it is captured, and the majority of that is caught by Facebook's own system.
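The base-rate distinction can be made concrete with some invented numbers (purely illustrative, not from the letter or from Facebook):

```python
# Invented numbers, purely to illustrate the base-rate distinction.
total_illegal = 1000          # illegal items actually sent (unknowable in practice)
caught_total = 200            # items caught by anyone, by any means
caught_by_facebook = 198      # of those, caught by Facebook's own systems

share_of_caught = caught_by_facebook / caught_total   # what the 99% figure measures
share_of_all = caught_by_facebook / total_illegal     # what it sounds like it measures

print(f"{share_of_caught:.0%} of caught content")  # 99% of caught content
print(f"{share_of_all:.0%} of all content")        # 20% of all content
```

Both statements can be true at once: Facebook's systems catch almost everything that gets caught, while the total catch rate remains unknown.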

Note, this article is recent and highly relevant: https://www.nytimes.com/interactive/2019/09/28/us/child-sex-...

> And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material

> Data obtained through a public records request suggests Facebook’s plans to encrypt Messenger in the coming years will lead to vast numbers of images of child abuse going undetected. The data shows that WhatsApp, the company’s encrypted messaging app, submits only a small fraction of the reports Messenger does.


> Note, this article is recent and highly relevant..

Talk about fear mongering, that article is horrible. It's the same old argument. "Child abuse is bad therefore you can't have any privacy", they position you as being against protecting children when your stance is actually pro security & privacy.


I think that reasonable people can hold both f̶a̶c̶t̶s̶ things in their heads at the same time

1. Privacy is important and good

2. It has almost certainly contributed to an explosion in the production and sharing of child pornography and abuse

The question is what is the moral way to reconcile the two, not to deny that either exists.


Very unconvinced it's led to an explosion in child abuse.

Child abuse has always been happening (several of the older members of my family were abused), it just wasn't broadcasted. Even today I bet 99% of child abuse is never caught on camera. The increase from child porn is probably negligible.


If you actually read the article above, you can see that there's actually now lots of new evidence that viewing child porn makes people more likely to commit abuse, and that it seems to be a causal relationship, not a "marijuana is a gateway drug" relationship.


I couldn't find that in the article and would be immensely skeptical of such a claim (how on earth would you establish causation as opposed to common cause?).


Neither of those two things are facts. One is an opinion, and the other is speculation.


When did GP specify he was talking about facts?


I edited my comment from "facts" to "things," which was actually what I meant. Apologies. I'll clarify, so as not to disparage the person who replied.


Great points. It definitely seems to me that those two are fundamentally opposed. Suppose we champion (1): perfect privacy gives criminals a lot of leeway to commit crimes such as those you stated, and more, without our being able to detect them.

However, without perfect privacy, every world citizen would be subjected to such monitoring, and we'd basically be living in exactly what 1984 describes, with the metaphorical "telescreen" functionality spread across pretty much every device that's connected to a network.


They’re not complete opposites, since criminals don’t always have perfect privacy: they leave other evidence that can be discovered with a warrant. On the flip side, mass surveillance will just create selective enforcement.


The article never makes the argument you're objecting to. It merely states that E2E is likely to allow certain crimes to avoid detection. That seems both plausible as well as empirically correct considering their whatsapp/messenger comparison.


Exactly. I see it from those exact same perspectives.

The USA took steps to actually protect property in its founding, something the Supreme Court has interpreted to include privacy as well. From this perspective, the rights of citizens wither away when the government is allowed to take the smallest step towards mass surveillance. For that reason I am very much against it.

My other perspective is from a security standpoint. I believe that if companies are not doing everything in their power to protect themselves and their users from data loss / hacking / theft, they put everyone at risk. Intentionally lessening the security of a product at the request of the US government means giving a potential thief or hacker more attack vectors to exploit.


If this article is for real, then it's certainly not being pushed mainstream.

> The Times’s reporting revealed a problem global in scope — most of the images found last year were traced to other countries — but one firmly rooted in the United States because of the central role Silicon Valley has played in facilitating the imagery’s spread and in reporting it to the authorities.

Clearly the NYT is laying the problem at tech's feet, and we are the best ones able to thwart this. Many times I've scoffed at the government's continual removal of privacy, but this is the first time it's sunk in. Perhaps they have a case.

> 1998 - 3k cases. 2008 - 100k. 2014 - 1M. 2018 - 18.4M.

These figures from a total of 45M images flagged.

Again, is this for real, or is this propaganda?


These aren't new images; they're the same images that have been around the web for decades. That's why they are able to be flagged via hashing (unless companies are training neural networks to distinguish child pornography from regular pornography). You need to benchmark those numbers against the increase in regular images sent over the web in general. If it's increasing faster than that baseline, there's an argument; if it's increasing slower, the opposite argument holds.
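For what it's worth, the hash-matching approach can be sketched in a few lines. This uses a plain cryptographic hash for simplicity; real systems (e.g. PhotoDNA) use perceptual hashes that survive resizing and re-encoding, which a cryptographic hash does not:

```python
import hashlib

# Fingerprints of previously identified files (placeholder bytes here,
# standing in for a real database of known-image hashes).
KNOWN_HASHES = {
    hashlib.sha256(b"previously-flagged-file").hexdigest(),
}

def is_known(data: bytes) -> bool:
    """Flag a file only if it is an exact byte-for-byte copy of known material."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(is_known(b"previously-flagged-file"))   # True: exact copy
print(is_known(b"previously-flagged-file!"))  # False: one changed byte defeats a crypto hash
```

This is also why hash-matching only ever counts recirculated material: a brand-new image has no entry in the database to match against.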


The NYT article refers to "both recirculated and new images" and states that "in some online forums, children are forced to hold up signs with the name of the group or other identifying information to prove the images are fresh." Isn't it just common sense that the proven existence of online paedophile communities that explicitly encourage the "production" of new child abuse material, in fact results in some net increase in the number of abused children, even if that increase is a small proportion of the total number of victims? Isn't it worth asking what can be done about it?


It is not surprising that creating automated reporting systems increases the number of reports. The article simply gives no way to make an informed opinion.

There are two things I would point out:

1) I would be surprised if the ease of communication the internet brought did not benefit criminals.

2) The article describes several problems that won't be fixed by an encryption ban (e.g. the lack of means for report clearing houses) and also gives examples of cases solved despite encryption. I would like to understand why encryption is described as the problem here.


I didn't read the article as blaming tech. The main takeaway for me was the astonishing under-resourcing of investigation and prosecution of these crimes. If the government can't be bothered to investigate 98% of the child sex abuse cases it already knows about (per Flint Waters's testimony -- which dates from a time when the problem was far less severe than it is now), why does it need to prevent encryption to possibly learn about a handful more cases?

Not to mention:

Congress has regularly allocated about half of the $60 million in yearly funding for state and local law enforcement efforts. Separately, the Department of Homeland Security this year diverted nearly $6 million from its cybercrimes units to immigration enforcement — depleting 40 percent of the units’ discretionary budget until the final month of the fiscal year.

Or in other words, in order to fight the imaginary rapists that Mexico is allegedly sending us, DHS is diverting money originally allocated to investigate actual child rapists.


A third party observer cannot read the messages as they pass through the whatsapp network, but Facebook can read the messages at both ends.

Facebook has said that they don't do this outside of Australia, where it is required by law, but come on, are we really going to take Zuckerberg's word for it?


> Facebook can read the messages at both ends

This is why I believe that end-to-end encryption is not truly useful unless the source code of the clients is public, or the protocol is open.

Any closed source client can read messages at the ends, or be forced to do so by an evil government.

At the very least, they could open up XMPP compatibility again so that people could write their own open source clients for it. Australia wouldn't be able to do anything about the propagation of such open source software.

I was able to have end-to-end encryption running on top of MSN, AOL, ICQ, QQ, Facebook, Gtalk, Yahoo, and several others about 10 years ago with Pidgin and simple plugins that encrypted/decrypted messages on the fly. It's too bad they all moved selfishly to closed source walled-garden mobile apps -- it's a big step back in privacy.


I've been bouncing around the idea of a fully decentralized end-to-end encrypted chat protocol for exactly this reason, but I've been afraid to work on it for precisely the reason this thread is being discussed. I know that if my name were attached to the project, I'd be facing all kinds of unwelcome scrutiny from the government and news agencies. I'd lose the very privacy I want to maintain by designing privacy-protecting software.


Check out Matrix. Sounds like what you’re describing.


There's a lot I'm not explaining, in part because I don't (yet) understand crypto well enough to know if my idea even makes sense, let alone is feasible.

Matrix is close, but not what I'm describing. It's far more centralized than I'd like to see.


What's the advantage of decentralisation? Is it really a problem to have centralised servers if they're just storing dumb encrypted blobs?


Governments can force the people who own the servers to stop.


Signal is pretty solid.

https://signal.org/


Maybe it is, but it's not the decentralized replacement for Discord and Teams that I envision.


I would take a look at Matrix[1], which is basically what you're describing (it's a federated replacement from group chats that has Signal-like E2EE and has an open protocol) and it's already implemented.

[1]: https://matrix.org/


XMPP with OTR is exactly this. Facebook messager used to be compatible with XMPP.


Not exactly. I rather dislike XMPP's design, and what I'd like to see is something not only decentralized (relying at most on a DHT seed), but supporting group chats with trivially-expirable keys. My limited exposure to OTR suggests it only reliably supports one-on-one exchanges.


Open source doesn't guarantee that the compiled binary running on your device was built from the source you can view on, for example, GitHub. That is also a trust problem.


It is much easier to trust that `apt-get install pidgin` will run the published Pidgin source code, than it is to trust that Messenger or WhatsApp will only do the (undocumented) things that I expect them to do.


It should be easier to detect if they're reading it at the ends because they'd then have to retransmit the data to Facebook with a different key.


I tried to read data transmitted between the facebook messenger app (on iOS) and facebook server but it's encrypted and if you set up a mitm proxy it refuses to transmit. Even if you trust the mitm proxy certificate in the OS settings.


The only chance at security is if you can tap your network connection at the dumb wire layer, and verify that it's encrypted with your own private key.


This is what I find so fascinating. Yes, it cannot be intercepted in transit. But nothing is stopping the keyboard on which I type or the text box in which I type from collecting all the things and sending those things where ever they want. In-transit interception, in this context, seems like a fruitless threat model when compared to the keyboard/text box vector.


With respect to facebook in particular, they have had the means of reading the messages on either side for ages, even before it was required by law. iOS and Android both have "leaky sandboxing", where certain apps from the same publisher can read each other's data and memory, and Facebook added WhatsApp to its sandbox almost immediately after acquiring it, allowing the main Facebook and Messenger apps to read all the messages received by WhatsApp.


Any more reading on this?



Basically police are too lazy to do police work, i.e. get warrants and wiretaps at individual homes.

They want a fishing net so they can catch these people by the thousands.


Where have they said they're doing this at all?


Since WhatsApp is proprietary on both client and server side, it can't be proven that OpenWhisper wasn't tampered with. And chances are always against users when we speak of proprietary.

It's disgraceful that Signal hailed WhatsApp's announcement to use it almost like a second coming of a religious figure.


I can imagine moxie now regrets working with whatsapp to implement the protocol there.


Testing?


> captures 99% of child exploitation and terrorism-related content

Credit where it's due, good on them building such effective systems to catch this content.




Image filtering is now trivial even using simple transfer learning, leading to >99% accuracy. It's just the training datasets might be super disgusting and question one's will to live when confronted with the savagery that is going on around the world.
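A minimal sketch of the transfer-learning idea: freeze a pretrained feature extractor and train only a small classifier head on top. Here the "pretrained backbone" is just a fixed random projection and the data is synthetic, both stand-ins for illustration, not anyone's actual filtering pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: fixed weights, never updated during training.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    return np.tanh(x @ W_frozen)  # frozen "feature extractor"

# Synthetic labelled "images": two well-separated classes in input space.
X = np.vstack([rng.normal(-1, 0.3, (100, 64)), rng.normal(1, 0.3, (100, 64))])
y = np.array([0] * 100 + [1] * 100)
F = extract_features(X)

# Train only the classifier head: logistic regression by gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w + b)))  # predicted probability of class 1
    grad = p - y                        # gradient of log-loss w.r.t. logits
    w -= 0.1 * (F.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

acc = (((F @ w + b) > 0) == y.astype(bool)).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the frozen backbone is that the classifier head needs far fewer labelled examples than training a full network from scratch, which matters when labelling the data is itself harmful to the people doing it.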


Wasn't there some issue a while ago that the UK police were having trouble distinguishing pictures of sandy deserts from nude bodies?


How is it not a crime to possess the training set?


AFAIK large companies have deals with the government and have special units that deal with that type of content (nobody survives longer than 12 months there).

You can get legal datasets for sick pr0n or warzone stuff you would like to keep off-platform that might have the same psychological effect on you.


Should be simple enough to have the government/LEAs possess the training data and provide a standard harness/workflow you integrate your training program with. Then you send your training program and get the resulting trained model files back.


I assume that's capturing 99% of the child exploitation that is captured, not that's estimated to exist on their platform. It might be good for them, or bad for the government, or par for the course.


But they just made that number up.


Is it made up? I thought they meant "Of content identified on Facebook, 99% was identified by Facebook's own systems". Not that Facebook captures 99% of terrorists. Just that it does the bulk of capturing terrorists _on Facebook_.


99% of all uses of the number “99%” are made up.


more or less agreed, would def like to see how they ended up at 99%


Right. Claiming that they're missing exactly 1% of cases implies that they know enough about that 1% to quantify it.


Isn't that right based on the statement though? The implication is 99% of the identified cases on Facebook are caught by Facebook's systems, the 1% not caught by Facebook's systems are tracked by other means. 100% of identified cases, not 100% of all cases.


Still very misleading if that is the case. What other means are there, realistically, to catch something on FB if not by FB's systems? Self-reporting in the groups? Is that counted towards FB's 99% or the 1%?


Reported directly to the police or other authorities? It's probably quite realistic in those scenarios for something that Facebook's systems have missed for it to be found by someone eventually and escalated.

There's a little ambiguity in there, but I wouldn't go so far as to say it's "misleading".


And what's the false positive rate? How do you even define a genuine hit for "terrorism-related"? Depending on how broad you go, that can mean a lot of things, including very innocent stuff. The problem with terrorism is that everyday things that are legal when used as intended can also be used to commit those acts.


"Think of the children!" The most ridiculous, stupid argument that US politicians have loved for decades, maybe longer. What a bunch of bullshit. I can't believe that I'm siding w/ Zuckerberg on something. Then again, we're talking about the AG. I hope Zuck stands firm on this and gives them the middle finger. That's the only response to something like this. Have them pass a fucking law or shove their fucking open letter up their own asses.


>This is such a lazy argument

I mean, I agree with you on the argument completely, but I disagree that it’s a lazy argument.

It’s an extremely effective argument, relative to its factual content. The west is culturally conditioned to accept terrorism as justification and child abuse justifiably gets people riled up.

What’s lazy is people who can’t be arsed to look at complex things from multiple angles.


Also, isn't "This is such a lazy argument" like 100 times lazier, since there's nothing backing up why it was lazy in the first place?


“Think of the children!!” Has become de-facto lazy I think. Not really more you can say about it.


That is indeed a lazy argument. The proper angle for the public argument is what's the tradeoff? What do you lose by trading privacy?(as you're clearly not impacting child safety by dropping privacy related to a digital asset). And second: is encryption the number one fix for the actual concern? - i.e. if (physical) child safety is the most important concern for you (the citizen) what has the largest impact on that?


>"...mere numbers cannot capture the significance of the harm to children..."

Good to see that the "think of the children" argument[0] is still in use today. /s

[0] - https://en.wikipedia.org/wiki/Think_of_the_children


Criminal activities are exacerbated by the internet; it would be a lie to say otherwise. But the same goes for cars, scooters, or any tech that's sufficiently democratized.

They need a permit for a car? Why not just steal it?

I need an identity to do shady stuff on the internet?

Why not steal it?

We cannot reason with malevolent forces; there is always going to be a way.

And by that time, we'll have compiled everyone's data, centralized it all, and let governments that don't understand the implications collect it as if it were mere petrol or gold.

We are putting everyone's life at risk doing so, just wait until it leaks out or it starts getting sold. (ahem, oh wait !)

It is a problem, and tbh it saddens me a f* ton.


For you younger folks, whenever the government proposes to do something, "for the children," it's a ruse and you shouldn't support it.


Do you even mean that for “save the environment”?


We might have to decide if that applies... if government ever gets around to climate action.


But think of the children! It is stunning how shallow this letter is. It's so easy to see through it.

1984 rings more true with each passing day.

"It was meant to be a warning, not a manual."


The first quote is almost literally a contradiction. "Remove your security without removing your security" WAT


Coincidentally, the same day HN gets this story ABC News is reporting that...

Congress was provided “encrypted text messages [the top US diplomat in Ukraine] exchanged with two other American diplomats in September regarding aid money President Donald Trump ordered to be held back from Ukraine.”

Might the AG also want to monitor these messages?

https://abcnews.go.com/Politics/top-diplomat-ukraine-crazy-w...


[flagged]


Please don't post flamewar rants here.

https://news.ycombinator.com/newsguidelines.html


Epstein committed suicide. Did you miss the news coverage?

And I’m not sure what the AG has to do with Epstein’s crimes. Did you get some people mixed up?


There's enough conflicting information about the suicide that that particular conspiracy theory is going to have legs for a long time to come.


Better to just call it a substantiated theory at this point.


It's not substantiated, it's speculation.

There's still a possibility it was an actual, non-coerced suicide.


For sane people, this is the only possibility.


Wrong. It's a substantiated theory, because there's lots of evidence suggesting this theory is correct. But it is not a fact; you're right that there's still a possibility it was an actual suicide. But I'm right in calling it a substantiated theory that Epstein was "suicided".


There is no evidence at all suggesting it is correct.


> And I’m not sure what the AG has to do with Epstein’s crimes. Did you get some people mixed up?

The AG is a loyalist to and subordinate of the President, who was an associate of Epstein and who was also at one point accused of having had Epstein procure an underage girl for sex.


The AG's father, Donald Barr, also [may have] hired Epstein as a math and science teacher at the Dalton School where Donald Barr was Principal, despite Epstein seemingly lacking appropriate credentials [1]. William Barr was either a student there or attended in some capacity. Note the linked article at the bottom shows it is not clear whether Donald Barr had a personal role in the hiring of Epstein.

[1] https://www.huffpost.com/entry/jeffrey-epstein-math-science-...


Yes, Epstein committed suicide, and Saddam had weapons of mass destruction. Every sane person knows that.


There were a ton of damning leaks coming out that would taint a jury pool.


It's like when the Republicans said in the 90s that you don't solve problems by throwing money at them.

But that's exactly how many problems are solved.


Which is why US healthcare is the standard of the world. Because it’s so profitable. /s


Government solves problems with money. The US government spends little on healthcare, which causes citizens to spend more.

Perhaps I should clarify that the only way we are going to solve he healthcare issue in the US is by the government spending money on it.


I mean, it has to be, the free market would correct it if it weren't, right? /s


Fields other than cosmetic surgery aren't allowed to operate through a free market http://healthblog.ncpathinktank.org/why-cant-the-market-for-...


> mere numbers cannot capture the significance of the harm to children.

Oh, bull crap. By this argument, literally no protection or liberty that allows even one child to be harmed could ever be tolerated. Why stop at vetting baby-sitters or teachers? We ought to require mandatory background checks for every person who ever visits a family or goes into a public space that might contain children. Law enforcement also ought to be able to track every underage child at all times, via mandatory GPS collars.

At the risk of sounding callous, there are occasionally some situations where you really do need to shut up and multiply.


The argument about children is easily invalidated by the fact that, during the Vietnam / Afghanistan / Iraq wars, many children were harmed for much less than everybody's privacy.


Those are other people's children.

The reptilian brain will respond differently to children from different cultures on the other side of the world. I understand the logic behind your argument but policy isn't driven by logic.


Hell, it’s nakedly visible in policy in treatment of children from different cultures in the US itself. Xenophobia is a known route to power. This impulse CAN be countered by putting it in the light. Policy can and should always be driven in part by logic.


Yes. Mr. Attorney General can also demonstrate his concern for children by investigating the ICE and their detention facilities.


I think 99% of arguments that claim to be in defense of children are disingenuous obfuscation of advocacy of censorship to enable fascism.


If they cared about children they'd be putting money and effort into the education system instead of surveillance. Napkin math - It costs the feds roughly $1000 per child per year for school, and the NSA about $250,000 per employee per year. BUT THINK OF THE CHILDREN we could be spying on!


> By this argument, literally no protection or liberty that allows even one child to be harmed could ever be tolerated

It's an appeal to emotion and a commonly used tactic of politicians to get what they want (and to convince people it's what they want too). Used all the time in gun control debate.


By this argument, literally no protection or liberty that allows even one child to be harmed could ever be tolerated.

Well, I think the aim is to create (or in this case, maintain) an emotional button that, when pressed, creates such an extreme picture in the person's mind that they instantly respond with "your freedom? versus this? yeah, screw your freedom, we gotta stop this."

The more extreme the pictures shown in the media, the more Manichean the TV plot, etc., the stronger the reaction.


Something must be done, this is something, so it must be done. (Ignore the fact it won't really help)


> mere numbers cannot capture the significance of the harm to children.

I completely agree. Now can we talk about guns just lying around the home?


If mere numbers cannot capture the harm to children, then... why do we break out these equations when we're talking about fracking near school grounds?


I wonder why they don't use this argument to advocate gun control.


"In retrospect Sandy Hook marked the end of the US gun control debate. Once America decided killing children was bearable, it was over." –Dan Hodges


I love how so many people instantly associated that argument with gun control :)


Maybe because guns harm more children than Facebook?


I would put money on that being absolutely, positively, patently false.

In fact, I wouldn’t be surprised if data could be generated that shows cyber bullying on Facebook results in more child suicides than children murdered in school shootings per year.


Some data, presented neutrally with little comment.

> child suicides

https://www.cdc.gov/nchs/data/nvsr/nvsr67/nvsr67_04.pdf

> In 2016, suicides numbered 2,553, while homicides numbered 1,963

> Firearms were the leading method of homicide for persons aged 10–19 years during 1999–2016, accounting for 87% of all homicides in 2016

> Suicide involving suffocation was the leading method among children and adolescents aged 10–19 years in 2016, slightly outnumbering suicide involving firearms (1,103 and 1,102, respectively).

There are some age and gender difference to the method chosen for suicide.

We probably need to include some deaths by unintentional injury, although 85% of those deaths are motor vehicle traffic, drowning, and poisoning.

I'm still getting to grips with the way the US counts deaths by suicide and I still find it a real struggle to find the data and to understand what they've counted and how they've counted it, which is why I only use the CDC data.


Congratulations, you're a prophet.


> By this argument, literally no protection or liberty that allows even one child to be harmed could ever be tolerated.

[argument]: Considering that a large amount of child abuse is perpetrated by the parents of children, perhaps the people who need to be vetted first are...

/of course, that devolves into "vet everyone"...

