Encrypting all communications is certainly the way to go.
But I wonder if we could make these systems completely inefficient by flooding them with false positives. Assuming we can figure out the patterns they look for in our communications, could this be a way to force them to withdraw their "black boxes"?
This was a premise considered a very long time ago when it came to the NSA's snooping. People were (still are?) putting keywords in every email, etc. It didn't make any difference, and inherently can't.
Here's why.
Scenario 1) It works. You get arrested on some arbitrary basis for impeding their system. Or they otherwise make it illegal to do so, and begin cracking down on that.
Scenario 2) You throw a vast amount of interference at their system, and it has an effect. They spend more of your money to constantly stay ahead of the collective efforts. Most likely a relatively small number of people will never be able to overwhelm it long-term.
Scenario 3) It doesn't work in any meaningful way at all.
>Scenario 1) It works. You get arrested on some arbitrary basis for impeding their system. Or they otherwise make it illegal to do so, and begin cracking down on that.
That would be a hard First Amendment case in the US ... very hard.
Flooding the system can only work if the group that floods the system is large enough that it isn't simply expedient for the surveillance organisations to decide you're a potential risk and put you under additional surveillance.
Encryption is in a similar position, but it is a far easier sell to business and the general public, so the chances of reaching a critical mass of encrypted communications are much greater.
The interesting bit is that as the general public adopts better security practices and becomes invisible, it will also benefit the pedophiles and terrorists already in hiding, because their choice to hide/encrypt will no longer make them stick out from the masses.
Most criminals are caught because their groups are targeted and OPSEC (operational security) is really, really hard. They catch the people who didn't maintain strict discipline and get them to flip on the rest. This is an age-old recipe which is resistant to technological change because, again, OPSEC is really, really hard.
I think the idea was to use a DDoS-like farm of hacked machines to constantly send random messages and packets meant to trip their detection systems to random other IPs, thereby increasing the sheer amount of noise surveillance authorities have to deal with, plus the number of false-positive "suspects" (the owners of all those hacked machines).
Naturally, that still doesn't solve any other problems...
1/ They're after the metadata. Whether your communication is plaintext or encrypted, they still know whom you talk to. Unless you use Tor or VPN yourself out of the country, it's not going to help...
2/ Strict key disclosure laws. You can be thrown in jail if you cannot decrypt some information when requested by a judge. That's true even in the case where you can prove the key is no longer in your possession...
Who knew Tor wasn't going to be useful only for people in countries like China, Iran or Saudi Arabia...but also France, Spain, UK, US, Australia, Canada...you know, the "most freedom-loving democratic countries" in the world.
There's definitely a coordinated effort to pass these laws together now, to make them seem like the "sensible" thing to do after the terrorist attacks. FBI chief Backdoor-Comey has also been making the rounds in European countries to push for total surveillance laws, "or else it might hurt their relationship with the US". This may especially work in weaker countries where a partnership with the US is regarded as a godsend and they'll try not to do anything to hurt that partnership. In other words, they'll do anything the US government tells them to do.
> 2/ Strict key disclosure laws. You can be thrown in jail if you cannot decrypt some information when requested by a judge. That's true even in the case where you can prove the key is no longer in your possession...
How the heck is this supposed to work when TLS supports Diffie-Hellman?
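The point of the question, as I read it: with ephemeral Diffie-Hellman the session secret is derived from throwaway exponents, so after the session there is no long-term key anyone could be ordered to disclose. A toy sketch (using a small Mersenne prime purely for illustration; real TLS uses standardized 2048-bit+ groups or elliptic curves):

```python
import secrets

# Toy ephemeral Diffie-Hellman parameters -- NOT secure, illustration only.
p = 2**127 - 1   # a Mersenne prime, stand-in for a real DH group modulus
g = 3

a = secrets.randbelow(p - 3) + 2   # Alice's ephemeral private exponent
b = secrets.randbelow(p - 3) + 2   # Bob's ephemeral private exponent

A = pow(g, a, p)                   # public values, exchanged over the wire
B = pow(g, b, p)

shared_alice = pow(B, a, p)        # Alice computes g^(ab) mod p
shared_bob = pow(A, b, p)          # Bob computes the same value
assert shared_alice == shared_bob

# Forward secrecy: once a and b are discarded, neither party *has* a key
# that could decrypt a recorded transcript of this session -- so there is
# nothing left to hand over under a key disclosure order.
del a, b
```

(Key disclosure laws would instead target data encrypted at rest, or stored keys, where something persistent actually exists to demand.)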
Are there encryption algorithms such that the payload can be decrypted with more than one key, but only one key, the real one, returns the true result, while the other keys return fake but plausible results?
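This property is usually called deniable encryption. A one-time pad has it trivially: for any ciphertext you can compute, after the fact, a second key that "decrypts" it to any decoy message of the same length. A minimal sketch (the messages and keys here are made up for illustration):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (one-time-pad encrypt/decrypt)."""
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"meet at the old dock at 9pm"
key = secrets.token_bytes(len(real_msg))
cipher = xor(real_msg, key)            # the one-time-pad ciphertext

# Under coercion, hand over a *fake* key derived from a plausible decoy
# message of the same length -- the ciphertext "decrypts" cleanly to it.
decoy = b"grandma's pie needs 9 eggs "
assert len(decoy) == len(real_msg)
fake_key = xor(cipher, decoy)

assert xor(cipher, key) == real_msg    # real key -> real message
assert xor(cipher, fake_key) == decoy  # fake key -> plausible decoy
```

For practical ciphers this is harder, since wrong keys produce obvious garbage; schemes like VeraCrypt's hidden volumes approach the same goal at the container level instead.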
People already did this with Echelon a decade ago. I remember people crafting sentences with specific keywords such as 'bomb', 'explode', etc. that were totally innocuous in context, but were designed to trigger the algorithm.
Maybe I'll make my personal server connect to random IPs on port 80 and send data with such keywords.
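A hedged sketch of what the payload side of that might look like: sentences that are innocuous in context but stuffed with trigger keywords. The keyword list and templates are made up for illustration, and the actual sending-to-random-hosts part is deliberately left out here.

```python
import random

# Hypothetical trigger keywords of the kind people aimed at Echelon.
KEYWORDS = ["bomb", "explode", "attack", "cell", "detonate"]

# Sentences where the keywords read as harmless in context.
TEMPLATES = [
    "that movie was the {kw}, I thought I'd {kw2} laughing",
    "my phone battery might {kw} if this {kw2} charger keeps overheating",
    "the bath {kw} smells great, left it next to the {kw2} charger",
]

def noise_sentence(rng: random.Random) -> str:
    """Build one keyword-stuffed but innocuous-sounding sentence."""
    kw, kw2 = rng.sample(KEYWORDS, 2)
    return rng.choice(TEMPLATES).format(kw=kw, kw2=kw2)

rng = random.Random()
payload = "\n".join(noise_sentence(rng) for _ in range(5))
print(payload)
```

Whether this moves the needle at all is exactly the Scenario 2/3 question from upthread: a handful of people generating noise is trivial for the filters to learn and discard.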