> Now you're suddenly interesting to the authorities, even if you've done literally nothing wrong. Or have you?
Playing devil's advocate: this argument could be taken to mean that the fear isn't that information is being collected, aggregated, and analyzed, but that the algorithms will be wrong and their results misinterpreted or misused.
As technology advances, both of these problems will diminish.
Furthermore, if I came up with some kind of math that could determine, with a high degree of certainty, that someone is a (terrorist/communist/pedophile/father raper) given their online activities, the authorities would be negligent not to follow up on that information and verify it.
Much like with spam filtering, verified false positives help train the filter further.
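Roughly what that feedback loop looks like, as a toy sketch; the filter, tokens, and numbers here are all invented for illustration, not a real system:

```python
import math
from collections import defaultdict

class ToyBayesFilter:
    """Toy naive-Bayes-style filter, purely illustrative."""
    def __init__(self):
        self.counts = {"flagged": defaultdict(int), "clean": defaultdict(int)}

    def train(self, tokens, label):
        # Feeding a verified false positive back in as "clean" is the
        # retraining step the analogy refers to.
        for t in tokens:
            self.counts[label][t] += 1

    def score(self, tokens):
        # P(flagged | tokens) under naive independence, Laplace smoothing,
        # and a uniform prior over the two classes.
        vocab = set()
        for c in self.counts.values():
            vocab.update(c)
        V = max(len(vocab), 1)
        logp = {}
        for label, c in self.counts.items():
            n = sum(c.values())
            logp[label] = sum(math.log((c[t] + 1) / (n + V)) for t in tokens)
        m = max(logp.values())
        z = sum(math.exp(v - m) for v in logp.values())
        return math.exp(logp["flagged"] - m) / z

f = ToyBayesFilter()
f.train(["meet", "at", "airport"], "flagged")
f.train(["meet", "for", "coffee"], "clean")
print(f.score(["meet", "at", "airport"]))    # ~0.80: flagged
f.train(["meet", "at", "airport"], "clean")  # analyst verifies: false positive
print(f.score(["meet", "at", "airport"]))    # ~0.63: confidence drops
```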
That leaves only the abuse argument... and honestly, I don't see /potential/ abuse as an argument against any kind of technological advance. We have ways of dealing with abuse.
I'd argue that we don't have any good ways of dealing with abuse/misuse, and that's precisely the problem with such a system.
And let's not forget the "verified false positives" are counted in lives ruined/ended. Could we do it? Yeah, no one's denying that. But if we throw out ethics in the name of technological progress, we could do a lot of great things.
There's no deterministic line beyond which a person is "certainly" a threat. At the end of the day, a person has to decide what is and isn't a threat; all the computer can do is help that decision along. A person pulls the trigger, and as we all know, people can really suck sometimes.
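To put that point in code (names and scores hypothetical): the model only ever emits a score, and a human picks the cutoff. Where the line sits is a policy choice, not something the math settles on its own.

```python
# Illustrative only: hypothetical suspects and model scores.
suspects = {"alice": 0.62, "bob": 0.87, "carol": 0.55}

def flag(scores, threshold):
    # The threshold is the human judgment call, not a model output.
    return [name for name, s in scores.items() if s >= threshold]

print(flag(suspects, 0.9))  # []: a cautious analyst flags no one
print(flag(suspects, 0.5))  # all three: an aggressive analyst flags everyone
```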
"As technology advances, both of these problems will reduce more and more."
For the first problem (that algorithms' results will be wrong), you're assuming that the technology to avoid false positives will progress faster than the technology to collect more data. On what do you base that assumption?
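To make the first problem concrete, here's a back-of-the-envelope Bayes calculation; the accuracy and base-rate numbers are invented for illustration, not taken from anywhere:

```python
# All numbers hypothetical: a detector that is right 99% of the time,
# hunting a trait true of roughly 1 in 10,000 people.
base_rate = 1 / 10_000      # P(threat)
sensitivity = 0.99          # P(flagged | threat)
false_positive_rate = 0.01  # P(flagged | innocent)

p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_threat_given_flagged = (sensitivity * base_rate) / p_flagged

print(f"{p_threat_given_flagged:.4f}")  # ~0.0098: ~99% of flags are innocent people
```

And note that collecting more data doesn't change that ratio; scanning more people just multiplies the absolute number of innocents flagged, which is exactly why collection outpacing accuracy matters.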
The second problem (that the results will be misinterpreted or misused) isn't a technological problem at all, so how will the advancement of technology reduce that problem?