
So people are willing to tolerate the infinitesimal personal risk of an invasion of privacy by the government in order to diminish the infinitesimal personal risk of a terrorist attack. Not sure what all the fuss is about.



It is not only about people buying a sense of safety at the expense of privacy, with its obvious problems: shifted power balances, vastly greater opportunities for abuse, and a diminished chance of that abuse ever being caught.

It is about how faulty the "nothing to hide" argument and its various mutations are, as the paper argues. One of the problems is that the faultiness of these arguments isn't obvious on an individual basis. The argument for "stopping 9/11 from ever happening again" (or any other "fight terrorism" argument) is about "saving lives", which inherently means any (American) lives, so it operates on a collective basis; the "nothing to hide" argument, by contrast, operates on an individual basis.

Because the "nothing to hide" argument operates on an individual basis, it can be read as a "whatever, as long as it doesn't happen to me" kind of argument, yet it is being used to settle a collective matter.

If it were really about saving lives, banning alcohol would save tens of thousands of lives annually (assuming a ban would result in zero alcohol consumption, which is false, but it makes the point and even exposes the hypocrisy of the "fighting terrorism" argument).


So you don't recognize the difference between one infinitesimal and another? They may differ by orders of magnitude, which implies the collateral damage may significantly outweigh any benefit to society.


If you are arguing for more government eavesdropping, I would point out that terrorism is a failed political strategy on the decline even without help from government programs.

If you are arguing for less government eavesdropping, I would need to see evidence, first, that there is some sort of real danger. I don't share the libertarian presumption that government is by definition malicious and incompetent.


The point is not to argue either way; it's that your statement is essentially absurd without concrete values.


Concrete numbers would be nice if we had them, but the values offered by true believers are not data-driven. In the meantime, I will continue not worrying about getting hit by lightning and, simultaneously, not worrying about being eaten by a shark, even though one is probably riskier than the other. That seems reasonable to me when there are higher-risk things to worry about.


Well, the problem comes with how the government uses the data it has on you.

Say, for example, I have... 10,000 points of data about you. 10,000 HN posts + Facebook posts + Reddit posts, etc. You've said a lot during your Internet career.

I can take what you've written and form a profile of you from these words. They're just words; you've done nothing wrong. Freedom of speech being what it is, let's even say you haven't said anything particularly inflammatory. Nothing threatening, nothing dangerous - you're just an average guy.

Now, I take these words, and I compare them with words that other folks have said. Using some fancy technology, I can group you with people who say things kind of like what you say. Using this data, I can group everyone this way, into clouds.
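For concreteness, here's a minimal sketch of the kind of grouping that paragraph describes, using TF-IDF vectors and k-means from scikit-learn. Every profile below is made up; a real system would be far more elaborate:

    # pip install scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # One document per person: everything they've posted, concatenated.
    # All of these profiles are hypothetical.
    people = {
        "alice": "took the rifle to the range on sunday, bought a new scope",
        "bob":   "fighting the rust borrow checker, rewriting the backend",
        "carol": "rifle range trip this sunday, ammo prices are up again",
        "dave":  "backend rewrite in rust, the borrow checker won again",
    }

    vectors = TfidfVectorizer().fit_transform(people.values())
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    for name, label in zip(people, labels):
        print(name, "-> cloud", label)  # groups people by shared vocabulary alone

Note that nothing here looks at what anyone did, only at which words they happen to use.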

And here's where things get sticky - I can use these clouds of people to look at folks who are "similar" to known terrorists. Folks who, themselves, have done nothing wrong, but who "look" similar to people I know are bad. Let's say, for some reason, you're grouped with someone who has known ties to I dunno... the militant branch of the KKK or whatever. Now you're suddenly interesting to the authorities, even if you've done literally nothing wrong. Or have you?
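A hedged sketch of that "looks similar to known bad actors" step, again with invented profiles: plain cosine similarity between word vectors, with an arbitrary threshold standing in for whatever a real system would use:

    # pip install scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical data: short strings stand in for years of posts.
    watchlist = ["the militia meets at the range on sunday, bring rifles"]
    ordinary = [
        "went to the range on sunday to sight in a new rifle",  # a hobbyist
        "shipped the new release, on call all weekend",         # a developer
    ]

    vec = TfidfVectorizer().fit(watchlist + ordinary)
    scores = cosine_similarity(vec.transform(ordinary), vec.transform(watchlist))
    for text, score in zip(ordinary, scores.max(axis=1)):
        print(f"{score:.2f}  {text!r}")
    # The innocent hobbyist scores far higher than the developer -
    # guilt by vocabulary overlap, not by anything they did.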

If you're grouped via my super special technology with a terrorist, maybe this puts you on a no-fly list. Maybe this gets your security clearance denied. Maybe you get "randomly" audited. World-ending? No. Completely unwarranted and totally annoying? Yes.

On a philosophical level, this is all kinds of contrary to the freedoms we expect to have in America, and that sucks, but let's be more practical. People are screaming about the sky falling and the world ending because the NSA knows you like Japanese porn or whatever, but frankly that's not a big deal. What's more likely is that you're going to be annoyed and inconvenienced, and there's not a lot of reason for it.

It kind of sucks, and I guess you have to decide for yourself if you're okay with what might happen to someone if they turn out to be a false positive. For me, I don't so much mind the data collection; I just want it to be fully exposed. I want Google, Microsoft, et al. to have to say publicly when they comply with a request for data, and in a perfect world, I'd want these companies to be required to notify the people whose data they hand over. Does it make it more difficult to catch bad guys? Yes. Does it provide a level of transparency that a representative democracy requires to function properly? Yes.

If they collect too much data (or too little!) I want to be able to vote someone out of office. Just saying, "attacks haven't happened so therefore what we're doing is working" isn't something I can buy. Is that too much to ask?


> Now you're suddenly interesting to the authorities, even if you've done literally nothing wrong. Or have you?

    <reply type="devils-advocate">
This argument could be taken to mean that the fear isn't that information is being collected, aggregated, and analyzed, but rather that the algorithms will be wrong and the results will be misinterpreted or misused.

As technology advances, both of these problems will reduce more and more.

Furthermore, if I came up with some kind of math that could determine with a high degree of certainty that someone is a (terrorist/communist/pedophile/father raper) given their online activities, the authorities would be negligent not to follow up on that information and determine if it's valid or not.
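Worth pausing on the base-rate arithmetic lurking behind "a high degree of certainty". A back-of-the-envelope Bayes calculation (every number here is invented purely for illustration) shows how a 99%-accurate test can still be wrong about nearly everyone it flags when the thing being detected is rare:

    # All figures hypothetical, chosen only to illustrate Bayes' theorem.
    p_bad = 1e-5            # assumed prevalence: 1 in 100,000 people
    sensitivity = 0.99      # P(flagged | actually bad), assumed
    false_pos_rate = 0.01   # P(flagged | innocent), assumed

    p_flagged = sensitivity * p_bad + false_pos_rate * (1 - p_bad)
    p_bad_given_flag = sensitivity * p_bad / p_flagged
    print(f"P(actually bad | flagged) = {p_bad_given_flag:.4%}")
    # ~0.099%: under these assumptions, over 99.9% of flagged people are innocent.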

Much like spam, verified false positives help train the filters further.

That leaves only the abuse argument... and honestly, I don't see /potential/ abuse as an argument against any kind of technological advance. We have ways of dealing with abuse.


I'd argue that we don't have any good ways of dealing with abuse/misuse, and that's precisely the problem with such a system.

And let's not forget the "verified false positives" are counted in lives ruined/ended. Could we do it? Yeah, no one's denying that. But if we throw out ethics in the name of technological progress, we could do a lot of great things.

There's not really a deterministic line beyond which a person is "certainly" a threat. At the end of the day, a person has to decide what is and isn't a threat; all the computer can do is help that decision along. A person pulls the trigger, and as we all know, people can really suck sometimes.


"As technology advances, both of these problems will reduce more and more."

For the first problem (that algorithms' results will be wrong), you're assuming that the technology to avoid false positives will progress faster than the technology to collect more data. On what do you base that assumption?

The second problem (that the results will be misinterpreted or misused) isn't a technological problem at all, so how will the advancement of technology reduce that problem?


Another thing that bothers the CRAP out of me is that I now feel like I can't say how I feel around friends, because I know the government is monitoring me. It's not that I am a terrorist or criminal, but this feels /exactly/ like living in some unfree communist piece-of-shit country where you cannot speak your mind without facing scrutiny from informants or Stasi-style police, UNLESS it's in the comfort of your four walls with your close family and friends. This is tyranny, make no mistake about it.


You can, you just can't do it over Facebook without employing client-side encryption.

Something your friends probably don't give a crap about and won't do. I know mine wouldn't if I tried to get them to - it's just too hard.
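For what it's worth, "client-side encryption" here can be as simple as the following sketch (using the Python cryptography package; the genuinely hard part, getting the key to your friends over some other channel, is hand-waved away):

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must reach your friend via some other channel
    f = Fernet(key)

    ciphertext = f.encrypt(b"what I actually think")  # this is all Facebook sees
    plaintext = Fernet(key).decrypt(ciphertext)       # your friend, holding the key
    print(plaintext.decode())

Which is exactly the "too hard" problem: the crypto itself is trivial, while the key exchange and the friends-won't-bother part are not.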



