"What these companies are doing is illegal in Europe but they do not care," said Ms Eckert, adding that the research had kicked off a debate in Germany about how to curb the data-gathering habits of the firms.
I think it’s important to be skeptical towards legislation as a solution to these things. The EU/UK cookie law is a cautionary tale, for example. After all that talk, we ended up with a law that (effectively) mandates boilerplate nag screens and no change in behaviour. Even if it had clearer language to distinguish allowable from illegal cookie use, it would still be very difficult to enforce.
I don’t mean to say legislation has no part to play. Just that the politician-outrage-to-legislation sausage factory has produced some duds in this area. I wouldn’t count on a solution coming from this direction.
Speaking of enforcement… Most countries have an advertising standards authority. They create the rules and such. If an ad is (for example) a blatant lie, they can call up the Press/TV/Radio station and get the ad removed. Online, it’s not obvious what authority they have, or how they would enforce that authority at all.
Where advertising standards still hold up is in regulated industries. If a locally regulated bank advertises “one weird trick to double your savings,” the advertising standards people can go to the regulator. They have a number to call and genuine threats to make... enough to promote self-policing.
Online, even reputable newspapers allow shockingly crappy ads. Sleazy data collection, snake oil, fake products, click farms, scams, even fake news (ironically). Real shyster stuff.
This is the visible end of the online advertising stick: the ad content itself. We already have legislation and an established body of rules. Still, enforcement is nonexistent. Dealing with the unseen data-collection end of this stick is even harder.
I think it's important to recognise how good - and effective - data protection laws are in some countries. The biggest challenge is US companies flagrantly ignoring them. In other words, the main thing holding Europe back from protecting data is that the USA is so lax. I think in that regard, more legislation could be massively beneficial.
Err, it's not clear that EU law applies to US companies online, nor should it. There are many things illegal in the EU that are quite allowable in the US including a lot of freedom of speech issues.
One of the issues anytime we have legislation on the internet is how it applies. If XYZ company is based out of the US, markets itself to the US, but has a generally available website, does EU law apply to it at all? Even if an EU citizen uses it?
Alternatively, can every country force their laws on websites based out of the EU?
I agree, but that is going to change. There is new European legislation coming up (GDPR, effective May 2018) which will be a lot more restrictive than the current privacy laws. It will apply to any organization that monitors the online behavior of people in Europe, even if the company itself is in the US or somewhere else. With fines up to EUR 20M or 4% of worldwide turnover there will be a large incentive to comply. Already you can see big companies like Google and Mailchimp adapting to it.
I think it's fairly clear. If there are two parties to a transaction in two different countries, you can generally apply the laws of either country. That's the nature of international commerce.
The US for one feels entitled to go after gambling companies incorporated abroad.
If an American company doesn't like it, it can simply stop doing business in the world's largest market.
> Err, it's not clear that EU law applies to US companies online, nor should it. There are many things illegal in the EU that are quite allowable in the US including a lot of freedom of speech issues.
> The regulation applies if the data controller (organization that collects data from EU residents) or processor (organization that processes data on behalf of data controller e.g. cloud service providers) or the data subject (person) is based in the EU.
It's fairly clear that, absent a world government, national laws apply to companies outside the nation based on the combination of will and ability of the particular nation to enforce penalties on foreign violations.
The ability side tends to be very high if the company at issue has any substantial assets held in the country trying to enforce the rules, or in another country with an interest in keeping that country happy.
In the end, it's pretty simple: online companies that make money in the EU (even just selling ads) have something to lose and can be forced to comply. Others can't.
In some cases, even the opposite. I used to use self-destructing cookies but stopped, because so many websites required cookies just to stop showing that consent message. I know there are extensions for removing those, but the point is that it became more difficult to avoid the very thing the law wanted to avoid in the first place.
Makes me think that laws like this should be more like programming (for humans): easily reverted when they are found to cause bugs in behavior. But somehow laws have to be fixed with new bills on top of the old.
The law books are the biggest pile of out-of-date legacy code, understood by nobody apart from a few overpaid experts, and in need of a massive refactoring.
So what else do you propose we should do? Demand that the companies self-police out of their deep commitment to ethics and communal wellbeing - and then be shocked and outraged when against all odds they don't?
I don't actually accept "the current state of affairs is bad" as a justification for new regulations.
The entire point of e.g. the cookie example is that often regulation is not only annoying but ineffective. You don't get to say "this is bad, self-enforcement doesn't work, therefore we have an obligation to pass a law that also won't work".
I agree that self-policing won't work, but it's entirely fair to say "all of the options I see are unacceptably bad, we need to talk about finding a new path instead of choosing any of these".
> I don't actually accept "the current state of affairs is bad" as a justification for new regulations.
And I don't accept "a previous regulation turned out to be pointless and annoying" as an argument against trying new regulations to fix "the current bad state of affairs." I don't think there's some magical third way to resolve issues with the failure of self-policing that doesn't involve regulation. Such arguments tend to have little positive outcome and usually just serve to discourage any attempts to actually fix problems.
Law, regulation, and enforcement are the solutions to reconciling the mismatch between selfish bad behavior and the public good. However, these regulations will undoubtedly have to be iterated on until a workable and effective solution is found.
> I don't think there's some magical third way to resolve issues with the failure of self-policing that doesn't involve regulation.
I'm still hopeful about finding a magical third way, but I admit I'm short on answers. "Find a tech solution" is the common one, but it feels like a coinflip at best - with smart people putting their talents on both sides of anonymity, there's no real reason to count on an eventual victory for privacy. (And, troublingly, each retrenchment seems to raise the skill requirement for privacy - it's entirely out of reach of most average web users.)
> Law, regulation, and enforcement are the solutions to reconciling the mismatch between selfish bad behavior and the public good. However, these regulations will undoubtedly have to be iterated on until a workable and effective solution is found.
And here's where we differ, I guess. I don't think that the cookies regulation is a one-off case of "pointless and annoying", nor do I think it will be iterated on until an effective solution is found. I think "iteration until success" is entirely absent from the history of tech regulation.
I think that regulators are currently too slow, uninformed, and biased to effectively address any but the most egregious technological issues - unless the right answer is "outlaw any behavior even vaguely resembling this" the task is simply beyond the bodies involved. The number of well-crafted, impactful, unsubverted laws protecting computer privacy and security is approximately zero. I think that this state of affairs can't be substantially changed with outreach or lobbying, but will remain a basically fundamental aspect of first-world governments for at least a decade.
---
So... I guess my answer for now is hopelessness. I agree that the current state of affairs is bad, self-policing is hopeless, and no third solution is forthcoming. But I also think regulation is generally both destructive and useless, which is marginally worse than nothing at all. I guess that too is "discouraging any attempts to actually fix problems", but I'll be damned if I see a way out.
In the spirit of finding new paths, just an idea (and I'm not sure at all that this is a practical solution):
fail2ban---a tool commonly used in servers to automatically block remote attackers such as spammers---can be configured to automatically send an attack report to some e-mail address when it detects an attack.
A browser extension, such as Privacy Badger, could do a similar thing when it detects a tracker. Let the report be sent to a law enforcement agency that is set up to deal with this kind of problem. If this agency collects enough data, it can do some data mining to find, for example, the most prolific trackers. If that works, maybe such a tool can be used to give law enforcement some actual power to punish these companies.
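To make the aggregation step concrete, here is a rough Python sketch of how such an agency might rank reported trackers. Everything in it is hypothetical (the report format, the domain names); it just shows that per-user deduplication plus counting already gets you a "worst offenders" list:

```python
from collections import Counter

def rank_trackers(reports):
    """Rank tracker domains by how many distinct users reported them.

    `reports` is a list of (user_id, tracker_domain) tuples, as might be
    submitted automatically by a Privacy-Badger-style extension.
    """
    seen = {(user, domain) for user, domain in reports}  # dedupe per user
    counts = Counter(domain for _, domain in seen)
    return counts.most_common()

reports = [
    ("alice", "pixel.example-tracker.io"),
    ("alice", "pixel.example-tracker.io"),  # duplicate report, counted once
    ("bob",   "pixel.example-tracker.io"),
    ("bob",   "ads.example-net.com"),
]
print(rank_trackers(reports))
# [('pixel.example-tracker.io', 2), ('ads.example-net.com', 1)]
```

Counting distinct reporters rather than raw reports matters: otherwise one noisy user (or one noisy extension) would dominate the ranking.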
I'm not sure how practical the "to law enforcement" step is here, for two reasons. First, lots of these trackers are based in places where their data collection-and-sales are legal - the EU makes decent efforts to pursue foreign actors, but it's a serious limitation nonetheless. Second, it's not clear how often illegal action can be demonstrated. The authors of the DefCon presentation here only named one of the ten extensions they suspected, because the rest couldn't be identified beyond reasonable doubt; with data being gathered by so many actors, pinning it on any one is hard.
Even so, producing a bank of highly-suspicious extensions could be truly valuable. Law enforcement (sometimes) has the resources to pursue these things beyond a DefCon presentation - honeypotting might make it possible to advance suspicion to proof by providing fake data to specific trackers. And publicizing high-likelihood suspicions might produce publicity against bad actors and moves to transparency and guarantees by better actors.
Sometimes you have to accept that legislation isn't going to fix a problem. Piracy, spamming, and so on aren't going to go away, even if we catch people periodically.
You can certainly get the largest firms to comply, but the internet is full of people trying to trick you with fraud and viruses. You're not going to make those people care.
> And sometimes you have to accept that there will be zero change until legislation hits the big companies with an even bigger stick.
Were the problem parties in this article the "big companies"? Nope. They're already flouting existing laws. Are new laws going to stop them when the existing ones don't? Obviously not.
Regulations can't defend hordes of gullible suckers.
It's a two part problem: Consumers need to be equipped to defend themselves with basic critical thinking, in addition to regulation that punish abusive entities.
I haven't read it (or the underlying legislation being amended), so I don't know.
The ePrivacy directive amendment (the "cookie law" I was referring to as a dud) didn't show its flaws until it was implemented by sites in what now appears to be the recommended manner^. Basically, it allowed a generic (and IMO useless) interpretation of informed consent.
Most of the discussion (parliamentary and otherwise) was around tracking and privacy. What seems to have fueled it was retargeting. To the best of my knowledge, neither Google Analytics, FB advertising, nor any of the other major ad networks has made meaningful changes to their systems as a result of the legislation.
I think a change to browser defaults could have had (and still can have) a more meaningful effect than this legislation and all its regional children.
The overview you link to describes the legislation's intent. I agree with the intent. The problems are not there. The problems are with effectiveness in practice. As I said, I don't know the details of this amendment, so I don't have those details for you.
In the legislation this amends, many of the problems arise from the "user initiated" approach. You have a right to see what data FB has on you, but you need to know FB has info on you before you can ask about it.
It also (as this article suggests) needs to deal in a lot of grey area around anonymous data.
I'm not against laws. I'm not a hater. I'm just skeptical that these issues can be resolved solely or even primarily via legislation.
I think it's appropriate to be skeptical of any particular piece of legislation in the same way that it's reasonable to be skeptical of a proof of a centuries old conjecture. It doesn't follow that you should be skeptical of mathematics as a system.
This business of using full HTTP requests with full cookies to domains that are secondary to the site I'm visiting needs to end. When I go to Foo.com, the browser does not need to send all my cookies and info to bar.com, even if we're fetching resources to display on Foo.com. Bar.com in this case is acting as a dumb file server, it doesn't need cookies.
Yes, this would make single-sign-on harder, but it would make it explicit and be worth the trouble so that when the user is talking to A, they're not being tracked by A's friends B, C, and D.
Of course, the big problem: the best browser is owned by the advertiser who stands to lose under such an arrangement. So at best you'd need Safari or IE to spearhead such a change. You can ape it with browser extensions, but without a big browser maker pushing for this kind of shift some sites would just break under such a model (particularly single-sign-on services like Gmail and Facebook).
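For what it's worth, the proposed model is easy to state precisely. A hedged Python sketch (hostnames invented; real browsers match cookies by registrable domain, path, and flags like SameSite, all of which this toy ignores):

```python
def cookies_for_request(cookie_jar, request_host, first_party_host,
                        block_third_party=True):
    """Return the cookies a browser would attach to an outgoing request.

    Under the proposed model, a request to bar.com made while viewing
    foo.com (a third-party request) carries no cookies at all: bar.com
    acts as a dumb file server.
    """
    if block_third_party and request_host != first_party_host:
        return {}
    return cookie_jar.get(request_host, {})

jar = {"bar.com": {"session": "abc123"}}

# Visiting bar.com directly: first-party, cookies flow as usual.
print(cookies_for_request(jar, "bar.com", "bar.com"))  # {'session': 'abc123'}

# bar.com embedded as a resource on foo.com: third-party, nothing sent.
print(cookies_for_request(jar, "bar.com", "foo.com"))  # {}
```

The single-sign-on breakage mentioned above falls directly out of this rule: the login provider is by definition a third party on the embedding site, so it would no longer see its session cookie.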
> This business of using full HTTP requests with full cookies to domains that are secondary to the site I'm visiting needs to end. When I go to Foo.com, the browser does not need to send all my cookies and info to bar.com, even if we're fetching resources to display on Foo.com. Bar.com in this case is acting as a dumb file server, it doesn't need cookies.
Many third-party services (not just ads and tracking) currently rely on this behavior. That's not trivial to retract.
I have switched to explicit cookie whitelisting, and a browser cache that automatically wipes at application quit. I'm sure I can probably still be fingerprinted somehow, but I hope I have reduced the low-hanging fruit.
Fonts strike me as particular low-hanging fruit, especially when targeting developers (who often like to customize everything - and fonts high on the list).
In the context of a browser, how much does anything else matter if you have installed some particular programming font?
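As a toy illustration of why one rare font matters: everything observable about your configuration gets folded into a fingerprint, and a rare value shrinks your anonymity set far more than a common one. A hypothetical sketch (real fingerprinting uses canvas rendering, enumerated fonts, plugins, and so on, not this toy hash):

```python
import hashlib

def fingerprint(attributes):
    """Hash the observable configuration into a short, stable identifier."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

common = {"fonts": "Arial,Helvetica,Times", "timezone": "UTC+1"}
dev    = {"fonts": "Arial,Helvetica,Times,Iosevka Custom", "timezone": "UTC+1"}

# One rare programming font changes the fingerprint entirely, and because
# few users have it installed, it narrows down "who is this?" far more
# than any common setting does.
print(fingerprint(common) != fingerprint(dev))  # True
```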
Well, I have switched to offline Stack Overflow, Wikipedia and Khan Academy, get all my media from a no-JS torrent site, and have all news sites blocked with Stay Focused. Let's see you beat that :p
How is one supposed to browse these files once un-zipped? I will be honest and admit I am ignorant to XML and had trouble finding a solution after some quick googling. Any help would be appreciated.
Cool thanks for the reply. If you find something better and get a chance, let me know, I'd be very appreciative. I'll take a look this evening and let you know if I find anything.
Because Firefox no longer has the market share it once did and is not in a position to drive web standards like IE, Chrome, and Safari. If Apple decided to kill cookies for secondary requests, they would be dead just like Flash died.
I didn't mention Chrome because Google is an advertising company that benefits from cookie tracking and so I expect them to look out for their own interests.
This really isn't my area of expertise. Can you provide a link to what you're talking about? AFAIK, if you have third-party cookies set to "never", Facebook sees exactly the same (lack of) cookies no matter where I click on a like button outside of Facebook.com itself. So, if Firefox does what you say, I still don't see what it has to do with the third-party cookie option gcp pointed to.
Also, in trying to figure out exactly what feature you're talking about, I've come across quite a lot of sources that suggest that Safari has similar or more strict default settings than Firefox regarding third-party cookies.
It's part of the container tabs concept, but automatically putting each first-level origin (what you see in the URL bar) into its own container. This is atm only an about:config option in Nightly.
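Conceptually, first-party isolation just keys the cookie store by the pair (first-party origin, cookie host) instead of by host alone. A hypothetical Python sketch of the idea (not Firefox's actual implementation):

```python
class IsolatedCookieJar:
    """Cookies stored per (first-party origin, cookie host) pair, so a
    tracker embedded on two different sites sees two different jars."""

    def __init__(self):
        self._jars = {}

    def set(self, first_party, host, name, value):
        self._jars.setdefault((first_party, host), {})[name] = value

    def get(self, first_party, host):
        return self._jars.get((first_party, host), {})

jar = IsolatedCookieJar()
jar.set("news.example", "tracker.example", "uid", "user-on-news")
jar.set("shop.example", "tracker.example", "uid", "user-on-shop")

# The tracker gets a different identity per embedding site, so it cannot
# link the two browsing profiles into one.
print(jar.get("news.example", "tracker.example"))  # {'uid': 'user-on-news'}
print(jar.get("shop.example", "tracker.example"))  # {'uid': 'user-on-shop'}
```

This also shows why single-sign-on gets awkward: the login provider's session cookie lives in the jar of whichever first party you were on when you logged in.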
So how do you use this with systems that need it - can you log into the site for this "container"? Or would it leave the container when you go to the sign-in page? Like if I'm using Google or Facebook for single-sign-on, or I want to comment on a Disqus thread below a news article - since it's in its own container, I'm not logged in on this other site, but if I click a link to log in and it takes me away from this site, it takes me away from this container.
That's actually an interesting issue I had with it, in some versions that actually didn't work. But, somehow, now it does, and I don't know why (haven't read up on it yet).
Yes, the technical solution would be so much better than the annoying "this site uses cookies" notice. As much as I don't trust governments to competently meddle in web standards, it would be nice to see somebody big throw their weight around and protect consumers from this.
We were only able to name one extension (the one named in the presentation), as we did not have conclusive proof that data from other extensions which we found to be suspicious ended up in the data set (as the access to incremental data was limited to a short time period):
We developed a sandboxing framework to test whether Chrome extensions send URL data to a third party using a MITM proxy, the code is available on Github:
In general, you should be careful about any extension that regularly sends data to a third party. You can check this in Chrome by opening the extensions list (chrome://extensions/), checking "Developer Mode" on the top right corner and clicking on "Inspect views: background page" of the extension. You can then open the "Network" tab and see all requests the extension makes while you surf the web.
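The core check such a sandbox can perform is simple to approximate: did the visited URL (raw or percent-encoded) show up in any request the extension made to a third party? A hedged sketch, with domains and the request format invented for illustration:

```python
from urllib.parse import quote

def leaks_visited_url(visited_url, request):
    """Heuristic: does an outgoing request contain the URL the user just
    visited, either verbatim or percent-encoded?

    `request` is a dict with the destination 'url' and the 'body' sent,
    as you might capture with a MITM proxy or the DevTools Network tab.
    """
    payload = request["url"] + request.get("body", "")
    return visited_url in payload or quote(visited_url, safe="") in payload

visited = "https://news.ycombinator.com/item?id=12345"

suspicious = {
    "url": "https://pixel.example-tracker.io/collect",
    "body": "u=" + quote(visited, safe=""),
}
benign = {"url": "https://updates.example.com/version.json", "body": ""}

print(leaks_visited_url(visited, suspicious))  # True
print(leaks_visited_url(visited, benign))      # False
```

Real exfiltration is often compressed, hashed, or batched, so a substring check like this catches only the lazy cases; it's a first filter, not proof.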
i had to DNS-sinkhole requests to pixel.intenta.io that a browser extension was creating; every site I visited was sent along in an ajax request to this domain.
I think it makes a lot of sense to start a register of data providers. I.e. so that if you want to sell user data you have to register as a provider and specify where the data comes from and what it contains.
That'd make it so much easier to critique the possibilities and to legislate further. It'd also allow for independent control of how anonymous the data is, and independent attempts at de-anonymising the data.
I think it's still not well understood by most people just how much can be known about you, and the potential for misuse. I think an initiative like this would go a long way towards bridging that gap and legislating better for it.
No they bought the click-through data and then used social engineering attacks and other sources (YouTube, Google, etc) to connect it to other identities.
Researcher here, `gcp` is correct in that the NDR acquired the clickstream data from a data provider as part of an investigative story (the data was provided as a free sample), the deanonymization was demonstrated by linking individual URLs with publicly available information from various sources (Twitter, Youtube, Google+, ...), though this often wasn't necessary as many users had URLs that contained direct identifiers (e.g. their full name) as part of the URL and that we could use for efficient de-anonymization.
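To illustrate the "direct identifier" case described above: a crude regex scan over the clickstream is often enough on its own, with no cross-referencing needed. The patterns and URLs below are invented for illustration, not the ones used in the actual research:

```python
import re

# Illustrative patterns only: identifiers commonly embedded in URLs.
DIRECT_ID_PATTERNS = [
    re.compile(r"[?&](?:user_?)?id=([^&]+)"),              # ...?id=qznc
    re.compile(r"/(?:profile|user|u)/([A-Za-z0-9_.-]+)"),  # /profile/jane.doe
]

def extract_identifiers(urls):
    """Map candidate identifiers to the URLs they were found in."""
    found = {}
    for url in urls:
        for pattern in DIRECT_ID_PATTERNS:
            match = pattern.search(url)
            if match:
                found.setdefault(match.group(1), []).append(url)
    return found

clickstream = [
    "https://news.ycombinator.com/user?id=qznc",
    "https://example-social.com/profile/jane.doe",
    "https://example-shop.com/cart",
]
print(sorted(extract_identifiers(clickstream)))  # ['jane.doe', 'qznc']
```

Once one URL in a pseudonymous clickstream yields a name, every other URL in that stream is de-anonymized along with it.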
I live in the EU and work in data; there is no requirement to register as a data provider if you're working with customer data. Further, there will not be such a requirement under the GDPR either, to my knowledge.
If you're a "data controller," then you may be required to register with the local DPA (Data Protection Authority). In the UK, for example [1]:
"The Data Protection Act 1998 requires every data controller (eg organisation, sole trader) who is processing personal information to register with the ICO, unless they are exempt. More than 400,000 organisations are currently registered."
Whether this registration achieves anything useful is another question.
I find this reasoning by Prof. Orin Kerr pretty interesting, with respect to whether always collecting the full URLs of users (by ISPs) is actually legal. His argument is that it might not be legally OK to do so, and that there already are restrictions, even with the FCC rescinding the privacy rules:
I've been caught out a couple of times by describing something as trivial (or non-trivial) to people not versed in software-speak. They can either think you are dismissing the whole discussion in some way or just have no idea what you're talking about whatsoever.
I don't think you'll find 'easy' as a definition of trivial in a mainstream dictionary (rather things like 'of little value or importance'). It probably is just computing jargon, though I recall it being used when studying mathematics and meaning 'self-evident' (e.g. a trivial solution).
Some mathematicians do have a definite over-fondness for handwaving away statements they can't be bothered to prove with the "it follows trivially that ...". It's all well and good when such a proof would be 20 minutes of basic algebra, but that's frequently not the case.
One of our lecturers at university was so renowned for this, everyone taking that module organised to write "the answer to this question is trivial, and is left as an exercise to the marker" for any question they couldn't figure out in the exam.
And they got it wrong too. Trivial in the software sense merely means there is a known solution, you just have to implement it. Non-trivial means you need to generate some novel IP first.
As far as I can tell, he's borrowing the mathematics sentiment, where "the proof is trivial" doesn't necessarily mean you'll find it easy - it just means that the interesting aspects of the problem have been dealt with and what's left is a simpler or more general case.
That said... I've never encountered "novel IP" as the actual standard for 'trivial' in mathematics or CS. It would take a very strong source to convince me that's a formal or most-common-use meaning. (The Wiki page on 'trivial (mathematics)' certainly doesn't offer that meaning.)
I'm confused.. the article claims the extensions doing the clickstream gathering are illegal... but that the collected data is 'supposed to' be anonymized? Supposed to by what standard? If they're already breaking the law by gathering the data, why would they bother to anonymize it?
Let's say there are 10,000 subjects and you are interested in 10. You click 100 links for the subjects you are interested in, and also 1,000 links picked at random. Now random subjects get somewhere between 0 and 2 clicks, way less than your real interests.
You could maybe pick 20/30/40 random subjects to fake a prolonged interest in, and really focus on those, maybe swapping some out over time. In any case, I think one has to pump enough fake volume through to make fake interest look like real interest. But then the opposite problem comes up - excessive requests for one topic might signal fakeness. Maybe you could have fake traffic be produced proportionally to real traffic?
Advertisers would still win, though. Instead of showing you ads for 10,000 subjects of which one interests you, they are now showing you ads for 2 subjects of which one interests you. Much better targeting.
And if they want to link your profiles from elsewhere, two interests of which one is fake is still pretty powerful.
I'd imagine that is much worse. It's not random at all; you're basically giving them huge amounts of extra data points. That extension was supposed to be used to mess up click rates for ads, not to stop you from being tracked.
the problem is that humans are really bad at generating random noise. you'd need an automated solution that makes sure you are generating uniformly random data.
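The scenario a few comments up (10 real interests, 1,000 uniformly random decoy clicks across 10,000 subjects) is easy to simulate, and it shows why uniform noise fails: a simple threshold separates the signal. A sketch, with the threshold chosen arbitrarily:

```python
import random
from collections import Counter

random.seed(0)  # deterministic demo

SUBJECTS = list(range(10_000))
REAL_INTERESTS = random.sample(SUBJECTS, 10)

# 10 clicks on each real interest, plus 1,000 uniformly random decoys.
clicks = [s for s in REAL_INTERESTS for _ in range(10)]
clicks += random.choices(SUBJECTS, k=1_000)

counts = Counter(clicks)

# Real interests sit at >= 10 clicks; a random subject expects only ~0.1,
# so almost none of the decoys clear even a low threshold.
recovered = {s for s, c in counts.items() if c >= 5}
print(f"{len(set(REAL_INTERESTS) & recovered)} of 10 real interests recovered")
```

To actually hide, the decoy traffic would have to mimic the *shape* of real interest (repeated visits to the same few subjects over time), which is exactly the "fake prolonged interest" idea above.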
Private browsing mode is your friend. (You can set it as your default, at least on iPhone.) Caveat: You will have to keep confirming cookie usage popups (EU only I guess).
Private browsing does not conceal what you visit from your ISP, and it only partially prevents trackers and beacons from tracking you (browser fingerprinting is an issue the private browsing won't solve).
It also won't save you from nefarious extensions installed in good faith (as mentioned in the presentation).
Using private browsing keeps your local history clean, and prevents existing cookies from being used to track you. That's it. It is there mainly to prevent the letter 'p' typed in the address bar from auto-completing to the more colourful websites just when you want to show your mother-in-law a nice quilt you saw on Pinterest.
To prevent the level of tracking mentioned in the presentation, you should at a minimum use a VPN, private browsing, and trusted anti-tracking extensions such as uBlock Origin and Privacy Badger (which as far as I can tell seem to be in the clear and above board at the moment).
If your threat level warrants it (e.g., a judge in a morally conservative society) you would use Tor or a VPN with multiple exit points chosen at random for each session.
Private browsing mode is helpful in the sense that it disables all extensions by default (Google seems to have understood that they pose a privacy risk), but as others have pointed out it won't protect you from tracking by your ISP or IP-based tracking (if your address is stable over longer time periods).
I recommend using a VPN solution with rotating exit nodes, e.g. Zenmate. This will make it much harder to track you based on your IP address (as many people will share the same exit node address and as it will often even change randomly between requests), and it will keep your ISP from spying on you as well as the only thing they see is the VPN connection.
It might protect you in the sense that the evil browser extensions are disabled, but then why install them at all?
The fact that I visit `https://news.ycombinator.com/user?id=qznc` more often than other user pages reveals something about my identity. That is one attack the researchers used. Private browsing mode is not designed to protect you from URL snooping. Embedded ads can track those URLs as well, and they can do so even in private browsing mode.
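The attack is almost embarrassingly simple to sketch: count visits per user page and guess that the most-visited one belongs to the account owner. The history below is hypothetical:

```python
from collections import Counter

# Hypothetical browsing history; a user's own profile page tends to
# dominate their per-URL visit counts.
history = [
    "https://news.ycombinator.com/user?id=qznc",
    "https://news.ycombinator.com/user?id=qznc",
    "https://news.ycombinator.com/user?id=qznc",
    "https://news.ycombinator.com/user?id=someone_else",
    "https://news.ycombinator.com/item?id=14829253",
]

user_pages = Counter(u for u in history if "/user?id=" in u)
likely_owner, visits = user_pages.most_common(1)[0]
print(likely_owner)  # https://news.ycombinator.com/user?id=qznc
```

Private browsing doesn't change this at all: the URLs still leave the machine, whether to the ISP, an embedded tracker, or a logging extension.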
Private browsing mode protects the client, it's so people with access to your browser can't see what you've been up to.
It doesn't do much in protecting you from servers that are tracking you - especially over time when there is a large number of individual pieces of information that might not reveal anything in isolation but when put together can be quite revealing.
I reject your rehashing of "I've done nothing wrong, so I have nothing to hide." This thread is full of examples of things which many may find innocuous now, but some may also disagree with, and may use as grounds to persecute in the future.
instead of making this (even more) illegal, which would solve very little as it's still apparently trivial to do, they should instead work on making that address book.
that way, when everything is transparent, paedophiles can be caught and we might choose not to vote for somebody with a cocaine addiction.
Those articles either miss, or fail to emphasize enough, the best counterargument: what if you do have something to hide?
What if you're organizing a protest, a new political party, a business to compete with entrenched corporations, are a whistleblower (corporate or government), an investigative journalist (still needed in a surveillance society - those in power know better than to turn surveillance on themselves), or a union organizer?
Those are all activities (most) people would consider good, or at least necessary, and all require some degree of secrecy.
im not saying that because i am a hypocrite who says what he says because its in his best interest. its not. my interests are best served by secrecy. so i might suffer myself. but there are many good people.
im saying that because i think its in the interest of society that we know how naughty everybody is being. because people are being naughty.
of course there is a tradeoff. that example from ww2 is unsettling. For sure that is not my intention, but i doubt that is what would happen. ww2 is a long time ago. hate crime is down a lot. people should not be so pessimistic about modern society.
but some of our leaders are not people you'd introduce to your mother. and we need that kind of stuff to be out there.
> Two German researchers say they have exposed the porn-browsing habits of a judge, a cyber-crime investigation and the drug preferences of a politician.
So how long before we as a society stop making a big deal of things like that? Everyone watches porn, most people enjoy drugs[1].
Why does anyone still care? Why do we make such a hullabaloo if a judge watches porn? Why is it a big deal if a politician smokes some pot to relax? Who cares if a detective drinks a case of beer on the weekend to blow off some steam?
I'm all for privacy and there are things I wouldn't care to expose on the internet (like my home address and exact apartment number)[2], but I promise you future generations will not give a shit about each other's "super secret internet browsing habits". My fav porn site is RedTube, my drug of choice is caffeine, and I hate that ThePirateBay has become hard to find in recent months.
There's a lot of memes out there about deleting your browser history before you die. But honestly who cares? And if you're doing illegal shit, use a burner laptop. They're $100 on Amazon [3]. Don't be dumb.
[1] drugs as in psychoactive substances. Legal drugs like caffeine and alcohol count, as do prescription drugs.
[2] you can probably triangulate those from my YouTube videos if you really want to
I'd point you towards Brendan Eich's tenure as Mozilla CEO for an example of why it matters. He supported Prop 8, a measure that was popular at the time but later became unpopular. 7-8 years later, Prop 8 is widely condemned as homophobic, and his support of it was used to oust him from his job.
This might be a little off topic because I don't think his personal browsing habits are what exposed the information, but I think the principle remains - you don't know what currently "reasonable" or "private" thing you do now will cause you a lot of trouble in the future.
Prop 8 at the time it was announced was widely seen as homophobic. The only people who didn't see it that way were...wait for it...usually homophobic themselves.
Considering Moz.org's trajectory since then, I am of the belief that this was an excuse fomented to oust him for other reasons. Most people will have a hard time agreeing with, or even liking, many of the decision makers at the top. Hyper-focus on celebs & politicians is disingenuous and weaponized by detractors to further their agendas. We are all fallible & human.
*I do not agree w/ Prop 8 or any such discrimination as to how consenting adults choose to live their lives. Nor do I consciously employ xenophobic limits to those who look, speak or act differently from me. If it's to be a "global" community, we must be more inclusive rather than assimilating, IMO.
The point is, it may be accepted now, but in the future it may become cause for a great deal of concern - and that is why we cannot allow ourselves to submit to the heinous evil of state surveillance. This is a de-evolutionary impulse.
Either these things that we all do become illegal and we're all criminals and the government uses it as a form of repression when people need to be put away for whatever reason the regime deems necessary. Or these things remain legal and become socially acceptable because once everyone understands everyone is doing it, we'll collectively stop caring.
Right now we're in this weird transitional period where things are not illegal, but are frowned upon in polite society, and it's first coming out that, you know what, we're actually all weird behind the scenes and there's no such thing as normal. So we have weaponized public outrage.
I'm curious which way we'll go. Hopefully it's not the "we're all criminals" way. That would suck.
There are plenty of things that 1% or 0.1% of people are doing - common enough that widespread discovery would have a large societal impact, but rare enough it could easily remain stigmatised.
I can't imagine that sex workers or adult diaper fetishists or whatever will suddenly find acceptance from their neighbours just because it's all out in the open now.
I'm reminded of the suffering of the Jewish people, who returned to Vienna after a long period of persecution, because they were 'made legal again' .. and yet, only a few years later, were again destroyed as a people and a culture by the change of power.
We must never forget the betrayal that occurs before every human atrocity. Setting ourselves up for betrayal belies a very weak culture.
Yeah and when Captain Kirk kissed Uhura it was a small scandal because it was the first time a black woman and a white man kissed on US TV.[1] Are we expecting those tensions to come back any time soon? Should interracial couples hide their relationships?
Your sci-fi fantasy is a straw man, but I will say that this point:
>expecting those tensions to come back any time soon?
.. has to be answered. The answer is, yes, we must always expect society to regress, and never assume that because we made the world okay, our children will too.
When we open the door for discrimination, hatred and intolerance come pouring through - at first as a trickle and then as a flood. The law that says "all your data belongs to us" today may not be cause for harm; but would you want the next leader, whose policies may be more discriminatory, to have such powers?
The price of freedom is constant alertness. We must always be prepared for the doors of intolerance, hatred and bigotry to open - having total access to all our information is a very, very wide, open door.
>Are we expecting those tensions to come back any time soon?
Sure, why not. For example, in a lot of European cities honor killings (e.g. killing your daughter/sister because she has been with, or even eyed, some man, or your son because he is gay) have been re-introduced in practice after over a century, due to hardcore Islamic immigration.
This is from an official EU report: "In Sweden, the government estimated that up to 2004, 1 500 to 2 000 girls and women had been the victims of 'honour' crimes".
You should be careful with these statistics for three reasons.
1. Their provenance is sketchy. The EU report sources data from private advocacy organizations. That data is in turn extrapolated, premised on the idea that most "honor crimes" are unreported.
2. Their definitions are sketchy. By the report's logic, the difference between an honor crime and domestic violence is cultural intent. Domestic violence is highly prevalent across Europe (and, of course, the United States, where it accounts for a plurality of all those currently incarcerated in major metro jails, despite a microscopic population of people from cultures where "honor crimes" are a thing). And, of course, almost none of these crimes are murder or attempted murder.
3. The goals are sketchy. There's an extent to which reporting on "honor crimes" can be seen as a way to take a universal phenomenon, separate out the Muslims, and pathologize their incidence of that phenomenon.
Certainly, with this report, you are nowhere close to having presented evidence for the extraordinary claim that "hardcore Islamic immigration" has created an epidemic of "honor killings". You should, again, be more careful than this.
We’re getting quite OT here, but since we’re already at it... Why the quotes around the term honor crimes? I do believe there is a credible argument that domestic violence and honour-based violence are not the same thing. It’s not just private advocacy organizations that say this. Some interesting quotes from a report published by the UK government:
2.13. The rationale for HBV [honour-based violence] differs from other related crime types such as domestic abuse in that it usually occurs to preserve perceived social, cultural or religious traditions or norms. HBV can involve multiple perpetrators taking drastic action which may be premeditated.
2.14. The other difference between HBV and other forms of abuse is that there are potentially significant risks to people associated with the victim, for example the victim’s children, siblings and friends as well as members of various authority groups and organisations who seek to assist the victim.
So based on this I would say HBV isn’t exactly the same thing as domestic violence. But of course, this is not only about Muslims. Far from it. HBV is taking place in many different cultures and countries.
2.9. HBV is not linked to any one religion, culture or society. It has been identified as mainly occurring among populations from South Asia. However, it can occur in other cultures and communities, such as African, Middle Eastern, Turkish, Kurdish, Afghan, parts of Europe (including the United Kingdom) American, Australian and Canadian. HBV is not associated with any particular religion or religious practices and has been recorded across a number of faiths, including Christian, Hindu, Jewish, Muslim and Sikh communities.
I don’t know how prevalent these practices are in Sweden/Europe so I have no idea about the reliability of the stats from the EP briefing that was cited by 'coldtea. But I think it’s important that the authorities are aware about the specifics of the issue in order to help the victims in a proper manner. If we just say that it’s all domestic violence we might not be able to help the victims in a relevant way.
>Certainly, with this report, you are nowhere close to having presented evidence for the extraordinary claim that "hardcore Islamic immigration" has created an epidemic of "honor killings".
Notice how the word "epidemic" (and thus the rendering of my claim as "extraordinary") is yours.
I only said that honor killings "have been re-introduced as a practice" in Europe [1] due to fundamentalist Muslim immigrants, which is neither extraordinary nor up for argument.
Seems like the word "sketchy" has also been thrown around a lot aiming to delegitimize any claim that such things can and do happen nowadays in Europe and based of specific, religious and regional based, cultural values.
For all the sketchiness in favor of over-reporting such things that you mention, there's as much underreporting out of fear of looking culturally insensitive/racist/etc, and out of lack of access (of social workers, police, etc) to those communities.
And this is not some fringe right-wing publication I've linked to, this is an official EU briefing report.
And it makes total sense that the difference between "an honor crime and domestic violence is cultural intent", since by definition an honor killing is a cultural act, based on the idea that one's status/standing in the community has been compromised by the behavior of one's daughter, children, etc. Domestic abuse is a more widespread umbrella category. You can have domestic abuse without honor crimes. Or you can practice various kinds of domestic abuse AND honor crimes (which is often the case in families that would consider honor crimes acceptable). Domestic violence is indeed highly prevalent across Europe and the US, but it is also highly prevalent across subcultures that consider "honor crimes" an acceptable practice.
[1] This also shows that I have no problem admitting that those were once a common enough practice among Christian Europeans too. It had been on the wane all through the 20th century, though, before being reignited by newly arrived populations.
Most domestic abusers make some sort of excuse --- the victim is always "asking for it" somehow. Only when that excuse involves some kind of foreign culture do we separate it out of the morass of all domestic violence into a scary new "honor crime" category.
>Only when that excuse involves some kind of foreign culture do we separate it out of the morass of all domestic violence into a scary new "honor crime" category.
There's nothing new about "honor crime". We had that in Europe, by Europeans, for ages, and it was a common enough practice until quite recently (the 50s or so). There are lots of studies about honor crimes and the relevant cultural notions behind them.
As for the cultural connection, it's there by definition. The "honor" part implies a cultural impetus about what's "honorable" and how a daughter/son/wife/etc should behave, which is not the same as the personal motives (e.g. jealousy, feelings of inadequacy, etc) behind general domestic abuse.
In other words, it's a special case of domestic abuse, with its own long history.
And it's by no means invoked only "when that excuse involves some kind of foreign culture". It might seem like that to someone from a place with no major cultural history of "honor" codes and "honor crimes", but in several cultures in Europe, e.g. in Spain, Italy, Greece, etc., honor crimes were a thing (and occasionally pop up still), and we still clearly differentiate them from domestic abuse, have different terms for them etc. In our own country and culture -- no second culture involved.
(Wikipedia: "Honour in the Mediterranean world is a code of conduct, a way of life and an ideal of the social order, which defines the lives, the customs and the values of many of the peoples in the Mediterranean moral") -- as a mediterranean, this is quite accurate.
This is a lot of words, respectfully, none of which respond to my point. I have no trouble believing that Europeans have spent many decades ostracizing people from West Asia.
I know I'm repeating myself but I want to be clear: I'm not apologizing for "honor crimes". I'm suggesting that West Asia and Western Europe share a pattern of violent abuse, but call them different things, and that Europe exploits that to deflect attention to domestic violence by blaming things on the lurid crimes of immigrants.
There are xenophobes that use the term HBV to push their own murky agenda. That’s wrong. But when women from e.g. Kurdistan report about HBV I believe that their stories are worth listening to and taking seriously. If we want to do something about these issues we first need to understand them.
>Right now we're in this weird transitionary period where things are not illegal, but are frowned upon in polite society
That's not just 'right now'. That has been going on for millennia of recorded history.
And any change towards e.g. more transparency can be reverted at any point in the future too (history usually moves in circles, not towards some end goal of progress -- even slavery has been abolished and then re-introduced several times. And racism is a relatively modern (a few centuries) invention. There were black kings with white subjects back in history and nobody batted an eye, but it would be unthinkable in 18th century USA/Europe).
And the reverse: we use these examples to highlight things you might not want to come out now, but what happens when there's a marked shift in the political stance towards e.g. homosexuality/transgender people?
Say an ultra-conservative government was elected on a platform of hate towards people that we're currently moving towards accepting, wouldn't you want it to be hard for someone to find out if you spent a lot of time looking at certain materials relating to those topics?
You don't need to cast it as some outlandish scenario, since there are many countries where being gay (or, to use the example in the OP, watching porn) are illegal and violators are really subject to penalties.
To be clear, my stance is that laws that make it illegal to be homosexual are wrong.
With that said, in your example those people are actively breaking the law and if you're using the internet to break the law today I don't see a lot of problem with the government tracking that back to you. My concern is large scale collection of records of acts which are then turned into crimes/persecuted groups and I think that's what people should primarily concern themselves with when it comes to the present internet privacy debate.
That's kind of circular isn't it? I understand we care because it has real consequences. But why does it have real consequences?
That's what I'm wondering and have been for a while. Where does the outrage come from? We all know we all watch porn. But if it comes out that so and so important figure watches porn, suddenly it's the end of the world? Why? Who decides which instance breeds outrage and which doesn't?
For me, this is why it's such a big deal: I don't think it has to be porn to be used against you. It just needs to be some minor character flaw that breaks with your public image.
And I guess more troubling still, this kind of stuff could be used to blackmail normal people in the same way revenge porn is. What about threatening to reveal your porn habits to friends & family?
Again, doesn't need to be porn. What about your browsing history for medicine, self-help etc? All that stuff could be used to blackmail someone, especially if they're psychologically vulnerable.
> Where does the outrage come from? We all know we all watch porn.
Let's say Politician A runs on a platform of making legislation and supporting other things that make the lives of LGBTQ difficult (or dangerous). He gets totally awesome support, everyone loves him.
Then it is found that Politician A is browsing twink pr0n on his work computer. Or worse, he is blackmailed because of it, becomes corrupted, then is outed for his browsing habits.
You don't think this can have real consequences? What if this was found long before, and he is being coerced into introducing legislation by some more powerful person or group who will expose his (now hypocritical) habits to the public unless he pushes for the legislation that group/person wants?
That's only one example (probably flawed in ways), but people get really bent when a politician says one thing, but is secretly doing another and it is found out - especially when that politician is doing things that oppress others. Some call it self-hate, or projection, or similar - which it may be. Whatever it is, it is justified to feel angry over such things.
What I don't understand is why people keep giving politicians such "benefit of doubt" before electing them - we always seem to regret it in some manner afterward (sometimes only a bit, sometimes a lot - very, very rarely do we see or get someone who is a benefit to everyone).
>That's what I'm wondering and have been for a while. Where does the outrage come from? We all know we all watch porn. But if it comes out that so and so important figure watches porn, suddenly it's the end of the world? Why? Who decides which instance breeds outrage and which doesn't?
Well, ultimately others decide, so it's beyond your and my control.
Others decide because we allow them to. We could make an effort to read less about the private habits of others. "That's private, I'm not clicking it." And yet for all the privacy advocates on HN, there's still a great deal of interest in salacious stories.
This is great. I look forward to everyone's information being made public in the future. It might be embarrassing initially but if everyone else also has embarrassing stuff released about them then it won't be so bad; it will make people more open and encourage us to be honest. Only criminal and highly unethical activity will be negatively affected.
We need to re-calibrate our ideas about people and society to something more realistic. It will probably lower our overall opinion of humanity but at least people will know the truth and behave accordingly. Right now people are idealising certain things and behaving based on false information.
In any case, I think it's unavoidable that all information will be public at some point in the future. It's been heading slowly in that direction since the dawn of civilization. Several hundred years ago, even a figure as powerful and well known as the pope could behave unethically and nobody would find out until hundreds of years later.
Today it's much harder to keep things secret. I think big, embarrassing revelations like the Anthony Weiner scandal should become increasingly common.
> Only criminal and highly unethical activity will be negatively affected.
Because nobody has ever suffered for being outed as LGBT, minority religion or political party, etc.? Losing a job, being beaten or killed go way beyond embarrassment and those outcomes are a given for millions of people.
Many people like their various bigotries – whole religions feed on the social dynamics of saying everyone outside is sinful and trying to corrupt you – and while it'd be nice if revealing secrets led to greater tolerance, I'd bet a lot that the more likely outcome would be rage that they'd been infiltrated, and a purging of anyone who had been keeping secrets.
Yeah but a lot of people who are currently pretending to be straight or even homophobic might actually be closeted themselves or they may have some real dirt on them.
Your boss won't fire you for being gay if it turns out that they themselves are closeted or if they are laundering money, avoiding taxes or doing drugs, etc... Or even if your boss is in fact a very simple perfectly-behaved person, then 90% of employees in the company will have something potentially embarrassing about them and they're not going to fire 90% of the company.
The higher you go up the ranks of society, the more dirt you will find.
And the beauty of it is that people who are higher up the ranks will be uncovered first (due to public demand). The researchers didn't go after just anybody, they specifically looked up a politician and a judge. These are the natural targets; people who are supposed to represent and uphold society's moral ideals.
Hate to say it, but this is a rather naive perspective on human behavior and interaction. Like the GP said, the threat goes far beyond losing a job. For just one example, it is a very sad fact that transgender individuals face significantly higher rates of domestic violence, sexual violence, and stranger assault, especially trans individuals of color. More broadly, maybe it's true that fully outing all (or most, or many) closeted LGBTQ individuals would force faster societal change in the long term, but the short term effects may not be worth it, and anyway, I really don't think that decision should ever be in the hands of anyone other than the individuals themselves.
I deliberately didn't give historical examples, because I think many people dismiss them as "Oh but that was in the past - we know better now, and there's no chance of that repeating." - they feel too distant, almost abstract.
And honestly, I mostly think they'll probably turn out to be correct - for those specific cases, for their countries. What I think will happen is some new group will be targeted - political, social, or ethnic. People that shouted a bit too loud at G20 protests, or condemned US involvement in the middle east too harshly, or published cracks for DRM.
To convince them, you have to give examples of things that are at risk right now. Show them the gun pointed at their head, not the abandoned gunpowder refinery nearby, and tales of how it might start up again.
> And honestly, I mostly think they'll probably turn out to be correct - for those specific cases, for their countries. What I think will happen is some new group will be targeted
The current trend globally is for discrimination against LGBTQ to increase...
That's why I said 'for their countries'. In most of the West, LGBTQ discrimination is decreasing. If you look globally, then all sorts of innocent people are at risk, and the need for privacy is obvious - look at China or Russia. But the people making 'nothing to hide' arguments live in the here and now - sure, gays might need privacy in Russia, but we're not Russia lol
Yeah but if people had known earlier about how widespread homosexuality was - They would have realised how little it means in the context of all the other stuff that people do; it would have been accepted sooner.
"...I look forward to everyone's information being made public in the future..."
One of the problems is that everyone's information will not become public, and that asymmetry MIGHT lead to social power imbalances. A good "for instance" is that using some of the techniques outlined in the article, a person who might have had the discipline to never start using any social media sites would be part of that 5% of people that you could not identify.
Every technique will have holes like that, and it will give some people a level of privacy that others don't have. And that's where even mundane things you would normally not even think about, begin to get affected by the social power imbalances. For instance, a company receives 2 resumes, one from me, one from someone who is "digitally invisible". (For lack of a better term.) The background check they run on me would be a whole lot more detailed than the check they run on the "digitally invisible" person. I guess the answer to that is for me to just not be a douche on the internet.
At any rate, I don't know if I'm being clear, but I hope you can see where techniques like this could expose one person, but not another. It's not really "fair" in the way you described unless everyone is exposed.
It's possible they were led by utopian thinking. For example, I'd love a society where everyone knows everything about everyone but just doesn't care what weird ideas and preferences they have. But I also expect that no society that hasn't been telepathic since its dawn would work as well as the idea suggests ;)