
Just to be clear, my original comment wasn't disputing your point about international enforcement (on which, once again, it seems we agree on the main principle). I was challenging your characterisation of the RTBF as "an individual's ability to censor true information simply because that information happens to be about them".

The original CJEU ruling was pretty clear that, in the court's opinion, there were some circumstances under which restricting the promotion of personal information by search engines, even if that information was true, could be justified. However, the court was also pretty clear that merely disliking the fact that some information about you was being shared was not sufficient grounds for the law to restrict that sharing.

I'm happy to debate more specifically the circumstances where that sort of restriction might be reasonable, but if you believe as a philosophy that freedom of expression and the right to say true things should always outweigh an individual's right to privacy then it's unlikely we're going to agree on much there. I don't personally believe in an absolute right to free speech regardless of the consequences. I do believe that there are many good reasons that privacy is important, and that sometimes it is the more important of the two.

To give one concrete example, I think it is a general good that anyone -- including public figures with influential and highly stressful jobs -- should be able to seek help for mental health problems if and when they need to, without fear of suffering criticism or abuse for doing so. That makes respecting the privacy of those who do seek help important, regardless of the truth of any statement that they are doing so.

To give another example, the evidence appears to be overwhelming that punitive "justice" systems are far less effective at reducing crime and improving general quality of life than systems based on the rehabilitation of offenders. A huge part of that rehabilitation is allowing past offenders an opportunity to move beyond their old situation, to "turn over a new leaf". It's pretty difficult to do that if stories of the chocolate bar you stole from a store aged 18 are still keeping a very different you out of good job opportunities aged 28 or 38 or 48. Again, it might be true, but there is a strong public interest in concealing that fact once it is no longer relevant to understanding your character, which is exactly why concepts like spent convictions and probationary periods exist in many legal systems. In the age of an Internet that never forgets, that concealment becomes a lot harder, but the evidence for its benefits not just for the individual but for society as a whole is still the same.




> I was challenging your characterisation of the RTBF as "an individual's ability to censor true information simply because that information happens to be about them".

I stand by that characterization.

> The original CJEU ruling was pretty clear that, in the court's opinion, there were some circumstances under which restricting the promotion of personal information by search engines, even if that information was true, could be justified.

Yes, I get that. And I recognize that this is a defensible position. I even sympathize with the motivation behind it. But I disagree with it because 1) it's too easily abused and 2) it doesn't work.

> seek help for mental health problems if and when they need to, without fear of suffering criticism or abuse for doing so.

Completely agree, but the answer to that is to not let that information get out in the first place. Once that particular cat is out of the bag, getting rid of the Google listing won't help much, especially if you're a public figure.

Actually, the real answer here is to remove the societal stigma associated with seeking help for mental health issues in the first place. And one way to promote that process is to reveal that respected figures have sought help for mental health problems, just like the cause of gay rights was advanced by revealing that well known and respected people were gay. I'm not suggesting forced outing, just that the ideal situation is for all of this to come out of the closet.

> the chocolate bar you stole from a store aged 18

What about the person you raped when you were 32?


> I stand by that characterization.

Then with due respect, your understanding of RTBF is objectively incorrect. It was never about allowing individuals to censor information about themselves just because they don't want it to be shared. You can read more from the EU itself:

http://ec.europa.eu/justice/data-protection/files/factsheets...

> But I disagree with it because 1) it's too easily abused and 2) it doesn't work.

Is there actually any evidence that it's easily abused, or is that just an assumption?

For that matter, why doesn't it work? It seems clear that promoting information via a search engine will typically give that information much wider reach than, say, publishing it on a blog somewhere or having an article on the web site of someone's local newspaper. If that information is incorrect or misleading, if we accept for the purposes of this discussion that distributing it is therefore undesirable, and given that the search engines themselves report that many links have been removed as a result of RTBF, how can the current situation not be having the intended effect?

> Completely agree, but the answer to that is to not let that information get out in the first place.

Ideally, yes. I just don't think that means you give up on trying to limit harm to an individual, particularly an innocent one, if someone does screw up and let the cat out of the bag.

> Actually, the real answer here is to remove the societal stigma associated with seeking help for mental health issues in the first place.

That's a noble goal, and one I fully support. However, I don't think it's realistic to expect everyone in society to treat everyone else objectively and fairly when faced with potentially negative information about them any time soon; sadly, there is still far too much ignorance and prejudice around. In the meantime, privacy is an effective practical tool for reducing the amount of unfairness that results from that kind of ignorance and prejudice.

Also, as I mentioned in another post, it's impossible to reliably achieve fair outcomes after initially encountering negative information, even with good intentions all round. If nothing else, someone who looks up an individual and initially finds negative information about them won't (and can't) know for sure whether there was also some later positive information that they should take into account as well. Short of reliably data mining and classifying the sum of all human knowledge so that any search that turned up negative data immediately highlighted the overriding positive data as well, there is no way to solve this problem 100% of the time.

> What about the person you raped when you were 32?

When it comes to disclosures and background checks, there are usually some types of offence (typically violent or sexual ones) that are deemed sufficiently serious that convictions for those offences never become spent and will turn up on background searches forever, even if the legal system does consider lesser offences to be irrelevant after a certain period of time. I don't see why the same principles shouldn't apply for the same reasons in RTBF cases, and I would expect EU courts to do exactly that when applying the privacy and data protection rules.

We're in a shades-of-grey area here, and your example was a much darker shade of grey than mine, so it seems reasonable that there might be a different balance between allowing the past offender to move on and helping others to find information that might be relevant to their dealings with that past offender.


> It was never about allowing individuals to censor information about themselves just because they don't want it to be shared.

Maybe that wasn't the intent, but that is the net effect, because the burden of proof is not on the individual to show that the information should be removed but rather on the search engine to show that it should not be. As a matter of simple economics, Google has essentially no choice but to comply with all RTBF requests whether or not they have merit, just as it has to comply with all DMCA takedown requests whether or not they have merit. Google doesn't have the resources (nor should it be expected to provide the resources) to actually adjudicate every case on its merits.
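To put rough numbers on that incentive (a minimal Python sketch; every figure below is made up purely for illustration, not an actual Google cost or volume):

    # Back-of-envelope comparison of two strategies for handling
    # takedown requests. All numbers are hypothetical.
    requests_per_year = 250_000  # assumed request volume
    cost_rubber_stamp = 5        # assumed cost ($) to simply delist a URL
    cost_full_review = 500       # assumed cost ($) of a genuine legal review

    comply_with_all = requests_per_year * cost_rubber_stamp  # $1,250,000
    adjudicate_all = requests_per_year * cost_full_review    # $125,000,000

    print(f"Comply with everything: ${comply_with_all:,}")
    print(f"Adjudicate every case:  ${adjudicate_all:,}")

    # Under these assumptions a genuine review costs 100x more, and the
    # harm from over-removal falls on third parties rather than on the
    # company, so the cheap option dominates.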

> Is there actually any evidence that it's easily abused, or is that just an assumption?

It's a prediction.

> it's impossible to reliably achieve fair outcomes after initially encountering negative information

Why? If you can pass a law forcing Google to remove links, why could you not pass a law forcing news outlets to update stories?

> We're in a shades-of-grey area here

Yes, of course, that's the whole point. What if you were never convicted of the rape even though you were clearly guilty? (Should Michael Slager have a right to force the internet to forget that he actually shot Walter Scott in the back, despite the fact that the jury did not convict him?) What if you were actually innocent even though the evidence against you seemed overwhelming? Who decides which cases fall under the auspices of RTBF? The answer is: you do. Why? Because it's cheaper and less risky for Google to simply comply with all takedown requests than to actually try to adjudicate each request on its merits.


> As a matter of simple economics, Google has essentially no choice but to comply with all RTBF requests whether or not they have merit

From Google's own public statements, they actually reject the majority of requests to remove search results under RTBF:

https://www.google.com/transparencyreport/removals/europepri...

> Google doesn't have the resources (nor should it be expected to provide the resources) to actually adjudicate every case on its merits.

Apparently you're wrong on that one, but even if you weren't, I'm not sure I would have a problem with this. Google has a highly visible and influential system, but I don't see why it should be exempt from the same data protection and privacy rules that everyone else has to follow just because it is inconvenient or expensive to comply with them. Remember, the original RTBF ruling from the CJEU was based on existing general principles of fair data processing under EU law, not some special anti-Google legislation.

> If you can pass a law forcing Google to remove links, why could you not pass a law forcing news outlets to update stories?

There already are laws that can require news outlets to update misleading stories, issue corrections, remove material that infringes others' intellectual property rights, and so on. That scenario would also be covered by much the same data protection rules, as well as by legislation regarding defamation, official secrets, copyright, and the like.

> What if you were never convicted of the rape even though you were clearly guilty?

> What if you were actually innocent even though the evidence against you seemed overwhelming?

We have a legal process to make decisions on these difficult issues, by holding a trial, and a great deal of thinking by a great many smart people has gone into making that system the way it is. If the verdict reached through such a trial is flawed, there are well established legal mechanisms for appealing it or even correcting it later if new evidence proves it was wrong. Obviously the system isn't perfect, nor can it ever be in the absence of telepathy or something. But Google and the Internet aren't even close to big and important enough to overturn an entire legal system.

And yes, sometimes that legal system results in surprising verdicts. Maybe that's because, unlike Mike from down the road who made a five-minute post on his personal blog, the courts actually considered all of the available evidence and followed an impartial process through which an official determination of someone's guilt or lack of guilt was made. But that verdict will be regarded as the final decision for any other purpose, including, say, a defamation lawsuit also brought in a real court. So an argument that someone should be allowed to say whatever they want and use a system like Google to magnify their influence, even though a court has already ruled the opposite, isn't going to be very convincing to me (or just about anyone else), I'm afraid.

> Who decides which cases fall under the auspices of RTBF? The answer is: you do.

Every time you deal with someone else's personal data, you have a legal (and, I would argue, moral) responsibility to do so fairly. If you don't, the law may punish you. I just don't see why the RTBF is different to any other aspect of the relevant law in this respect.


> they actually reject the majority of requests to remove search results under RTBF

Slightly over half. But we have no way of knowing how many of these requests are rejected on merit and how many are rejected because of some technicality (e.g. a problem with the submitted proof of identity).

> I don't see why it should be exempt from the same data protection and privacy rules that everyone else has to follow

It should not. I'm using "Google" here as a proxy for any web service that falls under RTBF.

> We have a legal process to make decisions on these difficult issues

Yes, of course, but this isn't about that. This is about controlling information. You brought up stealing a chocolate bar as an example of something that should be forgotten, and I countered with rape as an example of something that perhaps should not. The problem is: who gets to decide where the line between petty theft and rape gets drawn with respect to the RTBF?


> The problem is: who gets to decide where the line between petty theft and rape gets drawn with respect to the RTBF?

Ultimately, the legislature when it comes to setting general principles, and then the courts when it comes to interpreting those principles in specific cases, just as with any other decision of this kind.


What I meant was: who decides in individual cases? Obviously neither the legislature nor the courts are going to do that. Companies are going to do it, and they are going to do it in a way that minimizes their cost and legal exposure. And that means, under the current law, if they are going to honor their fiduciary responsibilities to their shareholders, they are going to err on the side of censorship. (Note that Google is in the unusual position of not actually being beholden to its shareholders. Because of the way the company is structured, Larry and Sergey have absolute control, just as Zuck has absolute control over FB. But those are extremely rare situations for public companies. In any case, I do not want a world where freedom of information depends on the benevolence of a handful of oligarchs.)



