Hacker News
Reflecting on the Right to be Forgotten (blog.google)
128 points by DiabloD3 on Dec 10, 2016 | 213 comments



I'm guessing very few, if any, people here have ever felt a need to be forgotten. There was a time only a couple short decades ago when a person who made mistakes could do the work to make things right, make themselves a better person, and move on with their life. Now that's extremely difficult if not impossible. Mistakes are newsworthy and get indexed; redemption isn't and doesn't. That's not to say that everyone who makes mistakes redeems themselves. But those who do, nowadays, still have to enter every new job, every new relationship, with the dark past just a few keystrokes away.

I would like to stress that this is a very new development. As such, I think it merits discussion beyond a simple black-and-white free speech analysis. I would urge you all to look beyond abstract principles and consider also the human impact that these policies have on actual people's lives. Try to imagine, if you can, what it's like for a person who has done the hard work to right their wrongs after landing in the news. Don't they deserve to be allowed to shut the door on their past when it's no longer who they are?

I also think it's important to note that we (in the U.S.) do already recognize such a right when it comes to criminal records, through expungement proceedings. We routinely remove convictions from people's criminal records. I don't think any disagreement with this "right to be forgotten" is complete if it can't be reconciled with that.


I don't think it's practical to expect to be forgotten. Archives will save webpages, and mistakes are interesting to science.

As tech continues to improve, we will increasingly face a world where there are no secrets. I don't see this as preventable. Everyone has a camera in their pocket, and soon it might be on their face a la Google Glass or Microsoft HoloLens.

Instead we have to learn to live with it. I think that every single person in this thread can think of a dozen things they wouldn't want made public. For each they fear something between judgement and prosecution. The ability to hide those things is disappearing though.

My thought is that a cultural shift towards leniency and forgiveness is going to be by far an easier path than enforced privacy. Not that shifting culture is an easy path. But I don't see fighting technology as a battle that can be won.

Cameras will be everywhere, streams will be accessible to everyone, and it's all going to be archived and remembered. And not just physical life, but digital life as well.


"As tech continues to improve, we will increasingly face a world where there are no secrets. I don't see this as preventable"

It's absolutely preventable, we just have to make the choice to do that.

Of course - there are 'some bits of data' that will get persisted here and there, but by enlarge, there are ways to do this:

1) Policy 2) Common ethics 3) Regulation 4) Architecture 5) Law

If it became 'policy' at most companies to address this issue, there are a number of things they could do.

If there was awareness about this - and it was framed as a moral issue - large swaths of organizations might simply comply.

Identity/Software/Systems architecture may be able to enable this - i.e. being able to make online transactions without actually identifying oneself.

Responsible regulations and responsible law could urge companies to do this.

Obviously, it will never fully be thus - but there are a lot of things that we could do.

Starting with: any account you open, you should be able to close and have all relevant information erased, unless there is some special requirement otherwise.

This is law in some places and it's not entirely unreasonable.

If we collectively made the choice we could make the internet 'mostly' a private experience for people.


I think your examples are muddying up the discussion. Your 2 examples of:

>Identity/Software/Systems architecture may be able to enable this - i.e. being able to make online transactions without actually identifying oneself.

>Starting with: any account you open, you should be able to close and have all relevant information erased

... are not what the OP (danaliv) you're replying to was talking about, nor what the Right to Be Forgotten[1] is conceptually about.

Your examples are about transactions getting erased. E.g. if a woman creates a Facebook account and posts some selfies, she should later be able to delete that social account and Facebook would erase the associated photos. It would not do further data mining on her profile.

RTBF is about erasing negative or embarrassing information. E.g. President George W Bush was arrested for drunk driving in 1976. RTBF would require Google, Bing, the New York Times, the Wayback Machine, etc. to remove that information so people couldn't find it. Bush's arrest isn't a private transaction with a company -- it was a publicized incident that RTBF wants to erase. Would the (ideal) compliance of the law also require citizens who wrote personal emails such as "I won't be voting for Bush because of his drunk driving arrest" to delete that from their email archives? What about discussion forums (Usenet/reddit/HN/blogs/etc) where people talked about his arrest?

Erasing the Facebook account is a trivial implementation. In contrast, erasing Bush's arrest from the public mind is only solvable for centralized institutions such as the Google search index or The New York Times archives. All the decentralized nodes of information will not comply.

[1] https://en.wikipedia.org/wiki/Right_to_be_forgotten


> It's absolutely preventable, we just have to make the choice to do that.

Hear, hear!

> but by enlarge

It's "by and large". https://www.merriam-webster.com/dictionary/by%20and%20large


> ... any account you open, you should be able to close and have all relevant information erased, unless there is some special requirement otherwise.

> This is law in some places and it's not entirely unreasonable.

It's absolutely unreasonable (e.g.) for HN. And this is just one form of information processing.


Regardless, it's the way things are done in at least one country[1]. This only covers "personally identifiable information" though. This paragraph especially is noteworthy:

>(8) Disagreement with processing pursuant to paragraph 6(c) must be expressed by the data subject in writing. The controller shall be obliged to notify each controller to whom he has transferred the name, surname and address of the data subject of the fact that the data subject has expressed disagreement with the processing.

What this means is that if telemarketers call you, you just say that you require them to erase your data, and to notify everyone they gave the data to so that it's erased too. And you never hear from any telemarketers again. (At least in my experience; it says the disagreement has to be in writing, but they probably err on the side of caution.)
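To make the mechanics concrete, here is a toy Python sketch of that cascading notification. The `Controller` class and all the names in it are purely illustrative assumptions, not anything defined by the actual statute:

```python
class Controller:
    """Toy model of a data controller (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.records = {}       # data subject -> contact details
        self.shared_with = []   # controllers we transferred data to

    def share(self, subject, downstream):
        # Transferring data creates the notification obligation.
        downstream.records[subject] = self.records[subject]
        self.shared_with.append(downstream)

    def erase(self, subject):
        # Paragraph (8): on a written objection, drop the data and
        # notify every controller we passed it to, who must do the same.
        self.records.pop(subject, None)
        for downstream in self.shared_with:
            downstream.erase(subject)

broker = Controller("list broker")
caller_a = Controller("telemarketer A")
caller_b = Controller("telemarketer B")

broker.records["J. Novak"] = "+420 555 0100"
broker.share("J. Novak", caller_a)
broker.share("J. Novak", caller_b)

broker.erase("J. Novak")  # one objection reaches the whole transfer chain
assert all("J. Novak" not in c.records for c in (broker, caller_a, caller_b))
```

A real implementation would have to handle cycles in the transfer graph and keep an audit trail, but the point stands: a single written objection at the top propagates down to everyone the data was given to.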

1 - https://www.uoou.cz/en/vismo/zobraz_dok.asp?id_ktg=1107


I don't agree at all.

It's neither unreasonable nor entirely impractical from a mechanical perspective once it's thought through a little bit.

Because comments are made here effectively anonymously, I don't see the issue, but it's not at all unreasonable that comments could be removed given a mandate.


Giving every HN user the power to delete all his comments would make HN unreadable. To understand a comment, access to all ancestor comments is needed.


I don't know, I often encounter [flagged] or whatever it is that happens to bad comments in the middle of a thread. It leaves me wondering what was actually said.

So it happens already, just for different reasons...


You can turn on [showdead] in your profile to see flagged comments.


Ah, I hadn't realised that!


There shouldn't be a need to delete one's comments to achieve a right to be forgotten, only to remove the identifier of the user who made them.
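As a toy sketch of what that could look like (a hypothetical data model, not how any real forum works; a real site would also need to scrub identifying details inside the comment text itself):

```python
import secrets

def forget_user(comments, user_id):
    """Replace a user's identifier with a random pseudonym, leaving the
    comment text and thread structure intact so replies still make sense."""
    pseudonym = "anon-" + secrets.token_hex(4)  # unlinkable random handle
    for comment in comments:
        if comment["author"] == user_id:
            comment["author"] = pseudonym
    return pseudonym

thread = [
    {"id": 1, "parent": None, "author": "alice", "text": "Original point"},
    {"id": 2, "parent": 1,    "author": "bob",   "text": "A reply"},
    {"id": 3, "parent": 2,    "author": "alice", "text": "A follow-up"},
]
forget_user(thread, "alice")
# The conversation is still readable; the identity is gone.
assert all(c["author"] != "alice" for c in thread)
```

Readers downstream in the thread lose nothing, while searches for the user's name no longer surface their old comments.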


"Giving every HN user power to delete all his comments will make HN unreadable."

No, I think it would be so rare it would not make a difference.

For that 'small intrusion' on your HN experience, you get a hefty dose of privacy and security protection.

I would gladly agree if HN allowed people to 'delete all comments'.


What scientific value does a drunk post on your ex's Facebook wall after a bad breakup exactly have?

What value does some hot mic slip of the tongue have?

A shift towards forgiveness isn't something you want to strive for; it can create a very bad society where there are virtually no social norms or regard for other human beings, since everything has to be forgiven.


> What scientific value does a drunk post on your ex's Facebook wall after a bad breakup exactly have?

More than a historic record consisting only of favorable "truths" worded as carefully as an obituary, because everything else has been erased in a race to spotlessness.


Yes, because when contemplating a person's life, what is important is what they wrote on the Facebook wall of their former partner in their 20s....

You do understand that "whitewashing" obituaries isn't for the dead; it's for the living. When a person dies, most socially adjusted people would rather remember the good than the bad. It's not to adjust history in anyone's favor; it's to be able to grieve in peace.


>> "My thought is that a cultural shift towards leniency and forgiveness is going to be by far an easier path than enforced privacy."

Things seem to be shifting the other way to me. The internet has given people the ability to enact vigilante justice easier than ever before.


I basically agree with you.

> I think that every single person in this thread can think of a dozen things they wouldn't want made public.

While privacy is definitely going away as we watch, secrecy is alive and well. For both private and secret data, an unrelated person would have difficulty uncovering the information, but privacy is a public good (people who have no inherent interest in hiding the data have access to it), whereas secrecy is a private good.

Those who truly value keeping information about themselves non-public will have to work to keep or make it secret.


What about the rights of the person wronged by someone else to remember, and to tell their story?

Someone who committed a sexual assault, for example, might have served their time and feel like they have changed and deserve a new start. They might want the record of their assault expunged from the Internet.

What about the victim, though? Are they not allowed to keep telling their story, and to tell people who assaulted them? Are they not allowed to warn someone that the person they are dating has assaulted someone in the past? Does the new dating partner not have a right to be warned that their new fling has assaulted people in the past?

I don't think there is a way to enforce the right to be forgotten while preserving the right to tell your story to whomever you please.

If you do something bad that you wish would be forgotten, it isn't up to you to decide when other people have to forgive you and move on. It is other people's right to remember as long as they want.

You say you should try to imagine what it would be like to do the hard work to right your wrongs, but you also need to imagine what it would be like to be wronged and then be told that the person who wronged you has the right not to have the wrong known.


Don't you think that the person who committed the crime and served the sentence has had enough of a punishment and deserves to lead a normal life? How long should one carry that on their back? At the end of the day, both the wronged and the person who wronged them should move on. There should be something called "fair punishment". Otherwise some people who were wronged will never be satisfied by anything other than the death sentence.


You're dodging the point. Does the victim have a right to remember and tell people about their experience, or do they not? What they "should" do is irrelevant to a question of law.


It depends. `tell people about their experience` ranges from talking about it with a counselor, friend or family to continuous harassment of the condemned-but-now-reformed's colleagues and friends. Which is starting to look like revenge and has nothing to do with the right to remember.

So, what do you really mean by `tell people about their experience` ?


No, I am not. Victim's right to remember doesn't have anything to do with the person's right to be forgotten unless they are in the same social circle. The question is not to erase memories out of people's mind but whether to continue creating those memories in others.


"creating memories in others" is what's commonly known as speech. Do victims have this right or not?


Of course, no one is taking away their right to speech. But all those news articles that came out when the culprit was being investigated need not be there to always remind them of their past.


So if I am starting to date someone, I don't have a right to know they were convicted of sexual assault ten years ago?


Correct.

Society has dealt out its punishment to the perpetrator and moved on, it owes you nothing.


I think the judicial system ought to be free to expunge its records when they have outlived their usefulness in the judicial system. However, a mechanism to force another to expunge its records because a third-party objects seems to me an extreme position.


Just because society has dealt out its punishment doesn't mean the perpetrator has had their history erased. Society might not owe me something, but it also doesn't have the right to censor information from me that might be relevant to my safety.


They are in the same social circle: It's called the Internet.


I don't think it is up to us to decide how long someone should go before "moving on." That is deeply personal, and any length of time we set is going to be too short for some people.

Obviously, we set a punishment that is fair in terms of what society gives (jail sentence, fines, etc). But what is fair in terms of the sentiment your peers feel towards you? That is up to each person to decide on their own. You don't get to tell someone they have to forgive someone else, or that they have to move on. They are allowed to hold a grudge for life if they feel it is justified.

Obviously, allowing them to hold a grudge doesn't mean they get carte blanche to harass the person. If they send threats, stalk, harass, or tell lies about the person, they should be held liable and stopped. But if they want to tell TRUTHS about what the person did, and explain to others why they don't think they are worthy of forgiveness, they have that right.

Just imagine the different possibilities. Say a drunk driver kills your child; the law might only send him to jail for a few years, but I have every right to not forgive him for life. If I run into him in the store, I have a right to tell the people around me that he killed my kid. I have a right to write a blog post about why he shouldn't be forgiven. No one has the right to tell me to 'move on'.


> I don't think it is up to us to decide how long someone should go before "moving on." That is deeply personal, and any length of time we set is going to be too short for some people.

The question of what a person's internal emotional state should be is a wholly separate topic from removing online records of an event.

It's like juvenile crimes. The event still happened. People will still remember, and no one will stop them from talking publicly about the topic, but past the date of the criminal's 18th birthday the public records are simply sealed.

RTBF, as it is currently implemented, is extremely similar. The past events still occurred, people can still talk about it publicly, but mentions of them are excluded from public search engines.


Removing online records from public searches is troublesome to me.

From our example, if our accident victim writes a blog post about the crime and why they haven't forgiven them, would that have to be removed from public search engines? Let's say the incident in question was some corruption charge (like, say, police officers taking bribes): would articles that talk about that corruption have to be purged or censored? If so, that means that we would lose the ability to learn from that history and change our practices (for example, when people say 'no need to worry about bribe taking, Officer Joe Smith is such a nice guy!', we couldn't then read up on the time Officer John Doe, who was also a very nice guy, took bribes).

I really think any attempt to censor or whitewash the past is doomed to be abused and cause problems. Facts need to be able to be shared, no matter what harm they may cause. Hiding the truth is a dangerous precedent to set.


See, going about it rationally makes sense if you ignore the human impact of such decisions. If that officer already got punished for his crime and learnt his lesson, why do you want to hold it over his head forever? We want people to learn and we want to give them a second chance.


We also want to be able to protect ourselves from people who are dangerous; how do you know the person has 'learnt his lesson', and is no longer dangerous? Yes, we should give people a chance at redemption, but we need to do it with eyes wide open, being vigilant to protect ourselves from the person doing it again.


Sounds like you're saying they shouldn't be forgiven. And the right to be forgotten applies to Google search, not criminal records.


I will make a second comment, highlighting what I understand the core argument of the article to be: each country should be allowed to decide on its own what tradeoffs it makes when it comes to censoring information. (The right to be forgotten is censorship.)

France should be free to enforce that right, and America should be free to reject such censorship. France should not have the power to force America to censor their citizens because France has decided that data infringes upon a person's rights. Substitute for any two or more countries you desire.


These two requirements, that France should enforce that right and that America should not, cannot coexist in a space such as the internet: either the info is available somewhere and everyone can access it with minimal effort, or it isn't and no one can access it.

A very similar process is the US's IRS asking foreign banks to disclose the activity of US citizens, and strong-arming foreign countries to enforce this on their banks. Does it look like the US is making a jurisdictional overreach? Yes. Do they have another way to make sure their domestic law is applied? No.


Maybe I'm confused but France couldn't make America do anything. The question is, can France tell a company "you can't operate here if you're going to do X in Y country" which it already can generally, just maybe it's a good idea/not a good idea in this case.


But then there is the problem that "operating here" generally means "you have a legal entity in our jurisdiction", the point being "we'll throw the executives of your legal entity in our country to jail if you don't comply".

However, that is a very different thing from French people being able to access some Internet service outside France. It's even a different thing from being able to restrict cash flow to an Internet service that you don't like, particularly in the context of EU which intends to ensure free movement of capital, goods and employees within the union.

The only way for France to enforce access restrictions would be to build a Great Firewall of France, similar to other more authoritarian countries, and do they want to wreck the EU to do it?

(Yes, France appears just a little bit authoritarian in my eyes.)


(Yes, France appears just a little bit authoritarian in my eyes.)

France has been in a continuous state of emergency for more than a year, and there is talk of extending it (I think this would be the fourth time) for another six months to cover the election period next year as well.

During such a state of emergency, the executive gains considerable extra powers, with provisions made for measures such as curfews, house arrests, the prohibition of public gatherings, censorship of the press, and searches and seizures without the usual judicial oversight.

So yes, I think it's reasonable to suggest that France is an authoritarian state at the moment.


Correct, the state of emergency will be extended: http://en.rfi.fr/france/20161116-france-extend-state-emergen...

However, IMO, on the scale of authoritarianism, France still does not score very high - it's not like Turkey, not to mention Russia or Arab countries. But its advocacy of the right to be forgotten is another worrying sign here.


As far as authoritarianism goes, hopefully no-one would seriously claim that France or, say, the UK or US, is a similar living environment in practice to somewhere like various Arab states or Russia. However, it is still worrying that so many legal principles and government powers are being set up in disturbingly similar ways in the West lately, even if the current administrations are only using those powers in a small number of cases. There's definitely an element of the general population who aren't in certain demographic groups assuming that "it will never happen to me" and so condoning official powers or behaviour towards other people that they would never accept if they thought there was a significant chance that they or their own loved ones would be on the receiving end. I don't believe that is healthy for important issues like civil rights and government accountability.

As for the right to be forgotten itself, I personally have no problem with the basic principle that the CJEU seems to have established in the original case, and as it's an EU wide issue, I'm not sure how France's advocacy is any sort of threat to any cross-border provisions that come along with the EU. I don't see why France can or should be able to compel people in other jurisdictions to follow the same rules, absent some proper agreement with the other jurisdiction that those same rules will apply there, though.

Unfortunately, the logical conclusion if everyone sticks to their positions would be that the EU really might start trying to block traffic to and from sites hosted elsewhere that don't respect the same principles when it comes to issues like privacy. If you think about it, it's actually quite remarkable that the Internet has remained as open as it has for so long, but that same openness has also created a kind of lawlessness online that increasingly has negative and sometimes serious consequences in the real world, and sooner or later something has to give. Personally, I hope it won't come to anything as dramatic as a Great Firewall of Europe or the like, but it's a fine line that a large, international organisation like Google is being asked to walk here.


But what about my Right to Remember?

This is the world we now live in. We have to get used to it. It's a disadvantage, but a disadvantage faced by everyone.


This is the world we now live in. We have to get used to it.

Why? The problem here is a little bit someone's right to remember (which is in no way undermined by the CJEU ruling) and a lot the tools that artificially help everyone to find information they never knew in the first place.

Its a disadvantage but a disadvantage faced by everyone.

We're all subject to police powers and the criminal justice system, too, but statistically speaking it seems some of us are more subject to them than others.

The press are equally free to report on all of us and under the same professional obligations to print retractions if they make mistakes, but it turns out that the retractions tend to be more prominent when it's a celebrity with the resources to bring a multi-million defamation lawsuit than when it was the local school teacher whose name was coincidentally the same as a convicted paedophile in last week's front page report where they included the teacher's photo by mistake.


> ... a lot the tools that artificially help everyone to find information they never knew in the first place.

If the information is already legally public, there is no case. If it's illegally obtained, then there already exist laws that cover it.

> ... statistically speaking it seems some of us are more subject to them than others.

But Google IS displaying info indiscriminately; what RTBF is doing is distorting that perfection.


If the information is already legally public, there is no case.

The original "right to be forgotten" case heard by the CJEU was essentially that information was already out there but was no longer fairly representative of the individual concerned and that this resulted in harm to that individual.

But Google IS displaying info indiscriminately; what RTBF is doing is distorting that perfection.

But Google is not the whole picture here.

This issue primarily affects people who have incorrect or misleading information about them floating around on the Internet. That situation may or may not have been their fault in any way, but it nevertheless makes them much more vulnerable in this sense.

The operation of a site like Google, if it perpetuates such information without ensuring its appropriateness first, then dramatically increases the resulting vulnerability, which the individual still may or may not actually deserve.

In other contexts, if you repeat a harmful lie about someone, you are as guilty of defamation as whoever told you that lie in the first place. I see no compelling argument that the same should not apply to online search tools, just because it's inconvenient for their business model of automating everything with minimal human interactions.


IANAL, but, for comparison, in Spain the right to privacy may be superseded if there are good reasons of public interest. For example, the daily stories of the average Joe’s life are covered by the right to privacy, but the same details for a major politician might be not, in the proper circumstances. It’s not always a black and white issue, but something to be decided at court.

Regarding the article, I’m not familiar enough with the "Right to be Forgotten" directive to question the choosing of the examples opening the text, but to me it seems biased. These examples are arguably the kind of information which couldn’t be legally hidden because of public interest.


The fact that the disadvantage is faced by everyone doesn't really have anything to do with whether or not we should accept a standard. Saying "this is how the world is" is obviously also not valid...

Keep in mind that the disadvantage is faced by everyone, but those who are ACTUALLY impacted are often devastated.


  Saying "this is how the world is" is obviously also not valid...
It depends on what you mean by "how the world is". If you're talking about some mutable aspect, then it's invalid. But if, as in this case, you're talking about the fundamental dynamics, then you'd best get used to it. If you fight reality, you're going to lose.


Never understood this.

You aren't who you were in the past and it's in your hands to evaluate who you have been.

If I meet a former neo nazi, who reflected on their doings and changed their mind, I have much more respect for them than for someone who did always good stuff and never did any introspection...


What if that neo-nazi beat up a Jewish guy and left him with brain damage? Sure, they might have reformed and changed their ways, but the victim is still suffering. Your past doesn't just go away when you reform.


Then he was in prison and has been resocialized. That's what prison is for, and when your prison sentence is over, your dues have been paid; history is history.


>You aren't who you were in the past and it's in your hands to evaluate who you have been.

It's also in my hands and the hands of everyone else. If someone's done terrible things in the past he's the sort of person who's capable of doing terrible things, and I'd rather know when that's the case.


> It's also in my hands and the hands of everyone else. If someone's done terrible things in the past he's the sort of person who's capable of doing terrible things

Everyone is capable of doing terrible things. The idea that some people are capable and others are not is naive at best. The assumption that someone who has done terrible things is more likely to do them again than someone who didn't do terrible things (or wasn't caught doing them) has no basis in reality. It is simply pandering to the lizard part of our brains: "you must fear the evil ... fear the evil .."


I don't believe this is true. Most people have enough moral fortitude that there are lines they will not cross except to prevent a greater evil.


"Neo Nazi changes their mind" doesn't exactly make news headlines, so you would be unlikely to find out. IMHO, this asymmetry is what RTBF helps to balance.


You can be arrested without probable cause and still not be able to expunge the record at all, as in not even be able to request so in court, so I wouldn't exactly call it a right.


I don't think your "a couple of short decades ago" holds; Merle Haggard was singing about it in the 60s:

http://www.azlyrics.com/lyrics/merlehaggard/brandedman.html


Except that if you look at most of the cases of this state-ordered censorship, for example in France (calling it a "right" puts it at the same level as human rights, and it's not), it's mostly politicians trying to erase a bad past. Don't be fooled.


Do you have a source for that? Or are those only the cases you heard about?


https://www.egaliteetreconciliation.fr/66-droits-a-l-oubli-d...

It's in French, but 224 French politicians asked to have content about them removed thanks to the "Right to be forgotten".


How many other people?


I'm ambivalent about RTBF when it's confined to a particular country - I can see arguments in favor and against. But I think having one country's laws on free speech extend out of that country is a massive bridge too far. Anybody advocating for this needs to consider reciprocity - will they accept other countries laws regulating what they can see, publish, read and hear?

But even with that aside, I don't think having the laws spread out of an individual country reflects the spirit and intent of RTBF laws. The essential problem is that people's privacy is violated because it suddenly becomes trivial to find out negative, inaccurate and out of context information incidentally. That is, you find out by accident, through casual browsing and not because you specifically desired to research negative information about that person. It has always been possible to find out about people's past if you put in specific effort. Doing that is an essential activity for all kinds of reasons. Going out of your way to use a VPN or a dedicated other-country search site demonstrates an explicit attempt to research a person beyond mere casual informational purposes.

So I think extending these laws out of their country of origin not only is a terrible idea for free speech, but it doesn't even reflect the spirit of the laws themselves.


I agree with your opinion that it's ludicrous to extend RTBF beyond a country's borders but I feel that even RTBF within a country or collection of countries could be detrimental. If you do something newsworthy enough to make it into a newspaper, I can go find out what you did (unless RTBF also requires printed papers to go redact your information from each copy). Isn't this collective knowledge part of what keeps society running? And doesn't this provide incentives to behave (to those who are thinking about misbehaving)?


If you're thinking along those lines, you might consider that reputable newspapers (and TV news shows) do issue retractions when earlier reports have been incorrect or misleading. There is also some debate about how certain -- perhaps less reputable -- newspapers will print a big front page article that is wrong, and then issue a tiny retraction on page 17 a few weeks later. The concern is very similar to the arguments for RTBF online: disproportionate coverage can leave the wrong impression with readers. Also similar to RTBF, there is talk of stronger regulation, such as requiring retractions to be printed or announced as prominently as the original story.


I don't disagree with you ... if someone publishes incorrect information they should own up to it and issue a retraction. And it makes complete sense to make sure the original incorrect information is at least deprioritized.


> If you do something newsworthy enough to make it into a newspaper, I can go find out what you did

The goal of the law is to let you remove your drunk picture from 5 years ago from search engine results so a prospective employer will not discard you for it. Cyber-bullying is another scenario where you may want to protect the victim and remove attackers' posts/tweets/... from the search engine. Or maybe I said 10 years ago that I hate Java and have repented since then, and I just want it to be forgotten so my right to privacy is respected. You have the right to choose what you share online, but once it is in the wild, one of the most powerful companies in the world - trying to provide a good service - is going to keep it forever. This is something new in a society that - not so long ago - allowed people to move to a new city and start their lives from scratch.

From what I understand, removing search results for a newspaper article is an abuse of the law, even though it is going to happen, as I guess that case is not guarded against well enough by RTBF. And for me that's the most important part. This new law should ensure that newsworthy events are not forgotten. It should protect news, while also protecting the privacy of things that you - or others on your behalf - have shared in the past.

It is tricky, but I believe we live in a complex world of grays, not just black-and-white situations.


Collective knowledge of important, relevant things. But that is a small, constantly changing set of theories and knowledge.

Inability to stop focusing on irrelevant details (i.e., too much information intake) or inability to move on from no-longer-accurate information (i.e., too good a memory) are both heavily involved with mental illness and psychiatric disorders -- and I think they are causing society-level mental illness now.


Let us not lose sight of this: the enforcement of the "right to be forgotten" is a form of censorship. This is not about enforcement of copyright, or defamation, it's about an individual's ability to censor true information simply because that information happens to be about them. And it can only be effective if enforced globally. If the EU succeeds in enforcing the RTBF globally it would set a catastrophic precedent for freedom of speech.


Let us not lose sight of this: the enforcement of the "right to be forgotten" is a form of censorship.

Censorship is not the suppression of any information, it's the suppression of some types of information by some types of actors [1].

If someone gets dox'ed and their private address gets exposed to the internet, asking for the information to be removed is not censorship, but protecting private information. Having revenge porn removed is also not a form of censorship, but protection of the individual. It is very important that privacy is protected, especially because spreading (mis)information can destroy someone's life.

Of course, the right to be forgotten can be misused. That's why it should be carefully encoded in law under which circumstances the right can be applied.

[1] https://en.oxforddictionaries.com/definition/censorship


> it's the suppression of some types of information by some types of actors

Yes, that's why I specifically cited copyright and defamation as examples of suppression of information that is not censorship. But forced redactions under the RTBF are clearly censorship.


"The enforcement of the "right to be forgotten" is a form of censorship"

This is a false representation of the issue.

Your 'address' may very well be 'true' information - but people don't have the right to publish it.

Your 'private notes stored at Google Docs' are 'true information' - does Google or anyone else have the right to make it public?

No - we live in a new world, we need new rules, new laws - 'the right to be forgotten' in 99% of cases is completely benign and reasonable.

99% of the time, when entities don't want to 'forget', it's simply for economic reasons: they want the data to mine - that's it.

This is 99% a 'consumer vs. massive company' issue - and far less an issue of 'right to access information'.


> Your 'address' may very well be 'true' information - but people don't have the right to publish it.

Of course they do. For anyone who has ever made a political contribution in the U.S., their address is public record. I have to pay the phone company extra to not publish my address and phone number.

> Your 'private notes stored at Google Docs' are 'true information' - does Google or anyone else have the right to make it public?

No, because I hold the copyright on my private notes.

> 'the right to be forgotten' in 99% of cases is completely benign and reasonable.

I'm actually not disputing that. What I am disputing is a country's right to enforce its laws outside its own borders.


"Of course they do. For anyone who has ever made a political contribution in the U.S., their address is public record."

No, they don't. That is a specific case, wherein someone could arguably publish data, because it's public.

It would be trivial to change that law - i.e. you cannot know the 'home address' of the contributor, or, you cannot publish it.

In which case we're back to the normal scenario of 'your private info is not public info'.

"No, because I hold the copyright on my private notes."

FYI - you do not hold the 'copyright' on your Google notes - you have a 'user agreement' with Google, which is probably complex, and beyond the understanding of anyone who is not a lawyer, and may allow them to do all sorts of things.

But you validate my case: as with 'Google notes', there's no reason that standard information could not be controlled in a similar manner.

If you want to 'delete' your Google notes forever, you probably could, and Google would probably not keep a copy. They can do the same for other things.


> No, they don't. That is a specific case, wherein someone could arguably publish data, because it's public.

That is my point. Much as you may wish it to be otherwise, your home address is not private information. Quite apart from the fact that you are required to publicly disclose your home address under certain circumstances, anyone who knows your home address is free to publish it if they want to and there's nothing you can do about it. Not even RTBF changes that.

> It would be trivial to change that law

Yes, of course, but so what? Unless and until the law is changed, it is what it is.

> you have a 'user agreement' with Google

I may have granted Google a license to use my copyrighted material. I may have even signed my rights over to them. I don't know, because I don't use Google Notes, so I haven't read the T's&C's. But none of that matters, because the initial state for any content I create is that I hold the copyright, and so if I want to control its distribution I can. There's no need for RTBF.

The only situation under which RTBF is applicable where existing law is not is when someone wants to remove true information about themselves published by a third party. (If the information is not true then libel laws apply.)



I think the main concern about RTBF is that it's not well defined. It cannot cover every piece of information associated with a person.

A right to privacy is more applicable and solves everything you want.


Just like any fresh law. Let the judges/regulatory bodies polish it.


Google defines itself as a global corporation and has stated many times that it needs global access to talent, global access to markets, and global access to information. Yet a company that has benefited so much from globalization doesn't think that the French government should be able to enforce its laws globally.

Stated simply, Google wants borders for everyone else and none for itself.


No, Google wants borders to mean something, and so do I. I don't think France should be able to impose its laws on me if I am neither a resident nor a citizen of France.


>> and so do I

So you believe in the concept of sovereignty. France's laws end at France's borders? Borders should be strong, right?

>> No, Google wants borders to mean something, and so do I.

No, Google wants borders to mean something when it benefits them, such as in this case. They want borders to be meaningless when it benefits them, such as the ability to operate in almost every country in the world.

These sorts of skirmishes will only increase in the connected world being promoted by corporations. Some people, including myself, would rather that it wasn't done quite so hastily.


> They want borders to be meaningless when it benefits them, such as the ability to operate in almost every country in the world.

Huh? That makes no sense at all. Of course they want to operate everywhere because that's how they make the most money. That doesn't mean that they don't respect national borders.


How free would your speech be if it was recorded for posterity with no way of controlling it? Just something else to think about.


If I want to control the distribution of my own speech I can do that now by asserting copyright.


"If I want to control the distribution of my own speech I can do that now by asserting copyright."

No, you can't.

If you say something in public, and someone records it with an iPhone and puts it on YouTube, you have no legal recourse in that matter.


Interestingly, in some (all?) European countries uploading the recording of your speech (or just you as a person) on YouTube would be a violation of your privacy rights, unless you're a person of public interest and gave that speech in public, i.e. in front of a crowd or something similar.


The enforcement of copyright is a form of censorship.


No, censorship is the suppression of information based on content, not ownership.


Many people believe that censorship applies only to speech limited by a state actor. Ex: the first amendment of the US Constitution means that the US government cannot repress your speech. It does not prevent others from doing so.

Many other agents engage in repression of speech: we repress ourselves, so do parents, friends, communities, lawyers, contracts. We don't automatically regard repression of speech/expression as a bad thing, and in many cases we are glad for it.


Except it's not. The data is gone from the Google index, but you can still find it.


How long do you think it will take for RTBF advocates to figure this out and demand it be taken down at the source, too? Probably about as long as it takes for people to start working around search engines. At least then no one will be able to claim it's not censorship.


But the data in Google's index is still information, and forcibly suppressing it is still censorship.


Yes, forcible suppression by a state actor is definitely censorship, but not all censorship is bad.

I am generally in two minds about this because there are convincing arguments for both sides. In the US, some people have a terrible time getting a job because of a criminal conviction. Yes, they did something wrong, maybe even terrible, but does society benefit from keeping these people from obtaining gainful employment?

If society does benefit, for how many years should the individual wear the scarlet letter? At present a criminal conviction in the US is a life sentence for a lot of people. What if the government passed a law that expunged their criminal records after 7 years, but did so for the greater good of society?

The French people think that the scarlet letter is bad for their society.


> not all censorship is bad

I don't dispute that. I'm just saying that France should not be allowed to censor beyond their own borders.


" and forcibly suppressing it is still censorship."

No, not by any reasonable definition of the term.

Whether it's 'censorship' or not absolutely depends on what kind of information is in question.


This is not about enforcement of copyright, or defamation, it's about an individual's ability to censor true information simply because that information happens to be about them.

No, it isn't. In fact, the original CJEU ruling was quite clear that the right to be forgotten is not absolute and does have to be balanced against competing considerations like freedom of expression on a case by case basis.

However, data protection laws in Europe are much stricter than in some places when it comes to things like incomplete or out of date information. A key detail in the right to be forgotten ruling is that it is not only the original source of such information that bears responsibility for it, but also search engines that make such information easier to find.

The point is that while such technologies are useful for finding good information, they can also greatly magnify any damage caused by bad information. A search engine that links to a newspaper article describing how someone was accused of rape won't necessarily also link to the follow-up article describing how the key witness later admitted in court that they made the whole thing up and was found guilty of perjury themselves. And yet, a prospective employer or girlfriend or whoever else might have reason to look that person up by name won't know that and might reach entirely the wrong conclusion if they only saw the first article. More significantly in this hypothetical scenario, thanks to modern searching technologies, every prospective employer or girlfriend or whoever else might have reason to look that person up by name is likely to see the same article with the same missing follow-up, with potentially catastrophic consequences for the entirely innocent subject's life.

That's an easy example that surely no-one is going to seriously argue against, because it's very clearly misrepresentative and the potential harm is obvious. Once you get into information that is factually correct but either no longer fairly represents an individual or is concealed for good reasons anyway, things tend to get more complicated. There are reasons that justice systems typically regard even most criminal convictions as spent after a certain period of time and no longer require disclosure beyond that point. There is a reason that courts sometimes order the identities of people involved in cases before them to be concealed. There are many other good reasons for confidentiality and privacy protections. In any of these situations, while whoever is disclosing the information in the first place may be the original cause of any resulting harm (and may be punished accordingly under the law), it remains the case that a search tool making the information much quicker and easier to find magnifies any resulting harm considerably.


> The point is that while such technologies are useful for finding good information, they can also greatly magnify any damage caused by bad information. A search engine that links to a newspaper article describing how someone was accused of rape won't necessarily also link to the follow-up article describing how the key witness later admitted in court that they made the whole thing up and was found guilty of perjury themselves. And yet, a prospective employer or girlfriend or whoever else might have reason to look that person up by name won't know that and might reach entirely the wrong conclusion if they only saw the first article.

But the answer to that is not to try to suppress the first article, but rather to draw attention to the second.

But that is really missing the point. If France wants to deal with this issue through censorship that is France's prerogative. The point is that France should not be able to censor information outside its own borders.


But the answer to that is not to try to suppress the first article, but rather to draw attention to the second.

You're assuming the second exists, but that is outside the control of sites like search engines that are just linking to others' content.

In any case, I don't agree. If the first article is incorrect or misleading, I have no problem with suppressing that article rather than unfairly damaging the person who was described in it. This is not censorship any more than saying you'll arrest someone who shouts "Fire!" in a crowded theatre just to cause a dangerous panic or someone who phones in a false bomb threat just to disrupt a big public event.

Words are powerful things. People who abuse their ability to speak and in doing so cause unjust harm to others need to be treated appropriately by the law. People who propagate or perpetuate those harmful words are contributing to the harm themselves and also need to be treated in some appropriate way by the law.

The point is that France should not be able to censor information outside its own borders.

We agree on the underlying principle here. I wrote a little more about this in my first comment in this HN discussion.


> You're assuming the second exists

You stipulated that it did. If it didn't, then on what basis do you determine that the first article is wrong?

> We agree on the underlying principle here.

Maybe we should just leave it at that.


If it didn't, then on what basis do you determine that the first article is wrong?

That brings us back to where we came in: the "right to be forgotten" measures provide a way for someone who is suffering harm as a result of information being spread about them to notify the likes of search engines that that information should not be spread about them.

Having been so notified, if the search engine continues to propagate the information anyway, it can't say it wasn't warned and it's on the hook for any harm it causes. It's a similar principle to various other "safe harbour" style laws that have been written in recent years, trying to balance the benefits that derive from having these powerful, automated services available online with the dangers that they inherently bring if they act unfairly or get something wrong. The consensus so far seems to be that such services shouldn't necessarily be on the hook for automatically generated or user-supplied content by default, but making them provide a clear notification mechanism and accept responsibility once they have actual knowledge of something going wrong is a fair trade for that safety net.


> provide a way for someone who is suffering harm as a result of information being spread about them to notify the likes of search engines that that information should not be spread about them

Yes, of course. But whether or not an individual should be able to censor true information about themselves, even if that information is harmful to them, is highly debatable. I'm happy to engage in that debate if you want, but that is a different issue than the one which I originally raised, which is the question of whether a country's power to censor should extend beyond its own borders. The answer to that is IMHO an unequivocal "no".

(The answer to the question of whether a person should be empowered to censor true information about themselves is also IMHO "no", but that is not nearly so clear-cut. Like I said, if you want to engage on this topic I'm happy to have that debate as well. But let us not conflate the two issues. They need to be kept separate or we will end up in a hopeless muddle.)


Just to be clear, my original comment wasn't disputing your point about international enforcement (on which, once again, it seems we agree on the main principle). I was challenging your characterisation of the RTBF as "an individual's ability to censor true information simply because that information happens to be about them".

The original CJEU ruling was pretty clear that, in the court's opinion, there were some circumstances under which restricting the promotion of personal information by search engines, even if that information was true, could be justified. However, the court was also pretty clear that merely disliking the fact that some information about you was being shared was not sufficient grounds for the law to restrict that sharing.

I'm happy to debate more specifically the circumstances where that sort of restriction might be reasonable, but if you believe as a philosophy that freedom of expression and the right to say true things should always outweigh an individual's right to privacy then it's unlikely we're going to agree on so much there. I don't personally believe in an absolute right to free speech regardless of the consequences. I do believe that there are many good reasons that privacy is important, and sometimes that it is the more important of the two.

To give one concrete example, I think it is a general good that anyone -- including public figures with influential and highly stressful jobs -- should be able to seek help for mental health problems if and when they need to, without fear of suffering criticism or abuse for doing so. That makes respecting the privacy of those who do seek help important, regardless of the truth of any statement that they are doing so.

To give another example, the evidence appears to be overwhelming that punitive "justice" systems are far less effective at reducing crime and improving general quality of life than systems based on the rehabilitation of offenders. A huge part of that rehabilitation is allowing past offenders an opportunity to move beyond their old situation, to "turn over a new leaf". It's pretty difficult to do that if stories of the chocolate bar you stole from a store aged 18 are still keeping a very different you out of good job opportunities aged 28 or 38 or 48. Again, it might be true, but there is a strong public interest in concealing that fact once it is no longer relevant to understanding your character, which is exactly why concepts like spent convictions and probationary periods exist in many legal systems. In the age of an Internet that never forgets, that concealment becomes a lot harder, but the evidence for its benefits not just for the individual but for society as a whole is still the same.


> I was challenging your characterisation of the RTBF as "an individual's ability to censor true information simply because that information happens to be about them".

I stand by that characterization.

> The original CJEU ruling was pretty clear that, in the court's opinion, there were some circumstances under which restricting the promotion of personal information by search engines, even if that information was true, could be justified.

Yes, I get that. And I recognize that this is a defensible position. I even sympathize with the motivation behind it. But I disagree with it because 1) it's too easily abused and 2) it doesn't work.

> seek help for mental health problems if and when they need to, without fear of suffering criticism or abuse for doing so.

Completely agree, but the answer to that is to not let that information get out in the first place. Once that particular cat is out of the bag getting rid of the Google listing won't help much, especially if you're a public figure.

Actually, the real answer here is to remove the societal stigma associated with seeking help for mental health issues in the first place. And one way to promote that process is to reveal that respected figures have sought help for mental health problems, just like the cause of gay rights was advanced by revealing that well known and respected people were gay. I'm not suggesting forced outing, just that the ideal situation is for all of this to come out of the closet.

> the chocolate bar you stole from a store aged 18

What about the person you raped when you were 32?


I stand by that characterization.

Then with due respect, your understanding of RTBF is objectively incorrect. It was never about allowing individuals to censor information about themselves just because they don't want it to be shared. You can read more from the EU itself:

http://ec.europa.eu/justice/data-protection/files/factsheets...

But I disagree with it because 1) it's too easily abused and 2) it doesn't work.

Is there actually any evidence that it's easily abused, or is that just an assumption?

For that matter, why doesn't it work? It seems clear that promoting information via a search engine will typically give that information much wider reach than just, say, publishing it on a blog somewhere or having an article on the web site for someone's local newspaper. If that information is incorrect or misleading, if we accept for the purposes of this discussion that distributing that information is therefore undesirable, and if we accept that the search engines themselves report that many links have been removed as a result of RTBF, how can the current situation not be having the intended effect?

Completely agree, but the answer to that is to not let that information get out in the first place.

Ideally, yes. I just don't think that means you give up on trying to limit harm to an individual, particularly an innocent one, if someone does screw up and let the cat out of the bag.

Actually, the real answer here is to remove the societal stigma associated with seeking help for mental health issues in the first place.

That's a noble goal, and one I fully support. However, I don't think it's realistic to expect everyone in society to treat everyone else objectively and fairly when faced with potentially negative information about them any time soon. Sadly, there is still far too much ignorance and prejudice in society for now. However, privacy is an effective practical tool for reducing the amount of unfairness that results from that kind of ignorance and prejudice.

Also, as I mentioned in another post, it's impossible to reliably achieve fair outcomes after initially encountering negative information, even with good intentions all round. If nothing else, someone who looks up an individual and initially finds negative information about them won't (and can't) know for sure whether there was also some later positive information that they should take into account as well. Short of reliably data mining and classifying the sum of all human knowledge so that any search that turned up negative data immediately highlighted the overriding positive data as well, there is no way to solve this problem 100% of the time.

What about the person you raped when you were 32?

When it comes to disclosures and background checks, there are usually some types of offence (typically violent or sexual ones) that are deemed sufficiently serious that convictions for those offences never become spent and will turn up on background searches forever, even if the legal system does consider lesser offences to be irrelevant after a certain period of time. I don't see why the same principles shouldn't apply for the same reasons in RTBF cases, and I would expect EU courts to do exactly that when applying the privacy and data protection rules.

We're in a shades-of-grey area here, and your example was a much darker shade of grey than mine, so it seems reasonable that there might be a different balance between allowing the past offender to move on and helping others to find information that might be relevant to their dealings with that past offender.


> It was never about allowing individuals to censor information about themselves just because they don't want it to be shared.

Maybe that wasn't the intent, but that is the net effect, because the burden of proof is not on the individual to show that the information should be removed but rather on the search engine to show that it should not be. As a matter of simple economics, Google has essentially no choice but to comply with all RTBF requests whether or not they have merit, just as it has to comply with all DMCA takedown requests whether or not they have merit. Google doesn't have the resources (nor should it be expected to provide the resources) to actually adjudicate every case on its merits.

> Is there actually any evidence that it's easily abused, or is that just an assumption?

It's a prediction.

> it's impossible to reliably achieve fair outcomes after initially encountering negative information

Why? If you can pass a law forcing Google to remove links, why could you not pass a law forcing news outlets to update stories?

> We're in a shades-of-grey area here

Yes, of course, that's the whole point. What if you were never convicted of the rape even though you were clearly guilty? (Should Michael Slager have a right to force the internet to forget that he actually shot Walter Scott in the back, despite the fact that the jury did not convict him?) What if you were actually innocent even though the evidence against you seemed overwhelming? Who decides which cases fall under the auspices of RTBF? The answer is: you do. Why? Because it's cheaper and less risky for Google to simply comply with all takedown request than to actually try to adjudicate each request on its merits.


As a matter of simple economics, Google has essentially no choice but to comply with all RTBF requests whether or not they have merit

From Google's own public statements, they actually reject the majority of requests to remove search results under RTBF:

https://www.google.com/transparencyreport/removals/europepri...

Google doesn't have the resources (nor should it be expected to provide the resources) to actually adjudicate every case on its merits.

Apparently you're wrong on that one, but even if you weren't, I'm not sure I would have a problem with this. Google has a highly visible and influential system, but I don't see why it should be exempt from the same data protection and privacy rules that everyone else has to follow just because it is inconvenient or expensive to comply with them. Remember, the original RTBF ruling from the CJEU was based on existing general principles of fair data processing under EU law, not some special anti-Google legislation.

If you can pass a law forcing Google to remove links, why could you not pass a law forcing news outlets to update stories?

There are laws potentially requiring news outlets to update misleading stories, issue corrections, remove material infringing others' intellectual property rights, etc. This would also be covered by much the same data protection rules, as well as legislation regarding defamation, official secrets, copyright, and so on.

What if you were never convicted of the rape even though you were clearly guilty?

What if you were actually innocent even though the evidence against you seemed overwhelming?

We have a legal process to make decisions on these difficult issues, by holding a trial, and a great deal of thinking by a great many smart people has gone into making that system the way it is. If the verdict reached through such a trial is flawed, there are well established legal mechanisms for appealing it or even correcting it later if new evidence proves it was wrong. Obviously the system isn't perfect, nor can it ever be in the absence of telepathy or something. But Google and the Internet aren't even close to big and important enough to overturn an entire legal system.

And yes, sometimes that legal system results in surprising verdicts. Maybe that's because, unlike Mike from down the road who made a five minute post on his personal blog, the courts actually considered all of the available evidence and followed an impartial process through which an official determination of someone's guilt or lack of guilt was made. But that verdict will be regarded as the final decision for any other purpose, including say a defamation lawsuit also brought in a real court. So an argument that someone should be allowed to just say whatever they want and use a system like Google to magnify their influence even though a court has already ruled the opposite isn't going to be very convincing to me (or just about anyone else) I'm afraid.

Who decides which cases fall under the auspices of RTBF? The answer is: you do.

Every time you deal with someone else's personal data, you have a legal (and, I would argue, moral) responsibility to do so fairly. If you don't, the law may punish you. I just don't see why the RTBF is different to any other aspect of the relevant law in this respect.


> they actually reject the majority of requests to remove search results under RTBF

Slightly over half. But we really have no way of knowing how many of these requests are rejected on merit and how many are rejected because of some technicality (e.g. some problem with the submitted proof of identity).

> I don't see why it should be exempt from the same data protection and privacy rules that everyone else has to follow

It should not. I'm using "Google" here as a proxy for any web service that falls under RTBF.

> We have a legal process to make decisions on these difficult issues

Yes, of course, but this isn't about that. This is about controlling information. You brought up stealing a chocolate bar as an example of something that should be forgotten, and I countered with rape as an example of something that perhaps should not. The problem is: who gets to decide where the line between petty theft and rape gets drawn with respect to the RTBF?


The problem is: who gets to decide where the line between petty theft and rape gets drawn with respect to the RTBF?

Ultimately, the legislature when it comes to setting general principles and then the courts when it comes to interpreting those principles in specific cases, just like any other decision like this.


What I meant was: who decides in individual cases. Obviously neither the legislature nor the courts are going to do that. Companies are going to do it, and they are going to do it in a way that minimizes their cost and legal exposure. And that means, under the current law, if they are going to honor their fiduciary responsibilities to their shareholders, they are going to err on the side of censorship. (Note that Google is in a unique position of not actually being beholden to their shareholders. Because of the way the company is structured, Larry and Sergey have absolute control, just as Zuck has absolute control over FB. But those are extremely rare situations for public companies. In any case, I do not want a world where freedom of information depends on the benevolence of a handful of oligarchs.)


Your cure is worse than the disease, and ineffectual at that. What's to stop us from making a search engine for illegal truths?


Relevance is relative. If I'm going to be hiring you, dating you, or having you around my family the public information regarding your history of immorality and criminality suddenly becomes relevant.

The Internet isn't a conversation between a minority of public figures and everyone else and the decision isn't whether what they have to say ought to be shared with everyone.

It's a conversation between all of us, and we individually ought to determine what is relevant to all of us, not censors.

If you even try we will make a search engine exclusively for illegal truths and the things that you want to hide will only become more visible.

Ultimately the Internet belongs to all of us and we won't let you stifle it.


If I'm going to be hiring you, dating you, or having you around my family the public information regarding your history of immorality and criminality suddenly becomes relevant.

Does it?

Does getting busted for smoking a joint at university really make any difference to whether someone is a good employee a decade later? Should this matter?

Does being merely accused of something once upon a time really make any difference to whether someone is a good employee? Even if they didn't actually do it, and were cleared at trial or never even got that far because the charges were later dropped? Should this matter?

Sometimes people tend to think the worst of other people, or simply let their personal prejudices get in the way. That's why employers already aren't legally allowed to discriminate on some grounds when making hiring and firing decisions, and why sometimes information is legally protected by the courts. So maybe providing tools with the scale and influence of a search engine like Google to feed the prejudices and paranoia of hiring managers without regard to correctness, completeness or context isn't the best idea.

It's a conversation between all of us, and we individually ought to determine what is relevant to all of us, not censors.

Be careful what you wish for. In this case, one person's censorship is another person's respect for privacy. In a world where no-one gets second chances, how do people ever escape a criminal path and get back on the straight and narrow?

If you even try we will make a search engine exclusively for illegal truths and the things that you want to hide will only become more visible. Ultimately the Internet belongs to all of us and we won't let you stifle it.

Oh, you're going to invent and fill an entire Internet on your own, free from the laws and ethics and practicalities that apply to everyone else? Well, OK, good luck with that.


- Getting busted smoking a joint where it's still illegal is a matter of public record, regardless of whether it shows up on Google.

- If you are wrongly accused, refute the accusation. It's an unfortunate matter that got cleared up. You might get passed over for a job; try again, put in another application. The extra struggle, unfortunate as it is, is a lesser evil than giving anyone with money to hire lawyers the ability to silence critics.

- Respecting your privacy means not reading your diary or publishing your emails for planet earth to read. Unless the data was obtained unethically, or you had a legal or moral expectation that it wouldn't be shared, there is nothing legally or morally wrong with sharing it. This isn't gray; it's black and white.

- Censorship is when the government, acting on its own behalf or on behalf of others via legal apparatus, silences the citizenry or suppresses their speech. Once again, this is super black and white.

Censorship is a terrible idea that is somehow gaining credence among otherwise educated people. In the first place, it is a gross violation of the rights of the citizenry and a terrible precedent. In the second place, while it might silence the little guy in a lot of common cases, the people you ought to be concerned about, the movers and shakers who are likely to make decisions that affect people's lives, can probably afford a source of information that is less likely to be affected.

So you are looking at creating a world where you don't know that Bob, who's dating your daughter, was accused on 3 separate occasions of rape, but John the hiring manager still knows you were caught with some weed at 19 even if the charges were dropped.

If you win you will have accomplished nothing of note other than making the world a crummier place.


Getting busted smoking a joint where it's still illegal is a matter of public record regardless of whether it shows up on Google

But in a lot of places, the degree to which public authorities will report it in official background checks will fade over time, as will any legal obligation for the individual to disclose the information when, say, applying for a job. There are well established arguments for governments acting this way in their treatment of past offenders, and whether or not search engines highlight past indiscretions that are otherwise considered in the past by the legal system, thus undermining the normal legal position, can have a large practical impact on the individual's future.

If you are wrongly accused, refute the accusation. It's an unfortunate matter that got cleared up. You might get passed over for a job; try again, put in another application. The extra struggle, unfortunate as it is, is a lesser evil than giving anyone with money to hire lawyers the ability to silence critics.

I'm sure that will be a great comfort to the paediatrician who was mistaken for a paedophile by someone too stupid to know the difference, as they go into hiding on police advice while the death threats are dealt with.

Unless the data was obtained unethically or you had a legal or moral expectation that it wouldn't be shared then there is nothing legally or morally wrong with sharing it. This isn't gray its black and white.

Very little in either ethics or law is black and white. In fact, there is an entire section of EU law about processing personal data that shows how far from black and white the legal rules are, and the RTBF ruling from the CJEU was just a particularly high-profile application of those general legal principles.

Censorship is when the government, acting on its own behalf or on behalf of others via legal apparatus, silences the citizenry or suppresses their speech. Once again, this is super black and white.

So you don't agree with any form of defamation laws? You don't have a problem with someone inciting someone else to commit a serious crime so they can avoid getting their own hands dirty? You think someone who phones in a hoax threat before a major sporting event, causing huge amounts of disruption, financial loss and potentially physical harm to many thousands of people, is just exercising their right to free speech and shouldn't suffer any negative consequences?

So you are looking at creating a world where you don't know that Bob, who's dating your daughter, was accused on 3 separate occasions of rape

That depends entirely on the situation. Were those three separate accusations all made by close friends of Bob's psycho ex, and did any of them actually lead to any sort of trial? Because if not, protecting Bob from unproven accusations that could seriously damage his reputation is exactly why it's important to have strong privacy laws and to make sure that any personal data that is being shared is not misleading.

Now, if Bob was prosecuted for rape, took a plea bargain accepting some lesser sexual assault charge, and is on some official sexual offenders register, then that's a different story entirely. But in that case, it's unlikely that the RTBF would mean Bob was entitled to suppress search engine results reporting that situation either. Again, the world is not a black and white place, and neither are the EU legal provisions on which the RTBF result was based.


Regarding censorship: rape threats, defamation, bomb threats and other obvious and clear exceptions to free speech are pretty well understood and covered by existing law. We are talking about suppressing true statements, 1984-style, and making the truth disappear ex post facto from the internet. This isn't the same as shouting fire in a crowded theater, and you know it.

You still haven't addressed how easy it would be to create a public search engine of illegal truths, nor the fact that there is so much info out there that anyone of importance will likely have access to all the info about people that they might want to hide. You won't be able to get a job at McDonald's without dealing with someone who pays a monthly fee for the truth about you.


We are talking about suppressing true statements, 1984-style, and making the truth disappear ex post facto from the internet. This isn't the same as shouting fire in a crowded theater, and you know it.

No, it's more like a court-ordered anonymity ruling protecting a vulnerable witness, or medical confidentiality, or a government removing old convictions for minor offences from someone's official background checks. All of these are well established principles in other parts of our legal and government systems, and the ethical arguments for them are well understood.

You still haven't addressed how easy it would be to create a public search engine of illegal truths

If you want to do that somewhere outside the jurisdiction of the EU, well, good luck with that. It's going to cost you a fortune for very little benefit, but if you believe so strongly in the ethics of this and your own legal system protects your freedom of speech above any other relevant rights, go ahead. But if you do that in the EU, you're going to jail. And you should probably expect that if you did openly challenge the legal principles of the EU in such a way, the EU actually would respond by censoring your site within its own territory, just as it ruled that Google (who know a thing or two about building a search engine, in case you didn't know) are required to comply with the same data protection rules as everyone else in the RTBF case.

nor the fact that there is so much info out there that anyone of importance will likely have access to all the info about people that they might want to hide

You keep saying this, but the whole point is that if you have a facility like a search engine making it easy to find that information then any damage caused by propagating or perpetuating the information will be greatly magnified.

The reality is that even in very important situations like a trial in court, a jury will not be told information that the law deems irrelevant or inadmissible in the case. And as I've been explaining in various other comments in this discussion, these legal principles have been developed over a very long period of time and apply far more widely than just Google and RTBF. If you want to challenge them, again go ahead, but you're not just taking on some online free speech issue, you're taking on generations of legal and ethical argument that have brought us to where we are today.

You won't be able to get a job at McDonald's without dealing with someone who pays a monthly fee for the truth about you.

That kind of argument is exactly why these legal protections are necessary.

Again, it's hardly news that allowing employers to discriminate on undesirable grounds is not healthy for society. We have literally passed laws to protect various categories of vulnerable people against other potential abuses in the employment system.

This is no different, except that it's not even close to the same scale and didn't even need laws of its own, just the same basic legal data protection principles that already applied. If you want to provide that monthly database of "truth" about someone to potential employers of that person, and your database in fact presents a misleading picture of that person because while true the information is also incomplete or out of context or otherwise not telling the whole story, you should expect to be taken to court and you should expect the court to award damages to the person harmed by your misrepresentation of them.

Ultimately it's the same principle that leads oaths in court to say not just "tell the truth" but something like "tell the truth, the whole truth, and nothing but the truth". The problem here isn't someone telling inconvenient truths about someone else, it's misrepresentation that causes harm. Sometimes you can do that even by telling the truth, if you're only telling a partial truth, and our legal system recognises that.


A trial is a very specific situation in which we deem that controlling how the facts are presented is vital to an essential function of our justice system. The jurors are a tiny handful of individuals explicitly picked from those who hold no interest in the case, and they are denied free access to public info only so long as the trial lasts.

It troubles me that you make little distinction between keeping a handful of disinterested parties in the dark for a few months so they can render an impartial verdict based on specific facts and giving the government the power to decide for all of us what truths are fit to be heard.

The latter involves many orders of magnitude more difficulty and potential for abuse.

Your fantasies about fresh starts and protecting innocents from persecution are poorly founded daydreams. You are unwittingly pushing for us to lay the groundwork for tyranny and injustice in pursuit of a fool's errand. While this is ripe for abuse, it is bizarrely naive to believe that important people will willingly give up the ability to spy on us, collect information on us, and use that intelligence to make decisions about our lives.

Further, where does it end? Must the internet consist only of truths that everyone can agree on? In other words, why is western Europe special and, say, Saudi Arabia not? Is it because you suppose you are reasonable and they are not?


I'm sorry, but you keep reading things into my posts that aren't there, and as a result you are attacking straw men in your responses.

All I am really arguing here is that if we have laws that protect privacy and limit use of personal data for whatever we consider good reasons in a certain jurisdiction, those laws must apply equally to online sources and data processing within that same jurisdiction. What those laws are and what reasons we might consider good enough to prioritise privacy over other relevant factors is a vast and complicated area, and today's situation was reached after numerous debates by smart and thoughtful people.

You seem to be interested in promoting some sort of anti-government agenda here. Don't trust the government, fear its power over the people, and all that. In other contexts, perhaps I might agree with some of your concerns there. But this issue, the one we're talking about right now, has really very little to do with excessive government power or state censorship. This is simply about enforcing laws we already have, made with good intentions, written in reasonable terms, and with the purpose of protecting vulnerable people from unjust harm. That is no more an abuse of government power than having police officers legally allowed to use force against you if you're literally beating up another person in the street, because that police action is an infringement of your right to free expression or something.


You have already spoken in favor of jailing developers in one jurisdiction because they don't implement legal censorship you favor in your jurisdiction, regardless of the laws in theirs. Where exactly does THAT road end?

You have proposed we make a secret book of illegal truths, unknowable to the general public, that we cannot even protest effectively without running afoul of the law's prohibition on their publication.

If you simply support delisting such inconvenient truths from Google, then your work is for naught, as I will simply publish a list of such truths. If you support whitewashing the entire internet of them, then you have proposed we implement 1984 for the sake of helping bad people move on with their lives after learning better, backed up by threats of violence and imprisonment from your nation's thugs.

Rather than buying their peace of mind with my freedom, I propose you spend your own nation's money to support programs that will employ, educate, and provide therapy to help them move on.


I have done no such things, and in fact several of my comments in this HN discussion have said very much the opposite of what you described, as anyone who cares to read a few comments up can immediately verify. You're just making stuff up now, and as such I see no value in continuing this thread any further.


You said if someone was to publish a search engine for illegal truths they would be jailed in the EU.

You are continually framing the power to silence others as some sort of right.

You ought to note that such rights normally derive from the obligations of those to whom you entrusted your data in the first place.

Neither your doctor nor his staff may share your medical information. Either explicitly in an agreement or implicitly by law, you gained the right to expect that your information would remain confidential when you became his patient.

When you mug someone and break their face, no amount of jail time and rehabilitation obliges your victim, or society at large, to stay silent.

You imagine all of society possesses an obligation that rightly attaches only to those with relevant relationships, like employee/employer, doctor/patient, etc.

You cannot attach such an obligation without grossly limiting freedom, and such an obligation attaches to no human right I can imagine.

Your position isn't merely badly thought out; it's morally wrong. If you can no longer discuss it, then I'll drop it.


> That's an easy example that surely no-one is going to seriously argue against,

This is an ineffective legal solution to a social problem.

The Internet lies. Be cautious when placing trust, especially on the Internet. Two very good pieces of advice people will eventually have to learn.


The Internet lies. Be cautious when placing trust, especially on the Internet. Two very good pieces of advice people will eventually have to learn.

That will bring little comfort to the person who was wrongly accused of rape in my example, when they are turned down for ten jobs in a row after HR search for their name online and find only the reports of the malicious accusation.

Lies about a person can be extremely damaging. Even sharing the truth about certain things that we normally consider private can be extremely damaging. Unfortunately as a society we are nowhere near the state where everyone allows for all information we find from any other source potentially being incorrect or misleading and treats such things in an objective and considered manner.

I'm not sure reaching that state is even possible, in general. For example, how long should a reasonable, responsible person spend looking for corrections or retractions that might or might not exist before believing an article they read online?

That is why we have defamation laws and that is why we now have related protections like the "right to be forgotten".


How is it ineffective if enforced?


Practice your freedom of speech outside my personal life.


With all due respect, I think the position OP's advocating is that people need to start defending the right to have what you're calling a "personal life". Our lives are no longer personal, at least in the sense of information ownership, which is what we're talking about. The term's continued usage is a polite lie involving curtains and a cheerful aversion of thought.

RTBF is reactive policy-making at its worst. Resources are finite. Make those companies spend resources doing something useful for our rights.


Can you expand on this? What I have in mind is where to draw the line between your personal life and your interactions with others. Posts on public forums (including HN)? Your Facebook page? Your github repos? Someone else writing a blog post about an experience they shared with you? I sympathize with some of the intent. It's increasingly difficult to have a private life. I just am wondering how practical it is.

I know the Right to Be Forgotten doesn't necessarily include all of these, but I can imagine it being expanded in a way that doesn't seem to be much of a "slippery slope".


How about this: any personal information that was illegally obtained (the act itself most likely already a crime) is defined as illegal to share, unless it is already available legally?

However, there is still the question of whether content should be blocked globally if it's illegal in some country. No, IMHO.


any personal information illegally obtained

You're right, that should be removed. As you say, it's already a crime, so that wouldn't need any additional support by the RTBF.


I think I disagree. By that logic we would also be deleting information like that gained from the Snowden leaks.


Damnit. Why can't this be simple?

Is this type of thing really personal information, though? Would it be something covered by RTBF?


It actually is simple.

Covering up an illegal act using state-granted powers is illegal. Someone "breaking the law" to tell people you committed illegal acts is not illegal in any meaningful sense.


> Is this type of thing really personal information, though?

It's not. So the rule still works.


I'm not convinced it's against the spirit of it though. Freedom of speech is about the right to express thoughts without fear of government retaliation or censorship. The "right to be forgotten" is essentially about censoring one's own speech, which is quite different.


I find it funny how easily people can take the side that news or records should stay online permanently, forever.

Many lives are ruined daily by the law, and those people can be innocent of the crime they were accused of committing.

These people have to wait years for expungement in some cases, and they are not living the easy life with a job that pays anything close to what software engineers make.

The US does not care if you were not found guilty of the crime, or if what you were accused of was dismissed. People will just see whatever turns up in a background search and close all doors without asking the other party.

A very horrible world for some people.


Then the solution is not a right to be forgotten, but fixing the specific issues you addressed. It's like applying a bandage without removing the shrapnel.


Only people earning less than X have a right to be forgotten.


Which would be exactly the opposite of even the most optimistic "right to be forgotten" scenario, because enforcing that right would certainly be a rather expensive luxury.


Really weird dichotomies they're drawing. Clearly the intent behind the right to be forgotten is to be removed from whatever indexing Google does to display results regardless of where the query comes from. Trying to map national boundaries over data is why this person is having such a hard time trying to make a convincing case.

Data does not care about countries and national boundaries since clearly the google servers doing the indexing could be in space. In which case would they be allowed to not comply with any requests by people to be removed from their indices? Or would they be compliant depending on which continent the servers happened to be over at the time? Bordered data is an absurd concept and this person is being very disingenuous by framing the argument in those terms.


It's not about the servers, it's about the users location.

If a judge in France can successfully rule that no one in the US can find results for XYZ on Google, I find this troubling.


They don't need to restrict access by Google search domain. They can do it by search IP geolocation. Thus, if you're in France, you won't see the result from any Google site, but if you're in Germany, you will see it on all Google search sites.

I agree with them that worldwide banning is troubling, especially if there is open door for any country to ban anything from any other country.

Let's not forget that deleting something from Google doesn't delete it from human memory, the source website or mirrors it might have, and from other search engines. What gets on the internet tends to be resilient to deletion. Google can only do so much and they didn't write the content in the first place. They are just infrastructure.
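The per-country delisting described above can be sketched in a few lines. This is a hypothetical illustration only: the `DELISTED` and `IP_COUNTRY` tables are toy stand-ins (a real system would consult a GeoIP database and an internal removals list, neither of which is shown here), and none of this reflects Google's actual implementation.

```python
# Hypothetical sketch: delisting keyed on the searcher's inferred country,
# not on which Google domain they use. Both tables below are invented
# stand-ins for illustration; the IPs are RFC 5737 documentation addresses.

DELISTED = {
    # URL -> set of countries in which a removal request was granted
    "http://example.com/old-story": {"FR"},
}

IP_COUNTRY = {
    # Toy geolocation lookup; a real system would use a GeoIP database.
    "192.0.2.10": "FR",
    "198.51.100.7": "DE",
}

def filter_results(results, client_ip):
    """Drop any result that has been delisted in the client's country."""
    country = IP_COUNTRY.get(client_ip)  # None for unknown IPs: nothing hidden
    return [url for url in results if country not in DELISTED.get(url, set())]

results = ["http://example.com/old-story", "http://example.com/other"]
print(filter_results(results, "192.0.2.10"))    # French IP: old story hidden
print(filter_results(results, "198.51.100.7"))  # German IP: both results shown
```

The point of the sketch is that the same query from the same Google domain yields different results depending only on where the request appears to come from, which is exactly why VPNs trivially bypass this kind of enforcement.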


They already do that. France wants them to go further.


I'm saying the concept itself is absurd to begin with. Is all French data kept on French servers? Obviously not. Whatever infrastructure google uses to power their data layer is replicated across continents. So when a French citizen says "Remove me from all your indices because my government gives me that right" google is not allowed to insert some extra phrases in there like "only if the person requesting that data is also French or is asking for the data from servers located in France".

Even if we assume that somehow all French data happened to be located on servers in France then complying with those requests would mean purging the data except it is obvious they are not doing that if a person from another country can access the data. So however you slice it google is either non-compliant or is purposefully misunderstanding what it means "to be forgotten" from their indices.


Is France the only country that gets this power? What happens when China demands the removal of Tiananmen Square wikipedia article in the name of unity? Or when the UK requests the removal of the Vietnamese napalm girl in the name of decency? Do we just keep taking down results until we're left only with things that no country objects to?


I think you may misunderstand what the right to be forgotten usually refers to. It's not about a user's "data" such as emails or online comments; it's usually about news articles that mention a person.

Surely you would agree that no matter what French law is, there is no way a French judge should be able to force, say, an American company to not display links to American websites (which happen to mention a French person) for American users?

(disclosure: I work for Google, but not on anything related to RTBF, and my views are my own)


The newspaper that wrote an article about the Frenchman has a right to display that article to people outside of France, and a right to have search engines index the article to drive traffic to its website.

The Internet user outside of France has a right to read something critical of the Frenchman to decide whether to do business with him on that basis and has a right to find that information using a search engine.


Right. And what happens when an Internet user in France opens a VPN connection that has an endpoint outside of France, and accesses information that the French government doesn't wish him to see?

You can prosecute the Internet user, of course. That's what happens in China, for instance.

You can try to detect VPNs and make it a crime to even try to create such a VPN. That's also what happens in authoritarian countries.

But is that the right way for France or other EU countries to go? No.


If you think bordered countries make sense, then bordered data should make sense, right? Can't we trace packets back to a physical emitter?

I don't think this is a non-issue at all. If we replace "data" with "nukes", it's obvious that borders matter. Neither nukes nor data care about borders, but people do.

I think the last sentence sums it up really well:

"...should the balance between the right to free expression and the right to privacy be struck by each country—based on its culture, its traditions, its courts—or should one view apply for all?"


Google is mis-directing here. It is obvious what I mean when I say "Delete me from all your servers". I don't care what country those servers are in. This is not a freedom of expression vs privacy issue. This is people being in charge of their behavioral and historical data that google has locked up in their data silos. Framing it otherwise is disingenuous. I'm still free to express myself however I want on whatever forums I choose. Whether google gets to freely index and profit from it is the real question except their lawyers are trying to make it sound like they're champions of freedom of expression.


It's obvious what you mean, but it's not obvious how or why an EU law means Google has to change data in their datacenters in the USA for someone searching from Brazil.

What if Australia ruled that Google must keep all data on all people worldwide and forbid deletions. Should that override all other country laws? If not, why should the EU law affect other countries?

> This is people being in charge of their behavioral and historical data that google has locked up in their data silos.

It's not "their" data. People clicked "I agree", and it's someone else's data now. People aren't in charge of it, they never have been.


It's obvious what you mean, but it's not obvious how or why an EU law means Google has to change data in their datacenters in the USA for someone searching from Brazil.

It is very simple. A country (or union) can require businesses that operate in their country to fulfill a set of rules. One of the rules could be that it should be possible for any user of the service to remove their data.

Although a country cannot enforce these rules globally in the strict sense, it can forbid the company to operate in that country until it operates within the boundaries of the law. Some companies will relent to get access to that market.

(Whether this is a good idea is another question.)


Yes, and France runs the risk of Google saying "well, FU France" and closing its offices and presence there. If they try to enforce at the EU level, same thing.


Google will comply before pulling out of the EU. The EU is simply a market that is too large.


"Pulling out" of the EU means closing a few small offices (except for Dublin). It has no significant staff presence in the EU outside of Ireland once the UK leaves - the offices in France and Germany are really small.

If Google decides to start moving staff out of Dublin, the EU would lose its last leverage over it.

The EU could then do a China and attempt to force ISPs to block Google services at the network level. Given how much the web relies on Google servers, that would break the EU internet. The EU is struggling already, there's no way they're going to shoot themselves in the head economically like that.


"it can forbid the company to operate in that country "

What does that even mean on the internet?

I.e., what do you expect them to do, and why is it the company's problem?

Do you really believe that if Google, Facebook, whoever, were to close up all their physical shops in, say, France, and take no affirmative steps to "operate a business" there (i.e., they don't try to explicitly block folks, but they tell them not to use the service and don't operate it there themselves), France would declare whatever they want to do okay? Of course not.

It's pretty irrelevant where they do business, examples abound of countries where none of these companies have explicit presences trying to regulate them. None of this would be any different if they were entirely located, and declared all profit, inside, say, the US. The argument would be slightly more tortured, but the EU/etc would make it and do everything they could to hold these companies to it.

(and you don't want to try the "they operate a business wherever they earn profit", as this is even more fuzzy, as opinions differ on what profit is earned where, even if you ignore the tax issues. In fact, you can already see the countries don't want to make that argument because they don't like the answers it gives!).

Since Google is pretty polarizing, let's take HN instead. Imagine someone in France files a RTBF request against HN for continuing to index an HN story about a crime they "paid their time for".

HN has no presence in France (I think! :P). Do you really believe France is going to say "yeah, you are right, do whatever you want"? That seems really, really unlikely.

It's also pretty clearly not HN's job to say, try to deliberately exclude french citizens from participating.

So what do you want them to do, exactly?

They are kind of screwed no matter what they do[1]

The truth is, this is the problem with international legal jurisdiction in the internet world. All countries/unions/what have you want their jurisdiction to be as expansive as possible. Historically, these jurisdiction were limited by physical borders, military might/willpower or allied treaties.

But they all (US included) see ways to expand their jurisdiction when it comes to internet companies, as their digital footprints are often much larger than their physical ones.

So, as expected, they all try to claim it, because it is to their political/legal/social/trade advantage to do so. If they don't, nobody will ever care about them at all. I.e., if I can easily make globally successful companies that don't have to care about EU law, but still derive benefit from EU citizens voluntarily using them, why would any company want to care about the EU?

This is why it does not matter where they "operate a business" or any other metric you can come up with. The metric is really "able to be successful and used by citizens of X without really being subject to the direct legal jurisdiction of X". Any time this happens, X will claim, through whatever argument they can, that the company is subject to their legal jurisdiction/regulations. Because if what they really cared about was anything else, these countries, like China, would just block the website/story/whatever inside their borders! They certainly have such power. Instead, they want the foreign website to recognize and submit to their authority, so that the country/union/etc stays relevant.

It's really not about anything else, and for pretty much all of these companies, there is no real way out.

[1] This is usually the point where the argument devolves into extremism, and people suggest that if these companies simply collected no data, they would never have problems. This is, IMHO, pretty naive as well, as they have the same problems above whether they are being asked to remove data, or keep it, or what have you. The data issue is just one that happens to involve something people seem passionate about, but it is far from the only issue that arises. This is about the joy of legal jurisdiction in a world without borders. It applies equally well to things vastly unrelated to issues like RTBF.


> What if Australia ruled that Google must keep all data on all people worldwide and forbid deletions. Should that override all other country laws? If not, why should the EU law affect other countries?

This is already the current situation and it's a mess.

Some countries have laws to force removal of personal information if the user requests it; other countries have laws to force the company to keep user information for X years.

It's a total mess. Just f* the law and pretend that everything is alright when you do the certifications :D


Great point about the other side of the coin: retention policies. The bottom line is that neither retention nor deletion is currently possible. Isn't that really the issue? If Australia wanted to retain whatever data Google had, then surely some kind of export functionality would be the answer for that use case, assuming Google's retention policies were by some miracle something like one year instead of forever.


> It's not "their" data. People clicked "I agree", and it's someone else's data now. People aren't in charge of it, they never have been.

Not according to EU law. If you do not want to comply with EU law, do not do business in the EU. Simple as that.


"Just kick Google out. Simple as that."

Because that would work out just fine. :)


Do you have the right to tell my browser client that it is not allowed to hit https://the_forum_you-post_to.com/your_post?

Google is an automated browser client (more or less). Do you think real people are justified in reading your post, but automated ones aren't?
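For what it's worth, the web already has a voluntary convention for exactly this distinction: robots.txt, which asks automated clients (but not human readers) to stay away from certain paths. A minimal sketch using Python's standard urllib.robotparser; the rules and paths here are invented for illustration:

```python
# robots.txt is the existing mechanism for telling automated clients
# (crawlers) what not to fetch. Human visitors are unaffected, and
# well-behaved bots honor it voluntarily.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Googlebot
Disallow: /your_post

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The crawler is asked not to fetch the post; any other agent may.
print(parser.can_fetch("Googlebot", "/your_post"))    # False
print(parser.can_fetch("Mozilla/5.0", "/your_post"))  # True
```

Of course this only expresses a request; nothing technically stops a crawler from ignoring it, which is arguably the whole point of the comment above.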


> This is people being in charge of their behavioral and historical data that google has locked up in their data silos.

Why are you using intentionally strongly connoted language like "be in charge", "locked up", and "silo"? A more accurate version of this statement is "This is people wanting to make someone else delete data that they have on their servers." You're also presuming that someone "owns" any and all data that pertains to them.


I think the strongly connoted language is warranted.

If it becomes effectively legal to surveil and store everything about anyone because you own servers, what kind of world does that lead to?

These are important questions that should be in the forefront before we're too far gone, which we're certainly flirting with.


Effectively legal? It is legal. I can privately crawl the web for all references to "John Doe", "johndoe.com", "/u/johndoe28", etc., and save them, because it's all content that John Doe chose to make publicly available, whether on his own website or on forum posts or Facebook or wherever.

There's probably some point at which the ways I use that data can be classified as stalking or harassment, but simply collecting it is allowed.
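To illustrate how trivial that kind of collection is, here's a toy Python sketch: scan some already-fetched public pages for any known alias of "John Doe". The page URLs, contents, and aliases are all made up for illustration:

```python
# Scan public page text for any known alias of a person and record
# which pages mention them. A real crawler would fetch the pages
# first; here they are hardcoded stand-ins.
import re

ALIASES = ["John Doe", "johndoe.com", "/u/johndoe28"]
pattern = re.compile("|".join(re.escape(a) for a in ALIASES), re.IGNORECASE)

pages = {
    "https://forum.example/thread/42": "As /u/johndoe28 said earlier...",
    "https://blog.example/post": "Nothing relevant here.",
}

mentions = {url for url, text in pages.items() if pattern.search(text)}
print(mentions)  # {'https://forum.example/thread/42'}
```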


There are far more sources of information than what John Doe explicitly puts out there. Those are probably the least of his worries.

What about the embarrassing things other people post about John Doe? What if this harms his ability to provide for his family?

What about things fabricated by others to silence or overtake John Doe?

edit: (adding the point below)

Just because it's legal to collect, manipulate, and profit from this mass of information 'somewhere' doesn't mean that concerned groups of people shouldn't try to make that process harder or impossible.


> What about the embarrassing things other people post about John Doe?

Is "John Doe" a politician trying to suppress public discourse about a scandal?

If you do something embarrassing, and people are talking about it, it is absolutely immoral to use government force to prevent people from disseminating information about it.

> What if this harms his ability to provide for his family?

Of course, think of the children. We must suppress any information that could negatively affect a hypothetical child!


What Google does is not "surveillance", it's collation of public data (or data that you agreed to give to google via one of its private services, like gmail). What the NSA does is surveillance, which is collection of private data without consent.

> what kind of world does that lead to?

A world where public discourse is more conveniently indexed? Be explicit with your concerns.


I think your version of the intent is correct, but the post by the author makes it sound like Google here is a champion of something virtuous, when in reality they're just protecting their own business interests. If Google were a publicly funded operation and their indexing/search algorithms were in the public domain, then they might be able to make a convincing case, but as it is there is clearly misdirection going on here.

From where I stand they are trying to take advantage of the murky definitions around data retention and various policies applicable to personally identifiable information to side-step certain restrictions in various countries that have much stronger policies around use cases of such data.


> Can't we trace packets back to a physical emitter?

No. Not really sure what else to say; this just isn't the case. Even without any sort of advanced privacy tech like tor, you still can't assume you can geolocate the source of a packet.


Are you saying it's too noisy to do that?


I mean it just doesn't make sense. There is no technical way to do that. Even if all packets claimed an origin (they don't), it could be spoofed.
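To make the spoofing point concrete: the "origin" of an IPv4 packet is just a 4-byte field in the header that the sender fills in, so a claimed source proves nothing. A minimal Python sketch building a header that claims an arbitrary source (the addresses are documentation-range examples; actually sending such a packet would require raw sockets, since the OS normally fills in the source):

```python
# Build a minimal 20-byte IPv4 header claiming to come from 203.0.113.7.
# The source address is just bytes the sender chooses.
import socket
import struct

def ipv4_header(src: str, dst: str) -> bytes:
    # version/IHL, DSCP/ECN, total length, id, flags/frag offset,
    # TTL, protocol, checksum (left 0 here), source, destination
    return struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20, 0, 0, 64, socket.IPPROTO_TCP, 0,
        socket.inet_aton(src), socket.inet_aton(dst),
    )

hdr = ipv4_header("203.0.113.7", "198.51.100.1")  # spoofed source
print(socket.inet_ntoa(hdr[12:16]))  # 203.0.113.7
```

Routers forward packets based on the destination field; the source field is never verified end to end, which is why "trace it back to the emitter" doesn't work in general.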


Packets are physical objects. You can theoretically trace light back to its source by measuring its amplitude at various points.

All the government has to do is "own" the physical components involved in wireless transmission, and prohibit people from using their own tools. Does anyone doubt they are capable of pulling this kind of shit if they had enough money and the right scare? I mean isn't North Korea already doing this?


>Can't we trace packets back to a physical emitter?

Most data on the web is sent through so many servers and cached and archived in so many places around the world that it makes no sense trying to tie it to a single locality.


"Most drugs are traded between so many people and stored in so many homes around the world that it makes no sense trying to tie them back to a single locality".

That's true, but that never stopped governments from regulating and controlling them and shutting down big drug operations. I don't see why they wouldn't attempt to do the same with "illegal" data (even if data is orders of magnitude less traceable).

I'm afraid that the mere fact that all data has a physical starting point is enough for governments to try and claim jurisdiction.

Most people seem to be fine with schedule I drug prohibition. Then again, data is the one drug that we're all addicted to.


> I don't see why they wouldn't attempt to do the same with "illegal" data (even if data is orders of magnitude less traceable).

They do. Warez/Pedophilia/Drugs/Exploit sites are regularly shutdown and seized by the police.

It's just not possible for Google/Amazon/Facebook. The police ain't gonna go there, seize the datacenters, in a simultaneous operation where they arrest and charge the 1000 people at their national HQ office at the same time.


I'm a strong believer in personal privacy, and I also think that there are reasonable arguments for "forgetting" (in the sense we're using it here) information about individuals under some circumstances, sometimes even if that information is correct.

However, this article makes a valid point. When operating in Europe, of course you have to comply with the European legal framework, and the meaning of "operating in Europe" is reasonably clear and has absolutely nothing to do with accessing data via a European national TLD. However, absent relevant laws or other agreements, I see no reason at all that what information people in one nation provide to people in another nation should be constrained by the wishes of people in a third nation. That sort of extra-territorial influence would set a very unhealthy precedent, for exactly the kinds of reasons the author set out here.


I think "The Right To Be Forgotten" is misleading in its use of the passive voice. A more accurate description would be "The Right To Force Other People To Forget".


Other people's machines, you mean.


How's that? How is asking google to remove me from their indices forcing other people to forget? Where exactly is the "force"?


> How is asking google to remove me from their indices forcing other people to forget?

"Asking" is very different from "legally compelling".

If we buy the use of the "forgetting" terminology, it is quite clear how forcing google to expunge someone from all records is "forcing other people to forget". This isn't a complicated analogy, so I'm not sure what you're confused about.

> Where exactly is the "force"?

EU Law. Is that not manifestly obvious?


Your analogy is flawed. Being removed from a Google index because of a court order != being wiped from people's memories by force. As far as I know, the technology for wiping memories doesn't exist yet, and unless you are equating the Google index with some kind of exocortex, the analogy is quite a stretch (and even then it's a stretch).


Obviously the "people" in this case are the proprietors of news websites, web indexers, etc. If we look at the intent of the law (causing someone to be erased from public view), we could also argue that the public is being forced to forget in a moderately less concrete sense. I use the "memory wipe" analogy because the people advocating for censorship are using the euphemistically passive "forgetting" analogy. Not my choice, I'm just keeping it consistent.


"Force" would be misleading; it's more like "enable".


Google already has the capability to remove results, so they do not need to be "enabled" by EU law. I'm not really sure how you imagine this works, but the "right to be forgotten" refers to various legal frameworks in the EU for forcing companies to delete data that pertains to an individual.


It doesn't actually. There are no laws that create or define the right to be forgotten. It was a "right" created out of nothing by a new court interpretation of a very vaguely worded part of the ECHR. Before the court invented this right nobody was even talking about it.


Google seems to be happily applying the DMCA to the whole world.


I don't think they're happy about it. They've funded a lot of anti-DMCA stuff under "chilling effect" branding. The DMCA is a horrible law, so I don't think we should use it to justify similar takedown behavior under other laws.


This really is about Google putting a foot down on territorial limits of enforcement. Without those we will be living in a lowest common denominator world.

The stance they are taking is important beyond this specific issue e.g. warrants wrt. data stored in foreign data centers and law enforcement hacking foreign computers.


No it's not. This is about google wanting to maintain control over someones data just because they are in a different jurisdiction.

If google succeeds, the only thing that will happen is more walled off sections of the internet as countries move to control the flow of data.


Although French, I tend to agree with Google. But what does this mean:

> Aside from anything else, it’s plain common sense that one country should not have the right to impose its rules on the citizens of another, especially not when it comes to lawful content.

Lawful content? According to whom? By saying that, Google seems to agree that there is such a thing as a universal lawfulness, which is the point the CNIL is trying to make.

If there was a country that considered child porn to be completely OK (and there may be), would Google let an American citizen visiting that country search and find child porn on the local Google?


The "right to be forgotten" does not apply to illegal content. If it was illegal it could just be removed from the origin servers (if in the EU) and the publishers punished. The right to be forgotten applies to legal, true content which nobody will be punished for, but which the person it's about would prefer to go away.


Google should be entirely irrelevant when it comes to the right to be forgotten. That right should be used to allow users to delete content from websites. Who or what points to those websites is irrelevant. This is a broken piece of legislation based on a broken understanding of how the internet works.

There are already laws about defamation etc. which could be used to censor/remove defamatory articles aimed at your person or business.

So RTBF shouldn't apply to content about you, only to content by you.


Google is right on this one; what the French are effectively doing is European Imperialism 2.0. This is far, far worse than whatever Russia or the Chinese great firewall does. Worse, if Google does implement this worldwide, then it's game over for the internet as we know it. The next hot business would then be building platforms to allow governments to interfere with tech companies under the false pretense of "privacy protection".


You absolutely hit on all points. The Great Firewall of China sounds sane in comparison.


Police Google as much as you like. They can remove their results worldwide, but not the data itself.

I'm confident that there will always be a search engine available that gets around this law. Perhaps we will even see some specialized tools to show only results hidden by Google.


I think this argument is flawed. Analogy: there will always be people who steal stuff; this doesn't mean we should accept the practice, and discard any preventive measures.


I believe your analogy might be flawed. When theft is prevented, the downside is that "thieves" don't end up with more things. But when you have the right to be forgotten, you may also remove other things. For example, in an election, a candidate's running mate could easily decide "I hate this person, I want all of my work with them to be forgotten", which would severely hurt that candidate's media coverage (sure, you can still find articles about that candidate in some archives, but many voters do not work that hard to educate themselves).

But that isn't even what your parent comment was talking about. They were talking about the whack-a-mole game of censorship. If you put a law in place to prevent theft, it's not like thieves start setting up places for all other thieves to come steal. On the other hand, if you censor Google Search results, there's a good chance you'll see, e.g., employers moving to another search engine specifically for finding results people have requested to be forgotten.


Well, granted, my analogy might be flawed, but only to a second-order approximation. The original argument, however, is flawed at the first order.


What is annoying about Companies House in the UK is that when you open a company, and even after you close it, they make your name, your address, and your date of birth public on the internet, thus inviting identity thieves to defraud you. It would be enough to publish just the company name and your name. Moreover, they pass these details to third parties for over a decade.

Search engines should ban these websites: beta.companieshouse.gov.uk duedil.com companycheck.co.uk companyformationuk.co companieslondon.com


Similarly, when you publish a website in Germany you have to disclose your name and address on your website.

The idea is that everyone is responsible for what they are publishing. Sadly, the danger of fraudsters wasn't taken into account when this law was made.


> Google believes it should not.

It's always funny to see someone write that a company "believes" anything. It's a party line. With a company as big as Google, with its 10,000+ employees, it's downright strange.


Dear Google: maybe if your data collecting hadn't gotten so incredibly out of hand, this wouldn't be an issue. But people are starting to see how little control they have over their information, and they are beginning to freak. the-fuck. out. Now you have to rein it in.

Before, being a giant global company meant only more customers and more places to hide your profits. But now there are global consequences too.

So don't try to make us feel bad about your regulatory issues. We have our shit to worry about.


It is absolutely a fundamental right to be able to be forgotten. By court order especially.


The Right to be Forgotten is a huge barrier to entry for new search engines.


I'm really not a fan of the new brand gTLDs. It's very pro-centralization. :(


.google??? Really?


>"That’s why, for much of the last year, we’ve been defending the idea that each country should be able to balance freedom of expression and privacy in the way that it chooses"

It's clear now that Google's motto of "do no evil" was a cynical joke. They left out the preceding lines: "see no evil, hear no evil".


Another take on this is that Google, an advertising company, is regretting removing about 780k links of potentially very interesting content. This is a very slippery slope for them, because it threatens their core business.


> Aside from anything else, it’s plain common sense that one country should not have the right to impose its rules on the citizens of another, especially not when it comes to lawful content.

That's just not true. A country has the right to impose its rules on you if you want to do business in that jurisdiction. Of course, no one is forcing Google to do business there, which they did not mention in the post.


> To be clear: we are not disputing that Google should comply with the right to be forgotten in Europe. We have worked diligently to give effect to the rights confirmed by the European Court of Justice. We have delisted approximately 780,000 URLs to date and have granted fast and effective responses to individuals who assert their rights.

But you haven't done that for the United States, which makes it clear Google wants to play Big Brother wherever it is not legally disallowed from doing so. Otherwise, Google would have proactively implemented this for everyone across the board, and where that is not legally possible, Google would have diligently lobbied the authorities, as it does when something is in Google's interest.

The whole thing is a farce; if Google weren't legally pressured, they would never do this, because it cuts into their business model of living off of people's privacy. "Every country should be able to choose" is a clever attempt at opinion manipulation, but not clever enough; there should be no attempt at manipulation to begin with, Mr. Global Privacy Counsel. As a privacy counsel, you shouldn't even be making these arguments!


The RTBF is not a valid right; it is censorship, and it is at its core contrary to free speech and the freedom of the internet.


So you are saying that if Google finds, indexes and caches my private information without my explicit permission or knowledge, it is okay for that data to be available to everyone indefinitely?

Well, I disagree vehemently, especially in cases where a third party, such as a for-profit corporation, violates a fiduciary relationship without my knowledge and sells that data, and Google mines it and makes it available. That's just wrong!



