Insights on the First Three Years of the Right to Be Forgotten at Google (elie.net)
110 points by scrollaway on Feb 27, 2018 | 83 comments



The article omits what I think is probably the most important datum: '0.2% of requesters accounted for 20.8% of all accepted delistings. Many of these frequent requesters were individuals using law firms and reputation management services.' That's a total of about 214,000 URLs deleted on account of 1,000 individuals. The most prolific requester asked for 5,768 URLs relating to them to be removed. It's unstated how many of these were approved.

The sort of individuals using law firms and reputation management services to try to erase information about themselves from public access are generally going to be the individuals whose information has the most probable public utility. But they're also the individuals who can most threaten Google with lawsuits and other difficulties should it not comply.

The internet is something that will live on long after we're all dead, and it is creating a living chronicle of the times. The more this chronicle is interfered with, the less valuable this information will be. Einstein's 'marital demands' [1] (not what you might think) of his wife would almost certainly fall within this 'right to be forgotten', yet they now provide an all-too-rare glimpse at the less 'sculptured' life of one of the most important figures in history. This yin-and-yang, good-and-bad nature of people helps provide a more balanced, less idealized view of actors in the past, and the present.

Perhaps the thing becoming ever more clear is that search is something that needs to be decentralized. It's all too easy for the player in control to exploit the data for their own personal benefit, and they also end up working as a single point of failure for access to said information.

[1] - http://www.openculture.com/2013/12/albert-einstein-imposes-o...


> The internet is something that will live on long after we're all dead, and it is creating a living chronicle of the times. The more this chronicle is interfered with, the less valuable this information will be.

The primary sources are still available, aren't they? It's only the Google result that is removed. Future historians could store everything and run their own search engines on top of that.


Maybe Google can "unforget" after people die.


Does the right to privacy end with death?


While not an absolute, in general, yes, it's a legal doctrine that quite a few privacy issues and certain rights do end with death.


What harms could happen to someone who is dead?


You can most definitely harm the family of the dead person.


They should send their own request in. The request should be evaluated with respect to the living individual's circumstance, without regard to the deceased.


> The internet is something that will live on long after we're all dead

Yup, but how much of what’s online today will? Bitrot kills, and it seems like a lot of today’s content will be gone long before we are. By conflating the net with the content you’ve avoided that key issue.


I think there is a substantial difference between things that are, let's say, naturally 'forgotten' and things that the powers that be attempt to force society to 'forget.' I completely agree with you that bitrot is a major issue, but it's a different issue with different resolutions. And it's fortunate that a number of sites are making some remarkable effort to preserve the internet, including organizations like the Internet Archive. However, might these sorts of rules eventually make it unlawful for the Internet Archive to even allow users to search their archive for certain things that governments decide ought to be forgotten? This all just feels as though we're inching towards dystopia.

---

As a related aside, one tool I think people should really use much more often is date filtering in searching. This [1] is a Google search for 'Iraq' from January through March 1, 2003. The US would invade Iraq on March 20th. All of those results are 15+ years old now. I'm sure much has been lost to time, but fortunately there is also a lot that has not.

[1] - https://www.google.com/search?tbs=cdr%3A1%2Ccd_min%3AJanuary...


Einstein was a highly public figure; it's unlikely that such things would be removed from public view, or that a lawsuit to that effect would be winnable.

The information that parties like Google hold goes far beyond what you see publicly, and it's not being retained for your benefit; it's being retained for Google's, and Google's goals and yours are not in alignment.


To me, the most concerning thing is the volume of delisting requests from government officials. Democracy requires transparency. If you've committed crimes, had financial judgements against you, or even been accused of sexual misconduct, I want to know about it.

Powerful or high-net-worth individuals or corporations requesting delisting should face a much higher bar.


> had financial judgements against you,

You had me with the other two, but I don't know about this one. In the absence of criminal financial malfeasance, or deliberate mishandling of instruments an official had some responsibility over as an individual acting in the public trust, I can't think of a compelling argument for why this needs to be the public's business.

I can only presume, and I hesitate to put words in your mouth, but is that probably what you meant?


Exhibit A: Donald J Trump. Very high number of civil judgements against him (over 3000), indicating a highly unreliable person prone to con-artistry. While not having been criminally convicted (although he settled the TrumpU fraud case), the sheer weight of the judgements suggests with high confidence he is not to be trusted.

Do you think stories about the thousands of judgements against him should have been delisted from the public prior to the election? I'd argue they are materially relevant to voters. The same goes for his tax returns, and the private financial dealings of political candidates.

IMHO, if you run for public office, you deliberately opt in to public scrutiny of your private financial and civil court history.


>Do you think stories about the thousands of judgements against him should have been delisted from the public prior to the election?

Shotgun reaction (knee-jerk is probably the better phrase): no, but Trump is such an exigent outlier here because of his immense footprint in various financial endeavors. Apply the same scrutiny to every citizen who steps up even to serve on city council, take it to its extreme conclusion, and you have John Q. Public getting grilled over his financials because someone found out he accidentally bounced a rent check in 2001 when he was in college.

But that's also why I made the qualifier of someone with a fiduciary responsibility to certain public trusts. Trump clearly fits this as someone with vested interests in a very large (private) corporation and other business interests.

Don't get me wrong here: I definitely agree with and value a thorough review of a candidate's worthiness for office. I'm just wary of the path we have to go down when we start treating an individual's finances as an indicator of their fitness for public office.

Said a better way: I'm taking the extremely long view of this and having reservations about our ability to remain objective on the matter. So if we scale back from the extreme conclusion of Mr. Public and his bounced rent check from 2001, what does the middle ground look like? Lay out the framework for how the logic ladder works when asking candidates for public office to disclose their financial 'worthiness' (read: I'm using this separately from the obvious disclosures one may already be morally obliged to make, such as relations one's business may have to government entities one would invariably preside over), and you may end up convincing me to change my mind.


I think there is a difference between allegations and judgements. Judgements are ultimately a public record in the United States.

Stuff like credit records are different, but even then I think transparency goes a long way. Asking Google to forget your unpaid rent has some meaning, but it also doesn't meaningfully address the issue -- private entities will privately keep track of check and credit issues.

IMO, once you do something, the genie is out of the bottle in terms of the hit to your reputation. Obscuring it isn't necessarily helpful. Perhaps the real solution is figuring out how to stop applying harsh moral judgements to mistakes. If you bounced a check or were involved in a crime in 2001, and it's now 2018 and you don't have a pattern of misbehavior, someone should either not consider that long-ago action or have to positively explain why that event was a factor in their decision process.


> Perhaps the real solution is figuring out how to stop applying harsh moral judgements for mistakes.

As Tversky and Kahneman showed, that is something that humans seem to find incredibly (maybe even intrinsically) difficult.


I definitely don't consider whether an individual was late on his credit card bills or got evicted for unpaid rent to be a strike against his worthiness for public office; in fact, it might even be a benefit, in that he knows the problems of financial hardship and can empathize with the electorate.

That's why I mentioned [sic] "high networth" individuals in my original post. I weigh failure to pay your small bills when you inherited millions or are worth billions as far more indicative of bad management skills or unreliability than a poor person missing payments.


That's why I mentioned [sic] "high networth" individuals in my original post.

I suppose you'll have to pardon my cynicism about how exploitable the general public is, or rather how exploitative 24-hour political coverage is; I worry that this will just end up in pronounced levels of Tall Poppy Syndrome spreading through the population.

https://en.wikipedia.org/wiki/Tall_poppy_syndrome

Edit: Let me clarify something before it gets confused. I am not suggesting that the sort of inquiry into finances you're proposing is itself an example of Tall Poppy Syndrome. I am saying that, left unchecked, the sort of thinking that goes into "we need to use financials to vet the worthiness of our prospective politicians" could end up overcorrecting the pursuit of integrity and objectivity in office into many of the symptoms endemic to the Tall Poppy phenomenon.


Honestly, I don't think that ever becomes a problem in a sufficiently transparent and competitive democratic system. If there is true selection pressure, skill rises to the top. It also seems to me that if people are "superior" to others, they should be able to deal with what your link talks about.

It therefore feels to me a lot like the pseudo-meritocratic rationalizations of people privileged by birth, environment, or genetics (like us on this website, statistically speaking) who don't want to learn how to deal with people who won't blindly accept their status. Yes, sometimes that is annoying, irrational, inefficient. But it is also important, because no matter how smart we are, we might be wrong.


I'm not quite sure whether "accused of sexual misconduct" is that great either. Anyone can accuse anyone of anything, and especially on the internet, these things have a way of taking on a life of their own. And especially in the case of public figures, these accusations can be an easy way to blackmail someone or bring them down.

Not that I'm in favour of Google acting as policeman, but these things are just pet peeves of mine that I think we should be careful with.


Given two headlines:

- "foo suspected of sexual misconduct" @t=0

- "foo proved to be innocent in sexual misconduct case" @t+1 mo/yr/whatevs

Guess:

- what people will consciously or subconsciously remember when scanning through Google results

- which one will get the most clicks/views/pagerank at various points in time

That is, if they ever continue and look for the second one. And even if it were the latter, many (most?) people will be subconsciously biased anyway by the shock effect of the foo <-> sexual misconduct tie, which obliterates the "innocent" part.


People aren't proven innocent (at least in the US); they're found "not guilty", which is significantly distinct.


This makes sense, but you also have to think of the other side; there are a lot of people who have been victims of sexual misconduct but there was not enough evidence to convict the perpetrator. Don't those people have the right to REMEMBER what happened to them, and tell people about what happened to them if they want to?


Totally, and I'm in favour of people figuring out what the facts are by themselves over Google doing that on their behalf. My call was more for those people to keep my comment in mind when they're doing so.


Right, but if I write a blog post about my experience with someone, don't I have a right to have that blog post indexed by google if I want?

In our modern world, 'people figuring out what the facts are by themselves' means searching via google. How else are people supposed to figure out facts for themselves?


Yes, that's what I'm saying - I agree. It's just that when people read those blogs, they should not equate that to "proven to have happened", which is what the person above appeared to be doing.


Yes! To be trusted with power, there must be transparency. The weak, conversely, need privacy, to help protect themselves from oppression.


Doesn’t that logically lead to the weak forever remaining that way? I actually agree with your premise, but having the two side by side like that made me curious about the interplay.


The weak need privacy to learn, organize, and so on. But as they become powerful, they lose some of their right to privacy.


In your opinion, what constitutes "some"?

At the end of the day, we are still talking about people who have the same basic Constitutional rights as everyone who isn't an elected (or appointed) official, right? The moral and ethical scrutiny is certainly higher than it would be for you or me as mere voters, and the legal scrutiny is unquestionably higher; this much I am not here to dispute or question.

It is this:

How do you build a bridge between holding your officials to a high standard and ensuring their ability to conduct the tasks of governing with integrity, without ourselves resorting to destroying certain pillars of our legal system, either prescriptively or descriptively, to do so?

I understand fully that this possibly opens Pandora's box to all sorts of logical conceits and philosophical interrogatives about society, but I also believe this sort of dialogue cannot be had in earnest without doing so.


By "some", I meant transparency proportional to power. And what I describe is common in societies like the US. Presidential candidates are expected to release their investments and income tax returns. Federal judges are required to disclose conflicts of interest, including their investments. Political contributions must (in theory) be disclosed. Reporters are free to investigate public officials, and to publish their findings.

Even normally private sexual activity is discoverable, as evidence of questionable moral character. Or at least, lying about it implies more general dishonesty. And overall, efforts to frustrate transparency are presumptive evidence of corruption.

There is the risk, I admit, that the Constitutional rights of public officials will be violated, beyond requirements of transparency. But arguably that's a risk that one knowingly takes in seeking political office.


I think that an official at least cannot hide what they do with the public money. See also: https://publiccode.eu/


Part of being trusted with power is the ability to do things without transparency, to be able to filter what you do and don't share. Done correctly, this increases the trust in you in both public and private matters.


I agree with that in the interpersonal realm. There are downsides to over-sharing. But maybe it was confusing for me to frame it as trusting. It's more like permission to be investigated. Or the absence of penalties for investigation. That is, reporters and police have the right (and indeed the duty) to investigate the powerful.


I am a pretty big supporter of Right to Be Forgotten as a thing, and thought this was an excellent summary of how it's gone so far! When you add that 89% of the requesters are "private individuals", and that the next biggest category, 4.4%, is minors, it seems most of the requests definitely fall within the intended purpose.

Without going into the details, a 43% delisting rate seems quite fair. It doesn't seem to be over-applied by any means, but a good number of requests are indeed being approved. This is a lot better than I expected, to be honest.


I'm the exact opposite: I believe that history should never be rewritten nor forgotten. The increasing "transientness" of information is, quite frankly, extremely scary --- future historians looking back may see only a whitewashed representation of what things were really like.

Besides, Google forgets enough even without being told to:

https://news.ycombinator.com/item?id=16153840


Why should history not be forgotten? It used to be that way all the time, before we managed to build information storage large enough to capture this huge amount of data. And at what cost does never forgetting come? Not all too long ago (1993-1997), there was a large trial against 25 people for alleged child abuse in Germany. (1) All were eventually cleared; one person died in prison before the trial ended. It was one of the largest cases of false accusations of child abuse. The names were all over the press. Should those people not have the right to have that memory fade? To have history be forgotten at some point?

> Besides, Google forgets enough even without being told to:

That was not the case when the laws were drafted and enacted.

(1) Wormser Prozesse (sorry, German only): https://de.wikipedia.org/wiki/Wormser_Prozesse


Should those people have the 'right' to be remembered, falsely, as child abusers? Should anything and everything ever written about the case be expunged? Should individuals be responsible for purging their diaries of such information too? Why not? Someone might upload it to the Internet in the future and, because everyone else officially 'forgot', who will remain to remember that this is officially prohibited information?


> The increasing "transientness" of information is, quite frankly, extremely scary

Until very recently, information has always been transient, except that which we put great effort into preserving. The idea that every bit of data, every photograph, every snippet of text should be immortalized and exhaustively searchable is only a few decades old and has in no way been proven to be 'better'.


>…few decades old has in no way been proven to be 'better'.

As long as someone can leverage such information for any sort of 'gains', the incentives to store/collect information with these capabilities will persist and grow (esp as it becomes cheaper to do so).

It will be interesting to see what kinds of databases pop up in the future with information on those who seek to scrub themselves from sites. I know the project I was working on had companies wanting to pay us to remove information we mined/crowd-sourced about people, which we declined, on the principle that we found it increasingly valuable for people to be able to leverage this information on whatever contextual basis they chose to view it through.

The Googles and Facebooks have had to comply, since they ultimately take money from advertisers based in the EU (i.e., they have assets exposed to the EU political risks that can come with banking or employing workers there). Future organizations/platforms (from relatively centralized to decentralized) that monetize their traffic by mining crypto on their users' machines won't have to, though.


Sorry but I really fundamentally disagree with this line of thought. My existence and my rights, right now, are vastly more important than some theoretical future information-hungry archeologist.


I’m sure any future historians working on a scale where these delistings would be relevant would be working with tools that would be able to do the equivalent of a content-aware fill on the missing data.


I.e., the tools will make up a plausible story, such as the story that best fits the available evidence excluding the forgotten bits? That's not very reassuring for those who worry about the loss of history.


I’m being slightly tongue-in-cheek, but what history can be researched or delivered in human-digestible form where the histories of the delisted, at the scale you’re concerned about, wouldn’t be so lossily compressed as to amount to roughly the same thing?

Not everyone will be delisted; the usual tools of extrapolation and demography are still available to the historian, and merely because information is not accessible via Google does not mean it is not available.


The key point is that Right To Be Forgotten is very specific about what types of information should be forgotten. (As shown, only 43% of requests for removal were approved.) If you consider the type of content that is required to be forgotten, this concern isn't really reasonable.

Surely you would have objection to posting your social security number here, and entering it into a never forgotten, permanent pool of information for us to reference and use, right?

RTBF specifically excludes any information that has historical significance or relevance to the public interest, so things like nude photos of you that your ex posted are fair game for removal, but the corruption scandal a politician is hoping to cover up is not.

As another commenter pointed out, the history of the mundane (such as the things RTBF permits removal of) is so voluminous that removing some private citizens' data will not meaningfully reduce the historical value of the Internet (or what remains of it) in the future. I'm far more concerned about valuable sites and content shut down due to lack of funding or interest than about the removal of personal information that's harmful to people alive today.


> When you add the 89% of the users being "private individuals", and that the next biggest category, 4.4%, is minors, it seems most of the requests definitely fall within the purpose intended

Not necessarily, from the article:

> The top 1,000 requesters, 0.25% of individuals filing RTBF requests, were responsible for 15% of the requests. Many of these frequent requesters are in fact not individuals themselves, but law firms and reputation management services representing individuals.


As a supporter of RTBF, what do you think about its impact on startup search engines?

I'm advising a startup in Europe -- 3 people, self-funded -- and am not looking forward to them launching and hitting a buzz-saw of RTBF requests.


I am not sure how much they need to worry. Bear in mind, they only need to act if they get a request. Consider how successful your startup's search engine would have to be before individuals and companies would start submitting RTBF requests to them!

People are worried about how they appear on Google because Google is people's default. (In Europe, Google's share is somewhere in the 90% range, I think; you'd know better than I.) People don't want the first impression others get of them in the world's index to be unfair. That's why RTBF doesn't really even bother with removing the original content from the web; requesters just want Google's index cleaned out.

I think if the startup you're advising ever has to seriously worry about RTBF, that startup has already succeeded.

(And, as an aside, pass on my best wishes to them. We need more choices in the search space!)


I'm not so sanguine -- lots of RTBF requests come from aggregators, same as copyright take-down notices. So it's easy to imagine that a startup could start to get a large fraction of the ones sent to google/bing.


Yes, but you have to be on that company's radar before they add you to the list. (Something unlikely to happen en masse before your startup is large enough to handle it appropriately.) And in the case of copyright takedowns, the extreme volume comes in part from the fact that Google and other companies of that size have implemented bulk reporting tools explicitly designed to permit requests of that type and volume, which the average user, or even corporation, doesn't have access to.

I don't know the answers to these, but I'd wonder whether RTBF would allow you to set restrictions on who can submit requests on behalf of whom, or the volume permitted from a given reporting entity, or the level of automation permitted in reporting. Presumably the EU would much rather that you make reasoned, well-thought-out responses to carefully considered and submitted removal requests than run a bulk import/bulk decision process that leads to a lot of incorrect behavior.


Do you have a concrete reason why you think it's unlikely? I have a different opinion about how large a search engine company would have to be before being able to "handle it appropriately", but maybe you have a good argument that trumps my experiences.


I don't think I could offer anything solid, no, beyond the general view that litigious folks generally don't approach a platform before it's significantly large. (Most piracy on smaller platforms goes unharassed, for example, while The Pirate Bay and YouTube get most of the attention.)


A friend of mine runs a video copyright enforcement company. He provides the ammunition that Hollywood uses to send takedowns to a large number of small video platforms for content posted on them.

For a web-wide search engine, it's even easier: you can send a search engine a takedown for any url anywhere, because search engines index the whole web. RTBF requests are similarly relevant to any web-wide search engine.


Strictly speaking, the EU's Court of Justice was just applying the EU Data Protection Directive's rule that organizations mustn't process more personal data than necessary, for longer than necessary. That's a freestanding obligation that exists regardless of whether someone makes a request or not. It just so happens that in practice, courts and regulators tend to act on and enforce it when someone has had to remind a company that it is failing to comply with that obligation and the company still hasn't complied. Hence it looks like something companies only need to act on if they receive a request. Practically, if you're a search engine, that's the only pragmatic outcome anyway: how are you supposed to know, for each and every entry in your database, whether or not it satisfies the criteria to be "forgotten"? Better to have people flag potentially problematic ones for you to then evaluate.


That seems like a problem such a startup would love to have.

In any case, one piece of cost-saving advice would be to piggyback on Google's process: simply check whether they delisted the search result and follow suit. I can't think of a specific law this would actually infringe.


Google takes active counter-measures against the sort of automated queries that this would involve. It's also against their TOS, and might be considered criminal in some places.


Does Google have a public API to check for URL delisting, or are you thinking about bending the search API for that?


If I were to try to implement this, I'd safely assume any link I'm asked to delist, Google was also asked to delist. Therefore, if I can't find it in Google Search with a query where the page in question should be the top or only result, it means Google delisted it.

That being said, I don't feel anyone should assume Google's decisions are inherently right, and do something just because Google did it.
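The heuristic described above can be made concrete with a minimal hypothetical sketch. Everything here is a stand-in: `search` represents whatever (permitted) way you have of getting result URLs for a query, and the toy index plays the role of the reference engine; actually scraping Google this way would hit the TOS and counter-measure problems mentioned earlier in the thread.

```python
# Hypothetical sketch of the "follow the reference engine" heuristic.
# `search` is any callable that takes a query string and returns a list
# of result URLs; a real implementation would need a permitted API.

from urllib.parse import urlparse

def likely_delisted(url: str, search) -> bool:
    """Guess whether `url` was delisted by a reference engine.

    Issue a query so specific that the page should be the top (or only)
    result, then check whether it appears at all. Absence suggests the
    reference engine delisted it, so we may want to follow suit.
    """
    parsed = urlparse(url)
    # A site-restricted query for the exact path should surface the page
    # if it is still indexed.
    query = f"site:{parsed.netloc} {parsed.path}"
    results = search(query)
    return url not in results

# Toy usage with a stubbed index standing in for the reference engine:
index = {
    "site:example.com /article-about-me": ["https://example.com/article-about-me"],
}
stub_search = lambda q: index.get(q, [])

print(likely_delisted("https://example.com/article-about-me", stub_search))  # False: still indexed
print(likely_delisted("https://example.com/removed-page", stub_search))      # True: not found
```

Note the obvious weakness, echoed in the reply below: "not found" can also mean the page was never indexed, so this heuristic conflates delisting with absence.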


RTBF is a nice sounding thing, but feels like it's too easy to abuse to create a censorship regime.

Generally with law, if an abuse is possible, power users will eventually exploit that abuse potential to the fullest extent possible.


>RTBF is a nice sounding thing, but feels like it's too easy to abuse to create a censorship regime.

Privacy for the individual is not much like censorship. That's not a regime; it's a right.

For public affairs we have all kinds of records from the media anyway.

And people should have the right for their past crimes to be forgotten, even murder or whatever. What you did in your 20s is not necessarily who you are in your 50s.

And since public bias and society's gossipy, mean-spirited nature will be with us for the foreseeable future, we can at least kill the technology and "permanent records" that feed them.


Ok, but do people have a right to remember, too? If you murdered my daughter 30 years ago, I have a right to keep telling people about what happened, including about the person who did it. I shouldn't be censored from talking about what happened to me, just because the person who killed my child wants to forget about it.


>Ok, but do people have a right to remember, too? If you murdered my daughter 30 years ago, I have a right to keep telling people about what happened, including about the person who did it.

30 years on? Why? What good can come out of it, assuming the person has served their time (or did it by accident or something)?


But that already does not happen under the GDPR, right? Criminal record keeping still works the way it used to.

And what about the person who was wrongly accused/convicted of murder? He deserves to have his incorrect history forgotten, does he not?


What you all described is the nice sounding part of RTBF.

You can then use the same mechanics of those laws to censor things you don't like, as an organization or a government.

It's the age old argument of spirit vs letter of the law.

The spirit might be privacy; the mechanical letter is actual censorship. It is why I don't think something like RTBF would stand up in the USA: the law probably wouldn't survive a First Amendment challenge.


I think we have way worse things to worry about our governments than censorship of Google results.

Whereas personal privacy is an actual, very real problem for people.


You think it might apply 'only' to Google, but a smart lawyer can make it apply anywhere.

Think of software patents or the DMCA. Sounds nice in theory, a terror in practice. Any tool exposed free for public use, including to some very wealthy sociopaths, will be abused.


I have been following Elie and his team's work for a while now, and enjoyed this read, but I am curious as to why it was posted on his personal website as opposed to Google's Research Blog (https://research.googleblog.com). He is one of four authors and this was clearly not a personal effort of his. Not to mention, it would reach a larger audience if shared by Google directly.


Hey :)

We did have an official post today: https://www.blog.google/topics/google-europe/updating-our-ri... It is focused on the improved transparency report rather than the research paper, and I am sure it will reach a larger audience. You will recognize the infographic that I borrowed from it :)

When we make a paper public, I'm in the habit of sharing a short summary (sometimes with a delay, thus) of what the paper is about on my personal blog. Kurt, Luca, and Yuan helped with the summary in a personal capacity, which is why they are co-authors of the blog post.

Bottom line: this is not the official post (we do have one); it is just a summary a few authors of the paper wrote in a personal capacity to share some of the findings we found the most interesting.


I didn't see the Google Blog post - thank you for pointing me in the right direction! And great work.


I've been wondering: How will Right To Be Forgotten be implemented in blockchain products, where forgetting is supposed to be impossible? Am I misunderstanding something fundamental about blockchains?


Indeed, this is an excellent thing to consider when you are using immutable databases. GDPR is another such requirement that people are now having to struggle with. Essentially, you need to be able to rewrite the DB with the new information. For merkle chains that are in the public record (like Google's scheme to publish certificates so that CA shenanigans are obvious to anyone that looks -- the name of which escapes me at the moment), it's not going to be reasonable. It's not so much the merkle chain, but the fact that the data is already distributed. Getting everybody to re-sign the chain would be pretty hard, but getting them all to delete the data is unreasonable... "Pretty please. We got asked by a judge, you know!"

So essentially, you need to make sure you don't put that kind of information in that kind of DB.
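To illustrate the re-signing problem described above, here is a minimal sketch (plain Python standard library only; the entry strings are made up for illustration) of a hash chain where each entry commits to the hash of its predecessor. Deleting or redacting one entry changes its hash and, transitively, every hash after it, which is why every downstream party would have to re-sign:

```python
import hashlib

def chain(entries):
    """Return the chained SHA-256 hashes for a list of entries."""
    hashes, prev = [], b""
    for entry in entries:
        h = hashlib.sha256(prev + entry.encode()).hexdigest()
        hashes.append(h)
        prev = h.encode()
    return hashes

original = chain(["alice: public fact", "bob: personal data", "carol: public fact"])
redacted = chain(["alice: public fact", "[deleted]", "carol: public fact"])

print(original[0] == redacted[0])  # True  - blocks before the edit are untouched
print(original[1] == redacted[1])  # False - the redacted block itself differs
print(original[2] == redacted[2])  # False - and so does every later block
```

One workaround sometimes suggested for GDPR is "crypto-shredding": keep personal data off-chain (or encrypted on-chain) and delete only the key, so the chain itself never needs rewriting.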

(P.S. It's a bit of a shame that you're getting downvoted for such a good question, but I suggest you substitute "merkle chain" for "blockchain" when you want to ask technical questions of this sort.)


I believe the program you're looking for is Certificate Transparency. To be honest, append-only databases have been around for a while (for example PGP keyservers, which peer and distribute keys relatively quickly). Basically, it is an unsolved problem, except by having seriously smart people who do this stuff for its own sake, hack at the data directly, and are distributed across many jurisdictions and able to communicate effectively between them.


They're incompatible: the 'right' to be forgotten means a 'right' to rewrite history, which is incompatible with an unforgeable, immutable log.


At the next level up, I suspect. Just as the current right to be "forgotten" doesn't remove the source material but only inhibits the lookup-path.


> the current right to be "forgotten" doesn't remove the source material but only inhibits the lookup-path.

Is that true? Google, for example, keeps your data but merely prevents people from seeing it? That's a pretty weak level of protection. In fact, in that case Google (and other companies in possession of your data) don't forget you; they still remember your data - and that is a very large exception.


Sorry, I wasn't particularly clear and I was talking about a slightly different angle on the right to be forgotten.

If there's a news site that reports on something old and normally irrelevant -- like a decades old conviction that's spent -- then you can ask Google not to return that report when someone searches for your name. Google won't go and take down the original page, though, and won't stop it appearing in other searches.

The analogy I was given -- which I like -- is that before online searching, everything was reported in the newspapers. If you knew when something happened, you could quickly look it up in the archives, and if you wanted to know about someone in particular, you had to spend a week trawling. Without the right to be forgotten, small things that previously would be forgotten are much more easily remembered. The original right to be forgotten (pre-GDPR) is about fixing _that_. But the archive would still be there, to be trawled through if required, and even in the GDPR there's an exemption for things like news sites.

The GDPR adds extra protections that are being lumped in the same bucket but are more in line with what the DPA in the UK already required: the right to ask a company to delete personal data they hold on you. GDPR also expands the scope of what counts as personal data that the company would have to delete. But at the end of the day, Google's not going to remove anyone else's pages.


I once read about a project to take the content of the Internet Archive, build a hash tree on top of it, then publish the root on a blockchain. That kind of thing could be used as an indicator of whether the content (any content on the Internet) has been tampered with.
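The idea above can be sketched in a few lines of Python (standard library only; the document contents are hypothetical placeholders): hash each document, reduce the hashes pairwise to a single Merkle root, and publish only that root. Re-deriving the root later and comparing it to the published value reveals whether anything in the archive changed:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(docs):
    """Reduce a list of documents to a single Merkle root hash."""
    level = [sha256(d) for d in docs]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

docs = [b"page one", b"page two", b"page three"]
published = merkle_root(docs)              # this root is what gets published

assert merkle_root(docs) == published                                  # intact archive verifies
assert merkle_root([b"page one", b"edited!", b"page three"]) != published  # any edit is detected
```

Note this only detects tampering; it cannot say *which* document changed without keeping the intermediate hashes as well.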


So are there any search engines that ignore RTBF?

Edit: Public ones, I mean. I'm sure that major TLAs do.


We need to be careful with our words. There is no "Right to be forgotten". Framing[1] denotes the battlefield terrain; if you accept the frame, you've already lost. A better title might be: Insights on the First Three Years of European Censorship of Google.

1 http://www.nytimes.com/2005/07/17/magazine/the-framing-wars....


> There is no "Right to be forgotten".

In the context of Google's search results, according to the European Union's "Data Protection Directive", which is based on an interpretation of the OECD's definition of privacy, yes there is. You can't just declare, as fact, that there is no such thing as the Right to be Forgotten. You may choose not to recognise it, but there is definitely such an idea, and many people believe it is a right.

In a wider context there are many countries where case law implies there is a Right to be Forgotten too, including the US. https://en.wikipedia.org/wiki/Right_to_be_forgotten


Not only is it (to some extent) found in existing law, but from May 25th this year, GDPR Article 17 would like to have a word: http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:O...

It's not just an EU thing, either - several other courts, as far away as Japan, have also imposed similar rights/obligations.


Why do you consider this a battlefield and why are you on the side of a multinational spying corporation?



