Hacker News

This is not about enforcement of copyright, or defamation, it's about an individual's ability to censor true information simply because that information happens to be about them.

No, it isn't. In fact, the original CJEU ruling was quite clear that the right to be forgotten is not absolute and does have to be balanced against competing considerations like freedom of expression on a case by case basis.

However, data protection laws in Europe are much stricter than in some places when it comes to things like incomplete or out of date information. A key detail in the right to be forgotten ruling is that it is not only the original source of such information that bears responsibility for it, but also search engines that make such information easier to find.

The point is that while such technologies are useful for finding good information, they can also greatly magnify any damage caused by bad information. A search engine that links to a newspaper article describing how someone was accused of rape won't necessarily also link to the follow-up article describing how the key witness later admitted in court that they made the whole thing up and was found guilty of perjury themselves. And yet, a prospective employer or girlfriend or whoever else might have reason to look that person up by name won't know that and might reach entirely the wrong conclusion if they only saw the first article. More significantly, thanks to modern searching technologies, everyone who might have reason to look that person up by name is likely to see the same article with the same missing follow-up, with potentially catastrophic consequences for the entirely innocent subject's life.

That's an easy example that surely no-one is going to seriously argue against, because it's very clearly misrepresentative and the potential harm is obvious. Once you get into information that is factually correct but either no longer fairly represents an individual or is concealed for good reasons anyway, things tend to get more complicated. There are reasons that justice systems typically regard even most criminal convictions as spent after a certain period of time and no longer require disclosure beyond that point. There is a reason that courts sometimes order the identities of people involved in cases before them to be concealed. There are many other good reasons for confidentiality and privacy protections. In any of these situations, while whoever is disclosing the information in the first place may be the original cause of any resulting harm (and may be punished accordingly under the law), it remains the case that a search tool making the information much quicker and easier to find magnifies any resulting harm considerably.




> The point is that while such technologies are useful for finding good information, they can also greatly magnify any damage caused by bad information. A search engine that links to a newspaper article describing how someone was accused of rape won't necessarily also link to the follow-up article describing how the key witness later admitted in court that they made the whole thing up and was found guilty of perjury themselves. And yet, a prospective employer or girlfriend or whoever else might have reason to look that person up by name won't know that and might reach entirely the wrong conclusion if they only saw the first article.

But the answer to that is not to try to suppress the first article, but rather to draw attention to the second.

But that is really missing the point. If France wants to deal with this issue through censorship that is France's prerogative. The point is that France should not be able to censor information outside its own borders.


But the answer to that is not to try to suppress the first article, but rather to draw attention to the second.

You're assuming the second exists, but that is outside the control of sites like search engines that are just linking to others' content.

In any case, I don't agree. If the first article is incorrect or misleading, I have no problem with suppressing that article rather than unfairly damaging the person who was described in it. This is not censorship any more than saying you'll arrest someone who shouts "Fire!" in a crowded theatre just to cause a dangerous panic or someone who phones in a false bomb threat just to disrupt a big public event.

Words are powerful things. People who abuse their ability to speak and in doing so cause unjust harm to others need to be treated appropriately by the law. People who propagate or perpetuate those harmful words are contributing to the harm themselves and also need to be treated in some appropriate way by the law.

The point is that France should not be able to censor information outside its own borders.

We agree on the underlying principle here. I wrote a little more about this in my first comment in this HN discussion.


> You're assuming the second exists

You stipulated that it did. If it didn't, then on what basis do you determine that the first article is wrong?

> We agree on the underlying principle here.

Maybe we should just leave it at that.


If it didn't, then on what basis do you determine that the first article is wrong?

That brings us back to where we came in: the "right to be forgotten" measures provide a way for someone who is suffering harm as a result of information being spread about them to notify the likes of search engines that that information should not be spread about them.

Having been so notified, if the search engine continues to propagate the information anyway, it can't say it wasn't warned and it's on the hook for any harm it causes. It's a similar principle to various other "safe harbour" style laws that have been written in recent years, trying to balance the benefits that derive from having these powerful, automated services available online with the dangers that they inherently bring if they act unfairly or get something wrong. The consensus so far seems to be that such services shouldn't necessarily be on the hook for automatically generated or user-supplied content by default, but making them provide a clear notification mechanism and accept responsibility once they have actual knowledge of something going wrong is a fair trade for that safety net.


> provide a way for someone who is suffering harm as a result of information being spread about them to notify the likes of search engines that that information should not be spread about them

Yes, of course. But whether or not an individual should be able to censor true information about themselves, even if that information is harmful to them, is highly debatable. I'm happy to engage in that debate if you want, but that is a different issue than the one which I originally raised, which is the question of whether a country's power to censor should extend beyond its own borders. The answer to that is IMHO an unequivocal "no".

(The answer to the question of whether a person should be empowered to censor true information about themselves is also IMHO "no", but that is not nearly so clear-cut. Like I said, if you want to engage on this topic I'm happy to have that debate as well. But let us not conflate the two issues. They need to be kept separate or we will end up in a hopeless muddle.)


Just to be clear, my original comment wasn't disputing your point about international enforcement (on which, once again, it seems we agree on the main principle). I was challenging your characterisation of the RTBF as "an individual's ability to censor true information simply because that information happens to be about them".

The original CJEU ruling was pretty clear that, in the court's opinion, there were some circumstances under which restricting the promotion of personal information by search engines, even if that information was true, could be justified. However, the court was also pretty clear that merely disliking the fact that some information about you was being shared was not sufficient grounds for the law to restrict that sharing.

I'm happy to debate more specifically the circumstances where that sort of restriction might be reasonable, but if you believe as a philosophy that freedom of expression and the right to say true things should always outweigh an individual's right to privacy then it's unlikely we're going to find much common ground there. I don't personally believe in an absolute right to free speech regardless of the consequences. I do believe that there are many good reasons that privacy is important, and sometimes that it is the more important of the two.

To give one concrete example, I think it is a general good that anyone -- including public figures with influential and highly stressful jobs -- should be able to seek help for mental health problems if and when they need to, without fear of suffering criticism or abuse for doing so. That makes respecting the privacy of those who do seek help important, regardless of the truth of any statement that they are doing so.

To give another example, the evidence appears to be overwhelming that punitive "justice" systems are far less effective at reducing crime and improving general quality of life than systems based on the rehabilitation of offenders. A huge part of that rehabilitation is allowing past offenders an opportunity to move beyond their old situation, to "turn over a new leaf". It's pretty difficult to do that if stories of the chocolate bar you stole from a store aged 18 are still keeping a very different you out of good job opportunities aged 28 or 38 or 48. Again, it might be true, but there is a strong public interest in concealing that fact once it is no longer relevant to understanding your character, which is exactly why concepts like spent convictions and probationary periods exist in many legal systems. In the age of an Internet that never forgets, that concealment becomes a lot harder, but the evidence for its benefits not just for the individual but for society as a whole is still the same.


> I was challenging your characterisation of the RTBF as "an individual's ability to censor true information simply because that information happens to be about them".

I stand by that characterization.

> The original CJEU ruling was pretty clear that, in the court's opinion, there were some circumstances under which restricting the promotion of personal information by search engines, even if that information was true, could be justified.

Yes, I get that. And I recognize that this is a defensible position. I even sympathize with the motivation behind it. But I disagree with it because 1) it's too easily abused and 2) it doesn't work.

> seek help for mental health problems if and when they need to, without fear of suffering criticism or abuse for doing so.

Completely agree, but the answer to that is to not let that information get out in the first place. Once that particular cat is out of the bag getting rid of the Google listing won't help much, especially if you're a public figure.

Actually, the real answer here is to remove the societal stigma associated with seeking help for mental health issues in the first place. And one way to promote that process is to reveal that respected figures have sought help for mental health problems, just like the cause of gay rights was advanced by revealing that well known and respected people were gay. I'm not suggesting forced outing, just that the ideal situation is for all of this to come out of the closet.

> the chocolate bar you stole from a store aged 18

What about the person you raped when you were 32?


I stand by that characterization.

Then with due respect, your understanding of RTBF is objectively incorrect. It was never about allowing individuals to censor information about themselves just because they don't want it to be shared. You can read more from the EU itself:

http://ec.europa.eu/justice/data-protection/files/factsheets...

But I disagree with it because 1) it's too easily abused and 2) it doesn't work.

Is there actually any evidence that it's easily abused, or is that just an assumption?

For that matter, why doesn't it work? It seems clear that promoting information via a search engine will typically give that information much wider reach than just, say, publishing it on a blog somewhere or having an article on the web site for someone's local newspaper. If that information is incorrect or misleading, if we accept for the purposes of this discussion that distributing that information is therefore undesirable, and if we accept that the search engines themselves report that many links have been removed as a result of RTBF, how can the current situation not be having the intended effect?

Completely agree, but the answer to that is to not let that information get out in the first place.

Ideally, yes. I just don't think that means you give up on trying to limit harm to an individual, particularly an innocent one, if someone does screw up and let the cat out of the bag.

Actually, the real answer here is to remove the societal stigma associated with seeking help for mental health issues in the first place.

That's a noble goal, and one I fully support. However, I don't think it's realistic to expect everyone in society to treat everyone else objectively and fairly when faced with potentially negative information about them, at least not any time soon. Sadly, there is still far too much ignorance and prejudice in society for now. However, privacy is an effective practical tool for reducing the amount of unfairness that results from that kind of ignorance and prejudice.

Also, as I mentioned in another post, it's impossible to reliably achieve fair outcomes after initially encountering negative information, even with good intentions all round. If nothing else, someone who looks up an individual and initially finds negative information about them won't (and can't) know for sure whether there was also some later positive information that they should take into account as well. Short of reliably data mining and classifying the sum of all human knowledge so that any search that turned up negative data immediately highlighted the overriding positive data as well, there is no way to solve this problem 100% of the time.

What about the person you raped when you were 32?

When it comes to disclosures and background checks, there are usually some types of offence (typically violent or sexual ones) that are deemed sufficiently serious that convictions for those offences never become spent and will turn up on background searches forever, even if the legal system does consider lesser offences to be irrelevant after a certain period of time. I don't see why the same principles shouldn't apply for the same reasons in RTBF cases, and I would expect EU courts to do exactly that when applying the privacy and data protection rules.

We're in a shades-of-grey area here, and your example was a much darker shade of grey than mine, so it seems reasonable that there might be a different balance between allowing the past offender to move on and helping others to find information that might be relevant to their dealings with that past offender.


> It was never about allowing individuals to censor information about themselves just because they don't want it to be shared.

Maybe that wasn't the intent but that is the net effect, because the burden of proof is not on the individual to show that the information should be removed but rather on the search engine to show that it should not be. As a matter of simple economics, Google has essentially no choice but to comply with all RTBF requests whether or not they have merit, just as it has to comply with all DMCA takedown requests whether or not they have merit. Google doesn't have the resources (nor should it be expected to provide the resources) to actually adjudicate every case on its merits.

> Is there actually any evidence that it's easily abused, or is that just an assumption?

It's a prediction.

> it's impossible to reliably achieve fair outcomes after initially encountering negative information

Why? If you can pass a law forcing Google to remove links, why could you not pass a law forcing news outlets to update stories?

> We're in a shades-of-grey area here

Yes, of course, that's the whole point. What if you were never convicted of the rape even though you were clearly guilty? (Should Michael Slager have a right to force the internet to forget that he actually shot Walter Scott in the back, despite the fact that the jury did not convict him?) What if you were actually innocent even though the evidence against you seemed overwhelming? Who decides which cases fall under the auspices of RTBF? The answer is: you do. Why? Because it's cheaper and less risky for Google to simply comply with all takedown requests than to actually try to adjudicate each request on its merits.


As a matter of simple economics, Google has essentially no choice but to comply with all RTBF requests whether or not they have merit

From Google's own public statements, they actually reject the majority of requests to remove search results under RTBF:

https://www.google.com/transparencyreport/removals/europepri...

Google doesn't have the resources (nor should it be expected to provide the resources) to actually adjudicate every case on its merits.

Apparently you're wrong on that one, but even if you weren't, I'm not sure I would have a problem with this. Google has a highly visible and influential system, but I don't see why it should be exempt from the same data protection and privacy rules that everyone else has to follow just because it is inconvenient or expensive to comply with them. Remember, the original RTBF ruling from the CJEU was based on existing general principles of fair data processing under EU law, not some special anti-Google legislation.

If you can pass a law forcing Google to remove links, why could you not pass a law forcing news outlets to update stories?

There are laws potentially requiring news outlets to update misleading stories, issue corrections, remove material infringing others' intellectual property rights, etc. This would also be covered by much the same data protection rules, as well as legislation regarding defamation, official secrets, copyright, and so on.

What if you were never convicted of the rape even though you were clearly guilty?

What if you were actually innocent even though the evidence against you seemed overwhelming?

We have a legal process to make decisions on these difficult issues, by holding a trial, and a great deal of thinking by a great many smart people has gone into making that system the way it is. If the verdict reached through such a trial is flawed, there are well established legal mechanisms for appealing it or even correcting it later if new evidence proves it was wrong. Obviously the system isn't perfect, nor can it ever be in the absence of telepathy or something. But Google and the Internet aren't even close to big and important enough to overturn an entire legal system.

And yes, sometimes that legal system results in surprising verdicts. Maybe that's because, unlike Mike from down the road who made a five minute post on his personal blog, the courts actually considered all of the available evidence and followed an impartial process through which an official determination of someone's guilt or lack of guilt was made. But that verdict will be regarded as the final decision for any other purpose, including say a defamation lawsuit also brought in a real court. So an argument that someone should be allowed to just say whatever they want and use a system like Google to magnify their influence even though a court has already ruled the opposite isn't going to be very convincing to me (or just about anyone else) I'm afraid.

Who decides which cases fall under the auspices of RTBF? The answer is: you do.

Every time you deal with someone else's personal data, you have a legal (and, I would argue, moral) responsibility to do so fairly. If you don't, the law may punish you. I just don't see why the RTBF is different to any other aspect of the relevant law in this respect.


> they actually reject the majority of requests to remove search results under RTBF

Slightly over half. But we really have no way of knowing how many of these requests are really rejected on merit and how many are rejected because of some technicality (e.g. some problem with the submitted proof of identity).

> I don't see why it should be exempt from the same data protection and privacy rules that everyone else has to follow

It should not. I'm using "Google" here as a proxy for any web service that falls under RTBF.

> We have a legal process to make decisions on these difficult issues

Yes, of course, but this isn't about that. This is about controlling information. You brought up stealing a chocolate bar as an example of something that should be forgotten, and I countered with rape as an example of something that perhaps should not. The problem is: who gets to decide where the line between petty theft and rape gets drawn with respect to the RTBF?


The problem is: who gets to decide where the line between petty theft and rape gets drawn with respect to the RTBF?

Ultimately, the legislature when it comes to setting general principles and then the courts when it comes to interpreting those principles in specific cases, just like any other decision like this.


What I meant was: who decides in individual cases. Obviously neither the legislature nor the courts are going to do that. Companies are going to do it, and they are going to do it in a way that minimizes their cost and legal exposure. And that means, under the current law, if they are going to honor their fiduciary responsibilities to their shareholders, they are going to err on the side of censorship. (Note that Google is in a unique position of not actually being beholden to their shareholders. Because of the way the company is structured, Larry and Sergey have absolute control, just as Zuck has absolute control over FB. But those are extremely rare situations for public companies. In any case, I do not want a world where freedom of information depends on the benevolence of a handful of oligarchs.)


Your cure is worse than the disease, and ineffectual at that. What's to stop us from making a search engine for illegal truths?


Relevance is relative. If I'm going to be hiring you, dating you, or having you around my family, the public information regarding your history of immorality and criminality suddenly becomes relevant.

The Internet isn't a conversation between a minority of public figures and everyone else, and the decision isn't whether what they have to say ought to be shared with everyone.

It's a conversation between all of us, and we individually ought to determine what is relevant to us, not censors.

If you even try we will make a search engine exclusively for illegal truths and the things that you want to hide will only become more visible.

Ultimately the Internet belongs to all of us and we won't let you stifle it.


If I'm going to be hiring you, dating you, or having you around my family, the public information regarding your history of immorality and criminality suddenly becomes relevant.

Does it?

Does getting busted for smoking a joint at university really make any difference to whether someone is a good employee a decade later? Should this matter?

Does being merely accused of something once upon a time really make any difference to whether someone is a good employee? Even if they didn't actually do it, and were cleared at trial or never even got that far because the charges were later dropped? Should this matter?

Sometimes people tend to think the worst of other people, or simply let their personal prejudices get in the way. That's why employers already aren't legally allowed to discriminate on some grounds when making hiring and firing decisions, and why sometimes information is legally protected by the courts. So maybe providing tools with the scale and influence of a search engine like Google to feed the prejudices and paranoia of hiring managers without regard to correctness, completeness or context isn't the best idea.

It's a conversation between all of us, and we individually ought to determine what is relevant to us, not censors.

Be careful what you wish for. In this case, one person's censorship is another person's respect for privacy. In a world where no-one gets second chances, how do people ever escape a criminal path and get back on the straight and narrow?

If you even try we will make a search engine exclusively for illegal truths and the things that you want to hide will only become more visible. Ultimately the Internet belongs to all of us and we won't let you stifle it.

Oh, you're going to invent and fill an entire Internet on your own, free from the laws and ethics and practicalities that apply to everyone else? Well, OK, good luck with that.


- Getting busted smoking a joint where it's still illegal is a matter of public record regardless of whether it shows up on Google

- If you are wrongly accused, refute the accusation. It's an unfortunate matter that got cleared up. You might get passed over for a job; try again, put in another application. The extra struggle, unfortunate as it is, is a lesser evil than giving anyone with money to hire lawyers the ability to silence critics.

- Respecting your privacy means not reading your diary or publishing your emails for planet earth to read. Unless the data was obtained unethically, or you had a legal or moral expectation that it wouldn't be shared, there is nothing legally or morally wrong with sharing it. This isn't gray; it's black and white.

- Censorship is when the government, acting on its own behalf or on behalf of others via legal apparatus, silences the citizenry or suppresses their speech. Once again, this is super black and white.

Censorship is a terrible idea that is somehow gaining credence among otherwise educated people. In the first place, it is a gross violation of the rights of the citizenry and a terrible precedent. In the second, while it might silence the little guy in a lot of common cases, the people you ought to be concerned about, the movers and shakers who are likely to make decisions that affect people's lives, can probably afford a source of information that is less likely to be affected.

So you are looking at creating a world where you don't know that Bob, who's dating your daughter, was accused of rape on three separate occasions, but John the hiring manager still knows you were caught with some weed at 19, even if the charges were dropped.

If you win you will have accomplished nothing of note other than making the world a crummier place.


Getting busted smoking a joint where it's still illegal is a matter of public record regardless of whether it shows up on Google

But in a lot of places, the degree to which public authorities will report it in official background checks will fade over time, as will any legal obligation for the individual to disclose the information when, say, applying for a job. There are well established arguments for governments acting this way in their treatment of past offenders, and whether or not search engines highlight past indiscretions that are otherwise considered in the past by the legal system, thus undermining the normal legal position, can have a large practical impact on the individual's future.

If you are wrongly accused, refute the accusation. It's an unfortunate matter that got cleared up. You might get passed over for a job; try again, put in another application. The extra struggle, unfortunate as it is, is a lesser evil than giving anyone with money to hire lawyers the ability to silence critics.

I'm sure that will be a great comfort to the paediatrician who was mistaken for a paedophile by someone too stupid to know the difference, as they go into hiding on police advice while the death threats are dealt with.

Unless the data was obtained unethically, or you had a legal or moral expectation that it wouldn't be shared, there is nothing legally or morally wrong with sharing it. This isn't gray; it's black and white.

Very little in either ethics or law is black and white. In fact, there is an entire section of EU law about processing personal data that shows how far from black and white the legal rules are, and the RTBF ruling from the CJEU was just a particularly high-profile application of those general legal principles.

Censorship is when the government, acting on its own behalf or on behalf of others via legal apparatus, silences the citizenry or suppresses their speech. Once again, this is super black and white.

So you don't agree with any form of defamation laws? You don't have a problem with someone inciting someone else to commit a serious crime so they can avoid getting their own hands dirty? You think someone who phones in a hoax threat before a major sporting event, causing huge amounts of disruption, financial loss and potentially physical harm to many thousands of people, is just exercising their right to free speech and shouldn't suffer any negative consequences?

So you are looking at creating a world where you don't know that Bob, who's dating your daughter, was accused of rape on three separate occasions

That depends entirely on the situation. Were those three separate accusations all made by close friends of Bob's psycho ex, and did any of them actually lead to any sort of trial? Because if not, protecting Bob from unproven accusations that could seriously damage his reputation is exactly why it's important to have strong privacy laws and to make sure that any personal data that is being shared is not misleading.

Now, if Bob was prosecuted for rape, took a plea bargain accepting some lesser sexual assault charge, and is on some official sexual offenders register, then that's a different story entirely. But in that case, it's unlikely that the RTBF would mean Bob was entitled to suppress search engine results reporting that situation either. Again, the world is not a black and white place, and neither are the EU legal provisions on which the RTBF result was based.


Regarding censorship: rape threats, defamation, bomb threats and other obvious and clear exceptions to free speech are pretty well understood and covered by existing law. We are talking about suppressing true statements 1984-style and making the truth disappear ex post facto from the internet. This isn't the same as shouting fire in a crowded theater and you know it.

You still haven't addressed how easy it would be to create a public search engine of illegal truths, nor the fact that there is so much info out there that anyone of importance will likely have access to all the info about people that they might want to hide. You won't be able to get a job at McDonald's without dealing with someone who pays a monthly fee for the truth about you.


We are talking about suppressing true statements 1984-style and making the truth disappear ex post facto from the internet. This isn't the same as shouting fire in a crowded theater and you know it.

No, it's more like a court-ordered anonymity ruling protecting a vulnerable witness, or medical confidentiality, or a government removing old convictions for minor offences from someone's official background checks. All of these are well established principles in other parts of our legal and government systems, and the ethical arguments for them are well understood.

You still haven't addressed how easy it would be to create a public search engine of illegal truths

If you want to do that somewhere outside the jurisdiction of the EU, well, good luck with that. It's going to cost you a fortune for very little benefit, but if you believe so strongly in the ethics of this and your own legal system protects your freedom of speech above any other relevant rights, go ahead. But if you do that in the EU, you're going to jail. And you should probably expect that if you did openly challenge the legal principles of the EU in such a way, the EU actually would respond by censoring your site within its own territory, just as it ruled that Google (who know a thing or two about building a search engine, in case you didn't know) are required to comply with the same data protection rules as everyone else in the RTBF case.

> nor the fact that there is so much info out there that anyone of importance will likely have access to all the info about people that they might want to hide

You keep saying this, but the whole point is that if you have a facility like a search engine making it easy to find that information then any damage caused by propagating or perpetuating the information will be greatly magnified.

The reality is that even in very important situations like a trial in court, a jury will not be told information that the law deems irrelevant or inadmissible in the case. And as I've been explaining in various other comments in this discussion, these legal principles have been developed over a very long period of time and apply far more widely than just Google and RTBF. If you want to challenge them, again go ahead, but you're not just taking on some online free speech issue, you're taking on generations of legal and ethical argument that have brought us to where we are today.

> You won't be able to get a job at McDonald's without dealing with someone who pays a monthly fee for the truth about you.

That kind of argument is exactly why these legal protections are necessary.

Again, it's hardly news that allowing employers to discriminate on undesirable grounds is not healthy for society. We have literally passed laws to protect various categories of vulnerable people against other potential abuses in the employment system.

This is no different, except that it's not even close to the same scale and didn't even need laws of its own, just the same basic legal data protection principles that already applied. If you want to provide that monthly database of "truth" about someone to potential employers of that person, and your database in fact presents a misleading picture of that person because while true the information is also incomplete or out of context or otherwise not telling the whole story, you should expect to be taken to court and you should expect the court to award damages to the person harmed by your misrepresentation of them.

Ultimately it's the same principle that leads oaths in court to say not just "tell the truth" but something like "tell the truth, the whole truth, and nothing but the truth". The problem here isn't someone telling inconvenient truths about someone else, it's misrepresentation that causes harm. Sometimes you can do that even by telling the truth, if you're only telling a partial truth, and our legal system recognises that.


A trial is a very specific situation in which controlling how the facts are presented is deemed vital to an essential function of our justice system. The jurors are a tiny handful of individuals explicitly picked from those who hold no interest in the case, and they are denied free access to public info only for as long as the trial lasts.

It troubles me that you make little distinction between keeping a handful of disinterested parties in the dark for a few months so they can render an impartial verdict based on specific facts and giving the government the power to decide for all of us what truths are fit to be heard.

The latter involves many orders of magnitude more difficulty and potential for abuse.

Your fantasies about fresh starts and protecting innocents from persecution are poorly founded daydreams. You are unwittingly pushing us to lay the groundwork for tyranny and injustice in pursuit of a fool's errand. Not only is this ripe for abuse, it is bizarrely naive to believe that important people will willingly give up the ability to spy on us, collect information on us, and use that intelligence to make decisions about our lives.

Further, where does it end? Must the internet consist only of truths that everyone can agree on? In other words, why is Western Europe special and, say, Saudi Arabia not? Is it because you suppose you are reasonable and they are not?


I'm sorry, but you keep reading things into my posts that aren't there, and as a result you are attacking straw men in your responses.

All I am really arguing here is that if we have laws that protect privacy and limit use of personal data for whatever we consider good reasons in a certain jurisdiction, those laws must apply equally to online sources and data processing within that same jurisdiction. What those laws are and what reasons we might consider good enough to prioritise privacy over other relevant factors is a vast and complicated area, and today's situation was reached after numerous debates by smart and thoughtful people.

You seem to be interested in promoting some sort of anti-government agenda here. Don't trust the government, fear its power over the people, and all that. In other contexts, perhaps I might agree with some of your concerns there. But this issue, the one we're talking about right now, has really very little to do with excessive government power or state censorship. This is simply about enforcing laws we already have, made with good intentions, written in reasonable terms, and with the purpose of protecting vulnerable people from unjust harm. That is no more an abuse of government power than police officers being legally allowed to use force against you if you're literally beating up another person in the street; no one seriously calls that police action an infringement of your right to free expression.


You have already spoken in favor of jailing developers in one jurisdiction because they don't implement the legal censorship you favor in yours, regardless of the laws in their own jurisdiction. Where exactly does THAT road end?

You have proposed we make a secret book of illegal truths unknowable to the general public, one that we cannot even protest effectively without running afoul of the law's prohibition on their publication.

If you simply support delisting such inconvenient truths from Google, then your work is for naught, as I will simply publish a list of such truths. If you support whitewashing the entire internet of them, then you have proposed we implement 1984 for the sake of helping bad people move on with their lives after learning better, backed up by threats of violence and imprisonment from your nation's thugs.

Rather than buying their peace of mind with my freedom, I propose you spend your own nation's money to support programs that will employ, educate, and provide therapy to help them move on.


I have done no such things, and in fact several of my comments in this HN discussion have said very much the opposite of what you described, as anyone who cares to read a few comments up can immediately verify. You're just making stuff up now, and as such I see no value in continuing this thread any further.


You said that if someone were to publish a search engine for illegal truths, they would be jailed in the EU.

You are continually framing the ability to silence others as some sort of right.

You ought to note that such rights normally derive from the obligations of those to whom you entrusted your data in the first place.

Neither your doctor nor his staff may share your medical information. Either explicitly in an agreement or implicitly by law, you gained the right to expect that your information would remain confidential when you became his patient.

When you mug someone and break their face, no amount of jail time and rehabilitation obliges your victim, or society at large, to silence.

You imagine that all of society possesses an obligation that rightly attaches only to those with relevant relationships, like employee/employer, doctor/patient, etc.

You cannot attach such an obligation without grossly limiting freedom, and such an obligation attaches to no human right I can imagine.

Your position isn't merely badly thought out, it's morally wrong. If you can no longer discuss it then I'll drop it.


> That's an easy example that surely no-one is going to seriously argue against,

This is an ineffective legal solution to a social problem.

The Internet lies. Be cautious when placing trust, especially on the Internet. Those are two very good pieces of advice people will eventually have to learn.


> The Internet lies. Be cautious when placing trust, especially on the Internet. Those are two very good pieces of advice people will eventually have to learn.

That will bring little comfort to the person who was wrongly accused of rape in my example when they are turned down for ten jobs in a row after HR searches for their name online and finds only the reports of the malicious accusation.

Lies about a person can be extremely damaging. Even sharing the truth about certain things that we normally consider private can be extremely damaging. Unfortunately, as a society we are nowhere near the state where everyone allows for the possibility that any information found from any source may be incorrect or misleading, and treats such things in an objective and considered manner.

I'm not sure reaching that state is even possible, in general. For example, how long should a reasonable, responsible person spend looking for corrections or retractions that might or might not exist before believing an article they read online?

That is why we have defamation laws and that is why we now have related protections like the "right to be forgotten".


How is it ineffective if enforced?



