- In December 2009, Facebook changed its website so that certain information users may have designated as private – such as their Friends List – was made public. It didn't warn users that this change was coming, or get their approval in advance.
- Facebook represented that third-party apps that users installed would have access only to user information that they needed to operate. In fact, the apps could access nearly all of users' personal data – data the apps didn't need.
- Facebook told users they could restrict sharing of data to limited audiences – for example with "Friends Only." In fact, selecting "Friends Only" did not prevent their information from being shared with third-party applications their friends used.
- Facebook had a "Verified Apps" program & claimed it certified the security of participating apps. It didn't.
- Facebook promised users that it would not share their personal information with advertisers. It did.
- Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
- Facebook claimed that it complied with the U.S.- EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn't.
I remember very well when the first one happened. It pissed me off because someone got to see some information I thought was private after I went to the trouble of going through Facebook's previous 50 privacy settings.
My family had a major falling out with my father, to the point where my sister and I had a restraining order against him when I was 7 and she was 9. I haven't seen him or talked to him in 16 years. After Facebook deleted my privacy settings, I had a message from him sitting in my inbox the next day. My sister called me, crying, because she had gotten the same message. Both of our accounts were previously unsearchable on the site, with all of our data being private. As soon as they rescinded that privacy, our father could tell what cities we live in (in one of our cases, a _very_ small town), that she had gotten married, that we had both changed our first and last names, and that one of us had a child. He found me through my mom's friends list, he found my sister through my friends list. All of which were previously hidden.
Nothing came of it besides an unwanted "please call me" message from him, but it's not a far reach from there to actually being located physically and confronted. We sent this man to jail and changed our names to keep away from him, and Facebook, in spite of their "privacy" settings, let him get a glimpse back into our lives.
An article a few years ago talked about what privacy means to rich kids who have everything. Go to a $40k per year boarding school that mummy and daddy pay for? What do you have to hide? Not much.
Then the kids get older and decide "no secrets for anybody!" What's the harm in sharing your life? It's a net win. If you see James got a new turbo jet ski, won't you want to work harder to get one too? Sharing can save the world.
We can't seem to imagine a time when maybe you wanted to keep a secret. Maybe you're helping someone to not be found. Maybe you're helping someone through a bad time in their life. Then, with a profit-oriented privacy change, you end up in the parent's situation.
The world view of the people in charge isn't aligned with "normal." We'll see PR and lip service press releases, but steamrolling over normal people will continue.
On the contrary, don't rich people have a lot more to hide?
Just knowing that your kid is enrolled at Le Rosey signals to a criminal that she is worth kidnapping... and the last status update shows her headed to Ibiza for spring break. In contrast, nobody cares if another poor kid "likes" Justin Bieber. Over-sharing seems a lot riskier for the rich (and famous). It would be interesting to read the article you mentioned. It's hard to imagine an argument that the rich are not more concerned with privacy than regular people.
Kidnapping for ransom is rare in the United States. It's not good risk/reward. The family of the kid enrolled at Le Rosey is probably living beyond its means, and lacks sufficient credit to pay enough ransom to cover the costs of a kidnapping operation.
The other important point: Nobody older than 25 exists (well, nobody older than ~age-of-founder exists. Remember the whole "Never hire anybody over 27!" advice?). Kids don't have major privacy concerns, therefore everybody should have no privacy.
I would donate to your ACLU legal action against FB. I am consistently editing myself on FB, even in private groups, because of a sinking suspicion that with a flick-of-a-switch it can all be public.
I'm not sure that any laws were broken. It's not like we kept the restraining order active for 16 years, and his message wasn't really harassment, more like attempted atonement. At any rate, I do wish companies had to be held to their own site rules, legally. If Psystar breaks Apple's TOS they get sued out of business, but if Facebook breaks their privacy settings they get a slap on the wrist from the FTC.
If you do decide to file please let us know. The entire situation sounds like a nightmare and I wonder how many others have faced this issue. On a more encouraging note I recently had to report a fake FB profile that was used to harass a family member and it was removed promptly, which surprised me.
Shit, that sucks -- people are not yet aware of the damage the lack of online privacy can bring. And I fear that because of inertia, by the time the damage becomes visible, it will be too late.
I kind of wish the similar Google Buzz incident had gotten more press than it did. A boy who wasn't even in a balloon was on the news for days, but the case of a woman who was being harassed on Buzz by her abusive ex was lost in the public mind after a few minutes. Both the Buzz and Facebook cases were decided by the FTC very recently (the Buzz case was settled in October of this year). Perhaps we've reached a turning point in online privacy?
But then, looking at things like Protect-IP and SOPA, perhaps the regulatory answer is to just do away with privacy altogether.
"- Facebook represented that third-party apps that users installed would have access only to user information that they needed to operate. In fact, the apps could access nearly all of users' personal data – data the apps didn't need."
Privacy/ethics issues aside, from a pure developer standpoint, isn't this just a feature? Where do we draw the line between functionality and privacy?
User A allows user B to see her data via "Friends only." User B runs app X, whose functionality includes interacting with friends. Let's say it shows on a map where each of your friends lives. App X can see the said data for the purposes of providing functionality.
Yes, I know that by strict definition this conflicts with "friends only." You now have "friends and the application executable code only." But how is this different from, say, Gmail auto-scanning my e-mail to show ads? Is it because I trust Google and don't trust $random_fb_app_developer?
Likely one concern is that this third-party developer can disrespect (or actually, not even know about) that "friends only" setting and inadvertently make the data visible to other parties.
(Disclaimer: Don't get me wrong, I loathe/distrust most FB apps as much as the next person. Just trying to think from an honest developer's shoes here.)
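The access model the parent describes can be sketched as a toy example. To be clear, all names and data structures here are invented for illustration; this is not Facebook's actual API, just the concept under discussion:

```python
# Toy sketch of the "friends only" + third-party-app access model.
# All identifiers are hypothetical, not real Facebook API calls.

friends = {"A": {"B"}, "B": {"A", "C"}}            # who is friends with whom
visibility = {"A": "friends_only", "B": "public"}  # per-user privacy setting

def can_view(viewer, owner):
    """True if `viewer` may see `owner`'s profile data directly."""
    if visibility[owner] == "public":
        return True
    if visibility[owner] == "friends_only":
        return viewer == owner or viewer in friends[owner]
    return False

def app_can_view(app_user, owner):
    # The crux of the thread: app X inherits the view of whoever runs it,
    # so "friends only" silently becomes "friends, and their apps".
    return can_view(app_user, owner)
```

So `app_can_view("B", "A")` is true even though A never installed the app: A granted visibility to B, and the app rode along on B's access.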
That's one of the two bullet points where I can see a reasonable case for Facebook's side. It's slightly muddied because the app is running on Facebook where they control it, but I can sort of see an argument for apps' actions conceptually being actions of the user running them.
I'm somewhat sympathetic to Facebook on the other app-related claim as well, "Facebook represented that third-party apps that users' installed would have access only to user information that they needed to operate." Yes, Facebook could've done better on that, but fine-grained security is something nearly nobody has solved.
Privacy/ethics issues aside, from a pure developer standpoint, isn't this just a feature? Where do we draw the line between functionality and privacy?
This is a reasonable question. Where I personally would draw the line is, "functionality" implies to me that the app would only access user info when it needs to do so for some FUNCTIONAL purpose. If the app does not need the data and is not doing anything legitimate with it, then obviously, the user's privacy should be respected and said info should not be accessed.
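One way to draw that line mechanically is to have each app declare up front which fields it needs, with the platform releasing only those fields and nothing else. A minimal sketch of that idea, with all field names and app ids invented for illustration:

```python
# Hypothetical "least privilege" model: apps get only the fields
# they declared a functional need for, not the whole profile.

PROFILE = {
    "name": "Alice",
    "city": "Springfield",
    "birthday": "1990-01-01",
    "relationship": "private stuff",
}

# What each app declared at install time.
APP_DECLARED_FIELDS = {"friend_map_app": {"name", "city"}}

def data_for_app(app_id, profile):
    """Release only the declared fields; everything else stays hidden."""
    allowed = APP_DECLARED_FIELDS.get(app_id, set())
    return {k: v for k, v in profile.items() if k in allowed}
```

Under this model the map app from the earlier example gets name and city (its functional need) but never sees birthdays or relationship status, and an undeclared app gets nothing.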
This sentence: "including giving consumers clear and prominent notice and obtaining consumers' express consent before their information is shared beyond the privacy settings they have established" was the killer to me. It forces FB to be much more transparent about what it does with data and could cause a significant shift in its revenue model.
The trouble is, it only "forces" Facebook to do anything if there will be meaningful sanctions if they don't.
They've just been found, in a formal investigation, to have broken numerous fundamental privacy laws across several continents, and been punished with... absolutely nothing, as far as I can tell.
All this has done is teach them that they are above the law and should feel free to continue doing whatever they like without regard to the consequences for the hundreds of millions of real people who are counting on them to behave responsibly.
Those "audits" will be toothless, and the only effect we'll ever see from them is the occasional headline. Read up on Microsoft since the late 90s or IBM since the early 80s, among many others.
The FTC's presence in FB's business decisions will only lend an air of legitimacy to what would otherwise cause problems with their users, and transforms FB's business model from "users are dumb fucks" to "Mom said I could."
No, it just means that changes to privacy that affect existing data should be announced beforehand. However, I doubt that they'll tell you which data will be affected.
Have fun scrolling through your entire FB history to update permissions!
Not really. If they provide clear and transparent notice to users on what they are doing with their data, then what they are doing is fine -- "caveat emptor."
Whatever trouble Facebook has with the FTC, it will face roughly 300 million times that trouble in Europe, where the rules are much more strict, and the data protection authorities have much greater power to act on behalf of the general public and individual interests.
That whole using-personal-information-to-burn-people thing is still (barely) in living memory in Europe. There have been riots in Germany over the government trying to take a census - with good reason.
Privacy is more important than a lot of shallow people imagine.
Well, the U.S. has its own history of people who shoot at census takers, and so on. From a legal perspective, the EU and member state implementing laws are: (1) more protective of the individual over the corporation than US laws, and (2) fairly onerous and expensive for companies to comply with. In fact, compliance is a kind of red herring, since many of the data protection rules in place are ambiguous or nonsensical. Personal privacy is basically a global policy experiment at the moment.
True, though I have to admit that Neelie Kroes seems to have plenty of both executive authority and the willingness to exercise it in a muscular fashion. It helps that the EU privacy protections are closer to the constitutional than the legislative level.
So the penalty for ongoing repeated lies and fraud is....
nothing. Zero. The FTC has investigated, and the settlement is zero money and zero penalties. Not one dollar. Whew! I'm glad they were punished! They won't do THAT again!
The U.S. is really in late-stage empire breakdown. I don't think there is any significant enforcement of any laws whatsoever against companies and people that are reasonably well connected. The only thing keeping the society from total breakdown is inertia.
Perhaps the problem isn't that there's no one to watch the watchmen, but that we're over-reliant on watchmen.
You don't need the FTC to keep Facebook from sharing your private information.
I'm not saying Facebook did nothing wrong, and I'm not saying the FTC is doing nothing wrong now. I'm saying that none of it matters if you delete your own account.
You don't need to worry about who's watching the watchmen when you watch out for yourself. (Can anyone translate that to Latin?)
"Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful."
The FTC exists to enforce that. Now, it's fashionable nowadays for the ill-informed to insist that government is worthless, but in fact this law and similar ones underlie every aspect of American society, and you owe everything you have to their existence. Without government anti-fraud efforts, commerce does not exist, full stop.
Private enterprises are the applications. Government is the operating system.
But let's take another look at your point: your argument is that I should right now go and delete my account a couple years ago because I know today that Facebook was lying a couple years ago. For everyone who has a time machine, that is a good remedy - it will solve the problem for those people admirably. For those that don't, we need effective government enforcement of anti-fraud laws.
My argument is that Facebook has a history of mishandling privacy.
And further that users have been, at least on some level, aware of it.
I was on Facebook when they first implemented the news feed and got everyone upset. They've since introduced dozens of features that have gotten everyone upset (or at least a sizable enough minority to be noticeable). They make it clear that ads are targeted based on your personal information. They make privacy settings complicated. They continuously push you to supply more information, connect with more people, use Facebook for more things. They continuously make more things public by default, and add more features that make personal information discoverable (such as tagging in images).
If they weren't suspicious, they wouldn't have been investigated in the first place.
I'm not talking about a time machine. But let me observe a few things:
- I deleted my account a year ago (and knew I wanted to before that) without the FTC investigating anything. Because Facebook was clearly suspicious.
- You still have an account today even though you know about Facebook's lies today. You don't need a time machine to make changes today.
My argument is this:
Fraud protection is extremely important. There are many cases where there is no suspicion of fraud, no source of information about possible fraud, and consumers get taken advantage of. For example, people get taken advantage of by phishing every day.
But if you fall victim to increasingly bad phishing attacks from the same company over the course of years, you aren't paying attention. You are relying on watchmen to protect you and not watching out for yourself.
I am not blaming the victim. I am trying to empower the victim. Everyone who is reading this and still has a Facebook account knows that Facebook will outright lie about privacy in order to make more money from advertisers. They are still guilty of their actions if you get fooled again, but you don't have to get fooled again.
My argument is "Fool me once, shame on you; fool me twice, shame on me."
"Quis custodiet ipsos custodes? is a Latin phrase traditionally attributed to the Roman poet Juvenal from his Satires (Satire VI, lines 347–8), which is literally translated as "Who will guard the guards themselves?" Also sometimes rendered as "Who watches the watchmen?""
On what legal basis would you expect a financial penalty for a free, voluntary-participation online service that did the things that the FTC found that Facebook did? What does the law on the subject give the FTC authority to do?
The FTC seems to think that Facebook lured consumers in by misrepresenting how it uses consumer personal information. That would be a violation of the FTC Act.
Is anyone else really tired of reading this "you're not the customer, you're the product" platitude every day, especially in contexts like here where it's totally irrelevant? Yes, we get it, Facebook isn't directly making money off their users.
Yes, but apparently people need constant reminders that Facebook users are essentially getting a service for free, so they're not consumers in the same sense as someone who pays for a service and is dissatisfied by the treatment they receive at the hands of the vendor.
The law has long been castrated on your points. The legal basis would be, of course, that private information is property, but that one doesn't exist in this country (yet?).
The legal basis would be, of course, that private information is property, but that one doesn't exist in this country (yet?).
Trying to apply rules made for physical items (if I take it you don't have it any more) to things that act completely differently is a really bad idea.
Humans are smart, they don't have to use the exact same laws. Like I implied, the laws don't actually exist in the US.
Would it make a difference if I had said "something akin" to personal information as property? I mean, we're reading this story, so personal information has currency in some way, right? Seems to me that with some political will that the laws can be nudged further in favor of the user.
You choose to divulge any information you consider to be private. Absent an actual formal agreement contingent upon giving that information (which an internal policy is most definitely not), I have no obligation to restrict who I share the information you chose to share with me.
Unless you're going to argue that you own anything you happen to know, even after other people know it? Unless you're going to argue that a posted policy on a free website constitutes a binding agreement?
The FTC Act: "Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful."
The FTC is empowered to enforce that with a variety of penalties, including extensive fines.
> The only thing keeping the society from total breakdown is inertia.
I know, society is breaking down, kids are getting more disrespectful, things are more expensive, the end of the world is upon us, etc :) The fact that it took you a lifetime to come to a realization about the state of the world doesn't mean it wasn't just as true before you had it. It's a memory glitch. Jump back 100 years, it's the same shit.
Yep. Jump back 100 years, and you're in the breakdown of the British empire. And look what we became, a pointless, miserable lame-duck nation utterly yoked to the next empire that rose after us, the USA.
Now you guys are gonna be yoked to the next empire, China.
Have fun with that!
I'll maybe catch you on a beach in Brazil, where hopefully things will be cool. Fingers crossed.
Yep. Jump back 100 years, and you're in the breakdown of the British empire.
You're off by several decades and two world wars. The British Empire was in full bloom in 1911, and there was little unrest in any of its colonies, never mind at home.
Throughout the entire history of the US, you can dig up these kind of antics, what makes this different from those antics that signals that this points to 'late-stage empire breakdown'?
I think that third line was probably more of an "impotent howl into the void in futile pursuit of catharsis" kind of comment than an "attempting to advance the cause of mutual human understanding" kind of comment.
I'm not American but isn't that court's decision to make, not FTC's or dreamdu5t's? Also, wouldn't Facebook's actions rather constitute a breach of contract with their users? Which would mean that users can sue regardless of FTC's decisions?
Privacy policies are not binding agreements or contracts. Facebook didn't lie. They simply don't have to follow their privacy policy exactly, since it's not a binding agreement.
You're dodging the question of whether it is a lie, which is unsurprising. I say again: Facebook lied.
You didn't mention fraud, and I made no comment on fraud. I said they lied. They did.
As for the clumsy smokescreen analogy about Subway, allow me to point out the obvious:
Facebook made promises to users about how FACEBOOK would handle the personal property of said users. These are concrete, easily-definable promises...the kind which are easy to analyze to see if the promise was kept.
It wasn't kept. Furthermore, as the FTC documented in plain English, there was a pattern of behavior, over and over again, that shows any reasonable person that not only did Facebook lie, they lied with prior intent. They never had any intention of keeping those promises and they broke them in the most egregious ways possible.
By contrast, when Subway implies that customers can lose weight by eating certain foods there, they are, obviously, making no promises about how Subway will behave in the future, except an implied promise to be honest about how many calories are in their food (required by law), and to provide some lo-cal options (which they do).
If Subway lied in an objectively verifiable manner (as opposed to on a subjective/gray-area claim), they'd also be breaking the law. For example, if they told their customers that a certain sandwich had 6 oz of meat, and it in fact had 5 oz of meat, that wouldn't be legal. If they told their customers that their credit card transaction data was not being resold, and in fact it was, that wouldn't be legal. Facebook is alleged to have done the latter, among other things.
You're omitting the fact that Facebook has always stated they may change what they do with your information in the future.
Yes, but Facebook disclosed personal information which people expected to remain private. Your Subway analogy doesn't cut it because Subway is trusted to disclose information, whereas Facebook is trusted to enclose it.
Even if you accept it was a lie, it's not fraud.
From the dictionary:
"Fraud: Wrongful or criminal deception intended to result in financial or personal gain."
From the FTC report:
"Facebook promised users that it would not share their personal information with advertisers. It did."
The government needs mindshare and advertises just like any other entity. Just like people use Facebook to tell people not to use Facebook, the FTC may buy or use services from companies they are taking legal action against. That's not an endorsement of the practices they are fighting against; it's simply how businesses work. Facebook is where the people are; if you want people to hear your message, you take the message to Facebook. It's especially relevant in this case since their work directly benefits Facebook's users, so it would not make much sense to not have their message heard there.
Also, they settled and all is good between the FTC and Facebook.
I don't think that's a fair analogy to use. While government regulatory agencies covering the telecommunication sector may indeed use telephones, they don't stick a recommendation to use Company A on their website... especially if they have just given Company A a simple slap on the wrist for an arguably large grievance. You may (or may not) counter by saying that Facebook is an entire market in itself that can't be ignored (which it isn't), but after viewing their Facebook page I see no additional information that I couldn't simply find on the front page of the FTC's website.
There is a lot of nonsense legal interpretation in this thread. Did anyone actually read the settlement?
You commit fraud if you make any intentional deception in order to benefit yourself, or to harm others. If you intentionally make public commitments that turn out to be false, and you thereby cause some harm to another person, you have committed a fraud.
The FTC is empowered to enforce criminal and civil penalties for fraud on behalf of consumers. From the FTC website: "When the FTC was created in 1914, its purpose was to prevent unfair methods of competition in commerce as part of the battle to bust the trusts. Over the years, Congress passed additional laws giving the agency greater authority to police anticompetitive practices. In 1938, Congress passed a broad prohibition against unfair and deceptive acts or practices."
From the FTC's Facebook settlement statement, it's perfectly clear that the FTC believes that Facebook is guilty of committing widespread and repeated deceptions in violation of the law.
The settlement itself is tantamount to saying that Facebook has had its last warning, and is on very thin ice with the FTC.
Feel free to complain about whether such a "penalty" is effective. We won't really know until the next time Facebook breaks the law.
Many have posted to this thread with complaints that boil down to "this is a slap on the wrist because they are well connected". If you were the FTC, what would you do to Facebook in this case, how would it be supported in law and what long term change for the better would your action create?
Privacy is a civil good but it is a fine line to walk indeed to punish an innovator during a recession. Where's the happy medium?
This is not even close to a fine line. The fact that Facebook may be an innovator or the fact that we may be in a recession have nothing to do with their legal responsibilities to their users. If they have violated those responsibilities they should be punished appropriately regardless of the current economic situation, and them being an "innovator" is totally irrelevant. Should we allow innovative companies to dump toxic waste or employ racist hiring policies, for example?
It's likely that few HNers have a strong enough grasp of the various laws to state how they would pursue Facebook based on law, but all of us have a sense of justice. Whether or not the two coincide is a different discussion.
My happy medium, based on my own sense of justice: give Facebook a year to implement systems that actually protect users' privacy (what that would entail is yet another discussion). If they don't comply, hit them with a hefty fine. We get our privacy, Facebook gets to keep its money - some of which was earned by neglecting our privacy.
Facebook has repeatedly demonstrated that it has no qualms about infringing on privacy, so I would suggest that further changes to Facebook's Privacy Policy be made opt-in only.
Opt-outs should be a privilege that is lost when you repeatedly and intentionally violate federal law.
>"If you were the FTC, what would you do to Facebook"
Or Google, Apple, Twitter, Microsoft, Adobe, and thousands of smaller companies who data mine user accounts and change terms of service every day.
I used to have a subscription to The Economist. Recently I purchased an issue at Kroger. Two weeks later a special subscription offer appeared in my mailbox - the first marketing material they have sent me in at least three years.
What Facebook did is the bread and butter of today's business - even if it sucks.
The problem here is the personal information is being voluntarily given to Facebook. And the FTC can do nothing about that.
As far as I can tell, most people using FB are trying to communicate with their friends (as they previously did via letter, telephone and email), not broadcast every personal detail and thought to potentially any person or organization connected to the web.
Alas they are not well informed that by sending all their communications through Zuckerberg's website, this is in effect what they are doing.
That lack of understanding is something the FTC can address.
So to comply with the FTC's requests, FB will make more disclosures.
But the problem remains. FB, whether intentionally or not, is receiving far too much private information and private conversation, and it's all being channeled over the web.
Wrong. The FTC said very clearly that it thinks Facebook lured consumers in under false pretenses. That's punishable by criminal and civil penalties, in theory. The FTC usually settles these kind of cases, AFAIK. Sometimes there's money involved, sometimes not. Anyone who isn't in compliance with their own published privacy policy should be worried about the FTC; they can (again, in theory) do serious harm to a business – even one as big as Facebook.
So are you suggesting that despite FB's rather sizeable legal budget and level of investment they will _still_ not be able to bring themselves into compliance and stay that way? At least until the IPO. If Facebook even exists 5 years from now, let alone 20, I would be shocked. The data they've collected will of course probably have an infinite lifetime.
In the US, given that they're not in a regulated industry (except to the extent they qualify as a site aimed at children), they really only need to comply with their own stated privacy policies. The question is: will they?
I got a -1 on this comment, my worst score ever, but I still stand by it. I simply do not believe that anything one voluntarily submits to Facebook can be kept "private".
The value Facebook gets from the data is _sharing_ it with others: advertisers, various organisations devoted to catching bad guys, app developers, etc. It is not "private" by any stretch of the imagination.
Even if they purport to restrict access to a profile to certain users, a determined hacker can get around that.
This is a company that is trying to get into your email inbox at every possible opportunity. The concepts of "Facebook" and "privacy" are irreconcilable in my view. Even regardless of their ethics, there is an underlying architectural problem.
The successor to Facebook, which will offer real privacy, not the imaginary kind FB is pitching, will not be another centralised public website.
I never understood why bodies like the FTC rely on 'independent, third-party audits' for enforcement, since they end up making the entire action pointless.
The independent third-party auditor will give Facebook a stamp of approval, both in the next 180 days and every two years thereafter, because the independent third-party auditor wants the repeat business.
Same thing goes for any regulation that depends on a third party, really. I mean, over the last six years how often is a 409a valuation not to the board's liking? Somehow, magically, the auditors collect their fees from the company and then independently deliver an acceptable answer.
Might as well not have the regulations - or just fine the company something meaningful - instead of engaging in this goofy kabuki theatre.
Accountants supposedly are employed by shareholders, but in practice are employed by executives. This makes auditing problematic, but it does have some value. The bigger problem there is the big four's oligopoly: they are too big to fail.
Quite a lot of work goes under contract from the government and auditors or "program evaluators" are pretty common. A lot of grant programs on the social service and education side have a employee as a grant officer but are pretty much run by outside contractors.
Not saying it is good or bad (kind of depends on who the contracting company is), but it is very common and pretty much standard business for the government.
Same for the PCI compliance process. Auditors are on your side and want you to pass, so they'll bend in all sorts of ways to accept bad practice as "acceptable at this company"
My employer handles insurance claims. I have an email sitting in my inbox right now, explaining that if we get a certain thing wrong it will get us fined up to $1000 per claim per day until it's fixed. Not because we screwed up and got in trouble, but because that's just how things always are in this industry.
So no, I don't find that unlikely at all. In fact it not being per person is what would be absurd.
This just means that Facebook privacy changes will have the imprimatur of the FTC from now on, which FB paid for with the airing of a little bit of dirty laundry.
- required to prevent anyone from accessing a user's material more than 30 days after the user has deleted his or her account;
Does this include people who have already deleted their accounts? Does it also prevent government agencies and the like from seeing the >30-day deleted data? I'd like to know that after permanently deleting my account all my stuff is gone, but I don't see anywhere that says that's true. Meaning the site is still destroying my privacy even after I've decided to have nothing to do with the account.
When I created my HN account using my Google account via ClickPass, one of the screens shown before I granted access advised me not to grant it, and noted that if I did, I could revoke access at any time, which would prevent ClickPass from accessing my account information and my password.
This warning is nothing new; it appears whenever you grant any application access to your Facebook, Twitter, Google, etc. accounts.
In the meantime, Google Search is nothing without us, because "we are the product": they sell us (to third parties or governments) or use our "private" information, the information they told us was private, without our approval.
I wonder how this settlement compares to EU Data Protection Law. Is it possible FB could abide by FTC rules and still be outside EU rules? Will FB use this "We're OK by the FTC now!" as a claim to be not so bad in the EU?
This is assuming Facebook will be around in 20 years.
This binds successor corporations operating Facebook's business and thus changes the potential value of Facebook as an acquisition target (and thus as a retail investment choice when it becomes publicly traded).
I am strangely reminded of that episode of 30 Rock where Tracy realizes he can just pay a fine to make obscene comments and do obscene things on TV ...
Contracts are a pretty gray area. In a lot of contexts, a handshake agreement or email exchange, if documented well enough, can constitute a binding, legally enforceable contract. It doesn't have to be on a paper saying OFFICIAL CONTRACT with signatures at the bottom.
Why aren't they contracts? When I sign up for a website, I have to enter into a contract for their Terms & Conditions, which usually includes the Privacy Policy.
You know... people could just take responsibility for sharing their private information. If they don't think a website "privacy policy" is enough of an assurance, it is their fault for accepting that risk.
FB should not be blamed for sharing information that others freely share with FB. It's ridiculous. It's even more ridiculous to think that government regulation is somehow needed to protect privacy. How absurd.
"I keep using this service and they don't do what I want! But I keep sharing my information with them."
Come on. At a certain point, individuals need to accept that THEY maintain a relationship with FB as well.
I gave my info to FB and they promised to only share it with certain people. Then they made that info publicly available. That's a breach of trust, and possibly criminal.
While you have a point about taking responsibility for the information you share with Facebook, it's not that users don't like their privacy policy - it's that the FTC has found that the privacy policy Facebook published is a lie.