Can we not let this become framed as a "breach"? No systems were compromised. Nothing of Facebook's was accessed that wasn't supposed to be accessed. This was data intentionally exposed by Facebook, just exfiltrated and given to an entity whom Facebook hadn't authorized.
This is simply the extent to which we've permitted these Internet giants to collect information about us. It's business as usual.
Edit: To clarify, this is indeed worse than if the data were taken from Facebook without consent. What it means is that not only does Facebook have access to vast troves of personal information, but so does everyone tangentially connected to someone with a Facebook developer account.
> Can we not let this become framed as a "breach"? No
> systems were compromised. Nothing of Facebook's was
> accessed that wasn't supposed to be accessed. This was
> data intentionally exposed by Facebook, just exfiltrated
> and given to an entity whom Facebook hadn't authorized.
This is similar to a HIPAA "breach" where the word doesn't imply that a security system was compromised, but that protected data was accessed by folks who shouldn't have had it. In this context, framing it as a breach is perfectly accurate.
As an aside, a HIPAA-style law that protects and enforces portability for this type of personal data might be a good first step to reforming our industry here, which is currently completely unregulated in this regard.
Listening to politicos, you'd think the systems were actually compromised, and, in the same breath, boogeypeople from Russia are mentioned in order to conflate things in the mind of the audience. This willful conflation is a tactic to drive a narrative.
HIPAA data is accessed by researchers, sometimes anonymized, but not in all cases. These are not considered breaches. In addition, as others indicate, FB posts are not, at least at this time, protected data.
> you'd think the systems were actually compromised
We're seeing a divide between the technical and popular interpretations of the term "breach". When an industry drops the ball and responds pedantically, that's a strong sign that further regulation is needed. If only to force a common language.
Facebook insists they were not "breached" because many states require notification in the event of "security breaches of information involving personally identifiable information" [1]. Each body of law defines "breach" differently. Most do not limit it to technical security malfunctions.
> When an industry drops the ball and responds pedantically, that's a strong sign that further regulation is needed. If only to force a common language.
We already have plenty of regulation here that Facebook is unambiguously subject to; the question is whether the relevant authorities will actually follow through on that.
For what it's worth, it's been two days, and we're already seeing an FTC investigation and a Congressional investigation, so it's a little premature to conclude that existing regulation is insufficient.
> HIPAA data is accessed by researchers, sometimes anonymized, but not in all cases. These are not considered breaches. In addition, as others indicate, FB posts are not, at least at this time, protected data.
In order to receive data protected under HIPAA by a covered entity, you have to go through an extraordinarily elaborate and complex legal process. In addition to signing an agreement that (in effect) binds you to all of the same restrictions on the data that the original covered entity (e.g. hospital/insurer) was, if you're accessing the data for research purposes, you'll have to go through an institutional review of your intended purpose and methods for the research.
Facebook does none of these, which is why they have been (rightfully) criticized for conducting unbelievably unethical studies[0] without either user consent or institutional approval, even though both of those are typically required by all reputable universities and publishers for research.
Facebook posts are not protected under HIPAA, but they're not entirely unprotected either, and it's totally valid to refer to that breach of responsibility and trust as a breach.
I'll agree with you in characterizing it as a breach of trust. That it is. Operatives in Washington, however, are trying to characterize it as something it is not.
It's not Russians hacking in, it's not part of some effort to destabilize democracy, etc. That characterization and demonization is indicative of the mindset of those people, and it may even pose more danger than the breach of trust by Facebook.
True! Mostly it was information about users and their social graph that people handed over voluntarily. It's distressing that people were not informed, "We're going to use this to target political propaganda at you," when they took personality quizzes/etc, but all the data was shared by users. FB's security wasn't breached, merely their users' trust.
> it's not part of some effort to destabilize democracy, etc
I'm not sure we all agree on that. ;) The whole point was that one can use the intelligence gleaned from these users' social graphs to target memes/advertising/messaging to specific subgroups whose political responses you are hoping to influence.
> It's not Russians hacking in, it's not part of some effort to destabilize democracy, etc.
I'll avoid the word "hacking" since it's used to mean a lot of different things to different people, but it absolutely could be part of an effort to destabilize or undermine (US) democracy.
What we've seen is definitely a breach of responsibility and a breach of trust. It's also probably a breach of the law, since the data Facebook collects is still subject to some protections (and it's hard to imagine how Facebook could have done all this while adhering to those). And while we don't yet know the motivation or intentions of the people involved in these actions, it could very well be motivated by an effort to destabilize or undermine US democracy. I don't see why you think those are mutually exclusive.
It's no secret that 3rd parties can get access to your Facebook data, though. There have been apps asking for permission to access your Facebook data for years. That's the whole point of the Facebook developer platform.
Do we know what data was harvested? 'Cause if it's data that's supposed to be private, then yeah, that's some murky business. If it's public info, or info that can be accessed if you give an app permission to log in, then is that really a "breach"?
I mean, it's terrible and CA was definitely misusing it, but if I install an app and it asks for permission to use my location and my contacts, and I grant them, is that a breach of trust and a breach of the law on the Apple/Google front? What should Apple/Google be doing to protect my privacy?
Legit questions here; I do hope something is figured out and fewer people fall into this kind of trap. I've heard of Android games whose actual purpose is to harvest a ton of personal info. Apple seems to vet its apps better, and maybe that's the solution -- Facebook should vet 3rd parties better (Google should too, before something like this hits the fan).
What data was being protected? The data was created when the user chose to engage with the Facebook apps. CA pays Facebook to put something in front of users' faces, and then CA gets back information on user engagement. How is that different from any other kind of advertising on the web?
We can argue that there needs to be more transparency on facebook but a breach? That's torturing the word.
Personally-identifiable information [1]. Many states require notification in the event this data is found to have been accessed improperly. The definition of a "breach" is not limited to technical malfunctions.
Personally-identifiable information that users chose to share with the world as part of public profiles.
We might say that you can't sign away the secrecy of your PII, so user consent is irrelevant. Then we had better get on YCombinator, Stack Overflow, Medium, etc. for allowing prominent community members to use their real names on their posts. Someone could [0] use them to train statistical models for who-knows-what purpose, after all.
CA states that the data was collected by a third party as "academic research" and that they didn't know this when the data was given to them, so they violated the terms of service in good faith.
> This is similar to a HIPAA "breach" where the word doesn't imply that a security system was compromised, but that protected data was accessed by folks who shouldn't have had it.
Protected data, in the context of HIPAA, would refer to Protected Health Information (PHI).
Why would the HIPAA standard of a breach apply here? Scraping public data to create a political profile is on par with getting access to private health data?
Let's please do better than HIPAA. It was the first such law that I know of, and there are a lot of kinks to it. Many subsequent laws were able to learn from its mistakes.
One of the big weaknesses of HIPAA is that the privacy requirements technically apply to the data custodians, not the data. That allows for some loopholes through which private information can fall out of HIPAA protection, and also creates some unnecessary hassles for health care providers.
Ontario's PHIPA is one example of a better model for patient privacy.
>> ...but that protected data was accessed by folks who shouldn't have had it.
Facebook handed over the data. They need to understand that they don't have control over it once it leaves Facebook. Is a violation of ToS a data breach? Do we really want to conflate those things?
I understand why Facebook doesn't want to call it a breach. But it seems equally reasonable to me that users see it as one. From the user perspective, private data is suddenly in the hands of unknown, suspicious actors who may use it against them.
That Facebook would rather not call that a breach so much as "business as usual" is all the more reason legislators may be inclined to define "breach" the way that voters do.
My intention was to contrast the mundane connotation of "business as usual" with the visceral negative reaction most of us seem to have with our data being used this way.
The point I'm trying to make is that there's a difference between an isolated attack (e.g. Equifax) and what Facebook has going on here. To the person who reads about a "data breach at Facebook", it does sound like this was an aberrant event that happened suddenly — rather than systemically, by a machine built on doing this every day.
Cambridge Analytica's actions may illuminate how far this can go, but we should treat it as the norm — and regulate accordingly.
Actually, the regulation itself is already in force, and has been since the day it was adopted. There's just a moratorium on enforcement in the first two years of this EU regulation, so that business (and society) has time to adjust to the new reality.
The distinction may be very subtle, but it's important to know that following the 25th of May, businesses can no longer claim to be "in the process" of implementing it -- they have already had two years to prepare.
> This is similar to a HIPAA "breach" where the word doesn't imply that a security system was compromised, but that protected data was accessed by folks who shouldn't have had it. In this context, framing it as a breach is perfectly accurate.
Data breach is a compound noun with a very specific meaning in information security. It means that the data was protected, and a malicious entity defeated the protections.
Breach of contract, breach of trust, physical breaching of the hull of a ship, etc. are all different usages of the word breach, but it's not a data breach unless someone accessed a protected system without or exceeding authorization as defined by the CFAA.
It's not, at all. The FB API was designed to give out this information before it was changed. That means the friend data was not need-to-know like healthcare data.
The real point is that companies like facebook and equifax have such large caches of personal data and have no legal obligation to protect it. This is a point most people outside of tech don't understand. You might not even be a user of facebook but these companies still have data on you that is highly personal and invasive.
An academic who has done some great work on this is Evgeny Morozov. Highly recommend his books, articles and lectures.
The point of my comment is that we should not compare Facebook to Equifax. The latter may have been lax in protecting that data, but the millions of records exposed last year were taken without their consent. Facebook is literally inviting anyone who can sign up for a developer account to harvest their — sorry, our — data at scale.
Right, and I'm saying that the broader point is that it's legal for these companies, Facebook and Equifax being two of the biggest, to have massive caches of highly invasive personal data with zero legal obligation to protect it (they can do anything they want with it). How invasive is it for a company like WeatherBug to be selling your location data to the highest bidder simply because you want to check the weather on your phone?
The massive industry that has been built around advertising and personal data trading needs to be regulated.
I agree with you with regard to WeatherBug, but Equifax already cannot just sell your credit data to the highest bidder. Whether there should be more restrictions on how that data can be used is up for debate, but for the most part you can't get a credit report on someone without their explicit consent.
I specifically want to avoid the Equifax comparison because it looms large in people's minds as an example of an intrusion and forceful removal of data, which is not what occurred with Facebook and Cambridge Analytica. We should have better laws around protecting sensitive data from intruders, too, but they won't be the same laws prohibiting companies from selling data they've collected on us. Conflating these problems will not help us solve them.
> Was this a breach in trust to Facebook users? I think undoubtedly yes.
What's interesting about this is the fact that the same data is shared with many third parties, with proper "consent", and users not understanding what's really happening. Calling this a "breach" has the slight unintended side effect of promoting the idea in the public that this company received a different dataset than other partners, which is not the case.
> And was there a breach of the Terms of Service by companies taking all this data and using it for non-academic purposes? Yes there was.
There's a legal concept of 'waiver' meaning that even if something is prohibited in a contract, but the parties don't enforce that part, then that part is later not enforceable. Facebook was fully aware of this behavior, chose not to enforce the ToS, and therefore it waived that clause. Therefore no breach.
>Was this a breach in trust to Facebook users? I think undoubtedly yes.
How naive is the average person? The purpose of Facebook is to gather this information, which is why it's offered as a "free service".
Frankly, I don't understand why the stock is going down; Facebook is fulfilling its core mission: get private information on millions of people and package that information for sale to its clients. If anything, the CA situation shows how well FB is fulfilling its core mission.
The fact that the public is now waking up to this is not a breach; it's simply casting a light on what has always existed.
The average person is incredibly naive with regards to what the cost of a "free service" like Facebook is. It's not until you start looking at people who are in related fields that you start seeing people who truly understand the costs.
The public waking up to this breach and the costs being exposed are probably a huge part of why the stock is dropping. Facebook's continued profitability and success is dependent on its users not understanding how their data is being used. And now "everyone" knows, so the secret is out and hopefully Facebook can't get away with this going forward.
All this lawyering over the definition of 'breach' is failing to see the forest for the trees. It is a breach of trust, even if not a breach of technical security controls.
I think there's a meaningful, non-definition difference - and in some ways it makes Facebook look worse.
Metaphorically, somebody had a gun, and someone else took that gun and used it to rob a bank. Equifax left the gun sitting visible in an unlocked car, and people are angry about the predictable results. Facebook was running a "borrow my gun" program for strangers, but had a clause saying "no using my gun for crimes, no lending my gun to any third parties". One of those strangers lent the gun to the robber, and Facebook is saying this isn't their problem because they said not to do that.
So yes, they're both bad outcomes. But "breach" usually means "this was stolen without our knowledge", and that's a very misleading impression to create here.
The only difference is that instead of the baddies having to sneak in carefully at night to nick stuff, Facebook said 'welcome, come on in, help yourself – here's a sack'.
The end result – millions of people having their personal data used against them without their knowledge or consent - is the same.
To me, calling it a breach is Facebook's attempt at passing the buck. Getting breached alleviates some responsibility for what happened. Maybe not in reality, but in how it's portrayed in the media and how it's understood by laymen it absolutely does.
> ”Breach" specifically implies that defenses were penetrated
It’s time to update the definition. “Breach” means you lost my shit. I thought I gave it to you in confidence and then you lost it. Facebook arguing “this isn’t technically a breach” comes across as them yet again talking down to users to sweep problems under the rug.
Sure, but we still need some term to disambiguate between "a company didn't protect my data against intruders" and "a company sold my data, then didn't like what the buyer did with it".
This isn't like the Equifax breach. It's not a result of Facebook's security practices. It's a result of Facebook's entire business model.
This can be a 'breach' by many of these definitions.[0][1][2]
You're basically saying, "Words only mean the things that I want them to mean, and if you try to use them a different way than I approve, then I will use this meme to try to shut you down."
Words fluctuate in meaning all the time. This may very well be the beginning of a new definition for breach, i.e., a social data breach, for example.
But we don't even have to go so far as to claim that this is a new meaning for breach. Any of these old definitions contains sufficient meaningfulness to make "Facebook loses control of data to unauthorized breach" perfectly intelligible.
> Any of these old definitions contains sufficient meaningfulness to make "Facebook loses control of data to unauthorized breach" perfectly intelligible.
Sure, but the point being made by the "it's not a breach" people is that Facebook didn't lose control of data to an unauthorized breach. They gave up data according to their own documented and expected procedures to people who were supposed to have it. "Facebook voluntarily and purposefully gives away data in an authorized breach" is not so intelligible.
The fact that "Facebook loses control of data to unauthorized breach" would be a sensible, understandable sentence isn't really relevant when nothing of the kind has happened. Who'd be using that sentence?
I guess then I'm confused about the narrative of the story so far.
Did Facebook have control over its (my? your?) data at Cambridge Analytica or not? I thought the extra 50 to 250 million profiles scraped were unauthorized access?
> "Words only mean the things that I want them to mean, and if you try to use them a different way than I approve, then I will use this meme to try to shut you down."
checkyoursudo, I don't want to shut anybody down. I get your point.
And I am sure that in a world of haveibeenpwned.com and Equifax you get mine.
Let's focus on the real issue here. Facebook has data that:
- Can harm everyone
- Is not well enough protected
It's an alleged legal breach of Data Protection principles. That language is used historically by the ICO in the UK to describe exactly this type of situation.
Facebook's responsibilities and Cambridge Analytica's responsibilities towards data protection have been breached.
There's no other useful word for that. It might not be a hack and it might not be a security vulnerability, but it is surely a breach.
> A personal data breach means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. This includes breaches that are the result of both accidental and deliberate causes.
For Facebook, an actual data breach would be better. They could button things up and make some statements and move on.
This appears to have been systemic and profitable for them because companies would turn around and pay them for highly targeted ads. They ignored it because of greed.
Yep. If/when the 50M profiles become public, Facebook will be bending over backwards to rebrand this as a breach. However, on the (Twitter) record, their CISO has already said emphatically this was not a breach. He's been demoted.
That's the problem with that kind of speech: it's impersonal and dehumanizing.
Let's say it like it is: Facebook betrays users' expectations by giving their data to other businesses.
Same for hacking: some people invaded such-and-such a system and took private information.
It doesn't matter if it was a breach, a floodgate, a window, what matters is what happened, and what happened is that player X did Y. Let's just state that first and foremost.
It’s important for the public discourse that someone point this sort of stuff out occasionally. Even if this was not even the most egregious example I’ve seen this week.
Once in a while I reread http://www.derailingfordummies.com and review the definition of “horizontal aggression”. Sometimes it saves me from engaging with people who are derailing the conversation. Accidentally or willfully.
It's definitely a breach, just not a breach of Facebook's technical infrastructure.
As I wrote previously, don't you think it can be a breach in the same sense as a breach by phishing? After all, both cases involve people giving up their "secrets" for one reason and the info being used for something else.
I mean, in traditional phishing the user is tricked into providing a password to a site impersonating their bank and gets their funds stolen; in the case in question, users are tricked into providing personal information by being promised some kind of personality analysis, but their data is used for political propaganda they didn't ask for, resulting in life-changing consequences due to politics.
Anyway, the idea here is that CA breached Facebook users' personal data by methods quite similar to phishing, and FB looked the other way. Not necessarily by design, but maybe out of a desire to exploit the platform as much as possible, so that they didn't get in the way of people who were doing interesting things.
Look at all the examples of a data breach in this wiki. The CA/Facebook incident looks nothing like them.
CA either paid Facebook to collect data through apps or scraped data from public profiles. Maybe the CA/Facebook incident will change what we consider "breach" to mean, but right now "unauthorized collection of public data to create a political profile of users" is not a data breach.
The first sentence from your link: "A data breach is the intentional or unintentional release of secure or private/confidential information to an untrusted environment."
Sounds like exactly what happened with CA and FB. People came for friends and fun personality tests; their information got into the hands of a propaganda machine. Definitely a breach.
As for the examples, do you want me to edit the Wikipedia article and add the CA/FB incident?
Based on many of the comments in this thread I don't see how you could say it "sounds like exactly what happened with CA and FB." Debatable maybe. Clear cut, obviously not.
And as for your glib comment on editing the wiki article, you should read more carefully what I said. My argument was that the numerous examples of a breach in that wiki do not fit the CA/FB incident. Adding the incident to the list would do nothing to dispute that point.
The definition from the Wikipedia article certainly does match this incident.
The comments in this thread aren't generally dealing with the question of the applicability of that definition, so bringing that up doesn't help you.
I guess what you're really trying to get at is that you disagree with that definition. That's fine. But it's a very weak argument to appeal to an authority and then disregard the authority where it contradicts your position.
Maybe you need to edit the Wikipedia article ;)
BTW, not sure if this is the part you don't like, but the distinction between intentional and unintentional is tricky. For one, we'd have to pin down whose intentions we're talking about (the people controlling the data store that has been breached, or the people whose private information has been taken). Then, peer into the minds of people we don't know or, worse, try to determine intention for a corporate entity. If intent is part of the definition of a breach, then applying it would demand a lot of assumptions (or some kind of long, expensive process like an investigation and trial).
In the end, the impact on the people whose private information was taken is the same: their private information has been taken, en masse, without their permission, by someone they don't know, for purposes they don't know.
No more or less a breach than any social engineering attack. No more or less than Chelsea Manning for that matter. "Our servers weren't compromised" is totally irrelevant.
Did the sensitive data end up someplace it shouldn't? Yes? Then your data security was breached. The end.
But hey, let's argue over the technical definition of breach rather than how evil Facebook is and how much power it has - both of which are vastly more interesting to consider. I'd like to see some support for the "not very, not much" school of thought.
You don't even need a developer account. You could just scrape Facebook, which is probably what CA did in the first place. They used the app to identify US users and from there on just scraped the pages using a headless browser and multiple proxies.
Unless I’m doing something wrong, a developer account makes this sort of thing harder: you can’t just access anyone’s data, you have to convince them to authorize your app first... which is probably why there are all these “find out which Star Wars character you are!” quizzes that make the rounds on FB.
It's a little easier than getting everyone to sign up. Back when this app was circulating, if you gave it access to your data, the app would also gain access to all of your friends' data. That's why a relatively small number of installs allowed it to hoover up huge amounts of data. So you could be militant about not granting access, but if your grandma clicked a button... Whoops!
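The fan-out described above can be sketched as a toy model. Everything here is made up for illustration: the ring-shaped friend graph and the numbers are assumptions, not Facebook's real API or real figures — the point is only that exposed profiles grow much faster than installs when each install also grants access to the installer's friends.

```python
# Toy model of the old app-permission fan-out: one install exposed the
# installer's profile *and* every friend's profile. All numbers invented.

def exposed_profiles(installs, graph):
    """Profiles reachable given a set of installing users and a friend graph."""
    exposed = set()
    for user in installs:
        exposed.add(user)            # the installer consented
        exposed.update(graph[user])  # their friends did not
    return exposed

# A tiny ring graph: each of 1000 users lists the next 3 users as friends.
n = 1000
graph = {u: {(u + k) % n for k in range(1, 4)} for u in range(n)}

installs = set(range(0, n, 50))  # only 2% of users install the quiz app
reached = exposed_profiles(installs, graph)
print(len(installs), len(reached))  # prints: 20 80
```

Twenty installs expose eighty profiles in this toy graph; with real-world friend counts in the hundreds, the amplification factor is far larger.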
Even "exfiltrated" is not really accurate since it was freely handed over.
The problem is that Facebook just made its partners pinky swear to only use the data for research, which is obviously not an adequate data security measure.
Whatever it is, it amounts to the same thing a breach is. So either call it a breach or invent a new word for it. In any event, calling it a breach results in treating it with the degree of seriousness it deserves, so I don't see a reason not to use that word.
>the platform operations manager at Facebook responsible for policing data breaches [...] warned senior executives at the company that its lax approach to data protection risked a major breach
>One Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?”
>They felt that it was better not to know. I found that utterly shocking and horrifying.
Lol, "this is bad"? This is normal. I can give you an entire list of F500 companies I've worked at that have the same mindset. I've sat in meetings with F100 CIOs where they were given the same warning and shrugged it off. I've been asked before to turn off security monitoring systems because executives prefer to not know about vulnerabilities rather than know about them and not be able to fix them.
The only thing shocking and horrifying about this whole thing is how naive the American public must be to find any of this shocking and horrifying.
>I've been asked before to turn off security monitoring systems because executives prefer to not know about vulnerabilities rather than know about them and not be able to fix them.
It's a simple cost-benefit analysis.
Implementing effective security is difficult, time-consuming, and expensive. Ignoring problems costs nothing. Unless it's clear the cost of a breach is higher than the cost of security, corporations will risk a breach every single time.
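The cost-benefit point above can be made concrete with a toy expected-value calculation. Every number below is a made-up assumption, purely for illustration of the incentive, not an estimate of any real company's costs:

```python
# Illustrative expected-value sketch of the security trade-off described
# above. All figures are invented for the example.
breach_probability = 0.05   # assumed chance of a breach in a given year
breach_cost = 10_000_000    # assumed fines, cleanup, reputational damage
security_cost = 2_000_000   # assumed annual cost of an effective program

expected_breach_loss = breach_probability * breach_cost
# A rational-but-amoral actor ignores the problem whenever this holds:
ignore_is_cheaper = expected_breach_loss < security_cost
print(expected_breach_loss, ignore_is_cheaper)  # prints: 500000.0 True
```

Under these (invented) numbers, ignoring the problem is four times cheaper per year than fixing it, which is exactly why external penalties are needed to change the calculation.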
The ultimate loser here is users, who bear the burden of having their data appropriated and misused. Unless the government steps in and imposes penalties on corporations on behalf of users, they'll continue merrily offloading the risks of poor data security on the general population.
It's not even as simple as this. Sometimes, ignoring problems can actually be cheaper. Public perception, as well as government fines, will often treat companies nicer if they were ignorant to the full breadth of security issues than if they knew about them but did nothing.
It's a failing of our system to be sure. I've been asked to stop doing a security assessment halfway through, because once the client realized that the assessment wasn't going to just be "everything is 100% A-OK!", they didn't want it to be on record. If they were breached, they didn't want any paper trail of the executives knowing about the security vulnerabilities that could increase their liability in court. They preferred to be able to claim ignorance.
People love to pretend to be horrified by things they've assumed to be true. "What? A politician is corrupt? Outrage!!" "What? They're tracking me to build profiles on my use of all their advertising driven free services? Outrage!!" I'm sure there's a word for it in German.
Have you ever considered the possibility that many people actually are shocked?
Why do “people love to pretend” that genuine outrage and the sincere desire to stop immoral practices doesn’t exist?
Many people sincerely care about what’s right, even if they fall prey to human flaws and cognitive biases from time to time.
Perhaps those who talk about how “everyone” just “loves to act like” x and “virtue signal” y are merely projecting their own values on to the rest of us?
That's even more sad. It would mean that even though they care deeply about outcomes, they either willfully ignore the things that result in those outcomes, or have amnesia, or are just incapable of doing anything about it.
Take Congress for example. Approval ratings are what, 20%? They're generally seen to be corrupt, and they don't get anything done, right? So why aren't they voted out of office? Why are people surprised when they turn out to have low morals or to be corrupt? If people honestly cared, wouldn't they immediately demand change? But the status quo remains.
So either the people have no power to change things, or they collectively forget these things every day, or the real reason: they don't really care that much, but like to seem like they do.
> Take Congress for example. Approval ratings are what, 20%? They're generally seen to be corrupt, and they don't get anything done, right? So why aren't they voted out of office?
Because the average approval rating of individual members of Congress in their own district (for the House) or state (for the Senate) is much higher. For most people, it's (some large subset of) the 532 members of the Congress that they don't get to vote for that are the problem.
For your congress problem it’s actually none of the reasons you listed. The cause of the discrepancy is the 20% approval rating is for congress as a whole, but people don’t vote for congress as a whole, they vote for individual representatives.
People do like their own representatives, and those approval ratings are
often very good in their own district. It’s the rest of congress they don’t like.
I obviously can't share with you the list of specific clients I work for, but this attitude is pervasive enough that you should assume that any and all major corporations have this same mindset. All of them.
Yeah, anyone who has spent much time in Real Companies will know that this is completely predictable and that it occurs virtually everywhere. Don't give them the data if you don't expect them to harvest it.
This is true for basically everything, even stuff that is typically acknowledged as sensitive. I've consulted for big financial groups whose customer service reps had completely unfettered access to SSNs, birthdays, and everything else they had on millions of customers. I would not have been surprised in the least to learn that some programmers in the company, either acting on their own behalf or acting at the request of a superior, were taking samples of this data for "unofficial" use.
Maybe the takeaway is that the SV brogrammer is not quite as special as he/she thought, and not exempt from the temptations that afflict the rest of us.
For the executives, it is better not to know. As soon as they know, they are liable and must disclose it. As long as they don't know, they can claim ignorance.
The cool thing? This "whistleblower" already spoke publicly with an op-ed on the NYT months ago.
Again, this is just the top of the newscycle. Let's see what happens in three months from now. My guess: Facebook revenue will go up. This is a PR shitshow, and a great piece of advertising for Facebook's ad department.
A lot of marketers right now are thinking "wait, we could do that with all that Facebook data we have?!"
GDPR can't come too soon. That would definitely put an end to these shady practices, as the penalties of several individual infractions would endanger any company.
It will offer a possible solution in Europe, where Facebook has already been under heavy scrutiny.
It won't change anything in the US, South America, SE Asia and developing countries, where Facebook is already dangerously synonymous with the whole online experience of the average user.
The hope is that Facebook will not have two different data handling strategies for EU and non-EU users and we'll see some sort of regulatory encroachment from the EU to the rest of the world. But obviously GDPR endangers so many of Facebook's shady but lucrative practices that they will have financial incentives to set up two different user silos.
> The hope is that Facebook will not have two different data handling strategies for EU and non-EU users and we'll see some sort of regulatory encroachment from the EU to the rest of the world. But obviously GDPR endangers so many of Facebook's shady but lucrative practices that they will have financial incentives to set up two different user silos.
Facebook might be one of the few organizations with the motivation and ability to set up two different regimes to contain the effects of GDPR on their practices.
In that case, I would love to know what their selection criteria are.
I wonder if other countries will also start making laws like the GDPR? The EU is creating a framework which makes it very easy for other countries to "attach" on to it.
The only danger is not having enough leverage to prevent companies from simply leaving rather than adhering to privacy standards, but by cooperating this should be possible?
Under GDPR you cannot mix these two things. They cannot force you to accept conditions that are not relevant to the requested item - if you do not accept the opt-in, they are still not allowed to refuse your access to the monkey video, as there is no meaningful connection between said video and all of your data.
It would be different if you had to pay for something, in which case you would have to agree to share your name, credit card etc. However, they still would not be allowed to share it with unrelated (!) third parties.
Even if they complied with your erasure request and deleted everything from all their servers and their backups (spanning the world over a decade), think of all the non-EU third parties who already have your FB data.
GDPR isn't only about data deletion, but also about the transparency of data handling strategies. There was a "nightmare GDPR" letter HN post a couple of days ago that illustrates some of those responsibilities.
Sure. That's already possible in Germany, where any company has to provide you with the details of their knowledge about you once a year for free: https://selbstauskunft.net/
My thoughts exactly. People seem to hold on to Facebook whatever the stories are, so there is no real problem. I was even thinking of buying stocks soon (wait one more week to have the market process these 'new' facts).
It may not be seen as ethical here, but I also plan on buying some more FB. This is a huge overreaction, and we all know it'll blow over inside of a week.
Attention everyone's smart friend or family member:
Now is not the time to scream "DUH!" or "I told you so!" to people who in the past have not grasped just what they were agreeing to. Now is the time to help your less tech-savvy friends understand the impact their data has in aggregate (like swaying elections!), and how they have been used by this system. Take advantage of all this bad press and help people you care about stop contributing to this machine. If ever you were going to get someone to stop using these services, these are the moments you capitalize on.
I'm going to be using this thread as a perfect example when people think I'm crazy for saying that SV often exists in its own bubble. The disconnect here, and the failure of so many people to realize something so obvious, is appalling.
Open a software job board. 50% of offers are by companies trying to optimize some data harvesting and analysis to better target some ads.
Ads, the direct child of propaganda. So to everyone working in those kind of companies: you're not better ethically than people working on missile software. I'd say you're worse because you can argue missiles can be used as deterrent.
> So to everyone working in those kind of companies: you're not better ethically than people working on missile software.
Yikes.
I agree with your points, don't get me wrong! Spot on-- this optimized data harvesting is widespread and terrible, and ads are dangerous.
Yet, I think your analogy is a little bit much and takes away from your argument. Missiles' purpose is to kill people, they tear apart families, bring chaos to countries-- they are built with the explicit purpose of terrorizing at best, and ending anyone not terrorized at worst.
Ads are meant to sell things. Sure, they are terrible when used as propaganda, but they're still just meant to be an efficient way to deliver feelings and ideas, and one that can be escaped with skepticism and critical thinking.
I personally don't think that a Google engineer working on Google Maps, a Youtube intern helping with creator tools, or even a Facebook employee making face filters for Instagram is anywhere near the same ethical level.
Also, you mention missiles at their best are used as a deterrent. That might keep one nation safe, but it is still about spreading terror to others and tends to just fuel arms races.
Ads at their best (aka, furthest removed from propaganda) are about informing people of things they would otherwise not know about. Think mom and pop shops, some new organization, or a science fair.
It's easy to paint things black and white, and there's a line that can be crossed in terms of tracking, optimization, and attempts to control the population/public opinion. IMHO though, I really do think engineers working at companies in the ad space are not as ethically compromised as those working on machines meant to kill.
Isn't the Pen mightier than the Sword? Aren't ideas immortal, unlike people?
If you agree with those tropes, missiles are less dangerous than propaganda.
My comment may come across as exaggerated, but I think you need some shock value if you want people to start really thinking. Cognitive dissonance is hard to break, and rare are those who don't consider themselves good people.
Really? You expect every user to understand the extent of FB's reach into their personal information? You expect every user to understand the extent of how companies obtain that data, whether through purchases or covert harvesting? You expect every user to then understand the myriad of ways their data can be used?
We work in tech, but we're quick to forget that most users at best have a simple understanding of, "I mean, they have some of my data, I guess some of the ads match some things I've searched for on Google before." Joe Schmo never reads a ToS, and we can't expect people not involved in this industry to not be surprised when something like this happens.
I expect people who dismissed the Cassandra of privacy to not act surprised.
10 years ago they dismissed people telling them about things like Google Analytics or other external scripts on websites. They used the "nothing to hide, nothing to fear" phrase. "You're paranoid, no one wants ordinary people's data."
Even better: doing it one week after complaining about how the GDPR and the Right to be Forgotten are bad EU laws.
However, software development is not a Profession, in the proper use of the term. It is not self-regulating the way Medicine, Engineering, Law, and a few others are.
There is no formal standard of ethical conduct in software for practitioners to use as a baseline for their own behaviour.
>1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.
edit: this version is from 1992. And I should point out my courses at least had a discussion or two regarding programming ethics in college in the mid-2000's.
I think ethical guidelines by definition are not enforced per se. They're just that, guidelines. However, as mentioned by GP there are boards of ethics at universities and medical/engineering organizations and such that might be able to dole out a modicum of justice.
For instance, not following those guidelines would conceivably end one's membership of the ACM, and many companies have their own ethical guidelines (I would argue there is not much difference between professions for what is truly considered "ethical") which when breached would result in disciplinary action. Theoretically?
Exactly. Nothing is done and the code of ethics is not enforced.
Let's say I'm a structural engineer or a lawyer and I act legally but unethically: I can be censured by my professional association/college, because law and engineering are professions and thus are self-regulating.
Can the same be said of software development? Certainly not. The cult of the amateur, self-taught basement coder and the entirety of startup culture are antithetical to professional ethics.
Professional ethics aside, how about plain old personal ethics? Do programmers have a higher incidence of unethical behavior in general than the rest of the population? I agree with you that it seems like there could be a more rigorous professional standard for enforcing ethics in coding/CS, but I like playing devil's advocate.
No idea, but a lot of developers and other tech people suffer from hubris, believing that since their cognitive skills make them effective programmers, they are in turn equally insightful in other domains because all thought depends on logic.
The problem is that it's very easy (and socially acceptable, even desirable) to build elaborate towers of logic on an unexamined premise.
TBH making the ethical choice may not even be the logical one. That's why it helps to have some education on the topic, as it inevitably involves making the less 'obvious' choice.
> I think ethical guidelines by definition are not enforced per se. They're just that, guidelines. However, as mentioned by GP there are boards of ethics at universities and medical/engineering organizations and such that might be able to dole out a modicum of justice.
Might be more than a modicum. If a lawyer or a doctor violates medical ethics, they could get their licenses revoked and be unable to practice their profession legally.
The ACM has certainly tried to keep software development a profession, but yes the industry mostly ignores the ACM and still revels in a cult of amateur programming.
Yeah and how many people adhere to this? Anyone can call themselves a software engineer, and many do despite never having been near an ethics course. Calling for mandatory ethics modules as part of a CS course has been an unpopular opinion here for years :-l
I'm not saying an ethics course makes one a software engineer but it seems like a pretty basic thing to remind students of so they are less likely (however small that chance is reduced) to go out into the world to become unethical wall st style sociopaths...
I would suggest that something people theoretically had dim awareness of in the DOS era before many current programmers were even born, doesn't govern how this industry works today.
If it did, we'd have heard from it again in the last 26 years.
> There is no formal standard of ethical conduct in software for practitioners to use as a baseline for their own behaviour.
Such a baseline standard _must_ exist, and _must_ be created. Every applied technology has started out with dreams to "change the world", only to have those dreams shattered by those obsessed with power.
Assuming it _must_ exist, how can we enforce it given that anyone with an internet connection can teach themselves how to make software? There is no centralized accrediting board for programmers, and it’s not very feasible to me when there are so many self-taught programmers today.
In some states anyone can take the Bar Exam to be a lawyer. [1] They all still require time in provable apprenticeship/study in exchange.
Michigan doesn't have a degree requirement for the Fundamentals of Engineering exam to work toward being a licensed Professional Engineer. In general, in the past, the NCEES, which runs the FE and PE exams has made degree exceptions for people with appropriate work experience.
It's absolutely feasible to have accrediting standards and bootstrap in all/most of the self-taught programmers today.
The flip side is admitting defeat and proclaiming software development truly is the new blue collar and has no hopes of truly being a profession.
It's important to realize that Facebook's lax attitude to data harvesting was most likely one key to their success. If they had closely guarded user data, there would soon have been stiff competition, but this way everyone could benefit from Facebook's data treasure trove and Facebook's success was in many other companies best interest. The current state of affairs should therefore be seen not as the result of negligence but a desired outcome of Facebook's core business strategy.
I made a Facebook web scraper which opens 20 headless browsers. You provide a list of unlimited usernames & proxies (you can buy them at https://buyaccs.com/en/). It will scrape every ounce of public information available. I acquired a few million users' worth. The data is too easy to get.
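For anyone curious how simple the fan-out part of such a scraper is, here's a minimal sketch. All names are hypothetical, and the actual page-fetching step is stubbed out, since a real version would drive a headless browser (e.g. Playwright or Selenium) through each proxy; this only shows the username/proxy rotation and worker-pool pattern the comment describes.

```python
import itertools
from concurrent.futures import ThreadPoolExecutor

def pair_with_proxies(usernames, proxies):
    """Round-robin each username onto one of the available proxies."""
    rotation = itertools.cycle(proxies)
    return [(user, next(rotation)) for user in usernames]

def scrape_profile(username, proxy):
    # Placeholder: a real implementation would launch a headless browser
    # configured to route through `proxy`, load the public profile page,
    # and extract whatever fields are visible. Here we return a stub record.
    return {"username": username, "proxy": proxy}

def scrape_all(usernames, proxies, workers=20):
    # 20 workers mirrors the "20 headless browsers" in the comment above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        jobs = pair_with_proxies(usernames, proxies)
        return list(pool.map(lambda job: scrape_profile(*job), jobs))
```

The point isn't the specific library; it's that nothing here requires privileged access. Anything public is harvestable at scale with a trivial amount of code.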
Which kinda throws me off. It's no secret that Facebook, Twitter, your phone (if you give permission to a 3rd party app), etc. are all harvestable. The whole CA affair relates to a story that broke a couple of months ago. Why is it catching fire now?
Is there any reason to believe that the situation isn't the same in, say, the Android ecosystem? In my experience many 3rd party apps require ridiculous amounts of permissions (contact list etc...) for something that's not core functionality. Surely all these free-to-play crapware games on the Android market have siphoned all the data they could and sold them to the highest bidder? Does Google do a better job of monitoring these apps?
Yeah that's exactly the type of things I'm worried about. Smartphone apps have potential access to an incredible amount of sensitive data and I always found Android's permission system to be woefully inadequate.
There is an increasing need for a container app that will feed whatever sensor data you want to the apps within the container. Also there would need to be some preconfigured sensor data profiles, like "Occasionally cheating husband living in city X" or "Really rich housewife living in city Y" or maybe "piss-poor guy living in a rural developing country"
I don't know which one I'm more worried about. On the one side Facebook can track preferences and something akin to feelings closely, but on the other Google can track day-to-day activities much better.
If you've ever seen your Google activity log, you know it's very scary. The accuracy with which your phone can track your movement and where you are at every point in time is unprecedented. I'm very careful with what I allow 3rd parties to access, but I can see a lot of users blindly accepting (like they did for this personality quiz that leaked all this Facebook info in the first place).
All the self-serving memes are out on parade here: "This not news", "what did you expect?", "anyone paying attention should have known", "it is the new normal", "it could not be any other way", "everyone benefits"...
Actually, as of posting, the "Apple/Google/Microsoft are just as bad" version has not yet put in an appearance.
Apple wants to be a walled garden. Can't be a walled garden without walls: their self-interest and marketing direction lies elsewhere. Also, in the case of the iPhone, hardware is the product, and in the case of the app store, the devs and apps are the product (some of the time, anyway).
Yeah, it makes me think of how people often say that SV is disconnected from the rest of society.
Sure, those of us on HN know to expect this from data-mining companies, but spend an extended amount of time with people who don't work in tech and you'll quickly learn that, yes, they know that FB uses your data, but most people have almost zero idea around just how much of your data is captured and sold/harvested/whatnot, nor what is done with the data after that point.
Stallman is right once again. The response that we used to get is, "but Facebook/Google would never give away their data, they will just use it for targeted advertisement, the data is too valuable to just sell it wholesale to other entities".
> Asked what kind of control Facebook had over the data given to outside developers, he replied: “Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.”
Um, well yeah. This is the case any time you give data to a third party. They now have a copy, and you can't control what they do with it.
Exactly. What kind of controls could there possibly be?
Even doing an audit wouldn't necessarily reveal anything. If somebody has data that they want to hide I'm not sure how much can really be done to force them to reveal it.
The controls are agreements that make getting caught doing the unauthorized act painful enough that it might be enough to deter the act in the first place.
If the price is high enough, bad actors will be willing to breach NDAs/CDAs/licensing agreements/etc, but at least then you can be seen as having done more than zero.
Well, NDAs/CDAs/licensing agreements/etc can make it very painful for your customers/partners/etc to disclose data to non-authorized 3rd parties or otherwise contravene your requirements for what they do with data, intellectual property, customer lists, etc.
This doesn't stop external attacks, of course, but it can reduce internal risks.
Facebook could have had more than zero control, if it had wanted.
It would be interesting to see someone analyze whether the relationship between government and Facebook got more news coverage this time than it did when Snowden reported the exact same thing in 2013, or when other people reported it back in 2012. In particular it would be interesting to see which news sites write articles about it and, if possible, how negative they are.
Maybe that would explain why so many got surprised by this while others have seen it for a long time and just got used to everyone not caring.
I was actually surprised by the generally positive reaction the GDPR got in recent threads here on HN. I guess the suspicion of data hoarding overcame conspiracy theories about government regulation or EU protectionism.
BUT it’s important to note that GDPR would probably not have had an effect on the specific situation with Cambridge Analytica. CA is obviously toast if not by law then by the attention alone. Facebook, however, is likely allowed to share data under GDPR as they did with CA: they got the users’ permission initially, and there isn’t much you can do to protect yourself against malicious actors.
>they got the users’ permission initially, and there isn’t much you can do to protect yourself against malicious actors.
The EU is clearly moving against that blatant circumvention. I don't know exactly what they are going to do, but the whole "just sign all your rights to privacy away with one click" approach is something they want to change.
I think the most likely situation will be one where each specific instance of use of your data would need explicit approval. Moreover, the prompt cannot be disingenuous legalese. It needs to be clear and concise. I fear it might just become another Cookie Law. But it might still be useful. For example, imagine if you get something like:
"Facebook discovered that you have Chronic Illness 1. Facebook requests permission to share this information with Insurance Company in your State. Do you approve?"
I think the insurance company would care a whole lot!
Facebook's big data is getting to where they can predict things like pregnancies and illnesses by parsing minor changes in behavior and correlating them against the big data set. This is of course super interesting, but it also gives you results like 'suddenly this guy is 42% more likely to die in the next 6 months and doesn't know it'. There are no certainties, but to an actuarial entity like an insurance company?
That's more than worth getting your lobbyists to repeal any shred of requirement that you have to keep faith with such a person. Insurance combined with big data and stripped regulations makes such an industry purely a financial play: handled properly they can, for a time, collect money and never pay any of it out, until it becomes obvious that's what they're doing.
Those are the entities most interested in having Facebook tell them you're probably getting sick. And why would Facebook ever tell you? That's their inference. You never said a thing about it, and indeed they could be wrong. But don't bet on it.
Regarding "they got the users’ permission initially" this is true for the users that signed up for it, not anybody in their social graph. GDPR treats data about a user as data belonging to this user. Those people have definitely not consented to having their data mined for this use case.
Next, as I understand it, the consent was for research purposes, not for the CA targeting. So under GDPR Cambridge Analytica could be fined 4% of global revenue or €20M - whichever is HIGHER [1]
AFAIK Facebook shared data of non-consenting individuals ("friends") of consenting individuals. In light of the GDPR this would be at least borderline illegal. As well, data from consenting parties was used in a manner not consented to (that would cross the line) and handed over to a fourth party. Finally, FB did not ensure proper data handling (crossing the line again). At least if regulators were willing, they'd have a leg to stand on.
We’re in the midst of reviewing our GDPR compliance one more time ahead of the May 2018 deadline at my company. As a consumer, this kind of regulation makes sense. On the technical level, with so many service providers involved in running a modern day company, it’s obvious that many things have to be restructured for strict compliance. E.g. ESPs for mail delivery, analytics for business insight analysis.
In the end, it’s about using data as it’s intended and nothing more.
I think a better analogy is automobile laws. Yes it’s a pain to have your car inspected and registered over and over, and there are plenty of places where speed limits are frustratingly below what a driver could safely deal with, but in general these laws do protect people not only from themselves, but others, who are perhaps not so careful with their car.
> He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.
If this is true – would this constitute willful blindness, and is that not illegal?
Well of course this was routine. Facebook prides itself on its data collection and ad targeting. I'm not discounting The Guardian's reporting, but I thought this was known.
So the greatest achievement of big data and ML right now is the manipulation of elections: a true shock and awe.
And yes, everyone on HN and RMS will say “why is everyone surprised ?” Well it’s because normal people don’t have that perspective and think Facebook is the internet for them.
Data is the new oil, and now everyone knows this, not just us on HN.
Luckily, I've managed to work out what was happening (the news never really explains it - just a "data breach"). In the past, if you gave it permission, a Facebook app could access information about your friends, e.g. their photos, name, gender, etc. I'm not sure exactly how much data.
Some sketchy apps harvested this data (which was against Facebook's terms and conditions for those apps). So the apps may have broken the law. I guess there is the question "should Facebook have protected the data better" but I doubt they broke the law exactly.
Anyway the stupid thing about this is that it was obvious that's what all these sketchy apps were doing at the time. Facebook app developers knew they could get this data, and the only thing stopping its exploitation was Facebook's app T&C's - i.e. "please don't do bad things".
There was even a setting to prevent third party apps accessing your data when given permission by friends. That's how obvious this issue was. (I doubt anyone used this option).
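To make the mechanism above concrete: under the old (pre-2015, v1.0) Graph API, one consenting user could expose fields for all of their friends. The sketch below only constructs the request URL; the specific field names are illustrative from-memory assumptions, not an authoritative reproduction of the old API.

```python
# Sketch of the kind of request a pre-2015 app could make once a single
# user granted the friends_* permissions. Field names are illustrative.
GRAPH = "https://graph.facebook.com"

def friends_request_url(access_token, fields=("name", "gender", "birthday", "likes")):
    """Build the URL that would return the given fields for ALL of the
    authorizing user's friends - none of whom consented themselves."""
    return "{}/me/friends?fields={}&access_token={}".format(
        GRAPH, ",".join(fields), access_token)
```

One quiz-taker's token was enough to pull data on hundreds of friends, which is how ~270k app users reportedly yielded tens of millions of profiles.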
Does anyone else have the wild speculation that Rupert Murdoch is opening a bottle of champagne today?
The interest in FB and privacy, while gratifying, also seems focused through a particularly yellow lens.
People on HN have also pointed out that it’s very likely that CA’s analytical prowess may well be overstated as part of submarine marketing efforts.
I suppose many people are just surprised that this is taking off now, without any truly new or novel fuel driving it - when the same articles and worse, had no effect earlier.
There is new fuel. Did you miss the video of the CEO of Cambridge Analytica bragging about how his company stole 200 elections around the world by lying, cheating, and deceiving the public, with illegal methods abounding?
But specifically the way Facebook is being dragged to the center of the fire, even though there isn't really any new fuel on Facebook's side.
There's good reason for the media to be tense against Facebook right now, since Facebook has changed the news feed algorithm:
"traffic in the news category, which includes major news publishers The New York Times, Washington Post, CNN and BuzzFeed, was down 14 percent after a sharper drop in the months prior"
I do think Facebook should audit its 3rd party developers more closely and that this leak of data is terrible. Yet, imagine CA instead had built an app for a personality quiz and asked for a ton of permissions from your phone to track your location, harvest your contacts, etc. What else could Google/Apple have done?
I'm glad that a light is finally being shined on the sliminess of facebook's business model and that the public are starting to understand what an ugly company facebook is.
I am dismayed at the state of journalism, that it took a Trump connection before they seriously reported on this.
From the story:
> They seemed to be entirely focused on limiting their liability and exposure rather than helping the country address a national security issue.
This is a national security issue. That seems like the most pertinent issue, and yet there is no mention of it in the discussions here. Facebook has amassed huge amounts of data about all citizens, and adversary nations are leveraging this data to manipulate the nation, including by helping elect a president who will be friendlier towards them.
Facebook is Russia's biggest cyberweapon. Just as private companies would not be allowed to stockpile WMDs, private companies should not be allowed to stockpile so much digital information either. This is a national security issue.
Are we really that ahead of the curve here? Did the general public not realize that a free service that collects tons of information from you might be (gasp) using that information to make money?
I know some people who work in SEO and marketing and the stuff CA was doing was a more sophisticated version of what every free Facebook game or 'survey' vendor is doing. This is literally the business model of the free/popular Internet. Of course it's going to be used for political campaigns-- why not? It's used for every other kind of marketing.
I'm not saying it's good. I think it's terrible. It's a plague. I'm just shocked that people are shocked by what's been happening in the open now for years.
How long before it comes out that Google does the same thing with search history, Amazon with product browsing, Apple with phone usage, and Microsoft with Windows 10 usage?
> ...terms and conditions people did not read or understand.
The article acts like this is unprecedented. No one reads terms and conditions. It's not as if people fork over intimate details of their personal lives to Facebook with the defense that "oh the terms and conditions say they can't use this in a way I don't like"
People fork over intimate details of their personal life to companies like fb because they haven't thought about it very hard
I'm OK with data harvesting, but it has to be transparent, ask for permission every time they want to use my data, and allow me to delete the data I generated. It should always be opt-in.
I use google maps a lot. I search a place, it provides me lots of useful information. Yes, I find it helpful, but also terrifying, especially with the "Popular times" section, which is "based on visits to this place."
It is more of a misuse than a breach. The data was provided to a guy for academic research, but the guy sold it to a third party. That is where the 'breach' happened.
Imagine a clinic has a policy that allows patient data to be released to non-patients, but a court decides that the use violates HIPAA. There would be no technical breach of security, but rather a breach of responsibility.
I've been thinking more and more recently of deleting my facebook profile.
1. I barely add stuff on it
2. 99% of my news feed is irrelevant and I really don't care (there maybe 1 post/day from a friend that is interesting to me)
3. Starting to be more and more concerned about all this data
But I'm scared for multiple reasons:
1. connections needed from a lot of friends, family etc. that I would not keep contact with otherwise
2. It does a great job at keeping my contact list updated (no later than yesterday I searched for friends/coworker to make sure I don't forget anyone on my farewell email)
3. Messenger. I use it a lot (almost as much as iMessage) and again, a lot of people I talk to on facebook I don't have their info for telegram/whatsapp/etc.
> I've been thinking more and more recently of deleting my facebook profile.
> But I'm scared for multiple reasons:
I went through a similar thought process. I decided to keep my profile active, but unliked everything and deleted all my posts. I also changed my profile pic to make it clear I wouldn't be using Facebook anymore. That way I can still get event invites, etc. but not be too burdened by the whole thing.
Eventually I'll delete my profile, but not until Facebook becomes far less ubiquitous (and I do what small things I can to hasten its decline).
Yeah exactly. My Facebook is pretty much read-only at this point, except for a few "likes" to show support for friends or if I get invited to a personal event or something.
Wow. “react only when the press or regulators make something an issue, and avoid any changes that would hurt the business of collecting and selling data.”
You are assuming these same people, with everything to gain through just this behavior, have been utterly and scrupulously well-behaved this entire time all while nobody really questioned them.
'If you authorized any app'? I'm sure there are workarounds for that. If you touch them or pages where their invisible Facebook gif is present, they've probably got all your data that's gettable.
> "Academic research from 2010, based on an analysis of 1,800 Facebook apps, concluded that around 11% of third-party developers requested data belonging to friends of users.
> If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled "private and personally identifiable" data belonging to hundreds of millions of users," Parakilas said.
So it's quite possible that there are more than a few third-party holders of FB user data who have now been alerted to the potential profitability of their old "research data."
Do we actually know what data has been leaked/illegitimately retained/whatever you call it?
A lot of the discussion revolves around friends data -- was all friends data accessible regardless of the friends' own privacy setting (this would be deeply troubling), or was it the data that friends shared with the app users (a bit less troubling, but still very questionable), or was it friends' data that was openly available on their public profiles open to any internet user?
What is the deal with those videos in news stories these days that are just moving pull quotes with music and maybe some pictures? It's like a really short article in video form.
So the Cambridge Analytica stuff had been public for a while, as has Facebook's responsibility in the matter.
The cynic in me wonders if this is all lighting up intentionally before GDPR takes effect, to reduce potential financial liability. Too conspiratorial?
Sorry for this stupid question: how does Facebook prevent kamikaze developers/sysadmins from dumping the data and running away? Is there some framework to manage data loss prevention in this way?
the platform was designed to entice and integrate into every aspect of people's lives and then ENCOURAGED you to get your friends to participate by inviting them to the platform. it was a data harvesting, advertising mongrel from the start and everyone KNEW it, but no one cared.
like all things in life... people cry foul when things come back to bite them in the ass.
like i said yesterday in a comment... delete ALL forms of social media cause this goes on with ALL platforms out there, it isn't just limited to FB.
Would you not consider HN Social Media? I personally don't think it's the same as FB, but there are some strong similarities. Maybe "Social News" is a better term?
Are there any liability laws that hold those in the chain of custody accountable for third party misuse? Seems like it is an obvious burden of responsibility.
This article sums it up for me. FB makes the short-sighted decision to allow apps to scrape friend-of-friend info, because it would encourage more app developers, from which they get a 30% cut. When this guy asks a question about scraping, his boss replies "better not to know." The company continues not to give a shit about distributing PII until they realize that someone might use that data to create a rival social network. Then they shut it down and treat the fallout as a PR exercise.
I see a lot of Facebook sympathizers here. Is this what devs do at Facebook? Browse HN and defend the reputation of Facebook at any cost? Yes, we all knew what we were in for when we signed up for Facebook and Instagram. Yes, they can sell our data to show us ads about what coals to buy for a July 4th BBQ party, and we are OK with that. But not for them to turn a blind eye to foreign entities which in turn use it against us and jeopardize American democracy and the social fabric.
lol, noticed the same thing. Sadly it's not just Facebook doing it. Every time a company comes under fire there's a ton of comments defending them or saying how it's a necessary step, etc. Sometimes it's the "famous" users here doing this, consistently for the same companies. Gonna guess they have invested in them, but this kind of shilling is just stupid.
TBH I see plenty of arguments on the other side too: whenever a company does something people disagree with, there are calls for regulation. Even further, there are lots of people here who get annoyed when they see someone just skirting the rules without literally breaking them, and who always argue for the most expansive interpretation.
I think some people just naturally like rules and would prefer to live in a more orderly, rule based society, and some people don't like the idea of being constrained. Both groups act quite sanctimoniously though, as if their personal preference is somehow the holy truth.
An analogous example would be if the CA/FB breach had access to private facebook messages or information that was never intended for public consumption.
In the CA/FB case the information was either public (and could be scraped as such) or was collected in the form of facebook apps.
This is not true. The old Facebook API gave access to all data the user had access to. This included information (posts, photos…) by “friends” which was not public.
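For concreteness, here is a minimal Python sketch of the mechanism being described. The endpoint version and `friends_*` permission names reflect the pre-v2.0 Graph API (retired in 2014-2015) as best I recall them; treat them as illustrative assumptions rather than an exact reproduction of what any specific app did:

```python
# Sketch: how a single consenting user's login could expose friend data
# under the old Graph API. Permission names (friends_likes, etc.) existed
# in v1.0 but are illustrative here; the token value is a placeholder.

GRAPH = "https://graph.facebook.com/v1.0"

def login_scope():
    # The OAuth scope one user granted the app. The "friends_*"
    # permissions pulled in data about friends who never installed
    # the app themselves.
    return ",".join(["email", "friends_likes",
                     "friends_birthday", "friends_location"])

def friend_data_urls(friend_ids, access_token):
    # With that one token, the app could enumerate /me/friends and
    # then fetch each friend's profile fields directly.
    return [f"{GRAPH}/{fid}?fields=likes,birthday,location"
            f"&access_token={access_token}"
            for fid in friend_ids]

urls = friend_data_urls(["1001", "1002"], "TOKEN")
```

The key point the sketch illustrates: the consent bottleneck was one user, but the data returned belonged to their entire friend list.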
This data breach is different in nature. It is not the same as Ashley Madison or Experian, where someone got hold of a root password or some other data store (some form of hacking involved). This is about a third party using Facebook's API to access user data and exploiting those APIs to retrieve hoards of information about a particular user. In this case, Facebook most likely knew which third-party developer was heavily hitting a certain API in a certain pattern, but they decided to turn a blind eye.
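The "they must have seen the pattern" claim is plausible because flagging a scraper from server logs is trivial. Here is a hypothetical sketch (all names and the threshold are invented for illustration, not a description of Facebook's actual monitoring):

```python
# Hypothetical monitoring sketch: count each app's profile-fetch calls
# in a time window and flag outliers. An app harvesting millions of
# profiles stands out immediately against ordinary apps.
from collections import Counter

def flag_heavy_hitters(call_log, threshold=10_000):
    """call_log: iterable of (app_id, endpoint) tuples for one window."""
    per_app = Counter(app for app, endpoint in call_log
                      if endpoint.startswith("/user"))
    return sorted(app for app, n in per_app.items() if n > threshold)

# Synthetic log: one scraper, one ordinary app.
log = ([("quiz_app", "/user/profile")] * 50_000 +
       [("game_app", "/user/profile")] * 200)
flagged = flag_heavy_hitters(log)  # → ["quiz_app"]
```

Even this naive per-window counter separates a systematic harvester from normal usage; a platform operator with real telemetry has far better tools.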
What's utterly horrifying about this whole thing is how the media is acting as if this is some sort of surprise. Like, what did you think was happening at a company collecting data about billions of people? Especially at a company whose CEO is famous for calling its own users "dumb f***s"? A company that experimented on at-risk teens. Like, come on.
--edit---
Oh lordy, didn't expect this comment to blow up this much. Do forgive me if it sounded a bit smug; that was not my intention. But the fact of the matter is this was something we were all warned about. We were shown countless examples of exactly this, not just us nerds, everyone. People like Edward Snowden risked their lives telling us how all this data was being used against all of us, and yet everyone kept giving more and more. You were looked at like a tin-foil-wearing nutter when you told people not to give away so much information about themselves so easily.
At the end of the day, this is not really 100% facebook's fault, this is our fault, the fault of everyone who so readily made their information available without giving much thought to who sees it and what happens to it. And no just because you are not a techie you are not off the hook for not caring enough about your own privacy. I mean what level of technical knowledge is needed to know that once you post something online others can see it?
Funny thing is, this will all blow over after a few months, and everyone will go back to their usual habits.
It reminds me of the Snowden leaks about mass surveillance programs like PRISM. I think most technical people expected something like that to exist ever since the internet became mainstream. Still, if it's just an "educated rumor" without hard evidence there's not much for the media to talk about. Up until now you could only say "it seems pretty likely that Facebook is doing something like that, but we don't know for sure". That's not enough to make an article and that's not enough to convince the general populace apparently.
In general it's pretty amazing how trusting the average human being seems to be as soon as computers are involved. I suppose that it's mostly out of ignorance and complacency. People seem a lot more careful when physical mail is involved than emails, for instance. They also don't hesitate to share extremely intimate details about their private lives with a faceless corporation. Some of my friends willingly opt into streaming their position in real time continuously through their smartphones. That's terrifying to me but apparently very convenient for them. I think Zuckerberg agrees with my sentiment, since that's the source of his "dumb fucks" comment.
I hope these articles will help change that mentality but I'm not overly optimistic. I read a comment on a forum earlier today that basically said "screw Facebook, I'll close my account and do everything from WhatsApp instead". I don't think it was sarcastic.
I think the trust is a new thing though, new to the social media age. I remember growing up with computers in the 90's and people I knew wouldn't even consider entering a credit card number on a website. Now we give them freely. People used anonymous handles on AIM. At some point this changed and people decided they could be themselves on the internet, which is a fine idea, but the trust just went too far.
Exactly. Another example is applications "phoning home" (desktop applications sending information back to the server), which not that long ago was considered a serious abuse. People on forums would lambast you when you asked how to implement something like that. Now it's called telemetry and is the norm.
“No one company should have the power to pick and choose which content reaches consumers and which doesn’t,” said Franken. “And Facebook, Google and Amazon, like ISPs, should be neutral in their treatment of the flow of lawful information and commerce on their platform.”
And then one week later, his political career was suddenly over. Politicians got the message loud and clear: don't F* with Facebook.
> Some of my friends willingly opt into streaming their position in real time continuously through their smartphones
How do you avoid this? I have a GPS in my car with stored routing information, but if I need to navigate for someone else or get walking/biking directions, I am forced to do this. Printing out directions beforehand is something I did only a few years ago, but these days I don't always have a chance to do that.
I think he means his friends use location sharing apps, where you can actually tell people "Open this app/URL to see where I am right now.".
I remember installing a dating app one evening, and thinking "I'll look at it tomorrow.". Next evening, I opened and it said "This stranger and you were both at this subway station around noon!". Geezus Christ! I didn't even open the app the whole day! Uninstalled it straight away.
On Android, you can open your Google account settings and disable their always-on location tracking "service". Of course, you have to take Google's word for it, and that doesn't stop GPS from working for apps like Maps when you request it.
I usually leave Location services off. I'll enable them for 5-10 seconds, get the directions from Maps, then disable the Location service again. Of course, they can still estimate my location with cell towers (or WiFi, but I usually have that disabled as well), so it's not a perfect solution. Saves a lot of battery life, though.
You can disable your access to it and its background upload to Google. If you read the fine print, the "anonymized" GPS/cell-tower/WiFi data is still used periodically by Google to refine their maps, etc. Same for Apple, same for the GPS in your internet-connected car.
I think the person you are responding to is describing something different. If you use, e.g. Google Maps, to get directions while driving, Google knows where you are in real time but no one else does. If you share your location on e.g. Facebook Messenger, your friends can see where you are for the next hour. Presumably there are other apps which will share it continuously.
Most people don’t think “the data used to sell me milk could be used by politicians.” And those that do didn’t think “political ads today could be replaced by surreptitious foreigners tomorrow.”
If your reaction is “they should have known” you are in a Silicon Valley thought bubble. (I was until recently, too.) What you find “horrifying” is that bubble’s edges fraying.
“Of all the news crises Facebook has faced during the past year, the Cambridge Analytica scandal is playing out to be the worst and most damaging.
Why it matters: It's not that the reports reveal anything particularly new about how Facebook's back end works — developers have understood the vulnerabilities of Facebook's interface for years. But stakeholders crucial to the company's success — as well as the public — seem less willing to listen to its side of the story this time around.”
> If your reaction is “they should have known” you are in a Silicon Valley thought bubble. (I was until recently, too.) What you find “horrifying” is that bubble’s edges fraying.
What if your reaction isn't "they should have known" but rather "they should have listened when I told them this!"?
> What if your reaction isn't "they should have known" but rather "they should have listened when I told them this!"?
Then you, like me, are still figuring out how to message privacy as a priority to non-technical folks. Maybe it’s an issue of timing. My “delete Facebook from your phone and log out, by default, on your desktop” pitch was more productive yesterday than ever before.
Last Week Tonight did this bit brilliantly when they interviewed Edward Snowden - https://youtu.be/XEVlyP4_11M?t=1437 - in essence we need to get dramatically better at telling this story so everyone understands.
This is what many people would call a Teachable Moment: that rare opportunity when a person's belief structures are shaken up enough that you can unstick their education by reintroducing an idea they previously resisted.
As a matter of fact, this is news and surprising for most users of Facebook even if it's not for you. By saying that no one should be surprised, are you not taking the same condescending attitude that you're pointing out in Zuckerberg?
The real discussion to be had is how do you know that the person is actually aware of giving consent, similar to how a recaptcha verifies whether or not you are a human. I see in the future, some sort of test for users, that verifies that they read the terms of service, as a form of consent for the user agreement.
Edit: Fixed all URLs. All work except CNN, where you have to copy-paste.
There can be no consent for the usage of your data, as it is impossible to grasp in what ways the data will be used exactly, what deep learning algorithms will learn from it and what impact it will have on your life and society as a whole.
> some sort of test for users, that verifies that they read the terms of service, as a form of consent for the user agreement.
This will only happen if terms of service get vastly shorter, or if a law is passed that forces it. I would bet that any such measure would absolutely destroy user signup metrics, which means that not only do companies have no financial incentive to take such measures, but they also have an active financial disincentive to make the "I read the TOS, let me sign up now" process any more complicated than they absolutely must.
I'm also pretty sure that the everyday user would be pissed about that additional barrier to entry.
Also, I tried unsuccessfully to convert all those URLs to use HTTPS, but it either failed to connect or the server forced me back to HTTP. That's rather sad.
Well, it is extremely naive to think that Facebook does not use all the data they get about you. Then again, most people are very naive about this kind of everyday technology.
That's just your perspective. I live in Germany, where we have very strong data protection laws. Is it natural for people to assume that these laws are broken at such a large scale? And that abuse goes completely unchallenged for years?
Data protection laws are so strong in Germany that they let registration offices sell your data if you don't explicitly opt out. Most people don't even know what's going on and that they have to opt out to avoid that. Or German credit scoring institutions, which are allowed to collect data about you even if you don't have any mutual agreement with them.
German credit scoring institutions collect data on behalf of banks, insurers, etc., and you need to consent to their sending data to the credit scoring company. So you are actually consenting. If you never give consent to any such party, the scoring company must not store data about you (and most probably won't; they are tightly observed by data protection agencies).
It will become interesting with GDPR, when customers start to revoke their consent to exchange data with credit scoring companies.
I agree with you that not everything is perfect in Germany with respect to data protection. Not even close. However, our data protection laws are uncontroversially stronger than elsewhere (specifically compared to the US), and I'm almost certain that the courts will find that Facebook violated them.
Maybe, but what sucks about Germany and the EU is the arbitrary nature of many laws, enabling them to selectively punish those who don't play their game. By not being able to define clear boundaries, you give them the power to rule over who can succeed and who not. Data is what fuels businesses in the end.
AFAIK the law in the US is much more arbitrary in the sense that a lot of it is case law. Until such a case has been before a jury, and jurisprudence has been established it's basically a coin toss.
The thing that may feel arbitrary is simply the fact that the laws in Europe actually enforce privacy, whereas companies and people from the US expect these laws to be toothless.
> Is it natural for people to assume that these laws are broken at such a large scale?
Across international boundaries where those laws may be difficult to enforce because other countries are not in sync with them? Hell yes. Call me cynical, but...
> Is it natural for people to assume that these laws are broken at such a large scale? And that abuse goes completely unchallenged for years?
In Germany where data-leaks (which are a symptom of insufficient data protection) at telecommunication providers seem to happen on the regular, with no (reported) punishment as a result, yes I think that is a bit naive.
If you're dealing with large companies, it is. You should assume that. I have no doubt every major company in the world is covering up some serious crimes constantly. And FB has been egregious and it has been covered by the news. Also, why do you expect German laws to protect you from an American company?
More specifically, I think many people are naive about how it can be applied to their lives.
Every company tracks you. From what you purchase at Target to broad pattern-based behavior tracking on the web via ad companies, I think most people know they're being tracked at various stages for various reasons.
However, is it bad that Target knows I like to buy grass-fed beef? Probably not. It reveals some things about me, but I am far less concerned, as are most people, I imagine. This same mindset is what fuels people when they don't care what FB etc. are doing. Not that it's right/wrong, but I think people don't care who knows about their lunch or cat pics, thinking that's all that FB could gain out of it.
Humans in general are really bad at thinking long term. Nothing bad happens immediately when you sign up to FB, when you post personal information, when they sell your data, etc. For a lot of FB users, it might be 20 years before they regret their actions. That's just a hard feedback cycle for people.
You probably don't know how powerful these analytics are. It is possible to correlate and infer all kinds of data about what kind of person you are, based on other signals.
For example, if you ride a bicycle and eat beef, most likely you have a certain income and a certain family type (you use the same IP!), which means you might have certain political views and concerns. And this is where targeted manipulation is active: they can drive you in a certain direction. Psychology at its best.
Agree completely. Does it say much more than my job does, though? My car? My public travel patterns? Etc. There's a reasonable amount of information about me that I expect cameras on every corner to know.
Giving my information to FB/etc though? That's another story.
It's both naivety and lack of understanding, which as I've said before is by far the #1 problem with getting people to want more privacy.
It's really not that they "don't care" about privacy, even if they themselves think that's what it is. They usually say that because they don't understand the 1,000 horrific ways in which that data about them could be exploited, from personal blackmail situations, to identity fraud, to manipulating elections, to using it against them in court in a possible future conflict with law enforcement, and in many other situations.
I've seen people who are typically quite "anti-privacy" because "they want to benefit from Alexa, Google Assistant" and other such gimmicks, and "aren't scared" if Google or Amazon holds their data, because after all it's not the government holding it (ha! good one).
But now they've deleted their Facebook accounts, because they're finally beginning to understand the implications of these companies holding all of this data about them and how it could be abused. And it's still early days. It's only going to get worse from here, as we see more such abuses using Facebook, Google, Amazon's data, carriers', and other data hoarders' data.
Yes, this is news. However, it really shouldn't surprise any users of facebook that facebook would find ways to monetize their data, being carelessly abusive in the process.
Look, I'm a developer, I'm somewhat privacy-conscious, and I quit Facebook years ago because they're slimy.
But "doesn't keep up with technology and privacy news" is not the same as "dumb". For any product as big as Facebook, there are people of all kinds using it, including many who are brilliant.
Is it wise to trust Facebook with your data? No. But not having come to that conclusion doesn't make someone dumb. Please don't be so condescending. I'm sure many of those "dumb" people could be condescending about some of your life decisions based on their own expertise. But it's not helpful.
Self-censoring is in no way 'a higher level of discourse'. Not using curse words is one thing, but in some situation (esp. like this where a direct quote is used) there is no real reason to censor swear words in an adult conversation.
I know it's originally from a Zuckerberg quote, but the point is that if you want to call someone a dumbfuck, call them a dumbfuck. Censoring the latter half doesn't somehow elevate the discourse.
I mean on average the readership of HN are vastly more likely to be aware and care about their data and identity privacy than the average facebook user. so in this sense you're not wrong.
It's the core of the business. I'm sure we'd all like to have more recent direct quotes, but Zuckerberg is much more careful with his public image than he used to be. We have to infer it from the actions of the company he controls. I see no evidence anything's changed besides the PR.
Back then, it actually was "dumb" to give a random website so much personal information. Facebook had no reputation. Zuck could have stored passwords in plaintext and hacked email addresses and Paypal accounts. Now, we know that Facebook is a legitimate business, so we know they aren't going to do anything too illegal with our data.
IMO it’s very apropos because it sums up the core attitude of the company. That 19-year-old grew up to become one of the richest and most powerful men on the planet, with unchecked power.
It's not some random edgy thing he said - he is literally describing his attitude towards the actual thing under discussion, the sanctity of people's private data on thefacebook.com. And the attitude displayed is not just questionable, but Literally The Worst. I suppose we're meant to believe he had some kind of spiritual awakening about it? I'm sure becoming a billionaire really made him see the error of his ways.
The context is he was 19 and the quote's from an instant message conversation.
I've never seen an "explanation." It seems self-explanatory. I haven't seen an apology either, but this was in the New Yorker:
When I asked Zuckerberg about the IMs that have already been published online, and that I have also obtained and confirmed, he said that he “absolutely” regretted them ... Zuckerberg’s sophomoric former self, he insists, shouldn’t define who he is now.
“The media” has always been sceptical of Facebook, and I’d love to see examples of them “acting surprised”. In fact I would guess your scepticism was always informed mostly by what you and those you socialize with read in “the media”. The current scandal’s staying power in the news is similarly based on journalists’ pent-up suspicions finally finding a vehicle to be expressed in public.
These stories are newsworthy because they represent the break from generalized scepticism to specific examples of harm. If the New York Times had waged a nebulous campaign against Facebook without clear evidence it would have rightly been accused of getting ahead of the facts.
As someone who considered himself reasonably well-informed about the privacy implications of Facebook, the "friend permission" was still news to me. That Facebook would share my profile data with a third party because some Facebook contact of mine "allowed" it is utterly horrific. It is also a clear and massive breach of EU data privacy laws. (Which unfortunately seem difficult to enforce against international companies at the moment; the GDPR can't come soon enough!)
In any case, if this should have been known by everyone already, I guess Facebook has no reason to panic if it's all over the news now. Just a bit of publicity for them, right?
I 100% agree, it's the friend permission thing that is criminal. Facebook no doubt covered their asses legally via the ToS, but I hope that the misleading nature of the (complex and ever-changing) privacy settings is mentioned in a lawsuit one day.
It's not a surprise to most readers of HN, but most people are not readers of HN, and take things at face value, where face value is what they see in advertisements and hear from their friends.
People still have an expectation of privacy, even when, from an HN perspective they should be extremely skeptical about having such an expectation.
"Parakilas, 38, who now works as a product manager for Uber, ..."
So from one reckless company that doesn't give a damn about the law to the next. Who teaches developers that it's okay to work for anyone as long as the tech is cool and the salary is great?
> Who teaches developers that it's okay to work for anyone as long as the tech is cool and the salary is great?
Who teaches them otherwise?
Absent parental/primary-school-instilled ethics, rather a lot of engineers operate in a bubble of like-minded (and similarly-employed) people, making large amounts of money, and are often insulated (voluntarily, deliberately, or accidentally) from the impact of their work.
What could be changed to improve on that situation? I've heard simplistic suggestions to "sue the C-class until they learn/abandon the incredibly lucrative profit motive", "fire/imprison engineers whose changes harm people", and "make the bridge-builder stand under bridge they built" (whatever that means in a software context). Those seem utopian. What tangible, plausible changes can be made to improve on developer accountability (for their work) and discernment (about prospective employers)?
Make your new hires watch the multiple camera feeds and lidar of that woman being run over again and again until they really really understand that they're working on life-critical systems.
That might help if you're making something that, if broken/misused, can directly physically harm people.
What about if you're making a social media app, and the ethics are less clear-cut? It's not like you can show every new hire footage of Trump and drive home the negative impact of data mining/sharing--the causal link is tenuous, the viewer might sympathize politically, or they just might not care about politics.
Ethics in the abstract is very hard to teach; object lessons are easy.
Even nerds understand that one painful social experience can have lasting negative effects.
It’s blinders. Plain and simple. I’ve worked with too many developers who will pander for money. A few even tried to shame me for not being on board (my life skills tell me calling someone a whore in a team meeting is a bad career move, but it doesn’t stop me from staring at them and thinking it). When enough money is on the line, principles get set aside. We like to think our cohort is above this sort of thing, but the evidence clearly doesn’t support it.
I think a lot of software engineers (past me included) are genuinely persuaded that tech really is going to change the world for the better and that it’s the way to make good social change, because “politics is too complicated”.
Then the corporate koolaid comes and tells you you’re doing the most important thing in the world, and you just eat it up.
Well, the reason is that people in general aren't often ethical, when they seek to benefit personally. It's not taught; it's the default setting.
I wish a little philosophy and ethics were part of the curriculum. This would not be to inculcate normative values, but to help eng students clarify what they believe, and what the implications are.
That said, most engineers I've met who work on sketchy stuff are either naive, apathetic, or suffer from massive cognitive dissonance.
The latter will too often regurgitate the self-justifying language of the business people in their companies.
Ever listen to ad tech people spew absurdities about people wanting to be engaged with "their" brands? How about the justifications for massive data collection and analysis - targeted ads are so much better for people. Pfft.
Then there are, say, NSA engineers who convince themselves that what they do is necessary, if illegal. That said, I saw a lot of NSA LinkedIn profiles that swapped out NSA for DoD a few years back.
Company leaders tend to hand employees ideas and the slogans to repeat to themselves and others. The internal spin is huge and insidious.
My undergrad Computer Engineering curriculum as far back as the mid-’90s offered a dedicated “social and ethical issues in computing” course, which covered not only ethics but the societal issues around hacking, copyright, automation, robots, etc. Do these courses no longer exist? I think tech professionals ought to agree to Do No Harm and be held accountable when they do harm. Problem is the vague and debatable definition of “harm”.
Every ABET accredited degree (CS and/or CE) has a minimum requirement for ethics courses. The software industry just doesn't have a minimum requirement for accredited degrees (or any degrees at all, for that matter).
I actually know Sandy and he’s a conscientious guy who cares about this stuff. He’s a good dude who wants to make positive change by being in the conversation.
There were many people working on the Manhattan project who then later became nonproliferation advocates. I personally would rather that people feel like they can work for companies that have made mistakes and voice their opinions about where things should go. It would be pretty hard to find out what is happening at companies if there weren't former employees talking about it.
> So from one reckless company that doesn't give a damn about the law to the next.
Uber doesn't appear to have historically given a damn about the law, but AFAIK it has historically given a damn about its users. Facebook, OTOH, doesn't appear to be giving a damn about its users.
As for the law: there are plenty of unjust laws out there; I respect someone who fights unjust laws such as the taxi monopolies. I don't respect someone who fights just laws.
Why should Facebook give a damn about their users? Users are the raw material. It’s like asking a car manufacturer to give a damn about the feelings of sheet metal or how emotionally satisfied door handles are.
You get to work with cool stuff and get paid money! Who cares about ethics, it's not like you suffer consequences on failure. (I'm not entirely sure the companies do either...)
Yes, I think we are ready to grow out of the "arrogant jerk" phase of tech.
The tech wizards who build things and run these companies:
1. Are not smarter than you
2. Do not have your best interests in mind
3. Will lie to you repeatedly
4. Will do everything to avoid negative attention or consequences
Stop worshipping anyone. Not Jobs, not Zuckerberg, not Gates, not Musk, not anyone. They aren't on your team. I don't care if they look like you or represent something you are really passionate about, you still need to be skeptical.
Bravo! While I hope that some (e.g., Musk) have our best interests in mind, it’s of utmost importance to remember that any individual (and particularly any organization) has their bottom line as their top priority.
This exact same scandal broke in 2012, only it was with Obama’s campaign. It is humorous to see the media so much more outraged when Republicans do it, as if it wasn’t that big of a deal when Obama did it.
The media likes to grab people's eyeballs just as much as FB does. They will overreact to get views.
If those little radio buttons in privacy settings do literally nothing on the backend, then FB could have a massive legal/financial battle if they knowingly ignored user preferences and sold off unaggregated data for profit.
Well, for the past few years, the media has been a "partner" of Facebook. Now it's not any longer. Seems like a wrong move for Facebook to dump the media like old clothes. Whoops.
I even remember articles from The Guardian decrying that people are "going dark" - no, it wasn't about using Tor or VPNs. It was simply about using tracking protection.
I just posted a comment, but I will reply here too. The bullshit excuse that used to circulate, even on HN, was that Facebook would never sell the data directly because it was too valuable; instead they would just sell targeted advertisement.
None of this would have come to light if the "right" person had won the elections. The moment Trump won, the campaign against social media began, first with the "Fake news" meme, then "Russian hacking", YouTube's adpocalypse and now "Facebook data breach". Some people are getting very scared that the traditional manufacturers of consent are losing their grip on people's minds.
And I am not in support of the mass surveillance exercised by those companies, just noticing the timing. When Obama won, his data scientists were hailed as geniuses. What do you think they were doing?
I don't think anybody is arguing against using data science in elections. What people are talking about is data theft: stealing private info (like emails, photos, etc.) and stuff like that.
It's a bit like comparing withdrawing money from your own bank account with robbing a bank.
> Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing.
> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.
An Obama Campaign data scientist from the 2012 campaign explaining how they did the exact same thing but Facebook were ok with it "because they were on our side".
WikiLeaks also covered this earlier in their Spy Files warning. People don't care that a political opinion survey being spammed around by their friends is actually harvesting their details into a campaign dossier used to manipulate them directly later.
> Like what did you think was happening at a company collecting data about billions of people?
"A more productive answer to someone saying something you agree with is “I agree”, not mistakenly berating them for not agreeing sooner." (https://news.ycombinator.com/item?id=16627766)
It's not like huge numbers of people didn't know about global warming before society started caring about fixing it.
I can't disagree enough with a "blame the user" attitude.
First, it absolves the perpetrators, who are definitely in the wrong. I include both FB and CA in this category.
Second, it is becoming clear that there can be no such thing as "informed consent" in a networked world with respect to data privacy. Zeynep Tufekci, whose writings I heartily commend, had a good article on it a few weeks back.[1] She argues both that the actual uses known of that data are not fully described in consent waivers, and also that it is not possible to know ahead of time how that data will be combined, recombined, projected, analysed, and used in the future to fully consent to all those things. Even if you could do so for yourself as an individual it's not possible to consent to the effects of the combination of an entire society's data as a whole, on others.
Again, it's not possible to obtain informed consent in today's privacy environment, so let's stop blaming the victims.
> Funny thing is, this would all blow over after a few months, and everyone will go back to the usual habits.
It can't blow over in the UK or the EU, because it is seriously f-ing illegal in those places.
Yes, we "knew" it was happening before this (hence all the regulatory steps taken that were dismissed as anti-American protectionism), but we were lacking hard evidence, so all we could do was reinforce regulations and regulatory authorities.
Now that shit has leaked, it's simply not an option for those authorities to not act. Not to mention the fact that they really, really want to act.
So no, this will not blow over. Maybe in the US and/or the media, but not where it matters.
> What's utterly horrifying about this whole thing is how the media is acting as if this is some sort of surprise
I agree. A couple of generations ago the media was much more combative and willing to take on the powers that be (and each other). By this time most media outlets would have been saying "told you so (many times)". Unfortunately, now the media mostly follows trends and competes on the beauty of its talking heads, which is a lot safer than, say, investigating slavery or organized crime.
- This was part of an open API, you just needed to sign up for free. There is no data breach.
- EVERYONE was using it - this consisted mostly of games like Farmville. This is how they can show your friends progress and their profile pic.
- It was shut down more than a year ago.
Actually there is nothing newsworthy on Facebook's side at all; the new thing is that companies built games just to harvest this data and use it for something else.
It's like the diesel scandal, IMO. Everybody in the industry knows what's going on, but that doesn't mean the industry's Overton window isn't liable to be reset with severe repercussions once the general public gets looped in.
I know this is HN (where it is probably less of a surprise) but for a wider audience taking this kind of opportunity to write a smug comment might just be harmful to them, their privacy and their rights.
Would you rather the media report on unsubstantiated rumors and assumptions, or would you rather they wait to report things until they have evidence, or a source who can verify the information?
Facebook's current problem is that they've now created a lot of enemies and they're running short on friends. This isn't quite as bad as it sounds, because creating enemies when you reach this scale is inevitable, so the mere fact that Facebook isn't everybody bestest bud ever is not intrinsically a problem. However, one of the fundamental ways to defend yourself when you reach this scale is to act with a certain amount of ethics and decorum. Ethics are not just about being good to other people, because all sensible formulations of ethics have reciprocity in them, so if you are good to other people, even your enemies will, if begrudgingly, cut you some slack.
Facebook has reached this size but has not prepared itself for it, and what's happening is that they took a bit of a stumble, and absolutely nobody is rushing in to defend them because they've burned all bridges. The media hates them for taking over the media industry's ad revenue and making outlets dependent on their platform. Conservatives have a pretty solid case that they are being censored by the platform systematically; even if it isn't true, they feel it is true, so no friends there. It's pretty obvious that Facebook can expect no help from the Republicans in general. The Democrats may not hate Facebook, but there's no positive reason to burn very much political capital on helping them. (After all, they didn't deliver this time, did they?) And increasingly, the chickens are coming home to roost with their customer base, as fears about surveillance, power, and abuse of power go from vague anxieties to metastasized, realized issues that appear to affect Facebook down to its very core.
It's not just the media narrative, though that's true enough... everybody is now at best neutral towards Facebook, and they're accruing enemies fast, not least of which is an ever-increasing portion of their own customer base(s).
How will they get out of this one? It's possible this will just die down this time. But these forces aren't going anywhere, and if it isn't already too late for Facebook to change course on this, the clock is definitely reaching midnight fast.
You have a point there. Generally speaking, I agree that Facebook itself is on its way down. Do they have enough money to just hang in there and buy the next thing the kids like?
I don’t think you have to be a conspiracy theorist to find this weird.
Facebook has been aggressively monetizing user data for years. They are just one player in an entire industry built around this business model.
The existence of this industry is most obvious to the technically literate, whom you can generally identify by their use of strong ad blockers and password managers. But it’s been reported on before [0]. Online privacy is not a new concern... just look at the “Facebook is listening to me” meme.
So what I want to know is: why now? Why Cambridge Analytica? Why Facebook?
Here’s my best take so far.
1) Facebook’s user base is so big that it’s a relevant political constituency, and thus democratic governments have a reason to care.
2) Facebook creates an expectation of privacy and an illusion of control that doesn’t exist with public-first platforms like Twitter.
3) Cambridge Analytica is a scummy company in many respects, not just their work with Facebook data. They got lots of Facebook data from a third party who probably wasn’t authorized to sell it to them. This makes them a good candidate for regulators to make an example of.
4) Cambridge Analytica is closely tied to the Trump and Brexit campaigns, both of which are regarded as “dangerous perversions of democracy using lies to exploit vulnerable people” by exactly the kinds of political and media organizations that are driving this story.
Overall, I think this is a “Pigs get fat, Hogs get slaughtered” situation. The industry’s toxic practices are finally causing enough damage that institutions responsible for protecting the public (government, real media) are responding.
No, but in the grand scheme of things, the Yemeni civil war, ongoing since 2015, has gotten less attention in our media than a single Donald Trump tweet.
The sheer number of articles on a topic is not a useful measure of their impact. A dozen news-ticker blurbs will be remembered less than a single front-page article.
I would be optimistic if most Americans actually read the BBC (or read at all, for that matter). I would venture to say most, however, do not. For example, a popular right-wing propaganda news source that most Americans watch has had only a handful of articles this year, and skimpy coverage on air.
Do you find it hard to believe that an organization based in the UK, and citizens of that union, are more interested in what's going on in Yemen than an organization based in the US, and of course the citizens of that union?
I know what's happening in Yemen; I've read the facts, and now I don't care anymore. I don't want to see it in the news every day, because it wasn't relevant to me when I read about it and it has practically zero chance of ever being relevant to me.
What Trump said about xxx person at yyy place is varying degrees of relevant to my life, all of those degrees more so than Yemen.
If Fox had a large Middle Eastern demographic that its advertisers cared about, you would see Yemen Nightly at 7:30 without question.
> Do you find it hard to believe that an organization based in the UK, and citizens of that union, are more interested in what's going on in Yemen than an organization based in the US, and of course the citizens of that union?
Yes, because on the other side of that conflict are the Saudis, who are one of our biggest "allies", and of course the other side of the coin is the general hypocrisy of caring what goes on in other parts of the world and not this one, because it doesn't fit a specific narrative.
> What Trump said about xxx person at yyy place is varying degrees of relevant to my life, all of those degrees more so than Yemen.
I don't know how to not say this in a disrespectful way, but I really feel sad for you on a personal level if that's truly what you think. I have a feeling that you are just attempting to be a contrarian in this instance.
Not at all. It's just the media's desperate hunger for any story that paints Trump in a bad light. The recent video evidence on Cambridge Analytica shows that they were at least shady in their operations. But their approach to targeting voters and scraping Facebook user data would have been described as brilliant data wizardry if it had been done for the other side.
You can perform brilliant data wizardry without deceiving people. Micro-targeting has been around since the 60s. There is a huge gulf between finding people receptive to an ad whose content you publicly endorse and creating astroturf sites and fake news content to manipulate people's world view.
I keep seeing comments equating cheating with cleverness. If I win a chess game by making illegal moves, that is not a sign of my brilliance. If you can't distinguish brilliant play from cheating, perhaps you don't understand the game.
When we do it it's awesome, when they do it it's a data breach, it's a privacy violation, it's a breach of trust, and it requires government regulation.
It was also described as "groundbreaking", a "game changer", and "an application that will change the way campaigns are conducted in the future" [1].
When Obama's campaign did it, it was heralded as the future of democracy. Even the social media director for Obama's 2012 campaign acknowledges that they did the exact same thing that CA is being blasted for now [2]. I'm not sure why you're getting downvotes other than people just wanting to suppress the truth.
... the campaign literally knew every single wavering voter in the country that it needed to persuade to vote for Obama, by name, address, race, sex and income.
...the digital-analytics team, led by Rayid Ghani, a 35-year-old research scientist from Accenture Labs, developed an idea: Why not try sifting through self-described supporters’ Facebook pages in search of friends who might be on the campaign’s list of the most persuadable voters? Then the campaign could ask the self-identified supporters to bring their undecided friends along.
...They started with a list that grew to a million people who had signed into the campaign Web site through Facebook. When people opted to do so, they were met with a prompt asking to grant the campaign permission to scan their Facebook friends lists, their photos and other personal information.
So, they used Facebook data, including "Friends" lists and personal information that those "Friends" had never directly consented to providing to the campaign.
[1] How did Facebook react to the much larger data harvesting of the Obama campaign? The New York Times reported it out, in a feature hailing Obama’s digital masterminds:
The campaign’s exhaustive use of Facebook triggered the site’s internal safeguards. “It was more like we blew through an alarm that their engineers hadn’t planned for or knew about,” said [Will] St. Clair, who had been working at a small firm in Chicago and joined the campaign at the suggestion of a friend. “They’d sigh and say, ‘You can do this as long as you stop doing it on Nov. 7.’ "
In other words, Silicon Valley is just making up the rules as they go along. Some large-scale data harvesting and social manipulation is okay until the election. Some of it becomes not okay in retrospect. They sigh and say okay so long as Obama wins. When Clinton loses, they effectively call a code red.
Really brutal how unashamed they have become about their biases. No wonder people are getting so angry against the media and big tech while those still play the facade of a fair game.
Did you actually even read your own link? I don't think it says what you think it says. That story you linked is about how democrats in 2006 wanted to do more data collection but couldn't agree on whether it should be the DNC or a private firm that did the data collection. The one thing they could agree on was that data collection was something they should be doing.
The post I was commenting on claimed similar actions were, or would have been lauded if carried out by Democrats. I simply posted an article critical of similar moves.
We knew Facebook was going to be the next Friendster + AOL + MySpace, but I never suspected it would disintegrate to these levels. Expect employees to start jumping ship left and right.