Hacker News
Facebook pauses app reviews, disables new user authorizations (facebook.com)
234 points by humanfromearth on March 28, 2018 | 149 comments



The so-called "data breach" was always in reality a by-product of an open platform that hundreds of thousands of developers could easily build apps on top of. You may err on the side of "more reviews" or a "less powerful API", but in the end, those ideals are in tension. The more open the platform, the more open it is to this kind of "breach".

People who believe in the idea of this kind of platform having an API should have spoken up in Facebook's defense long ago. This is exactly what I was afraid would happen, and I expect worse to come from this "platform review". Given the kind of media coverage here, Facebook seems to have more to lose than to gain from letting random Hacker News kids build on their platform. And if so, they won't in the future.


Apple got this right from the beginning despite years of criticism about the "walled garden". They took arrows for years: all the "open always wins" from the FOSS types, all the press coverage of some app developer crying about App Store rejections or onerous rules.

They didn't get it wrong, because they know who butters their bread: customers. Developers are rightly prioritized last.

Fun to give this Paul Graham essay a read again [1].

[1] http://www.paulgraham.com/apple.html


> Apple got this right from the beginning despite years of criticism about the "walled garden".

Being a walled garden is independent of privacy. The Calendar app on macOS allows me to share my calendars in an open format (iCalendar) and interoperates with other calendar apps through that specification. It respects my privacy by not sharing anything I don't ask it to, without being "walled".

Signal is open and secure; iMessage could be a non-proprietary format and remain just as private.

> all the press coverage of some app developer crying about App Store rejections or onerous rules.

As far as I'm aware, a lot of these weren't for privacy matters [1] and are sometimes a little much [2] (this one is especially absurd: [3]).

[1]: https://techcrunch.com/2017/12/08/apples-widened-ban-on-temp...

[2]: https://www.theverge.com/2018/2/8/16992830/apple-emoji-crack...

[3]: https://medium.com/@alariccole/apple-literally-stole-my-thun...

PS:

> all the "open always wins" from the FOSS types

I don't think that means what you think it means. FOSS and privacy are tangential matters.


> FOSS and privacy are tangential matters

I totally agree (because it's true of course). But do recognize that it's in Apple's interest to conflate the two. To make it even more interesting, add "secure" to the list of matters.


> FOSS and privacy are tangential matters

Actually, the only way privacy can be guaranteed is with code you control running on a machine you control. Free Software (and not Open Source) has everything to do with privacy, as it is about control.


You're comparing Apples to oranges (I'm sorry).

Apple doesn't have a social network. They also don't rely on advertising and 3rd party data brokering.

It's easy for Apple to be the 'good guys' here when they have physical products as their profit generators.

FOSS would have worked better in the Facebook case too, as people and developers would know/discover a) where their data is and b) what risks it faces.


Apple operated iAd before shutting it down. One of the former iAd execs said this to the WSJ [1]:

>“I don’t believe they are interested in this capability because they have a strict policy around what they do with user data,” Crawford said. “IAd has great assets and great capabilities, but they are going to follow Apple’s policy to the letter of the law.”

So they crippled a potential new revenue stream because of their privacy policy.

[1] https://blogs.wsj.com/digits/2015/07/13/drawbridge-hires-app...


You don't consider it at all possible that "because privacy" is the PR spin on it?

When iAd launched, it required mid-six-digit buys for ads. Users complained that they would only see the same half-dozen ads. Over the course of the next two years, that minimum was steadily reduced: low six digits, then fifty thousand, and the complaints stayed the same.

Before iAd shut its doors, you could get ad buys for a minimum of _fifty dollars_.

I tend to be a bit skeptical of the idea that, rather than the platform being a failure, Apple realized after a few years of cutting buy prices that "hey, privacy is important".

That sounds a lot more like a PR soundbite.


Exactly, they failed to monetize their installed base through ads, so decided to call it a "feature".


Those ads were priced that high for a reason, to keep garbage off the iPhone. iAd only really made sense as a way to enable developers to make money from their apps in a high quality way. All the stipulations don't make sense for a pure ad-tech business.

And I don't think a former iAd exec has any reason to be doing PR for Apple when the purpose of that WSJ story and his cooperation with it was to pimp his new job/startup.


> Those ads were priced that high for a reason, to keep garbage off the iPhone.

Perhaps so. And only a few companies bought ads, which pissed off their customers (the ones that Apple is "fighting to protect") because of the repetition.

So Apple lowered the barrier. A failure, realized.

Not sure how this disputes what I said, or somehow proves that iAd was a planned exit rather than a failure.

> And I don't think a former iAd exec has any reason to be doing PR for Apple when the purpose of that WSJ story and his cooperation with it was to pimp his new job/startup.

If you think that Apple (or any company of that size) doesn't have non-disparagement agreements with every corporate officer down to at least the SVP level, at the _very minimum_, you're mistaken.

Even those "we screwed up" articles you see are very carefully stage managed. They're (almost exclusively, save some very isolated high profile situations) scripted as a PR effort to manage customer satisfaction.


Exactly, Apple just doesn't need ads because of their physical products and brand strength. If you rely on ads, what actual value are you (or could you be) providing that you're not charging for? And if there is value, why choose ads and data harvesting instead of charging for it? If Facebook started charging for their services, would people happily start paying, or would they suddenly realise there are equal/better alternatives once cost is a factor?

It will be interesting to see what happens to Apple if the i-products ever do decline significantly. (I believe their brand is strong enough to sell their other services effectively, so long as they get the price right.)


> Apple doesn't have a social network.

They tried, and failed.

> They also don't rely on advertising

They tried that, too.


The key issue (at least for me) is whether they've tried to benefit from marketing user-specific data. I can't recall any such effort, but will gladly defer to someone more knowledgeable. I'm not sure specifically why their multiple ad and social network efforts have failed, but it might be because they haven't been willing to do just that.

On a general note, I find it frustrating that a lot of the discussions around this area end up with people talking past each other. Some people are concerned about advertising, others about privacy, others about using user data, others about open source software/hardware, others about encryption. They've got some overlap when it comes to specific features, but often get conflated, making the discussion even more difficult. One thing I've appreciated about reading your comments is that you often are able to cut through that quite decisively. Thanks!


> They tried, and failed.

What are you thinking of -- iTunes Ping? That was never really intended as a general-purpose social network.


Is it wrong that I feel glad they failed?


“Apple doesn't have a social network.”

What about iMessage?


That's a social network in the same way the Pony Express was.


I swear there is some kind of downvote bot on HN - reasonable posts are often downvoted grey within the first few minutes (like this one's parent), and then eventually climb back up to a reasonable place once, it feels like, the humans have had time to see them - has this been noticed or discussed before? I'd be curious to know if some accounts are serial downvoters, especially right after comments go up - does HN look for that kind of thing?

Of course part of the answer is that new comments have no votes so a quick downvote will make them grey, but there are frequent strange cases.

One thing for sure is that a lot of people downvote based on disagreement rather than a comment's quality - which in my opinion is not right (and I think against the intent of a downvote), but that's a different issue.


Dang said recent-ish* that upvoting inappropriate comments can cost you your voting privileges. Additionally, there is a karma threshold for downvotes. IIRC, you need 500 karma.

Between those two facts, I think downvote bots aren't a terribly likely explanation. You would need to establish an account with downvote privileges and then hope the mods don't revoke its downvote privilege in short order. It seems like a lot of work for probably not much pay off.

That doesn't mean there can't be downvote bots, but it just seems to me a much more likely explanation is people cruising the Comments page.

* https://news.ycombinator.com/item?id=16117475


I'm fairly sure there are multiple downvote bots operating on HN. Frequently, certain comments on topics considered by some as "political" will get an immediate 2 or 3 downvotes in the first 30 seconds, and then over the course of the next 2 or 3 hours be upvoted back up to black.

Although it's possible that there's some confounding happening here, such as that the handful of people who hit refresh on HN hundreds of times a day are also the ones who can't stand certain political viewpoints. But it seems unlikely. Far more likely is that there are a handful of downvote bots in operation on a keyword basis.

It's not a huge deal to me, but it does seem kind of obvious.


I get downvoted on agreeable comments in 4-day-old conversations... there are definitely bots or something going on.


A downvote on a 4-day-old comment is unlikely to be a bot, since it's likely that no one will even see it. If you're seeing agreeable comments being downvoted after several days, it's probably just a grouchy misanthrope or two catching up on HN on their day off.


A downvote on a 4-day-old comment...

Is not possible since they turn downvotes off after 24 hours.

/pedant

As someone who incessantly refreshes HN and spends a fair amount of time on the Comments page, I see no reason why it wouldn't be people seeing new comments to older discussions via the Comments section rather than bots. Although I am a demographic outlier in multiple ways, I cannot possibly be the only person routinely cruising the new comments section.


Good point about the new comments feed. I think your parent either misspoke or misunderstood their parent's "[downvoted] comments on 4 day old conversations", which would mean a new (downvotable) comment on a thread that started 4 days earlier.


Correct.....old conversation, new comment.

The giveaway, imho, is a downvote on a completely inoffensive comment, combined with certain other increasingly common patterns.


it's probably just a grouchy misanthrope or two catching up on HN on their day off.

I do this :)


just a grouchy misanthrope or two catching up on HN on their day off.

Who's calling me?


Similar behavior appears to happen with quality of comments. Initial comments on a story are often insubstantial, knee-jerk low-quality comments. With time, things generally get better.

From what I gather (but I don't have any comments at hand), the mods do pay attention to voting behavior, both to detect up-voting rings and to penalize those abusing downvotes. Their usual recommendation, when coming across a comment you think is unfairly downvoted, is to silently apply a corrective upvote.


> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

FYI, I wanted to start off with that to help indicate the likely reason for any downvotes you got on this post.

> One thing for sure is that a lot of people downvote based on disagreement rather than a comment's quality - which in my opinion is not right

Everyone is entitled to their opinion, but that doesn't make it any more valid than others' and doesn't make it a fact either.

> (and I think against the intent of a downvote)

@Dang has said on multiple occasions that the intent wasn’t as you are assuming. Apologies I don’t have a link handy at the moment.


> FYI, want to start off with that to help indicate the likely reason for any downvotes you got on this post.

Thanks, I did know the price, no problem.

> @Dang has said on multiple occasions that the intent wasn’t as you are assuming. Apologies I don’t have a link handy at the moment.

I would be curious to read dang's opinion on this if anyone has it, it's not in the guidelines.

> Everyone is entitled to their opinion, but it doesn’t make it any more valid than other’s and doesn’t make it a fact either.

Yes, that is the definition of an opinion.


> "I would be curious to read dang's opinion on this if anyone has it, it's not in the guidelines."

https://hn.algolia.com/?query=author:dang%20downvote&sort=by...

A couple of the more recent comments:

https://news.ycombinator.com/item?id=16569778#16574021

https://news.ycombinator.com/item?id=16336937


An API into a closed service is no more FOSSy than a graphical user interface into a closed desktop app. I don't really understand the comparison, what do you think motivates it?

Facebook has open-sourced a few internal projects, but none of them had much to do with our personal data.

In fact, it's difficult to blame the API when the problem was that the data was collected in the first place. Surely Zuckerberg has some political opinions of his own; if CA hadn't triggered this media storm, what would have stopped him from supporting his own favorite candidate internally? In fact, what's stopping him from doing that right now? Would it even be illegal?


> all the "open always wins" from the FOSS types

Can you produce any quote from any Free Software or Open Source developer or advocate that supports your statement in this context of Facebook being better than Apple because they're "open"? Because even though both companies are terrible at freedom, they're terrible in very different ways, turning any comparison into a false equivalence.


The "open always wins" wasn't talking about access to other people's private data. It was talking about letting apps run and giving them the capability to act on your data. Big difference.

Nobody is complaining about being able to install a Facebook app that harvests your data. The issue is it could also harvest your friends' data by default.

Although honestly I think this is all manufactured outrage. The fact that this could happen was totally public in 2012 and the public weren't outraged about it then.


I hadn't seen that before - cheers, it was an interesting read. I really want to believe it, but my mind still can't dismiss the possibility that they have undisclosed back doors; it's just a matter of what it would take for them to be disclosed. As more time goes by, my estimate of that possibility drops, though, as more people who are potential outsiders or who don't agree with the practice are exposed to the code.


This is completely misleading.

1 - I've yet to learn about a major open source project stealing data from its users.

2 - Open source guarantees users the right and the ability to check how their data can be used. Apple's products do not. Actually, we discovered they were part of the PRISM program, which basically means they already gave ALL your data away while actively lying about it (https://www.theguardian.com/world/2013/jun/06/us-tech-giants...).

3 - FOSS people like interoperability and choice, which is one of the major things they blame Apple for not caring about. This has ZERO relationship to privacy.

So on one hand you have privacy-savvy communities, giving their free time and work so that everybody can use it to build a better world and understand transparently what's going on.

On the other hand you have a multi-billion-dollar black box giving away user data, using massive PR to pretend they are privacy-oriented while they make money out of locked-in devices.

Now I get people enjoy the Apple product experience.

I get they make a lot of things right for this experience, especially for user friendliness, and integration.

And I certainly get they made the industry progress from a technical point of view.

But do not compare their moral standing to that of the FOSS folks. That is literally insulting to them.


It’s the same problem Microsoft had. If your success hinges on unfettered access then you can’t hand wring and deflect criticism when bad things come of it.

In other words: if your power and prestige come from getting others to do the work for you, then whose fault is it when they misbehave? It's your fault.


Ben Thompson talks about this towards the end

https://stratechery.com/2018/the-facebook-brand/


There's no helping it if, as a user, you give the keys to the house to the robbers. More specifically, everything posted publicly is public. It doesn't matter how much the data is obfuscated by a lack of API; it can always be mined. Any changes made in response right now will be shortsighted. If an actor wants that data, he will get it, especially if it can sway elections.


This is buying into the idea that security and openness are at odds. Bitcoin is extremely secure, and also open to all. The way forward is people owning all their data encrypted, and revealing zero knowledge proofs about that data to services that want access.


> Bitcoin is extremely secure, and also open to all.

And it has zero privacy: everybody can see everybody's transactions.


That's because privacy was not a goal of that particular system. Pseudo-anonymity was the goal. (That's unrelated to my point about security and openness though.)


It seems pretty related in this case, since the kind of security that is being discussed here is around access to private data (and I'd note that it was you that used the word security here, not the OP who made the more correct contrast between openness and powerfulness).

One could equally make the point that Facebook is secure because no one has hacked their servers, and that it is open because it is free to join. That's a pointless thing to say though.

But in any case: yes, there is an inherent tension between privacy and openness.


I should have used Monero as an example instead of Bitcoin, but I thought fewer people would have heard of it. Privacy can also be achieved with these primitives.


That's still not the point though.

How do I openly share social information with some people without sharing it with everyone? These principles are in conflict.

I did some work way back when FB got popular on the idea of using probabilistic data structures (i.e., Bloom filters) to store contact lists, which could then be shared so that (in theory) only people who knew the same people would also know that they knew them. I built a FB app proving this could work technically.
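For anyone curious, here's a minimal sketch of the idea (the sizing, hashing and names are made up; a real implementation would need properly tuned parameters):

    import hashlib

    class BloomFilter:
        """Tiny Bloom filter: m bits, k hash positions derived from SHA-256."""
        def __init__(self, m=1024, k=4):
            self.m, self.k, self.bits = m, k, 0

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(h, 16) % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            return all(self.bits & (1 << pos) for pos in self._positions(item))

    # Each user publishes a filter over their contacts instead of the raw list.
    alice = BloomFilter()
    for contact in ["bob@example.com", "carol@example.com"]:
        alice.add(contact)

    # Holding Alice's filter, you can test an address you already know,
    # but you can't directly enumerate her contact list.
    print(alice.might_contain("bob@example.com"))      # True
    print(alice.might_contain("mallory@example.com"))  # False (probably)

    # The catch, as noted below: an attacker can still run a dictionary of
    # candidate contacts against the filter, so the privacy is only a veneer.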

But there are clear security issues with it - it gives a veneer of apparent privacy but doesn't stand up to attacks.

This is what everyone is talking about and the point you seem to be missing: you can't have this both ways. There is a conflict between privacy and openness.

Since you seem fixated on the crypto-payment thing: Monero mostly solves anonymous payments, but then what? Say I want to buy a pair of shoes with it - how do I stop someone from knowing where to deliver them? Any attempt at solving this runs into the same problem: you have to tell someone the very thing you are trying to keep secret, and if that person is the attacker then the system falls to pieces.


We've accepted trusted computing as a society, many phones have a secure element now, and I think this can migrate down to that level.

That's the real trade-off: we reclaim ownership of our data with knowledge of where it goes and who's using it for what, but that means a full embrace of DRM (something our community has typically been against.)


I'll take that as implicit acknowledgement that the OP's point was correct (DRM of course being the complete opposite of "open").

I agree that could work though.


> I should have used Monero as an example instead of Bitcoin, but I thought less people would have heard of it. Privacy can also be achieved with these primitives.

Well actually that’s very much still in dispute:

https://www.wired.com/story/monero-privacy/


>open to all

That's in direct opposition to security. When you grant access to your system to people you don't even know, much less trust, all bets are off. There's just no way to predict what is going to happen. You've lost the fundamental protection afforded by trust.

Today, computers are expected to be able to talk to and serve hundreds of thousands of random users. What if one of them has access to a 0day? They could own the machine. The world would be a lot more secure if servers dropped all incoming packets by default and talked to trusted users only.

Bitcoin is open to everyone and that's great, but it doesn't change the fact people managed to sneak a bunch of illegal pictures into its blockchain.


The Facebook API is, now at least, a capabilities-based system where you ask the user for permission to access various bits of information. The major problem these days is that people don't read through the authorization dialogs because they want to see which Star Wars character they are.
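For reference, a rough sketch of what that capability request looks like from the developer side - a standard OAuth authorization URL where the requested permissions go in the scope parameter (the app ID, redirect URI and scopes here are placeholders, and the exact dialog URL has varied across API versions):

    from urllib.parse import urlencode

    # Placeholder credentials for a hypothetical app.
    APP_ID = "1234567890"
    REDIRECT_URI = "https://example.com/fb-callback"

    # Each scope is a capability the user is asked to grant in the dialog.
    params = {
        "client_id": APP_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": "public_profile,email",  # request only what the app needs
    }

    login_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
    print(login_url)
    # The user sees a dialog listing exactly these permissions; whether they
    # read it before tapping "Continue" is another matter.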


To my knowledge, you're not able to reveal just your first name, or just a proof that you're a FB verified user. And once you grant access, it's forever. We may be at a global maximum for user permissions, but I doubt it.

The problem is we can't try anything new - since it's all stored in FB's servers, we have to wait for them to build new granular capabilities.


The truth is that Facebook engineering & UX has been either incompetent (incredibly unlikely given the caliber of people they have) or willfully / inadvertently avoiding clearly communicating to users what permissions apps need.

The thought that they can't "do better" than dialogs / strings identifying each permission is laughable.

It's not rocket science.


You're using "open" in different contexts, one applying to a platform (e.g. blockchain or facebook app api) and another in regards to access control.

As platforms, blockchain is more open than facebook and for access control it's less open. GP was referring to security being at odds with openness in the sense of access control.


> The way forward is people owning all their data encrypted, and revealing zero knowledge proofs about that data to services that want access.

Who has a financial incentive to build and maintain that system?


Cryptocurrency projects.


The Facebook API has been useless since 2014, when most access to friend data was cut off. Since then, if your objective was data collection, that could be easily achieved by scraping publicly available information (many friends lists are public, there are many public posts, etc. - certainly enough to use in aggregate to formulate campaign strategies). I suspect that will be the next “scandal,” since in 2018 people can’t possibly take personal responsibility for the things they post and allow to be public.

Ironically, the “scandal” that caused this whole thing is a non-issue. Pre-2014 Facebook apps could collect a lot of information about you and your friends, along with their Facebook user IDs, and that was scary because there was a time when you could simply submit a list of user IDs that you wanted to show a specific ad to. But since Facebook advertising cannot be targeted by user ID anymore, and this policy was in place well before the 2016 election, all of that data was essentially useless to any participant in the 2016 election other than for aggregate things like general campaign strategies. I am intimately familiar with the advertise-by-ID issue - I was awarded a $2k Facebook bug bounty for spotting an exploit in the Custom Audiences feature that allowed an equivalent version of targeting by ID after they disallowed it.

So while it’s possible that Obama used his special access to the entire US social graph to successfully influence his elections, it is impossible for Trump or Hillary to have done it even if they had the data because of the changes in the FB ad platform in between 2012 and 2016. This entire “scandal” was created and promoted by people that don’t understand, or actively ignored, this concept. If you ask everyone that has read the recent headlines, including reporters that wrote the stories, I’ll bet 99%+ will tell you that they believe they could be specifically targeted with ads.

It would be interesting to see if the executives at any of the media companies that have managed to sell this scandal to the public took unusually large short positions in Facebook stock before releasing the story. Since the story is effectively fraudulent (it was not possible for the election to have been influenced in the way that the stories imply), I assume that would be securities fraud.


>Facebook advertising cannot be targeted by user ID anymore

This doesn't matter much if you can do essentially the same thing by targeting with extremely specific location and demographic data.


You can’t “essentially do the same thing” in the way you mention though. You are bringing up a second issue, that is also not allowed. You cannot, for example, use specific GPS coordinates to target a house or even a group of houses. Anything less specific than that and the effect gets watered down significantly, and the platform simply doesn’t allow anything close to that kind of “extremely specific” location targeting. Same with demographic data (which can easily be obtained through other places, where the data is much more accurate than you would get from Facebook - see Acxiom).


The idea is this:

given the complete graph and some external data, you can identify the specific users you should advertise to, i.e. jane@test.com.

BUT: what is still possible, and what Cambridge Analytica was actually building, is slightly different: with a statistical sample of maybe a few tens of thousands of complete profiles, you can build a statistical model that targets your advertisement not at a user, but at a set of criteria: "Males between the ages of 35 and 40 living in a mid-sized town in Texas who liked curling and Star Wars Episode I, but have never traveled to Australia". This is still perfectly possible.

It's likely the latter model performs at least 90% as well as the former, with an upward trajectory as methods are improved. Note that CA probably didn't get this working terribly well. But someone else definitely will.
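To make the second approach concrete, here's a toy sketch of that kind of model - fit a classifier on a labeled sample of profiles, then target by predicted trait rather than by identity (the features, labels and data are entirely synthetic):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Binary "profile" features, purely illustrative stand-ins for page likes
    # and demographics: [age_35_40, midsize_tx_town, likes_curling,
    # likes_star_wars_ep1, visited_australia]
    X = rng.integers(0, 2, size=(10_000, 5))

    # Synthetic label standing in for a trait measured on the sample of
    # complete profiles (e.g. receptiveness to a particular message).
    y = (0.8 * X[:, 2] + 0.6 * X[:, 3] - 0.5 * X[:, 4]
         + rng.normal(0, 0.5, 10_000)) > 0.7

    model = LogisticRegression().fit(X, y)

    # The output is not "advertise to jane@test.com"; it is "advertise to
    # anyone matching this combination of criteria", which ad platforms allow.
    candidate = np.array([[1, 1, 1, 1, 0]])
    print(model.predict_proba(candidate)[0, 1])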

Intuitively, I am far more uncomfortable with the first. But practically, I can't really think of a reason why.


Agreed that it’s scary if someone actually manages to get it working. CA did not, nor did Obama who was given access to far more data. But the issue is the false narrative being sold by these news stories. If you ask an average person right now what happened based upon these news stories, they’ll tell you that Trump hired someone to hack their Facebook data and then used it to target Facebook ads to them, and that this was so effective that he was able to steal the election from Hillary. That’s what I take issue with - it’s a provably false narrative.


I’m actually not opposed to this. In fact I wouldn’t mind if a similar narrative started about Clinton and how she used it to steal the popular vote. Then both major parties would be pushing for privacy in the US. Digital privacy is complex and I’d rather people have a misinformed motivation for it than apathy. It’s not unlike geopolitics or economics in that you almost need to have egregiously simplified propaganda to get people onboard because the reality is so complex and nuanced that most people's eyes glaze over at the thought of it. The truth and depth of understanding is available to anyone that wants it but my parents are never really going to get it the way I do. They don't have my understanding of how data, applications and the web actually work to make it all click so they think Facebook spies on them using their microphone when the reality is so much more insidious.


> you can build a statistical model that targets your advertisement not to a user, but a set of criteria: "Males between the ages of 35 and 40 living in a mid-sized town in Texas who liked curling and Star Wars Episode I, but has never traveled to Australia". This is still perfectly possible.

Prediction: It won't be after the government is done with a new set of regulations, at least for election purposes. I continue to believe this is about the ability to wield political influence, not privacy. The privacy issues have been well known for years, and there's very little evidence of public concern over privacy, other than newspaper article after newspaper article claiming there is, each one "coincidentally" making reference to Donald Trump, as if he was somehow personally responsible for this debacle, and almost no stories on the Obama campaign, which did largely the same thing on a much grander scale.

If someone knows of any convincing independent public polls, with published questions (with the right questions, you can get whatever "answer" you want to produce) that shows that there are in fact widespread public privacy concerns, I'd love to read them. Until then, I'm going to continue to believe this is about the power to influence the public in the political sphere, and predict that the Facebook platform is going to be stripped of that power.


It's not possible to target by specific user id, sure, but that's not the story and in no way makes the whole thing fraudulent. It's also not what CA was doing or how they operated.


The story isn’t that they used data obtained through the Facebook API to microtarget ads on Facebook? That’s the story I’ve read at every major media outlet.


> The story isn’t that they used data obtained through the Facebook API to microtarget ads on Facebook?

Yes, that is exactly it. You seem to assume that the only way you can achieve this is by advertising to individual people by Facebook ID, and you use this as the basis of your argument that the whole story is fraudulent and that instead there is some media conspiracy to commit securities fraud.

Can you see where your logic might be faulty?

There is a lot of information at your fingertips about how they actually used the data, you can go to your favorite search engine right now, type in "how did cambridge analytica use the facebook data" and read the answer. I'd do this first before calling the whole thing bogus and accusing Channel 4 of committing securities fraud.


I’ve looked, and I manage a relatively large Facebook ad budget so I know exactly what is and is not possible to do through it. I’ve also been both advertising and writing software that interacts with both the ad API and the graph API since the beginning of both. What is being claimed in the media simply isn’t possible to do today. I’ve seen the term “audience of one” thrown around so much it makes me sick.

So yes, the story is fraudulent. The (years-old) data that Kogan scraped and sold to CA years after the fact could possibly be used in aggregate to formulate campaign strategy (although because of its age even that use wouldn’t have been very effective). It couldn’t be used to target specific individuals or anything resembling that. If you’re saying they were able to target zip codes heavily populated by people in a given political party, sure, they could. But that’s not remotely close to what is being described in the articles.

Also, since you and your friends at the Guardian are implying that they used this data in other ways, see here:

https://www.theverge.com/2018/3/20/17138854/cambridge-analyt...

Even that part of your/their narrative is inaccurate.


As various news sites have clearly and repeatedly reported, they used the data to build highly detailed and specific psychological profiles of target voters, then created content which would broadly appeal to people matching those profiles. Kind of like buyer personas[1], but with rich, detailed data on 50,000,000 people and their connections to play with.

None of this implies in any way that they even took out a Facebook advert (in fact they preferred content that looked organic, seeded through various Facebook groups/Twitter accounts), so I'm not sure why your experience with using the Facebook self-serve advertising portal gives much insight here.

In short: they used the dataset to target people, but they did not specifically target people in the dataset.

1. https://blog.hubspot.com/marketing/buyer-persona-research


As I said in my original comment, the data might have been used in aggregate, for things like campaign strategy (“go campaign in Tallahassee!”). But the articles on this subject that have gotten the most attention imply that this data was used, specifically, to target ads on Facebook. Therefore my extensive experience with the platform, that you rudely sought to trivialize in your comment, is directly relevant. Once again, you cannot use the data they had to target people in the way that is being implied in most of these articles.

Finally, to put your comments to rest (hopefully), the data wasn’t used even off of Facebook in the way that you and these articles are claiming. See [1]. Enjoy the rest of your day.

[1]https://www.theverge.com/2018/3/20/17138854/cambridge-analyt...


The data was used in aggregate to produce detailed profiles (upwards of 40k of them), as reported by the original whistleblower and every reputable news source that's reported on it, and in far more detail than "go campaign in X".

More like "This specific group of people in X who like guns, willie nelson and dislike bananas would be receptive to a picture of willie nelson squashing a banana with Hillary Clinton's face on it with a caption about 2A". Then, spread this picture organically through facebook groups for people who hate bananas, love willie nelson and guns.

To be clear: the story is not at all "oh no they bought targeted facebook adverts", and it should be evident that perhaps your logic or understanding is faulty rather than a worldwide media conspiracy to short Facebook stock.


and it should be evident that perhaps your logic or understanding is faulty rather than a worldwide media conspiracy to short Facebook stock.

Classic liberal tactic: dismiss people that point out the obvious flaws in your rhetoric by painting them as conspiracy theorists. I don’t think it’s a “worldwide conspiracy to short Facebook stock”. I think that the media outlets distributing these stories, which are spreading false information and implying things that aren’t possible (as my comment points out), hate Trump, are pissed that they were unable to manipulate the election in the way that they wanted, and are doing what they can to ensure he doesn’t win in 2020. But it wouldn’t surprise me if they attempted to get a little cream on top and tried to profit from their false narrative by shorting the stock as well. There’s actually nothing wrong with that, as long as the stories are factual, but it’s illegal (at least under US law) when they are not.

the story is not at all "oh no they bought targeted facebook adverts"

That isn’t all of the story, but that is a part of the narrative that is being told, and that part is factually impossible. Further, even the rest of your claims that they actually used the psychographic profiles are simply not accurate. See: https://www.theverge.com/2018/3/20/17138854/cambridge-analyt...


Is it really the layman that is wrong? You can put something in the backyard of your house and reasonably expect that it's not going to end up in a database used to profile you.

Why DO we have to accept that “public” means big data? Why can’t we legislate access so that aggregate collection of public personal data requires _consent_?


Why can’t we legislate access so that aggregate collection of public personal data requires _consent_?

If you can get enough people to agree with you, then great - go for it. But that’s not the point of my comment. It’s that even if they did collect the data, it couldn’t have been used to microtarget you through Facebook ads, even though that is exactly what all of these articles have said happened here.


You can create custom audiences by Facebook ID (or email, phone number, etc.)

https://www.facebook.com/business/help/606443329504150?helpr...


This only applies to people that have installed your Facebook app (not to their friends). In the CA case, that means they would have been able to specifically target a few hundred thousand people, but they didn’t do that because the app had long been deleted before the 2016 election.

Facebook actually cares about this issue. The bug bounty I was awarded arose from the fact that you could build a custom audience email list using user_nickname@facebook.com without knowing their actual email address. So you could simply write a bot that visited Facebook.com/profile.php?id=4 and see that it redirected to Facebook.com/zuck, and now knowing that “zuck” is the user nickname corresponding to user ID 4, you could then put him in your custom audience list using zuck@facebook.com. That worked for all 2 billion people on the platform, because at one point Facebook gave everyone an @facebook.com email address that mapped to their account. However, that issue has been fixed and was fixed before the election.
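To illustrate how trivial the ID-to-nickname step was, a hypothetical reconstruction of just the mapping described above (illustrative only; the custom-audience hole it fed into was closed before the election):

    import requests

    def nickname_for_id(user_id):
        """Resolve a numeric profile ID to its public nickname by following
        the profile.php redirect, as described above."""
        resp = requests.get(
            f"https://www.facebook.com/profile.php?id={user_id}",
            allow_redirects=True,
            timeout=10,
        )
        # e.g. https://www.facebook.com/zuck  ->  "zuck"
        return resp.url.rstrip("/").rsplit("/", 1)[-1]

    # "zuck" -> zuck@facebook.com could then be dropped into a custom audience
    # list, even though the real email address was never known.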


That is what I thought also, but how did CA get this friend data?


They got it by using the same technique that Obama did in 2008 and 2012. They got one of your friends to authorize their app, and then they were allowed to access most of the data about that friend's friends - including you - that the friend could access. The only difference between CA and Obama is that Obama’s app was given special permission to blow through data collection limits enforced on all other apps, such that even though fewer than 1 million people authorized Obama to have their data, they had information on ~200 million people - the entire US social graph. The researcher that created the app at issue in the CA scandal, who eventually sold that data to CA years after the fact, only had access to about ~50 million profiles.

Both of these things were wrong, but it should have been a huge deal back in 2008 and again in 2012, when 4x the number of profiles were accessed and used, about 99.5% of which never authorized Obama to have or use their information.


There are, obviously similarities. Yet there are, I believe, also two major differences:

(a) This was 8 and 4 years earlier. A lot changed in that timeframe. While, yes, there were always concerns about privacy, the string of data leaks and the increasingly obvious power of statistical methods for targeting have had a strong effect on the general public's attitude towards the practice.

(b) The Obama campaign did not break their contract with Facebook. Facebook did apparently turn off their API limits, either because of sympathy or because that's just what they did for big clients back then (FarmVille got the same privilege). But in the end, it appears that nobody broke any rules back then.

Now this probably won't convince everybody. So let me say this: even if the reaction now is hypocritical, it seems everyone agrees that the current outrage is justified. If so, it would be foolish not to take action now just because "your team" got caught while the other got away. After all, the chances of each party being advantaged by such practices going forward seem to be exactly equal.

(Not to mention the vastly larger universe of threats not involving the two US parties)


The Obama campaign did not break their contract with Facebook.

While I generally agree with the rest of your comment, the Obama campaign did in fact break their contract with Facebook. The developer TOS said that any data obtained through the API was to be used exclusively for the purpose of the operation of your app. Unfortunately, by all accounts, they used this data for purposes outside the app, such as campaign strategy and voter targeting. Further, this was back when targeting Facebook ads by ID was allowed. So they likely did far scarier things than either Trump or Hillary were able to accomplish. We don’t know, because the media was OK with his win, and as a result, nobody investigated the specifics.


> have had a strong effect on the general public's attitude towards the practice

This seems like common sense, but do we have substantial evidence proving this, and to the degree of newspaper coverage we're seeing claiming public outrage? And if so, did the outrage precede the news coverage or follow it?


[flagged]


>It would also be interesting to see why you're so motivated to find excuses for Facebook and to whitewash the damning facts. Facebook employee? Family financial interests in the ad industry? Concerned citizen? Bored?

None of the above. It’s because the hypocrisy of this whole thing is wrong and represents everything that’s wrong with the press today. There was a Facebook data abuse scandal where elections were involved, it just didn’t occur in 2016. It was committed by the Obama campaign in both 2008 and 2012. But the articles portraying that were downright celebratory. Now the “wrong” guy wins and all of a sudden it’s a big problem? There’s a word for that. It’s hypocrisy.

>this scandal (without scare quotes) is an issue because many people have finally realised that Facebook has a ton of information about them and they can't be trusted to handle that information properly.

I agree with you that Facebook gave away too much data pre-2014, but this isn’t the incident that proved they couldn’t be trusted with people’s data. It was the Obama campaign incidents, which involved 4x as many records, that proved it. Most of the holes have been closed today.

>it's not just about ad targeting, it's about all the possible ways information can be misused too: stalking, blackmail, fraud, trolling, circumventing various constitutional protections, identity theft etc, etc.

Are you saying that you believe that Donald Trump is using this data to stalk you? Because this “scandal” cannot be executed again. These capabilities were removed in 2014.

>your assertion that it's impossible to to influence specific outcomes because of the impossibility to accurately target is... wrong. There were many account included in the CA data set, there were and are other data sets and even if they can't target those people with ads, they can probably send them an e-mail, give them a call or send someone to their house.

While it’s true that there are other channels of communication available, the articles surrounding this subject have mostly focused on Facebook advertising, where this is no longer possible and hasn’t been for several years. Read my comment.

>people do have responsibility for what they post, but when their information is stored forever and combined with massive quantities of other collected and bought information the entity doing the above has a lot of power and a lot of responsibility too.

This is a ridiculous statement. If you voluntarily post things publicly, you should expect that they will be stored and crawled. That’s like saying ”I walked down the street naked, and now I can’t believe that my neighbors won’t delete their pictures of it!” If you don’t want it out, don’t put it out in public.


Advertising doesn't require stalking people 24/7, though many seem to have convinced themselves this creepy behavior is a normal state of affairs.

First it was just contextual information like text and location. Now it's morphed into something incredibly sinister and toxic.

Hundreds of thousands of people are devoted to building detailed historical profiles and models to give advertisers the ability to micro target people based on sexual preferences, moods, race, religion, political inclinations and other intimate personal information that should not be available COLLATED to anyone. All accompanied by apologism, hand waving, muddying the waters, deception and denial.

It's ironic our societies depend on those who are hungry and deprived to be ethical and not succumb to theft and loot while those who have can behave without ethical constraints and discuss it in a detached intellectual manner.


Pausing app reviews is annoying for sure, but not allowing new users to authorize their app is really bad.

Meaning that new customers can't connect with Facebook anymore to access their own data using OAuth! We don't need permissions for their friends, their photos, or whatever - just access to their own messages and posts (which is what our customers want to see in our app and pay for).

I know they are shell-shocked after #deletefacebook stuff, but this overreaction is ridiculous.

So glad it's not our only channel of communication though. Times like this you appreciate email - crazy, huh?


Drastic action and outcry from developers may be necessary to attain enough media coverage.

The spin from Zuckerberg as a result will be along the lines of "We're really glad you asked that question, and it's one that's really important to all of us. We are prioritising the safety and privacy of our users, and unfortunately that might upset some over-reaching applications."


They are blocking new apps, not new users in existing apps.


>not allowing new users to auth their app

Where does it say this? Is that in separate reporting? That would be huge.


Reading Facebook's PR as they try to fix "problems" that they previously leveraged to profit massively is like someone purposely tripping you and as you stand back up they spit in your face and say "Oh, sorry. I'll try not to do it again" in a condescending tone. [edited to remove things HN can't handle]


Wow man, that's some top-level hate. You do realize that Zuck has pledged away 99% of his wealth? Providing third-party apps access to friends lists was a strange permission. I have never used an app that asked for it. But there are plenty of dating, gaming and social apps that could only work with a permission like that.

I think people are reading too much into this fiasco. We are better off fixing Facebook. It serves its purpose well.


> You do realize that Zuck has pledged away 99% of his wealth?

He hasn't. He has pledged to donate 99% of his wealth to a private, for-profit organization that he owns.

I also pledge to donate 99% of my wealth to my bank account.


>He hasn't. He has pledged to donate 99% of his wealth to a private, for-profit organization that he owns.

I initially thought you were being overly critical here. It looks like you're right, the "Initiative" he created is an LLC and not a non-profit.

https://en.wikipedia.org/wiki/Chan_Zuckerberg_Initiative#Com...


Yep, and not his first rodeo either. Remember "Internet.org", a for-profit initiative that is part of Facebook? Putting ".org" on something for-profit that tries to achieve a monopoly is as dishonest as it gets...


If you read higher up in that wiki it lists all of the money the initiative has given away so far. I'm not sure why people think that the LLC doesn't give away money.


The issue isn't that the LLC doesn't give away money. The issue is that it is an LLC.


That's called influence, changing people's minds with relatively small amounts of money. And it works.


Changing people's minds is no longer considered influential?


Interesting tax avoidance strategy. Any idea if this is also what Bill Gates is doing?


His and his wife's is a proper foundation:

https://en.wikipedia.org/wiki/Bill_%26_Melinda_Gates_Foundat...

The money is out of their hands, and can't be distributed back to them.


Even in that case, he and his family will for a long time have the political benefit of deciding how the money will be spent. And political advantages usually translate into economic advantages, so it can also be viewed as a scheme at some level.


I don't think this cynicism is warranted. Whatever benefit you think the goodwill of others brings you for giving away your money, it is dwarfed by the benefit of having billions of dollars.

If the political benefit really outweighed the costs, then every single billionaire would be doing the same thing, purely out of selfish interest. How many of them are?


> If the political benefit really outweighed the costs, then every single billionaire would be doing the same thing, purely out of selfish interest.

Well, they are doing it for the selfish interest of feeling and looking good.

And I'm completely ok with that!


Lots of them are doing it. In fact, one of the oldest schemes devised by American billionaires is to start a foundation and call themselves philanthropists. After all, how many billions do you really need? It is better to take a portion of this money and invest it in public relations.


Oh, sure, I'm not saying he is a saint or anything. But Gates's actions are very, very different from Zuckerberg's "I'm donating to a company I own and calling it an 'initiative'". That's a whole new level of dishonesty.


Not exactly; I'm pretty sure Bill Gates's is a nonprofit. However, there's a book called "No Such Thing as a Free Gift" about the social costs of letting our .00001% be the welfare providers for society. Pretty good if you're interested.


I'm also happy to give away every dollar I earn, once I have $440m in my bank account.


>You do realize that Zuck has pledged away 99% of his wealth?

Please stop spreading PR. He put the money into a limited-liability corporation, and spread a bunch of articles about a "pledge." It's nothing more than a tax-sheltered investment vehicle.[1] This is not the same as putting it into a charitable trust. He can use the money to influence whoever he decides to give it to.

1. http://fortune.com/2015/12/02/zuckerberg-charity/

"Corporations can make for-profit investments and political donations—and unlike charitable trusts, they don’t have to report their political donations."


There's an enormous difference between Zuck's "charitable" corporation (not a foundation) and the Bill & Melinda Gates Foundation.

There's no visible effort at giving beyond the US Zuck Feelings Tour and hiring professional political hacks as his CZI staff.


> It serves its purpose well.

Its purpose being?

Maybe it’s the use I make of it, but it doesn’t add much to my life. It could go away and I could replace any of its function with something else, or not miss the function at all.

Cambridge Analytica/Obama campaign data fiascos aside, many are arguing more and more that Facebook is doing more damage than good to society.

Granted, this is hard to measure, but it’s a valid concern.


> Its purpose being?

Connecting people. Communication between friends. Social events.

> Maybe it’s the use I make of it, but it doesn’t add much to my life. It could go away and I could replace any of its function with something else, or not miss the function at all.

So, maybe learn to use it better? Less? Not at all?

> many are arguing more and more that Facebook is doing more damage than good to society.

I wish people who actually use it and benefit from it would be taken more seriously on HN. This is a pretty stark "silent majority" scenario.


> Zuck has pledged away 99% of his wealth?

They will actively ignore problems that deal with public health and security, but promise to "cure all disease" through philanthropy. Please.


“pledged”

Actually, he put his wealth in a tax-exempt “””charity”””.


People tend to forget that pledging away 99% of your wealth makes you much more attractive to getting investments now. It is all a marketing gimmick.


[flagged]


Why the cynicism?

Should giving money away be unfashionable?


Flashily promising to give money away should not be treated the same as _actually_ giving money away.


Didn't he plan to give away almost all his money over time?


He plans to "give away" money to an LLC he controls, which makes donations and invests in companies with some kind of social mission.

Not saying an LLC that makes investments in for-profit tutoring companies is bad, really, just not a charity in the usual sense, and not a gift to the public good.


HN has no obligation to "handle" your vitriol.


Where does it say that facebook is disabling new user authorizations? I don't see it on the page OP linked.


It's not mentioned in the post specifically, but this is what you get when a new user tries to login with Facebook: https://imgur.com/a/iAf6r


It sounds like this might only affect apps requesting the "pages_messaging" scope... https://twitter.com/search?f=tweets&vertical=default&q=platf...


This is a bit out of left field, but since the height of Farmville I've argued that Facebook should offer cloud services. I know these days everyone wants you to build on their cloud and it's a bit oversaturated, but a very easy way for Facebook to make data available to developers while maintaining security is to run the code that operates on that data on their servers. Seems like such a no-brainer I'm surprised they haven't done it.

But maybe I'm missing something obvious.


Ben Thompson from Stratechery has convinced me that Facebook wanted to be a platform, but changed their mind. They realized that making the world's most valuable consumer dataset available to developers (and competitors) was dumb, as was letting those companies all pollute the core FB experience (like Farmville), and they would make far more money in the long run by letting advertisers target with that data.

This lines up with my experience. I did a ton of (painful) Facebook platform development from 2007-2009 or so, but I haven't followed it as closely since. My sense back then was that there was this huge build-up of activity around the FB platform; they were creating all these new APIs and ways for developers to build super-social experiences and deeply integrate with the core FB experience, there were huge companies like Zynga that were entirely dependent on Facebook and also responsible for tons of FB revenue, etc. And then it all seemed to fizzle? It doesn't seem like there's hardly any activity anymore in terms of deep integration with Facebook as a platform, other than FB login. I never see anything on my news feed anymore from weird apps, or get invites to take some dumb quiz, or whatever. I mean, I'm sure that stuff is there somewhere, but not anything like it was. That could be wrong though!


I think that is what the Parse acquisition was about. Obviously didn't work out for some reason.


Our app which has only `email` permissions is still allowing new users to sign up.


Do you have an alternate means of signup available for those that don't have Facebook?


Yep, users can sign up by email too.


I wonder if it's all still a net benefit for FB. I remember back in the day, while doing heavy FB dev, being flabbergasted at what we were able to get. It solidified our decision to invest heavily in their platform. We were able to get millions of likes and other data by simply having a few thousand signups. At one point I thought it was a bug and had to ask around about it. We had to consider whether it was something that would be "discovered" and shut down or not. The power of it cannot be overstated, and it was without a doubt a major catalyst for the success of their platform. It's possible that they would be worth less today, even including the $100B loss, without it.


Not sure if this headline is 100% accurate. OAuth for apps that have already passed Login Submission is still functioning. For example, new users to an app that is already in the FB app ecosystem can still create accounts via OAuth.

However, apps that request scopes like "user_friends" or "pages_messaging" [1] may error out during authentication.

[1] https://messenger.fb.com/newsroom/messenger-platform-changes...


Seems kind of late for this kind of thing, doesn't it?


As in, a week late? No. Changes like this take time; some of these Facebook devs and managers are probably already working around the clock on this and related changes.

Or as in many years late? In that case yes. A bit more privacy from the start would've been nice.


Interesting how it's all about "sharing" and "community" when they want you to get on Facebook and all about "well you know you signed your privacy away" when you ask any questions. It's so disingenuous - I'm loving Facebook's self-created troubles.


Facebook privacy through restricting the API is an illusion; a lot of content can be extracted with scrapers.


You have to be someone's friend to extract that same info with a scraper though, no?


It depends on their privacy settings. Some people expose a lot of information, and you can (if you're patient) put together a great deal of information about a person even with their privacy settings locked down, if you study enough of their friends.

Of course, FB could just resort to making everyone's info private. But then it would suffer a serious loss of utility, as many friendships and social connections are validated by the existence of mutual acquaintances. Most of the time this is completely innocent and desirable for all parties, which is what allowed FB to become so popular in the first place.


If a user account authorizes your app for the API, you could likewise only see the data of that account's friends.


Just turn off all Facebook apps. I never turned them on, and don't seem to be missing anything.


Or how about: "Don't use Facebook. I never seem to miss anything."


How does one even pause app reviews? They don't own the app stores do they?


All those apps/sites with "login with facebook" or asking you for Facebook permissions (e.g., Tinder, Spotify, etc.) are reviewed by Facebook.


Their "app store" is all the facebook app IDs that developers register to enable "login with facebook" and so on, in their own apps. Without an fb app, you cannot access the facebook api or sdk. For example, pokemon go might offer "login with facebook" and that makes your in-game account gated by a working facebook app integration.


They are talking about apps on Facebook. E.g. if you want to build a chatbot on Messenger, you have to create an app on Facebook.


This is the review process for getting extended scope permissions on Facebook apps.


I think this means that they're not allowing new apps into the app store.


Close, but no (: They don't own any app stores, this refers to apps that use Facebook login or data.


Oh I see, thanks for the clarification.


Can anyone think of a useful Facebook app? I can’t think of one. They’re not as intrusive/awful as they were in the FarmVille era, but are any actually useful for users, not marketers?


I mean just having a Facebook login on your own website can be useful. I believe even that qualifies as a Facebook app.


Is it possible to create an app that can easily remove personal info, and delete all posted content that is, say older than n days old? If so, is there a new demand for this kind of app?


"people have noticed that a lot of horses get stolen from our barn, we guess it's maybe time to finally close the barn door"


Seems like the right answer here is analyzing usage of the API and looking for malicious patterns.
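As a rough sketch of what that could look like - flag apps whose profile pulls are wildly out of proportion to their install counts (the event names and threshold are made up for illustration):

    from collections import Counter

    def flag_suspicious_apps(api_log, max_profiles_per_install=50):
        """api_log: iterable of (app_id, event) pairs, where event is either
        'install' or 'profile_fetch'. Returns app_ids whose fetch volume looks
        implausible given their install count."""
        installs, fetches = Counter(), Counter()
        for app_id, event in api_log:
            (installs if event == "install" else fetches)[app_id] += 1

        return [
            app_id
            for app_id, n_fetch in fetches.items()
            if n_fetch > max_profiles_per_install * max(installs[app_id], 1)
        ]

    # A quiz app with a few hundred thousand installs quietly pulling tens of
    # millions of profiles stands out even under a crude heuristic like this.
    log = [("quiz_app", "install")] * 3 + [("quiz_app", "profile_fetch")] * 500
    print(flag_suspicious_apps(log))  # ['quiz_app']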





