Facebook Seeks Shutdown of NYU Research Project into Political Ad Targeting (wsj.com)
317 points by bigpumpkin on Oct 23, 2020 | hide | past | favorite | 105 comments



> Facebook said it already offers more transparency into political advertising than either traditional media

What bullshit. If I want to know what ads are targeting my parents in print, I can buy the locally distributed newspapers. If I want to know what they see on Facebook, I have to literally shoulder surf. That or wait for the week’s crack cure-all to surface in a family message thread.


It's also nearly useless in practice: an ad saying "Wearing masks is killing you!" gets disclosed as paid for by "Masks Are Killing You."


You're mistaken. I have received >20 political advertisements via SMS in the last week. Do you think you should have the right to read my texts? Should advertisers be required to disclose to you that they texted me?

The status quo outside FB is not newspapers, it's other web ads, television, sms, and phone calls. FB may be more or less transparent, but the status quo is not transparent in the way you're describing.


> I have received >20 political advertisements via SMS in the last week. Do you think you should have the right to read my texts? Should advertisers be required to disclose to you that they texted me?

The research participants are voluntarily sharing their ads with NYU. Facebook is objecting.

I have bought voter rolls for direct mail campaigns. On the mailer, I had to disclose who was paying for it. I knew a fraction of my mailers would end up in the opposition’s hands, just as their messaging ended up in mine, and that from who got which mailers we could deduce which voter roll query was used. If I lied they would call me out to the people I lied to. That is an incentive to stay honest and a promoter of a common fact base.

Direct mail, SMS and phone outreach have gotten more targeted. But they're nowhere close to Facebook. That degree of targeting deserves elevated transparency to some regulatory force. Given these are political ads, government regulation is out of the question. So the next best options are total public disclosure, adversarial disclosure (i.e. identifying the opposition, which is complicated, and giving them full transparency), or mandatory or voluntary disclosure to academia. The last is the weakest, since it introduces sampling biases and requires private collection resources. But it's better than nothing. Facebook still fights it.

American politics are no longer a debate. We can’t agree on common facts and can’t agree on a common set of institutions. Broad political interaction has devolved into a competition of facts over one of governing philosophies. That is not a way to govern and is a direct result of social media promoting separate, targeted “facts” that spread faster and more insidiously than they can be countered. If we want to continue making decisions as a democracy, this has to be checked.

This has precedent. Every new form of media (print, radio, broadcast television, phone, cable television, e-mail, et cetera) has been followed by new rules for political discourse. To inform that rule making we need research. If the media controls the data the research will be useless, the rules garbage and our discourse corrupted.


> The research participants are voluntarily sharing their ads with NYU. Facebook is objecting.

FB gives off serious Emil Kaschub vibes with its willingness to run non-consensual psychological experiments on people.


Great comment, thanks for sharing your info and ideas. I’m inferring from your second to last paragraph that you’re alluding to social media being somehow part (or all?) of the reason for those problems.

I totally agree. I think that we as a nation (US) (and possibly the whole world) have a blind spot around social media, surveillance capitalism, and targeted ads. The power of the algorithm is so strong that people truly believe they're being recorded (the whole "FB is listening because I got such-and-such ad" phenomenon).

Policy is always many years behind technology. My biggest concern is that this technology poses an existential threat to our current zeitgeist. I always have trouble putting the threat into my own words.

But I can’t agree more. We need immediate transparency. And we need to study the cause and effect so we can better understand. That will lead to the best form of whatever regulation we may need to move to the next stage.


You pay your phone company for service.

They also mine and sell your activity.

Recently on HN https://news.ycombinator.com/item?id=24756042


> I have received >20 political advertisements via SMS in the last week.

There should be a law against that.


I thought there was, for mobile phones? I receive unsolicited calls and texts from the same orgs even after I tell them over the phone to stop calling and remove me from their list. It is simple telemarketing. The calls and texts have to be illegal, because it is only fake IRS calls and political organizations I've never heard of or signed up for. If it were legal, I would think I'd be receiving them for all kinds of products.


These people are voluntarily sharing the ads they see with researchers. It's a completely different situation.


I don't mean to be snide, but what gives you the right to know what everyone else is seeing?

Tons of posts on HN are about how individuals should own their data, yet you think you should also have access to that of others?


Because this is not private data, this is public discourse. How can we have conversations with people and argue about issues if they and we have been given different information and basically think that the same words have different meanings?


The technical term for that is "theory of mind". Without it you've basically lost the ability to have and participate in civil society.


Interesting observation. Would make for a great sci-fi novel premise.



Tangentially at best: The Truman Show is great, but there is complete theory of mind between all the characters; there is just a big conspiracy.

A real breakdown of theory of mind would mean a breakdown in common understanding. You don't have complete theory of mind with an animal, and it certainly doesn't with you -- there is an inability to fully account for the reasoning of a dog.


I'd argue one could plausibly make a documentary already!

The way current social media and search engines optimize for engagement and tailor information to each user, people already have really different views of the universe; and this has already had real-world consequences (e.g. the corona response, motivations for or against Brexit, etc.).


What other people are seeing is public discourse? I'm not sure I follow. Is their search history also public discourse?


> Is their search history also public discourse?

Oh, come on. You are the one bringing that into this discussion. The topic is not people's private search or browsing history. Political advertisements are very clearly a public issue.


Ad targeting is largely informed by browsing history.


Americans may not have a right to know who's paying for the propaganda we see, but, as far as I know, we do have a right to know what political propaganda is being spread and to what groups.


>What other people are seeing is public discourse

Advertising is part of public discourse, political advertising in particular.


All FB political ads are listed in Facebook's Ad Library


Also, directly from the article:

>But that library excludes information about the targeting that determines who sees the ads.

>The researchers behind the NYU Ad Observatory said they wanted to provide journalists, researchers, policy makers, and others with the ability to search political ads by state and contest to see what messages are targeted to specific audiences and how those ads are funded.

The point of the research is to figure out what ads Facebook shows to which users, which is of course particularly important for people who try to do research on how ads on Facebook intersect with politics.


Political ads, political commentaries, memes.

I have someone on my contact list in fb who honestly thinks that QAnon exposes pedophiles. I can't wrap my head around how the heck they've reached that conclusion.


[flagged]


It sells papers (or digital equivalents).

On Fox News, the left is nothing but a violent BLM mob being cheered on by supportive governors. Supported by Soros.

On MSNBC, the right is nothing but a violent fascist mob being cheered on by a supportive president. Supported by the Koch brothers.

Thankfully, we live in a country where the average person is not crazy. Unfortunately, you'd never know that from watching what passes for news these days.


Yes, the right has their own boogeymen, like Antifa and BLM.

However, I'm willing to bet you don't see anything about Q in right-wing news sources, as there is literally nothing to write about them except "these guys think Trump is fighting a deep underground pedo cabal," and I'm sure they don't want to antagonize their readers.


[flagged]


>Except of course BLM and Antifa physically exist

And Q... doesn't? What are you implying?

>see nearly every day in downtown Portland

>https://twitter.com/MrAndyNgo

So Q doesn't exist, despite "plenty of imagery and video on twitter", but I should take "MrAndyNgo", a Twitter nobody completely at face value? I've been to Portland recently. Plenty of people were outside eating at restaurants. The "anarchy" is greatly overblown by the media.

I don't mean to downplay the impact of either movement, but dismissing QAnon as a "liberal conspiracy" while simultaneously getting news only from what seems like a Twitter agitator, Andy Ngo, is concerning.


He's not a "twitter nobody", he's a blue check journalist who runs https://thepostmillennial.com/ and frequently appears on TV. All I'm saying is QAnon, if it's not a dumb LARP by some teenager, at least doesn't set police cruisers on fire and doesn't kill people for wearing a hat it doesn't like. So if we were going down the list of dangerous societal things to address it would be nowhere close to the top.


>All I'm saying is QAnon, if it's not a dumb LARP by some teenager, at least doesn't set police cruisers on fire and doesn't kill people for wearing a hat it doesn't like.

Bruh, what? QAnon has incited a lot of violence. Getting someone to commit acts of violence on your behalf is worse than specific acts of violence by people who choose to do so without provocation. [0]

Anyway, compare QAnon, an anonymous conspiratorial propagandist, to BLM:

>decentralized political and social movement advocating for non-violent civil disobedience in protest against incidents of police brutality and all racially motivated violence against black people. [1]

...or antifa, [2]:

>an anti-fascist action and left-wing political movement in the United States comprising an array of autonomous groups and individuals that aim to achieve their objectives through the use of both nonviolent and violent direct action rather than through policy reform.

...which are basically spontaneously formed groups of people supporting specific political causes, indicates that you do not understand the complexity of events since QAnon's appearance. Yet you speak confidently as if you did, and go on to condemn the countless millions that support Antifa or BLM for their tenuous proximal association with anarchists or looters. Such irrational comparisons indicate an emotional response to whatever you perceive Antifa and BLM to represent.

Antifa and BLM protests make specific demands at their demonstrations. Time and time again we see their demonstrations disrupted by radicalized agents provocateurs, opportunists, anarchists and others. Please disambiguate when speaking about millions of people. It's easy to say an opportunistic person looting a Target is BLM. Just as easy as it is to say anyone defending QAnon, or intimidating voters by playing soldier at polling places, is a Proud Boy/Aryan Brother/whatever racist right-wing group is in the news for a hate crime that day.

The Proud Boys, KKK, Aryan Brotherhood and countless others are violent terrorist organizations that have initiation ceremonies, membership lists, and responsibilities. Antifa and BLM do not. Trump can't even condemn white supremacists in the most generic of terms. On top of that, he's frequently referenced Q conspiracies, lending whatever credibility you attribute to Trump to QAnon.

The link between Trump, QAnon, and racist terrorist organizations is stronger than any link between the countless millions of people who fly Antifa or BLM signs.

[0] https://www.theguardian.com/us-news/2020/oct/15/qanon-violen...

[1] https://en.m.wikipedia.org/wiki/Black_Lives_Matter

[2] https://en.m.wikipedia.org/wiki/Antifa_(United_States)


The post is completely false and partisan. Trump said specifically he would denounce any group the moderator wanted. He has denounced white supremacy over 20 times on video over the past 4 years. Why so many times? Because the Democrats lie so much about him that the media keeps asking the same question. The same question was even asked at the 2016 debate with the same moderator. Then you have literally thousands of videos of people carrying Antifa flags, shouting BLM, and lighting things on fire, looting, etc. The Proud Boys are not white supremacists either; they have plenty of black members and their leader is Hispanic. All it takes is a little time googling to find ANY of this, or watch a legit news organization like Fox News that actually shows the videos and tapes. It's insane how many people claim Fox News to be fake or its stories to be fake, but they literally show video evidence of all the violence, while other media will say it's "peaceful protests." For those of us there, we know these protests shouting "No peace" are not peaceful.


Citation? It's so common nowadays to just claim unsubstantiated things. E.g., a video of our POTUS denouncing white supremacists?

Repeating the same videos of isolated incidents makes BLM seem ultra-violent, but that's more of the same smearing tactics used for a decade now.


I have a good one. Here's the "fine people hoax" where he very specifically denounces white supremacists. Joe's entire campaign was started on a lie. You're being lied to by the press about nearly everything: https://www.youtube.com/watch?v=3WWoXwUIywQ

Second part of the quote which you have not heard so far: "And I'm not talking about the neo-nazis and white nationalists, because they should be condemned totally."


That's not a citation. That links to an Alex Jones-styled YouTube channel with breathless conspiratorial text superimposed over cherry-picked clips of what appears to be broadcast TV; their origin and veracity are as suspect as the person running the account.

Trump has never said: "I condemn white supremacists". Four words. Dead simple.

Here I'll say it. I condemn white supremacists. I condemn white supremacy. Your turn.

Saying they should be condemned is different.


Search: trump condemns white supremacists 20 times. It’s a video showing a lot of them and not even all.


Couldn't have said it better myself. It's weird to me that people on the left don't see how gaslit they are. It's nuts, the bubble is impenetrable.


Then prove me wrong, line by line.


The person I quoted said that banning QAnon groups is how mainstream censorship prevents honest people from exposing pedophiles. Take it or leave it.


You misunderstand. What the parent poster mentioned is that they want to know which ads are targeted towards their parents, and the only way to know that is to stand behind them, because there is no easy way to get this data from Facebook.


Should everyone be able to see which ads are targeted at their peers? That still seems like an invasion of privacy.


The invasion of privacy starts with the act of tracking peoples' behavior and using that data to target ads.

It's also not about knowing what a specific user sees. The targeting configurations that advertisers use should be made public, but nobody needs to know which individual users see which individual ads.

So, you should be able to click a "?" link in the corner of any ad and see which demographics or groups the ad is targeting. And likewise, you should be able to look at individual demographics or groups and see which ads are targeting them at any given moment.

This sort of transparency would go a long way towards restoring trust, but I suspect that it would also reveal what a cesspit their ad market is. The advertisers would also probably object, because many would consider their targeting parameters to be a sort of trade secret. And whoever pays the piper calls the tune.
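Mechanically, the bidirectional lookup described above (ad → audiences, audience → ads) is trivial to build once the targeting data is published. A rough sketch in Python, with hypothetical names -- this is not any real Facebook API, just an illustration that no individual viewership data is needed:

```python
from collections import defaultdict


class AdTargetingRegistry:
    """Hypothetical public registry: which segments each ad targets,
    and which ads target each segment. No individual viewers stored."""

    def __init__(self):
        self._by_ad = {}                      # ad_id -> (advertiser, segments)
        self._by_segment = defaultdict(set)   # segment -> set of ad_ids

    def register(self, ad_id, advertiser, segments):
        self._by_ad[ad_id] = (advertiser, frozenset(segments))
        for seg in segments:
            self._by_segment[seg].add(ad_id)

    def segments_for_ad(self, ad_id):
        """What the '?' link in the corner of an ad would show."""
        return self._by_ad[ad_id]

    def ads_for_segment(self, segment):
        """The reverse view: all ads currently aimed at a demographic."""
        return sorted(self._by_segment[segment])


registry = AdTargetingRegistry()
registry.register("ad-1", "Campaign A", ["age:18-24", "state:PA"])
registry.register("ad-2", "Campaign B", ["state:PA"])
print(registry.ads_for_segment("state:PA"))  # ['ad-1', 'ad-2']
```

Nothing here needs individual impression logs; that's the point of publishing targeting rather than viewership.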


> So, you should be able to click a "?" link in the corner of any ad and see which demographics or groups the ad is targeting.

You can do this on Facebook, for maybe four years now. It's not particularly informative, but it exists.


A third party paid another third party to show very specific content to very specific people based on their very very private data - this is the primary violation of privacy. Being able to tell who pays who to show what is fundamental to transparency in democracy.


I don't care what other people are seeing. I care what advertisers are saying to the public, much like I care about other carbon dioxide emissions.


Parent poster wasn't just talking about aggregate data; they want to see what their acquaintances are seeing. Should we also force disclosure of what petitioners are saying? What about individual phone calls regarding political matters? Party newsletters?


All political correspondence to the public ought be publicly accessible and publicly auditable.


Including phone calls and in-person communication? Many organizations hire or otherwise compel individuals to perform one-to-one communications for the purpose of furthering a political agenda.


We tried that during the 1950s with the Communist Party as the early adopter, didn’t work out too well.


If GM killed the electric car, the establishment killed the Communist Party in America.


The problem is FB microtargeting. Maybe microtargeting should be banned?


Just as an FYI, micro-targeting is very difficult to make work (i.e. measure) on Facebook, or indeed on any digital ads platform.

FB/Google/Twitter etc. work by measuring some kind of conversion (pixel hit, app install, etc.). Micro-targeting in terms of voting doesn't have this signal, and as such any of these ads will be difficult to make work on FB.

You can optimise for engagement, but I suspect an ad targeting black voters about the '94 crime bill is unlikely to elicit much engagement.


Don't you just need to present the ads? It's about providing targeted information that other people can't see, not about feedback.


That's a good point, and the distinction is subtle. I think it lies in using "their data" to describe both what pages they choose to look at, and what ads Facebook decides to show them. These two types of data should really be treated separately.

Of course, one could argue that Facebook has a right to communicate with their users confidentially. But it's a bit of a stretch to treat ads as private communication.


The ad itself is not private information. The fact that I match the targeting for said ad, is.


What needs to be published is not that you match the targeting. The information that needs publishing is "who is targeting which ad with what criteria." That is:

the advertiser, the ad, and the targeting given.

None of this contains private information, and it's equivalent to what's known about ads in a newspaper.
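As a sketch of how small such a disclosure record is (hypothetical field names, not any real ad-library schema):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AdDisclosure:
    """One public record per ad run.

    Note that no field identifies any individual viewer -- which is
    exactly why it is publishable, like a newspaper's disclosure line.
    """
    advertiser: str    # who paid
    ad_text: str       # the creative itself
    targeting: tuple   # criteria given to the platform


record = AdDisclosure(
    advertiser="Committee X",
    ad_text="Vote yes on Measure 7",
    targeting=("age:65+", "interest:fishing"),
)
print(record.advertiser)  # Committee X
```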


>“Scraping tools, no matter how well-intentioned, are not a permissible means of collecting information from us,” said the letter

Well, that's quite nice that Facebook understands their intentions are positive; surely Facebook will just supply them the data directly instead!


I think NYU should push them on this. Another high-profile case protecting scraping would be great. Also, in this case, I'm proud to be an NYU alum; great response from the researchers.


Facebook giving data to researchers who promise to behave is how we got the whole Cambridge Analytica mess.


NYU Ad Observatory is overseen by a research university with an IRB, and also operates in plain view of the public.


> by a research university with an IRB

Let's not forget that Facebook is a company that tormented an employee so severely he killed himself at the office [1].

[1] https://www.vice.com/en/article/qvgn9q/do-not-discuss-the-in...


I'm sure that's their concern here, not covering their asses.


In this case, I think the 2 are aligned.


In 2016, there was furor regarding third party data analytics. The $5 billion FTC fine was ostensibly about protecting user privacy. It can be seen here as a fine for inadequate exercise of data authority: Facebook was letting outsiders see what's in the castle.

Now the walls are opaque, and the privacy situation is worse. Outsiders can no longer observe and audit the information Facebook outputs to users, let alone the information users offer as input.


Rather ironic, given that Facebook has engaged in human-subject research on emotional manipulation without informed consent in the past. I think they've waived their right to object.


Can you further explain how this works?

1. FB does something wrong that affects 700k of its 1b users at the time.

2. FB loses any moral authority to object to things that violate their terms of service.

That is a bit of a leap, but I am open to hearing what I’m not understanding about your thought process.


Not the op, but I'll try:

1. FB didn't "do something wrong". They have repeatedly engaged in research without informed consent as op mentions. These are not isolated incidents, but rather Facebook's modus operandi.

2. NYU uses volunteers to collect information on what Facebook ads are shown to them. A university project using volunteers to research on how people are targeted for political ads.

3. Facebook is trying to use its ToS so that its political targeting remains opaque.

I do not see any leaps here. I do not understand how a company such as Facebook has any moral authority to object about _anything_, much less about this research, but I'm open to hearing what I'm not understanding about your thought process.


> FB didn't "do something wrong". They have repeatedly engaged in research without informed consent as op mentions. These are not isolated incidents, but rather Facebook's modus operandi.

Are you referring to AB testing here?

Are you aware that every internet service does this, on an incredibly regular basis, without informed consent?

You can argue that this is unethical, but you should then be calling for online experimentation without informed consent to be banned.


No it was legit experimentation to affect emotional states, not AB testing. Just a random google search: https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook...


What's the difference?

Speaking as a psychologist who's extremely familiar with both experimentation to affect emotional states and AB testing, I really, really don't see one.

I personally think there should be ethics reviews for AB tests, but that's a fringe position in the industry right now.


You can look up "Surveillance Capitalism" by Zuboff for numerous examples of Facebook's overreach.

Consider "A 61-Million-Person Experiment in Social Influence and Political Mobilisation", the 700,000 people whose emotional states FB experimented with, and the documents which revealed how FB tried to pinpoint when young Australians and New Zealanders were vulnerable to advertising.

The difference between AB testing and Facebook's experiments should be obvious to anyone familiar with those experiments: the aim has been to research and alter human behaviour in general, whereas AB testing is specifically about understanding user engagement and satisfaction with regard to the product.

Trying to dress up this experimentation as AB testing, or as no different than AB testing, is incredibly dangerous. As a psychologist extremely familiar with both these kinds of experiments and AB testing, you really, really should see a difference.


From a technical point of view, they are 100% exactly the same. You use the same tools and methods to accomplish similar goals (to understand the reaction of people to particular treatments/interventions).

This article, right: https://www.nature.com/articles/nature11421 ?

So, this is a messaging experiment, which is pretty standard within psychology. The only interesting thing about this (and the only reason it's in Nature) is the size, not the experimental design.

> The difference between AB testing and Facebook's experiments should be obvious to anyone familiar with those experiments, as the aim has been to research and alter human behaviour in general

This is what AB tests do. You change the colour of the button, and the conversion rate (i.e. the proportion of people that give you money) goes up or down. How is this different from people taking an action (self-reported, btw, so potentially garbage) around an election? Seriously, I would love to know the actual difference here.
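To be concrete about what an AB test actually is, a bare-bones sketch (hypothetical names, not any platform's real machinery): deterministic bucketing plus an outcome metric.

```python
import hashlib


def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministic bucketing: the same user always lands in the same arm."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[digest % len(variants)]


def conversion_rate(outcomes):
    """Proportion of users who took the measured action (click, buy, vote)."""
    return sum(outcomes) / len(outcomes)


# Logged (user, converted) events get bucketed by arm before comparison.
buckets = {"A": [], "B": []}
for user, converted in [("u1", 1), ("u2", 0), ("u3", 1), ("u4", 0)]:
    buckets[assign_variant(user)].append(converted)
```

Whether the measured outcome is a button click or a self-reported emotional state, this machinery is the same; the dispute above is about the ethics of the outcome being measured, not the mechanics.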

You may not like Facebook, hell, I may not like Facebook but it's just inaccurate to say that they are uniquely bad because they run experiments on people, when my entire career both in academia and industry has been running experiments on people and that's somehow OK.


I don't see how the technical point of view is relevant.

I listed three examples that should make it obvious. You refuse for some reason to accept the obvious, that these kind of tests try to alter real world behaviour and not behaviour with regards to the product.

There's a big difference between changing a button's color on a page to see how user engagement changes, and trying to find when teenagers are in a more vulnerable emotional state to serve them ads. If you can't see the obvious, I'm afraid there isn't anything more to say.


I'm curious as to the metrics here: is emotionally manipulating 700k users less bad than 1 billion? How about 10 users? Or 1?

Also: is it more or less bad if 3rd parties do it -- political advertising, disinformation, Cambridge Analytica, etc.?


I think it's pretty neutral to say that harming fewer people is less bad, assuming the amount of harm is the same. Do you disagree?


Underrated comment.


They asked volunteers to install a browser extension to collect the data for them. So technically aren't they bulk collecting their volunteers' data, not Facebook's?

We really need a law/court case that establishes who owns data and who has permission to share it. The volunteers should be allowed to share whatever they want.


An even stronger, still accurate phrasing would be to say the volunteers are telling the researchers what ads they see on Facebook. The browser extension is just a tool, doing the bidding of the volunteers, to make that sharing easier.

Facebook is essentially saying we're not allowed to talk to each other ("in bulk") about what ads Facebook is showing us. Not the first time corporations have tried to prevent consumers/users/workers from organizing.


This came up recently on HN regarding Facebook taking legal action against two Chrome extensions. (https://news.ycombinator.com/item?id=24661891)

Scraping appears to violate the ToS. Tortious interference was brought up in the comments but I'm not convinced it applies.

Regardless of whether the scraping itself is legal, redistributing the content to a third party is likely to violate copyright in many cases.

The entire situation seems overly complicated.


You might be underestimating the number of folks that would be willing to install an extension for $10 and expose the private data of themselves and their friends and family in the process...


Curious if we'll see facebook sniffing for extensions now.


Isn't extension sniffing just part of the usual fingerprinting toolbox now?


They already do.


Makes sense. Last time a university wanted to scrape user data from Facebook users (with their permission) "for research purposes", it didn't end so well for Facebook.


Except that Cambridge University did not scrape any data from Facebook, nor were they behind Cambridge Analytica. Aleksandr Kogan was a former assistant professor at Cambridge University who subsequently developed and sold an app to Strategic Communication Laboratories (SCL), the parent company of Cambridge Analytica. That's the entirety of the connection to Cambridge University.

This NYU project is the very antithesis of Cambridge Analytica. This project is part of The Online Political Transparency Project. All of the data is public:

https://onlinepoliticaltransparencyproject.org/

From the NYU Ad Observatory page:

>"The Facebook Ad Observatory is part of the Online Political Transparency Project, which operates from the NYU Tandon School of Engineering. This nonpartisan, independent project is focused on improving the transparency of online political advertising, by deploying traditional methods used in cybersecurity to evaluate the vulnerabilities of online sites. The Project builds tools to collect and archive political advertising data, and makes these available publicly. We encourage journalists and researchers to use these to fuel analysis of online political advertising."


If you meant Cambridge Analytica, that wasn't a university.


Cambridge Analytica got most of their data from a university researcher, Aleksandr Kogan, who promised Facebook the data was being used only for academic research but was secretly handing it over to Cambridge Analytica.


So... the problem with university research is that frauds who claim to be university research aren't university research?


He wasn't a fraud at the time, he was a postdoc at Cambridge (which is a pretty well-respected university).

He gave the data to Cambridge Analytica without permission, though.


Sorry, I don't follow. If you're thinking I was trying to communicate some hidden message with my comment, rest assured I was not.


Here's the browser plugin the researcher team uses to collect data:

https://adobserver.org/

It's been obfuscated and I don't see a public repo of the source anywhere.


Not surprising; they don't want the Cambridge Analytica scandal happening again. Yes, it's a very different thing, but the general populace and old bureaucrats in Washington aren't really going to understand that.


Sometimes, I think of throwing up my hands and saying, how can you fight this? There's only so much you can fight against what people want -- because by our actions we're saying we want it.

What do I mean? Facebook is just a manifestation of something that has gripped us through technology. People want to read about their family/friends little daily stories, they want to engage in arguments about politics, they want to hear gossip and news, they want a platform to promote themselves. And they've found a way to do it.

Someone will insert themselves into that stream to manipulate others, entice them to buy things, seek publicity for their causes. Yet we still (collectively) keep opening the app and logging on because we feel -- consciously or not -- that it brings more value to our lives than it takes. So every minute, Facebook (or whoever would take their place) gets the reward signal saying, "these people want what we're selling." And on net, people aren't abandoning it out of concern for the side effects.

Maybe all you can do is create your local island of sanity and protect yourselves against whatever it produces. Maybe that's us for a decade until someone figures out how to manage this new dynamic, or Congress does its job. How much can you fight when every person's minute-by-minute choice produces a force that's uncontrollable?


> by our actions we're saying we want it.

Just as smokers want lung cancer? Consumers are conditioned to "like" and "dislike".

The battle against commercial surveillance is a battle for democracy. It is being fought but it will take effort because these companies have become powerful profiteers with smiling investors.

It took a long time to start winning against the cigarette industry. Even now, there are places in the world where cigarette sales are profitable and the product has prestige.


Well, that's what I'm saying -- until Congress and some regulation acts to stop it, our habits and desires will fuel this.


Habits and desires are implanted into groups by the system. But they are just group behaviour. People can change if they exert their will, speak out, and educate themselves.


But do people want this? Do they have choice?


It saddens me that this doesn't surprise me anymore. Big tech is so ridden with secrets and attempts to hide them, especially when they have to do with politics, which seems a little less than what democracy is meant to be.


>In a letter this month, Facebook says the project violates provisions in its terms of service that prohibit bulk data collection

I thought people asked for these restrictions after Cambridge Analytica. Are they evil now? :^)


This seems qualitatively different compared to Cambridge Analytica. CA used app permissions to gather data not just from the users who used the app, but also their contacts which enabled them to build up a database that covered a good chunk of the US.

The researchers here, as per the article, used 6500 volunteers to collect ads that Facebook explicitly showed to them, apparently without the use of any API or unwilling participants.


Doesn't this NYU research project fall squarely within Facebook's stated mission "To give people the power to share and make the world more open and connected."

Seriously fuck Facebook and fuck Zuckerberg. This company is the absolute height of hypocrisy, hubris and deception.

I hope this only makes more people interested in this research project and spurs more interest in similar projects in the future.


> This company is the absolute height of hypocrisy, hubris and deception.

Welcome to surveillance capitalism. "We are watching you" doesn't sell as well as "We are connecting people".



pot, meet kettle.


Drawing a parallel: it's as if someone researched the ads published in a newspaper, and the newspaper said you can't do that. This is crazy. Wake up, FB.





