Hacker News
Facebook has sent a cease-and-desist letter to researchers (twitter.com/alexanderabdo)
535 points by ColinWright on Oct 27, 2020 | 195 comments



> The researchers (w/ the help of others) are responsible for a browser plug-in called Ad Observer, which allows FB users to voluntarily share very limited and anonymous data about the political ads that FB shows them. You can read about Ad Observer here:

https://adobservatory.org

=================

Absolutely preposterous takedown demand. Facebook doesn't get to dictate what software I run on my own client devices, including browser plugins, or even what browser I use. If I want to install a plugin that sends a screenshot or data about every advertisement I receive to a third party of my choice, that's up to me. Or maybe I want to install uBlock Origin and see no ads.

It sounds like they're complaining because they have no way of detecting this or preventing it on the user client end, thankfully, because of the way browsers are architected to prevent a website from screwing with the software on your computer. The only way fb could detect or block this would be to force users to install their own fb-written browser plugin, with extensive permissions required.

Obviously fb has a high level of motivation to get every user to use their officially app-store-published android or ios app, where the whole experience is centrally controlled, and such a plugin is impossible to use. Rather than having the user browse facebook in Firefox or Chrome or Edge.

If I can display something on my own computer screen it's my right to choose to share it however I damn well please.


A lot of people felt Facebook should do more to proactively stop people from scraping and aggregating data from their site after the CA debacle. Which is what they are doing here, and you are labeling “absolutely preposterous”.

Which way is it? Should they let people do whatever they want with their accounts as you suggest, and risk a repeat of the CA fiasco? Or try to proactively stop it like they are now?


> Which way is it? Should they let people do whatever they want with their accounts as you suggest, and risk a repeat of the CA fiasco? Or try to proactively stop it like they are now?

No, they should assume all the data they serve about people is being collected and indexed by all the people they serve it to, and then restrict what they serve accordingly. Suing people for asking their computers to record what Facebook served them is insane.


This looks like a strawman dressed in a false dilemma. In the Cambridge Analytica debacle, as I understand it, it was a Facebook quiz hosted on Facebook, and the data was collected from Facebook directly and used for nefarious purposes.

This is an extension developed by researchers, who ask users to install it on their machines, and it is used exactly as advertised: it scrapes the advertisement data that Facebook shows them.


CA data was from a quiz, developed by a researcher at Cambridge University, hosted by that researcher, which scraped FB APIs after being authorized by a user.

This data is from a browser extension, developed by researchers at NYU, hosted by those researchers, which scrapes the FB site after being installed by a user.

IMO the situations are pretty analogous.


The situations are in no way analogous. CA gathered data on users who did not opt in. They paid 270k users to take a quiz, and gathered information on 87MM of their Facebook friends who never consented to sharing their information.


No kidding, the difference between opt-in and secret-opt-in is obvious.


In CA scandal the data collected was about users.

Here data is collected about what Facebook does, what a corporation does.

They don't seem similar.


>A lot of people felt Facebook should do more to proactively stop people from scraping and aggregating data from their site after the CA debacle. Which is what they are doing here, and you are labeling “absolutely preposterous”.

Maybe not the best usage of the word scraping.

CA had access to the data without having to scrape the front end.

For this extension, it is by the consent of the user.

For scraping, I agree. Facebook should try their best to stop people accessing personal data of people they don't know. They do make reasonable attempts; it's less than trivial to set up fake accounts at scale, but not impossible. Their "bulk uploads" feature is designed in such a way that it doesn't link email addresses to profiles (or at least not as easily), unlike LinkedIn and Twitter. That said, it's up to the user to set their privacy settings, but I would much prefer the defaults (if they still aren't, I don't use FB) were automatically set to non-public. I've seen an implementation that used headless browsers and thousands of FB accounts to scrape millions of profiles.


They harvested the data about users without their consent. This is about users wanting to protect against said data collection, so I think your angle comes from the wrong perspective and the juxtaposition (Which way is it?) is wrong.

> Should they let people do whatever they want with their accounts

That CA didn't have the consent of users was the scandal.


> Should they let people do whatever they want with their accounts as you suggest, and risk a repeat of the CA fiasco?

Yes.


What Cambridge Analytica fiasco?

https://ftalphaville.ft.com/2020/10/06/1602008755000/ICO-s-f...

The more I learn about CA and this so-called "fiasco", the more I realize other companies know far more and actively did far more than CA did.

They were an easy scapegoat for Hillary Clinton losing what should have been the easiest Presidential election win in American history.

The fact someone made a few hundred thousand - million off a documentary is even more pathetic.


"Everyone else is doing it" and "these other people are worse" is hardly a justification.


Sounds to me like they don't want to be held accountable by a third party.


Some takes from Benedict Evans that are worth considering: https://twitter.com/benedictevans/status/1320378054150148098...

“Meanwhile: the NYU app has access to friend data in your feed and friend data is also in the ads it scrapes. And it replaces an actual security model with our trust that NYU are nice people and won't abuse this access. That is exactly how Cambridge Analytica happened.”


Comparing Cambridge Analytica, which harvested data through means that were not transparent to users (and for malicious purposes), to NYU, which has explained what data it collects and why, AND has the consent of its users, seems disingenuous at best.


The point is that CA's data harvesting looked like it was transparent to users at the time they were doing it — which is precisely the appearance you'd expect a malicious app to try to convey.

The NYU project is probably on the level, but "they're probably on the level" isn't a very good security model at Facebook's scale.

More to the point, the FTC's 2019 Consent Decree [1] makes it fairly clear that FB is responsible for third parties' access to its users' data — and it would be prudent (from FB's point of view) to interpret this responsibility as also covering browser extensions.

[1] https://www.ftc.gov/system/files/documents/cases/c4365facebo...


For a project like this to happen at a major US university (especially once outside funding is involved), it needs approval of the university's Institutional Review Board. Getting IRB approval entails researchers proposing a strict set of guidelines for how the data will be collected/used/stored, examining the potential for harm to participants, and convincing a room of very very risk averse individuals that the project is safe and bounded in scope.

This is in stark contrast to CA. "They're probably on the level" because they have entire systems in place to keep them there.


The data CA used wasn't collected by them. They got it from a research project at Cambridge University's Psychometrics Center. This is exactly the same situation.


You are a little short on facts. Dr Michal Kosinski and Dr David Stillwell of Cambridge University pioneered the use of Facebook data for psychometric research with a Facebook quiz application called the MyPersonality Quiz.

Aleksandr Kogan was a lecturer at Cambridge who then built his own app based on Stillwell's and Kosinski's app and work. Kogan then turned around and sold his version to SCL, the parent of Cambridge Analytica. And the reason Cambridge Analytica wanted his app was that it worked under the social network's pre-2014 terms of service, which allowed app developers to harvest data not only from the people who installed the app but also from those people's friends.

Stillwell also denied Kogan's request for access to his and Kosinski's myPersonality dataset. So no, the Cambridge Analytica data did not come from Cambridge University or the Psychometrics Centre.

The NYU Ad Observatory's data is completely public and the intended audience of that data is journalists and researchers doing analysis of online political advertising. This is the polar opposite of clandestinely harvesting user data in order to manipulate people.

So no it's not "exactly" the same situation but rather the exact opposite.


From the Wired magazine explainer on CA:

"That data was acquired via “thisisyourdigitallife,” a third-party app created by a researcher at Cambridge University's Psychometrics Centre. Nearly 300,000 people downloaded it, thereby handing the researcher—and Cambridge Analytica—access to not just their own data, but their friends' as well."

https://www.wired.com/amp-stories/cambridge-analytica-explai...

re: "the exact opposite", you are putting a lot of weight on the intention behind this use. After the public response to CA you might appreciate why FB is going to strictly apply the rules.

But I generally agree that users running an extension in their own browser is a different situation than an app developer subject to the FB ToS and am not sure why FB would be allowed to block this.


Hi, I am David Stillwell. I can confirm that Kogan's app "thisisyourdigitallife" was his own endeavour and unrelated to the Psychometrics Centre. I'm not sure why Wired has written this now. They actually already wrote an extensive article about the Psychometrics Centre here in June 2018 if you want the real story: https://www.wired.com/story/the-man-who-saw-the-dangers-of-c...


Thank you for clarifying. Always nice to get first-hand information.


The "thisisyourdigitallife" app was not developed by the Psychometrics Lab; it was developed by Kogan (a lecturer at Cambridge University), who by then had formed his own company called Global Science Research Ltd (GSR). GSR signed the contract with SCL Elections and sold Kogan's app to them. SCL Elections is the parent of Cambridge Analytica.

Kogan's app was based on the myPersonality app, which was developed by Kosinski and Dr David Stillwell, who did work at the Psychometrics Lab and denied Kogan access to their dataset. Cambridge Analytica and Cambridge University are not the same thing at all. So there is no comparison between NYU and Cambridge Analytica, or Cambridge University for that matter.

Saying I'm "putting a lot of weight on the intention behind this use" is kind of a bizarre statement considering the data is literally available to everybody. See:

https://adobserver.org/ad-database/

The Project also clearly states:

">If you want, you can enter basic demographic information about yourself in the tool to help improve our understanding of why advertisers targeted you. However, we’ll never ask for information that could identify you"

And to that end, the code for the plugin that the Ad Observatory project uses is also freely available:

https://github.com/OnlinePoliticalTransparency/social-media-...

How much more transparent can you get than that? The goal of the Ad Observatory project is literally to try to understand how we are being targeted and manipulated. How is this in any way the same as the secret harvesting of data by a political consultancy that billed itself as providing "election management" services?


That makes quite a bit more sense. Thanks for clarifying.

To the grandparent: A researcher selling IRB-protected data would be effectively ending their academic career and opening themselves up to a mountain of legal trouble from the university and anyone who participated in the trial.


To clarify:

WHAT they were doing with the data was not transparent. HOW they were doing the data collection was completely transparent.

The worst of both worlds. Which is to say—we're saying the same thing.

University research projects such as these go through extensive review. The university is basically putting its name on the line for any research project that happens under its watch.

I'm not sure what you're advocating for. Is it that Facebook shouldn't be researched because they do not allow it? Not very sound reasoning to me.


Users have to install a browser extension in order to participate in the study. That's a way higher barrier than the personality quizzes that Cambridge Analytica used.

It also happens at a different layer of abstraction. Cambridge Analytica extracted data through the permissions framework that Facebook itself implemented.

Facebook's interest in its users' data doesn't need further explanation after you see that most of their profits derive from their control over it. The same control that allowed the profitable mass political targeting that these researchers are trying to study.


The researchers ask people to opt in to tracking of a restricted amount of data, and then install an extension that has access to their entire Facebook accounts.

There is no way for Facebook or anyone else to prove that the current or a future version of the NYU's extension won't scrape more data than people agreed to.
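For concreteness, here's roughly what that access looks like in an extension manifest. This is a hypothetical sketch for illustration; the names and values are invented, not taken from the NYU extension (whose actual source is linked elsewhere in the thread):

```json
{
  "name": "hypothetical-ad-collector",
  "version": "0.1",
  "manifest_version": 2,
  "permissions": [
    "*://*.facebook.com/*"
  ],
  "content_scripts": [
    {
      "matches": ["*://*.facebook.com/*"],
      "js": ["collector.js"]
    }
  ]
}
```

A host permission like this gives the content script access to the full DOM of every Facebook page the user loads, ads and friends' posts alike; any restriction to ad data only is enforced by what the script's code chooses to collect, not by the browser's permission model.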


> There is no way for Facebook or anyone else to prove that the current or a future version of the NYU's extension won't scrape more data than people agreed to.

How so? The extension is open source, anyone can audit it.


The plugins are just JavaScript, so verifying that is actually a trivial task. You just open the plugin and read the source. NYU could also provide the code, to make it even easier.
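To illustrate what "read the source" gets you: the auditable part of a collector like this boils down to a small function you can inspect. A hypothetical sketch (field and function names are invented, not the actual NYU code):

```javascript
// Given post records already pulled from the feed DOM, keep only the
// sponsored posts and drop anything that could identify the viewer
// or their friends before the data is sent anywhere.
function collectAds(posts) {
  return posts
    .filter((post) => post.sponsored) // ads only, ignore organic posts
    .map((post) => ({
      advertiser: post.advertiser, // who paid for the ad
      text: post.text,             // the ad creative itself
      // deliberately no viewer name, friend list, or profile fields
    }));
}
```

Anyone auditing the extension can confirm that identifying fields never leave this function, which is exactly the kind of check the commenter is describing.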


You cannot verify that the researchers won't change the plugin to malware in the future.


You cannot verify that Facebook will not change its product to malware in the future. That is to say, at some point, you trust the software publisher in the same way you trust the service operator.


> You cannot verify that Facebook will not change its product to malware in the future.

I apologize for wasting peoples time, but I can't resist taking the low hanging fruit here.

Facebook is malware.


It's fine for you to trust the software publisher, but that doesn't mean Facebook should, especially when they're legally liable for data breaches that could result from it.


Wait a second. How does Facebook trust Firefox? Microsoft Edge? Safari? The other 20 extensions I have installed, three of which save a copy of every single page I visit?

They don’t. They don’t, at the least, care about anyone’s data - they just phrase it that way to sound legitimate because saying “we want no oversight whatsoever” sounds whiny, and it is. (And so does what they ARE claiming to anyone who understands the technical side).


If Facebook is concerned about data breaches from a browser plug-in, why don't they just stop serving the data to the browser? If the data is that valuable and easy to get, it wouldn't be hard for someone to write malware that collects the data and phones home once in a while.


The whole point is that a major problem with CA was the friends'-data collection at scale. The NYU app's scraping modality could easily do the same thing, which violates the present FB consent/sharing model of you controlling whether your data goes to third-party apps. FB has to fight as hard as possible against such apps. Remember Clearview AI? If we want FB to fight CA and Clearview, they must fight here as well.


> If we want FB to fight CA and Clearview they must fight here as well.

Or they could partner with NYU, offer technical insight to maintain integrity and privacy (me stifles laughter) and do everything to support researchers who potentially could help build trust in their platform.

Going after this group just isn't a good look if you're Facebook. If there are valid concerns then don't start with a Cease and Desist.


They might be willing to partner if NYU is willing to indemnify Facebook against any and all liabilities which may result. How likely is NYU to take on that risk? Why should we expect Facebook to take on the risk for NYU?


So is your opinion just that facebook just shouldn't be researched?


I don't really have a view on that, but I think researchers and universities should be held fully liable for the harms they cause, that way, they'll be more careful.

Some research just isn't worth the risk, but as an outsider, I'm not in a place to make that judgement. NYU could also insure against data breaches; in that case, we might get some good security audits.


Hang on. The whole chain of reasoning started with FB protecting users' interests through the permission system, which NYU ostensibly circumvented. How is it in the users' interests to indemnify Facebook?


If NYU internalizes the cost of all breaches (by indemnifying FB against harm), they will be very careful with the data, and prevent another Cambridge Analytica problem.


> The NYU app scraping modality could easily do the same thing

So could any browser extension with the ol' "read and modify your data on *" permission. Or any browser. Or any third-party Facebook client.

There is a difference between being technically capable of doing a thing and actually doing the thing- especially in cases where the software authors are well-known and relatively easy to hold accountable. To say otherwise is a little bit goofy!


> especially in cases where the software authors are well-known and relatively easy to hold accountable

Like a certain lecturer and senior researcher at University of Cambridge?

https://en.wikipedia.org/wiki/Aleksandr_Kogan


Suppose NYU sent a person to sit behind every participant and take a photo of their screen each time it changes. That would be exactly the same as what NYU is doing (except more expensive); the participant knows that and gave their consent. It is within their rights to show their screen to anyone.

They are just doing it more economically than sending a person. This is entirely unlike CA, which, effectively, sent a person to go through all of a participant's available information as quickly as possible while they weren't looking and store a copy of everything.


Sure. So what exactly is the binding rule which Facebook should apply here?

Researchers can get access to anyone's Facebook data if people enable it? What about the ones at Chinese universities? Or just respected universities? Which universities are those? How do we decide?

You're missing the point. There needs to be a black and white line, and whatever Facebook allows they're always being demonised, nobody gives them the benefit of the doubt.


> Researchers can get access to anyone's Facebook data if people enable it?

Yes. Where is the problem?


This is ironic. Cambridge Analytica started with a university with an IRB collecting personal data, which was later sold to for-profits.


Cambridge Analytica happened with an app hosted on Facebook. This is hosted on your browser. So it’s not exactly how Cambridge Analytica happened because the trust model is completely different.


The legal problem and consequences for Facebook weren't because of users who opted in to CA's collection; the problem was CA getting your friends' data, when those friends did not consent.


The legal problems for Facebook were mainly because they were an active party to the collection process, which could not have happened without that active participation.

This collection can happen manually, within the user's regular and fully authorized use, without Facebook's involvement, and in fact without any ability for them to figure out that it happens.

That it happens through a browser extension (which they may or may not be technically able to detect) should not change legality or legitimacy.


Well, what exactly should NYU do instead? There is no API with fine-grained permissions that they can use: To get the data they are interested in (ads), they have to resort to scraping - and a scraper will always have access to all data on the page.

So there is no way for NYU to not have access to friend data if they want access to ad data.


Weak take. All users of the NYU app have to explicitly sign up and grant the researchers access to their data. It has a very clear privacy policy:

https://adobserver.org/privacy-policy/

And, unlike Facebook which sucks up an ever increasing amount of data on you, this project takes only basic demographic information (age group, gender, ethnicity) and what ads that you're shown. No personal data is retained by NYU.


The user signed up, but the app would have access to data about that user's friends, who didn't sign up.


It only has access to what the user is browsing; if a friend hasn't posted in a year (and doesn't appear on the timeline) and the user doesn't go specifically to their page, then Ad Observer would be oblivious to the existence of that user (and of the friend relation).

This is entirely unlike an FB app like CA’s that had full unadulterated access to anything the user might browse.


Nope. This isn't a Facebook app. It's a browser plugin.


Doesn't NYU have an Institutional Review Board?


So does the University of Cambridge, and that didn't prevent one of their researchers from scraping and selling user data to Cambridge Analytica.


As far as I've been able to find out, Kosinski and the others developed their techniques at the University of Cambridge (and other universities), then took those techniques to Cambridge Analytica/SCL (something that no one here would have any complaints about); CA/SCL then applied them to Facebook. The UofC IRB had no influence on that.

If there is any evidence that CA/SCL/Kosinski said the data collection was affiliated with UofC, I cannot find it. And when Kosinski attempted to use the data in his research, the UofC IRB denied it.

In this case the data collection is by the NYU AdObservatory project, meaning the data collection and its use (should) have to go through the IRB.


It does.


That's enough? Any university with an IRB can scrape people's personal data?


Just in case OP never comes back or you're not aware when you reply later:

This was _exactly_ the issue with CA, data for academics with an IRB laundered into a for-profit entity.


More or less, yes. The purpose of IRB review is to ensure that personal data collection and use are legally and ethically kosher.

Cambridge Analytica and the researchers when they were working for it never claimed to be doing UofC research; if they did, UofC could and should have applied an academic (and possibly legal) baseball bat to their collective face. In fact, when Kosinski did try to use the data as part of his UofC related research, the UofC IRB denied it.


Too bad he didn't cite where in the source code it does any of this stuff.


I mean, I trust NYU researchers a lot more than I trust Facebook execs.

All these big data-harvesting companies (FB, Google, etc.) start with the false premise that well-informed users have affirmatively chosen to trust that company with their private data.


s/NYU app/Google Chrome/g and somehow FB is OK with it. So it isn't the security model; it's the people and the goals of their actions that irk FB.


>"The supposed scandal around the data analytics supplied to campaign groups by Cambridge Analytica was manufactured by people with a political agenda.

>...UK Information Commissioner’s Office has published the findings of its three-year investigation (predating the scandal) into the matter, which concluded there was no illegal electoral interference whatsoever...In other words, the data was commercially available and concerned US voters. The only ‘special sauce’ in CA’s model was the hyperbole of its sales people..." [1]

The left has pushed false narratives and misinformation, making Cambridge Analytica, like Russia, a convenient scapegoat for all the things. The same tricks are in play now with the Hunter Biden laptop coverage, which is nonexistent in the MSM.

[1] https://telecoms.com/506834/uk-information-commissioner-conf...


Turns out Cambridge Analytica might have been practically useless: https://www.wired.co.uk/article/cambridge-analytica-facebook...


That's not what the article you posted is arguing. The article is arguing that the data they collected directly through their trojan apps is not particularly useful, NOT that the full connection graphs (including data drawn from connections of users who didn't use their app) they collected and allegedly used to target advertising is useless.


Don't miss Alex Stamos (former Facebook Chief Security Officer's reply): https://twitter.com/alexstamos/status/1320006154424967170

If I'm understanding Alex right, he's saying that Facebook's 2019 FTC consent decree requires them to limit the personal information collected by apps on the platform.


It's a browser plugin, not a facebook app. Why should facebook have a say on what plugins I have installed in my browser that did not come from facebook? Should they be able to have a say about ublock origin or other apps I have installed on my computer/browser?


It would be a new angle to require a functionality qualification examination to weed out any compromised browsers or OSes, all in the name of security and privacy, of course.


SECURITY ERROR

This comment by /u/XMPPwocky cannot be displayed because of a security problem with your device. If this error persists, reinstall your operating system or replace your device.

Technical details: "An enclave could not be verified due to a problem with its digital certificate" (E_BAD_ENCLAVE_SIG).

Logging info:

Local: Success (monotonic counter 5 increment OK)

Remote: Success (200 OK in 0.113s)


Is ublock origin allowing you to scrape data from their platform? Then they may want to block that.


the clipboard lets me scrape data. ban that too?


I'm sure they would absolutely love to dictate this.


FB claims automated data collection, but the data is apparently input to the app by a user. I think the hairs will be split between copy-paste mechanics and ten-finger entry in response to a prompt, thus massaging the definition of automated entry.

(1) FB has consistently refused to publish anything about how the ads are targeted.

(2) The NYU researchers have tried to fill that gap, offering the Ad Observer plug-in to users who want to voluntarily donate the ads they see — along with the limited targeting data FB displays to users.

(3) Here’s where things get troubling: Facebook is now trying to shut down the Ad Observer plug-in, saying that it violates Facebook’s terms of service by automating the collection of data that Facebook shows to its users.


Though presumably the Ad Observer plug-in didn't sign up to Facebook's terms of service. If anyone's violating their terms it's probably the users who install the plug-in.


Why would the AdObserver plugin (completely independent piece of software installed by the user) need to follow facebook's Terms of Service?


Wait so Facebook can collect data about its users but Facebook users aren't allowed to collect data about Facebook?


If I’m your friend on Facebook should you be allowed to go to my page and give all the information I’ve shared with you to another 3rd party?

You trust NYU, OK fine. Can I give the same information about you to pro-Trump researchers?


Yes, the OP is absolutely allowed to do this. There are consequences to befriending people, in real life and online. It's up to you to decide who you befriend.


Not according to FTC.


Yes. By befriending someone on Facebook you gave them access to data you chose to share with your Facebook friends. You don't get to choose what they do with that data. It's just like if you tell someone something irl and they use that info in a way you don't like. Once you share info you can't put the genie back in the bottle.

Unfortunately, Facebook has a history of changing data permissions that have caused info to be shared in ways users didn't intend.


By that logic, why is it ok for Facebook to have that data? Why is it ok for Facebook to scrape my address book?


Not that I want to defend Facebook, but that person entered into an agreement laid out by Facebook's terms of service and privacy policy (lol), not the privacy policy set by this plugin.


Yes. So if anyone here is in violation, it would be the user installing the plugin.

But I don't remember the ToS saying "you promise not to let anyone see the ads we show you" (remember, this isn't user data that's being collected).


The ToS can't guarantee what other users will or will not do with the data displayed to them, though.


Something can both be a shitty thing to do, and still be allowed because we'd be worse off if we tried to ban it.

(Usually because detection and enforcement is absolutely unreasonable)


You shared the information with your Facebook friends without any protections. Why shouldn't they be allowed to share it with other people? What legal obligation do they have to keep the information you shared private?


>Why shouldn't they be allowed to share it with other people?

>What legal obligation do they have to keep the information you shared private?

Those exact same questions can very well be asked of the whole Cambridge Analytica scandal, and yet some people will give different answers to those two scenarios.

The Cambridge Analytica app explicitly asked users for permission to access their data and the data their friends publicly shared with those users. The first is obviously OK, but the latter was what people had issues with, and I don't see how it is different here.

As for what legal obligations your friends have to keep your information private, I don't think they do. However, Facebook does have the obligation to not share info of users who didn't explicitly consent to it with third party apps, according to this order from FTC issued in 2019[0]. And third party apps that share not only your FB info, but that of your friends (who do not use those third-party apps), definitely fall under this.

0. https://www.ftc.gov/system/files/documents/cases/c4365facebo...


> However, Facebook does have the obligation to not share info of users who didn't explicitly consent to it with third party apps

But Facebook is not sharing info of users' friends. Users are sharing information about their friends.

If a user should not be able to access particular information about their friends, then the onus is on Facebook to restrict that access. It was Facebook's fault for exposing excessive data to users' friends during the Cambridge Analytica scandal and it's their fault for doing the same thing now.

Facebook needs to clean up their own mess instead of suing research groups for taking advantage of it.


>It was Facebook's fault for exposing excessive data to users' friends during the Cambridge Analytica scandal

I don't follow this logic at all. The data shown to users' friends is the same data that is shown to them now. Which is usually all their public photos (nothing from private albums), the friend list (if they didn't make it private), etc., only the stuff that friends are expected to be able to access (and still can). And on the list of permissions on the permission request page, the app had a separate line for "friends' info" specifically (just like it has for every single permission requested), so there was nothing sneaky about it. The CA app asked users to provide them the same data about their friends that they can see in the browser by visiting their friend's page (and page only, nothing private or your messages with them; basically, only the info that everyone in the same security group that you are in sees). The exact same set of data that the browser extension this whole thread is about is accessing.

With that error corrected, it sounds like you are arguing that FB was not at fault during the CA scandal, for all the logical reasons you brought up, and then concluding that FB was at fault and CA was in the clear.

I am reserving my own judgement on who was at fault, but I hope you can see why your reply left me (and likely some other people) confused.

As a cherry on top, CA didn't acquire the data directly from the app, as it wasn't their app. They got the data later on from a research team at Cambridge University's Psychometrics Center, which was the one originally collecting it. Sounds eerily similar to the scenario at hand.


Facebook created a platform which allows users to access information about others users who have agreed to be "friends". A third party then came along and asked users for the information which their friends gave them access to. The users gave the third party the data.

I'm not missing anything here, right?

If the third party should not have access to the data, then neither should the friends who gave it to them. Facebook is responsible for allowing the users access to the data.

If users should have access to the data, then it's the friends' fault for agreeing to be Facebook friends with those users in the first place. Alternatively, it's Facebook's fault for not making it clear what data is made available to friends.

Either way, I don't see how this is a problem with the research group.

I guess you could argue that the data was still technically owned by the friends and therefore the users had no right to give it away. In which case the fault belongs to the users.


>Either way, I don't see how this is a problem with thr research group.

Which is a valid take, not trying to say that your logic doesn't make sense. It does. But it is literally no difference in terms of what happened during the CA scandal, so all the same rules apply here. If you are ok with this group of researches and think they did nothing wrong, and that FB should have let them have the data, then the CA situation was a perfect happy road scenario for you. Because in that case, CA just got that data, and FB didn't stop them. Win-win, right?

Also, regardless of how valid this take is, FB was ordered by FTC to prevent third party sharing of friend data like that from happening. So FB's hands are kinda tied on this one.


Exactly, the FTC order requires FB to prevent data about someone from leaving FB's servers and going into a third party's database unless that person explicitly allows it.


If you do not trust your friend, then unfriend them on facebook. Or sue them.

If said friend takes a pic of you and records everything you say in real life and gives that to researchers (or "pro Trump" people), you wouldn't go after your or their landlord claiming they "allowed this to happen", nor would you go after the city where it happened, would you? (your friend is voluntarily participating, knowing what they are doing)

Now if the researchers tricked your friend into giving them information about you somehow, then you'd go after those researchers. But do not expect your landlord to go after those researchers for you. (NYU researchers misinforming participants about the scope of their data collection)

Now if your friend abused a camera that was sneakily installed by your landlord to obtain that information, then you might go after that landlord (Cambridge Analytica on Facebook)


I don't understand what the researchers violated - it looks like they didn't sign FB's EULA, nor did they do anything resembling a CFAA violation.

It is the FB users who signed the EULA and use an "unapproved" user agent to access FB services and voluntarily share the data (isn't FB a sharing platform, btw?) in an "unapproved" way. Thus FB should go after the real violators: their users. I wonder why FB didn't do that...

I mean, I can write any stupid EULA, yet until you agree to it, a C&D based on that EULA is just my personal hallucination. And even if you do agree to it, your communication/business/etc. partners don't magically become bound by it too.


Facebook says it's a privacy issue. So it doesn't make sense to go after individual users - users can violate their own privacy if they really want - but it might make sense to go after the researchers if Facebook thinks they're tricking people into giving up more privacy than they expect. (I hate to sound like a broken record, because this comes up truly constantly, but the Cambridge Analytica scandal was caused by an academic researcher collecting voluntarily shared data.)


Cambridge Analytica was doing it on FB platform. Thus it isn't related to the current situation.


Facebook's complaint about user privacy is really a red herring. I am sure that someone is worried about negative stories written based on the data collected and tasked another person to shut it down in a way that makes the researchers look like a villain and Facebook looks like a hero.


It's well within their rights as a private company, right? We need to have a serious discussion about the power these FAANGs are exerting and we need to stop making excuses for them.


Is it within the rights of a company to dictate what software I have installed on my personal machine? If Facebook wants to ban every account using the software that is within their rights, but they don't get a say in what I have installed.


Different field, but GeForce Now received a lot of cease-and-desist letters from a bunch of game publishers for allowing people to run games on their platform.

For context, GFN essentially lets you rent a virtual machine in the cloud, where you can log in with your own Steam account and play the games that you've already purchased. For whatever reason, game publishers saw it appropriate to demand that GFN stop offering this service. And, more baffling, some users actually supported the publishers' actions. Just to make it clear, GFN is not letting you play games you do not own; it simply lets you rent hardware on which to play games you've already purchased on Steam.

Here is one of the game devs explaining why they asked GFN to remove the game from their platform (and subsequently getting slammed by gamers): https://mobile.twitter.com/RaphLife/status/12341813158402293...


This is no different than cd.com back in the day. You could rip and upload CDs so you could play your own songs anywhere, and it was immediately shut down by the RIAA. The argument is that streaming doesn't have a doctrine of first sale, even for items you have supposedly "bought" at full price, so publishers get total control over anything that is streamed. The government and the courts have agreed.


In this case though, the license on Steam follows your account on the platform, not a single downloaded digital or physical copy.

So the lack of license argument doesn't make sense. At best they can put an explicit term in the license that the game can't run from servers you rent access to, or equivalent. But then that has to already be in the license.


I don't remember the name of the company but there was a video streaming service that worked similarly a decade or so ago with remotely playing DVDs. They were sued out of business despite going to extreme lengths to follow copyright laws. That legal precedent was probably used here to block GFN.


The one I recall was about streaming OTA transmissions captured from individual antennas, one per user, essentially renting remote access to the antennas. The court essentially said relaying the stream is a new broadcast and needs its own broadcast license.


"Devs should control where their games exist."

That seems rather questionable. It is probably the snarkiest tweet I have ever read.


Not too long ago I remember people on this exact site making the argument that Slack had a right to determine what browser extensions I ran. Slack also tried to make this argument in the name of "security" when I wrote to them about it. But yes to your point, NO they do not get a say in what users install.

  Thank you for taking the time to write in and share 
  your concerns.
  We hear you and understand how important this issue is 
  to developers. As a communication company we want to 
  make sure we don't put our customers and their data at 
  risk, and it's something we take very seriously. We 
  provide a full-featured platform with many avenues for 
  improving user experience while working with Slack, but 
  we need to also provide the security and privacy 
  controls business owners, IT administrators, and users 
  expect. We're happy to continue the conversation about 
  UX improvements and future extensions to the platform. 
  This is something we'll keep working on as we listen to 
  feedback from the developer community.


  That is a cop out and you know it. An IT administrator 
  or corporation is not using this extension, and they 
  would lock down their users from installing BROWSER 
  extensions if they thought that was a security risk.


  Thanks for getting back to me. I appreciate your 
  thoughts on this matter.
  I'd like to pose a hypothetical to you if you don't 
  mind! Bearing in mind Slack is a product designed to 
  help teams work together and is geared towards business 
  and enterprises — imagine for a second we're working 
  with an enterprise considering adopting Slack and their 
  security team comes across this extension (or something 
  like it) — and identifies it as a 
  security/privacy/reliability concern. Our job is to 
  alleviate these concerns. They expect better from us 
  and we're doing our best to meet and even exceed those 
  expectations. We're certainly not perfect and we can 
  always do better — for businesses big and small and for 
  developers.
  It is important to note here, we're learning from this 
  experience. We're working with the developer to find a 
  middle-ground which is beneficial to everyone involved. 
  We're reevaluating our processes for these situations. 
  We're listening to feedback from users such as yourself 
  and identifying areas we could improve.
  I understand this has been a disappointing and 
  frustrating situation and I do apologise for any 
  difficulty it has caused you. I promise you, we're 
  working on it. I for one am very excited to see what 
  comes from this. It's been a learning opportunity for 
  us all.
  Let me know if you have any questions, or suggestions. 
  I'm here to help!


No it's not. However, they didn't send a cease-and-desist letter to the users that installed the plugin. They sent the letter to the researchers that built the plugin that uses the service Facebook offers in a way that is against Facebook's wishes. Therefore, technically, the letter is going to the correct party and for the right reasons.

Mind, I'm saying 'technically'. Ethically, I think Facebook (and other big social tech) should be researched more. In legal ways, this is how they should handle this.


It's interesting. What would be an "acceptable" mix of demographics for political ads recipients? That is to say, what is the answer to that question where there is no PR story, where Facebook doesn't look bad? If the answer is, none at all... I'm sure we can see where they are coming from.

In my opinion you don't really need to do the NYU study. To be intellectually honest, many political ads will disproportionately appear in front of users with different demographics than their census tracts, regardless of their targeting parameters. In my experience, the demographics of users in many software products are arbitrary, telling you nothing about the content and much more about acquisition channels and technology usage patterns at a particular point in time.

As far as I know, Facebook allows some targeting parameters for political ads. So they should publish how often those targeting parameters are selected. Great, advocate for that.

To be intellectually honest, that will conclusively show that ad buyers use a wide diversity of targeting parameters that, in aggregate, represent a complex mix of objectives oftentimes only adjacent to a specific election. Almost certainly Facebook already looked at this and found that geography, gender, age, and proxies for users' race (like "multicultural affinity") are among the top choices, and that looks bad, even though it may be an important part of all ads targeted anywhere.

Is NYU's study going to have enough power to measure targeting in an intellectually honest way? They can certainly write something descriptive.

That kind of descriptive finding, "Well, here are some ads we editorially chose to look at, and some of them disproportionately appeared in front of users of, e.g., one ethnicity more often than others" - I can see how that is a lose-lose for Facebook.


> That is to say, what is the answer to that question where there is no PR story, where Facebook doesn't look bad?

How about don't have political ads at all, if this question is so difficult to answer...


How was their acquisition of Instagram and Whatsapp legal? Haven't antitrust laws broken up companies for less?


Pre-Reagan, sure. Since then, none. There's zero interest in breaking up monopolies in the US from either Democrats or Republicans.

Hell, today's AT&T has more market share than the Bell System did before it was split up.

EU, on the other hand, appears to be slowly progressing towards the goal of breaking up tech monopolies. Leaked plans would shake up the tech quite a bit: https://www.eff.org/deeplinks/2020/10/eu-vs-big-tech-leaked-...


Nice rewriting of history. The Microsoft antitrust case was post-Reagan. AT&T now has about a 34% market share; the rest is split between Verizon, Sprint, etc. Not the same as what Bell had.


Microsoft has been broken up?


In a just society, their "rights" stop at their network interface.


"Government is that which governs"

Who governs information and speech online? Washington DC or Silicon Valley?


Silicon valley. Was that a rhetorical question?


"It's well within their rights as a private company, right?"

To do what? What does that mean?


Block extensions, user software, that interact with their site. Remember, Slack did something very similar not very long ago.

Only with pushback did they relent somewhat.

https://g3rv4.com/2018/08/bye-bye-betterslack


Well within their rights to do what? They certainly have access to the courts.


Yes, the FAANGs are very close to being public utilities, and therefore deserve a special treatment.


I see this repeated often, but it's totally incorrect, they do not resemble utilities at all, not even a little bit. I think everyone agrees these companies need more regulation, but the idea that they are in any way comparable to utilities is absurd.


> they do not resemble utilities at all, not even a little bit.

I think that these large communication platforms absolutely resemble other large communication platforms that are currently covered by common carrier laws.

Specifically, a lot of the functionality that these platforms provide fulfills a use case similar to the phone network.

And the phone network is both a large communication platform, and is all covered by common carrier laws.

The laws need to be updated to recognize that many of the online communication platforms are now as important as the phone system, and therefore should be covered by our existing common carrier laws.


All of this assumes an equivalence in function and access that just isn't there.

If my phone company decides who I can call, then I've got issues. I can't trivially change carriers however I want.

If a website arbitrarily bans all kind of people it will soon be their own problem as people leave with a single click.

More specifically, I do not need linkedin, Facebook and co at all to communicate with people over the internet, stuff like P2P software never went away.


> If my phone company decides who I can call, then I got issues.

> If a website arbitrarily bans all kind of people it will soon be their own problem as people leave with a single click.

False. I use facebook for communicating with people more often than I use the phone network.

Being banned from facebook would have a much larger effect on me, and many others, than being banned from ever making phone calls again, due to the fact that we use facebook for the vast majority of our online communication.


Is that because you can not replace Facebook, or because it's merely inconvenient to do so?

Because the law doesn't protect convenience in these contexts.


> Is that because you can not replace Facebook, or because it's merely inconvenient to do so

As in it would be much more difficult to replace Facebook as a communication platform for me than it would be for me to be banned from making phone calls ever again.

So facebook is more in the category of "can not replace" than being able to make phone calls.


Again, is that because you literally don't have the physical option of communicating by other ways, or because the people you want to communicate with are too lazy to use another option?

If I ask you to meet me at the bar and you're banned from the bar I've selected, it's not the bar's problem.


> literally don't have the physical option of communicating

I am saying that the problems of getting kicked off of facebook are larger than those of getting kicked off of the communication utility that is phone calls.

And this is due to things like network effect.

And these issues that make it difficult for people to switch are larger on facebook than they are for phone calls.

So the "physical prevention" is larger for facebook than it is for phone calls.


I seriously doubt that. Network effect "just" makes a particular means of communication more favorable - it doesn't prevent the use of any of the options.

Not being able to use the phone cuts you off from a lot of services where no other option exists.


> Not being able to use the phone cuts you of from a lot of services where no option exists.

Nah, it really doesn't when compared to something like getting banned from facebook.

I make way more video calls with people than I make phone calls. And probably around 50% of my communication is done over FB message.

It would be way less of a problem to get banned from making phone calls for me and for many other people.


> I think that these large communication platforms absolutely resemble other large communication platforms that are currently covered by common carrier laws

They have nothing in common except the "communication" label.

> Specifically, a lot of the functionality that these platforms provides, fulfills a usecase that is similar to the phone network

They don't. The "phone network" is what provides access to the internet, a website is not a phone company, it sits at a higher level of abstraction. That's like saying a popular TV show is a broadcast network because everyone watches that show.

> online communication platforms are now as important as the phone system

That's not true. If the top 10 most popular websites on the internet disappeared overnight there would still be many thousands of ways to communicate over the internet.


> They have nothing in common except the "communication" label.

Sure they do. They have usecases in common.

There are many ways that I communicate with people online that have now entirely replaced calling those people on the phone.

That is how they are similar. They are similar in that online services are, in many cases, direct or indirect substitutes for the same exact usecase.

> They don't

They absolutely do. The shared functionality is that I no longer use phone companies anymore; I instead use online services for that same exact purpose.

> That's not true.

It absolutely is true. To explain what I mean, I would say that I would truly rather be banned from the entire phone network than to be banned from something like facebook.

This is because I legitimately use facebook for communication more often than I use phones. Therefore, being banned from facebook would have a larger impact on my life than being banned from ever making a phone call again.

That is how facebook is more important than the phone network.

It is more important in that being banned from facebook would have a larger effect on my life than being banned from making phone calls ever again, due to the fact that I use facebook much much more for communication.


> This is because I legitimately use facebook for communication more often than I use phones. Therefore, being banned from facebook would have a larger impact on my life than being banned from ever making a phone call again..

If you use Facebook as your primary communication platform, I can see how getting banned from Facebook would be very inconvenient. Unfortunately, the fact that you would rely on Facebook in such a manner doesn't change the reality that Facebook is just one website, while a phone network is fundamental infrastructure that underpins internet connectivity. For the vast majority of Facebook users, losing access to the phone network would mean losing access to not just Facebook, but everything on the internet. You might have grown accustomed to thinking of Facebook as something more than just a website, but that conception is simply wrong; the idea that Facebook is comparable to the network that Facebook runs on is categorically incorrect.


> the fact that you would rely on Facebook in such a manner

It is the reality of the situation that a whole lot of people rely on facebook in such a manner, and because of the network effect they would be unable to convince all of their friends and family to switch to other platforms.

One person cannot defeat platform locking and network effects.

> while a phone network is fundamental infrastructure

Not really. It would be easier for me to never call someone's phone number again, than to get rid of other communication platforms that I use.


> It is the reality of the situation that a whole lot of people rely on facebook

You're just wrong. The overwhelming majority of people use a variety of communication services like e-mail, imessage/sms and many others, and this is common knowledge, people who only use Facebook for communication are frankly extremely rare.


> people who only

I am saying that they rely on it more for communication than they do on phone calls.

I communicate with people much more over Facebook than I do through actual phone calls, and it would be less of a problem to be banned from ever making phone calls again than it would be to be banned from facebook.


> I am saying that they rely on it more for communication than they do on phone calls.

Nobody is talking about phone calls and you know it. It doesn't matter that you use Facebook more than anything else, it doesn't change what Facebook actually is. If you use discord or slack to do most of your communicating that doesn't mean they become utilities, that's just your personal preference.


> Nobody is talking about phone calls

Literally I was the one to bring up this example in the very beginning. It was my example, that I chose at the start. So yes, that is relevant.

The fact of the matter is, that me being banned from ever making phone calls again would absolutely be a larger problem for me, and many other people, than if we were banned from using facebook.

> It doesn't matter that you use Facebook more than anything else

Of course it does. It is a point of comparison, so as to show that it would be a bigger problem to be banned from facebook than it would be to be banned from making phone calls.

> that's just your personal preference.

I can assure you that there are many people for whom it would be a bigger problem to be banned from facebook than to be banned from making phone calls.


> It was my example, that I chose at the start. So yes, that is relevant.

It's not relevant because you're ignoring the fact that "the phone network" doesn't primarily mean "phone calls" it primarily means "internet access".

> The fact of the matter is, that me being banned from ever making phone calls again would absolutely be a larger problem for me

Yes, you keep repeating that over and over again (3 times in this response) but what you don't seem to understand is that nobody is forcing you to rely exclusively on a single website for all your communications, that is a self-imposed restriction that isn't meaningful when trying to decide if a website meets the definition of a utility.


> what you don't seem to understand is that nobody is forcing you to rely exclusively on a single website for all your communications

That does not change the fact that it would be a bigger problem for me to be banned from facebook than it would be for me to be banned from making phone calls, lol.

So when you say this "nobody is forcing you", you are ignoring the fact that they would be forcing a problem on me that would be larger than if I were banned from making phone calls.

So yes. They would be forcing an issue on me that would be larger than if they banned me from making phone calls.

> ignoring the fact that "the phone network"

Phone systems have fallen under utilities laws since before the internet existed. Therefore the analogy to phone calls is relevant.

You can look at home phone line systems. A home phone line that gives zero internet access still falls under utilities laws.

Are you aware that a landline, that gives zero internet access, would still have to follow utilities laws? Just want to make sure you are aware of that.

> meets the definition of a utility

A perfectly reasonable thing to do is compare it to how much a problem it would be to switch from a different utility.

A landline, that has no internet access, is a utility. It falls under utilities laws, even if the singular only thing that it does, is make phone calls, without any internet access. Phone calls, without internet, is a utility.

And switching away from the system that only allows you to make phone calls, and has no internet, and is therefore a utility, would be easier than switching away from facebook.


It doesn't matter what's easier, it matters that facebook is just a website, not something that at all resembles a utility.


> It doesn't matter what's easier

Sure it does. It matters regarding the justification for the law.

Yes, I understand that common carrier laws do not currently apply to facebook. But I am saying that the law should be changed so that they do apply to it.

And the justification for this, is because we have utility laws that currently apply to things like a phone system, (even if that phone system provides no internet), and yet it is easier for me to switch from that than it is to switch from facebook.

I understand that the laws don't currently apply to facebook. But it absolutely does resemble a utility in that the problems that it pushes on people are larger than that of other utilities.

That is how it resembles it. The problems are larger than that of another similar utility.


How so?


It's absurd on its face and this topic has been beaten to death on this forum; the burden of demonstrating why social media websites are like utilities rests on the person making the claim.

However, for the sake of discussion, I'll start out with the fact that a privately owned website is not an example of shared public infrastructure. Think water, electricity and other basic staples of civilization... Facebook is not an example of that.


I admit I use some Google services. At the same time, I have no relationship with the non-Gs of FAANG, which is arguably 80% of them (though perhaps not 80% of the influence). I'm not sure how they can really be considered public utilities at that point. None of the services they provide are essential.


Much of the web is hosted on AWS. So, you have an invisible relationship with Amazon. Same with Facebook and their trackers, unless you block them.


> Much of the web is hosted on AWS.

Fair enough. I'm sure I use some web sites that are hosted on AWS, I did not consider that.

> Same with Facebook and their trackers, unless you block them.

That may be (though I block at least some of them), but that's hardly equivalent. I don't depend in any way on Facebook tracking me. If they stopped, I would certainly not suffer from it.


> None of the services they provide are essential.

I use online messaging and online communication more than I use the telephone network.

To me, and many others, these online platforms are as or more essential than the phone system, which is already covered by common carrier laws.


Utility regulation doesn't depend only on importance. It depends on importance plus access.

That's why your local grocery store can ban you even if food is necessary for your survival, because you didn't lose access to other sources (stores).


> I use online messaging and online communication more than I use the telephone network.

That's your choice, you don't have to use any particular online messaging service. My cousins communicate over steam more than over SMS, that doesn't mean steam is a utility.


I can cook dinner, heat my house, call 911 and even check out government websites on the internet without google or facebook.


More government websites are relying on infrastructure from these companies. S3, cloudfront, etc.

And don't worry, soon enough Amazon will insert itself between you and 911: https://aws.amazon.com/blogs/publicsector/modernizing-911-to...

Edit: I would like to point out my original parent comment that started all of this was in the context of FAANGs not just Google/Facebook


How exactly is Netflix close to being a public utility?


How exactly does one thing out of a set of things not form the set of things?


Ah yes, it’s assumed that 20% of the set is not included but let’s use the snappy acronym anyway.


I can buy the argument that they could require special treatment but I'm much more sceptical that social media specifically resembles a public utility.

If I decide Facebook and all its products are trash I can pretty trivially not use any of them. If I dislike all social media I can choose not to engage with it at all. I would have a much harder time going without electricity or water.

Social media feels somewhere between a public commons and print media, and it's acting more and more like print media all the time. This election, I'm seeing the banning of certain sources like the New York Post, and the editorialization of what people say via friendly links to approved "non-partisan fact checkers" who are essentially opinion columnists that cite more sources.

Netflix is about as close to a public utility as the local movie theatre, which is to say, nowhere close.


Did they even agree to the ToS? It seems like they wrote their own plugin to harvest the data, so what agreement did they make with Facebook that they are in violation of?


WSJ: "In a letter sent Oct. 16 to the researchers behind the NYU Ad Observatory, Facebook said the project violates provisions in its terms of service that prohibit bulk data collection from its site."

That's the key point here. The researchers are not a party to Facebook's terms of service. The user installing the add-on may be, but that does not bind the add-on developer. (This is called "privity" in law; contract constraints do not obligate third parties who didn't agree to the contract.)

Facebook could disconnect Facebook users using the add-on, if they can detect them. That would be a bad PR move.


> The researchers are not a party to Facebook's terms of service.

Unless the researchers have their own personal FB accounts. You know, like 3.1 billion other people.


Then they would only have a ToS violation claim against those particular users. It's not legally straightforward that FB could demand that the tool not be distributed. Any third party that didn't agree to the ToS would not be bound by it, and could still redistribute it.


Having an agreement with someone is not a prerequisite for being able to send them a C&D.


I agree, but:

> In a letter sent Oct. 16 to the researchers behind the NYU Ad Observatory, Facebook said the project violates provisions in its terms of service that prohibit bulk data collection from its site.


Sure, but at that point they'd have to be breaking some kind of law to warrant one, wouldn't they?


Given the wonders of the US legal system, Facebook can probably throw a few million in legal fees into fighting the NYU researchers, who will have to back down because they can't afford their own legal fees, irrespective of who's right or wrong.


I imagine in a situation like this the university may well stand behind them. And, you know, Facebook really picked quite a target, given NYU is a top-five US law school.


No, not at all. They aren't really legal documents, anyone can send one, even to simply intimidate the recipient, or force them to spend time and resources (they may not have) addressing it. And even if the sender is acting in good faith, the courts could interpret the law differently from them.

Remember, one is presumed innocent/not liable until proven otherwise, not when you get a C&D (or get arrested, or get sued, for that matter).


I legitimately don't understand how Facebook has any grounds to C&D this application.

It's a Terms and Conditions violation. OK, I get that. But at what point did the developers of this application ever agree to any terms and conditions? It's not like it's accessing data via the API, or requires some kind of privileged access level.

Morals, ethics, security, whatever aside; I just don't understand the legal angle Facebook is using here.


The world would have been a very different place had Facebook or other big companies built the internet.


Facebook, Twitter, and Google were built by a particular kind of libertarian type: young and idealistic. They erred on the side of openness and really disliked regulating content. I don't mind them.

The people who are taking over these companies, those people I'm afraid of because they are political and their first instinct is to push ideology and censorship.


Facebook always has the “dumb f*cks” mentality.

Google was benevolent (or at least seemed that way) until it had any competition, at which point they threw out the "don't be evil" motto, but the same guys (Page, Brin, Schmidt) were still at the helm.

Wasn't following Twitter closely, but Dorsey is still running it: same guy.

It’s the same guys, not some “nefarious guys taking over”.


good point!


Everyone talks openness, but few commit when it's really needed.

I think mobile app development would have been a different experience if people had had convictions similar to those of their 90s counterparts, who made the GNU, WWW, and Linux foundations possible and lasting.

The paper "Protocols, Not Platforms: A Technological Approach to Free Speech" (Masnick, 2019) underscores the difference.


Monitoring Facebook needs to work like the Black hole photo project.

Large alliances spanning countries and multiple institutions, hundreds of researchers working in tandem. That's the best way to tame a beast this large.

Same goes for regulators/legal strategies/journalism etc. Associations and alliances are key.


Isn't it ironic that Facebook has no trouble justifying mass surveillance of its user base, but lawyers up when somebody tries to surveil them?


Apparently they created software that someone else installs and then uses to share info they saw on Facebook with the researchers.

This seems like a pretty big stretch.


Mixed feelings on this. On the one hand, I'm against Facebook censorship, especially of academics. On the other hand, the researchers (and journalists) are going to use this research to browbeat Facebook into more censorship (and specifically censorship of the right, though some non-mainstream progressives will be caught up in the dragnet as well) as they have already done with a ton of hit pieces [1].

What to do, what to do.

[1] Brings to mind the hit-piece: "The Making of a YouTube Radical - The New York Times" article. But there is an ongoing effort to continue pushing Facebook, Twitter and YouTube into more and more censorship of anyone deemed on the right and the Overton window keeps getting smaller and smaller. The situation with the Hunter Biden laptop story and NYPost is absolutely bonkers. Not only do journalists at mainstream center/center-left news outfits not care that Twitter and Facebook outright decided that the story is false and therefore shouldn't be shared by anyone and banned the account of their colleagues at NYPost .. but worse, actually applaud it and justify it.


The evidence just doesn't support your claim that Facebook predominantly censors content from the political right. Facebook produces disproportionately more engagements for right-wing sites[1]. This is reflected in the best performing content on the site -- after the first debate, 9 of the top 10 posts were right leaning [2], which is a regular occurrence.

[1] https://www.economist.com/graphic-detail/2020/09/10/facebook... [2] https://www.washingtonpost.com/graphics/2020/elections/debat...


The existence of popular engagement with a subset of allowed right-wing content does not mean there is no censorship of other right-wing content. If most of the political content on a platform by volume is right-wing, then it actually wouldn't be surprising if most of the censored political content would also be right-wing.

The most engaged with articles on HN are technology-related. That fact does not magically prove that there is also zero moderation of technology-related articles.


>Facebook produces disproportionately more engagements for right-wing sites

Bull.

America is a country where, in any given election, 50% vote Democrat and 50% vote Republican. Look at the chart. Look at the total number of left-wing vs right-wing sites. I bet if you add those up, total left-wing engagements will dwarf total right-wing engagements. It looks to me like engagement is skewed toward the left wing; it's just that there are way more left-wing outfits, so each one's individual share is lower. Which makes sense: there is one major right-wing news network, Fox News, and something like ten center-left/left-wing networks that have to share the audience.


Sounds good. I just signed up.



Unbelievable. So now they want to prevent people from helping others share what appears on their computer screens?

And they have the face to even WRITE IT!

Just say no.


Thoughtful critique is obviously welcome, but please don't break the HN guidelines like this. These ones in particular:

"Please don't fulminate."

"Please don't use uppercase for emphasis. If you want to emphasize a word or phrase, put asterisks around it and it will get italicized."

https://news.ycombinator.com/newsguidelines.html


Frankly... Fck Facebook and their alien overlord.


And this is why I will refuse to take any calls from Facebook researchers. It's bad enough that Facebook is disproportionately used by Right Wing Trolls and Authoritarians. See [1] for more information about why this is so.

[1] https://www.lawfareblog.com/lawfare-podcast-maria-ressa-weap...


Not sure why people are angry at Facebook for this.

Cambridge Analytica was very similar: it was a third party using an API for its intended purpose (with the user's consent) and then doing questionable things with the data, yet that did not stop the world from raking FB over the coals.

And it does not matter that CA was using FB's API and platform. In both cases there was user consent to provide the data.


Good. I didn't agree for NYU to have my Facebook data and for them to operate a plugin at scale which crawls my data (if anyone with permissions to see my profile installs this plugin) is a violation of my rights.


How do you feel about it when anyone who has you in their phone's contacts list hands over all that personal information about you when they install WhatsApp or any other app that requests contacts data? Seems like the same thing.


Yes, it is objectionable, and ordinarily people here would agree it is objectionable.

The wrongness doesn't change just because in this particular case the framing is "poor researcher vs. evil Facebook".


An app should request contacts data to do what contacts data is intended for: placing calls and sending messages to contacts. Anything else is an abuse of trust.



