Hacker News
Facebook's PR feels broken (themargins.substack.com)
352 points by cjbest on Jan 10, 2020 | 242 comments



> I’m sure, internally, it’s looked at as the “mainstream media trying to get clicks...

I interviewed with FB shortly after the Cambridge Analytica stuff came out (but long enough after that it was clear to everyone that FB had screwed up big time).

When it was my turn to ask questions in one particular interview, I gave my standard fallback for when I have nothing else: "[despite all the blahblah positives], having worked at other technology companies, what's your least favorite thing about working at FB?"

This guy was an FB vet (maybe 5 or 10 years? Very long in FB time, I think). He gives me this spiel about how hard the teams work to maintain user privacy, how unfairly they're being treated by the public and the media, and how hard it is to work in an environment where everyone treats you so unfairly.

I was floored. Not even a hint of apology, remorse, or "we could have done X better". "Unfair. Fake news. We're doing great things, and no one thanks us enough." And from someone who had probably made a mint having been around at FB near-IPO time. It was my first or second interview of the day, and while no one else was so blatant, the sentiment persisted throughout the rest of the day.

I bombed the interviews hard and they didn't want me back, but I had basically decided I'd never work there by the time lunch rolled around.


Someone who worked at Facebook for the better part of a decade responded that they felt the media coverage was unfair to Facebook. You were surprised by this response because, from an outsider's perspective, you felt that Facebook employees should be apologetic? What makes you more qualified to judge whether the public perception was accurate as compared to someone who actually worked there - an "FB vet" in your own words?

Having dug into the Cambridge Analytica story, I can see a number of reasons why Facebook workers would feel wronged by the coverage. In particular, many outlets omitted or downplayed the fact that Cambridge Analytica had lied about the purpose of their data collection, falsely claiming that it was for academic research. Dozens of other academic institutions were similarly allowed to solicit data from users, yet those go unmentioned because most people don't see users voluntarily sharing their data for academic purposes as a scandal. But that's what Cambridge Analytica was, from Facebook's perspective. The picture painted by the media was one where Facebook brazenly sold people's data to nefarious actors, when in reality Facebook treated Cambridge Analytica just like other academic organizations in allowing them to solicit users for data - except it turned out that CA had lied about the purpose of its data collection.

This story makes me more empathetic towards your interviewer. If my company was so thoroughly demonized by the media that candidates discount my own experience actually working at the company and are "floored" when I tell them that the reality is different from the public coverage, I'd be pretty salty too.


>What makes you more qualified to judge whether the public perception was accurate as compared to someone who actually worked there - an "FB vet" in your own words?

For starters, most of us here don't benefit financially from putting a positive spin on disasters at Facebook. As the Upton Sinclair saying goes, it is difficult to get a man to understand something, when his salary depends upon his not understanding it.

I've not talked to Facebook employees personally but I've witnessed the same degree of delusion at other large tech companies. Some are straight-up like a cult with employees willing to defend just about anything their company does regardless of how hazardous it was. Which is even more baffling if one takes into consideration that these are usually salaried employees working there only for a few years with no personal stake whatsoever.

The point isn't even to discuss some sort of internal nuance about the technical details of CA. If you are a company as large as Facebook and you allow user data to be abused to this degree, even if there is some nuance to it, then when facing the public you apologize, tuck your tail between your legs, admit that you screwed up, fix your problems, and turn the arrogance and the world-saving rhetoric down a notch.


> I've not talked to Facebook employees personally but I've witnessed the same degree of delusion at other large tech companies. Some are straight-up like a cult with employees willing to defend just about anything their company does regardless of how hazardous it was. Which is even more baffling if one takes into consideration that these are usually salaried employees working there only for a few years with no personal stake whatsoever.

Well I think I can solve this puzzle. It's possible some people disagree with you about how "hazardous" their company is.

They're not "cult-like" or "delusional". They're not willing to "defend just about anything" despite the fact that they are only there a few years.

They just don't agree with you about how much harm their companies are doing.

Note: This is regardless of whether you are in fact right or wrong. But it kind of bugs me that someone not having the same views is branded in such a way, as if somehow clearly you know the truth, so anyone who isn't automatically on your side must be delusional/evil in some way.


> But it kind of bugs me that someone not having the same views is branded in such a way, as if somehow clearly you know the truth, so anyone who isn't automatically on your side must be delusional/evil in some way.

That's not what I believe at all. I've had plenty of discussions in my life with people from all kinds of sectors, and in the tech industry in particular, FAANG employees stand out in this way in my experience. When I've talked to people from the Big 4 like PwC, I never got the impression that they're overly attached to their company's point of view.

Tech companies have very cleverly fostered some sort of ideological atmosphere among their employees that makes them defensive about their wrongdoings, and they have long pushed the idea that they're not just vehicles to create profit for shareholders but on world-saving missions.

As another example, remember when Uber essentially spammed Mayor de Blasio's office through their app in an attempt to undermine regulation and to effectively get ahead of the law through a harassment campaign? At that time I talked to Uber employees and a good chunk defended it.

Can you imagine any ordinary industry acting like this?


> Can you imagine any ordinary industry acting like this?

You mean proactively explaining to their workforce the reasoning behind actions likely to gain widespread attention in the press?

Yes; that's called treating your employees with respect. Everyone generally expects employees to have some level of insider knowledge, and it's polite to give your workers enough of a heads-up to not be blindsided by questions from friends and family.

The difference with Facebook and some other tech companies is that there's enough trust that employees are generally better informed about the strategic and competitive landscape the company is operating in, and that context can explain actions that may look nonsensical from the outside.

More industries should work like this, not fewer -- it's treating workers as people who can think for themselves instead of simply cogs in the machine.


I was working at Facebook when the CA story broke, but not when the events happened. I was in a department far removed from any of the involved parties. I haven't worked for them for several years now, and I currently own no FB stock outside of broad-based index funds¹.

As I recall, the sense of unfairness that was going around was rooted mostly in it feeling like old news. CA was the ghost of a bad policy that had already been rescinded, and there was very little awareness of that in the media coverage. Instead, there were loud calls for Facebook to do something, but every reasonable thing had already been done years before.

When I worked there, nobody believed that the policy which birthed CA was a good idea, which is why it was long gone. Also, everything had played out already in the public eye (in the tech press) -- anyone who had been paying attention should have known about most of these things already.

¹ It seems silly to make all these disclaimers, but they seem necessary with the mood here.


I remember all the puff pieces when Obama's campaign did something similar in 2012, where his app would drink down all the data it could from the social graph. When CA broke, I was asking people why they thought this was news; it wasn't exactly a well-kept secret that you could do that, which is why I only used Facebook to shitpost on company pages.


There may have been an effort to stop it from happening again, but from the outside it seems there has been zero effort to mitigate the harm already done. I don't think Facebook contacted anyone to tell them their very personal data had been downloaded by a third party because one of their Facebook friends participated in some poll, and that they would now be targeted by very personalized political propaganda.

It was swept under the rug.


> For starters, most of us here don't benefit financially from putting a positive spin on disasters at Facebook. As the Upton Sinclair saying goes, it is difficult to get a man to understand something, when his salary depends upon his not understanding it.

Ugh, this quote. If someone is working as a SWE at Facebook, their career options are probably pretty good. They can get a salary most anywhere.

Too often people use this quote as an intellectual shortcut to ‘I’m right, you’re wrong because your job blinds you to your bias’. Sometimes, they do just know more than you about it.

I’d suggest you take your own advice, dial up the humility a notch, and quit branding people who disagree with you as ‘delusional’ or ‘cult-like’.


I think it's fair to say most people working at FANGs are probably earning significantly more than they would almost anywhere else. It's not just getting "a salary", it's getting a very large one that wouldn't be available if they decided it was morally wrong.


> Ugh, this quote. If someone is working as a SWE at Facebook, their career options are probably pretty good. They can get a salary most anywhere.

With all the demonstrable shenanigans at FB, the word "salary" is the thing you're going to criticize?


As an FB employee, I hear this sometimes - "I can't take your point of view seriously since you work at FB" - instead of debating the facts straight up. It's just an ad hominem attack in a different form.


Academic research isn't some magic word that you just need to invoke, particularly when it involves personal data. You need specific informed consent and the sign-off from an ethics committee on your study plan. To not ask that of someone pretending to use your data for academic research is willful ignorance.

Of course, we are talking about Facebook, who have literally been in the news before for themselves running research experiments on uninformed, random users:

http://laboratorium.net/archive/2014/06/28/as_flies_to_wanto...


I remember playing with the FB tools back then, and I was surprised by how much information I could pull from them. As far as I can recall, there were few barriers to entry when it came to data collection.


The great irony to me is that Facebook does a lot of stuff people should be angry and scared about, but oddly enough the Cambridge Analytica situation, which broke this whole dam open, is not one of them, and is one where I think they are being treated unfairly.

The reason it ends up being an issue is because of politics: it gives people an explanation and someone to blame for an election that did not play out according to the expectations and/or hopes of half of the country.


Yeah, it's important to note that while my anecdote was from the "post-CA timeframe", they had also just had an exploit that resulted in mass harvesting of OAuth tokens[1].

And of course, a couple months later it came out that they were storing passwords in plaintext logs[2].

This wasn't just about CA. There was a class of problems that FB was facing, and the guy didn't acknowledge a single thing they could have done better.

[1] https://arstechnica.com/information-technology/2018/09/50-mi...

[2] https://www.npr.org/2019/03/21/705588364/facebook-stored-mil...


This is exactly it.

CA is not actually that bad of a situation.

There are much greater privacy and freedom of expression concerns.


I'm having a hard time believing that this is a legitimate opinion and not conjured up by Facebook PR.

Facebook is the one that collected the data. It is their responsibility to ensure that the parties they give your data to are who they say they are, and that they do with your data only what they are supposed to do.

They should have a comprehensive compliance program for third parties with access to user data, and an enforcement regime with enough bite to prevent abuse - not hide behind their policies until they get caught red-handed and then shrug off their responsibilities onto everyone but themselves.


From what I gather, Facebook did have a compliance program in place to ensure that data was only shared with those that met its usage policies. Aleksandr Kogan was working at the University of Cambridge at the time, so the claim that he was using this data for academic purposes would seem credible to Facebook. Here are a couple sources that state that Kogan told Facebook that the data was to be used for academic purposes [1] [2].

Certainly, with the benefit of hindsight we can see that the potential for abuse for this kind of data sharing - even with the requirement that it's only used for academic purposes - is significant. But it seems to me that the primary culprit here is Kogan, who lied to Facebook about the use of the collected data, rather than Facebook whose fault was being too trusting of academics.

1. https://www.nytimes.com/2019/03/15/technology/aleksandr-koga...

2. https://www.businessinsider.com/aleksandr-kogan-facebook-def...


> From what I gather, Facebook did have a compliance program in place to ensure that data was only shared with those that met its usage policies. Aleksandr Kogan was working at the University of Cambridge at the time, so the claim that he was using this data for academic purposes would seem credible to Facebook

As though simply working at a university is sufficient reason to trust someone.


This person held a research position in the psychology department of a world-renowned university, and claimed to be writing an app to gauge Facebook users' personalities. This is more than just "simply working at a university": Kogan held a research position and proposed an app with a purported academic purpose that was in his field of study.


You would be surprised. One of the other criticisms currently is that FB doesn't share data with academics for research as freely as academics would like.


Social engineering at its finest.


Which is why their compliance process wasn’t rigorous enough.

As others have said, requiring evidence of university IRB review (which it doesn't sound like they did) would have been a way to require more safeguards. Yes, bad actors could and maybe would have still abused the system, but it makes their work harder and more visible.


You don't need much social engineering to fool someone whose paycheck depends on being fooled.


>But it seems to me that the primary culprit here is Kogan, who lied to Facebook about the use of the collected data, rather than Facebook whose fault was being too trusting of academics.

And what did FB do as due diligence? Very little. When FB discovered the breach, what did they do as a response and to mitigate the effects for the affected users? How did they recover damages and/or enforce specific performance of contract terms to not only remove the data in question but all of the products that resulted from the processing of the data? Again very little.

When banks gave out mortgages like how FB gave out user data, the result was a massive financial crisis. And now with FB, we got (more) idiocracy.


> And what did FB do as due diligence? Very little. When FB discovered the breach, what did they do as a response and to mitigate the effects for the affected users? How did they recover damages and/or enforce specific performance of contract terms to not only remove the data in question but all of the products that resulted from the processing of the data? Again very little.

They revoked Cambridge Analytica's access to Facebook's data, and told Cambridge Analytica to delete the data they had gathered. And Cambridge Analytica again lied to Facebook and told them they had deleted the data. That's the extent of what Facebook could have done. If Kogan broke laws - and he probably did - that's the government's prerogative to charge and prosecute him.

Likening this scenario to banks giving out mortgages that they know debtors cannot pay off is not an effective comparison. This is more like someone securing a loan by falsifying their income. In both cases the customers were harmed. Facebook users' data was used for purposes to which they did not consent, and the bank customers' money was loaned out at excessive risk. But the culprit that is responsible for this is the one that deceived the company, not the company itself. One could reasonably argue that Facebook should have been wise enough to avoid being duped, but that's still much more generous to Facebook than the bulk of the coverage I read, which attempted to assign primary blame to Facebook rather than Cambridge Analytica.


Facebook could have sued them into oblivion to demonstrate commitment in enforcing their data protection policies instead of relying on their word to delete the data when they broke their word by misusing the data in the first place.


>That's the extent of what Facebook could have done. If Kogan broke laws - and he probably did - that's the government's prerogative to charge and prosecute him.

No. Facebook is aware of the damage that could have been done. Relying on a party that already breached the terms of use for the data (of FB users) to keep their word is negligence. They should have 1. structured their relationship better so that there is contractual and financial recourse against the third party, and 2. carried out a full investigation into the breach at the time they were notified, not when it was reported by the media.

Also, Facebook did in fact breach UK data protection laws and was fined by the ICO for its role in the CA scandal. It was found that their data privacy policies and processes were insufficient. Unfortunately this was before the GDPR, and hence the maximum fine that could be imposed was an insignificant £500K.


#1 is speaking from a position of hindsight. As I said, we can claim that Facebook should have been more skeptical of the intentions of university researchers, but this is far from being negligent in their enforcement of their data use policies.

#2 demonstrates persistent misunderstandings of what events transpired. Facebook was not breached in any way. Again, Cambridge Analytica did not hack into Facebook's systems. This was Cambridge Analytica's subsequent misuse of data that they had collected with Facebook's consent, but under stricter terms than the purposes for which Cambridge Analytica subsequently used the data. Facebook ordered Cambridge Analytica to delete the data.


2. Breaching data privacy laws does not mean there was a data breach or break-in. It just means that your handling of user data is in contravention of the legal requirements.

1. The ICO disagrees with you. Facebook was fined specifically for breaching data privacy laws in the UK.


I have a hard time understanding why your first sentence is necessary. Nothing against what you said subsequently (and I think it's totally right), and I dislike Facebook myself, but your first sentence is entirely unnecessary and beside the point.

From what I read, your parent comment sounds entirely logical coming from a different POV. I don't see anything illegitimate about it; even if it WAS Facebook PR, the points within are entirely valid.


The comment I replied to completely ignored FB's responsibility for its users' data and, rather than accepting that FB was supposed to be responsible, proceeded to blame everyone but FB itself.

I just don't see that as reasonable in discourse. I mean you can hold those beliefs, but if you do express them in public I think it's right to be called out on how unacceptable that is.


Yes, attack the idea, not the user. I also don't think you two are on the same page either. For one, was there false reporting? It is entirely possible to do 5 bad things and have the media report that you did 10 bad things. And while 5 bad things were committed, that doesn't excuse the media for wrong reporting - not saying that they did this in this case.


That's exactly the reason why it feels like a classic PR tactic.

Instead of discussing what went wrong and how FB's policies and operations are deficient, the conversation is being shifted towards how much the media is biased against it.

FB is not the victim here. Hence to have a productive discussion, being on the same page is indeed important - as in accepting first and foremost what FB's responsibilities are.


The conversation did not shift; both started from opposite ends and did not move. There was no conversation to begin with.

And this is why we have the schism in our society. No one is willing to find the middle ground and listen any more. Everyone simultaneously believes that listening is the other side's responsibility.


As I said, you need to agree there is a problem before we can talk about it productively. This is "being on the same page".

If there is an obvious problem that one side is steadfastly refusing to acknowledge and insists on blaming on others, then yeah, we have a schism in our society, as one side is just ignoring reality.


Of course one side is denying reality. I'll leave it up to you to decide which :)


Please. Only one side is portraying that Facebook is the victim.


Sure, like a bank has the duty to protect your safe deposit box, but CA defrauded Facebook the way a bank can be robbed.


> Sure, like a bank has the duty to protect your safe deposit box, but CA defrauded Facebook the way a bank can be robbed.

Choosing to give away your property to third parties is not comparable to being robbed. Facebook collected the data on unsuspecting users and proceeded to willingly give away the data. Thus Facebook is responsible for what comes out of it. There is no way around it, and it boggles the mind how this fact is brushed aside.


> Choosing to give away your property to third parties is not comparable to being robbed. Facebook collected the data on unsuspecting users and proceeded to willingly give away the data. Thus Facebook is responsible for what comes out of it. There is no way around it, and it boggles the mind how this fact is brushed aside.

This crucially omits the part where Cambridge Analytica lied about the purposes of the data that was collected, and subsequently lied again when Facebook learned of this deception and demanded that the data Cambridge Analytica collected be deleted.

You're right, this isn't like a bank robbery. This is more like someone securing a loan under false pretenses and then running off with the money. The bank's customers were harmed, and one could criticize the bank's scrutiny of its debtors. But the nefarious party is the one deceiving the bank.


> This crucially omits the part where Cambridge Analytica lied about the purposes of the data that was collected,

This line of complaint is absurdly disingenuous. The problems that Facebook has created are not solved with EULAs, and Facebook's responsibility for having created this whole mess is not brushed aside by claiming that third parties did not click on the right checkbox when downloading Facebook's data.

This is precisely the type of PR problem that Facebook creates for itself: this insistent, desperate, cynical, and pathetically inefficient way they try to pin the blame on others for the problem Facebook single-handedly created. Force-feeding this nonsense through astroturfing campaigns doesn't change the problem or the responsibility that Facebook has.


> The problems that Facebook has created are not solved with EULAs, and Facebook's responsibility for having created this whole mess is not brushed aside by claiming that third parties did not click on the right checkbox when downloading Facebook's data.

"Third parties did not click on the right checkbox when downloading Facebook's data" is not even remotely close to what happened. The fact that this perspective is so common is big party of why I doubt many people received coverage of the events that was even close to objective.

Aleksandr Kogan was a senior research associate at the University of Cambridge, and developed a personality quiz app that collected data that he claimed he would use for academic purposes. He subsequently used this data for commercial and political purposes, and when Facebook discovered this they revoked his app's access and demanded that he delete the data that he had collected. Kogan told Facebook that he had deleted the data when he had not done so. This wasn't third parties not checking the right box; this was a deliberate and involved plan to evade Facebook's data use policies.


> "Third parties did not click on the right checkbox when downloading Facebook's data" is not even remotely close to what happened.

It was precisely what happened. Facebook's excuse is that Cambridge Analytica did not comply with Facebook's terms of service.

https://www.vox.com/2018/3/17/17134072/facebook-cambridge-an...

And the fact is, Cambridge Analytica were just the stupid ones, the ones that opened their mouths and openly bragged about what they were doing.

So, enough with the bullshit.


Only if you're being metaphorical in what you meant by "checking the wrong boxes". Kogan had a research position in psychology at a world renowned university. He leveraged this position to claim that his work was for academic research on psychology, and then turned around and used this data for commercial purposes. This isn't some random app developer checking a box when they publish their app. And when Facebook learned that these restrictions were being breached, they revoked Cambridge analytica's access and demanded that the data be destroyed.


Why didn’t Facebook sue Cambridge Analytica into oblivion?


Cambridge Analytica claimed that they had deleted the data they collected.


In which case, the bank should be held responsible for your loss. And they should have adequate security in place to prevent it from happening in the first place, or refrain from that line of business. These are all costs that FB has skirted.


Banks don't usually hand over deposit boxes to people just because they work at universities.


I'm no fan of Facebook, but this comment demonstrates the complexity of the situation in ways many entrepreneurs can appreciate.


Unfortunately, the problem with your analysis is that your view only makes sense if seen in isolation.

A lot of us are evaluating the entire picture, based in large part on the track record of Facebook over the last 10+ years.

For example, everyone who takes a look at the "friendly fraud" [1] case comes away thinking "Boy, these Facebook employees will stop at nothing to make a buck". Now layer that on top of whatever came out during the CA scandal, and you can see that the issue is that Facebook employees are actually acting like a cult.

And by the way, nothing has changed. You can see this in how every time someone from Facebook does any PR at all (e.g. podcast interviews), they take a lot of care to make sure they don't go on podcasts which bring up the privacy issue.

Here is an open challenge to any Facebook employee who is reading this comment - go on an interview with a known "hostile" who is also not considered an idiot conspiracy theorist - e.g. DHH - and have them interview you. You know that you would never even consider doing it, because you do have a lot of shitty things still going on at Facebook that you wouldn't want to later contradict.

By the way, the same challenge applies to folks working at Google.

[1] https://www.consumerreports.org/privacy/facebook-friendly-fr...


A common argument (that you also make) is that if you're on the inside, you obviously know better than those on the outside. There's a lot of reasons to believe that isn't true.

First, you may not have the right training (e.g., media studies, media economics, privacy). Second, you favor your coworkers because they are your friends. Third, positive news and criticism of the external critics are widely shared internally, but the points that are appropriately critical of the company are much less shared. A single flawed external article can lead to defensiveness, making it that much easier to dismiss tens of other appropriate articles.

More poetically, it's like the parable of the blind men and the elephant. The conviction of the blind men is great, but their closeness leads to a misplaced confidence that they know the answer:

https://en.wikipedia.org/wiki/Blind_men_and_an_elephant

I call this effect the "truth distortion field," and liken it to the Facebook friend filter bubble:

https://www.nemil.com/tdf/part1-employees.html


I interviewed there this summer because the position/compensation offered was just too good to not consider seriously. Like, top 10% at FB kind of money, which is a lot. During the interview process, I asked many FB employees about their thoughts on FB in the news, the headlines, scandals, etc. All super smart and dedicated people, but anecdotally there was no hint of them thinking they had done anything wrong. They explained for example how Cambridge Analytica was totally overblown in the media, specifically because users opted in, and the media was just bashing them. They did say they wouldn't follow the same route again, but didn't seem to consider themselves responsible ethically/morally. I can see how one might debate whether having an opt-in absolves FB of responsibility, but it had a smell to it and didn't necessarily jibe with my personal ethical framework. I did get an offer, and ended up declining, though not necessarily for that reason. But it parallels what you said, except in May 2019. I would consider taking a job there in the future, and I got the sense that they're working to turn things around, but what high-level people at FB said to me largely seemed to mirror your experience, many years later, so I thought I'd share.


>They explained for example how Cambridge Analytica was totally overblown in the media, specifically because users opted in, and the media was just bashing them. [Emphasis mine]

That is really troubling. Excusing nefarious behavior because "well the user opted in" is a downright horrible way to rationalize bad behavior. Users never read the TOS. They barely understand the privacy settings (though I've heard they've gotten better ... haven't had Facebook in years). I tend to think that, yeah, Cambridge Analytica was probably overblown a bit for a number of reasons, but wow. I know a couple people who have quit Facebook (one recently, one a couple years ago before the election) and a couple who have stayed. I have to say, the ones who've stayed have drifted away from the rest of our friend group. Sad to see.


I would also imagine that, especially during interviews, they recognize their duty is in part to sell the candidate on the firm. There's an implicit pressure to be positive about your company because there's only downside to saying negative things, even if it's honest. I don't see why an employee would put themselves on the line with a candidate, who they may never work with nor see again, just to express what they think is right.


If you're trying to hire "top 10% pay at FB" smart level people, hiding behind poorly constructed rhetorical barriers is more of a negative thing than recognizing a real negative thing. It doesn't have to be anyone's "fault" even, but more of a consequence of the system that was built with good intentions.


The people probably had a moral conflict: keep the good Facebook money but lose "their souls", or go with their conscience, quit, and lose that money and their work friendships/network. So, to be able to sleep at night, they've convinced themselves the company they're a part of wasn't the responsible party. "Hey, those users opted in, it's their own fault!"

Funny how it means the company is now probably full of either "ignorance is bliss" or morally bankrupt people, because the people with good conscience have left. Not that society at large is much different... what uncomfortable truths are we ignoring?


I know a few people who work there too. I can have an intelligent conversation/debate about almost anything with them, from tech stacks to software development methodology, but bring up privacy and Cambridge Analytica, and all of a sudden the wagons circle and it’s all “we’re so misunderstood,” and “the biased media is out to get us,” and “we do so much for user privacy and always just get shit on!” It’s as if they all went through the same training and got the same talking points. Spooky!


I don’t doubt that Facebook does a lot for user privacy, but it is probably the case that nobody there (except maybe Boz) really, thoroughly understands the enormous amount of power and influence the company wields, and they also haven’t realized that the thing they have worked hard to create is now an out-of-control monster.


Wait, didn’t the CA folks manage to get info on friends of people who “opted in” as well? Even if they didn’t?


Yes. A few hundred thousand people took the quiz, and Facebook's platform therefore allowed the app to gather information on tens of millions of people. Facebook had a policy at the time that the extraneous data could only be used to improve user experience, not for third-party data harvesting, but reportedly it was weakly enforced.
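
To make the mechanism concrete, here is a rough sketch of the shape of a v1.0-era Graph API pull (a minimal illustration in Python; the version, endpoints, permission and field names are approximations, not the quiz app's actual calls):

    # Illustrative only: the general shape of a pre-2014 friend-data pull.
    import requests

    GRAPH = "https://graph.facebook.com/v1.0"
    TOKEN = "token-granted-by-one-consenting-quiz-taker"  # hypothetical

    # One quiz-taker's token could enumerate their entire friend list...
    friends = requests.get(
        GRAPH + "/me/friends", params={"access_token": TOKEN}
    ).json()

    # ...and, under the old friends_* read permissions, pull profile
    # fields for friends who never themselves installed the app.
    for friend in friends.get("data", []):
        profile = requests.get(
            GRAPH + "/" + friend["id"],
            params={"access_token": TOKEN, "fields": "likes,location"},
        ).json()

A few hundred thousand installs times a few hundred friends each is how you get to tens of millions of profiles without anything being "hacked".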


>there was no hint of them thinking they had done anything wrong.

the vast majority of folks know very well not to bite the hand that feeds them, more so than anyone else


It doesn't pay to speak up. We glorify those who made a change, but the truth is that 99% of those who speak up, even about small things, quickly learn to shut up or leave for greener pastures. Remaining in the company are the yes-men and, outside of regular employment, those who speak just to breathe. Still, we encourage everyone to speak up, to be able to weed out the black sheep before they cause too much damage.


Wow.

Between jyrkesh's tale and yours, it makes me think that things are even worse at FB than I thought they were.


I shared a Lyft ride with a Facebook employee right after the CA controversy. I asked her what employees thought about it.

She told me that people are really worried about how the controversy affected the stock price. (Facebook stock was down around that time.)

I'm not sure how many employees worry more about the stock price than about the actual effects of the CA controversy, but I was disappointed to hear that from a Facebook employee.


Facebook et al are knowingly driving the social collapse of countries because of the same thing that led the bankers to knowingly drive the world economy to the brink of collapse.

Money


Ask yourself:

My company might have exposed millions of users to political propaganda/wrongthink. It is being fixed, though.

Six months' worth of base salary.

Choose.


How was Facebook a 'horrible actor' in the CA scandal, exactly?

Facebook had relatively permissive APIs. The entire world knew it. Journalists, developers, Big Cos.

Nobody was screaming that there was a problem. Though there were a few issues around privacy, there wasn't really any public dialog specifically around the nature of those APIs.

CA found a sneaky way to take advantage of somewhat open APIs.

As the world started to become somewhat more concerned, and FB saw some room for potential abuse - FB rightly tightened up the APIs a little. This was long before the CA scandal blew up.

Then it became public that CA was doing some sneaky things with the data. It's actually highly debatable whether they did anything wrong within the context of the information available - or at least outside of industry norms: what CA was doing with data from FB is what everyone was and is doing with data from similar sources.

FB called on CA to erase the data, and checked that they did. It turns out, CA lied and did not.

FB did the right thing every step of the way in the CA scandal. APIs of every kind err a little bit one way or the other over time and as issues surfaced, FB moved in the right direction without coaxing.

The media absolutely misrepresented the entire issue. They didn't really make it clear what happened, nor did they clarify what exactly FB did wrong, but most importantly, they misled the public with respect to a separate, secondary issue which was 'special API access'.

The 'scandal' is that there are tons of companies like CA using private data for all sorts of reasons, it's mostly not FB.

There are definitely privacy issues around FB, and I'm no fan of them, but those are a separate issue.


So your opinion is that they weren't horrible actors because it took a while for people to complain?

They're the ones that allowed it to happen, how long the world took to complain about it is not very relevant.


I'm guessing it might be that the people working hard on maintaining user privacy are not the same as the people making the decisions, so the former feels they're treated unfairly.


Based on the op's description, it doesn't seem like this particular FB employee minded those decisions that much. In fact, they sounded pretty enthusiastic about them.


I feel pretty contrarian about this piece. In the same piece that talks about how dangerous it is and how regulation is needed because Facebook has the ability to influence public opinion, they talk about how "totally normal" it is for the media to act as mouthpieces for whomever wants to pay. No mention of how Fox News can swing public opinion, and really needs to be unbiased and in fact needs regulation to that end. Or any number of other media. And then it includes a gif making fun of Zuckerberg for being robotic while complaining about him writing a post that was human instead of corporate.


I think you may be missing the point. The subject of the piece is whether Facebook comms has gone off the rails, not necessarily policy.


Comment talking about bias, only mentions Fox News. Welcome to hell (imo, this is what you get when you don't teach people how to argue and feed them too many Disney films).

No, the press shouldn't be regulated. Yes, bias is normal. No, Fox News doesn't "swing public opinion" (they reflect views that already exist, and pander to them). Life is difficult, you have to think for yourself.


You seem to be arguing two different things: first, Fox News does not influence public opinion; second, that it shouldn't matter if they do because adults should think for themselves.

The second is fundamentally a subjective view of the world, so you are neither right nor wrong. I disagree with you strongly. We should be concerned about things that harm our civic health even if we ourselves are to blame for them. I think there's a clear parallel to our actual health: when 80% of our population is overweight or obese, it's maybe time to think beyond "adults are responsible for what they eat" and towards "the world would be better if we could make progress on this issue". Ditto our past history with smoking. We know that people are subject to persuasion and psychological manipulation and exploitation of their cognitive biases. A model of the world that has everyone as solely responsible for their own destiny is a pretty useless one, in my opinion.

But, your first claim -- "Fox News doesn't 'swing public opinion'" is an empirical one. Either it does or it doesn't. You're either right or wrong.

DellaVigna and Kaplan's 2007 piece in the Quarterly Journal of Economics investigates this question. They exploit the fact that the phased roll-out of Fox News between 1996 and 2000 impacted 20% of the country. This allows for a natural experiment where some markets are "treated" by Fox News and others are not.

Okay, so, first, should we believe this design? If the entry of Fox News into a market was random, then this is an experiment and we have a treatment effect. If the entry of Fox News into a market was conditional on market characteristics that we can assess, we have "selection on observables", and we can recover a treatment effect. We need only a good selection model. It is also the case that if entry into a market was non-random, but conditionally ignorable with respect to the potential outcomes of the market (e.g. random with respect to politics), we can get a treatment effect without knowing the full selection model. So we have some different routes to the answer here.

The answer is between #2 and #3 -- Fox News did enter markets in a non-random way, but not based on demographics or past voting history, based mainly on geographic considerations. Given the regulatory and infrastructural component of entering a new market, this is probably not surprising, but it does guard against "Fox entered conservative markets first" as a counterclaim.

Now, having clarified the design-based considerations, the authors find that Fox News was responsible for a ~0.5 percentage point increase in Republican vote share in the presidential election of 2000. Whether you consider that large or small depends on your frame of reference. My sense would be that 0.5 percent is small cosmically, but it's maybe possible that you could have an election where that kind of margin is decisive. Remind me again, was the 2000 election close?

This is a fairly credible and careful design published in a good journal and widely cited (1486 citations, which is quite high for a nonmethods economics paper). That doesn't make it true, but it does suggest that the field of Economics views this as an important paper and that it has been exposed to scrutiny from a wide variety of sources.

If there's a little contempt in my reply here, it's because I think your off-hand claim that media diet, priming, and framing effects are solely demand driven and have no actual impact on viewers seems to fly in the face of like 50 years of communications, economics, and political science research. In other words, it feels like your first reply was "50 years of research doesn't matter, humans aren't influenced by any of this stuff". Maybe that's not what you meant, but when I see someone flippantly respond without evidence, I think "Welcome to hell" -- imo, that's the kind of thing we get when we don't teach people to argue.


What a thoughtful and well articulated reply (sincerely). I hope some day one of my stupid opinions gets this treatment.


Understanding that everyone is biased is not a "stupid" opinion. It is reality. Thinking otherwise is certainly a very common belief, and quite "stupid" indeed (the hallmark of stupidity is thinking you are becoming more intelligent, the stupider you become).


First, I didn't say anything about whether it should matter whether Fox News does or doesn't influence public opinion. I don't have an opinion (that naturally depends on whether you think Fox News is "right" or "wrong").

Second, your confusion about personal choice is quite palpable. You can have concern about a topic but not take responsibility for it. Those are two very different statements, because the latter begs the question: who takes responsibility? From the tone of your answer, the suggestion is "wise people like me" (as later comments make clear, modesty is not something that pervades your thinking). Great, that is the dictator's charter...and that is also not subjective (again, you are mostly confused about this topic).

Third, as you appear to understand, the study that you quote is not an RCT. There is an unknown level of selection bias; you do not mention (unsurprisingly) whether this was overcome in the study. The study used differences-in-differences, which will overcome selection bias with some caveats.
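
For reference, the canonical two-period form of that estimator (a textbook sketch, not the paper's own notation):

    \hat{\tau}_{DiD} = (\bar{Y}_{treat,post} - \bar{Y}_{treat,pre}) - (\bar{Y}_{ctrl,post} - \bar{Y}_{ctrl,pre})

Any time-invariant difference between markets that did and did not get Fox News cancels in the first differences; the caveat is the parallel-trends assumption, i.e. that treated and control markets would have moved together absent Fox News entry.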

Fourth, it says nothing about the mechanism for this process. Again, you are still assuming that persuasion works in the way you feel it works...but we know that voting behaviour is mostly determined by other factors, some of which weren't controlled for in the study. And you believe, too, that because the election result was close, this was the deciding factor? Another extraordinary assumption with no evidence.

Fifth, citations mean nothing. The only evidence on this subject is that people do not think clearly, academics are largely left-leaning and are subject to all the same biases as the rest of us. Sorry.

Sixth, there is not fifty years of research into this. This is the kind of lazy, exaggerated claim that people who have no knowledge of the area but strong preconceptions often make. You seem to believe that you are hyper-rational and only believe in things with evidence, yet you end up making the completely false, bombastic claim that there is fifty years of research behind your claims. I will make this simple: you are talking to someone who has done postgrad work in politics; no such evidence exists. This isn't even something about which you can produce very solid research (#8). You can model voting behavior very well...but those models include no factors like exposure to Fox News.

Seventh, something that you need to understand is that there is no point arguing with people like yourself. It is actually far easier to get through to people who have no education. The absolute worst people in the world to argue with are people who believe they are educated (although often not about the relevant area) because they believe that there is evidence for their feelings. They do not believe they think irrationally, but everyone does. My point about "Welcome to hell" is: bias occurs in everyone but is only perceived in groups that the observer disagrees with...this kind of position never changes (and later in life, they come to think about people on the other side too).

Eighth, the view about Fox News affecting an election isn't subjective. It is something that we can produce knowledge on but it is fundamentally unknowable. It is important to think about this and understand why it is the case. Academics, particularly within economics, do not understand this and it has led to lots of pointless work.

Ninth, I would like to hear precisely how you discovered this paper. Was it something like: google "fox media bias" -> find paper confirming belief?

Tenth, you are looking at this subject in a very unproductive way (again, because you are biased...in fact, you almost entirely missed the point of what I said). Everything is biased. Other news stations in the US are biased. This has been going on for longer than the US has been a country (political pamphlets started appearing in the late 16th century). Most people are not going to think for themselves, but that option has to be left open to them. I live in the UK, and we have had massive issues with this because certain media sources have started producing "fact-check" services...inevitably, we now spend as much time fact-checking the fact-checkers, who largely produce news that is just as factually inaccurate (in my experience, it is often more inaccurate, ref #7). The only reason people get angry about this topic is because they view one side as wrong and the other side as right...rather than the reality, which is that both sides are equally biased (again, this is a fairly well-understood aspect of human nature...thinking otherwise is a sign of bias).


Friendly feedback: your point would have been 10x more effective without the opener. I maybe agree but it never matters what you say; it matters how it’s heard.


I don't care about effectiveness. 95% of people have no interest in reality on this subject; there is nothing you can say that will change their thinking.


This part really struck me:

> It feels like finance in 2009.

> On one side, you had smart, ambitious people who ended up there simply because you were told to go. On the other, you had the classic Gordon Gekko-ish types reciting Liar’s Poker anecdotes ad nauseam.

> Enter the crisis and everyone was equally tarred as the bad guys. The former have slowly made their way out (mostly over to tech), while the latter remain[...]

It does feel like the tide of public opinion might be turning from "too uncritical" to "too critical".

At least, as somebody who spends too much time on both Hacker News and Twitter, this seems believable to me.


I don't really agree with the idea that people are being too critical of FB, or these massive social media giants in general.

People vastly underestimate the power of data IMHO and therefore the responsibility that comes with it. We'll look back in 20 years and wonder wtf we were doing just like any other decade, cause hindsight is 20/20 (after a certain number of years out ofc).

I don't think our society is really prepared for the digitization of so much information and there will be growing pains.


I studied a bit of politics, anthropology, and history in my 20s. The one thing that frightens me the most is that a relatively "safe" constant of pretty much all dictatorial / totalitarian / fascist / <insert_new_dystopia> regimes of the world, going back millennia, is control of information, and thereby of the population.

It's one of the simplest rules of a State hostile to its citizens: the more you control information, the stronger the leash, the tighter your hold. And if cruelty is on the menu, it can get that much worse with more, better, faster information.

So I thought, when Facebook really took off, "people do this now because there's this fad, this dopamine of novelty, but surely soon enough it'll recede" and I was damn wrong.

I also thought, "oh, but governments with half-decent minds will undoubtedly choose to increase protection for citizens, enact 'digital rights' to extend the sanctity of one's 'home territory' to the internet, a 'digital democracy' in terms of infrastructure rules so to speak", and again I was dead wrong -- they're doing the exact opposite, proving that the concepts were not lunacy and very much implementable, but, as it turns out, in the form of "digital authoritarianism".

So yeah, I'm usually optimistic about most things, even the worst like climate change; but in this case, I know of no example in history to disprove this assumption: in the current context, if any major western country falls into dictatorship / totalitarianism / etc, it may get very dark very fast. The powers of such a modern incarnation of extreme regimes would be, well, unprecedented, and that's kind of understating orders of magnitude in scale.


> "too critical"

That wasn't my takeaway. My takeaway was, "Good. Pressure is finally getting through to the masses of complacent co-conspirators who still have enough of a soul to care. Maybe things will finally start changing."


Were we ever "too critical" of the banks? ISTM we just gave them a bunch of money so they could afford to foreclose on everyone in an orderly manner, and then forgot about the whole thing. They're all still "too big to fail", which is more a statement about political corruption than about the economy. I wouldn't be disappointed if FB would have to actually fix their problems in order to escape criticism...


One thing this piece really brings home to me is how unethical it is for media to sell sponsored content intermixed and indistinguishable from their own content. How do you trust a media outlet when their content is for sale? I know this practice pre-dates the internet, but it's hugely damaging to people's faith in the media.

I think Facebooks' failure in PR is more a symptom of how far Facebook has slid down the ethical slope into outright corruption. As you do more and more heinous things, it becomes more difficult to defend those actions.


It's almost like there's a reason the FTC requires disclosure of paid content...


In this specific case, it was clear Teen Vogue and Facebook were trying to do exactly that, and the fact that they pulled it suggests to me that omitting the "Sponsored Content" warning was part of the terms of the deal. I suspect this is extremely difficult to police and people get away with it all the time.


> how unethical it is for media to sell sponsored content intermixed and indistinguishable from their own content

Some forms of media advertisements are unethical, you say?


It still amazes me that the line of reasoning they publicly went with, around their decision not to fact check political advertisements, is that no one company should have that amount of power. The logical conclusion of their own argument is that Facebook should be broken up.


While I agree Facebook is doing a bad job with regards to political advertising, I am actually really glad Facebook isn't stepping in and deciding what's truthful or not. Does anyone really think that we should give Facebook any power over our political system or give them the power to be the gatekeepers of what political ads we see? I think Facebook should do what Twitter did and just totally get out of the political ads game; however, since they're not, call me crazy, but I am kinda glad they aren't deciding what's "right" and what's "wrong". Facebook has already proved to us over and over that they don't know the difference between right and wrong and that they make decisions that are bad for society.


Disagree. Since they have chosen to platform those ads, they should at least be held to the same standards as networks and not show ads that are provably false. It would be better if they didn't platform them, or better yet, they didn't have that much potential influence in the first place.


> same standards as networks and not show ads that are provably false

I was going to ask you for a reference on this, because I was pretty sure that no such standard existed.

I was quite wrong!

https://www.fcc.gov/consumers/guides/complaints-about-broadc...

"Broadcasters are responsible for selecting the broadcast material that airs on their stations, including advertisements. The FCC expects broadcasters to be responsible to the community they serve and act with reasonable care to ensure that advertisements aired on their stations are not false or misleading.

The FTC has primary responsibility for determining whether specific advertising is false or misleading, and for taking action against the sponsors of such material. You can file a complaint with the FTC online or call toll-free 1-877-FTC-HELP (1-877-382-4357)."

So we have:

"The FCC expects broadcasters to be responsible to the community they serve and act with reasonable care"

and

"The FTC has primary responsibility for determining"

So...maybe we should give online advertising the same treatment?

As an aside: this FCC regulation surprises me; I'm quite aware that there are numerous limits to free speech, but I didn't expect this to be one of them.

I've always been an ardent believer in expansive free speech, and I still am, though age has allowed me to accept more limits. Though it doesn't feel right to me, the negative impact of (largely?) unregulated online political advertising is big and getting bigger.


I think the FCC ones only apply to local broadcast TV though, like your ABC, NBC, FOX, CBS affiliates. They have other obligations, like providing children's programming (usually early on the weekend), in exchange for the airwaves. Cable and satellite network stations are treated differently. So cussing on the local TV stations is a no-no, but a show on HBO can cuss all they want.

Then also, in some states like Ohio, it's illegal to sell ads to businesses that aren't solvent. Facebook is currently being sued by school districts for selling ads to a charter school that later ended up closing, which I feel is a huge overreach since most of it's automatic and the school themselves decided to buy the ads. So I guess you have to do a financial background check before selling ads. Probably easier to just exclude Ohio from ad sales, but the lawsuit hasn't been settled yet. So far no activity for 7 months, but it's still on the docket as an open case.


No, online ads do not get an exception with the FTC. The FTC enforces truth-in-advertising laws on ads across all mediums.

Edit: FCC -> FTC


That would be the FTC in that case, but I see how people could mix up the FCC and FTC; my original reply was talking just about the FCC rules. I know the FTC is platform-neutral, though. For example, that's why YouTubers disclose when they get paid to promote something. So it sounds like the FCC has its own rules for TV, and the FTC ones apply on top of that in general.


> The FTC enforces truth-in-advertising laws on ads across all mediums.

Hahahahahahaha yeah sure they do.


Yeah. I thought I'd seen something before about Hollywood having an exemption, so movies can have paid placement without disclosing it. The Emoji Movie could be an example; I never watched it, but I remember posts saying it was nothing but one big advertisement. The first Jurassic World had a bunch of product placement too, which people didn't like. Maybe it's mentioned in the credits somewhere, though? I liked the movie and kind of felt the brands added realism. Real brands sponsor attractions at Disney parks, so it makes sense for a fictional theme park built around dinosaurs, but I know some feel it takes away from the movie.

Then there's another show about entertainment and news, where the woman on there sometimes gets dresses provided for her to wear. I noticed that's mentioned in a text overlay at the very end... However, guides for YouTubers say you should mention it in the audio too, not just put it in the text description or on screen as text.

Then Apple gives TV shows and movies free iPhones and MacBooks for promotion... I've seen it mentioned in the end credits, but I don't remember ever hearing a voiceover saying "products provided by Apple". Yet according to one guide, if you're a YouTuber you're supposed to disclose that you got a free MacBook every time you mention it. So it seems unequal, and there's some confusion out there too. I know some tech, RVing, and camping channels will accept free products to review, but if you use or mention those products in any future videos or blog posts, people might not have seen your first post where you reviewed them and disclosed you got them for free... So I guess you have to repeat a bunch of legal jargon in every video.

I doubt many vloggers get free Apple products, but one of the examples was that if a hunting channel got a free knife, it should be disclosed in future mentions too. That's probably a more realistic example, though it wouldn't surprise me if someone who collects knives forgets who sent them one after the initial review. Recently the FTC set more rules for videos targeted at kids on YouTube, and some lawyer mentioned he'd spoken with the FTC and that some of these people don't even own phones or really understand what YouTube is. I also don't remember The Price Is Right mentioning the products it gives away; I think they just show the car, yet as far as I know GM or Kia provides it for free as product placement. I bet if a YouTuber gave away cars, they'd have to disclose it much more. I guess the old media can afford more people in DC than independent vloggers can.

Then if you got paid to go to a conference and wanted to live-tweet it, from my understanding you'd somehow have to disclose that in every single tweet. Not sure how you'd do that with the character limit. Maybe when I read these things I'm taking them more literally than most people do. I bet a lot of businesses are out of compliance with many things if you look for it. I was reading up on PCI recently for credit card processing; years ago I helped a lady with her shopping site, and if I remember right she just gave people a generic admin login shared between everyone, which I gather is a major no-no since everyone should have their own separate account. Some web hosts market their servers as PCI-compliant too, but I wonder if it's really true. There just seems to be a lot out there, though I do remember hearing once that the average citizen commits three felonies a day.

I think it'd be fun to be a travel vlogger someday, and my personal policy would be not to accept free stuff to review. But I know some companies just send stuff unsolicited to anyone who posts a PO box, so I wonder if those vloggers feel pressured to review the items... I also think the whole idea of fans sending you random stuff is creepy. Some vloggers do mail vlogs where people send them candy and such... Mommy and daddy say not to take candy from strangers, yet you let random people from around the world mail you candy... I'd rather just use things I paid for myself, and if I liked something enough, mention it on my own. I don't want to feel obligated to some brand.

I do think the general idea of the FTC is good, though; we don't want companies lying to and scamming people. There's probably selective enforcement too, reserved for the most outrageous cases.


> Maybe it's mentioned in the credits somewhere though?

There's a credit for a product placement researcher and a "thanks to Thermo Fisher for lab equipment" (plus a couple of others that don't stand out as companies you'd recognise), but there's no "Promotional Consideration from ..." section in the credits that I can see (Blu-ray version, 2:04:21 runtime).


I would certainly rather have someone outside the company making that decision. Make the FEC do it.


I think that's the GP's point. The problem isn't that FB has declined to take executive power over deciding what's true. The problem is that FB has too much influence. So the answer isn't to pressure FB into taking even more power: the answer is to reduce its dominance.

Of course, that requires a political system motivated by doing the right thing for the country's citizens. As opposed to one controlled (sorry, lobbied) by monopolistic corporations, and in which few politicians dare take a stance against FB for fear of the misinformation campaign that would inevitably ensue. On Facebook.


> Does anyone really think that we should give Facebook any power over our political system or give them the power to be the gatekeepers of what political ads we see?

They already have that power, and one of the problems posed by Facebook is that they leverage it to disseminate false information devised by unscrupulous totalitarian regimes to undermine democracies.


They already decide, through their algorithms and advertising platform, what people see and what they don't.


How does breaking up Facebook change anything in this context? Unless you're suggesting that the government seize and shut down the Facebook domain and brand in addition to breaking up the company.


I see two issues/potential strategies with Facebook.

1) Facebook could be broken apart from Instagram and WhatsApp.

2) Mark Zuckerberg is not beholden even to shareholders, due to the super-voting rights of the shares he owns. Restructure the company so he still owns the same percentage but has the same voting rights as any other shareholder.


Those don't really have any effect in the context of political ads on Facebook.


> The logical conclusion of their own argument is that Facebook should be broken up.

Luckily you don't need logic when you can just throw money at lobbying. Sadly, this is incredibly effective in the US today.


> Luckily you don't need logic when you can just throw money at lobbying. Sadly, this is incredibly effective in the US today.

Even if that's true, it might not work for Facebook. The politicians that Facebook would need to lobby may feel personally threatened by Facebook's power over the electoral process. Penny-ante campaign contributions and a few face to face meetings probably aren't enough to overcome that.


> Penny-ante campaign contributions and a few face to face meetings probably aren't enough to overcome that.

True, but a promise to use that power to benefit particular politicians probably would.


I think logically the government shouldn't be threatening anyone with a breakup based upon speech. I'm not sure how you get there as a logical conclusion.

I would rather risk total garbage in the public forum than establish a ministry of truth.


> I'm not sure how you get there as a logical conclusion.

If fact-checking political advertising gives them too much power, and not doing so is something they themselves decided on, then they already have that power. If no one company should have that amount of power, they should have that power taken away from them.

The bigger leap is the next step: any alternative to breaking them up would infringe on their freedom of speech. I'm having trouble coming up with an alternative that wouldn't, but there might be one.


Facebook is the Ministry of Truth. They even throw things down the Memory Hole.


Oh man, if not exactly the memory hole it's the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying "Beware of the Leopard", and they can claim their algorithm that highlights stuff in your newsfeed just decided something wasn't important (but it would've been there if you scrolled long enough or clicked some sub-menu...).


The level of power that Facebook has over global discourse is just chilling.


I don't see how breaking them up follows as the logical conclusion. Even if you have two separate companies, Instagram and Facebook, neither should have that power. Twitter shouldn't be deciding what counts as political advertising either.

The logical conclusion is that the government steps in and writes laws about what social media should do.


LOL "Before 2019, it felt like the Facebook communications machine was a well-oiled, unstoppable juggernaut." Umm how about Cambridge Analytica?

Facebook's PR has been troubled for a very long time. To suggest that they had a stellar image before 2019 is a joke. I can list many other slip-ups where Facebook could have come out and said something (or better yet, done something), and instead weeks later they came up with a weak statement. If that's great PR, I'd like to offer my services to anybody who needs it.

While I'd agree that "No one ever broke rank. The messaging was crystal clear.", the message was always an awful one and now they have a relatively negative reputation despite being a remarkable success.


Cambridge Analytica is one of FB’s most known debacles but how did the PR team specifically fail?


How about by using "We can do better" over and over after every single gaffe that it's become a joke to people outside the SV bubble?


FB stock hit a low at ~$157 right before Zuckerberg's April 10, 2018 testimony to Congress, and bounced back into a steady increase, hitting a then-all-time high in late July before going into a dive after the earnings report. The July 2018 dive was attributed to missing revenue expectations, but AFAIK, the general opinion [0] was that when it came to public fallout specifically related to Cambridge Analytica, FB weathered it quite well – hence the stock's steady climb from Zuckerberg's testimony until the July earnings report.

That confidence in the stock is obviously not just from good PR, but I don't see the evidence that the PR folks fucked things up either, given how much potential damage the CA scandal was predicted to cause.

[0] https://www.marketwatch.com/story/facebook-stock-crushed-aft...


If one of the goals of a PR department is to prevent debacles like CA from negatively impacting shareholder value, I'd say they were massively successful.


Facebook’s Cambridge Analytica PR was very successful, evidenced by the fact that you have people in this very thread arguing that the blame for the debacle overwhelmingly lies with Cambridge Analytica because they broke the TOS and then lied about having deleted the data, and that the media completely misrepresented the whole thing because it’s biased.


>Boz posted an explanation on Facebook, where he advertises the post as an organizational, internal call-to-debate. But while it's great to have a safe space for internal, organizational debates, it's still hugely concerning when that internal debate is whether we should all have a free and fair election in the U.S.

Was Facebook having an internal debate over whether we should all have a free and fair election in the U.S.?


I would encourage you to read the memo itself [1] and find out. It's not that long. In it Bosworth acknowledges that Facebook, as a new medium for propagating information, had a role in shaping the outcome of the 2016 election. He said that it was no longer tenable for Facebook to claim that it had no effect, and that as the 2020 election approaches, Facebook should be conscious of its role and formulate specific policies proactively so that it doesn't find itself in the same position it found itself in 2016, as it reacted to candidates and third parties using the platform in ways that hadn't been anticipated.

If Bosworth is calling for an "internal debate over whether we should have a free and fair election in the U.S.", it's exactly the same sort of debate that is occurring in newsrooms, radio and television studios all across the country. Every new medium offends the old. Newspapers were offended by radio. Radio was offended by television. Now they're all ganging up on Facebook.

[1]: https://www.facebook.com/boz/posts/10111288357877121


Other countries have rules about political advertisement especially before elections. Those include budget limits, content, attribution, locations, and more. It's weird to think the rules are so lax here.


An advertising ban ahead of an election feels like a logical solution, or at least a simple-to-understand one. People would still try to circumvent it, but the burden would fall on the ad vendors to avoid liability for violations.


And if they don't want to risk liability, they always have the option to not show any ads to the relevant population ahead of an election.


The difference is that the U.S. has this pesky thing called the First Amendment that makes restricting political speech difficult.


Why do Americans always think that freedom of speech is an exclusively American thing? All liberal democracies have similar concepts; they just put different limits on it (just like the US puts limits on it).


It’s the American exceptionalism, the belief that the US is unique in how much freedom its citizens have. The land of the free. In reality, citizens in other democracies have as much or more freedom.


America has far fewer limits on freedom of speech than other democracies.

Hate speech law in the US is practically nonexistent. Libel is a much more difficult case to argue. Restrictions on political speech are widely held to be unconstitutional.

Compare that with other democracies, such as the UK, Germany or India, and I think you'll find that the US does have significantly more freedom of speech than other democratic nations.


Nope, it’s just that Americans are sufficiently politically homogenous that they don’t try to say the forbidden things.

Try, for example, to blog that your company should not trade with Israel whilst tendering for a federal contract.

(Just an example: I don’t have a dog in the actual fight).


But don't forget, money is speech, and corporations are people. All of which is absolutely not in the first amendment.


The right to freely associate + the right to speak + the right to printed speech = the right to collaborate on speech

i.e., there is free speech for groups as well as individuals

Saying there's a right to free speech for groups unless someone gets paid is a little silly and hard to enforce. Hence our current situation.

From another perspective, there is obviously "freedom of the press" but the Constitution does not say who is allowed to own a press and it doesn't say you have to personally own a press to make use of it. In fact, it would probably be problematic if you weren't allowed to hire a press if you had something you wanted to say.


For the same reason groups and corporations don't have voting rights, they shouldn't be able to monopolize speech by drowning out all others.

Additionally, money is not speech. We disallow politicians from directly giving people money to vote for them. Why would we ban that if money were speech? The politician's trade is in speech; he'd simply be delivering a stump speech in the form of a $100 bill to his potential constituents.


Money is not speech, but speaking publicly on a scale larger than a literal soap box on a street corner has never been free. Banning spending money to express oneself is banning expression just like banning paying for raw milk is banning raw milk.

And groups do have voting rights. That's how Congress works. The U.S. is a republic.


Are you saying congress voting on legislation is akin to a corporation voting on members of Congress?


I'm saying that republics are voting as groups. By definition.

Not any arbitrary group gets to directly vote, no. But different groups do have rights: races, religions, genders, ethnicities, and, yes, groups of people with common values and interests.


Why is it that free speech extends to organizations? I'm all for free speech, but outside of individuals and nonprofits I'm struggling to think of a good reason it should extend to corporate entities.


Nonprofits are corporate entities too.

The Citizens United case, the most recent Supreme Court case striking down restrictions on political speech, was about a nonprofit organization opposed to Hillary Clinton. The government tried to shut down their “documentary” film about Hillary.


I can’t for the life of me figure it out.


Also remember that nonprofits, labor unions, and community groups are corporations too.

Corporate speech is just "speech for groups of people". Those groups may be for-profit corporations, or they may be nonprofit.


Newspapers, Radio, and Television all have a named editor who is responsible for output. Facebook doesn't.


Facebook, at one point, did curate the news that was shown in the News Feed. It stopped doing so after one side complained.


Why does it matter if they have an editor?


I think there's a legal distinction. Just like phone companies don't have a responsibility to censor misinformation being spread over the phone.


Wow, that post is pretty deluded. To paraphrase crudely, he is doing mental gymnastics to justify his and Facebook's current position. An easy example of this is:

> If I want to eat sugar and die an early death that is a valid position.

Sure, if you want to make an informed choice, go for it. Nothing prevents a person from buying and consuming 10 lbs of sugar every day. However, the obvious problem a lot of people have focused on is how to prevent unhealthy amounts of sugar from being present in all food.

All of this sidestepping from Facebook and its execs reminds me of tactics used by Trump (and Republicans).


> it's exactly the same sort of debate that is occurring in newsrooms

I'm going to have to contradict you there. Anyone who was following the 2016 Russian interference campaign and related news stories would know that Facebook used to pay people to moderate news stories, perhaps in a manner somewhat like a newsroom... But then they fired those people because they weren't promoting conspiracy-theory news stories, and conservatives claimed FB news was "biased".

This essentially created the environment where no one at FB was willing to fact check for fear of losing their jobs. When you hear about FB saying they will not fact check political content, it's specifically because if they did, they'd have no choice but to point out all the lies and inaccuracies in Trump's statements. So instead, they simply refuse to touch any of it, which leaves the door wide open to political actors to spread any disinformation they want.

A few related stories, for the interested: [1] https://www.theguardian.com/technology/2016/may/09/facebook-... [2] http://nymag.com/intelligencer/2019/08/facebook-will-reintro...

As the [2] reference says, unless FB has suddenly grown a backbone with regards to standing up for truth over dollars, then any new effort will fail in the same way it did last time. Facebook has never been able to demonstrably prove it cares about election meddling, and you shouldn't believe empty words in their press releases.


> As the [2] reference says, unless FB has suddenly grown a backbone with regards for truth over dollars, then any new effort will fail in the same way it did last time

I find it somewhat sanctimonious for the traditional news media to be calling out Facebook for favoring dollars over truth when they themselves were just as key as Facebook was to normalizing and publicizing Trump. CNN executives made a conscious decision to give Trump lots of coverage, in an effort to compete with Fox News. They too presented Trump's views as being just as legitimate and valid as the views of those opposing him.


>>> Boz posted an explanation on Facebook, where he advertises the post as an organizational, internal call-to-debate. But while it's great to have a safe space for internal, organizational debates, it's still hugely concerning when that internal debate is whether we should all have a free and fair election in the U.S.

>> Was Facebook having an internal debate over whether we should all have a free and fair election in the U.S.?

> I would encourage you to read the memo itself [1] and find out. It's not that long. In it Bosworth acknowledges that Facebook, as a new medium for propagating information, had a role in shaping the outcome of the 2016 election. He said that it was no longer tenable for Facebook to claim that it had no effect, and that as the 2020 election approaches, Facebook should be conscious of its role and formulate specific policies proactively so that it doesn't find itself in the same position it found itself in 2016, as it reacted to candidates and third parties using the platform in ways that hadn't been anticipated.

> If Bosworth is calling for an "internal debate over whether we should have a free and fair election in the U.S.", it's exactly the same sort of debate that is occurring in newsrooms, radio and television studios all across the country.

This seems a bit illogical to me. Facebook is a platform that enables sophisticated communication for anyone who chooses to use it. I don't see how the opinion that Trump "got elected because he ran the single best digital ad campaign I've ever seen from any advertiser" in any way implies that the election wasn't either free or fair.

Let's rewind to 2008:

https://www.nytimes.com/2008/11/10/business/media/10carr.htm...

>>> Like a lot of Web innovators, the Obama campaign did not invent anything completely new. Instead, by bolting together social networking applications under the banner of a movement, they created an unforeseen force to raise money, organize locally, fight smear campaigns and get out the vote that helped them topple the Clinton machine and then John McCain and the Republicans.

>>> As a result, when he arrives at 1600 Pennsylvania, Mr. Obama will have not just a political base, but a database, millions of names of supporters who can be engaged almost instantly. And there’s every reason to believe that he will use the network not just to campaign, but to govern. His e-mail message to supporters on Tuesday night included the line, “We have a lot of work to do to get our country back on track, and I’ll be in touch soon about what comes next.” The incoming administration is already open for business on the Web at Change.gov, a digital gateway for the transition.

>>> The Bush campaign arrived at the White House with a conviction that it would continue a conservative revolution with the help of Karl Rove’s voter lists, phone banks and direct mail. But those tools were crude and expensive compared with what the Obama camp is bringing to the Oval Office.

Obama seems to have had an advantage in 2008 - was that "unfair"?

If the capabilities within the Facebook platform, that are available to everyone, are deemed to be harmful to democracy, then so be it, but claims that it is unfair, or should somehow have no effect, seem way off the mark to me.


I think the argument goes like this:

1. The US Presidential election is decided by a small percentage of voters who have an open mind and live in the right districts.

2. Facebook’s targeted advertising allows advertisers to efficiently buy the votes of these select voters.

3. Without Facebook, this wasn’t already happening.

Personally, I believe #1 is more or less indisputable, and I’m willing to believe #2. But I’m not so sure about #3.


> 3. Without Facebook, this wasn’t already happening.

> Personally, I believe #1 is more or less indisputable, and I’m willing to believe #2. But I’m not so sure about #3.

I think you overstate #3 a little bit. It could have been happening without Facebook, but at a lower scale and not effectively enough to matter.

I think there's also a #4:

4. Facebook ad-targeting allows the influence to be covert, because watchdog group members are probably not part of the targeted demographics.


4 isn't right (anymore) because they made all political ads available to anyone who wants to see them. https://www.facebook.com/business/help/2405092116183307?id=2...
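
(If you'd rather poke at that archive programmatically than through the web UI, there's also an Ad Library API. Here's a minimal sketch in Python; the endpoint path, API version, and field names are recalled from the public docs rather than verified, so treat them as assumptions to check against the current Graph API reference, and the token is a placeholder since the real API requires an identity-verified account.)

    # Minimal sketch: query Facebook's Ad Library API for political ads.
    # Assumptions: endpoint path, API version, and field names are from
    # memory of the public docs -- verify against the current reference.
    # "YOUR_TOKEN" is a placeholder; the real API needs an
    # identity-verified access token.
    import requests

    resp = requests.get(
        "https://graph.facebook.com/v5.0/ads_archive",
        params={
            "access_token": "YOUR_TOKEN",
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": "['US']",
            "search_terms": "election",
            "fields": "page_name,ad_creative_body,ad_delivery_start_time",
        },
    )
    resp.raise_for_status()
    # Print who ran each matching ad and when it started running.
    for ad in resp.json().get("data", []):
        print(ad.get("page_name"), "|", ad.get("ad_delivery_start_time"))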


> 4 isn't right (anymore) because they made all political ads available to anyone who wants to see them.

I think #4 is still valid:

1. Facebook may not be able to correctly identify political ads vs. other ads.

2. Their Ad Library seems to be missing important information [1].

3. A disclosure like the Ad Library still obscures influence campaigns by greatly reducing the ability of watchdogs to passively monitor the political discourse. Instead they have to expend much more manpower actively monitoring the library with the right search terms, and if they fail to do that, they'll miss things.

[1] https://www.propublica.org/article/facebook-blocks-ad-transp...

> Facebook has launched an archive of American political ads, which the company says is an alternative to ProPublica’s tool. However, Facebook’s ad archive is only available in three countries, fails to disclose important targeting data and doesn’t even include all political ads run in the U.S.

> Our tool regularly caught political ads that aren’t reflected in Facebook’s archive. Just this month, we noticed four groups running ads that haven’t been in Facebook’s archive:


> But I’m not so sure about #3.

It was happening, but it was done by people from political parties visiting those voters in person at their doorsteps (I know of it from documentaries). That's obviously far more expensive and time-consuming than ads, and it's ineffective when done in a foreign accent or language or without other means of identification/verification, so people outside the US had a much harder time influencing those voters.


> Without Facebook, this wasn’t already happening.

This double negative is a bit confusing. Targeted political messaging absolutely was a thing before Facebook, and even before the internet. Correlate info on zip code demographics with the readership demographics of different publications and media outlets, and you can craft messages that target specific groups.


Harder to track the impact of 'analog' misinformation like leaflets/handouts too.


If you accept the premises of #1 and #2, then #3 may simply be analogous to how money laundering was possible before bitcoin but is now easier, cheaper, and harder to trace.

So the argument could be that they disrupted that black market, only they did it as a publicly listed company.


I'd add that for #2 they don't need to persuade people to change their vote with the adverts, or even mislead them (although the latter may help).

All they need to do is target adverts reminding you to vote to the people who will vote the way they want.

Facebook ran an experiment in 2010 that shows they can definitely increase the chance of individual users voting: https://www.smithsonianmag.com/science-nature/how-a-facebook...


The other two major parts of this:

Facebook's ad model is built to reward "virality", meaning being over the top and salacious means lower costs. Even Boz acknowledges that was part of the Trump team's genius: they used the platform, as it was built, perfectly. The Democrats naturally have equal access to this platform, so I don't think it's as much a left-right thing as a crazy-calm divide.

The other thing is just disinformation / identity verification: making sure all advertisers are who they say they are (so you don't get Russians posing as Black Lives Matter, etc.).
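
To make the "salacious means lower costs" mechanism concrete, here's a toy model in Python. The engagement-weighted ranking is my own simplification of how auctions like Facebook's are commonly described (roughly, rank by bid times predicted engagement), not Facebook's actual published formula:

    # Toy model of an engagement-weighted ad auction (an assumed
    # simplification, not Facebook's actual published formula).
    def total_value(bid_usd, predicted_engagement_rate):
        # Rank ads by bid weighted by how engaging the system predicts
        # they will be.
        return bid_usd * predicted_engagement_rate

    calm_ad = total_value(bid_usd=10.00, predicted_engagement_rate=0.01)
    salacious_ad = total_value(bid_usd=5.00, predicted_engagement_rate=0.03)

    # The over-the-top ad wins the slot while bidding half as much,
    # i.e. being salacious buys the same reach at lower cost.
    assert salacious_ad > calm_ad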


A significant number of tech workers I know genuinely believe that companies should leverage their technology to benefit the candidates they feel would be better for the country, and do things like ban Trump's social media accounts. This is what Bosworth was addressing:

> But he maintained that the company should not change its policies on political advertising, saying that doing so in order to avert a victory by Mr. Trump would be a misuse of power, comparing it to a scene from “The Lord of the Rings.”

https://www.nytimes.com/2020/01/07/technology/facebook-andre...


Banning Trump's social media accounts wouldn't require bias against him, it would simply require him to stop receiving special exemption from platform rules.


Specific individuals aside, the point was to explain that Bosworth was talking about how Facebook shouldn't use its platforms to benefit the candidates it prefers.


People are downvoting this, but it's just a matter of fact. Jack Dorsey has repeatedly said that the "newsworthiness" of Trump's tweets outweighs the ordinary standards of propriety that get applied to other people's tweets. Somebody asked Jack point blank whether Twitter would take action if Trump made a tweet explicitly asking his followers to murder a journalist, and Jack's response was, "We'd certainly talk about it." (https://www.washingtonpost.com/nation/2019/01/18/would-calli...)


I've worked in media, and I can confirm the "packages" that sponsors buy as described are totally normal in media.

There's a reason editorial departments are totally separate from ad sales.

FTC disclosure of paid content is extremely important and taken very seriously (at least where I worked and by the FTC).

IMHO Facebook doesn't have any reputation left to tarnish, but Teen Vogue screwed themselves here, badly.


In my experience, editorial departments at most outlets are no longer separate from ad sales. On paper, maybe, and as a concept/line that top brass can repeat publicly, but the four major outlets I've worked at all had editorial staffs heavily influenced by the business side. Sometimes censorship, sometimes decision-making and influence.


Is it just me, or is this completely bonkers? Let's say Teen Vogue had been transparent about the article being sponsored, meaning it would have had some tiny print somewhere saying the article was part of a partnership with Facebook. How would that matter? Most people reading it would hardly notice, and even if they did, it would hardly change how they process the information in the piece. Our minds have a hard time knowing where the facts they recall came from. The bad thing isn't the incohesion of Facebook's PR; it's the total lack of regard for truth and ethics among these corporate MBA and media types. If it gets the job done, then it becomes good, as in morally good. Even if you manipulate the truth and bend the minds of the masses toward misconceptions, as long as you do it well, you're being ethical and good. Only if you fail is something not good. That's a pretty messed-up moral system.

Corporations shouldn't be able to buy anything from media outlets other than ads that look like ads and are completely visually separate from actual journalistic pieces. Anything else is just unethical.


I feel like the actual change is not really Facebook's PR but the reporting about it. The Teen Vogue affair seems to have caused a dam break that shifted "reporting on Facebook as dysfunctional" right into the middle of the Overton window across online media.

Facebook's strategy was always to outrun a trail of PR bodies via sheer size, banking on "customers" (who should really be called products) who did not and do not care. Sure, this PR debacle is no bath of glory by any means. But I can't think of any PR campaign in response to the countless past scandals that made me think "wow, nice catch". At least in Germany, their response to the accusation of manipulating elections was a billboard campaign advertising that Facebook has a settings page where you can click switches and thus be in full control of your privacy (:D).

Was there ever any effective response to Zuckerberg abusing his company's data to crack journalists' accounts? I can even remember an age-old incident where Facebook made all your posts visible forever on your wall, or something like that, which was simply drowned out by Facebook's ongoing, unchallenged growth after a while...

To clarify, I'm glad the media is finally elevating beyond lizard memes, but I reject the notion that this current affair is the first visible crack in the rotten foundations one could have observed.


I was wholly unsurprised to see that last tweet coming from someone on the Hillary 2008 campaign. It reads just like the defensive "use women as human shields to deflect criticism" attitude that came from the campaign when it faced a real primary challenge in both 2008 and 2016.


Facebook is just transitioning to becoming the digital equivalent of an oil company. They make a product that billions consume, but that many people think is doing damage to the environment (even though many of those critics still consume the product).

That's why their PR machine is now switching to these slightly astroturfy campaigns. The next step is an advert along the lines of "They call it pollution. We call it life".


Fictional analogies to the "oil company über alles": the TechnoCore in the Hyperion Cantos [0], and Phyrexian Unlife [1] in the Magic: The Gathering CCG.

[0] https://en.wikipedia.org/wiki/Hyperion_Cantos [1] https://gatherer.wizards.com/pages/Card/Discussion.aspx?mult...


Phyrexian Unlife might be the most emotionally effective magic card.


> Facebook is just transitioning to becoming the digital equivalent of an oil company. They make a product that billions consume, but that many people think is doing damage to the environment (even though many of those critics still consume the product).

I like that analogy or narrative. The aesthetic value of the analogy is appealing to me.


I'd argue that's an improvement; their previous PR regime sounded like the Soviet information ministry.


> Facebook is just transitioning to becoming the digital equivalent of an oil company.

Seems more analogous to a tobacco company. Some people think it's bad and they use it anyway.


> Seems more analogous to a tobacco company

They have an addictive product in common. But oil seems more apt.

The sale of tobacco is highly regulated. Oil, less so; Facebook, not at all.

Oil became useful in the Industrial Revolution, and enabled downstream innovation. Facebook came about amidst the modern Internet and is likewise a platform. Tobacco is old and the end of the line; the ancillary market is limited.

Finally, both oil and Facebook are tools (for some). Tobacco could be thought of as a tool, but it's more universally hedonistic.


One interesting aspect of FB's externalized cost structure is that the damage is both to society (like pollution as you mention) and also the individual consumer (more like cigarettes) because the product is designed to be addictive and while releasing momentary dopamine to the user, makes them more depressed on the whole.


Oil is so useful that modern society would collapse without it.

Facebook is not meeting a core need. It's peddling something entirely optional to society. At best it provides entertainment value, while being extremely addictive and dangerous.

Facebook is like crack.


In Southeast Asia, Facebook is the main or only online presence for many, many businesses. They don't have their own websites, just a corporate Facebook page, a phone number (mostly for WhatsApp/SMS), and a Google Maps entry. I'm not saying all these businesses would disappear if Facebook did, but it's not clear how they'd stay online. Most of them have zero IT.


It's increasingly common for small businesses in the US too. Many restaurants, for example, have a Facebook page, a Yelp page, etc., but no website of their own.


Wow, I didn't know that. What are some examples?

That's interesting to think about compared to the many companies I've seen with their own URL, even when their website is all HTML tables and CSS rules from 10+ years ago.


Here's an example: https://facebook.com/BelimbingSuperstar/

That is a midrange restaurant in tech-savvy Singapore, opened a few months ago by people who already had another successful restaurant.

And here is their website: http://belimbingsuperstar.com

As of this writing, the website is literally a GoDaddy placeholder page. They use the domain for email only; they didn't even bother to redirect it to Facebook.


I'm certain that given the size of that market, some enterprising company would step up to fill it.


One might expect Garena to do it, but why haven't they already? Maybe Grab can.


> Oil so useful that modern society would collapse without it.

I don't like the taste of this, I'm having a hard time figuring out exactly why, but it's something like:

We depend on oil because we depend on oil. I can say I "need" my phone, but I bet I'd be just like anybody else in the '60s if I didn't have my phone. I "need" BART, but only because I live in a city that exists in its current state because of public transit. Etc. See where I'm going with this?

Modern society would absolutely collapse if oil just suddenly vanished. Obviously. But how could that possibly happen? It's impossible. And it's also, I think, absurd to suggest there's no possible way to quickly and intelligently transition off oil dependency, barring the fight against the Oil lobby, which I'll get to.

Also, I postulate that modern society could exist in an almost identical form to how it does today if it hadn't become addicted to oil from the get-go. It's feasible from an economic and engineering standpoint that we could have transitioned our entire electrical grid to nuclear, with trickles of hydro and wind here and there, by the 1980s. Sure, electric cars haven't been entirely feasible until recently, but electrified public transit absolutely has been, and maybe we would have gotten better electrified vehicles sooner if there hadn't been so much oil available.

My basic point is that oil is not "necessary" for modern society, either in its formation or in its current iteration, and that we could absolutely get by without it, EXCEPT for the fact that doing so would hurt some shareholders in a very big way, so they will fight tooth and nail to prevent it. With them out of the way, the obstacles would be far smaller.


Feel free to replace oil with whatever dominant energy source you'd prefer, for your timeline of choice. I think the core argument still holds.

To elaborate: my problem with the Facebook <-> Oil Company analogy is the magnitude mismatch in the utility axis. Facebook's operating niche describes very specific solutions for social interaction, in a space saturated with stronger and very enjoyable alternatives, some as old as humanity itself.

I think the scale might work out in the toxicity axis though.


Ah, fair, I completely agree.


Human connection is a core need. Facebook provides it (or something a lot like it).


Facebook is human to computer.

I consider human connection to be person to person, where you can see, talk to, and hear the other person, gauge body language and nuance, and adjust to all that in real time. Facebook is a substitute for human connection, and a poor one at that. I see a lot in the news about how people feel more depressed and alone than ever, are online more than ever, and check their devices every few minutes. So much so that when they actually DO have person-to-person contact, they're still tied to their devices instead of being present in the moment, giving undivided attention, and enjoying the other person's company. I agree that human connection is a core need. I just think we're wrong to credit Facebook with fulfilling that need. I'd argue it's doing the opposite and destroying human connections in general.


And yet, after not using Facebook for over a year, I don't miss it one bit and feel like my human connections have improved as a direct result.


Human connection can be had in many alternative ways, with a lot less drama and controversy.

Reading these posts full of hyperbole, one would think human society couldn't exist without Facebook.


It certainly could, and did for thousands of years... but it would be very different. If Facebook went away, you would just get other companies springing up to fill the void (Snapchat, TikTok, etc.). That's how you know it's a core need.


I don't understand how "it would be very different" if other companies were to just spring "up to fill the void."

Doesn't this phenomenon merely indicate that some market forces believe that the industry Facebook works within has potential for more profits than are currently being extracted?


I haven't seen a TV ad in years.

Back then, Facebook had a TV ad series. It was a vague, "what's this ad for?" kind of ad. The gist was "friendship is great", with some sort of Facebook-equals-friendship reveal at the end.

The ad was creepy. The suggestion that Facebook provides the core human need of friendship is creepy. Facebook is incidental.


Facebook is to human connection what sugar is to nutrition.


So, inherently necessary, and quite important if you're diabetic?


Not exactly. More like "not a substitute for".


I don't think FB ever had great PR.

The Social Network was largely a negative portrayal of Zuck and FB.

I remember when FB bought Instagram there was a lot of initial backlash.

Most of the positive stuff around Facebook was mainly due to the huge upward movements in its stock price post-IPO, which solidified Zuckerberg and Sandberg as business geniuses. I'd argue that their individual profiles became much more positive as the stock price increased, but I wouldn't say Facebook the company was ever really perceived in a positive light outside the business community.


Ben Mezrich, author of The Accidental Billionaires, which was later adapted into The Social Network, has a differing opinion: like Liar's Poker and Wall Street, the movie made Facebook and Mark Zuckerberg more well known and popular than before.


I highly doubt it, but I'd trust internal Facebook data on any bump in users if that were the hypothesis.


In my opinion, the problem with Facebook isn't really the PR. Yes, Facebook's PR is broken as the article describes, but the real problem started years ago, back when it was a "well-oiled machine".

The problem is that there is a huge distance between what Facebook PR says and what Facebook does: the constant apologies for misbehaviors that are never corrected, the apparently deliberate misinterpretations of much of the criticism leveled at Facebook, the continual discovery of new misbehaviors that should have been stopped, and so forth.

That Facebook has now taken an official public stance of being antagonistic, dismissive, and condescending is bad, but it's just bad icing on an already bad cake.


Yeah it seems like whatever rules there are ... there aren't many.

The whole VPN they ran that tracks kids ... that's not even a mystery as far as being a terrible idea.

Then Apple told them to knock it off.

So Facebook renamed it (sloppily too) and put it up on the app store again until they got caught again.

Kids, users, other companies, they don't care.

How can a PR person even craft a response like "hey, we did wrong, but we're sure we won't... well, yeah, we probably will do that again, maybe immediately"?


As someone who knows the person who originally tipped Apple off about the tracking... this is such a misinformed editorialization of what actually happened that I don't even know where to start.


Can you help clarify what really happened?


Among other things: lots of other companies (including Google) were doing the same thing; it wasn't an app for general distribution (the kid already had his parents' permission to use said app); and on the order of a thousand people ever installed the thing.

Enforcement from Apple against FB basically came the day after Apple was hit with an embarrassing "hackers can turn on your camera via FaceTime without you knowing" bug. This part is hearsay, but, IIRC, Apple only started enforcing against the dozens of other companies doing the same thing once legal action was threatened over uneven enforcement. (Meanwhile, the kid I chatted with who reported things to Apple in the first place... definitely reported more than just FB.)


From a factual perspective, it wasn't on the App Store. (It was distributed via an enterprise certificate, which I find to be even worse, since they are explicitly not supposed to be used in the manner that they were.)


Not going to comment on whether or not it was philosophically correct, but I will note that Google (not to mention dozens of other companies) were doing the exact same thing — just that they didn’t get called out for it until a few days after.

IMO, I wouldn't be surprised if this happened this way because Apple had decided to use FB as the fall guy for a PR media blitz. (In particular, Apple made the decision to "enforce on FB" basically the day after their "someone else can turn on your camera with FaceTime without you knowing" bug.)


The beginning of this article really plays out like the author is just against Zuckerberg simply talking to people with conservative viewpoints.

Being willing to listen to other viewpoints (and encouraging others to do the same) should not be considered a PR issue.


That is what Zuck thought. And it would be missing the point. That is not what the author took issue with.


I didn't get that at all. What the author was against was Zuckerberg's defensive, belittling tone in his post "explaining" why he's meeting with people with conservative viewpoints.


The author reads way too much into Zuckerberg's response. It makes me question their perception of the events and, as a result, the analysis as a whole.


If I'm interpreting it correctly, it appears that the author is actually offended at the suggestion of listening to viewpoints that conflict with their own in order to possibly learn things from it.


I didn't read it like that at all. As stated in the article, the offending bit was the last two lines. "If you haven't tried it, I suggest you do" is the smug, condescending jab that was completely unnecessary. It is inflammatory for no benefit.

The start of the response was perfectly fine and I did not feel the author took offence to it.


And the author should not take offense to that either.

I have no idea why people would take offense to 'If you haven't tried it, I suggest you do'. Maybe the same reason they would be offended by talking to a person with different opinions.


>> I have no idea why people would take offense to 'If you haven't tried it, I suggest you do'.

On its own it’s an innocent enough statement, and even correct. But in this context, i.e. when people are accusing you of something, and it is part of your response, it comes off as very condescending and smug.

There is a reason PR departments exist: they tend to be hypersensitive to the different ways in which a piece of communication can be perceived, and to dull its sharp edges.


I think, more than offended, he seemed surprised that PR let it through.


From 2017:

Facebook’s Mark Zuckerberg Is Hiring a Team Worthy of a 2020 Presidential Campaign - Facebook CEO Mark Zuckerberg's latest hire is further fueling speculation that he could be planning a 2020 presidential bid https://people.com/politics/facebook-mark-zuckerberg-team-20...

>Zuckerberg and wife Priscilla Chan have hired Democratic pollster Joel Benenson, a former top adviser and longtime pollster to President Barack Obama and the chief strategist of Hillary Clinton’s 2016 presidential campaign, as a consultant,

>In January, Zuckerberg, 33, and Chan, 32, hired David Plouffe, campaign manager for Obama’s 2008 presidential run, as president of policy and advocacy. They also brought on Amy Dudley, a former communications adviser to Virginia Democratic Sen. Tim Kaine. Ken Mehlman, who ran President George W. Bush’s 2004 reelection campaign, is also on the charity’s board.

>And Zuckerberg’s personal photographer, Charles Ommanney, was the photographer for Bush and Obama’s presidential campaigns, Business Insider reported.

Joel Benenson and David Plouffe are working elsewhere now. Amy Dudley is still the spokesperson for the Chan Zuckerberg Initiative.

Zuckerberg has had an incredible PR army working just for him, not for Facebook. I think there has been a switch in his personal ambitions and a change in PR people.


Facebook’s PR isn’t broken, Facebook’s just doing some really bad stuff


Are you disagreeing with the argument in the article? Or is this just a comment on FB in general.

I agree they're doing bad stuff, but the article makes a very good point: they used to be good at handling PR for their evil and getting away with it, and now they seem to have lost that touch.

I doubt the level of evil has changed dramatically in the last 18 months, but the PR reactions have.


Yup.

> something noticeably changed. They got combative. They got sloppy.

An interesting take, whatever you think of the underlying merits of the company and the complaints against it.


In fact, unless you think that the company was better before and is worse now, it should be striking how much the narrative has changed without any underlying change.


Could be both. Ultimately, PR is ineffective when the scale of the issues outweighs the damage control PR can provide.


Read this thread. I've been noticing that a lot of tech reporting by the NYT etc. is wrong or incomplete. In this case, a lot of the arguments are laid out in the thread. As an example from the past, they spread misinformation about Google's revenue figure from the news industry (https://www.niemanlab.org/2019/06/that-4-7-billion-number-fo...)

https://twitter.com/antoniogm/status/1214685876095045632?s=1...


> One of their most longtime, loyal leaders is directly saying they have the power to sway national elections. It is their decision, and their decision alone, to resist the temptation to "change the outcome"!

Oh, he better be careful. That’s pretty close to pushing a right-wing talking point /s

Facebook et al. are no different in their power to influence elections than traditional newspapers have ever been. The only difference is a matter of scale and homogeneity. Today, instead of thousands of media publications, each pushing its own biases and perspectives (which they always have), you have a very small group of tech giants that get to decide what you see and (even more frighteningly) what you're allowed to say about it. Sure, there are more media outlets today than ever, but a small handful of companies get to control their reach and (again, more frighteningly) their ability to collect revenue.

Then you have the issue that traditional media has always represented a diverse set of opinions and worldviews. You have conservative and liberal outlets, outlets that promote free-market ideas and ones that promote socialist ideas, outlets that promote regulation and ones that promote small government. The small number of organizations that control access to ideas and speech today all represent an incredibly homogeneous political worldview.

The problem isn't that those organisations have done a poor job of controlling the flow of information; it's that they have the ability to do it in the first place. Those companies should not have the ability to act as gatekeepers who determine the credibility of information or the moral implications of speech. In a free society, that responsibility falls on the shoulders of every individual, and to have an authority doing it on their behalf denies them the opportunity to do so themselves.

This is also a matter of values, not constitutional law (as people often like to derail such conversations by claiming regulation would violate 1A). The law that empowers these companies to moderate content is the Communications Decency Act, not the 1st Amendment.


> > One of their most longtime, loyal leaders is directly saying they have the power to sway national elections. It is their decision, and their decision alone, to resist the temptation to "change the outcome"!

> Oh, he better be careful. That’s pretty close to pushing a right-wing talking point /s

> Facebook, etc.. Is no different in its power to influence elections than traditional news papers ever have been. The only difference is a matter of scale and homogeny.

That's an empty argument. Everything is about scale: by that reasoning, each individual has the same power as, e.g., Facebook to influence elections; it's just a matter of scale. But scale is precisely the difference between effectively no power and a lot of power.



This is a place that famously had a motto "move fast and break things." In other words, fail early and fail often. For that culture to succeed you need people willing to admit (and tolerate) mistakes. Sounds like that's pretty well gone.


I am not sure I completely buy the thesis of this essay because...

1. Facebook has NEVER had positive PR. The tech media has always used FB as the Silicon Valley punching bag. Name one good article in the news about FB. At some point you're just numb to it all.

2. Facebook has had leakers for years, and especially when Boz makes posts!

To say that Facebook's PR is broken implies it wasn't before. I'd argue that FB has long needed a comms team representative of the company it actually is; FB comms feel designed for the old college-only network FB once was. When Google can do similar misdeeds and get only a tiny slap on the wrist, I do agree it's probably a comms problem in the end.

Also, to completely write off Sandberg because of one corporate misstep reeks of "Cancel Culture" hysteria, but that's just me.


The org chart shown in the article is two years old and seems oddly selected even for that time. A current org chart would show a different split between men and women.


Not everything is run by PR or fixed by PR or even measured by PR (except for PR personnel of course).

Sometimes things are just bad and it takes time to realize.

Regardless of the packaging.


The Facebook / Cambridge Analytica story is completely baffling to me.

Here we have a company whose business model is to lock up as much of the web as possible inside its platform: contacts, message boards, photos, events, calendars, chat, dating, anything goes really. They then earn money from selling data.

Then the Cambridge Analytica scandal breaks, where a company used data from Facebook unethically. How was this not expected? Would it have been better if this company had used the data a little less unethically? Would it have been better if Facebook had sold PR-as-a-service directly instead of enabling an ecosystem of smaller companies offering it?

(Because the latter is what is going to happen over the long run, should Facebook continue to be successful. When there is no room left to grow, the ecosystem will be cannibalized.)

This is all inherent to business models dependent on data ownership. With the majority of the population having migrated to closed platforms, the discourse among people who care about open data must instead be framed in terms of how to shape the conditions of those business models. Codifying data ownership is one way forward, GDPR being the best-known example. It may be a blunt instrument, but it at least poses the question of who owns which data.


Stifling a glib, sarcastic response: who do we feel has PR that's good and working, and not in some sinister sense of "working"?

What groups and activities do people who think this way hold up as positive role models?


It's not in English, unfortunately, but Berlin's subway operator (BVG) has been doing an excellent job at PR for the past five years or so.

They rely mostly on (self-deprecating) humour, both on billboards and in individual Twitter messages.

This strategy seems somewhat risky. There were some posts where I thought "funny, but I wouldn't want to be the one posting this".

It's hard to describe, because it's mostly a function of each tweet being really good. They must have some larger advertising agency behind this, with tons of talent.

They have seen a spectacular improvement in public approval, and even on measures such as violence against employees, fare evasion, and vandalism.


Trying to answer my own question: I'd say Digital Ocean's paid documentation, and whatever SEO gets those documents so highly placed in Google results, might qualify as benevolent PR.

Dev blogs for a couple of projects, like GitHub's, might also count.


Theoretically, the best PR would rarely be identified as such.


> You have someone like Antonia Woodford, rightly called out[2] for the absurdity of saying to combat misinformation you should require “advertisers use their real identities”. She has Yale, McKinsey and a rapid rise at Facebook, on her resume. This is the profile of someone who I imagine isn’t wanting to bunker down in a Trumpian hole of calling everything critical: “fake news”.

I don't understand. How is it "absurdity"? And the linked Gizmodo article doesn't seem to call it absurd either.

[2] https://gizmodo.com/teen-vogue-yanks-puff-piece-on-facebooks...


I believe on this forum PR means pull request.


At this point it is arguably obvious that Zuckerberg is a textbook sociopathic CEO who will say whatever is expedient for business needs. After constant misleading and outright false statements, not a single word that comes out of Facebook can be trusted. It's self-serving corporatespeak taken to an extreme, and I simply cannot fathom how so many people, the news media especially, remain uncritical and even trusting of FB's public statements.

Zuckerberg's dumbfucks quote is consistent with every public communication and major steering decision FB has made to date. He is truly the embodiment of lawful evil.


Understatement of the century


Just their PR?


When you have a guy who works at a company wondering whether his own company was responsible for an election result, and making LOTR references, then you know something has gone wrong.

No, Facebook isn't powerful enough to sway an election. (I get that some people are really angry about Trump; if you aren't thinking rationally, it's better to have no opinion than to start in on conspiracy theories.) No, business and life are not like LOTR. There are no goodies and baddies (ironically, this is part of why Trump was elected).


> Those last two lines. We all know that style of communication. Sardonic. Snarky. Sneering. Derisive. Whatever you want to call it, that mocking tone captures a dangerous combination of insecurity and arrogance. It feels like when Trump ends a tweet with SAD!. You read that and just think, what a dick.

I get that it's kind of sarcastic, but I don't really think "what a dick". It's not great for someone in his position to be sarcastic like that, but I think more along the lines of "he must be hounded by people ready to burn him at the stake for having lunch with certain people, so..."



