Facebook has struggled to hire talent since the Cambridge Analytica scandal (cnbc.com)
634 points by Despegar on May 16, 2019 | 423 comments



I just spent three months hiring in NYC, and now that I think about it, I haven't seen a single person mention they were considering counteroffers from Facebook. For context, Facebook and Google are the two largest tech companies with a significant NYC presence. It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.

> Usually half of the close is done for recruiters with the brand Facebook has

I'm also finding that company brand plays a huge role in closing candidates. Our company's brand is generally pretty strong, and I've found one of the things candidates respond to most is the story we tell about our company's past, present, and future. Facebook's story has become "we were founded by a jerk who didn't care about privacy, our not caring about privacy has had massive consequences for American and global society, and our promises to improve our approach to privacy in the future have proven to be disingenuous smokescreens."

It's no wonder the substantial portion of people who care about their employer's ethics are turned off.


There is also an issue with the 'evaporative' effect. If no one who works there is seen as 'ethical', then you'd expect the people who do work there to be unethical/dubious. So, trying to get a promotion is then more cut-throat, the lunch crew has a few more 'jerks', HR is a bit more biting, etc. Your hackles get raised and you are more suspicious of the motivations (however benign) of others. Better to just not get involved.


Sheryl hired the swiftboat campaigners to stop Congress. Finding out about that made me assume that over time most of their employees would trend in the opposite direction of optimistic.


I wonder if the exec team realizes that the ad-tech industry has had its Great Financial Crisis. Nobody is in love with them anymore. They'll get as much reception from politicians as Wall Street did when Congress passed Dodd-Frank. Banks don't earn much more than their cost of capital anymore.


> Banks don't earn much more than their cost of capital anymore.

This doesn't sound right. Can you add some citation or detail?


Goldman Sachs's return on equity used to average 20-30% before the crisis. Now a decade after the crisis they're glad to be doing more than 10%.

https://i.imgur.com/gtE05WX.png


I’m on mobile so I can’t read the numbers on the excel screenshot you provided but the historic high return on average equity for banks[1] (not including brokerage) was 16.29% in 1999. At last measure it was 11.85%. Dodd-Frank was merely a speed bump. The vast majority of banks have long since recovered from the crisis.

[1] https://fred.stlouisfed.org/series/USROE


That's not a good chart for banks. Their cost of capital is 8-10%, doing 11 or 12% is pretty shitty compared to the pre-crisis era.

And the smaller community banks have less onerous regulations than the big ones. Banks have to be much more capitalized and have less leverage because of Dodd-Frank. That's why the return on equity is mediocre.


While I'm not personally keen on him, Trump's admin rolled back CCAR to every 4 years, for small book every 6. Expect lower regulatory costs.


What's the app you used to get so much history?


FactSet


It isn't right. The majority of banks are profitable, the industry is somewhere around $200B/year of profit (I think that's just retail banking, not including brokerages and stuff).

This is historically high.


Wow! Source?


https://www.nytimes.com/2018/11/14/technology/facebook-data-...

The person who runs Definers Public Affairs, Matt Rhoades, was the opposition research director for Bush/Cheney '04.

https://en.wikipedia.org/wiki/Definers_Public_Affairs

https://definersdc.com/team/matt-rhoades/

> In 2004, he played a critical role in President George W. Bush’s winning re-election campaign, serving as the Research Director for Bush-Cheney ’04, where he helped develop the campaign’s opposition research, message development and rapid response operations.

The oppo research director of the campaign can have no official links to the 527 group, of course. But the group exists for one reason only - to discredit political opponents.


Sheryl Sandberg and Zuck are a team. They are aligned in some bizarre goal that seems to function as a juggernaut that I got into tech specifically to avoid.


There's also the reputation impact. When all of the bad things at Uber eventually became public, Uber engineers started reporting difficulty in getting new jobs. Apparently hiring managers assumed, perhaps correctly, that anyone who stuck it out at a toxic place that long was possibly the source of toxicity themselves.


If you lie down with dogs, you get up with fleas.


Can confirm. Worked there. Deeply disenchanted with the ethics of many people especially in the product, marketing, and (as you would guess) senior leadership groups.


Thanks for the input! I know you're on a throwaway account, but any stories or context for a Friday morning?


> "It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook."

interesting anecdote. google is a bigger concern for privacy and personal liberty, yet jobseekers are shunning facebook because of the more wide-ranging negative press.


>google is a bigger concern for privacy and personal liberty

Big claim. Any proofs?

With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

Extremely worrisome if you prefer to have people elected through a democratic process based on discussion. This is not about understanding what people want or how they think through big data analysis, but about manufacturing it.

Sure - provocations and lies are not new, that was always the case in politics, but with social media everything is at scale and everything is happening violently.


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

(EU citizen here): I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process. If that's not feasible, then the second best option is, indeed, that they provide the same service to all candidates, no matter who those candidates are.

Am I getting it right that you'd prefer they pick some candidates to help, to the detriment of others? Because that option does not sound very healthy to me, personally.


My ideal system is to eliminate all private money from elections. You as a candidate are given a stipend by the FEC at the beginning of the campaign season, the same amount as any other candidate for the same office, and you are free to spend it. You ran out? Tough. See you next election cycle.

Nobody is allowed to give you money or anything else of value (including free airtime) -- not individuals, not companies, just the FEC. Anything else is bribery, and a crime. That way, you're not going to do things just to please your benefactors and get you an edge over your opponent next time, as your war budget is already accounted for. Instead you can focus on doing what's best for the people.


If you really want money out of politics then you replace all of them with sortition (assemblies of the people). They are representative, being drawn at random, for the whole population and they are not elected, so they don't need to campaign. Similar to politicians, they need to be supported by experts and advisors in the specific topic they are working on. I'd rather trust a group of random people deliberating than a bunch of professional liars. Sortition is a way for people to participate in democracy more than voting once every couple of years and posting on FB.


My approach is different, two houses (assemblies) one elected and one via sortition.

Elected can propose and pass but sortition house can block.

Basically the commons/lords setup but with the lords replaced with random people.

That way the politicians answer directly to the people (or a random sample) of them.


I have been thinking exactly the same. I now have to wonder if you work at Facebook and have "read my mind" via my likes… ;-)

I've been musing on whether there should be some small barriers to joining, as there are in jury service - perhaps the elected get to oppose a certain number of candidates, or they must pass a civics/governance test first so at least they've some technical knowledge going in. I can see that being twisted into something bad though.

Much as I dislike the Lords Spiritual in the current system, I wonder if the sortition should embrace it and be a "tulip farm" of certain interest groups, e.g. 20 each for religion, business, justice, commoners etc, as then there's a definite base of understanding in important areas.

However it would be arranged, it'd be hard for it to be worse than having the Lords full of lords though.


OK, what if people with money want to spend that money on political speech _without_ coordinating with the candidate? That's the Citizens United problem.

There's no quid pro quo bribery, but if the NRA spends a bazillion dollars attacking your opponent but not you, it'd be hard to say there's no influence on your decision making process. At the same time, it's really tricky to ban. Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?


> Is something like Michael Moore's Fahrenheit 9/11 a form of political advertising?

Not only that, is CNN or Fox News coverage of a sitting politician who is running for reelection a form of political advertising? How they choose to report stories -- and which stories they choose to report -- can certainly affect how the voters view the candidates.

Probably the best solution is to just have a signature requirement where if you get that many signatures, the government gives your campaign an amount of money equal to the average amount of private money raised by successful candidates running for the same level of office in the previous election.

Then the average privately-funded campaign will have twice that much (if they get the signatures too), but a factor of two isn't huge here. It's more of a threshold situation where once you reach a saturation point it's diminishing returns. Get the candidate to that point with public money and the value of trading legislation for private money would be much diminished.

Of course, you still have the problem that too many people vote for who cable news tells them to.


This is reasonably similar, AFAIK, to how it works in France (and I assume many other developed countries).

Campaigns do spend their own money but it is capped at some low value to even the playing field, and they also receive public funding. TV stations are required to give equal time to all candidates (in the 2007 election there were 12 candidates, only 3 of which had any realistic chance of winning, but major TV stations spent equal time interviewing all of them).


> Nobody is allowed to give you money or anything else of value

That sounds like a form of extreme boycott - and however much I despise politicians in general, subjecting them to essentially expulsion from society (at least temporarily, until the election ends) and a complete gag order and media blackout (because otherwise I could promote a candidate without giving them money directly - I would just publish ads under my own name praising the candidate) just for wanting to be elected seems a bit extreme to me. Not to mention that, at least in the US, it's probably incompatible with at least half of the constitutional amendments.


Well you'd have to toss out the 1st Amendment to get that idea off the ground.


The courts have already supported certain restrictions to free speech, so it doesn't require a wholesale toss of the 1st.


Yes, the idea that campaign donations = speech is a new one as well. Tbh I think we need a constitutional convention to really solve all the problems with the American political system.



And what if a candidate wants to spend that stipend on Facebook ads. Is Facebook not allowed to have a salesperson take that money and sell ads to the candidate?


What does free airtime mean in this context? People used that term a lot about the coverage of Trump during the 2016 election, but forbidding news outlets to cover a candidate is obviously absurd.


Speaking as a U.S. citizen, your thoughts were exactly my reaction to that comment.


> I would prefer corporations (especially ones with such depth of funds and breadth of influence) be kept entirely outside the electoral process.

Do you mean corporations being banned from providing services to any electoral campaign? Probably not, because in that case election campaigns would be impossible. If so, then Facebook would be free to provide promotion services to any political campaign too - they are a service provider like any other.


> because in that case election campaign would be impossible

Would that be a bad thing?

I would love an election that was simply an announcement of the election date and a website to see the candidates and their politics. The only campaigning would then be the government campaigning to get people to vote.


> Would that be a bad thing?

If you want to have elections, yes. If you prefer hereditary monarchy, then you'd be fine.


With Facebook, it's easy for an average individual to leave the platform for good: stop using Fb/Insta/Whatsapp and install something like Privacy Badger to avoid tracking on all the other sites that have some form of Fb integration.

Leaving Google, by contrast, is way more difficult. Their ecosystem reaches literally every corner of the web, and you have to deal with it even if you don't consciously use any Google product. For example, if Recaptcha doesn't like you, everyday online tasks like paying public school fees [1] or signing up to an online forum become much harder. Another example is AMP, where the fact that you are reading an article hosted on Google infrastructure is often hidden from you; there are many more examples. Trying to quit Google feels like that episode of Black Mirror where a woman is ostracised by everyone because she doesn't have the same cybernetic implant that everyone else is using. Just because Google hasn't been caught in any scandal comparable to the Cambridge Analytica one doesn't mean that it's OK for them to have so much unchecked power.

[1] see my submission history for details


It's getting a little wearying to have to rehearse the ways in which Google is a threat to privacy. But let's get the band together one more time:

Google runs search and email for essentially the entire web, controls the market dominant browser and mobile OS, has tracking scripts on >75% of the top million websites and runs a fair amount of the internet's infrastructure. It is the senior partner in the online advertising duopoly (together with Facebook) and runs one of the three major cloud computing services. It has also become the de facto standards authority for the internet and runs a massive continuous operation to collect photos of every street on the planet, which it is now expanding into interior spaces. It sells always-on microphones for the home, as well as a line of internet-connected home appliances. It does so much invasive stuff that I've probably forgotten half of it here.

So it's neither a big nor a controversial claim in 2019 to point out that Google has unique breadth of visibility into both the physical world and anything that touches a connected device.


No one disputes that Google has its tentacles in many pots -- and definitely needs to be kept on a leash. But the claim was that Google was not just a matter of concern -- somehow a clearly bigger threat than FB.

Can anyone provide substantiation for that claim?


That seems obvious. If you don't use Facebook you're pretty much outside of the Facebook tracking network with a few exceptions wrt Facebook cookie tracking which you can kill with a browser plugin like Facebook Disconnect. With Google, the tracking surface area is orders of magnitude more ubiquitous - everything from Search to YouTube to Chrome to Email to Android and on and on. Facebook is almost (but not quite) negligible in comparison.
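
To make the "tracking surface area" comparison a bit more concrete, here is a rough sketch (hedged: the hostname list is deliberately short and incomplete, the example URLs are placeholders, and scanning static HTML misses dynamically injected trackers) that counts references to a few well-known Facebook and Google tracker domains in a page's raw HTML:

    # Rough sketch: count references to a few well-known Facebook vs Google
    # tracker hostnames in the raw HTML of some pages. The hostname and URL
    # lists are illustrative assumptions, and dynamically loaded trackers are
    # missed, so treat the output as a hint rather than a measurement.
    import re
    import requests

    FB_HOSTS = ["connect.facebook.net", "facebook.com/tr", "graph.facebook.com"]
    GOOGLE_HOSTS = ["google-analytics.com", "googletagmanager.com",
                    "doubleclick.net", "googlesyndication.com"]

    def count_trackers(url: str) -> dict:
        """Fetch a page and count occurrences of known tracker hostnames."""
        html = requests.get(url, timeout=10,
                            headers={"User-Agent": "tracker-scan-sketch"}).text
        return {
            "facebook": sum(len(re.findall(re.escape(h), html)) for h in FB_HOSTS),
            "google": sum(len(re.findall(re.escape(h), html)) for h in GOOGLE_HOSTS),
        }

    if __name__ == "__main__":
        for site in ["https://example.com", "https://news.ycombinator.com"]:  # placeholders
            print(site, count_trackers(site))

If the figure quoted upthread about Google tracking scripts on >75% of the top million sites is right, a scan like this will flag Google's hostnames far more often than Facebook's - but it's only a crude proxy for how much data actually gets collected.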


> If you don't use Facebook you're pretty much outside of the Facebook tracking network

Not if any of your friends use Facebook.


That's a pretty considerable exaggeration. There is a big difference between "I have friends that use Facebook" and "I have friends who take pictures of me and upload them to Facebook" or "I have friends that upload their contact list to Facebook" and even then, the amount of data that Facebook can extract from you in that way is pretty minimal relative to just about any other activity people commonly engage in online.


Google also sees pretty much every website visit for every website in the world, through Google Analytics.


"Orders of magnitude" (plural) means something on the order of 100x.

Which suggests to me that you're either exaggerating quite a bit - or you were using the term without quite knowing what it means. (Which is something more specific than simply "a lot".)


lol, I know what the term means, thanks.

Anyway, a 100x tracking surface area is pretty accurate and not an exaggeration in the least; if anything it is too conservative an estimate. Just Android, search and analytics on their own are easily 100x the tracking surface area of everything Facebook does, and that's without considering:

gmail, home, docs, amp, drive, maps, hangouts, chrome, chrome os, messages, voice, ads, gcp, youtube, firebase, music, waze, play-store, places, wallet, domains, duo and so many more. I don't understand how this isn't completely obvious.


> I don't understand how this isn't completely obvious.

Because you're living in a bubble, and have grown accustomed to thinking that everyone else is using the same lenses to view the world as you are.


The "bubble" is called reality; I elaborated on my point with detailed reasoning and all you've done is throw around insults like a troll. No point in continuing this discussion any further. Have a nice day.


> I elaborated on my point with detailed reasoning

It was extremely hand-wavy, actually.

Whether you take that as an insult or not is up to you.


https://energycommerce.house.gov/sites/democrats.energycomme...

https://www.eff.org/deeplinks/2019/04/googles-sensorvault-ca...

Sensorvault:

“includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”

US law enforcement had been regularly accessing Sensorvault user data in a dragnet-like fashion to obtain location details for hundreds or thousands of users.


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

I have become more and more convinced that this is Facebook's real business model; that enabling instances of the Cambridge Analytica archetype is the purpose the company actually exists for.


Just look at the amount of personal information Google knows/records about you. Your search history, web stats through Chrome, location history through Android, with whom you exchange emails if you're using GMail, which sites you visit and how long you stay on them through Google Analytics, probably online purchases with a combination of AdSense/AdWords & Analytics, everything you watch on YouTube etc.

They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.


> The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.

This is the whole point though, is it not? As far as we know, Google treats the data they collect more thoughtfully and responsibly than Facebook. And so they are (rightly or not) viewed as less of a threat to the public good.

Of course, they could just be better at hiding their abuse of our data... But that's a conspiracy theory, not a matter of public record like the Cambridge Analytica scandal.


No, it's not. They share the data indirectly by allowing companies to target individuals for advertising purposes based on that data. You search for shoes on Google and then ads about shoes follow you all over the web. So while you can't download users' posts like CA did in order to profile them for their political affiliation you can surely target them for whatever product you want to sell. If it was just about ads on Google everything would be hunky dory. But it's not. Just because they're nice and cool doesn't mean we have to give them a free pass to our personal lives.


Is that really any better? Google is so monolithic and all encompassing that data collected by their services can be shipped around internally instead of having to be sold to third parties.


> be shipped around internally

How is that a problem? The issue at hand is the irresponsible handling of data (especially wrt 3rd parties), not the general handling of 1st-party data competently within an internal network.

So yes, it's a LOT better.


In what way? If Google uses your search history to target you with ads is that somehow better than them leaking the data and a third party service targetting you the same way? The end result is the same.


How are they the same? A single source you trust versus multiple unknown parties having your data.


> Single source you trust

Assuming that the single source is trustworthy, sure. But we're talking about the likes of Facebook and Google here.

The two use cases for data aren't identical, and actually shipping the raw data out is worse. But, in my opinion, the two things are similar and the shipping out of data is not that much worse.


Take trust out of the equation. It's one entity that is optimised to extract money from you and your data, vs many companies doing the same.


>They definitely collect much more data than Facebook. The only reason they haven't faced the same shitstorm is because they don't seem to share all that data with 3rd parties.

And that is something that is much more relevant to many users. I don't mind sharing a lot of my data as long as I know where my data actually ends up. If Google uses my data to improve their ad algorithm, I'm fine with it; if my Facebook data ends up in the hands of some election manipulation company, I'm not fine with it, no matter how much data it is.


> I know where my data actually ends up

And how do you know what Google does with it? AFAIK Google has never officially stated in specific detail what data they collect, what they do with it, who can access it, etc.

Their Privacy Policy gives them a giant escape hatch to essentially do anything with it -

"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures. "

https://policies.google.com/privacy?hl=en-US#infosharing


I think you not quoting the rest of that sentence is quite disingenuous

>"...For example, we use service providers to help us with customer support"

As far as I'm aware there is no evidence that Google shares my personal information, without my explicit consent, with third parties like Cambridge Analytica, which collected tens of millions of individual user profiles.


Sorry, how is it disingenuous? I didn't consider the example relevant to the policy itself, and I provided a link to the source material for anyone to read. Giving a benign example is meant to downplay the fact that Google can do anything they want with your data.


Anything? How so? They are bound by their legal disclaimers and laws, which prohibit many options automatically.


Anecdote time.

My wife and I typically donate to a few non profits, such as the ACLU and Trout Unlimited. They occasionally mail us, but we did give them our address so that’s ok.

But one day she donated to the Environmental Defense Fund. Since then the number of surveys and donation requests from random non profits has exploded to 3-4 a week, including weird ones like evangelical surveys and pro-Israeli things. My wife is pissed at the EDF, and will never give them another dollar.

The point? We were both fine having the non-profits having our address and using it, but knowing that one of them sold that data really pissed her off.


> I don't mind sharing a lot of my data as long as I know where my data actually ends up.

But I do, and Google (and Facebook) suck up my data anyway, whether I use their services or not.

That's the real fundamental issue.


A bigger problem to me is Google's search bias and subtle manipulation. The same goes for Facebook's news curation algorithm. These things can directly impact our democracy, yet it's much harder to tackle or even investigate, because the whole thing is so elusive and subjective.

To me privacy seems to be already a lost cause. We've lost it and there's little hope to take it back. Also privacy violation is a relatively easy problem to understand. For bias and manipulation, however, we don't even know what to do.


Has Google ever disclosed exactly what data they collect, what they do with it, who can look at it, etc? We "know" that Google takes privacy "seriously", but that is a faith based position.


Actually, Google has. [0]

And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent regarding which data is collected and stored; papers like NYT covered it extensively - see [1].

And Google also gives you clear ways to delete this data, as referenced in that NYT article [2].

And moreover, Google has been consistently on track to store less private data. Example: location data is going to be auto-deleted for users that want that, as of this month[3]. Maps now gets an incognito mode[4].

>but that is a faith based position.

Hope the links I referenced will help dispel this notion. Google does take privacy seriously.

(Disclaimer: I work for Google. The opinions expressed here are mine and not my employer's, etc. - what I said is public knowledge.)

[0]https://policies.google.com/technologies/retention?hl=en-US

[1]https://www.nytimes.com/2019/04/13/technology/google-sensorv...

[2]https://support.google.com/accounts/answer/3118687?hl=en

[3]https://mashable.com/article/google-auto-delete-location-his...

[4]https://www.theverge.com/2019/5/7/18535657/google-incognito-...


> And I can't vouch for all of Google, but regarding location data, Google has been pretty transparent regarding which data is collected and stored; papers like NYT covered it extensively - see [1].

How did you read that article and come away with the conclusion that Google has been "pretty transparent"? The story was written after more than a year of other news outlets reporting on law enforcement using Google's location data to fish for suspects. Google had been providing this data for at least two years before the Times reported on it [0].

> And moreover, Google has been consistently on track to store less private data.

Such as credit card transaction data collected without most people's knowledge [1] or location data after you've explicitly told it not to [2]?

Technology companies need to understand that both words "informed consent" are important. We currently have very little in the way of choices when it comes to data collection. It is simply not possible to opt-out anymore without tremendous effort and personal cost. I like this quote from Maciej Ceglowski:

"A characteristic of this new world of ambient surveillance is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive. However sincere our commitment to walking, the world around us would still be a world built for cars. We would still have to contend with roads, traffic jams, air pollution, and run the risk of being hit by a bus. Similarly, while it is possible in principle to throw one's laptop into the sea and renounce all technology, it is no longer possible to opt out of a surveillance society."

[0]: https://www.wral.com/Raleigh-police-search-google-location-h...

[1]: https://www.cnbc.com/2017/05/24/google-can-now-track-your-of...

[2]: https://www.apnews.com/828aefab64d4411bac257a07c1af0ecb


All these links are a year or two old.

A big push towards openness and privacy has happened over the last year.

On an individual level, I don't think it's hard to opt out of Google's tracking.

I won't argue with Maciej's quote, though, because, just like with automobiles, people will still opt into the surveillance society willingly: the utility it brings them outweighs other considerations.

Ask people if they want to be tracked at all times, and they'll say "no".

Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.

Ask them if they'd want to be able to call 911 and ask someone to come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.

In the latter case, lack of "surveillance" is seen as a "tragic shortfall" [0], and adding it is a "feature"[1].

So see, it's not the surveillance per se that people object to. It's implementation details. Welcome to Ceglowski's world.

[0]https://www.usatoday.com/story/news/2015/02/22/cellphone-911...

[1]https://money.cnn.com/2018/06/18/technology/apple-911-locati...


> All these links are year or two old.

Two of them are more than a year old, but the practices described in each are ongoing. The third, which describes Google's tracking of users after they've specifically opted not to be tracked, is from nine months ago.

> A big push towards openness and privacy has happened over the last year.

After literally a decade of constructing what is very likely the largest database of personal information in the world. Since the late 2000s, when Google purchased DoubleClick, it has worked to collect information without the informed consent of its users. What fraction of your users know that Google purchases their credit card transaction histories?

What is the "big push"? The only things I can think of were the opt-in auto-deletion of a subset of data announced over the last week or two. All the user has to do is pay attention to the tech press, then remember to activate the feature when it launches at an unspecified future date!

What is this "openness"? Working on a censored search engine for China without informing their own head of security?

> ...people will still opt into the surveillance society willingly: because the utility it brings them outweighs other considerations.

Sure, they absolutely do. There can be significant utility gains from large collections of information. But much of the utility could be gained from information collected in an anonymity-protecting manner. In order to have traffic information, for example, Google doesn't need to continuously track your location history.

> Ask people if they want to be tracked at all times, and they'll say "no". Ask people if they want to be able to locate their phone when they lose it, and their answer might be different.

And neither of these require surveillance. The phone could be located either by returning its location on command, or by uploading encrypted location data which only the user has the key to. Whatsapp, for example, shows that end-to-end encryption can be seamlessly integrated.
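
As a rough illustration of that second option - a minimal sketch under assumed details, not how Google, Apple or WhatsApp actually implement anything - the phone could encrypt each location fix with a key that never leaves the user's own devices, so the server storing the blobs cannot read them:

    # Minimal sketch (assumed design, not any vendor's real implementation):
    # each location fix is encrypted client-side with a key only the user's
    # devices hold, so the server that stores the blob can't read it.
    # Requires the `cryptography` package (pip install cryptography).
    import json
    import time
    from cryptography.fernet import Fernet

    user_key = Fernet.generate_key()  # generated on the user's devices, never uploaded
    cipher = Fernet(user_key)

    def encrypt_fix(lat: float, lon: float) -> bytes:
        """Encrypt a location fix on the phone before uploading it."""
        payload = json.dumps({"lat": lat, "lon": lon, "ts": time.time()}).encode()
        return cipher.encrypt(payload)  # the server only ever sees this ciphertext

    def decrypt_fix(blob: bytes) -> dict:
        """Only a device holding user_key (e.g. the owner's laptop) can read it back."""
        return json.loads(cipher.decrypt(blob))

    blob = encrypt_fix(39.7392, -104.9903)  # hypothetical fix
    print(decrypt_fix(blob))

A "find my phone" lookup would then just fetch the latest blob and decrypt it locally; the provider learns that a blob was uploaded and when, but not where the device is.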

> Ask them if they'd want to be able to call 911 and ask someone to come and help them even if they aren't sure where they are, and you'll get a different distribution of answers again.

> In the latter case, lack of "surveillance" is seen as a "tragic shortfall" [0], and adding it is a "feature"[1].

Once again, this does not require ubiquitous surveillance, and it is misleading, at best, to imply that it does. Do you really not see the difference between location data provided to assist emergency response from a 911 caller and continuous location monitoring so that Google can serve more profitable ads?


Pre-Disclaimer: I don't mean to only pick on Google here, it applies to any company that collects such a vast amount of personal data on users. Also.. nothing personal :)

>Actually, Google has.

In extremely vague terms, yes. I want to see an itemized list.

E.g., at company X, this is what we collect:

1) Your name, age, location, DOB.

2) Your location is sent to company X every 10 minutes.

3) Your IP is tracked per-session.

4) All this data is linked to your profile.

5) Anything you type in the search bar is sent to a company X server.

6) After anonymizing (if we do it), this is what your data looks like.

7) We never delete any of the above for the following reasons, etc, etc, etc.

>And moreover, Google has been consistently on track to store less private data.

The default should be zero/as little as possible collection of data. From what you've said it seems like people can opt-out of some data collection, but it's vague as to the specific nature of what data is still being collected versus what isn't.

>Hope the links I referenced will help dispel this notion. Google does take privacy seriously.

Unfortunately they don't. I won't dispute your second claim.


Far better than an itemized list, you can download all your data from Google

https://support.google.com/accounts/answer/3024190?hl=en

> The default should be zero/as little as possible collection of data.

Really? What about telemetry for self-driving cars? Is it immoral to develop a system that leads to less blunt trauma and death on roads? We (HN users, I don't work for any of these companies) can define your term "as little as possible" about the way you seem to define the parent's term "seriously". The point being that such adjectives are difficult to pin down but also difficult to avoid. Define "difficult" however you see fit.


> What about telemetry for self-driving cars?

They own the cars so they can track them all they want.

Tracking me all over the place after I click the "Do Not Track Me" button isn't acceptable.

> Is it immoral to develop a system that leads to less blunt trauma and death on roads?

It very well could be. Just as we humans decided not to use the scientific research the Nazis generated on unwilling human subjects, there are definite limits to what is acceptable, even if the overall benefits are huge.


Collectively, we did no such thing. Many individual researchers and journals refused to use Nazi research, but many felt that it was unethical not to use it if it could save lives. In particular, I believe that the results of Nazi hypothermia experiments were extensively used after the war. It's certainly not a cut-and-dried problem with an obvious ethical answer.


Facebook has their privacy policy too. So what? Even if all the listed policies are followed, even if they don't have loopholes (and they almost certainly do), Google still collects and retains a metric fuckton of information that isn't necessary to provide the actual services it provides. The NYT article is a great demonstration. And there is very little oversight around this.


It's all here, and you can delete it (including batch delete by period or source): https://myactivity.google.com/

This page includes other types of data (e.g videos you upload to youtube or mails in Gmail): https://policies.google.com/privacy


Thanks for linking to the policy document. They have this convenient line that allows them to do anything.

"We provide personal information to our affiliates and other trusted businesses or persons to process it for us, based on our instructions and in compliance with our Privacy Policy and any other appropriate confidentiality and security measures."

>It's all here, and you can delete it (including batch delete by period or source)

That scratches the surface, but an iceberg hides underneath. For one, how do we know it's all the data? For another, there is no indication as to who has seen it or how Google uses it. That is my point. Google has never detailed those things... I suppose for legal reasons. A user has a right to know exactly what they are trading with Google in exchange for free services. They can then make up their own mind if they think it's worth it. I'm just picking on Google here, because it's a soft target, but it should apply to any service. We need new privacy regulations to formalize this.


Sounds like they just needed to spin up one "affiliate" and provide the data to that for data mining / etc purposes.

Anyone deleting the data "Google" holds would have zero effect on the affiliate, while giving some people the feeling Google was doing the right thing.


> It's all here, and you can delete it

So they claim, but I don't know why anyone should trust them about that.

Aside from that, though, what about the data collected from me? I have no Google account, but they're collecting data from me anyway. Same as Facebook.


> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

That sounds like the behavior of an honest service provider. Worse behavior would be if they helped candidates who match their political biases but worked against candidates who disagree with them. That would look like abusing their position as steward of a world-wide platform.

> With Facebook, what annoyed me was that they set up internal teams to help with political candidates' social media campaigns - no matter who the candidates are.

Isn't political campaigning part of the discussion? If so, why is it bad to give candidates equal access to tools to perform this campaigning? Would you begrudge a printer that would print signs for any candidate, no matter who they are? If not, how are electronic signs functionally different from printed ones?

> everything is happening violently.

I feel people are really abusing the word "violently" nowadays. Nothing that happens on facebook is violence, it's mostly just talk.


>>google is a bigger concern for privacy and personal liberty

>Big claim. Any proofs?

All this skepticism around Google's capacity to abuse their troves of data and invasive services is a clear indicator that this discussion has very little to do with real privacy. It is mostly a playground for various corporate, political and media shills.


>>google is a bigger concern for privacy and personal liberty

>Big claim. Any proofs?

Well. How about the following facts?

-- Android market share is about 75 % of all smartphone users. Do non-smartphone users still exist?

-- It's probably safe to say that it's much much easier to avoid using Facebook's services consciously than to avoid using Google's services consciously?

-- Android's hard coded DNS server is a Google DNS server. The vast majority of people don't use a VPN (the only way in Android to change DNS server is through a VPN setup) to get around this. I haven't looked it up, but it's probably safe to assume that all Chromebooks also use Google's DNS by default? My limited networking knowledge tells me that this means that Google knows: who's using what Android/Google device at what time (match IP to Google login to Android device), who's visiting what website at what time, who's using what app at what time (requests to the app's server and to Google's servers for location, payment, auth and other info).

-- A lot of people use Gmail. Google can literally read all Gmail emails. Even those sent into Gmail from outside of Google servers.

-- The vast majority of Android users has enabled "Google location services" in Android. This is a one time "click OK to continue dialog" to permanently enable these location services until disabled manually again in settings (nobody does this). Weather apps, Tinder, Navigation apps, etc almost all require this. This means those people are continually sending < ~ 500 ms resolution data points of their location to Google servers. Even when those apps are turned "off" (meaning running in background, "off" doesn't exist in Android). Google can literally know how long you poop, who your secret girlfriend is and what specialist you visited at the hospital. NYT had a huge piece about this: https://www.nytimes.com/interactive/2018/12/10/business/loca...

-- People use Chrome. With automated Google login. Meaning Google knows everything they do online.

-- I'm not even going to go into other popular Google apps, we all know them: Youtube, Google Assistant, etc. With all the accounts nicely automatically linked to one another.

-- All of the above information and probably much more can be used for profiling users. I think it's safe to say that Google knows absolutely everything about its users at this point?

Then the following:

-- Law enforcement (internationally?) can request any "sensor vault" data from any Google user in their country. AKA: they can request to look into all of the details of one's life.

-- The NSA can secretly request access to any of Google's data. Google is not allowed to disclose this access. By law. That's what they can legally do. Snowden has told us what they illegally do: anything they want. The NSA literally knows everything there is to know about everybody. In the world. And Google, by law, has to help them with that. In secret. This obviously has political consequences for the simple fact that "information = power". Those political consequences are not in favor of democracy.

Where I write "Google knows" I mean that Google servers receive that information (a fact). It is debatable whether Google stores that information or not. As for the location data mentioned above, it is a fact that third parties (app makers) do store this location information and sell it (illegally, see NYT link above). It is also debatable whether Google deletes your information when you ask them to.

My personal guess is that absolutely all information is stored, but that this storage of information is not disclosed to the public. I personally also believe that when you request your Google data to be deleted, only "the public facing layer" of your info gets deleted. I don't believe for a second they actually delete your "anonymized" data. In quotes, because such data can very easily be de-anonymized.

I mean, just the fact that there is deliberately no "DNS server address" setting in Android. Ask yourself why? Why would Google make it so much easier to just use the Google DNS server? Why does it offer a free DNS server to begin with? Why does all your location data have to go through Google servers before being consumed by the apps that run on your phone locally? That says it all to me.


Google makes a lot of genuinely useful products and services. We've all got to wrestle with the privacy tradeoffs of "free" maps, "free" email, "free" Android, etc. But at least the satisfaction of using well-built tools to accomplish more is enough of an offset to many people.

Facebook is much more likely to be seen as a guilty pleasure, or a marvelous time-waster, or something else that's a bit farther down the utility curve.


Perhaps in some countries/demos. I'd bet Instagram and WhatsApp for free, basic communication are seen as much higher utility by a significant portion of the global population.


But much of that utility comes from the fact that everyone uses it, rather than some inherent quality of the product, like is the case for Maps. Feeling somewhat "forced" to use it does not help with a positive view of the company.


> Instagram

Don't see its utility. It is a proper time waste.

> Whatsapp

It is indeed the most popular, but isn't unique in what it offers. Many others do what whatsapp does just as well, but none have the number of users.


Facebook makes products that many people find genuinely useful, it just makes fewer of them.


Google has your data and uses it for themselves. Facebook has your data and gives it (or leaks it) to anyone with money.


They both sell ads and offer advanced targeting options. Very similar businesses.


Facebook didn't get in hot water for selling ads.


Google and Facebook are exactly the same on this score. They both collect as much data about you as they can get their grubby paws on, they both use that data for themselves, and they both allow others to leverage that data in exchange for cash.


Has there been any evidence of abuse and misdirection on the part of Google at the same level as Facebook?

This is more than just negative press, this is a question of how data collection has been misused, and what lies executives have told about current and future plans surrounding privacy and data abuse.

But, personally, I'm staying away from FB, Google, Amazon, Snapchat, et al for the reasons you've mentioned; negative press or no, I cannot ethically work for companies that are haphazardly building the foundations of a potential technocratic dystopia in their chase for profitability.


I wonder how much of it is that Facebook isn't all that much fun anymore, and working there would provoke all sorts of "I don't use it anymore" comments from one's peers.


Or maybe prospects don't use Facebook either and it seems odd to contribute to a website you don't even visit.


Instagram and WhatsApp are more popular than ever, though.


If you have time, can you elaborate why you believe Google is a larger threat to personal liberty and privacy?


Google has far greater potential for invading an individual's privacy than Facebook does. Google has Android, Chrome, search, maps and gmail (and now photos). Those are all very critical pieces to a person's real world life. Facebook has FB, Instagram and WhatsApp. Yes, some private communication but it's limited to "social networking". Your taxes and utility bills don't get mailed to FB Messenger. You don't search for cancer research on Instagram and Facebook can't tell what other apps are installed on your phone.


I’ve been getting ads on Instagram for health issues I discussed on reddit.


I met a lady the other day who said that her job at google was a PM for their advanced technologies group doing “infiltration” into people’s lives.


well, it's the nature of the advertising business to defy privacy and liberty. competition occurs around how well you know consumers and how well you can manipulate those consumers into actions favorable to you (i.e., exerting power over you). further, online advertising is basically a duopoly of google and facebook, with google being twice as big as facebook and much more invasive.

google's, or more broadly, alphabet's, only competitive advantage is a thin lead on what might be called data intelligence (or surveillance, for the more cynical). they collect data across all internet ingresses/egresses, on not just those who opt-in, but even those who actively avoid google (through android, gmail, google apps, analytics, dns, internet access, etc.). and that data is super-valuable--alphabet had $30B in profits on $137B in revenue (an extraordinary margin).

to be clear, i'm not attempting to judge or disparage individual engineers at google. i'm sure most are mighty fine folks.

but for the foreseeable future, google really has no choice in the matter, not until it finds a different massive market from which to derive revenues. it's the nature of the business. and in the meantime, it's also under assault from intelligence, paramilitary, corporate, and governmental organizations from across the globe.

at least for americans, privacy and liberty are fundamental and inalienable rights. even though the constitution explicitly forbids only governmental interference in those rights, they apply more broadly to any entity, and particularly global corporations, attempting to exert power on individuals. and while inalienable, citizens still have a duty to be vigilant against such infringements.


I too was curious about this balance weighting. FB slurps in all of the data that users voluntarily post. Google just learns things about users through inference, whereas FB is getting data posted directly by the user. Seems to me that FB is able to be way more invasive.


> FB slurps in all of the data that users voluntarily post.

That seems likely to be a grand understatement. FB has the opportunity to collect a great deal of data about their users beyond what they explicitly post -- for example, data about when and how they use Facebook mobile apps, how they interact with the Facebook web site, and what external web sites they visit which contain Facebook Like widgets.


Don't forget offline credit card transactions, FB and Google both.

https://www.bbc.com/news/technology-45368040


But Google does all of this as well with its api/fonts/analytics/etc being used.


On the other hand, FB is inherently social. I assume everything I give to FB has a chance of being public one day. I have some private conversations, but in the back of my head is that time the UI was deceiving and made seemingly direct messages public. FB is for sharing things. Google runs my phone, my work and personal email, my calendar, and more. I think they have a better attitude toward it, hence my willingness to trust them so far, but from a standpoint of ability to be invasive, Google blows everyone else out of the water on my devices.


Do you not consider Gmail data posted voluntarily by the user? How about search queries or calendar entries?


I can see your concern about messages via email, but I know for me personally, email is just not a thing anymore. Forgetting plain SPAM, corporations/marketing/etc have ruined email into this signal that has such a low S/N ratio that it's just not useful. What percentage of internet users actually use email for communication anymore? Sure, some, but it's not my largest attack vector (I consider Google/FB as attacking me).


Anything serious goes through email and this is the data I'd be most worried about leaking - anything from security related stuff like login/id confirmation to receipts, confirmations, sensitive data, professional communication.

Waaay more valuable than FB scraping my phonebook and photos


This may be true for personal communication but any sort of business deal is going to be happening over email. Mortgages, selling your company, large sales... All of the contracts are going to end up in your inbox.


> FB slurps in all of the data that users voluntarily post.

As well as all the data they can get, whether or not you even have a Facebook account. Real-world credit/debit card usage data, for instance.


Google collects more data, but they're much less free-wheeling with how they share it around. Pick your poison I guess.


I pick neither.

This whole debate about who is worse, Google or Facebook, is a bit ridiculous. They're both unacceptably awful, and practically speaking I don't think it matters which is more awful than the other.


> google is a bigger concern for privacy and personal liberty

I disagree. I think that on the whole, they're both about the same. But in terms of integrity, honesty, and ethics, Google has a (small) lead on Facebook.


Social networks live by the sword of voyeurism and die by the sword of voyeurism. This is not something Google has to worry about.


In my experience, Facebook used to be a cool thing to be on when you were documenting college party shenanigans and sharing pictures with friends, before it reached mass adoption to the point that your parents/grandparents were trying to add you as a friend. This was a time when organizing/sharing pictures with friends digitally was not a straightforward process.

I've come to terms with a simple fact of life that after graduating, it gets harder to make friends as you get older and start to settle down away from your college towns. Most of the acquaintances I've added on Facebook might as well not exist as we don't talk offline and my core circle of friends communicate over imessage/sms or various chat apps and we try to make time to see each other, further cementing our friendships offline.

Another thing that bothers me about Facebook since I first joined around the time a .edu ending email address was required (I think?), is that every time I visit the site the new interface and feature bloat makes it feel less and less like what made it dead simple to connect with people back in earlier times. The current experience for me consists of a noisy ad-infested newsfeed, ultra-optimized to inject itself straight into your brain's reward center with statistically significant A/B tested precision and autoplaying clickbait media nonsense, all while functioning as an echo-chamber for long-lost acquaintances' political outrage spam.

I wonder if people from my age cohort feel similar cognitive dissonance and that's why Facebook isn't even on their mind career wise, cause it's like an ancient digital museum that houses dusty pictures from their younger years and has long been replaced by Instagram.

Anyone out there relate?


> a simple fact of life that after graduating, it gets harder to make friends as you get older

This is not really a simple fact of life, in my opinion. It only gets harder because people make less of an effort. If you put as much time and energy into being social later in life as you do in college, then it isn't any harder to make new friends.

The main difference is that in school, you're automatically surrounded by a lot of varied people. Out of school, that's not automatic -- you have to intentionally put yourself in such situations. Often this is done by joining and participating in clubs and organizations that cover things you're interested in (dancing, crafting, whatever).


That's my point exactly, relatively speaking you'll never be surrounded by ~30,000 university students who are forced to cohabit the same location in their most formative years.

The situation is much different when you have to find a babysitter for your kids to free up what little time you might have each day that is then split between you and your life partner to afford to socialize regularly.

I've just internalized this phenomenon as a fact of life after entering mid adulthood and settling down.


Ah, I understand. I was reading more into your statement than I should have. I'm a 50-something man and I often hear others of my general age complain about how hard it is to make friends, but they rarely realize that's something they can actually fix.

> The situation is much different when you have to find a babysitter for your kids to free up what little time you might have each day

Indeed! That was what taught me the real reason to arrange "playdates". It's not really for the kids, it's so that the adults can socialize with less hassle around babysitters and such.

But having children certainly makes lots of things more difficult. Mine are adults now, and I can tell you from experience that once the kids are off to college and beyond, then your social life can come back in its entirety.


> in school, you're automatically surrounded by a lot of varied people. Out of school, that's not automatic

That's exactly OP's point. Not automatic implies not as easy.


And, yet, here in the bay - my company (a startup) sent out two offers to candidates quite recently and they both went to FB instead.

There is no shortage of people joining FB because there's no shortage of people wanting to join a big company. Maybe if they're all comparing offers between big companies then they'll join some other big co but if the difference is startup vs Facebook... FB wins.


It seems like your company should consider remote workers. I live in Denver and have told Facebook recruiters that I'm specifically not interested in working at Facebook, but I would consider a remote position at a startup. I'm sure as hell not relocating to the Bay Area is all.


If we wanted remote workers then we'd hire people in Romania. Like the last place I worked at did.


Remote working in the same or similar timezones is nice, and no, not everyone needs to work in the same office.


Cool! I'm from Romania. What do you have on the menu on this fine evening? :)


If you think remote workers means a different timezone and a language barrier, you need to brush up on your knowledge.


Are you offering a competitive salary?


I mean... does any startup when compared to FAANG? Salaries are basically the same but the total compensation is, obviously, wildly different since expected value for startup stock is horrible.


Facebook is offering new CS grads from top schools $180k+ a year plus $30k+ signing bonus. That's cash. Most startups can't afford that unless they are very well funded.


Levels.fyi seems to disagree.

See here: https://www.levels.fyi/salary/Facebook/SE/E3/

It is closer to $155K plus a signing bonus.


Hm, perhaps I'm thinking of a different company.


$180k salary? I know they offer some ridiculous numbers for top candidates but that seems high for any new grad. That'd push them near $250k with stock.


> I mean... does any startup when compared to FAANG?

Yes, in my experience. Unless, as you mentioned, you count stock options or similar (which I don't).


You should really count them. They're basically cash after you vest with a public company. Startups have a very low chance of that stock becoming worth anything even after it vests.

Comparing FAANG compensation at $300k TC vs. $180k salary plus Monopoly money... FAANG wins often enough, since you never get enough stock at a startup for it to really be worth it (short of being a founder).
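As a back-of-the-envelope sketch (the paper equity value and the 10% payout odds below are made-up assumptions for illustration, not anyone's actual offer), the expected-value math looks something like this:

    # Rough expected-value comparison of a big-co offer vs. a startup offer.
    # All figures below are illustrative assumptions, not real compensation data.

    faang_salary = 180_000
    faang_stock_per_year = 120_000     # liquid RSUs you can sell as they vest
    faang_expected = faang_salary + faang_stock_per_year

    startup_salary = 180_000
    startup_equity_paper = 100_000     # per-year paper value of options ("Monopoly money")
    p_equity_pays_out = 0.10           # rough odds the equity is ever worth that
    startup_expected = startup_salary + p_equity_pays_out * startup_equity_paper

    print(f"FAANG expected comp:   ${faang_expected:,}/yr")       # $300,000/yr
    print(f"Startup expected comp: ${startup_expected:,.0f}/yr")  # $190,000/yr

You can quibble with the 10% (and the long upside tail), but the gap is hard to close without founder-sized equity.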


> You should really count them.

I've been in the industry too long to put any real value on them, regardless of whether they're from a Fortune 50 company or a startup. Sure, sometimes they pay, but it's always a gamble. I sorta view them more like lottery tickets than actual compensation.


Dude, the stock you get as a Facebook employee is literally money. It vests every month, so you can go to a broker every month and sell it for thousands of dollars of hard cash. No waiting for IPO, no hoping the stock goes up, no 4 year vesting, no board approvals, no nothing.


It may be close, but it is not literally money. If it were, then why wouldn't they just pay the money rather than going through the hassle and expense (for both the company and the employee) of issuing stock?

But that's all beside the point. I understand why these sorts of things may be appealing to people. They just aren't to me, so they don't factor in as "compensation" when I'm evaluating a job opportunity.


> my company (a startup)

ad-tech startup?


Nah. FinTech.


> It's no wonder the substantial portion of people who care about their employer's ethics are turned off

Nope. The people I know who turned down FB offers did so purely because they see it as a less stable company and worry that the stock will keep falling. No one wants to wake up a month later to find out that their signing bonus effectively got reduced by 10% due to a bad news cycle. I would estimate that less than 10% of people turn down an employer over privacy-related ethics. On a side note, FB has jacked up stock bonuses for existing employees, and their attrition rate is virtually unaffected despite all the bad news.


It is not only about ethics.

With SO much negative press, I feel that Facebook has lost its mission in the eyes of the wider public. If it is a net bad for society, or even just perceived as one, it is hard to hire people who share a vision with you; you only get mercenaries.

Good people are weird, though. They work for money, like everyone else, but not just money.


> one of the things candidates respond to most is the story we tell about our company's past, present, and future

I hear this storyline fairly often (though exclusively from corporate recruiters) and I have a super hard time understanding why this would matter. Can someone who actually listens to this kind of (IMO) propaganda weigh in and help me understand why it matters to them?


It matters in terms of internal opportunities to advance. Say you're one of the first data science or product people in a fast-growing company; that's a lot of potential opportunity for someone ambitious and self-driven.

It also matters in terms of how good this will look on my CV and the story I can tell later. I joined a now well-established and fast-growing tech company as employee no. 267; we're now at ~1,200. That looks great on my CV, and in an interview, if I talk about scaling issues (both technical and cultural), they'll likely believe me.


It matters to the same degree (and for the same reasons) as any other marketing pitch matters.


I agree with everything you've said except the last part. Google is only marginally better than FB when it comes to some of these privacy issues. The issue people have with Facebook is that it has a reputation for being a pressure cooker.


In other words, Facebook now has no redeeming qualities.

I've gotten a bunch of pings from them over the last few months, and I just chuckle, say "hahahano", delete it and move on. I don't know if it's a coincidence that the pings happened after the scandal or if they have gotten into 'look under every rock' mode.


I think it is the latter. All the pings I see now are so mundane and banal (most of mine are friend suggestions for people I’ve never met). They really must be scraping the metaphorical bottom of the barrel.


> It's telling that a substantial portion of our candidates admitted to considering competing offers from Google, but literally no one was considering Facebook.

Perhaps they were ashamed to admit it (?)


Anecdotal, but in the past year, I had tons of recruiters from Google/Amazon/etc. knocking on my LinkedIn inbox. However, not a single one from Facebook. Maybe they simply didn't fund recruiting efforts as much as the other tech companies, or weren't hiring as aggressively.


One of the (many) things that pleased me about deleting my LinkedIn account was that I no longer routinely heard from the likes of Google/Amazon/Facebook.


Plenty from FB here, to add another anecdote.


There was a time when recruiters would put on a sheepish and embarrassed-to-bring-it-up look when mentioning the higher paying jobs they had for tobacco companies. Paid more, but few wanted the social stigma, even if their personal ethics were OK with it.


IIRC, Google places a much higher emphasis on making counteroffers in the first place, as well as making those counteroffers hard to refuse.


Definitely not true.


This is not a surprising headline. If you have values about privacy, decency, civil discourse, honesty or integrity, you wouldn’t want to work there. Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it’d be a big fat “no” to working there. And it’s not just our democracy that is undermined by FB. There’s a litany of abuses that they have either been horribly naive about or downright negligent in addressing.

If you are bright-eyed optimistic about Facebook I'd be interested to hear your counterpoint to all of the scandal. I don't think there is any company in the FAANG that is an altruistic enterprise but it isn't surprising that FB would have a decline in hiring.


> I don't think there is any company in the FAANG that is an altruistic enterprise

I feel like Google started that way, and then lost its way sometime between 2009-2012.

Projects like Google Scholar, Google Books, Google Summer of Code, Google Reader, Google Open Source, Google.org, and pulling out of China didn't really have much of a business justification, but were simply something good that they could do. Unfortunately they're a public company, and when you start struggling to meet analysts' (perpetually inflating) estimates, being good - or at least not evil - is usually the first thing on the chopping block.


Google never figured out how to make serious bank outside of the marketing department.

The fact that they kept the wheels on as long as they did, I gotta give them some respect for that. But they were always destined to end up being amoral at best and a cesspool at worst.

If you are starting a company and think you want to be proud of it for the rest of your life, sell a real product, not your users.


Not true: they make billions from cloud services.

Re: “you are the product” meme. I guess it’s a mechanism for raising awareness of privacy violation, but I really don’t like it. If you were literally the product, you would be a slave. You’re not. What they sell is your attention.

A big reason for not liking “you are the product” memes is it misses the key aspect of manipulation, which phrases like “the attention economy” capture. You are being manipulated into giving up more of your time and attention.


Honestly, that seems like splitting hairs to me.


If by "marketing department" you mean "advertising business" you would be correct. But I am skeptical you meant that.


Yeah I mean the advertising business. I was feeling a little salty.


I don't think that it has anything to do with altruism. Back then it was not the right time to optimise for profit; new and exciting things were happening daily, and it was time to explore, not to exploit.

These days the exciting things are happening in other areas, so for the Internet giants, it's time to optimize for profit.


Also, as a smaller player, they stood more to benefit from open source projects (Android and Chrome) and open standards (the web and email). Now that they're on top, the most rational strategy is to secure their position by destroying the bridges they used to get there, locking down those open technologies.


In other words, the most rational strategy is to become evil.


Correct. Which is why pure capitalism is broken.


> it was time to explore not to exploit.

Is that not a pretty good definition of not being evil? IMO it is still the time to explore, not exploit, even if that's not what they're doing.


> and then lost its way sometime between 2009-2012.

Tahrir Square was the high-water mark of the old-school techies. The failure of tech to effect real and lasting change really hasn't been understood by the techies, even still. That optimism about the future, and tech's role in it, is gone.


Based on discussions I’ve had with Egyptians, Facebook was used to track down dissidents after the counter-revolution that brought Sisi into power. Not sure if it was Tahrir-era posts that got them into trouble, or criticism of the Sisi government.

The only lasting legacy of social media’s role in the Arab spring seems to have been inflating the self-worth of high level execs, and blinding Obama-era officials to the way these sites could be turned into tools of disinformation and repression.


"The only lasting legacy of social media’s role in the Arab spring seems to have been inflating the self-worth of high level execs"

When the media talked about the "Twitter revolution" I still remember thinking that there were people risking their lives on the streets and how ridiculous it was that some social media guys drinking lattes in their offices got the credit.


When you spend enough time in the future, you forget all the shitty things about the past that tech has changed and only notice the problems that stand out today. Not sure if you're specifically referencing Tahrir Square with your second sentence, but tech has definitely led to real, lasting and immensely positive change worldwide.


Oh yeah, but I think all the wind went out of the sails after Tahrir Square.

Before, there was such optimism about tech. Nothing could stop it. Everything would just be better.

Look, the oppressed are rising up together! Look, medicine is getting better! Look, we're talking at each other, not shooting and hurting!

The Arab Spring was the high point, the proof of the pudding.

After the failures there, sure, yes, tech has helped, has advanced the world. But that optimism that was in Tahrir Square never came back. FB was a way to talk with each other and be a third space; now it's a Skinner box. Wikipedia was the nascent Encyclopedia Galactica; now it's just mostly good and sometimes suspicious. Google wasn't evil; now it works with China to make Orwell sigh.

Things are chugging along, yes. But before people actually thought they could change the world for the better, now tech just has mortgages.


Google had been receiving shit from people for violating privacy ever since they had the novel idea to release a free email service that scanned your emails to deliver targeted ads. The consequent centralization of email (ISP-provided email pretty much died after that) was subsequently used to allow the NSA to scan a huge amount of people's personal information.

I think the last few years are when things tipped toward me distrusting Google more than the boogeyman of older times, Microsoft.


Forget the election; just look at what social networking is doing to young people's minds. They're making money by making a lot of people miserable, and that's just not how I'd want to make a living.


Making people miserable and unable to understand the world outside these addictive platforms. I know so, so many 20-somethings who genuinely don't understand the facade that is social media. They're giving up their youth in pursuit of a drug and they don't even realize it.


I have no money and a very shitty laptop, and thanks to Google Colab's free, hosted Jupyter Notebooks I'm having a blast learning Keras.

I'm not saying they're saints, but they've given me something free that's improved my life. Maybe it's ultimately greedy in the sense that later if I need a cloud platform I'll definitely use GCP. But I think that kind of mutualism is actually better in practice than altruism.
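For context, what I'm running is nothing exotic; a minimal sketch like the one below (roughly the standard Keras MNIST starter, with the usual layer sizes assumed, not anything from my actual notebooks) trains in a couple of minutes on the free tier:

    import tensorflow as tf

    # Load MNIST and scale pixel values to [0, 1]
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A tiny fully connected classifier
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

Not having to fight my own hardware to run even that much is exactly what I'm grateful for.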


Also, given the power and information that they have, I think they've been fairly well behaved. I'm not sure what other commercial operation I'd rather have holding all my search and email info than the Google guys. I guess a nonprofit might have advantages, but then who'd pay for all the servers and the like?


> they've given me something free

No, they haven't. They're just making you pay with a different sort of currency.

I'm not saying that's good or bad, and I'm not saying that you aren't getting value for what you're paying. I'm just saying that the notion that these things are "free" is incorrect.


What is the currency?


Data about you and your use of your machines.


Don't forget about WhatsApp. It was the main channel of dissemination of fake news in the Brazilian Election. Now we have a global warming denier in the presidency and Amazon deforestation is reaching record levels.

Sure, there isn't any company in the FAANG that is an altruistic enterprise, but the only purely evil one is Facebook.

What really impresses me is that there's still a lot of talented people working there.


Any communication platform that is easy to use and easy to reach people on, and will therefore be popular, is a great channel for disseminating fake news. Well, guess what: it is also a great channel for communicating non-fake news, talking to people who matter to you, sharing your interests with like-minded people, and...

Blaming the platform for carrying fake news seems disingenuous. Fake news has been spreading over every available channel ever since humans learned to talk and figured out that they can tell lies to each other. Blame people for believing most of anything they're told.


I think one needs to be very intentionally oblivious to not notice the qualitative difference between fake news of the past and fake news right now.

Fake news in the past always had an identifiable source, because there was still an institution, a company, or someone with their name on the door between reader and publisher. As it stands, no such barrier exists any more. Things can be inserted by malicious actors into the debate, and they spread automatically simply because they have the tendency to 'go viral', something entirely absent in the past. That has added a completely new set of problems.

>Blame people for believing most of anything they're told

Precisely because it is very much in everyone's nature to be susceptible to these mechanisms, it makes no sense to blame 'the people'. What does this imply, a great re-education of everyone? Obviously the only thing we can change is the companies, institutions, and rules that determine how we consume the news, not how human brains process it.


Yes, the Internet has made spreading fake news easier, but let me reiterate my argument - the Internet has made communication as a whole easier, so it's only natural that fake news spreads easier. However, that is not the fault of any given communication platform on the Internet, unless that platform happens to explicitly select fake news stories to spread and suppress anything else. Which is certainly not the case for WhatsApp.

Even in the past, good old rumor mills (I heard it from a friend of a friend of a cousin's barber) were reigning supreme in spreading bullshit around, by simple word of mouth. The Internet here is just a compounding factor to something that is very, very old and already very, very effective.

Edit: expanded first paragraph


>Yes, the Internet has made spreading fake news easier, but let me reiterate my argument - the Internet has made communication as a whole easier, so it's only natural that fake news spreads easier.

I totally agree, but it doesn't follow that the fake news situation is therefore acceptable, or that Facebook bears no responsibility as a platform, which is what the OP argued.

With the increased density of urban living came more opportunity but also more crime and disease. We don't shrug and accept that bar owners have no responsibility as platforms; we give them a set of safety and health regulations and responsibilities, and we equip the police with tools to combat crime.

So in that spirit, just as the internet isn't the internet of the hacker and small community age, companies should have to deal with the problems they produce. Just like everyone else always had to.


You're trying to cure the symptom, not the disease.

The disease is people being morons. If you want people to not be affected by fake news, making platforms censor people won't have any positive effect. You have to educate people.


No it didn't, that's historical revisionism. Go read any historical text or even Chinese censorship propaganda today and you'll discover the authorities were/are obsessed with "rumours". Attempts to control what people can say are a historical constant, the only difference between then and now is the lingo has changed.

There's no need to change anything, people or companies. Left to their own devices, people figure out propaganda eventually. It may not be in a direction to your liking, but then, as a supposedly rational person, you must accept that maybe it's you who is a victim of propaganda and the other people who are not.

The best example of this in recent times is the large number of supposedly smart people who fell in love with "The US President is a Russian spy" as an idea, which was based on nothing - it was rumours, it was fake news, it was propaganda distributed by the press - and now 50% or more of the US population agree with their president that it was also a witch hunt. It seems people were drowned in fake news and still a large chunk of them understood it was fake. Of those who still believe it, it might be more accurate to say they wish it was true - but that's a common theme in all rumours and propaganda throughout history.


> Don't forget about WhatsApp. It was the main channel of dissemination of fake news in Brazilian Election.

How would you solve this problem?



I asked one, who pointed out he'd rather make change from within than blog about them being evil from the outside.

I don't know which way is right.


FB's core business model is the root problem. "Working from within" is vain, naive, and futile; only Zuckerberg has the power to change the business model, and we all know that's not happening.


Unpopular opinion:

I don't have a problem with the business model (targeted ads), but I have a massive problem with the lack of honesty, and this is the distinction between Facebook and Google for me. Google tells you what they collect and gives you the controls to delete it. This is enough for me.

Facebook struggles to remember that I want my timeline kept private.

I also believe that Cambridge Analytica was no accident. FB knew what they were doing, and they decided to throw them under the bus when the media turned on them.

Trust is hard to build up and can be shattered in a day.


>Also, if you feel the company was collusive or willingly complicit in the dissemination of fake news and Russian propaganda efforts during our elections, it’d be a big fat “no” to working there. And it’s not just our democracy that is undermined by FB

Come on. According to FB, the IRA had 80,000 posts over a two-year period. In the same period there were 33 trillion FB posts. What moron still believes this garbage?

FB was hung out to dry by Congressional democrats too spineless to own up to their own pathetic failure to defeat Trump.


I don't think you can discount that a concerted effort to create viral content will spread much farther than arbitrary wall posts by individuals. There are statistical methods Facebook could use to figure out how much of an impact they had, and I have not seen any such analysis yet.


That's only one reason this whole thing is bullshit. The other is that the alleged content is just random gibberish with no obvious intent or means to subvert anything. It's only by assuming that every post had its maximum theoretical pernicious effect (and that a pernicious effect was the intent in the first place, which is just supposition) that this whole thing becomes meaningful.

It is the desire to make this assumption (that Russia subverted the campaign) that drives the conclusion more than anything else. None of which is to say FB is innocent of blame. But their crime is hooking up an ad network to the social network, not colluding with Russians.


Viral gibberish can still influence subconsciously. But I think FB gets more blame than it should out of all this, and I agree the collective is trying to pin the blame for complex social trends on a singular actor. Domestically the fault is on FB, internationally on Russia. Clean and tidy, right? It makes it seem like regulation will solve the perceived issues next time around, while the bulk of the real issues are overlooked or ignored. I like Martin Gurri's take on this right now.


It's also worth noting that when something goes viral, it's often not contained on one social network, and it becomes impossible for the platform to measure its reach and impact.

How could Twitter, for example, really measure the impact of something like that video of the Covington High School kids, which was amplified on Twitter (shared by a fake account, IIRC), picked up by the media, and then talked about incessantly for weeks, all over the place?


How many ads were there? If trillions of messages are needed to influence behavior, then Facebook ads would have no value.


It's surprising because it's totally false. Read the above comments.


I'm not sure how the journalist fact checked this, but in 2016 CMU sent 12 people to Facebook[1]. In 2018 CMU sent 27 people to Facebook[2].

[1] https://www.cmu.edu/career/documents/2016_one_pagers/scs/scs... [2] https://www.cmu.edu/career/documents/2018_one_pagers/scs/1-P...


Those numbers are almost perfectly in line with the growth of Facebook, from 17k employees in 2016 to 36k in 2018.

https://www.statista.com/statistics/273563/number-of-faceboo...
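Back-of-the-envelope: 12 hires against ~17k employees is roughly 0.7 new SCS grads per thousand employees, and 27 against ~36k is roughly 0.75 per thousand, so the per-capita rate barely moved.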


Or 29 if you include WhatsApp.


Those are just SCS numbers. The CNBC article cites all of CMU. Facebook recruits from the math, engineering (EE), and info systems programs as well.



Huh, those numbers are lower than I expected. Perhaps CNBC counted grad students as well, which is half the CMU population I think.


As much as everyone wants to believe this is because all the applicants are suddenly taking strong ethical stances, I bet it has more to do with Facebook simply not being considered cool or exciting anymore.


Sure, but one of the biggest reasons it isn't considered cool or exciting anymore is all the negative press.


Really? Obvious data privacy issues finally becoming mainstream is what is finally convincing programmers to not want to work there?

Hasn’t all of this stuff been obvious forever to programmers?


It's possible that now that this information has gone mainstream, programmers worry about how their non-tech friends view them for working there.


> Hasn’t all of this stuff been obvious forever to programmers?

Yes, but it wasn't at the "oh crap elections were manipulated, democracies toppled, dissidents tracked down, and genocides enabled" level.

The fact that Apple, the world's richest company, now has a mainstream marketing campaign around privacy tells you it is now officially mainstream mainstream, not just programmer mainstream.


Yes, I think we're finally getting to the state of engineering and medicine in the 1800s, where bridges and buildings were collapsing, snake oil salesmen and physicians were indistinguishable to the layman, etc. Enough catastrophe will eventually motivate society to regulate the upstarts.


None of those things have happened though, have they? Which democracies has Facebook toppled? Which genocides did they "enable"? And as for "elections were manipulated", I'll give you that one, but the only actual evidence I've seen of Facebook manipulating elections is shutting down the pages and followers of actual conservative political parties. The whole Russia story turned out to be smoke and mirrors, and based on rather huge assumptions about the efficiency of political advertising to begin with.

Apple uses privacy as an attempt to differentiate itself from Google, despite there being very few actual differences between them in the phone market. It doesn't seem to have helped: Android is globally dominant.


> Which genocides did they "enable"?

Myanmar

> It doesn't seem to have helped

yet. Also Apple's goal with the iPhone isn't global dominance.


> Hasn’t all of this stuff been obvious forever to programmers?

I don't think so, considering all the devs who called others "paranoid" before for raising these issues.


I think this story is submarine PR paid for by Facebook to garner sympathy.


Agreed. I would argue that Facebook is not considered cool as a direct result of all the outrage surrounding it.


Facebook was uncool before the outrage really took off. It's bloated and fewer young people from each cohort take to it each year.


Its ML research is exciting. I would like to work with Yann LeCun.


And the root cause of its suddenly "not being considered cool or exciting anymore" would be?


... not really innovating? Its main product is still centered on social signaling and gossip ... just like day 1. Also, the social craze is not so crazy anymore (I wonder, how are the other social apps doing?).


Limited upside for the stock?


Yeah, it's seen as the platform your parents (or worse, grandparents) use. Pretty much a step above Next Door. Why would you want to work for that over some of the other companies out there?


Facebook is also Instagram and WhatsApp, two platforms used by young people.


My teenagers have pretty much moved on from those.


> My teenagers have pretty much moved on from those.

Out of curiosity what have they moved on to?


TikTok is pretty popular these days, for one.


Honestly they don't want to tell me :-) They really, really don't want their parents on their apps.


I've known several people who would no longer work for Facebook, but the Cambridge Analytica scandal isn't the biggest concern. It's the fact that they are censoring people, even within private groups.

I have a friend that jokingly said (in a private group) that men are vile pigs. We knew she was joking - it was good natured. Yet, Facebook issued her a warning and removed her post and threatened her with a ban. First they came for Alex Jones and I said nothing because I don't like Alex Jones (and think he's insane), but now that the precedent is set that Facebook is the speech police, it will expand to us all (especially with their machine learning advancements that are here and yet to come).

The EFF has a really important article about this that I implore everyone to read[1].

[1] https://www.eff.org/deeplinks/2018/01/private-censorship-not...


For a detailed nuanced piece about how FB handles some of this complexity check this out: https://www.vanityfair.com/news/2019/02/men-are-scum-inside-...

FB has its problems, but I generally find the negative press overstated and wonder if Zuck's approach to interact with the press and congress actually backfires (compared to the other companies, which largely ignore them). I appreciate how often he talks to the press to explain what they're trying to do, though.

I also see the Cambridge Analytica scandal as what it is - permissive APIs that were abused and then locked down. Cambridge Analytica is to blame in this for abusing TOS and behaving badly, FB is arguably negligent - but I think the reaction is extreme.

Plus from people I know inside FB there really is a huge funded effort to stop abuse and manipulation via 'integrity' teams. It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.


>For a detailed nuanced piece [...]

I don't see any nuance, I see invented complexity masquerading as something sophisticated.

This article is written as if they're trying to suss out the perfect boundaries along which to apply censorship. That frame of thinking is a con. They will never have a perfect censorship implementation because there is no win state.

Let me back up: as you read this, Facebook has a censorship policy. What would it take for Facebook to know they're done arguing over what does or doesn't get deleted? How do they know when the censorship policy isn't good enough? Advertiser pressure, press pressure, and political pressure. I.e., fashion.

It should have been obvious to everyone present at (or who read about) the meeting in this article, where Facebook attempted to invent a principled stance that allows casual misandry while banning similarly tempered casual misogyny. But I'm perfectly willing to believe Facebook can't see it. I can explain.

Facebook's stance towards nudity has a very clever property (that is almost certainly unintentional). It allows users to lie to themselves. Facebook could automatically hide nudity from everyone who isn't an opted-in adult, and still throw it behind a Twitter-style click gate so there is no accidental NSFW at W. But they don't. They'd have to add an "I'm not a prude" checkbox. And THERE'S the rub. The lack of such an option lets people uncomfortable with nudity tell themselves that they're not the kind of person who's uncomfortable with nudity. They want to think they're sex-positive enough to be fine with nudity, and are just reasonable people who don't mind if it happens to be banned. Even more importantly, it lets them avoid thinking about the question "am I so uncomfortable with nudity that the idea of other people - the WRONG PEOPLE - seeing it makes me uncomfortable?"


They're not a government and it's reasonable for them to have some editorial control over what's posted to their platform (in the interest of keeping it a place their users want to be).

A lot of this is detailed in the article, it's helpful to read it.


> They're not a government

I'm not saying they are.

>it's reasonable for them to have some editorial control over what's posted to their platform

I'm not saying it's not.

I'm saying that their editorial policy is (in part) driven by fashion instead of principle. And complexity is used to obscure that rather than reveal it. My attempt to use Occam's razor to explain the obfuscation leads me to conclude there must be some utility that caused the system to evolve in such a way that it's possible to avoid seeing/acknowledging that.


> if Zuck's approach to interact with the press and congress actually backfires

Since, when Zuckerberg speaks, he always seems to be equivocating if not outright lying, and his historical response when Facebook has been called out for being abusive has always been lots of promises with no real changes, I suspect so.

> It'll be interesting to see how they modify things given Zuck's recent pivot towards focusing on privacy as a core feature.

That entire effort is Facebook's attempt to change the topic to something that is less threatening to them. The real issue is the privacy invasions by Facebook itself. Everything about their "pivot towards privacy" is premised on the privacy threat being actors external to Facebook.


It's really easy to win the crowd by calling everyone else biased. When FB is one of the biggest lobbyists in Washington, I highly doubt the press has ever been critical enough.

They got by on being the new darling-child startup and built up a gargantuan pile of moral debt, which they are now fairly paying for.


Wow that article was incredibly relevant and super interesting. Thanks so much for the link!


How did FB find out about her post?


Privacy in any unencrypted cloud service is an illusion.


What if someone in the group reported it?


I suppose it's possible but unlikely. I'd be shocked if that were true.


I have no idea. I would guess they have an algorithm that checks every post and makes determinations.

I would have thought a private group would be immune, but apparently not.


Recently I think the scandals haven't been the single biggest factor when deciding between Facebook and other firms.

The common reason I heard from most of my friends who turned down FB, or quit FB, was that the working culture is too demanding and high-pressure. Google, on the other hand, is more laid back and family-friendly, so people who have started building a family will prefer Google over FB. The nice thing is that FB tends to offer a higher level than Google, so in some cases, if you get matched, it works out pretty well.

I have a friend who worked at FB. After he came back from paternity leave, his manager told him he had been slacking (his reviews were always "meets all"/"exceeding" before) and that it was time to put in more work. He quit after a month.


Honestly I think it's more than just that. Facebook is no longer the cool startup building the world's favorite website. They're a multinational advertising mega-corp, and TBH most people just don't want to work there.


But Apple, Google, Microsoft aren't the cool start-ups either and they are mega-corps, yet people still want to work there. So I don't believe that line of reasoning holds up.


In terms of cool tech, Apple/Google are absolutely a cut above Facebook now, in my experience. Microsoft used to be below, but they rose as they embraced more open source and modern tech. I'd probably put Facebook/Microsoft/Amazon about even in the cool category these days, whereas Facebook was stereotypically above Microsoft before. Amazon depends more on who you ask for a definition of "cool".


I would wager that at all of these companies, how “cool” your job is depends heavily on your specific team and role. I’m sure there are thousands of people with very uncool jobs at Google, and thousands of people with very cool jobs at FB. Comparing two companies of that size and saying one is cooler than the other is probably an oversimplification.

That being said, I share your perception that Google and Apple are cooler than FB. :)


I think your wager is completely accurate. With Amazon as an example, depending on your team you could find yourself doing high-performance work in cutting edge languages on a globally distributed high-availability FaaS platform, or you could be using Perl 5.8 to make tweaks to a website that will only be displayed in certain parts of India.


You may have forgotten that Facebook owns Oculus and is doing some pretty cool things in the AR/VR space.


I am sure that people are still super psyched to work at FAIR and Oculus, but those teams are quite small and AFAIK don't hire using the standard FB SDE hiring process.


At least to me, Microsoft's desirability has gone up considerably over the last decade.


Totally agree. There are dozens of companies that aggregate user information (Google amongst them). Like @bognition said, FB lost its cool a while ago.


Is collecting user information itself considered unethical? Isn't the unethical part abusing sharing of that information?

No one rallies against the US Census as unethical


> Is collecting user information itself considered unethical?

If it's being done without the informed consent of the user whose data is being collected, then yes.


A long while ago. In Australia I'd say that 2009 was probably the end of FB being perceived as (at least somewhat) cool. 2008/2009 was when I remember being encouraged to join up by peers. My joke way of deflecting that (because I've never wanted to join) was to say that I'd applied but been rejected for not being cool enough. This reply would often leave people non-plussed - I was ironically mocking the idea of FB being cool (at the time) but I don't think most people got the joke.


Since 2014 yes


Is anyone?


I can only think of SpaceX, but they're not a website.


Yes, but you haven't heard of them yet.


They're not "the cool startup building the world's favorite website" until we've heard of them.


My good dude, once you've heard of them, they are not cool anymore. Cool is exclusive. Cool is mysterious. Cool doesn't care what is popular. When cool goes mainstream, it's not cool anymore! The trendsetters have already moved on.


>Facebook candidates are asking much tougher questions about the company’s approach to privacy, according to multiple former recruiters.

This narrative is highly suspicious.

Zuckerberg openly and repeatedly said that he doesn't care about anyone's privacy for well over a decade[1]. The whole company is built around collecting and selling private information. Why would people who care about privacy interview with Facebook in the first place?

[1] https://www.theguardian.com/technology/2010/jan/11/facebook-...


> Why would people who care about privacy interview with Facebook in the first place?

I believe this may come from the responses to cold emails. A recruiter working for FB presents an offer. The candidate wants to tell them to GTFO, so they highlight the privacy concerns in the response as a way of saying thanks, but no thanks.

At least that's what I do when a recruiter working for a company I find morally incompatible approaches me. I reply with something like "The tech stack looks great and my professional experience aligns with what the job description requires. However I don't think I'd feel comfortable working for a <short-term loans | kids gambling | personal data mining> company, but I'm open to hear about similar positions in other areas if you had any in the future."


I think that over the past couple of years in particular, the real-world consequences of all of this have really come into the spotlight.

It's one thing to hear a tech CEO talk about something you may not agree with -- many people just categorize it as "a Facebook thing" (as in, huh, maybe I'll try to use their products less) and move on with their day. It's quite another to come to the realization that non-trivial parts of (what many see as) seriously negative political consequences have come from these products and, being fully aware of these, the CEO/company still hasn't meaningfully acted.

And with all the recent publicity (there's a difference between being mentioned in the technology section of a paper, and giving a congressional testimony), pretty much no one can say anymore that they aren't aware of it, or haven't thought about it.


Mark Zuckerberg does not say he doesn't care about anyone's privacy in the article you cited "openly and repeatedly" or otherwise. I suppose you can infer that from the actions of the company he runs, but your citation does not support what you've said here. Someone reading your comment without reading the entire Guardian article could come away with an incorrect impression of what he's publicly said.


I believe this is pretty true, because I was interviewing with FB and I brought up some of those questions.

Before the scandal broke out, I didn't really know much about Zuckerberg's view on privacy. The scandal definitely raised my awareness of the topic.

But the question for me is less about FB's approach to privacy and more about how much Zuckerberg dictates to the company. In other words, how much are FB employees empowered to do what's right and to fix their problems? Empowerment and autonomy are very important to tech talent. FB is not presenting itself too well in this respect.


The whole "Facebook sells data" thing is something I see said a lot, but I haven't seen anything to show it. Is there any evidence of this?

Wouldn't selling data be detrimental to their business model of being the golden goose for showing ads to people interested in your product? If people get user data, they can better target people themselves rather than paying Facebook to do it


People often use "selling data" as shorthand for "providing access to groups of people based on what the company knows about them". The two are very similar.


Maybe they’re just more aware of what Zuckerberg is saying now than what he has said and done in the past.


Perhaps developers now value privacy more on average than they did back then.


Yeah, I think it's really common to downplay this as an “everyone knows” situation but there seemed to be a sea-change where it went from “maybe people see more spam than they used to” to “major world events are impacted”.


I'd responded, years ago, to a recruiter's contact email with Zuck's "dumb fucks" IM comment.

Their response was "people change".

Evidence is that this one hasn't. I stand by my response at the time.


As a current student, I'm actually surprised by this. Maybe I just hang out with evil people, but I don't get the impression most young programmers care that much about ethics. Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over. Perhaps there's other reasons?

I do joke with my friend who works at Bloomberg that the "evil" finance view has now flipped completely. Bloomberg is a pretty ethical company compared to Facebook, Google, etc.


Age is a proxy for putting value on concepts like "ethics". When you're more or less senior and have your basic needs met, you can afford to be picky about what you work on. Fresh grads might not care because it takes knowing evil to know good (see the story of the original sin and the Tree of the Knowledge of Good and Evil), and work experience is like a separate life experience.

Heck, I've heard a theory that you should only be counting programming years as life years, i.e. if they haven't been programming for 18 years, then they aren't adults in the world of software. And the funny thing is, once you're at least a teenager by this definition, then you start thinking that they might actually be onto something...


> When you're more or less senior and have your basic needs met, you can afford to be picky about what you work on.

If you start from there, it follows that countries with poor social support systems will tend to generate less ethical workers and less ethical companies.

I grew up in Canada. When I came to California, I had zero debt. That made it relatively easy for me to choose ethical (even altruistic!) places to work starting from very early in my career.

Imagine how different the entire technology landscape would be if most American computer science grads had the same flexibility.


> When you're more or less senior and have your basic needs met, you can afford to be picky about what you work on.

You can afford it as a fresh entrant to the industry as well. Jobs are plentiful and there is competition for software devs.


What you think you want while in classes and the shifting realities once you enter the workforce can be pretty night-and-day.

Personally, I would have entertained the idea of working at a "Big 4"-type company out of school knowing that they were ethically opposed to me. I guess mostly because the name on a resume is worth multiple other jobs in some scenarios and gives you an advantage over your peers.

Just a few years later, a 6-figure salary doesn't seem that outrageous and unique to those big corporations. Now that the difference is only in the $10s of thousands, these decisions become a little more nuanced and uncertain. Besides that, once you hit the high 5-figures, money becomes less and less of a driver in your life unless you are in desperate need due to your circumstances.

My point is that I think fresh graduates will put aside their ethical qualms because they don't yet know their worth and place in the workforce. That can change pretty quick.

That being said, plenty of people just don't care, and that's why these companies still have thousands of developers. It's easy to be blissfully ignorant of your contributions to privacy degeneration and corporate takeovers of our lives when "all you do" is write some React components or speed up some data pipelines. The executives and managers are the ones who will really need to reckon with their consciences, knowing they implemented all these nasty programs.


I work in finance (a trading firm, not a bank). In a way it's freeing to admit to yourself and to your coworkers that all you really care about is making money. No one's deluded thinking they're changing the world by selling options like FAANGs think they are by harvesting personal data.

It's not like we're making the world better, but we're not actively harming it either.


> Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over.

But the issue is that top recruits can get a 6-figure salary, cushy benefits, and a signing bonus from Netflix or Google or Amazon or Airbnb, etc. It's still easy to at least pretend to be moral if you can shun Facebook as an employer without needing to give anything up to do so.


Yeah, but you're assuming they have a counteroffer that's Netflix, Google or Amazon. Sure, some people get 2-3 offers, but I'd bet it's more common to get 1-2. Not out of skill or anything, just because the FAANG hiring process is random and arbitrary.


Ironically working for a bank is probably often more moral / does more good than Google. Keeping people's money safe is a very real service. People complain about CC interchange fees, but it costs ~1-2% to process cash as well. (Safes, hiring armored trucks to pick it up, etc).

Not all of finance is swapping debt like a commodity 'til the economy crashes and foreclosing on people who had some bad luck.

Allowing people to use their money conveniently and securely is arguably bringing more value to the world than helping run psyops to convince people to buy things they don't actually need. People need checking, savings, and credit card accounts and they need them to be secure and reliable


"Keeping people's money safe is a very real service."

Banks' primary objective is not to handle transactions but to invest others' money in ventures as risky and profitable as legally allowed. As such, you are not their customer; you are their product, just like when you use useful free Google or Facebook products such as messaging, document editing, friend connections, search, maps, etc.


Depends on the bank. Credit unions don't do this. Neither do "mutual banks"

https://en.wikipedia.org/wiki/Mutual_savings_bank

There's usually one or the other in pretty much every decent sized city.

My credit union doesn't have a lot of fancy features but it has shared branching (can pull out money at other CUs), ATM access at most 7/11s, and zero cost for a basic checking account that can receive direct deposit, with an online bill pay that can even mail a physical check to whoever I want.

Even larger banks sometimes aren't publicly traded.


In my average-size telecom company, almost nobody cares about privacy, at least it seems so any time I mention anything related or link an article in the chat. Or at least they don't see an issue there. (Same with equality problems, but that's a different topic.) It seems programmers are just an average sample of the population.


It's not just ethics - they have shit work-life balance and a cut-throat culture. If you top it off with mountains of tech debt due to the above, plus brain-dead hiring practices, it's not that surprising they have trouble hiring despite huge compensation.


> Or they claim they do, but then the 6 figure salary, cushy benefits and signing bonus wins them over.

Which means that they don't care about ethics.


Why do you hang out with people who don't care about ethics?


Fair question. I'd like to think my close friends are ethical people, although it's not like I've seen them do the trolley problem. I'd categorize the people who would accept a Facebook (or even Palantir) offer in a second as casual acquaintances. As for why I'd be casual acquaintances with people who don't care about ethics, well, sometimes you can hang out with people who you don't trust completely.


Personal anecdote: I had a job offer from Facebook and a couple other big tech companies. The Facebook offer was substantially better fiscally than the other ones and it was clear to me that they were having trouble hiring. Their initial equity grant has no cliff and the signing bonus was massive for somebody two years out of school: $75,000 cash in first paycheck.

However I ultimately turned it down because of ethical concerns about working there combined with a sense that people would not approve of my job choice. I.e. even if I don't find what they're doing ethically questionable (and I do, although I don't think they're so bad), I didn't want to have to explain myself or defend them to everybody when I mentioned where I worked. Just my two cents as somebody who was one of the 50% of candidates who turned down the job.


> Their initial equity grant has no cliff and the signing bonus was massive for somebody two years out of school: $75,000 cash in first paycheck.

* Google got rid of the cliff too.
* The $75K sign-on bonus is nothing new.

These are not signals that we're having trouble hiring now.

> However I ultimately turned it down because of ethical concerns about working there combined with a sense that people would not approve of my job choice. I.e. even if I don't find what they're doing ethically questionable (and I do, although I don't think they're so bad), I didn't want to have to explain myself or defend them to everybody when I mentioned where I worked. Just my two cents as somebody who was one of the 50% of candidates who turned down the job.

Honestly, working at FB as a SWE is awesome. Like beyond awesome. If impressing other people is what you're optimizing for, you do you, but just know that you're missing out big time.


A timely comment, because at this rate FB will soon be just as cool to work for as Big Tobacco, and bragging about all the awesome "cancer-causing" projects will be kinda awkward.


They said it was also ethical concerns, so not just about impressing other people. If you don’t have ethical doubts, you do you, but some companies have beyond awesome ethical standards.


How do you feel about still having to use Outlook because of concerns about a competitor that's not even doing social anymore? I've read FB SWEs being aghast at people uploading whiteboard photos to Google Photos to make them searchable later, paranoid about Larry and Sergey poring over all the data while twirling their evil mustaches.

Also, if impressing other people is not what you're optimizing for then you might not have a great trajectory at a company that's obsessed with i m p a c t and hallucinating metrics to measure it.

Some parts of working at FB as a SWE are awesome, some aren't, everyone's mileage certainly varies.


Kudos for sticking up for what you believe in, but if I were you I would have taken the job; that kind of money that early in your career can shape your future massively.

I'm fairly senior and I would still do some really unethical stuff for a $75k signing bonus.


I have seen cold emails from Facebook and Instagram recruiters recently and they all start on the defensive about privacy, how it's "Zuck's" big thing and how he's taking it seriously. Seems a little desperate.


Does anyone actually believe Zuck's newfound interest in privacy? His entire company is built on the sharing of data (even in ways users don't understand).


Facebook's brand is tainted enough now that smart engineers don't want any of the bleedover into their personal brand that would come from working there.

How many engineers, in hiring positions, do you know that have a positive opinion of FB?


OK, but how many engineers use React, Yarn, or GraphQL? Facebook is still leading the way in front-end development, and it's still a net positive to have that name on your resume. Their brand isn't any more tainted than Google's, Microsoft's, or Amazon's.


I will happily profit from the work FB has poured into all three of those technologies and smile knowing they won't make a single cent from me.


Facebook is not lacking problems in their web tech either. Look at how terribly they maintain Flow (or fail to). It's buggy as hell, and I wish I hadn't chosen it a couple of years ago for my projects.

And for React, the head of React left FB because of the hostile working environment earlier this year. https://www.cnbc.com/2019/01/17/facebook-manager-quits-after...

Facebook led the way in front-end. But for future candidates, the important question is whether they have faith that FB will continue to lead the way.


Sure but no company is going to be perfect and I’d say their track record of maintaining tech is better than Google’s right now.


That is a fair point. But that's not what the topic is about. Just because they contributed to front-end development doesn't make them any less questionable. Not saying I wouldn't want to work there myself; they do some good work there, I'm sure.


There's still a much higher chance that you will end up working on some ad tech at Facebook and not cutting-edge open-source development. As for the brand not being any more tainted than the other tech giants, I think the article shows the exact opposite.


They're definitely more "tainted" than those brands. I'd say public perception of ethics in tech brands, from most to least ethical, is:

Apple, Microsoft, Google, Amazon, Facebook.

(whether or not those perceptions are well founded is another matter)


I've got good news for you. You can contribute to these projects (and put your contributions on your résumé) without having the Facebook taint. e.g.:

* https://github.com/facebook/react/pulls


It’s a lot easier when you’re getting paid for your time and are surrounded by other people working on the project.


Open source libraries like React should not be attributed to companies. It just so happened that Jordan Walke worked for FB when he wrote the library.

It's not like any of these companies specifically targets the creation of such tools as part of its business goals.


Those companies have teams that maintain them and they get tons of work done internally before they’re released. Microsoft has been great for Typescript just because of the brand that’s behind it and the teams that get paid to maintain it. React is the same way.


I think your point about the personal aspect of work is most relevant. I personally would be mortified to work at a company that was always in the news for various scandals and generally being full of shit. Not that I'd assume anyone who works there to be full of shit - just that I wouldn't want my friends making fun of me for working for the Zuck. Facebook just isn't cool anymore in my neck of the woods and there's no social capital in using it or working there as far as I can see.


I would hire an engineer from Facebook any day of the week. For the most part the devs are excellent. I would not let my personal opinion of a product stop me from hiring a great candidate.


Yes, I actually have an overall positive opinion of Facebook in terms of engineering talent. Facebook has made some questionable decisions, but I would not automatically assume that everyone who works there is so immoral that you shouldn't hire them.

Facebook AI research also has some of the top AI researchers.


I know lots of engineers, in hiring positions, who have a positive opinion of FB.

They still do great engineering and someone working there will learn a lot and take those skills to their next job.


There are engineers who were already working at Facebook when all the negative press started piling up.

Then there are the engineers who went to work for them in spite of that bad press.

Given the revelations that have come to light, that is a distinction worth considering.


At the same time, I don't see people thinking too badly of those working in the defense industry. Ethics isn't really an issue in hiring there.


It’s not easy to build an app at the scale of FB. If I saw FB experience on a candidate’s resume, I would view that as a positive.


"In general, Facebook candidates are asking much tougher questions about the company’s approach to privacy, according to multiple former recruiters."

That made me smile <3


The general feel I get from most of my engineer friends is that Facebook's product is definitely not inspiring and doesn't "make the world a better place" (whatever that means).

But the consensus is that given the right amount of money, they would all accept an offer from them. And Facebook is known to pay very well.


It's pretty true that pay trumps all. But Google is generally pretty good at matching Facebook's offer, so when choosing between FB and Google, or any of FAANG, pay is usually less of a concern. Being considered "not inspiring" and "evil" is definitely not helping FB close candidates.


FAANG's hiring processes are so arbitrary and non-repeatable that it's entirely possible (even likely, I suspect) for a person to have offers from just one of these. Other than Amazon, an offer from these companies is practically a lottery ticket.


[flagged]


[flagged]


W/r to comp, that's my data point. Don't like it? Then don't like it...

FB has a salary cap below GOOG's; AMZN is just cheap and prefers letting people leave to countering. Then AMZN makes a big deal of hiring those people back for what it should have countered with in the first place, proudly calling them "boomerangs."


This is the most encouraging thing I've heard recently. We might be doing better culturally, and more resilient, than it sometimes seems.


Wow! Every so often, I see something that makes me feel hopeful about the future. This is one of those times.


I feel bad for the Facebook employees below middle management level, but I also really would like some harsh penalties directed at Zuckerberg et al.

I hate that huge corporations can just kind of shrug and say "oops" to egregious crimes, without any meaningful consequences. Same with corporations not paying tax while still benefiting from the stable society created by those taxes.


If you're referring to the Cambridge Analytica scandal, Facebook didn't break any US laws. It may have been morally wrong (though if you accept Facebook's broader business of collecting user information as okay, I don't see how you could see this particular use of that information as any less okay), but there weren't yet any US laws on the books to throw at them.


There are real signs of a moral change in this upcoming generation. I suspect a lot of people are going to be very surprised when that wave comes in and breaks.


The only thing I see is a complete loss of faith in the rule of law. Hence the Twitter lynch mobs.


It's a double edged sword. They seem to live in thicker bubbles. They don't seem particularly tolerant or forgiving.


While I share the same hope, don't get too excited. There are billions in the rising ranks who don't care, and many come from countries that sit lower on Maslow's hierarchy of needs than first-world countries do.


I wouldn't get too optimistic. If the types of candidates that can make a positive social impact on the direction of the company don't join the company, that leaves Facebook with only those candidates joining that will continue to make it commercially successful without the positive social impact.

Basically, if you're unhappy with a Facebook with talented do-gooder employees, just wait until we've got a Facebook with only talented employees that don't care about do-gooding.


Great discussion here, both about FB and G! There are trade-offs between getting valuable services in exchange for user data and not wanting to live in a dystopian future (or are we already almost there?).

It seems like an almost impossible optimization problem (for me) to balance wanting my country to be competitive in developing useful future AI and other tech to solve health, climate, resource, and other problems with protecting people's private information.

I am positively biased in favor of FB and G for personal reasons: I have been working in the field of AI and ML since 1982 and my recent work has been spectacularly helped by frameworks and pre-trained models released by G and FB.

I want a future world of combined privacy and tech-fueled hyper productivity. I just don’t know how to get there. I act consistently for these two goals but I have no control over how other people act, nor do I want to dictate what other people do. I think the best we can all hope for is some large degree of personal sovereignty over our own individual lives.


They pay well, it's a good brand name to have on your resume, but on principle, I ignore any recruiter from Facebook.

It's a glorified MySpace that exists to build de facto detailed psychological profiles of unsuspecting participants, and it's specifically engineered to manipulate their behavior. No thanks.


Same feeling about the recruiters from Tesla.

It sounds horrible to work there, and there are plenty of other companies that make cars.


I believe (a strong personal assumption) that this applies not only to Facebook but also to Google and Apple. Three years ago, a lot of people in my network dreamed of working for Google or Facebook. Today that's no longer the case. They demand a company that is serious about things like privacy and the environment. Career possibilities are still important, but so many companies these days offer similar work experience. Surprisingly, I hear more and more positive news coming from the old evil: Microsoft.


Isn't Apple serious about privacy and the environment?


I always had the impression that the incentive alignment between Apple and its users was far better than at any of the other big tech companies.


Privacy, maybe. Environment? They make their phones in Chinese factories with suicide nets.


Nearly every criticism of Google also applies to Microsoft. Google considered a censored search engine in China; Microsoft actually has one. Google had a contract with the military but didn't renew it; Microsoft still has one. Microsoft products also collect large amounts of data on you and provide less privacy than Google's.

I can't imagine anyone who no longer wants to work at Google wanting to work at Microsoft.


So have AOL, Friendster, and MySpace, I hear. Different DNA than Google, and built on a foundation of voyeurism, with text-entry boxes cut and pasted from MySpace, which cut and pasted them from Friendster, and the list goes on. You don't cut and paste algorithmic search technology.


My career is and has always been defined by a refusal to work for companies I'm not ethically comfortable with. Hopefully it becomes more of a norm.


This is a luxury and a privilege many people can't afford, and I am right there with you in exercising it consciously.


I agree, but there are many people who are in positions to exercise this kind of discretion who refuse or who only do so after they've extracted tainted benefits.


Wahoo! This is great for people who are interested in working for FB, since they now have more bargaining room.


Hmm... the headline directly contradicts what I read on Blind. People are still flocking to Facebook's gates for an offer.


Ok, but Blind is filled with the type of people who would fit right in at FB.


Ouch, there's honestly no worse insult than being compared to the average Blind user.


Oops, my mistake for looking there; I noped out after viewing the train wreck for myself.


The difference being that the very best, with multiple offers, are choosing another path.

It's like the very best high school students with offers from Harvard, Princeton, and Yale: suddenly Facebook is being treated as if it's Dartmouth or Cornell instead of Princeton, and it's left with the leavings of the very best instead of having its pick. Still very talented people, to be sure, but not the 'best.'


How do we know that the ones no longer seeking employment are the very best and not just the ones who are most woke?

> Among top schools, such as Stanford, Carnegie Mellon and Ivy League universities, Facebook’s acceptance rate for full-time positions offered to new graduates has fallen from an average of 85% for the 2017-2018 school year to between 35% and 55% as of December, according to former Facebook recruiters. The biggest decline came from Carnegie Mellon University, where the acceptance rate for new recruits dropped to 35%.

It's entirely possible that the 35-55% who are still accepting offers at FB are the most talented of the people who interviewed at FB and got offers.

I know a lot of great engineers, woke and politically indifferent. The most effective ones I've worked with are the least politically engaged, because they are more heavily engaged with building things than with politics.


> The difference being, the very best,...

... at doing LeetCode all day.


Ha, the first thing I thought of when I saw a "Facebook struggling to hire" post was Blind. If any one thing is causing people not to interview with Facebook, I would argue it was the weekly burnout thread that was happening on Blind last year.


Oh, interviewing is one thing. I would advise everyone to interview with Facebook, especially when you don't want to work there (less pressure). If you get an offer, it will most likely be generous enough that you can use it as leverage at the companies where you actually want to work.

But yeah, Facebook is the new Amazon when it comes to working people to death.


What I hear from friends at Facebook is that there's a mass exodus at the moment, with mass hiring at the same time to compensate. Overall it's pretty uncomfortable, so most of them are considering leaving for other places.


Correction: Facebook has struggled to hire talent [from top universities] since the Cambridge Analytica scandal.


Everyone knows if you didn’t go to a top school you don’t matter /s.

Anecdotally, nobody who went to my undergrad got a new-grad offer from FB and then declined it, because most of them just can't get offers there, and if they do, they can't get equivalent offers elsewhere.


FWIW, I think this past year everyone has been expecting Uber, Lyft, Airbnb, and Pinterest to unlock a flood of money. Anecdotally, a researcher friend of mine turned down FB to do research at Snap Inc., and someone else's wife also turned down FB to work at Lyft, specifically to get in before the IPO and for no other real reason. So as much as it makes a good story about ethics or privacy concerns, I think it's lots of things, and probably very little about Cambridge Analytica.


If US "Defense" is any indicator, just pay more money.

Someone is willing to kill people for more money.


An anecdotal piece of evidence.

There’s this senior engineer by the name of Jafar Husain. He used to work at Microsoft, where he learned RxJS (from Matt Podwysocki, I shouldn’t wonder). Then he moved to Netflix and brought his RxJS know-how there, working as a senior software engineer for about 7 years and authoring the Falcor library (a GraphQL competitor in the early days of GraphQL). Then, in October 2018, he joined Facebook.

And he is a very talented engineer :-)


So he worked at MSFT till ~2011 (before they became open-source friendly), joined Netflix around 2011 (when they announced Qwikster [1]), and joined Facebook just a short while after the Cambridge Analytica scandal broke out? He may be talented, but boy does he have to work on his timing.

[1] https://theoatmeal.com/comics/netflix


Facebook has begun scaling back on hiring since the Cambridge Analytica scandal.

An alternative headline that works just as well. People are still applying en masse to work at Facebook.

Among top schools, Facebook’s acceptance rate for full-time positions offered to new graduates has fallen from an average of 85% for the 2017-2018 school year to between 35% and 55% as of December.

A fall in acceptance rates may mean saturation in needed roles at Facebook.


I think you’re misinterpreting the term acceptance rate. It’s the number of offers extended that were accepted, not the number of applicants who were extended offers.
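
To put the two readings side by side, here's a minimal sketch in Python with made-up numbers (purely illustrative, not from the article):

    # Hypothetical numbers, purely for illustration -- not from the article.
    applicants = 10_000        # new grads who applied
    offers_extended = 400      # applicants who received an offer
    offers_accepted = 160      # offers that were actually accepted

    offer_rate = offers_extended / applicants            # how selective hiring is
    acceptance_rate = offers_accepted / offers_extended  # the stat CNBC is quoting

    print(f"offer rate: {offer_rate:.0%}")            # 4%
    print(f"acceptance rate: {acceptance_rate:.0%}")  # 40%

The reported drop from ~85% to 35-55% refers to the second ratio, so it says nothing about how many people are applying, only how many of those Facebook picked actually said yes.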


> A fall in acceptance rates may mean saturation in needed roles at Facebook.

No. If Facebook didn't need those people, they wouldn't have extended an offer to them in the first place, so it wouldn't have affected acceptance rates.


Then Facebook shouldn't be giving out offers or interviewing so many people?

I think these are primarily offers given to interns. Otherwise, why go through the interview process if you don't want to work there? It may be a better place to go for an internship than for a full-time job.

Then again, their compensation packages may no longer be competitive. It's possible.


> their compensation packages may no longer be competitive

They're competitive cash-wise. But the career hit is unmistakable.

(My friends who had other options eventually got tired of carrying around the reputation that comes with that place. If you have no other options, that's one thing. But not everyone loves broadcasting that fact.)


There is no career hit associated with Facebook. That’s just ludicrous wishful thinking. People love to complain about Goldman Sachs too, but that still looks good on a resume.


> There is no career hit associated with Facebook

Having witnessed it personally, yes there is. It's not evenly distributed. And if you do something amazing later on, people will look past it. But it's there in a way it isn't for e.g. Apple or Google or Amazon.

> People love to complain about Goldman Sachs too, but that still looks good on a resume

Goldman is universally respected within finance. Facebook is not universally respected within tech.

(Neither Goldman nor Facebook care about public opinion, outside of managing political risk, since neither sells its product to the mass market [1].)

[1] Goldman recently started doing this, though


Yes, I've seen it too. As you say, it's not evenly distributed, and it also depends on other factors such as when you started working for them and what your role was.

But there's definitely some amount of stigma associated with having Facebook on your resume. The interesting question is whether that's temporary, or whether it will become even more pronounced over time.

I could see it going either way.


Is there a scenario similar to Facebook's in finance, right now or historically?


Like Wells Fargo, or any of the folks in this list: https://www.investopedia.com/articles/investing/101515/3-big...

I'd be hesitant about someone who'd worked for Madoff.


Personally, I wouldn't overlook someone who worked at Facebook. I don't really like them as a company, but it doesn't mean everyone who works there is a bad actor.


Yeah, that’s a dismal stat. Roughly, it means smart people don’t want to work for you.


I can't help but think of Zuckerberg's famous quote "young people are just smarter."[1]

I guess they are, Mark.

[1] https://www.forbes.com/sites/stevenkotler/2015/02/14/is-sili...


The older I get, the funnier that quote gets.


Most companies go through this transition as they fill their niche: the raw eng talent starts to get hidden under an empire-building layer of middle management.

They've been able to hire some middle management to mind what remains, replacing change-makers with reject managers from Netflix, etc. Not necessarily bad folks, just not people who are going to be doing a lot of work.


A manager from FB reached out about a position. I told him to take a good look at what he's supporting with his work, and that I might reconsider if the leadership ever sees the light.


If this is true, it would be good for humankind!

My position has always been that people still working at Facebook and its other properties don't think much about what negatively affects millions of other people (though they got their money and some interesting work).

I wish that sometime in the next few years, a past job at Facebook is seen as a personal blemish and a question of character and conduct.


In the current climate of talent shortage in tech, especially in machine learning and deep learning, there is really no excuse for going to FB.


If fewer people who care about privacy go to work at Facebook, isn't that bad news?


I don't think that people who care about privacy could possibly affect what Facebook does by working there.


Why not? Tech firms work collaboratively and even new employees participate in (some) decision-making.


Making a decision that limits the growth of the core product means that you are negatively impacting revenue... i.e., career suicide. This isn't a government (ideally) trying to prevent conflict and unrest; this is just a company making money.


It's the eternal debate: what has more impact? Employees joining the company and making changes from the inside? There's always the risk of the employer saying: "See! No need to change! There's no problem; people are still applying and we're filling all our positions easily!"

Or the employer realizing they can't hire anyone good anymore and deciding to make real changes in order to attract the right talent?

It's hard to say which one has the most impact. At the end of the day, management needs to be on board to make changes.


Facebook's entire business model depends on being very intrusive into people's privacy. It is extraordinarily rare for non-board members to be able to affect a company to the extent of actually altering its business model.


I will believe this story when the stock price actually goes down. Until then it is wishful schadenfreude.

As a point of reference, Boeing's stock price hasn't really gone down, and Boeing's actions killed people.

If the stock price doesn't go down, Zuck will not care.


While many people may not care about a company's morals, they definitely care about its reputation: how the company is perceived by the general population and by others in the industry.


I’ve recently left a CTO job and am going to a large software thing. Facebook was the one company I didn’t consider.

Facebook being “the past” played an equal role to “Facebook being evil” in my decision.


Good. Maybe the scarce resource is good talent, and with it being harder for FB to find, they'll quit being so creepy and so hopped up on their own greatness.


Let's let Facebook die peacefully.

Talented people have many more options to work for.

Be that for money (in finance), or for passion (in other fields, where their conscience will be cleaner).


I tell everyone I can to avoid them, just based on my own personal experience. It was by far the worst process of any company I interviewed with.


It's certainly no longer cool to mention you work for Facebook in my area. Google is still a very strong and popular brand.


Last time FB recruiters wrote to me on LinkedIn I told them I'm not interested in working for a scandal factory.


I interviewed at Facebook with the intention of getting an offer letter I could use as a bargaining chip with my first choices. I was rejected, but fortunately got into my first choice anyway. Despite all the perks and compensation, I would never want to work for Facebook.


[deleted]


source?


So is Facebook going the way of MySpace?

If so what will replace it?


The self-righteous should be alarmed by this: it means Facebook will be hiring even worse scum and doing more evil things. Save the world from evil Facebook: go work for them.


Let it die, this accelerates the process


The sooner FB just dries up and dies, the better (IMHO). What do we need that junk for?


I get a lot of value from Facebook, and it greatly improves my life, even just this past week. I found out about some career opportunities through friends' posts. I saw photos of old friends that made me happy. I got answers to questions about a specific, uncommon musical instrument I own via a Facebook group. I searched for people who would be in a city I was visiting, to message them to meet up while I was there. I learned news about acquaintances having kids or moving between cities. I read some backstory about a city hall decision that made me more informed about what particular politicians have been focused on and why. I bet I'm in the majority, actually, and people who get nothing from it are probably already not using it.


Literally none of those things depends on Facebook, though. Mastodon, Reddit, IRC, or just sitting in your aunt's kitchen for an afternoon could substitute nicely. They have no niche.


You're forgetting about the network effect. None of our friends are on IRC. Reddit is great for some niche groups, but there are tons of Facebook groups specific to a niche that really have no equivalent anywhere else.


The value being in the network means the value is subject to social whims. That's not a secure place to do business long term. All other places I listed have networks as well. You have chosen to think this network is better for you, but if/when people move then you might need to re-evaluate that as well.


No one is claiming that it's the best place forever and ever and that this can never change.

But today, Facebook rules thanks to network effects. Someday it won't (just as MySpace no longer does).


Seconding asciident's comment here: I can count at least 3 different things that had a positive effect on me, and 0 negative ones, in the last week.

I do agree that you probably don't need this junk.


They're still a PHP-based business.


> After the publication of this story, Harrison contacted CNBC to say “these numbers are totally wrong.”

> “Facebook regularly ranks high on industry lists of most attractive employers,” Harrison said in a statement. “For example, in the last year we were rated as #1 on Indeed’s Top Rated Workplaces, #2 on LinkedIn’s Top Companies, and #7 on Glassdoor’s Best Places to Work. Our annual intern survey showed exceptionally strong sentiment and intent to return and we continue to see strong acceptance rates across University Recruiting.”

Perhaps it's best not to take a couple of ex-recruiters' word as blanket truth about company-wide trends.

Of course, the article simply mentions this, then goes straight back to asserting company-wide morale problems, which is an interesting narrative to pursue when that's not really what the majority of employees are feeling (as further reflected by strong hiring numbers and low engineer attrition).


Also, every one of the metrics he cites there is of dubious value in terms of real-world meaning.


Forget Cambridge Analytica. There have been several reports about how badly they've been treating their executives. Why would anyone else trust them after those reports?


Do you think the rank-and-file employees give a damn about how they're treating the executives?


Haha, so true!


I deleted my Facebook profile 2 months ago. Best decision ever, but now I'm just wasting more time on Twitter haha.


Why stop at facebook? Nuke twitter and join mastodon!


I actively turned down a Facebook offer out of college. However, boy does Facebook throw money at people who seem to turn a blind eye to online ethics...

$$$


Is it possible that people who work at Facebook have ethics that merely differ from yours?


It's also possible that the Saudi Prince's ethics are merely different from those of someone at Facebook, but I don't think beheadings are commonly considered a good thing in the U.S.


The same kind of ethics that can justify any terrible action.

As usual, if you feel fine and in line with the company's ethics, then go for it. I suspect that most engineers and Facebook employees know deep down how shitty the product they build really is.


Just as possible as it is that they have no morality at all.


Everyone has morals; theirs just might be different from yours.


Obviously that's exactly why they are NOT at Facebook, though I chuckle at your attempt to reduce it to something as inconsequential as personal preference.


They have to pay more. Median monthly pay for an intern: $8,000.

https://www.vox.com/2019/5/15/18564801/facebook-interns-doub...


That's been the common intern pay at FAANG companies for a while. Heck, I got paid more than $8k at a non-FAANG company back in 2012.


As a business, this is not a big problem for FB. FB will still find plenty of talent.

What they will find less abundant is "top talent," whatever that means. I doubt FB actually needs that much "top talent" to continue successfully, anyway. They're not performing brain surgery on a rocket.



