To summarize why I think professors are going with this arrangement (in order):
1) Much less, if any, grant writing. A standard NSF $500K, 3-year grant provides a lab about $100K per year after university overhead. That will support 1-2 PhD students on a pittance of a stipend.
2) Less teaching (1 course once a year)
3) Computing resources that are far greater than you could get a grant for. One NVIDIA DGX-1 is about $100K. Very hard to get a grant for something like that. Companies have far greater resources than just one DGX-1
4) Access to software engineering personnel to help with research
5) Access to data, and the ability to create new datasets. Many datasets cost over $100K to make (e.g., those for semantic segmentation). Very hard to get a grant to make a dataset (I've tried)
6) Much better media relations to popularize your research
7) Bigger salary
A big part of this, in my opinion, is just that FB funds their labs. Getting government funding for a lab is difficult and frustrating and must be done continually, especially for professors who run huge labs. NSF has a 7-15% success rate, and it can take a lot of effort to write a decent proposal. Combined with regular teaching, this takes a lot of time away from actual research. What these professors are negotiating, I think, is teaching only one semester a year and having a spigot of cash to fund their labs. They also gain access to massive amounts of data and computing resources.
Other companies don't seem to be as open to this arrangement as FB, but it has a lot of appeal to me (although I'm not a lover of FB).
I don't know how it compares in software; I worked for a professor in undergrad who was in microelectronics. I was astonished at how much of their time was taken up by grant writing and applying for funding. The pressure for research professors to support their position was immense.
There's a lot of reasons to hate FB, and a lot of reasons to be wary of what comes out of their research. But to castigate researchers because cash-spewing tech titans create a better environment and experience for researchers than public universities do feels a bit asinine. I've known too many starving grad students, postdocs, and non-tenured professors to prefer that people subject themselves to the system we have rather than take what a company like FB wants to provide.
1. Industry funding comes with its own set of hassles. And overhead is still a thing for industry funding - sometimes the overhead rate is less, but there's been a lot more pushback against this from universities (which I actually think is quite fair).
2. Being able to buy out teaching loads is a function of funding, not necessarily this funding
3. In many institutions, salaries are fixed - what this does is cover a percentage of your salary you don't have to cover from other mechanisms, or let you buy out teaching, as you note. It would be interesting to know if the dual FAIR appointments work like that or not.
While private funding is awesome, and these labs are in a very good position, it should be noted that in my experience, comparing NSF/NIH-style funding with an industry grant, the government grants in most cases last longer, are more flexible, and are less of a hassle administratively.
While all of what you say is correct for grants given to universities by industry, this isn't in the same class of activity. These professors are becoming FB employees while still retaining their professorships. Some aren't even located in the same city as their university for 7 months of the year. It's almost like a yearly sabbatical with funding for your lab and far more resources.
Your list seems pretty complete, but I wonder which items are the most important. It might be counterintuitive.
For instance, I assume that grant writing is such a hassle, the average person would accept just to get out of writing grants. But the guys and gals that they picked up are at the top of their field. Would they also have the same constraints? Seems like they would have higher hit rates, less teaching and so forth.
I think the ability to launch the careers of their grad students, get them hired etc might be at the top, but I am just guessing.
I'm not in CS research, but even 'top labs' can struggle to get government funding from the NIH/NSF. I worked at a lab that consistently publishes in Nature/Science, a few times front cover, and gets very good PR. It's a running joke that we've never gotten NIH funding despite translating our research to people several times. Majority of our funding comes from 3rd party private companies and DARPA, which people might find objectionable despite being a govt. agency.
There is no free lunch in raising money for research.
I understand the call of money but I cannot help but feel very negatively towards academics doing this with facebook of all organisations. After recent events, they cannot pretend not to know the impact and damage their work may have here. Excusing yourself with "I am just a researcher, I don't have anything to do with how my work is used" is just not good enough any more.
I would categorically reject any collaborations with FB as an academic in ML.
I see Facebook as the least ethical, and the least useful from a civilizational standpoint, of all the big tech firms.
Google is driven by the same ad-clicking incentives, but the one-trick pony has been developing other extremely societally useful tech, like self-driving cars and other moonshot projects.
Apple and Microsoft sell products, they do not make users the product (on the whole). Together they pioneered computing revolutions, and I'm confident history will judge them for making a positive contribution (on the whole).
Amazon is a leviathan whose societal value I find more difficult to classify, but I genuinely derive lots of value from their service personally. It's good for my lifestyle.
Facebook on the other hand, is a waste of my time, mental energy and a drain on society. As an academic, how can you turn your mind to furthering its goals?
That might be true, but don't confuse FAIR (their DL research lab) with FB. FAIR employs good people doing interesting and useful research. They promoted pytorch as a framework, and it is perhaps the best framework for non industrial applications.
On a parallel line of thought, I prefer FB's React to Google's Angular. In both React and PyTorch I see the same elegant design. TensorFlow and Angular, on the other hand, are unnecessarily complicated.
FAIR is part of FB. The reason why FB invests billions of dollars into FAIR is because it supports its business model and its democracy-wrecking product. FAIR researchers are complicit in the societal damage perpetrated by FB. They're cashing the (multi-million $) checks, and in exchange they work on making FB more powerful, by giving it better AI.
When you talk to these guys (they're almost all guys), you realize they're fully aware of what they're doing, and in the back of their minds they know FB is evil. They just like the money too much.
This is 100% about money trumping conscience. FAIR researchers may be millionaires, but they're ethically challenged. I wouldn't trade place with them. These people disgust me.
They can get the $$ anywhere else in industry, given how hot research-level ML experience is (and how scarce that talent is). I imagine the appeal of FAIR is much more about the academic freedom. But sure, the $$ doesn't hurt.
Interestingly the head of the Pittsburgh lab here isn't even an ML researcher -- she is primarily known for her motion capture work and for running Disney Research.
I agree with the general principle, expressed fairly crudely, that - good, innovative things (React) can come out of bad places (Facebook, in my opinion).
That's historically true of lots of research innovation though.
War (generally accepted as bad thing!) has advanced technology and civilization repeatedly.
At least the scientists aiding war (on average) had some awareness that killing people is clearly not a good thing, and at absolute best a necessary evil. Can the same be said of the people at Facebook?
The culture of their management is to be in denial about how damaging their service is to the mental health of individuals and society.
Even if Facebook were completely evil, why not take their money? They will benefit from your research just as much even if you don't take it, because you're publishing it. Are you concerned that Facebook is telling the scientists what to work on?
History will also judge Alphabet with a positive mindset (T&Cs apply!). Google did to the web what Apple and MS did to computing. Google files patents but doesn't extract royalties from them, unlike MS. I think the next 10 years will be really crucial for Alphabet (not talking about Google here). The work Calico, Verily, Loon, and Dandelion Energy are doing takes time to create impact. I think Google, Calico, and Verily are going to make considerable contributions to healthcare.
Our products aren't perfect, and we understand that we have a lot of work to do.
However, the fundamental purpose of our products is to allow people to efficiently communicate with each other. Hard for me to square that with "drain on society." I have many friends who, via Facebook, found a connection that was life changing: from finding a job, a spouse, to a community to deal with the loss of a loved one or support after being diagnosed with a terminal illness.
One of the things that draws AI researchers to come work at Facebook is the opportunity to see their work make a positive impact on billions of people around the world.
The research done by FAIR is helping us do things like deliver billions of translations a day, provide automatic photo captions for people who are visually impaired, and help bring blood donors and people in need together. It also helps us spot when someone is expressing thoughts about self-harm so we can alert first responders.
But we also believe there's even more we can do to help bring the world closer together, to give people a voice, and to open up new opportunities for everyone. AI is a key part of that and we believe pretty deeply in the power of open research to help not just us but the whole industry.
> The research done by FAIR is helping us do things like deliver billions of translations a day, etc...
All for the purpose of increasing buy-in to an increasingly Orwellian digital surveillance regime.
> But we also believe there's even more we can do to help bring the world closer together...
What brings people closer together is real human interaction and connection. Face to face communication with visible emotion. Vulnerability. FB's video chat is the only thing serving that interest, but that's better served elsewhere with less tracking. Posts that broadcast one-way to an invisible audience are inhuman. Filter bubbles are toxic. Widespread use of FB is cancerous on the social fabric of society.
This post kind of reminded me of one of those drug commercials with old people happily skipping hand-in-hand through a field of flowers. The only difference is that you forgot to quickly list the many terrible side effects of your product at the end.
Tell your PR team the appeal to emotion was a nice touch. If I didn't know anything about your company I might have even been able to get through it without feeling absolutely nauseated.
> the fundamental purpose of our products is to allow people to efficiently communicate with each other
No, the fundamental purpose of your products is to efficiently surveil, profile and manipulate your users on behalf of your customers.
> It also helps us spot when someone is expressing thoughts about self-harm so we can alert first responders
What happens when those first responders bust someone's door down and your "helpful" feature essentially becomes algorithmic swatting?
On a semi-related note, did you guys ever figure out how many of the hundreds of thousands of people you enrolled in an emotional manipulation study without their consent ended up killing themselves as a result? It's a given that the figure isn't zero across that number of people.
All of these goals are positive. I also assume that's all of the goals that you focus on.
But it's an abuse of power to only look at one side of the equation.
Companies are run by people, and at the end of the day, no person would want to dump their money into a technology that couldn't yield any financial gains. Where do all the large companies get most of their revenue from? From figuring out how to trigger dopamine to be released into our brains, and we're starting to see the negative effects it's having on people.
Also, you aren't giving people voices. You're opening a door to a world they have no control over. In their outrage, and futility, they spend more and more time trying to fix something that doesn't exist.
Is this what you tell yourself in order to sleep at night? From the perspective of an external observer, this talk of "making the world more open and connected (and making billions in the process)" seems shockingly disconnected from the reality of the damage that FB is causing in America and in the world.
You talk about impact. There's no doubt FB is making a big impact. Unfortunately, it's overwhelmingly destructive impact. As someone in position to change that, it would be great for you not to dismiss out of hand the valid concerns of the people in this thread. Personally, it's because of replies like this (in particular zuck's attitude) that I have zero confidence in FB's potential to fix its products in the future. Bye bye democracy I guess.
One day you may be held responsible for your impact on the world. I hope the talk about making the world a better place will work out then.
FB's research in computer vision has produced works like training a neural net on large datasets in 1 hour, instance-segmentation neural nets, and many more, and has made the code and research public with nonrestrictive licenses. These are pushing the state of the art! Check out the work yourself and then come back and criticize these scientists if you feel their work is a net negative on the world.
The question as you so rightly point out is whether there is a net value added.
If we took their AI contributions and JS frameworks on one side of the equation, do you really think it balances the other side of the equation?
On the other side lies encouraging general disinformation, leading to broken elections, and even aiding genocide. Academic studies on happiness show that using Facebook and Instagram correlates with poor mental health; that research has been replicated.
What product actively damages those that consume it?
Facebook is the digital equivalent of the Tobacco industry; good business that's bad for people.
I think you can go even farther, to, "intentionally malicious".
Mark hasn't taken back his comments about his users being "dumb fucks" for "trust[ing]" him, as far as I know, although he has apparently said that he regrets saying it[0].
Well said, and totally agree. They've had too many whoopsie moments and seem entrenched in not learning or changing what is fundamentally broken in their management and business model. They have no legitimate place in research or academia.
It's a bit of a poisoned grail, though. On the one hand, you're selling your soul. On the other hand, they have data beyond your wildest dreams and you can use it all for anything as long as it might make money.
Would you take that offer? Would I? Probably not. But I can see the appeal, and plenty of people wouldn't hesitate.
But I can see the appeal, and plenty of people wouldn't hesitate.
It will keep happening until these companies are a black mark on your CV, like say a tobacco company might be. Will people be so eager if it means they will be shunned by the wider research and engineering communities?
FB and Google are the only ones pushing the industry forward. The huge amount of data and computing power is what brings these researchers, its not all about money. In the long run the scientific advancements from these companies is a much higher positive than the negatives they have in the present.
Any researcher in this area worth their salt can easily get cloud credit grants and collaborations from Google, Microsoft, Amazon. I pose if you go to Facebook, it's very much about money.
But can they get the huge volumes of data they have tagged out? The real time pipelines and internal tools? The community of people they will be working with?
>>> FB and Google are the only ones pushing the industry forward. The huge amount of data and computing power is what brings these researchers, its not all about money. In the long run the scientific advancements from these companies is a much higher positive than the negatives they have in the present.
> But can they get the huge volumes of data they have tagged out? The real time pipelines and internal tools? The community of people they will be working with?
It's highly problematic when a researcher pushes ethics aside in order to gain access to data and chase "long run ... scientific advancements."
This is incorrect, in my opinion. FB and Google are where capital is currently concentrated in the software industry; therefore, they are able to hire a lot of the top software talent out there. This talent is responsible for pushing the industry forward, and it often does so in a manner that is largely company-agnostic. This is why we get React help pages telling us to use Enzyme from AirBnb for running tests - because the work is being done by software developers who are building general infrastructure for the web, and who would probably end up doing the same basic work regardless of who was paying their salaries.
It's best to think of Silicon Valley as two entities: a mass of technology workers who build software, and a financial extraction function that attempts to extract value from the work they do.
Tell that to the Rohingya, who were betrayed by Facebook and subsequently killed.
I'll be waiting for the scientific advancements to be useful to the public. So far they've increased rates of depression, anxiety, etc., while enabling totalitarianism.
How is IP handled in this situation? Especially given the recent discussion [1] on HN about how the recently announced DeepMind Patent Portfolio could be a problem, how is it that state employees (Washington, California) are "co-employed" by an organization to do _exactly_ the research that their home universities work on ? Who owns the IP in this case? Are these sorts of agreements FOIA-able?
> how is it that state employees (Washington, California) are "co-employed" by an organization to do _exactly_ the research that their home universities work on
They will work part-time for the university, and part-time for Facebook. I don't see why that's so shocking? Presumably the university only funds them part-time for part-time work, so it's not like they're not getting what they're paying for if that's what you're asking. This is a very conventional set up for academics.
Most employer agreements have fairly draconian claims around IP. And I believe that many universities require their faculty to run consulting, etc. gigs through their legal teams, and the university gets a "right of first refusal" of sorts on the generated IP. Many academics get around this by claiming that they are doing research X at company (more applied, etc.) and research Y at the university (more theory, technically different projects under different grants). But with ML it seems like it would be harder. Regardless, I'd love to figure out if we can get these agreements through freedom of information act requests, to shed light on what state-employees are doing.
Yeah, at my university partnerships between academia and industry have very complex IP terms that depend on the scope of work, how much of it involves novel research, etc.
And there, there's the clarity of "I'm an employee of the university." With dual appointments, the agreement has to be complex. Either that, or really draconian, based on Facebook throwing its weight around and saying "Accept this, or we'll find another university that will."
A related quote from this fascinating interview with Jonathan Tow (media researcher)[1] on FB research:
> Often, these companies [like FB] are open to research partnerships and things, but it’s always on their terms. If you do research with them, you’re dealing with IP issues, you’re signing over the rights to the research. It has to be reviewed completely and vetted by their legal process. They often handpick researchers that help them and help their purpose and help their cause — they maybe throw in some sprinkles of criticism. I understand why they would be hesitant to want to work with people like me.
I think Facebook hate is clouding out reasonability in this thread. E.g., there's even a comment asserting that people who collaborate with facebook (a.k.a., Jessica Hodgins, Andrea Vedaldi, and Jitendra Malik) are "not worth their salt" (?!?!?!).
IDK. Maybe -- just possibly -- there do exist researchers who have their choice of funding spigots and are choosing to work with Facebook. Either that or HN has some damn high standards for what it means to be "worth their salt".
There's nothing intrinsically wrong with industry collaboration, even when it involves companies whose impact on the world you might not like. The big oil companies are, unlike FB, an actual existential threat to humanity. But I wouldn't fault renewable energy researchers for taking research dollars from those companies.
The question is: will the funded research agendas push science forward in the direction it was headed anyways, or will this money distort the type of research being done?
In any case, in a week we'll be back to our regular programming bemoaning the fall of the industry research lab and the paltry salaries offered to phd students...
You are referring to my comment about 'worth their salt' in a gross misreading. I said anyone worth their salt has other choices, so going to FB is a deliberate choice by these professors.
Serious question: Is it wrong to work for organizations that have done horrible things? The US government has plenty of blood on its hands from the past few centuries, yet I don't think it follows that everyone should quit. Rather, if you're a moral person, it seems better to think about how you can enact positive change from within, rather than seeking to morally disentangle yourself from an organization you find unethical and thereby allowing it to become even more unethical.
And of course, the US government is different from Facebook in many ways. Perhaps the same logic of not working for them doesn't apply. But if it doesn't, I think it would be helpful to explain why.
> By choosing to work with facebook you're literally working for a company that has stoked mob killings and enabled genocide.
Okay, granted.
> Anyone working there needs to be called out.
I'll take this seriously when you start "calling out" IBM employees for working for a company that in the past enabled genocide.
I'm unsure how research on "visual and robot learning" or on "geometric 3D reasoning" is supposed to be enabling genocide. Those seem more akin to "green energy researchers taking money from big oil cos". Which is very different, ethically, from actively helping build and maintain the core business. Unless you want to suss out the link between geometric reasoning and genocide for me?
And in the case of Facebook, your opinion is worse than just hyperbolic. There is an entire research field that focuses on detecting speech sentiment. Your position, as stated, is that those people should not work with Facebook, even if 100% of their time is spent designing algorithms and processes to detect and remove hate speech/calls to violence during active genocides.
I think that advice is actively dangerous because I don't believe in a magical world where Facebook disappears tomorrow.
Facebook is an evil company that has harnessed and unleashed an ugliness in humanity which they take no responsibility for and is literally getting people killed.
An important aspect in any research arrangement is PROTECT THE STUDENTS. Academic research is likely to have many student assistants. The students should be able to publish their results in a timely manner. Students will benefit from exposure to researchers and resources in industry. And potentially lucrative internships.
My alma mater Stanford had a bumpy ride in the 1980s: an overly clingy patent policy, faculty startups not allowing publications. Lawsuits ensued. But by the time the dot-com era began the next decade it was much smoother, e.g. the Yahoos and Googles.
And this not just comp-sci, but biotech, oil&gas, aero engineering too.
What level are these professors being hired at within Facebook? What sort of base salaries, stock options do they command? Any idea on total compensation numbers here?
With them being part-time professors what are their university salaries like?
In total compensation between industry and university I'd imagine this would be a good deal.
Lastly, how does recruiting work here? I assume they don’t make these people go through 5-6 algorithmic interview rounds.
Total compensation: Definitely a wide range, but a few hundred K would be the median.
Recruiting: one of my labmates flat out refused to do algorithmic interviews for a research position and the company still gave him an offer. Your projects, papers, and ability to demonstrate your understanding of research topics are far more important than your ability to solve a small, defined problem in 45 minutes.
I assume there is a distinction between software engineering and research applicants in terms of their interviews at the Big 4s? I know two people with PhDs that were given algorithmic questions and had to go through the same process as regular software developers (this was at the same Big 4 company). The only exception is one had expertise in ML and during one of his interviews they brought in a ML expert and interviewed them on their dissertation, in addition to their 4-5 algorithmic interviews. Both of these people were interviewing for software developer positions despite having expertise in research. Wasn't sure if the research interviews would be different. Good on your lab mate.
I interviewed for a research position at Facebook reality labs, and it was the same process as an interview for an academic job. Give an hour talk about your research, meet with 6 researchers in hour chunks for the rest of the day. The process was similar for my previous industry research job as well. I can't speak to Facebook's other research arms or Google.
Good to know. So you weren't asked any algorithmic leetcode-style questions or questions testing basic knowledge of your field (e.g., tell me what a random forest is)? Doesn't sound like it. It was more focused on your research?
I see similarities here with pharma and medical research. Conflicts of interest are endemic there, and they will be here, too, but this sort of sponsorship is inevitable. What can be done? In the case of pharma, I'm OK with them sponsoring peer reviewed, academic research, as long as policies are in place to make it immediately obvious where the money is coming from when I read the literature.
While I dislike FB, I don't blame researchers/profs for joining FB when the numerous advantages for joining them dwarf what you could get at a lot of other institutions.
Reminds me of how on the one hand it sucks that certain talented engineers/designers work for ad-driven companies, but at the end of the day the compensation is simply on another level compared to working for more benevolent causes.
As usual, FB and other companies only collaborate with the "elite/Ivy League" schools. IMHO there are plenty of great researchers at non-elite schools, and it would have been great if FB would start collaborating with those to reduce the vast concentration of academic research that has been happening for the past decade.
It's a bit chicken-and-egg. These collaborators are also in cities where FB already has offices that they can lead teams out of. Of course, these offices were started because of the prestigious schools that were already there.
How does being a public school mean a college cannot be elite? Pretty sure his point is these schools are elite within this field, and people from lesser known schools (not just public) are rarely given these chances.
In my area, everyone first dropped Messenger because some people had no accounts and moved instead to WhatsApp, which guarantees that if you have a phone you can easily get an account in a matter of seconds (download and install), without having to create a full profile on some site you couldn't care less about. (At this point no one writes on my old account's feed anymore; instead it's used just as a news aggregator.)
Then everyone moved to Telegram because it allows you to share your account to others without giving them your personal number.
It is just a matter of time before everyone moves to the simplest system for a given purpose.
Social dynamics being in your favor can work for you and against you. It's delusional to think that just because you can make that graceful transition, people aren't going to get pissed off at it. All you've done is silence their perspective as you make that graceful transition. That builds up over time, and this is why this stuff goes in circles. You have to actively listen to people and make an effort to understand that the social dynamics being in your favor is not something you control. It controls you. You are the head of the crowd, therefore all your actions must be aligned with the interests of the social dynamic. If you silence any single one of them, you create a different dynamic. So you have to make the effort to be aware of alternative perspectives instead of choosing blind ignorance in the face of valid arguments. That's just the business of politics.
> It's like saying "quit using the Telco stuff!" 20 years ago.
Come on.
I know there's a slew of people out there happy to spout about how it's the only way they can stay in touch, but it's only true in the rarest of cases.
Facebook is, by numerous reports, a company you should easily dislike. I'm not going to post links. You know the stories, and if you don't it's not hard to search for them.
Using their tech is just feeding the beast. It's like a democrat staying at a Trump property.
Everyone knows better, but few have the spine to stay away.
Yea, and there was a lot of controversy that went all the way up to the government, and there was an attempt to break apart the tight centralization that the Telcos had become. Obviously this is repeating itself.
Social systems are built by people and broken apart by people. People are telling you to stop using Facebook because they don't have anyone's best interests aligned with anything besides their own. So stop using Facebook unless they figure out how to prove differently. Trust takes time and when it's broken, it takes effort to regrow. Don't give them that power over you. That's the point of what a lot of people are saying. All of that stuff influences the ways we see all online communication. A breakdown in trust from one superpower affects perception of everything it is connected to. And some people like socializing online freely, without having to constantly police themselves for every statement uttered, because for some people, that's all we ever had for real and fluid and trusting social interaction and that's all we learned to feel comfortable with. Facebook changed that dynamic to an extreme.
So Facebook sucks, that's why we have that perspective. They are going to have to do more than AI research because it's still very obviously aligned with their own interests and not necessarily the interests of the people that use Facebook. They need to rethink stuff, or at least think about the perspective of folks that use Facebook without projecting their own insecurities onto the rest of the world. Do different things, think about what Facebook does that breaks relationships down instead of builds them, think about how to solve those problems. Because it does. It establishes social dynamics that are not useful for meaningful dialogue. They took advantage of that, which is an abuse of trust. All of this stuff matters in this political climate. It affects all of us.
Yes, you use a service, you become the product. That's what we've been led to accept. How about just not seeing people solely as means to an end (profit)? It's a balance. Capitalism matters to this country clearly, it matters to lots of people on Ycombinator, entrepreneurs, startups, small business, developers, etc. But it's still a balance. You don't want it to override everything you are and everything you do because then it controls you, instead of you have control over you. That would seem like a contradiction because the point of having financial security is to have your own control. But the world is much more complicated than the business of making money. This doesn't mean toss everything you've ever done in the garbage. Just means be mindful of the balance.
Facebook, for whatever reason it was built, was sold to a consumer public as a means for improving socialization. It is not doing that because it chose to not do that. It took an action that was orthogonal to the mutual trust established in the business dynamic. Takes time to repair that. AI is still too tightly coupled to predicting how people think.
Bottom line is, you have to ask yourself "Why do they want to do that?".