Maybe different teams are different, but on my previous team within Google AI, we thought the goal of Google's pubapproval process was to ensure that internal company IP (e.g., details about datasets or Google compute infra) does not leak to the public, and maybe to shield Google from liability. Nothing more.
In all of my time at Google AI, I never heard of pubapproval being used for peer review or to critique the scientific rigor of the work. It was never used as a journal, it was an afterthought that folks on my team would usually clear only hours before important deadlines. We like to leave peer review to the conferences/journals' existing process to weed out bad papers; why duplicate that work internally?
I'm disappointed that Jeff has chosen to imply that pubapproval is used to enforce rigour. That is a new use case and not how it has been traditionally used. Pubapproval hasn't been used to silence uncomfortable minority viewpoints until now. If this has changed, it's a very, very new change.
And the examples of issues flagged in review that Jeff keeps highlighting—like Timnit’s alleged failure to mention recent work to reduce the environmental impact of large models—are themselves a bit worrisome. Jeff gives the impression that they demanded retraction (!) because they wanted Timnit and her coauthors to soften their critique. The more I read about this, the worse it looks.
Yeah, put more simply, they pushed out someone in their Ethical AI department because she did not soften her critiques of AI enough. They couch this in terms of rigour, but the substance of the problem has to do with her criticisms of AI.
Ultimately it makes the whole Ethical AI department look more like a rubber stamp for Google.
Let's be even more clear - they pushed out someone in their Ethical AI department because she wanted to have human conversations to determine the basis for being asked to soften critiques.
It's one thing for reviewers, even anonymous reviewers, to reject a paper on its merits; it's another, in Timnit's own words [0], to be told "'it has been decided'" through "a privileged and confidential document to HR" despite clearing the subject matter beforehand. In light of a more general frustration, it's very reasonable for Timnit to escalate the situation by putting her own career on the table, simply to request that people engage with the paper rather than flat-out rejecting it.
And if Jeff wants to respond by immediately cutting ties, and by putting out a document that doesn't even address the situation at hand (edit: much less the underlying issues of unequal treatment for women that Timnit describes)... that's a reflection of his ethics and the ethics of the company that stands behind him.
Also from Gebru's memo ([0] in the parent comment):
>And you are told after a while, that your manager can read you a privileged and confidential document
Emphasis mine. Showing your employee that you don't even trust her with a written copy of the rejection of her paper is not a great way to engender a good working relationship. Note that this pretty clearly seems to have happened before Gebru sent the email that Dean characterized as an ultimatum.
It sounds like there wasn’t a great working relationship. It seems management was concerned (reasonably, based on her track record) about the prospect of her responding with hostility directed at the coworkers who expressed concerns about the quality of the work if she managed to discover their identities. Refusing to let a person have a written copy of anonymous feedback is a rational thing to do if you’re concerned that the person will closely analyze the feedback in an attempt to de-anonymize the reviewers.
The fact that she issued an ultimatum for the identities of the reviewers suggests that management was correct to have safeguarded them in the first place.
From what I could read, including the responses of her own teammates, the paper passed the internal review, and she had already given a heads-up to the PR department about her work, and they gave a heads-up to her. Then suddenly a meeting pops up and a manager's manager tells her she needs to either retract the paper or make certain changes. She was fine with the internal committee being anonymous, but at this stage anyone would have demanded the same thing: who is the authority that thinks this paper sets a lower standard for what Google stands for? She asked for some sort of human engagement, and what does that authority do? They take her at her word, twist it, and fire her by "accepting her resignation." What does this sound like to you? To me, a kind of high-and-mighty attitude from the authorities: how dare a black woman, and from the ethics department no less, question our conception of the matter; let us show her what we can do. Fire her!
This might sound a bit exaggerated, but all of this is just putting Google in a bad light, and on top of that over 500 Googlers have written a letter demanding an explanation. Those people know more about the internal workings than you and me. It is also surprising how many review processes Google has; it's double pressure: first get the internal clearance, then work with the actual reviewers at the conference.
And now Jeff comes up with this explanation: https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQ....
And not once does he mention that the paper had already passed the internal standard review process.
"based on her track record"? The fuck does that mean? If you wanna say that people shouldn't call out their employers, then say that. No need for character assassination.
Her contributions to the exchange were certainly unprofessionally hostile (“I don’t have time for this.”), but in and of itself that sort of behavior isn’t a real problem. The actual problem is that as a person with a large following, her hostility precipitated harassment by her followers that ultimately drove Lecun from Twitter. For someone on the receiving end there isn’t a meaningful distinction between whether someone harasses them directly or whether they incite harassment by others.
From what I’ve read about Gebru and this situation it doesn’t seem implausible to me that had she identified the reviewers she would have named them in a public venue and characterized their criticisms as being driven by discriminatory bias or an intent to suppress her work. Obviously nobody is going to present criticism, regardless of whether the criticism is legitimate, if that is a possible consequence.
Call out her employers? No. I’m referring to the exchange she initiated with Yann Lecun that caused him to be harassed to such an extent that he withdrew from Twitter.
Really makes one wonder if this document is one that Google does not want to come out in discovery, ever, and that it's in some system with a relatively short TTL before it gets deleted, because policy.
> Really makes one wonder if this document is one that Google does not want to come out in discovery, ever, and that it's in some system with a relatively short TTL before it gets deleted, because policy.
I suspect not, because it's probably a carefully constructed document to fit the pretextual narrative of the constructive termination campaign that it was part of, which was targeting Gebru based not on the particular paper but on race/sex and criticism around the internal culture on those issues.
At least, the fact that so many Google AI people describe how Dean's characterization of the review process does not comport with the usual practice of that process, and in some ways differs even from the official documented process, suggests very strongly that the entire review issue was pretextual and personally targeted, and not about the paper itself at all. The interpretation of what is behind that pretext is a little more speculative, but you don't need a pretextual campaign unless the actual basis is prohibited, or even worse for PR than the pretext.
> Yeah, put more simply, they pushed out someone in their Ethical AI department because she did not soften her critiques of AI enough.
I didn't read it that way. I read that the person _demanded_ to know who gave particular critical feedback, and questioned the approaches instead of addressing them. The person gave an ultimatum: she would resign if the details were not shared.
If your work is suddenly and unexpectedly roadblocked at the last second by internal review, the only way to make changes that prevent that from happening in the future is to clearly understand the situations and criticisms that led to the roadblock. This is why understanding who raised these concerns is important. Anonymous feedback blowing up a project at the final moments is sure to frustrate anybody. If what she has said is true, then it was also very difficult for her to even have access to the substance of the critique in the first place, with the initial story from her management being that she would not be able to see the documents explaining why the paper was to be retracted.
The critique here appears to have been fairly minor, too. Failing to cite some recent research is rarely grounds for rejection.
>Now might be a good time to remind everyone that the easiest way to discriminate is to make stringent rules, then to decide when and for whom to enforce them.
It would be pretty easy to discriminate if you had loose underspecified rules, then decide action on a case by case basis. The problem seems to not be in the rules but in the deciding.
Why can't it be both? Very early in my career, while working in an employment litigation office, I observed an instance where rules were explicitly created in order to box an individual in, such that their actions, while completely legal, moral, and in the course of their professional duties, could be used as grounds for dismissal "because policy".
A lawsuit emerged. A settlement followed.
Just because "we made rules for this" doesn't mean the scrutiny should suddenly cease.
I mean, this sounds deep and sensible, and I support the underlying sentiment when it comes to laws and government force, but it's not so black-and-white in social situations.
Someone farts once - no big deal. Someone farts all the time => they're quitting or I am.
It was only ‘at the last second’ because Gebru chose not to follow the normal procedure.
If the paper genuinely can't be ready until one day before the external deadline, the right thing to do is engage with the reviewers in advance, explain the problem, and provide them with drafts and work in progress, so that they can complete their work a few hours after yours.
What Gebru did is the equivalent of bypassing code review and pushing to prod on Friday afternoon.
According to Jeff Dean's account of events, yes. According to nearly everyone else's, this process was unusual in that it even involved reviewing for content and not just IP, and Gebru herself says that the website about the process says to submit at least 1 week before publication, not two.
"According to nearly everyone else's, this process was unusual in that it even involved reviewing for content and not just IP"
This is ethics for Google's AI+Search which is currently undergoing global scrutiny, particularly by Congress and specific politicians who are considering anti-trust measures against Google - and who believe that 'their political party is being treated unfairly'.
It's existential concern for them right now, relating to the possible breakup of the company.
Every public communication on 'ethics' or search results etc. at Google is obviously going to have to be reviewed.
If you're publishing the latest thing on 'AI Random Number Generation' obviously nobody cares about anything other than IP.
The fact is, she must have known this and submitted anyhow, which in and of itself is not so bad; but for the calamity afterwards, there is no excuse.
Google was absolutely reasonable - they did not ask to change the nature of the research, but wanted to make sure that information about new, better processes was included.
It's beyond gracious for Google to do this, when really their starting point is 'silence' and they really don't have to do anything at all.
A request for a fairly short review with very basic and reasonable concerns blew up.
This is not a public university; you don't get perfectly tenured academic freedom. If Google wants to put a reasonable subnote in there, and take 2 weeks to do it, that's perfectly fine.
Obviously Google would have kept her if they wanted to, but it's clear they were both looking for a way to part ways and it's probably for the better.
> This is not a public university; you don't get perfectly tenured academic freedom. If Google wants to put a reasonable subnote in there, and take 2 weeks to do it, that's perfectly fine.
They may be allowed to, but they're fools if they think world-class academics are going to work for them under draconian publishing standards that are not even consistently specified. I'm sure Gebru could get a tenured position at a university of her choice. They're throwing away a lot by choosing to die on this hill.
"but they're fools if they think world-class academics are going to work for them under draconian publishing standards"
?
I suggest it might be 'foolish' to imply that 'a 2-week quick review with minor additions' is anything remotely 'draconian'.
Just the opposite -- this is a siren call to great researchers who want to be highly paid and work on great and novel things, knowing full well that Google has a very light review process and won't interfere or suppress.
This makes Google sound like a great place to do research, probably better than most public institutions.
I think that demanding retraction of a paper with no reason given, then only providing a verbal reason (i.e., the researcher cannot keep the notes when making revisions), refusing to explain the process by which feedback was solicited, and then demanding retraction (NOT a revise-and-resubmit)...
yeah that's super draconian.
ESPECIALLY if other people in your department are claiming no one else has to go through this, just one of the few black women! Damn!
> when really their starting point is 'silence' and they really don't have to do anything at all.
They can't have their cake and eat it too; if they want to hire people to do AI ethics research, and then censor them for doing their job, then they should get called out for ethics-washing, which is exactly what's happening.
I don't know why so many people love to defend power, especially when that power is not benevolent.
The requirement for a fairly light review process, and asking for more, truthful, factual and contextualizing information to be disclosed is not censorship.
Nobody is suppressing research, or even asking that specific opinions or results be changed.
The commenter above used the term 'draconian' to refer to this process, which is just superlatively false.
"I don't know why so many people love to defend power, especially when that power is not benevolent."
How is this power not benevolent exactly?
What's 'hard to understand' is the petulance and irreverence people have for the offices and responsibilities they hold, and the lack of professionalism in their conduct.
This should have been an easy issue to address by any mature researcher who cared about working with others to achieve positive outcomes - instead of trying to force their opinion on an organization, or engender public support for their career.
There are plenty of reasonable voices at the table for 'Ethics in AI' nobody has a magic wand in this equation.
But they didn’t ask for more factual and contextualizing information. She wasn’t given a chance to revise the paper to include that. It was just canned.
I’ve only ever done the paper review process twice. In both cases I got it approved concurrently with submission to a conference. Other googlers have similar stories.
As a manager, if someone gives an ultimatum, you basically have to fire them. There's no real option; Ben Horowitz covered it somewhere, but the bottom line is that if you yield, you've given up all control.
This is rubbish. At Intel we called it "badge on the table". It's a statement of complete commitment (and I've been the beneficiary of someone going 'badge on the table' at least once, under circumstances I can't disclose).
It doesn't imply that if a manager or VP or CEO concedes the point at issue once, then the person can now go around "putting their badge on the table" and getting their own way over and over again on other issues; probably making a habit of issuing ultimatums (ultimata?) will get you fired PDQ.
I only know of 2 cases where I know for sure that it happened, although I've heard rumors about more. It's not going to be one of those things where you run around yelling about it, especially if you succeeded (after all, flexing on your management that you did it is likely to make your management unhappy).
One case someone succeeded. The other case some other person resigned.
Sorry, that was clumsily put - what I meant was in one case person A got what they wanted, and in another case not involving anyone from the first case, person B didn't get what they wanted and resigned.
I understand some of the reasons why some managers think that way, but you can't have such simplistic rules.
An ultimatum like this is an opportunity for a responsible manager to talk and rethink, but it seems like Google jumped at the opportunity to double-down on their mistake and then send out cowardly emails claiming the employee had actually resigned.
If I were to apply a simplistic rule here, I would actually invert it - if you get to a point where you are sufficiently undervalued that you feel the need to issue an ultimatum, you basically have to resign.
The problem with accepting anti-social behavior is that you encourage it. This individual or another will use the same strategy against you in the future, and it will create antagonism and toxicity.
No, an ultimatum is a choice between two options; she offered Google a choice, and they selected. Expecting them to try to carve out a 'third way' is just unrealistic.
I agree that you can frame this many ways; she could have portrayed this as her resigning in protest, instead of blaming Google for being vindictive.
Ah, the old "I don't want to do my job" excuse: claiming there "was nothing I could do" is middle-management bullshit.
It is fine if you think that, but accept that you are the weak one here. If you want to err on the side of keeping your job it's fine, but don't pretend you didn't make a trade off.
You do not have to accept anti social behavior but a good manager would have handled this and it would never have reached this point, public or otherwise. This whole episode is failure of management top to bottom.
Respectfully, we don't have enough information to judge whether she was net-positive to the team who was acting reasonably. That's a complex calculation, and I don't know whether management triumphed or failed.
It does seem like she judged the situation incorrectly, as she is now complaining, not gloating.
The fact that this has over-flowed into the public sphere is a failure.
If they had handled the situation correctly, it would have been sorted internally. That the person in question spilled the beans at all is proof of that.
Managing people is a skill, and being good at computer science does not make you a good manager. They should know that complaining to social media is an option that someone might take and they should consider that when dealing with these issues.
The fact that we are here discussing anything at all proves the above. It isn't 1995: if someone feels slighted for whatever reason, expect it to show up on Twitter, true or not. You don't want to be chasing the narrative with a potentially one-sided Google doc. No one is giving the megacorp the benefit of the doubt in 2020, which means it is bad PR either way.
"a good manager would have handled this and it would never have reached this point, public or otherwise. This whole episode is failure of management top to bottom."
My point is that I do not know whether this situation could have been handled better. We don't know enough to judge whether this could have been sorted out neatly. You seem to think that a clean resolution was possible, and you might be right, or you might be wrong.
Fair, but I don't think we need to know what happened to assess what is happening now.
I consider this being discussed in the public sphere a failure regardless of the situation, as it looks bad for the company no matter what.
If a person feels their only way out is to appeal to the mob then I think the people doing the management have made a misstep. If that person has a history of appealing to the mob then it is still a misstep as that should have been considered when dealing with the issue.
Perhaps they did the calculus and this is the best result, but looking in, it doesn't feel like it.
> My point is that I do not know whether this situation could have been handled better.
Let's follow the timeline and discover a root cause:
1. Anonymous feedback is given through HR about a research paper in AI ethics to be published in an academic forum.
2. A manager schedules a meeting where "it has been decided that you need to retract this paper by next week...", without context and without a chance to confront her critics.
3. She gives her boss an ultimatum: she can't continue to work there under conditions that limit her freedom to speak and research. Google decides to accept her resignation.

This suggests that:

A. People can just go to HR with criticisms of a research paper, apparently with the intent to sabotage the authors, and HR is apparently fine with being used like this. Or possibly a manager convinced HR that OKRs trump AI ethics.
B. They wanted her to say certain things in an academic forum, for reasons that didn't appear to be IP/trade-secret related but which they refused to disclose. This is in an environment where ethics papers might become guidelines for legislation.
C. They're not interested in fixing the issues she brought up, because they allowed #1 and #2 above to happen.
It looks like the root cause was A above. Everything after that cascaded from there.
Should HR be involved in "fixing" a paper in AI ethics? Probably not. Just like you wouldn't take your car to HR to get it repaired. They simply don't have the knowledge to do so.
Then there's Jeff Dean, who probably has $20 to $30 million wrapped up in Google, so he's going to take their side on the matter publicly, unfortunately. Privately he may have been cussing out HR for forcing him into the situation. We don't know.
Is it anti-social behavior if a company tells you, 'do X or else'? Even recently plenty of companies have told employees that they can move and work remotely but they had better report it so their salary can be adjusted. The penalty for not reporting being firing.
Ultimatums shouldn't be a frequent occurrence but they are a part of business relationships. It seems a bit unfair for an employer to treat an employee ultimatum as a fireable offense when company policies are sometimes the equivalent.
Employees sometimes decide that an employer ultimatum is offensive and quit sometimes too. But I don't think it is nor should be a set-in-stone rule that an employee that issues an ultimatum should be terminated.
> No, an ultimatum is a choice between two options; she offered Google a choice, and they selected. Expecting them to try to carve out a 'third way' is just unrealistic.
But you're claiming that for a company there shouldn't be a choice, it should just lead to termination.
Well, the company is in slightly different position from the manager. They can abrogate the manager's authority, but that would permanently undermine that individual. On the other hand, they can also choose to accept the subordinate's resignation. They could try to transfer the subordinate somewhere else, but that's also risky, and wouldn't really address the ultimatum in this case.
Accepting a resignation achieves three separate objectives:
I think you're coming at the ultimatum from some sort of strange power dynamic perspective, where an employee who successfully gets their ultimatum approved somehow disenfranchises their manager of their authority, enabling future employees to…what? Vie for the managerial position? This has a "crush dissent" kind of vibe.
In every employment contract there is a balance between the things an employee is willing to do and what an employer is willing to provide in exchange. If my boss said that they wouldn't pay me anymore, I would rightfully respond with an "ultimatum" of "pay me or I quit". That's the ultimatum they respond to every day by paying me; they look at the balance of things I offer, consider what I provide to the company to be adequate, and then give me the money I ask for. The same is true for any ultimatum: you come to the table with one final negotiation, the negotiation of "do you value me? Then you must provide me this". It's an entirely transactional exchange.
Now, ultimatums are generally to be discouraged, not because they undermine some sort of authority, but because they are a sign that negotiations have broken down on both sides. As a manager, your goal should be to try to reach a compromise far before that point: not only does it hurt your relationship if you don't, even when the ultimatum is "successful" from the employee's point of view, but by letting a conflict reach an ultimatum you're exposing yourself to significant risk and often poor deals. The way to handle an ultimatum is to forestall "pay me x or I quit" with "I'll pay you almost x if you show good performance for the next three months". If you are at the point where the argument is "I'm going to quit", then yes, you may have to carry through with the termination if you think what they provide is less valuable than what they want from you, but you should really be looking at what you did to get to that point instead.
Yeah, and whether intended or not, a "fire anyone who gives you an ultimatum" strategy absolutely creates that vibe.
If you have a top-down management style where your employees do not question anything you say, that might be the way to go, but I find that in the software business what you want is the opposite. You want all the criticism and feedback you can get from your skilled and knowledgeable workforce. If you don't get that, you're wasting the majority of the money in their paychecks.
The irony here is that if you have a manager firing someone who presents an ultimatum, then that in itself is effectively an ultimatum that you are supporting. ;-)
That of course also doesn't mean you accede to every ultimatum. I mean, if your business plan is to do X, you want employees that will help you to do X. If they are getting in the way of X, then you need different employees anyway. Usually though, you and they have already worked out that they want to work with you to help you do X before you hire them.
So the main reason you get ultimatums is because they didn't anticipate and do not like the approach you are taking to get to X. Assuming they are smart and have good judgement (and again, if not, why did you hire them? why are you paying them?), there's a very good chance that there are some problems with your approach and you'd be wise to at least consider that possibility and their perspective. They may be trying to save you from making a terrible mistake, and feel like it is incumbent on them to stop working for you because allowing you to proceed would be working against that goal you hired them for.
It's not uncommon for two people to have very different perspectives on what helps to achieve a company's objective. It's also not uncommon for one of those people to be horribly, horribly wrong. Sure, if you've got an employee who has presented an ultimatum based on horribly wrong judgement, it may make no sense to be their employer.
I'll tell you though... just because they're a subordinate doesn't automatically mean they are the one exercising horrible judgement... and the farther you go up the food chain, the more severe the consequences of supporting someone's horrible misjudgement. So having a policy of summarily firing subordinates who present ultimatums both creates the wrong environment for getting the best out of your team and is terribly harmful to the leadership of your organization.
What gets me is all this talk of an employee making their terms of employment known (this so-called "ultimatum") being somehow unusual. An employee/employer relationship is ultimately a running series of ultimatums. What's really discouraged is making each one explicit, but of course, that doesn't mean they aren't there, nor that occasional forthright discussions aren't customary. What do these people think a performance review is?
Usually the goal of management is to employ explicit, stop-gap communication to avoid having to get to the explicit question of continued employment, because the company has already made a commitment to that employment by hiring the employee in the first place. Obviously, most employees want to continue on, also. So it seems nonsensical to view anything save an explicit declaration of resignation as the same. "I would like to discuss what would cause me to resign" is not a declaration of resignation, and the people reading this situation in good faith understand that.
Getting to the point where things need to be explicitly stated is unusual, I think. The rest of the ultimatums remain unsaid because people are aware of them already and work within their bounds already. And getting to the point where you have to give a verbal ultimatum requires a party to not be aware of its existence, which is rare when communication isn’t totally broken.
Why? Is there something about CRT that threatens your means and way of living, or is it forcing a type of introspection about what minorities have and continue to go through in various forms and machinations you'd rather not entertain?
It wasn't veiled at all, it was a bald-faced ask, why dodge it? If the veiled accusations that some people utilize in the name of CRT is bothersome, why wouldn't you call THAT out from the very start?
That tactic is not a problem inherent to CRT, that tactic is a problem with how people deploy and weaponize CRT.
In the absence of anything else, yes, people are going to make assumptions.
But you're doing the thing. The tactic that you agree is bothersome.
Edit: I agree that one could incorporate some CRT into their worldview without becoming insufferable, in fact I think lots of normal people have without calling it that. That said, there are a lot of true believers out there, that's who I was talking about.
okay, well since the comment I initially replied to has been edited entirely post hoc to represent an entirely different tenor than what you originally replied with, I guess I need to edit mine as well:
No, I'm not doing that right now, I am trying to understand your framing of CRT and where your issues lie with it. It would seem those issues lie with how certain people argue CRT, not CRT itself.
What stands in your way other than being faced with possible objections to, and responses in kind to, whatever your critiques may be? Objections and responses that, I would boldly say, are not stopping you from making said critiques; rather, they hold no enforceable power that precludes you or anyone else from making rebuttals of your own.
They are just that, objections and responses. Which you are free to entertain or not, attempt to unpack and understand or not, respond to with better critiques, objections, observations and rebuttals of your own...or not. But you're not being prevented from making them by anyone or anything short of I suppose committing some sort of crime in order to make that point (that's just an extreme example to stretch the metaphor).
This is the form and function of debate, it is a crucible that boils away impurities of all manner and dialect (for anyone who may be thinking they've heard this one before, yes, I absolutely stole this from an episode of Star Trek).
If you feel you are being stopped from doing any of this, might I ask why and how you have been completely prevented and kept from expressing yourself?
Alright, here's my substantive criticism of how CRT affects various groups in practice (not doxxing my membership in any of these groups):
Elite coastal white: Absolutely not threatened. Beneficiary of the system and knows how to navigate all of the social codes.
Less elite or poor white: Takes the bullet that was aimed at the elite white.
Asian: Scores way too high on tests for their % of the population, and this is a problem for a worldview that cares about what % of college slots go to which races.
Professional class black or latin: Does great, a huge beneficiary of CRT activism.
Working class black or latin: Invisible and accidentally hurt despite good intentions. CRT proponents tried to pass a referendum legalizing racial discrimination in hiring in California this year, which would have helped professional class POC and probably hurt this class. Fortunately it failed.
EDIT: I removed some cattiness above. Not trying to pull the rug out from under you but I'm rate-limited and wanted to focus on my actual points. I don't think I'm a caricature of 'unwoke' person who never thought about or dealt with these things before.
We just went through a mini-version of it: the go-to move is that any criticism is immediately labelled as closet white supremacy.
Is that what you truly believe I did above? That I am labeling you, and think you to be a white supremacist?
If so then allow me to be clear for a moment: I have literally no way of knowing if you're a white supremacist. I have no way of knowing if you're not actually an armada of ants collectively working to actuate the keys of a mechanical keyboard, or a Boltzmann brain sending these messages through some strange and baffling form of quantum entanglement. What I am trying to expose is the reality that these are uncomfortable conversations; that's just intrinsic to this topic and the climate we are in.
This is fine. It is fine to admit being uncomfortable trying to process where we are, how we got here, and how we get out of it.
But one has to start by looking that beast in the face first in order to reckon with it. For some, that discomfort gets unwittingly channeled into anger and frustration, and they might not even realize why, but it can be focused and turned into knowledge and wisdom on the issues. One's just gotta start, like I said: see it for what it is, and work from there.
If you took that to be me associating you with white supremacy, I'll try to find other ways of seeking out clarity from people next time.
"an opportunity for a responsible manager to talk and rethink"
Mostly it's an opportunity to let the staffer know that such ultimatums are unacceptable, and that, taken literally by her own terms, she could be called out and let go. Which is what happened.
It's very doubtful that, if they wanted to keep her, they couldn't have found terms.
Surely the manager would have bent, indicated the wording was a little bit strong, and found a way forward.
It seems clear they were wavering, she crossed a line and offered them the path out and they took it.
If there were material issues being covered up, or material suppression of information, this story would look completely different - but there wasn't.
This was the right thing to do by Google in a tricky situation.
Maybe in certain situations. But as an engineer on more than a couple of occasions I have pushed back on safety concerns and I was adamant that certain things be fixed for the company's reputation and for safety reasons. I did go over my manager's head because he wouldn't listen. Should I have been fired? Ultimately on one occasion I went up 4 layers of org chart to a VP who finally had the sense to listen because my concern was going to cost the company a lot to fix. I didn't get fired and actually got a bonus and raise that year because I stood my ground. However I never threatened to resign, they would have had to fire me to get me out of there :)
That's not true at all. You can say you'll (go to the media/refuse to sign off on the regulatory paperwork/refuse to change the code that way and they don't have anyone else who can do it/refuse to change the password) or any of a number of other things.
I once worked on a federal IT contract where the project manager for the team was from another company. He was a dishonest, backstabbing snake, and it reached a point where I was quite fed up.
I told my company that I was fed up and the only way I would continue in that situation is if I was given a sizable raise because I wasn't paid enough to put up with him. They gave me the raise rather than having me walk. I worked there for several more years after that.
I never issued an ultimatum before or since. Maybe there are people issuing threats all the time, but it seems to me that people usually do that if they're frustrated but want to stay at the company. For IT folks with desirable skills it's far easier to just get another job.
I told my boss I won't work past 6pm and that I won't bring my work phone on vacation. I've had reports tell me they'd leave the team unless they can see a certain rate of career growth. No instant firings on either side.
As a manager, it's your job not to push people into a corner where they need to make an ultimatum. If your company is ethical, you should be able to navigate this.
Well, you're assuming she was pushed 'into a corner where they need to make an ultimatum'. All I know is that she used an ultimatum to challenge/corner her manager, and he decided to discontinue their working relationship.
She certainly felt that she was pushed into a corner where she had to make an ultimatum. As a manager, it's my job to make sure my people don't get into situations where they feel that threatened.
I disagree with your premise. I abide by Andy Grove's philosophy, which is that the manager's job is to optimize value production by a team. Sometimes the manager's objectives are in conflict with those of the subordinates, and there is no way to avoid the problem.
A leader is not a friend or an ally, they are just a leader; the leader can be friendly and supportive, but they are still just a leader.
Hey, just so you know, regardless of whether things work this way under /today's environment/ and/or whoever has said it as some "management wisdom", the words you have typed here represent some blatant power-tripping BS to anyone with half a brain.
I hope you've said it with the intention to make a point about how dysfunctional certain managers can become, rather than illustrating a belief. If you can't lead other human beings without having control over them, then please hang up your leadership hat and go do something else for a living.
Isn't the definition of "ultimatum" that no further compromise is possible? You can try or offer different alternatives, but if the other person is really at the ultimatum stage then you've both already lost.
> Isn't the definition of "ultimatum" that no further compromise is possible?
Given that, at least in Timnit's narrative, the email included a request to discuss the issue in person when she returned from vacation, I don't think that the "ultimatum" characterization is uncontroversially accurate for the immediate case.
I'm responding narrowly to the subthread here, which is about firing someone who gives an ultimatum in the abstract. I don't know enough about the specifics of Timnit/Google's situation to pass judgment. (I'm also an employee there, so doing so would be unwise and a potential violation of confidentiality rules if I did know anything.) To me I'm filing it under "Everybody sees through their emotions, and different people will have different perceptions of what actually happened and what people actually intended."
Your point seems to be that not being willing to compromise on a specific point means the employee is lost forever.
There are tons of issues I wouldn't compromise on, and I'd sooner leave the company than yield on them. Does that mean I'll be fired the very second these subjects become remotely relevant and I make clear where my boundaries are?
Well there are a few factors which make this situation different. If the ultimatum had been made in person, I'd think there might be room for negotiation, depending on the relationship between manager and subordinate.
Putting the ultimatum in e-mail form really raises the stakes, because there may be other people CC-ed or BCC-ed, and any response could later be weaponized.
If the relationship was already troubled, anything like an ultimatum is an opportunity for the manager to be rid of all their troubles.
The level of the threat also comes into play, and more severe threats increase the risk/tension. If the guardian had threatened to disown the child rather than send them to bed, we would read the situation differently.
I agree. An ultimatum given in an e-mail is more difficult to treat as a negotiation tactic, and it seems like there is much more to this story than we will ever know.
The way this was handled doesn't make any of the involved parties look good.
Her description, quoted from a Wired article which she has re-tweeted (which I interpret as an endorsement) is as follows:
"Tuesday Gebru emailed back offering a deal: If she received a full explanation of what happened, and the research team met with management to agree on a process for fair handling of future research, she would remove her name from the paper. If not, she would arrange to depart the company at a later date, leaving her free to publish the paper without the company’s affiliation."
I don't want to work for a manager who will never think: "Huh, this situation is serious enough for someone to make this kind of ultimatum. Who is right and why? Let me take a moment to think about it with an open mind and pick the most appropriate reaction regardless of what I previously thought."
Sometimes the person making an ultimatum is right, sometimes they're wrong. It shouldn't be as adversarial as viewing yielding as weak. Insisting on always "winning" is in my view the weak position.
Additionally, firing someone is not always legal in some countries, even after an ultimatum, assuming they pick the wording of their ultimatum carefully (e.g. "I may very well resign if/unless [desired condition]") to retain control over whether they will later finalize their conditional decision to resign.
As one example, in Quebec, employees who don't qualify as "senior management" and who have been employed at a company there for an uninterrupted period of at least 2 years cannot legally be fired without what the law considers good cause, period, not even if the company gives them a notice period or pay in lieu. Any alleged noncompliance or misconduct that falls short of the most extreme examples must be first dealt with a graduated process of progressively stronger discipline, and it must be possible for someone to recover from that instead of having the outcome of the process as a foregone conclusion. There is a government tribunal to which an aggrieved party can appeal if they aren't happy with the outcome, with the power to order remedies including back pay and even reinstatement.
Similar things are found in many European countries, though certainly not all.
Of course, ultimatums with more definitive wording like "I resign if/unless [condition that the listener has control over]" -- note the absence of hesitating words like "may very well" -- can irreversibly become an effective resignation worldwide, based on choice of the listener on whether to satisfy or reject the condition.
> but the bottom line is that if you yield, you've given up all control.
This doesn't seem logical to me. I don't doubt there are indeed scenarios where this is true, but as an absolute, this doesn't resemble my real world experience at all. It seems like kind of the opposite of how human interaction should work.
Indeed. The very use of words like "yield" or "control" betrays a fundamental weakness - managers so insecure that they can't ever change their minds in case someone figures out how mediocre they are. In fact the opposite is true: listening to your expert employees and allowing them to change your opinions is seen by them as a sign of strength.
I'm sorry, but that's no more true than when the ultimatum is the other way around.
I think that statement presumes some degree of unreasonableness. Honestly, I value having employees that have principles and clear boundaries, if for no other reason than I can rest assured that when I'm not observing/involved, those principles and clear boundaries are still there. Now, if those principles are, "I won't accept that paying me gives you any kind of authority over what I do", then you know that's not going to work out for anyone involved. However, if it is something like, "You can't pay me enough to do X", and I have no desire for them to do X, I'm really okay with that.
As a manager, your job is to manage people to get results, and if you are insecure enough about managing those people that you feel you have to enforce some sort of idiotic one size fits all policy then you shouldn't be a manager and should resign yourself, immediately.
This isn't (or wasn't) a review process for scholarship. Oodles of people within Google (even within Brain) have gone through this process and it seems to have always been the case that it just checks for things like PR problems, IP leaks, etc.
Further, she claims that initially she was not allowed to even see the contents of the criticisms, only that the paper needed to be withdrawn.
Let's say you were working on a feature. At the 11th hour, just before it hit production, you get an email telling you to revert everything and scrap the release. Apparently somebody in the company thought it had problems but they won't tell you the problems. Then after prying you do get to see the criticisms and they look like ordinary stuff that is easily addressed in code review rather than fundamental issues. They still won't tell you who made the critiques. Would you be upset?
Because this isn't peer review — or at least, it's not meant to be (per the top-level comment). That's the whole issue, really: there already exists a peer review process to ensure the paper's academic rigor, so why is Google hiding behind a claim of the necessity of anonymity for a corporate (not academic) process?
From my understanding, this paper had already passed peer review and been accepted. Google management then decided to block the publication using the IP review process.
Please go read the link first. Jeff clearly states that Google has a review protocol for journal submissions which requires a two-week internal review period.
Timnit shared the paper a day before the publication deadline, i.e., no time for internal review, and someone with a fat finger apparently approved it for submission without the required review.
That's not under dispute. What's under dispute is:
1) Is the review protocol that requires a two-week review period a peer review process intended to maintain scientific rigor, or an internal controls process intended to prevent unwanted disclosure of trade secrets, PII, etc.?
Repeating the comment at the very top of the thread:
> Maybe different teams are different, but on my previous team within Google AI, we thought the goal of google's pubapproval process was to ensure that internal company IP (eg. details about datasets, details about google compute infra) does not leak to the public, and maybe to shield Google from liability. Nothing more.
If it's not a scientific peer review process, arguments about why scientific peer review is generally anonymous are irrelevant, just like arguments about why, say, code review is generally not anonymous would also be irrelevant. It's a different kind of review process from both of those.
2) In practice, is the two-week review period actually expected / enforced? Other Googlers, including people in her organization, are saying that the two week requirement is a guideline, not a hard rule, and submissions on short notice are regularly accepted without complaint:
(I don't work for Google, but I work for another very IP-leak-sensitive employer that does ML stuff, and we have a two-week review period on publications. The two-week rule exists for the purpose of not causing last-minute work for people, but if you miss it, it's totally permissible to bug folks to get it approved, and if they do, it's not considered "someone with a fat finger." It certainly doesn't exist for the purpose of peer review - it's assumed that the venue you're submitting to will do review, and I think everyone understands that someone from your own employer isn't going to be a fair peer reviewer anyway. There is a "technical reviewer" of your choice, but basically they just make sure you're not embarrassing yourself and the company, and there's no requirement for how deeply they review. I think I've gone through the process twice and missed the deadlines both times.)
So, if this "rule" exists on paper, but only exists in practice for her, then this is the textbook definition of unfairness.
Papers differ. A short, straightforward, low-impact paper on a non-controversial topic could probably be reviewed at a glance or even rubberstamped. A long, complex, high-impact paper on a controversial topic (or worse, a paper with a fundamental conflict of interest) might take a long time and definitely can't be rubberstamped. The paper in question seems to fall under the latter category? It's like skipping a stop sign; 99 times you do it in your neighborhood with no one around and there are no consequences whatsoever, but that one time you do it downtown with a cop parked right around the corner, you get a ticket.
I think the "skipping a stop sign" analogy doesn't quite work because there was someone around - someone had to approve it, and furthermore, the fact of the late submission and shortened approval is recorded in the review system. If they wanted to tell people "Hey, in the future, don't do that," they could. There'd be more of an argument there if the common case was that, say, people ignored the system and submitted anyway and hoped nobody would notice.
(... Also, comparing this rule to our overpoliced society where everyone commits some sort of crime and the police just choose who they go after kind of reinforces my point about unfairness. Sure, it may have been strategically wrong for her to not do everything by the book, but if so, it's very interesting that the in-house ethicist has to play by all the rules to not get fired and the practitioners can safely skip them.)
Anyway, the culpability for rubber-stamping this paper is on the person who rubber-stamped it, given that short approvals are commonplace. Saying "You should have known that this approval didn't really count, so it's your fault for going through the normal process and not realizing it should have been abnormal" is nonsense. That's literally the job of the reviewer, and if the reviewer can't do that, someone else needs to fulfill that role. At worst, if they told her on day one "Your job is publishing high-impact papers with fundamental conflicts of interest with the company, so everything needs detailed review from X in addition to the usual process," that would be different. But they didn't. Better yet, they could have flagged her in the publication review system as needing extra review. There were lots of options available to Google if they weren't trying to make up rules after the fact to censor a researcher.
> Anyway, the culpability for rubber-stamping this paper is on the person who rubber-stamped it, given that short approvals are commonplace. Saying "You should have known that this approval didn't really count, so it's your fault for going through the normal process and not realizing it should have been abnormal" is nonsense. That's literally the job of the reviewer, and if the reviewer can't do that, someone else needs to fulfill that role.
This is key, and I don't see it being mentioned as much in other comments. It was approved.
This is essentially false. The author submitted the paper the day before the deadline; given that there was at least some form of standard review, the actions by Google could not be construed as a 'roadblock'.
There was no 'roadblocking', and the review was certainly not 'unexpected'.
The constant misrepresentation of the facts in this situation is harmful for those ostensibly wanting to do good.
"This is why understanding who raised these concerns is important."
Since there was no roadblock - this answer makes no sense.
The more likely answer is that the researcher wanted a named list of people she perceived to be her personal enemies.
"Failing to cite some recent research is rarely grounds for rejection."
There doesn't seem to be any reasonable cause for major concern in this whole issue - it seems the company raised some points and she could have managed them reasonably in professional terms.
I’ve personally submitted papers for this form of review on the same timeline that she did. No problems. So no, I don’t consider the method by which her paper was rejected to be normal practice.
Given that internal prepublication review at every company I've ever been with is merely there to avoid IP leakage, I find it very hard to believe that the feedback was given in good faith. It's like the oil industry claiming that a climate change paper isn't talking enough about the economic benefits of growing citrus in Alaska. Quite frankly, there's simply no reason to address the comments, because the problems with BERT exist with every LLM.
Google stepped in and changed the procedure for this paper because they were embarrassed by it and wanted to spike it.
Unless she lied in her first e-mail, which it doesn't seem like she did, the reason she made those demands is because they asked for a retraction of the paper without indicating why the paper should be retracted.
Asking for the identity of people that have the authority to ask for a withdrawal of your research without stating their issues with it seems understandable, if excessive.
Dean's statement is clear that it was approved before being submitted:
> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted. A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. [...] We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.
There is no statement at all of how to reconcile "approved for submission" with "didn’t meet our bar for publication", which probably means that there is no reconciliation, and the cancellation was done outside normal process.
I wonder if he is trying to say that there was a process error, it was approved without review (in error), she sent it out, and then they came back to her and said "wait, no, you can't publish that after all"
Big oil companies don't pump out damning studies on oil use, big tech companies won't pump out research damning the use of their tech.
If people have an expectation of Google to turn out academically pure research then I certainly respect the position and encourage it in reality. But thinking like that means life is going to contain a bunch of surprises that really shouldn't be surprising. Google is simply not going to employ people who they recognise as undermining the success of Google. It is not feasible to run a company that way; roughly speaking companies can choose between ruthlessness and bankruptcy. If you expect tolerance of radicals and debate, look to the universities.
I think the difference here is that the research may have shown that Google was unintentionally breaking the law, and after Google realized this, they used an existing review process in a way they don't normally use it to block publication.
The possibly shady part is that they could be suppressing evidence that they broke the law, but, like you said, they can decide how to run their own business. I'm not even sure if the researcher would be a whistleblower if they didn't intend to report something illegal.
To make matters worse, in this case at least, the law or laws they may be breaking were established to protect a class of people the researcher is a member of.
> If you expect tolerance of radicals and debate, look to the universities.
Universities used to tolerate radicals and debate. But going by the copious media reports of the last few years, that doesn't seem to be how they operate any more.
Yeah, but then they should get called out for wanting the good PR of having an AI ethics team, but without the headaches of having their poor ethical standards exposed.
Companies are given a life under the premise they provide a social good. They get a charter from the government. They are what's called a legal fiction. Government should demand AI ethics instead of letting the companies self regulate. (after all, most AI is developed with gov funds)
I remember when Google made a whole big deal about their "AI Ethics Board" (or something along those lines), and then not even a year later they reshuffled it because those people were too critical of the company's practices.
And then when there was backlash they "promised to do better" and Sundar Pichai came out with some "principles" that the company would follow for AI.
Another 1-2 years later and here we are again - this just proves that whatever "AI Ethics Board" they might set up will end up being a sham, because they'd never allow that board to stop them from using AI however they like if it's in the interest of the company's profit growth.
If we want real AI oversight we need to demand it from outside nonprofits or even government agencies (why not both?!) - and there should be zero affiliation between the company being monitored and those organizations/agencies.
At the end of the day the incentives for large companies are always monetary.
It might be that they follow ethics because appearing to do so has monetary public-relations value. It always comes down to that, and for publicly traded companies, setting up things like an "AI Ethics Board" is always for show, since the incentives don't allow for anything else.
At the end of the day someone's compensation depends on these things, and you can't be hurting the bottom line.
> Jeff gives the impression that they demanded retraction (!) because they wanted Timnit and her coauthors to soften their critique. The more I read about this, the worse it looks.
I get the impression that she wrote a hit piece on Google and published it under Google's name. For me, it's correct that they demanded a retraction. It's simply unprofessional to critique your company for something while not mentioning the work they're doing to combat that.
> It's simply unprofessional to critique your company for something while not mentioning the work they're doing to combat that.
It would seem deeply problematic for an AI ethics researcher to have the expectation that when they critique their own employer, they should mention all their work to ameliorate bias or ethics problems, but to not have a similar expectation when they're critiquing other companies. Is the point of having an ethics researcher to expand our understanding of ethical issues, or merely to aid in PR?
If a university administrator were to attempt to tell a PI not to submit a paper critical of work from another lab at the same institution, I think that would be judged as a shocking overreach. But for Google, we're not even in agreement that this behavior is a problem. It's unfortunate that we expect so little from corporations, even if those corporations are some of the main drivers of research in a field.
That's not the assertion here. A good researcher should be aware of the current state of the field, and thus mention current efforts to solve a problem when discussing that problem. Regardless of who's doing the work.
I agree with you that a researcher should be aware of current work relevant to the problem under discussion. Jeff's quoted email states that part of the issue was that Timnit's paper "ignored too much relevant research", without specifically saying whether the unmentioned relevant research was done by Google.
But the parent comment to which I responded, and which I quoted, specifically said the problem was to criticize google while working for google, and seemed to approve that this should be judged unacceptable.
> I get the impression that she wrote a hit piece on Google and published with Google's name. For me, it's correct they demand a retraction.
The "regardless of who's doing the work" part is key, and not all participants in this conversation are on the same page about it.
It's not "runaway activism" if you hire ethics researchers who find ethics problems.
Reasons for not citing research, especially recent research, range from lack of relevance (even though environmental improvements could have been made, they were not actually applied, so the real impact wasn't lessened by them at all) to simply not having known about it. The correct reviewer response to this would be an "accept with corrections" or "revise and resubmit"; retraction is overboard. Moreover, that's the role of a conference reviewer, not the employer. Once your employer starts interfering with what you can and can't publish, it's time to find a new affiliation indeed.
> It's not "runaway activism" if you hire ethics researchers who find ethics problems.
The fundamental difference between activism and research is that activism sets the agenda ahead of time while research explores the knowledge space and reports findings. One helps us incrementally make better sense of the world, the other wants us to narrow on particular facts while omitting other relevant facts in the name of advancing a cause.
> Reasons for not citing research, especially recent research, range from lack of relevance ... to simply not having known about it
The omitted research is clearly relevant. Deciding what is relevant and what is not is precisely what narrative warfare is. If they did not know about the adjacent research, then they would simply be incompetent researchers (which I highly doubt is the case).
> The correct reviewer response to this would be an "accept with corrections" to "revise and resubmit"; retraction is overboard.
There is no retraction because there was no external publication. Jeff Dean states that the reviewer response you describe was given, but was ignored by the approver.
> Once your employer starts interfering with what you can and can't publish, it's time to find a new affiliation indeed.
Corporate researchers are still employees and are bound by a job description. Independent of the content of the research, it is also entirely within the rights of the employer to set a certain bar of quality being cleared. In this case it seems like Google didn't want affiliation with this paper, not the other way around.
You seem to have an exceptionally narrow view of research; notably, that you can't start with a thing you want to prove, which is in fact step 2 of the elementary-school scientific method. You disagree with the conclusions, so like Google, you have retroactively declared the research incompetent. You seem to think this is within their rights, but this renders any future research from them irrevocably tainted -- from now on, it's no more than Google PR.
> notably, that you can't start with a thing you want to prove, which is in fact step 2 of the elementary-school scientific method.
On the contrary, I totally agree with you on this: a researcher needs to pick a particular part of the combinatorially explosive knowledge space to explore, and in that sense they get to be opinionated about which hypothesis they want to prove. What they can't do, however, is ignore opposing research that conflicts with their propositions. This is precisely what Jeff Dean is talking about in his second letter: you need opponent processing to overcome self-deception and bullshitting.
You can't have opponent processing when you omit relevant research, try to steamroll the review process, throw a tantrum when your paper is found lacking, and demand names in order to further your agenda through social engineering.
It is not on me to prove why I disagree with the conclusions; it is on the paper to prove that its assumptions and methods were sound to begin with, if its conclusions are to be taken seriously. And they were not.
> but this renders any future research from them irrevocably tainted -- from now on, it's no more than Google PR.
On the contrary, this move increases trust in Google research and it would have lessened if they were to buckle under activist strong-arming.
If folks think this was a sign of broken research machinery, they are free to ignore all future Google research, at their own risk of competitive disadvantage.
Ignoring future Google research in ethics is low-risk, as top ethics talent will certainly avoid working for Google and prefer academic freedom elsewhere (where reviewers are external, to avoid conflicts of interest).
> as top ethics talent will certainly avoid working for Google, and prefer academic freedom elsewhere
I wouldn't be that sure. I know the "headline narrative" is invested in painting Google as evil (and they don't tend to be that wrong in many instances), but the actual sentiment among the general talent pool is very divided in this instance. There is a sizeable percentage who are relieved to see activist pressure being resisted in a corporation and would be inclined to make a pick on that basis. We all know corporate pressure is not the only threat to academic/intellectual freedom.
"The omitted research is clearly relevant" -- how do you know? Why do you get to decide?
I wrote a paper recently in which I omitted most tangentially-related papers in mathematical physics, as they would not be mathematically accessible to the audience in question and also do not address the questions posed in the paper. A mathematical physicist wrote to me and was grumpy about it. Fine, I added his name one more time to make him happy. That's the reality of research papers.
It's clear that Google didn't want to be affiliated with this paper. And it's clear that it's time for Gebru to find a place with intellectual freedom, so her work can be judged on its merits.
> It's not "runaway activism" if you hire ethics researchers who find ethics problems.
Agreed, but there's obviously a difference of perspective about what transpired here and I don't think any of us knows with certainty what the truth is. Finding ethics problems is grand and all, but framing a narrative that misrepresents those problems is highly problematic, particularly if they are in a role that makes them the leading ethical voice of the company.
For me, it's pretty unprofessional (and cowardly) to hire a well-respected ethics researcher, write some PR pieces about how the company takes ethical actions seriously, and then tell her that her publications have to follow the party line and cannot overly criticize the company.
Having said that, if Jeff were to make public the paper, criticisms of the paper, and improvements made to address the problems described in the paper, that could go a long way towards clearing the air.
The past three years of work in natural language processing have been characterized by the development and deployment of ever larger language models, especially for English. GPT-2, GPT-3, BERT and its variants have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pre-trained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We end with recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.
I have plenty of experience in Natural Language Processing (NLP), but I am not an expert in ethics and bias – although I have read my fair share of papers on the topic. To me, the abstract comes across as modest, very reasonable, and exploring questions highly relevant to the community as a whole. Sure, if I were to review it I would be picky about any strong empirical claims, since I suspect it would be difficult to conclusively demonstrate some aspects of what they hint at in the abstract. But as a position paper it looks better than plenty of work already published at top-tier NLP venues, and I find it hard to believe it could not be accepted on academic merits.
Still, to echo the parent, does anyone have the paper in its entirety?
A second hand account from a VentureBeat journalist is the best I could find [1]. As a researcher with more than a decade of experience in Natural Language Processing, what is described by the journalist in regards to the paper content seems to be non-controversial and nothing out of the ordinary from this kind of work. If anyone could find an actual leak I would be more than excited to have a look at it using my own eyes rather than filtered through someone else’s.
The communication doesn't give that impression; instead it says that the paper makes claims that ignore significant and credible challenges to those claims. Dean said that these factors would need to be addressed, not agreed with.
Publishing a transparently one-sided paper in Google's name would be a problem, not because of the side it picks, but because it suggests the researchers are too ideologically motivated to see the problem clearly.
Ironically, it indicates systemic bias on the part of the researchers who are explicitly trying to eliminate systemic bias. That's just a bit too relevant to ignore.
If that is indeed the reason for the demand for retraction, why didn't they state it up front in the meeting where they told Timnit she needed to retract the paper or remove her name? Instead, they initially refused to tell her the reasons for the demand.
They didn't give her a chance to address those factors at first.
Later they had a manager read her the confidential feedback on the paper in question, but still didn't let her read it herself.
If that feedback was only saying that the paper lacked relevant new context and advancements, why were they being so cagey about it? Something doesn't smell right about that.
Usually those are "accept with minor revisions" or "revise and resubmit". Rarely are they grounds for complete rejection. This is extra true for internal review, since the actual conference review process would provide an additional layer to ensure that the scholarship was strong.
"Jeff gives the impression that they demanded retraction (!) because they wanted Timnit and her coauthors to soften their critique. The more I read about this, the worse it looks."
Representing a more truthful reality is not 'softening'.
It's only 'softening' for those who have an already accepted, extremist view, and for whom any evidence to the contrary doesn't help their arguments.
While I was initially sympathetic to the author, the more I read, the more I came to hold completely the opposite view.
Google isn't a publicly funded academic institution. Whatever they are doing, publishing in particular, is part of the business/PR. So if management sees something that is not good for business, it is reasonable for them to decide not to do it. If I were a shareholder, I can see how I might question why a person being paid $1M+/year (my understanding is that this is the minimum a manager in AI at Google would be making) is publicly disparaging Google.
Even more, it sounds like Google didn't originally ask for retraction; they just asked her to take into account the newer research contradicting the paper - the thing that any researcher valuing integrity over agenda wouldn't refuse.
If somebody wants to do that research and publishing, they just have to find another source of funding, I guess.
Anyway, the firing wasn't over the paper, the firing was over the unacceptably unprofessional reaction to it.
> If i were a shareholder i can see how i may have questioned why a person being paid $1M+/year (my understanding this is minimum what a manager in AI at Google would be making) for publicly disparaging Google.
Salary aside (I do doubt she earned $1M+/year; my guess is more in the ballpark of $300k~$500k, and either way it's not really denting Google's finances), you are not wrong, but it's also worth understanding that here we're entering the realm of the notion that companies can (and for many reasons should) be about more than maximizing shareholder value.
Also, if I'm being completely honest, from a PR perspective this could be worse than Timnit's paper might've been just given how public it has become and the people involved. People internally are perhaps more comfortable having that paper not be published and not having Timnit in their ranks, but as far as PR for Google goes this isn't great.
Yes, this is absolutely far worse than just letting the paper be published. AI ethics papers are not exactly the kind of material that gets a lot of conversation at the best of times, outside that world, but Google firing a black woman for speaking up is the kind of thing that definitely does get talked about (as we can see here).
But that aside, Google should want this kind of paper published. They absolutely should want to know and discuss every possible weakness in the ethics of their approach to AI - Google has a scale of influence so large that how they act in areas like AI trickles down to many other organisations. To me, that gives them a responsibility to make it as ethical as is reasonably possible, and that will only happen if experts are allowed to speak freely.
One can make short-term arguments about how that hurts them, but the long-term damage of getting massive AI systems wrong, will be far, far worse.
Even from the narrow view that in-house academic work is part of the PR budget (which I disagree with), Google has made a huge mistake here. This is a giant PR black eye for them. If the game is to pretend to have in-house ethical checks (say, to avoid actual regulation), then they need to at least generate the appearance of independence. The correct sinister move here would have been either to keep her on staff and give her the runaround, or to manage her out the door in a way that left her not particularly angry and got her to sign a non-disparagement agreement.
But as others point out, it's entirely in Google's long-term interests to have internal critics who prod Google and the rest of the industry toward better long-term behavior. So I think it makes good sense for them to have independent academics who occasionally make people uncomfortable.
From a certain narrow, selfish perspective it's reasonable for Google to not want to have an AI ethics department placing a check on their leading edge research at all. Fortunately, we don't live in a world where corporations are the ones to determine right from wrong with total impunity.
> AI ethics department placing a check on their leading edge research
That reminds me of how in the USSR each non-miniscule factory, organization, etc. had "the department #1" - an ideological check and control department which, at sufficiently large/important organizations, even included KGB officers.
You have identified a similarity between two situations, but it is not a similarity that matters. The distinction that matters is one of normativity, and on that measure there is clearly no equivalence to be drawn here.
Every time it is the same: somebody gets the power to enforce the prevalent ideology of the time and place, they happily do it under the premise that it is the most right and good ideology, and, by being such visibly pious followers and strict enforcers, these self-declared occupants of the high moral ground start to feel and behave as if they were more entitled and better than others. They hijack the cause and frame any disagreement with, or critique of, them as a heretical attack on the cause. The main point here is that once something becomes an ideology, "right", "good", etc. gradually lose any meaning in that context, and the only thing which really continues to matter, and grows more and more, is the enforcement of the ideology.
You are right that there have been many iterations of normative standards, but that does not imply that all situations, ideologies, positions and so on are equally correct. It does not mean that we should stop trying to do better, nor that we have made no progress through these efforts toward a better world.
No, they're describing a particular scenario where the Political Officers of those norms wind up being a sick joke of careerism and weaponized ideology.
The Soviet Union was about equality for workers. Who could be against that?
I should have been more precise. The phenomenon the other poster was describing is independent of a particular norm or ideology. Talk of evolving norms misses their point.
I see. Yes, any norm or ideology can and often does grow cancerous and counterproductive. What I mean to do is cancel one implicature instantiated by that statement. It's not a reason to be a nihilist, or to stop holding things accountable in a normative sense, in this case as justification for giving Google unchecked free rein of AI development. That the Soviet Union preached and botched "equality for workers" doesn't make it any less important an issue, and indeed we could see every failure toward that end as progress, as in "finding 10000 ways that don't work".
In most cases, yes. In this case, because the paper was about bias in Google's AI models, it might not be just a business decision because the racial bias described in that paper might result in a disparate impact on users, which could be in violation of state or federal law.
1. There exist laws to prevent discrimination against people based on protected attributes
2. ML models make predictions based on attributes without interpretability (it's not possible to prove that protected attributes are not factoring into model predictions)
3. Empirical observation that a model proxies a protected property exposes corporation to liability for regulatory non compliance
4. Therefore any study that could expose bias of a model used in production is to be roadblocked or prevented ...
To combat chains of reasoning like the one above, it seems regulators are going to need to update the rules with third-party audits and an incentive structure that encourages self-regulation and de-risks self-detection and self-reporting of non-intentional violations. Ideally, Google should not be put in a position where it is incentivized to police its own AI ethics research to ensure that such research doesn't expose its own illegal/non-compliant activity.
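Point 2 in the chain above (a model proxying a protected attribute through correlated features) can be sketched with a toy simulation. Everything here is made up for illustration; the feature names and numbers have nothing to do with any real Google system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- deliberately excluded from the model's inputs.
protected = rng.integers(0, 2, n)

# A "neutral" feature that happens to correlate with the protected
# attribute (think of a coarse location code).
proxy = protected + rng.normal(0.0, 0.5, n)

# The "model" is just a threshold on the proxy feature alone;
# it never sees the protected attribute.
predictions = proxy > 0.5

# Yet its positive rate differs sharply between the two groups,
# and nothing in the model's inputs reveals why.
rate_0 = predictions[protected == 0].mean()
rate_1 = predictions[protected == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")  # roughly 0.16
print(f"positive rate, group 1: {rate_1:.2f}")  # roughly 0.84
```

The point is only that excluding the protected attribute from the inputs does not make the disparity go away, which is why point 3's liability argument can apply even to models that never explicitly consume protected data.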
A company can still protect itself by fixing the model and delaying publication of a study about its bias until after the statute of limitations has expired.
In this case, there were recent changes to the statute of limitations for CA laws that extended it from a year to 3 years, which could be why this whole process seems weird.
Well, imagine a manager in your company publishing a paper stating that your company's products are probably violating state or federal laws. All that without raising the issue up the proper management chain, without working through the correct procedure with the compliance and legal departments, and without going to law enforcement if the violation still continues after all that.
At least when I was there, my papers were getting thoroughly reviewed and often had to make some adjustments before getting approval. Never occurred to me to make any demands from the reviewers or threaten to resign if my paper doesn't get immediately and unconditionally approved. Seems like she's asking for preferential treatment.
Do you want to know what's interesting? I read a lot of computer science research, particularly what comes out of Google. It's clear to me that details are left out of specific papers, especially how things are done in subsystems. But, like a jigsaw puzzle, I discovered that many papers are actually descriptions of computing systems and algorithms that interact. If you read between the lines and squint your eyes, you can get a much bigger picture of internal Google AI systems than you guys think you can.
This response really seems like gaslighting. He doesn't address her concerns and glosses over whether she was held to a different standard than others at GR.
He was also extremely vague, perhaps intentionally, about what the issue actually was. His sentence about when the paper was submitted and approved and all that is impossible to parse and make sense of who did what and when.
Of course he was vague. This just happened, tensions are high, and no doubt Timnit is talking to an employment lawyer to find out what both parties' right and obligations are, and I'm sure Google's lawyers are also getting all their ducks in a row.
This is spin at best, gaslighting at worst. We'll never get the full story (and should we? it is an internal company matter made public, after all).
>We'll never get the full story (and should we? it is an internal company matter made public, after all)
I'm not really sure what the point of an 'ethical AI' department is when there's no transparency or accountability facing the public. If it can be cancelled internally at any point that it threatens the company, you've basically recreated some kind of Soviet ministry of truth.
I think you misunderstand. This is an HR matter, not an "Ethical AI" matter.
The official outputs and products of that department should (hopefully) be public and shared, I 100% agree with you. But that's not what this is about.
This matter in particular is an internal employee/employer dispute and dismissal, and is only public because of the high profile of the persons involved.
And what I meant that we'll never get the full story is that these kinds of situations are always more complicated than they appear. We are only seeing the tip of the iceberg, and are not privy to the history that led to this moment.
These kinds of things don't just happen out of nowhere.
If I had to guess who's "more right" here, I'd side with Timnit, personally.. but again I don't have all the facts, so it's just a gut feeling based on what I know about how large enterprises work.
The point is to do foundational research, and to help Google ensure that its AI development complies with its own ethics. Google did try to set up an AI ethics board for accountability to the public, but it fell apart, because many segments of the public have ethical views which were seen as unacceptable. (https://www.cnbc.com/2019/04/04/google-cancels-controversial...)
Right -- he once again talks about "accepting" her resignation, when it reeeeeally just looks like they fired her. At the very least, she certainly feels like she was fired; why is that not mentioned at all? Even just, I don't know, "sorry we were abrupt?"
Not really. If your girlfriend told you she was planning on breaking up with you after her birthday, would you stay with her until she did it or would you end it immediately?
Sure, but sending out an e-mail accusing your colleagues of racism, exhorting them to stop working, and talking about potential lawsuits isn't. There's no way that Google (or any company) would continue to employ her after that.
Perhaps it's just me as a URM, but her email resonated with me, especially this part. I see this position of calling what she did "exhorting them to stop working" often, but this isn't really what she did.
I too care about DEI, but after putting lots of time and effort into it, I saw how futile the effort was in my organization because there was no real buy-in from higher-ups. I was putting in a lot of unrewarded volunteer work helping with "inclusivity" and talking about the problems/solutions, but that was all it ever was for the people we needed action from: "talk". I eventually decided to dial it back and stick to my actual paid job of programming, and although I didn't send an email telling other people their effort was being wasted, if someone came and asked me, I'd tell them not to bother. There are other places, usually further removed from the company and its easily PR-able channels, where the effort is better spent.
In any case, I hope you realize your comment is full of hyperbole and the people who think she isn't in the wrong, myself included, aren't being unreasonable. We're smart people too.
> There's no way that Google (or any company) would continue to employ her after that
I agree. None of this comes as a surprise and I'm sure she expected it too; that doesn't mean Google is in the moral high ground.
Citation needed, for sure. I certainly believe that most companies don't particularly value honesty, especially when pointing out managerial flaws. But looking at what she wrote with a manager's eyes, I don't see anything I'd fire her for. But like aylmao writes, I see it more as an impassioned and probably valid critique of DEI work that is more posture than substance. What I see is somebody who really cares about the problem, and who could be channeled into productive work as long as that work truly has impact.
I also suspect that if she'd written the equivalently passionate comment about a technical failure or bad product choices, people here would be cheering her on. Especially if she were a he.
Just had a corporate counsel seminar on this: Under federal and most state harassment/discrimination laws the company actually can’t fire her for any of those “protected” activities unless the accusations of racism are shown to be untrue and made in bad faith.
I personally do not have enough info to decide who is telling the truth in this case.
Judging from most of her writing online (which is all pretty assertive) I think it's far more likely she said "if you don't fix this, I can't continue working here".
And that's both a negotiating position and a resignation.
Something can't be both. A resignation is an unconditional desire to leave. "If you don't fix this, I can't continue working here", though, is a desire to stay.
It sounds like she said "I demand that you do X Y Z or I must resign" and they said "Very well we regrettably accept your resignation. No backsies." and she was like "You're firing me??!"
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
I find it unlikely Dean would lie about that, not least because the email would be easy to find.
Now, were the actions leading up to that effectively a firing, ie Timnit would have been unable to effectively continue in her role? Quite possibly.
And I'm sure that her lawsuit will allege bias, and will include a demand for exactly that information so that she can prove racial discrimination against her.
Which means that Google is likely to have to produce all of the documents that they didn't want to produce.
Having your research -- the very rationale for your employment -- squashed suddenly by execs without explanation, and through a highly unusual procedure, should make you question whether you are able to effectively continue in your role, or whether you're simply window dressing.
It looks like Google's AI Ethics team is meant to be greenwashing.
Every paper we submitted went through a technical review as well as legal and IP reviews. They were along the lines of cite this, cite that, run these experiments etc.
What's different in her case is that you don't see the names of the people reviewing. Playing devil's advocate: she MIGHT have a pattern of aggressively attacking people who reviewed her work before, so they might have made the reviewers anonymous this time.
They might have enough of a PR budget to make the Google version of the story stick. But it's concerning that, if what you say is true, they are hoping to make that work by leveraging the public's ignorance of how the Google-specific process works. It's also not the smartest move, since Google is important enough, and public goodwill toward tech is low enough, that journalists will have a field day looking for evidence of double standards or a cover-up. And they're not making it too difficult for the journalists if that evidence is found in the top comment on a Hacker News post.
This really seems like whistling past the graveyard on Google's part. There's too much meat to the story for them to do much more than obfuscate. The intersection of race and gender, the ethical implications of big tech, the capitalistic pursuit of innovation at the expense of individual freedom: all of these look bad for Google.
We know large language models are super important to Google, and there are lots of competitors.
If they approved the paper, the message would be "Google thinks language models are a waste of resources and racist". There would be no academic debate on this topic, as it has been framed as woke and published by a militant activist, so any disagreement would be racist (see prior interactions between this researcher and other researchers [1]).
That's why the standard process of publishing, peer review, academic critique, etc. would not work.
Why would their researchers working on language models stay, when they can go to Facebook, OpenAI, etc.? Why would new researchers join?
Academic debate is, in fact, done through conferences and journals. You saying there can be no debate is a strawman position with no basis in reality. The idea that standard rigor cannot be applied to ethics research is absurd, and seems to insinuate that the entire field is absent discipline.
The proper response to her position would be to publish a response or critique. Attacking her entire field does nothing to further the conversation.
The statement that "academic debate is, in fact, done through conferences and journals" is not strictly true, especially given that a lot of reviews at the more popular conferences are very hit and miss. You can submit the same paper to the same conference multiple times and get wildly different opinions on it.
The variation in reviewers' responses is often due to their lack of knowledge and unfamiliarity with the problem. Take a look at the recent reviews for some of the more popular conferences on OpenReview.net. Most of the reviews don't have any substance and are often vague/generic.
I'd take the reviews from peers that I trust and are aware of my work more seriously than reviewers of conferences.
I have to assume both sides here are adults that can deal with criticism of their chosen discipline without immediately resigning, or not joining a specific company over it.
But in this scenario, shielding Google from liability is actually a primary concern given that Timnit is discussing ethics/bias. A paper on say novel transformer architecture, the lottery ticket hypothesis in a new setting, a new RL benchmark suite, etc is not going to expose Google to legal risk the way ethical AI research often can.
This. I have been arguing the unpopular opinion that most AI ethics work in corporate settings is not designed to empower real research. It was a matter of time before an actual researcher with an ethical compass was removed unceremoniously. Anyone in an AI ethics team at a large company — you need to know exactly what your job means to the company, because it isn’t safe.
Then what's the point of hiring her and people like her to work for Google in the first place? So that Google could claim that they have Ethical AI researchers and Google's AI research is indeed ethical?
How would publishing a paper open a 3rd party up to legal risk? Research papers aren't laws, and it is chronologically impossible for a research paper to influence laws already on the books.
I can imagine a scenario where a politician who wants to pick a fight with Google uses some of the unflattering findings in the published work as supporting material for why Google needs to be regulated/fined/etc: "Google does {bad thing}. Look at this research report from Google researchers! They admit to doing {bad thing}!"
A paper from Google saying that Google knows that its systems discriminate against minority groups can open Google up to liability for a class action lawsuit from said minority groups against Google. And the fact that Google knows increases the damages that they can look for.
The same paper from outside of Google also creates liability, but now the argument for increased damages becomes about whether Google knew.
That would be more of a journalistic paper than a research paper then. Timnit's research, at least in the past, is along the lines of "Hey, this <thing you thought benign> is not actually benign"
That confused me as well -- where I work we have a legal dept. approval for IP issues, and that's it. Academic review doesn't make sense in that context or time frame.
"Pubapproval hasn't been used to silence uncomfortable minority viewpoints until now."
This is sad gaslighting of a reasonable concern the team had.
Having to endure some external review for what could otherwise be sensitive material.
The inability of the SJW crowd to work reasonably within very reasonable terms, only to then resort to aggressive tactics such as 'demand the names and opinions of everyone on the board' and then publicly misrepresent the situation, is going to lose you a lot of favour.
Every time I read one of these stories I immediately feel sympathetic to the individual, but then upon learning more, I feel duped and maligned for having been effectively lied to.
The doors are wide open for progress, those who take it to micro-totalitarian lengths are not doing anyone any favours.
I don't think the approval process is being used to enforce rigor in general, the (claimed) problem is the paper lacks rigor specifically in regards to claims about google's behavior.
Publishing a paper with a lack of rigor about some obscure mathematical technique isn't a problem for google (beyond some possible but unlikely mild reputation damage). Publishing a paper with a lack of rigor that says google is doing unethical things, when those things are questionably accurate, that is something google would (and should) have a problem with.
Whether the paper actually lacks rigor in a relevant way is not something I can comment on.
Why wouldn't you want to weed out bad papers as close to the source as possible to save company embarrassment and external people's time? If you see something wrong during a review why not push it back to the author before it does rounds outside the company? That would seem like a very bad practice to me.
Yes it is disingenuous for Dean to pretend that this was a normal process applied to a normal situation. Clearly whatever happened on that team, this latest round was not the beginning or even most important part. Gebru's letter mentions her threatening to sue Google previously, for instance. [1] The discussion about rigour in a conference paper or internal review is obviously a pretense.
When I was in academia, it was not unusual for the referees to reject a paper for this reason. Of course, you are informed of that and always have the option to rewrite the paper to include that information.
In some cases, though, it's not simply a matter of listing it as other work in the Intro - you may need to incorporate it into your models, etc.
That's no justification for the rationale that critiques or negative results in general are not paper-worthy, or that they do not advance science in less short-sighted ways.
Beyond reviewers or editors asking you to make changes like this (for example, "so-and-so just published <blah>, which means that your sentence about <blah> is obsolete"), we're talking about research coming from a corporation. If one part of the company is trashing another part of the company, and it's based on stale results, asking to have the paper updated to include the latest results is reasonable.
If you are actually an academic researcher in an academic institution, and are exposed to a large scope of the dealings of the community, then you might find that professional academia is as political as corporations, if not more so...
This reads to me like Google felt that the paper painted some of their other technologies in a poor light, and wanted to insert language that made them look better. The way he describes their objections, they strike me as the sort of thing that is routinely addressed in the camera-ready version of papers by adding a few lines to the related work section. Not the sort of thing that a conference reviewer or an internal reviewer would reject a paper over.
Previously, we only had one side of this story. But if this is Dean's best spin on Google's side of the story, I'm very tempted to conclude they're in the wrong here. Obviously I don't have all the information, but the information I do have feels consistent with the idea that someone important at Google didn't like Gebru's paper for corporate-political (meaning making Google look good, as opposed to political-political) reasons, they tried to get Gebru to play ball, she refused, and now they have to back-project a justification in the name of "scientific integrity".
Unfortunately, I think this is a story where most people's opinions about who's in the right will be more informed by their previous opinions about Gebru and Dean than the narrower question of what happened with this particular paper. I'm probably even guilty of that to some extent myself, given that I'm a fan of some of Gebru's previous work.
> they strike me as the sort of thing that is routinely addressed in the camera-ready version of papers by adding a few lines to the related work section
What I don't understand is why in the discussion nobody proposed amending the paper rather than withdrawing it. If Dean's issue was it didn't cite papers X,Y and Z, rather than demand a withdrawal, why didn't he just demand "I want you to amend the paper to add cites to X,Y,Z". And then, if Gebru and her coauthors were willing to add those cites, that would resolve it.
Indeed, from what I understand, "you should add a cite to X" is common peer review feedback, and a lot of papers get citations added due to requests from peer reviewers. So this isn't hugely different from that scenario.
It would seem that withdrawal over this issue would only make sense if Gebru and her coauthors refused to amend the paper to add the requested citations, but I haven't heard anything saying that she did refuse to do so. It isn't clear if the alternative solution of amending rather than withdrawing was ever brought up in the discussion by either party.
Not that I'm a researcher or anything, but if I was, and a superior told me "we need you to withdraw your paper because it doesn't cite X,Y,Z", my immediate response would be "How about I add the citations you are requesting and resubmit it with those additions?"
Jeff's document says she submitted without asking for approval, so the request was to withdraw that unapproved submission. It is reasonable to interpret that as permission to revise and resubmit. She seems to have had her heart set on this particular conference and submission deadline.
That’s not what it says, though. It says her paper was approved, she submitted, and then a reviewer had a complaint. He seems to be deliberately vague here to make it seem as if she was acting without permission, but as I read it she had a valid reason to submit. This lines up with other people in the thread saying submitting hours before the deadline was risky, but common.
It's just weird because I'm pretty sure the request to withdraw was made for reasons of the scholarly quality of the work, not for corporate reasons like protecting company information; judging scholarly quality, from what I understand, is the job of the conference reviewers, not Google. Especially because other people who work in her department have gone on record saying that withdrawal requests purely over paper quality never happen, and that internal review is purely for corporate-secrecy reasons.
None of this is adding up to the process issue Dean is claiming.
> What I don't understand is why in the discussion nobody proposed amending the paper rather than withdrawing it. If Dean's issue was it didn't cite papers X,Y and Z, rather than demand a withdrawal, why didn't he just demand "I want you to amend the paper to add cites to X,Y,Z". And then, if Gebru and her coauthors were willing to add those cites, that would resolve it.
Unless corporate tells you to kill the paper and you need something that resembles legal cover.
More specifically, she stated in her original email [0]:
>Have you ever heard of someone getting “feedback” on a paper through a privileged and confidential document to HR? Does that sound like a standard procedure to you or does it just happen to people like me who are constantly dehumanized?
So she did get feedback of some kind, though my reading is that it was probably vague legalese that she reasonably deemed insufficient.
In Timnit's account, she said she asked for specific feedback so it could be addressed and the paper published; this was denied to her and is part of her complaint over which she resigned.
One thing I don't understand about this situation is which of the following was she denied:
(1) the substance of the feedback and actionable specifics (e.g. "you didn't cite X")
(2) the actual text of the feedback (if it was given in written form)
If she was denied (1), I agree that is grossly unfair. If she was denied (2) but granted (1), I don't think that is so unreasonable. If feedback is anonymous, sharing its exact text can give away who gave it (you can often work out who wrote something just from the style of language the author used, especially if these are people you know and work closely with.)
If she was given actionable specifics ("add a cite to X"), then knowing who it came from and the exact text of it is irrelevant and I don't think she has a right to it. If she was denied actionable specifics, that is grossly unfair to her. I think one difficulty is that her account makes it sound like she was denied that, Dean's makes it sound like she wasn't, I wasn't there so I don't know whose account is more accurate.
It sounds like, after pressing the point, she was allowed to have her manager read the feedback to her, without sharing the author or process by which feedback was solicited. So, not provided in written form, but the substance would be received.
However, she was also told that the paper was to be retracted and that she wouldn't be allowed to address the feedback and keep the paper published. So the form in which the feedback was provided was immaterial.
For example, suppose that Timnit was told to withdraw after an incomplete review indicated that they didn't want to publish. If she asked at that time for a complete list of things to fix, nobody could give it to her because the review wasn't finished. Nor could they produce it in a timely manner, because it takes time to do the review.
Later the detailed feedback is ready. And if she wanted to submit the paper elsewhere, she could have fixed it and done so. But by then things have blown up and it is too late.
I've seen these review comments before and I don't know if we can say from our position whether it's corrections that can be made for camera ready or not.
If a paper analyzes only older work that has been surpassed (work Dean deemed inefficient), and not the improvements from recent work, a reanalysis might not be as favorable to the paper's conclusions, which would make the paper moot.
Or take the second point, about bias in language models: it sounds like these issues are mitigated in recent research, which means people are already aware of the problem and have solved the bulk of the issue described in the paper.
But certainly it's possible that the paper's contributions stand strong even after accounting for the recent work that Dean mentions. In that case, it could be corrected for camera ready. But my point is that we can't tell right now without seeing the paper and the relevant research that was omitted.
But this review process had never been used for introducing comments of this sort to a paper ever before. Plus, there was no suggestion of change, just retraction.
"My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review."[1]
"In all of my time at Google AI, I never heard of pubapproval being used for peer review or to critique the scientific rigor of the work. It was never used as a journal, it was an afterthought that folks on my team would usually clear only hours before important deadlines. We like to leave peer review to the conferences/journals' existing process to weed out bad papers; why duplicate that work internally?"[2]
Actually, the opposite. Speaking personally, I have been a fanboy of Dean's work on a number of fronts for a decade. I was aware of her work, but only superficially, and saw it as largely pissing into the wind. Reading her work more closely now, her papers and her communication, she has my complete respect. I better understand the way in which she was scaling this incredibly difficult mountain. Deeply, deeply impressive.
And Jeff has lost mine. Even if his comments are not just bad faith, his blindness borders on incompetence for the role he has. Yet another clever, blind white man.
I hope a position in the Biden administration can be found for her, and her vision can have a larger societal impact.
I don't know Jeff Dean. I have read some of his work, watched some of his presentations. He seems a credible bloke.
This, though, looks and feels like thinly-veiled retroactive and pretty unconvincing PR. It's short on detail and appears somewhat at odds with several points from Timnit Gebru's resignation note [0]:
- Dean says the paper was reviewed by a "cross-functional team". Gebru says she received the feedback through a "privileged and confidential document to HR"
- Dean says the paper was submitted for review on the day it was due to be published; Gebru says they had notified "PR & Policy 2 months before".
- Dean suggests the feedback was due to the paper not highlighting mitigating work for some of the limitations the paper was exposing. That seems like a very normal part of the research process. Why, then, does Gebru claim that she was told that a "manager can read you a privileged and confidential document" and that no other recourse or exploration of the feedback was permissible?
The only thing we know from the outside is that reality will be far more nuanced and complicated than the tidbits that leak out. Even allowing for that though - and reading some of the related comments here - Google isn't coming out of this well at the moment.
> Dean says the paper was submitted for review on the day it was due to be published; Gebru says they had notified "PR & Policy 2 months before".
Wait, really? That’s an important detail if true. The one-day timeline was a central part of the narrative surrounding this story. Notifying them two months ahead of time makes this a completely different situation.
I’m a bit skeptical. Could this claim be verified somehow? Since it’s very public news at this point, we may as well try to be rigorous.
From what I read it sounds like Timnit cleared the general idea of the paper 2 months before. The paper itself seems to have been submitted for approval 1 day prior to the deadline. Another HN commenter says that submitting papers for approval "hours" before the deadline is common at Google.
Thank you. I’m not sure what to think. It seemed absurd to submit a paper for internal academic review one day prior to a major deadline. Yet today we’re hearing that it’s common (I can believe that; standard big company stuff) and that Google hasn’t been doing academic reviews at all until very recently / possibly this incident.
So it feels like, suddenly, the cornerstones of the arguments against her are vaporizing before our eyes. This could go badly for El Goog unless they stop making official statements on the matter.
Saying nothing would have been better than giving a convenient post for all the former Google employees to come out of the woodwork and say “That’s not true! Google never did academic reviews; they solely checked whether business IP was being exposed.”
It’s ironic that people are painting her as unprofessional in that context; I’d be frustrated too, if that’s really the situation.
> It seemed absurd to submit a paper for internal academic review one day prior to a major deadline
I think that there isn't internal academic review of the sort implied by Jeff at Google.
Timnit seems pretty clearly in the right here. As an AI researcher at a competitor, this impacts my desire to join Google in the future. I imagine these sort of PR disasters hurt their standing in the academic labor market.
"My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review."[1]
"In all of my time at Google AI, I never heard of pubapproval being used for peer review or to critique the scientific rigor of the work. It was never used as a journal, it was an afterthought that folks on my team would usually clear only hours before important deadlines. We like to leave peer review to the conferences/journals' existing process to weed out bad papers; why duplicate that work internally?"[2]
Do you have examples? The closest I've seen is the person I replied to saying a key person in their less academic department spiked papers he thought weren't interesting enough.[1]
I'm interested to know too. Because although I treat my ML work professionally, I haven't worked at a large AI lab. So I had no idea whether academic review was common inside DeepMind/OpenAI/Facebook AI, or whether researchers generally did their own thing -- i.e. that the researcher and their co-authors are the main judges of their own academic integrity.
And in hindsight, it seems dumb to think it'd be any other way. Of course the researchers are their own judge. That's what the reviewers are for! You submit your paper for academic review at a journal, and the reviewers are in charge of reviewing.
Would you really want to mess with that dynamic if you're a big company? It's been a tried-and-true way to do science for more than a century. It's also a recipe for failing at science, as many will attest. But being allowed to fail at science is a key aspect of science. It would be terrible if we only published papers that were completely correct in every detail, because it means everyone is playing it safe rather than pushing the boundaries. The most interesting work is usually on the frontier of some new idea.
When the news broke, I didn't give it a second thought. "Oh, Jeff is saying that there's an academic review process. Yeah, obviously DeepMind would have something like that. And what's this -- she sent the paper one day before the journal deadline? That's almost giving them the middle finger. Yeah, pretty clear-cut firing."
... But when you think back on it, none of that adds up. Researchers are paid to do research. Being hamstrung by some manager insisting that you namedrop every relevant paper from the last decade would certainly be rigorous, but not necessarily productive. Sure, you can argue that maybe she should have talked about X or Y. But you could also write your own paper.
I'll admit, I didn't think highly of her. All I knew was that she liked to stir up drama. Why won't she just keep quiet and do her job like everyone else? Yet now it seems like she was doing her job. And if I ask myself how I'd react in that situation -- some middle manager is forcing a bogus new "review" process, and now they're demanding us to retract a paper that we put several months of work into, for reasons other than "You're revealing Google's IP," then my thoughts would be: (a) where were you during the two months I've been writing this and asking for feedback? (b) what are you trying to accomplish here, and is this really how a world-class institution treats the process?
Every company is different. And at Google scale, different teams are different. But now it's looking pretty bad. They certainly had grounds to fire her, and for many folks perhaps that's enough, but as a researcher I'm thinking "Why did Google try so hard to retract her paper anyway?" They keep dancing around that. And the article certainly doesn't address it:
> Our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.
Why? That's the point of publishing! Do reviewers just say "Oh this is from Google" and click the "approve" button? Maybe, but the whole point is for people to read a paper and decide for themselves whether it's mistaken. This whole "keep it under the rug until it's polished perfectly for six months for no reason other than prestige" is... well, rather a grim-sounding idea.
Outsiders can't know what insiders know. But we can picture various things based on the information we're getting. And this reads like some manager tried to double down, and she called him on it each time. After four or five doublings, now it's headline news and Google is looking like they went nuclear without some very solid reasons.
From my time in a corporate research lab: the formal review process ran in parallel to the conference review, with the goal of amending papers both to keep internal details from circulating and to prepare the camera-ready version. While people would freely review each other's papers, the goal there was to increase the initial acceptance rate. However, our lab notably, and perhaps exceptionally, mirrored the structure of an academic lab with corporate funding, even though all researchers were employed by the corporation.
Google's story would not fly where I worked, or where I work now.
What does this even mean? The power and resource disparity between Google and an individual researcher are so vast that this in no way can go bad for Google.
The idea that a few members on HN are disillusioned with Google is just another Tuesday for them. They literally do not care...no business of this size and magnitude does. The general public will never hear about this, and if they do, they won't understand it, and if they do then they aren't the general public.
You're not wrong. But like it or not, HN is the newspaper of our time. For many of us, anyway. So stuff like this tends to percolate in unexpected ways. And as a hiring manager, you get no feedback when people decide not to apply to your company, so it's probably better not to kill your hiring momentum.
If it seems absurd that anyone would turn down a job at DeepMind, well... Let's just say, in my experience, prestigious institutions tend to come with a pile of downsides that everyone puts up with (because prestige) but no one really talks about (because no reason). If you care about shipping results quickly -- some researchers do (or at least I do) -- then the idea of joining a big company is already worrisome. Like you're a professional rower, happily rowing along and navigating wherever you want to go, and then you're asked to become a galley rower: https://youtu.be/TyzQ-bVaqPU?t=294
There's no substitute for Google-scale work. (Working on TPUs would be a dream, IMO; where else could you possibly build those?) But if you join Google as a researcher, it sounds like your ideas have to (a) pass through their internal academic review, (b) pass through a journal's review, and then finally your idea can be published to the wider scientific community for comment. (b) was painful but possibly worthwhile, with arxiv serving as a bucket to catch everything else. Why roll your own internal review process? And why is Google trying to micromanage what researchers are allowed to publish?
I know we're probably missing a lot of the story. But on the other hand, Jeff has now given an official side of that story, so it's not like they didn't have a chance to set expectations.
It is common to submit for approval hours before the deadline. However, if the pubapproval process finds something that needs to be redacted, you have to withdraw the paper. That's basically the risk in it.
Since most people are frantically working on their papers until day of/ hour before ML conference submission deadlines, the "final" version of the paper may very well have been submitted the day before the deadline.
Someone in the ML community posted the abstract and provided feedback, which seems to indicate that this followed the typical review cycles for conference papers.
> Gebru says she received the feedback through a "privileged and confidential document to HR"
I agree that Google isn't looking good at the moment. But if I had a colleague who seemed intent on finding ways to place the company in legal jeopardy, then I too would avoid direct communication whenever possible.
>Dean says the paper was reviewed by a "cross-functional team". Gebru says she received the feedback through a "privileged and confidential document to HR"
I don't think these two takes are at odds with each other at all. It sounds like a cross-functional team reviewed the paper and produced that "privileged and confidential document" but Gebru didn't find that document sufficiently detailed.
There are a lot of people commenting that she didn't actually resign. I agree, but it sounds like the conversation went like this:
employee: I'm not happy about x, y and z. If you don't do those, I'm going to quit.
manager: well we are not going to do those, so thank you for your time. We accept your resignation and would like it to start immediately (i.e. you're fired).
If you are gonna tell your manager that you plan to resign if a condition isn't met, then what do you expect them to say if they don't plan to fulfill that condition? It sounds like she was expecting them to say "Hey, well we don't want to meet your demands, but sure, we're happy to have a disgruntled employee around here, so feel free to stick around, or you could just quit on your own timeline, no sweat".
I suspect that many people would be fired on the spot for threatening to resign, so don't threaten it if you aren't okay with that consequence.
There are three different discussions happening all at the same time, which is muddying things.
1. Was the treatment of her research in internal review reasonable?
2. Was terminating her employment reasonable after she sent the (now public) email to the women-and-allies brain listserv?
3. Was the end of her employment at Google a resignation or a firing?
To me, (3) is by a wide margin the least interesting part of the story and all of the discussion here is missing the point entirely. Whether she was fired or resigned has zero bearing on whether (1) and (2) show reasonable or unreasonable actions.
For what it's worth, I have read the abstract of the paper and discussions around it and I have seen more than one person rate it as not very interesting. Why is all this fuss about a paper restating common knowledge? For example, they say datasets need to be filtered for bias, and that large models consume ... duh ... a lot. We already know that, where's the new shiny architecture for fair modelling?
As a manager I would only do this if I really wanted to fire the person already. For employees I care about I would give them an out.
Part of being a good manager is understanding your employees and helping them succeed. If someone makes a statement like this in the heat of frustration it doesn't necessarily mean they will actually quit. If they're a valuable member of the team you should present them with an opportunity to save face and remain.
To me this seems like taking an opportunity to fire someone they already wanted to get rid of. Either that or a bad manager who wanted to flex their power as a threat to the rest of the team.
> As a manager I would only do this if I really wanted to fire the person already. For employees I care about I would give them an out.
One of the more difficult lessons I learned as a manager: Once a team member starts giving ultimatums in order to get their way or override decisions, it's not in your best interests to give in for the sake of keeping that employee. The obvious exception is if you realize you were actually wrong from the start, but reversing decisions for the sake of caving to someone's demands is a problem.
If someone is so ready to quit that they'll flaunt it to the company, it's doubtful that reversing a single decision is going to suddenly make them happy again with their employer. Worse yet, it sends a message that threatening to quit is the way to get what you want from the company. Once you validate the strategy, you will get a lot more of it.
Unfortunately, if someone already has one foot out the door and has been complaining openly (even on Twitter, in this case) about their employer, it's best for everyone to go their separate ways. From there, focus on identifying and fixing any underlying problems to minimize the chance of this happening again in the future.
Given the employee's general attitude, I find it easy to believe her manager was already not happy with her.
See her previous (potential) legal troubles with Google that she even acknowledges herself.
> I was in the middle of a potential lawsuit for which Kat Herller and I hired feminist lawyers who threatened to sue Google (which is when they backed off--before that Google lawyers were prepared to throw us under the bus and our leaders were following as instructed) and the next day I get some random “impact award.”
Yes. I probably didn't use the word "resign" as such but there was a time in my career when I went to my manager's manager or maybe another level up the chain and basically said I could not work for my direct manager. I got my way even though it involved working in a bit of a backwater for a time. But I was prepared to leave if I didn't get my way. (Didn't hurt that I knew said manager was a bit on the outs with the exec who I knew quite well.)
It's more nicely worded to be sure, and definitely less aggressive, but it still says the same thing, which is "I'm gonna be an unhappy employee if I can't find out who reviewed my work". If they aren't willing to tell her who reviewed her work (which they may or may not have valid reasons for doing, but clearly they don't want to do), then they are dealing with someone who is going to be an unhappy employee since their conditions won't be met. Sure, not a resignation, but if your employer doesn't think you'll be a happy employee, why keep you around?
In any case, she has been very vocal on twitter and has not seemed to deny that she gave some sort of ultimatum. If she didn't give an ultimatum, it would only make Google look worse, so why not mention that on Twitter (given that she has tweeted probably 100 things about this incident in the last two days)? Given the absence of a denial, I'm going to assume that it was worded as an ultimatum.
I vehemently disagree with just your first paragraph. (Your second and arguably more important paragraph I’m sympathetic to.) People state their feelings toward things all the time, and none of it should ever be considered permanent. I’ve told my boss many times something of a similar ilk: “if you’re going to have me and my team do this last-minute demo when it was fully in your capacity to plan better, I’m going to be pretty unhappy.” On top of that, I demand that things change. Yet, for some reason, I’m not fired afterward as a “disgruntled employee” or “somebody who can’t be happy working”. I take time to understand my boss’s disposition and I strongly seek my boss to understand mine, and hopefully we end up in a better place afterward.
What’s wrong with Google saying, “we refuse to comply with your demands, and we understand you may feel blablabla. We are trying to streamline our submission process and we would like you to help us do that.”? Maybe it’s because Google doesn’t actually have a desire to work with her, in which case, the ultimatum (or whatever it truly was) is just a convenient out.
> Maybe it’s because Google doesn’t actually have a desire to work with her, in which case, the ultimatum (or whatever it truly was) is just a convenient out.
There's almost no question about that in my mind. Maybe google didn't see her work as useful. Maybe she was just an asshole and they didn't like working with her. I have no idea. But if you are already on thin ice (or your company even feels just neutral about you) and you give an ultimatum, prepare for them to use it against you.
"We can't meet your demands for sure. But please hang around the company and choose your exit date at your liking" -- no reasonable person would respond like this.
In any case, making this an issue sounds way overkill.
Both want to part way. Either side can choose an earlier date of the termination.
She doesn't need Google's permission to leave earlier. Google also doesn't need her permission to make her leave earlier.
> manager: well we are not going to do those, so thank you for your time. We accept your resignation and would like it to start immediately (i.e. you're fired).
The thing is, there's a big difference between resigning and being fired for cause, even if both end with you not working at the company anymore.
IANAL, but my best guess is that she was just let go without cause. Your employer can fire you at any time and technically doesn't need a reason. Typically, "terminated with cause" is a specific thing where they fire you and give a specific reason (e.g. stealing) that might have bearing on whether you receive unemployment benefits, accrued vacation, etc. It's hard to imagine that they fired her in that way; more likely she was just plain-old-fired (there's a reason for it, but not legal cause).
Firing for cause would be something of a “nuclear option” here and IMO would significantly increase the risk of a court battle. The peanuts Google would save in severance costs would not be worth the PR damage.
This debate over whether she was fired or resigned is distracting from the real issue. Why isn’t Google willing to listen and work with Timnit? What are the issues she’s raising? These are more important questions, fired/resigned is just an easy thing to be outraged or give leeway over.
> Why isn’t Google willing to listen and work with Timnit?
Maybe she isn't that important to the company? Why bother working with someone making demands if they aren't worth your time? Google might have actually thought she was detrimental to the company and wanted an excuse to fire her, and she gave it.
I suspect that they were looking for a way to separate themselves from her. I was very upset when I read her attack on Yann LeCun. At least in my eyes, she was hurting Google's brand. I read her tweets and she seems to see racism at every turn. In my opinion, that was the behavior of a political activist, not an AI researcher.
After reading Jeff Dean's response, I can only come to the conclusion that Timnit Gebru acted like a prima donna. It is completely normal to have to obtain prepublication signoff on material before it is submitted to a conference (and manager signoff even before abstract submission). Given the breadth of experience at Google, it seems strange not to avail yourself of this. Demanding to know the identity of reviewers is absurd (no journal would tolerate that) and deeply unprofessional. Making it an explicit ultimatum was her decision. Denigrating the entire area of research at Google on a large mailing list is the action of someone who wants to be terminated.
People claiming that the deficiencies in the paper are minor and wouldn't be blockers obviously have little experience submitting to academic journals. Other parts of Google doing deeply technical work probably don't have the same level of review as the Ethical AI group -- for obvious reasons.
There is usually a long back and forth -- there are even memes about the infuriating comments from "Reviewer 2" [1][2]. Omitting to mention argument-obsoleting developments in the field (from your own lab!) is more than enough to send you back to extensive redrafting.
To be clear on terminology -- a retraction is an academic black mark, and occurs to a paper after publication, usually for reasons of research misconduct. This is not an instance of that.
I would agree with this assessment. It seems people think this is like academia -- it's not. There are no guarantees of tenure at a private company, even one with a research culture. If she wants that, then give up the paycheck and go back to academia. Google doesn't owe her anything. (I bet she'll get a decent settlement.)
AI ethics are important and I'm willing to bet that the Google execs want to know about biases and other problems in their products and probably don't care that much what is published. They probably don't want high maintenance researchers that cause them headaches and are obsessed with microaggressions, etc. I expect that this stuff has been a distraction for too long and they are happy to be rid of this and happy for the signal it sends across the company.
I have little sympathy or concern for Ivy educated elite in-fighting. There is probably a huge number of qualified people just dying to have the opportunity to work there, have access to those resources and do research.
I think you should be more open about what potentially actually happened, because your first sentence seems very accusatory without proper information — the very bias that Gebru speaks of. According to her email that Dean references in his, her "feedback" for the paper was given in the form of a confidential HR document. That seems highly nonstandard for a review process supposedly only concerned with scientific rigor and is not addressed at all by Dean's statement. Further, his statement clearly, likely intentionally, muddies the water about what the actual sequence of events was that led to the paper being created, internal reviewers originally being notified, when the submission happened, who approved it, who submitted it externally, who asked for it to be retracted, and when and in what order all this happened. The fact that his statement, which was apparently meant to clarify these exact proceedings, makes it even more vague about what actually happened seems pretty damning to Dean's case, in my opinion.
Both Gebru's and Dean's statements have very little overlap in flavor and in facts, so it's pretty apparent something is going on here that is nontrivial and abnormal.
To be sure, Google has other objectives. But the people in this area are essentially academics, and I'd expect them to have a similar process given their backgrounds. The papers from Google are routinely of high calibre.
The non-research motive is arguably to control the narrative about using ML (I admit I still choke at calling it AI) and big data techniques. I fail to see how this is advanced by having a very public spat.
I've no doubt some senior exec decided she had to go, but I don't think it is because of any intersectional reason, or to cover up any particular publication, but that she wasn't seen as an asset to the company. Strategies for negotiating exercise of power are very different at Google to social sciences academia (or twitter, which is engorged with righteousness over this), which she seems not to have grasped. There are few enfants terribles in corporate tech that don't have controlling stock.
This. And one of the main reasons that papers from Google are routinely of high caliber is the internal review process and feedback from your peers.
Publishing a piece of work for the sake of publishing by ignoring the processes that are put in place is outright irresponsible and that's the end of it.
That's a very snide comment. I've read her tweets and other Brain tweets (eg, from mmitchell_ai, though she deleted them, which seems wise as they were angry). I don't take any of them at their surface as being truthful. I think we're only seeing the 10% of the iceberg. But I don't find Gebru convincing.
> It is completely normal to have to obtain prepublication signoff on material before it is submitted to a conference (and manager signoff even before abstract submission)
To be clear, both of these had been received for the paper in question.
That isn't the impression I had from reading the principals' statements, to wit:
> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.
It seems like someone short-circuited the review process and submitted without review to meet a deadline, with a post-submission review. When this occurred, it was not all green lights. The expectation is that you then pull the submission and, post review, submit elsewhere. After all, conferences are all virtual now, it is not a hardship to submit elsewhere.
If you have other information as to the facts or the internal process do tell.
Jeff's statement is a lie. I work at Google, and the publication approval process doesn't normally take two weeks. Any Google employee can verify this, and others have, both in this thread and on twitter. There's some documentation that asks for 2 weeks, but it's not followed, and inconsistent application of policy is absolutely a concern in a case like this.
A summary of the events, as best as I can tell, is the normal presubmission review was done. After the paper was approved and submitted, someone, whether this was upper management or some other entity, did some additional review and required that the authors withdraw (?, but retract is the wrong term) the paper.
From what I know, both firsthand and from other sources[0], while not spotless, the issues with the paper were mostly nitpicks and fairly straightforward to resolve. That they weren't even provided until some escalation from Dr. Gebru is strange. That even after they were, sort of, provided, she and her team were not given the option to address them in the paper, is extraordinarily strange.
I work at Google too, and it's embarrassing to see you attack your colleagues like this in public - how is Jeff's statement "a lie"? Whether or not you believe the 2 week deadline is a hard and fast rule followed 100% of the time, Jeff never made that claim. 1 day is certainly unreasonable.
What makes you think they weren't given the opportunity? We were told specifically they _were_ given an opportunity to revise, and instead, demanded HR(!?) provide attributed versions of any and all statements made by any colleagues regarding the paper (!??!!?), or they'd 'work on setting an end date'
I'm not exactly proud, excited, or anything but pensive about this series of developments, and it's _highly_ likely Jeff feels the same way. Everything got too hostile and out of control here for people to be able to work together healthily moving forward, and I'm skeptical of anything beyond that being anything more than people attaching a narrative to an unfortunate breakdown in a relationship.
> Whether or not you believe the 2 week deadline is a hard and fast rule followed 100% of the time, Jeff never made that claim
His words were, and I quote, "we require two weeks for this sort of review". That is an absolutely false statement, and Jeff knows it. I see no other characterization than a lie. I take no solace in that fact; my prior was that Jeff Dean was an above-average executive who ran a more ethical than average org. I'm disappointed to see my trust was misplaced and that Jeff was willing to endorse that document, but I'm still going to hold someone accountable when I see them do something obviously wrong, even if we work for the same employer. Ethics extend beyond that.
> 1 day is certainly unreasonable.
It is not though. It's the norm.
> We were told specifically they _were_ given an opportunity to revise
This is not true (if you believe this to be true, can you cite said statement?). I'd be more than happy to discuss this with you privately, my username should be obvious ;)
I don't think talking with you 1:1, privately, would be productive: you have a quite different viewpoint on this that reads as lawyerly and standoffish to me. I'm also required to submit expense reports within 30 days, yet here I am, 60 days later. It's unbecoming of you to be writing epitaphs for the moral character of colleagues based on a legalistic interpretation of "we require two weeks".
> It is not though. It's the norm.
Not sure what I'm supposed to do with this. A literal interpretation has you claiming that all Google researchers only allow 24 hours of review of their papers before submitting them for publication. That sounds wrong!
And if you were fired or retaliated against for having failed to file an expense report on time, when that isn't normally a requirement, I'd be criticising leadership for that too.
Fwiw, I'm not sure what you mean by legalistic. I see Jeff's interpretation as more legalistic: this is the policy, while mine is based on practice: the policy isn't enforced, and here's what people actually do. And like I said, it brings me no joy to see that Jeff is lying to justify this. I have a deep respect for his technical achievements, and prior to this my understanding was that he was an above average executive. Perhaps he still is. But that's not an excuse for me to not levy criticism at him when I see him do something wrong.
In practice, Google doesn't require two weeks for pub approvals. I'd be more than happy to provide the underlying data I'm basing that statement on, but obviously not here. Granted, if you want you can find the data for yourself, and I'd encourage you to do so!
> A literal interpretation has you claiming that all Google researchers only allow 24 hours of review of their papers before submitting them for publication.
Given your experiences with Google reimbursement policy, why would you expect adherence to the written publication approval process to be different?
That's why I'd much prefer to have this conversation on corp. I can show you raw data and you can draw your own conclusions; no need to listen to me or my interpretation whatsoever.
But this supposes that there is some deep and nefarious need for Google to kill a paper written by some AI Ethics people. From looking at their recent pubs I can't think of anything remotely significant about any one paper they could write. Hopefully we'll get a full leak at some point.
Nothing deep and nefarious needed. The rules were ambiguous (not enforced) and they didn't want this paper published, so they just had to say "no". No extreme motivation is necessary. It's hard to imagine that they expected an ultimatum from Timnit, and the resulting PR disaster. Now that it's in the news, I imagine they actually do have a "deep" need to protect their image by controlling the narrative.
I was re-reading The Gervais Principle this past week. It really does cover this situation & what you describe as the ambiguity in the rules in Part V, “Heads I Win, Tails You Lose.”
> This is a simple and child-like example of the operation of a basic human instinct: the heads-I-win-tails-you-lose or HIWTYL (let’s pronounce that “hightail”) instinct. It is the tendency to grab more than your fair share of the rewards of success, and less than your fair share of the blame for failure.
Now whether one buys into that as literally true or not is one thing, but I think it is definitely a useful way of thinking about situations like these.
In fact, the “Golden Ticket Reconsidered” section at the link sounds somewhat like I imagined incentives within AI ethics research at Google are structured:
1. Cut a deal for performance-linked bonus for successful initiative
2. Set up a committee and charter it to collect, vet & recommend ideas
3. Drop hints & suggestions to create things that leadership favors
4. Create appropriate urgency in the work of the committee to achieve the risk-levels you want in the ideas produced.
The outcome of such a system:
> If it works, you praise everybody generously, hand out a few gift certificates, keep your bonus to yourself, and move on. If it fails, you blame the people in charge of the work for failing to consider an "obvious" (with 20/20 hindsight) issue. The chair of such a committee would likely be Clueless [a term of art in the blog series], his appointment being a false honor — a case of being set up to take a fall
Further on, the author brings up the “Hanlon Dodge.” Dean’s characterization of Gebru’s termination (“resignation”) seems quite like that to me.
Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
According to her Twitter feed, Timnit was asking for the same transparency that normally goes with Google pub reviews -- where all the reviewers and their feedback are known to the reviewee. This red-stamp "review" did not follow the normal review process. Not a Googler, just going on what she says:
Considering her previous attacks on anyone who disagrees with her and her previous public feuds, I'd say people may be afraid of being branded as racist/sexist because they had to reject her paper. She was a disruptive employee who threatened her employer, and the employer didn't accept it. Why is Google obliged to keep her employed?
There are many people in SV that are very quick to bring up race, gender and diversity whenever a dispute arises. It's becoming predictable to the point of being problematic.
As a POC myself, I understand there is a time and place for this.. and it definitely isn't all the time, publicly, on social media.
I fear that people like Timnit are inappropriately wielding justice rhetoric to benefit their own careers, at the cost of actual injustices that may occur to others. This is just my opinion.
That is very sad indeed. I am a second-generation immigrant and active in politics in Norway. This behaviour is actually limiting our ability to work towards eliminating real discriminatory behaviour. I've seen it first hand where someone has accused someone else of racism when it was their own fault. I guess it benefits them, because they get their way, but it also creates a divided and toxic society. Working with people like that is also very tiring and stressful.
The funny thing is she isn’t going to be harmed by this. In fact if she felt like she needed a different job this is one way to do it. I personally would hate the stress, but some people like it.
One thing I rarely see is people talking about her email where she said “give me a and b or else I’ll leave” - paraphrasing slightly. This isn’t in dispute, she readily admits this.
What ... is an employer supposed to do with this? Also she’s a manager, so there's extra lame legal crap. It feels like they could have not taken her bait - which sounds like hyperbole to me - but I guess they did, and do they have to?
I can’t say about any of the review things, if she is being over reviewed or not, could be. Demanding to know everyone who looked at her work so she could... wage a Twitter war of destruction, well I could understand not wanting to do that. I mean if I was asked to review her paper, I would decline. Who would get into that?
That's an angle I had not considered. She has a reputation for retaliation using -isms and those reviewers were able to successfully ask their employer for protections against a historically confrontational coworker. Google as a company takes the hit, as it is in their great interest to prevent their less-vocal employees from getting raked into a public fight.
Exactly. That would be very unfair to those employees. How she treated Yann LeCun and how she now publicly bashes Jeff Dean it is no doubt that those employees would also be named and shamed publicly. She threatened to resign and her resignation was accepted. Don't talk the talk if you can't walk the walk :)
Yeah the way I see it, once someone makes an ultimatum, the relationship is destroyed, there's no trust. You can ignore the quality of her paper or the critiques from Google, they probably just decided to end the relationship because of this.
Not really. People bluff in negotiations all the time. A normal response would have been, “I can’t do that. Are you still committed to leaving?” Instead Google went all scorched earth and it’s still unclear why. It also indicates that they don’t particularly value her in progress work in the lab since normally you want the 2 weeks or whatever to squeeze some wrap up work out of folks.
People bluff but you generally have to be prepared to walk if your bluff is called. If I walk into a manager's office and demand 2x salary or I walk and the response is "we believe you are fairly compensated," you're now in a rather uncomfortable situation.
Which is of course not to say you can't have a more measured negotiation. But it can be hard to walk back from give me X or I do Y, especially if there isn't a lot of middle ground between giving you X or not giving you X.
Fair enough. And maybe the lesson is don't bluff in situations where you aren't prepared to deal with the consequences if the other party calls your bluff.
How do you figure that? If I hold 2-7 offsuit then I still have to be prepared to show my hand if you call me. Doesn't mean my all-in raise wasn't a bluff.
But presumably you understand that your opponent may call you. The relevant MW definition is "a false threat or claim intended to deter or deceive someone." So in this context, it's a claim you won't really leave if you don't get your way. But the other party may decide to get rid of you anyway based on the bluff. (Thinking about it, it's probably reasonable to call it a bluff but that doesn't mean there aren't consequences if the other party calls you on it.)
Yep. You don't really know how the options rank: stay with conditions granted, stay without conditions granted, and leave. (Well, #1 is obvious but the others less so.)
If I find myself in a situation where, if I don't get X, I prefer to leave, then an ultimatum is an obvious course: "give me X or I will leave."
It can be to my benefit in negotiation for the other party to think that I am in such a situation, and so I may still say "give me X or I will leave."
If I do not intend to follow through, I am clearly bluffing.
If I intend to follow through, but absent this gambit would have preferred to stay even in the absence of X, I think we can still call it a bluff in a weaker sense - I am not misleading about my future actions, but I am misleading about the strength of my preferences.
"Do X or I'll quit" is a childish way to negotiate, its akin to a child throwing a tantrum if they don't get their way. It's extremely unprofessional and not something a team player would do. I think its clear why they took her up on her offer.
> "Do X or I'll quit" is a childish way to negotiate
I disagree with this. It's really the only way to negotiate, pretty common. The wording is important here. If you word it as "Do X or I'll quit" it sounds childish, if you word it as "I believe the company should X, otherwise I can't see myself working here comfortably and will have to consider my resignation" it sounds professional, but the idea is the exact same.
I would respectfully recommend that you read "Getting to Yes" by Roger Fisher and William Ury. Effective negotiation is often about expressing interests, not specific actions or ultimative positions.
Yes, in that they are both childish. Your version just added more weasel words. The second you bring up the "R" word as a threat you have played your hand; everything else is window dressing.
Better to word it as "I believe the company should X instead of the current W, and to that end I will work with Y and Z to come up with a plan for moving to W' and what that should look like; once that is complete I will have better clarity over how to get to (W+X)/2, or whether perhaps some V would actually be better for all parties."
If an employee is adversarial toward the company, the trade-off needs to be made no matter how valuable the employee's work is.
This is a non-issue tbh. She wants to leave. The company wants her to leave. They both agree to part ways because the premises are fulfilled (i.e. the company can't meet her requirements).
If you want to get technical, I'd bet her employee status is still intact for 2 weeks; she just doesn't have access to her laptop, etc.
I don't know what that means. It could mean her employment status is off today. Or it could mean she can't access any corp material but her status is on.
It's fairly standard to let the employee leave the building immediately but still on payroll for 2 more weeks.
Again, I think it's a non-issue. Both want to part ways. Either party can singlehandedly make that date earlier.
It seems one side brings up this point because other points are not salient, so they try to make it like "you see they fire me today. I actually want to leave in 10 days instead. This is unethical!!". It's a weak point and muddles the main point.
No, she did not want to part ways. If she did, she would have quit. Instead, she said she needed certain things to keep working there. That pretty obviously indicates a desire to keep working.
> Instead, she said she needed certain things to keep working there.
And, from what I understand, that she wanted to discuss the issue in person when she returned from vacation (which she was on at the time.)
EDIT: I point this out because I think that this potentially recasts the whole communication from an non-negotiable ultimatum to something more like a fair warning to avoid blindsiding anyone going into an in-person discussion in which negotiation is implicitly anticipated.
> And, from what I understand, that she wanted to discuss the issue in person when she returned from vacation (which she was on at the time.)
Then, she should have said that.
It would be ridiculous for Google to let her hang out at the company.
"Oh we cannot meet your meet ultimatum for sure. But please feel free to hang around and finish your vacation. We'll wait for you, a disgruntled employee, to choose when to leave."
> People bluff in negotiations all the time. A normal response would have been, “I can’t do that. Are you still committed to leaving?” Instead Google went all scorched earth and it’s still unclear why. It also indicates that they don’t particularly value her in progress work in the lab since normally you want the 2 weeks or whatever to squeeze some wrap up work out of folks.
I strongly encourage people to read some books on negotiations - as well as read up on legal ramifications to some negotiations.
Pretty much all books/courses on negotiations say: Ultimatums have their place, but are a minefield (i.e. they can blow up on you), and should be used as a last resort. From a negotiations standpoint, the response was adequate - which is why they all caution against using such an approach.
As for the 2 week thing, this is a convention, but not a requirement. In my company, it's not unusual for someone to be shown the door the same day they announce they plan to leave to another company (it's not the norm, but not at all unusual). The manager/company always ponders whether there are risks in keeping the employee for a few more weeks vs the gains, and this is the question Jeff pondered - that he did this is quite normal. Will the employee provide anything useful to us in those two weeks (e.g. handoff work to others, etc)? Could he/she cause problems (bad mouth people to fellow employees, steal IP, etc). If it's a disgruntled employee, they are usually shown the door the same day. In Timnit's case, it's unlikely there was any value in letting her stay for 2 more weeks.
I once intended to leave the company I was working for. The night before, I took out everything of (personal) value from my cubicle, as well as from my work machine. Only then did I have the discussion with my manager.
Having seen how she communicates and handles difficult situations, I think she really should read those kinds of books. Sometimes her behaviors are textbook examples of what not to do.
(Hint: If you're trying to influence someone, or a whole industry, you are negotiating, whether you choose to think of it that way or not).
The last bullet (arguably the last two bullets) are about conversation skills, but that is an essential part of negotiations.
I won't claim to be good at this stuff. It takes a lot of effort and practice to change habits you've formed your whole life. But still, I've improved somewhat. What I do think I've become much better at is identifying why someone's efforts succeeded (or in this case, failed).
I would also recommend Influence by Cialdini. It is not a negotiation book at all, but will make much of the material in those books more meaningful if you've read this book.
Books/courses I discourage:
- Never Split The Difference
- The Lynda course (there may be more than one now, but the one I took years ago was bad).
> Google went all scorched earth and it’s still unclear why
Possibly because she made a personal problem into a team/department problem by asking their colleagues to stop working ("stop writing your documents because it doesn’t make a difference"). I couldn't imagine a company where such a call for work refusal wouldn't immediately lose you a ton of goodwill.
It's so obvious that she was "negotiating" in bad faith by threatening to quit if she didn't get what she wanted (i.e. throwing a tantrum) that I'm surprised that other people are surprised at Google's response.
Welcome to the status quo in the Bay Area. Mentally deranged leftists are having public breakdowns because their usual tactic of calling any slight some kind of -ism is slowly ceasing to work.
That's correct, but calling a bluff can mean different things. In this case, it could be "No, and we don't think you will actually leave" or 'No, goodbye".
My guess is they chose the latter because they don't like employees that run such hard negotiation tactics, and she was becoming too internally disruptive. Either way, any employee should know they are at risk after playing those cards, if not in the short term, then in the long term.
Some people don't have time for a bluffing game; a person may choose their tone in negotiations with a more powerful opponent without considering the full extent of the consequences of bluffing.
"We can't meet your demand. But sure please hang around and let us know your end date at your liking. I know you are unhappy with the company. But that's okay. Nobody cares about security anyway."
People are consistently leaving out the fact that she wrote an internal email encouraging her coworkers to reach out to congress about Google's behavior, at a time when big tech companies are being dragged in front of congressional hearings nearly every month.
My researcher acquaintances at industry labs at IBM, Microsoft, HP, Xerox, ATT, Bell, DEC, Compaq, etc. never have had to have their papers reviewed internally before submitting them to conferences or journals. What's up with Google?
Technically you can submit without pubapprove, though if somebody rats you out you might face some repercussions. Otherwise every paper is supposed to go through AI Ethics committee review along with other types of reviews.
I wonder how consistently these standards are applied, even zeroing in on Google. If it turns out that most people within the company don’t face a barrier from this process in the same way then that sounds like a really problematic symptom.
> Our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.
But Gebru writes that HR and her management chain delivered her feedback in a surprise meeting where she was not allowed to read the actual feedback, understand the process which generated it, or engage in a dialogue about it:
> Have you ever heard of someone getting “feedback” on a paper through a privileged and confidential document to HR?
> A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week...
> And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.
I've been through the peer review process at Physical Review Letters, SIGMOD, and VLDB. You get a document containing all the reviewers' comments, plus a metareviewer's take on the overall decision and what has to change. You can engage in dialogue with the metareviewer, including a detailed response letter justifying your choices, highlighting things the reviewers may have missed, and explaining where you plan to make changes. You get additional rounds of comments from the reviewers in light of that letter on later drafts.
I'm not a Googler, and I have no idea what the standard review process looks like there, but what Gebru describes does not sound at all like peer review. I also note that Dean does not contradict Gebru's account of the meeting or feedback process. If I had a paper rejected in this fashion, I would also demand to know what the hell was going on and who was responsible.
I think the person who submitted feedback in a privileged and confidential way made a great choice. I expect more people taking this route in the future, actually; people are scared in the current political atmosphere. Look at what Jeff Dean is getting, even though he didn’t do anything bad.
A manager in ethics shouldn’t ask Google to break the law by not providing confidentiality that was requested.
I would certainly be fearful of providing an honest review of a paper championed by such a powerful figure at Google who could get me fired and unemployable with a tweet. Revealing the reviewers and throwing them under the bus would have been the end of any honest reviewing at Google. Dean made the right move here.
> A manager in ethics shouldn’t ask Google to break the law by not providing confidentiality that was requested.
It is not implied anywhere that the reviewers requested confidentiality. It is certainly not implied that Google would be violating the law to rescind that confidentiality (and, to be clear: they almost certainly would not be).
I think it’s fairly strongly implied, particularly due to the fact that nobody would let her have possession of the written feedback. It sounds to me like there was concern that if she was provided with the written feedback she would be able to de-blind the reviewers via close analysis of the writing itself. Why else would management refuse to share the written feedback, considering that (a) they were willing to share the content, just not the specific writing itself, and (b) the feedback doesn’t seem to have been in and of itself particularly noteworthy?
My take is that the reviewers wanted to be able to express some mundane criticisms about the quality of the work without having to expose their identity to a person with a public reputation of unusual hostility.
The fact that she was able to hear the feedback but specifically not allowed to walk off with a written copy of it is what makes me think there was concern that she would try to discover the identities of the reviewers. That and the fact that she noted specifically in her writeup that she was told the feedback had come via HR.
Timnit was presented with a privileged and confidential document. That binds her from sharing the information in the document with others. It has nothing to do with the authors of the document or the confidentiality of their identities.
And importantly, in the context of a company like Google, an executive (like Jeff Dean) likely has the authority to "unprivilege" said contents, whatever they are.
Given that Timnit ostensibly does not know who gave the feedback, nor how they did so, and elsewhere notes that she was able to see the feedback but was told she couldn't share it because it was privileged, can you explain how you reach this conclusion rather than concluding that "to" was a typo in a tweet and should have been "via" or similar?
It does feel off, and it’s easy to read as Google trying to suppress her findings. But there’s a simple charitable reading as well. Gebru has recently and very publicly developed a reputation for behaving with hostility toward colleagues. Google is a company that prizes (or at least claims to prize) psychological safety of employees. I can see all of this being the chain reaction caused by a number of her coworkers expressing that they were unwilling to give an honest opinion about her work if she would be able to tie it back to them. If an ordinary employee caused their coworkers so much concern they would probably already have been dismissed, but Gebru is especially talented and was high-ranking.
I imagine some reviewers extracted agreement from management before giving negative feedback on this paper that the “anonymous” in “anonymous feedback” was a promise. This explains the unusualness of the situation, why the feedback flowed though a special HR channel, why specifically management was unwilling to let her have a written copy of the feedback (that could be closely analyzed to de-blind the reviewers), and why management accepted her resignation and the resulting fallout rather than agree to de-blind the reviewers.
In her email she claims that she is constantly being dehumanized. This is unacceptable to someone in her amazing position and to be honest sounds like she is a narcissist. I think Google couldn't wait to get rid of her (cannot blame them; she wanted to sue the company a year prior and also "represented" the company really badly on social media) and her email with demands was their opportunity. I don't think she would survive in any company. She is better off starting her own company or going to academia. I don't feel bad for her; I'm sure she already got multiple offers. Not from FB, that's for sure though :)
This is the problem with hiring SJWs. They will claim being victimized, racism, etc. the moment they don't have their way. I'm sure many people would have loved to be in her position and have her salary and benefits. Some people can't have enough it seems. "Wokeness" is a disease, the goal is to keep getting more and more power behind the guise of "equality".
Hopefully more employers will reject hiring "woke" people after seeing the trouble they're causing.
I suspect you're being downvoted because this post sounds like a denial of very real issues of sexism and racism.
However, the suggestion that employers will work to identify and preemptively reject employees with an SJW bent is not only sound but almost certainly already underway.
Yesterday the conversation was all over the tone of the email exchange, and my gut was regardless of the research it discussed that Google was probably alright to choose to accept that researcher's resignation.
Now I think I was wrong, Google looks like they're full of crap. If the research doesn't pass muster I'd like to read it and pass my own judgement. I'm guessing the tone was justified.
What's even worse is that Google ignored Timnit's attempts to figure out what went wrong, handing down a decision by fiat. I am surprised that this doesn't violate Google's standards! It would certainly violate mine.
OTOH Timnit sounds like a down-the-middle SJW type, which means that every conflict is about identity, privilege, and so on, which honestly hurts her message. To me, it's enough that Google shut her paper down and wouldn't tell her why or give her even a chance for a different outcome. Perhaps it is, ironically, her SJW nature that makes it risky for Google to engage her in the way she wants to be engaged, because a deeper conflict cannot look good for Google in the current cultural climate. (E.g., it sounds like she has strong intellectual-integrity points, and weak SJW points, to make about Google, but they couldn't engage the former without also dealing with the latter.)
> ignored Timnit's attempts to figure out what went wrong, handing down a decision by fiat
Funny, because she mobbed Yann LeCun, and after she obtained a forced apology from him and an invitation to a civilised and serious discussion, she said "this is not worth my time".
> Our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.
I can see how this might be frustrating for academics working within Google. The field already has systems in place for peer-review. While I admire the idea of Google holding their research to a certain standard, it also provides a mechanism for dismissing research that paints Google (the corporation) in a bad light. If a paper is good enough to pass an (external) review process, why should it not be published?
> If a paper is good enough to pass an (external) review process, why should it not be published?
There are many journals that have a poor track record on peer review, and in fact even those that have a generally good track record often have times that it famously failed.
Whether we agree or not, Google wants to protect its reputation. Because not every journal will be up to their standards, ensuring that papers do meet a standard is going to be important for them.
They could do this by only allowing submission to certain journals, but this would require an understanding of each journal, its processes, reputation, etc. Perhaps it is easier just to review papers before they are submitted.
This does also allow them to protect potentially NDA'd info from being accidentally included in a paper given to external reviewers, which is (whether you agree or not) something they will clearly want to do.
I think this is the crux: Google is not academia, so naturally a gap would arise if your expectations were that it should behave like an academic institution. From their perspective, what requirement is there to publish work that is not well aligned with their goals (optics, financial, etc.)?
The problem is also that Google's AI labs are treated more or less as academia, that's one of the big selling points for recruiting.
I'm in industry and am not allowed to even talk about the projects I work on, much less publish papers, it kills recruiting conversations with maybe 10% of the people at conferences.
Great question! Here's a scenario, though I don't know if it's the case here:
Imagine that one set of NDA'd data that painted Google in a bad light was included in the paper, but a different set of NDA'd data that painted Google in a good light was excluded.
I have no idea if that's what's going on here. But if some of the people who objected to publishing the paper thought this was happening, it may explain their objections to publishing the work.
It saddens me that they make Jeff Dean work on these kinds of issues. His MapReduce invention brought the world a lot more than managing his coworkers' egos does.
How do you know they are making him do this? It seems to me Jeff chose to accept a leadership position at Google, so managing egos would be a first-class task for him to deal with.
If managing people were the same skill set as computer science perhaps we would not be seeing this play out as it has.
This is seriously a massive failure on the part of Google to handle their own shit. I often wonder if these people are too old or out of touch to appreciate how this kind of thing may or may not blow up on social media.
To not consider how this might cause public relations issues especially when dealing with someone who has an established social media presence seems like a misstep.
On the flip side, maybe they thought it was worth it. Weather the storm for a few days and Twitter will move on. It's hard to say.
Jeff Dean and Sanjay Ghemawat developed the system called MapReduce at Google and published a paper about it. They didn't invent the map or reduce functions, nor is the MapReduce system they published really about map or reduce functions.
The correct name is really MapShuffleReduce and the most important step is the shuffle, because it's a distributed sort. Of course, they didn't invent distributed sort, but combining these three concepts together in a distributed fashion and running it as a production system is really what was important.
MapReduce and MPI are very different. MapReduce doesn't use message passing: the map phase reads inputs from sharded files in a separate disk system, applies map to the inputs, and writes the mapped outputs to temp sort files on a separate disk system. Then the shuffler sorts those and writes the outputs to the appropriate destination output shards in a separate disk system, at which point the reducer reads them, applies reduce, and writes the final outputs, sharded by key, to a separate disk system.
The mappers, shufflers, and reducers are all independent of each other, reading and writing from the filesystem, and managed by a coordinator. There's nothing like MPI, other than the use of the Stubby RPC system, which sort of resembles MPI but has completely different distributed communication semantics.
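For readers unfamiliar with the structure being described, here is a toy single-process sketch of the Map → Shuffle → Reduce pipeline. This is purely illustrative (the function names are mine, and the real system distributes each phase across many machines, with the shuffle implemented as a distributed sort over intermediate files), but it shows why the grouping/sorting step in the middle is where the real work happens:

```python
from collections import defaultdict

def map_shuffle_reduce(inputs, map_fn, reduce_fn):
    """Toy single-process illustration of Map -> Shuffle -> Reduce.
    In the real system each phase runs on many machines and communicates
    via files on a separate disk system, not in memory like this."""
    # Map phase: each input record yields zero or more (key, value) pairs.
    intermediate = []
    for record in inputs:
        intermediate.extend(map_fn(record))

    # Shuffle phase: group all values by key (the distributed version
    # achieves this grouping with a sort over the intermediate files).
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)

    # Reduce phase: fold each key's values into one final result.
    return {key: reduce_fn(key, values)
            for key, values in sorted(groups.items())}

# Classic word-count example.
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return sum(counts)

print(map_shuffle_reduce(["the cat", "the dog"], wc_map, wc_reduce))
# → {'cat': 1, 'dog': 1, 'the': 2}
```

The coordinator, sharded file I/O, and fault tolerance are exactly the parts this sketch omits, which is the point of the comment above: the novelty was the production-grade distributed plumbing, not the map and reduce functions themselves.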
While I’m a spectator to this unfolding story and am reacting to a pretty cursory overview of what happened, this has the distinct feeling that Google thought they were moving in for a checkmate by, in their words, “accepting a resignation”, only to have it — very predictably? - blow up in their faces completely.
Even if Google is presenting the chain of events faithfully, their final move to jump on an opportunity to remove the researcher and call it a resignation seems so aggressive and incongruous that from the outside it makes it seem like this conflict is rooted in a larger and more difficult relationship that they calculated was no longer in their interest.
I’m wondering if the cost benefit analysis is still looking that way on the inside, because this move and the attention it’s causing is so contrary to their stated goals that I have to wonder if Google is committed to those goals at all. Others must be wondering exactly the same thing.
> While I’m a spectator to this unfolding story and am reacting to a pretty cursory overview of what happened, this has the distinct feeling that Google thought they were moving in for a checkmate by, in their words, “accepting a resignation”, only to have it — very predictably? - blow up in their faces completely.
I wouldn't say this has blown up in their face. To be honest, I've not seen that much drama about it beyond one post. Have I been missing something?
This seems a very solid move from Google. Someone tried to bully their way into getting information from Google they shouldn't give out. If you go to HR in confidence that confidence should be upheld. The fact someone had to go to HR for a peer review and the fact it was upheld states something.
If someone says "Do X or I'll quit" when you're not going to do X, it is a resignation, and it seems quite standard that a company wouldn't want such an active detractor within its ranks.
I believe she basically sent an email to fellow employees telling them to stop working. That alone is a fireable offense. That is completely nuts and unacceptable workplace behaviour. The fact Google didn't fire her for that should speak volumes.
To me, from a HR point of view, this is completely the right move to make.
I believe she basically sent an email to fellow employees telling them to stop working. That alone is a fireable offense. That is completely nuts and unacceptable workplace behaviour. The fact Google didn't fire her for that should speak volumes.
This is little more than gossip unless you can point to something specific that people can judge for themselves. Saying “she basically did X” is a clever way to influence how people feel about her.
I’ll admit, I had negative thoughts till this thread. Now I’m not so sure. She’s definitely passionate, and one might say aggressive, but more and more people are saying that it’s extremely unusual for Google to demand a rejection for academic reasons rather than business concerns. Whether she was a good employee is kind of beside the point now.
> This is little more than gossip unless you can point to something specific that people can judge for themselves. Saying “she basically did X” is a clever way to influence how people feel about her.
That is totally not the same thing as telling your fellow employees to stop working. I like to think I’m a pretty reasonable fellow, but it’s hard to see that point of view.
I felt similarly to you yesterday. Now I’m not so sure. It sounds like she was criticizing company process and was frustrated not being able to have any impact on what she perceived were real problems to the integrity of the process. I can empathize with that feeling, and I’ve done some embarrassing or unprofessional things when I was younger and slightly less wiser in similar situations.
I dunno mate, people here are suddenly trying awfully hard to repeat their points about her personal behavior / paint her as an unprofessional loon. And that’s a pretty convenient distraction if her criticisms were true, wouldn’t you say?
Honestly, if a black woman was saying that DEI isn't working at a company, I believe her more than I believe management. Even with a reputation for blowing up at people.
I don't think that's the narrative here. The narrative is that a black woman told people to stop working on DEI. The company begged people to carry on; she says it's not working, so they should stop trying.
I think that's a fair concern. If Women and POC are being encouraged by a company to use their time at work on something that doesn't work, they're asking women and POC to sabotage their own careers by diverting their effort from accomplishments the company will actually reward them for, and only asking women and POC to do so.
How is saying "stop writing your documents because it doesn’t make a difference" "totally not the same thing as telling your fellow employees to stop working"? Assuming your fellow employees are writing those documents for their job, isn't telling them to stop doing that exactly the same thing?
She was saying that right now, even if you write such documents, your work essentially won't matter due to some perceived problem with the process. Or at least that's how I read it.
You're not wrong. But it's also the least charitable interpretation anyone could have. Why would anyone tell their coworkers "Stop working, go to McDonalds instead"? Nobody would do that, because that would be both unhelpful and dumb.
Now, maybe it's true that she was telling her coworkers to come play dota 2 with her for 8 hours on company time. But without far more context, we can't know. And there's almost zero additional information here.
Think about how you'd want to be judged. Would you want someone to take a single sentence, stripped of context, and parade it around as if you'd said something you didn't? I wouldn't.
Listen, yesterday I was in complete agreement with you and the other commenter. I felt like she might have been an entitled employee, spreading drama wherever she went, and that Google had been generous to humor her for as long as they did. But today I'm (un)surprised to find myself feeling like I was being a judgmental ass based on very little information.
Her actions seemed crazy, but now that the dust has settled, it seems like her heart was probably in the right place, and that Jeff was giving her an exceptionally hard time for unclear reasons. Her reaction reminds me very much of how I acted at the breaking point. It gets exhausting to try to work around someone who is determined to stand in your way.
You'd be correct to say that I'm guilty of the same thing in reverse: I'm extrapolating far in the other direction, inventing all kinds of plausible-sounding stories about bully managers and such. But it doesn't feel like mental gymnastics to read her comment as "I am extremely frustrated and disappointed that writing these papers seems not to matter whatsoever, and we should probably take a hard look at whether we're having any effect."
That boils right down to "stop writing your documents because it doesn't make a difference," just offensively curt. And yes, phrasing does matter, but people keep saying she said X when it sounds like she said Y.
If somebody used HR as the mechanism to communicate feedback on the scientific rigor of research I was leading I would also find that concerning and objectionable. Human Resources is not peer review.
Honestly, if someone had to use HR to give feedback on anything, I would be more concerned as to why they couldn't say it directly to me.
HR wouldn't have made the final decision here, it seems other departments did. HR was just how someone felt it was safe to communicate that feedback, which for me is a major issue in a workplace.
I think there's a place for HR to help somebody give feedback about social or work-style issues. If somebody has a track record of getting defensive with face-to-face feedback it might be useful to have a manager present as a mediator and go through a formal process.
That's just not the same as giving scientific feedback in the review process for research and shouldn't be in the way there.
I think you're missing the massive point that someone thought giving the feedback in a non-anonymous way would have resulted in retribution. That is not defensive, it's offensive. You seem to be missing the fact that the woman seems quite happy to try and bury people over perceived transgressions.
Someone thought they HAD to go to HR to provide the feedback because no other way was safe. If you care about the scientific review process, that should never happen, and when it does there should be serious action taken.
What if you were the sort of person with a very public history of suing your employer? Could you understand then why they might be very formal when dealing with you, especially when needing to deliver news you'd likely not be a fan of?
I don't really see how that's relevant to the specific issue of scientific rigor, and in this case it's a strangely one-sided issue. As Dr. Gebru pointed out, soon after she took steps towards legal action against Google (I don't think a suit was actually filed), she was also given an award by Google for her work and performance.
Also, it's strange to be giving Google so much benefit of the doubt here. On the same day this happened, the NLRB filed two formal complaints against Google for retaliatory firing practices. If any participant in this drama has a known record of retaliatory and unfair labor practices, it is Google, as documented by the federal government's extremely detailed complaint.
> If someone says "Do X or I'll quit" when you're not going to do X, it is a resignation
I mean, whether or not this stands up legally isn't worth discussing but I don't think this is typically how an ultimatum goes. Rather it's more along the lines of "this or I'll quit", "okay, we're not doing this", "right, I quit". It's not called calling your bluff for nothing.
I agree that her email to the Brain group makes her position at google almost untenable, but that's a separate question. However, it might make her choose not to fight them on the resignation.
> However, it might make her choose not to fight them on the resignation.
How can she fight? Just curious, the US is almost completely at-will employment. Fighting to establish that you didn't resign but were fired, dirtying your name in the process, seems odd. Google seems to be saving face for her, if anything.
> How can she fight? Just curious, the US is almost completely an at will employment.
Even with a contract? I know little enough about US labour laws, although I'm pretty sure it's not at-will all over. Where I am (not the US) you can sue for unfair dismissal.
> Fighting to get your name dirted that you didn't resign but were fired, seems odd.
Yeah, I agree, if there's a good chance the resignation stands. However, if she could get a court to say she was unfairly dismissed it might be a different case, as it would support her claim that Google has acted unfairly towards her. None of this publicity is going to help her future employment potential generally, but it may help in the niche circles she might be aiming for.
This morning, there were front-page posts about this on the New York Times, Washington Post, Wired, Google, BBC, and Financial Times. It's being covered pretty widely.
I don’t think you’re incorrect on any particular point.
Just anecdotally, I don’t think a man sending a “do x or I’ll quit” email would be met with the same response. I think issues of race and gender are playing a role here, and that’s a shame, because while firing (accepting this resignation) of this researcher might make short term business sense, it seems like it does harm to Google’s long term credibility in trying to engage with reducing the barriers faced by (among others) women and people of color.
It’s a shame there wasn’t a pathway to keeping the dialogue going, for example by getting in touch with her and letting her know that some things she was asking for wouldn’t work, but that you could work with her to accomplish her goals in some other way.
From my privileged outside point of view, that seems like it could have been a more humane response which could have disarmed the conflict. I think opportunities to pursue options like that are de-emphasized when management are put in an all or nothing position.
> I don’t think a man sending a “do x or I’ll quit” email would be met with the same response.
Ugh, this is a bit infuriating to me. I think there are a ton of parallels in this whole situation to the James Damore memo issue, and IIRC Damore was fired pretty quickly: Google employee sends out an email to a wide distribution, and while that email may make some valid points, the overall tone of the post guarantees it to be a net negative to the company, and then when the employee is let go, they bitch and moan about how there is some sort of conspiracy in the company against them.
I believe firing Damore was the right decision and I believe Google was right to accept Gebru's resignation.
I think that Google viewed her as a ticking time bomb, so they might as well detonate it early to try to contain the explosion. She had threatened to sue Google in the past and was most likely gathering even more ammunition for her lawyers.
Edit: I'm not sure what this tweet is referencing, but it's an indicator of her attitude towards her superiors, I can see why they would want to let go of her.
You might be on to something. The paper doesn't seem too controversial or especially amazing, the objections seem minor (some citations and updates), maybe it's just a pretext. Could be about a lawsuit.
I don't see what else they should have done? This researcher made some ultimatum. As part of it, she said if Google refused, she would set a date to leave the company. Google obviously did not concede to her demand, and obviously didn't think it beneficial to let her be a thorn any longer, so took her up on her offer.
Seriously, I'm a little puzzled that when someone makes an extremely demanding ultimatum, one that includes challenging the rights of other people in the company, and the company declines to agree to it, we're expected to be shocked that the person doesn't get to dictate all of the details around their resignation.
What I get most from this whole discussion is the extremely nauseating level of entitlement from some employees at Google.
If I storm into my manager's office and demand a set of conditions, or else I resign on some date, and my manager says "Well, fine, we accept your resignation, but it's effective immediately" I don't really have a problem with my manager calling it a resignation.
Did it blow up in Google's face? A bunch of perpetually angry people on Twitter are angry. So what? Google's actual operations are not affected one tiny bit by this episode. Outrage has no power.
Social media is not real life. Every company, Google included, needs to simply ignore social media outrage and focus on their mission instead.
Internet outrage only has power because companies give it power. Angry words and hit pieces from biased media do nothing on their own. If you don't let them make you afraid, you have no reason to be afraid of them. The internet screamed at Coinbase after the company showed their internal activists the door, but Coinbase is doing just fine. See? Nothing to be afraid of. Just get rid of troublemakers.
One thing that is missing here is the set of demands the researcher brought to leadership. I didn't see either party bring them up in their side of the story.
This sums it up almost perfectly. The only thing I would add is that things like these tend to have a pattern. If a single person speaks up and you have multiple others claiming similar treatment, it significantly increases the likelihood of the workplace being hostile to such persons.
Also: Google's bar for hiring is extremely high. For multiple senior engineers/managers to corroborate this description is rather damning.
I would be absolutely furious if a manager blocked my work from being published. Even a manager who has worked as a researcher, in my experience, significantly lacks the expertise to be making such judgments. A research manager’s job is to be familiar with work across an entire portfolio, so they will be necessarily less knowledgeable than individual senior researchers. Presumably Google generally also feels this way seeing as this appears to be the first case where prepublication approval was denied for content. I would never work somewhere where management had such a lack of respect for my own judgment as a researcher.
No, in my experience, you are wrong. Becoming an industrial researcher trades some independence for money, steady project funding, and the opportunity to more directly impact the world. In return your employer gets first or sole access to your research in order to make their products better and more profitable. Clueless suits forcing you to degrade the integrity of your work by demanding you soften your message or by puffing it up with PR need not be part of the equation. This case is over a research paper, not a press release. Smart companies know that employing researchers that are honest people rather than sycophants is good for their bottom line.
Probably even Google knows this, and this is just a mistake. If I were in the position of this researcher, I'd threaten to quit, too. Google's actions suggest it wants to appear to care about ethics in AI rather than actually caring. That's not an environment conducive to her having any impact, so why stay? Clearly she's not going to have a problem finding a job elsewhere.
It's frankly unprofessional for Jeff Dean to post only his email in this document without providing more context. The news media has in cases provided balanced coverage that included the email from Timnit that prompted Google's action: https://www.platformer.news/p/the-withering-email-that-got-a...
This post from Jeff Dean simply underscores that he has failed to balance the need for diplomacy with the research thesis of his own research group. I'm not saying he's being malicious, but incredibly tone deaf. While I appreciate he gets "attacked" at nearly every talk (at one retinopathy talk I saw him grilled for 10 minutes on race), he's going to continue to get this sort of attention until he can stop being the Googler who wants to tell you why their view is right.
It seems reasonable to choose to publish something you wrote yourself (and presumably cleared with legal/PR) and not publish something written by someone else with an intended internal audience. It would be WAY worse if they unilaterally published Timnit's private emails publicly.
The 'email from Timnit' doesn't exactly look good... I would expect a manager posting company lists in that manner to be dismissed (or a junior employee to receive some mentioning about professional conduct, if not dismissed).
I don't believe anyone has published the ultimatum email, which I think would be more accurately described as what precipitated the outcome.
Surely the email can be found in the inboxes of some of her friends still at Google. Any of her friends that were on the same mailing list could give her a copy. Assuming she has any friends at Google.
A large part of the criticism directed towards google is that they weren't particularly transparent in their handling of this situation. This is making that look worse.
I'm not sure what you mean by "unprofessional" here. In my eyes, a vague statement that they had some irreconcilable differences but she's still a great researcher is nearly the archetype of professionalism. It would be extremely unprofessional for him to publicly criticize an email Timnit sent privately to the Women in Google Brain group.
I think we’re starting to see companies discover the limits of how far they’re willing to let employees push their “woke” agenda using the company’s name.
It seems disparaging your own company while ignoring research that counters yours is Google’s limit, but we’ll have to wait to see the research paper if it leaks.
Reading the abstract, it seems to be a pretty vanilla-esque survey paper.... which implies there's some more info that will only come out if this goes to court (doubtful).
The followup does not answer the major questions raised by this part of Dean's original email:
> Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted. A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. [...] We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.
When it was "approved for submission," was that approval final and actionable, or some kind of conditional approval? Is demanding a retraction after an approval the normal way it works at Google, or was this an unusual occurrence that Gebru was right to question?
People have argued both cases. The tl;dr is that she said "I will resign if XYZ" and Google was like "we accept your resignation effective immediately." Whether or not that counts as firing is a matter of opinion.
There’s no legal requirement to pay severance. That’s just a courtesy most companies do in exchange for a non-disparagement agreement or during layoffs to save face a little.
She made demands and gave a resignation date if those demands were not met. Google said that they could not meet those demands and accepted her resignation but because of her conduct (emailing hundreds of Googlers), the resignation date was made immediate.
I believe on currently available information she did not give a date, but said she wanted to "discuss her exit timeline".
Also it's likely that her contract allows Google to provide her pay in lieu of her working out the end of her contract, so they are probably able in general to move dates like this (not commenting on the other aspects of this situation though).
Had Gebru not raised an ultimatum, what would the consequence of the paper not passing internal review have been?
It sounds like she submitted the paper anyway, would there have been consequences to those actions?
This topic, over the past couple of days, seems to me to have become fraught, polarised and argued to distraction and faction.
It's not a stretch to say that the world has a problem with discrimination, or that big personalities can rock the boat when they go against the grain. Or that corporates have interests to defend.
I'd like in this case, though - not to say that all the other factors aren't worthy of examination as subjects in their own right - to see this paper. The authors/collaborators are leaders in the field. There are legitimate concerns about AI/ML; the validity/reliability of data from which, nowadays, consequential outcomes are derived.
Please, let's ignore - at least for now - the corporate politics, the heat of the race/gender politics, the PR machines weighing in (look at all the news outlets grabbing this atm), and take a look at what the paper says. The right people (those in the field and qualified/able to do so) should be listened to.
As for the politics of this, and the posturing and politicising and agenda-building on all sides: that doesn't help.
AI and ML affect us all now. This is a trend. Let's at least expose the findings of acknowledged leaders in the field - let's see this study - before we descend into the sideshows of politics and factionalism. Please.
Edit to clarify: tl;dr - I've not seen/read this contested paper. Whatever its contents, I want to see them, and imo that is more important than the current controversy blizzard etc.
Sorry for not being clear. My only point is that I would like to see the research paper. I respect acknowledged and qualified domain expertise. It's an education for people like me, non-academics and not domain experts (but working with some ML/AI stuff). I want to see the thoughts and discoveries and recommendations and cautions and all that - from people who know their stuff.
The rest, to me, is a distraction. This specialist/academic, has a really good rep. I care not about character or friendliness or whatever. As a brain, this person is documented as having the chops. I want to see the report.
I am reluctant to accuse Jeff Dean of bad faith, but this argument doesn't scan. I've been on the inside of an AI team during a crisis and seen how senior management spun the story to obfuscate critical details and avoid responsibility. Dean is slandering Gebru in a manner that will make it easier to dismiss her work and the work of other AI ethicists (especially women and bipoc) in the future. He and Google are themselves guilty of lacking rigor (i.e. ethical rigor). Worst of all, I expect that racists inside Google and the wider industry will use this argumentative structure to neutralize ethicists and bipoc in the future. This is truly despicable. I used to be a great admirer of Jeff Dean.
> 1/Man there’s so much to pick apart. Let’s start with one thing. I want to ask if Jeff Dean has looked at the publication approval policy that he keeps on mentioning in his email. Like, for example, a simple look at the website? Let’s read.
> 2/First off “Start the PubApprove process at least 1 week in advance of any deadline”. Okay, not sure where the 2 weeks in Jeff’s email came from.
> 3/ But ALSO “The perfect policy” “There is no such thing as the perfect policy. Fortunately Googlers like to do the right thing. Please do that here—read the policy and do what makes sense.”
> 4/ ALSO “Meanwhile, we strive to make the PubApprove process as lightweight as possible: hopefully eliminating the temptation to skip it.” I don’t know man you might have to resign immediately if you just “do what makes sense” so beware.
> 5/ Finally, I wouldn’t want to know what would happen to you if you had “the temptation to skip it” mentioned on the website. Beware researchers
> 6/In spite of this, we gave a heads up BEFORE even writing the paper—on September 18. Saying that we were about to write this paper. So much to say here, so much. But I’ll stop here for now.
My impression was that James Damore was fired when his memo alleged that men were essentially a target of discrimination, while here, it seems, this researcher has been fired in a way that itself looks like an act of discrimination.
I don’t think there’s a contradiction in the reaction, personally.
I read the abstract. The paper doesn't critique Google. It critiques the current focus of AI research on building bigger and bigger models. Google is one of the leaders in building those big models. So what? They also invest in research to make smaller models that perform equally well. They have a huge incentive - those big models are very expensive. I don't think anyone at Google really cares if the paper is published. I'm pretty sure it will get published sooner or later. The authors could've addressed the reviewers' comments pretty quickly and then re-submitted to another conference. The whole thing was blown out of proportion. She picked the wrong battle. Should've bitten the bullet.
I don't know why Google is bothering to say anything. It's pretty obvious that those who support the person who left have anchored their opinions and won't change under any circumstances, and those defending Google are anchored as well.
While that is likely true, there's value to staking out your position clearly. It reduces speculation and can help you at least claim a side in a fraught conflict that could quickly turn both sides against you.
I've read her email. In my first job, right after college, I also sent an inflammatory email (nothing compared to the one she sent) to my manager (not a whole group of coworkers), and I got seriously reprimanded for it by my then manager. Even a decade later I cringe when I remember the email I wrote. I have no idea why people think it's okay to send emails like this and expect not to get fired.
> Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems
You have to have sympathy for her position but in the end if all you do is offer criticism then it's hard to see you inside Google, let alone as a manager.
This letter reads like Exxon-Mobil chastising their climatologist for not taking into account their latest research on fuel efficiency, or Philip Morris laying into their house doctor for discounting the psychological benefits of rich, mellow flavor.
Funny that Jeff Dean would write and share this in a Google Doc. After 20 years of being a driving force for the web, there was no easier, more appropriate tool available to a senior Google employee for sharing his opinion online.
TL;DR: Jeff and Co got sick of Timnit's woke bullshit, pushed back, she threw around some ultimatums, they called her bluff and pushed her out. Personally I'm glad that Google is finally cleaning house. More of this plz.
Every "woke" company in SV is going to go through this. I really want to know how "diversity" helps someone build a software company, say a CRM app. How does skin color/race help with that? If the ideas of a different race matter, why should they come from employees rather than be found through user testing/customer feedback? Coinbase, Google, keep it rolling. No fully remote company has to go through this bullshit.
> "So if you would like to change things, I suggest focusing on leadership accountability and thinking through what types of pressures can also be applied from the outside. For instance, I believe that the Congressional Black Caucus is the entity that started forcing tech companies to report their diversity numbers. Writing more documents and saying things over and over again will tire you out but no one will listen."
People are forgetting about the part where she basically encourages her colleagues to talk to Congress, at a time when tech CEOs are regularly being hauled in front of congressional committees. By the point that was written, the relationship between her and Google was clearly adversarial. And it wasn't Google that made it adversarial.
I couldn't imagine writing something like that and keeping my job
I believe this excerpt is actually what led to her firing on such short notice. Actually, I find it surprising that this is going largely unnoticed.
Jeff's emails, which were surely also crafted with sign off from PR and legal, attempt to frame this "resignation" as an issue of the paper's content, as well as the inability to meet Timnit's demands. However, this framing does not seem to align with the experiences of other Google researchers, and does not explain why she was terminated in this particular manner. If that were really the case, why didn't they just wait for Timnit to return from vacation, try to resolve the issue, and if unsuccessful, accept her resignation? Why not try to deescalate and handle the situation more tactfully?
Timnit's account of the email she received also seems to confirm this:
"we believe the end of your employment should happen faster than your email reflects because certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager. As a result, we are accepting your resignation immediately ..." [1]
So, what aspects of that email triggered such a prompt termination? Well, Jeff's first email says that Timnit was telling employees to "stop work on critical DEI programs", and I've similarly seen some comments focus on the part of her email that reads, "What I want to say is stop writing your documents because it doesn't make a difference," but Timnit is clearly not saying to stop DEI work, she's saying to "focus on leadership accountability" and to apply external pressure.
That seems to me to be the real issue, and the sort of thing that would trigger pressure from legal. The fact that the tone of the email is frustrated, and that it airs some dirty laundry, would not necessarily result in firing; if leadership really wanted to keep her on, they could have worked something out. According to Timnit, her own manager was not informed, she's well liked by her academic community and her Google reports (who are expressing their dismay on Twitter), and Jeff has regularly praised Timnit in the past. So, the mention of congressional pressure in her email really seems to me to be the real catalyst here.
That paragraph, in hindsight, sounds like foreshadowing. This news cycle is her own "pressure applied from the outside" because she got tired of writing papers.
A lot of people mistakenly think Google is part of their family and has incentives other than making profit and avoiding bad PR. It's not really a surprise they took the first opportunity they could to fire someone who has in the past threatened them with Lawsuits... Don't be evil Google died a long time ago, there's nothing to see here, just business as usual.
> Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback
It always amazes how blatantly authoritarian these "woke" types are. I would not be surprised if the sole reason she wanted the identities of every consultant was to engage in some sort of witch-hunting and bigoteering[0]
In the context of peer review it’s common, or at least not uncommon, for reviewers to be anonymous. Personally I think it should be completely blind both directions to promote maximum objectivity.
Whether that applies at all in this case is a separate matter.
True, but it's also the norm that a peer reviewer's feedback is provided to allow the author to address issues. In Timnit's account, she received neither the identities nor the feedback itself.
A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar (this popped up at around 2pm). No one would tell you what the meeting was about in advance. Then in that meeting your manager’s manager tells you “it has been decided” that you need to retract this paper by next week...
And you are told after a while, that your manager can read you a privileged and confidential document and you’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything. You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.
According to her account, she was not given the feedback when she was told that the paper had to be retracted. After discussion, she was told her manager would read her the feedback without telling her who wrote it or what process was followed.
This is different than a normal peer review process that, anonymous or not, aims to provide feedback to the author so they can incorporate it.
Sure that’s a fair point but asking for the reviewers to be identified is way out of bounds. What possible legitimate reason could she have for that request?
I think it was less "who said this?" and more "who was the reviewer, how were they chosen, what guidelines did they follow?"
Peer review in Academia is anonymous but has relative guarantees that the reviewer will have sufficient knowledge to judge your paper, won't be an academic rival out to skewer your work, and is generally capable of being a disinterested judge. As many others have pointed out here, Google doesn't do peer review in the academic sense, they do pre-publication review, almost always on short notice, for the purposes of protecting IP, mainly.
So imagine you're called in and told to retract your paper on substantive grounds provided by an anonymous reviewer whose feedback is sufficiently bad for you that you don't get a chance to amend your paper, it's just retracted, full stop. You can't have their written feedback, but after you kick up a fuss, you'll be granted a recitation of the feedback that won't change anything about the fate of your paper.
All this in the context of review that's normally a rubber stamp when trade secrets aren't involved.
Demanding to know who torpedoed your paper in an extraordinary way doesn't seem so unreasonable.
Good point. Asking who gave the feedback could mean asking for their qualifications, but be interpreted as asking for their names. Or their qualifications might be enough to identify them.
This tweet may explain why the reviewers do not want to be confronted:
> Nothing like a bunch of privileged White men trying to squash research by marginalized communities for marginalized communities by ordering them to STOP with ZERO conversation. The amount of disrespect is incredible. Every time I think about it my blood starts boiling again.
I can imagine she felt helpless when her work was being blocked. Imagine you were going to start a project and were told that someone in your org was against it and that you couldn't do it as a result.
Maybe take a step back, consider the possibility that you may be wrong, and if you still think you are right, then attack their ideas and arguments, instead of acting entitled and demanding to know who said this or that.
Like PG wrote, the worst thing you can say about an idea is that it is wrong, you don't need to resort to personal attacks or x-isms.
Woke politics is a red-herring. Google is typically cool with woke politics, as "anti-sjw" types are usually quick to point out. If you take it out of the culture-war context its a lot easier to see what this story is all about.
I wonder if stories like these should be off-limits for accounts that were created specifically to spread FUD. Several accounts just like this one appeared simultaneously in one of the other submissions as well. They added nothing to the discussion other than to make ad hominem attacks on Gebru. We’re seeing the same thing here.
I am more concerned with whether she is really a good researcher or not. If she is, she should just continue her great work and contribute real good stuff to the field. BTW, what exactly is Ethical AI?
With her credentials, no less than "a famous scientist that's authored landmark papers in AI research", she could easily start her own AI company, or at least a top-tier consultancy, to advance the AI research frontier at an unprecedented pace. She could even hire top-notch researchers from the SJW community and show Jeff what real science looks like. I'm sure VCs will line up to fund her novel ideas.
I think Google can't be forced to admit that when postmodernist racial-theory ideologues are in the research/leadership ranks, this kind of review is necessary. So Google is just going to say this is how we do it (or always did it).
Jeff Dean has(/had) a cult following at Google. Internally, there was even a large collection of Jeff Dean jokes (like Chuck Norris jokes of yesteryear).
It is sad to see him drop his credibility and become a PR mouthpiece. I know how this happens and I know why. But it is still sad to watch. Like watching your favourite band sell out to a big label.
Why doesn’t Jeff just suck up his pride, apologize and hire her back rather than write increasingly thin rationalizations for his reaction?
High level research (and engineering) will involve egos and you should expect this kind of push back when you stop someone from publishing. Nothing here justifies how he handled it
Of course, I think many of us have seen this reaction before, maybe even done it ourselves. It’s bad for everyone, don’t do it Jeff!
Ok, put another way. What interest does Google have in re-hiring her? She left on a bad note and brought tremendous negative PR. The company isn't losing business over this as the treatment didn't rise to abuse or harassment. Her team will operate just fine - everyone is replaceable.
Apparently Google has varying degrees of transparency when it comes to decision making. To be expected at a large company. There are some things my company & boss do that seem abnormal and out of character. You have to go with the flow sometimes and shut up. It seems like this person had some kind of utopian mindset where she felt management was obligated to explain everything around the blocking of her paper.
I doubt a guy like Jeff who wants to do tech all day has a vendetta against black women that want to publish papers.