“After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees,” a Google spokesman said in a statement.
This feels pretty cut and dry to me. Meg exported business material from the company. That's a fireable offense at any company.
I feel bad for her but I honestly don’t understand why she is even surprised she got fired.
It's so vague it's hard to draw any conclusion without specifics.
"Exfiltration of confidential business-sensitive documents" could be nefariously downloading proprietary research for personal gain... or innocently downloading a Slides presentation to your personal phone as a PDF in preparation for a client meeting in case it's impossible to connect to their Wi-Fi. Or saving e-mail communications with your manager in case you need them for personal legal defense purposes.
Companies can be amazing at twisting innocent actions into horrible-sounding policy violations when they want.
I'm not saying this is the case, I'm just saying we don't have enough information to jump to a conclusion.
According to an Axios scoop from January, she was running a script to find emails in her Inbox related to Gebru and sending thousands of files "to multiple external parties".
I mean, multiple external parties could mean two parties and technically not be a lie. It could also be multiple external services that she has an account at and still not technically be a lie. Someone might not be specifying how nefarious the goings-on were for legal reasons, but it could also be that they're not specifying more clearly because then everyone would make the pffft noise and point and laugh.
I guess I just don't trust unclear language that emanates from somewhere within big corporations.
I think it's the "external" part of the phrase that's a problem, not the "multiple". She could've sent it to one external party and that still may have been a fire-able offense.
(Of course, this depends on what she was sending, if it was really protected information, and who she was sending it to. If she was sending evidence of criminal wrongdoing on the part of Google to her lawyer or a government official, then I don't know).
I don't have Google experience, but at many companies with a security or compliance policy you can be summarily fired for forwarding company email to a personal address. You would have known this in advance, whether through training, a manual, or warnings inside the email client.
I remember an all-hands after someone had once forwarded a relatively innocuous email to her Yahoo account, and we all received a big reminder.
It’s true that companies don’t like that. It’s also true that common advice to people experiencing some kind of workplace harassment or abuse or bullying is to follow up verbal conversations in writing, and to keep your own copies of those documents in case the company decides to accidentally delete them before they get a subpoena or whatever.
So a question is then whether it’s morally sufficient for someone to be fired for this sort of behaviour. It feels to me like the reason for the firing was not really at all related to breaking the written policies in the company handbook and a lot more related to breaking the unwritten policies about stirring up trouble and dissent.
No, sending it to the wrong 2 parties is still just as bad as sending it to more than 3 wrong parties, but if you send to 200 external parties there is more of a chance that you are sending something to the wrong party.
Yes, you could make the argument that this information is not definitively damning ... at the same time it looks pretty bad for her, and it seems likely that Google acted because someone was actually doing bad things.
Also worth noting is that from a PR perspective, the press doesn't seem to be interested in anything other than making Google look bad, which strengthens my view that they are narrative creators more than anything.
So, if you are being abused by your boss via email, you can't give the evidence to a lawyer, but first have to apply for a court order without showing any evidence in order to get the evidence? I think this is unlikely to be the law, and if it is, it shouldn't be.
But without solid evidence of some sort, one cannot establish that there is a case fit to be heard in court, and so there will be no discovery. It is a bootstrapping problem.
Well, that's true. But you're still having to give your employer notice that you are talking to a lawyer before the lawyer has seen anything. This seems seriously unfair, and I hope that NDAs are not enforceable under such circumstances.
[edited to add] Also, it seems like it would be a big waste of the court's time.
From both a practical and ethical point of view, I find it more than acceptable to retain copies of email correspondence one is participating in, and to share that correspondence with one's legal representation. The alternative puts too much power in corporate hands.
I can understand maybe showing an email or several perhaps to a lawyer, a court might well consider that reasonable even if it's a technical violation, or just give a warning. It's a matter of proportionality. Breaking a speed limit by a few mph won't get you banned from driving. This is completely different though, it was an automated query trawling and exfiltrating vast quantities of mail threads.
That's not the point. Exfiltrating the data means it's available to non-employees (and it appears she did send it to non-employees which is by itself a clear and prosecutable violation), is not protected by the company's security and privacy policies, and would still be available to her beyond her period of employment. She's not entitled to do any of those things, especially since as you say she had access to the info as an employee anyway so she had no legitimate excuse.
Paradoxically, AI may have been part of the tech used to flag her, but it's all mundane at this point.
Kind of implying that AI is just 'tech' and there's no material reason to separate AI from everything else from an ethical perspective.
Google's main Search Engine should be the biggest point of controversy if there is one - the things they chose to filter, or not, or highlight, their ranking, etc. - a lot of that is algorithmic and derived from human input and it has massive impact. But there's no cool moniker like 'AI' with Hollywood movies about the tech to get people paying attention.
Perhaps it's not as strong outside of the Google Campus? It could probably run a large botnet, but maybe it would want to fly under the radar with each account?
Selective enforcement could be another issue. If this is common practice at the organization, then using it as justification for termination is a false pretense. Possibly still legal but not really ethical.
I've said this before, but if it's not accurate then whoever said it publicly should be fired immediately because that is actionable if it's a lie. Google has extremely good lawyers and a PR machine that cares what they say. There's absolutely no way that they are plainly making it up.
Google will happily take a slap on the wrist and pay out another few hundred million, as they did just a few months ago. It might even pencil out PR-wise to ensure Google controls the narrative on why these two employees were fired.
I know nothing of the particulars of this situation, but you are committing a simple error in your reasoning.
The purpose of PR isn't to tell the truth. PR people don't get fired for lying.
The purpose of PR is to make whomever is paying you look good. Sometimes, that involves telling the truth. Sometimes, that involves telling a highly cherrypicked, out-of-context subset of the truth. Sometimes, that involves torturing the truth beyond any recognition. Sometimes, it has nothing to do with the truth.
If it's not true, it's a high-level multiple-executive strategic decision, probably including the CEO who has apparently been actively involved at least since the response to Gebru’s firing boiled over, not a rogue employee.
Which isn't to say you are wrong that they should be fired, but it's kind of like saying whoever in the US government was involved in inciting the Jan. 6 attack on the Capitol should have been fired.
Yeah I have a friend who essentially got fired for the same thing. His crime? Putting some icons he designed for the company on his personal portfolio website. They never used the icons FYI.
While that might be an overreaction -- to be clear, it's the "they never used the icons" part that is the main problem here.
For designers or artists of any kind, the generally accepted practice is that you're allowed to display/use publicly released work (even if sold, e.g. in a book, TV show, etc.) that you've created for others for portfolio purposes, or work that you've been given permission to show. It's a kind of fair use.
On the other hand, if the material was never released, you don't EVER show it unless you have permission. Because that leaks potential future ideas, different directions that are inconsistent with their branding/image that they don't want public, "mistakes" they made, etc.
You absolutely should NEVER post stuff to a personal website you did for work that was never publicly released, unless you have explicit permission.
At this point, I am moving past all this drama. This corporate soap opera televised on Twitter needs to cease.
If the person fired thinks that it was illegal, they should settle this in court with Google. A company can fire anyone for any reason that's legal, just the way you can leave the company for any reason.
It seems Timnit is upset that someone got fired via personal communication channels. Well, when you're locked out of the corporate ones for security violations, that's what is going to happen.
It’s pretty cool we can just take Google at its word after a 5-week investigation with literally no details coming out.
Google also decided to drop this bombshell on a Friday the same day they are announcing a rebranded ethics team. It’s clear they wanted to bury this story and the leaders of the old team.
I for one don’t plan to in any way give Google the benefit of the doubt here...
Some of these "AI ethics researchers" seem like wingnuts, and their profession appears to serve more of a PR purpose than a business purpose. What am I missing? Why are they so essential? Are these people the canaries in the coal mine, or do they simply exist because of paranoia and political correctness?
So, I work in medical research and was headhunted by (but turned down) Juul to work on sponsored trials and research related to smoking cessation and health improvement in patients who are switching to pods.
Per the article, "Google has recruited top scientists with promises of research freedom, but the limits are tested as researchers increasingly write about the negative effects of technology and offer unflattering perspectives on their employer's products."
Like... these AI researchers are delusional if they think their work is truly unbiased and that they can work at a company and get paid by that company to produce unbiased research. If I took that job at Juul even though it was ridiculous $$ I would have known what I was getting into and who was buttering my bread.
It's either sell your soul and move to industry and enjoy the industry $$ or stay in academia and enjoy your moral purity and being able to look yourself in the mirror at the end of the day but not being able to buy a house in the Bay Area.
Different companies, different industries, different rules.
Researchers can publish anything they like as long as it's uncontroversial. Google sponsors tons of work for which there is no oversight.
But if you're going to directly make the company look bad, then yes, it's delusional.
It's one thing to protect whistleblowers, it's another thing to make bad public statements. The previously fired researcher wanted to demonstrate that training AI was energy intensive and that this was a problem, though in the grand scheme it may not be, i.e. it's not as though model training is a primary source of energy draw. Google wanted to review the work and asked her to include more recent information about how Google had actually improved its energy consumption, given the old data set used in the research.
That's perfectly reasonable. Corporate HQ asks the researcher to include more recent data which doesn't make the company look so bad? On a scale of 1-10 in terms of interference that's a 1 or 2. I suggest the vast majority of researchers would be fine with that, and consider it ethical.
Juul probably hires scientists more along the lines of cigarette companies wanting to provide science to show that cigarette smoking is not harmful. That's another thing altogether.
I have a very hard time believing you can retain much moral purity after all the publish, publish, publish, grants and will they or won’t they tenure drama. At least the motives of the company hiring you to do research for them are clear.
I don't know. On the surface of it, it seems a very good opportunity to work in an area with lots of potential, but which also needs lots of regulation. You get to ensure decisions by machines will improve lives and not be unintentionally biased. It's also good money, knowledgeable peers, and a pretty wide net for R&D. Good times!
It's just a bitch that for-profit companies lobby for no regulations, and fire people who speak up to power.
However, we all know company culture does not tolerate speaking up to power. So these people also went about it the wrong way, going public too soon over something that was just theoretical.
What you need to do is gather evidence, and escalate as security violations internally. Use the proper channels. This is harder work than crying wolf, obviously.
> However, we all know company culture does not tolerate speaking up to power. So these people also went about it the wrong way, going public too soon over something that was just theoretical. What you need to do is gather evidence, and escalate as security violations internally. Use the proper channels. This is harder work than crying wolf, obviously.
This is an incredibly spot-on way to describe navigating delicate office channels to create change while not ruffling too many feathers. Well said! Can you sit on my shoulder please and advise me in real time throughout my career? Comments like this are why Hacker News is so much better than Reddit!
I think there's more to it. First is that they made their data collection much more sophisticated, and second is that people seem to be paying more attention to data collection in general.
I don't think the comparison holds. Juul's business is that of selling nicotine pods, as many as possible- there doesn't seem to be much wiggle space there. Google's business is not that of selling AIs, much less that of selling unethical AIs. It seems there's plenty of space for constructive research on how to improve, or at least monitor, Google products for better results.
It seems that the issue here is that Google didn't hire dispassionate ethics researchers able to deal with a wide range of ethical issues; instead it hired activists with a precise, narrow agenda. People who think that the world itself is biased (including the company they work for and their own management) and that Google should change its products to enact a change upon it. Once these activists found themselves coddled by the most powerful company in the world and showered with praise and money, it seems their egos just started inflating until they burst.
They wanted both the FAANG level compensation AND the academic freedom. They ought to know their role in this machine is akin to Big Sugar tricking the country into believing that fat was the real evil for 5 decades.
My wish is for companies to stop outputting "research" and researchers to work for the correctly aligned institutions...and accept the pay cut that comes with it.
It's a bit of a tragedy to be honest. We/I lived through both developing social networking and big data, and it's pretty clear that the community made huge mistakes that both hurt loads of people and we can't really take back or clear up at this point. Bad things happened to innocent people. Bad people got power over good people and did bad things with it.
AI has at least as much potential to be dangerous; ethical constraints on it are important when you consider the failure of common sense and community goodwill for SN and BD. However, every berk going latched onto the idea that they could write some sort of rubbish about "applying type 1 and type 2 thinking in AI design" or "why robots can't really be people" and founded communities dedicated to promoting themselves and getting money, at the expense of insight and real work and real constraints developed by people who know what they are on about and can really help.
I think most of the "AI ethics" thing is two things: first, for people that are actual AI practitioners, it makes "AI" seem a lot more powerful and interesting than it is. I think AI is a small part of the example problems you mentioned. Second, it's a way for humanities types to wedge themselves into a hot, high paying field.
I think it's unwise to minimize the impact and significance of AI at this time - this is the same sort of thinking that prevented proactive and effective management of Social Networking and Big Data technologies. We have to take it seriously this time round, and if it proves to be a y2k thing - well no problem.
You are right about the humanities types - the other side of the coin is that there is a definite class of insight that should come out of the social sciences and history that is relevant here. Unfortunately those fields are dominated by bullshit artists, which means that the signal-to-noise ratio is very, very low.
The "artificial intelligence" we should worry about at the moment are not computers, but corporations. And i believe they should sink money/energy into ethics the same way humans sink food/energy into ethics and i truly hope their environment does not favor the most ruthless and unethical one, in the same way i hope ours does not. Now as a species we are working on computer AI and, unsurprisingly, i would say those should use resources on ethics, too.
But i agree that many of the people who have such job titles, and make it into the media, seem to be wingnuts
Google created these jobs and deemed them important. Like, I agree, if AI ethics researchers are doing their job correctly Google will be unhappy with them, but this is really what Google offered: lots of freedom.
You can be as cynical as you want, but the fact that Google fired them with little rhyme or reason and destroyed their team is a BAD thing; we should be upset at Google for it.
I don't think that's true. AI ethics researchers should provide solutions, rather than simply pointing out problems, especially at companies. You need to provide solutions for how things can move forward; that's your job.
> You need to provide solutions for how things can move forward; that's your job.
I have no expertise in the field of AI ethics, and I'm not sure what to think about the Timnit drama in general, but I don't think what you're saying here is really accurate. There can be a lot of value in pointing out problems even if you're not the one who can provide a solution. If no one else has spotted the problem, then pointing it out can draw the attention of those who might know how to fix it before things get worse.
If you can say "hey, there's a flaw in the design of this rocket that means it'll explode if we launch it, wasting millions of dollars" then that's valuable to your company, even if you don't know how to design a rocket that works.
A company, Google included, will not comment on the circumstances of an employee's exit unless provoked. It is uncharacteristic for them to comment at all. If they are willing to go public with any statements, that indicates to me that they feel their reasons are iron-clad.
Or they think that the PR win for going to the press early is worth more than the lawsuits.
Comparing this situation to your standard firing situation isn't going to add up. The calculus is different, as can be seen from the mere fact that it's being discussed here and has news articles written about it.
Unlikely, I think: if their reasons weren't ironclad, they would keep quiet and prepare for the wrongful termination lawsuit, which would only be greater if they lied about the departure, since a lie would be slander, damage her reputation, and lead to lots more money.
I think them issuing this statement means they feel confident in their defense. Its being so specific also seems more credible, since that's something easy to verify (emailing company info out of the company).
How much is a wrongful termination going to cost them? Max a couple million?
How much do they lose from the PR of gutting their AI ethics division of people who didn't toe the company line? More than a couple million easily, IMO.
I can see them being just willing to take the loss on the wrongful termination if they get to smear her on the way out and think they can get her to accept a deal in the end that comes with a more stringent NDA.
> How much do they lose from the PR of gutting their AI ethics division of people who didn't toe the company line? More than a couple million easily, IMO.
Very few people really care about this issue enough to permanently change their behavior. My family was pretty upset about Gebru firing but when I visited all of their computers were still defaulting to search Google instead of Bing or duckduckgo.
A lawsuit where they lied about firing someone is much worse PR and then the truth about why they fired her would get out and it would be much worse. Also she hasn't denied the accusation.
> A lawsuit where they lied about firing someone is much worse PR and then the truth about why they fired her would get out and it would be much worse.
We already know they lied about firing Gebru.
> Also she hasn't denied the accusation.
The fact that she's not blasting the press yet isn't a point against her.
Did google lie about a verifiable fact in the Gebru firing? I would doubt that. If you could find something like that I would seriously reconsider how I viewed Google's statements.
But right now what we have is
Google: we fired her for exfiltrating data
Source close to Meg, as reported by a newspaper: she wrote a script to exfiltrate data related to Gebru
Meg: no comment
And your conclusion from this is Google is lying and she didn't exfiltrate data?
1. Gebru sent Google a conditional resignation (an ultimatum)
2. Google said "okay, we accept your resignation with immediate effect"
3. Under California law this is considered a firing, not a resignation (at least according to the discussion elsewhere in this thread).
Does this mean that Google lied when they said she resigned? I don't think so. Facts 1 and 2 are sufficient to establish that she "resigned" by anyone's typical understanding of the term. Fact #3 strikes me as little more than a legal technicality. She might not have "resigned" from a legal pov but we don't have to restrict our understanding of that word to the precise semantics of California employment law - just like it might be truthful to say that someone (OJ?) is "guilty" of a crime even if a California court deemed them to be "not guilty".
> Did google lie about a verifiable fact in the Gebru firing? I would doubt that. If you could find something like that I would seriously reconsider how I viewed Google's statements.
They literally denied firing her to begin with, stating that they had instead simply accepted her resignation.
> ... And your conclusion from this is Google is lying and she didn't exfiltrate data?
No I'm saying there's probably more information that we're missing that completely changes the circumstances (as was the case if you only looked at Google's statements on Gebru's firing). The word on the street is that she saw a wrongful termination coming and made backups of emails to discuss with her lawyers.
> They literally denied firing her to begin with, stating that they had instead simply accepted her resignation.
Gebru sent a conditional resignation (ultimatum) letter to Google. Google rejected those conditions and accepted her resignation. Whether or not you count that as resigned or fired is a definitional issue. But legally there are many jurisdictions that consider an ultimatum letter a resignation. The general rule has been "Would a reasonable person who read this letter conclude they had the intention of quitting if their conditions were not met?". And Gebru's letter really made it seem like she had every intention of quitting if her conditions were not met.
So this is definitely not lying in an easily verifiable way.
> No I'm saying there's probably more information that we're missing that completely changes the circumstances (as was the case if you only looked at Google's statements on Gebru's firing).
This is probably true, I'm not saying we know the whole story. I'm arguing that it is highly probable (>90%) that she exfiltrated data from Google.
> The word on the street is that she saw a wrongful termination coming and made backups of emails to discuss with her lawyers.
This might be entirely possible, but this also gave them reason to terminate her so it seems like a very bad move.
> In other cases, a claimant may give notice of resignation which is contingent upon factors within the employer's control, such as hiring a replacement. The employer does not become the moving party by securing a replacement. The separation is still a voluntary quit.
This sentence makes it seem like California is one of those jurisdictions.
> Unless she was working under the assumption that they already were going to fire her.
It really seems like if you're planning on suing your employer for wrongful termination, giving them a good reason to terminate you is a bad idea. You could always get those emails later via subpoena.
Gebru proposed discussing a resignation date. Google fired her immediately.
> In P-B-39, the claimant gave notice on October 24 that she was quitting effective November 15. The employer permitted her to work only until October 31. The Board held that the claimant was discharged...
> > In other cases, a claimant may give notice of resignation which is contingent upon factors within the employer's control, such as hiring a replacement. The employer does not become the moving party by securing a replacement. The separation is still a voluntary quit.
> This sentence makes it seem like California is one of those jurisdictions.
She didn't provide a legal notice of resignation for California law, because she didn't specify an end date to her employment. Therefore the sentence you've cited doesn't apply. That's more to cover "I said I was quitting on the 17th if you as the company don't hire this person", and trying to claim that the otherwise voluntary quit was a termination because the company "had a choice".
> It really seems like if you're planning on suing your employer for wrongful termination, giving them a good reason to terminate you is a bad idea. You could always get those emails later via subpoena.
Maybe, maybe not. I've seen wrongful termination cases play out in favor of the employee only because they made copies of the relevant emails. A company that's already committing illegal acts terminating someone doesn't always comply with the discovery process the way they should.
I'm not trying to prove she "voluntarily quit" vs was "discharged" via California law. I'm arguing that Google used resigned in a way that wasn't a lie, and referencing California law to show how it uses the word resignation, and is therefore a reasonable use of the word.
Did Google use the word resigned in a reasonable way?
1. Gebru submitted a conditional resignation, and demonstrated she wanted to leave the job
2. California law says that conditional resignations count as resignations, as well as accelerating resignations(depending on pay)
3. Gebru did not withdraw her resignation
4. Gebru no longer wished to work at Google
5. Gebru(as a manager) advised co-workers to stop work
6. Google "accepted" her resignation.
I don't think Gebru is a liar either. I think that her resigning and being fired are both reasonable one word descriptions of the events. Fired implies she wanted to continue working at Google, and resigned implies Google wanted to keep her employed. But neither of those were true.
They just mutually decided they weren't a good fit. Gebru because she thinks Google is evil, and Google because they thought Gebru was being unprofessional.
California law says conditional resignations count when the conditions are met. Gebru proposed discussing a resignation date after she was back from vacation.[1] Those conditions were not met.
California law allows the employer to accelerate the last day of work. Not the effective date of resignation.
Gebru advised coworkers to stop writing documents about DEI initiatives. Not to stop work.
Is it illegal to say they resigned when you fired them? It makes sense to say that they resigned when you accelerated their termination after they said they wanted to quit, even if you still legally treat it as a normal firing. Pretty sure Google still filed the paperwork correctly so she got her unemployment benefits, and if they didn't, then I trust their lawyers' view on it much more than yours.
> I'm not trying to prove she "voluntarily quit" vs was "discharged" via California law. I'm arguing that Google used resigned in a way that wasn't a lie, and referencing California law to show how it uses the word resignation, and is therefore a reasonable use of the word.
You can't first claim that you're not using California law in your argument, then immediately claim that your argument is based in California law.
Particularly when I explained how your understanding of California law is mistaken, and doubly so when the crux of your misunderstanding is what a resignation consists of.
> ... California law says that conditional resignations count as resignations, as well as accelerating resignations(depending on pay) ...
_If_ it was a resignation in the first place, which requires the employee to set the end date. That is not this situation.
The point is, for someone to be lying, their use of the word has to be inconsistent with how any reasonable person could use the word. My argument is that if you use the word in the way California law uses it, you're not lying. But also, if you don't use it in the way California law uses it but in the way U.S. law uses it, you're not lying. Or if you use it in a manner that's inconsistent with California law and U.S. law but is how people use the word in everyday language, you're not lying.
"Did Gebru voluntarily quit according to California law?"
"Is it obviously lying to say resigned?"
These are two separate questions.
Also, I'm not convinced that it says anywhere in the California statutes that a letter of resignation requires an end date to be considered a letter of resignation. (It might say that about voluntary quitting, it might say that in case law, but I don't think it says clearly in any statute that to be considered a letter of resignation it requires an end date, which is different from whether or not it was a voluntary quit.)
> But also, if you don't use it in the way California law uses it but in the way U.S. law uses it, you're not lying.
You've given no citations that "US law" is somehow different than California law here. I've stuck to the specific citations directly applicable to their situation, but the idea that if the employer is the one who specifies the end date of employment, it's a firing not a resignation is pretty universal across the western world.
And yes, using "everyday language" to make public statements about a legal situation that if you used the legal definition you'd be making the opposite statement is very much a lie.
Additionally, even the "everyday language" argument has you making a distinction between "voluntary quit" and "voluntarily leave a job or other position". The crux is that if the employer specifies the end date, it's not voluntary on the part of the employee. Both under legal definitions and common parlance.
> > In P-B-39, the claimant gave notice on October 24 that she was quitting effective November 15. The employer permitted her to work only until October 31. The Board held that the claimant was discharged...
Next line
> On the other hand, if the employer continues paying the claimant's wages through the announced leaving date, the separation remains a voluntary quit
To get back to basics, the definition of resign is
"voluntarily leave a job or other position."
It doesn't mention leaving at the exact date and time you choose.
She didn't want to work @ Google anymore, and she told them she didn't and she'd find a good time to leave with her boss. Google responded with "now is the best time to leave".
It isn't voluntary if the date and time is chosen for you. California allows the employer to choose the last day of work. Not the effective date of resignation.
"An involuntary leaving of work occurs when the employer is the moving party in causing the unemployment of an employee at a time when the employee is able and willing to continue working."[1]
Gebru said she would respond after she was back from vacation.[2] So she was willing to continue working that long at least.
The legal definition doesn't matter. She resigned. Google almost surely reported it as a firing so she got her unemployment etc, but in practice they just accelerated her resignation.
Easy to understand example: You say you will quit in 2 weeks. The company fires you on the spot. If people ask the company what happens do you think it is more accurate for them to say you were fired? Would you prefer a company to say you were fired if that happened to you and you used them as a reference?
Pretty sure almost everyone would see that as you quitting and not you being fired. Legally you were fired but in practice you quit.
The factual definition is the same as the legal definition.
I would say they fired me after I gave notice. Saying I resigned would be inaccurate. What I would want them to say doesn't matter. Gebru wanted them to say they fired her. And she didn't give notice.
I would say I resigned before they could fire me if it was the other way around. Or resigned under duress if that's what happened. I know people who did. None of them said they were fired.
Jumping from a legal definition to the Oxford English Dictionary's definition doesn't look great for your argument.
"Well sure, but if we use another definition about this legal situation that has no basis in the applicable law then Google wasn't lying" doesn't really hold a lot of water.
To be a resignation she needed to set the final date of employment. She did not, so it was not a resignation. Google knows this fact very well and therefore lied to the press about the circumstances.
You even use the phrase "Gebru firing" in your first post in this particular thread.
I'm sorry if it was confusing.
But I was not using the law to argue she was ineligible for unemployment, or that legally she "voluntarily quit" (though there is at least an argument she did). Only that Google's or Jeff Dean's use of the word resigned was not an easily verifiable lie.
I used the term Gebru firing because I think either description is equally accurate. It was a firing in the sense that Google did not want her to work there anymore. It was also a resignation in the sense that she didn't want to work there anymore.
You're arguing that it's a bald-faced lie to describe a letter of resignation as such if there is not a specific date. You haven't shown this is true in California law, much less in English. And if it's true in any sense, it's not a lie.
If I went to my boss and said "I don't wanna work here anymore, I'll check my calendar to find a convenient last day", and they responded with "don't bother, go home, you'll get your two weeks".
Your argument is that if someone described that event to someone else as me resigning, they'd be a big ol' liar?
Or this group was a thorn in their side and thousands of employees are disgusted by their actions and they want to bury this as effectively as possibly.
They don’t need an ironclad case; they have millions of dollars to throw at lawyers.
Once again, why should we give them the benefit of the doubt? Why should we ever give multi billion dollar companies the benefit of the doubt?
Why give people benefit of the doubt when they're publicly throwing people under the bus?
These people are looking for drama. They're looking for attention and their name attached as some kind of martyr.
The entitlement is insane: you can't work at Google, earn a $450k salary, and then start a culture war inside Google, creating massive, impossible-to-quantify damage to company morale and sowing discord.
Accusing a fired employee in such a public way of stealing documents could be argued to be blowing up her career forever. That's maybe 20 years at, let's say, 600k/year, ballpark what someone at her level was making there (maybe even low, she may have been L7+). That's a chunky enough defamation suit that it shouldn't be hard to find a really solid firm to defend you on spec, and it'd be worth enough that it would be pretty crazy not to pursue.
So I guess we'll find out, when she either files this suit or does not. But if you were a lawyer at Google, would you have let this statement go out without quadruple demanding to see the proof?
Except you can't do that. The problem is Google has no idea that this is where the documents/e-mails went. All they can see is that a large number of documents and e-mails were sent outside of Google. I can't imagine Google would say, well it's okay if you take the law into your own hands. Any lawyer should be able to send a note telling Google to preserve the files for potential litigation. And given that she seems to have people sympathetic to her cause, it's unlikely that Google would be able to make all documents and e-mails she thinks are supportive of her case disappear.
Now, I can see if she wanted to take a few key e-mails out, because any lawsuit needs some evidence or it might be quashed in summary judgment. So maybe that's what she thought she was doing, but it sounds like she grabbed a lot more than that. I would also really be wary of this because people can get death threats when e-mails that they thought were company internal are leaked out. I wouldn't be surprised if a bunch of e-mails make it into a lawsuit with people's names on them. Anyone knows that lawsuits can easily take an e-mail out of context to make it say something that is changed by the context around it. Even worse, company internal e-mails probably contain a tremendous amount of internal company information.
There is no way that any company will just shrug and go, well, if you sent it to yourself and your lawyer, then it's okay that you might have sent out company secrets.
Legal counsel is a common exception to NDAs. Different jurisdictions have different specific rules. And most people would have tried to hide it. But saying you can't do that isn't true categorically.
She could have taken files that weren't material. But that's just speculation.
Google claimed the documents went to multiple external accounts. So they must know which accounts. So Mitchell could say who owned them. And they could verify.
Any lawyer will tell you to preserve evidence yourself if you can.
How would people sympathetic to her cause preserve evidence without violating Google's policies?
What might be disclosed during the trial is the same as if Google provided the files.
I'm not sure I understand your claim. Assuming she could get an exemption from an NDA, I don't think that means she can just grab all of the data she wants. Do you have some cases in mind where a whistle blower, or a worker was able to grab large volumes of data and send them to their counsel without repercussion? I think if she had taken a small number of targeted e-mails and documents and sent them to legal counsel, that might stand up. But it appears that she took thousands of e-mails and some number of documents (or maybe the e-mails contained attached documents). We'll have to wait for whatever lawsuit comes to find out if Google is exaggerating the total number of documents, and she only took a small handful. However, I would find it odd if an automated system would trip if she grabbed only a few tens of e-mails/documents. But maybe after the Uber incident, Google is a little more paranoid about when documents are taken off Google computers.
The problem is that Google will be fine, but if a lawsuit takes an e-mail and uses it to say that this person is sexist/racist, or multiple people are sexist/racist, it could be bad for those people. Google will be able to provide context in a legal reply, but that doesn't mean that if the people are named with a quote, that they won't be mobbed. And this has happened with other lawsuits related to Google, where e-mail quotes in a legal document resulted in that person getting death threats.
I'm also not sure what you mean by employees are allowed to discuss working conditions. Are you assuming that the only e-mails/documents that she took are related to working conditions? There is no current evidence that is what she took. Or do you mean something else? Because the only public statement from Google alleges she took a large number of e-mails/documents from Google, not that she was fired for discussing working conditions. We'll have to wait for a lawsuit from her to find out further details though.
It wouldn't be either misleading or lying, if her or her lawyer want personal copies of the emails they can apply to a court. They have no right to unilaterally decide to take personal copies of internal communications.
It doesn’t matter if it’s her lawyer. Any lawyer would have told her not to do this because it damages her case, and the emails are discoverable anyway in pretrial.
Also, I have to ask, why didn’t she just print them? Or take screen pics with her phone? Emailing things from work directly to external parties seems rather dumb for a computer expert.
Any lawyer I know would have told her don't rely on discovery if you can help it. Why do you think it damages her case?
Do you think Google doesn't monitor printing?
I can imagine a lawyer saying to send the documents directly and not send them to herself. It helps show she didn't distribute them to anyone else. CCing a paralegal would still make it multiple accounts.
She could have downloaded documents for an attorney or to search for the allegations Timnit was accused of. Google will of course phrase it as damningly as possible.
So once again, why assume Google has the moral high ground (in any way) in this firing or is being honest at all in it?
I'm not assuming Google has the moral high ground. I'm assuming that Google is not lying about an easily verifiable fact to the media especially when Meg is not disputing Google's events.
> She could have downloaded documents for an attorney or to search for the allegations Timnit was accused of
This would support Google's description of events as "exfiltrating a bunch of documents related to Gebru".
They also have a second source close to Meg that confirms she was running a script to exfiltrate a bunch of documents related to Gebru.
I’m curious why there needs to be a script involved and feel like we are missing a big part of the story here. You don’t need “a script” to search your email. And you don’t need to take the risk to “exfiltrate documents” if it’s just emails in your possession or if you’re just worried about being fired. And it doesn’t make sense that a computer expert would take the risk to do all this electronically where it is easily monitored.
Perhaps she made a dumb mistake, but Dr Gebru and her colleague were both managers, and it wouldn't surprise me if the script her colleague was running was not just backing up her emails but rather pulling management-only and HR docs related to Gebru's situation. Which would be a much more serious offense and would explain a lot of what's going on here.
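For what it's worth, "a script" here needn't mean anything sophisticated. Below is a minimal, purely hypothetical sketch of what a "search my own inbox and save the matches" script could look like, using Python's standard imaplib module. Every detail (the host, credentials, search term, and output directory) is an assumption made for illustration; nothing here reflects what was actually run at Google.

```python
# Purely hypothetical sketch: search one's own inbox for a term and save the
# matching messages locally. Host, credentials, and search term are
# placeholders; nothing here reflects what was actually run at Google.
import imaplib
import os

IMAP_HOST = "imap.example.com"   # assumption: some IMAP endpoint
USER = "me@example.com"          # assumption: the mailbox owner's own account
PASSWORD = "app-password"        # assumption: an app-specific password

def save_matching_messages(query_text: str, out_dir: str = "backup") -> int:
    """Save every message whose text mentions query_text as an .eml file."""
    os.makedirs(out_dir, exist_ok=True)

    conn = imaplib.IMAP4_SSL(IMAP_HOST)
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)  # read-only: copy, never modify

    # Ask the server for the IDs of messages whose text contains the query.
    status, data = conn.search(None, "TEXT", f'"{query_text}"')
    if status != "OK":
        raise RuntimeError("IMAP search failed")

    saved = 0
    for num in data[0].split():
        status, msg_data = conn.fetch(num, "(RFC822)")
        if status != "OK":
            continue
        # msg_data[0][1] holds the raw RFC 822 message bytes.
        with open(os.path.join(out_dir, f"{num.decode()}.eml"), "wb") as f:
            f.write(msg_data[0][1])
        saved += 1

    conn.logout()
    return saved

if __name__ == "__main__":
    print(save_matching_messages("example search term"), "messages saved")
```

Note that a script like this only copies messages locally; whether anything counts as "exfiltration" depends on where the copies go afterwards, which is exactly the part the public statements leave vague.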
Well, the good news is, they open themselves up to lawsuits or adjudication by making false claims about the reason for firing someone. So we can just wait and see if those are forthcoming.
The rumor going around is that she thought she was going to be retaliated against, and the documents were her backing up some of her emails in case she was fired illegally to discuss with her lawyer.
Who's to say she didn't? Some of the wrongful termination suits I've seen personally lived and died based on whether an employee had their own copy of emails. Discovery doesn't always work against an entity that's already behaving illegally.
'Wrongful termination doesn't tend to hit the court system like you think it should unless you have hard evidence, particularly in at will states' is absolutely something I've heard from lawyers.
Discovery is a thing. Big companies are more likely to catch these shenanigans, making it automatically a legitimate termination, and also more likely to have retention policies so that you can get the material later in discovery.
It's not a big corp vs small corp distinction; it's a question of how important to the company the employees in the more powerful position are. A large company isn't going to bend over backwards for a middle manager seven rungs down committing the wrongful termination, but when the most influential members of the company are involved, all the stops come out. It's just more likely that a small company involves employees who are extremely influential to the company.
1. Google's statement: is it still accurate then? Seems so to me.
2. Firing the employee. It's one thing for the law to shield an employee from legal repercussions when they send communications to their lawyer, and quite another for the company to still be within its rights to fire them for it. A large set of perfectly legal actions can get you fired at almost any workplace.
"I know, since I'm worried I'm going to get fired for vague non-specific reasons, I'm going to directly violate company policy and get terminated for cause" is an odd game plan.
Did you ever leave a notebook that had work notes at home or in your car? Ever gotten an e-mail on your work account that should have gone to a personal one? Use any SaaS service that would get access to company data?
No, what I'm saying is that dismissing a complaint of the company doing something wrong is missing the point. Deferring to a policy whose theoretical purpose is to defend against leaks and protect trade secrets as a justification for firing someone who was archiving their own e-mails is naive.
"Archiving their own emails". There's an orwellian tint to your comments.
This policy is not arbitrary, rare, or unreasonable. In fact not enforcing it is not really an option, if you want to continue to exist as a company.
Every comment you have made in this thread has tried to distort the truth, and you've avoided saying what actually happened.
I have no respect for this sort of rhetoric. Instead of making an actual point you deliberately misdescribe the facts.
You, sir or madam, are a liar.
Two wrongs don't make a right. I'm calling neither Mitchell, Gebru, nor Google blameless here. But there's no point discussing it with someone who's actively out to move away from the truth.
The language that Google use is consistent with simply bcc-ing outgoing e-mails to a personal account and a lawyer's account.
That is exactly what I have done and would do whenever my job has been threatened - so that I have access to relevant written evidence after my e-mail is switched off.
The thing is, if a company wants to fire an employee they can usually find a reason. It takes extreme care for a person to work for any length of time at a company that wants to fire that person. The employee must change the behaviour that is causing the company to want to get rid of them. It is usually not expected people will change their behaviour and as such, HR and outside lawyers are always available to assist companies with these "problem employees".
It is possible that there are many employees who are committing "fire-able offenses" at many companies but because the companies are not trying to get rid of these employees, these offenses go undetected or no serious action is taken if they are discovered. That's life.
The question here is whether Google wanted this employee gone, and if so, why. It sounds like she was reading documents associated with the other employee on her team who was fired. We are not told of any actual damage that was caused.
The quoted statement in the top comment states "After conducting a review of this manager's conduct...". They have discretion over what they choose to do after the review. They wanted this employee gone. Why?
Speculation? Because she was good friends with Gebru, and was continuing the (internal) drama that they wanted stopped. Assuming that Jeff Dean actually took the right decision in firing Gebru (I tend to believe so) - Google really can't blink here; they'd be at the mercy of vocal employees if they did. It's basically a cancer-like situation, you either cut it all & apply radiotherapy (however unpleasant), or it will overwhelm & kill you.
You don't need a fireable offense to fire an "at will" employee. I'm not even sure you need a reason for it. There are restrictions in terms of not being able to fire a person because of their gender, race, etc but beyond that anything goes.
"At-will means that an employer can terminate an employee at any time for any reason, except an illegal one, or for no reason without incurring legal liability. Likewise, an employee is free to leave a job at any time for any or no reason with no adverse legal consequences.
At-will also means that an employer can change the terms of the employment relationship with no notice and no consequences. For example, an employer can alter wages, terminate benefits, or reduce paid time off. In its unadulterated form, the U.S. at-will rule leaves employees vulnerable to arbitrary and sudden dismissal, a limited or on-call work schedule depending on the employer’s needs, and unannounced cuts in pay and benefits."
What is business-sensitive data? If I am working on my laptop and download a 1GB customer-orders CSV to my laptop for analysis, is that sensitive data? Who does not do it?
Private data of other employees - is that like downloading the accessible intranet profile of a user (say in the HR portal), or hacking into the HR portal to download their private info? In a 1k+ employee company, who does not look up employee details every day?
Does it? As pointed out, this could mean a lot of things and I suspect the majority of Google’s tech employees are “guilty” of violating this particular guideline, including the execs and the investigators involved in this case. Getting “sensitive” data on a personal device is actually really easy to do and just goes overlooked until they want a reason to fire you.
But the part of this that stands out to me is that they are giving this much detail to the press. The standard reply is always that “personnel matters are confidential” even when the person being fired has been publicly accused of something criminal. But for this case Google chose to make specific public allegations. Now why do you think they would do that?
“Cut and dry”? Given how Google fired a previous AI ethicist in an entirely misleading way (“We accept and respect her decision to resign from Google”), it would be naive to take Google at their word here.
It wasn't very misleading. Giving an ultimatum on threat of resignation more or less entails giving your resignation if the company doesn't accept. My understanding is that the conversation more or less went:
Gebru: "Don't make me retract the paper and give me the names of the reviewers or I'll resign."
Google: "We're not doing that. We accept and respect your decision to resign from Google."
Google has a specific process for resigning and writing a resignation letter. Saying "if you don't do this let's talk about my end date" isn't it. When the company decides you're not working there anymore and tells you so, it's a firing. That shouldn't be hard to grasp.
Of all the things to argue about how that incident went down, this is the worst.
She sent an email with a list of demands saying if they weren't met, she'd work on an exit date with the company.
The company couldn't meet those demands. By her own words she would be exiting the company, and they worked out that exit date with her in that email: right now.
The State of California is very clear that the sequence you've outlined is a firing. An employee needs to be the sole one setting the last date for it not to be a firing.
Say you hire Troy Hunt to do an audit (security, IG, finance, anything), then repeatedly block them from accessing the information they need.
They then email you saying "I need access to this data to be able to do my job, otherwise there's no point me being here and I should move on to something else"
Another example. You hire a sales person on commission, but then deliberately stall the payment of commissions for cashflow reasons. They say "Look, I'm buying a house in 6 months, so you need to start approving these commissions or I'll need to find another job"
Is your stance that any statement from an employee that a problem with their role is severe enough that they can't do the job they were hired to do, or that the compensation they were promised (whether that is money, career development, publication of papers, etc.) isn't being delivered, is cause for immediate dismissal?
I just find it strange to focus on the very subjective whether or not the "demands" of an employee are reasonable, rather than the far more objective "Is this an explicit resignation".
To the company, every demand is unreasonable...
And every negotiation/collective bargaining begins with overstated demands, that's hardly unusual.
They didn't work "out that exit date with her in that email". They told her she was no longer an employee effective immediately. That is a firing, legally and otherwise.
Yes, I realize I could have picked from several other examples which demonstrated Google acting in bad faith, but I felt that quote (which I put in parens and was not the main point of my argument) was the most memorable from the whole sad affair.
Jeff Dean just yesterday officially apologized for this situation so clearly he feels he did something wrong, although he doesn’t admit to anything specific. It wouldn’t surprise me if this quote from his email (and the corresponding behavior) would be on the list though.
"Firing @timnitGebru created a domino effect of trauma for me and the rest of the team, and I believe we are being increasingly punished for that trauma."
Wow. These activists are truly living in an alternate reality.
Google, by virtue of its sheer influence in the industry, is primarily responsible for the activist-driven grief that has shat itself all over the tech industry in recent years.
So fuck them. Watching the toxic crops they've planted and tended to go for their own creator's throats is nothing short of delightful.
All the world's trillion-dollar companies have gone woke, so I'd wager it's quite profitable. It may not seem like it to you because you're not the intended audience.
They have usually gotten woke after becoming multi-billion-dollar companies. Being woke seems to be a luxury available when you have the money to spend on it.
You make it sound like politics was invented at Google... As opposed to being something that people have engaged in, since before recorded history[1], during times of upheaval of social and cultural norms.
[1] Certainly, since before 1998. The recent past was not some apolitical paradise.
The quote is from the article, and it appears in almost every article on the Internet about this topic, so at least they are not random words. Everybody says this is a Twitter message; at least some may have read it.
I was referring to the initial quote, the "Firing ..." one.
The quote on "literal act of violence" is a cultural reference to an article in NYT in 2017 that is now used as a way to call someone a certain kind of leftist. The entire comment is about "these people", the "activists". So these words are also not "random words in quotes".
Not sure how and why they are creating them, but it's definitely a fertile breeding ground. Look what happened to Stallman - no corporation could bring him down, but the activists did.
Probably profit. They want enrollments, so they create "safe spaces", which in turn are more breeding grounds for these types of individuals. It also doesn't help that fields like the social sciences are extremely heavily left leaning already, and continue to hire from their kind.
I think the trauma comes from the perceived unjustness of the firing. (Which is understandable, but it seems - based on the linked r/ML thread and comments - they have a rather warped sense of victimhood.)
You use the term "these activists" and describe a problem of two people. I think you are trying to imply that activists in general have the problem. That would be the "hasty generalization" fallacy [0].
Because they are stressed from an unexpected and public termination, you think they're "activists"? Activists for what? Or is "activist" the new "communist"?
They also seem to have sidelined Samy Bengio (Yoshua's brother) who was the skip-level manager of the team and, as far as I can tell, backed both of them and continued to: https://twitter.com/L_badikho/status/1362892301979312128
As can be seen above, current team members state that Google is running a smear campaign against both of them. We should emphatically not take Google's words at face value here either.
All of this increases my respect for Microsoft Research immensely. They've been more hands off and had researchers publish work critical of MS technologies. It's quite telling that supposedly more open Google is incapable of creating as open a research org as MSR, which remains in my view an excellent example of what an industrial research org can be.
> As can be seen above, current team members state that Google is running a smear campaign against both of them. We should emphatically not take Google's words at face value here either.
I don’t get this. Google’s words are that she exfiltrated company documents. She either did or didn’t do this. And is Google going to risk making this into a much bigger story by committing libel against a former employee?
Or should AI ethics researchers be empowered to steal corporate documents?
I try to maintain an open mind, but things seem almost straightforward in this case.
I don't know, but I think it is much more likely that she is being fired for sharing internal info with journalists, outside activists, or former employees, than for sharing it with her own counsel.
Your employer might not like you sharing their internal info with your own attorney (under attorney-client privilege), but if they fire you for it, they potentially expose themselves to significant legal risk. Maybe you are planning to file a complaint or lawsuit about discrimination or harassment, or considering reporting your employer to some regulatory agency, and firing you for sharing evidence with your own attorney could legally be retaliation and viewed poorly by the legal system.
Now, if you share internal info with someone else's lawyer, or if you ask your own lawyer to share that info with third parties (other than regulatory agencies), different story – I think your employer is on much firmer legal grounds in those cases.
What is clear, though, is that what you think is likely is probably not particularly indicative of what actually happened, because, flat out, you have no idea.
It wouldn't be the first time Google miscalculated legal risk.
And they didn't say they fired her for exfiltrating documents. That's how most people will interpret what they said. But they left room to say it wasn't material.
Isn’t that “actually” exfiltrating company documents? I'm not at Google, but I don’t remember any of my NDAs saying I could share confidential/privileged company information just because I really wanted to share it with my lawyer...
Everyone is assuming it’s email, but you don’t need “a script” to search your own email, and you don’t need to risk getting fired sending emails to your lawyer, you just print them out. Or wait to get fired and get them during discovery. Google isn’t going to go into conspiracy mode and risk hundreds of millions over a line manager getting fired, that’s absurd.
It wouldn’t surprise me if this “script” was trying to pull documents from manager or HR only sources and that they were related to Gebru’s employment—hence the need for a script to do a search. Which would imply she was looking for ‘dirt’ outside of her normal access and responsibilities.
That scenario would explain a lot of what’s going on here.
Saying documents could mean emails doesn't assume it's email. Axios said the unnamed source said Mitchell looked through "her messages" though. And what other relevant files would there be thousands of?
Google Apps Script can do things Gmail search can't. Like regex.
Lawyers I know say don't rely on discovery if you can help it.
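For what it's worth, here is a minimal Apps Script sketch of the kind of regex filtering Gmail's built-in search operators can't express. The query string and pattern below are made up purely for illustration; nothing here claims to reflect what any actual script did.

function findMatchingMessages() {
  // Coarse pre-filter with an ordinary Gmail search operator (illustrative query only),
  // then apply a regex (something Gmail search itself can't do) to each message body.
  var pattern = /some[- ]?phrase/i;                    // illustrative pattern only
  var threads = GmailApp.search('after:2020/11/01');   // illustrative query only
  var hits = [];
  threads.forEach(function (thread) {
    thread.getMessages().forEach(function (msg) {
      if (pattern.test(msg.getPlainBody())) {
        hits.push(msg.getSubject());
      }
    });
  });
  Logger.log(hits.join('\n'));
}

The point is only that a few lines like this can do searches the search box can't, which is why someone might reach for a script at all.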
The political activist collective is slowly falling apart. If you get blamed for racism when firing an employee for telling colleagues not to pursue KPIs, there is not much you can do to continue placating these professional victims. They are now resorting to calling Google gaslighting for replacing Gebru with a Black person. It is low-brow ugly neo-racism, but then again, I don't really want Google AI in charge of AI ethics, with or without Gebru, so may the mess and reputation damage linger for a long time.
> The political activist collective is slowly falling apart.
In my heart of hearts and dream of dreams, yes, I wish that to be true. However, my intuition tells me another generation with the same persona will pick up the banner again in a few years.
IMHO philosophy is great and all for motivations and decoration, but for structural items like farming, biology, and architecture it's blind and mute. And the movement in question is a philosophical one, namely a bastard child of post-structuralism and select bone and gristle from Marxism. Sure, it'll rise again, like a bad penny or a zombie. But for now it's lost the half of its population that just wanted Trump out the door.
For context, you had a reporting chain of Jeff Dean -> Megan Kacholia -> Samy Bengio -> Timnit Gebru/Meg Mitchell.
Timnit and Meg have been fired. Samy appears to have been sidelined, despite everyone involved in the ethical work liking Samy and finding him willing to protect them.
My best guess is that Marian Croak (a Black woman who is currently an Engineering VP in an unrelated area) will approximately replace Megan Kacholia, with Megan focusing on other areas, except that "ethical AI" was only a part of Samy's purview, much less Megan's, and it's unclear if Marian is stepping down from her former position or if she's going to split her time between ethical AI and completely unrelated work.
So there are no direct leads of the ethics team currently. There's a VP in charge who has other significant responsibilities and relatively junior researchers (some of whom are very recent hires with limited experience), and no one in the middle. So any chance of mentorship is gone, and the system is quite clearly adversarial. Those researchers have lost their direct and skip level manager, with no clear replacement. Lovely.
> They are now resorting to calling Google gaslighting for replacing Gebru with a Black person.
The complaint isn't what they did, it's what they didn't do. There were no substantive policy changes that could address the underlying issues. Tokenism without change isn't improvement. As I understand it, there were two main concerns:
1. Concerns about censorship of research critical of Google.
2. Concerns about diversity efforts within the Research organization.
Replacing someone with a proven track record of being above-average on diversity work, even to his own detriment, and who was willing to fight for his reports when they were censored by higher ups (Samy) with someone who is at best an unknown quantity (Marian) does not instill confidence.
Further, the claimed changes don't really do much. Supposedly, failures of DEI will be more impactful on perf for executives. Does that mean that the executives involved in this sequence of events will be dinged for mishandling this? My impression is no, or if anyone is, it'll be Samy.
As a sidebar, there's also the more nefarious thought that if you're being judged on the diversity of your org, firing a black woman and then bringing in a black woman executive means you won't get dinged for DE&I failures. Hence: tokenism without addressing the underlying issues, which might be summed up as many of the employees have no confidence in management.
Timnit Gebru, Jul 2, 2020: "Yep generally you did, thanks. But not about me, Deb and others being attacked by White supremacists, anonymous accounts etc. Clear support means taking a clear stand. And this doesn’t seem to be ending."
Because she mentioned Jeff Dean first. Then, Jeff needed to ask what this was about.
Then, Jeff ended with "personal attack is not okay". Like huh?
At this point, we might as well ask Jeff to tweet that the earth is round. Otherwise, we would call him a flat earther.
This isn't called "engaging" when one side would assume you were racist if you didn't tweet to support their cause. The word you are looking for is strongarming.
I haven’t seen any of the “smear campaign”, just what these two employees have written publicly, and honestly, they both seem like entitled, self-absorbed pains in the ass who crap on everything around them just because they didn’t get their way, who view anyone who disagrees with them as an idiot (or worse, evil), and who come across as completely unprofessional and poorly prepared to be dealing with something of such significance at a $50B company.
These are people who should have never made it past individual contributor.
There is a difference between pushing your company towards a better place and dropping bombs every time someone disagrees with you, a distinction neither of these two seem to realize.
There is some disagreement within this thread as to exactly what happened, but it seems to me that:
1. Gebru had a tendency of taking perceived slights and blowing them out of proportion, creating a more difficult work environment.
2. Mitchell was fired for stealing thousands of company emails.
I wasn't impressed with the process Google used to rid themselves of Gebru, but let's not pretend that Google lost the ethics A-team here.
Mitchell ran a script to scrape emails (wonder if it was her company ones or mailing lists for the AI group) for signs of racism (heck, how on earth does that work) against Gebru.
It’s a really common refrain here that if you are an employee critical of your employer, you should expect to get fired…
Well, perhaps, but you’d think that someone in AI ethics specifically needs to be empowered to speak up about both internal and external issues. Or did Google just want a research ethics hit team to aim at other companies' technologies?
Makes me wonder if Project Zero is really trying as hard as possible with Google's own products. Clearly it’s impossible to be an independent team at Google.
It's not just the employer. James Damore's memo silenced a lot of the more conservative thinkers, especially as he didn't even want that memo to be public, just research shared on a mailing list. Harassment from the political left was so strong at Google that it was only a matter of time until it bit back.
As for me, as a white man, although I was just doing my job silently, I didn't feel appreciated and tried to isolate myself from meetings / politics and just focus on work (and was happy to leave 1 year after that memo). The fun old times when we could just focus on making the user happy and talk about doing cool stuff were over.
Maybe you are right, I was working in Switzerland, not in US.
What I remember clearly though is that Google's training contained material about how I was allowed to socialize with coworkers outside work, which may be acceptable in the US, but was clearly against the Swiss constitution (my job security was more important to me than speaking up about this though).
What exactly did this training say? I'm just a little suspicious that Google was saying you couldn't socialize with friends outside the office, since that's unenforceable, not in their interests, and also deeply stupid.
I didn't write that we couldn't, it was about how we could.
For example, we shouldn't tell a coworker that he/she is pretty, shouldn't get drunk together, and a lot of other behaviour that is considered flirting and getting to know each other in Switzerland.
In the US, behaviour outside Google is handled by the company as well, but in Switzerland there are very strict privacy laws and a culture that forbid companies from controlling any part of people's lives outside work. The right to privacy is part of the Swiss constitution, and it's taken very seriously there (to the point where the tax authorities generally don't have the right to look at your bank accounts, unlike in the US for example).
Edit: I can't reply, so I'll extend this comment here.
"Shouldn't" is not banning.
And what you write, that Google may have a problem if harassment happens, is true, but I still think you are thinking in US law terms: privacy in Switzerland is like voting rights in the US. It's like saying "you are not advised to vote for Trump as president". Or telling you "you are hired, but you should probably go back to Mexico".
warkdarrior: Again, yes, in the US that training is legal and socially acceptable. But the training was in violation of the Swiss constitution... I'm not sure why people in the US don't understand that the laws are different in different countries (people from other countries generally understand it). Harassment is a trendy topic right now, but it doesn't trump privacy and the right not to have any part of a person's life outside work controlled by the company, even if that's with coworkers, and even if it may lead to a legal problem for Google. In Switzerland people are grownups, they don't need babysitting.
I have to add one more thing: Andy Rubin belongs in prison, everybody knows it, and giving us more trainings won't help with the main issue, which is that top management was/is raping women or coercing them into sex to keep their jobs.
Typically that type of employee training in the US is meant to limit sexual harassment, where a person thinks they are flirting and the co-worker on the receiving end is offended. HR would step in at that point and take action against the harasser, so I see that kind of training about being careful in socializing as a way to warn people about the consequences of their actions.
Is it sexual harassment because it's in the workplace, because the person on the receiving end didn't like it (or didn't like the person flirting), or because flirting IS sexual harassment?
TBH I am again skeptical that there was a training that said this. I could buy it if there were a training that explained that such behaviors could be construed as harassment under certain circumstances (e.g. if you had previously made advances and been turned down). I could buy it if there were a training that told you to think twice before doing these sorts of things. That's just sensible advice, since if someone does take your behavior as harassment and takes the complaint to HR, then you're in a bad situation, at best. A training that banned you from doing those things? I doubt it.
Corps have a habit of buying in services tailored for a US audience and then just blithely pushing that crap out to all their international employees with scant attention to outcomes; it's just "mandatory".
As such you have to watch some real thoughtless, poorly put together, bottom of the barrel crap so legal can tick a box that diminishes their responsibility.
Given the motivations at play here I wouldn't be surprised if they ballsed it up like this.
I work at Google in California. Unless you have completely different trainings, I don't think your characterization is accurate. For one, it doesn't say at all that you should not get drunk together. In fact there are many company events where alcohol is served, and some teams keep a stash of liquor in the office.
I work at a corp that dishes out mandatory training, and I feel like you have the wrong end of the stick. It's not necessarily well put together by smart people; rather, it's cheaply produced, so this outcome would not be surprising.
So far, in every example I've ever seen of a conservative complaining about their freedoms and speech being limited, they either don't disclose the full story, or when they do, I never end up agreeing that any valid right of theirs was violated or infringed. Instead it's always just crying about having to meet even the most basic level of civility.
And I'm very much in the Bill Maher camp when it comes to liberal counter-productive hyper righteousness and intolerance. And I'm a leftie who disagrees with the left on guns. Try being that guy at parties.
Yet somehow I don't feel beaten into submission and into hiding my opinions for survival, even the unpopular ones. There must be some mysterious other ingredient involved with these poor hobbled heroes. What could it possibly be?
Haha, what are you talking about? Are you saying Swiss employees are legally not allowed to socialise with work colleagues? Do you honestly think this happens in practice?
I was confused by GP until I reread it and realised that the operative word is how.
I don't know anything about Swiss law or Google internal policy but I think what he's saying is that it's illegal in Switzerland to dictate how your employees socialise with each other outside of work (and perhaps to dictate anything else about how employees behave in their private lives?), not that it's illegal for employees to socialise at all.
> It’s a really common refrain here that if you are an employee critical of your employer, you should expect to get fired
I don't know how common that refrain is but I have one a little less controversial: if you use corporate email to exfiltrate confidential documents or coworkers' personal information, you really should expect to get fired.
I don't think that's exactly what happened. From my understanding she was backing up emails that served as evidence to support the claim that Google had treated Timnit Gebru unfairly during their termination. Not sure where that legally falls but I wouldn't exactly paint it in the light you did here.
The legal way to handle that is for Dr. Gebru to sue Google and get the emails through discovery. (If Google didn't do anything that Gebru can/is willing to sue over, well, she doesn't have any legal right to the emails.) Exfiltrating confidential documents will get you fired, and saying you did it to assist a friend in badmouthing your employer on Twitter is not going to help matters.
> legal way to handle that is for Dr. Gebru to sue Google and get the emails through discovery
The not-quite-legal but defensible way might also involve openly forwarding those emails to her lawyer. Sending them anywhere else removes the shadow of doubt one needs when strategically but willfully violating a contract.
Multiple external accounts could mean her lawyer's account and her own. Or 2 accounts at her lawyer's firm. So we don't know if she sent them anywhere else.
IANAL but I wonder, would this be a reckless thing to do without encrypting the data first?
In any case, I would not do this in any form unless my lawyer said it was a good idea first.
But I would also not post a poorly worded rant, with strange footnotes, trashing my employer, on my employer's own document-hosting service, and publicize it on Twitter as soon as I suspect I'm in trouble, so what do I know.
[Edit:] I mean reckless because the email could theoretically be intercepted in transit.
The assumption that corporations, not courts, decide legality seems inane to me. The company you worship could commit a crime and then engage in a cover-up to destroy all evidence before discovery could occur. The easiest recent example is eBay:
If there's a law that permits exfiltrating confidential documents from an employer because you don't like one of their policies, I'd be happy to know of it. But I doubt such a law exists.
If they were confidential documents and she signed an agreement indicating she would not expose or retain such property, then legally Google has standing to fire her and sue her. But morally? She and others may feel it was the right choice, particularly if she felt that Google would not provide the evidence through the discovery process.
It’s not like she’s an ombudsman for Google. If she has concerns she should voice those concerns internally. If she doesn’t feel that her concerns are being taken seriously, she should quit. On the way out the door, she should tell her story.
It’s honestly just basic professionalism. Don’t publicly trash your employer.
I'm curious how you think someone who is a professional ethicist is supposed to do their job without being allowed to comment on ethics.
It's not a cog-in-a-machine profession. Having critical opinions is what the job is about.
It's literally stupid for a corporation to employ an ethicist and then somehow be outraged and appalled when they have their ethics questioned and challenged.
Wanted to write this exact same reply before I saw yours.
She was not a general ethicist at Google. Her job was not to comment on the ethical issues of employment. Her ethics role was AI-specific, and none of this has anything to do with AI.
I'm curious why you think she was fired for having critical opinions as opposed to grossly unprofessional conduct. She could have done her job just fine without throwing her colleagues under a bus, badmouthing her employer in the press, and having a paranoid meltdown because her paper didn't pass peer review. Being an ethicist doesn't give you the right to act like an asshole.
>> It’s a really common refrain here that if you are an employee critical of your employer, you should expect to get fired
I was just critical of my employer, I called them out on how evil they were, I cost them money, I embarrassed them publicly, and then, all of the sudden out of nowhere I was fired!
That's sort of your assertion, right? Am I misrepresenting anything?
Yeah, why wouldn't a company want a workforce full of yes-men and engineers who care about nothing but making shareholders money and making executives look good?
There's a gulf between "hire only yes-men" and "hire activists who openly state a desire to undermine the company, who break its policies and share its internal documents to forward that agenda, and generally spend their global-0.1%-level salary accusing the company of actively harming the least among us".
that's clearly a strawman interpretation of my comment.
> why would a company pay someone to publicly criticize them?
by all means, a healthy company should encourage internal critiques. but if you're going to take that criticism public, you can't expect to also collect a paycheck at the same time. do you make a habit of paying people to work against your interests?
I'm seeking to normalize community accountability practices that are pretty core to many indigenous cultures, including those of my Irish ancestors.
So, yeah...I'll pay for that in every organization I get involved with from here on out.
Just because something isn't normalized doesn't mean it isn't useful. The reason to do it is for the sake of actual accountability, since strictly internal accountability is nonsense.
This is the business version of "keep it in the family": it gives all kinds of toxic behaviors a shade to hide and grow in, while keeping the public from actually choosing who gets their business based on their values.
"community accountability practices" just sounds like mob justice. It seems like an especially bad idea in the current environment where people who are more than happy to join in an outrage mob after only hearing one side of the story, and sometimes aren't even affected by whatever they're targeting. A "community" shouldn't be anyone who happens to agree with you on twitter.
I think what you're speaking to is the result of a lack of normalized practices actually oriented toward accountability and healing, rather than shaming, canceling, mobbing, etc.
I'm not even saying I know how to do it. I think the current environment is an indicator of a greater need for true accountability and not the theater of "justice."
I'd also say outrage mobs are the direct result of people being confused about who is making them angry. Many adults I know still insist it is other people who make them angry, instead of recognizing that it is they themselves who use the stimulus of others' actions to anger themselves. We need better EQ/CQ education, and what you're highlighting points directly to it.
At this point a fair proportion of the planetary population is affected by FAANG's ethical stance on ad tech. It's looking likely AI is only going to make that worse.
A "community" should also not be an enforced happy smiley corporate PR face.
And mob justice can also look a lot like running people out of town because they're not properly respectful.
But how can community accountability practices work in an organization like Google, which hires from many different communities? It seems like it inevitably produces loud, angry, unresolvable conflict whenever your community and my community don't agree on a decision.
I think the difference lies between harm and disagreement. How can it work? I don't know or have all the answers, as I've never even seen community accountability in action at a small local scale.
This highlights to me the importance of learning how to do this stuff.
I don't think it's as complex as you're making it. Community accountability works through shared cultural standards and social pressure to conform to the status quo; Google employees don't have shared cultural standards, and the whole problem here is that some of them don't think it's right to conform to the status quo. The reason you've never seen it in action is that, in a multicultural society, it doesn't work on any scale larger than a friend group.
It is literally the job of an AI ethicist to highlight ethical issues, so if they're publicly criticizing you, then you're likely doing something unethical.
If your team of ethicists has repeatedly highlighted issues and you continue to not address them, what should they do then? In the case of a company whose self-image is one of openness, directness, and benevolence, publicly highlighting the lack of change seems within reason.
If they cared less, they'd just shut up, but employees seem to have a higher opinion of Google than Google does of them.
Firing them in the middle of that suggests an unwillingness to deal with the previous issues, doubling down on unethical behavior.
I might be speaking from ignorance here, but I highly doubt their job description said anything about making accusations via tweet.
> If your team of ethicists has repeatedly highlighted issues and you continue to not address them, what should they do then? In the case of a company whose self-image is one of openness, directness, and benevolence, publicly highlighting the lack of change seems within reason.
if they strongly disagree with the direction the company is taking, they ought to refuse to be part of it and resign. public criticism can wait until they stop collecting the (quite fat, I imagine) checks.
edit: I guess I'm not really objecting to the public criticism before leaving the company. I'm objecting to the incredible feeling of entitlement it must take to expect to keep getting paid while actively damaging your employer.
And how exactly is your suggestion going to play out in the long term?
All the critical employees resign, until they are all eventually replaced by yes-men who just care about the money or are too afraid to speak out.
And if a company is doing something unethical, then why is it inherently _bad_ to damage that company until it changes direction? Acting in unethical ways has to hurt the bottomline, otherwise what incentive is there for a company to change?
I don't understand why people seem so confused about the notion of employment.
If a company hires a security researcher, and that security researcher finds that the company is not listening to their advice, is it their responsibility to publicly expose the security flaws of the company?
What if you hire a plumber, and the plumber finds that you're not taking his advice regarding your toilet? Is it the plumber's responsibility to publicly comment on that?
Well, what she is doing is called performative activism. She is doing staged martyrdom in front of the Twitter mob and activist media. She could have tried to do the job well; if she felt she was being hired as a PR move and not for any actual impact, she should've quit and told her story. I guess she didn't feel she would get any media attention if she had taken the high road. Instead she created all this drama, including blatantly violating company policy. I am not sure whether she has done all these things in good faith.
Unfortunately, given the amount of media attention (including this very thread) these people get, it will encourage more staged martyrdoms. Soon we will have our own tech-industry reality TV show. Sadly, that's the reality we are heading into right now.
I agree that employees should be allowed to criticize their employers, but there's nothing inherent to an Ethics specialist that would weight things even more in that direction. Ostensibly, she was hired to report internally on ethics considerations. Unless part of her job description was to liaise with the public on those ethical issues, public exposure would come as a bit of a surprise to her employer.
Well, paying someone to bite you is never an easy sell in private organizations.
Otherwise, why regulation, like at all?
Google is a giant corporation with deeply vested interests all around. It is laughable that it still tries to put on an ethical makeup just to feel good about itself. Which aligns with my observation that Googlers often have this narcissism rooted deeply in themselves, the eagerness to appear to be better.
Most activism isn't successful. You would find your way to violence very quickly if a few examples of unsuccessful non-violent activism caused you to justify breaking more and more serious laws each time.
Google gets to decide what it thinks of your behavior. And you get to decide what you think of Google's.
The person giving you money is allowed to decide whether they want to continue giving you money. Similarly, you are allowed to decide whether you want to continue taking the money.
If google is acting unethically then I, as a user of google, want to know so I can hold them accountable.
Otherwise having an “ethics committee” is just feel-good marketing signaling at the same level of privacy terms stating how google, the ad behemoth, “cares about our privacy”.
> but you’d think that someone in AI ethics specifically needs to be empowered to speak up about both internal and external issues.
I'd think an ethics researcher (AI or not) would want to work as far away from Google's influence as possible if they really want to analyze Google's AI work.
> Makes me wonder if Project Zero is really trying as hard as possible
I wouldn't be surprised if Project Zero finds as many vulnerabilities as possible in Google's software and gives the teams a chance to fix them quietly.
If there are huge security problems to be found in their software, then independent researchers can make a big name and money by publishing them.
It's not that I want them to get fired, I'm just aware that the whole point of an in-house ethics committee is to excuse the objectionable stuff, not to publicly criticize it. Sort of like when people complain that HR wasn't on their side, but actually just ratted them out to management. Yeah that sucks, but that's their role. Be smart about how you dissent. Be anonymous and leak stuff. You'll still get to keep on cashing that paycheck and probably have a more meaningful impact on the broader world compared to some de-fanged "ethics" committee press releases.
It seems like Google really prefers firing people over allowing free speech. Sure, they can do that because of at-will employment, but at some point it has to start leaving a bad taste in everyone's mouths, especially when these firings get publicized. At some point CS grads will start taking this into consideration when deciding where to work and whether they can hold their own political views there. Who wants to work at a place where there's a witch hunt? This isn't just leftists getting fired either.
Maybe when Google stops paying these CS grads obscene amounts of money. Only people with very strong political views will convince themselves to not go there, and I doubt it's a large group of people.
> It seems like Google really prefers firing people over allowing free speech.
Because it has consequences in the outside world that need to be smoothed over. Some people think that Google should just let people do whatever and allow themselves to be the whipping boy for what their employees have said.
She was fired for violating company confidentiality, anything she says otherwise about her gender, race, culture, etc is just a deflection, but it’s probably coming.
Do you think society at large really benefits from employers doing things this way?
If you were in the ethics department of a company, unpacked deep seated entrenched breach of social norms and were blocked from talking about it but still believed in ethics, what would you do?
I am unsure I would be prepared to do what she and Timbru did but that's a long way from the implied cynicism and also frankly abject legalistic kow-towing implicitly in your statement.
"Yea, that whistle blower had it coming, they signed the same contract i did" is pretty sad. Its not this case, sure, but its in the room, alongside sacking union organisers, refusing to approve talks, private illegal nonpoaching deals with Apple, paying your founder millions to cover up their sexualising the workforce,
Someone asked you a question and you belittled them, assuming worst intent. Please don't be THAT person. For one, not everyone speaks English as a first language. Two, while I _think_ I understand what you're getting at, you picked a very obtuse way to phrase it and even I'm uncertain if I understand. Three, when speaking to a diverse audience from around the globe who probably are not aware of the language used when speaking about diversity, you should default to simple, clear explanations and language, not packing it all up into a dense phrase like that.
Google claimed to have a higher regard for equity in the workplace and for correcting structural bias at all levels.
The AI ethics people were meeting structural process and behaviour around their own work, which I believe they felt fundamentally contradicts that.
They did not receive support to say it outside the company. Ultimately in at least one case, they probably broke their terms and conditions of employment and have been terminated.
Looking at it from the perspective of what people say they expect from the modern workplace, it wasn't a good alignment. People expect equity. They expect to be able to speak their mind. Within limits people can and do, but some staff have found the informal rules of what you can say about Google were not what they think they wanted.
Fired for cause, and at-will terms of employment, are entirely normal. Nobody can seriously say Google broke the law, absent strong evidence. If you hire somebody to occupy a senior role but constrain their ability to self-assert, it's a bit contradictory.
What do you think society at large wants, from the entities which now control the vast majority of our private state as individuals?
I’m sorry, but as a native speaker, your phrasing is awkward and a pain to parse. "Unpack" isn't a synonym for "uncover", and the rest was equally clumsy. The “if the rules didn’t stop me I’d give you the finger, you imbecile” tack is a little ironic.
What is wrong with you? Why do you talk like that? It's like you take a normal sentence, then swap out some random words until it no longer makes sense.
What makes it hard for me to read is mainly the comma placement, which is quite different from the verbal pauses you'd make when reading it, and which breaks up rather than separates the logical parts. I found myself asking "what does it mean to point in rule?" Italics or quotes would have helped.
No, it's just how some people mangle English. I didn't go to Oxbridge, but a lot of my workmates in the eighties did, and they sometimes wrote very strange sentence structures. An American instance: Prof Dave Mills (certainly in no sense a peer, since he's out of my league, but we did write on the same ARPANET and pre-Great-Renaming Usenet lists. Also he didn't go to Oxbridge) says this of his own style of writing:
It is an open secret among my correspondents that I on occasion do twitch the English language in mail messages and published works. Paper referees have come to agreement on what they call millsspeak to refer to the subtilities with which I personalize my work. If you read my papers or my mail, you know my resonances. If not, you can calibrate my naughtimeter from children's books
Jon Crowcroft, who is a CS professor at Cambridge but was a PhD student in UCL CS when I worked there in the eighties: http://paravirtualization.blogspot.com/ Also, it is probably presumptuous to call him a peer: if he keeps going, he may be one both literally and by appointment. Of the realm, that is, not I.
Overall I think I miswrote above. Absent a time machine, it stands.
Obviously, this stuff irritates a lot of people. I apologise, but really at this stage I doubt I'm changing.
Btw, and please forgive me if this is a breach of privacy but HN comment history shows you've accumulated a 25 year deep curated unix command history file. If I had the forethought to have done the same (which btw, is a brilliant idea) it would be older than yours. It most definitely would not be better, and very possibly more narrow in focus. I suspect, the pretensions of written English aside, we're not that different.
> If you were in the ethics department of a company, unpacked deep seated entrenched breach of social norms and were blocked from talking about it but still believed in ethics, what would you do?
Whatever you do, just don't try to engage in things that go against the rules of employment (and quite possibly are illegal), like downloading and sending confidential documents to third parties.
Sure, simply said, it's obvious. Now, ideate into the circumstance. Can you find no path to "I better make sure I can prove my side of the story factually" as a thought process?
There's room for discussion, certainly. If I hired an accountant to keep me on the straight and narrow and I expensed a dinner I'm not supposed to, I wouldn't want him to go running to Twitter and be like "Hey, everyone! Rene expensed a dinner with this guy abroad! That's a violation of the FCPA! He's a violator, guys! Heeeeeee's a violator!"
Like, if he did that, I'd be pretty incensed honestly. I'd expect him to tell me "Dude, you can't go around taking these guys out to dinner. That's like a bribery violation and shit, homie".
If he had to keep warning me, I could see him feeling that his professional ethics are being called into question, and then I'd expect him to either:
* quit and maybe blow the whistle
* stay and blow the whistle and then I'm gonna retaliate within my legal ability
> There's room for discussion, certainly. If I hired an accountant to keep me on the straight and narrow and I expensed a dinner I'm not supposed to, I wouldn't want him to go running to Twitter
Or emailing your expense report to who knows whom.
There is disagreeing with your employer, and then there is flagrantly breaking rules and confidentiality; comparing the two like you are doing here is disingenuous.
No, it's not disingenuous. It has to be thought about, sure. I think by the time you're archiving company secrets outside the firewall, you mentally left some time ago. What led you there? Is that maybe the interest for me? I don't think she wound up there for entirely base reasons, maybe that's what I'm thinking. Most strict legal-compliance answers aren't "wrong", but boy, they feel limited. Hard to change things when it's down to contract terms. Why have smart people paid to critique if you want to wind up submarining their work?
As a manager, someone can disagree with me all they want; whatever, I honestly don't care, speak your mind. But the moment they start breaking company rules it starts to get into a problematic space; depending on the rule, it can mean their judgement can't be trusted anymore and they will have to be let go. Some company rules exist to meet legal requirements, and breaking legally required rules is an entirely different ballgame, as you're opening the company up to lawsuits and legal repercussions. Just because you're smart and paid to critique doesn't mean you get to break the law, even if you, I, and the company disagree with it.
As a manager you have structural obligations it would be impossible to ignore. I believe Mitchell will fail to argue illegal termination in law. I am sure Google will have a mixed outcome here: some aspects (reinforcement of HR expectations and contract compliance obligations) will be net beneficial for everyone, and some (confirmation of how strongly there is a gap between Google's posture, post "don't be evil", and the actual reality) less so.
I've met and worked with a small number, a couple of handfuls, of Google people, men and women, all amazingly skilled. Meredith Whittaker was one of them, and I think her termination was unnecessary. I do think less of Google as a consequence.
i feel you generally on this issue, but that last question is easy to answer. a lot of these aren't out of the goodness of our hearts type things but are more CYA if needed, kind of like white collar crime training. "well, we did the best we could - we even had an ethical AI team." also is a nice recruiting bullet.
now, we don't know if this joint was one of those "haha, yeah, we'll get back to ya" things in management's eyes from the start, but it sure looked like its leadership either was not informed or intended to make it actually matter. praise ought to be given there, as this industry seems largely morally bankrupt. however - and i'm not intending to be super negative here - she may have known that her and G weren't gonna work out, and this was just the most spectacular way to set sail.
Only if the disagreeing was in relation to ethical AI.
As far as I know, she's made this about perceived sexism, racism, and personally attacking her managers for failing to step up regarding various topics NOT related to AI ethics. There hasn't been much discussion about actual AI ethics, at least not on her behalf.
I would agree: when you consciously step over the legal line, you accept the coming dismissal. Entitlement is a label. I think she knows cause exists. She "left" long before, and her anger reflects the construction of events which led there. I doubt she really feels entitled, beyond the expectations of respect and recognition somebody at her pay grade would normally expect.
"Every moment where Jeff Dean and Megan Kacholia do not take responsibility for their actions is another moment where the company as a whole stands by silently as if to intentionally send the horrifying message that Dr. Gebru deserves to be treated this way."
"Dr. Gebru refused to subjugate herself to a system requiring her to belittle her integrity as a researcher and degrade herself below her fellow researchers. Within the next year, let those of us in positions of privilege and power come to terms with the discomfort of being part of an unjust system that devalued one of the world’s leading scientists, and keep something like this from ever happening again."
In short, she was getting paid handsomely while calling out her bosses by name and taking them through the dirt. I have nothing but respect for people willing to fall on their sword for the sake of making the world a better place. But if she thought even for a second that she could possibly be a martyr and also keep her job, she's delusional. I have a hard time aligning myself with people who are fighting for the right cause, but have completely lost their touch with reality.
Perhaps she expected to get fired and knew that moving the files was only going to accelerate the inevitable? In that case, she deserved to be celebrated. But if she thought that she was untouchable, and in addition to being a major PITA for her employer also ended up so easily getting caught red-handed for a fireable offense - then I start questioning her judgement and have to wonder if her arguments against Google are equally skewed.
> Dr. Gebru refused to subjugate herself to a system
Like working at a gigantic publicly traded US corporation?
> belittle her integrity as a researcher and degrade herself below her fellow researchers
Funny way of phrasing "paper wasn't good enough yet to pass peer review."
> let those of us in positions of privilege and power come to terms with the discomfort of being part of an unjust system
Like those who went to elite academic institutions and then went on to make boatloads of money at one of the worlds best known, respected, and profitable companies? Must be nice to have that sort of economic privilege where you can go out of your way to get fired and not to have to worry about losing your home or not being able to pay your bills.
> part of an unjust system that devalued one of the world’s leading scientists
I don't mean to insult her or anything, but one of the _world's leading scientists_? Really? The myopia and self-importance of the Valley never fails to baffle me.
> In short, she was getting paid handsomely while calling out her bosses by name and taking them through the dirt.
Exactly. Who wants to work with someone who craps on everyone around them when they don't get their way? And what company is going to tolerate that sort of behavior out of someone who is supposed to be a leader?
News flash: if even Jeff Dean thinks you're an unbearable asshole, you got some issues.
No, the opposite. Jeff Dean is known for exceptional kindness and humility. He is the kind of person that wouldn't have a problem with a researcher standing up for ethics.
Therefore, the implication is that anybody Jeff Dean cannot bear is an unbearable asshole.
Does Google have a particularly effective PR machine? I think people like Google because of things like search, Maps, Gmail, YouTube, Android, etc. - not because they are some PR wizard.
They don't :), but unless someone is willing to take the risk themselves and stand up to defend Google, what is reported in the press will then simply be known as "truth". And it is their PR's job to do something about it. Although arguably not doing something about it is also a viable option.
You’re conflating the current person who was fired and did the public callouts with the person who has a Doctorate from Stanford and was fired or resigned because Google wouldn’t let her publish a research paper criticizing large language models. Let’s instead just not trash people we don’t know and whose motivations we have no firsthand knowledge of.
To me the story is that, it turns out, when you fire a fairly accomplished ethicist in a way that people perceive as unethical, a bunch of ethicists they work with don’t take it well - and in this case it looks like they went pretty extreme on it.
> Funny way of phrasing "paper wasn't good enough yet to pass peer review."
Peer review happens after it's submitted, in a blinded review process, by other researchers who don't have a conflict of interest - i.e., people who are at other institutions. It does not happen internally by people hand-picked by management.
You can disagree about the merits if you want, but it's important that we're on the same page about the facts. The review process Jeff Dean got mad about was an internal review process (for IP etc.), not a peer review process. It also was approved in the internal review process, which Jeff Dean confirmed in his letter. He was unhappy that the reviewer chose to review it and approve it within a day (which the reviewer was allowed to) instead of sitting around for two weeks to give management a chance to stick their fingers into the process.
Gebru was a very bright individual who could have gotten a job in academia, but instead chose to work for Google. Part of that arrangement was Google got discretion on what she could print. They wanted her to change what was printed because they believed the paper was unfairly critical of technologies Google had a stake in.
She disagreed, threatened to resign, wrote an unprofessional email and then Google accepted her resignation.
Very bright in some aspects, maybe. But also apparently insensitive, abusive, narcissistic, myopic, clueless about corporate realities, and either delusional or dishonest, with extreme bias in many communications.
Oh, that's not the scandal at all, and I don't know why people think it is.
The scandal is that Google, already one of the most powerful entities in the world, is building an AGI and doesn't even want the slightest veneer of accountability for it. Nobody who isn't on Gebru's old team has any realistic access to know what they're really building. All we know is that the AI keeps outsmarting humans.
And now they've fired the manager in charge of the "AI ethics" team.
What are they doing such that they felt obligated to hire an AI ethics team and then ensure that team could not criticize Google?
If you worked in this field you would know how ridiculous your claims are. Dr Gebru wasn’t trying to stave off the robot apocalypse; she was interrogating biases exhibited by some neural networks due to the data they are trained on and how they are trained, which is something people have been looking at for as long as statistical learning and statistics have existed. And when her colleagues raised reasonable scientific objections to her claims, she went on the warpath and started calling her colleagues a bunch of dumb racist and sexist dirtbags. It is actually fairly well known what Google is working on, because they publish most of their work; anyone in the field can go pull up their research and have a pretty reasonable idea of what they’re working on. There’s not much mystery there. And despite claims to the contrary, the “AI ethics team” isn’t some grand overarching team; they’re a small team in one small part of the Google hierarchy - the leads of the AI ethics team are/were essentially first-line managers, which is hardly the grandiose mandate that everyone seems to be attaching to it.
People are attaching far more significance to this whole flap than is remotely reasonable to anyone who works in this area or has any experience in the corporate world.
If the AI ethics team is a small team subservient to the AI practitioners, and they don't have the ability to raise concerns about small and apparently well-known problems, how could they ever hope to raise concerns about big problems?
And whether I work in the field or not is irrelevant (though, as it happens, I was fighting with deep learning GPU drivers just yesterday). The whole problem is the idea that nobody has the right to direct the AI beyond the people who are running the AI, that if practitioners have been "looking at" a specific concern, then it's illegitimate for anyone else to ask questions.
> Peer review happens after it's submitted, in a blinded review process, by other researchers who don't have a conflict of interest - i.e., people who are at other institutions. It does not happen internally by people hand-picked by management.
This is absolutely untrue. You're confusing peer review at a journal with peer review prior to submitting to the journal. It's super common to have papers reviewed prior to submission anywhere, and that review is usually done by peers or even superiors.
She's also not publishing this as just herself; she works for Google and is publishing on their behalf. Of course they have veto power over what she publishes. If you take issue with that, quit. Which, I believe, she did (she didn't get fired AFAIK).
Yes, there exist review processes by peers, and yes, your institution can choose to exercise veto power. The scientific process of "peer review" is a very specific thing. (My code is not "peer reviewed" in the scientific sense simply because a teammate, a peer, reviewed my pull request.) So saying the paper was not good enough for peer review is misleading.
And yes, Google can veto any paper. That's not the issue. The issue is they chose to use their veto power to block a paper that did not sufficiently praise Google. That means no paper coming out of Google can be trusted, because it's not the output of researchers but also management. Does Spanner work? Maybe there were some negative examples that were deemed too embarrassing to Google and removed.
I don't know, maybe you already acted as if no paper coming out of Google could be trusted, and you were smarter than the rest of us in that regard. But a lot of people act as if their papers are trustworthy and not propaganda (e.g., https://blog.acolyer.org/2015/01/08/spanner-googles-globally... takes the Spanner paper as if it were a legitimate scientific paper). Want to tell them they need to stop?
My organization includes a peer review process before work is submitted to journals. It makes sense, because how else would an organization be able to do this except with peers?
There are multiple types of peer review, not just from formal journals. Your statement of a narrowly scoped peer review being the only kind is incorrect.
I’m not talking about Google specifically, but I imagine any institution creating scientific knowledge will have multiple phases of peer review before anything reaches the literature.
Separately, your logic of “no paper coming out of Google can be trusted” is faulty. It’s not that it can’t be trusted at all, it just can’t be trusted for some things. Peer review outside Google should identify whether Spanner has bugs or whatnot. And I think it’s safe to say that Google's work should not be assumed to consist only of what’s published.
So while the things Google publishes should be true, since they are reviewed outside of Google, we’ll never know about the things not published.
Also this is sort of the point of science right, you shouldn’t blindly trust things based on institution, but need to review critically based on your own experience and expertise.
She was complaining about her peer and co-lead being fired by top management, going around their shared immediate management, dishonestly and in retaliation for internally highlighting problems.
I think the one thing you can be absolutely sure of is that she didn't see herself as invulnerable.
> so easily getting caught red-handed
“So easily caught red-handed”? It took 5 weeks, from when she was locked out over a supposed automated alarm about possible unauthorized activity, for her to be fired.
If (and no one not directly involved has any way of knowing this) she was actually “caught red-handed”, it doesn't seem to have been very easy.
> Dr. Gebru and I had just been promoted to “Staff” Research Scientist, which is meaningful in the world of STEM, with the sort of honor associated with becoming tenured. We had thought it meant a certain amount of job security.
I’m assuming the parent comment means she didn’t see herself as invulnerable after Dr. Gebru’s firing. The quote describes their mindset before the firing in order to highlight why she found it unexpected.
"she was getting paid handsomely while calling out her bosses by name and taking them through the dirt. I have nothing but respect for people willing to fall on their sword for the sake of making the world a better place. "
There is a major implied assumption here that those she was 'calling out' were actually doing bad things.
I sincerely doubt this of Jeff Dean & Co., and a look at the specifics of the Timnit case doesn't, I think, make Google look bad.
In summary: she was castigating managers and peers who were actually acting within reason.
This employee was toxic, furthering more toxicity.
Just because you believe that you're 'doing good' doesn't mean that you are, or those who you disagree with are evil.
This is the big blind spot of (some of) the ostensible promoters of Social Justice: the self absolution and assumption of perfect moral clarity.
Ironically, Google is full of good people who, more or less by virtue of their everyday practices, are making the world a tiny bit of a better place. (Except for the anti-competitive things of course, but that's more strategic impetus.)
There seem to be quite a few people that consider Dr. Gebru to be one of the leading scientists within that specific (sub)field.
I don't know the field well enough to have an opinion on the merits, but it seems like a view one might disagree with yet not be surprised to find somebody holding.
But calling someone "one of the world's leading scientists within their specific subfield" is a much, much weaker claim than calling them "one of the world's leading scientists". People in the latter category win Nobel Prizes, not Twitter fights.
When both people involved share a subfield and the piece is talking about what's fundamentally intra-subfield drama, I'd say there are three possibilities:
1) They meant "within the subfield" but assumed it could be taken from context
2) They meant "within the subfield" but have an overinflated notion of the importance of their subfield
3) They genuinely believe the strongest form of the claim
Which of these possibilities one considers most likely probably depends more on one's prior opinion of the people involved than anything else, and I continue to not really have a prior opinion because this whole situation is a trainwreck and I don't know the background well enough to be able to even start to try and read past the biases of the various people commenting.
>The firing of Dr. Timnit Gebru is not okay, and the way it was done is not okay.
If I remember correctly, she made an ultimatum and threatened to resign, and her resignation was accepted. That doesn't look like a firing unless we call any resignation over a disagreement with the employer a "firing".
To me, that intentional-looking sleight of hand with firing/resignation in Mitchell's doc already undermines trust in whatever point she is making there.
I have no idea why the media (people throw this word around, but in this case it's a lot of major outlets) keeps reporting it as her getting fired when she clearly threatened to resign as an ultimatum. Maybe Google making it immediate changed things, but saying "fired" is pretty disingenuous.
But it was still an ultimatum. Acting like she was fired out of the blue is dishonest. She threatened to resign unless things changed, so Google fired her.
> In short, she was getting paid handsomely while calling out her bosses by name and taking them through the dirt
I think their job was just that: call out the company (and others) on AI ethics issues - I don't see how it comes as a surprise to you that a person in such a role is also concerned about ethics in general, or the ethics of AI ethics at Google, so to speak.
The way I see it, so much about this hassle seems to point towards some fundamental misunderstanding about what their job was. Perhaps they believed, like you say, that their job was to call out the company and others on AI ethics issues, but the management who formed and funded that team believed their job was something else. I think what Google wanted from that team was research on applied methods for making the models less biased and on process improvements for deploying them in a more ethical manner - i.e. to work on technical solutions that mitigate the ethical problems they identify, enabling Google to achieve the same goals through ML but with fewer ethical "side-effects", so to speak.
For a specific example, I presume Google would be happy if the team produced methods that help facial recognition generalize better to different ethnic groups, or methods that identify what specific gaps in training data should be filled to cover a diverse target audience; but apparently did not consider highlighting problems where the only solution is "don't deploy such models" and social activism goals as their intended job.
So when push came to shove, this misalignment of goals had to be addressed; either by one side or the other changing their goals (which seems to have been unacceptable for both Google and Timnit & co) or by firing/quitting.
The unit was full of academic researchers, who regularly published papers and collaborated with others on papers; the whole kerfuffle started with Google's uncharacteristic request to put a lid on Dr. Timnit Gebru's paper. To me, that doesn't sound like a team that expects, or is expected, to just work quietly on internal - likely confidential - projects.
I suspect Google leadership was on board, and wasn't even aware of the misalignment until Dr. Gebru declined to pull a paper that raised questions about a crown-jewels-level, Jeff-Dean-supported AI model that materially impacts Google's finances (and likely some leaders' bonuses too).
Now that Google leadership is aware of the misalignment, I think its actions show the direction they are taking going forward.
Her call-outs had nothing to do with AI ethics, though, and centered almost entirely on alleged sexism, racism, and shady HR practices, with a sprinkle of political correctness on top, with regard to her colleague's firing.
She started slandering her employer on Twitter in front of the whole world, when that conversation should CLEARLY have been private, and she got fired for it.
> “After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.”
I assume this was to release material to the press to shame/pressure Google, and wasn't corporate espionage. But what reason do you think she was doing it for?
Dr Gebru was talking with a lawyer before they fired her. Mitchell was likely contributing to the case because she herself was experiencing retaliation from Google. At this point it doesn't really matter if it goes to the press or ends up in a complaint - Google's retaliation here is illegal. You can't reproach an employee for simply talking to a lawyer, and that's exactly what Jeff Dean and co did.
I am a major pain in the ass at my company. I point out flaws constantly and clearly state what problems exist. But when I do that, I literally don't name names, and I make general statements such as "Company does not currently have the expertise to solve this issue and is requesting a consultant" while talking about basic issues, or "Everyone is leaving because IT is seeing how bad things are, and we're at the forefront, so all the problems will be hitting the rest of you guys soon." And even then, I think if I wasn't in a country with such strong legal employment rights I would so get fired for it. So calling people out by name just seems nuts.
Jeff Dean is getting paid two orders of magnitude more and yet he let the fire burn for several months before things went critical. Everybody in this story is making too much money, but some way way way more than others.
Money isn’t usually enough. I walked away from a job paying more than $1m/yr (to be clear - I am not rich or financially independent, the stock is still illiquid atm and I live in a 400sqft in-law unit). You need satisfaction overall with your environment.
I kept asking myself how much a year of my life was worth and it turns out - unless it’s fucking insane money ($10mil+ liquid) then I can’t suffer through something I disagree with or basically feel like I’m someone’s bitch.
I think eventually to a person it feels like you’re living a dishonest life because you’re working on something you disagree with but you have to shove down those feelings to get paid. It grows resentment. But who knows, I’m really only speaking to my own experience.
None of it is nonsense that I can see; a lot of it is jargon-laden and clearly assumes context that people outside of Google wouldn't have, so they can't understand details beyond the general shape of the point. And it's “poorly written” from the perspective of a formal document, as it's fairly stream of consciousness without a lot of editing for structure, but it was a posting to an internal and, by most accounts I've seen, fairly informal discussion group. Google seems to be notorious for (1) having these, (2) encouraging their use for informal internal discussion aimed at improving the environment, including critiquing internal cultural problems, and (3) intermittently deciding to treat the fact that people took them seriously about #2 as a key factor in termination.
I agree it's poorly written - just like her twitter, and it even sounds nonsensical, but why is it nonsense?
The basic point seems to be that she has been handled differently than others (discrimination), made to wait a lot; and that she has been handed a "final decision" without any explanation, which she tries to present as a big no-no and against any inclusion effort that Google officially virtue signals.
Furthermore, her point about how OKRs don't incentivize what is officially Google's policy (diversity, equity/equality, inclusion) seems solid. Ironically, Damore's memo tried to make the same "data driven" argument, and that made Google's blessed diversity team look pretty foolish, so he got fired because his style was also poor and nonsensical.
Unless we are talking about a job with inherent long-term health risk and/or illegal activity with prosecution risk, I would be willing to put in a year or two in order to secure lifelong financial independence. 2M USD is enough to comfortably spend the rest of your life doing what you really want to do.
Depends on where you want to live and the lifestyle you want. Personally for me, it’s not enough. I’d need $10mil+ to retire. Even then, I don’t want to retire. I’d prefer to keep learning and growing. I have goals of eventually running a large organization or doing my own startup.
Also, again, you’re literally trading years of your life away.
Of course it totally depends on location and lifestyle, but I guess that someone who is quitting a $1M/yr job is not really optimizing for an expensive lifestyle. In any case, there are a lot of places in the EU and USA where you can have a high quality of life with a warchest of 2M cash.
And yeah, let's clarify that I don't mean retiring when I say you can do what you want, and I was not assuming you'd have no other income for the remainder of your life. For me it's having the freedom to work on what interests you, with people who are interesting, and never again having to suffer fools in order to make ends meet. We all trade time for money; I would just prefer the condensed version and be done with it.
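As a rough back-of-the-envelope sketch of the "2M is enough" claim above (assuming a conventional ~4% annual withdrawal rate, which is my assumption and not a figure anyone in this thread gave):

    # Hypothetical sketch: the annual budget a $2M warchest supports at an
    # assumed 4% withdrawal rate (the rate is an assumption, not from the thread).
    warchest = 2_000_000        # USD, the figure mentioned above
    withdrawal_rate = 0.04      # commonly cited "safe" rate; adjust to taste
    annual_budget = warchest * withdrawal_rate
    print(f"~${annual_budget:,.0f} per year")   # ~$80,000 per year, before taxes

Whether that is "enough" obviously depends on the location and lifestyle being discussed above.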
Interrogating corporate behavior and publicly embarrassing the corporation are two totally different things.
You can employ someone and have them develop and even publicly espouse ethics principles, while also instructing them to keep their criticism of corporate behavior confidential to the corporation.
This is super common in other professions sometimes charged with enforcing ethical and legal corporate behavior, including law and accounting. It's not a new idea.
I suppose that depends on whether you think an "Ethical AI" group's job is as window dressing or to actually make an ethical impact.
If the former, then they probably shouldn't be doing it in the first place; PR people are cheaper and more tractable. If the latter, then there's only so long you can ask good people to stay quiet. Enabling ethical failure is also an ethical failure.
Your analogies don't really work, I'm afraid. The job of corporate lawyers isn't to force some sort of ethical or legal standard. It's to give confidential advice so a company can do what it wants without anybody going to jail. Accountants do maintain a standard that is quasi-ethical, but they do it by controlling their own work. Auditors are closer to the Ethical AI relationship, but they're specifically external, so as to avoid the conflicts of interest in Google trying to police itself. And both accountants and auditors drop a dime on people all the time, and even have protection under law for doing so: https://www.accountingtoday.com/whistleblower
> I suppose that depends on whether you think an "Ethical AI" group's job is as window dressing or to actually make an ethical impact.
1. You can make an ethical impact without applying external pressure, usually through internal channels. That is their expectation when they hire you.
2. Do you really think "apply external pressure via bad publicity on the company, when necessary leaking internal documents" was in their job responsibilities?
Point 1 depends on the internal channels. If you have never had the experience of management not listening to you, I congratulate you on your luck and/or your youth.
As to point 2, I think that if Google didn't put "make a public stink if necessary" in the job description itself, they should have known it was part of the deal. You can't hire people for their independence and their ethics and then not expect them to be independent and ethical.
I've definitely had management not listen to me. But I wouldn't share that information publicly in order to coerce them into making the right call. And if I did, I definitely wouldn't expect to be working there for long.
"Should have known that they were hiring people they would later consider troublemakers" is a very different goalpost than "it was part of the job description".
In which case, you see yourself not as a professional, but as a minion. That's fine, but understand that it works differently for other people.
Consider a doctor, for example. A hospital can hire them and give them orders. But they see themselves as having a duty to the patient and to society. If something isn't right, they will surely say so internally first. But if harm continues, they absolutely will raise a ruckus, up to and including talking publicly.
That's not them being a "trouble maker". That's them being professionals, not bootlickers. Whether or not the hospital administrator puts that in the job description, everybody understands it's part of the deal.
It's weird that you use the terms minion and professional in a way such that the farther you go up the org chart, the more "minion-like and unprofessional" people get, with the CEO probably being the most minion-like and least professional.
Not at all. The CEO isn't doing whatever the powerful tell him. Neither are high-level execs. See Locke and Spender's "Confronting Managerialism" for a good breakdown of the game they're playing.
It's line workers, and maybe first-level managers, who are the most minion-like, the ones who most believe that their job is to follow orders and make their boss look good, without regard to impact, value, or ethics.
Forcing the company to comply with legal standards is often literally the job of a corporate attorney in an employment or compliance department.
And the primary concern isn't to avoid "going to jail," it's avoiding civil liability (for employment) or regulatory action (for compliance). Offering guidance on ethical behavior helps to avoid both of those issues.
Whistleblower statutes are an enforcement mechanism that provide cover (and sometimes motivation) to report or disclose ethical violations. But they are more of an escape valve for truly extraordinary situations, as opposed to the day-to-day guidance that professionals provide.
> Forcing the company to comply with legal standards is often literally the job of a corporate attorney in an employment or compliance department.
Compliance departments may include attorneys, but plenty of the people in them aren't. The force comes not from their attorney-ness, but from being part of a department tasked with enforcement. Lawyers in general definitely are not at a company to enforce the law.
> And the primary concern isn't to avoid "going to jail,"
Allow me to introduce you to the notion of hyperbole.
I think the one thing both sides can agree on is that there should be whistleblower protections enshrined in law for people reporting ethics breaches. (I'm not going to try and suggest how it should be implemented, that's legitimately difficult)
The heart of the matter is what the ethics experts see as an attempt to suppress criticism of ethical problems with Google's technology. So yes, it's very much an ethics breach.
Whistleblowing need not be only about crime. It can just be about harm. [1] Indeed, with new technologies, regulation often has yet to be written, so whistleblowing can only be about harm.
Dean said in his letter that the paper didn’t address relevant literature on the subject, which the authors apparently weren’t aware of. To me it doesn’t seem that the reason was that it besmirched Google, but that it wasn’t complete.
Also, the article wasn’t an ethics-breach type of article, and it was internal, so I’m not sure what the legal protection would be. I think if the employees had some evidence of illegal activity then they would be protected.
If this paper would be considered an ethics breach then I don’t think there should ever be relevant protections.
You seem to be conflating ethics and legality. The two are only hazily related. And yes, of course the executives who hired ethicists to make the company look good disagree with the ethicists on what constitutes good ethics. But your notion that "No one in this scenario reported an ethics breach" is incorrect. We can't of course know the truth of the matter, because Google insists on keeping relevant facts hidden. But unless clear evidence shows otherwise, I'm going to believe the ethics professionals, not the executives whose identity and financial success are strongly bound up with making their company look good.
> This is super common in other professions sometimes charged with enforcing ethical and legal corporate behavior, including law and accounting. It's not a new idea.
It has not worked out well. Making employees keep unethical behaviour on the down-low has resulted in immense, catastrophic harm.
If we are to maintain a healthy society we need a lot more whistleblowing. We need to put work into being better.
> This is super common in other professions sometimes charged with enforcing ethical and legal corporate behavior, including law and accounting. It's not a new idea.
I'd be interested in reading about guidelines for people tasked with "enforcing ethical and legal corporate behavior" in other industries, do you have any references?
As one of many, many examples, leaders in the employment law field regularly write publicly about fair employment practices (particularly when it comes to discrimination, whether racial, age-related or disability-related), and strive for changes in the law and in employers' practices. At the same time, they also counsel their employers or clients about the rules and about compliance. That's fine.
But a lawyer generally cannot publicly critique their employer/client without permission. Public disclosure of an employer's or client's confidential information without permission violates, for example, ABA model rule 1.6, and is a great way to get yourself disbarred (or at least face disciplinary action).
> Why hire an Ethics team if you won't let them interrogate unethical behavior?
The same reason to hire a token minority in a position of on-paper significance: something to point to for PR purposes. But, with an ethics team, there's also another advantage; if you get strong people who would be contributing to the field anyway and constrain their output to suit your interests, you reduce the degree to which areas you wish to exploit come under ethical scrutiny.
Also, artificial intelligence is dealing with a lot of controversial issues. Things like privacy, data ownership, and discrimination. Government is getting involved. SF banned facial recognition, and both progressives and conservatives are accusing big tech platforms of bias. Having an internal ethics team they can claim is independent would probably be helpful politically.
> Why hire an Ethics team if you won't let them interrogate unethical behavior?
There is a huge difference between blatantly violating company policy, name-calling your manager, and creating drama online to garner attention, versus doing your job in a professional manner. She knew what she was doing, and she staged martyrdom for her Twitter mob audience and activist news media.
What saddens me more is that so many people in this thread can't (or are unwilling to) differentiate between doing your job in a professional manner and creating drama online. Sigh, it will only get worse.
My general impression of this entire thing, starting with Gebru being fired, is that the department should have never been named "ethical AI" anything.
It's research into social stuff. How AI interacts with things like faces and how that gets biased by a lot of social issues, like racism and existing class stratification related to race (among other things).
These are things that can have "ethical" implications but when you start talking about "Ethical AI," you inadvertently convince people "We are the Good Guys. We will treat everyone in some extremely above board fashion. Our little department is going to single-handedly right all historical racial and gendered wrongs ever in the history of the universe and all the baggage all of society is living with as a consequence."
Good luck with that. It isn't happening and the implied contract there suggesting it will is just going to piss people off and give them unrealistic expectations.
No, I am absolutely not justifying shitty behavior of any sort, not in this case nor in the general case. Don't come at me with ridiculous accusations of that sort.
It's hard to make progress. It's actively counterproductive to inadvertently set expectations that this one department will meet some crazy strict set of ethics at all times every minute of every day in contrast with the entire rest of the world where we duke it out for small gains on a regular basis.
I feel as though people looking to join such a group would ask, as their very first question, “what is the scope of what we will be studying” or something similar. I don’t think people take jobs like that without having a decent answer to that question, and I haven’t seen any accusations of bait and switch, only of poor treatment of a disagreeing researcher ballooning into this.
I'm pretty confident the crux of the issue is "We expected to be treated better in this department than what we have had to tolerate elsewhere due to sexism, racism, etc."
And I think they expected that based on "This is the Ethical AI department. We are trying to make the world a better place. You do that by walking the walk first, not just researching how AI works."
That's what I think. It's not something I can prove. For various reasons, I'm not really wanting to debate it. I tossed it out there because some folks know I post as openly female here (and HN skews rather strongly male) and some folks know I have studied social stuff and sometimes people find my take on certain subjects of value or at least food for thought.
Assuming you are correct, is there a way to reset expectations without blowing up the department? Not that we shouldn't be striving to be ethical, but it is clear that Google doesn't like the direction the team is heading.
If Google is truly interested in the thorny issues of making AI as ethical as possible, can it do so without tackling the (seemingly insurmountable) issue of social ethics? And if not, do we just throw in the towel and cross our fingers that AI doesn't hurt us too much?
I would start by renaming the department to reflect the idea that the mission is to explore and understand how and why things go wrong socially with the deployment of so-called AI.
I don't know what that name should be. I've had maybe two hours of sleep and this is likely a thing that would require research.
I've had college classes in Social Psychology, Negotiation and Conflict Management and Homelessness and Public Policy. All three of those classes deal with topics you could think of as falling under the umbrella of human ethics. None of them has that word in the title.
You need to first and foremost understand how things happen. You need to understand that before making value judgements concerning ethics.
Speaking of ethics in AI before you even know how and why you get certain outcomes amounts to putting the cart before the horse.
Seeking to understand how things happen for thorny topics inevitably raises hackles.
I was molested and raped as a child. I speak somewhat often on topics like rape and there is a long history of people accusing me of being a rape apologist.
I would like to see rape happen less often in the world. That goal requires one to understand what goes wrong socially without first assuming "Men are just all rapey bastards with hearts of pure evil who do evil things intentionally, knowingly and on purpose."
A lot of people who have already been hurt by the current state of things don't want to understand what happened. They want "justice" by which they typically mean revenge.
You will be facing that same dynamic for AI and issues of racial justice, gendered issues, etc.
Many people will call you a White supremacist or a misogynist for trying to understand what went wrong as neutrally as possible.
But that is the only useful first step for actually redressing a lot of social ills.
Fiction sometimes covers this. People consume it, get jazzed at feeling "their side" understood for once and then routinely fail to actually apply those lessons in life.
Fiction sometimes gives you both sides of a story and lets you see how a+b went kaboom although neither was inherently bad. Research needs to seek that perspective.
Ethics occurs at the point where you apply the research. Simply taking a look at how it happens will not be about ethics and will, in fact, make most people pretty darn uncomfortable.
> Within the next year, let those of us in positions of privilege and power come to terms with the discomfort of being part of an unjust system
If you have this pinned to your Twitter (on 2/5) about your employer, why are you still working there and not resigning? It took Google 14 days to fire her, which I suppose is the typical two weeks' notice counted from that date.
Additionally, she can stage martyrdom in front of Twitter and the vulture news media. If she quit and wrote blog posts about her experience working at Google, she would in no way get the same amount of attention she is getting now. This is called performative activism.
Somehow Microsoft Research manages to be a much more open place than Google and has researchers publish critical pieces all the time. It's ultimately a bit sad that Google, the company started by two PhD students, is incapable of creating a similar culture to MSR.
I have no insider knowledge, but from the outside it sure as heck looks like Microsoft has rebuilt its internal culture since the founding of Google. It certainly doesn’t feel like the MS of Bill Gates or the MS of Steve Ballmer anymore.
If that is true, wouldn’t this imply that the difference between the two isn’t time since these unspecified “trends” began, but instead quality of management?
IIRC MSR was started by Rick Rashid (the inventor of the Mach microkernel that NeXTstep and Mac OS X are based on), and it was always very independent of the rest of Microsoft and its business priorities.
Not surprising. Please read 'The Age of Surveillance Capitalism'; nearly every move by Google (and friends) post-publication (and of course any move pre-publication too) can be captured by the thesis of the book.
This behavior starts all the way at the top with Sergey. See Google's leaked internal all-hands[1] after Trump won, basically calling for the end of the world without any regard for employees who may be fiscally conservative or Trump supporters. Google is by far the worst of the big bay area tech companies in terms of stifling speech, equality-of-outcome policies, and being an echo chamber.
Trump was and is a clown. Brin, in that video, was expressing his personal feelings (that he doesn't like Trump) being a refugee himself. He was not expressing company policies.
That was supposed to be an internal forum. From what I've heard executives treated internal q&a sessions as public press conferences afterwards and try to be "politically correct". Not a good outcome for anyone involved
Ironically I've found YouTube's policies to be far more liberal & open than Twitter or Facebook's. I think Sergey was wrong to kick off about Trump so publicly, but he is also the guy that pulled Google out of China on principle alone. A mixed bag, for sure.
Google doesn’t have any obligation to its Trump supporters. It is an entity outside of its employees and its decisions issue from the CEO and Board. If you don’t like their decisions don’t work there.
For people who are familiar with this, I have a question. How does research in an industrial lab vs. a university work, insofar as independence is concerned? In a university, aside from well-defined ethics guidelines for conducting research (e.g., involving animal or human subjects), staff are generally free to publish the findings of their research at any venue they see fit. Many are tenured professors, so they also have the added guarantee of tenure, which makes them unfireable save for clear violations of policies such as sexual harassment.
How do these things work at industrial labs, such as those of Google and Microsoft?
> In a university, aside from well-defined ethics guidelines for conducting research (e.g., involving animal or human subjects), staff are generally free to publish the findings of their research at any venue they see fit.
I'm not sure what university you were at where that was true - academia is horribly political (in many ways more than corporations, in some ways less) and until you get tenure your job is to put up and shut up. Your career is dependent on good recommendation letters and the goodwill of quality advisors.
Attacking your organization, supervisor or lab will end your career faster than Google cancels products.
Even after you gain tenure, you're still dependent on external funding to keep your research going and employ postdocs and other researchers. That funding disappears really quickly if you decide to start an assault against your organization or the people paying you. Like these Googlers did.
> Your career is dependent on good recommendation letters and the goodwill of quality advisors
Life pro tip: for any paper with more than 2 authors, the last author is the most senior and did the least amount of work (if any). I leave it to you to conjecture about the quid pro quo.
All of what you said is true, but the meaning of tenure is that you are paid a salary by the university and get to keep your job. That's a very generous protection and it's offered to encourage free discussion and exchange of ideas, which is what the University is all about. There are professors who choose to ditch research and make teaching their calling, even at research universities.
Universities are not very well-run when compared to your average fortune 500 corporation, and there is a lot of skullduggery that goes on inside of academic departments, particularly those that do not receive a whole lot of research funding, such as the humanities. The path to tenure is brutal and is full of pitfalls (in fact many positions are explicitly not tenure-track; there are many, many adjunct positions where folks are paid a pittance and forced to sign very unfair contracts that must be renewed periodically). Even with all that being said, academics do enjoy a degree of freedom that those in the corporate world can only dream of.
MSR has a very free research culture. People there are technically fireable by normal means, but Microsoft chooses not to terminate employment for "outside-of-work" engagements, or for the subject of their research. Some very critical papers of Microsoft's AI approach were published under the MSR header. Not all of the findings were put into place, and Aether has the unenviable job of balancing ethical behavior with potential profits. But their ethicists are leaving because they're shouting themselves hoarse into the void, not because they're being asked to change their results.
I have a lot more respect for MS and MSR than I used to. I grew up, and I learned more about both. Nowhere is perfect but, in a different context of the US attempts to define extraterritorial control on data held by European hosted services, MS was the company which stood up to the department of state. Google and the others folded.
I am not sure I necessarily buy your comparison. Do you have any records where the employees have done something similar, e.g., giving ultimatums to manager, sending inflammatory emails to a large group of employees, leaking internal documents to outsiders, in MSR, but didn't get fired?
You are talking about the Google ethicists who are leaving, right?
Also, can you clarify what you mean by 'shouting themselves into the void'? Are they being denied venues to publish their research, thereby rendering their shouting inaudible, or did you mean something else?
Anyone interested in working in a research position should care a great deal that their management hierarchy is separate from their research hierarchy, and that the organization supports researchers working across teams. That will reduce the impact of personality conflicts and assholes.
I know that some places feel that the people managers have to be the technical experts but I would strongly advise against working in such a place.
I hope you are using hyperbole here. Google employs hundreds of researchers who are free to publish their work as far as I can tell (plenty of seminal systems, AI, economics -- auction mechanisms and such, and other papers are testament to this fact). There appear to be some exceptions to this that are causing a furore.
I’ve been wondering how sincere the Silicon Valley corporate mantra of Be yourself, Be authentic, be open to management really was.
There was so much idealism even on Hacker news. Nowadays I witness the strange contradiction of people bashing giant tech companies while mostly discussing how to found the next giant tech company.
A lot of us did genuinely believe it. However, as the companies grew and became megacorps, the realities of life in normal companies slowly made themselves felt.
It’s a bit like the Adam Curtis documentary HyperNormalisation, where we all pretend that everything is still the same as before, but beneath the still-free sushi, massages and high compensation a lot has changed.
For me at least, it really was sincere, but in retrospect I was terribly naive about the shared cultural context required to make it work. My idea of "be authentic, be yourself" just didn't include a lot of people who authentically disagree with me on what a good working environment is like.
Honestly, I think this attitude makes much more sense in the present era than it did before about ten years ago. There was a time when IMHO you weren't just naive for believing that Google was sincere about its "don't be evil" mantra. The current age really is one of cynicism.
Of course, in my mind that's a completely separate question from whether or not Google is being evil in this case. As far as that goes, I honestly really don't know the answer.
I suspect very few employees actually think that google is evil. The phrase was noteworthy at the time because it was so hyperbolic. It was only now when nuance has been thrown out the window that everything has to be black and white.
I hate that the HN audience put so much significance on google “abandoning” (it didn’t) this mantra. Anyone who actually understands depth in any topic knows that it’s not as simple as “don’t be evil”.
Like oh cool thanks man. Yeah now I know exactly what the right moral path is - just don’t be evil duh.
It adds no practical value to decision making other than to serve as directional inspiration. But you’re not going to be sitting there trying to find the right balance of weights in search ranking in order to keep fairness and equity and just be like “right, don’t be evil”.
I'm probably not intelligent then. In the early 00's, I did think Google mostly believed in its mission to organize and act as a gateway for the world's information.
Yep, though I knew even at the time that most companies' mission statements were BS. Google seemed different. I think at some point they still actually were different, but also were themselves naive about what being successful and transforming into a large corporation would do to them.
For myself, I reconcile the tension by telling myself that giant tech companies are like invasive species - aggressive and bad for the environment in the short term, but also the only thing that can survive in a certain kind of environment.
The proper goal, as I see it, is to plan to build the next giant tech company and be willing to step on as many toes as necessary to make it happen... but to have an ideal in mind that goes beyond that 'growth at all costs' stage in which the company could actually increase the health of its ecosystem.
> have an ideal in mind that goes beyond that 'growth at all costs' stage in which the company could actually increase the health of its ecosystem.
Ideal is right, because that is idealistic to the point of naivete. The second a company takes its eye off the ball and starts trying to better the world, its competitors swoop in and take over.
The right answer is for a society to create laws that ensure the behavior we want to see out of corporations. When that's not effective because corporations have undue influence on the legal system, then it's very clear and simple the problem you need to solve.
See, that looks to me like naivete about the behaviour of the regulatory apparatus.
Ultimately, a civilisation runs on both rules and tools, and while we clearly have opposite biases about where the low-hanging fruit are, I think we also both know it's not really true that there are zero improvements to be made in either realm. I'd be happy to share my scepticism about attempts to reform the 'rules' side of the equation if you like, but my own interest is in tools.
Some tools allow a few players to dominate a limited number of niches, others allow many players to thrive in many niches. If you're going to be a toolmaker, trying to gain dominance and then keep the environment static is actually a self-limiting strategy. Nothing in nature or history has proven able to do that. The only time one player can dominate a limited number of niches is when there are few niches to begin with. Successfully dominating a niche actually gives more opportunities to create niches adjacent to one's own. Amazon carving out a niche in e-commerce is what allows it to provide services to authors and vendors.
Wow, this is a good example of sounds-smart nonsense. Deepak Chopra style.
> The only time one player can dominate a limited number of niches is when there are few niches to begin with.
What?? What are you even saying? It's an entire world filled with niches. There's no sense in which niches are limited. How does it ever make sense to talk about "a limited number of niches"??
If that were true, people would still be buying pet supplies from Pets.com and books from Amazon.com. The fact everyone knows what is meant by 'FAANG' should demonstrate how few actual niches exist in the digital space.
Communication is difficult when worldviews are very different and you seem uninterested in meeting me halfway. I'd be happy to explore this area of dialogue further, but you'd need to agree to stop insulting me.
> If that were true, people would still be buying pet supplies from Pets.com and books from Amazon.com
I don't understand. Amazon sells some very large percentage of every book sold in the world. What are you saying?
> The fact everyone knows what is meant by 'FAANG' should demonstrate how few actual niches exist in the digital space.
What does this mean? A niche is a very specific market segment. The presence of extremely large players does not negate the presence of niches. Niches exist whether or not there are companies taking advantage of them.
Communication is difficult, that's why words matter. I think you should choose your words more carefully and make sure you are double-checking that the things you are saying actually make sense.
No qualms for Googlers getting a reality check. I don't get why people think they work at a liberal think-tank where no consequences exist for sharing opinions or confidential data about the company publicly. Had someone on my team decided to write a similar doc like the one Margaret published, I'd have fired them on the spot. Shut your mouth and move on if your ideology conflicts with your employer's.
> Had someone on my team decided to write a similar doc like the one Margaret published, I'd have fired them on the spot
I sometimes don't agree with decisions my (present and past) employers have made. But I'd never go to the media or public forums with those disagreements. If I thought it was so important that I had to do that, I'd resign first.
I strenuously avoid making any public comment about my current employer. (Even past employers, I may be a little more open, but I'm still careful about what I publicly say – don't want to violate any confidentiality obligations, and don't want to unnecessarily burn bridges in case I want to seek re-employment or other commercial relations with them in the future.)
If any of my employers (current or past) caught me downloading company documents/emails/etc in order to share them with the media or other external parties – I'd expect to be fired.
I'm curious, what will Mitchell's and Gebru's career prospects be from now on? After badmouthing their employer publicly, sharing internal communication, I just don't see who in their right minds would hire them.
Many academic departments would certainly hire them - there are universities/departments who pretty much ignore such politics, and there are many universities/departments who would consider their stance as heroic and a bonus for hiring.
Would you please stop posting unsubstantive and/or flamebait comments, and especially personal attacks? You've done the latter more than once recently and we ban that sort of account.
Instead, if you'd please make your substantive points thoughtfully and without taking the thread further into flamewar hell, we'd be grateful.
Which doc are you referring to? Could you link it?
Your last sentence has the implication that people should not attempt the change or challenge from within the corporate capitalist ideology of the big tech companies, which together make up a large portion of the entire technology industry and the US stock market. They should just concede and “move on”.
I’m kind of sympathetic to this view. Someone who joins a corporate law firm to “change things from within” is going to get a deserved skeptical smirk. It’s not just attempting to dismantle the master’s house with the master’s tools. It’s attempting the dismantling while standing next to the master inside his own house, asking him to pass the hammer.
Hiring an ethics in AI research team presupposes you want to have ethics in your AI. It's disappointing, but not surprising, to find they want a bunch of sycophants. Perhaps they shouldn't have bothered creating the department if they didn't want the people working there to speak out.
You’d have to think there was some middle ground between investigating and adding ethical accountability to AI programs and exfiltrating thousands of sensitive files.
I was reading a book about dry stone bridges, and one of the bridges featured was built across a stream in a protected watershed. The builder said he was surprised the permit for it was approved, he told the client to expect it to be rejected. Talking to the government bureaucrats & scientists, he found out that they had to approve a permit every now and then. Not just because of political reasons, but strategic ones -- if they denied every permit, people would start ignoring and circumventing their department. By approving one every once in a while, they kept themselves in the loop as gatekeepers and prevented more harm than they would have by denying everything. His own bridge project was minimally harmful, and was the token approval so they could keep denying the real problem applications.
I feel like there's a lesson there for other gatekeepers trying to fight the good fight.
I think it was from "How To Build Dry-Stacked Stone Walls" by John Shaw-Rimmington, but I pick up nearly every book I see on the topic so I can't be sure.
In this context, it's very likely that you can translate "exfiltrating thousands of sensitive files" to "downloaded some of her emails", perhaps related to fear of retaliation.
The point is that "exfiltrating thousands of sensitive files" can mean many things. It can mean, for example, downloading one's emails to have an independent record of them for fear of retaliation.
It can also mean stealing company IP to try and start a competitor.
Both may be equally forbidden by policy, but they're vastly different acts.
Every time I see cases like activist employees getting fired for violating company policy, the sentiment seems to be that the company is the bad guy. At what point does it become the fault of the activist employee? At what point can we take an introspective look and ask whether we are enabling the toxic behavior of people suffering from a Messiah Complex?
This is my personal anecdote. I had the experience of working with an online-activist co-worker. She was one of those people who are very popular online, but most of her co-workers hated her. She was extremely entitled, always bossed people (who were at the same level as her) around, and did very little of her actual job because she was busy with more important online activism. She had a Messiah Complex, and we were all there just to support her in her worthy cause. That's why I am conflicted when I see activist employees getting fired.
Sure, there are "activist employees", and sometimes companies may be justified in thinking that some of these employees engage in soapboxing to an extent that it's detrimental to the goals of the company or a distraction from their actual jobs, as you say.
But if a company serially butts heads with people whose actual jobs ARE to hold said company accountable in some way, I find it suspicious.
If a company keeps firing accountants "because they kept yapping about GAAP", wouldn't that raise a red flag with you? Or if a company told their auditor's managers that they'd take their business elsewhere unless more reasonable auditing standards were applied — a practice only all too common in the lead up to financial scandals.
If a president keeps firing inspector generals because apparently they are all part of a Grand Conspiracy against America, wouldn't that be a bit suspicious?
So if Google serially fires AI ethicists, some HN commenters keep insisting that all of them must have been uniquely unreasonable and toxic people, but my money would be on Google having increasing issues with being held to certain ethical standards.
As an aside, paradoxically, it may not be a good sign either if such watchdogs stay at a company forever, appear to be perpetually content and grow highly prosperous. The reason why is left as an exercise to the reader.
So every company that has internal audits expects a bunch of sycophants? WTF?
Internal audits, ranging from a technical issue that impacted clients to sexual harassment investigations, are all handled privately, and details are shared AFTER the issue is handled. And the details shared are filtered to avoid legal liabilities/bad PR.
An ethics team can certainly coexist with this; the same paper that caused this chain of events could have been shared internally, a team gotten on board to try to improve on those issues, and both the Parrots paper at FAccT and the fixes published later, simultaneously, in a technical conference.
Still, the whole thing is sad because the research they were conducting is impactful.
It's pretty important to get this story straight: She moved files when she wasn't supposed to. This seems like a pretty standard firing. People are already claiming on twitter that Google is assaulting ethical AI, and I just don't see it.
Google did not immediately comment on those claims. Its statement said of Mitchell: “We confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.”
Not just exfiltration but sharing outside the organization, as per news articles about it.
There's no story here, this is an obvious outcome.
Edit: to add that the more interesting question is whether they should sue her. If they don't, that may be the contextually-appropriate version of letting her off the hook easy (to live up to their culture or for morale or for optics or whatever reason).
You always have to be careful with people who do bad things for "good" reasons (ie the difference between Sci-Hub and reselling pirated papers for a profit).
You want a gulf of difference between the punishment for nefarious activity and the punishment for well-intended-but-unlawful activity.
She is an authority on Ethics, being listed on the ICLR conference as part of the Ethics Review Committee:
> Margaret Mitchell, Google Research and Machine Intelligence
But I do think they can do without her, since there are already 8 people responsible for Ethics and Diversity, with 11 responsible for the actual organization of the conference.
Maybe they can update https://iclr.cc/ to reflect the fact that she is not representing Google anymore?
This is so pretentious. Ethics is a branch of philosophy and even religion, with deep debates raging since 4000 years ago and no end in sight. Hundreds of specialists have been going at it in multi-decade efforts, and yet their opinions carry no more weight than the average citizen's. Mitchell has a BS in Linguistics, an MS in computational linguistics and a PhD in CS. Her first paper about something even tangentially related to ethics is from 2015.
Philosophers have already identified the problem. There is no universal standard of morality or ethics.
The core of ethics derives from evolution. Societies with certain sets of cooperative and competitive values destroy, dominate, and expand. Mutations are constantly happening and some of those ethics can lead to the self-destruction of a culture, while some make a culture even more dominant. Some are neutral and just background noise and have no substantial impact on the society.
In the grand scheme of things the only thing that matters for morals and ethics is whether the culture that embodies those ethics can survive against pressure from other cultures.
Some ethics have been selected into the genome as they provide superior fitness of individuals.
The idea that we should pursue equal outcomes is a mutation. In general it has been shown to destroy a culture (socialism) when competing with more capitalistic cultures. The victim type culture is likely to be much weaker than one where people constantly compete to rise to the pinnacle of power and wealth.
Ultimately power and wealth drive innovation; and innovation, especially of weapons, drives domination. There are other types of domination, such as media, food, language, etc.
Good luck trying to find a consensus in philosophy, especially in such a contentious field as ethics. There isn't one, for a myriad of reasons, from the banal (PhD theses and tenure decisions depend on "new approaches"), to the historical (there are several 'schools' in ethics, each with their subdivisions), to the sociological (society in general does not grant professional philosophers a monopoly on deciding what is wrong or right).
If she didn't do it and they don't have good reason to believe she did, then she could and should easily win a large lawsuit against them, since they have publicly accused her of illegal activity. That is a hugely more risky move than just saying that someone underperformed or was let go for other reasons, and knowing how heavily pre-lawyered everything that Google does is, I highly doubt they would publicize something like that without having evidence to back it up.
Maybe I'm wrong, but afaik she has yet to deny doing exactly what they've claimed. My guess is that in her mind she was doing something totally reasonable by trying to dig up old emails that she remembered that she thought would help her friend's case against Google, and feels that even if that's against the letter of her agreements with the company it should be morally okay. Which is fine as a moral argument, but you're still gonna get fired.
"Maybe I'm wrong, but afaik she has yet to deny doing exactly what they've claimed."
That seems like the natural thing to do if you feel like you may already be in the process of being fired. I assume a lawyer would advise you to be quiet about it, right?
> If she didn't do it and they don't have good reason to believe she did, then she could and should easily win a large lawsuit against them, since they have publicly accused her of illegal activity.
Despite the popular refrain of “truth is an absolute defense” in the context of defamation law, in US law falsity is an element of the tort, which means that the plaintiff bears the burden of making a prima facie case that the accusation was false by at least some evidence, the defendant doesn't bear the initial burden of showing that it is true or reasonably believed.
Also, there is a very good chance, since Mitchell was publicly involved in the controversy over Google's firing of Timnit Gebru before Google made the accusation, that Mitchell would be found to be a public figure within the context of Google's alleged defamation, which would heighten the requirement from falsity plus harm to proving actual malice.
Also, if Google is lying, they almost certainly also spent over a month building a constructed evidence trail and doctoring records to support the lie between the very public lockout and the actual firing.
So, no, winning a defamation case wouldn't be easy, wouldn't be quick, and wouldn't fail to be extremely expensive in litigation costs.
Plus, every step Google takes on this is just resulting in more of the remaining people in the affected area of the company going public with problems, and it’s not like Mitchell seems to be hurting for alternative employment, so likely the only value of winning the lawsuit would be a PR poke in Google's eye that they are getting on their own anyway... why bother?
Rule number 1 in letting people go: never go on record about why you did it. There's a good reason for that - you're opening yourself up to lawsuits. Google violated that rule in a big way by making such a public and damaging statement. You bet they can back it up.
> And, it's not a court of law. At best, it's "allegedly moved files when they weren't supposed to".
Are you operating under the assumption that nothing should ever be stated as a fact unless determined by a court of law? Because that's not how I think of things.
(I do agree that Google is a biased actor here. If I hear that Margaret Mitchell disputes moving files in contravention of company policy I will give her the benefit of the doubt. But my default assumption will be that everyone is telling the truth about basic facts.)
It's pretty important to get it straight: what files did she move and why? So far it appears she did NOT do an Anthony Levandowski and steal IP. In particular Google has said zero about the actual dollar value of those files, or cited any damages from the movement of the files. It appears much more likely she was talking to a lawyer, which is a perfectly legal thing to do, and probably pretty wise given Google's maximally adversarial approach to this whole mess.
Yes you should always respect your employer's property. In tech, that's almost always IP stuff: code, diagrams, plans, etc. But don't let an employer defame a former employee with an intentionally "concise" story.
Just so everyone knows, in CA your assertions are false. CA courts have ruled that if a company protects documents or email systems then they earn the status of a trade secret. Theft of trade secrets is a felony criminal offence in CA. Do I think the CA AG is going to charge her? Unlikely. But there’s no need to demonstrate damages here like in a civil suit.
Exfiltrating documents to your lawyer illegally is still illegal. Such documents would not be admissible in court and should be obtained through the usual lawful channels of legal discovery.
Just so everybody knows, sexual coercion is a criminal offense and yet Google certainly didn’t pursue Andy Rubin for that, actually they gave him a $90m severance.
It doesn't matter what files she moved and why. What matters is they were internal Google documents and exfiltrating sensitive contents that you don't own is a fireable offence in any corporation.
She should have known better.
An employer that has a vested interest in letting you go can always find a reason that can be summarized as some violation of corporate policy.
Right now I'm writing this during the work day on my work computer.
I forwarded a work email to my own personal email once because my work email client wasn't syncing on my phone.
Could these be used against me if my employer cared enough? Undoubtedly.
OP is preemptively judging Mitchell one hour after she announced she was let go. We have no idea whether these claims are legitimate or whether they would survive a wrongful dismissal lawsuit.
Google doesn't care. Paying Mitchell X Million dollars would be just a cost of doing business. In the meantime, an "uppity" provocateur is out of the building and not fighting for change.
Dr. Gebru originally asked, as part of her complaint, that the names of the people reviewing her work be revealed. Does that sound particularly ethical to you? At best she wanted to talk to them face to face about their criticisms. At worst (and this is more likely if you ask me) she just wanted to throw -isms in their faces.
Just to make sure people don't forget that aspect of what appears to be the root of this situation.
Yes, because they were not peer reviewers, and they had no business reviewing the paper in the first place. She went through the process for internal publication review (which does not involve anonymous reviewers, any more than a code review or a preparing-to-open-source-something review or a branding-guidelines review involves anonymous reviewers), and the paper was approved, and then management told her there was a secret extra process and she couldn't know who was involved in the process.
The paper was eventually anonymously peer reviewed by other scientists at other institutions in a process run by the journal (not by anyone's manager) without incident.
I'm not interested in the details of this, but from a distance - people sure do make life difficult for themselves, bringing politics and their opinions and controversy into workplaces. It's always tech and academia, too - you'd think it'd be enough to get just paid shitloads of money for interesting knowledge work and save the drama for friends in the pub (or the bar, or whatever).
Well, maybe. But it's not like this sort of drama is unusual. Maybe you have to be outside looking in to see the absurdity (I'm not in Silicon Valley, or the US for that matter).
I’m very vocal at work about all sorts of things. But to openly condemn your employer repeatedly on social media, steal data, then complain about the repercussions? How absurd. And this person is in charge of ethics for your company? I could not think of a more inappropriate position for such a person.
I think I was not referring to arbitrary high paying jobs, but specifically the highest paying and possibly highest reach jobs, like ethical AI at the biggest tech company. The sociopath term may have seemed a little dramatic, but really, if you feel like there's more than just petty politics problems and you don't feel conflicted, that's concerning.
I was also flippantly dismissing the idea that the tech could be compelling enough to justify ignoring something you feel is really damaging, if such a thing exists. I personally would have a high bar for technical work being sufficiently interesting, and a medium bar for keeping quiet.
The immediate example I think of is that I'd have a hard time accepting FB money to write JS there, because of how they ultimately make money, and maybe because of not respecting the CEO. Granted, I have no idea what it's like on the inside.
Lots of people just chase the next incremental step up, compensation increase, etc. They're not even sociopaths; they just don't know how to find meaning in their lives in other ways.
Out of curiosity I checked out Dr. Mitchell's Twitter feed. For the past several months, she's been posting and reposting articles that are heavily critical of her employer.
Now there is a line between public and private space, but many people use Twitter as a professional network. And Dr. Mitchell includes her Google credentials in her bio.
So given her substantial, critical, public views of her employer on a quasi-professional network - what did she expect?
To quote many commenters when this has happened to people on the wrong side of the political divide:
"Well, they're a private company and they can fire who they want to." Or if you prefer, "freedom of speech doesn't apply to companies".
Personally, I think this weird mash-up of the personal and the corporate has landed us in a fresh new hell, and it's only going to get worse. Eventually we'll need to find a way out, or we'll reach the zany endpoint of your employer firing you because of the way you voted or what you wrote in your diary - but we need to be consistent in our separation between the personal and the corporate.
Very much agree on that. However people are contributing to this themselves by becoming activists within a company. That’s mingling the personal and the professional.
Also don’t bite the hand that feeds you. If you disagree with the hand so vehemently quit and take up the cause.
We have ham at Christmas. Now I've got this weird feeling I've been doing something that's a faux pas but I can't find a reference on the Internet anywhere. Why will people complain about the ham sandwiches?
Jews do not eat ham. They also don't celebrate Christmas. Having an office Christmas party and serving mostly ham is a great way to drive home exclusion. Or e.g. demanding people work late Fridays. Observant Jews absolutely, positively have to get home by sundown on Friday.
Having kosher food doesn't mean we can't have ham. People eat other things at Christmas anyway. And calling it a holiday party doesn't take anything away from anyone.
Well, isn't the simplest solution for them to not eat the ham sandwich or not attend a Christmas party if it's such a big problem for them?
There will always be someone offended by anything you do.
I'm atheist and have no problem celebrating Christmas or Diwali or whatever with my colleagues from work.
I guess it entirely depends on the type of person you are.
Your "politics" is someone else's "fundamental human survival rights".
The exact details vary from jurisdiction to jurisdiction. There are parts of the world where you can be fired for being gay. In most of the US you can be fired for being trans. There are parts of the US where having TAMPONS in a women's bathroom is considered "political".
Thanks for the reply. I can see where you're coming from, I think, but reporting or not reporting the law being broken is political exactly because, as you say, laws stem from the political process, and also because law enforcement is a socio-political activity.
I do see your point, though, since it's supposed to be accepted as a basic premise of society that we set laws and follow them, and all accept the responsibility of ensuring that those laws are followed.
Google manipulates search results to favour certain political beliefs, censors people on YouTube for expressing opposing viewpoints, and quite likely has enough power to skew elections. Plenty of what they do wrong is political in nature, and it should be open to criticism.
I seem to remember Google being criticized for having a disproportionate amount of left leaning employees pushing a left agenda while simultaneously being criticized for showing/pushing right/far right content on youtube. So which is it?
The left-wing employees may have the political power within the company, but the algorithm maximizing clicks could end up recommending right-wing content because more users click on it.
The original claim was that Google uses their political power to push leftish views and censor right/far right views, so that particular claim is not consistent
You have to be wilfully blind to think that tech companies are some sort of benevolent, apolitical, neutral arbiters of truth.
Here's a direct article from Today confirming that tech companies are coordinating censorship with your government.
You can push left wing views while still having more people choose right wing recommendations. There's a kind of feedback loop which pushes people to different viewpoints.
Different politics can correlate with different web usage. Either directly, e.g. because of different type of censorship on different websites, or indirectly e.g. because politics correlates with social class, and social class correlates with web usage.
So maybe YouTube is relatively more popular with right wing, and Twitter with left wing... which would make the average YouTube user more right-wing than the average of the population (and the average Twitter user more left-wing that the average of the population).
It could even be some statistical artifact. Maybe the audience is divided 50:50, but e.g. left-wing users are more likely to produce their own videos. That would mean that right-wing videos have on average more views per video.
It could be an artifact of censorship itself: suppose that e.g. extreme right-wing is 5% of population, and they produce 5% of videos. But half of those videos are removed by YouTube. If they keep using the same search terms, the surviving videos will have twice more views per video. And if then YouTube moderators freak out and delete half of those, the views of the remaining half will double again, feeding a spiral of hysteric moderation and extreme popularity of the few surviving fringe videos. (And it doesn't help if the content-agnostic algorithm predicts -- correctly -- that the popularity of these videos is likely to skyrocket.)
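To make the arithmetic in that last scenario concrete, here's a tiny, purely illustrative Python sketch. The numbers are made up, and the key (debatable) assumption is that total demand for the niche stays constant while videos are removed:

    # Toy illustration of the "views per surviving video" artifact described above.
    # All numbers are invented; this is not a model of any real platform.

    def avg_views_per_video(total_views: int, num_videos: int) -> float:
        """Average views per video, assuming total demand stays constant."""
        return total_views / num_videos

    # Suppose a fringe niche produces 1,000 videos attracting 1,000,000 views total.
    views, videos = 1_000_000, 1_000
    print(avg_views_per_video(views, videos))   # 1000.0 views per video

    # Moderators remove half the videos, but the audience keeps searching for the
    # same terms, so (under this toy assumption) the views concentrate on survivors.
    videos //= 2
    print(avg_views_per_video(views, videos))   # 2000.0 views per video

    # Remove half again: the few remaining videos now look wildly "popular", which
    # a content-agnostic recommender may read as a signal to promote them further.
    videos //= 2
    print(avg_views_per_video(views, videos))   # 4000.0 views per video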
Unfortunately, free speech protections are generally to protect you from government action that may suppress speech, while private entities have a lot more discretion.
We also learned about “First Amendment values” in law school. This idea that there is no moral component associated with the First Amendment is a new and weird one.
One of the first things the founders did was to pass the Sedition Act of 1798, so whatever the moral component of the First Amendment is, it wasn't as clear as it's made out to be today.
It was the Alien and Sedition Acts, closely tied to a fear of foreigners and of national collapse in general, before the country had had time to gel. You can squint and hold it up to the light and see it as a principled disagreement over free expression, but that's pretty ahistorical.
The important point here is that limits on speech and free expression in American discourse have traditionally gone hand in hand with a fear of foreign influence. If anything, that gives more of a moral edge to the argument over free speech, not less.
Yeah, I'm referring specifically to the Sedition Act of 1798 (one of 4 "A&S" acts), which was not principally used against foreigners --- and, in fact, was exploited by its opponents (notably Jefferson) to persecute political adversaries.
"Are we bound hand and foot that we must be witnesses to these deadly thrusts at our liberty?" (The thrusts were words on paper.) "Are these approaches to revolution and Jacobinic domination to be observed with the eye of meek submission? No, sir, they are indeed terrible; they are calculated to freeze the very blood in our veins. Such liberty of the press and of opinion is calculated to destroy all confidence between man and man."
The target of this speech, by a proponent of the Sedition act, was Benjamin Franklin Bache, a printer and native son of Pennsylvania.
My point is just that I'm deeply skeptical of moral principles encoded in the Constitution. I think it's a good system, a set of functional and useful institutions, not something I would want to fuck with, &c. Maybe not my moral north star, though.
I don't understand what you're trying to say, but of course my point is that the authors and ratifiers of the Bill of Rights immediately turned around and abrogated them.
Free speech is, at its core, dangerous speech. It's speech in favor of trans rights when nobody supports them. It's speech calling attention to inequality nobody wants to hear about. It's speech that's unfavorable to the government and our neighbors.
Safe speech isn't free speech. Free speech is the engine of change for things like trans rights, same sex marriage, equal protection under the law and even evolution being taught in schools.
If you do not hold free speech as a moral value, if you don't feel a moral duty to use speech that may be censored or unfavored, then you don't agree with using speech to fix social problems.
Dangerous because of what? Because of government action? That's regular free speech. Because of the actions of powerful persons? We have other actions covering their possible retribution.
Why exactly is free speech something important in the private arena?
I still don't think it's a moral value on its own.
I'm a little conflicted on this one. On the one hand, it seems like a clear case of 'live by the sword, die by the sword'. On the other hand, maybe a bunch of gun-toting conspiracy theorists speaking up for these professional ethicists would be the hand of friendship that could breach a deep but narrow chasm in public life.
But realistically, I imagine if a gun-toting conspiracy theorist showed up, someone like the article subject would scapegoat them for another month of comfortable employment, so she's probably exactly where she needs to be.
At the same time, Google unveiled a "Black-owned businesses near me" feature on Google Maps. How has that been received? We need look no further than Twitter (the TL;DR: racists are using it to drive down the ratings of these Black-owned businesses).
Google recently added a feature allowing users to search for "Black-owned businesses near you". They're advertising it all over the place (see below).
I've also already started hearing from Black-owned businesses I follow that they're seeing an influx of racist fake reviews...
BLK MKT Vintage, for instance, got over 30 new, clearly fake reviews in 24 hours (e.g., calling them "a store that promotes segregation", saying their items were "broken" and "covered in dead cockroaches").
And if you look at the users writing those reviews...
The users leaving those fraudulent 1-star reviews for this business are also leaving 1-star reviews on dozens of other Black-owned businesses.
All this while Google pays for PR lauding this new feature.
We need to ask: is it always a good idea to make marginalized groups more visible? Whom does it benefit?
Have such companies thought through the consequences for those they're "helping"? Was this done at the direction of those groups, or paternalistically on their behalf?
This is yet another example of why app features cannot and should not replace community networks and mutual aid.
Technosolutionism, the white savior complex, and paternalism towards marginalized groups lead to selfish PR efforts that actively harm those they ostensibly "help".
In what will be an entirely self-serving, "everything is evidence for my existing worldview" claim...
...I would promote race-blindness as the solution to this. The more we talk about and focus on race, especially in a zero-sum context like "buy black-owned", the worse the racial tensions and competition will become.
People confuse cultural power with economic power...just because many powerful people/businesses support black enterprise right now, doesn't mean the wider society will just blindly follow with fists full of cash.
Racism is terrible. "Antiracism", with no regard to the actual outcomes, is hardly better. These businesses were not being harassed by awful people until Google tried to leverage them over their non-black-owned competitors.
> The more we talk about and focus on race, especially in a zero-sum context like "buy black-owned", the worse the racial tensions and competition will become.
My dad was telling me this yesterday. He saw sectarian conflict growing up in Bangladesh, and working in international public health, he’s been to 70 countries and has had a chance to see ethnic conflict first hand. In his observation, emphasizing differences will increase rather than decrease conflict.
Oddly, this is very similar to how colonial powers divided and conquered countries. They would emphasize differences between groups and treat them differently based on group membership, fueling animosity between them. Obviously the motivation for doing those things here in the US is different. But if you're telling some kids they can go back to school and others have to stay in remote learning, based on the color of their skin, I'm not sure why you would expect the result to be different. Measures like kids getting back to in-person learning at different times based on skin color should worry people; see: https://www.wsj.com/articles/can-school-be-antiracist-a-new-....
The rather old viewpoint of content of character over skin color is the only good way forward in my opinion. I find it to be a real tragedy that skin color has come under hyper focus in the last decade.
I think it stems from a tragic mixture of deep desire to make things better (a good thing) and youthful impatience. Combined with the accelerated ability to disseminate ideas via the internet ... we are in an ongoing flame war that I never would have imagined. It makes me feel sick, sad, and frustrated.
See, I could probably get on board with being passively opposed to about half the stuff on that triangle. The other half is either outright bs or stuff I've never heard of.
Yes, antiracism must be measured by its effects. Google's campaign had antiracist intentions, and racist effects.
That doesn't mean we should all give up on effective antiracism, though. This tactic/campaign/etc. was not antiracist. Others can be. It's not clear at all why this should be taken to mean that "the more we talk about race...the worse the racial tensions and competition will become." The conversation around race in the US has been ongoing for decades, and there are many objective metrics whereby things have not become worse.
I could have been clearer but I meant to stress "in zero-sum contexts". Not all race discussion is necessarily bad/harmful in outcomes.
However it sure does appear to me that in the last five or ten years, the manner of the conversation has degraded so much that very very little progress is made per unit of "increasing tensions". That is entirely subjective, but it is how it appears to me.
Racism must absolutely be measured by intention, not effects. The very definition of racism is somebody actively discriminating against another race. Any other definition just ends up discussing Marxist ideas like "systems of oppression" which go nowhere and actively harm the conversation.
No, you'd like to define racism narrowly. Not that I can get in your head, but doing so sure makes it easier to imagine ourselves as not complicit. But evil is all too often banal, and passive.
I'm not complicit. I've never been in a position of authority over hiring or loans, etc... nor have I ever treated anybody differently for their skin colour. Racism is about intent and your definition expands it well beyond what figures like MLK understood it to be.
MLK named "the white moderate, who is more devoted to 'order' than to justice" as "the great stumbling block in [the] stride toward justice." I'm not expanding a thing.
That's still incorrect. I'm happy for there to be disruptions in order to get justice. What I don't agree with is that I'm somehow complicit in society's problems by virtue of being born into it. Or would you be prepared to charge your children with that burden too?
Here's one guess, off the top of my head: what actions have you taken to reduce the likelihood that police in your community will kill a black person unnecessarily?
Cause as much as I care about that issue, I never reached out to leaders in my community to encourage hiring more police and reducing overtime, for example. I didn't join any protest. I have been completely passive, standing by, waiting for someone else to solve the problem.
I could save a lot more lives per dollar or hour by working on antimalarial treatments in Africa. There are countless things we're not doing to save lives. That does not mean we're relentlessly evil.
"If you aren't working to save every life, you're basically evil" is Godwin's Law territory.
You're strawmanning. I didn't call anyone relentlessly evil—the whole point is that complicity is woven into our daily lives in a way that's banal and easily ignored.
You could direct more resources this way or that. Do you? What about your time? Do you invite others in your life to do so as well?
> We need to ask: is it always a good idea to make marginalized groups more visible? Whom does it benefit?
No it doesn’t. And I wish people would stop experimenting with this sort of thing and treating it as some sort of fad.
I grew up, as an immigrant from a Muslim country, in a white Republican suburb in the 1990s. And it was great. I can’t imagine what life would have been like today - constant reminders of the fact that you’re different, white teachers admitting they’re “gatekeepers of white supremacy,” using “whiteness” as a pejorative, etc. I’m fully developed and fluent in the language so I can tolerate it. But I’ve got kids and this is not the culture I want my kids to grow up in.
Leaving aside whether this stuff is right on the merits: there is a limited number of white people who will put up with a constant drone of being called "racist" and being blamed for things that happened before they were born. As these confrontational tactics continue, people's patience will wear thin. And if it becomes more prevalent to treat kids differently based on their skin color, such as Evanston's proposal to keep white and Asian kids behind in remote learning while letting other kids go back in person, people's patience will evaporate in an instant. And overstepping that tolerance will have real consequences for people of color.
When white people amplify this hyper-activist, in-you-face approach, they aren’t just drawing attention to a problem, they’re picking a particular solution. And because they comprise a majority and control the culture, they can impose an approach that people of color (who have to live with the consequences) don’t necessarily want. When you jump on board this bandwagon, you’re not just changing the culture for yourself. You’re not just changing the culture for activist POC you went to grad school with. You’re changing it for the blue collar worker who happens to be a POC. And you can bet that what kind of culture he wants to live in is not well represented in the academic and activist circles that dream up these approaches.
> And if it becomes more prevalent to treat kids differently based on their skin color, such as Evanston's proposal to keep white and Asian kids behind in remote learning while letting other kids go back in person, people's patience will evaporate in an instant.
Woah. The accelerationism here is truly scary. If our kids grow up in an environment with this rhetoric then it's not inconceivable we end up back in a segregationist hell-hole. Only needs to be normalized for one generation before the damage is done.
You think your kids won't be white? My father wasn't white when he came here 30 years ago. Somehow magically I am. The same thing will happen to you too.
I call it Schrodinger's whiteness. You are white enough to be blamed for right wing racism yet not white enough to not be the target of it.
It's hilarious to me that the "diversity in tech" movement scrupulously avoids noticing that all kinds of asian people exist in our industry.
Acknowledging them would undercut narratives about white supremacy, but you can't get away with calling them white either... so they're invisible! Good for them, frankly, I'm jealous.
I really didn't understand why they were doing that. It seemed very strange. I couldn't quite put my finger on why it didn't seem appropriate... just that it seemed wholly irrelevant to how I would decide where to shop. I freely admit, however, that as a middle-aged white male I may have some blind spots.
I would very much like to know how much testing Google did on the “Black-owned businesses near you” feature. I know they extensively test algorithm changes to their search engine before rolling them out to everyone. It would be really disturbing if they didn’t do due diligence on this feature, to make sure it has a positive impact for businesses using it.
Keeping lists of people with special traits should be kept to an absolute minimum. Just ask the Dutch (good people were punished and killed trying to destroy the official registers of Dutch Jews once it became clear the Nazis sought them).
Pointing out the differences between people isn't going to help!
In the place where I grew up in Europe there were almost no foreigners. But during summer holidays we lived next to a really black family at a camp site for years. Their dad was friends with my dad and, well, that was it. They were humans like the rest of us.
Today it is almost like it is scarier to meet people from other cultures.
And if it is opt-in, and happened to boost business instead of attracting racist reviews, I would fully expect non-black owned businesses to capitalize on that and opt-in as well.
The mandate of the department is ethics. Ethics is not agnostic to one's personal convictions, unlike code. People assume that because someone got fired, they "lost". Another viewpoint is that they retained their ethical integrity at great personal cost - doing so is precisely their actual job.
I’ve seen this bait and switch a couple of times now, but there’s not really any reason to expect a researcher working in an Ethics Research team to only do ethical things, in the same way that you wouldn’t expect someone working on a Truth Team to only do true things. The nominal mandate over her team does not extend as if a veneer to all of her personal actions. It’s just a name.
Her team was very specifically engaged in culture war, and wanted to wield Google’s clout in that war. Google disagreed. That disagreement led to her co-lead refusing to withdraw and edit a paper, so that co-lead was terminated. This really shouldn’t have escalated as much as it did, but it happened, tough shit. Meg was understandably upset about this, but then [appears to have] made the unforced error of sending internal documents to external accounts, and thus was canned.
Companies generally need to be able to trust that you aren’t going to violate their security policies, even when you’re upset about and disagree with their decisions. That’s why she was fired. Every other goddamn thing about this is an irritating optics war between the people that want to use this as ammunition in a crusade to prove that Google doesn’t care about ethics-and-also-minorities [which is manifestly not the case], versus the people that are legally obligated to never say anything publicly about internal matters.
It’s shitty and tiresome, but it has nothing to do with ethics.
I don't disagree with your frustrations. It is hard for anyone, including Google management, to be accused of being insensitive when you are acting out of good intentions yet getting lumped in with blatant racists. Perhaps there were security violations as well with these documents leaking.
But can't there be an ombudsman mechanism within Google to settle such acrimony before it spills over into the larger public domain? Perhaps there is such a mechanism, I don't know about it. An outright firing seems very high-handed, and this is what dismays people.
Yes, ethics is agnostic to one's personal convictions when it is your job.
Same as how defense lawyers leave their personal sense of justice out of their jobs.
Same as how sports referees don't let their childhood fandom of a particular team affect their calls.
Same as how actors will be physically intimate with people outside their own sexual preference.
Imagine if the person in charge of AI Ethics converted to a fundamentalist Christian sect. Their ethics changed and now they think LGBT people should be discriminated against explicitly. Nobody supporting Margaret Mitchell would be defending this new convert's application of the personal ethics in their job. That proves this is merely a convenient argument on their part, not a principled one.
Not sure if you understand this or not, but Google is a massive global company dealing with sensitive personal data, which makes its work inherently political. They are not manufacturing bolts...
Gebru was fired because she gave an ultimatum to her boss threatening to resign if they don't disclose names of all reviewers who criticized her work AND blasted group emails calling for sabotage https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQ...
The gulf between "I know what I am supposed to say" and "I truly believe in this" grows wider by the day.
Rarely do people resign out of principle...but when you're claiming that your employer actively does evil and harms the least among us, and then continue to work for them, it is obvious that you are either entirely without morals OR you are saying what is socially rewarded to say, without meaning it.
This is something I feel like few people learn in life in the US. You may be morally right, but nobody ever has your back. The few that do you can probably call friends, but the many that don't will never lift a finger to help you legitimately when it matters.
I don’t see why you say US, it’s the same in any country.
Nobody in their right mind is going to quit their job over this during a pandemic, when it’s already so hard to find a job. I imagine ethics jobs are not in high demand.
you been out on these streets lately? lemme tell ya, you falling on your sword at this time in history without a rock-solid plan in place to keep that bread coming is going to leave you with a lot of regrets, i'll wager. there ain't a lot of companies in the world looking for "quit bc i thought the bosses gave a coworker a bad shake" on a resume, and they sure as hell ain't paying what google is.
In the Motorcycle Safety Foundation course, and probably in driver's ed everywhere there is the phrase: you can be right, or dead right. And they will write on your grave that you had the right of way.
Basically, is it more important to be right, or to be alive (or be employed).
I accept I will be looked at negatively for this, but I simply don't believe Google has any obligation to keep paying giant salaries to people who are actively - by their own words - working to show how harmful Google and/or its underlying technologies are to the designated victims of the age.
She was moving files and "private data on other employees" (?) off the Google internal network into her personal possession, in order to contravene their policies and share their work against their wishes.
You may be able to claim this is hypocritical as they claim to support openness, but I really think the very clear and stated goal of undermining Google's alleged negative impact on non-whites negates any mutual goodwill that should be assumed.
TL;dr this is a completely legit firing, even if her intentions are good.
Tech companies will increasingly find themselves wrestling with a fifth column if this sort of thing persists. Just wait until the Woke stuff really gets tied up with unionism, and you'll see an explosion of allegations of "union-busting-to-protect-white-supremacy", some of which has already been alleged of Bezos and Musk.
Possibly she violated policy by sending internal emails to an ex-employee. That is fireable offense, yes. But this whole avalanche of incidents at google is still ugly and appears to be out of control. A bad look for google. Has anyone over there heard of damage control? Did it need to get this bad? At some level this situation must have been enabled by really clueless management. Or, it could just be the reality that google's use of user data is basically unethical and cannot coexist with a real AI Ethics team. It kinda looks like that.
As noted in her pinned tweet (https://twitter.com/mmitchell_ai/status/1357819673455202304), she was locked out of Google's system after sending an email supporting Timnit Gebru. (EDIT: and allegations of systems abuse; see child comment)
That's only if you take Google's PR at face value. Also, nowhere in the article nor in Google's statement do they claim that Meg Mitchell did anything "illegal". I haven't seen any details about the allegedly extracted files, so I think there's more to the story.
There have been countless Googlers who came out to support Gebru. Did Google fire all of them? Or maybe Margaret's support was particularly outspoken, but I doubt it.
It's Google-says vs. she-says - honestly I take neither at face value, but I'd expect repercussions if I started downloading/sharing internal files en masse.
The firing seems related to a conflict between CRT morals and normative Western moral systems. The ethics researcher in question seems to subscribe to the Critical Race Theory way of looking at the problem - "it's not a question of whether X is present, but how it manifests" - which is in direct opposition to normative Western morals.
It seems AI ethics is on the decline. But in 10-20 years, weapons of math destruction (which include AI) will be the subject of congressional hearings. I would go into public policy in government or a think tank, maybe with an academic gig as well. The ethics issues on the medical and criminal justice side seem more pressing than search itself.
If we are to believe the memo Google wrote about its investigation into her firing, then it seems pretty cut and dried. Either she should have gone through official channels first and been smarter in her own investigation, or Google is lying.
Honestly, who's the bonehead who set up this department in the first place? It's like letting the fox sleep in the henhouse. Either this department is one big charade, or it tries to do a good job and is in constant and acrimonious conflict with the rest of the org.
It's like a bank or hedge fund having a compliance department that is not only responsible for ensuring rules (which it doesn't entirely understand) are followed, but is also making up new rules at the same time. In effect, a dual legislative and judicial branch.
" tying executive pay companywide to diversity and inclusion efforts." - how does this work? I cannot imagine the CTO doing recruiting of minorities to meet diversity targets instead of doing the work one was hired for. I saw this happening in other companies and that had devastating effects on hiring quality, employee morale and productivity.
Working on products and features can potentially be unethical toward end-users: it may introduce unknown biases, create privacy issues and other gray areas, and raise security and similar concerns.
Speak out about such issues and you may only find an adverse reaction; timely delivery of the product is more important.
So they fired an ethics leader because she thought something unethical might be going on and was taking steps to investigate it by automating a review of her email, or something like that.
It seems she may have been passing on some of that information to persons outside of Google, which isn't great, but I also don't see much in the way of details on what that actually entailed, so it's hard to judge how inappropriate those actions might have been. Either way, still a bad look for Google.
Remembering the mid 00's, it's always jarring to juxtapose "Don't be Evil" with how they currently operate.
I don't know. I agree it's probably a job-ending move in most cases, but until the details are available I'll withhold full judgement. Also has she confirmed this, or is it just what Google is claiming? If they simply want her gone, they're probably not above making a mountain out of a molehill.
another way to frame it is she got fired for transferring a bulk dump of her corporate email to someone outside the company. I'm extremely sympathetic to the cause but I don't think it does any good to have such a blinkered approach to what happened
She gave a talk at my school once; she was invited by my then-advisor. This was in 2017, right after she left MSR. We had a brief exchange, and even back then you could tell she's not into kowtowing to the patriarchy. It's quite hard for ambitious women in compsci; they have to walk a fine line between "firm" and "motherly". Men have no such constraints.
——————————————
Clarification: I’m speaking about women researchers in an academic setting in general. And hoping to lend a bit of detail on this researcher as a person.
I've worked with many women at FANG companies and I wouldn't describe any of them as even somewhat motherly (though I have noticed "firm"). I think Dr. Gebru was fired because she cast Google in a bad light with some rather dubious arguments in a paper she wanted to publish, and when she was asked not to publish her research she sent an unprofessional email to a wide audience.
I don't know why this person was fired, but looking over the email/document she posted on her twitter it looks like she is asserting that her employers are racists and sexist on little evidence. Why would you, or she, expect to be permitted to slander the company this way and still keep her job? Although I don't know if she published this document before or after her termination, so it may, or may not have been a factor.
Gebru wasn't fired because she "cast Google in a bad light", she gave an ultimatum to her boss threatening to resign if they don't disclose names of all reviewers who criticized her work AND blasted group emails calling for sabotage
Her research does cast Google in a bad light. She writes about how language models (like Google's BERT which influences somewhere between 10% and 100% of searches) are environmentally unfriendly and encode hegemonic language making them subtly racist and sexist. Her solutions are to use smaller language models and more curated data sets, which are probably not palatable for Google.
Dr. Gebru was told the paper shouldn't be published with Google's name and some of the feedback on it that she received had to do with the paper not mentioning how Google was combating or leading on the problems with language models that she called out. In other words, if her paper had made Google look good, they probably would've been okay with her publishing it. Given that her paper made Google look bad, they wanted her to change it or not publish it. When she got that feedback she sent her intemperate email (in which she included her ultimatum to resign) and was terminated shortly after.
I think it's debatable, given that she did offer an ultimatum, whether she was fired or resigned - but I know she prefers the former and I think that's at least a coherent way to describe what happened (Google accepted one side of her ultimatum by firing her).
>In other words, if her paper had made Google look good
A more charitable assumption might be that they wanted her paper to address feedback from the reviewers in lieu of being published as is. The feedback that she was presenting a biased interpretation by leaving unmentioned the existing work being done to combat bias in the models.
I have heard that academic departments literally bend over backwards to cultivate women as professors and extend every possible advantage to them in order to promote them. This is from a classmate who did her PhD and is now a tenure-track professor at a prominent school. I know for a fact that she was also the recipient of various scholarships, grants, etc. while she was in school, many of which she attributed to being a woman in the field.
Your statement that "it's hard for women in CompSci" rings false to me. I think women do face higher barriers in many corporate settings that have been mostly male-dominated, but my personal observations about women in CompSci academia lead me to believe that women are treated better vis-a-vis their counterparts in other university departments, and also in other male-dominated industries.
Yeah, I've noticed structural programs that help women, especially at the undergrad level. At the grad level (where the people mentioned operated) it's highly advisor/manager dependent. In academia, each PI runs their own independent fiefdom, so some can be very abusive across the board. This is where being a minority makes it worse, because it's more difficult to form camaraderie with those around you. These are all first-hand accounts and observations I'm reporting. So both can be true: favored at the institutional level but still lagging at the personal level.
Having been through grad school I can assure you it’s tough no matter your gender. In fact, women stand a higher chance of getting more financial assistance for their studies because of the structural support for women that continues at the graduate level. Also, in my experience, men tended to have much more conflicts with their advisors and left the graduate program in larger numbers than women (perhaps even proportionally, men walked out more). Ironically, my women colleagues in grad school complained much more about interpersonal conflict with their female professors and research group members than the men. On the personal front, it seems to me that it’s neither here nor there: it’s so subjective that it’s practically impossible to see any pattern whatsoever.
+1 to this. On the very first day of my very first CS class in college, over 10 years ago, a guest speaker came in to urge the girls to attend an all-expenses-paid conference about women in CS. I was never afforded a similar opportunity.
Reading their comment I don’t see that specific claim being made. The comment discussed different treatment with regards to personality, but doesn’t relate that to this firing directly.
I hate Google, but I have to say this article is garbage. It sounds like she was fired for perfectly legitimate reasons. Trying to make it about racism or sexism is a disservice to those who actually care about those issues.
> Mitchell has been locked out of the corporate email since last month after what a source says was her effort to search corporate correspondence for evidence to back up Gebru's claim of discrimination and harassment
It sounds like she dredged through her email, and sent emails she found to an ex-employee. That's probably going to get you fired.
It seems to me like there is no possible situation where searching ("dredged" is a weirdly loaded word) through one's own email is a fireable offense. In fact, the idea that one can be fired for reading one's own work email strikes me as kind of absurd and unworkable.
Perhaps there are things one can do with the emails you found that are fireable, that’s beyond my (lack of) legal expertise, but searching? That doesn’t seem right.
It seems like she was passing stuff on outside of Google, but then we don't really know the details of those activities to know if it was an inappropriate action.
I assume that there are exceptions. Off the top of my head I believe whistleblowing would be a legally protected use of corporate emails. Maybe there are others, I’m not a lawyer. But I definitely accept that in general sharing work emails without authorization is a fireable offense.
Honestly this whole thing strikes me as something more legally complex than engineers can analyze. I for one want to hear an actual expert weigh in on this, once enough facts are known. Anything else has too high a risk of foolish things being said.
Well I guess it is easier to sweep hard problems under the rug than have a dialog with unsatisfied employees.
This is one of those times where people are going to intentionally conflate Google's legit right to terminate employment with what is best for the long term culture of the company.
I think if you have more than 3 employees you can't have a dialogue on every issue with all of them. You have to just decide and accept that X% of people won't like the decision and will leave. So you minimise X while maximising profit.
"Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered. In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today. We are actively investigating this matter as part of standard procedures to gather additional details.”
That said, anything that makes Google a less effective organization I support.
Funny how they co-announced monetary incentives around "diversity and inclusion", I expect nothing less from a giant evil ad company. A bunch of handwavy idpol stuff with the giant elephants in the room seems to be the status quo.
A company with 100,000 people will have to fire quite a number of staff, literally daily, for all sorts of things; it doesn't remotely imply anything about the overall relationship between staff and management.
Even when CNN, MSNBC, Business Insider 'team up' to overrepresent an issue because 'some number of Google employees walked out for some reason' - it really doesn't help tell the story of 'relations' because it's hard to fathom what the rest of the 99% of Google staff think.
how would this system detect sharing with external accounts? sounds like they put some words together to make it sound like it’s all standardized and automated and not result of human actions
All outgoing email is being scanned for sensitive information in plain text or attachments. This is standard practice in every corporation and something you are being warned about when being on-boarded as a new-joiner.
why would she email it from her corp account to various accounts instead of emailing it to her personal account or just uploading it directly to some personal drive account
> an account had exfiltrated thousands of files and shared them with multiple external accounts
I'm just questioning why she would email to multiple external accounts directly from her corp email instead of saving to 1 personal account and then sharing from there, undetectable, many times. And yes, they would still have detected the 1 email but I'm questioning the accuracy of this explanation; sounds partly made up to make it sound more serious.
Multiple external accounts could mean her lawyer's account and her own. Or 2 lawyers' accounts. The latter would help show she didn't distribute the files to anyone else.
> sounds like they put some words together to make it sound like it’s all standardized and automated and not result of human actions
Part of my job is administering a system for detecting unauthorized sharing, exporting, or printing. It has thresholds and a rules-based engine for specifying what behavior should trigger alerts. Google's system is probably custom but there are definitely products on the market for this sort of thing.
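For the curious, here's a minimal sketch of the kind of rules-based check such a system might run. It's purely illustrative: every name and threshold is hypothetical, and it is not a description of Google's actual tooling or of any specific commercial product.

    # Hypothetical sketch of a rules-based data-loss-prevention (DLP) check on
    # outbound mail. Names and thresholds are invented for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class OutboundEvent:
        sender: str
        recipients: list[str]                            # recipient email addresses
        attachment_count: int
        labels: set[str] = field(default_factory=set)    # e.g. {"confidential"}

    CORPORATE_DOMAIN = "example.com"       # hypothetical corporate domain
    MAX_EXTERNAL_ATTACHMENTS = 10          # hypothetical bulk-export threshold
    SENSITIVE_LABELS = {"confidential", "restricted"}

    def external_recipients(event: OutboundEvent) -> list[str]:
        """Recipients outside the corporate domain."""
        return [r for r in event.recipients if not r.endswith("@" + CORPORATE_DOMAIN)]

    def should_alert(event: OutboundEvent) -> bool:
        """Alert when labeled-sensitive or bulk content is sent outside the domain."""
        if not external_recipients(event):
            return False
        if event.labels & SENSITIVE_LABELS:
            return True
        return event.attachment_count > MAX_EXTERNAL_ATTACHMENTS

    # Example: a bulk export to two external accounts trips the threshold rule.
    event = OutboundEvent(
        sender="employee@example.com",
        recipients=["me@personal-mail.test", "lawyer@lawfirm.test"],
        attachment_count=500,
    )
    print(should_alert(event))   # True

Real deployments layer many more signals (content scanning of attachments, anomaly detection on volume, device posture), but the threshold-plus-rules shape is the same idea described above.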
Releasing even a single internal document without permission is cause for firing, and possible legal action. I can't even fathom this being up for debate.
Sarbanes Oxley provides protections for reporting accounting fraud against shareholders to the US government. Not for leaking docs to private citizens.
> Releasing even a single internal document without permission is cause for firing, and possible legal action. I can't even fathom this being up for debate.
I was referring to "releasing", in general, as it was written in the comment. Obviously the protection is very conditional on how exactly it is done.
Bret Weinstein was fired from Evergreen for not participating in a no-white-people-on-campus day (and he was also subject to credible death threats, requiring police escorts when present on campus)
And then we have the epidemic of academic mobbing, where colleagues engage in herding behavior that focuses their rage on one of their colleagues, in a modern equivalent of "The Monsters Are Due on Maple Street". It is estimated that 12 percent of those mobbed end up committing suicide. https://www.universityaffairs.ca/opinion/in-my-opinion/acade...
There are many, many things that will get you fired or subject to attack in the modern university, and I don't see any reason to view academia as a place that provides more intellectual freedom than private industry. Indeed, it is often the epicenter of cancel culture and petty utopian dictators.
> A settlement was reached in September 2017 in which Weinstein and Heying resigned and received $250,000 each, after having sought $3.8 million in damages.
He resigned as part of a settlement when he sued them. The university paid to get rid of him.
The videos show him getting mobbed by students, the University offered no protection for him. When told that students were going "car to car" looking for them, the University told police not to get involved.
considering that there are no market forces at play in the universities, it is almost guaranteed that intellectual freedom is much lesser due to the very polarizing, take no prisoners approach currently in place.
Dissent is not being tolerated, when in former times, it even used to be welcome.
The Evergreen example is incorrect, Bret Weinstein was not fired from Evergreen. He sued the college and resigned as part of a settlement in which he and his wife received a combined $500,000 payout.
This is one of the things that really annoys me about the anti cancel-culture warriors.
I get it, there has been some excess, some poor decisions made, but the hyperbolic, bad faith, and often dishonest framing they often use makes me really suspect of basically anything they say.
In MOST states in the US your boss can fire you for NO REASON at all. Or ANY reason save a few federally protected reasons like race/gender etc. That's "at-will" employment.
Anyone who tells you this is the same as the protections tenured profs have is a liar or complete moron.
I realise that the corporatisation of universities and increasing monoculture/groupthink is eroding many of the protections offered to even tenured professors, but that's a different discussion. Do you have anything specific about the topic to add?
Notwithstanding what you say, how do you square this with the related responses to your parent comment, which note that corporate research is legally bound and that ownership vests strongly in the employer under IP law, not in your research or your brain?
My counter is what foolinaround also said in reply to my post. Intellectual freedom is a question of culture, not of any legal or financial apparatus. There was enormous intellectual freedom at the for-profit Bell Labs. There was enormous intellectual freedom in the publicly owned Berkeley in the 70s-80s.
The culture present in the organization will determine whether you need to always be on the guard from saying the wrong thing that might destroy your career or whether you are free to challenge incumbent ideas/viewpoints/theories.
Personnel is policy. The legal charter of an organization is far down the list of factors that determine how much freedom you have -- the mindset of your peers and of the leadership in the organization matters much, much more.
The evidence to me, late in my career (I was in government and uni research, then in systems and networks in the uni system, and latterly outside in the NFP internet public governance space), is that we're off the rails both inside academia and in private industry research. Neither feels like a place of fully open minds.
Your description of Bret Weinstein and Evergreen is substantially inaccurate, perhaps because you read Bret Weinstein's dishonest description of it (linked below).
There was no "no-white-people-on-campus" day. There was an optional off-campus field trip for a seminar aimed at people living with a white perspective.
Bret Weinstein was not fired. He resigned with $500K severance as part of a settlement agreement arising from his lawsuit against the school for failing to protect him from students who verbally harassed him.
Sure, the student organized event asked that white participants who supported this misguided movement stay off campus. Sure, that's a pretty dumb thing to ask of anyone and Bret Weinstein has no obligation to adhere to it, and furthermore adhering to it is pretty dumb...
But that's not the college's doing nor is it the college's responsibility.
> Bret Weinstein was fired from Evergreen for not participating in a no-white-people-on-campus day
Interpretation 1:
There was a "no-white-people-on-campus day" at a university, a professor named Bret Weinstein-- a white person-- was on the campus that day (either in defiance or without foreknowledge of the event), and was consequently fired for being white while on campus.
Interpretation 2:
There was a "no-white-people-on-campus day" at a university where white people were singled out to attend some off-campus event (whether mandatory or optional), Bret Weinstein-- a white professor-- did not attend this off-campus event (whether in defiance or accidentally, whether going on campus, or not), and consequently the university fired Bret Weinstein for his action of not attending an event targeted exclusively at white people.
Interpretation 3:
You are leaving some crucial detail that explains what actually happened.
If #3 isn't the case I'll eat my hat. If it is, you should consider the possibility that some kind of anti-academic axe-grinding is keeping you from writing as clearly as you could about historical events.
Interpretation 2 is closest to correct. There was a "day of absence", and in 2017, whites were "invited" to leave the campus for the day's activities. Weinstein protested this in a letter, which led to protests of a violent nature to the point that he had to hold classes off-campus. Weinstein sued over the fact that he had to hold the event off-campus rather than the University maintaining an academic setting in which he could teach. He also needed a police escort on campus. He contended that this represented a hostile work environment.
How does one get to lead such a high caliber team at Google AI with background in Linguistics of all things?
What are some of accomplishments this person has achieved, beyond being from one or many oppressed victim group(s) that Google needed to hire to fend off lack of diversity claims?
> How does one get to lead such a high caliber team at Google AI with background in Linguistics of all things?
Wow. I believe you should begin by assuming your own ignorance, and not assume the opposite. This [1] will get you started. Cognitive science, linguistics, and computer science, independently, are all valid, highly valued, and highly sought-after backgrounds for a career in AI. Have all of those in a multidisciplinary background, and it is likely your career will be in AI.
Or they'll just conclude the whole thing was a waste and only caused negative publicity and hard feelings and have legal and PR and relevant stakeholders from any given project manage ethics going forward like normal BigCos do.
No, they just hired the wrong people. They employed political activists instead of real professionals which was initially great for PR. Not the best long term decision they've made.
I don't think it's just for PR. I think any organisation that invests heavily in the development and use of ML/AI needs to keep a close eye on whether the outcomes are in line with positive humanistic values.
I don’t know how these ethics professionals added value to Google. I _do_ know that repeatedly belittling your CEO and employer on a public forum might win you lots of virtue signalling points but not be constructive for the company paying you a salary.
They rejected her proposal to discuss an end date and imposed immediate termination. You can argue it was justified. You can argue California law should be different. But the title states a legal fact.
You linked to a very long legalistic webpage which I don't have time to read; please link to a concise neutral reference that specifically points out she was fired because I've googled and can't find anything other than biased opinions either way.
The fact remains she threatened them with resignation; using words like "proposing for discussion" runs the risk of portraying her as an innocent victim in this for anybody who hasn't read about the original case.
For those people also, the title remains opinionated without a clear reference to a formal description of the exit that is generally accepted.
Legal guidelines are legalistic. I provided a concise neutral summary. And you have time to read if you have time to argue.
"Proposed" and "discuss" are neutral. "Threatened" and "bluff" are loaded. And anyone this far into the comments should have read about the original case.
I'm not a lawyer and have no intention of deciding the legal definitions of this case myself, I look to those who actually know what they are talking about to do so. I note you still haven't provided a link to anybody clarifying it in a reasonably neutral way; I suspect there isn't one without bias (either way).
Nobody "proposes" resignation in such circumstances to an employer unless they are fully in battle. And according to this article she acknowledged that herself:
I actually know what I'm talking about. Some other people here actually know what they're talking about. I'm not going to waste my time finding more links when you can just call them biased or too legalistic.
You quoted a reporter paraphrasing Gebru. And "proposed" doesn't mean everything was peaceful.
> I actually know what I'm talking about. Some other people here actually know what they're talking about.
Considering there is a variety of opinions here on the subject (including many who would reject my impartial take and fervently take the opposing position to you and claim it was entirely her fault), I'm sure you didn't mean to imply that only those who agree with you know what they are talking about.
Anyway, in the absence of clarity on the event itself, I'm happy to agree to disagree with you on what has been established - which rather emphasises my original point about the title...
Wut!? Please tell me you followed the whole story and know all the facts. Jeff manages a whole department, whereas this person was working in one team, "Ethical AI". There are literally more than 20 teams that answer to Jeff, each with anywhere from 4 to 50 people.
Google only built the Ethical AI team to placate internal criticism of its attempts to sell AI tech to the Department of Defense and building a search engine for China with censorship built-in. They were supposed to be window-dressing, not actually rock the boat, so when they did they were culled.
Since they are not contributing to the bottom line, directly or indirectly, the disaffection in the team couldn't matter less to management. They cleverly appointed a black woman VP to neuter the group without the bad optics of the initial firing of Ms. Gebru, and I expect the remainder of the team will in the not-so-distant future be reassigned or let go, and replaced by a new cast of ethnically diverse yes-people who will produce lots of impressive-sounding papers but not any criticism of substance.
“We all live in the digital poorhouse. We have always lived in the world we built for the poor. We create a society that has no use for the disabled or the elderly, and then are cast aside when we are hurt or grow old. We measure human worth based only on the ability to earn a wage, and suffer in a world that undervalues care and community. We base our economy on exploiting the labor of racial and ethnic minorities, and watch lasting inequities snuff out human potential. We see the world as inevitably riven by bloody competition and are left unable to recognize the many ways we cooperate and lift each other up.
But only the poor lived in the common dorms of the county poorhouse. Only the poor were put under the diagnostic microscope of scientific charity. Today, we all live among the digital traps we have laid for the destitute.”
- Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor
Dr. Gebru was fired for having consulted a lawyer. (Jeff Dean and others were aware that Dr. Gebru was consulting a lawyer long before her termination.) Employers can do stupid things, but not illegal things, and it's beyond stupid to fire somebody for talking to an attorney. Google felt threatened and so they acted, plain and simple. While it's OK to feel threatened, it's not OK to retaliate, which is what Google, legally, did.
Dr. Mitchell was likely talking with the same legal team-- it sounds like Google cited cause for Dr. Mitchell sending documents (non-IP) outside of Google's network. So Dr. Mitchell is almost certainly also a victim of retaliation.
The bottom line: Sundar Pichai has again and again admitted to Google's struggles with retaining or regaining trust, and here he has again failed to act effectively. Google's behavior here is, legally, maximally adversarial and strictly for Google's benefit. Sundar Pichai has failed yet again to show Google can be trustworthy.
Gebru was fired for unprofessional conduct and disturbing the workplace. She tried to sneak a Google-authored paper through internal peer review by submitting it on the day of the deadline and having the paper approved without incorporating any feedback from the internal review. Then she wrote an angry, unhinged email to her colleagues and subordinates, in which she said that Google was marginalizing Black women for speaking out about diversity and that KPI diversity metrics do not matter and can be ignored. Then she threatened resignation if Jeff Dean would not de-anonymize all the internal reviewers. Since that is a demand that no scientific chair can possibly satisfy in good conscience, her bluff was called and she was "resignated". Then, of course, she asked on Twitter for a good employee lawyer, said that Jeff Dean was white privilege, and claimed that Google fired her for advocating for diversity, essentially dissing all her Black colleagues at Google who advocate for diversity without any problems relating to their skin color.
Gebru is an absolutely toxic person who can't keep her politics and racial-justice activism out of her profession or academic work (her work is cited so much because she actively emails other scientists when they don't cite it, threatening to call them out for marginalizing and ignoring Black women in science -- pity cites).
She is bubbled in a network which supports her with hashtags such as "BelieveBlackWomen" when she publicly smears Google as racist. No rationality, just stirred-up angry feelings. Trust should not be assigned based on skin color or gender. Stop that racism and sexism; it has no place in business or academia.
Offtopic. Oracle does something similar, but on a bigger scale. They approach potential clients with two stacks of papers: one is a contract and the other is a lawsuit. A discreet way to nudge clients to do business with you.
She didn’t sneak the paper through, and the committee was trying to block it for BS reasons, like a committee member wanting his own work cited.
Jeff Dean is a billion-dollar employee who evidently has extremely weak people skills. He was in way over his head on this one, and he made the exceptionally bad choice of continuing the argument. Moreover, he knew Gebru was consulting a lawyer, so his actions are de facto retaliation for that.
If you can’t muster skepticism for Google’s astroturf here, you’re falling straight into the issues of bias and prejudice that we so dearly need to expunge from the workplace.
She tried to bypass internal review. She knew that internal review was part of the process. She authored a paper under the Google banner without making note of her colleagues' work that combats the bias she was decrying. Then she submitted it for internal review on the day of the submission deadline. By the time internal review was done and suggestions to improve were made (valuable time from your peers!), the paper had already been accepted by external reviewers. She was then miffed at having to update or retract the paper.
Jeff Dean is an absolute saint and an ally for the cause. He certainly has better people skills than Gebru. Jeff was in over his head with the Twitter mob, which would mess with anyone, no matter how famous.
Google fired Gebru not for retaliation, or for critical science, or for being Black, or for advocating diversity, or for Dean having bad people skills. She was fired for being insufferable.
The only astroturfing I've seen is from activists claiming racism, or attributing Jeff Dean's behavior to autism / white privilege.