AI at Google: our principles (blog.google)
644 points by dannyrosen on June 7, 2018 | 405 comments



So, I'm all for giving someone the benefit of the doubt if they have a change of heart upon reconsidering an issue, but this coming after the fact rings a bit hollow to me. I think the only principle at play here is that it became a PR issue. That's fine, but let's be honest about it.

Early emails between Google execs framed this project only in terms of revenue and potential PR backlash. As far as we're aware, there was no discussion about the morality of the matter (I'm not taking any moral stance here just to be clear.) Once this became an internal and external PR issue, Google held a series of all hands meetings and claimed that this was a "small project" and that the AI would not be used to kill people. While technically true, those same internal emails show that Google expected this to become a much larger project over time, eventually bringing in about $250M / year[1]. So even then they were being a bit disingenuous by focusing only on the current scope of the deal.

And here we are now with a release from the CEO talking about morality and "principles" well after the fact. I doubt many people buy it anyway, but I'm not buying the "these are our morals" bit.

https://www.bizjournals.com/sanjose/news/2018/06/01/report-g...


I doubt that Google spelling out their moral stance is intended to convince you right away that they're all good now. It's a public standard that they're setting for themselves. If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with. It's a benchmark to which employees and the public can hold them accountable.


> It's a public standard that they're setting for themselves.

I'd like to really draw attention to the "for themselves" part here. Yes, this is a public document, and of course it serves a PR purpose, but the function of setting the terms for internal discussion at Google is at least as important.

I think that since most people aren't Google employees, they tend to focus on the PR angle of this, but I don't even think that's the primary motivation.


Small addendum: Big companies are big.

I didn't see the actual email chain (raw wasn't published?), but at Google-size it's conceivable there wasn't company-wide exec awareness of the details.

That's how big organizations operate.


Given that a lot of people don't hold Microsoft accountable for past misdeeds (the last four posts about the GitHub acquisition are endless arguments about it), there are few reasons to believe it's going to be different with Google.

For them, it's always better to benefit from screwing up. If you don't get caught, yay! If you do, apologize, wait a bit, and send your PR team to work their magic. Boom, you are green again.

Why would they do otherwise if they can keep the loot and face little consequences?


1. Does Microsoft have written promises that they broke about their past acquisitions? In the case of Skype it's going quite poorly, but as far as I know LinkedIn is running quite independently and is doing well. Nokia again is doing pretty poorly, but Mojang also seems to be doing fine. It's pretty hit and miss, but to be fair, smartphones and communications are pretty hard industries to succeed in.


All the arguments have already been made in the past threads. No use repeating them here.


As a neutral observer, I've not been on past threads. Most people who don't have a particular interest in this haven't. It would be nice to hear both sides of the argument.


Go back to the threads on github acquisition. There are at least 4 of them during the past week. They are very long, very rich and very divided, so making a tl;dr would be too much work.


If people are complaining about Microsoft acquiring GitHub then is that not people trying to hold Microsoft accountable?

If Microsoft's sins were truly forgiven or forgotten, people wouldn't be complaining about the acquisition.


You missed the numerous HN comments defending Microsoft.

You missed the people on Reddit or Imgur singing the glory of Microsoft.

They now have a fan base.

A fan base.

That's not something I would have ever imagined in the '90s.


Yes they are a big company with many facets. You can like some parts and dislike others.

They have always had a fan base, even during those dark times (though not as big a one). But it seems like they worked on engaging others and now have a bigger fan base.


Perhaps another good example, closer to what Google is doing, is Cisco providing China the means to build its Great Firewall. They got some criticism for it for a while, but China's censorship regime has since become the "new normal" and has clawed its way into Western media via heavy investment in Hollywood studios by the country.


Historically has anyone succeeded in holding such giant firms accountable to their own stated principles? At the moment, I like those principles more than I like Google.


I'm not sure externally being held accountable is as important as it would seem.

Publicly stated principles such as these give a clear framework for employees to raise ethical concerns in a way that management is likely to listen to.

For example, one of my previous employers had ten "tenets of operation" that began with "Always". While starting each one with "never" would have been more accurate in practice, they were still useful. If you wanted to get management to listen to you about a potential safety or operational issue, framing the conversation in terms of "This violates tenet #X" was _extremely_ effective. It gave them a common language to use with their management about why an issue was important. Otherwise, potentially lethal safety hazards were continually blown off and the employees who brought them up were usually reprimanded.

Putting some airy-sounding principles in place and making them very public is effective because they're an excellent internal communication tool, not because of external accountability.


Look at it from the other side: with those principles written down, executives will at least have the option to adhere to them, something to point at when they do. Without, shareholders might give them a very hard time for every not strictly illegal profit opportunity they preferred to skip.

Google might be in a position to not get bullied around much by investors though, so that line of thought might be slightly off topic here.


One example I can think of is private colleges. Many in the US have made public statements dedicating themselves to upholding principles like freedom of speech. Organizations like FIRE do a pretty good job of holding them accountable to those principles, and there are many instances in which they have documented policy or enforcement changes made due to their activism.


Arguably, the Googlers who stopped Maven just did. Labor organization is one of the few checks on this level of corporate power.


The funny thing about "holding people accountable" is that people rarely explain what it means, and I'm not even sure they know what it means? It's a stock phrase in politics that needs to be made more concrete to have any meaning.


As best as I can tell, it means something like "using the generally available levers of social shame and guilt to dissuade someone from doing something, or if they have already done the bad thing, then requiring them to explain their behavior in a satisfactory way and make a public commitment to avoid doing it again."


And it requires that you be in a position of power - otherwise it's just heckling, which isn't likely to have any real impact. In this case it'd be having the ability to impose fines, or discipline corporate officers, etc.


I wouldn't think of bad press as "just heckling." A company's reputation can be worth billions in sales.

It's true that many boycotts fizzle out, though.


> It's a public standard that they're setting for themselves.

They already had a public standard that people actually believed in for a good many years: "Don't be evil."

They've been palpably moving away from that each year, and it's been obvious in their statements, documents, as well as actions.


"Don't be evil" is incredibly vague and practically meaningless. What the hell is evil, and since when did everyone agree on what evil means? It's obvious to you that they're getting "evil", it certainly isn't obvious to me.


Is explicitly circumventing a browser's privacy setting evil? [1]

How about shaking down a competitor? [2]

[1] http://fortune.com/2016/08/30/google-safari-class-action/

[2] https://www.bostonglobe.com/business/2015/05/19/skyhook-got-...


Collusion to keep salaries down may not be evil in the supervillain sense, but it's hard to see it as ethical.

Not being evil has always been a sideshow to the main event: the enormous wealth generation that paid for all the good stuff. It's still the wealth generation in the driver's seat.


Even disregarding the issue of how "evil" is defined, there is another level of vagueness: when does one become evil, as opposed to only doing some evil? Arguably, one could do some amount of evil deeds without actually being evil.

The above is sometimes mentioned in discussions, where people point out that the motto is "don't be evil" and not "don't do evil".


>If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with.

What I think is that they will go forward with any project that has potential for good return if they don't think it will blow up in their faces, and that opinion is based on their past behavior.


I didn't realize they already have a history of violating their own stated AI principles within a day of publishing those principles. /s

Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.


>Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.

I said I'm all for giving the benefit of the doubt _but_... That _but_ is important as it explains why I don't really buy it this time around, and that's based on how they handled this situation.

And c'mon, really; should judging their behavior be based solely on ML (it's not AI, let's avoid marketing terms) code? Why does the application matter? They've violated their own "don't be evil" tenet (in spirit; not saying they are literally "evil") before.


> Why does the application matter?

Possibly because it's literally the subject of this thread, blog post, and the change of heart we're discussing.

> but this coming after the fact rings a bit hollow to me

^ from your original comment. So you don't buy the change of heart because...they had a change of heart after an event that told them they need a change of heart?

Did you expect them to have a change of heart before they realized they need to have a change of heart? Did you expect them to already know the correct ethics before anything happened and therefore not need the change of heart that you'd totally be willing to give them the benefit of the doubt on?

> They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.

Right, in the same way that I can just say they are good and didn't violate that tenet based on my own arbitrary set of values that Google never specified (in spirit, of course, not saying they are literally "good", otherwise I'd be saying something meaningful).

It still doesn't look like you were ever willing to give them the benefit of the doubt on a change of heart like the one expressed in this blog post. Which is fine, if you're honest about it. Companies don't inherently deserve trust. But don't pretend to be a forgiving fellow who has the graciousness to give them a chance.


Even if they abide by this, who's to say that once a party has some Google developed military AI, they won't misuse it? I fail to see how Google can effectively prevent this.


If they develop an AI that administers medicine to veterans, and the army takes it and changes it so it will administer torture substances to prisoners of war, is it Google's fault or the army's fault?

Google makes tools with a purpose in mind, but like many other technologies in history, they can always be twisted into something harmful, just like Einstein's theory of relativity was used as a basis for the first nuclear weapons.


> It's a public standard that they're setting for themselves. If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with.

Absolutely. Because "Don't be evil." was so vague and hard to apply to projects with ambiguous and subtle moral connotations like automating warfare and supporting the military-industrial complex' quest to avoid peace in our time ;)


Yes, like “do no evil”.


• At first there was no explicit policy.

• A decision was taken (contract entered into).

• As part of the backlash to that decision, one of the demands was that an explicit policy be created.

• That demand was accepted, and what you see here is the outcome.

(Sorry for the passive voice….)

That's all there is to it; I don't see a claim anywhere that the article reflects anyone's principles or morals in the past, only that this is a (only now thought-out) policy on how to decide about similar things in the future (“So today, we’re announcing seven principles to guide our work going forward”).


The emails you're aware of framed it in those terms. Do you think a leaker might selectively leak with the intent to paint a particular picture?

As someone who has helped write policy, it is literally impossible to write up all policy in advance. You always end up weathering some angry insights in the comment section (we used to call it the peanut gallery). If you can write out all policy a priori, every CEO, economist, scientist, and psychologist can just go home now.


Why would your default stance be to believe a company over a leaker?

The company has millions to gain from the contract and hasn’t shown morals on this issue.

The leaker has so much to lose by releasing the documents, everything from their career to a significant portion of their life. You could call that an incentive to deceive, but I call it an incentive to be honest about their leak.

Especially when it’d be so easy for the company to leak counter examples showing moral consideration if they did...


> Why would your default stance be to believe a company over a leaker?

Because Sundar Pichai has a strong public track record. Every CEO ends up with warts, but I have some sense of what he's about. The leaker, I have zero information on. Given known vs unknown, I put more faith in the known. Whether I by default believe or disbelieve depends on who's saying what.


Many of the Googlers involved in the campaign against Project Maven are engineers of high caliber and those I know of are fairly activist in general. While I haven't always agreed with those I've interacted with, they're high quality people with high quality records. The sort of Googlers protesting Maven are the sort of Googlers who made Google the sort of company people loved. And they've put their careers on the line to make a statement about what is and isn't okay.

Sundar Pichai's claim to fame was getting the Google Toolbar installer (and later the Chrome one) injected into the Adobe Reader installer. [0]

[0] https://www.quora.com/What-did-Sundar-Pichai-do-that-his-pee...


I don't know how accurate it is to say that these engineers have put their careers on the line. It could also be that they wouldn't be able to make these statements were they not secure enough in their jobs to feel their careers wouldn't be on the line.


While they definitely have above average incomes and probably some good financial security, there's a Damore factor risk: Public attention could render them unhireable if they come off as troublesome or likely to cause issues with future employers.


> Microsoft didn't even ask their customers for permission. They just automatically switched anyone who installed IE7 to Bing as the default.

Don’t worry everyone it’s different now!


I don't think they're believing anyone over anyone, but rather entering the discussion with a fair amount of skepticism.

The point is that in any discussion, both sides have biases, and you need to take both sides into consideration to get a fuller picture.


If they really believed this stuff, I don’t see why they would have had so many resignations over the issue. Had somebody brought up the ethical aspects when the project was being discussed, they wouldn’t be scrambling to limit the damage now.


A few people resigned and they're trying to limit the damage because people like being outraged at things that don't matter. It's the same thing that got James Damore fired.


AI-powered drone warfare doesn't matter?

What does matter in your world?


Fulfilling a military contract isn't the same as killing people with drones. Virtually every plane you've ever flown on is built by a company that also builds killing machines, but you don't see people throwing a hissy fit over it.


>Do you think a leaker might selectively leak with the intent to paint a particular picture?

Possibly. As I said, "...that we're aware of." Anything beyond what we know is speculation. This is the information I have available. Let me ask you this: if there was real debate and concern beforehand, why is it only now that Google has decided to back out?


Because one very good policy is to not make policy in the heat of the moment. Write things down. Discuss with confidantes, counsel, etc. Sleep on it. Change it. Sleep on it again. The bigger the issue, the more you think about it before going public.


While selectively leaking certain emails and withholding others might color an issue, it won't turn a negative into a positive. And if the leaked emails aren't genuine, I have not seen any claims to that effect. So either they are real and they paint a real, possibly distorted picture, or they're false; but as it sits right now, they are the evidence that people use to form their opinions. If there is an email trail that establishes a different narrative, Google is of course perfectly free to release that too, to offer some counterweight.


> we used to call it the peanut gallery

At some risk of proving your point:

At least you're honest about your contempt for the common man.


Organizations do not always communicate well, and sometimes things only reach the CEO via the press. Do not assume that the whole organization, or even just its executive, agrees with or knows about everything the organization does.


In addition to this, individual executives and teams of them often have to compartmentalize discussions of different aspects of a complex issue. This makes taking specific communications, chosen by a third party, as indicative of the whole conversation iffy.


There was a similar "coming to Jesus" moment with Google in China. They saw that they had to do the right thing after years of censoring and manipulating data for the Chinese government, but only after they got massively hacked, blocked, and essentially forced out of China...

However a good thing done for the wrong reasons is still a good thing.


> However a good thing done for the wrong reasons is still a good thing.

Agreed, and I try not to be too hard on them. I don't think it's a black-and-white issue personally; the only issue I have is how this implies Google always wants to do the right thing from the get-go, which very much seems not to be the case here.


You should read Daniel Kahneman's "Thinking, Fast and Slow". It's not possible to make all the policy decisions that look right in hindsight before a sentinel event occurs. Hindsight is always 20/20. Anyone making real decisions of consequence will eventually curse hindsight.


Well Google is now framing this as a moral issue, so did morality change significantly between when they accepted this project and today?


Do you think regret is morally valid?


Sure.

But Google has no intention of doing the right thing any more than Microsoft or Disney does. These are corporations and their executives HAVE to do what they think will be best for the corporation. Not what they think is best for mankind.

This is how for profit businesses currently work. And PR saying anything to the contrary is simply not true.


This is a gross generalization that people trot out as if it were unassailable but never back it up with any support.

Corporations are run by people with a complex set of motivations and constraints in which they make decisions. Some of them make decisions with intent to harm. Some make decisions with intent to help.

No one person is automatically turned into a ruthless, amoral person just by being employed at a corporation.


... and most make decisions in a space where (local) zero-sum games mean there is no option available that uniformly helps or harms.


It gets complicated, but it's more about the employees' responsibility to shareholders, not their personal morals.

https://www.reddit.com/r/law/comments/3pv8bh/is_it_really_tr...


And do you know what can happen when a person's own morals or ethics come into conflict with their responsibility to shareholders?

They can quit. They can speak out. They can organize. They can petition for change. They could join the more ethical competition (if one exists), or start their own.

This is especially easy to do for employees of a company like Google, with excellent job prospects and often enough "fuck you money" to do whatever they want without serious financial hardship.

They are not hopelessly chained to the corporate profit machine. They can revolt -- that is, if their morals are important enough to them. Otherwise they can stay on and try not to rock the boat, or pretend they don't know or are helpless to act.

A handful of Google employees chose to act and publicly express their objections. This action got results. More employees in companies which act unethically should follow their lead.


I used to work at Google about 5 years ago. While I was there, it was clear that Google employed some of the most morally conscientious people I've ever worked with. It's why I still trust them to this day with data that I would never trust anyone else with. As long as Google employees continue to have a voice in the company, I'll continue to trust them.


Google public shareholders do not have control of the company. Larry, Sergey, and Eric are the only shareholders who matter. So executives are responsible to them first and foremost.


Even if this is true, they can make the subjective decision that doing certain things will make the company look bad in the eyes of employees (which not only can cause employees to resign, but can disadvantage a company in negotiations to hire new employees) and users of the product, and can ultimately be worse for their bottom line than things that don't bring the same short-term financial benefits.

Ultimately, though, I agree with zaphar that you are overgeneralizing, since corporations are controlled by humans -- executives, other employees, and shareholders -- and human motivations can be complex.


This sort of thing gets said a lot. It's not a valid excuse and it's not true in the black and white sense that people constantly present it.


On the other hand, Google tries to claim much more moral high ground than it actually has. Insincerity does rub people the wrong way.


> However a good thing done for the wrong reasons is still a good thing.

I'd say it is definitely better than not doing a good thing. For me, the real question is this though: considering there is a pattern here (doing the right thing after doing the wrong thing), do you trust they will do the right thing in the first place next time?


> However a good thing done for the wrong reasons is still a good thing.

Yes, but we should absolutely remember what the original intention was.


Hmm, not really. Google is bad and they should feel bad; you're just handwaving away how bad they are because you like them.

Imagine I wanted to have somebody killed, so I hired a hitman to kill them, and when I went to pay the hitman I accidentally wired the money to the wrong place and inadvertently paid off the victim's mortgage instead of paying the hitman. Now the victim doesn't die and gets their mortgage paid off. I'm not a good guy and what I did is not a good thing; I just fucked up, that's all. Had everything gone to plan, the guy would be dead and I would be blameless.

Similarly, if everything had gone to plan, Google AI would now be powering various autonomous murder bots, except they realized that they didn't want to be associated with this, not because they have any morals, but because WE DO. They are still bad.


>Imagine I wanted to have somebody killed, so I hired a hitman to kill them, and when I went to pay the hitman I accidentally wired the money to the wrong place and inadvertently paid off the victim's mortgage instead of paying the hitman. Now the victim doesn't die and gets their mortgage paid off. I'm not a good guy and what I did is not a good thing; I just fucked up, that's all.

That's an odd analogy, considering the would-be conspirator didn't make a decision not to go through with it. Do you believe Google published this article by accident? And really, comparing Google's actions to murder... c'mon.


I'm not comparing Google's actions to murder specifically; that's simply you not being able to see the forest for the trees. The only reason they wrote the article is to make it seem like what they did was a proactive moral choice, when in reality it was a retroactive move to reframe their realization that, in supplying AI for the DoD's murder bots, they would be part of the evil empire. I mean, it's literally mustache-twirling tier lack of scruples.

They didn't fess up because they realized that the outcome of their actions would be bad; they fessed up because YOU realized that the outcome of their actions would be bad.


> comparing Google's actions to murder

You think people weren't/wouldn't be killed based on intel gathered from the project?


Believe me when I say that I do not like Google. But what is a person or a group of people supposed to do when they have done something wrong? All they can do is stop doing it and try to prevent doing it in the future.

You can speculate about their motives, and I personally believe it to be PR motivated as well, but what matters in the end is that it stopped happening.


That's all fine, but that person or group shouldn't expect everybody to love them afterwards. If anything, they should expect distrust and dislike.


Google’s disregard for the ethics of the arrangement is further supported by this interview with one of the Googlers who organized the resistance to this project:

https://jacobinmag.com/2018/06/google-project-maven-military...


Jeez, that's just brutal. I try to avoid content-free comments like this one but... damn. Just brutal.

If things actually went down like this person describes I would have been out the door too. (Due to the betrayal. I actually think Project JEDI is pretty cool.)


I noticed there was one particular phrase missing from Apple's WWDC this year: artificial intelligence.

It wasn't mentioned once: never in the presentations, and none of the talk titles included it either.

Machine Learning was mentioned and Apple also released many updates related to it, but never did they call it artificially intelligent, or use the phrase artificial intelligence. Not a single time. Go ahead and watch it if you haven't already. They'll never say those words.

Pretty remarkable considering it was a developer conference. I wonder why?


I credit their sense of discipline in using the more accurate term. They know they can only stand to lose down the road if they over-hype notions of "artificial intelligence" now, when it really is a misnomer. And they're probably betting that's what will happen to the competitors who more enthusiastically use the phrase for marketing.


Probably more that they aren't as heavily invested in selling the idea that we are close to smart general AI.

Compare it to Musk who constantly talks up the idea while the actual AI systems his company has deployed are killing people.


That's a perfect example of someone who might get bitten in the ass by the over-selling thing, too.


Musk's systems are more A than I.


Probably because "machine learning" is a more accurate name, while "artificial intelligence" is the buzzword for non-tech people. There's nothing "intelligent" in machine learning.


Impending AI Winter poised to make a lot of people look like fools?


The article you linked says the email was written by an exec, but the source article in The Intercept said it was written by "a member of the sales team". It also says Fei-Fei Li, arguably the only exec on the email chain, was actually against it:

“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she continued. “Google Cloud has been building our theme on Democratizing AI in 2017, and Diane and I have been talking about Humanistic AI for enterprise. I’d be super careful to protect these very positive images.”

If you take this literally you might say her concern is "just PR", but this is exactly the kind of argument I would use if I were against something on moral grounds and trying to convince someone who does not share the same values.


https://www.nytimes.com/2018/05/30/technology/google-project...

That article adds some pre-context to the quote above.

> "Avoid at ALL COSTS any mention or implication of AI,” she wrote [...] . “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”


I'd put this another way. Here is the CEO of Google laying down the law. And the first question to ask, before you bother reading or discussing any law, is what redress you have when someone breaks it.

If Google does start building weapons, you can always turn up with a sledgehammer. How do these principles reduce the need for you to do that?


If Google starts building weapons, wouldn't one want to turn up with something a bit more effective than a sledgehammer?


The context of those leaked emails pretty much ensures that it would be framed in terms of revenue and potential PR backlash. Even if those individuals have a moral problem with it, they're likely going to frame their argument in that way because that's probably the most effective argument to make in that context.


There's a lot to be said for this view, and I've reshared and highlighted your comments elsewhere.

At the same time, Google and Alphabet Inc., far-from-perfect entities which I have criticized and do criticize strongly and often, at least present a public face of learning from mistakes rather than doubling down or issuing repeated and increasingly trite and vacant apologies (say, Facebook).

This is a line Google have drawn for themselves in the sand, one against which our various future selves may judge them.


These principles seem to have already been wholesomely applied to Google's, and then Waymo's, approach to self-driving cars. This is in stark contrast to the approaches used by competitors such as Uber and Tesla, who appear to favor capturing market share first and foremost.

It seems a narrow view to assume Google's only AI project with mortal human consequences is Maven, and then to use that narrow view to confirm your own negative bias about profit, perception, and disingenuousness.


I agree. I'm encouraged that there are people who can see through the propaganda.


Is that all? 250M/year is still peanuts for a company like Google.


It’s 250M a year for now.

Considering the size of the global arms trade, it’s very unlikely to stay 250M a year for long.


You hit the nail on the head.


And now someone else will easily pick up the contract, and Google loses any ability at all to influence it. Perhaps they could have reduced civilian casualties more than whoever picks it up will.


My crack selling business is the safest, too!


That's the biggest integrity-lacking cop-out imaginable.

"If I don't do it somebody else will!"


Give them a break. Based on how much it cost to buy a house in the Bay Area, I don't blame them. /s


The best way to lead is by example. Thank you, Googlers.

The choice not to accept business is a hard one. I've recently turned away from precision-metrology work where I couldn't be certain of its intent; in every other way, it was precisely the sort of work I'd like to do, and the compensation was likely to be good.

These stated principles are very much in line with those that I've chosen; a technology's primary purpose and intent must be for non-offensive and non-surveillance purposes.

We should have a lot of respect for a company's clear declaration of work which it will not do.


That's a pity. As someone in stereo machine vision, metrology is something we really, really need for a huge range of applications. That said, if it's feeding an autonomous armed targeting system, I agree. We really need to decide as a species that we want to continue being a species and realize that solving problems with each other by force is coming to an end. I'm not speaking from a pacifist viewpoint, just a practical one: there's pretty much no way we're not going to continue evolving vision systems in all their hyperspectral glory, and they can do a lot of good, but we just cannot arm them. Ever. And surveillance is a complex question; there are enough positives it can offer not to rule it out a priori.


Russia reportedly has armed robots capable of operating autonomously, they have preemptively refused to honor any ban, and other nations are developing their own.

As terrifying as the prospect is, it's already happening.


Yes. I can only refuse to play and hope the Russians et al. come to their senses, the United Nations starts to work better, etc. If we go down that road, it will be the end of most of us, if not all of us. We really have no chance against a machine that's decided to shoot us. As a species we need to have zero tolerance for it; we won't be rendered extinct by snazzy terminators, we'll be killed by the 21st-century version of a Victorian steam loom. It's truly a horrible problem, because the solution can't be creating autonomous systems to kill other autonomous systems. The rest of the distinction is just data. The only way to win is not to play.


That’s the problem with mutually assured destruction weapons (which AI bots most surely qualify as). If you accept as a given that at least one faction is building them, then the other factions’ only choices are: 1. Build your own and pray for a Cold War stalemate and 2. Don’t build your own and guarantee you’ll be destroyed.


MAD was based on the concept of the 'Nuclear Club': the barrier to entry was the mass defect ratio, which you'd have to calculate on your own, and then you'd confront nation-state-level expenses of turning that knowledge into practical engineering. It had an inherent cap on the number of possible players. Not saying it was nice; it was just manageable.

The barriers to entry here are much lower. For smaller players, lethal drone prices would be in the several-hundred-dollar range with a six-digit investment in infrastructure, and I'd have full confidence in American defense contractors driving that price much lower in order to protect their margins, and at scale they'll mess up their inventory control. The only choice is to recognize as a species that this cannot be done, or it will be done. This equation is not about the few agreeing not to kill the many; it is about the many realizing that this is a road to an extinction-level event. I don't mean to be dramatic, but I have a system I could make kill if I wanted to, and I paid for the whole thing on credit cards. It's too easy and too efficient. The number of players can't be counted on one hand; it's a tech lone individuals could deploy. If they do, then others will, you net an exponential growth rate in deployment, and then the game is over.

At the end of the day, all of the pieces are already here: a motivated individual can do a lot of damage, and several motivated individuals or groups can get into a squabble and the damage rises exponentially. It is simply not an avenue we can accept. The limit to entry is not financial, it is not technical; it can only be that we must not. I'm pleased Google has defined their ethics as a company; I just argue in this case we need to define our ethics as a species. We cannot build these, because if we do they will get loose, they will be used, and it will be a tragedy. We must all agree that arming autonomous machines is beyond the pale, for anyone, for any reason. As soon as one is built, your destruction is guaranteed, regardless of whether you too build one.


I guess my point is (and it looks like we agree): good luck getting all billion+ people on the planet with a credit card to agree that building a kill bot is a bad idea. All it takes is for one of those people to disagree, build them, and then we all must have them.


>All it takes is for one of those people to disagree, build them, and then we all must have them.

A killbot is (more or less) a mobile booby trap. If we have a problem with madmen leaving booby traps around, we can't solve that by laying more booby traps.


This is a bit dramatic though because these robots can't reload themselves. Automated targeting for a single area is miles away from planning strategy, deploying battle groups, etc.


> This is a bit dramatic though because these robots can't reload themselves.

Neither can mines, and look at the decades of devastation those have caused. Now imagine a minefield where the mines get up and chase after you.


I just love hacker news. One of the last bastions of people who think on the internet. This comment is succinct, dead on the money, and absolutely terrifying. My hat is off to you, I had never managed to formulate the risk so concisely. :-) Keep on doing this, and you help keep us all honest.


Truly the stuff of nightmares. Makes me think about that episode of Black Mirror with the Big Dog (DARPA) inspired surveillance robot...


Not dramatic. The robots are not expected to reload: one robot, one kill, simple as that. Mass-produced plastic, lots of robots, no intention whatsoever of reusability. The risk is that this means you dump thousands to millions of these stupid little drones, many make mistakes, and many others take naps and then wake up and do more damage when everyone thought it was all over. These are disposable killing machines, not Reapers. Furthermore, at today's prices, a cloud infrastructure and a million dollars can net you 10K lethal drones. There's no finesse, just brute force.


None of that stuff sounds impossible, though. Hard, sure, but well within the bounds of possibility.


> This is a bit dramatic though because these robots can't reload themselves

Yet...


Seriously, I've already got three ideas cooking on how that could be done. That's essentially the same skill as landing.


Russia didn't say they refused to honor any ban; they said they refused to honor any ban that was based on trust. Their point was that, unlike nuclear weapons, these types of devices can be manufactured in secret or can be manufactured to be quickly convertible into fully autonomous devices.

I'm far more concerned about the hackability of civilian autonomous systems than I am about Russia's killbots. If Russia wants to end the world they already can, and paranoid militaries make more secure cyber systems than random internet-connected cars or planes.


Agreed regarding surveillance. In my book, working on Landsat, JWST, or LISA = awesome.

I will only direct expertise toward an imaging system like Keyhole or targeting/guidance systems if our society faces a clear and acute foreign threat.


What about street-level city/town surveillance? Of course all the cop shops want to feed all their wants/warrants into it. I have no problem with this if there is a legal framework that supports it. More importantly, there are toddlers who get loose from their mothers because they are fundamentally greased pigs, and there are older members of our society who may be prone to wandering off in a daze; the ability to locate them within a very few minutes would be a good thing indeed.

My own take is that I will look, I will try to recognize, but I will NEVER target. The machine is too good; sacks of meat like us have no prayer. That said, I've talked to a couple of cop shops, and my position is that I will work with them when there is a framework that defines a legal basis for that level of tracking of people who clearly wish not to be tracked, even if they are bad actors. It's not a question of how far away you are; it's a question of what you will track, why you will track, and what structure our society has put in place to get the benefits, not the risks.

Looping back to the main point of the post: looking is OK, tracking requires societal governance, and any autonomous lethal capability, never mind action, should be absolutely forbidden, not just by law, but because we don't all want to die.


Turning away will slow things down but AI for military applications will happen. Someone will fill the void.


That doesn't make it ok to be the one to fill it.


On a geopolitical level, it does. It's far better for the world that the United States developed atomic weapons before either Germany or the Soviets did.


It's a good point, and if anybody thinks an arms race isn't already happening then they are very naive. While we're debating the morals of this, the Chinese, Russians and many others are trying to make it happen. For all we know, they may have already succeeded, and we're discussing the morals of it while it's already in full production a few thousand miles away.

The relevant research papers, knowledge and skills are widely available across the world. There are some advanced courses at Chinese universities right now that can only be seen as 'AI for military'.


And I can almost guarantee that, unlike the US, the Russian and Chinese militarized AI programs are not optimized for minimizing civilian casualties.


Really? Looking at drone strike casualties, you'd have no clue how much the US even tries.

It's funny because I haven't heard of large-scale Chinese or Russian drones flying over other countries targeting terrorists but ending up murdering children on more than one occasion.

Perhaps these technologies can be just evil and the US is the only country powerful enough to get away with using them.

I really have no stomach for such a vapid excuse, and I cannot fathom how so many people fall for it.


If you ever end up in a war with China, you'll be seeing plenty of drones. Obviously China doesn't use drones for killing terrorists - China doesn't kill terrorists because terrorists have no way to get into China in the first place.


So if China does X, the US can do it too; also, if China could do X but we do not know for sure, then the US can also do X.

Using this logic, the US can do anything except the things you are 100% sure China (or insert other country here) is not doing and will not do.

This means surveillance, killer robots, black magic, genetically enhanced humans, and illegal experiments and procedures are all valid tools for the US, because "what if we have a war with China? We must have the same tools as them."


Not so; nobody forces anybody to compete in an arms race. My point is that there is an arms race, and inventions such as AI for war are part of that arms race and are being developed. These are very easy inventions too, given the widely available and extremely powerful software and hardware. These are facts. No amount of upsetting moral arguments makes the arms race go away.

The question is: should America compete in the arms race? I don't know. But there are big consequences either way.


Sure, you are not forced; I said "CAN".

Say the US wants to spend a lot of money on some black magic consultant who could assassinate at a distance; you can justify it by launching a rumor that China does it too, or probably does it, or will do it.

So you throw away any moral discussion by blaming China: they do it, so we have no choice.


Hmm, makes sense. I see what you mean from the position of, say, a general in the US Army tasked with this stuff: if someone says China can do X, his bosses are going to ask why America can't do X too. And therefore to force America into doing X, you just need reliable-sounding misinformation to prove China is doing X, and then moral arguments are swept aside.

However that aside, AI is very big in China right now, and they're using it for numerous applications with thousands of students going through Chinese universities being taught how to handle this stuff. While the same doesn't apply to niche interests like genetically enhanced humans (who is working on that, really?), something like AI with thousands of capable researchers and engineers is a different story.


Sure, but I would like to see the real reason: we want drones to strike in third-world countries. It's not like China will send drones into the US.

People don't like war, so it is natural that some people won't want to use their talent for making weapons.


They have plenty of terrorists and frequent attacks, but also no free press, so you do not get to know much about it...


I would really love it if a single person who derides the US for their drone strike casualties would actually back their position up with some data.

At a minimum, I would like to know the civilian and non-civilian casualty rates of drone strikes, the definition of civilian being used, a good idea of what alternative military action the US would have taken if they didn't have drone capabilities, and the civilian and non-civilian casualty rates of those military strike options.

Without that, bringing up the drone strike casualties is nothing more than moral grandstanding based on how certain types of military action make you feel. Bonus points if you use the words "murdering children" in an attempt to bypass any logic and go straight for emotions.



Look at Syria and Chechnya if you have the slightest bit of doubt about how Putin does counterterrorism. Are you literally a Russian troll?


Doesn't seem like the US one does either.


You can't just claim something like that as if it was a fact.

Maybe it would be better for the world for Switzerland to have the most advanced AI. Why does it need to be the US, especially in the rapidly deteriorating political climate from the past 2 decades?


If e.g. France was willing to step up, and had a flourishing tech industry and a track record of responsible global leadership, that would be great. They seem to have their shit together slightly better than the US does, at this point. But, even at our worst, the US is a better custodian of this kind of power than either China or Russia, and those are the only two other countries that would be remotely interested in this kind of technology.


Please tell me the last time China attacked someone across the globe (or anywhere). When was the last time Russia attacked someone across the globe? (Yes, they have wars on their borders, but tell me what would happen if Russia built a military base in Mexico the way NATO does at Russia's borders.) Now tell me when was the last year the USA was not at war?

I don't know how somebody rationalizes the deaths of millions of civilians caused by USA/NATO armies/interventions, which acted in non-defense, but I for sure know that they would not rationalize it anymore if they happened to be on the receiving end of "democratization".


Yeah, I'm sure USA would go to war with Mexico, deny it and annex a part of it. Russia has military bases near NATO countries as well. I'm not rationalising USA army interventions but comparing it to Russia in this case is ridiculous.


Tell that to Cuba, when Russia tried to install army equipment there. And the USA has about 800 military bases around the world outside of the USA; Russia has 8 or 9 outside of Russia. The USA is spending hundreds of billions of dollars annually just to support those military bases. I know I would be pissed about that if I were a US citizen.


The US was the only country to use atomic weapons on an already defeated enemy, or at all.


The Japanese were hardly already defeated. Had the bombs not been dropped, the United States would have invaded Japan directly, island by island, until the country surrendered. The loss of life on both sides would have been tremendous.

Source: My grandfather had orders to go and do exactly that when the dropping of the bombs ended the war.


This is of course one of the justifications American leaders used, and as always the victor gets to set the perceived historical narrative. Politically it was extremely important for the US to believe the bomb materially shortened the war given the huge amount of resources the Manhattan Project had consumed that otherwise could have been invested elsewhere in the war effort, especially when the military had to justify the incredible expense to Congress (adjusted for inflation the total cost is around 30 billion in 2018 dollars). I've recently been reading the excellent "The Making of The Atomic Bomb" by Richard Rhodes which covers the events of this period in much detail.

The US had already been ridiculously effective using firebombing to level Japanese cities with their B-29s - so much so, they actually had to consider slowing down/changing targets to leave enough behind to use the Atomic Bomb on: there was almost nothing left worth hitting in strategic terms. By the time the bomb was dropped Japan was largely a beaten nation already considering surrender, Tokyo a smoldering rubble pile save for the Imperial Palace.

"The bomb simply had to be used -- so much money had been expended on it. Had it failed, how would we have explained the huge expenditure? Think of the public outcry there would have been... The relief to everyone concerned when the bomb was finished and dropped was enormous." - AJP Taylor.

Of course no one can say with certainty, but I certainly don't consider the answer to this question to be a simple one.


The US had no way of knowing for sure what the top-level strategic decisions were in Japan. All they knew was that, throughout the war, Japanese troops virtually never surrendered, repeatedly fought to the death, and engaged in outright suicidal tactics including Kamikaze attacks. This persistence not only continued but intensified on Okinawa. There was no reason to believe that the Japanese military would ever stop short of fighting to the bloody end.

Even after Nagasaki, it took personal intervention from the Emperor and the foiling of an attempted coup for Japan to surrender.

Of course, dropping the bomb and developing the bomb are two distinct, albeit related, ethical questions.


I believe that much of the world is completely unaware of the devastation wrought by the firebombing campaigns.

To quote Wikipedia: "On the night of 9–10 March 1945, Operation Meetinghouse was conducted and is regarded as the single most destructive bombing raid in human history. 16 square miles (41 km2) of central Tokyo were annihilated, over 1 million were made homeless with an estimated 100,000 civilian deaths."

https://en.wikipedia.org/wiki/Bombing_of_Tokyo


> there was almost nothing left worth hitting in strategic terms.

The bombing campaign leading up to the atom bombs specifically left about 5-6 cities relatively untouched. There were still major strategic targets left in August. They did this to test their effectiveness on cities, as a demonstration to the Soviets, and to destroy morale.

Demonstrating their effectiveness to the Soviets is why they didn't drop them in Tokyo Bay.


Would you rather it was Stalin?


> It's far better for the world that the United States developed atomic weapons before either Germany or the Soviets did.

[citation needed]

The US having nuclear weapons didn't work out so well for the 70,000-120,000 innocent civilians who were killed in the attack on Hiroshima. [1] I don't have handy access to figures for how many innocent civilians were killed in Nagasaki, but I would assume it was similar.

Would the Nazis have done the same thing? We don't know, and we can't know. But what we do know is that despite the Soviets/Russia, France, the UK, China, India, Pakistan, and probably Israel and North Korea having the capability, only the US has used nuclear weapons for indiscriminate, wholesale massacre.

So with respect, I really don't think you can go around trumpeting how it was "far better for the world" for this to happen when there is zero evidence to support that viewpoint, and at least 70,000-120,000 reasons to refute it.

1 - https://en.wikipedia.org/wiki/Atomic_bombings_of_Hiroshima_a...


Come on, are you really making this argument? That we can't know if it was better for the USA or Nazi Germany to have nuclear weapons? Like, say you have to go back in time and give nuclear weapons to Nazi Germany or the USA during WW2; you would throw up your hands and say we can't know who the weapons should go to?


If I understand correctly, you're saying that Germany and Russia shouldn't have had nuclear weapons because they would have used them, while arguing that it was right that they went to the Americans, in spite of the fact that they used them.

This does not seem like a valid position.


They didn't mention the USSR/Russia at all, just Nazi Germany. Their position is quite obviously that the US having nuclear weapons and the end of WW2 was better than Nazi Germany having them.

It has nothing to do with whether or not they would have used them. The reason that it was better for the US to have them is because Nazi Germany was conquering sovereign countries through military action, not to mention engaging in the industrialized genocide of millions of people.

Their position rests on the fact that most people agree that the Allies were the "good guys" and the Axis were the "bad guys" in WW2, which is not a position that really has to be defended.


The United States imposed their will on Japan with the force of nuclear weapons, and as a result, Japan is a prosperous, free, and independent country.

If Germany or Soviet Russia had the opportunity to use nuclear weapons to impose their will on their enemies, one need only look at what happened to the victims who already fell under their dominion.


I am saying that we can't know what Nazi Germany would have done with them, i.e., we can't know if they'd have been "good" or "bad" with them. Anything we think now is merely conjecture and speculation. We have to take a step back and really examine and question our own beliefs and biases. How much of what I am thinking is actually legit, and how much is affected by what I've seen at the movies, been told by teachers, read in the papers, or seen on TV and just accepted as fact?

But what we do know for an absolute fact is that the US did use nuclear weapons to kill thousands and thousands and thousands of innocent men, women, and children (and for the sake of balance, we also know for an absolute fact that the Nazis killed thousands and thousands and thousands of innocent men, women, and children, just not with nuclear weapons).

There is only one country with actual blood on their hands here with regard to nuclear weapons - the other nuclear powers have so far been able to show restraint.

As such, I find it pretty objectionable for people to suggest that it was "far better" this way, when the evidence really does not back it up.

I am not saying it is not the best possible outcome for the world. Could it have been worse if the Nazis had, for example, nuked London in 1945 and killed a million? Sure, of course that might have happened, but it didn't actually happen. Perhaps, had the Nazis had that chance, the UK would have surrendered, there would have been peace, countless lives could have been saved, and a completely new era of peace and prosperity would have begun? Or perhaps it would have also been untold slaughter and misery like what the US did to Japan?

We just can't know, and so I object to people saying it was "far better" for history to have played out the way it did based mainly on, I suspect, the plot of Hollywood movies they've seen. History is written by the victors.

Anyway, this is way off topic and Godwin's Law has clearly been invoked. We should stop.


We absolutely know the Nazis would have at least used atom bombs on the Eastern front. Whether they would have used them on the UK could be up for debate, but arguably they would have used them there as well.

Regardless, the atom bombs were certainly not the worst things any country did in WWII. The US firebombing was far worse. Everyone did bad things in that war.

Stop using your modern sensibilities to judge them.


Firstly, Godwin's law obviously doesn't apply to discussions about WW2.

Secondly, your assertion that we can't know what Nazi Germany would have done with nuclear weapons is correct, but you seem to be interpreting that as meaning "all possible outcomes of Nazi Germany having nuclear weapons are equally likely", which is absolutely not true and a common mistake to make in an argument.


I'm sure the German and Soviet scientists thought that it would be best if they developed nukes first, too.

How sure are you that the present-day United States is the "right" group to have AI-controlled murder-drones?


Would you rather use a Google self-driving car, or one made by a random hacker start-up[0]? Similarly, would you rather your military use AI from a low-tier company?

https://www.theverge.com/2017/7/7/15933554/george-hotz-hacki...


I don't get it. Are you suggesting that Google or other big players make things better than anyone else? Or, the other way around, that small startups cannot make good products? If that were so, then only companies starting with 1000+ people could do anything useful, small startups would be doomed to fail, and big players could never do wrong. That sounds a bit simplistic and contrary to countless examples from real life.


In the specific space of machine learning in life-critical applications, I generally prefer product from established companies with reputation to sub-1000 startups.

The incentives to cut corners and go to market are much higher for small startups with short runways. I don't want corners cut when lives are on the line.


"I don't want corners cut when lives are on the line"

...I'm still not sure if you are talking about the (heavily) cost-optimising conglomerates or not, if we agree to constrain the topic to finances, leaving out innovative ideas, ethics, integrity, trustworthiness, etc., where conglomerates may be loose with standards.

Also, do you know how Apple and Google started? (I hoped the suggestion would get through without stating the obvious, but it did not.)


This is not a binary choice nor a necessity.


That claim is subject to different perspectives. It might not be a problem for someone else.


Is it better to develop weapons tech for your own in-group, or to risk being on the receiving end of weapons tech that another group developed and against which you can't defend yourself due to your earlier principled stand?


I don't think it's a given that if you don't develop a certain type of weapon it renders you unable to defend yourself against it.


No, but it's a given that if you don't develop weapons tech at all, it renders you unable to defend against weapons that you would otherwise be able to counter.


Probably someone else will do the dirty deed, but now 1) it won't be Google, and 2) it'll be someone willing to do more evil than Google.

That said, your argument does not avoid complicity in behaving badly or potentially doing so. It says only, "I'm a shit, but I'm willing to be a shit because there are other shitty people in the world who will behave badly even if I behave well, so I choose to behave badly because it serves me and the outcome is probably the same either way."

Of course, if your business partners adopt the same Machiavellian philosophy toward you that you espouse, one day they'll probably speak those very words when they turn against you, since someone else probably would have.


The only things we can change in life are our own choices.


The entire point of weapons is that they give you significant influence over other people's choices.


You can say that about violence in general


> Someone will fill the void.

Especially since Google is publishing their results for every void-filler out there to review. Unless they plan to start hiding results that might have military applications?


Inevitable isnt the same thing as right.

After all, everyone inevitably dies, so why not murder them.


If someone will inevitably develop nuclear weapons, then it is right to develop nuclear weapons even if it "just" creates mutually assured destruction, as opposed to abstaining and thus ensuring the dominance of whoever does decide to develop them.


> ”non-surveillance purposes”

Not what it says. It says:

“...surveillance violating internationally accepted norms.”

Thanks to leaks, we have a glimpse of the new normal.


>2. Avoid creating or reinforcing unfair bias.

They DO realize that the YouTube recommendation algorithm is a political bias reinforcement machine, right?

Like, I think it's fun to talk trash on Google because they're in an incredibly powerful position, but this one isn't even banter.


FWIW, since they updated the Google News algorithm, I've started getting a pretty diverse set of results. It might be because it picked up that I try to read opposing views, or it might be because the algorithm is terrible, or any number of other reasons, but regardless of the source it's refreshing to see.

...and revealing! The headline differences between, say, Washington Post, CNN, The Hill, and Fox News for the same news blurb are even more dramatic than I'd expect when you put them up side-by-side.

None of which contradicts your point, I just wanted to flag a happy instance where the result wasn't just "yo dawg".


When did that happen? I'd given up on Google News, it reached a point where it only wanted to show me articles from the Guardian or CNN even though I read a wide range of outlets. I just went back and looked again and there does indeed seem to be a better mix of stories now, although I am still very skeptical - the News algorithm is even less documented than the web search ranking algorithms and it would be tough to rebuild the trust. Google is clearly run by people who feel they need to "make the world a better place" through subtle conditioning of their algorithm outputs these days... very different to how it used to be. I'm not sure I trust them to rank news, even if improved.


For some reason, the new Google News app is a net new download/install of the app.

https://itunes.apple.com/us/app/google-news/id459182288?mt=8

When they demo'd it at Google I/O I was pretty excited to try it, and then I went to look at my already installed Google News app and it was still the same old one I'd been using for a while. Long story short, I discovered it's a whole different app with the exact same name but an updated icon.

The actual app is very, very good though. I've been reading lots more news in the app. It's been about 3 weeks using it daily and I'm starting to notice reinforcement of my common subjects & sources on the 'For You' page. I hope I start to notice more curve balls. It's great to see the 'Headlines' page, which I'm pretty sure is the same for everyone.


>The headline differences between, say, Washington Post, CNN, The Hill, and Fox News for the same news blurb are even more dramatic than I'd expect when you put them up side-by-side.

Can you give a concrete example of this? I'm definitely interested in seeing how big the differences are


It's pretty evident if you just look. I literally just grabbed some headlines off the North Korea Summit topic:

- WaPo: Trump dangles White House visit for North Korea’s Kim if summit goes well

- Fox News: Trump forced Kim Jong Un to 'beg' for meeting, Giuliani says

- CNN: How is Donald Trump preparing for the huge North Korea summit? He's not.

- Reuters: Latte art and a gym ad: Kim Jong Un's softer image in South Korea


They should use color coded background on key words, using sentiment analysis. Then you could see the contrast without having to read all the headlines. (Though, "beg" here, for example, is positive, while in most contexts it is probably negative.)
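
For what it's worth, here's a minimal sketch of that idea using NLTK's VADER sentiment scorer - just headline-level scores with a rough color label rather than per-word highlighting, and the thresholds are arbitrary assumptions:

    # Score each headline and tag it with a rough color label.
    # Requires: pip install nltk, plus a one-time download of the VADER lexicon.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download('vader_lexicon', quiet=True)
    analyzer = SentimentIntensityAnalyzer()

    headlines = {
        "WaPo": "Trump dangles White House visit for North Korea's Kim if summit goes well",
        "Fox News": "Trump forced Kim Jong Un to 'beg' for meeting, Giuliani says",
        "CNN": "How is Donald Trump preparing for the huge North Korea summit? He's not.",
        "Reuters": "Latte art and a gym ad: Kim Jong Un's softer image in South Korea",
    }

    for outlet, text in headlines.items():
        compound = analyzer.polarity_scores(text)["compound"]  # -1 (negative) .. +1 (positive)
        color = "green" if compound > 0.2 else "red" if compound < -0.2 else "gray"
        print(f"{outlet:10s} {compound:+.2f} ({color}): {text}")

As noted above, a lexicon-based scorer would mark "beg" as negative even though the Fox headline uses it approvingly, so real per-word highlighting would need more context than this.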


That could be easily executed as a chrome extension!


Wow, this really is dramatic and telling!


An example from today (first political result on my screen):

- Reuters: Trump says Russia should be at G7 meeting, Moscow not so sure

- CNN: Trump: Russia should be in the G7 summit

- TheHill: An isolated Trump attends the G-7

- Fox News: Trump prepares for North Korea summit as a great performer -- like Reagan


I'm convinced that Google is touting its 'principles' with a hackneyed blog post probably written by some PR flack.

As an American, I'm disappointed, and positively enraged by the hubris on display here. A bunch of (non-US) employees have pressured Google and therefore compromised the national interests of the United States.

See this for an alternative viewpoint: http://www.chicagotribune.com/news/opinion/commentary/ct-per...

It's high time these companies are regulated and their malfeasance reined in by the United States.


So some people who worked for Google think Google was helping do something that is immoral, and other people agreed, and the collective outrage convinced Google to change their minds. Great, that's how it should work. What else would induce a company to stop doing something immoral, if not outrage?

This applies regardless of whether you think this specific example is immoral.


Certainly many US employees were against it.

Also, Google is a global organization. I don't believe that corporations should primarily serve the interests of their government; they should serve their users and reflect the attitudes of their employees.


I'm sure many US employees were against it, and they were rightfully exercising their voice.

Google may have offices all over the world, but it's an American company, and like people (Corporations are like people, no?), it must be held responsible for its actions.

I would argue that the public should be especially wary of 'global' corporations such as these (Facebook is another one) that suddenly grow a conscience when it suits them.

Surely a company with such high morals and ethics should easily withstand regulation and public scrutiny that protects the national interests of the country that's responsible for the majority of its profits, and provided the fertile ground from which it sprang to life.


You appear to be suggesting that the government should be able to compel corporations (and thus their employees, ie. normal people) to work on military technology.

That sounds antithetical to a lot of freedoms that we hold dear as Americans.

>I would argue that the public should be especially wary of 'global' corporations such as these (Facebook is another one) that suddenly grow a conscience when it suits them.

How do you mean? To paint this in a very cynical light, the sequence of events here was

Google does a thing. Then, many Google employees threaten to quit over that thing (among a bunch of other potential downsides). So, Google agrees to stop doing the thing

Is "Google changes its policy to maintain its workforce" something that you should be wary of? That seems like reasonable corporate governance.

On the other hand, you can paint this in a much less cynical light, where the sequence was

Google does something that is potentially antithetical to its values. Employees object to this thing, claiming that it really is antithetical to those values. As a result, Google reaffirms its values and makes them more explicit, promising not to do the thing.

In other words, a very anti-google view sees this as a move for retention, and a pro-google view sees this as a reaffirmation of the "conscience" (read: values) that Google already had. I don't see how your worries apply here.

(Am a Googler, but that isn't particularly relevant to this post)


"Google changes its policy to maintain its workforce"

Google, an already known to be duplicitous company, changed its policy to maintain its workforce at the expense of US national interests. That's certainly something to be wary about, as a member of the public.


People should never be compelled to work on producing weapons or munitions, except in times of war.

Google, specifically, like Facebook, should be invited to explain itself and generally describe its activities a bit more transparently for the public to see. At this point Google is effectively a utility, so there's plenty of good reasons to regulate it like one. Right now, it has benefited from almost no oversight and has grown a bit too cocky and self-righteous. Silicon Valley CEOs need to be cut down to size. Almost no other industry has this level of smugness and self-righteous belief in their superiority over the American people.

Here is a quote by Louis Brandeis, an erstwhile Justice of the Supreme Court, that pretty much captures what I have to say: Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.


What does "we (read: our employees) would prefer to not work on weapons systems" have to do with a "self-righteous belief in their superiority over the American people"?

I'm honestly confused as to how those two concepts are connected.


You didn't address the main point of my comment, so I'll ask again more directly: are you suggesting that the government create regulation that compels companies, (and thus individuals) to do work on military technology that it deems relevant to national interests?

If not, what exactly are you suggesting?


> What does "we (read: our employees) would prefer to not work on weapons systems" have to do with a "self-righteous belief in their superiority over the American people"? I'm honestly confused as to how those two concepts are connected.

The connection is the hubris that enables Google to benefit from public largesse while simultaneously believing itself to be superior to it (and by extension, the public). Like it or not, Google has benefited immensely from research facilitated/instigated by DARPA, DoD etc. which were all military technologies (GPS, internet, Grand Challenge). But now, it has serious qualms about AI that enables civilian areas to be identified in conflict zones (by its own admission).


>itself to be superior to it

I'm still confused as to how "we don't want to work on systems that kill people" is in any way "superiority".

In fact, reading your comments you seem to suggest that any entity (any individual even!) who has a moral objection to working on military technology, but who uses any modern technology believes themselves superior to the public. But that description describes a large minority (or perhaps a majority!) of the population.

You appear to say that Google's sense of superiority stems from its objection to working on military technology. But I think that description applies to much of the public.

That is to say, I find it likely that most of the public would object to working on AI for drones. Yet you're arguing that the objection to working on AI for drones makes one believe them-self superior to the public. In other words, most of the public believes themselves to be superior to...themselves.

Hence my continued confusion.


In my comments, I repeatedly say that individuals should not be compelled to work on military technology, except in times of war. Hardly confusing.

> In fact, reading your comments you seem to suggest that any entity (any individual even!) who has a moral objection to working on military technology, but who uses any modern technology believes themselves superior to the public. But that description describes a large minority (or perhaps a majority!) of the population.

I do not know how one could read that into my comments. At this point, I'm beginning to think you're being deliberately obtuse. Google doesn't just use this technology like you or me, it benefits, i.e., enriches itself immensely to the tune of billions of dollars every quarter! It also becomes much more powerful and further embeds itself into the lives of ordinary people in this process.

> You appear to say that Google's sense of superiority stems from its objection to working on military technology. But I think that description applies to much of the public.

I do not think that description applies to much of the public, which lives outside the SV bubble. But I will concede that this is something that's debatable.

> That is to say, I find it likely that most of the public would object to working on AI for drones. Yet you're arguing that the objection to working on AI for drones makes one believe them-self superior to the public. In other words, most of the public believes themselves to be superior to...themselves.

Here, again, you are twisting my words and ascribing meaning that simply doesn't follow from what I have said.

I am definitely saying that Google the corporation and leadership in Silicon Valley suffers from the hubris that they can simultaneously benefit (i.e., make billions of dollars) off the fruits of research that's quite explicitly geared toward military technology, and they can rebuff those very same benefactors without consequences (and while holding the moral high ground). These benefactors are Government agencies, that exist to carry out the will of the people (nominally, at least).


>Hardly confusing.

Except that much of what you're saying implies that you do think these companies should be regulated in a way that forces them to do this. Or at least, if that's not what you're saying, then you seem to be insinuating a whole lot for no apparent purpose. This is why I'm confused. Your stated words and actions (ie the rest of your words) don't appear to match up.

>Google doesn't just use this technology like you or me, it benefits,

Are you suggesting that you and I don't benefit immensely from the internet?

If I'm understanding you correctly, you're saying that it is unethical for an entity to benefit from another entity without supporting it. That is to say, it's unethical for Google to benefit from the military's technology without also supporting the military.

Ignoring, for a moment, that there's a whole host of debate on whether or not that's even true to begin with, such an objection applies equally to any individual as well as to Google. I, personally, benefit greatly from military technology. Is it unethical for me to refuse to work on drone warfare? It seems odd for you to say yes to that, but on the other hand, that's basically what you're saying about Google.

>I do not think that description applies to much of the public, which lives outside the SV bubble. But I will concede that this is something that's debatable.

I would consider more than a third of the US population to be "much of" [1].

> suffers from the hubris that they can simultaneously benefit (i.e., make billions of dollars) off the fruits of research that's quite explicitly geared toward military technology, and they can rebuff those very same benefactors without consequences

But again, except perhaps in terms of scale, this applies to anyone. You and I both benefit, significantly, from military technology, both in terms of safety and quality of life. Yet you've stated that we should not be compelled to give back.

Why should Google (or any other corporation, which again, is really just a set of individuals) be treated differently?

[1]: https://www.politico.com/story/2015/05/poll-support-drone-st...


Here we have a difference of opinion. I strongly believe corporations are less accountable to the general public than individuals or (democratic) Government, including the Military. I simply do not believe in corporate personhood. I do think there is a large distinction between individuals and corporations, and corporations, on the whole behave much more unethically and can wreak much more havoc than individuals. The large ones also seem to have an uncanny ability to perpetuate their existence (in various guises and incarnations) and generate some really nasty externalities.

I do think that the level of freedom and lack of accountability (vis-a-vis individuals and governments) corporations enjoy in the United States has reached insane levels. On the other hand, I do prefer it to the situation in China, where any corporation is likely to become a tool of the State. I believe corporations can and should contribute back to the Military if they have benefited financially from military technology. Perhaps not as much as would have been the case in a Socialist/Communist country, but definitely at some level higher than the present.


> I believe corporations can and should contribute back to the Military if they have benefited financially from military technology.

Do you also believe corporations should contribute to Experimental Particle Physics at CERN if they have benefited financially from the world wide web?

https://home.cern/topics/birth-web


>I simply do not believe in corporate personhood.

I don't think what I'm saying depends on corporate personhood. My point is, if you claim that corporations have a responsibility to contribute back to the military, you are claiming that someone at the company should do that.

Further, in your post that started this subthread, you stated

>It's high time these companies are regulated and their malfeasance reined in by the United States.

That, at least to me, reads as though you think that the US should regulate these companies in ways that require them to give back to the military. Which again, which employees should do that? How can you compel a company without, at some level, compelling the individuals within the company? Which you've at least claimed you don't want to do?


> I don't think what I'm saying depends on corporate personhood. My point is, if you claim that corporations have a responsibility to contribute back to the military, you are claiming that someone at the company should do that.

It does. A corporation can easily set up a division or a separate subsidiary or sub entity and staff it with willing individuals to do this sort of thing. There is no direct conflict with individuals' rights. So sure, Google the corporation can be compelled to do this without affecting individuals. It's quite common in other industries, but of course, for SV, it's all about the hubris, optics and innate sense of superiority.


>It's quite common in other industries

In what other industries does the government require companies to develop military technology?

>A corporation can easily set up a division or a separate subsidiary or sub entity and staff it with willing individuals to do this sort of thing.

So you're saying that if I found a company that is based on internet-related technologies, it is reasonable to at some point in the future compel me (or compel me to pay for someone else) to work on military drones?


> In what other industries does the government require companies to develop military technology?

The canonical example is the early days of aerospace, where for all practical purposes you were developing military technology.

> So you're saying that if I found a company that is based on internet-related technologies, it is reasonable to at some point in the future compel me (or compel me to pay for someone else) to work on military drones?

The government can already compel you to license your work via eminent domain. There's an established process for this.

https://en.wikipedia.org/wiki/Government_patent_use_(United_...

At any point of time there are always a slew of 'sensitive technologies' whose use and development will be closely monitored and companies are incentivized, severely restricted, or outright barred from freely trafficking in them. It's not a giant leap of the imagination that they will be forced to do the federal govt's bidding if they already do a large amount of business with them or have started out with military IP.

'Internet-related technologies' is not one of them. There was a time when supercomputers were in this category, then it was cryptography, and now it's looking like AI.

BTW, Cisco and a few others have been forced to develop 'lawful intercept' technologies on their routers for the three-letter agencies for years, I think. There was a big controversy about this a few years ago.


> BTW, Cisco and a few others have been forced to develop 'lawful intercept' technologies on their routers for the three-letter agencies for years, I think. There was a big controversy about this a few years ago.

And you seem to be arguing that this is a good thing? Surely any company (filled with people) should be free to work on whatever technologies that those people feel is ethically right.


> And you seem to be arguing that this is a good thing? Surely any company (filled with people) should be free to work on whatever technologies that those people feel is ethically right.

I'm arguing that it's not a cut-and-dry thing. Clearly it's susceptible to abuse, but on the other hand it is vital to the long-term security interests of the United States. In any case, there is more accountability than Google or Cisco 'self-regulating' themselves. These companies can't massively leverage military research and then turn back and say they have no obligation whatsoever. They can choose to do no business with the federal government, but that's clearly not the case. In fact, the opposite is true.


The whole idea of “protecting the national interests” of the US creeps me out, as that to me usually means oppressing others, interfering with democracy in foreign countries, and generally sacrificing people outside the US to maintain the global hegemony we promote.

I can tell you see it differently, but I hope you see that not all people see promoting “US government interests” as an automatically good thing, given what that phrase has meant historically.


I am aware of America's many follies and villainous acts in support of misguided notions. It's regrettable, but Americans as a group have strived to rectify wrongs and steer our country in the right direction. I deeply cherish our American values and think they're what set us apart from the rest of the world.

I can't for the life of me see why Sundar Pichai should be beholden to his (significantly foreign) employees and privilege their interests over those of the nation (his nation). Remember that Google itself is tainted, being complicit in spying on Americans (PRISM, anyone?). Why shouldn't such an entity be regulated?


> I can't for the life of me see why Sundar Pichai should be beholden to his (significantly foreign) employees and privilege their interest over that of the nation (his nation).

He's beholden to his shareholders, and every move they have made is all about that. They knew it would be bad PR but they wanted to make money, so they took the contract initially and tried to keep it quiet. Then all the leaks happened, causing it to backfire; tons of employees and users got upset, and so they reversed direction, as they think a good reputation for attracting employees and users will pay off more than military contracts in the long term.

> Remember that Google itself is tainted, being complicit in spying over Americans (PRISM, anyone?). Why shouldn't such an entity be regulated?

It wasn't just Google that had to comply with PRISM. They, along with every other company that wants to operate in the US, had to comply with it, because every company has to comply with the laws of each country it operates in.


> He's beholden to his shareholders and every move they have made is all about that.

That's the real nub. I'd posit that this is also about keeping a small but vocal group of employees happy. However, why did Google effectively disavow all cooperation with the military to appease this group? They could have easily set up a division and staffed it with willing people. Or Alphabet the parent company could have started something else up (how costly is it to incorporate, really?). Looks like none of these things was even given serious consideration.


> They could have easily set up a division and staffed it with willing people. Or Alphabet the parent company could have started something else up (how costly is it to incorporate, really?). Looks like none of these things was even given serious consideration.

Did you even read up on this story because that is essentially what they tried to do and it backfired?


> Did you even read up on this story because that is essentially what they tried to do and it backfired?

I read that the 'contract was routed through' some front company. I don't think that was a separate company staffed with people to adapt Google AI to this purpose.


> . It's regrettable, but Americans as a group have strived to rectify wrongs and steer our country in the right direction. I deeply cherish our American values and think they're what set us apart from the rest of the world.

Ha. Ha ha. Hahahah.

Please, keep drinking that Kool-Aid of American exceptionalism. Apart from the British, which other country has contributed to more invasions in foreign countries for the sake of purely economic interests?

Your position is laughable, and the classic, pathetic opinion of the sort of American who hasn't traveled around the world to see that their system of values is not qualitatively different from that of most other developed Western nations.


> Apart from the British, which other country has contributed to more invasions in foreign countries for the sake of purely economic interests?

The "purely economic interests" qualifier makes your question difficult to answer, but France has likely been involved in more invasions in foreign countries.

https://www.reddit.com/r/MapPorn/comments/7ax7g1/all_countri...


> Please, keep drinking that Kool-Aid of American exceptionalism. Apart from the British, which other country has contributed to more invasions in foreign countries for the sake of purely economic interests?

I will!

Which country came to the aid of Britain, and all those Western European nations when they were faced with the existential threat of Nazism? Which country has provided refuge and solace to more oppressed and exiled peoples in the world than the US? Which country fought a violent war to rid itself of slavery? Surely not the Western European nations you speak of.

America has done a lot of damage in this world, but I can't think of any other nation that has done so much good either.


Well, technically, we didn't fight a violent war to rid ourselves of slavery -- as hard as the history books try to push that view.

Slavery died out pretty much everywhere else in the world without bloodshed, so it's pretty safe to assume the US wouldn't have been different in that regard; the freeing of the slaves was more of a punitive action against the seceding states than anything else.

Anyhoo, my point being that it isn't a grand example of American exceptionalism. Much better to just link to Kevin Kline's A Fish Called Wanda rant...

And why doesn't my iPhone know how to spellcheck "slavery", are we trying to remove it from the language double-plus fast?


Worth remembering: slavery was outlawed in the United States after it was outlawed in Great Britain.

In the long-view, the whole American Experiment may not have been a net positive for slaves and their descendants relative to a hypothetical alternate history where the Revolutionary War failed and the US was just more British Empire. Nearly impossible to say with certainty, of course, because a Britain that included the US may have had different incentives to push it away from ending slavery.


The British just exported it to locales outside their own. See the Caribbean, the remote Pacific Islands (Australia, effectively British for practical purposes at that time, enslaved many Pacific Islanders to work in plantations), much of the Indian Subcontinent, Zimbabwe/Rhodesia... shall I go on?



> Which country came to the aid of Britain, and all those Western European nations when they were faced with the existential threat of Nazism?

Russia mainly, where an estimated 24 million people died. America also didn't wade into the war for purely selfless reasons either; if Germany had managed to invade Britain and Western Europe, America would have been under significant threat.


India (part of the British Empire) also sent millions who died fighting in North Africa.

The reasons every country got into the war were complex. But it is also largely true that America spilled blood and treasure out of a sense of obligation to fight Nazism, despite having a significant immigrant German population (who fought against their brethren on the other side), going so far as to impose a draft in the later stages of the war.


This is so much BS. You should actually read the history of the reasons for the involvement of the US in the WW2 theaters.

You sound like a young kid who has never actually read about history and geopolitics.

Please dude, actually read up on the involvement of the US in the secret wars in Cambodia by Kissinger, the School of the Americas, instrumental in teaching South American militaries repressive strategies that killed tens of thousands of innocents with the full support of the American government, the Iran-Contra affair, the ridiculous involvement in Vietnam.

There is an innumerably long list of atrocities committed purposely by the American government and with the silent consent of the American majority.

Seriously bro, Iraq happened less than 15 years ago in a completely manufactured war and you're actually so stupid as to believe that there's anything particularly worthy of American Imperialism?

You should try to be on the receiving end of the American business interests that have fueled these conflicts, lest we see what your opinion on the matter would be.


> Please dude, actually read up on the involvement of the US in the secret wars in Cambodia by Kissinger, the School of the Americas, instrumental in teaching South American militaries repressive strategies that killed tens of thousands of innocents with the full support of the American government, the Iran-Contra affair, the ridiculous involvement in Vietnam.

Kissinger was just pure evil. Apart from his involvement in the things above, he also actively supported dictators in Pakistan, and indirectly did nothing to stop the slaughter of civilians in (then) East Pakistan.

But, I still stand by my point: Americans have strived to right the wrongs (and there have been many and monumental ones). Eventually all these 'secret' activities have come out and the public has ensured that the people responsible were shamed or held accountable, to some extent (it never is a full reckoning, unfortunately). I can't think of that happening in China, for instance. Name any other great power that hasn't had stuff like this?


> and therefore compromised the national interests of the United States.

Did I miss something? Was the Selective Service Act amended to extend to corporations, too? Was Google drafted?

Last I checked, cooperation with the United States military has been purely on a volunteer basis since Vietnam.


You raise an interesting point. What makes Google the corporation better than males under 26?

If you believe in corporate personhood, then Google, Facebook are definitely villains --- avoiding taxes, running ads from enemy states etc, while maintaining a shroud of secrecy and non-accountability --- positively treasonous acts if committed by a person. If you do not, then what right is violated by making corporations subject to the Selective Service Act?


The supreme court has ruled that Corporations are people. So why wouldn't it extend to corporations?


I'm a US employee and I'm opposed to it. I'm not sure what you mean by 'reined in' - do you mean you want the US government to nationalize companies that don't want to work on military contracts and force their workers to build weapons?

Don't worry, the DoD will get its AI weapons; image recognition with machine learning is a commodity now, and some other company will end up doing it.


It doesn't have to be that drastic. Plenty of other companies do this kind of stuff. See for example, Boeing. It has a separate Commercial Airplanes division. Plenty of technology sharing happens between the Military arm of Boeing and the DoD. I imagine that people who object to military use of their work could find a happy home on the commercial side and vice versa.

My point is, where there is a will, there is a way. Individual rights are sacrosanct, corporations', not so much (or at least, they need to be incentivized to work in the national interest).


Maybe. But it takes guts to draw any line in the sand, when none of your rivals has been willing to do it.

Sure, it'd be nice if Google tightened up their rhetoric a bit, or proposed concrete ways they intend to act deliberately and publicly to enforce these bylaws.

But this is a start, and reveals a willingness to speak out both to their employees and the public at large that company policy disallows some future lines of business, especially building weapons and surveillance tech.

IMHO, this is a positive step in the right direction.


They circumvent this by using the word "avoid." Avoiding and not doing are two different things.


I don't know how you do something and avoid doing something at the same time.


Personally, I think the term "avoid" is more of an ideal/pragmatic stance ("would be ideal not to") than the binary "will do"/" will not do"


    avoid |əˈvoid|
    
    verb
    
    keep away from or stop oneself from doing (something)
Instead, I think Google just doesn't mind lying[1].

1. e.g., "Don't be evil"


So true. I'm increasingly finding myself having to 'play' the recommendation algorithm strategically, just to keep my own political sanity...


Several comments don't seem to understand what the "unfair bias" mentioned is. It doesn't have anything to do with censoring your favorite conservative search result.

The machine learning "bias", at least the low hanging fruit, is learning things like "doctor == male", or "black face = gorilla". How fair is it that facial recognition or photo algorithms are trained on datasets of white faces or not tested for adversarial images that harm black people?

Or if you use translation tools and your daughter translates careers like scientist, engineer, doctor, et al and all of the pronouns come out male?

The point is that if you train AI on datasets from the real world, you can end up reinforcing existing discrimination local to your own culture. I don't know why trying to alleviate this problem triggers some people.
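
As a toy illustration of the mechanism (made-up counts, plain Python): if a model simply picks the most frequent pronoun seen near a profession in its training text, any skew in that text comes straight back out as a confident-looking answer.

    from collections import Counter

    # Hypothetical pronoun co-occurrence counts from a skewed training corpus.
    pronouns_near = {
        "doctor":   Counter({"he": 820, "she": 180}),
        "nurse":    Counter({"he": 90,  "she": 910}),
        "engineer": Counter({"he": 870, "she": 130}),
    }

    def pick_pronoun(profession):
        # Maximum-likelihood choice: always return the majority pronoun.
        return pronouns_near[profession].most_common(1)[0][0]

    for job, counts in pronouns_near.items():
        share = max(counts.values()) / sum(counts.values())
        print(f"{job}: always '{pick_pronoun(job)}', even though it's only {share:.0%} of the data")

An 82/18 split in the data becomes a 100/0 split in the output, which is the "reinforcing" part of the principle.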


Even for the low-hanging fruit there are concerns that the bias correction will only be made based on the views of those currently in power at Google. They might want to make sure that a translation AI doesn't associate careers like scientist, engineer, and doctor with the male gender, but will Google do the same for crime and criminals? Google may want to steer the YT recommendation AI away from the damsel-in-distress trope, but will it do the same for the bumbling-dad trope?

In the current polarized climate there isn't much trust left that bias correction will itself be unbiased, or that it will reduce existing discrimination rather than doing the opposite.


"Machine learning is like money laundering for bias"

https://twitter.com/pinboard/status/744595961217835008


It's definitely a problem worth alleviating. But it is "triggering" because it is an open problem to determine whether a bias is harmful, even for human beings. So it becomes an impossible/unreasonable amount of extra work if you demand it as a prerequisite.

For example, in your translation tool example, even a human translator would have trouble making the least offensive translation possible. She/he/(insert favorite pronoun here) would need to realize the audience is a young impressionable child who is about to base her entire world-view on whether there's statistically more of her gender in that one sentence of translation.

For a machine learning algorithm to understand enough about human nature to not offend the parent of that child, you're better off waiting for AGI that can surpass human tact.


You don't think a black person trying to use a photo management app and having their children's photos miscategorized as gorillas is harmful?

We already know what biases people say offend them; there's no evidence that fixing them is harmful, but there's a non-zero risk that not fixing them is.

I feel like what I'm encountering is a conservative bias against changing the status quo, "social engineering", and the like. It seems people don't like deliberate, non-organic changes to the status quo (well, they don't tend to like organic ones either, like succeeding generations becoming, say, more sexually liberal).

Machine learning can create filter bubbles, echo chambers, and feedback loops, and people may attribute more weight to answers provided by machines than by people. So trying to keep machine learning from reinforcing, even more strongly, the current cultural biases that we're already seeking politically to ameliorate seems prudent and pragmatic.
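
A minimal sketch of the feedback-loop point, with entirely hypothetical numbers: if a recommender over-weights whichever topic is currently more popular (here, exposure proportional to clicks squared) and users click roughly in proportion to what they're shown, a small initial skew compounds over time.

    # Toy recommender feedback loop: a 55/45 starting preference runs away.
    clicks = {"a": 55.0, "b": 45.0}

    for step in range(1, 11):
        weight_a, weight_b = clicks["a"] ** 2, clicks["b"] ** 2
        exposure_a = weight_a / (weight_a + weight_b)  # over-weights the leader
        clicks["a"] += 100 * exposure_a                # clicks follow exposure
        clicks["b"] += 100 * (1 - exposure_a)
        share_a = clicks["a"] / (clicks["a"] + clicks["b"])
        print(f"step {step:2d}: topic A click share = {share_a:.1%}")

With a strictly proportional allocator the share would stay flat at 55%, so the runaway comes entirely from the over-weighting - which is the kind of dynamic the "reinforcing unfair bias" principle presumably has in mind.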


> It doesn't have anything to do with censoring your favorite conservative search result.

But that's not correct. That's exactly what it has to do with.

A big part of the philosophy of conservatism is to accept the world as you find it. Conservatives, at least in theory (not saying the Republicans are a great implementation of the philosophy), eschew large social engineering schemes, they eschew attempts to remould attitudes via manipulation of language and so on. These are all traits associated with the opposing end of the political spectrum. Think how important re-engineering people's thinking via language was in Orwell's 1984, for example.

So now we have Google and related AI researchers announcing that when an ML model learns the actual fact that most doctors are male, this is "bias" and it needs to be "corrected". This is Orwellian. It's not at all biased, it's accurate. But because of some vague, handwavy assumption that if AIs believe gender ratios are 1:1 in medicine then somehow ... via some obscure chain of action ... more girls will become doctors, the basic reality of the matter is justified in being inverted. Or possibly, more boys will choose NOT to be doctors. Quite why this outcome is better than the existing status quo is never explained with any rigour - it's ideology.

This is the very style of social engineering that conservatism rejects, on the grounds that it so often goes wrong or has unintended side effects. So whilst I am interested to see that Google is deciding to walk the walk here when it comes to AI and weapons, I nonetheless find their statement of moral principles to be quite chilling and frankly, it renders their most important products useless for me. Their take on this isn't news, but it's sad to see them doubling down on it. I do not wish to be subtly manipulated via AI driven translation or re-rankings into not believing things about the world that are true but upset radical feminists.

It's especially sad for me because I worked there for so many years.


Obtaining the correct categorization of something is not social engineering; it's returning a FACTUAL answer. I didn't ask the AI "are men more likely to be doctors", I asked "what is a doctor". Do you understand the difference? I don't expect it to tell me "Doctors are male individuals who..." Nowhere in my original post did I say the AI has to lie about gender ratios; I would also expect any AI to tell me the socio-historical-cultural perspective of the medical profession, and how it has changed over time. I am simply not interested in a neat little, static, steady-state view of something that conveniently leaves out context. Do conservatives want to exist in a bubble where no criticism of the status quo is seen?

If a machine learning algorithm learned a bad definition of conservative, one that cast them as crypto-racists, you'd want it corrected, wouldn't you? Even if a reading of conservative news sites' comment forums would likely give you exactly that impression?

> "Quite why this outcome is better than the existing status quo is never explained with any rigour - it's ideology."

Right, progressives are the only ones with ideology, conservative positions are arrived at by cold, hard logic?

Tell me why an African American slave boy, growing up in the 1800s, who learns the existing status quo that black people are property and white people are not, is a worse outcome? Clearly, it's a better outcome for white people of the era, so any explanation for why it might be preferable to alter it has to argue that the status quo wasn't good for black people, or that it wasn't good for white people for some presumably economic reason.

Maybe just maybe, the status quo isn't good for women? Maybe it would be good to ask them if it's ideology, or if changing cultural attitudes about what women are allowed to do, and capable of doing over the last 200 years has been a positive change in the status quo for them?

BTW, Conservatives don't reject social engineering, they just reject social engineering they disapprove of. Social conservatives around the world, in concert with religion, have sought to engineer human behavior with appeal to cultural and religious piety, and in many cases, winning laws that enforce such behavior. We've had anti-sodomy laws, anti-booze laws, anti-miscegenation laws, all of them enacted by social conservatism. And what do you call religious proselytizing, if not social engineering? Trying to spread memes and infect and convert more people into a new way of thinking.

Sometimes I feel that conservatives are against secular, humanist 'social engineering' the way Scientologists are against Psychotherapy, because it's competing in the same meme space.

Society is a dynamic equilibrium. It is constantly evolving, sometimes it evolves purely organically, in a spontaneous order, and sometimes there are clusters and movements that boil over, and change arrives by deliberate persuasion.

Google is a global, transnational company, that serves the entire world, 7+ billion people. It needs to reflect that diversity. And like I said, it absolutely cannot have AI that does stupid stuff like learn that black faces are gorillas.


> I didn't ask the AI "are men more likely to be doctors" I asked "what is a doctor"

I'm afraid you're obfuscating. The most important paper in question is this one:

http://papers.nips.cc/paper/6228-man-is-to-computer-programm...

It documents the researchers' "discovery" that word embeddings trained on all available text learn that most doctors are male and most nurses are female, along with many other relationships, like volleyball / football being a female / male analogy.

Word embeddings can't answer a question like "what is a doctor", even people would struggle to give a good answer to such a vague question. So they asked it specific questions about gender, namely, "if you ask for a gender relationship starting from doctor what do you get" (answer: nurse). And then they decided this was bias, and wrong, and should be edited out of the model.
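
For anyone who wants to poke at this directly, the analogy query the paper describes can be reproduced with gensim and a pretrained embedding. A rough sketch, assuming you've downloaded the GoogleNews word2vec vectors; the exact nearest neighbours depend on which vectors you load, so treat the "nurse" result as illustrative rather than guaranteed:

    # Analogy query over pretrained word embeddings: doctor - man + woman = ?
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True  # ~3.4 GB download
    )

    # "man is to doctor as woman is to ___"
    for word, score in vectors.most_similar(positive=["woman", "doctor"],
                                            negative=["man"], topn=5):
        print(f"{word:20s} {score:.3f}")

The debiasing the paper proposes then amounts to projecting out a learned "gender direction" from words that shouldn't carry gender, which is exactly the editing step being argued about here.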

So yes - if you asked such a model "are men more likely to be doctors" it would answer "no" although the correct answer is "yes".

I wonder if that changes your views?

> Right, progressives are the only ones with ideology, conservative positions are arrived at by cold, hard logic?

Both positions arrive at their conclusions via logic, but working forward from different assumptions and premises.

The difference is progressives are much more likely to try and impose change on the world top down, whereas conservatives are much more likely to leave the world be (note: attempts to remove top-down imposition of progressive views is often cast by progressives as equally equivalent "change", but it's not, likewise, not attempting to change old traditions is often cast as inherent support for them rather than a general aversion to imposing top-down change).

Your post is a great example of the dangers of this attitude:

> Maybe just maybe, the status quo isn't good for women?

I happen to think the status quo, where women are routinely handed jobs and money simply for being women (e.g. the whole 'women in tech' movement), is excellent for women!

But our views on this are both irrelevant because it's not what the argument is about.

The point is, do we want to build AIs that understand the world as it is today, or ones that have been given an understanding of the world as Googlers believe it should be?

You are very clearly arguing for the latter here, to the extent that earlier in your post you argued a good answer to the question "what is a doctor" would actually involve a lecture on the "socio-historical-cultural perspective". Who says users of AI give a crap about any of that? Maybe they just want to get a definition of the word doctor, without some AI trying to change their kids' ideas about what job they'd like to do along the way?

A Google AI that is constantly engaging in proxy social advocacy with me would be annoying as hell. A Google AI that doesn't but has a warped and distorted view of reality because its creators feel they're towering moral actors with a mandate to change the world on behalf of billions of people they never met? Even worse! I'd rather it was at least open about it.

Look at it this way. If Google AI was constantly subtly suggesting that all progressives were naive, hated America, that government intervention always failed and markets were the best way to do things, <insert random political stereotype here>, you wouldn't be very happy about it, would you? Especially not if the AI hadn't actually learned such things but such beliefs had been added later by libertarian programmers convinced that they'd make the world a better place by doing so.

> We've had anti-sodomy laws, anti-booze laws, anti-miscegenation laws, all of them enacted by social conservatism

I suspect there's a slight terminological difference here between libertarianism and conservatism (I've been meaning primarily the former).

But regardless, are you sure you aren't assuming that?

Let's take anti-booze laws. Prohibition in the USA was a bipartisan issue at the time and both Democrats and Republicans voted in favour of it, in fact more Democrats voted in favour than Republicans did.

The constitutional amendment that gave women the right to vote was introduced by a Republican. It took a long time to get passed (decades), but this was partly because - just like in the UK - the women's suffrage movement hurt its popularity through aggressive tactics and association with unrelated social policies, in particular with pacifism and refusal to join World War 1.

And as for religion, I'd note that the USA is probably the most conservative (or libertarian) country in the world and it's also famous for separation of church and state.


>Avoid creating or reinforcing unfair bias.

I recommend _The impossibility of “fairness”: a generalized impossibility result for decisions_[0] and _Inherent Trade-Offs in the Fair Determination of Risk Scores_[1]

[0] https://arxiv.org/pdf/1707.01195.pdf [1] https://arxiv.org/pdf/1609.05807v1.pdf
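
To get an intuition for the kind of impossibility result those papers prove, here is a small arithmetic sketch with made-up numbers: hold precision and recall equal across two groups that have different base rates, and the false positive rates are forced apart.

    # With equal precision (PPV) and recall (TPR) across groups, the false
    # positive rate is pinned down by the group's base rate p:
    #   FPR = TPR * p * (1 - PPV) / (PPV * (1 - p))
    def false_positive_rate(base_rate, tpr=0.7, ppv=0.8):
        return tpr * base_rate * (1 - ppv) / (ppv * (1 - base_rate))

    for group, base_rate in [("group A", 0.3), ("group B", 0.6)]:
        print(f"{group}: base rate {base_rate:.0%} -> FPR {false_positive_rate(base_rate):.1%}")
    # roughly 7.5% for group A vs. about 26% for group B

Equalizing the false positive rates instead would force precision or recall apart, which is the trade-off the papers formalize: unless base rates are equal or prediction is perfect, you have to pick which notion of fairness to violate.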


'Fairness' isn't a single optimal decision. It's a continuing debate, refreshed by new data and a refined awareness of consequences, especially those unanticipated and unseen.

Google isn't promising that they're done with the question of how to use data justly. They are promising that as the public debate continues over where the line is drawn and redrawn between fair use and abuse of data, they will be a willing participant in that debate, and that they're receptive to abiding by decisions that require compromise on their part. What more do you want?


> Avoid creating or reinforcing unfair bias.

This is very high up and is written in a way that would explicitly allow "fair bias". This means activists will have a free hand to use their positions at Google to enforce their vision of political orthodoxy.


A search engine with no biases whatsoever is useless. The whole point is to be biased towards articles that users find more useful, not to give a random sample of the Internet (whatever that means).

I'm sure there will be internal debate over what biases are good ones to keep and nobody gets a free hand. But as a policy, it doesn't restrict Google's options very much.


Bias and discrimination are foundational to information and decision systems. That's what they do: decide, choose, and filter.

Pichai's point is that such discrimination must be fair and societally beneficial.

The line is not clean, straight, or constant. But it provides a guiding principle to future decision makers and stakeholders.


It's true that you can't ever be perfectly fair, for every reasonable definition of fair.

However, you can easily mess up and be very unfair. So even though there is no "perfect", there is still a continuum of bad to good.


Your definition of "fair" is some other group's definition of "unfair". And I'm not even talking about pro- and anti-Nazis, I'm talking about more mundane leanings.


Not sure why you are being downvoted. This is a reasonable sentiment.

It's similar to people who want to lose their accent when learning a language. It's impossible to do; you can't not speak with an accent. The way to lose your accent is to pick a new one and begin emulating it.


> We believe that AI should:

>

> 1. Be socially beneficial.

> 2. Avoid creating or reinforcing unfair bias.

> 3. Be built and tested for safety.

> 4. Be accountable to people.

> 5. Incorporate privacy design principles.

> 6. Uphold high standards of scientific excellence.

> 7. Be made available for uses that accord with these principles.

While I like this list a lot, I don't understand why this is AI-specific, and not software-specific. Is Google using the word "AI" to mean "software"?


Great point. AI is not the only way to reinforce biases.


> AI applications we will not pursue [...] Technologies that cause or are likely to cause overall harm. [...] Weapons or other technologies

This statement will have zero impact on subsequent sensational headlines or posters here claiming Google is making killbots.


Well, now you are talking about philosophical questions.

- Is pursuing an AI that kills 10 bad guys but saves 20 good ones overall bad?

- Is pursuing an AI that doesn't kill anyone but pushes us to watch ads and lose our life on youtube overall bad?


We only call it AI - creating an autonomous ML system that is designed to kill anyone can lead to us all being killed by math and plastic - it's a singularly bad idea, so much so there is no scoring mechanism that justifies it - too much chance we end up with this -- https://youtu.be/9CO6M2HsoIA


We will end up with that. Do we want to be as helpless as the people in that video?


I've done work in this space, and my stereo cameras are about to go airborne on Air Swimmers (Bruce the Shark :-) ) in my house. In the movie in the linked document it's just looking around my office and showing parallax error. I have an app that lets it track, and I spent an afternoon trying to make it lose me - the only way I won was because I jumped from where I was, crashed into the toolchest, and fell to the floor. It actually tracked me successfully; what defeated it was the fact that I was using cheap servos, and it snapped an internal axle on the cheap yaw servo when I hit the toolchest. We are helpless, there is no way around it, and there is absolutely nothing we can do about it except realize our survival lies in not going down this road. https://www.linkedin.com/pulse/low-cost-high-volume-stereosc...


> - Is pursuing an AI that kills 10 bad guys but saves 20 good ones overall bad?

- Yes. If your philosophical development is to the point where there are clearly defined good guys and bad guys in your mind I suggest you read more.


Am I philosophically underdeveloped if I think it's pretty clear that 20 merchants and shoppers in a souk are the good guys, and the people with suicide vests on are the bad guys?


An argument that relies heavily on racial prejudice and emotional appeal is philosophically underdeveloped.


Or...ya know...we don't want those people to kill us or our loved ones. Maybe the suicide bombers are the ones with an underdeveloped philosophy.


A suicide vest isn’t racially prejudiced. Identifying those that are wearing them isn’t an emotional appeal.


The identification of a person wearing a suicide vest isn't a racial or emotional problem.

Declaring that a person wearing a suicide vest in an Arab market is inherently bad does play on racial prejudice and emotional appeal. It's impossible to separate that statement from a decade and a half of propaganda and the trauma caused by the numerous suicide attacks on civilians in the Western world.


> Declaring that a person wearing a suicide vest in an Arab market is inherently bad does play on racial prejudice and emotional appeal.

In what contexts is a person wearing a suicide vest in a market anything other than inherently bad?


Forget "inherently" good or bad. It's entirely possible that the guy in the vest isn't a morally horrible person; he may have been brainwashed since birth and genuinely believe that his actions will bring about a better world. Still, he intends to kill himself and as many other people as possible, and if I have the ability to cause the only death to be his, I'm doing that every time.


I understand the self-defense argument. Law enforcement and armies are trained to respond with violence to people with intent to kill. The passengers of United Airlines Flight 93 are heroes.

Killing "bad people" to save "good people" is not a self defense argument. The statement is too general. Is it okay to do medical testing on criminals if it speeds the advancement of medicines that saves law abiding citizens lives? We could have limits to only use people found guilty of the worst crimes and increase the legal burden of proof for criminals going into the medical testing program. It would decrease overall harm. It would kill bad guys to save good guys. Still, this practice is illegal for good reason. We need a better reason to kill someone than they are bad.


There was no argument - I was only expressing my feelings. And what racial prejudice is there?


61% of Americans have unfavorable views of Islam. [0] Islamophobia is prevalent across the western world.

[0] https://www.brookings.edu/blog/markaz/2015/12/09/what-americ...


Are you in that 61%, and that is why you assumed I was talking about an Islamic suicide bomber?


It is relatively easy to define good or bad though. Telling black from white isn't hard. The problem is all the shades in between (where the majority of the colours are).


> Telling black from white isn't hard.

Yes. That's kind of the problem.


His statement doesn't really depend on the definition of "good" and "bad", as the logic only requires that each good person be at least half as valuable as a bad person, not more valuable.


Mathematically true, ethically dubious.


You actually are implying that there are no bad guys? Under no standard of humanity would ISIS or the Taliban be confused for good guys. There are a lot of grey areas in the world for sure, but acting like badness is just a matter of opinion is ridiculous. There is evil in the world.


> There is evil in the world.

The argument here isn't that people have the right to defend themselves. The argument here is that a person wearing a suicide vest is bad and deserves to die for being bad. They are evil. This argument has been used by governments and religions throughout history to justify killing, often by both sides of a conflict. It's a worn-out argument, old and tired. Who is the judge of what is evil? The only long-standing definition of evil relies on a religious context; anything else is poorly defined, not widely agreed upon, or hard to test against.

'Evil-skeptics give three main reasons to abandon the concept of evil: (1) the concept of evil involves unwarranted metaphysical commitments to dark spirits, the supernatural, or the devil; (2) the concept of evil is useless because it lacks explanatory power; and (3) the concept of evil can be harmful or dangerous when used in moral, political, and legal contexts, and so, it should not be used in those contexts, if at all.'[0]

[0] https://plato.stanford.edu/entries/concept-evil/


Ask the question: US or THEM? Those who want to defend the things they love from THEM will always choose US over THEM. Simple. One can argue that "it doesn't have to be that way", but sadly there are millions of years of biology proving that incorrect.


The argument that we should defend ourselves or that we should kill our enemies does not require the existence of evil.


Can you really build an AI that can identify bad guys? What if it's a good guy in the wrong uniform?


I think the argument is more so that the military killing 'bad guys' will protect 'good guys' in America long-term, if not civilians local to the country in question.

Not commenting on whether that's accurate; just trying to better articulate what I believe is the above poster's philosophical argument.


Then we can just do it Russian style. Kill everyone.


>-Is pursuing an AI that kills 10 bad guys but saves 20 good ones overall bad?

It's irrelevant. The swordsmith doesn't get to choose who gets cut down.

Does anyone think it's possible to build autonomous weapons that can't be turned upon arbitrarily chosen populations?


Google promised not to kill. Killing X to save 2X is still killing.

Google did not promise to employ utilitarian ideals when deciding if the outcome of a new contract or product is good or bad. They said, "We will not directly aid in killing or harming people."

However, I suspect this vow will invite challenges that question the redistribution of personal data Google aggressively collects to other parties, both advertisers and Orwellian governments. But I'm glad they spoke out nonetheless, since it invites closer scrutiny and upbraiding of the company by eagle-eyed social critics, who are all too often silent or ignored on these issues.


I don't know about you, but I know that I do not want an AI "solving" the trolley problem in real world situations.


It's unavoidable. Let's say a self-driving car realizes that it is on an unavoidable collision course, but changes in its behavior (steering) will change where it collides. That's the trolley problem.


If I have understood it correctly in most self driving cars the actual AI/ML part is recognition of the scene/area, not deciding the action or outcome of such a situation.

Either way this is one of the reasons I'm not wild about the crazy self-driving race.


An autonomous car must decide at every moment whether to brake, accelerate, swerve left, swerve right, or do nothing. Otherwise the car is not very autonomous. :)

If it senses an unavoidable collision ahead, it will need to choose from possible reactions that each have different consequences for the various people and property involved. Someone has to program or train the car in what to value when choosing its reaction. The car will be biased, implicitly or explicitly, by the programmer's values.
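
To make that concrete, here is a toy sketch (hypothetical action names, weights, and outcome numbers; nothing resembling a real AV stack) of how the programmer's values end up baked into a cost function:

    # Toy sketch: value-weighted action selection for an autonomous car.
    # Every number here is a made-up placeholder; a real system would get
    # predicted outcomes from its perception/prediction/planning modules.

    ACTIONS = ["brake", "accelerate", "swerve_left", "swerve_right", "do_nothing"]

    # The programmer's (or training process's) values, expressed as cost weights.
    WEIGHTS = {"occupant_injury": 10.0, "pedestrian_injury": 10.0, "property_damage": 1.0}

    def predicted_outcome(action):
        # Placeholder: pretend the planner predicts expected harm per action.
        table = {
            "brake":        {"occupant_injury": 0.2, "pedestrian_injury": 0.1, "property_damage": 0.5},
            "accelerate":   {"occupant_injury": 0.6, "pedestrian_injury": 0.4, "property_damage": 0.2},
            "swerve_left":  {"occupant_injury": 0.3, "pedestrian_injury": 0.0, "property_damage": 0.9},
            "swerve_right": {"occupant_injury": 0.3, "pedestrian_injury": 0.5, "property_damage": 0.1},
            "do_nothing":   {"occupant_injury": 0.8, "pedestrian_injury": 0.7, "property_damage": 0.0},
        }
        return table[action]

    def choose_action():
        # The trolley problem hides inside WEIGHTS: whoever sets them decides
        # whose harm counts for how much.
        def cost(action):
            outcome = predicted_outcome(action)
            return sum(WEIGHTS[k] * outcome[k] for k in WEIGHTS)
        return min(ACTIONS, key=cost)

    print(choose_action())  # "brake" with these made-up numbers

Change the weights and the chosen action changes; that's the implicit or explicit bias I mean.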


If you make the car flip a coin (perhaps a weighted coin, on the strength of the belief in its sensor data), you avoid providing a solution to the trolley problem.

Alternatively, you can have the people vote on a self-driving car referendum, which specifies a DMV-enforced policy for the trolley problem.

As a society we vote on such matters all the time. Nothing new here.
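
Roughly, the "weighted coin" amounts to something like this (toy numbers, purely illustrative):

    # Toy sketch: pick a maneuver at random, weighted only by sensor confidence
    # in each predicted outcome, so no explicit judgment about who gets hit is
    # encoded. Numbers are invented for illustration.
    import random

    candidates = {
        "brake": 0.9,          # confidence the sensors are right about this outcome
        "swerve_left": 0.4,
        "swerve_right": 0.7,
    }

    actions = list(candidates)
    confidences = [candidates[a] for a in actions]
    print(random.choices(actions, weights=confidences, k=1)[0])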


Flipping a coin is still a conscious decision by the programmer.

MIT created an online survey called the Moral Machine asking people to choose who a car should hit in various (randomized and user-created) scenarios. Data like that could be used to train a "fair" AI. It's an interesting thought experiment but the survey's scenarios are unrealistic and include information that a car AI would not have available, e.g. person A is homeless and has no children while person B is a wealthy executive in good health.

http://moralmachine.mit.edu/


How is that better than a human solving it?

The trolley problem is unsolvable; that's the point.


The point is that it is a philosophical problem, not a technical one. A human will see that it is unsolvable, whereas a machine will probably optimize for unethical or strictly cost-benefit payoffs without considering the philosophical parts.

I'm not saying the decision will be better when a human makes it, I'm saying it will be more humanely considered.


- Yes: directly responsible for death. AI is likely not the only means of stopping those 'bad guys'.

- Depends: how humans choose to spend their lives is their own issue. Personal responsibility dictates that it is each individual's choice how to live their life. YouTube is not some resource needed for life.


> Will Google's AI push the fat man onto the tracks?


Well, "Don't kill anyone or commit any human rights atrocities" isn't exactly a high bar they're setting for themselves - especially when the list of no-nos is immediately followed up with "we're still cool with supporting the military-industrial complex in other ways though". The article isn't really a list of principles so much as a list of PR disasters they'd like to avoid.


In the name of national security, Google may lie, or may even have to.


They never have to lie. But they may have to remain silent.


Some time ago, on π day, I became aware of the following. Sadly, I totally agree with the "trend" :(

"If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."

-- Stephen Hawking


Under "AI applications we will not pursue", its telling that the first rule basically allows them to override all the subsequent ones - "where we believe that the benefits substantially outweigh the risks". "We believe" gives them a lot of leeway.


"We believe that AI powered nukes will lead to world peace when nobody dares go up against them, therefore we're going ahead with the project."


> Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

> We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.

I wonder if this is an official response to the people at Google[1] who were protesting[2] against Project Maven.

[1] https://www.nytimes.com/2018/04/04/technology/google-letter-...

[2] https://static01.nyt.com/files/2018/technology/googleletter....


I concur. This post seems to be specifically targeting the negative perception the resignations were generating.


Is there any room in your cynicism for any other view?

For example, maybe Google leadership did hear the complaints, then realized what was happening and what could happen, and decided that something needed to be done to create and uphold values.

I don't approve of defensive cynicism, I don't believe it makes a better world. In fact, I think that defensive cynicism is the reason why the world sucks. Let me guess, you don't vote because "all politicians are liars"? How's that working out for you?


Sorry for your pain. I understand this realistic approach bothers you a bit. As it happens, I expect decency from individuals, not corporations. My skepticism is justified by Google's post itself, since at the end of all the words, absolutely nothing has changed. If Google were truly listening, they would stop working on the project immediately. Therefore, rather than outright contrition, this post reads more like "Sorry, not sorry."

Politicians follow Hotelling's Law. It doesn't actually matter who you vote for as long as you vote. The only thing that changes is the flavour, and that's a matter of personal preference.


> For example, maybe Google leadership did hear the complaints, then realized what was happening and what could happen, and decided that something needed to be done to create and uphold values.

I don't find the idea of Google's leadership being as out of touch and out of control as that would require to be more comforting than the idea that they are responsive to bad PR.

> Let me guess, you don't vote because "all politicians are liars"? How's that working out for you?

Let me guess, you vote for the candidate with the worst track record to prove that you aren't cynical about them. How's that working out for you and Mr. Trump?


Also on this topic, from students who would interview at Google. It's important to hear how the upcoming generation, who would actually be doing this work, feels.

https://gizmodo.com/students-pledge-to-refuse-job-interviews... [Students Pledge to Refuse Job Interviews at Google in Protest of Pentagon Work]


From my observations at my college, Google is starting to become the new "Microsoft". It is still seen as well-paying and nerdy, but it is no longer new and exciting. Facebook is taking over Google's spot as the well-established but still exciting tech company, while Airbnb seems to be taking FB's spot as the new and exciting tech company.


I have rather the opposite impression of Facebook. They seem sort of tired and are moving onto the same level as Uber in terms of ethical conduct.


Facebook's business seems to be suffering in various dimensions...

The key figure, minutes of time spent on the site in total, is being warped to include Instagram and Whatsapp, trying to hide the fact that Facebook itself is dying.


I've been asking myself this question for over 20 years: who are these people that click on ads anyway?

Ads are inherently going to be the opposite of Google's values, yet Google depends on them for the vast majority of their revenue. They show you some search results in line with their values, and if you can't get to the top of that "intrinsically", you buy ads or SEO. The folks that use that system to exploit the least intelligent win here, and Google takes a share of the profit.

Based on my Google search results in the recent past, Google isn't doing a good job of making sure the "best" websites (by my own value system, of course) make it to the top. I find myself having to go into second and third page results to get legitimate information. I'm seeing pages of medical quackery that "sounds good" but isn't based on science when I try to find diet or exercise advice.

As technology becomes more democratic, more people will use it. That means that the people that spend more time trying to sell you shit are going to win, because they're the ones that are willing to reverse-engineer the algorithm and push stuff up to the top. They add less value to society because they're spending all their time on marketing and promotion.

I wish I knew how to solve this problem. By imposing morals, Google "bites the hand that feeds".


The US government should consider accelerating the breakup of the Google monopoly, so that "...we understand there is room for many voices in this conversation" becomes more meaningful.


As much as I appreciate the conflict of interest here between doing good, making money, helping the US government do its thing, and simply chickening out for PR reasons; I'd like to provide a few sobering thoughts. AI and misappropriation by governments, foreign nations, and worse is going to happen. We might not like it but that cat has long been out of the bag. So, the right attitude is not to decline to do the research and pretend it is not happening but to make sure it ends up in the right hands and is done on the right terms. Google, being at the forefront of research here, has a heavy responsibility to both do well and good.

I don't believe Google declining to weaponize AI, which, let's face it, is what all this posturing is about, would be helpful at all. It would just lead to somebody else doing the same, or worse. There's some advantage to being involved: you can set terms, drive opinions, influence legislation, and dictate roadmaps. The flip side is of course that with great power comes great responsibility.

I grew up in a world where 1984 was science fiction and then became science fact. I worry about ubiquitous surveillance, inescapable AI-driven lifetime camera surveillance, and worse. George Orwell was a naive fool compared to what current technology enables right now. That doesn't mean we should shy away from doing the research. Instead, make sure that those cameras are also pointed at those most likely to abuse their privileges. That's the only way to keep the system in check. The next best thing to preventing this from happening is rapidly commoditizing the technology so that we can all keep tabs on each other. So, Google: do the research and continue to open source your results.


@jillesvangurp: I am arguing in my post above that there is a solution.

Solution is in 'fragmentation of power'. Yes it has its costs, but it is just a much better model.

Be it economic fragmentation of power (via de-monopolization), or political (via explicitly enumerating federal powers, and then splitting the rest for states to manage), or moral authority power (by ensuring multi-cultural, multi-view, no-moral-police environment).

Google cannot remain the monopoly it is. And the US government has to accelerate its breakup.


This was basically the same argument the Zentrumspartei made for voting for the Enabling Act in 1933.


What was their argument? Do you have a link?


Reminded me of their first principle, and how well they did with that one: "Don't be evil"


Pretty well, all things considered.


3 antitrust cases in the EU (Shopping, Android, and AdSense), and a couple of FTC antitrust cases in the US (one of which Eric Schmidt lobbied away through Obama) say otherwise.

https://www.theregister.co.uk/2016/08/18/google_had_obamas_e...

If that's supposed to be "our best", we're in trouble.


Antitrust isn't about committing a crime so much as competing so well that the company sucks the oxygen from the room. Becoming a monopoly isn't evil. It's an excess of success.

It's the responsibility of government antitrust law, not Google, to 'un-distort' the playing field and resurrect competition / opportunity. No one expects a corporation to voluntarily give away marketshare to ensure it doesn't run afoul of antitrust law, just because the company also vowed not to be evil. A negotiated settlement with regulators is exactly where this kind of matter should lead; no crime done.


Yes, being a monopoly isn't evil; leveraging a monopoly position is.


Elaborate?


Well I’m guessing everyone will have a slightly different viewpoint, but my use of google services over the last 15? years has not had me feeling they’ve done anything evil to me.

Yes I’m sure they have a huge swell of data about me, but so far the use of that data has not caused me any suffering, or pain, or misfortune etc.

“Don’t be evil” is quite a poignant statement. If they were to say, “don’t do bad things” then I wouldn’t have quite the defence. But no, for at least me, a sample size of one, they have not been evil.


They’re only fighting the EU to maintain their horizontal integration against allegations of unfair competition.


It's good that they're openly acknowledging the misstep here. However, I wish that the "will not pursue" section got the same bold-faced treatment as the one above it.

It seems appropriate at this point for industry leaders in this field, and governments, to come together with a set of Geneva-convention-like rules which address the ethical risks inherent in this space.


It certainly leaves the door open. Reminds me of that saying that no ethically trained software engineer would write a DestroyBaghdad procedure, but would write a DestroyCity procedure to which Baghdad could be passed as a parameter.


> Technologies that gather or use information for surveillance violating internationally accepted norms.

What does that even mean? Internationally accepted? By what nations and people groups? I’m pretty sure China and Russia have different accepted norms than Norway and Canada - which ones will you adhere to?


> We want to be clear that while we are not developing AI for use in weapons...

we will be developing AI for things that have weapons attached to them. We hope our lawyerly semantics are enough to fool you rubes for as long as it takes us to pocket that sweet military money.


reminds me of the joke

>"It should be noted that no ethically-trained software engineer would ever consent to write a NukeChicago procedure. Basic professional ethics would instead require him to write a NukeCity procedure, to which Chicago could be given as a parameter."


So was the “Don’t be evil” principle or mantra that we’re all disappointed about documented in a blog post? For some reason I thought it was on a page like this: https://www.google.com/about/our-commitments/

Either way it’s just a statement on a webpage which has all the permanence of a sign in their HQ lobby. It’s going to be hard to convince people that statements like this from a Google, a Facebook, or an Uber really mean anything — especially long term.

Will their next leadership team or CEO carry on with this?


Don't be evil is on the Code of Conduct:

https://abc.xyz/investor/other/google-code-of-conduct.html


Pretty rich for them to claim privacy is important when all of this technology is based on funneling your private data straight to them for storage and processing.


But how? Let's assume I personally offer artificial intelligence services. So I provide some APIs where my customers upload training and testing data, and I return a trained ML model. I do not know who uses my service or what they are doing...

Furthermore, if I ban the military, then another company could do it for them. So would every customer have to explain their activities?


FTA:

> As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors: [...] Nature and uniqueness: whether we are making available technology that is unique or more generally available

Presumably this means they would allow the US military to use their cloud services like any other customer. This is almost certain not to happen because of the classified nature of their data.

Also, this is meant to guide how Google decides what to develop, not (AFAICT) meant as a terms of service for customers. Though I bet Google still reserves the right to block accounts it decides are using its platform in a way they think is immoral.

Disclaimer: I work for Google.


Banks are required to do this for at least money-laundering purposes. It's called Know Your Customer.


This likely took careful consideration and deliberation among a number of people. Google should be commended for the effort.


What do you think about the following potential additions?

1. "Pursue legislation and regulation to promote these principles across the industry."

2. "Develop or support the development of AI based tools to help combat, alleviate, the dangers noted in the other principles in products developed by other companies and governments."


At least they are starting the conversation. I'd be much more comfortable with principles of design and implementation in addition to outcomes. For example, transparency is essential. Also:

5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Why not "give people control over their privacy and over their information"? That's a commitment to an outcome. "Incorporate ... principles", "give opportunity", "encourage", and "appropriate transparency and control" are not commitments. Google seems to be hedging on privacy.


They're only starting it because of the internal backlash from employees and, more importantly, the external backlash.


The principles state that they will not make weapons. However the latest report I’ve seen states that their current contract for the military ends some time in 2019. [1]

So while Google says it will not make weapons, it seems that for the next 6-18 months it will continue to do so.

Does anyone know when in 2019 the contract expires? It seems odd to come out with a pledge not to make weapons while continuing to make weapons (assuming that is what they are doing).

(Full disclosure, I am a contractor at an Alphabet company, but I don’t know much about project Maven. These are my own opinions.)

[1] https://www.theverge.com/2018/6/1/17418406/google-maven-dron...


I believe it continues through March 2019.


Google: We take Pentagon contracts to track people's location with our AI. That's so bad.

Also Google: We will totally use our AI to 'legally' track a single mom that clicked a fine print EULA once while signing into our app. That's totally fine. It's different mmk?


This comment comes off as intentionally disingenuous to me.

In the latter case, it's about serving up ads.

In the former case, it's literally about assassinating people from the sky.

They really are different.


Both can be used for malicious prosecution.


Oh and if you want Google Maps to retain your search history, you have to turn on Global Search History across all Google products. It's impossible to keep a local file on your phone that would have the history in it just for that app. Totally impossible. That data needs to be sent to us.


Which we forgot to mention: when you turn that on, we begin recording your location history.


>At its heart, AI is computer programming that learns and adapts

No, that's machine learning. AI is intelligence demonstrated by machines, and it doesn't necessarily mean that it learns or adapts.


Luckily no one needs to worry about Google ever creating advancements in AI (they can't; they lack the required skillset). Google is the modern-day IBM, and AlphaGo is just another Deep Blue. I wonder when Google will make a gimmick like Watson. I guess Duplex is the beginning of it. It's amazing to see how many people were impressed by that. Then again, the tech scene lacks the scientific rigour that is required for spotting breakthroughs.


Applications they will not pursue include those "that gather or use information for surveillance violating internationally accepted norms." That's some fancy gymnastics there, Mr. Pichai. Well played.

I was wondering how or if they were going to address this. It saddens me to see that Google considers collecting as much data as possible about all its users to maximize ad revenue an international norm. It saddens me more to see that they're correct.


Didn't Google have a motto of "Don't be evil," and then new management retired the saying? What's stopping that from happening again in this case?


No, it's still in their code of conduct.


Great, another piece of "Don't be evil" with a new coat, and they can ditch it whenever they feel powerful enough to ignore society's feedback.

Such a statement absolutely relieves the pressure coming from the public, and hence from lawmakers. Can we make sure big companies are legally accountable for what they claim to the public? Otherwise they can say whatever persuades people to be less vigilant about what they are doing, which is deceptive and irresponsible.


> 4. Be accountable to people.

> We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

YouTube moderation and automated account banning, combined with the inability to actually get in contact with a human, show that they have a long way to go with this principle.


That is going to be a tough one, and possibly even impossible given where current ML tech is heading. A laudable goal, but taking a complex system with a training time of many GPU-years and asking it how it came up with the answer basically nets a very large pile of numbers (weights) tied together in a complex multidimensional relationship that we just plain can't follow outside of the system. Right now the practical focus is on trying to stop feeding the systems biased data. Your example is spot on, save for the 'not able to talk to someone' part, which is just googz being too aloof and too cheap.
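
Back-of-the-envelope, even a smallish fully connected net is already an enormous pile of numbers (layer sizes below are arbitrary examples, not any particular production model):

    # Rough parameter count for a toy fully connected network.
    layers = [150528, 4096, 4096, 1000]    # e.g. a flattened 224x224x3 image input

    params = sum(n_in * n_out + n_out      # weights plus biases per layer
                 for n_in, n_out in zip(layers, layers[1:]))
    print(f"{params:,} parameters")        # ~637 million numbers, none of which
                                           # individually explains a decision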


Trust is like a mirror: you can fix it if it's broken, but you'll always see the crack in that motherfucker's reflection.


I was kind of reminded of Asimov's Three Laws of Robotics while going through the Principles, especially the 7th one.


3. Be built and tested for safety.

4. Be accountable to people.

6. Uphold high standards of scientific excellence.


> Technologies that gather or use information for surveillance violating internationally accepted norms.

I guess Google's policy of sucking up any and all data doesn't go against internationally accepted norms.

This entire article reads like BS if you think about what Google actually does.


This is pretty weak tea. It seems to completely justify working on anything, as long as the tiny part that Google engineers touch is software, and they aren't personally pulling triggers.

> 1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

Is this "We have solved the trolley problem"?

Benefits to whom? US consumers? Shareholders? Someone in Afghanistan with the wrong IMEI who's making a phone call?

Without specifying this, this statement completely fails as a restraint on behavior. For an extrajudicial assassination via drone, is 'the technology' the re-purposed consumer software to aid target selection, or the bomb? Presumably the latter in every case.

> 2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

This leaves the vast majority of military applications in scope. By this definition, Project Maven (the cause of the resignations/protests) meets the criterion of not "directly facilitat[ing] injury to people". It selects who and what to cause injury to at lower cost and accuracy, to scale up the total number of causable injuries per dollar.

> 3. Technologies that gather or use information for surveillance violating internationally accepted norms.

Google set the norms for surveillance by being at the leading edge of it. It's pretty clear from Google's positioning that they consider data stored with them for monetization and distribution to governments completely fine. Governments do, too. And of course, "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."[0].

> 4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

It's difficult to see how this could be anything but a circular argument that whatever the US military thinks is appropriate, is accepted as appropriate, because the US military thinks it is.

The most widely accepted definitions of human rights are the UN's, and the least controversial of those is the Right to Life. There are legal limits to this right, but by definition, extrajudicial assassinations via drone strike are in contravention of it. Even if they're Googley extrajudicial assassinations.

[0]: https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmid...


Was going to say much the same thing, thank you. You said it much better.


Love this leadership from Jeff Dean and the team at Google AI. Technology can be an incredible lever for positive change, but it can just as easily be a destructive force. It's always important to think in a principled way about how to ensure the former is the case and not the latter.


I wish they would define and clarify what "harm" means.


AI can and will be used to cause harm. I hope this doesn't put the US at a huge disadvantage against other nations like China, where the government has more control over and access to AI.


> Avoid creating or reinforcing unfair bias.

AI will likely reflect the bias of its training set, which likely reflects the bias of its creators. So, is it fair to say that AI will be biased?


There is definitely a risk of machines learning to reinforce existing but undesirable bias in real-world training data. There is research into documenting forms of bias so they can be recognized and countered when selecting training data.

https://sloanreview.mit.edu/article/the-risk-of-machine-lear...

https://www.entrepreneur.com/article/279927
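
As a deliberately crude example of the kind of pre-training check that line of research suggests, here is a sketch that flags when the positive-label rate differs sharply across groups in a training set (field names, data, and the threshold for concern are invented for illustration):

    # Crude bias check: does the positive-label rate differ across groups
    # in the training data? A gap is a flag to investigate, not proof of
    # unfairness; the disparity may or may not be legitimate.
    from collections import defaultdict

    def label_rate_by_group(rows, group_key="group", label_key="label"):
        counts = defaultdict(lambda: [0, 0])   # group -> [positives, total]
        for row in rows:
            counts[row[group_key]][0] += row[label_key]
            counts[row[group_key]][1] += 1
        return {g: pos / total for g, (pos, total) in counts.items()}

    training_rows = [
        {"group": "A", "label": 1}, {"group": "A", "label": 1}, {"group": "A", "label": 0},
        {"group": "B", "label": 0}, {"group": "B", "label": 0}, {"group": "B", "label": 1},
    ]

    print(label_rate_by_group(training_rows))   # roughly {'A': 0.67, 'B': 0.33}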


The same AI tech developed for "search and rescue" can be easily re-purposed for "search and destroy". How would Google prevent that from happening?


As someone who has worked in the defense industry his entire career (and served in the Army before that), I find the general tone of most of these comments - in particular the ones coming from supposedly loyal American citizens - disturbing (not to mention insulting). Almost makes me wish we'd actually institute mandatory national service.

That said, I'd love to work on ML/AI related defense projects. Thanks to Google, more of this type of work will surely be thrown over to the traditional defense contractors - so maybe I'll get that chance, eh?


Humanity is racing ever faster to craft its own replacement as a species, and we need to acknowledge this as our finest gift imaginable ... the cat is out of the bag on AI, and no amount of corporate doublespeak can shed responsibility for any organization that employs armies of people who then freely spread these skills ... passing the torch to that which runs at light speed, is free of the limits of time, and self-evolves its own hardware and software can only be something we collectively should be proud of, not afraid of ... rejoice as we molt and fly into the infinite now


>can only be something we collectively should be proud of not afraid of

If AI is used for mass murder of the human species, should we be proud? Humility, rather than hubris, is existentially important when it comes to wielding the most extreme power humanity has ever known


How could something that can easily be added as a plugin replace us?


AI applications we will not pursue

Technologies that gather or use information for surveillance violating internationally accepted norms.

They already failed.


It's interesting that the sections discussing 'privacy' and 'accountability to people' contain the least information.


In a way, the engineers who quit Google had some part in this success. Would it be unwise for Google to reach out to them?


I do not know, really... if not them, someone else will, however. Google has a competitive advantage (they can hire and pay well the smartest minds on Earth) and is letting it go? EDIT: Going to be even more controversial, but it needs to be said that Google just can't stay neutral here imho; they either work for autonomous killing machines or against them, in order to preserve their market position and brand.


I think Google's point isn't that it can't be done, or even that it shouldn't be done, but rather simply they won't do it.


Ok but what if it does not make sense from a business perspective in the mid or long term?


If Google renews defense contracts, I'm going to simply pull the plug, and I suspect many more will follow. Changing your email address is hard, but the bulk of the task will be over in a few hours.

If financial sense is the only reason, then still yes. Why? Google works for the military, gets a bad rep, and usage plummets. It's not unheard of.

Of course they might renew the contracts secretly (and I'm afraid they might) but secret work is more expensive and hard to keep secret for long at large scales.

Also, looking at the very long term: making killbots creates instability, which creates war, which damages the purchasing power of their users, which creates less profit for the shareholders.


That begs the question.


If I don't build these gas chambers, somebody else will, so I might as well pocket the money.

You don't see a problem with that position?


I have no competitive advantage to take care of, though.


Questions of competitive advantage and profitability should likely be taking a backseat to concerns about autonomous killing machines.


Going to be even more controversial, but it needs to be said that Google just can't stay neutral here imho; they either work for autonomous killing machines or against them, in order to preserve their market position and brand.


Ah, yes, extrapolate directly to the holocaust.


It extrapolates pretty well, and that's the problem, because if you stop early at merely "world (political and economic) dominance", everything should be fine, right?


The military is using open source software to sort images, with consulting help from Google. No killbots, no acts of war, just doing the only thing that machine learning has any practical use for.

Science fiction writing is hard. I don't know why all of you are doing it for no pay. We can't judge Google for what we think they might do. And so far, they're just using ML in the real world.


All corporations are amoral. They exist to maximise the profit of their shareholders. This is marketing. It is a nice sounding lie. If it were authentic, the last few months wouldn't have happened at Google. For me, it only makes it worse. Because they think we are suckers. Actions speak louder than words. These words ring hollow.


I think the avoidance of harm is fundamentally flawed. Creation necessitates destruction. At times safety necessitates assault. Violence cannot be eradicated; we can only strive to maximize our values.

Anyone who claims to be non-violent has simply rationalized ignorance of their violence. See: vegans. (spoken as someone who eats a plant based diet)


This seems like a PR stunt, but at least it's something. Nothing prevents them from reverting these newly found principles over time... similar to removing "Don't be evil" from their mission, which kinda would have covered this. Google's goal is to make money, and that's what this is about.


Just follow the three laws of robotics and you'll be fine.


Google has no morals or principles. How could it possibly have those things? How can a global advertisement corp not be evil? It doesn't make any sense!


"We're just going to put the tip in...."


Off topic: is there a way to tag all the stakeholders of the main company/government mentioned in the title/article?


1) money, 2) profit, 3) revenue.


Don't be Skynet?


"Those are my principles, and if you don't like them... well, I have others." Groucho Marx


“Hey Google, give me three concrete examples of fair bias.”


Bias has a very broad meaning in AI work. See https://en.wikipedia.org/wiki/Bias-variance_tradeoff

Most AI/ML/Statistics procedures use Occam's razor: prefer the simplest explanation of the data. That's a bias (toward simpler explanations), but not "unfair" to anyone.
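
For example, here is a toy illustration of that kind of benign bias: ridge regression deliberately shrinks coefficients toward zero (a simpler model) in exchange for lower variance. The data below is randomly generated, nothing more:

    # Ridge regression as an explicit bias toward simpler explanations.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 5))
    true_w = np.array([2.0, 0.0, 0.0, 0.0, 0.0])   # only one feature matters
    y = X @ true_w + rng.normal(scale=0.5, size=20)

    def ridge(X, y, lam):
        # Closed form: w = (X^T X + lam * I)^-1 X^T y
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    print(ridge(X, y, lam=0.0))    # ordinary least squares: noisier coefficients
    print(ridge(X, y, lam=10.0))   # shrunk toward zero, i.e. the "simpler" model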


Oh, so they mean the boring technical definition that nobody reading the page cares about?

We can trust that Google will try to build tools that are effective for Google's business goals without it being proclaimed in a statement of principles.


Spam detection. Driving directions. OCR.


Maybe people who had certain training or education will perform better in a particular job function. There might be biases affecting who gets into a school, but from a set of Stanford grads, maybe it could be learned that certain courses, professors, or cohorts produce ideal candidates.

[Note: I work at Google but have no insight into AI projects]


New hot job title: AI ombudsman.


This is Google you are talking about. It's more like hot new script: ai_ombudsman.py


"Our AI is so powerful it needs special rules!" is pure marketing.


The fact that “make money” isn’t on the list means that you can’t believe any of it.

Also point 5 is an outright, blatant falsehood given Google’s track record and indeed entire business model.


This is clearly the only reasonable response to these 'principles'. Google must have a strong employee presence here.


"Principles"


You've been posting a lot of unsubstantive comments. Could you please try to increase the amount of information?

https://news.ycombinator.com/newsguidelines.html


Sure thing; going forward I'll make sure to, and sorry for breaking the guidelines a bit.

I was thinking I should probably take a break from HN until the current news cycle's run its course (way too many things happening that I'm tempted to reply with one-liners to; that works on Twitter, not here), and should have gone with my intuition; I will now.



