The great news is that this is news. Google did a thing they thought was fine. They drew a line in the sand and said we won't build a box that kills.
Employees said that's not good enough. Because you're building a box that targets and makes killing easier.
I'm thrilled that this discussion is coming up and leaking out into the rest of the world because we need to talk about it.
Because the smart, capable people here need to think about our ethical obligation not to create terrible weapons. I assure you AI tracking in a weapon system is a terrible weapon. The test is easy. Imagine yourself looking at the sharp end of the weapon. Does it feel like you're in a nightmare? That's a terrible weapon.
Granted, smart, capable people know not to misuse a terrible weapon, but the final act of creating something is relinquishing control over it. And I assure you the world is full of people who cannot or choose not to consider the repercussions of their actions. A weapon, once created, will be misused.
We all therefore have an obligation not to create such things.
Turn your attention towards creations which create the world you want to live in and never work on terrible weapons.
I hope that this comment causes a response. I hope that it kicks off a debate. I trust that my argument is sound and I look forward to exploring the nuances of it with all of you.
Ultimately I think it was inevitable that Google would become so heavily involved in the military (https://wikileaks.org/google-is-not-what-it-seems/), but what bugs me is how intertwined this military-industrial complex has become with our lives.
It is at best misleading and at worst incredibly dishonest for Google to portray itself as it does, while at the same time using people to contribute data that goes into targeting and killing human beings. I am hoping more people begin to consider the ethics of supporting such a company.
I'm glad that Google's internal culture is healthy enough for this. I wonder what the internal conversations about affecting politics, issues of privacy, etc. look like.
However, I really think it would be better if these debates happened somewhere that the public could see. It has to be a biased debate when every participant is an employee of the company.
> I'm glad that Google's internal culture is healthy enough for this. I wonder what the internal conversations about affecting politics, issues of privacy, etc. look like.
Stymied, I suspect. Lest they end up like James Damore.
Stymied in what sense? The cultural power brokers at Google have allocated a tiny sliver of lenience and they encourage vigorous debate within the confines of this narrowly-defined freedom. Similarly, this logic extends outward into society.
The message sent out from Google's cultural authorities is this -- if you fall outside of the progressive chauvinist mindset then you are persona non grata by default. Repent, sinners, or be excommunicated.
Well, I think this is why Google has been so good at keeping their employees within the margins of Google’s business interests. It creates a vibrant illusion of consequential debate and “resistance,” while keeping all the real power in the hands of a few executives.
Don’t get me wrong, Google employees have ways to put a stop to things like this project. But they’re not to be found in these pseudo-battles of their internal forums. Employees have to organize outside of company channels, withdraw their labor, or quit like those who were already brave enough to do so.
Agreed. And let's not forget about Google's current core business, which always reminds me of that Clockwork Orange cinema scene: reprogramming your brain by forcefully feeding it images, thoughts and other stimuli advertising whoever pays them. Nothing is black or white, but that's already far into "evil" for me. Yet every spark of subversive action is good to take.
This is absurd. The true power of democracy is freedom to speak, to disagree, to challenge. Google does this better than any big company I've ever heard of. I literally challenged the CEO in public on the question of the housing crisis in the Bay Area. He responded and the conversation continued. There have been a dozen times in the last several years when healthy internal conversation at Google has changed policy. Name one time that happened at IBM.
The true power of democracy is...democracy. Google doesn’t have a labor union or worker-ownership, so while executives might be open to appeals, ultimately what you’re describing is an authoritarian arrangement.
I do think employees and users would be better off if Google was democratically controlled in either of the ways described above, but unfortunately that’s not where we are right now.
Companies like Google making company culture such a visible part of their identity strikes me as the kind of wise foolishness that you find a lot in mythical narratives. Sure, you will get some very motivated employees that swallow the kool-aid, but if you think those employees' loyalties are with you, you better think again. They're loyal to the idea that you sold them on, to the gospel that you preached.
Once they get it in their heads that the road you're leading them on is going to hell and not heaven, there's gonna be a reckoning. Now the faithful are going to start questioning things they wouldn't allow themselves to question years ago. Is making all this software so that Google can profit off of collecting people's data worthwhile?
I hope at least one Google exec is forced to sell a yacht over this. But what I really want is that we find a way to force more accountability onto these assholes. I'm happy it happened, but I think we're going to start seeing a shift away from companies making culture such a visible part of their identity as Google did.
Companies like Google have trained an entire generation to seek greater meaning in their work. Even if Google’s promises were hollow, millennials don’t have the cultural institutions our parents did. Thus we seek self-fulfillment in the ways we know how: at work.
I work with a lot of C-suites on “the culture issue”. They see statistics like “40% of people between age 12 and 18 identify as LGBTQ” and they start to panic because their company culture isn’t so friendly to that. Things that were taboo only a decade ago are now commonplace: every kid today knows at least one trans kid, and over half of teens identify as “somewhere in between gay and straight”.
The institutions that used to prop up culture in this country (marriage, church, family, etc) are no longer a reality for most people. Corporations have lobbied successfully for more power, so the employees of these companies expect to be able to wield that power from within to make the change they want to see.
Because the culture at large is so hostile to those ideas, the culture at companies is only going to become more important as time goes on. Companies that ignore it are going to have a difficult time attracting the top talent that can afford to be choosy. Companies that provide a safe, nurturing refuge will find themselves drowning in qualified applicants.
> They see statistics like “40% of people between age 12 and 18 identify as LGBTQ” ... and over half of teens identify as “somewhere in between gay and straight”
If you think about it, it's not too surprising. 12-18 is a weird age with lots of weird hormonal things happening, and it's hardly unheard of that otherwise 'straight' teenagers in that age bracket experience some sort of same sex fantasies or experimentation.
It's just that in the past they would identify as "100% totally straight!!! (except for that time...)" but they're now more comfortable to be more fluid in their definitions.
Exactly. Those kids will eventually reach a company, look around and see it’s 95% cisgender/heterosexual people, and they won’t feel welcome because that’s not the world they grew up in.
The whole “trans agenda” is really about letting kids know that they are not weird, and they are not morally bound to the “default” gender they were given. Turns out when you do that young, a lot of people don’t pick the default without at least exploring the other options. This blows many adults’ minds, and it’s going to be a huge shift in the corporate world. You think #MeToo is a problem? Wait till gender discrimination has a whole lot more variables.
Point is, culture is important. The culture that got you here is not the culture you’re going to need tomorrow. Google’s culture is pretty close to what a lot of this generation aspires to though; and doing work to enable state sponsored surveillance is likely personally threatening to their LGBT staff given the current president’s agenda.
> Google’s culture is pretty close to what a lot of this generation aspires to though; and doing work to enable state sponsored surveillance is likely personally threatening to their LGBT staff given the current president’s agenda
That brings up a good point about ideology (pro-LGBT) versus politics (Trump hate). Almost every country on the receiving end of the US military under Trump can be characterized as a terrible place to be LGBT. The same can be said about Obama, but I suspect most Google employees would not bat an eye if this were happening under Obama.
It’s not just pro-LGBT; it’s a whole liberal ideology that values people of all kinds, and looks to transform the culture to include all that diversity.
Trump is very much against this ideology... but he’s also reminded the country after 8 years of Obama that the government is not your friend. I think the combination of those two things is what caused the backlash at Google; and you’re right, I don’t think it would have happened under Obama.
Edit: despite all the crap, the US is still one of the best countries to be transgender. At least the medical system here doesn’t prevent anyone who wants care and can afford it from getting it.
Okay now I've figured out my objection. I strongly doubt that the slow liberal ratchet is going to suddenly force companies to become accountable for their actions in the same way Google has made itself. Google employees were more than capable of ignoring certain aspects of Google's business in order to work at Google.
My argument is that Google's professed ethos of "Don't be evil" has directly led to this sequence of events. You can be LGBTQBBQ-friendly and pay lip service to the whole host of liberal society acceptance tropes and still be evil, so long as you don't actually claim that you'll never be evil. Once you claim that, and make it a core part of the company mythos, then you're opening yourself up to exactly this sort of thing.
I love the accountability that's being forced onto Google right now, but I feel it's a rare bird at best.
I think companies that don’t have some statement of corporate ethics are going to find themselves starved for talent while the ones who do eat their lunch. I’m seeing it happen across the professional services industry; the big 4 are consistently outperformed by smaller shops that bring a more diverse team.
The “don’t be evil” pledges are a big part of why people feel safe at those companies. Millennials and especially the generation after them need to be involved with institutions that reflect their values. They are more than willing to go create them if they don’t exist.
When you think about it, social stigma against homosexuality is really the only thing that prevents teens from "going crazy". I'm not a gay man but apparently it's pretty easy to get laid on Grindr, easier than a straight guy finding a hookup. Extrapolate this back to teenage years: many/most guys aren't successful with chicks but virtually all of them have a best (male) friend. With less social stigma around it, I suspect that in the next generation, for a great many boys (girls likely follow similar logic) their first sexual experience will be a homosexual one.
I'd put the causality the other way around—Sergey & Larry's parents and other mentors taught them to seek greater meaning, so they created a company that embodied that. Likewise with the employees they sought out. "Google-y."
Admittedly it goes both ways—it's a virtuous cycle of people seeing that it's possible and embracing it more fully. Plus, the YC rule of "no assholes" leading to better companies, which may even mean that you can do well by being good.
Larry and Sergey's research was initially funded by DARPA. Silicon Valley itself was built by military funding. Self-driving cars and AI research were initially funded by the military. Everybody working in tech is getting blood money, anyone who thinks otherwise is either ignorant or a raging hypocrite.
> However, he said he thought that it was better for peace if the world’s militaries were intertwined with international organizations like Google rather than working solely with nationalistic defense contractors.
I remember when Sergey said nearly the same thing about China.
>However, [Brin] said he thought that it was better for peace if the world’s militaries were intertwined with international organizations like Google rather than working solely with nationalistic defense contractors.
You could make an argument that, to maximize global wellbeing, that is the better strategy? Focus on stuff to avoid civilian casualties rather than the nastier possibilities? Maybe spin it off into a specific division so people don't have to work there.
AI is going to get used in warfare, but we may have a choice between killing all the humans and taking out the weapons while not killing humans.
Often I hear about people who refuse some contract because of ethical reservations. But we should have more people with ethical sensibility working on military projects, not fewer. If the only people working on military projects are sociopaths, what future are we building?
This presumes that the people with ethical sensibilities have any power to change the structure of the MIC from the inside. On the contrary, I'd say a fundamentally rotten institution can't be reformed by a few good apples joining up to help turn civilians into red streaks on the ground.
Hell, even the "head of Stanford University’s A.I. lab and chief scientist for A.I. at Google Cloud, one of the search giant’s most promising enterprises" - who literally said “I believe in human-centered AI to benefit people in positive and benevolent ways. It is deeply against my principles to work on any project that I think is to weaponize AI.” - had little to no effect on Google's direction towards military applications. She couldn't even get a publicized email out.
I would never ever work on anything for the military or that is designed to hurt human beings. That being so, I wouldn't describe those engineers that do as sociopaths.
> I would never ever work on anything for the military or that is designed to hurt human beings. That being so, I wouldn't describe those engineers that do as sociopaths.
It seems that, essentially, you're describing conscientious objection.
edit/ Removed some words that weren't adding value.
I've heard this again and again through my career as though it's some sort of gotcha. I think most people understand the difference between developing (for example) a communications client which among its future customers, may include a military, and actively working (for example) the targeting system of a predator drone.
There are no ethical people, only ethical or unethical actions.
When an ordinarily ethical person builds a gun, it's still a gun. Once it has been created, the creator gives up control. (That's the last stage in the act of creation.)
The person who creates a weapon is not the same person who decides to use the weapon.
Smart people know better than to use a terrible weapon, but that's not how we decide who gets to use weapons. The decision goes to those with power, not those best suited to make the decision. Power accumulates with the people most willing to misuse it.
Therefore smart people have an obligation to not work on weapons. And the more terrible the weapon the stronger the obligation not to create it.
AI targeted drone systems are terrible. Even if there's a human in the loop for now. We cannot advance that branch of weaponry. It will only fuel an arms race and hasten the arrival of a terrible future.
I think an important part of that calculus, though, is that the ethical people on these projects are willing to walk away if their ethical concerns aren't met - which looks to be what happened at Alphabet.
I used to have a pacifist friend who worked for a local defense contractor. I asked her about the apparent conflict of interest and her answer was that she hadn't quite figured it out, but she hoped that her presence there at least balanced out some of the rampant jingoism.
Technologists who fancy themselves pacifists are naive. The history of human technological advancement is the history of violence and the desire for control. DARPA funded the early internet, DARPA funded AI and NLP research, DARPA funded computer vision.
It's not new, it's been going on forever - certainly in our lifetimes.
Hey - I am a soon to be Xoogler who just gave notice in ethical objection to Project Maven (btw, I hugely appreciate even simple words of support over the issue from this community).
I don't identify as a pacifist - quite the opposite, I think we should fight Project Maven like hell. The DoD wanted to fund a communications network? Cool. A private company recruiting 10k+ engineers on a cool, creative "don't be evil" brand, and then backdooring their work into use by a defense department is not cool (I'm not saying the company doesn't have the legal right to, I'm saying it's a jerk thing to do).
Sharing research, open-source code, etc is fine, and of course defense is free to poach away top engineers. But they can do that through recruiting, not their IT budget. Or if Amazon and Microsoft want to do the work, that's their choice, I never wanted to work at those companies. I think Google is missing a huge opportunity to define itself uniquely here by not canceling immediately.
I believe true neutrality is to do defense for nobody; I would like there to be a premier technology company that international colleagues can work at that serves no military. I wonder how many of my coworkers can't express their solidarity because of their visa status, because of the insane cost of living in the Bay Area, etc. I don't know.
Is it going to change anything? Probably not. At the end of the day everyone is complicit in some respect. Part of the taxes taken out of your paycheck consequently goes to defense and the military. The US accounts for 35% of the world's military expenditures[0]. Project Maven was going to happen with or without Google's help, it's just a matter of when. Who knows, they might even be using TensorFlow.
At the end of the day, there's little "defining" for Google to do. Google is not some angel child that fell into dating a bad boy with the DoD. Google does both good and bad things all the time, even when the consequences are not as highly publicized as with Project Maven. Google is a for-profit company that was following the money. Figureheads like Eric Schmidt dove right in to working with the DoD without even giving it a second thought, and there was little news from any C-levels on the matter prior to this PR issue for Google, which I think speaks for itself about the philosophy of the company from its leadership. Cynically put, Google now cares because their reputation, and by proxy their fiscal performance for their shareholders, is now threatened and somebody calculated that the risk is greater than the reward of the government contract.
Google spends incredible amounts of money hiring and retaining people. They may not lead to any change, but I'm sure Google is keeping track of how many people are leaving over this issue.
Googler here. Really good engineers are difficult to replace, our hire/no-hire ratio for candidates is insanely small, and increasing recruiting efforts puts more burden on the engineers who do the interviews. Losing people over this issue definitely hurts.
No need to worry, there are lots of people you can pick up from the following companies; they may want to shift a few levels up from hell on the morality spectrum.
Lockheed Martin
Boeing
Raytheon
General Dynamics
Northrop Grumman
United Technologies
L-3 Communications
BAE Systems
Out of curiosity, do you really feel "forcing them to do backdoor defense work" is an accurate depiction? Not a Googler, just curious if the option to switch teams or opt out of the work was available.
I am not sure about exact plans but I have ideas of the rough ones. Obviously this is not an official statement from Google!
Anyone will be able to opt out of direct work on the project. Not sure about indirect work.
In the short term, only a few people will work directly on the project. Long term maybe some more, since the goal is to get bigger projects. Many more people will indirectly contribute to it. For example, I recently TAed the ML101 class. Will one of my students work on the project, and what's the impact of that on my complicity?
Anyway, I basically think that if your company is a DoD contractor then you work for a DoD contractor, especially as an engineer. And I think the typical Googler is massively underestimating the implications of Google becoming a DoD contractor.
> Anyway, I basically think that if your company is a DoD contractor then you work for a DoD contractor, especially as an engineer. And I think the typical Googler is massively underestimating the implications of Google becoming a DoD contractor.
I think a lot of folks in tech are comfortable selling out other people (i.e. creating shitty products) to get some sense of financial security in the Bay Area.
I am ok selling out, but we all have our lines. I just think defense is probably not an uncommon one which makes it an unreasonable pivot for a huge company. Trying to act like "they're just another customer" is worth a good laugh at least.
That statement makes sense in the context of the article. People on the project objected to its being used in a weapons platform. If I build a self-driving car and you attach an auto-targeting turret to it, you've forced my work into a weapons platform even if I didn't turn the wrench attaching the turret.
> I believe true neutrality is to do defense for nobody, I would like there to be a premier technology company that international colleagues can work at that serves no military.
I'm not sure what the problem is with the military specifically as opposed to other government contracts. The military largely just does what they're told, so if you're bothered by how the military kills people, you should be blaming our elected representatives. You should object to all government contracts.
> And also it's still wrong to make it easier, cheaper or faster to kill people.
I don't see how that follows. In fact, I think it's flat out wrong. The military is absolutely, unequivocally essential. You're effectively arguing that we should have stopped weapons development after we realized rocks smash in skulls, but then we would have been conquered by whoever invented knives, arrows, muskets, and so on. Military power is an arms race, and it will never stop advancing, nor should it. Weapons development leads to development of countermeasures. Not pursuing weapons development means you're at the mercy of whoever is less scrupulous than you.
It's a very interesting moral dilemma; one the people on the Manhattan Project surely had. On one hand, you don't want to be responsible for creating something that kills a bunch of civilians, but on the other hand, if your adversary, particularly in a time of war, is doing the same thing, your civilization would be at risk.
Oppenheimer's famous Bhagavad Gita words, "Now I am become Death, the destroyer of worlds."
When Oppenheimer said he felt compelled to act because he had blood on his hands, Truman angrily told the scientist that “the blood is on my hands, let me worry about that.” He then kicked him out of the Oval Office...
Look at the effect nuclear weapons have had on the world. There have been half a dozen near misses with total annihilation. The Russians got the technology almost immediately after it was created. We have lived in fear ever since: rogue states, fear of nuclear terrorists.
Now imagine the smart and capable people on the Manhattan project didn't push, didn't apply their brilliance. Simply said "the amount of uranium necessary to create a chain reaction is on the order of several tons, it is therefore not a viable weapon" and went on to work on more worthwhile creations.
I argue the world would be a better place. Sure, maybe someone else would do it, but that person would be smart too and susceptible to the same sound argument. I'll say it again. We all have an ethical obligation not to build terrible weapons.
But didn't Google work on search technology for the NSA? That's what I get from The Silicon Jungle by Shumeet Baluja. And maybe it was just dystopian fiction.
I can't comment on every tangential thought related to this topic. But basically, while we live in a very morally ambiguous world, this specific issue seems more clear cut to me. In discussing this issue with some people, I've been offered a lot of weird moral comparisons and analogies I don't think are worth seriously considering too much.
You don't have to "fancy yourself a pacifist" to not want to work on military projects directly.
For one, you may pragmatically think it hurts your brand more than it helps your company.
For another, you may think that if you wanted to work in defense, you would have gotten a job in defense. If your company starts chasing defense, maybe it's time to change companies.
For another, you may think the government of the United States doesn't or no longer deserves the best weapons in the world. Perhaps another Western country does.
That last point was a major driver of leaks to the Soviet Union during the Manhattan project. Many of the scientists thought it was simply too much power to be in the hands of one nation.
They were guilty of treason, however not everyone's morals are directly in line with the law. That isn't to say that all spies were of a noble nature; some just did it for the money or to feel important.
That's true - I left academia because I figured a lot of my work was possibly going to go to killing brown children in faraway lands, so I understand that pretty well.
But I didn't get the impression that this was the case here. It was a fairly generic object recognition program (as far as I've heard; could be wrong there).
> For another, you may think that if you wanted to work in defense, you would have gotten a job in defense. If your company starts chasing defense, maybe it's time to change companies.
Totally. Great opportunity to move teams at a big company like Google or quit and move elsewhere. But what happened was more of a statement, people walking out together. I don't think it's wrong, and I admire them for it. But I think it betrays a certain naivete about the whole setup we've found ourselves in.
> For another, you may think the government of the United States doesn't or no longer deserves the best weapons in the world. Perhaps another Western country does.
Sure again, or an Eastern one, or a Southern one. Western hegemony has been very, very bad for the rest of the world, so a change might be just what we all need.
Yes, Western governments have their problems. But honestly what alternative is there?
Chinese "communism" (really Fascism)?
Russian dictatorship / oligarchy?
African tribalism and anarchy?
Middle Eastern theocracies?
There are a lot of things I'd change about Western states. Making them truer to the Western values of reason, individualism, justice, peace and voluntarism would be a good start :)
But really, the only reason we're not living in the dark ages is those same Western principles. (Which we might now refer to as Eastern, had not Ibn Battuta and others rolled back the Islamic renaissance, plunging the world back into darkness until the Europeans caught up. What might have been, etc.)
This type of thinking is why we haven't escaped from the prisoner's dilemma yet. One day, two of our "world leaders" are going to commit to a war and all the soldiers of both sides are going to look at each other and say 'nah, you fight it!' - and we will realise that it's been the fault of the people who do the actual fighting all along.
> One day, two of our "world leaders" are going to commit to a war and all the soldiers of both sides are going to look at each other and say 'nah, you fight it!'
Nah. With autonomous weapons that's not going to happen.
The civilian deaths are horrifying. But terrorist groups have made a practice of using non-combatants as human shields for propaganda purposes. If a terrorist leader who has killed many people in the past and intends to kill more in the future is located in a house filled with children do you take him out or let him live and continue his activities? Innocent people are likely to die either way.
That's why you don't fight terrorism by blowing up buildings from robot sky drones.
And to be clear: If you make the decision to murder a household of children, there is no justification, you are a sick, twisted individual. The justification that you might prevent other indeterminate future innocent lives doesn't change that you carried out a heinous act.
You cannot claim the moral high ground against terrorism when you are acting like a terrorist.
>If a terrorist leader who has killed many people in the past and intends to kill more in the future is located in a house filled with children do you take him out or let him live and continue his activities? Innocent people are likely to die either way.
So, the real choice I'm facing here is _who_ is going to kill innocent people and children - me today or the terrorist leader tomorrow. Do you really think that it is a non-obvious choice?
That's assuming those are the only two choices. In real life there are many more. The US could learn from France, who have fought Islamic terrorists far longer and tend to assassinate the guilty with a bullet rather than taking out a bunch of innocents with bombs.
France bombed us down here in New Zealand in an act of terrorism over a peaceful nuclear protest and has a brutal history in the Middle East, even quite recently. They are far from a shining light.
> The history of human technological advancement is the history of violence and the desire for control.
It's certainly not a given, though. Post-war Japan is an example of enormous technological progress (that surpassed other nations at some point) without military emphasis.
Ever since I learned about Sapolsky's baboon troop, I have always wondered how many things that people call "natural" for humans to do are actually quite easily culturally changed. I think a fair bit.
> Post-war Japan is an example of enormous technological progress (that surpassed other nations at some point) without military emphasis.
This oversimplifies the issue, and insinuates that the Japanese became pacifist.
In fact, because of a long imperialist history, Japan chose to become solely defensive - while being protected by US forces through the Treaty of Mutual Cooperation [1]. So postwar Japan effectively outsourced their strategic capability to the US and UN.
I don't think having a good defense is in contradiction with pacifism. Unfortunately, "defense" in the U.S. is a more Orwellian term.
Anyhow, I was addressing the perception that you cannot have technological progress (or prowess) without significant military applications. Of course you can.
You’re right that Japan’s rapid technological progress wouldn’t have been possible without hiding behind the US shield, but that’s just because it allowed them to divert resources. The development itself was distinctly civilian and not dependent on the military as GP mentioned.
Also, Japan’s imperialist history was about 50 years; that’s certainly not “long” in the scheme of world history.
You could extend this argument to pretty much anything, since our own history as humans is completely tied to violence, but that's no reason to consider every person who opposes violence/war naive...
Yes, this is true. War has been a constant for all of human history, so practically any technological advance can be attributed to warfare. Anyone who thinks that war is necessary for technological progress in 2018 has been drinking too much US military koolaid.
War has been with us forever, but you have to ask: in service of what? For the US, it appears to be in favor of world dominion. The words empire and constitutional government don't really go together very well. Project Maven helps the drone program in some fashion. It helps provide support for a program that assassinates many innocent people. If you choose to not be "naive", then at least pick something that materially contributes to national defense or international peace. This program can only be seen as offense, which is a violation of the UN charter.
The current context of this discussion is the primary driving force behind technological innovation, you seem to be addressing a different context entirely.
I assert that inventors and inventions need not be shackled to their origin.
Case in point: it's entirely possible that ARPANET's descendants were used by foreign powers to subvert American democracy in the 2016 election. I'm not advocating that, I'm saying it's worth recognizing.
You can retort, yes, that's continued weaponization.
And I can respond, that I can want to work on altruistic internet projects, and not feel "guilt" that the internet was invented by the military. And if someone subverts my project to military uses, I have every right to voice my objections.
We'd have modern computing without those things. There was enough demand to develop computing even without military purposes in mind. The transistor was invented at a telephone company.
One earlier example that comes to mind is Galileo demonstrating his prototype telescope to the doge of Venice in the early 1600s. The potential of the new instrument to detect far-away ships who might wish to raid the port city was apparent to the government, and they doubled Galileo's salary.
All I can think of is people who eat meat, but find animal slaughter abhorrent. They can find another nice valley office to work in, put their fingers in their ears, and refuse to think about how America was built on the back of war and an effective defence force.
I'm watching the show "War" on PBS and it is very interesting. They start in boot camp, and the soldiers (of multiple generations) discuss the process of transformation from a me-first attitude to a group-first attitude. I find that an interesting lens for society in general and how we seem to be just a nation of me me me, or at least that is how it is marketed... however, success really only comes from working together for a common cause.
I'm not sure whether the point you're trying to make is:
- that the high standard of living of all of the wealthy in the USA is enabled by its position as the world's main military power - you can try to appear noble and above it by quitting your job when your employer is too close to the military but you're basically complicit
- that these snowflakes need to realise that computers couldn't exist without the military
If it's the first then I can sympathise - however it's getting into "oh you dislike capitalism but you bought an iPhone" (or laptop, bank account, home, clothes etc) territory. I would definitely prefer more people open their eyes to the suffering our governments are causing, but objecting to your employer working directly with the military doesn't mean they're ignorant to this.
I really hope it's not the second - there are a few comments floating around that are heading in this direction.
The first point is what I was getting at. Although more from the perspective of “enjoying the fruits of”, rather than “complicit in”. Which seems hypocritical to me, no matter how you look at it. Especially considering we’re generally among the most highly paid professionals in the world.
I find this entire situation so odd. None of these people leaving Google over Project Maven had any issues with working for the largest corporate spy agency in the world. They were fine with massive data collection in the pursuit of serving people better ads. But helping analysts find objects in imagery? That’s a bridge too far.
I'm an aerospace engineer and I specialize in rockets. I understand that the nature of my work will almost always 100% benefit the weapons industry because there's just no avoiding it. When I started my career ~10 years ago I worked on a defense project as I was a fresh graduate and had no choice, but nowadays I make it very clear to prospective companies that while I have no issues with my colleagues working on weapons, I myself will absolutely, under no circumstances, work on defense funded projects in any capacity. Most companies from my experience are willing to work with you on this especially because a lot of them are already set up for segregated access.
The fault in your thinking is that you seem to think it needs to be an "all or nothing" choice. It's not, and it doesn't need to be. I try to be green and recycle everything I can, but I don't lose sleep over throwing plastic away in the garbage because I do what I can. I'm 99% vegetarian because of my views on animal ethics and I honestly can't remember the last time I've bought meat for my personal consumption, but I have no qualms about going to a friend's backyard BBQ and eating the same grilled burgers that everyone else is enjoying because I do what I can. For this reason, I choose to not work directly on weapons, despite the fact that basically all rocket related technologies can be applied to weapons, because that is something that I can not change, so I do what I can, which in this case is refusing to work on defense funded projects.
> I'm an aerospace engineer and I specialize in rockets. I understand that the nature of my work will almost always 100% benefit the weapons industry because there's just no avoiding it.
Those people are software engineers. Unlike you, they could have easily found equivalent jobs elsewhere (almost anywhere) that did not contribute to building a worldwide surveillance network in the name of serving better ads.
I agree with GP that it takes a wacky moral compass to willingly engage in mass surveillance but feel strongly enough about weapons to quit over them. It's either very naive, hypocritical, or both.
This is not entirely true. I mean, sure, they could not work for _Google_, but almost the entire field of eCommerce involves either contributing to surveillance, or benefitting from it as much as you can. You basically get a choice over the brand of the bad.
Of course, there are other jobs, but much of it either comes with its own significant ethical hurdles - like fintech, for example, or any form of marketing and much of banking - or is much less accessible (good luck switching fields from eCommerce to embedded programming for power plant equipment or something).
Pretty much the only solution I can come up with is seizing the means of production, but that might be an unpopular proposition on this particular website.
> I agree with GP that it takes a wacky moral compass to willingly engage in mass surveillance but feel strongly enough about weapons to quit over them. It's either very naive, hypocritical, or both.
I don't see why it's inconsistent. They don't want to be complicit in physical violence and death, but surveillance leads to neither.
Surveillance does lead to that if it's how we identify where someone is located or leads to actionable intelligence which can then be used as justification for dropping a missile from the sky.
1. Being curt with someone can lead to them going home and beating their spouse to vent their frustrations. Should we then never be curt because it might lead to domestic abuse? How improbable does the outcome need to be before this kind of argument is considered invalid?
2. Dragnet surveillance alone cannot lead to actionable intelligence; it can only be used to confirm other sources of intelligence. The false positive rate is simply too high (see the back-of-envelope sketch after this list). Would they still drop that bomb without that surveillance? In most cases, probably.
3. Weapons have only one purpose: to physically harm. Surveillance can have many legitimate purposes. They simply aren't directly comparable.
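To make point 2 concrete, here is a minimal back-of-envelope sketch of the base-rate problem (Python). Every number in it - population swept up, count of real persons of interest, detection rate, false-positive rate - is an illustrative assumption I made up, not a figure from any real program:

    # Base-rate sketch: why dragnet surveillance drowns in false positives.
    # All numbers are illustrative assumptions, not real figures.
    population = 300_000_000      # people swept up in the dragnet (assumed)
    persons_of_interest = 3_000   # real targets among them (assumed)
    sensitivity = 0.99            # P(flagged | real target), assumed
    false_positive_rate = 0.001   # P(flagged | innocent), assumed, generous

    true_pos = persons_of_interest * sensitivity
    false_pos = (population - persons_of_interest) * false_positive_rate
    precision = true_pos / (true_pos + false_pos)

    print(f"people flagged: {true_pos + false_pos:,.0f}")               # ~302,967
    print(f"chance a flagged person is a real target: {precision:.2%}") # ~0.98%

Even with a generous 0.1% false-positive rate, about 99% of the flagged people are innocent under these assumptions, which is why dragnet data can at best corroborate other intelligence.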
This is the key. Your actions matter. You should do the best you can. And the best you can probably includes never working on weapons. That should apply to all of us, but each person must decide. What is the best you can do?
But you're clearly not doing what you can. You're just doing what's convenient.
You still eat meat at one of the most important times not to eat meat (in front of others) if you are truly driven by ethics for animals. You refuse to work on defense-funded projects because companies in your field have a pretend isolation mechanism (here's a hint: unless they are actually separate companies, they aren't isolated) that allows you to think you aren't supporting the military despite working for a company supporting the military.
This type of behavior shocks idealists. Even though it's better than nothing, just doing little bits here and there where convenient actually makes it harder to change the status quo for idealists (because you think you're helping when you're not).
Then, as an idealist, what do you expect from other people to reach your goal?
If 50% of the people do 80% of what is needed, you're at 40% of your goal. If 10% of the people do 100%, you're only at 10%, plus you've turned your goal into an "us vs them" instead of "everybody for the greater good".
Voting with your wallet 80% of the time is a valid means to support what you think is right.
An idealistic minority doesn't change the world. The majority does.
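Spelling out the arithmetic above as a trivial sketch (the percentages are the ones assumed in the comment, not data):

    # Aggregate progress = fraction of people participating
    #                      * fraction of the needed effort each contributes.
    # Percentages are the comment's assumptions, not data.
    broad  = 0.50 * 0.80   # half the people doing 80% of what's needed
    purist = 0.10 * 1.00   # a tenth of the people doing everything
    print(broad, purist)   # 0.4 0.1 - the imperfect majority gets 4x further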
I think a more CS way to think about it is shipping out a MVP vs. shipping out a 100% perfect, 100% feature complete, 100% bug free, 100% documented, product.
I think everybody would be happy to start a project and ship out that 100% finished product, but you'll spend your entire life working on this and die without succeeding because it doesn't exist. So the choices are to 1) not bother, 2) do what you can (MVP), or 3) work until you die as a failure. OP would want me to choose 1 or 3, it seems.
Choosing to work for a defense contractor with restrictions on projects gives them nearly as much benefit as without the restrictions. The fruits of your labor are still easily funneled into military projects and at a minimum they are keeping the company alive longer while serving the military.
This isn't someone doing 80% of what's required. It's them doing active harm to a cause while continuing to take credit for pretending to help.
Well done. Great logic. The majority can also prop up a Kim Kardashian and a Trump, who met in the White House today and took a nice selfie. Those two idealists are taking your majority for a great ride, pal.
You're not wrong on the first statement. I am doing what's convenient. But I am also doing what I can with what's laid out in front of me. Basically every single company in my field has ties and contracts with the military, so what, am I supposed to just change careers? I enjoy what I do, and I think military projects are neat in the technical sense, but that doesn't mean I want a part in it.
I'm not an idealist. The world isn't black and white.
Of course. If you like to make ice cream and the only organization employing people to make ice cream is the mafia, you stop making ice cream professionally.
You should absolutely change jobs if your current one is immoral, and careers if it comes to that. This much is obvious.
The problem is in the hurdles making that difficult or unrealistic. In America, this process could leave you and your family homeless, maybe bankrupt if anything goes wrong. This fear leaves (maybe most) people in jobs they know to be immoral, and society is that much worse off because of it.
I think you should be doing whatever you can do to create a society in which people are not forced to act immorally, such as you have been.
That kind of black and white thinking implies that the only moral way to live is as a monk, if you're ever spending money on say, a vacation or a new TV or eating out, well why don't you give that money to charity you monster??
You can believe that people have a moral obligation to do good, without believing that people have a moral obligation to maximize good in every aspect of their lives.
Idealism isn't about living as a monk, it's about not compromising on the things that are often seen as "trivial" or "inconsequential". Most people don't jump from acting ethically to accepting work that is obviously problematic. What actually happens is a long series of small, trivial compromises and other deviations that eventually become normalized[1].
Humans make mistakes; it isn't possible to live the "ideal monk" life. The real question is how small problems are handled. Compromising when the problem isn't a "big enough" ethical problem inevitably[2] pushes that boundary back in a cycle of normalized small, easily avoided deviances. Instead, if a hard line is taken against small problems when they are still small, a lot of larger problems can be avoided.
> Idealism isn't about living as a monk, it's about not compromising on the things that are often seen as "trivial" or "inconsequential".
Not seeing the distinction here. If you're an idealist the way hueving indicated and don't live as a monk, you're necessarily compromising on your values every day. And not on trivial or inconsequential things either, since you could easily donate to a variety of non-trivial causes.
What you're suggesting just sounds like reflexive goalpost-moving: the compromises I make are okay, the compromises others make are not.
Morals never serve the individual. They serve to facilitate a social environment that favors a group effectively working as a team. The reality of this is sure to be rife with contradictions, especially at individual scale, but somehow the effort is not contradictory if we consider the madness of a society erased of its morals. The challenge of this contradiction can be overcome through conviction and forgiveness alone. It’s essentially the scientific argument for religion. I would like to think nobody would say this is easy, but many people do say it’s easy. It’s not. It’s nearly impossible, but humans have done it time and time again.
What kicked off this whole chain was a discussion on whether a particular individual's decision was moral or not.
"This ideology is impractical/contradictory but still effective in a group" is a potential valid point to make in another context, but irrelevant to what we were specifically discussing here.
No, it's not about living as a monk, it's about not compromising when things really matter.
Refusing to work for a defense contractor has a real long-term impact because you are depriving them of your labor. Working for a defense contractor but not on specific projects is meaningless because they still profit from you and support the military.
> No, it's not about living as a monk, it's about not compromising when things really matter.
If you cut back on living expenses, you could give thousands every year to hungry kids or the environment or [insert good cause here]. Those are all things that really matter. Especially over the course of many years, you could make a real difference. You don't get to handwave away this point just because it's inconvenient to your argument.
You're trying to find a loophole in your own logic to excuse your own behavior that violates your self-proclaimed values. I get that nobody likes cognitive dissonance, but it's quite obvious you're not being internally consistent here.
It might have to do with how finding objects in imagery is directly tied to killing people. Human beings tend to not like being involved in killing other human beings.
No, the project was to prevent humans from having to slog through hours of footage to find vehicle tracks. This makes it easier, faster and cheaper to kill them. Not a super good use of our greatest technology. Let's keep it manual, expensive and slow. The alternative is terrifyingly efficient killing machines.
You assume that we're not willing to kill innocent people once we've identified a target near them. This also depends on who's being targeted. I'm not comfortable with Trump being able to murder anyone on the globe with pinpoint accuracy.
Doesn't seem very strange, and I work at Google. Tracking personal info to sell better ads, vs interpreting data to imprison or kill people? Those seem pretty hugely different to me.
The same technology you're developing to track personal info to sell ads is what hostile governments around the world use to suppress individual rights, imprison and kill people. The fact that you don't understand the irony in what you're saying is remarkable.
> Doesn't seem very strange, and I work at Google
Yeah... Your working at Google might be precisely why it doesn't seem strange to you.
> The same technology you're developing to track personal info to sell ads is what hostile governments around the world use to suppress individual rights, imprison and kill people.
GP suggested organizations were using Google tech to do so, that's the part I find rather hard to believe.
Either that or by "same technology" they actually meant "different, but same kind of technology" in which case their point makes no sense. Making chemicals to sell to hospitals vs making chemicals to sell to terrorists are things that have different moral weight, even if the underlying thing being created is basically the same.
Why has this gone on so long? Surely the optics are worse than the revenue at this point. “We hear you, we are canceling this, we deeply apologize,” blah blah.
I would assume that there are also very bad "optics" to an American conglomerate announcing to its government that they are unwilling to help with defense projects of (apparently) national interest.
This is the start of a play for the JEDI contract, which is estimated at $10bn over 10 years. That would make this one contract responsible for ~1/4 of all Google Cloud revenue.
I also think that while there is a lot of gnashing of teeth, it's not really clear that the optics here matter. Amazon & Microsoft are already doing these sorts of projects with no backlash. This is really only a story because there are people inside Google speaking out.
What's pretty clear from these threads is that people who already dislike Google dislike this too, but it's not clear that this is having an impact on the opinions of people who otherwise have a positive image of Google.
Your mention of a big contract sounds true; also I somehow got the impression that we did not get all the details in this story. But then JEDI seems to be about creating a cloud for the DoD [1], whereas this contract seems to be about image recognition; but what do we know...
But then: why didn't they create an Alphabet company for military research? (You can even call it Google Mars or Ares.) It would have been more compartmentalized away from the rest of the company (that's how the military likes it).
Also interesting that this item seems to have disappeared from the front page - is this an editorial practice here with discussions that have gone 'bad'? (Idea for a new application: track HN stories that have been on the front page but then disappeared from it.)
My only comment is that the opportunity to stop weaponized AI is long, long, long gone. The barriers to entry for modern machine learning methods are just so low and the military benefits are there. Near-peer states are going to do this and we frankly cannot stop them. The train has left the station...
Sidenote: my impression is the next block of the Small Diameter Bomb has a semi-autonomous feature where it can glide over a target area without a pre-sighted target, use some computer vision, recognize a tank, and adjust its trajectory to hit the tank, all without a man in the loop. Pretty nifty.
Edit: also, these jobs are literally hiring right now. You can polish up your CV and send it to a DoD lab and in a few months you will be using deep learning to kill. SICK!
How is using an unmanned drone to automatically kill people around the world different from blind terrorism? It's just a matter of precision.
I agree it's inevitable that someone ill-intentioned will use ML + drones to create deadly weapons; I don't understand why the USA is doing it, nor what they are trying to accomplish like that.
Drone strikes are already a main argument in terrorist propaganda; I don't see how increasing the number of attacks and decreasing human intervention is going to help that.
Well, currently the unmanned drones do not automatically kill anything. Each drone has a 'pilot' who runs that drone and, I believe, only that drone. That pilot is in Arizona somewhere instead of in the plane.
Look, I am pretty critical of American foreign policy and our current state of imperial dominance, but I mean... calling it blind terrorism misses a huge amount of nuance.
most drone strikes are very targeted in that they try to kill specific groups of people in very specific locations.
However, in a counter-insurgency war, from 20,000 feet plus the fog of war, it is quite easy to drone a wedding, for instance.
> I agree it's inevitable that someone ill-intentioned will use ML + drones to create deadly weapons; I don't understand why the USA is doing it, nor what they are trying to accomplish like that.
Do you really not understand it? People in government think this technology will have substantial military benefits and they don't see any compelling reason not to develop the technology.
> Drone strikes are already a main argument in terrorist propaganda; I don't see how increasing the number of attacks and decreasing human intervention is going to help that.
I don't really know if more autonomous weapons will result in more air strikes/drone strikes or fewer. I doubt it would have any huge effect on the number.
Moreover, I don't really see how increasing automation will have much bearing on propaganda.
I think people in the middle east don't like drones because their family, friends, and neighbors are killed. The degree of automation in the kill chain is probably the last thing they are thinking about as they grieve.
Regarding that SDB improvement... Isn't that something that's been done for a while? Not with image recognition, I mean, but the general "fire weapon at area and let terminal search + guidance deal with the rest" methodology. HARM missiles, Anti-Ship missiles, that sort of thing. Granted, it's probably a little bit harder to misidentify a radar installation or warship than a stationary vehicle...
I am actually not sure. Before posting this comment I spent like 20 minutes reading the defense literature/media to get a more precise understanding of the technical details. The commentary is pretty vague, but I think the difference is that with HARM/Anti-Ship you fire-and-forget, yet before you fire, the missile does have a TARGET that it is then tracked onto, while in this case you release the bomb over a general area and then it either identifies a target and locks on or it destroys itself in the air. I think that is somewhat different from previous smart weapons.
Warning: this could be apocryphal since I can't find the documentation that made this distinction explicit, but I remember reading about this a few weeks ago.
Hmm. My recollection was that the later Anti-Radiation Missile systems would do a form of target persistence to deal with "target radar switched off during flight" issues, but that wasn't a feature of the original models, such as the AGM-45 Shrike [0]. So with that, you might have a target in mind when you fired... But the target you hit would be the one that was radiating when the missile got close enough.
Automatic visual targeting is still something rather new, though.
> The weapon gives pilots the ability to destroy moving targets on the battlefield. Its seeker detects, classifies, tracks and destroys targets, even in adverse weather conditions from standoff ranges. "We call SDB II a game changer because the weapon doesn't just hit GPS coordinates; it finds and engages targets," said Mike Jarrett, Raytheon Air Warfare Systems vice president. "SDB II can eliminate a wider range of targets with fewer aircraft, reducing the pilot's time in harm's way."
I think it is pretty hard to parse just how autonomous these are.
> Once launched, the SDB-II relies on a sophisticated package of internal computing and algorithms that are designed to get the most out of its tri-mode sensors, and make the process of launch and targeting as simple and flexible as possible for the pilot. The GPS/INS system or datalink messages guide the bomb toward the target during the initial search phase, while the tri-mode seeker gathers initial data. A revisit phase combines information from all of its sensor modes to classify targets. That’s especially useful because the SDB-II can be told to prioritize certain types of targets, for example by distinguishing between tracked and wheeled vehicles, or by giving laser “painted” targets priority.
So: some amount of onboard classification and targeting. Again, I am unclear whether the bomb can re-prioritize targets without confirmation from the pilot, but it seems technically very possible. A rough sketch of the kind of selection logic being described is below.
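To make the quoted description concrete, here is a minimal sketch of a "classify, then prioritize" selection step. This is purely illustrative: every name, type, and threshold is hypothetical, and it is not based on any real weapon's software, only on the public description above (laser-designated targets first, then an operator-selected vehicle class, otherwise abort).

    # Purely illustrative sketch -- hypothetical names, not any real system.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        kind: str               # classifier label, e.g. "tracked" or "wheeled"
        laser_designated: bool  # was this target "painted" by an operator?
        confidence: float       # classifier confidence in [0, 1]

    def pick_target(detections: list[Detection],
                    priority_kind: str = "tracked",
                    min_confidence: float = 0.9) -> Optional[Detection]:
        """Return the highest-priority confident detection, or None."""
        candidates = [d for d in detections if d.confidence >= min_confidence]
        if not candidates:
            return None  # nothing confident enough: the "self-destruct" branch
        # Priority order mirrors the quote: painted targets first, then the
        # operator-selected vehicle class, then raw classifier confidence.
        return max(candidates,
                   key=lambda d: (d.laser_designated,
                                  d.kind == priority_kind,
                                  d.confidence))

The whole debate in this thread is really about that `return None` branch and that `max()` call: who sets the confidence threshold, and whether a human confirms the result before anything is engaged.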
I am sorta mixed on this. Is it better to be killed by a person or a computer? Does it make a difference? What if the computer system kills fewer 'innocent' people than a dumb human? I don't think it is that black and white.
In the short term, it means that high-tech nations can start wars with less concern about citizen backlash over troop deaths and injuries. That's already evident for the US, given modern weaponry. They're still limited by costs of pacification, however. And AI will help reduce those costs.
Picture yourself on the receiving end. You don't want fast, efficient, perfect targeting. That's the stuff of nightmares.
The Italian fascists were every bit as ideologically scary as the German ones, but the German ones were ruthlessly efficient and capable. Let's not make the weapons the Nazis would have wanted.
I have no pacifist objections to Google partnering with the Pentagon; I just don't think it's very smart for what amounts to an information platform to open itself so easily to accusations of bias (even though everyone would know it's there anyway).
One scenario I would plan for, if I were Google, is that with the emergence of WebAssembly an open-source browser-based mobile OS like Firefox OS might become more viable. I could definitely see some nations (e.g. Russia and China, though I could also see South Korea's phone manufacturers being in favor, and many EU nations too if it were open source) being very uneasy with a US corporation that is a Pentagon contractor and whose government already threatened to withhold Android software from ZTE. Between getting close to the Pentagon and the ZTE threat, it will be harder for Google to make a fuss if Russia and China were to outright ban Android.
I am amused that _this_ of all things was what it took to get googlers to stop and take a look at what they're doing.
I work somewhere that has talent bouncing to/from a military contractor, and a common attitude I see is being far more comfortable with the military doing "what it takes" within the rules of engagement to protect the nation than with what our own golden boys in the valley are doing for dollars.
"It's difficult for someone to understand something when his/her salary depends on not understanding it." This goes for the military contractors as well as the SV tech community.
"Rules of engagement" are less noble than they sound. They are not internationally reviewed and approved, they are unilaterally imposed by one party.
An infamous example being the "do not approach within 50 metres of this vehicle" rule for convoy escorts in Iraq. Under no international law can the use of lethal force be condoned in that circumstance, yet it was 'legal' under the RoE.
And I'm not saying that as some lefty pacifist, having been in uniform myself.
>They are not internationally reviewed and approved
Devil's advocate: what makes ROE any better if they're approved by China, Iran, and Saudi Arabia, for example? Western European-descendent values and cultural norms are a global minority. You might not like the ROE favored by Indonesia in East Timor, or by the Saudis in Yemen, or by Sudan in Darfur.
At least that rule can be followed. In Fallujah, the US prevented military-aged males from leaving and then decided all males left in the city were combatants.
You missed the next bit where those that remained had white phosphorus fired at them.
“We could not get effects on them with HE [high explosive]. We fired 'shake and bake' missions at the insurgents, using WP to flush them out and HE to take them out.”
I'm a googler and don't really see anything particularly objectionable about selling ads against a user's personal info. That's how the service is funded, and realistically almost nobody's gonna pay a monthly sub just to search the internet or get driving directions. I certainly wouldn't. So, ads it is.
Now, giving that personal info to third parties, that'd be a different story, but that's not what Google does.
I think it's more than that. For the most part, you get paid to do what your employer asks you to do. There is no right anywhere such that when you go to work, you work as an agent of yourself. No, you are an agent of the company, paid to do what you are asked to do. Sometimes what _you_ want to do and what the _company_ wants to do are in alignment -- but that's the exception. Most of the time there is misalignment. One would imagine most FB and GOOG employees aren't driven innately to spy on their users -- but there they are, essentially conspiring along with the companies.
In other words, you're there to grind your boss's axe, not your own axe.
Once you're in the army you have very little choice. You follow orders, or face a court-martial otherwise. As a civilian you have the luxury of choice.
Military members have a choice if it's an unlawful order. They may still be court-martialed, and sometimes even found guilty. But in the case of unlawful orders it is their duty to disobey.
That said, for lawful orders (even ones you find questionable, immoral, or unethical), you have no real choice.
> Well, one way of looking at it is: Google may or may not make shitty products that steal your info, but they don't kill people.
> The military kills people.
And, ironically, if the US military is ineffective at killing people, more people might be killed or other catastrophic human rights compromises might need to be made. Focusing too much on "the military kills people" is missing the forest for the trees.
Also, Google has been doing the same thing for ages. You don't get a job at Google and then think, "Wow, they track data, I'm gonna leave on day one." If you found their tracking objectionable, you never would have taken the job to begin with.
Our military spending is anything but defensive. Overtly aggressive and offensive? Yes. Active in most parts of the world? Yes. Defending our borders? Not really.
I wonder how people here would feel about working on anti-missile systems. Probably much less conflicted.
> There are many places one can be tolerant without having to contribute to the US government.
But if you don't contribute, you could weaken the US government to the point where it can't resist an unambiguously intolerant foe. There are governments out there that are technologically sophisticated and much worse on human rights than the US.