Amazon shareholders demand it stop selling facial recognition to governments (independent.co.uk)
401 points by pmoriarty on June 18, 2018 | 120 comments



After Facebook, Amazon is the other company I wouldn't trust not to do evil. Even if many more shareholders join in on this, I don't see Jeff Bezos caring about these things. The Amazon stock won't suffer even if he publicly says he won't listen to such shareholders. On the contrary, the stock may just shoot up.

The larger question on this topic is, "If the big technology companies do not sell the technology to the governments, won't somebody else do it?" Of course, the governments would find some domestic or foreign companies to buy such technology from (or even spend money on developing it). I think that's inevitable over the course of time. But such pressure on companies could possibly help shape policy too (note the pile of qualifiers; that's how difficult this would be).


The problem comes down to the engineers and researchers involved.

As an INDUSTRY, we have some very important decisions to make, one of which is, "will we work on this type of system knowing full well what it will be used for?"

Yes, everyone has their price, but the fact remains that the tools are nigh useless without the expertise of knowledgeable people to integrate and fine-tune the final product. If there is enough ethical backlash from us tech people, the progress any government makes can be set back decades.

It most likely can't be stopped at this point, as the data and initial research are out of the bag, as it were, but it doesn't have to be carried one inch further.

Personally, I won't touch it. No benefit is worth the risk of this type of system being leveraged against a population. I've fully thought through the security, search-and-rescue, administrative, etc. applications, and as cold as it may sound, it is not worth it. The invasion of privacy, the chilling effect on public discourse, and the enabling of active social engineering of populations through ubiquitous surveillance and the negation of dissidents is not a legacy worth having anything to do with.


In a lot of these scientific do-or-don't debates, people tend to discount the fact that there's a consequence to not building the technology.

In the atomic bomb example, what would have happened if Oppenheimer hadn't helped develop the bomb? Russia most certainly still would have. And the world would be a different place today.

China, Russia, shit - North Korea - are all working on super shady projects that neither you nor I would want any part of. However, that doesn't stop the technology from being developed - and in some dire cases, not developing the technology domestically puts us at a disadvantage.

This isn't me saying that Amazon should continue providing facial recognition software to the government. This is just saying that never doing evil isn't always the best solution on a global scale.


Developing an offensive weapon is different from working on domestic surveillance technology.

Personally, I prefer a world where China has big-brother technology that the US does not.


Facial recognition isn't exclusively a domestic surveillance technology. For example, it could be deployed at national borders, with satellite imagery captured in combat zones, and so on.


Nuclear weapons have minimal potential for domestic use, short of a geographically partisan civil war. Facial recognition, conversely, has huge potential for domestic use, and arguably little potential for foreign use in a major conflict. If anything, I would say its only potential use in conflict would be to improve the ability to quash resistance in an occupation, which is more destabilising to global security.


Absolutely. An offensive weapon might be used to unfairly isolate and target innocent people.

Domestic surveillance will be used to unfairly isolate and target innocent people.

We all know this to be true. Therefore we should be much less apprehensive about developing offensive weapons.


> In the atomic bomb example, what would have happened if Oppenheimer hadn't helped develop the bomb? Russia most certainly still would have. And the world would be a different place today.

Russia (or well, actually the Soviet Union) stole the atomic bomb plans via its spy network:

"Although the Soviet scientific community discussed the possibility of an atomic bomb throughout the 1930s,[4][5] going as far as making a concrete proposal to develop such a weapon in 1940,[6][7][8] the full-scale program was initiated only in response to the intelligence reports collected by Soviet intelligence through their spy ring in the United States on the secretive Manhattan Project." [1]

That's the problem with your reasoning ("but they are doing it as well" or "they will do it as well"). Your reasoning is what escalated the Cold War and almost brought us WWIII.

[1] https://en.wikipedia.org/wiki/Soviet_atomic_bomb_project


It's a stretch to say the Soviets stole the atomic bomb plans, if what they got was a rough blueprint of one of the original designs, which was not used to produce the size of bomb needed to be useful. The spies may have accelerated the development by a few years at most. Also, I believe a lot of the intelligence came from Russia's spy network in the UK, but that's semantics.

It was generally assumed (by Oppenheimer very vocally) that an atomic bomb would eventually be produced; the physics weren't that complicated. This is coming from him, not me. The U.S. showing that it could be done accelerated things, but secrecy would not have prevented the development of a bomb. The U.S. couldn't keep researchers from publishing forever; eventually the dots would be connected.

You state that this is what escalated the Cold War and almost brought us WWIII. Well, the Cold War was scary and spawned lots of proxy wars, but it led to a decades-long period in which two of the largest powers never engaged in direct conflict. It's argued routinely that the only reason for this is that both sides had nuclear weapons and the idea of mutually assured destruction. If neither side had developed nuclear weapons, there would certainly have been another war between the U.S. and Russia.


It almost escalated, multiple times, and more than once it came down to someone lower in the chain of command refusing to take the destructive action.

The book and series The Man in the High Castle (SPOILER) is essentially about the disturbed equilibrium between a nuclear and a non-nuclear power.


This misses the point, which is that 'everyone agrees not to do risky things' is not a stable equilibrium. Russia couldn't trust the US not to do it, and vice versa.


That lack of risky things is a Fata Morgana. Risky things were done. The world was almost destroyed, multiple times.


This kind of argument can be used to justify working on all sorts of terrible things, but the premise just isn't true.

Most projects fail, and the people who work on super shady projects aren't some kind of evil supergeniuses. "Good" people are often better at working together. They tend to have more access to outside research and researchers.

It's also hard to know how far along these projects are. It's in these countries' best interests for you to think they're as far along as possible. The reality may be that you're building destructive technology to pre-empt something that doesn't even exist or will never work.


"If we don't do it, someone else will". There is probably no other argument more responsible for bringing us to the brink of nuclear holocaust than this one. This is the argument that convinced the otherwise pacifist Einstein to write to president Roosevelt, urging for the Manhattan project.

So what's evil about it? Nothing by itself, it is self-evident that we can not allow our enemies be more powerful than us. But there is a second step, which has to follow: those powers have to be used for a universal good which go beyond our selfish identity interests. If not then we become the enemy ourselves.

I believe the reason that many are protesting against corporations handing over their technology to the military is because they feel this second step is missing. Right or wrong, they feel that the state has lost its moral compass.


But wouldn't the correct response be to rectify the state/govt, rather than stop working on the tech?

Just because some select group decides that it's their moral imperative doesn't mean everyone capable of the research feels the same, and therefore any government that wants to stay ahead of the pack must be doing the research.


There's also the issue of it almost certainly being used against those governments that refuse to build it. So any decision not to proceed must consider that it's empowering everyone else.


> As an INDUSTRY, we have some very important decisions to make, one of which is, "will we work on this type of system knowing full well what it will be used for?"

Can't this easily be defeated by breaking the project up such that different teams are involved in different parts, and none of the teams knows the whole thing? E.g. they could all believe they're involved in some new super-secret product development, but not know enough about the final customer or the other parts to realize it's the US government.


Hence the "as an industry" part.

If you've ever read Snow Crash, the U.S. Government does something almost exactly like what you are talking about. That is why I've stated that at this point we can only delay the fruition of a state-level infrastructure, not stop it.


>The problem comes down to the engineers and researchers involved.

>As an INDUSTRY, we have some very important decisions to make, one of which is, "will we work on this type of system knowing full well what it will be used for?"

It's the same question engineers and researchers who design bombers, missiles, and weapons of all kinds should be asking themselves. Unfortunately, there are always enough people devoid of morality and/or willing to defer to authority and facilitate the violence of authoritarian governments.


Morals have not worked well enough with advertising, whose technology is only a hair's breadth away from this. The sea change you allude to will have to begin further down the food chain.


Too many people don't care. Just look how many people are plugging away at AI even though it could easily lead to extinction-level events. Even before that, people were making the H-bomb and other smaller-scale weapons. Too many people are too greedy and don't care.

I just don't think this kind of opting out is going to do much. Maybe slow things down a little, but not that much.


But you're confusing the purpose of "caring". You are working under the assumption that the people who care would like to modify the situation in order to prevent these things from advancing further.

And of course, from that perspective you can make the argument that none of the proposed actions make any difference at all. Only outlawing the research itself has any chance of success there, for obvious reasons. Now, the simplest explanation is the explanation, I'm afraid: if people are taking actions for reason X that don't have any effect on X at all, then perhaps there's simply another reason.

If you instead look at the reactions in this thread as "given this valid concern, how can I use it as an argument to exercise power over others?", then you will find people's behavior surprisingly sensible.

People are discussing how to exert direct influence over how these big companies (where a lot of them work) move, and even more transparently about who should decide what. Who should decide who does things and what they're allowed to do.

Does it make more sense now?


The same thing can be said about specific technologies.

1) Is it OK to work on facial recognition at all, knowing it will be abused?

2) What about machine learning in general? It is certainly being abused, and some people are worried about an AI destroying humanity. Should all AI research be banned?

3) What about open-source ML libraries? Having TensorFlow open source certainly makes it much easier for bad actors to develop ML to do bad stuff.

4) What about open source in general? Having Linux as free, open-source software certainly enabled a lot of people to efficiently run computer systems that can be used for bad things. I am sure that many of the surveillance systems used by the US, China, and Russia run some variant of Linux.

Basically, if you are writing software, especially open source, somewhere along the line, somebody will do something bad with it.


The global mass surveillance market is already huge. Meanwhile, Amazon introduces the very hardware accelerators in the cloud that can make real-time tracking of billions of people an affordable reality for any government.

Not only would another company step in to take this market, but Amazon stands to lose a wad of cash if it gives up its first-mover advantage. Think figures in the tens of billions, maybe more.

Not gonna happen.


A much more accurate title would be "19 Amazon shareholders demand it stop selling facial recognition to governments." They even wrote a sternly worded letter!


A much more accurate title than that would be "Shareholders representing X% of Amazon demand it stop selling facial recognition". I mean, Jeff Bezos and I are both shareholders, but his stake is worth mentioning.

Activist funds are able to shake things up with far fewer than 19 shareholders, so the count isn't important at all unless it's giant.


Yeah but 19 is a much larger number, visually, than 0.25% or whatever


I don't care about "larger", which is a relative scale, vs. a percentage, which is an absolute scale.


If it's 0.25% then at least we know it's not important.


> They even wrote a sternly worded letter

Privately delivered to Amazon by the ACLU. Until one of the signatories who is also a major Amazon shareholder goes public with their disgruntlement, this should merit no further attention.


CNN[1] says that it is "Nearly 20 groups of Amazon shareholders ... which include the Social Equity Group and Northwest Coalition for Responsible Investment."

[1] http://money.cnn.com/2018/06/18/technology/amazon-facial-rec...


I don't know who those groups are - but they're exactly the sort of name I'd choose for my single-share-holding activist organisation...

(It's a more-than-impulse-buy price of ~$1700 right now - but if I were the sort of person who bought single company shares to push my personal causes at shareholder meetings, I'd probably have bought in at a few hundred bucks 5 or more years back...)


I was curious, so I looked up the Northwest Coalition for Responsible Investment [1]. They have 16 member organizations, including the Tacoma Dominicans and the Sisters of St Mary of Oregon. Most of their actions seem to be 'dialogue,' which they apparently engaged in over 60 times.

[1]: http://www.ipjc.org/wp-content/uploads/2017/10/NWCRI-Annual-...


Pretty soon, they'll have to move to outright scowling at Jeff Bezos!


Or a tweet. That is how most official communication is handled these days, right?


Is there something wrong with this? Email, fax, phones, telegrams, snail mail, etc. Everything was the new weird thing at one point in time.


Tweets have the side effect of being public. Sometimes that becomes the main effect. Emails and the rest of your examples don't have that property.


You've clearly never had somebody forward your email on you. All digital communication is effectively public by virtue of fast, easy copying.


There is a difference between public nature of tweet where anyone can go and read it vs public nature of email where you would need to forward it to everyone (or one of them would have to tweet it which is back to what GP said).


Transparency!


Tweets are limited in content, and unlike other forms of communication, their permanence is at the leisure of one American tech company.


Their visibility is as well.


You mean like gmail?


Gmail is just one company's implementation of an application layer sitting on top of SMTP. You are free to use any other company's implementation or even host your own and you will still be able to communicate with all other existing SMTP services. Not at all like tweets.


This seems like it is approaching the problem from the wrong angle. This should be addressed at the government law level.

The problem is where do you draw the line? Any technology can be used for evil. Do you next ask companies that sell cameras not to sell to the government? Then ban computers?

Laws that concern themselves with targeting and surveilling people encompass any technology.

Facial recognition is not a hard problem, and this is just a matter of convenience. The government can just go around and implement it themselves if nobody provides it to them. At least through Amazon there is some level of non-government control to shut it off if they end up doing anything nefarious.
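
To illustrate how commoditized this already is, here's a minimal sketch using the open-source face_recognition Python library (my choice of library, and the filenames, are assumptions for illustration):

    import face_recognition

    # Load a reference photo and a frame to check against it
    known = face_recognition.load_image_file("reference.jpg")
    unknown = face_recognition.load_image_file("camera_frame.jpg")

    # 128-dimensional face encodings from a pretrained model
    known_enc = face_recognition.face_encodings(known)[0]
    unknown_enc = face_recognition.face_encodings(unknown)[0]

    # True if the faces match within the default distance threshold
    print(face_recognition.compare_faces([known_enc], unknown_enc)[0])

A dozen lines on top of freely available models; the hard parts are data and scale, not the algorithm.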


I agree with this in principle, but in the US the intelligence/police don't really seem to be accountable to the people.

They regularly conduct surveillance on huge scales, and no one even knows about it until we get lucky with a leak like Snowden's. Who knows how many dozens of other secrets like that they still have. The CIA, FBI, etc. have a history of doing this kind of thing going back decades, at least since WW2. Then it gets declassified 50 years later, when no one cares.

Surveillance is a case where it's good for the government but bad for the people the government is meant to represent. Bottom line: they will never police themselves on this issue, so going after the people making it, like Amazon, may be a better idea.


The intelligence and police apparatus are the people. They are not some alien overlords; you can meet these employees and see what they do, and you can alter their course of action by shaping the government through direct voting.

Trying to change it via corporate proxy is not going to really work.


> Facial recognition is not a hard problem

Low quality facial recognition is easy. High quality facial recognition is hard.


There's a growing number of investors who also look at the non-financial side of business (social responsibility, environmental issues, etc.). Companies who ignore these things exclude themselves from the portfolios of these investors.

Some banks and investors are saying they don't want to be involved with certain types of legitimate activities. Here's one example: "Deutsche Bank avoids entering into, or continuing, any kind of business relationship with entities with clear, direct links to the following types of Controversial Weapons business [...]" [1]

I'm not saying face recognition is the same thing as antipersonnel mines, but things can change over the years, especially under pressure from the public.

[1] https://www.db.com/newsroom_news/2018/deutsche-bank-upgrades...


>Companies who ignore these things exclude themselves from the portfolios of these investors.

Sounds good to me. It seems very unwise to run your business based on whatever pet project "socially conscious" investors are supporting this week.


I get that people are upset with government abuses (I am too!), but stepping back for a minute, I think it is much more concerning that we’re building a world where a few tech companies get to decide what computing may or may not be used for.

The fact that even the government is hitting headwinds gives me an extremely negative outlook on where the cloud is headed with respect to individual rights.


While I applaud this effort, it will not stop the use of the technology. It will simply steer them to another provider, or to a custom implementation (on top of AWS, if they so choose) of this now well-understood technology.


Sure. But let's be clear - it is good to have as many software developers as possible steering away from unethical work.


Is it really the tool that's unethical, or is the argument here just about depriving governments they don't like of tools in general?

"Twenty shareholders from Ford demand they stop selling cars to local law enforcement."


I completely agree -- wouldn't you rather help law enforcement and our military protect our well-being?


Be a true patriot; set all passwords to ‘password’.



"Too many secrets". Setec Astronomy.


They don't protect your wellbeing, they protect the entrenched interests of the hyper-capitalist corporate class.


> Is it really the tool that's unethical

Amazon runs the tool on AWS servers they control, so they can (and should) be responsible for having policies that forbid its unethical use.

Here is AWS's Acceptable Use Policy: https://aws.amazon.com/aup/

I don't see anything there about ethics.


I don't know about this argument though. If Amazon bans a book based on their ethics, people freak out about censorship and how one company has so much power and shouldn't be doing that. If their AUP stated what they deemed ethical, how many potential startups or corporations wouldn't be able to start? Take a "simple" example: A business that counsels families on abortion. Is that ethical? Who gets to decide that? I think this is one of those cases where we have to be very careful what we are asking for.


> Is arresting criminals based on photographic evidence "unethical" now?

IMO creating a police state is unethical - the proliferation of cheap surveillance technology helps do that.


> unethical use

Is arresting criminals based on photographic evidence "unethical" now?


Yeah, but... that means the people who do take on this work will be the zealots who never doubt what they're doing. That's not to say that non-zealots should take on this work to keep the zealots out. I'm just saying that we're screwed so long as there are no laws prohibiting governments from doing this sort of stuff, because there will always be devs willing to use their powers for evil.


Those zealots would do it anyway.

If there's any correlation between non-zealot developers and highly effective developers (which there may not be, but the optimist in me hopes there is), then the world might be a better place as a result of this...

(Note, I have been _tempted_ to take on work that fascinates me on a technical level but that I disagree with on an ethical level. I've never crossed my own personal ethical line there, but - without judging - I can see why the temptation succeeds for some people...)


"it is good to have as many software developers as possible steering away from unethical work."

Is it unethical to track down violent activists?

Is it unethical to check for people crossing the border illegally?

And this thing about the tech being used to single out 'people of colour' - I think we can dismiss that out of hand; the tech won't be used for that ... though the application of the tech may disproportionately affect some groups (say, the cops create a 'watch zone' in S. Chicago but not elsewhere).

But at least in the former scenarios, it's a fairly grey thing.

At border crossings, I can see this could be very reasonable.

Walking down the street in 'wherever, USA' and getting tagged for something - there I can see ethical problems.

It depends on how it's used ... we need some new laws ...


> At border crossings, I can see this could be very reasonable.

Note that it's also least necessary at border crossings. As a non-US citizen I'm already required to give my fingerprints and retinal scan to border control agents.

Like you say, it's the "walking down the street" problem - for me, at least, perhaps more accurately described as "done at many orders of magnitude more scale, in circumstances where you have no option to opt out". If I don't like border control practices, I have the option of not crossing a border (at whatever cost to me that implies, but I have _some_ agency there). When this is deployed on streets, in shopping centers, train stations, and other similar places, I've lost any agency in being able to choose not to be involved/identified.

(Note too that the USA defines "border areas", where I'm legally able to be stopped and fingerprinted/retina-scanned, as anywhere within 100 miles of a border - which includes pretty much all of California and New York, and anywhere within 100 miles of each coast or the north/south borders.)


>Note that it's also least necessary at border crossings.

Says you. It could replace the fingerprint and retinal scan for all we know, plus you don't have to actually physically interact with the person crossing. I don't think it will, but it seems perfectly reasonable to deploy this technology at the border. I see no issue with the definition of "border areas" either. Seems reasonable to assume that if someone has recently crossed illegally, that, assuming they haven't gotten picked up by vehicle yet, they are likely to be found within less than 100 miles.


Sure, I'm speculating out my ass here (it's what everyone does on The Internet, right?)

Seems to me, though, that the most recent numbers I've heard for commercial state-of-the-art facial recognition are barely capable of 99% accuracy. I don't know the error rates of fingerprint and retinal scans, but I'd put good money on the combination of passport and fingerprint, or passport and retinal scan, being several orders of magnitude more accurate than the face recognition we have available right now.

(And I probably should have left out the "border areas" comment as part of a different argument - my beef with that is not "how many illegal crossers might you find within 100 miles of a border?", but "is it worth reducing the rights of everybody, legal as well as illegal, just because they live/work/travel within 100 miles of a border?" That includes everybody in CA west of a line through Sacramento, Fresno, and Bakersfield.)


Why is the accuracy of the technology relevant at all? I think you're assuming that we already have fingerprint and retinal scans of everyone entering, which is quite obviously not true. We might, however, have a rough facial footprint of a known bad actor. I'm fine with this technology being employed in such a manner.


The accuracy matters (at least it seems so to me) because if all you have is "a rough facial footprint of a known bad actor" and you use a technology with a 1% error rate - given that there are probably something like several hundred million airport border crossings a year in the US - _somebody_ is going to have to deal with millions of false positives a year (tens of thousands a day), which doesn't seem like a win given that I suspect the number of bad actors whose facial features are known but who cannot be detected with the in-place passport/fingerprint/retinal crossing system is probably in the single digits per year...

The aggregated individual cost to the 1% false positives - when deployed against a population of several hundred million travellers a year - seems outrageously high to me.
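
Back-of-envelope, using the figures above (both inputs are rough assumptions, not measured values):

    # Expected false positives from a ~1% error rate at border scale
    crossings_per_year = 400_000_000   # "several hundred million" (assumed)
    false_positive_rate = 0.01         # ~99% accuracy

    fp_per_year = crossings_per_year * false_positive_rate
    print(int(fp_per_year))            # 4,000,000 false positives a year
    print(int(fp_per_year / 365))      # ~10,958 a day, vs. single-digit true hits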


Easily solved by simply fingerprinting and retinal scanning the positives and the "unknowns", which is essentially the status quo. Nothing changes except our confidence level that we are actually engaging the right people. The cost, to me, is simply in terms of how expensive implementation would be in terms of dollars.


" As a non-US citizen I'm already required to give my fingerprints and retinal scan to border control agents"

Well that's my point - we already do stuff on par with ID recognition - so while it's uncomfortable and debatable, it seems 'within bounds' in our current state of affairs.

And yes, the 'all seeing eye' part is hugely contestable.

My point is that it's grey.


Sure. I think my (perhaps badly made) point was: there are two existing technologies already in use at borders, each of which has similar or better accuracy than facial recognition, so the additional benefit of deploying it there is likely to be small.

Unless, as pointed out by bdhess in another response to my comment, the aim is to detect people for whom border control does not have any passport/fingerprint/retinal information, but for some reason still considers a "person of interest" at a border. Which has its own set of scary implications...


> Note that it's also least necessary at border crossings. As a non-US citizen I'm already required to give my fingerprints and retinal scan to border control agents.

I think your argument assumes that the US government has already captured either a fingerprint or retina scan of all of its persons of interest. I don't think that's a safe assumption.


True, but what's the other assumption?

That the aim is to detect people for whom border control does not have any passport/fingerprint/retinal information, but for some reason still considers a "person of interest" at a border?

I'm not sure if that's a valid point, or a scary overreach...

Part of me worries that using a barely-99%-accurate face detection technology (perhaps trained on Facebook or YouTube jihadist videos?) on what must be at least several hundred million airport border control crossings a year is inevitably going to result in several million false positives a year - presumably mostly bearded Middle Eastern males. The invasiveness of recording and storing facial data on every international traveller, for the possible payoff of detecting someone genuinely "interesting" amongst that daily stream of thousands of false positives, seems like a poor security solution.

Another part of me acknowledges that the US (and, to be fair, every sovereign nation) can invade everybody's privacy at the border _anyway_, so what's the problem with adding just this one tiny straw to the camel's back?

As John Scalzi so eloquently pointed out, I'm a middle-aged, white, heterosexual, cisgender man - I live life on the lowest difficulty setting. This is unlikely to affect me in any way, apart from giving me a great opportunity to rant on internet forums. If you have any 15-30 year old male friends of Middle Eastern descent, ask them how _they_ feel about an algorithm with a well-known 1%+ error rate, most likely trained on "terrorist suspects", being pointed at _them_ every time they fly in or out of the US...


You're right - there's _lots_ of grey area here, and some people's ethics are different from mine (and mine are probably different from yours, at least at the boundaries; _most_ people agree on basic human rights, they just all define them differently sometimes...)

While I agree we need new laws, I also recognise that laws by necessity change slower than fashions, so there's always a period (often a long period) where laws trail behind what's going on in society. It's during those periods where approaches like "it is good to have as many software developers as possible steering away from unethical work." might help.


There's a lot of grey area around immigration.

Where it stops being grey is when the authorities start setting up concentration camps for forcibly separated children. At that point, yeah, it's unethical as hell, and so is aiding and abetting it in any way (which may include some of the activities that you've listed). Context matters.


I think this kind of "don't sell technology X to governments" demand is misguided, for the following reasons.

1) You are saying that you are fine with private entities having access to this technology, but not public entities? Private individuals and companies should be able to use facial recognition, but not governments? Throughout history, governments have typically been the first ones to acquire tech. Of note is that the major airplane companies (Boeing and Airbus) have very large military units.

2) It is likely to be ineffective. All you need is a third company that serves as the bridge that buys facial recognition from Amazon and resells to the government.

3) International relationships really are a competition. I am sure Chinese AI companies are working hand in glove with the government to make sure their government is able to deploy the latest AI tech. If AI is the gamechanger people say it is, you have given an incredible leg up to China vs the West.

4) If employees at companies keep on pushing this anti-government, anti-military thing with AI, there is a good chance that it will get reclassified as a sensitive technology and heavily regulated.


> 3) International relationships really are a competition. I am sure Chinese AI companies are working hand in glove with the government to make sure their government is able to deploy the latest AI tech. If AI is the gamechanger people say it is, you have given an incredible leg up to China vs the West.

I think this is a really important point, here.

I think the paranoid line of thinking is helpful in this instance:

Who benefits from the US government not having access to AI technologies?

- The argument on the face of it seems to be that we, the people of this world and in the US, benefit thanks to keeping these tools out of our government's hands

- The unmentioned argument is benefits go to China, Russia, and other countries whose tech sectors cooperate with the government

I am not suggesting that the article was written by China or Russia to hurt our government - that certainly sounds like "conspiracy theory"; however, I am suggesting that it is a valid concern that the US as a country may want to "keep up".


Your analysis ignores an important point: the only thing that the US government would need to gain such access, would be to stop doing the things it is being called out for. It's not that government having such tools that's inherently the problem, but the way they're going to be used here and now.


> 4) If employees at companies keep on pushing this anti-government, anti-military thing with AI, there is a good chance that it will get reclassified as a sensitive technology and heavily regulated.

The US Government will build/spur/prompt the next Googles and Amazons by pouring hundreds of millions of dollars in venture capital into new companies, targeting whatever tech they want to see exist. They had a large role in making just about every major tech company in the US possible, and they'll simply keep on with it. Google, Amazon, etc. will have the US Government as an aggressive direct competitor as it hands the latest DARPA tech and In-Q-Tel funding to the next Googles. It won't get reclassified and heavily regulated; the next company will jump at the opportunity to ride the US Government money spigot to billions in riches.


A giant corporation might be evil, but where I live the government pretty much has a mandate to systematically erode the last 200 years of advancement in civil rights.

Once facial recognition data is integrated en masse, this mandate will be exponentially easier to carry out.

A corporation aiding the government in this process is essentially an extension of Corpgov and is not to be trusted with facial recognition tech anymore than the increasingly authoritarian government with whom they share a bed.


Trump won't end civil rights in the US, not unless you let him.

I'm not American, so it's not my responsibility, but you should probably put work into politics and reform of the political system.


These machinations were not set forth by Donald Trump. This erosion has been a long time coming. He is a drop in the ocean.


Governments are just people, led by politicians.

It would be far better if efforts were spent on recognizing and electing qualified and capable people to run the government the way we want it run, instead of this silly run-around with commercial companies.


> Governments are just people, led by politicians

... empowered to do things we lock up, tase, beat, sic dogs on, and/or kill other people for doing. Kind of an important distinction to make.

If you have reason to think exhorting your peer-proles to 'pick qualified and capable people' will lead to better outcomes now than it has in the past, certainly, please continue. Some of us, however, don't see that happening and prefer to explore other approaches, even if you happen to find our pessimism 'silly'.


Big corporations can also make your life miserable...

And they aren't exactly as accountable as our governments.


And their being accountable to shareholders is the avenue of accountability these groups are trying to use. Might not work, but surely it's worth a try?


Yes, it's called authority. The point of the comment is that we should pick the proper people to give that authority to, instead of fighting against them through some strange corporate proxy.

If you want an alternative, why don't you become the qualified and capable person to join the government to shape it the way you want?


And the point of my comment is that those given authority frequently do stupid, destructive things with it, and I believe in reducing the number of stupid, destructive things that happen.

You appear to believe that authority deserves deference simply because people call it 'authority'. I don't. There exist legitimate and illegitimate forms. The illegitimate forms are dangerous, and sometimes stupid, and deserve to be opposed.

And who decides? I do. I am arrogant enough to believe individuals don't need those in authority to make moral judgements for them.


And we're working on that, too. But it's a slow process, and meanwhile there's damage to mitigate.


Slow doesn't matter if it's effective.

This current approach is rarely that, and usually amounts to nothing more than noise. The government is not deterred by a single corporation, no matter how big.


The goal is not to deter them. The goal is to make it more difficult for them to do what they're doing, and thereby ruin fewer people's lives.

And yes, slow absolutely does matter, if you're one of the people caught up in the grinder.


I understand this is a practical reaction to what most of us would agree is an egregious use of the tech, but wouldn't it make more sense to stop the problem at its root? I'm talking about surveillance. Why aren't we complaining about camera companies selling surveillance equipment?

The gov't already has the data, and Amazon's APIs just make this a little bit easier. But they can go direct to EC2, get a couple hundred GPU-optimized instances, and do it themselves.
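
For a sense of how low that bar is, a minimal sketch with boto3 (the AMI id is a placeholder, and a real deployment would also need networking, storage, and the models themselves):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request a couple hundred GPU-optimized instances; any AWS
    # customer with sufficient account limits can do this
    ec2.run_instances(
        ImageId="ami-00000000",     # placeholder: e.g. a deep learning AMI
        InstanceType="p3.2xlarge",  # GPU-optimized instance type
        MinCount=200,
        MaxCount=200,
    )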


I don't think it matters. Third parties are going to be in this space anyway (Raytheon, Lockheed Martin, Boeing, and the foreign defense contractors), so it doesn't make a lot of sense to shut off a profitable segment.

I was just thinking how much I would like a facial recognition API for my home security system. That way I can give someone's name to the police to make it easier to track them down.
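
That API effectively exists today; a minimal sketch against Amazon Rekognition with boto3 (the filenames are hypothetical):

    import boto3

    rek = boto3.client("rekognition", region_name="us-east-1")

    # Compare a frame from the door camera against a reference photo
    with open("reference.jpg", "rb") as ref, \
         open("door_frame.jpg", "rb") as frame:
        resp = rek.compare_faces(
            SourceImage={"Bytes": ref.read()},
            TargetImage={"Bytes": frame.read()},
            SimilarityThreshold=90,
        )

    for match in resp["FaceMatches"]:
        print(match["Similarity"], match["Face"]["BoundingBox"])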


Me, I've been thinking about ALPRs (automatic license plate readers). If this stuff is going to be available, I think everyone should have it. This is one of those projects I am just never going to get around to, but someone else, please do it - it shouldn't be that tough to build a cheap Raspberry-Pi-ish automatic plate reader anyone can point out their window to capture every plate going by. With a network of them, everyone can know this stuff, all the time, for free.

I find that to be a frightening outcome, but I think it is a less frightening one than only certain people having access. We have to learn how to live with this tech; far better not to allow it to be a tool of selective control while we're adapting. And it can be a more honest resource: police and other official vehicles' movements will be visible to the people paying for those cars, whereas anything official will have official holes.


Well, I think the hard part is done here:

http://www.openalpr.com/cloud-api.html
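
And for the self-hosted plate-reader idea upthread, a minimal sketch with the open-source OpenALPR Python bindings (the paths are typical Linux package defaults; the image name is hypothetical):

    from openalpr import Alpr

    # Country plate pattern, config file, and runtime data directory
    alpr = Alpr("us", "/etc/openalpr/openalpr.conf",
                "/usr/share/openalpr/runtime_data")
    if not alpr.is_loaded():
        raise RuntimeError("Error loading OpenALPR")

    alpr.set_top_n(3)  # keep the 3 best candidate reads per plate

    # One frame from the window-facing camera
    for plate in alpr.recognize_file("frame.jpg")["results"]:
        print(plate["plate"], plate["confidence"])

    alpr.unload()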


This is interesting. The marketing has certainly been disturbing, reinforcing the worst uses for this sort of tech and overstating its usefulness for those cases. But ... unlike Google's contract work with the Defense Department, I'm not sure Amazon could legally get away with refusing to sell an on-demand product to a government agency. This service is nowhere close to the worst thing that's running on Amazon's servers on behalf of governments. The software exists and could easily be set up to run directly by governments or their contractors on Amazon's infrastructure (I guarantee that is already going on far more extensively than Amazon's own specific service will ever grow to). Ditto for GCP and Azure and everything else.

I sympathize with these investors, truly, but the only solution here would be to get out of the cloud computing business altogether.


I work in a department at Microsoft that does a lot of Machine Learning and AI work for MS customers. I'm proud that our organization regularly turns down opportunities because of ethical considerations. We have a concrete set of ethical standards, and if a project makes it past that review and into your hands, and you still feel uncomfortable, you are strongly encouraged to flag it for second review in committee. The whole system is called AETHER, for your googling - er, Binging - pleasure.

The ground rule that would apply here is, we don't do anything that makes choices about restricting or injuring humans. So no perp detection systems, no autonomous weapons of any kind... Not even a camera to disable the ignition if you're detected as drunk.


This is a perfect example of social governance: stockholders feeling empowered to control how a company operates. We need to see more of this type of governance. Ultimately, shareholders should just vote with their dollars: sell a stock you don't like, or don't invest in the first place.


I agree with you, but I think we disagree. What I mean is that this article specifies 19 shareholders without mentioning their share. It's a very safe bet that it's not even a significant fraction of a percent. In other words, in 'voting with their dollar' these individuals would have a literally unnoticeable impact.

When a substantial chunk of society feels something ought to be changed, then I certainly agree they have the right to make such change happen. But we live in a time where headlines, such as this one, are constantly driven by irrelevantly small numbers. For instance, traditional fear mongering was stuff along the lines of 'video games are driving kids to kill.' What that really translates to is 'one mentally unstable kid who happened to play a lot of video games killed another kid, primarily because he was mentally unstable.'

The conflation of the views/actions of society with the views/actions of the individual is counterproductive. If you polled people, weighted by their share of Amazon, what percent would agree with the statement 'I demand Amazon stop selling facial recognition to governments'? On that we can only speculate, but I imagine it would probably be quite small.


I'm starting to think we need something like ICAN for facial recognition (FR), promoting the regulation of FR worldwide. There are few valid use cases where FR is truly helpful. Most of today's applications are things like "access control" and "tag your friends here", i.e. banalities which are either unnecessary or can be solved with simpler tech.

On the other hand, the risks are incredible. FR in combination with AR systems could completely eradicate privacy. Autocratic governments could use it for new levels of surveillance. Military usage could perfect autonomous weapons, etc.

It's just not worth it and should be internationally outlawed, just like A-bombs and chemical warfare.


Preventing facial recognition techniques from spreading would seemingly require some attempt at keeping them secret. But aren't AI techniques often openly published in scientific papers? And it seems likely that there is also open source software available?


In the past 24-hour news cycle I have seen:

- Microsoft catching heat for their deal to allow ICE to use their capabilities

- Amazon catching heat for their deal to allow government to use facial recognition

- A photo of Satya Nadella, Jeff Bezos and Trump sitting next to each other and shaking hands

What's the play here?


Money.

- Companies making money by developing and licensing software.

- Media making money by creating sensationalistic headlines.


PRISM?


Time for Amazon to divest this division into a wholly owned subsidiary!


Yeah, but Facebook's software could contain tagged data of actual people.


What percentage of shares does this group of shareholders hold?


It's weird that we have so many people directing their protests towards the tech companies and not the government itself. The whole premise of capitalism is that profit-motivated corporations will inevitably succumb to evil temptations, and that it's the government's job to regulate corporations and police their moral behavior.

Instead, we have the complete opposite. Americans expect the government/military to engage in morally unacceptable behavior, and are looking to corporations to "enforce" morality instead, by denying the government specific services.

Not that there is anything wrong with tech companies and workers doing their part. But when we see corporations as the best catalyst for pro-social outcomes, and not the government itself, there's something very wrong with our political system.


Can't find the actual letter. Anyone have a link?


Too. Late.


The only effective way to achieve this is to get a majority of the board to vote for it. Everything else is mere virtue signalling without effect.


> Everything else is mere virtue signalling without effect.

By that definition, isn’t almost every form of non-decisive influence “virtue signaling”? Virtue itself? Marching against a war? Voting for a candidate you like the policies of, but who doesn’t have good media coverage? Declining to purchase unethical goods?


Yes. That's the objective behind the term "virtue signaling": to call into question the motives of anyone and everyone who claims to care about something, so that it becomes deeply uncool to care about things, so that people will stop trying to make the world a better place.

Even if you presented a well-funded 100-step plan for changing the status quo, prepare to have it leveled by the sneer of "you're just virtue signaling." You can never prove your sincerity to anyone who uses this term, because it will never be enough.


How is legitimacy relevant here? I'm not trying to sow doubt; I'm trying to make the honest observation that getting a result requires a majority of the board to vote for the measure.


I don't see any issue with the phrase "virtue signaling" per se. My issue is with the implicit or explicit statement that it is useless. Virtue signaling is cool. People who say it with a mocking tone of voice aren't.


Good. "We don't want you doing shady things that damage the brand's reputation in order to make a buck" is the kind of message shareholders should be sending to all companies.

Edit: Apparently I'm wrong and shareholders should be pushing for profits regardless of means?

I know Amazon isn't exactly known for treating people well, but they're no Monsanto or Blackwater either. Amazon sells to consumers and is therefore dependent on public opinion to make sales. With increasing public awareness of just how pervasive consumer tracking is, it's becoming a touchy subject. If Amazon wants to keep expanding its market share in home goods, then anything that risks associating its brand with government tracking is a very high risk. That would make me uncomfortable if I had Amazon shares.


The US government is a huge customer of AWS, and AWS has deliberately built entire regions to cater to them: https://aws.amazon.com/blogs/publicsector/announcing-the-new...

https://aws.amazon.com/govcloud-us/



