I used this with some personal video surveillance to identify some thieves who were casing driveways to find cars to get into. Instead of wasting hours looking at footage, I was able to give the police excerpts that helped them identify the perp. Also, because of the circumstances and footage, they were able to charge him with a more serious crime than petit larceny, which means he won't be back next week.
It's an effective tool. If you're allowed to have a camera, you should be allowed to look at it with whatever means you want.
The reality is that the horse is out of the barn. The Federal government has been doing this type of surveillance, using either facial or LPR data, on interstate corridors since at least the 90s.
I suspect that total involuntary surveillance is not something the majority of the (United States) populace signed up for.
The legal Expectation of Privacy[0] continues to be important, even if there are people who think that it was a legal handwave that only existed due to lack of adequate technology.
> I suspect that total involuntary surveillance is not something the majority of the (United States) populace signed up for.
I certainly agree, but just wait until They/Them/Those start telling us this is part of some sacrosanct "social contract," and those who don't like it can simply leave.
TL;DR: You could renounce your US citizenship for free before 2010. Then the fee went up to $450. In 2014 it was raised to $2,350.
On top of the fee is the exit tax wherein you may be required to pay a tax on the entirety of any gains on your assets as if you had sold them all that day. And, further, if they think you are renouncing to avoid taxes they claim that you must continue to pay taxes for 10 years after you renounce.
Because the advancement of technology changes the impact of certain classes of surveillance and tracking. Existing law is based on a balance that no longer exists. Bruce Schneier has a nice essay on this and license plates:
In short, having a unique id and database of car owners may be okay when the police need to manually look up information. But when they can have every plate in the city regularly scanned by robots and the data fed to an AI to look for patterns, that's not even close to the same thing.
My point isn’t that pervasive surveillance is great. It’s that if you’re allowed to have cameras everywhere, why would you limit useful processing of the data?
How do you regulate it? If you can’t hire Amazon to do LPR or facial recognition, can you pay a guy to recognize people walking down the street and note their patterns? Do you license/permit camera placement?
The feds had thermal cameras hooked up to a helicopter, flew over some guy's house, saw thermal signatures looking a lot like someone growing marijuana.
The Supreme Court threw this out as a warrantless search, because people are allowed an expectation of privacy against tech that "everyday people" don't have.
IANAL, but if you don't buy this logic, then search warrants are mostly limited by tech? Like if the gov't invented binoculars that could see through walls, does that mean they could just look into your house without a warrant?
The search warrant is about intent and expectations. And while "police officer checks records manually of a person" is kinda expected, "police officer x-rays an entire crowd instantly" seems a bit less so.
We _do_ put up signs about cameras, and there's rules about recording phone conversations too. It sure feels like there's case law around a lot of this.
> It’s that if you’re allowed to have cameras everywhere, why would you limit useful processing of the data?
So that a company doesn't end up doing mass surveillance.
During bad times, you'd see crackdowns and suppression enabled by such a system. If you look at the more authoritarian countries of the present, you can get a picture: places like China or Russia, where the internet is censored and monitored and mass surveillance is openly practiced. Russia recently passed the Yarovaya law:
"Internet and telecom companies are required to store communications and metadata for 6 months to 3 years. They are required to disclose them, as well as "all other information necessary," to authorities on request and without a court order. It also requires email and messaging service providers to have cryptographic backdoors. The surveillance regulations will take effect on 1 July 2018."
It probably goes without saying that they use this tool to get a tighter grip on their population to maintain power.
Use vague words in the statutes like "reasonable" and have jurisprudence define how it can be used in practice. At least that's how the law already works.
Even though private surveillance is pervasive in the US, law enforcement is still sometimes rejected from using some of it when it flies in the face of the privacy standards to which we are accustomed[1].
>It’s that if you’re allowed to have cameras everywhere, why would you limit useful processing of the data?
Because what matters is what the result of that processing is. This is not some abstract concept; what's actually being done with the data matters.
>How do you regulate it? If you can’t hire Amazon to do LPR or facial recognition, can you pay a guy to recognize people walking down the street and note their patterns?
That's a silly analogy and I think you know it. Again, this is not some academic debate. The difference between being able to process terabytes of data automatically and hiring a guy to watch a corner and take notes is the difference between the Hubble telescope and me in my backyard with a pair of binoculars.
"Whatever means you want" just has to not include matching to a database of suspicious people that are not currently under investigation for a real crime. Or to something with a floating definition like "terrorist". Which it won't. Just because Washington County says it's not currently being used for mass surveillance doesn't mean it won't be in the future or for other departments.
You don't get the full extent of the damage. You are making the assumption that the government and police departments are the only ones that will do mass-surveillance.
You could not be more wrong. Everyone will do mass surveillance. Everyone. The police are the worst, because they will legally use violence against (true AND false) matches, with limited to no recourse for their victims. But the cat's out of the bag. Legal or not, mass surveillance is not going to happen ...
It is happening.
They will even sell that information on. "Alexa, which bar has the most blondes going into it?" coming up soon. Or worse: "Alexa, can you tell me who my daughter's boyfriend is, with face recognition?" "That'll be $0.45, buy? $1 extra if you want to know if they have sex. Included for free with Prime! Order that instead?"
I know it’s cliché to bring up in these examples, but there was an unintended negative outcome to Germany requiring people to register and pay some of their taxes to their church before WWII. The Nazis used that information to round up and brand Jewish people, and ultimately to kill them. It is hard to imagine this tech will solely be used for good forever.
Los Angeles police already use gang databases to fault people for being in the "wrong" part of the city based on their database entry. This stuff is gonna turn into some kinda crazy 21st century internet apartheid.
>If you're allowed to have a camera, you should be allowed to look at it with whatever means you want.
You aren't allowed to look into people's houses with it, or into schools, up girls' skirts, inside some banks, etc.
You may have the right to own a camera -- I have the right to go around swinging my fists into thin air, but my right to swing my fists stops where your head begins.
Ahh, the old "if you have nothing to hide you have nothing to fear" defense of mass surveillance. Don't study history much, do you?
If you believe this will only ever be used to stop thieves and people that victimize others you are naive at an unfathomable level
>The Federal government has been doing this type of surveillance either using facial or LPR data on interstate corridors since at least the 90s.
lol... You believe that is justification or a defense. The fact that the Federal Government is doing something that people that value privacy oppose does not mean we should allow local law enforcement to do it even more efficiently
It does suggest, however, that a slippery-slope argument of "More surveillance plunges us immediately into a 1984-esque dystopia" has a big [citation needed] banner hovering over it.
I feel we are already in a 1984-style surveillance state; they are just better at hiding it than the book's government.
So no, I do not think it is a "slippery slope" at all; we have already slid down the slope and arrived at the bottom. We long ago surpassed the level of government surveillance I believe is ethical, constitutional, or proper for government.
If the government having a database filled with everyone's biometric data, which it can then use to follow, trace, and monitor your every movement, is not a "surveillance state," I shudder to think what it would take for you to classify a system as one. A brain implant that reads your every thought?
The boring observation I can make is that the folks living in that dystopia were very good at not thinking of it as a dystopia. I know that's a circular epistemic wormhole to get into, but simply liking the authoritarian surveillance you live in doesn't remove the features that make it authoritarian and intimately surveilling you.
Right, but doesn't it remove the notion that the authoritarian, intimate-surveillance features are necessarily bad things?
To stay in dystopian sci-fi: The computers on the Enterprise (TNG at least) were aware of crewmember's positions and health situation at all times as long as they were aboard ship. Do we imagine they felt spied upon? It seems instead being born into a world where surveillance is ubiquitous and benign-to-benevolent, they were fine with it.
Maybe we can build that world instead of assuming the end-result is 1984 every time.
The Enterprise was a flagship Starship, the people aboard were the best of the Starfleet Academy, aboard for a limited duration pseudo-military expedition.
This is akin to saying "humans on board the ISS are monitored intensely; do they feel spied upon? Therefore we should all be willing and able to live for a lifetime with that level of monitoring."
And it doesn't take into account that ISS visits are undertaken by very unusual, pre-selected people, for limited durations, monitored by a single organization, mostly for the purposes of research and protecting its investment.
Which is completely not anything like reality for most people, where things are operated by goodness knows who, for goodness knows what reasons, and barely or not at all regulated against potential abuse.
>>Maybe we can build that world instead of assuming the end-result is 1984 every time.
The key difference between the two is resource limitation.
In 1984, society had limited resources; in Star Trek, it was a post-scarcity civilization.
Star Trek is not possible while we have limited resources. Invent replicator tech and you might be able to create a Star Trek society; no replicators, no Star Trek.
It is sad that people have the delusion they are really free, unable to see that they are actually in a gilded cage.
I am sure there were many in Oceania perfectly content with their life, thinking they were free because they never butted up against the edges of their cage, they never tested the limits of their "freedom", they were "respectable" citizens doing as the government said
> I am sure there were many in Oceania perfectly content with their life, thinking they were free because they never butted up against the edges of their cage
I do not recall getting that impression when I read 1984. Rather, the majority of the proles were absolutely under the figurative boot, and kept there via fear. Even the Inner Party were aware of the edges of their cage -- either they tested them and got reconditioned, or they were all-in on the dystopia, but I don't think that counts as thinking that they were free. Rather, they traded freedom for power over others.
In 1984 they had audio/video monitoring inside every home. Today governments only have such monitoring in select public areas. That's a significant distinction.
1. That the warrant process today is more than a superficial formality that gives people the illusion of a check on government power. In reality, to get a warrant they simply have to fill out a form; there is hardly ever any pushback, and it has been proven countless times that law enforcement lies by omission, stretches the truth, or does any number of other dishonest, unethical things to get warrants -- on top of the fact that most judges simply approve 99.9% of all warrant applications. "Getting a warrant" should not be viewed as some panacea of government restraint.
2. That if they "got a warrant," people like me would approve of the government having access to that data -- that "getting a warrant" is a valid justification for obtaining it.
Only if you consider privacy as a means to an end rather than a goal by itself. Facial recognition has a negative effect on literally everyone's privacy.
People who have been damaged by random acts of crime may very well find more peace-of-mind sacrificing a bit of privacy to not have those acts repeated on their persons.
Should we install security cameras in public toilets? It would certainly reduce crimes such as assaults that occur in public toilets. There would be very little impact on our personal security.
Privacy is a fundamental aspect of our quality of life. Privacy is not of secondary importance to security but exists alongside it. If that were not so, there would be plenty of places we could put surveillance cameras in order to improve our security.
However, people install security cameras in their own homes all the time---some of which are even wired to cloud services or outside security monitoring agencies.
I agree that privacy exists alongside security. Where the balance-point is best placed varies from person to person. In public spaces not scoped for private activities (such as the streets of a city), the balance point has to be agreed upon by the citizenry in general and is almost certainly somewhere past both "toilet stall" and "personal residence" in terms of collective value for surveillance.
After all, we already put up security cameras on street corners.
Until we have 100% transparency into situations where executives, operatives, etc. meet behind closed doors to screw over thousands if not millions of people, they should not be allowed to have any of those privileges over us.
When you take a GPS from an unlocked car, you get charged with petit larceny, which is an appearance ticket and $50-500 fine.
When you get charged with a higher crime, you get a few weeks in jail. In my case, the offender was a known "frequent flyer" who had victimized my neighborhood for months and had been arrested 6 times. He was charged with a more serious misdemeanor and sentenced to 30 days in jail. He has not returned.
My objective as a citizen is to peacefully enjoy my home. Some lowlife doesn’t have the right to pillage our yards.
It's funny how many people come out in defense of Amazon, compared to when the same thing is done in China. It reminds me of a wise saying I heard somewhere: nothing seems really sinister or evil when it's happening in your own backyard :)
Monitoring heavily trafficked interstate corridors is one thing. This is entirely different. Amazon/tech is at a point where we could get blanket surveillance on most streets, in our homes, in our offices, etc. It's the difference between monitoring areas/locations and monitoring people (us).
But I agree that the pandora's box has been opened and there is no going back.
When all they probably needed was a real drug rehab program.
Edit: Just saying because almost all petty thievery is committed to get goods to trade for drugs.
Edit 2: Also, I'm not saying this wasn't a good use of home surveillance, or whatever the parent's comment referred to, although (s)he is probably wrong that they won't be back next week. I have an acquaintance who, having gotten hooked on meth, began stealing cars. Once she was caught, I thought she'd be put in prison or a drug rehab program. No. She basically served no time, because the judge and everyone else knew she was just a junkie, there was no funding for rehab, and there was little room in prison.
Are you aware of how much money it costs society to send someone to prison? A LOT more than a rehab program.
- You have direct cost of imprisonment ~$60k a year
- The costs of prosecution (no idea what this costs though)
- Once someone has been to prison they are much more likely to be involved in crime again, and become a repeat customer of the industrial prison complex
- We lose a taxpayer
- The perp will have no retirement fund, thus the government is on the hook again during retirement years.
- Increased insurance costs for everyone else due to a higher crime rate
I don't have the time or resources to do an accurate cost estimation, but we can do some basic napkin math. Let's say 10 years in prison during their lifetime and a loss of $15k in tax per year over a 30-year period (due to time spent in prison and loss of earning potential once they're out).
(10 * $60k) + (30 * $15k) = $1,050,000 loss to the State. That's a shit load of money. Crime prevention looks a lot cheaper now doesn't it?
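To make the napkin math reproducible, here's the same estimate as a tiny Python sketch. All figures are the rough assumptions from above, not real data:

```python
# Napkin math: rough lifetime cost to the state of one repeat offender,
# using the assumed figures from the comment above (not real data).
years_in_prison = 10
cost_per_prison_year = 60_000   # assumed direct cost of imprisonment per year
lost_tax_years = 30
lost_tax_per_year = 15_000      # assumed lost tax revenue per year

total_loss = (years_in_prison * cost_per_prison_year
              + lost_tax_years * lost_tax_per_year)
print(f"${total_loss:,}")  # $1,050,000
```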
Many people, unfortunately, want blood and 'justice', not helping hands and forgiveness for thieves. Perhaps if we highlight just how much 'torture' addicts feel as they come off drugs, hardliners might be convinced to support drug rehab as a fully funded public policy.
1. That's a non-sequitur. Drug rehab programs that are part of prison diversion schemes are implicitly a matter of public policy. A single individual writing a single check is meaningless in that context.
2. "Saint or GTFO" is a debate tactic used by the foolish who think their opponent is a bigger fool.
Or we could curb the landlords and build some decent real public housing for a lot of people that is not the prison system. Have you considered "hacking" the problem of these landowners who charge too much?
Unfortunately, knowing our luck it will barely be more than an AA/NA meeting, perhaps with random UA testing, that some contractor charges the government Passages Malibu rates for...
I find this article a bit funny, and see it mostly as hype.
I worked very recently on a large internal face detection service for a legal and compliance application at a large US company.
Being pragmatic engineers, the first thing we did was to pilot test Rekognition against an in-house prototype built by modifying some open source deep learning approaches.
Rekognition performed very poorly compared to our prototype (which was not state of the art or anything, just good enough for our company’s bespoke compliance problem). It was just that bad. Overall intersection-over-union scores on our training & acceptance data were really poor, and perhaps worse, the latency of the requests to Rekognition was abysmal and completely unusable for us, especially for images that had a large number of identifiable faces (Rekognition service calls usually timed out at 5 seconds and returned however many faces it could process).
Our very simple prototype could fully process upwards of 30 faces with 1 second of latency, and so if using a similar 5 second timeout limit, we could process around an order of magnitude more faces (which happens more often than you think, e.g. crowds, people in public places).
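For anyone unfamiliar with the intersection-over-union metric mentioned above, here is a minimal sketch of how it's computed for two detection boxes. The `(x1, y1, x2, y2)` corner format is my assumption for illustration, not necessarily what this pipeline used:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    # Corners of the intersection rectangle (empty if the boxes don't overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted box that half-overlaps the ground truth scores IoU = 1/3:
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
```

A detector is usually scored by matching each predicted box to ground truth and counting a detection as correct only above some IoU threshold (0.5 is a common choice), so uniformly low IoU means sloppy box placement across the board.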
I just could not believe that the web service, backed by a team surely comprised of at least hundreds of top Amazon engineers, was so slow and inaccurate.
We didn’t even bother trying to price out how expensive it would be for our expected amount of throughput, because we knew from a simple accuracy and per-request latency point of view, Rekognition just couldn’t cut it.
As I’ve moved into a more lead role with machine learning teams, this experience really opened my eyes to how seductive cloud solutions can appear to non-technical managers.
They will act like severe bean counters when they see a high salary request from a really experienced machine learning engineer, but they won’t even bother with due diligence or estimating cost-per-request or cost-per-unit-accuracy for dropping money on a cloud service.
Instead of empowering a small team to do something much more cost-effective for the company, they’ll balk at their internal engineers while setting a pile of money ablaze.
> Rekognition performed very poorly compared to our prototype (which was not state of the art or anything, just good enough for our company’s bespoke compliance problem). It was just that bad. Overall intersection-over-union scores on our training & acceptance data were really poor, and perhaps worse, the latency of the requests to Rekognition was abysmal and completely unusable for us, especially for images that had a large number of identifiable faces (Rekognition service calls usually timed out at 5 seconds and returned however many faces it could process).
Criticizing the technology's performance is tangential to the emergence of the technology on public streets, and its adoption by local governments for surveillance. It does not matter what performance it has now; I argue and predict it will improve, and then your statement becomes moot.
The issue is not how well it performs. The issue is allowing it to happen, and without trust from the people.
My fear is: government wants surveillance, and businesses pursue revenue, so there is no incentive among those two to question their actions.
Unfortunately, the "cost" of poor performance is sometimes pushed onto the public, or worse, onto subsets of the public. Consider the million-name-long no-fly list of the last decade: a nice, juicy, profitable project for corporations, but the cost was borne by a subset of mostly innocent people who sometimes lost all ability to fly. If the costs are compartmentalized enough, sometimes the general public doesn't care (see: NYC's stop-and-frisk policy).
I don't see much benefit to pushing back against the technologies themselves; they're here, they're easy to apply, and we're well past the point where trying to ban them will just push them underground. Misapplication of the technologies and erosion of fundamentals of evidence-based action are a real problem.
Contrast the California police using genealogy datasets to narrow the realm of potential "John Doe" serial killers. As court evidence? Shouldn't be admissible. As a tool to focus the search to a subset of the population? Extremely good idea.
Similarly, the face-recognition camera can tell the authorities where to turn their eyes but should never be the be-all and end-all of adjudication; they're too easy to fool (as Maryland discovered when it started auto-ticketing cars that blew through red lights at camera-tagged intersections, only to find a wave of high-schoolers thinking it was hilarious to tape a photo of an unliked teacher's license plate across their own and then blow through an empty intersection at 80 MPH at 2AM).
I agree with you generally. But I differ in the sense that my default belief is that these technologies are over hyped and cannot provide the scale of performance or detail of prediction that could make them effective as oppressive surveillance tools, at least not for a long time.
Mostly they are “surveillance theater” akin to useless (but costly) TSA surveillance. The way it interacts with society is by being vague and scary, not by actually enabling truly scary scales of surveillance.
Another way to look at it is that the government buys these contracts and services and rarely holds the provider accountable, perhaps especially in fields like machine learning where people can more easily make political arguments out of the debate over whether it’s working or not.
Since agencies are happy to drop big money even when the technology does not work then providers like Rekognition actually do not have incentives to invest in making the scale of accurate surveillance larger. Why would they when they’ll make more money yanking an engineer off of a material project like that and slotting them to work on some braindead TED talk sort of demoware that gets them the next big contract?
I worry about the spread of these surveillance contracts more as a taxpayer seeing government money wasted that could help people through much simpler means than I do as a concerned citizen who sees the danger in Orwellian surveillance.
These surveillance theater and immature or improperly applied technologies do more than just waste tax dollars. For decades the courts allowed flawed hair analysis to be used as evidence. When it could result in innocent people being sent to prison, a fuss must be made that's stronger than just worrying about what it will cost.
That’s a fair point, which makes me feel even more convinced that we should be more focused on whether the technology actually works than whether it’s Orwellian.
Using Rekognition for this would be like hiring a fraudster construction company to build a local bridge. The bridge could collapse, nobody did publicly verifiable due diligence, and nobody would be held accountable. This could be the automated surveillance equivalent.
>my default belief is that these technologies are over hyped and cannot provide the scale of performance or detail of prediction that could make them effective as oppressive surveillance tools, at least not for a long time.
You've never heard of the Chinese police state's facial surveillance, which they're using to throw countless Uighurs in jail for stuff like jaywalking, I suppose?
>as a concerned citizen who sees the danger in Orwellian surveillance.
you don't see the danger because your paycheck relies on you not seeing it. the rest of us are not so willfully blind.
It seems misguided to dismiss the technology itself, rather than scrutinizing specific uses. It reminds me of David Deutsch's book The Beginning of Infinity, which has an extended anecdote about a scientist who deeply objected to the development of color TVs because "people didn't need them" and he could only think of examples where bad actors would use them. Fast forward several decades, and color TVs are used for all sorts of important outcomes, like life-saving medical imaging technology, or sharing video of an important life moment with someone who is too sick to travel.
You could imagine similar things for face recognition. An Alzheimer's patient who has a large digital photo library of family members and events, and would like some information retrieval tool to help search for a specific memory, family member, etc. Or someone who takes stock photographs and wants to photoshop bystanders out of the photos so that nobody's image rights are infringed when submitting the stock photo for publication-- might want face detection to aid in the person-removal task.
I agree completely with scrutinizing motives and refusing to work on tech when the goal is unethical. But I don't agree this allows us to point at a generic, amoral technology in and of itself and say it's conceptually unethical for existing. Maybe there would be some extreme or contrived cases when I would agree to that, but even this face recognition stuff is very far from it.
It does, however, suggest that whether Amazon's doing it today is irrelevant, if any org with the resources of mlthoughts2018's org could cobble together a system that performs sufficiently to their needs with open-source solutions and some locally-grown or publicly-sourced training data.
This barn is devoid of horses and is getting chilly from all the airflow.
> backed by a team surely comprised of at least hundreds of top Amazon engineers, was so slow and inaccurate.
You'd be surprised. Many many AWS services are launched by a single 2-pizza team (8-10 developers). It's in Amazon's DNA to act like a startup, ship something quickly, collect feedback, then iterate, and scale.
Also, from what I’ve been able to deduce, the AWS services are all just built on top of EC2 [un]modified open-source software until they reach a level of complexity where they roll their own, which leads to lots of issues that can take years to fix. Some things are telling, like banning two consecutive hyphens because something somewhere in that pipeline is shelling out.
Also, for the record, AWS is hardly top of the industry in terms of talent. It's not Google or Facebook. I've never heard of an ML researcher joining Amazon when they could have joined Google/FB/Apple.
I've known plenty of people that have left Google/FB/Apple for Amazon. And I know plenty of people who have left Amazon for Google/FB/Apple. The biggest tech companies are quite incestuous. Some people do thrive in one environment, some in others. They all pay top dollar.
It can happen both ways. Note that Google is much more saturated with ML talent, so unless you are super top-notch, some people will take an Amazon offer for the bigger impact. I personally know a few people, competitive in the industry, who had Google offers or left Google for Amazon.
No doubt, the flow is usually Amazon -> Google/Facebook, because for a mid-level engineer/scientist money talks :), and Google/FB generally have a much more open publication policy, while Amazon is neurotically secretive. Apple is not in the picture; I think the internal engineering culture there is very problematic (no common codebase, severe NIH syndrome), and aside from the money it is a subpar place to grow.
Indeed! I recently tried Rekognition's "celebrity detection" and was surprised at the number of false positives -- "doppelgänger detection" would have been more like it.
That said, it could flag faces for manual screening. Or cross reference other data (License plates, Geo data from phone) to boost confidence. Also, once they get more data as part of some of these deals they'll get better.
The scenario-based costs table at the bottom is particularly interesting. If you need to run the detectors on all incoming images of a site ingesting, say, ~10 million images per week, you're looking at nearly $15,000/week, or close to $800K/year with AWS, and nearly $1.2MM/year with Google, and either way you'll still probably need to employ some type of backend engineering team to maintain the wrappers for calling it and manipulating the response data.
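As a sanity check on those figures (using the per-week numbers quoted above, not current AWS or Google list prices):

```python
# Sanity-check the scenario above: ~10M images/week at ~$15k/week on AWS.
images_per_week = 10_000_000
aws_cost_per_week = 15_000          # the ~$15,000/week figure quoted above

aws_cost_per_year = aws_cost_per_week * 52
print(f"${aws_cost_per_year:,}/year")  # $780,000/year -> "close to $800K"

# Implied per-image pricing at that rate:
per_thousand = aws_cost_per_week / images_per_week * 1000
print(f"${per_thousand:.2f} per 1,000 images")  # $1.50 per 1,000 images
```

And that line item still excludes the backend team needed to wrap, call, and post-process the service, which is exactly the point.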
We were able to solve this in-house using only about 15 8-core servers (we used a few general GPU servers for training, but found we only needed CPU machines at runtime, and did not suffer the poor latency of the AWS calls), with quite a lot of redundancy for traffic spikes and a pretty easy deployment system for adding or removing nodes. And it was only one of several dozen machine learning projects going on, so whatever portion of the total cost of operation could be allocated to the salary of a machine learning team was further amortized across many projects -- for example, the cost of GPU machines for model training was shared across a variety of them.
Interesting data point on its performance, but the reason decision makers might be cloud-happy is that Amazon has the resources to scale this to thousands of RPS and to develop the backend codebase through hundreds of engineers' contributions over many years, even as key talent comes and goes. The same might not be true of your prototype, especially if the core competency of your business is sales or legal compliance, not distributed computing platforms.
Our team was just a general machine learning team with no expertise in compliance issues, and only one or two of us had experience in face detection. We only cared about providing a web service that gave good face detection performance (the downstream consumer was the team that had to then take the face detection output and do business logic with it for compliance).
> amazon has the resources to scale this to thousands of RPS
Actually, this part is mostly straightforward. Not easy, but straightforward, and Amazon has already solved this part for most things. Our ML team did have expertise in building very large-scale services (our company operated a large ecommerce web store, so our traffic was actually very high: top-500 Alexa rank), but our concern over Amazon had nothing to do with whether it could handle enough requests per second.
But this also misses the point. The latency of a single request was too poor for a variety of natural images (crowd scenes, public streets, and so on: exactly the types of images relevant for many surveillance applications). It was not a throughput issue, unless you expect consumer teams to do something like chopping images up into sub-images, replacing one Rekognition request with a bunch of sub-image requests, and then post-processing to stitch the results together and account for the error introduced by the choice of where to split the original image. Yikes.
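For what it's worth, the tiling workaround described above might look something like this sketch; the `detect` callback stands in for a per-tile API call, and all names are illustrative, not a real Rekognition client API:

```python
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def tile_detect(width: int, height: int, tile: int, overlap: int,
                detect: Callable[[int, int, int, int], List[Box]]) -> List[Box]:
    """Run `detect` on each tile and translate its boxes back to
    full-image coordinates. Overlap reduces (but doesn't eliminate)
    faces cut by tile seams; deduplicating boxes that appear in two
    tiles is omitted for brevity."""
    boxes: List[Box] = []
    step = tile - overlap
    for ty in range(0, max(height - overlap, 1), step):
        for tx in range(0, max(width - overlap, 1), step):
            for (x, y, w, h) in detect(tx, ty, tile, tile):
                boxes.append((tx + x, ty + y, w, h))
    return boxes

# Example: a 200x200 frame in 100x100 tiles, with a stub detector that
# "finds" one face at (1, 2) in every tile it is given.
demo = tile_detect(200, 200, tile=100, overlap=0,
                   detect=lambda tx, ty, w, h: [(1, 2, 10, 10)])
```

Note that each tile becomes a separately billed request, and faces straddling tile seams still need overlap handling and deduplication, which is exactly the post-processing burden being objected to.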
It's not an issue of "scaling it up"; it's an issue that the fundamental machine learning algorithm underpinning it had latency that was too high, likely because it was a deep neural network combined with a bunch of post-processing gadgets that are conceptually not good enough for bespoke use cases, things which engineers on a centralized platform have little incentive to care about.
The real question is whether Amazon has incentives to make this part better, and if so, how? Will a general one-size-fits-all face detector work for customers? Or will they need application-by-application customer-specific tailored models (and if so, why aren't customers savvy enough to realize that the all-in costs to build and operate it in-house are so demonstrably cheaper than Amazon, and might be the only way to meet specific performance goals)?
I hear this so often, that Amazon can "just scale it up", but I think if you really work in face detection and have studied the problem deeply, you know this is not right. There's a whole different bucket of problems, issues, and cost-effectiveness trade-offs going on.
As it gets customers, it will likely get more and more resources thrown at it, and get better and better.
Even if the tech may not be useful right now, it probably will be in 5 years, and unless something is done now, another large slice of the little bit of privacy that remains will be gone.
I think this is unlikely because of the nature of the problem. It's kind of like when Paul Graham said consulting is fundamentally not scalable. Throwing more people at a problem like this one only works if you have dedicated consultants for every bespoke business case, because there's no one-size-fits-all model, and the way that performance limitations interact with which kinds of face detection models you can use is very complicated.
Companies would end up paying huge premia to Amazon, essentially to have Amazon build in-house teams equivalent to, but much more expensive than, the company's own in-house team building an in-house solution.
With some forms of consulting this can make sense, because the consulting services are temporary and should set the company in a position such that for the long run, the company can maintain the solution cost-effectively.
But for something like face detection, assuming it's a pivotal service you'll need on an ongoing basis, this would become the worst sort of vendor lock-in imaginable, combined with the fact that if you don't appear to be creating growth for Amazon, you're liable to be deprioritized at any point, and at the mercy of Amazon's choices about when and how to address your bespoke need.
These specialized machine learning services don't fit the same kind of commodity model that AWS infrastructure uses, though that's currently how Amazon is pursuing them.
Don't get me wrong, Amazon is very good at making money and convincing nervous management to buy their brand. I suspect they will find ways to sell these services despite the services not being cost-effective or customizable for bespoke customer situations.
But when I point out that Amazon's machine learning services don't offer cost-effective performance, it's very different from a "just throw more devs at it" kind of problem, fundamentally.
They are NOT selling a service that identifies people from data sets other than the police's supplied data sets. It is not inappropriate for police to be able to determine who they are interacting with if that person is a known criminal.
The cat is out of the bag, and this is already something the police could do themselves. The real fear is when they are able to source commercially available information from LinkedIn, Facebook, etc. to identify everyone's whereabouts.
Pushing back this early will not have mass appeal and will not help to prevent the future, broader concerns IMHO.
> It is not inappropriate for police to be able to determine who they are interacting with if that person is a known criminal.
Everyone is a criminal. You commit multiple crimes a day if you're like most people.
There absolutely should be a clear delineation between what is possible and what is allowed when considering state surveillance.
The 4th amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."
It's right there. You have a right to be secure as a person from unreasonable searches. Massive face recognition of 'criminals' is a violation of exactly that, and should be prevented without question.
> Everyone is a criminal. You commit multiple crimes a day if you're like most people.
More importantly, pretty much everyone has a driver’s license or other government issued ID, and in most states, the police have access to your mugshot from that.
The criminality can be figured out later. The problem is real time surveillance of the entire population.
To be pedantic, it was Lavrentiy Beria[1] who is credited with that statement[2], so it was a WWII-era Soviet sentiment. Though Stalinism is also an example of a surveillance-state government known for being unpleasant.
Reasonable people can disagree on the definition of "unreasonable," however.
Is it "unreasonable" for the Amber Alert system to co-opt the eyes and attention of ten thousand citizens around a kidnapped child? Hard to say. Most people seem to think 'no.'
If that system could co-opt camera feeds instead of human attention, we could get most (all?) of the benefits of the existing Amber Alert without having to burn civilian resources on it. Wouldn't that be a strict improvement over the current system?
Is it right to track all the online dealings of people who may be connected to terrorist organizations? Plenty of reasonable people seem to think yes (see the Patriot Act).
Is it right for the government to track all the online dealings of an average citizen? Most people think no.
You may see a discrepancy here. That is why there is the saying "First they came for the socialists...". It is why, when we talk about breaking encryption and backdoors, the discussion is centered around terrorists and pedos: those are universally hated groups. The problem comes with the fact that there is spillover. (See cases like the San Bernardino shooting, or the cop sitting in jail without sentencing because they know he has child porn on an encrypted drive. By the way, they use this same tactic to frame less controversial legal cases too.)
I think if backdoors/searches/whatever could only be applied to universally hated groups and never touch a single random citizen, then there would be almost unanimous agreement that we should do it (you are not violating a "non-criminal's" rights). The problem is that technology doesn't allow us to exclusively target those kinds of groups. As tech-oriented people we understand that this spillover HAS to happen, but the average citizen (outside SV) does not understand how it works, its scale, or its ramifications. I'd like to remind everyone that 1984 wasn't about the government watching everyone at all times, but about it having the ability to. I'm not sure why we call that a dystopian society when many people readily accept the use of this technology by the government. Sure, the police have the technological ability to do facial recognition without Amazon, but technological capability is not the question. The question is: SHOULD THEY?
This is such a fascinating and interesting question. I highly recommend reading the book 'Automating Inequality'[0] if you haven't already. My initial reaction to this question was 'I certainly would prefer groups of humans making collective decisions over an opaque, proprietary, algorithm'. But then I also think back to the latest internet firestorm of the woman calling the police on black barbecuers over charcoal (this happened a block from my house). Can we actually trust people en masse to make collaborative decisions that generally avoid bias? Definitely not.
I think your comment may have changed my mind about the utility, objectivity, and overall value of amber alerts...
This is a case of "the system worked as intended." Humans are fairly auditable, can be seen, and public discourse can have its moment. Imagine if instead one of the thousands of street cams had been used to phone in the call to police. There would be no public discourse, because the actions would be invisible.
>If that system could co-opt camera feeds instead of human attention, we could get most (all?) of the benefits of the existing Amber Alert without having to burn civilian resources on it. Wouldn't that be a strict improvement over the current system?
In my area amber alerts are usually along the lines of "the non custodial parent has the child when he shouldn't". Reducing the resource cost of (and therefore threshold to utilize) that system or similar systems is not something I can get behind.
Finding and bothering specific people should be something that scales poorly enough that all the government organizations that engage in it are forced to pick and choose who they do it to. A low threshold is an invitation for abuse.
> Wouldn't that be a strict improvement over the current system?
No, that would be an implementation of a panopticon. You'll find a lot of interesting criticism of that concept, and of Bentham for trying to solve societal problems in an inhumane fashion.
I definitely feel like we're missing positive opportunities for mass surveillance. Things like being able to recognize someone in cardiac arrest, or who is at risk for a stroke. Someday mass surveillance will help as much as it hurts society.
I believe technology will continually erode privacy, and that the best we can hope for is to make sure the transparency goes both ways. The government should become more transparent to the people as the people become more transparent to the government. Like you, though, I'm afraid we'll end up with a vast and reliable surveillance system which police will be able to use, while the police themselves are still "struggling" to get their body cams to work reliably.
> Things like being able to recognize someone in cardiac arrest, or who is at risk for a stroke.
Can those things actually be done with a vanishingly low false-positive rate? If I'm walking down the sidewalk and a camera sees me suddenly grimace because a rock got into my shoe, I don't want paramedics rushing to the scene to offer me assistance for a heart attack.
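The worry above is essentially the base-rate problem: even a very accurate detector generates mostly false alarms when the event it watches for is rare. A sketch with assumed numbers:

```python
def positive_predictive_value(sensitivity: float,
                              false_positive_rate: float,
                              prevalence: float) -> float:
    """P(event actually happening | the camera raises an alarm),
    by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 99% sensitive, 1% false-alarm rate, and 1 in 100,000 passers-by
# actually in cardiac arrest (all assumed numbers for illustration):
ppv = positive_predictive_value(0.99, 0.01, 1e-5)
# ppv comes out around 0.001: roughly 999 of every 1,000 alarms false.
```

So unless the false-positive rate is driven far below the prevalence of the event, the paramedics mostly get dispatched to people with rocks in their shoes.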
I'm the last one to say "it's for the children". I don't think mass surveillance can be rolled back, though I will continue to support changing legislation that harms us. There are positives to consider if we must live in that reality. It's the best we can do if we can't go back.
At least in Canada, the new Amber-Alert-over-LTE system is mandatory, and cannot be opted out of:
"People cannot opt out of this," said CRTC spokesperson Patricia Valladao. "There is a high importance that people — want it or not — receive these alerts."[0]
I honestly do not see how the 4th amendment's text obviously applies to surveillance, especially as it pertains to being identified in public. I don't even think it matters whether or not someone is a suspected criminal. It's not a search or seizure. And only in case law are there examples of the text being interpreted in favor of things like audio recordings needing a warrant. But this isn't one of those amendments whose text is clear on an issue like public facial recognition.
Lol. The fourth amendment went out the window with the Patriot act and the Freedom act. The fourth amendment does not matter anymore and nobody gives a shit. It's not like an 'important' amendment. I mean it's not the second amendment, otherwise there would be huge lobbying groups up in arms (no pun intended).
Wait long enough and the violent solution will emerge on its own, like riots or the guillotine. Waiting and doing nothing while hoping for something ideal is a surefire way to live through the worst case scenario.
That can easily tip into a policy of never doing anything, on the basis that there is always the possibility of an unknown better solution. Nonviolent solutions are ideal, but they do depend on the existence of a public conscience sufficiently active and reliable to restrain bad actors.
They don't and they won't. Don't fool yourself: this legislation will never be backtracked. It's just political suicide, as there will ALWAYS be a bogeyman to scare people straight. You know: communists, drugs, terrorists, etc. The usual [1].
"Voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country."
I don't see anything in the post you're responding to that would constitute a search (or seizure), but simply an identification, which by itself does not meet the requirements for probable cause, and is unlikely to meet the reasonableness criteria.
By analogy, if a police officer has an excellent facial memory and they recognize someone, can they use their memory to influence their actions towards the person?
Yes, I likely commit a couple felonies a day unknowingly. Still, I've never been arrested and I don't have a mug shot. It's not inappropriate for law enforcement to automate processes that are valid and legal for them to do. They are allowed to study the mug shots but it's not reasonable for them to memorize every mugshot. However, automating their recollection via machine learning is reasonable.
The issue isn't that the police should not know about specific criminals, but that police overstep their authority and defy their own rules/laws in order to profile (sometimes racially) community members who have not been charged with crimes.
This was the purported intention of cellphone location tracking data. Instead it has been sold to marketers and data aggregators, and so poorly secured that likely for over a year it's been possible to get location data via phone number as a service.
Companies and governments have demonstrated that they are not sufficiently responsible to have these databases. HIPAA-like laws and severe criminal penalties would be a start.
Well, we have multiple court cases proving that one of the applications of the police's and CIA's power to monitor phone conversations is checking whether your (not-quite) girlfriend is "cheating" on you and, as a law enforcement officer, going after anyone she calls.
But without more government, there can't be more handouts, so ... ok ?
It is, however, inappropriate to use facial recognition technology that is fundamentally error-prone and in practice frames innocent people as known criminals. An encounter with a police officer, especially after "the computer says you're a criminal", is dangerous. And facial recognition software is profoundly bad at its task. So this amounts to nothing but putting a bunch of innocent people in the crosshairs just so cops can feel like futuristic cyberwarriors.
Why would the police only use pictures of known criminals? Unless there are negative consequences for using something they will use whatever they can get their hands on...
>They are NOT selling a service that identifies people from data sets other than the police's supplied data sets.
I don't really give a shit what tiny little corner of respectability they hold onto so they can sleep well at night while ratcheting up the surveillance state.
> The cat is out of the bag and is already something the police could do themselves
Yes. But what’s striking is that by offering this product and heavily marketing its law enforcement use, Amazon are clearly endorsing total surveillance as their preferred model of society.
I don’t imagine this will be incredibly lucrative relative to other AWS products; there are very few potential customers. Law enforcement access to data is extremely broad, data-osmosis/‘fusion’ with the private sector is rife, and the whole area is pretty much unregulated. The potential for abuse is large.
Vigilant Solutions has been running this for years. Not just cops but tow trucks, bounty hunters, all kinds of private enterprising individuals have been building a collaborative commercially accessible database on all our locations, it was already robust technology based on plates alone.
Vigilant is so well placed, I wonder if Amazon will even have a market impact...
The last useful moment to get freaked out was probably when Vigilant hired facial recognition expert Roger Rodriguez back in 2016.
The cat's long gone, fat on mice, and we don't even remember where we left the bag.
A service that does recognition-based analytics on images or video from your home security camera seems like a multi-billion-dollar idea to me.
Half the houses in my neighborhood have some sort of camera system that you can see, there are probably more that you can't. People are scared of things, there is a market there.
It's freaky 1984ish stuff to think about but normal type people have these cameras, it's only natural for there to be private databases used for analytical purposes.
... which is done as a matter of course in many countries. Hell, the Netherlands does this for all cars, always, "to send speeding tickets". They are scanning and identifying all cars, whether or not they have reason to suspect anything.
This will be a constant in the future. It has everything government wants: a gross grab of power, vast potential for abuse, and it saves a little bit of money! What more could you want?
Not necessarily. The cameras the road manager (Rijkswaterstaat) uses to monitor how busy traffic is are still out of reach for law enforcement. Law enforcement could extract a whole lot more money with those cameras if they were allowed to use them.
Weird, I've read in the news multiple times that these cameras "accidentally" caught some sort of criminal. From people wanted by the Dutch IRS to people "running from youth services". They were particularly proud of catching those people running from youth services when they ran unannounced, and before they hit the border.
You see, apparently youth services has a problem: once across the border, "crimes" like having a teacher get pissed off and report you to youth services, or having your parents divorce, which according to the Dutch should lead to a child's incarceration, are not actually crimes. You see, they feel the need to increase the safety feeling of children with divorced parents by having the police violently abduct them (search on YouTube), then dumping them, locked up, in a "group", i.e., with a set of adults who don't care about them at all (not professionals, of course; those are expensive; the normal people the kids come in contact with arrive without so much as a high school degree) and who have total and complete power over them. But then night comes, and those adults abandon the group entirely, so the kids are totally at the mercy of the older or simply more violent kids in said group. Needless to say, the rate at which rapes and sexual assaults happen in this environment is "somewhat" higher than in normal society. And why does the Dutch state do this? To make the child feel safe! Sadly, that's not a joke.
Sadly, if you were a lawyer, you would now have to advise that if you're trying to keep kids away from those services, you need to either rent a car (not sure if they surveil those yet) or physically assault the police. You laugh, but that's exactly what immigrant communities are doing in Rotterdam, and this is (granted, a small part of) the reason why they're doing it: the alternative in some cases is giving up their kids to these monsters. Again the government responds not, of course, by fixing its own problem.
Because of these sorts of things, most of the neighboring countries have become appalled at this behavior and refuse to cooperate in all but the worst cases. So naturally they're fixing it: the fix is employing high-pressure tactics to get other police forces to cooperate, plus mass surveillance of their own people.
At some point in time we Americans are going to have to come to grips with the computational abilities of the state. There are way too few safeguards in the system, and the Bill of Rights does not seem adequate to deal with the new reality that private companies can acquire a vast amount of information on us and then sell it to law enforcement. Law enforcement can use privately obtained information without a search warrant [1].
What happens with false positives? What's the recourse? This very real, pernicious circumstance is rarely well thought out in advance. The no-fly list is a great example of what happens when there are not sufficient safeguards in the system.
The reality of the computational state is not going to go away. As technology progresses the informational capacity of the state increases. Are we prepared for this? At present I think not. See the reaction to the NSA warrantless surveillance of phone calls. Our leaders for the most part decried the leaking of the surveillance rather than the act of the surveillance.
It's the leaking they fear most. The implementation of surveillance technologies is unavoidable. Ban them all you like, they will be used regardless. Privacy is dead, and nobody can save it.
What we DO have a shot at controlling is what is done with the data. Probably the best hope for safeguarding things like freedom and democracy lies in making surveillance data transparent and public.
Your philosophy is admirable. Unfortunately it's also the one that led us here. Shall we wait until the surveillance state has us intractably under its boot before we stop arguing over whether its power should exist and start focusing on who we would like wielding it?
> There are way too few safeguards in the system and the Bill of Rights does not seem to be adequate in dealing with the new reality that private companies can acquire a vast amount of information on us and thereby sell it to law enforcement. Law enforcement can use privately obtained information without a search warrant [1].
Ultimately what the Bill of Rights means is determined by the Supreme Court. It could limit the Third Party Doctrine, which is really what gives the government warrantless access to that information, in light of new technology and social circumstances.
We could rely on the Supreme Court but I'm skeptical that they will put the public interest above the government's interests. I think a new Bill of Rights will be needed if there is sufficient outcry over abuses.
I'm not so sure: the Wikipedia article quotes Justice Sotomayor:
> "More fundamentally, it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks."
> [Gorsuch] appeared eager to apply originalist principles to contemporary technology in a way that shields cellphone users from law enforcement overreach. Indeed, Gorsuch seems poised to build upon the jurisprudence of his predecessor, Justice Antonin Scalia, to establish real constitutional limits on the government’s ability to track our movements through our mobile devices.
> What happens with false positives? What's the recourse?
The same recourse when you're false-positived by a witness, I'd assume: get a lawyer. Personally, I'd probably place my trust in facial recognition (possibly not this system specifically, though) over human eye-witness testimony, based on my understanding of human psychology and its vulnerability to incentives.
It's not as dire as it's sometimes painted. Maryland attempted to apply mass ticketing with intersection cameras, and it was found to violate the state constitution, because the use of a car in committing a crime didn't imply the owner was behind the wheel at the time.
If the technology is unreliable, we call it out as unreliable. What society is going to need is people well-versed in the ways the tech can fail to be good, not a doom-sayer's disregard of the ways the technology can help.
It's not the same thing as a false positive by an eye witness. Eye witness testimony is well known to be prone to false memories. Hence there needs to be corroborating evidence. I think people are too readily willing to accept a technological identification. I can see a claim being made that the computer is wrong only once in a billion cases without proper justification. There are people today who still think lie detectors are reliable.
I can see a time where the presumption of guilt is assumed simply because the technology says so. Clearly we are not there yet and possibly never will be. I hope that issues will be sorted out before the technology gains widespread use.
Sadly, they never seem to learn, despite the known risks of treating these tools as magic-wand auto-wins instead of pieces of evidence. From the outright junk science of bite-mark identification, to DNA testing that uses too few markers to avoid false positives as the databases grow: statistics that are adequate for eliminating non-matches within a small pool of suspects get misapplied to pools of millions that include vast numbers of non-suspect and non-victim samples, which, through sheer lack of precision, statistically all but guarantees the conviction of an innocent person.
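The database-trawl problem described above can be made concrete with a toy calculation (the numbers are assumptions for illustration):

```python
def prob_at_least_one_false_match(per_comparison_fp: float,
                                  db_size: int) -> float:
    """Probability that trawling a database yields at least one
    false match, assuming independent comparisons."""
    return 1 - (1 - per_comparison_fp) ** db_size

# A 1-in-a-million false-match rate sounds conclusive when testing
# one suspect against one sample...
single = prob_at_least_one_false_match(1e-6, 1)
# ...but trawled against 10 million profiles, a spurious "hit" is
# nearly certain, even with no true match in the database at all.
trawl = prob_at_least_one_false_match(1e-6, 10_000_000)
```

That is the difference between confirming a suspect and manufacturing one.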
I agree about the risk. It is incumbent upon those of us who know better to sound a loud and persistent alarm about the unreliability of the technology and the risk of false positives.
Yes, we need some substantial acts of Congress, up to and including a constitutional amendment, to properly handle technology.
I still shake my head that "wiretap" laws are what are being used (last I knew a few years ago) - why not have a clean law to address modern communication capabilities and proper privacy & court interests? :-/
Asking a company to please not sell something doesn't seem like a scalable approach. As long as there's a market someone will try to enter it. Either make it illegal for companies to sell this technology or illegal for police departments to buy it. It's a political matter.
I agree it's a legal matter, but completely illegal is not realistic. What's needed is a legal/privacy framework that addresses these new technologies. Unfortunately, I have zero faith in the law makers to understand the technology enough to really do anything meaningful. Instead, we will probably have to wait for a SCOTUS case at some point in the future to define the legal side.
Laws cannot stop a technology that is easily available, like number plate scanning or face recognition. Nothing can stop it.
You might just as well outlaw knives. (They are in fact illegal in many European countries, which has done nothing about their easy availability, and software will be far easier to acquire than knives.)
> Laws cannot stop a technology that is easily availabe, like number plate scanning or face recognition. Nothing can stop it.
Which is why I said that completely illegal is not realistic. What laws can do is provide a scope and framework on how technology is used. Data retention, sampling, court admissibility, etc... are all pieces that can be argued and adjusted to fit with new technology. But, like I also said above, I don't have faith that current law makers understand technology enough to make any of these laws. It will take some overreach cases that end up in SCOTUS to get anything done.
> You might just as well outlaw knives
Using knives in a certain way (stabbing someone for example) is against the law. Using this new technology in certain ways could also be against law.
Face recognition and plate scanning are orders of magnitude more complex than knives, which have been around for ~1,400,000 years. Kids can (and do) make knives with random material they find. Effectively 0% of adults can make face recognition or plate scanning systems.
Also laws do work in these scenarios: look at the difference between US and Israeli security practices and you'll see dramatically different laws in effect.
Key quotation: "Facial recognition is not new technology, but the [criticizing] organizations appear to be focusing on Amazon because of its prominence and what they see as a departure from the company’s oft-stated focus on customers."
This is an interesting side-effect of the success of AWS. Amazon's customers are not just individual consumers or even sellers anymore, but organizations on the scale of multinational corporations that rely on the cloud and national governments. Interesting to see how this will play out in future AWS offerings.
Is it just me, or has Amazon never really cared about customers? It has always been about market share and profit margin. I've never felt Amazon was a pro-consumer company; it's merely a side effect that they need consumers.
The number of counterfeits and the abomination that is the UI argue against that point. I think Amazon works against both the customer and the small retailer, which goes to my point: they are about market share and profit margin above all else.
As someone who has sold on Amazon, it's not as simple as you make it sound.
Amazon takes my product, which I identify, and puts it in a bin at whatever warehouse. Another seller, sends in the same product, but theirs is counterfeit. Without opening the package, how is Amazon to know which one of us is selling the counterfeit and which one is real?
Now, my inventory is mixed in with Amazon's and this counterfeiter's. That product gets shipped out under my name because the customer chose my price from the list of sellers, but Amazon ships whichever unit is closer, not the one I sent them. This is done to help ensure the cheapest/fastest possible shipping. I take the blame even though someone else sent in bad product.
This is a shitty situation all around, but how does Amazon fight back? They do eventually go after the counterfeiters, but it's a slow and complicated process, because it's super simple to set up as a seller on Amazon.
But this is also not a new issue. eBay still has this problem, and I would say it's far more mature than Amazon at the "3rd party seller" crap, and eBay can't fix it either. It's a shitty, hard situation to deal with.
Amazon could do what it used to do, and what everyone else does: don't commingle supplies from trusted sources with those from random third parties. Have you ever tried selling a wholesale bag of carrots to your local grocery store? They aren't interested, for good reason.
What reasons do you have to conclude that Amazon is not pro-consumer? Chasing marketshare and profit margin is something else, and being pro-consumer is a way to win marketshare and profit-margin.
Selling everything, generous return policies, free shipping, customer reviews; these are all customer-friendly.
This is a ridiculous article. Rekognition has been available for over a year, and I've been using it for that whole year. I've even integrated it into my own home security camera.
Amazon is selling it to law enforcement, but it's not like LE gets special service that no one else does. Amazon built a generic facial recognition system, and it works pretty well. Even if they weren't pitching it to police and government, the tech is out there. Those groups could have used it on their own.
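For what it's worth, integrating Rekognition into a home camera setup mostly comes down to glue code around the CompareFaces response. A minimal sketch of the filtering step, assuming a response dict shaped like the documented CompareFaces output (the `boto3` call itself, `client.compare_faces(...)`, is commented out since it needs AWS credentials; the 90.0 threshold is an illustrative default, not a recommendation):

```python
def strong_matches(compare_faces_response, min_similarity=90.0):
    """Filter a Rekognition CompareFaces-style response down to the
    face matches at or above min_similarity (0-100 scale).

    The real response would come from something like:
        client = boto3.client("rekognition")
        resp = client.compare_faces(
            SourceImage={"Bytes": known_face_jpeg},
            TargetImage={"Bytes": camera_frame_jpeg},
        )
    """
    return [
        m for m in compare_faces_response.get("FaceMatches", [])
        if m.get("Similarity", 0.0) >= min_similarity
    ]


# Example with a canned response of the documented shape:
resp = {
    "FaceMatches": [
        {"Similarity": 97.5, "Face": {"BoundingBox": {}}},
        {"Similarity": 45.0, "Face": {"BoundingBox": {}}},
    ],
    "UnmatchedFaces": [],
}
print(len(strong_matches(resp)))  # prints 1: only the 97.5 match survives
```

In a home setup, anything that survives the threshold is what you'd clip and hand to the police instead of hours of raw footage.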
I'm not sure I follow, because your home security system, correct me if I'm wrong, only covers the area near your home. It isn't connected to millions of people who are collectively and unwillingly Mechanical Turked. Them apples aren't oranges.
Right, valid, true. It's just that it's scary when someone with tons of money and users and AI has this capability and misuses it at massive scale, kind of how big data only works when it's big. No one cares that my localhost website has root access to my machine, for example. Maybe that's a silly point, but I think you get what I'm saying; kind of like the tree falling in the forest. Take your pick of metaphors.
Except that the article is painting it as if it's this crazy new thing and the government and Amazon must be stopped in their collaboration to turn America into a police state.
When in reality it's just a service that's been around for a while that offers a technology that other people have, and was available to law enforcement long before Amazon started selling it to them.
What is the news here? That Amazon sells their products to the government?
There is plenty of technology available to private citizens and companies that cannot be freely used by government entities, unchecked, on a massive scale. The news is that the government wants to use a new technology that is ripe for abuse, which currently has no regulation when used by law enforcement, and Amazon wants to sell it to them.
Amazon might be a bogeyman here, but calling out the potential for abuse by governments of new tech is a primary function of the press in a modern free society.
I think this is going to happen, regardless of ACLU or whether Amazon pulls the service or not. Face recognition will be refined and sold as a package. Probably every cop car will have it one day. Courts will say that when you're in public you give certain privacy rights away etc. etc. Expect cop cars next to you if you missed a court date or violated parole.
The scariest part is the DNA fishing expeditions through cousins like https://www.washingtonpost.com/news/true-crime/wp/2018/04/27... . They find a hair at a crime scene and 17 years later, more or less, you need to prove that you didn't do it. Maybe the victim was next to you at Starbucks or picked up your hair with his/her shoes. Good luck.
>Courts will say that when you're in public you give certain privacy rights away etc. etc.
This is the law of the land right now. As for your next example, I don't see what's wrong with it; missing a court date or violating parole is a perfectly legitimate reason to be arrested.
They already wrongly ticket people at stop lights; it's a racket. They could just mail you a ticket for very small, petty "crimes" like not throwing something in the right bin, jaywalking, etc. Do people really think police will only use this to solve crime? Solving crimes doesn't pay the bills. Ticketing people does. Then they could also use facial recognition to identify everyone at a protest of the government (they already do this somewhat with cell phone signals), so they know who to put on a watch list, even if it's a legitimate protest.
I'm a bit perturbed that people on this board aren't more concerned about the many ways this can be abused, especially since police oversight is notoriously bad in virtually every country, including the West.
Stuff like this is why I'm an anarchist. This power is not yet balanced enough to promote liberty with the same magnitude that it can demolish liberty. When drug dealers can take a picture of a potential customer and match it against a global database of official police photos and informants, then I will be less wary of cops matching video against criminal mugshots in real time. If the criminals can't hide from their warrants, the cops can't go undercover, either. If Joe Citizen can get an automatic ticket for jaywalking, Jack Cop should get an automatic ticket for speeding without emergency signals on, and for not wearing seat belts. If the balance tips too far one way, you get a police state; if it tips too far the other way, you get rampant criminality. The criminals have to be constrained by risk of capture, and the cops have to be constrained by prioritization of resources. Currently, I think the balance favors police state.
The only viable counter I have managed to imagine so far (for the US) against automated facial recognition is to develop a new, serious religion that always wears identity-concealing veils, masks, and costumes in public places. It has to be a religious value, because otherwise, state-by-state anti-mask laws may be employed. This may be easily justified as an expression of egalitarian non-discrimination. If one cannot immediately identify someone as a member of an out-group, such as by their facial features or skin color, it is more difficult to treat them differently from a member of an in-group. It's harder to be racist if you don't know what races other people are. So adherents might also be required to wear gloves and concealing clothing, or full-body coverings like "morph suits". Fursuit cosplayers are likely to be political allies.
If you are worried about automatic ID, you should find out whether your state is a stop-and-identify state, and if so, work to have those repealed, along with any anti-mask laws. The presumption of innocence goes hand-in-hand with the right to pseudo-anonymity in public. If you aren't doing anything criminal, nobody needs to be identifying you. Sometimes, I even prefer not to be identifiable to my friends--maybe I just want to get some groceries without having to stop and talk to anybody.
All these technologies should be used first on cops and government officials, to ensure they are upright, before being used on anyone else. Quis custodiet ipsos custodes applies. It is too dangerous to be used by anyone who is not provably trustworthy.
>I'm a bit perturbed that people on this board aren't more concerned about the many ways this can be abused, especially since police oversight is notoriously bad in virtually every country, including the West.
On the contrary, I share the concern, but I don't share the conclusion that police should be relegated to crunching data with nothing more complicated than an abacus (obvious hyperbole) because of abuse concerns.
I also believe in a level of consistency that states no, you do not have a right to privacy in public. Any such abuses should be dealt with on a case-by-case, abuse-by-abuse basis, rather than unreasonable blanket bans. And any such cases must be ruled based on existing law, rather than what amounts to bench legislation.
It's possible that in the short term, Amazon could be changing the economics of mass surveillance, making it more feasible for police departments that could not afford such tech before. But at this point, I think a change of dynamic is important, and that we actually _need_ an 800 pound gorilla to frighten us, before people wake up and start demanding action at the state and federal level.
It's already been half a decade since the UK surveillance regime and the reality of mass surveillance were revealed to us, yet nobody batted an eyelid in the US. Meanwhile, facial recognition has improved by an order of magnitude and has already, silently, been seeing deployment everywhere around us. I'm led to believe that unless we have a public poster boy to rally against, the situation isn't going to change.
Speaking of which: how have things gone with the UK's experiment?
Maybe I'm not plugged into the right channels, but I haven't heard any news about "London falsely jails thousands of people due to false-positives on the camera surveillance network" or "the Crown finds and jails protesters by the thousands."
Last I heard, it got mired in legal issues and was recently tossed out by the courts [0]. (Something that's not quite as likely in the US, since the EU is more privacy-conscious; here it would require mass outrage and an act of Congress.)
Limiting the power of surveillance is only going to come from passing laws; there's always somebody willing to build the tech for the military-industrial complex. It's lucrative.
Usually this sort of thing would go to IBM, Accenture etc.
AWS has tried to position themselves as being different from the old 'dinosaurs', but the public sector market is probably ripe for them to move into. They've already made steps with GovCloud, now it's time to get specialist teams and consultant contracts going.
If Amazon didn't do it, I bet Google wouldn't be far behind.
As an app developer and producer, what you can do here seems outrageous. If anyone takes a video or image, you can track other people's faces in the scene, whether or not they're your users, gather graph info about how they are related, and essentially have a wrapped-up motive/alibi for how a potential crime could have taken place, assembled by AI and auto-delivered to the authorities. All this from a non-opt-in Mechanical Turk approach where no user is compensated or willingly testifies. What the hell do you call that? A police state?
Amazon has always been a pro-law enforcement company. Remember how they shut down WikiLeaks' site with a single call from Joe Lieberman? (Not even an official law enforcement request.)
Police should be given full access to the latest technology. The criminals will have the latest tech. DNA, face recognition, speech recognition, big data, ML, AI, whatever comes next. However just like with guns and wiretaps, there should be laws controlling the appropriate use of these powerful tools to avoid becoming Big Brother.
Yes, the laws should move at the pace of technical capabilities. That's the big takeaway here. Not asking the cops to live in the '70s technologically, which is often the (effective) ask from privacy groups.
Privacy groups and the NRA. The system for tracking gun sales is kept on paper records by law; too much fear that digitizing the dataset would be tantamount to a "national registry of gun owners," which is in the set of irrational American fears alongside "National ID cards."
People will be outraged until everyone starts doing it.
How long until we see tons of marketplaces to sell your home & business surveillance videos?
Maybe security companies will start offering free versions of things like Nest as long as you let them sell the video in their marketplace. It's the 21st Century way after all.
It's a bit dated, but the basic premise, that the pattern of technology getting smaller, cheaper, and easier to use applies to surveillance technology too, continues to be an accurate depiction.
Silicon Valley and its offshoots are going to become surveillance central. The HQ of a surveillance economy courted by despots at home and abroad. The people working here are going to be the most hated people in the world.
A new narrative about benefits other than totalitarian surveillance is going to be crafted and circulated to provide some sort of ethical fig leaf for employees but everyone involved will know the truth just not acknowledge or discuss it.
Any reasonably well-informed citizen will intuitively know this is not the right direction, but there is an inevitability about it; money creates its own logic, and many have already made their peace with it.
> Silicon Valley and its offshoots are going to become surveillance central. The HQ of a surveillance economy courted by despots at home and abroad. The people working here are going to be the most hated people in the world.
While Hollywood’s influence has waned, America’s leading internet companies ... have spread from Silicon Valley to all corners of the globe, even some untouched by American movies and TV shows.
It’s time for Americans to recognize that they have a new major cultural export, alongside movies and television: the set of modern communications platforms created in the United States that have since overtaken the world. The question then becomes: If the world looks at America and sees Facebook, YouTube, and Twitter as its profile picture, what does the world think?
Americans should be concerned. We’ve reviewed how America’s new major cultural export has been characterized in the international media and other public discourse lately, and based on that we’ve identified three key features that have become associated with it in the eyes of the global audience.
They go on to discuss that SV has a growing reputation as the engine of 1) the spread of "hate and harmful ideas, 2) "foreign intervention in domestic politics", and 3) "the general surrender of privacy". Well worth reading.
Currently you have to pull out your phone and use their app to enter the Amazon Go stores (the no-cashiers convenience stores).
Interesting to think that Amazon could use their face recognition tech instead to let you enter without doing anything, just like you exit "without friction". You'd just need an app the first time for them to learn your face.
They haven't done this. That means they either think it'd be perceived as too creepy, or the tech isn't good enough yet.
I wonder how much more outraged people would be if you swapped Amazon for Facebook which has names and much more information to tie back to a bunch of faces.
re: "Amazon Rekognition is primed for abuse in the hands of governments"
Govs? Big Incs? Deep Pockets? Can any of them be trusted?
What used to be known as "local law enforcement" is now a heavily armed para-military organization that dreams technology wet dreams. Its vendors dream of drowning in cash.
The din of fear is taking its toll. The expectation of unlawfulness is becoming self-fulfilling. The kids are not alright, as the normalization of school shootings shows.
I'm not going to defend the NRA, but they are the least of my worry at this point.
Is it okay to sell face recognition services to everyone except law enforcement? Are other customers somehow more trustworthy? Maybe it shouldn't be sold at all?
I write facial recognition software. If those example facial images and their match scores are any indication of their system's accuracy, it is on the poor side.
The point is that they made it available to customers as a full, scalable pipeline. They can (and will) improve their accuracy continuously. Any kiddo can take some bleeding-edge wide ResNet variation and get better results than they have right now, but can't make it available at scale to anyone.
"but can't make it available at scale to anyone." I guess that's the popular misconception. Their system scales awfully, making its operating expense high for a weak product. They are simply exploiting their name.
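To make the "match score" discussion concrete: modern face recognition typically embeds each face as a vector and scores pairs by cosine similarity, so the hard part isn't the scoring math but running it accurately at scale. A toy stdlib-only sketch of that scoring step (the embeddings, names, and 0.7 threshold here are all made up for illustration; real systems use learned embeddings hundreds of dimensions wide):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1.0 means
    # identical direction, 0.0 means orthogonal (unrelated) faces.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, threshold=0.7):
    # Compare one probe embedding against a gallery of named
    # embeddings; return (name, score) of the closest entry, or
    # None if nothing clears the threshold.
    scored = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    if not scored:
        return None
    name, score = max(scored, key=lambda t: t[1])
    return (name, score) if score >= threshold else None


# Toy 2-D "embeddings" standing in for real face vectors:
gallery = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
print(best_match([0.9, 0.1], gallery))  # close to alice, clears 0.7
```

Where the threshold sits is exactly the accuracy trade-off the parent is complaining about: set it low and you flood users with false matches; set it high and the pipeline quietly misses people.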
Why is today the day for backlash? They showcased this at re:Invent for all to see, and the obvious use was for law enforcement and surveillance. Yet another surveillance issue we're up in arms about today and won't care about tomorrow.