America is turning against facial-recognition software (economist.com)
269 points by pseudolus on May 24, 2019 | 143 comments



> Earlier this year the Washington Post reported that many small police departments were abandoning body-worn-camera programmes because of the cost.

> Although the cameras are cheap, officers can generate 15 gigabytes of video per shift; storage costs mount. Police unions often oppose body-worn cameras, fearing they imperil their members by giving superior officers licence to search them for punishable behaviour.

>Other officers complain about the amount of time required to review and redact footage in response to public-information requests.

Sure, the police themselves don't like the cameras turned on them. Suddenly it's "too expensive".

Then they will turn around and argue for ubiquitous CCTV camera installations.


Since 2004, the Chicago Police Department has paid out $662 million [1] in police misconduct settlements.

The optimist in me says that the cost of proper police reform, training, body cameras, video storage, and actually punishing bad cops would be much less than that.

[1]: https://www.chicagobusiness.com/article/20160320/NEWS07/1603...


I find this to be a really compelling counter-argument to the whole "accountability is too expensive" line. Even if proper tech + training only reduced the rate of severe misconduct by 15%, that's roughly $100M in savings (0.15 × $662M ≈ $99M).

I'm sure implementing reform etc isn't cheap, but I seriously doubt it's on the order of $100M.


To the people performing the misconduct, training and accountability are restrictions and represent a reduction in power. No amount of cost savings will make them favor it.


I agree. But of course the offending people can’t just come out and say that they’re against accountability on principle - they have to trot out the “too expensive” argument. That’s why the statistic on settlements is useful.


> Axon, which makes body-worn cameras and Tasers... is building a system for managing records...the firm’s chief information-security officer, says that “what we can do to help officers improve most isn’t the sexy stuff. It’s helping them be more efficient and spend more time on the street.”

Uh... what they could do to help officers improve most might be shipping a device that doesn't seem to mysteriously malfunction or be turned off during the most controversial use of force incidents.

The way bodycams are supposed to help is by forcing officers to be accountable for their actions. When police have plausible deniability due to poorly designed tech, all these expensive devices become basically worthless.

In my cynical moments I have wondered if making such bad tech is actually a selling point for Axon and its competitors. They're selling to police departments; the less likelihood bodycams have of embarrassing police, the more likely they are to make a sale. Check the box, move along. I try to have more faith in our institutions, but the abject failure of bodycams from a technology perspective makes me wonder.

I suppose it could also just be due to a few companies having a lock on an enterprise hardware market.


It doesn't really matter if they have a body camera, because the courts never convict them anyway.


That's not exactly correct. There are several factors here. A big one is that prosecutors (whose jobs depend on police cooperation) have been reluctant to go against police; this probably won't change with video evidence. However, another factor is that jurors are currently inclined to take the police officer's word for it in he-said-she-said type cases, and this may change if bodycam video evidence becomes more plentiful.


Huh so the cops think giving cops the ability to search for punishable behavior might be an issue...


Would you want a body camera at your work? You wouldn't, I'm sure. Don't blame them for looking for their interest. It's someone else's job (politicians') to legislate the use of it.


> Don't blame them for looking for their interest.

Blame everybody for only looking out for their own interest, from scammers to corrupt politicians to short-sighted investors to oil execs. Otherwise it's just status-quo infinite whack-a-mole and we see how well that's going.


In my job, I'm not provided a set of lethal and non-lethal weapons. In my job, I am not given the ability to detain anyone. In my job, when someone aggravates me, I do not have issued weapons to use against the aggressor. In my job, I'm not allowed to use deadly force because I feel threatened. I blame them 100% for only looking out for themselves and not the citizens they are sworn to protect.


What a poor analogy. My job doesn't have anything to do with being the sole arbiter of violence and defusing dangerous situations at the trust and expense of taxpayers and local citizens.

If they don't want accountability as police officers, they can go find another job.

> Don't blame them for looking for their interest.

I can, I will and I will argue that it is my moral obligation to ensure that the public servants are acting in the best interest of the public as a citizen. But thanks for your concern.


"giving superior officers licence to search them for punishable behaviour" seems like a legitimate concern. You seem to be assuming that the "superior officers" will have "the best interests of the public" as their sole motivation. In my experience that is rarely true of ladder climbers. By not dealing with this concern you could easily end up making things worse.


If "punishable behaviour" is appropiately limited to things like assault and evidence planting, then this is irrelevant, and superior officers should even be required to preform such searches.

If "punishable behaviour" includes things like slacking off to stop by dunkin donuts, the problem is with what qualifies as punishable behaviour, not the ability to search for it.


You might be missing the point. Assuming that all the good guys are supervisors and all the bad guys are the supervised is an unwise assumption to make. The bad guys can simply become supervisors and weed out the good guys. Entire police departments have been corrupted in this fashion. If you oversimplify the problem, the solution is unlikely to yield the desired result.


No, I'm saying that even if all the supervisors are bad, the ability to search for punishable behaviour only helps them if there exists punishable behaviour for them to find.

I agree that this is an unreasonably high standard to hold people to, if their job doesn't have anything to do with being the sole arbiter of violence and defusing dangerous situations at the trust and expense of taxpayers and local citizens.

If they don't want accountability as police officers, they can go find another job.

Edit: unless you meant not supervisors but the policymakers who define "punishable behaviour", in which case getting rid of body cams is about as helpful as rearranging the deck chairs on the Titanic.


No good person wants criminal police officers to not be held to account, but simply passing laws is unlikely to have the desired effect. Many police departments in the United States are already highly corrupted at the highest levels. It's not as bad as it is in Mexico, where the drug lords have actually taken over, but it's getting close.


Good. This kind of software is a mirror to the data fed into it. People on the edges of society (that the people making the mirror might either intentionally or accidentally ignore) will get fucked over tremendously by this technology. We should put a complete ban on it. Even if it means gimping my photo organization programs.


I fully agree with your sentiment, but I don't agree with a ban. Prohibition doesn't work. You should be able to use it, but the legal risks and liability you open yourself up to by using it should be strong enough to deter its use except when there truly is no substitute. Asbestos comes to mind as an example. People (often on HN) run in circles and scream about the sky falling because "OMG it's not banned", but the reality is that the court of public opinion has basically ruled it is not OK to use. That's how I want to see facial recognition tech, and other tech that enables corporations and governments to stalk people at scale, treated.


> Prohibition doesn't work

You're appropriating sociological behaviors of the individual to maxims at the corporate level, which is ridiculous. Corporations need to be regulated, not coddled in some kind of convoluted, scaled-up, harm-reduction policy.


Public opinion is what ultimately is the determining factor here. If the public does not strongly want to punish companies for doing something then the government will not be successful at punishing companies for doing something. In a democracy government follows public opinion. In the absence of strong public opinion you get corporate interests shaping what government does.

Yeah, sure, ban it. But without social consensus backing up those laws it will have no teeth in practice.


Amen to that. And public opinion is easily manipulated. Create fear and people will give up their rights; if someone resists, accuse them of being anti-freedom, anti-patriotic, against the free market, communist/socialist, helping terrorists, etc. Use the media, which is owned by the same people politicians protect, and in no time most of the population will comply.


For instance, I am transgender. This kind of technology would allow them to identify "masculine" features of my face that I didn't ask for. This could cause me to be profiled as male, regardless of what my court order or personal preferences say.

This kind of technology is fundamentally dangerous to people who do not fit cleanly into the rigid caste-like boxes that are used to categorize people.


This is equivalent to asking to ban glasses to prevent short-sighted people from identifying "masculine" features. If people can learn to handle the information they see, they can teach the same to the algorithms they use. So while I agree that there are dangers, I can't agree that banning face recognition is the solution.


It's almost ironic that we're progressing through a social evolution where we have to have conversations with people to understand their desired pronoun; while at the same time developing software that will suffer the same fate. Appearances mean nothing, but that's all (this) software can use.

Perhaps a small badge with visual friendly metadata would be handy. Not that we need to give them more information of course, just thinking out loud. The idea of everyone walking around with a little badge on their chest containing their metadata is oddly dystopian haha.


> The idea of everyone walking around with a little badge on their chest containing their metadata is oddly dystopian haha.

This idea was tried before, but for just a subset of the population, though: https://en.wikipedia.org/wiki/Nazi_concentration_camp_badge


FWIW, it's started appearing on conference badges in some circles: Name, affiliation, role, pronouns.


That's a bit of a dramatic response to my comment, imo. You could say the same for police, or any institution that uses identifying information - even uniforms.

Like it or not we all have piles of metadata that follow us around. A big one is our face. It won't take long for facial recognition to do the same thing I mentioned in the previous post; except you won't have a choice then. Hell, you won't even know.

The metadata badge idea was of course not serious; but it doesn't seem so farfetched to me or absolutely terrible. It (as I proposed it), at least, would be something you controlled.

Unlike state IDs, licenses, or Nazi camp badges, which are not something you would control.

I do not appreciate the extreme association of a free-thought idea with one of the worst massacres in history. I'll assume you meant no harm, but still - I figured I had to at least offer that clarification.


Is there any intention of using cameras to categorize people, as opposed to just looking for specific criminals that are on the loose? What would be gained by categorizing you as male or not?

E.g., someone robs a bank, their face is captured, and an alert goes out when they are seen elsewhere.


Omigosh, thank you for mentioning this! I am also transgender and had never even considered the implications of this.


+1 from another trans woman


Pardon the ignorant question, but is there an accepted "standard" for pronouns when someone identifies themself as you did?

Which is to say, if you say you're a "trans woman", am I able to infer that your desired gender pronoun is female?


> Which is to say, if you say you're a "trans woman", am I able to infer that your desired gender pronoun is female?

Yes, this is indeed a generally safe assumption. View "trans" like any other self-descriptive adjective. "I'm a tall woman", vs "I'm a trans woman". It's not an absolute given, because nothing is black and white (some might prefer gender neutral pronouns), but generally safe.


The best way to know for sure is to ask them instead of making any assumptions.


Truly we have transcended parody.


I'm not trans but have several trans friends, and just wanted to add (as a non-trans person), make a best guess for how someone presents and if they correct you, do not joke about it. No jokes, no extra comments. Just say, "sorry, got it," and move on. It's a matter of respect and nothing more, and if you respect someone, it's easy to show it. Making a joke about something that's important (and possibly sensitive) to someone is just a bad start.


I don't think it's necessary to be so careful. If they can't take a joke I don't wanna hang out with them.


You would use she/her pronouns for trans women, and he/him pronouns for trans men. Just treat trans women as you would any other woman, and trans men as you would any other man.

Generally people who prefer neutral pronouns will identify as nonbinary, genderqueer, or perhaps transmasculine or transfeminine, (although not all people who so identify prefer neutral pronouns) and you might need to ask them for their preference.


Counterpoint: How many cloned humans have we seen?

Prohibition works when the barrier to entry is higher. Any mook with some yeast and sugar can make alcohol. Pot grows just about anywhere (which is why they call it weed). But not a lot of people are privately developing nuclear bombs or cloning humans.


Arguably, the barrier to entry for facial recognition software (or almost any software, for that matter) is way lower than for developing nuclear bombs or cloning humans. Of course it won't be at the production level of the big corps, but a group of fewer than 5 engineers can definitely hack up facial recognition software that works fairly decently with only ramen money and a bit of AWS credits.
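
To make that concrete, here's a minimal sketch using the open-source face_recognition library (a dlib wrapper); the image file names are hypothetical, and a real deployment would add batching, thresold tuning, and a proper gallery store:

    import face_recognition

    # Build a one-person "gallery" from a reference photo.
    known = face_recognition.load_image_file("suspect.jpg")
    known_encoding = face_recognition.face_encodings(known)[0]

    # Find every face in a camera frame and compare each to the gallery.
    probe = face_recognition.load_image_file("camera_frame.jpg")
    for encoding in face_recognition.face_encodings(probe):
        # True when the embedding distance falls under the default 0.6 tolerance
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            print("match")

That's the whole pipeline in a dozen lines; the hard parts (accuracy across demographics, scale) are exactly the parts a ramen-budget team would skip.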


As a person with prosopagnosia (vide http://web.archive.org/web/20080704154655/http://www.prosopa...), I would love to have personal software for recognizing people.

At the same time, I am aware of the dangers of people doing it for commercial purposes (on so many levels!).


It's things like this that always make me think twice about banning things. Many people look at something as useless, wasteful, or bad, but to other people it really does help with certain conditions. So when you outright declare something is dumb or should be banned, I generally find these people to be ignorant about all the technology does, and it's just a knee-jerk reaction.

This is not to say they don't have good reasons for not liking a technology. Rather, they tend to not know all sides of it.



Interesting.

Does it work? (Any reviews, since it's past its release date?)


You have never been a victim of a crime have you?

I'm pretty sure if you were, and the crime was caught on camera, your position on facial-recognition software would change.


Or you'll just get harassed more as a black person[1].

Funny enough (actually the opposite of funny), the systems used by law enforcement are designed to create more false positives, which then leaves it up to law enforcement to decide which "leads" to follow up on. This will certainly be used to validate abusive behavior.

Additionally, facial recognition is used in some countries right now to silence political dissidents[2]. If you think that can't or won't happen in the US, I'd suggest you look at how activists have been treated in the past (beaten, arrested, and sometimes killed).

1. https://www.nytimes.com/2018/02/09/technology/facial-recogni...

2. https://www.eff.org/pages/face-recognition


>I'm pretty sure if you were, and the crime was caught on camera, your position on facial-recognition software would change.

Yes, but likely in the wrong direction, because I'd be angry and personally insecure. Decisions of liberty, privacy and law shouldn't be based on personal experiences or trauma.

I find this line of argument weird. Do you really think a personal decision about long term social issues is better than an impersonal one?


Just so I have good arguments when talking to someone about this specific point, can you or someone else please go into more detail on why this will happen?


For example, let's assume you live in Brunei, where homosexuality is a capital crime. Facial recognition (especially in concert with other pattern-recognition software) drastically reduces the manpower necessary to identify individuals for enforcement.

If you're talking to someone who's more conservatively inclined I'd talk about how Iceland has managed to basically eliminate Down Syndrome. Prenatal pattern recognition has successfully convinced nearly all Icelandic parents who have a pregnancy where the child may have Down Syndrome to abort.

If your listener doesn't care about gay rights AND thinks any fetus with a higher likelihood of having Down Syndrome should be aborted, then they're probably exactly the sort of people working on or hoping to use these tools.


Okay, I see your Down Syndrome point, and raise you Autism/Asperger Syndrome. Alright, there isn't currently a test that would enable people with Autism/Asperger Syndrome to be systematically aborted, but it isn't so hard to imagine a future where it would be possible. Given the high proportion of people in this forum with these conditions, how do we feel about this?


Please note that you're not aborting 'people' ... just a thing that might become a person.

You're giving (actual) people the opportunity to choose to have healthier families.

- Note that I say 'healthier families' specifically because willfully passing on genetic features that disadvantage your children burdens not just them but your entire family unit.


This still carries an uncomfortable implication that Downs and Autism are inherently negative traits, and that people who have them would be better off not having existed in the first place.

Depending on where you fall on the fetal personhood debate, preventing someone from existing is much better than killing them. But that doesn't mean the choice can't reveal negative things about society's overall attitudes towards people in those groups. If you're a member of one of these groups, it feels bad to have someone tell you that a fundamental part of your personality is something that ought to be eliminated from society in general.

And of course autistic people and their families often have hard lives, but a large portion of that difficulty comes from social and infrastructure problems. We're not good at building societies that are easy for differently abled people to interact with.

I want to make an analogy that should hopefully make the problem more clear. It's easy to make a case that being gay or transgender socially disadvantages both a person and their overall family unit. Raising a transgender child is going to be harder than raising someone that conforms to strict gender-rules -- you're going to have a few difficult conversations, and you're going to fight with a few systems, and you're occasionally going to have to deal with jerks telling you you've messed up your kid. None of this is right or fair, and we're getting a little bit better, but it's still a problem.

If there was a way to detect someone's future sexual orientation and identity in the womb, would you be comfortable allowing families to make abortion decisions based on that information? If that technology had existed in the 1980s where being transgender was even harder than it is today, would you have been comfortable with it being used then? Would gay and transgender rights have made the kind of progress we've seen if that technology had been available and unregulated?


Whether it be autism or sexual orientation, it seems even more wrong to choose to have a child with something that will disadvantage them just to bolster the numbers of the other people already in the world being unfairly discriminated against.

Given that in this scenario there are two potential children (the one with undesirable_trait_x, and the replacement without it), and that one will come into existence and one will not, shouldn't you, as a parent, pick the child that will have the happiest life rather than the one that will better serve as cannon fodder in someone's ideological crusade?


Well, from a purely social perspective, the answer is pretty obviously no -- we have a vested interest in increasing social diversity for the same reasons we have a vested interest in increasing genetic diversity. And the ability to widely eliminate qualities that society deems undesirable on a whim will almost definitely lead towards increased homogeneity and worse social outcomes in general for anyone who has a trait that can't be eliminated.

However, I'm guessing you're actually talking about morality on an individual level -- that it's immoral for an individual parent not to try and guarantee their future child the happiest possible life. The problem is that even though being autistic is hard, having a hard life doesn't necessarily mean having a bad life. Being autistic is hard because autistic people are different, not because autism is inherently an undesirable trait.

Not everyone who is autistic or who has Downs would, if given the chance, flip a switch and erase that part of their personality, and they bristle at the "happy life" argument because they see it as (intentionally or not) just another way for people to imply, "differently abled people will always lead less satisfying lives." See also the controversy in the deaf community over hearing aids, which are a much less dicey proposition than eugenics but still prompt heated arguments sometimes.

As a side note, it's worth mentioning that no one is talking about artificially increasing the number of differently abled people as some kind of "recruitment strategy". Opponents to this kind of genetic selection are talking about just removing that characteristic as a determining factor entirely. It's not any different than banning sex-selective abortions, which is already an entirely normal and relatively uncontroversial policy in multiple developed countries.


The problem lies within the generalizations presented by the words “healthier” and “burdens”.

Many families with Downs children describe how much joy and love their Downs children bring to the family, how the family members learned to cherish what’s important in life, etc.

These families compared to many families with non-Downs children are quite healthy and arguably not burdened though they experience different challenges than other families.

EDIT: fix quote; change last preposition in second sentence.


I posted the original example to demonstrate that skepticism about abstracting pattern-based decision making away from human actors transcends political leanings.

While I personally am not entirely sure how I feel about this sort of eugenics, it's worth noting that at the gestational period at which these genetic anomalies are detected, it's far more likely than not that these fetuses would progress to develop legal personhood.

Given our species' history of reliance on anomalous individuals, I suspect it's unwise to seek further genetic homogenization, and better for us to learn to accept those who don't neatly fit societal norms of fitness.


Not specific to facial recognition, but this covers it and the deeper issues. It is a thorough yet accessible primer on the many ways bias can creep into these systems:

https://parametric.press/issue-01/the-myth-of-the-impartial-...


As @pinboard noted back in the dark ages of 2016, "machine learning is money laundering for bias".


Not the OP, but technology like this will explicitly make racial profiling legal again, because the powers that be will hide behind the decisions taken by algorithms: “The computer told us to stop and search this black person’s car, it wasn’t us, the computer is smart, we were just following the computer’s orders”.



I heard this referred to recently as the "automation fallacy".


As a transgender woman, this is my fear precisely.


Along the lines of “everyone is breaking some law”, there could be a shift from “technically illegal but generally harmless and tolerated” to “illegal and mindlessly enforced by automated technology”.

An example being China’s automated jaywalking-shamer.


The only “good” outcome of this is we get to know how pervasive surveillance affects behavior and whether the benefits outweigh the detriments. In other words, does it result in a better or worse society overall? And even if it’s better overall, is it worth the negative effects on affected populations?

It’s a living laboratory.


The benefits would have to be impossibly valuable to outweigh the constant burden and psychological damage of living in a totalitarian panopticon.

Given that society functions without it and is constantly improving anyway--crime rates, poverty, mortality continue to drop from socioeconomic and technological development without the need for the intervention of pervasive surveillance--I personally literally cannot conceive of a benefit that would actually be worth it.


True, I do not see it turning out well. But it would be a laboratory for behavioral control. Something like this would eliminate the need for safe spaces, hate-speech rules, etc., because it would regulate all this antisocial behavior in the first place.


There have been numerous issues where corpora of people's faces have been insufficiently stocked with a particular racial group; in such cases people's faces might not register as faces, shutting them out of whatever the facial recognition is supposed to implement.

For example, facial recognition at a high-priced building: suddenly non-white people can't get in because the scanner decides they did not show a real face. Sorry, it's a bug, not a feature.

The same problem would of course apply to anyone with facial deformities etc.

The problem would be essentially of two forms - Automatic exclusion from a group by facial feature, or automatic inclusion in a group by facial feature.

Think of the automatic inclusion as advanced phrenology or some other sort of woo. In this scenario, company X sells the idea that they can do facial recognition of 'criminal types', based on whatever research they can pull out suggesting that criminals often correspond in appearance. Then they fill that corpus up with criminals, which makes a nice self-selecting system: find the black/hispanic person and deny them the job, loan, etc., because their 'likelihood for crime' is past the threshold set.

The reason why this would happen is the same reason why banks used to deny loans to people in certain zip codes: by doing so they could be racially profiling and excluding while pretending not to do so.


>”corpora of peoples faces have been insufficiently stocked with a particular racial group, in such cases people's faces might not register as faces..”

I’m not sure this is s good argument against the technology because there is a solution to this objection: they’ll improve it and ensure every face type is comparably recognized. It should be rejected on more basic principles.


It's not as simple as "just make sure it works on everyone". Even if it were physically possible to capture the face of all 7 billion people on Earth, you'd start running into issues of false positives/negatives, overfitting, etc., and furthermore it would basically require every person to be periodically scanned from birth until death to ensure that the algorithm can recognize new faces.


This is a different objection than the original objection. (effective accuracy vs specific bias).

With a pervasive system, unless people are hermits, they’ll get scanned periodically, and with other bits of information the changed face can be correlated with the same person (I go into a salon looking one way, come out different, go into a boxing match, come out different, but I’m following my routine and go to the same subway stop and convenience store and use the same payment method, etc.)


And then you just have a different implementation of China's social credit surveillance system...


There are of course examples of technologies that do not work for their stated purpose, but I think in general technologies do work pretty much for their stated purposes, if people use them correctly.

Generally the argument against a technology is not that the technology does not work for its purpose, but that the way humans will use it is problematic.


The problem with an "it's racist" strawman is that all the people selling this stuff have to do is frame it so NOT using it is racist. I don't think it would be hard to argue that a face scan is going to have less implicit bias than a poorly trained security guard or bored cop working overtime. So maybe it's racist to not use facial recognition, since that leaves room for human prejudice.


How does software not leave room for human prejudice? Who decides what data is used to feed the thing?


Who decides what face unlocks your iPhone? When you buy a million dollar condo, you decide whose faces you want to unlock it. That way it's not up to some security guard to decide whether to detain your kid in the lobby because he's wearing baggy clothes and didn't style his hair in a way that the guard likes. His face either scans or it doesn't.


My good buddy HAL 9000.


Many mortgage backing companies still routinely deny loans to black people because their models inversely correlate black homeowners to lower surrounding property values. When a mortgage backer also backs the loans of those same surrounding properties, the risk of losing money on their existing loans increases, thereby leading them to deny the loan and perpetuating racist policy from a so-called "objective" model-driven business decision.


Are there organizations that provide validation standards for this type of software?

I'm thinking something analogous to ASME for mechanical systems. Or is the technology still very much in the wild west phase?


> We should put a complete ban on it.

Both my phone and my PS4 use facial recognition for login. I don’t want that banned.

My camera can be set up to take a photo when it detects someone in the frame. Very useful for selfies. Are you going to ban that too?

Are you going to ban TensorFlow? Keras? scikit-learn?

If I hand code my own neural net, are you going to ban that as well?


I disagree with a complete ban, but we could ban wide-scale implementation of this. You are not a municipality.


There is a lot of sensationalism (see the title of the article) and high, anxious emotion around facial recognition right now, which really is to be expected at this point. But it is muddying the rational conversations we should be having about it. Can Amazon lessen Rekognition's bias towards white people (i.e. bright pixels)? Yes. Get over that.

This article completely whiffed on an important point regarding police use of facial recognition (or maybe I missed it), and that is _police_ bias. We are talking about one of the most culturally and racially biased concentrations of power in the country. It's important to note that they will NOT be training the models or writing the algorithms that drive their tools. We have a chance to let some brilliant engineers create deliberately unbiased tools that can only improve the situation in America's police force.

PLEASE do not let your fears win. It will be used for evil and it will be used for good. That is not something that should be banned. Sorry, I started getting a little emotional.


The problem I have with your statement is that it seems to put 'brilliant engineers' on some kind of magical pedestal.

Engineers, much like police, are humans. Therefore they hold biases, they can be short-sighted, and they may not understand many of the long-term ramifications of their jobs.

All this does is move the power and bias up the chain from police to engineers while simultaneously making some very, very powerful decisions about what privacy is and what your rights are in society with 0 oversight from the actual society. In fact, police are (in theory) beholden to politicians and the voting public right now. If piss-poor decisions are made based on faulty software from software conglomerate A, who do we hold accountable?


[flagged]


>We 100% benefit by moving the power up the chain from uneducated people to educated people.

Most cops are college educated?

>They will have biases, but not the same ones police have and far less racist or sexist ones.

I honestly am going to need citations for that statement.

>Your average engineer is far more intelligent and introspective than your average police though.

Again, this is nothing more than a magical belief. There is nothing about the field of engineering that makes the people doing it inherently better than other fields.

I'll say it again, all you do is shift the biases from a publicly accountable (in theory) position to something hidden behind a corporation and unaccountable to the public. This is not a solution, it's just a shell game.


Most cops are college educated; however, let's not kid ourselves, a BS in criminal justice is a little different from a BS in engineering.


At least the cop and the engineer have BS in common, apparently.

As an "engineer" myself, I don't think my peers are in general much better or wiser people than the rest of the world, and I'm not at all comfortable with implicitly handing them an ever-increasing level of power within our society.


Ya they probably are a little different... a BS in CJ probably included more liberal arts and sociology classes that forced the student to think and talk about things like race and class, and looked at specific historical examples of social injustice etc. The engineer rolled their eyes at the required non-degree classes.


Intelligent, possibly. Introspective, I'm not so sure.

Being educated can help protect some against abuses of power, but do you believe engineers are immune to abuses of power? I don't think so. The mechanisms are the same.

What would lower abuse is de-escalation training; dedicated, well-funded mental health care; decriminalizing drugs; and most importantly breaking the blue shield: holding officers accountable by actually trying officers that have committed crimes, increasing the power of the agencies regulating them, and weakening the power of the police union so they can actually be held accountable.


Facial recognition just vastly increases the likelihood to locate a suspect. What happens after that, from detention to arrest through jail and trial remains unchanged, and is still conducted by humans in the criminal justice field.


I think the folks who are against facial recognition because they view it as “racist” are in the minority.


This has been debated extensively on HN (and is still being debated actively), but all I hope is that people realise sooner or later that the level of safety you can acquire in a society is inversely related to the level of privacy individuals can have.

What I personally hope for is that people achieve a balance between safety and privacy.


I ask, safety from what? From citizens or the state? Your statement implies that the freest societies are the least "safe", and the most repressed are the "safest". While you might be able to hinder criminality on the part of citizens with total surveillance, the cost of giving anyone that kind of power is too high.

Relinquishing privacy puts a society on a path where all power accrues to the state, to the detriment of all citizens.


I agree with most of this, but because solutions like facial recognition are simply easier to engage than educational reforms or welfare-based safety [sic] nets, this is not necessarily something we get to choose.

States will prefer the former to the latter and firms will provide, at which point I don't think that the definition of safety-from is necessary.


> States will prefer the former to the latter and firms will provide, at which point I don't think that the definition of safety-from is necessary.

I do believe that, at some point, people will as well. If other options prove too expensive or too inefficient, it's not just states that will prefer near complete mass surveillance.


This is a meaningless statement without a threat profile.

Sure if everyone lives in a police state under the thumb of the government you're safer from petty crime but you're a heck of a lot less safe from government abuse.


Look at the news cycle. Most stories are about how crime is everywhere and your family is in danger (nevermind that the local news channel in Northwest Iowa had to report from Alabama to find some blood and gore), meanwhile, the government is fighting a noble war against terrorism in some far-off country.

I don't generally subscribe to conspiracy theories, but I do genuinely believe we're (in the US at least) being groomed for some sort of repressive state apparatus using this type of software to 'keep us safe' in the next decade or two.


I don't think it is inversely related. At a point, you become less safe. I would not feel safe if every moment of my life were broadcast to everyone; I would be walking a razor's edge, where I pay for every minor slip-up. Life would be one long anxiety attack. Privacy means being able to relax, and it is absolutely required for mental and physical health. No privacy is inhuman.


How are you measuring safety, and how are you measuring privacy, to make the 'inversely related' claim? What is your evidence for this claim, measured in the measures you are using?


Novel idea: maybe we should just lock up people who violate people's safety instead of taking away everyone's privacy ;-)


How does this differ from "If you have nothing to hide you have nothing to worry about."?

Surely it's not difficult for you to imagine fairly common scenarios where your equation is patently untrue. A battered spouse of an LEO, or a non-gender-conforming individual in a theocratic society, are only two really easy examples that come to mind.

If we assume that individuals who control or have access to surveillance mechanisms are universally magnanimous and actually capable of neutralizing threats to safety, then your negative relationship between safety and privacy might hold up to scrutiny. I have more than a bit of trouble seeing how that assumption is anything other than utopian.


I don't mean to defend surveillance as a gateway to safety by any means; in fact I believe the exact opposite. Unfortunately, it's going to be integrated into our society whether we like it or not, and after that point it's only a matter of time before we come to accept losing our privacy, having ignored all the exceptional "casualties" and enjoying the added scrutiny it yields to law enforcement.


I suspect the integration and matter of time exist in recent history. The question I struggle with is, what ought a neutral-good aligned individual do about it?


I think neutral good as an alignment inherently comes with a disadvantage in making impact. You have to take certain levels of risks at certain extremes to be able to influence anything.


I'd like to be neutral good, but recognize that one may need to dip into chaos depending on the landscape they find themselves in. :-)


"Nothing to hide" is the worst argument possible on the subject


It's easy to use face recognition on someone that has nothing to hide. That does not make it useful for tracking someone that does have something to hide. Is it enough to put on a mustache and some glasses when you walk around a corner to fool the system? A painted face on the t-shirt and a helmet on the head?


> all I hope is that people realise sooner or later that the level of safety you can acquire in a society is inversely related to the level of privacy individuals can have

What leads you to think that this is always true? Do you not think there are any other solutions to reducing crime other than increasing surveillance?


I do, but at the same time I think that lower-hanging, yet not so easy-to-pick, fruits are ignored already. In that case, it becomes more feasible to target far out approaches that are simply easier to implement. Unfortunately, those also happen to be ones that compromise privacy.


That premise assumes that one can control safety, but that might be a hard assertion to build shared understanding on because not everyone shares that assumption.

Perhaps you would have more luck trying to build a foundation on the relationship of risk management and privacy. Mitigation vs acceptance vs avoidance, etc.


It is not zero-sum. You can increase safety while keeping privacy. Putting airbags in cars makes us safer without losing privacy (to any meaningful degree). Even if there is a relationship curve, you can shift the curve or jump between them.


Those who would give up essential Privacy, to purchase a little temporary Safety, deserve neither Privacy nor Safety.


When you talk about balance, you must associate a timeframe with it. The amount of safety you get by sacrificing a single unit of privacy can be changed through technological advancement.


I can't read the article without an Economist account, but I think the headline is a bit too broad. There are a lot of people who happily use facial recognition to unlock their iPhones every day, and I haven't seen much of a backlash against that.

Why not? Because it's isolated on the device, and can be turned off whenever you want. I think the real concern is the combination of facial recognition with vast centralized databases. Those big centralized databases are just begging to be exploited and abused, whether for commercial, authoritarian, criminal, or even creepy "LOVE-INT" stalking by the people running the databases.

I think facial recognition (and other forms of recognition) can have a bright future in society, but only if they conform to our expectations of privacy. Those expectations have been created over thousands of years based on how we interact with people. The closer that computer interaction can hew to existing social conventions, the smoother path it will have to adoption.

I don't mind if the waiter in a local restaurant recognizes me when I come in; in fact I like it! But I don't think I would like it very much if I knew he was calling 15 different data brokers to report on me every time he sees me.


>I can't read the article without an Economist account

try https://outline.com/BhW2BL


>Jack Marks, who manages Panasonic’s public-safety products, called it “short-sighted and reactive”. The technology exists, he said; “the best thing you can do is help shape it.”

Not using it IS "shaping" it. Just not into a shape profitable for this dishonest huckster.


It's like nuclear power, technology is what we make of it.

Do we want to use it for crime-fighting purposes only, effectively making it almost impossible for someone to commit a violent crime and get away with it, and thereby making a much safer society?

I'm all for it; there are some nasty people out there: serial killers, pedophiles, drug cartels. This technology deployed at scale would make these crimes much harder to get away with.

As long as it's another way to ensure democratic laws are respected, I don't think that completely forbidding it makes sense.

I'm all for regulation, we don't want this technology to be abused.


The thing is, it depends on what you call 'criminal' at the moment, right?

And when you know you're being watched, you start self-censoring, adapting to whatever is deemed the 'right thing' to avoid blame or judgment.


Ski mask. You might catch/convict a guy with it, but soon after a lot of people are just going to wear them all the time.


The same fundamentals used in facial recognition can be applied to 3D pose estimation for gait recognition. This allows identification of a person without biometric features like a face or iris [1].

Facial recognition is used as a catch-all term because it's easily understood, but the deep-learning behind it is much more widely applicable.

[1] https://arxiv.org/abs/1710.06512


Yup. About half the US needs them for 3 or more months per year. And then there's the rise of surgical masks to prevent the spread of viruses.

I also foresee a surge in wide-brimmed hats.


I remember stories from 5 years ago or so about artists who were making facial recognition blocking scarves. The scarves made you look like a lightbulb on camera, so they couldn't see your face.

I chuckled, because it was ridiculous, and now I'm wondering if they make them in professional colors. . .


I don't know how effective this technology is at identifying masked people, if at all.

But then again, how suspect is it to wear a ski mask? Everyone will notice.


America is turning against facial recognition, until it's not. Total surveillance looks like it's inevitable.

Surveillance will come slowly, then suddenly


America isn't turning against anything. There was no shortage of people that bought the new iPhone with face scan technology.

I believe we live in a society where it's already possible to track the vast majority of people to a general area 100% of the time. Technology is progressing; soon it will be real-time tracking of targeted individuals 100% of the time. That will slowly expand to all individuals. Between cell phone tracking, license plate scanners, and facial recognition cameras, the system knows where you are.


Actually, there's a proliferation of private facial recognition software being deployed across almost all large buildings in the United States; it's just behind the scenes and not being used by law enforcement. This is the standard for America: law enforcement gets barred from high tech, but private industry is free to use it. Eventually they sell it back to law enforcement.


This backlash is the silver lining to China’s shortsighted implementation of this technology in their world-renowned police state.


This may be an unpopular opinion here but on the flip side have you been to China lately?

Street crime (including petty thievery) has dropped immensely.

Any city in China is much safer than Oakland.


I don't think safety is the only thing that matters to people.


There is always a silver lining to authoritarianism.


Hasn't that always been the case in China?


Facial recognition is coming no matter what. There is no way people will sacrifice the conveniences that come along with it. Sure people will be against it when their only concept of its applications is to "catch bad guys", but when it begins to cut time out of their day they will have a change of heart.


I'm ready to sacrifice my extremely convenient way to unlock my iPhone just to know this technology is banned worldwide.

This technology opens wide doors for surveillance, for finding, identifying and pursuing political opponents - not just their leaders, but every person who was brave enough to protest on the streets.

It also opens wide doors for cyberbullying, and this issue is a problem for every country, not only for dictatorships.


What would you expect to be achieved by banning it? Intelligence services will continue with it, which is my major concern over its current use.


I think the concern is more about widespread use by LEOs and corporates, who have far less oversight - and will affect several orders of magnitude more people.


You have vastly overestimated the oversight on intelligence agencies.


And how much legal oversight do the FAANGs have? Let alone some local US police force run by individuals such as Sheriff Joe.


It will be less widespread and fewer engineers will push it forward.


This is how you get billion+ defense contracts. Because Intelligence Services always get what they want regardless of the price and we pay for it.


It will not give them access to this tech on every consumer device with a camera.


Yes, but the thing is that facial recognition IS just video compression.

You take a video stream and compress it down into a timestamped stream of IDs. It's really lossy, but it's the same as OCR or Speech-to-Text -- it is a tool that allows us to better handle large streams of data.

As always, the tool isn't the problem. It's the use of the tool.

(But I think we all know that.)
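
A toy sketch of that framing, assuming OpenCV for frame capture; recognize() stands in for any hypothetical face-matching backend that maps a frame to person IDs:

    import cv2  # OpenCV, assumed available

    def id_stream(path, recognize, every_n=30):
        """Reduce a video file to (timestamp, person_id) tuples."""
        cap = cv2.VideoCapture(path)
        frame_no = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_no % every_n == 0:  # sample ~1 frame/sec at 30 fps
                t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0  # seconds
                for person_id in recognize(frame):
                    yield (t, person_id)  # gigabytes in, a few tuples out
            frame_no += 1
        cap.release()

Fifteen gigabytes of bodycam footage in, a few kilobytes of "who, when" out - which is exactly why it's such a powerful (and dangerous) tool for handling large streams of data.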


Police are only one group who might use facial-recognition software. Body-worn cameras can be used without collecting face data or using facial recognition technology, or their legal uses can be severely restricted and clearly defined (or outright banned; there are lots of options). Why conflate all these things? They overlap sometimes, not all the time.


Too late

Photos are everywhere, and the software will only get more accessible. You can't put this back in the box.


It's getting discouraging to see facial recognition and machine learning conflated in every article's comments.



