Police accused of deploying facial recognition 'by stealth' in London (independent.co.uk)
161 points by pmoriarty on July 28, 2018 | 68 comments



Facial recognition tech can spot you in a crowd if you take part in a demo against the government

Self-driving car tech can simply refuse to take you there


Or even worse, the facial recognition system can alert governments, companies, etc. when you aren't where you are supposed to be.

For example, say you take the bus or train to work every day at a specific time. If one day you aren't on it, the system can alert the authorities or your boss.

And if facial recognition somehow gets into your home, you could end up with a 1984 or Black Mirror dystopian society.


> you could end up with a 1984 or Black Mirror dystopian society

We already have that; let's stop pretending we're not already living in a dystopian society.


Stick on an anonymous mask and walk?


Illegal in Germany during a demonstration and grounds for arrest. The police also routinely film you during demonstrations and capture all your cellphone metadata and location data. Facial recognition just automates part of the surveillance. But for groups like ANTIFA they actually have dossiers on the most prominent people and track them through demonstrations manually.


But for groups like ANTIFA they actually have dossiers on the most prominent people

A group that attempts to defeat fascist-detection technology by the most laughably transparent means


Just some sunglasses will defeat facial recognition.

https://www.forbes.com/sites/andygreenberg/2010/09/27/how-to...


That is from 2010. Not to say this kind of thing isn't possible with today's tech, but I'd reckon that if someone trains a model specifically to be deployed for identification in adversarial settings, this would be one of the first points to address. And at that point it basically becomes an arms race. Just that once you've got the tape, it's easy to go back two years later, re-identify everyone with $magic_tech_from_in_two_years, and add that to the dossier.


Masks are sometimes illegal precisely because of that.


Somewhat ironically, burqas might be able to get around some restrictions because of their use in religious contexts.


I've always wondered how much crime I could get away with by pretending to be a woman wearing a burqa, then taking it off.


They're not illegal in the UK.


Not by default.

But police can impose an order - for a defined area for a limited amount of time - under Section 60AA of the Criminal Justice and Public Order Act 1994 allowing officers to demand that one removes "any item which the constable reasonably believes that person is wearing wholly or mainly for the purpose of concealing his identity".


Interesting. Never seen that in action yet. Sounds like there's some wiggle room, e.g. for carnival masks worn for entertainment purposes.


Yet.


I'd imagine some recognition technology could take into account body proportions, posture, (commonly worn) apparel, etc. to make facial recognition less necessary.



You will be sued by Warner Brothers...


Combine this with the recent advances in deep learning based lip reading and you should really know how fucked we all are.


ignoring issues related to scale, I wonder how the tech's accuracy compares to London's team of human "super-recognizers."[1]

1. https://www.newyorker.com/magazine/2016/08/22/londons-super-...


I strongly suspect that “super recognisers” don’t exist and it’s just a cover story concocted at a time that they believed facial recognition worked a lot better than it does. You know, like carrots and radar.


>Ignoring scale, I wonder how the tech's accuracy compares to London's team of human "super-recognizers."

You can't ignore scale when it is the game-changer of automated facial recognition.

The tech compares very poorly to super-recognizers, with an absurdly high number of false positives, putting the wrong innocent people under an aggressive suspicion that they may never know about or be able to dispute.


in a hypothetical you can ignore whatever you want :) infinite super-recognizers vs. infinitely available facial recognition is what I'm wondering about here.

> The tech compares very poorly to super-recognizers, with an absurdly high number of false positives, putting the wrong innocent people under aggressive suspicion.

well that's not good!


>well that's not good!

The whole concept is a nightmare for our civil freedoms even if the tech were perfect -- precisely because it can realistically scale -- and must be protested until the whole idea is abandoned.

>infinite super-recognizers vs. infinitely available facial recognition is what I'm wondering about here.

You keep editing your comments to remove contentious remarks. I can only reply to the one I've read.


apologies for the edits; in attempting better clarity I am achieving the opposite.

> The whole concept isn't good even if the tech was perfect -- precisely because of scale -- and must be protested until the whole idea is let go of.

I don't disagree with this. My questions are not political in nature; I was simply curious whether the tech had approached or surpassed the accuracy of super-recognizers.


Why doesn't such a system scale? What about it makes it undesirable even if the tech were perfect?


False positives are fine. It is basically a Bloom filter: false negatives are very bad, but false positives are fine.
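
To make the analogy concrete, here's a minimal Bloom filter sketch in Python (sizes and names are illustrative, not from any real system): it can wrongly answer "maybe present", but it can never wrongly answer "absent", which is exactly the asymmetry being described.

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: false positives possible, false negatives impossible."""

        def __init__(self, size=1024, num_hashes=3):
            self.size = size
            self.num_hashes = num_hashes
            self.bits = [False] * size

        def _positions(self, item):
            # Derive num_hashes bit positions from salted SHA-256 digests.
            for salt in range(self.num_hashes):
                digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            # True means "maybe in the set" (could be a false positive);
            # False means "definitely not in the set".
            return all(self.bits[pos] for pos in self._positions(item))

    watchlist = BloomFilter()
    watchlist.add("wanted-person-1")
    assert watchlist.might_contain("wanted-person-1")  # never a false negative
    # might_contain("random-passerby") is usually False, but can be True by chance

Whether that asymmetry is acceptable when the "filter" stops real people is exactly what the rest of this thread argues about.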


As the other respondent says, it depends on your view of accidentally involving the innocent in police activity.

In an ideal world, someone wrongly identified would just be questioned; they'd establish that there was an error, and there'd be no physical impact, just a brief disruption. That is the approach which genuinely could be beneficial to society.

However, I am somewhat concerned that many police officers won't keep the fact that it's fairly likely to be wrong at the forefront of their minds. Combine that with an eagerness among some police to start applying "justice" immediately (rough handcuffing, pushing to the ground, etc.), at best treating people like hospital trolleys and at worst inflicting injury, and it's a recipe for escalation and injustice in scenarios involving completely innocent people (you could argue it shouldn't happen with those who turn out to be guilty either, but that's a distraction!)


It just needs to be framed as a search problem (many results), rather than recognition (one result).

E.g. "Here are the 200 people the system thinks kinda look like that picture you gave me. The one you're looking for might be in here. Then again, they might not. You take it from here. Happy detectiving. "


This is a more direct way of phrasing what I was getting at. Thanks.


It depends on your opinion on the old question of whether you'd prefer an innocent person in prison or a criminal walking free.


If murder is bad because you're taking away the life that might've been lived by a third party, then one could argue that false incarceration is nearly as grave as murder.

That being said, I don't agree with your premise. Facial search/recognition is just the first step of the investigation, identifying a pool of "possible leads". This is very similar to creating a list of people who were in the neighbourhood. Lots of false positives initially, but still a good place to start looking.


Sounds like a convenient cover for parallel construction more than anything, TBH.


How is this different to lots of coppers actively looking for suspects? We have so much intrusive CCTV here that I can't quite get the complaint. Was it just the introduction with lip service to community engagement?


The difference is automation, and the issues that come with that.

If you look like someone the system is looking for, there's nothing you can do to whitelist yourself other than wear something on your face. You will be flagged every single time until a human looks at it, and if they're stopping people, that means being stopped every time.

If the system has the wrong face in the database, bad luck.

If someone finds a way of preventing detection, say make-up or big glasses, the system doesn't work any more. As far as I know, even the best facial recognition that relies only on ordinary cameras rather than depth cameras loses significant accuracy with either of those.

There are a lot of people in London; the false positive and negative rates aren't usually published, and anything above zero means those technical problems happen some undisclosed number of times too many.

Did you ever hear the story of the guy with a very common name ending up on the no-fly list? The no-fly list was for a long time, and I think still is, literally just a list of names, meaning that if someone with a common name goes on the list, everyone with that name gets stopped at the airport while the database is checked to see whether they're the same person. Now think what happens when someone with a generic-looking face ends up in the police database.
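
To put rough numbers on the false-positive point above (every figure here is assumed for illustration; the real rates, as noted, usually aren't published):

    # Back-of-the-envelope base-rate arithmetic, all numbers invented.
    scans_per_day = 1_000_000     # faces scanned across a city in a day
    real_watchlist_hits = 10      # genuine watchlist members among them
    false_positive_rate = 0.001   # an optimistic 0.1% per scan

    false_alarms = (scans_per_day - real_watchlist_hits) * false_positive_rate
    precision = real_watchlist_hits / (real_watchlist_hits + false_alarms)

    print(f"false alarms per day: {false_alarms:.0f}")                 # ~1000
    print(f"chance a flagged person is a real match: {precision:.1%}") # ~1.0%

Even with a seemingly tiny error rate, almost everyone the system flags is innocent -- the no-fly-list problem at camera speed.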


>If you look like someone the system is looking for, there's nothing you can do to whitelist yourself other than wear something on your face. You will be flagged every single time until a human looks at it, and if they're stopping people, that means being stopped every time.

Wouldn't such a system (ideally) learn the differences between [who it expected] and [who it found] and attempt to improve its accuracy to prevent false positives in the future? I don't know if this is too basic, but I'd assume training with a set of similar-but-not-an-exact-match data would be vital for accuracy at scale.
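
For what it's worth, that idea exists in metric learning as "hard-negative mining": confirmed false-positive pairs are fed back as negatives the model must learn to separate. A toy numpy sketch (the embeddings, noise scales, and margin are all invented to make the example concrete):

    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Pull the anchor toward the positive, push it away from the negative.
        d_pos = np.linalg.norm(anchor - positive)
        d_neg = np.linalg.norm(anchor - negative)
        return max(0.0, d_pos - d_neg + margin)

    rng = np.random.default_rng(1)
    watchlist_face = rng.normal(size=128)                       # who it expected
    other_photo = watchlist_face + 0.10 * rng.normal(size=128)  # same person
    lookalike = watchlist_face + 0.05 * rng.normal(size=128)    # who it found

    # The look-alike sits closer than the genuine photo: a "hard" negative.
    loss = triplet_loss(watchlist_face, other_photo, lookalike)
    print(loss)  # > 0, so gradient steps would push the look-alike away

Whether a deployed system actually closes that feedback loop is another question.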


I read /r/legaladvice over on reddit.

At least a couple times a month there's someone who has had repeated issues with law enforcement banging on their door, sometimes even with guns drawn, because someone with a similar name had a warrant out for their arrest, or a criminal background of some type, or someone the police wanted used to live at their address years ago. And there seems to be very little an average person can do to get "the system" to correct itself; even after multiple instances, the police keep coming back again and again and again and again because "the system said so". Massively scaled automated facial recognition will multiply that many times over, especially when officers are trained to just trust "the system" when "the system" says you're a criminal.


Yep. One only needs to look at the recent story The Machine Fired Me[1] to see the potential impact of such system errors once things become connected and automated.

That poor guy was effectively fired despite the fact that every human agreed he hadn't actually been fired and should still be employed.

[1] https://news.ycombinator.com/item?id=17350645


Thanks for sharing this.

Perhaps I interpreted it wrong, but I think this speaks volumes about the amount of low-hanging fruit that exists for improvements in such systems: obviously, the above _shouldn't_ happen but does, which raises the question: why?

I may be (and probably am) wrong, but my guess is that many PDs are still using incredibly outdated systems (at least, mine is), and would vastly benefit from modernizing and improving them. There are many other possible reasons (improved systems don't fix this problem, or improved systems have other problems, or administration doesn't care about this problem, etc.), but when I read stories like this I think "modern technology would help/fix that" rather than "adding/scaling technology would exacerbate existing problems".


The facial recognition can be improved automatically by hooking in mobile phone cell tower tracking (cell tower traffic management) in real time, which I believe the police here in the UK also have. Over time the real-time CCTVs can also adjust for differences in camera sensors, like colour hue and white balance, because not everyone is going to follow the same route at the same speed at the same time as everyone else. It's not hard to do if you join up the other technology that is also in use: cash machines, shop payment systems, car park ticket machines, etc. Facial recognition on its own has more weaknesses than when it's joined up with other tech where users leave a digital footprint.

Take the new electricity smart meters being rolled out in the UK and elsewhere. Over time, it would be possible to work out which devices are in use from the amount of electricity consumed and the pattern or profile of that consumption. E.g., if you saw a small increase in electricity use at 4am for about a minute, you could surmise that someone has turned a light on and possibly gone to the toilet. Over time, you will see the same or similar amount of electricity used, which could then be used to monitor the toilet habits of the individuals in the property. Measured over the years, this could highlight medical problems -- prostate trouble, say, if it's known a male lives in the property.

Take a washing machine: it might be hot fill or cold fill, but you could watch the consistent rise and fall of electricity as it goes through its cycle, finishing up with a spin. Machines are excellent for consistent demand of electricity; in time, you could even work out which cycle or programme has been selected. Combined with other devices' own demand patterns, you could work out everything electrical in use, and also identify or predict faulty devices. So the electricity companies could predict wasteful users as well as conservative ones, if other factors are known like the type of property -- made easy on new-build housing estates.

This in turn can also be combined in real time with police systems, taking you closer to the fictional Minority Report, so police could be on hand to prevent a situation in the home or in the street -- which I would hope they would be there to do, rather than just arriving after an event to pick up the pieces, as that is also traumatic for them, seeing as they are human beings too.
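
A crude sketch of the smart-meter inference described above (the thresholds, wattages, and one-reading-per-minute resolution are all invented): even a naive change detector over minute-level readings reveals the 4am light.

    import numpy as np

    def detect_load_events(watts, threshold=50.0, min_duration=2):
        # Find spans where consumption exceeds the baseline by `threshold` watts
        # for at least `min_duration` consecutive minutes.
        baseline = np.median(watts)
        above = watts > baseline + threshold
        events, start = [], None
        for i, hot in enumerate(above):
            if hot and start is None:
                start = i
            elif not hot and start is not None:
                if i - start >= min_duration:
                    events.append((start, i))
                start = None
        if start is not None and len(above) - start >= min_duration:
            events.append((start, len(above)))
        return events

    # Toy day of minute-level readings: 200 W base load, a 60 W light on 04:00-04:02.
    watts = np.full(24 * 60, 200.0)
    watts[240:242] += 60.0
    print(detect_load_events(watts))  # [(240, 242)] -> someone got up at 4 am

Real load-disaggregation research (NILM) goes much further, matching multi-step appliance signatures like the wash/heat/spin cycle mentioned above, but the privacy implication is already visible in this toy version.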


It's a gradual worsening of civil freedoms and rights.

Intrusive CCTV is intrusive. Facial recognition is more intrusive.

It wouldn't be different from an impossibly large number of law enforcement officers keeping watch on everyone at all times, like in a high-tech Big Brother nightmare -- except the tech doesn't work properly, and the high number of false positives will lead to the wrong innocent people being put under suspicion and potentially having their lives ruined by a police investigation that presumes you're guilty until you've been cleared by a court.

They shouldn't be watching or tracking us in the first place.


Commonly people say "if you have nothing to hide you have nothing to fear". Leaving aside the origin of the phrase, it is still a bad argument. You may not have something to hide now, but can you trust that what you do now won't become something to hide? We've seen many times in history oppressive regimes rise out of free states. This power doesn't just belong to the government of the present; it also belongs to the government of the future. There is the possibility of turnkey tyranny. There's a balance between protecting the people of now and the people of the future (part of civil rights).

It belongs to other groups as well. We've seen pretty frequently the state of computer security in governments. Can you also trust that this technology won't be hijacked? Can you tell me that this system can't be used by an invading force to quickly target important individuals?

It isn't always what a technology is being used for that makes it concerning, you also have to consider how it can be abused. This type of technology, and most surveillance technology, has a high potential for abuse. By both our own governments AND adversarial groups.


I do have stuff I want hidden. Why is that such a damn crime?


It isn't, and I'm not trying to imply it is.


In the same way that targeted surveillance from 50 years ago was different from the total surveillance of everyone today, or in the way that air strikes from two decades ago are different from the tens of thousands of drone strikes that the US launches every year, a number that is bound to increase at least another tenfold if drone strikes are automated.

The differences are cost and scope. The cheaper it is, the easier it is to move from applying it to "high-value and highly likely to be guilty targets" to increasingly more innocent people (because "why not?"). That's how governments think (unless the people strongly oppose these new changes).


"Quantity has a quality all its own."


Because it makes the activity much cheaper. The cheaper it becomes to track everyone, the more tempting it becomes to track not only criminals but also any other groups of interest that haven't committed any crimes.


Also, IMSI catchers are widely used by police.

https://news.vice.com/article/vice-news-investigation-finds-...

And none of this surveillance compares to the extent of surveillance in Northern Ireland.


What are you going to do about it?


I think for people who live in a place / do a job where a state harasses or meddles with them instead of protecting them, knowing how it works lets them protect themselves and even meddle back.

I think this is why activists / journalists / spies in some areas try to study the operation and practice tradecraft / opsec / cs.

Info asymmetry is power, isn't it?


Massachusetts is such a place


Well, the UK doesn't have the same free speech rights as the US, so it's much more risky to protest, especially in London these days.


What exactly is the use of this facial recognition? We all know from experience that British police couldn't catch a cold, let alone a criminal, with all those tens of thousands of 240p cameras, of which maybe 20% actually work. What is facial recognition going to solve if they can't even see the face on the camera?


Perhaps the time has come to invest in Groucho glasses.


isn't using it by stealth sort of the point?


The article says that the police had pledged to notify people of its use in the areas with leaflets and such, but failed to do so even when deploying the technology, which is why it is described as being done "by stealth".


they had posters.

> Lots of people didn’t seem to be noticing the posters at all… from a basic data protection point of view they’re not informing people well about what data they’re collecting or what is going on.

i don't think data protection applies to state surveillance. if people knew what data the state collected about them, it would defeat the point.


> if people knew what data the state collected about them, it would defeat the point.

No, it wouldn't. The point of keeping data is to be able to detect unusual activity and cross-reference information, not to spy. The people are the country, and they should be able to make an informed decision about what they want the government to track about people.


>No, it wouldn't. The point of keeping data is to be able to detect unusual activity and cross-reference information, not to spy. The people are the country, and they should be able to make an informed decision about what they want the government to track about people.

This ideal mindset breaks down for everyone other than "the good guys". The point of keeping what is tracked a secret is to prevent the-actual-people-you're-looking-for from just occluding that particular information. They can't really "opt in" to the kind of surveillance that is necessary to find predictive patterns in their behavior, because they (obviously) won't, and knowing what is tracked just tells them what to spend the extra effort hiding.


is it weird that i think there's no such thing as privacy between the state and individuals, while privacy probably remains between individuals?

i accept this model of reality and actually think it's sensible. personally i have no issue with that.


> i don't think data protection applies to state surveillance. if people knew what data the state collected about them, it would defeat the point.

What? The people the system is meant to protect shouldn't know about the system... at all?


Yes, so they can't interfere with how it protects them, as another commenter noted.

That parts of a system must be prevented from knowing about or interfering with the inner workings of other parts, to protect the integrity of the whole, is a general principle in many things, I think.


The article says that the information commissioner threatened legal action about this. I doubt she'd do that if the laws didn't apply.


i think her political ambitions combined with consumer privacy being an electoral issue is a better theory


You're painting everything with a very broad brush. The "state" by and large has to be transparent to the public.


interesting theories. not how i think of it.

i think it's more like this: to protect the privacy of other citizens whose data it collects, and the effectiveness of its intelligence, the state must be opaque to the public and selectively transparent to itself.

maybe one reason for the confusion / polarity on this topic is that individuals think of privacy through a framework of shame, something to hide, or simply not feeling like sharing, but for states and organisations privacy and secrecy are about maintaining operational / strategic effectiveness.


No? Police actions are supposed to be transparent to the public. Should we treat every single government agency as if they deal with super-secret national security issues and allow them to keep everything secret?


police must present as open and transparent because presence is a crime deterrent, order is more effectively kept when people trust / relate to / engage with them, and they are service- and public-facing, so an image of transparency reinforces faith in justice.

but investigations, methods, and intel must be secret to preserve capability, and also legal / judicial integrity.




