>If you look like someone the system is looking for, there's nothing you can do to whitelist yourself other than wear something on your face. You will be flagged every single time until a human looks at it, and if they're stopping people, that means being stopped, every time.

Wouldn't such a system (ideally) learn the differences between [who it expected] and [who it found] and attempt to improve its accuracy to prevent false positives in the future? I don't know if this is too basic, but I'd assume training on a set of similar-but-not-an-exact-match data would be vital for accuracy at scale.
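
To make that concrete, here's a rough Python sketch of the kind of feedback loop I mean (entirely hypothetical: the class, the thresholds, and the plain numpy embedding vectors are my own stand-ins, not any real deployment). Each human-confirmed false positive is stored as a "hard negative" for that watchlist identity, and the match threshold for that identity is raised above the best-scoring cleared lookalike, so the same person stops getting flagged:

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two face embeddings.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    class WatchlistMatcher:
        def __init__(self, base_threshold: float = 0.6):
            self.base_threshold = base_threshold
            self.watchlist: dict[str, np.ndarray] = {}          # identity -> reference embedding
            self.hard_negatives: dict[str, list[np.ndarray]] = {}  # identity -> cleared lookalikes

        def add_identity(self, name: str, embedding: np.ndarray) -> None:
            self.watchlist[name] = embedding
            self.hard_negatives[name] = []

        def threshold_for(self, name: str) -> float:
            # Require a higher score than the best cleared lookalike, plus a margin.
            negatives = self.hard_negatives[name]
            if not negatives:
                return self.base_threshold
            worst = max(cosine(self.watchlist[name], n) for n in negatives)
            return max(self.base_threshold, worst + 0.05)

        def flag(self, probe: np.ndarray) -> list[str]:
            # Return every watchlist identity the probe face matches.
            return [name for name, ref in self.watchlist.items()
                    if cosine(probe, ref) >= self.threshold_for(name)]

        def record_false_positive(self, name: str, probe: np.ndarray) -> None:
            # Called after a human reviewer confirms the flagged person is NOT a match.
            self.hard_negatives[name].append(probe)

The specific numbers don't matter; the point is that the outcome of human review has to feed back into the matcher. If cleared stops never update anything, you get exactly the failure mode described elsewhere in this thread: the same person flagged every single time.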




I read /r/legaladvice over on reddit.

At least a couple of times a month there's someone who has had repeated issues with law enforcement banging on their door, sometimes even with guns drawn, because someone with a similar name had a warrant out for their arrest or a criminal background of some kind, or because someone the police were looking for used to live at their address years ago. And there seems to be very little an average person can do to get "the system" to correct itself; even after multiple incidents, the police keep coming back again and again because "the system said so". Massively scaled automated facial recognition will multiply that many times over, especially when officers are trained to just trust "the system" when "the system" says you're a criminal.


Yep. One only needs to look at the recent story The Machine Fired Me[1] to see the potential impact of such system errors once things become connected and automated.

That poor guy was effectively fired despite the fact that every human involved agreed he shouldn't have been and should still be employed.

[1] https://news.ycombinator.com/item?id=17350645


Thanks for sharing this.

Perhaps I interpreted it wrong, but I think this speaks volumes about the amount of low-hanging fruit that exists for improving such systems: obviously, the above _shouldn't_ happen but does, which raises the question: why?

I may be (and probably am) wrong, but my guess is that many PDs are still running incredibly outdated systems (at least, mine is) and would vastly benefit from modernizing and improving them. There are many other possible explanations (improved systems don't fix this problem, improved systems have other problems, administration doesn't care about this problem, etc.), but when I read stories like this I think "modern technology would help/fix that" rather than "adding/scaling technology would exacerbate existing problems".



