
Again, how is that story a failure of AI/face recognition?

At some point they even have a human compare the likeness, and the human also concludes it is the same person.

The article even features the sentence "Steve Talley is hardly the first person to be arrested for the errors of a forensic evaluation."

And yet people seem to be hell-bent on making it about AI.


> And yet people seem to be hell-bent on making it about AI.

Of course, because the alternative is to make it a human failure of critical actors in the justice system. Every story about mistaken identity has dozens of points where humans, police officers and magistrates, made a judgement call and made it wrongly. And the call was always obviously wrong.

That is the case here and in the linked article. You can clearly see the eyes don't match, the jaw is different, ... and in Steve's case you see the face is too small, his nose is much narrower than the robber's, the hairline doesn't match, Steve has a square face and the robber doesn't, their body shapes (the shoulders) are very different, and Steve's ears are almost pointy, whereas the robber's are much more rounded and much shorter than Steve's. It is not a reasonable mistake for the 10-15 people involved to make.

In other words, the inevitable conclusion is that:

1) the police and prosecutor as well as at least 1 judge knew they had the wrong guy

2) they all cooperated to use their power to extract a wrongful confession from the guy, including that judge

3) they used "testimony" from someone with a clear grudge against him without question

4) which additionally was not reasonable given the bad quality images

5) they refused to believe testimony when it disagreed with their working hypothesis

6) instead, they used psychological torture to force a confession from an innocent person

7) essentially, they refused to set a person free without being offered another victim

Clearly, at the very least, they've shown that, both as an organisation and as the individuals in it, they would much rather wrongfully convict an innocent person than be left without suspects or leads. They weren't protecting the bank, society, or anyone; they were VERY clearly abusing and violating the law to protect themselves from embarrassment.

People just can't deal with this: that police fight to get someone, anyone, convicted at all costs rather than making damn sure they got the right guy, as the law demands. That they do this regardless of the cost to the suspect and to society is not something most people are willing to consider ...


It’s a failure in the sense that people put too much confidence in this kind of algorithm and place it above any eyewitness, when it should just be considered another piece of evidence.


Also, it is not merely a failure of people (in putting too much confidence into these algorithms). These systems are often actively marketed as deserving that confidence (fine print about culpability notwithstanding).


>> Again, how is that story a failure of AI/face recognition?

Because AI identified the wrong man. That a human also did, does not make the identification by AI any less wrong.


I would expect algorithms like that to put out likelihoods, not hard identifications. All they do is say "this person looks like that person". I wouldn't read that as "wrong".


Facial recognition systems are image classifiers where the classes are persons, represented as sets of images of their faces. Each person is assigned a numerical id as a class label and classification means that the system matches an image to a label.

Such systems are used in one of two modes, verification or identification.

Verification means that the system is given as input an image and a class label, and outputs positive if the input matches the label, and negative otherwise.

Identification means that the system is given as input an image and outputs a class label.

In either case, the system may not directly return a single label, but a set of labels, each associated with a real-valued number, interpreted (by the operators of the system) as a likelihood. However, in that case the system has a threshold delimiting positive from negative identifications. That is, if the likelihood that the system assigns to a classification is above the threshold, that is considered a "positive identification", etc.

In other words, yes, a system that outputs a continuous distribution over classes representing sets of images of peoples' faces can still be "wrong".
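To make that concrete, here is a minimal sketch of both modes in Python. Everything in it is assumed for illustration: the cosine-similarity measure, the embedding vectors, and the 0.8 threshold are stand-ins for whatever a real system uses, but the collapsing of a continuous score into a hard yes/no is the point:

    import numpy as np

    THRESHOLD = 0.8  # hypothetical cut-off chosen by the system's operators

    def similarity(a, b):
        # Cosine similarity between two face-embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(probe, enrolled):
        # Verification: image embedding + claimed label -> positive/negative.
        return similarity(probe, enrolled) >= THRESHOLD

    def identify(probe, gallery):
        # Identification: image embedding -> class label (person id) or None.
        # `gallery` maps class labels to enrolled embeddings.
        best_label, best_score = None, -1.0
        for label, enrolled in gallery.items():
            score = similarity(probe, enrolled)
            if score > best_score:
                best_label, best_score = label, score
        # The continuous score is collapsed into a hard decision here:
        return best_label if best_score >= THRESHOLD else None

Whatever distribution over labels exists internally, what reaches the operator is the hard match above the threshold.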

Think about it this way: if a system could only ever signal uncertainty, how could we use it to make decisions?


Similar to the way you could look at guns and cigarettes. 'Guns don't kill', 'It's your own responsibility', etc.


Have you tried Human Resource Machine?


I don't see much explaining of the capabilities of AI in the article.


Pointing out a colossal failing of ML-based facial recognition is the point of the article. Without that failure, there is no story here.


Where exactly is the failure? It identified a person who looks similar to the person in the driver's license? It could even actually be the same person, given that the driver's license is assumed to be fake (so the fake could use the photograph of the suspect).


OK, but you can find such quotes for everything: "We estimate the global demand to be about 3 computers", "nobody will need more than 16KB of memory", and so on.

(I'm not a fan of XRP, anyways)


What makes you so sure that these people were arrested for telling the truth? What about telling the truth would be an offense worthy of arrest in the US? Assange and Manning come to mind, for leaking government secrets.

But according to the article, most arrests happened in connection with Black Lives Matter protests.

It seems equally plausible that more people than ever try to evade arrest by claiming they are journalists. Or journalists have less and less ethics when doing their jobs.

Not saying that's the root cause, just that you cannot simply assume all those people were arrested for telling "the truth".

And how do people who bemoan that stand on the subject of censorship on social networks?


Almost every day during the height of the protests there were videos circulating of the police arresting (and using violence against) people who were obviously journalists.

For example, a CNN reporter was arrested mid-broadcast and there was footage of some people with cameras being attacked even though they were sat at the side of the road filming rather than taking part.

"Equally plausible" is false equivalency.


If you go read the source for the numbers, journalists "detained" in a group of protestors being detained were included in the count. You can easily rack up large numbers that way. The journalists caught up in these were allowed to leave after they had the group under control.


The link included in the comment you responded to says: "About half the journalists here are freelancers, who may lack the institutional support of a newsroom and the financial resources for a potentially expensive legal defense."

Journalist isn't a protected title. And I wonder how many activists don the "Journalist" label at protests.


I have heard it is a common strategy among activists now, but that was a claim made by their "opponents", so not sure how prevalent it is.


> What makes you so sure that these people were arrested for telling the truth?

> But according to the article, most arrests happened in connection with Black Lives Matter protests.

Kinda answered your own question here. Journalists were arrested for being present at newsworthy events, recording the truth as it unfolded before them.


Most probably weren't even arrested. Just "detained" along with the group of protestors they were amongst.


> But according to the article, most arrests happened in connection with Black Lives Matter protests. It seems equally plausible that more people than ever try to evade arrest by claiming they are journalists.

Not really. The First Amendment protects freedom of the press as well as freedom of assembly. Arresting protestors has a similar chilling effect on speech.


Pretty sure that only applies to peaceful protests.


Learn at what moment your defenses are down so that they can sell you stuff, presumably. Isn't that the general assumption about tracking?

Or sell your data to interested parties. Maybe if you liked certain GitHub repos, you are more likely to vote Democrat or whatever.


Unless you are rich, then poor people are the problem.


De-gentrification is a good thing if you don't like cupcakes and cafes and nice restaurants and nice shops.


Too bad people can't make cupcakes for themselves.


That's a lot more effort than a quick $2 at the corner cafe.


And it's a good thing if you like affordable rent, no obnoxious hipsters and their shitty music, POC, and people whose politics are informed by more than Buzzfeed.


Do you know how I can tell you’ve never lived in a low income area?


I lived in the Latino part of Brooklyn for a good while. It was much better before the hipsters came. I will always prefer that to what gentrification brings.


What is meant by 4)? So users have the right to see a web site without ads? I think if users don't consent, you should be allowed to block their access to your web site?


They can see ads. You just can't process their PII to target them.

You know, it used to be that ads were targeted by where they were shown, not by who was looking. That's a return to that model.


A tracking ID isn't PII, though?


What is or isn't PII, which is a US legal term, is irrelevant.

What matters is if it's Personal Data.

Personal Data is defined by the GDPR as:

"‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person".

I would say a tracking ID falls under "an identification number [or] online identifier"...


Fair point on PII.

But to the rest of your post: I don't think so. I think "an identification number" there means something like a government-issued ID number.

An online identifier would have to identify you (e.g. my Hacker News username is probably identifiable).

The way I think of it: a number identifies me if someone who isn't authorised to know who I am can look at it in a system and then go off and correlate that info to find me, without further reference to other data in said system.

A database ID doesn't count, because you'd then need to look up something actually identifiable in the system to figure out who I am; neither does an opaque tracking number.

My social security number is identifiable; my email address may be identifiable; if I gave birth in region X to octuplets, then that probably is too.


If it can identify the person and be deanonymised, it is.


If your basis for processing private data is consent, then under GDPR one of the conditions on consent is that it has to be freely given; it can't be traded for something.

In essence, under EU law privacy is an inalienable right; it's not something that can be freely contractually sold away (alienated) by the users. If you have a contract where users agree to allow you to do whatever you want with their data because you give them $100 or show some content, then that does not fit the definition of consent according to GDPR, and this contract does not - can not - give you the right to process their data as you wish; that particular clause in the contract is effectively void, because the users are "selling" something they can't legally sell.

If some data is required to fulfil your contractual obligations to the user (for example, processing their address to deliver pizza), then that is a legitimate use under GDPR 6.1.b which does not require consent, but if you'd want to use the same data for some other purpose (for example, using that same address for targeting advertising or giving it to a third party) then the contractual need clause 6.1.b wouldn't apply, you'd be stuck with 6.1.a (consent) and that is valid only if it's a genuine free choice without some benefit or service being conditional on providing "consent".

So you technically are allowed to block access to your site for people who don't click a checkbox saying "I agree to stuff". However, if you do so, then clicking that checkbox does not constitute freely given consent, so it can't give you any rights to use the data of the people who checked it; for the purposes of GDPR, that checkbox is simply meaningless if access to your site was conditional on it. So users have the right to (and will) file complaints about illegitimate use of their data right after clicking the "I agree to stuff" checkbox.
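If it helps, my reading of that decision logic boils down to something like this (a toy Python sketch, not legal advice; the function and flag names are made up):

    def lawful_basis(needed_for_contract, consented, access_was_conditional):
        # GDPR 6.1.b: data necessary to perform the contract (e.g. the
        # delivery address for the pizza) can be processed without consent.
        if needed_for_contract:
            return "6.1.b contractual necessity"
        # GDPR 6.1.a: consent counts only if it was freely given, i.e. not
        # extracted by gating access to the service on the checkbox.
        if consented and not access_was_conditional:
            return "6.1.a freely given consent"
        return None  # no valid basis: processing this data is unlawful

Note that the basis is evaluated per purpose: the same address can pass under 6.1.b for delivery and still fail for ad targeting.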


> I think if users don't consent, you should be allowed to block their access to your web site?

No. That's illegal. Because:

- the functionality of your site does not depend on collecting user data for ads

- you can show ads without collecting user data


Why should they?

