I wonder who provides them with this tech. A friend of mine quit a company because they were developing a VERY similar technology a few years back. He "lovingly" called it RPaaS (Racial Profiling as a Service). He went WAAAY above his pay grade to protest against the creation of the product, and it ultimately ended with him sending a letter of resignation.
we can keep up with well-reasoned arguments, it's just that the words that you're typing are incoherent. i guess we can assume that you think racial profiling is ok and being against it is "shitty"?
How about this: you construct a society that obsesses about race, I build a society that obsesses about property rights and we’ll see who wants to live where?
yeah we already live in a society that's obsessed with both. you can clearly see that when corporations use untested "AI" that racially profiles minorities as a way to "protect" their property. again, can you present some kind of coherent argument? are you trying to convince us of something, or just trying to be smug? bc i still don't understand what your point is or what you have to be smug about.
I think you’re being unfairly harsh. It’s good for workers to refuse to carry out orders that are unethical. Ideally, such actions would be carried out collectively by a union branch or some other worker organisation. But if that’s not feasible, the next best thing is for individuals to push back on unethical initiatives.
Also, not everyone agrees that the best way to counter the ills of capitalism is to replace the whole system with something else that may be better but could be worse (in different ways). Being so casually dismissive of such people’s effort to improve things within capitalism means they won’t ever be interested in ideas that go beyond reformism.
Overcoming capitalism or overcoming the concept of value? They did this because they felt that the value they could recoup from shoplifters/deterrence was more than the cost of building and maintaining this system. None of that requires capitalism.
Report this shit to your congresscritters who care. They may not even be in your district or in your state, but there are congresscritters who care about racial profiling like this. And they'll raise a stink.
Especially if you give them juicy details and a nifty new acronym they can get headlines with.
> with “largely lower-income, non-white neighborhoods” serving as the technology testbed.
There have been plenty of articles pointing out that certain demographics are very under-represented in training sets, leading to all kinds of errors [1] [2].
Technological advances aside, the company should have clearly disclosed their use of facial recognition technology - concealing that raises serious privacy issues.
Is there any law requiring disclosure of facial recognition?
Should there be?
I don't know if there is, but I don't think so? I think there probably should be -- but also, it'll be just another fine-print fades-into-landscape thing. All the signs about video monitoring don't really matter much; we just assume there's video monitoring everywhere and don't even notice the signs anymore. I assume facial recognition will soon be the same, unless it is regulated beyond requiring disclosure.
There are a number of state laws, but the last time I checked, the one in Illinois (which requires affirmative consent to do facial recognition) is the only effective one, because it contains what’s called a “private right of action”: if the state does not sue a company, you (each individual plaintiff) can sue for $5k in statutory damages, which can make class actions a huge threat even to rich tech cos.
Yeah any warning system will just turn into California’s prop ~~86~~ 65, which led every building in the state to add signage reading “this building may or may not contain cancer-causing chemicals.” Which is completely useless.
True in theory, but is the chemical composition of a building really more complex than the software installed in a building? I feel like there's some second law of supply chains that guarantees that every supply chain, physical or software, will grow in complexity beyond auditability.
The software lives in some hardware. The hardware can be seen and removed, and you can tell when it has been removed.
A chemical can't be seen, and can't be removed.
And even if the hardware is super entwined with the building, making it hard to remove, such as tied in with the HVAC system, there likely will be some toggle switch to turn off facial recognition. There's no toggle switch to remove chemicals.
Not for any landlord that doesn’t control every business and person in the building.
All it takes is UPS adding facial recognition to driver bodycams or something and suddenly your building has facial recognition in it. Better to just put the warning everywhere and forget about it.
I don't think this is a very good argument. A UPS driver/truck is temporary, and isn't a part of your business. That's like saying you have to put up a Prop 65 disclaimer for your place of business because someone's shoes might have a cancer-causing chemical in them.
If your place of business is actively adding cameras, and is actively using facial recognition, those are actions that your business has taken. A temporary action by another party shouldn't require you to post a disclaimer, especially if that action is something they themselves should be disclaiming.
Is it the landlord's job to prevent a tenant from using facial recognition in the tenant's area of the building? I think the tenant is in charge of putting up the disclaimer if the tenant does that.
As for facial recognition in common areas, the landlord can put a clause in the lease agreement requiring tenants not to do that.
If disclaimers are required for facial recognition and a UPS driver has a facial recognition bodycam, I assume it'll be the UPS driver that needs the disclaimer, not every place the UPS driver visits.
For the customer, eliminating either is very difficult. You can try boycotting or contacting the owner to complain. Even then, removing the chemicals is the harder problem: even if your boycott gets the owner's attention, the owner will have a harder time removing chemicals than removing facial recognition.
I think there definitely should be. I wouldn't want to set foot in a place that did this, and would be really angry if I discovered that a place was secretly doing this because it would rob me of my ability to protect myself.
If the best face recognition software has an error rate of 0.08% [1] and Rite Aid serves 1.6M customers a day [2], what moron thought that was a good idea?
I'm not defending Rite-Aid at all, but that's not the problem here.
It's pretty easy for the system to show the source photo to an employee, and then they themselves can judge if they feel confident the shopper is actually the same person.
Facial recognition software should always be implemented under the assumption that it makes plenty of false positives, and human review is always necessary.
(What you're talking about is 1,280 false positives a day, or less than 1 per store per day.)
Again -- not defending Rite-Aid here. Just saying that there's nothing wrong with the statistics of it.
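For anyone who wants to check the arithmetic, here's a quick sketch. The rates come from the comment upthread; the store count is my own assumption, not from the article:

```python
# Back-of-the-envelope for the figures quoted above.
fp_rate = 0.0008               # the 0.08% error rate cited in [1]
customers_per_day = 1_600_000  # ~1.6M daily customers cited in [2]
stores = 2300                  # assumed store count (my guess, not sourced)

false_positives_per_day = customers_per_day * fp_rate
print(false_positives_per_day)           # 1280.0
print(false_positives_per_day / stores)  # ~0.56, i.e. less than 1 per store per day
```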
"It's pretty easy for the system to show the source photo to an employee, and then they themselves can judge if they feel confident the shopper is actually the same person."
I would contest that. Matching someone against a low-quality photo is not necessarily something a human can do, and that's especially true if the human in question A: basically doesn't care and is likely to screw it up every way it is possible to screw it up, and B: is influenced by whether or not they're willing to directly confront someone who is nominally a criminal while being paid the de facto local minimum wage.
This plan is all sorts of infeasible and frankly stupid, and the purely human concerns are sufficient for that determination even before we add the tech in.
You are correct. In fact, there is research showing that humans are not particularly good at matching people to photos. This 2014 study showed a 15% error rate (with photos from identification documents, i.e. high-quality standardized photos, not low-quality random-angle ones!), and the real kicker is that passport officers (who do this as part of their job all day and presumably care about doing it well) surprisingly performed no better than random undergrads.
Which is precisely why you combine the two systems.
If humans get it wrong 15% of the time, that means they're reducing false positives by 85%.
That's the whole point of combining facial recognition with human review.
You still have to decide what to do with that final information considering that it's still not perfect.
But you should never be relying on facial recognition by itself. Even if humans are imperfect, they still improve the accuracy, and can make the final call "nope I'm just not sure, I'm not going to take action".
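A rough sketch of the arithmetic behind that 85% claim (mine, and it rests on the big assumption that the reviewer's mistakes are independent of the machine's):

```python
machine_fp_rate = 0.0008  # the 0.08% benchmark error rate cited upthread
human_error_rate = 0.15   # the 15% rate from the 2014 passport-officer study

# Under independence, a false positive only survives review when both err:
combined_fp_rate = machine_fp_rate * human_error_rate
print(combined_fp_rate)   # 0.00012 -- an 85% reduction in false positives
```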
The error reduction from adding a different system only applies if the source of the error does not manifest itself in both systems: if the errors are uncorrelated, you get an improvement like the one implied by the naive application of statistics, but not if they are correlated.
As an example, you can combine human review with automatic facial recognition of identical twins and likely not see much (any?) reduction in error rate at all.
Two independent "85% accurate" humans are not 97.7% accurate for identical twins either.
The question is how correlated are human and machine errors. I would guess fairly strongly, in which case human review would add little additional accuracy.
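A toy model of that point, using my own formula rather than anything from the thread: for two error indicators with the same marginal rate p and correlation rho, the probability that both err is p^2 + rho*p*(1-p).

```python
# How correlation between reviewers erodes the benefit of a second opinion.
p = 0.15  # each reviewer is wrong 15% of the time on a given face pair

def both_err(p: float, rho: float) -> float:
    """P(both err) for two same-rate error indicators with correlation rho."""
    return p**2 + rho * p * (1 - p)

print(both_err(p, 0.0))  # 0.0225 -> the "97.7% accurate" independent case
print(both_err(p, 1.0))  # 0.15   -> no better than one reviewer (the twins case)
```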
No, evidence suggests they are not. We're all very aware of the horror stories of AI misidentifying people as entirely the wrong species. Different AI systems are different, but in general AI seems to make quite different classes of mistakes from humans. Our brains work very differently from current statistical models.
So human review absolutely adds accuracy, you can generally assume. And it adds human-accountable judgment which is just as important.
Ultimately it's an empirical question, and unless someone has published the research, you and I will not be able to know for sure. I think there will be a lot of cases where two people simply look a lot alike, and any observer would have difficulty distinguishing them.
>In fact there is research that humans are not particularly good at matching people to photos...
>(Someone is now going to suggest, right, that's why we should have computers do it instead, bringing the circle back around again... but just no).
So where does this leave us for facial recognition? Should we ban both computer and human facial recognition, because they're both flawed? How would that be enforced? If a store employee thinks they recognize someone from a few minutes ago, are they supposed to ignore that fact, and pretend that they're different people, on the off chance that the guy might be someone else[1]?
"So where does this leave us for facial recognition?"
Where it leaves us is that it doesn't work, and it can't work. I see no evidence that there is some big reservoir of facial recognition quality to be extracted from the same basic data set. There are all sorts of reasons to believe that it is simply impossible to create a system that can be given a small percentage of the population as the targets and pick them out from millions of samples correctly.
Of all the disciplines, those trained in computer science should be aware of the concept that problems can be fundamentally hard or unsolvable.
However, I've been careful to phrase what I think may be fundamentally unsolvable as being related to "the same basic data set". Expansion of the data set provides other possibilities, and while I'm not ready to declare that adding that data will certainly solve the problem, I'm not ready to declare it fundamentally unsolvable either. Add portable device tracking, gait analysis, speech analysis, anything else some clever clogs can think of, and probably drop the requirement that de facto minimum wagers be asked to confront nominal criminals (I would assert there is no solution to the mismatched incentives there), and the problem may well be solvable. It would, however, require Rite Aid and anyone else planning to use this sort of thing to radically upgrade their hardware.
>Where it leaves us is that it doesn't work, and it can't work.
You didn't answer the second part of my comment:
"How would that be enforced? If a store employee thinks they recognize someone from a few minutes ago, are they supposed to ignore that fact, and pretend that they're different people, on the off chance that the guy might be someone else[1]?"
>and probably drop the requirement that de facto minimum wagers be asked to confront nominal criminals
Are you saying this on the basis that they're not qualified to make an identification, or that confrontation would put them at risk of violence? If it's the latter, it really doesn't have anything to do with facial recognition. It would still apply even if we replaced facial recognition with a 100% accurate oracle.
I was doing you the favor of ignoring the irrelevant hypothetical. I find "but what if something else entirely that you didn't say?" questions rather annoying. And I believe I was rather clear that the problems I am talking about extend beyond facial recognition, yes.
The hypothetical is very relevant because your stance implies that we should ban human facial recognition as well. That might count as "something else entirely that you didn't say", but asking about the implications of something that you propose is fair game. You can't write off follow-up questions with "well, I didn't say anything about that, and I find those questions rather annoying, so I'm not going to address them at all".
I would say that it means whatever procedures we build for taking pictures of "known criminals" and applying recognition to someone in your store, they need to be designed, implemented, and carried out by people who are aware at all stages that there is a good possibility that they have the wrong person. How would you want, say, your grandma to be treated if someone wrongly identified her from a criminal picture but wasn't sure? Treat that person that way.
This is hard, we generally do the opposite. Especially in racialized ways in the USA.
AI systems are often promoted as some kind of a solution to this, that somehow avoids human bias/mistakes. I think your comments even revealed that kind of thinking. I don't think they should be thought of that way.
The computers do it for passport photos at self-check kiosks in immigration. The thinking is that the computers are better at it anyways, so no fidelity is lost vs a manned checkpoint. A false negative is easy enough to deal with anyways, and that false positive could have happened at the manned checkpoint as well.
Lists of banned customers are simply impossible to implement in general. Instead, police should be more active in shoplifting and organized theft cases. Someone shoplifts, it’s at least a felony, get them in the system, and apply penalties to repeat offenders. Having stores sort this out themselves is just crazy.
"These images, which were often poor quality, were captured from CCTV or employees’ mobile phone cameras." - the article
"You don't think the technology will catch up?"
First, technology cannot transcend GIGO (garbage in, garbage out). GIGO is fundamental.
Secondly:
"When a customer entered a store who supposedly matched an existing image on its database, employees would receive an automatic alert instructing them to take action — and the majority of the time this instruction was to “approach and identify,” meaning verifying the customer’s identity and asking them to leave." - the article
Face matching is not just a hard problem for computers. Face matching is a hard problem, period. We humans seem to have dedicated hardware for it, and we are not generally a "dedicated hardware" sort of species when it comes to that sort of task. GIGO is fundamental for humans too.
As I stated in my other post, when you add more information into the scan the situation changes. Social media adds social network information which carries a lot of other info.
The task of picking a face out of hundreds, matched with the other additional cues social media can add, is radically different than the task of picking a face out of the entire human population. It is the latter that is infeasible.
> .. then they themselves can judge if they feel confident the shopper is actually the same person
Yes - but the "approach and identify" of false positives/innocent people was the actual embarrassment and harassment that the system got banned for; it was the human review (asking for ID, etc.) that was the issue.
But it seems like they went straight to approaching and asking for ID.
What about the step where it shows a human photos from original footage and current footage and they get to say, nope I'm not confident?
Again, not defending Rite-Aid here, just pointing out that facial recognition needs a human verification layer before taking any action at all. The fact they weren't doing that is just one part of why their actions were wrong.
What incentive is there to say "I'm not confident"? If someone is particularly bad at comparing faces in pictures and lets multiple shoplifters through, they're likely going to lose their job.
Wouldn’t it depend on how many accurate positives there were? If it’s one false positive against nine real positives, great. If it’s one false positive a day for a month before a real match occurs, nobody will pay any attention at all.
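Put differently, it comes down to the precision of the alert stream. A small illustration using the hypothetical numbers from the comment above:

```python
# Precision of the alert stream: the share of alerts that are real matches.
def precision(true_pos: int, false_pos: int) -> float:
    return true_pos / (true_pos + false_pos)

print(precision(9, 1))   # 0.9   -> staff keep trusting the alerts
print(precision(1, 30))  # ~0.03 -> a month of false alarms per real match;
                         #          staff learn to tune the system out
```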
> It's pretty easy for the system to show the source photo to an employee, and then they themselves can judge if they feel confident the shopper is actually the same person.
Yeah, that's not going to happen in any meaningful sense with ordinary employees. Asking them to make a judgement call like that will just result in employees erring on the side of "it could be them, so I confirm" rather than "I'm not sure it's them, so I don't confirm".
I'd expect employees to act in a way they perceive would be least harmful to themselves and their job. If they incorrectly say someone wasn't the person matched, and that person then proceeds to steal from the store, the employee may (probably correctly) perceive that they'll suffer consequences for the error. The safest direction for them to err in would be "yeah, it's probably them".
> Just saying that there's nothing wrong with the statistics of it.
I think that an understanding of basic statistics still eludes a lot of people. Teaching it should be prioritized as much as teaching algebra. There are a lot of interesting real-life examples of how we get confused or tricked by statistics, enough to keep even the average high-schooler's attention in class.
Just a reminder that the "1 per store per day" is an innocent human being who probably would have been put through hell on earth by what you considered an acceptable rate of error.
I've found that adding that disclaimer is entirely useless. Like holding beef jerky in front of a dog and saying "I'm not offering this to you, just holding it here".
You really couldn't have picked a better example to illustrate why the OP's argument was so ignorant. Pretending that reality doesn't exist doesn't make it stop existing.
No one is disagreeing that 1 per store per day is a small number. We're disagreeing with the OP's statement that there's nothing wrong with one false-positive accusation of shoplifting per store per day. Given the obvious consequences of such an accusation, in the context of the US system of policing and of wholesale blame-shifting onto the machine, essentially any false positive rate is too high.
I'm just saying I've tried to proffer a position while disclaiming any support for it, and maybe some people listen, but there will always be someone who is too tempted to attack the position instead of treating it as a specimen.
This entire thread has been about false positives. Innocent people being falsely accused of shoplifting. The difference between committing a crime and not committing a crime is not simply semantics.
i've been racially profiled before and never once has a cop apologized "for the mixup" after detaining me (rudely, usually cursing, fishing for a way to bust me).
"hell on earth" is living in a dystopian society where non-white citizens are detained and held for no reason and assumed to be guilty, because of a malfunctioning "AI" system.
And in the US, just being arrested -- even if the arrest was a mistake and no charges are filed -- has serious adverse consequences for the arrested person.
I missed that previous post, and frankly this is important enough to justify being on the front page again, especially with the amount of irrational love heaped on AI snake-oil salespeople here.
Does this ban them from using all facial recognition software, or just the software they used? Those are two very different things.
Also, I think people would be surprised at the number of places that use this type of software (maybe not people here). Walmart has amazing software that IDs people before they even enter the store, and regularly works with law enforcement to help find people.