I agree with you generally. But I differ in that my default belief is that these technologies are overhyped and cannot deliver the scale of performance or detail of prediction that would make them effective as oppressive surveillance tools, at least not for a long time.
Mostly they are “surveillance theater,” akin to useless (but costly) TSA screening. The way they interact with society is by being vague and scary, not by actually enabling truly scary scales of surveillance.
Another way to look at it: the government buys these contracts and services and rarely holds the provider accountable, perhaps especially in fields like machine learning, where the question of whether the system even works can easily be turned into a political argument.
Since agencies are happy to drop big money even when the technology does not work, providers of tools like Rekognition have no real incentive to invest in making accurate surveillance scale. Why would they, when they'll make more money yanking an engineer off a substantive project like that and slotting them onto some braindead TED-talk demoware that wins them the next big contract?
I worry about the spread of these surveillance contracts more as a taxpayer watching government money get wasted when it could help people through much simpler means, than as a concerned citizen who sees the danger of Orwellian surveillance.
Surveillance theater and immature or improperly applied technologies do more than just waste tax dollars. For decades the courts allowed flawed hair analysis to be admitted as evidence. When something like this can send innocent people to prison, the fuss we make has to be about more than what it costs.
That’s a fair point, which makes me even more convinced that we should focus on whether the technology actually works rather than on whether it’s Orwellian.
Using Rekognition for this would be like hiring a fraudulent construction company to build a local bridge: the bridge could collapse, nobody did publicly verifiable due diligence, and nobody would be held accountable. This is the automated-surveillance equivalent.
>my default belief is that these technologies are over hyped and cannot provide the scale of performance or detail of prediction that could make them effective as oppressive surveillance tools, at least not for a long time.
you've never heard of the chinese police state's facial surveillance which they're using to throw countless uighurs in jail for stuff like jaywalking, i suppose?
>as a concerned citizen who sees the danger in Orwellian surveillance.
you don't see the danger because your paycheck relies on you not seeing it. the rest of us are not so willfully blind.
It seems misguided to dismiss the technology itself rather than scrutinizing specific uses. It reminds me of David Deutsch's book The Beginning of Infinity, which has an extended anecdote about a scientist who deeply objected to the development of color TVs because "people didn't need them" and he could only think of examples where bad actors would use them. Fast forward several decades, and color TVs are used for all sorts of important purposes, like life-saving medical imaging technology, or sharing video of an important life moment with someone who is too sick to travel.
You could imagine similar things for face recognition. An Alzheimer's patient with a large digital photo library of family members and events might want an information-retrieval tool to help find a specific memory or family member. Or someone who shoots stock photographs and wants to edit bystanders out so that nobody's image rights are infringed when the photo is submitted for publication; face detection could aid in that person-removal task.
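To make that second example concrete, here is a minimal sketch of the benign person-removal idea, assuming OpenCV and its bundled Haar cascade (my choice of tooling, not something anyone above mentioned): detect faces and blur them before the photo goes out. File names are placeholders.

```python
# Rough sketch: detect faces with OpenCV's bundled Haar cascade and blur them
# so bystanders are unrecognizable in a photo intended for publication.
# Requires the opencv-python package; paths below are hypothetical.
import cv2

def anonymize_faces(in_path: str, out_path: str) -> int:
    img = cv2.imread(in_path)
    if img is None:
        raise FileNotFoundError(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Heavily blur each detected face region in place.
    for (x, y, w, h) in faces:
        img[y:y + h, x:x + w] = cv2.GaussianBlur(
            img[y:y + h, x:x + w], (51, 51), 0
        )

    cv2.imwrite(out_path, img)
    return len(faces)

if __name__ == "__main__":
    print(anonymize_faces("stock_photo.jpg", "stock_photo_anonymized.jpg"))
```

The point isn't that this is production-quality; it's that the exact same primitive can serve a privacy-protecting purpose as easily as a surveillance one.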
I agree completely with scrutinizing motives and refusing to work on tech when the goal is unethical. But I don't agree that this lets us point at a generic, amoral technology and say its mere existence is unethical. Maybe there are some extreme or contrived cases where I would agree with that, but even this face recognition stuff is very far from one.