Hacker News

Face recognition exists, is becoming trivial, and this has implications.

I saw a demo of sorts about 6 years ago, and heard rumors of various research projects at FB. Banner tracking in malls that links banner "views" to "conversions" using camera-armed cash registers. Hi-res cameras on streets that collect data like wildlife tracking collars. A stadium cam that can ID all the people at a match. A smart building that knows who's inside it & where. Casino cams that dispatch cocktail waitresses to VIPs wherever they are. Casinos have been using face recognition since the early days, for persona non grata identification. Coffee shop loyalty stamps. School attendance. Prison monitoring, with relationship graphs and ML conspiracy detection. Traffic analytics...

There are a ton of applications: commercial, security, intelligence, or general nosiness. IMO, the best way of thinking about face recognition is: Google Analytics will now work for physical reality.

All this stuff just exists. FB (and many others) have an enormous, proprietary set of tagged photos. You could probably create your own set just by scraping social media and Google Image Search. Some applications don't even need it. Face recognition works. Cameras are high-res & cheap. The software is fast. Customers want it. It will happen.
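To see why "the software is fast" holds up: modern pipelines reduce each face to an embedding vector, and recognition is then just a nearest-neighbour lookup under a distance threshold. A minimal toy sketch of that matching step (NumPy; the vectors below are made-up stand-ins for what a real encoder such as dlib's would produce, and the 0.6 cutoff is its conventional default):

```python
import numpy as np

def identify(probe, gallery, threshold=0.6):
    """Return the name of the closest enrolled face, or None.

    probe: embedding vector of the face to identify.
    gallery: dict mapping names to enrolled embedding vectors.
    threshold: max Euclidean distance to accept a match.
    """
    names = list(gallery)
    # Distance from the probe to every enrolled embedding at once.
    dists = np.linalg.norm(np.stack([gallery[n] for n in names]) - probe, axis=1)
    best = int(np.argmin(dists))
    return names[best] if dists[best] < threshold else None

# Toy 4-dimensional "embeddings" (real encoders emit 128-D or more).
gallery = {
    "alice": np.array([0.1, 0.9, 0.3, 0.5]),
    "bob":   np.array([0.8, 0.2, 0.7, 0.1]),
}
print(identify(np.array([0.12, 0.88, 0.31, 0.49]), gallery))  # close to alice
print(identify(np.array([9.0, 9.0, 9.0, 9.0]), gallery))      # nothing nearby
```

Scaling this to millions of faces is an indexing problem, not a research problem, which is the point: the hard part is already commoditized.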

We (our generation) have not shown the political will (or competence) required to create new rights and limitations on power sufficient to moderate this kind of technology, thus far. The only thing preventing wide-scale & unlimited use is a feeling of creepiness, and that is temporary.

Putting the technology onto AR robocop glasses is just an illustrative way of demonstrating implications. If this is well implemented, every police force will want it in some form. Fugitives are needles in haystacks, and I suspect this will turn up a ton of them. It's probably a genuinely useful police tool, unlike the dragnet digital snooping they do.




A bit of a strong idea, but what if we just outright say that face recognition is unethical?

I have a hard time finding any use that is positive for society; the uses are advertising and surveillance. Given that, what if we try to convince as many people as possible not to research this tech, nor work for any company working on it?

You wouldn't work for a weapons manufacturer; maybe the same standard needs to be applied to adtech and all this other tracking stuff.


You can't selectively make people forget technologies you don't like. The more we advance in adjacent technologies (like autonomous driving), the easier it will become to advance in face recognition, to the point where just a tiny group of people can develop it.

Fighting technology with artificial bans has never worked, and privacy- or ethics-based arguments won't work now either. The only hope is better technology for controlling the government: better ways to vote, better ways to monitor the financial dealings of politicians.

If people lose their privacy but the government loses its privacy too, ordinary people will be the ones who benefit the most.


It's not about artificial bans. It's about having a social compact so that only the most incompetent/"unethical" end up working on it.

Even when the tech is known, projects will struggle if they can't attract people who are good at software development.


Well... the ban on using nuclear weapons has worked so far. The ban (to the extent it exists) on using unwarranted digital data dragnets in civilian policing has at least limited its use some.

Not that I disagree with the larger points, it's an uphill fight.


Would that be useful?

First, arms industries and even mercenaries don't seem to have much difficulty operating. So, whatever boycotts individuals might practice don't really do much. Even tobacco companies find employees, and no one likes them.

Second, we (in practice, e.g. EU cookie laws) rarely define these things effectively. Is face recognition the problem? Then let's ID people based on some other markers. You'd probably need very broad, principled, constitution-esque language: it is illegal to de-anonymise, to store data, or some such. My guess is that this will be almost impossible politically, with bootleggers and Baptists left, right, and centre.

There are just very few examples of what we would need, in modern legislation.


Legislation can help somewhat, but I think that having a stronger ethical stance can help a lot.

Think back to when Y Combinator funded that "installer bundled with ads" startup. They got yelled at for it by a lot of people. It slowly changes things.


I guess we have different levels of confidence in consumer-action-style initiatives.

One sleazy startup is a rounding error to YC. Online snooping is worth at least $100bn to Google and FB, most of the revenue of two giant and otherwise desirable companies. It's also important to all the businesses that buy these ads, the creators that publish on YouTube (for example, though they honestly get very little from this), etc.

I don't think we can wish that away with appeals.


Well, you (and I) wouldn't work for a weapons manufacturer, but many people would - and do.


Well yes, but why should we not ostracize those people as we presumably would others who committed, participated in, or collaborated with atrocities?




