Hacker News
Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich (nytimes.com)
267 points by pseudolus on March 5, 2020 | 139 comments



Clearview now says the app is "available only for law enforcement agencies and select security professionals to use as an investigative tool."

I'm fairly convinced by David Brin's argument that this is exactly the wrong approach. In The Transparent Society he argues that privacy is no longer an option; our choice is between a society where the police surveil us all, and one where we all surveil each other. He says only the latter is compatible with freedom. We have to be able to monitor the cops and the powerful, just like they monitor us.

Maybe we need our own Clearview, with open source face recognition and data on the darknet.


I'm convinced that Gen Z is going to blow all of this out of the water: the world will eventually hit a point where there isn't anyone without something online that could come back to haunt them. And I think companies are starting to see it too.

A little anecdotal, but when I was applying for jobs last summer, a video of mine was brought up twice during interviews with different organizations. Both times I was told afterward that it was just to see how I would react. All I did was tell them what happened. I didn't say I was sorry, because I wasn't; I told them I stood behind my actions and there was nothing I could do about it now (it hit 100k views on Instagram, so it's not a situation where I can contain it). Both companies offered me a job.

I rewrote this a few times and I'm still not sure I got my point across, but I agree with the argument in The Transparent Society, and I already see things shifting in that direction.


> A little anecdotal but when I was applying for jobs last summer there was a video that was brought up twice during interviews with different organizations.

I find it interesting that a company would do this instead of silently rejecting you…


At the one I took, the interviewer ended up being my boss. He told me the reasons they overlooked it were:

1. If there were videos of everything stupid he did, there would be no way he would ever get hired.

2. It wasn't anything illegal or violent, so he didn't see it as a character issue.

3. I was honest about it.


My Machiavellian side says that they have dirt on you now and know they can use it at any time. If anything, more dirt is then better, as it creates more leverage for them in the long run.

Still, Hanlon's Razor applies, and your boss is more than likely just telling the truth.


Your Machiavellian side is completely missing the point, then: this is (or was) a public event (as evidenced by GP's reference to 100k views on IG), so there is no leverage.

You'd have leverage if there were a reasonable likelihood that you were the only hiring manager to have this information, but if everyone is on a level playing field, the leverage disappears.


Not only is an easily discovered public event poor leverage, it becomes much worse leverage if it comes up in an interview.

When companies (or governments) try to manipulate employees, they frequently rely on some kind of willful ignorance. Wells Fargo is a great example: they set impossible performance targets and turned a blind eye to fraud, then fired and blacklisted whistleblowers - ostensibly for knowing about that same fraud!

If a shady employer wants leverage, even public events can suffice as long as they can claim ignorance. For example, most stock option grants are immediately lost if you're fired, but even at-will employment can't be terminated specifically to deprive someone of their options. So an employer might give a generous options package, then "discover" the IG video and use it for dismissal at just the right time to prevent a profitable exercise. But if that video comes up during hiring, it's no longer a plausible reason for later dismissal, at least without committing perjury regarding the interview.

I can't even work out a scenario where "lots of people know about this including us" is an effective way to manipulate someone.


For any real going concern, the cost of hiring and training someone senior enough to be compensated with stock options, plus the subsequent morale dip if they were discovered doing such manipulation, would far outweigh the benefits. So this is largely a tin-foil-hat scenario.


For any business with decent size, absolutely. There are a thousand ways to claw back options, and the reason they don't get used is that doing it even once would make hiring practically impossible.

For a small enough company? It falls in the same category as "diluting one guy out of his shares" - bad morals and bad business, but it still happens.


I'm pretty sure that those that DID consider the clip a problem for hiring the OP didn't share any of that with them. But I'm happy to see that others were more rational about it.


Very possible. I didn't apply to many places; I got interviews at three of five, and two mentioned it. It could be that the other two saw it and decided I wasn't worth their time.


Precisely what were you doing in the video? It depends on the content; young me did a lot of stupid stuff that would get me into a lot of hot water now. Benign stuff can be misconstrued and abused as well. So, sorry, I don't trust your argument for a transparent society.


Transparent Society is still the best book on the subject and I agree with you whole-heartedly about the societal ground shifting.


Now I’m wondering what the context of the video was, and why they felt it was relevant to ask about.


It was a video of me in my university dorms, it wasn't anything illegal or violent. I'm not sure why they felt the need to ask, I wonder if they were just curious for context.


That's infuriating. I'm sorry you had to go through that.


I kinda lucked out and learned my lesson about this stuff early on.

I was trash-talking one group of internet people with another group, and the trash talk ended up getting indexed by Lycos or AltaVista. They called me out for being a dick, and I realized that I needed to be really careful about what ends up attached to me online. (Also not to trash-talk people behind their backs.)


What kind of clip was it without getting too specific?


What videos?


> our choice is between a society where the police surveil us all, and one where we all surveil each other

This idea is explored further in Nick Bostrom's Vulnerable World Hypothesis under "Preventive policing". It's depressing but does walk through this scenario in more detail than most.

> The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk‐benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.1...


How is everyone surveilling everyone different from the former East European model of everyone informing on everyone?


Now, now, comrade, let the westerners have fun too.


Heh. I'm the one who says that the roads should be toll-free so that there aren't financial transactions with which to track activists.


I do think privacy is still an option and see no evidence to the contrary. Of course, transparency should apply to officials in public service in their official function, not their private lives.

Because anyone with a hint of a brain should be able to extrapolate the negatives of such surveillance on their lives. We played dumb long enough on that topic.

If you are not up for that, ok, but you also should not work in a capacity that has any access to private information.

If there are companies or people regularly ignoring privacy, which is still enshrined in most countries' laws, severe penalties should be applied. That is also true for the executive branch of government, where enforcement has been severely neglected over the last decade because of fear.


I disagree with Brin, but I do think that if a creepy clone of Clearview suddenly became publicly accessible and was aimed at the rich and powerful, we would see regulation happen awfully quick...


Why would you think that? It's what the paparazzi do, and the lifeblood of the whole yellow press. Furthermore, there is the (reverse) image search in some search engines, where you have to connect the dots for yourself, as described in https://www.bellingcat.com/resources/how-tos/2019/12/26/guid... for instance.

IIRC the Russian one from https://yandex.com made headlines a few years ago because it scraped https://vk.com/, which is their fb/twitter-combo thing.


Interesting idea. Install a few cameras at general aviation terminals, international tax attorneys' offices, plastic surgeons, parliament, etc.

When a big Swiss private bank moved into new premises in a skyscraper in Hong Kong, there was a rumour that they had one unmarked elevator avoiding the building lobby, directly from the underground car park to a private lobby within the bank, for UHNWI. Let’s Clearview there, too!


How would you install cams at general aviation terminals? It's not like you could just walk into them. At least at the ones I've used, you either switch cars at some gate and are driven to some apron, or your taxi/cab/uber/limo has to follow a car from some FBO to their premises and then be led back out. There is no parking allowed there. It's all gated and guarded.


I think Mr. Brin hadn't seen the dumpster fire that is

https://twitter.com/bestofnextdoor


I don’t think that’s how it works.

Going by the way online mobs work, making these tools available to all citizens unencumbered makes all kinds of abuses possible.


Exclusive top-down surveillance also makes possible all kinds of abuses. We should put some thought into which scenario has the worst failure modes, and in doing so we should remember our history.


No top down surveillance would be my preference and it isn't that hard to imagine really. Certainly you can get the impression that people currently don't really value privacy, but if it is taken from them, there will be quite a sudden interest.


Many of us grew up with no top down surveillance in the 80s and 90s. In that world, no top down surveillance is easy to imagine.

In a world where network connected cameras are so cheap they're disposable, no top down surveillance becomes harder to imagine. At some point, information gathering systems become so cheap and ubiquitous that telling organizations not to use them for surveillance is like telling people not to look up at the sky.

Banks used to ask your mother's maiden name as a security question because someone trying to impersonate you would have a hard time finding that information. Now it's a Facebook search away, and many entities still want to use personally identifiable information as security questions.

As a society we need to reevaluate what personal information is private and what is public.


I don't think any of this is inevitable. In fact, it is quite illegal in many cases. Availability doesn't imply use. Even though crime numbers are falling, increasing security has become a kind of religious dogma for some. Basic control freaks, in my opinion, although there is also the other crowd, who think they are constantly under personal surveillance. Similar fears, I imagine.

While there are efforts to bring more people to engage in civil society, where we don't wear masks, increasing surveillance will have the opposite effect because of basic self-defense mechanisms that shouldn't be too hard to understand, in my opinion.

> As a society we need to reevaluate what personal information is private and what is public.

I like to decide that myself and for bureaucratic needs I will supply the minimum amount of data needed for processing.


Yes, transparency is the only practical and realistic way to deal with privacy. Information wants to be free. The sooner we learn to adapt and resolve the issues that arise from our information being public, the better.


Maybe for the plebs. The wealthy are already bulwarking their own privacy using legal and technological means. Giving up on protecting privacy for everybody just turns it into a luxury good for the wealthy while stripping it from those that can’t afford it. This creates yet another vector of inequality in our society.

Edit: Since you used the “information wants to be free” phrase - commonly invoked with the copyright enforcement controversies of the 2000s - it’s important to note how that turned out. From where I sit, I see a lot of large, powerful companies stealing ideas and content from small creators with no repercussions while small creators are under the thumb of automated content ID systems (see YouTube). “Fair Use” in video content is effectively dead - YouTube and their large media “partners” hold all the cards.

When information is “free” those with all the lawyers get to set their own rules.


The wealthy will always have the edge, regardless of how you deal with privacy; that's the definition of wealthy. Wealthy = more power/resources.

What I'm saying is, you can't solve the privacy issue by hiding information. That is not feasible; as technology progresses, it becomes easier and easier for information to move.

I acknowledge that there are going to be issues due to our information being public, and those are what we should fix instead of trying to hide the information.

Right now it is an issue if my SSN or credit card number becomes public. The fix should not be making my SSN or credit card number stay hidden. We should fix things so that even if my SSN or credit card number becomes public, it does not cause me harm.

Another example: right now it is an issue if I post something embarrassing on Twitter. The fix should not be deleting/hiding my tweet. We should fix things so that even if my tweet goes viral, it does not cause me harm.


> We should fix things so that even if my tweet goes viral, it does not cause me harm.

Anonymity can supply that effortlessly. Some will say it is just pseudonymity and not real anonymity, but I think that is wrong.

If you post with your name, you have become a public persona. Just hinting at the fact that public personas often have staff to manage public relations. You could of course kill that, but the repercussions would be quite severe.


>Anonymity can supply that effortlessly

Until the real name leaks, which is only a matter of time. Anonymity is what I mean by fixing the issue by hiding the information.

>If you post with your name, you have become a public persona. Just hinting at the fact that public personas often have staff to manage public relations. You could of course kill that, but the repercussions would be quite severe

Yes, right now it is an issue to use a real name. We should not fix it by hiding the name; we should fix things so that using a real name does not cause harm.

In other words, we should take the information being public as the base condition and solve the problem that arise from that.


I don't think that you can shield yourself from judgement in any way. Aside from that, there are more extreme cases of bullying and stalking. I am not advocating accepting these issues, but anonymity is just the best way to deal with them. And there is no argument why it shouldn't be possible.

That we currently speak very openly on the net is such a treasure. With an open ID it would just end up like China. There would be topics you just don't speak about because the reputation damage wouldn't be worth it. That would be a significant loss.

I don't really see problems with anonymity. Privacy is a human need; I certainly will not share my private life with the world. In fact, I actively disengage from any intimacy if a larger public is involved - naturally, I would assume. There is little incentive to change that.

I have a public persona that is neatly compartmentalized. With effort you could connect that of course, but in practice nobody has access on the whole. I tend to keep it that way.

I'm not saying that having a Facebook account with all your family photos is the equivalent of total transparency. Clear names just don't work that well, for numerous practical reasons.

Of course there could be an AI that analyses my writing style or simply has access to all relevant network information. But until that time I hope legislative measures are in place. They already are and companies tread very lightly on handling personal information. It could cost them.

Ok, people said mean things under the veil of anonymity. Big deal. But recent evidence suggests they would do it with their ID in the open just as well.


>That we currently speak very openly on the net is such a treasure. With an open ID it would just end up like China. There would be topics you just don't speak about because the reputation damage wouldn't be worth it. That would be a significant loss

So yes, that needs to be fixed so that we can speak openly, even with an open ID, without the damage you mentioned. That's the solution we need to find.

>I have a public persona that is neatly compartmentalized. With effort you could connect that of course, but in practice nobody has access on the whole. I tend to keep it that way

You could, but the cost and effort to do that are only going to grow. Not everyone wants that; I certainly don't. I don't want to have to compartmentalize or hide my information. It sucks.

>Clear names just don't work that well, for numerous practical reasons.

Yes, I agree that right now it is not practical, so that needs to be fixed, instead of trying to hide the name.

>Ok, people said mean things under the veil of anonymity. Big deal. But recent evidence suggests they would do it with their ID in the open just as well

Then this issue has nothing to do with privacy.


> So yes, that needs to be fixed so that we can speak openly, even with an open ID, without the damage you mentioned. That's the solution we need to find.

That is like saying that we have to find a solution to crime and fraud so that we will not have to rely on physical locks, spamfilters, etc.

It would be nice, but it will never happen.

Meanwhile anonymity can work well.

How about we first change the world so that everyone can speak freely without fear of anything? And when that is accomplished to everyone's satisfaction, then we can consider total transparency.


>That is like saying that we have to find a solution to crime and fraud so that we will not have to rely on physical locks, spamfilters, etc.

That would be great.

>It would be nice, but it will never happen.

Sure, if that's the attitude.

>Meanwhile anonymity can work well.

Until it leaks, which is only a matter of time.

>How about we first change the world so that everyone can speak freely without fear of anything? And when that is accomplished to everyone's satisfaction, then we can consider total transparency

You have it backwards. One way to accomplish that goal is total transparency.

Why do you think there is growing acceptance of LGBT people? It's because of growing transparency.


> But recent evidence suggests they would do it with their ID in the open just as well.

Not just recent evidence. This has been well established by multiple studies over many years now.


> we should fix things so that using a real name does not cause harm.

I don't see how that is even in the realm of the possible. The issues around this go far beyond the simply reputational.


Trying to stop technological progress by legislative measures seems to be much harder.


I don't see it as "stopping technological progress" so much as trying to mitigate the harmful aspects of technological progress. Mitigating the harm is just as important to helping progress along as developing the tech is. That's why the two have always gone together.


Yes, mitigating harm is the goal; how to do it is the issue. Trying to do that by hiding information, e.g. anonymity, is not feasible.


I'm not sure that I think that a completely transparent society would be compatible with freedom. It seems to me that it would just result in the dominant culture in an area being able to punish any behavior that they dislike.


The question is not whether the transparent society is inherently good or free. The question is, which is more compatible with freedom:

- total surveillance, of everybody by everybody

- surveillance only by authorities, of the rest of us

The book's premise is that these will be the only choices. The premise may not be correct. But if you start from it, clearly total transparency is better than the alternative.


> The premise may not be correct.

I do not believe it is correct.

> But if you start from it, clearly total transparency is better than the alternative.

Maybe, but that fact is nearly meaningless. Nobody will have anything like freedom in such a world, regardless of which way it works. Saying that one is better than the other is like saying that this deadly toxin is marginally less effective than that deadly toxin. It's a distinction without a meaningful difference.


You should read the book. I don't know that I buy the premise either, but I do think that he successfully argues that surveillance does not necessarily preclude freedom.


I have read the book. It has a lot of useful things to say and think about, but I don't find the conclusions convincing.


Fair enough!


A Panopticon society is a dystopian society. Privacy is a right; giving it up is not a solution, it is a crime.


> He says only the latter is compatible with freedom.

I disagree with him on this point. Neither choice he presents is compatible with freedom.


The whole premise is pernicious nonsense at its root. Privacy is important for many reasons, and throwing in the towel on it because high tech makes mass surveillance easy is like giving up on laws and taboos against murder because guns make killing people easy.

"Welp, we just invented the machine gun. It's so easy to mow people down now we might as well just embrace this new post-machine-gun reality!"


The premise makes sense to me, because nature sometimes doesn't allow for the ideal circumstances to exist. Society needs to have a police (or security) force, and there will always be some members of society with more power than others. The only way for the people with less power to hold the more powerful accountable is via transparency.

I think it would be very interesting to see what would happen if everyone's financial comings and goings were public information.


That's not what they're talking about. They're talking about giving up on the idea of privacy in any context and proposing an "egalitarian panopticon" as some kind of solution.

Not only would that be dystopian in its own way (think gossipy small town, plus professional trolls powered by big data, plus the digital marketing/propaganda industry), but an egalitarian panopticon is just outright fantasy. The rich and very tech-savvy have the resources or expertise to cheat. It would not stay egalitarian long.


That's very theoretical, though. The truth is, for most people, the police aren't going to be targeting them with this, and even if they do, well, the police already have numerous means of invading someone's privacy.

Arguing that everyone should have those means at their fingertips, in practice, will mean the likelihood of this being used on you goes way up, and that there's no longer even the theoretical oversight that police use would have.


The thing about law enforcement use is that however bad civilian use is, the consequences of false positives are far more severe.

Missed out on a job opportunity or social interaction unfairly? That sucks.

But here we are talking potential for wrongful incarceration. And since some places still have the death penalty, theoretically it could lead to the state killing people based on misidentification.


Well, you can try that out right now: Hook up a 24h video/audio feed to your phone, then share it with your wife, ex, parents, friends, co-workers, boss and the annoying uncle from Utah that you're obligated to invite to Christmas. Also give them full access to your browsing history, messengers, purchase history and location data. Then come back to us in a month and tell us how it all worked out.

Really, no, I think this is a fundamentally bad idea. The idea of everyone surveilling everyone else can only be remotely appealing if you pretend there are no power differences in the world and everyone is equal - a fiction that the tech world for whatever reason loves to subscribe to.

In the real world, people will have vastly different amounts of understanding for whatever weird sides you have and their knowledge can have serious consequences. Only because your boss may have some weird sides himself does not necessarily mean that he will be understanding of yours - or that whatever he did will be as equally relevant as what you did.


It's a bad idea because your wife, ex, parents, friends, co-workers, boss and the annoying uncle from Utah aren't also doing it.

If everyone else in the world is doing it too, things would be radically different.

Also, how does the power differential not actually equalize? If a cashier at McDonalds can see everything Jeff Bezos does, and Jeff Bezos can see everything the homeless person does, you really think Jeff Bezos is the one getting the better end of the deal?


MAD, the only winning move...


As in Mutually Assured Destruction? Or do I misunderstand? I'd love to know more...


Yes, mutually assured destruction. From the movie War Games: "The only winning move is not to play."


If the government succeeds in getting advanced ML/AI software classified as arms (an idea that comes up from time to time in the context of reducing industrial espionage), then in the US that would naturally lend itself to the argument that private ownership and operation of these sorts of surveillance systems is protected under the 2A.

The arguments traditionally used to prevent private ownership of the kind of arms that let you go toe to toe with a modern military don't really work very well for software.


Classified as a destructive device :-)

Maybe I should register forgotten-algorithms.com, only my ponytail isn't as long as Ian's.


Sure. Just like main battle tanks and nuclear warheads...


I'll never understand how people come to believe that civilian amateurs with AR-15s have somewhat of a chance against trained professionals with heavy equipment. The mental gymnastics used to justify the 2A are really mind-boggling.


>“People were stealing our Häagen-Dazs. It was a big problem,” he said. He described Clearview as a “good system” that helped security personnel identify problem shoppers.

>BuzzFeed News has reported that two other entities, a labor union and a real estate firm, also ran trials with a surveillance system developed by Clearview to flag individuals they deemed risky. The publication also reported that Clearview’s software has been used by Best Buy, Macy’s, Kohl’s, the National Basketball Association and numerous other organizations.

This seems like just another tool that will be used to put the non-elite at a disadvantage. Until proven otherwise, I can only think of negative outcomes for ex-felons, low-wage workers, people of color, and the like coming from the usage of this app by the rich and large corporations.


> This seems like just another tool that will be used to put the non-elite at a disadvantage. Until proven otherwise, I can only think of negative outcomes for ex-felons, low-wage workers, people of color, and the like coming from the usage of this app by the rich and large corporations.

You lack imagination. This will be used by HR, attorneys, private investigators and others for all sorts of purposes.

- Were you really out sick? According to our partners at Foo Corporation, you were eating a churro in front of a movie theater at 10:45AM on the day you were out.

- Why were you talking to <x> in the parking lot after work?

- Our security provider identified you in a social gathering with 3 other employees, or engaged in a PDA with a fellow employee. You are out of compliance with our fraternization policy and are terminated.

- You get a 30 minute meal break. Why were you leaving the men's room at 12:45?

- You were arrested for shoplifting in 1993, we are refusing entry to <x> stadium.

- Your grades have declined, and you have been seen entering your dorm after 2AM 40% of the time, scholarship revoked.

If anything, this will bring the routine harassment that individuals in authority inflict on people and elevate it to a centrally controlled, legal practice.


> If anything, this will bring the routine harassment that individuals in authority inflict on people and elevate it to a centrally controlled, legal practice.

For those organizations that can afford what I am sure is an exorbitant subscription fee, maybe.


Lol. This is near-commodity tech. Heck, I built a Rube Goldberg version of this to identify when the UPS guy comes, and I'm a 0.10x developer.

There are already multiple shady companies that share fixed and mobile LPR (license plate recognition) data. The social media integration isn't required for this tech; it's distracting hype.
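To show just how commodity the matching step is, here is a minimal sketch of the nearest-neighbor lookup such a homebrew recognizer boils down to. The names, embedding vectors, and threshold are invented for illustration; in practice the embeddings would come from an off-the-shelf model (e.g. the open-source `face_recognition` package, which conventionally treats Euclidean distances under about 0.6 as a match):

```python
import numpy as np

# Hypothetical enrolled embeddings; real ones would be produced by a
# face-embedding model, not hand-written 3-vectors.
known = {
    "ups_driver": np.array([0.1, 0.9, 0.2]),
    "neighbor":   np.array([0.8, 0.1, 0.3]),
}

def identify(frame_embedding, known, threshold=0.6):
    """Return the closest enrolled identity, or None if nothing is close."""
    best_name, best_dist = None, float("inf")
    for name, emb in known.items():
        dist = np.linalg.norm(emb - frame_embedding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

print(identify(np.array([0.15, 0.85, 0.25]), known))  # -> ups_driver
print(identify(np.array([0.0, 0.0, 0.0]), known))     # -> None
```

The "Clearview" part is not this lookup; it is the scraped database of enrolled faces, which is exactly why the scraping, not the algorithm, is the controversial bit.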


0.10X developer. I should get business cards printed with this.


Wait until insurance demands its use to mitigate risk.


That's already here for auto insurance.

"Some of the factors used to calculate your discount at American Family Insurance include vehicle usage (how often you drive), braking (how smoothly or abruptly you apply the brakes when stopping), and acceleration (how quickly or smoothly you accelerate after stopping). Speed and time of day are also factored in, among other driving behaviors."[1]

[1] https://www.amfam.com/resources/articles/understanding-insur...
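To make the "braking" factor above concrete, here is a hypothetical sketch of how a telematics program might count harsh-deceleration events in a sampled speed trace. The 1 Hz sampling, the 8 mph/s threshold, and the trace itself are all invented for illustration; real insurers' scoring models are proprietary.

```python
def harsh_brake_events(speeds_mph, threshold_mph_per_s=8.0):
    """Count one-second intervals where speed drops faster than the threshold."""
    return sum(
        1
        for prev, cur in zip(speeds_mph, speeds_mph[1:])
        if prev - cur > threshold_mph_per_s
    )

trace = [30, 30, 28, 18, 17, 17, 5, 5]  # one-second speed samples, mph
print(harsh_brake_events(trace))  # -> 2 (the 28->18 and 17->5 drops)
```

A score built this way only sees deceleration magnitude, which is exactly the criticism below: it cannot distinguish a skilled maximum-traction stop from a panicked one.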


And that, right there, is one of the most stupid uses of data based on bad assumptions.

One of the keys to high-performance driving, whether for racing or emergency driving, is the ability to get the absolute maximum out of the available traction - that is exactly how you stop shorter, turn quicker, etc. - which is exactly what is needed in tight situations.

That said, most driving should use the minimum of tire grip, for efficiency, smoothness, and predictability on the road. But occasionally achieving max Gs in specific situations - without exceeding the available grip and falling into a slide - is the likely mark of a highly skilled driver.

Unless they're also looking for and primarily counting loss-of-traction events (overbraking/overturning), their conclusions are likely wrong.

Just like the use of BMI as a health index, which will classify a ripped weightlifter with 5% body fat as "obese", this will similarly misclassify drivers, and in this case it will unjustly overcharge them.

Stupid


Some insurers may build risk models around that. But remember that insurers deal with the public at large, and the "average punter" is a blithering idiot.


> negative outcomes for... people of color

Wouldn’t it do the opposite? Instead of racial profiling it’s using a system that pulls from social media. I wouldn’t say it’s a good system but it seems less biased.

EDIT: OP said “negative outcomes” and not “unfair towards”. Yeah I guess something can be more fair but still have disproportionate outcomes.


> Wouldn’t it do the opposite? Instead of racial profiling it’s using a system that pulls from social media.

Oh, there'll still be plenty of racial profiling, because the trained models will likely inherit these biases. Instead of a store clerk following me around, it'll be an automated system that just flags me as "high risk" - and who knows what other secret feedback loops that results in.

I wish our leaders were smart enough to legally require that such services _must_ publish their accuracies and be open source.

A system like this as a for-profit business will go about as well as for-profit prisons have in the US... except more subtle and sinister.


Right, and then the discrimination is AI-washed - security personnel can say they're just following people flagged as "high risk" around the store, not discriminating, even though the "AI" mostly used skin color to add the high risk flag in the first place.

Discrimination: SOLVED!


>trained models will likely inherit these biases.

Living in middle America, I can tell you that Joe Average absolutely does not understand this. To him, based on my conversations with the people around me, AI/ML/anything sufficiently 'futuristic' sounding is without bias. To them, these models are developed in a vacuum, and using the AI label implies that the end result actually built itself, like a baby human would grow up.

And this scares me.


It does develop an idea in a vacuum. It finds the bias and uses the best biases to explain the world.


Facial recognition along with just cameras in general have historically struggled with the faces of non-caucasian individuals. This typically results from biases in training sets and in many cases cameras simply creating images with far more contrast for lighter skin tones.


> faces of non-caucasian individuals

The Chinese government, operating the biggest surveillance state on the planet, has had no trouble with the faces of non-caucasian individuals, historically or otherwise. Its facial recognition systems are highly adept at identifying Asian faces. It also supplies half the world.

According to IHS Markit, China accounted for nearly half of the global facial recognition business in 2018.

https://www.ft.com/content/6f1a8f48-1813-11ea-9ee4-11f260415...


The real issue here is not Caucasian skin tones vs everyone else but instead light skin tones vs dark(er) skin tones. So Chinese government won't have an issue with current facial recognition tech.


Relatedly, photography itself developed an early bias against black people in terms of what colors/contrasts they optimized film for photographing:

https://www.vox.com/2015/9/18/9348821/photography-race-bias


It seems more likely a function of actual contrast in the real world than a function of bias (inadvertent or otherwise)?

I imagine it's down to skin tone -- or do facial recognition systems really recognise dark-skinned "Caucasians" better than light-skinned non-Caucasians?


Exactly: In the extreme, perfectly black objects have no visible contrast whatsoever as demonstrated by Vantablack.

https://www.wired.com/story/vantablack-anish-kapoor-stuart-s...


That gets me thinking: in some parallel universe history of the Earth where interior lighting and photography was primarily developed by and for people of color, I bet average indoor lighting would be brighter, and Caucasian skin would often be washed out in CCV.


Right, because people of colour love having overly bright lights blasting into their eyes.

Lights aren't set for photography of any skin tone, they're set at a level for comfort and to provide enough light for us to operate.


That’s fair. I was thinking about dimly lit bars.


> ex-felons, low wage workers, people of color, and the likes

"people of color" are not "the likes" of ex-felons and low wage workers.


People who are generally oppressed by the larger system/those in authority? There's a charitable understanding of the statement here.


I think they just meant marginalized groups, not People Who Are Bad.


Groups disadvantaged by the current power structure of society?


Sorry, that isn't what I meant. I was trying to refer to the systemic biases typically found in these algorithms (correctly identifying lighter-skinned people over darker-skinned people, algorithms learning to use race as a factor in decision making, etc.).


In the context of the sentence it's clear that the post refers to "Typically Persecuted People", and I would argue that applies to all three.


Tell that to Bloomberg.


We're now constantly under surveillance and facial recognition is being used on that footage. Apple stores. Malls. Bus stands. Grocery stores. Train stations. Traffic stops. Schools.

Privacy is dead. Anyone with money or any government can use one picture of you and get basically every piece of information about you.

Maybe this isn't true to the same extent for the HN crowd who might be more privacy conscious, but it is true for 99% of the rest of the population.

Schools and governments failed to educate about the privacy concerns. Maybe that's understandable. But they still don't. Teens post all sorts of stuff that will come back to bite them, that will never be forgotten.


This is true. Only legislation will stop this and that seems unlikely.

The worst part is that most people upload high quality, high fidelity, tagged training data on a daily basis. Add in the fact that social networking automatically builds network circles, and it is no wonder we are riding a landslide towards 1984 and Fahrenheit 451.


We are already there, but with a healthy dose of Huxley's soma, so most of us don't really see it. I, too, partake of the soma in terms of current discourse -- otherwise it's very hard to survive out there, and being a member of society becomes almost impossible.


And they rage when the little guy does it to them. It's obvious that employers are either paying people to post good reviews or strong arming current employees to write positive reviews on Glassdoor. The good thing is they're easy to spot since they're usually some vapid garbage like "Best place in the world to work!" with 5 stars interspersed with the real 1 star reviews. You can also tell because there are usually just enough fake reviews to push the real ones off the front page.


Seems like they're building the hype train so that consumers want to pay for a version they can use to "take control" of their digital identity.


Just like the credit reference agencies. The scum gets your data from everywhere (sometimes wrong data), shares it with whoever asks, but makes it super difficult for you to get it or rectify any wrong data.


>shares it with whoever asks

Legally speaking they're supposed to get your authorization, but yeah it's an honor system.

>but makes it super difficult for you to get it or rectify any wrong data.

Is it? I thought all you had to do was write them a letter?


> Is it? I thought all you had to do was write them a letter?

Yes, it is. I've been trying to get this data on myself for years now, and have yet to succeed. Letter-writing is not sufficient (at least in the US).


Anyone that wants their own "Clearview" like app can take almost any FR application and create a database from scraping the web. All that Clearview did was pre-scrape the web for you, but to do that yourself or as an open source project of sorts is not difficult.
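To give a sense of how commodity the matching step is: once scraped photos are reduced to face embeddings (real ones would come from a face-recognition model, e.g. dlib-style 128-D vectors -- the tiny 4-D vectors below are made up for illustration), lookup is just nearest-neighbor search. A minimal sketch with numpy cosine similarity:

```python
import numpy as np

def build_index(embeddings):
    """Normalize an (N, D) array of face embeddings so that a
    plain dot product equals cosine similarity."""
    e = np.asarray(embeddings, dtype=float)
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def search(index, query, top_k=3):
    """Return (indices, similarities) of the closest stored faces."""
    q = np.asarray(query, dtype=float)
    q = q / np.linalg.norm(q)
    sims = index @ q                       # cosine similarity vs. every row
    order = np.argsort(sims)[::-1][:top_k] # best matches first
    return order, sims[order]

# Toy demo: three fake 4-D "embeddings" in the database.
db = build_index([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0.9, 0.1, 0, 0]])
idx, scores = search(db, [1, 0, 0, 0], top_k=2)
# idx[0] == 0: the exact match ranks first; row 2 is the runner-up
```

At the scale of billions of photos you'd swap the brute-force dot product for an approximate nearest-neighbor index, but the principle is the same -- which is the point: the hard part is the scraping, not the search.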


I wonder how legislating open algorithms as illegal will work out? If you can run this reasonably on a phone building your own face/object databases how could they stop anyone from doing it?


You wouldn't legislate against the algorithm. You would legislate to prevent the abuse of the data used to feed the algorithm.


If they are able to do this by creating the dataset just by scraping public social media images, and having a facial recognition algorithm in place, what's stopping anyone else from doing that?


I'd imagine that there are many, more clandestine versions of this.


This skit from Amazon Women on the Moon turned out to be prescient. It's not so funny anymore.

https://www.nytimes.com/2020/03/05/technology/clearview-inve...


Which skit?


Has anyone here tested the app and can speak to their direct experience? (throwaway or not)


Doesn't the title imply it's not a secret plaything now?

I guess it's not secret...


I would love to produce my own ClearView.


I know Hoan. He's a great guy. Very talented. Super capable. Totally trust him to deploy this responsibly.


Despite the many examples in this article and elsewhere about him already deploying it irresponsibly? Or maybe I missed the implied “/s”


Facial recognition technology as a tool for protecting retail establishments... Interesting!


The problem I see with this tech is with insurance. Your life, your health, your well being your family's well being, will be dependent on your ability to hold good insurance. If everything is measured, everything is tracked, everything quantified, your life is going to be very different.


[flagged]


I can see why you got flagged, but even being a millennial that's a response I hope catches on.


I don't even know what I burned my internet points for since the comment I was replying to seems to have gone.


Pretty sure what happened is: someone said ubiquitous facial recognition would discriminate against ex-cons, minorities, et al. Someone saw that as saying it's OK to discriminate against minorities for the same reason it's OK to discriminate against ex-cons, and got upset on behalf on minorities. Others got upset at that person on behalf of ex-cons, and a valuable lesson about clarity in expression was learned.


Ah yes, I remember now. Thanks for replying to a flagged comment, do you have some kind of notification service running for this?


I have 'showdead' set to 'yes' in my profile, but when I responded to you the parent comment was still up.


I have lived in {} since the mid 2000’s. Stalking by strangers and acquaintances has gotten out of hand in (at least) the past five years. (Any such behavior against me has since calmed down in the past year, after reworking my digital devices, but the effects have had significant impact on me. I also dropped out and gave up on life this past year, which may make me a much less interesting target to harass.)

Such technologies are part of an ongoing increase in information and power asymmetries that can be abused to harass innocent competitors, as has happened to me. I’ve had strangers come up to me in public and discuss specifics of my private life, including non public details about my since failed startup, and personal/private comms. Concurrently, I was falsely accused of a serious crime and was put under the microscope and harassed on a regular basis by strangers regarding this. It became apparent that my life was completely owned at that point, digitally and publicly. It amounted to ongoing bullying which really pushed me beyond thresholds of learned helplessness already long since established.

There seems to be no recourse against this behavior. If you have a digital “kick me” sign attached to your back, there’s little you can do to remove it, short of avoiding being in public. Or, as in my case, one can drop out of life, go homeless, give up all of your assets, and prepare for suicide. Strangers can verbally harass/own/gaslight others, maintain perfect plausible deniability, have perfect encryption to cover their tracks, and devastate people who aren’t equipped to deal with this behavior.

Evolution of survival going forward is trending towards resilience to increasingly sophisticated psychological violence and harassment, as well as the ability to accept being an unwitting voyeur in all public places.

One of the most difficult aspects to this was reporting these incidents (admittedly, under duress in the heat of the moment), and being told that I must be delusional and mentally ill. To me, the delusion is genuinely believing that technology is not used to stalk or harass people in public. As a counterpoint, I will say that being stalked repeatedly does increase your paranoia, so you’ll start to look over your shoulder at every turn. If you believe that all of your devices and accounts are hacked and being used to harass you, the complete lack of digital privacy can have a profound impact on sanity.

To this day, I’m utterly freaked out by the presence of personal cameras, to the point where I’ve nudged people in the community to be aware of the cultural impact of holding phones vertically in coffee shops or other public places. As most people are of course good natured, I’ve noticed a trend in the places that I frequent towards people being more prudent in this regard. I personally cover the public facing back camera on my phone with my index finger as a matter of habit by now, to avoid pointing it at strangers in public. Personally I believe responsibility amongst the tech elite would include immediate installation of physical shutters that open only when a camera is in use. Shutters can be colored blue or yellow, perhaps as a culturally standardized signal that the camera is “closed”.

There’s clear benefit to tech such as Clearview but the potential for abuse by irresponsible or immoral actors is tremendous. As someone pointed out, such tech can be rolled yourself. It seems that the problem is therefore out of control. Welcome to the age of unwitting voyeurism.

Edit: I did make a comment on the linked NYT article, including my real identity. In this comment, I called out at least one person involved in shenanigans against me. This person name dropped {} as someone who would recognize him, before he trashed my startup without seeing it, encouraged me to drop out of my continuing Computer Science studies at the local University (due to the bad rep I would receive for doing so as a middle age adult, so he said), and then threatened my career/reputation if I told the truth about specific stalking incidents, all in one conversation. Not long thereafter, I experienced a stalking incident in public by two men with walkie talkies who harassed me about said startup, mentioning non-public specifics about an engagement we were seeking. In retrospect, these men could have been using tech such as Clearview to more easily enable their stalking and harassment of me. The location of this incident was the playground of wealthy folks in my city’s most affluent public area. My comment on the NYT article was not approved by the moderators, understandably.


Meh. You can be recognized by humans, you can be recognized by machines. I don't get the outrage.


> Meh. You can be recognized by humans, you can be recognized by machines. I don't get the outrage.

Put another way: "Meh, you can die from a freak accident, you can die from a well resourced contract killer. I don't get the outrage".

Same result: you die. But not quite the same thing.


Well the huge difference is, once you put cameras everywhere you can see what people did at any given moment, you can have algorithms that actively look for someone and tracks all of his movements. If you don't see the problem with that then I don't know what to say.

Sure, humans can recognize people, but they can only "scan" so many people in a crowd, they do it "live", and they don't remember everything they see.


Some people track their own movements and post them on the internet, so it's absolutely understandable they have no problem with it.

What would be more insightful, IMO, is if you said why you find it wrong/unhelpful/scary.


The top of the story features an example where human recognition was not sufficient:

> One Tuesday night in October 2018, John Catsimatidis, the billionaire owner of the Gristedes grocery store chain, was having dinner at Cipriani, an upscale Italian restaurant in Manhattan’s SoHo neighborhood, when his daughter, Andrea, walked in. She was on a date with a man Mr. Catsimatidis didn’t recognize. After the couple sat down at another table, Mr. Catsimatidis asked a waiter to go over and take a photo.

> Mr. Catsimatidis then uploaded the picture to a facial recognition app, Clearview AI, on his phone. The start-up behind the app has a database of billions of photos, scraped from sites such as Facebook, Twitter and LinkedIn. Within seconds, Mr. Catsimatidis was viewing a collection of photos of the mystery man, along with the web addresses where they appeared: His daughter’s date was a venture capitalist from San Francisco.

> “I wanted to make sure he wasn’t a charlatan,” said Mr. Catsimatidis, who then texted the man’s bio to his daughter.


So it's OK these days to randomly ask a waiter to go and snap a photo of some other customer in a restaurant? Maybe call me old-fashioned, but that strikes me as outrageous behavior.


“Let me tell you about the very rich. They are different from you and me. They possess and enjoy early, and it does something to them, makes them soft where we are hard, and cynical where we are trustful, in a way that, unless you were born rich, it is very difficult to understand. They think, deep in their hearts, that they are better than we are because we had to discover the compensations and refuges of life for ourselves. Even when they enter deep into our world or sink below us, they still think that they are better than we are. They are different. ”

F. Scott Fitzgerald


The karmic payback is that soon he’ll be walking around with a live #iamabillionaire tag on his head, which is not entirely a good thing for him, if you know what I mean.

Of course, he’s welcome to never leave home, in a prison of his own creation.


Billionaires are only occasionally interested in mingling with the hoi polloi. Businesses that cater to them provide all sorts of separate entrances, security staff, etc. to prevent them from suffering the proletarian gaze. They don't care what we think of them, so long as they never have to know about it.

It's a "good" idea though. I'm sure that several of the dozens of inevitable similar AR apps focused on child molesters will sell well. Someone will have a good time hacking those apps...


The next problem is of people making assumptions about other people based on their social media presence. How does he decide whether he's a charlatan?


I think that is the first problem really. People tend to be goddamned idiots doing the equivalent of looking for burglars wearing striped shirts and carrying swag bags marked with dollar signs. It is all about servicing confirmation bias. They think Elizabeth Holmes is a successful entrepreneur and anyone in a cheap suit is a fraudster.

So many of the common preconceptions are idiotic. Like "someone looking like a sexual predator". They can look like literally anyone.



