> His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
This doesn’t sound groundbreaking at all, and I’d be very surprised if the FBI/NSA/DHS/Palantir didn’t already have a system like that. Maybe such systems were just reserved for higher-value targets. Of course, the NSA isn’t going to tell you what it has constructed until decades later, so claiming that this goes far beyond NSA capabilities is reckless at best and clueless at worst.
What’s groundbreaking here is not really the technology but rather the marketing: Clearview is selling a relatively cheap product to law enforcement agencies who are desperate to solve cases at the lowest cost.
I think we (both here at HN and in the larger society) have this perception that police departments, etc., are part of a well-organized hierarchy of well-considered processes and technology that starts at the FBI/NSA/etc. and works its way down, all carefully vetted for technical and ethical standards, and disclosed to the public, who is ostensibly their employer.
The reality that I’ve seen is quite different. As the article points out, individual police departments (and their officers!) often independently research and procure services, based on hearsay and questionable morals. Does the service help ‘get the bad guys’? Then it’s good. Does the service obviously violate copyright and ToS agreements? Not our problem.
I learned this when I built a scrappy little website for a one-man company that builds training kits for first responders. As low-tech as this site was (e.g., ordering the product entailed sending a purchase order by postal mail), I was struck by the apparent success of my client’s product: he had dozens of endorsements from apparently top-level influencers in the field, and didn’t have much competition. There were some certifications that he matched, but it didn’t seem all that hard to sell a few thousand dollars of technology to small/mid-level LEOs. (This isn’t a criticism of his product, which seemed fine and certainly didn’t involve privacy or surveillance technology.) The numbers in the article about Clearview’s services are in the same range: easy to justify if it’s only a few thousand a year and apparently produces good results.
>>What’s groundbreaking here is not really the technology but rather the marketing: Clearview is selling a relatively cheap product to law enforcement agencies who are desperate to solve cases at the lowest cost.
Exactly. This is a prime example of commoditization. The product doesn't have to be sophisticated. Just good enough to handle the most common use cases reasonably well, at a cheap price.
This was in "Better Off Ted": search by a photo of someone to find all online photos of the same person. There was a great comment by another scientist on the team: "we finally killed privacy!"
>Maybe they were just reserved for higher-value targets
That's just it, though. I think there is broad popular understanding that the FBI and NSA have just about any surveillance capabilities we can imagine. But, we think, they only use it to chase terrorists, maybe pedophiles. Local cops instantly identifying a shoplifter by matching the surveillance photo to a facebook photo still seems like sci-fi, even if it was increasingly "near future" sci-fi.
I agree completely. I stopped reading the article after that. One of the most naive things I've read. Even if the writer is correct, how would you know? You wouldn't.
That doesn't mean that every police station in the land should have it. And they could sell this to absolutely anyone if their revenues from police customers aren't sufficient. There was a story about Palantir selling its tech to a New York bank (Morgan, I think?), which used it on its own staff.
Just because the tech isn't groundbreaking doesn't mean the application isn't a concern to society.
I’ve been waiting to see this story and this company show up. It has felt inevitable and it’s sad that it’s being allowed to happen. How long will it be before the Ring cameras on every block feed into a real-time database where anyone can figure out where anyone else is, track their movements, and monitor their every moment?
Facial recognition is a bigger threat to our current way of life than just about anything else I can fathom other than climate change or nuclear war. The scariest part is that few people seem to recognize or care about the risk.
People freak out when it's based on cameras and faces because it triggers some ancient "being watched" reactions in us.
But when it's based on phone GPS, wifi networks, etc., people are fine with it. And that type of tracking has been entirely possible for many years now through smartphones. But it feels less viscerally spooky.
I can go somewhere without my phone. My face essentially has to be visible. I'll agree that people should be more spooked out by phone tracking, but facial scanning is spookier.
If I leave my apartment in NY without any devices and transport myself exclusively with a MetroCard paid for with cash, I can move myself across the entire city without being surveilled.
I will however probably walk into at least one photo that gets posted publicly in the course of a day.
Surveillance by the police and government agencies is different and in some ways more restricted than surveillance by private companies, using social media inputs. It seems reasonable to assume that NYPD won't sell their surveillance camera data to third parties, but the legal restrictions on private companies seem much weaker -- if they exist at all.
People who pay attention are not OK with it being their phones either. Most people don't entirely get that they're under, or will soon be under, constant surveillance.
In addition to the fact that your phone is separable in ways your face, Nick Cage and John Travolta excepted, is not, there's the notion of manifest versus latent (or tangible vs. intangible) perceptions.
Humans are visual creatures. To "see" is synonymous with "to understand". Vision is a high-fidelity sense, in ways that even other senses (hearing, smell, taste, touch) are not. And all our senses are more immediate than perceptions mediated by devices (as with radiation or magnetism) or delivered via symbols, data, or maths.
This is a tremendously significant factor in individual and group psychology. It's also one that's poorly explored and expressed -- Robert K. Merton's work on latent vs. manifest functions, described as the consequences or implications of systems, tools, ideas, or institutions, is about the closest I've been able to find, and whilst this captures much of the sense I'm trying to convey, it doesn't quite catch all of it.
But his work does provide one extraordinarily useful notion, that of the significance of latent functions (or perceptions):
> The discovery of latent functions represents significant increments in sociological knowledge. There is another respect in which inquiry into latent functions represents a distinctive contribution of the social scientist. It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences. As a result, findings concerning latent functions represent a greater increment in knowledge than findings concerning manifest functions. They represent, also, greater departures from "common-sense" knowledge about social life. Inasmuch as the latent functions depart, more or less, from the avowed manifestations, the research which uncovers latent functions very often produces "paradoxical" results. The seeming paradox arises from the sharp modification of a familiar popular perception which regards a standardized practice or belief only in terms of its manifest functions by indicating some of its subsidiary or collateral latent functions. The introduction of the concept of latent function in social research leads to conclusions which show that "social life is not as simple as it first seems." For as long as people confine themselves to certain consequences (e.g., manifest consequences), it is comparatively simple for them to pass moral judgements upon the practice or belief in question.
"One of the odder pitches, in late 2017, was to Paul Nehlen — an anti-Semite and self-described “pro-white” Republican running for Congress in Wisconsin — to use “unconventional databases” for “extreme opposition research,” according to a document provided to Mr. Nehlen and later posted online."
Giving hate groups a greater ability to stalk and harass their victims is also pretty scary.
This illustrates the hypocrisy of Silicon Valley... "don't be evil" indeed.
Peter Thiel is funding this, while he demolished Gawker Media through litigation over an invasion of his own privacy.
Forget Silicon Valley; we urgently need federal regulation to limit this assault on our privacy (at the very least it can slow down our country's inevitable decline into a Black Mirror episode).
According to a different article, one of Peter Thiel's 20 Under 20 recipients eventually started/ran Clearview. I do not think he knowingly invested in this company.
>I don't think many people in the valley actually care about being evil.
I disagree. I think many do care about the evil they're complicit in in their everyday working lives there, but they inherently choose the money over good, and hence target their "goodness" at espousing their virtuousness on unrelated, seemingly utterly disconnected "issues" far removed from what they're doing and supporting day-to-day.
I'm not okay with this, and you shouldn't be either. It's only a matter of time before it gets misused/abused: to identify protestors, for law enforcement officers' personal uses, to identify those doing legal things deemed "immoral" (see: China), etc.
We really need regulation here. Urgently.
The US appears to have been the leader in such regulation in the past. The problem is, they don't do that anymore. They haven't passed any laws related to user rights or privacy in a long time, and are actively trying to make encryption illegal.
The same is true for the Australian government, and those of several developing nations. We can hope that the EU does something, but... the impact will be limited.
It's especially bad for people living in non-first-world countries like India where the citizens aren't educated on the consequences of law enforcement agencies using tech like this. Laws taking away the right to privacy are being pushed through regularly. Recently they've started using facial recognition to identify protestors: https://www.fastcompany.com/90448241/indian-police-are-using...
I really wish that some leading tech companies would try to push regulation through, but that will never happen, since apparently privacy erosion and constant user tracking are critical to revenue for seemingly all of them (except Apple, I suppose).
Also, even if somehow regulations were put in place that made it necessary for websites to try and protect user data and made it illegal to scrape PII, there's nothing stopping government agencies from developing tools like these for themselves. Aaaand we go back to the first paragraph of this comment. This is a sad state of affairs.
What’s worse in this case, versus the already-known issues of facial recognition by law enforcement, is that this private company has the ability to monitor whom law enforcement investigates, and it still does this in a secretive fashion. So it’s not even about trusting LEOs with this; it’s trusting yet another completely unregulated party.
This opens up a whole new can of worms with issues like selling this info back to the victim.
“One helped design a program that can automatically collect images of people’s faces from across the internet, such as employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and even Venmo. Representatives of those companies said their policies prohibit such scraping, and Twitter said it explicitly banned use of its data for facial recognition.”
I am not a lawyer. Is it possible to file a (class-action?) lawsuit against Clearview AI and its clients (police agencies, etc.) in light of this breach of ToS? At the least, this should suffice to procure a subpoena to obtain more information on the exact extent of the misappropriation of public data at play here.
The US Senate has previously knocked down the use of private information on US persons by law enforcement [0]. In the past at least there has been a clear distinction between information suitable for the intelligence community and information suitable for law enforcement. The slope is definitely slippery once again but at least there is some precedent for not crossing the streams.
But it’s hard to identify what efforts I could support with my time where I am. The right to repair movement has done a good job of communicating state by state proposed laws that can be advocated for. Is there anyone doing the same thing for privacy?
Yes, and in contrast with OP’s assertion, some states are actively regulating privacy (e.g., California), and there are plenty of statehouses that would likely be amenable to regulating this issue.
It’s a mistake to think about US regulations as purely federal.
There should be regulations, yes. We should be dramatically limiting the power of the government as it was in the past in the US.
If it is a problem that the government identifies you as a protestor, at that point it doesn't matter that there was a regulation telling them not to. The government needs to be controlled to the point that it doesn't matter whether they can identify a protestor or not, because peacefully protesting should not be a crime that warrants government intervention.
It should be hard for the government to imprison you or otherwise impinge on your freedom: only for serious offenses, with a high burden of proof, in a public trial.
> We should be dramatically limiting the power of the government as it was in the past in the US.
> The government needs to be controlled to the point that it doesn't matter whether they can identify a protestor or not, because peacefully protesting should not be a crime that warrants government intervention.
Right, but that was never the case, and it probably won't be in the future.
Well, it's going to have to be if we're going to live in a world where facial recognition technology exists.
Because this idea where the government wants to jail peaceful protestors, but surely they'll refrain from using certain technologies to find them if we just ask nicely does not make sense to me.
I mean I agree but if they wanted to jail peaceful protestors surely they could do that by showing up at the protest and rounding them up? The whole point is to be conspicuous.
> We really need regulation here. Urgently.
>
> The US appears to have been the leader in such regulation in the past. The problem is, they don't do that anymore. They haven't passed any laws related to user rights or privacy in a long time, and are actively trying to make encryption illegal.
This cat is out of the bag. Findface.ru now actively courts law enforcement and other interested parties. The West does not have a monopoly on this.
Well, people still might find out if a whistleblower leaked info about the program, but that's becoming less and less likely as mistreatment and persecution of whistleblowers has been normalized under the canard, "Well, you have to face the music when you leak classified info."
I'm okay with this. It's a long-standing principle that you have no expectation of privacy in public spaces. On what basis do you get to claim suddenly that it's "not okay" and on that basis slow technological development? Every new tool has benefits and drawbacks, when it comes to integrating computing into our lives, the benefits have always massively outweighed the drawbacks.
We don't need regulation here, urgently or not. This whole push towards banning things --- this company, the EU facial recognition thing, and so on --- strikes me as just another moral panic used as an excuse for a few to impose their opinions and power on the many.
I've yet to see privacy advocates identify actual undeserved harms that have come to people as a result of the technology that they want to regulate. Loss of "privacy" in public is only a harm if you already accept the premise of the argument, which I don't.
> It's a long-standing principle that you have no expectation of privacy in public spaces.
There are limits to what we consider acceptable even in public spaces; for example, upskirt photos aren't OK even if you're in a public place. I think it's still reasonable to consider that one day (maybe today, for many people?) every single moment of one's life outside might be recorded, which was literally not possible until recently. It's a valid thing to discuss.
What I'm against is this idea that X is okay as long as X is expensive, but the moment X becomes cheap, or democratized, or accessible, all of a sudden it's a problem, and we need a ban.
Example: it's already legal to keep tabs on people in public. There are businesses built on this idea: private investigators. A little sleazy? Expensive? Sure. But legal. If you want to be consistent, you should ban them too.
If X is what causes harm, X should be disallowed no matter the price.
In general I disagree (but would agree with you in certain cases, like something that was doable but gatekept due to cost).
A lot of X's aren't a problem until they can scale. It's not pragmatic to outlaw everything that might be a problem at scale but might never be able to achieve that scale.
We're in an era where we are discovering a lot of abuses that could only be classified as an issue due to scale and efficiency.
The problem is that many problems are so because of their scale. For instance, you have billions of harmful bacteria in your body, but are (presumably) not afflicted because they can be handled by your immune system. It would be bad to eliminate all the bad bacteria (even if this were possible without killing good bacteria) because your immune system would become more fragile in future infections.
> What I'm against is this idea that X is okay as long as X is expensive, but the moment X becomes cheap, or democratized, or accessible, all of a sudden it's a problem, and we need a ban.
These kinds of things tend to increase attention given to issues, yes. I don't think it's unreasonable to think that people care more about things that are easily and practically abused, because, well, those are the things that are more likely to actually affect them. Plus it's a lot harder to argue against some formless "maybe people could be watching me" threat, but a lot easier to reason about a specific example.
> Example: it's already legal to keep tabs on people in public. There are businesses built on this idea: private investigators. A little sleazy? Expensive? Sure. But legal. If you want to be consistent, you should ban them too.
I'm not really a fan of private investigators, to be honest, but I haven't really given it as much thought as I should before I argue my case online.
Some things are actually acceptable (if not necessarily good) when only done or available on a small scale, but get problematic once done on a large scale.
A good example is paper records: lots of sensitive stuff is recorded on paper, and access to those records is often less than perfectly secured. But because accessing hundreds of them is tedious, and stealing them might require an actual truck, this is not a real-world problem. The moment we digitize this data, it gets so easy to access, copy, etc. that the old level of access protection is no longer enough.
It's a similar problem with scaling up face detection. Lots of jerks would surely like to harass other people by following them around everywhere, spying on them, and generally making their lives miserable. Until now, this was really expensive, time- and money-wise, so it happened only rarely. But once this gets automated, it also gets cheap.
There are a near infinite number of things that we could pass laws against but don't because it isn't currently an issue. We pass the law when it becomes an issue.
Should we bother passing a law saying it is illegal to teleport across international borders, thus evading immigration checkpoints? Should we set tax rates for the sale and import of time machines? Or does it make sense to wait until either is remotely possible?
"i don't care if i don't have privacy" is not as strong an argument as you think. almost nobody wants strangers to be able to stalk them trivially and arbitrarily.
> I've yet to see privacy advocates identify actual undeserved harms that have come to people as a result of the technology that they want to regulate.
Harms like protesters getting identified, in Hong Kong and elsewhere?
Why do protesters have an expectation of privacy? You go out in public and someone might see you there. If you're a peaceful protester, it shouldn't matter. If you're performing criminal acts while protesting, you should be stopped.
I see a lot of people saying that it's terrible that protesters might be identified. I think at least some of these people are secretly upset that people can't break windows and burn cars with impunity.
> Why do protesters have an expectation of privacy?
You asked about harm. Expectation of privacy is a different matter. And they might expect privacy due to wearing masks - why is it okay to regulate those, but not facial recognition? While I'm against regulating facial recognition, I'm at least aware of the harm it causes.
> If you're a peaceful protester, it shouldn't matter.
It shouldn't, but it does. If the government and corporations were so perfect that it wouldn't matter, you wouldn't be protesting in the first place.
> I think at least some of these people are secretly upset that people can't break windows and burn cars with impunity.
This is a pure ad hominem, and not a very good one either. The motivations of an imagined 'some' people are irrelevant to the harms of surveillance and facial recognition.
Governments might decide they want to retaliate against peaceful protesters, if the peaceful protests are causing problems for their agenda. This would have a tremendous chilling effect on protest.
Governments have retaliated against protesters for centuries. They don't need face recognition to do it. Even now, motivated individuals regularly identify and shame individuals involved in various public demonstrations. I find this fixation on the anti-protest use case of facial recognition to be very bizarre. Is a ban on smart doorbell cameras really going to prevent a state actor doing what it wants?
The gross invasion of privacy is not taking pictures of people in public spaces. It's cross-referencing those pictures against images people have uploaded to various online services, many of them before this kind of use in facial recognition databases had even been conceived by anyone except science fiction writers.
When lone individuals go trawling social media sites for pictures of some random stranger they met on the street, this is commonly understood as creepy at best and (especially to the extent they are violating the terms of the platforms) an invasion of privacy at worst.
I can imagine the EU having a field day with this. Given the scraping of data from other sites, this is clearly processing of PII without explicit consent - it's effectively being used as a biometric identifier too so the rules around sensitive data also apply...
It’s my understanding that GDPR protects the personal data of EU citizens. Regardless of whether you do business in the EU, if you process this data GDPR applies to you.
My guess is that this company has no way to verify that they don’t process EU citizen data. They almost certainly do if they’re scraping so pervasively. And I don’t think they can credibly claim users gave consent let alone all the other rules they need to follow.
Looking forward to someone challenging them on this and hopefully the EU taking action. This feels like exactly what GDPR should protect against.
If a US multinational is handling data ostensibly subject to EU regulations, and using employees in developing countries for the lowest possible cost, they may or may not abide by the letter of EU law, but I think they evade the spirit of the law as a matter of course. There are some senseless "fig leaf" procedures that they use.
The EU might want that, but just today my company collected and passed the personal details of a number of EU citizens to colleagues without their consent.
Since the company doesn't do business in the EU, the GDPR can go get knotted.
PS. My gay mates have also not decided to go straight just because Uganda outlaws it.
>> Since the company doesn't do business in the EU, the GDPR can go get knotted.
That's not how international law works, though, especially when wielded by a large economic bloc. If the EU wants to put pressure on a company, the pain is harsh. For instance, they can blacklist the company and its C-suite from international banking and ask any in-treaty country to extradite or arrest employees.
Also, are you admitting to breaking EU law and moral/ethical codes on HN?
Conflating blasphemy with ethics on privacy? So deities and the right to privacy are both something that belong to the past according to you?
One of them is in the universal declaration of Human Rights. Not caring about privacy is immoral wherever you are and brings only support from unaligned actors in this world.
Is this unique? I feel like there are many companies providing this sort of surveillance/data-scraping aggregation and indexing functionality to law enforcement and TLAs. We just don’t hear about them too often because we are not their customers.
Our government has abandoned us, and total surveillance is the future unless something radical changes.
Fun tip: get an old analog radio, like in an old non-connected car or a boombox or walkman or clock radio or something, go somewhere quiet and private, and listen to whatever you want. And realize that nobody knows what you’re listening to—it’s your secret. I find this to be a strangely powerful experience.
> While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
What's appalling here is that Clearview is playing in God mode, monitoring and acting on what law enforcement is doing, IN THEIR OWN INTEREST, without any oversight. The potential for this to backfire is astronomical.
Sadly, it reminds me of all the stories of authorities using databases for stalking women they're interested in, or people they have a personal grudge against, or whatever.
> That’s because Facebook and other social media sites prohibit people from scraping users’ images — Clearview is violating the sites’ terms of service. “A lot of people are doing it,” Mr. Ton-That shrugged. “Facebook knows.”
The last kind of person on Earth I want making an app like this is someone that doesn't care about terms of service, morality, contracts, or upholding the law. It seems like he just got into it for the money, and has no compunction about unethical behaviour. "Everybody's doing it" is a cliche, and idiotic, response. Don't take any wooden nickels when you sell your soul...
> Police officers and Clearview’s investors predict that its app will eventually be available to the public.
> Mr. Ton-That said he was reluctant. “There’s always going to be a community of bad people who will misuse it,” he said.
(...)
> Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.
> “I have to think about that,” he said. “Our belief is that this is the best use of the technology.”
And then read this:
> Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see. After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches. When asked about this, Mr. Ton-That laughed and called it a “software bug.”
Facebook is already all of those things, though, and nobody pushed back. That's the root of the problem. Too many smart but socially myopic jerks decided their SV projects' funding was more important than human decency. YC is part and parcel of that way of thinking, unfortunately.
I’m hoping this story prompts his company getting sued into oblivion. If I were running Twitter or FB and there were suddenly a strong incentive not to post personal photos, I’d be rightly alarmed.
Wasn't there a Russian app 4-5 years ago that did something similar? Its premise, if I recall correctly, was to trawl VK/dating-app profiles based on photos of people taken on the street.
If we are to believe the tests journalists did, it was pretty good, considering the app's authors just pulled some VK/Russian dating app photos.
How is this not a massive copyright violation? When I upload a photo to, say, Instagram, I know I’m granting them a perpetual license to do basically anything with my photo, but scrapers don’t inherit those rights.
No, I’m talking about them being apparently vulnerable to a trillion-dollar copyright lawsuit from the users — that is, the owners of the copyrights on the photos they’re storing, analyzing, and redistributing.
No it doesn’t. Google honors Facebook’s robots.txt, which excludes basically all the valuable data. It’s why they blew billions trying to get us all to switch to Google+.
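For anyone curious what honoring robots.txt actually looks like, here's a minimal sketch using only Python's standard library. The profile URL is a placeholder, and Facebook's live rules may change; the point is that compliance is purely voluntary at the protocol level, which is exactly why a scraper can ignore it:

    # Minimal sketch of a well-behaved crawler's robots.txt check, using
    # only the Python standard library. The profile URL is a placeholder.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://www.facebook.com/robots.txt")
    rp.read()  # fetch and parse the live robots.txt

    # A compliant crawler (like Googlebot) consults this before fetching.
    print(rp.can_fetch("Googlebot", "https://www.facebook.com/some.profile"))

    # Nothing technical stops a scraper from skipping this check entirely;
    # robots.txt is a convention, backed only by ToS and the courts.
    print(rp.can_fetch("SomeScraper", "https://www.facebook.com/some.profile"))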
For an article seemingly about the dangers of such a product, it sounds an awful lot like an advertisement. They mention several influential people involved, have law enforcement agencies raving about how well it works, include the price point, and note that free trials are available!
Agreed. Anyone who stands to benefit from this company's services is a few clicks away from a trial, and it does not seem they're particularly sensitive to how their system is used.
We all know that once technology allows a beneficial behavior you can easily get away with, nothing can stop it. See torrents, ad blocking, reverse-engineering, cracking, etc.
But even from a legal/moral perspective, it's not clear where the line is. The data is publicly available, uploaded voluntarily by the people themselves. The algorithms are freely available. People are allowed to take photos...
Sure, the end product is creepy, but where along the way did we go too far?
Gathering up personal data without user permission, and then putting it to use in ways not originally intended by those users who provided it.
That's when we went too far.
This isn't hard.
Edit: And as a random aside, I'd be surprised if Clearview wasn't violating copyright law, here. When a person uploads a photo to Facebook, the user grants a license to Facebook.
So unless I'm missing something, Clearview is illegally copying and using these works without permission...
What precisely is "personal data"?
Can a nosey neighbor look out their window and note passersby that they recognize?
Can one recognize people whose face they saw somewhere (on TV, on a dating app, etc.) without "permission"? (e.g., "I'm pretty sure I just saw my Tinder match going into a bar with someone")
What if someone has a very good memory for faces and a curiosity to match? What if someone employs scouts to report when they spot certain people?
What if someone automates these processes?
Where is the line between private information and publicly available raw data, such as photons bouncing off people's faces?
At risk of violating HN's policy: it's this kind of sophistry that allows tech employees and entrepreneurs to justify their predatory behaviour... bend over backwards hard enough and you can find the logic to justify nearly anything.
Let's try it!
How do you define a biological weapon?
If I know I'm sick and I deliberately sneeze on people, am I a weapon?
If I pay people with an illness to sneeze on people, is that a weapon?
What if I cultivate smallpox in a lab and spread it with an aerosol sprayer instead of using human carriers?
Where is the line on what is a biological weapon? Where along the way did we go too far?
This is a great question. For all the trouble (and it's not much) that CCPA has caused me at work, it would totally be worth it if it ends up destroying unethical, unregulated sham companies like this one.
What is the value of your photo? That is the compensation you deserve. So what’s that worth, $2? It’s the aggregate that provides value to the business.
If two parties can't decide on the value, I would think it's only natural there would be no transaction.
If I set the value of my photo at $2 million and company X sets it at $0.20, am I forced to sell it at $0.20? If two people can't decide on a valuation it's only fair there's no transaction.
Depends. If it is your photo, then you own the copyright, and you have every right to set a price or not share it at all (don't click "I agree" on those ToS).
If it is not your photo, then unless you are a celebrity or otherwise a famous figure, the law is quite clear that you do not have any recourse for your photo being used; you must use the courts to determine the valuation of your likeness.
If you own a share of stock whose market price is $10, and someone buys the company for $20, you can't personally refuse to sell for under $50. You will, typically, be cashed out on a certain date without any action on your part. In the past, I've run into people who just won't believe this.
The only question is whether the value is > 0. If so, then Clearview are vulnerable to legal challenge.
The final step of identifying people involves actual individual photos, and these photos are displayed to the police officer. It is not just the aggregate.
If you "profit" (by saving time that you'd otherwise need to go to a brick-and-mortar store) by viewing a new product on Amazon.com, does Amazon deserve compensation?
If somebody took your photo in a public place, why should you deserve compensation?
Where did they get your photo? Did they get it from Facebook? You probably agreed to let Facebook share it with whoever they want when you uploaded it there. Is it a photo of you in a public place? People are not usually entitled to compensation for that.
You took the picture with your phone? You own the copyright. You can issue a DMCA takedown if someone illegally copies your copyrighted creation and profits from it.
I don't think they'd meet the requirements to be covered by the DMCA safe harbor, i.e., being unaware of the infringing material and not receiving financial benefit. However, they may be able to try a "fair use" defence, since they aren't republishing the photos.
We as a society are still experimenting with newfound toys like this in lawless territory, similar to how car traffic was largely unregulated until we realized that traffic rules are really necessary.
It is not clear how this will play out though. Is it even possible to hope for a state that doesn't spy on its citizens? I'm not so sure anymore (thanks, all you f-g terrorists). Maybe our struggle has to be to regulate and enforce how the spying is done, and used, and live with the fact that it can be abused before it is corrected.
If anyone has a clear view of how such pessimism might be wrong I'll be happy to hear it.
> Similar to how car-traffic was largely unregulated until we realized that traffic rules are really necessary
Sure. And then we over-reacted by passing laws limiting cars to the speed of a horse. I'm much more worried about that kind of kneejerk regulation than I am about the actual supposed privacy problem.
We have already reached the point of no return unfortunately. The only option at this point is to open source and normalize this tech somehow so that everybody has access to it.
Right. I mean, cameras are everywhere. Probably, computers will look at and parse their feeds, not just slow humans with eyeballs. Better this than SenseTime.
This company has too much power to fake and manipulate things in different ways. It’s just a matter of time for false positives that crush the already vulnerable people. The level of abuse people can (and will) be subjected to is going to be horrendous.
Without regulations, laws, and audits, everyone is screwed. Oh yeah, even law enforcement is screwed if/when it blindly believes in these systems and considers them foolproof and beyond suspicion.
Creepview — now, that’s my name for this company. It also makes sense that Peter Thiel put money into it.
As this has been and will be inevitable given the tech, it's not a question of regulation, as so many call for. Regulation will just put this tech in a small group of hands, tipping the balance of power, probably of government over citizen.
The only way to deal with this is to recognize that privacy in public spaces was a temporary concept in society available for a limited period. Allow everyone this information and let the cards fall in a more balanced way. Anything else would be oppression.
Lots of discussion on ethics (important!), but little on actual metrics.
What actually are the precision and recall of systems that search over the faces of an entire national population? I would have thought enough people look similar that precision would be inherently low (even as a human it is occasionally hard to tell people apart in photos), but the claims here (and in similar articles the NYT has had on Chinese companies doing similar things) imply near-perfect numbers. (See the back-of-the-envelope sketch after this comment.)
I remember reading how some Chinese company was using face recognition on livestock (chickens, maybe?), which (although this might be human-centric) seem far less diverse in appearance than humans.
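To make the intuition concrete, here's a back-of-the-envelope sketch. The false-match rates are hypothetical (Clearview publishes no figures; the values below are merely in the ballpark of published NIST FRVT results for good algorithms), but the arithmetic shows why precision tends to collapse at population scale:

    # Back-of-the-envelope: expected false positives when one probe photo
    # is searched against a gallery the size of a national population.
    # All rates below are hypothetical, for illustration only.

    def expected_false_matches(fmr: float, gallery_size: int) -> float:
        """Expected number of wrong identities returned for one probe."""
        return fmr * gallery_size

    def top_hit_precision(tar: float, fmr: float, gallery_size: int) -> float:
        """P(a returned match is correct), if the subject appears once."""
        true_hits = tar * 1.0                  # at most one correct identity
        false_hits = fmr * (gallery_size - 1)  # everyone else can false-match
        return true_hits / (true_hits + false_hits)

    population = 300_000_000  # roughly the US
    for fmr in (1e-5, 1e-7, 1e-9):
        fm = expected_false_matches(fmr, population)
        p = top_hit_precision(0.99, fmr, population)
        print(f"FMR {fmr:.0e}: ~{fm:,.0f} false matches, precision ~{p:.4f}")

Even at a one-in-ten-million per-comparison false-match rate, a single search is expected to surface around 30 wrong people, so real-world precision depends heavily on thresholds and human review; "near perfect" claims should be read with that in mind.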
I would hazard a guess that this will lead to more anti-mask laws since politicians, who love appearing tough on crime, will view anyone wearing a mask as a likely criminal. Which will probably decrease crime but also effectively eliminate any checks on state power.
How has this guy not been targeted by organized crime groups already?
This loss of privacy is just another consequence of our perpetual focus on and devotion to competition against each other -- everything gets weaponized. Privacy hinders the real game-changer, the weaponization of you.
Even if we ban it here in the US, there's nothing stopping adversarial foreign governments from building the same databases, and probably with a lot more computing power and technical expertise.
> Even if we ban it here in the US, there's nothing stopping adversarial foreign governments from building the same databases, and probably with a lot more computing power and technical expertise.
You make a good point, a simple ban of this technology isn't good enough. The ban needs to go further, to ban the kinds of things that enable the technology, like massive databases that aggregate people's personal photos or surveillance video.
The bans would have to be quite extensive. You could not allow networked security camera systems like Nest/Ring. No social networks could exist, no personal video sharing. Even governmental databases of photo IDs and company photo IDs pose a risk given this technology; there's always a chance of a data leak or a bad actor.
There does not seem to be a feasible way to stop this. We may simply need to accept that a face will be enough to positively ID anyone.
I have often wondered: if you form an LLC and contractually sign over the rights to your DNA/looks/etc. to said LLC, are you more protected in today's society?
The social media companies might be able to buy some good press by suing this company, its founders, its investors, and its employees into oblivion for terms of service violations.
On the other hand, it would push copycats into developing alternative business models to capture the demand while still hiding from civil legal action.
And there are wider societal costs from potentially chilling innovation like this through the social media companies acting like a trust.
I am waiting for a government that puts together all the tech we are developing right now and creates the perfect surveillance state. Even under the worst governments, like Russia under Stalin, the Nazis, or North Korea, citizens still have/had the possibility of moving around and doing things without anyone knowing. That may soon be over.
The look that Clearview has cultivated is at best clandestine and might even be criminal.
It is telling about the attitudes of law enforcement that they skirt prohibitions on first-party use of facial recognition by consorting with an entity such as Clearview.
Oh I know Hoan. At least he isn't alt-right any more. I think he has matured and is a decent guy. It's just commodity technology, but it is going to make him rich. shrug Cool.
How did Clearview scrape so much of Facebook? Is this fundamentally hard for Facebook, Instagram, Linkedin, etc. to stop if someone determined wants to suck in photos for a tool like this?
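To illustrate why it's fundamentally hard to stop: a determined scraper's requests look almost exactly like a browser's. A minimal sketch (the URLs and header values are placeholders, not Clearview's actual method; real scrapers add proxy rotation, headless browsers, and login pools, which is what rate limits and bot detection try to catch):

    # Sketch of why public-page scraping is hard to distinguish from
    # normal browsing: the scraper sends the same requests a browser
    # would. URL and header values are illustrative placeholders.
    import time
    import requests

    headers = {
        # Impersonate an ordinary browser; defeats naive UA filters.
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0",
    }

    for url in ["https://example.com/profile/1",
                "https://example.com/profile/2"]:
        resp = requests.get(url, headers=headers, timeout=10)
        # ... parse resp.text for image URLs, download, index faces ...
        time.sleep(2)  # throttle to stay under per-IP rate limits

Defenders are left with behavioral signals (request volume per IP, missing JavaScript execution, access patterns), which determined scrapers counter with proxies and headless browsers. Hence the arms race, and why ToS and lawsuits end up carrying most of the weight.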
The counterpart to this dystopian future is other people taking photos/videos without your consent. As a social norm, I wish that were illegal first, and if it already is, enforced more.
Apple and Android should blur faces by default until other people explicitly give consent to be photographed. That's the future I want to see. (A rough sketch of the detection-and-blur step follows.)
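As a proof of concept, the detect-and-blur part is already trivial with off-the-shelf tools. A minimal sketch, assuming OpenCV and its bundled Haar cascade (a real implementation would need a stronger detector and an actual consent mechanism; the filenames are hypothetical):

    # Sketch of "blur all faces by default" using OpenCV's bundled Haar
    # cascade. Illustrative only: Haar cascades miss faces that modern
    # detectors catch, and the consent flow is the genuinely hard part.
    import cv2

    def blur_faces(in_path: str, out_path: str) -> int:
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        img = cv2.imread(in_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5)
        for (x, y, w, h) in faces:
            roi = img[y:y+h, x:x+w]
            img[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 30)
        cv2.imwrite(out_path, img)
        return len(faces)  # number of faces blurred

    # Hypothetical usage:
    # n = blur_faces("photo.jpg", "photo_blurred.jpg")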
The article summary pretty much highlights that anybody can aggregate public data for personal or commercial use, so Clearview AI is not the only player.
Data aggregation and transparency were supposed to be the foundation of open government, but it looks like the citizens (by way of consumerism/capitalism/communism) are the victims of legislated privacy violations by lawmakers who want nothing to do with transparency. It seems like a conflict of interest if business is pulling the strings of politics.
If the citizens pushed harder for a transparent government, other than encryption, what else could they legislate to turn the tables on that debate (i.e., "no privacy for government" is a no-go)?
So, we must always be reactive rather than proactive in our responses? I'm personally quite interested in what might be coming, particularly if the article does a good job of describing driving forces and potential obstacles.
It's not news, it's speculation and should be noted as such. The opinion section would be an appropriate place for it.
If some journalist wants the world to know what their crystal ball says the world is going to be like, they should publish a book, write a blog, whatever. Don't pretend it's journalism.
Humans are notoriously bad at predicting the future.