I got my file from Clearview AI (onezero.medium.com)
811 points by us0r on March 25, 2020 | 224 comments



The article is missing a link to the Clearview forms to request a copy of the data or request deletion: https://clearview.ai/privacy/requests


It's absolutely wild that for anyone who's not a resident of California or the EU/UK, there isn't a way to request anything other than specific images/links.


> there isn't a way for you to request anything other than specific images/links.

And you can't even do that unless you've already managed to remove the image from the internet:

https://clearview.ai/privacy/deindex:

> This tool will not remove URLs from Clearview which are currently active and public. If there is a public image or web page that you want excluded, then take it down yourself (or ask the webmaster or publisher to take it down). After it is down, submit the link here.


At least under the GDPR (as far as I understand it), you can forbid the use of your data. They are then not allowed to use your data in any way, even data that is already public or that you put on the internet in the future.

If they do, they are in "deep shit" (pardon my French) legally. I actually hope they do this, and that somebody catches them in the act. I believe they would be gone soon after.

I would also advise anyone under GDPR legislation to request exactly with whom the data was shared, and then to request deletion and usage information from those parties as well. It is a pain that one has to jump through all these hoops. I would love for the GDPR to have a way of forcing such a company to make the information and deletion requests on your behalf and prove that to you.

Sadly, I believe this was not included.


Norway also has GDPR.


Even more incredible is that the opt-out link requests a clear view of your face to proceed.


Even more incredible is that it requires a picture of your ID.


It makes sense to me. This is a database keyed by facial images. They don't know your name with certainty. The only way to look you up in the database is by face. Then presumably they need the ID to make sure it's you who's requesting the info. Hard to imagine how else to do it, given the nature of the technology.


Exactly. Otherwise, this is just a vector for any person to exploit the process and freely play cop by uploading a photo of someone they're trying to track.


But isn't that the whole value of their service?


Yes, but the value is to provide that in exchange for money, and ostensibly only to law enforcement agencies and similar organizations. (If you try to sign up on their website, it says "Clearview is available to active law enforcement personnel" and that you need to apply for access.) If you're a random citizen and can get the same data, especially for free, the value proposition breaks. And the privacy implications would be worse.

So, I get why they ask for ID, even though I also get the reluctance to give them your ID since it could help tune their system.


How much would you worry about your ID being leaked, if/when Clearview AI is hacked into? What can be done with the info?


How else would they know what images to remove?


This is a case of 'never click "opt-out" on spam'. Clearview is not to be trusted. One should not go through their process. They are not likely to delete the data, and if they have none, they are likely to create a profile for you.


Thank you. I was having a hard time finding this.


I used to think some of my peers were being overly cautious by purposely trying to obfuscate their online profile. Back in the day, I couldn’t care less about putting in effort to try to protect my online privacy. I have slowly but surely come to see the light. Now it’s at the forefront of my mind at all times.

Learning about this company (and I imagine other unknown entities are doing the same) has encouraged me to get more aggressive.

I think I will start to try some shenanigans I learned from a friend. I plan on replacing my online profile pics with a random grab from https://www.thispersondoesnotexist.com/. It might not make much of a difference for the old stuff; maybe I can successfully request a data deletion as the article suggests. At least it will introduce a little bit of noise for the AI overlords :)


I grew up with "don't use your real name on the internet" in the back of my head; this was before kids got internet safety classes.

5-10 years later, Facebook came up with their real-name policy and started asking people to snitch on their friends if they used a fake name. Google, mainly via YouTube, came up with a real-name policy as well: on the one hand for Google+, on the other to try to fight comment abuse, the theory being that people are more hesitant to be a dick on the internet if they use their real name.

But people got used to that real fast, and since there were few consequences anyway, it didn't work.

People have valid reasons to use a fake name on the internet; government and business surveillance is a big one. Abusive or stalker exes are another. So is having an alternate persona (e.g. entertainers, authors) that people are trying to hide from an unaccepting or abusive family, or from society at large.


> People have valid reasons to use a fake name on the internet; […]

Here is a more exhaustive list: Who is harmed by a "Real Names" policy?

https://geekfeminism.wikia.org/wiki/Who_is_harmed_by_a_%22Re...


I too am extremely opposed to any and all non-consenting invasions of digital privacy (i.e., the problem isn't the known-known of what you upload, but the hidden implications of it), but on the contrary, I make a disciplined effort to make my digital fingerprint reflect my actual views and identity. Whatever time capsule my digital identity ends up in, I want it to be as accurate and beautiful as possible.

This includes my avid support of corporations which make good faith efforts to defend natural rights and freedoms, and my vehement opposition to corporate/political nonsense that does not represent, in good faith, the interests of humanity and nature. “A reasonable amount” of surveillance is an essential aspect of society, but it SHOULD be considered invasive, and should never be invisible. I suspect that there will be BS metrics to evaluate how “consenting” a given individual is to NSA/Clearview type behavior, and I would hope that I am casting a shield of protection. I feel bad about the fact that an element of bitterness is necessary to be resilient. I know no other way than truth.

I truly act as if AI is learning from me, and believe that there are long reaching and metaphysical effects to all actions.


Thank you for sharing this viewpoint. Your end goals are admirable. I will seriously consider adopting your strategy.

Obviously, I'm using a nym, because I've wanted my life to be faceted. I've lived very publicly before (ran for office). But in geekdom, I didn't want people to casually correlate my politics (activism) with my open source efforts.

Again, thank you.


Just for the record, the journalist who broke the story on Clearview noted that Clearview AI has specifically demonstrated it isn't fooled by thispersondoesnotexist.com:

https://twitter.com/kashhill/status/1218542846694871040?s=20

You won't be giving them any info on you, but you won't be confounding them either.


I'm not sure what's not working as intended here.

Run facial recognition against computer generated face, got no matches. Surely that is the expected and intended result from both parties?

Or is it expected to match against a different face?


They might have already scraped all of the faces on that site. They aren't generated on demand, so you could conceivably scrape the entire database of fake people and tell the algorithm to ignore anything that matches them. Then the algorithm would just treat any photo of a person-that-doesn't-exist it finds in the wild the same way it treats having no profile pic. It might fool a person, or an AI not trained on those pictures. Another option is to generate your own people that do not exist and use those images. This could work as long as Clearview isn't doing some sort of image analysis to look for telltale signs of AI-generated faces. You could start photoshopping fake faces onto your real pictures in an effort to blur the line between AI-generated pictures and real pictures.
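
For what it's worth, the blocklist half of that is only a few lines. A rough sketch, assuming the open-source face_recognition library and some locally saved fake faces (the file names are made up):

    import face_recognition

    # Pre-compute embeddings for every scraped thispersondoesnotexist image.
    fake_encodings = []
    for path in ["fake_0001.jpg", "fake_0002.jpg"]:  # ...the whole scraped set
        image = face_recognition.load_image_file(path)
        fake_encodings.extend(face_recognition.face_encodings(image))

    def is_probably_generated(photo_path):
        # True if any face in the photo matches a known fake face.
        image = face_recognition.load_image_file(photo_path)
        for encoding in face_recognition.face_encodings(image):
            if any(face_recognition.compare_faces(fake_encodings, encoding)):
                return True
        return False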


He thought giving them a fake face would gum up their search quality and ability to resolve him. I'm saying it would basically be a null-photo to Clearview.


It's the fact that it returned no matches that indicates it isn't fooled. If it were fooled, it would have associated those faces with the accounts that use them (assuming anyone is using them, which someone probably is).

By returning no accounts, it demonstrates their AI isn't using those faces for identification.


In that case, how about the opposite strategy? Take your own picture and a GAN with a latent space feature (i.e. a thispersondoesnotexist that lets you decide age, gender, etc.). Then set the parameters so that you get a picture of yourself. Upload this picture to social media and watch Clearview ignore it, while you still look like you to humans.
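
Finding "yourself" in the latent space is basically GAN inversion: optimize a latent vector until the generator's output matches your photo. A toy sketch of that loop, using an untrained stand-in generator (in practice you'd use a pretrained face GAN such as StyleGAN, and a perceptual loss rather than plain MSE):

    import torch
    import torch.nn as nn

    # Stand-in generator: 128-dim latent -> flattened 64x64 RGB image.
    generator = nn.Sequential(nn.Linear(128, 3 * 64 * 64), nn.Tanh())

    target = torch.rand(3 * 64 * 64)  # placeholder for your own photo, flattened

    z = torch.randn(128, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for step in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(generator(z), target)
        loss.backward()
        opt.step()

    # generator(z) now holds the generator's closest approximation of the target.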


I was thinking something similar. Maybe with some randomization to it, so different profiles across (social) media wouldn't link together through Clearview et al. but still all look like you to humans.


True, but at least it prevents them from linking accounts. It is equivalent to having no profile picture, which you might not want if you're trying to keep up appearances.


You may be surprised to learn that your face is your least identifying trait online. Your network of friends/followers/likes identifies you far more readily, even if you use a random username.[1]

Managing your privacy is a lot like CPU side channel attacks. It forces you to re-evaluate your fundamental assumptions about what information can be exploited.

[1] http://www.vldb.org/pvldb/vol7/p377-korula.pdf
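
The intuition behind [1] is easy to demo: score candidate account pairs by how much their friend sets overlap. A toy version with made-up handles and friend lists (the paper's actual algorithm is a more robust percolation scheme):

    # Link accounts across two networks by friend-set overlap (Jaccard).
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    network_a = {"cooldog42": {"alice", "bob", "carol", "dan"}}
    network_b = {
        "jsmith": {"alice", "bob", "carol", "erin"},
        "randomguy": {"frank", "grace"},
    }

    for handle, friends in network_b.items():
        # "jsmith" scores 0.6; "randomguy" scores 0.0
        print(handle, jaccard(network_a["cooldog42"], friends))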


While reading the comment, I was thinking about overlaying faces with the Laughing Man instead of thispersondoesnotexist.

http://cdn.collider.com/wp-content/uploads/2016/02/ghost-in-...


But, that sounds like it is fooled, in this context.


Replacing pictures won't help (they already have them). A GDPR request for deletion might work better, but on the other hand you also give them your confirmed identity. With companies as shady as this one, they might just set a flag in their database and add your document's data to it.

As you have figured out on your own, the public should have listened to the people who have been warning about this for more than a decade, instead of making fun of them (tin foil hat, ...).

And if anyone thinks that Google and Facebook don't have their own versions of Clearview, think again. Any form of online presence under your real name has to be minimized. It is doable, but it would mean stopping (or should I say curing) all the narcissistic urges to share, and refraining from putting any personal information on the internet (no, you won't secure my account by having my phone number; provide me TOTP if security is really the reason), while also preventing it from being stolen by apps (application firewalls, sending back fake data, and not using any Google applications, including removing their preinstalled spyware by rooting the phone).

I can guarantee you that you won't be missing anything relevant (I have been doing it for more than a decade). But. Will you do that? Can you do that? Do you want to do that? Most people would rather just take the blue pill.


> With companies as shady as this one, they might just set a flag in their database and add your documents data to them.

This is exactly my fear. If they were more legitimate, I wouldn't be worried about sending an ID card. I'm wondering whether it makes sense to fake an ID with my real picture on it, so they can give me the data and "delete" it under a fake name, to make sure I'm at least not feeding the troll.


> GDPR request for deletion might work better

Dumb question: how can anyone be sure that companies actually delete the data? What about the backups, what happens to the data there? How do these government enforcers verify this? Also, what about Clearview's employees abusing the data? What stops them from snooping on someone they are interested in?


It is actually not a dumb question. Nothing stops them except liability for a hefty fine if they are caught. In the same way, nothing stops a criminal from stealing your car, but a possible jail sentence deters him from doing it. Some cars will be stolen; some criminals will be caught and jailed. Sure, a car has locks; that is a preventive measure, same as not uploading your personal data/pictures/... to "public" servers.


It's pretty easy to check whether or not they are still selling a profile of John Q. Deleted to customers, since anyone can be a customer.


Your peers were being rational, but it's a drop in the bucket. I've come to the conclusion that privacy can only be protected with legislation. There is too much surface area to protect for the average person to police their own data trail online. Even experts have a tough time doing it. You'd have to abstain from virtually everything, and even then you can't keep other people from posting you, tagging you, etc.

This isn't a technical issue. It's a political issue.


Starting in the early 90's, "everyone's a dog on the internet", so my profile pics were dogs. One buzzed evening, I changed a few to Fabio. Good luck, Fabio.


"I miss the good ole days of the Internet -- back when the men were men, the women were men, and the kids were FBI agents."


The thorny issue here is that this is all public information put forth willingly by people. These are not leaked medical records. In a way these abilities are like a person saying "hmm, isn't that the guy from that thing a while back?" What are the limits of what one is allowed to do with public data? I don't have a clear opinion.


I think about it like this: what would happen if you did this manually, at the same scale? If I went around asking every single person you've ever met whether they could provide any pictures in which you're in the background, and collected them all and sold them as a collection, I'd be borderline harassing/stalking you. Not necessarily straight-up illegal, but maybe in some ways? That's what this is, digitally.


The data was shared willingly within the understanding of the person making that decision. Do people really understand how much data in their day to day life they have “willingly” shared?


This public information includes security cam footage. It’s not up to us if people film us, the cameras are everywhere


> These are not leaked medical records. In a way these abilities are like a person saying "hmm isn't that the guy from that thing a while back?"

That's why I don't think there is much of a point in trying to prevent people (e.g. by law) from crawling and using data in this fashion.

I feel like the only reasonable solution here would be to force these companies to rebuild their databases by legally limiting the lifetime of such data.

That way people have a chance to remove themselves from the database by changing/deleting their online profiles without having to use legal measures like GDPR requests. People wouldn't even have to be aware of any individual database they might be part of; they would be removed from it automatically at some point.

Another benefit of this would be that the pure cost of constantly re-crawling a giant dataset could act as a limiting factor and therefore prevent abuse.


If you are going to play the "is it legal" game, then it's illegal bulk copyright violation. I gave Facebook a license to make copies of my data for use on its website. I didn't give Clearview a license to make copies to give out to its customers.


Is it just me, or did anyone else find the availability of photos to be less than expected?

It is worrisome, but a Facebook could produce a lot more privacy-related connections from private photos no one knows existed. I guess I was expecting that... perhaps in the future Facebook will offer this service.


Almost all of those photos were from the guy's personal blog, and one wasn't even him.

I'd be way more worried if it was finding stuff like me in the background of someone else's photo in a crowded city or something like that.


To my knowledge, only 1 photo of me exists online (I've never even taken a selfie), in a concert crowd that showed up in our online newspaper. Feeling pretty secure about that.


Bit of a rub that to request your messy, potentially erroneous, public profile, you have to give private, authenticated identification and contact information - basically the most valuable information they could want, dramatically increasing the value of your profile that you were so concerned about in the first place.


> Bit of a rub that to request your messy, potentially erroneous, public profile, you have to give private, authenticated identification and contact information - basically the most valuable information they could want, dramatically increasing the value of your profile that you were so concerned about in the first place.

The concept of being opted-in by default and being forced to authenticate yourself to opt out is getting more and more ridiculous by the day.

These systems need to be opt-in, and that requirement needs to be enforced by some kind of powerful government agency with the power to arrest and jail non-compliant operators. Anything less feels like it would end up being a complete surrender to companies like Clearview AI.


I've been thinking about this for a while. A fairly simple solution would be to turn the incentives around. In the EU, people are already the rightful owners of their data, so that needs to be applied worldwide. But that's not enough.

To really flip the table, we need to make it mandatory to pay people for use of their data. If a company is exploiting[ß] someone else's property, the owner must be compensated for the privilege. Take a page from SaaS companies' nickel-and-diming ("pay per use") billing strategy, too. Just turn it around: forbid blanket permissions and consents.

The idea is to ensure that other people's data needs to be universally treated as a toxic liability, not an asset.

ß: in the economic sense, although other meanings apply equally well.


The governments are the customers of such products. We can't expect them to protect us.


> The governments are the customers of such products. We can't expect them to protect us.

So you're saying we should just surrender to companies like Clearview AI? I don't agree with your fatalism.

I think you're making a mistake in assuming that governments are unaccountable and cannot be prevented from pursuing interests that run against those of the people. That's clearly not the case. If you were right, we'd already have unchecked police surveillance (isn't government the consumer of surveillance products?), but we don't. That's only the case in non-democracies like China. In democracies, society (and the government itself) is capable of putting significant constraints on government action.


I suppose it depends on the definition of government. I was thinking of the executive branch (maybe I confused the terms). The legislature (parliament) would be the branch that has the responsibility to protect us from such things.


I came here to make the same comment. I don't like what this company is doing, but the information is already public. People should know that anyone can read your public data and assemble it. It is not very different from living in a town. In a town, everyone knows public, and not so public, information about everyone. Police, or a private investigator, can always go and interrogate the butcher or the hairdresser and ask for information about you. They can also read the local gazette. The difference now, and it is not minor, is that the town is global.

Privacy starts from us not revealing our private information. That's why we have curtains at home. It is us who put the curtains on windows.


Came here to say this. Do any of the privacy laws allow for deleting your profile without proving to the company who you are? Could you have the government make the request on your behalf? Sending Clearview your license sounds like confirming your credentials in a haveibeenpwned-like honeypot. It seems irresponsible for the author to even suggest that to readers. There should be other ways.


It's probably because, for Clearview, it's the only way to know that it is indeed you who is asking for information on yourself.


I asked this in a sibling comment to yours but couldn’t the government, who issued your ID in the first place, make requests on your behalf to enforce the privacy laws they’ve created?


Is it really that surprising, since all the photos are available on public sites? It seems this tool reveals the same things a Google search for the name would reveal.


This guy's name is Tom Smith. Go ahead and pop that into Google and let me know how that turns out.

His name is about as generic as you could imagine for a white person. But they returned a bunch of images of HIM, and one Alexey Something-or-other, which could be his troll account.

Edit: the Alexey part is a joke, I'm sorry but I thought it was funny.


TIL generic names (in your geographic region) are good ways to promote privacy.

I always wondered how the Chinese authorities figured that out considering the massive name reuse in China. I can’t watch a Chinese movie or TV show without at least one character sharing the surname (the first part in China) of a Chinese person I know.

Smith would be our classic version in the west.


The Romani people have used it as a general tool to make it difficult to govern them. The local equivalent of "John Smith" gets used by everybody in every official form. Meanwhile, they just call each other what they otherwise would've.


China handles it by making you put your national id # or phone # on everything you do.


The only real concern I see is that it allows someone searching to go from photo to name very quickly.


I'm not entirely sure what's so shocking about public data, shared willingly by the person with the public, being used to identify the person. If the data hadn't been shared willingly, I'd definitely see that as a problem. Same thing if the data had been gleaned from non-public sources, e.g. the person's private belongings or secured digital realms like private forums, password-encrypted backups, private profiles, etc.


It's about trust and permission. If I give e.g. a local news website permission to use my portrait, I do NOT implicitly give permission to Clearview, Google, Facebook, etc to use it for their own purposes.

I mean, it's implicitly assumed that anything you post on the internet is public property, but legally that is not the case; portrait law applies (at least in my country). You can't just take someone's portrait and use it for your own gain.


Are they actually using it, though? Or are they just a search engine presenting public links that are hosting the photos of your face? Is linking to something covered by GDPR?

Couldn't this same argument be made for Google reverse image search? Reverse image search someone's face and you're likely to get links to places that photo is located; potentially including links to different photos of their face, as well.

They're likely mirroring local copies of the photos as well when they show you the link (in case a link later 404s or changes), but so is Google reverse image search, I believe.

If your position is that both things are wrong, that's fine, but would Google then basically have to remove the entire feature? Or prevent searches for any photo containing any faces?


Actually you do, because scraping is considered legal in the US and your information is on a public site which makes scraping the information legal.


Legal _in the US_ is an interesting point. We often see the US trying to overreach and apply US standards when US-owned material is hosted on sites outside of the US (Richard O'Dwyer), or arguing that accessing a server in the US makes you subject to US law (Gary McKinnon). Why would similar arguments not apply here? It may be legal in the US, but that doesn't necessarily place the actions outside the jurisdiction of the other country. Naturally, _enforcement_ may be an issue, but that doesn't make it legal. At the very least, I don't see how it can be seen as OP giving permission - it's effectively another country taking that decision away from him.


Alright, I’ll go there. This is like claiming all German citizens were complicit with the Nazi movement, because it was legal.


Ideally we live in a society where not being anonymous is not such a big risk.

But governments change, and the data is still around to be abused.

This is what disturbs me: how data can be abused in the future.


Yep. "I trust this entity with my data" is absolutely not an argument to be lax with your privacy.

Take Pebble for example. They had a very invasive privacy policy and reserved the right to upload pretty much anything from your phone via the companion app, but they were a cool hacker-friendly hardware startup and a lot of people trusted them.

Years down the track they ran out of runway (the ugly side of "unicorn or bust" venture capital but that's another rant) and were bought out by Fitbit. Meh, Fitbit seemed pretty good with privacy too so that's alright, I guess?

Now Google's bought Fitbit and potentially has a bunch of very personal, private data on everyone who originally trusted Pebble.


Why were Pebble and Fitbit more trustworthy? Because they were "cool"?


And because they had only one source of data, namely their watch/armband, and were not cross referencing it with search queries, YouTube views, etc. Unlike Google, needless to say.


Whether they really were or not is another question. Personally, I didn't trust them any more, but a lot of people did.


This nails it. It's not the present view of you that someone might get, but it's the fact that they can roll back the clock on you and re-interpret anything you did or said or anywhere you went. Digging up some ancient tweet is somewhat analogous to it, but it cuts way deeper when you start thinking about every moment of your life. On the flip side, if you're just living your life OpSec feels overkill...


> not being anonymous is not such a big risk.

the world has always been anonymous because of the lack of capability to track large amounts of data - until recently.

Anonymity gives you safety from anyone who seeks to prey on you. I think that safety needs to be maintained. People stupidly put photos of themselves online, then face-tag their friends. This allows third parties to identify your friends and circles, and that's dangerous. All relationships should be reciprocal.


> the world has always been anonymous because of the lack of capability to track large amounts of data - until recently.

The first data privacy law ("loi informatique et libertés") was introduced in France around 1980, after a controversial government project to create a massive database of people generated a huge scandal.

So it's been possible for quite a while, it's just that it was reserved to state actors.


Privacy laws, much like noise ordinances and pollution regulations, are emergent responses to novel threats and ills.

In a world without the printing press, anti-libel laws didn't exist. In a world without photography, rights to personal image and freedom from invasive shutterbugs didn't exist. Anti-wiretapping and phone-recording restrictions were necessitated by the telephone. The Bork bill protecting the sanctity of ... video store rental records ... was necessitated by videocassette technology, a video rental market, Supreme Court nomination hearings, chatty store clerks, and newspapers interested in publishing such details.

As technologies tear down and penetrate the long-standing barriers to snooping, recording, transmitting, analyzing, and acting on what had always been personal and private behaviours, societies turn to law to reinstitute those protections.

Privacy is an emergent phenomenon and a direct response to intrusions.


No, long ago the world was a bunch of small groups of people that knew each other intimately. At the village and tribe level there was little or no anonymity. The offset was that there was also intimacy. The problem now is that we again have no anonymity, but there is none of the intimacy needed to offset the negative aspects of this (and we probably can't, I don't think intimacy scales the same way, but who knows).


The people who left their tribes (or were forced off) were effectively anonymous to any new group of people they might come across. So if you were kicked from your group for a misdeed, sure you could continue your bad deeds in the new group or you could turn over a new leaf without the weight of your past mistakes holding you back.


> The people who left their tribes (or were forced off) were effectively anonymous to any new group of people they might come across.

The people who left or were forced out often died. The world is a scary and dangerous place without a support network, which civilization basically is.

> So if you were kicked from your group for a misdeed, sure you could continue your bad deeds in the new group or you could turn over a new leaf without the weight of your past mistakes holding you back.

Outsiders were often viewed with distrust. Why wouldn't they be, when most people can only associate their leaving the safety of the community with at best a foreign way of thinking, but more likely them being forced out for past misdeeds.

It's sort of like interviewing a 35-year-old for a job who has no work experience to show for the last decade and no very convincing story as to why (you don't even really know whether their story is feasible). Why take the risk?


Ever wonder why places were called the Wild West?

It's an awful dangerous place when you're alone out in the Wild West.


I thought about this. Perhaps YouTube's real-life videos (not staged mini TV shows like Pee-wee) have opened up people's lives to their world. Perhaps Twitch and live coding streams provide this on some level. Perhaps online video games or even social media provide that intimacy into others' lives at scale.


Exactly. Very current affairs: governments are looking into getting location data from phone networks, apps, phone manufacturers to determine whether people are sticking to the curfews. And this change in perspective happened real fast.

Does the end justify the means? A pandemic like this is the ideal chance for a government to set up emergency measures like martial law, while the people themselves are too busy trying to look out for themselves and their family to be able to protest it.

Of course, anyone with half a brain already knew that unlimited data gathering, including location or personal information, was a bad thing.



> This is what disturbs me. Is how data can be abused in the future.

The general 1984-style dystopia vision is that there's a government change for the worse and you could be SWAT'd out of the blue.

The most probable one is that this kind of tool would be used in far less obvious, if at all visible, ways.

In that situation some kind of honeypot/canary strategy would be nice to reveal shady use but I can’t seem to come up with a realistic one.


I’m pretty sure ICE is already using that to find potential targets that are not residents.

Most non-technical people don't understand how powerful technology is.


So will we see a new normal where, if you delete your data, you're considered suspicious because it looks like you might be hiding something?

Personal data may soon become the same as a credit score. No data, high risk.


This is already here with Google if you use uBlock or uMatrix. Endless captchas...

"We detected suspicious blah blah blah"



Ironically that triggered the CloudFlare check for me.



I submitted an info request about 2 months ago now, and I haven't heard a word from these jackasses.


Keep going. They risk being fined by not answering.


I wonder what would happen if I were to issue them a $5000/mo/piece retroactive license invoice for any/all use of my name/likeness/etc for their profit.

I wonder what would happen if everyone did.


A class-action suit that would probably last for a couple of years and end in a settlement where the claimants, assuming they fill(ed) in forms x, y and z, would be entitled to at best a few hundred dollars.

See also the Equifax settlement. I'd say Equifax is the best / most recent example about something like this.


> Equifax is the best / most recent example about something like this

Equifax’s credit-data provenance is enshrined in law. Their mistake was in improperly distributing legally-owned data.

Clearview does not have legal claim to individuals’ or Facebook‘s copyrights. Its mistake is more fundamental than Equifax’s.


I’d love to see a successful copyright case along these lines.

I wonder if this can be combined with adding a “terms of service” to your FB profile.

Corporations have perverted contract law to the point where their terms of service are binding even if you never read them, or are even aware of them.

Turnabout is fair play, right?


In OP’s case, his profile includes pictures scraped from a privately hosted blog, where adding such terms would be a simple matter.


Wouldn’t the new terms and conditions only apply to the pictures after the TnC have been added? So any already scraped picture could still be used.


CFAA enforcement is only for large corporations. Try to get an AUSA to enforce CFAA violations against a big company as an individual person with a website and they will probably laugh in your face.

Remember: there are two sets of laws in the US: ones that apply to you, and another set for large corporations that cooperate with the police and military.


It's interesting how collecting or distributing files with music is a grave crime and corporations can take down anything they don't like with a half-assed DMCA request, with no repercussions for "mistakes", but a person's face image doesn't belong to that person and the same corporations can safely collect and distribute these images for profit.


YouTube takes this one step further to the ridiculous extreme that if you are live streaming a DJ set, it censors your stream in realtime whenever it detects a copyrighted song, i.e. a good 1/3 of your stream. It doesn't need a DMCA request, it does it willingly.

YouTube is so horrible now for content creation, I don't understand how content creators are able to post anything anymore without it being smashed by the copyright automation.


Unfortunately, the EU adopted a law which will force all social media to do something like what YouTube does.


Related Tom Scott video https://www.youtube.com/watch?v=1Jwo5qc78QU

tl;dr ContentID is a workaround for a broken copyright system. It's not perfect, but it's better for everyone than falling back to the default of copyright through court.


It goes even further than corporations being able to request things from being taken down. Kim Dotcom's house was raided by the FBI in New Zealand over this. He isn't even an American!


>Kim Dotcom's house was raided by the FBI [...]

Source? The US asked for his extradition and the NZ authorities raided his home, as far as I know.


raided at the request of the FBI (which was at the request of the MPAA) might be more accurate.


Actually, in Europe it is illegal to collect personally identifiable information of people without their consent. GDPR and all that.

I am surprised they haven't been fined out of existence yet.


Just a correction: consent isn't the only legal basis. Thought it was important to correct you here so others reading the comment wouldn't get the impression that no consent = illegal.


Probably because nobody in the EU currently has this company on their radar. Or because those who are critical of this company and its practices have not yet reported it to the relevant authorities.

In Germany, not only the GDPR but also the so-called "right to one's own picture" applies here. This means that no one may use/sell pictures of a person without explicit consent. Therefore, photographers must also have an explicit release from the person for the respective context of use.


There has been recent activity (article in German).

The article claims you could ask for deletion of your data without uploading your ID.

https://www.golem.de/news/gesichtserkennung-datenschuetzer-r...


Clearview did not obtain consent, so the data shouldn't be there in the first place. Each and every data protection authority here in Germany should be investigating this.


Well, I would advise everybody to file a complaint with their local DPA. Even if you can only oppose the processing, even if they rely on legitimate interest (it's doubtful their balancing test is acceptable), and even if they acquired the data in a legal way, they failed to inform users of the second-hand data collection [1].

So if they have data on you that is older than a month, and they did not contact you to inform you of it, you can file a complaint!

[1] https://github.com/LINCnil/Guide-RGPD-du-developpeur/blob/ma... (in French, sorry)


Google and Facebook are still around, so the GDPR is an absolute joke of a law that nobody cares to enforce.


You might be surprised: https://www.enforcementtracker.com/


Search for how many of those enforcements came from Ireland. Now look up what country's GDPR authority Google and Facebook and most American companies are choosing to be subject to. It's not a coincidence that the place that hosts all US company's remote headquarters isn't participating in GDPR enforcement.


Ireland tried to give Apple a great deal. The EU didn't take it very well [1]. I wouldn't be surprised if the EU investigates Irish GDPR enforcement.

edit: also France did fine Google just fine.

[1] https://en.wikipedia.org/wiki/EU_illegal_State_aid_case_agai...


> but a person's face image doesn't belong to that person and the same corporations can safely collect and distribute these images for profit.

The irony is that companies like Facebook, Twitter, etc are really bothered when another business scrapes profiles to mine data uploaded to those sites.

I wouldn't be surprised to find out that one of these companies gets sued by them for "stealing" content they host and violating the licenses for user content they grant themselves via their ToS.


It's because you haven't shown a dollar value of damages.


> but a person's face image doesn't belong to that person

Copyright applies there too, and if you sued them for it, it's not inconceivable you might win.


I'd need to prove it, they'd skillfully dodge the request, I'd have to hire expensive lawyers ($500/hr) and make a more formal request that's harder to dodge, they'd say the data is distributed across the globe in multiple jurisdictions in a very complex form, I'd have to hire an entire law firm that works international cases ($1M/month?), they'd drag their feet and muddy the discovery requests as much as possible, and at the end of the day it would be about who runs out of money first and who has better connections. Copyright law is made to resolve disputes between big companies.


> For a few million dollars, nearly anyone with a startup background could likely build their own version of Clearview in less than a year.

Am I being naive, or is this being overly generous? What about this cannot be recreated with an off-the-shelf web scraper and a pretrained facial recognizer?
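
For context, the recognition half really is a few lines with off-the-shelf parts. A hedged sketch, assuming the open-source face_recognition library and a brute-force in-memory index (a real system would also need a crawler and a proper vector index):

    import face_recognition
    import numpy as np

    index = []  # (source_url, 128-dim face embedding) pairs

    def ingest(image_path, source_url):
        image = face_recognition.load_image_file(image_path)
        for encoding in face_recognition.face_encodings(image):
            index.append((source_url, encoding))

    def search(query_path, max_distance=0.6):
        image = face_recognition.load_image_file(query_path)
        hits = set()
        for query in face_recognition.face_encodings(image):
            for url, encoding in index:
                if np.linalg.norm(encoding - query) <= max_distance:
                    hits.add(url)
        return hits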


It's likely to cost you a few million dollars in hardware and multiple months to run your off-the-shelf web scraper and pretrained facial recognizer on a very, very large number of images. There are a lot of images on the internet; bandwidth and compute are not free.
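
Back-of-envelope, with made-up but plausible numbers:

    photos = 10e9                                # assume ~10 billion images crawled
    bandwidth_tb = photos * 50e3 / 1e12          # ~50 KB/image -> ~500 TB downloaded
    gpu_seconds = photos * 0.02                  # ~20 ms per face embedding
    gpu_years = gpu_seconds / (3600 * 24 * 365)  # ~6.3 GPU-years of compute
    print(bandwidth_tb, gpu_years)

Even parallelized across a fleet, that's months of work and a serious hardware bill.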


I think Clearview having indexed basically all public information about people gives them a serious advantage for face recognition and building up a network of relationships between people.


Maybe overly generous, maybe not. We shouldn't get stuck on the accuracy of the dollar amount. The point the author is trying to make is that there will always be a "Clearview".

So, we need strong legislation around the use of this technology especially when it comes to law enforcement as opposed to trying to kill the idea itself because that's unrealistic. Just as you said, you could start it from your laptop.


You can even upload the scraped images to Google Photos (which does a boatload of AI classification).

Pretty much anyone with Python familiarity can do this.


Probably the scale.



Aren't they profiting from the copyrighted works of others without express permission?


This website does some crazy redirect loop between medium.com and the subdomain when opened without JS. How is that even possible?


It returns a 302 header with this location:

https://medium.com/m/global-identity?redirectUrl=https%3A%2F...

Which then sends another location of:

https://onezero.medium.com/i-got-my-file-from-clearview-ai-a...

Which just ends up bouncing you back and forth, unless JS is allowed to percolate through. However, there is some useragent sniffing happening, so the exact set of headers changes.
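
You can watch the bounce without a browser. A sketch with Python's requests, following redirects by hand so each Location header is visible (the starting URL is illustrative, and absolute Location headers are assumed):

    import requests

    url = "https://onezero.medium.com/"
    for _ in range(5):
        resp = requests.get(url, allow_redirects=False)
        print(resp.status_code, resp.headers.get("Location"))
        if resp.status_code not in (301, 302, 303, 307, 308):
            break
        url = resp.headers["Location"]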


Nothing magic, they're just sending an HTTP 302 redirect if they don't see a cookie. If you hit it with wget you'll see two 302s, one of them with the old "Moved Temporarily" text and the other with "Found". I'm not sure why you only get two with wget, possibly user-agent sniffing. Tracing with Firefox's dev tools I see an initial JS redirect, but that may be a bug since I've got JavaScript disabled for medium. Alternatively it's a bug in NoScript, and that's not good. Either way, they'll toss a 302 with no JS at all.


Potentially done with an http-equiv meta tag hidden inside a noscript tag

    <meta http-equiv="refresh" content="0; url=http://example.com/">


I had the same issue. Found it was because I had set Firefox to block all cookies for medium.com (probably to get around the article limit).


It didn't for me. (It did, however, make the content of the page load roughly two orders of magnitude faster.)


Also running into this issue.


A related thought experiment: what if Google or Bing built an image search service based on face recognition? If you don't like that idea, how about doing it only for celebrities and public figures? If you are OK with one but not the other, what would be a good line distinguishing them, and what's the guiding principle? If you are not OK with either, or OK with both, why?

The more I think about it, the more I lean toward allowing both, but I can see why people would not like it.


> If you are ok with one but not the other, what would be a good line distinguishing them and guiding principle?

Because celebrity by definition requires trading privacy for fame, and is almost always a decision.

We need a new legal classification for "public, but not accessible by everyone in the world for the rest of time" information, which is what most regular people assume or desire for themselves.


Do the founders of this company have their profiles in the database? What about the other executives and investors? Do those profiles contain all of the same personally identifiable information as any other randomly sampled user? Do they show photos, social media posts, personal contacts, school teachers, addresses, family member data, etc.? If not, this sort of blunt assault on privacy would make Orwell roll over in his grave.


So the difference between this and Google's reverse image search is that Google's matching algorithm is worse (probably because it doesn't include facial recognition)?

Well, public images are public, and I don't think banning such a service would prevent governments from implementing something like this... the technological hurdles are getting lower and lower.


You can presumably hold government officials accountable, but you can’t vote out the CEO of Clearview.


Nobody should be posting their private data (pictures or videos, for instance) on a publicly accessible site if they want privacy. Look, if you want your relatives, friends, and even acquaintances to see whatever you want them to, just do it through a medium that allows for private communication. It's just common sense.


>It's just common sense.

Based on what and how people share online, the "common" sense is an expectation of decency in not vacuuming data just because you can. You and I know that's silly, but that's not common sense.


When it comes to information gathering, I have always assumed that if it is technically possible to do, then some agency in the government is doing it. So even if people were able to shame all of Clearview's customers into not using them, that wouldn't stop this type of information gathering from going on.


Just because the government has nukes, doesn't mean citizens should have them. The government shouldn't have them, either, but there's not much citizens can do about that outside of revolt.

Meanwhile this company has been nothing but privacy abuses and lies to the public. If it isn't broken up by law, it will be interesting to see what the people do.


> Perhaps most worrying is the fact that some of Clearview’s data is wrong. The last hit on my profile is a link to a Facebook page for an entirely different person. If an investigator searched my face and followed that lead (perhaps suspecting that the person was actually my alias), it’s possible I could be accused of a crime that the unknown, unrelated person whose profile turned up in my report actually did commit.

Yikes, this is like Minority Report - level weirdness.




Honest question - why is author so shocked, given that all the information was public or published by himself?


I think because the idea that someone you don't know and have no relationship with is systematically collating everything they can about you, is a bit like having a stalker.

You don't know why they're stalking you.

On the face of it it's for law enforcement in case you decide to commit a crime sometime in the future.

...or it could be so that they can raise the prices in shops when you walk in and the facial recognition picks you up.

...or it could be for a future employer to decide that the kind of bars you visit means that you're not the right "social fit" for a job.

...or it could be for... anything.

You have no control and that's the scary thing.


Probably because it was in disparate places and it seemed unlikely that someone would, or even could, aggregate and correlate it all with enough accuracy to not be junk.

There is a difference, or at least there conceptually was, between posting your life story and all of your thoughts on your central LinkedIn profile, versus having two dozen different "blogs" of sorts over the years, Steam accounts, Facebook, MySpace, Flickr, usenet groups that come and go and we think of as ethereal. When you see all of that stuff pulled together it could be deeply unsettling.

Of course that was foolish -- eventually networking, storage, and computation would allow for everything to be ingested, and facial identification would greatly assist in pulling it together -- but it seemed dystopian at the time.


Humans don't keep track of all the things that could be done. I know that what Clearview does is technically possible, but when I see it actually being done, it shocks me too.


I hope somebody out there in a country that doesn't have an extradition agreement with the US breaks into their systems and publishes the list of their clients. We urgently need a big-style leak of everything these scumbags do.

Also countries should start issuing arrest warrants and sanctions against them.


Wow, that last photo doesn't even remotely look like him. I'm more surprised by how bad their facial recognition is than by the data they have.


Why is there no arrest warrant against the managers and owners of Clearview here on the EU?

They are using PII of hundreds of millions of Europeans without written consent.

That alone should mean billions of euros in fines.


Because the general public doesn't care enough to raise a stink? Back when the Snowden story broke, I tried explaining its significance to a handful of my friends; all I got was a yawn. These are not dumb people, they are smart professionals (non-IT). Even then, I failed to get the point across to them.

It would require some serious education before the public wakes up to the dangers of private companies running amok with their data. Sad thing is, it is already too late. It is going to be very difficult to put a lid on this. This is a company that we (now) know about - how many are there silently working in the shadows that we don't know about?


> It would require some serious education before the public wakes up to the dangers of private companies running amok with their data.

Exploitation is the best education there is. Everything is fine until it isn't. One day, companies and governments are going to start doing things with personal information that are unacceptable to even the average person. By then, it will probably be too late.


Case in point: The Nazis used census records to identify Jews. Who knows what pieces of personal information the next madman will consider relevant. They'll certainly have much more of it to choose from.


I've heard anecdotally that the DOD takes privacy and security significantly more seriously post OPM hack.


Is potential surveillance by government entities the primary reason you are against companies generally doing whatever they want with user data? Are there other rationales?

Just like your friends, I personally don't particularly care either, but from time to time I have tried to understand the privacy crowd's obsession with this issue and the rationale behind laws like the GDPR and CCPA, as well as the desire for even more restrictive laws and I truly don't get it.

Is there a manifesto somewhere I'm missing? Some essay or thinkpiece that lays out in detail the case against collection of user data?


The best manifesto is a history book.

Take any century in the last 2000 years of human history, and there are files about people. For a long time they were carved or handwritten. Then printed. Now digital. But it's the same thing; only the scale and speed change.

And at any point during those centuries, somewhere in the world, some entity (it doesn't have to be the gov) does bad things with those files. It's different every time. Excluding, killing, tracking, stealing, controlling... The form changes every time, but it's the same thing: abuse of those files.

It would, of course, be less of a problem if the information access was perfectly symmetrical. If anybody could access anybody's data, society would probably have a hard time for a few years, then adjust. And maybe become more fair.

But that's not what's happening. Here it just reinforces power asymmetry. And it creates incentives with huge bias that affect everybody's life.

There are three reasons why people give your answer.

1 - We had a nice run for a few decades in North America and Western Europe. It's been a sweet life. And the human mind sets it as a new baseline. Now people see this as normal, and anything else as the exception.

However, those decades ARE the exception; an exception that needs maintenance to preserve as best we can.

2 - We are already pretty bad at making the connection between our misery today and the consequences of our past lifestyle, but today's information system is making it extra hard.

There are several factors for this: those in power getting really good at PR, information overload, more levels of indirection between causes and consequences, and the whole system complexity that never ceases to increase.

3 - The convenience is huge, and the price delayed

We don't get tracked for free, we get huge convenience in exchange. Plus we don't pay the price immediately, nor individually. We pay it as a society, and since it's cumulative, it's not obvious how much it costs us. It will only be painful in ... Well nobody knows when.

In fact, not only would doing things right strip us of convenience, but we would individually pay a steep price on top, right now, while watching everybody around us not doing it.

It's the exact same problem as global warming.

Not accepting tracking is a deep and important political decision that shapes the future of our entire society. It is as important as keeping church and state separate, or defending freedom of speech.

And it's also why it's not a popular view: it requires thinking about what society we want to build, and not just what life we hope to have individually.


> Are there other rationales?

Yes. The fundamental issue is the total lack of respect for the user's consent.

People usually have no problem with volunteering personal information that is relevant to whatever activity they're trying to accomplish. For example, a company will need people's addresses in order to ship products to them. This is a voluntary, explicit and respectful process: consumers willingly and knowingly give the company copies of the information they need to perform the service and the company uses that information only for its intended purpose and absolutely nothing else.

The problem we face today is that businesses are collecting massive amounts of personal information indiscriminately, invisibly and without true consent. Web browsers hemorrhage personal information without even being asked, and there's no way to stop it. Companies make apps that mine people's phones for every last bit of data they can get their hands on. Web sites put up annoying little banners saying they collect data and call it informed consent even though there's no way to say no. They bury some clauses in a terms of service nobody reads and say the user agreed to it by continuing to use the site, even though cookies were set and fingerprinting was performed on the very first visit, before the user could possibly have known about, much less read, the contract.

Not only that, the information is being abused to do things people don't actually want. When people give a company their email addresses, they assume they will receive messages that are actually important. What happens is the company thinks it has every right to spam people with marketing and advertising emails or sell their data to other very interested parties. People give a company their phone numbers and next thing they know they're getting marketing calls they never asked for and can't opt out of.

And then there's the security issues. If someone has information about people in a database, there's always a chance it can be leaked even if every precaution is taken. The potential for harm is significant. Data should be considered a liability for companies. Knowing things about people should cost them money. They should have ample incentive to collect as little data as possible, limit the scope and frequency of the use of whatever data they collect as much as possible and delete that information as soon as it is no longer needed. What's happening today is the complete opposite of that: companies are collecting as much data as possible, keeping it forever and using it for whatever makes them the most money regardless of people's wishes.

These examples are relatively benign but the potential for harm is always there. What if companies start buying up personal information and using the data to profile and exclude candidates? No doubt information such as browsing history would condemn a huge number of people. What if companies find a way to deanonymize that data and link it to candidates?



> Back when Snowden story broke, I tried explaining its significance to a handful of my friends - all I got was a yawn. These are not dumb people, they are smart professionals (non-IT). Even then, I failed to get the point across to them.

If you were unable to convince multiple smart people whom you care about and respect, then perhaps it is time to at least consider the possibility that your position is incorrect?


Investigations are on the way. Heise online writes (translated): "Hamburg data protection officer takes action against Clearview. Following a complaint, data protection officer Johannes Caspar is investigating the US company Clearview AI, which specialises in automated facial recognition. [...]"

Source (in German): https://www.heise.de/newsticker/meldung/Gesichtserkennung-Ha...


And it only took them 2 years to act on it. Wow.


The GDPR for the first time allows non-profits to bring cases. So it's up to everyone to support them. "They" have a lot of worrying cases to cope with.


Do they take donations? I'd be happy to chip in $1,000 to help destroy these scumbags.



Thank you, just donated 1k to them.


https://edpb.europa.eu/about-edpb/board/members_en

Europeans can find their national privacy boards here, and file complaints about Clearview through them.


Legal action is currently being taken, but it will take some time to clarify things.

For example, some of the pictures come from old, failed social networks which sold them, and whose terms of service (AGB) _might_ state that they can do so. Now it's questionable whether such terms are valid at all, but several things still need to be clarified:

- whether European citizens are affected (it's hard for that not to be the case)

- the exact legal status, as the pictures might have been obtained legally before the GDPR

- ...

- note also that Clearview stores biometric data (templates derived from the images; without them, "fast" search/lookup would not be implementable; see the sketch below)
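For readers wondering what "biometric data" means here: a service like this cannot compare raw images at query time, so each face is reduced once to a fixed-length numeric vector (an "embedding"), and search becomes nearest-neighbor lookup over those vectors. A minimal sketch in TypeScript, where `embed` is a hypothetical stand-in for a real face-recognition model and nothing reflects Clearview's actual code:

    type Embedding = number[];

    // Stand-in for a real face-embedding model (hypothetical): a real
    // system runs a neural network mapping a face to ~128-512 numbers.
    // This dummy just folds bytes into a small vector so the sketch runs.
    function embed(imageBytes: Uint8Array): Embedding {
      const v: number[] = new Array(8).fill(0);
      imageBytes.forEach((b, i) => { v[i % 8] += b; });
      return v;
    }

    function cosineSimilarity(a: Embedding, b: Embedding): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
    }

    // What gets stored per scraped photo: the source URL plus the
    // biometric template. The template, not the image, is what makes
    // fast lookup over billions of faces possible.
    const database: { url: string; vector: Embedding }[] = [];

    function search(queryImage: Uint8Array, topK = 5) {
      const q = embed(queryImage);
      return database
        .map((row) => ({ url: row.url, score: cosineSimilarity(q, row.vector) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, topK);
    }

The `vector` field is the biometric template: it survives even if the source image is later taken down, which is presumably why deletion requests would need to cover the templates and not just the photos.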

So I would not be surprised if Clearview is required to delete all data which it cannot be sure is not from EU citizens, which I think would mean all of their data, given what they store and what they don't store. Obviously they won't comply, and an EU-wide arrest warrant might follow, which is kinda useless if the person doesn't enter the EU. I highly doubt they will try an international warrant.

So practically, it's unlikely that anything will change, except the operators of Clearview being officially listed as "potential" criminals (no arrest => no court => innocent until convicted).


GDPR is EU law and does not cover American corporations. It relies entirely on foreign cooperation [1] to extend that reach internationally, which so far has been untested and is not likely to get any real support in the near term given the current economic situation.

1. https://gdpr.eu/article-50-countries-outside-of-europe-coope...


There is no arrest warrant for Google, Facebook or ad network employees either, despite them violating the GDPR daily and at a very large scale.

Granted, the GDPR doesn't say anything about arresting offenders, but the companies should at least be investigated and fined, which isn't happening either.

The GDPR is a joke.



Exactly this. It's very much happening, even if you're not reading about it in the news every time.


If you search for Google, you will find one €50M fine. What is 4% of Google's revenue again? €50M is filed under cost of doing business, or the corruption budget if you are not that generous.


So you mean small companies won't get the $10M fines everyone here is so quick to claim?


That is not what I said. I only asserted that major and repeat offenders do not get the book thrown at them.


Your info seems a bit out of date:

> SWEDEN | 2020-03-11 | 7,000,000 | Google LLC | Art. 5 GDPR, Art. 6 GDPR, Art. 17 GDPR


What specific violation do you have in mind? Google has been fined $5B for various non-GDPR violations.


Have they paid any of it? The FCC has fined robodialers hundreds of millions of dollars but has only managed to collect a fraction of a percent.


The difference being jurisdiction. Google operates within the EU. Most robodialers targeting the US are not in the US.


As long as they process openly available data, I do not see a difference, in terms of the GDPR, from searching your name on the internet. They also responded to the request (maybe not fast enough). Deletion might be difficult to do in a way that is fully compliant with EU regulations. Whether it's ethical is another story...


Wrong.

PII may only be processed if you have explicit consent for the exact purpose you want to use it for.

There is no "open" personal data you may just use for anything.


> PII may only be processed if you have explicit consent for the exact purpose you want to use it for.

Not exactly. Consent is one of six lawful bases for processing PII. It's just that for advertising/tracking use, it's probably the only one you can use.


And biometrics (which would include images of faces) are a special category of data with stricter restrictions.


Can you back up your claim? It's true that the data is still personal, but a lower level of protection is applied, except for children, as I remember. There is a need for notification, but there are exceptions if it is not realistically feasible. Sorry, I'm on my phone with a child sleeping in my arm.

Here is an article I could quickly find in English:

https://iapp.org/news/a/publicly-available-data-under-gdpr-m...


Just saw that data used for biometric purposes is under special protection beyond typical PII. That seems to be the basis for the current investigations.


How is a Facebook profile openly available? Facebook probably gets PII via unreasonably broad blanket opt-outs, which is itself problematic; then it is shared with, or not kept safe from, third parties.


It's literally on a publicly viewable URL. Do you delete your local copy of every photo you view on Facebook?


Good thing I'm not relying on you for my legal questions. GDPR gives the user control over their own data, and each new use of the data requires specific consent be asked. So using PII on any person, no matter where you found it, requires explicit consent.



Maybe because no one actually thinks that there is a privacy interest in public photos of his/her face that he/she posts on the public internet?


A Facebook profile that you have to register, and is only accessible post-login, is not "public" by any reasonable definition.


Compiling profiles like this without consent is subject to massive fines under GDPR.

I find it extremely surprising that they would be responding to GDPR subject access requests, given that they appear to be ignoring the rest of it.


Not replying would get them in trouble even faster. So they probably hope to appease the people requesting their data.


For people in countries with no clear privacy laws: could I also request that Clearview show my data?

Has anyone tried it?


The author says: "If they have — and you’re a resident of California or a citizen of the EU — the company is legally obligated to give you your profile, too."

Sure, they're obligated to give you your profile per GDPR. But if you're within the GDPR's jurisdiction, they're obligated to get your consent BEFORE they collect personal information about you. If they haven't, they're liable for at least millions of euros of fines.


They are using copyrighted images. What if 1,000 copyright owners decide to sue them, asking $100,000 each?


One thing you should be very careful about is licensing. If you post photos on Facebook or Instagram, you automatically grant them a license to redistribute the photos and share them with others. And these "others" can include Clearview. Clearview could have a contract with Facebook which legally allows it to obtain and save those photos. You would still be the copyright owner, but due to the licensing, Clearview could legally store and use the photos.

About suing them if they did break copyright: I'm not sure about the US, but in Germany it wouldn't be that easy to actually sue them for such a high sum. You could argue that the company makes money by offering the search service, but there would have to be evidence that Clearview made that specific sum from your photo(s) alone, which is very unlikely.


They would have to prove that in a lawsuit first. Do they have similar agreements with the alumni magazine? The Python coders' meetup group? The personal blog?

Just one of those pictures would be enough for a lawsuit.

Of course, if someone were to sue, they would be heavily pushed to settle out of court. I suspect there's a lot more legal activity going on about things like this, but everyone that starts to make some noise is quickly silenced with a lump sum and a binding contract to shut up about it. If they don't, they're threatened with spending the next 5-10 years in court. Because if there's anything corporate lawyers are good at, it's stalling and making sure the suing party, especially if it's a random consumer, spends years and hundreds of thousands in court fees.

What we need is more rich people suing businesses. Or a massive public defense fund supporting the average joe's case.

But right now it's in the rich people's interests to support shady businesses, and if they don't they'll be offered massive financial incentives on the golf course.


At least if it is a photo in a news story or from a private homepage, I would believe they are in trouble. I doubt they have licensing agreements with every news source, and I am sure they do not have them for private homepages.


Given the "right to be forgotten", shouldn't he be able to request that all his data be deleted?


From the article:

> And remember that once you receive your data, you have the option to demand that Clearview delete it or amend it if you’d like them to do so.


Excerpt:

"What does a Clearview profile contain? Up until recently, it would have been almost impossible to find out. Companies like Clearview were not required to share their data, and could easily build massive databases of personal information in secret.

Thanks to two landmark pieces of legislation, though, that is changing. In 2018, the European Union began enforcing the General Data Protection Regulation (GDPR). And on January 1, 2020, an equivalent piece of legislation, the California Consumer Privacy Act (CCPA), went into effect in my home state.

Both GDPR and CCPA give consumers unprecedented access to the personal data that companies like Clearview gather about them. If a consumer submits a valid request, companies are required to provide their data to them. The penalties for noncompliance stretch into the tens of millions of dollars. Several other U.S. states are considering similar legislation, and a federal privacy law is expected in the next five years."


Send them a GDPR/CCPA deletion request now: https://yourdigitalrights.org/?company=clearview.ai


Private companies can do whatever immoral garbage they want. In exchange for access to the data, or to the product they developed unethically, the government slaps them on the wrist or does nothing, allowing things to get worse and worse and allowing these companies to assert ever more control over us.

Imagine the story when people find out that this or another company has started skimming porn sites with facial recognition, gained access to surveillance footage from Nest or Ring, or maybe even obtained access to state and federal DOT cameras and real-time feeds from body cameras!

Facial recognition is going to need some regulation, ASAP.


> starts gaining access to surveillance footage from Nest or Ring, or maybe even gets access to state and federal DOT cameras and real time feeds from body cameras!

https://www.engadget.com/2020-03-04-banjo-ai-utah-law-enforc...

> The agreement gives the company real-time access to state traffic cameras, CCTV and public safety cameras, 911 emergency systems, location data for state-owned vehicles and more. In exchange, Banjo promises to alert law enforcement to "anomalies," aka crimes, but the arrangement raises all kinds of red flags.

> Banjo relies on info scraped from social media, satellite imaging data and the real-time info from law enforcement. Banjo claims its "Live Time Intelligence" AI can identify crimes -- everything from kidnappings to shootings and "opioid events" -- as they happen.


Why are paywalled articles still allowed on the HN front page? And why are people still upvoting them?



Because some of us agree that not everything is free beer, and people who provide information also have bills to pay.


Same frustration here, but this[0] did the job for me.

[0]: https://github.com/iamadamdev/bypass-paywalls-chrome


They should be in prison, instead of making money on violating people's privacy.


Would you help and use Clearview if it were being used in your government's strategy against the coronavirus?

Would you volunteer your time to tag images with your friends and acquaintances, to help slow down the virus? To do otherwise would be immoral and lead to the death of thousands, right?

To those worried about that, it's just temporary. It would just last a few months and then you don't need to worry about it any more. This is a global war and we have to make sacrifices and take important actions.


Yuval Noah Harari, "The world after coronavirus"

https://www.ft.com/content/19d90308-6858-11ea-a3c9-1fe6fedcc...

Bruce Schneier, "Emergency Surveillance During COVID-19 Crisis"

https://www.schneier.com/blog/archives/2020/03/emergency_sur...

I'd have very grave misgivings.


Thank you for the links. Reading them reminds me of my own encounters with similar ideas.

I was in a startup accelerator near the end of last year, and many of the business ideas that some of my teams came up with were data-oriented: how data could be better gathered, or used for good purposes. Or for profit and control. For example, measuring people's location, driving speed and braking force to give discounts on car insurance. We found out that some companies already do this, and others are asking for it. And that was not even in an emergency context.

From another viewpoint, the more data we get, the more sensitive we will become to "crises", or we can define ever simpler things as "crises" anyway. And that will demand more data gathering. If we allow using data to stop the coronavirus, for example, we can declare the flu something to tackle, if only we had more information about people. Why not the common cold, too, after the flu?

I feel that it is inevitable.


Yes, I would. But only if assurances were made that all this work would be destroyed after the pandemic is over. I give my consent for one task, and after that task is over, the data should not be used. It's the same as people being allowed to kill in war but not after: although weapons are kept after wars, strict measures are in place to stop them from reaching unauthorized hands. These measures are not perfect, but they are really good and, in good faith, should be continuously improved.


"It would just last a few months and then you don't need to worry about it any more."

I heard about this before...


No.

This isn’t even a hard question.


What if everyone around you told you that by not using it you were contributing to the deaths of hundreds of thousands?

Edit: and what if your friends and family tag you and add your details instead? That way you don't need to actively support it.


Isn’t this part of the original question? No. The justification for mass surveillance and totalitarianism is always “it will make us safer in the short term.”

No one who knows me will be surprised to find out that I think it creates larger long term dangers or that safety isn’t my highest value in any case.

Response to the edit: I don’t know how I would react, because they didn’t have malicious intent, but they still wronged me. I would be angry with them, and it would definitely have a negative effect on our relationships.

I wouldn’t look down on anyone who wanted to tag their own photos.

And I would be willing to do geo tracking with something that I believed was actually temporary. Give me a dongle I can throw away when this is all over and I’ll carry it around.


Herbert Simon, Nobel laureate in economics and one of the fathers of AI, wrote one of the better treatments of the possible future of computer-based data systems in his 1977 essay "What Computers Mean for Man and Society". In it, he addresses concerns: "The privacy issue has been raised most insistently with respect to the creation and maintenance of longitudinal data files that assemble information about persons from a multitude of sources. Files of this kind would be highly valuable for many kinds of economic and social research, but they are bought at too high a price if they endanger human freedom or seriously enhance the opportunities of blackmailers. While such dangers should not be ignored, it should be noted that the lack of comprehensive data files has never been the limiting barrier to the suppression of human freedom. The Watergate criminals made extensive, if unskillful, use of electronics, but no computer played a role in their conspiracy. The Nazis operated with horrifying effectiveness and thoroughness without the benefits of any kind of mechanized data processing."

https://pdfs.semanticscholar.org/a9e7/33e25ee8f67d5e670b3b7d...

There is, of course, one slight problem with Simon's argument: The Nazis did make heavy use of mechanised data processing, provided and supported by IBM. Edwin Black documents this meticulously in his book IBM and the Holocaust:

https://ibmandtheholocaust.com

Whether or not it's possible to transact genocide at similar scale without computerised data records, it's quite clearly far easier to do so with them. Worse, with comprehensive records and rapid identification of any particular meddlesome priest, activist artist, or woman who was warned but nevertheless persisted, it's possible for such regimes, state or non-state, to dip in and retaliate with pinpoint effectiveness. Even the mere suggestion that this is possible can be extraordinarily chilling.

https://mastodon.cloud/@dredmorbius/103059230160200494



