A Danish tourist was killed on holiday, the killing was filmed (very graphic), and now the parents of the deceased are being sent the videos.
> She said she had reported every instance of the harassment to Facebook but in a number of cases it had left the video online. She also complained that Danish police had failed to identify the culprit.
> Simon Kollerup, 35, the Danish trade minister responsible for internet regulation, said Facebook’s “hopeless” response had underscored the need for more stringent EU rules on social media.
> Martin Ruby, 48, Facebook’s head of public policy for the Nordic and Benelux regions, said he was sorry “if we made the wrong calls in this case” but insisted that it had done its best to eradicate the execution clips. He said that the code for the videos was being continually tweaked to elude Facebook’s censorship algorithms. “The problem is that these evil people are sitting there and changing the format on them,” he told DR. “It’s a perpetual battle.”
I'm usually a free speech absolutist and privacy advocate, but whoever is sending that poor woman's parents videos of her torture and beheading deserves to be banned from the Internet forever. At minimum.
> Being against harassment doesn't make you any less a believer in free speech.
The validity of your statement depends heavily on your personal definition of free speech, and where you draw the line regarding acceptable censorship and persecution by the state.
Considering the spectrum of free speech includes absolutists who argue speech can only be free if it's free from any consequences from the state, there's a huge grey zone on where to draw the line.
I believe it was Chomsky who argued that what we mean by freedom of speech would be more appropriately defined as freedom of opinion. Harassment, calls to violence, inciting panic, and defamation are not opinions.
I think I can agree with that. Although defamation is probably a grey area. If I say, “the vice president is corrupt and unintelligent”, should I not be allowed to speak that opinion?
That’s an opinion though. Defamation has a very specific definition, often with a high bar. Defamation would be more like saying “the Vice President accepted a bribe of ten thousand dollars to insert an endorsement of terrorism into their speech on January 5, and I have film evidence to prove it” while explicitly knowing this is false, and saying it anyway just to hurt the Vice President.
Indeed. Personally I think we could solve a lot of our problems by offering users robust content filtering controls and teaching people to use them as a basic life skill.
Bullies don't stop when a few people ignore them. That just creates an echo chamber, and they get more bold.
Bullies stop when you punch them in the nose.
Offering people "content filtering controls" might work in a small population where bullies could get ignored completely after a short time, but IME it doesn't even work then. What does work is when the community stops them cold by banning them.
But not everyone is a bully, so the system needs to be more nuanced than that, with warnings and lesser punishments before the final ban.
Personal filters are a good thing because they allow people to go beyond the rules and filter out things that bother them personally, and even to stop the bullying for themselves while the community works out that the bully will not stop. But they're the first step, not the last.
It doesn't have to be that complicated. Blacklist certain words and phrases. In this case, the people are being sent videos of a murder. Silently block all media content from non-whitelisted users, or any video that falls within a certain range of lengths. On a larger scale, content filtering rules can get more complicated and can be collaborated upon and shared. If we can do it with advertisements we can do it with offensive content.
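To make that concrete, here is a minimal sketch of what user-controlled filter rules might look like (Python; the names and structure are hypothetical, not any platform's actual API):

    # Hypothetical sketch of per-user content filtering rules; illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class FilterRules:
        blocked_phrases: set = field(default_factory=set)    # word/phrase blacklist
        media_whitelist: set = field(default_factory=set)    # only these senders may send media
        blocked_video_lengths: tuple = (0, 0)                 # (min_s, max_s) range to drop

        def allows(self, message: dict) -> bool:
            text = message.get("text", "").lower()
            if any(phrase in text for phrase in self.blocked_phrases):
                return False
            if message.get("media") and message.get("sender") not in self.media_whitelist:
                return False
            length = message.get("video_length_s")
            lo, hi = self.blocked_video_lengths
            if length is not None and lo <= length <= hi:
                return False
            return True

    # Rule sets could be exported and shared, much like ad-block lists.
    rules = FilterRules(media_whitelist={"mum", "best_friend"},
                        blocked_video_lengths=(5, 600))
    rules.allows({"sender": "stranger", "media": True, "text": ""})  # -> False

The point isn't that these exact rules are right, only that this kind of per-user logic is cheap to offer and could be shared and refined the way ad-block filter lists are.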
I'm not sure any of those points is always true, and at the very least this would raise the bar slightly. Blocking on a naive hash wouldn't work, but there are more sophisticated video-matching algorithms. If they're good enough to be applied to DMCA-type content, this seems like a higher priority.
There's no personal definition, IMHO - free speech is the freedom of citizens to say anything, regardless of whether the state wants it heard or not.
Free speech isn't the idea that other people must have the opportunity to hear one's speech.
So sending emails or photos to people (ostensibly to harass) doesn't constitute free speech.
Yes, it's an awful and unacceptable situation. Different people have different ways of responding to harassment based on their risk acceptance and resilience: some people respond and fight back, some people hide, and some people seek help from others around them (the latter is, it seems to me, probably generally the healthiest approach, although I'm no expert).
What the parent post seems to be advocating for is that tech companies should automatically attempt to hide people from harm, and that they're best placed to do that. I think it's doubtful whether they'll be able and willing to implement that in an effective or accurate manner (partly because it requires putting significant time, money and staffing into it, it can never be completely automated, and attackers -- especially in individual or small cases -- evolve, as the representative mentions in the quote), and I don't think it respects or empowers victims long-term.
It's also key to focus on the perpetrators - hold them accountable and figure out the reasons why this happened in the first place.
Free speech is nothing to be absolute about. Things that were encouraged before can become absolutely abject later on - free speech in the public sphere must be regulated, and is in many jurisdictions (I'm French, I live in China - both are very similar in how they approach it in principle; China is more ideological in implementation, but France can go very far when it decides someone must stop talking).
I'm glad that as an absolutist, you managed to make a compromise, even if it's just for beheading. How can we convince you that rape, insults, religious conflict, apology for racism, or even treasonous discourse can also be regulated? What is so good about stupidity that it has to be exposed so freely to the public, or what is so special about it that it can't be distinguished from intelligence?
I don't think there is a conflict between a right for absolute speech (on a practical level) and being held responsible for direct and foreseeable harm without a mitigating reason.
I.e. nobody can shut your mouth, or stop you from speaking, but there can still be consequences if your speech is directly aimed at unjustifiable harassment.
For instance, simply making the video available on a web site would be protected speech (no restriction on that) - assuming it didn't conflict with any other law (copyright, etc.). But going out of one's way to push the speech to those it harms, and on services that prohibit it, creates a problem independent of the freedom to simply speak/publish.
Likewise, you can publish technical secrets you were given under a nondisclosure agreement, but you can be sued civilly and possibly prosecuted criminally (depending on circumstances) for the damage you have done to the intellectual property's value.
You can claim anything you want for some product, but expect to pay for it legally if you are defrauding people.
You can publish any code you want, source or object code, but expect to pay the consequences if you are publishing code that doesn't belong to you, or violating an open source license by only publishing in one form, without providing the required form, or without proper attribution.
Etc.
For all those cases, nobody limits your speech, but direct consequences of your speech can still result in enforcement action.
You don't have rights that protect you from hearing things you dislike, things you don't want to hear, or things you find offensive. But you absolutely have protections against repeated harassment.
Restraining order.
And in this case, it should be issued against any Does that engage in the activity.
Rights are best when fundamental and few. Even then it takes a lot of thought to balance between them.
If you start creating a long list of rights then the complexity of balancing all the areas of contradiction will be so many that inevitably an elite will simply use the plethora of rights/contradictions to rationalize getting their way.
In most important things, we need to keep boiling down the principles to as few as possible to explain/perform as needed.
> What is so good about stupidity that it has to be exposed so freely to the public, or what is so special about it that it can't be distinguished from intelligence?
The problem is: who gets to define “stupidity” and “intelligence” and the exact limits of what speech is allowed or not? We already have provable examples of people who were considered some of society’s biggest idiots while alive but ended up having a deep and lasting impact on society in the long term (Socrates is a notable example). It’s also guaranteed that this power would be abused to prevent legitimate political discourse almost immediately after it was granted. There is no such thing as perfect, consistent enforcement. It’s always subject to the human biases of today.
Based on your concerning desire for that poster to indulge the endless slippery slope of censorship, which you (wrongly) may believe is in your favor, perhaps they should consider retracting said post and reaffirming said absolute-ness.
The key distinction is not that China is more ideological (it's just a different ideology), but rather whether there is a democratic and rule-based foundation to the regulation of free speech. Both the democracy and the application of the rules may be flawed at times, but it's the consent (through democratic politics) which makes the situations very different.
Facebook butchered (pun intended) a case where cartel sicarios executed a Mexican national, but before they finished him off they tortured his Facebook password out of him.
They proceeded to send members of his extended family, his friends, and family members' friends taunting messages from his account, which by this time had his profile picture replaced with a photo of his decapitated body with his head and genitals essentially co-located in an obscene manner. The other pics on the account (background/header image, photos of the decedent, etc.) were replaced with photos of the victim in various stages of torture & dismemberment. I saw them and it was horrific (and I'm pretty jaded).
FB slowly responded and locked the account, except they left his profile picture alone (beheaded body) and didn't delete the messages that had been sent. It was just a trainwreck, but who cares? They're just Mexicans and they don't know any politicians so... /s
Atrocious behavior, and I saw it for myself on FB after seeing screencaps so I know it wasn't fabricated. I hope they can find a way to better handle this stuff. It makes revenge porn seem mild.
>Martin Ruby, 48, Facebook’s head of public policy for the Nordic and Benelux regions, said he was sorry “if we made the wrong calls in this case”
Ah, the good old "we apologize because we're having to, but we did nothing wrong" non-apology.
>“The problem is that these evil people are sitting there and changing the format on them,” he told DR. “It’s a perpetual battle.”
Sooo all they need to do is run it through handbrake/ffmpeg again and the scanner stops detecting that video? Oh no. Does that mean FB needs to buy/license Apple's CSAM scanning technology to improve their detection to 100%?
The answer to this is that we need to log on using our biomedical-crypto-citizen-id, just to go online.
Everything we do _needs_ to be fully tracked. We need to serve our governments all the information about everyone, so that they can catch the baddies* - it's essential! It really is true - someone did something bad online, so that justifies the loss of liberty for everyone! Get with it. Roll over. Right?!?
* exceptions apply for key governmental employees.
You kid, but the dystopia of 1984 is probably pretty close to the utopia where everyone's rights are protected while also being held accountable by a benevolent technocracy. I just understand that the latter won't last a day before it's abused and devolves into the former.
So what stops someone from uploading an image that isn't NCII as an attempt to censor an image or get whoever shared the image in trouble? I imagine there needs to be human oversight in some way to stop abuse. But does that mean that in order to stop the spread of your personal NCII, you need to proactively share those exact same photos you are worried about getting shared?
You could upload part of the picture (with the nudity, or everything personally identifying, cropped out) and see if it matches part of any other picture online.
This frequently happens with GDPR data/delete requests. They will ask you to verify all the shadow data they collected about you is indeed correct before giving you a summary or promise to delete it.
PayPal won't delete an account I never knowingly consented to having created (I'm sure there were some hidden terms hiding behind some dark UX pattern) unless I prove my identity with information that isn't already recorded on the account. At least, not visible to me.
All that's there is a name and a bank account link. I told them 'I am Oliver Ford' and that I'm happy to provide a shared secret by way of a reference attached to a penny transfer to that linked account; but no - I must provide a copy of my passport and proof of address (which they don't know, and which I can edit on the site anyway), obviously.
Google does the same with phone numbers. They ask for a phone number as a form of verification, even though you never gave them one in the first place. It does not make any sense.
IMO, if one can log into an account, they should have the right to request a delete/download of all associated data. Considering PII could very much be involved, self-service would be even better.
https://stopncii.org/how-it-works/ explains that "Your content will not be uploaded, it will remain on your device", and "Participating companies will look for matches to the hash and remove any matches within their system(s) if it violates their intimate image abuse policy."
In principle, both promises can be kept, with humans checking the matches (if any) against their rules. (In practice, I have no idea how it will work out.)
Yup, you got it, the content itself will remain only on the device, the hashing is done in-browser, and the only part of the original content that makes it into the system is the hashes. Once a platform that is part of the program downloads those hashes and is able to match content, you need to apply some amount of verification. It’s on the participating companies themselves to review the content that matches the hash, to see if it actually violates their policies on NCII.
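For the curious, the split described above looks roughly like this (a toy sketch in Python with made-up function names, not the actual StopNCII or platform code):

    import hashlib

    def hash_locally(path: str) -> str:
        # Runs on the submitter's own device/browser; the file itself never leaves it.
        # (Videos are hashed with MD5 per the FAQ; photos use PDQ instead.)
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    def submit_hashes(hashes: list) -> None:
        # Only the hashes are sent on to the hash bank.
        ...

    def on_upload(platform_hash_bank: set, uploaded_bytes: bytes) -> str:
        # Platform side: a match doesn't auto-delete anything, it queues the upload
        # for human review against that platform's own NCII policy.
        if hashlib.md5(uploaded_bytes).hexdigest() in platform_hash_bank:
            return "queue_for_review"
        return "allow"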
I thought the same thing. I wonder if there could be a way to have you submit a posed picture (to verify that it was taken live) and then use facial recognition to verify your face is in the content. It would probably be hard to do all that on device. It would also be a lot of hurdles to place in front of users, the vast majority of whom are just legitimate victims.
But if they don’t do something like that, I could see submitting all the Pepe the Frog memes in an attempt to stop the proliferation of material that FB probably should have been stopping anyway.
It’s a real challenge - you can try and do more pre-processing on a submitter’s device to try and avoid mistakes or malicious use of the system. The big limitations are:
1. How much processing time can you put on what might be a mobile phone - it’s using MD5 for Videos, but StopNCII did also try other things that were just too slow for phones.
2. StopNCII chose to do verification on platforms after a match rather than do it up front to prioritize the privacy of the user. This decision came after a lot of feedback from victims and experts who represent them. There are all sorts of techniques you can use to process server side (thus saving the person’s phone’s CPU), but it deliberately doesn’t retain enough context to make them viable.
Facebook doesn't allow nudity anyway. The system could just serve to prioritize complaints and they wouldn't need to verify the identity of the person in the photo.
I clicked on some random reddit user a few days ago, and his top comments were on some thread with a bunch of nude images of a person whose accounts were hacked. His comments were gloating about the thread being a top google hit on her name and that people at her university and prospective employers would likely find the images.
I reported the posts and the thread and reddit sent back a form letter response: "Thanks for submitting a report to the Reddit admin team. After investigating, we’ve found that the reported content doesn’t violate Reddit’s Content Policy."
The absentee management of these platforms has turned them into an attractive nuisance. Getting more reports won't address the underlying issues.
If it uses a hash of an image, does that mean an edited image (eg cropped or resized) wouldn't be detected? Or is there a way to extend a single hash to multiple image variants? I imagine the answer is no, but I would also like to be optimistic and think even detecting the original version would stop a lot of NCII sharing (ie it's not a perfect solution, but still helpful).
Here's an article on how a few different image hashing algorithms work https://content-blockchain.org/research/testing-different-im.... I think in general they still function with resizing (and a few other image manipulations), but I don't think they handle cropping.
Cropping does seem like a harder problem, in the sense that there's no "clever" (high compute efficiency) solution available. The only thing that comes to mind is the brute-force approach of comparing normalized small tiles instead of whole images, which would involve exponentially more compute effort. Interestingly, the article you cite mentions rotation and skewing, which make both clever and brute-force approaches even more expensive and/or less effective. Certainly seems like a target-rich environment for research.
> does that mean an edited image (eg cropped or resized) wouldn't be detected?
Depends on the hash, and I also wouldn't be surprised if "fingerprint" turned out to be a more accurate term. This is a PR document, after all. ;) You can normalize things like resolution and color balance, and create a hash/fingerprint that's a concatenation of hashes for portions of the image, which would allow some degree of useful comparison despite many kinds of changes. Similar techniques have been used to detect copyright violations as well, so there are technological precedents. The real question is implementation quality.
While this article is on sound fingerprinting, it does talk about image fingerprinting: the algorithm finds "significant" points in the image and can still work because it looks at their positions relative to each other, which will still hold after scaling or rotating. And cropping is likely to remove content lacking such features, so it can handle that as well.
Yup, PDQ (https://github.com/facebook/ThreatExchange/tree/main/pdq) and MD5 (https://stopncii.org/faq/). MD5 is a cryptographic hash, which means that even a single bit changed gives you an entirely different hash. This usually precludes attacks that try to deliberately generate collisions, but it also means it's harder to match “benign” changes - for example, many platforms re-encode videos as part of the upload process (reducing resolution, changing formats, etc.). However, many platforms skip re-encoding, so MD5 can have better results than you might think at first glance. The more bits you take from the original content, the harder you can make it to bypass, but the more you might have to worry about capturing enough that you compromise the privacy of the original submitter. For the program, StopNCII picked a set of tradeoffs between those two tensions, and is keeping a close eye on how effective it is, and will iterate if need be.
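A toy illustration of that fragility (nothing to do with the real pipeline): a byte-identical re-upload still matches the bank, while flipping even a single bit, as any re-encode would, gives a completely unrelated digest.

    import hashlib

    original = b"pretend these are video bytes " * 1000
    reupload = bytes(original)                 # byte-identical copy, no re-encode
    tweaked = bytearray(original)
    tweaked[0] ^= 0x01                         # flip one bit

    print(hashlib.md5(original).hexdigest())
    print(hashlib.md5(reupload).hexdigest())        # same digest: still matches
    print(hashlib.md5(bytes(tweaked)).hexdigest())  # entirely different digest: no match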
As the images never leave the device (and you can't build an image from the hash) then surely they are dependent on the victim certifying that the image is a picture of them.
They have just made a tool to ban 20 arbitrary images at a time from social media that is gated by some questions that anyone can lie to.
I wonder if you could interact with the API directly if you were feeling evil
Real revenge porn uploaders could just change a pixel / re-encode / tint the image, like people do to evade flesh-tone detection.
Indeed. Reaction so far:
Apple tries to fix a real problem while botching its corporate comms badly — the Internet has a meltdown over it.
Facebook does the same, times 10 and with way more options to trick the system — same old.
The key difference is where the processing is happening. With Facebook, you can delete your account - with Apple's stuff, you'd have to get rid of potentially tens of thousands of dollars worth of equipment to escape the spying.
A perceptual hash is immune to the sort of manipulation you mention. One method I’ve implemented myself is recording changes in brightness along a path in the image. Inverting colors would work, but would also make the image somewhat worthless. Flipping would also work, but is usually protected against by adding the flipped hash as well.
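That brightness-path idea is roughly what a difference hash ("dHash") does. A minimal sketch, assuming Pillow is installed (not the parent's actual implementation):

    from PIL import Image   # pip install Pillow

    def dhash(path: str, size: int = 8) -> int:
        # Shrink and grayscale, then record whether brightness rises or falls
        # between each pair of horizontally adjacent pixels.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (left > right)
        return bits   # 64-bit fingerprint

    def hamming(a: int, b: int) -> int:
        # Small distance => likely the same image despite re-encoding or a mild tint.
        return bin(a ^ b).count("1")

Re-encoding or slight color shifts rarely change which neighbour is brighter, so the fingerprint survives; inverting the colors flips every comparison, which is exactly the evasion mentioned above.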
True. I am not sure exactly what hash is being used (though others seem to indicate md5)
I have heard from colleagues in the past that consumers of illegal pornography used to (or possibly still do) commonly apply a negative filter or hue shift to images to avoid hash matches and flesh tone detection. This helps when combined with other stuff if someone scans their disk or they get raided and have too much stuff for law enforcement to look through properly. They then set up their screen or display settings to counteract the filter when viewing it.
Thankfully I have no experience with illegal pornography and definitely don't want to gain any, so hopefully I will never be able to confirm that firsthand.
It’s somewhere in the FAQ at https://stopncii.org/faq/, but it’s PDQ for photos (https://github.com/facebook/ThreatExchange/tree/main/pdq) and MD5 for videos. PDQ is resistant to some modifications (it focuses on the ones that come from regular usage, such as changing the format from gif to jpg, or a filter changing colors or brightness), but it’s not as resistant to modifications as you could get by training dedicated classifiers or other approaches that you might take with the original media or by storing more context, which StopNCII chose not to do.
Could this be used by someone to stop sharing any image or video of themselves they don’t like? Or does the tech enforce in some way that only NCII content is eligible? If so I’m curious how that works.
Another commenter (“ahahahahah”) described it as the equivalent of a “report image” button, which I thought was apt - it preemptively reports on all the platforms that are participating with StopNCII. It comes down to each platform: Meta has specific rules against NCII (https://transparency.fb.com/policies/community-standards/sex...) - we prioritize potential NCII content as high severity and requiring immediate review to verify. However, just like hitting the report button on each of those platforms, they might have different rules.
It only sends hashes of the content, so I don't know how they would tell, unless they review what images are being flagged by these hashes after they're included.
I guess it'll be a "Facebook censorship API". Beyonce wants an image to disappear? If it's not on FB, IG, WhatsApp, Pinterest, it's pretty good. Prince Andrew doesn't want a pic of him with his hands around a minor and Ghislaine Maxwell nearby? Call Zuck...
There's a gray area where someone consented regrettably, say as a brief stint on OnlyFans until they realized how pictures travel. I wonder how Meta will rule on those
Posting on OF does imply consent to be seen naked. But since it doesn’t come with a license allowing redistribution, reposting those photos is never allowed. No gray area whatsoever.
I think that makes it an even more fascinating gray area, actually. It's exactly the important case where the hair needs to be split.
Using a technology like this for removing leaked private photos is one thing. Using it to enforce simple copyright is a whole other matter. If this were being deployed as a copyright enforcement tool, absolutely everyone here would be livid.
So, the question then is: if it's used to censor a nude image, but that image had originally been uploaded with consent, is that a matter of privacy, or copyright? And if it's merely a matter of copyright, is it appropriate to use this technology in that context?
It sounds like a contract between the sender and the receiver of the images, much like an NDA.
Edit: the problem there is, if something under NDA finds its way into public view, and there are several different parties that could have leaked it (i.e. the leaker is not identified), what actions can then be taken? It's pretty much unenforceable at that point against anyone who didn't sign the agreement. Hence the need for this new system, I guess.
Sorry, I was referring to the case of publicly published images that the author/subject no longer wishes to be public. E.g. a person posts nude images on OnlyFans, and later regrets it. I think the case where someone transmits such images in private, and then they are leaked, is exactly what this tech is designed for.
Ownership rights must factor in somewhere right? If I pay an artist for a painting, and they deliver it to me and take my payment I can then sell that painting to someone else, display it on the walls of my home, show it to my friends, or even give it away. If I pay someone to send me a digital photograph or even a video shouldn't I have those same options? The whole thing seems like a weird grey mine field of legal and personal rights.
If you buy a painting from an artist, or a print from a photographer, can you turn around and scan that art and sell hundreds of copies? Usually not, you typically have purchased an individual copy of a work but not to make your own copies and redistribute.
Copyright is itself filled with massive amounts of grey areas and is so overbroad that most people are regularly violating it just by living their daily lives. Some uses very clearly violate copyright, but in others it's not clear. If I bought a painting from a local artist and hung it on my wall, I wouldn't expect it to be impossible to post a picture of me standing in my living room on social media just because that painting happens to be hanging above my living room fireplace and I don't have a license to make or distribute copies.
I don't think any automated process can determine what is okay or not. This isn't a problem I think we'll have easy answers for in tech and care must be taken to make sure that attempting to fix it using technical measures doesn't invite other abuses or infringe on other rights. Blindly accepting hashes and blocking them seems like it could make things very easy for social media companies but at a cost to the rest of us.
Sure, posting a photo of you in your living room with the painting in the background isn't reproducing the original work. It'd be hard to argue that it's infringing. But let's get back to the original topic: OnlyFans' terms of use almost certainly does not give users the right to redistribute purchased content. This would destroy their entire business model: one subscriber could just repost everything.
> Sure, posting a photo of you in your living room with the painting in the background isn't reproducing the original work.
Most people wouldn't think so, but it technically is. If an algorithm spots the painting in the background and matches it to a known hash, it could block the post automatically. If it didn't, then anyone could print out revenge porn, tape it to a wall, photograph it, and Facebook's system would never remove it. The point of revenge porn isn't to reproduce and sell perfect 1:1 copies of a copyrighted work, but to harass someone, and less-than-perfect copies will do that job very well.
It's not that I think any system must be perfect or else it's useless. But others have already pointed out how easy this system could be to game to remove all kinds of other content, YouTube's efforts with their similar "Content ID" system show us that these automated systems will have negative impacts on many innocent people, and these systems are becoming increasingly common (for example, Apple scanning your personal devices to generate hashes to match against photos of child abuse), so it really is important that companies get this right and protect their users. Especially for a company like Facebook, which is doing it so that it can avoid paying for the kind of human oversight that is actually needed to combat the problem. This seems like a poorly thought out scheme that could have major consequences for users but lets Facebook off the hook because they're doing "something".
> If you buy a painting from an artist, or a print from a photographer, can you turn around and scan that art and sell hundreds of copies? Usually not, you typically have purchased an individual copy of a work but not to make your own copies and redistribute.
Depends on specific details and jurisdiction. Here, if someone commissions a work from an artist (instead of buying a copy of their existing work), then transfer of copyright ownership to the buyer is implicitly part of the agreement.
Depends on legislation and degree. I think here, at least, you have the right to buy a print, make a photocopy of it, and give a limited number of copies to your family and friends...
Morally, it would make sense for content creators (the creator and owner of an image, say an Onlyfans model who took a selfie) to be able to flag a picture for removal if they decide they don't want it to be shared anymore.
Technologically that would be very difficult to implement and guarantee, but I'd imagine they would rule in favor of the creator in that case.
I don't think a user should be banned or penalized for sharing said image, since they may be unaware the model is trying to retract it. But there's no reason not to try to respect the creator's wishes.
I wish there was a methodology whereby a person could remove their personal data from fb given that they were unaware of what they were consenting to and that it changed over time.
Consent during a sex act and consent to an image that you created continuing to exist after you knowingly release it into the wild are two different things. (Not including revenge porn or other nonconsensual recordings, obviously).
An actor who performs in an embarrassingly bad movie might wish they could refuse to let their work be used after the fact, but they already signed away their right to retract their work.
An OnlyFans model is also the creator of the content though, so they might have more leverage depending on OF's legal terms. If they still own the rights to all media they create and distribute through OF, then they could petition for removal based on DMCA or something (I'm not a lawyer but it sounds plausible).
Morally, I hope FB can help regretful adult models remove content of themselves if they don't want it to exist anymore. But practically, that will be very difficult to guarantee.
During the act, yes. But I think the example here is basically: In your 20s you did some jobs as a porn star. 5 years later you want the videos scrubbed from the web. Is this within your rights?
Depends on who owns the rights. Most UGC sites' terms give them perpetual rights to distribute content as part of their services. Some take ownership of the IP. Most also let you remove your own content.
If the performer owns the IP then they are completely within their rights to remove it where others infringe them. Whether that’s to protect their own commercial operation or because they don’t want those images in public anymore.
The platform can stop hosting some content immediately once consent is withdrawn, but it can't prevent that content from having already escaped.
It's probably not technically a "gray area" since the platform can't ensure the content doesn't escape. But certainly, content creators should be aware that withdrawing consent doesn't mean their content is suddenly secured.
Any time? Can you withdraw consent after the fact and claim rape? If you are going to use categorical language, please be more careful about the language you use to describe a black and white world.
Hey everyone! I’m a software engineer at Meta, and I helped work on the program that the newsroom post is about. Sorry I’m late, I only just found the thread! I might be able to answer some questions about the technology - many of the building blocks are open source, like the PDQ photo hashing algorithm (https://github.com/facebook/ThreatExchange/tree/main/pdq) mentioned in FAQ (https://stopncii.org/faq/). I might not be able to answer every question, but I’ll do my best!
> "Once you have created your hash and submitted it to the StopNCII.org bank it will be sent out to participating companies’ platforms. If someone tries to upload a matching image, the platforms will review the content to check if it violates their policies and take action accordingly."
So what stops somebody from uploading a benign image? Say, an image of the Statue of Liberty?
Will the platform propagate the hashes? Will it proceed to flag similar images found on the devices of people around the world?
Nothing. But also nothing prevents a user from manually reporting an image of the Statue of Liberty as a violation of such a policy. At best, this could be used to help the user automate such reports. Such abuse (that of users reporting benign images) is far from the most difficult abuse problems facing these platforms, though it does have the effect of making fighting "real" abuse harder.
I like the comparison of the program to hitting “report image” on a platform. The advantages of StopNCII are that you can report proactively, and report to all the participating platforms at the same time.
Misuse of the StopNCII platform is something that we (Meta/Facebook), UK Revenge Porn Helpline and the full StopNCII team discussed a lot. Once you have confirmed that a piece of content violates your platform’s policy, it’s easier to find other instances and filter out false-positives, because now you can use your platform-specific ML, or in-house photo detection algorithms.
> "Misuse of the StopNCII platform is something that we (Meta/Facebook) discussed a lot. Once you have confirmed that a piece of content violates your platform’s policy, it’s easier to find other instances and filter out false-po"
Okay, and what was the conclusion then? Can I use the service to take down arbitrary content online that has nothing to do with the subject at hand? Can I just incriminate other devices, users and accounts that store innocent data and get it deleted without anybody's consent?
I sometimes think that we might need to approach this from the opposite direction. Declare a national "post your privates" day where we all post to all our social media pictures of ourselves nude and/or engaging in sexual activity. Make this a regular event.
Try to make it something everyone participates in. Fat people. Skinny people. People who climb on rocks. Tough people. Sissy people. Even people with chicken pox. Elderly people. Kids (but just nudes, nothing sexual). Singles. Couples. Whole families.
Outside of the "post your junk" days social media would still ban such material.
The idea is to make it so that if someone then does post intimate pictures of you without your permission the impact would be about as bad as someone posting an unauthorized picture of you having lunch.
That wouldn't completely defang such photos of course, because sometimes the harm from the photo is not that it shows your private parts or shows you having sex but rather who it shows you with when you were doing it. A photo of Juliet with Romeo would have caused her trouble even if they were fully clothed and just talking. But for the cases where what makes the images damaging is the nudity or sex itself, normalizing this could help.
Is there any benefit to legalized pornography that outweighs the horror of things like this?
Does the amount of bureaucratic scaffolding and invasive surveillance by corporate and government actors required to police the boundary of consensual and non-consensual pornography leave you substantially more free than just punishing platforms which permit the dissemination of pornographic images?
For those of you who are confused, this is about revenge porn. Sometimes unfortunately referred to as "Nonconsensual Intimate Images".
While we're at it, it's "child porn", not "CSAM".
These weird jargon names aren't intrinsically less offensive. They don't protect anybody from anything. They certainly don't address any actual problem. They're not necessarily more technically correct, and in fact the word "intimate" in this particular one brings in a ton of baggage that basically guarantees it will be technically wrong much more often than the original, intuitive phrase.
They're just more confusing. They basically exist so that people can feel superior for using them. People who do use them are acting badly and should feel bad.
I get what they're gesturing at, but I find a stronger emotional reaction against child porn than CSAM and it never once occurred to me that anyone thought this term implied it was in any way consensual or non-horrific.
It’s certainly a more familiar term for many of us, but I prefer the new terms because “porn” has pleasurable connotations for most people. Using that term focuses on the abusers’ pleasure rather than the victims’ suffering.
I think this is especially important at older ages: if you remember things like https://en.wikipedia.org/wiki/2014_celebrity_nude_photo_leak there were entirely too many men who felt comfortable making public statements about how great it was (I remember seeing some open source developers tweeting about this like it was a gift to the world). That won’t happen for young children but it definitely does for teenagers, so I’m fine with changing the language to focus on the horrific experience.
Except if everybody just uses CSAM instead of "Child Sexual Abuse Material", as they are wont to do because the full words are too cumbersome, you lose all negative connotations whatsoever.
I find the discussion completely ridiculous. I don't think anybody who isn't a pedophile ever thought child porn could have any positive connotations.
I’m just talking about the term, not globally surveying laws. It seems reasonable to avoid a term to describe abuse which is also used for media which a large fraction of adults have enjoyed.
A nude isn't automatically "intimate" either. Nor is nudity the only aspect that might make an image "intimate", or even the most important one. Nor does "intimacy" capture anything remotely like all of the reasons the material might be painful or embarrassing for somebody. In fact, if I were looking for something vaguely silly to be offended by, I'd suggest that it was a bad idea to tell the subjects of those images what they should consider to be intimate or why they should feel offended by the images being shared... as the use of "intimate" tends to do.
Oh, and "images" unreasonably limits the scope of the media that might be problems.
Not all of the images are shared nonconsensually, either, at least not initially. And there are all kinds of forms and layers of consent. So, if you want to be all technically correct, you probably shouldn't bring that in either.
In other words, like all terminology, "NCII" is imprecise. In fact, I think it's a true step down from "revenge porn", even if you ignore the fact that you shouldn't change terminology for trivial reasons to begin with.
It is NOT an improvement. It's people arguing over minutiae so they can feel informed, feel like they're helping, and get a rush out of "correcting" everybody else.
I was in court when a defendant was brought up. He'd broken up with his girlfriend, but had her nudes. He knew another guy who had a serious crush on his ex-gf so he sent him a message and offered to sell him the nudes. Crushman called the police, bless him.
The word "pornography" puts the focus on the person consuming it, as opposed to "nonconsensual" or "abuse" which puts the focus, rightly in my eyes, on the people who did not consent to having their images either taken or made public.
Agree to disagree, to me this seems like the euphemism treadmill paired with academic/professional lingo seeping into the public. Still, I'm not going to presume that someone knows more or has a more valid opinion on something because they use a fancy new acronym. And I severely doubt that anyone's mind will be swayed by an acronym either. Fortunately on both of these issues, I doubt many people need to be convinced.
> to me this seems like the euphemism treadmill paired with academic/professional lingo seeping into the public.
I agree the euphemism treadmill is a thing, but based on events here in the U.S. over the last several years I think we could use much more "academic/professional lingo seeping into the public" if by 'lingo' we mean impartial, reasoned takes on things.
No, a bunch of people needed a pretext to come up with some jargon so that they could feel special, so, among other things, they decided to adopt the unsupported and frankly ridiculous ideas that (a) the word "pornography" put the focus on the consumer in any certain way, (b) any such "focusing" was strong enough to be important or noticeable, and (c) anybody who wasn't desperately searching for something to complain about would ever care.
Another excuse they used was that the existing terms were somehow imprecise... and that they'd be able to come up with new terms that were (1) not equally imprecise and (2) somehow likely to defy the universal drift present in all human language, so that they would not end up with completely new connotations 5 minutes after first being used publicly, and completely new denotations shortly after that.
So they tried to rename things that had good enough names. Names that were searchable. Names that were meaningful both to people new to the issues and to people familiar with the issues. Not because it mattered, but because it made them feel important.
And CSAM is not really appropriately named either. For example, I am under 18 and have nudes of myself on my phone, but there was nobody else involved and no abuse, yet it would still be lumped together as CSAM.
There are some very good reasons for adopting technical terms for concepts like this.
I would imagine it's generally easier for academics to book venues for conferences when the title is 'CSAM Seminar' than if it's called a 'Child Pornography Convention'
Similarly, when they are looking up publications, it's probably more fruitful, and less prone to terms-of-academic-use-violations for them to google 'NCII' than 'revenge porn'.
It's a little like how people tend to refer to themselves as 'working in pharmaceutical sales' rather than 'dealing in drugs'.
(Note, my original comment has been edited and the entire contents have been changed)
Revenge porn implies, well, revenge. This covers non-revenge related sharing.
That being said, I would be incredibly appreciative if we could keep these defined terms less than 10 syllables. Not sure how they expect people to adopt the term when the alternative, "revenge porn", is so easy to say.
My only gripe with the term CSAM is that it looks like an acronym for a missile system at first glance.
Prescriptivism rarely if ever creates useful linguistic evolution. The concepts have to change before you can stick new labels on them, or at least at the same time. And in this case what's being prescribed does not in fact tighten up any boundaries or "carve reality at the joints" any better than what it's trying to replace.
I once installed an operating system from a CD-ROM which I later learned through an Internet forum that I could have downloaded from a mirror FTP site and loaded as an .iso into the flash memory of a USB drive for LiveUSB testing of the Linux distro, a new flavor of Ubuntu.
Out of the 52 words in this last sentence, 22 of them were new or adaptations, commonly through prescriptivism, and a common enough list of words that one can find each of them commonly used on the forum you're reading.
I conclude that your assertion that "prescriptivism rarely creates useful linguistic evolution" is wrong. While I doubt Facebook will shut down to honor the "not sharing intimate non-consensual photos" policy (as it should... as it shares data including photos without informed consent [EULA doesn't count in my personal view]), they aren't wrong to call out that the common terminology doesn't work for setting a policy, as it is too vague.
Porn is created with the intention of titillating. This is probably the case for the child stuff, but "revenge porn" is intended for revenge. It's not disseminated for the purpose of arousing consumers, but of embarrassing the subject.
Would it not be easy to stop the spread of intimate images through your platform?
If you have a platform for "intimate" images, maybe have a record on file that shows everyone is of age and consents. If an image's or video's provenance cannot be confirmed, you don't put it up for consumption. There is no shortage of "intimate" images, and it would not present a hardship or a barrier to entry for those who want to produce or consume the content. Please, explain how it is ethical to do otherwise.