I feel like a lot of the concerns around this are already covered by image and likeness restrictions. They exist for the exact same reason - people don't want their likeness associated with something they don't consent to. They don't want their image tarnished or false information being published. It's just that the modern internet has gotten us so used to completely disregarding this that people forgot it was something that affects others.
I don't think local models should ever be restricted, in the same way I don't think our imaginations should be restricted. But when it comes to non-private use, sharing, selling, harassment, those are all bad for their own obvious reasons.
It's a new problem, but AI is not to blame - bad people who want to stalk, harass, and completely disrespect others are the problem. Blame the human, not the tool. A tool that gives people new ways to harm others isn't at fault; the person doing the harm is.
How is it a “new problem” when painting has existed for a thousand years and photo-realistic painting has existed for at least 100?
The idea that someone, be it a celebrity or a non-celebrity, should have a concern that someone might paint them without their consent is simply mind-bogglingly ridiculous. As if anyone can have a say in what I choose to paint (I can paint in a photo-realistic manner).
If I want to paint Halle Berry having sex with Bill Cosby I may and no human on the planet should have a say in telling me I cannot. Not Halle, not Bill and not you. To imply that someone should be able to stop me from making any painting I wish is to subscribe to the most dystopian of world-views.
And yes, my mind has been “trained” on various photos I’ve seen of Bill Cosby over my lifetime. He doesn’t somehow get to own my creative works just because I trained my mind on photos of him or tv shows.
Replacing the paint brush with a piece of software I write is not going to change this logic.
Actors can trademark their likeness and defamation is already a thing. If you try to sell art or photography of other people there is already settled law relating to that as well.
AI just makes these disputes more common as the barrier to creating unauthorized works is lowered.
This is an oversimplification. You absolutely can make and sell art depicting other people. Photographs in public, even if people are identifiable and don't consent to being photographed, can be sold [1]. Likewise, paintings depicting real people can be sold, even if they depict people in less than flattering scenarios, for example [2].
Defamation is a non-issue if people are transparent that it's a fictional work.
Protections on one's image are actually quite narrow, mostly limited to advertisements and non-consensual photographs taken in private.
> How is it a “new problem” when painting has existed for a thousand years and photo-realistic painting has existed for at least 100?
The effort involved matters. Machine guns present societal challenges slingshots do not, despite them being essentially the same "throw an object at someone fast" technology.
They didn’t change the morality or legality of hurling a blunt object at one’s head resulting in their death. Using a slingshot in that manner was just as illegal and morally reprehensible as pulling the trigger on a machine gun pointed at someone’s head.
I’m focusing on the notion of legality and ethics.
It does change the morality and legality of providing the tool, though. Give a slingshot to a kid and no one will blink an eye; give them a machine gun and you're likely to meet a bit more resistance... even if they never shoot it at someone.
So are we in agreement that the author of these AI tools should be morally and legally free to make synthesized videos of Halle Berry having sex with Bill Cosby?
Effort is over-rated and with "mere" creativity and taste you can get very satisfying results by cutting and pasting fragments of magazines (or fragments of vanilla internet photos.) Halle and Bill included.
Effort is the core of the problem, as it affects speed and access. Most of the tech industry is built on scaling things and expanding access by making them cheaper.
A government could pay 500 people to watch 500 cameras and look for someone, but cheap cameras and cheap storage to save weeks of footage and AI facial recognition raise societal problems even though by your definition no new capabilities were created.
Effort is the core of SOME problems - like you describe - and not the core of others. There is an audience for personality porn. It's probably a medium audience for "real photos and videos" smuggled or hacked out. It's probably a small audience for fan fic style. I doubt it will keep many artists fed, with generative AI or not. There is a tiny market for revenge porn - and it's already highly illegal. Does it change things that anyone can now create revenge porn out of the blue? Yes, a little - you still need someone to want to use revenge porn. Does it shake the foundation of our civilization? No.
This is a problem that was raised at the beginning of anonymous remailers. Some people were very upset to be receiving death threats "enabled by that remailer" and seemed totally unconcerned that someone close to them was mad enough to send death threats. Talk about misplaced concern.
There is a big market for high-bandwidth porn: videos and streams. And for now it seems to crave volume and freshness. Even then, for now that market seems to crave humanness, real actual performers - all the better if you can message them. So even there, not much threat - for now.
Everyone can look at that and know it's a painting, not a real image. Most "photorealistic" portraiture still looks visibly like a painting. Paintings can't move like a video can. The issue is misleading people with believable imagery. The issue is defamation. Apply it to yourself and your own image instead of Cosby.
So once my paintings get “too” good (i.e. indistinguishable from a photo) then all of a sudden my benign art has transformed into a violation of the law? Or is now “problematic”?
Btw that’s how photorealistic paintings already are.
Actually, once your painting gets to any amount of recognizability at all for someone's image/likeness, it is a violation. This is exactly what I'm talking about when I say the internet has made us forget that these laws already exist because nobody has been enforcing them.
No, it is not. One's image and likeness is only protected in very narrow scenarios: namely advertisements. You can go paint politicians or celebrities all you want, as long as you're not having them endorse products or otherwise advertising things.
So you’re saying there are drawings I can make that are somehow “violations”? Think about that. There’s a drawing I’m not allowed to draw. Like WTF. Talk about dystopian.
You are not allowed to draw children in sexual acts in many places in the world. In some countries, you can't even draw god. These artistic limits have existed for a long time, but normally people don't rub up against them.
These laws already exist right now.
Here is an example.
You can't make a painting at the dimensions of a dollar bill that's so authentic it looks exactly like a dollar bill. That's called counterfeiting and it's illegal in every country in the world, I believe (for their own money of course).
That's an entirely different thing from being prohibited from making any likeness of George Washington (even if the image shows him having sex with Martha, Thomas Jefferson, or the entire Continental Congress).
This can be dealt with by mandatory disclosures, enforced by law, that the imagery is AI generated and not real.
But this solution will not satisfy you, will it?
So now you want the law further involved in pornography? Will fine print save someone's reputation like that? If people are distributing for malicious reasons, they aren't going to follow rules.
> If people are distributing for malicious reasons, they aren't going to follow rules.
Yes... and by making laws around the distribution or labeling of this content, they can be sued. The argument that criminals don't follow the law, so there should be no law, is really weak. I don't really see a problem with requiring AI porn of non-consenting 3rd parties to be labeled as such at a MINIMUM. Some amount of regulation could be required and shouldn't be dismissed outright given the societal harm that could be caused.
I could see artists voluntarily mentioning AI art in their captions. And on the forums I frequent, that's the current solution to keep the controversy down.
I don't think defamation is the issue. The idea behind defamation is that it's harmful to somebody if people believe bad things about them. People still feel harmed by these images even if absolutely nobody believes that they're real.
I think that right there is the crux of the problem. To think yourself harmed because of what some stranger is looking at in their own home is quite a bit odd.
If I knew that someone had a naked painting of me on their wall and they were masturbating to it right now, should I really be bothered by that? Am I supposed to care what is in the head of a stranger?
Maybe, maybe not. If there are a bunch of strangers out there trading around the pictures and discussing, in graphic terms, what they'd like to do to you, maybe more likely. Especially when most of them seem to have no particular interest in thinking about what you might or might not be into.
... and especially if some of them are actively stalking you in real life, or if you tend to get stalked in real life in general. Which roughly 100 percent of major actresses do, and a nontrivial percentage of just random women. Creepy stalkers or worse are going to be heavily overrepresented among the people who produce or consume this material... and women who have actually been stalked are going to be overrepresented among the subjects.
Oh, and by the way, if you were raised female, you're also pretty likely to have had experiences where (1) a bunch of people treated you as valuable only for your sexual potential, and acted disappointed or even angry if you did anything that reduced their ability to fantasize about how they wanted to use that potential; and (2) another bunch of people told you you were a worthless slut if you did anything sexual they didn't like. That can affect a person.
I'm actually not ready to ban things, but I am not going to delude myself that there's no downside to that.
Exactly, the issue is that you become a product in the eyes of others, and now realistic images of that product are clicks away. And it alters how you interface with the people around you and how they interface with you.
If you find a naked painting of yourself and tissues on the floor in your friend's house, please report back on exactly how that made you feel and exactly how you decided to proceed with the friendship. I am very interested to hear the results. For most people, I think they would be disturbed to find this, and it's easy to theorize that you wouldn't be bothered until something like this actually happens to you.
I'm going to assume bad faith because the alternative would be too rude to you.
If you wanted to paint a photorealistic Halle Berry having sex with whomever, you'd need years if not decades of art training. Or to shell out a minimum of $500 to commission an artist. Now I can have 25 different jpgs of that mental image in less than an hour of "work". And no, you don't even have to write that piece of software; it's already online for free.
If you don't see how this is a new problem, you should log off.
>Random photos without context will just lose their value as gossip evidence. Who cares.
You answered your own question. If generative AI means that no one trusts any evidence and calls everything fake, that's a huge societal problem. We've already seen a certain subset of the population claiming that since 2016 with disastrous results (how do you convince someone who dismisses all evidence out of hand?)
About not blindly trusting photos, that ship has sailed. The tools exist. Have existed long before generative AI. Are accessible. Do not require very much "talent". That's done. You can't blindly trust photos. When you get a photo or a few photos - be it in a scientific paper or a tabloid web site - you can ask yourself whether it makes sense. You can consider the context. Generative AI does not change that.
I strongly disagree. There is a huge difference between "photos can theoretically be faked, we need to consult an expert" and "everyone I know is using meme generators to post pictures of themselves in historical events, nothing is believable" in terms of societal trust.
I disagree with your disagreement. Pandora’s box was opened a while ago. There is nothing we, as a society, can do to reverse that.
Either you waste time discussing how that’s a bad thing, or accept the premise that pictures aren’t inherently trustworthy ways to convey information anymore, period.
Perhaps an example helps. And then I'll agree to disagree :-)
Say you are an exec or HR person, 6 months ago, before generative AI, and you spot on the web, or are helpfully sent, 2 photos of your cherished spokesperson having a great time at a nazi campout. Then what? Well, you'd better think a few cycles before firing that spokesperson. It may well be real, it may well be Photoshop.
Generative AI does not change that issue. Eventually it will make it easier to generate that content, but not yet. You are still making a decision that matters on the basis of very manageable labor. More manageable than before does not change the importance of the decision you have to make. Many will make that decision without considering it, but that's more a question of awareness or care than means, no?
That's the point. The conservatives got up in arms about video game violence, but the left too can get over excited about similarly innocuous things in this case.
While I agree with keeping the focus of people problems on people, I do disagree with your statement on local models as written.
If you write the model from scratch, and create all the data it is trained on yourself, then yes I agree it is like using your imagination.
However, these models are trained on commercial resources to at least some degree - so even your local model is likely still a commercial product.
I struggle to identify a meaningful metaphor to support my point, which I think is an interesting note on the kind of new territory we are in.
But in essence, it feels to me like if you use a commercial model locally to generate content that is illegal to trade commercially then you've effectively just put illegal content in a box and opened it at home. You cannot claim that a box containing illegal content is just a box and send it through the mail.
So I think it is in our best interest to play nice with this technology and not use it for questionable purposes or else the inevitable law that will need to be put in place is "it is illegal to trade or own AI models capable of GENERATING illegal content"
I do see some logic to the idea that current AI is very similar to artists' brains: trained by seeing mountains of images, including many of actual living persons. Nothing wrong with that for the artist. Most do pay attention and don't go out of their way to reproduce a specific person - except sometimes that's the entire idea and they do, which sometimes results in a lawsuit. Same in music: currently you have to be extremely cautious and even then you will run afoul of several copyright claims.
An AI can create as many illegal images as it wants within itself, as that would be akin to imagination.
A person interacting with an AI is now a transaction and no longer imagination.
The key point here is that a lot of legal issues pop up when you trade goods that do not exist when you do things alone and privately.
Although the experience with an AI model might be private, my opinion is that long term the things it generates will be seen more as traded commodities than personal items you built with your tools
The things the generative AI outputs are very much the result of prompt engineering based on the artist's selections. It's an artist's effort. Using a tool. They tell us how much of a pain it is to arrive at a specific desired result.
But I will agree with you that where the law ends up is still up in the air. I sure hope it doesn't end up seeing all the output as "just stuff that was put in". And the law often does not end up on the side favorable to art. Wouldn't be the first time. See music copyright. See street photography in Europe.
It's new territory, not really just a tool nor just a box of things. I didn't want to overcomplicate my message so I went with asserting the box extreme as it puts the fuller range of legal options on the table.
I do think in its current state it should lean more towards box than tool.
A tool typically doesn't create things on its own in a way that is hard to control. You need to use tools under your control. So I see the pain they feel to get specific results as in support of my claim that this is not simply a closed case classification as a tool that bears no responsibility.
If it is sharing responsibility with the user for the output, then I consider that a relationship and the output a transaction.
This protects users just as much as it limits their freedom of use. We don't control these models, and we should want protection against things it may create or hallucinate given our lack of control
I agree that prompt engineering is an art, but imo that gets you as far as claiming the prompt itself as art - not the output from the model given that prompt.
I hear the distinction you are trying to make. I think it's been argued before but I don't remember where. Something like "if you use our tool you must credit us - we are part author." That attempt wasn't successful enough to leave much of a mark on me :-)
I don't know that the arguments you use are the right ones. Being "under control" has little place in the definition of a tool or medium. Many are valued specifically because they are not under control - with an output too complex to have been under control. Splashing, color mixing, watercolor, impasto, some kinds of paint brushes, lens flare or daylight and weather in photography, rain patterns, all the way to ...
"Generative art" which has been around a long time before LLMs and is hard to control, IMO, because the primitives are not humanly intuitive, or add up quickly past what we can intuit. Much of generative art is a matter of trial and error while holding a highly unstable process by the tail. The human artist is responsible for selection and taste but the richness of the output is often entirely the "work" of the process chosen. - Sharing responsibility if you will but the question of who is the author is not really argued anymore. AI generative art seems well in the continuation of that.
It's also possible previous Generative Art was not problematic enough to be thoroughly analyzed.
Something feels different about AI than other Generative uncontrolled processes like the ones you mentioned. Can't quite put a description to it yet.
My other reply to you goes into detail on the core of where my thoughts are coming from. I agree that giving credit to the entity you got the model from also feels weak.
There is just a certain mix of deliberate intention, uncontrolled process, and expected outcomes involved here that also feels weak to classify as just using tooling
There seems to often be an idea that the models "contain" the training images. Aren't they far too small for that? A point of having a model (LLM, whatever) is that the model is abstracted from the training data. And then, except coincidence that doesn't really matter, the model cannot regenerate exactly a person's source image.
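For scale, a rough back-of-envelope (the exact figures are my assumptions, roughly Stable-Diffusion-v1-era numbers, not anything stated in the thread): a checkpoint of a few gigabytes against a training set of a couple of billion images leaves only a few bytes of weights per training image, nowhere near enough to store them.

```python
# Back-of-envelope: bytes of model weights per training image.
# Assumed, approximate figures: ~4 GB checkpoint, ~2 billion training images.
checkpoint_bytes = 4e9
training_images = 2e9
print(checkpoint_bytes / training_images)  # ~2 bytes per image -- no room to memorize them
```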
It can generate a likeness and that will be enough for people to complain about; and likeness is already actionable in narrow circumstances in current law if there was some intent. But otherwise, likeness, even very close likeness is safe in current law.
One current problem is that copyright covers "derived" works. And it will have to be sorted out whether model training means everything that follows is "derived". I sure hope for not but who knows. Music law certainly goes pretty extreme on "derived". While elsewhere we are actually pretty free for our inspiration.
The question of tools that are capable of generating illegal content is pretty safe currently. Not illegal. But there's no doubt that some large copyright owners will go after that sooner or later. It's tradition.
I definitely understand the model does not contain the images.
You gave a very valid description for probably the most critical copyright issue around it at the moment.
I'd say my point is more around the fact that the content will have different legal standing depending on whether or not something produced by the model is considered a traded good or a private creative work using some tools.
Copyright and likeness rights are only one area this impacts, but they're a solid example. If the content is considered traded, then likeness is not allowed. If it is a private work created through use of a tool, then it is fine as long as it isn't traded or publicly displayed.
My argument is that since the model is heavily influenced by its training, and the user did not train the model, that a trade is happening. The untrained model is the tool, but a trained model is now like a commodity and everything it produces is also a traded item.
> So I think it is in our best interest to play nice with this technology and not use it for questionable purposes or else the inevitable law that will need to be put in place is "it is illegal to trade or own AI models capable of GENERATING illegal content"
But I think there is hope more in terms of what we publicly allow as opposed to what people actually do. As long as people cannot publicly assert that whatever they make with a local instance of an AI they downloaded is the same as just using their imagination, that would go a long way.
They may do things they shouldn't anyway, and it may be completely unenforceable but at least we stop any compounding effects from happening in situations where it does come to light. Removes it from being plausible legal defenses as well.
Otherwise the publicly available side of the tech is at risk of getting completely gimped
> However, these models are trained on commercial resources to at least some degree - so even your local model is likely still a commercial product.
That's not how “commercial product” works (a product with a commercial product as an input or component is not itself ipso facto a commercial product, and especially not a product whose component or input is not itself a commercial product except in that it, in turn, has commercial products as components or inputs.)
I was using the phrase relatively casually, but I'm interested to learn more there.
I was not exactly trying to argue it ipso facto, but rather that the nature of it lends itself (imo) to being a much more commercial experience than not when you take a commercial model off the shelf, bring it home, and ask it to show you content that would be illegal to trade otherwise.
Basically I'm arguing that anything the model can generate is relevant to the transaction of the model itself once generated. As in, every time a model generates something for you, you should think of that generated item as something you originally obtained in the same manner you obtained the model. Creating illegal content at home with a local model you downloaded off the internet is, to me, the same as just downloading illegal content directly from the internet.
If a painter cannot sell you a portrait they painted of someone without their consent, then it's equally illegal to invite the painter over your house and have them paint it for you in secret.
> If a painter cannot sell you a portrait they painted of someone without their consent
Not trying to derail the discussion, but there is no legal issue with selling an art portrait without consent, or an art or news photograph of someone who didn't consent. The US legal limitations are elsewhere. Commercial (advertising use), overly piggybacking on someone's notoriety, that kind of thing. And even then, the question is usually more about who will share in the proceeds.
I'd be curious to know the legality here more concretely. My understanding is that although a lot of it is not enforced, it is technically illegal to sell or publicly display images of another person's likeness without their consent.
AFAIK An artist can draw whatever they want for personal use, and this is where I feel the distinction on whether or not an AI produced image is considered personal use or a traded good becomes critically relevant.
It depends. Heh. Here is a chart for street photography with people. Even then there are subtleties - perhaps if one person is singled out or there's an undifferentiated group.
So, in most of Europe, street photography is just about over. In some countries you can take photos, in some not even, and then you would have to agonize on what can and cannot be sold as art photos or published.
Etc., when it comes to mediums other than street photography.
In the US in general, the questions are of "expectation of privacy" and of "commercial use" (i.e. advertising and product promotion) and creating fiction showing someone in an untrue and compromising situation, etc. And largely the rest is open. There's no issue with publishing street photography in particular.
> If a painter cannot sell you a portrait they painted of someone without their consent, then it’s equally illegal to invite the painter over your house and have them paint it for you in secret.
Despite the hype, AI models aren’t painters, they are paint.
They are the worst qualities of both in this regard.
It is not paint in that you do not have absolute control over its application, and it is not a painter in that it is not sophisticated and independently creative.
It is paint in that it is simple, and it is a painter in that content can be generated on the fly and taking any content from it (imo) classifies as a trade.
> I feel like a lot of the concerns around this are already covered by image and likeness restrictions.
Image and likeness restrictions are, in most cases in the US, commercial trade restrictions; some non-trade abuses might instead be covered by defamation, but there is a reason that states (and countries with similar basic legal systems to the US) have started adopting deepfake-specific laws: these cases aren't well covered by existing laws in general.
I note that the Scottish law prohibiting non-consensual distribution of "intimate" images uses the phrasing "shows or appears to show". So sufficiently realistic fakes that might be taken for real might be illegal, although there has not yet been a test case.
This talk of deepfake porn takes me back to a classic HN comment from mabbo:
"As I see it, within a couple years this tech will be so widespread and ubiquitous that you can fully expect your asshole friends to grab a dozen photos of you from Facebook and then make a hyperrealistic pornographic image of you with a gorilla. Pandora's box is open, and you cannot put the technology back once it's out there."
I think this is inevitable and the best we can do, as a society, is to forbid publishing images together with personal identification (like actress name) without their consent.
As they explain, almost anybody can combine the two images using AI in their own home. I think that's not very different from privately fantasizing about a famous actress. I think it would be too privacy intrusive for society to try to prevent that.
The real harm seems to come from publishing, and making the connection (i.e. claim that this image that looks like this actress is in fact, image of this actress), so the image can be found under their name. That's where I think we should draw a line.
Face and possibly other body features are already enough for personal identification. I mean if a nude with Emma Watson's face is generated then adding Emma Watson's name to make the connection is already redundant. Anybody can recognize her face.
Not really, because you cannot efficiently search by face and body features. IMHO publishing an image that looks like a famous actress, without indicating which actress it is, should be allowed. Sure, perhaps somebody could do an image search, but that's already a more complicated process than synthesizing your own fantasy.
There are other edge cases, for example, should an image of "Princess Leia" be allowed to be shared under that title, when it implies the resemblance to the actress? I think the answer is probably yes, like with the body features, character name is not enough to identify a person.
But it would not be allowed to mention Carrie Fisher in conjunction with such a picture, even by a 3rd party reposting the image.
Edit: Maybe anybody can recognize Emma Watson (myself, I am not sure, not HP fan), but the issue is how you then use any human image generator without potentially infringing some random actress (effectively banning them)? There are definitely facial features I like and I cannot discern that I like them because I saw them in some actress whose name I have long since forgotten.
The actresses tend to have a wide variety of appearances, and there are not that many faces. It's gonna be a very contested rule. That's why I think it would be much easier to simply go after the name connection with the actress, rather than facial or bodily features.
> you cannot efficiently search by face and body features
Yet. That feels like something an LLM trained on lots of images would be good at. It's also the basis of biometric identification.
(Heck, google lens will give you celebrity identification about 10% of the time even though it's not designed for that. The Bing image search will try to identify the clothing and handbag in the photo instead so it can sell them to you)
I think I addressed that in the next sentence. Doing an image search using a real image and deciding whether the fake image you find is good enough for your fantasy is already a more complicated process than getting two images (one of your fantasy and the real one with the actress) and having AI synthesize them together.
Or maybe you need to be more specific how do you imagine people would use the technology.
Get a normal, fully-clothed, completely SFW, maybe face-only shot of your target. Paste it into facial-image-similarity search engine that also supports constraining the output using text, and make the text "hardcore blah blah blah". If, as you assume, there are in fact hardcore blah blah blah images out there that look like they're of the target, the image search will find them.
A search engine that does something like that doesn't seem too hard to build and will probably come along sometime... especially if the availability of images that you can't search for by pure text has created demand.
In fact, it doesn't seem too farfetched for somebody to build a purely text-prompted image searcher that can accept "looks like $target" as part of the search string, and have it go find a picture of the face, then combine that with the rest of the query text. Again, if the images are out there, it'll find them.
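To make the mechanism concrete, here is a minimal sketch of what such a face-constrained text search could look like. This is purely illustrative and not any existing product: `embed_face`, `embed_text`, and `embed_image` are hypothetical stand-ins for off-the-shelf face-recognition and CLIP-style embedding models, and the "index" is just an in-memory list.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(target_face_photo, query_text, indexed_images, alpha=0.5, top_k=10):
    """Rank already-published images by a blend of face similarity and text relevance."""
    face_vec = embed_face(target_face_photo)   # hypothetical face-identity embedding
    text_vec = embed_text(query_text)          # hypothetical CLIP-style text embedding
    scored = []
    for img in indexed_images:
        face_score = cosine(face_vec, embed_face(img))   # "looks like the target"
        text_score = cosine(text_vec, embed_image(img))  # "matches the query text"
        scored.append((alpha * face_score + (1 - alpha) * text_score, img))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [img for _, img in scored[:top_k]]
```

Nothing in that sketch is exotic; each piece already exists separately, which is the point: "you can't efficiently search by face" is a weak place to draw the line.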
Not only do the images being out there create demand for the search system, but the search system then creates demand for people to compete in producing and publishing the "best" images. So you might very well find that the skill level for finding the images was even lower than the skill level for prompting them. And of course the shared images would automatically be visible to the target, as would a lot of link sharing and commenting activity.
Pretty much any publicly visible woman, and many publicly visible men, would end up being able to search for their own name and find a bunch of people sharing and talking about obvious fake images of them.
Sharing the images really does have different consequences than just generating them, even if you don't put names on them.
Also, I don't think there's any functional difference between allowing an image to be tagged as "Princess Leia" or "Hermione Granger" and allowing that image to be tagged as "Carrie Fisher" or "Emma Watson". That just means that the character name becomes a widely known, thinly veiled code for the actress name.
I don't know what to do here, but I'm pretty convinced the particular line you want to draw doesn't make sense.
> In fact, it doesn't seem too farfetched for somebody to build a purely text-prompted image searcher that can accept "looks like $target" as part of the search string, and have it go find a picture of the face, then combine that with the rest of the query text. Again, if the images are out there, it'll find them.
Then it's the search engine which makes/keeps and publishes the association between actress names and their images, and publishing that association would be illegal. In other words, the search engine should not make an inference, this image looks like a specific actress => this image is of this actress.
> Also, I don't think there's any functional difference between allowing an image to be tagged as "Princess Leia" or "Hermione Granger" and allowing that image to be tagged as "Carrie Fisher" or "Emma Watson". That just means that the character name becomes a widely known, thinly veiled code for the actress name.
I think there is a difference. People already understand that Lena Headey is not an actual IRL evil queen. Look at where the reputation risk comes from - it comes from the association with our real-world identities, not with those of their characters. Why should the actress care if her character is depicted doing something bad?
Ultimately I think, the whole "reputation risk" is stupid, and humans should know better not to judge others based on some random image, and consequently, not to worry about being judged from some random image; that would really solve the problem. But perhaps it's too much to ask, so we have to draw the line somewhere.
> Then it's the search engine which makes/keeps and publishes the association between actress names and their images, and publishing that association would be illegal. In other words, the search engine should not make an inference, this image looks like a specific actress => this image is of this actress.
Assuring that is technically difficult, error prone, and likely to interfere with legitimate functions of the search engine (assuming you think that search-by-face has any legitimate functions).
One of the signs of being on the wrong track in policy is that you find yourself having to go after more and more people and forbid more and more things.
> Look at where the reputation risk comes from
This. Is. Not. About. Reputation.
Not for major celebrities. Not in any way, shape, or form.
If you posted a deepfake of some major actress that was any more pornographic than some casual wardrobe malfunction, put her name on it, and claimed it was real, still effectively nobody would believe it was real (unless she had actually made it, in which case she or her publicists would make really sure that was known). Unless she was really unusually dumb, she would know that effectively nobody believed it was real. And yet she might very well feel harmed by it.
And for that matter she might be actually seen in a different light by people who were 100 percent certain it wasn't her and she had nothing to do with it.
Claiming it's a deepfake of "her character" doesn't change the fact that it is equally a deepfake of her. It's an image. There is nothing interesting about it other than its appearance. And it appears to be her just as much as it appears to be the character.
It's true that reputation risk comes into it for "normal" people, but everybody knows that there are going to be tons of fakes of celebrities.
OK, I am confused what policy you're arguing in favor of. You seem to think that limiting search is too broad a measure, yet you think that an actress will be annoyed by the results of the search.
I suggested a sort of compromise. If she looks up her name, it will come clean, because people should be forbidden to associate her name with an artificial image (without her consent). If she looks up her face (without a porn filter, or something like that), well, then.. she might find distasteful things. So she perhaps better not do that. But almost anybody can find distasteful things on their own face (facial similarity is pretty common, in fact), or even without a face (that's why major search engines already do content filtering).
I do not have a firm policy answer, but I think that either of "do not share these images at all" OR "absolute free-for-all" would make more sense than what you propose.
I think they meant fraudulent attribution rather than identification. For example, claiming such a nude Emma Watson photo is real means they are fraudulently misrepresenting her and thus could tarnish her reputation, etc.
Which is why fake Emma Watson images should not be illegal, as long as there's clear labeling (may be a requirement for watermarking for more permanence).
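As a rough illustration of what a machine-readable label could look like, here is a minimal sketch using Pillow's PNG text metadata. The key names are made up for the example, and note that metadata like this is trivially stripped, so anything meant to be "permanent" would need an actual robust watermark, which this is not.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(img: Image.Image, path: str) -> None:
    """Save a PNG with an embedded 'AI generated' disclosure in its text metadata."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical key names, not a standard
    meta.add_text("disclosure", "Synthetic image; does not depict a real person or event.")
    img.save(path, pnginfo=meta)

# Reading it back: Image.open(path).text -> {'ai_generated': 'true', 'disclosure': ...}
```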
> I mean if a nude with Emma Watson's face is generated then adding Emma Watson's name to make the connection is already redundant. Anybody can recognize her face.
Sadly, they'll probably just prefix her last name with a "T" ...
> The real harm seems to come from publishing, and making the connection (i.e. claim that this image that looks like this actress is in fact, image of this actress), so the image can be found under their name. That's where I think we should draw a line.
Does it have to be publication?
Suppose somebody creates a picture of actress X doing something she'd never do, with somebody she'd never do it with, in circumstances that she finds degrading, and, heck, let's say with "pig whore" painted on her... and privately emails it to her. "Thought you'd like this picture.".
No publication, just a private message. No caption on the picture, nothing like that; it's just her mind that makes the connection. No defamation; obviously she's not going to believe it actually happened and obviously the sender doesn't expect her to.
It might be hard to do that particular example without it coming off as a threat, but even if it were clearly not a threat, I don't think most targets would be very thrilled about it. It's not just publication.
You seem to be mixing two things there, or it's unclear what you mean. It's possible to mention someone by name and arrange for the image to be found under their name without claiming that the image is really of that person. For example, something that people have been doing for a long time, I think, is finding a porn actress who looks like Famous Person, adjusting their hairstyle and make-up to enhance the resemblance (in extreme cases including cosmetic surgery), and then advertising the pictures as "Famous Person Lookalike does XYZ". The client knows exactly what they're getting and there is no libel of the famous person. Would you want to make that illegal, or just the cases that are perhaps already covered by libel law and the Trade Descriptions Act?
Yes, I think the "famous person lookalike" loophole should be illegal for generated images. All generated images are lookalikes, by definition. Also no tricks like "this is not a pipe".
But I think we have to allow lookalikes that do not claim to be a specific lookalike, otherwise using human image generators is going to be too complicated legally, and I don't want to ban them.
> the best we can do, as a society, is to forbid publishing images together with personal identification
We can do more, like leverage AI to hunt down abuses of images. But this will probably only happen if there is a bounty on such abuses. We could have AI-backed law firms making money out of this.
I think it has always been an offense to use someone's likeness fraudulently. But the issue has always been enforcement; it's quite difficult to catch the people responsible. An overly draconian net impairs freedom on the net, but a loose net means there will be many trolls like that.
Funny that when it is AI generated porn, it is non-consensual, but Copilot and ChatGPT aren’t labeled as producing non-consensual code and text, when they stole the data in the same manner.
That doesn't make any sense. You have ownership over your face and likeness, hence stuff published in your name (or using your face) without your permission can be called "non consensual". This doesn't translate to the code you write. The most one can say is that they are violating your copyright, which is already heavily discussed every day in the context of generative AI.
They are different branches of law, but neither is more tightly connected to the concept of “ownership” than the other. Saying that your likeness rights are something you have ownership over while things you own copyright to are not is a gross mischaracterization.
I don't buy the GP argument at all. But you have to take into consideration that most code is similar. All the code in a project when put together may be unique but snippets of code in one project - which is what presumably chatgpt is giving you - is not all that unique. Claiming ownership over that is somewhat meaningless.
> When you publish a work, you inherently grant permission for others to use it in various ways, including fair use and reuse
This is conceptually backwards.
In general, you have the right to do anything not constrained by law. Copyright is one such constraint, but fair use is carved out from copyright and specifically not included in copyright protection. Ergo, you don't need to do anything to have the right to fair use.
This means that publication is irrelevant. Even if a work isn't published, and you just happen to find it under someone else's couch cushions, you have the right to make fair use of it.
Of course, as goes without saying, fair use is quite limited, and the creator retains copyright and other rights (moral, etc.).
Next someone will discover that fan fiction exists. Worse! It has existed for many years and names, describes, and stages actual living personalities.
If anything, artists among that bunch were fast to get really good with their tools. To the point that it seems the only reason to restrict an image to one person's likeness is to satisfy a fetish in that one person. Or revenge porn or similar evil plans.
The artists might want to pay attention to not inadvertently optimize an image "back down" to one person's likeness but that doesn't seem too difficult.
genie's out of the bottle. nobody can stop it. law? do you believe those corrupt old men actually work for you? especially when a founder of a leading llm company is chasing regulatory capture. shit show all around.
and out of topic: the quality of this report is really top notch. where and how can I support you?
I was confused by the tone, this helps explain it. It comes off as a relic from another age, other people doing the equivalent of investigative journalism nowadays don’t typically use phrases like “our investigation revealed.” It really breaks the fourth wall.
The million-foot-view thing I can't get over: I'm utterly fascinated by how, in practice, with things like this, celebrities are the historical equivalent of religious figures.
Like, there's all KINDS of weird shit that can be spread around, no problem, but a long time ago, "we" drew a very serious line at CELEBRITIES.
The article's focus on the marketplace element and "for sale" seems strange given that everything in it is available for free. Any of the models or LoRAs can be downloaded without payment and run locally; the apps and websites mentioned are just charging for running the models.
There are a whole host of discords and such that produce and sell this stuff. Most people would rather spend $ than create it themselves. This financial incentive is driving a lot of creators. You have sites like mrdeepfakes, which try to be the pornhub of celeb/influencer AI content and are going to try to ride the legal line. These are the marketplaces of concern, not civitai imo.
Personally, I have way less of an issue of some person creating private pornography in their home than I do with profit generating side. Ideally laws could be crafted that protect the private rights of the individual while regulating the distribution or profit of using other people's likenesses in this way.
Hmmm...trying to square my reading with that observation: I think that's only true if we limit ourselves to the first paragraphs on Civit, and either skip the rest, or argue the rest is irrelevant because you could construct something yourself without paying.
I guess I'm reading the article as them calling CivitAI the "marketplace", which is like calling Github a marketplace because projects can have a button to run them on Heroku.
I've seen those things draw an occasional girl with four legs by accident and note one guy who publishes a lot of photos of girls with two belly buttons not because that is a kink of his but because the model makes that mistake a lot and he doesn't catch it.
I find those things kinda interesting because I am a fan of anime fan art and the skills of the people drawing it vary a lot. I wasn't very picky when I first got into it but over time I see the difference between professional art (say Tsunako's work) and most of the people who draw fan art (except for the few who are also professional or on the path to becoming professional such as Wisespeak... Oddly this interest and some other developments got me interested in art history and criticism in general.)
My understanding is that those diffusion models mostly work on a local basis; in particular, they don't have a global understanding of how a body exists in 3-dimensional space, human anatomy, etc. (the kind of things an artist has to learn to draw good figure art). Somehow they still do a very good job of maintaining symmetries such as: left and right limbs having the same length, left and right eyes having the same color, etc. I do see them get in trouble, though, in other unusual cases: for instance, if somebody's arm goes behind their leg, the system might not keep track of the length of the obscured part, so the global length of the arm ends up wrong.
My take is that "four legs" is really unusual, maybe it happens 1/1000 of the time, whereas in one run of images "two belly buttons" was happening around 10% of the time. Oddly, I found there really is a woman with two belly buttons.
> My understanding is that those diffusion models mostly work on a local basis, particularly they don't have a global understanding of how a body exists in 3-dimensional space
Stability made some changes with this in mind, and SDXL is somewhat better at this, but yeah, it will be an ongoing issue.
> Jenny went on to continue with the rest of her story, explaining that on one birthday, she received free drinks 'all night' for showing her unusual addition to a member of staff who had a fear of belly buttons.
> "I showed them to her and she completely freaked out and started crying and left and they high-fived me and gave me free [drinks] for the rest of the night," Jenny explained.
I've never felt cheated by the circumstances of my birth before this moment.
I have a polydactyl cat where there isn't really room for the extra toe and might be a little uncomfortable. She can't stand anybody messing with her feet, we are told she's very spooky because children kept trying to touch her paws. I can pick her up and easily hold her upside down like a baby at the food bowl if we're alone but if somebody else is around she's very hard to handle.
I had a friend in college who had a similar condition, I knew him for a month before one day I looked at his hand and I said "Dude, you've got six fingers!" In his case the sixth finger was kinda a cross between a pinkie and thumb and not very useful.
My mom worked as a secretary for a while and was always amazed at how fast one of her friends could type, compared to her. Turns out she had six fully functional fingers on each hand and her own method of typing!
IIRC the gene for six digits is actually a dominant trait, oddly enough.
I guess this is anticipatory of the technology improving. Deepfakes still look much better than this since most of the image is still real. If the images based on "real" people look like the image at the top, which I imagine they do, it's not much different than drawing a cartoon picture intended to resemble a real person, which for now is still legal, at least in the US, provided you don't actually profit off of it (this is why some sports video games have to use intentionally generic players even though they don't look realistic anyway).
If the concern is people might believe any of these people are really doing the things in the generated art, though, that is premature. We don't know how long it will take to get past this uncanny valley, but there is still a hell of a gap. That "woman" at the top has no pores, no lines in her skin, appears to be growing head hair from her arm, has three fingers on her left hand, has two left feet, the big toe is on the wrong side of both feet, she has six toes on one foot and four on the other, she has an extra heel popping out of the front of her ankle on the right foot. This is still way behind even a hand-drawn cartoon in terms of realism.
Question: Can we use this to liberate people with... different... tastes, like pedophiles and gore? Give them something to look at without harming real people?
A lot of places have laws against possessing hand-drawn child porn too; it gets justified with hand-wavy arguments of "enticing" or "normalising" the abuse of real children.
Much of the legislation around pedophilia is driven by revulsion rather than the extent of harm, so I think ideas to use this for therapy would get rejected by the populace and thus by legislators who want to be popular...
The problem with that is that the generated child porn would presumably derive from training data containing real children and child bodies, so it's not completely "not real".
This isn't the same thing as a drawing or 3D model made by a human.
Or maybe it is, what if a pedophile looked at some clothed children, drew very realistic pictures of them naked, then used them as child porn?
I think a lot of people would have a problem with that too.
For obvious reasons, it's hard to do any kind of rigorous double-blind study on this, but the theory tends to be that this may enhance these urges rather than satisfying them.
Serial killers, for example, tend to escalate over time chasing the thrill of things.
From my understanding, a Stable Diffusion model will gladly spit out pedophilia while not being trained on such, the same way it will gladly give me a Shark in Space.
Before AI, you could commission an artist and pay 100 bucks for questionable images. Now with AI, you can pay $1 for the same questionable images.
The contents being generated are nothing special. They are just various kinks and fantasy and some hardcore stuff that have been floating around since forever. It is just more accessible and easier to make them now. And considering where they come from, this is probably an improvement since no real human is involved or harmed for it.
The only thing I can see people worrying about is someone putting a real person's face on a porn image. But deepfakes have also been around for years and we don't really care or aren't affected that much. Why bother with what people do behind closed doors? As long as they don't harm anyone, what they do in private is their own business.
> The only thing I can see people worrying about is someone putting a real person's face on a porn image. But deepfakes have also been around for years and we don't really care or aren't affected that much. Why bother with what people do behind closed doors? As long as they don't harm anyone, what they do in private is their own business.
The problem is when it doesn't stay behind closed doors, but instead when this is used to harass the person being depicted. Non-consensual distribution of porn is a big deal because there is a lot of evidence of its harms, including a very large fraction of people with suicide ideation.
But the assumption that circulating porn of someone is real is broken.
It will take some time to collectively learn, and it will still be disturbing, but the implications of your nudes being passed around will be totally different than today.
That's not how it works: that either there is an assumption or there is no assumption. The assumption may weaken, but what will happen is that if someone sends a fake video to many people, you're guaranteed that a substantial fraction will believe it's real. I mean, a big portion of the population believe in outlandish conspiracy theories. You're way overestimating people's skepticism.
There are already laws for that. Like I said, deepfakes and Photoshop have been around for a while. The same concerns were raised back then, and now it is just another type of porn. Turns out those petty enough to use deepfakes to bully are also often not bright enough to make them good enough. It will be the same with AI. In many ways it is still easier to spot an AI-generated image than some photoshopped photos.
In the furry community people pay artists _thousands_ for commissions of their or other fursonas. I've met a few people where this is their entire livelihood and they make comparable money to SWEs.
There's still an exploitable niche here where I think people will still pay big money for models trained on their specific requirements.
rule34 and celebfakes have existed since the beginning of the internet. This is just more accessible and not a niche anymore. Not much you can do. The tools are there; everybody can generate deepfakes on their offline phone now. Better to live with it. Now when your real nudes leak somehow, you can just say it's a deepfake.
“Non-consensual” imagery is such a strange concept. Does it not fall under some kind of freedom of speech to be able to generate artwork or content of someone specific getting railed?
This feels like asking someone for consent to imagine them while masturbating.
I don't think anyone would say you should be prevented from drawing whatever you want in a sketchbook. But if I started distributing a magic sketchbook that drew one particular person in lewd poses, even though those images don't exist yet when the sketchbook is distributed I could see a strong argument that what I'm doing is essentially harassment and defamation. The US doesn't have federal protection of image rights (that I know of, anyway) but depending on the state it may be illegal to use someone's likeness in this way.
I do think that there's an argument here that could be brought before a court and potentially won that even if individual lewd images are free speech, creating and distributing a machine that produces those images can cause real material harm to the person whose likeness is used in the imagery.
Not creating. Only distributing could create harm. So if someone wishes to create AI-generated imagery in private, without giving said image public access, then it ought to be fine.
I'm not sure that distributing alone is in fact harmful, otherwise you could sue everyone in the pipe that touches the bytes. Both creating and distributing together create the necessary conditions to show intent and harm.
You are absolutely right but the comments are driven by emotions.
I find it repulsive -> Let's ban it.
Obviously we should compel speakers to disclose that imagery is AI generated and not real. Banning this kind of speech is an affront to freedom of speech AND freedom of thought.
I do kind of wonder how different this is from cutting out the face from a photo from someone you like, and pasting it into the body of some porn mag for masturbation purposes.
There is a vast difference in the realism of the end product, or type of end product in the case of a deepfake video where it is not just a static frame like a magazine.
Ultimately this should come down to whether harm is done or whether the societal impact is not accepted.
To take your example, explaining face to face to the person how you imagined them when masturbating could bring punishment depending on how they take it or if it impacts them in problematic ways.
Should putting a celebrity's face on a billboard without their consent be covered under "freedom of speech"? What about if you release a movie starring "Tom Cruise" but Tom Cruise hasn't consented to it and his face is actually deepfaked?
Except there's a difference between a thought in your head and a digital image or video being published to the internet for profit, and it's weird that you can't comprehend this.
I have to assume there are a lot of people in this thread with interests in AI porn because there is a lot of aggressive missing of the point going on here.
The amount of people here claiming that using an AI tool to create an image or video that can be shared, and that privately fantasizing inside your own mind, are equivalent, is boggling to me. Your fantasy-brain isn't hooked up to the internet where everything can be saved and reuploaded indefinitely. Personally I feel like I have the right not to appear in pornography produced by a computer that can be diffused worldwide. Privately made or otherwise. The other person's right to fantasy ends at the border of their skull. This is not an economics question, it is an ethics question.
I disagree. You can’t dictate which tools someone should [not] use to amplify fantasy if it doesn’t affect you. Tbh that “right to fantasy ends at the border of their skull” sounds too oppressive to appeal to ethics.
It absolutely would affect you if that imagery is disseminated through any means capable of being seen by anyone you ever know or meet. It is oppressive because it requires you to allow others to exist with the same right. To me this is a facet of the right to privacy.
I'm not going to explore this topic more than I need to know.
What little I have seen of it, to me it seems like it is even worse in terms of effect on your nervous system than old-fashioned porn. It is way, way worse because with AI you're tightening the feedback loop, the "instant gratification" physiology in your body, more than ever before.