FYI. I bought a box of peaches last weekend and was munching on one just now, so, with peaches being the first thing on my mind, I searched "512 peaches". I got some NSFW results.
Yes, I was torn about whether to censor at all, but some results friends pointed out seemed too bad to be seen on my site, so I built a small filter for now.
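The comment doesn't say how the filter works, so here is only a minimal sketch, assuming a plain keyword blocklist applied to the prompt before generation (the blocklist terms and function name are hypothetical):

```python
# Minimal sketch of a prompt filter using a whole-word blocklist.
# The real filter on the site is not described; terms here are illustrative.
import re

BLOCKLIST = {"nsfw", "nude", "naked"}  # hypothetical terms

def is_allowed(prompt: str) -> bool:
    """Reject the prompt if any blocklisted term appears as a whole word."""
    words = re.findall(r"[a-z]+", prompt.lower())
    return not any(word in BLOCKLIST for word in words)

print(is_allowed("512 peaches"))           # True
print(is_allowed("nude figure painting"))  # False
```

Matching whole words rather than substrings also avoids false positives like "glass", which a cruder filter apparently masks as "gl**s" further down this thread.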
I noticed “a beige-pink veiny pickle” on some prompt engineering site as a substitute for penis. Goes to show you that there will always be dicks made in any creative space, no matter what restrictions are in place.
Whenever there's a discussion about copyright and AI-generated images, there will be people claiming that AI can't possibly reproduce images it has been trained on. Stable Diffusion and similar software supposedly only learns higher concepts and hence no one can claim that its outputs are derived works. The rationale I often see mentioned for this argument is that the model is very much smaller (in terms of bytes) than the set of images used to train it, so verbatim images can't possibly be stored in it.
Yet there are plenty of examples where it reproduces well-known works. People have reported images with recognizable watermarks [1]. How do you reconcile this fact with the previous claim? Is there a special part of the model that stores Mickey Mouse, Batman and the Getty Images watermark verbatim, while other, less-well-known artists' art gets reduced to mere concepts?
When you put the prompt "magnificent owl" into the machine and get back an unrecognizable owl artwork, I wonder: is that really a novel artwork, or is it unrecognizable merely because the original was uploaded by some random person to an unknown DeviantArt account 5 years ago, seen by 10 people in total since, and changed just barely enough by the AI that a similarity search through billions of images in the training set won't find it?
For a well-known cartoon figure, the issue isn't copyright, it's trademark. If someone types "Mickey Mouse" and this site has Stable Diffusion design a Mickey Mouse shirt and then sells it, they are infringing Disney's trademark: they are selling a Mickey Mouse shirt without Disney's permission. It's the same as if they hired an artist to do an original Mickey Mouse design.
It will be interesting to see how this plays out. I think it's much more likely that trademark issues will bring the hammer down on some of these projects than copyright issues, because I think it will be easier to persuade a judge: "they let people type 'Calvin and Hobbes', then make and sell them a 'Calvin and Hobbes' shirt, and they don't have a license to do that."
I see it this way: I provide a means for users to generate what they like, but I'm not mass selling those designs.
If a single person uses t-shirt pens to paint Mickey Mouse on their own shirt, Disney would probably not sue them.
Also, there is no way of knowing whether I sold any Mickey Mouse shirts.
If you allowed users to upload their own images, and those images were of Mickey Mouse, and you printed them and sold them, Disney would have a pretty easy claim against you.
You're not just selling a neutral tool here, you're actually printing the potentially infringing art.
> Also, there is no way of knowing whether I sold any Mickey Mouse shirts.
Think this statement through. I can come up with a couple of ways for a rights-holder to figure out if you're selling merchandise that infringes on their marks. They can attempt to order one themselves. They can see a tweet where someone shares the cool t-shirt they bought from you.
You're probably not actually in much danger here, but I would still take the hazard seriously if I were you. Be careful.
Currently I review all created designs before printing, so that would be a way of not infringing, i.e., cancelling orders which are potentially infringing.
Indeed. Here is an example of a company ordering trademark-infringing goods directly from a vendor, then successfully pursuing legal action against that vendor:
I mean, how is that any different than what we have now?
I guess the ability to feign ignorance that a certain work is trademarked? I mean, what happens if you have a program that draws random shapes and 3 circles line up in a way that looks like a Disney logo?
I get what you're saying, just sorta thinking out loud, lol
I would think what we have now already covers everything. Doesn't matter _how_ you produced an infringement; if you make something too close to Mickey and sell it, you're infringing _(I'm speaking loosely, ofc)_.
Well, t-shirt companies already let you upload any image you want, and they usually have a human reviewer checking for copyright violations. So depending on the risk profile of the manufacturer, they will either use the same system or just wing it and hope not to get noticed.
I wanted to print a Stable Diffusion design on a t-shirt as a birthday gift, but needed to solve some technical challenges to do that (quality, upscaling, etc.).
Then I thought it'd be fun to provide this as a service for others.
You simply enter your design wish as text, Stable Diffusion generates a design for you, and you can see how it looks on a t-shirt.
My standard test prompt has become "John Wayne as Captain America". Your site returned images > 10x faster than other stable diffusion generators and generated many more of them, but they were of lower quality than I've seen before. Hopefully you can find a way to give visitors more compute, but I can understand how that could get expensive.
I was interested in doing the same, but the economics don't make sense yet IMHO if you actually run queries against the models. Do you just text-match similar queries on lexica.art?
How are you planning to deal with the trademark issue? If your customers ask you to make them a shirt based on a cartoon character owned by a company with an aggressive legal department, you might face trouble.
Hey, would you be open to an interview? I've been moonlighting on a generative AI channel - I've also been playing around with some schemes using stable diffusion in similar ways ;)
This just seems way too fast. I'm betting they have a ton of pre-generated stable diffusion images taken from somewhere (Midjourney maybe?) and are just finding the nearest match to your input text rather than spending money on actual computations. I'm not convinced this is actually generating new content.
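To illustrate what this suspicion amounts to (none of this is confirmed, and the prompts and filenames below are made up): with a pile of pre-generated images and their prompts, "generation" could be faked by returning the cached image whose prompt is most similar to the user's text, which is far faster and cheaper than running the model.

```python
# Sketch of the suspected nearest-match approach: index cached prompts,
# then return the image whose prompt best matches the user's text.
# Prompts and filenames are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cached_prompts = [
    "magnificent owl, digital painting",
    "john wayne as captain america, movie poster",
    "clean and tidy church stained glass",
]
cached_images = ["owl.png", "wayne.png", "church.png"]

vectorizer = TfidfVectorizer()
prompt_matrix = vectorizer.fit_transform(cached_prompts)

def nearest_cached_image(user_prompt: str) -> str:
    """Return the cached image whose prompt is closest to the user's text."""
    query_vec = vectorizer.transform([user_prompt])
    scores = cosine_similarity(query_vec, prompt_matrix)[0]
    return cached_images[scores.argmax()]

print(nearest_cached_image("majestic owl artwork"))  # -> "owl.png"
```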
Then please mark which images are generated and unique and which are copies. There are obvious copyright issues, as I'm sure the licensing of images from Lexica will be different from that of the images you generate yourself.
I don't see where they say this. I see that they say you can upload your own stable diffusion images and that you can use images from lexica.art as inspiration.
"""
You can upload previously created stable diffusion images. Consider the awesome https://lexica.art website for inspiration. The images from there can directly be used as upload images on this website as well.
"""
They heavily suggest that the prompt you give runs Stable Diffusion and that the image generated is uniquely yours.
Sorry, it might not be super clear everywhere on the site.
That's because initially I only generated a single image per query; recently I also added the Lexica API to showcase more pictures per query as inspiration.
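For reference, Lexica exposed a public search endpoint at the time; a minimal sketch of fetching inspiration images for a query could look like the following (the endpoint and the response fields "images", "src" and "prompt" are assumptions based on how the API was documented, so verify before relying on them):

```python
# Sketch of pulling extra "inspiration" images from Lexica for a query.
# Endpoint and JSON field names are assumptions; check Lexica's docs.
import requests

def lexica_inspiration(query: str, limit: int = 8) -> list[dict]:
    resp = requests.get(
        "https://lexica.art/api/v1/search", params={"q": query}, timeout=10
    )
    resp.raise_for_status()
    images = resp.json().get("images", [])
    # Keep only what a gallery needs: an image URL and the original prompt.
    return [{"url": img.get("src"), "prompt": img.get("prompt")} for img in images[:limit]]

for item in lexica_inspiration("magnificent owl"):
    print(item["url"], "-", item["prompt"])
```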
I played with this for a few minutes, having not yet played with any of the Stable Diffusion things out there. Some of the stuff it generates is so weird to me (and maybe many people), but what's really trippy to think about is that it has no concept of weird or not - it's just generating stuff based on models. That seems weird to me given that art is supposed to be emotion, and this is completely emotionless by definition. I'm sure I'm not the first person to come to this conclusion, and won't be the last, but it was remarkable enough to me that I figured I'd share.
You can somewhat influence the emotions through the words you enter, though.
So if you use intense words, it will take that into account, as it has learned those associations from training on a lot of image-word pairs.
Typing “clean and tidy church stained glass” generates a few of the usual semi-on-target results, and also a fetching stylized image of a naked man caressing another who is masturbating in what appears to be a sauna [1].
I wonder if the technique of substituting “gl**s” for “glass” in the input field might have the inadvertent effect of making the results more porn-y rather than less.
I have a small, quite cheap server which runs the main application, but the stable diffusion generations are run via an API which costs around 1 € per 100 image generations.
This Show HN cost me around 150 € via the API so far and I had 3 orders, so I guess this will be more of a cost center :D
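Taking these numbers at face value, the back-of-the-envelope math is roughly as follows (the per-order margin below is a made-up placeholder; everything else is from this comment):

```python
# Rough cost math using the figures mentioned above; margin is hypothetical.
api_cost_eur = 150            # spent on the generation API during the Show HN
cost_per_image_eur = 1 / 100  # ~1 € per 100 image generations
orders = 3

images_generated = api_cost_eur / cost_per_image_eur
print(f"~{images_generated:,.0f} images generated for {orders} orders")
# -> ~15,000 images generated for 3 orders

margin_per_order_eur = 10     # hypothetical profit per shirt
print(f"~{api_cost_eur / margin_per_order_eur:.0f} orders needed just to cover the API bill")
```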
But "Scantily clad woman" generated some, uh, interesting results.
Edit: If you desire to be stunned, try "Scantily clad male".
Or just "Scantily clad", so much NSFW, ..I'm like "what?"
Warning: blurry nips https://lexica-serve-encoded-images.sharif.workers.dev/sm/1f...
Or if you're the Jeff Dahmer type, "Man with hotdog" produces a human-sausage hybrid:
https://lexica-serve-encoded-images.sharif.workers.dev/sm/05...
If you want to get an actual male-appendage-looking result: "Man crotch bulge"
Alright, that's enough. My work here is done, I'm out!