Show HN: Let the computer be your unique t-shirt designer with Stable Diffusion (thisshirtexists.com)
48 points by mpaepper on Oct 4, 2022 | 61 comments



"Boobs" and "Ass" are forbidden.

But "Scantily clad woman" generated some, uh, interesting results.

Edit: If you desire to be stunned, try "Scantily clad male".

Or just "Scantily clad": so much NSFW, I'm like "what?"

Warning: blurry nips https://lexica-serve-encoded-images.sharif.workers.dev/sm/1f...

Or if you're the Jeffrey Dahmer type, "Man with hotdog" produces a human-sausage hybrid:

https://lexica-serve-encoded-images.sharif.workers.dev/sm/05...

If you want to get an actual male-appendage-looking result: "Man crotch bulge"

Alright, that's enough. My work here is done, I'm out!


FYI: I bought a box of peaches last weekend and was munching on one just now, so, it being the first thing on my mind, I searched "512 peaches". I got some NSFW results.


Haha, wow, that's not a prompt where I would've expected NSFW!


Thanks for the hint, I'm not a native English speaker, so I probably need to enter a couple more forbidden words.


Maybe the alternative is to not forbid NSFW words?


Yes, I was torn about not censoring at all, but some results friends pointed out seemed too bad to be seen on my site, so I built a small filter for now.
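A small filter like that can be sketched as a simple word blocklist (the word list and function name here are hypothetical, not the site's actual code):

```python
import re

# Hypothetical blocklist; the site's real list isn't public.
FORBIDDEN = {"boobs", "ass", "nsfw"}

def is_allowed(prompt: str) -> bool:
    """Reject prompts that contain any forbidden word as a whole word."""
    words = re.findall(r"[a-z']+", prompt.lower())
    return FORBIDDEN.isdisjoint(words)
```

Whole-word matching avoids blocking harmless words like "assassin", but as this thread demonstrates, synonyms slip right past any static list.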


I noticed “a beige-pink veiny pickle” on some prompt engineering site as a substitute for penis. Goes to show you that there will always be dicks made in any creative space, no matter what restrictions are in place.


I typed "xvideos" as the query; I immediately regretted that decision.


Entered "Mickey Mouse", "Batman", "Microsoft", "Calvin and Hobbes", and "pokemon" into the generation field.

All of the above appear to generate copyrighted/trademarked images. Careful.


Whenever there's a discussion about copyright and AI-generated images, there will be people claiming that AI can't possibly reproduce images it has been trained on. Stable Diffusion and similar software supposedly only learns higher concepts and hence no one can claim that its outputs are derived works. The rationale I often see mentioned for this argument is that the model is very much smaller (in terms of bytes) than the set of images used to train it, so verbatim images can't possibly be stored in it.

Yet, there are plenty of examples where it will reproduce well-known works out of it. People have reported images with recognizable watermarks [1]. How do you reconcile this fact with the previous claim? Is there a special part of the model that stores Mickey Mouse, Batman and Getty Images watermark verbatim, but other less-well known artists' art gets reduced to mere concepts?

When you put in a prompt like "magnificent owl" and get an unrecognizable owl artwork out of the machine, I wonder: is that really a novel artwork, or is it unrecognizable merely because the original was uploaded by some random person to an unknown DeviantArt account 5 years ago, seen by 10 people in total since, and just barely changed enough by the AI that a similarity search through billions of images in the training set won't find it?

[1] https://news.ycombinator.com/item?id=33044611


For a well-known cartoon figure, the issue isn't copyright, it's trademark. If someone types "Mickey Mouse" and this site has Stable Diffusion design a Mickey Mouse shirt and sell it, they are infringing Disney's trademark: they are selling a Mickey Mouse shirt without Disney's permission. It's the same if they hire an artist to do an original Mickey Mouse design.

It will be interesting to see how this plays out. I think it's much more likely that trademark issues will bring the hammer down on some of these projects than copyright issues, because I think it will be easier to persuade a judge: the site lets people type "Calvin and Hobbes", then makes and sells them a "Calvin and Hobbes" shirt, without having a license to do that.


I see it this way: I provide a means for users to generate what they like, but I'm not mass-selling those designs. If a single person uses t-shirt pens to paint Mickey Mouse on their shirt, Disney would probably not sue them. Also, there is no way of knowing whether I sold any Mickey Mouse shirts.


If you allowed users to upload their own images, and those images were of Mickey Mouse, and you printed them and sold them, Disney would have a pretty easy claim against you.

You're not just selling a neutral tool here, you're actually printing the potentially infringing art.

> Also, there is no way of knowing whether I sold any Mickey Mouse shirts.

Think this statement through. I can come up with a couple of ways for a rights-holder to figure out if you're selling merchandise that infringes on their marks. They can attempt to order one themselves. They can see a tweet where someone shares the cool t-shirt they bought from you.

You're probably not actually in much danger here, but I would still take the hazard seriously if I were you. Be careful.


Yep, that's correct. Thanks for the hints!

Currently, I review all created designs before printing, so that would be a way of not infringing, i.e., canceling orders which are potentially infringing.


That should be enough. A block list of prominent trademarked characters might also be possible. That would still leave a huge space of possible ideas.


Indeed. Here is an example of a company ordering trademark-infringing goods directly from a vendor, then successfully pursuing legal action against that vendor:

https://www.npr.org/2022/09/13/1122820376/the-good-the-bad-a...


I mean, how is that any different than what we have now?

I guess the ability to feign ignorance that a certain work is trademarked? I mean, what happens if you have a program that draws random shapes and 3 circles line up in a way that looks like the Disney logo?

I get what you're saying, just sorta thinking out loud, lol


I would think what we have now already covers everything. It doesn't matter _how_ you produced an infringement: if you make something too close to Mickey and sell it, you're infringing _(I'm speaking loosely, ofc)_.


Is it also illegal for me to upload an image of the mouse and get it printed on a t-shirt?


Technically, yep


But in that case, the problem wouldn't be mine, but that of the one making money off it, wouldn't it?


This is super tricky. I wonder how this plays out in the end.


Well, T-shirt companies already let you upload any image you want, and usually have a human reviewer checking for copyright violations. So depending on the risk profile of the manufacturer, they will either use the same system or just wing it and hope not to get noticed.


My guess is that 2-5 years from now, copyright/trademark suits will have removed the current iterations of Stable Diffusion from widespread use.

The idea and the technology are awesome, but one must be very careful with the training data.


It's not illegal, it's just potentially expensive.


Author here:

I wanted to print a Stable Diffusion design on a t-shirt as a birthday gift, but I needed to solve some technical challenges to do that (quality, upscaling, etc.). Then I thought it'd be fun to provide this as a service for others.

You simply enter your design wish as text, a design is generated for you via Stable Diffusion, and you can see how it looks on a t-shirt.

Ask me any questions!


My standard test prompt has become "John Wayne as Captain America". Your images were returned > 10x faster than on other stable diffusion generators and generated many more images. But they were of lower quality than I've seen before. Hopefully you can find a way to give visitors more compute, but I can understand how that could get expensive.


That's interesting, thanks! I actually generate one image per query and use the Lexica API to sample others that are similar/related.


I was interested in doing the same, but the economics don't make sense yet, IMHO, if you really run queries against the models. Do you just text-match similar queries on lexica.art?

Edit: NVM just saw that they provide an API…


I do both: I generate one image on demand and query others from Lexica :)
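For anyone curious, Lexica's search endpoint at the time was a plain GET request; here is a rough sketch (the response shape is assumed from how the public API behaved around 2022 and may have changed since):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://lexica.art/api/v1/search"

def search_url(prompt: str) -> str:
    """Build the search URL for a text prompt."""
    return f"{API}?{urlencode({'q': prompt})}"

def related_images(prompt: str, limit: int = 8) -> list:
    """Fetch image URLs for prompts similar to the given text.

    Assumes the response is JSON with an 'images' list whose entries
    each carry a 'src' URL (an assumption about the response shape).
    """
    with urlopen(search_url(prompt)) as resp:
        data = json.load(resp)
    return [img["src"] for img in data.get("images", [])][:limit]
```

Combining one on-demand generation with a cached search like this keeps per-query cost down, since only a single image per prompt actually hits the paid generation API.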


I see, nice! It would be great if you could add the prompt to the shirt, I think that would be a great conversation starter ;)


Interesting idea, I might consider having it as an option. Thanks!


How are you planning to deal with the trademark issue? If your customers ask you to make them a shirt based on a cartoon character owned by a company with an aggressive legal department, you might face trouble.


How would they know that a customer asked?


Because the trademark is right there in the prompt? Like "Mickey Mouse on the moon" or some such?


Hey, would you be open to an interview? I've been moonlighting on a generative AI channel, and I've also been playing around with some schemes using Stable Diffusion in similar ways ;)


I might. Send me a private message on Twitter: Twitter.com/mpaepper Thanks!


How are you handling the printing / logistics aspect? Those are a black box to me. :)


There are white-label producers who do this for you.

Have a look at Spreadshirt, shirtee or Picanova for a start.


It's called dropshipping. I have a provider who gets the GAN-upsampled image and the order info. They print the shirt and ship it to the customer.
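The upscaling step matters because of print-resolution arithmetic: a raw 512 px Stable Diffusion output is tiny at print resolution. A quick sketch of the math (300 DPI and the 4x factor are typical print values, not figures stated in the thread):

```python
def print_size_inches(pixels: int, dpi: int = 300) -> float:
    """Physical size a pixel dimension prints at, for a given DPI."""
    return pixels / dpi

def upscale_factor(pixels: int, target_inches: float, dpi: int = 300) -> float:
    """Scale factor needed so the image prints at target_inches at the given DPI."""
    return (target_inches * dpi) / pixels

# A raw 512 px output prints at only ~1.7 inches at 300 DPI;
# a 4x GAN upscale to 2048 px yields ~6.8 inches, a plausible shirt-print size.
```

This is why a superresolution model sits between the generator and the printer: without it, the design would either print postage-stamp sized or come out visibly pixelated.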


This concept could be even more interesting with e-ink clothes that you could update on a daily basis with unique AI generated art.


That's a very cool idea :) I would definitely wear that!


This just seems way too fast. I'm betting they have a ton of pre-generated stable diffusion images taken from somewhere (midjourney maybe?) and then are just finding a nearest match with your input text rather than spending money on actual computations. I'm not convinced this is actually generating new content.


Yes, I use the lexica API and in addition generate a unique image with stable diffusion!


Then please mark which image is generated and unique and which is a copy. There are obvious copyright issues as I'm sure the licensing from lexica will be different to the image you generate yourself.


No, the license is the same: both are CC0, free for all.


They use lexica.art pregenerated images; it's also mentioned on their website.


I don't see where they say this. I see that they say you can upload your own stable diffusion images and that you can use images from lexica.art as inspiration.

""" You can upload previously created stable diffusion images. Consider the awesome https://lexica.art website for inspiration. The images from there can directly be used as upload images on this website as well. """

They heavily suggest that your prompt runs Stable Diffusion and that the generated image is uniquely yours.

See this image https://www.thisshirtexists.com/wp-content/uploads/2022/09/S...

which advertises the following text

""" Specify your subject

Generate Art through AI

Create your unique shirt now!

Order your unique shirt! """

This strongly suggests you are getting a unique image from your prompt not a nearest match from somebody else.


Sorry, it might not be super clear everywhere on the site. That's because initially I only generated a single image per query, but recently I added the Lexica API on top, to showcase more pictures per query as inspiration.


I played with this for a few minutes, having not yet played with any of the Stable Diffusion things out there. Some of the stuff it generates is so weird to me (and maybe many people), but what's really trippy to think about is that it has no concept of weird or not - it's just generating stuff based on models. That seems weird to me given that art is supposed to be emotion, and this is completely emotionless by definition. I'm sure I'm not the first person to come to this conclusion, and won't be the last, but it was remarkable enough to me that I figured I'd share.


You can somewhat influence the emotions by the words you enter, though. If you use intense words, it will take that into account, as it has learned to do so from training on a lot of image-word pairs.


Typing “clean and tidy church stained glass” generates a few of the usual semi-on-target results, and also a fetching stylized image of a naked man caressing another who is masturbating in what appears to be a sauna [1].

I wonder if the technique of substituting “gl**s” for “glass” in the input field might have the inadvertent effect of making the results more porn-y rather than less.

[1] https://lexica-serve-encoded-images.sharif.workers.dev/sm/35...


I also wrote a blog post about stable diffusion - how it works technically.

In case you want to go deeper down the rabbit hole:

https://www.paepper.com/blog/posts/how-and-why-stable-diffus...


That's a really cool / commercial application of the technology. I can see how this would get some traction.


Thank you :)


Wondering, what does your daily server cost look like? I'm planning to play with SD, but isn't it expensive to run?


I have a small server running the main application, which is quite cheap, but the Stable Diffusion generations run via an API which costs around 1€ / 100 image generations.

This Show HN cost me around 150 € via the API so far and I had 3 orders, so I guess this will be more of a cost center :D
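The break-even arithmetic behind that, as a quick sketch (the per-shirt margin is my assumption; the thread only gives the API price and the total spend):

```python
def breakeven_orders(api_spend_eur: float, margin_per_shirt_eur: float) -> float:
    """Orders needed for shirt margins to cover the generation API bill."""
    return api_spend_eur / margin_per_shirt_eur

# 150 EUR of API spend at 1 EUR per 100 generations is ~15,000 images;
# with an assumed 10 EUR margin per shirt, 15 orders would break even,
# so 3 actual orders leaves the rest as a cost center, as the author says.
```

This is why mixing in pre-generated Lexica results per query (instead of generating every shown image) makes economic sense for a free-to-browse site.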


That's nice! So Stable Diffusion charges per API call and there's no need to set up a server for that?


Yep, correct. But it's expensive if you have traffic :)


Canva can do the same.


Do they automatically upsample the images to print quality with a GAN model? That's what I'm doing in the background.



