Hacker News
Stable Diffusion Prompt Book (openart.ai)
114 points by EMIRELADERO on Oct 28, 2022 | 22 comments



I know a lot of people think of things like Dall-E and SD as "fake easy art", but creating good prompts is harder than I initially thought. I'm glad guides like this exist.

I don't think these models will replace traditional artists, but they can be a good tool for drafts, and for testing the waters between an idea on paper and a tangible image.


I think different communities will just exalt the creative processes behind using AI tools

The number of creators will grow by orders of magnitude, clustering around existing art communities

so the good news is that we never have to listen to those other incumbent art communities ever again


There's a cool technique called "negative" prompt that I didn't see mentioned. Some of the community projects have implemented it. (I use https://github.com/AUTOMATIC1111/stable-diffusion-webui locally when I want to generate something).

You pass in both your prompt and a negative prompt at the same time. The negative prompt describes what you don't want in your image, and it'll try its best. There are some magic quality words like "jpeg artifacts" you can put in the negative prompt, and poof, the image is less messy now.
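A sketch of the mechanism, assuming the common implementation (as in AUTOMATIC1111's webui): each denoising step combines a prompt-conditioned noise prediction with an "unconditioned" one via classifier-free guidance, and the negative prompt's embedding simply replaces the empty prompt on the unconditioned side, so guidance steers away from it. Toy numbers below, not real model outputs:

```python
# Classifier-free guidance with a negative prompt, in miniature.
# The guided prediction extrapolates from the unconditioned
# prediction toward the conditioned one.

def guided_noise(cond_pred, uncond_pred, guidance_scale=7.5):
    """uncond + scale * (cond - uncond), element-wise."""
    return [u + guidance_scale * (c - u) for c, u in zip(cond_pred, uncond_pred)]

# Toy 2-D "noise predictions":
cond = [1.0, 0.0]      # direction of what you asked for
empty = [0.0, 0.0]     # no negative prompt (empty string)
negative = [0.0, 1.0]  # e.g. the embedding of "jpeg artifacts"

plain = guided_noise(cond, empty)            # [7.5, 0.0]
with_negative = guided_noise(cond, negative) # [7.5, -6.5]

# With a negative prompt, the guided prediction is actively pushed
# away from the negative direction (second component goes negative):
print(plain, with_negative)
```

The point is that the negative prompt isn't a filter applied afterwards; it changes the direction of every denoising step.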


Really nice PDF. I'm digging the homegrown feel of it. As someone just observing this and tinkering in the periphery, it's awe inspiring to see the amount of progress in such a short time on Stable-Diffusion. Thanks to the authors for putting this together and sharing it with us for free.


I am not personally a part of the whole AI thing because I don't have the GPU horsepower to make it work. What I am happy about is the fact that the Stable Diffusion people made it open, and the world did not collapse, nor did they go into financial ruin for doing so. OpenAI and all the proprietary FUD people claim that open sourcing leads to loss of incentive, loss of jobs, and other BS, but the fact that Stable Diffusion works and is appreciated by the community flies in the face of that argument.


There are a couple of free online services, like Paperspace and Google Colab, that let you use GPU resources for free via a notebook interface (aka Jupyter), if you're technically inclined.

https://github.com/Engineer-of-Stuff/stable-diffusion-papers...

https://colab.research.google.com/github/WASasquatch/StableD...

For a longer list: https://github.com/hashborgir/awesome-ai/blob/main/README.md...


At least one popular front-end for SD embeds loads of detail about the generation process in the generated PNGs, which is a real goldmine for figuring out how certain pictures were generated - to help in learning prompts, parameters, and tweaks.


Embeds in the title? Metadata?


metadata
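Concretely, the details usually live in PNG tEXt chunks; AUTOMATIC1111's webui is known to write them under a "parameters" key. A self-contained sketch using only the standard library, round-tripping a fake chunk to show how you'd read one from a downloaded image (the key name and the parameter layout here are assumptions, check your own files):

```python
import struct
import zlib

def chunk(ctype: bytes, body: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

def png_text_chunks(data: bytes) -> dict:
    """Collect all tEXt chunks (key -> value) from a PNG byte stream."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, val = data[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# Build a minimal 1x1 grayscale PNG carrying a fake "parameters" chunk:
params = "a cat, masterpiece\nNegative prompt: blurry\nSteps: 20, Seed: 42"
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + chunk(b"tEXt", b"parameters\x00" + params.encode("latin-1"))
       + chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
       + chunk(b"IEND", b""))

print(png_text_chunks(png)["parameters"])
```

One caveat: many image hosts strip ancillary chunks like tEXt on upload, so the metadata survives direct file sharing but not necessarily a re-encode.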


For comparison: here is the DALL-E 2 prompt book: http://dallery.gallery/wp-content/uploads/2022/07/The-DALL%C...


https://lexica.art/ has given me some great plug-and-play prompts.


Is human speech eventually going to converge to be more symbiotic with AI? If so, maybe we should start the prompt with "please".


Very underwhelming. Many concepts are explained incorrectly, like mixing styles.


Just a tiny bit of hubris to assume anyone fully understands prompt engineering at this point to assert something is blatantly "wrong". What about mixing styles do you think doesn't work?

I've been playing with ideas from the pdf for the last hour, and it's pretty awesome.


"Table of Content"


the anatomy of a Pikachu one on page 43, that's rich


No need to reinvent HTML. The static fixed-width layout doesn't render properly.

From “oh nice” to “useless” in 15 seconds.



They're referring to the website. It renders terribly


Right, hence the link to the PDF


Which is REALLY nice. Thanks!


Cheers



