Hacker News | joelS's comments

This is an amazing writeup, thank you. Looking forward to going through it in more detail.


Very nice! I've had trouble working through AlphaFold in the past, so this is going to be very helpful.


Hi all, we created a virtual world to explore and generated images together in the latent space.

We’re nostalgic for hangouts on runescape, and wanted a world where you can find your aesthetic niche and make art with the people there.

Happy to answer any questions!


Tuner is based on techniques from the GAN days of discovering latent space directions, but applies them to diffusion models! This makes it possible to control many independent attributes (age, hair, etc.) without prompting.
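For intuition, the core move in latent-direction editing (whether the latent comes from a GAN or a diffusion model) is just shifting a latent code along a discovered attribute direction. Here is a minimal sketch; the function and variable names (`edit_latent`, `age_dir`) are hypothetical, not from any particular library:

```python
import numpy as np

def edit_latent(z, direction, strength):
    """Shift a latent code along a (normalized) attribute direction.

    z:         latent vector, shape (d,)
    direction: discovered attribute direction (e.g. "age"), shape (d,)
    strength:  signed scalar; positive/negative moves along/against it
    """
    unit = direction / np.linalg.norm(direction)
    return z + strength * unit

# Edits along (roughly) orthogonal directions can be stacked,
# which is what lets attributes be controlled independently.
rng = np.random.default_rng(0)
z = rng.normal(size=512)          # stand-in latent code
age_dir = rng.normal(size=512)    # stand-in "age" direction
edited = edit_latent(z, age_dir, strength=2.0)
```

Because the direction is normalized, `strength` directly controls how far the code moves, so the same slider value produces a comparable edit magnitude for every attribute.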


Yes! Using the init_image.


Hi! Joel, one of the creators of ProsePainter, here. It's all open source: https://github.com/Morphogens/prosepainter and you can see the original announcement, with some examples of users' works, here: https://twitter.com/StudioMorphogen/status/14965783377910456...

Happy to answer any questions!


Hi, really amazingly cool, thank you!

1. Sorry if this is an inappropriate/silly question. I don't know much about this field.

What I would love is a plain CLI version where I can pass the text string, the number of frames, and maybe the image size as options, and it saves an image of every frame to a file, for making video. Would that be easy? I.e., starting from a blank canvas each time and painting on the whole image. Or maybe there is already something that does that. How can I do that?

Even more than the individual images, I like seeing how it changes over the 30 frames. Being able to save however many frames I like would be so great.

2. Why does it change so much at the beginning/end of each 30 frames? The first one is often very different from what came before.

3. What are the images it's using for its knowledge? Maybe you've written about all this online, hope so. Thanks again.


Amazing. Very cute. I like the "surreal", "picasso" and "brutalist" interpretations.


I guess I'm missing something in the demo; every time I press start, I get a message: "You must apply the prose-paint to the painting first!"

I watched the demo from the tweet above and I think I am doing it correctly.

Edit: I was using an old version of Chrome, which was the problem.


Author here, happy to answer any questions.


Very cool! Artbreeder does have an animation tool, just go to create -> category -> animate

best, joel


Hey! So I feel a lot of creativity is 'combinatorial', i.e. knowing which two things might go well together. Artbreeder kind of gamifies that by making it very easy. Many images may look similar, but some people can really develop their own style with time.

Also, often my favorite part of Artbreeder is when artists take what they save as the inspiration or building blocks for full works. It's really an inspiration tool, but 'inspiration-breeder' is a mouthful.

More generally, I think computation can meaningfully augment human creativity by providing surprise and breaking us out of our loops.

Best, Joel


How was the training data acquired though?


Humans generate art by selection. There's no reason you couldn't have an art ecosystem where AIs do all the generation, humans do the selection, and that's how it bootstraps. AI Dungeon 2 and 15.ai are already taking steps to close the loop by using human interactions to score outputs and train on them ("preference learning").


Marking them interesting doesn't update the model. If you click on any image you can 'breed' it with others, or go to the create page to compose or upload something!

