
My assessment of Midjourney, Stable Diffusion, and DALL-E is that they are good if you don't have anything specific in mind and your subject doesn't have specific components. (Try creating an accurate chess board. I have never been successful.)

So for many situations where we want something that is consistently good, graphic design skills are still necessary imo.




I've seen enough of their output now that I can recognize it immediately, and I interpret it as a signal of low effort and low quality. It's a glorified placeholder.

Art is supposed to express or communicate something. Typing in a prompt doesn't really express much.


Not sure if this is just my imagination, but I think I might have experienced the same phenomenon. On Instagram, I can look at an image of a person and very often guess that it's AI generated, even though it's a very realistic-looking image.


AI images seem to all have this sort of shimmer to them that makes them quite easy to identify, even without having to dig into the imperfections.


I notice a lot of the people look similar. It's as if most models have a fairly consistent idea of what a person of a certain ethnicity looks like.


I have a decent time if I use inpainting with enough hand-drawn scaffolding. I think the best use of these technologies is adding complexity to a drawing you've already created. Anything else doesn't impose enough constraint to give you control if you have a specific idea in mind.
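
For anyone curious what that workflow looks like in practice, here's a rough sketch using the Hugging Face diffusers inpainting pipeline. The model ID is a common public checkpoint and the file names are placeholders, not anything from my actual setup:

    # Sketch only: inpaint over a hand-drawn base image, letting the model
    # add detail inside a mask while preserving your scaffolding elsewhere.
    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    # base.png: your hand-drawn scaffolding; mask.png: white where the
    # model may paint, black where your drawing must stay untouched.
    init = load_image("base.png")
    mask = load_image("mask.png")

    result = pipe(
        prompt="detailed ink illustration, intricate shading",
        image=init,
        mask_image=mask,
        strength=0.75,  # lower values stay closer to the original drawing
    ).images[0]
    result.save("inpainted.png")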


100% agree with this.

Also, we still need people who truly understand that hands have five fingers and dogs have four legs.


We understand. Hands just suck to draw. They're a non-trivial shape with multiple appendages that have independent degrees of movement and angles (which makes them really hard to light; lighting is the biggest weakness of 2D gen AI right now), multiple types of material to consider (including nails and palm), slight deformation, and they ultimately need to be proportionate to the rest of a larger body.

Yet despite all that, we're really good at spotting such subtleties in hands, even at a casual glance. So it's a high standard for a very complex piece of anatomy.


It's not even that. The things don't understand what they're drawing. Ask for a handshake and you get a mutant sausage orgy.


This is about to improve across the board from a product perspective. To get a feel for this, try Krea or ControlNet or ComfyUI. You can precisely control the scene layout.
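
As a concrete (hypothetical) example of the kind of control ControlNet gives you, here's roughly how you'd constrain composition with an edge map via the diffusers library; the model IDs are real public checkpoints, but the file name and prompt are just illustrative:

    # Sketch only: steer a Stable Diffusion generation with a Canny-edge
    # ControlNet, so the output follows the layout of your input sketch.
    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Any rough edge map of the desired composition works as conditioning.
    layout = load_image("my_layout_edges.png")
    image = pipe("a chess board at the starting position", image=layout).images[0]
    image.save("out.png")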


If you can link to a chessboard created using those tools, with all of the pieces in the correct starting positions and the board in the correct orientation, I'll believe you.



