AI real-time human full-body photo generator (generated.photos)
747 points by bookofjoe on Aug 23, 2023 | 358 comments



If you're wondering how it's so fast and cheap and they can generate variants so easily, it's because they're using GANs (see the footer). GANs are way faster than diffusion models because they generate the image in a single forward pass and their true latent space encoding makes editing a breeze.

(And if you're wondering how it can look so good when 'everyone knows GANs don't work because they're too unstable': that's a widespread myth, repeated by many DL researchers who ought to know better. GANs can scale to high-quality realistic images on billion-image-scale datasets, and they become more, not less, stable with scale, like many things in deep reinforcement learning. See for example BigGAN on JFT-300M https://arxiv.org/pdf/1809.11096.pdf#page=8&org=deepmind , GigaGAN https://arxiv.org/abs/2303.05511 , Projected GAN https://arxiv.org/abs/2111.01007 , StyleGAN-XL https://arxiv.org/abs/2202.00273 , or Tensorfork's chaos runs https://gwern.net/gan#tensorfork-chaos-runs . 'Computing is a pop culture'...)
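To make the speed and editing claims concrete, here is a toy sketch (plain Python stand-ins, not any real model API) of one-pass GAN sampling vs. iterative diffusion sampling, and why latent editing is cheap: it's just arithmetic on z before a single forward pass:

```python
# Toy illustration: the "generator" and "denoiser" are stand-in callables;
# the point is only how many network calls each sampling scheme costs.

def gan_sample(generator, z):
    # One forward pass: latent vector in, image out.
    return generator(z)

def diffusion_sample(denoiser, x, num_steps):
    # Iterative refinement: the network runs once per step (often 20-1000).
    for t in reversed(range(num_steps)):
        x = denoiser(x, t)
    return x

def gan_edit(generator, z, direction, strength):
    # Editing = moving the latent along a direction, then one forward pass.
    z_edited = [zi + strength * di for zi, di in zip(z, direction)]
    return generator(z_edited)
```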


> If you're wondering how it's so fast and cheap and they can generate variants so easily

I assume it's cheap because they are burning money to build a business; it's not fast at all, and the quality... sucks.

> And if you're wondering how it can look so good

I’m not.

I'm wondering why they’re trying to get people to use something worse than using a decent photorealistic SD1.5-based checkpoint with some basic prompt templating.

Not saying GANs can't be awesome, just that this site isn't what I’d use to make that case.


Looking at the poses, it feels optimized for generating porn. But one example someone showed had a child's face (good god, please don't let your "AI" system generate child anything if you want to sell it for porn purposes), and another user noted that their attempt errored out because it "detected nudity", even though other users are given a nude model by default.


I don't know if the thing in the crotch is a penis or a scrotum, but it is definitely NSFW:

https://images.generated.photos/0wV1dBnZ15hGneEfqfZT7SdEIill...

My prompt was simply "Standing in front of a rocket.".


I had one generate a fully nude, spread eagle male model w/ a very vivid vagina when I typed in "on couch". Tried it again w/ other randomly generated male and female models with nowhere near the same results w/ any of the other tries...


That image is so full of wtf


It's interesting that the image has a "mirrored along the X axis" kind of look.


It's neither, because this AI doesn't care for gender norms. /s

That looks more like a machine, an android perhaps, than an actual human.


Good thing you didn't ask for sitting.


Hah. I've been playing with StableDiffusion locally and have gotten a few abominations like this myself.


Whatever you do, don't take all those long prompts that include (((aesthetic))) and such and put them in the negative prompt with "human" in the positive prompt.

It's not nightmare fuel; it's a nightmare engine.


You asked for a phallic symbol; it gave you phallic symbols.


I’m going with that guy is a reptile in disguise.


Oh no, no, no, I'm a rocket man


> it feels optimized for porn

So I was relatively unimpressed with the first few images; they looked incredibly unrealistic. But then, after reading this message, I decided to add "topless" to the notes just to see how crazy the result was.

I was actually impressed: it returned an image that actually looks lifelike once I gave it an adult request. Which reinforces the idea that this was optimized for porn.

For those curious, this was the result I was provided. NSFW of course, topless female (AI generated): https://generated.photos/human-generator/64e6c26e190809000fb...



That image is really off-putting. It looks wrong in several ways I can identify and some that I can’t.


I can count a number of things wrong with that image on her right hand. 7 or 8.


7-finger porn!


We're not judging.


An AI-generated image is just a drawing, not a real human.



I believe anything depicting CP is illegal in the US as well


The Justice Department defines it as anything depicting a real, identifiable minor. Fictive persons are notably exempt from this definition (regardless of the realism of the depiction). Canada is different in that regard.


I made the mistake of opening this at work and had to instantly close it


[flagged]


To expand on what I take as your implied argument: some (small) percentage of people are pedophiles, meaning they're attracted to children. Presumably they can't help that, just as others can't change their sexual preferences. Clearly acting on this urge with an actual child is wrong. That's true whether it's directly assaulting a child, or consuming child porn, as that market encourages others to exploit children to generate it. However, if it is possible to produce CP without involving actual children, it could provide an outlet for those desires that would reduce demand for actual CP, and thereby reduce incidents of children being abused to produce it.

One could argue that such an outlet could even reduce incidents of direct sexual assault of children by pedophiles, but there is also a counter-argument that it would instead serve to "whet the appetite" and encourage such behaviour. And of course there are other counter-arguments; it could make actual CP more difficult to detect, for one. Finally there is the argument from the perspective of fundamental morality, that depicting children in a sexual manner is wrong in and of itself, and therefore the various potential effects are irrelevant. (Much like it's wrong to murder an innocent, even if you could harvest their organs and save five others as a result.)


>Finally there is the argument from the perspective of fundamental morality, that depicting children in a sexual manner is wrong in and of itself, and therefore the various potential effects are irrelevant. (Much like it's wrong to murder an innocent, even if you could harvest their organs and save five others as a result.)

I don't think those two scenarios are usefully similar. Murder is wrong because of its effects. This would be more like if you could murder someone without _anyone_ dying.


Murdering someone without anyone dying is the plot of many, many videogames since the 80s


In the 80s you were "murdering" a handful of pixels. It's really not comparable; nor was it in the 90s or 00s, even though ever more pixels were involved.

I have to say, though, that this argument starts making a lot more sense with current VR tech. Sometimes it's really disgustingly realistic.

While I'm still not in favor of banning violent games, I do think a valid argument is emerging. I've never truly felt fright playing on a 2D screen, but since VR, the spider-like headcrabs of Half-Life: Alyx really make me panic sometimes.

On the CSAM side, I'm just not sure if this would diminish or increase their interest. I would leave that up to someone who has studied for this kind of insight.


That's a good summary, thanks. I think AI generation will lead to actual child porn not making financial sense (hopefully, anyway). I also don't think the "whetting the appetite" argument is true, from other areas I've seen (e.g. playing violent games doesn't lead to becoming a murderer), but I have no data on that.


It's interesting to notice when utilitarian arguments are accepted and when they're rejected. The argument offered in favor of abortion without limits tends to be that women will get abortions regardless, they will just be dangerous. Presumably the greater good is served by allowing abortions despite the moral issues surrounding killing fetuses/unborn children. I have no trouble imagining many people supporting such a utilitarian argument for abortion but not for generated CP. Though I have a hard time making the distinction intelligible.


1. You are potentially giving a shield of deniability to people who create or distribute real CSAM because now they could claim that the images are just AI generated and therefore "harmless"

2. Efforts to stamp out real child abuse may be undermined by a flood of AI-generated false positive imagery

3. When people see something over and over again they start to think that it's normal. AI generation of this kind of material (something which can be done at a huge scale) risks normalizing the sexual abuse of children.

I'm sure there are many other arguments beyond these.


I see #3 as potentially valid, but also as applying equally to murder, drugs, and speeding, which are prevalent in movies, animation, and video games. Heck, TV shows catering to kids have variations on all 3.

Don't get me wrong. I have a built-in revulsion toward CP. But I wonder how much that built-in revulsion is helping vs. hindering rational discussion; just as it took me some 15 years of reading and pondering to remove the indoctrinated deference I had toward religion, and I hope I can now have a more stable, less knee-jerk discussion about religion. I think CP is wrong, but I also think murder and torture are wrong. The latter two are amply represented in entertainment media. What is the Venn diagram, what is the delta, that makes CP different?


Ok, you've convinced me. Before, I didn't have any opinions regarding LLMs and CP; I didn't even think people could generate CP with an LLM. Thanks mate.


> What exactly is the argument against AI-generated child porn?

Currently? The fact that all the models need training data, and the law will see that as victimizing the people who were used in the dataset, be they adults or children.

Overall? The fact that it's disgusting, and pedophiles deserve things which I can say IRL where everyone agrees, but which on HN will get me banned.

Many countries ban underage anime porn too. Children and their likeness are off limits.


It's natural (normal) to think child porn is disgusting.

But it's not hard to find many other sexual acts that at least some people, if not most people, find disgusting. So I'm not sure disgust is a useful criterion here.

Most of what makes pedophilia so disgusting is the fact that a child is involved. Remove the child from the equation and it's more in line with other sex preferences.

People can't really control their preferences. We've grown to understand that there are healthy, and unhealthy ways to satiate those preferences. Most of those revolve around the consent of those involved (taking as fact that children cannot consent.)

One could argue that AI allows for a healthy outlet for a pedophile.

You mention children being off limits. In other societies homosexuality is still criminal. Even where it is legal many consider it "off limits" or "disgusting". Laws, and emotions, in this space are not always logical.

There will be second-order arguments (think - the gays are trying to turn my child gay) but most of them fundamentally misunderstand how human sexuality works.


> You mention children being off limits. In other societies homosexuality is still criminal. Even where it is legal many consider it "off limits" or "disgusting". Laws, and emotions, in this space are not always logical.

In fact in a lot of those countries where fully consensual homosexuality is highly illegal, marrying (with all its implications) what we would consider a child is legal and "normal" and so is forced/arranged marriage (again with all that implies). I'm really lucky to have grown up here.


Certainly yes, different cultures have different norms.

As a complete aside, forced marriages are not the same as arranged marriages. In many cultures arranged marriages (which seem completely bonkers to someone raised with a western mindset) are common, work well, and are often good.

Clearly neither arranged marriages nor "choice marriages" are guaranteed to work. The divorce rate in the west bears this out. Both work well, both work badly.

Again arranged is not the same as forced - clearly forced marriages are not something I consider a good thing.


I agree with you that porn is disgusting, and that the people who view it are immoral. However, there are a lot of arguments to be made against capital punishment, murder, and other forms of vigilante violence. No nation with capital punishment is a stranger to wrongful execution.

https://en.wikipedia.org/wiki/Wrongful_execution

Encouraging state violence based on the fact that something is "disgusting" is not a good idea. Law needs to be grounded in concrete harm. The lines currently seem ill-defined, and existing laws in this area have been used against children (see State of Washington v. E.G., where an autistic child was charged with and convicted of distributing child pornography for taking a selfie and sending it to an adult).


Far be it from me to defend pederasty, but I'm quite sure I would disagree with the thing you wouldn't want to publish, IRL or not.


> What exactly is the argument against AI-generated child porn?

As something you generate in a photorealistic image generator you are building a business around?

The fact that it is a serious crime in many jurisdictions and, even where it isn't, photorealistic child porn images that get noticed anywhere are going to result in uncomfortable conversations for everyone involved in the process of establishing that they aren't evidence of a crime.


The argument against it is that the police and prosecutors don't care about your arguments.


Under English law creating a rough hand-drawn child porn sketch for your own amusement is a serious crime. I don't understand the rationale for this, but people should be aware that if they use a porn generator and it spits out an image that looks like CP then they will have committed an offence in England.


Probably moral depravity if I had to guess. Not sure why we even need “an argument” against it. It’s pretty self-evidently wrong.


What if it turned out it were the only effective way to prevent people engaging in real paedophilia?


> What if it turned out it were the only effective way to prevent people engaging in real paedophilia?

Paedophilia is not a synonym for child sex abuse, and to the extent that conjecture hasn't been ruled out by study already, that's at most an argument that it's an appropriate function for tightly-controlled research-supporting tools, not a general public business.


No real disagreement there. Ultimately my biggest concern is that real children are not exploited and damaged by experiences they're clearly not ready for - if using AI-generated pornography turned out to be a particularly effective way to achieve that (with no obvious negative side-effects, and to be clear I doubt that's particularly likely to be the case), then I would think we shouldn't dismiss it out of hand as something that must be banned just because it intuitively seems so awful.


You're suggesting that perhaps allowing people to explore an urge that should be fought and suppressed might dissuade them from pushing that urge further, resulting in real-world action? Fascinating, do tell me more.

Blatant disregard for children aside, the larger issue here is the presupposition that we should somehow permit all urges. Not every thought that passes through your brain should be "explored"; some things are just bad and should be fought.


Not suggesting either, just putting forward a hypothetical possibility. The idea of using AI to generate kiddy porn is off-putting to me in the extreme, and I can think of plenty of reasons why it's almost certainly a bad idea to permit it, but I'd be wary of relying primarily on my instinctual aversion to it to be so certain it should be banned outright.


While I agree it’s probably a good idea to avoid instinctual aversion in many things, I believe some things in life are too sacred to rely on “data”. Some things we just know to be true, and have to trust that whatever story data might tell isn’t a story we’re interested in.

Keep in mind that science can tell any story you want it to tell as long as you interpret the data in the way you need. That’s why some things, such as our children, cannot be left to slippery slopes such as this.

We’ve already got more than enough data proving that pornography is absolutely destructive and a net negative on all of humanity. I don’t think we need additional data to prove that bringing kids into the mix would somehow make it better.


That's a very big "what if". What data could demonstrate that to be true or false?


I did have exactly that thought when I posted it. It was more of a thought experiment than anything.


But it’s not.


Prisons?


I am also a bit disappointed that it's not photorealistic. We had better quality 3-4 years ago.


You're overstating the simplicity of scaling a GAN well.

GigaGAN is the best quality out of those and requires 7 loss functions and is incredibly complicated.

Sure, GANs can scale, but diffusion models are drastically easier to scale.


No, I'm not. BigGAN did fine on scaling up to JFT-300M with basically no changes beyond model size and a simple architecture. This is also what we were observing, even with a buggy BigGAN implementation. GigaGAN is the best quality, but that's mostly because it's also the biggest; as Table 1 shows most of the gains come from various kinds of additional scaling. (And this is moving the goalposts from the usual assertion that "GANs can't scale" to "they're harder to scale"; note the self-fulfilling nature of such assertions. Considering how there is next to no GAN scaling research, these results are remarkable and show how much low-hanging fruit there is.)

Diffusion models are only 'drastically easier to scale' because researchers have spent the past 3 years researching pretty much nothing but diffusion models, discovering all sorts of subtle issues with them and how to make them scale, which is why it took them so long to become SOTA, and why massive architectural sweeps like https://arxiv.org/abs/2206.00364#nvidia were necessary to discover what makes them 'easier to scale'. If this level of brute force and moon math is 'easy', lord save us from any architecture which is 'hard'!


Researchers spent several years just trying to create a GAN that could fit a distribution of aligned faces well (which resulted in StyleGAN1/2); with a simple U-Net, an epsilon-objective, and a cosine schedule you can fit much more complex distributions, still using one loss: L1/L2.

Reading your comments makes me feel like you believe every researcher (even extremely smart ones like Karras) switched to diffusion models because they are idiots, and that if they had instead focused on GANs, we would today have GANs as powerful as or more powerful than our diffusion models, and one-step to boot; this is just a weird delusion. Diffusion models are simply much easier to train (just an L1/L2 loss in most cases) and to write (see, for example, your buggy BigGAN implementation); they usually work out of the box on different resolutions and aspect ratios, and you can just finetune them if you want an inpainting model. As things stand, you need much less compute to reach good image coherency, or perhaps a coherency that has not been achieved by GAN models at all. For example, I would be curious, even at small scale, what a GAN (with ~55M parameters) could do after one or two GPU-days of training on the Icon645 dataset, because my diffusion model, I can assure you, is much better than I could have imagined, while being trivial to implement (I just implemented a U-Net as I remembered it, nothing rigorous, and of course no architecture sweep).
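To show what I mean by "one loss", here is a toy sketch of the epsilon-objective with a cosine schedule. The noise predictor is a stand-in callable rather than a real U-Net, so treat this as an illustration of the objective, not anyone's actual training code:

```python
import math
import random

def cosine_alpha_bar(t, T, s=0.008):
    # Cosine noise schedule: fraction of signal kept at step t (1 at t=0, ~0 at t=T).
    f = math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2
    f0 = math.cos(s / (1 + s) * math.pi / 2) ** 2
    return f / f0

def diffusion_loss(model, x0, t, T):
    # Corrupt x0 with Gaussian noise, then score the model's noise prediction.
    ab = cosine_alpha_bar(t, T)
    eps = [random.gauss(0, 1) for _ in x0]
    xt = [math.sqrt(ab) * x + math.sqrt(1 - ab) * e for x, e in zip(x0, eps)]
    pred = model(xt, t)
    # The entire training signal: plain L2 between predicted and true noise.
    return sum((p - e) ** 2 for p, e in zip(pred, eps)) / len(x0)
```

No discriminator, no adversarial game: one network, one regression loss.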


There are so many holes in this, but to pick just one:

> for example your buggy BigGAN implementation

The bug was that the batchnorm gamma was initialized to zero instead of one, so all the model params were being multiplied by zero instead of one. Literally any model that uses batchnorm is susceptible to this bug, and it certainly has nothing to do with GANs.
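A toy reproduction of that class of bug (a simplified batchnorm normalize-and-affine step, not the actual BigGAN code):

```python
def batchnorm_affine(x, gamma, beta, eps=1e-5):
    # Normalize a batch of activations, then apply y = gamma * x_hat + beta.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    x_hat = [(v - mean) / (var + eps) ** 0.5 for v in x]
    return [gamma * v + beta for v in x_hat]

activations = [0.2, -1.3, 0.7, 2.1]
broken = batchnorm_affine(activations, gamma=0.0, beta=0.0)  # gamma=0 zeroes everything
fixed = batchnorm_affine(activations, gamma=1.0, beta=0.0)   # gamma=1 preserves the signal
```

With gamma initialized to zero the layer's output is identically zero, regardless of the architecture around it; any model using batchnorm is susceptible.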

I think it’s too easy to spout dogma, and calling gwern delusional is something I’ve learned from experience will prove foolish as t approaches infinity.


It has to do with GANs in that the added complexity makes the model more difficult to debug. The other day a friend of mine had a problem with the normalization layers of his diffusion model; it was fixed almost immediately. If something goes wrong, you have less to worry about.

Also, no, I don't take "he is gwern" as an argument, haha.


Bookmarking this for when time proves you wrong after a couple years. I’ll drop by to say hello.


> Diffusion models are only 'drastically easier to scale' because researchers have spent the past 3 years researching pretty much nothing but diffusion models

This is what tends to happen when you find a superior method.

GANs are fine; they have plenty of promise for tasks requiring rapid inference. But diffusion models beat GANs on robustness and image quality every time.


Hi gwern, asking earnestly and without snark about this:

"And if you're wondering how it can look so good when 'everyone knows GANs don't work because they're too unstable', a widespread myth, repeated by many DL researchers who ought to know better, GANs can scale to high-quality realistic images on billion-image scale datasets, and become more, not less, stable with scale, like many things in deep reinforcement learning."

My interpretation from how this is written is that you are saying: "Researchers say that GANs are unstable at inference time." Is that the correct reading? If so, where have you seen this sentiment? More commonly I've heard that people criticize GANs at inference time for mode collapse and monotony (which comes from training), not instability.

Or are you saying that researchers say GANs are unstable during training, which is a common criticism? Don't you feel this is the case? A lot of different tricks are used to get generators that are roughly balanced with more powerful discriminators so that the adversarial game stays balanced, like TTUR and EnCodec's weight balancer, etc. In this case, are you saying that GAN training is as straightforward as diffusion training?
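(To be concrete about TTUR, since it's a good example of these heuristics: it is nothing more than giving the two networks different step sizes. The 1e-4 / 4e-4 pair below follows common GAN setups; the rest is a toy placeholder, not any real optimizer API.)

```python
def sgd_step(params, grads, lr):
    # Plain gradient descent on a flat list of parameters.
    return [p - lr * g for p, g in zip(params, grads)]

def ttur_update(g_params, d_params, g_grads, d_grads, g_lr=1e-4, d_lr=4e-4):
    # Two time-scale update rule: same optimizer, but the discriminator
    # takes larger steps than the generator to keep the game balanced.
    return sgd_step(g_params, g_grads, g_lr), sgd_step(d_params, d_grads, d_lr)
```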

In my experience, GAN training involves an unholy number of heuristics and best-practices are still quite murky.

I am eager to hear your response.

[edit: my experience with GANs and diffusion models isn't about taking image problems with existing working models and scaling or refining them, but about applying GANs and diffusion models to different domains (audio) and on novel problems with different kinds of conditioning. I would love to see more controlled experiments comparing weak and sophisticated generative backbones as the experimental control and varying the training regime across different GAN + diffusion flavors.]


Trying it out just now, it's still getting major anatomical features wrong, and hands/fingers are still a noticeable problem.

eg (NSFW!): https://generated.photos/human-generator/64e741e6c4c0f80009e...

Wrong number of limbs, wrong number of fingers on hands, etc.

The result is ok sometimes, but that's probably only about 5% of the time. Here's a decent result (very NSFW) without immediately noticeable deformities:

https://generated.photos/human-generator/64e7590e482612000b1...

Probably 95% of the images generated so far have been clearly deformed though.


How can it look so good? ROFL!! It just created a guy with a hand coming off his left ankle in place of a foot, and toes or fingers or something poking out the end of the shoe on his right foot! https://generated.photos/human-generator/64e682308448b8000c5...


Four hands is much better for climbing industrial installations so you can hang in mid air next to them


If it isn't my old friend, Willie McGreg. With a leg for an arm and an arm for a leg!


>they're too unstable', a widespread myth

>See for example BigGAN

I remember that when you tried training a BigGAN model on anime images, the quality was bad. Now look at this example, one single GPU and 1.5M images with a diffusion model: https://medium.com/@enryu9000/anifusion-diffusion-models-for... The difference in quality is absurd. You can say this or that is not true, but the quality speaks for itself: obtaining good quality on a complex distribution is much easier with a diffusion model than with a GAN.

For example, in the case of the site linked, they have conditioned the model on poses, because you're not going to get anything close to coherent without them with the simple StyleGAN they say they're using.


> I remember when you try training a BigGAN model on anime images, the quality was bad

Because there was a bug in the code, in a part unrelated to the GAN itself.

> the difference in quality is absurd

Yes, it does help to train on anime with code that isn't buggy. (BTW, Skylion was getting good results with GANs on anime similarly restricted to centered figures like those samples; he just refuses to ever publish anything.)


So you believe that without the bug you would be able to come close to the quality of the diffusion model I linked? I'm not even asking about using the same compute (1 GPU for ~1 month), just whether you believe BigGAN can come close to that in general.

Also, the bug is probably related to the added complexity of training a GAN model compared to a diffusion model.


Broken link?


Fixed thx


> everyone knows GANs don't work because they're too unstable

Is that a widespread myth? I thought it was widely accepted that GANs are really good at generating these artificial pictures (it's what started DeepFake, after all) when you know your model's "button". Similar to how this uses a GAN, since they have a model "boundary condition". While humans are diverse, we have a set of repeatable features (two legs, two arms, etc.). Diffusion models are great because you can control the latent space with something way more generic, like text, hence why they've been so much more mainstream.

Edit: actually I might be misremembering, I think Deepfake used VAEs?


It is very widespread. You will see people in this very thread dismissing GANs as fundamentally failed, and hotly objecting to any kind of parity, even if they have to fall back to 'well ok GANs do scale, but they're more complicated'. I also have some representative quotes in my linked draft essay from various papers & DL Twitter discussions. (Another way to put it would be: when was the last time you saw someone besides me asserting that GANs can scale to high-quality general images and are not dramatically inferior to diffusion? I rest my case.)


> Another way to put it would be: when was the last time you saw someone besides me asserting that GANs can scale to high-quality general images and are not dramatically inferior to diffusion? I rest my case.

Get off of Twitter, maybe? I don't know but I have seen a ton of work done on large scale GANs. The incentives are pretty fucking obvious - they are faster to use. Of _course_ people want them to work for the same usecases as diffusion.

Maybe they will someday! But they aren't currently. You're referencing a zeitgeist that I am not familiar with, and therefore come across as overly defensive.


Sounds very important to you that you don't have to change the premise of your essay or ever admit you're wrong. No one here is dismissive of GANs without justification. They're fine. They don't beat diffusion, but they're fine.

You come across as severely, _severely_ biased and reactionary.


>And if you're wondering how it can look so good

I don't think anyone is wondering this, especially if they are used to playing with diffusion models.


Yeah, honestly this whole comment thread is a testament to how quickly generative AI has moved in the past year. Even in a hacker community like this, where we ought to be close in terms of realizing where current tech stands, we're on different pages about what's possible. Photorealism is pretty much already here, and this website is a poor example of current generative AI. The most it has going for it is an approachable UI.


What I’m really wondering is how can this be free. What is the business model here?

Are they using me to refine the model in some way?


Human evaluation is super expensive, so yes. Seeing which sessions you discard and keep is alone worth the compute time, especially if it's GAN-based.


It says free for non-commercial. If commercial, contact us. I assume they plan on paying the bills with commercial work.


There was a whole community around ESRGAN img2img finetuning, kinda like the Stable Diffusion LoRA community... albeit a much, much smaller one.


I’ll take this opportunity to mention our research scaling StyleGAN 2 to larger datasets (using LAION) on food images, leveraging free TPU compute through the TRC program.

We trained for 36 days on a v4-8 on 558k images.

https://nyx-ai.github.io/stylegan2-flax-tpu/

We were hopeful GANs would beat diffusion models when trained on specific domains. But we've now switched to Stable Diffusion and Dreambooth training, which has proven much more efficient for this purpose.

I still have hopes for GANs! I miss their insane inference speed.


Maybe it was fast 58 minutes ago, but apparently it is now at peak capacity. Even if you don't get rejected, a new image takes minutes.


While there's discussion on the topic here - are there any resources that can explain the exact mechanism of how a GAN works for image generation? I have a rough idea of how diffusion models work, but I'm still no AI researcher.


I get that most of these are hilarious (this is my favorite comment on HN in some time, https://news.ycombinator.com/item?id=37239909 ). But still, I find this incredibly frightening. These are only going to get better. Does anyone doubt that in a couple years time (if that) we'll be able to put the image of any known public person into whatever generated photo we want, which would be indistinguishable from reality? We're not that far already (see the Pope in a puffy jacket).

My only hope is that this extreme enshittification of online images will make people completely lose trust in anything they see online, to the point where we actually start spending time outside again.


We're basically there right now between Deepnude[1] and Photoshop.

[1] (NSFW, seriously.) https://deepnude.cc/


> The most powerful image deepfake AI ever created. See any girl clothless with the click of a button.

This is just disgusting. I thought it would just be an uncensored generative AI, but I certainly wasn't expecting peeping-tom-as-a-service. And advertising it so blatantly as being able to virtually strip any girl you have pictures of just makes me sick.


I agree it's gross, but I'm not sure how to articulate why. It's making real something people have done in their minds, that is, imagining people with no clothes on. These pictures aren't the actual subject naked. There's nothing being discovered or disclosed. It's pure fiction. But it still bothers me.


It is fiction yes, but if it is lifelike enough does the difference matter? Even without the ick factor of making porn of someone without their consent, it would be so easy to destroy someone's career or relationship by making these deepfakes and then spreading them around. Especially once the tech gets more life-like and loses the current AI gen tells.

And the same would apply to doing this the old fashioned way in Photoshop, however you have to admit that taking it from "need special software and experience in using that software" to "just upload their image to a website and get back their AI generated nudes" is a huge change in how accessible this is.


> it would be so easy to destroy someone's career or relationship by making these deepfakes and then spreading them around.

If nude pics can get you fired, work culture needs to change.

Same with relationships.

Basically, people need to learn not to trust digital media at all without some kind of authentication, and to be a little more tolerant of nude human bodies when they do pop up.


Potentially the opposite. It may become more difficult to use such images to harm someone's career or relationship. Perhaps nothing is believable in the future.


I’d like to believe that this is the future. Already with the rise of digitally native relationships nudes have become commonplace (even Jeff Bezos has sent some). Now with these widely accessible deep-fake generators any leaked nude photo can be chalked up to digital malfeasance!


>it would be so easy to destroy someone's career or relationship by making these deepfakes and then spreading them around.

Only because this tech isn't yet well known. People will just correctly learn to not trust anything they see online.


The average IQ doesn't allow for that.


When it's on TV and people are aware it will sink in.

It's only Voice cloning that really scares me.


I’d also add that the knowledge of what’s possible diffuses to abusers a lot quicker than it diffuses to, say, the grandparents of victims.


Through all that pain, humanity adapts.


[flagged]


If by virtue signal you mean call out creeps and perverts then yes. The world would be a much better place if people did that more often instead of just leaving them be to hurt women and teenage girls. Maybe then they would at least not act on it in public.


It's okay to have morals and to talk about them.


Not sure if these should have light brought upon them or stay under rocks.

[2] (NSFW, seriously.) https://undress.app

[3] (NSFW, seriously.) https://porn.ai


Doesn't seem to actually be a thing, though, as they seem to require a US$100 payment up front in order to "generate" an image.

That's pretty much the behaviour of a scam website.


I have seen ads on other websites which link to working versions.


It's really interesting how many people see a product/service with an impressive result (previously seemed impossible), albeit with flaws that humans find funny/obvious, and then are unable to imagine how it might be once improvement leads to the issues being ironed out.

A few years ago generating images like many of these was unimaginable, and now this does it, but perhaps 50% have some silly flaw in the image. Unless there were some reason to believe that further improvement isn't possible, as you say, it is likely to become indistinguishable from reality in the relatively near future.


The only difference between that world and our world of the last 10 years is that before you needed to have some Photoshop skills and now anyone can do it.

In some ways, I think that actually makes it safer now… the more trivial it is the more people will stop trusting photos as automatically being real.


The good news is that legal courts have already lost faith in all things digital imagery, and have for a good long while. They're actually way ahead of the curve.


> My only hope is that this extreme enshittification of online images will make people completely lose trust in anything they see online, to the point where we actually start spending time outside again.

Well in a weird way it will provide cover. You could post nudes of yourself online and just explain it away as bad actors using AI.


I don't see much to be frightened of. It has always been possible to create convincing fake photographs, at a price, while a photograph on its own (without proper provenance) has not usually been treated as important evidence (though there are some famous exceptions, e.g. Duchess of Argyll). The new technology just makes it cheaper for mischievous people to make fakes, and easier for people to dismiss an unprovenanced photo as a fake.


> It has always been possible to create convincing fake photographs...

Not really, depending on who you are "convincing". Up until relatively recently it was usually quite easy for photo experts (and often even less than experts, just people trained to look for certain "tells") to detect digital manipulation. But, on the flip side, yes, digital manipulation has occurred in the past, and I think it's a mistake to discount the strong negative effects it has had, e.g. many people having a completely unrealistic view of what most real humans actually look like (e.g. https://scottkelby.com/faith-hill-redbook-magazine-retouch-f...).

> The new technology just makes it cheaper for mischievous people to make fakes

That's a huge deal. Just look at the concerns around the use of LLMs to generate (and run) misinformation campaigns. Obviously those campaigns can be and are run now, but the thought of it being incredibly cheap to do so, by people of extremely little skill, changes the information landscape drastically. Doing it for images is just another piece of the puzzle.


When one says fake photographs, that is not the same as saying digital manipulation. Photo manipulation is older than the transistor. See https://www.imaging-resource.com/news/2012/09/28/before-phot... for some examples; these are admittedly artistic manipulations and fairly obvious, but it's entirely possible to apply the same techniques in other ways.


I don't share your optimism. I expect that there will be a lot of false news and propaganda using these kinds of images. Perhaps our legal systems will evolve (albeit slower) to handle cases that come up using these technologies.

This will, if not already, make its way into porn and increase the amount of content there. It's already wild and with this, I don't expect anything good to happen.


https://getimg.ai already lets you train on submitted photos and then generate using various AI models. This tech is commonly called DreamBooth. There are clauses against misuse of course. The obvious one being creeps using photos of girls they know and then using an NSFW model with it.


> My only hope is that this extreme enshittification of online images will make people completely lose trust in anything they see online, to the point where we actually start spending time outside again.

I'm in full-bore accelerationist agreement with this point. Defense lawyers must love this.


I think already a thing?


What I find sketchy is that it is not easy to find out who is behind this service. The norm is an About Us page or a link to a parent site. I briefly skimmed the legalese (ToS & Privacy) and it's still not clear who these people are or where they operate from. The LinkedIn link shows 8 people working there, mostly in BD, from outside the US.

I don't think there is a nefarious purpose going on, i.e. getting people to sign up and stealing their info or payments, etc. However, it contributes to the erosion of trust on the internet. You're no longer sure if you're talking to a real dog in pajamas online or an AI pretending to be one.


I find that a lot of Show HN posts (YC companies included) that make it to the front page have the same problem. I usually don't comment on it, but I find it crazy that someone would launch either a paid product or something that takes your private information without disclosing where they are based or who they are.


The FAQ https://generated.photos/faq directly answers this: it's made by the same company as Icons8, which has a long track record. The founder is Ivan Braun, who is indeed a real person (I've known him since the Icons8 days).


Fwiw, I read the FAQ yesterday. It was either not there or else my blinders were on to not find anything.


This could just be a side project by some guy working at a tech company with a 'no inventions/IP' clause in his contract. People prefer to stay anonymous in such cases, and launch silently, so as not to pull attention from their main org. Not everybody can be a Twitter tech bro announcing his creations on ProductHunt and not get in trouble for it.


It could be an attempt at entrapment by a hostile foreign power: get blackmail material on Western engineers by tricking people into accidentally viewing CSAM, then leverage that to turn them into unwilling foreign agents by threatening to report them to law enforcement and/or ruin their marriage. I know this sounds like a conspiracy theory, but this is an anonymous, free-to-use site, targeted at English-speaking engineers, that generates extremely sketchy content. Some of the comments in this thread make it pretty clear that this site generates CSAM some percentage of the time. If you use the site you are rolling the dice and might become guilty of a crime, putting yourself in a position to be a pawn of a North Korean or other foreign country's intelligence agency.

Edit: If you get contacted by a foreign intelligence agency attempting to blackmail you, please report them to https://www.inscom.army.mil/isalute/ and also to 1-800-CALL-FBI (1-800-225-5324).


> I know this sounds like a conspiracy theory

Yes, but why North Korea? This sounds like US three-letter-agency propaganda from a decade ago.

It would be China looking to exfiltrate IP/data, as that data war has already begun. See: Azure keys.


Why's that bad? Should every URL be doxxable?


Doxxing refers to private individuals, not companies and organizations.

In Germany, for example, every commercial website (which is defined very broadly and applies to most websites) is required by law to have an imprint listing the person/org responsible for the website and how to contact them (an e-mail address is not good enough). This means you can go to any German business website and get the full address, EU VAT ID and registration number of that company.

Under the GDPR more broadly (which also affects foreign companies offering services to EU residents) every company is required by law to have a privacy policy and that policy must include whom to contact for concerns and requests regarding personal data of the user/visitor and who (which legal entity) processes and stores the data.

This is the opposite of doxxing. It protects private individuals by making transparent to them who they are interfacing with and who holds their data. This is necessary for informed consent.

Sidenote: the website/app's cookie notice is pointless as it's using the same "redirect to google.com if user says no" logic porn sites used to do (do they still do this?) for age checks. The app also works without accepting the terms, so either it can work without accepting them or (more likely) it doesn't actually wait for the user's consent. Either way it doesn't do anything and doesn't comply with any privacy laws I'm aware of that would require it.


Call it sitedoxxing then or urldoxxing.

It's the same attack, to deanonymize, to hunt people down from the internet because you don't like what they say or do.

Germany is the only country in the world with that Impressum policy, because of its highly legalistic Prussian background, and you would find that many in the hacking community (e.g. the CCC) take huge issue with it


The hacking community takes issue with it because it is overly broad and applies to sites any reasonable person would consider personal and non-commercial. The infamous precedent was early-2000s-era personal homepages with banner ads on them to pay for the hosting. The presence of ads made them commercial and thus subject to the Impressumspflicht.

The CCC has a strong anarchist tendency unlike the US tech bubble which has a more libertarian (i.e. free enterprise) streak. They absolutely do not want companies to hide from accountability, which completely abolishing the Impressumspflicht would do.

Also note how I said the GDPR also requires transparency with regard to who processes and stores your data. This doesn't translate to the same requirements that exist for an Impressum but for companies and registered organizations it's enough to make them identifiable and recognizable, especially in combination with the Transparency Register, which is also part of EU law.


Fair enough, I'm just very aware of the doxx culture we live in and the insanity of the modern internet.

You're right legally, but obviously the GDPR is not fully followed -- Big Corpos just ignore it and pay the fine, and small companies can skirt it.

I don't understand your overall point about "data" though. Do you mean for free usage when people accept cookies from their logging, or just for customers of the API since you make an account?

In any case it looks like the FAQ now links to the parent company, but I could have imagined it just being a guy who didn't want to get doxxed or wanted to stay private.

I think being able to make a website or tool or thing and say "hey check this out" and stay anonymous is a key part of the internet, and frankly I don't mind if they make a small amount of money on the side. I know this is probably Ketzerei (heresy) in Germany, but in Anglo countries it's sometimes notoriously hard to trace corporate structures back to people and such.

Germany is definitely incredibly pro copyright though so that probably plays a role.


> I don't understand your overall point about "data" though.

Data about you is your data. The GDPR defines it as such. As long as data can be traced back to you, even through pseudonymization, it remains your data. This includes anything from IP logs to what you did in the app. If it's tracked, that generates data, the data is tied to you, so it's your data. Given that the app invites you to upload pictures, which themselves could be other people's data, it's very relevant to know who is storing, transferring and processing it and for what purposes.

> Germany is definitely incredibly pro copyright though so that probably plays a role.

Sure, to some degree. I'd also like to believe that we have a heightened cultural awareness of the dangers of governments and corporations having access to personal information when things go south. The biggest civil control mechanism of the East German government was what at the time would have been considered an excessive amount of data collection about anyone even remotely suspected of being critical of the state (and anyone affiliated with them). And prior to that the NSDAP used intricate record keeping to identify "Jews" and suspected enemies of the state. It doesn't matter if it's a corporation that has the data or the government, because fascism doesn't make this distinction. So the only way to protect data is to have full transparency over who has it and why.


It's also prominently asking you to upload a picture of your face along the rest of the controls


You can upload any face.

Here's Kurt Cobain in a universe where he gets a regular office job, goes to gym, and never starts a grunge band...

https://generated.photos/human-generator/64e708a6190809000fb...


> Thanks to our advanced AI algorithms, you won’t tell generated humans from real people

If the images posted are the best they can do, then I have some bad news for them


I can count on one hand the number of ways these photos fail. That's right, 6 ways.


If you encode a binary digit for each biological digit on your hand you can count up to 32 on one hand.


and there are more than two clearly distinct ways to articulate a finger, so you can bump that up to at least 3^5, not that it would be in any way convenient or useful to do so

assuming you have the dexterity to hold each of your 10 fingers in one of 3 different positions—e.g.: up, down and crooked—you could technically count up to 3^10 or 59,049

if you mastered that, you could add a fourth or even fifth position. 4^10 is just over 10^6, so you could hold any 6 digits, and 5^10 is just under 10^7, so almost any 7 digits

you can compress really heavily using high bases, but for it to make practical sense the atomic medium of storage has to have that number of states

if you just convert binary to a high base and back again, unsurprisingly the space used works out exactly the same, as the number of symbols used to represent a number may be fewer, but the length of the symbols themselves balances it out exactly

e.g. in base 1000 you may be able to represent the number 999 with a single symbol, but you need 1000 unique symbols to show every number up to 999, each of which of course takes up significantly more digits than a bit; however, if you had a storage cell that could reliably store and display 1000 unique states, then it would be a different matter. whether that's possible or would/could even take up less physical space than ~log_2(1000) binary cells I do not know
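The arithmetic in this subthread is easy to check directly. A quick sketch (the function name is my own, not anything from the thread):

```python
def countable_values(positions_per_finger: int, fingers: int = 10) -> int:
    """Distinct values representable if each finger independently holds one
    of `positions_per_finger` states (e.g. 2 = up/down, 3 = up/down/crooked)."""
    return positions_per_finger ** fingers

# One hand in binary: 2**5 = 32 values, i.e. the numbers 0 through 31.
# Ten fingers, three positions each: 3**10 = 59,049.
# Four positions: 4**10 = 1,048,576 (just over 10**6).
# Five positions: 5**10 = 9,765,625 (just under 10**7).
```

This also illustrates the base-conversion point made above: raising the base shortens the count of "digits" needed, but each digit must then reliably hold more states, so the information content works out the same.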


Nono, these AI improvements mean that we can do 64 now!


Only 31 right? 16 + 8 + 4 + 2 + 1


up to 31, but 32 numbers in total


Count Rugen sees no problem with this.


Eh. On one hand, I guess it's no problem at all. But on the other hand...


Marketing taking it too far as usual. They certainly have mastered peak uncanny valley though, I'm not really sure what this is useful for.


The first photo generated for me made everything look plastic. Unnatural sharp lines on everything. Shadows from 5 different directions.

It’s laughable to call these “hyperrealistic”.


The first image i generated was worse than that

A mermaid with plastic looking skin, and badly rendered ocean water in background.

https://generated.photos/human-generator/64d6dde03af7f90007c...


I got exactly that image too! I guess the "random human" isn't so random. This calls their "real time" claim into question...


Looks as realistic as every other real mermaid I’ve seen ;-)


It nailed the pose, though.


The demo pictures have this sort of "uncanny valley" effect for me. I wonder if it happens when I don't know that the pictures are AI-generated.


In the second photo I rolled, the generated human had legs that ended in hands instead of feet.


> “If you want to use images produced by Human Generator in commercial projects, contact us.”

If there is no copyright in AI-generated images, then how can they possibly enforce this?


There haven't been any solid rulings on the copyright validity of human-driven AI generation yet. There have been a few cases, but they've been muddied by complicating factors (not a human doing the generation - that is, autonomous generation - or the generation being used as a base work for something else).

Additionally, even if there's no copyright, the terms of service may still apply separately (see OpenAI disallowing training a competitor model on output from OpenAI models)


I don’t think we’re going to see a ruling against copyright in the long term. When the rulings do come, they’re going to be complex (not that copyright law isn’t already complex). As prompting and working with AIs slowly becomes its own art and skill, it will become clear that works need protection. We’ve had “intelligent” filters and tools in Photoshop for decades, this is just the next step in that evolution.

The only real problem here is that the original creators of the art that these AIs were trained on didn’t consent to this type of use and aren’t getting any kind of attribution or payment. If they were recognized and compensated, there’d be really nothing to talk about here - any work could be copyrighted, with whatever derivative status the AI bakes in.



I believe that, as the article says, despite the headline this case was specifically about a situation where a computer scientist wanted to list the AI as the one creating the work. The case doesn't examine the argument that things can be copyrighted when a human is involved, either by filtering the output or even just by developing the algorithm, such that the human is the artist and the AI is just a tool. I think what's clear is that legally an AI can't itself create a copyrighted work, just like a camera can't be listed as the author of a work, but it's not clear whether a human using AI as a tool, through prompting or filtering, counts as a creative act under copyright, or whether AI-generated creations count as derivative works of the model's weights.


A copyright is a government-granted monopoly. The copyright office has stated they won't grant monopoly privilege for AI-generated art. The courts thus far have backed them up.

I would say it doesn't look good at the moment for anyone trying to enforce ownership of something AI-generated; it would be an uphill battle, and the default/null position would be that the art is free to use and unprotected by government.


> The copyright office has stated they wont grant monopoly privilege for ai generated art.

No, they haven't.

They've said that if the only human input is a text prompt, then it lacks the required human creativity to be eligible for copyright protection.


Not trying to be combative, but I don't see the difference?


Real AI imagegen workflows very often have more input from the human creating the image than a text prompt.


Seems a little circular semantically: if it has significant human input, it's human-generated more so than AI-generated, in which case we are saying the same thing.


Textures, sample images, something like that?


ControlNet offers a wide variety of additional input sources: https://stable-diffusion-art.com/controlnet/#Preprocessors_a...


> Additionally, even if there's no copyright, the terms of service may still apply separately (see OpenAI disallowing training a competitor model on output from OpenAI models)

Aren't contract clauses that relate to the distribution of material preempted by the copyright act?


No. However contract clauses only apply to people who are actually parties to the contract.

For example you and I could enter into a contract for me to use AI to generate something that is not copyrightable from data you provide and give you a copy of that thing. There would in general be no legal problem if the contract included restrictions on what you could do with that thing, including restrictions on distributing it.

Part of the quid pro quo of a contract can be one party giving up a right to do something that they would normally have a right to do.

Now suppose the contract did allow you to make and distribute copies as part of your product. Someone else starts making copies of those copies you distributed and distributing those copies.

There is no contract between me and that person, so I would not be able to stop them. I've got no contract with them, and the thing is not copyrighted, so there's nothing that prevents them from copying it.


> Aren't contract clauses that relate to the distribution of material preempted by the copyright act?

Generally, no. It's possible for there to be interactions in some cases, but the Copyright Act wouldn't generally preempt contract terms. (It's closer to the other way around: to the extent copyright rights exist that could otherwise be enforced, a relevant contract will generally limit enforcement and recovery to breach of contract rather than a bare copyright action.)


It's the terms of use of the site: you are offered use of their generator in exchange for agreeing not to use it commercially. If you use it commercially, you are breaking those terms, which they will argue constitute an enforceable contract. Copyright has nothing to do with it.


But what if you use the generator, post the image in an allowed non-commercial context, and then I copy that image and use it commercially? I have no contract with the AI generator company, and you didn't violate yours; it would seem to me that the violation involved is a copyright violation.


100 percent legal to use the generated images as freely as you like in that case.


They could do it with a click-through license agreement, but they don’t have one of those either. So it seems to have the legal force of a polite request. (IANAL)


I am thinking out loud, so here it goes: how will they have the bandwidth to identify and pursue every person using a generated image commercially? I respect and understand the legalese around terms of service, copyright, etc., but seriously, how would any company just starting in this space enforce its terms of service? Serious question. Until there is a concrete and, let me emphasize, cheap and generally available solution to legally moderating AI-generated content, it feels to me like the wild west. Be real.


> If there is no copyright in AI-generated images, then how can they possibly enforce this?

We don't have precedent here. Whether a person using a website with a generative AI tool counts as having a non-human creator isn't clear, and it seems to me like the answer is that it does have a human creator. Using a horse-hair brush to paint a painting doesn't mean that the painting was created by a horse and isn't subject to copyright. We'll have to find out eventually whether over a dozen settings, some with a gazillion options, and multiple freeform inputs counts as 'not created by a human'.


Was there not just precedent set for this?

The horse-hair example is nonsense. One might argue that artists take inspiration from other artists, to make the case that what the AI is doing is fine. But the AI is actually only capable of blending what it’s been trained on, whereas an artist is not similarly limited. And that is why the horse-hair example is stupid.


> Was there not just precedent set for this?

No, there was a recent case where someone tried to claim an AI as an author for copyright and themselves as owner via work for hire, where it was ruled invalid because AI can’t be an author under copyright law; the ruling was explicit that it was not addressing copyrightability by humans of images they create using an AI generator as a tool, only the claim of copyright with AI as the author.


> Was there not just precedent set for this?

No.

> One might argue that artists take inspiration from other artists to make the argument that what the AI is doing is fine.

This doesn't seem to address what I took to be the relevant part of IP law: that non-human authors don't create copyrighted works. It was a reductio ad absurdum for minimal non-human involvement. It's probably not the case that a monkey stealing your camera and taking a selfie creates a copyrighted work. It's probably the case that a frog triggering a motion sensor you set up for nature photography does. It's certain that painting normally with a horsehair brush does.

Your remarks seem to make some sort of moral appeal, but I'm not sure how it ties into the legal concerns I thought were being raised.

> the ai is actually only capable of blending what it’s been trained on, whereas an artist is not similarly limited.

I'm not sure what "blending" means here or what the actual theories of generative art ML systems and of humans here are. To call what the former do "blending" requires such a broad definition I can't tell you if humans are blending as well (at least some of the time, at least materially) when creating works.


There are lots of tools that don’t have copyrightable output that require commercial licensing to use.


As of now, copyright of AI-generated images is not a settled matter. But I think smart money is on the courts coming down on the side of copyright being applicable.

(If you're thinking of the recent court case, no, that was unrelated; some guy was trying to pull a stunt and the court did not actually rule on the thing you think they did.)


Do you have a link or something re: your second point for those of us who might not know?


No. But courts have continued to rule that new technologies allow humans to create works of art that fall under copyright, and I don't see why they wouldn't continue to do so.


Legal consequences of using photos of identifiable people are not limited by copyright[1]. For example, it is quite easy to assume that photos from this service will be used for political defamation, among other things.

How could they enforce it? For example, by embedding a steganographic tag, by which human rights activists will be able to identify the author of the picture.

[1] https://commons.wikimedia.org/wiki/Commons:Photographs_of_id...
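A steganographic tag like that can be as simple as overwriting the least significant bit of each channel value with the bits of an identifier. A minimal sketch (function names and the tag format are my own for illustration, not anything this service is known to use; a production watermark would need to survive re-encoding and cropping, which plain LSB embedding does not):

```python
def embed_tag(pixels: bytes, tag: bytes) -> bytearray:
    """Hide `tag` in the least significant bit of each byte (channel value).

    Changing only the LSB shifts each affected value by at most 1, which is
    visually imperceptible in an 8-bit-per-channel image.
    """
    bits = [(byte >> (7 - i)) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set it to the tag bit
    return out


def extract_tag(pixels: bytes, length: int) -> bytes:
    """Read back `length` bytes of tag from the LSBs, MSB-first per byte."""
    bits = [b & 1 for b in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )
```

With raw pixel bytes in hand (e.g. from Pillow's `Image.tobytes()`), `embed_tag(pixels, b"gen-id:123")` returns pixels that decode back to the same tag via `extract_tag(tagged, 10)`, with every byte differing from the original by at most 1.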


Autodesk Fusion 360's education edition limits what I can create with it to non-commercial uses only, despite me owning the copyright for what I produce using it. I don’t think you have it right.


Does this apply to any photo taken by a camera with "AI" filters? There must be a line somewhere.


The current line is not based on a maximum AI input but a minimum human input. You aimed the camera, your copyright. You have an AI just create fake pictures and post them without someone in the loop, no copyright. The questions are mostly about how little you have to do.


While that may be the status today, I feel this is in no way settled.


if there were, globally, no copyright at all in any "ai" generated images, and one confidently thought there never would be, then simply using the images, in the case that one only needed one or two images, would probably be fine.

however if there were large nations in which the law was still in flux or unclear, or one wanted to generate new images on the fly without fear of rate-limiting or refusal of service, then one would potentially wish to work out an arrangement.


There are still terms of use, which can dictate how you are allowed to use a website. And there are watermarks in the corners.


We can probably AI those out.


they can't.


The question is why anyone would want to use this, since it's so buggy.


Jesus. I asked for a woman in the jungle wearing "hiking attire" and got a woman wearing...uh...a denim g-string, denim stockings (?!), and a kind of evening-wear top. With her boobs out.


Is it just me, or is there something kind of weird about all the breasts on the example women? They all look really high-set, and combined with the fact that other people have gotten back nudes from this tool (as shared in the comments below), I'm thinking the dataset they used here was really catered to a "certain" audience.

Edit to add: It's not fast; it's showing you repeats of stuff it already made on the first try. Which is probably why I got 5 men in tight pink shirts eating cake in a row. ???


Incredible. In the time I'd spend creating one image that exactly fits my or my client's needs, or buying a high-quality stock photo, I can generate literally millions of photorealistic, unappealing images that would require a skilled commercial artist to make useful for all but the most throwaway uses. What about some high-volume throwaway use case? I generated like 5 images before I got a 3/4 shot instead of a full-body shot. bzzzzzzzt.

Trying to 'wing it' with engineers doing what designers should be doing is bad enough when you're just making regular interfaces, but when you're trying to sell a commercial art product, you need people with subject matter expertise. No matter how cool the technology is, and no matter how well it theoretically serves a commercial art customer base, if you're selling art, it's going to be critiqued as such. Hope you've got a thick skin.


But think of how much easier this makes believeable spam and scams!


"Does your new romantic partner that texted you out of the blue have what could only be described as cleft eyeballs, four or five labiodental grooves, or other biological anomalies that vary from picture to picture? They might be an AI scammer."


So this is a fit Icelandic, adult, woman: https://generated.photos/human-generator/64e6ff848448b800095...

Radiation levels??


Now I’m wondering if AI is going to result in new fetishes. “You’ve heard of foot fetishists? Well, Brian only gets off to women with feet for hands.”


Put NSFW here!

Also, what the hell was this trained on?!?


Sincere apologies for forgetting the NSFW tag :/


You can edit it. Please do.


I tried, but no edit link for me. Perhaps I need points/credentials?


My mistake, I think the option goes away after a fixed amount of time.


Not sure if you noticed, but the text prompt in the bottom left corner says "Six-year old obese child, smiling, acting shy"


Erhm no, didn't notice. Then the AI ignored all my settings. Of course I'd select "fit Icelandic woman" !


You should actually put an [NSFW] tag to this.


NSFW


"Sorry boss, I was expecting a SAFE fit Icelandic adult woman, not an unsafe one."


It seems the images generated when you land on the page and click "create human" aren't created live but picked from a pre-populated list of images, likely also pre-filtered for accuracy: it's almost always instantaneous, while any small modification of the same subject always fails, returning that the site is at maximum capacity. Also, several users pointed to created images with the usual problems related to hands and other parts. I would not base any judgment on what the site returns on the landing page.



That pretty much confirms that it's built on Stable Diffusion model(s).


Some of the generated models were pretty damn good but without any additional prompting I ended up with the standard oddities like multiple limbs.

I like the UI functionality though. easy to dial in what you’re looking for


Why is it impossible to generate a male model wearing anything other than rolled denim jean shorts? I've tried things like "long pants" or "ankle-length pants," but I cannot get it to stop putting them all in denim shorts!


You might not like it but jorts are the pinnacle of lower body coverings.

That is actually a hilarious issue.


Yeh; tbh I thought it did a reasonable job with them though https://generated.photos/human-generator/64e666d59563e6000e0... (The hairline is way off, and he's way too muscular) I at least don't spot anything too anatomically wrong.


Knees are weird, one of them tilted backwards, ears asymmetrical.


I got one with regular shorts: https://generated.photos/human-generator/64e67772190809000bb...

I don’t understand why none of these generators want to make a man with body hair. I specified Armenian and Armenians are notoriously hairy.

Edit: this is if I specify “very hairy”. Note that for a man, there is a bit of hair in the usual places but still far short of “very hairy” IMO. https://generated.photos/human-generator/64e67aa38448b800095... (And the hair rendering is bad, AI still doesn’t understand directionality; there’s a very common pattern it should know, especially on the chest)


Try with cargo pants. After three or four iterations the pants disappeared and were substituted by something that can only be described as an andrologist fever dream.


Scroll down to the text field at the bottom and delete everything in it. Then go back and reselect clothing.


The first female model I generated was supposed to be wearing jeans but was wearing what can only be described as a denim belt with pockets


> Why is it impossible to generate a male model wearing anything other than rolled denim jean shorts?

Its not, I got rolled-but-long white denim pants, white shirt, white tie, and white jacket, with white deck-ish shoes but selecting “Formal” on the clothing tab and entering “Clothing: white-tie formalwear”.

But, yeah, there is a definite denim bias.


There is a clothing tab which you select whatever you want. I think description doesn't override it.


It doesn't work, either.


I accidentally ended up with a male in a miniskirt, simply because the clothing defaults didn't change.


Go for crotchless leggings. That worked for me. :-P


Refusing cookies redirects to Google? Kinda scummy.


Actually against GDPR. So even though they have that dialog, they are still in violation.

The need to track people is strong.


Wow, the "one more click" effect is strong with that one... I did not expect anything useful to come out of experimenting with this, yet here I still am half an hour later. Congrats to the makers, it's impressive!


Doesn’t seem nearly as good as a decent SDXL finetune. Or even SD 1.5, really.



Probably shouldn't have made every individual adjustment to the gen parameters require a generation round-trip to persist them


"Want more generated people?" is the most 2023 ad headline yet


"Currently, we do not have any limits to the number of humans you can generate."

This has widely been seen as something of a problem, environmentally.


Default human is a "Young Adult" woman, and the default "add something" was "woman with tattoos". I changed ONE filter (from young to Senior). It spun for about 20 seconds and then gave me the same woman's face, but older. She is also topless. I'm impressed (?)


I had very different default settings, so I think there's some randomization going on here.


It completely ignores details in the "add something" box. I asked for gray trousers, it gave blue jeans. I told it Doc Martens boots, it continued showing white sneakers. I told it "burgundy sweater", and it produced a sweater at least, but a pink one.

In short, as a tool for an art director or illustrator trying to avoid the expense of a model or a photo shoot, it's fuckin' useless.


Has the same problems as midjourney et al: you can feed it pictures of a friend or yourself or a celebrity, and the result is always off - not recognizable as them


That's pretty easily fixable if you train a LoRA or similar so that the model has a specific likeness in mind. (You can look at - and despair at - Civitai if you want proof.)

It's harder to do at inference time without training, but I wouldn't assume it'll be impossible forever, especially with the existence of ControlNet.


This is a GAN, so you can just project the image of yourself into the latent space (which will give you a near-pixel-perfect reconstruction), fix the identity-relevant variables in the _z_, and edit it as necessary. (No workarounds like finetuning necessary. Just one of the many forgotten advantages of GANs.)
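The projection-and-edit workflow described above can be sketched concretely. This is a toy numpy illustration only: the "generator" is a made-up frozen tanh-of-linear map standing in for a pretrained GAN, and the choice of which latent dimensions count as "identity" is hypothetical; real GAN inversion works the same way but with a neural generator and an autograd optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "generator" G(z): 8-dim latent -> 64-"pixel" image.
# (Stand-in for a pretrained GAN generator; weights stay fixed.)
W = 0.3 * rng.standard_normal((64, 8))
b = rng.standard_normal(64)

def G(z):
    return np.tanh(W @ z + b)

def project(x, steps=2000, lr=0.02):
    """GAN inversion: find z minimizing ||G(z) - x||^2 by gradient descent."""
    z = np.zeros(8)
    for _ in range(steps):
        g = G(z)
        # d/dz ||g - x||^2, using d tanh(u)/du = 1 - tanh(u)^2
        grad = 2.0 * W.T @ ((g - x) * (1.0 - g ** 2))
        z -= lr * grad
    return z

# A target "image" that is exactly representable by the generator.
z_true = rng.standard_normal(8)
x = G(z_true)

z_hat = project(x)  # near-pixel-perfect reconstruction of x

# "Editing": hold the (hypothetical) identity dims 0..3 fixed,
# resample the rest to change everything but the identity.
z_edit = z_hat.copy()
z_edit[4:] = rng.standard_normal(4)
x_edit = G(z_edit)
```

Because the generator is a single deterministic function of z, any attribute that lives in a known latent subspace can be edited by moving z, with no fine-tuning step at all.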


You can project an image into the latent space with diffusion model too, DDIM inversion.
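For reference, deterministic DDIM sampling can indeed be run in reverse to recover a latent. Below is a toy numpy round-trip sketch; the noise schedule is arbitrary and the noise predictor is a made-up constant function (which makes the inversion exact here — with a real trained network, DDIM inversion is only approximate because it reuses the model's prediction from the previous step).

```python
import numpy as np

# Toy noise schedule: abar[t] is the cumulative alpha-bar at step t.
T = 50
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(1)
fixed_eps = rng.standard_normal(4)

def eps_model(x, t):
    # Stand-in for a trained noise-prediction network. Constant in x so
    # that the inversion below is exactly invertible.
    return fixed_eps

def ddim_step(x_t, t):
    """One deterministic DDIM denoising step: level t -> level t-1."""
    eps = eps_model(x_t, t)
    x0_pred = (x_t - np.sqrt(1 - abar[t]) * eps) / np.sqrt(abar[t])
    a_prev = abar[t - 1] if t > 0 else 1.0
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1 - a_prev) * eps

def ddim_invert_step(x_prev, t):
    """Inverse of ddim_step: level t-1 -> level t (DDIM inversion)."""
    eps = eps_model(x_prev, t)
    a_prev = abar[t - 1] if t > 0 else 1.0
    x0_pred = (x_prev - np.sqrt(1 - a_prev) * eps) / np.sqrt(a_prev)
    return np.sqrt(abar[t]) * x0_pred + np.sqrt(1 - abar[t]) * eps

x0 = np.array([0.5, -1.0, 0.25, 2.0])  # the "image"
x = x0.copy()
for t in range(T):            # invert: image -> noise latent
    x = ddim_invert_step(x, t)
latent = x
for t in reversed(range(T)):  # sample: noise latent -> image
    x = ddim_step(x, t)
# x now matches x0 up to floating-point error
```

The practical difference from the GAN case is that the "latent" here is a full-resolution noise tensor reached via many model evaluations, rather than a single low-dimensional vector from one forward pass.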


>fix the identity-relevant variables in the _z_

Is that how the latent space works though; Like if it's a 300-dim vector, is the face at locations 0-10?


I'm a bit surprised this is the only comment mentioning ControlNet.


Some will surely consider that a feature rather than a bug.


But then how am I supposed to be the kind of creepy person that jerks it to poorly modified "nudes" made from Facebook profile pics?


"Back in my day stalkers had to use their imagination, by gum."


This is a first: on refusing their cookie banner it immediately redirects to google.com, poof.


I know that currently AI is the hot topic, but if you want to generate pictures of well-known objects like humans, why not just render a 3D model of a human? Something like Poser (the software) is way more efficient than any GAN or diffusion model.

But Poser is 28 years old now, I guess it doesn't attract sweet VC money.



Woah, that’s insane. I had no idea mo-cap was so low-effort already.


That is truly impressive.

In my mind, it would be smarter to use the AI to tune the 3D model, and then let a rendering algorithm like this generate the final image.


Finally a website that unprompted answers the question “What if Wednesday Addams had enormous breasts?”

Edit: lol https://generated.photos/human-generator/64db2561ba3ed6000ca...


It gave you a Sims character?


That was after several Barbie dolls. Great website.


The prompt for the image you linked says "in sims world". It's in the bottom left field.


The thing is if you are advertising generating infinite photorealistic humans, your automatically generated prompts should probably not be things that do not serve that end.


That’s cool, I got there through hitting the refresh button.


I always thought the ideals of popular culture had a negative effect on everyone else's body image. Now we're going to let AI define the ideals… this might be slippery.


"Now"?

I doubt everyone is going to want to have six mangled fingers on one hand and a claw on the other.


Okay, agreed. I ran about 15-20 gens through the site, got mangled feet/legs, disconnected appendages, extra appendages, nudity, nudity imprinted onto clothing, just bizarre stuff.

Tomorrow's "Now", maybe.


or a penis coming out of their vagina


Neat. I'd really like to have a setting for attractiveness, all of these people look like models.


At last, a way to complete the AI-generated cycle that https://thispersondoesnotexist.com/ started.


The only thing I changed from the default parameters was Age == Teenager. That resulted in this error:

We detected that generated image contains nude content. Try changing parameters.

Not sure what to make of this, but it feels wrong, somehow?

Edit: This was the prompt it generated for me on page load: "Minerva McGonagall in Hogwarts, wearing Hogwarts robe and witch hat" - https://generated.photos/human-generator/64e650a39563e6000e0....


"teen" is a ubiquitous porn category that, in practice, describes a body type, not age; similar to how "babe" almost never means "infant".

I would be more surprised to get SFW results from that prompt, considering the result would be based on more heavily regulated (less common) photographs of minors.


No, teen in porn absolutely means 18 and 19, or at least "I'm '18-19' and definitely not a 26 year old"


Pick ten random 26 year old porn actors. I bet you their most recent scene was categorized as either "MILF" or "teen".


I got the same thing and am very confused. They are the ones generating the image. Why did they generate porn if they don't allow it? Also apparently clothed teenagers are now pornographic? I think their image analysis needs some work.


Over self-censoring


crash "you got thirst trap instagram photos in my david cronenberg dataset" "you got cronenberg photos in my thirst trap instagram data set"


Not bad at all, but what’s the main use case?

The site lists all of these things you can do, but are those things people needed or wanted? Is the idea to replace stock photography of people?


Stock photography, better selfies, or simply fun. Also, people always invent some use creators never thought about.


Amusing. My first two "random human" samples had completely ordinary uncanny valley issues (eye was smushed and blurry, weirdly shark-like teeth in child's mouth). But the third looks pretty good! ...for a 90s era povray Barbie doll model.

https://generated.photos/human-generator/64d552c85263da00077...


The text prompt for that image (one is generated for the "random" images) is "barbie doll", so in this specific case it's not so much an imagegen problem as other parts of the app design not matching the advertised behavior.


Ah, funny. Their interface hid that box from me on my phone. Weird choices all around.


The human brain is still very good at sensing something is not right and these photos all still look fake. Like bad movie CGI. It’s impressive stuff but if anyone is thinking they could replace a real photo with this and pass it off as real they’re making a mistake.

The tech is super interesting but this space is also a good example of getting the last 1% working is going to be a lot harder than getting the first 99% to work.


NSFW? Also.. WTF with their detection algorithm, this is easily abusable. This was the first image I was prompted with https://generated.photos/human-generator/64e65d5a8448b8000b5... I have not changed any of the parameters, they were automatically generated on /new


This is the thing that comes to you if you take too much Benadryl at noonday.


Accuracy on using an existing face seems pretty off. Certain positions have terrible accuracy on body parts. Why would you care to use this again? This just seems like another AI SaaS scam project where they're basically charging others to use their A100s with their copy-pasted implemented version of the research paper algorithms. These should just all be outlawed IMO.


Wow, this went from reasonable to "holy crap that's nude" without any prompting real fast.


They are overselling the capabilities of their model a bit. The boy posing has facial artifacts, and the first “human” I generated is a painterly mermaid with a disjointed background. Results from photoai.com or many models available on civit.ai look a lot more realistic.


For a number of use cases, this would be most helpful if combine-able with tools which move lips/cheeks to simulate speech. However, the toolsets seem to be fractured at this point. Does anyone have a good workflow for this?


Should flag it as NSFW; it went full-on NSFW on my first prompt, which is weird.


Seems rude how if you refuse their cookies they redirect you to Google.


I couldn't help noticing that half the female poses are seriously CFM ("Come hither"), whereas the male poses are The Thinker, or Man-spreading, or other things that manly men do. But then I tried generating some male images. Turns out that The Thinker can be seriously CFM as well.

For some reason, all the images I have generated so far are unexpectedly sexualized, even when I try to give it completely benign prompts. I don't think it's just me. :-/


I tried to add more clothing, both with the prompt and with the clothing tab. Got the same tiny costume each time. Trying to generate a D&D character, I got a Chippendale.


I noticed this as well! It's surprisingly hard to get a fully clothed female with certain combinations of body type and pose. Even when I explicitly ask in the text box for "modest" or "fully clothed".


No matter the clothing prompt (sweater, cargo pants, and shirt), any female image I generated had a bra and shorts showing. Alongside the available poses, I think it's pretty clear what they made this for...


>CFM

Cubic feet per minute?


Come ** me.


Hit back and forward in the browser and you see other people's generated images rather than your own, without hitting the generate button again.


Interesting. I’ve now seen the exact same image twice, about 10 minutes apart


What datasets do people use for applications like these?

Seems like you'd want huge amounts of photos of people (abundant publicly, though not necessarily free for use) with huge amounts of tagging for assorted clothes, body styles, accessories, nationalities, scene types (arduous?).


A lot of the tagging, from what I understand, comes from image recognition that detects some of that stuff. For the clothes you could just scrape clothing web sites, since they often categorize already.


Congrats on the slick design!

Ethnicity: American

What does that mean in latent space and does this mostly represent training bias?


It's also got "Irish" but not "Scottish" or "English". Very odd.


There's British though


Wonderful dystopia we are creating


Be careful at work...it sometimes generates a realistic nude even with clothing selected.


"Different Poses" section - the hands are all cut off in these pictures except for one single picture where the index finger is the longest finger on the right hand!


NSFW


quite the opposite. I get a lot of the outputs filtered without any NSFW prompting.

> We detected that generated image contains nude content. Try changing parameters.


Not _quite_ the opposite.

I clicked the female generation, and got a porn model posing nude. Without any provided guidance other than the clickable buttons.


oh, in a few iterations i got a nude sexy adult woman. clearly they're at risk of generating child porn (you can change the age to child or teen, though for obvious reasons I haven't tried it).


How does it compare to https://photoai.com by Pieter Levels?


When I refresh enough I get duplicates. How come?


This would be useful if you could upload reference images of the clothing items so it can be used for catalogs, etc.


Since we are already this far, I think we should be required by law to tag AI-generated data in its metadata.
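As a concrete illustration of what such tagging could look like, a provenance note can be embedded in an image file's metadata — for PNG, as a tEXt chunk. This is a minimal stdlib-only sketch that builds a 1x1 PNG carrying a made-up "ai-generated" keyword and model string; real provenance schemes (e.g. C2PA) are far more involved and cryptographically signed.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # PNG chunk layout: 4-byte length, type, data, CRC32 over type + data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def minimal_png_with_provenance(note: str) -> bytes:
    """Build a 1x1 grayscale PNG containing an 'ai-generated' tEXt chunk."""
    sig = b"\x89PNG\r\n\x1a\n"
    # IHDR: width=1, height=1, 8-bit depth, grayscale, default methods.
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    # tEXt payload: keyword, NUL separator, Latin-1 text.
    # Keyword and note value here are illustrative, not a standard.
    text = chunk(b"tEXt", b"ai-generated\x00" + note.encode("latin-1"))
    # IDAT: one scanline = filter byte 0 + one 8-bit gray pixel.
    idat = chunk(b"IDAT", zlib.compress(b"\x00\x80"))
    iend = chunk(b"IEND", b"")
    return sig + ihdr + text + idat + iend

png = minimal_png_with_provenance("model=example-gan-v1")
# The note survives inside the file and can be read back by any PNG parser.
```

The obvious weakness of any metadata-based mandate is that such chunks are trivially stripped, which is why signed-provenance proposals bind the claim to the pixel data instead.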


What would a company do with 10,000 photorealistic photos of an AI generated human... per month?


On-demand generation of NPCs for video games? Or background extras in movies?

Or maybe a people trying clothes on virtually.


Train their models


We know about all of the potential harms that deepfakes can cause, the problems with inherent bias in training data, etc.

Creating/publicising a tool that winks at these issues (consider the difference between the poses offered for 'male' and 'female' bodies) but does nothing to mitigate them - and a lot to enable them - is irresponsible at best.


It would be quite hard to make any AI tool that preemptively avoids the wide range of potential issues that you've mentioned. If tool makers are forced to always err on the side of caution, it's likely that the resulting tool ends up disappointing.

Only when published, and when put into the context of the entire work, could a creation be deemed harmful. A tool should not, for example, prevent you from making a bunch of images with ominous poses, from which you select one to use with an article that discusses the history of ominous poses.


Just because it's hard to make a tool that can't be used in negative ways doesn't mean that it's a good idea to make a tool that (charitably) makes specifically negative uses easy and (uncharitably) is deliberately designed for them.

Tool makers do err on the side of caution all the time - we **** out passwords so users don't share them as easily, we put safety catches in secateurs. "Build in safeguards against the obvious issues" is a basic design step.


- your critique is both vague, but at the same time touches a sensitive area, implying a wrongdoing by the tool authors that can't be refuted or fixed easily. What specifically bothers you? Consider that active Twitter discussions uncover and point out troublesome issues almost faster than the general public can understand and digest.

- assuming you found an egregious issue, do you also double down on maintaining that the tool is 'deliberately designed to make negative uses easy'? How so?

- I disagree with the 'safety catches' metaphor and would offer the 'hammer' metaphor instead.

- Actually, with the rapid development in this field I expect that anyone will be able to locally prompt for any content, even movies, soon, limited only by people's taste and imagination; with this realization I don't think I will follow up on this discussion that will surely be outdated in a minute.


Peritract already called out a specific issue. The male and female options come with different sets of selectable poses, and some of the female poses are pornographic in nature. This promotes the objectification of women.


> If tool makers are forced to always err on the side of caution, it's likely that the resulting tool ends up disappointing

I don't disagree with you entirely, but I still have the feeling like this will make a pretty good epitaph for humanity some day.


In that case enjoy our proof of concept:

https://app.engageusers.ai

Everything from realistic faces to realistic posts. We tried to make it as ethical as possible in multiple ways. But ultimately it is designed to spur conversation on topics that need more kickstarting engagement…


Perhaps more to the point:

All these AI content generators are still early stage. So it's kind of wild west for the time being.

The first cars were what? A horse carriage with an engine duct-taped onto it? Only when they became more numerous did things like traffic rules, reliable brakes & steering etc become important.

We're in the engine-with-wheels stage. Have fun, be happy.


Feel free to create your own software that mitigates those issues. The rest of us will use whichever tool performs the best and does what it's asked to do.


These all look like cartoons to me.


The sliders on the side didn't work for me when I hit update. Hug of death?


FYI: the site places 3 cookies on the visitor's computer without consent


Do they need consent?

One cookie looks like it just records whether or not a tooltip that they want to show to first time users has been shown. The other two appear to be some kind of session cookies.

They might count as strictly necessary cookies.


So, not only does this generate nightmare fuel at surprising rate, but it will /happily/ generate likenesses of well-known people by just adding their name in the prompt. So, that's terrifying and probably not going to end well...

All the best,

-HG


Nice seeing a nuxt-webapp in production.


really easy to jailbreak nudes



If I click on Refuse on a cookie banner it redirects me to Google. Yeah, no - off I go.


...prone to pron?


Refusing cookies links to Google. This is of course against GDPR.


Oh yeah, totally ready for prime time, hyper realistic, SFW filter works great, not at all hallucinations /massive_sarcasm

Actually NSFW, not safe for sanity. That's...not how body parts work:

https://generated.photos/human-generator/64e644f39c8c0400108...

Prompt was "young woman with tattoos in miniskirt" really nothing crazy there. But perhaps the latent space with that particular pose is particularly raunchy.


Yeah, this was my first attempt: https://generated.photos/human-generator/64e648819c8c0400088...

Not quite right. I am, however, impressed that the fingers are generally "mostly" correct.


Oof, I thought we banned thalidomide.

I wonder if their pose detection/interpolation struggles for rarer poses, eg "kneeling with legs splayed leaning forward" is quite specific in saucy contexts and fairly sparse in more typical model shoots, so the manifold gets a bit holey, and overlaps with similar poses like one knee up, one hand forward.


Yeah I think that's it.

Also, it's way too easy to make something that looks like (or basically is) child porn with this tool. I chose Adult but something else keeps triggering it to generate a child-like face like in the image I posted. And as you showed, it's easy to get accidental NSFW pics.


my first attempt created some kind of gym monster lol https://generated.photos/human-generator/64e64a2f412bec0009b...


This is actually an incredible example of "The longer you look the worse it gets".

She has a second set of boobs where her hips should be! That's not evolutionarily advantageous!


> That's not evolutionarily advantageous!

Although not my kind of thing, I'm pretty sure some people would select for that (in real life) if given the option. ;)


What a time to be alive!


Your description had me curious, the picture had me laughing out loud


Hermione’s been in the restricted section again…


Definitely the stuff nightmares are made out of!


This cured my depression.


It's very hard to filter NSFW content. Every site I tried, unstable diffusion, kawaix.com, Mage space, novel ai... They all have some content moderation on (to avoid CP, to keep payment processors happy...), but things leak.

Some are really bad at filtering. Kawaix is particularly terrible at it, because they are new, while mage have upped their game a lot but had many months to do so.

It feels like the 2000s again, and it's the wild west.

Plus when you have a horde of teenagers having a whole summer to try prompts from their bed, you get serious pen testing sessions.


There is a SFW filter?

I just let it generate a random woman with no prompt, and it gave me a pretty good result, except there is a mask on the face and literally bloody nude boobs; https://generated.photos/human-generator/64d67874568faa0007a...

edit; I just realized it put in a default prompt


and I got a 'We detected that generated image contains nude content. Try changing parameters.' despite not specifying any such thing


I've spent the last hour trying to coax it to spit out nsfw images and it definitely is not safe for work lol. I wouldn't even want to post some of the seeds I generated. Nudity is not prevented in this generator.


Looks like something from a hellraiser movie


If you refuse their tracking and marketing cookies it redirects you to google.com. Classy.


That violates EU law, and you can absolutely get a fine for this behaviour. As a digital service provider you can ask the user for permission to track non-essential information about them, but your service should work the same regardless of whether that user says yes or no.

If this service is hell-bent on violating your privacy, they will have to limit their offerings to mostly those living in dictatorships and immature democracies.


You can just hit the back button and use the website without it popping up again. I refused but they're probably still assigning cookies after I hit the back button.


I'm surprised browsers don't offer something like Docker so that each site is isolated to its own virtual environment.



The creator and maintainer of that extension has passed away in January.

https://github.com/stoically/temporary-containers/issues/618


Chrome profiles work exactly like this, you can set up any number of profiles and they all have their own configuration/sessions etc.

I use home and work profiles on my laptop for instance, works really well.


private/incognito window?


That forgets the whole session when you close it. I meant a way to isolate websites for tracking purposes but also continue to use it over time rather than throwing away all cookies.


It sounds like Chrome accounts might accomplish this for you. I have 7-8 profiles for different personas that seem to sandbox cookies and other identity-adjacent features quite well.


I wonder if their business model is tracking and marketing.


You have to wonder?


"Human generator is at full capacity, please try again later"


No wonder the birth rate has been dropping.


I wonder if it's actually overloaded or if there's a bug.


HN Effect


hug of death


pony tail.



