
To me it sounded really similar to a Generative Adversarial Network (a GAN). With a GAN you have one network (the discriminator) whose job is to classify an image as real or generated (is this picture really a person?) and another (the generator) whose job is to fool the classifier by producing images that look like a person.
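The adversarial loop described above can be sketched on toy scalar data. This is an illustrative sketch, not a real GAN: the "generator" just shifts noise by a learned offset, the "discriminator" is a logistic classifier, and all the parameter names and hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 4.0          # "real data" is drawn from N(4, 1)
theta = 0.0              # generator parameter: fake = noise + theta
w, b = 0.0, 0.0          # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for _ in range(3000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    dr, df = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    b += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator: gradient ascent on log D(fake) (the non-saturating loss)
    df = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - df) * w)
```

As the two players compete, the generator's offset `theta` drifts toward the real data's mean: the only way to fool the discriminator is to produce samples that look like the real distribution.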

This case is a little bit of the reverse: it focuses on making the computer vision component (the discriminator) match the visual content that has already been generated.

Seems like these types of "adversarial" approaches will be used in lots of different domains, as so far they've produced some pretty amazing results.




Speaking of doing things the other way around, could ML techniques be useful for tuning shader parameters and light placements in order to make a 3d scene modeled by a human look as close as possible to a reference photo?

(If so, it should probably be done with multiple reference photos from different angles, so that the shaders and lights aren't tuned in a way that only looks good from the single angle the computer was looking from while tweaking.)
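The idea above can be sketched as a plain optimization problem, no ML required. This toy example invents a 1-D "renderer" with two made-up parameters (an ambient level and a light gain), generates "reference photos" from several viewing angles, and recovers the parameters by finite-difference gradient descent; summing the error over all angles is what prevents the single-viewpoint overfitting mentioned in the parenthetical.

```python
import numpy as np

pixels = np.linspace(0.0, np.pi, 32)
angles = [0.0, 0.8, 1.6]          # several viewpoints, per the comment above

def render(ambient, gain, angle):
    # toy renderer: ambient term + clamped Lambertian-style falloff
    return ambient + gain * np.maximum(np.cos(pixels - angle), 0.0)

true_params = np.array([0.3, 1.5])                 # hypothetical ground truth
refs = [render(*true_params, a) for a in angles]   # "reference photos"

def loss(params):
    # summing over ALL angles keeps the fit from only looking
    # right from one viewpoint
    return sum(np.mean((render(*params, a) - r) ** 2)
               for a, r in zip(angles, refs))

params, lr, eps = np.array([1.0, 0.0]), 0.1, 1e-4
for _ in range(500):
    grad = np.zeros(2)
    for i in range(2):            # central finite-difference gradient
        step = np.zeros(2)
        step[i] = eps
        grad[i] = (loss(params + step) - loss(params - step)) / (2 * eps)
    params -= lr * grad
```

A real renderer isn't this smooth or cheap to differentiate, which is where learned components (or differentiable rendering) would come in, but the overall structure of the problem is the same.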


Would be even nicer if it could be trained on unpaired datasets (à la CycleGAN, https://arxiv.org/abs/1703.10593).


Sure, I can imagine that. I'd be surprised if there aren't SIGGRAPH papers to that effect. At least initially it would be more of an optimization problem, but ML could help as well.



