> we could learn without the need of a differentiable renderer or graphics operators, right?

No. The parent post's statement that "a pixelwise error can be computed and backpropagated to the CNN" is only possible if the renderer is differentiable.
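
To make that concrete, here's a minimal PyTorch sketch, where render() is a hypothetical stand-in for a differentiable renderer (any chain of differentiable graphics ops would do):

    import torch
    import torch.nn.functional as F

    def render(params):
        # Hypothetical differentiable "renderer": a toy function built
        # only from differentiable ops, mapping 16 scene parameters to
        # a 1x1x32x32 image tensor.
        return torch.sigmoid(params.view(1, 1, 4, 4)).repeat(1, 1, 8, 8)

    params = torch.randn(16, requires_grad=True)
    target = torch.rand(1, 1, 32, 32)   # image we're trying to match

    loss = F.mse_loss(render(params), target)  # pixelwise error
    loss.backward()     # works only because render() is differentiable
    print(params.grad)  # gradient of the pixel loss w.r.t. scene params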


Got it. So suppose we have an external renderer: we could learn parameters that tweak the rendered scene, get the rendered pixels, and then compute pixelwise errors between them and some target image we're optimizing for. In that case, do we still need a differentiable renderer, in your opinion?

Update:

It would require more training cycles and would not be as "atomic" as iterative gradient tweaks, but it seems possible.

At the same time, I wonder whether having the loss function talk to some external renderer would make it possible to mix both approaches.
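
A rough sketch of what such a loop might look like, with external_render() as a made-up stand-in for a black-box renderer we can call but not backpropagate through:

    import numpy as np

    def external_render(params):
        # Stand-in for an external, non-differentiable renderer: the
        # quantization step below breaks any gradient path.
        img = np.tanh(np.outer(params, params))  # toy 16x16 "image"
        return np.round(img * 8) / 8

    def pixel_loss(params, target):
        return float(np.mean((external_render(params) - target) ** 2))

    def tweak_step(params, target, sigma=0.01, trials=32):
        # Try a batch of random parameter tweaks and keep the best one.
        # Every trial costs a full render, which is why this takes far
        # more cycles than one backprop step through a differentiable
        # renderer.
        best, best_loss = params, pixel_loss(params, target)
        for _ in range(trials):
            cand = params + sigma * np.random.randn(*params.shape)
            cand_loss = pixel_loss(cand, target)
            if cand_loss < best_loss:
                best, best_loss = cand, cand_loss
        return best, best_loss

    target = external_render(np.random.randn(16))
    params = np.random.randn(16)
    for _ in range(100):
        params, loss = tweak_step(params, target)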


How would you learn parameters to tweak the rendered scene if the renderer is not differentiable, and you can't backpropagate through the renderer to calculate the appropriate parameter adjustments from the pixelwise errors?

I suppose you could theoretically do it with some trial-and-error method or grid search or something like that, but it's going to be computationally infeasible in the general case; the pixelwise errors only become practically useful if you have an uninterrupted differentiable/'backpropagatable' path from your parameters to the pixels.
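
To put a number on that: even the cheapest gradient substitute, central finite differences, needs two full renders per parameter per step (loss_fn here is assumed to wrap a render call plus the pixelwise error):

    import numpy as np

    def fd_gradient(loss_fn, params, eps=1e-3):
        # Central finite differences: 2 renders per parameter, so a
        # scene with N parameters costs 2*N renders for ONE gradient
        # estimate -- versus a single forward + backward pass when the
        # renderer is differentiable.
        grad = np.zeros_like(params)
        for i in range(len(params)):
            step = np.zeros_like(params)
            step[i] = eps
            grad[i] = (loss_fn(params + step) - loss_fn(params - step)) / (2 * eps)
        return grad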


Yes, the required compute would be larger and we would lose the backprop path like you said, but it seems practical in some ways, and it's actually guiding approaches like https://nv-tlabs.github.io/meta-sim/
