Do you think it's possible to decouple lighting effects from the generated 3D scene? Currently it seems to simply bake in the actual lighting conditions of the environment. In a way, that's a limitation. It would be much better if it could provide the base RGB color plus other material information (an SVBRDF decomposition). If that were possible, then these meshes could be reused in other environments.
I think it's definitely possible, though I don't know if the algorithms exist yet. You would probably need to capture the scene under different lighting conditions, and likely from multiple angles. But assuming there is some theoretical way to go from photos to a mathematical representation of material characteristics, I'd think we'll eventually have neural nets that can do the conversion.
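To make that concrete, here's a toy sketch of the classical version of that idea, Lambertian photometric stereo: given several photos taken under known, differing light directions, you can solve for a per-pixel albedo and normal that are independent of any one lighting setup. Full SVBRDF recovery (roughness, specular, etc.) is much harder, but the principle is the same. Everything below (NumPy, the function name, the toy light directions) is just for illustration, not any particular system's pipeline:

```python
# Minimal photometric-stereo sketch: recover per-pixel albedo and normals
# from images under known light directions. Assumes a Lambertian surface,
# distant point lights, and no shadows or specular highlights.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) grayscale captures; light_dirs: (K, 3) unit vectors.
    Returns per-pixel albedo (H, W) and surface normals (H, W, 3)."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                  # stack pixels: (K, H*W)
    L = np.asarray(light_dirs)                 # (K, 3)
    # Lambertian model: I_k = albedo * dot(n, l_k)  =>  I = L @ (albedo * n)
    b, *_ = np.linalg.lstsq(L, I, rcond=None)  # least-squares per pixel, (3, H*W)
    albedo = np.linalg.norm(b, axis=0)         # material term, lighting factored out
    normals = np.divide(b, albedo, out=np.zeros_like(b), where=albedo > 0)
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)

# Toy usage: a flat patch with albedo 0.8 photographed under three lights.
lights = np.array([[0, 0, 1], [0.5, 0, 0.866], [0, 0.5, 0.866]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.stack([0.8 * max(0.0, true_n @ l) * np.ones((4, 4)) for l in lights])
alb, nrm = photometric_stereo(imgs, lights)
print(alb[0, 0])  # ~0.8, recovered independently of any single lighting condition
```

The point is just that with one photo the problem is underdetermined (you can't tell a dark material from a dim light), but multiple lighting conditions give you enough equations per pixel to separate the two.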