Nice! Glad to hear it’s not just for research nerds these days. I’ve done a fair bit of experimentation with photogrammetry and it does work well for the built environment.

I have an application, however, where I want to capture an outdoor forest scene with a video camera and convert it into a video game/simulation scene that renders efficiently. It turns out that's a much more challenging problem, one that as far as I know no one has actually solved.

There’s lots of neat research though, and it will be possible some day. This bit of research is pretty cool: https://youtu.be/JuH79E8rdKc


This is actually a very interesting video. The reflection effects look very realistic.


Yah, it’s cool stuff! Currently NeRF takes a very long time to train and render, but folks are working on improving that.
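For a sense of why it's slow: every pixel is rendered by querying the network at many sample points along a camera ray and integrating the results. A rough sketch of that volume-rendering step in Python (nothing here is from a real codebase; `field` stands in for the trained network):

    # Sketch of NeRF-style volume rendering for ONE ray. `field` is a
    # stand-in for the trained network mapping 3D points to (density, color);
    # real NeRF also conditions on view direction and samples hierarchically.
    import numpy as np

    def render_ray(field, origin, direction, near=0.1, far=6.0, n_samples=128):
        t = np.linspace(near, far, n_samples)            # depths along the ray
        points = origin + t[:, None] * direction         # (n_samples, 3)

        # One network query per sample point, per ray, per pixel:
        # this is where the time goes.
        sigma, rgb = field(points)                       # (n,), (n, 3)

        # Numerical quadrature of the volume rendering integral.
        delta = np.append(np.diff(t), 1e10)              # last segment open-ended
        alpha = 1.0 - np.exp(-sigma * delta)             # per-segment opacity
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
        weights = alpha * trans                          # contribution per sample
        return (weights[:, None] * rgb).sum(axis=0)      # composited pixel color

Multiply that by a few hundred thousand rays per frame and it's easy to see why real-time rendering is the hard part.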


Do you think it's possible to decouple lighting effects from the generated 3D scene? Currently it seems to simply bake in the actual lighting conditions of the environment, which in a way is a limitation. It would be much better if it could provide the RGB color plus other material information (an SVBRDF decomposition). If that were possible, these meshes could be used in other environments.
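To make the entanglement concrete: what the cameras observe is outgoing radiance, and in the rendering equation the material term and the incident lighting only ever appear multiplied together inside an integral,

    L_o(x, \omega_o) = \int_\Omega f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

where f_r is the SVBRDF we'd want and L_i is the lighting that currently gets baked in. A single capture constrains only the product, which is why extra assumptions or extra captures are needed to split them.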


I think it's definitely possible, though I don't know if the algorithms exist now. You would probably need to capture the scene under different lighting conditions, and probably from different angles. But assuming there is some theoretical way to go from photos to a mathematical representation of material characteristics, I'd think we'll eventually have neural nets that can do the conversion.
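To sketch why multiple lighting conditions help (all names here are hypothetical, and `render` stands in for a differentiable forward renderer): the material parameters are shared across every capture while each lighting condition gets its own estimate, so the optimizer can only explain appearance changes between captures through the lighting term.

    # Hypothetical inverse-rendering objective over captures taken under
    # several lighting conditions. `material` is shared; `lightings[k]` is
    # the estimate for condition k; `captures[k]` holds (view, photo) pairs
    # shot under that condition from different angles.
    def inverse_rendering_loss(render, material, lightings, captures):
        loss = 0.0
        for lighting, photos in zip(lightings, captures):
            for view, photo in photos:
                pred = render(material, lighting, view)  # re-render the view
                loss += ((pred - photo) ** 2).mean()     # photometric error
        return loss

Minimizing something like this with a differentiable renderer is the usual analysis-by-synthesis setup; a neural net would amortize the same objective.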