
Like so:

http://vision.princeton.edu/projects/2016/SSCNet/

There's similar work at (at least) Berkeley and Stanford.




That's cool. I liked the use of scene rendering to supply training data to the network.

It'd be nice to see texture prediction on some of the voxels too, i.e. painting the occluded voxels in the scene as well as texturing the ones visible in the image.
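
Roughly what I have in mind, as a toy sketch (nothing from the paper; the grid size, camera model, and colors are all made up):

    # A voxel grid that carries RGB alongside occupancy, with an
    # "occluded" mask marking the voxels a network would have to paint.
    import numpy as np

    GRID = 32
    occupancy = np.zeros((GRID, GRID, GRID), dtype=bool)
    color = np.zeros((GRID, GRID, GRID, 3), dtype=np.float32)

    # Toy scene: a solid box (the "bed") occupying part of the grid.
    occupancy[8:24, 8:24, 0:8] = True

    # Camera looks down the +y axis: for each (x, z) column, the first
    # occupied voxel is visible and can take its color from the image;
    # every occupied voxel behind it is occluded and must be predicted.
    visible = np.zeros_like(occupancy)
    for x in range(GRID):
        for z in range(GRID):
            ys = np.nonzero(occupancy[x, :, z])[0]
            if ys.size:
                visible[x, ys[0], z] = True
                color[x, ys[0], z] = [0.6, 0.3, 0.2]  # sampled from the RGB image

    occluded = occupancy & ~visible  # these voxels need texture prediction
    print(f"visible: {visible.sum()}, occluded to paint: {occluded.sum()}")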

Texture accuracy could be measured by rendering the other side of the bed and seeing how close the texture predictions were.
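
Something like this for the scoring (my own sketch, not from the paper; pred_face/true_face are placeholders for the rendered hidden face):

    # Render the predicted texture of the hidden face, render the
    # ground-truth face from a held-out view, and compare per pixel,
    # e.g. with PSNR.
    import numpy as np

    def psnr(pred, truth, max_val=1.0):
        """Peak signal-to-noise ratio between two images in [0, max_val]."""
        mse = np.mean((pred - truth) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

    # Placeholders: these would come from rendering the occluded side
    # of the bed with predicted vs. actual textures.
    pred_face = np.random.rand(64, 64, 3).astype(np.float32)
    true_face = np.random.rand(64, 64, 3).astype(np.float32)
    print(f"texture PSNR: {psnr(pred_face, true_face):.2f} dB")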

Now this would be quite a challenge, but if you could train a network to predict D given RGB, you'd have RGBD and could maybe use internet video to recover some structure. Use something like a SLAM algorithm to get the camera position, then detect when a model is viewed from the occluded side, and you'd get a lot of texture-prediction data from real-world internet video.
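
A minimal sketch of the D-from-RGB piece (a toy PyTorch net, nowhere near a real monocular depth model, just to show the shape of the problem):

    # Tiny fully convolutional net mapping a 3-channel image to a
    # 1-channel depth map; training pairs could come from SLAM-posed
    # video as described above.
    import torch
    import torch.nn as nn

    class TinyDepthNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                nn.Softplus(),  # depth is non-negative
            )

        def forward(self, rgb):
            return self.net(rgb)  # (B, 1, H, W) depth, same H, W as input

    rgb = torch.rand(1, 3, 128, 128)       # a video frame
    depth = TinyDepthNet()(rgb)            # the predicted D channel
    rgbd = torch.cat([rgb, depth], dim=1)  # the RGBD you'd feed downstream
    print(rgbd.shape)  # torch.Size([1, 4, 128, 128])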



