Hacker News

They're not just storing the albedo; they're optimizing spherical harmonics to represent the color in an anisotropic (view-dependent) way, which is why they're calling it a radiance field. Radiance fields capture both light intensity (including color) and direction. They explain in the paper that it's very difficult to estimate good normals from the sparse point cloud they start with (or rather, that's taken as a given, produced in an earlier step using COLMAP), and that the gaussians don't use normals. As a first attempt, you could probably make a point cloud from the gaussians and then use one of the existing techniques to estimate their normals.
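To make the spherical-harmonics idea concrete: each gaussian stores SH coefficients per color channel, and the rendered color is the SH basis evaluated at the view direction, dotted with those coefficients. Here's a minimal degree-1 sketch (the actual method goes up to degree 3; the coefficient layout here is my own illustration, not the paper's storage format):

```python
import numpy as np

def eval_sh_color(sh_coeffs, view_dir):
    """Evaluate view-dependent RGB from degree-0/1 spherical harmonics.

    sh_coeffs: (4, 3) array -- one DC coefficient plus three degree-1
               coefficients, per color channel (hypothetical layout).
    view_dir:  unit vector from the camera toward the gaussian's mean.
    """
    x, y, z = view_dir
    # Real SH basis values for l=0 and l=1 (standard constants).
    basis = np.array([
        0.28209479177387814,       # Y_0^0  (constant / DC term)
        -0.4886025119029199 * y,   # Y_1^-1
        0.4886025119029199 * z,    # Y_1^0
        -0.4886025119029199 * x,   # Y_1^1
    ])
    return basis @ sh_coeffs  # (3,) RGB

# With only the DC coefficient set, the color is view-independent;
# the degree-1 terms add the directional variation.
```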

Remember that it's a bit tricky to talk about depth when the gaussians have both a position (the mean) and a size (the covariance). The bicycle spokes are made up of long, thin splats; what single depth value do you assign to one of those? That's why I think you'd have to sample new points from them as a first step.
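The sampling step I mean is straightforward: draw points from each gaussian using its mean and covariance, so a long thin splat contributes a spread of depths along its major axis rather than one ambiguous value. A sketch (my own illustration, not from the paper):

```python
import numpy as np

def sample_splat_points(mean, cov, n, seed=None):
    """Draw n points from one 3D gaussian splat (mean + covariance).

    An anisotropic covariance (one large eigenvalue) spreads the samples
    along the splat's major axis, giving per-point positions/depths
    instead of a single depth for the whole splat.
    """
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n)  # (n, 3)
```

Running this over every gaussian gives an ordinary point cloud you can feed to an off-the-shelf normal-estimation method.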

https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/




I wasn't saying that you'd estimate normals from the point cloud. You'd need to estimate the normals separately and store the world position and world normal along with the color. This should be possible, since those values can be represented as color textures, so you should be able to construct something that renders a normal map and a depth map from any angle, just like this currently renders the color.
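Storing a world normal "as a color" is just the standard normal-map trick: remap each component from [-1, 1] into [0, 1] so it fits in an ordinary texture channel, and invert the mapping when you read it back. A minimal sketch:

```python
import numpy as np

def encode_normal(n):
    """Map a unit normal from [-1, 1]^3 into [0, 1]^3 so it can be
    stored in an ordinary color texture (standard normal-map encoding)."""
    return 0.5 * (np.asarray(n, dtype=float) + 1.0)

def decode_normal(rgb):
    """Recover the unit normal from its color-texture encoding,
    renormalizing to undo quantization/filtering error."""
    n = 2.0 * np.asarray(rgb, dtype=float) - 1.0
    return n / np.linalg.norm(n)
```

World position (or depth) needs more range than an 8-bit channel offers, so in practice you'd use a floating-point render target for that attribute rather than this [0, 1] remap.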



