Oh yes. This could be a nice interface for providing other inputs we (plan2scene) want as well, like photo-to-room assignments. Then plan2scene can create 3D houses with appropriate materials. Thank you for sharing this link.
Do you think it's possible to decouple lighting effects from the generated 3D scene? Currently it seems to simply copy the actual lighting conditions in the environment, which is a limitation in a way. It would be much better if it could provide the RGB color plus other material information (an SVBRDF decomposition). If that were possible, these meshes could be used in other environments.
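To make concrete what I mean by decoupling lighting: a rough sketch of re-shading a surface once you have lighting-free per-pixel albedo and normal maps (a full SVBRDF would also carry roughness/specular terms). This is just an illustrative Lambertian re-lighting, not any particular method's output format.

```python
import numpy as np

def relight_diffuse(albedo, normals, light_dir, light_color):
    """Re-shade a surface under a new directional light (diffuse term only).

    albedo      : (H, W, 3) base color with lighting removed
    normals     : (H, W, 3) unit surface normals
    light_dir   : (3,) unit vector pointing toward the light
    light_color : (3,) RGB intensity of the new light
    """
    ndotl = np.clip(normals @ light_dir, 0.0, None)    # (H, W) cosine term
    return albedo * ndotl[..., None] * light_color     # (H, W, 3) re-lit image
```

With a decomposition like that, the same mesh could be dropped into a scene with completely different lighting instead of carrying the captured lighting baked into its textures.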
I think it's definitely possible, though I don't know if the algorithms exist yet. You would probably need to capture the scene under different lighting conditions, and probably from different angles. But assuming there is some theoretical way to go from photos to a mathematical representation of material characteristics, I'd think we'll eventually have neural nets that can do the conversion.
This can detect rooms as polygons, doors/windows as line segments, and bounding boxes around objects (e.g. cabinets, sinks, etc. indicated on floorplans).
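For a sense of what that output could look like as a data structure, here is a hedged sketch; the class and field names are hypothetical, not the tool's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in floorplan coordinates

@dataclass
class Room:
    label: str                    # e.g. "kitchen", "bedroom"
    polygon: List[Point]          # room boundary as a closed polygon

@dataclass
class Opening:
    kind: str                     # "door" or "window"
    segment: Tuple[Point, Point]  # line segment along a wall

@dataclass
class FixedObject:
    label: str                    # e.g. "cabinet", "sink"
    bbox: Tuple[Point, Point]     # axis-aligned box (min corner, max corner)

@dataclass
class FloorplanParse:
    rooms: List[Room] = field(default_factory=list)
    openings: List[Opening] = field(default_factory=list)
    objects: List[FixedObject] = field(default_factory=list)
```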
This is definitely a thought I have considered. It could potentially ease a lot of the hassle related to 3D scanning of houses by real-estate companies. Very often, those scans require expensive equipment, professional expertise to use that equipment, and considerable time and effort to capture the entire residence. Even then, you will find many holes in those scans and floating debris. With our work, one only needs a sketch of the floorplan and photos taken from that house. Plan2Scene will use that data to infer the appearance of unseen surfaces.
For instance, look at this sample 3D-scanned house: https://matterport.com/en-gb/media/2486. Make sure to switch to the dollhouse view from the toolbar. There are many holes in the scan.
Estimating the global camera pose using non-overlapping photos is a challenging problem. This is doable if the photos densely cover the house (e.g. video frames).
However, if the global camera poses are available, we can detect surfaces in photos and project them reasonably well into 3D. In the past, I have used this work to achieve something similar: https://github.com/NVlabs/planercnn
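As a rough illustration of that projection step, here is a minimal sketch of lifting a detected surface into world coordinates, assuming you already have (a) a binary mask for the surface (e.g. from a planar detector like PlaneRCNN), (b) per-pixel depth, (c) camera intrinsics, and (d) a global camera pose. The function name and interface are my own, not PlaneRCNN's API.

```python
import numpy as np

def backproject_mask(mask, depth, K, R, t):
    """Lift masked pixels of a detected surface into 3D world points.

    mask  : (H, W) bool array marking the detected surface
    depth : (H, W) z-depth in metres along the optical axis
    K     : (3, 3) camera intrinsics
    R, t  : camera-to-world rotation (3, 3) and translation (3,)
    """
    v, u = np.nonzero(mask)                       # pixel rows/cols inside the mask
    z = depth[v, u]                               # corresponding depths
    pix = np.stack([u, v, np.ones_like(u)], 0)    # homogeneous pixel coords (3, N)
    rays = np.linalg.inv(K) @ pix                 # rays in the camera frame, z = 1
    pts_cam = rays * z                            # scale each ray to its depth
    pts_world = (R @ pts_cam).T + t               # transform into the world frame
    return pts_world                              # (N, 3) points, ready for meshing
```

The accuracy of the final placement then hinges almost entirely on how good the global poses are, which is why the non-overlapping-photos case is the hard part.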
As far as I can tell it's not directly related to yours, but are you aware of any software that uses, say, Android OS and a camera or pre-existing photos of the inside of a house/office/building to create a 3D map? Matterport seems like it might, but their Android app is not going so well.
Ben’s reviews are the best I’ve seen, but he is far from the only reviewer in this space.
Unfortunately, most of the software in this space does not actually generate a floor plan, even if it does create a 3D model of the space (like Matterport). There is one program he highlighted that is bundled with a service from the company: you can use it to process your pictures and generate a floor plan, among other things, but some of that work is still done by human beings.
I’m still looking for a good consumer grade program that can do photogrammetry of a space and turn that into a floor plan. I’ve seen attempts to do this with phones and cameras, and the LIDAR scanner that Apple has recently introduced has improved the situation, but IMO it is still not there yet.
But please correct me if I’m wrong. I would love to be wrong.
Ahh, yes — they used to support the Structure.io scanner that you could strap to an iPad or iPhone.
I bought one of those. I could never get it to work.
Edit: the Canvas.io software really wants the latest iPhone or iPad devices with LIDAR scanners built-in. It will supposedly run on lesser hardware, but with greater error in the constructed 3D map. Fine for a sample to get an idea of whether or not it might work for you, but not really enough to actually use for anything useful.
I was reminded of a program called MagicPlan. See https://youtu.be/gEUk7YzfTcc for a video demo of using it to scan an apartment with an iPhone 12 Pro and its LIDAR sensor.
But IMO, this still isn’t good enough. It’s a huge improvement over the corner mode, which is what you have to use if you aren’t using a device with a LIDAR sensor, but it’s still not enough.