Photogrammetry Guide (github.com/mikeroyal)
179 points by peter_d_sherman on July 30, 2022 | 43 comments



Photogrammetry can produce outstanding results with just the click of a button, as long as the lighting is good and there are enough photos to cover every angle.

At the Stanford museum, made with open source Meshroom and a cell phone camera: https://sketchfab.com/3d-models/parvati-cantor-arts-center-a...


And as long as you have good surface texture.

I have off-and-on attempted to use photogrammetry to measure things with very little surface texture, and consistently failed.

There's a reason basically every photogrammetry example you see is of stone statues or organic structures (trees, dirt, things with lots of texture at every scale).


Yeah, I tried to scan my hallway using Meshroom. It took about 3 hours to solve and completely failed on any of the white surfaces. OK, I guess that isn't entirely unexpected, but it's not like they were completely free of texture. The solve time was also very disappointing.

Unfortunately there seems to be a huge gap between Meshroom (the best open source solution AFAIK) and commercial solutions like Matterport, or even iPhone apps (yes, I know the iPhone has a depth camera, but even so).

There are also a ton of research results that are way better than Meshroom, but unfortunately they never come with code you can actually use.

This sort of thing: https://youtu.be/TZ1eToXQwN0


I wonder if it would work to project a microdot pattern onto those kinds of objects.

Obviously you then lose the ease of use of just pointing your cellphone at it, though.


That's kind of what the original Kinect depth camera did. It projected a known constellation of IR dots to help with the 3D reconstruction.


The problem with that approach is that the microdot pattern has to be fixed. SfM-type (structure-from-motion) approaches work by matching features between images to determine correspondences across views. If the microdot pattern changes between images, the matching fails.

AIUI, the Kinect and similar tools don't do SfM, but rather measure depth by looking at how the microdot pattern is distorted.
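
For the curious, here's a minimal sketch of that matching step using OpenCV's ORB features (the filenames are placeholders). With a fixed projected pattern the matches land on the same physical points across views; a pattern that changed between shots would make them meaningless.

    import cv2

    # Two overlapping views of the scene (hypothetical filenames)
    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in each image
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors across the views; SfM triangulates 3D points
    # from correspondences like these
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} candidate correspondences")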


You could try cutting some colored masking tape into angular pieces, and sticking them on until there are enough places to track.


Good idea, but I really shouldn't have to do that, and it still shouldn't take 3 hours.

The problem is that Meshroom is based on old methods from the Photosynth era.

I imagine eventually some researcher will go the extra mile and contribute a modern algorithm to Meshroom, because the rest of Meshroom (tools, interface, etc.) is really good.


Could you explain why those are things you "shouldn't have to do"?

Who owes you this functionality?


It's shorthand for "I believe this is a problem that can be solved by the technology rather than requiring a manual workaround", not "I am entitled to a solution".


I worked in computer vision a few years ago, and I was wondering if you could solve this with a camera flash. Let's say you take 2 pictures in quick succession, one with and one without a flash. Let's assume you know the intensity of the flash and that the 2 pictures are aligned pixel-to-pixel. Now, for each pixel, the colour difference between the 2 pictures is going to depend on the albedo of the surface (for a white wall it's going to be relatively constant) and the distance from the flash. Farther objects would have more similar colour between the two shots (they would be less affected by the flash since they are farther away), and closer objects would be more affected. You could write down the math for this and solve for "distance from camera" for each pixel.
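
A minimal sketch of that math in NumPy, assuming linear (not gamma-encoded) pixel values, perfect pixel alignment, a Lambertian scene facing the camera, pure inverse-square falloff from a flash at the camera, and a known constant albedo (the white-wall case); the function name and parameters are illustrative:

    import numpy as np

    def depth_from_flash_pair(img_flash, img_no_flash, flash_intensity, albedo=1.0):
        """Estimate per-pixel distance from a flash/no-flash image pair.

        Model: flash contribution ~= albedo * flash_intensity / d**2,
        so d = sqrt(albedo * flash_intensity / difference).
        """
        diff = img_flash.astype(np.float64) - img_no_flash.astype(np.float64)
        if diff.ndim == 3:                 # collapse RGB to one channel
            diff = diff.mean(axis=2)
        diff = np.clip(diff, 1e-6, None)   # avoid division by zero
        return np.sqrt(albedo * flash_intensity / diff)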


> I was wondering if you could solve this with a camera flash

Yeah, I think you can - there have been a few "multi-flash 3D reconstruction" approaches proposed in the last 5-10 years, e.g. [1], possibly also [2].

[1] https://www.nature.com/articles/srep10909 [2] https://www.researchgate.net/publication/220829727_Shape_fro...


It is tricky because you also have indirect lighting.


If you subtract the 2 images, the indirect lighting gets removed and you're left with only the flash contribution, which depends only on albedo and distance.


You're missing secondary reflections: light from the flash itself bounces between surfaces, and subtraction doesn't remove that.


Scan the World is a fantastic open-source photogrammetry project to digitise notable artwork, sculpture, and other objects of cultural significance. I've taken a scan for this project myself using Meshroom, one of the apps mentioned by this guide. It was fun and surprisingly easy; the scan turned out decent despite only using photos taken with my iPhone SE.

https://www.myminifactory.com/scantheworld/full-collection

VRChat has a museum world built with a selection of these models ("Ancient Art Museum"). Fun to visit in a VR headset.


I clicked a few and they don't seem to have any texture maps. Is that true for most/all?


Good question, I don't know the answer. The model I scanned was just a marble bust so it doesn't have a texture map. The models I saw in VR had mostly what appeared to be boilerplate marble/granite/rock textures.


This is a fantastic resource for someone who's just getting interested in photogrammetry via Neural Radiance Fields (NeRF). I'm curious if those should be included here.

As an aside, perhaps this repo should be renamed to start with `awesome`, since that helps with discoverability on GitHub.


The outputs of photogrammetry are still more usable today for 3D work, like importing the scenes into Unity and editing them. NeRFs are great and getting better every week, but the output is mostly a video-camera fly-through of the modeled scene. NeRFs will really start to shine once they have common rendering and usability in game-engine pipelines, etc.


In my experience NeRF tends to work better for objects that photogrammetry struggles with, like transparent objects. By using marching cubes you can also export scenes as a mesh. instant-ngp (https://github.com/NVlabs/instant-ngp) has done an amazing job of making NeRF accessible, but you still need camera positions from other software such as COLMAP.
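
For what it's worth, here's a minimal sketch of that marching-cubes export step using scikit-image; the synthetic sphere is a stand-in for wherever you'd actually sample the trained NeRF's density field:

    import numpy as np
    from skimage import measure

    # Stand-in density grid: positive inside a unit sphere, negative outside
    n = 64
    x, y, z = np.mgrid[-1:1:n*1j, -1:1:n*1j, -1:1:n*1j]
    density = 1.0 - np.sqrt(x**2 + y**2 + z**2)

    # Extract the iso-surface where the density crosses the threshold
    verts, faces, normals, values = measure.marching_cubes(density, level=0.0)

    # Write a minimal Wavefront OBJ (faces are 1-indexed)
    with open("mesh.obj", "w") as f:
        for vx, vy, vz in verts:
            f.write(f"v {vx} {vy} {vz}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")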


You may find this tool interesting:

https://picto3d.com/



There are now many 3D LiDAR scanning apps out there and IMO Polycam is actually one of the weaker ones.

My personal favourites are Scaniverse and Dot3D (which is currently in open beta).


Can confirm that Scaniverse is amazing.


It's worth trying out Polycam even without a LiDAR iPhone; the standard photogrammetry is pretty cool.


I've been interested in trying something like this for a while but I don't have an iPhone. I've thought about getting one just for this purpose.

Does anyone know of a cheap LiDAR USB camera or something like that you could use with software like poly.cam?


Polycam supports Android and the web now; you can upload a bunch of JPEGs online. I've used it to create photogrammetry models from drone captures. Good enough for sharing with others, but not on par with DroneDeploy or ODM for pro surveying.


Interesting - thanks!


Does this app do anything special, or is it just $8 a month for something that nobody's bothered to make an open source version of yet?


Leica Photogrammetry Suite (LPS) (largely sourced from 1980s-era ERMapper) directs to a dead Hexagon link; it appears to have been rebranded as ERDAS IMAGINE 2020: https://download.hexagongeospatial.com/downloads/imagine/erd...

Hexagon Geospatial seems to be always reshuffling its links.


I got surprisingly good results out of a Kinect 2 and a program called kscan3D. It feels like a shame the Kinect 2 didn't get the kind of dev support the first one did, because the tech was seriously impressive for the time.

Pretty sure photogrammetry would do better now, if only due to the drastically better dev support. Need to get around to trying it sometime to compare.


My partner makes ceramics. It would be nice to make some 3D models of the work she creates for her (upcoming) website. Any thoughts on a workflow to do this quickly and easily?

I would expect to take pictures of the item, then flip it over and take pictures of the bottom.

Happy to write some glue code if there are tools that can facilitate this in a mostly unattended way.


I'm doing exactly this for my mum at the moment, trialing various apps and methods. Best so far has been Polycam (though only with photos; hers is an older iPhone), uploaded to Sketchfab for embedding. That said, I think we're content not shooting the bottoms of the various pieces, since they're presented on a table or a wheel. Good luck!


Thanks - I'll have a go with that!


Well, what I usually use is Meshroom, where you simply add your photos and let it reconstruct the object. I haven't had particularly good results with it (likely due to my photography "skills"), but it was good enough for what I needed. You could check out a guide for it, since it should be the easiest to start with.


I tried Meshroom originally, but it was amazingly slow compared to some of the commercial offerings. Searching, I see there is some hidden flag for GPU acceleration, so I'll see if that helps.


In the past I did exactly this, creating 3D models of ceramics made by my partner, with my Xperia XZ2 phone. It had a built-in app for this. Though I have nothing to compare them to, the results were quite good; I even used them to 3D print copies.


This is something that QuickTime VR did reasonably well decades ago. So consider using a turntable and capturing a short video clip.


I purchased a Pivo to do exactly this. The problem is that with the fixed angle you can't always capture enough of both the inside and the outside of a bowl. But maybe I can use this to assist with capture...


I've played around with Meshroom using both my phone camera and a Four Thirds camera, but never managed to get an acceptable 3D model.

I suppose it was because I did not have a studio, i.e. a monocolor diffuse background.

Meshroom always identified most of the points in the background and left only a few points for the object itself. I soon gave up.


Are there methods to scan the soil too?


Not sure what exactly you mean by scan, but land surveyors use drone photogrammetry (and LiDAR when appropriate) to create TIN (triangulated irregular network) surface models of bare earth, yes.

But if it's a smaller area typically it's not worth using a drone and we'd use a terrestrial scanner like a Faro.

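For anyone curious what a TIN is concretely: a 2D Delaunay triangulation of the points' x/y coordinates, with each vertex keeping its measured elevation. A minimal sketch with SciPy, where "ground_points.xyz" is a hypothetical file of x/y/z ground points:

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    # Hypothetical ground points exported from the photogrammetry step
    points = np.loadtxt("ground_points.xyz")   # columns: x, y, z

    # The TIN: triangulate in plan view, keep z at each vertex
    tin = Delaunay(points[:, :2])
    print(f"{len(tin.simplices)} triangular facets")

    # Query the modeled ground elevation anywhere inside the surveyed area
    surface = LinearNDInterpolator(points[:, :2], points[:, 2])
    print(surface(1.5, 2.0))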



