Design an Enclosure Using Photogrammetry (smartsolutions4home.com)
145 points by detaro on July 17, 2021 | 17 comments



This is the type of article that Hacker News was made for!

It would be cool if you could print console controllers that mold to the player's exact hand dimensions (a teenager and an adult have very different hands).


3D printing materials are pretty weak, and game controllers have a lot of excessive force misapplied to them; it would be really expensive to print replacement housing parts over and over again.


Depends on the geometry of the object, orientation, print settings and material.

This being a closed shape, split in half, printed hollow-face down and supported by the inner structure _and_ the other half?

I'm pretty sure you could use the cheapest PLA filament on the market, and you wouldn't be able to break it with all your strength, even at very weak settings (2 perimeters, 20% infill).

Want to factor in the possibility of the controller being thrown on the ground? The next cheap step is to use PETG: higher impact strength, almost perfect layer bonding if printed correctly. By the time the shell is broken, the components are too.

If this were a controller in the shape of a PS controller, it would be different due to the possibility of leverage. Still, we now have FDM filaments which are good enough even for that.

You always need to factor in process limitations. What you don't see in everyday objects is how they're designed to fit the manufacturing process that was chosen for them, and optimized accordingly. FDM has limitations too, and one has to design with those in mind.


Have you ever used a 3D printer before? I'm asking because I had similar misconceptions about 3D-printed materials before my first print, and was quickly proven wrong.

Even standard PLA prints are really sturdy. You can easily create game controller-sized parts with 20-30% infill that almost nobody could break with their bare hands. It'll certainly survive anything that won't also break the internals.


They're only weak if you print at the default settings with 2 or 3 perimeters. Crank those numbers up.

A handheld controller that can't be broken should cost like $4 unless you're using some luxury filament.
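
Back-of-envelope check of that figure, with assumed numbers (the shell mass and filament price here are guesses, not measurements):

    # Rough material cost for a controller-sized shell, printed in PLA.
    part_mass_g = 80                 # assumed: ~50-100 g for two shell halves
    pla_price_per_kg = 20.0          # assumed: commodity PLA price in USD
    cost = part_mass_g / 1000 * pla_price_per_kg
    print(f"material cost: ${cost:.2f}")   # ~$1.60, comfortably under $4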


In my experiments with photogrammetry, instead of step 7 (adding texture), I have tried mixing two colors of kinetic sand. It works and gives enough texture for matching, provided that the pictures are close-up enough, sharp, and high-resolution.

I am always impressed by the ability of computer vision to match patches of similarly random textures.
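
For anyone curious, here's a minimal sketch of that kind of matching using OpenCV's ORB features (the file names are placeholders; photogrammetry tools typically use SIFT-like features, but the idea is the same):

    import cv2

    # Two overlapping photos of the textured object (placeholder names).
    img1 = cv2.imread("shot_01.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("shot_02.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching; crossCheck keeps only mutual best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} candidate correspondences")

The random speckle is exactly what makes each patch distinctive enough to match.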

Kinetic sand isn't great at holding a shape for modelling, though, and the sand grains are a little too small, which means you must take great care when taking the pictures.

There is probably room in the market for a product: a mixed-color clay with the right grain size and texture for photogrammetry.

One last question I have is: how do I unmix my kinetic sand to get the blue sand in the blue castle and the red sand in the red castle? How long does it take before the kids mix all the colors when you buy them kinetic sand? Is there some solvent for the paint I can put my sand in to get transparent sand, and then plunge it in some water-based paint to get some color back? Do different colors have different grain sizes so they can be separated by sieving? Or is it just an educational tool to teach kids early about irreversibility?


I used to do research into digital image correlation, and the best method is to sputter black and white paint onto the object using an airbrush.


This has to be one of the 3D printing "killer apps." 3D printing may have made prototyping an overall better experience, but something like this simply wouldn't have been done without it.


Apple is adding a photogrammetry API to macOS Monterey for this kind of scanning.

https://developer.apple.com/augmented-reality/object-capture...


I'm really curious about the photogrammetry. It seems to share similarities with computed tomography, only it's probably mostly for visible light with opaque surfaces. Like computed tomography, you can design a rotating gantry for your detector(s). Are there hobbyist versions of these?

For the software doing the actual reconstruction, is there anything like a Radon transform underlying the algorithm used to recover the object? Or does that only work with attenuation?


Disclaimer: I have a degree in robotics and worked for what is now Lyft Level 5, dealing with autonomous driving.

What it does is basically a big optimization problem. You have 6 parameters for each picture (x, y, z, roll, pitch, yaw, although in practice there are better parametrizations based on quaternions). Then three parameters for the position of each keypoint. Keypoints are simply the same physical spots on multiple images, matched based on their visual appearance.

The last component is a metric that you try to optimize. That is mostly just reprojection error: given all your current estimates, where would the keypoints be projected on this synthetic image? You then compare that with the pixel locations where they actually are and try to minimize the difference.
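
A toy version of that reprojection residual for a single point, assuming a simple pinhole camera model (all numbers here are made up for illustration; a real bundle adjuster minimizes this summed over every point and image, solving for all poses and points jointly):

    import numpy as np

    def project(point_3d, R, t, K):
        """Pinhole projection: world point -> pixel coordinates."""
        p_cam = R @ point_3d + t        # world -> camera frame
        p_img = K @ p_cam               # camera frame -> image plane
        return p_img[:2] / p_img[2]     # perspective divide

    K = np.array([[800, 0, 320],        # assumed intrinsics (focal, center)
                  [0, 800, 240],
                  [0, 0, 1.0]])
    R, t = np.eye(3), np.zeros(3)       # current pose estimate (6 parameters)
    X = np.array([0.1, -0.05, 2.0])     # current keypoint estimate (3 parameters)
    observed = np.array([361.0, 219.0]) # pixel where the feature was detected

    residual = project(X, R, t, K) - observed
    print(np.linalg.norm(residual))     # the quantity the optimizer minimizes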

It is actually a very versatile and robust pipeline which gives you what you’ve seen on the images.

The last step is to produce the dense reconstruction, which is commonly done using the PatchMatch algorithm.

You can try it for yourself with the OpenMVG library. Very hackable and versatile.


CT works when the scanned object is partially transparent to the light. In the linked slides I was scanning a beer glass. I used my father's LP record player to get rotation at a fixed speed. The source of light was an LCD screen showing a white picture in fullscreen. The fixed record-player rotational speed plus the fixed camera framerate gave me accurate angles to run backprojection. The backprojection algorithm I used for this CT proof of concept is rather unsophisticated: you simply "extrude" each 2D photo in the direction the camera is looking, then add up all these volumes from different photos, and you're done. Almost. You can get sophisticated and try to correct for various imperfections of this crude method.
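
For the curious, here's a 2D-slice sketch of that "extrude and add up" step, assuming parallel rays and plain unfiltered backprojection (not the exact code from the slides):

    import numpy as np
    from scipy.ndimage import rotate

    def backproject_slice(projections, angles_deg):
        """projections: (n_views, size) intensity profiles, one per angle.
        Returns a (size, size) reconstructed slice."""
        size = projections.shape[1]
        recon = np.zeros((size, size))
        for profile, angle in zip(projections, angles_deg):
            smear = np.tile(profile, (size, 1))           # "extrude" along the view axis
            recon += rotate(smear, angle, reshape=False)  # align with the view direction
        return recon / len(angles_deg)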

Here are slides in Slovene with photos of my garage CT scanner https://drola.si/demos/ct_prezentacija.pdf

I believe photogrammetry is computationally vastly more complicated than CT but I haven’t gone deep into it yet.


If you have textured surfaces, you can also "just" use a camera that you move around the scene. Turns out, most surfaces have sufficient texture if you go close-up.

There are DIY gantries for the line-laser type of scanner.

The software tends to be significantly more complicated. Here's a link to something fairly modern, yet not dependent on ML [0], project page here [1].

The code is open-source, but it has/had some issues when running on Linux, the critical ones I fixed in my GH fork.

[0]: https://www.gcc.tu-darmstadt.de/media/gcc/papers/Aroudj-2017...

[1]: https://www.gcc.tu-darmstadt.de/home/proj/tsr/tsr.en.jsp


Just on the point about turntables: I've done a good deal of photogrammetry on small objects, and part of what helps the photo alignment is the background. If the object is on a turntable and the background never changes (even if it's blank), the software will have a hard time with alignment, so it's best to just walk around the object. Besides, you have to get lots of up-high and down-low shots, which is another step to automate.


Have you tried something like a greenscreen background? I've done a little with turntables and ran into exactly the problem you're alluding to. Would chroma-keying the background out, to get rid of it entirely, help?
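
A sketch of how you could test that with OpenCV, assuming a green backdrop (the HSV range is a guess and will need tuning for your lighting; some photogrammetry tools can also accept explicit masks):

    import cv2
    import numpy as np

    img = cv2.imread("turntable_shot.jpg")  # placeholder file name
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Rough green range in OpenCV's HSV space (H runs 0-179).
    lower, upper = np.array([40, 60, 60]), np.array([85, 255, 255])
    background = cv2.inRange(hsv, lower, upper)

    # Black out the backdrop so it contributes no features to alignment.
    img[background > 0] = 0
    cv2.imwrite("masked_shot.jpg", img)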


An important prerequisite was left out: Meshroom requires an Nvidia card for CUDA. Sadly, I swapped out my Nvidia card for an AMD one so I could switch to Sway/Wayland. I've contemplated putting both cards in, AMD for the desktop and the Nvidia for CUDA, but I haven't got around to it. I need my desktop intact for WFH duties.


There’s an iOS app called turn3d that makes photogrammetry easy.





