A couple performance optimizations I made that weren't covered in the article:
- When I profiled my initial implementation of the annealing, I found that it was spending a lot of time just rotating bounding boxes to test for intersection. So I precompute all possible rotations of the BVHs (24 rotations) and then only need to deal with translation when checking for intersection - way faster (a rough sketch follows this list): https://github.com/fogleman/pack3d/blob/master/pack3d/bvh.go...
- Even more interesting, for the bin packing code, I used a recursive algorithm with memoization, like you do. But I realized an improvement can be made. Say you search for the optimal packing for a box sized 10x10x10 and the optimal packing comes back, say, 9x8x9. Now I do another query for a box sized, say, 9x9x9. I should just reuse the result from my 10x10x10 query, because 9x9x9 falls between 9x8x9 and 10x10x10. Basically, instead of requiring an exact match in the memoization, I want to query a range. I implemented this using a spatial hash and it made things WAY faster (second sketch below): https://github.com/fogleman/pack3d/blob/master/binpack/spati...
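A rough sketch of that first idea in Go, using plain axis-aligned boxes to stand in for the full BVHs (all names here are illustrative, not pack3d's actual API): each item stores its bounds pre-rotated into all 24 orientations, so a collision test is an array lookup, a translation, and an overlap check - no rotation math per test.

    package sketch

    // Vector and Box are minimal stand-ins for pack3d's real types.
    type Vector struct{ X, Y, Z float64 }

    type Box struct{ Min, Max Vector }

    // Translated shifts a box; note there is no rotation here.
    func (b Box) Translated(t Vector) Box {
        return Box{
            Vector{b.Min.X + t.X, b.Min.Y + t.Y, b.Min.Z + t.Z},
            Vector{b.Max.X + t.X, b.Max.Y + t.Y, b.Max.Z + t.Z},
        }
    }

    // Intersects is a standard AABB overlap test.
    func (a Box) Intersects(b Box) bool {
        return a.Min.X <= b.Max.X && a.Max.X >= b.Min.X &&
            a.Min.Y <= b.Max.Y && a.Max.Y >= b.Min.Y &&
            a.Min.Z <= b.Max.Z && a.Max.Z >= b.Min.Z
    }

    type Item struct {
        // bounds pre-rotated into each of the 24 axis-aligned
        // orientations, computed once when the model is loaded
        Rotations [24]Box
    }

    // Collides tests two posed items: pick the precomputed
    // orientation, translate, overlap-test. No per-test rotation.
    func Collides(a, b *Item, ra, rb int, ta, tb Vector) bool {
        return a.Rotations[ra].Translated(ta).Intersects(b.Rotations[rb].Translated(tb))
    }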
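And a sketch of the range-based memoization (again assumed names; the real code indexes entries with a spatial hash rather than the linear scan below, which is what makes the lookup fast): a cached answer can be reused whenever its result still fits inside the new query and the new query fits inside the original, already-searched query.

    package sketch

    type Size struct{ X, Y, Z int }

    // fits reports whether a is componentwise <= b.
    func fits(a, b Size) bool {
        return a.X <= b.X && a.Y <= b.Y && a.Z <= b.Z
    }

    type entry struct {
        query  Size // box size that was asked about
        result Size // tightest packing found for that query
    }

    type Memo struct{ entries []entry }

    // Lookup reuses a cached result r for query q whenever
    // r <= want <= q componentwise: the cached packing still fits
    // in `want`, and nothing better can exist because `want` lies
    // inside the space that was already searched optimally.
    func (m *Memo) Lookup(want Size) (Size, bool) {
        for _, e := range m.entries {
            if fits(e.result, want) && fits(want, e.query) {
                return e.result, true
            }
        }
        return Size{}, false
    }

    func (m *Memo) Store(query, result Size) {
        m.entries = append(m.entries, entry{query, result})
    }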
Hey Michael - I'm leading the engineering team that is building the Fuse 1 here at Formlabs. This is awesome! I can't wait to try out a few of these prints soon.
Thanks for sharing this with the world. It's so rewarding to see people inspired to make stuff like this based on the printer we have been working on for such a long time.
Oh, I was assuming that the time for the rotations was dependent on the size of the models, presumably on the number of verts. This memoization (precomputing the 24 rotations) would change the asymptotic runtime.
E.g. if rotating is linear in the vertex count n, then with m intersection tests you pay O(n) once up front instead of rotating on every test - going from O(mn) to O(m+n).
>Note that pack3d runs until stopped, writing its output to disk whenever a new best is found.
I wish more software operated like this. It's convenient because your concrete constraint is usually wall-clock time, not the number of iterations, which is an abstract value.
It somehow seems less polished as a command-line tool that way, but it was perfect for my needs. I actually ran it overnight to pack a model of a deer, 8 of them together. It produced this histogram of scores: http://i.imgur.com/VENTeIa.png So it could easily find arrangements that scored around 65%, while the best it found was around 52% (lower is better, and basically anything under 100% is better than naive packing).
In general, simulated annealing is hard to parallelize.
But in this case we also tend to get stuck in local minima a lot, because they are very deep troughs in the search space. (Annealing does still perform much better than hill climbing though, so it's still worthwhile.) So we can run multiple annealings in parallel and pick the best result.
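A hedged sketch of that scheme in Go (anneal here is a placeholder for a full annealing run, not pack3d's actual function): launch n independent runs, each with its own seed so they explore different minima, then keep the lowest score.

    package sketch

    import (
        "math/rand"
        "sync"
    )

    type Result struct {
        Score float64 // lower is better
    }

    // anneal stands in for one complete annealing run.
    func anneal(rng *rand.Rand) Result {
        // ... the real annealing loop would go here ...
        return Result{Score: rng.Float64()} // placeholder score
    }

    // ParallelAnneal runs n annealers concurrently and returns the
    // best (lowest-scoring) result.
    func ParallelAnneal(n int) Result {
        results := make([]Result, n)
        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                results[i] = anneal(rand.New(rand.NewSource(int64(i))))
            }(i)
        }
        wg.Wait()
        best := results[0]
        for _, r := range results[1:] {
            if r.Score < best.Score {
                best = r
            }
        }
        return best
    }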
How about tossing a bunch of boats into the build volume and shaking it? Use a sand/particle simulation to fit them together, seems amenable to the GPU.
If passing through one another is allowed during the physical simulation, this is just a CPU-heavy simulated annealing. If not, you won't reach solutions that are "far away" from your initial condition in terms of the number of pass-throughs required to reach them.
When parallelizing annealing, do you think it would be worthwhile to broadcast the paths that are (a) already taken or (b) not productive, so that each parallel annealer works in a unique part of the solution space?
The process could be made continuous by building supports into the volume of parts being printed, so it wouldn't need to be supported from the bottom for the whole duration of the build. Continuous Selective Laser Sintering via periodic support structures and powder dams.
The interlocking parts and periodic build support structures could be made so that eventually lower parts could fall away and the powder recycled.
One could also build in dams to retain the powder as the lower parts fell away.
Last year I needed the same thing for packing 2-dimensional shapes on a piece of plywood (for CNC). You would think it's fairly simple and common, but I failed to find any FOSS tools.
Any pointers for a good 2d shape-packing algorithm?
> Any pointers for a good 2d shape-packing algorithm?
Googling `open source texture packer` finds a few projects. Do the requirements change significantly when dealing with a physical sheet of plywood rather than pixels?
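For plain rectangles, one very simple baseline is the "shelf" heuristic: sort by height, then fill left to right in horizontal rows, starting a new row when the sheet width runs out. A self-contained Go sketch (assumed names; nesting arbitrary CNC outlines efficiently needs something fancier, e.g. no-fit polygons):

    package sketch

    import "sort"

    type Rect struct{ W, H float64 }

    type Placed struct {
        Rect
        X, Y float64
    }

    // ShelfPack places rects on a sheet of width sheetW using the
    // shelf heuristic. Far from optimal, but trivially simple.
    func ShelfPack(sheetW float64, rects []Rect) []Placed {
        sort.Slice(rects, func(i, j int) bool {
            return rects[i].H > rects[j].H // tallest first
        })
        var out []Placed
        x, y, shelfH := 0.0, 0.0, 0.0
        for _, r := range rects {
            if x+r.W > sheetW { // shelf is full, start a new one
                x, y = 0, y+shelfH
                shelfH = 0
            }
            out = append(out, Placed{r, x, y})
            x += r.W
            if r.H > shelfH {
                shelfH = r.H
            }
        }
        return out
    }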
This is a bit of a naive optimization, though. Thermal properties have to be considered when planning SLS builds. Very dense cross sections will result in long laser scan times, and the quality of those benchies will suffer significantly.
My understanding is Shapeways uses an advanced/custom version of Netfabb for their 3D packing. They have some insane optimizations/automation already, and I believe they also use human input for certain requirements/limitations.
Question: Is this more of an unbounded multi-dimensional knapsack problem? With bin packing, you have multiple bins and you're minimizing the bins used. With knapsack, you're maximizing the value (in this case, count) of the objects fit into the finite knapsack.
This looks really cool but I have a naive question...
I've heard desktop printers are prone to failure, like the print becoming detached from the plate. Would this increase the units/material you lose if there is an error? Or do the errors happen early enough in the print that you can catch them before much damage is done?
Can anyone explain how the STL manipulation works? I never really understood how a program reads an STL file and how it processes the geometry. Is there something similar in Python as well?
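There's less magic than it seems: a binary STL file is just an 80-byte header, a uint32 triangle count, and then 50 bytes per triangle (a normal and three vertices as float32 triples, plus a 2-byte attribute). Once loaded, a mesh is nothing but a list of triangles, and "processing the geometry" mostly means applying transforms to the vertices. A minimal Go reader as an illustration (not pack3d's actual loader, and note ASCII STL is a separate text-based format); in Python, the numpy-stl and trimesh libraries do the same job.

    package sketch

    import (
        "encoding/binary"
        "io"
        "os"
    )

    type Triangle struct {
        Normal     [3]float32
        V1, V2, V3 [3]float32
    }

    // LoadBinarySTL reads the binary STL layout: 80-byte header,
    // uint32 triangle count, then 50 bytes per triangle.
    func LoadBinarySTL(path string) ([]Triangle, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        // skip the 80-byte header (it carries no geometry)
        if _, err := io.CopyN(io.Discard, f, 80); err != nil {
            return nil, err
        }
        var count uint32
        if err := binary.Read(f, binary.LittleEndian, &count); err != nil {
            return nil, err
        }
        tris := make([]Triangle, count)
        for i := range tris {
            var raw struct {
                Normal     [3]float32
                V1, V2, V3 [3]float32
                Attr       uint16 // usually unused
            }
            if err := binary.Read(f, binary.LittleEndian, &raw); err != nil {
                return nil, err
            }
            tris[i] = Triangle{raw.Normal, raw.V1, raw.V2, raw.V3}
        }
        return tris, nil
    }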
While packing problems are pretty cool in general, I'm afraid I don't understand why this is useful for 3D printing. Does this increase printer throughput?
To a certain extent, yes, it increases printer throughput. With selective laser sintering there's a fixed overhead time required to remove the last print from the machine, clean up the workspace, and reset it - so packing more parts into each build amortizes that overhead across more parts.
I'm the lead engineer for the Fuse 1 project here at Formlabs and I think it is safe to say that we'll have something to show off soon. Really glad that Michael shared his code with us all here. Stay tuned!
Or possibly OP has access via his employer Kitware?
Kitware could be doing software consultancy for Formlabs. They already have expertise in 3D slicing [1][2][3], which is just what you need for 3D printing.
This is pure uninformed speculation:
Formlabs look like they have great hardware.
On the other hand they could be under competitive pressure on the software front.
Firstly there's Autodesk who launched Ember as an "Open" design [4] in roughly the same market as Formlabs Form 1/2.
This is straight out of the Joel Spolsky playbook [5]: "Smart companies try to commoditize their products' complements." Autodesk's MCAD product is Inventor, now transitioning to Fusion 360. 3D printing is the complement. That's why Ember's "electronics are shared under a Creative Commons Attribution-ShareAlike license, the same license under which we've shared Ember's resin and mechanical designs; the firmware is licensed under GNU GPL (see the source code itself for the full details)" [4]. You can bet that it will be a cold day in hell before Autodesk releases Inventor and/or Fusion 360's source code under any sort of FOSS license.
Secondly, there's Stratasys, which bought GrabCAD mainly to acquire their MCAD software team, several of whom are ex-Solidworks / Siemens PLM (Unigraphics/Parasolid/D-Cubed). Expect slicing of STL to be replaced in the next few years by slicing of native MCAD file formats. That's a step up in development effort and probably needs a modeling kernel (Parasolid, ACIS, ...).
Thirdly, there's 3D Systems, who decided to compete with lawyers rather than software developers - but eventually that got settled [6].
Possible strategies for Formlabs:
Either invest heavily in their software. That's why they might contract Kitware. Maybe they license a modeling kernel.
Or, get acquired by one of the other (not Autodesk) big three MCAD vendors: (1) Dassault, (2) Siemens PLM, (3) PTC.
Think I was probably completely off-base and the OP doesn't have the printer ...
Anyway, I forgot to say the write-up and project on GitHub are very nice. I ended up buying the OP's Primitive for macOS [1], [2], which is very nice too!