Automating Complex 3D Modeling (sandia.gov)
59 points by sizzle on May 1, 2020 | 13 comments



More clearly: "the first provably correct algorithm for conforming Voronoi meshing for non-convex and possibly non-manifold domains with guarantees on the quality of both surface and volume elements"

https://arxiv.org/abs/1902.08767
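For anyone who wants to poke at the building block: below is a minimal sketch, using SciPy, of the plain unconstrained Voronoi decomposition the paper starts from. To be clear, this is not the paper's algorithm; the hard part the paper solves is making the cells conform to a non-convex, possibly non-manifold boundary with quality guarantees, which scipy.spatial.Voronoi does not attempt.

    import numpy as np
    from scipy.spatial import Voronoi

    # Seed points in the unit cube; in conforming Voronoi meshing,
    # careful seed placement near the domain boundary is the hard part.
    rng = np.random.default_rng(0)
    seeds = rng.random((50, 3))

    vor = Voronoi(seeds)

    # Each seed owns one Voronoi region (a convex polyhedron).
    # A vertex index of -1 in a region marks an unbounded cell.
    for seed, region_idx in enumerate(vor.point_region[:3]):
        region = vor.regions[region_idx]
        print(f"seed {seed}: {len(region)} vertices, "
              f"bounded={-1 not in region}")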


It looks to me like this is not automated 3D modeling. Instead, it's an improved algorithm for reducing an existing 3D model to a mesh suitable for, e.g., finite element method simulations.


I agree. It's a process that typically requires manual intervention and tweaking of the source CAD model to arrive at something ready for simulation. As a design engineer, maybe I'm not out of a job just yet.


Maybe it's time to start upskilling if a major part of your day-to-day work is being targeted by AI and algorithms?


This is specifically an improvement to meshing, a method used to simplify the solid model of a mechanical design for simulation. It's (generally) a pretty small percentage of our workload. This press release looks like something that will amount to a software update to my CAD program.


This isn't the first of its kind; most (if not all) CFD/FEA packages come with a meshing tool, and most of them are already "one-click" solutions that produce a halfway decent mesh even on complex topologies, with some even doing error handling for the user.


That is typically tetrahedral meshing. Hex meshing is sometimes an option, with the caveat that the user must first decompose the geometry (slice it up) into simpler sections that are mappable.

This seems to be automated polyhedral meshing, which is interesting, but I imagine it has limited applicability, as most finite element software (that I'm aware of) relies on tet/hex element formulations. Maybe with this advance in meshing that will change, though.
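For contrast with the polyhedral approach, here's a hedged sketch of the tet side: a bare Delaunay tetrahedralization of a point cloud via SciPy. Real FEA meshers do much more (boundary conformity, sizing control, quality optimization), so treat this as illustrative only.

    import numpy as np
    from scipy.spatial import Delaunay

    # Sample points from the domain; a real mesher would place them
    # to respect the CAD boundary and to control element size.
    rng = np.random.default_rng(1)
    points = rng.random((200, 3))

    tets = Delaunay(points)
    print(tets.simplices.shape)  # (n_tets, 4): four point indices per tet

    # Crude quality proxy: tet volumes via the scalar triple product.
    a, b, c, d = (points[tets.simplices[:, i]] for i in range(4))
    vol = np.abs(np.einsum('ij,ij->i', np.cross(b - a, c - a), d - a)) / 6
    print("smallest tet volume:", vol.min())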


I have not so far heard any compelling arguments for using anything other than tetrahedral or hexahedral meshes in finite element computations. I know there is a lot of research going on, but the motivation is not clear to me.

Error estimates depend on the polynomial degree and the average cell size, not on the "number of corners per cell" or anything like that.
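For concreteness, the textbook a priori estimate I have in mind (standard FEM theory, not something from the article) for degree-p elements on a shape-regular mesh with cell size h is

    \| u - u_h \|_{H^1(\Omega)} \le C \, h^{p} \, | u |_{H^{p+1}(\Omega)}

where the constant C absorbs cell shape via shape regularity; only h and p appear explicitly, which is why the polygonal/polyhedral type of the cell doesn't show up.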


The assertion is that computation of phenomena like flow and radiation could become fundamentally more accurate. Minor variations in surface shape and texture can have a large impact on physical interactions, so getting the geometry right could make a big difference.

Given that this is a rigorous attempt to advance the state of the art, it should be possible to demonstrate the improvement with computations, but in the project's current state any experimental exercises will have to wait until after SIGGRAPH, or whenever this is released as usable software. It seems unlikely that anyone will try to jump ahead of the project using what has been published so far. It seems kind of sad that this work is being advanced first as hype for a licensing opportunity and only second as research that others could build on, especially given that national labs are doing this work with public money.


Meshing may be on the way out:

https://www.altair.com/simsolid/


That's not how I read that product page. They claim that the user does not need to simplify the model before inputting it into their modeling domain, not that they have invented a completely new method of defining non-convex 3D solid volumes.


You need meshing for all kinds of things ...

Seen a superhero / end-of-the-world / action blockbuster movie recently?


Even with advanced tools, whenever I need to clean up complicated models, all I can see ahead of me is hundreds of hours of effort.



