Launch HN: Bucket Robotics (YC S24) – Defect detection for molded and cast parts
111 points by lasermatts 67 days ago | 26 comments
Hey Hacker News! We’re Matt and Steph from Bucket Robotics (https://bucket.bot). Bucket transforms CAD models into custom defect detection models for manufacturing: https://youtu.be/RCyguguf3Is

Injection molded and cast parts are everywhere – 50% of what’s visible on a modern car is injection molded – and the molds are custom made for each part and assembly line. Injection molding is a process where small plastic pellets are heated – primarily by friction from an auger – and pushed into a mold – usually two big milled-out chunks of aluminum or steel – that is clamped together with anywhere from 10 tons to thousands of tons of force. Once the plastic cools, the machine opens the mold and pushes the newly formed part out using rods called ejector pins. Look at a plastic object and you can usually find a couple of round marks from the ejector pins, a mark from the injection site, a ridge where the faces of the mold meet, and maybe some round stamp marks that tell you the day and shift it was made on. (Link to a great explainer on the process: https://youtu.be/RMjtmsr3CqA?si=QjErT_rOU9-_TQ8d)

Defect detection is either traditional ML-based – get a real-world sample, image it, label the defects, repeat until there’s a big enough set to build a model – or done manually. Humans have roughly an 80% success rate at detection, and it gets worse throughout the day: decision fatigue leads to deteriorating performance near lunch and end-of-shift (https://en.wikipedia.org/wiki/Decision_fatigue). Creating an automated system usually takes somewhere between 2 days and 2 weeks to collect and label real-world samples and then build a model.

Injection molding is currently a 300 billion USD market, and as vehicle electrification increases, more of a car’s total components are injection molded, making that market even bigger. Because so much of that surface area is customer-facing, any blemish, scratch, or burn is considered defective. From speaking with folks in the space, defect rates can run as high as 15% for blemishes as small as 1 cm^2.

Our solution to this problem is to build the models from CAD designs instead of real-world data. An injection mold is usually machined aluminum or steel and can cost anywhere from $5k to >$100k – usually with a significant lead time. So when customers send their designs out to the mold makers – or to their own CNC shop if they do it in-house – they can also send them to us in parallel and have a defect detection model ready to go long before their mold is even finished.

On the backend we’re generating these detection models by creating a large number of variations of the 3D model – some to simulate innocuous things like ejector pin marks and most to simulate various defects like flash. Once we have our 3D models generated, we fire them off to the cloud to render photorealistic scenes with varied camera parameters, lighting, and obscurants (shops are dusty). With labeled images in hand, it’s a simple task to train a fairly off-the-shelf transformer-based vision model from them and deliver it to the customer.
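
If it helps to picture that loop, here’s a rough sketch of it in Python – apply_defect, render_scene, and the parameter ranges are illustrative stand-ins, not our actual internals:

    import random
    from dataclasses import dataclass

    # Illustrative stand-ins -- the real pipeline renders photorealistic
    # scenes in the cloud; these names exist only for the sketch.
    @dataclass
    class RenderParams:
        camera_angle: float      # degrees around the part
        light_intensity: float   # arbitrary units
        dust_density: float      # shops are dusty

    def apply_defect(mesh_path: str, defect_type: str) -> str:
        """Hypothetical: perturb the CAD mesh (flash, short shot, pin mark)."""
        return f"{mesh_path}+{defect_type}"

    def render_scene(mesh: str, params: RenderParams) -> tuple[str, str]:
        """Hypothetical: returns an (image, label) pair for training."""
        return f"render({mesh},{params})", mesh.split("+")[-1]

    dataset = []
    for defect in ["none", "flash", "short_shot", "ejector_pin_mark"]:
        for _ in range(250):  # many variations per class
            mesh = apply_defect("part.step", defect)
            params = RenderParams(
                camera_angle=random.uniform(0, 360),
                light_intensity=random.uniform(0.2, 2.0),
                dust_density=random.uniform(0.0, 0.3),
            )
            dataset.append(render_scene(mesh, params))
    # dataset now holds labeled (image, label) pairs for the vision model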

Running the model doesn’t require fancy hardware - our usual target device is an Orin Nano with a 12MP camera on it - and we run it purely on-device so that customer images don’t need to leave their worksite. We charge customers by the model — when they plan a line change to a new mold, ideally they’ll contact us and we’ll have their model ready before retooling is complete.
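
To give a flavor of how light the on-device side can be, here’s a minimal sketch of that kind of inference loop using ONNX Runtime – purely illustrative, the file name, input size, and runtime choice are stand-ins rather than our exact stack:

    import cv2
    import numpy as np
    import onnxruntime as ort

    # "defect_detector.onnx" is a placeholder name for a per-mold model.
    sess = ort.InferenceSession(
        "defect_detector.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = sess.get_inputs()[0].name

    img = cv2.imread("frame_from_12mp_camera.png")            # captured on-device
    img = cv2.resize(img, (1024, 1024)).astype(np.float32) / 255.0
    img = np.transpose(img, (2, 0, 1))[None, ...]             # NCHW batch of 1

    # Assuming a single output head of per-class defect scores.
    (scores,) = sess.run(None, {input_name: img})
    print("defect scores:", scores)                           # thresholded on-device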

Injection molding is as error-prone as it is cool to watch. For example, flash is a thin layer of extra plastic – usually hanging off the edge of the part or overhanging a hole in it – which makes parts aesthetically defective or can even prevent them from joining up properly. It can happen for so many reasons: too high an injection pressure, too low a clamping pressure, a grubby mold surface, mold wear, poor mold design, and that’s just to name a few!

Steph and I have a history of working on tasks performed manually that we want to automate – we’ve been working together for the last five years in Pittsburgh on self-driving cars at Argo AI, Latitude AI, and Stack AV. Before that, I worked at Michelin’s test track and Uber ATG. We really, really love robots.

Our first pitch to Y Combinator was, “build a better Intel RealSense,” since it’s a universally used (and loathed) vision system in robotics. We built our first few units and started building demos for how folks could use our camera – and that’s when we found defect detection for injection molding and casting. Defect detection is well understood and highly automated for things like PCBs – where a surface defect can indicate a future critical failure (hey, that capacitor looks a little big?) – but defect detection for higher-volume/lower-cost parts still takes too much cost and effort for most shops.

We’re excited to launch Bucket with you all! We’d love to hear from the community – and if you know anyone working in industrial computer vision or in quality control, please connect us! My email is matt@bucket.bot – we can’t wait to see what you all think!




Nice! I have so many questions.. How stable is the injection molding process once it's fully proven out, up and running? Is it a bathtub curve shape, do defects keep randomly popping up?

What do you use on your end to label the ejector pin locations, parting lines, etc? Does this process use Hexagon software inputs to make that easier?

If you're not relying so much on a skilled operator, would you be using a CMM for dimensional inspection anyways, and then would this be better solved with a CMM? How can you get quality parts if you don't have a skilled operator anyways to set up the machine correctly and correct the defects? Are you ever going to be able to replace a good machine operator? Or this just helps reduce the inspection toil and burden? Do they usually need 100% inspection, or just periodic with binning?

Why do you want to target injection molded parts and not machined parts?

Don't most of these machines have the parts just fall in a bin, with no robot arm? Doesn't this seem like instead of paying a good injection mold tech, now you're paying for an injection mold tech and a robotics tech, if you have to program the arm path for every part setup?

How many defects are "dimensional" and how many are "cosmetic" ?

Can a defect detection model accept injection mold pressure curves as input? Isn't that a better data source for flash and underfilling?

Is this supposed to get retrofit, or go on new machines?


> Nice! I have so many questions.. How stable is the injection molding process once it's fully proven out, up and running? Is it a bathtub curve shape, do defects keep randomly popping up?

They tend to pop up randomly -- mold wear is a big one -- and that's a function of the material selected for the mold itself (resin vs. aluminum vs. steel).

> What do you use on your end to label the ejector pin locations, parting lines, etc? Does this process use Hexagon software inputs to make that easier?

Right now we have an in-house tool for this - but it's a bit painful on our end so we're always looking for good alternatives!

> If you're not relying so much on a skilled operator, would you be using a CMM for dimensional inspection anyways, and then would this be better solved with a CMM? How can you get quality parts if you don't have a skilled operator anyways to set up the machine correctly and correct the defects? Are you ever going to be able to replace a good machine operator? Or this just helps reduce the inspection toil and burden? Do they usually need 100% inspection, or just periodic with binning?

Injection molding is usually for mass manufacturing - think multiple parts coming in bursts every minute or so - which makes a CMM tough to integrate without really slowing down your line. There's also the case of big objects like bumpers and chairs that might not be easy to CMM. We're not shooting to replace machine operators - just to make their lives easier. With injection molding, our customers so far usually really want 100% inspection instead of sampling.

> Don't most of these machines have the parts just fall in a bin, with no robot arm? Doesn't this seem like instead of paying a good injection mold tech, now you're paying for an injection mold tech and a robotics tech, if you have to program the arm path for every part setup?

Depends on the shop! Some have automated packaging systems that someone has to stare at. Some are trying to add automated packaging and build out a defect plan. Keep in mind you don't necessarily need a full robot to get bad parts off your line - a little shoving arm that just boots the bad parts off a conveyor works fine in some cases.

> How many defects are "dimensional" and how many are "cosmetic" ?

Varies wildly by design - but I'd say we see more cosmetic than dimensional. Maybe because the ratio of cosmetic surface to interface surface is fairly high.

> Can a defect detection model accept injection mold pressure curves as input? Isn't that a better data source for flash and underfilling?

I'll have to keep that in mind - it's a great idea. The hard part there is that you'll need a per-machine calibration and a lot of data collection. Could be good though!

> Is this supposed to get retrofit, or go on new machines?

Ideally both since it's a separate camera system, although I'd love to try to integrate with the machines themselves.


Thank you.

I have seen machines with visual pressure curve output on the operator screen for each part. I also think some machines have automatic pressure monitoring already built into the machine control, but it's certainly not transformer model based.

I didn't know they were using resin molds; that takes the cheap aluminum prototype/scale-up mold to a whole new level.

Last time I checked, the mold design software itself has the same UI as 1999 AutoCAD.

How many images/angles can you effectively sample and compare on that hardware in a 30 second cycle time? How would you process images from more than one camera? If you have 8 cameras, can the defect recognition software run on 8 threads?

Are injection mold operators mostly located in low labor cost areas? Is any reshoring happening?


Injection molding houses are heavily concentrated in LCOL areas -- but it's a massive market, and so much of modern material is plastic that there's still a lot done in North America (US/Canada/Mexico) and in Germany/Italy/Austria.

For just the automotive industry, there are 120 injection molding contractors in Michigan alone. Onshoring and reshoring are desired for really customer-facing parts -- you spend a lot of weight on packaging to mitigate scratches when you produce abroad and then assemble domestically.

Staying with automotive, electrification is driving the injection molding industry -- as vehicle weight shifts to "big battery with a shell around it," more of the total components of a vehicle are injection molded.

Zooming out of automotive, biomedical device packaging is a huge injection molded business that's stayed in the US and is growing.


Steph here - each image takes about 250ms on a small single-board computer like an Nvidia Orin Nano. On something larger like an RTX 4080 GPU it's less than 100ms. Because we're running big models we can't really just spin up more threads ourselves; we throw the images over to the GPU (or deep learning accelerator, depending on the platform) and the driver's internal scheduler decides how to get it done.

In a robotic packaging scenario most of the time is spent by the robot picking up the objects and moving them, so for a 30 second cycle we usually get less than a second to take multiple pictures and make a decision about the part. For a smaller number of images - like 4 - it's pretty easy to handle with cheap hardware like an Orin Nano or Orin NX. If we've got more images (like 8) and a tight time budget (like less than 2 seconds) we'd usually just bump up the hardware, like going to a higher tier of Nvidia's line of Orins or using compute with an RTX 4080 GPU or equivalent in it.
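
To make that budget math concrete, here's a toy version of the check (the latencies are the rough numbers above; real timings depend on model size and image resolution):

    # Rough cycle-time budget check using the approximate per-image latencies.
    PER_IMAGE_MS = {"orin_nano": 250, "rtx_4080": 100}

    def fits_budget(num_images: int, device: str, budget_ms: float) -> bool:
        """Sequential worst case -- batching/pipelining only improves on this."""
        return num_images * PER_IMAGE_MS[device] <= budget_ms

    print(fits_budget(4, "orin_nano", 1000))   # True: ~1.0 s for 4 views
    print(fits_budget(8, "orin_nano", 2000))   # True, but with no headroom left
    print(fits_budget(8, "rtx_4080", 1000))    # True: bigger GPU buys margin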


Very cool. Good luck! I used to work on this. Your synthetic dataset pipeline is really neat. A foundation model of molding defects might be feasible. I hope you will also work on the whole inline quality control problem. From what I saw of the field, sometimes you only get the final quality days later, after painting, finishing, or cool-down of big parts. And the quality metric is notably undefined for visual defects; using the CAD render as a reference is a good solution. Because plastic is so cheap and the process so stable, I have seen days of production shredded for a tiny, perfectly repeated visual defect. Injection molding machines are heavily instrumented [0], and I tried to mix in-mold sensors + process parameters + photos + thermography of hot parts [1] (sorry, it's in French; I might find better documentation later). [0] https://scholar.google.com/citations?view_op=view_citation&h... [1] https://a1rb4ck.github.io/phd/#[128,%22XYZ%22,85.039,614.438...


Dude yes exactly!!

It's so incredibly frustrating when you're past final assembly of some system, and only then do you see a defect that requires a teardown! You touched upon a really fun piece of defect detection -- quality metrics are highly dependent upon the customer, but that makes it fun for us

Great paper links too, I really appreciate that! My French is a little rusty, but I love the comic at the start!!


I'm an engineer at a company that injection molds parts for medical and industrial devices. This seems extremely promising.

Can your scene generator handle things like custom tooling? For example if I were to place a part to be inspected on a clear acrylic jig, could the model be trained to look through the acrylic?

We're already using a vision system to measure certain features on the parts; can your models be applied to generic images, or do they require integration with the camera?

How does the customer communicate the types and probable locations of potential defects? Or do you perform some sort of mold simulation to predict them? Likewise how does the customer communicate where defects are critical versus non-critical?

Finally how does pricing work? Does it scale based on part size, or does the customer select how many variations or do you do some analysis ahead of time and generate a custom quote? Is it a one time cost or is it an ongoing subscription? Could you ballpark a price range for generating a model for a part roughly 3.5 inches in diameter and 1.5 inches tall with moderate complexity?

Feel free to reach out to the email in my profile if you'd like to discuss a little more in depth.


Regarding pricing: we charge on a per-model basis -- so the workflow is, when you're ready to retool your line, send us the CAD and we'll send you the defect detector model and the bill. We're still working out whether there's some sort of "enterprise tier" for folks like CMs who are flipping through molds almost as quickly as it takes to heat up/cool down a machine.

Pricing is custom and depends on a few key factors, like what your quality tolerances are.

In, say, the disc golf space, you can have wildly different acceptance rates for flashing around the rim of the disc on a manufacturer-by-manufacturer basis.


Steph here (the guy from the video) - yep, we can handle custom tooling pretty easily in general. Usually what we're simulating is robot arms (think SCARAs with vacuum grippers). Acrylic might be a bit fiddly and take some tuning, but I bet we can handle it just fine.

Nothing special required from the camera - but it's nice if we know the camera parameters beforehand (sensor size, focal length) so we can make sure we generate images matching what that camera spits out.
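
For example, sensor size and focal length turn into pinhole intrinsics for the renderer roughly like this (the camera specs in the example are made up, not any particular customer's):

    import numpy as np

    def pinhole_intrinsics(focal_mm, sensor_w_mm, sensor_h_mm, width_px, height_px):
        """Approximate pinhole model: focal length in pixels from physical specs."""
        fx = focal_mm * width_px / sensor_w_mm
        fy = focal_mm * height_px / sensor_h_mm
        cx, cy = width_px / 2.0, height_px / 2.0  # assume centered principal point
        return np.array([[fx, 0, cx],
                         [0, fy, cy],
                         [0,  0,  1]])

    # Example: a 12MP camera with a ~7.6 x 5.7 mm sensor and an 8mm lens.
    K = pinhole_intrinsics(8.0, 7.6, 5.7, 4000, 3000)
    print(K)  # handed to the renderer so synthetic images match the real camera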

Right now all the part-based communication is just "email us/jump on a quick call" - in the future I want to make it a self service UI where customers can mark things out.


Mechanical engineer turned software engineer here; I love this kind of stuff and I frequently wonder how I might apply my software expertise to that domain again. Amongst other things I worked in automotive, and the components I worked on were forged and heat-treated high-strength steels. The defects in forged components are often very small (tens of microns) but I'd be curious if this could work there. We used powerful microscopes - including electron microscopes - on the production lines, so maybe that would work?


That's another very, very interesting case we thought about tackling. That sounds like something that's ripe for transformer-based vision models to keep the overall size of the model down.

What kind of timescales do you get when measuring parts in an electron microscope case? Are these crankshafts whizzing by, or something like a ship propeller where people spend days making sure every inch is covered?


For forged parts, magnetic particle deposition might enhance defects under UV light. Here [0] is a crankshaft inspection method that mixed UV light with SSD detection. [0] https://doi.org/10.1007/s00170-020-06467-4


Nice use case! Can you elaborate a bit more on the robotics piece? What role does the robot play? I assume it's required to turn the part around for inspection. If so, how do you (automatically?) compute the grasping point? Also feel free to find me on LinkedIn if you want to chat more about growing a robotics business and/or geometric reasoning for manufacturing.


You nailed it - when the part comes out of the mold, it slides down a chute onto a conveyor belt. From there, the arms themselves change depending on the supplier (Kuka/Fanuc/Universal Robots/Yaskawa...there are a lot of players in the space), but they're all used to hold the part in the air so we can take images of all sides -- then the arm moves the part to its correct spot (good bin/bad bin).

For computing the grasping position -- your mileage may vary depending on which axis/face matters the most for an object (on an automotive part, you want to grasp the side that's not customer-facing because people care less about a scratch there) -- but it's a real challenge.

Luckily EtherCAT + protobuf adoption has helped keep the comms integration burden low -- even a few years ago we'd need to make a weird hop from camera --> PLC --> arm, but things are slowly getting easier.


This looks cool.

> Steph and I have a history of working...

I have so many questions, since you are experienced.

Do you think there should be import tariffs on Chinese made EVs?

I know your gut is telling you, don't answer this question, but that is like, the biggest and most important story in autos manufacturing, no? It would be like saying, if cars were extremely cheap, so that everyone could have one, the manufacturing story for free cars must already be sufficient, and so there isn't much demand for innovation to make things cheaper. But in real life, the thing that makes cars cheap or expensive is a law, which could disappear with a stroke of a pen, so it's interesting to get your POV.

> On the backend we’re generating...

OpenAI, Anthropic, Stability, etc. have already authored 3D model to synthetic data pipelines - why won't they do this one?


I'll answer in reverse order.

Making synthetic data from a 3D model is really nothing too special - it's just a tiny subset of what a video game engine does. But there's one extra step required for defect detection: you need to think about where the defects occur (and where the non-defect witness marks occur) and simulate those. Like any startup, our biggest advantage here over the big companies is that we move fast and customers usually like us. Our second biggest advantage: defect detection just isn't sexy, so it's not top of mind for most folks.

I think yes, there probably should be tariffs on Chinese EVs (we're pretty big on onshore manufacturing), but that's essentially a crutch. We'll need a lot of automation and design work to push down US-made EV costs to be competitive. If we want electrification to increase and onshoring to occur, we've gotta bring prices down to something folks can easily buy that solves their problems.


> We'll need a lot of automation and design work to push down US-made EV cost to be competitive.

What kind of automation and design work would "push down US-made EV costs" more than corresponding automation and design work in China?

Do you see what I mean? Technology doesn't change the relative costs, which matter, even if it changes the absolute costs, which don't.


> Do you see what I mean? Technology doesn't change the relative costs, which matter, even if it changes the absolute costs, which don't.

I get what you're saying, and I somewhat agree. But I think it does leave out the desire some consumers have to purchase domestic. For example, I might be willing to buy a domestically made vehicle if the price is under $25K even if it's more than a similar vehicle made overseas. But if the price is over that, I'm going with the cheaper import.


The idea of the domestically manufactured vehicle is just that, an idea.

There's the fiction of quota and part manifests.

Then there's the reality that, well I assemble a thousand parts in China into one "part" then I import that one "part."

There are a ton of people employed by the autos industry in the US but that's so broad. It basically means there are a ton of people employed by organizing our life around cars. While some are involved in some kind of manufacturing, relative to the amount of manufacturing and manpower in China, it is small.

So every way you look at domestic, it seems less and less like it really means "domestic," and more and more like it's a form of vague but powerful storytelling.

I don't think it's good for anyone to be so wedded to storytelling. And anyway, you could try e-biking in weather, it's fine, sometimes it's even fun, and then suddenly you're like, well do I need more than the occasional rented car?


That's a fair point. Personally, I don't want a car at all but it's highly impractical for me not to have one where I live. I didn't own a car until my 30s and before that I biked or took the bus everywhere. However, where I am now public transit is severely lacking and the weather is often not conducive to biking.


I apologize for such a naive comment, as I don't have experience in this field, but I've seen OpenAI do some pretty impressive image recognition tasks (multimodal LLMs). Have you tried uploading some images of successful injection molded parts and some unsuccessful ones (they don't even have to be of the same mold), telling it "these are examples of success" and "these are examples of failures, e.g. flashing, blemish, scratch, etc.," and then feeding it picture(s) of the molded part?

It'd be interesting to hear how effective that is.


LLMs like GPT-4o have some pretty impressive image performance. They can actually pick up some of the more obvious defects on our buckets (Steph tested it out just now).

Two problems, though, with the OpenAI approach: 1. You'd need a cloud connection to send those images up and get the answer back down, so there's cost in terms of round-trip latency, network infra, and the OpenAI account itself.

2. It doesn't do well with the very subtle defects - mild shape changes, loss of features from short shots, etc.

It might be worth using in the offline pipeline for auto labeling though!


How would this compare against producing a 3D mesh using traditional photogrammetry and comparing the CAD model and mesh for deviations? Or would this be unrealistic since the photogrammetrically produced mesh would lack the level of detail required?


That's a great idea - especially because photogrammetry with high-quality cameras can capture way higher detail than most of the common 3D sensors (RealSenses, Luxonis, etc.). The big problems there are the computation cost and/or setup complexity of photogrammetry. You either need to do a lot of computation (a couple of minutes on my RTX 4090 last time I did it for a medium-sized object) to estimate keypoints and disparities, or you need a really well-calibrated ring of cameras and some way to feed parts through it at line rate - but then you could get away with less compute.

A laser scanner would probably make the mesh-comparison approach easier, but it's still incredibly hard to get a really accurate, high-resolution depth map in a short time span - especially if the parts are actively moving.
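
If you did go the scan route though, the deviation check itself is cheap once you have points -- something like nearest-neighbor distances from scan points to points sampled off the CAD surface (a toy sketch, with made-up data and tolerance):

    import numpy as np
    from scipy.spatial import cKDTree

    # cad_points: points sampled from the CAD surface; scan_points: photogrammetry
    # or laser-scan output. Both are just (N, 3) arrays of toy data here.
    cad_points = np.random.rand(50_000, 3)
    scan_points = cad_points + np.random.normal(0, 0.0005, cad_points.shape)

    tree = cKDTree(cad_points)
    dist, _ = tree.query(scan_points)   # distance from each scan point to the CAD

    TOLERANCE = 0.002                   # made-up tolerance in the same units
    defect_mask = dist > TOLERANCE
    print(f"{defect_mask.mean():.1%} of scanned points deviate beyond tolerance")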


What is the difference between this and traditional FEA done on CAD models? Is this purely a cost benefit?





