Bucket Robotics (YC S24) – Defect detection for molded and cast parts

Hey Hacker News! We’re Matt and Steph from Bucket Robotics (https://bucket.bot). Bucket transforms CAD models into custom defect detection models for manufacturing: https://youtu.be/RCyguguf3Is

Injection molded and cast parts are everywhere (50% of what’s visible on a modern car is injection molded), and these molds are custom-made for each part and assembly line. Injection molding is a process where small plastic pellets are heated, primarily by friction from an auger, and pushed into a mold, usually two big milled-out chunks of aluminum or steel that are clamped together with anywhere from 10 tons to thousands of tons of force. Once the plastic cools, the machine opens the mold and pushes the newly formed object out using rods called ejector pins. Look at a plastic object and you can usually find a couple of round marks from the ejector pins, a mark from the injection site, a ridge where the faces of the mold meet, and maybe some round stamp marks that tell you the day and shift it was made on. (Here’s a great explainer on the process: https://youtu.be/RMjtmsr3CqA?si=QjErT_rOU9-_TQ8d)

Defect detection today is either done manually or with traditional ML: get a real-world sample, image it, label the defects, and repeat until there’s a big enough dataset to build a model. Humans have about an 80% success rate at detection, and that gets worse throughout the day because decision fatigue degrades performance near lunch and end of shift (https://en.wikipedia.org/wiki/Decision_fatigue). Building an automated system usually takes somewhere between 2 days and 2 weeks to collect and label real-world samples and then train a model.
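
For contrast, here’s a minimal sketch of what that traditional pipeline boils down to once the samples exist: fine-tuning a pretrained classifier on hand-labeled photos. The folder layout, class names, and training settings are all hypothetical.

    # Sketch of the conventional approach: fine-tune a pretrained classifier
    # on hand-labeled images of real parts. "labeled_parts/" is hypothetical;
    # it would hold folders like labeled_parts/ok/ and labeled_parts/flash/.
    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    data = datasets.ImageFolder("labeled_parts", transform=tfm)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, len(data.classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(10):                 # the weeks of labeling come first
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

The expensive part isn’t this loop; it’s the days-to-weeks of collecting and labeling real defective parts that have to happen before it can run.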

Injection molding is currently a 300 billion USD market, and as vehicle electrification increases, more of a car’s total components are injection molded, making that market even bigger. Because so much of that surface area is customer-facing, any blemish, scratch, or burn makes a part defective. Talking to folks in the space, you can see defect rates as high as 15% for blemishes as small as 1 cm^2.

Our solution is to build the models from CAD designs instead of real-world data. An injection mold is usually machined from aluminum or steel, can cost anywhere from $5k to over $100k, and usually comes with a significant lead time. So when customers send their designs out to the mold makers (or to their own CNC machines if they cut molds in-house), they can also send them to us in parallel and have a defect detection model ready to go long before the mold is even finished.

On the backend, we generate these detection models by creating a large number of variations of the 3D model: some simulate innocuous features like ejector pin marks, but most simulate defects like flash. Once the variations are generated, we fire them off to the cloud to render photorealistic scenes with varied camera parameters, lighting, and obscurants (shops are dusty). Since every rendered image comes out already labeled, it’s a simple task to train a fairly off-the-shelf transformer-based vision model from them and deliver it to the customer.
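
To give a feel for the idea (this is an illustrative sketch, not our actual pipeline), here’s what perturbing a mesh and randomizing scene parameters might look like with trimesh. The file name, the crude flash heuristic, and the scene knobs are all stand-ins:

    # Illustrative sketch: fake a flash defect on a CAD mesh, then pick
    # randomized scene parameters for a (hypothetical) cloud renderer.
    import random
    import numpy as np
    import trimesh

    mesh = trimesh.load("part.stl")  # hypothetical CAD export

    def add_fake_flash(mesh, max_offset=0.5):
        """Crudely fake flash: push vertices near the parting line outward."""
        m = mesh.copy()
        z_mid = m.bounds[:, 2].mean()     # treat mid-height as the parting line
        near_seam = np.abs(m.vertices[:, 2] - z_mid) < 0.5
        verts = m.vertices.copy()
        verts[near_seam] += m.vertex_normals[near_seam] * random.uniform(0.05, max_offset)
        m.vertices = verts
        return m

    samples = []
    for _ in range(1000):
        defective = random.random() < 0.5
        samples.append({
            "mesh": add_fake_flash(mesh) if defective else mesh.copy(),
            "label": "flash" if defective else "ok",   # labels come for free
            "camera_fov_deg": random.uniform(30, 60),
            "light_intensity": random.uniform(0.2, 2.0),
            "dust_density": random.uniform(0.0, 0.3),  # shops are dusty
        })
    # each entry then goes to the renderer, which emits a labeled image

The nice property is that the label is known by construction, so no human ever has to look at a rendered image to tell the trainer what’s in it.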

Running the model doesn’t require fancy hardware: our usual target device is an Orin Nano with a 12MP camera on it, and we run it purely on-device so that customer images never need to leave the worksite. We charge customers by the model: when they plan a line change to a new mold, ideally they’ll contact us and we’ll have their model ready before retooling is complete.
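
For the curious, the on-device side can stay very small. Here’s a hedged sketch assuming the trained model is delivered as an ONNX file; the file name, input tensor name, and class ordering are assumptions:

    # Sketch of on-device inference with ONNX Runtime (file name, input
    # name "input", and "class 0 = ok" are assumptions for illustration).
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession(
        "defect_model.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )

    def inspect(frame: np.ndarray) -> bool:
        """Return True if a part looks defective. `frame` is an HxWx3 uint8 image."""
        x = frame.astype(np.float32) / 255.0
        x = np.transpose(x, (2, 0, 1))[None]   # HWC -> NCHW, add batch dim
        logits = sess.run(None, {"input": x})[0]
        return int(logits.argmax()) != 0       # assumed: class 0 is "ok"

Frames come off the camera, get scored locally, and only the pass/fail result ever leaves the box.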

Injection molding is as error-prone as it is cool to watch. For example, flash is a thin layer of extra plastic, usually hanging off the edge of the part or overhanging a hole, that makes parts aesthetically defective or can even prevent them from joining up properly. It can happen for so many reasons: injection pressure too high, clamping force too low, a grubby mold surface, mold wear, poor mold design, and that’s just to name a few!

Steph and I have a history of working on automating tasks that are still done manually – we’ve been working together for the last five years in Pittsburgh on self-driving cars at Argo AI, Latitude AI, and Stack AV. Before that, I worked at Michelin’s test track and Uber ATG. We really, really love robots.

Our first pitch to Y Combinator was “build a better Intel RealSense,” since the RealSense is a universally used (and loathed) vision system in robotics. We built our first few units and started building demos of how folks could use our camera, and that’s when we found defect detection for injection molding and casting. Defect detection is well understood and highly automated for things like PCBs, where a surface defect can indicate a future critical failure (hey, that capacitor looks a little big?), but for higher-volume, lower-cost parts it’s still too much cost and effort for most shops.

We’re excited to launch Bucket with you all! We’d love to hear from the community – and if you know anyone working in industrial computer vision or in quality control, please connect us! My email is [email protected] – we can’t wait to see what you all think!


