Our system combines a robotic platform (we call the first one Maurice) with an AI agent that understands the environment, plans actions, and executes them using skills you've taught it or programmed with our SDK.
If you've built LLM-powered AI agents before, and in particular used Claude Computer Use, that's how we intend the experience of building on Maurice to feel, but acting on the real world!
You can see Maurice serving a glass here (https://bit.ly/innate-hn-vid-serving). Here is another example (https://bit.ly/innate-hn-vid-officer), in which we gave it a digital ability (through the SDK) to send a notification to your phone when it sees someone in the house. In both cases, the only work you have to do is spend about 30 minutes per physical skill collecting data to train the arm, and a couple of minutes writing a system prompt.
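The SDK isn't public yet, so here's only a rough, hypothetical sketch of what such a digital skill could look like: a plain Python function exposed to the agent as a tool, with the system prompt deciding when to call it. The skill decorator is a made-up name, and the ntfy.sh call is just one real, minimal way to push a phone notification:

    # Hypothetical sketch only; the real Innate SDK API may differ.
    import requests

    def skill(fn):
        # Stand-in for an SDK decorator that would register fn as an agent tool.
        return fn

    @skill
    def notify_phone(message: str) -> None:
        """Push a notification to a phone (ntfy.sh is a real public service)."""
        requests.post("https://ntfy.sh/maurice-alerts", data=message.encode("utf-8"))

    # The system prompt then just says something like:
    # "If you see a person in the house, call notify_phone with a short description."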
You can read more about how it works and the paradigm we're creating, and find our Discord, in our documentation (https://docs.innate.bot). We'll be open-sourcing parts of the system there soon.
We want to lower the barrier to entry to robotics. Programming robots is usually complicated, time-consuming, and limited to experts, even with AI helping you write code. We think it should be easier.
We come from AI-for-robotics and HCI research backgrounds at Stanford, and we've worked on multiple hardware + agentic AI projects this past year, but this one was clearly the most surprising.
The first time we put GPT-4 in a body, after a couple of tweaks, we were surprised at how well it worked. The robot started moving around and figuring out when to use its tiny gripper, and we had only written 40 lines of Python on a tiny RC car with an arm. We decided to combine that with recent advances in robot imitation learning, such as ALOHA, to make the arm quickly teachable for any task.
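For flavor, here is a minimal sketch of what such a loop can look like, assuming the openai package and stubbed camera/motor functions. This is not our actual code; capture_jpeg and drive are placeholders:

    # Not our actual code: a minimal sketch of the "LLM in a body" loop.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def capture_jpeg() -> bytes:
        # Stand-in: on the real robot this would read the onboard camera.
        with open("frame.jpg", "rb") as f:
            return f.read()

    def drive(cmd: str) -> None:
        # Stand-in: on the real robot this would send motor/gripper commands.
        print("executing:", cmd)

    def choose_action(jpeg: bytes) -> str:
        # Show the model the current camera frame, get back one motor command.
        b64 = base64.b64encode(jpeg).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "You control a small RC car with a gripper. "
                            "Answer with one word: forward, left, right, grip, or stop."},
                {"role": "user", "content": [
                    {"type": "image_url",
                     "image_url": {"url": "data:image/jpeg;base64," + b64}}]},
            ],
        )
        return resp.choices[0].message.content.strip().lower()

    while True:
        action = choose_action(capture_jpeg())
        drive(action)
        if action == "stop":
            break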
We think it should be simple to teach robots to do tasks for us. AI agents offer a completely new paradigm for this: easy enough to help many non-roboticists get started in the field, yet expandable enough to let a robot take on very complex tasks.
The part that excites us most is that every time a builder teaches their robot a task, every other robot learns faster and better. We believe that by spreading our platforms as widely as possible, we could crowdsource massive, diverse datasets for robotics foundation models that everyone contributes to.
Under the hood, our brain (running in the cloud) uses 9 different models: a YOLO, a SAM, and 7 VLMs from OpenAI, Google, Anthropic, and, most importantly, a couple of Llamas running on Groq to make the system think and act faster. Each model has a single responsibility; together, they act as if they were one model with the ability to navigate, talk, memorize, and activate skills. As a bonus, since these models keep getting better and smaller, every new release makes our robots smarter and faster!
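To illustrate the routing idea (a hedged sketch, not our actual architecture): cheap, fast models handle the inner loop, and heavier VLMs are only called for the responsibility they own. Groq exposes an OpenAI-compatible endpoint, so one client class covers both; the model names and the two responsibilities below are examples only:

    # A hedged sketch of the routing idea, not our actual architecture.
    import os
    from openai import OpenAI

    # Groq's endpoint is OpenAI-compatible, so the same client class works for both.
    fast = OpenAI(base_url="https://api.groq.com/openai/v1",
                  api_key=os.environ["GROQ_API_KEY"])
    smart = OpenAI()  # e.g. a heavier VLM for detailed scene understanding

    ROUTES = {
        "plan_next_step": (fast, "llama-3.3-70b-versatile"),  # fast inner loop
        "describe_scene": (smart, "gpt-4o"),                  # slower, richer perception
    }

    def ask(responsibility: str, prompt: str) -> str:
        # Dispatch each query to whichever model owns that responsibility.
        client, model = ROUTES[responsibility]
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}])
        return resp.choices[0].message.content

    # ask("plan_next_step", "Seen: cup, table. Goal: serve water. Next action?")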
Our first robot, Maurice, is 25 cm tall, has a 5-DoF arm and a Jetson Orin Nano onboard, and comes with our software installed and a mobile app to control it. Our first batch of users wants to teach it to clean floors, tidy up after kids, wake them up in the morning, play with them, or be a professional assistant connected to email and socials. You can go wild quickly!
We're making a small batch available to HackerNews at $2,000 each for early builders who want to experiment, with $50 of free agent usage per month for a year. You can book one on our website with a (refundable) deposit if you're in the US. These units will start shipping in March; the first 10, which ship in February, are already booked.
We'd love your thoughts, experiences, and critiques. If you have ideas on what you'd use a home robot for, or feedback on how to make these systems more accessible, we'll be hanging around in the comments for the coming hours. Let us know what you think!