Martin (YC S23) – Using LLMs to Make a Better Siri

Hey HN! Dawson here from Martin (https://www.trymartin.com). Martin is a better Siri with an LLM brain and deeper integrations with everyday apps. See our demo here (https://youtu.be/jiJdfrWurvk). You can talk to Martin through voice in our iOS app. You can also text it via SMS, WhatsApp, or email. Currently, Martin can manage your calendar, set reminders, find information, send you daily briefings, and have text conversations with your contacts on your behalf (from its own phone number).

I’ve been a Siri power user for a long time, mainly because I’ve always liked using voice as an interface. But, legacy voice assistants like Siri, Google Assistant, and Alexa were never well integrated enough or reliable enough to actually save time. Maybe 1 in 5 commands executes as smoothly as you expect, and the most useful things they reliably do are playing a song or setting an alarm. The advent of LLMs seemed like a great opportunity to push the state of the art forward a notch or two!

Our goal is to do 2 things better:

1) Deeper integrations with productivity-related apps you use every day, like calendar, email, messages, WhatsApp, and soon Google Docs, Slack, and phone calls.

2) Better memory of each user based on their past conversations and integrations, so Martin can start to anticipate parameters in the user’s commands (e.g. “text the guy from yesterday about the plans we made this morning”).

A great way our early users get value from Martin is by having morning syncs and evening debriefs with it. At the start or end of each day, they’ll spend 5-10 minutes syncing on their to-dos for the next day, and Martin will brief them on upcoming tasks and news they’re typically interested in.

Something else Martin does that other voice assistants don’t: it can have full text conversations with your contacts on your behalf from its own phone number. For example, you can tell it to plan a lunch with a friend, and it will text back and forth with that friend to figure out a time and place. Once the conversation between your friend and Martin wraps up, Martin reports back to you via a notification and a text. You can also monitor all of its messages with your contacts in the app.
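
For anyone curious what that loop looks like structurally, here’s a hedged sketch of the back-and-forth scheduling pattern. This is not Martin’s actual code: it assumes a Twilio number for the assistant, and `ask_llm` / `wait_for_reply` are hypothetical callables supplied by the caller (an LLM that decides the next message or declares the plan settled, and a webhook-backed wait for the friend’s reply).

```python
# Hedged sketch of the back-and-forth scheduling loop -- not Martin's actual
# code. ask_llm and wait_for_reply are hypothetical caller-supplied helpers.
from twilio.rest import Client

twilio = Client()  # reads TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN from the environment
ASSISTANT_NUMBER = "+15550000000"  # placeholder number for the assistant

def negotiate_plan(friend_number, goal, ask_llm, wait_for_reply, max_turns=8):
    """Text a contact back and forth until a plan is settled, then return a summary."""
    history = []
    for _ in range(max_turns):
        # ask_llm decides the next SMS to send, or declares the plan settled
        decision = ask_llm(goal=goal, history=history)
        if decision["done"]:
            return decision["summary"]  # what gets reported back to the user
        twilio.messages.create(to=friend_number,
                               from_=ASSISTANT_NUMBER,
                               body=decision["message"])
        # wait_for_reply blocks until the friend's reply arrives (e.g. via webhook)
        history.append({"assistant": decision["message"],
                        "friend": wait_for_reply(friend_number)})
    return "Couldn't settle on a plan; handing back to the user."
```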

We started building Martin exactly 1 year ago, during our YC batch. It’s definitely a hard product to “complete” because of the many unsolved technical challenges, but we’re making progress step by step. First was the voice interface, which Siri still hasn’t gotten right after more than a decade. We have 2 modes: push-to-talk and handsfree. Handsfree is great for obvious reasons. We’ve gotten our latency down to a couple of seconds at most for typical commands, and we’ve tuned our own voice activity detection model to minimize the chance of Martin cutting you off (a common problem with voice GPTs). But, even then, Martin may still cut you off if you pause for 3-5 seconds in the middle of a thought, so we made a push-to-talk mode. For cases where you want to describe something in detail or just brain-dump to Martin, you might need 20-30 seconds to finish speaking. So just hold down, speak, and release when you’re done, like a walkie-talkie.
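
To make the cut-off trade-off concrete, here’s a minimal end-of-utterance sketch using the open-source webrtcvad package. This is not our tuned model, and the timeout value is purely illustrative; it just shows the knob that handsfree mode has to guess at, and that push-to-talk removes entirely because releasing the button marks the end of the utterance.

```python
# Minimal end-of-utterance detection sketch using the open-source webrtcvad
# package -- not Martin's tuned model, just the basic endpointing idea: how
# long a trailing silence the assistant tolerates before deciding you're done.
import webrtcvad

SAMPLE_RATE = 16000                              # 16 kHz, 16-bit mono PCM
FRAME_MS = 30                                    # webrtcvad accepts 10/20/30 ms frames
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2 # bytes per frame

class Endpointer:
    def __init__(self, silence_timeout_s=1.2, aggressiveness=2):
        self.vad = webrtcvad.Vad(aggressiveness)
        self.silence_timeout_s = silence_timeout_s
        self.trailing_silence_s = 0.0
        self.heard_speech = False

    def feed(self, frame: bytes) -> bool:
        """Feed one 30 ms PCM frame; returns True when the utterance looks finished."""
        if self.vad.is_speech(frame, SAMPLE_RATE):
            self.heard_speech = True
            self.trailing_silence_s = 0.0
        elif self.heard_speech:
            self.trailing_silence_s += FRAME_MS / 1000.0
        return self.heard_speech and self.trailing_silence_s >= self.silence_timeout_s

# A short timeout feels snappy but risks cutting off a mid-thought pause;
# a long one feels laggy after every command. Push-to-talk sidesteps the guess.
handsfree = Endpointer(silence_timeout_s=1.2)
```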

We’ve also had to tackle a very long tail of integrations, and we want to do each one well. For example, when we launched Google Calendar, we wanted to make sure you could add a Google Meet link, invite your contacts to events, and access secondary calendars. And you should be able to say things like “set reminders leading up to the event” or “text Eric the details of this event.” Doing each one at that depth means we release roughly one major new integration per month.
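
As a rough idea of what one “create event” tool call has to cover, here’s a sketch using the official google-api-python-client. The wrapper function and its parameters are hypothetical (our actual tool definitions differ), but the Calendar API pieces shown (the Meet link request, attendee invites, and a secondary calendar ID) are the real ones.

```python
# Sketch of a single event-creation call covering the cases mentioned above:
# Meet link, attendee invites, and secondary calendars. The wrapper is
# hypothetical; the Google Calendar API parameters are real.
import uuid
from googleapiclient.discovery import build

def create_event(creds, summary, start_iso, end_iso,
                 attendee_emails=(), calendar_id="primary", add_meet=True):
    service = build("calendar", "v3", credentials=creds)
    body = {
        "summary": summary,
        "start": {"dateTime": start_iso},
        "end": {"dateTime": end_iso},
        "attendees": [{"email": e} for e in attendee_emails],  # invite contacts
    }
    if add_meet:
        # Attaching a Google Meet link requires conferenceDataVersion=1 below
        body["conferenceData"] = {
            "createRequest": {
                "requestId": str(uuid.uuid4()),
                "conferenceSolutionKey": {"type": "hangoutsMeet"},
            }
        }
    return service.events().insert(
        calendarId=calendar_id,                      # a secondary calendar is just a different ID
        body=body,
        conferenceDataVersion=1 if add_meet else 0,
        sendUpdates="all",                           # email the invitees
    ).execute()
```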

Finally, there’s the problem of personalization / LLM memory, which is still largely unsolved. From each conversation a user has with their Martin, we try to infer what the user is busy with, worried about, or looking forward to, so that in their next “morning sync” or “evening debrief”, Martin can proactively suggest to-dos or goals/topics to discuss. Right now, we use a few different LLMs and many chain-of-thought steps to extract clues from each conversation, and we have Martin “reflect” periodically to build up its memory. But, with all that said, we still have a lot of work to do here, and this is just a start!
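
To give a flavor of the extract-then-reflect pattern, here’s a minimal sketch with the OpenAI Python SDK. It is not our actual pipeline; the prompts and model name are placeholders, and the real system uses multiple models and more intermediate steps.

```python
# Illustrative two-step memory pipeline: extract clues from each conversation,
# then periodically "reflect" to consolidate them. Not Martin's actual
# implementation; prompts and model are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model choice

def extract_clues(transcript: str) -> str:
    """Pull out what the user seems busy with, worried about, or looking forward to."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content":
             "From this conversation, list facts about what the user is busy with, "
             "worried about, or looking forward to. One bullet per fact. "
             "Think step by step before writing the final list."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

def reflect(memory_log: list[str]) -> str:
    """Periodic reflection: consolidate raw clues into a compact user profile."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content":
             "Merge these notes into a short profile: ongoing projects, recurring "
             "concerns, upcoming events. Drop anything stale or contradicted."},
            {"role": "user", "content": "\n".join(memory_log)},
        ],
    )
    return resp.choices[0].message.content

# The consolidated profile is what would get injected into the next morning
# sync / evening debrief so the assistant can propose to-dos proactively.
```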

You can try Martin by going to our website (https://www.trymartin.com) and starting a 7-day free trial. Once you start your trial, you’ll get an access code emailed to you along with the download link for our iOS app. After you enter your access code into the app, you can integrate your calendar, contacts, etc. If you find Martin useful after the trial, we charge our early users (who are generally productivity gurus and prosumers with multiple AI subscriptions) $30/month.

We can’t wait to hear your thoughts. If you have cool experiences with Siri, things you wish a voice assistant could do, or ideas about LLM memory, tool calling, etc., I’d love to discuss any of these topics with you!


