Buildt (YC W23) – Conversational semantic code search

Hi HN! We’re Ali, Sam and Yang, the founders of Buildt (https://buildt.ai), an LLM-powered IDE extension that lets you ask highly contextual, semantic questions about your code. It’s a bit like having a colleague sitting next to you with a perfect memory of your codebase. Our VS Code extension is here: https://marketplace.visualstudio.com/items?itemName=BuildtAI....

Some demos: https://twitter.com/AlistairPullen/status/162848600700289433... and https://twitter.com/AlistairPullen/status/162848600806408601...

We’ve been devs on projects ranging from mobile apps and arbitrage trading systems to VR platforms and on-demand startups. Without fail, whenever a codebase gets over a certain size or we inherit legacy code, we get slowed down by not knowing where a certain snippet lives or how it works. We’re also sure we’ve bothered our colleagues for longer than they’d like whenever we’ve been onboarded somewhere new.

Current code search products aren’t too different from CMD + F. We’ve often wanted results that aren’t captured by string matches or that require a nuanced understanding of our codebase: questions such as “How does authentication work on the backend?”, “Find where we initialize Stripe in React”, or “Where do we handle hardware failures?”

Building a tool that helps developers quickly search and understand large codebases requires contextual understanding of every line of code, and then a way to surface that understanding in a useful format.

First we need to parse your codebase. This isn’t a walk in the park: we can’t simply embed whole code files, because then a search result could only bring you to the file it lives in, and no deeper. To find the specific snippets of code you’re looking for, we need to be much more granular in how we split up your codebase. We use a universal parser (Tree-sitter) to traverse the abstract syntax tree (AST) of each code file and pick out individual functions, classes, and snippets to be embedded, rather than the entire file. This allows us to work on your codebase at a more semantic level than the raw source code.
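To make the chunking step concrete, here’s a simplified sketch (not our production code) of how the py-tree-sitter bindings and the Python grammar can be used to split a file into function- and class-level snippets ready for embedding; the node types and granularity shown are illustrative assumptions.

```python
# Simplified illustration (not production code): chunk a Python file into
# function- and class-level snippets with py-tree-sitter, so each snippet
# can be embedded individually rather than embedding the whole file.
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)

def extract_chunks(source: bytes):
    """Yield (node_type, snippet_text) for every function/class in the file."""
    tree = parser.parse(source)

    def walk(node):
        for child in node.children:
            if child.type in ("function_definition", "class_definition"):
                yield child.type, source[child.start_byte:child.end_byte].decode("utf8")
            # Recurse so nested functions and class methods are captured too.
            yield from walk(child)

    yield from walk(tree.root_node)

with open("example.py", "rb") as f:
    for kind, snippet in extract_chunks(f.read()):
        print(kind, "->", snippet.splitlines()[0])  # first line = the signature
```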

Once we have extracted all of the relevant code from the AST, we have to embed it. (We use a number of other search heuristics too, such as edit distance and exact matches, but embeddings are the highest-weighted, core heuristic.) We’ve learned a great deal about how best to implement embeddings for this use case. In particular, when using embeddings to search across modalities (natural language and code), we found that hypothetical search queries were the optimal way to surface relevant code, together with a custom bias matrix that better optimizes the embeddings for finding code from short user queries. Simply embedding the user’s search query and searching the answer space with it was a poor solution.
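As a rough sketch of how signals like these can be blended (the weights, helper names, and pre-computed vectors below are illustrative placeholders, not our real values):

```python
# Illustrative sketch: embedding similarity is the highest-weighted signal,
# with edit-distance and exact-match scores mixed in. Weights are placeholders.
from difflib import SequenceMatcher
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(query: str, query_vec: np.ndarray,
          snippet: str, snippet_vec: np.ndarray,
          w_embed: float = 0.7, w_edit: float = 0.2, w_exact: float = 0.1) -> float:
    embed_sim = cosine(query_vec, snippet_vec)                      # core signal
    edit_sim = SequenceMatcher(None, query.lower(), snippet.lower()).ratio()
    exact = 1.0 if query.lower() in snippet.lower() else 0.0
    return w_embed * embed_sim + w_edit * edit_sim + w_exact * exact

def rank(query: str, query_vec: np.ndarray, corpus: list[tuple[str, np.ndarray]]):
    """corpus is a list of (snippet_text, snippet_vector); best matches first."""
    return sorted(corpus,
                  key=lambda item: score(query, query_vec, item[0], item[1]),
                  reverse=True)
```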

One embeddings heuristic we use is a HyDE comparison: an LLM takes the user’s search query and generates code that it thinks will be similar to the actual code the user is trying to find. This process is well documented and has given us a huge increase in performance (https://www.buildt.ai/blog/3llmtricks). Another heuristic gives us “search for what your code does, not what it is” functionality, which requires the embeddings to gain some form of understanding of what the code actually does. For this we used embedding customisation to create a bias matrix that mutates the vector space so that the embeddings cluster code by its functionality rather than by its literal strings (https://www.buildt.ai/blog/viral-ripout).
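Sketched very roughly below: the prompt wording, model names, and the bias_matrix.npy file are placeholder assumptions, and the OpenAI Python SDK is just one way to wire this up.

```python
# Rough sketch of the two heuristics: (1) HyDE - generate hypothetical code
# for the query and embed that instead of the raw query; (2) apply a learned
# bias matrix W that reshapes the vector space so code clusters by behaviour.
# Model names, the prompt, and bias_matrix.npy are placeholder assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def hyde_query_vector(user_query: str) -> np.ndarray:
    """Embed LLM-generated hypothetical code rather than the raw query."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write a short code snippet that would match this search: {user_query}",
        }],
    )
    hypothetical_code = completion.choices[0].message.content
    return embed(hypothetical_code)

# W is trained offline on (query, relevant-snippet) pairs; here we only apply it.
W = np.load("bias_matrix.npy")  # shape (d, d), hypothetical artifact

def biased_similarity(query_vec: np.ndarray, code_vec: np.ndarray) -> float:
    q, c = W @ query_vec, W @ code_vec
    return float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)))
```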

By having a product that lives in your IDE instead of your Git repository, we give you that contextual understanding in real time as you work on your codebase. There’s no need to context switch or change apps; everything is self-contained, so you can search for code, have it refactored, and have fresh code written, all from a single extension.

Buildt is free for now while we’re in beta, but in the future we’ll charge something like $10 per seat per month. We’re currently building the last of what we consider our core features: cross-file codegen. Soon you’ll be able to just ask Buildt to instantly perform requests such as ‘add firebase analytics to every user interaction’.

We started Buildt as a product to tackle our own frustrations and we’d love for you to try it out and let us know what you think. We can’t wait to hear your feedback, questions, and comments!


