Litebulb (YC W22) – Automating the coding interview

Hi HN, I’m Gary from Litebulb (https://litebulb.io). We automate technical onsite interviews for remote teams. When I say “automate”, I should add “as much as possible”. Our software doesn’t decide who you should hire! But we set up dev environments for interviews, ask questions on real codebases, track candidates, run tests to verify correctness, and analyze the code submitted. On the roadmap are things like scheduling, tracking timing, and customizing questions.

I've been a software engineer at 11 companies and have gone through well over a hundred interviewing funnels. Tech interviews suck. Engineers grind LeetCode for months just so they can write the optimal quicksort solution in 15 minutes, but on the job you just import it from some library like you're supposed to. My friends and I memorized half of HackerRank just to stack up job offers, but none of these recruiting teams actually knew whether or not we were good fits for the roles. In some cases we weren't.

After I moved to the other side of the interviewing table, it got worse. It takes days to create a good interview, and engineers hate running repetitive, multi-hour interviews for people they likely won't ever see again. They get pulled away from dev work to do interviews, then have to sync up with the rest of the team to decide what everyone thinks and come to an often arbitrary decision. At some point, HR comes back to eng and asks them to fix or upgrade a two-year-old interview question, and nobody wants to or has the time. Having talked with hundreds of hiring managers, VPs of eng, heads of HR, and CTOs, I know how common this problem is. Common enough to warrant starting a startup, hence Litebulb.

We don’t do LeetCode—our interviews are like regular dev work. Candidates get access to an existing codebase on GitHub complete with a DB, server, and client. Environments are Dockerized, and every interview's setup is boiled down to a single "make" command (DB init, migration, seed, server, client, tunnelling, etc.), so a candidate can start coding within minutes of accepting the invite. Candidates code in Codespaces (GitHub's browser-based VS Code IDE), but can choose to set up locally, though we don't guarantee there won't be package versioning conflicts or environment problems. Candidates are given a set of specs and Figma mockups (if it's a frontend/fullstack interview) and asked to build a real feature on top of this existing codebase. Candidates submit their solution as a GitHub pull request. The experience is meant to feel the same as building a feature on the job. Right now, we support a few popular stacks: Node + Express, React, GraphQL, Golang, Ruby on Rails, Python/Django and Flask, and Bootstrap, and we’re growing support by popular demand.

We then take that PR, run a bunch of automated analysis on it, and produce a report for the employer. Of course there’s a limit to what automated analysis can reveal, but standardized metrics are useful. Metrics we collect include linter output, integration test results, visual regression testing, performance (via load testing), cyclomatic/Halstead complexity, identifier naming convention checks, event logs, edge case handling, and code coverage. And of course all our interview projects come with automated tests that verify the correctness of the candidate’s code (as much as unit and integration tests can, at least—we’re not into formal verification at this stage!)
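To make a couple of these metrics concrete: Litebulb's actual analysis pipeline isn't public, but here's a minimal sketch (my own illustration, not their code) of how two of the listed checks—cyclomatic complexity and identifier naming conventions—can be computed for Python submissions using the standard library's `ast` module. The complexity formula here is the usual rough approximation: 1 plus the number of branch points.

```python
import ast
import re

# Node types that add a decision point (rough approximation of
# McCabe cyclomatic complexity; real tools count a few more cases).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """1 + number of branch points in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

def non_snake_case_functions(source: str) -> list:
    """Function names that violate snake_case (PEP 8 style)."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and not re.fullmatch(r"[a-z_][a-z0-9_]*", node.name)]
```

For example, a function with a single `if` scores a complexity of 2, and a `def CamelCase()` definition gets flagged by the naming check. A production pipeline would run checks like these per-function across the whole PR diff and aggregate the results into the report.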

Right now, Litebulb compiles the report, but we're building a way for employers to do it themselves using the data collected. Litebulb is still early, so we manually verify all results (24-hour turnaround policy).

There are a lot of interview service providers and automated screening platforms, but they tend to either not be automated (i.e. you still need engineers to do the interviews) or are early-funnel, meaning they test for basic programming or brainteasers, but not regular dev work. Litebulb is different because we're late-funnel and automated. We can get the depth of a service like Karat but at the scale and price point of a tool like HackerRank. Longer term, we're hoping to become something like Webflow for interviews.

Here's a Loom demo: https://www.loom.com/share/bdca5f77379140ecb69f7c1917663ae5 — it's a bit informal but gets the idea across. There’s a trial mode too, which you can sign up for here: https://litebulb.typeform.com/to/J7mQ5KZI. Be warned that it’s still unpolished—we'll probably be in beta for at least another 3 months. That said, the product is usable and people have been paying and getting substantial value out of it, which is why we thought an HN launch might be a good idea.

We’d love to hear your feedback, your interview experiences, or your ideas for building better tech interviews. If you have thoughts, want to try out Litebulb, or just want to chat, you can always reach me directly at [email protected]. Thanks everyone!
