Slai (YC W22) – Build ML models quickly and deploy them as apps

Hi HN, we’re Eli and Luke from Slai. Slai is a fast ML prototyping platform designed for software engineers. We make it easy to develop and train ML models, then deploy them as production-ready applications with a single link.

ML applications are increasingly built by software engineers rather than data scientists, but getting ML into a product is still a pain. You have to set up local environments, manage servers, build CI/CD pipelines, and self-host open-source tools. Many engineers just want to leverage ML in their products without doing any of that. Slai takes care of all of it, so you can focus on your own work.

Slai is opinionated: it's built specifically for software developers who want to build models into products. We cover the entire ML lifecycle, all the way from initial exploration and prototyping to deploying your model as a REST API. Our sandboxes contain all the code, datasets, dependencies, and application logic needed for your model to run.

We needed this product ourselves. A year ago, Luke was a robotics engineer working on a computationally intensive problem on a robot arm (force vector estimation). He started writing an algorithm, but realized a neural network could solve the problem faster and more accurately. Many people had solved this before, so it wasn’t difficult to find an example neural net and get the model trained. You’d think that would be the hard part—but actually the hard part was getting the model available via a REST API. It didn’t seem sensible to write a Flask app and spin up an EC2 instance just to serve up this little ML microservice. The whole thing was unnecessarily cumbersome.

After researching various MLOps tools, we started to notice a pattern—most are designed for data scientists doing experimentation, rather than software engineers who want to solve a specific problem using ML. We set out to build an ML tool that is designed for developers and organized around SWE best practices. That means leaving notebooks entirely behind, even though they're still the preferred form factor for data exploration and analysis. We've made the bet that a normal IDE with some "Jupyter-lite" functionality (e.g. splitting code into cells that can be run independently) is a fair trade-off for software engineers who want easy and fast product development.

Our browser-based IDE uses a project structure with five components: (1) a training section, for model training scripts, (2) a handler, for pre- and post-processing logic for the model and API schema, (3) a test file, for writing unit tests, (4) dependencies, which are interactively installed Python libraries, and (5) datasets used for model training. By modularizing the project in this way, we ensure that ML apps are functional end-to-end (if we didn't do this, you can imagine a scenario where a data scientist hands off a model to a software engineer for deployment, who's then forced to figure out how to create an API around the model and how to parse a funky ML tensor output into a JSON field). Models can be trained on CPUs or GPUs, and deployed to our fully managed backend, where they can be invoked via a REST API.
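To illustrate what the handler component does, here's a minimal sketch of pre- and post-processing around a model. The function names, label list, and payload shape are hypothetical, for illustration only—they are not Slai's actual handler API:

```python
import json

# Hypothetical label set for a small 3-class document classifier.
LABELS = ["invoice", "bill", "receipt"]

def pre_process(request_body: bytes) -> list:
    """Parse an incoming JSON request into model-ready features."""
    payload = json.loads(request_body)
    return [float(x) for x in payload["features"]]

def post_process(model_output: list) -> str:
    """Turn a raw 'tensor' (nested list of scores) into a JSON response.

    This is the step that saves a downstream engineer from having to
    parse model internals themselves.
    """
    scores = model_output[0]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return json.dumps({"label": LABELS[best], "score": scores[best]})

# Example: a fake model output for the 3-class classifier above
print(post_process([[0.1, 0.7, 0.2]]))  # {"label": "bill", "score": 0.7}
```

The point of the split is that the API contract (JSON in, JSON out) lives next to the model, so the sandbox is deployable as-is.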

Each browser-based IDE instance (“sandbox”) contains all the source code, libraries, and data needed for an ML application. When a user lands on a sandbox, we remotely spin up a Docker container and execute all runtime actions in the remote environment. When a model is deployed, we ship that container onto our inference cluster, where it’s available to call via a REST API.
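Once deployed, the endpoint can be consumed like any other REST API. Here's a minimal client sketch using only the Python standard library—the URL and payload shape are hypothetical placeholders, not Slai's actual endpoint format:

```python
import json
import urllib.request

# Hypothetical endpoint URL, for illustration only.
API_URL = "https://api.example.com/models/sentiment/invoke"

def build_request(payload: dict) -> urllib.request.Request:
    """Build a JSON POST request for the model endpoint."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually invoke the model (requires a live endpoint):
# with urllib.request.urlopen(build_request({"text": "great product"})) as resp:
#     print(json.load(resp))
```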

Customers have so far used Slai to categorize bills and invoices for a fintech app; recognize gestures from MYO armband movement data; detect anomalies in electrocardiograms; and recommend content in a news feed based on previous content a user has liked/saved.

If you’d like to try it, here are three projects you can play with:

Convert any image into stylized art -

Predict Peyton Manning’s Wikipedia page views -

Predict how happy people are likely to be in a given country -

We don’t have great documentation yet, but here’s what to do: (1) Click “train” to train the model; (2) Click the test tube icon to try out the model - this is where you enter sentences for GPT-2 to complete, or images to transform, etc; (3) Click “test model” to run unit tests; (4) Click “package” to, er, package the model; (5) Deploy, by clicking the rocket ship icon and selecting your packaged model. “Deploy” means everything in the sandbox gets turned into a REST endpoint, for users to consume in their own apps. You can do the first 3 steps without signup and then there’s a signup dialog before step 4.

We make money by charging subscriptions to our tool. We also charge per compute hour for model training and inference, but (currently) that's just the wholesale cloud cost—we don't make any margin there.

Our intention with Slai is to allow people to build small, useful applications with ML. Do you have any ideas for an ML-powered microservice? We’d love to hear about apps you’d like to create. You can create models from scratch, or use pretrained models, so you can be really creative. Thoughts, comments, feedback welcome!
