Most LLM apps can be built by properly integrating LLMs with a knowledge base of domain-specific or company-specific data. The scope of that knowledge base changes with the task: it can be something as narrow and static as your API docs, or as broad and fluid as the transcripts of your customer support calls.
To make effective use of their data, most teams end up building the same stack: data-source integrations, async embedding jobs, a vector database, bucket storage for non-textual data, a way to version prompts, and often an additional database for the text itself. Baseplate provides much of this backend for you through simple APIs, so you can spend more time building your core product and less time building common infra.
In my previous role at Google X, I built data infrastructure for geospatial data pipelines and knowledge graphs. One of my projects was integrating knowledge graph triples with LaMDA, and I discovered the need for LLM tooling after using one of Google's early prompt-chaining tools. Ani was a PM at Logitech, shipping products on their computer vision team while building side projects with GPT-3.
The core of Baseplate is our simplified multimodal database, which lets you store text, embeddings, other data, and metadata in one place. Through a spreadsheet-style interface, you can edit your vectors and metadata (surprisingly hard to do with existing tools) and add images that can be returned at query time. Users can choose between standard semantic search and hybrid search (weighted keywords plus semantics, for larger or more technical datasets). Hybrid search on Baseplate uses two open-source models, Instructor and SPLADE, which can be tuned for your use case. Datasets are organized into documents, which you can keep in sync through our API or through the UI, so your datasets stay fresh when ingesting data from Google Drive, Notion, etc.
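To give a rough idea, keeping a document in sync through the API might look something like the sketch below. This is hypothetical: the URL, payload fields, and auth header are illustrative, not our exact API.

    import requests

    # Hypothetical sketch of upserting a document into a dataset.
    # The URL, payload fields, and auth header are illustrative only.
    API_KEY = "YOUR_API_KEY"
    DATASET_ID = "support-docs"

    resp = requests.post(
        f"https://api.baseplate.example/datasets/{DATASET_ID}/documents",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "name": "Getting started guide",
            "text": "To rotate your API key, go to Settings -> API Keys...",
            "metadata": {"source": "notion", "last_synced": "2023-04-10"},
        },
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. a document ID you can later update or delete

The idea is that re-syncing a document keeps its text and embeddings fresh without you running your own embedding jobs.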
Once your datasets are set up, our App Builder lets you iterate on prompts with input variables and create context variables that pull directly from a dataset at query time. We give you all the knobs and dials, so you can configure exactly how the search is performed and how it is integrated with your prompt.
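Conceptually, an app pairs a prompt template with the variables that fill it, something like the sketch below (the field names and options are illustrative, not our exact schema):

    # Hypothetical sketch: an app pairs a prompt template (with input
    # variables) with context variables that pull from a dataset at
    # query time. Field names and options here are illustrative only.
    app_config = {
        "prompt": (
            "You are a support assistant.\n"
            "Context from our docs:\n{context}\n\n"
            "Question: {question}\nAnswer:"
        ),
        "input_variables": ["question"],
        "context_variables": {
            "context": {
                "dataset": "support-docs",
                "search": "hybrid",  # or "semantic"
                "top_k": 3,          # how many chunks to inject into the prompt
            }
        },
    }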
When you're satisfied with an app configuration, you deploy it to an endpoint. From there, a single API call pulls from one (or multiple) datasets in your app and injects the retrieved text into the prompt. We also return all of the search results in the API response, so you can build a custom UX around images or links in your dataset. Endpoints have built-in utilities for human feedback and logging. With GPT-4 able to take images as input, we'll soon be working on a way to pipe images from your dataset directly to the model. And all of these tools live in a team workspace, where you can quickly iterate and build together.
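For example, querying a deployed endpoint could look roughly like this (again a hypothetical sketch; the URL, request fields, and response shape are illustrative):

    import requests

    # Hypothetical sketch of a single call to a deployed endpoint.
    resp = requests.post(
        "https://api.baseplate.example/endpoints/support-bot/query",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={"variables": {"question": "How do I rotate my API key?"}},
    )
    resp.raise_for_status()
    data = resp.json()

    print(data["completion"])            # the model's answer
    for hit in data["search_results"]:   # everything that was retrieved
        # each result carries the matched text plus metadata/images/links,
        # so you can build a custom UX around them
        print(hit["text"], hit.get("metadata"))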
We just started offering self-serve sign-up, and our pricing is currently $35/month per user on our Pro plan and $500/team on our Team plan. Feel free to sign up and poke around. We'd love to hear feedback from the community, and we look forward to your comments!