Some background on “business research”: investment and consulting teams sink many hours a week into researching companies, markets, and products. This work is time-sensitive and exhausting, but crucial to big decisions like company acquisitions or pricing model changes.
At large financial services firms, much of this work is offshored to external providers, who charge thousands of dollars per project and often deliver slow, low-quality work. Small teams lack the budget and consistent flow of work to use these providers at all. We’re building an automation solution that offers a fast, easily accessible, and defensible research resource.
Meticulate uses LLMs to emulate analyst research processes. For example, to manually build a competitive landscape like this one (https://meticulate.ai/workflow/65dbfeec44da6238abaaa059), an analyst needs to spend ~2 hours digging through company websites, forums, and market reports. Meticulate replicates this same process of discovering, researching, and mapping companies using ~1500 LLM calls and ~500 webpage and database pulls, delivering results roughly 50x faster at roughly 1/50th the cost.
At each step, we use an LLM as an agent to run searches, select and summarize articles, devise analytical frameworks, and make small decisions like ranking and sorting companies. Compared to approaches that ask an LLM to answer questions directly, this lets us deliver results that (a) come from real-time searches and (b) are traceable back to the original sources.
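To make the agent-per-step idea concrete, here is a minimal sketch of one such step in Python. This is not our production code: call_llm and run_search are placeholders for whichever model and search APIs are in use, and the prompts are purely illustrative.

    from dataclasses import dataclass


    @dataclass
    class Finding:
        """One piece of evidence, kept traceable to its source URL."""
        company: str
        summary: str
        source_url: str


    def call_llm(prompt: str) -> str:
        """Placeholder for a chat-completion call to whichever model is in use."""
        raise NotImplementedError


    def run_search(query: str) -> list[dict]:
        """Placeholder for a web/database search returning {'url', 'text'} dicts."""
        raise NotImplementedError


    def discover_competitors(seed_company: str, max_pages: int = 10) -> list[Finding]:
        """One agent step: devise searches, select relevant pages, summarize with citations."""
        # The LLM devises the search queries instead of answering from memory.
        queries = call_llm(
            f"List 3 web search queries to find competitors of {seed_company}, one per line."
        ).splitlines()

        findings: list[Finding] = []
        for query in queries:
            for page in run_search(query)[:max_pages]:
                # A small, checkable decision: is this page actually relevant?
                verdict = call_llm(
                    f"Does this page describe a competitor of {seed_company}? "
                    f"Answer YES or NO.\n\n{page['text'][:2000]}"
                )
                if not verdict.strip().upper().startswith("YES"):
                    continue
                # Every summary keeps its source URL, so claims stay traceable.
                findings.append(Finding(
                    company=call_llm(f"Name the company this page describes:\n{page['text'][:2000]}"),
                    summary=call_llm(f"Summarize this page in two sentences:\n{page['text'][:2000]}"),
                    source_url=page["url"],
                ))
        return findings

Keeping each LLM decision this small is what makes the output auditable: every claim in the final landscape can point back to the page it came from.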
We’ve released two workflows: building competitive landscapes and building market maps. We designed them with an investor running diligence on a company as the target use case, but we’re seeing lots of other use cases we didn’t originally have in mind: founders looking for alternative vendors for a product they’re purchasing, sales reps searching for more prospects like one they’ve already sold to, consultants trying to get up to speed on an unfamiliar market, and more.
The main challenges we’ve been working through are preventing quality degradation along multi-step LLM pipelines, where an error in one step can propagate widely, and handling wide variation in source data quality. We’re working hard on our next set of workflows and would love for you to give it a try at https://meticulate.ai; we’d appreciate feedback at any level!
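For the technically curious: one common pattern for containing that kind of error propagation, sketched loosely here rather than as a description of our actual implementation, is to gate each step's output with a cheap validation pass before it feeds downstream steps (this reuses the call_llm placeholder from the earlier sketch).

    from typing import Callable


    def run_step_with_check(
        step_fn: Callable[[], str],
        validate_prompt: str,
        max_retries: int = 2,
    ) -> str:
        """Run one pipeline step, then have a second LLM pass sanity-check its output.

        If the check fails, the step is retried instead of passing a bad
        intermediate result on to every downstream step.
        """
        for _ in range(max_retries + 1):
            output = step_fn()
            verdict = call_llm(f"{validate_prompt}\n\nOutput to check:\n{output}")
            if verdict.strip().upper().startswith("PASS"):
                return output
        raise RuntimeError("Step output failed validation; halting before the error spreads.")

    # Illustrative usage: gate a competitor list before it feeds the mapping step.
    # competitors = run_step_with_check(
    #     lambda: call_llm("List direct competitors of Stripe, one per line."),
    #     validate_prompt="Reply PASS if every line names a payments company, otherwise FAIL.",
    # )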