I'm Julian, co-founder of Haystack (https://usehaystack.io). We're building one-click dashboards and alerts using GitHub data.
While managing teams at companies ranging from startups to Cloudflare, my co-founder Kan and I were constantly trying to improve our team and process. But it was pretty tough to tell whether our efforts were paying off, and even tougher to tell where we could improve.
We tried messing around with JIRA, which gave us story points and tickets completed, but it didn't help us dig into where we could improve. We found a few tools that integrated with GitHub to measure number of commits and lines of code, and even to compare engineers on those metrics(!), but we didn't like that approach.
We wanted to know (1) how quickly we deliver as a team, (2) what bottlenecks tend to get in the way, and (3) whether the adjustments we make are actually helping us improve.
We scoured the internet for every piece of research we could find on the topic, talked to >500 engineering leaders working everywhere from startups to FAANG, and started to learn which metrics helped answer our questions and which ones just sucked. Once we had a clear picture, we built Haystack.
Haystack analyzes pull requests at the team level, giving you "north star" metrics like cycle time, deployment frequency, change failure rate, and 20+ more to help you improve delivery. Teams use Haystack to quickly find bottlenecks like code review, experiment with changes like smaller pull requests or automated tests, and see the results. Using this feedback loop, the top 70% of Haystack users have increased production deployments by 58% and achieved 30% faster cycle times on average.
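To give a concrete flavor of the underlying data, here's a minimal Python sketch of pulling cycle times from the GitHub REST API. To be clear, this is not our production pipeline: it simplifies "cycle time" to PR opened -> merged, only looks at one page of recent PRs, and the repo name and token are placeholders.

    # Minimal sketch, NOT Haystack's implementation: treats "cycle time"
    # as PR opened -> merged, and only fetches one page of recent PRs.
    from datetime import datetime
    from statistics import median

    import requests

    def pr_cycle_times(owner, repo, token):
        """Cycle time in hours for recently merged PRs (first 100 closed)."""
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls",
            params={"state": "closed", "per_page": 100},
            headers={"Authorization": f"token {token}"},
        )
        resp.raise_for_status()
        hours = []
        for pr in resp.json():
            if pr["merged_at"] is None:  # closed without merging; skip
                continue
            opened = datetime.fromisoformat(pr["created_at"].rstrip("Z"))
            merged = datetime.fromisoformat(pr["merged_at"].rstrip("Z"))
            hours.append((merged - opened).total_seconds() / 3600)
        return hours

    # "octocat/hello-world" and the token are placeholders.
    times = pr_cycle_times("octocat", "hello-world", "YOUR_TOKEN")
    print(f"median cycle time: {median(times):.1f}h across {len(times)} merged PRs")

(Measuring this properly also means pagination, filtering out bots, tying PRs to actual deploys, and so on.)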
We're lucky enough to work with some awesome teams at Microsoft, Robinhood, and The Economist. As we continue to build out our product, we'd love to hear about your experiences with engineering metrics, your thoughts on how to actually get them right, and of course your disaster stories :)