How DataHub Used an AI Support Engineer to Save $1 million in Engineering Cost

RunLLM saved DataHub $1 million in engineering costs, increased question capacity 6X, and deflected 90% of support tickets

DataHub uses RunLLM AI Support Engineer to Scale its Advanced Technical Support

“I don’t say this lightly — RunLLM fundamentally shifted how we do DevRel at DataHub. It freed us from whack-a-mole support and gave us space to finally fix the things we never had time for. There hasn’t been a single moment I wished we built our own. It’s freed us up to do so much more.” — Maggie Hays, founding DataHub Community Product Manager

The Challenge: Scaling Support Without Scaling Cost

DataHub, maintained by Acryl Data, is the #1 open source metadata platform, used by over 3,000 enterprise companies including Apple, Netflix, and Visa. Its community, one of the most vibrant in open source, has 13,000 members and 600 active contributors.

Maggie Hays, founding DataHub Community Product Manager, and her team were responsible for technical support, primarily through Slack and GitHub Issues. As demand grew, they added an on-call rotation where core developers would join Slack channels to answer questions and point users to docs. They also added weekly live office hours.

But support questions spanned over 70 different integrations — front end, back end, Elastic, Docker, and more — making it difficult for anyone outside the core development team to keep up. Even building a DevRel team to triage questions couldn't fully shield engineers from the volume and complexity. As demand continued to rise, they kept falling further behind.

“We were underwater. Our team couldn’t keep up, and we would have needed more full-time support engineers just to catch up. It wasn’t sustainable — we needed a systematic solution.” — Maggie Hays, DataHub Community Product Manager

They also considered building an in-house AI assistant, but the lift would have pulled part of the core development team off the roadmap for weeks, if not months.

“We were tempted to build it ourselves, but RunLLM delivered faster and better than we could. It just worked — and freed us up to stay focused on our core product.” — Shirshanka Das, Co-founder & CTO

The Solution: Seamless AI That Actually Understood the Product

The team evaluated other AI support solutions but found that RunLLM delivered the highest quality and most useful technical support.

“Other tools were horrible — half the time they said ‘I found nothing’ or just pointed to a page. RunLLM's AI Support Engineer gave us the best results. It was the only agent that could find the right answer, explain why, and give you working code.” — Maggie Hays, DataHub Community Product Manager

Unlike other solutions, RunLLM didn’t just search documentation. It learned from numerous data sources including Slack threads, GitHub issues, and community conversations — capturing real-world context and how users actually talked about the product.

“Unless you’d been in Slack for months reading everything, you wouldn’t know where to find answers. RunLLM just knew. I've been especially impressed by the quality of responses for nuanced corners of our project.” — Maggie Hays, DataHub Community Product Manager

Deployment was seamless. RunLLM automated onboarding and required almost no lift from the team.

“It was just kind of off to the races. I don’t think we had to tweak a single thing.” — Maggie Hays, DataHub Community Product Manager

DataHub improved cost savings, ticket deflection, and technical support capacity with RunLLM.

The Results: Cost Savings, Bandwidth Gains, and a Thriving Community

RunLLM didn’t just reduce support tickets — it created a step-function improvement across people, process, and product.

By handling over 3,000 questions a month — up from around 500 before launch — RunLLM increased the number of questions answered 6X. That growth reflected not just more questions, but deeper engagement.

“People didn’t hesitate — they @mentioned RunLLM right away. It helped them learn faster and dig into parts of the product they might not have otherwise.” — Maggie Hays, DataHub Community Product Manager

Internally, RunLLM saved DataHub from hiring more engineers just to keep pace with community demand — avoiding what likely would have been a significant cost in hiring, onboarding, and ramp time. But the benefit wasn’t just financial. Core development and support engineers were finally unblocked.

“RunLLM lets our core developers put on headphones and code. Our support load dropped, the community’s happier, and we’re improving docs faster. It’s a virtuous cycle.” — Shirshanka Das, Co-founder & CTO

With reactive support off their plates, the team overhauled onboarding, rebuilt the docs site, launched API guides, and refocused on roadmap work.