Our journey to build an AI Support Engineer started more than a decade ago with research at UC Berkeley. Fast forward to today, and a recent comment from a customer stuck with us:
“I've been especially impressed by the quality of responses for nuanced corners of our project.”
Getting to a point where someone feels an AI system understands edge cases, nuance, and subtle dependencies is rewarding. It didn’t happen by accident. And we're still not where we want to be.
What this quote reminds us of is not just progress, but the underlying question that started it all: How can you take a large, messy body of information, ask it anything, and get back something genuinely useful?
That is the promise of AI, but also its most common failure. The answers are often not right, or not right enough.
When it comes to technical knowledge, a language model can’t just be pretty good. It has to be precise. The bar for quality is high, and there is very little forgiveness for wrong answers.
And it’s not like there’s a neat pile of data in the garage we can just go use.
In technical support, valuable knowledge lives everywhere. Docs, Slack threads, support tickets, and code snippets are scattered across tools. But finding the right answer remains a daily challenge for support and engineering teams.
Slack search is weak. Docs are out of date. And no one has time to keep everything organized.
This fragmentation creates three persistent problems: answers are hard to find, knowledge goes stale, and no one owns keeping it organized.
Historically, organizations tried to solve this problem by creating centralized knowledge bases—single sources of truth. Usually, a big part of that is technical documentation.
But this approach requires constant manual upkeep. Engineers and support staff are expected to document everything in standardized formats, a task that few prioritize. Their focus is rightly on building and fixing, not archiving. To put it more plainly, the one thing engineers hate more than reading documentation is writing it.
The result can be incomplete content, outdated docs, and even systems that fall into disuse.
The flaw is not centralization itself. It’s the unrealistic dependence on people to maintain it all by hand.
Large language models (LLMs) offer a fundamentally different approach, but on their own they fall short of solving the problem. Rather than forcing teams to document perfectly, these systems can read across an organization’s knowledge—docs, chat logs, codebases, and tickets—without requiring manual consolidation.
But they do more than aggregate. They structure.
At RunLLM, we use a series of data engineering techniques to make this possible, structuring and linking data as it is ingested.
This results in a structured map of an organization’s technical knowledge, not just a searchable pile of files.
The knowledge graph is not a static artifact. It reflects the way technical teams actually think, work, and debug. It captures how components relate to one another, what edge cases live near core logic, and where support challenges tend to cluster.
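To make that concrete, here is a minimal sketch of what linking ingested content into a graph might look like. The source names, component tags, and linking rule below are illustrative assumptions rather than our production pipeline, which relies on much richer signals.

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeNode:
    """A chunk of ingested content: a doc section, Slack thread, ticket, or code snippet."""
    node_id: str
    source: str           # e.g. "docs", "slack", "zendesk", "github"
    text: str
    components: set[str]  # product components mentioned in the text
    edges: set[str] = field(default_factory=set)  # ids of related nodes


def link_nodes(nodes: list[KnowledgeNode]) -> dict[str, KnowledgeNode]:
    """Connect nodes that mention the same product component.

    A production system would use richer signals (embeddings, code references,
    ticket links); shared component tags are enough to show the idea.
    """
    graph = {node.node_id: node for node in nodes}
    for a in nodes:
        for b in nodes:
            if a.node_id != b.node_id and a.components & b.components:
                a.edges.add(b.node_id)
    return graph


# A doc page and a support ticket about the same (hypothetical) component get
# linked, so an answer about "auth-service" can draw on both.
graph = link_nodes([
    KnowledgeNode("doc-42", "docs", "Configuring auth-service timeouts...", {"auth-service"}),
    KnowledgeNode("ticket-917", "zendesk", "auth-service returns 504s under load...", {"auth-service"}),
    KnowledgeNode("thread-3", "slack", "billing export cron schedule...", {"billing"}),
])
print(graph["doc-42"].edges)  # {'ticket-917'}
```

Once content is linked this way, a question about one component can draw on the doc, the ticket, and the Slack thread together, rather than on whichever one happens to match a keyword.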
This is not about replacing documentation. It is about organizing what already exists into something searchable, navigable, and durable.
Base LLMs can surface relevant information from fragmented sources. But on their own, they introduce new risks: answers that sound plausible without being right, and generic responses that miss the specifics of a product.
What is needed is specialization. When LLMs are fine-tuned on a specific domain (further trained on its vocabulary, patterns, and context), they develop a deeper understanding of the problems at hand.
This is especially valuable in environments where support teams face complex, interdependent issues. Well-structured, fine-tuned systems can return accurate responses even when the documentation is incomplete or outdated. When a system correctly answers a nuanced question without relying on perfect source material, it’s not guessing. It’s reasoning.
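As a rough illustration of what specialization means in practice, the sketch below turns resolved support tickets into a chat-style JSONL training set, one common input format for fine-tuning. The ticket contents (including the config setting they mention) are hypothetical, and the exact format depends on the model and training stack you use.

```python
import json

# Hypothetical resolved tickets: each pairs a customer question with the
# verified answer a support engineer eventually gave.
resolved_tickets = [
    {
        "question": "Why does the connector retry forever on a 429 response?",
        "answer": "Retries are capped by the (hypothetical) max_retry_attempts setting; "
                  "the default is unlimited for rate-limit errors, so set it explicitly.",
    },
    {
        "question": "Can the ingestion job run against a read replica?",
        "answer": "Yes, as long as replica lag stays under the snapshot window.",
    },
]

# Write one chat-formatted training example per ticket.
with open("support_finetune.jsonl", "w") as f:
    for ticket in resolved_tickets:
        example = {
            "messages": [
                {"role": "system", "content": "You are a support engineer for our product."},
                {"role": "user", "content": ticket["question"]},
                {"role": "assistant", "content": ticket["answer"]},
            ]
        }
        f.write(json.dumps(example) + "\n")
```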
Even when information is well-structured, it has to stay up to date. That’s hard in a world where product knowledge shifts constantly, new information keeps accumulating in silos like Slack, Zendesk, and Jira, and codebases change daily.
Documentation rarely keeps pace, not because people don’t care, but because they’re busy solving problems. The more faithfully they document, the less time they have to address issues or rethink how to improve their support process overall. It’s a bitter trade-off. Manual maintenance doesn’t scale, and that is the reality most support teams live with.
What is needed is not just smarter “search.” It is systems that learn.
The most effective AI support systems improve through use. When users flag gaps or corrections, those signals are incorporated into future responses. Over time, the system gets better not because someone rewrote the docs, but because it learned from context, correction, and repetition.
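A stripped-down version of that loop might look like the sketch below: user corrections are recorded and take precedence over the model’s draft the next time a matching question arrives. The question matching is deliberately naive, and the example question and correction are made up; a real system would match semantically and fold corrections back into retrieval and model tuning.

```python
from collections import defaultdict

# Corrections keyed by a normalized question string.
corrections: dict[str, list[str]] = defaultdict(list)


def normalize(question: str) -> str:
    """Crude normalization: lowercase and collapse whitespace."""
    return " ".join(question.lower().split())


def record_feedback(question: str, correction: str) -> None:
    """Store a user-supplied correction for a question."""
    corrections[normalize(question)].append(correction)


def answer_with_feedback(question: str, draft_answer: str) -> str:
    """Prefer the most recent human correction over the model's draft."""
    past = corrections.get(normalize(question))
    return past[-1] if past else draft_answer


# After a support engineer flags a wrong answer, the correction wins next time.
record_feedback("How do I rotate the API key?", "Keys rotate from the admin console, not the CLI.")
print(answer_with_feedback("How do I rotate the API key?",
                           "Use the CLI rotate command."))
# -> Keys rotate from the admin console, not the CLI.
```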
This is not magic. It is applied systems design. Feedback loops. Structured data. Model tuning. All working together to reflect how real teams think and work.
This kind of system helps support teams deliver better outcomes without requiring them to reinvent their workflow.
Importantly, this works with how teams already operate. There is no mandate to re-document everything—just the opportunity to learn from what is already there.
Systems that consolidate, structure, and learn—rather than simply index—help companies respond faster, onboard customers better, and resolve issues with more confidence. They do not eliminate documentation, but they reduce the friction of using it and maintaining it effectively.
The future of technical support isn’t just about faster answers. It’s about deeper understanding.
Not just information retrieval, but applied insight.
Not just documents, but systems that learn alongside the people they help.