Every support team works a little differently: different channels, different SLAs, different triage flows and tagging schemes, different classes of tickets. That’s not a bad thing — your support workflow and strategy should reflect the shape of your business. But it is a problem for generic AI tools.
If an AI system can’t adapt to your workflows, it slows things down, gets in the way — or worse, creates more work for your team. That’s why we built the RunLLM Support Agent SDK: to give your team full control over how AI fits into your support stack, without requiring you to rebuild everything from scratch.
One of our favorite customers manages hundreds of private Slack channels — one for each of their enterprise customers. Depending on the customer’s support tier, each channel has different expectations for response style. When they started using RunLLM, they needed it to serve as the first line of defense across all of them — but follow-ups, triage, and response style varied depending on the customer.
In some cases, RunLLM needed to keep working with the customer until the issue was resolved. In others, if it couldn’t answer a question, it needed to automatically create a ticket.
We built support for each branch in the workflow incrementally, but the process was messy. Every minor tweak required our team to open a pull request, push code to prod, and wait. Updates were slower than anyone wanted, and it ultimately took us about two months to get this customer in production.
We realized making our team the bottleneck for controlling agent behavior was a bad idea. A lightweight Python SDK would give customers full control — and dramatically reduce iteration time.
The RunLLM Support Agent SDK gives you that control. You can fully customize your support workflows without rewriting core logic, and — most importantly — you don’t need to rebuild the AI primitives we’ve already spent 20+ engineer-years perfecting. The SDK gives you flexibility, speed, and reliability, without the overhead.
Say RunLLM is deployed on your documentation site. You want it to answer most questions — but if it’s stumped, it should message your internal support Slack. If someone replies, that response should go back to the user.
That workflow takes just a few lines of Python:
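Here is a sketch of what that workflow might look like. The names below (`StubAgent`, `StubSlackChannel`, `handle_question`, the `0.7` confidence cutoff) are illustrative stand-ins, not the actual RunLLM SDK API — a minimal, self-contained model of the control flow rather than a drop-in implementation:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.7  # assumed tunable cutoff, not a real SDK default


@dataclass
class Answer:
    text: str
    confidence: float


class StubAgent:
    """Stand-in for the RunLLM agent deployed on your docs site."""

    def answer(self, question: str) -> Answer:
        if "install" in question.lower():
            return Answer("Run `pip install runllm`.", 0.95)
        return Answer("I'm not sure.", 0.2)


class StubSlackChannel:
    """Stand-in for your internal support Slack channel."""

    def __init__(self) -> None:
        self.messages: list[str] = []

    def post(self, text: str) -> None:
        self.messages.append(text)

    def wait_for_reply(self) -> str:
        # In a real deployment this would block on a threaded human reply.
        return "A teammate's answer."


def handle_question(agent, support_channel, question: str) -> str:
    """Answer confidently, or escalate to the internal channel and relay the reply."""
    answer = agent.answer(question)
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer.text
    # Stumped: ask the internal support Slack, then send the human reply back.
    support_channel.post(f"Docs bot needs help with: {question!r}")
    return support_channel.wait_for_reply()
```

The only real logic is the last function — everything above it is scaffolding so the branch (confident answer vs. human escalation) is visible in one place.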
Or say you're using RunLLM in Slack. If the AI can’t confidently answer, you want it to escalate — say, by automatically creating a ticket and telling the user it’s been filed.
Also just a few lines:
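Again, a hedged sketch of the shape of that flow — the agent, the ticketing client, and `handle_slack_message` are hypothetical stubs, not the real RunLLM SDK surface:

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed tunable cutoff, not a real SDK default


class StubAgent:
    """Stand-in for the RunLLM agent running in your Slack workspace."""

    def answer(self, question: str) -> tuple[str, float]:
        confident = "upgrade" in question.lower()
        text = "See the upgrade guide." if confident else "I'm not sure."
        return text, (0.9 if confident else 0.1)


class StubTicketSystem:
    """Stand-in for your ticketing integration (e.g. Zendesk or Jira)."""

    def __init__(self) -> None:
        self.tickets: list[tuple[str, str]] = []

    def create_ticket(self, title: str, body: str) -> int:
        self.tickets.append((title, body))
        return len(self.tickets)  # simple incrementing ticket id


def handle_slack_message(agent, tickets, question: str) -> str:
    """Reply directly when confident; otherwise file a ticket and say so."""
    text, confidence = agent.answer(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return text
    # Low confidence: create a ticket and let the user know it exists.
    ticket_id = tickets.create_ticket(
        title=f"Escalated from Slack: {question[:60]}", body=question
    )
    return f"I've opened ticket #{ticket_id} and looped in the team."
```

The design point is that the escalation policy lives in a few lines of your own Python, so changing it (different threshold, different ticket fields, a different destination per channel) doesn’t require a pull request against someone else’s codebase.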
👉 Learn more at docs.runllm.com/sdk
We’ve been testing SDK-built workflows with customers over the past few weeks, and it’s already helping us iterate faster while giving teams more flexibility. Today, we’re rolling the SDK out as a private beta for early design partners. If you’re interested, let us know.
What’s coming next:
This is just the beginning of what’s possible when agentic reasoning meets flexible infrastructure. If you want to see what a fully customized AI support stack looks like, we’re ready.
👉 Reach out for early access to the Support Agent SDK