Blog

Customize Everything: New RunLLM Support Agent SDK

Agents that work how you work

Every support team works a little differently: different channels, different SLAs, different triage flows and tagging schemes, different classes of tickets. That’s not a bad thing. Your support workflow and strategy should reflect the shape of your business. But it is a problem for generic AI tools.

If an AI system can’t adapt to your workflows, it slows things down, gets in the way — or worse, creates more work for your team. That’s why we built the RunLLM Support Agent SDK: to give your team full control over how AI fits into your support stack, without requiring you to rebuild everything from scratch.

RunLLM Support Agent SDK Walkthrough

Why We Built the SDK

One of our favorite customers manages hundreds of private Slack channels — one for each of their enterprise customers. Depending on the customer’s support tier, each channel has different expectations for response style. When they started using RunLLM, they needed it to serve as the first line of defense across all of them — but follow-ups, triage, and response style varied depending on the customer.

In some cases, RunLLM needed to stay engaged with the customer until the issue was resolved. In others, if it couldn’t answer a question, it needed to automatically create a ticket.

We built support for each branch in the workflow incrementally, but the process was messy. Every minor tweak required our team to open a pull request, push code to prod, and wait. Updates were slower than anyone wanted, and it ultimately took us about two months to get this customer in production.

We realized making our team the bottleneck for controlling agent behavior was a bad idea. A lightweight Python SDK would give customers full control — and dramatically reduce iteration time.

What the Support Agent SDK Does

The RunLLM Support Agent SDK gives you:

  • Prebuilt support operations powered by RunLLM’s reasoning engine — including answering questions, tagging and triaging, summarizing, escalating, and syncing across tools.
  • Access to RunLLM’s integrations — Slack, Zendesk, Salesforce, internal docs, and codebases — without dealing with custom APIs.
  • Custom workflow logic defined in lightweight Python code, so your agents behave the way you need them to.

With the SDK, you can fully customize your support workflows without rewriting core logic. Most importantly, you don’t need to rebuild the AI primitives we’ve already spent 20+ engineer-years perfecting. The SDK gives you flexibility, speed, and reliability — without the overhead.

The SDK in Action

Say RunLLM is deployed on your documentation site. You want it to answer most questions — but if it’s stumped, it should message your internal support Slack. If someone replies, that response should go back to the user.

That workflow takes just a few lines of Python:
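As a rough illustration of what that logic looks like, here’s a minimal sketch. The SDK’s actual class and method names aren’t shown in this post, so `StubAgent`, `StubSlackChannel`, `handle_docs_question`, and the confidence threshold below are illustrative stand-ins for the branching behavior, not the real API:

```python
from dataclasses import dataclass

# Illustrative sketch only: the real RunLLM SDK interfaces are not
# public in this post, so these classes are hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.8  # assumed tunable cutoff


@dataclass
class Answer:
    text: str
    confidence: float


class StubAgent:
    """Stand-in for the SDK's question-answering primitive."""

    def answer(self, question: str) -> Answer:
        # A real agent would call RunLLM's reasoning engine here.
        return Answer(text=f"Answer to: {question}", confidence=0.95)


class StubSlackChannel:
    """Stand-in for the SDK's Slack integration."""

    def __init__(self):
        self.messages = []

    def post(self, text: str) -> str:
        self.messages.append(text)
        # A real integration would wait for a human reply in-thread
        # and relay it; here we return a canned response.
        return "Reply from the support team"


def handle_docs_question(agent, slack, question):
    """Answer on the docs site, or escalate to internal Slack."""
    result = agent.answer(question)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.text  # confident answer goes straight to the user
    # Stumped: ask the internal support channel and relay the human reply.
    return slack.post(f"Need help with: {question}")
```

The point of the sketch is the shape of the control flow: one function, one confidence check, one escalation path — the kind of branch that previously required a pull request to change.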

Or say you're using RunLLM in Slack. If the AI can’t confidently answer, you want to:

  1. Create a Zendesk ticket
  2. Post a summary in the ticket
  3. Keep the ticket and Slack thread in sync

Also just a few lines:
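The three numbered steps above can be sketched the same way. Again, `StubZendesk`, `StubThread`, and the `escalate` helper are hypothetical stand-ins — the post doesn’t show the SDK’s real ticketing API — but they map one-to-one onto the steps:

```python
from dataclasses import dataclass, field
from itertools import count

# Illustrative sketch only: these classes are hypothetical stand-ins
# for the SDK's Zendesk and Slack integrations.

_ticket_ids = count(1)


@dataclass
class Ticket:
    id: int
    subject: str
    comments: list = field(default_factory=list)


class StubZendesk:
    """Stand-in for the SDK's Zendesk integration."""

    def __init__(self):
        self.tickets = []

    def create_ticket(self, subject: str) -> Ticket:
        ticket = Ticket(id=next(_ticket_ids), subject=subject)
        self.tickets.append(ticket)
        return ticket


class StubThread:
    """Stand-in for a Slack thread the agent can summarize and sync."""

    def __init__(self, messages):
        self.messages = list(messages)
        self.linked_ticket = None

    def summary(self) -> str:
        # A real agent would use RunLLM's summarization primitive.
        return " / ".join(self.messages)

    def link(self, ticket: Ticket) -> None:
        # Linking keeps future replies mirrored in both places.
        self.linked_ticket = ticket


def escalate(zendesk, thread, question):
    """Create a ticket, post a summary, and sync it with the thread."""
    ticket = zendesk.create_ticket(subject=question)  # step 1
    ticket.comments.append(thread.summary())          # step 2
    thread.link(ticket)                               # step 3
    return ticket
```

Each numbered step from the list becomes a single call, which is what makes the workflow cheap to customize per channel or per support tier.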

👉 Learn more at docs.runllm.com/sdk

What’s Next

We’ve been testing SDK-built workflows with customers over the past few weeks. It’s helping us iterate faster and giving teams more flexibility. Today, we’re rolling the SDK out as a private beta for early design partners. If you’re interested, let us know.

What’s coming next:

  • Public release in August
  • Workflow management via UI
  • Natural language generation and editing
  • Deeper integrations with tools like Cursor

This is just the beginning of what’s possible when agentic reasoning meets flexible infrastructure. If you want to see what a fully customized AI support stack looks like, we’re ready.

👉 Reach out for early access to the Support Agent SDK