Thanks to ChatGPT, an empty text area that turns into a familiar instant messaging-style conversation has been the dominant metaphor for most modern AI applications. The same was true of the very first cut of RunLLM back in 2023 — before we’d even figured out a name (we were nearly “ApiOverflow”)!
As we’ve shared before, chat isn’t the only possible UI for a good AI application. Over the past year, we’ve added support for a number of key, non-chat features: analyzing docs & suggesting improvements, managing a complex knowledge base, and visualizing the topics a user base is asking about. We incrementally bolted many of those features onto the original chat-based UI.
But as we added support for deeper agentic reasoning, we realized that the original metaphors were getting in our way. You needed to be able to see what agents you had running, inspect their actions, and control their behavior — not just see a generic-looking chat history that could have come from any app.
Over the past few months, our team redesigned the UI from the ground up to expose the full capability of an AI Support Engineer. We focused on a few key goals: maximizing visibility and control, enabling users to create and manage multiple agents, and laying the foundation for deeper insights.
Take a look at the walkthrough video of our new UI below!
We’ve discussed the idea of maximizing visibility and control before (including in yesterday’s post), but the previous version of our UI needed improvement here. It simply showed the answer that was generated, but it didn’t show its work, tell you what other actions it took, or allow you to adjust its behavior. Reading the result, you wouldn’t know that 30+ LLM calls were used to ensure its quality.
This is a fundamental problem for AI systems: As an AI agent does more work on your behalf, you need to know what work it’s doing and how it’s doing it (visibility), and you need to be able to give detailed feedback for it to improve (control).
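One way to make this concrete is to record every action an agent takes as a structured step that the UI can render and that a reviewer can attach feedback to. The sketch below is purely illustrative (the `AgentStep`, `AgentTrace`, and action names are our hypothetical inventions, not RunLLM's actual API), but it captures the visibility/control split described above: steps are visible as a trace, and feedback hangs off individual steps.

```python
from dataclasses import dataclass, field
from typing import Optional, List

# Hypothetical sketch: one way an agent could log each action it takes,
# so a UI can surface its work (visibility) and accept per-step
# feedback (control). Names here are illustrative only.
@dataclass
class AgentStep:
    action: str                      # e.g. "search_docs", "draft_answer"
    detail: str                      # human-readable summary of the step
    feedback: Optional[str] = None   # reviewer feedback attached later

@dataclass
class AgentTrace:
    steps: List[AgentStep] = field(default_factory=list)

    def record(self, action: str, detail: str) -> AgentStep:
        """Append a step and return it so feedback can be attached."""
        step = AgentStep(action, detail)
        self.steps.append(step)
        return step

    def summary(self) -> str:
        """Render the trace as a numbered list for display."""
        return "\n".join(
            f"{i + 1}. {s.action}: {s.detail}"
            for i, s in enumerate(self.steps)
        )

trace = AgentTrace()
trace.record("search_docs", "retrieved 5 candidate passages")
step = trace.record("draft_answer", "composed answer with 2 citations")
step.feedback = "cite the install guide instead"  # fine-grained control
print(trace.summary())
```

The key design choice is that feedback attaches to individual steps rather than only to the final answer, which is what makes fine-grained control possible.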
RunLLM’s new conversation view adds visibility and control in key ways.
This is a big update, but we have a lot more planned. In the next few months, we’re adding what-if analysis for debugging purposes, fine-grained feedback for reasoning steps, and Cursor-style rules that allow you to gain more control over agent behavior.
One of the most common things we’ve seen our customers do over the past year is create multiple agents — externally-facing support is where most folks start, but that’s quickly followed by an internal support copilot, a sales engineering agent, and a sales copilot. Until today, that process required a lot of manual effort.
With today’s redesign, we’ve migrated to a shared knowledge base across your whole organization. When creating a new agent, you can give it access to any of the data or tools your organization has already connected to RunLLM. For each new agent you create, you now have fine-grained control over the tone, level of detail, tagging & triage, and any other behavior you’d like to configure. For example, a support engineer might give highly detailed answers with step-by-step guides and code examples, while a sales copilot might give concise, code-free answers focused on business outcomes.
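To illustrate the support-engineer vs. sales-copilot contrast above, here's a minimal sketch of what per-agent configuration over a shared knowledge base could look like. Everything here is a hypothetical illustration (the `AgentConfig` fields and source names are our assumptions, not RunLLM's actual configuration schema).

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical sketch (not RunLLM's actual schema): each agent layers its
# own behavior settings on top of the organization's shared connections.
@dataclass(frozen=True)
class AgentConfig:
    name: str
    tone: str                    # e.g. "technical", "consultative"
    detail_level: str            # e.g. "step-by-step", "concise"
    include_code: bool           # whether answers include code examples
    data_sources: Tuple[str, ...]  # subset of the org's shared sources

# Two agents, same shared knowledge base, different behavior.
support_engineer = AgentConfig(
    name="support-engineer",
    tone="technical",
    detail_level="step-by-step",
    include_code=True,
    data_sources=("docs", "github-issues", "slack-history"),
)

sales_copilot = AgentConfig(
    name="sales-copilot",
    tone="consultative",
    detail_level="concise",
    include_code=False,
    data_sources=("docs", "case-studies"),
)
```

The point of the sketch is the shape of the design: the knowledge connections are shared org-wide, while tone, detail, and code inclusion vary per agent.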
We’ve also added presets for each of the common agent types we see our customers creating today.
There’s much more that went into this redesign than we had time to cover here: background agents for offline analysis, automatic connections to related conversations, more powerful filters, unified insights across all your conversations, dark mode(!), and so on.
But what we’re most excited about with this release is that it lays the groundwork for RunLLM to help customers in deeper, more meaningful ways. Over the next few months, we’ll be adding support for a more comprehensive copilot mode, ad hoc analytics, custom views & dashboards, and sentiment analysis + trend detection. All of this builds on the redesigned conversation view and data architecture released today — we’d love your feedback on what to improve.