The good news? You got the AI budget.
The challenge? Now you have to spend it wisely, and everyone’s watching.
In 2025, AI software spending is projected to hit $134 billion, and boards (and billboards) everywhere are pushing for “AI initiatives” that deliver real value. But with that excitement comes risk. The moment a budget gets greenlit, the pressure starts: pick something, ship fast, and show results.
Unfortunately, that’s how a lot of teams end up with the wrong tool — and worse — wasted time and no real outcomes to show for it.
According to McKinsey, only 20% of companies report meaningful ROI from their AI investments. That number feels low, until you look around.
Flashy demos get attention, but they rarely solve real problems. Some tools plug in easily, but deliver shallow answers. Others make bold claims but fall apart in real workflows.
A high-profile example: Google’s Gemini demo in December 2023. It appeared to show a live, conversational interaction but was later revealed to have been heavily edited. Google admitted to using still images and scripted prompts to generate the responses, raising serious doubts about what Gemini could actually do in production. The backlash was swift: users felt misled, trust eroded, and scrutiny intensified. Because of this, we suspect many people still see Google as behind in AI, even though in our experience its models are among the best.
It was a masterclass in why excitement doesn’t equal execution — and why buyers must focus on what AI can reliably do today, not just what it promises in a demo.
The deeper issue is misalignment. Most AI tools weren’t built with your team’s actual work in mind. As companies adopt AI, they need to prioritize what’s functional over what’s flashy. The real opportunity lies in solving the hard, unglamorous problems that actually move the needle. Focus on the bottlenecks holding you back from growth.
Bad tools don’t just waste money. They erode trust — especially in enterprises counting on AI to deliver results.
Engineers already drowning in tickets don’t have time to clean up after a half-baked rollout. Support leaders can’t afford to lose credibility by backing tools that fail to save time, improve CSAT, or drive revenue.
As Forbes notes, many failed AI projects follow a predictable pattern: teams underestimate the work it takes to implement, overestimate the value they’ll get out, and run out of steam before the tool earns its place.
It’s like every company decided to build California’s high-speed rail system (well publicized for massive cost overruns and unrealistic timelines, and which may now never be built).
What do you stand to lose in your organization? Time. Morale. Political capital. And the opportunity cost of not solving the real support problem you set out to fix.
Nothing frustrates a technical team faster than being forced to adopt a tool that breaks trust.
The AI that succeeds looks nothing like the AI that demos well.
Harvard Business Review puts it simply: the highest-ROI AI projects are narrow in scope, tightly integrated, and deeply relevant to users’ day-to-day work.
In our experience talking to hundreds of support and engineering organizations about their AI evaluations — including those testing our own product — the ones seeing real benefit are focused on finding solutions that:

- reason through their real documentation
- handle nuance instead of delivering shallow answers
- integrate into the team’s daily work

We’ve seen these principles play out again and again. Whether homegrown or vendor-led, the pattern is the same: what works isn’t what demos best — it’s what fits fast and proves real value early.
The best teams don’t try to “transform their business” on Day 1. They test small, learn fast, and scale what works.
MIT Sloan calls this the best way to implement AI: start where the pain is sharpest, and show results early.
That means picking a real problem — say, long ticket queues, repetitive escalations, or inconsistent responses — and seeing whether a tool can meaningfully reduce it. No vision decks. No six-month rollouts. Just: does this work with our data, our systems, our team?
When it does, it’s obvious. And when it doesn’t, you’ve lost a day — not a quarter.
As you evaluate AI tools, these four questions can help you separate hype from help:

- Does it solve a real, specific problem you have today?
- Does it work with your data and your documentation?
- Does it fit into your team’s existing systems and workflows?
- Can it prove value in a fast pilot, in days rather than quarters?
If you can’t confidently answer “yes” to all four, don’t commit. Run a fast pilot instead.
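If you do run that pilot, treat it the way you’d test any other software: replay a sample of real work through it and score the output. Below is a minimal sketch in Python, assuming you can export historical tickets with their known-good resolutions to a CSV; the file name, column names, and the `ask_tool` hook are placeholders for whatever tool you’re piloting, not any particular vendor’s API.

```python
import csv
from difflib import SequenceMatcher
from typing import Callable


def similarity(a: str, b: str) -> float:
    """Rough text similarity between the tool's answer and the known-good one."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def run_pilot(tickets_csv: str,
              ask_tool: Callable[[str], str],
              threshold: float = 0.6) -> float:
    """Replay historical tickets and count how often the tool's answer
    comes close to the resolution your team actually shipped.
    Expects a CSV with 'ticket' and 'resolution' columns (placeholder schema).
    """
    passed = total = 0
    with open(tickets_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = ask_tool(row["ticket"])
            total += 1
            if similarity(answer, row["resolution"]) >= threshold:
                passed += 1
    print(f"{passed}/{total} tickets answered acceptably")
    return passed / total if total else 0.0


if __name__ == "__main__":
    # Swap this lambda for a call into whichever tool you're piloting.
    run_pilot("historical_tickets.csv", ask_tool=lambda ticket: "")
```

The scoring here is deliberately crude. The point is the shape of the test: your tickets, your team’s actual resolutions, and a single number at the end that tells you whether the tool earned a second week.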
AI budgets are growing — but so is pressure to show real outcomes. The strongest returns we’ve seen come from teams that start small, focus on impact, and test AI in the same way they test software: against real scenarios.
For example, one support org we worked with cut mean time-to-resolution by over 50% after training an AI system on their historical tickets and documentation. Another used AI to guide new hires through complex edge cases, reducing onboarding time from months to weeks. The specifics vary — but the pattern is the same: don’t overcommit. Pilot. Learn. Expand.
If you're sitting on a fresh AI budget, the smartest move isn’t to pick the most impressive demo. It’s to prove real value, fast — with tools that actually work for your team. You can get to ROI if you define the return you seek before matching it to the investment to get there. More R. Less I.