If there is one lesson CX leaders should take from the current AI cycle, it is this: customers do not automatically trust AI agents just because vendors say they should. The latest evidence points to a harsher reality. US customer satisfaction remains stuck at 76.9 on the ACSI, Forrester says US CX quality fell again to a record low 68.3 in 2025, and UK improvements come with a big asterisk around complaints, data protection, and effort. Put simply, AI is not rescuing broken service. It is exposing it.
The Market Has Pushed Automation Ahead of Trust
That matters because plenty of companies are still treating AI agents as a cost-cutting story first and a trust story second. The market has been seduced by the usual shiny-object theatre: faster handling times, higher containment, fewer frontline hires. But the backlash is getting harder to ignore. The CFPB said it received about 6.6 million complaints in 2025, more than double the prior year, while Salesforce research found 60% of consumers believe AI makes trust even more important. This is the bit many boardrooms missed: automation does not lower the trust bar. It raises it.
Klarna and Air Canada Show What Happens When AI Goes Too Far
The cautionary tales are already familiar. Klarna became the poster child for AI efficiency after saying its assistant was doing the work of hundreds of agents, only for CEO Sebastian Siemiatkowski to later stress that customers must know “there will always be a human if you want.”
Air Canada, meanwhile, learned the expensive way that a chatbot inventing policy is still the company inventing policy, after a tribunal ruled the airline was responsible for misinformation on its site. These cases matter because they puncture the fantasy that AI agents can replace judgment, empathy, and accountability with a slightly friendlier interface.
Where AI Agents Are Actually Improving Customer Experience
That does not mean AI agents are failing everywhere. Far from it. The better examples all share one trait: AI is being used to remove friction, not dodge responsibility. Bank of America’s Erica, narrowly scoped and backed by human support, is still held up as a model for handling simple tasks well. Zendesk’s 2025 trends research also pointed to rising consumer favourability toward AI in CX, but with a catch: customers are more likely to trust AI when it feels friendly, transparent, and easy to override. In other words, the winning play is not automation versus human support. It is automation with an escape hatch. Very glamorous? Not exactly. Effective? Much more often.
Why Human Escalation Is Becoming a CX Requirement
That is the strategic lesson for buy-side leaders weighing AI agent rollouts in 2026. The brands most likely to gain customer trust with AI agents are not the ones racing to eliminate human interaction. They are the ones designing for clear escalation, context retention, and honest disclosure. CX Today has already been circling this point: poor escalation design quietly breaks trust, while transparency and human oversight are becoming non-negotiable in CX automation. Gartner has now gone a step further, predicting that by 2027, half of companies that cut customer service staff because of AI will rehire for similar work under different titles. That is less a forecast than a polite industry confession.
The Real CX Test Is Whether Customers Feel Helped or Trapped
The broader signal is hard to miss. AI agents can absolutely improve customer experience when they handle low-stakes, high-volume, well-defined work: order tracking, password resets, status checks, routing, translation, after-hours support. But when companies push them into complaints, emotionally charged interactions, or vulnerable-customer journeys without a human backstop, trust falls apart faster than a streaming finale everyone swore would make sense in the last episode. For busy CX leaders, that is the real takeaway: the question is no longer whether AI agents belong in service. It is whether your operating model makes customers feel helped, trapped, or quietly disposable.
The Winning Formula for Customer Trust With AI Agents
As Zendesk CEO Tom Eggemeier put it:
“AI should be in service to humans.”
That line may be the simplest test of the whole market. The companies that remember it will build customer trust with AI agents. The ones that forget it may still hit their automation targets, right up until their customers decide they have had enough.