Containment is up. Repeat contacts are up. Trust is leaking — and the metrics you’re watching aren’t designed to show it.
As a customer, I can forgive a delay or a mistake. What’s harder to forgive is feeling trapped in a loop when I need help, especially when it’s obvious the system is trying to save money at my expense.
That tension is showing up across the customer experience: companies are chasing containment and deflection gains to reduce cost per contact, but in the process, many are eroding the trust that keeps customers loyal.
In a recent interview with CX Today, Jeremy Puent, Principal Solution Architect at Amazon Connect, and Tony Shen, Senior Product Manager at Amazon Connect, argued that containment metrics are often treated like a north star when they should be treated like a secondary signal.
The real priority, Puent said, is building automation that earns trust by reducing customer effort scores, improving outcomes, and keeping a clear path to human help.
“Containment and deflection are about driving those costs down. The most expensive part of that calculation is the human resource.”
Why Leaders Became Obsessed With Containment In The First Place
Puent points to a familiar board-level fixation: cost per contact. In many industries, leaders can recite unit economics instantly, and customer support operations are often managed the same way. If a customer can be “contained” in an IVR or routed into a chat flow where agents handle multiple conversations at once, the math looks better.
The problem is what that math can hide.
Containment can go up while customers are working harder to get answers. Deflection can rise while repeat contacts increase. And dashboards can look “green” while trust is quietly draining away.
Puent shared an internal framing used on Amazon’s retail side that illustrates the mindset shift: treat customer contacts as signals that something broke upstream.
“At Amazon.com on the retail side, we call a customer contact a defect, and we focus on defect elimination.”
How Containment Metrics Undermine Trust
Trust is not lost because an AI agent exists. Trust is lost when automation makes customers feel powerless.
Puent described the difference between an AI agent that helps and an AI agent that harms: recognition, context, and escalation. If the system knows who is calling, why they are calling, and can quickly confirm the relevant issue, automation can be a trust builder. If it cannot, it becomes friction.
He offered a simple scenario. A delivery is delayed due to a blizzard. A good customer experience anticipates the reason for the call and confirms it quickly, while still letting the customer reach a person if needed.
A bad experience repeats questions, forces a cheaper channel, and blocks escalation.
“If you create a bad experience, you don’t give me a path to get to a human being if I still feel like I need one… You’re going to create frustration and drive me away. That erodes trust.”
In practice, the trust-killers are familiar to any CX leader reviewing escalation complaints: customers repeating information; the AI agent asking for details the customer does not have; a “containment at all costs” flow that delays human help; and a channel strategy that feels like avoidance, not service.
What To Measure Instead If Trust Is The Goal
Shen’s argument is not that containment should be ignored. It’s that containment alone is a misleading headline.
He recommends customer experience measures as the real north star, with containment tracked alongside them to ensure automation is not improving efficiency by damaging outcomes.
“Containment alone is not the right North Star.”
For teams trying to operationalize trust, Shen pointed to measurable indicators that can move trust from a subjective debate to something teams can manage with evidence. That includes whether customers actually accomplish what they came for, and whether AI responses are grounded in the right sources.
The downstream costs are quantifiable: repeat contacts that double cost per resolution; CSAT drops of even 5-10 points that research links to measurable churn acceleration; and customers lost to competitors at acquisition costs 5-25x higher than retention would have required. Trust isn’t a soft metric; it’s a leading indicator of lifetime value. When you let containment erode trust, you’re not saving money. You’re trading pennies for dollars.
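The “pennies for dollars” trade can be made concrete with back-of-envelope math. The sketch below uses entirely hypothetical figures (contact volumes, per-contact costs, repeat and churn rates are assumptions, not numbers from the interview) to show how headline containment savings can be eaten by repeat contacts and churned lifetime value:

```python
# Illustrative back-of-envelope math. Every figure below is a hypothetical
# assumption chosen for the example, not data from the interview.

CONTACTS = 10_000        # monthly contacts (assumed)
COST_HUMAN = 6.00        # cost per human-handled contact (assumed)
COST_BOT = 0.50          # cost per contained contact (assumed)
REPEAT_RATE = 0.30       # contained contacts that come back as repeats (assumed)
CHURN_PER_BAD = 0.02     # churn probability after a frustrating loop (assumed)
LTV = 500.00             # customer lifetime value (assumed)

contained = int(CONTACTS * 0.60)   # 60% containment looks great on a dashboard
headline_savings = contained * (COST_HUMAN - COST_BOT)

repeats = int(contained * REPEAT_RATE)       # repeats land back with humans
repeat_cost = repeats * COST_HUMAN
churn_cost = repeats * CHURN_PER_BAD * LTV   # lifetime value lost to eroded trust

net = headline_savings - repeat_cost - churn_cost
print(f"Headline savings:    ${headline_savings:>10,.2f}")
print(f"Repeat-contact cost: ${repeat_cost:>10,.2f}")
print(f"Churn cost:          ${churn_cost:>10,.2f}")
print(f"Net:                 ${net:>10,.2f}")
```

Under these assumptions, most of the headline savings disappears once repeat contacts and churn are priced in; swap in your own unit economics to see where your program lands.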
Why Observability Is Becoming A Trust Requirement
If trust is the goal, leaders need to know how automation behaves at scale, not just in testing. Shen argues that observability is the bridge between “we think the AI agent is helping” and “we can prove it is helping.”
“Without the observability, trust metrics are just like a sentiment. You can’t measure it and you can’t action on it.” In other words, trust improves when teams can see where the customer experience breaks, isolate why it broke, and then fix it with confidence.
In a market where one poor experience pushes 52% of customers toward a competitor, you cannot afford to learn about systemic automation failures through CSAT surveys sent days after the damage is done.
Puent added that observability is also what enables a more disciplined improvement cycle: evaluating AI agents in the same way organizations evaluate human agents, and tracking whether changes improve or degrade performance over time.
This is where human + AI orchestration creates durable advantage. True orchestration means AI handles what it does best, humans step in when judgment matters, and the handoff is so smooth the customer never feels the seam. The difference isn’t just operational; it’s architectural. Systems designed to avoid human escalation will always optimize for containment. Systems designed for intelligent handoffs optimize for resolution.
The Shift From Metrics To Insights
The interview also pointed to a leadership-level change in how dashboards should work.
Puent argues that a raw metric is rarely actionable without context. An insight has meaning and triggers action. That distinction matters because executives do not need more numbers; they need a clearer picture of what changed, why it changed, and what to do next.
“A metric is just a metric. An insight has context, and is actionable.”
The practical translation: containment rate without resolution rate is noise. AHT without first-contact resolution is a vanity metric. Speed of answer without customer effort score tells you how fast you picked up, not whether the call was worth taking.
Every metric in your dashboard should connect to a downstream business outcome. If it can’t answer “did the customer get help, and did that outcome build or destroy value?”, it’s an incomplete signal.
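One way to operationalize the metric-pairing idea is to check every efficiency metric against its paired outcome metric and flag divergence. The sketch below is a minimal illustration of that discipline; the metric names, pairings, and threshold are all assumptions for the example, not an Amazon Connect API:

```python
# Minimal sketch of "metric vs. insight": pair each efficiency metric with an
# outcome metric and flag pairings that diverge. Names and the 10% threshold
# are illustrative assumptions, not a real product schema.

def pair_metrics(efficiency: dict, outcomes: dict, threshold: float = 0.10):
    """Flag efficiency gains whose paired outcome moved the wrong way."""
    pairs = {
        "containment_rate": "resolution_rate",
        "aht_improvement": "first_contact_resolution",
        "speed_of_answer": "customer_effort_score_gain",
    }
    flags = []
    for eff, out in pairs.items():
        eff_delta = efficiency.get(eff, 0.0)   # period-over-period change
        out_delta = outcomes.get(out, 0.0)
        # Efficiency up while the paired outcome drops: the green widget is noise.
        if eff_delta > 0 and out_delta < -threshold:
            flags.append((eff, out, eff_delta, out_delta))
    return flags

# Hypothetical period-over-period deltas: containment rose 8 points while
# resolution fell 12 points -- exactly the pattern the article warns about.
flags = pair_metrics(
    efficiency={"containment_rate": 0.08, "aht_improvement": 0.05},
    outcomes={"resolution_rate": -0.12, "first_contact_resolution": 0.02},
)
for eff, out, e, o in flags:
    print(f"{eff} up {e:+.0%} while {out} moved {o:+.0%} -- investigate")
```

The pairing table is the point: a dashboard built this way cannot show a “green” efficiency number without also showing the outcome it was bought with.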
A Practical Question For Leaders Reviewing Dashboards Today
Puent’s recommendation is straightforward: look at your operational dashboards and ask how many widgets are reporting outcomes after the fact, versus helping teams spot risk early enough to intervene.
“Today dashboards tell you when you missed the mark. How can you make it so they tell you when you’re getting to the warning track and help you take the right actions to avoid running into the wall?”
For CX leaders, that question is also a trust question. If the system is optimized to report failure after customers feel it, then customers will keep feeling it. Trust will keep leaking, even as containment improves.
What changes next is not only the technology, but the intent behind it. If automation is designed to help customers accomplish something quickly and confidently, trust can grow. If it is designed to keep humans out of reach, customers will notice, and they will respond accordingly.
The companies that measure trust will earn loyalty. The ones that measure containment will measure churn.
Start by auditing one thing: pull up your most-watched dashboard and ask whether each metric tells you about customer outcomes, or just operational activity. If you can’t connect the number to whether a customer got help, and whether that help built or destroyed their loyalty, you’re watching the wrong scoreboard.
The technology to do this right exists. The question is whether you’re measuring the right things, or still celebrating containment while trust quietly walks out the door.
Which dashboard are you watching?