Empathy for Humans, APIs for AI: Why One CX Model No Longer Works

Most contact centers are designed around the assumption that every interaction is human. That assumption is now quietly breaking everything.


Published: May 14, 2026

Nicole Willing

The contact center industry was established on the design principle that good service requires an understanding of human intent, the ability to read emotional signals, and responses that deliver the right blend of empathy and information.  

It has taken decades to get reasonably good at it. The problem now is that the customer is changing, and the design hasn’t. 

Customer-initiated AI agents are now entering service interactions, and the tools that browse, ask questions, and make decisions on behalf of humans don’t need empathy. They don’t experience frustration in the way humans do. They don’t need reassurance or warmth from a brand.  

Carrie Brough, Director of Strategy & Ops for TTEC Digital EMEA, has been watching this pressure build for some time, as she explained in an interview with CX Today: “We’ve been designing for humans for so long that we have been concentrating on trying to make automated journeys emotional and complex.”   

Brough added: 

“When we start mixing it into a dual lane, the AI thinks differently. It doesn’t want emotionally well thought through answers. It wants quick, responsive, brief answers so that it can take a decision on behalf of a customer.” 

A large part of the mismatch comes down to how information is delivered. Human-centric journeys are designed to guide, reassure, and explain, often wrapping key details in layers of context. 

AI agents, by contrast, interact through APIs. They require structured outputs, clear fields, and deterministic responses that they can process without ambiguity.  
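To make that contrast concrete, here is a minimal sketch of what "structured outputs, clear fields" could look like for a simple order-status query. The field names, types, and values below are illustrative assumptions, not any specific vendor's schema:

```typescript
// Hypothetical order-status payloads; field names are illustrative only.

// What a human-centric journey tends to deliver: prose that wraps the
// facts in reassurance and context.
const humanReply: string =
  "Good news! Your order is on its way and should arrive by Friday. " +
  "If anything changes we'll email you right away.";

// What an AI agent acting for that customer needs: explicit fields it
// can parse and act on without interpreting tone or inferring meaning.
interface OrderStatusResponse {
  orderId: string;
  status: "processing" | "shipped" | "delivered" | "delayed";
  estimatedDelivery: string; // ISO 8601 date
  nextAction?: "none" | "confirm_address" | "contact_support";
}

const agentReply: OrderStatusResponse = {
  orderId: "A-10492",
  status: "shipped",
  estimatedDelivery: "2026-05-16",
  nextAction: "none",
};
```

The same facts sit in both responses; the difference is whether the receiving party has to interpret tone and context or can simply read a field.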


Can a Contact Center Designed for Humans Serve AI Agents Effectively? 

Empathy remains essential for human interactions, but APIs are becoming the interface for AI, creating a dual requirement for contact center architectures. 

That shift presents an opportunity as customer adoption of AI agents grows, Brough said: 

“Contact centers have been very successful at designing experiences for human-to-human engagement. Human agents can interpret nuance, understand emotional context, and bring empathy into the conversation. But AI agents consume information differently. They need structured, consistent, and machine-readable data to understand intent and take the right action.” 

The end customer is still a human that responds to empathy. “But as AI agents become part of the customer journey, organizations need to design the underlying information layer so machines can interpret it accurately and deliver it in a way that still feels clear, helpful and trustworthy to the customer,” Brough added. 

Early Warning Signs Your Contact Center Is Mixing Human and AI Interactions Badly 

So how do organizations know when they have drifted into this liability zone? Brough pointed to a set of early signals that are easy to overlook but hard to reverse. 

The clearest sign is a pattern of repeat contacts caused by AI interactions producing the wrong output. 

“The AI is getting a response that’s then going back to the real customer as unclear or confusing, and they’re having to follow it up again,” Brough said. 

The second signal is harder to spot because it shows up inside the agent team rather than in customer metrics. Agents start spending their time cleaning up and fixing errors generated elsewhere, correcting interpretations that the customer’s AI got wrong, and unpicking decisions that were already acted on.  

“An action has been taken, that then leads to a complaint and a complex query that the agents are then having to unpick,” Brough noted, “because customers acted on what they thought was the correct advice. But if it’s been misinterpreted, it’s a problem further down the chain.” 

Why Human Customers and AI Agents Need Completely Different CX Responses 

The challenge for organizations lies in underestimating how differently AI agents consume information compared with human agents. What works well for a person (who can infer meaning and apply judgement) will need to be structured very differently for an AI agent. Agents require a different kind of information architecture, according to Brough: 

“We’re designing for average demand, for human demand. And actually, what looks good to a human is very different to what looks good to an AI.” 

The consequence of ignoring this distinction is predictable. 

“We’ve gone for one model to give consistency, but that’s not going to be helpful in the future and will lead to mistakes,” Brough warned. 

What does dual-lane design actually mean in practice? 

Routing Humans and AI Agents Separately 

Separating the paths is a natural design discipline that organizations should be building now, before volume forces the issue. 

“The first step is understanding: is it an AI or a human that wants the answer? And having that dual route will allow you to offer either that speedy, quick answer, or that more complex, more emotional, more brand-orientated answer that humans are looking for,” Brough advised. 

Identifying what kind of entity is making the request and serving it the kind of response that will work for it requires changes to how organizations think about intent, how they classify demand, and crucially, how they build and maintain their knowledge. As Brough pointed out: 

“You still want good knowledge for your humans, so making sure that you’ve got a complete knowledge profile is important.” 
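As a rough illustration of that dual route, rather than a prescribed implementation, the sketch below classifies the requester first and only then chooses the response shape. The header name, channel values, and response fields are assumptions made for the sake of the example:

```typescript
// Minimal dual-lane routing sketch (illustrative; the header name,
// channel values, and response shapes are assumptions, not a standard).

type Requester = "human" | "ai_agent";

interface InboundRequest {
  channel: "voice" | "chat" | "api";
  headers: Record<string, string>;
  query: string;
}

// Classify who is asking before deciding how to answer.
function classifyRequester(req: InboundRequest): Requester {
  // A declared agent identity (here a hypothetical "x-agent-identity"
  // header) or an API channel suggests a machine caller; default to human.
  if (req.channel === "api" || req.headers["x-agent-identity"]) {
    return "ai_agent";
  }
  return "human";
}

function respond(req: InboundRequest) {
  if (classifyRequester(req) === "ai_agent") {
    // Speedy, brief, deterministic: structured fields the agent can act on.
    return { intent: "order_status", status: "shipped", eta: "2026-05-16" };
  }
  // Emotional, brand-oriented lane: a human-friendly, contextual answer.
  return {
    message:
      "Thanks for checking in! Your order has shipped and should arrive by 16 May.",
  };
}
```

The point of the sketch is the order of operations: identify the requester first, then serve the lane that suits it, drawing both lanes from the same underlying knowledge.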

How Mixed Contact Flows Are Damaging Agent Experience 

Getting this wrong has consequences across the contact center, but the impact is often felt most directly by agents. As with the early days of chat automation, teams can find themselves managing the fallout from technology that was intended to reduce pressure, improve journeys, and simplify work. 

“The agents can only then deal with what’s left or what’s gone wrong. And so they become frustrated,” Brough said. 

That frustration is more about the nature of the work than the volume of it. Agents who came into the industry to handle complex problems and make a genuine difference start to feel like a backstop for the technology. As Brough put it, “We always talk about agents being there to handle the more complicated queries, and that’s absolutely right. That’s what humans are great at.” 

“We’re not great if that has been caused by a failure of technology that was supposed to make life easy. We get frustrated. Your agents get disengaged with the service that you’re offering.” 

Dual-Lane CX Design Must Be a Contact Center Priority Now 

The era of designing CX for a single type of interaction is coming to an end. As AI agents become part of the customer journey, organizations can no longer rely on one experience model to serve both people and machines. Human customers need empathy, context, and reassurance; AI agents need structured, consistent, and machine-readable information. 

By trying to serve both, organizations risk delivering an experience that works well for neither. 

The first step is a design decision that recognizes there are now two types of customers and understands what each of them needs, rather than assuming the same answer will satisfy both. 
