Contact Center AI Assistants Are Introducing New Inefficiencies & Burdens, Finds Study

The report suggests that ill-suited AI assistants could be causing learning, compliance, and psychological burdens


Published: July 8, 2025

Rhys Fisher

A new study has highlighted the teething issues some contact centers face when implementing AI assistants.

Researchers from a selection of Chinese universities interviewed customer service representatives from a power grid service call center about their use of AI.

While the reps generally found the ability of AI assistants to transcribe calls in real time to be useful, they highlighted a number of shortcomings.

For instance, agents complained that the technology fails to keep track of longer, more complicated interactions, often cutting out after a certain point. As one rep explained:

“When the conversation gets more intense, like if the customer has a lot of follow-up questions or is emotionally charged about an issue, the call might go on for over 30 minutes.

“But the system might only transcribe the first 10 or 15 minutes. After that, it just freezes or stops recording, and you don’t get a complete transcription of the entire call.”

Another CSR reported issues with standard customer inquiries, stating:

“[The] AI assistant isn’t that smart in reality. It gives phone numbers in bits and pieces, so I have to manually enter them… Sometimes it confuses homophones or only transcribes part of what was said.”

Moreover, the AI frequently makes errors in transcriptions, particularly when callers have certain accents or switch between languages.

A further shortcoming concerns the technology’s supposed “emotion recognition” features.

As the name suggests, the capability is designed to analyze the tone and inflection of a caller’s voice and characterize their emotional state.

However, those surveyed reported that the feature was not fit for purpose, with one rep claiming that it focuses too heavily on volume level, often misclassifying normal speech intensity as negative emotion.

Finally, the study, entitled “Customer Service Representative’s Perception of the AI Assistant in an Organization’s Call Center,” also found that the tool lacked sufficient tag features to adequately convey callers’ emotions.

The combination of these shortcomings meant that reps “largely disregarded emotion analysis features” and instead relied on their own judgment to infer the customer’s emotions during a call.

New AI-Induced Burdens

The study concluded that AI did reduce some manual typing for reps, but the prefilled content often needed correction or removal, creating new inefficiencies.

While it helped organize call records, the extra, unnecessary text it generated added to their workload.

The study stated: “Our findings reveal that an AI assistant can alleviate some traditional burdens, such as cumbersome and time-consuming tasks, thereby demonstrating potential for enhancing foundational efficiency, aligning with previous research in areas such as entertainment, work, education, and healthcare.

“However, it also introduces new learning requirements, compliance challenges, and psychological burdens.”

The study specifically outlined the following three burdens that AI implementation caused within this contact center environment:

1. Learning Burden

Reps face extra effort adapting to the AI assistant due to its limitations, such as misinterpreting phone numbers, addresses, dialects, or lengthy conversations.

While the AI helps capture details when customers speak unclearly, it often fails with complex or region-specific inputs.

This forces agents to spend time correcting errors and learning how to work around the system’s flaws, highlighting a gap between the technology’s promise and its practical implementation.

2. Compliance Burden

AI-generated outputs often don’t align with internal policies or regulatory standards. Reps must edit, verify, or rephrase content to ensure accuracy and use approved terminology.

Because large language models (LLMs) tend to produce natural language rather than standardized, domain-specific language, agents must spend time making the output compliant.

This rework adds to the agents’ workload and creates friction between flexible AI outputs and rigid operational requirements.

3. Psychological Burden

Although AI can reduce cognitive strain by remembering customer details, it can also introduce frustration.

Redundant or unclear suggestions, especially during busy or high-pressure moments, can interrupt workflows and obscure key information.

These interruptions contribute to stress, and while the overall emotional impact is mixed, there is evidence that poor AI performance can add to agents’ psychological strain.

The Bottom Line: Run Pilots and Invest in Change Management

It is clear from the study that many contact centers are struggling to achieve a balance between humans and AI.

Although it is important to note that this study has its limitations – a small sample size and a singular, specific location being chief among them – the issues unearthed are not uncommon.

In order to truly maximize the potential of the technology, contact centers need to run effective pilots and invest in change management to achieve agent buy-in.

An “effective” pilot proves that the tech works and enhances experiences, not just efficiency.

Meanwhile, effective change management involves being clear on the purpose of AI assistants (i.e., not as a job displacement tool), creating AI champions in the agent population, and selling the long-term benefits.

For more on how to best implement AI assistants, check out the article: 5 Bite-Sized Lessons for Implementing AI from Contact Center Experts

 
