As enterprises race to integrate artificial intelligence into customer service, the technology’s financial potential might be undercut by a growing trust deficit.
While AI promises financial benefits and efficiency, the cost of poor implementation could outweigh the savings for enterprises relying on customer calls as revenue lifelines.
When callers realize they’re speaking to AI, the average call abandonment rate jumps from around four percent with human agents to nearly 25 percent with disclosed AI, according to Answering Service Care’s AI Call Report. That’s a difference that could cost businesses millions in lost revenue and reputation.
Transparency Matters More Than Technology
“People want to talk to people, or at least know when they’re not,” Logan Shooster, VP of Answering Service Care, told CX Today in an interview.
“The best course of action if you’re going to use AI is to at least disclose that it is AI. And [say] here are your options—you can get a human if you want it, or you can get a call back or an email so that the person can get help.”
That desire for transparency is overwhelming: 80 percent of survey respondents wanted to know whether they were talking to AI. “So not necessarily that they don’t want to talk to AI, but if they are talking to AI or forced to be dealing with AI, they want to know,” Shooster said.
The data shows how fragile consumer trust can be.
“A third of them will hang up immediately if they do know it’s AI, so they won’t even try to solve the problem.”
Generational differences exist, but the core sentiment holds steady. “Boomers were the most prone to hang up immediately and want to talk to a human, whereas Gen Z and below were more [apt] to potentially deal with AI,” Shooster said.
“But still, the overwhelming majority of people across generations, across political lines, all want to know at the core, ‘Is it a human or not?’ And then if it’s not, ‘What are my options? Can I get to a human? Am I stuck with an AI? What’s the process to escalate?’”
Shooster warned that businesses are too often focused on cutting costs, without fully considering the risks. Across millions of calls, human agents maintain average abandonment rates between three percent and five percent, whereas rates for disclosed AI calls spike close to 30 percent, Shooster said.
“So yeah, maybe you’re saving 50 percent on the labor assets, still, if you’re losing one out of every three leads to your competitor… it’s not even close.”
For small businesses, such as HVAC, plumbing, real estate and personal injury firms, the cost can be devastating. “Each one of those calls could be $15,000, $30,000 a pop,” Shooster said.
And beyond the direct financial cost, there’s the longer tail of reputational damage from negative reviews on social media.
“A lot of these businesses are word-of-mouth and referral-type businesses and I can imagine lots of these businesses having people talking on Facebook and all different types of groups [saying], ‘Hey, don’t use them. You can’t get support. It’s horrible; they don’t care. There’s nobody you can talk to,’” Shooster said.
“That’s the indirect impact that you don’t see on your bottom line, but it is impacting your brand and your reputation.”
The company’s findings highlight a disconnect between how quickly AI is being adopted and how slowly regulation and consumer confidence are catching up. “It’s kind of Wild, Wild West across the world, but especially in the U.S.,” Shooster said. “The technology is moving faster than legislation, and consumers are the ones who have to deal with it.”
With AI systems often retaining and processing user input for training, enterprises need to think carefully about what that means for customer data protection, Shooster said.
“If you’re talking to AI, not only is it not a human, but also everything you say is being stored and then trained on and then reused, and then who knows where it’s ending up.”
“We’re not completely anti-AI—obviously there’s a place for technology, and it’s not going anywhere, but it’s good to know what you’re talking to and what’s going to happen [to] the information that you’re sharing.”
Businesses would do well to take a similar approach to compliance with U.S. call recording disclosure laws, which vary by state. Some require only one party’s consent, while others mandate that both parties be informed.
In practice, the call center industry tends to take the cautious route, following the strictest “two-party” disclosure standards to ensure compliance everywhere. The logic is simple: always disclose when a call is being recorded to avoid legal gray areas and stay on the safe side of regulation.
“Maybe they are thinking about their customer. But have they really stopped and looked at where their policies are today… when it comes to the proper disclosures, the proper registrations, the proper laws?”
Shooster pointed to states like California, Colorado, and Texas that have already begun passing AI transparency laws, as well as the proposed Keep Call Centers in America Act that could tie AI disclosure to job protection.
“There are a lot of things happening right now from a legislation standpoint even though it’s not where we’d like to see it,” Shooster said, adding that specific AI laws may come into effect in the future. “Businesses should think about that before they’re implementing technology.”
Answering Service Care has proposed a simple fix: make AI disclosure the default, along with a dose of honesty.
“Potential disclosure could look like when you call in, the AI would respond, ‘You’re speaking with an AI assistant today. I’ll do my best to answer your questions quickly,’” Shooster said. “At least now they know, from an ethical standpoint, from a privacy concern.”
That level of transparency helps businesses because “it lets the technology slow down and get better before we just roll it out to everything because it may save a few dollars,” Shooster said. It also “helps avoid any potential financial compliance risks by breaking laws they probably don’t even know about.”
Building Customer Trust in AI Rollouts
A lack of strategic planning and feedback loops around AI implementations isn’t unique to the U.S.
A recent report from ArvatoConnect found that many firms in the U.K. are rolling out AI and digital projects without proper planning, feedback, or measurement. Just 53 percent had gathered end-user insight before making changes, while an equal share had not sought feedback after rollout.
In addition, around 60 percent had failed to gain colleague buy-in, and only 56 percent had trained their employees on the new systems. Even more concerning, 38 percent had yet to implement performance monitoring, and 50 percent hadn’t set KPIs to track success.
Shooster echoed that sentiment, arguing that feedback and testing are essential before any rollout.
“You need to get customer feedback when you roll things out, and then iterate and then adapt from there and test. And not just do cold deploys of new things and hope it works. There does need to be a systematic approach to it.”
Shooster added that introducing AI without purpose or process is where many companies go wrong. “It’s about identifying the types of call types and workflows… identifying the whole process end to end, and then saying, where does a human need to be involved? Where could an AI potentially be involved? And most importantly, what does the customer think?”
Shooster acknowledged that AI can have a place — in limited, low-stakes scenarios, such as basic queries or mass notifications for public announcements. “But there are other queries where you do need a human on the phone, instead of just forcing consumers to deal with AI for all different types of solutions.”
That distinction becomes crucial in high-stakes industries like emergency response, healthcare and finance. “Because when it’s a 911 call, which we went over in our results, an urgent medical issue, or even financial, with a lot of these IVRs you’re getting, you’re ending up at AI, and they’re not giving you the escape to get to a human. You’re stuck there, and you’re in loops and loops of agent[s],” Shooster said.
“When it’s wrong, AI has no idea. It’ll be extremely confident that this is the answer. And then when called out, it’ll change its opinion and say, ‘Oh, you’re right.’ … Sometimes paying a little bit more to make sure that we aren’t making those types of life-changing mistakes is worth it.”
That same need for balance between efficiency and empathy showed up across the Atlantic, too.
According to 8×8’s Streetview survey, 83 percent of UK customers still prefer speaking with a real person, with just four percent favoring chatbots or virtual agents. The only time that sentiment shifted was when money entered the picture, as around one in three respondents said they’d accept AI if it meant lower prices.
“My recommendation is not to put it in front of potential new business opportunities, because at least for that small business, you can’t afford to impact your reputation with a new customer, because every single one of those is really important to the growth of your business.”
Shooster added: “I think for some support-related issues, it could make sense, again, based on feedback and testing. And then every business is different.”
In the end, the advice to business leaders is clear: slow down, disclose, and design with trust in mind. The key is to use AI deliberately, not desperately. “It’s really about understanding what’s important — is it short-term gains, or long-term gains?” Shooster said. “Because loyalty and reputation are everything in the long run.”