For years, contact centers have been tightening security around digital channels such as web, app, chat, and email, while the voice channel kept chugging along, largely untouched. But that oversight is now becoming a liability.
AI-generated voice attacks, imposter calls, and IVR mining have created a new kind of inbound threat that’s invisible to traditional fraud systems and surprisingly effective against undertrained agents.
The inbound voice channel, still a mainstay for financial services, healthcare, and government, has lagged behind digital channels in security evolution, Mike Schinnerer, Vice President of Product Management at Transaction Network Services (TNS), told CX Today in an interview.
Online, there is pressure to move away from passwords to multi-factor authentication, but inbound contact center engagements often still rely on knowledge-based authentication—the “What’s your mother’s maiden name?” routine—to verify callers.
“That’s pretty vulnerable,” Schinnerer said.
“It often ends up being an imposter talking to an agent. And those agents are sometimes easily fooled and manipulated… because they’re people too and they’re vulnerable to being misguided.”
The result is a perfect storm of human vulnerability and AI-enabled deception.
As AI voice synthesis becomes easier to deploy, bad actors are exploiting it to create lifelike caller voices that can pass basic screening. Some use interactive voice response (IVR) mining — repeatedly calling 1-800 numbers to study how an organization’s IVR system behaves, before launching a real attack.
“The intent of this initial call into the care support line is not necessarily to… steal money,” Schinnerer explained. “They’re just trying to do reconnaissance to understand where the vulnerabilities are.”
That’s the double-edged sword of AI in the contact center: it’s both the threat and the solution as enterprises deploy it for call routing to enhance customer experience. “That’s great, but let’s make sure that we’re not embracing a technology that has vulnerabilities,” Schinnerer said.
“It’s a new technology for the contact centers, so there is continual learning back and forth on how we can understand how they implement their AI agents, and then we can work with them to identify vulnerabilities. It’s almost like penetration testing and software code to help identify those moments,” Schinnerer said.
“The AI agent is new, it’s a good buzzword, but we’re still learning. Every cloud contact center, if you go into their marketplace, there’s [at least] five different service providers touting an AI voice or AI agent experience.
“We’re trying to evaluate the credibility of those and the vulnerabilities of those types of service providers, either in partnership with them or not.
“It’s new. It’s not going to be solved in the next three months. It’s going to take more than that to identify where those vulnerabilities are, and then to help the education of the enterprises using them on how to strengthen [their defenses] if there is an exposure or not.”
Fraudsters are now extending their reach into smaller companies, forcing them to be more vigilant.
For instance, in financial services, “ten years ago, a lot of the imposter fraud was attacking larger financial institutions,” Schinnerer said. “But we’re now seeing it trend into the regional banks. And those regional banks often have more vulnerabilities or exposure to risk, and… when these fraud events happen, they’re more impactful for them.”
For one, a fraud event leaves a bad taste in the consumer’s mouth: customers may leave for a bigger brand they feel is protecting them better, and the institution may also face financial payouts and penalties.
Drawing on TNS’s work with healthcare organizations on outbound calling, Schinnerer said it is clear that businesses have to be tech-savvy to navigate these increasingly sophisticated attacks.
“What we’re seeing when we engage the healthcare organizations is that they don’t necessarily have a sophisticated customer care service and platform that they’re using. So there’s a little bit of immaturity in the healthcare market on what calling platforms that they use, and a lot of them are calling from local numbers within their regions, but… it is a target.”
Brands also need to be aware of how imposters are carrying out multimodal attacks, probing both voice and messaging channels — sometimes in tandem — to see which one they can get through.
“We’re working with other partners who have more of a messaging background and messaging security, and seeing how we can complement one another with the data that they have and the data that we have, so that we can bring a multimodal solution to the market,” Schinnerer said.
From ‘Trust But Verify’ to ‘Never Trust, Always Verify’
To counter these threats, companies like TNS are pushing for a zero-trust approach to voice — treating every inbound and outbound call as potentially hostile until verified.
“A zero trust voice framework is gaining momentum. So we started shifting the conversations as we engage enterprises.
“Where before we were getting engaged with their support organization or their marketing organization for customer outreach, we started elevating these conversations to the CISO organizations, and those are the ones that understand and comprehend this ‘never trust, always verify’ approach, because that’s what they’ve implemented for their data security or their online security principles.”
“When we approach them and say we’re not going to trust any call into or out of your voice platform, because that’s the only way we can ensure that we remove fraud, they say, ‘Yeah, that makes perfect sense.’”
“Ultimately, what we see is that there’s an ROI to it,” Schinnerer said. “There’s an ROI from the customer satisfaction perspective, and then there’s an ROI because we can remove a lot of the fraud that originates from this exposure that they have, which is their voice network.”
This zero-trust framework uses AI and network-level data to validate that calls come from the devices and networks they claim to — effectively creating multi-factor authentication for voice.
TNS monitors billions of calls daily for carriers and enterprises, which allows it to develop multi-factor authentication that is less reliant on humans to authenticate callers.
“We can check [the carrier network] if the phone call is originating from the actual device,” Schinnerer explained. “If they’re on Verizon, we go ping Verizon — is this call active? We can do more deterministic type of interrogations. We can also do some AI voice detection, to see if this is not the actual caller’s voice.”
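The combination Schinnerer describes — a deterministic carrier check plus probabilistic AI voice detection — can be sketched as a simple scoring function. This is an illustrative sketch only: the signal names, thresholds, and routing decisions below are assumptions for the example, not TNS’s actual API or logic.

```python
from dataclasses import dataclass


@dataclass
class CallSignals:
    """Network-level attributes available at call time (illustrative names)."""
    carrier_confirms_active: bool    # carrier lookup confirmed the call is live on the claimed device
    caller_id_matches_network: bool  # calling number consistent with originating-network data
    synthetic_voice_score: float     # 0.0 (likely human) .. 1.0 (likely AI-generated)


def score_call(signals: CallSignals, synth_threshold: float = 0.7) -> str:
    """Combine deterministic network interrogation with AI voice detection.

    Returns a routing decision: 'pass', 'review', or 'block'.
    Thresholds here are arbitrary placeholders.
    """
    if not signals.carrier_confirms_active:
        return "block"   # call does not originate from the claimed device
    if signals.synthetic_voice_score >= synth_threshold:
        return "review"  # alert the agent: possible AI-generated voice
    if not signals.caller_id_matches_network:
        return "review"  # metadata mismatch warrants a closer look
    return "pass"


# Example: carrier confirms the device, but voice detection flags synthesis
print(score_call(CallSignals(True, True, 0.85)))  # review
```

The key design point is layering: the deterministic network check acts as a hard gate, while the softer AI-detection signal downgrades rather than blocks, keeping legitimate callers moving.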
Filtering Fraud at the Edge
One of the biggest shifts in inbound voice protection is moving detection to the “edge”, using network attributes to identify suspicious callers.
“The goal is… to filter the bad actors and the imposters before they get to the agents,” Schinnerer said.
That way, customer service agents aren’t vulnerable or under pressure to resolve fraudulent calls, especially as high churn rates of contact center employees make it challenging for managers to keep their teams trained and vigilant.
This filtering relies on network intelligence—for example, seeing if one number has been calling multiple banks or trying to penetrate several financial institutions in a short time frame.
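A velocity check of this kind — one number hitting multiple institutions in a short window — can be sketched with a sliding-window counter. The class name, window length, and threshold below are illustrative assumptions, not a description of TNS’s actual system.

```python
from collections import defaultdict, deque


class VelocityFilter:
    """Flags a calling number that contacts many distinct institutions
    within a short time window (illustrative thresholds)."""

    def __init__(self, window_s: float = 3600, max_targets: int = 2):
        self.window_s = window_s
        self.max_targets = max_targets
        # number -> deque of (timestamp, institution) observations
        self.calls = defaultdict(deque)

    def observe(self, number: str, institution: str, now: float) -> bool:
        """Record a call and return True if the number looks suspicious."""
        q = self.calls[number]
        q.append((now, institution))
        # Drop observations that have aged out of the window
        while q and now - q[0][0] > self.window_s:
            q.popleft()
        distinct_targets = {inst for _, inst in q}
        return len(distinct_targets) > self.max_targets


f = VelocityFilter(window_s=3600, max_targets=2)
f.observe("+15551230000", "bank_a", now=0)
f.observe("+15551230000", "bank_b", now=60)
print(f.observe("+15551230000", "bank_c", now=120))  # True: 3 institutions in 2 minutes
```

Because the check is stateful but cheap, it can run at the network edge before a call ever reaches an agent, which is the whole point of the filtering Schinnerer describes.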
Crucially, these checks happen in real time. TNS’ Enterprise Voice Security suite also runs AI synthetic voice detection while a conversation is happening.
“We can do the detection within about 15 seconds and… we can alert the care agent that we think something’s going on here,” Schinnerer explained.
“And within those 15-30 seconds, by that time, you haven’t really disclosed anything. You’re still trying to figure out why they’re calling, what the intention is, and you haven’t disclosed any PII information.”
This allows the enterprise to identify fake calls without disrupting the experience for legitimate customers.
“The thing we need to be mindful of is that we don’t ruin the experience for good actors,” Schinnerer said. “So we don’t want to make it too long on the phone to do the analysis and the determination before we send the good calls to the agents.”
Security as a Customer Service Differentiator
While adding new security layers can sound like friction, in many cases, it actually improves brand trust. “Ultimately, even if there’s a little delay getting [callers] to an agent, what we’re also seeing with institutions that we talk to, especially these downmarket ones, because they’re competing against the big brands… we help them position it to their consumers as a differentiator.”
Brands can highlight their strong security focus on protecting customers in their marketing campaigns, emphasizing how their fraud prevention measures set them apart, Schinnerer suggested. This messaging appears to resonate with customers, who express appreciation and trust towards companies prioritizing their safety.
This mindset also applies to buyer evaluations. “It’s really important to understand what impacts their business,” Schinnerer said. “First and foremost, understanding what business outcomes you really want and understanding what that spend and cost also might look like, because that is also part of that ROI assessment, and then it’s a matter of always trying to improve your implementation and your solution.”
What are you trying to accomplish? Are you trying to improve your agent efficiency? Are you trying to reduce your exposure, or reduce your payout from fraud losses?
Because, as Schinnerer pointed out, there is no magic, hands-off answer:
“We’ve had inbound voice authentication for a little while now, and it’s been expensive… When you think of the fraud departments inside of those organizations, they were hopeful that it would be a silver bullet — that if they could implement inbound voice authentication, it would remove the fraud. And it hasn’t necessarily, just because, again, with the AI voice agents and bad actors always changing their methods, it is not a set-and-forget type of solution.”
Enterprises have to stay on top of understanding the threat landscape and where new attacks are coming from. To do that, they need to “choose a partner that can see the whole picture, not just the inbound picture,” Schinnerer said.
As AI blurs the line between real and fake voices, inbound call security is no longer optional — it’s strategic. Enterprises need to start treating the phone channel like any other digital access point — zero trust, AI-augmented, and continuously verified.