AI Is Breaking Contact Center Security—Are You Ready?

In this CX Today roundtable, security experts unpack how AI scales contact center fraud and why security must become risk-based.

Security, Privacy & Compliance | Roundtable

Published: May 12, 2026

Nicole Willing

AI is changing the cyber threat environment, and CX teams are feeling it first. In this CX Today roundtable, host Nicole Willing is joined by Miguel Fornes, Information Security Manager at Surfshark, Randy Layman, CTO at AVOXI, and Ron Zayas, CEO at Ironwall by Incogni, to unpack what AI-accelerated threats mean for contact center security, authentication, and customer trust.

The panel agreed that attackers are not always using radically new techniques. Instead, AI amplifies familiar ones, like social engineering, through speed, scale and personalization. That shift matters because it increases the odds of finding a weak link, whether that is a process gap, an overworked agent, or a customer who is tired of jumping through hoops.

Zayas expanded on the operational fallout of attacks at scale, warning that volume also becomes a customer experience issue:

“When you can do 100 calls, not only are you going to be able to find those weaker links, but you’re also going to crowd out the ability for other people who have legitimate calls to be able to come in.”

For CX leaders, this reframes security as capacity protection. Fraud can degrade service levels and raise the emotional burden on agents who are expected to spot increasingly convincing manipulation.

Fornes argued that the root challenge is a decade-long march toward convenience that attackers exploit, and he did not mince words about what changes with agentic AI:

“What before an attacker usually was spending like days or weeks to do… now with AI that can be honestly done in the blink of an eye.”

Fornes also cautioned that consumers and businesses need to adopt layered security habits and realistic expectations around friction.

One of the most important nuances in the discussion was that "bot detection" is no longer a clean solution. Authorized automation will increasingly sit alongside malicious automation, meaning that teams must validate intent and identity, not just whether the voice or chat appears synthetic.

The panel’s practical recommendation is to evolve authentication and identity checks into a graduated, risk-based model. Layman captured the shift:

“I think we need to really start looking at authentication, not as a black and white yes and no, but as a continuum of probability.”
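To make the idea concrete, a risk-based model can be sketched as a weighted score over signals that maps to graduated friction rather than a binary pass/fail. The signal names, weights, and thresholds below are illustrative assumptions for this article, not any vendor's actual scoring model:

```python
def risk_score(signals: dict) -> float:
    """Combine weighted risk signals into a 0-1 score (illustrative weights)."""
    weights = {
        "new_device": 0.30,       # caller's device/number not seen before
        "geo_mismatch": 0.25,     # location inconsistent with account history
        "synthetic_voice": 0.35,  # voice-liveness check flagged the audio
        "velocity": 0.10,         # unusually many recent calls on this account
    }
    return sum(weights[k] for k, v in signals.items() if v and k in weights)

def required_step_up(score: float) -> str:
    """Map the score to a graduated response instead of a yes/no gate."""
    if score < 0.25:
        return "none"            # low risk: proceed normally
    if score < 0.60:
        return "otp"             # medium risk: add a one-time passcode
    return "agent_review"        # high risk: route to trained human review

# Example: known device, but the voice-liveness check fired during a call spike.
score = risk_score({"new_device": False, "synthetic_voice": True, "velocity": True})
print(required_step_up(score))  # score 0.45 -> "otp"
```

The design point matches Layman's framing: friction scales with probability of fraud, so legitimate low-risk callers are not penalized for the behavior of attackers.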

Watch the full interview above for insights and practical advice on how CX and security teams can stay ahead of AI-driven fraud without adding unnecessary customer friction.

Tags: Agentic AI, AI Agents, Cybersecurity for CX, Security and Compliance
