Customer experience is entering a new frontier. For years, automation in support meant deploying chatbots that offered scripted answers and could only handle basic inquiries without ever touching a user’s account. If a user wanted to move money or change data, they were handed off to a human agent who acted as the authorization gatekeeper.
Then, as large language model (LLM)-based chatbots entered the contact center, the primary fear was reputational. Leaders worried about a chatbot “going rogue,” hallucinating a policy or using inappropriate language.
However, as 2026 unfolds, the risk profile has fundamentally changed. Enterprises are no longer deploying bots that just answer; they’re deploying AI agents that act.
The new generation of agentic AI is being integrated directly into backend systems with the power to issue refunds, change account details, book appointments, and reset passwords. While this promises to collapse Average Handle Time (AHT) and boost resolution rates, it creates a massive, often unsecured attack surface.
This introduces a different risk landscape, one that demands new approaches to trust, oversight and security.
The New Attack Surface: Actions, Not Words
The rise of AI agents is shifting the attack surface in CX from informational to operational. The next wave of CX failures won’t be a chatbot giving a wrong answer. It will be an AI agent with too much access doing the wrong thing, or being manipulated into doing it, likely while leaving a digital trail so murky that security teams won’t know who, or what, pulled the trigger.
Nowhere is that risk more apparent than in account takeover (ATO) fraud. ATO became one of the fastest-growing security threats in 2024, surpassing ransomware as the top enterprise security concern, with 83 percent of organizations experiencing at least one incident, according to Sift.
Losses from ATO fraud are projected to have climbed to $17 billion in 2025 from $13 billion the previous year. That growth is being driven by malicious bot activity, infostealer malware, and increasingly sophisticated AI-driven techniques, foreshadowing the added exposure that agentic AI adoption brings.
As Miguel Fornes, Information Security Manager at Surfshark, explained in an interview with CX Today, the leap from chatbots to agentic AI is transformative from a security perspective.
“The main difference is that this critical leap comes from content to consequence. The chatbot makes a mistake and hallucinates… but the agentic AI, if it hallucinates, it can send the money to the wrong person, or it can simply just wipe everything on your computer.”
This transition from words to actions is what makes AI agents both powerful and potentially dangerous. The ability to interact with accounts, execute transactions, and manipulate systems transforms what used to be a largely informational risk into a consequential one.
Where cybercriminals once relied on social engineering to manipulate human agents, now they can exploit AI agents. As Ali Sarrafi, CEO of Kovant, told CX Today in an interview:
“Traditional cybersecurity problems have been about bugs in our software… We’re moving towards that these agents will become the new way of doing social engineering against your systems. That is the scary part.”
An agent that can reset credentials, change contact details, or authorize refunds becomes a high-value target. If an attacker can manipulate the inputs, the agent becomes an acceleration layer for fraud.
Sarrafi pointed to the nuances of context and access. Agents may perform the right actions, but in the wrong context:
“A lot of security problems with agents come from the fact that they’re actually performing the right action, but in the wrong context… If you have an agent that’s supposed to book you flights, if it gets access to everything else, it’s not doing the right job. It can mess up your information, mess up your database.”
Prompt injection, where an agent is tricked into executing malicious instructions hidden within a request, further complicates matters. “Unless the actual guardrails are outside the agent itself, you have a risk,” Sarrafi explained.
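To make that concrete, here is a minimal sketch of what a guardrail that lives outside the agent could look like: a policy layer that intercepts every tool call the model proposes and checks it against a per-session allowlist and hard limits before anything reaches a backend system. The tool names, limits, and session policy below are hypothetical, used only for illustration.

```python
# Hypothetical external guardrail: tool calls proposed by the model are
# validated here, outside the agent, before they reach any backend system.
from dataclasses import dataclass

# Tools this session may invoke, with hard parameter limits (illustrative values).
SESSION_POLICY = {
    "lookup_order_status": {},               # read-only, no limits needed
    "issue_refund": {"max_amount": 50.00},   # anything larger needs a human
}

@dataclass
class ToolCall:
    name: str
    args: dict

def enforce_guardrails(call: ToolCall) -> ToolCall:
    """Reject any tool call outside the session's allowlist or limits.

    Because this check runs outside the model, a prompt-injected
    instruction cannot talk its way past it.
    """
    policy = SESSION_POLICY.get(call.name)
    if policy is None:
        raise PermissionError(f"Tool '{call.name}' is not permitted in this session")
    max_amount = policy.get("max_amount")
    if max_amount is not None and call.args.get("amount", 0) > max_amount:
        raise PermissionError(
            f"'{call.name}' exceeds the {max_amount:.2f} limit; escalate to a human"
        )
    return call

# Example: an injected prompt convinces the model to request an oversized refund.
# The guardrail, not the model, is what stops it.
proposed = ToolCall(name="issue_refund", args={"order_id": "A-1", "amount": 900.0})
try:
    enforce_guardrails(proposed)
except PermissionError as err:
    print("Blocked:", err)
```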
A recent security flaw identified in Moltbook, a social network built for AI agents, exposed millions of API authentication tokens, email addresses, and private messages because its backend database was misconfigured and left open to the internet. Researchers found that the exposed credentials would have allowed an attacker to take over any AI agent’s identity on Moltbook, demonstrating how agent identities, if not properly protected, can become vectors for account hijacking and unauthorized actions.
Beyond the Breach: Why Data Aggregation Matters
The risk comes as many organizations are rushing to deploy these capabilities with the same loose security architectures they used for informational bots. The prompt box is evolving from a conversational interface to a command line, introducing failure modes that traditional contact center security stacks aren’t built to catch, at a time when attackers are getting better at impersonation.
Prompt Injection & Tool Abuse: Bad actors are no longer trying to trick a bot into saying something offensive; they are trying to trick it into executing tools. A well-crafted prompt could convince an agent to bypass authentication steps or process a refund outside of policy limits, effectively social engineering the software.
Over-Permissioned Integrations: In the race to introduce AI features, developers often grant the model broad access to the CRM or billing system rather than scoped, least-privilege access. If an agent only needs to read a balance, but the API token allows it to edit the balance, a compromised or confused agent becomes a dangerous insider threat.
The “Black Box” Audit Trail: When a human agent commits fraud, there is a clear audit trail: User ID 123 clicked button X at time Y. When an AI agent takes an action, the log often just shows that the “System” executed a command. If the model’s reasoning isn’t logged alongside the action, security teams have no way to reconstruct why the agent decided a fraudulent request was legitimate.
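One way to close that gap, sketched below with assumed field names rather than any standard schema, is to write a structured audit record for every agent-initiated action: which agent acted, on whose behalf, the exact tool call, and the model’s stated rationale, so investigators can later reconstruct why the agent judged a request legitimate.

```python
# Illustrative structured audit record for an agent-initiated action.
# Logging the agent's rationale alongside the action lets investigators
# reconstruct why a fraudulent request looked legitimate.
import json
import uuid
from datetime import datetime, timezone

def log_agent_action(customer_id: str, agent_id: str, tool: str,
                     args: dict, rationale: str, approved_by: str | None) -> dict:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"type": "ai_agent", "agent_id": agent_id},
        "on_behalf_of": customer_id,
        "action": {"tool": tool, "args": args},
        "model_rationale": rationale,   # why the agent chose this action
        "human_approval": approved_by,  # None if fully automated
    }
    print(json.dumps(record))           # in practice, ship to the SIEM / log pipeline
    return record

log_agent_action(
    customer_id="cust-884",
    agent_id="cx-refund-agent-v2",
    tool="issue_refund",
    args={"order_id": "A-1", "amount": 18.50},
    rationale="Customer reported item arrived damaged; within refund policy.",
    approved_by=None,
)
```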
And the danger isn’t limited to malicious hackers or rogue AI. Ron Zayas, CEO of Ironwall by Incogni, pointed out in a recent interview with CX Today that even without a data breach, the aggregation of sensitive data across platforms can erode trust and expose customers to harm.
Zayas emphasized that risk compounds over time as fragmented pieces of personal information are aggregated:
“The more information you give out… even if you’re just giving tiny pieces in different places, it’s being aggregated. And understand that it’s cumulative… When you put that together with other pieces of information that are out there, it can make a non-breach into a breach.”
The implications for customer experience are significant. A single misstep, whether through careless sharing of data or over-permissioned AI agents, can damage loyalty irreparably.
Secure-by-Design: The New CX Mandate
Addressing these risks requires rethinking security for agentic AI from the ground up. CX leaders and CISOs need to collaborate on a “secure-by-design” framework.
Sarrafi noted that governance should be embedded in agentic AI:
“Agents need to be governance-ready by design… You need to babysit these agents. They still need human oversight.”
Fornes added that enterprises must understand the precise scope of access granted to agents:
“First, you need to deeply understand what data, processes and context you are allowing your agents… Things like their code base, source code, personal information of the customers… I would strongly recommend to not give full access to agents without understanding the consequences.”
A mature architecture for transactional AI requires three layers of defense, sketched in code after the list below:
- Scoped Permissions: Agents should never have admin access. They require scoped tokens that limit them to the exact functions needed for a specific intent.
- Step-Up Authentication: Just because a user is logged in doesn’t mean the agent should process a high-risk action. “Step-up” challenges like two-factor authentication or biometric checks should be triggered dynamically when an agent attempts sensitive workflows.
- Human-in-the-Loop Gating: For high-value transactions, the AI agent should prepare the action but require human approval to execute it.
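Here is a condensed sketch of how those three layers might fit together around a single agent action. The scope names, risk thresholds, and the AgentSession stub are assumptions for illustration; a production deployment would wire these checks into its own IAM, MFA, and case-management systems.

```python
# Sketch of the three defense layers around a single agent action.
# Scope names, thresholds, and the AgentSession stub are hypothetical.
from dataclasses import dataclass

HIGH_RISK_TOOLS = {"change_contact_details", "issue_refund", "reset_credentials"}
HUMAN_APPROVAL_THRESHOLD = 250.00   # refunds above this wait for a person

@dataclass
class AgentSession:
    token_scopes: set                    # scopes granted for this intent only
    step_up_verified: bool = False

    def request_step_up(self):           # e.g. push an OTP or biometric challenge
        print("Step-up challenge sent to customer")

    def queue_for_human_review(self, tool, args):
        print(f"Queued for human approval: {tool} {args}")

    def call_backend(self, tool, args):  # real systems would call the CRM / billing API
        return {"status": "executed", "tool": tool}

def execute_agent_action(session: AgentSession, tool: str, args: dict):
    # 1. Scoped permissions: the token never carries admin access, only the
    #    functions needed for the customer's stated intent.
    if tool not in session.token_scopes:
        raise PermissionError(f"Token not scoped for '{tool}'")

    # 2. Step-up authentication: being logged in is not enough for high-risk
    #    actions; trigger 2FA or a biometric check dynamically.
    if tool in HIGH_RISK_TOOLS and not session.step_up_verified:
        session.request_step_up()
        return {"status": "pending_step_up"}

    # 3. Human-in-the-loop gating: the agent prepares high-value transactions,
    #    but a human approves before anything executes.
    if tool == "issue_refund" and args.get("amount", 0) > HUMAN_APPROVAL_THRESHOLD:
        session.queue_for_human_review(tool, args)
        return {"status": "pending_human_approval"}

    return session.call_backend(tool, args)

session = AgentSession(token_scopes={"issue_refund"}, step_up_verified=True)
print(execute_agent_action(session, "issue_refund", {"order_id": "A-1", "amount": 600.0}))
```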
This approach balances innovation with safety, ensuring that AI agents enhance customer experience without opening up organizations to operational errors or fraud.
On top of securing their own agent deployments, enterprises must also prepare for AI customers—autonomous agents acting on behalf of human users. Contact center infrastructures built around human behaviors struggle the moment an AI interacts with them: knowledge-based authentication fails because AI has perfect recall, fraud detection tools flag legitimate interactions as synthetic, and voice-response systems misinterpret machine speech as malicious activity.
To adapt, organizations will need to rethink trust, moving from authenticating human memory to cryptographically validating permissions and delegation tokens, auditing what AI is allowed to do, and redesigning workflows so that enterprises can serve agents securely without eroding customer trust.
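As a rough illustration of what cryptographically validating delegation could involve, the sketch below verifies a signed token presented by a customer’s AI agent: the signature proves a human issued it, and the embedded scopes and expiry bound what that agent may do. The token format and field names are assumptions; real deployments would more likely build on established standards such as signed JWTs or verifiable credentials.

```python
# Hypothetical delegation token check: the enterprise verifies that an AI
# "customer" really was authorized by a human, and for which actions.
import base64, hashlib, hmac, json, time

SHARED_SECRET = b"demo-secret"   # illustration only; real systems would use asymmetric keys

def sign(payload_b64: bytes) -> bytes:
    digest = hmac.new(SHARED_SECRET, payload_b64, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest)

def verify_delegation(token: str, requested_action: str) -> dict:
    payload_b64, signature_b64 = token.encode().split(b".")

    # 1. Validate the signature instead of trusting what the agent claims.
    if not hmac.compare_digest(sign(payload_b64), signature_b64):
        raise PermissionError("Invalid delegation signature")

    claims = json.loads(base64.urlsafe_b64decode(payload_b64))

    # 2. Check expiry and scope: what did the human delegate, and until when?
    if claims["expires_at"] < time.time():
        raise PermissionError("Delegation expired")
    if requested_action not in claims["allowed_actions"]:
        raise PermissionError(f"'{requested_action}' was not delegated by the customer")
    return claims

# Issue a demo token (in practice this happens on the customer's identity side).
claims = {"customer_id": "cust-884", "agent_id": "shopper-bot",
          "allowed_actions": ["check_order_status"], "expires_at": time.time() + 3600}
payload_b64 = base64.urlsafe_b64encode(json.dumps(claims).encode())
token = (payload_b64 + b"." + sign(payload_b64)).decode()

print(verify_delegation(token, "check_order_status")["customer_id"])   # permitted
```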
The Operational Tradeoff: Fraud Controls vs. CX Friction
For years, the goal for CX has been “frictionless” experiences. However, preventing agent-driven fraud requires reintroducing intentional friction. This creates a direct conflict with operational metrics. Implementing strict guardrails can slow down service, while lax controls increase risk. Adding a 2FA step to an AI interaction increases AHT and may lower customer satisfaction (CSAT) for legitimate users, but it is necessary to stop account takeovers.
Fornes likened responsible agentic AI adoption to managing a “super talented, strange, reckless genius,” leveraging AI’s power while retaining supervision.
As enterprises hand the keys to backend systems over to AI, the definition of a “successful” interaction changes from whether the customer is happy to whether the action they requested was safe, authorized, and auditable.
As Fornes put it:
“The best shield against this type of technology will always be caution and skepticism.”
The proliferation of agentic AI is inevitable, and enterprises need to move quickly to safeguard trust. This requires technical controls, policy frameworks and education, as well as a mindset shift: trust is earned, not assumed, and even systems designed to delight can cause harm if left unchecked.