Real-Time Fraud vs Real-Time CX: The Security Tradeoffs Behind Seamless Journeys

As account takeover and social engineering surge, enterprises are learning that reconciling speed and safety demands more than technology alone


Published: March 31, 2026

Nicole Willing

Enterprise contact centers are where the quiet tug-of-war between security and customer experience plays out in milliseconds. On one side, fraud teams are tightening controls as account takeover rates climb and social engineering gets more convincing. On the other, CX leaders are under pressure to cut friction, shorten handle times, and keep satisfaction scores moving up.

In the middle sits the customer, more impatient than ever, more frequently targeted, and mostly unaware of how much is happening behind the scenes before an agent even answers a call.

Globally, fraud cost companies an average of 7.7 percent of annual revenue, roughly $534 billion, driven by scams, synthetic identities, and account takeovers, according to TransUnion's 2025 Global Fraud Trends Report.

At the same time, Salesforce’s State of the Connected Customer report found that 80 percent of customers say the experience a company provides matters as much as its products. And 75 percent of consumers say poor customer service changes their purchasing behaviors, according to a Zoom consumer study.

Organizations can’t afford to lose on either front. Can real-time intelligence make that balance achievable?


What Real-Time Risk Looks Like

In a contact center, fraud signals surface almost instantly. Real-time risk evaluation now means moving authentication to the “edge” of the network, using deterministic, invisible signals.

Before an agent even says hello, modern systems ping carrier networks to check whether the device a customer is using is familiar or jailbroken and whether its IP address has appeared in recent fraud clusters, alongside behavioral signals such as how the customer types, pauses, or navigates a call or chat. Multiple login attempts, rapid-fire requests, or unusual spikes in activity across channels in the past 24 or 72 hours can raise flags. And then there is interaction history: past calls, recent changes to account details, and prior disputes, layered with agents' notes.

Taken together, these form a live snapshot of intent, helping teams decide in the moment whether they’re dealing with a genuine customer or something that needs a closer look.
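As a rough, hypothetical sketch of how such signals might be folded into a single score, the Python below combines device, network, behavioral, and history inputs with illustrative weights. None of the field names, weights, or thresholds come from a specific vendor; production risk engines are typically model-driven and continuously re-tuned rather than hand-weighted like this.

```python
from dataclasses import dataclass

# Hypothetical session signals, loosely following the categories described above.
@dataclass
class SessionSignals:
    device_is_known: bool        # has this device been seen on the account before?
    device_is_jailbroken: bool   # device integrity check
    ip_in_fraud_cluster: bool    # IP seen in recent fraud clusters
    behavior_anomaly: float      # 0.0-1.0 anomaly score from typing/pause/navigation patterns
    failed_logins_24h: int       # velocity signals across channels
    requests_72h: int
    recent_account_changes: int  # interaction history: detail changes, disputes, agent notes
    open_disputes: int

def score_session(s: SessionSignals) -> float:
    """Combine deterministic and behavioral signals into an illustrative 0-100 risk score."""
    score = 0.0
    if not s.device_is_known:
        score += 20
    if s.device_is_jailbroken:
        score += 15
    if s.ip_in_fraud_cluster:
        score += 30
    score += 20 * s.behavior_anomaly
    score += min(s.failed_logins_24h * 5, 15)       # cap each velocity contribution
    score += min(s.requests_72h * 0.5, 10)
    score += min(s.recent_account_changes * 5, 10)
    score += min(s.open_disputes * 5, 10)
    return min(score, 100.0)
```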

But that decision is becoming more challenging as AI agents proliferate.

Knowledge-based authentication (KBA), which asks callers to provide a mother's maiden name, a first pet, or a recent transaction, is no longer a reliable account defense in an era of massive data breaches and sophisticated social engineering.

Trust frameworks need to take new approaches to determine who is at the other end of an interaction, Mary Ann Miller, VP Evangelist & Fraud Executive Advisor at Prove, told CX Today in a recent interview.

“It’s understanding if it’s a human or not, understanding if it’s an agent and if that agent is authorized, understanding if there are other ways that the AI can be, at the other end of that interaction, learning and adjusting to its attack vector. So we’re really in a different situation now.”

Fraudulent actors are already using AI voice cloning and deepfake videos to bypass selfie and identity verification. And AI agents can take this a step further by acting autonomously to run full-scale scams. They can probe for weak points, test verification flows, and refine their approach with each interaction, at speed and scale.

“We can use AI to look at a document to determine if that document is genuine or not. But we can also use AI to generate a fake document as well. So we’re really looking at an environment [where it’s] machine versus machine. And that puts us in a position to need to really look at controls and risk assess those almost on a weekly basis to see what’s working and what’s not,” Miller said.

The combination of context from customer relationship management (CRM) and customer data platforms (CDPs) with security telemetry separates modern risk-aware CX from basic authentication. It’s also what makes governance and architecture more complex.

How Risk Scoring Changes the Journey

When organizations successfully blend CRM and CDP context with real-time security signals, they can stop treating every customer like a suspect. Instead of applying one static policy, the risk engine assigns each session a score, low, elevated, or high, and that score reshapes the journey in real time.

A low-risk interaction, for example, a known device calling from a usual location to check a balance, can be routed seamlessly to a GenAI bot for instant resolution. But if the risk score spikes due to an anomaly, the system can trigger a step-up multi-factor authentication (MFA) prompt, a temporary limit on refunds, or immediate routing to a specialized fraud team.
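A minimal sketch of what that branching might look like, assuming the illustrative score from the earlier example and hypothetical thresholds (low under 30, elevated under 70, high otherwise) that are not drawn from any specific platform:

```python
def route_session(risk_score: float) -> dict:
    """Map an illustrative 0-100 risk score to a journey decision.

    Thresholds and actions are hypothetical; real engines tune them per
    channel and re-assess them continuously.
    """
    if risk_score < 30:
        # Low risk: known device, usual location, routine request.
        return {"route": "genai_bot", "step_up_mfa": False, "refunds_blocked": False}
    if risk_score < 70:
        # Elevated risk: add friction (step-up MFA, hold refunds) but keep the customer moving.
        return {"route": "agent_queue", "step_up_mfa": True, "refunds_blocked": True}
    # High risk: hand the session to a specialized fraud team immediately.
    return {"route": "fraud_team", "step_up_mfa": True, "refunds_blocked": True}
```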

The routing decision is where the impact on customer experience is most measurable. Organizations using risk-based routing report material reductions in average handle time (AHT) for legitimate customers, because lower-risk interactions are handled by self-service or streamlined authentication. The tradeoff is that elevated-risk routing increases AHT on those queues, and agents handling flagged calls require specialized training to investigate without tipping off a potential fraudulent actor, or, just as importantly, without treating an innocent customer like a criminal.

While US-based fintech Chime automates over 70 percent of its support, it draws a hard line when it comes to disputes and account takeovers. Janelle Sallenave, Chief Experience Officer at Chime, highlighted how this dynamic routing plays out in high-stakes financial scenarios.

Instead of fully automating the journey, Chime uses AI to do the heavy lifting of data collection, pulling device logs, interaction history, and transaction velocity, so that the human agent can focus entirely on the final decision.

“Bots can do those tasks. But that judgment moment of, ‘Wait, what story is this data telling me? Was there an account takeover?’ That’s where agents really add their value and their expertise.”

Technology can score risk in real time, but the hardest moments still land with human agents when something feels off, or when a legitimate customer has been flagged and is frustrated after repeated verification steps.

As more routine queries and obvious fraud attempts are filtered out upstream, what reaches an agent tends to be higher risk, more complex, and often more emotionally charged.

The Tension Between Data Governance and Privacy

To make these split-second risk decisions, AI and security systems require vast amounts of data. But collating behavioral profiles, interaction transcripts, and biometric data creates a massive target for the exact cybercriminals these systems are trying to stop.

It is also, depending on jurisdiction, legally precarious. GDPR, CCPA, and an expanding patchwork of biometric privacy laws impose obligations around consent, retention limits, and the right to explanation. Yet most fraud risk decisions are made in ways that are deliberately opaque, because transparency about scoring methodology is itself a fraud vector. If you tell customers exactly which behaviors trigger a high-risk flag, bad actors adapt.

George Korizis, Customer Strategy Partner at PwC, highlighted the new reality:

“In a world and age where… your likeness, your voice can be replicated at ease, I think it would be gross negligence to use certain biometric identifiers without additional security verification.”

Customers are increasingly aware of these risks, demanding transparency about how they are being profiled and what data is being kept. Brands that fail to clearly communicate their security measures risk eroding the trust they are trying to protect.

The middle ground is informing customers that behavioral signals are used to protect their accounts, without revealing the specific triggers.

The emerging best practice on retention is to purge session-level signals within hours, retain device-level signals for 30–90 days, and keep account-level risk history only for as long as fraud investigations require.
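Expressed as configuration, such a policy might look like the hypothetical sketch below; the exact windows are assumptions for illustration, not a published standard, and in practice they vary by jurisdiction and data type.

```python
from datetime import timedelta

# Hypothetical retention windows mirroring the practice described above.
RETENTION_POLICY = {
    "session_signals": timedelta(hours=12),   # purge raw session/behavioral telemetry within hours
    "device_signals": timedelta(days=90),     # device reputation kept for 30-90 days
    "account_risk_history": None,             # retained while fraud investigations remain open
}

def is_expired(signal_type: str, age: timedelta) -> bool:
    """Return True when a stored signal has outlived its retention window."""
    limit = RETENTION_POLICY.get(signal_type)
    return limit is not None and age > limit
```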

Why Proactive Testing Is Critical in the Age of AI Fraud

The “zero trust” security framework is often misunderstood as treating every customer with suspicion. In reality, modern contact centers can use zero trust architecture to verify customer interactions in the background, distinguishing between legitimate human or agentic customers and fraudulent actors the moment the conversation begins.

Miller advises enterprises to test their own environments with AI-driven fraud attacks and learn from any gaps they identify, an approach some US banks are already taking.

“That’s a really smart approach rather than waiting for some kind of volume in your environment to go up and not knowing ‘why is my IVR suddenly spiking or why are there transactions spiking in my card environment unauthorized’ and suddenly we’re getting a lot of disputes.”

“Don’t wait for the spikes, don’t wait for the attacks, actually test and learn and then start to form your strategy.”

By blending real-time risk signals with intelligent routing, brands can deliver on the promise of an experience that is as safe as it is effortless.
