Fraudsters Are Targeting Contact Centers with Deepfakes. Here’s How.

With contact center authentication systems at risk, how can you protect against deepfakes?


Published: March 4, 2024

Rhys Fisher

Pindrop analysis has identified the four ways that attackers are using deepfakes to target contact centers.

While to most people the idea of deepfakes relates to counterfeit celebrity photos and videos, the use of synthetic voice is a growing concern for contact center security.

Deepfake voice works by using AI to clone a person’s voice, with recent advancements in generative AI making it possible to emulate a voice’s tone and cadence to an alarmingly accurate degree.

With the proliferation of homemade videos across social media channels like Instagram, Facebook, and especially TikTok, it is becoming incredibly easy for fraudsters and scammers to find examples of customers’ voices through basic internet searches.

In a study of a selection of its clients, Pindrop – a specialist in audio traffic monitoring – detected and analyzed a number of calls with “low liveness scores” and determined that they were made with synthetically generated voices, underscoring just how prevalent contact center deepfake attacks already are.

But what exact tactics are fraudsters using to target contact centers? And how can companies identify and prevent them?

Pindrop analyzed all of the synthetic calls that its customers received and outlined the following four patterns:

1. Deepfake Voice Is Not Limited to Duping Authentication

Although deepfake technology allows a bad actor to conduct a full conversation in a cloned voice, most scam calls were actually far more simplistic, using synthetic voice to map the IVR navigation and steal basic account details.

Armed with this information, fraudsters would then make the calls themselves, reverting back to traditional social engineering methods.

2. Deepfake Voice Is Helping Attackers Bypass IVR Authentication

If the above example was akin to sending a single scout to survey the lay of the land, this one is a full-blown reconnaissance mission.

Synthetic voices were able to completely bypass the IVR authentication steps, allowing fraudsters access to more sensitive information, such as bank/account balances, which could be used to identify which customers were worth targeting further.

While these sorts of tactics have been around for some time, the combination of deepfake voice and automation means that scammers can operate at a far higher scale.

3. Deepfake Voice Is Helping Attackers Alter Account Details

Here, another classic weapon in the fraudsters’ arsenal has been enhanced with deepfake technology – like when Johnny Cash took the Nine Inch Nails song ‘Hurt’ to an entirely different level, but, you know, with his legitimate voice.

By emulating customers’ voices, scammers were able to alter email and home addresses, opening up a number of fraud opportunities, including accessing one-time passwords and ordering new bank cards.

4. Deepfake Voice Is Helping Attackers Mimic IVRs

In arguably the most innovative fraud strategy, some deepfake voice recordings revealed that scam callers were using their own voicebots to mimic IVRs.

Rather than attempting to answer any of the automated prompts, the caller was just repeating them back to the IVR.

Pindrop kept a record of these calls and discovered that following the initial mimicking, contact centers received similar calls, but this time, the caller repeated the prompts in the cloned voice of the IVR.

It is clear that this was the first step in a future fraud scheme that would look to emulate the contact center’s customer service line.
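A defender could look for this echo pattern by comparing each caller turn against the IVR prompt that preceded it. The sketch below is a minimal illustration of that idea using simple transcript similarity – the function name, threshold, and approach are assumptions for demonstration, not Pindrop’s actual detection method:

```python
from difflib import SequenceMatcher

def is_prompt_echo(prompt: str, caller_utterance: str, threshold: float = 0.8) -> bool:
    """Flag a caller turn that merely repeats the IVR prompt back.

    Uses a character-level similarity ratio on normalized text; a production
    system would compare ASR transcripts with far more robust normalization
    and likely audio-level features as well.
    """
    def norm(s: str) -> str:
        return " ".join(s.lower().split())

    ratio = SequenceMatcher(None, norm(prompt), norm(caller_utterance)).ratio()
    return ratio >= threshold

# Example: the caller parrots the prompt instead of answering it.
prompt = "Please say your date of birth after the tone."
print(is_prompt_echo(prompt, "please say your date of birth after the tone"))  # True
print(is_prompt_echo(prompt, "March the fourth, nineteen eighty-five"))        # False
```

A real deployment would also track repeated echoes across calls, since – as Pindrop observed – the mimicking escalates over successive attempts.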

Fighting the Fraudsters

With fraudsters taking advantage of AI technological advancements to develop more creative and sophisticated methods of targeting customers, an organization’s safety solution must be just as innovative.

For Pindrop, liveness detection – a biometric that can recognize and verify if a voice is live or is recorded/synthesized – is the most efficient and effective way of protecting against these attacks.

More specifically, liveness detection should be integrated alongside a multifactor authentication (MFA) process.

The Pindrop Passport uses seven factors to identify the authenticity of a call – providing users with scores based on the likelihood of the voice being genuine.
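A multifactor scoring pipeline of this kind can be sketched as a weighted combination of per-factor authenticity scores. The factor names and weights below are purely illustrative assumptions, not Pindrop Passport’s actual seven factors:

```python
def call_risk_score(factor_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-factor authenticity scores (0 = likely fraudulent,
    1 = likely genuine) into one weighted score.

    Factor names and weights are hypothetical, chosen only to show how a
    low liveness score can drag down an otherwise convincing call.
    """
    total_weight = sum(weights.values())
    return sum(factor_scores[f] * weights[f] for f in weights) / total_weight

# Illustrative factors: liveness is weighted most heavily, so a synthetic
# voice with a strong biometric match still scores poorly overall.
weights = {"liveness": 3.0, "voice_match": 2.0, "device": 1.0, "behavior": 1.0}
scores = {"liveness": 0.15, "voice_match": 0.9, "device": 0.8, "behavior": 0.7}

print(f"{call_risk_score(scores, weights):.2f}")  # 0.54 – below a typical accept threshold
```

The design point is that liveness acts as a gate: a deepfake can fool a voice-match factor, but a heavily weighted liveness signal keeps the combined score low enough to trigger step-up authentication.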

While Pindrop’s offerings will undoubtedly improve a company’s ability to detect deepfakes, businesses may want to implement a broader, overarching governance and compliance framework.

This was a point raised by Avivah Litan, VP Analyst at Gartner, when warning companies about the risks of using GenAI.

Litan emphasizes the need for transparent policies to prevent staff from asking questions that could expose sensitive business or personal data, stating:

Organizations should monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations.

She also suggests employing firewalls to restrict user access, as well as creating and storing engineered prompts as immutable assets to protect sensitive data used in third-party infrastructure – allowing for safe use, sharing, or sale.

Whether companies look to invest in a liveness detection tool or aim for more general staff training and education around the threat that deepfakes pose, it is clear that as GenAI’s capabilities continue to grow, combating synthetic voices will become vital to contact center security.



