Deepfakes Are Entering the Talent Pool, Putting Customer Trust at Risk

How AI-driven hiring fraud is exposing new risks for customer experience leaders

Published: February 2, 2026

Nicole Willing

Customer experience leaders tend to think about trust in terms of customers—how it’s earned, how it’s lost and how fragile it has become as AI use becomes more widespread. Increasingly, that same trust challenge is showing up much earlier in the value chain, inside the hiring process itself.

Survey data from the Institute for Corporate Productivity (i4cp) highlights how deepfakes and synthetic identities are showing up in hiring processes in ways that directly affect customer-facing operations. Along with a recent incident at an AI security startup, where even a seasoned CISO was nearly fooled by a deepfake candidate, the findings point to a reality CX leaders can no longer delegate solely to HR or security teams.

Hiring risk has become customer risk.

Cautionary Tale from a Security Startup

In a survey of talent acquisition executives, i4cp found that 59 percent are concerned about identity fraud or impersonation via deepfake video or audio. Lorrie Lykins, the institute's VP of research, stated:

“Deepfaking and identity fraud are chief among the concerns of organizations when it comes to the use of AI in the hiring process.”

That concern is well-founded.

If any organization might be expected to spot a deepfake quickly, it would be an AI security firm that specializes in threat modeling. Yet that is exactly what makes the recent experience of Expel co-founder and CEO Jason Rebholz so compelling.

After Rebholz posted a security researcher vacancy on LinkedIn, a new connection introduced a candidate they claimed to have worked with previously and sent a link to a resume hosted on Vercel, an app-building platform that integrates with AI tools. The choice of host raised suspicion that the resume had been generated with Claude Code, though that alone wouldn't be out of the ordinary for a developer, Rebholz told The Register.

The connection’s urgency to schedule an interview raised a red flag, and when the candidate joined the video call with their camera off before switching it on with a virtual background, the warning signs became harder to ignore. The candidate’s face appeared “blurry and plastic,” with visual artifacts that seemed to flicker in and out. Even so, Rebholz hesitated.

“What if I’m wrong? Even though I’m 95 percent sure I’m right here, what if I’m wrong and I’m impacting another human’s ability to get a job? That was literally the dialog that was going on in my head, even though I knew it was a deepfake the whole time.”

Rebholz continued the interview and then sent clips from the video to fraud detection firm Moveris, whose deepfake detection technology confirmed the deception. As Rebholz put it:

“It’s one of the most common discussion points that pops up in the CISO groups I’m in. I did not think it was going to happen to me, but here we are.”

The incident highlights that deepfakes exploit social norms, empathy and the fear of making the wrong call as much as they exploit technical blind spots.

Deepfake Job Candidates Put Customer Experience at Risk

It also underscores the risk to customer experience teams, which sit at a unique intersection of trust, access and scale.

Customer-facing roles, including contact center agents, onboarding specialists, support engineers, and trust and safety teams, often rely on remote hiring, global talent pools and rapid scaling. That combination makes these functions especially attractive targets for fraudsters seeking access to customer data or internal processes.

A single compromised hire can have an outsized impact, exposing customer data, manipulating service interactions, or causing damage to a brand, potentially with regulatory fallout.

Over half (54 percent) of respondents to the i4cp survey reported encountering candidates in video interviews whom they suspected of using AI tools to answer questions or complete technical challenges. Despite that, only 17 percent reported increasing the use of in-person interviews in response to concerns about AI-related fraud.

Survey respondents noted that industries handling sensitive data, such as financial services, healthcare, defense, and infrastructure, are more cautious in AI adoption because of these risks. Customer-facing functions within less regulated sectors may not have the same guardrails, but the customer impact of a breach can be just as severe.

The survey revealed ambiguity in organizations’ policies around candidates using AI. Forty-one percent of organizations have no official stance. Another 29 percent encourage ethical use but worry about misuse, while only 26 percent clearly welcome AI use and provide guidelines.

There’s also a human dimension. Reliance on AI during interviews can obscure the qualities that CX leaders value most, such as empathy, judgment, adaptability and real-time problem-solving. When candidates lean on AI-generated responses, hiring managers risk optimizing for polish rather than presence.

Customer Trust Starts at the Hiring Stage

For customer experience leaders dealing with the growing threat of voice and video AI fraud, working closely with HR departments will be key to mitigating the risk of deepfakes infiltrating the hiring process.

Clear guardrails and intentional friction can help to flush out fake candidates. Hiring for customer-facing functions should have firm non-negotiables around live, unaided interaction, especially in roles where real-time judgment and empathy are essential.

Interviews should include visible verification steps, such as mandatory cameras, no virtual backgrounds and spontaneous questions that test a candidate’s presence and comprehension in real time rather than rewarding polished, pre-prepared responses.

Because resumes are no longer reliable signals of authenticity, i4cp recommends cross-checking identity and work history and exploring platforms that verify user presence and perform live identification checks, since verification can stop fraud before it reaches customers. Recruiters and hiring managers also need support to overcome the social discomfort of challenging suspicious candidates.

Deepfakes in hiring are no longer a future problem or a niche security issue. They are a present-day operational risk with direct implications for customer experience and brand credibility. At a time when seeing and hearing are no longer believing, hiring has become one of the most vulnerable moments in the CX lifecycle.
