OpenAI Warns Businesses to Phase Out Voice-Based Authentication, Stalls the Release of Its Voice Cloning Tool

The ChatGPT innovator cautions against voice-based ID&V methods for “accessing bank accounts and other sensitive information”


Published: April 2, 2024

Charlie Mitchell

OpenAI has put companies on red alert, warning them that it’s time to move on from voice authentication amid the development of its Voice Engine solution.

Voice Engine is a voice cloning tool that generates natural-sounding speech that “closely resembles” the original speaker.

Currently, it’s available only in a limited preview, as OpenAI has chosen to stall the general release in a bid to “bolster societal resilience” against increasingly convincing generative models.

In an unattributed blog post on its website, OpenAI dived deeper into the specifics, noting:

We encourage steps like phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information.

The large language model (LLM) pioneer also encouraged businesses to speed up the development and adoption of techniques for tracking the origin of audiovisual content.

Finally, OpenAI offered more general advice, including educating the public to better recognize “deceptive AI content” and exploring policies to protect the use of individuals’ voices in AI.

While such advice may set many alarm bells ringing, OpenAI aims to get ahead of the curve.

“It’s important that people around the world understand where this technology is headed, whether we ultimately deploy it widely ourselves or not,” continued the blog.

“We look forward to continuing to engage in conversations around the challenges and opportunities of synthetic voices with policymakers, researchers, developers, and creatives.”

With regard to customer experience, those opportunities are significant, as the early applications of Voice Engine – launched with “a small group of trusted partners” – suggest.

For instance, it provides reading assistance via a natural-sounding, emotive voice. Such an application may prove helpful in supporting vulnerable customers.

Then, there are other use cases, such as translating content and supporting people who are non-verbal, which could spark the imagination of CX innovators.

Beyond customer experience, applications like helping patients suffering from sudden or degenerative speech conditions recover their voices are also incredibly exciting.

Nevertheless, synthetic voice is already a rising threat, and OpenAI’s warning is a startling reminder of the danger it poses to businesses, let alone broader society.

Synthetic Voice: A Rising Threat

OpenAI’s Voice Engine announcement comes just two weeks after CX Today shared concerning new research on how fraudsters are leveraging synthetic voice to attack contact centers.

The study – carried out by Pindrop – uncovered that scammers are already using synthetic voices in combination with automation to bypass IVR authentication steps.

It also found that fraudsters had been able to alter customer email and home addresses using voice-based deepfakes, which opened up many opportunities for fraud.

For instance, the attacker could then access one-time passwords or – in the case of an attack on a bank – order new credit cards.

Finally, Pindrop found that some scammers have started using their voice bots to build a clone of the company’s IVR – likely as part of a broader scheme to trick customers.

Each example provides cause for concern and should inspire a conversation within all enterprises, including those that haven’t yet implemented voice authentication.

After all, as these fraudster ploys suggest, no business is exempt from the threat of synthetic voice.


