Can You Fool Voice Biometrics?


Published: June 4, 2021

Anwesha Roy - UC Today


Voice biometrics is an emerging area of cybersecurity research, promising the same blend of stringent protection and convenience as facial recognition or fingerprint analysis. Between 2019 and 2027, the voice biometrics market is expected to grow at a CAGR of 23.6%, reaching nearly $5 billion by the end of the forecast period. But is voice biometrics really 100% safe? Or is it possible to fool it through human impersonation or deepfakes?

As it turns out, the answer isn't quite so simple.

Researchers Have Found Potential Vulnerabilities in Voice Biometrics

As we found out in 2017, shortly after Microsoft introduced its Windows Hello facial recognition feature, biometrics aren't always foolproof. A group of German security researchers managed to trick the authentication system simply by using a modified photo of the user! So it is reasonable to expect similar weaknesses in voice biometrics, at least in its early years, until it has been in the mainstream long enough for bugs to be found and the ecosystem hardened.

And you would be correct in this assumption. Research by the University of Eastern Finland found that voice biometrics are vulnerable to spoofing and can be fooled, but not through deepfakes as one might imagine. Artificial duplicates created by technical means, such as voice conversion and speech synthesis, are relatively easy to identify: AI algorithms are powerful enough to distinguish artificially generated audio from genuine human speech.

But the challenge arises with human impersonators – skilled professionals from the entertainment and other industries who have years of experience recreating the voice characteristics and speech behaviour patterns of other individuals. Because this category of impersonated voice also has human origins, it can be difficult to tell the two apart.

The study specifically found a vulnerability when impersonators mimicked a child's voice; you can read the full study here.

How to Counter the Risk of Voice Biometrics Fraud

While it is technically possible to fool voice biometrics, it remains among the most secure authentication systems available today, especially when used alongside other authentication mechanisms like OTPs or knowledge-based authentication.

This can be further strengthened by:  

  • Authenticating continually, not just at the start of a session – the system re-verifies the caller’s identity every one to two minutes, making it far harder to fool every time
  • Checking for signs of “liveness” – the system checks for the actual presence of the relevant body part, such as assessing pressure when analysing fingerprints, or listening for natural human pauses, ambient sound, etc., when verifying voice

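The two measures above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the article names no specific product or API, so `matches_voiceprint` and `passes_liveness` below are placeholder stand-ins for a real voice-biometrics engine, and the session data is simulated.

```python
def matches_voiceprint(audio_chunk, enrolled_print):
    # Placeholder identity check; a real engine would compare
    # speaker embeddings, not raw strings.
    return audio_chunk == enrolled_print

def passes_liveness(audio_chunk):
    # Placeholder liveness check; a real engine would look for
    # natural pauses, ambient sound, and other signs of a live speaker.
    return audio_chunk is not None

def continuous_authenticate(audio_stream, enrolled_print):
    """Re-verify the caller on every audio chunk, not just once at login."""
    for chunk in audio_stream:
        if not (matches_voiceprint(chunk, enrolled_print)
                and passes_liveness(chunk)):
            return False  # drop the call or escalate to another factor
        # In production the system would wait one to two minutes
        # between checks; the delay is omitted so the sketch runs instantly.
    return True

# Simulated session: three chunks from the genuine caller, then an impostor.
session = ["alice", "alice", "alice", "mallory"]
print(continuous_authenticate(session, "alice"))  # prints False
```

The design point is simply that a fraudster must now sustain a convincing impersonation, and convincing liveness cues, for the entire call rather than for a single enrolment phrase.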
These two measures make voice biometrics safer by making it more difficult to mimic every aspect of the authentication experience. Apple is even in talks to develop an ultrasonic voice biometric system, in which ultrasonic sensors are used to verify liveness and authenticate via voice.

Apart from the risk of being fooled, companies should also consider the privacy risks around collecting, storing and utilising voice data, as well as the potential for bias when training voice AI algorithms.



Tags: Artificial Intelligence, Fraud, Security and Compliance
