New conversational AI option reduces response lag to less than 300 milliseconds
Deepgram, which bills itself as the only true end-to-end deep learning ASR provider, announced that it has added three capabilities to its automated speech recognition (ASR) platform, which is invoked via a set of application programming interfaces (APIs).
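As a rough illustration of what API-driven ASR looks like in practice, the sketch below builds (but does not send) an HTTP request that submits audio for transcription. The endpoint URL, the `punctuate` option, and the token scheme are assumptions chosen for illustration, not details taken from Deepgram's published API reference.

```python
import urllib.parse
import urllib.request

API_URL = "https://api.example.com/v1/listen"  # hypothetical ASR endpoint
API_KEY = "YOUR_API_KEY"                       # placeholder credential

def build_transcription_request(audio_bytes: bytes,
                                punctuate: bool = True) -> urllib.request.Request:
    """Build (but do not send) a POST request submitting raw audio for
    transcription; many hosted ASR APIs follow this general shape."""
    query = urllib.parse.urlencode({"punctuate": str(punctuate).lower()})
    return urllib.request.Request(
        url=f"{API_URL}?{query}",
        data=audio_bytes,                      # raw audio payload
        method="POST",
        headers={
            "Authorization": f"Token {API_KEY}",
            "Content-Type": "audio/wav",
        },
    )

req = build_transcription_request(b"\x00" * 16)
print(req.get_method(), req.full_url)
```

Sending the prepared request with `urllib.request.urlopen(req)` would return a JSON response containing the transcript, assuming a real endpoint and valid key.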
The company is adding a conversational AI option that reduces response lag to less than 300 milliseconds. This will make it easier for organisations to deploy applications that enable humans to have highly interactive conversations with a machine, in use cases such as virtual agents handling billing, support, sales, and compliance tasks.
The capability will also simplify tuning a base model for conversational AI to address a wider range of use cases in domains that employ unique jargon and nomenclature. The word error rate has dropped by up to 50 percent, while noise is filtered and crosstalk reduced so that key terms and phrases are easily understood.
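For context on the 50 percent figure: word error rate (WER) is the standard ASR accuracy metric, defined as the word-level edit distance (substitutions + deletions + insertions) between a hypothesis transcript and a reference transcript, divided by the number of reference words. A minimal sketch of the standard computation (not Deepgram's own scoring code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance:
    (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i reference words
    # and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five reference words -> WER of 0.2
print(wer("please pay the billing balance", "please pay the building balance"))
```

Halving WER, as claimed above, would mean e.g. going from one error in every ten words to one in every twenty.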
Scott Stephenson, co-founder and CEO of Deepgram, said:
“The promise of AI has consistently been to solve problems and deliver experiences in ways that are faster, cheaper, and better. At Deepgram, AI is in our DNA. We’re an automatic speech recognition company focused on solving the inherent challenges of speech technology and powering the next generation of voice-enabled applications.”
Besides the conversational AI capability, Deepgram is adding a module that enables sales and support interactions to generate offers and alerts in real time. The third capability will support data processing and create transcripts in near real time.
Stephenson explained that with lag times of less than 300 milliseconds and transcription accuracy rates above 90%, conversational AI is now entering a new phase.
“Rather than being used primarily by enterprises as a tool to reduce customer support costs, conversational AI will be employed more to proactively engage customers, thanks in part to cloud services being able to scale more affordably”, added Stephenson.