Generative and Conversational AI: Dream and Nightmare Deployments

Unpack examples of use cases, real-life deployments, and cautions for LLM-powered conversational AI

5
Sponsored Post
Generative and Conversational AI Dream and Nightmare Deployments - CX Today News
Speech AnalyticsInsights

Published: December 19, 2023

Charlie Mitchell

Use cases for large language models (LLMs) have grown significantly over recent years – from providing basic customer service to writing code and scripts and even creating content such as blogs and songs.

Indeed, Christoph Börner, Senior Director of Digital at Cyara, explains how he leveraged a LLM for inspiration when his band were struggling to come up with material for a new song.

“Within a few seconds, it had an intro, chorus, verses and a solo,” he noted.

Börner describes his surprise at how good the output was, noting how – with just a small bit of polishing and tweaking – they were able to include it in their set list.

Such capabilities of LLMs – such as GPT, PaLM and Falcon – have led to deployments of conversational AI skyrocketing across numerous industries and all stages of the customer journey.

For instance, in the analysis stage of customer journey building, organizations utilizing LLMs to promptly generate relevant customer contact reasons and queries is an exciting new use case.

Meanwhile, in the design phase, LLM applications have the capability to conduct the entire dialog management including conversation flows, lexicons, and even “personas” – which allow the bot to interact with customers in a specific style and manner.

As a final example, consider the training phase. There, a technician tasked with making sure a customer-facing bot can understand and respond to customers appropriately is able to use LLMs to auto-generate new and more appropriate training data for the bot.

Cyara, a company focused on supporting organizations in assuring and optimizing their entire customer experience (CX) environments including conversational AI channels, is at the forefront of several such use cases. Noting the latter, Börner said:

“We released an AI Data Wizard within Cyara Botium, which leverages a LLM to spot undertrained intents (the goal that a user has within the context of their conversation with a chatbot), and then – with a press of a button – generates new training phrases.”

Solutions like that enable brands to bring bots to the market quicker and – in the case of many dream deployments – come equipped with new, flashy features.

Examples of Dream Deployment Already in Place

First up, UK bank NatWest leveled up its virtual agent – “Cora” – with generative AI (GenAI), so it is able to answer particular customer questions without prior training.

Now known as Cora+, the bot plugs into trusted, secure, business-specific knowledge sources to send responses in a “natural, conversational style”.

Meanwhile, Cora+ also cites the source material for each of its responses, so customers can dive deeper into it if they wish.

Next, consider fashion retailer GAP, which implemented a similar solution, leveraging domain- and industry-specific language models.

In doing so, GAP claims an 84 percent auto-resolution rate, up from 50 percent before the GenAI augmentation and extending far beyond its target of 70 percent.

A final and excellent example is Pelago, a travel experience platform established by Singapore Airlines Group, which layered GenAI over its existing conversational flows.

As such, its bots can adjust their responses to the changing context of the conversation, resulting in more “personalized, near-human planning experiences” – as per Yellow.ai, Pelago’s tech partner.

Interestingly, in this example – and likely many others – Pelago has actually leveraged multiple LLMs to achieve the desired results.

Börner suggests that such innovation is indicative of businesses experimenting more with the LLMs they utilize for conversational AI projects. He continued:

“Most of our clients are utilizing hybrid models. This combines foundational models like GPT, trained on data from the internet for example, and finetunes it with real critical business cases.”

Such thinking reduces the chances of inaccurate answers or even “hallucinations” – a term that, in this context, refers to when AI algorithms produce outputs that are not baked on training data or don’t follow any identifiable or logical pattern.

Yet, as these businesses begin to dream bigger with their use of GenAI, there is much more to consider.

Watch Out for the Nightmare Scenarios

Despite all the success stories of generative and conversational AI, there is an elephant in the room: data generated by LLMs introduces significant new risks and challenges that businesses must prepare for and address.

Data privacy, misuse, bias, copyright, and cybersecurity concerns are critical security and reputational hazards to consider as new GenAI use cases surface.

Noting this in a recent CX predictions piece, Rebecca Wetteman, CEO & Principal Analyst at Valoir, forecasted the “spectacular” failure of AI.

“Lack of mature technology, adequate policies and procedures, training, and safeguards are creating a perfect storm for AI accidents far more dramatic than just hallucinations.

“Expect public fails, lawsuits, and effective shake-ups of technology vendors and AI adopters when things go awry.”

Take cybersecurity as a prime example of where things may go “awry”. While vendors of foundational GenAI models claim to train their LLMs in fending off social engineering attacks, they typically don’t equip users with the necessary tools to thoroughly audit the applied security controls and measures.

For this, teams focused on providing the best CX turn to Cyara to plug the gap, and this is precisely where Cyara excels.

Cyara as the Dreamcatcher

To protect against all data privacy, misuse, bias, copyright, hallucination, and cybersecurity concerns, Cyara has introduced specific and targeted testing and optimization features to its Botium platform, which already offers value at every stage of the entire conversational AI lifecycle from bot vendor selection, training and testing to deployment and monitoring  – and they’re proving popular.

As Börner added:

“Misuse testing is a requirement from almost all of our customers and this shows the importance of it in today’s business environment.”

Indeed, Cyara has been working hard to keep pace with the surging use of LLM-augmented AI and is now well ahead of the curve – especially after its recent acquisition of QBox.

QBox provides unparalleled visibility into the impact of changes or additions to a conversational AI model – including GenAI augmentations – in training and beyond.

Moreover, Cyara doesn’t just work with businesses to assure their conversational AI and broader contact center deployments… it also acts as their CX transformation partner, providing guidance and support along every step of the journey.

As such, Cyara enables clients to introduce generative AI into their contact centers responsibly, and with much greater confidence.

To discover more about Cyara’s philosophy and the extensive capabilities of the Botium platform, visit: cyara.com

Artificial IntelligenceCCaaSChatbotsChatGPTConversational AIGenerative AIWorkforce Optimization
Featured

Share This Post