Ping Wu, CEO of Cresta, has pinpointed three common misconceptions contact centers have before implementing large language models (LLMs).
LLMs power generative AI (GenAI) use cases in the contact center, enterprise, and beyond.
Most people will have used an LLM when experimenting with ChatGPT or Gemini. However, this first-hand experience has led to widespread misunderstandings of how contact centers – and the broader business – can leverage the technology.
As noted, Wu – who co-founded Google’s Contact Center AI Solution in 2017 – filtered these down into three central misconceptions.
First, people often think LLMs are just end-to-end text generation machines. You ask a question, and it gives an answer. However, it’s more complex.
“There are two parts: comprehension and action (decoding),” explained Wu.
In many business contexts, LLMs are more useful for comprehension, understanding the user’s intent, while the output action is guided by business logic.
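That split can be sketched in code. In the toy example below, the "comprehension" step is a keyword-based stand-in for an LLM intent classifier (a real system would call a model API), while the action is chosen by plain business logic – the model never invents policy. All function and intent names here are hypothetical.

```python
# Comprehension side: a stand-in for LLM intent classification.
# A production system would replace this with a model call.
INTENTS = {
    "refund": ["refund", "money back", "return"],
    "cancel": ["cancel", "close my account"],
    "billing": ["invoice", "charge", "bill"],
}

def classify_intent(utterance: str) -> str:
    """Map a customer utterance to a known intent label."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "unknown"

# Action side: deterministic business rules, not generated text.
ACTIONS = {
    "refund": "route_to_refund_workflow",
    "cancel": "route_to_retention_team",
    "billing": "open_billing_case",
    "unknown": "escalate_to_human_agent",
}

def handle(utterance: str) -> str:
    """Comprehend with the (stand-in) LLM, then act via business logic."""
    return ACTIONS[classify_intent(utterance)]
```

The key design point is that the LLM's output is constrained to a small label set, and everything the customer actually experiences is decided by auditable rules.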
Secondly, many customer service leaders consider LLMs as being limited to question-answering.
Yet, as Wu suggests: “They are also very strong in synthesis, extracting key concepts from large amounts of text.” Case summarization is a mature example of this in the contact center.
Lastly, people often apply intuitions about human intelligence to LLMs, yet there are differences that business leaders must consider. Indeed, some tasks that are hard for humans are easy for LLMs, and vice versa.
Sharing an example, Wu stated: “LLMs can pass advanced placement biology exams but may get simple customer support questions wrong if not properly guided.”
After running through these three misconceptions, Wu discussed how contact centers can better guide LLM outputs, shared examples of GenAI done well, and covered much more in an interview with CX Today.
The interview is part of our 2024 CX Trends series and is available below.
Yet, for those wishing to skim through the interview, here are some more highlights.
The Two Methods for Implementing LLMs
According to Wu, there are two common approaches to implementing LLMs in the enterprise.
The first involves fine-tuning the LLM with use-case-specific data. For example, GitHub Copilot fine-tunes the model using code repositories to generate better-quality code.
The second approach is retrieval-augmented generation (RAG). Explaining how this works, Wu said:
Relevant business data are first searched and retrieved, then fed into the LLM, which synthesizes the information to answer questions by analyzing multiple documents.
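The retrieve-then-generate flow Wu describes can be sketched in a few lines. The retrieval step below is a toy word-overlap ranker (real deployments use vector search over embeddings), and the helper names are hypothetical; the point is simply that retrieved passages are injected into the prompt so the model answers from the business's documents rather than from memory.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retrieval step: rank documents by word overlap with the query.
    Production systems typically use embedding-based vector search."""
    q = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Generation step: feed the retrieved passages to the LLM as context,
    asking it to synthesize an answer grounded in those documents."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only these documents:\n{joined}\n\nQuestion: {query}"
```

The prompt returned by `build_prompt` would then be sent to whichever LLM the business uses.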
Businesses that embrace the RAG approach must ensure their knowledge centers – which store those documents – include accurate and up-to-date information.
High-profile customer experience AI fails from the New York City government and Air Canada offer tough lessons here.
Thankfully, businesses can leverage LLMs to improve the content within knowledge centers.
As Wu stated: “AI identifies out-of-date content and gaps in the knowledge base by analyzing conversations, helping to keep the knowledge base current and accurate.”
In this sense, contact centers can use GenAI to improve the knowledge that enables customer- and agent-facing GenAI use cases. That’s a powerful cycle!
Yet, the value of LLMs augmenting the knowledge base doesn’t end there. Indeed, there are two more ways in which they can add value.
The first is through semantic search. After all, as LLMs can understand queries and documents at a semantic level, they can significantly improve search quality.
Also, LLMs may add value by synthesizing information from various documents to generate direct answers to customer queries. That’s useful for real-time customer support.
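The semantic-search idea can be illustrated with cosine similarity over embeddings: queries and documents are compared as vectors, so a query can match a document even when they share no keywords. The vectors below are illustrative stand-ins for the embeddings a real model would produce.

```python
import math

# Illustrative precomputed "embeddings" -- a real system would obtain
# these from an embedding model, not hand-written vectors.
EMBEDDINGS = {
    "How do I get my money back?":     [0.9, 0.1, 0.0],
    "Refund policy for online orders": [0.8, 0.2, 0.1],
    "Shipping times for EU customers": [0.1, 0.9, 0.2],
    "Resetting a forgotten password":  [0.0, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def semantic_search(query: str, docs: list[str]) -> str:
    """Return the document whose embedding lies closest to the query's."""
    qv = EMBEDDINGS[query]
    return max(docs, key=lambda d: cosine(qv, EMBEDDINGS[d]))
```

Note that "How do I get my money back?" matches the refund-policy document despite sharing no meaningful keywords with it – exactly the improvement over lexical search that semantic understanding provides.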
Wu: Future Contact Centers Will Be Hybrid Human-AI Systems
“We’re excited about transforming contact centers with AI, aiming to build an AI-native contact center,” summarized Wu.
Future contact centers will be hybrid human-AI systems, where AI augments human agents, learns from them, and improves over time.
“We foresee increased automation of conversations, enhanced human abilities, and more sophisticated AI capable of multi-modal tasks, interacting with both speech and screens.”
Yet, as the CEO also warned, more high-profile GenAI failures will occur when LLMs are applied inappropriately, like a chatbot selling a car for a dollar or promising refunds that don’t exist.
As such, it remains crucial to identify suitable use cases and implement guardrails.
CX Trends: Catch Up on the Entire Series
As the co-founder of Google’s Contact Center AI Solution and the current CEO of customer service disruptor Cresta, Ping Wu is a global thought leader in AI and LLMs.
In recording this video, he joined 14 other subject matter experts (SMEs) as part of the 2024 CX Trends series, with each sharing their thoughts on the hottest topics in customer experience.
These include proactive and predictive customer support, customer experience design, and conversational intelligence, with speakers from the likes of Google, Lenovo, and Zoho.
Each speaker shared significant insight, and you can catch up on everything they had to say by watching our 2024 CX Trends series here!