Since the creation of contact centers, managers and QA teams have been asking one big question: “How do we make sure we’re keeping our customers happy?”
That question was the birth of quality management for contact centers: overseeing tasks and conversations to ensure agents are properly trained to help customers.
Quality management has changed significantly over the years, as new technologies allow managers and QA teams to gain new insights and information from each call and interaction. With that in mind, let’s look at how quality management has changed, and how companies like MiaRec are shaping its future.
Manual Call Scoring
The oldest version of quality management is also the most time-consuming. It requires supervisors or quality assurance teams to listen to call recordings, assess the agent’s performance, and assign scores. It’s a manual process, handled one call at a time and usually tracked in Excel spreadsheets, although tools like the Agent Evaluation functionality in MiaRec’s Conversational Intelligence Platform can help with the process.
This method works well in scenarios where a real person’s judgment is more accurate than automated tools. It also helps supervisors build a personal understanding of their agents and customers, and it lets them score the calls themselves accurately.
However, there are several downsides to this method, the biggest one being the time it takes to go through the process. The evaluator must listen to the entire call, manually note every positive or negative element, and assign overall scores (which can be impacted by human bias). This takes longer than the call itself, which is simply not an efficient use of time.
As a result, only about 2-3% of all calls get scored this way. That’s nowhere near enough information to form an accurate impression of any agent’s performance, and it can’t give supervisors insight into compliance, like overall script adherence. Such small samples simply can’t confirm that agents are handling every call properly, so manual scoring is typically only practical for smaller teams with low call volumes.
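To see why such a small sample falls short, here’s a quick back-of-the-envelope calculation (the call volume and team size below are hypothetical, purely for illustration):

```python
# Back-of-the-envelope sketch with hypothetical numbers:
# how few calls a ~2.5% manual sample actually covers.
monthly_calls = 20_000   # hypothetical contact center volume
agents = 100             # hypothetical team size
sample_rate = 0.025      # ~2.5% of calls manually scored

scored_calls = monthly_calls * sample_rate
per_agent = scored_calls / agents

print(f"Calls scored per month: {scored_calls:.0f}")  # 500
print(f"Scored calls per agent: {per_agent:.1f}")     # 5.0
```

A handful of scored calls per agent per month simply can’t reveal patterns in performance or compliance.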
Keyword-Based Scoring
What if we were to let software analyze the calls before passing them on to a supervisor? That brings us to keyword-based scoring, where a program scans a conversation’s transcript for specific words.
The targeted keywords typically relate to call scenarios, such as ensuring the agent reads compliance statements. The scoring algorithms can identify key phrases (or even similar phrases) to ensure agents are sticking to the script and customers are satisfied with the support they’re receiving. Unlike manual scoring, this can be used for every call without taking up a supervisor’s time.
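To make the mechanics concrete, here’s a minimal sketch of keyword-based scoring, assuming a simple phrase list; the phrases and weights are hypothetical and not drawn from any particular product:

```python
# Minimal sketch of keyword-based call scoring.
# The phrases and weights below are hypothetical examples.
COMPLIANCE_PHRASES = {
    "this call may be recorded": 2,  # required disclosure
    "is there anything else i can help": 1,
    "thank you for calling": 1,
}

def keyword_score(transcript: str) -> int:
    """Score a transcript by scanning for required phrases."""
    text = transcript.lower()
    return sum(weight for phrase, weight in COMPLIANCE_PHRASES.items()
               if phrase in text)

transcript = ("Hi, thank you for calling. Please note this call may be "
              "recorded for quality purposes. ... Is there anything else "
              "I can help you with today?")
print(keyword_score(transcript))  # 4: all three phrases were found
```

The exact-match check is what makes this approach brittle: a transcript that says “thanks for reaching out” instead of “thank you for calling” earns no credit.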
However, this scoring system still has its challenges, and not just because keywords are a narrow criterion. Setting up, configuring, and maintaining keyword-scoring software can be a significant undertaking, and the method is inflexible: it can’t account for context.
Additionally, language itself is complex: there are many ways to phrase the same sentiment, and words take on different meanings depending on context and tone. Keyword-based scoring must account for every variation of phrases like “thank you” or “you’ve been a great help,” and any deviation from the keywords it’s configured to detect will go unnoticed. As a result, speech-to-text transcripts alone aren’t always enough to truly understand a conversation.
Generative AI Scoring (Auto QA)
Generative AI has advanced enough to become an excellent tool for quality assurance. This is where Auto QA tools (like MiaRec’s Auto Score Card) come in, using Generative AI to analyze each conversation holistically when scoring it.
This version of quality assurance takes the best parts of both manual and keyword-based scoring, as it can provide accurate, detailed insights into the conversation while scoring 100% of calls quickly.
Because Generative AI-based call scoring uses machine learning and large language models, it doesn’t require extensive configuration or long lists of keywords and all their variations. It can also understand the context of a conversation, unlike keyword-based scoring, and it provides an objective score free of human bias.
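To illustrate the difference in approach, here’s a minimal sketch of prompting a large language model to score a transcript against a rubric. It uses the OpenAI Python client as a stand-in LLM; MiaRec’s actual implementation isn’t public, and the rubric below is hypothetical:

```python
# Minimal sketch of Generative AI call scoring (Auto QA style).
# Uses the OpenAI Python client as a stand-in; the rubric is a
# hypothetical example, not any vendor's actual scorecard.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Score the call transcript from 0-10 on each criterion:
1. Greeting and required compliance disclosure
2. Understanding and resolving the customer's issue
3. Professional, empathetic tone
Return JSON: {"greeting": n, "resolution": n, "tone": n, "notes": "..."}"""

def auto_qa_score(transcript: str) -> str:
    """Ask the model to evaluate a whole conversation in context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(auto_qa_score("Agent: Thanks for reaching out... Customer: ..."))
```

Because the model reads the whole conversation, phrasing like “thanks for reaching out” can still satisfy the greeting criterion, with no keyword list required.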
“Normally, QA teams can only score two to three percent of their contact center’s calls,” said Tatiana Polyakova, COO of MiaRec. “With MiaRec’s Auto QA, we’ve enabled teams to score 100% of their calls, while seeing unprecedented ROI for their contact centers.”
In fact, one of the biggest benefits of Auto QA is the ROI it provides. In addition to being an affordable and efficient option, Auto QA saves the hours that would otherwise be spent manually evaluating calls, and that time translates to significant savings.
How significant? MiaRec offers an ROI Calculator that estimates how much a business saves, both in time and in the monetary cost of that time. Needless to say, the savings add up quickly for businesses of all sizes.
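To get a feel for the arithmetic behind that kind of calculator, here’s a rough sketch with hypothetical inputs (not MiaRec’s actual formula):

```python
# Back-of-the-envelope ROI sketch with hypothetical inputs;
# not MiaRec's actual calculator or formula.
monthly_calls = 20_000
review_minutes_per_call = 10   # listen + note + score, manually
evaluator_hourly_cost = 30.0   # fully loaded cost, USD

# Manually scoring even a 2.5% sample each month:
sampled = monthly_calls * 0.025
hours_manual = sampled * review_minutes_per_call / 60
monthly_cost = hours_manual * evaluator_hourly_cost

print(f"Manual review hours/month: {hours_manual:.0f}")  # ~83
print(f"Monthly cost of that sample: ${monthly_cost:,.0f}")  # ~$2,500

# Auto QA scores all 20,000 calls, so those review hours can
# shift to coaching while coverage goes from 2.5% to 100%.
```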
The growth of Generative AI has raised concerns that it may make the human element obsolete, but that is far from the case. AI tools are designed to make life easier for contact center supervisors and QA teams; they should be used to support human teams, not replace them. Human oversight is always necessary to ensure accuracy.
Quality management has changed over the years, as new technology enables more accurate and efficient means of reviewing calls, and it’s only going to keep improving. With tools like the Auto Score Cards from MiaRec, it’s easier than ever to improve customer satisfaction with every call.