Automate Quality Assurance In Seven Steps 

Maximize the benefits of contact centre QA with this clear-cut approach 

Sponsored Post

Published: January 31, 2023

Charlie Mitchell

For decades, contact centres have engaged in quality assurance to unearth customer service issues, motivate agents, and ensure compliance.  

Yet, the QA strategy of many operations remains rooted in the 20th century.  

Analysts will randomly sample a wafer-thin percentage of conversations, fill in a stale scorecard, and give feedback to the agent – the end. 

As a result, many agents consider the entire exercise unfair and a drain on their time.

Enter Auto-QA. It ensures each agent follows protocol and exhibits the optimum behaviours during every conversation, spotlighting insights by the bucketful.   

From there, the technology helps to prioritise feedback, informing coaches where they can have the biggest impact on performance. 

In doing so, the technology augments manual QA workflows, driving efficiency and effectiveness. 

Such a solution is a game-changer, particularly for those stuck in a web of spreadsheet tabs and SharePoint docs.

Nonetheless, there are seven steps to follow before reaping the rewards.

Step 1: Connect Your QA Strategy

As Bill Gates once said: “Automation applied to an inefficient operation will magnify the inefficiency.” Auto-QA is no exception.  

To instead magnify the efficiency, lay the groundwork for automation. 

Doing so often starts with developing shared performance standards across each channel. These ensure supervisors, quality analysts, and coaches sing from the same hymn sheet. 

Afterwards, build shared workflows for identifying and closing performance gaps.

Then, establish a strategy for reinforcing and tracking the impact of coaching. 

With such a well-defined process, contact centres gain greater value from auto-scoring customer conversations, surfacing performance trends, and flagging prime conversations for learning.

Step 2: Fetch & Redact Conversations

With a transparent QA process, it’s time to add in automation. This starts by connecting Auto-QA with the contact centre ecosystem. 

Critical integrations include the call management system, ticket management software, and all customer engagement channels.

By connecting these to the QA software, contact centres can store every conversation in one place. Moreover, each conversation comes with a recording, transcript, and metadata attached.  

Unfortunately, the integration process is sometimes tricky. Thankfully, most tools now boast a whole host of API integrations to simplify the process.  

EvaluAgent’s burgeoning marketplace offers an excellent example.  

An image of the EvaluAgent marketplace

Nevertheless, Jaime Scott, Founder & CEO of EvaluAgent, puts forward a mission-critical hygiene factor to keep front and centre. He states:

“When sharing conversations with any third-party platforms, you may wish to redact sensitive personal data from the conversations to give your compliance teams extra confidence that your customer data is totally safe.”

EvaluAgent supports its clients in achieving this aim, while its Auto-QA tool ensures compliance by listening to every customer conversation and flagging issues.
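
As a rough illustration of that principle (and not EvaluAgent’s actual mechanism), a contact centre could run a simple masking pass over transcripts before they leave its systems. The patterns below are placeholders; real redaction engines cover far more data types.

```python
import re

# Illustrative patterns only; real redaction tooling handles many more PII types.
PATTERNS = {
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(transcript: str) -> str:
    """Mask sensitive matches before the transcript is shared with a third party."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label} REDACTED]", transcript)
    return transcript

print(redact("Card 4111 1111 1111 1111, email jo@example.com"))
# Card [CARD_NUMBER REDACTED], email [EMAIL REDACTED]
```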

Step 3: Auto-Tag Conversations

Auto-QA transcribes every call and processes those transcripts alongside conversations from digital channels.

In doing so, it analyses and automatically tags each interaction with relevant insights. The contact centre must simply give the tool permission to do so.

From there, most platforms will group conversations by various topics, allowing QA teams to focus on conversations with shared elements, such as customer intent.
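
For intuition only, the grouping logic can be imagined as keyword-driven tagging. The topics and trigger phrases below are invented for the example; real platforms lean on trained language models rather than hand-written lists.

```python
# Hypothetical topic dictionary; production systems use trained models, not keyword lists.
TOPIC_KEYWORDS = {
    "billing": ["invoice", "overcharged", "refund"],
    "cancellation": ["cancel my", "close my account"],
    "complaint": ["unacceptable", "formal complaint", "speak to a manager"],
}

def auto_tag(transcript: str) -> list[str]:
    """Return every topic whose trigger phrases appear in the transcript."""
    text = transcript.lower()
    return [topic for topic, phrases in TOPIC_KEYWORDS.items()
            if any(phrase in text for phrase in phrases)]

print(auto_tag("I was overcharged and want to make a formal complaint."))
# ['billing', 'complaint']
```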

Step 4: Auto-Score Conversations

Auto-scoring all contact centre conversations serves up a fair, transparent view of each agent’s performance. It also spotlights opportunities for learning, recognition, and reward.

Yet, there are some significant steps to follow to make this a possibility.

First, go through the shared performance standards – which should filter into manual QA scorecards – and determine the possibilities for automation. Typically, it’s between 40 and 70 percent.

Why not 100 percent? Well, Auto-QA works by looking for combinations of words and phrases which, if found in the right order at the right time, imply that the agent has met specific standards. The agent then achieves a “pass” for that item on the Auto-QA scorecard.
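
To make that mechanism concrete, here is a minimal sketch, assuming invented standards and phrases rather than any vendor’s real scorecard logic.

```python
# Hypothetical line items: a standard "passes" when its phrases appear in the listed order.
STANDARDS = {
    "greeting": ["thank you for calling", "how can i help"],
    "identity_check": ["confirm your name", "date of birth"],
}

def meets_standard(transcript: str, phrases: list[str]) -> bool:
    """Pass only if every phrase occurs, in order, somewhere in the transcript."""
    text, position = transcript.lower(), 0
    for phrase in phrases:
        position = text.find(phrase, position)
        if position == -1:
            return False
        position += len(phrase)
    return True

def auto_score(transcript: str) -> dict[str, bool]:
    """Score one conversation against every automatable standard."""
    return {name: meets_standard(transcript, phrases) for name, phrases in STANDARDS.items()}
```

Whatever the rules cannot judge reliably, typically the remaining 30 to 60 percent, stays with human evaluators.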

The contact centre must then build and configure these within the platform, with multiple scorecards for different channels and – perhaps – contact reasons.

Step 5: Identify High-Risk Conversations

Unlike manual QA, Auto-QA spotlights every high-risk, reputation-damaging, and potentially fine-inducing customer conversation. These are prime candidates for analyst review.

The auto-tagging process makes these simpler to scour through, as the contact centre may auto-tag them as “complaint” and run each through a custom Auto-QA scorecard.

These conversations are prime candidates for contact centres to focus their improvement efforts on. Yet, there are many others too…
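
Combining the tagging and scoring sketches above, a hypothetical flagging rule might look like this:

```python
def is_high_risk(tags: list[str], scores: dict[str, bool]) -> bool:
    """Flag conversations tagged as complaints or failing a compliance-critical standard."""
    failed_compliance = not scores.get("identity_check", True)
    return "complaint" in tags or failed_compliance

# Example: a complaint that also skipped the identity check is flagged for analyst review.
print(is_high_risk(["billing", "complaint"], {"greeting": True, "identity_check": False}))  # True
```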

Step 6: Deep-Dive Manual QA & Feedback

Now, businesses can identify opportunities for improvement that move the dial on performance.

Contacts with a low Auto-QA score are excellent points of reference. Yet, the system also isolates trends, such as standards that an agent repeatedly fails. These spotlight a coaching opportunity.

In addition, the filters previously created help when selecting contacts for manual review.

With such filters, contact centres can configure “auto-work queues.” These apply the high-risk filters to sample conversations and automatically generate feedback, which analysts can deep-dive. Alternatively, agents may do so with self-QA.
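
As a rough sketch of how such a queue could sample work (the field names are invented for illustration):

```python
import random

def build_work_queue(conversations: list[dict], sample_size: int = 10) -> list[dict]:
    """Sample flagged or low-scoring conversations for manual deep-dive or agent self-QA."""
    candidates = [c for c in conversations
                  if c.get("high_risk") or c.get("auto_qa_score", 1.0) < 0.6]
    random.shuffle(candidates)
    return candidates[:sample_size]

queue = build_work_queue([
    {"id": 1, "high_risk": True, "auto_qa_score": 0.9},
    {"id": 2, "high_risk": False, "auto_qa_score": 0.4},
    {"id": 3, "high_risk": False, "auto_qa_score": 0.95},
])  # conversations 1 and 2 are queued; 3 is not
```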

Alongside this automated feedback, Scott says:

“The evaluator can playback the conversation. They can see all the metadata that came across with it. They can see the insight topics that have been auto-tagged onto the conversation. They can review the auto-QA results. And they can manually score and add deep, actionable feedback onto the conversation before sharing with the agent.”

Yet, such feedback should not only focus on agent development. Use the tool to pinpoint opportunities for agent praise too.

Setting up a filter for 100 percent quality scores may help here, providing an excellent baseline for a reward and recognition programme.

Step 7: Track the Results

Tracking each agent’s quality scores gamifies their performance. Yet, Scott urges caution. “Showing auto-QA scores directly to team leaders, coaches, and agents is one of the fastest ways of disengaging the front line,” he says.

“Over the last six months, we’ve had almost a dozen prospects working with other vendors come to us who have done this. They had to pull auto-QA out of the operation and go back to basics.”

As a result, it’s essential to configure reporting permissions so that only particular people can track the results – so agents don’t feel down when they underperform.
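
In configuration terms, that might look like a simple role-to-report mapping. The roles and report names below are assumptions, not any product’s actual settings.

```python
# Hypothetical permissions: agents receive curated feedback, not raw auto-QA scores.
REPORT_PERMISSIONS = {
    "quality_analyst": {"auto_qa_scores", "trend_reports", "coaching_impact"},
    "team_leader": {"coaching_impact"},
    "agent": {"personal_feedback"},
}

def can_view(role: str, report: str) -> bool:
    """Check whether a given role is allowed to open a given report."""
    return report in REPORT_PERMISSIONS.get(role, set())

print(can_view("agent", "auto_qa_scores"))  # False
```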

Instead, the contact centre can pick and choose its feedback, filtering for positive insights that fuel praise and rekindle each agent’s desire to perform well.

Yet, a holistic view is excellent for analysts, so they can measure the impact of their performance interventions, extract lessons, and improve.

Get Started With Automating QA

Auto-QA does not replace manual QA. Instead, it adds more metadata and insight. This enables targeted interventions that personalise and improve agent performance.

Such clever augmentation takes QA out of the dark ages and into the modern, hybrid contact centre.

Now, there is work to do upfront. Nevertheless, once the quality team has set up their Auto-QA program, they can quickly scale it and reap the rewards in no time at all.

Delve deeper into the world of Auto-QA systems by visiting: www.evaluagent.com
