Quality Assurance (QA) offers contact centers the chance to motivate agents, support their development, and monitor customer service delivery.
As such, it’s a cornerstone contact center initiative. Yet, there are significant opportunities to bolster QA strategies in 2025.
Recognizing this, Justin Robbins, Founder & Principal Analyst at Metric Sherpa, pinpointed contact center QA as the ideal focus topic for the first episode of Contact Center Talk.
Contact Center Talk is a six-part video miniseries hosted by Robbins, with the first featuring Charlie Mitchell, Senior Editor of CX Today.
In the following video, they discuss all things QA and develop a list of ten best practices for 2025.
A written rundown of each best practice is also available below.
1. Create a Connected Learning Strategy
The best contact centers treat QA as a team effort. QA analysts, coaches, and supervisors collaborate to set shared performance standards, identify gaps, and run post-training reinforcement.
In doing so, contact centers can create a unified vision of excellent customer service, work cooperatively towards that, and maximize the value of their investments in QA technology.
Moreover, this approach combats the “us vs. them” mentality between critical QA stakeholders and ensures that evaluations translate into performance development.
2. Use More Than One Method to Evaluate Customer Interactions
Many leaders default to scorecards for QA, but it’s vital to include other approaches, like customer surveys, secret shopper programs, and appropriate metrics.
Diversifying the contact center’s methods provides a more complete picture of interaction quality.
Contact centers may also look beyond the agent-customer conversation, analyzing the pre- and post-contact experience to isolate service issues outside the agent’s control.
3. Align Quality Scores with Customer Satisfaction
Look for a correlation between QA scores and customer satisfaction metrics. If they align, the contact center is likely already measuring what matters most to customers. If not, it’s time to revisit the QA scorecard.
However, remember that not every scenario lends itself to this. For instance, some intents – such as loan denial conversations in banking – will undoubtedly deliver low CSAT scores.
As such, balancing QA insights with customer intent and situational context is critical.
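For teams that want to sanity-check this alignment with data, a simple correlation check is one option. The sketch below is illustrative only: it assumes paired QA scores and CSAT ratings can be exported per interaction, and the sample values and the 0.4 threshold are hypothetical, not a recommended standard.

```python
# Minimal sketch: checking whether QA scores track customer satisfaction.
# Assumes each interaction has a QA score (0-100) and a CSAT rating (1-5);
# the sample data below is purely illustrative.
from statistics import correlation  # Python 3.10+

qa_scores = [88, 92, 74, 65, 95, 81, 70, 90]
csat_ratings = [5, 5, 3, 2, 5, 4, 3, 4]

r = correlation(qa_scores, csat_ratings)  # Pearson's r
print(f"QA-to-CSAT correlation: {r:.2f}")

# A weak or negative correlation suggests the scorecard may not be
# measuring what matters most to customers, so revisit the criteria.
if r < 0.4:
    print("Low alignment: review the QA scorecard criteria.")
```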
4. Recognize That Your Scorecard Is Not Your Quality Program
Many leaders equate their scorecard with their strategy, but the form is just a tool.
Indeed, a QA program should encompass standards, coaching, connected learning strategies, and roles and responsibilities.
The form comes later and is a piece of the puzzle, not the whole picture.
5. Demonstrate Unquantifiable Standards Within the Scorecard
Criteria like politeness or empathy can feel vague to agents. Creating a library of excellent customer conversations helps bring these standards to life, clarify performance expectations, and build agent confidence.
Additionally, consider developing a standards repository that defines every criterion in plain language and includes examples of each rating. That offers a valuable resource for everyone, from new agents to seasoned supervisors.
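As a rough illustration of what a repository entry might look like in structured form, the sketch below pairs a plain-language definition with an example at each rating level. The criterion, definition, and examples are hypothetical, not drawn from any particular scorecard.

```python
# Illustrative sketch of a standards-repository entry: each criterion gets a
# plain-language definition plus an example of what each rating looks like.
standards_repository = {
    "Demonstrates empathy": {
        "definition": "Acknowledges the customer's situation and feelings "
                      "before moving to the solution.",
        "rating_examples": {
            1: "Ignores the customer's frustration and jumps straight to process.",
            3: "Briefly acknowledges the issue ('Sorry about that') before resolving it.",
            5: "Names the impact on the customer, reassures them, and sets clear next steps.",
        },
    },
}
```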
6. Take More of a Behavior-Based Approach to QA
Many contact centers report quality in terms of scores, but this can lead to unproductive conversations like, “You got a 73; how do we get you to 86?”
Instead, focus on behavior-based approaches. For example:
- For new agents, evaluate one skill at a time, building proficiency step-by-step.
- For experienced agents, target specific areas for improvement rather than reviewing all 15 criteria every time.
- For organization-wide initiatives, focus QA efforts on a single skill or behavior for a defined period.
Such an approach ensures targeted coaching and sustainable performance improvement.
7. Build QA Into the Agent Onboarding Agenda
It’s critical to position QA as a tool for success, not as a “Big Brother” or punitive system. Setting clear expectations early and familiarizing agents with QA can make it a comfortable, supportive process.
Also, consider introducing other key concepts like workforce management (WFM) early, especially for agents who may not have encountered these practices before.
Again, it comes back to avoiding unintended consequences: delaying the introduction of QA could lead to confusion or disengagement once agents start interacting with customers.
8. Be Aware of the Two Categories of Skills You Evaluate
In a contact center, skills typically fit into two categories: task-based skills, measured by whether something was done, and proficiency-based skills, measured by how well it was done.
For example, perhaps agents have to follow a standard greeting, such as: “Thank you for calling [Company Name]. My name is Justin. May I have your domain name, please?”
The task itself is straightforward, requiring no learning curve or varying proficiency. It is simply a matter of whether the agent follows the script. For skills like this, a binary evaluation—“yes,” “no,” or “not applicable”—is sufficient.
In contrast, skills such as demonstrating empathy or resolving complex issues are more nuanced and require varying proficiency levels. These should be measured on a Likert scale, such as one to five, with clear criteria for each level.
Why? Because this approach allows the contact center to track progress, provide targeted coaching, and avoid frustrating agents still developing these more advanced skills.
Unfortunately, many service teams misclassify these skills. Evaluating a complex skill with a simple “yes” or “no” can lead to frustration and missed growth opportunities.
As such, organizations must carefully assess their evaluation criteria to ensure they are appropriately measuring the “if” versus the “how.”
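One way to keep the “if” and the “how” separate is to model them as distinct criterion types on the evaluation form. The sketch below is a minimal illustration; the criterion names, ratings, and structure are hypothetical rather than taken from any specific QA form.

```python
# Illustrative sketch: separating task-based ("if") criteria, scored as
# yes/no/n-a, from proficiency-based ("how") criteria, scored on a Likert scale.
from dataclasses import dataclass

@dataclass
class TaskCriterion:
    name: str
    result: str  # "yes", "no", or "n/a"

@dataclass
class ProficiencyCriterion:
    name: str
    rating: int  # Likert scale, 1 (developing) to 5 (exemplary)

evaluation = [
    TaskCriterion("Used the standard greeting", "yes"),
    TaskCriterion("Verified the caller's identity", "yes"),
    ProficiencyCriterion("Demonstrated empathy", 3),
    ProficiencyCriterion("Resolved the issue effectively", 4),
]

for criterion in evaluation:
    if isinstance(criterion, TaskCriterion):
        print(f"[task]        {criterion.name}: {criterion.result}")
    else:
        print(f"[proficiency] {criterion.name}: {criterion.rating}/5")
```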
9. Engage In Outlier Analysis
Many contact centers already examine calls that stand out in customer sentiment or transfer rates. However, long handling times often get overlooked because analyzing them can be time-consuming.
Yet, these interactions frequently reveal the most valuable insights, such as broken processes, knowledge gaps, or coaching opportunities.
For example, in one case, a contact center found that lengthy calls stemmed from a new system integration that caused confusion. By digging into these outliers, analysts identified the issue and helped address it.
Conversely, analysts may also spot high-performing agents excelling in areas they hadn’t previously noticed, leading to valuable best practices.
Encouraging curiosity among quality analysts can uncover both problems and successes in these outlier interactions.
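As a starting point for that kind of curiosity, outlier candidates can be surfaced automatically and handed to analysts for review. The sketch below flags calls whose handle time exceeds twice the median; the call IDs, durations, and threshold are purely illustrative assumptions.

```python
# Minimal sketch: surfacing unusually long calls for a quality analyst to review.
# Handle times are in seconds; the "twice the median" rule is one simple,
# illustrative way to flag outliers without being skewed by extreme values.
from statistics import median

handle_times = {
    "call-1041": 302, "call-1042": 318, "call-1043": 1475,
    "call-1044": 295, "call-1045": 340, "call-1046": 1620,
    "call-1047": 288, "call-1048": 310, "call-1049": 305,
}

typical = median(handle_times.values())
threshold = 2 * typical

review_queue = [cid for cid, secs in handle_times.items() if secs > threshold]
print(f"Typical handle time: {typical:.0f}s; flagging calls over {threshold:.0f}s")
print("Outliers for review:", review_queue)
```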
10. Better Manage Quality Calibration Sessions
Calibration often focuses on chasing down deviations from the standard score, but that’s not the best use of the time.
Instead, the primary goal of calibration should be to identify the most important takeaway from an interaction, whether it’s a behavior to reinforce, a skill to improve, or a course of action for coaching.
It’s critical to avoid overloading agents with feedback. Highlight one or two key areas to focus on; otherwise, nothing sticks.
Calibration sessions should also serve as a “gut check” for your QA form and strategy. If discrepancies arise, ask why. Are the criteria outdated? Are they addressing what’s truly meaningful?
Use calibration to refine the form and ensure it aligns with the organization’s goals.
Don’t miss an episode of our Contact Center Talk miniseries. Sign up for the CX Today Newsletter.