GDPR, HIPAA, PCI-DSS… meeting such compliance requirements remains paramount when introducing new contact center innovations.
So, contact center and IT leaders do their due diligence, and, when they implement new AI solutions, they're typically confident that they've met those standards and secured their deployments.
However, the world moves on, and these AI solutions evolve.
Consider an AI agent. As it grows, it leverages more data to generate better responses. Without proper oversight, its behavior can drift away from what's acceptable to the business.
The Nightmare Scenarios
Over the past 18 months, many customer-facing AI solutions have gone rogue.
Some have been comical, such as the Virgin Money chatbot taking offense at the word "virgin". However, others have been more serious.
A classic example is the New York City virtual agent telling small business owners to break the law. Another is a recent attack on Lenovo that had its chatbot spewing past customer conversations.
These risks are often discussed. Yet, there are rising dangers in agent-facing AI use cases, too.
For instance, in banking, if an AI guides agents to sell more credit products to customers already struggling to pay, the contact center risks violating fair credit regulations. That’s dangerous territory.
That scenario points to the first of three new security and compliance considerations for contact center leaders.
1. Don’t Over-Rely on Human Oversight
Both technology and human oversight are critical to successful AI deployments. AI must be constrained by clear boundaries, and agents must be educated on where those boundaries lie.
While "human-in-the-loop" is a term often bandied about, relying on human feedback alone is risky, since performance metrics may push agents to follow AI guidance even when it's problematic (as in the banking example above).
The key is to build a strategy around AI reporting, with alerts feeding into the contact center system. After all, real-time system monitoring is far more effective and future-proof than relying on humans alone.
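For illustration only, here is a minimal sketch of that pattern in Python, assuming a hypothetical rule set and alert queue. The names (Guidance, check_guidance, RESTRICTED_UPSELLS) are invented for this example, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class Guidance:
    customer_id: str
    recommendation: str
    account_delinquent: bool  # e.g., pulled from the CRM record

# Hypothetical list of products an AI should never upsell to a struggling customer
RESTRICTED_UPSELLS = {"credit_card", "personal_loan", "credit_line_increase"}

def check_guidance(guidance: Guidance, alert_queue: list) -> bool:
    """Return True if the AI guidance is safe to surface to the agent.

    Flags credit upsells aimed at customers already struggling to pay,
    per the fair-credit example above, and raises an alert in real time.
    """
    if guidance.account_delinquent and guidance.recommendation in RESTRICTED_UPSELLS:
        # Feed a real-time alert into the contact center system
        alert_queue.append({
            "severity": "high",
            "customer_id": guidance.customer_id,
            "reason": "Blocked restricted upsell: " + guidance.recommendation,
        })
        return False  # suppress the guidance before the agent acts on it
    return True
```

The point of the sketch is that the check runs before the agent ever sees the guidance, rather than relying on the agent to spot the problem.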
2. Shift from Reactive to Proactive Quality Assurance (QA)
Continuously review outputs through QA: not just call recordings, but also the guidance AI provides. That's critical. Yet, as Crispen Maung, SVP of Compliance and Privacy at Five9, stressed:
“The real need is proactive, real-time monitoring. AI can learn and shift quickly, so you want automated checks that flag when outputs approach unacceptable tolerance levels.”
To Maung's point, relying on QA alone means catching issues only after the damage is done, possibly after falling out of compliance. As such, both real-time monitoring and back-end QA are essential.
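To make Maung's point concrete, here is a minimal sketch of a tolerance-band check, assuming a hypothetical risk_score function (for instance, a moderation model that rates an output from 0.0 to 1.0). The thresholds are illustrative, not recommendations:

```python
HARD_LIMIT = 0.9     # output violates policy: block and escalate
WARNING_BAND = 0.7   # output is approaching tolerance: flag proactively

def classify_output(text: str, risk_score) -> str:
    """Classify an AI output as 'block', 'warn', or 'pass'.

    Flagging at WARNING_BAND catches drift while the output is still
    compliant, rather than after the damage is done.
    """
    score = risk_score(text)  # hypothetical scorer passed in by the caller
    if score >= HARD_LIMIT:
        return "block"
    if score >= WARNING_BAND:
        return "warn"  # alert QA and compliance teams early
    return "pass"
```

The warning band is the proactive piece: it surfaces outputs that are trending toward a violation, which back-end QA alone would only discover in hindsight.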
Many CCaaS providers offer both capabilities within their AI solutions. However, it's ultimately up to customers to implement them correctly and maintain oversight. Brands can't just "set it and forget it" for months.
Indeed, regulators will expect continuous monitoring, and, if complaints arise, contact centers will need to show both reactive fixes and proactive safeguards.
3. Integrate with the Company’s Broader SIEM Platform
Many smart enterprises will integrate their contact center AI solutions with the company's broader Security Information and Event Management (SIEM) platform.
By doing so, service teams can log outputs into the SIEM, which sends out alerts and triggers immediate actions to shut down prospective attacks.
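As a sketch of what that logging could look like, the example below emits structured AI events over syslog, a common SIEM ingestion path. The collector address and event fields are assumptions for illustration, not any specific vendor's schema:

```python
import json
import logging
from logging.handlers import SysLogHandler

siem = logging.getLogger("ai_audit")
siem.setLevel(logging.INFO)
# Most SIEM platforms accept syslog input; point this at your collector
# (host and port below are placeholders).
siem.addHandler(SysLogHandler(address=("siem.example.internal", 514)))

def log_ai_event(interaction_id: str, verdict: str, reason: str) -> None:
    """Emit a structured event the SIEM can correlate and alert on."""
    siem.info(json.dumps({
        "source": "contact_center_ai",
        "interaction_id": interaction_id,
        "verdict": verdict,   # e.g., "block", "warn", "pass"
        "reason": reason,
    }))
```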
Typically, SIEM systems sit in the security function, but QA and compliance teams need access so they can act quickly. That requires discussion.
Alternatively, there are lighter continuous monitoring tools that automatically restrict or shut down problematic outputs. However, the benefit of integrating the AI solution with the SIEM is the ongoing collaboration with cybersecurity leaders on proactive risk management.
Interestingly, Gartner recently predicted that pre-emptive cybersecurity solutions will account for 50 percent of IT security spending by 2030, up from less than five percent in 2024. That underscores the direction of travel.
Find Out More with Five9
Guardrails are evolving with AI, and many SaaS services will be secure out of the box. However, customers must still configure the tech properly for their specific regulatory obligations.
Ultimately, this ties back to the original point: if AI agents are constantly learning and brands don’t have correctly configured guardrails, they can drift into non-compliance.
Given this risk, the leading contact center and conversational AI vendors won't just be innovative; they'll also be consultative.
Five9 is one such brand. Sure, it's evolving its QA functionality, reporting, and monitoring tools so customers can stay compliant and in control. Yet it's also imparting the knowledge customers need to understand where to set guardrails and how best to leverage the technology.
One way in which it’s doing so is through a webinar series dedicated to contact center AI, trust, and governance. Register here.