AI promises to reimagine the contact center by automating contacts, elevating employees, and redefining experiences.
However, AI is not just delivering new, game-changing capabilities to service teams; it’s also bringing new tools to attackers.
Recognizing this, contact center leaders must understand emerging threats to their operations and customers.
As such, CX Today reached out to Santosh Kumar, Chief Security Architect at Cisco, to identify six new risks of AI and how to combat them.
1. AI Voice Phishing
In February, a startup named “Zyphra” launched two open text-to-speech (TTS) models, each capable of cloning someone’s voice with as little as five seconds of sample audio.
An impressive achievement? Absolutely. But one with several risks to many businesses.
After all, with such technology, a fraudster may conduct a voice phishing attack that can convincingly bypass voice biometric systems.
For instance, an attacker could call a bank, pass the voice recording as authentication, and gain full access to the account.
That may seem far-fetched, but—in November—a BBC journalist successfully used voice cloning technology to bypass voice ID systems at two prominent UK banks.
The threat is significant. Indeed, OpenAI stalled the release of a similar solution last year, warning businesses to “phase out voice-based authentication”.
Commenting on this threat, Kumar noted: “The growth of AI-driven voice phishing has increased by 3,000 percent compared to two years ago.
“To mitigate this, it’s crucial to implement anti-spoofing mechanisms, multi-factor authentication, and liveness tests to verify the caller’s presence.”
Companies that haven’t already implemented similar voice biometric protections are especially vulnerable to this AI threat.
2. Privacy Risks
The growing use of machine learning (ML) models in contact centers introduces new challenges. These go beyond the scope of traditional practices–like encryption, access controls, and GDPR compliance–which, of course, remain essential.
Yet, businesses must consider new practices to protect against new breaches of these models.
For instance, there are “membership inference attacks”, where a fraudster attacks an ML model by inputting specific queries to determine if certain individuals’ data were used in their training.
In doing so, the attacker may access that individual’s personal information.
Additionally, they may gain insight into how the model was trained. That could allow them to tamper with it or create a fraudulent duplicate – as scammers are doing more and more.
To mitigate such AI threats, Kumar advises against leveraging machine learning models trained on small datasets and ensuring the model has gone through adversarial testing.
“Every model in our pipeline undergoes adversarial testing before deployment,” said Kumar.
“We also explore differential privacy techniques to ensure prediction vectors remain ambiguous, preventing attackers from extracting precise information.”
Remember, ML models often memorize sensitive data, so always treat them cautiously.
3. Chatbot Attacks
Chatbots offer a common entry point for attacks, especially those powered by machine learning. After all, they can be targeted by adversarial attacks like those highlighted above.
Yet, as businesses power bots with large language models (LLMs), there’s now a risk of “prompt injection” attacks. These are either direct – aiming to trigger specific responses – or indirect – striving to change the virtual agent’s behavior.
Via both methods, users can trick the bot into performing prohibited tasks.
These attack methods received widespread publicity after security researcher Johann Rehberger used similar techniques to tamper with Google Gemini’s long-term memory.
However, there are other chatbot attacks to guard against. For instance, a fraudster could manipulate the bot into adopting a persona. Alternatively, they may exploit AI’s limited context window to overload it with irrelevant data, hampering its performance.
Given these risks, Kumar recommends a multifaceted approach to safeguarding bots. “Mitigating chatbot threats involves strategies like adversarial testing, continuous model evaluation, input validation, and preventing prompt injection attacks,” he recommended.
Nevertheless, businesses must first understand these attack vectors to effectively enact these strategies and ensure AI remains reliable.
4. Model Poisoning
Not all threats come from external fraudsters. Some attacks come from within.
Consider model poisoning. This occurs when an insider injects malicious data during model training, creating backdoors for attacks.
For example, they may introduce poisoned data to an AI-powered security solution designed to detect malware. As a result, it may miss specific threats.
As such, contact centers must ensure their providers follow the OWASP Top 10 for LLM principles and ensure poison detection methods are built-in, suggests Kumar.
“We’re also leveraging Cisco’s AI Defense product, which enhances protection against such attacks,” he noted. “Our AI-specific pipeline includes continuous monitoring and testing to detect and mitigate threats early.”
5. API Weaknesses
Enterprises often integrate their contact centers with various point solutions for conversational analytics, forecasting, self-service, and more.
It’s critical to maintain strict authentication and authorization controls for these APIs.
After all, while APIs face similar threats as software and web applications, they also have unique vulnerabilities that demand special attention.
For instance, APIs – like chatbots – are susceptible to injection attacks via SQL injection, remote code execution, and cross-site scripting (XSS).
Contact center IT teams can ensure consistent input validation and leverage API management platforms to guard against such risks.
However, businesses must prepare for more than just API injections. Service availability threats, where APIs are overwhelmed with requests, and user identity risks are also concerns.
Deploying an API gateway and volumetric defense tools are best practices here.
6. Supply Chain Frailties
With the rush to adopt AI and ML, many companies turn to third-party solutions. While cost-effective, these solutions can introduce significant risks if not properly vetted.
For instance, they may be unpatched, depend on other components/services, or contain precarious open-source components.
Therefore, gaining assurances from vendors against supply chain attacks is critical.
As an example, Cisco has enforced rigorous supply chain security and compliance practices for over 20 years, whether for on-premise libraries or modern SaaS integrations. “This ensures the integrity and security of our ecosystem,” added Kumar.
The tech giant has also developed a Responsible AI Framework, outlining its approach to ethical and legal AI development and integration.
Combatting Contact Center AI Threats with Cisco
Cisco uniquely delivers customer experience solutions alongside a deep security portfolio.
In 2024, Cisco restructured its product divisions, including security and collaboration, to operate under a single Chief Product Officer, Jeetu Patel. Furthermore, Cisco consolidated its Webex Contact Center and CPaaS (Communications Platform as a Service) offerings under the leadership of Jay Patel. This strategic alignment was designed to empower customer experience leaders to proactively address emerging risks.
Cisco is uniquely positioned to deliver AI-enabled Webex Contact Center with security and compliance built into the foundation. With Cisco’s unique positioning on AI threat defense, it can further solidify customers’ overall trust by providing robust security and privacy posture.
To learn more about Cisco’s contact center portfolio, visit their website.