Cisco Reveals Security and Safety Framework for Enterprise AI Readiness

But does the Framework go far enough for CX teams? Techtelligence's Tim Banting thinks not


Published: December 17, 2025

Francesca Roche

Cisco has revealed its AI Security and Safety Framework to help organizations identify and manage risks in modern AI systems. 

This unified structure combines security and safety into a single model, reducing the gaps and misalignment that arise when the two are handled separately. 

For CX teams, this means stronger protection around the AI tools used in customer interactions, reducing the risk of unsafe outputs reaching customers.

AI is still recognized as a real risk, even as the pace of adoption reaches its highest level. 

This is likely a result of knowledge gaps in AI security: with the technology evolving at a rapid pace, many organizations have not yet worked out how to control its behavior within an unpredictable ecosystem. 

According to Cisco’s 2025 AI Readiness Index, many companies are still unprepared to face this danger, with only 29% of those surveyed believing they’re adequately prepared to defend themselves against these threats. 

These results suggest that, in attempting to keep up with rising demand for AI products, CX leaders are willing to accept this risk rather than fall behind in AI deployment. 

Without a management plan in place, this could result in the exposure or loss of customer and brand data, with risk heightened by immature adoption practices and a reactive rather than preventative approach. 

Furthermore, only a third of these companies reported having a formal change-management plan, with the remaining respondents likely relying on incomplete risk reviews and immature organizational adoption. 

With many companies yet to adopt a framework, they remain exposed to agentic, supply chain, and multimodal vulnerabilities. 

This can lead to inconsistencies across service teams, with customer impact not assessed in advance, potentially resulting in incorrect information, exposure of sensitive data, or a lower overall quality of customer interactions.

To tackle this issue, Cisco’s AI Security Framework offers a unified, end-to-end approach that covers both safety and security while educating organizations on their AI risks. 

It is also vendor-agnostic, allowing organizations to keep using their existing tools without modifying their architecture, and it supports multiple environments for durable, flexible capabilities. 

This strategy is designed to improve a company’s readiness and understanding of AI risk by defining threat types and failure modes, so organizations can identify which risks they are prepared to handle and which they are not. 

It also supports formal change management by providing a common risk taxonomy, ensuring risks are explained and prioritized for the company and that recurring or unresolved risks are identified. 

This allows individual teams to understand the current threat landscape using a shared language and mental model, taking account of established infrastructure, complex supply chains, company policies, and human-in-the-loop interactions when determining likely security outcomes. 
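
To make the idea of a common taxonomy concrete, the sketch below shows one way a team might record risks so that security, safety, and CX stakeholders all reference the same fields and identifiers. It is purely illustrative: the categories, field names, and priority scheme are hypothetical and not taken from Cisco’s framework.

```python
# Minimal illustrative sketch of a shared AI risk taxonomy record.
# The categories, fields, and priority scheme are hypothetical and are
# not taken from Cisco's framework documentation.
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    AGENTIC = "agentic"            # autonomous agent misbehaviour
    SUPPLY_CHAIN = "supply_chain"  # third-party models, data, and plugins
    MULTIMODAL = "multimodal"      # voice, image, and video inputs
    DATA_EXPOSURE = "data_exposure"


@dataclass
class RiskEntry:
    identifier: str
    category: RiskCategory
    description: str
    customer_impact: str   # plain-language impact statement for CX teams
    priority: int          # 1 = highest priority
    unresolved: bool = True
    recurrences: int = 0


# Example entry that security, safety, and CX teams reference by the same ID.
prompt_injection = RiskEntry(
    identifier="RISK-001",
    category=RiskCategory.AGENTIC,
    description="Prompt injection via customer-supplied text",
    customer_impact="Chatbot may return unsafe or incorrect answers",
    priority=1,
)
print(prompt_injection.customer_impact)
```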

This supports accountability, protection, and assurance, helping ensure a system behaves ethically and reliably, in alignment with the company’s values and organizational policies.  

To meet these expectations, Cisco has included five built-in design elements that reflect how AI is used today and why older frameworks are no longer sufficient for current adoption. 

1. Threat and Harm Integration

Cisco’s framework combines security and safety, allowing organizations to understand an attack and see its full impact on customers. 

In the event of an attack, the framework helps companies mount better defenses against technical exploits by having security and safety teams collaborate to address AI risks. 

With traditional frameworks, teams attempt to solve these two problems separately, each retaining only the security or the safety picture, which is likely to leave gaps in necessary context. 

For CX teams, where security issues and customer harm are so intertwined, this matters: an attack on an AI system by a cybercriminal could result in a poor customer experience, with unsafe or incorrect answers, or expose sensitive information during an interaction. 

2. AI Lifecycle Awareness 

The AI Security Framework is also designed to determine risks by examining an AI system’s entire lifecycle, rather than simply reviewing development and deployment periods. 

By examining all the data the system records, teams can identify security and safety concerns beyond the model and deployment stages, including data collection risks, ensuring the whole system behaves properly. 

By mapping where risks appear at each stage, teams can also understand how threats evolve within the system. 

This ensures AI system risks are caught early so CX teams can offer consistent customer experiences during updates and feature rollouts. 

3. Multi-Agent Orchestration 

This design element allows the framework to recognize emerging and likely risks when multiple AI systems interact intensively, considering all the systems involved rather than looking at each in isolation. 

When AI systems collaborate, the data and communications they share create more opportunities for risks and exploits. The framework is intended to ensure these tools behave safely and consistently with one another, limiting the impact on customers and external systems. 

For CX teams, this means customers can be protected from harmful outputs during multi-agent collaboration, where tools such as chatbots and virtual support agents work on the same interaction simultaneously. 
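
One simplified way to picture this in a CX stack is a single guardrail applied to every message the agents exchange, rather than trusting each agent to police its own output. The sketch below is illustrative only; the function names and blocked patterns are hypothetical and do not represent Cisco’s implementation.

```python
# Simplified sketch: every message exchanged between agents passes through
# one shared guardrail before it is delivered, rather than each agent
# checking only its own output. The blocked patterns are illustrative.
from dataclasses import dataclass


@dataclass
class AgentMessage:
    sender: str
    recipient: str
    content: str


BLOCKED_PATTERNS = ("card number", "ignore previous instructions")


def guardrail(message: AgentMessage) -> AgentMessage:
    """Redact flagged content before another agent consumes it."""
    lowered = message.content.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        return AgentMessage(message.sender, message.recipient,
                            "[removed by guardrail: flagged content]")
    return message


def route(message: AgentMessage, agents: dict) -> str:
    """Deliver a message to the target agent only after the guardrail check."""
    checked = guardrail(message)
    return agents[checked.recipient](checked.content)


# Example: a triage chatbot hands a customer query to a billing agent.
agents = {"billing": lambda text: f"Billing agent received: {text}"}
print(route(AgentMessage("triage_bot", "billing", "Customer asks about a refund"), agents))
```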

4. Multimodality Considerations

The framework also recognizes that AI systems are becoming more multimodal, accepting and producing a far wider range of inputs and outputs. 

However, this widens the threat surface: corrupted or malicious voice commands, image uploads, and video shares can cause problems that bypass traditional text-based safeguards. 

To tackle this issue, the framework assesses all inputs through the same structure to avoid handling each separately. 

For CX teams, this means enterprises whose customer interactions span multiple channels can use the framework to ensure safety and security controls are applied evenly across text, voice, and image inputs, reducing the gaps that lead to customer disruption.
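
A minimal sketch of what "one structure for every input" might look like in practice appears below: each modality is normalized and then passed through the same safety check. The transcription and captioning steps are stubbed out, and the moderation logic is a hypothetical placeholder rather than anything from Cisco’s framework.

```python
# Illustrative sketch: normalize every modality and run it through one
# shared safety check instead of separate per-channel rules. The
# transcription and captioning steps are stubs; the moderation logic is
# a placeholder, not Cisco's.
def transcribe_audio(audio_bytes: bytes) -> str:
    return "transcribed voice command"      # stand-in for a speech-to-text call


def caption_image(image_bytes: bytes) -> str:
    return "description of uploaded image"  # stand-in for an image model


def moderate(text: str) -> bool:
    """Single safety check applied to all normalized inputs."""
    blocked = ("account password", "payment card")
    return not any(term in text.lower() for term in blocked)


def check_input(channel: str, payload) -> bool:
    # Normalize each modality to text, then apply the same check.
    if channel == "voice":
        text = transcribe_audio(payload)
    elif channel == "image":
        text = caption_image(payload)
    else:
        text = str(payload)
    return moderate(text)


print(check_input("chat", "What is my payment card on file?"))  # False: blocked
print(check_input("voice", b"\x00\x01"))                        # True: allowed
```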

5. Audience-Aware Security Compass

This allows different audiences within a company to use the framework for their specific responsibilities while remaining aligned with a single conceptual model. 

Each group can focus on the risks most relevant to it, such as customer trust, service disruption, and brand impact. 

The shared framework also lets teams collaborate and communicate their needs without requiring deep technical detail. 

For CX teams, this means they can use the framework to uncover customer risks, along with related complaints and issues, and to support high-risk customer situations. 

It also helps customer agents understand how an AI system will behave, and how it will handle failures, during customer interactions. 

Does the Framework Go Far Enough?

Whilst Cisco can identify AI risk at a strategic level, the security framework does not offer clear execution guidance for CX teams. 

Frameworks need to demonstrate clear capabilities for enforcing consent, data protection, workflow governance, and API controls. 

This also includes producing audit-ready evidence to identify where customer risk is most prevalent. 

Speaking with CX Today, Tim Banting, Head of Research and Business Intelligence at Techtelligence, noted that while Cisco has acknowledged the correct risks, CX teams are currently more concerned with the execution of privacy than with abstract AI threats. 

“Cisco’s AI Security Framework addresses buyer anxiety and identifies the right threats,” he said. 

“Techtelligence data indicates the next competitive step is execution: consent enforcement, data residency guarantees, workflow-level guardrails, API governance, and audit-ready evidence – especially in CX environments.  

“To fully align with buyer intent, Cisco must now show how that foundation operates where risk is most visible.”

With CX privacy no longer theoretical but operational, the data shows that security and privacy is one of the most important CX categories for buyers. 

“Techtelligence data shows that CX Security, Privacy, and Compliance is the largest buyer-intent category, with 24,367 companies showing weekly research activity.” 

This includes topics such as vendor privacy risk, data breach response, and data residency. 

Cisco acknowledges the unintended behaviors AI systems can exhibit within organizations; however, Banting concludes that the framework is still at a conceptual stage. 

“Yet the framework remains conceptual, while buyers face concrete challenges, CX buyers are researching execution questions: how consent is enforced across live channels, where customer data resides, how long it is retained, and what evidence exists for regulators.  

“In CX, AI risk is no longer theoretical – it poses potential compliance risks to enterprises.”

Reshaping the Security and Compliance Landscape

The AI Security and Safety Framework highlights how Cisco is just one of several CX vendor giants securing their place within the security and compliance market. 

In fact, this move toward security and compliance has raised the bar for safety and security across industries, elevating expectations around frameworks and regulation. 

It has also included governance and security-focused acquisitions intended to improve adoption reliability and safety for enterprise customers, such as ServiceNow’s acquisition of Veza. 

Similar to Cisco, ServiceNow is taking a unification approach, deploying Veza’s governance tools into its broader platform so enterprises can address data and identity risks. 

Further expanding its security portfolio, ServiceNow recently announced plans to acquire Armis to strengthen its cybersecurity capabilities as cyber threats become increasingly common in the customer service sector. 

This strategic move positions ServiceNow well for that rise in attacks, attracting enterprises looking to expand their own security capabilities. 

Other vendors are also joining the security and compliance push by adhering to internationally recognized privacy standards to ensure enterprise customer data is handled in line with industry expectations. 

Others are backing up their privacy and security commitments with execution, utilizing tools that deliver transparency and risk management whilst ensuring customer service isn’t compromised during deployment. 

Whilst it is light on execution detail, Cisco’s Security and Safety Framework aligns strongly with the current direction of the market.  

The framework manages AI risk end-to-end, bringing current and potential threats into a single view to protect customer interactions from technical risks. 
