As AI agents move into customer experience workflows, handling queries, processing transactions, and triggering backend actions, they introduce a new layer of security risk that goes beyond traditional chatbots.
“With chatbots, we worried about what they would say. With agents, we worry about what they do,” said Jeff Schultz, Senior Vice President of Portfolio Strategy for Cisco’s product organization, in a media briefing.
That shift raises the stakes for security teams and customer experience leaders responsible for delivering safe, reliable customer interactions.
At the RSA Conference in San Francisco this week, Cisco is going after that problem by rolling out a set of security capabilities designed to make autonomous AI safe enough for real-world use.
As AI evolves, enterprises are focusing more on how to network and connect compute capability in a highly secure way, Schultz said.
“We also see a trust deficit. We know that all of our customers are driving to move very, very quickly with AI, but trust is holding them back… this is something that we believe has to be addressed to be able to truly see the potential of AI come forward for our customers.”
A recent Cisco enterprise customer survey found that 85 percent had experimented with AI agents, but only 5 percent had moved agentic technology into production.
Cisco aims to address that with new security capabilities that establish trusted identities, enforce strict Zero Trust Access controls, harden agents before deployment, enforce guardrails at runtime, and give security operations center (SOC) teams the tools to stop threats at machine speed.
From Chatbots to Action-Taking Agents
The move into agentic AI means having autonomous systems “essentially become co-workers sitting side by side with humans in the workforce,” Schultz noted. These agents can help teams to deliver faster, increase productivity and accomplish tasks they couldn’t before.
When it comes to customer experience, that means AI moving deeper into journeys, handling tasks, resolving issues and interacting with systems.
But that autonomy requires new safeguards.
“[W]hat they’re able to do, they’re doing at machine speed. They’re doing it relentlessly, and at the same time, they also act without consequences… they will do whatever is needed to accomplish their task, and they’ll do exactly what you say, not necessarily what you mean. And so we really need to re-imagine security as this agentic workforce joins us.”
Cisco’s approach centers on giving AI agents identities, tying them to human owners, and tightly controlling what they can do.
But the bigger shift is conceptual. There have traditionally been distinctions between human and machine access to systems, noted Tom Gillis, Cisco’s Senior Vice President and General Manager for Infrastructure and Security.
“As we start to move into a world of AI agents, these two lines begin to blur, and it’s not enough to simply think about access control for an agent, we have to move to action control.”
For example, an agent built to process expense reports needs to be able to access travel systems, receipts, and credit card accounts, but should not be able to make purchases.
“The challenge is, if I write a hard coded rule that says, ‘don’t buy a Porsche,’ the agent, being an eager little beaver, will say, ‘okay, cool, I’ll buy a McLaren,’” Gillis said. Access control for autonomous agents goes beyond what permissions they have in various systems to the actions they could potentially take once they’re inside. That “is a fundamental rethink of how security systems work.” For customer experience teams, it’s also the difference between a helpful automation and a costly mistake.
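The "action control" idea Gillis describes can be sketched in a few lines: rather than granting an agent blanket access to a system, every proposed action is checked against an allow-list of action types before it executes. This is a minimal illustration only; all names here are invented and do not reflect Cisco's implementation.

```python
# Illustrative "action control" sketch: the policy constrains what an
# agent can DO, not just which systems it can reach. Hypothetical names.

ALLOWED_ACTIONS = {
    # The expense agent may read travel and card data and file reports,
    # but no purchasing action appears in its allow-list.
    "expense_agent": {"read_receipts", "read_transactions", "file_report"},
}

class ActionDenied(Exception):
    pass

def authorize(agent_id: str, action: str) -> None:
    """Raise unless this agent is permitted to take this action type."""
    if action not in ALLOWED_ACTIONS.get(agent_id, set()):
        raise ActionDenied(f"{agent_id} may not perform '{action}'")

def execute(agent_id: str, action: str, payload: dict) -> str:
    authorize(agent_id, action)
    # ... dispatch to the real backend system here ...
    return f"{action} executed"

# The agent can read card transactions, but a purchase is denied no
# matter how the request is phrased -- Porsche or McLaren:
print(execute("expense_agent", "read_transactions", {}))
try:
    execute("expense_agent", "make_purchase", {"item": "McLaren"})
except ActionDenied as err:
    print(err)
```

Because the check keys on the action type rather than a hard-coded item, the "eager little beaver" cannot route around it by picking a different purchase.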
Cisco is approaching agentic AI security from three aspects: protecting the world from agents so that they can only act as intended; protecting agents to ensure they can’t be manipulated or corrupted; and detecting and responding to AI incidents at machine speed and scale.
To protect enterprises from AI agents, Cisco is extending Zero Trust Access to hold each agent accountable to a human employee and to secure agentic actions. New capabilities in Duo IAM integrate with model context protocol (MCP) policy enforcement and intent-aware monitoring in Cisco Secure Access, enforcing strict access control and giving organizations full visibility and governance over their AI agent workforce. All agents are assigned fine-grained permissions only for the specific tasks or resources they need, and all tool traffic is routed through an MCP gateway to eliminate blind spots.
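The gateway pattern described above can be sketched as a single choke point that ties each agent to a human owner, checks per-tool permissions, and logs every call. This is a hypothetical illustration, not Cisco's API; all class and field names are invented.

```python
# Hypothetical MCP-style gateway sketch: every tool call passes through
# one choke point that verifies agent identity, its accountable human
# owner, and a fine-grained per-tool permission, then records the call.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                       # the accountable human employee
    permitted_tools: set = field(default_factory=set)

class MCPGateway:
    def __init__(self):
        self.registry = {}           # agent_id -> AgentIdentity
        self.audit_log = []          # no blind spots: every call is logged

    def register(self, identity: AgentIdentity) -> None:
        self.registry[identity.agent_id] = identity

    def call_tool(self, agent_id: str, tool: str, args: dict):
        identity = self.registry.get(agent_id)
        if identity is None or tool not in identity.permitted_tools:
            self.audit_log.append((agent_id, tool, "denied"))
            raise PermissionError(f"{agent_id} denied access to {tool}")
        self.audit_log.append((agent_id, tool, f"allowed owner={identity.owner}"))
        # ... forward the request to the real MCP tool server here ...
        return f"{tool} result"

gateway = MCPGateway()
gateway.register(AgentIdentity("support_agent", "j.doe", {"lookup_order"}))
print(gateway.call_tool("support_agent", "lookup_order", {"id": "A-1001"}))
```

Routing denied calls through the same logged path is what closes the blind spots: an unregistered or over-reaching agent still leaves an audit trail.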
To protect agents, Cisco is expanding its self-service AI Defense product with new tools to help stress-test agents and the interactions between them before deployment. Cisco AI Defense: Explorer Edition enables AI developers, application security teams, and security researchers to build and secure AI agents. Features include dynamic agent red teaming, model and application security testing, actionable security reporting, API-first access and team collaboration.
“One of the biggest challenges that customers face is that they don’t know how their agents will behave,” said Akshay Bhargava, Vice President for AI Software and Platform at Cisco. “Even a small failure can turn into real world consequences.”
That uncertainty can quickly translate into poor customer experiences, or worse, security incidents.
“This is exactly why we introduced algorithmic red teaming for agents… it continuously tests agents across real-world scenarios, then after that, we enforce real-time guardrails monitoring their behavior as these agents operate,” Bhargava said. The goal is to catch risky behavior early, before it affects customers.
Cisco is launching an Agent Runtime Software Development Kit (SDK), which embeds policy enforcement directly into agent workflows when they’re built. The SDK supports frameworks including AWS Bedrock AgentCore, Google Vertex Agent Builder, Azure AI Foundry, LangChain, and more.
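One common way to embed policy enforcement into a workflow at build time is to wrap each tool call with pre- and post-call checks. The sketch below shows that pattern in generic Python; the function names and checks are invented for illustration and are not the actual SDK's interface.

```python
# Generic sketch of runtime guardrails embedded in an agent workflow:
# a decorator runs an input check before each tool call and an output
# check after it. All names here are hypothetical.

import functools

def guardrail(check_input, check_output):
    """Wrap a tool function with pre- and post-call policy checks."""
    def wrap(tool_fn):
        @functools.wraps(tool_fn)
        def guarded(*args, **kwargs):
            check_input(args, kwargs)          # e.g. scan for injection patterns
            result = tool_fn(*args, **kwargs)
            check_output(result)               # e.g. block leaked secrets
            return result
        return guarded
    return wrap

def no_injection(args, kwargs):
    text = " ".join(str(a) for a in args)
    if "ignore previous instructions" in text.lower():
        raise ValueError("blocked by input guardrail")

def no_secrets(result):
    if "API_KEY" in str(result):
        raise ValueError("blocked by output guardrail")

@guardrail(no_injection, no_secrets)
def lookup_order(order_id: str) -> str:
    # Stand-in for a real backend call inside the agent workflow.
    return f"status of {order_id}: shipped"

print(lookup_order("A-1001"))
```

Because the checks run inside the workflow itself, they apply on every call the agent makes, regardless of which framework orchestrates it.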
An LLM Security Leaderboard provides a resource for evaluating model risk and susceptibility to attacks. The leaderboard pairs model performance metrics with evaluations of how models handle malicious prompts, jailbreak attempts and other manipulation strategies, providing transparent evaluation signals.
Cisco is also introducing DefenseClaw, an open-source framework designed to automate security checks and reduce friction between development and security teams, helping developers deploy secure agents faster. Consolidating these capabilities into a single framework aims to eliminate the need for manual security steps or separate tool installations.
“From a threat actor perspective, we are seeing speed and rapid weaponization of vulnerabilities,” noted Amy Henderson, Senior Director at Cisco Talos. For example, the React2Shell vulnerability that emerged in December 2025 saw more threat actor activity in three weeks than any other vulnerability last year.
“The only way that’s possible is threat actors are adopting AI across their tactics and techniques… At the same time, you have legacy vulnerabilities and old and outdated systems that we’re still dealing with from a customer perspective.”
AI Agents Threaten Security but Strengthen Enterprise Defense
While AI agents pose potential security threats, they can also provide effective defense. Cisco is leaning on Splunk to capitalize on that advantage and modernize security operations, embedding AI capabilities across key SOC detection and response workflows.
Exposure analytics are now integrated into Splunk Enterprise Security by default, providing a continuously updated inventory of all assets and users. Detection Studio provides a unified workspace to streamline the detection engineering lifecycle, while federated search helps SOC analysts uncover and correlate data across multiple environments.
Specialized AI agents, including the Detection Builder Agent, Standard Operating Procedures (SOP) Agent, Triage Agent, Malware Threat Reversing Agent, Guided Response Agent and Automation Builder Agent, provide active security evaluation and defense. Splunk is rolling out the features gradually between now and June.
Cisco’s message is that AI agents can unlock major gains in productivity and customer experience, but only if organizations trust them to act safely.