Vendors Race to Reinvent Cyber Defense for the Agentic AI Era

Microsoft, Cisco and OpenAI unveil AI-native cybersecurity initiatives as autonomous models intensify concerns around vulnerability discovery and exploits


Published: May 14, 2026

Nicole Willing

As AI models become more sophisticated, their ability to autonomously find and exploit vulnerabilities is advancing rapidly, making them a powerful weapon in the hands of cyber attackers. With enterprises accelerating their adoption of GenAI and autonomous agents, vendors are shifting from traditional detection models toward AI-native security architectures.

This week, Microsoft, Cisco and OpenAI have each unveiled new initiatives aimed at addressing growing concerns across the enterprise market about how to secure these increasingly autonomous AI systems before attackers exploit them.

The announcements point to a transition away from static security tooling and toward agentic defense systems that can continuously evaluate and address threats across software environments.

As Cisco pointed out in announcing its initiative:

“The operating model of cybersecurity has fundamentally shifted. As frontier AI models create a new dual-front challenge, attackers are now identifying vulnerabilities at machine speed, leaving security teams struggling to keep pace with manual, legacy processes.”

Microsoft Pushes Multi-Agent Cyber Defense

Microsoft has introduced a new agentic security system, the Multi-Model Agentic Scanning Harness (MDASH), which combines more than 100 specialized AI agents to detect vulnerabilities across Windows infrastructure. It uses frontier LLMs including Anthropic’s Claude Mythos, OpenAI’s GPT-5.5-Cyber and others.

According to the company, the platform helped Microsoft researchers “find 16 new vulnerabilities across the Windows networking and authentication stack—including four Critical remote code execution flaws in components such as the Windows kernel TCP/IP stack and the IKEv2 service.”

Microsoft fixed the flaws in its monthly Patch Tuesday update pushed out to Windows devices.

The vendor claims that the system outperformed Anthropic’s Claude Mythos Preview and OpenAI’s ChatGPT 5.5 in benchmarking of real-world vulnerabilities.

Taesoo Kim, Vice President, Agentic Security at Microsoft, wrote in the blog post announcing the system:

“AI vulnerability discovery has crossed from research curiosity into production-grade defense at enterprise scale, and the durable advantage lies in the agentic system around the model rather than any single model itself.”

Discovering security flaws with AI is becoming an engineering problem, Kim added: it requires composition that no single prompt can achieve, along with validation to confirm and fix the flaws uncovered.

Microsoft’s system absorbs improvements in AI models, so that the targeting, debating, deduplication, and proof stages do not need to be rewritten each time there is an update. Instead, the vendor changes a configuration and re-runs an A/B test, and the customer’s investment, including per-project context, scan plugins, and proving agents, carries over.
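The configuration-driven pattern described above can be sketched roughly as follows. This is an illustrative assumption of how such a pipeline might be wired, not Microsoft’s actual MDASH interfaces; all stage and model names are hypothetical.

```python
# Hypothetical sketch: stage logic stays fixed, and the model behind each
# stage is swapped via configuration alone, so an A/B test is a config edit
# rather than a rewrite. Not MDASH's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class StageConfig:
    stage: str   # e.g. "targeting", "debating", "deduplication", "proof"
    model: str   # swapped without touching the stage implementations

def run_pipeline(configs: list[StageConfig],
                 call_model: Callable[[str, str], str],
                 target: str) -> list[str]:
    """Run each fixed stage against whichever model the config names."""
    transcript = []
    payload = target
    for cfg in configs:
        payload = call_model(cfg.model, f"{cfg.stage}: {payload}")
        transcript.append(f"{cfg.stage} via {cfg.model}")
    return transcript

# Swapping in a newer model for one stage is a one-line config change:
baseline  = [StageConfig("targeting", "model-a"), StageConfig("proof", "model-a")]
candidate = [StageConfig("targeting", "model-b"), StageConfig("proof", "model-a")]

fake_llm = lambda model, prompt: f"<{model} output>"
print(run_pipeline(candidate, fake_llm, "scan repo"))
# → ['targeting via model-b', 'proof via model-a']
```

The point of the design is that per-project context, plugins and proving agents sit in the stages, so they carry over unchanged when the configured model is replaced.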

“This is the architectural property that matters most over time, because the model lottery is going to keep playing out, and any system whose value is gated on a particular model is a system that has to be rebuilt every six months,” Kim noted.

The architecture is intended to emulate collaborative human security teams, with agents specializing in reasoning, exploit validation, triage and remediation workflows.

Microsoft is framing the initiative as a response to the widening speed gap between attackers and defenders. In parallel, the vendor also detailed work using AI-generated synthetic attack logs to improve detection engineering and training datasets for security operations teams.

MDASH is helping Microsoft’s engineering teams improve security outcomes using generally available AI models and is being tested by customers as part of a limited private preview.

Cisco Addresses the Limits of AI Security Analysis

Cisco has taken a different approach, releasing an open-source framework called the Foundry Security Spec. Rather than introducing a standalone product, the company published a model-agnostic and stack-agnostic reference architecture for building auditable AI-driven security evaluation systems.

Cisco warned that simply using AI to attempt to find and fix flaws is not enough. Omar Santos, Distinguished Engineer, AI Security Engineering, S&TO, stated:

“Organizations are investing in AI-assisted security and getting back hallucinated findings, false positives at scale, and no coverage signal.”

When security teams point an LLM at a repository and ask it to “find the bugs,” they are often given “a wall of unbounded, unverifiable output that mixes sharp insights with hallucinated findings, with no way to know what was missed or when you’re actually done,” according to Santos.

“Foundry Security Spec is the scaffolding that turns a frontier LLM from ‘an interesting demo against your codebase’ into a security evaluation system,” Santos added. It produces a prioritized and verifiable set of findings, a clear “done” signal, and an auditable provenance chain.
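The output shape Santos describes, prioritized findings, a coverage-based done signal, and a provenance chain, might look something like the following sketch. The field names and report structure are assumptions for illustration, not the Foundry Security Spec schema.

```python
# Illustrative-only shape for prioritized, verifiable findings with an
# explicit "done" signal and provenance chain. Not Cisco's actual schema.
from dataclasses import dataclass, field

@dataclass
class Finding:
    rule_id: str          # which check produced this finding
    severity: int         # lower value = higher priority
    evidence: str         # reproducible proof, not just model prose
    provenance: list[str] = field(default_factory=list)  # model/tool steps

def evaluation_report(findings, checks_planned, checks_run):
    """Sort findings by priority and emit an explicit coverage signal."""
    return {
        "findings": sorted(findings, key=lambda f: f.severity),
        "coverage": checks_run / checks_planned,
        "done": checks_run == checks_planned,  # "when you're actually done"
    }

report = evaluation_report(
    [Finding("sql-injection", 1, "PoC request in logs", ["scanner", "verifier"]),
     Finding("weak-hash", 3, "md5 call at auth.py:42", ["scanner"])],
    checks_planned=10, checks_run=10)
print(report["done"], report["coverage"])
# → True 1.0
```

Tracking planned versus executed checks is what distinguishes this from the “wall of unbounded, unverifiable output” Santos warns about: the report says what was covered, not just what was found.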

Importantly, it also uses “safety guardrails that assume the model will, at some point, try to do the wrong thing; and constrain it at the substrate, not the prompt.”
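Constraining the model “at the substrate, not the prompt” means the execution layer enforces policy regardless of what the model requests. A minimal sketch of that idea, with an assumed allow-listed root path rather than anything from Cisco’s spec:

```python
# Sketch of a substrate-level guardrail: the tool layer rejects disallowed
# actions no matter what the model asks for, so a prompt injection cannot
# talk its way around it. Paths and policy are illustrative assumptions.
from pathlib import Path

ALLOWED_ROOT = Path("/workspace/repo").resolve()

def guarded_read(requested: str) -> str:
    """Enforce the policy in code, before any file access happens."""
    path = Path(requested).resolve()  # collapses ../ traversal tricks
    if ALLOWED_ROOT not in path.parents and path != ALLOWED_ROOT:
        raise PermissionError(f"blocked: {path} is outside {ALLOWED_ROOT}")
    return path.read_text()

# A prompt-injected request for a sensitive file fails at the substrate:
try:
    guarded_read("/etc/passwd")
except PermissionError:
    print("denied")
```

The contrast is with prompt-level guardrails (“do not read files outside the repo”), which a sufficiently adversarial input can override; a check in the tool layer cannot be prompted away.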

Like Microsoft’s MDASH, Cisco’s framework is designed to assist human security teams as a starting point for building systems that are tailored to their specific enterprise environments. Santos wrote:

“As with any security tool, the responsibility for implementation, oversight, and final decision-making remains with the user. We provide the blueprint for the guardrails, but it’s up to you to ensure that the ‘human-in-the-loop’ remains the final arbiter of security decisions.”

Foundry Security Spec is built on functional requirements and roles, not specific model parameters, so that it can adapt as models evolve to produce complex reasoning agents.

Cisco executives positioned the framework as infrastructure for an “agentic workforce,” where autonomous AI systems participate directly in development, operations and security processes. The company has also expanded zero-trust controls for AI agents within its identity and access management portfolio.

OpenAI Launches Daybreak as Anthropic’s Project Glasswing Raises the Stakes

OpenAI has laid out its answer to Anthropic’s Project Glasswing cybersecurity initiative with its own Daybreak project, designed to help organizations identify vulnerabilities, validate patches and integrate AI-assisted defense directly into software development pipelines.

The initiative aims to support secure code review, threat modeling, dependency analysis and remediation guidance. The announcement stated:

“Daybreak combines the intelligence of OpenAI models, the extensibility of Codex as an agentic harness, and our partners across the security flywheel.”

The company said the program is being developed alongside security partners including Akamai, Cloudflare, Cisco, CrowdStrike, Fortinet, Palo Alto Networks, Oracle and Zscaler.

OpenAI emphasized that the same capabilities enabling defensive automation could also be misused by cyber attackers, highlighting the need for safeguards, verification systems and accountability controls.

Vendors Respond to Pressure From Mythos

A growing sense of urgency around Anthropic’s Claude Mythos model and its Project Glasswing initiative is accelerating competitive responses across the industry.

Anthropic has claimed Mythos can autonomously identify thousands of high-severity vulnerabilities across major operating systems and browsers, capabilities the company said surpass most human security researchers and are too sensitive for broad public release.

The initiative quickly reframed the cybersecurity conversation from incremental automation to the possibility that AI systems will be capable of discovering and weaponizing exploits at unprecedented scale.

In response, vendors are now positioning their own agentic security systems as defensive counterweights to Mythos-class capabilities.

The competitive dynamic indicates that frontier-model cybersecurity is rapidly becoming a strategic battleground among major AI vendors, cloud providers and enterprise security firms.

For customer experience leaders, the rapid emergence of AI-native cybersecurity platforms introduces new operational considerations beyond traditional IT security. Buyers increasingly need to evaluate vendors on their ability to demonstrate governance and explainability for AI behavior.

That shift may become especially important in regulated industries where platforms now handle sensitive customer interactions, financial workflows and healthcare information through AI-assisted systems.
