A new form of shadow IT is taking shape inside enterprises: employees adopting AI tools without formal oversight. As teams rapidly integrate AI into daily workflows without centralized governance, this “shadow AI” is creating a largely unmonitored layer of hidden access points that can expose sensitive data and disrupt the customer experience.
Shadow AI reflects the simple reality that employees are no longer waiting for centralized approval to experiment with new tools. From drafting customer responses to prototyping workflows, AI is being embedded organically across teams that directly shape customer interactions.
That reality came up in CX Today’s recent roundtable discussion on how AI is reshaping CRM and customer data stacks. As John Kelleher, VP UKI/MEA of Enterprise Sales at Zendesk, said, “In the past you wouldn’t have taken a CRM and played with it yourself… but CIOs are building [apps] in Claude… everybody’s playing with it themselves and creating the thoughts, ‘I could do this within my own function, my own business’.”
“We don’t really talk about it much at the moment, but… that whole shadow AI… is a risk. That creates complexity… that will need to be governed better.”
This decentralized experimentation is accelerating innovation, but it is also creating fragmented, often invisible, extensions of the enterprise tech stack. That speed introduces a new category of operational risk, especially when those tools interact with customer data or production systems.
“I’m seeing it already, the speed with which someone says, ‘I can do this.’ They’re doing it, and actually going live,” Mark Ashton, VP of Solution Consulting, CRM, EMEA, at ServiceNow, added. “When you think about governance, security, control, where the data is coming from, there’s an explosion waiting to happen with all these citizen developers building… At some point, there’ll be a leak of some information.”
How the Vercel Breach Reveals Hidden CX Risks From Shadow AI
A recent breach at U.S. cloud application company Vercel demonstrates how shadow AI can translate into real-world CX impact. The breach originated through a compromised third-party AI tool connected to an employee’s account, which hackers then used to access internal systems and customer-related data.
The incident highlights how quickly a single instance of a seemingly innocuous tool can become a gateway into an enterprise’s core infrastructure.
As Fredrik Almroth, co-founder and security researcher at Detectify, pointed out, the issue lies in how access is structured, and how easily it is extended by third-party tools.
“The Vercel breach is a stark reminder that modern security risks don’t stop at the boundaries of your own systems. They extend to every tool and service your organization is connected to.”
It is becoming common, Almroth warned, for sophisticated attackers to use less-scrutinized third-party tools to take over an employee’s account and infiltrate an organization’s systems. “There was no need to go after Vercel directly, to use brute force, or sophisticated technical knowledge.”
One of the most significant challenges shadow AI introduces is a lack of visibility. While IT teams may maintain oversight of approved vendors, the tools that employees connect independently often fall outside formal tracking.
In Vercel’s case, the employee connected a consumer version of Context.ai, an agentic AI tool, to their Vercel-issued Google Workspace account, allowing its agents to automate actions across the workspace. Context.ai noted that the breach did not affect its enterprise customers, whose deployments run in their own infrastructure.
Almroth cautioned:
“That’s a blind spot many organizations still have. They’ve got a reasonable handle on their known vendors, but the web of third-party tools that employees connect to their work accounts organically, tool by tool, often without a formal approval process, is a different thing entirely.”
“It’s rarely tracked, rarely reviewed, and almost never reconsidered when something goes wrong elsewhere. That’s the gap this incident exposes.”
This lack of visibility has direct CX implications. Vercel initially found that a “limited subset of customers” had non-sensitive environment variables compromised. And its continuing investigation “identified a small number of additional accounts that were compromised as part of this incident.”
That shows how shadow AI can directly result in compromised customer data. And when an incident occurs, enterprises may struggle to quickly identify the source and scope of the breach, delaying customer communications and increasing the risk of inconsistent messaging.
Vercel’s investigation into the breach also turned up signs of further compromise from outside the company.
“We have identified a small number of customer accounts with signs of compromise that appear to be separate from the April 2026 incident,” the bulletin stated. “Based on our investigation to date, these compromises do not appear to have originated on Vercel systems. We have already contacted those accounts and provided them with specific corrective actions to remediate potential risk.”
The update added that “this activity does not appear to be a continuation or expansion of the April incident, nor does it appear to be evidence of an earlier Vercel security incident.”
What Enterprises Can Learn About Protecting Customer Experience
Almroth offers a practical lesson. “Focus less on the label of the tool involved and more on the access chain: which external apps are connected to employee accounts, what those apps are allowed to do, what internal systems those accounts can reach, and whether sensitive credentials would still be exposed if that chain of trust broke.”
As the Vercel incident shows, security events originating in shadow AI environments are likely to surface first in customer-facing channels. Vercel recommends that affected customers enable multi-factor authentication, rotate environment variables and deployments, and review activity logs.
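The log-review step can be partly automated. The sketch below is illustrative only, assuming a simplified, hypothetical log format; real audit logs (Google Workspace admin reports, Vercel activity logs) use different field names and would be fetched via the provider’s API. The idea is the same: flag any activity performed through an integration that never went through formal review.

```python
from dataclasses import dataclass

# Hypothetical log-entry shape -- field names are assumptions,
# not a real provider's schema.
@dataclass
class LogEntry:
    actor: str      # employee account that performed the action
    client_id: str  # OAuth client / integration that made the call
    action: str     # what was done, e.g. "env.read"

# Integrations that went through a formal approval process.
APPROVED_CLIENTS = {"ci-pipeline", "issue-tracker"}

def flag_unapproved_access(entries: list[LogEntry]) -> list[LogEntry]:
    """Return log entries made via clients outside the approved set."""
    return [e for e in entries if e.client_id not in APPROVED_CLIENTS]

entries = [
    LogEntry("dev@example.com", "ci-pipeline", "deploy"),
    LogEntry("dev@example.com", "consumer-ai-tool", "env.read"),
]
for e in flag_unapproved_access(entries):
    print(f"review: {e.actor} used {e.client_id} for {e.action}")
```

Run regularly, a check like this surfaces unsanctioned connections before an attacker does, rather than after a breach notification.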
Whether it is forced password resets, unexpected downtime, or precautionary restrictions, the customer experience becomes the most visible layer of the incident.
The incident also reframes governance. Managing enterprise AI usage means understanding which tools employees are using, how they connect to corporate systems, what permissions they hold, and how quickly those connections can be revoked or contained. It also requires tighter coordination between security, IT, and customer-facing teams so that when incidents occur, responses are both technically effective and customer-aware.
“The organizations that develop real visibility into what’s connected to their systems (and what those connections can actually reach) will be the ones that catch these intrusions before an attacker decides to go public,” Almroth advised.