Shadow AI – the unsanctioned use of artificial intelligence tools by employees without organisational policy, oversight, or formal approval – has become one of the defining governance failures in enterprise technology today.
As Vercel’s recent breach makes clear, the consequences reach well beyond an IT department’s inbox. Customer data, intellectual property, and hard-won regulatory standing are all in play. And with EU AI Act enforcement arriving in August 2026, the window to get ahead of this risk is closing faster than most boards realise.
What Is Shadow AI, and How Did It Become an Enterprise Problem?
The term will be familiar to anyone who spent the 2010s arguing with employees about personal Dropbox accounts and WhatsApp groups. Shadow IT – the use of unsanctioned software to fill gaps left by corporate tooling – cost organisations millions in data exposure and regulatory fines before boards eventually caught up. The pattern is repeating. The difference this time is scale, speed, and sensitivity.
AI tools are more capable, more personally compelling, and more deeply embedded in daily workflows than a shared spreadsheet ever was. The data employees feed into them – customer transcripts, product roadmaps, deal intelligence, HR records – is often far more valuable and legally protected than anything that passed through a rogue Dropbox folder.
We sat down with Gary Hibberd, Head of Consultants Like Us, to get his perspective:
“We are trying to implement AI on top of data chaos. A lot of organisations don’t really understand their current platforms.”
Before shadow AI can be governed, many organisations first need a clearer picture of what data they hold – and where it already lives.
Does the Board Understand the Risk Shadow AI Poses?
In a word: no. And Hibberd does not soften the assessment.
“For most people, AI has only been around since 2022,” he says. The mental model most boards are working from is that of a conversational search tool. The reality – AI embedded in CRM platforms, customer service workflows, contact centre infrastructure, and employee productivity suites – is categorically different, and the risk exposure that comes with it is orders of magnitude larger.
“One of the biggest risks the board is facing is shadow AI,” Hibberd says. “People are using it in the workspace without any real guardrails – policies, procedures, training, explaining to people about not putting confidential data into AI. That could be personal data, but it could also be the intellectual property of the company.”
A recent EY survey found that 99% of organisations surveyed had experienced financial losses from AI-related risks, with compliance failures, flawed outputs, and data exposure among the most common causes. Estimated combined losses across surveyed firms reached $4.4 billion.
Is Shadow AI a Security Failure – or a Leadership One?
This is the reframe most organisations are still missing. Shadow AI is not, at its root, a discipline problem. It is a clarity problem – and that makes it a leadership responsibility.
“AI offers lots of opportunities to get quicker and better at what we do,” Hibberd says. “So people are using it indiscriminately in their organisations without any real forethought about what they’re using it for.” Employees are not acting recklessly; they are responding rationally to capable tools, competitive pressure, and an organisational vacuum where policy should be.
Hibberd describes this as “adoption without clarity.” Organisations have not defined what AI is actually for. Without those answers at the leadership level, individuals fill the vacuum themselves – and the results increasingly land in boardroom risk registers and regulatory investigations.
“None of it’s technical,” Hibberd says of the governance conversation. “It’s business. Security is not an IT risk; it’s a business risk.”
How Should Organisations Respond to Shadow AI?
The instinct is to reach for tools – an AI governance platform, a usage monitoring solution, a vendor agreement. Hibberd’s prescription is to start with the fundamentals:
“My first suggestion would be to look at the EU AI Act,” he says. “We need to think about being fair, transparent, and effective, which are the act’s core principles.” Rather than framing compliance as a legal department exercise, he argues that those three principles offer a practical governance lens that any business leader can engage with, regardless of technical fluency.
The next step is clear ownership:
“You need someone in your organisation who is looking at the broader context of AI – looking at the effectiveness, the fairness, and the transparency of the tools, and then looking at simple security principles: governance, risk, and compliance.”
Standards frameworks such as ISO/IEC 42001 – published in December 2023 as the world’s first international AI management system standard – provide a structured, auditable route to doing precisely this and are increasingly referenced in enterprise procurement as the AI equivalent of ISO 27001 for information security.
Gartner projects that by 2026, 50% of governments worldwide will enforce responsible AI through binding regulations – making a documented, repeatable governance framework no longer optional for vendors operating at scale.
The return on getting the basics right, Hibberd argues, is substantial:
“Understand the basics, understand the foundations, and you’ll quickly find that you’re going to satisfy 70, if not 80% of any issues that you’ve come across.”
The boards that treat shadow AI as a symptom of strategic ambiguity – rather than a problem to be policed – will build governance frameworks that hold. The tools are not going away.
Employee appetite is not going away. And with regulatory enforcement timelines now measured in months, organisations still waiting for a definitive internal policy before acting are already behind.
The only question left is whether leadership has decided what to do – or whether employees will decide for them.
FAQs
What is shadow AI?
Shadow AI refers to the use of artificial intelligence tools by employees without formal organisational approval, policy, or oversight.
Why is shadow AI a risk for businesses?
Employees using unsanctioned AI tools can inadvertently expose sensitive customer data, company intellectual property, and personally identifiable information to third-party platforms outside the organisation’s control.
How is shadow AI different from shadow IT?
Shadow AI carries greater risk than traditional shadow IT because the data employees feed into AI tools – customer transcripts, financial records, strategic plans – is typically far more sensitive than files shared through an unapproved cloud drive.
What should organisations do to address shadow AI?
Organisations should start by defining clear AI objectives, appointing a cross-functional governance lead, and grounding policy in the core principles of the EU AI Act: fairness, transparency, and effectiveness.
Does the EU AI Act cover shadow AI?
The EU AI Act does not target shadow AI directly, but its compliance obligations – particularly around high-risk AI systems and accountability frameworks – apply regardless of whether tools were formally sanctioned at organisational level.