Microsoft Copilot Bug Exposes Confidential Emails, Risking CX Data Security

A Microsoft 365 Copilot flaw that bypassed DLP protections shows how AI convenience can put customer trust and data privacy at risk

Published: February 19, 2026

Nicole Willing

A bug in Microsoft 365 Copilot Chat has allowed the AI assistant to summarize emails marked as confidential, even when customers had data loss prevention (DLP) policies in place.

The issue, which has been confirmed by Microsoft, was first detected on January 21 and tracked as CW1226324. According to service alerts and reporting from BleepingComputer, the bug allowed Copilot’s work tab chat feature to read and summarize emails stored in users’ Sent Items and Drafts folders, including messages protected by sensitivity labels meant to restrict automated access.

Microsoft acknowledged the problem directly in an admin notice:

“Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat.”

“The Microsoft 365 Copilot ‘work tab’ Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured.”

That’s a serious breakdown of trust for any organization relying on Microsoft 365’s governance controls, especially as AI becomes more deeply embedded into daily workflows.

Copilot Chat is designed to be content-aware, pulling information from across Microsoft 365 apps like Word, Excel, Outlook and PowerPoint to help users summarize, draft and analyze work. Microsoft began rolling it out broadly to paying business customers in late 2025, positioning it as a productivity boost for knowledge workers.

But in this case, a code-level defect meant Copilot ignored the safeguards that were supposed to block it from processing sensitive content. Microsoft explained:

“A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place.”

In other words, the labels were applied and the policies were configured, but Copilot bypassed both.

Microsoft said it started rolling out a fix in early February and is still monitoring deployment, contacting a subset of affected customers to confirm the patch is working. There’s no firm timeline for full remediation, and the company hasn’t said how many customers were affected. The incident remains tagged as an advisory.

CX at Risk as AI Tools Handle Communications

While the bug looks like an IT or security issue, it has implications for customer experience, as teams increasingly rely on AI assistants to summarize customer communications and speed up response times.

Email remains a core channel for escalations, legal correspondence and sensitive customer data, especially in regulated industries like healthcare, finance, and government.

An AI tool bypassing declared privacy controls undermines the confidence customers place in how their data is handled. And once shaken, customer trust is hard to rebuild.

The concern isn’t hypothetical. Earlier this week, the European Parliament’s IT department reportedly blocked built-in AI features on staff devices, citing concerns that AI tools could upload confidential correspondence to the cloud. In the UK, the National Health Service reportedly logged the Copilot label bypass issue internally after the alert was reposted on its support portal.

Microsoft’s own documentation notes that sensitivity labels don’t behave consistently across apps:

“Although content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios. For example, in Teams, and in Microsoft 365 Copilot Chat.”

Until the bug is fully resolved, enterprises need to audit their AI integrations against DLP setups and remain vigilant.
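In practice, that audit can start with something as simple as cross-checking which sensitivity labels are expected to exclude Copilot against what activity or export data actually shows. The sketch below is purely illustrative: it assumes a hypothetical CSV export with `sensitivity_label` and `copilot_processed` columns, not any specific Microsoft report format or API, and the label names are placeholders to adapt to your own tenant.

```python
import csv

# Hypothetical audit sketch: compare the access an organization *expects*
# its sensitivity labels to enforce against what an exported activity log
# actually shows. The file layout and column names are assumptions made
# for illustration; adapt them to whatever export your tenant produces.

# Labels that are expected to exclude content from Copilot processing.
EXPECTED_BLOCKED = {"Confidential", "Highly Confidential"}


def audit(export_path: str) -> list[dict]:
    """Return rows where a label that should block AI access was still processed."""
    violations = []
    with open(export_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            label = row.get("sensitivity_label", "").strip()
            processed = row.get("copilot_processed", "").strip().lower() == "true"
            if label in EXPECTED_BLOCKED and processed:
                violations.append(row)
    return violations


if __name__ == "__main__":
    for v in audit("copilot_activity_export.csv"):
        print(f"Label '{v['sensitivity_label']}' was processed by Copilot: "
              f"item {v.get('item_id', 'unknown')}")
```

The point of a check like this is not the code itself but the habit: treat label and DLP enforcement as something to verify continuously, not a setting to configure once and assume holds.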

For CX and service leaders, the issue is a reminder that protected doesn’t always mean what it seems, especially once AI is in the mix.

AI copilots are quickly becoming embedded in customer interactions, and when they work as intended, they deliver results. But when controls fail, they increase the risk of customer data being exposed or stolen.

That highlights the importance of governance maturity. Enterprises adopting AI need to pressure-test how these tools interact with DLP, privacy commitments, and regulatory obligations, rather than simply trusting default settings.

It also raises a harder question: if an AI assistant summarizes information it shouldn't, who owns the fallout, the platform or the brand the customer trusted with their data?
