Is Your Chatbot Giving Away Secrets? How to Stop Whisper Leak Now

Microsoft has exposed a flaw that lets hackers infer what your AI chats about – even through encryption. Here’s how contact centers can close the gap fast

Published: November 10, 2025

Rhys Fisher

Microsoft has uncovered an AI vulnerability that could have significant repercussions for contact centers.

Dubbed ‘Whisper Leak’, the flaw could allow bad actors to discern what someone is discussing with an AI chatbot, such as ChatGPT.

‘But what about the encryption?’, I hear you cry! Worryingly, this weakness doesn’t need to break the encryption at all; it sidesteps it, leaving the protection intact but beside the point.

According to the Microsoft Defender Security Research Team, the Whisper Leak vulnerability allows an attacker who can observe network traffic to infer the topic of an encrypted AI chatbot conversation – even though the content remains unreadable.

That means that while the words remain behind TLS encryption, the patterns of data packets – including their size and timing – reflect the rhythm of the chat and can reveal sensitive subjects.

By analyzing this information and comparing many conversations about specific topics, such as “money laundering,” against unrelated ones, attackers were able to train a classifier to spot the unique patterns associated with each subject.

As a result, Microsoft’s system achieved more than 98 percent accuracy in distinguishing sensitive topics purely from the shape and rhythm of the encrypted traffic.
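
Microsoft has not published its exact model, but the core idea can be sketched with a toy nearest-centroid classifier. Everything here is illustrative: the packet sizes, inter-arrival gaps, and topic labels are hypothetical example values, and a real attack would use far richer features and a stronger model.

```python
# Toy sketch of topic inference from encrypted-traffic metadata.
# A "trace" is a list of (packet_size_bytes, inter_arrival_gap_seconds) tuples
# observed on the wire -- no plaintext is ever seen.

def features(trace):
    """Summarize a packet trace as a small feature vector."""
    sizes = [s for s, _ in trace]
    gaps = [g for _, g in trace]
    return (
        sum(sizes) / len(sizes),   # mean packet size
        max(sizes) - min(sizes),   # size spread
        sum(gaps) / len(gaps),     # mean inter-packet gap
    )

def train_centroids(labeled_traces):
    """Average the feature vectors per topic label (nearest-centroid model)."""
    centroids = {}
    for label, traces in labeled_traces.items():
        vecs = [features(t) for t in traces]
        centroids[label] = tuple(sum(col) / len(col) for col in zip(*vecs))
    return centroids

def classify(trace, centroids):
    """Assign the topic whose centroid is closest in feature space."""
    vec = features(trace)
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(vec, centroids[label]))
    return min(centroids, key=dist)
```

With a handful of labeled traces per topic, the model can then label a fresh, fully encrypted session from its metadata alone:

```python
training = {
    "sensitive": [[(900, 0.05), (850, 0.06), (920, 0.05)],
                  [(880, 0.05), (910, 0.07), (870, 0.06)]],
    "benign":    [[(200, 0.20), (180, 0.25), (210, 0.22)],
                  [(190, 0.21), (220, 0.24), (205, 0.20)]],
}
model = train_centroids(training)
classify([(890, 0.06), (900, 0.05), (860, 0.06)], model)  # → "sensitive"
```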

While no content is ever directly exposed, experts say the pattern data alone could, in theory, reveal what a user is discussing.

According to official guidance, this makes the threat “practical” in environments where a well-resourced adversary could monitor encrypted traffic.

But what exactly does this mean for contact centers?

What Contact Centers Need to Know

Given how ubiquitous AI-powered chatbots have become in contact centers, this could prove hugely problematic.

Indeed, WifiTalents’ 2025 Contact Center Statistics report revealed that 54% of contact centers have reported increased use of chatbots.

A report from Emerge Haus released earlier this year stated that 52% of contact centers have invested in ‘conversational AI’ and an additional 44% intend to.

Elsewhere, Calabrio’s State of the Contact Center report found that 98% of contact centers report using AI in some form.

This rapid adoption reflects the drive for efficiency, lower costs, and better agent support. Yet the pace of roll-out may also have outpaced full scrutiny of privacy implications.

This could prove particularly problematic for contact centers that deploy AI assistants in regulated industries. The research underscores that encrypted communications conceal less than many believe.

Imagine a scenario where a customer chats with a virtual assistant about a refund, a health-claims issue, or a financial investigation.

The actual text may be protected, but if a bad actor can observe the session’s packet pattern, they could infer the subject is “medical claim”, “fraudulent payment”, or “political complaint”.

In a contact center environment where AI assistants handle high-volume, routine queries, or where agents augment responses with AI, this opens up a range of exposure points and areas of concern:

Exposure Points

  • Vulnerable networks (public Wi-Fi, shared agent terminals)
  • Multi-tenant cloud services where packet flows might be observed
  • Metadata linking of customer behavior over time, even if individual queries are anonymized

Areas of Concern

  • Agent-assist vs. direct-bot interactions: Whether the AI chat is customer-facing or supporting an agent behind the scenes, both modes use streaming models and are therefore susceptible to side-channel inference.
  • Metadata risk amplification: Even if you’re complying with GDPR or CCPA around content encryption, regulators are increasingly concerned about metadata (who, when, how often, what topic). In some jurisdictions metadata itself may constitute personal data.
  • Network exposure and vendor risk: Many contact centers rely on hybrid or multi-cloud AI deployments, or agents working remotely on unsecured Wi-Fi. Each of these expands the “surface” where a packet observer could exist.

What Major AI Providers are Already Doing

While the possibility of anonymous bad actors inferring the subject of customer communications is certainly a concern, the good news is that Microsoft has already worked with several major providers, including OpenAI, to combat the Whisper Leak vulnerability.

Typical measures include:

  • Adding random padding or ‘noise’ to responses so packet sizes vary unpredictably
  • Batching tokens so streaming chunk boundaries are less consistent
  • Offering non-streaming modes of interaction (where applicable)
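
The first of these measures, random padding, can be illustrated with a minimal sketch. The length-prefixed framing and padding sizes below are illustrative assumptions, not any provider’s actual wire protocol; real implementations pad inside the transport or application protocol itself.

```python
import secrets

def pad_chunk(token_text: str, max_pad: int = 32) -> bytes:
    """Append a random amount of filler so the on-wire size of each
    streamed chunk no longer tracks the length of the token text.
    (Illustrative framing only, not a real provider's protocol.)"""
    payload = token_text.encode("utf-8")
    pad_len = secrets.randbelow(max_pad + 1)  # 0..max_pad random bytes
    # A 2-byte length prefix lets the receiver strip padding unambiguously.
    return len(payload).to_bytes(2, "big") + payload + b"\x00" * pad_len

def unpad_chunk(wire_bytes: bytes) -> str:
    """Recover the original token text from a padded chunk."""
    n = int.from_bytes(wire_bytes[:2], "big")
    return wire_bytes[2 : 2 + n].decode("utf-8")
```

Because the padding length is drawn fresh for every chunk, two identical tokens produce different packet sizes, which is exactly what starves the traffic-analysis classifier of its signal.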

These changes reduce the effectiveness of the attack to what Microsoft considers “no longer a practical risk” under their test conditions.

However, immediate mitigation doesn’t mean universal safety. Organizations should still take their own practical steps to limit the likelihood of communications being inferred.

These include:

  • Audit your AI vendors: Confirm that your chatbot or assistant providers have safeguards against side-channel attacks like Whisper Leak.
  • Tighten network security: Keep sensitive customer interactions off untrusted networks and enforce VPN or secure tunnel use for remote agents.
  • Review data classification: Reevaluate what counts as “non-sensitive” — if a topic can be inferred from traffic patterns, treat it as sensitive.

The CX Differentiator

Given the slew of high-profile hacks and data breaches in recent months, the importance of security has never been more front of mind.

In the AI contact center era, organizations that demonstrably prioritize customer data security can differentiate themselves and strengthen customer loyalty and trust.

Those organizations that fail to commit the necessary time and resources to this increasingly crucial tenet of CX risk alienating their customers and damaging their overall brand image.
