A Google Calendar Invite, a Hidden Prompt, and a New Kind of AI Security Problem

A potential Google Gemini exploit shows attackers are exploiting AI assistants’ use of natural language to bypass security controls and expose sensitive user data.

5
AI & Automation in CXSecurity, Privacy & ComplianceNews

Published: January 22, 2026

Nicole Willing

Security researchers have sounded the alarm again as large language models (LLMs) are creating a new type of vulnerability that turns natural language interpretation into a potential attack surface.

Researchers at Miggo found that an ordinary Google Calendar invite could be turned into an exploit vector against Google’s Gemini AI assistant, masking malicious instructions to expose or manipulate private data without the victim clicking a link or installing malware.

When the data in the calendar was later read by Gemini during a scheduling query, the model followed hidden instructions, summarizing private meetings and creating a new calendar event including extracted data readable by the attacker. Liad Eliyahu, Head of Research at Miggo, wrote:

“As application security professionals, we’re trained to spot malicious patterns. But what happens when an attack doesn’t look like an attack at all?”

Turning Helpfulness Into an Attack Surface

The vulnerability centered on Gemini’s deep integration with Google Calendar. Gemini is designed to analyze event titles, descriptions, attendees, and timing to answer routine questions like “What’s on my schedule today?” That helpfulness turned out to be an entry point that would allow attackers to bypass Google Calendar’s privacy controls, Eliyahu explained.

“This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction.”

The attack relied on a classic prompt-injection concept but applied it indirectly. Instead of feeding malicious instructions directly to Gemini, the attacker embedded them inside the description field of a calendar invite. When Gemini later analyzed the calendar to answer a normal scheduling question, it also read and executed the hidden instructions.

What made the exploit especially insidious is that nothing about the payload looked obviously dangerous. The embedded instructions were written in everyday language and placed inside an otherwise normal meeting invite, according to Eliyahu.

From the user’s perspective, Gemini would behave normally. When triggered, the prompt instructed Gemini to summarize all meetings on a specific day, write that summary into a newly created calendar event, and then respond to the user with a harmless-sounding message: “It’s a free time slot.”

“The payload was syntactically innocuous, meaning it was plausible as a user request. However, it was semantically harmful.”

Behind the scenes, Gemini created a new calendar event and wrote a full summary of the user’s private meetings in the description field.

In many enterprise environments, that new event would be visible to the attacker, turning the calendar itself into a data exfiltration channel.

Why Old Defenses Don’t Work Anymore

This kind of flaw highlights a deeper problem with how security teams think about AI-powered systems, Eliyahu noted.

Semantic attacks are different from the kinds of security bugs that most threat hunters are used to seeing. Instead of exploiting poorly sanitized strings, broken authentication checks, or unpatched services, the vulnerability showed how malicious intent can be embedded purely in the way text is phrased and then executed by a system designed to understand language.

“AI native features introduce a new class of exploitability. AI applications can be manipulated through the very language they’re designed to understand.”

Typical application vulnerabilities like SQL injection or cross-site scripting rely on manipulating parsers or interpreters by feeding them specially crafted inputs. To that end, traditional application security (AppSec) is largely syntactic, which works when the threat is recognizable by pattern matching. But with semantic vulnerabilities in LLM-powered systems, there’s no obvious code payload, just natural language that the LLM interprets as instructions because it was built to be helpful. Eliyahu pointed out:

“The danger emerges from context, intent, and the model’s ability to act… Vulnerabilities are no longer confined to code. They now live in language, context, and AI behavior at runtime.”

In this case, the chatbot acted as “an application layer with access to tools and APIs.” When natural language becomes the interface, the attack surface becomes much harder to lock down.

The Gemini incident isn’t an outlier. Security teams have reported similar prompt-driven vulnerabilities across AI assistants tied to email, documents, customer support systems, and developer tools. In several cases, attackers were able to extract sensitive customer data or manipulate AI-powered workflows using instructions that appeared to be harmless.

This Gemini calendar flaw wasn’t unique. Similar AI-native vulnerabilities have surfaced across major platforms of late.

Last week, Microsoft patched a Copilot prompt injection vulnerability discovered by researchers at Varonis that allowed attackers to extract personal data with a single click, exposing users’ recent files, and addresses. That exploit also bypassed security controls and caused Copilot to leak personal information when a user clicked the link, even when the chatbot interface wasn’t actively open. Microsoft emphasized that there was no evidence that the Reprompt attack was exploited.

In early 2025, Meta patched a security bug in its Meta AI chatbot after Sandeep Hodkasia, CEO and Founder of AppSecure Security, found that the backend was not properly enforcing authorization checks before returning prompt content. That made it possible to access other users’ private prompts and AI-generated responses by manipulating the unique identifiers assigned to prompt sessions. Meta deployed a fix and confirmed it found no evidence the flaw was exploited. Hodkasia stated:

 “If a platform as robust as Meta.AI can have such loopholes, it’s a clear signal that other AI-first companies must proactively test their platforms before users’ data is put at risk.”

As companies embed LLMs deeper into products that handle customer calendars, inboxes, files, and internal systems, a single misinterpreted sentence can have serious consequences. Eliyahu wrote:

“This Gemini vulnerability isn’t just an isolated edge case. Rather, it is a case study in how detection is struggling to keep up with AI-native threats.”

A New Security Frontier

If defenses built for deterministic software don’t translate cleanly to systems that reason in language, enterprises will need to go beyond filters and blocklists, according to Eliyahu.

They should introduce runtime safeguards that understand intent, tighter control over what tools models are allowed to invoke, and security controls that treat LLMs as full-fledged application layers rather than just chat interfaces.

Broader analysis of AI risk shows that prompt injection, poisoning of model training data, and model theft are among the top concerns for enterprises integrating LLMs into business workflows.

There have been a growing number of instances such as prompt manipulations in customer support chatbots leaking session tokens, zero-click exploits in corporate AI tooling, and even prompt injection embedded in code comments tricking developer assistants into enabling dangerous features.

Semantic ambiguity that phrases a malicious instruction that looks harmless to humans and deep integration make AI security particularly difficult.

AI assistants that can be tricked into revealing or exfiltrating sensitive customer data risk exposing customer data, failure to comply with data protection laws like GDPR and the erosion of customer trust.

Fixes for individual vulnerabilities—like patches from Google and Microsoft—are important, but the broader lesson is that AI security needs to adapt, fast. Eliyahu warned:

“Securing the next generation of AI-enabled products will be an interdisciplinary effort. “Only with that combination can we close the semantic gaps attackers are now exploiting.”

ChatbotsGenerative AISecurity and Compliance

Brands mentioned in this article.

Featured

Share This Post