OpenAI has disclosed a security breach at Mixpanel, a data analytics vendor the GenAI developer used in the frontend of its API product. The incident highlights the growing risk around third-party integrations and the potential for customer data held by the major AI providers to be exposed.
On November 9, 2025, Mixpanel notified OpenAI that an attacker had gained unauthorized access to part of its systems and exported a dataset containing some customer information and analytics data related to the API. Mixpanel shared the affected dataset with OpenAI on November 25, the company stated in a blog post.
The breach occurred within Mixpanel’s systems; there was no unauthorized access to OpenAI’s own infrastructure. ChatGPT and other OpenAI products were not affected. “No chat, API requests, API usage data, passwords, credentials, API keys, payment details, or government IDs were compromised or exposed,” OpenAI stated. It also confirmed that session tokens, authentication tokens, and other sensitive details for OpenAI services were not involved.
But Mixpanel’s systems had access to user profile information from platform.openai.com. According to OpenAI, the information that may have been affected included:
- Names and email addresses
- Operating system, browser, and location (city, state, country) used to access the API account
- Referring websites
- Organization or user IDs associated with the account
OpenAI has removed Mixpanel from its production services and said it is working with the company, as well as other partners, to gauge the scope of the incident and determine whether any further response actions are needed. It is in the process of notifying affected organizations, admins, and users directly by email.
“While we have found no evidence of any effect on systems or data outside Mixpanel’s environment, we continue to monitor closely for any signs of misuse,” the post stated.
The incident is a reminder that exposure of non-critical metadata can introduce security risks, and sharing identifiable customer information with third parties should be avoided. As Ron Zayas, Founder and CEO of Ironwall by Incogni, told CX Today in a recent interview:
“The smart play is to learn how to sanitize your data. You don’t have to share 100 pieces of information on one of your customers with an outside company. It’s stupid. Why are you sharing all that customer information?”
Enterprises often underestimate the value of metadata to attackers, as it doesn’t contain critical information like customers’ login credentials or payment details. But malicious actors use the information to create credible phishing or impersonation campaigns, which are becoming an effective way to deploy ransomware attacks through social engineering. Having a person’s real name, actual email address, location, and confirmation that they use OpenAI’s API makes malicious messages look far more convincing.
OpenAI acknowledged this in the blog post, advising its API users:
“Since names, email addresses, and OpenAI API metadata (e.g., user IDs) were included, we encourage you to remain vigilant for credible-looking phishing attempts or spam.”
Users should “[t]reat unexpected emails or messages with caution, especially if they include links or attachments. Double-check that any message claiming to be from OpenAI is sent from an official OpenAI domain,” the post added. It also encouraged users to protect their account by enabling multi-factor authentication “as a best practice security control” and noted that OpenAI doesn’t request credentials such as passwords, API keys or verification codes through email, text or chat.
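OpenAI’s advice to verify that a message comes from an official domain can be partly automated. The sketch below is illustrative only: the allow-listed domains are assumptions (OpenAI’s actual sending domains may differ), and a domain check alone does not catch spoofed headers, so it complements rather than replaces SPF/DKIM/DMARC validation.

```python
from email.utils import parseaddr

# Hypothetical allow-list -- confirm the real sending domains with the vendor.
OFFICIAL_DOMAINS = {"openai.com", "email.openai.com"}

def sender_is_official(from_header: str) -> bool:
    """Return True only if the From address belongs to an allow-listed domain."""
    _, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    # Accept an exact match or a subdomain of an allowed domain.
    # Note the leading dot: it blocks look-alikes such as "openai.com.attacker.net".
    return any(domain == d or domain.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(sender_is_official("OpenAI <no-reply@email.openai.com>"))        # True
print(sender_is_official("Support <help@openai.com.attacker.net>"))    # False
```

The look-alike case in the second call is exactly the pattern phishing campaigns built on leaked names and emails tend to use.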
Complex AI Stacks Open More Ways In for Attackers
As with recent cyberattacks exploiting third-party platforms, the incident is a reminder that enterprise AI adoption makes API-based architectures more vulnerable. AI systems are too complex for most companies to develop in-house, so they assemble stacks of third-party tools connected via APIs, each of which collects operational metadata and opens up more attack vectors.
While vendors and enterprises are tempted to collect as much customer information as possible to train AI models as well as deliver personalization, they need to be judicious in the types of information they collect and store, Zayas said, as the risk of data breaches in the AI era will become “much more significant.”
“Companies are opening up all of their data and feeding it to an AI engine. And how secure are the AI agents? They’re led by big companies, but big companies get breached all the time.”
Zayas warned that the major AI and cloud providers like OpenAI, Google and AWS will become increasingly vulnerable as hackers target them for their wealth of data:
“When your data is sitting there, you’re going to get attacked. If I can pull out information… from an AI provider, I am going to get so much rich data that I don’t have to worry about attacking a lot of companies… That’s where companies and criminals are putting all their time and effort—going to the big ones. If you’re giving them data, you are much more of a target.”
Enterprises need to get smarter about the data they share with AI tools to get the outcomes they need. Customers’ personally identifiable information can often be removed to anonymize the data without affecting how the tools work, Zayas noted.
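The kind of sanitization Zayas describes can be done at the point where events leave your systems. The sketch below is a minimal illustration, not Mixpanel’s or OpenAI’s actual schema: field names are invented, and the salted hash stands in for whatever pseudonymization scheme a real deployment would use (with a properly managed secret, not a hardcoded one).

```python
import hashlib

# Hypothetical analytics event -- field names are illustrative only.
event = {
    "user_id": "usr_12345",
    "name": "Jane Doe",
    "email": "jane@example.com",
    "city": "Austin",
    "browser": "Chrome",
    "action": "api_key_created",
}

# Direct identifiers that add nothing to usage analytics.
DROP_FIELDS = {"name", "email", "city"}

def sanitize(event: dict, salt: str = "replace-with-managed-secret") -> dict:
    """Drop direct identifiers and replace the user ID with a salted hash,
    so a vendor can still count distinct users without knowing who they are."""
    clean = {k: v for k, v in event.items() if k not in DROP_FIELDS}
    clean["user_id"] = hashlib.sha256(
        (salt + event["user_id"]).encode()
    ).hexdigest()[:16]
    return clean
```

A breach of the downstream vendor then exposes browser and action data, but no name, email, or location, which is precisely the metadata that made this incident useful to phishers.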
“You’re going to see the breaches being more and more related to the amount of information that’s coming out with AI, the amount of information that’s being enriched, and companies are going to suffer from this.”
Enterprises also have to train employees to avoid carelessly uploading spreadsheets and other files to chatbots like ChatGPT, because even if a company’s systems aren’t hacked, malicious actors may be able to extract customer information using certain prompts.
As the adoption of AI tools accelerates, enterprises should treat every handoff to an AI provider as a potential point of exposure of their customer data. Limiting the amount and sensitivity of information sent to these systems and designing workflows that avoid unnecessary data transfer can reduce the impact of a breach, protecting customers as well as the company’s reputation.