Cyber attackers are moving beyond using GenAI for phishing emails and basic scripting, with Google Cloud researchers recently disrupting what they describe as the first zero-day exploit developed with AI.
New findings from the Google Threat Intelligence Group (GTIG) indicate that malicious actors are now experimenting with AI-assisted vulnerability discovery and exploit development, including attempts tied to initial access operations and authentication bypasses.
Google has observed “prominent cyber crime threat actors partnering to plan a mass vulnerability exploitation operation.”
The incident centered on a popular open-source web-based administration tool. Google researchers said that based on the structure and content of the exploits, they had “high confidence” that attackers used AI to identify a logic flaw that enabled them to bypass two-factor authentication (2FA) protections. They do not believe Gemini was used.
Exploit Code Clues Point to LLM Involvement
Several characteristics within the exploit code indicated that a large language model (LLM) was likely involved. According to Google’s analysis, “the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs’ training data.”
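To make those tells concrete, here is a purely hypothetical, defanged sketch of the style GTIG describes. It is not the actual exploit, and the CVE identifier and CVSS score are deliberately fictitious, mirroring the “hallucinated” metadata researchers flagged:

```python
# Hypothetical illustration only: this is NOT the exploit GTIG analyzed.
# The CVE identifier and CVSS score below are deliberately fictitious,
# mimicking the "hallucinated" metadata GTIG describes as an LLM tell.
import requests


def check_admin_panel(base_url: str) -> bool:
    """
    Step 1: Verify that the target administration panel is reachable.

    Vulnerability reference: CVE-2026-0000 (CVSS 9.8 - Critical)

    Note: a working attack script has no practical need for this kind of
    educational docstring; its presence is one of the stylistic clues
    researchers associate with LLM-generated code.
    """
    response = requests.get(f"{base_url}/login", timeout=10)
    return response.status_code == 200


if __name__ == "__main__":
    print(check_admin_panel("https://example.com"))
```

Human-written exploit code tends to be terse and undocumented; verbose tutorial-style scaffolding like the above is what pushed GTIG toward “high confidence” in LLM involvement.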
The researchers said the nature of the vulnerability also pointed toward the growing strengths of frontier AI models in identifying semantic security flaws that traditional tools often miss.
The vulnerability stemmed from a high-level logic error tied to a hardcoded trust assumption embedded within the application, rather than from common coding weaknesses such as memory corruption or improper input sanitization.
While conventional fuzzing and static analysis tools are typically designed to detect vulnerable functions or unsafe inputs, LLMs are increasingly effective at identifying contextual inconsistencies and hardcoded anomalies, the researchers noted.
“Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning, effectively reading the developer’s intent to correlate the 2FA enforcement logic with the contradictions of its hardcoded exceptions.”
“This capability can allow models to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective,” the researchers added.
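The following sketch illustrates the class of flaw being described: a hardcoded trust assumption that contradicts the application’s own 2FA policy. All names are invented; this is not the affected project’s code:

```python
# Hypothetical sketch of the flaw class GTIG describes: a hardcoded trust
# assumption that silently contradicts the app's own 2FA policy. All names
# are invented; this is not the vulnerable project's code.

TRUSTED_CLIENT_ID = "legacy-sync-agent"  # hardcoded exception baked into the app


def requires_second_factor(user: dict, client_id: str) -> bool:
    """Decide whether a login attempt must complete 2FA."""
    if not user.get("totp_enabled", False):
        return False  # user never enrolled in 2FA
    # The policy above says enrolled users must always present a second
    # factor, but this branch quietly contradicts it. Fuzzers and static
    # analyzers see well-formed, crash-free control flow here; only
    # contextual reasoning about developer intent reveals that anyone who
    # sends this client_id string skips 2FA for every account.
    if client_id == TRUSTED_CLIENT_ID:
        return False
    return True


# An attacker who finds the string in the open-source repository simply
# supplies it at login and bypasses 2FA for any targeted account.
assert requires_second_factor({"totp_enabled": True}, "web-ui") is True
assert requires_second_factor({"totp_enabled": True}, "legacy-sync-agent") is False
```

Every code path here is “functionally correct” in the sense that nothing crashes and no input is mishandled, which is exactly why the contradiction is invisible to tools that hunt for unsafe functions rather than broken intent.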
GTIG said it worked with the vendor to fix the vulnerability before broader exploitation could take place.
In its broader analysis of adversarial AI use, GTIG said attackers are increasingly applying LLMs across the attack lifecycle, from high-speed research assistance to malware development and post-compromise activity.
“Since our February 2026 report on AI-related threat activity, Google Threat Intelligence Group (GTIG) has continued to track a maturing transition from nascent AI-enabled operations to the industrial-scale application of generative models within adversarial workflows.”
For instance, the use of AI-enabled malware, such as PROMPTSPY, signals a shift toward autonomous attack orchestration, which allows threat actors to offload operational tasks to AI for scaled and adaptive activity.
“Our analysis of this malware reveals previously unreported capabilities and use cases for its integration with AI,” according to the researchers.
Cybercriminal groups are building infrastructure specifically designed to enable large-scale abuse of AI services while masking their identities and bypassing platform safeguards. Google noted that threat actors are using anonymized premium-tier accounts, automated registration pipelines and professionalized middleware services to help them bypass LLM usage limits.
The researchers also pointed to growing concerns around AI supply chain attacks, particularly campaigns targeting machine learning environments and software dependencies as a way to gain initial access to enterprise systems. Once inside, attackers can pivot from compromised AI software into broader enterprise environments, where they conduct disruptive operations including ransomware deployment and extortion.
Why AI-Driven Cyberattacks Are Becoming a Major CX Issue
The implications of the attacks GTIG uncovered extend well beyond security operations teams.
For customer experience leaders, AI-assisted cyberattacks introduce a new layer of operational risk to customer-facing systems, creating a difficult environment for teams already balancing personalization initiatives with growing digital complexity.
Contact center platforms, self-service portals, mobile apps, identity systems, and cloud-based CRM environments are all potential entry points for attackers seeking to gain access to sensitive data or business operations. Attacks aimed at authentication systems, customer portals, APIs and cloud infrastructure can disrupt digital experiences while exposing customer data.
Google’s findings align with growing concern across the security industry that attackers’ use of AI is compressing the timeline between vulnerability disclosure and active exploitation. Models capable of code analysis and semantic reasoning can help attackers quickly identify weaknesses that traditional scanning tools miss and adapt attack techniques with less manual effort.
A successful zero-day attack against customer-facing infrastructure can quickly trigger cascading business consequences, including customer account lockouts and authentication failures, service disruptions, exposure of customer data, increased contact center volume during outages and the erosion of customer trust.
The challenge is amplified by the expanding use of AI inside customer experience environments. Many enterprises are embedding GenAI into chatbots, agent-assist tools, knowledge systems, workflow automation and personalization engines. While those systems improve efficiency and responsiveness, they also widen the attack surface.
Google’s H1 2026 Cloud Threat Horizons Report found that exploitation of software vulnerabilities has overtaken credential abuse as the primary way attackers compromise cloud environments.
“Threat actors exploited third-party software-based entry (44.5%) more frequently than weak credentials—a significant increase from the 2.9% observed in H1 2025. While weak or absent credential entry fell from 47.1% in H1 to 27.2% in H2, software exploitation overtook credentials as the primary initial access vector for the first time.”
The evolution also changes how organizations think about defensive operations. AI models are increasingly being used for secure coding reviews, anomaly detection, adversarial simulation and automated patch analysis, creating an escalating competition between offensive and defensive AI capabilities.
Traditional security approaches focused mainly on perimeter protection may struggle against AI-assisted attacks that adapt quickly and target application logic flaws. Organizations may need to rethink how CX, security and IT operations collaborate on authentication, incident response and experience monitoring.