Google Detects First Known AI-Built Zero-Day Exploit, Used by Criminals to Plan Mass 2FA Bypass Attack
Source Material
Axios
news · May 12, 2026
AI-assisted hacking is already here, Google warns
“Google's threat intelligence group found evidence of several prominent cyber crime threat actors partnering to identify a bug that would let them bypass two-factor authentication, using AI-assisted code to weaponize the vulnerability.”
First known case
Criminals used AI to discover and build a working zero-day exploit — a documented first
2FA bypass
The exploit targeted a Python-based open-source system to bypass two-factor authentication
Mass attack foiled
Google disrupted a planned mass exploitation event and disclosed the flaw to the vendor
Google's Threat Intelligence Group has identified what researchers believe is the first known case of criminal actors using artificial intelligence to discover and weaponise a zero-day vulnerability (a flaw unknown to the software's vendor) as part of a planned mass exploitation event. The disclosure, published in a Google Cloud blog post and reported widely on 11-12 May 2026, marks a significant escalation in the use of AI as an offensive hacking tool, moving beyond the theoretical scenarios that security researchers had long warned about.
What Google found
According to Google's Threat Intelligence Group, several prominent cybercrime threat actors collaborated to identify a previously unknown bug in a Python script used by a popular open-source system. The vulnerability, once found, could be exploited to bypass two-factor authentication, one of the most widely deployed security controls on the internet. The groups then used AI-assisted code to develop a working exploit for the flaw, turning a bug discovery into a deployable weapon without the extended manual reverse-engineering that such attacks have historically required.
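Google has not disclosed which project was affected or what the flaw actually was. Purely as a hypothetical illustration of the bug class, the sketch below shows a fail-open logic error in a Python TOTP (time-based one-time password) check next to a fail-closed version; every name and detail here is an assumption made for illustration, not information from Google's report.

```python
# Purely hypothetical: Google has not disclosed the actual vulnerability.
# This sketches a "fail-open" logic bug, one classic way a 2FA (TOTP)
# check in a Python codebase can become an authentication bypass.
import base64
import hashlib
import hmac
import struct
import time

def expected_code(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Standard RFC 6238 TOTP code for the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify_vulnerable(stored_secret: str | None, submitted: str) -> bool:
    # BUG (fail-open): an account with no enrolled secret skips the check
    # entirely, so an attacker who can unset the secret bypasses 2FA.
    if stored_secret is None:
        return True
    return submitted == expected_code(stored_secret)  # also not constant-time

def verify_fixed(stored_secret: str | None, submitted: str) -> bool:
    # Fail closed: no enrolled secret means verification cannot succeed,
    # and compare_digest avoids leaking matches through response timing.
    if not stored_secret or not submitted:
        return False
    return hmac.compare_digest(expected_code(stored_secret), submitted)
```

The point of the sketch is the asymmetry the article describes: the fix is a one-line change in discipline, but spotting such a flaw in a large codebase was, until recently, slow expert work.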
Google said it had "high confidence" that an AI model was used in the process of finding and exploiting the zero-day, though the public disclosure did not identify the specific AI system. The company said the criminal groups had planned to deploy the exploit in a mass exploitation event, a coordinated attack across many targets simultaneously, but that Google's proactive counter-discovery may have prevented it from being used. Google has since disclosed the vulnerability to the open-source vendor responsible for the affected software.
Why this is a landmark moment
Security researchers have discussed AI-assisted zero-day discovery as a near-future risk for several years, but documented real-world cases have been rare. The significance of this disclosure is not just that it happened, but that it happened in a criminal context: the work was carried out not by nation-state actors with large research budgets but by cybercrime groups for whom AI lowered the barrier to what had previously required specialised expertise.
John Hultquist, chief analyst at Google's Threat Intelligence Group, cautioned that the publicly documented case is likely not an isolated incident. "For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said, noting that the group's visibility into the threat landscape is necessarily incomplete. The implication is that AI-assisted vulnerability research is already becoming part of the standard toolkit for criminal actors, not just well-resourced state programmes.
State actor activity
Google's broader threat intelligence report noted that North Korean and Chinese state actors are also actively experimenting with AI across a range of offensive cyber operations, including vulnerability discovery, social engineering, and the generation of malicious code. While the zero-day case involved criminal groups, the underlying dynamic (AI lowering the skill floor for sophisticated cyberattacks) applies equally to state-sponsored programmes and is expected to accelerate as AI models improve at code generation and reasoning.
What defenders should take from this
The Google disclosure adds urgency to conversations in the security community about how defenders should respond to AI-assisted attack development. The traditional assumption, that finding and exploiting zero-days requires deep specialist knowledge, is being eroded. AI compresses the time and expertise needed to move from bug discovery to working exploit, a path that historically took days or weeks of expert manual work.
Practical responses are being debated across the security industry. Defenders have access to many of the same AI tools as attackers, and AI-assisted vulnerability scanning and patch prioritisation are already commercially available. However, the asymmetry that concerns researchers is that attackers only need to find one working exploit, while defenders need to close every vulnerability, a disparity that AI-accelerated attack development makes significantly more acute.
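As a rough sketch of what risk-based patch prioritisation means in practice, the snippet below ranks findings by exploitability and exposure rather than raw severity. The Finding fields, weights, and CVE identifiers are illustrative assumptions, not any vendor's actual scoring scheme.

```python
# Illustrative only: fields, weights, and CVE IDs are made up for this sketch.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # base severity score, 0.0-10.0
    exploit_public: bool   # a working exploit is known to circulate
    internet_facing: bool  # the affected asset is reachable from outside

def priority(f: Finding) -> float:
    # Weight exploitability and exposure above raw severity: an attacker
    # needs only one working exploit, so the reachable, already-exploitable
    # flaws get patched first.
    score = f.cvss
    if f.exploit_public:
        score += 5.0
    if f.internet_facing:
        score += 3.0
    return score

findings = [
    Finding("CVE-2026-0001", 9.8, exploit_public=False, internet_facing=False),
    Finding("CVE-2026-0002", 7.5, exploit_public=True, internet_facing=True),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 1))
```

Note that the lower-severity but exploited-and-exposed finding outranks the higher-CVSS one, which is exactly the behaviour the asymmetry argument calls for.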