Google warns hackers are using AI to build zero-day exploit for planned mass cyberattack

Cyber criminals and state-backed hackers are increasingly using generative AI to accelerate exploit development, automate malware operations and scale cyber campaigns, the Google Threat Intelligence Group (GTIG) said in a report published on Monday.

The findings show a shift from limited AI experimentation to large-scale operational deployment, with adversaries using AI to power attacks even as AI infrastructure and software ecosystems become targets themselves.

For the first time, GTIG has identified a real-world zero-day exploit developed with AI assistance.

According to the report, criminal actors built a two-factor authentication (2FA) bypass targeting a popular open-source web administration tool ahead of a planned mass exploitation operation. The campaign was disrupted before deployment after GTIG collaborated with the vendor on responsible disclosure.

Researchers noted that China- and North Korea-linked threat actors have shown sustained interest in AI-supported vulnerability research, including the use of persona-based prompting, automated exploit analysis and agentic frameworks designed to scale reconnaissance and testing activities.

PROMPTSPY and AI-driven malware

On the malware front, the report highlighted PROMPTSPY, an Android backdoor that embeds an autonomous agent feeding the device’s user interface state to Google’s Gemini API, receiving structured commands in return, and executing them (clicking, swiping, navigating) without human oversight.

It can capture biometric data, replay authentication gestures, and even prevent its own uninstallation by rendering an invisible overlay over the “Uninstall” button that silently swallows touch events.
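The core pattern the report describes is a generic observe-decide-act agent loop: serialize the current UI state, send it to a hosted model, parse a structured command from the response, and dispatch it. The fully stubbed sketch below illustrates only that control flow; the function names (`query_model`, `dispatch`, `agent_step`) and the JSON command shape are assumptions for illustration, not taken from the report or any real malware sample, and no model API or device control is actually invoked.

```python
import json

def query_model(ui_state: dict) -> str:
    """Stand-in for a remote model call; returns a JSON command string.

    A real agent would POST ui_state to a hosted model API here.
    This stub always returns a fixed command for demonstration.
    """
    return json.dumps({"action": "click", "target": "button_ok"})

def dispatch(command: dict) -> str:
    """Map a structured command to a (stubbed) device action."""
    handlers = {
        "click": lambda c: f"clicked {c['target']}",
        "swipe": lambda c: f"swiped {c.get('direction', 'up')}",
    }
    handler = handlers.get(command["action"])
    return handler(command) if handler else "ignored"

def agent_step(ui_state: dict) -> str:
    """One observe -> decide -> act iteration, no human in the loop."""
    command = json.loads(query_model(ui_state))
    return dispatch(command)

print(agent_step({"screen": "home", "buttons": ["button_ok"]}))
# prints: clicked button_ok
```

The point of the structured-command layer is that the model never touches the device directly: the implant (or, in legitimate use, an automation agent) only executes actions it knows how to dispatch, which is why unrecognized commands fall through to "ignored".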

Researchers also documented AI-assisted obfuscation techniques in malware linked to Russia-aligned operations, including dynamically generated code and AI-produced decoy logic intended to bypass detection systems.

Google warned that attackers are building professionalized infrastructure to obtain anonymized, large-scale access to premium AI models through proxy relays, automated account creation and trial-abuse schemes.

At the same time, adversaries are targeting the AI software supply chain itself, including open-source AI tooling and model integration layers, to gain initial access to enterprise systems and steal credentials for ransomware and extortion operations.

The company said it is deploying AI defensively through tools such as Big Sleep and CodeMender to identify and patch vulnerabilities, while expanding safeguards across Gemini and related services.

Disclosure: This article was edited by Vivian Nguyen. For more information on how we create and review content, see our Editorial Policy.
