Google Discovers PROMPTFLUX Malware Leveraging AI for Evasion
TL;DR
AI-Powered Malware Detection Evasion
Google's threat intelligence analysts have identified malware that leverages large language models (LLMs) to operate and evade security systems, signaling a shift toward more autonomous and adaptive threats. According to the Google Threat Intelligence Group (GTIG), adversaries are deploying novel AI-enabled malware that dynamically alters its behavior during execution, while underground marketplaces increasingly offer illicit AI tools to cybercriminals. GTIG has also outlined how threat actors misuse LLMs to boost productivity across multiple attack stages, and Help Net Security reports that Google has uncovered several instances of AI-powered malware in the wild.
Examples of AI-Driven Malware
Several instances of AI-powered malware have been observed:
- QuietVault: A credential stealer targeting GitHub and NPM tokens. According to Google's report, it uses an AI prompt together with on-host AI CLI tools to find and exfiltrate secrets.
- PromptSteal: A data miner used by the Russian group APT28 (aka Fancy Bear). It queries Qwen2.5-Coder-32B-Instruct through the Hugging Face API to generate Windows commands for collecting and exfiltrating data (see the API sketch after this list); Help Net Security also covers APT28's use of PromptSteal.
- FruitShell: A reverse shell containing hard-coded prompts designed to bypass detection by LLM-powered security systems, as described in Google's analysis.
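To make the mechanism concrete, here is a minimal, benign Python sketch of the Hugging Face Inference API pattern that PromptSteal reportedly abuses: a client sends a text prompt to a hosted model and receives generated text back. The model ID matches the one named in Google's report; the token and prompt are placeholders, and none of the malware's actual prompts or exfiltration logic is reproduced.

```python
# Minimal, benign sketch of querying a hosted model via the Hugging Face
# Inference API -- only the API-calling pattern described in the report.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",  # model named in GTIG's PromptSteal analysis
    token="hf_xxx",                           # placeholder API token
)

# The caller sends an arbitrary prompt and gets generated text back; malware
# like PromptSteal treats that text as commands to run, which is the abuse.
reply = client.text_generation(
    "Write a one-sentence comment explaining what the Windows 'dir' command does.",
    max_new_tokens=64,
)
print(reply)
```

One practical implication is that such requests look like ordinary API traffic, which helps explain why runtime command generation is difficult to flag.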

Experimental Malware: PromptLock and PromptFlux
- PromptLock: This ransomware uses an LLM to dynamically generate and execute malicious Lua scripts at runtime. NYU Tandon researchers initially developed it.
- PromptFlux: A dropper that uses the Google Gemini API to rewrite its own source code hourly to evade detection. The Hacker News reports on PromptFlux's capabilities.
Both are considered experimental, indicating ongoing development.
PROMPTFLUX Details
PROMPTFLUX is VBScript malware that uses the Gemini API to rewrite its own source code for better obfuscation and evasion. According to the Google Threat Intelligence Group (GTIG), its "Thinking Robot" component queries the LLM (Gemini 1.5 Flash or later) to obtain fresh, detection-evading code. The malware saves the newly obfuscated version in the Windows Startup folder for persistence and attempts to propagate via removable drives and network shares. The Hacker News provides a detailed overview.
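For reference, the snippet below is a minimal, benign sketch of the Gemini API call pattern that GTIG says PROMPTFLUX builds on: a single generate_content request to a Gemini 1.5 Flash model via Google's google-generativeai Python SDK (the malware itself is VBScript; Python is used here purely for illustration). The API key and prompt are placeholders, and the self-rewriting, Startup-folder persistence, and propagation behavior are intentionally not reproduced.

```python
# Minimal, benign sketch of the Gemini API request pattern described by GTIG.
# The prompt is a harmless placeholder; none of PROMPTFLUX's self-rewriting,
# persistence, or propagation logic appears here.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# GTIG reports that PROMPTFLUX queries a Gemini 1.5 Flash model.
model = genai.GenerativeModel("gemini-1.5-flash")

# A generate_content call returns model-written text. PROMPTFLUX reportedly
# treats such a response as regenerated script code; here it is simply printed.
response = model.generate_content(
    "Explain in one sentence what code obfuscation means."
)
print(response.text)
```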
Gemini Abuse by Threat Actors
Google's report details how a China-nexus threat actor misused Gemini to craft lure content, build infrastructure, and develop data exfiltration tools. The actor got around Gemini's safety guardrails by posing as a participant in a capture-the-flag (CTF) exercise. PCMag notes that Google has since implemented safeguards against these techniques.

Other Instances of Gemini Misuse
- Chinese nation-state actor APT41 used Gemini for code obfuscation and for developing C++ and Golang code for its tools.
- Iranian threat actor MuddyWater researched custom malware development, circumventing safety barriers by posing as a student.
- Iranian threat actor APT42 crafted phishing material and developed a "Data Processing Agent" for SQL queries.
- North Korean threat actor UNC1069 generated lure material for social engineering and developed code to steal cryptocurrency.
- North Korean threat actor TraderTraitor developed code, researched exploits, and improved tooling.
The Hacker News provides detailed reports on these incidents.
PromptSteal Malware
PromptSteal, flagged by Ukrainian cyber authorities, is data-mining malware that connects to a Qwen large language model developed by Alibaba Group. According to Google's analysis, it acts as a Trojan, posing as an image generation program while generating commands for execution. It is suspected to be the work of the Russian state-sponsored hacking group APT28 (Fancy Bear), a link PCMag also reports.