Google Discovers PROMPTFLUX Malware Leveraging AI for Evasion

Tags: AI malware, LLM evasion, PromptFlux, QuietVault, PromptSteal, cybersecurity, AI malware detection
Edward Zhou
CEO & Co-Founder

November 6, 2025 3 min read

TL;DR

Attackers are now using Large Language Models (LLMs) to create advanced malware that can evade detection and adapt its behavior. This article covers examples like QuietVault, PromptSteal, and PromptFlux, which leverage AI for everything from exfiltrating secrets to rewriting their own code to bypass security systems. Understanding these AI-driven threats is essential for modern cybersecurity.

AI-Powered Malware Detection Evasion

Google's threat intelligence analysts have identified malware that leverages Large Language Models (LLMs) during operation to evade security systems, signaling a shift toward more autonomous and adaptive malware. According to Google Threat Intelligence Group (GTIG), adversaries are deploying novel AI-enabled malware that dynamically alters its behavior during execution, while underground marketplaces now offer illicit AI tools to cybercriminals. GTIG has also outlined how threat actors misuse LLMs to increase productivity across various attack stages, and Help Net Security reports that Google has uncovered several instances of AI-powered malware in the wild.

Examples of AI-Driven Malware

Several instances of AI-powered malware have been observed:

  • QuietVault: A credential stealer targeting GitHub and NPM tokens. It uses an AI prompt and on-host AI CLI tools to hunt for and exfiltrate secrets, as detailed in Google's report.
  • PromptSteal: Used by Russian APT28 (aka Fancy Bear), this data miner calls the Hugging Face API to query Qwen2.5-Coder-32B-Instruct for Windows commands that collect and exfiltrate data. Help Net Security notes APT28's use of PromptSteal.
  • FruitShell: A reverse shell containing hard-coded prompts designed to bypass detection by LLM-powered security systems, per Google's analysis (see the detection sketch after this list).
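Several of these samples embed their LLM prompts directly in the payload (FruitShell) or shell out to on-host AI CLI tools (QuietVault). Below is a minimal defensive sketch, assuming only the behaviors described above: it scans script files for prompt-like strings and references to well-known LLM API endpoints. The keyword patterns, file extensions, and paths are illustrative assumptions, not published detection signatures.

```python
import os
import re
from pathlib import Path

# Illustrative indicators only: prompt-like phrasing and well-known LLM API hosts.
# These are assumptions for demonstration, not vendor-published signatures.
PROMPT_HINTS = [
    r"you are an? (expert|assistant|security researcher)",
    r"ignore (all )?previous instructions",
    r"respond only with (code|valid .* code)",
]
LLM_ENDPOINTS = [
    "generativelanguage.googleapis.com",   # Gemini API
    "api-inference.huggingface.co",        # Hugging Face Inference API
]
SCRIPT_EXTENSIONS = {".vbs", ".ps1", ".js", ".py", ".lua", ".bat"}

def scan_file(path: Path) -> list[str]:
    """Return the indicators found in a single script file."""
    try:
        text = path.read_text(errors="ignore").lower()
    except OSError:
        return []
    hits = [h for h in LLM_ENDPOINTS if h in text]
    hits += [p for p in PROMPT_HINTS if re.search(p, text)]
    return hits

def scan_tree(root: str) -> None:
    """Walk a directory tree and report scripts that embed prompts or LLM endpoints."""
    for path in Path(root).rglob("*"):
        if path.suffix.lower() in SCRIPT_EXTENSIONS:
            hits = scan_file(path)
            if hits:
                print(f"[suspect] {path}: {hits}")

if __name__ == "__main__":
    scan_tree(os.environ.get("SCAN_ROOT", "."))
```

Such a scan is noisy by design; the point is to surface scripts that carry both a prompt and an inference endpoint for human review, not to classify them automatically.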

[Image: malware using LLMs (courtesy of Help Net Security)]

Experimental Malware: PromptLock and PromptFlux

  • PromptLock: This ransomware uses an LLM to dynamically generate and execute malicious Lua scripts at runtime. NYU Tandon researchers initially developed it.
  • PromptFlux: A dropper that uses the Google Gemini API to rewrite its own source code hourly to evade detection. The Hacker News reports on PromptFlux's capabilities.

Both are considered experimental, indicating ongoing development.

PROMPTFLUX Details

PROMPTFLUX is a VBScript malware that uses the Gemini API to rewrite its own source code for better obfuscation and evasion. Its "Thinking Robot" component queries the LLM (Gemini 1.5 Flash or later) for new code designed to bypass detection, according to Google Threat Intelligence Group (GTIG). The malware saves the obfuscated version in the Windows Startup folder for persistence and attempts to propagate via removable drives and network shares. The Hacker News provides a detailed overview.
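Given GTIG's description of the persistence mechanism, one simple check is to look for the combination of a VBScript in the Startup folder and a reference to the Gemini API host. The sketch below assumes a standard Windows layout; the folder paths and the single indicator string are illustrative assumptions, not a signature taken from Google's report.

```python
import os
from pathlib import Path

# Per GTIG's description, PROMPTFLUX drops an obfuscated VBScript copy into the
# Windows Startup folder and calls the Gemini API to rewrite itself. The host
# string below is the public Gemini API endpoint; the paths are assumptions
# about a typical Windows install, not indicators from the report.
GEMINI_HOST = "generativelanguage.googleapis.com"

def startup_folders() -> list[Path]:
    """Per-user and all-users Startup folders on a typical Windows install."""
    appdata = os.environ.get("APPDATA", "")
    programdata = os.environ.get("PROGRAMDATA", r"C:\ProgramData")
    return [
        Path(appdata) / "Microsoft/Windows/Start Menu/Programs/Startup",
        Path(programdata) / "Microsoft/Windows/Start Menu/Programs/StartUp",
    ]

def check_startup_scripts() -> None:
    """Flag VBScript files in Startup that reference the Gemini API host."""
    for folder in startup_folders():
        if not folder.is_dir():
            continue
        for script in folder.glob("*.vbs"):
            body = script.read_text(errors="ignore")
            if GEMINI_HOST in body:
                print(f"[suspect persistence] {script}")

if __name__ == "__main__":
    check_startup_scripts()
```

Because the malware rewrites itself, string matching on anything other than the API endpoint is brittle; monitoring writes to the Startup folder is the more durable signal.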

Gemini Abuse by Threat Actors

A China-nexus threat actor misused Gemini to craft lure content, build infrastructure, and develop data exfiltration tools, as detailed in Google's report. The actor bypassed Gemini's safety refusals by posing as a participant in a capture-the-flag (CTF) exercise. PCMag notes that Google has since implemented safeguards against these techniques.

[Image: Threat actor Gemini misuse (courtesy of Help Net Security)]

Other Instances of Gemini Misuse

  • China-nexus threat actor APT41 used Gemini for code obfuscation and to develop C++ and Golang code for its tooling.
  • Iranian group MuddyWater researched custom malware development, circumventing safety barriers by posing as a student.
  • Iranian actor APT42 crafted phishing material and developed a "Data Processing Agent" for SQL queries.
  • North Korean threat actor UNC1069 generated lure material for social engineering and developed code to steal cryptocurrency.
  • TraderTraitor developed code, researched exploits, and improved tooling.

The Hacker News provides detailed reports on these incidents.

PromptSteal Malware

PromptSteal, flagged by Ukrainian cyber authorities, is a data-mining malware that connects to a Qwen large language model developed by Alibaba Group. It poses as an image-generation program and uses the model to generate the commands it executes, as confirmed by Google's analysis. It is suspected to be the work of the Russian state-sponsored hacking group APT28 (Fancy Bear), a connection PCMag also reports.
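Because PromptSteal is described as calling a hosted model at runtime to obtain the commands it runs, one coarse detection angle is to flag processes that hold connections to LLM inference endpoints without a clear business reason. The sketch below is a best-effort illustration, not the method used by Google or Ukrainian CERT; the process allow-list and the psutil-based approach are assumptions, and enumerating connections typically requires elevated privileges.

```python
import socket

import psutil  # third-party: pip install psutil

# Hostnames of LLM inference APIs that the reporting associates with this tradecraft.
# Resolving them at scan time is a rough heuristic (CDN IPs rotate), and the
# process allow-list below is purely an illustrative assumption.
LLM_API_HOSTS = [
    "api-inference.huggingface.co",        # queried by PromptSteal per Google's report
    "generativelanguage.googleapis.com",   # Gemini API, used by PROMPTFLUX
]
EXPECTED_PROCESSES = {"python.exe", "chrome.exe", "msedge.exe"}  # example allow-list

def resolve_hosts(hosts: list[str]) -> set[str]:
    """Resolve the API hostnames to their current IP addresses (best effort)."""
    ips: set[str] = set()
    for host in hosts:
        try:
            for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
                ips.add(info[4][0])
        except socket.gaierror:
            continue
    return ips

def flag_unexpected_llm_traffic() -> None:
    """Report processes holding connections to LLM API IPs that are not allow-listed."""
    llm_ips = resolve_hosts(LLM_API_HOSTS)
    for conn in psutil.net_connections(kind="inet"):
        if not (conn.raddr and conn.pid and conn.raddr.ip in llm_ips):
            continue
        try:
            name = psutil.Process(conn.pid).name()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        if name.lower() not in EXPECTED_PROCESSES:
            print(f"[review] pid={conn.pid} process={name} -> {conn.raddr.ip}")

if __name__ == "__main__":
    flag_unexpected_llm_traffic()
```

In an enterprise setting the same idea is better expressed as a proxy or DNS policy (alerting on LLM API domains from hosts that have no approved AI tooling) rather than a per-host script.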

Edward Zhou
CEO & Co-Founder

CEO & Co-Founder of Gopher Security, leading the development of Post-Quantum cybersecurity technologies and solutions.
