Criminals Profit from Growing Market for Illicit AI Tools

Alan V Gutnov

Director of Strategy

 
November 7, 2025 · 2 min read

TL;DR

Criminals are rapidly developing and selling AI-powered tools for cybercrime, including advanced malware and automated attack systems. These illicit AI tools are maturing on underground forums, making complex cyberattacks more accessible to a wider range of actors. This trend, exemplified by malware like PromptFlux, poses significant new challenges for global cybersecurity defenses.

AI-Powered Crime Tools on the Rise

The Google Threat Intelligence Group (GTIG) has observed a growing trend of criminals developing and selling AI tools for illicit activities, including AI-powered malware and other cybercrime enablers traded on underground forums. The marketplace for these tools is maturing, lowering the barrier to entry for complex cyberattacks and posing a significant challenge to global cybersecurity.

"Just-in-Time" AI Malware

LLMs are now being embedded directly into malware. PromptFlux, for example, calls Gemini's API to request new VBScript obfuscation techniques, enabling "just-in-time" self-modification that helps it evade signature-based detection. Google's report indicates that criminals are experimenting with LLMs to develop dynamic obfuscation techniques and are already deploying this code against victims; while the samples observed so far appear to still be in a testing phase, they have the potential to become far more dangerous.
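
The defensive takeaway is that a static signature cannot keep up with a script that rewrites itself on every run. As a rough, defender-side illustration (not drawn from Google's report), the Python sketch below shows one behavioral heuristic a blue team might script: flag a monitored script whose hash drifts between scans or whose contents reference a generative-AI API endpoint. The file path, baseline hash, and endpoint list are hypothetical placeholders, not indicators from the GTIG findings.

```python
import hashlib
import re
from pathlib import Path

# Hostnames of popular generative-AI APIs. Outbound references to these from a
# script interpreter (wscript.exe, cscript.exe, powershell.exe) are unusual on
# most endpoints and worth a closer look. Hypothetical, non-exhaustive list.
LLM_API_HOSTS = re.compile(
    r"generativelanguage\.googleapis\.com"   # Gemini API
    r"|api\.openai\.com"
    r"|api\.anthropic\.com",
    re.IGNORECASE,
)


def fingerprint(path: Path) -> str:
    """Return the SHA-256 hash of the script's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def inspect_script(path: Path, last_known_hash: str | None) -> list[str]:
    """Flag two behaviors associated with 'just-in-time' self-rewriting scripts:
    (1) the file's hash drifting between scans, and
    (2) the script text referencing a generative-AI API endpoint."""
    findings = []
    if last_known_hash and fingerprint(path) != last_known_hash:
        findings.append(f"{path.name}: contents changed since the last scan (hash drift)")
    if LLM_API_HOSTS.search(path.read_text(errors="ignore")):
        findings.append(f"{path.name}: references a generative-AI API endpoint")
    return findings


if __name__ == "__main__":
    script = Path("suspicious_startup.vbs")  # hypothetical path to a monitored script
    baseline = "0" * 64                      # hypothetical hash recorded on a prior scan
    if script.exists():
        for finding in inspect_script(script, baseline):
            print("[ALERT]", finding)
```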

Underground Marketplaces

Underground forums now offer purpose-built AI tools that let criminals carry out complex cyberattacks without specialized skills, assisting with tasks such as generating phishing emails, creating deepfakes, and automating attack workflows. Google notes that the market's growth is fueled by easy access to powerful AI models, which allows even low-skilled criminals to customize tools for malicious purposes.

Sophisticated Tactics

Cybercriminals are using AI to analyze vast datasets and identify vulnerabilities faster than human operators could. Autonomous AI agents are being developed to carry out attacks with minimal human oversight, outpacing traditional defenses. Within malware, this includes dynamic script generation, code obfuscation, and the on-demand creation of malicious functions.

Examples of AI-Powered Malware

Google has identified several AI-powered malware families, including:

  • PromptFlux: A VBScript dropper that uses Google's Gemini LLM to generate obfuscated variants of its own VBScript code.
  • FruitShell: A PowerShell reverse shell that establishes remote command-and-control access.
  • QuietVault: A JavaScript credential stealer targeting GitHub/NPM tokens.
  • PromptLock: An experimental ransomware that uses LLM-generated Lua scripts to steal and encrypt data.

PromptFlux \"Thinking\" function
Image courtesy of BleepingComputer

Abuse of Gemini

Threat actors have been found abusing Gemini across the entire attack lifecycle. Chinese actors posed as capture-the-flag participants to bypass safety filters and obtain exploit details. Iranian hackers used Gemini for malware development and debugging, while others used it for phishing, data analysis, and code assistance. North Korean groups used Gemini to support cryptocurrency theft and to create deepfake lures.

Market Dynamics and Growth Factors

Easy access to powerful AI models continues to fuel the illicit AI market's growth. Criminals are selling AI tools on dark web marketplaces under a model similar to ransomware-as-a-service, and as AI is embedded into more infrastructure, the attack surface available for exploitation keeps expanding.

Alan V Gutnov

Director of Strategy

 

MBA-credentialed cybersecurity expert specializing in Post-Quantum Cybersecurity solutions with proven capability to reduce attack surfaces by 90%.

Related News

Critical runC Vulnerabilities Allow Container Escape in Docker, Kubernetes

Urgent! Three severe runC flaws allow container escape in Docker & Kubernetes. Update now to protect your systems from root access. Learn more!

By Alan V Gutnov · November 11, 2025 · 4 min read

AIVSS: Bridging AI Security Gaps for Safer Applications

Discover the OWASP AI Vulnerability Scoring System (AIVSS) for assessing AI security risks. Learn about its framework, deliverables, and how it closes the gap with CVSS. Explore the AIVSS calculator and join the project!

By Alan V Gutnov · November 10, 2025 · 5 min read

Google Discovers PROMPTFLUX Malware Leveraging AI for Evasion

Discover how threat actors are weaponizing AI & LLMs like Gemini for sophisticated malware evasion and attacks. Learn about PromptFlux, QuietVault & more. Stay protected!

By Edward Zhou · November 6, 2025 · 3 min read

Explore Current Cybersecurity Job Opportunities and Resources

Explore a vast landscape of cybersecurity job opportunities worldwide for 2025. Find your next role in on-site, hybrid, or remote positions. Start your career search now!

By Alan V Gutnov · November 5, 2025 · 9 min read