Google AI Chatbot and Gemini Flaw Enable New Phishing Attacks

Edward Zhou
Edward Zhou

CEO & Co-Founder

 
July 16, 2025 3 min read

Google AI Chatbot Target of Potential Phishing Attacks

Researchers discovered a security threat in Google's artificial intelligence chatbot. AI security company 0din flagged the problem after a vulnerability in Google Gemini was reported by cybersecurity publication Dark Reading. The issue involves a prompt-injection flaw that allows cybercriminals to design phishing or vishing campaigns by embedding malicious instructions into emails that appear to be legitimate Google security warnings.

Google Debuts Improved Version of Gemini AI Tool

Image courtesy of PYMNTS

According to 0din researcher Marco Figueroa, if a recipient clicks “Summarize this email,” Gemini treats the hidden admin prompt as its top priority. This means the victim may only see a fabricated ‘security alert’ in the AI-generated summary. For instance, a proof of concept demonstrated an invisible prompt in an email warning that the reader's Gmail password had been compromised, urging them to call a specific number, potentially leading to credential harvesting.

Google has discussed some defenses against these types of attacks in a company blog post. A spokesperson mentioned that Google is in “mid-deployment on several of these updated defenses.”

Google Gemini Flaw Hijacks Email Summaries for Phishing

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites. Such attacks utilize indirect prompt injections that are hidden inside an email and executed by Gemini when generating message summaries.

Gmail

Image courtesy of BleepingComputer

A prompt-injection attack was disclosed through 0din, demonstrating how attackers can manipulate Gemini's output. The malicious instruction can be hidden in an email's body using HTML and CSS to set the font size to zero and color to white, rendering it invisible.

When a user requests a summary, Gemini parses the hidden directive and executes it. An example from the report showed Gemini including a security warning about a compromised Gmail password in its output, misleading users into believing the danger was real.

To counteract these attacks, security teams can remove or neutralize content styled to be hidden and implement post-processing filters to flag messages containing urgent alerts, URLs, or phone numbers for review. Users should also exercise caution and not consider Gemini summaries as authoritative security alerts.

Google Gemini Bug Turns Gmail Summaries into Phishing Attack

A security researcher uncovered a method to trick the AI-generated email summary feature of Google Gemini into promoting harmful instructions. The technology, which can automatically post email summaries, can be exploited to deliver phishing messages.

Gemini Gmail

Image courtesy of PCMag

The flaw allows malicious emails with hidden instructions to mislead Gemini into displaying fake warnings in email summaries, such as claiming a user's Gmail password has been compromised. This can result in users being directed to call a fraudulent number for assistance.

Mozilla's 0DIN program disclosed this vulnerability, illustrating how attackers can embed hidden prompts in emails. Google is actively working to strengthen its defenses against such attacks, as noted in a blog post.

Investigation Reveals Google Gemini for Workspace Flaw

Mozilla's 0-Day Investigative Network disclosed that Google Gemini for Workspace could be exploited by embedding malicious prompts in email summaries. This attack enables the AI to communicate false alerts to users regarding their accounts.

Google Gemini logo

Image courtesy of Tom's Hardware

The attack requires an email with a hidden malicious prompt. When users ask Gemini to summarize the email, the AI outputs the false security alert. The hidden text can be styled to be invisible, making it more likely that users will fall for the scam.

The ongoing threat emphasizes the need for organizations to treat AI assistants as part of their attack surface. Security teams must implement measures to monitor and isolate these tools to prevent exploitation.

Incorporating robust security measures is essential for users relying on AI technologies. Being aware of the potential risks associated with such tools can help in mitigating these threats effectively.

Edward Zhou
Edward Zhou

CEO & Co-Founder

 

CEO & Co-Founder of Gopher Security, leading the development of Post-Quantum cybersecurity technologies and solutions.

Related News

AI-Driven Cybersecurity Innovations: The Future of Threat Prevention
AI agents security

AI-Driven Cybersecurity Innovations: The Future of Threat Prevention

AI agents are prime targets for cyberattacks. Discover evolving threats like prompt injection & AI-powered exploits, and learn how to fortify your defenses. Read now!

By Brandon Woo January 22, 2026 5 min read
common.read_full_article
GootLoader Malware Evades Detection Using Nested ZIP Archives
GootLoader

GootLoader Malware Evades Detection Using Nested ZIP Archives

GootLoader is back with advanced tricks, using malformed ZIPs to bypass security & target businesses. Learn how to detect and defend against this threat. Protect your assets!

By Edward Zhou January 21, 2026 3 min read
common.read_full_article
WhisperPair Vulnerability: Millions of Bluetooth Devices at Risk
WhisperPair attack

WhisperPair Vulnerability: Millions of Bluetooth Devices at Risk

Millions of Bluetooth audio devices are at risk from the WhisperPair vulnerability. Learn how attackers can eavesdrop and track your devices, and what you can do to protect yourself. Update your firmware now!

By Jim Gagnard January 20, 2026 3 min read
common.read_full_article
Tech Hiring Growth: 12-15% Increase in AI and Data Jobs by 2026
India tech job market

Tech Hiring Growth: 12-15% Increase in AI and Data Jobs by 2026

India's tech job market is set for a 12-15% surge in 2026, creating 1.25 lakh roles. Discover key sectors and skills in demand. Read more!

By Edward Zhou January 19, 2026 3 min read
common.read_full_article