Google AI Chatbot and Gemini Flaw Enable New Phishing Attacks

Edward Zhou
Edward Zhou

CEO & Co-Founder

 
July 16, 2025
3 min read

Google AI Chatbot Target of Potential Phishing Attacks

Researchers discovered a security threat in Google's artificial intelligence chatbot. AI security company 0din flagged the problem after a vulnerability in Google Gemini was reported by cybersecurity publication Dark Reading. The issue involves a prompt-injection flaw that allows cybercriminals to design phishing or vishing campaigns by embedding malicious instructions into emails that appear to be legitimate Google security warnings.

Google Debuts Improved Version of Gemini AI Tool

Image courtesy of PYMNTS

According to 0din researcher Marco Figueroa, if a recipient clicks “Summarize this email,” Gemini treats the hidden admin prompt as its top priority. This means the victim may only see a fabricated ‘security alert’ in the AI-generated summary. For instance, a proof of concept demonstrated an invisible prompt in an email warning that the reader's Gmail password had been compromised, urging them to call a specific number, potentially leading to credential harvesting.

Google has discussed some defenses against these types of attacks in a company blog post. A spokesperson mentioned that Google is in “mid-deployment on several of these updated defenses.”

Google Gemini Flaw Hijacks Email Summaries for Phishing

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites. Such attacks utilize indirect prompt injections that are hidden inside an email and executed by Gemini when generating message summaries.

Gmail

Image courtesy of BleepingComputer

A prompt-injection attack was disclosed through 0din, demonstrating how attackers can manipulate Gemini's output. The malicious instruction can be hidden in an email's body using HTML and CSS to set the font size to zero and color to white, rendering it invisible.

When a user requests a summary, Gemini parses the hidden directive and executes it. An example from the report showed Gemini including a security warning about a compromised Gmail password in its output, misleading users into believing the danger was real.

To counteract these attacks, security teams can remove or neutralize content styled to be hidden and implement post-processing filters to flag messages containing urgent alerts, URLs, or phone numbers for review. Users should also exercise caution and not consider Gemini summaries as authoritative security alerts.

Google Gemini Bug Turns Gmail Summaries into Phishing Attack

A security researcher uncovered a method to trick the AI-generated email summary feature of Google Gemini into promoting harmful instructions. The technology, which can automatically post email summaries, can be exploited to deliver phishing messages.

Gemini Gmail

Image courtesy of PCMag

The flaw allows malicious emails with hidden instructions to mislead Gemini into displaying fake warnings in email summaries, such as claiming a user's Gmail password has been compromised. This can result in users being directed to call a fraudulent number for assistance.

Mozilla's 0DIN program disclosed this vulnerability, illustrating how attackers can embed hidden prompts in emails. Google is actively working to strengthen its defenses against such attacks, as noted in a blog post.

Investigation Reveals Google Gemini for Workspace Flaw

Mozilla's 0-Day Investigative Network disclosed that Google Gemini for Workspace could be exploited by embedding malicious prompts in email summaries. This attack enables the AI to communicate false alerts to users regarding their accounts.

Google Gemini logo

Image courtesy of Tom's Hardware

The attack requires an email with a hidden malicious prompt. When users ask Gemini to summarize the email, the AI outputs the false security alert. The hidden text can be styled to be invisible, making it more likely that users will fall for the scam.

The ongoing threat emphasizes the need for organizations to treat AI assistants as part of their attack surface. Security teams must implement measures to monitor and isolate these tools to prevent exploitation.

Incorporating robust security measures is essential for users relying on AI technologies. Being aware of the potential risks associated with such tools can help in mitigating these threats effectively.

Edward Zhou
Edward Zhou

CEO & Co-Founder

 

CEO & Co-Founder of Gopher Security, leading the development of Post-Quantum cybersecurity technologies and solutions.

Related News

2026 Cybersecurity Trends: Dominance of Vulnerability Exploits
vulnerability exploits

2026 Cybersecurity Trends: Dominance of Vulnerability Exploits

Vulnerability exploits now account for 40% of cyber intrusions, surpassing phishing. Learn how shrinking patch windows and edge device targets are changing security.

By Brandon Woo April 6, 2026 3 min read
common.read_full_article
Surge in Vulnerability Exploits: Cyber Intrusions Trends 2026
cybersecurity trends 2026

Surge in Vulnerability Exploits: Cyber Intrusions Trends 2026

Vulnerability exploits now drive 40% of cyberattacks as hackers weaponize flaws within hours. Learn why traditional patching is failing and how to adapt. Read more.

By Divyansh Ingle March 30, 2026 3 min read
common.read_full_article
Surge in Vulnerability Exploits Dominates 2026 Cyber Intrusions
Vulnerability Exploitation

Surge in Vulnerability Exploits Dominates 2026 Cyber Intrusions

Hackers are weaponizing zero-days within hours of disclosure, leaving traditional patch cycles in the dust. Learn how to bridge the security gap with MFA and Zero-Trust.

By Alan V Gutnov March 23, 2026 4 min read
common.read_full_article
Vulnerability Exploits Dominate Cyber Intrusions in 2026 Trends
vulnerability exploits

Vulnerability Exploits Dominate Cyber Intrusions in 2026 Trends

Exploits are the leading cause of cyber intrusions, outpacing phishing. Discover the latest trends and essential strategies to protect your organization. Read now!

By Brandon Woo March 16, 2026 3 min read
common.read_full_article