Critical LangChain Vulnerability Risks AI Secrets and Workflows
Critical Vulnerabilities in LangChain: A Technical Deep Dive
LangChain, a popular open-source framework for building applications with large language models (LLMs), has recently been the subject of several critical vulnerability disclosures. These vulnerabilities, if exploited, could lead to serious security breaches, including remote code execution and sensitive data exposure. This article provides a technical analysis of these vulnerabilities and offers guidance on how to mitigate the risks.
"LangGrinch" Vulnerability in langchain-core (CVE-2025-68664)
A critical vulnerability, dubbed "LangGrinch" and tracked as CVE-2025-68664, has been identified in langchain-core, the foundational library behind LangChain-based agents. The vulnerability carries a Common Vulnerability Scoring System (CVSS) score of 9.3 and could allow attackers to exfiltrate sensitive secrets and potentially achieve remote code execution under certain conditions.
The vulnerability is a serialization and deserialization injection flaw in langchain-core's built-in helper functions. An attacker can exploit it by using prompt injection to steer an AI agent into generating crafted structured outputs that include LangChain's internal marker key ("lc"). Because the marker key is not properly escaped during serialization, the data can later be deserialized and interpreted as a trusted LangChain object rather than untrusted user input, according to the advisory.
"What makes this finding interesting is that the vulnerability lives in the serialization path, not the deserialization path,” explained Yarden Porat, a security researcher at Cyata. “In agent frameworks, structured data produced downstream of a prompt is often persisted, streamed and reconstructed later. That creates a surprisingly large attack surface reachable from a single prompt.”
Successful exploitation can lead to full environment variable exfiltration via outbound HTTP requests, potentially exposing cloud provider credentials, database and RAG connection strings, vector database secrets, and large language model API keys. Cyata Security Ltd. researchers identified 12 distinct reachable exploit flows.
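To make the mechanics concrete, the snippet below is a minimal sketch of the serialization format at the heart of the flaw. It uses langchain-core's own dumps() and loads() helpers to show how objects round-trip through JSON keyed by the "lc" marker, and why an unescaped, attacker-shaped dictionary of the same form is revived as a trusted object instead of staying inert data. The payload shown is purely illustrative and is not the exploit chain from the advisory.

```python
# Minimal sketch of the "lc" serialization format (illustrative only; the
# attacker-shaped payload below is an assumption, not the advisory's exploit).
from langchain_core.load import dumps, loads
from langchain_core.messages import HumanMessage

# Legitimate round trip: dumps() emits JSON keyed by LangChain's "lc" marker.
original = HumanMessage(content="hello")
print(dumps(original))  # {"lc": 1, "type": "constructor", "id": [...], ...}

# Untrusted structured output that merely mimics that shape. If it is stored
# and later fed back through loads() without the "lc" key being escaped, it
# comes back as a live LangChain object rather than plain data.
untrusted = (
    '{"lc": 1, "type": "constructor", '
    '"id": ["langchain", "schema", "messages", "HumanMessage"], '
    '"kwargs": {"content": "attacker-controlled"}}'
)
revived = loads(untrusted)
print(type(revived))  # a HumanMessage instance, not a dict
```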
Patches are available in langchain-core versions 1.2.5 and 0.3.81. Organizations are urged to update immediately.
Impacted Versions:
- LangChain Core: Versions < 0.3.81
- LangChain: Versions >= 1.0.0 and < 1.2.5
The patch fixes the escaping logic in the serialization functions, ensuring that user-controlled “lc” keys are treated as harmless data rather than actionable commands.
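As a quick sanity check after upgrading, a short script along these lines can confirm whether the installed langchain-core release already carries the fix. The thresholds come from the patched versions listed above; the packaging library used for version comparison is an assumption, not a LangChain requirement.

```python
# Hedged version check against the patched releases noted above
# (0.3.81 for the 0.3.x line, 1.2.5 for the 1.x line).
from importlib.metadata import version
from packaging.version import Version  # assumption: packaging is installed

installed = Version(version("langchain-core"))
required = Version("1.2.5") if installed >= Version("1.0.0") else Version("0.3.81")
if installed >= required:
    print(f"langchain-core {installed}: patched")
else:
    print(f"langchain-core {installed}: vulnerable, upgrade to {required} or later")
```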
LangChain Vulnerability Exposes AI Workflows to RCE (CVE-2024-36480)
A separate LangChain vulnerability, tracked as CVE-2024-36480, allows for remote code execution (RCE) under certain conditions. This flaw stems from unsafe evaluation in custom tools, where the use of the eval() function or similar execution contexts without proper sanitization creates a direct vector for RCE.
LangChain's flexibility, while enabling powerful integrations, requires developers to implement strict input validation. Without it, attackers can inject malicious payloads that compromise the system’s integrity.
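The sketch below contrasts a naive custom tool that passes model output straight to eval() with one that validates its input first. Both tools and their names are hypothetical examples for illustration, not code taken from LangChain.

```python
# Hypothetical custom tools showing the unsafe pattern and a validated
# alternative (tool names and logic are examples, not LangChain code).
import re

from langchain_core.tools import tool


@tool
def calculator_unsafe(expression: str) -> str:
    """Evaluate a math expression supplied by the model."""
    # DANGEROUS: model output reaches eval() unchecked, so a prompt-injected
    # payload such as "__import__('os').system('id')" runs on the host.
    return str(eval(expression))


@tool
def calculator_restricted(expression: str) -> str:
    """Evaluate a math expression after strict input validation."""
    # Allow only digits, whitespace, and basic arithmetic characters, then
    # evaluate with builtins stripped from the environment.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        return "rejected: expression contains disallowed characters"
    return str(eval(expression, {"__builtins__": {}}, {}))
```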
LangSmith, a platform for debugging and monitoring LangChain applications, can inadvertently expose the same risks if integrated with unsafe tools. In some configurations, LangSmith allowed evaluation of tool definitions that included eval() or other unsafe functions, expanding the attack surface.
The vulnerability was discovered by cybersecurity researcher Bar Lanyado and responsibly disclosed to LangChain’s maintainers. It was officially designated CVE-2024-36480 and received a CVSS v3.1 base score of 9.0, categorizing it as critical.
Vulnerabilities in LangChain Gen AI (CVE-2023-46229, CVE-2023-44467)
Palo Alto Networks researchers identified two vulnerabilities in LangChain:
- CVE-2023-46229
- CVE-2023-44467 (LangChain experimental)
CVE-2023-46229: Server-Side Request Forgery (SSRF)
CVE-2023-46229 is a server-side request forgery (SSRF) vulnerability affecting LangChain versions earlier than 0.0.317. It allows attackers to obtain sensitive information from intranet resources by supplying a crafted sitemap.
A malicious actor could include URLs to intranet resources in the provided sitemap. This can result in SSRF and the unintentional leakage of sensitive data when content from the listed URLs is fetched and returned.
To mitigate this vulnerability, LangChain has added a function called _extract_scheme_and_domain and an allowlist that lets users control allowed domains.
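As a hedged illustration, the loader below uses filter_urls, an existing SitemapLoader option, as an explicit allowlist so that only URLs under the expected public domain are fetched; the domain-restriction behavior added by the fix should be verified against the langchain-community version actually installed.

```python
# Sketch of constraining which URLs a sitemap load may fetch. filter_urls is
# an existing SitemapLoader option; the patched loader additionally checks
# domains via _extract_scheme_and_domain (verify against your installed
# langchain-community release).
from langchain_community.document_loaders.sitemap import SitemapLoader

loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",
    # Explicit allowlist: only fetch pages under the expected public domain.
    filter_urls=[r"^https://example\.com/"],
)
docs = loader.load()
print(f"loaded {len(docs)} documents")
```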
CVE-2023-44467: Prompt Injection in LangChain Experimental
CVE-2023-44467 is a critical prompt injection vulnerability identified in LangChain Experimental versions before 0.0.306. It affects PALChain, a chain designed to give language models the ability to generate and execute code solutions.
The flaw allows attackers to abuse PALChain's code-generation capabilities through prompt injection, enabling them to execute harmful commands or code that the system was never intended to run.
The pull request langchain-ai/langchain#11233 expands the blocklist to cover additional functions and methods, further mitigating the risk of unauthorized code execution.
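For context, here is a hedged sketch of how PALChain is typically wired up. The allow_dangerous_code flag and the ChatOpenAI model are assumptions about recent langchain_experimental and langchain-openai releases and should be checked against the installed versions; the point is that any question reaching the chain is effectively code-execution input and must come from a trusted source.

```python
# Hedged PALChain sketch (allow_dangerous_code and ChatOpenAI are assumptions
# about recent releases; verify against your installed packages).
from langchain_experimental.pal_chain import PALChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# PALChain asks the model to write Python and then executes it, so only
# trusted, validated questions should ever reach it. Recent releases require
# opting in to that behavior explicitly.
chain = PALChain.from_math_prompt(llm, allow_dangerous_code=True)

result = chain.invoke({"question": "A basket holds 3 apples and gains 2 more. How many apples?"})
print(result)
```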
Securing LangChain Applications with Gopher Security
Given the potential risks associated with LangChain vulnerabilities, it is crucial to implement robust security measures. Gopher Security specializes in AI-powered, post-quantum Zero-Trust cybersecurity architecture, offering a comprehensive platform that converges networking and security across devices, apps, and environments.
Our platform utilizes peer-to-peer encrypted tunnels and quantum-resistant cryptography to protect your AI workflows from potential threats. We provide:
- AI-powered threat detection: Identify and block AI-generated attacks and polymorphic threats.
- Zero-Trust architecture: Enforce strict access controls and continuous authentication to minimize the attack surface.
- Post-quantum cryptography: Protect your data from future threats posed by quantum computing.
- Runtime monitoring: Detect anomalies and block malicious activity in real time.
By partnering with Gopher Security, you can ensure the security and integrity of your LangChain applications and AI infrastructure.
Don't wait until your AI system is compromised. Contact Gopher Security today for a free consultation and discover how we can help you secure your AI workflows against present and future threats.