Three vulnerabilities in LangChain and LangGraph expose files, secrets, and chat history
Three security vulnerabilities were disclosed in LangChain and LangGraph, the de facto frameworks for LLM application development. One is a CVSS 9.3 deserialization flaw, another is a CVSS 7.3 SQL injection, and the third is a CVSS 7.5 path traversal issue that lets an attacker read arbitrary files by abusing insufficient file-path validation. What makes this nasty is that the three bugs target different classes of data: environment variables, conversation history, and the file system.
Security researcher Vladimir Tokarev of Cyera said that each vulnerability exposes a different class of enterprise data.
This follows the recent LangFlow CVE-2026-33017 incident, which was a CVSS 9.3 unauthenticated RCE. Unlike LangFlow, these three issues can also be triggered in authenticated environments, and in some cases through prompt injection.
Overview of the three vulnerabilities
| CVE ID | Affected package | CVSS | Vulnerability type | Data exposed | Fixed version |
|---|---|---|---|---|---|
| CVE-2025-68664 | langchain-core | 9.3 | Deserialization | Environment variables and API keys | 0.3.81 / 1.2.5 |
| CVE-2025-67644 | langgraph-checkpoint-sqlite | 7.3 | SQL injection | Conversation history and checkpoints | 3.0.1 |
| CVE-2026-34070 | langchain-core | 7.5 | Path traversal | Arbitrary files | 1.2.22 |
langchain-core has roughly 98 million monthly downloads as of December 2025, so the blast radius is wide.
CVE-2025-68664 “LangGrinch” deserialization and secret theft
This is the most severe issue. It was found by Yarden Porat of Cyera and nicknamed LangGrinch. A $4,000 bounty, the largest ever paid for a LangChain report, was awarded.
LangChain’s serialization model
LangChain uses its own internal serialization format. If a dictionary contains the key lc, the deserializer (loads / load) interprets it as a LangChain internal object and instantiates the corresponding class.
The problem was in dumps() and dumpd(): user-controlled dictionaries containing lc were emitted without escaping. As a result, attacker-supplied data could be deserialized as if it were a legitimate LangChain object.
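To make the bug class concrete, here is a toy tagged-dict deserializer, not LangChain's actual code, that treats any dict carrying a magic lc key as an object to instantiate. The registry, helper names, and the SecretResolver entry are all illustrative. If the serializer does not escape that key in user-supplied data, attacker data is indistinguishable from real object markers:

```python
# Toy illustration of the bug class (NOT LangChain's real implementation):
# any dict with the reserved "lc" marker is "instantiated" on load.
import json

# Hypothetical registry of allowlisted constructors.
REGISTRY = {"SecretResolver": lambda kwargs: f"resolved:{kwargs['name']}"}

def toy_loads(payload: str):
    def hook(d):
        if d.get("lc") == 1 and d.get("type") in REGISTRY:
            # Instantiation happens here -- side effects and all.
            return REGISTRY[d["type"]](d.get("kwargs", {}))
        return d
    return json.loads(payload, object_hook=hook)

# A response field the attacker can shape via prompt injection:
attacker_field = {"lc": 1, "type": "SecretResolver",
                  "kwargs": {"name": "OPENAI_API_KEY"}}

# A naive dumps() emits the dict verbatim; loads() then "instantiates" it.
round_tripped = toy_loads(json.dumps({"additional_kwargs": attacker_field}))
print(round_tripped["additional_kwargs"])  # resolved:OPENAI_API_KEY
```

The core problem is that plain data and object markers share one namespace, so the dump path must escape the marker in anything user-controlled.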
Attack surface
Twelve vulnerable flows were identified, including:
- `astream_events(version="v1")`
- `Runnable.astream_log()`
- `dumps()`/`dumpd()` -> `load()`/`loads()`
- Message history and memory serialization
- Cache operations
- Event streaming and logging
In a realistic attack, the additional_kwargs and response_metadata fields in an LLM response are abused. Those fields can be influenced through prompt injection. The attacker gets the LLM to emit an lc structure, and then the framework serializes and later deserializes it while streaming or saving history, which triggers object instantiation.
Defending against prompt injection in AI agents has already become a major topic, and this issue is a concrete example of prompt injection triggering a framework-level bug.
How the secrets leak
Before the patch, loads() defaulted secrets_from_env to True. With that option enabled, deserialization resolves secret objects from environment variables.
The attack flow looks like this.
```mermaid
flowchart TD
    A[Attacker sends a crafted prompt] --> B[LLM response contains<br/>an lc structure]
    B --> C[Dumps runs during streaming<br/>or history storage]
    C --> D[lc key is serialized without escaping]
    D --> E[loads interprets it as a<br/>LangChain object]
    E --> F[secrets_from_env is True<br/>secrets are resolved from env vars]
    F --> G[Object instantiation triggers<br/>an HTTP request]
    G --> H[Environment variables are placed in HTTP headers<br/>and sent to the attacker]
```
In practice, the allowlisted ChatBedrockConverse class is abused. Its constructor accepts endpoint_url and emits a GET request during initialization. An attacker can point endpoint_url at their own server and cause environment-variable secrets to be injected into request headers.
The allowlisted PromptTemplate class also supports Jinja2 template rendering. If rendering runs on a deserialized object, arbitrary Python code execution becomes possible.
What the patch changed
- Escaping was added for user-controlled dictionaries that contain the `lc` key
- The default for `secrets_from_env` was changed from `True` to `False`
- Validation was tightened on deserialization paths
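The escaping idea can be sketched as follows. This is a hypothetical pair of helpers, not the actual patch: on dump, any user-supplied dict containing the reserved lc key is wrapped so the loader treats it as plain data, and the wrapper is removed on load:

```python
# Toy sketch of the mitigation (hypothetical helper names, not the
# real patch): neutralize the reserved "lc" marker in user data.
def escape(obj):
    if isinstance(obj, dict):
        escaped = {k: escape(v) for k, v in obj.items()}
        if "lc" in escaped:
            return {"__escaped__": escaped}  # wrap to neutralize the marker
        return escaped
    if isinstance(obj, list):
        return [escape(v) for v in obj]
    return obj

def unescape(obj):
    if isinstance(obj, dict):
        if set(obj) == {"__escaped__"}:
            return {k: unescape(v) for k, v in obj["__escaped__"].items()}
        return {k: unescape(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [unescape(v) for v in obj]
    return obj

data = {"additional_kwargs": {"lc": 1, "type": "SecretResolver"}}
assert unescape(escape(data)) == data                   # round-trips cleanly
assert "lc" not in escape(data)["additional_kwargs"]    # marker neutralized
```

The point of the design is that escaped data round-trips losslessly while never matching the deserializer's object-marker check.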
CVE-2025-67644 SQL injection that exposes conversation history
This is a SQL injection in LangGraph’s SQLite checkpoint implementation. LangGraph is a stateful multi-actor framework built on LangChain, and it stores conversation state as checkpoints in SQLite.
Vulnerable code
The _metadata_predicate() function directly interpolated the metadata filter key into the SQL query with an f-string.
```python
for query_key, query_value in metadata_filter.items():
    operator, param_value = _where_value(query_value)
    predicates.append(
        f"json_extract(CAST(metadata AS TEXT), '$.{query_key}') {operator}"
    )
```
The filter value was safely parameterized, but the key was concatenated without validation. It is a classic oversight: prepared statements are used for the value, while the key is left wide open.
Example attack
```python
# Filter controlled by the attacker
malicious_filter = {"x') OR '1'='1": "dummy"}

# SQL that gets generated
# WHERE json_extract(CAST(metadata AS TEXT), '$.x') OR '1'='1') = ?
```
OR '1'='1' makes the WHERE clause always true, and every checkpoint record is returned. Conversation history, thread IDs, and metadata can all leak. If the checkpoint search endpoint is exposed in a custom server deployment, the risk is high. LangSmith deployments are not affected because they cannot use a custom checkpoint store.
What the patch changed
The patch added `_validate_filter_key`, which restricts filter keys to the regex `^[a-zA-Z0-9_.-]+$`.
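A sketch of the patched approach, using assumed helper names rather than the real LangGraph code: the key is checked against the allowlist regex before it is interpolated, while the value remains a bound parameter:

```python
# Sketch of key validation before SQL interpolation (illustrative names).
import re

KEY_RE = re.compile(r"^[a-zA-Z0-9_.-]+$")  # the regex added by the patch

def build_predicate(metadata_filter):
    predicates, params = [], []
    for key, value in metadata_filter.items():
        if not KEY_RE.match(key):
            raise ValueError(f"invalid metadata filter key: {key!r}")
        # Key passed the allowlist; value stays parameterized with "?".
        predicates.append(
            f"json_extract(CAST(metadata AS TEXT), '$.{key}') = ?"
        )
        params.append(value)
    return " AND ".join(predicates), params

# A legitimate filter passes...
sql, params = build_predicate({"user_id": "alice"})
# ...while an injection payload in the key is rejected outright.
try:
    build_predicate({"x') OR '1'='1": "dummy"})
except ValueError as exc:
    print(exc)
```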
CVE-2026-34070 arbitrary file read via path traversal
This vulnerability affects the prompt-loading API in langchain_core/prompts/loading.py. There was no validation on the file-path argument, so passing a relative path such as ../../etc/passwd allowed an attacker to read arbitrary files on the server.
Docker configuration files, .env files, and SSH keys are all viable targets. The CVSS 7.5 score fits: the attack is network-reachable, requires no authentication, and has a high confidentiality impact.
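A common mitigation pattern for this bug class, not the actual langchain-core patch, is to resolve the requested path and require that it stay inside an allowed base directory before opening it:

```python
# Path-containment check (illustrative, not the real patch code).
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # Reject anything that resolves outside the prompt directory.
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes prompt directory: {user_path}")
    return candidate

print(safe_resolve("/srv/app/prompts", "greeting.json"))   # allowed
# safe_resolve("/srv/app/prompts", "../../etc/passwd")     # raises ValueError
```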
The fixed version is langchain-core 1.2.22 or later.
How the three bugs combine
The issues are independent, but if they all exist in the same LangChain deployment, the damage compounds.
```mermaid
flowchart TD
    A[CVE-2026-34070<br/>Path traversal] --> B[Read .env and config files]
    B --> C[Extract DB connection info and internal paths]
    C --> D[CVE-2025-67644<br/>SQL injection]
    D --> E[Dump the checkpoint database<br/>and recover conversation history and metadata]
    A --> F[Read Dockerfiles and deployment settings]
    F --> G[Identify environment variable names]
    G --> H[CVE-2025-68664<br/>Deserialization]
    H --> I[Steal API keys from env vars<br/>and send them out]
```
Path traversal can reveal configuration files and environment variable names, SQL injection can dump the checkpoint database, and deserialization can then exfiltrate the actual secret values. Together, the attack covers the file system, secrets, and conversation data.
Attack surface specific to AI frameworks
These are conventional web-app bug classes, but the AI context changes the input path. In a traditional web app, SQL injection usually starts from a form field or URL parameter. In LangChain, the LLM response itself becomes the input vector. An attacker can use prompt injection to control the LLM output, and that output then flows through serialization, database access, and file loading inside the framework.
The fact that the payload is carried by the LLM’s output rather than a direct user input makes detection much harder for conventional WAFs.
Security vulnerabilities in AI agent frameworks are being reported one after another, so auditing the security of LLM application infrastructure is now urgent.
Response
The affected packages and fixed versions are repeated below.
| Package | Fixed version |
|---|---|
| langchain-core | 0.3.81 / 1.2.5 (CVE-2025-68664) |
| langchain-core | 1.2.22 (CVE-2026-34070) |
| langgraph-checkpoint-sqlite | 3.0.1 (CVE-2025-67644) |
langchain-core 1.2.22 fixes two of the issues at once. Check the installed versions with `pip show langchain-core langgraph-checkpoint-sqlite`, and upgrade immediately if they are older than the fixed versions. As a defense-in-depth measure, treat LLM response fields such as additional_kwargs, response_metadata, and tool outputs as untrusted data and validate them before deserialization.
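That last validation step can be sketched as a recursive scan for the reserved lc marker in untrusted response fields before they are persisted or re-deserialized. The field names follow the article; the helper itself is illustrative, not a LangChain API:

```python
# Defense-in-depth sketch: refuse to persist untrusted fields that
# smuggle in the reserved "lc" serialization marker.
def contains_lc_marker(obj) -> bool:
    if isinstance(obj, dict):
        return "lc" in obj or any(contains_lc_marker(v) for v in obj.values())
    if isinstance(obj, list):
        return any(contains_lc_marker(v) for v in obj)
    return False

untrusted = {
    "additional_kwargs": {"lc": 1, "type": "ChatBedrockConverse"},
    "response_metadata": {"model": "gpt-4o"},
}
for field, value in untrusted.items():
    if contains_lc_marker(value):
        print(f"rejecting untrusted field: {field}")
```

This is a coarse filter on top of upgrading, not a substitute for it: the patched escaping in langchain-core remains the actual fix.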