CVE-2025-68664 - LangChain serialization injection vulnerability in data utilities

  • Dec 26, 2025

Key Findings:


  • A critical security flaw (CVE-2025-68664) has been disclosed in LangChain Core that could enable attackers to steal sensitive secrets and influence large language model (LLM) responses through prompt injection.

  • The vulnerability carries a CVSS score of 9.3 out of 10.0.

  • The vulnerability is caused by a serialization injection issue in the `dumps()` and `dumpd()` functions of LangChain, which fail to properly escape dictionaries with "lc" keys during serialization.

  • This allows attackers to instantiate unsafe arbitrary objects, potentially leading to secret extraction, class instantiation within trusted namespaces, and even arbitrary code execution via Jinja2 templates.

  • The vulnerability also enables the injection of LangChain object structures through user-controlled fields like `metadata`, `additional_kwargs`, or `response_metadata` via prompt injection.


Background


LangChain is a framework for building applications powered by large language models (LLMs). LangChain Core (i.e., `langchain-core`) is a core Python package that provides the core interfaces and model-agnostic abstractions for building these applications.


Vulnerability Details


The vulnerability, tracked as CVE-2025-68664, is caused by a serialization injection issue in the `dumps()` and `dumpd()` functions of LangChain. These functions fail to properly escape dictionaries with "lc" keys when serializing free-form dictionaries.


The "lc" key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.


Impact


The crux of the problem is that the `dumps()` and `dumpd()` functions do not escape user-controlled dictionaries containing "lc" keys. This allows an attacker to instantiate unsafe arbitrary objects, potentially leading to:


  • Secret extraction from environment variables when deserialization is performed with `secrets_from_env=True` (previously the default)

  • Instantiation of classes within pre-approved trusted namespaces, such as `langchain_core`, `langchain`, and `langchain_community`

  • Potential arbitrary code execution via Jinja2 templates


Additionally, the escaping bug enables the injection of LangChain object structures through user-controlled fields like `metadata`, `additional_kwargs`, or `response_metadata` via prompt injection.
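To make the secret-extraction path concrete, here is a hypothetical sketch of how an env-backed secret marker can leak. The `toy_load` function and envelope layout are illustrative assumptions, not LangChain's real API; they model a deserializer that, like the vulnerable default, resolves "secret" envelopes from the process environment:

```python
import os

def toy_load(data, secrets_from_env=True):
    """Illustrative deserializer: an 'lc' envelope of type 'secret' is
    resolved from the process environment when secrets_from_env is True."""
    if isinstance(data, dict) and data.get("lc") == 1:
        if data.get("type") == "secret" and secrets_from_env:
            (name,) = data.get("id", [None])
            return os.environ.get(name)  # the leak: env var value handed back
        return data
    if isinstance(data, dict):
        return {k: toy_load(v, secrets_from_env) for k, v in data.items()}
    return data

os.environ["DEMO_API_KEY"] = "sk-demo-123"  # stand-in for a real secret

# An injected dict masquerading as a serialized secret, smuggled in via a
# user-controlled field such as response_metadata:
injected = {"response_metadata": {"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}}
print(toy_load(injected))                                 # secret value leaks
print(toy_load(injected, secrets_from_env=False))         # envelope left inert
```

With `secrets_from_env=False` the envelope is never resolved against the environment, which is why the patch flips that default.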


Mitigation


The patch released by LangChain introduces new restrictive defaults in `load()` and `loads()`: an `allowed_objects` allowlist parameter lets users specify which classes may be serialized and deserialized. Additionally, Jinja2 templates are blocked by default, and `secrets_from_env` now defaults to `False`, disabling automatic secret loading from the environment.
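The allowlist approach can be sketched as follows. This is a simplified illustration in the spirit of the patch's `allowed_objects` parameter, not the real `langchain-core` implementation; the envelope format and function name are assumptions carried over from the toy examples above:

```python
# Sketch of an allowlist-style guard: only envelopes whose id appears in
# the caller-supplied allowlist may be instantiated; everything else is
# rejected instead of silently constructed.

def guarded_load(data, allowed_objects=()):
    if isinstance(data, dict) and data.get("lc") == 1:
        obj_id = tuple(data.get("id", []))
        if obj_id not in allowed_objects:
            raise ValueError(f"refusing to instantiate {obj_id}")
        return f"<instantiated {'.'.join(obj_id)}>"  # stand-in for construction
    if isinstance(data, dict):
        return {k: guarded_load(v, allowed_objects) for k, v in data.items()}
    return data

envelope = {"lc": 1, "id": ["langchain_core", "prompts", "PromptTemplate"]}

# Explicitly allowlisted -> constructed:
print(guarded_load(envelope,
                   allowed_objects={("langchain_core", "prompts", "PromptTemplate")}))

# Default (empty allowlist) -> rejected:
try:
    guarded_load(envelope)
except ValueError as e:
    print(e)
```

Defaulting to an empty allowlist inverts the trust model: instead of trusting whole namespaces, the caller must name each class it expects to deserialize.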


Affected Versions


The following versions of `langchain-core` are affected by CVE-2025-68664:


  • >= 1.0.0, < 1.2.5 (Fixed in 1.2.5)

  • < 0.3.81 (Fixed in 0.3.81)


Users are advised to update to a patched version as soon as possible.


Sources


  • https://thehackernews.com/2025/12/critical-langchain-core-vulnerability.html

  • https://cvefeed.io/vuln/detail/CVE-2025-68665

  • https://cvefeed.io/vuln/detail/CVE-2025-68664

© 2025 by Explain IT Again. Powered and secured by Wix