
chore(deps): update dependency langchain-core to v1 [security] - autoclosed #153

Closed
renovate-bot wants to merge 1 commit into googleapis:main from renovate-bot:renovate/pypi-langchain-core-vulnerability

Conversation

@renovate-bot
Contributor

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package | Change | Age | Confidence
langchain-core (changelog) | >=0.1.1, <1.0.0 -> >=1.2.22, <1.2.23 | age | confidence
langchain-core (changelog) | ==0.2.31 -> ==1.2.22 | age | confidence

langchain-core allows unauthorized users to read arbitrary files from the host file system

CVE-2024-10940 / GHSA-5chr-fjjv-38qv

More information

Details

A vulnerability in langchain-core versions >=0.1.17,<0.1.53, >=0.2.0,<0.2.43, and >=0.3.0,<0.3.15 allows unauthorized users to read arbitrary files from the host file system. The issue arises from the ability to create langchain_core.prompts.ImagePromptTemplate instances (and by extension langchain_core.prompts.ChatPromptTemplate instances) with input variables that can read any user-specified path from the server file system. If the outputs of these prompt templates are exposed to the user, either directly or through downstream model outputs, it can lead to the exposure of sensitive information.
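
As a rough illustration of the pattern described above, the sketch below assumes the pre-patch behavior in which an image prompt template could be pointed at a local filesystem path through an input variable; the content-block keys shown are assumptions and may differ between affected versions.

from langchain_core.prompts import ChatPromptTemplate

# Assumed pre-patch shape: an image content block whose source is a local
# "path" filled from a template variable (key names are illustrative).
template = ChatPromptTemplate.from_messages(
    [("human", [{"type": "image_url", "image_url": {"path": "{user_path}"}}])]
)

# If user_path is attacker-controlled, a vulnerable version could read the file
# from the server filesystem and inline its contents into the rendered prompt.
result = template.invoke({"user_path": "/etc/passwd"})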

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

References

This data is provided by the GitHub Advisory Database (CC-BY 4.0).


LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates

CVE-2025-65106 / GHSA-6qv9-48xg-fc7f

More information

Details

Context

A template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes.

Templates allow attribute access (.) and indexing ([]) but not method invocation (()).

The combination of attribute access and indexing may enable exploitation depending on which objects are passed to templates. When template variables are simple strings (the common case), the impact is limited. However, when using MessagesPlaceholder with chat message objects, attackers can traverse through object attributes and dictionary lookups (e.g., __globals__) to reach sensitive data such as environment variables.

The vulnerability specifically requires that applications accept template strings (the structure) from untrusted sources, not just template variables (the data). Most applications either do not use templates or else use hardcoded templates and are not vulnerable.

Affected Components
  • langchain-core package
  • Template formats:
    • F-string templates (template_format="f-string") - Vulnerability fixed
    • Mustache templates (template_format="mustache") - Defensive hardening
    • Jinja2 templates (template_format="jinja2") - Defensive hardening
Impact

Attackers who can control template strings (not just template variables) can:

  • Access Python object attributes and internal properties via attribute traversal
  • Extract sensitive information from object internals (e.g., __class__, __globals__)
  • Potentially escalate to more severe attacks depending on the objects passed to templates
Attack Vectors
1. F-string Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate

malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__.__name__}")],
    template_format="f-string"
)

# Note that this requires passing a placeholder variable for "msg.__class__.__name__".
result = malicious_template.invoke({"msg": "foo", "msg.__class__.__name__": "safe_placeholder"})

# Previously returned:
# >>> result.messages[0].content
# 'str'
2. Mustache Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string; a representative payload accessing
# object internals via mustache syntax is shown
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{question.__class__.__name__}}")],
    template_format="mustache"
)

result = malicious_template.invoke({"question": msg})

# Previously returned: "HumanMessage" (getattr() exposed internals)
3. Jinja2 Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string; a representative payload accessing a
# non-dunder attribute is shown
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{ question.content }}")],
    template_format="jinja2"
)

result = malicious_template.invoke({"question": msg})

# Could access non-dunder attributes/methods on objects
Root Cause
  1. F-string templates: The implementation used Python's string.Formatter().parse() to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax:
    from string import Formatter
    
    template = "{msg.__class__} and {x}"
    print([var_name for (_, var_name, _, _) in Formatter().parse(template)])
    # Returns: ['msg.__class__', 'x']
    The extracted names were not validated to ensure they were simple identifiers. As a result, template strings containing attribute traversal and indexing expressions (e.g., {obj.__class__.__name__} or {obj.method.__globals__[os]}) were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with (), they do support [] indexing, which could allow traversal through dictionaries like __globals__ to reach sensitive objects.
  2. Mustache templates: By design, Mustache rendering used getattr() as a fallback to support accessing attributes on objects (e.g., referencing a field on a User object). However, we decided to restrict this to simpler primitives that subclass dict, list, and tuple types as defensive hardening, since untrusted templates could exploit attribute access to reach internal properties like __class__ on arbitrary objects.
  3. Jinja2 templates: Jinja2's default SandboxedEnvironment blocks dunder attributes (e.g., __class__) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure, we've restricted the environment to block all attribute and method access on objects passed to templates.
Who Is Affected?
High Risk Scenarios

You are affected if your application:

  • Accepts template strings from untrusted sources (user input, external APIs, databases)
  • Dynamically constructs prompt templates based on user-provided patterns
  • Allows users to customize or create prompt templates

Example vulnerable code:

# User controls the template string itself
user_template_string = request.json.get("template")  # DANGEROUS

prompt = ChatPromptTemplate.from_messages(
    [("human", user_template_string)],
    template_format="mustache"
)

result = prompt.invoke({"data": sensitive_object})
Low/No Risk Scenarios

You are NOT affected if:

  • Template strings are hardcoded in your application code
  • Template strings come only from trusted, controlled sources
  • Users can only provide values for template variables, not the template structure itself

Example safe code:

# Template is hardcoded - users only control variables
prompt = ChatPromptTemplate.from_messages(
    [("human", "User question: {question}")],  # SAFE
    template_format="f-string"
)

# User input only fills the 'question' variable
result = prompt.invoke({"question": user_input})
The Fix
F-string Templates

F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:

  • Added validation to enforce that variable names must be valid Python identifiers
  • Rejects syntax like {obj.attr}, {obj[0]}, or {obj.__class__}
  • Only allows simple variable names: {variable_name}
# After fix - these are rejected at template creation time
ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__}")],  # ValueError: Invalid variable name
    template_format="f-string"
)
Mustache Templates (Defensive Hardening)

As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:

  • Replaced getattr() fallback with strict type checking
  • Only allows traversal into dict, list, and tuple types
  • Blocks attribute access on arbitrary Python objects
# After hardening - attribute access returns empty string
# (representative template shown)
prompt = ChatPromptTemplate.from_messages(
    [("human", "{{msg.__class__.__name__}}")],
    template_format="mustache"
)
result = prompt.invoke({"msg": HumanMessage("test")})

# Returns: "" (access blocked)
Jinja2 Templates (Defensive Hardening)

As defensive hardening, we've significantly restricted Jinja2 template capabilities:

  • Introduced _RestrictedSandboxedEnvironment that blocks ALL attribute/method access
  • Only allows simple variable lookups from the context dictionary
  • Raises SecurityError on any attribute access attempt
# After hardening - all attribute access is blocked
# (representative template shown)
prompt = ChatPromptTemplate.from_messages(
    [("human", "{{ msg.content }}")],
    template_format="jinja2"
)

# Raises SecurityError: Access to attributes is not allowed
result = prompt.invoke({"msg": HumanMessage("test")})

Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.

While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.

Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g., HumanMessage, AIMessage, ToolMessage) without templates. Direct message construction avoids template-related security concerns entirely.
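
As a minimal sketch of that template-free approach (chat_model stands in for any chat model instance and is assumed to exist):

from langchain_core.messages import HumanMessage, SystemMessage

user_input = "What's the weather like on Mars?"

# The conversation is built directly from message objects; no template string
# is ever parsed, so template injection is not a concern here.
messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage(f"User question: {user_input}"),
]

# response = chat_model.invoke(messages)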

Remediation
Immediate Actions
  1. Audit your code for any locations where template strings come from untrusted sources
  2. Update to the patched version of langchain-core
  3. Review template usage to ensure separation between template structure and user data
Best Practices
  • Consider if you need templates at all - Many applications can work directly with message objects (HumanMessage, AIMessage, etc.) without templates
  • Reserve Jinja2 for trusted sources - Only use Jinja2 templates when you fully control the template content

Update: Jinja2 Restrictions Reverted

The Jinja2 hardening introduced in the initial patch has been reverted as of langchain-core 1.1.3. The restriction was not addressing a direct vulnerability but was part of broader defensive hardening. In practice, it significantly limited legitimate Jinja2 usage and broke existing templates. Since Jinja2 is intended to be used only with trusted template sources, the original behavior has been restored. Users should continue to avoid accepting untrusted template strings when using Jinja2, but no security issue exists with trusted templates.

Severity

  • CVSS Score: 8.3 / 10 (High)
  • Vector String: CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:H/VI:L/VA:N/SC:N/SI:N/SA:N

References

This data is provided by the GitHub Advisory Database (CC-BY 4.0).


LangChain serialization injection vulnerability enables secret extraction in dumps/loads APIs

CVE-2025-68664 / GHSA-c67j-w6g6-q2cm

More information

Details

Summary

A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions. The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.

Attack surface

The core vulnerability was in dumps() and dumpd(): these functions failed to escape user-controlled dictionaries containing 'lc' keys. When this unescaped data was later deserialized via load() or loads(), the injected structures were treated as legitimate LangChain objects rather than plain user data.

This escaping bug enabled several attack vectors:

  1. Injection via user data: Malicious LangChain object structures could be injected through user-controlled fields like metadata, additional_kwargs, or response_metadata
  2. Class instantiation within trusted namespaces: Injected manifests could instantiate any Serializable subclass, but only within the pre-approved trusted namespaces (langchain_core, langchain, langchain_community). This includes classes with side effects in __init__ (network calls, file operations, etc.). Note that namespace validation was already enforced before this patch, so arbitrary classes outside these trusted namespaces could not be instantiated.
Security hardening

This patch fixes the escaping bug in dumps() and dumpd() and introduces new restrictive defaults in load() and loads(): allowlist enforcement via allowed_objects="core" (restricted to serialization mappings), secrets_from_env changed from True to False, and default Jinja2 template blocking via init_validator. These are breaking changes for some use cases.

Who is affected?

Applications are vulnerable if they:

  1. Use astream_events(version="v1") — The v1 implementation internally uses vulnerable serialization. Note: astream_events(version="v2") is not vulnerable.
  2. Use Runnable.astream_log() — This method internally uses vulnerable serialization for streaming outputs.
  3. Call dumps() or dumpd() on untrusted data, then deserialize with load() or loads() — Trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains 'lc' key structures.
  4. Deserialize untrusted data with load() or loads() — Directly deserializing untrusted data that may contain injected 'lc' structures.
  5. Use RunnableWithMessageHistory — Internal serialization in message history handling.
  6. Use InMemoryVectorStore.load() to deserialize untrusted documents.
  7. Load untrusted generations from cache using langchain-community caches.
  8. Load untrusted manifests from the LangChain Hub via hub.pull.
  9. Use StringRunEvaluatorChain on untrusted runs.
  10. Use create_lc_store or create_kv_docstore with untrusted documents.
  11. Use MultiVectorRetriever with byte stores containing untrusted documents.
  12. Use LangSmithRunChatLoader with runs containing untrusted messages.

The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.

Impact

Attackers who control serialized data can extract environment variable secrets by injecting {"lc": 1, "type": "secret", "id": ["ENV_VAR"]} to load environment variables during deserialization (when secrets_from_env=True, which was the old default). They can also instantiate classes with controlled parameters by injecting constructor structures to instantiate any class within trusted namespaces with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.

Key severity factors:

  • Affects the serialization path - applications trusting their own serialization output are vulnerable
  • Enables secret extraction when combined with secrets_from_env=True (the old default)
  • LLM responses in additional_kwargs can be controlled via prompt injection
Exploit example
from langchain_core.load import dumps, load
import os

# Attacker injects secret structure into user-controlled data
attacker_dict = {
    "user_data": {
        "lc": 1,
        "type": "secret",
        "id": ["OPENAI_API_KEY"]
    }
}

serialized = dumps(attacker_dict)  # Bug: does NOT escape the 'lc' key

os.environ["OPENAI_API_KEY"] = "sk-secret-key-12345"
deserialized = load(serialized, secrets_from_env=True)

print(deserialized["user_data"])  # "sk-secret-key-12345" - SECRET LEAKED!
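
With the patched defaults, a rough sketch of the same round trip (the exact escaped wire format is an internal detail and an assumption here):

# After the fix, dumps() escapes user dictionaries containing an 'lc' key, so
# the round trip yields the attacker's dict back as plain data instead of
# resolving a LangChain "secret" object.
serialized = dumps(attacker_dict)
restored = load(serialized)
print(restored["user_data"])  # {'lc': 1, 'type': 'secret', 'id': ['OPENAI_API_KEY']}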
Security hardening changes (breaking changes)

This patch introduces three breaking changes to load() and loads():

  1. New allowed_objects parameter (defaults to 'core'): Enforces allowlist of classes that can be deserialized. The 'all' option corresponds to the list of objects specified in mappings.py while the 'core' option limits to objects within langchain_core. We recommend that users explicitly specify which objects they want to allow for serialization/deserialization.
  2. secrets_from_env default changed from True to False: Disables automatic secret loading from environment
  3. New init_validator parameter (defaults to default_init_validator): Blocks Jinja2 templates by default
Migration guide
No changes needed for most users

If you're deserializing standard LangChain types (messages, documents, prompts, trusted partner integrations like ChatOpenAI, ChatAnthropic, etc.), your code will work without changes:

from langchain_core.load import load

# Uses default allowlist from serialization mappings
obj = load(serialized_data)
For custom classes

If you're deserializing custom classes not in the serialization mappings, add them to the allowlist:

from langchain_core.load import load
from my_package import MyCustomClass

# Specify the classes you need
obj = load(serialized_data, allowed_objects=[MyCustomClass])
For Jinja2 templates

Jinja2 templates are now blocked by default because they can execute arbitrary code. If you need Jinja2 templates, pass init_validator=None:

from langchain_core.load import load
from langchain_core.prompts import PromptTemplate

obj = load(
    serialized_data,
    allowed_objects=[PromptTemplate],
    init_validator=None
)

[!WARNING]
Only disable init_validator if you trust the serialized data. Jinja2 templates can execute arbitrary Python code.

For secrets from environment

secrets_from_env now defaults to False. If you need to load secrets from environment variables:

from langchain_core.load import load

obj = load(serialized_data, secrets_from_env=True)
Credits
  • The dumps() bug was reported by @yardenporat
  • Security hardening changes were made based on findings from @0xn3va and @VladimirEliTokarev

Severity

  • CVSS Score: 9.3 / 10 (Critical)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:L/A:N

References

This data is provided by the GitHub Advisory Database (CC-BY 4.0).


LangChain affected by SSRF via image_url token counting in ChatOpenAI.get_num_tokens_from_messages

CVE-2026-26013 / GHSA-2g6r-c272-w58r

More information

Details

Server-Side Request Forgery (SSRF) in ChatOpenAI Image Token Counting
Summary

The ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input.

Severity

Low - The vulnerability allows SSRF attacks but has limited impact due to:

  • Responses are not returned to the attacker (blind SSRF)
  • Default 5-second timeout limits resource exhaustion
  • Non-image responses fail at PIL image parsing
Impact

An attacker who can control image URLs passed to get_num_tokens_from_messages() can:

  • Trigger HTTP requests from the application server to arbitrary internal or external URLs
  • Cause the server to access internal network resources (private IPs, cloud metadata endpoints)
  • Cause minor resource consumption through image downloads (bounded by timeout)

Note: This vulnerability occurs during token counting, which may happen outside of model invocation (e.g., in logging, metrics, or token budgeting flows).

Details

The vulnerable code path:

  1. get_num_tokens_from_messages() processes messages containing image_url content blocks
  2. For images without detail: "low", it calls _url_to_size() to fetch the image and compute token counts
  3. _url_to_size() performs httpx.get(image_source) on any URL without validation
  4. Prior to the patch, there was no SSRF protection, size limits, or explicit timeout

File: libs/partners/openai/langchain_openai/chat_models/base.py
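
A minimal sketch of how this code path could be reached (the metadata URL is illustrative; the placeholder API key is only needed to construct the client, and no model call is made):

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", api_key="sk-placeholder")

msg = HumanMessage(content=[
    {"type": "text", "text": "Describe this image."},
    # Attacker-influenced URL pointing at an internal endpoint (illustrative)
    {"type": "image_url", "image_url": {"url": "http://169.254.169.254/latest/meta-data/"}},
])

# On vulnerable versions, token counting fetches the URL server-side (blind SSRF).
llm.get_num_tokens_from_messages([msg])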

Patches

The vulnerability has been patched in langchain-openai==1.1.9 (requires langchain-core==1.2.11).

The patch adds:

  1. SSRF validation using langchain_core._security._ssrf_protection.validate_safe_url() to block:
    • Private IP ranges (RFC 1918, loopback, link-local)
    • Cloud metadata endpoints (169.254.169.254, etc.)
    • Invalid URL schemes
  2. Explicit size limits (50 MB maximum, matching OpenAI's payload limit)
  3. Explicit timeout (5 seconds, same as httpx.get default)
  4. Allow disabling image fetching via allow_fetching_images=False parameter
Workarounds

If you cannot upgrade immediately:

  1. Sanitize input: Validate and filter image_url values before passing messages to token counting or model invocation
  2. Use network controls: Implement egress filtering to prevent outbound requests to private IPs
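
For the first workaround, a pre-filter along these lines could be applied to image_url values before token counting (is_safe_image_url is a hypothetical helper, not part of langchain-openai; DNS rebinding and HTTP redirects are out of scope for this sketch):

import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Reject non-HTTP schemes and hosts that resolve to private/loopback ranges."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)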

Severity

  • CVSS Score: 3.7 / 10 (Low)
  • Vector String: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L

References

This data is provided by the GitHub Advisory Database (CC-BY 4.0).


LangChain Core has Path Traversal vulnerabilities in legacy load_prompt functions

CVE-2026-34070 / GHSA-qh6h-p6c9-ff54

More information

Details

Summary

Multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples).

Note: The affected functions (load_prompt, load_prompt_from_config, and the .save() method on prompt classes) are undocumented legacy APIs. They are superseded by the dumpd/dumps/load/loads serialization APIs in langchain_core.load, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.

Affected component

Package: langchain-core
File: langchain_core/prompts/loading.py
Affected functions: _load_template(), _load_examples(), _load_few_shot_prompt()

Severity

High

The score reflects the file-extension constraints that limit which files can be read.

Vulnerable code paths
Config key | Loaded by | Readable extensions
template_path, suffix_path, prefix_path | _load_template() | .txt
examples (when string) | _load_examples() | .json, .yaml, .yml
example_prompt_path | _load_few_shot_prompt() | .json, .yaml, .yml

None of these code paths validated the supplied path against absolute path injection or .. traversal sequences before reading from disk.

Impact

An attacker who controls or influences the prompt configuration dict can read files outside the intended directory:

  • .txt files: cloud-mounted secrets (/mnt/secrets/api_key.txt), requirements.txt, internal system prompts
  • .json/.yaml files: cloud credentials (~/.docker/config.json, ~/.azure/accessTokens.json), Kubernetes manifests, CI/CD configs, application settings

This is exploitable in applications that accept prompt configs from untrusted sources, including low-code AI builders and API wrappers that expose load_prompt_from_config().

Proof of concept
from langchain_core.prompts.loading import load_prompt_from_config

# Reads /tmp/secret.txt via absolute path injection
config = {
    "_type": "prompt",
    "template_path": "/tmp/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)
print(prompt.template)  # file contents disclosed

# Reads ../../etc/secret.txt via directory traversal
config = {
    "_type": "prompt",
    "template_path": "../../etc/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)

# Reads arbitrary .json via few-shot examples
config = {
    "_type": "few_shot",
    "examples": "../../../../.docker/config.json",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "{input}: {output}",
    },
    "prefix": "",
    "suffix": "{query}",
    "input_variables": ["query"],
}
prompt = load_prompt_from_config(config)
Mitigation

Update langchain-core to >= 1.2.22.

The fix adds path validation that rejects absolute paths and .. traversal sequences by default. An allow_dangerous_paths=True keyword argument is available on load_prompt() and load_prompt_from_config() for trusted inputs.

As described above, these legacy APIs have been formally deprecated. Users should migrate to dumpd/dumps/load/loads from langchain_core.load.
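
A rough sketch of that migration, assuming a PromptTemplate is being round-tripped (the allowed_objects argument follows the hardened load()/loads() API described earlier in this PR):

from langchain_core.load import dumps, loads
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Summarize the following text: {text}")

# Serialize to a JSON string and restore it; no filesystem paths are involved,
# and deserialization is limited to the explicitly allowed classes.
serialized = dumps(prompt)
restored = loads(serialized, allowed_objects=[PromptTemplate])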

Credit

Severity

  • CVSS Score: 7.5 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

References

This data is provided by the GitHub Advisory Database (CC-BY 4.0).


LangChain has incomplete f-string validation in prompt templates

CVE-2026-40087 / GHSA-926x-3r5x-gfhw

More information

Details

LangChain's f-string prompt-template validation was incomplete in two respects.

First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting.

Examples of the affected shape include:

"{message.additional_kwargs[secret]}"
"https://example.com/{image.__class__.__name__}.png"

Second, f-string validation based on parsed top-level field names did not reject nested replacement fields inside format specifiers. For example:

"{name:{name.__class__.__name__}}"

In this pattern, the nested replacement field appears in the format specifier rather than in the top-level field name. As a result, earlier validation based on parsed field names did not reject the template even though Python formatting would still attempt to resolve the nested expression at runtime.
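
The sketch below illustrates why: parsing reports only the top-level field name, while the nested expression hides inside the format specifier.

from string import Formatter

template = "{name:{name.__class__.__name__}}"

for literal_text, field_name, format_spec, conversion in Formatter().parse(template):
    print(repr(field_name), "|", repr(format_spec))

# Prints: 'name' | '{name.__class__.__name__}'
# Validation that only inspects field_name misses the nested replacement field,
# but str.format() still attempts to resolve it at formatting time.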

Affected usage

This issue is only relevant for applications that accept untrusted template strings, rather than only untrusted template variable values.

In addition, practical impact depends on what objects are passed into template formatting:

  • If applications only format simple values such as strings and numbers, impact is limited and may only result in formatting errors.
  • If applications format richer Python objects, attribute access and indexing may interact with internal object state during formatting.

In many deployments, these conditions are not commonly present together. Applications that allow end users to author arbitrary templates often expose only a narrow set of simple template variables, while applications that work with richer internal Python objects often keep template structure under developer control. As a result, the highest-impact scenario is plausible but is not representative of all LangChain applications.

Applications that use hardcoded templates or that only allow users to provide variable values are not affected by this issue.

Impact

The direct issue in DictPromptTemplate and ImagePromptTemplate allowed attribute access and indexing expressions to survive template construction and then be evaluated during formatting. When richer Python objects were passed into formatting, this could expose internal fields or nested data to prompt output, model context, or logs.

The nested format-spec issue is narrower in scope. It bypassed the intended validation rules for f-string templates, but in simple cases it results in an invalid format specifier error rather than direct disclosure. Accordingly, its practical impact is lower than that of direct top-level attribute traversal.

Overall, the practical severity depends on deployment. Meaningful confidentiality impact requires attacker control over the template structure itself, and higher impact further depends on the surrounding application passing richer internal Python objects into formatting.

Fix

The fix consists of two changes.

First, LangChain now applies f-string safety validation consistently to DictPromptTemplate and ImagePromptTemplate, so templates containing attribute access or indexing expressions are rejected during construction and deserialization.

Second, LangChain now rejects nested replacement fields inside f-string format specifiers.

Concretely, LangChain validates parsed f-string fields and raises an error for:

  • variable names containing attribute access or indexing syntax such as . or []
  • format specifiers containing { or }

This blocks templates such as:

"{message.additional_kwargs[secret]}"
"https://example.com/{image.__class__.__name__}.png"
"{name:{name.__class__.__name__}}"

The fix preserves ordinary f-string formatting features such as standard format specifiers and conversions, including examples like:

"{value:.2f}"
"{value:>10}"
"{value!r}"

In addition, the explicit template-validation path now applies the same structural f-string checks before performing placeholder validation, ensuring that the security checks and validation checks remain aligned.

Severity

  • CVSS Score: 5.3 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

References

This data is provided by the GitHub Advisory Database (CC-BY 4.0).


Release Notes

langchain-ai/langchain (langchain-core)

v0.1.16

Compare Source

What's Changed

New Contributors

Full Changelog: langchain-ai/langchain@v0.1.15...v0.1.16

v0.1.15

Compare Source

What's Changed

  • experimental[patch]: Release 0.0.56 by [@baskaryan](https://red

Configuration

📅 Schedule: (UTC)

  • Branch creation
    • ""
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Never, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate-bot renovate-bot requested review from a team as code owners April 27, 2026 18:23
@product-auto-label product-auto-label Bot added the api: redis Issues related to the googleapis/langchain-google-memorystore-redis-python API. label Apr 27, 2026
@dpebot

dpebot commented Apr 27, 2026

/gcbrun

@renovate-bot renovate-bot changed the title chore(deps): update dependency langchain-core to v1 [security] chore(deps): update dependency langchain-core to v1 [security] - autoclosed Apr 27, 2026
@renovate-bot renovate-bot deleted the renovate/pypi-langchain-core-vulnerability branch April 27, 2026 18:32