LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks - The Hacker News
Three critical vulnerabilities (CVE-2026-34070, CVE-2025-68664, CVE-2025-67644) have been discovered in LangChain and LangGraph, widely used AI frameworks for building LLM applications. The flaws, a path traversal, deserialization of untrusted data, and a SQL injection, respectively allow attackers to read arbitrary files from the filesystem, leak environment secrets, and execute arbitrary SQL queries against conversation-history databases.
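To illustrate the path-traversal class of flaw described above, the sketch below shows the standard defensive check that such bugs omit: canonicalizing a user-supplied path and rejecting anything that resolves outside an allowed root. The function and directory names are hypothetical assumptions for illustration, not LangChain's actual code.

```python
import os

def resolve_user_path(root: str, user_path: str) -> str:
    """Resolve a user-supplied relative path, rejecting traversal
    outside the allowed root directory (hypothetical example)."""
    real_root = os.path.realpath(root)
    # realpath collapses ".." segments and symlinks, so an input like
    # "../../etc/passwd" resolves to its true absolute location.
    candidate = os.path.realpath(os.path.join(real_root, user_path))
    if os.path.commonpath([candidate, real_root]) != real_root:
        raise ValueError(f"path escapes allowed root: {user_path!r}")
    return candidate
```

A vulnerable file-serving endpoint skips this containment check and joins the attacker-controlled path directly, which is what turns a document loader or attachment handler into an arbitrary-file-read primitive.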
Source: Original Report