
LangChain

LangChain is the general-purpose building layer for composing prompts, tools, retrievers, and other LLM workflow pieces, making it a practical starting point for smaller or more linear applications.
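The core idea — small pieces (prompt, model, output parser) chained into a pipeline — can be shown without the library itself. This is a toy, library-free sketch of the composition pattern, not LangChain's actual API; the `Step` class and the fake model are illustrative stand-ins.

```python
# Toy illustration of composable pipeline steps (NOT LangChain's real API).
# Each step is a callable; a pipeline is just function composition via "|".

class Step:
    """Wraps a function so steps can be chained with the | operator."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed the result to the next step.
        return Step(lambda x: other.fn(self.fn(x)))

    def __call__(self, x):
        return self.fn(x)

# Hypothetical pieces standing in for a prompt template, a model, and a parser.
prompt = Step(lambda topic: f"Write one sentence about {topic}.")
fake_model = Step(lambda text: f"MODEL OUTPUT for: {text}")
parser = Step(lambda text: text.strip())

chain = prompt | fake_model | parser
print(chain("retrievers"))
```

The real framework's chaining works on the same principle: each component transforms its input and hands the result to the next.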

LangGraph sits one layer up and is better suited to workflows that need explicit state, branching, retry behavior, looping, or longer-running agent coordination.
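What "explicit state, branching, retry behavior, looping" means in practice can be sketched as a tiny state machine: nodes are functions from state to state, and a routing rule decides which node runs next. This is a library-free illustration of the pattern, not LangGraph's actual API; the node names and the approval check are made up for the example.

```python
# Library-free sketch of explicit state + looping (NOT LangGraph's real API).
# A graph is a set of nodes (state -> state) plus an edge-selection rule.

def draft(state):
    # Node: produce a new draft and count the attempt.
    state["attempts"] += 1
    state["text"] = f"draft v{state['attempts']}"
    return state

def review(state):
    # Node: hypothetical check that accepts only the second attempt onward.
    state["approved"] = state["attempts"] >= 2
    return state

def route(state):
    # Conditional edge: loop back to draft until approved, then finish.
    return "END" if state["approved"] else "draft"

nodes = {"draft": draft, "review": review}
state = {"attempts": 0}
current = "draft"
while current != "END":
    state = nodes[current](state)
    # Drafts always flow into review; review decides where to go next.
    current = "review" if current == "draft" else route(state)

print(state["text"])
```

The state dict survives across iterations, which is what makes retries and loops tractable compared with a single linear chain.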

Deep Agents adds an agent harness on top of the LangChain ecosystem when you want built-in planning, subagents, file-system tooling, and support for more demanding multi-step tasks.

LangSmith fills the observability role across the ecosystem by helping teams trace runs, evaluate behavior, and monitor quality over time.

A reasonable progression is to start with LangChain, adopt LangGraph when orchestration becomes more complex, use Deep Agents when you want an agent harness, and add LangSmith when you need stronger debugging and evaluation.

Because this ecosystem moves quickly, the most reliable way to check current APIs and examples is often the official docs MCP server rather than older blog posts or tutorials. LangChain provides an MCP endpoint at https://docs.langchain.com/mcp, which can be used from tools to fetch up-to-date documentation.
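For clients that take the common `mcpServers` JSON configuration, registering the endpoint might look like the fragment below. The exact field names and config location vary by client, and the `"langchain-docs"` key is an arbitrary label — check your client's own documentation for the precise shape.

```json
{
  "mcpServers": {
    "langchain-docs": {
      "url": "https://docs.langchain.com/mcp"
    }
  }
}
```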

LangChain

The core framework for composing modular LLM applications and agent workflows

LangGraph

An open-source AI agent framework for building, deploying, and managing complex generative AI agent workflows

Deep Agents

A standalone library in the LangChain ecosystem for building agents with planning, subagents, file-system tools, and other capabilities for complex multi-step tasks

LangSmith

A unified observability & evals platform where teams can debug, test, and monitor AI app performance