LangGraph

LangGraph is LangChain's successor framework for stateful, multi-actor LLM applications. You model agent logic as a graph: nodes are steps (an LLM call, a tool call, routing logic, custom code), and edges route between them based on the current state. Compared to the old chain-style LangChain, it handles branching, loops, and multi-agent coordination much better, and it is the most provider-agnostic of the mature frameworks.

Why graph-based

Old LangChain was built around linear chains. They worked for pipelines but choked on branching logic, loops, and multi-agent handoff. A graph lets you express routing directly: "after this LLM call, if state.X is Y, go to node A; otherwise go to node B." That makes control flow explicit, testable, and loopable. It is more verbose, but the verbosity pays off once your agent logic gets complex.
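The routing idea above can be sketched in plain Python, without the library itself. This is a minimal illustration of the node/edge model that LangGraph makes explicit; the `State` fields, node names, and routing functions are all illustrative, not LangGraph's actual API.

```python
# Graph-style routing in plain Python: nodes transform state, edges pick
# the next node by inspecting state. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class State:
    query: str
    needs_search: bool = False
    results: list = field(default_factory=list)
    answer: str = ""

def classify(state: State) -> State:
    # Stand-in for an LLM call that decides whether a search is needed.
    state.needs_search = "latest" in state.query
    return state

def search(state: State) -> State:
    state.results.append(f"hit for: {state.query}")
    return state

def answer(state: State) -> State:
    state.answer = "answered with search" if state.results else "answered directly"
    return state

END = "__end__"
NODES = {"classify": classify, "search": search, "answer": answer}
EDGES = {
    # "after classify, if state says search, go to search, else answer"
    "classify": lambda s: "search" if s.needs_search else "answer",
    "search": lambda s: "answer",
    "answer": lambda s: END,
}

def run(state: State, entry: str = "classify") -> State:
    node = entry
    while node != END:
        state = NODES[node](state)   # run the node
        node = EDGES[node](state)    # route on the updated state
    return state
```

Because the edges are plain functions of state, each route is unit-testable in isolation, and loops are just edges that point backwards.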

What LangGraph provides

When LangGraph is the right call

When it's overkill

The critique you'll hear

LangChain accumulated a reputation for over-abstraction: too many layers, too many integrations, hard to debug. LangGraph addresses some of this by making the state machine explicit, but it inherits some of the ecosystem's baggage. Before adopting, ask: "Could I write this in ~200 lines on raw API with a simple loop?" If yes, that's often cleaner.

A good fit: a long-running research agent

A research agent that takes 10 minutes to run, needs to pause for user feedback mid-run, and coordinates three sub-agents fits LangGraph very well. Checkpointing lets the agent pause and resume without re-running 5 minutes of search. The state machine makes the sub-agent handoffs explicit, and observability via LangSmith shows you the whole trace.

The same agent written against the raw API would be more code and harder to debug.
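The checkpoint-and-resume property this section leans on can be sketched like so. The step names, the dict-backed "store", and the resume logic are illustrative stand-ins for LangGraph's checkpointer backends, not its real interface:

```python
# Checkpoint-and-resume sketch: persist a snapshot after each step so an
# interrupted run resumes where it left off instead of re-running everything.
import json

def plan(state):      state["plan"] = ["search", "summarize"]; return state
def search(state):    state["hits"] = ["doc1", "doc2"];        return state
def summarize(state): state["summary"] = f"{len(state['hits'])} sources"; return state

STEPS = [("plan", plan), ("search", search), ("summarize", summarize)]

def run(thread_id: str, store: dict, state=None):
    # Resume from the last checkpoint for this thread, if one exists.
    ckpt = store.get(thread_id, {"done": [], "state": state or {}})
    state = ckpt["state"]
    for name, fn in STEPS:
        if name in ckpt["done"]:
            continue  # completed before the interruption; do not re-run
        state = fn(state)
        ckpt = {"done": ckpt["done"] + [name], "state": state}
        store[thread_id] = json.loads(json.dumps(ckpt))  # persist a snapshot
    return state
```

With a durable store keyed by thread, a crash after `search` means the resumed run skips straight to `summarize` rather than repeating minutes of work.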

A bad fit: a simple Q&A agent with 3 tools

Here, LangGraph's abstraction is pure overhead. A 50-line loop over the OpenAI or Anthropic tool-use API does the same thing with less surface area. The framework's value only shows up as complexity grows.
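The "50-line loop" looks roughly like this. To keep the sketch self-contained, `fake_model` stands in for the real OpenAI or Anthropic client call, and the tool name and message shapes are illustrative, not any provider's exact schema:

```python
# A plain tool-use loop: call the model, execute any requested tool, feed the
# result back, repeat until the model returns a final answer.
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub tool for illustration

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    # Stand-in for the real chat API call: first turn requests a tool,
    # second turn answers using the tool result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Oslo"})}}
    tool_out = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"content": f"Forecast: {tool_out}"}

def agent_loop(user_msg: str, model=fake_model) -> str:
    messages = [{"role": "user", "content": user_msg}]
    while True:
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # no tool requested: final answer
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](**args)
        messages.append({"role": "tool", "content": result})
```

Swap `fake_model` for a real client call and this is the whole agent: no graph, no framework, one loop you can step through in a debugger.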

Pitfalls

What to do with this