ReAct (Reasoning + Acting) is the foundational agent loop, introduced by Yao et al. in 2022. At each step, the agent reasons out loud about what to do next, takes an action (calls a tool), observes the result, then reasons again. Nearly every modern agent uses some variant.
Thought: I need to find X. I'll search for it.
Action: web_search(query="X")
Observation: [search results]
Thought: Based on the results, I need to...
Action: [next tool call]
Observation: [result]
...
Thought: I have enough. Final answer: ...
Having the LLM write its reasoning explicitly before each action improves quality on complex tasks: the model commits to a rationale, which makes its actions more coherent, and that reasoning stays in the context to inform subsequent steps.
Modern tool-use APIs (Claude, GPT-4, Gemini) support ReAct-style loops natively via structured output: the model emits its reasoning and tool calls in a structured format, and the orchestrator parses the output, executes the call, and feeds the result back.
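A minimal sketch of such an orchestrator loop. The message shapes, the JSON step format, and the `web_search` tool are illustrative assumptions, not any vendor's API; a scripted stand-in replaces the real model so the example is self-contained:

```python
import json

# Scripted stand-in for a tool-use model: each turn yields either a
# structured tool call or a final answer (format is an assumption).
SCRIPT = [
    {"thought": "I need to find X.", "tool": "web_search", "args": {"query": "X"}},
    {"thought": "I have enough.", "final": "X is 42 (per the search results)."},
]

def fake_model(messages):
    """Pop the next scripted step, indexed by how many turns we've taken."""
    return SCRIPT[sum(1 for m in messages if m["role"] == "assistant")]

def web_search(query):
    return f"[results for {query!r}]"

TOOLS = {"web_search": web_search}

def react_loop(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = fake_model(messages)
        messages.append({"role": "assistant", "content": json.dumps(step)})
        if "final" in step:          # model signalled completion
            return step["final"]
        # Parse the structured tool call, execute it, feed back the observation.
        observation = TOOLS[step["tool"]](**step["args"])
        messages.append({"role": "tool", "content": observation})
    return "Stopped: step limit reached."
```

Swapping `fake_model` for a real API call is the only change needed in principle; the parse-execute-observe skeleton stays the same.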
ReAct loops need clear stop conditions: a final-answer signal from the model, a maximum step count, and a cost or token budget, so a confused agent cannot loop indefinitely.
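Common stop conditions can be checked in one place before each iteration. A small sketch (the function name, thresholds, and the `"final"` completion signal are assumptions for illustration):

```python
def should_stop(step_count, tokens_used, last_step,
                max_steps=10, token_budget=50_000):
    """Return a stop reason, or None to keep looping."""
    if "final" in last_step:          # model signalled completion
        return "done"
    if step_count >= max_steps:       # hard iteration cap
        return "step_limit"
    if tokens_used >= token_budget:   # cost ceiling
        return "budget"
    return None
```

Returning a reason rather than a bare boolean makes it easy to log *why* the loop ended, which matters when debugging agents that stall.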
For tasks with a known execution plan, skip ReAct and use a deterministic workflow. ReAct's value is in exploration; if the path is already known, the per-step reasoning is pure overhead.
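For contrast, a deterministic workflow is just a fixed sequence of calls with no reasoning turns between them (the `fetch`/`parse`/`summarize` helpers here are hypothetical placeholders):

```python
# A fixed three-step pipeline: when the plan is known in advance,
# call the steps in order -- no LLM reasoning between steps.
def fetch(url):
    return f"raw:{url}"

def parse(raw):
    return raw.removeprefix("raw:")

def summarize(text):
    return f"summary of {text}"

def pipeline(url):
    return summarize(parse(fetch(url)))
```

Each step might still call an LLM internally (e.g. `summarize`), but the control flow is code, not model output, which makes the workflow cheaper and fully predictable.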