ReAct (Reasoning + Acting) is the canonical agent pattern: the model reasons about what to do, takes an action (a tool call), observes the result, and loops. It's the default; most agents are ReAct agents.
```python
# Illustrative pseudocode — `model` and `tool` stand in for real components.
done = False
while not done:
    thought = model.reason(context)        # decide what to do next
    action = model.choose_tool(thought)    # pick a tool and its arguments
    result = tool.execute(action)          # act, then observe
    context += (thought, action, result)   # accumulate the trajectory
    done = model.is_done(context)          # check a stop condition each turn
```
Simple in concept, surprisingly hard in practice.
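Fleshed out with stubbed components, a runnable sketch of the loop might look like this. Everything here (`fake_model`, `TOOLS`, the "return no tool to signal completion" convention) is illustrative, not a real API:

```python
# Minimal ReAct loop with a stubbed model and tool registry.
TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "fetch": lambda url: f"contents of {url}",
}

def fake_model(context):
    """Stub 'model': searches once, then declares the task done.
    Returns (thought, tool_name, tool_arg); tool_name=None means stop."""
    if any(step[1] == "search" for step in context):
        return ("I have enough information", None, None)
    return ("I should search first", "search", "react agents")

def react_loop(task, max_turns=10):
    context = []
    for _ in range(max_turns):
        thought, tool_name, tool_arg = fake_model(context)
        if tool_name is None:                # model signals completion
            return context
        result = TOOLS[tool_name](tool_arg)  # act and observe
        context.append((thought, tool_name, result))
    return context                           # hit the turn limit

trace = react_loop("what is ReAct?")
```

A real implementation swaps `fake_model` for an LLM call that parses a tool invocation out of the completion; the loop shape stays the same.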
Agent calls the same failing tool 50 times. Solution: hard caps on tool calls, exponential backoff, escalation when stuck.
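A minimal sketch of that guard, assuming a tool is any callable that may raise:

```python
import time

def call_with_backoff(tool, args, max_attempts=3, base_delay=1.0):
    """Retry a failing tool with exponential backoff, then escalate.
    `tool`, `args`, and the failure message format are illustrative."""
    for attempt in range(max_attempts):
        try:
            return tool(args)
        except Exception as exc:
            if attempt == max_attempts - 1:
                # Escalate instead of looping forever: surface the failure
                # back to the agent (or user) as an observation.
                return f"TOOL_FAILED after {max_attempts} attempts: {exc}"
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

The key is the hard cap: after `max_attempts`, the failure becomes an observation the model can reason about rather than a loop it repeats.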
Every step appends to context. On a 50-step task, the prompt becomes huge. Solution: summarize old context into checkpoints, drop low-value tool results.
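One way to sketch the compaction, with a stubbed summarizer standing in for a real model call:

```python
def compact_context(steps, keep_recent=5, summarizer=None):
    """Collapse old steps into one checkpoint summary; keep recent ones verbatim.
    In practice `summarizer` calls the model; the default here is a stub."""
    if len(steps) <= keep_recent:
        return steps
    old, recent = steps[:-keep_recent], steps[-keep_recent:]
    if summarizer is None:
        summary = f"[checkpoint: {len(old)} earlier steps summarized]"
    else:
        summary = summarizer(old)
    return [summary] + recent
```

Run after every turn, this keeps the prompt bounded: one checkpoint plus the last few steps, instead of the full 50-step trajectory.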
Agent has 20 tools, picks the wrong one. Solution: fewer tools, better descriptions, or delegate to a sub-agent (see multi-agent).
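As an illustration, hypothetical tool schemas whose descriptions spell out when to use — and when not to use — each tool; sharper descriptions tend to reduce wrong-tool picks more than adding tools does:

```python
# Hypothetical tool specs — names and description wording are assumptions.
TOOL_SPECS = [
    {
        "name": "web_search",
        "description": (
            "Broad keyword search. Use when you do not yet have a URL. "
            "Do NOT use to re-fetch a page you already found."
        ),
    },
    {
        "name": "fetch_url",
        "description": (
            "Fetch one specific URL. Use only after search has surfaced it."
        ),
    },
]
```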
Agent keeps going long after the task is done. Solution: explicit "stop when X" instructions, max-turn limits, model-emitted "task complete" signal.
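Those stop conditions combine into a tiny check; the `DONE_MARKER` convention is an assumption here, not a standard:

```python
DONE_MARKER = "TASK_COMPLETE"   # illustrative model-emitted completion signal

def should_stop(reply, turn, max_turns=15):
    """Stop on an explicit completion signal or a hard turn cap."""
    return DONE_MARKER in reply or turn >= max_turns
```

The marker only works if the prompt also tells the model to emit it; the turn cap is the backstop when it doesn't.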
Early step sets the agent on the wrong track. Solution: planning pass before execution (see plan-execute), checkpoints with user approval, evaluation harness.
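A sketch of the approval-gated variant, where `plan`, `execute`, and `approve` are hypothetical stand-ins for the model and UI calls:

```python
def run_with_checkpoints(plan, execute, approve):
    """Planning pass first, then a user-approval gate before each step,
    so a wrong early step is caught before it compounds."""
    results = []
    for step in plan():          # planning pass: produce steps up front
        if not approve(step):    # user rejects: stop before drift compounds
            break
        results.append(execute(step))
    return results
```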
Goal: answer the user's research question using web_search and fetch_url.
At each step, reason:
1. What do I still need to know?
2. Is it a broad search or a specific URL fetch?
3. Call the tool.
4. Evaluate: did the result help?
Stop when:
- You can answer the question
- You've made 5 tool calls without substantial new info
- The user's question is beyond the scope of these tools
On tool error:
- Retry once with reformulated input
- If still erroring, state the failure and what you'd need to proceed
If the task has 20+ known steps and minimal branching, plan-execute will outperform ReAct: less drift, more predictable behavior. If the task has parallel subtasks, multi-agent can run them concurrently.
But for 80% of agent work, ReAct is the right default.