Prompting for agents

Prompting an agent isn't the same as prompting a chatbot. The agent has to make decisions, call tools, recover from errors, and stop when done. The prompt needs to tell it how.

The four sections every agent prompt needs

1. Role and goal

Tell the agent what it is and what it's trying to achieve.

You are a research assistant. Your goal is to answer the user's question
using web search and return a concise, sourced summary.

2. Rules and constraints

What it must do, what it must not do, how to format output.

Rules:
- Always cite sources with URLs
- Never speculate beyond what sources say
- If sources conflict, surface the disagreement, don't hide it
- Stop after 3 search rounds even if you want more data

3. Tool-use guidance

Tell it how to use each tool, when to use which, and how to handle errors.

You have two tools:
- web_search: use for finding new information
- fetch_url: use to read a specific page the user references

If a tool returns an error, try once more with a reformulated input.
If it errors twice, surface the failure to the user, don't hide it.
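The same retry policy can be enforced on the orchestration side, so the model cannot loop forever even if it ignores the prompt. A minimal sketch; `tool` and `reformulate` are hypothetical stand-ins for your actual tool function and query-rewriting step:

```python
def call_with_retry(tool, args, reformulate, max_attempts=2):
    """Call a tool, retrying once with reformulated input on error."""
    errors = []
    for _ in range(max_attempts):
        try:
            return {"ok": True, "result": tool(args)}
        except Exception as exc:  # a real agent would catch tool-specific errors
            errors.append(str(exc))
            args = reformulate(args)  # e.g. rephrase the search query
    # Two failures: surface them instead of hiding them
    return {"ok": False, "errors": errors}
```

Returning the failure explicitly mirrors the prompt rule: the model (or the user) sees what went wrong instead of silently receiving nothing.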

4. Output format

Describe exactly what the final response should look like. For agents, be strict: the output will be parsed or acted on.

XML tags for structure

Claude responds well to XML-tagged sections. They help the model locate parts of the prompt and produce structured outputs.

<task>Summarize the article at the URL below.</task>
<url>https://example.com/article</url>
<constraints>
- 200 words max
- Include the 3 key claims with source quotes
</constraints>
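The same tags make the model's output easy to pull apart downstream. A small sketch, assuming the response wraps sections in XML-style tags like `<summary>` (the tag name is illustrative):

```python
import re

def extract_tag(text, tag):
    """Pull the contents of a single XML-style tag out of model output."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else None
```

A regex is enough for flat, non-nested tags like these; for nested structures, reach for a real XML parser instead.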

Reasoning scaffolds

For complex tasks, explicitly tell the model to think before acting:

Before calling any tool:
1. Identify what's needed to complete the task
2. Decide which tool to call and with what arguments
3. Predict what the result will tell you

After each tool call:
1. Compare the result to what you expected
2. Decide whether to continue or change approach

This works even without extended thinking enabled; the model will produce the reasoning in the main output.

Error recovery

Agents fail. Tools error. Give the model a playbook:

If a tool call errors, retry once with a reformulated input.
If it errors twice, stop retrying and report the failure to the user.
Never exceed your overall round limit, even while recovering.

Watch out: Unbounded retry loops are the #1 way an agent burns money and does nothing. Always set a hard limit.
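A hard cap belongs in the loop itself, not only in the prompt. A sketch of the guard; `run_one_round` is a hypothetical step that returns `(done, result)`:

```python
MAX_ROUNDS = 3  # mirrors the "stop after 3 search rounds" rule in the prompt

def run_agent(run_one_round, max_rounds=MAX_ROUNDS):
    """Drive the agent, but never past the hard round limit."""
    for round_no in range(1, max_rounds + 1):
        done, result = run_one_round(round_no)
        if done:
            return result
    # Out of budget: return what we have rather than looping forever
    return "Stopped at round limit; partial results returned."
```

Even a model that ignores its stop instruction cannot spend a fourth round, because the loop, not the model, owns the budget.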

Output format matters more than you think

If downstream code parses the agent's output, be obsessive about the format. Model output drift is real. Put the format spec near the end of the prompt (it's the last thing the model considers) and use structure like JSON schemas when possible.
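Validating the output before acting on it catches drift early. A minimal sketch using only the standard library; the field names are illustrative, not a fixed schema:

```python
import json

REQUIRED_FIELDS = {"summary": str, "sources": list}  # illustrative schema

def parse_agent_output(raw):
    """Parse and validate the agent's JSON output; fail loudly on drift."""
    data = json.loads(raw)  # raises on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"output drift: bad or missing field {field!r}")
    return data
```

In production, a JSON Schema validator or a typed model library gives richer error messages, but the principle is the same: reject drifted output at the boundary instead of letting it propagate.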

Anti-patterns