Tool descriptions matter

The tool description is the single most important piece of prose in your agent. The model reads it at every turn to decide whether to call this tool. A one-liner produces random behavior. A precise description produces reliable behavior. Most teams ship agents with descriptions that look like afterthoughts. Spend thirty minutes rewriting them and your accuracy jumps.

Why descriptions are effectively prompts

When you register a tool, you send the model its name, description, and input schema. That's all it sees. There's no magic. When the model decides "should I call search_web here?" it's reading your description and pattern-matching the current situation against it. If the description is vague, the match is vague. If the description is specific, the match is specific.
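A minimal sketch of what the model actually receives when a tool is registered. This follows a common JSON-style tool format; exact field names vary by provider, but the three ingredients (name, description, input schema) are the same everywhere:

```python
# Illustrative tool registration payload (field names are a common
# convention, not any one provider's exact API).
search_web_tool = {
    "name": "search_web",
    "description": "Search the web.",  # the vague one-liner
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query."}
        },
        "required": ["query"],
    },
}

# Nothing else about the tool reaches the model: no source code, no
# docstrings. The description carries the entire routing signal.
```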

The five-ingredient recipe

A strong description has five ingredients: what the tool does, when to use it, what it returns, what NOT to use it for (and which tool to use instead), and any limits such as rate caps. The strong example below contains all five.

Weak vs strong, side by side

Weak:

"Search the web."

Strong:

"Search the public web for recent news and information. Use this when
the user asks about current events, recent developments, or facts
that may have changed since your training cutoff. Returns top 5
results with title, URL, and snippet. Do NOT use for searching
internal company documents (use search_internal_docs instead) or
for math (use calculator). Rate limited to 30 calls per minute."

Same tool. The second version tells the model exactly when to reach for it, when to reach for a different tool, and what it will get back. Accuracy on ambiguous prompts can jump from roughly 60% to 90% with this kind of rewrite. Free quality.

Parameter descriptions, too

Each parameter gets its own description. Specify the format, the valid values, and examples of what "good" looks like:

{
  "query": {
    "type": "string",
    "description": "Natural language search query. 3-10 words works best. Include specific entities and timeframes, e.g. 'Apple iPhone 16 launch date'."
  },
  "recency": {
    "type": "string",
    "enum": ["day", "week", "month", "year"],
    "description": "How recent results should be. Default: 'month'."
  }
}
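Putting the strong description and the parameter descriptions together, the full tool definition might look like this (a sketch; the field layout follows the JSON-Schema style shown above, not any one provider's exact API):

```python
# The strong search_web definition: description and parameter
# descriptions combined into one registration payload.
strong_search_web = {
    "name": "search_web",
    "description": (
        "Search the public web for recent news and information. Use this "
        "when the user asks about current events, recent developments, or "
        "facts that may have changed since your training cutoff. Returns "
        "top 5 results with title, URL, and snippet. Do NOT use for "
        "searching internal company documents (use search_internal_docs "
        "instead) or for math (use calculator). Rate limited to 30 calls "
        "per minute."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": (
                    "Natural language search query. 3-10 words works best. "
                    "Include specific entities and timeframes, e.g. "
                    "'Apple iPhone 16 launch date'."
                ),
            },
            "recency": {
                "type": "string",
                "enum": ["day", "week", "month", "year"],
                "description": "How recent results should be. "
                               "Default: 'month'.",
            },
        },
        "required": ["query"],
    },
}
```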

The "do NOT use" is the superpower

When your agent has several similar tools, you need explicit routing. The model will not know that search_web and search_internal_docs are different unless you tell it. Every description should say what not to use the tool for and which tool to reach for instead. Without that, overlapping tools get picked at random.

A worked example: routing between three search tools

Imagine you have three tools: search_web, search_internal_docs, search_support_kb.

search_web
  Use for: news, public companies, things outside the company.
  Do NOT use for: internal policies, company-specific info.

search_internal_docs
  Use for: internal policies, HR, engineering wiki.
  Do NOT use for: customer support tickets or public information.

search_support_kb
  Use for: known support questions and past ticket resolutions.
  Do NOT use for: internal engineering docs or general web info.

Now when a user asks "what's our PTO policy" the model has no ambiguity: that's search_internal_docs, not the others.
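One way to keep these routing sections consistent across many tools is to generate each description from structured fields. A sketch, assuming a hypothetical helper (build_description and its arguments are illustrative, not a real library API):

```python
def build_description(purpose, use_for, do_not_use_for):
    """Assemble a tool description with explicit routing sections."""
    parts = [purpose]
    parts.append("Use for: " + "; ".join(use_for) + ".")
    parts.append("Do NOT use for: " + "; ".join(do_not_use_for) + ".")
    return " ".join(parts)

search_internal_docs_desc = build_description(
    purpose="Search internal company documents.",
    use_for=["internal policies", "HR", "engineering wiki"],
    do_not_use_for=[
        "customer support tickets (use search_support_kb)",
        "public information (use search_web)",
    ],
)

# A PTO-policy question now matches "internal policies" here, and the
# other two tools' descriptions explicitly point back to this one.
```

Generating descriptions this way also makes it hard to ship a tool that forgets its "do NOT use" section: the field is required, not optional prose.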

How to test descriptions

Pitfalls

What to do with this