Tool use

Tool use is how an agent does anything outside its own head. Without tools, the model just talks. With tools, it can search, query, send, create, delete, run code. A tool is a function you register with the model. It has a name, a description, and a schema for its arguments. The model requests a call, your code runs the call, the result goes back. That's the whole mechanic. Getting it right is 80% of the work of building a good agent.

The mental model

You're not giving the model access to your functions. You're giving it a menu. The menu lists each tool's name, what it does, and what arguments it needs. The model picks a dish. You cook it. You hand back the plate. The model decides what to do with it.

The model never sees your code. It only sees what's on the menu. If your menu is confusing, the model orders the wrong thing. If your menu is precise, the model orders well.

Anatomy of a tool

{
  "name": "search_web",
  "description": "Search the web for recent information. Returns top 5 results with title, url, snippet.",
  "input_schema": {
    "type": "object",
    "properties": {
      "query": {"type": "string", "description": "what to search for"},
      "recency": {"type": "string", "enum": ["day", "week", "month"], "description": "how recent results must be"}
    },
    "required": ["query"]
  }
}
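The schema is a contract, so it's worth checking the model's arguments against it before running anything. A minimal hand-rolled sketch in Python (a real system might use a full JSON Schema validator instead):

```python
def validate_args(args: dict, schema: dict) -> list[str]:
    """Return a list of problems with the arguments; empty list means they pass."""
    problems = []
    for field in schema.get("required", []):
        if field not in args:
            problems.append(f"missing required field: {field}")
    for field, value in args.items():
        spec = schema.get("properties", {}).get(field)
        if spec is None:
            problems.append(f"unexpected field: {field}")
        elif "enum" in spec and value not in spec["enum"]:
            problems.append(f"{field} must be one of {spec['enum']}")
    return problems

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "recency": {"type": "string", "enum": ["day", "week", "month"]},
    },
    "required": ["query"],
}

validate_args({"query": "llm agents", "recency": "year"}, schema)
# → ["recency must be one of ['day', 'week', 'month']"]
```

Returning problems as data rather than raising keeps the check composable with the error-handling pattern later in this section.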

The five-step tool-use loop

  1. You send the model the conversation plus your tool specs.
  2. The model replies with a tool-call request: a tool name and arguments.
  3. Your code looks up the tool, checks the arguments, and runs it.
  4. You append the result to the conversation and call the model again.
  5. The model either requests another tool or returns its final answer. Repeat from step 2 until it answers.
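In code, the whole loop is a few lines. This is a placeholder sketch, not any real SDK: `call_model`, the message shapes, and the `tools` registry (mapping name to spec and function) are all assumptions.

```python
def agent_loop(messages: list, tools: dict, call_model, max_turns: int = 10) -> str:
    """Drive the tool-use loop until the model stops requesting tools."""
    for _ in range(max_turns):
        # Send conversation + tool specs; model replies with text or tool calls.
        reply = call_model(messages, [t["spec"] for t in tools.values()])
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]          # final answer, no more tools needed
        for call in reply["tool_calls"]:
            result = tools[call["name"]]["fn"](**call["args"])   # run the tool
            # Hand the plate back: result goes into the conversation as a message.
            messages.append({"role": "tool", "name": call["name"], "content": result})
    raise RuntimeError("agent exceeded max_turns without answering")
```

The `max_turns` cap matters: without it, a model stuck retrying a broken tool loops forever on your API bill.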

What the big three APIs look like

Different names, same pattern. OpenAI returns tool_calls on the assistant message and expects a follow-up message with role "tool"; Anthropic returns tool_use content blocks and expects tool_result blocks back; Google's Gemini uses functionCall and functionResponse parts. Write your orchestrator against a thin adapter and you can swap providers without rewriting the loop.
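A sketch of the adapter's job: normalize each provider's response into one internal shape. The dicts here are simplified stand-ins for the OpenAI and Anthropic response formats (tool_calls with JSON-string arguments vs. tool_use content blocks with an already-parsed input dict); check the providers' docs for the full shapes.

```python
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    id: str
    name: str
    args: dict

def from_openai(message: dict) -> list[ToolCall]:
    # OpenAI-style: message["tool_calls"][*]["function"]["arguments"] is a JSON string.
    return [
        ToolCall(c["id"], c["function"]["name"], json.loads(c["function"]["arguments"]))
        for c in message.get("tool_calls", [])
    ]

def from_anthropic(message: dict) -> list[ToolCall]:
    # Anthropic-style: content blocks with type "tool_use" carry a parsed `input` dict.
    return [
        ToolCall(b["id"], b["name"], b["input"])
        for b in message.get("content", [])
        if b.get("type") == "tool_use"
    ]
```

Everything downstream of the adapter (validation, dispatch, error handling) sees only ToolCall and never cares which provider produced it.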

Parallel tool calls

Modern APIs let the model request multiple tools in one turn. "Search for X, look up Y, fetch Z." Your code runs them in parallel and returns all three results at once. This is a huge latency win for independent calls. If two tools have no dependency, let the model batch them.
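Running a batch of independent calls concurrently is a few lines with a thread pool. A sketch, with a made-up slow tool standing in for network-bound work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_parallel(calls: list[dict], tools: dict) -> list:
    """Execute independent tool calls concurrently; results return in request order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(tools[c["name"]], **c["args"]) for c in calls]
        return [f.result() for f in futures]

# Hypothetical slow tool: three calls overlap instead of queuing one after another.
def slow_lookup(key: str) -> str:
    time.sleep(0.1)                       # pretend this is network latency
    return key.upper()

calls = [{"name": "slow_lookup", "args": {"key": k}} for k in ("x", "y", "z")]
results = run_parallel(calls, {"slow_lookup": slow_lookup})   # ~0.1s total, not ~0.3s
```

Threads are the right fit here because tool calls are almost always I/O-bound; if a tool writes shared state, it needs its own locking.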

A worked example: "Which of my last 3 support tickets is still open?"

  1. Model sees: user prompt + 4 tool specs (list_tickets, get_ticket, update_ticket, escalate).
  2. Model requests: list_tickets(user_id: "me", limit: 3, order_by: "created_desc").
  3. You run it, return: [{id: 991, status: "closed"}, {id: 992, status: "open"}, {id: 993, status: "closed"}].
  4. Model thinks: only 992 is open, so the answer is one.
  5. Model returns: final answer text. No more tool calls needed.

The model never called get_ticket, update_ticket, or escalate. It picked the one tool that answered the question. Good menu → good choice.
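The dispatch behind step 3 can be as plain as a dict from tool name to function. A sketch using the example above; `list_tickets` is a hypothetical stand-in returning the ticket data from step 3:

```python
def list_tickets(user_id: str, limit: int, order_by: str = "created_desc") -> list[dict]:
    # Stand-in for a real ticket store; returns the example data from the walkthrough.
    tickets = [
        {"id": 991, "status": "closed"},
        {"id": 992, "status": "open"},
        {"id": 993, "status": "closed"},
    ]
    return tickets[:limit]

# The menu's kitchen side: name → function. The other three tools would register here too.
TOOLS = {"list_tickets": list_tickets}

call = {"name": "list_tickets", "args": {"user_id": "me", "limit": 3, "order_by": "created_desc"}}
result = TOOLS[call["name"]](**call["args"])
open_count = sum(1 for t in result if t["status"] == "open")   # → 1
```

The counting happens in the model's head, not in your code; your code only produced the raw ticket list.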

Errors: return them, don't throw them

When a tool fails, don't crash the loop. Return a structured error as the tool result:

{
  "error": "rate_limit",
  "message": "Rate limited. Try again in 60 seconds.",
  "retry_after": 60
}

Now the model can read the error and decide: retry, pivot to a different tool, or tell the user. If you throw instead, the agent just dies. If you return a plain string like "something broke," the model has nothing to reason about. Structured errors let the agent recover.
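A sketch of the wrapper that enforces this, assuming a hypothetical RateLimitError raised by the tool; the catch-all clause guarantees the loop never sees a bare exception:

```python
class RateLimitError(Exception):
    """Hypothetical exception a tool raises when an upstream API throttles it."""
    def __init__(self, retry_after: int):
        super().__init__(f"Rate limited. Try again in {retry_after} seconds.")
        self.retry_after = retry_after

def run_tool_safely(fn, args: dict) -> dict:
    """Run a tool; convert any failure into a structured result the model can read."""
    try:
        return {"result": fn(**args)}
    except RateLimitError as e:
        return {"error": "rate_limit", "message": str(e), "retry_after": e.retry_after}
    except Exception as e:
        # Fallback: still structured, never a crash that kills the loop.
        return {"error": type(e).__name__, "message": str(e)}
```

The exception class name doubles as a machine-readable error code, which is usually enough for the model to pick between retrying and pivoting.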

Pitfalls

What to do with this