When not to use agents

Agents are fun to build, so people build them everywhere. For most tasks they're the wrong tool: slower, more expensive, less predictable, harder to debug. This page is the "don't" list. If your use case shows up here, you probably want a plain function call or a fixed workflow instead.

The test, upfront

Before you build an agent, ask these five questions. If you answer "no" to any of them, you probably don't need an agent.

  1. Will the path through the task be different each run?
  2. Does the next step depend on information I don't have in the prompt?
  3. Do I have at least 3 meaningfully different tools the model could call?
  4. Am I okay with 10-60 seconds per run and variable cost?
  5. Can my team debug a 30-turn agent trace when it goes wrong?

Five yeses means you have a real agent use case. Four or fewer means something else fits better.
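The test is mechanical enough to write down. A minimal sketch, with the five questions copied from the list above:

```python
# The five-question test as a checklist function. One "no" anywhere
# means a plain function call or fixed workflow fits better.

QUESTIONS = [
    "Will the path through the task be different each run?",
    "Does the next step depend on information I don't have in the prompt?",
    "Do I have at least 3 meaningfully different tools the model could call?",
    "Am I okay with 10-60 seconds per run and variable cost?",
    "Can my team debug a 30-turn agent trace when it goes wrong?",
]

def should_build_agent(answers: list[bool]) -> bool:
    # A real agent use case needs a "yes" on all five.
    assert len(answers) == len(QUESTIONS)
    return all(answers)

print(should_build_agent([True, True, True, True, False]))  # → False
```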

The don't-use list

Deterministic tasks: don't use an agent

"Parse this PDF, pull out the five fields, write a row to the database." The steps are known. The sequence never changes. A plain function with one LLM call for extraction is faster, cheaper, more reliable. Wrapping that in an agent adds nothing but variance.
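The PDF example looks like this as a plain function. All the helpers here (`parse_pdf_text`, `llm_extract`, `insert_row`) are hypothetical stand-ins for your PDF parser, LLM client, and database layer, not a real library:

```python
# A deterministic pipeline: three known steps in a fixed order, with
# exactly one LLM call at the step that needs language understanding.
# Every helper below is an illustrative stub.

FIELDS = ["invoice_id", "vendor", "date", "amount", "currency"]

def parse_pdf_text(pdf_path: str) -> str:
    # stand-in for a real PDF parser
    return "Invoice INV-17 from Acme, 2024-03-01, 99.50 USD"

def llm_extract(text: str, fields: list[str]) -> dict:
    # stand-in for one schema-constrained LLM call
    return {f: "..." for f in fields}

def insert_row(table: str, row: dict) -> None:
    # stand-in for the database write
    pass

def process_invoice(pdf_path: str) -> dict:
    text = parse_pdf_text(pdf_path)     # step 1: always the same
    fields = llm_extract(text, FIELDS)  # step 2: the only LLM call
    insert_row("invoices", fields)      # step 3: always the same
    return fields
```

There is no loop and no decision point, which is exactly why an agent adds nothing here.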

Low-latency interactions: don't use an agent

Chat UIs where the user is waiting for a reply under two seconds. Autocomplete. Anything embedded in a hot path. Agents need to loop: think, call, wait, think, call, wait. Even a fast loop is 5-10 seconds. For real-time, put the context in the prompt and do one call.

Compliance-critical outputs: don't use an agent (by itself)

Medical advice, legal decisions, financial approvals. Agents are non-deterministic by design: two runs of the same task can pick different tools, different orders, different answers. Auditors hate that. Use workflows with logged, reviewable steps. If an agent is involved, have it draft and have a human or rule-engine approve.
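One way to structure the draft-and-approve pattern is a sketch like the following, where the model only drafts, a deterministic rule gate decides, and every step lands in an audit log. Names and the rule threshold are illustrative assumptions:

```python
# Draft-then-approve: the LLM step proposes, a reviewable rule engine
# disposes, and each step is logged for auditors. All names and limits
# here are hypothetical.

import time

AUDIT_LOG: list[dict] = []

def log_step(step: str, detail: dict) -> None:
    AUDIT_LOG.append({"ts": time.time(), "step": step, "detail": detail})

def draft_decision(case: dict) -> dict:
    # stand-in for the LLM/agent draft
    draft = {"approve": case["amount"] < 1000, "reason": "under soft limit"}
    log_step("draft", draft)
    return draft

def rule_check(case: dict) -> bool:
    # deterministic hard cap: applies no matter what the draft says
    ok = case["amount"] <= 5000
    log_step("rule_check", {"ok": ok})
    return ok

def decide(case: dict) -> bool:
    approved = draft_decision(case)["approve"] and rule_check(case)
    log_step("final", {"approve": approved})
    return approved
```

Two runs of `draft_decision` may word their reasons differently, but the final approval is always gated by the same logged, deterministic rule.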

When one LLM call is enough

"Given this support ticket, classify the category and sentiment." You have all the input. The output is structured. Just call the LLM with the ticket and a schema. No tools, no loop. The test: if you can fit all the information needed into the prompt, you don't need an agent.
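The ticket example is one call with a schema, nothing more. `call_llm` below is a hypothetical stand-in for your provider's client; most major APIs accept a JSON schema to constrain the output in roughly this shape:

```python
# The one-call pattern: all input in the prompt, structured output via
# a JSON schema. No tools, no loop. `call_llm` is an illustrative stub.

import json

SCHEMA = {
    "type": "object",
    "properties": {
        "category": {"enum": ["billing", "bug", "how-to", "other"]},
        "sentiment": {"enum": ["positive", "neutral", "negative"]},
    },
    "required": ["category", "sentiment"],
}

def call_llm(prompt: str, schema: dict) -> str:
    # stand-in: a real client would return schema-conforming JSON
    return json.dumps({"category": "billing", "sentiment": "negative"})

def classify_ticket(ticket: str) -> dict:
    prompt = f"Classify this support ticket:\n\n{ticket}"
    return json.loads(call_llm(prompt, SCHEMA))
```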

When the tools are flaky

An agent makes 10 tool calls. If each tool fails 5% of the time, about 40% of agent runs hit at least one failure. Now you need recovery logic: retries, fallbacks, "the database was down, try again later." That's a different, harder problem. Fix your underlying tools first. Agents amplify infrastructure problems.
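The 40% figure is just the complement rule: with n independent calls each failing with probability p, the chance of at least one failure is 1 − (1 − p)ⁿ.

```python
# The arithmetic behind the "about 40%" claim: probability that at
# least one of n independent tool calls fails.

def p_any_failure(n_calls: int, p_fail: float) -> float:
    return 1 - (1 - p_fail) ** n_calls

print(round(p_any_failure(10, 0.05), 2))  # → 0.4
```

Note how fast it compounds: at 30 tool calls with the same 5% failure rate, roughly four out of five runs hit a failure.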

When you can't afford to debug them

A broken workflow has a specific failing step you can read and fix. A broken agent has a 40-turn trace of LLM reasoning, tool calls, and results that you have to read through to find the one wrong decision at turn 17. Teams that haven't invested in tracing, eval datasets, and the discipline to actually read the traces should not ship agents. The first time it fails silently in production, you'll understand why.

Cost comparison at scale

At 10,000 runs a month, the gap between a workflow and an unbounded agent is the difference between a Netflix subscription and a car payment. If the agent buys you nothing the workflow couldn't, that's just money set on fire.
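A back-of-envelope version of that claim, with all numbers as illustrative assumptions rather than real pricing: say a single-call workflow burns ~2k tokens per run, while an agent averages 30 turns and re-sends context each turn.

```python
# Back-of-envelope cost at scale. Token counts and the blended
# per-token price are assumptions for illustration, not real pricing.

RUNS_PER_MONTH = 10_000
PRICE_PER_1K_TOKENS = 0.001  # assumed blended price, USD

def monthly_cost(tokens_per_run: int) -> float:
    return RUNS_PER_MONTH * tokens_per_run / 1000 * PRICE_PER_1K_TOKENS

workflow = monthly_cost(2_000)       # one call per run
agent = monthly_cost(30 * 2_000)     # 30 turns, context re-sent each turn
print(workflow, agent)  # → 20.0 600.0
```

Under these assumptions that's $20 versus $600 a month: the subscription versus the car payment. Your real numbers will differ, but the 10-30x multiplier from looping is the part that holds.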

The "we should use an agent" trap

Agents feel like the future, so teams reach for them. Symptoms that you've fallen into this trap: you built an agent and its trace always looks the same. Or every new edge case means a new "tool" that's really just a hard-coded function. Or the agent works great on your demo cases and randomly fails on customer cases. The fix is usually to rip out the loop and replace it with a workflow, keeping the agent only at the step that actually needs a decision.
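The rip-out-the-loop fix tends to look like this: a fixed workflow with a model call kept only at the one step that genuinely branches. All helpers here are hypothetical stubs for illustration:

```python
# After the refactor: fixed steps around one real decision point.
# Every helper is an illustrative stand-in (CRM lookup, one LLM
# routing call, fixed fulfillment steps).

def lookup_customer(ticket: str) -> dict:
    return {"id": 1, "plan": "pro"}   # stand-in for a CRM lookup

def decide_route(ticket: str, data: dict) -> str:
    # stand-in for the single LLM decision; everything else is fixed
    return "refund" if "charged twice" in ticket else "reply"

def issue_refund(data: dict) -> str:
    return f"refund issued for customer {data['id']}"

def draft_reply(ticket: str, data: dict) -> str:
    return "drafted reply"

def handle_ticket(ticket: str) -> str:
    data = lookup_customer(ticket)      # fixed step
    route = decide_route(ticket, data)  # the one real decision point
    if route == "refund":
        return issue_refund(data)       # fixed step
    return draft_reply(ticket, data)    # fixed step
```

The trace of this workflow is three or four steps you can read in order, instead of a 40-turn transcript.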

When an agent is actually the right choice

The flip side: don't under-use agents either. If the task truly branches at runtime, you have real tools, you've thought about tracing, and the variance is acceptable, an agent is exactly right. Research, debugging, open-ended customer support, code generation, anything where the next step depends on what you just learned.

What to do with this

Further reading

Watch

Andrej Karpathy - Intro to Large Language Models (1 hour)