When not to use agents

Agents get overused because they're exciting to build. But for many tasks they're worse on every metric: cost, latency, reliability, debuggability. Here are the cases where you should skip the agent.

When the task is deterministic

"Parse this document, extract these five fields, insert into database." Not an agent problem. A single LLM call, or even a traditional parser, is faster and more reliable.
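The point above can be made concrete: a deterministic extractor is a few lines of plain code. This is a hypothetical sketch (the document layout and field names are assumptions, not from the source), but it illustrates why no agent loop is needed.

```python
import re

# Hypothetical invoice layout: fixed fields extracted with plain
# regexes. Same input, same output, every run -- no LLM, no agent.
FIELD_PATTERNS = {
    "invoice_id": re.compile(r"Invoice #(\w+)"),
    "date": re.compile(r"Date: (\d{4}-\d{2}-\d{2})"),
    "total": re.compile(r"Total: \$([\d.]+)"),
}

def extract_fields(text: str) -> dict:
    """Deterministically extract the known fields from a document."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        fields[name] = match.group(1) if match else None
    return fields
```

If a field is genuinely free-form, a single LLM call with a strict output schema covers it; the orchestration around it can stay this simple.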

When latency matters

Agent loops can take 10-60 seconds. For real-time applications, agents are too slow. Pre-compute, cache, or use a single-turn LLM with sufficient context.
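The "cache instead of loop" option can be sketched in a few lines. Here `summarize` is a hypothetical stand-in for any expensive single-turn model call; the caching layer is what matters.

```python
from functools import lru_cache

def summarize(text: str) -> str:
    # Stand-in for an expensive single-turn LLM call
    # (imagine seconds of network latency here).
    return text[:40]

@lru_cache(maxsize=1024)
def cached_summarize(text: str) -> str:
    # Repeated requests for the same input return from memory
    # instead of paying the model's latency again.
    return summarize(text)
```

Pre-computing popular inputs offline pushes the latency to zero at request time; an agent loop can never match that.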

When cost matters

Every tool call adds tokens: the tool result goes back into the context, and the model reasons over it again on the next turn. Agent sessions often run 5-10x the tokens of a single-turn call. At scale, this dominates your bill.
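The multiplier is easy to put in dollar terms. All numbers below are illustrative assumptions (price, token counts, traffic), not real pricing; the point is that the agent multiplier applies to the whole bill.

```python
# Back-of-envelope cost comparison; every number is an assumption.
PRICE_PER_1K_TOKENS = 0.01  # assumed blended input/output price, USD

def monthly_cost(tokens_per_request: int, requests_per_month: int) -> float:
    """Total monthly spend at a flat per-token price."""
    return tokens_per_request / 1000 * PRICE_PER_1K_TOKENS * requests_per_month

single_turn = monthly_cost(2_000, 1_000_000)      # one call per request
agent_loop = monthly_cost(2_000 * 8, 1_000_000)   # ~8x tokens across the loop
```

At a million requests a month, an 8x token multiplier is an 8x bill, which is usually the difference between a rounding error and a line item.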

When you need determinism

Regulated workflows, auditable decisions, legal outputs. An agent's non-determinism (different reasoning each run) is a feature for exploration but a liability for compliance.

When the task is narrow and well-defined

"Given this weather data, tell me if I should bring a jacket." The data fits in a single prompt, or even a plain function call; no agent needed.
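For a task this narrow, the whole "agent" collapses to one function. A hypothetical sketch (the threshold is an assumption for illustration):

```python
def should_bring_jacket(temp_c: float, raining: bool) -> bool:
    """Decide from structured weather data; no model call required.

    The 12 C threshold is an arbitrary illustrative choice.
    """
    return raining or temp_c < 12.0
```

If the input were unstructured prose instead of fields, one LLM call that returns a boolean would still beat an agent loop.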

When the tools are unreliable

Agents amplify tool failures. If your database is flaky, your agent will hit failures mid-task, and recovering partial progress is hard to get right. Fix the infrastructure before adding agents.
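The amplification is just compounding probability: if each tool call succeeds with probability p and a run makes n calls, the whole run succeeds with p to the n. A minimal sketch, with illustrative numbers:

```python
def task_success_rate(per_call_success: float, n_calls: int) -> float:
    """Probability an agent run completes if every tool call must succeed."""
    return per_call_success ** n_calls

# A "99% reliable" tool still sinks a 20-call agent run
# roughly one time in five: 0.99 ** 20 ~= 0.82.
```

This is why a reliability problem that is tolerable for a single call becomes a product problem inside a loop.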

When the team can't debug agents

Agent traces are complex. Without observability tooling and the discipline to review them, agents fail silently in production. If your team can't commit to observability, don't ship an agent.

When the problem is ill-defined

"Help me with marketing." An agent can't resolve an ambiguous brief any better than a human could without asking clarifying questions. Fix the spec before building the tool.