What is an AI agent?
📖 4 min read · Updated 2026-04-19
The simplest definition: an AI agent is an LLM in a loop with tools. It gets a goal. It thinks. It picks a tool. It uses the tool. It thinks again based on what the tool returned. It repeats until the goal is done. That's it.
The three components
- The LLM, the reasoning engine. Takes context, decides what to do next.
- Tools, functions the LLM can call. Search the web, query a database, send an email, run code.
- The loop, orchestration that runs until the task is complete or a stop condition fires.
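The three components fit in a few lines. Here's a minimal sketch, assuming a hypothetical `call_llm` stub in place of a real model and a toy `search_web` tool:

```python
# Minimal agent loop: the LLM decides, a tool runs, the result feeds back in.

def search_web(query: str) -> str:
    # Stub tool; a real implementation would call a search API.
    return f"results for: {query}"

TOOLS = {"search_web": search_web}

def call_llm(goal: str, history: list) -> dict:
    # Scripted stand-in for a real model: first ask for a search, then finish.
    if not history:
        return {"action": "tool", "tool": "search_web", "input": goal}
    return {"action": "finish", "answer": f"Answer based on {history[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        decision = call_llm(goal, history)        # the LLM: decide what to do next
        if decision["action"] == "finish":
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])  # a tool: take the action
        history.append(result)                    # the loop: feed the result back
    return "stopped: step limit reached"

print(run_agent("latest Python release"))
```

Swap `call_llm` for a real model API and `TOOLS` for real functions and the shape stays the same: that's the whole pattern.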
Why this is different from a normal LLM call
A normal LLM call: prompt in, answer out. One shot. The LLM does what it can with the context it has.
An agent: prompt in, but the LLM can reach out for information, take actions, and incorporate results before responding. It can decide "I don't know this, let me search," execute the search, read the results, and respond. The agent can take many reasoning and action steps, not just one.
The spectrum of autonomy
- Prompt + single tool, minimal agent behavior, one tool call per request
- ReAct-style loops, multi-step reasoning and tool use until answer is reached
- Planning agents, form a plan first, then execute steps
- Multi-agent systems, multiple specialized agents coordinating
- Fully autonomous, agents with long-running tasks, self-correction, persistent memory
Most production agents sit in the middle: structured loops with bounded autonomy.
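"Bounded autonomy" usually means hard stop conditions around the loop. A sketch, with hypothetical `step_fn` and `is_done` callbacks standing in for the LLM step and the completion check:

```python
import time

def bounded_agent(step_fn, is_done, max_steps: int = 10, timeout_s: float = 30.0):
    """Run step_fn repeatedly until is_done, a step budget, or a timeout fires.
    step_fn and is_done are placeholders for the model call and completion check."""
    start = time.monotonic()
    state = None
    for step in range(max_steps):
        if time.monotonic() - start > timeout_s:
            return state, f"timeout after {step} steps"
        state = step_fn(state)           # one reasoning/action step
        if is_done(state):
            return state, f"done in {step + 1} steps"
    return state, "step limit reached"

# Usage: a trivial step function that counts to 3, well inside both bounds.
final, status = bounded_agent(
    step_fn=lambda s: (s or 0) + 1,
    is_done=lambda s: s >= 3,
)
print(final, status)
```

The point is that the agent never gets unbounded control: every production loop has some version of `max_steps` and a timeout.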
What agents enable
- Tasks that require information the model doesn't have (RAG, search, tool use)
- Multi-step workflows where each step depends on prior results
- Tasks requiring real-world actions (sending emails, writing files, booking meetings)
- Longer-running work that needs planning and checkpoints
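The second bullet, multi-step workflows, is just function composition where each step consumes the previous step's output. A toy sketch with invented stub tools (`find_paper`, `summarize`, `draft_email` are illustrative, not real APIs):

```python
# Each step depends on the prior step's result; all three tools are stubs.

def find_paper(topic: str) -> str:
    return f"paper on {topic}"

def summarize(doc: str) -> str:
    return f"summary of {doc}"

def draft_email(summary: str) -> str:
    return f"email containing {summary}"

def workflow(topic: str) -> str:
    doc = find_paper(topic)       # step 1: gather information
    summary = summarize(doc)      # step 2: needs step 1's output
    return draft_email(summary)   # step 3: needs step 2's output

print(workflow("agents"))
```

A single LLM call can't do this because step 2's input doesn't exist until step 1 has run; an agent sequences the steps itself instead of having them hard-coded as above.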
What agents don't magically fix
- Bad prompts, agents amplify prompt quality, they don't replace it
- Bad tools, agents can only do what tools allow
- Bad reasoning, if the base model reasons poorly, loops don't help
- Bad data, garbage inputs produce garbage agent traces