Autonomous AI is not a single technology. It's an assembly of four layers (model, tools, harness, permissions) working together in a loop.
An AI is "autonomous" when it can take a goal, decompose it, use tools to act, observe the results, and iterate, without a human reviewing every step.
The model. A large language model (Claude, GPT, Gemini) that generates reasoning and decisions. The model is the "brain," but it alone cannot do anything. It produces text. That text has to be interpreted and acted on.
The tools. Functions the model can call. Search the web. Read a file. Send an email. Query a database. Tools turn text generation into real-world action. A model without tools is a chatbot. A model with tools is an agent.
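At its simplest, a tool is just a named function the harness can dispatch to. A minimal sketch, with illustrative tool names and signatures (not any real harness's API):

```python
def read_file(path: str) -> str:
    """Tool: return the contents of a file."""
    with open(path) as f:
        return f.read()

def web_search(query: str) -> str:
    """Tool: stand-in for a real search call (hypothetical)."""
    return f"results for: {query}"

# The registry maps tool names, as the model emits them, to callables.
TOOLS = {
    "read_file": read_file,
    "web_search": web_search,
}

def call_tool(name: str, **kwargs) -> str:
    """Dispatch a model-requested tool call to the matching function."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    return TOOLS[name](**kwargs)
```

The key design point is the registry: the model only ever names a tool in text, and the registry is the boundary where that text becomes an action.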
The harness. The runtime that connects the model to the tools. It parses the model's tool-call requests, executes them, and feeds results back. Claude Code is one harness. There are many others. A good harness makes the loop reliable.
The permissions. The rules that govern what the harness will let the agent do without checking. Read-only by default? Allowed to write files? Allowed to make network calls? Allowed to spend money? Permission design is what makes autonomy safe.
Goal → Reason → Act (call tool) → Observe (tool result) → Reason → ...
This is the canonical agent loop. The model reasons about what to do, the harness runs the tool, the result becomes new input, the model reasons again. Repeat until the goal is met or a termination condition fires.
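The loop above can be sketched in a few lines. The "model" here is a scripted stub that stands in for an LLM call; the loop structure, not the stub, is the point:

```python
def agent_loop(goal, model, tools, max_steps=10):
    """Run Reason -> Act -> Observe until the model signals done."""
    history = [("goal", goal)]
    for _ in range(max_steps):           # termination condition: step budget
        action = model(history)          # Reason: model decides the next step
        if action["type"] == "done":
            return action["answer"]
        tool = tools[action["tool"]]     # Act: harness runs the requested tool
        result = tool(**action["args"])
        history.append(("observation", result))  # Observe: result is new input
    return None                          # budget exhausted without an answer

# Scripted stub: first call requests a tool, second call finishes.
def stub_model(history):
    if len(history) == 1:
        return {"type": "call", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "done", "answer": history[-1][1]}

result = agent_loop("add 2 and 3", stub_model, {"add": lambda a, b: a + b})
```

Everything else in the stack, tools, permissions, scheduling, hangs off this one loop.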
Autonomy is a spectrum, not a binary. A system is more autonomous the less often a human has to review its steps, the broader the permissions it holds, and the longer it can run unattended.
Once you have the loop working, the next step is putting the loop in the background. Running it on a schedule. Running it headless. Running it without you. That's the path from "cool demo" to "operating infrastructure." The rest of this framework is about walking that path.