Claude, overview

Claude is Anthropic's family of large language models. For autonomous AI, Claude matters because it was designed with agents in mind: strong tool use, long context, and built-in safety tuning.

The families

Claude comes in three tiers at any given generation:

- Opus: the most capable tier, for hard reasoning; highest cost.
- Sonnet: the balanced default.
- Haiku: the fastest and cheapest, for simple, high-volume work.

Pick based on task complexity and cost tolerance. Most agents work well on Sonnet; spin up Opus for hard reasoning, and drop to Haiku for cheap, parallel worker tasks.
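The routing rule above can be sketched as a tiny dispatcher. The tier labels and the complexity heuristic are illustrative assumptions, not an official API:

```python
def pick_model(task_complexity: str) -> str:
    """Map a rough complexity label to a Claude tier (sketch only)."""
    tiers = {
        "hard": "opus",      # hard reasoning
        "normal": "sonnet",  # sensible default for most agents
        "cheap": "haiku",    # parallel, low-cost worker tasks
    }
    return tiers[task_complexity]
```

In a real agent, the complexity signal might come from task metadata or a cheap classifier pass; the point is that tier choice is a routing decision, not a hard-coded constant.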

Why Claude for agents

- Designed with agents in mind
- Strong tool use
- Long context
- Built-in safety tuning

How it compares

Honest take: GPT-4 and Gemini are also strong, and for pure language tasks they're comparable. For agent work, Claude has a real edge on tool use, long context, and safety tuning.

For image reasoning or code, the race is closer. Vendor lock-in is a real risk; prefer abstractions that let you swap models.
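A minimal sketch of such an abstraction, using a `Protocol` so agent code never touches a vendor SDK directly (the backend classes are stubs, not real client code):

```python
from typing import Protocol


class ChatModel(Protocol):
    """Provider-agnostic surface: one completion call."""
    def complete(self, prompt: str) -> str: ...


class ClaudeBackend:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the Anthropic API here;
        # stubbed so the sketch is self-contained.
        return f"[claude] {prompt}"


class OtherBackend:
    def complete(self, prompt: str) -> str:
        return f"[other] {prompt}"


def run_agent_step(model: ChatModel, prompt: str) -> str:
    # Agent logic depends only on the Protocol, so swapping vendors
    # is a one-line change at construction time.
    return model.complete(prompt)
```

The design choice: keep the interface narrow (one method) so every vendor can satisfy it, and push vendor-specific features behind optional extensions rather than into the core surface.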

API vs. Claude.ai vs. Claude Code

Three surfaces matter:

- API: programmatic access; what agents and production systems are built on.
- Claude.ai: the consumer chat app.
- Claude Code: a command-line coding agent.

Pricing model (directional)

Claude is priced per token, with input and output billed separately. Output tokens cost roughly 5× input. Prompt caching reduces input cost by ~90% on cached portions, so for agent workflows with a stable system prompt, caching matters a lot. See Prompt caching.
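The arithmetic above can be made concrete. The per-million-token rates below are placeholder assumptions chosen only to illustrate the two ratios in the text (output at 5× input, cached reads at ~10% of the input rate), not published prices:

```python
INPUT_PER_MTOK = 3.00                     # assumed $/M input tokens
OUTPUT_PER_MTOK = 5 * INPUT_PER_MTOK      # output is 5x input
CACHED_PER_MTOK = 0.10 * INPUT_PER_MTOK   # ~90% discount on cache hits


def request_cost(input_toks: int, output_toks: int, cached_toks: int = 0) -> float:
    """Dollar cost of one request under the assumed rates."""
    fresh = input_toks - cached_toks
    return (fresh * INPUT_PER_MTOK
            + cached_toks * CACHED_PER_MTOK
            + output_toks * OUTPUT_PER_MTOK) / 1_000_000


# An agent turn with a stable 20k-token system prompt:
cold = request_cost(21_000, 500)                      # cache miss
warm = request_cost(21_000, 500, cached_toks=20_000)  # cache hit
```

Under these assumed rates the warm request costs a fraction of the cold one, which is why caching the stable system prompt dominates the input bill in agent loops.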