Research agent

A research agent takes a question, searches sources (web, internal, databases), reads what it finds, cross-references, and synthesizes a structured answer with citations. It's one of the most valuable agent patterns because it automates work that's high-value and time-consuming for humans: competitive intelligence, market research, due diligence, investment memos. The craft is in making the agent rigorous, not just fast.

The loop

Core tools

Design choices

A worked example: "Who are the top 3 competitors for product X?"

  1. Decompose: "Who sells similar products?" + "Who shows up in competitive comparison articles?" + "What does product X say about competitors?"
  2. Parallel search on all three.
  3. Read the top 5 results per query, extract named competitors.
  4. Score by frequency + source diversity. A competitor mentioned in 4 independent sources ranks higher than one mentioned once.
  5. Deep-read on the top 3: their product page, pricing, recent news.
  6. Synthesize: 3-competitor brief with URLs for every claim.
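Steps 3-4 (extract names, score by frequency plus source diversity) can be sketched as follows. The competitor names and URLs are made up for illustration, and the scoring formula is one reasonable choice, not the only one:

```python
from collections import defaultdict
from urllib.parse import urlparse

def score_competitors(mentions: list[tuple[str, str]]) -> list[str]:
    """mentions: (competitor_name, source_url) pairs extracted in step 3.
    Score = mention count + number of distinct source domains, so a name
    corroborated by independent sources outranks one repeated on one site."""
    counts: dict[str, int] = defaultdict(int)
    domains: dict[str, set] = defaultdict(set)
    for name, url in mentions:
        counts[name] += 1
        domains[name].add(urlparse(url).netloc)
    scored = {name: counts[name] + len(domains[name]) for name in counts}
    return sorted(scored, key=scored.get, reverse=True)

mentions = [
    ("AcmeCo", "https://review-site.com/a"),      # 4 mentions, 3 domains
    ("AcmeCo", "https://blog.example.com/b"),
    ("AcmeCo", "https://news.example.org/c"),
    ("AcmeCo", "https://review-site.com/d"),
    ("BetaInc", "https://review-site.com/a"),     # 1 mention, 1 domain
]
ranking = score_competitors(mentions)  # AcmeCo scores 7, BetaInc scores 2
```

Counting distinct domains rather than raw URLs is what makes "4 independent sources" mean something: four mentions on the same site add frequency but not diversity.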

The run took 12 tool calls and 4 LLM turns, in ~60 seconds, for ~$0.25. A human doing the same work takes 1-2 hours.

Citation discipline

Citations are the difference between a research agent and a plausible-sounding bullshit generator. Hard rules:
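One way to make the rules enforceable is to give the output a structure in which an uncited claim is a validation error, not a style lapse. A minimal sketch, with hypothetical type and function names:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One assertion in the brief, paired with the URLs that back it."""
    text: str
    sources: list[str] = field(default_factory=list)

def validate_brief(claims: list[Claim]) -> bool:
    """Reject the whole brief if any claim lacks at least one citation."""
    uncited = [c.text for c in claims if not c.sources]
    if uncited:
        raise ValueError(f"uncited claims: {uncited}")
    return True
```

Run this check before the brief leaves the agent; a claim the model can't source gets dropped or flagged, not shipped.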

Failure modes

Parallel is a massive win here

Research is the canonical parallel case. You're running multiple search queries, and they're independent of each other. Batch them. A 5-query research task that takes 10 seconds sequential takes 2 seconds parallel, with no quality loss.
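A minimal asyncio sketch of the batching, with a simulated search call standing in for a real API (the 0.1-second sleep is a stand-in for network latency):

```python
import asyncio
import time

async def search(query: str) -> list[str]:
    # Stand-in for a real search API call; each query is independent I/O.
    await asyncio.sleep(0.1)  # simulated network latency
    return [f"result for {query!r}"]

async def research(queries: list[str]) -> list[list[str]]:
    # One gather, one batch: all searches run concurrently.
    return await asyncio.gather(*(search(q) for q in queries))

queries = [f"query {i}" for i in range(5)]
start = time.perf_counter()
results = asyncio.run(research(queries))
elapsed = time.perf_counter() - start  # ~0.1s concurrent, vs ~0.5s sequential
```

The same shape applies with real tool calls: because the queries share no state, nothing is lost by firing them in one batch and joining on the results.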

Production systems + frameworks

What to do with this