Customer support agent

Customer support is the most-deployed agent pattern in production, and also the one most teams get wrong. The concept is simple: read the ticket, retrieve relevant KB content, answer, escalate if stuck. The execution has a dozen failure modes, most of them about being too eager to answer. The best support agents say "I'll get a human on this" more readily than they say "here's my guess."

The flow
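The flow in the intro — read the ticket, retrieve relevant KB content, answer, escalate if stuck — can be sketched as a single function. This is a minimal sketch, not a framework: `retrieve_kb`, `draft_answer`, and `escalate` are hypothetical stand-ins for whatever retrieval, LLM, and ticketing calls you actually use, and the confidence threshold is illustrative.

```python
def handle_ticket(ticket: str, retrieve_kb, draft_answer, escalate,
                  min_confidence: float = 0.7) -> str:
    """Read the ticket, retrieve, answer, escalate if stuck."""
    articles = retrieve_kb(ticket)
    if not articles:
        # Nothing relevant retrieved: answering anyway means guessing.
        return escalate(ticket, reason="no relevant KB content")
    answer, confidence = draft_answer(ticket, articles)
    if confidence < min_confidence:
        # Low confidence is an escalation trigger, not a license to guess.
        return escalate(ticket, reason="low confidence")
    return answer
```

Note that both failure paths lead to escalation, not to a best-effort answer — that bias is the whole design.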

Core tools
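A plausible tool surface can be inferred from the worked example later in this section (classify, get user context, retrieve, draft, escalate). The names and schemas below are illustrative assumptions, not a prescribed API:

```python
# Hypothetical tool definitions; adapt names and params to your stack.
CORE_TOOLS = [
    {"name": "search_kb",
     "description": "Retrieve KB articles relevant to the ticket",
     "params": {"query": "string", "top_k": "int"}},
    {"name": "get_user_context",
     "description": "Fetch the ticket author's account and billing history",
     "params": {"user_id": "string"}},
    {"name": "draft_reply",
     "description": "Draft a customer-facing reply for human review",
     "params": {"ticket_id": "string", "body": "string"}},
    {"name": "escalate",
     "description": "Hand the ticket to a human agent",
     "params": {"ticket_id": "string", "reason": "string"}},
]
```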

Key design choices

Retrieval quality drives everything

A customer support agent is mostly a retrieval system with an LLM on top. If retrieval returns the wrong articles, no amount of prompting fixes the answer. Invest in retrieval quality before anything else.

Without retrieval quality, you have a confident hallucinator, not a support agent.

Escalation: the hardest design decision

Every support-agent team tunes this. Escalate too aggressively and humans drown in tickets the agent could've handled. Too rarely and customers get garbage answers to questions the agent shouldn't have attempted.

Good escalation triggers:
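Triggers like weak retrieval matches, angry sentiment, explicit requests for a human, and policy-sensitive topics are common choices and can be encoded as one cheap check. The thresholds and topic list here are illustrative assumptions to tune against your own ticket data:

```python
def should_escalate(retrieval_score: float, sentiment: str,
                    user_asked_for_human: bool, topic: str,
                    restricted_topics=("legal", "security")) -> bool:
    """Illustrative escalation triggers; tune against real tickets."""
    return (
        retrieval_score < 0.5          # weak KB match: likely to hallucinate
        or sentiment == "angry"        # frustrated customers want a person
        or user_asked_for_human        # always honor an explicit request
        or topic in restricted_topics  # policy-sensitive areas
    )
```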

A worked example: a billing question

  1. Ticket: "Why did I get charged twice last month?"
  2. Classify: billing, neutral sentiment.
  3. Get user context: 2 charges in March, one was a proration, one was the regular monthly bill.
  4. Retrieve: KB article on proration after plan upgrades.
  5. Draft reply: explains the two charges with specific dates and amounts, links to the KB article, offers to refund the proration if the user prefers.
  6. Human approval required (policy: any reply mentioning refunds goes through a human).
  7. Agent submits the draft, human approves, reply sends.

The agent did 80% of the work; the human approved. Perfect use case for pre-action approval.
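The policy in step 6 — any reply mentioning refunds goes through a human — is a pre-action gate that sits between drafting and sending. A minimal sketch, with `send` and `queue_for_human` as hypothetical hooks into your ticketing system:

```python
import re

def requires_approval(draft: str) -> bool:
    """Policy from the example: any reply mentioning refunds
    goes through a human before sending."""
    return re.search(r"\brefunds?\b", draft, re.IGNORECASE) is not None

def submit_reply(draft: str, send, queue_for_human):
    if requires_approval(draft):
        return queue_for_human(draft)  # human approves, then it sends
    return send(draft)                 # safe to auto-send
```

The important property is that the gate runs on the final draft text, not on the agent's own classification of what it wrote.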

Personalization with caution

Pulling user context makes answers dramatically better. It also makes cross-user leaks possible. Rules:
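One structural defense against cross-user leaks: the user ID passed to the context lookup must come from the authenticated ticket, never from model output. A sketch, assuming a `fetch_account` accessor in your backend:

```python
def get_user_context(ticket: dict, fetch_account) -> dict:
    """Fetch context strictly for the authenticated ticket author.
    The id comes from the auth layer, not from the LLM: a
    prompt-injected user id is exactly how cross-user leaks happen."""
    user_id = ticket["authenticated_user_id"]
    return fetch_account(user_id)
```

The model can ask for "the user's billing history" but never gets to choose whose.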

Hard guardrails

Measurement

Track accuracy and CSAT together. High deflection + low CSAT = agent is just shoving wrong answers at users.
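The joint check above is mechanical to compute. This sketch assumes per-ticket records with a deflection flag and an optional 1-5 CSAT score; the warning thresholds are illustrative:

```python
def support_health(tickets: list) -> dict:
    """Each ticket: {'deflected': bool, 'csat': 1..5 or None}.
    Deflection or CSAT alone misleads; flag the bad combination."""
    n = len(tickets)
    deflection = sum(t["deflected"] for t in tickets) / n
    scored = [t["csat"] for t in tickets if t["csat"] is not None]
    csat = sum(scored) / len(scored) if scored else None
    # High deflection + low CSAT = wrong answers shoved at users.
    warning = deflection > 0.7 and csat is not None and csat < 3.5
    return {"deflection": deflection, "csat": csat,
            "warning_wrong_answers": warning}
```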

The "too eager to answer" trap

Models are trained to be helpful. That bias makes them over-answer. In support, confident wrong answers are worse than "I'll get a human." Prompt the agent to escalate whenever it's uncertain, not to make a best guess.
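A hedged sketch of the kind of system-prompt language that biases the model toward escalation over guessing — exact wording is an assumption and needs testing against your own eval set:

```python
# Illustrative prompt fragment; iterate on wording with real tickets.
ESCALATION_PROMPT = """\
When you are not certain your answer is correct and grounded in a
retrieved article, do not guess. Tell the customer you will bring in
a human and call the escalate tool. A confident wrong answer is worse
than an escalation."""
```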

What to do with this