
Hallucination

When an AI makes up facts and says them confidently. The most common failure mode of LLMs.

Explained simply.

LLMs are trained to produce plausible-sounding text, not true text. Usually those overlap. Sometimes they don't. When the model generates a fact, citation, name, or statistic that sounds right but isn't real, that's a hallucination. The model isn't lying on purpose - it's doing exactly what it was trained to do (predict the most likely next word), and sometimes the most likely next word is wrong. The dangerous part is that hallucinations read with the same confident tone as correct answers.
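
A toy sketch of the mechanism (not a real model, probabilities invented for illustration): picking the highest-probability continuation rewards what sounds plausible, with no notion of what is true.

```python
# Toy illustration: next-token prediction picks the most probable continuation,
# which is not the same thing as the correct one.
# Invented probabilities for the prompt "The capital of Australia is".
next_token_probs = {
    "Sydney": 0.46,    # sounds right to many people, but wrong
    "Canberra": 0.38,  # the correct answer
    "Melbourne": 0.16,
}

prediction = max(next_token_probs, key=next_token_probs.get)
print(prediction)  # "Sydney" - fluent, confident, and wrong
```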

An example.

Ask a model for a legal citation and it might produce 'Smith v. Jones, 447 F.3d 221 (2019)' - which is formatted correctly, reads like real case law, and is completely invented. Lawyers have been fined for filing briefs with hallucinated citations.

Why it matters.

Hallucinations don't go away; you manage them. Use retrieval-augmented generation (RAG) so the model can cite real sources. Ask it to show its work. Build checks that verify claims against a trusted database. For anything consequential (legal, medical, financial), never trust the first output.
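
A minimal sketch of what "verify claims against a trusted database" can look like for citations. The database, the regex, and the function names here are hypothetical stand-ins, not a prescribed implementation.

```python
import re

# Hypothetical trusted index of real citations (in practice, a case-law database).
TRUSTED_CASES = {"Brown v. Board of Education, 347 U.S. 483 (1954)"}

def extract_citations(text: str) -> list[str]:
    # Very rough pattern for "Name v. Name, <volume> <reporter> <page> (<year>)".
    pattern = r"[A-Z][\w.]* v\. [A-Z][\w.]*, \d+ [\w.]+ \d+ \(\d{4}\)"
    return re.findall(pattern, text)

def unverified_citations(answer: str) -> list[str]:
    # Return citations the model produced that are NOT in the trusted database.
    return [c for c in extract_citations(answer) if c not in TRUSTED_CASES]

answer = "See Smith v. Jones, 447 F.3d 221 (2019)."
flagged = unverified_citations(answer)
if flagged:
    print("Flag for human review:", flagged)
```

The point of the sketch is the workflow, not the regex: anything the model asserts gets checked against a source you already trust, and anything that fails the check goes to a human instead of the user.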