When an AI makes up facts and states them confidently. The most common failure mode of LLMs.
LLMs are trained to produce plausible-sounding text, not true text. Usually those overlap. Sometimes they don't. When the model generates a fact, citation, name, or statistic that sounds right but isn't real, that's a hallucination. The model isn't lying on purpose - it's doing what it was trained to do (predict the next likely word), and the most likely next word happens to be wrong. The dangerous part is that hallucinations read with the same tone of confidence as correct answers.
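The mismatch between plausibility and truth can be sketched with a toy example. The prompt and the probabilities below are invented for illustration, not taken from any real model:

```python
# Toy illustration of next-token prediction: the model ranks candidate
# continuations by how plausible they are given its training data, not
# by whether they are true. All numbers here are made up.

prompt = "The capital of Australia is"

# Imagined model scores: "Sydney" appears more often in this context in
# web text, so it can outscore the correct answer, "Canberra".
next_token_probs = {
    "Sydney": 0.55,
    "Canberra": 0.40,
    "Melbourne": 0.05,
}

# Greedy decoding picks the highest-probability token.
prediction = max(next_token_probs, key=next_token_probs.get)
print(prediction)  # "Sydney" -- fluent, confident, and wrong
```

Nothing in the training objective distinguishes the wrong pick from the right one; both are just high-probability continuations, which is why the wrong answer arrives with the same confident tone.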
Hallucinations don't go away; you manage them. Use retrieval-augmented generation (RAG) so the model can ground its answers in real, citable sources. Ask it to show its work. Build checks that verify claims against a trusted database. For anything consequential (legal, medical, financial), never trust the first output.
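The "verify claims against a trusted database" step could look like the minimal sketch below. `TRUSTED_FACTS` and `verify_claims` are hypothetical names; a real pipeline would extract claims from the model's output with NLP and query an actual knowledge base rather than an in-memory set.

```python
# Minimal sketch of a post-generation verification check, assuming the
# model's output has already been split into discrete factual claims.
# TRUSTED_FACTS is a stand-in for a real knowledge-base lookup.

TRUSTED_FACTS = {
    "the eiffel tower is in paris",
    "water boils at 100 c at sea level",
}

def verify_claims(claims):
    """Tag each claim 'verified' or 'unverified' against the trusted store."""
    return {
        claim: ("verified" if claim.strip().lower() in TRUSTED_FACTS
                else "unverified")
        for claim in claims
    }

model_output = [
    "The Eiffel Tower is in Paris",        # checks out
    "The Eiffel Tower was built in 1921",  # hallucinated date (it opened in 1889)
]
print(verify_claims(model_output))
```

The useful property is the failure mode: anything the store cannot confirm is flagged for human review rather than silently passed through, which is the right default for consequential domains.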