
Grounding

Tying an AI's answers to real, verifiable sources so it can't just make things up.

Explained simply.

Grounding is the antidote to hallucination. Instead of letting the model answer from its training memory (which might be stale or wrong), you force it to answer ONLY using sources you provide - documents, database rows, search results. The model is instructed to cite those sources or refuse. A well-grounded system is much harder to catch in a lie because every claim traces back to a source you can check.

An example.

A grounded customer support bot is told: 'Answer only using the help articles I'm pasting below. If the answer isn't in them, say "I don't have that info" and offer to hand off to a human.' Now if a user asks something the articles don't cover, the bot honestly says it doesn't know - instead of making something up.
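The instruction above is usually assembled into a single prompt at request time. Here is a minimal sketch of that assembly step in Python; the function name and prompt wording are illustrative, not from any specific SDK - in a real system you would send the resulting string to your model API of choice.

```python
def build_grounded_prompt(question: str, articles: list[str]) -> str:
    """Assemble a prompt that restricts the model to the supplied sources.

    This is an illustrative sketch: the exact wording of the
    instructions is up to you, not prescribed by any library.
    """
    # Number each source so the model can cite it, e.g. "[Source 2]".
    sources = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(articles)
    )
    return (
        "Answer ONLY using the help articles below. "
        "Cite the source number for every claim. "
        "If the answer is not in them, reply exactly: "
        '"I don\'t have that info."\n\n'
        f"{sources}\n\nQuestion: {question}"
    )


prompt = build_grounded_prompt(
    "How do I reset my password?",
    ["To reset your password, open Settings > Security and click Reset."],
)
print(prompt)
```

The key design choice is that the sources travel inside the prompt on every request, so the model's answer can only be as stale as your documents, not as stale as its training data.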

Why it matters.

Grounding is what turns an AI from a plausibly-wrong chatbot into a reliable production system. Nearly every serious enterprise AI app is grounded in some way. If you're shipping AI to end users, figure out grounding before you ship.