Tying an AI's answers to real, verifiable sources so it can't just make things up.
Grounding is the antidote to hallucination. Instead of letting the model answer from its training memory (which may be stale or wrong), you force it to answer ONLY from sources you supply: documents, database rows, search results. The model is instructed to cite those sources or refuse. A well-grounded system is much harder to catch in a lie because every claim traces back to a source.
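The cite-or-refuse pattern can be sketched without any model call: build a prompt that numbers the supplied sources and constrains the answer, then check which sources an answer actually cites. This is a minimal illustration; the function names and prompt wording are assumptions, not a specific library's API.

```python
import re

def build_grounded_prompt(question, sources):
    """Assemble a prompt that restricts the model to the provided sources."""
    numbered = "\n".join(f"[{i + 1}] {text}" for i, text in enumerate(sources))
    return (
        "Answer ONLY using the sources below. Cite each claim as [n]. "
        "If the sources do not contain the answer, reply exactly: "
        "\"I can't answer from the provided sources.\"\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

def cited_source_ids(answer, source_count):
    """Return the set of valid source numbers the answer cites, e.g. {1, 2}."""
    return {int(n) for n in re.findall(r"\[(\d+)\]", answer)
            if 1 <= int(n) <= source_count}

# Hypothetical sources for illustration only.
prompt = build_grounded_prompt(
    "When did the data center open?",
    ["The Elmwood data center opened in 2019.",
     "It runs entirely on renewable power."],
)
```

A verifier like `cited_source_ids` is the cheap half of grounding: it confirms the answer points at real sources, while checking that each claim is actually *supported* by its cited source requires a second model pass or human review.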
Grounding is what turns an AI from a plausibly-wrong chatbot into a reliable production system. Serious enterprise AI apps are grounded almost without exception. If you're shipping AI to end users, figure out grounding before you ship.