Retrieval-Augmented Generation (RAG): giving an AI a library to look things up in before it answers. Open-book mode.
An LLM only knows what it learned during training. It doesn't know your company's internal docs, last week's emails, or a PDF you just got. RAG solves this in three steps: (1) store your documents in a searchable index; (2) when a question comes in, find the most relevant snippets; (3) paste those snippets into the AI's prompt along with the question. Now the AI is 'open-book' - it can quote from your library.
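The three steps can be sketched in a few lines. This is a toy illustration, not a real implementation: the "index" is just a Python list, and relevance is scored by keyword overlap where a real system would use vector embeddings. The names (`retrieve`, `build_prompt`) and the sample documents are made up for the example.

```python
# Step 1: "store" the documents in a searchable index (here, a plain list).
docs = [
    "Refunds are processed within 5 business days of approval.",
    "Our office is closed on public holidays.",
    "Employees accrue 1.5 vacation days per month worked.",
]

def score(question, doc):
    # Toy relevance: count shared lowercase words.
    # Real systems compare embedding vectors instead.
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question, docs, k=2):
    # Step 2: find the k most relevant snippets for the question.
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question, docs):
    # Step 3: paste the retrieved snippets into the prompt with the question.
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How many vacation days do employees get?", docs)
print(prompt)
```

Swap the keyword scorer for an embedding model and the list for a vector database, and this is the same shape almost every production RAG system has.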
Fine-tuning takes days, costs money, and gets stale the moment your docs change. RAG updates the moment you update your docs. Almost every 'ChatGPT for your company' product is RAG under the hood.