Teaching an existing model new tricks by training it on your specific examples.
You take a base model (like GPT or Claude or Llama) that already knows language, and you show it hundreds or thousands of examples of YOUR task. The model's internal weights shift slightly, making it better at that specific kind of input. Done right, fine-tuning produces a model that's genuinely better at your niche - faster, cheaper per call, and more consistent than prompt tricks alone.
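Those "examples of YOUR task" usually get packaged as one JSON object per line (JSONL) in a chat-style format before training. A minimal sketch, with made-up support-ticket data - the exact field names vary by provider, and this mirrors the shape OpenAI's fine-tuning API expects:

```python
import json

# Hypothetical labeled examples for a ticket-classification task.
examples = [
    {"ticket": "My invoice charged me twice", "label": "billing"},
    {"ticket": "App crashes when I tap Export", "label": "bug"},
]

def to_jsonl(examples):
    """Convert labeled pairs into chat-format JSONL training lines."""
    lines = []
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "Classify the support ticket."},
                {"role": "user", "content": ex["ticket"]},
                {"role": "assistant", "content": ex["label"]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_jsonl(examples))
```

Each line pairs an input with the exact output you want the model to learn; the training process nudges the weights toward reproducing those assistant turns.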
Don't reach for fine-tuning first. Try better prompts, then RAG, then agents. Fine-tune only when (a) you have lots of high-quality labeled data, (b) the task is repeated at high volume, (c) prompts have topped out, and (d) the cost savings or consistency gains justify the engineering.
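For criterion (d), a back-of-envelope break-even check is often enough. All numbers below are hypothetical - plug in your own per-call costs and engineering estimate:

```python
# Hypothetical costs: fine-tuning trades a one-off engineering cost
# for cheaper calls (shorter prompts, no few-shot examples).
prompt_cost_per_call = 0.012   # base model with a long few-shot prompt
ft_cost_per_call = 0.004       # fine-tuned model, short prompt
engineering_cost = 2000.0      # data prep + training runs, one-off

savings_per_call = prompt_cost_per_call - ft_cost_per_call
break_even_calls = engineering_cost / savings_per_call
print(round(break_even_calls))  # 250000
```

If your expected volume is well past the break-even point, fine-tuning starts to pay for itself; well below it, stick with prompts or RAG.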