Fine-tuning takes a pre-trained model and continues training it on a smaller dataset of your own examples. The result is a model that follows your style, format, or domain conventions much more reliably than prompting alone.
When to fine-tune (rare): you have a narrow task (classification, structured extraction) where you've already pushed prompting and few-shot examples as far as they go, and you have at least a few hundred high-quality examples.
When NOT to fine-tune (common): you want the model to know new facts (use RAG), you want better reasoning (use a smarter base model), or you have fewer than 100 examples (prompt engineering will outperform fine-tuning).
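If you do have a narrow task and enough examples, the first concrete step is formatting them as training data. A minimal sketch in Python, assuming a support-ticket classification task (the tickets, labels, and system prompt here are hypothetical): it converts labeled examples into the chat-format JSONL that most fine-tuning APIs accept, one JSON object per line with system/user/assistant messages.

```python
import json

# Hypothetical labeled examples for a narrow classification task.
# Real fine-tunes want at least a few hundred of these.
examples = [
    {"ticket": "My invoice is wrong", "label": "billing"},
    {"ticket": "App crashes on login", "label": "bug"},
    {"ticket": "How do I export my data?", "label": "how-to"},
]

# Keep the system prompt identical across every example, so the
# fine-tuned model learns one consistent task framing.
SYSTEM = "Classify the support ticket as one of: billing, bug, how-to."

def to_jsonl(examples):
    """Render labeled examples as chat-format JSONL lines:
    one JSON object per line, each a full conversation."""
    lines = []
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": ex["ticket"]},
                {"role": "assistant", "content": ex["label"]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

The exact field names vary by provider, so check your vendor's fine-tuning docs before uploading; the invariant is the same prompt structure in training data that you'll use at inference time.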
Fine-tuning is one of the most overrated AI techniques. Most teams that say "we should fine-tune" are actually solving a prompting or retrieval problem.
Bring this to your business
Knowing the term is one thing. Shipping it is another.
We do two-week AI Sprints — one term, one workflow, into production by Day 10.