Why use RAG instead of fine-tuning a large language model?
RAG is preferred when knowledge changes frequently or when you need source attribution and verifiability. Fine-tuning bakes information into model weights, which is expensive, hard to update, and offers no transparency about which fact came from where. RAG keeps knowledge in an external store you can update in seconds, attaches source citations to each answer, and works with any base model without retraining. Fine-tuning still wins for teaching style, format, or domain-specific reasoning patterns that retrieval alone cannot inject.
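The update-in-seconds and source-attribution points above can be sketched in a few lines. This is a toy illustration, not a production pattern: the store, helper names, and keyword-overlap scoring are all hypothetical stand-ins (a real system would use an embedding model and a vector database for retrieval, then prompt the base LLM with the retrieved context).

```python
# Minimal sketch of the RAG flow described above. All names are illustrative;
# keyword overlap stands in for real embedding-based similarity search.

from dataclasses import dataclass

@dataclass
class Doc:
    source: str   # citation attached to every retrieved chunk
    text: str

class KnowledgeStore:
    """External knowledge store: updating it is an append, not a retrain."""
    def __init__(self):
        self.docs: list[Doc] = []

    def add(self, source: str, text: str) -> None:
        self.docs.append(Doc(source, text))

    def retrieve(self, query: str, k: int = 2) -> list[Doc]:
        # Toy relevance score: count shared words with the query.
        # A vector DB would rank by embedding similarity instead.
        q = set(query.lower().split())
        scored = sorted(self.docs,
                        key=lambda d: len(q & set(d.text.lower().split())),
                        reverse=True)
        return scored[:k]

def answer_with_citations(store: KnowledgeStore, query: str) -> str:
    hits = store.retrieve(query)
    # Each retrieved chunk keeps its source, so the final answer is verifiable.
    context = "\n".join(f"[{d.source}] {d.text}" for d in hits)
    # A real system would now prompt the base LLM with context + query;
    # here we just return the grounded context to show the attribution step.
    return context

store = KnowledgeStore()
store.add("policy.md", "Refunds are processed within 14 days of purchase.")
store.add("faq.md", "Shipping takes 3 to 5 business days.")
print(answer_with_citations(store, "How long do refunds take?"))
```

Note that changing what the system "knows" here is a single `store.add` call, and every returned chunk carries its `source` label, which is exactly the maintainability and verifiability argument made above.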