LLM Engineer Interview Questions: Fine-Tuning, LoRA, QLoRA, PEFT, and Instruction Tuning

What is the difference between fine-tuning and continued pretraining?

Continued pretraining extends the original pretraining objective on new general or domain text, updating all weights to absorb new knowledge. Fine-tuning typically uses task-specific data with supervised objectives to teach behavior. Continued pretraining is for adding knowledge; fine-tuning is for shaping behavior. Both can be combined when adapting a model to a new domain.
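The distinction shows up concretely in how training labels are built. The sketch below (with hypothetical token IDs) contrasts the two: continued pretraining computes next-token loss over every position of raw domain text, while supervised fine-tuning masks the prompt tokens with the conventional ignore index of -100 so that only the response contributes to the loss.

```python
# Minimal sketch with made-up token IDs, contrasting label layouts.
# Continued pretraining: loss on every token, so the model absorbs the text.
# Supervised fine-tuning: loss only on response tokens, shaping behavior.

IGNORE_INDEX = -100  # convention used by common training frameworks

def pretraining_labels(input_ids):
    # Every position contributes to the next-token prediction loss.
    return list(input_ids)

def sft_labels(prompt_ids, response_ids):
    # Prompt positions are masked out; only the response is learned.
    return [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)

prompt = [101, 102, 103]   # hypothetical prompt token IDs
response = [201, 202]      # hypothetical response token IDs

print(pretraining_labels(prompt + response))  # → [101, 102, 103, 201, 202]
print(sft_labels(prompt, response))           # → [-100, -100, -100, 201, 202]
```

In practice both setups share the same causal language-modeling loss; only the label masking differs, which is why the two stages combine cleanly when adapting a model to a new domain.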