LLM Engineer Interview Questions: Fine-Tuning, LoRA, QLoRA, PEFT, and Instruction Tuning
What is the difference between full fine-tuning and parameter-efficient fine-tuning?
Full fine-tuning updates every weight in the model, which demands large amounts of GPU memory and storage and risks catastrophic forgetting of the pretrained knowledge. Parameter-efficient fine-tuning, or PEFT, freezes the original weights and trains only a small set of new parameters, typically under one percent of the model's total. PEFT methods like LoRA make fine-tuning practical on consumer GPUs.
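The "under one percent" figure follows directly from LoRA's low-rank construction: a frozen weight matrix W of shape (d_out, d_in) is augmented with a trainable update B @ A, where A is (r, d_in) and B is (d_out, r) for a small rank r. Below is a minimal NumPy sketch of this idea; the class name, shapes, and hyperparameters are illustrative assumptions, not the Hugging Face peft API.

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA layer: frozen weight W plus trainable low-rank update B @ A."""

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen "pretrained" weight
        self.A = rng.standard_normal((r, d_in)) * 0.01      # trainable down-projection
        self.B = np.zeros((d_out, r))                       # trainable up-projection, zero-init
        self.scaling = alpha / r                            # common LoRA scaling convention

    def forward(self, x):
        # y = x W^T + scaling * (x A^T) B^T
        # Because B starts at zero, the layer initially matches the frozen base model.
        return x @ self.W.T + self.scaling * (x @ self.A.T) @ self.B.T

    def trainable_fraction(self):
        trainable = self.A.size + self.B.size  # r*(d_in + d_out) parameters
        return trainable / (self.W.size + trainable)

layer = LoRALinear(d_in=4096, d_out=4096, r=8)
print(f"trainable fraction: {layer.trainable_fraction():.4%}")  # well under 1%
```

For a 4096x4096 layer at rank 8, LoRA trains 8 * (4096 + 4096) = 65,536 parameters against roughly 16.8 million frozen ones, which is how real models end up with sub-one-percent trainable fractions.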
Source: huggingface.co