LLM Engineer Interview Questions: Fine-Tuning, LoRA, QLoRA, PEFT, and Instruction Tuning
What is LoRA?
LoRA stands for Low-Rank Adaptation. It freezes the pretrained model weights and inserts a pair of small trainable matrices into each transformer layer (typically the attention projections). Their product forms a low-rank update with far fewer parameters than the full weight matrix, and this update is added to the frozen weights in the forward pass; after training it can be merged into the original weights, so inference costs nothing extra. LoRA achieves results comparable to full fine-tuning while training only a tiny fraction of the parameters.
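The update rule can be sketched in a few lines. This is a minimal NumPy illustration, not any library's API; the dimensions, rank, and scaling factor below are assumed values chosen for the example:

```python
import numpy as np

# Assumed sizes for illustration: a 512x512 layer adapted at rank r = 8.
d_in, d_out, r = 512, 512, 8
alpha = 16  # LoRA scaling factor (assumed value)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# Trainable low-rank factors. B starts at zero, so at initialization
# the adapted layer behaves exactly like the frozen pretrained layer.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # Frozen path plus scaled low-rank update: W x + (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size           # 512 * 512 = 262,144 frozen parameters
lora_params = A.size + B.size  # 2 * 8 * 512 = 8,192 trainable parameters
print(full_params, lora_params)
```

Here only `A` and `B` receive gradients, so the trainable parameter count is about 3% of the full weight matrix, and after training `W` can be replaced by `W + (alpha / r) * B @ A` so no extra matrices are kept at inference.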