LLM Engineer Interview Questions: Fine-Tuning, LoRA, QLoRA, PEFT, and Instruction Tuning

What is the difference between LoRA and prefix tuning?

LoRA adds trainable low-rank update matrices alongside frozen weight matrices, typically the attention projections, so the effective weight becomes W + BA with the rank of B and A much smaller than the weight's dimensions. Prefix tuning instead prepends learnable virtual tokens (continuous prefix vectors) to the input of each attention layer, leaving all model weights untouched. Both are parameter-efficient, but LoRA is more widely used because it is simpler to implement, trains stably, adds no inference latency once the update is merged back into the base weights, and works across more architectures.
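The LoRA update can be sketched in a few lines. Below is an illustrative NumPy version of a LoRA-adapted linear layer, not the `peft` library API: the base weight `W` is frozen, only the low-rank factors `A` and `B` are trainable, and `B` is initialized to zero so the adapter starts as a no-op. The class name, hyperparameters `r` and `alpha`, and scaling convention follow the original LoRA paper's formulation but the code itself is an assumption-laden sketch.

```python
import numpy as np

class LoRALinear:
    """Sketch of a frozen linear layer with a trainable low-rank update."""

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        out_dim, in_dim = W.shape
        self.W = W                                       # frozen pretrained weight
        self.A = rng.normal(0.0, 0.02, (r, in_dim))      # trainable, small random init
        self.B = np.zeros((out_dim, r))                  # trainable, zero init
        self.scale = alpha / r                           # LoRA scaling factor

    def forward(self, x):
        # y = W x + (alpha / r) * B A x
        # Because B is zero at init, the adapter contributes nothing at first.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def merged_weight(self):
        # For inference, the update can be folded into W with no extra latency.
        return self.W + self.scale * (self.B @ self.A)
```

Prefix tuning, by contrast, would leave `W` entirely alone and instead learn extra vectors that are concatenated to the layer's input sequence, which is why it changes the attention computation rather than the weights.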