LLM Engineer Interview Questions: Fine-Tuning, LoRA, QLoRA, PEFT, and Instruction Tuning
How do you prepare data for fine-tuning a chat model?
Format data using the model's chat template, which structures messages as alternating user and assistant turns with role markers. Each example should be a complete conversation, not just a single response. Apply the same template at inference time so the input matches the training distribution. Hugging Face tokenizers provide an apply_chat_template method to handle this consistently across model families.
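A minimal sketch of what a chat template does, using hypothetical ChatML-style role markers (real templates vary by model family; in practice you would call tokenizer.apply_chat_template from the transformers library rather than writing this yourself):

```python
def apply_chat_template(messages, add_generation_prompt=False):
    """Render a list of {"role", "content"} messages as one string.

    Illustrative only: the marker tokens below mimic the ChatML style;
    each model family defines its own template.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    if add_generation_prompt:
        # At inference time, open an assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

# A complete training example: full conversation, not a lone response.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is LoRA?"},
    {"role": "assistant", "content": "A parameter-efficient fine-tuning method."},
]
training_text = apply_chat_template(conversation)

# At inference, apply the SAME template and add the generation prompt.
inference_text = apply_chat_template(conversation[:2], add_generation_prompt=True)
```

With transformers, the equivalent call is `tokenizer.apply_chat_template(conversation, tokenize=False)`, which picks the correct markers for the loaded model automatically.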