Prompt Engineering Patterns: Zero-Shot vs Few-Shot Prompting Explained for LLM Developers
When does few-shot prompting fail?
Few-shot prompting fails when the task requires reasoning beyond pattern matching, when the examples introduce subtle biases that the model copies, when the task is too complex for the model regardless of examples, or when the output format the examples demonstrate is inherently ambiguous. For complex reasoning tasks, chain-of-thought prompting usually works better than pure few-shot.
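To make the contrast concrete, here is a minimal sketch (the helper and example data are hypothetical, and no real model API is called) showing how a plain few-shot prompt differs from a chain-of-thought variant: the only change is that each demonstration's answer spells out its reasoning before the final result.

```python
# Plain few-shot demonstrations: input -> bare answer.
FEW_SHOT_EXAMPLES = [
    ("2 + 3 =", "5"),
    ("7 + 4 =", "11"),
]

# Chain-of-thought demonstrations: input -> reasoning, then the answer.
COT_EXAMPLES = [
    ("2 + 3 =", "2 plus 3 is 5. The answer is 5."),
    ("7 + 4 =", "7 plus 4 is 11. The answer is 11."),
]

def build_prompt(examples, question):
    """Join (question, answer) demonstrations, then append the new question."""
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\nQ: {question}\nA:"

few_shot_prompt = build_prompt(FEW_SHOT_EXAMPLES, "9 + 6 =")
cot_prompt = build_prompt(COT_EXAMPLES, "9 + 6 =")
```

Both prompts end with the unanswered question; the chain-of-thought version nudges the model to emit intermediate reasoning before committing to an answer, which is why it tends to do better on the failure cases above.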
---
promptingguide.ai