Prompt Engineering Patterns: Instruction Following, Negative Constraints, and Output Format Control
Discover techniques for optimizing your prompts, including versioning, A/B testing, and production best practices. This section equips you with the tools to ensure your prompts yield the best results.
8 audio · 2:26
Nortren·
How do you write clear instructions for an LLM?
0:16
Write clear instructions by being specific about what you want, using concrete examples, ordering instructions from most to least important, separating instructions from data with delimiters, telling the model what to do rather than what to avoid, and testing instructions against edge cases. Vague instructions produce vague outputs.
Why are positive instructions better than negative ones?
0:18
Positive instructions tell the model what to do; negative instructions tell it what not to do. Positive instructions are more reliable because models sometimes do exactly what you told them not to, especially with complex negations. Instead of "don't be too long" say "respond in two to three sentences." Instead of "don't use jargon" say "use plain language a beginner would understand."
What is the difference between instructions and constraints?
0:18
Instructions describe what the model should do; constraints describe limits on how it should do it. "Summarize this article" is an instruction. "Use no more than 100 words" is a constraint. Both belong in a prompt, but should be clearly separated and ordered consistently. Constraints work better when stated as positive limits than as prohibitions.
Control length by giving specific numerical targets like "respond in 100 words" or "use exactly five bullet points." Vague directions like "be brief" or "be detailed" produce inconsistent results. For absolute limits, also enforce them at the API level using max_tokens. Models respect length instructions better when the target is concrete and achievable for the task.
Control tone by being specific in your instructions, providing examples of the desired style, and assigning a role that matches the tone. "Write in a professional, neutral tone like a Wikipedia article" works better than "be professional." For consistency across many outputs, codify the desired style in the system prompt and reference it in user prompts.
Delimiters are markers that separate sections of a prompt, like triple backticks, XML tags, or distinctive headers. They help the model distinguish instructions from input, examples from queries, and one document from another. Delimiters reduce confusion when the prompt mixes user-provided content with system instructions, and they make prompt injection harder.
How do you handle ambiguous user inputs in a prompt?
0:17
Handle ambiguity by instructing the model to ask clarifying questions when input is unclear, by providing fallback behavior for missing information, by listing common ambiguities and how to resolve them, or by rephrasing the user's input internally before answering. Never let the model silently guess in high-stakes applications.
What is the principle of giving the model time to think?
0:18
Giving the model time to think means structuring prompts so the model can produce intermediate reasoning before its final answer, instead of forcing an immediate response. This includes chain-of-thought, asking for an outline before writing, or asking the model to consider alternatives before deciding. Models perform better with explicit thinking room, especially on hard problems.
---