What are guardrails in prompt engineering?
Prompt Engineering Patterns: Prompt Injection, Jailbreaks, and Defensive Prompting Techniques
Guardrails are mechanisms that constrain LLM behavior to prevent unsafe, off-topic, or policy-violating outputs. They include input filters that block malicious prompts, output filters that catch policy-violating responses before they reach the user, classifiers that detect PII, structured output enforcement, and topic restrictions. Guardrails sit alongside prompt engineering as a separate defense layer.
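A minimal sketch of two of these layers — an input filter and a PII-redacting output filter. All pattern lists and function names here are illustrative assumptions, not a real guardrail library; production systems typically use trained classifiers rather than regexes alone.

```python
import re

# Illustrative (non-exhaustive) patterns for known prompt-injection phrasing.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now in developer mode",
]

# Illustrative PII patterns: US SSN format and email addresses.
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
]

def input_guard(prompt: str) -> bool:
    """Input filter: return True if no known injection phrase is present."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def output_guard(response: str) -> str:
    """Output filter: redact PII matches before the response reaches the user."""
    for p in PII_PATTERNS:
        response = re.sub(p, "[REDACTED]", response)
    return response
```

In practice these checks run outside the prompt itself, which is what makes them a separate defense layer: a jailbreak that slips past the prompt can still be caught on the way in or out.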