LLM Engineer Interview Questions: LLM Evaluation, Hallucinations, Guardrails, Production Monitoring
What are guardrails in LLM systems?
Guardrails are mechanisms that constrain LLM behavior to prevent unsafe, off-topic, or policy-violating outputs. They include input filters that block malicious prompts, output filters that catch harmful or off-topic responses, classifiers that detect PII or toxic content, and topic restrictions that keep the model on its intended use case. Guardrails are usually layered for defense in depth.
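The layering described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration using simple regex heuristics; real systems typically use trained classifiers and dedicated guardrail frameworks, and every pattern and function name here is an assumption for the example.

```python
import re

# Hypothetical heuristics for illustration only; production systems
# would use trained classifiers, not regex lists.
BLOCKED_INPUT_PATTERNS = [
    r"ignore (all )?previous instructions",  # naive prompt-injection check
    r"reveal your system prompt",
]
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",            # US-SSN-like number
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",      # email address
]

def input_guard(prompt: str) -> bool:
    """Input filter: return True if the prompt passes, False if blocked."""
    return not any(re.search(p, prompt, re.IGNORECASE)
                   for p in BLOCKED_INPUT_PATTERNS)

def output_guard(response: str) -> str:
    """Output filter: redact PII-like spans before the response is returned."""
    for p in PII_PATTERNS:
        response = re.sub(p, "[REDACTED]", response)
    return response

def guarded_call(prompt: str, model) -> str:
    """Layer input and output guards around a model call (defense in depth)."""
    if not input_guard(prompt):
        return "Request blocked by input guardrail."
    return output_guard(model(prompt))
```

In practice each layer would also log its decisions, so that blocked prompts and redactions can feed back into production monitoring.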