MemotivaPrompt Engineering Patterns: Prompt Injection, Jailbreaks, and Defensive Prompting Techniques

How do you sanitize user input for LLM prompts?

Prompt Engineering Patterns: Prompt Injection, Jailbreaks, and Defensive Prompting Techniques

Audio flashcard · 0:18

Nortren·

How do you sanitize user input for LLM prompts?

0:18

Sanitize by escaping or removing characters that could be interpreted as instructions, wrapping user content in clear delimiters, treating retrieved documents as untrusted data not instructions, and limiting input length. Unlike SQL injection, there is no perfect escape mechanism for natural language, so combine sanitization with other defenses.
owasp.org