Back to Lexicon
Jailbreaking
intermediateAttempts to bypass an AI model's safety guidelines through clever prompting. A major security concern that motivates robust safety training and guardrails.
Category: safety
securityattacks
Extended tutorial content coming soon.
Check back for examples, tips, and in-depth explanations.