Back to Lexicon

Jailbreaking

intermediate

Attempts to bypass an AI model's safety guidelines through clever prompting. A major security concern that motivates robust safety training and guardrails.

Category: safety
securityattacks

Extended tutorial content coming soon.

Check back for examples, tips, and in-depth explanations.