Back to Lexicon
Red Teaming
intermediateAdversarial testing where humans or other AI systems attempt to find vulnerabilities, elicit harmful outputs, or bypass safety measures in AI systems.
Category: safety
testingsecurity
Extended tutorial content coming soon.
Check back for examples, tips, and in-depth explanations.