Executive Order 14110

Difficulty: intermediate

US Executive Order on Safe, Secure, and Trustworthy AI (October 2023). Establishes safety testing requirements, red-teaming mandates, and reporting obligations for developers of powerful AI systems.

Category: ethics
Tags: regulation, law, usa, compliance

Overview

Executive Order 14110 is the most significant US government action on AI safety to date. It requires developers of large AI models to share safety test results with the government and establishes standards for AI security and reliability. Key provisions include:

- Mandatory reporting for training runs above compute thresholds
- Red-teaming requirements before deployment
- Standards for authenticating AI-generated content
- Guidelines for AI use in critical infrastructure

The order directs federal agencies to develop sector-specific AI guidelines and designates NIST as a key body for AI safety standards. While not legislation, it signals serious government attention to AI risks.

Key Concepts

Compute Thresholds

Training runs above specified compute thresholds trigger reporting requirements to the federal government: the order's interim threshold is 10^26 integer or floating-point operations for general models, and 10^23 operations for models trained primarily on biological sequence data.
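To get a feel for where these thresholds sit, a rough back-of-the-envelope check can use the common 6·N·D heuristic (training FLOPs ≈ 6 × parameters × training tokens). This is a minimal sketch under that assumption; the heuristic and the helper names here are illustrative, not the order's official accounting methodology.

```python
# Sketch: compare an estimated training-compute figure against the
# EO 14110 reporting thresholds. The 6*N*D rule of thumb is a common
# heuristic, NOT the methodology specified in the order.

GENERAL_MODEL_THRESHOLD_FLOP = 1e26   # interim threshold for general models
BIO_SEQUENCE_THRESHOLD_FLOP = 1e23    # models trained mainly on biological sequence data


def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * parameters * tokens


def must_report(parameters: float, tokens: float, bio_sequence: bool = False) -> bool:
    """True if the estimated run crosses the relevant reporting threshold."""
    threshold = (BIO_SEQUENCE_THRESHOLD_FLOP if bio_sequence
                 else GENERAL_MODEL_THRESHOLD_FLOP)
    return estimated_training_flop(parameters, tokens) >= threshold


# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOP, below the 1e26 general threshold.
print(must_report(70e9, 15e12))
```

For scale: under this heuristic, a training run would need roughly a 1-trillion-parameter model trained on ~17T tokens (or an equivalent parameter/token mix) before the general 10^26 threshold is crossed.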

Safety Testing Mandates

Developers must red-team models for dangerous dual-use capabilities before deployment and report the results to the government.

Content Authentication

Standards and guidance, including watermarking and provenance techniques, for identifying AI-generated content as synthetic.

Sector-Specific Guidance

Tailored requirements for sectors such as healthcare, finance, and critical infrastructure.

Related Concepts