AI Security · Regulation
AI Compliance
EU AI Act, ISO 42001, NIST AI RMF (2026)
AI compliance went from a future problem to a current one with the EU AI Act phasing in through 2026 and 2027. The penalty math is no longer hypothetical: €35M or 7% of global turnover for prohibited AI use. This guide covers the four frameworks that matter, the EU AI Act risk tiers and what each requires, the comparison every CAIO and CTO needs when choosing between NIST AI RMF and ISO 42001 (or both), and the 10-step operational checklist for getting compliant this year.
30-SECOND EXECUTIVE TAKEAWAY
- The EU AI Act is real and extraterritorial. If your AI output reaches EU users, it applies. Treat it as in scope until proven otherwise.
- NIST AI RMF is the umbrella. Most US enterprises adopt NIST as canonical, layer EU AI Act controls where applicable, and pursue ISO 42001 if external assurance is needed.
- The work is in the controls matrix. Mapping each requirement to a specific control with a named owner is what turns compliance from a binder into an operating program.
Why AI compliance matters now
For most of the last decade, AI sat in a regulatory gap. Existing rules around data protection, sector regulation, and consumer protection touched parts of AI deployment, but no horizontal framework specifically governed AI systems as such. That changed in 2024 with the EU AI Act, the first horizontal AI regulation in a major jurisdiction, with phase-in dates running through 2027 and penalties up to €35M or 7% of global annual turnover for the most serious violations.
ISO/IEC 42001 launched in late 2023 as the first international management system standard for AI, giving organizations a certifiable scaffold comparable to ISO 27001 for security. NIST released its AI Risk Management Framework in early 2023, providing the most-adopted voluntary structure in the US. Sector regulators in healthcare, financial services, and the public sector have layered their own AI-specific guidance on top.
The result is a stack: NIST AI RMF as the umbrella for most US enterprises, EU AI Act controls layered on for any system reaching EU users, ISO 42001 pursued when external assurance becomes a procurement criterion, and sector-specific rules on top of all three for regulated industries. The right operational question for a CTO or CAIO is no longer whether to invest in AI compliance, but how to structure the program so the same evidence supports multiple frameworks at once.
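To make "the same evidence supports multiple frameworks" concrete, here is a minimal sketch of a controls matrix as a data structure. The control IDs, owners, and evidence locations are hypothetical placeholders; the requirement references cite real provisions (EU AI Act Art. 12 record-keeping, Art. 13 transparency, Art. 14 human oversight, and the NIST AI RMF functions), but your mapping will differ and the ISO 42001 clause shown is a stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One concrete control with a named owner, satisfying
    requirements from several frameworks at once."""
    control_id: str
    description: str
    owner: str                  # a named person, not a team alias
    evidence: str               # where the audit evidence lives
    satisfies: list[str] = field(default_factory=list)  # requirement refs

# Illustrative rows only; controls, owners, and evidence URIs are made up.
matrix = [
    Control("CTL-007",
            "Human review gate for all high-risk model outputs",
            owner="j.doe",
            evidence="wiki://ai-governance/ctl-007",
            satisfies=["EU-AI-Act Art.14", "ISO-42001 A.x", "NIST-RMF MANAGE"]),
    Control("CTL-012",
            "Immutable event logging of model inputs and outputs",
            owner="s.lee",
            evidence="s3://compliance-evidence/ctl-012/",
            satisfies=["EU-AI-Act Art.12", "NIST-RMF MEASURE"]),
]

def gaps(matrix: list[Control], required: set[str]) -> set[str]:
    """Requirements with no mapped control: the program's to-do list."""
    covered = {ref for ctl in matrix for ref in ctl.satisfies}
    return required - covered

print(gaps(matrix, {"EU-AI-Act Art.12", "EU-AI-Act Art.13", "EU-AI-Act Art.14"}))
# -> {'EU-AI-Act Art.13'}  (transparency has no owned control yet)
```

The point of the structure is the `gaps` query: a requirement that appears in no control's `satisfies` list is unowned, and unowned requirements are where compliance programs fail audits.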
THE FOUR FRAMEWORKS
EU AI Act vs ISO 42001 vs NIST AI RMF vs sector rules
Side by side. Pick one as your canonical scaffold, then layer the others where applicable. The work pays back when you build a single controls matrix that satisfies multiple frameworks at once.
| Framework | Scope | Effective | Applies to | Penalty for non-compliance |
|---|---|---|---|---|
| EU AI Act (risk-tiered: prohibited, high-risk, limited-risk, minimal-risk) | EU, mandatory | In force August 2024; phased through 2027 | Any AI system whose output is used in the EU; both providers and deployers | Up to €35M or 7% of global turnover for prohibited use |
| ISO/IEC 42001 (management system standard, PDCA cycle; comparable to ISO 27001) | International, certifiable | Published December 2023 | Voluntary; increasingly required by enterprise procurement | No regulatory penalty; certification loss has commercial impact |
| NIST AI RMF (Govern, Map, Measure, Manage) | US, voluntary | Released January 2023; updates ongoing | Any organization adopting it; the de facto US standard | No regulatory penalty; failure to apply where adopted creates organizational risk |
| Sector-specific rules (layered on top of horizontal frameworks) | Varies by sector | Various; FDA AI/ML guidance evolving 2024–2026 | Healthcare (FDA AI/ML), banking (SR 11-7 + emerging AI guidance), public sector (OMB M-24-10), and others | Regulator-specific; can be severe in regulated industries |
EU AI ACT
The four EU AI Act risk tiers
The EU AI Act is risk-tiered. Each tier carries different obligations. Most enterprise AI deployments sit in limited-risk or minimal-risk; some sit in high-risk; almost none should be prohibited (and if they are, kill them).
Prohibited
Examples: Social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), exploitative manipulation, predictive policing based solely on profiling
Obligations: Cannot be deployed in the EU; €35M / 7% turnover penalty for violations
High-risk
Examples: AI in critical infrastructure, education and vocational training, employment and worker management, essential services, law enforcement, border control, justice, democratic processes, biometric ID
Obligations: Risk management system, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness, conformity assessment, registration in the EU database
Limited-risk
Examples: AI systems that interact directly with humans (chatbots), generate or manipulate content (deepfakes, generated images), emotion recognition where permitted
Obligations: Transparency only; users must be told they are interacting with AI or that content is AI-generated
Minimal-risk
Examples: Spam filters, AI-enabled video games, most enterprise productivity AI features
Obligations: No mandatory obligations; voluntary codes of conduct encouraged
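When you are classifying an inventory of dozens of systems, a first-pass triage script can flag the obvious cases before legal review. A minimal sketch follows, assuming simplified single-keyword use-case buckets; the buckets paraphrase the examples above and are no substitute for legal analysis.

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Simplified use-case buckets paraphrasing the tier examples above.
PROHIBITED_USES = {"social scoring", "realtime remote biometric id",
                   "exploitative manipulation", "profiling-only predictive policing"}
HIGH_RISK_USES = {"critical infrastructure", "education", "employment",
                  "essential services", "law enforcement", "border control",
                  "justice", "democratic processes", "biometric id"}

def triage(use_case: str, interacts_with_humans: bool,
           generates_content: bool) -> Tier:
    """First-pass EU AI Act tier triage for one inventoried system.
    Tiers are checked most-restrictive first."""
    if use_case in PROHIBITED_USES:
        return Tier.PROHIBITED      # cannot be deployed in the EU
    if use_case in HIGH_RISK_USES:
        return Tier.HIGH_RISK       # full high-risk obligation set applies
    if interacts_with_humans or generates_content:
        return Tier.LIMITED         # transparency obligations apply
    return Tier.MINIMAL             # voluntary codes of conduct only

# e.g. triage("employment", interacts_with_humans=True, generates_content=False)
# -> Tier.HIGH_RISK
```

The most-restrictive-first ordering matters: a hiring chatbot both interacts with humans and sits in the employment category, and it must land in high-risk, not limited-risk.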
OPERATIONAL CHECKLIST
The 10-step AI compliance checklist
Use this as the standing checklist for your AI compliance program. Adjust depth per framework, but the steps apply across NIST AI RMF, ISO 42001, EU AI Act, and most sector rules.
- Inventory every AI system in production, in pilot, and via third-party SaaS (a minimal record schema is sketched after this list)
- Classify each system against EU AI Act tiers (prohibited, high-risk, limited, minimal) and any applicable sector rules
- Build the controls matrix: each EU AI Act article, ISO 42001 clause, or NIST function mapped to a specific control with a named owner
- Produce technical documentation for every high-risk system to the EU AI Act standard
- Implement human oversight, logging, and accuracy monitoring for high-risk systems
- Stand up a conformity assessment process for high-risk systems before deployment
- Register applicable systems in the EU AI database where required
- Integrate compliance findings into the broader AI risk register with quarterly review
- Train relevant staff on AI Act transparency obligations (especially for limited-risk chatbots and content generation)
- Review the full program annually and after any major regulatory development or model upgrade
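A minimal sketch of the inventory record behind steps 1, 6–7, and 10, assuming a simple in-house schema; all field names are illustrative, not mandated by any framework.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystem:
    """One row in the AI system inventory (checklist step 1)."""
    name: str
    stage: str            # "production" | "pilot" | "third-party-saas"
    vendor: str | None    # None for in-house systems
    eu_reachable: bool    # does any output reach EU users?
    tier: str             # EU AI Act tier from the classification step
    owner: str
    last_reviewed: date

def needs_conformity_assessment(s: AISystem) -> bool:
    """High-risk systems reaching the EU need conformity assessment
    before deployment (checklist steps 6 and 7)."""
    return s.eu_reachable and s.tier == "high-risk"

def overdue(systems: list[AISystem], today: date, days: int = 365) -> list[AISystem]:
    """Annual review cadence from checklist step 10."""
    return [s for s in systems if (today - s.last_reviewed).days > days]
```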
See the AI risk management guide for the risk register that ties compliance into the broader AI risk program.
AI Compliance: Frequently Asked Questions
What is AI compliance?
Meeting the regulatory and standards obligations that apply to your AI systems (the EU AI Act, ISO/IEC 42001, NIST AI RMF, and any sector rules), with documented controls and evidence for each requirement.
When did the EU AI Act take effect?
It entered into force in August 2024; obligations phase in through 2027.
Does the EU AI Act apply to non-EU companies?
Yes. It is extraterritorial: it covers any AI system whose output is used in the EU, for providers and deployers alike.
What is ISO/IEC 42001?
The first international management system standard for AI, published in December 2023. It is certifiable and comparable in structure to ISO 27001.
What’s the difference between AI compliance and AI governance?
Governance is the internal framework of policies, roles, and oversight for AI; compliance is demonstrating that those practices satisfy external requirements such as the EU AI Act.
How do you operationalize AI compliance?
Inventory and classify every AI system, build a controls matrix mapping each requirement to an owned control, then run the 10-step checklist above on an annual review cadence.
What does AI compliance typically cost?
It scales with the number of systems and their risk tiers; the main cost drivers are the inventory, the controls matrix, technical documentation, and conformity assessment for high-risk systems.
Continue the AI security cluster
Compliance defines what you must do; risk management makes sure you actually do it.