

AI Security · Regulation

AI Compliance

EU AI Act, ISO 42001, NIST AI RMF (2026)

AI compliance went from a future problem to a current one with the EU AI Act phasing in through 2026 and 2027. The penalty math is no longer hypothetical: €35M or 7% of global turnover, whichever is higher, for prohibited AI use. This guide covers the four frameworks that matter, the EU AI Act risk tiers and what each requires, the comparison every CAIO and CTO needs for choosing between NIST AI RMF and ISO 42001 (or both), and the 10-step operational checklist for getting compliant this year.

30-SECOND EXECUTIVE TAKEAWAY

  • The EU AI Act is real and extraterritorial. If your AI output reaches EU users, it applies. Treat as in scope until proven otherwise.
  • NIST AI RMF is the umbrella. Most US enterprises adopt NIST as canonical, layer EU AI Act controls where applicable, and pursue ISO 42001 if external assurance is needed.
  • The work is in the controls matrix. Mapping each requirement to a specific control with a named owner is what turns compliance from a binder into an operating program.

Why AI compliance matters now

For most of the last decade, AI sat in a regulatory gap. Existing rules around data protection, sector regulation, and consumer protection touched parts of AI deployment, but no horizontal framework specifically governed AI systems as such. That changed in 2024 with the EU AI Act, the first major horizontal AI regulation in any major jurisdiction, with phase-in dates running through 2027 and penalties up to €35M or 7% of global annual turnover for the most serious violations.

ISO/IEC 42001 launched in late 2023 as the first international management system standard for AI, giving organizations a certifiable scaffold comparable to ISO 27001 for security. NIST released its AI Risk Management Framework in early 2023, providing the most-adopted voluntary structure in the US. Sector regulators in healthcare, financial services, and the public sector have layered their own AI-specific guidance on top.

The result is a stack: NIST AI RMF as the umbrella for most US enterprises, EU AI Act controls layered on for any system reaching EU users, ISO 42001 pursued when external assurance becomes a procurement criterion, and sector-specific rules on top of all three for regulated industries. The right operational question for a CTO or CAIO is no longer whether to invest in AI compliance, but how to structure the program so the same evidence supports multiple frameworks at once.

THE FOUR FRAMEWORKS

EU AI Act vs ISO 42001 vs NIST AI RMF vs sector rules

Side by side. Pick one as your canonical scaffold, then layer the others where applicable. The work pays back when you build a single controls matrix that satisfies multiple frameworks at once.

EU AI Act
  Scope: Risk-tiered: prohibited, high-risk, limited-risk, minimal-risk
  Status: EU, mandatory
  Effective: In force August 2024; phased through 2027
  Applies to: Any AI system whose output is used in the EU; both providers and deployers
  Penalty for non-compliance: Up to €35M or 7% of global turnover for prohibited use

ISO/IEC 42001
  Scope: Management system standard (PDCA cycle), comparable to ISO 27001
  Status: International, certifiable
  Effective: Published December 2023
  Applies to: Voluntary; increasingly required by enterprise procurement
  Penalty for non-compliance: No regulatory penalty; certification loss has commercial impact

NIST AI RMF
  Scope: Govern, Map, Measure, Manage
  Status: US, voluntary
  Effective: Released January 2023; updates ongoing
  Applies to: Any organization adopting it; the de facto US standard
  Penalty for non-compliance: No regulatory penalty; failure to apply where adopted creates organizational risk

Sector-specific
  Scope: Sector-specific rules layered on top of horizontal frameworks
  Status: Varies by sector
  Effective: Various; FDA AI/ML guidance evolving 2024–2026
  Applies to: Healthcare (FDA AI/ML), banking (SR 11-7 + emerging AI guidance), public sector (OMB M-24-10), and others
  Penalty for non-compliance: Regulator-specific; can be severe in regulated industries

EU AI ACT

The four EU AI Act risk tiers

The EU AI Act is risk-tiered. Each tier carries different obligations. Most enterprise AI deployments sit in limited-risk or minimal-risk; some sit in high-risk; almost none should be prohibited (and if they are, kill them).

Prohibited

Examples: Social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), exploitative manipulation, predictive policing based solely on profiling

Obligations: Cannot be deployed in the EU; €35M / 7% turnover penalty for violations

High-risk

Examples: AI in critical infrastructure, education and vocational training, employment and worker management, essential services, law enforcement, border control, justice, democratic processes, biometric ID

Obligations: Risk management system, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness, conformity assessment, registration in the EU database

Limited-risk

Examples: AI systems that interact directly with humans (chatbots), generate or manipulate content (deepfakes, generated images), emotion recognition where permitted

Obligations: Transparency: users must be told they are interacting with AI or that content is AI-generated

Minimal-risk

Examples: Spam filters, AI-enabled video games, most enterprise productivity AI features

Obligations: No mandatory obligations; voluntary codes of conduct encouraged
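The tier logic above can be sketched as a first-pass triage function. This is an illustrative simplification, not legal advice: the flags, the `AISystem` type, and the tier rules are hypothetical placeholders, and real classification needs counsel review against the Act's actual annexes.

```python
# Hypothetical first-pass EU AI Act tier triage, worst case checked first.
# Flags and rules are illustrative placeholders, not a legal determination.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    social_scoring: bool = False          # prohibited-tier signal
    high_risk_domain: bool = False        # e.g. employment, credit, critical infrastructure
    interacts_with_humans: bool = False   # chatbot / content generation

def classify_tier(system: AISystem) -> str:
    """Return a provisional EU AI Act tier for human review."""
    if system.social_scoring:
        return "prohibited"
    if system.high_risk_domain:
        return "high-risk"
    if system.interacts_with_humans:
        return "limited-risk"
    return "minimal-risk"

print(classify_tier(AISystem("hiring-screener", high_risk_domain=True)))      # high-risk
print(classify_tier(AISystem("support-chatbot", interacts_with_humans=True)))  # limited-risk
```

The ordering matters: a chatbot operating in an employment context is high-risk, not limited-risk, so the worst applicable tier always wins.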

OPERATIONAL CHECKLIST

The 10-step AI compliance checklist

Use this as the standing checklist for your AI compliance program. Adjust depth per framework, but the steps apply across NIST AI RMF, ISO 42001, EU AI Act, and most sector rules.

  1. Inventory every AI system in production, in pilot, and via third-party SaaS
  2. Classify each system against EU AI Act tiers (prohibited, high-risk, limited, minimal) and any applicable sector rules
  3. Build the controls matrix: each EU AI Act article, ISO 42001 clause, or NIST function mapped to a specific control with a named owner
  4. Produce technical documentation for every high-risk system to the EU AI Act standard
  5. Implement human oversight, logging, and accuracy monitoring for high-risk systems
  6. Stand up a conformity assessment process for high-risk systems before deployment
  7. Register applicable systems in the EU AI database where required
  8. Integrate compliance findings into the broader AI risk register with quarterly review
  9. Train relevant staff on AI Act transparency obligations (especially for limited-risk chatbots and content generation)
  10. Review the full program annually and after any major regulatory development or model upgrade
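Step 3 is where most programs stall, so here is a minimal sketch of what a controls matrix looks like as data. The requirement IDs, controls, and owners are hypothetical examples; the point is the shape: one requirement, one control, one named owner, and an automatic gap report for anything unowned.

```python
# Illustrative controls matrix: each framework requirement maps to a
# specific control with a named owner. Entries are hypothetical examples.
controls_matrix = [
    {"requirement": "EU AI Act Art. 9 (risk management)",
     "control": "Quarterly model risk review", "owner": "Head of AI Risk"},
    {"requirement": "ISO 42001 Clause 6 (planning)",
     "control": "AI impact assessment at intake", "owner": "AI Governance Lead"},
    {"requirement": "NIST AI RMF: Measure",
     "control": "Accuracy and drift monitoring dashboard", "owner": "ML Platform Team"},
    {"requirement": "EU AI Act Art. 12 (logging)",
     "control": "Inference audit log", "owner": None},  # unassigned = a gap
]

# Gap report: a requirement with no named owner is not yet operational.
gaps = [row["requirement"] for row in controls_matrix if not row["owner"]]
print(gaps)  # ['EU AI Act Art. 12 (logging)']
```

In practice this lives in a GRC tool or a spreadsheet rather than code, but the invariant is the same: zero rows without an owner before a high-risk system ships.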

See the AI risk management guide for the risk register that ties compliance into the broader AI risk program.

AI Compliance: Frequently Asked Questions

What is AI compliance?
AI compliance is the discipline of meeting the regulatory and contractual obligations that apply to AI systems. The big four in 2026 are the EU AI Act (mandatory for systems used by EU residents, risk-tiered), ISO/IEC 42001 (international, certifiable AI management system), the NIST AI Risk Management Framework (US, voluntary, the de facto standard), and sector-specific rules like FDA AI/ML and the Federal Reserve’s SR 11-7 model risk guidance for banks. Most enterprises adopt NIST AI RMF as the umbrella, layer EU AI Act controls if they serve EU users, and pursue ISO 42001 if they need certifiable third-party assurance.
When did the EU AI Act take effect?
The EU AI Act entered into force on 1 August 2024. The provisions phase in over multiple deadlines: prohibited AI practices became enforceable on 2 February 2025; obligations on general-purpose AI models started on 2 August 2025; the bulk of high-risk system requirements apply from 2 August 2026; and provisions for AI systems embedded in regulated products (medical devices, vehicles, etc.) phase in through 2027. Penalties for prohibited AI use can reach €35M or 7% of global annual turnover.
Does the EU AI Act apply to non-EU companies?
Yes. It follows the same extraterritorial logic as GDPR: any AI system whose output is used in the EU brings the provider and the deployer into scope, regardless of where they are headquartered. A US-based SaaS vendor selling AI-powered features to EU customers is in scope. A non-EU enterprise using an AI system that processes data of EU residents is in scope. The right operating assumption for any large enterprise with EU exposure is that the EU AI Act applies until proven otherwise.
What is ISO/IEC 42001?
ISO/IEC 42001:2023 is the first international management system standard for artificial intelligence. It defines the requirements for an AI management system in the same way ISO 27001 defines them for information security: governance, risk management, policy, controls, continual improvement. It is certifiable, meaning third parties can audit and certify your AI management system, which is increasingly being asked for in enterprise procurement. ISO 42001 and NIST AI RMF map cleanly onto each other; many organizations adopt NIST internally and pursue ISO 42001 certification when external assurance becomes a buying criterion.
What’s the difference between AI compliance and AI governance?
AI governance is the broader discipline of deciding how AI is developed and used inside the organization: policy, ownership, risk appetite, escalation. AI compliance is the subset focused on meeting external obligations: regulations, standards, contracts. Most enterprise AI compliance work in 2026 lives inside the broader AI governance program. See our AI governance hub for the wider context.
How do you operationalize AI compliance?
Five working components. (1) An inventory of AI systems with each one classified against EU AI Act tiers and any applicable sector rules. (2) A controls matrix that maps each requirement (EU AI Act article, ISO 42001 clause, NIST function) to a specific control with a named owner. (3) Documentation that meets the EU AI Act technical documentation requirements for any high-risk system. (4) A risk register that integrates compliance findings with the broader AI risk register; see our AI risk management guide. (5) An ongoing monitoring cadence with quarterly review for high-risk systems and annual review for the program itself.
What does AI compliance typically cost?
Highly variable. For a mid-size enterprise pursuing EU AI Act readiness on existing AI systems, the first-year cost (legal review, controls implementation, documentation, training) typically runs $300K–1.5M. ISO 42001 certification adds $50K–300K depending on scope and existing management system maturity. Ongoing compliance program cost is typically 5–10% of total AI program spend. Penalties for non-compliance dwarf these numbers; the calculus is usually clear once you run it.
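Running that calculus takes one line of arithmetic. The turnover figure below is a hypothetical example; the compliance cost is the top of the range quoted above.

```python
# Back-of-envelope penalty exposure vs first-year compliance cost.
# $2B turnover is a hypothetical example, not a benchmark.
global_turnover = 2_000_000_000
penalty_cap = max(35_000_000, 0.07 * global_turnover)  # EU AI Act worst case: higher of the two
first_year_compliance = 1_500_000                      # top of the quoted range

print(penalty_cap)                           # 140000000.0
print(round(penalty_cap / first_year_compliance))  # 93
```

At this (illustrative) size, worst-case penalty exposure is roughly 93 times the first-year compliance spend, which is why the decision rarely survives contact with a spreadsheet.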
Thomas Prommer, Technology Executive — CTO/CIO/CTAIO


Continue the AI security cluster

Compliance defines what you must do; risk management makes sure you actually do it.