AI Risk Management
The 2026 CTO Playbook
AI risk management is what stops AI experiments from becoming AI incidents. NIST AI RMF gives you the structure, ISO 42001 gives you certifiability, the EU AI Act gives you the legal floor, and the six risk categories below give you the operational checklist. This is the playbook a CTO or CAIO can use to stand up a real AI risk program in one quarter, including the risk register fields that make the program defensible to auditors and useful to engineering.
30-SECOND EXECUTIVE TAKEAWAY
- Pick one canonical framework. NIST AI RMF for most US enterprises; add ISO 42001 if you need certifiability; layer EU AI Act if you serve EU users. Cherry-picking controls from three frameworks creates audit chaos.
- The risk register is the program. A real, owned, reviewed risk register makes AI risk management operational. A policy PDF doesn't. The fields below are the minimum.
- Governance and risk management are different jobs. Governance decides what risk to accept; risk management proves the controls work. Most program failures come from mistaking one for the other.
- 4 NIST AI RMF functions: Govern, Map, Measure, Manage
- 4 EU AI Act risk tiers: Prohibited, High-risk, Limited, Minimal
- 6 risk categories every enterprise AI program must cover
Why AI risk management is its own discipline
Enterprises already do risk management. Information security risk, operational risk, vendor risk, model risk for the financial sector, product risk for regulated industries. The reasonable question is why AI needs its own framework on top.
The honest answer is that AI risk has properties existing frameworks weren't designed for: probabilistic outputs (the same input can produce different outputs), training data lineage (risks inherited from data the organization didn't curate), foundation model dependency (risks inherited from a model the organization didn't build), agentic action (the system takes actions in the real world, not just produces predictions), and regulatory exposure that's changing fast under the EU AI Act, NIST guidance, and sector-specific rules.
Existing risk frameworks need to be extended or supplemented. AI risk management is the discipline that does that work. NIST AI RMF is the most-adopted scaffold; ISO 42001 is the certifiable parallel; the EU AI Act is the legal mandate where applicable.
THE FRAMEWORK
NIST AI RMF: the four functions
NIST AI RMF organizes AI risk work into four functions. The functions aren't sequential phases. They run in parallel for any AI system, on different cadences. The framework deliberately doesn't prescribe specific controls; that's your job. It prescribes the structure your controls live inside.
Govern
Cultivate a culture of AI risk management across the organization. Establish policies, accountability, and the resources to execute. The function that turns AI risk management from project work into a sustained capability.
In practice: Define AI use policy, designate AI risk owner, fund the program, train executives on AI threat model, set risk appetite, approve exceptions process.
Map
Establish the context and characterize the AI system. What it does, who it affects, what data it uses, what decisions it influences, what regulations apply.
In practice: Inventory AI systems, classify each by EU AI Act risk tier and internal sensitivity, document data lineage, map dependencies on foundation models and vendors.
Measure
Analyze, assess, benchmark, and monitor AI risks and the trustworthiness of AI systems. Quantify where possible; qualify where not.
In practice: Run model evaluations and red teams, measure fairness and accuracy across sub-populations, assess robustness to drift and adversarial inputs, monitor production behavior.
Manage
Allocate risk resources to mapped and measured risks: prioritize, treat, monitor, and communicate. The execution function.
In practice: Maintain risk register, decide treat/transfer/avoid/accept per risk, implement controls, run incident response, brief the executive committee, report to the board.
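The Manage function's per-risk decision (treat, transfer, avoid, accept) can be made mechanical rather than tribal. A minimal sketch in Python, assuming hypothetical names: the enum, the 1–5 impact scale from the register fields below, and the sign-off rule are illustrative conventions, not anything NIST prescribes.

```python
# Illustrative sketch only: encoding the Manage function's
# treat/transfer/avoid/accept decision per risk. Names and the
# sign-off threshold are assumptions, not NIST AI RMF requirements.
from enum import Enum

class Treatment(Enum):
    TREAT = "treat"        # implement controls to reduce the risk
    TRANSFER = "transfer"  # shift via contract, insurance, or vendor terms
    AVOID = "avoid"        # don't ship the feature / retire the system
    ACCEPT = "accept"      # documented sign-off within risk appetite

def requires_signoff(treatment: Treatment, residual_impact: int) -> bool:
    """Accepting a high-impact residual risk (4-5 on a 1-5 scale) should
    force an explicit governance sign-off, not a silent default."""
    return treatment is Treatment.ACCEPT and residual_impact >= 4
```

The point of the rule is to make "accept" a decision someone signs, which is exactly the governance-versus-controls distinction from the executive takeaway.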
SCOPE
The six AI risk categories
Every AI system carries risks from these six categories. A risk register that doesn\u2019t cover all six has gaps. Most organizations focus on security and compliance and under-invest in operational, fairness, and strategic risks. That\u2019s where the long-tail incidents come from.
| Category | Example risks | Primary framework |
|---|---|---|
| Security risks | Prompt injection, model extraction, training data poisoning, adversarial inputs, supply chain compromise of foundation models, agentic AI privilege abuse | OWASP LLM Top 10, NIST AI RMF Manage |
| Compliance risks | EU AI Act (high-risk classification, transparency obligations), ISO 42001 management system requirements, sector-specific (FDA AI/ML, financial MRM), data protection (GDPR for training data) | EU AI Act, ISO 42001, NIST AI RMF Govern |
| Operational risks | Model drift, hallucination in production, dependence on third-party model providers, cost overruns from inference, latency degradation | NIST AI RMF Measure + Manage |
| Fairness & ethics risks | Disparate performance across demographic groups, encoded bias from training data, harmful outputs, loss of human oversight in high-stakes decisions | NIST AI RMF (Trustworthy AI characteristics), EU AI Act high-risk requirements |
| Reputational risks | Public AI failures (offensive outputs, viral hallucinations), regulatory enforcement, customer trust loss, employee distrust | NIST AI RMF Govern + Manage |
| Strategic risks | Vendor lock-in to a single foundation model, technical debt from poorly architected AI features, missed opportunities from over-conservative AI policy, talent loss | CAIO and CTO judgment, board-level oversight |
DOWNLOADABLE TEMPLATE
The AI risk register: required fields
A risk register is what turns AI risk management from policy theater into operational practice. These are the fields a real register needs. Track them in whatever tool fits your existing risk org (GRC platform, internal wiki, spreadsheet); the discipline matters more than the tool.
| Field | Purpose |
|---|---|
| Risk ID | Unique identifier for cross-referencing across reports and incidents |
| AI system / use case | Which specific AI deployment this risk applies to (or "all AI systems" if cross-cutting) |
| Risk category | Security · Compliance · Operational · Fairness · Reputational · Strategic |
| Description | Plain-language description of what could go wrong and what would happen |
| NIST AI RMF function | Primary tag: Govern · Map · Measure · Manage. Drives review workflow. |
| EU AI Act tier | If applicable: Prohibited · High-risk · Limited · Minimal |
| Inherent likelihood | 1–5 score before controls; documented basis for the score |
| Inherent impact | 1–5 score (financial, regulatory, reputational); documented basis |
| Controls in place | Specific controls referenced by ID; link to the technical or policy artifact |
| Residual likelihood / impact | Score after controls; if same as inherent, controls aren’t actually working |
| Owner | Named individual (not team) accountable for this risk |
| Review cadence | Continuous · Monthly · Quarterly · Annual; based on tier and category |
| Last reviewed | Date and reviewer name; flags overdue items in dashboards |
| Triggers for re-review | Specific events (model upgrade, regulatory change, near-miss) that force out-of-cycle review |
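Whatever tool holds the register, two checks are worth automating: flagging overdue reviews against each row's cadence, and flagging rows whose residual score equals the inherent score (which, per the table, means the listed controls aren't doing anything). A minimal Python sketch, assuming a hypothetical schema that mirrors the fields above; the cadence-to-days mapping is an arbitrary choice, not a standard.

```python
# Minimal sketch of one register row; field names mirror the table above
# but the class and the CADENCE_DAYS values are assumptions, not a
# standard schema. Map onto your GRC tool's fields as needed.
from dataclasses import dataclass
from datetime import date, timedelta

CADENCE_DAYS = {"Continuous": 1, "Monthly": 31, "Quarterly": 92, "Annual": 366}

@dataclass
class RiskEntry:
    risk_id: str
    system: str                 # AI system / use case
    category: str               # Security, Compliance, Operational, ...
    rmf_function: str           # Govern, Map, Measure, or Manage
    inherent_likelihood: int    # 1-5, before controls
    inherent_impact: int        # 1-5, before controls
    residual_likelihood: int    # 1-5, after controls
    residual_impact: int        # 1-5, after controls
    owner: str                  # named individual, not a team
    review_cadence: str         # key into CADENCE_DAYS
    last_reviewed: date

    def is_overdue(self, today: date) -> bool:
        """Flag rows whose review window has lapsed, for dashboards."""
        window = timedelta(days=CADENCE_DAYS[self.review_cadence])
        return today - self.last_reviewed > window

    def controls_ineffective(self) -> bool:
        """Residual == inherent suggests the listed controls do nothing."""
        return (self.residual_likelihood, self.residual_impact) == (
            self.inherent_likelihood, self.inherent_impact)
```

Both checks are one-liners once the register is structured data, which is the practical argument for keeping it anywhere other than a static PDF.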
Subscribers to the CTAIO newsletter get the executive PDF pack with the risk register template, NIST AI RMF crosswalk, and the quarterly review SOP.
AI SECURITY BEST PRACTICES
Ten things that separate real AI risk programs from theater
Patterns from CAIO and CISO conversations across regulated and non-regulated industries. Each one is the difference between an AI risk program that survives an incident and one that fails the post-mortem.
- Inventory every AI system, including shadow AI usage. You can’t manage what you can’t see
- Adopt a single risk framework as canonical (NIST AI RMF works); don’t pick controls a la carte from multiple sources
- Tag every risk to a specific business owner, not "the AI team"
- Distinguish governance decisions (what risk we accept) from controls (how we mitigate); they have different owners
- Run quarterly tabletop exercises for AI incident response, including prompt injection and agent misbehavior
- Require an AI risk review before any new model, vendor, or use case enters production
- Map AI risks to the EU AI Act tiers even if you don’t serve EU users. The structure exposes gaps that your internal taxonomy misses
- Treat third-party AI vendors as part of your risk surface; require their NIST AI RMF or ISO 42001 alignment in contracts
- Brief the board annually on AI risk posture using risk-register data, not anecdotes
- Build the muscle to retire AI systems that no longer meet the risk threshold. This is the hardest organizational habit to develop and the one that prevents the most regret
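Two of the bullets above (requiring a risk review before production, and naming an individual owner rather than "the AI team") can be enforced as a deployment gate. A hypothetical sketch, assuming register rows as plain dicts with the field names used here; it is the shape of the idea, not a drop-in CI step.

```python
# Hypothetical pre-production gate: block a deployment when the register
# has no entries for the system, a risk lacks a named individual owner,
# or a high-impact risk lists no controls. A sketch, not a CI plugin.
def gate_deployment(system: str, register: list[dict]) -> list[str]:
    """Return blocking reasons; an empty list means the gate passes."""
    rows = [r for r in register if r["system"] == system]
    blockers = []
    if not rows:
        blockers.append(f"no risk register entries for {system}")
    for r in rows:
        if r.get("owner") in (None, "", "the AI team"):
            blockers.append(f"{r['risk_id']}: no named individual owner")
        if r.get("residual_impact", 0) >= 4 and not r.get("controls"):
            blockers.append(f"{r['risk_id']}: high impact with no controls")
    return blockers
```

Wiring a check like this into the release pipeline is what makes "require an AI risk review before production" a control rather than a policy sentence.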
FOR YOUR ROLE
What to do this quarter
For the technical CTO
Stand up the inventory. You can't manage what you can't see, and most enterprises underestimate AI surface area by 3–5x once shadow AI is included. Adopt NIST AI RMF as the canonical scaffold, integrate with your existing risk register tool, and require a risk review before any new AI feature ships. See the prompt injection guide for the security-side checklist.
For the business CAIO
Get the AI risk register on the executive committee agenda quarterly, with named owners for each top-tier risk. Brief the board annually using risk-register data, not anecdotes about ChatGPT. Align with the CRO and CISO so AI risk is one channel into enterprise risk reporting, not three competing ones. See the Chief AI Officer guide for the role mandate that makes this possible.
For the CISO / CRO
Map the AI risk surface to your existing risk taxonomy. AI risk is mostly a new application of risk management, not a new discipline. Where it genuinely is new (prompt injection, agentic action, foundation model dependency), partner with the CAIO rather than absorbing the whole problem. Add a NIST AI RMF crosswalk to your existing controls inventory.
AI Risk Management: Frequently Asked Questions
What is AI risk management?
What is the NIST AI Risk Management Framework?
How is AI risk management different from model risk management?
What goes in an AI risk register?
Who owns AI risk in the organization?
How often should AI risk reviews happen?
What’s the difference between AI risk management and AI governance?
Continue the AI security cluster
Risk management is the program; specific risks need specific defenses.