
AI Risk Management

The 2026 CTO Playbook

AI risk management is what stops AI experiments from becoming AI incidents. NIST AI RMF gives you the structure, ISO 42001 gives you certifiability, the EU AI Act gives you the legal floor, and the six risk categories below give you the operational checklist. This is the playbook a CTO or CAIO can use to stand up a real AI risk program in one quarter, including the risk register fields that make the program defensible to auditors and useful to engineering.

30-SECOND EXECUTIVE TAKEAWAY

  • Pick one canonical framework. NIST AI RMF for most US enterprises; add ISO 42001 if you need certifiability; layer EU AI Act if you serve EU users. Cherry-picking controls from three frameworks creates audit chaos.
  • The risk register is the program. A real, owned, reviewed risk register makes AI risk management operational. A policy PDF doesn’t. The fields below are the minimum.
  • Governance and risk management are different jobs. Governance decides what risk to accept; risk management proves the controls work. Most failures are mistaking one for the other.

By the numbers:

  • 4 NIST AI RMF functions: Govern, Map, Measure, Manage
  • 4 EU AI Act risk tiers: Prohibited, High-risk, Limited, Minimal
  • 6 risk categories every enterprise AI program must cover

Why AI risk management is its own discipline

Enterprises already do risk management. Information security risk, operational risk, vendor risk, model risk for the financial sector, product risk for regulated industries. The reasonable question is why AI needs its own framework on top.

The honest answer is that AI risk has properties existing frameworks weren’t designed for: probabilistic outputs (the same input can produce different outputs), training data lineage (risks inherited from data the organization didn’t curate), foundation model dependency (risks inherited from a model the organization didn’t build), agentic action (the system takes actions in the real world, not just produces predictions), and regulatory exposure that’s changing fast under the EU AI Act, NIST guidance, and sector-specific rules.

Existing risk frameworks need to be extended or supplemented. AI risk management is the discipline that does that work. NIST AI RMF is the most-adopted scaffold; ISO 42001 is the certifiable parallel; the EU AI Act is the legal mandate where applicable.

THE FRAMEWORK

NIST AI RMF: the four functions

NIST AI RMF organizes AI risk work into four functions. The functions aren’t sequential phases. They run in parallel for any AI system, on different cadences. The framework deliberately doesn’t prescribe specific controls; that’s your job. It prescribes the structure your controls live inside.

Govern

Cultivate a culture of AI risk management across the organization. Establish policies, accountability, and the resources to execute. The function that turns AI risk management from project work into a sustained capability.

In practice: Define AI use policy, designate AI risk owner, fund the program, train executives on AI threat model, set risk appetite, approve exceptions process.

Map

Establish the context and characterize the AI system. What it does, who it affects, what data it uses, what decisions it influences, what regulations apply.

In practice: Inventory AI systems, classify each by EU AI Act risk tier and internal sensitivity, document data lineage, map dependencies on foundation models and vendors.

Measure

Analyze, assess, benchmark, and monitor AI risks and the trustworthiness of AI systems. Quantify where possible; qualify where not.

In practice: Run model evaluations and red teams, measure fairness and accuracy across sub-populations, assess robustness to drift and adversarial inputs, monitor production behavior.

Manage

Allocate risk resources to mapped and measured risks: prioritize, treat, monitor, and communicate. The execution function.

In practice: Maintain risk register, decide treat/transfer/avoid/accept per risk, implement controls, run incident response, brief the executive committee, report to the board.
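The four functions and their in-practice activities can double as a program checklist. A minimal sketch in Python, using the activities listed above; the dictionary structure and function name are illustrative assumptions, not part of the NIST framework itself:

```python
# Illustrative sketch: the four NIST AI RMF functions as a program
# checklist. Activities are abbreviated from the text above; the
# data structure is an assumption, not a NIST-defined schema.
NIST_AI_RMF = {
    "Govern": [
        "Define AI use policy", "Designate AI risk owner",
        "Set risk appetite", "Approve exceptions process",
    ],
    "Map": [
        "Inventory AI systems", "Classify by EU AI Act risk tier",
        "Document data lineage", "Map foundation-model dependencies",
    ],
    "Measure": [
        "Run model evaluations and red teams",
        "Measure fairness across sub-populations",
        "Monitor production behavior",
    ],
    "Manage": [
        "Maintain risk register", "Decide treat/transfer/avoid/accept",
        "Run incident response", "Report to the board",
    ],
}

def open_items(completed: set[str]) -> dict[str, list[str]]:
    """Return activities not yet completed, grouped by RMF function."""
    return {fn: [a for a in acts if a not in completed]
            for fn, acts in NIST_AI_RMF.items()}
```

Because the functions run in parallel rather than as phases, a per-function view of open items maps naturally onto quarterly program reviews.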

SCOPE

The six AI risk categories

Every AI system carries risks from these six categories. A risk register that doesn\u2019t cover all six has gaps. Most organizations focus on security and compliance and under-invest in operational, fairness, and strategic risks. That\u2019s where the long-tail incidents come from.

| Category | Example risks | Primary framework |
| --- | --- | --- |
| Security risks | Prompt injection, model extraction, training data poisoning, adversarial inputs, supply chain compromise of foundation models, agentic AI privilege abuse | OWASP LLM Top 10, NIST AI RMF Manage |
| Compliance risks | EU AI Act (high-risk classification, transparency obligations), ISO 42001 management system requirements, sector-specific (FDA AI/ML, financial MRM), data protection (GDPR for training data) | EU AI Act, ISO 42001, NIST AI RMF Govern |
| Operational risks | Model drift, hallucination in production, dependence on third-party model providers, cost overruns from inference, latency degradation | NIST AI RMF Measure + Manage |
| Fairness & ethics risks | Disparate performance across demographic groups, encoded bias from training data, harmful outputs, loss of human oversight in high-stakes decisions | NIST AI RMF (Trustworthy AI characteristics), EU AI Act high-risk requirements |
| Reputational risks | Public AI failures (offensive outputs, viral hallucinations), regulatory enforcement, customer trust loss, employee distrust | NIST AI RMF Govern + Manage |
| Strategic risks | Vendor lock-in to a single foundation model, technical debt from poorly-architected AI features, missed opportunities from over-conservative AI policy, talent loss | CAIO and CTO judgment, board-level oversight |
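The claim that a register missing any of the six categories has gaps can be checked mechanically. A minimal sketch in Python; the category names come from the table above, while the register-as-list-of-dicts shape and the function name are illustrative assumptions:

```python
# Illustrative sketch: flag coverage gaps in a risk register against
# the six categories. The register format (list of dicts with a
# "category" key) is an assumption for illustration.
CATEGORIES = {
    "security", "compliance", "operational",
    "fairness", "reputational", "strategic",
}

def coverage_gaps(register: list[dict]) -> set[str]:
    """Return categories with no risk entry at all."""
    covered = {r["category"] for r in register}
    return CATEGORIES - covered
```

A register holding only security and compliance entries would report `{"operational", "fairness", "reputational", "strategic"}` as gaps, which is exactly the under-investment pattern described above.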

DOWNLOADABLE TEMPLATE

The AI risk register: required fields

A risk register is what turns AI risk management from policy theater into operational practice. These are the fields a real register needs. Track them in whatever tool fits your existing risk org (GRC platform, internal wiki, spreadsheet); the discipline matters more than the tool.

| Field | Purpose |
| --- | --- |
| Risk ID | Unique identifier for cross-referencing across reports and incidents |
| AI system / use case | Which specific AI deployment this risk applies to (or "all AI systems" if cross-cutting) |
| Risk category | Security · Compliance · Operational · Fairness · Reputational · Strategic |
| Description | Plain-language description of what could go wrong and what would happen |
| NIST AI RMF function | Primary tag: Govern · Map · Measure · Manage. Drives review workflow. |
| EU AI Act tier | If applicable: Prohibited · High-risk · Limited · Minimal |
| Inherent likelihood | 1–5 score before controls; documented basis for the score |
| Inherent impact | 1–5 score (financial, regulatory, reputational); documented basis |
| Controls in place | Specific controls referenced by ID; link to the technical or policy artifact |
| Residual likelihood / impact | Score after controls; if same as inherent, controls aren’t actually working |
| Owner | Named individual (not team) accountable for this risk |
| Review cadence | Continuous · Monthly · Quarterly · Annual; based on tier and category |
| Last reviewed | Date and reviewer name; flags overdue items in dashboards |
| Triggers for re-review | Specific events (model upgrade, regulatory change, near-miss) that force out-of-cycle review |
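Two of these fields encode checks worth automating: residual scores that equal inherent scores signal controls that aren't working, and last-reviewed dates past the cadence window flag overdue items. A minimal sketch in Python; the field names follow the table, but the dataclass shape, cadence-to-days mapping, and method names are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed mapping from cadence label to a review window in days.
CADENCE_DAYS = {"continuous": 7, "monthly": 30, "quarterly": 91, "annual": 365}

@dataclass
class RiskEntry:
    """Illustrative sketch of one register row (subset of the fields above)."""
    risk_id: str
    category: str              # one of the six risk categories
    rmf_function: str          # Govern / Map / Measure / Manage
    inherent_likelihood: int   # 1-5 before controls
    inherent_impact: int       # 1-5 before controls
    residual_likelihood: int   # 1-5 after controls
    residual_impact: int       # 1-5 after controls
    owner: str                 # named individual, not a team
    review_cadence: str        # key into CADENCE_DAYS
    last_reviewed: date

    def controls_effective(self) -> bool:
        # If residual == inherent on both axes, controls aren't working.
        return (self.residual_likelihood < self.inherent_likelihood
                or self.residual_impact < self.inherent_impact)

    def overdue(self, today: date) -> bool:
        window = timedelta(days=CADENCE_DAYS[self.review_cadence])
        return today - self.last_reviewed > window
```

Running `controls_effective` and `overdue` across the whole register is a cheap way to feed the dashboard flags the table describes, whatever tool the register actually lives in.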

Subscribers to the CTAIO newsletter get the executive PDF pack with the risk register template, NIST AI RMF crosswalk, and the quarterly review SOP.

AI SECURITY BEST PRACTICES

Ten things that separate real AI risk programs from theater

Patterns from CAIO and CISO conversations across regulated and non-regulated industries. Each one is the difference between an AI risk program that survives an incident and one that fails the post-mortem.

  1. Inventory every AI system, including shadow AI usage. You can’t manage what you can’t see.
  2. Adopt a single risk framework as canonical (NIST AI RMF works); don’t pick controls à la carte from multiple sources.
  3. Tag every risk to a specific business owner, not "the AI team".
  4. Distinguish governance decisions (what risk we accept) from controls (how we mitigate); they have different owners.
  5. Run quarterly tabletop exercises for AI incident response, including prompt injection and agent misbehavior.
  6. Require an AI risk review before any new model, vendor, or use case enters production.
  7. Map AI risks to the EU AI Act tiers even if you don’t serve EU users; the structure exposes gaps that your internal taxonomy misses.
  8. Treat third-party AI vendors as part of your risk surface; require their NIST AI RMF or ISO 42001 alignment in contracts.
  9. Brief the board annually on AI risk posture using risk-register data, not anecdotes.
  10. Build the muscle to retire AI systems that no longer meet the risk threshold. This is the hardest organizational habit to develop and the one that prevents the most regret.

FOR YOUR ROLE

What to do this quarter

For the technical CTO

Stand up the inventory. You can’t manage what you can’t see, and most enterprises underestimate AI surface area by 3–5x once shadow AI is included. Adopt NIST AI RMF as the canonical scaffold, integrate with your existing risk register tool, and require a risk review before any new AI feature ships. See the prompt injection guide for the security-side checklist.

For the business CAIO

Get the AI risk register on the executive committee agenda quarterly, with named owners for each top-tier risk. Brief the board annually using risk-register data, not anecdotes about ChatGPT. Align with the CRO and CISO so AI risk is one channel into enterprise risk reporting, not three competing ones. See the Chief AI Officer guide for the role mandate that makes this possible.

For the CISO / CRO

Map the AI risk surface to your existing risk taxonomy. AI risk is mostly a new application of risk management, not a new discipline. Where it genuinely is new (prompt injection, agentic action, foundation model dependency), partner with the CAIO rather than absorbing the whole problem. Add a NIST AI RMF crosswalk to your existing controls inventory.

AI Risk Management: Frequently Asked Questions

What is AI risk management?
AI risk management is the discipline of identifying, measuring, mitigating, and governing the risks that come with deploying AI systems. It overlaps with cybersecurity, model risk management, and enterprise risk management, but it covers risks those frameworks don’t: probabilistic outputs, training data lineage, model drift, agentic actions, prompt injection, regulatory exposure under the EU AI Act and ISO 42001, and AI-specific reputation risk.
What is the NIST AI Risk Management Framework?
NIST AI RMF (released January 2023, ongoing updates) is the US government’s voluntary framework for managing AI risk across the lifecycle. It has four functions: Govern (organizational policies and accountability), Map (context and AI system characterization), Measure (analyze risks and trustworthiness), and Manage (treat, monitor, communicate). It’s become the de facto US standard and pairs naturally with ISO 42001 for organizations needing certifiable management systems.
How is AI risk management different from model risk management?
Model risk management (MRM) was built for the financial industry to manage statistical and quantitative models like credit scoring, pricing, and capital. The discipline is mature (SR 11-7 from the Federal Reserve dates to 2011), focuses on validation, and assumes deterministic models with stable inputs. AI risk management extends MRM to handle non-deterministic outputs, foundation models the organization didn’t train, agentic actions, prompt injection, and AI use cases outside the regulated lines of business. Many financial institutions are extending their MRM org to cover AI rather than building a separate AI risk function.
What goes in an AI risk register?
At minimum: a unique risk ID, the AI system or use case it applies to, the risk category (security, compliance, reputational, operational, fairness, etc.), the inherent likelihood and impact, the controls in place, the residual likelihood and impact after controls, the named owner, the review cadence, and the last review date. Strong AI risk registers also tag risks to specific NIST AI RMF functions, the EU AI Act risk tier, and the model lifecycle stage. The same record then drives board reporting, regulator response, and engineering remediation.
Who owns AI risk in the organization?
In 2026, three patterns work. (1) The Chief AI Officer owns AI risk strategy, with a direct line to the CRO and CISO for execution. (2) The CRO owns AI risk as one risk category among others, with the CAIO providing technical input. (3) A standing AI risk committee owns AI risk decisions; the CAIO chairs, the CISO and CRO are voting members. The wrong pattern is "the engineering team will handle it". AI risk decisions touch legal, compliance, and brand, all of which need executive ownership.
How often should AI risk reviews happen?
Continuously for high-risk systems (anything customer-facing, agentic, or under EU AI Act high-risk classification). Quarterly for medium-risk. Annually for low-risk and the framework itself. Specific triggers also force a review: a new foundation model release the system depends on, a regulatory change (EU AI Act updates, sector rules), a near-miss incident, a major architecture change, or onboarding a third-party AI vendor.
What’s the difference between AI risk management and AI governance?
AI governance is the policy layer: decisions about what AI use is acceptable, what controls are mandatory, how exceptions are approved. AI risk management is the operational layer: identifying specific risks, applying the controls, monitoring for drift, escalating exceptions. Governance produces the rules; risk management proves they work. You need both. Governance without risk management is a binder. Risk management without governance is engineering activity with no executive authority.
Thomas Prommer — Technology Executive (CTO/CIO/CTAIO)

Continue the AI security cluster

Risk management is the program; specific risks need specific defenses.