AI Policy Template: 9-Section Guide
The AI Acceptable Use Policy Every Enterprise Needs
Most AI policies fail because they are too vague to enforce, too long to read, or written by legal without engineering input. This guide gives you the nine sections an effective AI policy needs, what goes in each one, and a practical example for every section. Use it as a template to draft your own policy, or as a checklist to audit the one you already have.
30-second executive takeaway
- Nine sections, 8 to 12 pages. Purpose and scope, approved tools, data classification rules, prohibited uses, human oversight, pre-deployment review, vendor requirements, incident response, and review cadence. Anything less leaves gaps. Anything more means nobody reads it.
- A policy that is not enforced is not a policy. Technical controls, mandatory training, documented consequences, and visible metrics are the four enforcement mechanisms. You need all four. Publishing the document is not enforcement.
- Start with the template, customize everything. A template gives you the structure. But your policy needs your tools, your data classifications, your risk tiers, and your team names. A template that has not been customized is a placeholder, not a policy.
THE PROBLEM
Why most AI policies fail
The typical enterprise AI policy is written by the legal department in response to a board request or a competitor announcement. It is published on the intranet, announced in an all-hands, and then ignored. Six months later, the organization has the same shadow AI problem it had before the policy existed, plus a false sense of compliance.
Five failure modes show up repeatedly.
- Too vague: the policy says "use AI responsibly" without defining what responsible use looks like in specific situations. An engineer cannot look up whether they are allowed to paste internal code into an AI coding assistant.
- Too long: a 40-page document that covers every conceivable scenario but that nobody reads past page three.
- Not enforced: the policy exists, but there are no technical controls blocking unapproved tools, no training requirements, and no consequences for violations.
- Not updated: the policy was written before the EU AI Act compliance deadlines, before the organization deployed its current AI stack, and before the last three AI incidents.
- Written without engineering input: the policy is legally sound but operationally impractical. It requires pre-deployment reviews but does not say who conducts them, how long they take, or how they integrate with existing development workflows.
An effective AI policy avoids all five failure modes. It is specific enough to follow, short enough to read, enforced through technical controls and accountability, updated on a defined cadence, and written collaboratively by legal, engineering, security, and business stakeholders.
THE TEMPLATE
The nine sections of an effective AI policy
Each section addresses a specific governance need. Together they cover who the policy applies to, what tools are allowed, what data can be used, what is prohibited, what oversight is required, what must happen before deployment, how vendors are managed, what to do when things go wrong, and who keeps the policy current. Use this as your drafting outline.
Purpose and scope
What this policy covers and who it applies to. Define "AI" for the purposes of your organization (include ML models, LLMs, generative AI, automated decision systems, and any system that produces outputs used in decisions about people). Specify which employees, contractors, vendors, and business units are subject to the policy. State the policy's objective: to ensure AI is used safely, ethically, and in compliance with applicable regulations.
Example language
"This policy applies to all employees, contractors, and third-party vendors who develop, deploy, procure, or use AI systems on behalf of [Company]. AI systems include any software that uses machine learning, large language models, generative AI, or automated decision-making to produce outputs used in business operations, customer interactions, or decisions affecting individuals."
Approved AI tools and services
The sanctioned list of AI tools the organization has reviewed and approved for use, organized by category (code generation, content creation, data analysis, customer service). Include the process for requesting addition of a new tool: who to submit the request to, what the review criteria are, the expected timeline, and who has approval authority. This section prevents shadow AI by giving people a legitimate path to get tools approved.
Example language
"Approved tools: GitHub Copilot (code generation, Tier 2 data only), ChatGPT Enterprise (content drafting, no customer data), Anthropic Claude (analysis, Tier 2 data with DPA). To request a new tool, submit the AI Tool Request form to [CAIO office]. Review takes 10 business days. Approval authority: CAIO for Tier 1 tools, AI governance team for Tier 2."
Data classification and AI use
Which data classification tiers can be used with which AI tools. Map your existing data classification scheme (typically Tier 1: public, Tier 2: internal, Tier 3: confidential, Tier 4: restricted) to specific AI tool permissions. Tier 4 data (PII, financial records, health data, trade secrets) should never enter external AI tools unless a specific DPA and technical controls are in place. This is the section that prevents the most common AI incident: an employee pasting customer data into a public AI tool.
Example language
"Tier 1 (Public): Any approved AI tool. Tier 2 (Internal): Enterprise-licensed AI tools with DPA only. Tier 3 (Confidential): On-premise or private-cloud AI only, with CISO approval. Tier 4 (Restricted): No external AI tools. Internal models only, with pre-deployment review and CAIO sign-off. When in doubt, treat data as one tier higher than you think it is."
Prohibited uses
What AI must never be used for in your organization. Be specific. Common prohibitions: autonomous hiring or termination decisions without human review, real-time biometric surveillance of employees, automated denial of customer services without appeal path, generating content that impersonates real individuals, using AI to circumvent compliance controls, and deploying AI in high-risk use cases without completing the pre-deployment review. The prohibited uses section draws the bright lines that no business case can override.
Example language
"AI must not be used to: (a) make autonomous hiring, promotion, or termination decisions without human review and approval, (b) conduct real-time facial recognition or biometric monitoring of employees, (c) generate communications that impersonate specific individuals, (d) process Tier 4 data without explicit CAIO and CISO approval, (e) deploy in any high-risk use case without completing the pre-deployment review process defined in Section 6."
Human oversight requirements
Define the human oversight pattern for each risk tier. High-risk AI systems (decisions affecting employment, credit, healthcare, legal outcomes) require human-in-the-loop: a qualified person reviews and approves each decision before it takes effect. Medium-risk systems require human-on-the-loop: the system operates but a person monitors outputs and can override. Low-risk systems require human-in-command: a person sets the parameters and reviews aggregate performance periodically. Document who the designated reviewer is for each system and what their qualifications must be.
Example language
"High-risk systems: Human-in-the-loop. Every AI-generated recommendation must be reviewed and approved by a qualified human before it is communicated to the affected individual. The reviewer must have domain expertise and documented training on the system's limitations. Medium-risk: Human-on-the-loop with daily output review and override capability. Low-risk: Human-in-command with monthly performance review."
Pre-deployment review process
What must happen before any AI system goes into production. Define the review checklist: risk classification, data impact assessment, bias testing results (for systems that affect people), security review, legal/compliance sign-off, human oversight pattern documentation, and model card completion. Specify who conducts the review, the approval authority for each risk tier, the timeline SLA, and what happens when a system fails the review. Make this a deployment gate, not a suggestion.
Example language
"Before any AI system is deployed to production, the deploying team must complete the AI Pre-Deployment Checklist and submit it to [CAIO office]. High-risk systems: CAIO approval required, 15-business-day SLA. Medium-risk: Governance team approval, 5-business-day SLA. Low-risk: Self-certification with governance team notification, 2-business-day SLA. Systems that fail review may not be deployed until all identified issues are resolved and the review is repeated."
Vendor and third-party AI
Contractual and technical requirements for AI vendors. Require Data Processing Agreements with explicit AI training opt-outs. Require transparency about model architecture, training data provenance, and performance characteristics. Define vendor evaluation criteria: SOC 2 compliance, bias testing documentation, incident notification SLA, data residency, and right to audit. Specify that vendor AI systems are subject to the same governance standards as internal systems. The goal is to prevent vendors from becoming an ungoverned backdoor into your AI ecosystem.
Example language
"All AI vendor contracts must include: (a) DPA with explicit opt-out of using our data for model training, (b) documentation of model architecture and training data provenance, (c) 24-hour incident notification SLA, (d) right to audit upon 30 days notice, (e) data residency in approved jurisdictions. Vendor AI systems must complete the same pre-deployment review process as internal systems. Procurement may not execute AI vendor contracts without CAIO and Legal sign-off."
Incident reporting and response
What to do when something goes wrong. Define what constitutes an AI incident (biased output, harmful content, data leak through an AI tool, model producing incorrect results that affect decisions, unauthorized AI use). Establish the reporting channel (who to contact, how quickly). Define the response process: triage, containment, investigation, remediation, root cause analysis, and post-mortem. Specify regulatory notification paths for incidents that trigger reporting obligations. Include escalation criteria for when the incident reaches the governance board, the CEO, and the full board of directors.
Example language
"An AI incident is any event where an AI system produces harmful, biased, incorrect, or unauthorized output that affects or could affect individuals, business operations, or regulatory compliance. Report immediately to [[email protected]] and your manager. Response SLA: triage within 2 hours, containment within 24 hours, root cause analysis within 5 business days, remediation plan within 10 business days. Incidents involving PII or regulatory implications trigger mandatory notification to Legal within 4 hours."
Review cadence and ownership
Who updates the policy, how often, and what triggers a review. Formal review every six months by the CAIO with input from Legal, CISO, and business unit champions. Trigger-based reviews when new regulations take effect, when significant AI incidents occur, when new tool categories emerge, or when the governance board identifies gaps. Every update goes through the same approval process as the original. Version control is mandatory. A change log is published to the organization. Employees are notified of material changes and required to acknowledge them.
Example language
"The CAIO is responsible for this policy and conducts a formal review every six months. Trigger-based reviews occur within 30 days of: new AI regulation taking effect, a Severity 1 AI incident, or governance board request. All updates require governance board approval. Changes are versioned (v1.0, v1.1, v2.0) with a published change log. Material changes require employee re-acknowledgment within 30 days."
CUSTOMIZATION
Template vs policy: what to customize
A template gives you the structure, the section coverage, and example language that shows what good looks like. But a template is not a policy. Every section needs to be adapted with your organization's specific context. Here is what requires customization in every section.
- Purpose and scope: Replace the generic definition of AI with the specific systems your organization uses. Name the business units. Reference your jurisdiction and applicable regulations.
- Approved tools: List your actual tools, not example tools. Include the data tier each tool is approved for.
- Data classification: Use your existing classification scheme. If you do not have one, build one before writing the AI policy.
- Prohibited uses: Add industry-specific prohibitions. Financial services, healthcare, education, and government each have use cases that are off-limits regardless of risk tier.
- Human oversight: Name the actual reviewers and their qualifications.
- Pre-deployment review: Specify your actual review team, SLAs, and escalation paths.
- Vendor requirements: Reference your procurement process and legal review workflow.
- Incident response: Use your actual reporting channels and escalation contacts.
- Review cadence: Name the CAIO or governance owner. Set the dates.
The customization test: if you removed your company name from the policy and it could belong to any organization, it is still a template. A real policy could only belong to your organization because every section references your specific tools, teams, processes, and regulations.
ADOPTION
How to get the policy adopted
Publishing a policy is not the same as adopting a policy. Adoption means people know the policy exists, understand what it requires, follow it in practice, and face consequences when they do not. Four mechanisms make the difference between a policy that lives on the intranet and a policy that lives in the organization.
Champions, not mandates
Identify AI champions in every business unit. These are senior people who understand the policy, can answer questions from their teams, and serve as the governance program's representatives on the ground. Champions make the policy feel like organizational common sense rather than a compliance mandate from headquarters. Train them first. Give them a direct line to the CAIO. Make them visible.
Training that is actually useful
Make completion mandatory for all employees, with role-specific modules for engineers (pre-deployment review process, bias testing requirements) and managers (approved tools, data classification, incident reporting). Keep it under 30 minutes. Use scenarios from your actual organization, not generic examples. Include a knowledge check. Require an annual refresher. Track completion by business unit and report it to the governance board.
Consequences that are real
Document the consequences for policy violations and communicate them clearly. Consequence severity should scale with risk tier: using an unapproved tool for a low-risk task gets a coaching conversation. Using an unapproved tool with customer PII gets a formal review. Deploying a high-risk system without pre-deployment review is a performance management event. Consequences without training are punishment. Training without consequences is a suggestion.
Visibility that creates accountability
Publish a compliance dashboard. Track approved vs. unapproved tool usage, pre-deployment review completion rates, training completion by business unit, and incident volume. Report to the governance board monthly. Include AI policy compliance metrics in manager performance reviews. When compliance is visible, non-compliance is noticeable. When non-compliance is noticeable, behavior changes.
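The four dashboard metrics reduce to a handful of ratios per business unit. A minimal sketch of the aggregation, assuming the usage, review, and training counts are already being collected; the record shape is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UnitCompliance:
    """Hypothetical per-business-unit counters feeding the dashboard."""
    approved_tool_events: int
    unapproved_tool_events: int
    reviews_required: int
    reviews_completed: int
    staff: int
    staff_trained: int
    incidents: int

def dashboard_row(u: UnitCompliance) -> dict[str, float]:
    """The four metrics named above, as ratios a governance board can scan."""
    def ratio(n: int, d: int) -> float:
        # A zero denominator means nothing was required; report full compliance.
        return round(n / d, 2) if d else 1.0
    return {
        "approved_tool_share": ratio(u.approved_tool_events,
                                     u.approved_tool_events + u.unapproved_tool_events),
        "review_completion":   ratio(u.reviews_completed, u.reviews_required),
        "training_completion": ratio(u.staff_trained, u.staff),
        "incident_count":      float(u.incidents),
    }
```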
Frequently Asked Questions
What should go in an AI policy?
Nine sections: purpose and scope, approved tools, data classification rules, prohibited uses, human oversight requirements, pre-deployment review, vendor and third-party requirements, incident reporting and response, and review cadence and ownership.
How long should an AI policy be?
8 to 12 pages. Anything less leaves gaps; anything more means nobody reads it.
Who writes the AI policy?
Legal, engineering, security, and business stakeholders together. A policy written by legal alone tends to be legally sound but operationally impractical.
How do you enforce an AI policy?
Through four mechanisms working together: technical controls, mandatory training, documented consequences, and visible metrics. Publishing the document is not enforcement.
How often should an AI policy be updated?
Formal review every six months, plus trigger-based reviews when new regulations take effect, significant incidents occur, or new tool categories emerge.
Should we use a template or write a custom AI policy?
Start with a template for structure, then customize every section with your tools, data classifications, risk tiers, and team names. An uncustomized template is a placeholder, not a policy.
From template to policy to program
The policy template is the starting point. The AI Governance Hub connects it to the maturity model, the role definitions, and the operational frameworks that turn a document into a program.