
    AI Governance Policy Template for Enterprise Teams

    A complete AI governance policy template covering model inventory, risk tiers, human oversight requirements, vendor management, and incident response. Adapt for your organization.

Ertas Team

A written AI governance policy is no longer optional for enterprise organizations. The EU AI Act requires documented governance processes for high-risk systems. Cyber insurance underwriters are starting to ask for one as a condition of coverage. Boards want one in the wake of high-profile AI failures at peer organizations, and legal wants one because regulatory risk is now a board-level concern, not just an IT concern.

This template covers the essential sections of an enterprise AI governance policy. The policy text below is designed to be adapted — fill in the bracketed fields with your organization's specifics, adjust risk thresholds to match your industry and regulatory context, and have legal review it before publishing internally.

    How to Use This Template

    Don't try to implement every section simultaneously. Start with Sections 3 and 4 — risk classification and model inventory. Those two sections give you a foundation that everything else references. Then build out governance structure (Section 2) and human oversight requirements (Section 5). Sections 6 through 10 can follow as your program matures.


    ORGANIZATIONAL AI GOVERNANCE POLICY

Document ID: [POLICY-AI-001]
Version: [1.0]
Effective Date: [DATE]
Next Review Date: [DATE + 1 YEAR]
Policy Owner: [AI Risk Officer / CTO / CISO]
Approved By: [Board / Executive Committee]


    Section 1: Scope and Definitions

    1.1 Scope

    This policy applies to all AI systems operated by [Organization Name], including:

    • AI systems developed internally
    • AI systems procured from third-party vendors or accessed via API
    • AI capabilities embedded in third-party software products used by the organization
    • AI systems operated by contractors or partners on behalf of the organization

    This policy covers any system that uses machine learning, large language models, or automated decision-making logic that affects business outcomes, customer interactions, employee decisions, or compliance obligations.

    1.2 Definitions

• AI System: Any software system that uses machine learning, statistical modeling, or rules-based automation to produce outputs (predictions, classifications, generated content, decisions) that influence organizational decisions or customer outcomes.
    • High-Risk AI: An AI system operating in a context where errors could cause significant harm to individuals, create legal liability, or trigger regulatory enforcement. See Section 3 for classification criteria.
    • Model: The trained artifact at the core of an AI system — the weights and parameters that produce outputs from inputs. Distinct from the application built around it.
    • Human-in-the-Loop (HITL): An oversight configuration where a human reviews and approves each AI output before any consequential action is taken.
    • Human-on-the-Loop (HOTL): An oversight configuration where the AI operates autonomously but a human monitors outputs and has defined override authority.
    • Human-out-of-the-Loop (HOOTL): An automated configuration with no human review of individual outputs; humans review aggregate performance metrics only.
    • Audit Trail: An immutable, timestamped record of AI system inputs, outputs, processing steps, human review decisions, and system changes.
    • Model Inventory: The organization's formal register of all AI systems in operation, as defined in Section 4.

    Section 2: Governance Structure

    2.1 AI Governance Committee

    The AI Governance Committee (AGC) is the primary decision-making body for AI policy, risk tolerance, and escalated decisions.

    • Composition: Chief Technology Officer, Chief Risk Officer, Chief Legal Officer, Chief Information Security Officer, and rotating business unit representative
    • Meeting frequency: Quarterly regular meetings; ad hoc for P0/P1 incidents or material policy decisions
    • Decision authority: Approves high-risk AI system deployments; sets risk tolerance thresholds; approves policy changes; reviews incident reports for P0 and P1 events

    2.2 AI System Owner

    Every AI system in the Model Inventory must have a named AI System Owner.

    • Responsibilities: Maintain accurate inventory entry; ensure validation is completed on schedule; respond to incidents within defined SLAs; ensure human oversight is implemented as specified; escalate to AGC when risk tier or scope changes
    • Assignment: AI System Owners are assigned at the time of Model Inventory registration and must be approved by the relevant business unit head

    2.3 AI Risk Officer

    The AI Risk Officer holds enterprise-level accountability for AI governance program effectiveness.

    • Responsibilities: Maintain the AI governance policy; oversee the Model Inventory; report to the AGC quarterly on program status; coordinate with external auditors and regulators; monitor regulatory changes and update policy within 30 days of material changes

    Section 3: AI Risk Classification

    All AI systems must be classified into one of three risk tiers at the time of Model Inventory registration. The AI System Owner proposes the tier; the AI Risk Officer confirms or adjusts.

    Tier 1 — High Risk

    Systems that affect individuals' access to services, employment opportunities, healthcare outcomes, legal status, financial products, or educational opportunities. Also includes any system classified as high-risk under the EU AI Act Annex III or as a model under SR 11-7 in a regulated financial context.

    Examples: loan eligibility screening, employee performance scoring, medical triage support, contract review with individual-level recommendations, fraud detection with account-level consequences.

    Required controls:

    • Human-in-the-Loop oversight (HITL) — human approval before any consequential action
    • Full audit trail (immutable, timestamped, minimum [X] year retention)
    • Quarterly validation
    • Documented model card
    • Registration in Model Inventory before deployment

    Tier 2 — Medium Risk

    Internal productivity tools, content generation for internal use, analytics dashboards without individual-level consequences, and customer-facing tools that produce recommendations rather than decisions.

    Examples: internal document summarization, sales forecasting dashboards, customer support chatbot (with human escalation path), meeting transcription and summary.

    Required controls:

    • Human-on-the-Loop (HOTL) oversight with defined override process
    • Incident logging
    • Semi-annual validation review
    • Registration in Model Inventory within [X] days of deployment

    Tier 3 — Low Risk

    Systems with no direct individual impact, high error tolerance, and no regulatory scope.

    Examples: spam filters, autocomplete suggestions, content tag recommendations, internal search ranking.

    Required controls:

    • Basic input/output logging
    • Annual review
    • Registration in Model Inventory within [X] days of deployment

    Tier escalation: If a Tier 2 or Tier 3 system is modified to affect individual-level outcomes, or if its outputs are used to feed a Tier 1 decision, it must be re-classified. The AI System Owner is responsible for identifying and reporting tier escalation triggers.
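
    For teams that gate deployments in code, the classification criteria above reduce to a simple decision function. Below is a minimal sketch, not part of the policy itself; the profile fields are hypothetical shorthand for the Section 3 criteria, and the AI Risk Officer's confirmation remains the authoritative step.

```python
from dataclasses import dataclass

# Hypothetical shorthand for the Section 3 criteria; the AI Risk Officer
# confirms or adjusts whatever tier this proposes.
@dataclass
class SystemProfile:
    affects_individual_outcomes: bool  # services, employment, healthcare, credit...
    eu_ai_act_annex_iii: bool          # listed high-risk use under the EU AI Act
    sr_11_7_model: bool                # model in a regulated financial context
    feeds_tier1_decision: bool         # outputs consumed by a Tier 1 system (escalation trigger)
    customer_facing: bool              # recommendations shown to customers
    internal_productivity: bool        # summarization, dashboards, transcription

def propose_tier(p: SystemProfile) -> int:
    """Propose a risk tier (1 = high, 3 = low) from the Section 3 criteria."""
    if (p.affects_individual_outcomes or p.eu_ai_act_annex_iii
            or p.sr_11_7_model or p.feeds_tier1_decision):
        return 1
    if p.customer_facing or p.internal_productivity:
        return 2
    return 3
```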


    Section 4: Model Inventory Requirements

    4.1 Registration requirement

    All AI systems must be registered in the [Organization Name] Model Inventory:

    • Tier 1 systems: before deployment to production
    • Tier 2 systems: within [10] business days of deployment
    • Tier 3 systems: within [30] business days of deployment

    Shadow AI (systems deployed outside the formal IT procurement process) must be reported and registered within [30] days of discovery. AI System Owners and business unit heads are jointly responsible for identifying and reporting shadow AI.
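
    Registration deadlines can be checked mechanically. A minimal sketch, assuming the bracketed [10]- and [30]-business-day defaults above; the helper name is illustrative:

```python
from datetime import date, timedelta

# Business-day registration windows from Section 4.1 (bracketed defaults).
REGISTRATION_DEADLINE_DAYS = {2: 10, 3: 30}  # Tier 1 registers pre-deployment

def registration_deadline(deployed: date, risk_tier: int) -> date | None:
    """Deadline for Model Inventory registration; None means 'before deployment'."""
    if risk_tier == 1:
        return None  # Tier 1: registration must precede production deployment
    d, remaining = deployed, REGISTRATION_DEADLINE_DAYS[risk_tier]
    while remaining > 0:        # count business days (Mon-Fri)
        d += timedelta(days=1)
        if d.weekday() < 5:
            remaining -= 1
    return d
```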

    4.2 Required inventory fields

    See the AI Model Inventory Template for the complete field specification and a populated example. At minimum, each inventory entry must include: Model ID, Model Name, Version, Type, Vendor/Source, Deployment Environment, Business Purpose, Risk Tier, Owner, Validation Status, Last Validation Date, Next Review Date, Regulatory Scope, Human Oversight Level, and Incident Log Link.
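
    Kept as structured data rather than a spreadsheet, the minimum field set maps onto a record type. A minimal sketch, with illustrative field names mirroring the list above:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class InventoryEntry:
    """One Model Inventory record, mirroring the minimum fields in 4.2."""
    model_id: str
    model_name: str
    version: str
    model_type: str               # e.g. "LLM", "gradient-boosted classifier"
    vendor_or_source: str
    deployment_environment: str   # e.g. "production", "staging"
    business_purpose: str
    risk_tier: int                # 1, 2, or 3 per Section 3
    owner: str                    # named AI System Owner (Section 2.2)
    validation_status: str
    last_validation_date: Optional[date]
    next_review_date: date
    regulatory_scope: list[str] = field(default_factory=list)  # e.g. ["EU AI Act"]
    human_oversight_level: str = "HITL"   # HITL / HOTL / HOOTL (Section 1.2)
    incident_log_link: str = ""
```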

    4.3 Review cadence

• Tier 1 (High Risk): quarterly
    • Tier 2 (Medium Risk): semi-annually
    • Tier 3 (Low Risk): annually
    • Any tier (post-incident): within 5 business days of incident closure
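
    The cadence table translates into a next-review calculation; a sketch, assuming review intervals of 3, 6, and 12 months for Tiers 1-3:

```python
import calendar
from datetime import date

REVIEW_INTERVAL_MONTHS = {1: 3, 2: 6, 3: 12}  # per the Section 4.3 cadence table

def next_review_date(last_review: date, risk_tier: int) -> date:
    """Latest acceptable date of the next scheduled review for a given tier."""
    months = REVIEW_INTERVAL_MONTHS[risk_tier]
    year_offset, month_index = divmod(last_review.month - 1 + months, 12)
    year, month = last_review.year + year_offset, month_index + 1
    day = min(last_review.day, calendar.monthrange(year, month)[1])  # clamp month-end
    return date(year, month, day)
```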

    Section 5: Human Oversight Requirements

    Human oversight requirements are determined by risk tier and implemented using the HITL Workflow Design Worksheet prior to deployment.

    Tier 1 systems: Human approval is required before any action affecting an individual is taken. Reviewers must have access to: the AI output, the inputs that produced it, the model version, and relevant policy context. Review SLAs must be defined and monitored. Override rates must be tracked.

    Tier 2 systems: Human monitoring is required with a defined override process. Monitoring includes: regular sampling of outputs, automated alerts for anomalous output patterns, and a documented escalation path for reviewer-identified concerns.

    Tier 3 systems: Automated operation is permitted with output logging and aggregate performance monitoring. If Tier 3 outputs are used as inputs to Tier 1 or Tier 2 systems, the full pipeline is treated as Tier 1 for oversight purposes.

    Automation bias prevention: All systems with human review must implement randomized retrospective review of auto-approved decisions. Minimum retrospective review rate: [5]%. Override rate deviations greater than [20]% from baseline trigger a review of threshold settings.
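
    Both controls are straightforward to implement. A minimal sketch, assuming the bracketed [5]% sampling rate and treating the [20]% deviation band as relative to baseline (the policy leaves that interpretation to the adopter):

```python
import random

RETRO_REVIEW_RATE = 0.05    # the policy's bracketed [5]% placeholder
DEVIATION_THRESHOLD = 0.20  # the policy's bracketed [20]% placeholder

def select_for_retrospective_review() -> bool:
    """Randomly route ~5% of auto-approved decisions to retrospective review."""
    return random.random() < RETRO_REVIEW_RATE

def override_rate_alert(current_rate: float, baseline_rate: float) -> bool:
    """Flag override-rate drift greater than 20% (relative) from baseline."""
    if baseline_rate == 0:
        return current_rate > 0  # any overrides where the baseline had none
    return abs(current_rate - baseline_rate) / baseline_rate > DEVIATION_THRESHOLD
```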


    Section 6: Vendor Management

    6.1 Pre-deployment evaluation

    All AI vendors must be evaluated using the AI Vendor Evaluation Scorecard before any AI system is deployed using their technology. This includes: commercial API providers, embedded AI in SaaS products, and open-source foundation models where a vendor relationship exists.

Minimum acceptable overall scorecard score: 3.0. No mission-critical Tier 1 system may depend on a vendor scoring below 3.0 in Dimension 2 (Audit and Logging) or Dimension 4 (Data Governance).
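
    As a procurement gate, those thresholds read as below; the dimension keys are illustrative stand-ins for the scorecard's Dimension 2 and Dimension 4:

```python
MIN_SCORE = 3.0
# Dimensions that gate mission-critical Tier 1 dependencies (Section 6.1).
TIER1_CRITICAL_DIMENSIONS = ("audit_and_logging", "data_governance")

def vendor_approved(scores: dict[str, float], tier1_mission_critical: bool) -> bool:
    """Apply the Section 6.1 thresholds to a vendor's dimension scores."""
    overall = sum(scores.values()) / len(scores)  # assumes overall = mean of dimensions
    if overall < MIN_SCORE:
        return False
    if tier1_mission_critical:
        return all(scores[d] >= MIN_SCORE for d in TIER1_CRITICAL_DIMENSIONS)
    return True
```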

    6.2 Ongoing evaluation

    Annual re-evaluation is required for all active vendors. The following events trigger immediate re-evaluation outside the annual cycle:

    • Vendor acquisition or change of control
    • Material changes to the vendor's data governance terms
    • Vendor model updates that affect a Tier 1 system
    • Any regulatory action involving the vendor

    6.3 Contractual requirements

    Before deploying any Tier 1 AI system using a vendor model, the following must be confirmed in writing:

    • Data processing agreement or equivalent confirming data handling terms
    • Confirmation that the organization's data is not used for model training (or explicit opt-out)
    • Version pinning availability and change notification commitments
    • Log export capability meeting the organization's retention requirements

    Section 7: Data Governance

    Training data and inference-time data used with AI systems must comply with the organization's data classification policy.

    • PHI (Protected Health Information) and PII (Personally Identifiable Information): require explicit AI processing approval from the Privacy Officer before use in any AI system (training or inference)
    • Privileged data (legal, financial, trade secrets): require approval from the relevant function head and Legal before use in any AI system
    • Third-party data: must be reviewed for license terms permitting AI use before inclusion in training datasets or inference pipelines

    No personal data may be sent to a third-party AI vendor without an approved data processing agreement confirming the vendor's handling obligations.


    Section 8: Audit Trail and Logging

    Tier 1 systems: All inputs, outputs, model version, timestamp, and human review decisions must be logged for every inference. Logs must be immutable (tamper-evident), timestamped in UTC, and stored for a minimum of [X years per applicable regulation].

    Tier 2 systems: All inputs, outputs, model version, and timestamp must be logged. Human review decisions must be logged for any case that received human review. Minimum retention: [X years].

    Tier 3 systems: Aggregate performance metrics must be logged. Minimum retention: [1 year].

    Log access is restricted to authorized personnel. Logs must be available to the AI Risk Officer, internal audit, and authorized regulators on request. Log infrastructure must include protection against accidental loss.
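
    One common way to make logs tamper-evident is hash-chaining: each record carries the hash of its predecessor, so editing any earlier entry breaks every later link. A minimal sketch, not tied to any particular log infrastructure; the record fields follow the Tier 1 list above:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list[dict], inputs: dict, output: str,
                 model_version: str, review_decision: str | None) -> dict:
    """Append a hash-chained inference record. Inputs must be JSON-serializable."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "output": output,
        "model_version": model_version,
        "human_review_decision": review_decision,  # required for Tier 1
        "prev_hash": prev_hash,  # links this record to its predecessor
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record
```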


    Section 9: Incident Response

    An AI incident is any event in which an AI system produces output that causes or contributes to unintended harm to an individual, a regulatory breach, a significant financial loss, or a material reputational risk.

    9.1 Severity levels

• P0 — Critical: physical harm; financial loss >$[100K]; regulatory breach; 1,000+ individuals affected. Initial response SLA: notify the AI Risk Officer within 1 hour.
    • P1 — High: systematic errors for a defined group; compliance gap discovered; reputational risk if public. Initial response SLA: notify the AI Risk Officer within 4 hours.
    • P2 — Medium: incorrect outputs for a subset of inputs; no immediate harm. Initial response SLA: notify the AI System Owner within 24 hours.
    • P3 — Low: quality degradation; no individual harm; no compliance implication. Initial response SLA: log and review within [5] business days.
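
    The SLA column reduces to a lookup for alert routing; a small sketch with hypothetical names:

```python
# Initial-response SLAs from the Section 9.1 table (recipient, hours).
SEVERITY_SLA = {
    "P0": ("AI Risk Officer", 1),
    "P1": ("AI Risk Officer", 4),
    "P2": ("AI System Owner", 24),
}

def initial_response(severity: str) -> str:
    """Return the notification instruction for an incident severity level."""
    if severity in SEVERITY_SLA:
        who, hours = SEVERITY_SLA[severity]
        return f"Notify {who} within {hours} hour(s)"
    return "Log and review within [5] business days"  # P3
```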

    9.2 Response process

    Follow the AI Incident Response Playbook for all P0 and P1 incidents. Key steps: preserve evidence before remediation; confirm scope before containment; notify Legal and Compliance for P0/P1; conduct post-incident review within [10] business days.

    9.3 Notification requirements

    • Board/executive notification: P0 within 24 hours
    • Regulatory notification: per applicable regulation (EU AI Act, GDPR Article 33, HIPAA Breach Notification Rule)
    • Individual notification: if individuals were affected by incorrect AI decisions, Legal will advise on notification obligations

    Section 10: Policy Review and Updates

    This policy is reviewed annually by the AI Risk Officer and approved by the AI Governance Committee.

    The AI Risk Officer will update this policy within 30 days of any of the following:

    • Material change to applicable regulation (EU AI Act implementing acts, SR 11-7 guidance updates, NIST AI RMF revisions)
    • P0 incident with findings that require policy-level response
    • Board or executive direction
    • Material change in the organization's AI use profile (new high-risk use case, new regulated market)

    All policy updates are version-controlled. Previous versions are retained for [5] years.


    Implementation Guidance

    The most common implementation failure is trying to build the entire governance program at once. Start here:

    Month 1: Implement Section 3 (risk classification). Classify every AI system currently in production. This surfaces your Tier 1 systems and tells you where the highest-urgency gaps are.

    Month 2: Implement Section 4 (model inventory). Register every system. Accept that your first inventory will be incomplete — the process of building it will surface shadow AI you didn't know existed.

    Month 3: Implement Section 2 (governance structure). Assign AI System Owners to every registered system. Convene the AI Governance Committee for the first time. Review the Tier 1 systems and confirm that oversight controls are in place.

    Months 4-6: Implement Sections 5-8 (oversight, vendor management, data governance, logging). These build on the foundation the first three months established.

    Ertas Data Suite's built-in audit trail and operator logging directly satisfy the requirements in Sections 4 and 8 — every data transformation is logged with timestamp and operator ID, generating the immutable audit records this policy requires without additional tooling.
