    Shadow AI Policy Template for Regulated Industries

    A practical, immediately usable AI acceptable use policy template for healthcare, financial services, and other regulated organizations. Includes data classification tables, regulatory overlays, and enforcement frameworks.

    Ertas Team

    Most AI acceptable use policies fail. They're either so vague that employees ignore them, or so restrictive that employees work around them. Either way, the organization gets the worst outcome: a policy exists, nobody follows it, and leadership assumes the problem is handled.

    This article provides a complete, practical policy template that regulated organizations can adapt and deploy. It's designed for healthcare, financial services, legal, and other industries where data handling mistakes carry regulatory consequences. Every section includes the reasoning behind it, because a policy nobody understands is a policy nobody follows.

    Why Most AI Policies Fail

    Before the template, let's understand what goes wrong.

    Problem 1: Written by legal, not operations. Policies drafted exclusively by legal counsel tend to prohibit everything that carries risk. Since all AI usage carries some risk, the result is a de facto ban wrapped in careful language. Employees read it as "they don't want us using AI" and proceed to use AI anyway, just without telling anyone.

    Problem 2: No approved alternatives. A policy that says "don't use ChatGPT" without saying "use this instead" is useless. Employees need AI tools. If the policy doesn't address that need, employees will address it themselves.

    Problem 3: No data classification guidance. "Don't put sensitive data into AI tools" leaves employees guessing about what counts as sensitive. Is an internal meeting summary sensitive? A draft email? A project timeline? Without specific guidance, employees either treat everything as sensitive (and use AI tools anyway because the restriction feels unreasonable) or treat nothing as sensitive.

    Problem 4: Set and forget. AI tools evolve monthly. A policy written in January is outdated by March. Without a review schedule, the policy becomes increasingly disconnected from reality.

    The template below addresses each of these failures.


    Section 1: Scope and Definitions

    1.1 Purpose

    This policy governs the use of artificial intelligence tools by all employees, contractors, and third-party agents acting on behalf of [Organization Name]. It establishes rules for which AI tools may be used, what data may be processed through them, and how usage is monitored and enforced.

    1.2 Scope

    This policy applies to:

    • All AI-powered software, including large language models (ChatGPT, Claude, Gemini, etc.), code assistants (GitHub Copilot, Cursor, etc.), image generators, and AI features embedded in existing software
    • Usage on company-owned devices, personal devices used for work, and any network or internet connection when processing company data
    • Both direct AI tool usage and indirect usage through third-party services that incorporate AI processing

    1.3 Definitions

    | Term | Definition |
    | --- | --- |
    | AI Tool | Any software that uses machine learning, large language models, or neural networks to generate, analyze, or transform content. This includes standalone tools (ChatGPT, Claude), embedded features (AI in Google Docs, Notion AI), and developer tools (Copilot, Cursor). |
    | Approved AI Tool | An AI tool that has been reviewed and approved by the AI Governance Committee for use with specified data tiers. See Appendix A for the current approved list. |
    | Shadow AI | Any use of non-approved AI tools for work purposes, or use of approved tools in non-approved ways (e.g., using personal accounts instead of corporate licenses). |
    | Prompt | Any input submitted to an AI tool, including text, code, files, images, and voice input. |
    | AI Output | Any content generated by an AI tool in response to a prompt. |
    | Data Tier | The classification level of data as defined in Section 3. |

    Section 2: Approved Tools and Request Process

    2.1 Approved AI Tools

    The AI Governance Committee maintains an approved tools list (Appendix A), updated quarterly. Each approved tool includes:

    • Permitted data tiers (what data can be processed)
    • Approved use cases (what the tool may be used for)
    • Required account type (corporate license vs. personal account)
    • Any tool-specific restrictions

    Current approved tools (example — customize for your organization):

    | Tool | Permitted Data | Required Account | Use Cases |
    | --- | --- | --- | --- |
    | [Internal AI Platform] | Tier 1, 2, 3 | Corporate SSO | All internal use cases |
    | GitHub Copilot Business | Tier 2, 3 (code only) | Corporate license | Code generation, debugging |
    | ChatGPT Enterprise | Tier 2, 3 | Corporate license | Writing, research, analysis |
    | Grammarly Business | Tier 3 only | Corporate license | Grammar and style checking |
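    The approved-tools list above can be encoded as a simple allowlist check that enforcement tooling consults before a prompt leaves the network. This is a sketch: the tool identifiers and account-type strings are illustrative, mirroring the example table, and a real deployment would load them from the governance committee's maintained list rather than hard-coding them.

    ```python
    # Illustrative encoding of the Section 2.1 approved-tools list.
    # Tool names and tier/account values mirror the example table above;
    # adapt them to your organization's actual list (Appendix A).
    APPROVED_TOOLS = {
        "internal-ai-platform": {"tiers": {1, 2, 3}, "account": "corporate-sso"},
        "github-copilot-business": {"tiers": {2, 3}, "account": "corporate-license"},
        "chatgpt-enterprise": {"tiers": {2, 3}, "account": "corporate-license"},
        "grammarly-business": {"tiers": {3}, "account": "corporate-license"},
    }

    def is_usage_permitted(tool: str, data_tier: int, account_type: str) -> bool:
        """True only if the tool is approved for this data tier AND the
        request comes through the required corporate account type."""
        entry = APPROVED_TOOLS.get(tool)
        if entry is None:  # non-approved tool: shadow AI by definition
            return False
        return data_tier in entry["tiers"] and account_type == entry["account"]
    ```

    Note that a personal account on an otherwise-approved tool fails the check, matching the policy's definition of shadow AI.
    
    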

    2.2 Requesting New Tools

    Any employee may request evaluation of a new AI tool by submitting a request to the AI Governance Committee. The request must include:

    1. Tool name and vendor
    2. Intended use case and business justification
    3. Types of data that would be processed
    4. Number of employees who would use it

    The committee evaluates requests within 15 business days using the following criteria:

    • Data processing and storage practices (where is data stored, is it used for training, retention period)
    • Vendor security certifications (SOC 2, ISO 27001, HIPAA BAA availability)
    • Contractual data protections
    • Regulatory compliance implications
    • Whether an existing approved tool can serve the same purpose

    Requests are approved, approved with restrictions, or denied with explanation.


    Section 3: Data Classification for AI Usage

    This is the most important section of the policy. Specific data classification eliminates the ambiguity that drives shadow AI usage.

    3.1 Data Classification Table

    | Tier | Classification | AI Policy | Examples |
    | --- | --- | --- | --- |
    | Tier 1: Prohibited | Data that must never be entered into any external AI tool, regardless of provider agreements | Internal approved tools only. Never external. | PII (SSNs, DOB, addresses), PHI (patient records, diagnoses, treatment plans), trade secrets, source code for proprietary algorithms, privileged legal communications, classified or restricted information, credentials and API keys, M&A materials, unreleased financial results |
    | Tier 2: Approval Required | Data that may be processed by approved AI tools with appropriate safeguards | Approved tools only. Corporate accounts only. Manager approval for bulk processing. | Internal reports and analyses, strategy documents, customer account information (non-PII), employee performance data (anonymized), vendor evaluations, product roadmaps, internal communications |
    | Tier 3: Permitted | Data that poses minimal risk if processed by approved AI tools | Any approved tool. No additional approval needed. | Publicly available information, general research queries, generic writing assistance, publicly documented code patterns, industry news and analysis |

    3.2 When In Doubt

    If an employee is unsure which tier applies to specific data, the rule is: treat it as the higher tier and seek clarification. Contact your manager or the AI Governance Committee. This is not a penalty — it's the expected behavior when classification is ambiguous.

    3.3 Mixed-Tier Data

    Prompts that contain data from multiple tiers are governed by the most restrictive tier. A Tier 3 research question that includes a Tier 1 customer name becomes a Tier 1 prompt. Employees should strip higher-tier data from prompts when possible.
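    The mixed-tier rule reduces to a small piece of logic: the lowest tier number present governs the whole prompt, and anything stricter than the governing target should be stripped before submission. A minimal sketch (function names are illustrative; a real system would pair this with automated classification of each snippet):

    ```python
    # Sketch of the Section 3.3 mixed-tier rule. Tier 1 is the most
    # restrictive, so min() over the tiers present picks the governing tier.
    def governing_tier(snippet_tiers: list[int]) -> int:
        """Most restrictive tier present governs the entire prompt.
        An empty list means no classified data was detected: treat as Tier 3."""
        return min(snippet_tiers, default=3)

    def snippets_to_strip(snippets: list[tuple[str, int]], target_tier: int) -> list[str]:
        """Snippets classified stricter than the target tier should be removed
        before the prompt is sent ('strip higher-tier data when possible')."""
        return [text for text, tier in snippets if tier < target_tier]
    ```
    
    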


    Section 4: Acceptable Use Guidelines

    4.1 General Requirements

    All AI tool usage must:

    • Use corporate accounts, never personal accounts
    • Comply with the data classification in Section 3
    • Produce outputs that are reviewed by a human before use in any decision, communication, or deliverable
    • Not be represented as human work when the distinction matters (e.g., regulatory filings, sworn statements)

    4.2 Prohibited Uses

    Regardless of data tier, the following uses are prohibited:

    • Submitting credentials, API keys, or authentication tokens to any AI tool
    • Using AI tools to make automated decisions about individuals (hiring, firing, lending, clinical decisions) without human review
    • Using AI outputs in regulatory filings without expert human review and approval
    • Circumventing AI tool restrictions (VPNs to bypass blocking, personal devices to avoid monitoring)
    • Using AI tools to generate content that violates any law or regulation

    4.3 Department-Specific Guidelines

    Each department should maintain supplementary guidelines that provide concrete examples relevant to their work. Examples:

    Engineering: Code generated by AI assistants must pass the same review process as human-written code. Do not paste proprietary algorithms or database schemas into external tools. Use the internal AI platform for debugging proprietary code.

    Legal: Never paste client communications, case strategy, or privileged materials into external AI tools. Draft documents generated with AI assistance must be reviewed by a licensed attorney. AI-generated legal research must be verified against primary sources.

    Human Resources: Never enter employee PII (names + compensation, names + performance ratings, names + disciplinary records) into external tools. Anonymize data before using AI for analysis.

    Finance: Never enter unreleased financial results, M&A targets, or material non-public information into any AI tool. Use the internal AI platform for financial modeling and analysis.


    Section 5: Monitoring and Enforcement

    5.1 Monitoring Scope

    The organization monitors AI tool usage to protect company data and ensure policy compliance. Monitoring includes:

    • Network traffic to known AI tool domains
    • Data volume transmitted to AI services
    • Automated scanning of outbound prompts for Tier 1 data patterns (PII, PHI, credentials)
    • Usage patterns and anomaly detection

    Monitoring does not include reading the content of every prompt. Automated systems flag potential policy violations, which are then reviewed by the security team.
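    The "automated scanning for Tier 1 data patterns" step can be sketched with a few regular expressions. These patterns are illustrative only: production DLP systems use far more robust detectors (checksum validation, context scoring, named-entity recognition), and the pattern set here is a deliberately small assumption, not a complete PII/PHI detector.

    ```python
    import re

    # Hypothetical Tier 1 patterns for scanning outbound prompts (Section 5.1).
    # Real DLP tooling is far more sophisticated; this only sketches the shape.
    TIER1_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def flag_tier1(prompt: str) -> list[str]:
        """Return the names of Tier 1 patterns found in an outbound prompt.
        A non-empty result triggers security-team review, not automatic
        discipline, consistent with the monitoring scope above."""
        return [name for name, rx in TIER1_PATTERNS.items() if rx.search(prompt)]
    ```
    
    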

    5.2 Enforcement Framework

    | Violation | First Occurrence | Second Occurrence | Third Occurrence |
    | --- | --- | --- | --- |
    | Using non-approved AI tool with Tier 3 data | Notification and training | Written warning | Access restrictions |
    | Using non-approved AI tool with Tier 2 data | Written warning and training | Access restrictions | Disciplinary action |
    | Using any external tool with Tier 1 data | Immediate investigation, access restrictions, potential disciplinary action | Disciplinary action up to termination | Termination |
    | Circumventing monitoring or blocking | Written warning and investigation | Disciplinary action | Termination |
    | Intentional exfiltration via AI tools | Immediate termination and legal referral | | |
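    The escalation matrix is a ladder: each further occurrence moves one rung, and past the last rung the final action repeats. A sketch of that lookup (violation keys and action strings paraphrase the table above and are illustrative):

    ```python
    # Sketch of the Section 5.2 enforcement matrix. Keys and action strings
    # paraphrase the table; occurrences past the last column repeat the
    # final action (e.g., intentional exfiltration is always termination).
    ENFORCEMENT = {
        "non_approved_tool_tier3": [
            "notification and training", "written warning", "access restrictions"],
        "non_approved_tool_tier2": [
            "written warning and training", "access restrictions", "disciplinary action"],
        "external_tool_tier1": [
            "investigation and access restrictions",
            "disciplinary action up to termination", "termination"],
        "circumvention": [
            "written warning and investigation", "disciplinary action", "termination"],
        "intentional_exfiltration": ["immediate termination and legal referral"],
    }

    def action_for(violation: str, occurrence: int) -> str:
        """occurrence is 1-based; beyond the ladder, the last rung applies."""
        ladder = ENFORCEMENT[violation]
        return ladder[min(occurrence, len(ladder)) - 1]
    ```
    
    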

    5.3 Safe Harbor Provision

    Employees who self-report accidental policy violations within 24 hours will not face disciplinary action for first-time Tier 2 violations. This provision exists to encourage transparency. It does not apply to Tier 1 violations involving PHI, PII, or regulated data where mandatory reporting obligations exist.


    Section 6: Incident Response for Data Leakage

    6.1 When a Violation Is Detected

    1. Contain (within 1 hour): Revoke the employee's access to the AI tool. If the tool allows it, request deletion of the submitted data.
    2. Assess (within 24 hours): Determine what data was exposed, its classification tier, whether it involved regulated data (PII, PHI, financial data), and the number of individuals affected.
    3. Notify (per regulatory requirements): If the exposure involves regulated data, initiate the appropriate notification process:
      • HIPAA: Notify the Privacy Officer immediately. 60-day breach notification clock may start.
      • GDPR: 72-hour notification to supervisory authority if personal data of EU residents is involved.
      • State breach notification laws: Varies by jurisdiction. Consult legal.
      • SEC: If material non-public information is involved, consult legal immediately.
    4. Remediate (within 7 days): Contact the AI vendor to request data deletion. Document the vendor's response. If training data ingestion is possible, document this as an unrecoverable exposure.
    5. Review (within 30 days): Conduct a root cause analysis. Was the policy unclear? Was the employee unaware of the classification? Was there no approved alternative for their use case? Update policy and tooling based on findings.

    6.2 Documentation

    All incidents must be documented in the AI Incident Log, including: date, employee (anonymized in aggregate reports), data classification, AI tool involved, data exposed, containment actions, regulatory notifications, root cause, and remediation steps.
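    The required log fields map naturally onto a structured record, which keeps entries consistent and makes aggregate (anonymized) reporting straightforward. A sketch, assuming Python dataclasses; field names follow the list above:

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    # Sketch of one AI Incident Log entry with the Section 6.2 required fields.
    @dataclass
    class AIIncident:
        incident_date: date
        employee_ref: str      # pseudonymous ID; anonymized in aggregate reports
        data_tier: int         # 1, 2, or 3 per Section 3
        ai_tool: str
        data_exposed: str
        containment_actions: list[str] = field(default_factory=list)
        regulatory_notifications: list[str] = field(default_factory=list)
        root_cause: str = ""
        remediation_steps: list[str] = field(default_factory=list)
    ```
    
    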


    Section 7: Training Requirements

    7.1 Mandatory Training

    | Training | Audience | Frequency | Duration |
    | --- | --- | --- | --- |
    | AI Acceptable Use Basics | All employees | Annual + at onboarding | 30 minutes |
    | Data Classification for AI | All employees | Annual | 20 minutes |
    | Department-Specific AI Guidelines | Department members | Annual | 30 minutes |
    | AI Governance Committee Training | Committee members | Quarterly | 60 minutes |
    | Incident Response Procedures | IT Security, Legal, Privacy | Semi-annual | 45 minutes |

    7.2 Training Content

    Training must include:

    • Why the policy exists (not just rules, but reasoning)
    • How to access approved AI tools
    • How to classify data before using AI tools
    • Real examples of policy violations and their consequences
    • How to report accidental violations (emphasizing safe harbor)
    • How to request new tools or capabilities

    Training that consists solely of "here's what you can't do" is counterproductive. Lead with "here's what you can do and how to do it safely."


    Section 8: Review Schedule

    | Review Activity | Frequency | Responsible Party |
    | --- | --- | --- |
    | Approved tools list update | Quarterly | AI Governance Committee |
    | Policy full review | Semi-annual | Legal, Security, Operations |
    | Monitoring effectiveness assessment | Quarterly | IT Security |
    | Incident trend analysis | Monthly | IT Security |
    | Employee feedback survey | Semi-annual | HR + AI Governance Committee |
    | Regulatory landscape review | Quarterly | Legal |

    Regulatory Overlay: HIPAA

    Organizations subject to HIPAA must add the following provisions:

    • PHI is always Tier 1. No PHI may be entered into any external AI tool, even those with Business Associate Agreements, unless the BAA explicitly covers AI-assisted processing and the tool has been specifically approved for PHI use cases.
    • De-identification standard. Data de-identified per the HIPAA Safe Harbor method (18 identifiers removed) may be treated as Tier 2. Expert determination method de-identification may be treated as Tier 3.
    • BAA requirements for AI vendors. Any AI tool approved for Tier 2 data that may incidentally process PHI must have a BAA that addresses: data use for model training (must be prohibited), data retention and deletion, breach notification timeline, audit rights, and subcontractor obligations.
    • Minimum necessary standard. Prompts involving any health-related data must contain only the minimum necessary information for the intended purpose.
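    Safe Harbor de-identification means removing all 18 identifier categories, which in practice requires vetted tooling and expert review, not ad hoc regexes. Purely to illustrate the redaction shape, here is a minimal sketch covering two identifier patterns; it is nowhere near a complete or compliant implementation:

    ```python
    import re

    # ILLUSTRATIVE ONLY. HIPAA Safe Harbor requires removing 18 identifier
    # categories; real de-identification needs vetted tooling and expert
    # review. These two patterns only sketch the redaction mechanism.
    REDACTIONS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
        (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),
    ]

    def redact(text: str) -> str:
        """Replace each matched identifier pattern with a placeholder token."""
        for rx, token in REDACTIONS:
            text = rx.sub(token, text)
        return text
    ```
    
    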

    Regulatory Overlay: GDPR

    Organizations processing personal data of EU/EEA residents must add:

    • Lawful basis for processing. AI tool usage involving personal data requires an identified lawful basis under Article 6. Legitimate interest assessments must be documented for AI use cases.
    • Data transfer mechanisms. External AI tools operated by US-based companies require valid transfer mechanisms (Standard Contractual Clauses, EU-US Data Privacy Framework certification, or Binding Corporate Rules).
    • DPIA requirement. A Data Protection Impact Assessment must be completed before deploying any AI tool that processes personal data at scale, involves automated decision-making, or processes special categories of data.
    • Right to explanation. When AI outputs influence decisions about individuals, the organization must be able to explain the logic involved. This requires maintaining records of AI tool usage in decision processes.
    • Data subject rights. Employees, customers, and other data subjects retain their rights (access, rectification, erasure, portability) for data processed through AI tools.

    Regulatory Overlay: SOC 2

    Organizations maintaining SOC 2 compliance must ensure:

    • Change management. Introduction of new AI tools must go through the change management process documented in your SOC 2 controls.
    • Access control. AI tool access must be provisioned and deprovisioned through the same identity management systems as other applications. Personal account usage violates access control requirements.
    • Logging and monitoring. AI tool usage logs must be retained for the same period as other system access logs (typically 12 months minimum).
    • Vendor risk management. AI vendors must be assessed through your vendor risk management program. Risk assessments must be documented and reviewed annually.
    • Incident management. AI data leakage incidents must be managed through the existing incident management process and documented accordingly.

    Regulatory Overlay: EU AI Act

    Organizations deploying AI systems in the EU must consider:

    • Risk classification. Determine whether your AI use cases fall under prohibited, high-risk, limited-risk, or minimal-risk categories under the EU AI Act.
    • High-risk obligations. If any AI usage qualifies as high-risk (e.g., AI used in employment decisions, creditworthiness assessment, or healthcare), additional requirements apply: risk management systems, data governance, technical documentation, human oversight, and accuracy/robustness requirements.
    • Transparency obligations. Users must be informed when they are interacting with an AI system. Content generated by AI must be labeled as such when it could be mistaken for human-generated content.
    • General-purpose AI models. If using foundation models (GPT-4, Claude, Llama, etc.), ensure compliance with transparency and documentation requirements for general-purpose AI models under Article 53.

    Applied Example: Healthcare Organization Policy

    A 500-bed hospital system with 4,000 employees adapted this template as follows:

    Approved tools: Internal AI platform (built on Llama 3.3, deployed on-premise) for all tiers. Microsoft Copilot for Microsoft 365 (corporate license, Tier 2 and 3 only, with PHI explicitly prohibited). No external consumer AI tools approved.

    Key modifications: Added a specific prohibition on entering patient identifiers (name, MRN, DOB) into any prompt, even on the internal platform, unless the use case has been approved by the Privacy Officer. Created a "clinical AI" sub-policy for AI-assisted clinical decision support tools, separate from the general AI policy. Required the AI Governance Committee to include the Chief Medical Information Officer and a clinical staff representative.

    Enforcement result: Shadow AI usage dropped from an estimated 45% of clinical staff to 8% within 120 days of deploying the internal AI platform and policy. The remaining 8% was primarily physicians using specialty medical AI tools that are being evaluated for the approved list.

    Applied Example: Financial Services Firm Policy

    A mid-size investment advisory firm (800 employees, SEC-registered) adapted this template:

    Approved tools: Internal AI platform (on-premise, isolated from internet) for all tiers. Bloomberg Terminal AI features for Tier 2 and 3 market data. No external consumer AI tools approved for any purpose.

    Key modifications: Added an explicit prohibition on entering material non-public information (MNPI) into any external AI tool, with violation treated as a potential insider trading incident requiring immediate legal review. Added a requirement that AI-generated investment research carry an "AI-assisted" label in all client-facing materials. Required pre-clearance for any AI tool used in quantitative trading strategy development.

    Enforcement result: Consolidated all AI usage onto the internal platform within 90 days. Two incidents in the first quarter: both Tier 2 violations (internal reports entered into a non-approved browser extension with AI features), both caught by endpoint monitoring, both resolved through retraining.


    The Enforcement Paradox

    The hardest part of AI policy is calibrating enforcement. The paradox:

    • Too strict: Employees perceive the policy as unreasonable, compliance drops, shadow usage increases, and the policy exists only on paper.
    • Too loose: The policy provides no meaningful protection, regulators view it as inadequate, and incidents occur with no framework for response.

    The goal is a usable middle ground. The policy should be restrictive enough to prevent genuine harm and permissive enough that employees can follow it without significant productivity loss.

    The strongest signal that your policy has found the right balance: employees report accidental violations through the safe harbor provision, request new tools through the formal process, and usage of the internal AI platform grows month over month.

    If nobody is using the safe harbor, nobody is requesting tools, and internal platform usage is flat — your policy is too strict and employees are working around it. If you're seeing frequent Tier 1 violations — your policy is too loose or your training is inadequate.

    Measure, adjust, and iterate. A policy is a living document, not a monument.
