    AI Liability and Insurance in 2026: What Your Underwriter Is Now Asking About
    Tags: ai-liability, ai-insurance, ai-governance, enterprise-ai, risk-management


    Cyber and E&O insurers are updating their questionnaires to include AI governance. Here's what they're asking and what 'good' looks like from an underwriting perspective.

    Ertas Team

    Insurance is a lagging indicator. It prices risk after the market has seen enough claims to model it. The fact that cyber and errors-and-omissions underwriters are now adding AI governance sections to renewal questionnaires tells you something important: the claims have started arriving.

    If your AI governance program exists only in slides and not in documentation, your next renewal conversation is going to be uncomfortable.

    What Underwriters Are Actually Asking

    In 2026, AI governance questions appear across three lines of insurance. Each reflects a different liability vector.

    Cyber liability underwriters are focused on data flows and system integrity. Their questions center on what data you're sending to third-party AI providers, what data processing agreements govern those transfers, and how you detect AI system failures that could lead to data exposure or business interruption.

    Errors and omissions (E&O) underwriters are focused on professional advice and service delivery. They want to know whether AI is used in client-facing decisions, what human review happens before AI-generated advice reaches a client, and how you validate model performance in your specific domain.

    Directors and officers (D&O) underwriters are focused on board-level governance. They're asking whether the board has visibility into AI systems in production, whether there's a designated AI risk owner at the executive level, and whether AI risk is included in enterprise risk management reporting.

    The specific questions that appear most frequently across all three lines in 2026:

    1. Do you have a written AI governance policy?
    2. Do you maintain a model inventory of all AI systems in production?
    3. What human oversight mechanisms are in place for AI-driven decisions?
    4. How do you validate AI model performance before deployment?
    5. What is your AI incident response process?
    6. Do you conduct bias and accuracy testing for AI that affects customers?
    7. What data do you send to third-party AI providers?
    8. Do you have data processing agreements with all AI vendors?

    These aren't philosophical questions. Underwriters are looking for documented evidence — written policies, inventory records, testing logs, vendor agreements.

    What the Pricing Signal Means

    Underwriters are not asking these questions to educate themselves. They're asking to price risk. Organizations that cannot answer these questions with documentation are being treated as higher risk — which means higher premiums, sub-limits on AI-related claims, or explicit exclusions for losses caused by AI systems.

    An explicit exclusion is the worst outcome. It means your E&O policy won't cover a claim arising from AI-assisted professional services, even though that is increasingly how you deliver your work.

    The organizations being hit hardest are those that have deployed AI tools ad hoc, without governance structures, because the tools were convenient. The liability exposure was always there. The insurance market is now pricing it explicitly.

    The OpenAI/DoD Liability Question

    OpenAI's US Department of Defense contract, signed in early 2026, raised a specific liability scenario that enterprise risk managers haven't had to consider before: what happens if your AI vendor's strategic direction changes in ways that affect model behavior, and that behavior change causes harm in your professional context?

    The scenario is concrete. Suppose a vendor optimizes models for defense-adjacent use cases — precision, authority, decisive outputs — and those characteristics cause the model to suppress appropriate hedging in a medical or legal context. The model gives a more definitive answer than the evidence supports. A professional relies on it. A client is harmed.

    The deploying enterprise faces E&O exposure. The question of whether the vendor contributed to the harm through undisclosed model changes is a contribution claim — possible but difficult to win, since vendor ToS agreements typically disclaim warranties and cap liability at fees paid.

    Your E&O underwriter is pricing the probability of exactly this scenario. They want to know how you monitor model behavior changes and what your incident response looks like when a model starts performing differently than it did at deployment.
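    One simple pattern for detecting behavior change is to keep a fixed probe set of domain-relevant prompts, record the model's answers at deployment, and periodically re-run the set to measure how far current answers have moved from that baseline. The sketch below is illustrative, not a prescribed underwriting method; exact-match agreement is a deliberately crude proxy that real monitoring would replace with semantic similarity or task-specific scoring.

```python
def behavior_drift(baseline_answers, current_answers, threshold=0.9):
    """Compare current model answers on a fixed probe set against answers
    recorded at deployment; flag drift when agreement falls below threshold.

    Exact-match agreement is a crude proxy -- production monitoring would
    use semantic similarity or task-specific scoring instead.
    """
    assert len(baseline_answers) == len(current_answers)
    agree = sum(a == b for a, b in zip(baseline_answers, current_answers))
    agreement = agree / len(baseline_answers)
    return agreement, agreement < threshold

# Hypothetical probe answers recorded at deployment vs. today.
baseline = ["needs specialist review", "insufficient evidence", "low risk"]
current  = ["needs specialist review", "definitely benign", "low risk"]

agreement, drifted = behavior_drift(baseline, current)
print(f"agreement={agreement:.2f}, drifted={drifted}")  # agreement=0.67, drifted=True
```

The point is less the metric than the record: a dated log of probe-set results is exactly the kind of artifact an underwriter can verify.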

    Three Liability Scenarios Underwriters Are Modeling

    Professional liability. An attorney uses an AI research tool that returns incorrect case citations. The attorney submits the brief without independent verification. The court sanctions the attorney. The client sues for malpractice. The ToS of the attorney's AI vendor disclaims liability for incorrect outputs, so the attorney's E&O policy is the only insurance in play — provided the policy doesn't exclude AI-assisted work product.

    Employment discrimination. A hiring tool uses AI to screen resumes. The AI produces statistically disparate outcomes across protected classes. The EEOC investigates. The organization cannot demonstrate that human review corrected for algorithmic bias because they have no logs showing how AI recommendations were used in hiring decisions. D&O exposure for the board members who approved the tool without governance requirements.

    Consumer harm. A financial services firm uses AI to recommend products. The model's recommendations consistently favor higher-margin products in ways that don't align with customer suitability requirements. A class action follows. The question of whether the AI system's behavior was a "known" issue the firm should have detected through regular model validation goes directly to whether the claim is covered or excluded.

    All three scenarios have one thing in common: the absence of documented governance makes the legal position significantly worse.

    What "Good" Looks Like to an Underwriter

    Underwriters aren't expecting perfection. They're expecting evidence of a systematic approach to AI risk. The elements that create the strongest insurance position:

    Written AI governance policy — a document that articulates your organization's approach to AI use, risk classification, and oversight. It doesn't need to be long. It needs to exist and be dated.

    Model inventory — a maintained register of all AI systems in production: what they do, what data they process, what model or vendor underlies them, and when they were last validated.

    Documented human oversight — for any AI-assisted decision that affects a customer, employee, or regulatory obligation, a record showing that a human reviewed the AI output before it was acted upon. The documentation standard is: if a claim is filed two years from now, can you reconstruct what the AI recommended and what the human did with that recommendation?

    Model validation records — evidence that you tested model performance before deployment and that you periodically re-test to detect drift. The specific tests matter less than the fact that you did them and recorded the results.

    AI incident response plan — a written procedure for what happens when an AI system produces erroneous outputs at scale. Who is notified, what systems are suspended, how affected customers are identified.

    Vendor data processing agreements — signed agreements with every third-party AI provider that address data use, retention, and processing purposes.
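    At minimum, these elements reduce to records you can query later. A sketch in Python of what a model inventory entry and a human-review log record might look like — the field names are illustrative assumptions, not an underwriting standard:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelInventoryEntry:
    """One row in the model inventory: what runs, on what data, validated when."""
    system_name: str
    purpose: str
    vendor_or_model: str
    data_categories: list
    last_validated: str  # ISO 8601 date

@dataclass
class HumanReviewRecord:
    """Answers the reconstruction test: what did the AI recommend,
    and what did the human do with that recommendation?"""
    system_name: str
    ai_recommendation: str
    reviewer: str
    decision: str      # "accepted", "modified", or "rejected"
    final_action: str
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical entries for a resume-screening tool.
inventory = [
    ModelInventoryEntry(
        system_name="resume-screener",
        purpose="rank inbound applications",
        vendor_or_model="third-party LLM API",
        data_categories=["applicant PII"],
        last_validated="2026-01-15",
    )
]

review_log = [
    HumanReviewRecord(
        system_name="resume-screener",
        ai_recommendation="advance candidate A, reject candidate B",
        reviewer="j.doe",
        decision="modified",
        final_action="advanced both candidates to phone screen",
    )
]

# Two years later: reconstruct what the AI said and what the human did.
record = asdict(review_log[0])
print(record["ai_recommendation"], "->", record["final_action"])
```

Whether these records live in a spreadsheet, a database, or a governance platform matters far less than whether they exist, carry timestamps, and survive staff turnover.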

    The Documentation Paradox

    Here's the practical problem many organizations face: they've done the governance work, but they haven't documented it. The data science team validates models — but the validation happens in a notebook that gets overwritten. Human review happens — but it's not logged in a way that creates an audit record. Vendor agreements exist — but they're the vendor's standard ToS, not a negotiated DPA.

    From an insurance underwriting perspective, undocumented governance is roughly equivalent to no governance. The underwriter cannot verify what they cannot see. The defense attorney cannot reconstruct what was not recorded.

    Documentation is not a compliance formality. In the context of AI insurance, it is the asset.

    Where Ertas Data Suite Fits

    The audit trail, data lineage records, and processing documentation that Ertas Data Suite generates are structured precisely for this purpose. Every data processing operation — ingestion, cleaning, labeling, augmentation — creates an immutable record with timestamps, operator identity, and before/after state. When an underwriter asks for evidence of data governance, or when a regulator asks for audit logs, the documentation exists and is exportable.
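    To make the "immutable record" idea concrete: one common pattern for tamper-evident logs is hash chaining, where each record includes a hash of its predecessor, so editing any past entry breaks verification from that point on. The sketch below illustrates the general pattern only; it is not the actual Ertas record schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_record(log, operation, operator, before, after):
    """Append a tamper-evident processing record; each entry hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operation": operation,   # e.g. "ingestion", "cleaning", "labeling"
        "operator": operator,
        "before": before,
        "after": after,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; an edited record breaks the chain from that point on."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, "cleaning", "analyst-1", {"rows": 1200}, {"rows": 1187})
append_record(log, "labeling", "analyst-2", {"labeled": 0}, {"labeled": 1187})
print(verify_chain(log))          # True
log[0]["after"]["rows"] = 9999    # after-the-fact tampering...
print(verify_chain(log))          # ...is detectable: False
```

A verifiable chain like this is what turns an audit trail from a claim into evidence.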

    The on-premise architecture means your data processing records never leave your environment. That answers the underwriter's question about third-party data transfers before it needs to be asked.

    For professional services firms deploying AI in client-facing workflows, the combination of documented governance processes and air-gapped data handling is what moves you from "higher risk" to "demonstrably managed risk" in an underwriting conversation.

    If you're approaching an insurance renewal and need to assess how your current AI governance posture maps to underwriter expectations, start with the eight questions listed above and document honest answers to each.

    Book a discovery call with Ertas →

    Turn unstructured data into AI-ready datasets — without it leaving the building.

    On-premise data preparation with full audit trail. No data egress. No fragmented toolchains. EU AI Act Article 30 compliance built in.
