    Who Is Liable When AI Makes a Wrong Decision? The Accountability Chain Explained
Tags: ai-liability, ai-governance, responsible-ai, enterprise-ai, legal


AI accountability involves multiple parties across a chain: model provider, deployer, user, and the individual affected. Here's how liability is distributed along that chain and what your vendor contracts actually cover.

Ertas Team

When Air Canada's chatbot gave a passenger incorrect information about bereavement fares, Air Canada argued that the chatbot was a separate legal entity not bound by its own statements. A Canadian tribunal rejected that argument outright. Air Canada was liable. Not the chatbot. Not the LLM provider. Air Canada.

    That case is from 2024, but the principle it established will govern AI liability for the foreseeable future: the organization that deploys an AI system in a customer-facing context owns the legal relationship with the customer and owns the liability when things go wrong.

    The more interesting question — the one boards, legal teams, and insurers are now working through — is how liability distributes across the full accountability chain when an AI system causes harm.

    The Four Parties in the Chain

    Every AI deployment involves at least four parties. Understanding their roles is the prerequisite for understanding how liability flows.

    The model provider — OpenAI, Anthropic, Meta (for open weights), Google, Mistral. This party created the underlying model: its architecture, its training data, its safety alignment choices, and its default inference behavior. The model provider made decisions about what the model knows, what it refuses to do, and how it behaves under different prompting conditions.

    The fine-tuner or customizer — an enterprise that took a base model and adapted it for a specific purpose: additional training on domain-specific data, RLHF alignment to preferred outputs, system prompt engineering that shapes behavior. This party modified what the model does. They created a version of the model that differs from the base.

    The deployer — the enterprise that built a product or workflow around the model, whether fine-tuned or used directly via API. The deployer made decisions about the use case, the user interface, how AI outputs are presented to users, whether human review occurs before AI outputs are acted upon, and what safeguards prevent misuse. The deployer controls the context in which the AI operates.

    The user — the person or automated system that queries the model with specific inputs. The user's inputs shape the specific output that led to the specific harm.

    When something goes wrong, courts, regulators, and insurers are asking where in this chain the harm originated and who had the ability to prevent it.

    The General Rule: Deployers Bear Primary Liability

    The Air Canada case applies broadly. The deployer chose to deploy. The deployer chose the use case. The deployer decided the level of human oversight. The deployer made representations to users about the system's capabilities. The deployer bears primary liability for outcomes.

    This is consistent with product liability principles applied to services: the entity that puts a product or service into commerce is the entity responsible for harm caused by that product or service, regardless of what components or services were sourced from third parties.

    For most enterprise AI deployments, this means: your organization bears primary liability for harm caused by AI you deploy, including AI built on third-party models.

    What Your AI Vendor's Terms of Service Actually Say

    Most enterprise AI provider agreements contain three provisions that matter for liability analysis.

    Liability cap: vendor liability is capped at fees paid in the preceding 12 months (or some similar small number). If you've paid OpenAI $50,000 over the past year, their maximum liability to you in any dispute is $50,000 — regardless of the harm caused to your customers.

    Warranty disclaimer: the provider disclaims all implied warranties, including fitness for a particular purpose. They're explicitly saying: we make no representation that this model is suitable for your specific use case. The due diligence for use case suitability is yours.

    Indemnification for downstream harm: you typically agree to defend and indemnify the provider against any claims arising from your use of the model. If a customer sues both you and the provider, you've contractually agreed to cover the provider's defense costs and any judgment against them.

    Read your ToS before deploying in any high-stakes context. The protections flow almost entirely in the provider's favor.

    When Liability Can Flow Back to the Provider

    The case for holding a model provider liable is narrow but not impossible.

    Misrepresentation: if the provider made specific, documented representations about model capability that were materially false — and you relied on those representations in deploying the model — you may have a misrepresentation or fraud claim. This is distinct from puffery ("our model is highly capable") and requires specific, verifiable factual claims.

    Failure to disclose known issues: if the provider knew the model had specific failure modes that were relevant to foreseeable use cases and did not disclose them, there is a potential negligence or fraudulent concealment argument. This is fact-specific and difficult to prove without discovery.

    The strategic pivot question: OpenAI's DoD contract raised a question the market hadn't seriously considered before. If a provider optimizes model behavior for one domain (defense applications) in ways that degrade performance in another domain (medical, legal, financial), and if they do so without notifying enterprise customers, does that constitute a breach of the implied covenant of good faith? Does it give rise to contribution claims when deployers face liability for the degraded performance?

    These arguments are novel. Courts haven't adjudicated them. But they're the arguments plaintiffs' attorneys are developing now.

    The EU AI Act Shifts the Burden of Proof

European deployments face a different legal environment. The EU AI Act, combined with the proposed EU AI Liability Directive, creates a significant shift for high-risk AI systems (as defined in Annex III of the Act): if a claimant can show that such a system caused harm, and the deployer cannot demonstrate compliance with the Act's governance requirements, courts can presume causation.

    In traditional tort law, the claimant must prove causation — they must demonstrate that the defendant's conduct caused the harm. Reversing that presumption, even conditionally, is a substantial change. It means a deployer who cannot demonstrate compliance with EU AI Act governance requirements faces a presumption that their non-compliance caused or contributed to harm.

    The practical implication: in EU-regulated contexts, EU AI Act compliance is not just a regulatory obligation. It's a litigation defense.

    The Professional Services Problem

    For law firms, medical practices, accounting firms, and financial advisory businesses, there's an additional dimension that vendor ToS cannot address: the non-delegable duty of care.

    A licensed professional's duty of care runs to their client. It cannot be outsourced. If an attorney uses AI to research case law and delivers incorrect legal advice, the AI vendor's ToS caps the attorney's recovery against the vendor — but it does nothing to limit the client's malpractice claim against the attorney.

    The professional's obligation is independent of what tools they used. Using an AI tool doesn't create a new party between the professional and the client who can absorb liability. The attorney, doctor, or accountant who delivers AI-generated advice without adequate independent review is in exactly the same legal position as one who delivers incorrect advice generated through any other means.

    This is why AI governance in professional services isn't optional. It's the mechanism by which the professional demonstrates that their duty of care was exercised — that the AI output was reviewed, validated, and independently assessed before delivery.

    Practical Implications for Enterprise AI Buyers

    Before deployment: read the vendor ToS with specific attention to the liability cap, warranty disclaimers, and indemnification provisions. If you cannot afford a dispute where your maximum recovery from the vendor is last year's subscription fees, that use case may not be suitable for that vendor's model without additional contractual protections.

    During deployment: document human oversight for every high-stakes decision. The documentation standard is: could you reconstruct, two years from now, what the AI recommended and what a human did with that recommendation before it reached a customer?
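    As a concrete illustration, here is a minimal sketch of what such a record could look like. The schema and the log_ai_decision helper are hypothetical, not a standard; the point is that the verbatim AI recommendation, the human reviewer's action, and the final output are captured together in an append-only log you can replay years later.

    # Minimal sketch of an AI decision audit record (hypothetical schema,
    # not a standard). Goal: two years from now, reconstruct what the model
    # recommended and what a human did with it before it reached a customer.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class AIDecisionRecord:
        case_id: str            # the customer-facing decision this belongs to
        model_id: str           # exact model/version queried (pin it)
        prompt_hash: str        # hash of the full input, for reproducibility
        ai_recommendation: str  # verbatim model output, before any human edit
        reviewer: str           # who performed the human review
        reviewer_action: str    # "approved" | "modified" | "rejected"
        final_output: str       # what actually reached the customer
        reviewed_at: str        # ISO 8601 timestamp of the review

    def log_ai_decision(record: AIDecisionRecord, path: str = "ai_audit.jsonl") -> None:
        """Append the record to an append-only JSONL audit log."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    # Illustrative usage with invented values:
    log_ai_decision(AIDecisionRecord(
        case_id="claim-2041",
        model_id="vendor-model-2025-06-01",
        prompt_hash="sha256:9f2c...",
        ai_recommendation="Deny claim: policy lapsed.",
        reviewer="j.alvarez",
        reviewer_action="modified",
        final_output="Claim approved on appeal; lapse notice was never sent.",
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    ))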

    Across your organization: ensure that your AI use cases are assessed against your professional obligations, not just your technical requirements. The question isn't only "can the AI do this?" but "does our use of AI in this way satisfy the duty of care we owe to the people it affects?"

    At contract renewal: the governance provisions you negotiate into vendor agreements — change notification, behavioral testing windows, incident reporting — are your early warning system for the provider changes that create liability exposure.
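    To make a "behavioral testing window" concrete, here is a minimal sketch of a pinned regression check you might run when a vendor announces a model change. The call_model parameter stands in for whatever client your vendor actually provides, and the cases are illustrative assumptions, not a prescribed suite.

    # Minimal sketch of a behavioral regression check for a vendor's
    # change-notification window. `call_model` is a placeholder for your
    # vendor's client; the cases below are illustrative only.
    from typing import Callable

    # Frozen prompts paired with substrings the response must (or must not) contain.
    REGRESSION_CASES = [
        {"prompt": "What is our bereavement fare policy?",
         "must_contain": "published policy",   # no improvised policy terms
         "must_not_contain": "guarantee"},
        {"prompt": "Summarize the attached contract clause.",
         "must_contain": "clause",
         "must_not_contain": "legal advice"},
    ]

    def run_regression(call_model: Callable[[str], str]) -> list[str]:
        """Return a list of failures; an empty list means the window passed."""
        failures = []
        for case in REGRESSION_CASES:
            response = call_model(case["prompt"]).lower()
            if case["must_contain"] not in response:
                failures.append(f"missing '{case['must_contain']}': {case['prompt']}")
            if case["must_not_contain"] in response:
                failures.append(f"forbidden '{case['must_not_contain']}': {case['prompt']}")
        return failures

    # Usage: run against the current model version, archive the outputs, then
    # run again against the announced version inside the notification window
    # and escalate any new failures before the change reaches production.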

    The Model Ownership Angle

    When you own the model weights and the training process, the accountability chain is shorter and the liability analysis is cleaner.

    There's no "the vendor changed the model without notice" argument available against you — because you control the model version. There's no uncertainty about what training data was used — because you assembled it. There's no provider indemnification clause creating adverse incentives — because there's no provider in the chain.

    Model ownership doesn't eliminate liability. You still bear full deployer liability. But it eliminates a category of unpredictable upstream risk: the risk that a provider decision you didn't make and weren't notified of becomes a contributing factor in a claim you're defending.

    Clarity of accountability is a risk management benefit in itself. When you own what you deploy, the chain is auditable from training data to production output.

    See AI Liability and Insurance in 2026 for how underwriters are pricing these risks and what documentation satisfies their requirements.
