    Why Banks Won't Send Transaction Data to ChatGPT (And What They'll Do Instead)
    finance · banking · compliance · chatgpt · data-privacy · on-premise · fine-tuning


    Financial institutions face SOC 2, PCI-DSS, and FINRA constraints that make cloud AI APIs a compliance risk. Fine-tuned models running on-premise are the alternative — here's why and how.

    Ertas Team

    Last month we wrote about why law firms won't send client data to ChatGPT. The response was overwhelming — it clearly hit a nerve.

    The same argument applies to financial services, but the stakes are higher and the regulatory walls are thicker.

    Every major bank, asset manager, insurer, and fintech company is under pressure to deploy AI. Customer expectations are set by consumer AI products. Boards want efficiency gains. Competitors are moving.

    But the compliance reality stops most initiatives dead.

    The Compliance Wall

    Here's what happens when a bank tries to use ChatGPT (or any cloud AI API) for customer-facing work:

    SOC 2 Audit Trail

    Every system that handles customer data needs a documented audit trail. When you send a customer's transaction history to OpenAI's API:

    • Where did the data go? (OpenAI's servers, location varies)
    • Who processed it? (OpenAI's infrastructure)
    • How long was it retained? (depends on OpenAI's data policy)
    • Can you demonstrate control over deletion? (you can't — it's their infrastructure)

    Your auditor will ask these questions. If you don't have satisfactory answers, your SOC 2 certification is at risk.
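The auditor's questions map directly onto fields an audit log must capture. A minimal sketch of an append-only JSON-lines log — the `AuditRecord` fields and `log_inference` helper are illustrative, not a standard:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    """One append-only entry answering the auditor's questions."""
    timestamp: float      # when the data was processed
    system: str           # who processed it (service / host)
    data_location: str    # where the data went
    retention_days: int   # how long it is retained
    deletable: bool       # can you demonstrate control over deletion?

def log_inference(path: str, system: str, data_location: str,
                  retention_days: int, deletable: bool) -> AuditRecord:
    """Append a JSON-lines audit entry for an inference call."""
    rec = AuditRecord(time.time(), system, data_location,
                      retention_days, deletable)
    with open(path, "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec

# With on-premise inference, every field is under your control:
rec = log_inference("audit.jsonl", system="inference-gpu-01.internal",
                    data_location="on-premise", retention_days=2555,
                    deletable=True)
```

The point isn't the logging code — it's that on your own infrastructure, every one of these answers is yours to give.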

    PCI-DSS Scope Expansion

    If any prompt you send to a cloud API contains or references payment card data — even a partial account number, even a transaction summary — that API endpoint enters your PCI scope. Suddenly:

    • The AI vendor needs to be PCI-compliant
    • You need to document the data flow in your PCI compliance artifacts
    • Your QSA (Qualified Security Assessor) needs to assess the additional scope
    • You're paying for a more expensive PCI assessment

    For a bank processing millions of transactions, the PCI scope expansion alone can cost more than the AI implementation saves.
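One common mitigation is redacting card data before anything crosses a system boundary. A minimal sketch that uses a Luhn checksum to avoid masking ordinary long numbers (illustrative only — actual PCI scoping decisions belong to your QSA):

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: weeds out most non-PAN digit runs."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:        # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

PAN_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def redact_pans(text: str) -> str:
    """Mask anything that looks like a payment card number."""
    def mask(m: re.Match) -> str:
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return "[PAN-REDACTED-" + digits[-4:] + "]"
        return m.group()
    return PAN_RE.sub(mask, text)
```

Even with redaction in place, most compliance teams treat the residual risk of a missed PAN as reason enough to keep processing inside the perimeter.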

    FINRA Record-Keeping

    FINRA requires broker-dealers to retain records of customer communications. If an AI model generates a response that influences a customer interaction — a product recommendation, a risk assessment, a compliance summary — that output may need to be retained and auditable.

    Cloud API responses flow through third-party infrastructure. Logging them reliably, retaining them according to regulatory schedules, and producing them during regulatory examinations adds complexity that most compliance teams will reject.
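On your own infrastructure, retention is straightforward to implement. A sketch of a tamper-evident, append-only log of model outputs using a hash chain (illustrative — the retention periods themselves come from the applicable FINRA/SEC rules):

```python
import hashlib
import json
import time

class RetentionLog:
    """Append-only log of AI outputs; each entry hashes the previous
    one, so tampering or deletion is detectable at examination time."""
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev = "0" * 64          # genesis hash

    def record(self, customer_id: str, model_output: str) -> dict:
        entry = {
            "ts": time.time(),
            "customer_id": customer_id,
            "output": model_output,
            "prev_hash": self._prev,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```

A few dozen lines of code you control versus a vendor negotiation you don't — that's the trade most compliance teams will take.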

    The Risk Committee Decision

    In practice, here's how it plays out:

    1. Product team proposes using AI for [specific use case]
    2. Proposal goes to risk/compliance committee
    3. Committee asks: "Where does the data go?"
    4. Answer: "OpenAI's servers"
    5. Committee: "No."

    This conversation happens hundreds of times across financial services every week. The technology works. The compliance doesn't.

    What They're Building Instead

    Financial institutions aren't rejecting AI. They're rejecting cloud AI APIs. The difference matters.

    The alternative: fine-tuned models that run on the institution's own infrastructure.

    The Architecture

    1. Training data (historical transactions, labeled documents, customer interactions) → stays within the institution's data perimeter
    2. Fine-tuning happens on controlled compute (cloud GPUs via a platform like Ertas, or on-premise for the most sensitive institutions)
    3. The trained model (a GGUF file or LoRA adapter) is exported and downloaded
    4. Inference runs on the institution's own hardware — a GPU server in their data center, a Mac in a server room, or a cloud instance in their private VPC
    5. Customer data never leaves the institution's infrastructure during inference
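In application code, step 4 usually means pointing at an OpenAI-compatible endpoint served inside the perimeter (llama.cpp's server and Ollama both expose one). A sketch that builds the request and refuses any non-internal host — the allowlist and model name are assumptions for illustration:

```python
from urllib.parse import urlparse

# Hosts inside the institution's perimeter (assumed for this sketch).
INTERNAL_HOSTS = {"localhost", "inference.bank.internal"}

def build_chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload, but only for endpoints
    inside the data perimeter; anything else is a policy violation."""
    host = urlparse(base_url).hostname
    if host not in INTERNAL_HOSTS:
        raise ValueError(f"refusing to send customer data to {host!r}")
    return {
        "url": base_url.rstrip("/") + "/v1/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request("http://inference.bank.internal:8080",
                         "bank-classifier-8b",
                         "Categorize: WIRE OUT $2,400")
```

Because the wire format matches the cloud APIs, existing application code often needs little more than a base-URL change plus a guard like this one.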

    This architecture satisfies every compliance requirement:

    • SOC 2: Data stays within your certified perimeter
    • PCI-DSS: No scope expansion — processing happens on your infrastructure
    • FINRA: Full control over logging, retention, and auditability
    • GDPR: Data residency requirements met (your data center, your jurisdiction)

    The Economics Work Too

    The compliance argument alone is sufficient for most financial institutions. But the economics reinforce it.

    A mid-size bank processing 500 documents per day:

    Approach                    Monthly cost          Compliance overhead
    GPT-4o API                  $1,500-5,000          High (vendor assessment, BAA, PCI scope)
    Fine-tuned 8B on-premise    $15-30 (electricity)  Minimal (inherits existing compliance)
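The API figure can be sanity-checked with back-of-envelope arithmetic. Every number here is an assumption — per-document call counts, token counts, and per-million-token prices all vary:

```python
# Assumed workload: 500 docs/day, ~5 API calls per document
# (chunking, retries, follow-ups), 5K input / 1K output tokens per call.
DOCS_PER_DAY = 500
CALLS_PER_DOC = 5
IN_TOKENS, OUT_TOKENS = 5_000, 1_000
# Assumed cloud pricing per million tokens (check current rate cards).
IN_PRICE, OUT_PRICE = 2.50, 10.00

def monthly_api_cost(days: int = 30) -> float:
    calls = DOCS_PER_DAY * CALLS_PER_DOC * days
    in_cost = calls * IN_TOKENS / 1_000_000 * IN_PRICE
    out_cost = calls * OUT_TOKENS / 1_000_000 * OUT_PRICE
    return in_cost + out_cost

cost = monthly_api_cost()   # ~ $1,687/month under these assumptions
```

An 8B model serving the same workload on a single owned GPU turns that recurring line item into electricity.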

    The fine-tuned model isn't just cheaper — it often produces better results on domain-specific financial tasks. A model trained on your institution's specific transaction categories, document formats, and regulatory requirements outperforms a generic model that handles everything from poetry to physics.

    The Agency Opportunity

    If you're running an AI agency, financial services is one of the most underserved markets.

    The demand is enormous. Banks, credit unions, asset managers, insurers, and fintech companies all want AI. Most of them are stuck — unable to use cloud APIs, unable to hire ML teams ($300K+ per ML engineer), unable to justify building in-house fine-tuning infrastructure.

    An agency that can:

    1. Understand the compliance requirements (SOC 2, PCI-DSS, FINRA)
    2. Fine-tune models on financial domain data using a visual platform
    3. Deploy on-premise on the client's infrastructure
    4. Deliver per-client LoRA adapters for different use cases

    ...is solving a problem that the client genuinely cannot solve themselves.

    The willingness to pay is higher than in most verticals. Financial institutions budget millions for compliance and technology. An AI deployment that passes compliance review is worth far more to them than the same deployment in a less regulated industry.

    For a detailed breakdown of the financial services agency opportunity, see our market guide.

    We've seen this pattern play out in two other regulated industries:

    Healthcare: HIPAA prevents sending patient data to cloud APIs. Fine-tuned models running on-premise are the solution. Hospitals are deploying them for clinical documentation, coding assistance, and patient communication.

    Legal: Attorney-client privilege prevents sending case data to third-party processors. Law firms are deploying fine-tuned models for contract review, document analysis, and research assistance.

    Financial services is the third leg of this regulated-industry trifecta. Same compliance-driven demand. Same on-premise deployment solution. Same business opportunity for agencies and consultants.

    The common thread: every regulated industry that can't use cloud APIs is a market for fine-tuned, locally-deployed AI models. The teams that learn to build and deploy these models — and navigate the compliance requirements — will own these markets.

    Getting Started

    If you're a financial institution:

    1. Identify your highest-volume, most-repetitive AI use case
    2. Build a training dataset from historical data (200-500 labeled examples)
    3. Fine-tune on Ertas — no ML expertise required
    4. Deploy on your infrastructure
    5. Expand to additional use cases once the first one is validated
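Step 2 above is mostly data wrangling. A sketch of turning labeled historical records into chat-format JSONL, the shape most fine-tuning pipelines accept — the field names, system prompt, and example labels are illustrative:

```python
import json

def to_training_example(transaction_text: str, category: str) -> dict:
    """One labeled record -> one chat-format training example."""
    return {
        "messages": [
            {"role": "system",
             "content": "Categorize the transaction. Reply with the category only."},
            {"role": "user", "content": transaction_text},
            {"role": "assistant", "content": category},
        ]
    }

def write_jsonl(records: list, path: str) -> int:
    """Write (text, label) pairs as JSONL; returns the example count."""
    with open(path, "w") as f:
        for text, label in records:
            f.write(json.dumps(to_training_example(text, label)) + "\n")
    return len(records)

# 200-500 examples like these is a reasonable starting dataset.
labeled = [
    ("ACH DEBIT PAYROLL RUN 0042", "payroll"),
    ("WIRE OUT INTL SWIFT REF 9921", "wire_transfer"),
]
```

Because the labels come from your own historical data, the dataset never has to leave the perimeter to be assembled.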

    If you're an AI agency targeting financial services:

    1. Learn the compliance requirements (SOC 2, PCI-DSS, FINRA basics)
    2. Build a reference deployment you can demo
    3. Lead with compliance, follow with capabilities
    4. Price for the value (compliance-safe AI deployment is worth premium pricing)

    The financial services AI market is waiting for solutions that work within regulatory constraints. Fine-tuned models deployed on-premise are that solution.


