
Why Banks Won't Send Transaction Data to ChatGPT (And What They'll Do Instead)
Financial institutions face SOC 2, PCI-DSS, and FINRA constraints that make cloud AI APIs a compliance risk. Fine-tuned models running on-premise are the alternative — here's why and how.
Last month we wrote about why law firms won't send client data to ChatGPT. The response was overwhelming — it clearly hit a nerve.
The same argument applies to financial services, but the stakes are higher and the regulatory walls are thicker.
Every major bank, asset manager, insurer, and fintech company is under pressure to deploy AI. Customer expectations are set by consumer AI products. Boards want efficiency gains. Competitors are moving.
But the compliance reality stops most initiatives dead.
The Compliance Wall
Here's what happens when a bank tries to use ChatGPT (or any cloud AI API) for customer-facing work:
SOC 2 Audit Trail
Every system that handles customer data needs a documented audit trail. When you send a customer's transaction history to OpenAI's API:
- Where did the data go? (OpenAI's servers, location varies)
- Who processed it? (OpenAI's infrastructure)
- How long was it retained? (depends on OpenAI's data policy)
- Can you demonstrate control over deletion? (you can't — it's their infrastructure)
Your auditor will ask these questions. If you don't have satisfactory answers, your SOC 2 certification is at risk.
PCI-DSS Scope Expansion
If any prompt you send to a cloud API contains or references payment card data — even a partial account number, even a transaction summary — that API endpoint enters your PCI scope. Suddenly:
- The AI vendor needs to be PCI-compliant
- You need to document the data flow in your PCI compliance artifacts
- Your QSA (Qualified Security Assessor) needs to assess the additional scope
- You're paying for a more expensive PCI assessment
For a bank processing millions of transactions, the PCI scope expansion alone can cost more than the AI implementation saves.
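To see why scope creep is so easy, here's a minimal sketch of the kind of pre-flight check a gateway might run before any prompt leaves the building. The function names and regex are illustrative, and a real DLP control would be far more thorough:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to distinguish real card numbers from random digit runs."""
    total, alt = 0, False
    for d in reversed(digits):
        n = int(d)
        if alt:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        alt = not alt
    return total % 10 == 0

# 13-19 digits, optionally separated by spaces or dashes
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def contains_pan(prompt: str) -> bool:
    """Return True if the prompt contains a plausible payment card number."""
    for match in PAN_PATTERN.finditer(prompt):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False

# 4111 1111 1111 1111 is a well-known test card number that passes Luhn
print(contains_pan("Customer paid with card 4111 1111 1111 1111"))  # True
print(contains_pan("Transaction total was $1,234.56"))              # False
```

The Luhn check matters: without it, any 16-digit reference number would trigger a false positive. But even with a check like this in front of the API, your QSA still has to assess the control itself — which is exactly the scope expansion problem.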
FINRA Record-Keeping
FINRA requires broker-dealers to retain records of customer communications. If an AI model generates a response that influences a customer interaction — a product recommendation, a risk assessment, a compliance summary — that output may need to be retained and auditable.
Cloud API responses flow through third-party infrastructure. Logging them reliably, retaining them according to regulatory schedules, and producing them during regulatory examinations adds complexity that most compliance teams will reject.
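On your own infrastructure, by contrast, retention becomes ordinary engineering. A minimal sketch of what an audit record might look like — the schema is hypothetical and the six-year horizon is illustrative, not legal advice:

```python
import json
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION_YEARS = 6  # illustrative books-and-records horizon

def log_ai_interaction(prompt: str, response: str, customer_id: str) -> dict:
    """Build an audit record for an AI-generated customer communication.

    Hypothetical schema — the point is that on your own infrastructure
    you control what is logged, how long it is kept, and how it is
    produced during an examination.
    """
    now = datetime.now(timezone.utc)
    return {
        "customer_id": customer_id,
        "timestamp": now.isoformat(),
        "retain_until": (now + timedelta(days=365 * RETENTION_YEARS)).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response": response,
    }

rec = log_ai_interaction("Summarize account activity", "Your account ...", "C-1042")
# In production this record would go to WORM storage; here we just serialize it.
print(json.dumps(rec, indent=2))
```

When the model, the logs, and the storage all sit inside your perimeter, "can you produce these records on demand?" has a straightforward answer.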
The Risk Committee Decision
In practice, here's how it plays out:
- Product team proposes using AI for [specific use case]
- Proposal goes to risk/compliance committee
- Committee asks: "Where does the data go?"
- Answer: "OpenAI's servers"
- Committee: "No."
This conversation happens hundreds of times across financial services every week. The technology works. The compliance doesn't.
What They're Building Instead
Financial institutions aren't rejecting AI. They're rejecting cloud AI APIs. The difference matters.
The alternative: fine-tuned models that run on the institution's own infrastructure.
The Architecture
- Training data (historical transactions, labeled documents, customer interactions) → stays within the institution's data perimeter
- Fine-tuning happens on controlled compute (cloud GPUs via a platform like Ertas, or on-premise for the most sensitive institutions)
- The trained model (a GGUF file or LoRA adapter) is exported and downloaded
- Inference runs on the institution's own hardware — a GPU server in their data center, a Mac in a server room, or a cloud instance in their private VPC
- Customer data never leaves the institution's infrastructure during inference
This architecture satisfies every compliance requirement:
- SOC 2: Data stays within your certified perimeter
- PCI-DSS: No scope expansion — processing happens on your infrastructure
- FINRA: Full control over logging, retention, and auditability
- GDPR: Data residency requirements met (your data center, your jurisdiction)
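The last property — data never leaving the perimeter during inference — can even be enforced mechanically. A minimal sketch of an egress guard, using the standard RFC 1918 private ranges as an illustrative stand-in for the institution's real CIDR blocks:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Illustrative perimeter: RFC 1918 private ranges plus loopback.
# In practice this would be the institution's own VPC/data-center CIDRs.
PERIMETER = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),
]

def inside_perimeter(url: str) -> bool:
    """Resolve the inference endpoint and check it sits inside the perimeter."""
    host = urlparse(url).hostname
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    return any(addr in net for net in PERIMETER)

# A local inference server passes; a public address does not.
print(inside_perimeter("http://127.0.0.1:8080/v1/completions"))  # True
print(inside_perimeter("http://8.8.8.8/v1/completions"))         # False
```

A guard like this in the inference client turns "data stays inside" from a policy statement into a testable invariant — the kind of thing auditors like to see.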
The Economics Work Too
The compliance argument alone is sufficient for most financial institutions. But the economics reinforce it.
A mid-size bank processing 500 documents per day:
| Approach | Monthly cost | Compliance overhead |
|---|---|---|
| GPT-4o API | $1,500-5,000 | High (vendor assessment, data processing agreement, PCI scope) |
| Fine-tuned 8B on-premise | $15-30 in electricity | Minimal (inherits existing compliance) |
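The table's numbers are easy to sanity-check with a back-of-envelope calculation. Token counts, list prices, and power draw below are all assumptions, not measurements:

```python
# Rough reproduction of the table's ranges (all figures illustrative).
docs_per_day = 500
days_per_month = 30

# Cloud API: assume long documents, ~30K input + 3K output tokens each,
# at assumed GPT-4o-class list prices of $2.50 / 1M input, $10 / 1M output.
tokens_in, tokens_out = 30_000, 3_000
api_monthly = docs_per_day * days_per_month * (
    tokens_in / 1e6 * 2.50 + tokens_out / 1e6 * 10.00
)

# On-premise 8B model: assume ~250 W average server draw at $0.15/kWh.
avg_watts, price_per_kwh = 250, 0.15
onprem_monthly = avg_watts / 1000 * 24 * days_per_month * price_per_kwh

print(f"API:     ~${api_monthly:,.0f}/mo")
print(f"On-prem: ~${onprem_monthly:,.0f}/mo")
```

Change the assumptions and the absolute numbers move, but the two-orders-of-magnitude gap is robust: per-token pricing scales with volume, electricity barely does.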
The fine-tuned model isn't just cheaper — it often produces better results on domain-specific financial tasks. A model trained on your institution's specific transaction categories, document formats, and regulatory requirements outperforms a generic model that handles everything from poetry to physics.
The Agency Opportunity
If you're running an AI agency, financial services is one of the most underserved markets.
The demand is enormous. Banks, credit unions, asset managers, insurers, and fintech companies all want AI. Most of them are stuck — unable to use cloud APIs, unable to hire ML teams ($300K+ per ML engineer), unable to justify building in-house fine-tuning infrastructure.
An agency that can:
- Understand the compliance requirements (SOC 2, PCI-DSS, FINRA)
- Fine-tune models on financial domain data using a visual platform
- Deploy on-premise on the client's infrastructure
- Deliver per-client LoRA adapters for different use cases
...is solving a problem that the client genuinely cannot solve themselves.
The willingness to pay is higher than in most verticals. Financial institutions budget millions for compliance and technology. An AI deployment that passes compliance review is worth far more to them than the same deployment in a less regulated industry.
For a detailed breakdown of the financial services agency opportunity, see our market guide.
The Parallel to Healthcare and Legal
We've seen this pattern play out in two other regulated industries:
Healthcare: HIPAA prevents sending patient data to cloud APIs. Fine-tuned models running on-premise are the solution. Hospitals are deploying them for clinical documentation, coding assistance, and patient communication.
Legal: Attorney-client privilege prevents sending case data to third-party processors. Law firms are deploying fine-tuned models for contract review, document analysis, and research assistance.
Financial services is the third leg of this regulated-industry trifecta. Same compliance-driven demand. Same on-premise deployment solution. Same business opportunity for agencies and consultants.
The common thread: every regulated industry that can't use cloud APIs is a market for fine-tuned, locally-deployed AI models. The teams that learn to build and deploy these models — and navigate the compliance requirements — will own these markets.
Getting Started
If you're a financial institution:
- Identify your highest-volume, most-repetitive AI use case
- Build a training dataset from historical data (200-500 labeled examples)
- Fine-tune on Ertas — no ML expertise required
- Deploy on your infrastructure
- Expand to additional use cases once the first one is validated
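The training dataset in step 2 is usually just a JSONL file: one labeled example per line. A minimal sketch with invented transaction strings and categories — the schema is illustrative, not a requirement of any particular tool:

```python
import json

# Hypothetical labeled examples drawn from historical transaction data.
# Field names and categories are illustrative.
examples = [
    {"text": "ACH DEBIT PAYROLL RUN 0425", "label": "payroll"},
    {"text": "WIRE OUT INTL SWIFT REF 88213", "label": "wire_transfer"},
    {"text": "POS PURCHASE COFFEE SHOP #12", "label": "card_purchase"},
]

# JSONL: one JSON object per line, the format most fine-tuning tools accept.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity-check: every line parses and carries both fields.
with open("train.jsonl") as f:
    rows = [json.loads(line) for line in f]
assert all({"text", "label"} <= row.keys() for row in rows)
print(f"{len(rows)} examples written")  # 3 examples written
```

At 200-500 examples, this file can often be assembled by a domain expert in days, not by an ML team in months — which is the whole point of starting with your highest-volume, most-repetitive use case.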
If you're an AI agency targeting financial services:
- Learn the compliance requirements (SOC 2, PCI-DSS, FINRA basics)
- Build a reference deployment you can demo
- Lead with compliance, follow with capabilities
- Price for the value (compliance-safe AI deployment is worth premium pricing)
The financial services AI market is waiting for solutions that work within regulatory constraints. Fine-tuned models deployed on-premise are that solution.