
Why Some Organizations Will Never Be Able to Use OpenAI — and What They Use Instead
For some enterprises, the question isn't whether to use OpenAI but whether they legally can. Here are the organizations that are structurally excluded and what AI infrastructure they use instead.
The current industry conversation about OpenAI's strategic direction, Anthropic's choices, and enterprise AI vendor risk presupposes something that isn't true for everyone: that you get to choose.
A meaningful segment of enterprises — government agencies, clinical systems, industrial operators, certain financial institutions, competitive intelligence functions — has no legal or structural ability to use cloud-based AI providers. Not because of preference. Not because of price. Because of architecture, classification requirements, regulatory mandate, or operational design that predates AI entirely.
For these organizations, the question isn't which AI vendor to use. It's how to build capable AI infrastructure without any external API dependency at all.
Category 1: Classified and ITAR-Regulated Environments
The most absolute exclusion is classification. Systems operating at certain classification levels in government and defense environments — those handling classified national security information — cannot interface with commercial cloud services. Full stop.
This isn't a policy choice that can be waived for a sufficiently capable vendor. It's an architectural requirement. Classified networks are physically separated from unclassified infrastructure. API calls to OpenAI, Anthropic, Google, or any commercial provider aren't possible because the network path doesn't exist and can't be created without violating the classification requirements of the system.
International Traffic in Arms Regulations (ITAR) creates a related constraint for defense contractors and manufacturers working with controlled technical data. ITAR-controlled technology cannot be processed on commercial cloud infrastructure without specific export authorization — authorization that typically isn't available for operational systems. Defense primes, aerospace manufacturers, and certain research institutions working with ITAR data face effective prohibition on commercial cloud AI for data touching controlled technical areas.
The AI infrastructure in these environments: local inference, typically on government-managed or contractor-managed hardware, using models either developed specifically for classified use or fine-tuned open-source models approved through relevant security review processes.
Category 2: Air-Gapped Industrial Systems
Operational technology (OT) networks in manufacturing, energy generation, water treatment, and other critical infrastructure are intentionally disconnected from the internet. This isn't a legacy oversight — it's a deliberate security architecture decision, reinforced by CISA guidance, IEC 62443 standards, and sector-specific regulations.
An AI system that requires internet connectivity to function cannot be deployed in these environments. A natural gas compressor station, a nuclear plant's control systems, a water treatment facility's SCADA network — these systems are air-gapped by design. Any AI supporting operations in these environments runs locally, on hardware on the OT network, with no external dependencies.
The use cases for AI in industrial OT environments are real and growing: predictive maintenance on equipment sensor data, anomaly detection in process control, documentation assistance for operators, historical data analysis. All of it has to run on local infrastructure because that's the only infrastructure available.
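To make the anomaly-detection use case concrete, here is a minimal sketch of the kind of fully offline analysis involved: a rolling z-score detector over sensor readings, using only the Python standard library so it can run on an air-gapped OT host. The window size and threshold are illustrative assumptions, not values from any standard.

```python
import statistics
from collections import deque

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate from the rolling mean of recent
    values by more than `threshold` standard deviations. Pure
    stdlib: no network access, no external dependencies."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) >= 2:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Example: a stable pressure signal with one spike.
signal = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 50.0, 10.0, 9.9]
print(detect_anomalies(signal))  # → [7] (the 50.0 spike)
```

Production systems use far more sophisticated models, but the deployment constraint is the same: everything, from ingestion to alerting, runs on hardware inside the OT network boundary.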
Category 3: Restricted Healthcare Environments
Healthcare's relationship with AI connectivity is more nuanced than the absolute exclusions in categories 1 and 2, but a meaningful subset of clinical environments has connectivity restrictions that effectively exclude commercial cloud AI.
Some clinical information systems operate on isolated networks specifically for cybersecurity reasons — a response to the wave of ransomware attacks on healthcare infrastructure in the early 2020s that caused patient harm. Isolated clinical networks can't make external API calls any more than classified networks can.
Certain government healthcare systems — some military medical facilities, some national health service environments in countries with strict data sovereignty requirements — have explicit restrictions on sending patient data to commercial cloud providers. The NHS in the UK, public health systems in Germany and France, and defense medical systems in multiple countries have varying degrees of restriction on commercial cloud AI use for patient data.
For these environments, AI that works means AI that runs on the clinical network. No exceptions based on vendor capability or contractual protections.
Category 4: Financial Systems with Data Localization Requirements
Not all financial AI exclusions are absolute, but some are. Countries with strict data localization requirements — requiring that financial data be processed on infrastructure physically located within their borders — effectively prohibit the use of US-based cloud AI providers for systems that handle that data.
Russia's Federal Law No. 242-FZ requires personal data of Russian citizens to be stored on servers located in Russia. China's Cybersecurity Law imposes analogous requirements. Several other jurisdictions have enacted or are enacting similar frameworks. Banks and financial institutions operating in these jurisdictions cannot route regulated data from the systems that handle it to US-based AI providers.
The EU's GDPR doesn't create an absolute prohibition but creates significant friction for transfers to US-based providers — friction that some organizations have resolved by limiting data to on-premise or EU-located processing.
For these organizations, local inference isn't a preference — it's the only compliant architecture.
Category 5: Legal Environments with Privilege Constraints
The legal sector's exclusion is practical rather than statutory, but the practical constraint is binding.
Attorney-client privilege and work product doctrine protect communications and analysis that are kept confidential within appropriate boundaries. Bar association guidance in multiple jurisdictions has addressed AI use in legal practice, and while the guidance varies, a common thread is that attorneys using cloud AI for matter-related work need to be able to ensure confidentiality of client information.
Many law firms — particularly in the Am Law 100, Magic Circle firms, and those handling sensitive M&A, litigation, or regulatory matters — have policies prohibiting use of cloud AI platforms for matter-related work. The prohibition isn't based on a specific legal rule. It's based on privilege risk analysis and a cautious reading of bar guidance.
The practical result: attorneys who want to use AI for legal work at these firms use local models on firm-managed infrastructure, or they don't use AI at all for matter-related work.
Category 6: Competition-Sensitive Enterprises
This category doesn't face a legal prohibition but has reached the same conclusion through risk analysis: some data is too competitively sensitive to send to any external system.
M&A advisors working on undisclosed transactions. Patent attorneys with client invention disclosures. Defense primes with proprietary system designs not covered by ITAR but still competitively critical. Investment managers with proprietary trading strategy data. These organizations have determined that the risk of processing their most sensitive data on commercial cloud infrastructure — even with strong contractual protections — is unacceptable.
The calculation isn't about vendor trustworthiness. It's about attack surface. Data processed on an external system is exposed to that system's security posture, the systems of sub-processors, and the security operations of everyone with administrative access. For data where a breach would be catastrophic, minimizing the exposure surface means keeping processing local.
What These Organizations Use Instead
The local AI stack for these organizations has the same components across categories:
Open-source foundation models: Llama 3.3, Qwen 2.5, Mistral, and Gemma are capable foundation models available under open licenses. They can be downloaded, reviewed, and deployed on private infrastructure with no ongoing vendor relationship required.
Fine-tuning on domain data: General-purpose models get domain-specialized through fine-tuning on organization-specific data — clinical notes, legal documents, technical manuals, financial records. Fine-tuned 7B models can reach 90-95% accuracy on narrow domain tasks, matching or exceeding GPT-4-class performance on the specific tasks that matter.
Local inference infrastructure: Ollama, llama.cpp, and vLLM run on-premise on hardware ranging from high-end workstations to dedicated GPU servers. No internet connection required for inference.
On-premise data preparation tools: Getting data ready for fine-tuning — ingestion, cleaning, labeling, augmentation, format conversion — requires tooling. Cloud-based data prep tools aren't an option for these organizations, which means they need desktop or on-premise solutions.
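To make the data-preparation output concrete, here is a minimal, hypothetical sketch of the final stage of such a pipeline: cleaning raw text records and writing them as instruction-tuning JSONL, entirely on the local filesystem. The field names follow the common instruction/response convention, not any specific vendor's format.

```python
import json
import re
from pathlib import Path

def clean(text):
    """Strip control characters and collapse whitespace —
    a stand-in for a real cleaning stage."""
    text = re.sub(r"[\x00-\x08\x0b-\x1f]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def export_jsonl(records, path):
    """Write (instruction, response) pairs as one JSON object per
    line — the format most local fine-tuning tools accept.
    Everything stays on the local filesystem: no data egress."""
    with open(path, "w", encoding="utf-8") as f:
        for instruction, response in records:
            row = {"instruction": clean(instruction),
                   "response": clean(response)}
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

records = [("Summarize the maintenance log.",
            "Pump 3 showed  elevated\tvibration...")]
export_jsonl(records, "train.jsonl")
print(Path("train.jsonl").read_text(encoding="utf-8").strip())
```

The point of the sketch is the boundary, not the code: every step reads from and writes to local disk, so nothing in the workflow assumes a network path exists.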
The Ertas Architecture for Excluded Organizations
Ertas Data Suite was designed explicitly for the structural exclusion cases. It's a native desktop application built with Tauri 2.0 — not a web application that requires a server, not a cloud service with an on-premise option, but actual desktop software that installs and runs like enterprise software.
The complete data preparation pipeline runs locally: Ingest, Clean, Label, Augment, and Export — the full workflow from raw data to fine-tuning-ready dataset, entirely on the user's machine. No data egress. No network requirement. Air-gapped operation as a first-class design requirement, not an afterthought. Full audit trail built in.
For the fine-tuning step, Ertas Fine-Tuning SaaS provides an option that works even for some of the constrained environments: bring your dataset (which you've prepared locally), upload it for a fine-tuning run on cloud GPUs, download the resulting GGUF weights, then run inference entirely locally. Your data is only in the cloud during the training run, not in persistent storage, and the resulting model runs on your infrastructure with no ongoing cloud dependency.
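Before downloaded weights are moved onto isolated infrastructure, verifying them against a published checksum is standard practice. A minimal sketch, in which the filename and expected digest are placeholders rather than real artifacts:

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 in chunks so multi-GB
    GGUF weights don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path, expected_hex):
    """Compare against the checksum published alongside the
    fine-tuned artifact; refuse to deploy on mismatch."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch for {path}: {actual}")
    return True

# Placeholder usage — in practice the expected digest comes from
# the artifact's release notes or an internal registry.
Path("model.gguf").write_bytes(b"example weights")
expected = hashlib.sha256(b"example weights").hexdigest()
print(verify_weights("model.gguf", expected))  # True
```

For air-gapped destinations, the verification typically happens on a transfer host before the weights cross the boundary on approved removable media.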
For organizations where even that cloud training step isn't acceptable — certain classified environments, strictly air-gapped operations — the weights can be obtained without it: pre-trained open-source weights downloaded through approved channels, fine-tuning runs on private GPU infrastructure, or procurement of already-fine-tuned model artifacts through the same review processes that govern other software.
The Broader Point
The organizations structurally excluded from commercial cloud AI are not edge cases. They represent a significant fraction of economic activity: government, defense, healthcare, financial services, legal, and competition-sensitive enterprise. Many of them have been waiting for local AI infrastructure to reach the capability level where it solves their actual problems.
That threshold has been crossed. A fine-tuned 7B model on a local GPU server is no longer a capability compromise for most production enterprise use cases. It's a viable alternative — often a better one for narrow, well-defined tasks — that happens to also satisfy the connectivity, data governance, and operational constraints these organizations face.
The organizations that "can't use OpenAI" are finding that they don't need to.
See early bird pricing for Ertas Fine-Tuning SaaS →
Turn unstructured data into AI-ready datasets — without it leaving the building.
On-premise data preparation with full audit trail. No data egress. No fragmented toolchains. EU AI Act Article 30 compliance built in.