vendor-risk · openai · ai-governance · enterprise-ai · model-ownership

    When Your AI Vendor Makes a Geopolitical Decision: What Enterprise Buyers Need to Know

    OpenAI is now a defense contractor. Anthropic walked away. These are geopolitical decisions with operational consequences for every enterprise that depends on these models.

Ertas Team

    In early 2026, OpenAI signed a contract with the US Department of Defense. Anthropic was offered a similar arrangement and declined, citing concerns about deploying AI in contexts involving lethal force decisions.

    Two vendors. Two different choices. Both decisions now live inside the enterprise stacks that depend on their models.

    This isn't about taking a political position on defense AI. It's about something more operational: foundation model providers are making geopolitical decisions, and those decisions have direct consequences for every enterprise buyer who depends on their models. Understanding the mechanism — not just the headline — is what your procurement and risk teams need to work through.

    How Geopolitics Gets Into Your Model

With cloud infrastructure, compute is largely fungible. If your AWS region goes down, you fail over to another region or another cloud. The physical location of the server doesn't change what the compute does.

    AI models work differently. The training and fine-tuning process encodes priorities into model behavior directly. What the model is trained on, what it's reinforced for, what it's penalized for — these choices shape what the model does in production. They're not adjustable at inference time. They're baked in.

When an AI vendor signs a significant contract with a government or military body, the implications aren't just about who they're selling to. They're about what the model gets trained and optimized for over time. Defense-use-case optimization tends to favor different output characteristics than commercial-use-case optimization. These tradeoffs are real, and they propagate into every downstream deployment.

    This is the mechanism that makes vendor geopolitical decisions different from, say, a SaaS vendor choosing a new data center location.

    The OpenAI/DoD Case: What It Signals

    OpenAI's entry into defense contracting tells enterprise buyers several concrete things:

    Customer mix is shifting. When a major AI vendor's customer mix includes the US Department of Defense, their product and training priorities are influenced by that customer's needs. This is how vendor relationships work in every industry. It's not corruption — it's business. But it's a signal about where their engineering and training investment will be directed.

    Capability restrictions may follow. Government contracts in sensitive areas often include provisions about what capabilities can be made available to non-government customers, how certain outputs are handled, and what modifications are permissible. The exact terms of the OpenAI/DoD contract aren't public. But the existence of such constraints is a standard feature of defense technology contracts.

    Regulatory environment changes. A vendor operating in defense markets faces a different regulatory posture than a purely commercial AI provider. Export controls, ITAR compliance, classification handling — these requirements don't disappear at the boundary between defense and commercial product lines. They shape how the organization operates.

    None of this means OpenAI's commercial products are now inappropriate for enterprise use. It means the risk profile has changed and enterprise buyers should update their vendor assessment accordingly.

    The Anthropic Choice: A Different Risk Profile

    Anthropic's decision to decline a similar arrangement is also a signal — a different one.

    By passing on defense contracting, Anthropic is signaling that their training priorities remain oriented toward commercial and safety-focused use cases. Their Constitutional AI approach continues to be the primary lens through which training decisions are made, rather than being balanced against defense use case requirements.

    For enterprise buyers in commercial sectors — healthcare, financial services, legal, retail — this is a relevant data point. It suggests Anthropic's model development priorities are more closely aligned with the use cases those enterprises care about.

    For enterprise buyers who are themselves government or defense adjacent, the picture is more nuanced. OpenAI's defense engagement may actually represent better strategic alignment for their use cases.

    The point isn't that one choice is right and the other wrong. The point is that these are substantive decisions with real implications for how models behave, and they deserve to be part of your vendor evaluation — not as political positions, but as operational risk factors.

    This Isn't New — It's Just Visible

Vendor geopolitical exposure isn't a new phenomenon in enterprise technology. It's just harder to see with AI than with other technology categories.

    Tech companies have made decisions about operating in countries with authoritarian governments. Cloud providers face data localization requirements from governments in the EU, China, Russia, and dozens of other jurisdictions — decisions about where data can reside and what governments can access it. Software companies have made choices about selling to certain governments, defense sectors, or surveillance applications.

    What's different with foundation AI models is the training-behavior link. When a cloud provider decides to operate a data center in a jurisdiction with government access requirements, the consequence is data access. When an AI model provider makes decisions about training priorities influenced by government relationships, the consequence is model behavior — what the model does in your production deployment.

    That's a more direct and less visible impact.

    Six Questions to Ask Your AI Vendor

Most enterprise vendor questionnaires don't include questions about geopolitical exposure. They should. Here's a practical starting set, with a machine-readable sketch after the list:

    1. Who are your five largest customers by revenue? Vendors don't always answer this, but the question itself signals that you're paying attention to customer mix.

    2. Have you entered, or are you exploring, contracts with government bodies or military organizations? Ask this directly. Public announcements don't always make it into procurement awareness.

    3. How does your government and defense work affect your commercial product training and development priorities? This is the key question. A vendor with a thoughtful answer has actually considered the separation. A vendor with a vague answer probably hasn't.

    4. Are there capability restrictions on your commercial products that result from government contracts or regulatory requirements? You want to know if there are things the model won't do for commercial customers because of obligations to other customers.

    5. What is your policy on government requests for access to customer data or model outputs? This covers law enforcement access, not just procurement contracts.

    6. If your strategic priorities change significantly in the next 24 months, how would you notify commercial customers and what transition support would you provide? This is the exit question. It forces the vendor to think through the commercial impact of strategic pivots.
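If your procurement process is tooled, the six questions above can travel as data rather than prose. Here's a minimal sketch, assuming nothing beyond the Python standard library; the field names and the `open_items` helper are illustrative, not any standard:

```python
from dataclasses import dataclass

@dataclass
class VendorQuestion:
    """One geopolitical-exposure item in a vendor assessment."""
    question: str
    rationale: str
    answer: str = ""                   # vendor's response, filled in during review
    satisfactory: bool | None = None   # reviewer's verdict; None = not yet assessed

GEOPOLITICAL_EXPOSURE = [
    VendorQuestion("Who are your five largest customers by revenue?",
                   "Signals you are watching customer mix."),
    VendorQuestion("Have you entered, or are you exploring, government or military contracts?",
                   "Public announcements don't always reach procurement."),
    VendorQuestion("How does government/defense work affect commercial training priorities?",
                   "Vague answers mean the separation hasn't been designed."),
    VendorQuestion("Are there capability restrictions that result from government contracts?",
                   "Obligations to other customers can limit what you get."),
    VendorQuestion("What is your policy on government requests for customer data or outputs?",
                   "Covers law enforcement access, not just procurement."),
    VendorQuestion("How would you notify customers of a strategic pivot, and with what transition support?",
                   "The exit question."),
]

def open_items(questionnaire: list[VendorQuestion]) -> list[str]:
    """Questions still unanswered or answered unsatisfactorily."""
    return [q.question for q in questionnaire if q.satisfactory is not True]
```

The point of encoding it as data is mundane: unresolved items stay visible in the risk register instead of dying in an email thread.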

    The Model Ownership Escape

Here's what's practically useful about this analysis: if you fine-tune on open-weight base models and own the weights, vendor geopolitical decisions stop affecting your production AI behavior.

Llama 3.3, Qwen 2.5, Mistral, and Gemma are open-weight foundation models. They're already trained and released; their training decisions are made and, to varying degrees, publicly documented. When you fine-tune one of these models on your domain data and export the weights, what you have is yours. No subsequent decision by Meta, Alibaba, Mistral AI, or Google changes what your fine-tuned model does in production.

    OpenAI signing a DoD contract has zero effect on a GGUF file sitting on your inference server.
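That claim is easy to verify in code. Below is a minimal sketch of the deployment posture, assuming the llama-cpp-python bindings and a hypothetical fine-tuned model file; nothing in it makes a network call, so no upstream vendor decision can change what it returns:

```python
# Inference over weights you own: no API key, no network dependency,
# no exposure to any vendor's strategic decisions after export.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./models/client-acme-ft.gguf",  # hypothetical fine-tuned export
    n_ctx=4096,       # context window
    verbose=False,
)

result = llm(
    "Summarise the indemnity clause in plain English:\n<clause text>",
    max_tokens=256,
    temperature=0.2,  # low temperature for consistent production behavior
)
print(result["choices"][0]["text"])
```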

    That's not an argument for never using frontier model APIs. Those APIs are genuinely useful, especially for development, exploration, and tasks that require capabilities beyond what smaller fine-tuned models can match. But for the production workloads where behavioral consistency and vendor independence matter, owning the weights is the only complete solution to geopolitical vendor risk.

The cost case makes this concrete. An agency running client work on GPT-4-class APIs at AU$4,200/month is fully exposed to every strategic decision OpenAI makes. The same workload on per-client fine-tuned local models runs at AU$14.50/month (roughly 290 times less) — and is exposed to none of them.

    Making the Assessment

    Enterprise buyers don't need to take a political stance on AI and defense. What they need is to treat vendor geopolitical decisions as operational risk factors and evaluate them the same way they'd evaluate any other vendor risk: with a framework, specific questions, documented conclusions, and a mitigation plan.

    The mitigation hierarchy from the Enterprise AI Vendor Risk Guide applies here: monitor vendor strategic decisions continuously, maintain vendor diversification, and build toward model ownership for workloads where behavioral consistency is non-negotiable.
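One way to make the diversification and ownership steps concrete is to keep a thin seam between application code and any single provider, so moving a workload is a configuration change rather than a rewrite. A sketch under those assumptions, with hypothetical class names rather than a prescribed design:

```python
from typing import Protocol

class ModelBackend(Protocol):
    """The only model surface application code may depend on."""
    def complete(self, prompt: str, max_tokens: int) -> str: ...

class HostedBackend:
    """Frontier API: useful for development and hard tasks."""
    def __init__(self, client, model: str):
        self._client, self._model = client, model  # e.g. an OpenAI SDK client

    def complete(self, prompt: str, max_tokens: int) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
        )
        return resp.choices[0].message.content

class OwnedBackend:
    """Fine-tuned local weights: production workloads, vendor-independent."""
    def __init__(self, llm):                        # e.g. a llama_cpp.Llama instance
        self._llm = llm

    def complete(self, prompt: str, max_tokens: int) -> str:
        out = self._llm(prompt, max_tokens=max_tokens)
        return out["choices"][0]["text"]
```

Both backends satisfy the same Protocol, so the production cutover from rented to owned weights touches configuration, not call sites.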

    Geopolitics has always been in enterprise technology stacks. With AI, you just need to know where to look.

    What AI Model Ownership Actually Means explains the practical path from API dependency to owned model weights.
