
OpenClaw Security: Why Running Your Own Models Is the Only Real Fix
OpenClaw's security crisis goes deeper than CVEs. The real vulnerability is sending everything through cloud APIs. Local models eliminate the largest attack surface.
OpenClaw is in the middle of a security crisis, and most of the coverage is focused on the wrong problem.
Yes, CVE-2026-25253 is serious — a one-click remote code execution chain with a CVSS score of 8.8. Yes, the ClawHub supply chain attack is alarming — over 800 malicious skills identified, roughly 20% of the entire registry. Yes, 30,000+ internet-exposed instances running without authentication is bad.
But these are symptoms. The underlying vulnerability is architectural: OpenClaw is designed to read your files, access your email, browse the web, and execute shell commands — and by default, it sends every piece of context through a cloud API to do it.
That is the attack surface no one is patching.
The Data Flow Problem
When OpenClaw runs with a cloud API backend (the default configuration), here is what happens with every interaction:
- You ask OpenClaw to summarize your emails. Your email contents are sent to OpenAI/Anthropic's servers as prompt context.
- You ask it to review a contract. The full contract text is transmitted to a third-party API.
- You ask it to check server logs. Your infrastructure details, IP addresses, and error messages leave your network.
- You ask it to draft a client proposal. Your pricing, strategy, and client details become API input.
Every file OpenClaw reads, every command output it processes, every browser page it renders — all of it flows through a cloud endpoint as token input. This is not a bug. It is the intended architecture.
CrowdStrike's analysis put it clearly: OpenClaw's broad permissions, combined with cloud API routing, create a scenario where "if employees deploy OpenClaw on corporate machines and connect it to enterprise systems while leaving it misconfigured and unsecured, it could become a powerful AI backdoor agent."
Meta banned OpenClaw from its corporate networks. Cisco's blog called personal AI agents like OpenClaw "a security nightmare." Kaspersky flagged the tool as unsafe. These are not overreactions.
Why Patching CVEs Is Not Enough
The security community is focused on three vectors:
1. The RCE vulnerability (CVE-2026-25253). This will get patched. But the next RCE will arrive eventually — OpenClaw's attack surface is enormous by design. An agent that can execute arbitrary shell commands, manage files, and control a browser will always be a high-value target.
2. Malicious ClawHub skills. The 800+ poisoned skills (delivering the Atomic macOS Stealer) exposed a fundamental problem with community skill registries. Vetting open-source extensions at scale is an unsolved problem. The ClawHub team can improve review processes, but determined attackers will always find ways to slip through.
3. Exposed instances. The 30,000+ internet-facing OpenClaw deployments are a configuration problem. Port hardening and authentication will reduce this number, but human error guarantees some instances will always be exposed.
None of these fixes address the core issue: even a perfectly secured OpenClaw instance still sends your data to cloud APIs by default.
Local Models: Eliminating the Largest Attack Surface
Running OpenClaw on local models removes the data exfiltration vector entirely:
| Risk | Cloud API | Local Model |
|---|---|---|
| Data sent to third parties | Every prompt, every file, every context | Nothing — inference runs on your hardware |
| API key exposure | Keys stored in config, leaked in logs, stolen via prompt injection | No API keys to leak |
| Vendor data retention | Subject to provider's data handling policy | You control retention |
| Network interception | Tokens in transit to API endpoints | No network traffic for inference |
| Prompt injection → data theft | Attacker-crafted prompts can exfiltrate context via API calls | No external endpoint to exfiltrate to |
This is not a marginal improvement. It eliminates an entire class of attacks.
Prompt Injection Becomes Less Dangerous
One of the most cited OpenClaw risks is prompt injection: a malicious website, email, or document could contain hidden instructions that trick OpenClaw into executing harmful actions. With a cloud API backend, a successful prompt injection can exfiltrate data by encoding it into API calls. With a local model, there is no external endpoint for the injected prompt to call home to.
Prompt injection is still a risk with local models — an injected instruction could still trigger harmful local actions. But the data exfiltration vector is gone.
API Keys Are No Longer a Target
OpenClaw's default setup requires storing API keys in configuration files. These keys have been found in plaintext logs, exposed through unsecured instances, and stolen via prompt injection attacks. When you run local models, there are no API keys to steal. The highest-value credential in most OpenClaw deployments simply does not exist.
What About Model Quality?
The common objection: "Local models are not as good as GPT-4." This is true for general-purpose tasks. It is false for domain-specific agent work.
The tasks OpenClaw performs repeatedly — email triage, document summarization, data extraction, report generation — are exactly the tasks where fine-tuned small models match or exceed frontier models:
- B2B task categorization: 94% accuracy with fine-tuned 7B vs. 71% with prompt-engineered GPT-4
- Support ticket resolution: 87% auto-resolution with fine-tuned model vs. 34% with RAG-enhanced chatbot
- Legal clause flagging: 90% accuracy with fine-tuned model on domain-specific contracts
For the narrow, repeatable tasks that make up 80%+ of OpenClaw usage, fine-tuned local models are not a compromise — they are an upgrade.
A Practical Security Architecture for OpenClaw
Here is how to deploy OpenClaw with a defensible security posture:
1. Local Model as Primary Backend
Deploy a fine-tuned model via Ollama on the same machine or local network as OpenClaw. All routine tasks route through this model. No data leaves your infrastructure.
{
"models": {
"providers": [
{
"name": "local-secure",
"api": "openai-completions",
"baseUrl": "http://127.0.0.1:11434/v1",
"models": ["my-finetuned-model"]
}
]
}
}
2. Cloud Fallback with Data Filtering (Optional)
If you need cloud API access for edge cases, route only non-sensitive queries to the cloud backend. Never send documents, emails, or proprietary data through cloud APIs.
3. Disable or Audit ClawHub Skills
Do not install community skills from ClawHub without reviewing the source code. Build your own skills backed by your fine-tuned model instead — this avoids the supply chain risk entirely.
4. Network Isolation
Run OpenClaw on a machine that cannot reach the internet except through controlled channels. Local model inference does not require any outbound network access.
5. Fine-Tune for Security Awareness
Include security-relevant training examples in your fine-tuning dataset:
- Examples of prompt injection attempts with correct refusal responses
- Patterns that should trigger confirmation before execution (file deletion, system commands, credential access)
- Boundary enforcement for what types of data the model should and should not process
Ship AI that runs on your users' devices.
Ertas early bird pricing starts at $14.50/mo — locked in for life. Plans for builders and agencies.
The Enterprise Case
For organisations evaluating OpenClaw for internal use, the security calculus is straightforward:
- With cloud APIs: Every interaction transmits internal data to third-party infrastructure. Every API key is a high-value target. Every vulnerability in OpenClaw is a potential data breach vector.
- With local models: Inference is air-gapped from external services. No credentials to steal. The blast radius of any vulnerability is limited to the local machine.
The security teams at Meta, Cisco, and CrowdStrike are right to flag OpenClaw's risks. But the solution is not to ban the tool — it is to eliminate the data flow that makes it dangerous.
Run your own models. Keep your data local. That is the fix the CVE patches will never deliver.
Ship AI that runs on your users' devices.
Early bird pricing starts at $14.50/mo — locked in for life. Plans for builders and agencies.
Keep reading

HIPAA, GDPR, and OpenClaw: A Compliance Guide for Regulated Industries
OpenClaw in healthcare, legal, or finance is a compliance minefield when using cloud APIs. Here's how to map the data flows, identify risks, and deploy compliantly with local models.

Privacy-Conscious AI Development: Fine-Tune in the Cloud, Run on Your Terms
How Ertas balances the convenience of cloud fine-tuning with the privacy of local deployment — giving you control over your data where it matters most.

How to Power OpenClaw with Fine-Tuned Local Models (No API Costs)
OpenClaw defaults to cloud APIs that charge per token. Here's how to run it on fine-tuned local models via Ollama for better domain performance and zero marginal inference cost.