OpenClaw + Ertas

    Replace OpenClaw's default cloud API backend with fine-tuned models deployed through Ollama for zero-cost inference, better domain-specific accuracy, and complete data privacy.

    Overview

    OpenClaw is an open-source autonomous AI agent that connects to messaging platforms (WhatsApp, Telegram, Slack, Discord, Teams) and can execute tasks via large language models — from email triage and file management to browser automation and shell commands. With over 180,000 GitHub stars, it has become the most popular personal AI agent framework.

    By default, OpenClaw routes inference through cloud APIs like OpenAI and Anthropic, which means per-token costs on every interaction and sensitive data leaving your infrastructure. Ertas solves both problems: fine-tune a domain-specific model on your data, export as GGUF, deploy via Ollama, and point OpenClaw at the local endpoint. The result is an AI agent that understands your specific workflows, costs nothing per interaction, and keeps all data on your machine.

    How Ertas Integrates

    OpenClaw supports any model served through an OpenAI-compatible API, which includes Ollama — the recommended local deployment target for Ertas-trained models. After fine-tuning in Ertas Studio, download your model in GGUF format with the accompanying Modelfile. Register it with Ollama using a single CLI command, then update OpenClaw's models.providers configuration to point to your local Ollama endpoint.
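As a quick sanity check of the OpenAI-compatible endpoint (this assumes Ollama is already running locally and that a model has been registered; "my-finetuned-model" is a placeholder for whatever name you chose with `ollama create`):

```shell
# Requires a running Ollama instance on the default port.
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-finetuned-model",
    "messages": [{"role": "user", "content": "Summarise my unread email."}]
  }'
```

If this returns a chat completion, OpenClaw will be able to use the same endpoint.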

    For agencies running per-client OpenClaw deployments, Ertas enables a particularly efficient architecture: fine-tune per-client LoRA adapters (50–200MB each) on a shared base model. Each client's OpenClaw instance connects to the same Ollama server but loads a different adapter at inference time. This eliminates per-client API costs entirely while delivering better domain-specific accuracy than generic cloud models. Ertas Cloud can manage the full lifecycle — training, adapter versioning, deployment monitoring, and A/B testing between model versions.
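The per-client layout can be sketched with Ollama Modelfiles (file names and the base model tag are illustrative; `ADAPTER` is Ollama's Modelfile directive for applying a LoRA adapter on top of a base model):

```
# Modelfile.client-a — shared base, client-specific LoRA adapter
FROM llama3.1:8b
ADAPTER ./adapters/client-a.gguf
```

```
# Modelfile.client-b — same base, different adapter
FROM llama3.1:8b
ADAPTER ./adapters/client-b.gguf
```

Each file is registered as its own named model (`ollama create client-a -f Modelfile.client-a`), and each client's OpenClaw instance simply points at its own model name.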

    Getting Started

1. Fine-tune a model for your OpenClaw workflows

Upload training data from your OpenClaw use cases (email triage examples, support conversations, report templates) to Ertas Studio. Select a base model optimised for agent tasks (Llama 3.1 8B or Qwen 2.5 7B recommended) and launch a LoRA fine-tuning run.
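The exact upload schema Ertas Studio expects is not specified here; as an illustration, here is a minimal sketch assuming OpenAI-style chat JSONL (one JSON object per line), a common fine-tuning data format:

```python
import json

# Illustrative only: assumes Ertas Studio accepts OpenAI-style chat JSONL.
# Check the Studio documentation for the exact expected schema.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You triage incoming email for OpenClaw."},
            {"role": "user", "content": "Subject: Invoice #4821 overdue. Please advise."},
            {"role": "assistant", "content": "Label: billing. Priority: high. Action: forward to accounts."},
        ]
    },
]

with open("openclaw-triage.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity-check: the file round-trips as one JSON object per line.
with open("openclaw-triage.jsonl") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))  # → 1
```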

2. Export as GGUF

      Download the fine-tuned model in GGUF format with your preferred quantisation level. Q5_K_M is recommended for OpenClaw agent tasks — it balances quality and speed for multi-step reasoning workflows.

3. Deploy via Ollama

      Use the Ertas-generated Modelfile to register your model with Ollama in a single command. The Modelfile includes the correct chat template, system prompt, and runtime parameters.
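The generated Modelfile might look roughly like this (contents illustrative; `FROM`, `SYSTEM`, and `PARAMETER` are standard Ollama Modelfile directives):

```
# Modelfile (illustrative): registers the GGUF export with its runtime settings
FROM ./my-finetuned-model.Q5_K_M.gguf
SYSTEM "You are an OpenClaw task agent."
PARAMETER temperature 0.2
# A TEMPLATE directive would carry the base model's chat template
# (Go template syntax), which Ertas fills in for you.
```

Registering it is then `ollama create my-finetuned-model -f ./Modelfile`.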

4. Configure OpenClaw's model provider

      Update OpenClaw's models.providers configuration to use your local Ollama endpoint at http://127.0.0.1:11434/v1. Set your fine-tuned model as the default for all tasks, or configure task-specific routing.

5. Test and iterate

      Run your standard OpenClaw workflows through the fine-tuned model. Collect cases where accuracy falls short, add them to your training dataset, and re-fine-tune for the next iteration.
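A lightweight way to make this loop repeatable (everything below is hypothetical scaffolding, not an Ertas or OpenClaw API): keep failed interactions as labelled test cases and score every new fine-tune against them before promoting it.

```python
# Minimal regression-check sketch; names are illustrative.
def evaluate(cases, model_fn):
    """Fraction of cases where the model output matches the expected label."""
    hits = sum(1 for c in cases if model_fn(c["prompt"]) == c["expected"])
    return hits / len(cases)

# Stand-in for a call to the local Ollama endpoint.
def fake_model(prompt):
    return "billing" if "invoice" in prompt.lower() else "general"

cases = [
    {"prompt": "Invoice #4821 is overdue", "expected": "billing"},
    {"prompt": "Lunch on Friday?", "expected": "general"},
]
print(evaluate(cases, fake_model))  # → 1.0
```

In practice `fake_model` would be replaced by a call to the local endpoint, and a score below your previous release would block the new adapter from deployment.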

```json
// openclaw.json — configure local fine-tuned model
{
  "models": {
    "providers": [
      {
        "name": "ertas-local",
        "api": "openai-completions",
        "baseUrl": "http://127.0.0.1:11434/v1",
        "models": ["my-finetuned-model"]
      }
    ]
  }
}
```

```shell
# Deploy your Ertas-trained model with Ollama:
ollama create my-finetuned-model -f ./Modelfile
ollama run my-finetuned-model "Test prompt"
```
    Configure OpenClaw to use your Ertas-fine-tuned model served locally through Ollama — zero API costs, full data privacy.

    Benefits

    • Zero per-token inference cost — all OpenClaw interactions run locally
    • Better accuracy on domain-specific agent tasks than generic cloud models
    • Complete data privacy — files, emails, and prompts never leave your infrastructure
    • Per-client LoRA adapters for agencies running multi-tenant OpenClaw deployments
    • Eliminates API key management and the associated security risks
    • Compatible with OpenClaw's full feature set including cron jobs and heartbeat monitoring
