OpenRouter + Ertas

    Access Ertas-trained models alongside hundreds of other LLMs through OpenRouter's unified API, with automatic fallback routing and cost optimization.

    Overview

    OpenRouter is a unified API gateway that provides access to hundreds of large language models from multiple providers through a single endpoint. Instead of managing separate API keys, SDKs, and billing relationships with OpenAI, Anthropic, Google, and dozens of open-source model hosts, developers integrate once with OpenRouter and gain access to the entire model ecosystem. OpenRouter handles routing, load balancing, rate limiting, and provider failover automatically.

    Beyond aggregation, OpenRouter provides intelligent model selection tools. Developers can compare models side-by-side on cost, speed, and quality metrics before committing to one. The platform supports custom model hosting, meaning organizations can register their own fine-tuned models alongside public ones and route between them using the same API. This makes OpenRouter a natural distribution and deployment channel for teams that want to serve their Ertas-trained models to internal or external consumers through a managed API layer.

    How Ertas Integrates

    After fine-tuning a model in Ertas Studio, you can register it with OpenRouter as a custom model endpoint. This lets your team — or external consumers — access your Ertas-trained model through the standard OpenRouter API alongside any other model they use. The benefit is operational simplicity: instead of managing a separate inference endpoint and API layer for your fine-tuned model, you leverage OpenRouter's existing infrastructure for authentication, rate limiting, usage tracking, and billing.

    For teams that use multiple models for different tasks, OpenRouter's routing capabilities pair well with Ertas-trained specialist models. You might route complex domain-specific queries to your Ertas-trained model while falling back to a general-purpose model for simpler tasks. OpenRouter's API is fully compatible with the OpenAI SDK, so any application built on OpenAI's client libraries can switch to using your fine-tuned model by changing only the base URL and model name — the same pattern that makes Ertas models portable across the entire open-source inference ecosystem.
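The routing pattern described above can be sketched client-side before any request is sent. The model slugs and the keyword heuristic below are illustrative assumptions, not part of the Ertas or OpenRouter APIs:

```python
# Sketch of client-side task routing between an Ertas-trained specialist
# model and a general-purpose model. Model slugs and the keyword
# heuristic are illustrative assumptions, not real OpenRouter identifiers.

SPECIALIST_MODEL = "your-org/ertas-legal-7b"  # hypothetical custom model
GENERALIST_MODEL = "openai/gpt-4o-mini"       # example public model

LEGAL_KEYWORDS = {"contract", "clause", "indemnity", "liability", "obligation"}

def select_model(query: str) -> str:
    """Route domain-specific queries to the specialist model,
    everything else to the cheaper general-purpose model."""
    words = {w.strip(".,?!").lower() for w in query.split()}
    if words & LEGAL_KEYWORDS:
        return SPECIALIST_MODEL
    return GENERALIST_MODEL

# The returned name plugs straight into the OpenAI-compatible call:
#   client.chat.completions.create(model=select_model(query), messages=...)
```

In production you would likely replace the keyword heuristic with a lightweight classifier, but the shape of the integration stays the same: pick a model name, pass it to the standard API call.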

    Getting Started

    1. Fine-tune your model in Ertas Studio

      Train a domain-specific model on your data using Ertas Studio. Select the base model and quantization format appropriate for your performance and cost requirements.

    2. Deploy to an inference endpoint

      Serve your model via Ertas Cloud, vLLM, or any production-grade inference server with a public or VPN-accessible endpoint.
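As one example of this step, vLLM ships an OpenAI-compatible server that can expose a local checkpoint over HTTP. The model path, port, and served name below are placeholders, not values Ertas or OpenRouter prescribe:

```shell
# Hypothetical example: serve an exported checkpoint with vLLM's
# OpenAI-compatible server. Path, port, and served name are placeholders.
vllm serve /models/ertas-legal-7b \
    --host 0.0.0.0 \
    --port 8000 \
    --served-model-name ertas-legal-7b
```

The resulting endpoint (here `http://<host>:8000/v1`) is what you register with OpenRouter in the next step.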

    3. Register the model with OpenRouter

      Add your Ertas-trained model as a custom model in OpenRouter's dashboard. Configure the endpoint URL, model capabilities, and access permissions.

    4. Configure routing rules

      Set up routing preferences to direct specific query types or applications to your fine-tuned model. Configure fallback models for high-availability scenarios.

    5. Integrate with the OpenRouter API

      Use the OpenRouter API — compatible with the OpenAI SDK — to access your model from any application. Track usage and costs through the OpenRouter dashboard.

    python
    from openai import OpenAI
    
    # Use OpenRouter to access your Ertas-trained model
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-your-openrouter-key",
    )
    
    response = client.chat.completions.create(
        model="your-org/ertas-legal-7b",  # Your custom model on OpenRouter
        messages=[
            {"role": "system", "content": "You are a legal document analyst."},
            {"role": "user", "content": "Summarize the key obligations in this contract."},
        ],
        temperature=0.1,
        max_tokens=1024,
    )
    
    print(response.choices[0].message.content)
    Access your Ertas-trained model through OpenRouter's unified API using the standard OpenAI Python SDK.
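OpenRouter also accepts an ordered `models` list in the request body for fallback routing: if the first model is unavailable, the request is retried against the next entry. The sketch below builds such a payload directly so the fallback order is explicit; the model slugs are the same illustrative assumptions as above:

```python
# Sketch: request payload using OpenRouter's fallback "models" list.
# If the primary (Ertas-trained) model is unavailable, OpenRouter retries
# the next entry in order. Model slugs are illustrative assumptions.
import json

def build_payload(user_message: str) -> dict:
    return {
        # Ordered preference: specialist first, general-purpose fallback second.
        "models": ["your-org/ertas-legal-7b", "openai/gpt-4o-mini"],
        "messages": [
            {"role": "system", "content": "You are a legal document analyst."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.1,
    }

payload = build_payload("Summarize the key obligations in this contract.")
print(json.dumps(payload, indent=2))

# With the OpenAI SDK, the same field can be passed through extra_body:
#   client.chat.completions.create(
#       model="your-org/ertas-legal-7b",
#       extra_body={"models": payload["models"]},  # fallback list
#       messages=payload["messages"],
#   )
```

This keeps high-availability concerns out of application code: the client states a preference order once, and OpenRouter handles the failover.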

    Benefits

    • Serve fine-tuned models through a managed API with built-in auth and rate limiting
    • OpenAI SDK-compatible API means zero code changes for existing applications
    • Automatic fallback routing ensures high availability for production workloads
    • Usage tracking and cost analytics across all models in one dashboard
    • Route between specialist fine-tuned models and general-purpose models intelligently
    • Share fine-tuned models with team members or external consumers through access controls
