Hermes Agent + Ertas

    Build self-improving agents with Hermes Agent, Nous Research's MIT-licensed framework built around GEPA self-generated skills: agents create reusable skills from experience and get faster on repeated tasks.

    Overview

    Hermes Agent is Nous Research's open-source agent framework, released in February 2026 and past 103K GitHub stars as of April 2026. The framework's distinctive capability is its GEPA (Generalized Experience-based Procedural Acquisition) self-improvement mechanism: agents create reusable 'skills' from successful task completions, refine them through use, and accumulate a personal skill library whose capability compounds over time. Empirical results show Hermes agents running approximately 40% faster on repeated tasks after building 20+ self-generated skills, with the speedup coming from skill reuse rather than re-deriving solutions.

    This self-improvement pattern is fundamentally different from most agent frameworks, where each task starts from scratch. With Hermes Agent, an agent that completes a complex task once writes that solution as a skill that can be invoked directly on similar future tasks. The skills themselves are LLM-readable code or structured prompts, so they're inspectable and editable rather than opaque learned weights. The framework is MIT-licensed and can be self-hosted, with managed infrastructure starting at €5/month, making it accessible to individual developers and small teams in addition to enterprise deployments.
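The source doesn't publish GEPA's actual skill schema, but the "inspectable, editable" property can be pictured with a minimal sketch. All field names and the keyword-matching logic below are illustrative assumptions, not Hermes Agent's real format:

```python
from dataclasses import dataclass

# Hypothetical sketch of an inspectable GEPA-style skill record.
# Field names are illustrative assumptions, not Hermes Agent's schema.
@dataclass
class Skill:
    name: str       # short identifier, e.g. "analyze-quarterly-earnings"
    trigger: str    # description matched against incoming tasks
    procedure: str  # LLM-readable steps derived from a past success
    uses: int = 0   # reuse count, which could inform refinement

    def matches(self, task: str) -> bool:
        # Naive keyword overlap; a real system would likely use
        # embedding similarity or an LLM-based router.
        return any(word in task.lower() for word in self.trigger.lower().split())

skill = Skill(
    name="analyze-quarterly-earnings",
    trigger="analyze quarterly earnings report",
    procedure="1. Fetch filings  2. Extract revenue and margins  3. Summarize deltas",
)
print(skill.matches("Analyze Q3 earnings for cloud companies"))  # keyword overlap
```

Because the skill is plain data rather than learned weights, a developer can open the library, read each procedure, and edit or delete entries by hand.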

    How Ertas Integrates

    Hermes Agent works with any OpenAI-compatible model endpoint, so Ertas-trained models plug in through standard configuration. After fine-tuning your model in Ertas Studio and deploying via Ollama, vLLM, or Ertas Cloud, you configure Hermes Agent to use that endpoint as its base LLM. The combination is particularly powerful when paired with the Hermes 4 model family (also from Nous Research) — Hermes 4's hybrid `<think>` reasoning mode is designed in tandem with Hermes Agent's skill creation, and using both together produces the highest-quality skill libraries.
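Since the integration rests on the standard OpenAI chat-completions protocol, the request shape Hermes Agent sends to an Ertas endpoint is the familiar one. A stdlib-only sketch of that payload (the model name and localhost URL are placeholders):

```python
import json

# Sketch of the OpenAI-compatible chat-completions payload that an agent
# framework would POST to a local Ertas endpoint, e.g.
# http://localhost:11434/v1/chat/completions (Ollama's default port).
def build_chat_request(model: str, user_message: str) -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful agent."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("ertas-hermes-4-domain-70b", "Summarize this report.")
print(json.dumps(payload, indent=2))
```

Any server that accepts this payload shape (Ollama, vLLM, Ertas Cloud) can serve as the base LLM, which is why no Hermes-specific adapter is needed.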

    For self-improvement-oriented deployments, the Ertas + Hermes Agent loop is uniquely powerful. Hermes Agent generates skills from agent experience; those skills can be exported as training data and fed back into Ertas Studio to fine-tune the underlying model on its own self-generated procedural knowledge. The fine-tuned model then performs better on the patterns it has seen most, reducing the need for skill-library lookups for common tasks while preserving skill-based handling for novel ones. This creates a compounding improvement loop: better skills → better fine-tunes → better base behavior → better skills.
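The export step in that loop amounts to converting each skill into a chat-format training example, one JSON object per line. A hedged sketch, assuming a simple trigger/procedure skill record; the real `export_training_data` output format is not documented here:

```python
import json

# Illustrative conversion of skill records into chat-format fine-tuning
# examples (JSONL: one JSON object per line). Field names are assumptions.
def skills_to_jsonl(skills: list) -> str:
    lines = []
    for skill in skills:
        example = {
            "messages": [
                {"role": "user", "content": skill["trigger"]},
                {"role": "assistant", "content": skill["procedure"]},
            ]
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)

skills = [
    {"trigger": "Analyze quarterly earnings for a sector",
     "procedure": "1. Fetch filings  2. Extract revenue and margins  3. Summarize deltas"},
]
print(skills_to_jsonl(skills))
```

Fine-tuning on pairs like these teaches the base model the procedures it previously had to look up, which is the mechanism behind the "better skills → better fine-tunes" half of the loop.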

    Getting Started

    1. Fine-tune a base model in Ertas Studio

      Train your domain model. Hermes 4 derivatives or Llama 3.1-based fine-tunes pair particularly well with Hermes Agent's skill-creation patterns.

    2. Deploy to an OpenAI-compatible endpoint

      Export to GGUF and serve via Ollama, vLLM, or Ertas Cloud. Hermes Agent calls any standard chat-completion endpoint.

    3. Install Hermes Agent and configure the model

      Install Hermes Agent (self-hosted or via Nous's managed infrastructure). Configure the LLM provider to point at your Ertas inference endpoint.

    4. Run agent tasks and let GEPA accumulate skills

      As the agent completes tasks, GEPA automatically creates skills from successful completions. Over time, the skill library grows and the agent gets faster on repeated patterns.

    5. Export skills as training data for Ertas

      Periodically export the GEPA skill library as training data and use it to fine-tune the underlying model in Ertas Studio. The improved model further accelerates future skill creation.

    ```python
    from hermes_agent import Agent, GEPAConfig
    from hermes_agent.providers import OpenAICompatible

    # Point Hermes Agent at your Ertas-trained Hermes 4 fine-tune
    llm = OpenAICompatible(
        base_url="http://localhost:11434/v1",
        model="ertas-hermes-4-domain-70b",
        api_key="not-needed",
    )

    # Configure GEPA to enable skill accumulation
    gepa = GEPAConfig(
        enabled=True,
        skill_library_path="./skills/",
        auto_distill=True,  # Refine skills as they're reused
    )

    agent = Agent(
        name="research-agent",
        llm=llm,
        gepa=gepa,
    )

    # First task: agent derives a solution from scratch
    result1 = agent.run("Analyze Q3 earnings for the top 5 semiconductor companies.")
    # A skill is automatically created for "analyze quarterly earnings"

    # Later, similar task: agent invokes the existing skill
    result2 = agent.run("Analyze Q3 earnings for the top 10 cloud companies.")
    # ~40% faster than the first run because the skill is reused

    # Export skills as training data for further Ertas fine-tuning
    agent.gepa.export_training_data("./skills_training_data.jsonl")
    ```
    Run a Hermes Agent backed by an Ertas-trained Hermes 4 fine-tune. GEPA accumulates skills from successful tasks and exports them as training data for further model refinement.

    Benefits

    • GEPA self-improvement: agents create reusable skills and get ~40% faster on repeated tasks
    • MIT license with no commercial restrictions
    • Inspectable skill library — not opaque learned weights, but readable code/prompts
    • Pairs naturally with Hermes 4 model family for highest-quality skill creation
    • Compounding improvement loop: skills → fine-tunes → better base behavior
    • Self-hostable, with managed infrastructure from €5/month, keeping it accessible to individuals and small teams
