SuperAgent + Ertas

    Deploy Ertas-trained models as the reasoning core of SuperAgent AI agents with tool use, memory, and multi-step task execution capabilities.

    Overview

    SuperAgent is an open-source AI agent framework that lets developers build, deploy, and manage autonomous AI agents with tool-calling capabilities, persistent memory, and multi-step reasoning. Unlike simple chatbot frameworks, SuperAgent provides the full infrastructure for production agent deployments: API management, workflow orchestration, document ingestion, and real-time monitoring. Agents built with SuperAgent can browse the web, query databases, call external APIs, process documents, and execute multi-step tasks with human-in-the-loop approval workflows.

    The framework is designed for production use from the ground up. It includes built-in authentication, rate limiting, usage analytics, and webhook-based event streaming. Agents can be deployed as REST APIs consumed by any application, or embedded in existing products through the provided SDKs. For teams building AI-powered products that go beyond conversational Q&A — agents that actually take actions, process workflows, and integrate with business systems — SuperAgent provides the orchestration layer while the underlying LLM provides the intelligence.

    How Ertas Integrates

    Ertas-trained models plug into SuperAgent as custom LLM providers through the OpenAI-compatible API interface. After fine-tuning a model in Ertas Studio for a specific agent use case — customer onboarding, document processing, research assistance — you deploy it and configure SuperAgent to use it as the reasoning backbone for your agent. The fine-tuned model's domain expertise directly improves the agent's ability to select the right tools, interpret results correctly, and generate accurate responses.
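To make the wire format concrete, here is a hedged sketch of the OpenAI-compatible chat-completions payload that an agent framework like SuperAgent would POST to a deployed Ertas endpoint. The model name and the tool schema are illustrative examples, not fixed API values:

```python
import json

# Illustrative payload in the OpenAI chat-completions format. An agent framework
# sends something like this to the deployed model's /v1/chat/completions route.
# The model name and tool definition below are hypothetical examples.
payload = {
    "model": "ertas-legal-agent-7b",
    "messages": [
        {"role": "system", "content": "You are a contract processing agent."},
        {"role": "user", "content": "Extract the payment terms from the attached contract."},
    ],
    # Tools are advertised in OpenAI function-calling format; the fine-tuned
    # model decides which one to invoke and generates the arguments.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "document_parser",
                "description": "Parse an uploaded document into structured text.",
                "parameters": {
                    "type": "object",
                    "properties": {"document_id": {"type": "string"}},
                    "required": ["document_id"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

Because the interface is the standard OpenAI schema, the same deployed model works unchanged with any framework that speaks that protocol.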

    The impact of fine-tuning on agent performance is substantial. Generic models often struggle with tool selection — choosing the wrong API to call, misinterpreting function parameters, or generating syntactically incorrect tool invocations. A model fine-tuned on examples of correct tool usage in your specific domain makes dramatically fewer errors, leading to higher task completion rates and fewer fallback-to-human escalations. With Ertas, you can generate training data from your agent's production logs — successful tool chains, corrected errors, and human feedback — and continuously improve the model's reasoning capabilities through iterative fine-tuning cycles.
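The log-to-training-data loop described above can be sketched as follows. The log record schema here is hypothetical — adapt the field names to whatever your agent deployment actually emits:

```python
import json

# Hedged sketch: converting one successful production agent run into a
# chat-format supervised fine-tuning example. The record schema (input,
# tool_calls, final_output) is a hypothetical example, not a fixed format.
def log_to_training_example(record: dict) -> dict:
    """Convert an agent run log into a chat-format training example."""
    messages = [{"role": "user", "content": record["input"]}]
    # Replay the tool chain the agent executed, so the model learns correct
    # tool selection and parameter formatting for this domain.
    for call in record["tool_calls"]:
        messages.append({
            "role": "assistant",
            "content": f"Calling {call['tool']} with {json.dumps(call['args'])}",
        })
        messages.append({"role": "tool", "content": call["result"]})
    messages.append({"role": "assistant", "content": record["final_output"]})
    return {"messages": messages}

# Example log record from a successful run (illustrative values).
record = {
    "input": "Extract payment terms from contract C-1042.",
    "tool_calls": [
        {"tool": "document-parser", "args": {"document_id": "C-1042"},
         "result": "Net 30, 2% early-payment discount."},
    ],
    "final_output": "Payment terms: Net 30 with a 2% early-payment discount.",
}
example = log_to_training_example(record)
# One JSON line per example yields a JSONL fine-tuning dataset.
print(json.dumps(example))
```

Filtering to runs that completed without human escalation, plus human-corrected failures, gives each retraining cycle progressively better tool-use supervision.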

    Getting Started

    1. Fine-tune a model for agent reasoning

      Train a model in Ertas Studio on task-specific examples including tool selection, parameter formatting, and multi-step reasoning chains relevant to your agent's domain.

    2. Deploy the model to an inference endpoint

      Serve the model via Ertas Cloud, vLLM, or Ollama with an OpenAI-compatible API that SuperAgent can connect to.

    3. Create a SuperAgent agent

      Configure a new agent in SuperAgent with your Ertas model as the LLM provider. Define the agent's tools, memory settings, and system prompt.

    4. Add tools and data sources

      Connect the agent to external tools — databases, APIs, document stores — that it will use to complete tasks. Upload reference documents for RAG-augmented responses.

    5. Deploy and monitor in production

      Publish the agent as a REST API. Monitor task completion rates, tool usage patterns, and error frequencies to identify opportunities for model retraining.
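For step 2, a self-hosted alternative to Ertas Cloud looks like the following. The model name is a placeholder for your exported Ertas checkpoint:

```shell
# Serve the fine-tuned model with an OpenAI-compatible API.
# The model path/name below is a placeholder for your Ertas checkpoint.

# Option A: vLLM (exposes /v1/chat/completions on port 8000)
vllm serve ./ertas-legal-agent-7b --port 8000

# Option B: Ollama (OpenAI-compatible endpoint at http://localhost:11434/v1)
ollama run ertas-legal-agent-7b

# Then point SuperAgent's llm_config base_url at http://localhost:8000/v1
# (vLLM) or http://localhost:11434/v1 (Ollama) instead of Ertas Cloud.
```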

    python
    import superagent
    
    # Create a SuperAgent client
    client = superagent.Client(api_key="your-superagent-key")
    
    # Create an agent with your Ertas-trained model
    agent = client.agents.create(
        name="Contract Processor",
        llm_provider="openai-compatible",
        llm_config={
            "base_url": "https://cloud.ertas.ai/v1",
            "api_key": "your-ertas-key",
            "model": "ertas-legal-agent-7b",
        },
        system_prompt="You are a contract processing agent. Use the provided tools to extract, classify, and route contract documents.",
    )
    
    # Add tools the agent can use
    client.agents.add_tool(agent.id, tool_id="document-parser")
    client.agents.add_tool(agent.id, tool_id="crm-update")
    client.agents.add_tool(agent.id, tool_id="email-sender")
    
    # Run the agent on a task
    result = client.agents.invoke(
        agent.id,
        input="Process the uploaded contract and extract all payment terms.",
    )
    print(result.output)
    Create a SuperAgent agent powered by an Ertas-trained model with domain-specific tool-calling capabilities.

    Benefits

    • Fine-tuned reasoning improves tool selection accuracy and task completion rates
    • Production-ready agent infrastructure with auth, rate limiting, and monitoring
    • Persistent memory enables agents to maintain context across interactions
    • Human-in-the-loop approval workflows for high-stakes actions
    • Multi-step task execution with automatic error recovery
    • Continuous improvement through production log-based retraining in Ertas Studio
