AnythingLLM + Ertas

    Use Ertas-trained models inside AnythingLLM to build private, document-grounded AI assistants with built-in RAG, multi-user support, and agent capabilities.

    Overview

    AnythingLLM is an all-in-one desktop and server application for running AI-powered knowledge bases and chat assistants. It bundles document ingestion, vector storage, RAG retrieval, conversation management, and a chat interface into a single application that runs entirely on your hardware. Users can upload PDFs, Word documents, web pages, and other content types, and AnythingLLM automatically chunks, embeds, and indexes them for retrieval-augmented conversations.

    What makes AnythingLLM stand out is its focus on accessibility and privacy. It provides a polished GUI that non-technical users can operate, supports multi-user workspaces with permission controls, and runs entirely offline after initial setup. It supports multiple LLM backends — Ollama, LM Studio, OpenAI, and any OpenAI-compatible endpoint — making it straightforward to swap in a fine-tuned model without changing anything about the document management or retrieval pipeline. For organizations that need a turnkey private AI solution, AnythingLLM provides the application layer while Ertas provides the model quality layer.

    How Ertas Integrates

    Ertas-trained models connect to AnythingLLM through its LLM provider settings. After deploying your fine-tuned model via Ollama or any OpenAI-compatible endpoint, you select it as the LLM provider in AnythingLLM's settings panel. The model immediately powers all conversations across every workspace, with the domain-specific knowledge from your Ertas training baked directly into the model's responses. Combined with AnythingLLM's document retrieval, this means your AI assistant has both trained knowledge and real-time document access.
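Before pointing AnythingLLM at the model, it can help to sanity-check the endpoint directly. The sketch below assumes an Ollama deployment on its default port (11434), which exposes an OpenAI-compatible chat completions API; the model name `ertas-hr-7b` and the prompt are placeholders for your own deployment.

    ```shell
    # Sanity-check the deployed model before wiring it into AnythingLLM.
    # Ollama serves an OpenAI-compatible API on port 11434 by default;
    # "ertas-hr-7b" is a placeholder — substitute your own model name.
    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "ertas-hr-7b",
        "messages": [
          {"role": "user", "content": "Summarize our PTO carry-over policy."}
        ]
      }'
    ```

If this returns a chat completion, AnythingLLM can reach the same model through its Ollama provider setting.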

    The Ertas-AnythingLLM combination is ideal for deploying internal knowledge base assistants. A legal team can fine-tune a model on contract analysis in Ertas Studio, upload their contract templates and policies into AnythingLLM workspaces, and give every team member access to an AI assistant that understands both general contract law (from fine-tuning) and the specific terms in their active documents (from RAG). Each workspace can have different documents and permissions, so the same model serves multiple teams with appropriate access controls. All of this runs on a single server with zero data leaving the organization's network.

    Getting Started

    1. Fine-tune a model in Ertas Studio

      Train your domain-specific model using Ertas Studio. Focus on the knowledge domain your AnythingLLM workspaces will cover — legal, medical, technical support, or internal policies.

    2. Deploy via Ollama or compatible endpoint

      Export the model in GGUF format and serve it through Ollama. AnythingLLM has native Ollama integration that auto-discovers available models.

    3. Configure AnythingLLM to use your model

      In AnythingLLM's LLM Preference settings, select Ollama as the provider and choose your Ertas-trained model from the auto-populated model list.

    4. Create workspaces and upload documents

      Set up workspaces for different teams or topics. Upload relevant documents — contracts, manuals, policies — that AnythingLLM will use for retrieval-augmented responses.

    5. Invite users and start conversations

      Add team members with appropriate workspace permissions. Users can immediately start chatting with an AI assistant grounded in both fine-tuned knowledge and uploaded documents.

    ```bash
    # 1. Deploy your Ertas-trained model with Ollama
    ollama create ertas-hr-7b -f ./Modelfile

    # 2. Verify the model is available
    ollama list
    # NAME              SIZE    MODIFIED
    # ertas-hr-7b       4.1 GB  just now

    # 3. Launch AnythingLLM (Docker)
    docker run -d \
      --name anythingllm \
      -p 3001:3001 \
      -v anythingllm_storage:/app/server/storage \
      mintplexlabs/anythingllm

    # 4. Open http://localhost:3001
    # → Settings → LLM Preference → Ollama
    # → Select "ertas-hr-7b" from the model dropdown
    # → Create a workspace and upload your HR policy documents
    ```
    Deploy an Ertas-trained model with Ollama and connect it to AnythingLLM for a fully private knowledge base assistant.
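Step 1 above references a Modelfile. A minimal one for a GGUF export might look like the following sketch; the weights filename, temperature, and system prompt are illustrative assumptions, not Ertas defaults.

    ```shell
    # Write a minimal Modelfile for an Ertas GGUF export
    # (filename, parameter values, and system prompt are illustrative).
    # Afterwards, register it with: ollama create ertas-hr-7b -f ./Modelfile
    cat > ./Modelfile <<'EOF'
    FROM ./ertas-hr-7b.Q4_K_M.gguf
    PARAMETER temperature 0.2
    SYSTEM "You are an HR policy assistant for internal use."
    EOF
    ```

`FROM`, `PARAMETER`, and `SYSTEM` are standard Ollama Modelfile directives; keeping the system prompt in the Modelfile means every AnythingLLM workspace inherits it without per-workspace configuration.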

    Benefits

    • Turnkey private AI assistant with document upload, RAG, and chat interface
    • Multi-user workspaces with permission controls for team-wide deployment
    • Native Ollama integration auto-discovers Ertas-trained models
    • 100% offline operation — no data leaves your network after setup
    • Polished GUI accessible to non-technical team members
    • Agent capabilities let the AI assistant take actions beyond conversation
