Msty + Ertas

    Run Ertas-trained models in Msty's elegant desktop chat interface with built-in model management, conversation history, and knowledge base features.

    Overview

    Msty is a desktop AI chat application designed for users who want a polished, native experience for interacting with local language models. Available on macOS, Windows, and Linux, Msty provides a clean conversation interface with features like conversation branching, message editing, markdown rendering, and code syntax highlighting. It supports multiple model backends including Ollama, LM Studio, and direct GGUF file loading, making it one of the most flexible local AI clients available.

    Beyond chat, Msty includes knowledge base functionality that lets users attach documents to conversations for context-aware responses. It manages model downloads, provides performance metrics during inference, and stores all conversation history locally. For individual professionals and small teams who need a daily-driver AI assistant running on their own hardware, Msty provides the desktop-native experience that web-based chat interfaces cannot match — with offline capability, system-level keyboard shortcuts, and native OS notifications.

    How Ertas Integrates

    Ertas-trained models work seamlessly with Msty through its Ollama backend integration. After fine-tuning in Ertas Studio and deploying the model to Ollama, Msty automatically discovers and lists it in the model selector. Users can switch between their Ertas-trained specialist models and general-purpose models mid-conversation, comparing outputs side by side. Msty also supports loading GGUF files directly, so you can import an Ertas-exported model without even setting up an inference server.

    The Ertas-Msty workflow is ideal for knowledge workers who need AI assistance in a specific domain. A financial analyst can fine-tune a model on earnings reports and SEC filings in Ertas Studio, load it into Msty, and use it daily for research and analysis — with the model running entirely on their laptop and all conversations stored locally. Msty's knowledge base feature adds another layer: the analyst can attach relevant documents to a conversation, combining the fine-tuned model's domain expertise with real-time document context. This creates a private, personalized AI assistant that understands both the general domain and the specific materials at hand.

    Getting Started

    1. Fine-tune your model in Ertas Studio

      Train a domain-specific model using your data in Ertas Studio. Export the finished model in GGUF format with your preferred quantization level.

    2. Deploy via Ollama or load the GGUF directly

      Either register the model with Ollama for server-based inference, or load the GGUF file directly into Msty for a simpler setup.
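      If you take the Ollama path, registration happens through a Modelfile that points at the exported GGUF. The sketch below is a minimal example, not Ertas output: the file name, system prompt, and temperature are illustrative assumptions you should replace with your own values.

      ```shell
      # Hypothetical Modelfile for an Ertas-exported GGUF; the file name,
      # system prompt, and temperature are example values, not Ertas defaults.
      cat > Modelfile <<'EOF'
      FROM ./ertas-analyst-7b-Q5_K_M.gguf
      PARAMETER temperature 0.2
      SYSTEM "You are a financial analysis assistant."
      EOF

      # Register the model with Ollama under the name Msty will display
      ollama create ertas-analyst-7b -f ./Modelfile
      ```

      The `FROM`, `PARAMETER`, and `SYSTEM` directives are standard Ollama Modelfile syntax; the GGUF path must point to wherever you saved the Ertas export.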

    3. Select your model in Msty

      Open Msty and select your Ertas-trained model from the model dropdown. If using Ollama, the model appears automatically; if using a GGUF file, point Msty to the file location.

    4. Attach knowledge base documents

      Optionally upload documents relevant to your work. Msty will use them as context alongside your fine-tuned model's trained knowledge.

    5. Start using your personalized AI assistant

      Begin conversations with your domain-tuned model. Use conversation branching to explore different approaches, and access full conversation history for reference.

    ```bash
    # Option 1: Deploy via Ollama (recommended)
    ollama create ertas-analyst-7b -f ./Modelfile
    # Msty auto-discovers Ollama models — just select it in the app

    # Option 2: Direct GGUF loading in Msty
    # 1. Download the GGUF file from Ertas Studio
    # 2. Open Msty → Settings → Models → Add Local Model
    # 3. Browse to: ~/models/ertas-analyst-7b-Q5_K_M.gguf
    # 4. Msty loads the model with the llama.cpp backend

    # Either way, your fine-tuned model is ready
    # in Msty's chat interface within seconds
    ```

    Load an Ertas-trained model into Msty through Ollama auto-discovery or direct GGUF file import.
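    If you took the Ollama path, you can confirm the server is actually serving the model before opening Msty. This sketch assumes Ollama's default local port (11434) and reuses the example model name from above:

    ```shell
    # List the models the local Ollama server currently serves;
    # your Ertas model name should appear in the JSON response
    curl -s http://localhost:11434/api/tags

    # Send a one-off, non-streaming generation request to confirm
    # the model loads and responds
    curl -s http://localhost:11434/api/generate \
      -d '{"model": "ertas-analyst-7b", "prompt": "Hello", "stream": false}'
    ```

    If the second request returns a response object rather than an error, Msty's model selector will pick the model up the same way.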

    Benefits

    • Native desktop application with system integration and keyboard shortcuts
    • Direct GGUF loading eliminates the need for a separate inference server
    • Conversation branching for exploring multiple response paths
    • Knowledge base attachments combine fine-tuned intelligence with document context
    • All conversations stored locally with full privacy
    • Cross-platform support for macOS, Windows, and Linux
