ComfyUI + Ertas

    Integrate Ertas-trained language models into ComfyUI's node-based workflow system for text generation, prompt engineering, and multi-modal AI pipelines.

    Overview

    ComfyUI is a powerful node-based visual workflow engine originally designed for Stable Diffusion image generation, but increasingly adopted for broader AI automation tasks. Its graph-based interface lets users connect processing nodes — model loaders, samplers, prompt processors, and output handlers — into complex generation pipelines that are fully reproducible and shareable. Every workflow is a JSON graph that can be version-controlled, shared with teammates, and executed programmatically through ComfyUI's API.

    ComfyUI's extension ecosystem has grown rapidly to support language model integration alongside image generation. Custom node packs now provide LLM chat nodes, text processing utilities, and API connectors that let users build multi-modal workflows combining image generation with text analysis. For teams building content pipelines that involve both visual and textual AI — product description generation with matching images, marketing copy with brand-consistent visuals, or document processing with image extraction — ComfyUI provides a unified visual canvas where all components connect.

    How Ertas Integrates

    Ertas-trained models connect to ComfyUI through LLM integration nodes that support OpenAI-compatible API endpoints. After fine-tuning a model in Ertas Studio for a specific text task — product descriptions, content classification, or prompt enhancement — you deploy it via Ollama or any compatible server and add an LLM node in your ComfyUI workflow. The text generated by your Ertas model can then feed into other nodes: driving image generation prompts, populating templates, or triggering conditional workflow branches.
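    Outside ComfyUI, the same OpenAI-compatible endpoint can be exercised directly to sanity-check a deployment. A minimal Python sketch, assuming the fine-tuned model was served through Ollama under the name `ertas-product-7b` on the default port (both names are assumptions for illustration):

```python
import json
import urllib.request

# Ollama exposes an OpenAI-compatible chat completions endpoint under /v1.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, system_prompt: str, user_prompt: str,
                       temperature: float = 0.7, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def generate(model: str, system_prompt: str, user_prompt: str) -> str:
    """POST the request to the endpoint and return the generated text."""
    payload = build_chat_request(model, system_prompt, user_prompt)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

    An LLM node in ComfyUI sends essentially this same request; the node's `api_url`, `model`, and prompt inputs map one-to-one onto the payload fields above.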

    This integration is particularly valuable for content production pipelines. Consider a workflow where product images are uploaded, an Ertas-trained model generates detailed descriptions and SEO metadata, and those descriptions are used to condition image variations — all in a single ComfyUI graph. The fine-tuned model ensures descriptions match your brand voice and include domain-specific terminology that a generic model would miss. Because ComfyUI workflows are JSON graphs, the entire pipeline is reproducible and can be triggered via API for batch processing, turning a visual prototype into a production content pipeline.

    Getting Started

    1. Fine-tune a text model in Ertas Studio

      Train a model on your specific text task — product descriptions, prompt enhancement, content classification, or any text generation use case.

    2. Deploy the model via Ollama

      Export the GGUF model and serve it through Ollama. ComfyUI LLM nodes connect to Ollama's API endpoint for text generation.

    3. Install LLM nodes in ComfyUI

      Add an LLM integration node pack to ComfyUI through the manager. These nodes provide chat completion, text processing, and API connector functionality.

    4. Build your multi-modal workflow

      Connect LLM nodes to your image generation pipeline. Use the Ertas model output to drive prompts, metadata, or conditional logic in your workflow.

    5. Execute via API for batch processing

      Use ComfyUI's API to trigger workflows programmatically. Process hundreds of items through your fine-tuned text and image pipeline in batch mode.
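    The batch step can be sketched against ComfyUI's `/prompt` HTTP endpoint, which accepts a workflow graph as JSON and queues it for execution. The node id `"3"`, the `workflow_api.json` filename, and the product names are illustrative assumptions:

```python
import copy
import json
import urllib.request

# Default ComfyUI API endpoint for queueing a workflow.
COMFYUI_URL = "http://127.0.0.1:8188/prompt"

def set_user_prompt(workflow: dict, node_id: str, prompt: str) -> dict:
    """Return a copy of the workflow JSON with one node input overridden."""
    wf = copy.deepcopy(workflow)
    wf[node_id]["inputs"]["user_prompt"] = prompt
    return wf

def queue_workflow(workflow: dict) -> None:
    """Submit a workflow graph to ComfyUI's execution queue."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFYUI_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Batch mode: run the same graph once per item, varying only the prompt.
# base_workflow = json.load(open("workflow_api.json"))
# for name in ["ceramic mug", "linen tote"]:  # hypothetical product list
#     queue_workflow(set_user_prompt(base_workflow, "3",
#                                    f"Describe the {name} for an ecommerce listing."))
```

    Because only one input changes per item, the rest of the graph (model loaders, samplers, output nodes) is reused unchanged across the whole batch.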

```json
{
  "3": {
    "class_type": "LLM_ChatCompletion",
    "inputs": {
      "api_url": "http://localhost:11434/v1",
      "model": "ertas-product-7b",
      "system_prompt": "Generate a product description for the given image.",
      "user_prompt": "Describe this product in 150 words for an ecommerce listing.",
      "temperature": 0.7,
      "max_tokens": 256
    }
  },
  "4": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": ["3", 0],
      "clip": ["1", 1]
    }
  }
}
```
    ComfyUI workflow node configuration using an Ertas-trained model to generate product descriptions that feed into image generation prompts.
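    The `["3", 0]` value is ComfyUI's node-link notation: output index 0 of node `"3"` feeds the `text` input of node `"4"`, while plain values like `0.7` are literals. A hypothetical resolver illustrating how such links dereference (a sketch for explanation, not ComfyUI's actual execution engine):

```python
def resolve_input(graph: dict, outputs: dict, value):
    """Resolve a node input value: a [node_id, output_index] pair is a
    link to another node's output; anything else is a literal."""
    if isinstance(value, list) and len(value) == 2 and value[0] in graph:
        node_id, out_index = value
        return outputs[node_id][out_index]
    return value

# Example: node "3" (the LLM node) has produced one text output,
# and node "4" (CLIPTextEncode) references it via ["3", 0].
graph = {"3": {}, "4": {"inputs": {"text": ["3", 0]}}}
outputs = {"3": ["A hand-thrown ceramic mug with a matte glaze."]}
text = resolve_input(graph, outputs, graph["4"]["inputs"]["text"])
```

    This is why the Ertas model's generated description can condition image generation directly: the link carries the raw text into the CLIP encoder with no intermediate copy-paste step.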

    Benefits

    • Visual node-based interface makes complex AI pipelines understandable and shareable
    • Combine text generation from fine-tuned models with image generation in one workflow
    • Reproducible workflows stored as JSON for version control and collaboration
    • API-driven execution enables batch processing for production content pipelines
    • Active extension ecosystem with growing LLM integration support
    • Self-hosted and fully private — all generation happens on your hardware
