Flowise + Ertas

    Build AI automation workflows visually in Flowise using Ertas-trained models as the reasoning engine — no code required for sophisticated LLM pipelines.

    Overview

    Flowise is an open-source visual workflow builder for LLM applications. It provides a drag-and-drop canvas where users can connect pre-built nodes — chat models, vector stores, document loaders, tools, and memory modules — into functional AI pipelines without writing code. Built on top of LangChain, Flowise exposes the full power of chain orchestration through an intuitive visual interface that makes LLM application development accessible to non-developers and dramatically faster for experienced engineers.

    Flowise supports deployment as a self-hosted service with a built-in API layer, meaning workflows built in the visual editor can be consumed as REST endpoints by any application. It includes built-in credential management, conversation memory, and analytics dashboards. For teams that need to iterate quickly on AI workflows — testing different prompts, retrieval strategies, and model configurations — Flowise eliminates the code-compile-deploy cycle and replaces it with a real-time visual feedback loop.

    How Ertas Integrates

    Ertas-trained models connect to Flowise through any OpenAI-compatible chat model node. After fine-tuning in Ertas Studio, you deploy your model via Ollama, vLLM, or Ertas Cloud, then add a ChatOpenAI or ChatOllama node in the Flowise canvas and point it to your inference endpoint. The model immediately becomes available as the reasoning engine for any workflow you build — RAG chains, conversational agents, document processing pipelines, or multi-step classification flows.
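
    Before wiring the model into Flowise, it can help to confirm the endpoint responds. The Python sketch below uses the official openai client and assumes an Ollama deployment at its default local address; the model name ertas-support-7b is a placeholder matching the example configuration further down.

    # Sanity-check an OpenAI-compatible endpoint before pointing Flowise at it.
    # The base_url and model name are assumptions for a local Ollama deployment.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
        api_key="ollama",  # Ollama ignores the key, but the client requires one
    )

    response = client.chat.completions.create(
        model="ertas-support-7b",  # placeholder: your fine-tuned model's name
        messages=[{"role": "user", "content": "Say hello in our brand voice."}],
        temperature=0.2,
    )
    print(response.choices[0].message.content)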

    This combination is particularly powerful for business teams and automation agencies. Ertas handles the technical complexity of model fine-tuning and deployment, while Flowise provides the visual interface for building and iterating on AI workflows. A marketing team can fine-tune a model on brand voice in Ertas Studio, deploy it locally, and then use Flowise to build a content generation workflow that pulls from their knowledge base — all without touching a line of code. The result is domain-specific AI automation that would normally require a full ML engineering team.

    Getting Started

    1. Fine-tune your model in Ertas Studio

      Train a model on your specific use case data. Ertas Studio handles dataset preparation, training configuration, and model export automatically.

    2. Deploy to a local or cloud endpoint

      Export the GGUF model and serve it through Ollama or vLLM locally, or deploy to Ertas Cloud for a managed endpoint. A minimal Ollama sketch follows this list.

    3. Add a chat model node in Flowise

      Open the Flowise canvas and drag a ChatOpenAI or ChatOllama node. Configure it with your model endpoint URL and model name.

    4. Build your workflow visually

      Connect document loaders, vector stores, prompt templates, and tool nodes to create your AI workflow. Test it in real-time using the built-in chat interface.

    5. Deploy as an API endpoint

      Publish your Flowise workflow as a REST API. Any application can now consume your domain-specific AI pipeline through a simple HTTP call, as shown in the request sketch after the example configuration below.
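
    For step 2, a minimal Ollama deployment might look like the following. The snippet mixes a Modelfile with two shell commands; the GGUF file name is a placeholder for your Ertas Studio export.

    # Modelfile: points Ollama at the exported GGUF weights
    FROM ./ertas-support-7b.gguf

    # Register and serve the model locally (Ollama listens on port 11434)
    ollama create ertas-support-7b -f Modelfile
    ollama run ertas-support-7b "Hello"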

    {
      "nodes": [
        {
          "id": "chatModel_0",
          "type": "ChatOpenAI",
          "data": {
            "baseUrl": "http://localhost:11434/v1",
            "modelName": "ertas-support-7b",
            "temperature": 0.2,
            "maxTokens": 1024
          }
        },
        {
          "id": "vectorStore_0",
          "type": "InMemoryVectorStore",
          "data": {
            "topK": 5
          }
        },
        {
          "id": "chain_0",
          "type": "ConversationalRetrievalQA",
          "data": {
            "returnSourceDocuments": true
          }
        }
      ]
    }
    Example Flowise workflow configuration connecting an Ertas-trained model to a vector store for conversational Q&A.
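
    Once published (step 5), the chatflow can be called like any REST endpoint. A Python sketch follows; the host, chatflow ID, and API key are placeholders for your own deployment, and the sourceDocuments field appears when returnSourceDocuments is enabled in the chain.

    # Call a published Flowise chatflow over its prediction REST API.
    # The URL, chatflow ID, and API key below are placeholders.
    import requests

    URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"
    headers = {"Authorization": "Bearer <your-flowise-api-key>"}  # omit if unsecured

    resp = requests.post(
        URL,
        json={"question": "What does our warranty cover?"},
        headers=headers,
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()

    print(data["text"])  # the model's answer
    for doc in data.get("sourceDocuments", []):  # present when enabled in the chain
        print(doc["metadata"].get("source"))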

    Benefits

    • No-code AI workflow creation with drag-and-drop visual builder
    • Fine-tuned models work as drop-in replacements in any Flowise chat model node
    • Self-hosted deployment keeps all data within your infrastructure
    • Built-in conversation memory and analytics for production monitoring
    • Rapid iteration on prompts and retrieval strategies without redeploying code (see the sketch after this list)
    • Accessible to non-technical teams while remaining powerful for engineers
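
    As a sketch of that iteration point: Flowise's prediction API accepts an overrideConfig object, so parameters can be changed per request without touching the deployed chatflow. The specific keys shown here (temperature, topK, sessionId) are illustrative; which keys are honored depends on the nodes in your flow.

    # Tweak chatflow parameters per request via overrideConfig (Python sketch).
    # URL and override keys are placeholders for your specific chatflow.
    import requests

    payload = {
        "question": "Draft a product announcement in our brand voice.",
        "overrideConfig": {
            "temperature": 0.7,      # looser sampling for creative output
            "topK": 8,               # pull more documents from the vector store
            "sessionId": "user-42",  # scope conversation memory to one user
        },
    }
    resp = requests.post(
        "http://localhost:3000/api/v1/prediction/<your-chatflow-id>",
        json=payload,
        timeout=60,
    )
    print(resp.json()["text"])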
