    How to Deploy a RAG Pipeline as an API Endpoint Your AI Agent Can Call


    Most RAG tutorials stop at the vector store. Production AI agents need a callable retrieval endpoint with tool-calling specs. Here is how to build and deploy RAG as modular infrastructure, not embedded code.

    Ertas Team

    Every RAG tutorial follows the same arc: load documents, chunk them, embed them, write them to a vector store. Then it ends. The reader is left with a populated vector database and no clear path from there to a production system that an AI agent can actually call.

    The gap between "vectors in a database" and "a retrieval endpoint my agent can query at runtime" is where most RAG projects stall. This guide covers how to deploy RAG as an API endpoint — a callable retrieval service that AI agents can discover and invoke through standard tool-calling protocols.

    Why RAG Tutorials Miss the Point

    The standard RAG tutorial treats retrieval as embedded code. You write a Python script that queries Pinecone or Chroma, assembles context, and feeds it into a prompt. That works in a notebook. It does not work when you need:

    • An AI agent (running in n8n, LangGraph, or your own orchestrator) to call retrieval as a tool
    • Multiple agents or applications to share the same retrieval pipeline
    • Non-engineers to update the knowledge base without touching code
    • An audit trail showing which documents were retrieved for which queries

    The core problem: RAG built as embedded code is not addressable. It has no URL, no schema, no tool-calling spec. An AI agent cannot call a Python function buried in another service's codebase.

    RAG as Embedded Code vs. RAG as Infrastructure

    This distinction determines whether your retrieval system scales beyond a single application.

    Embedded RAG means the retrieval logic lives inside your application code. The vector search, context assembly, and prompt construction are functions within your app. If a second application needs the same knowledge base, you duplicate the code. If you want an AI agent to use it, you write a wrapper endpoint manually.

    RAG as infrastructure means the retrieval pipeline is a standalone service with its own endpoint, its own schema, and its own lifecycle. You deploy RAG as an API endpoint once, and any agent or application that needs retrieval calls it. This is what it means to run RAG as a service on-premise — retrieval becomes a shared capability, not repeated glue code.

    The best way to deploy RAG as an API is to treat indexing and retrieval as two separate pipelines that share a vector store but run independently.

    The Architecture: Two Pipelines, One Canvas

    A production RAG system has two distinct operational modes that run on different schedules with different requirements.

    The Indexing Pipeline (Batch)

    This pipeline processes your source documents and populates the vector store. It runs on a schedule or on-demand when new documents arrive.

    File Import → Parser → Clean → RAG Chunker → Embedding → Vector Store Writer

    Each step is a discrete operation: import files from a source directory or object store, parse them into text (handling PDFs, DOCX, HTML), clean the text by removing boilerplate, chunk it with overlap for retrieval quality, generate embeddings, and write the vectors to your store.

    This pipeline does not need to be running when agents are querying. It runs when data changes.
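The indexing steps above can be sketched in plain Python. Everything here is illustrative: the parser and cleaner are omitted, embed is a stand-in for a real embedding model, and the vector store is an in-memory list.

```python
def chunk_with_overlap(text: str, size: int = 400, overlap: int = 80) -> list[str]:
    """Split cleaned text into fixed-size chunks that overlap, so content
    cut at a chunk boundary still appears whole in the next chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunk: str) -> list[float]:
    # Stand-in for the real embedding call; any deterministic
    # text-to-vector function illustrates the flow.
    return [float(ord(c)) for c in chunk[:16]]

def index_document(doc_id: str, text: str, store: list[dict]) -> None:
    """Chunk, embed, write: source metadata is kept on every chunk
    so the retrieval side can attribute results."""
    for n, chunk in enumerate(chunk_with_overlap(text)):
        store.append({"id": f"{doc_id}:{n}", "vector": embed(chunk),
                      "text": chunk, "source": doc_id})

store: list[dict] = []
index_document("handbook.pdf", "Cleaned document text goes here. " * 40, store)
```

In the real pipeline, the Vector Store Writer replaces the in-memory list, but the per-chunk metadata (id, source) carries through unchanged.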

    The Retrieval Pipeline (Live)

    This pipeline handles incoming queries and returns relevant context. It runs continuously, listening for requests.

    API Endpoint → Query Embedder → Vector Search → Context Assembler → API Response

    The API Endpoint node receives an incoming query, the Query Embedder converts it to a vector using the same embedding model used during indexing, Vector Search finds the nearest neighbors in the store, the Context Assembler ranks and formats the results, and the API Response node returns structured output to the caller.
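That flow can be sketched without any web framework, using cosine similarity for the nearest-neighbor step. The store entry shape (vector, text, source) and the embed callable are assumptions; a deployed pipeline puts an HTTP endpoint in front of a function like handle_query.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def handle_query(query: str, store: list[dict], embed, top_k: int = 3) -> dict:
    qvec = embed(query)  # must use the same model as the indexing pipeline
    ranked = sorted(store, key=lambda e: cosine(qvec, e["vector"]), reverse=True)
    return {  # structured output with attribution, not a raw text blob
        "query": query,
        "chunks": [{"text": e["text"], "source": e["source"],
                    "score": round(cosine(qvec, e["vector"]), 4)}
                   for e in ranked[:top_k]],
    }
```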

    In Ertas Data Suite, both pipelines live on a single visual canvas. You can see the full data flow — from raw documents to vector store to query response — in one view. The retrieval pipeline can be deployed (toggled to a listening state) independently from running the indexing pipeline. This means your RAG retrieval API endpoint stays live and responsive while you re-index documents in the background.

    Walking Through the Retrieval Pipeline

    Each node in the retrieval pipeline handles one concern.

    API Endpoint. This is the entry point. It defines the HTTP interface: accepted parameters (query string, top-k count, optional filters), authentication, and rate limiting. Critically, this node auto-generates a tool-calling specification that describes the endpoint's inputs and outputs in a format AI agents can consume.

    Query Embedder. Takes the raw query string and produces a vector using the same model and parameters as the indexing pipeline's embedding step. Consistency here is non-negotiable — if you used text-embedding-3-small with 512 dimensions during indexing, the query embedder must match exactly.
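One simple way to enforce that consistency is to have both pipelines read a single shared configuration object instead of repeating the model name in two places. The values below are the example from the text, and embed_text is a placeholder, not a real embedding call:

```python
# Read by both the indexing pipeline's Embedding step and the retrieval
# pipeline's Query Embedder; neither side hardcodes its own copy.
EMBEDDING_CONFIG = {
    "model": "text-embedding-3-small",
    "dimensions": 512,
}

def embed_text(text: str, config: dict = EMBEDDING_CONFIG) -> list[float]:
    # Placeholder for the real embedding call; the point is that both
    # call sites receive identical model and dimension settings.
    vector = [0.0] * config["dimensions"]
    for i, byte in enumerate(text.encode("utf-8")):
        vector[i % config["dimensions"]] += byte / 255.0
    return vector
```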

    Vector Search. Executes a nearest-neighbor search against the vector store. Configurable parameters include top-k (how many chunks to retrieve), similarity threshold (minimum relevance score), and metadata filters (restrict search to specific document categories or date ranges).

    Context Assembler. Takes the raw search results and prepares them for consumption. This includes deduplication (overlapping chunks from the same document), relevance re-ranking, source attribution (which document and page each chunk came from), and formatting the output as structured data rather than a raw text blob.
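A minimal assembler sketch, assuming search hits shaped like the retrieval output (text, source, score). Deduplication here is exact-match after whitespace normalization; real near-duplicate detection and re-ranking are more involved:

```python
def assemble_context(hits: list[dict], min_score: float = 0.0) -> list[dict]:
    """Drop low-relevance and duplicate chunks, keep source attribution,
    and return structured entries rather than one concatenated blob."""
    seen_texts: set[str] = set()
    assembled = []
    for hit in sorted(hits, key=lambda h: h["score"], reverse=True):
        key = " ".join(hit["text"].split()).lower()  # normalize whitespace/case
        if hit["score"] < min_score or key in seen_texts:
            continue
        seen_texts.add(key)
        assembled.append({"text": hit["text"], "source": hit["source"],
                          "score": hit["score"]})
    return assembled
```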

    API Response. Serializes the assembled context into the response format. Returns the retrieved chunks, their source metadata, relevance scores, and any diagnostics the caller requested.

    Tool-Calling Integration: How to Make RAG Callable by AI Agents

    The API Endpoint node in Ertas generates tool-calling specs automatically. This is the capability that separates an agent-ready RAG deployment from a basic REST wrapper.

    When you deploy the retrieval pipeline, the API Endpoint node produces a tool specification compatible with OpenAI function calling, Anthropic tool use, and other agent frameworks. The spec describes:

    • The endpoint URL and authentication method
    • Input parameters with types and descriptions (query as string, top_k as integer, filters as optional object)
    • Output schema describing the response structure
    • A natural language description of what the tool does, so the agent can decide when to use it
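To make that concrete, here is what such a spec looks like in OpenAI's function-calling format (Anthropic tool use mirrors the same idea with a slightly different schema). The tool name and parameter details are illustrative, not the exact output of the Ertas node:

```python
# Hypothetical tool spec for a deployed retrieval endpoint, in the
# OpenAI function-calling format. An agent given this spec can decide
# on its own when a query warrants a knowledge-base lookup.
retrieval_tool_spec = {
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": ("Retrieve relevant document chunks from the company "
                        "knowledge base. Use for domain-specific questions "
                        "the model cannot answer from its own knowledge."),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string",
                          "description": "Natural-language search query."},
                "top_k": {"type": "integer", "default": 5,
                          "description": "Number of chunks to return."},
                "filters": {"type": "object",
                            "description": "Optional metadata filters, e.g. "
                                           "document category or date range."},
            },
            "required": ["query"],
        },
    },
}
```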

    An AI agent configured with this spec can autonomously decide to call your RAG retrieval API endpoint when it needs domain-specific information. This is RAG tool calling for AI agents in practice — the agent does not need hardcoded retrieval logic. It discovers the retrieval capability through the tool spec and invokes it when relevant.

    This turns your RAG pipeline into a component of an agentic RAG pipeline, where the agent orchestrates when and how to retrieve information rather than retrieval being a fixed step in every request.

    Why On-Premise Matters for Deployed Retrieval Endpoints

    When you deploy RAG as a live API endpoint, three factors push toward on-premise deployment.

    Latency. A retrieval endpoint sits in the critical path of every agent interaction that needs context. If your agent is running on your infrastructure and the retrieval endpoint is in a third-party cloud, you add a network round trip to every query. On-premise retrieval endpoints on the same network as your agent reduce query latency to single-digit milliseconds for the network hop.

    Data sovereignty. The documents in your vector store are your proprietary data. Every query to a cloud-hosted retrieval service sends your user's question to a third party and receives your proprietary document chunks in response. For regulated industries — healthcare, finance, legal — this is often a non-starter. Running RAG as a service on-premise keeps both the queries and the retrieved content within your network boundary.

    Cost predictability. Cloud RAG services charge per query. A busy AI agent making 10,000 retrieval calls per day generates a variable monthly bill. Deploying the retrieval endpoint on-premise converts this to a fixed infrastructure cost, with predictable economics regardless of query volume.

    Comparison: Three Approaches to Deploying RAG

    | Factor | Custom Python Code | Managed RAG Service | Visual Pipeline (Ertas) |
    | --- | --- | --- | --- |
    | Time to deploy | Days to weeks | Hours | Hours |
    | Tool-calling spec | Manual authoring | Varies by vendor | Auto-generated |
    | On-premise option | Yes (you build it) | Rarely | Yes |
    | Audit trail | You build logging | Vendor-dependent | Built-in |
    | Non-engineer access | No | Limited | Full visual canvas |
    | Indexing/retrieval separation | You architect it | Abstracted away | Explicit two-pipeline model |
    | Per-query cost | Infrastructure only | Per-query fees | Infrastructure only |

    Custom Python retrieval code gives you maximum control but requires you to build the endpoint, the tool-calling spec, the logging, and the operational tooling yourself. Managed RAG services reduce setup time but introduce per-query costs and often lack on-premise deployment options. A visual pipeline approach in Ertas Data Suite gives you the separation of concerns and auto-generated tool-calling specs without writing retrieval infrastructure code.

    The Complete Picture

    Your AI solution becomes two components: an inference API (your fine-tuned model or hosted LLM) and a RAG retrieval endpoint built in Ertas. The inference API handles reasoning. The retrieval endpoint handles knowledge. The AI agent orchestrates both through tool calling.

    No glue code connecting them. No embedded retrieval logic in your application. No vendor dependency on a managed RAG service that charges per query and holds your vectors hostage.

    The indexing pipeline runs when your data changes. The retrieval pipeline runs continuously, serving queries. Both are visible on one canvas, auditable, and maintainable by team members who are not writing Python.

    Get Started

    Ertas Data Suite's Serve category — API Endpoint, Query Embedder, Vector Search, Context Assembler, and API Response — is available now in the design partner program. If you are building AI agents that need a callable retrieval layer, or migrating from embedded RAG code to an agentic RAG pipeline architecture, we would like to work with you.

    Join the design partner program to deploy your first RAG retrieval API endpoint on your own infrastructure.

    Turn unstructured data into AI-ready datasets — without it leaving the building.

    On-premise data preparation with full audit trail. No data egress. No fragmented toolchains. EU AI Act Article 30 compliance built in.
