Mastra + Ertas

    Build production AI agents in TypeScript with Mastra — a framework on top of the Vercel AI SDK that handles workflows, memory, evals, and deployment, with first-class support for fine-tuned local models.

    Overview

    Mastra is a TypeScript-first agent framework built on top of the Vercel AI SDK. It fills the production agent-development gap in TypeScript that frameworks like LangGraph and CrewAI already cover in Python — workflow orchestration, persistent memory, structured evaluations, observability, and deployment patterns — but with TypeScript ergonomics and tight integration with the JavaScript/edge ecosystem. Since reaching 1.0 in January 2026, Mastra has grown to over 22K GitHub stars and 300K+ weekly npm downloads, making it the dominant production agent framework in the TypeScript ecosystem.

    Mastra's design philosophy emphasizes incremental complexity: you can start with a single agent, add tools and workflows as needs grow, and integrate memory and evals without restructuring your codebase. The framework provides first-class primitives for agent state machines (workflows), conversational memory (with multiple storage backends), structured tool use, and human-in-the-loop checkpoints. Because it builds on the Vercel AI SDK, Mastra inherits compatibility with 3,300+ models from 94 providers — including any OpenAI-compatible endpoint serving an Ertas-trained model.
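    The shape of these primitives — workflow steps plus a human-in-the-loop checkpoint — can be illustrated with a plain-TypeScript sketch. This is conceptual only and is not Mastra's actual workflow API; the step names and the "suspend on refund" rule are illustrative assumptions:

```typescript
// Conceptual sketch of a workflow with a human-in-the-loop checkpoint.
// This models the idea only -- Mastra's real workflow API differs.
type StepResult =
  | { status: "done"; output: string }
  | { status: "suspended"; reason: string };

type Step = (input: string) => StepResult;

const draftReply: Step = (ticket) => ({
  status: "done",
  output: `Draft reply for: ${ticket}`,
});

// Checkpoint: suspend until a human approves high-risk replies
const humanReview: Step = (draft) =>
  draft.includes("refund")
    ? { status: "suspended", reason: "needs human approval" }
    : { status: "done", output: draft };

function runWorkflow(input: string, steps: Step[]): StepResult {
  let current = input;
  for (const step of steps) {
    const result = step(current);
    if (result.status === "suspended") return result; // pause for a human
    current = result.output;
  }
  return { status: "done", output: current };
}

const ok = runWorkflow("login issue", [draftReply, humanReview]);
const paused = runWorkflow("refund request", [draftReply, humanReview]);
```

    The point of the sketch is the control flow: a suspended step returns early with its state, which is what lets a framework persist the run and resume it after human approval.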

    How Ertas Integrates

    Ertas-trained models slot into Mastra agents through the Vercel AI SDK's provider abstractions. After fine-tuning a model in Ertas Studio and deploying it to Ollama, vLLM, or Ertas Cloud (any OpenAI-compatible endpoint), you configure a Mastra agent to use that endpoint — typically with two lines of provider configuration. From there, all of Mastra's framework features (tools, workflows, memory, evals) work transparently with your fine-tuned model just as they would with a frontier API model.

    The TypeScript-native design is particularly valuable for teams shipping AI features into existing JavaScript products. Web applications, edge functions, mobile-app backends, and Node.js services can all use Mastra agents directly without bridging to a Python service. Combined with Ertas-trained models served on the same edge infrastructure (e.g., via Ollama on a self-hosted server or Ertas Cloud), this enables fully self-contained TypeScript agent deployments without language boundaries — a common pain point in cross-language agent architectures.

    Getting Started

    1. Fine-tune a model in Ertas Studio

      Train your domain-specific model on JSONL data via Ertas Studio. The fine-tuned model captures your domain vocabulary, reasoning patterns, and tool-use conventions.

    2. Deploy to an OpenAI-compatible endpoint

      Export to GGUF and serve via Ollama, vLLM, or Ertas Cloud. Mastra works with any endpoint that exposes the standard /v1/chat/completions API.

    3. Install Mastra and configure the model provider

      Install @mastra/core and configure a custom OpenAI-compatible provider pointed at your inference endpoint. The Vercel AI SDK's createOpenAI helper handles this directly.

    4. Define your agent with tools and workflows

      Create a Mastra agent with role description, tools (function calls, RAG retrievers), and workflow steps. Add persistent memory if your use case spans multi-session interactions.

    5. Add evaluations and ship

      Configure Mastra evals to track agent quality on a representative test set as you iterate. Deploy to Vercel, your own infrastructure, or any Node.js host.
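    Step 1's JSONL data is one JSON object per line, commonly in the chat `messages` format. A sketch of building one such line — note the exact schema Ertas Studio expects is an assumption here, and the example content is invented:

```typescript
// Build one JSONL line per training example in the common chat format.
// The exact field names Ertas Studio expects are an assumption here.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function toJsonlLine(messages: ChatMessage[]): string {
  return JSON.stringify({ messages });
}

const line = toJsonlLine([
  { role: "system", content: "You are a support agent for Acme SaaS." },
  { role: "user", content: "How do I rotate my API key?" },
  { role: "assistant", content: "Go to Settings > API Keys and click Rotate." },
]);
// Each line is a standalone JSON object -- join lines with "\n" to form the file.
```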
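    The compatibility contract in step 2 is just the standard chat-completions request shape. A minimal sketch of a smoke test against a local endpoint — the base URL and model name are placeholders for your own deployment:

```typescript
// Minimal chat-completions request body for any OpenAI-compatible endpoint.
// The model name below is a placeholder for your deployment.
interface ChatCompletionRequest {
  model: string;
  messages: { role: string; content: string }[];
}

function buildRequest(model: string, userMessage: string): ChatCompletionRequest {
  return {
    model,
    messages: [{ role: "user", content: userMessage }],
  };
}

async function smokeTest(baseURL: string) {
  // POST to the standard path that Ollama, vLLM, and Ertas Cloud all expose
  return fetch(`${baseURL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequest("ertas-support-7b", "ping")),
  });
}

const req = buildRequest("ertas-support-7b", "ping");
// e.g. await smokeTest("http://localhost:11434") against a running Ollama server
```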

    typescript
    import { createOpenAI } from "@ai-sdk/openai";
    import { Agent } from "@mastra/core";
    import { z } from "zod";
    
    // Point Mastra at your Ertas-trained model served via Ollama
    const ertas = createOpenAI({
      baseURL: "http://localhost:11434/v1",
      apiKey: "not-needed", // Ollama ignores the key, but the SDK requires a value
    });
    
    const supportAgent = new Agent({
      name: "support-agent",
      instructions: "You are a customer support agent for an enterprise SaaS platform.",
      model: ertas("ertas-support-7b"),
      tools: {
        lookupCustomer: {
          description: "Look up customer details by ID",
          parameters: z.object({ customerId: z.string() }),
          execute: async ({ customerId }) => {
            // `db` is your application's database client (not shown here)
            return await db.customers.findById(customerId);
          },
        },
      },
    });
    
    const result = await supportAgent.generate([
      { role: "user", content: "What's the status of my account?" },
    ]);
    console.log(result.text);
    Wire an Ertas-trained model into a Mastra agent via the Vercel AI SDK's OpenAI-compatible provider, then add tools and ship.
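    The eval loop in step 5 reduces to scoring agent outputs against a representative test set. A hand-rolled sketch of that loop — this is not Mastra's evals API, and the keyword check is a deliberately crude stand-in for a real scorer:

```typescript
// Hand-rolled eval sketch: score each response against expected keywords.
// Mastra's built-in evals replace this with proper metrics; this shows the loop.
interface EvalCase {
  input: string;
  mustContain: string[];
}

function scoreResponse(response: string, expected: string[]): number {
  const hits = expected.filter((kw) =>
    response.toLowerCase().includes(kw.toLowerCase()),
  );
  return hits.length / expected.length; // fraction of required facts present
}

function runEvalSuite(
  cases: EvalCase[],
  agentFn: (input: string) => string,
): number {
  const scores = cases.map((c) => scoreResponse(agentFn(c.input), c.mustContain));
  return scores.reduce((a, b) => a + b, 0) / scores.length; // mean score
}

// Stub agent standing in for a real supportAgent.generate call
const stubAgent = (input: string) =>
  input.includes("refund")
    ? "Refunds take 5-7 business days."
    : "Please contact support.";

const suiteScore = runEvalSuite(
  [
    { input: "refund timeline?", mustContain: ["refund", "business days"] },
    { input: "reset password", mustContain: ["password"] },
  ],
  stubAgent,
);
```

    Tracking a number like `suiteScore` across fine-tuning iterations is what turns "the agent feels better" into a regression check you can gate deploys on.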

    Benefits

    • TypeScript-native — ship agents directly into web apps, edge functions, and Node.js services
    • First-class Vercel AI SDK integration with 3,300+ supported models including local Ertas-trained models
    • Built-in workflow orchestration, persistent memory, and structured evaluations
    • Production-ready logging, observability, and human-in-the-loop checkpoints
    • Deploy to Vercel edge or any Node.js infrastructure without language boundaries
    • Active and growing TypeScript agent ecosystem with strong open-source community
