Vercel AI SDK + Ertas

    Build AI features with the Vercel AI SDK — a TypeScript-first SDK with a unified interface across 94 providers, streaming UI, structured output, tool calling, and full support for fine-tuned local models.

    Overview

    The Vercel AI SDK is the dominant TypeScript SDK for building AI-powered applications, supporting 3,300+ models from 94 providers as of March 2026. Its core value proposition is provider-agnostic ergonomics: the same `generateText`, `streamText`, `generateObject`, and `streamObject` calls work across OpenAI, Anthropic, Google, Mistral, Hugging Face, local Ollama models, and any custom provider you configure. This makes it trivial to swap between models for cost optimization, A/B testing, or fallback patterns without changing application code.

    Beyond the unified provider interface, the AI SDK ships with first-class primitives for the patterns that matter in production: streaming UI components for React, Vue, and Svelte; structured-output validation via Zod; built-in tool calling with parallel execution; and durable workflow patterns through the Vercel Workflow DevKit. The SDK is the foundation underneath Mastra (the production agent framework) and is widely used directly for simpler chat and completion use cases. For TypeScript teams, it's effectively the default infrastructure layer for AI features.

    How Ertas Integrates

    Ertas-trained models integrate with the Vercel AI SDK through the official OpenAI-compatible provider. After fine-tuning your model in Ertas Studio and deploying to an OpenAI-compatible endpoint (Ollama, vLLM, or Ertas Cloud), you create a custom provider with `createOpenAI` pointed at your endpoint base URL. From that point, every Vercel AI SDK feature — streaming, structured output, tool calling, multi-modal input, agent loops via Mastra — works transparently with your fine-tuned model.

    The combination shines for TypeScript-native AI products. Web applications, Next.js apps, edge functions, and Node.js services can use Ertas-trained models directly without bridging to Python — eliminating a common pain point in cross-language AI architectures. For Vercel-hosted applications, the AI SDK's edge runtime support enables low-latency global inference when paired with Ertas Cloud's regional deployment options. For self-hosted applications, the same code paths work against an Ollama or vLLM endpoint on your own infrastructure with no application-level changes.

    Getting Started

    1. Fine-tune your model in Ertas Studio

       Train a domain-specific model on your data. The fine-tuned model captures domain vocabulary, patterns, and tool-use conventions for use across all your AI features.

    2. Deploy to an OpenAI-compatible endpoint

       Export to GGUF and serve via Ollama, vLLM, or Ertas Cloud. The AI SDK works with any endpoint exposing the standard /v1/chat/completions API.

    3. Install the Vercel AI SDK

       Install `ai` (the core SDK) and `@ai-sdk/openai` (the OpenAI-compatible provider). The same install supports all standard AI SDK features.

    4. Create a custom provider for your Ertas endpoint

       Use `createOpenAI` with your endpoint base URL. The resulting provider works with `generateText`, `streamText`, `generateObject`, and tool-calling APIs.

    5. Build streaming UI, agents, or structured outputs

       Use the AI SDK's React components for streaming chat, Zod-validated structured outputs, or parallel tool-calling agents — all backed by your Ertas-trained model.
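The deploy and install steps above map to a few commands; the model name and Modelfile are placeholders for your own export:

```shell
# Serve the exported GGUF via Ollama (any OpenAI-compatible server works)
ollama create ertas-support-7b -f Modelfile   # Modelfile points at your .gguf export
ollama serve                                  # exposes http://localhost:11434/v1

# Install the AI SDK core, the OpenAI-compatible provider, and Zod
npm install ai @ai-sdk/openai zod
```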

    ```typescript
    import { createOpenAI } from "@ai-sdk/openai";
    import { generateObject, streamText } from "ai";
    import { z } from "zod";

    // Create a provider pointed at your Ertas-trained model
    const ertas = createOpenAI({
      baseURL: "http://localhost:11434/v1",
      apiKey: "not-needed",
    });

    // Streaming chat
    const stream = streamText({
      model: ertas("ertas-support-7b"),
      messages: [{ role: "user", content: "How do I cancel my subscription?" }],
    });

    for await (const chunk of stream.textStream) {
      process.stdout.write(chunk);
    }

    // Structured output with Zod validation
    const ticket = await generateObject({
      model: ertas("ertas-support-7b"),
      schema: z.object({
        category: z.enum(["billing", "technical", "account"]),
        priority: z.enum(["low", "medium", "high"]),
        summary: z.string(),
        suggestedAction: z.string(),
      }),
      prompt: "Classify this support email: [email body here]",
    });

    console.log(ticket.object); // Type-safe, validated
    ```

    Use the Vercel AI SDK with an Ertas-trained model for streaming chat and Zod-validated structured outputs — same code works with any OpenAI-compatible provider.

    Benefits

    • Unified TypeScript interface — same code works across 3,300+ models from 94 providers
    • First-class streaming UI primitives for React, Vue, Svelte, and SolidJS
    • Structured output via Zod schemas — type-safe agentic outputs without manual parsing
    • Built-in parallel tool calling with strong typing throughout
    • Foundation for Mastra agent framework and Vercel Workflow DevKit
    • Edge runtime support enables low-latency global inference deployment patterns
