Continue.dev + Ertas
Pair Ertas fine-tuned models with Continue.dev's open-source AI coding extension, giving your team a fully self-hosted, customizable AI coding assistant that understands your codebase and runs entirely on your infrastructure.
Overview
Continue.dev is the leading open-source AI coding assistant, available as an extension for VS Code and JetBrains IDEs. Unlike proprietary AI tools, Continue gives developers full control over which models power their AI features — supporting any OpenAI-compatible endpoint, Ollama, LM Studio, and dozens of other providers. Its features include tab autocomplete, inline editing, codebase-aware chat, and custom slash commands, all configurable through a simple JSON file. This flexibility has made Continue the go-to choice for developers and teams who want AI coding assistance without vendor lock-in.
Continue's model-agnostic design means you can swap between any model, but the quality of assistance is only as good as the model behind it. General-purpose models provide broad competence across programming languages and frameworks, yet they lack the specific knowledge of your project's internal APIs, custom abstractions, and team conventions. This is where the combination of a fine-tuned model and Continue's open architecture creates a uniquely powerful workflow — purpose-built intelligence delivered through a purpose-built tool.
How Ertas Integrates
Ertas and Continue.dev are a natural pairing. Ertas Studio handles the model customization side — letting you curate training data from your codebase, fine-tune a code model with LoRA, and export it in a deployment-ready format. Continue handles the developer experience side — providing the editor integration, context gathering, and UI that makes the model useful in daily work. Together, they form a complete, self-hosted AI coding stack where every component is under your control.
The integration is straightforward: fine-tune a model in Ertas Studio, deploy it through Ollama, and point Continue's configuration to the local endpoint. Continue's `config.json` lets you specify different models for different tasks — your fine-tuned model for autocomplete and code generation where project knowledge matters, and a general model for broader questions. Because Continue is open-source and Ertas keeps everything local, the entire pipeline from training data to inference operates on your infrastructure with zero data leaving your network.
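Once Ollama is serving a model, you can confirm the OpenAI-compatible endpoint is reachable before wiring up Continue. A quick sketch (the model name `ertas-code` is a placeholder for whatever you named your fine-tuned model):

```shell
# Query Ollama's OpenAI-compatible endpoint (default port 11434).
# "ertas-code" is a placeholder model name.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ertas-code",
    "messages": [{"role": "user", "content": "Write a hello world in Python"}]
  }'
```

If this returns a completion, Continue can use the same endpoint.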
Getting Started
1. Prepare your codebase training data
Collect high-quality code samples that embody your team's standards: approved PRs, internal library code, documentation, and configuration examples. Organize them into instruction-completion pairs that capture your conventions and patterns.
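For illustration, instruction-completion pairs are commonly stored as JSON Lines, one example per line. The field names below are a widespread convention, not a fixed Ertas schema:

```json
{"instruction": "Add a retry wrapper around this HTTP call using our internal client conventions", "completion": "async function fetchWithRetry(url) { /* team-standard retry logic */ }"}
{"instruction": "Write a docstring for this function in our house style", "completion": "/** Fetches the user profile. Throws ApiError on non-2xx responses. */"}
```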
2. Fine-tune a model in Ertas Studio
Upload the curated dataset and select a code-capable base model. Configure fine-tuning parameters — LoRA rank, learning rate, and epoch count — then launch the training job. Compare experiments in Ertas to find the optimal configuration.
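As a rough starting point, a LoRA configuration for a code model often looks something like the following. These are illustrative values, not Ertas defaults; treat them as a first experiment to iterate on:

```yaml
# Illustrative LoRA hyperparameters; compare runs in Ertas Studio.
lora_rank: 16        # adapter capacity; higher = more expressive, more memory
lora_alpha: 32       # scaling factor, commonly set to 2x the rank
learning_rate: 2e-4
epochs: 3
```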
3. Deploy the model via Ollama
Export the trained model in GGUF format and register it with Ollama. Start the Ollama server to expose an OpenAI-compatible endpoint that Continue can connect to for inference.
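Registering a GGUF export with Ollama takes a short Modelfile and one `ollama create` call. A sketch, assuming the export is named `ertas-code.gguf` (both file and model names are placeholders):

```shell
# Point a Modelfile at the exported GGUF weights.
echo 'FROM ./ertas-code.gguf' > Modelfile

# Register the model with Ollama under a local name.
ollama create ertas-code -f Modelfile

# Start the server; Ollama listens on http://localhost:11434 by default
# and exposes an OpenAI-compatible API under /v1.
ollama serve
```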
4. Configure Continue to use your fine-tuned model
Edit Continue's `config.json` to add your Ollama endpoint as a model provider. Assign your fine-tuned model to the autocomplete and chat roles, optionally keeping a general model available for broad programming questions.
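A minimal `config.json` along these lines might look as follows. Model names (`ertas-code`, `llama3.1`) are placeholders for whatever you registered with Ollama:

```json
{
  "models": [
    {
      "title": "Ertas Fine-Tuned",
      "provider": "ollama",
      "model": "ertas-code",
      "apiBase": "http://localhost:11434"
    },
    {
      "title": "General Model",
      "provider": "ollama",
      "model": "llama3.1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Ertas Autocomplete",
    "provider": "ollama",
    "model": "ertas-code"
  }
}
```

With this routing, tab autocomplete always hits the fine-tuned model, while chat lets you pick either model from Continue's model dropdown.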
5. Build custom slash commands for your workflow
Leverage Continue's custom slash commands to create team-specific actions — like generating unit tests in your preferred framework, scaffolding components with your conventions, or explaining internal APIs — all powered by your fine-tuned model.
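Such actions live in the `customCommands` section of `config.json`. A sketch, with an example command name and prompt (adapt both to your team's conventions):

```json
{
  "customCommands": [
    {
      "name": "gentest",
      "description": "Generate unit tests in our standard framework",
      "prompt": "Write unit tests for the following code, following our team's testing conventions and naming patterns: {{{ input }}}"
    }
  ]
}
```

Typing `/gentest` in Continue's chat then runs the prompt against the selected code, using whichever model is active.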
Benefits
- Fully open-source assistant paired with a model trained specifically on your codebase
- Complete self-hosted stack — no proprietary services, no data leaving your network
- Flexible model routing: fine-tuned model for project tasks, general model for broad questions
- Custom slash commands powered by a model that understands your internal patterns
- No per-seat licensing costs for either the editor extension or the inference runtime
- Transparent, auditable pipeline from training data through model deployment to developer UX