GitHub Copilot + Ertas
Supplement GitHub Copilot's general-purpose suggestions with a fine-tuned model trained on your team's codebase, coding standards, and internal APIs — giving you contextually accurate completions that respect your project's conventions.
Overview
GitHub Copilot is the most widely adopted AI coding assistant in the world, embedded directly into VS Code, JetBrains IDEs, and Neovim through official extensions. Powered by large language models trained on vast open-source repositories, Copilot offers real-time code completions, chat-based explanations, and inline code generation that dramatically accelerate day-to-day development. For millions of developers, Copilot has become as essential as syntax highlighting — always present, always suggesting.
However, Copilot's general-purpose training means it lacks awareness of your specific project structure. It doesn't know your internal SDK methods, your team's preferred error-handling patterns, or the naming conventions codified in your style guide. Suggestions often default to popular open-source idioms rather than your organization's established patterns. For teams maintaining large proprietary codebases, this gap between generic and project-specific intelligence represents a persistent source of friction — each correction pulling developers out of their flow state.
How Ertas Integrates
Ertas enables you to build a complementary fine-tuned model that captures your team's coding DNA. By curating training data from approved pull requests, internal documentation, and exemplary modules, you create a dataset that encodes your conventions, preferred abstractions, and architectural patterns. Ertas Studio handles the fine-tuning workflow end-to-end — from dataset validation to hyperparameter tuning to experiment tracking — so your engineers can produce a high-quality custom model without needing machine learning expertise.
The fine-tuned model deploys locally through Ollama or any OpenAI-compatible inference server, running alongside Copilot as a secondary intelligence layer. You can route specific queries to your custom model using tools like Continue.dev or Cursor that support multiple model endpoints, or use the model for code review and generation tasks where project-specific accuracy matters most. Training and inference happen entirely on your infrastructure, so proprietary code never leaves your network — addressing both the accuracy gap and the data privacy concerns that enterprise teams face with cloud-only AI assistants.
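Because the local runtime speaks the OpenAI-compatible protocol, any standard HTTP client can query the fine-tuned model. A minimal sketch of the request body such a server expects, assuming Ollama's default endpoint; the model name `ertas-team-coder` is purely illustrative:

```python
import json

# Ollama's OpenAI-compatible endpoint (default port). Adjust for your runtime.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"


def build_completion_request(prompt: str, model: str = "ertas-team-coder") -> dict:
    """Build the JSON body an OpenAI-compatible chat-completions server expects.

    The model name here is a placeholder for whatever you registered locally.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature favors deterministic code output
    }


body = build_completion_request(
    "Implement a retry wrapper using our internal HTTP client."
)
payload = json.dumps(body)  # ready to POST to LOCAL_ENDPOINT
```

POSTing `payload` to `LOCAL_ENDPOINT` returns a standard chat-completions response, so existing OpenAI client libraries work unchanged by pointing their base URL at the local server.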
Getting Started
1. Assemble a training corpus from your codebase
Gather high-quality code samples that represent your team's standards: merged pull requests, well-documented modules, internal library examples, and style guide references. Structure these as instruction-completion pairs that teach the model your conventions.
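An instruction-completion pair can be serialized as one JSONL line per example. The field names below follow a common fine-tuning convention and are an assumption, not Ertas Studio's documented schema; check the product docs for the expected format:

```python
import json

# Illustrative sketch: turn a reviewed code sample into an
# instruction-completion pair serialized as a single JSONL line.
# The "instruction"/"completion" keys are a common convention, not
# necessarily the exact schema Ertas Studio expects.
def make_training_pair(instruction: str, completion: str) -> str:
    return json.dumps({"instruction": instruction, "completion": completion})


pair = make_training_pair(
    "Write a handler that validates input using our internal validators module.",
    "from internal.validators import validate_payload\n\n"
    "def handler(payload):\n"
    "    validate_payload(payload)\n",
)
```

Appending one such line per approved example to a `.jsonl` file yields a dataset that encodes real, reviewed usage of your internal APIs.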
2. Fine-tune a code model in Ertas Studio
Upload your dataset to Ertas Studio and choose a code-specialized base model like CodeLlama or DeepSeek Coder. Configure LoRA parameters for efficient training, then launch the fine-tuning job. Ertas tracks each experiment so you can compare model quality across runs.
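A LoRA configuration for such a run might look like the sketch below. All field names and values are illustrative, chosen to show typical knobs (rank, alpha, dropout, target modules); they are not Ertas Studio's actual schema:

```yaml
# Hypothetical fine-tuning config. Field names are illustrative,
# not Ertas Studio's documented schema.
base_model: deepseek-coder-6.7b-base   # example code-specialized base
lora:
  r: 16                # adapter rank: higher = more capacity, more memory
  alpha: 32            # scaling factor, commonly set to 2x the rank
  dropout: 0.05
  target_modules: [q_proj, k_proj, v_proj, o_proj]
training:
  epochs: 3
  learning_rate: 2.0e-4
```

Keeping these parameters in a versioned config makes experiment comparisons across runs reproducible.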
3. Export and deploy the model locally
Download the fine-tuned model in GGUF format for local deployment. Register it with Ollama or another local inference runtime to expose an OpenAI-compatible API endpoint on your network.
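Assuming Ollama as the runtime, registration amounts to a short Modelfile plus two CLI commands. The model and file names here are illustrative:

```
# Modelfile — point Ollama at the exported GGUF weights
FROM ./team-coder.gguf

# Then register and serve it (Ollama CLI):
#   ollama create team-coder -f Modelfile
#   ollama serve    # exposes an OpenAI-compatible API at http://localhost:11434/v1
```

Once serving, the model is reachable by name (`team-coder` in this sketch) from any tool that can target an OpenAI-compatible endpoint.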
4. Configure a secondary assistant alongside Copilot
Set up Continue.dev or a similar multi-model tool in your editor to route queries to your fine-tuned model endpoint. This gives you Copilot's broad knowledge for general tasks and your custom model's precision for project-specific code generation.
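For Continue.dev, a fragment like the following registers the local model as a selectable endpoint. Continue's config schema varies across versions; this follows the classic `config.json` shape, and the model name is whatever you registered with your local runtime:

```json
{
  "models": [
    {
      "title": "Team Coder (local)",
      "provider": "ollama",
      "model": "team-coder"
    }
  ]
}
```

With both assistants configured, Copilot continues handling inline completions while the custom model is available on demand for project-specific generation and review.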
5. Refine the model with ongoing feedback
Track cases where your fine-tuned model produces incorrect or suboptimal suggestions. Add corrected examples to your training dataset and run incremental fine-tuning in Ertas to steadily improve alignment with your evolving codebase.
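The feedback loop in step 5 can be as simple as appending corrected examples to a dataset file that feeds the next incremental run. A minimal sketch, with the file path and record schema as assumptions rather than anything Ertas prescribes:

```python
import json
from pathlib import Path

# Illustrative feedback loop: record each correction so the next
# incremental fine-tuning run can learn from real mistakes.
# The path and the field names are assumptions, not Ertas's schema.
DATASET = Path("training/corrections.jsonl")


def record_correction(prompt: str, rejected: str, corrected: str) -> None:
    """Append one corrected example as a JSONL line."""
    DATASET.parent.mkdir(parents=True, exist_ok=True)
    entry = {"instruction": prompt, "rejected": rejected, "completion": corrected}
    with DATASET.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


record_correction(
    "Add logging to the payment worker",
    "print('processing payment')",
    "logger.info('processing payment', extra={'worker': 'payments'})",
)
```

Re-running fine-tuning on the accumulated file closes the loop: the model's weakest areas are exactly where the next round of training data concentrates.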
Benefits
- Project-aware completions that use your internal APIs, utility functions, and naming conventions
- Full data privacy — proprietary code stays on your infrastructure during both training and inference
- Works alongside Copilot rather than replacing it, combining broad and specialized intelligence
- Eliminates repetitive manual corrections for team-specific patterns and architectural idioms
- Zero per-token inference costs when running the fine-tuned model on your own hardware
- Continuous improvement cycle as you feed real-world corrections back into the training pipeline