Tabnine + Ertas
Augment Tabnine's AI code completions with a model fine-tuned on your organization's codebase, combining Tabnine's inline suggestion engine with deep knowledge of your proprietary APIs, patterns, and coding standards.
Overview
Tabnine is a veteran AI code completion tool trusted by hundreds of thousands of developers and adopted by enterprises for its focus on code privacy and security. Available across all major IDEs — VS Code, JetBrains, Neovim, and more — Tabnine provides real-time inline completions, whole-function generation, and natural language-to-code translation. Its enterprise tier offers on-premises deployment and SOC 2 compliance, making it a popular choice for organizations with strict data governance requirements.
While Tabnine's enterprise features include the ability to index your codebase for improved context, the underlying model's knowledge is still rooted in general programming patterns. It can reference your project files for context, but it hasn't internalized the deeper conventions that define your team's work: the specific abstraction layers you prefer, the error handling strategies codified in your guidelines, or the testing patterns your CI pipeline expects. This means completions often require manual adjustment to match your team's established practices.
How Ertas Integrates
Ertas allows you to create a fine-tuned model that has genuinely learned your team's coding patterns at the weight level — not just referencing files for context, but understanding the statistical patterns of how your team writes code. By training on curated examples from your codebase — approved pull requests, canonical module implementations, and style guide examples — you produce a model that natively generates code in your team's voice. Ertas Studio simplifies the entire workflow from dataset preparation through training to export, with experiment tracking to optimize model quality.
The fine-tuned model deploys through an OpenAI-compatible local endpoint and can be accessed alongside Tabnine through editor tools that support multiple AI providers. For teams already using Tabnine's enterprise features, the Ertas-trained model serves as a specialized complement — handling domain-specific generation tasks where project knowledge matters most, while Tabnine continues to provide fast general completions. All training and inference remain on your infrastructure, aligning with the same data privacy principles that make Tabnine attractive to enterprises.
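Because the endpoint is OpenAI-compatible, any client that can speak that protocol can query the fine-tuned model. A minimal sketch of building such a request is below; the model name and the system prompt are illustrative assumptions, not values Ertas prescribes.

```python
# Sketch: build a chat-completions payload for an OpenAI-compatible
# local endpoint. The model name "ertas-team-model" is an assumption --
# substitute whatever name your inference server registers.

def build_completion_request(prompt: str, model: str = "ertas-team-model") -> dict:
    """Assemble a chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Generate code following team conventions."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # low temperature keeps completions close to learned patterns
        "max_tokens": 256,
    }

# POST this payload to your local server, e.g.
# http://localhost:11434/v1/chat/completions for a default Ollama install.
payload = build_completion_request("Write a retry wrapper for our HTTP client")
```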
Getting Started
1. Build a training dataset from your best code
Identify exemplary code across your repositories: thoroughly reviewed PRs, well-architected modules, internal API implementations, and documentation that captures your team's standards. Structure these into training examples that teach coding conventions.
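One common way to structure such examples is chat-style JSONL, pairing an instruction with the approved code. This is a minimal sketch under that assumption; Ertas Studio's exact expected schema may differ.

```python
import json
import os
import tempfile

def to_training_example(instruction: str, code: str) -> dict:
    """Wrap one curated snippet as a chat-style fine-tuning example."""
    return {
        "messages": [
            {"role": "user", "content": instruction},
            {"role": "assistant", "content": code},
        ]
    }

def write_jsonl(examples: list, path: str) -> None:
    """Write one JSON object per line -- the usual fine-tuning format."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

# Hypothetical example drawn from an approved PR:
examples = [
    to_training_example(
        "Add a request handler following our error-handling guidelines",
        "def handler(req):\n    ...",
    )
]
path = os.path.join(tempfile.gettempdir(), "train.jsonl")
write_jsonl(examples, path)
```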
2. Fine-tune a model in Ertas Studio
Upload the dataset to Ertas Studio and choose a code-focused base model. Set up LoRA fine-tuning with appropriate rank and learning rate parameters. Launch the training run and use Ertas's built-in evaluation tools to assess the model's quality on held-out examples.
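For orientation, these are common LoRA starting points for code models. The field names below are illustrative assumptions, not Ertas Studio's actual configuration keys; treat the values as a baseline to tune from.

```python
# Illustrative LoRA hyperparameters -- field names and values are
# assumptions, not Ertas Studio's schema. Common starting points:
lora_config = {
    "r": 16,                # LoRA rank: higher captures more nuance, costs more memory
    "lora_alpha": 32,       # scaling factor, often set to 2x the rank
    "lora_dropout": 0.05,   # light regularization against overfitting small datasets
    "learning_rate": 2e-4,  # typical for LoRA; full fine-tunes use far lower rates
    "num_epochs": 3,        # few epochs -- curated datasets overfit quickly
    "target_modules": ["q_proj", "v_proj"],  # attention projections to adapt
}
```

Evaluating on held-out examples after each run tells you whether to raise the rank (model misses patterns) or lower epochs (model memorizes the training set).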
3. Deploy the model on your infrastructure
Export the fine-tuned model in GGUF format and register it with Ollama or another local inference server. Confirm the endpoint serves responses correctly and that latency is suitable for interactive code completion workflows.
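A simple latency check can be sketched as a timing wrapper around a request function. The helper below is a generic illustration (not part of Ertas or Ollama); in practice the callable would send a short completion request to your local endpoint.

```python
import time

def measure_latency_ms(call, runs: int = 5) -> float:
    """Average wall-clock latency of `call` over several runs, in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        call()  # e.g. a short completion request to the local endpoint
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# For interactive completion, sub-second averages are a reasonable target;
# anything slower suggests trying a smaller quantization of the GGUF export.
```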
4. Set up a multi-provider coding workflow
Install a tool like Continue.dev alongside Tabnine in your editor to access your fine-tuned model for specific tasks — code generation, refactoring, and review — while keeping Tabnine active for fast inline completions.
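As one concrete possibility, Continue.dev reads a JSON configuration listing available models. The fragment below is a hedged sketch: the `title` and `model` values are assumptions, and you should check Continue's current documentation for the exact schema your version expects.

```json
{
  "models": [
    {
      "title": "Ertas fine-tuned (team model)",
      "provider": "ollama",
      "model": "ertas-team-model"
    }
  ]
}
```

With this in place, the fine-tuned model appears in Continue's model picker for chat, refactoring, and review tasks, while Tabnine keeps handling inline completions independently.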
5. Continuously improve with real-world feedback
Gather instances where the model's output diverges from your team's expectations. Incorporate corrected examples into your training data and run follow-up fine-tuning iterations in Ertas to progressively sharpen the model's accuracy.
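The feedback loop above can be sketched as a small helper that merges corrected examples into the existing JSONL training set, skipping duplicates so repeated corrections don't skew later runs. The chat-style schema is the same assumption as in step 1.

```python
import json

def append_corrections(dataset_path: str, corrections: list) -> int:
    """Append (prompt, corrected_code) pairs to a JSONL training set.

    Skips examples whose corrected code is already present.
    Returns the number of examples actually added.
    """
    seen = set()
    try:
        with open(dataset_path) as f:
            for line in f:
                ex = json.loads(line)
                seen.add(ex["messages"][-1]["content"])
    except FileNotFoundError:
        pass  # first iteration: no dataset yet

    added = 0
    with open(dataset_path, "a") as f:
        for prompt, corrected_code in corrections:
            if corrected_code in seen:
                continue
            ex = {"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": corrected_code},
            ]}
            f.write(json.dumps(ex) + "\n")
            seen.add(corrected_code)
            added += 1
    return added
```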
Benefits
- Deep codebase knowledge baked into model weights, not just retrieved from file context
- Completions that consistently follow your team's naming, architecture, and testing patterns
- Enterprise-grade data privacy with all training and inference on your own hardware
- Complements Tabnine's fast inline suggestions with specialized domain generation
- No per-user AI subscription costs for the fine-tuned model, regardless of team size
- Experiment tracking in Ertas Studio to systematically optimize model performance