Aider + Ertas
Connect your Ertas fine-tuned models to Aider's terminal-based AI pair programmer, enabling codebase-aware multi-file editing and code generation directly from the command line with a model that knows your project's conventions.
Overview
Aider is a powerful AI pair programming tool that runs entirely in the terminal, enabling developers to collaborate with language models on real code changes across multiple files. Unlike editor-based AI tools, Aider operates at the git level — it understands your repository structure, can edit multiple files in a single conversation, automatically creates git commits for changes, and integrates seamlessly into terminal-centric workflows. Developers describe what they want in natural language, and Aider generates the code changes, applies them to the working tree, and commits the results.
Aider supports a wide range of model providers — OpenAI, Anthropic, local Ollama endpoints, and any OpenAI-compatible API. This flexibility makes it ideal for developers who prefer the terminal and want granular control over their AI tools. However, like all AI coding tools, the quality of Aider's output depends on the model behind it. General-purpose models handle common patterns well but struggle with project-specific abstractions, internal framework APIs, and the particular coding idioms your team has standardized on.
How Ertas Integrates
Ertas lets you train a model that speaks your codebase's language, and Aider gives that model the ability to act on your repository directly. By fine-tuning on your team's code — PR histories, internal libraries, architectural patterns, and style guides — you create a model in Ertas Studio that generates code matching your conventions by default. When connected to Aider, this model can make multi-file changes that respect your project structure, use your actual utility functions, and follow your error handling patterns without constant correction.
The setup leverages Aider's native support for OpenAI-compatible endpoints. Deploy your fine-tuned model through Ollama and point Aider at the local endpoint using its `--openai-api-base` flag. Aider's repository mapping and git integration handle the rest — your custom model receives full context about the files being edited and generates changes that Aider applies and commits. The entire workflow stays on your machine: Ertas trains the model, Ollama serves it, and Aider orchestrates the code changes, with zero external API calls and no proprietary code leaving your network.
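The pipeline described above can be sketched in a few commands. This is a minimal illustration, not a verified recipe: the GGUF filename `ertas-coder.gguf` and the model tag `ertas-coder` are assumptions, and Ollama's default endpoint (`http://localhost:11434`) is used.

```shell
# Register the Ertas-exported GGUF with Ollama
# (model tag and Modelfile contents are illustrative)
ollama create ertas-coder -f Modelfile   # Modelfile points at ./ertas-coder.gguf

# Serve locally; Ollama listens on http://localhost:11434 by default
# (skip this if Ollama already runs as a background service)
ollama serve &

# Point Aider at the local OpenAI-compatible endpoint
aider --openai-api-base http://localhost:11434/v1 \
      --model openai/ertas-coder
```

The `openai/` prefix tells Aider to treat the endpoint as a generic OpenAI-compatible API rather than routing to a known provider.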
Getting Started
1. Curate training examples from your repositories
Gather representative samples of your team's code: well-reviewed PRs with clear descriptions, canonical module implementations, internal API documentation, and examples of your standard patterns for error handling, testing, and configuration.
2. Train a code model in Ertas Studio
Upload your dataset to Ertas Studio and select a code-focused base model. Run LoRA fine-tuning with parameters suited to your dataset size and complexity. Use Ertas's experiment tracking to evaluate and compare different training configurations.
3. Deploy the model through Ollama
Export the fine-tuned model in GGUF format and register it with Ollama. Verify the model serves responses with acceptable latency and correctly reproduces your team's coding patterns on test prompts.
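One way to register the export looks like the following sketch. The GGUF filename, model tag, and sampling parameter are assumptions; adapt them to your export. The Ollama calls are guarded so the script degrades gracefully on machines where Ollama is not installed.

```shell
# Write a Modelfile referencing the Ertas GGUF export
# (filename "ertas-coder.gguf" is an assumption)
cat > Modelfile <<'EOF'
FROM ./ertas-coder.gguf
PARAMETER temperature 0.2
EOF

# Register and smoke-test the model (skipped if Ollama is absent)
if command -v ollama >/dev/null 2>&1; then
  ollama create ertas-coder -f Modelfile
  ollama run ertas-coder "Write a function using our standard error handling."
fi
```

A low temperature is a common choice for code generation, since it biases the model toward its most conventional completions; tune it against your own test prompts.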
4. Configure Aider to use your local model
Launch Aider with the `--openai-api-base` flag pointing to your Ollama endpoint and `--model` set to your fine-tuned model name. Aider will then use your custom model for all code generation, editing, and refactoring tasks.
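A typical session might start like this. The model tag `ertas-coder` and the file paths are illustrative; the dummy API key satisfies the OpenAI client, which requires one even though Ollama ignores it.

```shell
# Placeholder key: required by the OpenAI-compatible client, ignored by Ollama
export OPENAI_API_KEY=dummy

# Start an Aider session against the local model, with files to edit
aider --openai-api-base http://localhost:11434/v1 \
      --model openai/ertas-coder \
      src/payments.py src/utils/errors.py   # example paths, not real ones
```

From here you describe changes in natural language at the Aider prompt, and it edits the named files and commits the results.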
5. Iterate on model quality through usage
Use Aider for daily development tasks and note where the model produces code that deviates from your standards. Add corrected examples to your training set and fine-tune incrementally in Ertas to improve accuracy over time.
Benefits
- Multi-file code changes generated by a model that understands your project's architecture
- Terminal-native workflow with automatic git commits for every AI-assisted change
- Fully local pipeline — no API keys, no cloud inference, no data leaving your machine
- Natural language-driven development using a model fluent in your team's coding idioms
- Zero inference costs regardless of how many changes you generate per day
- Seamless integration with existing git workflows and branch-based development