
Introducing Ertas Studio: A Visual Canvas for Fine-Tuning AI Models
Ertas Studio is a canvas-driven interface for fine-tuning multiple AI models simultaneously. Upload data, configure training, and compare results — no CLI required.
Fine-tuning a language model usually means writing training scripts, managing configs, and switching between terminal windows to compare runs. Ertas Studio replaces that workflow with a single visual canvas where you can fine-tune multiple models at once and compare results side by side.
The Problem with Current Fine-Tuning Workflows
If you've fine-tuned a model before, you know the drill:
- Write a training script or adapt someone else's
- Manage hyperparameters across config files
- Run jobs sequentially and manually track which settings produced which results
- Convert checkpoints into a deployable format
- Repeat for every base model you want to compare
Each step introduces friction. By the time you've compared three base models with different hyperparameters, you've spent more time on tooling than on evaluating results.
How Studio Works
Studio is a canvas-driven web interface that handles the entire fine-tuning pipeline — from data upload to model download.
Upload Your Data
Start by uploading a JSONL training dataset or importing one from Hugging Face. Studio validates your data and surfaces formatting issues before training begins.
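Studio handles validation for you, but it can help to sanity-check a dataset locally before uploading. Here's a minimal sketch, assuming a chat-style JSONL format (one JSON object per line with a `messages` list); Studio's exact schema may differ:

```python
import json

def check_jsonl(path):
    """Report lines that are not valid JSON or lack a 'messages' list."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # ignore blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError as e:
                problems.append((lineno, f"invalid JSON: {e.msg}"))
                continue
            # Assumed schema: {"messages": [{"role": ..., "content": ...}, ...]}
            if not isinstance(record.get("messages"), list):
                problems.append((lineno, "missing 'messages' list"))
    return problems
```

Running this before upload catches the two most common formatting problems, malformed JSON and records missing the expected top-level field, on your own machine.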
Fine-Tune on the Canvas
The Studio canvas is where the core work happens. Select a base model, configure training parameters, and launch a fine-tuning job on cloud GPUs — all through the visual interface.
What makes this powerful is that you can run multiple fine-tuning jobs simultaneously. Train different base models on the same dataset, or test different hyperparameters on the same model. The canvas shows all running and completed jobs together, so you can compare outputs without switching between windows or scanning log files.
Preserved Knowledge
Every fine-tuning run is saved. This means you can:
- Return to any previous run and review its configuration and results
- Use a previously fine-tuned model as the starting point for a new run
- Test the same fine-tuned model across different use cases without retraining
This is especially useful when you're iterating — tweak your dataset, run another job, and compare the new results against your baseline without losing any previous work.
Download as GGUF
When you're satisfied with a model, download it as a GGUF file, an open format you can run on consumer hardware with tools like llama.cpp, Ollama, and LM Studio. No cloud dependency, no API costs, no vendor lock-in. Cloud deployment via Ertas Cloud is also on the horizon for teams that want managed API endpoints.
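As an illustration, running the downloaded file locally with llama.cpp or Ollama looks roughly like this (the filename `model.gguf` and model name `my-model` are placeholders):

```shell
# llama.cpp: run the model directly from the GGUF file
llama-cli -m model.gguf -p "Summarize this support ticket:" -n 128

# Ollama: register the GGUF via a Modelfile, then chat with it
echo 'FROM ./model.gguf' > Modelfile
ollama create my-model -f Modelfile
ollama run my-model
```

Either path keeps inference entirely on your own machine.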
Who Studio Is Built For
- Engineers building AI-powered products — Fine-tune models for specific tasks (classification, summarization, code generation) without managing training infrastructure
- Teams evaluating base models — Compare multiple foundation models on your actual data to find the best fit
- Privacy-conscious organizations — Run models on your own hardware for full control over inference data
- Solo developers and researchers — Get from dataset to deployable model in minutes, not days
Data Synthesis (Coming Soon)
We're working on smart data synthesis suggestions that analyze your uploaded dataset and recommend additional training examples to improve model performance. This will help teams that have limited training data get better results from their fine-tuning runs.
Ship AI that runs on your users' devices.
Ertas early bird pricing starts at $14.50/mo — locked in for life. Plans for builders and agencies.
Get Early Access
Studio is currently in development. Join the waitlist to get early access and provide feedback that shapes the product.
Fine-tuning should be about evaluating results, not debugging pipelines. That's what Studio is for.