    Ertas Studio vs. DIY Fine-Tuning with Unsloth/Axolotl: What's Right for Your Agency?


    An honest comparison of Ertas Studio against DIY fine-tuning tools like Unsloth and Axolotl — focused on what matters for agencies: time-to-deliver, client handoff, and iteration speed.

Ertas Team

    If you run an AI agency, you have likely heard of Unsloth and Axolotl — open-source tools that make fine-tuning language models faster and more accessible. They are excellent tools. We have written a detailed technical comparison of all three platforms.

    This article is different. It is specifically for agency operators evaluating these options through the lens of running a client-facing business. The question is not "which tool is technically better?" — it is "which approach lets my agency deliver more value to more clients?"

    The Agency Decision Framework

    Agencies care about five things when choosing a fine-tuning approach:

    1. Time-to-deliver: How fast can you go from client request to deployed model?
    2. ML expertise required: What skills does your team need?
    3. Client handoff workflow: Can clients interact with the tool directly, or is it agency-only?
    4. Iteration speed: How quickly can you incorporate client feedback and retrain?
    5. Support and reliability: What happens when something breaks at 11pm before a client demo?

    Let us evaluate each approach against these criteria.

    Time-to-Deliver

    Unsloth (DIY)

    Unsloth accelerates LoRA training by 2x compared to standard Hugging Face Transformers. Typical workflow:

    1. Set up Python environment with CUDA, PyTorch, Unsloth (1-4 hours first time, 15 min after)
    2. Write data loading and formatting script (30-60 min)
    3. Configure training parameters in Python (15-30 min)
    4. Run training (30-90 min for a 7B model)
    5. Convert to deployment format (GGUF for Ollama) (10-30 min)
    6. Test and validate (30-60 min)

Total for first client: 4-8 hours (plus environment setup). Subsequent clients: 2-4 hours.

    Axolotl (DIY)

    Axolotl wraps the training pipeline in YAML configuration. Typical workflow:

    1. Set up environment with Axolotl dependencies (1-3 hours first time)
    2. Format data into Axolotl's expected structure (30-60 min)
    3. Write YAML configuration (15-30 min)
    4. Run training via CLI (30-90 min)
    5. Convert and deploy (10-30 min)
    6. Test and validate (30-60 min)

Total for first client: 4-7 hours (plus environment setup). Subsequent clients: 2-3 hours.
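The YAML step above might look something like this sketch. The key names follow Axolotl's documented config schema, but the model name, dataset path, and LoRA values are placeholders you would set per client:

```yaml
# Hypothetical Axolotl config sketch; set model, data, and LoRA values per client.
base_model: meta-llama/Llama-3.1-8B-Instruct  # placeholder base model
load_in_4bit: true                            # QLoRA-style 4-bit base weights
adapter: qlora

datasets:
  - path: data/client_train.jsonl             # placeholder path
    type: alpaca                              # instruction/input/output records

sequence_len: 2048
lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./outputs/client-model
```

Training is then launched via Axolotl's CLI (for example `accelerate launch -m axolotl.cli.train config.yml`, depending on the version you have installed).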

    Ertas Studio

    1. Upload training data (JSONL/CSV) via web interface (5 min)
    2. Select base model and configure training (5 min)
    3. Click "Train" (30-60 min, automated)
    4. Evaluate in side-by-side comparison interface (15-30 min)
    5. Export to GGUF/SafeTensors (5 min)
    6. Deploy (10-30 min)

Total for first client: 1-2 hours. Subsequent clients: 1-2 hours.
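If the target runtime is Ollama, the final deploy step usually amounts to a short Modelfile. A sketch, assuming an exported GGUF named `client-model.gguf` (the model name and system prompt here are hypothetical):

```
# Modelfile — point FROM at your exported GGUF
FROM ./client-model.gguf
PARAMETER temperature 0.2
SYSTEM "You are the assistant fine-tuned for this client's document workflows."
```

Then register and test it locally with `ollama create client-model -f Modelfile` followed by `ollama run client-model`.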

The difference per client is roughly 1-6 hours. Across 10 clients, that is 10-60 hours of agency time saved — a meaningful difference at agency billing rates.

    ML Expertise Required

    Unsloth

    Requires:

    • Python proficiency (intermediate)
    • Understanding of PyTorch basics
    • Knowledge of training hyperparameters (learning rate, epochs, LoRA rank, alpha, target modules)
    • Ability to debug CUDA errors, OOM issues, and training instabilities
    • Understanding of quantisation formats for deployment

    Minimum team requirement: At least one person comfortable with Python and ML concepts. If your team is all n8n/Make.com specialists, you need to hire or upskill.
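To make the hyperparameter bullet concrete: the LoRA rank directly sets how many parameters you actually train. A back-of-the-envelope sketch, assuming Llama-7B-like shapes (hidden size 4096, 32 layers) with adapters on the four attention projections — all of these numbers are illustrative, not a prescription:

```python
def lora_trainable_params(hidden: int, layers: int, rank: int, n_modules: int) -> int:
    """Each adapted square weight gets two low-rank factors:
    A (rank x hidden) and B (hidden x rank), so 2 * hidden * rank params."""
    per_module = 2 * hidden * rank
    return layers * n_modules * per_module

# Assumed Llama-7B-like shapes: hidden 4096, 32 layers, LoRA on q/k/v/o projections.
trainable = lora_trainable_params(hidden=4096, layers=32, rank=16, n_modules=4)
print(f"{trainable:,} trainable params")            # ~16.8M
print(f"{trainable / 7e9:.2%} of a 7B base model")  # well under 1%
```

Doubling the rank doubles the trainable parameter count; alpha scales the size of the learned update (a common convention is to set it equal to the rank). This is the kind of reasoning a DIY practitioner needs to be comfortable with.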

    Axolotl

    Requires:

    • Basic Python environment management
    • YAML proficiency
    • Understanding of training hyperparameters (same as Unsloth, but configured in YAML)
    • Less debugging than Unsloth (Axolotl handles more edge cases)

    Minimum team requirement: Slightly lower bar than Unsloth, but still requires someone who can navigate Python environments and understand training concepts.

    Ertas Studio

    Requires:

    • Data preparation skills (formatting JSONL/CSV — a spreadsheet task)
    • Understanding of what fine-tuning does (conceptual, not implementation)
    • Ability to evaluate model outputs (domain knowledge, not ML knowledge)

    Minimum team requirement: Any technical team member. The barrier is domain understanding (legal, healthcare), not ML expertise.
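The "spreadsheet task" above can literally be a few lines of stdlib Python. A sketch, assuming a CSV export with `prompt` and `response` columns and a chat-messages JSONL layout — the column names are placeholders, and the exact JSONL schema a given platform accepts may differ:

```python
import csv
import json

def csv_to_jsonl(csv_path: str, jsonl_path: str) -> int:
    """Convert a two-column CSV (prompt, response) into chat-style JSONL rows.

    Returns the number of records written."""
    count = 0
    with open(csv_path, newline="") as src, open(jsonl_path, "w") as dst:
        for row in csv.DictReader(src):
            record = {
                "messages": [
                    {"role": "user", "content": row["prompt"]},
                    {"role": "assistant", "content": row["response"]},
                ]
            }
            dst.write(json.dumps(record) + "\n")
            count += 1
    return count
```

Anyone who can manage a spreadsheet export can run this; no training loop, no GPU, no CUDA.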

    Client Handoff Workflow

    This is where the approaches diverge significantly.

    DIY Tools (Unsloth/Axolotl)

    Client handoff options are limited:

    • You can train models for clients, but clients cannot retrain themselves
    • You need to be involved every time the client wants to update their model with new data
    • No client-facing interface — everything runs from your terminal
    • Model management (tracking versions, comparing runs) requires custom tooling

    This creates an ongoing dependency — the client cannot iterate without you. Good for retainer revenue, bad for client satisfaction and scalability.

    Ertas Studio

    Client-facing options:

    • You can give clients access to their own Ertas Studio project
    • Clients can upload new training data and trigger retraining independently
    • Built-in version history and comparison tools
    • White-label option lets you present the interface under your own brand

    This enables a "teach them to fish" model where clients manage day-to-day model updates and you handle architecture and optimisation. Higher-value engagement for the agency, better experience for the client.

    Iteration Speed

    Client feedback loops define model quality. The faster you can incorporate feedback, the better the model gets.

    Typical Feedback Cycle (DIY)

    1. Client reports quality issues (email/Slack)
    2. Agency collects examples of poor outputs
    3. Agency prepares corrective training data
    4. Agency reruns training script
    5. Agency tests, converts, and redeploys
    6. Agency confirms fix with client

    Calendar time: 2-5 days (fitting this into scheduled work, not drop-everything urgency)

    Typical Feedback Cycle (Ertas Studio)

    1. Client flags issues in Studio's evaluation interface
    2. Agency (or client) adds corrective examples to the training set
    3. Click "Retrain" — new adapter in 30-60 minutes
    4. Compare new vs. previous version in side-by-side view
    5. Export and deploy if improved

    Calendar time: Same day (often within hours)

Faster iteration produces better models sooner. Better models produce happier clients. Happier clients renew.

    When DIY Makes Sense

In fairness, there are scenarios where Unsloth or Axolotl is the better choice:

    You have an ML engineer on staff. If someone on your team genuinely enjoys writing training scripts and debugging CUDA issues, DIY tools give them maximum control and flexibility.

    Highly custom training pipelines. If your clients need non-standard training approaches — custom loss functions, unusual data formats, multi-task training with complex routing — DIY tools are more flexible.

    You are building a platform, not delivering services. If you are building your own fine-tuning platform (rather than using one), Unsloth's performance optimisations are valuable building blocks.

    Cost is the only consideration. Unsloth and Axolotl are free. If your agency is pre-revenue and bootstrapping, the cost of any paid tool is a real consideration.

    When Ertas Wins

    Your team is automation engineers, not ML engineers. Most n8n/Make.com agencies fall into this category. Ertas removes the ML bottleneck entirely.

    You are scaling client count. The per-client time savings compound. At 10+ clients, the hours saved pay for the platform many times over.

    Client self-service matters. Law firms and healthcare organisations appreciate being able to see their model, test it, and trigger updates without filing a support ticket with you.

    Speed is competitive advantage. If you can deliver a fine-tuned model in a day instead of a week, you win deals against agencies that need longer.

    You want to white-label. Presenting a professional model management interface under your brand builds trust with enterprise clients.

    The Practical Recommendation

    For most AI agencies:

    1. Start with Ertas Studio for your first 5-10 clients. Focus your team's energy on client delivery, not ML infrastructure.
    2. Learn the fundamentals of fine-tuning by reading how LoRA works and running a few experiments with Unsloth on a personal machine. Understanding the underlying mechanics makes you a better practitioner even when using a no-code tool.
    3. Evaluate DIY at scale once you have 15+ clients and a team member who wants to specialise in ML. At that point, you might build custom tooling for specific edge cases while using Ertas for standard workflows.

    The best agencies use both — Ertas for 80% of standard fine-tuning jobs, and DIY tools for the 20% that need custom treatment.


    Ship AI that runs on your users' devices.

    Ertas early bird pricing starts at $14.50/mo — locked in for life. Plans for builders and agencies.
