    Why We Built a Canvas Interface for Machine Learning
    design · studio · philosophy · no-code · visual-ml

    Most ML tools are built for the command line. We think fine-tuning deserves a visual workspace. Here's why we designed Ertas Studio as a canvas — and what that changes about the fine-tuning workflow.

    Ertas Team

    Every machine learning tool we used followed the same pattern: write a script, run it in a terminal, check the logs, tweak a config file, run it again. The entire workflow lived in text — from configuration to monitoring to evaluation.

    We kept coming back to the same question: why does fine-tuning a model feel like managing a server, when what we're really doing is comparing experiments?

    That question led us to build Ertas Studio as a visual canvas instead of another CLI tool.

    The Problem with Terminal-Based ML Workflows

    There's nothing inherently wrong with command-line tools. They're powerful, composable, and efficient for tasks that are well-understood and repetitive. But fine-tuning a language model isn't a repetitive task — it's an experimental one.

    Experiment Management Is Spatial

    When you're comparing fine-tuning runs, you need to see them side by side. Which configuration produced better outputs? How did the loss curves differ? What happens when you change the dataset?

    In a terminal workflow, this information lives in log files, separate terminal windows, spreadsheets tracking experiment parameters, and your memory of which run was which. You're constantly context-switching between writing commands, reading logs, and comparing results.

    This is a spatial problem forced into a sequential interface.

    Configuration Is Hidden

    A YAML config file is a wall of text. Changing one parameter means finding it in the file, understanding its relationship to other parameters, and remembering what values you've tried before. There's no visual feedback until you run the training job and wait for results.

    The Audience Is Wrong

    CLI-based ML tools assume the user is an ML engineer who is comfortable in a terminal. But the people with the deepest domain knowledge — the ones who can curate the best training data and evaluate results most effectively — often aren't ML engineers.

    By requiring terminal proficiency, existing tools cut out the people who could contribute the most to model quality.

    What a Canvas Changes

    Side-by-Side Comparison

    On a canvas, you arrange experiments spatially. Two fine-tuning runs sit next to each other. Their loss curves overlay. Their outputs for the same test prompts appear in adjacent panels. You can see at a glance which configuration works better.

    This isn't a cosmetic difference — it changes how you think about the problem. Instead of sequentially running experiments and mentally tracking results, you design experiments in parallel and compare them visually.

    Direct Manipulation

    Adjusting a hyperparameter is a slider, not an edit to a config file. Starting a training run is a button, not a terminal command. The interface provides immediate context for every action: what the parameter does, what range is reasonable, what values you've tried before.

    This removes a layer of indirection. You're working directly with the training configuration instead of writing commands that modify the configuration.

    Progressive Disclosure

    A canvas can show you exactly as much complexity as you need. Default settings for a quick training run. Expandable panels for fine-grained hyperparameter control. Advanced options hidden until you need them.

    A CLI, by contrast, either hides everything (requiring documentation lookups) or shows everything (overwhelming new users with hundreds of flags).

    Shared Understanding

    A visual workspace can be shared. A product manager can look at the canvas and understand which experiments have been run, what the results look like, and where the model is in the development process. Try doing that with a folder of training scripts and log files.

    Design Decisions We Made

    The Canvas Is the Workspace

    Ertas Studio isn't a form that you fill out and submit. It's a workspace where you arrange your datasets, models, training runs, and evaluations spatially. You build up a canvas over time, and it becomes a visual record of your fine-tuning experiments.

    Simultaneous Training

    One of the most common fine-tuning patterns is running the same training setup with small variations — different learning rates, different LoRA ranks, different dataset subsets. The canvas makes this natural: duplicate a training configuration, change one parameter, and run both at the same time. Results appear side by side as they complete.

    This turns what used to be a sequential, multi-hour process into a parallel comparison that completes in a single training cycle.
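    The timing difference is easy to demonstrate in miniature. In this illustrative shell sketch (not Ertas Studio's actual tooling), `sleep 2` stands in for a training job; launching both variants in the background makes the pair finish in roughly one job's wall-clock time instead of two:

```shell
# Hypothetical sketch: "sleep 2" stands in for a training run.
# Two variants launched in parallel complete in ~2s, not ~4s.
start=$(date +%s)
for lr in 1e-4 2e-4; do
  (sleep 2; echo "run lr=$lr finished") &   # one variant per background job
done
wait                                        # block until both runs complete
end=$(date +%s)
echo "elapsed: $((end - start))s"
```

    The canvas applies the same idea without the scripting: duplicating a configuration and changing one parameter is the visual equivalent of that loop.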

    GGUF as the Exit Point

    We didn't want to build another walled garden. The canvas is where you experiment. GGUF is how you leave. Every fine-tuned model can be exported as a standard GGUF file and deployed with any compatible tool — Ollama, LM Studio, llama.cpp, or your own custom inference setup.
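    As one concrete exit path (a sketch only, with `model.gguf` and `my-finetune` as placeholder names), an exported file can be served through Ollama with a one-line Modelfile:

```shell
# Sketch of one deployment path, assuming Ollama is installed and
# "model.gguf" is the file exported from the canvas (placeholder name).
cat > Modelfile <<'EOF'
FROM ./model.gguf
EOF
# Then build and run the local model:
#   ollama create my-finetune -f Modelfile
#   ollama run my-finetune
```

    llama.cpp loads the same file directly, and LM Studio can import it as-is; no conversion step is needed.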

    The canvas makes fine-tuning accessible. The open format ensures you're never locked in.

    Smart Defaults, Full Control

    Every setting has a well-tested default. For most users, clicking "Start Training" with the defaults produces good results. But every parameter is adjustable for users who want full control. We found that this dual approach serves both audiences without compromising either.

    What We Learned

    Building a visual ML tool taught us a few things:

    Domain experts produce better training data than ML engineers. When we removed the terminal barrier, the quality of training datasets improved dramatically. People who understood the domain could directly curate and iterate on their data.

    Comparison is the core activity. Most of fine-tuning isn't training — it's evaluating and comparing. The tool should optimize for comparison, not just execution.

    The infrastructure shouldn't be visible. GPU provisioning, CUDA drivers, training framework versions — none of this matters to someone trying to build a custom model. The canvas abstracts it away without removing control for users who want it.

    Visual tools aren't less powerful. This was our biggest misconception going in. We expected to sacrifice capability for usability. Instead, the visual interface made certain workflows — like multi-model comparison — significantly more powerful than the CLI equivalent.

    Try It Yourself

    Ertas Studio is how we think fine-tuning should work: visual, parallel, and accessible — without sacrificing the power that experienced practitioners need.

    Lock in early bird pricing at $14.50/mo for life — standard pricing will be $34.50/mo at launch. Join the waitlist →
