CrewAI + Ertas
Orchestrate teams of AI agents with CrewAI, each powered by a specialized Ertas-trained model for role-based task execution and collaborative workflows.
Overview
CrewAI is a multi-agent orchestration framework that organizes AI agents into crews with defined roles, goals, and backstories. Inspired by how human teams operate, CrewAI assigns each agent a specific role — researcher, writer, editor, analyst — and coordinates their work through structured task delegation and sequential or parallel execution patterns. The framework handles inter-agent communication, task dependencies, context sharing, and output validation automatically.
CrewAI's design philosophy emphasizes simplicity and production readiness. Agents are defined with natural language role descriptions rather than complex configuration files, and tasks specify expected outputs, tools, and delegation rules in a declarative format. The framework supports both sequential workflows (where each agent's output feeds into the next agent's input) and hierarchical workflows (where a manager agent delegates and reviews work from subordinate agents). This flexibility makes CrewAI suitable for everything from content production pipelines to complex research and analysis workflows.
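The two workflow shapes can be sketched with plain functions standing in for LLM-backed agents. This is illustrative only: `researcher`, `writer`, and the two runner functions are hypothetical stand-ins, not CrewAI APIs.

```python
# Toy simulation of CrewAI's two workflow shapes. Each "agent" here is a
# plain function; in CrewAI they would be LLM-backed Agent objects.

def researcher(task: str) -> str:
    return f"findings for: {task}"

def writer(context: str) -> str:
    return f"memo based on [{context}]"

def run_sequential(task, agents):
    """Sequential: each agent's output becomes the next agent's input."""
    output = task
    for agent in agents:
        output = agent(output)
    return output

def run_hierarchical(task, manager, workers):
    """Hierarchical: a manager delegates to workers and merges their results."""
    results = [worker(task) for worker in workers]
    return manager(" | ".join(results))

print(run_sequential("Q3 earnings", [researcher, writer]))
# -> memo based on [findings for: Q3 earnings]
```

In real CrewAI terms, the sequential runner corresponds to a crew executing tasks in order, while the hierarchical runner corresponds to a manager agent delegating and reviewing subordinate output.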
How Ertas Integrates
Ertas-trained models power CrewAI agents through the standard LLM configuration interface. After fine-tuning specialist models in Ertas Studio — a research model, a writing model, an analysis model — you assign each model to the corresponding CrewAI agent role. The agent's role description and backstory guide its behavior at the prompt level, while the fine-tuned model provides deep domain expertise at the weights level. This two-layer specialization produces agents that are both well-directed and genuinely knowledgeable.
The CrewAI-Ertas combination excels at content and analysis workflows. Consider a financial research crew: a data analyst agent (trained on financial statement analysis) gathers and processes company data, a research agent (trained on market analysis) identifies trends and competitive dynamics, and a writer agent (trained on investment memo style) synthesizes everything into a polished report. Each agent uses a different Ertas-trained model optimized for its specific task, and CrewAI coordinates their work so the final output reflects the combined expertise of all three specialists. This is qualitatively different from asking a single general model to do everything — and the output quality reflects that difference.
Getting Started
1. Define your crew roles and fine-tune models
Identify the specialist roles needed in your crew. Fine-tune a separate model in Ertas Studio for each role using task-specific training examples.
2. Deploy models to inference endpoints
Serve your fine-tuned models via Ollama or vLLM. Ollama can serve multiple models simultaneously, making it ideal for multi-agent setups.
3. Create agents with role-specific models
Define CrewAI agents with their roles, goals, and backstories. Assign each agent its dedicated Ertas-trained model through the LLM configuration.
4. Define tasks and workflow structure
Create tasks with expected outputs and assign them to agents. Configure the workflow as sequential, parallel, or hierarchical based on task dependencies.
5. Run the crew and iterate
Execute the crew workflow and review outputs. Analyze which agents need improvement and retrain their models in Ertas Studio with targeted training data.
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

# Create specialized LLMs from Ertas-trained models served by Ollama
researcher_llm = ChatOpenAI(
    base_url="http://localhost:11434/v1",
    model="ertas-researcher-7b",
    api_key="not-needed",
)
writer_llm = ChatOpenAI(
    base_url="http://localhost:11434/v1",
    model="ertas-writer-7b",
    api_key="not-needed",
)

# Define agents with role-specific fine-tuned models
researcher = Agent(
    role="Market Researcher",
    goal="Gather and analyze market data for investment decisions",
    backstory="Senior equity analyst with 15 years of experience.",
    llm=researcher_llm,
)
writer = Agent(
    role="Report Writer",
    goal="Write clear, actionable investment memos",
    backstory="Financial writer specializing in institutional reports.",
    llm=writer_llm,
)

# Define tasks
research_task = Task(
    description="Research Q3 earnings for the top 5 semiconductor companies.",
    agent=researcher,
    expected_output="Structured analysis with revenue, margins, and guidance.",
)
report_task = Task(
    description="Write a 2-page investment memo based on the research.",
    agent=writer,
    expected_output="Professional investment memo with recommendation.",
)

# Run the crew
crew = Crew(agents=[researcher, writer], tasks=[research_task, report_task])
result = crew.kickoff()
print(result)

Benefits
- Role-based agent specialization with dedicated fine-tuned models per role
- Natural language agent definitions make crew setup intuitive and fast
- Sequential and hierarchical workflows handle complex multi-step processes
- Inter-agent context sharing ensures coherent outputs across the crew
- Built-in task validation catches errors before they propagate downstream
- Production-ready framework with logging, callbacks, and error handling
Related Resources
Topics: Fine-Tuning, GGUF, Inference, LoRA
Guides:
- Getting Started with Ertas: Fine-Tune and Deploy Custom AI Models
- How to Fine-Tune an LLM: The Complete 2026 Guide
- Fine-Tune AI Models Without Writing Code
- Running AI Models Locally: The Complete Guide to Local LLM Inference
- Privacy-Conscious AI Development: Fine-Tune in the Cloud, Run on Your Terms
Integrations: AutoGen, LangChain, Ollama, SuperAgent, vLLM
Solutions: Ertas for Finance, Ertas for Content Creation, Ertas for Data Extraction, Ertas for AI Automation Agencies