Activepieces + Ertas
Build intelligent automation workflows on Activepieces using Ertas-trained models for self-hosted, privacy-first AI processing at every step.
Overview
Activepieces is an open-source automation platform that provides a visual workflow builder for connecting applications and automating business processes. Like commercial alternatives such as Zapier and Make, Activepieces offers a trigger-action model with hundreds of pre-built connectors. What distinguishes it is the self-hosted deployment model: organizations can run Activepieces on their own infrastructure, ensuring that all data flowing through automated workflows stays within their network perimeter.
This self-hosted architecture makes Activepieces a natural fit for organizations that need AI automation but cannot send data to third-party services. When combined with locally deployed Ertas-trained models, the entire automation pipeline — trigger, AI processing, and action — runs within the organization's infrastructure. No email content, customer data, or business documents leave the network at any point. This is a critical requirement for industries like healthcare, legal, finance, and government where data sovereignty regulations make cloud-based AI automation platforms unsuitable for sensitive workflows.
How Ertas Integrates
Ertas-trained models connect to Activepieces through its HTTP request piece or custom code pieces. After deploying your fine-tuned model via Ollama or any OpenAI-compatible server on your internal network, you add an HTTP POST step in your Activepieces flow that calls the model's chat completions endpoint. Because both Activepieces and the inference server run on your infrastructure, the AI call is a local network request — fast, free, and completely private.
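As a concrete illustration, the request the HTTP piece (or a custom code piece) sends is a standard OpenAI-compatible chat completion body. A minimal TypeScript sketch of that request construction, assuming the endpoint and model name used in the example later on this page:

```typescript
// Hypothetical helper for an Activepieces custom code piece.
// Endpoint and model name match the HTTP-request example on this page;
// adjust both for your own deployment.
const ENDPOINT = "http://ollama-server:11434/v1/chat/completions";

// Build the OpenAI-compatible chat completion body sent to the local model.
function buildChatRequest(documentText: string) {
  return {
    model: "ertas-classifier-7b",
    messages: [
      {
        role: "system",
        content:
          "Classify the document into one of: contract, invoice, correspondence, legal-filing. Return JSON: {category, confidence, summary}",
      },
      { role: "user", content: documentText },
    ],
    temperature: 0.0, // deterministic output for classification
    max_tokens: 200,
  };
}

// Inside a code piece, the call is then a plain local fetch:
// const res = await fetch(ENDPOINT, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest(inputs.document_text)),
// });
```

Because the request never leaves the local network, there is no API key to manage and no per-token cost.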
The Ertas-Activepieces stack creates a fully self-hosted AI automation platform. Consider a legal firm that needs to automatically classify incoming case documents, extract key entities, and route them to the appropriate practice group. With Activepieces watching an email inbox or file share for new documents, an Ertas-trained legal model processing each document for classification and extraction, and Activepieces routing the results to the correct team channel — the entire workflow runs on premises with zero external API calls. The firm can audit every step, control data retention, and demonstrate full regulatory compliance. This same pattern applies to healthcare organizations processing referrals, financial firms classifying transactions, and any organization that needs intelligent automation over sensitive data.
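The legal-firm pipeline described above maps onto an Activepieces flow roughly like this. This is an illustrative outline only, not exact Activepieces flow-export syntax; the step names are hypothetical:

```json
{
  "flow": "classify-incoming-case-documents",
  "steps": [
    { "name": "trigger", "type": "email-inbox", "note": "fires on each new attachment" },
    { "name": "classify", "type": "http-request", "note": "POST to the local Ertas model" },
    { "name": "route", "type": "branch", "note": "switch on the returned category" },
    { "name": "notify", "type": "team-chat", "note": "post to the matching practice-group channel" }
  ]
}
```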
Getting Started
1. Fine-tune a model in Ertas Studio
Train a model on your specific automation task — document classification, entity extraction, content generation, or text analysis.
2. Deploy both Activepieces and your model on-premises
Run Activepieces via Docker on your infrastructure. Deploy your Ertas-trained model through Ollama on the same network for low-latency, private inference.
3. Create a flow with your trigger
Build an Activepieces flow starting with your trigger — new email, file upload, webhook, scheduled task, or any supported event source.
4. Add an HTTP request step for AI processing
Add an HTTP request piece that POSTs to your local model endpoint. Structure the request body as an OpenAI-compatible chat completion call with the trigger data.
5. Route results to downstream actions
Use the model's response to drive subsequent flow steps — route documents, update databases, send notifications, or trigger additional processing.
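Step 5 can be implemented as a small code piece that parses the classifier's reply and picks a destination. A hedged TypeScript sketch, assuming the model returns the JSON shape requested in the example prompt on this page (the channel map is an illustrative assumption):

```typescript
// Shape of an OpenAI-compatible chat completions response (trimmed to
// the fields used here).
interface ChatResponse {
  choices: { message: { content: string } }[];
}

// Hypothetical mapping from document category to team channel.
const CHANNELS: Record<string, string> = {
  contract: "#contracts-team",
  invoice: "#accounts-payable",
  correspondence: "#front-office",
  "legal-filing": "#litigation",
};

// Parse the model's JSON reply ({category, confidence, summary}) and
// choose a routing destination for the downstream notification step.
function routeDocument(response: ChatResponse): { category: string; channel: string } {
  const parsed = JSON.parse(response.choices[0].message.content);
  const channel = CHANNELS[parsed.category] ?? "#triage"; // fall back on unknown labels
  return { category: parsed.category, channel };
}
```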
A document-classification step configured through the HTTP request piece looks like this:

```json
{
  "step": "HTTP Request",
  "method": "POST",
  "url": "http://ollama-server:11434/v1/chat/completions",
  "headers": {
    "Content-Type": "application/json"
  },
  "body": {
    "model": "ertas-classifier-7b",
    "messages": [
      {
        "role": "system",
        "content": "Classify the document into one of: contract, invoice, correspondence, legal-filing. Return JSON: {category, confidence, summary}"
      },
      {
        "role": "user",
        "content": "{{trigger.document_text}}"
      }
    ],
    "temperature": 0.0,
    "max_tokens": 200
  }
}
```

Benefits
- Fully self-hosted automation — no data leaves your infrastructure
- Open-source with no per-workflow or per-execution fees
- Local network AI calls are fast, free, and completely private
- Visual flow builder accessible to non-technical team members
- Complete audit trail for regulatory compliance
- Growing connector library with community-contributed integrations
Related Resources
Concepts: Fine-Tuning, GGUF, Inference, LoRA
Guides:
- Privacy-Conscious AI Development: Fine-Tune in the Cloud, Run on Your Terms
- GDPR-Compliant AI: How to Use LLMs Without Sharing User Data
- Running AI Models Locally: The Complete Guide to Local LLM Inference
- Self-Hosted AI for Indie Apps: Replace GPT-4 with Your Own Model
- Fine-Tune AI Models Without Writing Code
Integrations: Flowise, Make.com, n8n, Ollama, Zapier
Use cases: Ertas for Healthcare, Ertas for Legal, Ertas for Finance, Ertas for Data Extraction, Ertas for AI Automation Agencies