
How to Scope a Custom AI Model Project (and What to Charge)
The discovery questions, project types, price ranges, and scope management strategies for custom AI model projects. How to scope correctly before you quote anything.
The most expensive mistake in custom AI model work is starting a project without understanding what you are actually building. Custom AI model projects look similar on the surface — "we want to fine-tune a model on our data" — but they vary enormously in complexity, data quality, integration requirements, and delivery timeline.
Scoping well before quoting saves you from the two most common agency disasters: underquoting a complex project (you lose money) or overquoting a simple one (you lose the client).
The 4 Variables That Determine Project Complexity
Every custom AI model project has four variables. Each one can double or triple the scope:
1. Data availability and quality. The most unpredictable variable. A client with 2,000 clean, labeled (input, output) pairs in JSONL format is a 2-week project. A client who thinks they have data but actually has unstructured PDFs, inconsistent formats, and partial labels is a 6-8 week project (mostly data engineering). Never quote without assessing actual data; a sketch of what a clean record looks like follows this list.
2. Task complexity. Classification tasks (route this ticket to department X) are simpler to fine-tune than generation tasks (write a response to this ticket in our brand voice). Single-output tasks are simpler than multi-step reasoning tasks. The harder the task, the more training data needed and the more iteration required.
3. Deployment requirements. Deploying a GGUF model on Ollama on a client's VPS is straightforward. Integrating with an existing enterprise CRM system, Salesforce custom objects, and a legacy API gateway is a 4-week integration project on top of the model work. Map the integration surface before quoting; a sketch of the simple deployment path also follows this list.
4. Quality threshold. A client who needs 80% accuracy is a different project than one who needs 95%+. Higher thresholds require more training data, more evaluation cycles, and sometimes a larger model. Ask explicitly: "What accuracy level would this need to achieve to replace your current process?"
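To make variables 1 and 3 concrete, here are two minimal sketches. First, what a single clean training record can look like in JSONL, assuming a chat-style fine-tuning format (the exact field names depend on your training stack):

```json
{"messages": [{"role": "user", "content": "Ticket: My March invoice shows the same charge twice."}, {"role": "assistant", "content": "Department: Billing"}]}
```

Second, the straightforward end of the deployment spectrum: a GGUF model served by Ollama on a VPS. The model and file names below are placeholders; the `ollama` commands themselves are standard:

```bash
# Modelfile: points Ollama at the fine-tuned GGUF weights
#   FROM ./ticket-router.gguf

# Register the model on the VPS, then smoke-test it
ollama create ticket-router -f Modelfile
ollama run ticket-router "Ticket: My March invoice shows the same charge twice."
```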
The Discovery Call Checklist
Do not quote anything until you have answers to these questions. Budget 45-60 minutes.
Problem and current solution:
- What specific task are you hoping AI will handle?
- How does this task get done today? Who does it, how long does it take?
- What does a mistake cost you (in time, money, or customer impact)?
Data:
- Do you have historical examples of this task being done correctly?
- In what format (spreadsheet, CRM records, chat logs, PDFs)?
- How many examples? (Ask for a sample — not a description of the sample, the actual file)
- Who labeled or approved these examples?
Deployment:
- Where would the model run? (Your servers? Client's servers? Cloud API?)
- What systems does it need to integrate with?
- Who on their team will manage it on an ongoing basis?
Success criteria:
- How will you know the model is working?
- What accuracy rate would make this successful?
- What happens if the model gets something wrong?
Timeline and stakeholders:
- When do you need this working?
- Who makes the final decision on this project?
- What is the budget range you have in mind?
Project Types and Price Ranges
| Project Type | Description | Timeline | Price Range |
|---|---|---|---|
| Data audit | Assess existing data quality, recommend dataset structure, estimate fine-tuning viability | 1-2 weeks | $1,500-3,000 |
| Proof of concept | Fine-tune one model on existing data, evaluate accuracy, deploy in test environment | 2-4 weeks | $3,000-8,000 |
| Standard deployment | Data prep + fine-tuning + integration + monitoring setup for one use case | 4-8 weeks | $8,000-20,000 |
| Multi-use case deployment | Same as above but 2-4 distinct models/use cases | 8-16 weeks | $20,000-60,000 |
| Ongoing retainer | Monthly model maintenance, retraining, monitoring, evaluation | Monthly | $500-2,500/month |
These are USD ranges for 2026 market conditions. Adjust for your market, your track record, and the client's size. A 500-person company pays more than a 20-person company for the same deliverable.
The Data Problem
This deserves its own section because it is where most AI projects fail, and it is where most scoping mistakes happen.
Clients consistently overestimate their data readiness. Common responses during discovery:
- "We have 5 years of customer support tickets" → Usually stored in a system with no export function, partially labeled, often reflecting policies that have changed, and formatted inconsistently. Real clean training data: 300-800 examples after 3-4 weeks of data engineering.
- "We have all our product documentation" → Unstructured PDFs, Word documents, and wikis. Useful for RAG, not directly for fine-tuning. Must be transformed into (question, answer) pairs.
- "Our team manually does this every day" → The task is clearly defined, but there are no logged examples. Must be collected prospectively or created synthetically (a minimal logging sketch follows this list).
Red flags that increase scope:
- "We can get you the data by next week" (they do not have it yet)
- Data in a system with export limitations
- Multiple teams have contributed to the dataset (inconsistent labeling)
- The labeling was done by different people over different time periods
Green flags that reduce scope:
- Clean export from a CRM or ticketing system
- Consistent labeling by a single person or tight team
- The client can give you a sample file within 24 hours
Always ask for a sample file before quoting. Looking at 50 actual examples tells you more than any description.
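Once the sample arrives, a 15-minute scripted pass catches most problems before you quote. A minimal audit sketch, assuming the sample is JSONL with `input` and `output` fields; adapt the field names to whatever the client actually sends:

```python
import json
from collections import Counter

def audit_sample(path: str) -> None:
    """First-pass audit of a client sample: record count, unparseable lines,
    missing/empty fields, and output distribution."""
    records, bad_lines = [], 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError:
                bad_lines += 1
    missing = Counter()
    for r in records:
        for field in ("input", "output"):
            if not str(r.get(field, "")).strip():
                missing[field] += 1
    labels = Counter(str(r.get("output", "")) for r in records)
    print(f"{len(records)} parseable records, {bad_lines} unparseable lines")
    print(f"missing/empty fields: {dict(missing)}")
    print(f"most common outputs: {labels.most_common(5)}")

audit_sample("client_sample.jsonl")
```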
Scope Creep in AI Projects
Scope creep in AI model work has a specific character. It usually appears in one of three forms:
Feature expansion: "Can the model also handle X?" — where X is a related but distinct task that requires additional training data. Address this with a change order, not absorption.
Accuracy negotiation: You deliver a model at 87% accuracy; the client wanted 92%. Reaching higher accuracy often requires more data and more training cycles. Define "done" explicitly in your statement of work with specific accuracy metrics agreed upfront.
Integration expansion: The deployment scope grows as you learn more about the client's systems. The original "integrate with our CRM" turns into "integrate with our CRM, our email platform, and our reporting system." Each integration is a separate deliverable.
The Statement of Work: Key Sections
Every custom AI model project needs a written SOW before work begins. Key sections:
Deliverables — Exact list of what you will deliver: model file(s), integration code, documentation, training data documentation, deployment guide.
Accuracy threshold — The specific metric (e.g., "≥88% accuracy on held-out test set of 200 examples") that constitutes a successful model. This prevents "I thought it would be better" conversations. (A minimal evaluation sketch follows these sections.)
Data requirements — Exactly what data the client must provide, in what format, by what date. If they do not provide it, the timeline shifts.
Revision policy — How many rounds of model revision are included. After the included rounds, additional iterations are billed as change orders.
Handoff and support — What training/documentation they receive, how long you provide support post-launch, what is included in post-launch support vs what triggers an additional retainer.
Payment structure — Recommend: 40% upfront, 30% at model delivery, 30% at integration completion. Never start without a deposit.
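The accuracy-threshold clause only works if both sides can run the same check. Here is a minimal sketch of the held-out evaluation in Python, assuming a classification-style task; `model_predict` and the file name are hypothetical stand-ins for whatever inference call and frozen test set the project actually uses:

```python
import json

def heldout_accuracy(test_path: str, model_predict, threshold: float = 0.88) -> bool:
    """Score the model on the frozen held-out set and compare to the SOW threshold."""
    with open(test_path, encoding="utf-8") as f:
        test_set = [json.loads(line) for line in f if line.strip()]
    correct = sum(1 for ex in test_set if model_predict(ex["input"]) == ex["output"])
    accuracy = correct / len(test_set)
    print(f"{correct}/{len(test_set)} correct -> {accuracy:.1%} (threshold {threshold:.0%})")
    return accuracy >= threshold

# model_predict: any callable that maps an input string to the model's output string
```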
Ship AI that runs on your users' devices.
Ertas early bird pricing starts at $14.50/mo — locked in for life. Plans for builders and agencies.
Further Reading
- AI Agency Proposal Template — How to write proposals that win after scoping
- How to Start an AI Agency in 2026 — The full agency launch playbook
- AI Agency Pricing Strategy — Pricing models and rate guidance for agency work
- Ertas vs DIY Fine-Tuning for Agencies — Platform considerations for scoping
- Manage Multiple Fine-Tuned Models — Managing delivery across multiple client projects