
AI Agency Proposal Template: How to Win Custom Model Projects
Most AI agency proposals lose because they lead with technology. Here's the structure, the writing formula, and the common mistakes that cost agencies deals.
Most AI agency proposals lose because they lead with technology. The proposal is 80% about LoRA fine-tuning, GGUF format, Ollama deployment, and training dataset requirements. The client nods along, says "interesting," and never responds.
Clients do not care about LoRA. They care about whether their support tickets get resolved faster and whether their AI costs will stop growing every month. The proposal that wins is the one that stays in their language while demonstrating that you understand the technical path.
The Structure That Wins
Seven sections, in this order:
- Executive Summary (1 page)
- Problem Definition
- Proposed Solution
- Methodology
- Timeline
- Investment
- Why Us
This seems obvious. The mistake is in how each section gets written — specifically, where most agencies put their technology-heavy content (everywhere) versus where it belongs (Methodology only).
Section 1: Executive Summary
The executive summary is the most important page in the proposal. Many decision-makers never read past it. It needs to:
- State the client's specific problem in their words
- State the specific outcome you will deliver
- State the investment and timeline at a high level
- Create confidence that you understand their situation
Formula:
[Client company name] currently [specific problem with cost]. This proposal outlines how [your agency] will [specific solution] to achieve [specific outcome — accuracy, cost, time] within [timeline]. The total investment is [price].
Example (fictional):
Meridian Legal currently spends $4,200/month in OpenAI API costs processing client contracts and takes 3-4 hours per contract review. This proposal outlines how we will deploy a fine-tuned contract analysis model that reduces API costs to under $200/month and cuts review time to 45 minutes. Project timeline: 6 weeks. Total investment: $14,500.
The executive summary does not mention LoRA, GGUF, Ollama, or any technical detail. Those belong in Methodology.
Section 2: Problem Definition
Reframe the problem in the client's language, with their specific numbers. This demonstrates that you listened during discovery and that you understand the business impact, not just the technical surface.
What to include:
- Current process and its cost in time and money
- Why their current approach fails (API costs, accuracy gaps, privacy concerns)
- The downstream business impact (slow responses, lost revenue, compliance risk)
What to avoid:
- Generic AI industry statistics ("AI is transforming every industry")
- Technical descriptions of why their current approach is suboptimal
- Anything that sounds like you copied it from their website
The client should read this section and think: "Yes, this is exactly right. They get it."
Section 3: Proposed Solution
State clearly what you will build, without technical jargon, and what it will do.
Template:
We will build a custom AI model trained specifically on [client's data type]. This model will [specific capability] with [specific accuracy] — significantly better than the [current approach] currently delivering [current accuracy]. The model will run on [client's infrastructure / cloud location], meaning [data privacy benefit]. After deployment, [maintenance approach].
The benefit over generic AI: Make the comparison explicit. If you have benchmarks (94% vs 71% accuracy for fine-tuned vs GPT-4 prompting on similar tasks), put them here. Specific numbers are more persuasive than any adjective.
Section 4: Methodology
This is where the technical content belongs — and only here. Write it for a technical stakeholder who wants to understand the approach, not for the CEO who approved the budget.
Subsections:
4.1 Data Assessment and Preparation
What data you will use, in what format, and what preprocessing is required. If data quality is uncertain, describe the assessment process.
4.2 Model Training Approach
High-level: the fine-tuning approach (LoRA), base model selection rationale, and training configuration. Do not over-explain; reference your track record instead.
4.3 Evaluation Process
How you will measure success: held-out test set, specific metrics (accuracy, F1, BLEU, human evaluation), and what score constitutes "done."
4.4 Deployment Architecture
Where the model runs, how it integrates with their existing systems, and a security and data flow diagram.
4.5 Ongoing Maintenance
How the model gets updated, who monitors performance, and what triggers a retraining cycle.
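To make the evaluation subsection concrete, here is a minimal Python sketch of the kind of acceptance check it might commit to. The metric implementations are standard; the label data and the 0.88 threshold are hypothetical placeholders, not figures from any real engagement.

```python
# Held-out evaluation sketch. The labels and the 0.88 acceptance
# threshold below are hypothetical placeholders.

def accuracy(y_true, y_pred):
    """Fraction of held-out examples the model classifies correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    """F1 score for a binary classification task."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical held-out labels vs. model predictions
y_true = [1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 1, 1, 0]

ACCEPTANCE_THRESHOLD = 0.88  # the score the proposal defines as "done"
acc = accuracy(y_true, y_pred)
print(f"accuracy={acc:.2f}, done={acc >= ACCEPTANCE_THRESHOLD}")
```

Writing the "done" criterion as an executable check like this also gives you the artifact to hand over at client acceptance: run it on the held-out set, attach the output to the accuracy report.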
Section 5: Timeline
A milestone-based timeline, not a Gantt chart. Clients want to know when they will see results, not the internal sequencing of your work.
| Milestone | Timeframe | What You Deliver |
|---|---|---|
| Data assessment complete | Week 1-2 | Data quality report, dataset size confirmed |
| First model trained | Week 3-4 | Model v1 on test environment |
| Evaluation and iteration | Week 4-5 | Accuracy report, model v2 if needed |
| Integration complete | Week 5-6 | Model in production, integration tested |
| Handoff and training | Week 6 | Documentation, team training session |
Tie payment milestones to delivery milestones (not calendar dates). This protects you if the client delays on data delivery.
Section 6: Investment
State the price clearly. Do not bury it or apologize for it.
Structure that converts:
- Project fee (broken into milestones): $XX,XXX
- Monthly retainer (model maintenance): $XXX/month
- What's included in each
Always include the ROI calculation. If the client is spending $4,200/month on API costs and you will reduce that to $200/month, their annual savings are $48,000. A $14,500 project fee pays for itself in under four months. Show this explicitly:
- Current monthly AI spend: $4,200
- Post-deployment estimated cost: $200/month
- Monthly savings: $4,000
- Annual savings: $48,000
- Project investment: $14,500
- Payback period: 3.6 months
When clients see this math, the price objection mostly disappears.
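The payback math is simple enough to keep as a reusable helper so every proposal shows the same three figures. A minimal Python sketch; the function name is my own, and the inputs are just the fictional Meridian Legal numbers from this article:

```python
def payback_summary(current_monthly, post_monthly, project_fee):
    """ROI figures for the Investment section, all amounts in dollars.

    current_monthly: what the client spends on AI today (per month)
    post_monthly:    estimated spend after deployment (per month)
    project_fee:     one-time project investment
    """
    monthly_savings = current_monthly - post_monthly
    return {
        "monthly_savings": monthly_savings,
        "annual_savings": monthly_savings * 12,
        "payback_months": round(project_fee / monthly_savings, 1),
    }

# The fictional Meridian Legal example from the executive summary
print(payback_summary(4200, 200, 14500))
```

Drop the three output figures straight into the Investment section; the payback period in months is the number clients anchor on.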
Section 7: Why Us
Brief — 3-5 paragraphs or bullet points. Include:
- Relevant experience (similar vertical, similar use case)
- Specific results from past work (with client permission)
- Technical approach differentiator (own the model, local deployment, no ongoing API dependency)
- Process reliability (timeline adherence, communication cadence, deliverable documentation)
Avoid: long bios, generic capability lists, marketing language.
Common Proposal Mistakes
Too long. Proposals over 10 pages are rarely read in full. 6-8 pages is the sweet spot.
Technology-first. If the executive summary mentions technical terms, you lose non-technical decision-makers immediately.
No ROI calculation. Every proposal should include the math that justifies the investment.
No specific accuracy numbers. "Your model will be accurate" loses to "Your model will achieve ≥88% accuracy on your ticket classification task, compared to 72% with your current GPT-4 prompting approach."
Vague deliverables. "We will train a model and deploy it" loses to a clear list of exactly what files, documentation, and capabilities are delivered.
Ship AI that runs on your users' devices.
Ertas early bird pricing starts at $14.50/mo — locked in for life. Plans for builders and agencies.
Further Reading
- How to Scope a Custom AI Model Project — Before the proposal, the discovery
- AI Agency Sales Process — The full sales cycle from outreach to contract
- AI Agency Pricing Strategy — Setting rates and packaging services
- How to Start an AI Agency in 2026 — The full agency launch playbook
- AI Agency Client Acquisition — Getting prospects to the proposal stage