What is GEPA?
Generalized Experience-based Procedural Acquisition — a self-improvement mechanism for AI agents that creates reusable skills from successful task completions and refines them through use, popularized by Nous Research's Hermes Agent framework.
Definition
GEPA (Generalized Experience-based Procedural Acquisition) is a self-improvement mechanism for AI agents that converts successful task experiences into reusable 'skills' the agent can invoke on similar future tasks. Rather than starting every task from scratch, an agent with GEPA accumulates a library of skills derived from its own successful completions — and those skills are themselves refined through repeated use, becoming faster and more reliable over time.
The pattern was introduced by Nous Research in their Hermes Agent framework (released February 2026, published as an ICLR 2026 Oral paper). Skills are LLM-readable code or structured prompts, so they are inspectable and editable rather than opaque learned weights. Empirical results from Nous show Hermes agents becoming roughly 40% faster on repeated tasks after accumulating 20+ self-generated skills; the speedup comes from reusing skills rather than re-deriving solutions each time. GEPA is a concrete implementation of the long-discussed 'continually improving agent' pattern, distinct from one-shot fine-tuning approaches.
Why It Matters
Most agent systems treat each task as independent — an agent that solves a complex problem today repeats most of the same reasoning tomorrow on a similar problem. GEPA changes this by making the agent's accumulated experience a first-class artifact: skills are persisted, refined, and reused. For long-running production agent deployments, this compounds capability over time without requiring continuous fine-tuning. The pattern also creates a natural training-data feedback loop: skills can be exported and used as fine-tuning data to update the underlying base model.
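The export step of that feedback loop could be sketched as follows. The JSONL prompt/completion shape and the field names are assumptions for illustration, not a documented Hermes or GEPA format:

```python
import json

def export_skills_as_finetune_data(skills: list[dict], path: str) -> None:
    """Serialize each skill as a prompt/completion pair in JSONL,
    a common shape for supervised fine-tuning datasets.

    Each input dict is assumed to carry 'trigger' (when the skill
    applies) and 'procedure' (the skill body) keys.
    """
    with open(path, "w") as f:
        for skill in skills:
            record = {
                "prompt": f"Task: {skill['trigger']}",
                "completion": skill["procedure"],
            }
            f.write(json.dumps(record) + "\n")

# Hypothetical accumulated skills, exported to a temp file.
import os, tempfile
skills = [
    {"trigger": "summarize a csv", "procedure": "1. Load  2. Stats  3. Report"},
    {"trigger": "scrape a page", "procedure": "1. Fetch  2. Parse  3. Extract"},
]
path = os.path.join(tempfile.mkdtemp(), "skills.jsonl")
export_skills_as_finetune_data(skills, path)
```

Fine-tuning the base model on such a file bakes the most common procedures into the weights, closing the loop the paragraph above describes.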
Key Takeaways
- GEPA agents create reusable skills from successful task completions
- Skills are inspectable code or structured prompts — not opaque learned weights
- Hermes agents demonstrate ~40% speedup on repeated tasks after 20+ accumulated skills
- Pattern enables self-improvement without continuous fine-tuning
- Skills can be exported as fine-tuning data, creating a compounding improvement loop
How Ertas Helps
When deploying Hermes Agent or similar GEPA-enabled frameworks in production, Ertas Studio supports the next stage of the improvement loop: export the agent's accumulated GEPA skill library as training data, then fine-tune the underlying base model on its own self-generated procedural knowledge. The fine-tuned model then performs better on the patterns it has seen most, reducing the need for skill-library lookups on common tasks while preserving skill-based handling for novel ones.