
The EU AI Act's High-Risk System Requirements: What They Demand and What They Don't Tell You
The EU AI Act's Annex III defines high-risk AI categories. If you're deploying in healthcare, legal, finance, or HR, you're almost certainly in scope. Here's what compliance actually requires.
Most enterprises that will be subject to the EU AI Act's high-risk system requirements don't know it yet. The categories are broader than the headlines suggest, and the compliance obligations go well beyond a GDPR-style notice-and-consent exercise.
Here's what the regulation actually requires, when it applies, and where the compliance gap is largest for typical enterprise AI deployments.
The Eight High-Risk Categories (Annex III)
The EU AI Act defines high-risk AI systems in Annex III. Any AI system that falls into one of these categories is subject to the full set of high-risk system requirements in Articles 8-15.
1. Biometrics — Remote biometric identification systems, covering both real-time and retrospective ("post") identification of natural persons. Biometric categorization based on sensitive attributes. Emotion recognition systems. This category is why facial recognition AI in employee monitoring or customer verification is almost certainly in scope.
2. Critical infrastructure — AI used in management or operation of critical infrastructure (energy, water, gas, heating, transport). A predictive maintenance AI for a power grid qualifies. An AI-assisted SCADA system qualifies. This isn't limited to AI that makes safety-critical decisions directly — it includes AI that informs those decisions.
3. Education and vocational training — AI that determines access to educational institutions, evaluates students, monitors and detects cheating, determines learning pathways. If you're selling AI to schools or universities in Europe for admissions, assessment, or proctoring, you're in scope.
4. Employment and workers — AI used in recruitment, selection, evaluation during the hiring process, promotion and termination decisions, task allocation, monitoring worker performance and behavior. This is the category that catches more enterprises than any other. AI resume screening, AI interview analysis, AI performance management tools — all in scope.
5. Essential services access — AI systems for credit decisions, insurance underwriting and claims assessment, emergency services dispatch, benefits eligibility determinations. Credit scoring AI, insurance AI, automated benefits processing — all in scope. If your AI affects who gets access to credit, insurance, or government services, it's high-risk by definition.
6. Law enforcement — AI used for risk assessments of individuals, evaluating the reliability of evidence, individual risk profiling, and crime analytics and prediction. Most law enforcement AI is in scope, with very limited exceptions.
7. Migration and asylum management — AI for border control, examination of asylum applications, assessing risks for irregular migration. Mostly governmental, but affects vendors selling to border authorities.
8. Administration of justice and democratic processes — AI assisting courts in researching, interpreting, and applying the law; AI used in electoral contexts. Legal AI tools used by courts and election-adjacent AI are both in scope.
Why More Enterprises Are in Scope Than They Think
The employment category alone is expansive. If your enterprise HR function uses any of the following, you are almost certainly deploying a high-risk AI system:
- AI-assisted resume screening or applicant tracking
- AI-assisted video interview analysis
- AI-based skills assessment or cognitive testing
- AI performance management or productivity monitoring
- AI-driven workforce planning that affects headcount decisions
"But we use it just as a tool to help HR make decisions, not to make decisions automatically" is not an exemption. Annex III covers AI systems that are "used" in these processes, regardless of whether the AI has final decision authority. AI in the loop for employment decisions is still in scope.
Similarly, the essential services category catches any AI that influences credit decisions — not just automated credit scoring systems, but AI-assisted underwriting, AI fraud detection that results in account restrictions, and AI that produces risk assessments used in lending workflows.
What Articles 9-15 Actually Require
Article 9: Risk Management System
Not a one-time risk assessment — a continuous risk management system. Article 9 requires providers of high-risk AI to: identify and analyze known and foreseeable risks, estimate and evaluate risks that may emerge from use, adopt risk management measures, test the system to verify that risk management measures work, and document the risk management process continuously.
"Continuous" is load-bearing here. This isn't a pre-deployment checklist. It's an ongoing program that updates as the system operates in the field, as new risks emerge, and as the deployment context changes.
Article 10: Training, Validation, and Testing Data
This is the article that most enterprises are least prepared for. Article 10 requires that training, validation, and testing datasets:
- Meet appropriate quality criteria for their intended purpose
- Are examined for biases that may affect persons in the use context
- Account for characteristics specific to the geographic, behavioral, or functional setting
- Are relevant, representative, complete, and sufficiently error-free
- Have data lineage — origin, collection method, preparation operations
Article 10 is not about GDPR compliance for personal data in training sets. It's about training data governance. It requires documented quality criteria, bias examination, and provenance tracking for every dataset used to train or validate a high-risk system.
Most enterprises using cloud-based AI APIs cannot provide this documentation because they didn't prepare their training data with auditability in mind. If you fine-tuned a model on proprietary data and can't produce documentation of how that data was collected, examined for bias, and processed, you're not Article 10 compliant.
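What that documentation can look like as data: a minimal provenance manifest, versioned alongside each dataset. This is a sketch under assumed field names; the Act prescribes outcomes, not schemas.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetRecord:
    """Provenance and governance metadata for one training/validation/test dataset."""
    name: str
    origin: str                     # where the data came from
    collection_method: str          # how it was gathered
    preprocessing_steps: list[str]  # every preparation operation, in order
    quality_criteria: list[str]     # documented, checkable criteria
    bias_examinations: list[str]    # examinations performed and where results live
    geographic_context: str         # setting the data is meant to represent

record = DatasetRecord(
    name="claims-2023-train",
    origin="internal claims database, 2019-2023",
    collection_method="export of closed claims, manually de-duplicated",
    preprocessing_steps=["drop records with missing outcome", "normalize currency to EUR"],
    quality_criteria=["<1% missing values per field", "label agreement >= 0.9"],
    bias_examinations=["outcome rate by age band", "outcome rate by gender"],
    geographic_context="EU retail insurance customers",
)
print(json.dumps(asdict(record), indent=2))  # versioned alongside the dataset itself
```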
Article 11: Technical Documentation
Before deploying a high-risk AI system, providers must prepare technical documentation covering the elements specified in Annex IV:
- General description of the AI system and its intended purpose
- Detailed description including components, architecture, and algorithms
- Description of the development process — methodology, design choices, assumptions
- Description of the monitoring, functioning, and control measures
- Description of the validation and testing procedures, including test data and results
- Risk management documentation (Article 9)
- Description of the changes made throughout the lifecycle
- List of standards applied and documentation of compliance
- Copy of the EU declaration of conformity
- Information security measures implemented
- Instructions for use
This is a pre-deployment documentation requirement, not a post-deployment reporting requirement. You need the documentation before you deploy, not after something goes wrong.
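One way to make the pre-deployment requirement enforceable is a release gate that fails when any documentation element is missing. A sketch, assuming one file per element; the file names mirror the list above and are purely illustrative.

```python
from pathlib import Path

# One file per Annex IV element -- the layout is illustrative, not prescribed by the Act.
REQUIRED_DOCS = [
    "general_description.md", "architecture.md", "development_process.md",
    "monitoring_and_control.md", "validation_and_testing.md", "risk_management.md",
    "lifecycle_changes.md", "standards_applied.md", "declaration_of_conformity.pdf",
    "security_measures.md", "instructions_for_use.md",
]

def deployment_gate(doc_dir: Path) -> None:
    """Raise before deployment if any required documentation element is absent."""
    missing = [d for d in REQUIRED_DOCS if not (doc_dir / d).exists()]
    if missing:
        raise RuntimeError(f"Deployment blocked: missing technical documentation: {missing}")
```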
Article 13: Transparency and Provision of Information to Deployers
High-risk AI systems must be transparent to their deployers (the businesses that use them) about: the purpose and limitations of the system, the level of accuracy, robustness, and cybersecurity against which testing was conducted, the circumstances under which the system may not operate reliably, the human oversight measures, and computational resources required.
This is upstream transparency — from providers to deployers. If you're buying high-risk AI from a vendor, you're entitled to this information. If you're selling it, you must provide it.
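If you're procuring high-risk AI, this upstream transparency can be turned into a structured vendor checklist. A sketch with assumed field names that loosely mirror Article 13's information items:

```python
from dataclasses import dataclass, fields

@dataclass
class ProviderDisclosure:
    """Information a deployer should receive from a provider (field names illustrative)."""
    intended_purpose: str
    known_limitations: str
    tested_accuracy: str          # the level against which testing was conducted
    robustness_and_security: str
    unreliable_conditions: str    # circumstances where the system may not operate reliably
    oversight_measures: str
    compute_requirements: str

def open_questions(d: ProviderDisclosure) -> list[str]:
    """Procurement checklist: any empty field is an unanswered question for the vendor."""
    return [f.name for f in fields(d) if not getattr(d, f.name).strip()]
```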
Article 14: Human Oversight
High-risk AI systems must be designed and developed to enable effective human oversight. This means the system must allow operators to fully understand the system's capabilities and limitations, monitor its operation, be able to identify and address anomalies and malfunctions, and be able to disregard, override, or intervene in the system's operation.
Critically: the ability to stop the system must be available. A high-risk AI system that cannot be shut down or overridden by a human operator does not comply with Article 14 regardless of what the technical documentation says.
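A minimal sketch of what "designed for oversight" can mean in code: the model is wrapped so an operator can stop it or override it, and every output is flagged as advisory rather than final. All names here are hypothetical.

```python
class OversightWrapper:
    """Wraps a model so a human operator can monitor, override, or stop it (sketch)."""

    def __init__(self, model):
        self.model = model  # any callable producing a score
        self.stopped = False

    def stop(self) -> None:
        """Article 14 in practice: a working stop must exist, not just be documented."""
        self.stopped = True

    def decide(self, case, operator_override=None):
        if self.stopped:
            raise RuntimeError("System stopped by human operator")
        if operator_override is not None:   # the human can disregard the model entirely
            return {"decision": operator_override, "source": "human"}
        return {"score": self.model(case), "final": False}  # advisory, not final
```

The design point is that the stop and override paths exist in the architecture itself, not only in the technical documentation.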
Article 15: Accuracy, Robustness, and Cybersecurity
High-risk AI systems must achieve an appropriate level of accuracy for their intended purpose, perform consistently throughout their lifecycle, and be resilient against errors, faults, and unauthorized attempts to alter their behavior. Accuracy must be declared in the technical documentation — quantified, not described qualitatively.
"Our model is highly accurate" doesn't satisfy Article 15. "Our model achieves 94.3% accuracy on the validation set described in Annex IV, with 91.7% accuracy on the underrepresented demographic subgroup" does.
The Deadline Reality
High-risk system requirements under the EU AI Act apply from 2 August 2026. If you're deploying high-risk AI systems today, you have time to build compliance, but not much of it, and the scope of what needs to be built is substantial.
The compliance timeline is compressed further by the documentation requirements. Article 11 requires pre-deployment documentation. You can't document Article 10 training data governance retroactively if you didn't track provenance at the time of data preparation.
The Article 12 Logging Requirement
Article 12 requires high-risk AI systems to technically allow the automatic recording of events (logs) over their lifetime, and Article 19 requires providers to keep those logs for a period appropriate to the system's intended purpose, and for at least six months. This is not about logging API calls. It's about logging the decision process: what the system did, with what inputs, with what parameters, to produce what outputs.
For most regulated uses, the practical retention period runs far beyond that statutory floor. Financial decisions may require logging for 10+ years under existing financial regulation. Healthcare applications may require the logs for the duration of patient records retention.
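In implementation terms, each decision becomes one append-only record. Here is a sketch of what such a record might contain; the fields are assumptions, since the Act specifies outcomes rather than log formats.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_file, system_version, inputs: dict, parameters: dict, output) -> None:
    """Append one decision record: what ran, on what, with what settings, producing what."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_version": system_version,
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "parameters": parameters,
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")  # append-only JSONL; retention per sector rules
```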
The On-Premise Advantage for Article 10 Compliance
Here's what Article 10 compliance actually looks like in practice: you need to be able to produce documentation showing the origin of every dataset used in training, the quality criteria applied, the bias examination conducted, and the preprocessing operations performed. This documentation must exist before deployment.
An on-premise data preparation pipeline with built-in audit logging is the architecturally correct solution for Article 10 compliance. Every transformation step logged with timestamp, operator ID, and parameters. Every data source documented. Every quality gate recorded. Every bias examination output preserved.
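As an illustration of that pattern, here is a minimal sketch of an audit decorator for pipeline steps. The function and file names are hypothetical, and a production system would also need tamper-evidence and access controls.

```python
import functools
import json
from datetime import datetime, timezone

def audited(operator_id: str, log_path: str = "audit.jsonl"):
    """Record timestamp, operator, step name, and parameters for each transformation."""
    def wrap(step):
        @functools.wraps(step)
        def run(data, **params):
            result = step(data, **params)
            with open(log_path, "a") as f:
                f.write(json.dumps({
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "operator": operator_id,
                    "step": step.__name__,
                    "parameters": params,
                }) + "\n")
            return result
        return run
    return wrap

@audited(operator_id="analyst-07")
def drop_incomplete_rows(rows, required_key="label"):
    """Example cleaning step: its every invocation lands in the audit log."""
    return [r for r in rows if r.get(required_key) is not None]
```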
This is precisely what Ertas Data Suite provides — a Tauri 2.0 native desktop application for AI data preparation with an audit trail built into every step of the pipeline: Ingest, Clean, Label, Augment, Export. The audit log exports in formats suitable for EU AI Act record-keeping (Article 12) and Annex IV technical documentation, with every transformation traceable to a specific operator and timestamp.
The EU AI Act doesn't prescribe technical implementations — only outcomes. You have flexibility in HOW you achieve compliance, but not WHETHER. The question is whether your current infrastructure makes the required outcomes achievable.
For enterprises in scope for high-risk system requirements — especially those in healthcare, finance, HR, and legal domains — cloud-based API AI with no data lineage tracking is the architecturally wrong starting point for compliance. The documentation requirements can't be satisfied retroactively.
Book a discovery call with Ertas → to understand what Article 10 and Article 12 compliance looks like in practice, and whether Ertas Data Suite's audit-logged, on-premise pipeline fits your compliance requirements.
For the broader context of what high-stakes AI deployment requires beyond EU AI Act compliance, see the complete guide →.
Turn unstructured data into AI-ready datasets — without it leaving the building.
On-premise data preparation with full audit trail. No data egress. No fragmented toolchains. EU AI Act Article 12 compliance built in.
Keep reading

EU AI Act Article 30 Documentation Checklist: What High-Risk AI Providers Must Log
EU AI Act Article 30 requires providers of high-risk AI systems to maintain detailed logs. This checklist covers every requirement with practical implementation guidance.

EU AI Act Compliance Timeline: What's Due by August 2026
A clear timeline of EU AI Act enforcement dates, what's already in effect, what's coming in August 2026, and what enterprises need to have in place for training data compliance.

Data Lineage Is Now a Legal Requirement — Are You Ready?
The EU AI Act makes data lineage mandatory for high-risk AI systems. Most enterprise pipelines have lineage gaps at every tool boundary. Here's what needs to change.