
NIST AI RMF vs. EU AI Act vs. ISO/IEC 42001: A Practical Comparison for Enterprise Teams
Three major AI governance frameworks, each from a different jurisdiction and philosophy. Here's what each requires, where they overlap, and how to build a unified compliance posture.
If you operate globally, you're likely subject to all three of the major AI governance frameworks simultaneously. Your US federal contracts reference NIST. Your European operations face EU AI Act obligations. Your enterprise customers and auditors ask about ISO/IEC 42001 certification.
Building three separate compliance programs is expensive and redundant. Building one program that maps to all three is achievable — but only if you understand where the frameworks diverge.
This is a practical comparison, not a policy overview. The goal is a compliance map you can act on.
Framework 1: NIST AI Risk Management Framework (AI RMF 1.0)
Jurisdiction and enforcement: US voluntary guidance, published January 2023. No direct enforcement mechanism. Effectively required for federal contractors through procurement requirements and NIST cybersecurity framework precedent. Heavily referenced in US financial regulatory guidance (OCC, Federal Reserve, FDIC AI risk management guidance).
Core philosophy: risk-based and process-focused. NIST describes good practices and principles without prescribing specific technical implementations. The framework assumes organizations will adapt it to their context rather than follow it prescriptively.
Structure: four core functions — Govern, Map, Measure, Manage — organized into categories and subcategories. The Govern function establishes organizational policies and accountability. Map identifies AI risks in context. Measure analyzes and monitors those risks. Manage responds to and improves on them.
Key strength: technology-neutral and comprehensive. Because it doesn't specify technical requirements, it works across AI types and use cases. It also maps cleanly onto existing enterprise risk management frameworks (ISO 31000, COSO), which means organizations with mature ERM programs can extend their existing structures rather than building parallel AI-specific ones.
Key limitation: voluntary means unverifiable externally. Customers, partners, and regulators cannot obtain independent assurance that your organization follows NIST AI RMF. There's no certification. The framework is useful internally but doesn't satisfy third-party assurance requirements.
Framework 2: EU AI Act
Jurisdiction and enforcement: EU regulation, entered into force in 2024 with phased application through 2026 and beyond. Applies to any AI system placed on the EU market or used in ways that affect EU individuals, regardless of where the provider or deployer is headquartered. Penalties for non-compliance: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations.
Core philosophy: risk-tiered and rights-based. The EU AI Act classifies AI systems by risk level and imposes progressively more demanding requirements on higher-risk systems. The rights of individuals affected by AI decisions are central to the framework's design.
Structure:
- Prohibited practices (Article 5): AI systems that are banned outright, including social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and manipulative techniques that exploit vulnerabilities.
- High-risk systems (Article 6 + Annex III): AI systems in sectors including critical infrastructure, education, employment, essential services (credit, insurance), law enforcement, migration, and administration of justice. High-risk systems face extensive compliance obligations.
- General-purpose AI models (Chapter V): specific obligations for large AI models released for general use, including transparency about training data and model capabilities.
- Limited and minimal risk: limited-risk systems carry transparency obligations (you must disclose when someone is interacting with AI); minimal-risk systems face no substantive governance requirements.
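As a rough illustration of the tiering logic (not legal advice: the use-case names and tier assignments below are simplified assumptions, and real scoping requires case-by-case analysis against Article 5 and Annex III), the classification can be sketched as a lookup:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5: banned outright
    HIGH = "high"              # Article 6 + Annex III: full compliance obligations
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"        # no substantive requirements

# Simplified, illustrative mapping of use cases to tiers.
# Real scoping requires legal analysis of the Act's text.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH,       # essential services (Annex III)
    "resume_screening": RiskTier.HIGH,     # employment (Annex III)
    "customer_chatbot": RiskTier.LIMITED,  # must disclose AI interaction
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a use case, defaulting to minimal."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In practice, a classifier like this is only useful for triage: anything that lands in or near the high-risk tier still goes to legal review.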
Key strength: legally binding with specific documentation requirements. Article 10 specifies what training data governance must look like for high-risk systems. Article 14 specifies human oversight requirements. Article 12 specifies logging (record-keeping) requirements. Annexes IV and VII specify the technical documentation required. The requirements themselves are unambiguous; what remains open is what counts as sufficient compliance, which implementing guidance is still resolving.
Key limitation: complex applicability analysis. Determining whether your AI system qualifies as "high-risk" under Annex III requires careful legal analysis. The EU AI Act also creates obligations for both providers (who develop and market AI systems) and deployers (who use AI systems in their operations), and those obligations differ. Compliance timelines are staggered, and guidance on many provisions is still being developed.
Framework 3: ISO/IEC 42001:2023
Jurisdiction and enforcement: international voluntary standard, published December 2023. Certifiable by accredited third-party auditors, similar to ISO 27001 (information security) and ISO 9001 (quality management). No regulatory enforcement, but certification is increasingly required by enterprise customers and government procurement.
Core philosophy: management system approach. ISO/IEC 42001 doesn't specify what your AI governance should look like in terms of specific technical controls. It specifies that you should have a system for defining your AI governance requirements, implementing them, monitoring them, and improving them. The standard validates the process, not the specific content.
Structure: 10 clauses following the ISO Harmonized Structure (the same structure as ISO 27001 and ISO 9001), enabling integrated management systems for organizations already certified against other standards. Annex A contains 38 controls organized under nine control objectives, including data for AI systems (A.7), assessing impacts of AI systems (A.5), and third-party and customer relationships (A.10).
Key strength: the only major AI governance framework with third-party certification. Customers and partners can verify your compliance without conducting their own audits. For enterprises that already have ISO 27001 or ISO 9001 certification, extending to ISO/IEC 42001 reuses existing infrastructure (internal audit functions, management review processes, document control).
Key limitation: the management system approach means ISO/IEC 42001 certification doesn't tell you exactly what your AI governance contains — only that you have a governed system. Two organizations can both be certified with very different actual practices. For organizations that want prescriptive guidance on what good AI governance looks like technically, ISO/IEC 42001 alone isn't enough.
How They Overlap: A Practical Comparison
| Dimension | NIST AI RMF | EU AI Act | ISO/IEC 42001 |
|---|---|---|---|
| Risk classification | Required, flexible | Required, specific (Annex III) | Required, organization-defined |
| Documentation | Referenced, flexible | Annex IV prescriptive for high-risk | Clause 7.5 records required |
| Human oversight | Principle-based (Govern 3.2) | Article 14 specific requirements | Annex A life-cycle controls referenced |
| Audit trails / logging | Measure function referenced | Article 12 specific requirements | Clause 9.1 monitoring records |
| Incident response | Manage function referenced | Article 73 serious incident reporting | Clause 10.1 improvement process |
| Third-party assurance | No certification | Conformity assessment for high-risk | Certifiable by accredited auditor |
The substantial overlap is in risk classification, documentation, and monitoring. All three frameworks require you to know what AI systems you have, assess their risk, document your governance approach, and monitor outcomes.
Building One Program That Satisfies All Three
The most efficient path: start with EU AI Act compliance as your foundation.
The EU AI Act is the most prescriptive framework for organizations with high-risk systems. If you satisfy its Annex IV technical documentation requirements, Article 14 human oversight requirements, and Article 12 logging requirements, you will cover most of NIST AI RMF's Govern, Map, Measure, and Manage functions and most of ISO/IEC 42001's Annex A controls.
The specific mapping:
EU AI Act Article 10 (training data requirements) → NIST Map 1.6 (data quality) → ISO 42001 A.7.2 (data for AI systems). All three require you to document training data sources, quality assessment, and governance. Satisfying EU AI Act Article 10 for high-risk systems covers the others.
EU AI Act Article 12 (logging) → NIST Measure 2.5 (monitoring) → ISO 42001 Clause 9.1 (monitoring and measurement). The EU AI Act specifies minimum logging requirements for high-risk systems. Implementing those logs satisfies the monitoring requirements of the other two frameworks.
EU AI Act Article 9 (risk management system) → NIST Govern 1.1-1.4 (risk governance) → ISO 42001 Clause 6.1 (risk assessment). The EU AI Act requires a documented risk management system for high-risk systems. That documented system is essentially what NIST's Govern function requires and what ISO 42001 Clause 6 calls for.
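A mapping like this is easiest to maintain as a machine-readable crosswalk. The sketch below is illustrative only: the labels mirror this article's mapping at a deliberately coarse level and should be verified against the current framework texts before use.

```python
# Illustrative crosswalk: which NIST AI RMF and ISO/IEC 42001 items a
# given EU AI Act requirement is treated as covering. Labels are
# simplified assumptions, not authoritative clause citations.
CROSSWALK: dict[str, list[str]] = {
    "EU AI Act Art. 9 (risk management system)": [
        "NIST AI RMF Govern (risk governance)",
        "ISO/IEC 42001 Clause 6.1 (risk assessment)",
    ],
    "EU AI Act Art. 10 (data and data governance)": [
        "NIST AI RMF Map (data quality)",
        "ISO/IEC 42001 Annex A (data for AI systems)",
    ],
    "EU AI Act logging requirements": [
        "NIST AI RMF Measure (monitoring)",
        "ISO/IEC 42001 Clause 9.1 (monitoring and measurement)",
    ],
}

def covered(satisfied: set[str]) -> list[str]:
    """Return the NIST/ISO items covered by the satisfied EU AI Act requirements."""
    return [item for req in satisfied for item in CROSSWALK.get(req, [])]
```

Keeping the crosswalk as data rather than prose makes it easy to feed into a GRC tool or a compliance dashboard and to update when implementing guidance lands.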
What you'll need to add for ISO certification specifically: ISO/IEC 42001 has strong requirements around management commitment (Clause 5), organizational roles (Clause 5.3), and continual improvement (Clause 10) that aren't directly addressed by EU AI Act compliance. These are management system requirements, not technical ones — relatively easy to implement but requiring specific documentation.
What you'll need to add for NIST specifically: NIST AI RMF's transparency subcategories and its emphasis on workforce diversity in AI development (Govern 3.1) aren't directly covered by the EU AI Act. These are relatively low-effort additions.
Where Data Governance Is the Pinch Point
All three frameworks converge on training data governance as a critical compliance requirement. EU AI Act Article 10 is the most specific: it requires documented governance of training, validation, and test data sets, including data sources, characteristics, and quality assessment. NIST Map 1.6 and ISO/IEC 42001 A.7.2 require similar documentation in less prescriptive terms.
For organizations using AI in production, this means every dataset used to train or fine-tune a model needs documented lineage: where did the data come from, what transformations were applied, what quality assessment was performed, who authorized its use?
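A minimal sketch of what such a lineage record could look like (the `DatasetLineageRecord` type, its field names, and the sample values are illustrative assumptions, not anything mandated by the frameworks):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # immutable once written, like an audit record
class DatasetLineageRecord:
    dataset_id: str
    source: str                       # where the data came from
    transformations: tuple[str, ...]  # cleaning/labeling/augmentation steps applied
    quality_assessment: str           # summary of the quality checks performed
    authorized_by: str                # who approved the dataset for use
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical example record for a fine-tuning dataset.
record = DatasetLineageRecord(
    dataset_id="loan-apps-2025-q1",
    source="internal CRM export, 2025-01-15",
    transformations=("dedupe", "pii-redaction", "label: default/no-default"),
    quality_assessment="null rate < 1%; label agreement 0.92 (two annotators)",
    authorized_by="data.governance@example.com",
)
```

One record per dataset version, written at preparation time and never mutated afterward, is the shape of evidence all three frameworks will accept.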
EU AI Act Article 10's requirements for high-risk systems, and ISO/IEC 42001 Annex A control A.7.2's requirements for data management, are exactly what Ertas Data Suite's pipeline generates by default. Every ingestion, cleaning, labeling, and augmentation operation creates an immutable record with timestamps and operator attribution. The documentation that satisfies all three frameworks is generated as a byproduct of the data preparation workflow, not assembled after the fact.
Turn unstructured data into AI-ready datasets — without it leaving the building.
On-premise data preparation with full audit trail. No data egress. No fragmented toolchains. EU AI Act Article 12 compliance built in.
Keep reading

The EU AI Act's High-Risk System Requirements: What They Demand and What They Don't Tell You
The EU AI Act's Annex III defines high-risk AI categories. If you're deploying in healthcare, legal, finance, or HR, you're almost certainly in scope. Here's what compliance actually requires.

AI Model Inventory Template: Track Every Model Your Organization Runs in Production
SR 11-7, EU AI Act, and ISO 42001 all require a model inventory. Here's a complete template with every field you need, plus guidance on what to capture and why.

EU AI Act Article 12 Documentation Checklist: What High-Risk AI Providers Must Log
EU AI Act Article 12 requires providers of high-risk AI systems to maintain detailed logs. This checklist covers every requirement with practical implementation guidance.