NIST AI RMF & AI Compliance
Implementing the NIST AI Risk Management Framework
Overview
The NIST AI Risk Management Framework (AI RMF 1.0), released by the National Institute of Standards and Technology in January 2023, is a voluntary framework designed to help organizations manage risks associated with AI systems throughout their lifecycle. Unlike binding regulations, the AI RMF provides a flexible, structured approach that organizations can adapt to their specific context, capabilities, and risk tolerance. It has rapidly become a leading reference framework for AI governance in the United States and is increasingly recognized internationally.
The framework is organized into two parts: Part 1 provides foundational information that frames AI risks and trustworthiness, and Part 2 describes the Core, which provides actionable guidance. The Core consists of four functions (Govern, Map, Measure, and Manage), each containing categories and subcategories that describe specific outcomes and activities. This structure intentionally mirrors the NIST Cybersecurity Framework's familiar organize-by-function approach, making it accessible to organizations already experienced with NIST standards.
For AI development teams, the AI RMF provides a common language and systematic methodology for identifying, assessing, and mitigating AI risks. It addresses risks related to bias and fairness, transparency and explainability, privacy, security, safety, and accountability. The framework emphasizes that AI risk management should be integrated into broader organizational risk management practices rather than treated as a separate, isolated activity. This holistic approach ensures that AI-specific risks are considered alongside operational, financial, and reputational risks.
AI-Specific Requirements
The Govern function establishes the organizational structures, policies, and processes for AI risk management. It calls for cultivating a risk-aware culture, defining roles and responsibilities for AI governance, implementing policies and procedures for AI development and deployment, and establishing mechanisms for ongoing monitoring and review. Organizations should document their AI risk tolerance, define escalation procedures for identified risks, and ensure that AI governance is supported by adequate resources and executive commitment.
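One concrete way to make a documented risk tolerance and escalation procedure operational is a risk-register entry with an explicit escalation rule. The sketch below is a minimal illustration; the field names, scoring scale, and threshold are assumptions for this example, not part of the NIST AI RMF itself.

```python
from dataclasses import dataclass

# Hypothetical risk-register entry supporting the Govern function:
# a documented tolerance plus an automatic escalation check.
@dataclass
class AIRiskEntry:
    system: str
    risk: str
    severity: int    # 1 (low) .. 5 (critical)
    likelihood: int  # 1 (rare) .. 5 (frequent)
    tolerance: int   # maximum acceptable severity * likelihood score

    def score(self) -> int:
        return self.severity * self.likelihood

    def needs_escalation(self) -> bool:
        # Escalate when the risk score exceeds the documented tolerance.
        return self.score() > self.tolerance

entry = AIRiskEntry("resume-screener", "demographic bias",
                    severity=4, likelihood=3, tolerance=9)
print(entry.score(), entry.needs_escalation())  # 12 True
```

Encoding the tolerance alongside the risk makes escalation decisions auditable rather than ad hoc.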
The Map function focuses on understanding the context in which AI systems operate. This includes identifying and categorizing AI systems, assessing the potential impacts of AI system failures or misuse, understanding the characteristics of the data used for training and operation, and evaluating the operational environment and stakeholder expectations. For training data specifically, the Map function calls for understanding data provenance, assessing data quality and representativeness, identifying potential biases, and documenting data collection and processing methodologies.
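The data-provenance activities above can be made concrete by logging each processing step with its source, method, and a content hash. The record schema below is an illustrative assumption, not a NIST-mandated format.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative provenance record for the Map function: each processing
# step is logged with its input source, method, and a content hash so
# that data lineage can be reconstructed and verified later.
def provenance_record(step: str, source: str, method: str, data: bytes) -> dict:
    return {
        "step": step,
        "source": source,
        "method": method,
        "sha256": hashlib.sha256(data).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

raw = b"name,age\nalice,30\n"
rec = provenance_record("ingest", "hr_export.csv", "manual upload", raw)
print(json.dumps(rec, indent=2))
```

Hashing the data at each step lets an auditor later confirm that the dataset on disk is the one the record describes.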
The Measure function addresses the assessment and analysis of AI risks through quantitative and qualitative methods. Organizations should implement testing and evaluation procedures, measure AI system performance against defined metrics, assess bias and fairness across relevant demographic groups, evaluate security and privacy risks, and conduct regular reassessments as systems and contexts evolve. The Manage function then takes the outputs of mapping and measurement to prioritize and treat identified risks, implement controls, communicate risk information to stakeholders, and continuously monitor the effectiveness of risk treatments.
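One example of a quantitative fairness check under the Measure function is the demographic parity difference: the gap in positive-outcome rates between groups. The group labels and the 0.10 tolerance below are assumptions; organizations set their own metrics and thresholds.

```python
# Minimal sketch of one Measure-function metric: demographic parity
# difference, the absolute gap in positive-outcome rates between groups.
def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a: list[int], group_b: list[int]) -> float:
    return abs(positive_rate(group_a) - positive_rate(group_b))

group_a = [1, 1, 0, 1]  # 75% positive outcomes
group_b = [1, 0, 0, 1]  # 50% positive outcomes
gap = demographic_parity_diff(group_a, group_b)
print(f"parity gap: {gap:.2f}")  # 0.25

THRESHOLD = 0.10  # assumed tolerance, set per organization
print("reassess" if gap > THRESHOLD else "ok")
```

A single metric is never sufficient on its own; the framework's emphasis on regular reassessment means such checks should run on every retraining cycle.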
How Ertas Helps
Ertas supports the NIST AI RMF's Map function by providing comprehensive data lineage and provenance tracking. Understanding where your training data comes from, how it was processed, and what transformations were applied is fundamental to mapping AI risks. Ertas Data Suite records the complete lifecycle of every dataset, from initial ingestion through cleaning, transformation, augmentation, and final preparation for training. This provenance information directly supports the Map function's requirements for understanding data characteristics, quality, and potential biases.
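The dataset lifecycle described above can be pictured as a hash-linked chain of stages, where each entry references the hash of the previous one. This is a conceptual sketch only; the class and method names are hypothetical, not the actual Ertas interface.

```python
import hashlib

# Conceptual sketch of a hash-linked lineage chain: each stage's hash
# covers both its data and its parent's hash, so tampering with any
# earlier stage invalidates every later entry.
class Lineage:
    def __init__(self) -> None:
        self.chain: list[dict] = []

    def record(self, stage: str, data: bytes) -> None:
        parent = self.chain[-1]["hash"] if self.chain else None
        digest = hashlib.sha256((parent or "").encode() + data).hexdigest()
        self.chain.append({"stage": stage, "parent": parent, "hash": digest})

lin = Lineage()
lin.record("ingest", b"raw rows")
lin.record("clean", b"deduplicated rows")
lin.record("train-ready", b"tokenized rows")
print([s["stage"] for s in lin.chain])  # ['ingest', 'clean', 'train-ready']
```

Chaining the hashes is what turns a list of steps into verifiable provenance: each stage's record commits to the full history before it.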
For the Measure function, Ertas provides the audit infrastructure needed to track and evaluate AI system behavior over time. Comprehensive audit logs capture all interactions with data and models, enabling organizations to measure processing integrity, track access patterns, and identify anomalies that might indicate emerging risks. The structured workflow in Ertas Studio captures training configurations, evaluation metrics, and performance benchmarks, providing the quantitative evidence base that the Measure function requires for ongoing risk assessment.
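A simple illustration of the anomaly detection described above is flagging users whose audit-log access counts far exceed the group's median. The log format and the 3x-median threshold are assumptions for this sketch, not a description of Ertas internals.

```python
from collections import Counter

# Illustrative anomaly check over audit-log access events: flag any
# user whose access count exceeds three times the median count.
events = [
    {"user": "ana", "action": "read_dataset"},
    {"user": "ana", "action": "read_dataset"},
    {"user": "bob", "action": "read_dataset"},
    {"user": "eve", "action": "read_dataset"},
] + [{"user": "eve", "action": "read_dataset"}] * 9

counts = Counter(e["user"] for e in events)
median = sorted(counts.values())[len(counts) // 2]
flagged = [u for u, c in counts.items() if c > 3 * median]
print(flagged)  # eve has 10 reads against a median of 2
```

Even a crude heuristic like this demonstrates the Measure-function point: without a complete audit trail to run it over, emerging risks stay invisible.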
The Manage function benefits from Ertas's technical controls that directly mitigate identified risks. The on-premise architecture manages data security and privacy risks by eliminating external data transmission. PII redaction manages privacy risks in training data. Vault encryption and access controls manage confidentiality and unauthorized access risks. Air-gapped deployment manages data exfiltration risks. These technical controls serve as concrete risk treatments that organizations can document in their AI risk management plans, demonstrating systematic implementation of the framework's Manage function outcomes.
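As a rough illustration of the kind of PII redaction control mentioned above, the sketch below masks emails and US-style phone numbers with regular expressions. A production redaction engine covers far more identifier types and edge cases; this is a minimal example, not the Ertas implementation.

```python
import re

# Minimal regex-based PII redaction sketch: replaces emails and
# US-style phone numbers with labeled placeholders.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com or 555-867-5309."))
# Contact [EMAIL] or [PHONE].
```

Labeled placeholders (rather than blank deletion) preserve the structure of the text for training while documenting exactly what class of data was removed.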
Relevant Ertas Features
- Data lineage and provenance tracking
- Comprehensive audit trail
- PII redaction engine
- On-premise air-gapped deployment
- Vault encryption and access controls
- Structured training workflow documentation