
    EU AI Act Article 30 Documentation Checklist: What High-Risk AI Providers Must Log

    EU AI Act Article 30 requires providers of high-risk AI systems to maintain detailed logs. This checklist covers every requirement with practical implementation guidance.

Ertas Team

    The EU AI Act imposes its most detailed documentation requirements on high-risk AI systems. Article 30, specifically, addresses logging — the automatic, continuous recording of events that allows competent authorities to audit a system's behavior, trace decisions back to specific inputs and model versions, and investigate incidents.

    If you develop, deploy, or import high-risk AI systems in or to the EU, Article 30 applies to you. This checklist covers every requirement, explains what it means in practice, and identifies the common implementation gaps that create compliance exposure.

    Who This Applies To

    The EU AI Act distinguishes between two roles that both carry obligations:

    Providers are organizations that develop a high-risk AI system and place it on the EU market or put it into service. If you built the model or the system, you are a provider. Providers carry the primary documentation burden under Article 11 (technical documentation) and Article 17 (quality management system), and the logging obligations under Article 12 (which feeds into Article 30 for deployers).

    Deployers are organizations that use a high-risk AI system in the course of their professional activities. You can be a deployer even if you didn't build the model — if you're using a vendor's high-risk AI system in your operations, you are a deployer. Article 26 sets deployer obligations, and Article 30 specifically addresses what deployers must log about their use of the system.

    Many organizations are both provider and deployer — they build AI systems (provider) and also deploy AI systems built by others (deployer).

    High-risk AI systems are defined primarily in Annex III of the Act. These include systems used in:

    • Biometric identification and categorization
    • Critical infrastructure management
    • Education and vocational training (admission, assessment)
    • Employment and worker management (recruitment, performance, promotion)
    • Access to essential private services and public benefits (credit scoring, insurance, social benefits)
    • Law enforcement
    • Migration, asylum, and border control
    • Administration of justice and democratic processes

    If your system operates in any of these domains, it is almost certainly high-risk. When in doubt, treat it as high-risk and consult legal counsel.

    Understanding the Article Landscape

    The EU AI Act has multiple documentation requirements that are frequently confused. Here's how they relate:

| Article | What It Covers | Who It Applies To |
| --- | --- | --- |
| Article 11 | Technical documentation — the pre-deployment dossier describing what the system is, how it was built, and its performance characteristics | Providers |
| Article 13 | Transparency — information that must be provided to deployers so they can use the system appropriately | Providers (to deployers) |
| Article 17 | Quality management system — the organizational processes for managing the AI system lifecycle | Providers |
| Article 26 | Deployer obligations — human oversight, use in accordance with instructions, logging | Deployers |
| Article 30 | Logs — specific logging requirements for deployers | Deployers |

    Article 30 is about what you log during operation. It is distinct from — and in addition to — the technical documentation requirements of Article 11 and the quality management requirements of Article 17.

    Applicability Date

For most high-risk AI systems (those covered by Annex III), the full requirements become applicable on 2 August 2026. Systems covered by Annex I (regulated products like medical devices, aviation equipment) have a longer transition period.

    If you are reading this in early 2026, you have limited time to implement compliant logging infrastructure before the deadline.


    Article 30 Compliance Checklist

    Article 30(1): Automatic Event Logging

    The Act requires that high-risk AI systems be capable of automatically generating logs of events "throughout the operational lifetime of the system."

    Implementation requirements:

    [ ] The AI system (or your deployment infrastructure around it) is capable of automatically recording events — this cannot be a manual process

    [ ] Logging is continuous across the system's operational lifetime, not sample-based or triggered only on errors

    [ ] Logs are stored securely with access controls restricting who can read them

    [ ] A tamper-prevention mechanism is in place — hash chains, write-once storage, or equivalent — so that logs cannot be modified after writing without detection

    [ ] Logging is active in production from day one of deployment, not added later

    Common gap: Organizations that rely entirely on a vendor-provided API often assume the vendor handles logging. Check your agreement explicitly — many vendors provide basic usage logs but not the detailed, deployer-attributable, input/output logs that Article 30 requires. You may need to implement logging in your own integration layer.
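To make the tamper-prevention requirement concrete, here is a minimal sketch of a hash-chained, append-only event log, assuming a local JSONL file as the storage backend (the path and function names are illustrative, not a prescribed format):

```python
import hashlib
import json
import time

LOG_PATH = "ai_event_log.jsonl"  # illustrative; production systems would use write-once storage

def append_event(event: dict, prev_hash: str) -> str:
    """Append an event to the log, chained to the previous record's hash."""
    record = {
        "ts_utc": time.time(),   # epoch seconds, UTC
        "event": event,
        "prev_hash": prev_hash,  # hash of the preceding record ("" for the first)
    }
    # Hash the canonical JSON of the record. Modifying any earlier record
    # breaks every subsequent hash, so tampering is detectable by replaying
    # the file and recomputing each hash in sequence.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["hash"] = hashlib.sha256(payload).hexdigest()
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]
```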


    Article 30(2): Traceability Requirements

    Logs must be sufficient to trace events back to specific inputs, specific model versions, and specific periods of use.

    Implementation requirements:

    [ ] Each AI output is logged with: timestamp (UTC), model version or identifier, and the relevant inputs that produced it

    [ ] Logs contain sufficient information to identify the period of use (start and end dates of a deployment configuration)

    [ ] Logs identify the responsible deployer for each logged event (relevant if multiple deployers share infrastructure)

    [ ] For systems used on specific individuals (employment decisions, credit decisions, etc.): logs must allow identification of which individuals were processed and when

    [ ] Logging granularity is sufficient to identify when the system operated outside its intended purpose or outside its normal operating conditions

    [ ] Logs capture not just successful inferences but also: errors, timeouts, rejected inputs, and edge cases

    Common gap: Systems that log only successful outputs and not failures miss the events most likely to be relevant in an incident investigation. An input that caused the system to fail silently — returning a default output rather than an error — is particularly hard to detect without complete logging.
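One way to structure a per-inference record that covers these traceability fields is sketched below. The field names are assumptions for illustration, not terminology from the Act:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class InferenceLogRecord:
    timestamp_utc: str          # ISO 8601, UTC
    model_version: str          # exact model identifier or weights hash
    deployer_id: str            # responsible deployer, for shared infrastructure
    subject_ref: Optional[str]  # pseudonymous reference to the affected individual
    inputs: dict                # the inputs that produced the output
    output: Optional[dict]      # None when no output was produced
    status: str                 # "ok" | "error" | "timeout" | "rejected_input"
    error_detail: Optional[str] = None

def make_record(model_version: str, deployer_id: str, inputs: dict,
                output: Optional[dict], status: str,
                subject_ref: Optional[str] = None,
                error_detail: Optional[str] = None) -> InferenceLogRecord:
    # A single constructor for success and failure paths ensures that errors,
    # timeouts, and rejected inputs are logged as completely as successes.
    return InferenceLogRecord(
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        deployer_id=deployer_id,
        subject_ref=subject_ref,
        inputs=inputs,
        output=output,
        status=status,
        error_detail=error_detail,
    )

# asdict(make_record(...)) yields a plain dict suitable for the
# hash-chained appender sketched in the previous section.
```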


    Article 30(3): Storage and Access

    Logs must be stored in a manner that is durable, accessible to authorized parties, and protected from loss.

    Implementation requirements:

    [ ] Logs are stored for a period appropriate to the intended purpose and applicable regulation

The Act sets a floor of at least six months and otherwise ties retention to the intended purpose of the AI system and applicable Union or national law. Practical guidance: for systems making consequential individual-level decisions (credit, employment, benefits), a minimum of 5 to 10 years is defensible given applicable limitation periods and regulatory investigation timelines. For lower-consequence systems, the six-month statutory minimum may suffice.

    [ ] Access controls ensure that logs are accessible only to: authorized internal personnel (audit, compliance, legal), the national competent authority (on request), and authorized external auditors

    [ ] Logs are accessible to national competent authorities on request without undue delay — "request" here means a regulatory inquiry, not just routine reporting

    [ ] Logs are protected against accidental loss or destruction — backup procedures, redundant storage, and data retention policies all apply

    [ ] Log access is audited — who accessed the logs, when, and for what purpose should itself be logged

    Common gap: Organizations store logs in systems that are purged on short retention schedules inherited from operational data policies. Ensure that AI decision logs are classified separately and not subject to general data lifecycle management that would purge them on a shorter schedule.
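One way to enforce that separate classification is a retention policy keyed by log class, as in this sketch (the class names and durations are illustrative assumptions, not values from the Act):

```python
from datetime import timedelta

# Classify AI decision logs separately so general data-lifecycle jobs
# never purge them on the shorter operational schedule.
RETENTION_POLICY = {
    "operational_logs": timedelta(days=90),        # latency, error rates
    "ai_decision_logs": timedelta(days=365 * 10),  # consequential individual decisions
    "access_audit_logs": timedelta(days=365 * 10), # who read the decision logs, and when
}

def retention_for(log_class: str) -> timedelta:
    # Fail closed: unknown log classes get the longest retention,
    # so misclassified records are never purged early.
    return RETENTION_POLICY.get(log_class, max(RETENTION_POLICY.values()))
```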


    Article 30(4): Deployer-Specific Requirements

    Even when a provider supplies the underlying AI system, the deployer has independent logging obligations for their specific deployment context.

    Implementation requirements:

    [ ] Deployer maintains logs covering their specific deployment: which system was deployed, in which configuration, for which use case

    [ ] Deployer logs include: deployment start date, deployment end date (or "ongoing"), and any configuration changes made during the deployment (prompt changes, threshold adjustments, integration changes)

    [ ] Deployer logs capture system health monitoring events: performance alerts, error rates, accuracy metric changes

    [ ] Deployer maintains records of all human oversight measures applied — for HITL systems, this means logs of every human review decision (see the HITL Workflow Design Worksheet for required log fields)

    [ ] If the deployer modified the system's behavior relative to the provider's instructions (different prompts, different thresholds, different input preprocessing), this is documented and its impact on performance is assessed

    Common gap: Deployers sometimes treat human oversight as a policy commitment rather than a logged control. "We have HITL" without timestamped, auditable records of each human review decision does not satisfy Article 30(4). The human review decisions are themselves required log events.
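A sketch of a human review decision captured as a first-class log event rather than a policy statement (field names are illustrative; the appender callback could be the hash-chained log from the Article 30(1) sketch):

```python
from typing import Callable

def log_review_decision(case_id: str, reviewer_id: str, decision: str,
                        reasoning: str,
                        append_event: Callable[[dict], None]) -> None:
    """Record a human oversight decision as an auditable log event."""
    if decision not in ("approve", "reject", "escalate"):
        raise ValueError(f"unknown decision: {decision}")
    if decision == "reject" and not reasoning:
        # Reasoning is required for rejects so the record is auditable.
        raise ValueError("rejects require recorded reasoning")
    append_event({
        "type": "human_review_decision",
        "case_id": case_id,        # pseudonymous case reference
        "reviewer_id": reviewer_id,
        "decision": decision,
        "reasoning": reasoning,
    })
```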


    Annex IV: Technical Documentation (Provider Requirements)

    Annex IV specifies the technical documentation that providers must prepare before placing a high-risk AI system on the market or putting it into service. While this is a provider obligation rather than an Article 30 logging requirement, deployers receiving AI systems from providers should verify this documentation exists.

    Required documentation elements:

    [ ] General description of the AI system including its intended purpose and the version placed on the market

    [ ] Description of the development process:

    • Design choices made and their rationale
    • Training data characteristics: sources, labeling methodology, data preparation procedures, volume
    • Training and testing processes used
    • Performance metrics and validation results

    [ ] System monitoring, functioning, and control documentation:

    • Technical capabilities and limitations
    • Known or foreseeable risks, including risks arising from misuse
    • Accuracy metrics (disaggregated by relevant subgroups where applicable)
    • Robustness and cybersecurity measures

    [ ] Human oversight measures required for this system:

    • What human oversight is required (HITL, HOTL, or other)
    • How human oversight is implemented technically and operationally
    • Who is responsible for implementing it

    [ ] Changes made to the system throughout its lifecycle (version history)

    [ ] Assessment of risks in the context of fundamental rights and non-discrimination

    For deployers receiving a vendor's system: Request the Annex IV documentation package from your vendor before deployment. A vendor that cannot produce this documentation for a high-risk system has not completed their obligations under the Act. That gap is yours to manage — deploying without it exposes you to regulatory risk alongside the vendor.


    Practical Implementation Guidance

    What Counts as an "Event" for Logging Purposes

    The Act uses the term "events" without exhaustive definition. Based on the regulatory context, the following should be logged at minimum:

| Event Type | Logging Required |
| --- | --- |
| Each inference run (AI produces an output) | Yes — timestamp, model version, input, output |
| Model version change | Yes — old version, new version, date, reason |
| System configuration change | Yes — what changed, who changed it, when |
| Human oversight decision (approve/reject/escalate) | Yes — reviewer ID, decision, reasoning for rejects |
| System error or failure to produce output | Yes — error type, input that caused it, timestamp |
| Inputs outside the model's operational domain | Yes — flag and log with explanation |
| Batch processing runs (if applicable) | Yes — run ID, volume, model version, start/end time |
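Encoding these event types as an explicit enumeration keeps them queryable rather than free text; a sketch, with value names as assumptions:

```python
from enum import Enum

class LoggedEvent(str, Enum):
    INFERENCE = "inference"                           # timestamp, model version, input, output
    MODEL_VERSION_CHANGE = "model_version_change"     # old version, new version, date, reason
    CONFIG_CHANGE = "config_change"                   # what changed, who, when
    HUMAN_REVIEW_DECISION = "human_review_decision"   # reviewer ID, decision, reasoning
    SYSTEM_ERROR = "system_error"                     # error type, triggering input, timestamp
    OUT_OF_DOMAIN_INPUT = "out_of_domain_input"       # flagged with explanation
    BATCH_RUN = "batch_run"                           # run ID, volume, model version, start/end
```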

    Structuring Log Retention for Different Data Types

    AI decision logs often contain personal data — the inputs to the model may include names, financial data, health information, or other PII. This creates a tension between the EU AI Act's logging requirements and GDPR's data minimization and storage limitation principles.

    Practical resolution:

    1. Separate operational logs from decision logs. Operational logs (performance metrics, error rates, latency) can be retained on shorter schedules. Decision logs (inputs, outputs, model version, human review decisions) require longer retention and must be managed under a specific legal basis.

2. Apply pseudonymization to decision logs where possible. Log a pseudonymous case identifier rather than direct personal identifiers. Maintain the mapping table separately with appropriate access controls. This reduces the GDPR sensitivity of the logs while preserving traceability (see the sketch after this list).

    3. Document your legal basis for log retention. Under GDPR, you need a legal basis for processing personal data in logs. For regulated uses (credit, employment, benefits), the legal basis is typically compliance with a legal obligation (the EU AI Act itself). Document this explicitly in your data processing register.
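For the pseudonymization step above, keyed hashing is one common technique: the same identifier always yields the same case ID, preserving traceability, while reversal requires a secret key held outside the log store. A sketch (the function name and key handling are illustrative; a separately stored mapping table, as described above, achieves the same goal):

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    # HMAC-SHA256 gives a stable pseudonym per identifier. Without the key
    # (stored apart from the logs, under its own access controls) the
    # pseudonym cannot be mapped back to the person.
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example (hypothetical identifier):
# case_ref = pseudonymize("applicant-42137", secret_key)
```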

    The Data Sovereignty Question

    Article 30 logs may themselves contain personal data, triggering GDPR obligations about where that data can be stored. If your AI system processes EU residents' data, your logs of that processing should generally be stored within the EU — or in a jurisdiction with an adequacy decision — unless you have a valid transfer mechanism in place.

    This has practical implications for organizations using cloud-based logging infrastructure: confirm that your log storage region matches your data residency commitments before deployment, not after an inquiry from a supervisory authority.


    Connecting to Implementation

    Implementing Article 30 compliance from scratch requires instrumentation at the integration layer (where your application calls the AI system), a log storage system with appropriate access controls and retention policies, and a review process that captures human oversight decisions in structured, queryable logs.

    Ertas Data Suite generates EU AI Act Article 30-compatible audit logs directly from the data processing pipeline. Every transformation is logged with timestamp and operator ID. Logs are tamper-evident and exportable in structured format for regulatory submission. For organizations running on-premise (air-gapped environments where data cannot leave the building), Data Suite's native desktop architecture means logs are generated and stored locally — no data leaves your infrastructure.

    Book a discovery call with Ertas →

    Summary Compliance Checklist

    Use this as your readiness assessment before the August 2026 deadline:

Infrastructure:

[ ] Automatic logging active in production (not manual, not sample-based)

[ ] Tamper-evident log storage in place

[ ] Retention policies set (minimum 6 months; longer for consequential decisions)

[ ] Access controls implemented and audited

[ ] Backup and recovery procedures for log data

Content:

[ ] Each inference logged with: timestamp, model version, input, output

[ ] Human oversight decisions logged per decision

[ ] Configuration changes logged

[ ] Errors and edge cases logged

Governance:

[ ] Annex IV technical documentation received from vendors (or produced internally for systems you develop)

[ ] Data processing legal basis documented for log retention

[ ] Log data residency confirmed (EU or adequate jurisdiction)

[ ] Access to logs can be provided to competent authority within acceptable timeframe

Deployer-specific:

[ ] Deployment start/end dates and configuration changes logged

[ ] Human review decisions logged per the HITL worksheet requirements

[ ] Deviations from provider instructions documented

    Organizations that have completed this checklist are in a defensible position for Article 30. Organizations that have not should prioritize the infrastructure items first — logging infrastructure takes time to implement correctly, and retroactive logging of past decisions is not possible.
