EU AI Act Compliance

EU AI Act Obligations: Why Risk Classification Is Only the Beginning

Risk classification is one of the most visible parts of EU AI Act compliance. But a risk class alone does not show which obligations apply, what evidence is required, who must act, whether a gap has been resolved, or how the decision can be reconstructed. That is where obligation mapping begins — and where most AI governance processes stop too early.

Consider how this typically plays out. An organization registers an AI system. A reviewer determines that it is high-risk, limited-risk, or not high-risk. The result is written into a spreadsheet or compliance note, and the process appears complete. But the most important governance work has not yet started.

A classification result only describes the regulatory direction. It does not translate that direction into operational work. If a system is high-risk, the organization must understand what follows. If a system is limited-risk, it must understand whether transparency obligations apply. If a system is not high-risk, it must still document why. If certain obligations are not applicable, it must record the basis for that conclusion. The risk class is the gateway — governance begins when the organization can show what that classification means in practice.

A risk class is not governance

The purpose of AI risk classification is not to create a label. It is to determine what the organization must do next. A high-risk classification can trigger significant obligation areas: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, cybersecurity, quality management, conformity assessment, and post-market monitoring — depending on the actor role and system context. A limited-risk classification may trigger transparency or user-information duties. Even a minimal-risk conclusion does not remove the need for a documented decision record where the organization wants accountability and audit readiness.

Risk classification must always connect to an obligation layer. Without obligation mapping, the organization knows the category but not the work.

Obligations depend on risk, actor role, and system context

EU AI Act obligations do not depend on risk classification alone. They also depend on the organization's actor role and the system's operational context. A provider of a high-risk AI system may face obligations connected to system design, technical documentation, conformity assessment, quality management, registration, and demonstration of conformity. A deployer may face obligations connected to use, human oversight, monitoring, input data, record-keeping, information duties, and cooperation. Importers and distributors may have their own supply-chain responsibilities.

This means obligation mapping requires three inputs: the AI system, the risk classification, and the actor role. If one of these elements is missing, the obligation view is structurally incomplete.

Governance principle

A risk label without obligation logic is incomplete compliance. A risk label without actor-role context is even weaker. The Obligation Matrix only becomes meaningful when all three inputs — system, risk class, and actor role — are present and documented.
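The three-input dependency can be illustrated with a minimal sketch. All identifiers, field names, and category values below are illustrative assumptions, not EAB's actual data model or a normative reading of the regulation:

```python
from dataclasses import dataclass

# Illustrative enumerations -- not a normative EU AI Act taxonomy.
RISK_CLASSES = {"high", "limited", "minimal"}
ACTOR_ROLES = {"provider", "deployer", "importer", "distributor"}

@dataclass(frozen=True)
class ObligationInput:
    """The three inputs an obligation view requires."""
    system_id: str    # reference into the AI System Registry
    risk_class: str   # result of risk classification
    actor_role: str   # result of actor role assessment

def obligation_view_is_complete(inp: ObligationInput) -> bool:
    """If any of the three inputs is missing, the view is structurally incomplete."""
    return (
        bool(inp.system_id)
        and inp.risk_class in RISK_CLASSES
        and inp.actor_role in ACTOR_ROLES
    )

# A record with all three inputs supports obligation mapping ...
complete = ObligationInput("sys-001", "high", "deployer")
# ... while a missing actor role leaves the view structurally incomplete.
incomplete = ObligationInput("sys-002", "high", "")
```

The point of the check is that completeness is a property of the record, not of the reviewer's memory: the same system with the same risk class yields a different obligation view for a provider than for a deployer, so the role must be captured before mapping begins.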

Why an Obligation Matrix is necessary

An Obligation Matrix translates classification and actor-role context into concrete governance work. It answers the question that risk classification alone cannot: what must be reviewed, evidenced, completed, assigned, approved, or documented next? A strong Obligation Matrix shows which obligation areas apply to the specific AI system, why they apply, which actor role creates the responsibility, which evidence already exists, which evidence is missing, which obligation areas are unclear or not applicable, which role must act, and which legal context supported the determination.

This is the difference between knowing a risk category and governing the system.

Obligations must be system-specific

Generic obligation lists are not enough. A list of EU AI Act requirements may help teams understand the regulation, but it does not create operational governance for a specific AI system. A recruitment AI system raises different obligations than a customer support assistant. A credit scoring tool requires different evidence than an internal drafting assistant. A biometric access-control system requires different review logic than a document classification tool.

The correct governance question is not "what does the EU AI Act generally require?" but "for this AI system, in this use case, under this actor role, with this classification, which obligations apply and what evidence is required?"

Obligations must connect to evidence

Obligations are only meaningful when evidence can be shown. An organization may know that human oversight is required — but can it show how oversight is implemented? It may know that technical documentation is required — but can it show whether the documentation is complete and current? Compliance does not become operational because a requirement is known. It becomes operational when the organization can connect the requirement to evidence.

This is why an Obligation Matrix must be connected to Evidence Readiness. The obligation layer shows what matters. The evidence layer shows whether the organization can prove it.

Missing evidence is a governance signal

Missing evidence should not be hidden inside long forms or legal documents. It should be visible as a governance signal. If evidence is missing, the organization needs to know what is missing, who owns the gap, and what must happen next. A missing technical documentation item may require provider input. A human oversight gap may require process design. A transparency gap may require user-facing information. Without this routing, obligation mapping becomes passive: it tells people what is wrong, but not who must act.
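Routing a gap to its responsible role can be sketched as a simple mapping. The gap types and role names here are hypothetical placeholders for whatever roles an organization actually defines:

```python
# Illustrative routing of evidence gaps to responsible roles.
# Gap types and role names are assumptions, not a fixed scheme.
GAP_ROUTING = {
    "technical_documentation": "provider_contact",  # may require provider input
    "human_oversight": "process_owner",             # may require process design
    "transparency": "product_owner",                # may require user-facing info
}

def route_gap(gap_type: str) -> str:
    """Return the role that owns a missing-evidence gap; escalate unknown gaps."""
    return GAP_ROUTING.get(gap_type, "compliance_supervisor")
```

The fallback matters: a gap that matches no known route should escalate to a supervising role rather than sit unassigned.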

Documented non-applicability is essential

Not every obligation applies to every AI system. But weak compliance systems create two opposite problems: they either treat every obligation as applicable (creating noise and unnecessary workload), or they simply leave irrelevant obligations empty (creating audit weakness). A strong governance process must distinguish between missing information and documented non-applicability. An empty field means the organization has not completed the record. A documented non-applicability decision means the obligation was reviewed and found not relevant for the system — with rationale preserved.
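The distinction between an empty field and a reviewed exclusion can be made explicit in the record itself. The field names and state labels below are a hypothetical sketch of that distinction:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObligationRecord:
    """Hypothetical record for one obligation area on one AI system."""
    obligation: str
    applicable: Optional[bool] = None  # None = not yet reviewed
    rationale: Optional[str] = None    # preserved reasoning for the decision

def record_state(rec: ObligationRecord) -> str:
    """Distinguish missing information from documented non-applicability."""
    if rec.applicable is None:
        return "incomplete"  # empty field: the review has not been done
    if rec.applicable is False and rec.rationale:
        return "documented-not-applicable"
    if rec.applicable is False:
        return "undocumented-not-applicable"  # audit weakness: no rationale
    return "applicable"

# An empty field and a reviewed exclusion are different governance states.
empty = ObligationRecord("post-market monitoring")
reviewed = ObligationRecord(
    "post-market monitoring",
    applicable=False,
    rationale="Organization acts as deployer only; obligation sits with the provider.",
)
```

Under this model, a "no" without a preserved rationale is itself flagged as a gap, which is exactly the audit weakness the paragraph above describes.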

Approval requires obligation visibility

An AI system should not be approved merely because it has been classified. Approval should be based on a reviewable governance record. The approving role must be able to see the AI system, the risk classification, the actor role, the applicable obligations, the evidence status, the unresolved gaps, the documented non-applicability decisions, the completion status, the screening result, the legal context, the reviewer rationale, and any override or risk acceptance logic. Without obligation visibility, approval becomes blind.

Obligation status must be precise

Weak compliance systems use vague status labels: complete, open, not relevant, in progress. These are not sufficient for AI governance. A stronger obligation status model must distinguish between: applicable, not applicable, unclear, fulfilled, partial, missing, externally covered, requires review, requires supervisor decision, and requires evidence. An obligation may be applicable but not yet fulfilled. It may be fulfilled but missing evidence. It may be not applicable but only after documented review. Precise status logic prevents false confidence and surfaces real governance gaps.
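A status model with this granularity might be expressed as an enumeration. The values mirror the list above; the modelling itself is an assumption, not EAB's actual schema:

```python
from enum import Enum

class ObligationStatus(Enum):
    APPLICABLE = "applicable"
    NOT_APPLICABLE = "not_applicable"
    UNCLEAR = "unclear"
    FULFILLED = "fulfilled"
    PARTIAL = "partial"
    MISSING = "missing"
    EXTERNALLY_COVERED = "externally_covered"
    REQUIRES_REVIEW = "requires_review"
    REQUIRES_SUPERVISOR_DECISION = "requires_supervisor_decision"
    REQUIRES_EVIDENCE = "requires_evidence"

# Statuses that still represent open governance work.
OPEN_STATES = {
    ObligationStatus.UNCLEAR,
    ObligationStatus.PARTIAL,
    ObligationStatus.MISSING,
    ObligationStatus.REQUIRES_REVIEW,
    ObligationStatus.REQUIRES_SUPERVISOR_DECISION,
    ObligationStatus.REQUIRES_EVIDENCE,
}

def blocks_approval(status: ObligationStatus) -> bool:
    """An obligation in any open state should block a blind approval."""
    return status in OPEN_STATES
```

The payoff of the finer states is visible in the approval check: "fulfilled" and "externally covered" pass, while "partial" or "requires evidence" surface as unresolved work instead of collapsing into a vague "in progress".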

How EAB structures obligation mapping

In EAB, the Obligation Matrix is the bridge between risk classification and operational governance. The process begins with the AI System Registry and structured intake. AI Screening evaluates scope, prohibited-practice relevance, risk signals, and review needs. Actor Role Assessment determines the organization's position. The Obligation Matrix translates risk classification and actor role into system-specific obligation areas. Evidence Readiness shows whether each obligation is supported by documentation. Role-based workflow routes missing information and completion needs to the responsible roles. Supervisor Approval turns the review into an accountable decision. Audit-Ready Traceability preserves the obligation logic, evidence state, approval path, and later changes.


To understand how evidence supports the obligation record — and what auditors actually need to verify governance decisions — see the guide on Evidence Readiness for AI Governance.

EAB Compliance Platform

From risk label to operational compliance work.

EAB translates risk classification and actor role into a system-specific Obligation Matrix — with evidence status, role-based routing, supervisor approval, and a reconstructable record of what was addressed and why.

EU-hosted · Anchored to CELEX 32024R1689

Get in Touch
Request More Information

Tell us about your organization and what you’re looking to address. We’ll follow up with the relevant information.