EU AI Act Risk Classification: How High-Risk AI Systems Are Identified

Risk classification under the EU AI Act determines which obligations apply, which evidence must be prepared, and which governance path an AI system must follow. The core challenge is not only reaching the correct classification — it is being able to show how that classification was reached.

AI risk classification is the process of determining whether an AI system falls into a prohibited, high-risk, limited-risk, or minimal-risk category under the EU AI Act. For organizations, this is not a theoretical exercise. The result determines which obligations must be addressed, which evidence must be prepared, which roles must be involved, and whether the system can proceed through a governed approval process.

A high-risk classification can trigger substantial compliance work. But a non-high-risk conclusion is equally important — it must be based on documented reasoning, not left as an informal assumption. In both cases, the governance question is not only whether the right answer was reached. It is whether the organization can show how that answer was reached, and by whom, under which legal context, with which evidence.

Classification is a governance step, not a free-text opinion

Many organizations treat AI risk classification as a legal or technical opinion: someone reviews the system, writes a short conclusion, stores it in a document, and moves on. That is not enough for operational AI governance. Risk classification affects the entire governance path of the system — it determines which obligations apply, which evidence is required, who must approve, and what re-screening is needed later.

A strong classification record should show which AI system was assessed, which use case was reviewed, which business context was considered, which EU AI Act category was evaluated, whether Annex III relevance was examined, whether prohibited-practice concerns were checked, who performed or reviewed the classification, which evidence supported the result, and which legal version context was used. Without this structure, classification becomes difficult to defend.
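To make this concrete: the elements above can be captured as a structured record instead of a free-text memo. The following Python sketch is purely illustrative (the class and field names are assumptions, not a prescribed schema), but it shows how each element of the reasoning becomes an explicit, reviewable field.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    """Structured record of one risk classification decision.

    Illustrative sketch: field names are assumptions, not a prescribed schema.
    """
    system_id: str                      # which AI system was assessed
    use_case: str                       # which use case was reviewed
    business_context: str               # which business context was considered
    risk_category: str                  # "prohibited" | "high" | "limited" | "minimal"
    annex_iii_areas_considered: list[str] = field(default_factory=list)
    prohibited_practices_checked: bool = False
    assessed_by: str = ""               # who performed the classification
    reviewed_by: str = ""               # who reviewed or approved it
    evidence_refs: list[str] = field(default_factory=list)  # supporting evidence
    legal_version: str = ""             # legal version context, e.g. "CELEX 32024R1689"
    assessed_on: date | None = None
```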

Governance principle

A risk label without documented reasoning is not governance. An organization that cannot explain why a system was classified a certain way — and by whom, under which regulatory context — has a statement, not a defensible record.

What high-risk means in practice

High-risk AI systems are not identified by technical complexity. An AI system does not become high-risk merely because it uses advanced models, machine learning, generative AI, or large datasets. The classification depends on the system's intended purpose, operational use, affected domain, decision influence, and regulatory context.

A general-purpose AI tool used to draft internal meeting notes carries a very different governance profile than an AI tool used to support recruitment decisions, educational assessment, creditworthiness evaluation, access to essential services, law enforcement, migration control, or judicial decision support. The same underlying technology can create different risk outcomes depending on how it is used — which is why EAB treats classification as system-specific and use-case-specific. The correct governance question is not "what tool is being used?" but "for what purpose, in which context, by which actor, with what potential impact on people and decisions?"

Annex III is the central classification anchor

Annex III of the EU AI Act identifies categories of AI systems that may fall into high-risk contexts. These include areas such as biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and administration of justice and democratic processes.

Annex III should not be treated as a superficial checklist. A serious assessment must connect the AI system to the actual use case. It must examine whether the system is deployed in an Annex III context, whether it materially influences decisions, whether natural persons may be affected, and whether the system's output contributes to a regulated decision process.

A weak classification process asks: "Does this sound like Annex III?" A strong governance process asks: "Which Annex III area was considered, why was it relevant or not relevant, what evidence supports the conclusion, and who is accountable for the classification?" This difference determines whether the classification record is defensible.

Classification depends on intended use

Risk classification cannot be separated from intended use. An AI system deployed in one department may appear low-impact. The same system in another context may be materially relevant for individual rights, access decisions, employment outcomes, or regulated services. A system used to summarize internal documents carries a different risk profile than one used to rank job candidates, prioritize patients, detect student performance risks, assess customer eligibility, or support decisions about access to public benefits.

The classification record must therefore preserve the intended purpose — not just describe the software, but describe how the software is used. A high-risk determination often emerges from the relationship between the model, the task, the affected persons, and the decision context. Generic AI inventories that list tools without documenting use cases are structurally insufficient for this reason.

Classification must distinguish all risk layers

AI risk classification should not jump directly to the high-risk question. A structured review must distinguish between the regulatory categories in sequence: Are prohibited practices potentially involved? Is there Annex III relevance? Is the system materially influencing decisions? Does limited-risk transparency logic apply? Which obligations follow from the classification?

Governance failures often arise when organizations collapse these questions into one. The correct approach is layered — prohibited-practice gate first, then risk classification, then obligation mapping. Only this sequence ensures that serious cases are not accidentally moved through ordinary approval.
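Expressed as logic, the layered sequence might look like the minimal Python sketch below. The function and its inputs are hypothetical simplifications: each boolean stands in for a fully documented sub-assessment, and the fixed ordering of the gates is the point, not the one-line checks.

```python
def classify(prohibited: bool, annex_iii: bool,
             material_influence: bool, transparency_scope: bool) -> str:
    """Layered EU AI Act screening, illustrative only.

    Each argument represents the documented outcome of a separate
    sub-assessment; the gates run in a fixed order.
    """
    if prohibited:
        return "prohibited"              # gate 1: hard stop, never ordinary approval
    if annex_iii and material_influence:
        return "high"                    # gate 2: Annex III context + decision influence
    if transparency_scope:
        return "limited"                 # gate 3: transparency obligations apply
    return "minimal"
```

Collapsing these gates into a single question is exactly the failure mode described above: a prohibited-practice concern would surface only as a generic risk signal and could slip through ordinary approval.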

Non-high-risk decisions must also be documented

A non-high-risk conclusion can be just as important as a high-risk one. If an organization classifies a system as not high-risk, it must be able to explain the basis — particularly where the system operates near sensitive areas, supports decisions involving natural persons, or appears connected to Annex III categories.

A weak record says: "This system is not high-risk." A strong record says: "This system was assessed against the relevant high-risk indicators. The following Annex III categories were considered. The system does not materially influence the outcome of decision-making for the following reasons. The assessment was reviewed by the responsible role. The supporting evidence is stored in the system record." In operational compliance, the absence of high-risk status should not mean absence of documentation.
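Using the illustrative record structure sketched earlier, such a non-high-risk conclusion could be stored as data rather than as a bare label. All values below are hypothetical:

```python
# Hypothetical example of a documented non-high-risk conclusion.
record = ClassificationRecord(
    system_id="sys-0042",
    use_case="Summarize internal meeting notes",
    business_context="Internal knowledge management; no decisions about persons",
    risk_category="minimal",
    annex_iii_areas_considered=["employment and worker management"],  # considered, ruled out
    prohibited_practices_checked=True,
    assessed_by="AI governance lead",
    reviewed_by="Legal counsel",
    evidence_refs=["screening-run-17", "intake-form-0042"],
    legal_version="CELEX 32024R1689",
)
```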

Actor role shapes the obligation path

AI risk classification should not be separated from actor-role assessment. The same AI system may create different obligations depending on whether the organization acts as provider, deployer, importer, or distributor in the AI value chain. A provider placing a high-risk AI system on the market faces different obligations than a deployer using the system internally. A mature governance record must therefore show not only the risk category, but also the actor-role context that shaped the obligation path.

Classification must connect to obligations

Risk classification is not the end of the compliance process — it is the point where obligation logic begins. Once a system is classified, the organization must understand what follows. A high-risk result may trigger requirements around risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, cybersecurity, and other areas depending on actor role and system context.

A risk class without an Obligation Matrix is incomplete. It tells the organization the category, but not the work. Without obligation mapping, classification remains a label rather than an operational governance step.
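The obligation areas named above can be sketched as a lookup keyed on both risk category and actor role. The mapping below is deliberately simplified and is not an authoritative statement of EU AI Act obligations; it only illustrates the structure an Obligation Matrix adds on top of the classification result.

```python
# Illustrative only: real obligation sets depend on actor role, system
# specifics, and legal interpretation. This shows structure, not content.
OBLIGATION_MATRIX: dict[tuple[str, str], list[str]] = {
    ("high", "provider"): [
        "risk management", "data governance", "technical documentation",
        "record-keeping", "transparency", "human oversight",
        "accuracy, robustness and cybersecurity",
    ],
    ("high", "deployer"): [
        "use according to instructions", "human oversight",
        "input data relevance", "operation monitoring and logging",
    ],
    ("limited", "provider"): ["transparency toward affected persons"],
}

def obligations(category: str, role: str) -> list[str]:
    """Translate a classification plus actor role into obligation areas."""
    return OBLIGATION_MATRIX.get((category, role), [])
```

Keying the matrix on the actor role as well as the category reflects the point made in the previous section: the same risk class produces different work for a provider than for a deployer.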

Re-classification and legal change

Risk classification is not necessarily permanent. A system may require renewed review when the legal context changes, guidance evolves, national interpretations develop, the system is modified, the use case expands, or the operational context changes. A system originally classified as limited or minimal risk may require renewed attention if it is later used in a sensitive domain.

This is why classification should be versioned and connected to re-screening. Approved once does not mean compliant forever.
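One simple way to honor this principle is to store classification as an append-only version history in which each re-screening adds a new entry together with its trigger. The sketch below is again an assumed structure, not a mandated format; the entries and dates are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # immutable: history entries are never edited in place
class ClassificationVersion:
    version: int
    category: str        # result at this point in time
    legal_context: str   # e.g. regulation text / guidance relied upon
    trigger: str         # "initial", "system modified", "use case expanded",
                         # "legal change", "periodic re-screening", ...
    decided_on: date

# Re-screening appends; it never overwrites. The original decision stays
# visible as version 1, alongside whatever superseded it.
history = [
    ClassificationVersion(1, "limited", "CELEX 32024R1689", "initial", date(2025, 1, 10)),
    ClassificationVersion(2, "high", "CELEX 32024R1689", "use case expanded", date(2025, 9, 2)),
]
```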

How EAB structures classification

In EAB, EU AI Act risk classification is not an isolated label; it is part of the governed AI system record:

- The AI System Registry creates the system context.
- Business Intake captures the intended use, business function, and affected process.
- Guided Technical Completion adds system-level details, data context, deployment information, and provider information.
- AI Screening evaluates prohibited-practice relevance, Annex III indicators, risk signals, actor-role context, and system-specific classification logic.
- Server-side Risk Classification produces a controlled classification result based on structured inputs.
- The Obligation Matrix translates classification into concrete obligation areas.
- Supervisor Approval turns the review into an accountable governance decision.
- Audit-Ready Traceability preserves the decision path, including changes, screening runs, approvals, and re-screening history.


The next step after classification is obligation mapping: which EU AI Act obligations follow from risk classification, why classification is only the beginning, and what must be in place before a system can be considered governed.

EAB Compliance Platform

From risk classification to governed decision record.

EAB connects AI risk classification to Annex III assessment, actor-role logic, obligation mapping, evidence readiness, and a reconstructable audit trail — so every classification is defensible, not just stored.

EU-hosted · Anchored to CELEX 32024R1689
