An AI system that falls under Art. 5 is not high-risk. It is unlawful. The distinction matters — and EAB enforces it as a hard gate before any classification or approval can proceed.
Every AI system registered in EAB is reviewed against all seven Art. 5 prohibition categories. The review result is a sealed, attributed governance record — not a checkbox.
“A system that falls under Art. 5 is not high-risk. It is unlawful. The review must happen before classification — not as part of it.”
EAB runs the Art. 5 review as a mandatory first step in the governance chain — before risk classification, before obligation derivation, before any approval. The review cannot be skipped. It can be overridden by a supervisor, but the override is logged with justification and remains visible in the audit trail permanently.
Each prohibition category is reviewed independently. A concern raised in one category does not block review of the others — but a confirmed prohibition in any category halts the governance chain. The result is a structured record with five possible states per category: clear, concern flagged, exemption claimed, supervisor override, or confirmed prohibited.
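The per-category result described above can be sketched as a small state model. This is an illustrative sketch — the type and field names are assumptions, not EAB's actual API:

```python
from dataclasses import dataclass
from enum import Enum

class CategoryState(Enum):
    """The five per-category review states described above."""
    CLEAR = "clear"
    CONCERN_FLAGGED = "concern_flagged"
    EXEMPTION_CLAIMED = "exemption_claimed"
    SUPERVISOR_OVERRIDE = "supervisor_override"
    CONFIRMED_PROHIBITED = "confirmed_prohibited"

@dataclass(frozen=True)
class CategoryReview:
    """One prohibition category's reviewed state within an Art. 5 review."""
    category: str            # e.g. "social_scoring" (illustrative key)
    state: CategoryState
    justification: str = ""  # required for any state other than CLEAR
```

Modelling the states as a closed enum, rather than free-form strings, is what lets the platform reject a review with blank or unknown states before it is ever sealed.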
When the regulation changes and a new version of Art. 5 is anchored in EAB, affected systems enter the re-screening queue. The review does not drift silently — it is tied to the legal source version that was in force at the time of each determination.
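The re-screening trigger is a simple version comparison: each determination carries the legal source version it was made under, and any mismatch with the currently anchored version queues the system. A minimal sketch, with illustrative names and version strings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Art5Determination:
    """A past Art. 5 determination, pinned to the legal source version in force at the time."""
    system_id: str
    legal_source_version: str

def rescreening_queue(determinations: list[Art5Determination],
                      anchored_version: str) -> list[str]:
    """Return systems whose determination predates the currently anchored Art. 5 version."""
    return [d.system_id
            for d in determinations
            if d.legal_source_version != anchored_version]
```

Because the version is pinned per determination rather than globally, a regulation change never silently re-interprets an old review — it only flags which systems need a new one.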
Each category is reviewed against the system profile established during registration and technical completion.
Systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in ways that cause or are likely to cause significant harm to that person or another person.
Systems that exploit vulnerabilities of persons due to their age, disability, or a specific social or economic situation, in a way that materially distorts their behaviour and causes or is likely to cause significant harm.
Systems that evaluate or classify natural persons or groups over a period of time based on social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment in social contexts unrelated to the original data, or treatment that is disproportionate to the behaviour.
Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes — with narrow exceptions: targeted searches for victims and missing persons, prevention of a specific and imminent threat such as a terrorist attack, and locating suspects of certain serious crimes.
Emotion recognition systems in the workplace and educational institutions are prohibited, except where intended for medical or safety reasons. The review identifies whether the system's deployment context falls within this restriction.
Systems that infer sensitive attributes from biometric data (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), and systems that predict the risk of a person committing a criminal offence based solely on profiling or assessment of personality traits.
The review runs automatically at the start of every screening session. It cannot be deferred.
When a supervisor initiates screening, the Art. 5 review panel loads automatically. The system profile from registration and technical completion is pre-loaded — the supervisor reviews each category against the documented system, not from memory.
The supervisor assigns one of five states to each category: clear, concern flagged, exemption claimed, supervisor override, or confirmed prohibited. A justification is required for anything other than clear. The review cannot be submitted with blank states.
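The submission rules above — no blank states, a justification for anything other than clear — amount to a validation pass before the review can be sealed. A minimal sketch, using plain string states and illustrative error messages:

```python
VALID_STATES = {"clear", "concern_flagged", "exemption_claimed",
                "supervisor_override", "confirmed_prohibited"}

def validate_submission(states: dict[str, str],
                        justifications: dict[str, str]) -> list[str]:
    """Return validation errors for a review submission; an empty list means submittable."""
    errors = []
    for category, state in states.items():
        if not state:
            errors.append(f"{category}: state must not be blank")
        elif state not in VALID_STATES:
            errors.append(f"{category}: unknown state '{state}'")
        elif state != "clear" and not justifications.get(category, "").strip():
            errors.append(f"{category}: justification required for state '{state}'")
    return errors
```

Returning the full error list, rather than failing on the first problem, lets the supervisor fix every incomplete category in one pass.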
A confirmed prohibition in any category halts the governance chain. Risk classification, obligation derivation, and approval are not accessible. The system is flagged as prohibited in the registry. A supervisor can claim an exemption — but the claim and its justification are permanent record entries.
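The halting behaviour is a hard gate: downstream stages are reachable only when no category is confirmed prohibited. A sketch of that gate, with stage names taken from the governance chain described above (the function name is an assumption):

```python
def accessible_stages(category_states: dict[str, str]) -> list[str]:
    """Downstream governance stages, or an empty list if the chain is halted."""
    if "confirmed_prohibited" in category_states.values():
        return []  # chain halted; the system is flagged as prohibited in the registry
    return ["risk_classification", "obligation_derivation", "approval"]
```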
On completion, the Art. 5 review is sealed: supervisor identity, UTC timestamp, legal source version, per-category states, and all justifications. The record is immutable. If the regulation changes, the system enters the re-screening queue — and a new review record is created, not the old one modified.
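One common way to make such a record tamper-evident is to seal it with a hash of its canonicalised contents — any later modification breaks the seal. A minimal sketch of that pattern (the field names mirror the record contents listed above; the hashing scheme is an assumption, not EAB's documented mechanism):

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_review(supervisor: str, legal_source_version: str,
                states: dict[str, str], justifications: dict[str, str]) -> dict:
    """Build a sealed Art. 5 review record: contents plus a hash over them."""
    record = {
        "supervisor": supervisor,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "legal_source_version": legal_source_version,
        "states": states,
        "justifications": justifications,
    }
    # Seal = SHA-256 of the canonicalised payload; changing any field invalidates it.
    payload = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hashlib.sha256(payload).hexdigest()
    return record
```

Re-screening then creates a fresh record with its own seal; the old record is never rewritten, which is what keeps the audit trail append-only.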
Every AI system reviewed, every category stated, every prohibition enforced — before classification, before approval, before deployment.
EU-hosted · Anchored to CELEX 32024R1689
Tell us about your organization and what you’re looking to address. We’ll follow up with the relevant information.