
EU AI Act Prohibited Practices: What Article 5 Means for Organizations

Article 5 of the EU AI Act addresses practices that are considered unacceptable because of their potential impact on fundamental rights, human autonomy, safety, and dignity. For organizations, this creates one clear requirement: prohibited-practice review must happen before ordinary approval logic begins.

Article 5 of the EU AI Act is structurally different from the rest of the regulation. High-risk classification can lead to obligations. Limited-risk classification can lead to transparency duties. Minimal-risk classification reduces regulatory burden. But a prohibited-practice concern under Art. 5 may stop the system from being placed on the market, put into service, or used in the relevant context — full stop. This is why prohibited-practice review is not an optional pre-step. It is the first governance gate.

Organizations that review Article 5 only after classification, only when an auditor asks, or only after deployment have already accepted exposure they may not be able to resolve through later documentation.

Article 5 is the unacceptable-risk gate

The EU AI Act follows a risk-based structure, but Article 5 operates at a different level. Where high-risk systems may be permitted if required obligations are fulfilled, and where limited-risk systems may proceed if transparency duties are addressed, prohibited practices under Art. 5 cannot be resolved by ordinary obligation management. They require an early stop, structured review, and documented escalation.

This means organizations should not begin with the question "Is this high-risk?" They should first ask: "Is there any indication that this system may fall into a prohibited-practice area?" Only after that gate has been addressed can ordinary classification and approval continue.

Governance principle

A prohibited-practice check is not a checkbox at the end of a review. It is the first substantive governance gate. If Article 5 concerns appear, the case requires immediate escalation and documented review before any classification or approval logic continues.
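
As a minimal sketch of that ordering (all names below are hypothetical and illustrate sequencing only, not any particular platform's API), an intake flow can run the prohibited-practice gate first and halt on any hit:

    # Hypothetical sketch: the Article 5 gate runs before any classification
    # or approval logic, and a hit is a hard stop, not a comment field.

    class ProhibitedPracticeConcern(Exception):
        """Raised when an Article 5 red flag appears at intake."""

    def screen_article5(system: dict) -> list:
        # Placeholder screen: a real review is human judgment, not a lookup.
        return list(system.get("red_flags", []))

    def classify_and_approve(system: dict) -> str:
        return "approved"  # stand-in for ordinary Article 6 / Annex III logic

    def review_intake(system: dict) -> str:
        flags = screen_article5(system)  # first substantive governance gate
        if flags:
            # Escalate for documented review before any approval logic continues.
            raise ProhibitedPracticeConcern(f"escalate for review: {flags}")
        return classify_and_approve(system)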

The check must come before approval

In many organizations, AI tools enter operations before governance has caught up. A department tests a new tool. A vendor offers an AI feature. A business team pilots automation. By the time legal or compliance teams review the system, the tool may already be in use. That is precisely the governance failure Article 5 review is meant to prevent.

If prohibited-practice concerns are reviewed only after deployment, the organization may have already created exposure that cannot be resolved by later documentation. A prohibited-practice check must therefore sit at intake — before the system moves into screening, classification, or supervisor review.

What Article 5 requires organizations to assess

Article 5 covers several categories of prohibited AI practices. Organizations can structure an early AI compliance screening layer around the main risk areas without replacing legal judgment; the purpose is to detect red flags early enough that serious cases do not accidentally move through ordinary approval. Each risk area is outlined below, and a sketch of how the checklist can be made explicit follows the walkthrough.

Manipulative, deceptive, or subliminal techniques

Organizations must assess whether an AI system uses subliminal, manipulative, or deceptive techniques that may materially distort a person's behavior in a way that causes or is reasonably likely to cause significant harm. This is not limited to obvious deception. It may require attention where AI systems personalize persuasion, influence decisions, shape user behavior, or exploit attention asymmetry in ways that are not transparent to the person affected.

Exploitation of vulnerabilities

Article 5 addresses AI systems that exploit vulnerabilities of persons or groups due to age, disability, or specific social or economic situations. For organizations, this means reviewing the affected group. AI systems used with children, elderly persons, persons with disabilities, or economically vulnerable groups require careful screening — not only for what the system does, but for who may be affected and whether the system could exploit that vulnerability.

Social scoring

Article 5 prohibits certain forms of social scoring where natural persons are evaluated over time based on social behavior or personality characteristics, leading to detrimental treatment in unrelated contexts. Customer scoring, employee scoring, eligibility scoring, fraud scoring, and behavioral scoring can all raise governance questions. Not every score is prohibited — but organizations must be able to show that they reviewed the scoring logic, the data used, the resulting treatment, and whether consequences are related, justified, and proportionate.

Criminal-risk assessment based solely on profiling

Article 5 restricts AI systems used to assess the risk of a natural person committing a criminal offence when based solely on profiling or personality traits. For security, fraud, financial crime, and public-sector contexts, this distinction can become relevant. The governance question is whether the assessment is based solely on profiling rather than objective, verifiable facts linked to criminal activity.

Facial recognition database scraping

Article 5 prohibits AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. Organizations using biometric, identity, surveillance, or image-analysis systems must understand how training data, reference databases, and facial images are sourced. A vendor statement alone may not be sufficient where the use case is sensitive.

Emotion inference in workplace and education contexts

Article 5 prohibits AI systems used to infer emotions of natural persons in workplace and education institutions, except in medical or safety contexts. This is especially relevant for HR, employee monitoring, learning platforms, productivity analytics, and workplace safety tools. If a tool infers emotions from biometric or behavioral signals in workplace or education contexts, it may require Art. 5 review — regardless of how the vendor labels it.

Biometric categorisation to infer sensitive attributes

Article 5 addresses biometric categorisation systems that deduce sensitive attributes such as race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation. Any system involving biometric data, image analysis, voice analysis, video analytics, or demographic inference should be reviewed carefully. The key question is whether sensitive categories may be inferred — not just whether the system is described as a "categorisation" tool.

Real-time remote biometric identification in public spaces

Article 5 covers the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement, subject to narrow exceptions. Where AI systems involve surveillance, public spaces, biometric identification, security services, or cooperation with public authorities, the organization should not assume the issue is irrelevant without review.
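
To make the screening layer concrete, the sketch below encodes the areas above as an explicit checklist: a screen counts as complete only when every area has received an answer, so no category can silently default to "not applicable". The enumeration paraphrases the categories discussed above and is not the legal text of Article 5.

    # Hypothetical checklist: each Article 5 risk area needs an explicit
    # outcome before screening is considered complete.
    from enum import Enum

    class Article5Area(Enum):
        MANIPULATIVE_TECHNIQUES = "manipulative, deceptive, or subliminal techniques"
        VULNERABILITY_EXPLOITATION = "exploitation of vulnerabilities"
        SOCIAL_SCORING = "social scoring"
        CRIMINAL_RISK_PROFILING = "criminal-risk assessment based solely on profiling"
        FACE_DB_SCRAPING = "untargeted facial recognition database scraping"
        EMOTION_INFERENCE = "emotion inference in workplace and education contexts"
        BIOMETRIC_CATEGORISATION = "biometric categorisation of sensitive attributes"
        REALTIME_BIOMETRIC_ID = "real-time remote biometric identification in public spaces"

    def screen_is_complete(answers: dict) -> bool:
        # Complete only when every area has an explicit outcome recorded.
        return all(area in answers for area in Article5Area)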

Article 5 review is a gate, not a comment field

In weak governance structures, Article 5 is often reduced to one field: "Prohibited practice: no." That is not sufficient. A serious review must show what was considered, which risk areas were checked, which information supported the result, who reviewed the case, and whether escalation was required. The goal is not to create complexity — it is to prevent unacceptable-risk systems from entering operations through informal processes.

Documenting non-applicability is essential

Not every system will trigger Article 5 concerns. Most ordinary AI systems will not. But that does not mean the review should disappear. If prohibited-practice areas were considered and found not applicable, that conclusion should be documented — especially in sensitive domains such as HR, biometrics, scoring, or workplace monitoring. In EAB's governance logic, "not applicable" is never an empty field. It is a documented decision.
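
A sketch of what such a documented decision can look like as a record, assuming one entry per screened area (all field names are illustrative, not a prescribed schema):

    # Hypothetical record: "not applicable" carries an author, a rationale,
    # and a timestamp rather than being a blank field.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class Article5Decision:
        area: str            # e.g. "social scoring"
        outcome: str         # "not_applicable" | "needs_review" | "escalated"
        rationale: str       # why the reviewer reached this conclusion
        reviewer: str        # an accountable person, not a system account
        decided_at: datetime

    decision = Article5Decision(
        area="social scoring",
        outcome="not_applicable",
        rationale="Scores affect only the originating context; no unrelated detriment.",
        reviewer="jane.doe@example.com",
        decided_at=datetime.now(timezone.utc),
    )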

Vendor descriptions are not sufficient

Many organizations rely heavily on vendor descriptions. The vendor calls it a productivity assistant, a workforce analytics platform, a personalization tool, or a safety solution. These labels may be useful context, but they are not sufficient for Article 5 review. Art. 5 concerns often depend on actual use, affected persons, decision influence, data sources, inference logic, deployment context, and consequences — not on how a vendor describes the product. The organization must review the use case, not only the vendor label.

How EAB structures the prohibited practices check

In EAB, the Prohibited Practices Check is integrated into the governed AI compliance workflow before ordinary classification continues. The AI System Registry and structured intake capture purpose, use case, affected persons, and operational context. AI Screening includes prohibited-practice relevance as the first substantive review step. Risk classification proceeds only after the prohibited-practice layer has been addressed. Supervisor Approval ensures that serious concerns are not approved without accountable review. Audit-Ready Traceability preserves who reviewed the case, what was known, what was decided, and whether further review was required.
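
The chain can be pictured as ordered stages, each appending to a reconstructable trail, with the prohibited-practice gate able to halt everything downstream. The sketch below is illustrative only and does not reflect EAB's actual implementation or API:

    # Illustrative stage ordering with an append-only trail; stage names
    # mirror the workflow described above, not a real product schema.
    STAGES = [
        "intake",               # registry entry: purpose, use case, affected persons
        "prohibited_check",     # Article 5 gate, the first substantive review step
        "risk_classification",  # Article 6 / Annex III, only after the gate
        "supervisor_approval",  # accountable sign-off with an escalation path
    ]

    def run_chain(system_id: str, results: dict) -> list:
        trail = []
        for stage in STAGES:
            outcome = results.get(stage, "pending")
            trail.append({"system": system_id, "stage": stage, "outcome": outcome})
            if stage == "prohibited_check" and outcome == "escalated":
                break  # the chain halts until documented review resolves the concern
        return trail  # reviewer identity and rationale would sit alongside each entry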


Once the prohibited-practice gate has been cleared, the next step is risk classification under Article 6 and Annex III, where the Annex III categories and the system's intended use shape the classification outcome.

EAB Compliance Platform

Article 5 as a governed first gate — not a footnote.

EAB places prohibited-practice review at the entry point of the AI governance chain — with structured detection, documented rationale, supervisor escalation, and a reconstructable audit trail for every check.

EU-hosted · Anchored to CELEX 32024R1689
