AI compliance screening is not a loose questionnaire. It is not a quick legal opinion delivered by email. It is not a risk label produced by a tool and treated as settled. It is the structured input into a governed review process — the first serious governance step that connects an AI system to its regulatory obligations, the evidence required to demonstrate them, and the responsible roles that must act on the result.
For organizations operating under the EU AI Act, screening determines whether an AI system can move forward through the governance chain. It links the system's purpose, its operational context, its risk profile, the actor roles involved, the applicable obligations, the available evidence, and the approval path into a single reviewable decision record. That record must be reconstructable. It must show not only what was concluded, but how, by whom, on what basis, and under which version of the regulation.
Screening is not the final decision
This distinction is foundational and widely misunderstood. A screening result can identify relevant risk signals, probable classification outcomes, missing information, possible obligations, and areas requiring further review. What it cannot do — and should not be treated as doing — is replace the organizational decision. The responsibility for approval remains with the organization and its accountable roles.
If an organization treats the screening result as the decision itself, it creates a structural governance weakness. The decision becomes dependent on a single output instead of a controlled process. There is no longer a clear separation between analysis, review, approval, override, and documentation. A governed process keeps these layers distinct.
A mature AI governance process separates these layers explicitly. The screening structures the review. The risk classification identifies the regulatory direction. The obligation logic shows what must be addressed. The evidence layer shows what can be demonstrated. A responsible person reviews the record. The supervisor or accountable role approves, rejects, or requests further completion. The audit trail preserves the decision path. Screening is the entry point into this chain — not the chain itself.
Why informal screening fails
Most organizations begin AI compliance with informal methods: a business unit asks whether a tool is permitted, a legal team gives a quick opinion by email, a spreadsheet lists AI systems without structured intake, a form collects basic information without connecting it to obligations, or a meeting ends with the assumption that a system is low risk. None of these is sufficient as an operational governance structure.
The weakness is not always that the answer is wrong. The deeper failure is that the process cannot be reconstructed. If the organization is later asked how the AI system was reviewed, it must be able to show more than a general statement. It must be able to show the system context, the risk reasoning, the applicable obligations, the evidence available at the time, the responsible reviewer, the approval path, and any changes that followed.
Informal screening fails because it does not reliably create a stable governance record. It does not show who made the assessment. It does not preserve the legal version consulted. It does not connect the classification to obligations. It does not identify missing evidence. It does not route responsibility to the right roles. It does not create a durable approval trail. It may help people discuss compliance. It does not create operational compliance.
What screening must determine
A structured AI compliance screening process must answer several connected questions. These questions do not function as isolated checklist items — they form a chain of governance logic, where each answer shapes the next.
Is the system within regulatory scope?
Before risk classification begins, the organization must establish whether the system under review falls within the relevant AI governance scope. This requires more than asking whether the tool uses machine learning or generative AI. The review must consider the system's purpose, outputs, degree of autonomy, role in business processes, and relationship to human decision-making. A system that appears harmless in technical terms may still create governance relevance if it influences employment outcomes, access to services, customer treatment, operational decisions, or regulated processes.
Is a prohibited practice potentially involved?
Under Art. 5 of the EU AI Act, certain practices require attention before any risk classification begins, because they may fall into prohibited categories that cannot be resolved by classification or obligation management. A governed screening process must therefore create a structured gate for prohibited-practice relevance as its first substantive step. A system that raises Art. 5 concerns cannot be handled as just another risk-classification case; it requires immediate escalation, documented review, and accountable decision-making before the process continues.
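To make the gating logic concrete, the sketch below shows one way such a gate might be expressed in code. It is a minimal illustration, not EAB's implementation: the signal names, the SystemIntake fields, and the ScreeningHalted exception are assumptions standing in for whatever the organization's own intake questionnaire captures.

```python
from dataclasses import dataclass, field

@dataclass
class SystemIntake:
    name: str
    # Illustrative Art. 5 relevance signals captured during intake; the real
    # signal set would come from the organization's screening questionnaire.
    art5_signals: list[str] = field(default_factory=list)

class ScreeningHalted(Exception):
    """Raised when screening must stop and escalate before classification."""

def prohibited_practice_gate(intake: SystemIntake) -> None:
    # The gate runs before any classification step: any Art. 5 signal halts
    # the normal flow and forces a documented, accountable review.
    if intake.art5_signals:
        raise ScreeningHalted(
            f"{intake.name}: possible prohibited practice "
            f"({', '.join(intake.art5_signals)}); escalate before classification"
        )
```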
Which risk category and actor role apply?
Risk classification under Art. 6 is central to AI compliance screening, but it cannot be assessed independently of actor role. The same technical tool may carry different governance implications depending on whether the organization acts as provider, deployer, importer, or distributor, and the obligations that follow differ accordingly. Classification must therefore be documented in relation to the concrete use case, not to a product label or provider category. And actor-role assessment under Art. 3 must be completed before obligations can be assigned, because a risk class without an actor-role context is incomplete governance.
Which obligations follow?
A risk classification alone does not tell an organization what to do next. The screening process must connect the classification and actor role to concrete obligation areas, which may include transparency (Art. 13), human oversight (Art. 14), technical documentation (Annex IV), data governance (Art. 10), risk management (Art. 9), accuracy and robustness (Art. 15), conformity assessment, or AI literacy requirements, depending on the system context. This translation from classification to obligation is where many weak tools stop too early. They produce a risk label but do not connect it to what the organization must actually do.
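As an illustration of that translation step, the sketch below keys obligation areas to a combination of risk class and actor role. The category names and the obligation labels attached to each combination are simplified placeholders, not an authoritative reading of the Act and not EAB's Obligation Matrix.

```python
from enum import Enum

class RiskClass(Enum):
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class ActorRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

# Illustrative obligation areas keyed by (risk class, actor role).
# This is a placeholder structure, not a legal mapping, and a real matrix
# would be maintained against the current legal version.
OBLIGATION_MATRIX = {
    (RiskClass.HIGH, ActorRole.PROVIDER): [
        "risk management (Art. 9)", "data governance (Art. 10)",
        "technical documentation (Annex IV)", "transparency (Art. 13)",
        "human oversight (Art. 14)", "accuracy and robustness (Art. 15)",
    ],
    (RiskClass.HIGH, ActorRole.DEPLOYER): [
        "oversight in use", "AI literacy", "usage monitoring",
    ],
    (RiskClass.LIMITED, ActorRole.DEPLOYER): ["transparency obligations"],
}

def obligations_for(risk: RiskClass, role: ActorRole) -> list[str]:
    # An empty result means "no mapping defined yet", not "no obligations".
    return OBLIGATION_MATRIX.get((risk, role), [])
```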
What evidence exists — and what is missing?
Compliance is not created by knowing that an obligation exists. It becomes operationally meaningful only when the organization can show what evidence supports each governance step. A screening process must therefore identify whether evidence is complete, partial, missing, unclear, or — after documented review — not applicable. This distinction matters because an empty field is not the same as a reviewed exemption, and missing evidence is not the same as a completed form. Screening must surface the evidence gap so that the right roles can act on it.
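The distinction between these states can be modeled explicitly. The sketch below is one minimal way to do so; the names are illustrative, and the only rule it encodes is that a "not applicable" status counts as a gap unless it carries a reference to the review that justified it.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class EvidenceStatus(Enum):
    COMPLETE = "complete"
    PARTIAL = "partial"
    MISSING = "missing"
    UNCLEAR = "unclear"
    NOT_APPLICABLE = "not_applicable"   # only valid with a documented review

@dataclass
class EvidenceItem:
    obligation: str
    status: EvidenceStatus
    review_reference: Optional[str] = None  # who reviewed the exclusion, and when

    def is_defensible(self) -> bool:
        # An empty field is not a reviewed exemption: NOT_APPLICABLE without
        # a review reference is treated the same as a gap.
        if self.status is EvidenceStatus.COMPLETE:
            return True
        if self.status is EvidenceStatus.NOT_APPLICABLE:
            return self.review_reference is not None
        return False
```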
Who must act next?
Screening must create responsibility, not just a result. If technical information is missing, the AI system owner must complete the record. If the business purpose is unclear, the business operator must clarify it. If the risk context is serious, a supervisor must review the decision basis. Without role-based routing, screening produces another static document. A governed process turns the result into action by assigning the next step to the right person with the right authority.
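Expressed as routing logic, this might look like the sketch below. The gap categories and role names are assumptions standing in for the organization's own role model; the point is only that every surfaced gap resolves to an accountable actor.

```python
# Illustrative mapping from the kind of gap surfaced by screening to the
# role that must act on it next; the names are placeholders.
NEXT_ACTOR_BY_GAP = {
    "technical_information_missing": "ai_system_owner",
    "business_purpose_unclear": "business_operator",
    "serious_risk_context": "supervisor",
}

def route_next_step(gap_type: str) -> str:
    # Unknown gap types should not disappear silently: default to the
    # accountable reviewer rather than dropping the task.
    return NEXT_ACTOR_BY_GAP.get(gap_type, "supervisor")
```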
From screening result to governed process
AI compliance screening becomes valuable when it is embedded in the full governance chain rather than treated as a standalone assessment. The operational sequence is structured: the AI system is registered; the business context is documented; technical and operational information is completed; the screening evaluates scope, prohibited-practice relevance, risk classification, actor role, obligation areas, and evidence gaps; the result feeds into a structured review; responsible roles address missing information; the supervisor or accountable reviewer approves, rejects, requests completion, or documents a justified override; and the decision path becomes part of the audit trail.
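Read as a sequence, the chain can be sketched roughly as follows. The stage names are illustrative, and the single rule the code encodes is that the chain advances in order and no stage can be skipped.

```python
from typing import Optional

# Illustrative ordering of the governance chain; the stage names are placeholders.
STAGES = [
    "system_registered",
    "business_context_documented",
    "technical_detail_completed",
    "screening_performed",
    "review_completed",
    "gaps_addressed",
    "approval_decision_recorded",
    "audit_trail_updated",
]

def next_stage(completed: set) -> Optional[str]:
    # The first stage not yet completed is the next required step.
    for stage in STAGES:
        if stage not in completed:
            return stage
    return None
```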
The value is not the result alone. The value is the controlled path from system registration to documented decision — and the ability to reconstruct that path at any later point without depending on anyone's memory of what happened.
Evidence makes screening defensible
A screening result without evidence is weak. It may describe a conclusion, but it does not show whether the conclusion can be defended. For audit-ready governance, the organization must be able to show what information supported the assessment — system description, intended purpose, affected user groups, operational context, provider information, human oversight logic, technical documentation status, legal version consulted, and any prior screening history where relevant.
Evidence exists in different states, and governance must distinguish between them. Some evidence is complete. Some is partial. Some is missing. Some obligations may be not applicable — but only after a documented review that justifies the exclusion. Some areas remain unclear and require further completion. An empty field is not the same as documented non-applicability. A completed form is not the same as an accountable decision.
This is why evidence readiness is not a pre-audit preparation step. It is a continuous governance state that must be visible throughout the lifecycle of the AI system — from initial screening through re-screening, obligation completion, and any approval overrides that follow.
Supervisor approval closes the governance loop
AI compliance screening must lead to accountable review. A mature governance process does not allow blind approval — the approving role must see the screening result, the risk classification, the obligation logic, the evidence status, the unresolved gaps, and any documented non-applicability or override rationale before the decision is made. The supervisor does not approve in isolation; the supervisor approves based on a governed record that shows what was known at the time, which risks were identified, which evidence existed, and why the decision was taken.
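One way to enforce the no-blind-approval rule in software is to refuse the approval call unless the full decision basis is attached. The field names below are assumptions rather than EAB's actual data model; the sketch only illustrates the principle.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionBasis:
    screening_result: str
    risk_classification: str
    obligation_summary: str
    evidence_status: str
    open_gaps: list[str] = field(default_factory=list)
    override_rationale: str = ""   # required only when gaps remain open

def approve(basis: DecisionBasis, approver: str) -> dict:
    # Approval is refused outright if any part of the decision basis is
    # missing: the approver must see what was known at the time.
    required = (basis.screening_result, basis.risk_classification,
                basis.obligation_summary, basis.evidence_status)
    if not all(required):
        raise ValueError("blind approval blocked: decision basis is incomplete")
    if basis.open_gaps and not basis.override_rationale:
        raise ValueError("open gaps require a documented override rationale")
    return {"approved_by": approver, "basis": basis}
```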
This is also where the distinction between AI-assisted review and human accountability becomes operational. A platform can assist the screening process, structure the findings, and surface governance gaps, but human accountability must remain intact. The system supports the decision process. It does not replace the responsible decision-maker, and no governance record that depends on an AI output without human review and attribution is defensible at audit.
The audit trail preserves the decision path
The most consequential question in AI governance is often not whether a decision was made. It is whether that decision can be reconstructed. For AI compliance screening, reconstructability means the organization can show: which AI system was screened; which information was available; which risk signals were identified; which classification was produced; which obligations were connected to the result; which evidence was present or missing; which legal version was consulted; which person reviewed or approved the result; which changes occurred afterwards; and whether re-screening was triggered by a legal change event.
If any of these elements cannot be produced, the screening record is incomplete — regardless of how accurate the original determination may have been. A point-in-time governance record is not a summary of the current state. It is a frozen snapshot of what was known, what was decided, and who was responsible, at the moment the decision was made. That snapshot must survive any subsequent change to the system, the regulation, or the organization.
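A point-in-time record of this kind can be sketched as an immutable snapshot. The fields mirror the list above; the frozen dataclass and the helper function are illustrative assumptions, one way of expressing that the record must not change after the decision is made.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)   # frozen: the snapshot cannot be altered after creation
class ScreeningSnapshot:
    system_id: str
    information_available: tuple[str, ...]
    risk_signals: tuple[str, ...]
    classification: str
    obligations: tuple[str, ...]
    evidence_gaps: tuple[str, ...]
    legal_version: str
    reviewed_by: str
    decided_at: datetime

def take_snapshot(system_id: str, **fields) -> ScreeningSnapshot:
    # The timestamp is captured at decision time, so later changes to the
    # system, the regulation, or the organization do not rewrite history.
    return ScreeningSnapshot(system_id=system_id,
                             decided_at=datetime.now(timezone.utc),
                             **fields)
```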
AI-assisted screening must remain governed
AI-assisted screening can accelerate structured regulatory analysis, identify risk signals, and translate system information into reviewable findings. But it must operate inside a governed process — not as a substitute for one. Organizations should not treat an AI output as autonomous compliance. The screening run must be stored. The result must be connected to the system record. The decision path must be logged. The responsible reviewer must remain visible. The output must be treated as structured input, not as final judgment.
When AI-assisted screening is embedded inside a governed process with human accountability, traceability, and review, it becomes a governance accelerator. When it is used as a replacement for that process, it creates a different kind of governance gap — one that is harder to detect precisely because it looks like a compliance output.
How EAB structures the process
In EAB, AI compliance screening is not a standalone tool. It is one layer of an operational governance chain. The AI System Registry creates the system record. Business intake captures purpose, usage, and organizational context. Guided technical completion adds system, data, deployment, and operational detail. AI Screening evaluates scope, prohibited-practice relevance, risk classification indicators, actor-role context, and recommended review areas. The Obligation Matrix translates the screening result into concrete obligation areas. Evidence Readiness shows what can be demonstrated and what remains missing. Supervisor Approval turns the review into an accountable governance decision with attribution and a documented decision basis. The Audit Trail records the full decision path with timestamps, role attribution, and legal version anchoring.
This is the structural difference between EAB and informal AI compliance handling. EAB does not simply help an organization understand AI compliance. It helps the organization govern how AI compliance decisions are made — and ensure that those decisions can be reconstructed, defended, and re-examined at any point in the future.
The next guides in this series examine EU AI Act prohibited practices under Article 5 and EU AI Act risk classification under Annex III — including the ambiguous cases and how the industry is interpreting them.