Practical reading for compliance teams, legal departments, data protection officers, cybersecurity owners, and AI system owners who need to turn regulatory requirements into governed execution — across EU AI Act, GDPR, and NIS2.
A screening session creates the structured basis for determining what the EU AI Act requires of a specific AI system. Here is what that process actually involves — and why informal approaches fail at audit time.
Risk classification is not self-evident. It depends on intended use, Annex III context, actor role, and system-specific review. This guide explains how high-risk status is determined — and why it is a governance step, not just a label.
Article 5 defines the absolute boundary. Before any risk classification begins, every AI system must pass a prohibited-practice gate. What the eight prohibition categories mean in practice — and what a documented check requires.
The EU AI Act assigns obligations based on the actor role an organization holds — not just on the AI system it uses. Understanding whether you are a provider, deployer, importer, or distributor determines what you must actually do.
Risk classification tells you the regulatory direction. It does not tell you what to do next. Understanding which obligations apply — and what each requires as evidence — is a separate governance step that classification alone cannot answer.
Evidence readiness is not a pre-audit activity — it is a continuous governance state. What it means to maintain live, verifiable evidence for each AI obligation, and how to measure the gap before an auditor does.
Audit-ready AI governance is the ability to reconstruct how a decision was made — showing what was reviewed, who was responsible, which evidence existed, and what changed later. This is what defensible governance requires.
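The reconstruction described above — what was reviewed, who was responsible, which evidence existed, what changed later — can be sketched as a minimal record structure. This is an illustrative sketch only; every class and field name here is an assumption, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """Illustrative audit-ready decision record (field names are assumptions).

    Frozen so the original decision cannot be silently rewritten after the fact.
    """
    decision_id: str
    subject: str                 # the AI system or obligation that was reviewed
    reviewed_inputs: tuple       # what was on the table at decision time
    responsible: str             # named, accountable reviewer
    evidence_refs: tuple         # identifiers of evidence that existed then
    decided_at: datetime
    outcome: str


@dataclass
class DecisionHistory:
    """Keeps the original decision intact and appends later changes separately."""
    original: DecisionRecord
    changes: list = field(default_factory=list)

    def record_change(self, description: str) -> None:
        # Changes are appended with a timestamp, never merged into the original,
        # so the decision as originally made stays reconstructable.
        self.changes.append((datetime.now(timezone.utc), description))
```

The design choice worth noting: the original record is immutable and later changes are a separate, timestamped list — which is what makes "what changed later" answerable at audit time.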
A missing obligation and a documented non-applicability are not the same thing — and auditors treat them very differently. What documenting exclusions in a defensible way actually requires.
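The distinction above — an absent entry versus an explicit, justified exclusion — can be made concrete in a small sketch. The class, fields, and `is_defensible` check are hypothetical illustrations, assuming a register where every obligation gets an entry either way.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ObligationEntry:
    """Illustrative register entry: an obligation is either applicable
    (and evidenced elsewhere) or explicitly excluded with a justification.
    A missing entry is neither — and cannot be reconstructed later.
    Field names are assumptions, not a mandated schema."""
    obligation: str
    applicable: bool
    justification: Optional[str] = None   # required when not applicable
    approved_by: Optional[str] = None     # named reviewer of the exclusion

    def is_defensible(self) -> bool:
        # A documented non-applicability needs both a reason and an approver.
        return self.applicable or (
            self.justification is not None and self.approved_by is not None
        )
```

Usage-wise, the point is that `ObligationEntry("…", applicable=False)` without a justification and approver fails the check — exactly the gap an auditor would flag.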
AI governance requires named responsibility. What attribution, ownership, and reviewability mean when compliance decisions must later be reconstructed — and why they cannot be delegated informally.
The EU AI Act is implemented progressively — approved once does not mean compliant forever. When legal context, guidance, system configuration, or use case changes, the original decision may require renewed review.
An approval gate is not a bottleneck — it is an attribution point. How governance gates structure accountability across the organization, what the reviewer must actually see, and what happens when gates are absent.
Many AI systems that require classification under the AI Act also process personal data. Where the two regimes overlap, where they diverge, and how to manage both without duplicating or fragmenting the governance record.
A DPIA under GDPR and an AI risk assessment under the EU AI Act serve different purposes and produce different governance outputs. How to run both without conflating the two obligations — or letting one substitute for the other.
Technical and organisational measures and vendor governance are part of an AI system's evidence structure when personal data is involved. What TOM profiles look like for AI, and how processor and provider governance connect.
NIS2 readiness is not only a technical security question — it is a governance and evidence question. What organizations need to document across critical services, security measures, supplier dependencies, incidents, and open gaps.
A NIS2 Incident Register is the governance record around incidents — not a SIEM, SOC, or reporting portal. What it must capture, why lifecycle and reporting status must stay separate, and why it matters before an authority ever asks.
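The separation the teaser insists on — incident lifecycle versus reporting status — can be sketched as two independent fields. The 24h/72h annotations reflect the NIS2 notification stages under Article 23; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class LifecycleStatus(Enum):
    DETECTED = "detected"
    CONTAINED = "contained"
    RESOLVED = "resolved"


class ReportingStatus(Enum):
    NOT_REPORTABLE = "not_reportable"
    EARLY_WARNING_SENT = "early_warning_sent"   # 24h early warning under NIS2
    NOTIFICATION_SENT = "notification_sent"     # 72h incident notification
    FINAL_REPORT_SENT = "final_report_sent"


@dataclass
class IncidentEntry:
    """Illustrative NIS2 incident-register entry (not a mandated schema).

    Lifecycle and reporting status are deliberately separate fields:
    an incident can be technically resolved while its final report
    to the authority is still outstanding.
    """
    incident_id: str
    affected_service: str
    lifecycle: LifecycleStatus
    reporting: ReportingStatus
```

Collapsing the two into one status field is the common spreadsheet mistake: "resolved" then hides the fact that a regulatory report is still due.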
NIS2 extends accountability to the supply chain. How to identify which supplier dependencies create regulatory exposure, what critical service linkage requires, and what documented oversight of vendors actually demands.
Spreadsheets create documentation — they do not create governance. AI governance requires responsibility, workflow, evidence, approval, audit trail, and reconstructable decisions. What the difference means when an auditor asks for evidence.
Boards are increasingly accountable for AI governance outcomes. What meaningful AI compliance reporting looks like at executive level — and what a board should be able to ask and verify.
Internal and external auditors reviewing AI governance are not assessing intent. They are assessing evidence. What enterprise-grade AI governance records look like and what gaps they expose.
Technical documentation is not a file archive — it is the evidence backbone of AI governance. What Article 11 and Annex IV require and how documentation connects to risk classification, obligations, and audit readiness.
Article 14 requires more than a policy that says humans are in the loop. Deployers must assign real oversight to named people with competence, authority, and documented intervention rights.
Article 4 requires that providers and deployers ensure sufficient AI literacy in their staff. What this means in practice, who needs literacy, how it differs by role, and what evidence governance requires.
GPAI models sit behind many everyday AI tools. Most organizations are not GPAI providers — but they still need governance over how GPAI-based systems are used, governed, and evidenced.
Conformity assessment determines how a provider demonstrates that a high-risk AI system meets EU AI Act requirements. Self-assessment is available for many systems — but it is not informal, and it is not permanent.
Article 72 makes clear that compliance does not end at approval. High-risk AI systems must be monitored throughout their lifetime — with structured plans, evidence, corrective action, and re-screening when issues arise.
Many organizations assume EU AI Act compliance is mainly a provider problem. It is not. Article 26 gives deployers their own obligations — from following instructions and assigning oversight to managing evidence and approving use.
AI in recruitment, evaluation, monitoring, and worker management is one of the most consequential high-risk areas under the EU AI Act. What Annex III means for HR tools, why governance must be use-case-specific, and what evidence is required.
EAB turns EU AI Act, GDPR, and NIS2 compliance into a structured, attributed, audit-ready governance process — not a document folder.
EU-hosted · Anchored to CELEX 32024R1689
Tell us about your organization and what you’re looking to address. We’ll follow up with the relevant information.