Governance Workflow

Re-Screening When the Law Changes

AI compliance is not frozen at first approval. The EU AI Act is implemented progressively. Guidance evolves. National interpretation develops. Systems change. Use cases expand. Actor roles shift. Evidence becomes outdated. Re-screening is the controlled process of reviewing an already governed AI system again when a relevant trigger indicates that the original decision may no longer be sufficient.

Many organizations treat approval as the end of compliance. An AI system is registered, screened, classified, evidenced, and approved. The record is archived. That is not durable governance. AI systems operate in changing environments. A decision that was reasonable at one point in time may require renewed review later. This does not mean the original decision was wrong — it means the governance context has changed.

Re-screening is not the same as starting over

Re-screening should not erase the original record. The organization must preserve what was reviewed, when it was reviewed, who approved it, which evidence existed, which legal context applied, and why the decision was made. A new review should be added to the governance history, not replace it. This allows the organization to show both the original decision and the later reassessment.

If re-screening overwrites the old record, the organization loses the ability to reconstruct the past. If re-screening creates a new point-in-time record, the organization gains a governance timeline. The right model is not replacement — it is versioned continuity.

What can trigger re-screening

Re-screening can be triggered by legal changes, operational changes, technical changes, organizational changes, or evidence and audit findings. A mature governance system should not rely on informal memory to detect these triggers — it should make them visible.

The legal context may change through amendments, implementing acts, delegated acts, Commission guidance, harmonized standards, national implementation details, supervisory expectations, or official interpretive developments. The EU AI Act is a regulation with progressive rollout through 2027 — its practical governance environment continues to evolve. A system approved under one legal-source context may require renewed review when new guidance affects classification, actor role, obligations, technical documentation, transparency, general-purpose AI, or evidence expectations. This is why legal source versioning matters: the organization must know which systems were reviewed under which legal context.

Obligation relevance changes

A change in regulatory guidance may affect certain obligation areas. Systems previously treated as low-impact may require renewed attention if transparency expectations change. High-risk systems may require review if technical documentation or human oversight expectations become clearer. The trigger is not only "the law changed" — it is: which systems are affected by the changed legal or obligation context?

System modification

AI systems change technically. A model may be updated. A provider may change functionality. A system may be fine-tuned. New data may be introduced. Automation level may increase. Human oversight may be reduced. New outputs may be generated. Integration may deepen. A technical change can affect risk classification, evidence requirements, actor role, and obligation status. If the system changes materially, re-screening may be required.

Use-case expansion

An AI system may begin with a narrow use case and later expand. An internal drafting tool may become customer-facing. A support assistant may begin making recommendations. A productivity tool may be used in HR. An analytics tool may begin influencing access decisions. A pilot may become production. A low-risk use case may move into an Annex III-adjacent context. Use-case expansion is one of the most important re-screening triggers. The governance record must follow the actual use of the system, not only its original description.

Actor role change

The organization's role can change. A company may start as a deployer. Later, it modifies the system, rebrands it, integrates it into its own product, or places it on the market. A non-EU provider relationship may introduce importer relevance. A group structure may create multi-entity responsibility. If the actor role changes, obligations may change — and that requires re-screening.

Evidence change

Evidence may become outdated, incomplete, invalid, or newly available. Provider documentation may change. A technical document may no longer match the system. A human oversight procedure may no longer reflect operational reality. A transparency notice may become outdated. A training record may expire. A vendor assurance may be replaced. Evidence change can justify re-screening because the decision basis has changed.

Incident or audit finding

An incident may reveal that an AI system behaves differently than expected. An audit may find incomplete evidence. A customer review may raise a governance gap. A regulator inquiry may expose uncertainty. A security issue may affect system reliability. A privacy review may identify personal data concerns. These events can trigger re-screening because they challenge the original decision basis.

Re-screening must identify affected systems

A legal or operational change does not automatically affect every AI system. The organization must identify which systems are relevant. A strong re-screening process should be able to filter systems by risk classification, actor role, legal entity, use case, sector, obligation area, evidence status, provider, system type, personal data relevance, Annex III context, approval date, and legal-source version. This is where a structured AI System Registry becomes critical. If systems are only listed in spreadsheets, identifying affected systems becomes manual and unreliable.
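To make the filtering idea concrete, here is a minimal sketch of registry filtering in Python. The record fields and the `affected_by` helper are illustrative assumptions, not EAB's actual schema:

```python
from dataclasses import dataclass

# Hypothetical registry record; field names are illustrative, not EAB's schema.
@dataclass(frozen=True)
class AISystemRecord:
    system_id: str
    risk_class: str           # e.g. "minimal", "limited", "high"
    actor_role: str           # e.g. "provider", "deployer", "importer"
    legal_source_version: str # snapshot the review was anchored to
    annex_iii_context: bool
    personal_data: bool

def affected_by(records, **criteria):
    """Return systems matching every given attribute, e.g. risk_class='high'."""
    return [r for r in records
            if all(getattr(r, k) == v for k, v in criteria.items())]

registry = [
    AISystemRecord("sys-001", "high", "deployer", "2024-07", True, True),
    AISystemRecord("sys-002", "limited", "provider", "2024-07", False, False),
]

# Which high-risk systems were reviewed under the 2024-07 legal-source snapshot?
hits = affected_by(registry, risk_class="high", legal_source_version="2024-07")
```

The point is not the two lines of filtering logic — it is that filtering is only possible at all when systems are structured records rather than free-text rows in a spreadsheet.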

The re-screening queue

A re-screening queue is the operational layer that turns a trigger into governance work. It should show which systems need re-screening, why they were added to the queue, which trigger applies, which legal or operational context changed, which role must act, which evidence may need review, which approval path is required, and which original record remains preserved. Without a queue, re-screening depends on scattered emails, calendar reminders, and individual memory.
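A queue entry of this kind can be sketched as a small structure that captures the trigger, the changed context, and the responsible role. The field names below are assumptions for illustration, not EAB's data model:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative queue entry; EAB's actual fields may differ.
@dataclass
class ReScreeningItem:
    system_id: str
    trigger: str           # "legal_change", "use_case_expansion", "evidence_gap", ...
    context: str           # what changed, in plain language
    responsible_role: str  # who must act
    added_on: date = field(default_factory=date.today)

queue: list[ReScreeningItem] = []

def enqueue(system_id, trigger, context, responsible_role):
    # One open entry per (system, trigger) pair; avoid silent duplicates.
    if not any(i.system_id == system_id and i.trigger == trigger for i in queue):
        queue.append(ReScreeningItem(system_id, trigger, context, responsible_role))

enqueue("sys-001", "legal_change", "New Commission guidance on Annex III", "Supervisor")
enqueue("sys-001", "legal_change", "duplicate trigger", "Supervisor")  # ignored
```

Even this toy version shows the contrast with calendar reminders: the queue records why a system is there and who owns the next step.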

Re-screening should preserve the original record

The original decision record must remain intact. It should show the system state, screening result, risk classification, actor role, obligations, evidence, approval, and legal-source context that existed at the time. The new re-screening record should show what changed. This allows the organization to answer: What was the original decision? What triggered renewed review? Which fields changed? Which obligations changed? Who reviewed the new context? Was approval confirmed, changed, or withdrawn? Was risk accepted? This creates audit-ready continuity.

Legal source versioning is a key control. The organization should know which legal context was used when a system was reviewed. This does not mean the platform interprets the law autonomously in real time — it means the decision record should be linked to the relevant legal-source context available at the time. When legal guidance or incorporated platform logic changes, affected systems can be identified for renewed review. Versioning preserves both past and present, avoiding two weaknesses: the assumption that old approvals remain valid forever, and the erasure of the original decision basis.
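The versioned-continuity principle — append a new point-in-time record, never overwrite the old one — can be sketched in a few lines. The record shape here is a hypothetical simplification:

```python
# Versioned continuity: each re-screening appends a new point-in-time record
# instead of overwriting the previous one. The record shape is illustrative.
history: dict[str, list[dict]] = {}

def record_screening(system_id, result, legal_source_version, approved_by):
    versions = history.setdefault(system_id, [])
    versions.append({
        "version": len(versions) + 1,
        "result": result,
        "legal_source_version": legal_source_version,
        "approved_by": approved_by,
    })

record_screening("sys-001", "approved", "2024-07", "supervisor-a")
record_screening("sys-001", "approved-with-conditions", "2025-02", "supervisor-b")

# Both the original decision and the later reassessment stay reconstructable.
original, latest = history["sys-001"][0], history["sys-001"][-1]
```

An update-in-place design would have kept only `latest`; the append-only design is what preserves the governance timeline described above.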

Re-screening and supervisor approval

Re-screening should not happen silently. If a system requires renewed review, the accountable role must see the result. The Supervisor may need to approve continued use, request completion, reject continued use, accept a documented risk, or escalate. This is especially important where re-screening identifies new high-risk relevance, new Article 5 concern, changed actor role, missing evidence, outdated documentation, changed vendor status, new personal data relevance, or unresolved obligation gaps. A re-screening process without approval logic is only an update. A re-screening process with approval logic is governance.
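The approval logic above amounts to two things: an explicit set of possible outcomes, and a rule that certain findings must route to the accountable role rather than auto-complete. A minimal sketch, with outcome names and finding labels as assumptions:

```python
from enum import Enum

# The supervisor outcomes listed above, as an explicit state set (illustrative).
class ReScreeningOutcome(Enum):
    APPROVE_CONTINUED_USE = "approve"
    REQUEST_COMPLETION = "request_completion"
    REJECT_CONTINUED_USE = "reject"
    ACCEPT_DOCUMENTED_RISK = "accept_risk"
    ESCALATE = "escalate"

def requires_supervisor(findings: set[str]) -> bool:
    """A re-screening result touching any blocking finding must not
    auto-complete; it routes to the accountable role for a decision."""
    blocking = {"new_high_risk", "article_5_concern", "actor_role_change",
                "missing_evidence", "outdated_documentation"}
    return bool(findings & blocking)
```

The design point is the gate itself: without `requires_supervisor`, a re-screening run is just a data refresh; with it, a human decision is forced where the stakes demand one.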

Re-screening and evidence readiness

Re-screening often reveals evidence issues. Evidence that was complete before may no longer be sufficient. Evidence that was externally covered may need renewed verification. Evidence that was missing may now block approval. Evidence that was not applicable may become applicable. This is why re-screening must connect to Evidence Readiness. The question is not only whether the risk class changed — it is whether the decision basis remains supported.

Re-screening and non-applicability

Non-applicability must also be reviewed when context changes. An obligation previously marked not applicable may become relevant if the system use case changes, personal data is introduced, the actor role changes, the system becomes customer-facing, the system enters a sensitive domain, the vendor changes functionality, or new guidance affects interpretation. If the original rationale remains valid, the record can show that it was reviewed. If it no longer applies, the obligation state must change. This is why non-applicability must be documented in the first place — undocumented non-applicability cannot be re-screened properly.
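The two outcomes described above — rationale still valid versus obligation re-entering scope — can be expressed as a simple state transition. State names are illustrative, not EAB terminology:

```python
# Reviewing a "not applicable" obligation when context changes.
# If the original rationale holds, record that it was reviewed;
# if not, the obligation re-enters scope and must be worked.
def review_non_applicability(rationale_still_valid: bool, state: str) -> str:
    if state != "not_applicable":
        return state  # only documented non-applicability can be reviewed
    return "not_applicable_reviewed" if rationale_still_valid else "open"
```

Note the guard on the first branch: an obligation that was never documented as not applicable has no state to review, which is the code-level version of the point that undocumented non-applicability cannot be re-screened.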

Why spreadsheets fail re-screening

Spreadsheets may show the current status of a system, but they often lose the original decision path. They do not reliably preserve old screening records, identify affected systems when context changes, connect systems to legal-source versions, route re-screening tasks to responsible roles, or create a new approval path. Similarly, AI-generated text inside a spreadsheet does not solve the governance problem. It does not create re-screening workflow, preserve legal version context, route missing information, or generate management visibility. ChatGPT helps you understand. EAB helps you defend how the decision was made.

How EAB structures re-screening

In EAB, re-screening is part of continuous compliance. The AI System Registry preserves the governed system record. AI Screening creates point-in-time screening results. Risk Classification, Actor Role Assessment, Obligation Matrix, Evidence Readiness, and Supervisor Approval create the original decision basis. Legal Source Version Snapshot preserves the legal context used in the review. The Re-Screening Queue identifies systems requiring renewed review when legal, operational, technical, evidence, or system-context changes require attention. Versioned Screening Records preserve the old record and create a new one. Audit-Ready Traceability shows what changed, who reviewed it, which result was produced, and which approval followed.

This is how EAB prevents compliance from becoming frozen at first approval.


For a deeper look at how approval gates make re-screening decisions accountable rather than informal, see Why AI Compliance Needs Approval Gates.

EAB Compliance Platform

Approved once does not mean compliant forever.

EAB preserves the original governance record while creating a new review layer when legal, operational, or system context changes require it — so compliance stays alive, versioned, and reconstructable.

EU-hosted · Anchored to CELEX 32024R1689
