Post-market monitoring under the EU AI Act makes clear that compliance does not end when a high-risk AI system is placed on the market, put into service, or approved internally. Article 72 requires providers to establish a post-market monitoring system that collects, documents, and analyses relevant data on the performance of the system throughout its lifetime. For organizations, the practical message is simple: a governed AI system must remain governed after first approval. Compliance is not frozen at the point of deployment.
Post-market monitoring is continuous governance
Many compliance processes focus on pre-market or pre-deployment work: the system is classified, documentation is prepared, obligations are reviewed, approval is granted, the system goes live. But AI systems can behave differently over time. Users change how they interact with the system. Data shifts. Performance changes. The provider updates the model. Edge cases appear. Incidents reveal weaknesses. Human oversight may prove insufficient. Evidence becomes outdated. Post-market monitoring exists because the system’s real-world performance matters — and governance must follow reality.
Post-market monitoring is primarily a provider duty
Article 72 focuses on providers of high-risk AI systems. Providers must set up a monitoring system proportionate to the nature of the AI technology and the risks of the high-risk AI system. That system must collect, document, and analyse relevant data, including data supplied by deployers where appropriate, in order to evaluate continuous compliance throughout the system's lifetime. But deployers are not irrelevant. Deployers may generate operational information that helps identify performance issues, incidents, misuse, unexpected outcomes, or context changes. Where provider and deployer cooperation is required, the governance chain must be documented.
Post-market monitoring needs a plan
Post-market monitoring should not be informal. It should be based on a plan that forms part of the technical documentation. A strong monitoring plan may address what performance data is monitored, which risks are tracked, which sources of information are used, how user feedback is captured, how deployer information is collected, how incidents or malfunctions are escalated, how serious issues are assessed, who owns review, how corrective action is triggered, how documentation is updated, and how monitoring results feed back into governance. The plan is not merely a document; it is the operating logic for post-market governance.
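The elements a monitoring plan should address can be captured as a structured record rather than free text. The following is a minimal sketch, not Article 72 wording; all field names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class MonitoringPlan:
    """Hypothetical post-market monitoring plan record (illustrative only)."""
    system_id: str
    performance_indicators: list[str]  # what performance data is monitored
    tracked_risks: list[str]           # which risks are tracked
    information_sources: list[str]     # e.g. logs, user feedback, deployer reports
    escalation_owner: str              # who owns review and escalation
    review_interval_days: int = 90     # how often findings are reviewed

    def is_actionable(self) -> bool:
        # A plan is only operable if it names indicators, risks, and an owner.
        return bool(
            self.performance_indicators
            and self.tracked_risks
            and self.escalation_owner
        )
```

Modelling the plan this way makes gaps visible: a plan with no named owner or no tracked risks fails the actionability check before it ever reaches an auditor.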
Monitoring must connect to real-world performance
AI systems may meet requirements during testing, but real-world use can expose new issues: performance degradation, unexpected errors, bias indicators, misuse patterns, user overreliance, inadequate human oversight, input data problems, output quality issues, security concerns, changes in operational context, recurring complaints, incident signals, and provider updates. The point is not only to collect data. The point is to understand whether the system continues to operate within the governed expectations.
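One simple way to turn "continues to operate within governed expectations" into a monitorable signal is to compare a live metric against its validated baseline with a tolerance. A minimal sketch; the 5% tolerance is an illustrative value, not a regulatory threshold:

```python
def performance_within_bounds(baseline: float, observed: float,
                              tolerance: float = 0.05) -> bool:
    """Return True if an observed metric (e.g. accuracy) has not dropped
    more than `tolerance` (relative) below the validated baseline.

    Threshold is illustrative; real bounds come from the risk assessment.
    """
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return (baseline - observed) / baseline <= tolerance
```

A check like this does not replace governance review; it decides when a human review is triggered, which is the point of connecting monitoring to real-world performance.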
Post-market monitoring and serious incidents
Post-market monitoring is closely connected to incident and malfunction awareness. If monitoring reveals serious incidents, malfunctioning, or risks affecting health, safety, or fundamental rights, the organization may need to follow further reporting, corrective action, or governance escalation processes. The record should show what was observed, when, who reviewed it, whether the issue was serious, whether corrective action was required, whether deployer communication was needed, and whether re-screening or approval change was triggered. Without this structure, post-market monitoring can become passive observation.
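The record structure described above (what was observed, when, who reviewed it, whether it was serious, what follow-up was triggered) can be sketched as a data class. Field names here are hypothetical, chosen to mirror the questions in the text:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class MonitoringObservation:
    """Hypothetical record of a single monitoring finding (illustrative)."""
    observed: str                     # what was observed
    observed_on: date                 # when
    reviewer: str                     # who reviewed it
    serious: bool                     # was the issue assessed as serious?
    corrective_action_required: bool  # was corrective action required?
    deployer_notified: bool           # was deployer communication needed?
    rescreening_triggered: bool       # was re-screening or approval change triggered?

    def requires_escalation(self) -> bool:
        # Serious findings, and findings needing corrective action, escalate.
        return self.serious or self.corrective_action_required
```

Keeping each observation in this shape is what prevents monitoring from becoming passive observation: every entry answers the governance questions, not just "something happened".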
Monitoring must feed corrective action
Post-market monitoring has little value if it does not trigger action. If monitoring identifies a risk, the organization must know what happens next. Possible actions include technical investigation, provider notification, deployer notification, evidence update, risk reassessment, human oversight change, instructions update, training update, incident review, conformity review, temporary suspension, corrective action, re-screening, or approval change. The governance record should show whether monitoring findings were acted upon. Monitoring without follow-up is not governance.
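The "monitoring finding must trigger action" principle can be operationalised as an explicit mapping from finding categories to follow-up actions, so no finding falls through silently. The categories and action labels below are hypothetical examples drawn from the list above, not terms defined by the Act:

```python
# Illustrative mapping from finding categories to follow-up actions.
FOLLOW_UP: dict[str, list[str]] = {
    "performance_degradation": ["technical investigation", "risk reassessment"],
    "misuse_pattern": ["instructions update", "deployer notification"],
    "oversight_failure": ["human oversight change", "approval review"],
    "serious_incident": ["incident review", "temporary suspension", "re-screening"],
}


def follow_up_actions(finding: str) -> list[str]:
    """Return the follow-up actions for a finding.

    Unknown categories still get a default action, so nothing is dropped.
    """
    return FOLLOW_UP.get(finding, ["governance review"])
```

The design point is the default branch: monitoring without follow-up is not governance, so even an unclassified finding routes to a review rather than to nothing.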
Post-market monitoring and re-screening
Monitoring findings can trigger re-screening. If real-world performance differs from the original assessment, the system may need renewed review. If the system is used differently than intended, re-screening may be required. If oversight is failing, approval may need review. If new risks appear, the Obligation Matrix may change. If legal guidance changes, the monitoring plan itself may require adjustment. Post-market monitoring and re-screening are therefore connected: monitoring detects or records signals, and re-screening updates the governance decision.
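The trigger conditions above can be expressed as a simple decision function over monitoring signals. The flag names are illustrative labels for the situations the text describes:

```python
def rescreening_required(signals: dict[str, bool]) -> bool:
    """Decide whether a system should enter renewed review.

    Signal names are hypothetical flags a monitoring process might set;
    any one of them is sufficient to trigger re-screening.
    """
    triggers = (
        "performance_diverges",    # real-world performance differs from assessment
        "use_changed",             # system used differently than intended
        "oversight_failing",       # human oversight is failing
        "new_risks",               # new risks have appeared
        "legal_guidance_changed",  # legal guidance has changed
    )
    return any(signals.get(t, False) for t in triggers)
```

Using `any` rather than a scored threshold reflects the text's logic: each condition on its own is enough to warrant renewed review.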
Monitoring and technical documentation
Post-market monitoring must remain connected to technical documentation. If monitoring reveals that the system behaves differently than documented, the technical documentation may need to be updated. If the system changes, documentation must reflect the change. If new risks are identified, risk management and evidence records may need updating. If provider instructions change, deployer governance may need updating. Technical documentation should not remain frozen while the system evolves; monitoring keeps documentation aligned with operational reality.
Monitoring requires evidence readiness
Post-market monitoring produces evidence that must be structured. The organization should know which monitoring data exists, which performance indicators are tracked, which incidents were recorded, which feedback was received, which issues were reviewed, which corrective actions occurred, which provider or deployer communications exist, which evidence is missing, which follow-up is overdue, and which monitoring plan version applies. This is evidence readiness for post-market governance.
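Evidence readiness lends itself to an explicit state model. The states below mirror the categories this article names (complete, partial, missing, unclear, external, outdated); the function is a hypothetical sketch of how gaps could be surfaced:

```python
from enum import Enum


class EvidenceState(Enum):
    """Illustrative evidence states, mirroring the categories in the text."""
    COMPLETE = "complete"
    PARTIAL = "partial"
    MISSING = "missing"
    UNCLEAR = "unclear"
    EXTERNAL = "external"
    OUTDATED = "outdated"


def readiness_gaps(evidence: dict[str, EvidenceState]) -> list[str]:
    """Return the evidence items that are not yet audit-ready.

    Here only COMPLETE counts as ready; a real policy might treat
    EXTERNAL evidence differently. Sorted for a stable report order.
    """
    return sorted(
        item for item, state in evidence.items()
        if state is not EvidenceState.COMPLETE
    )
```

A gap report like this answers the questions in the paragraph above directly: which evidence exists, which is missing, and which follow-up is outstanding.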
Spreadsheets are weak for post-market monitoring. A spreadsheet can list monitoring tasks or incident notes, but it often fails to connect monitoring findings to the AI system record, technical documentation, provider obligations, deployer information, evidence state, corrective action, re-screening, and approval history. Post-market monitoring is timeline-based and requires lifecycle visibility. A spreadsheet usually shows snapshots. It does not reliably govern continuous monitoring or maintain the connection between what was observed and what governance action followed.
How EAB structures post-market monitoring governance
In EAB, post-market monitoring is connected to continuous compliance. The AI System Registry preserves the governed system record. Technical Documentation Readiness shows documentation state. Evidence Readiness shows whether monitoring evidence is complete, partial, missing, unclear, external, or outdated. The Re-Screening Queue supports renewed review when monitoring findings, system changes, legal changes, or evidence changes require attention. Audit-Ready Traceability preserves monitoring-related changes, review actions, approval updates, and decision history. This allows organizations to treat post-market monitoring as part of the system lifecycle rather than an isolated provider document.