Many organizations assume that EU AI Act compliance is mainly a provider problem: the provider built the system and supplies the model, so the provider handles compliance. That assumption is incomplete. Deployers have obligations of their own. A deployer is, broadly, an organization that uses an AI system under its authority, except where the use is purely personal and non-professional. For enterprises, regulated organizations, SMEs, and professional service providers, this matters because many AI systems are not developed internally but are still used operationally. The deployer may not build the AI system, but it still governs its use.
Deployers must know which AI systems they use
The first practical obligation is visibility. An organization cannot govern what it cannot see. Deployers need an AI System Registry that shows which AI systems are used, by whom, for which purpose, in which business process, under which legal entity, and with which provider or vendor. Without this, deployer obligations become theoretical. Shadow AI creates the biggest weakness — if departments use AI tools without central registration, the organization cannot classify risk, assign ownership, connect GDPR records, review vendor evidence, document human oversight, or approve use. The deployer’s first task is to make AI use visible.
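As a concrete illustration, the sketch below models one registry entry as a small data structure. It is a minimal sketch assuming an in-memory list; the field names mirror the questions above, and the example system, entity, and vendor names are invented.

```python
# Minimal sketch of an AI System Registry entry (illustrative, not a
# prescribed schema). Central registration is what makes shadow AI visible.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str              # which AI system is used
    owner: str             # by whom
    purpose: str           # for which purpose
    business_process: str  # in which business process
    legal_entity: str      # under which legal entity
    provider: str          # with which provider or vendor

registry: list[AISystemRecord] = []

def register(record: AISystemRecord) -> None:
    """A department tool only becomes governable once it is registered."""
    registry.append(record)

# Example entry: a hypothetical HR tool that would otherwise stay invisible.
register(AISystemRecord(
    name="cv-screening-tool",
    owner="HR department",
    purpose="pre-rank incoming job applications",
    business_process="recruiting",
    legal_entity="ExampleCo GmbH",
    provider="Acme AI Ltd.",
))
```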
Deployers must use high-risk systems according to instructions
For high-risk AI systems, deployers must take appropriate technical and organizational measures to ensure they use the system in accordance with the instructions for use. This means deployers need access to those instructions, must understand and operationalize them, and must document how the system is used. A deployer should be able to show which provider instructions apply, who reviewed them, which internal process implements them, which users are allowed to operate the system, which limitations are relevant, which input data requirements exist, which human oversight measures are required, and which monitoring or logging practices apply. If the organization cannot show how it follows provider instructions, deployer governance is weak.
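One way to operationalize this, sketched below, is to map each instruction topic to the internal control that implements it and to refuse a review record that leaves topics unmapped. The topic list comes from the paragraph above; the review structure itself is an assumption, not a mandated format.

```python
# Hedged sketch: record which internal process implements each provider
# instruction, so gaps surface before the system is used.
from datetime import date

INSTRUCTION_TOPICS = [
    "permitted users",
    "known limitations",
    "input data requirements",
    "human oversight measures",
    "monitoring and logging practices",
]

def instruction_review(topic_to_control: dict[str, str], reviewer: str) -> dict:
    """Build a review record; raise if any instruction topic is unmapped."""
    missing = [t for t in INSTRUCTION_TOPICS if not topic_to_control.get(t)]
    if missing:
        raise ValueError(f"unimplemented instruction topics: {missing}")
    return {
        "reviewer": reviewer,
        "reviewed_on": date.today().isoformat(),
        "controls": topic_to_control,
    }
```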
Deployers must assign human oversight
Deployers of high-risk AI systems must assign human oversight to natural persons with the necessary competence, training, and authority. This is one of the most important operational duties. A deployer cannot simply rely on the provider saying that the system supports oversight. The deployer must define who performs oversight in its own organization: who reviews outputs, who can intervene, who can override or disregard results, who escalates concerns, who is trained, who documents oversight actions, and who approves use. Human oversight is not a generic policy — it is an assigned control.
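A hedged sketch of oversight as an assigned control follows. The role fields and the training flag are illustrative assumptions about how an organization might record the assignment.

```python
# Minimal sketch: human oversight as a named, assigned control rather than
# a generic policy. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OversightAssignment:
    system: str
    output_reviewer: str     # who reviews outputs
    intervener: str          # who can intervene, override, or disregard results
    escalation_contact: str  # who receives escalated concerns
    trained: bool            # competence and training evidence exists

def check_assignment(a: OversightAssignment) -> None:
    """Oversight assigned to persons without training evidence is a gap."""
    if not a.trained:
        raise ValueError(f"{a.system}: assigned persons lack training evidence")
```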
Deployers must ensure input data quality where they control input data
Where a deployer exercises control over input data, it must ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system. A provider may supply the system, but the deployer may control the data entered into it. If the deployer supplies poor, incomplete, biased, outdated, or unsuitable input data, the system output may become unreliable or harmful. The deployer should document which input data is used, who controls it, whether personal data is involved, whether the data is relevant, whether data quality risks exist, and which role is responsible.
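The sketch below illustrates one possible pre-use input data check. The notion of representativeness used here (coverage of expected categories) and the finding format are simplifying assumptions for illustration only.

```python
# Hedged sketch of a pre-use input data check producing findings for
# governance review; the checks are deliberate simplifications.
def input_data_findings(records: list[dict], required_fields: set[str],
                        group_field: str, expected_groups: set[str]) -> list[str]:
    """Return data quality findings for the assigned responsible role."""
    findings = []
    # Completeness: records missing required fields make outputs unreliable.
    incomplete = sum(1 for r in records if not required_fields <= r.keys())
    if incomplete:
        findings.append(f"{incomplete} records are missing required fields")
    # Representativeness proxy: expected categories absent from the data.
    observed = {r.get(group_field) for r in records}
    uncovered = expected_groups - observed
    if uncovered:
        findings.append(f"groups absent from the input data: {sorted(uncovered)}")
    return findings
```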
Deployers must monitor operation
Deployers need to monitor the operation of high-risk AI systems based on the instructions for use. If they have reason to consider that use in accordance with those instructions may result in a risk to health, safety, or fundamental rights, they may need to inform the provider or distributor, and where applicable the relevant market surveillance authority, and suspend use. Deployers should not treat AI system use as passive consumption. They need to capture operational concerns, performance problems, unexpected behavior, incidents, user complaints, or oversight failures. These signals should feed governance review.
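A minimal sketch of capturing such signals might look like the following. The signal categories come from the paragraph above; the escalation rule is an illustrative assumption, not the Act's test.

```python
# Hedged sketch: capture operational signals so they feed governance review.
from datetime import datetime, timezone

SIGNAL_TYPES = {"performance problem", "unexpected behavior", "incident",
                "user complaint", "oversight failure"}

monitoring_log: list[dict] = []

def record_signal(system: str, signal_type: str, details: str) -> dict:
    """Log one operational signal against a registered AI system."""
    if signal_type not in SIGNAL_TYPES:
        raise ValueError(f"unknown signal type: {signal_type}")
    entry = {
        "system": system,
        "type": signal_type,
        "details": details,
        "at": datetime.now(timezone.utc).isoformat(),
        # Assumed rule: these types trigger a review of whether the provider
        # or distributor must be informed and use suspended.
        "escalate": signal_type in {"incident", "oversight failure"},
    }
    monitoring_log.append(entry)
    return entry
```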
Deployers may need to keep logs
Where logs are automatically generated by a high-risk AI system and are under the deployer’s control, the deployer must keep them for a period appropriate to the system’s intended purpose, and at least six months unless applicable Union or national law provides otherwise. Deployers must understand whether logs exist, where they are stored, who controls them, how long they are retained, and how they support audit or incident review. A deployer that cannot access or preserve relevant logs may be unable to reconstruct use. Logging is therefore part of deployer evidence readiness.
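As an illustration, the sketch below checks a log inventory against the six-month floor. The field names and the day-count approximation are assumptions about how an organization might record its log sources.

```python
# Hedged sketch: a log inventory checked against a retention floor.
from dataclasses import dataclass

MIN_RETENTION_DAYS = 183  # roughly six months, the minimum under the AI Act

@dataclass
class LogSource:
    system: str
    location: str        # where the logs are stored
    controller: str      # who controls them
    retention_days: int  # how long they are retained

def retention_findings(sources: list[LogSource]) -> list[str]:
    """Flag log sources retained for less than the assumed minimum period."""
    return [
        f"{s.system}: retention of {s.retention_days} days is below "
        f"the {MIN_RETENTION_DAYS}-day floor"
        for s in sources
        if s.retention_days < MIN_RETENTION_DAYS
    ]
```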
Deployers must inform affected persons in certain contexts
Deployer transparency obligations may arise where high-risk AI systems are used to make or assist decisions about natural persons in certain contexts. Organizations should assess whether affected persons need to be informed about AI use, especially where the system supports decisions that affect them. A deployer should know who is affected, whether AI supports a decision, whether information must be provided, what information is given, and when it is given. Transparency is not only a provider duty — in real operations, deployers often control the interaction with affected persons.
Deployers must connect AI governance with GDPR
If the AI system processes personal data, deployers must not treat AI Act governance separately from GDPR governance. They may need to connect the AI system to records of processing activities, DPIA or DSFA logic, TOM profiles (technical and organizational measures), vendor governance, data subject rights, lawful basis, retention, and processor documentation. This is especially important in HR, customer analytics, healthcare, education, credit support, fraud detection, and monitoring contexts. For more on this intersection, see GDPR and the EU AI Act: Where Both Frameworks Apply.
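A minimal sketch of such a linkage record follows. Identifiers like ropa_id and dpia_id are illustrative assumptions about how a records-of-processing system might key its entries.

```python
# Hedged sketch: link an AI system record to existing GDPR records so that
# missing links surface as findings. Field names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GDPRLink:
    ai_system: str
    ropa_id: Optional[str]             # record of processing activities entry
    dpia_id: Optional[str]             # DPIA / DSFA reference, where required
    lawful_basis: Optional[str]        # e.g. an Article 6(1) GDPR basis
    processor_contract: Optional[str]  # Article 28 documentation reference

def gdpr_gaps(link: GDPRLink) -> list[str]:
    """Name the GDPR records not yet connected to this AI system."""
    return [name for name, value in vars(link).items()
            if name != "ai_system" and value is None]
```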
Deployers must create internal approval gates
Deployers need approval gates because AI systems enter business processes quickly. Before a system is used, the organization should be able to show that the system is registered, the intended use is documented, the provider is known, risk classification was performed, actor role was assessed, obligations were mapped, evidence is visible, human oversight is assigned, GDPR relevance was checked, provider instructions were reviewed, and supervisor approval occurred. This approval path is not a bottleneck — it is the point where accountability becomes visible.
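A hedged sketch of such a gate is shown below. The gate names mirror the list above; the boolean-checklist model is a deliberate simplification of a real approval workflow.

```python
# Minimal sketch of an approval gate: use is approved only when every
# precondition is satisfied, and open gaps are named for the record.
APPROVAL_GATES = [
    "system_registered", "intended_use_documented", "provider_known",
    "risk_classified", "actor_role_assessed", "obligations_mapped",
    "evidence_visible", "oversight_assigned", "gdpr_checked",
    "instructions_reviewed", "supervisor_approved",
]

def may_go_live(state: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (approved, open_gates) for the approval record."""
    open_gates = [g for g in APPROVAL_GATES if not state.get(g, False)]
    return (not open_gates, open_gates)

approved, gaps = may_go_live({"system_registered": True, "provider_known": True})
# approved is False; gaps names every unmet precondition before use.
```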
Deployers must maintain evidence
Deployer evidence may include provider instructions, system use description, business purpose, risk classification, actor role, human oversight assignment, input data assessment, logs where controlled, training or AI literacy evidence, transparency notices, vendor documentation, DPIA or DSFA records where relevant, approval record, incident or monitoring notes, and re-screening history. The organization must know which evidence exists and which gaps remain. A deployer without evidence readiness cannot demonstrate responsible use.
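To illustrate, the sketch below reduces evidence readiness to a present-versus-missing report. The item names come from the paragraph above; the flat-list model is an assumption made for brevity.

```python
# Hedged sketch: an evidence readiness report showing which items exist
# and which gaps remain for a given AI system.
EVIDENCE_ITEMS = [
    "provider instructions", "system use description", "risk classification",
    "actor role", "human oversight assignment", "input data assessment",
    "logs", "AI literacy evidence", "transparency notices",
    "vendor documentation", "DPIA or DSFA record", "approval record",
    "incident or monitoring notes", "re-screening history",
]

def readiness_report(available: set[str]) -> dict[str, list[str]]:
    """Split the expected evidence items into present and missing."""
    return {
        "present": [i for i in EVIDENCE_ITEMS if i in available],
        "missing": [i for i in EVIDENCE_ITEMS if i not in available],
    }
```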
Spreadsheets can list systems and owners. They cannot reliably govern deployer obligations. They do not enforce provider instruction review, assign human oversight, connect input data governance, preserve logs or evidence state, create approval gates, connect GDPR records, manage re-screening, or create audit-ready traceability. Deployers need more than an AI inventory. They need operational governance that routes tasks, preserves decisions, and makes accountability visible.
How EAB structures deployer governance
In EAB, deployer governance is built into the AI system workflow. The AI System Registry creates visibility. Business Intake documents purpose and use. Guided Technical Completion captures provider, data, deployment, and operational details. AI Screening identifies risk signals and classification context. Actor Role Assessment determines whether the organization is deployer or holds another role. The Obligation Matrix translates deployer obligations into concrete action areas. Evidence Readiness shows whether provider instructions, oversight evidence, input data information, GDPR records, AI literacy evidence, and approval records exist. Supervisor Approval creates accountable decision gates. The Re-Screening Queue supports continued review. Audit-Ready Traceability preserves the full decision path.