General-purpose AI models sit behind many everyday AI systems: copilots, chatbots, model APIs, embedded AI features, document analysis tools, coding assistants, and domain-specific applications. Most organizations do not provide the underlying model; they use systems built on it. But that distinction does not remove the need for governance. The EU AI Act creates specific obligations for providers of general-purpose AI models, and additional obligations for providers of models classified as having systemic risk. Organizations that deploy tools based on such models may carry different obligations, but they still need governance visibility over how these tools are used.
GPAI is not the same as every AI system
A general-purpose AI model is a model capable of serving a wide range of purposes and being integrated into a variety of downstream systems. The actor roles around such a model differ sharply. A company developing and placing its own general-purpose model on the market may carry GPAI provider obligations. A software company embedding a model API into its product must assess whether it is a provider of an AI system, a deployer, or in another actor role, depending on control, modification, branding, intended purpose, and market behavior. An enterprise using a commercial AI assistant may simply be deploying a tool based on a GPAI model. This is why actor-role assessment is essential; a first-pass triage is sketched below.
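To make that triage concrete, here is a minimal Python sketch of a first-pass actor-role check. The factor names and the decision order are assumptions made for illustration, not the Act's legal test, and any real assessment belongs with counsel.

```python
from dataclasses import dataclass
from enum import Enum


class ActorRole(Enum):
    GPAI_PROVIDER = "GPAI model provider"
    SYSTEM_PROVIDER = "AI system provider"
    DEPLOYER = "deployer"
    UNCLEAR = "needs legal review"


@dataclass
class ActorContext:
    develops_own_model: bool        # trains and places a GPAI model on the market
    modifies_model: bool            # substantially modifies or fine-tunes the model
    own_branding: bool              # offers the system under its own name
    defines_intended_purpose: bool  # controls what the system is used for
    only_uses_tool: bool            # merely uses a third-party tool internally


def assess_actor_role(ctx: ActorContext) -> ActorRole:
    """First-pass triage only; not a legal determination."""
    if ctx.develops_own_model:
        return ActorRole.GPAI_PROVIDER
    if ctx.own_branding and ctx.defines_intended_purpose:
        return ActorRole.SYSTEM_PROVIDER
    if ctx.only_uses_tool and not ctx.modifies_model:
        return ActorRole.DEPLOYER
    return ActorRole.UNCLEAR  # mixed signals: escalate for review
```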
Using GPAI-powered tools still requires governance
Many organizations assume that using a well-known AI tool is safe because the provider is large and legally sophisticated. That assumption is not governance. A company using ChatGPT, Copilot, Gemini, Claude, Mistral-based tools, model APIs, or embedded AI features still needs to understand how the tool is used internally. The risk is not only the model — it is the use case. A GPAI-based tool used for internal drafting creates a different governance profile than the same tool used for recruitment screening, customer eligibility, legal review, healthcare support, or regulated operational decisions. The organization must govern the use case, not only the vendor name.
GPAI creates vendor dependency
GPAI use often creates strong vendor dependency: model provider, cloud provider, SaaS vendor, API provider, or enterprise platform vendor. This dependency affects data protection, confidentiality, technical documentation, instructions for use, model updates, logging, retention, security, output reliability, contractual safeguards, incident response, copyright and content risk, and vendor evidence. A GPAI-powered AI system should therefore be connected to Vendor Governance and Evidence Readiness.
Provider documentation matters for downstream governance
Providers of GPAI models may need to supply documentation enabling downstream providers or deployers to understand capabilities and limitations. For deployer organizations, provider documentation becomes part of evidence readiness. The organization should know what documentation is available, what the model is intended for, what limitations are stated, what usage restrictions apply, what changes or model updates may affect use, and what contractual safeguards exist. If the organization cannot access relevant provider documentation, that is a governance gap.
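One way to keep such gaps visible is to track provider documentation as a structured evidence record whose empty fields surface as follow-ups. The sketch below is hypothetical; the field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field


@dataclass
class ProviderDocumentation:
    """Hypothetical evidence record for one GPAI-powered tool."""
    model_name: str
    intended_uses: list[str] = field(default_factory=list)
    stated_limitations: list[str] = field(default_factory=list)
    usage_restrictions: list[str] = field(default_factory=list)
    update_policy: str | None = None   # how model changes are announced
    contractual_safeguards: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Each missing field is a governance gap to follow up on."""
        missing = []
        if not self.intended_uses:
            missing.append("intended uses")
        if not self.stated_limitations:
            missing.append("stated limitations")
        if not self.usage_restrictions:
            missing.append("usage restrictions")
        if self.update_policy is None:
            missing.append("update policy")
        if not self.contractual_safeguards:
            missing.append("contractual safeguards")
        return missing
```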
GPAI and AI Act risk classification
A GPAI model itself is not automatically high-risk merely because it is powerful or general. Risk classification often depends on the downstream AI system and its intended use. A GPAI-based tool used in a high-risk context — recruitment, education, credit, essential services, healthcare, or other Annex III-related contexts — may require structured assessment. The organization should not assume that “general-purpose” means “low-risk.” It should ask: how is the model used, what output does it produce, who is affected, does it influence decisions, and which obligations follow?
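Those questions can be turned into a simple screening routine. The sketch below is illustrative only: the context set is a shortened stand-in for Annex III analysis, not a substitute for it.

```python
from dataclasses import dataclass

# Contexts the Act's Annex III associates with high risk; this short set
# is an illustrative stand-in, not an exhaustive legal list.
HIGH_RISK_CONTEXTS = {
    "recruitment", "education", "credit", "essential_services", "healthcare",
}


@dataclass
class UseCase:
    context: str                # e.g. "recruitment" or "internal_drafting"
    affects_people: bool        # does the output concern natural persons?
    influences_decisions: bool  # does it feed decisions about those persons?


def needs_structured_assessment(uc: UseCase) -> bool:
    """Flag use cases that may sit in a high-risk context, regardless of
    how 'general-purpose' the underlying model is."""
    return uc.context in HIGH_RISK_CONTEXTS or (
        uc.affects_people and uc.influences_decisions
    )
```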
GPAI and shadow AI
General-purpose AI tools often enter organizations through shadow use. Employees use public tools, departments activate AI features inside existing platforms, vendors introduce AI functions without central review, teams experiment with model APIs, and business units treat AI assistants as productivity tools rather than governed systems. This creates visibility problems. The organization may not know which GPAI-powered systems are used, where data goes, who is affected, whether high-risk contexts exist, or whether vendor terms are acceptable. A strong AI System Registry should capture GPAI-powered tools before they become uncontrolled infrastructure.
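A registry entry for a GPAI-powered tool might capture exactly the fields this visibility problem concerns. A minimal sketch, with hypothetical field names rather than any particular registry schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class ApprovalStatus(Enum):
    UNREVIEWED = "unreviewed"   # discovered shadow use, not yet assessed
    IN_REVIEW = "in review"
    APPROVED = "approved"
    RESTRICTED = "restricted"


@dataclass
class RegistryEntry:
    tool: str                   # e.g. an assistant or an embedded AI feature
    provider: str
    business_unit: str
    use_case: str
    data_destinations: list[str] = field(default_factory=list)  # where inputs go
    affected_persons: list[str] = field(default_factory=list)   # employees, customers, ...
    vendor_terms_reviewed: bool = False
    status: ApprovalStatus = ApprovalStatus.UNREVIEWED
```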
GPAI and data protection
GPAI tools often process input data. Employees may enter prompts containing personal data, confidential data, customer data, employee data, legal data, financial data, source code, trade secrets, or operational information. This creates questions around the GDPR, confidentiality, vendors, and technical and organizational measures (TOMs). The organization should document whether personal data is entered, which data categories may be processed, whether the provider acts as a processor under the GDPR, whether enterprise data is used for training, which retention settings apply, which TOMs and contractual safeguards exist, and whether DPIA logic is relevant. GPAI governance must therefore connect to GDPR governance where personal data is involved.
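As a rough illustration of how these documented facts connect to DPIA logic, consider the heuristic below. It is deliberately simplistic: whether a DPIA is actually required under Article 35 GDPR is a legal analysis, not a boolean check, and the parameter names are assumptions.

```python
def dpia_worth_considering(
    personal_data_entered: bool,
    special_categories: bool,              # e.g. health or biometric data
    provider_trains_on_inputs: bool,       # enterprise data used for training
    influences_decisions_about_people: bool,
) -> bool:
    """Illustrative heuristic only; it flags cases worth escalating,
    it does not decide whether a DPIA is legally required."""
    if not personal_data_entered:
        return False
    return (
        special_categories
        or provider_trains_on_inputs
        or influences_decisions_about_people
    )
```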
GPAI and downstream product risk
Organizations building products on top of GPAI models deserve special attention. If a company embeds a general-purpose model into its own SaaS product, workflow, platform, or customer-facing service, the actor-role analysis becomes more complex. The organization may not be the GPAI model provider, but it may become the provider of an AI system. If it modifies, configures, packages, or offers the system under its own name, obligations may arise depending on the product and use case. The governance record should show which model provider is used, which downstream system is created, who controls the intended purpose, and which documentation is received and provided.
GPAI and AI literacy
GPAI tools are widely available and easy to misuse. This makes AI literacy especially important. Users should understand that AI output may be wrong, biased, incomplete, or hallucinated; that confidential information may require protection; that personal data input may be restricted; that AI-generated content may need review; and that sensitive use cases require governance approval. Supervisors and system owners need deeper literacy. AI literacy evidence becomes part of responsible GPAI deployment.
Why a spreadsheet is not enough
Spreadsheets can list AI tools. A spreadsheet may show that a department uses ChatGPT or Copilot, but it may not show the actual use case, data categories, vendor evidence, model dependency, provider documentation, approval status, transparency requirements, AI literacy evidence, or whether the tool affects decisions about people. It also struggles to connect GPAI use to GDPR, vendor governance, evidence readiness, and re-screening. A list of tools is not governance. GPAI requires system, vendor, use-case, evidence, and approval visibility.
How EAB structures GPAI governance
In EAB, GPAI-powered tools are treated as governed AI systems or dependencies where relevant. The AI System Registry captures the tool, provider, purpose, owner, legal entity, and usage context. Business Intake documents how the tool is used. Guided Technical Completion captures provider, model, data, deployment, and vendor context. AI Screening evaluates whether the use case creates prohibited-practice relevance, high-risk indicators, transparency obligations, or other governance concerns. Actor Role Assessment distinguishes provider, deployer, importer, distributor, downstream system provider, or other relevant contexts. The Obligation Matrix translates the result into action. Evidence Readiness captures provider documentation, vendor evidence, TOMs, GDPR links, AI literacy evidence, and approval status. Supervisor Approval ensures the tool does not move forward blindly. Audit-Ready Traceability preserves the decision path.
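Read as a pipeline, this sequence can be modeled as ordered stages that a system moves through without skipping. The stage names below mirror the paragraph above; the enum and helper are an illustrative sketch, not EAB's actual data model.

```python
from enum import Enum, auto


class GovernanceStage(Enum):
    REGISTRY = auto()               # AI System Registry
    BUSINESS_INTAKE = auto()
    TECHNICAL_COMPLETION = auto()   # Guided Technical Completion
    AI_SCREENING = auto()
    ACTOR_ROLE_ASSESSMENT = auto()
    OBLIGATION_MATRIX = auto()
    EVIDENCE_READINESS = auto()
    SUPERVISOR_APPROVAL = auto()
    TRACEABILITY = auto()           # Audit-Ready Traceability


def next_stage(current: GovernanceStage) -> GovernanceStage | None:
    """Advance to the next stage in order, or None once the path is complete."""
    stages = list(GovernanceStage)
    idx = stages.index(current)
    return stages[idx + 1] if idx + 1 < len(stages) else None
```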