If an organization leaves an obligation unanswered, skips a field, or assumes a requirement does not apply without documenting why, it creates an audit weakness. A missing answer is not the same as a reviewed decision. A silent omission is not governance. For AI compliance, "not applicable" must be actively documented — with reasoning, evidence, and attributable review.
Not applicable is a governance decision
In weak compliance processes, non-applicability is treated casually. A reviewer sees an obligation and decides it does not matter. A field is left blank. A spreadsheet cell is marked "N/A." A requirement is skipped. The process moves on. But in audit-ready governance, "not applicable" is not a shortcut. It is a decision — and like any governance decision, it must be reviewable.
The organization should be able to show what was considered, why the obligation was judged not applicable, who made or reviewed that determination, and which evidence supported the conclusion.
An empty field can mean many things: the obligation was not reviewed, the reviewer did not know the answer, the evidence is missing, the requirement was assumed irrelevant, or the system owner skipped the question. These are very different governance states. If the organization does not distinguish them, it creates ambiguity that becomes a problem at audit. A documented non-applicability rationale can defend the organization. A blank field cannot.
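The elements of a reviewable non-applicability decision described above can be sketched as a simple record. This is an illustrative model only: the class and field names (NADecision, obligation_id, evidence_refs, and so on) are assumptions for the sketch, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class NADecision:
    """Hypothetical record of a documented non-applicability decision."""
    obligation_id: str   # which obligation was considered
    system_id: str       # the specific AI system and use case
    rationale: str       # why the obligation was judged not applicable
    reviewer: str        # who made or reviewed the determination
    review_date: date    # when the decision was taken
    evidence_refs: list = field(default_factory=list)  # supporting evidence

    def is_audit_ready(self) -> bool:
        # A blank field is not a reviewed decision: every element must be present.
        return all([self.obligation_id, self.system_id, self.rationale,
                    self.reviewer, self.review_date])

# A fully documented decision can defend the organization; a blank one cannot.
documented = NADecision("transparency-to-natural-persons", "internal-summary-tool",
                        "System does not interact directly with natural persons",
                        "j.doe", date(2025, 3, 1), ["architecture-doc-v2"])
blank = NADecision("transparency-to-natural-persons", "internal-summary-tool",
                   "", "", date(2025, 3, 1))
```

The point of the sketch is the validation step: an "N/A" entry without rationale and reviewer attribution is indistinguishable from an unreviewed field, which is exactly the ambiguity the text warns against.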
Not applicable is different from missing
"Missing" means the organization needs information or evidence. "Not applicable" means the obligation was reviewed and judged not relevant to the specific system context. These states must not be conflated.
If human oversight information is missing, that is not "not applicable." If technical documentation has not been obtained from a provider, that is not "not applicable." If the actor role is unclear, obligation mapping cannot simply mark provider obligations as not applicable. If the organization does not know whether biometric functionality exists, it cannot mark biometric-related issues as not applicable without basis. Not applicable requires enough information to support the conclusion. Where information is absent, the correct status is missing or unclear.
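The routing logic above can be made explicit in a small sketch. The status names and the function are illustrative assumptions, not a reference to any specific tool's API; the point is that "not applicable" is only reachable when the review actually happened and the supporting information exists.

```python
from enum import Enum

class ObligationStatus(Enum):
    """Illustrative governance states; names are assumptions for this sketch."""
    MISSING = "missing"                 # information or evidence still needed
    UNCLEAR = "unclear"                 # e.g. actor role or functionality unknown
    NOT_APPLICABLE = "not_applicable"   # reviewed and judged not relevant

def status_for_proposed_na(reviewed: bool, info_complete: bool) -> ObligationStatus:
    """Return the correct status for an obligation a team wants to mark N/A."""
    if not reviewed:
        return ObligationStatus.MISSING   # never reviewed: not a decision at all
    if not info_complete:
        return ObligationStatus.UNCLEAR   # e.g. biometric functionality unknown
    return ObligationStatus.NOT_APPLICABLE  # only here is N/A supportable
```

Under this rule, an obligation with unobtained provider documentation or an unresolved actor role can never be recorded as not applicable, matching the distinction the text draws.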
Not applicable is different from not high-risk
A system that is not high-risk may have fewer obligations, but that does not automatically make every governance question irrelevant. Limited-risk transparency obligations may still apply. Internal accountability may still matter. GDPR obligations may apply where personal data is processed. AI literacy evidence may be relevant for employees operating or overseeing AI systems. Vendor governance may still matter.
"Not high-risk" is not a blanket substitute for documented non-applicability. The organization must document which obligations do not apply and why — even for systems with a low regulatory burden.
Non-applicability must be system-specific
Generic statements are weak. "This obligation does not apply to our company" is usually not enough. "The vendor is responsible" is not enough. "This is only internal use" is not enough. A strong non-applicability decision must be linked to a specific AI system and use case — explaining why the obligation does not apply in that context.
For example: the system does not involve biometric identification; the system is not used in an Annex III context; the organization acts as deployer, not provider, for this use case; the obligation is provider-side and covered by upstream documentation; the system does not interact directly with natural persons; the system does not process personal data. Each statement should be tied to evidence or documented reasoning.
Non-applicability depends on actor role
Actor role is one of the main reasons why non-applicability must be documented carefully. An obligation may apply to a provider but not to a deployer. A deployer may still have its own obligations even where provider-side obligations do not apply. If actor role is unclear, non-applicability conclusions become unreliable.
An organization may assume it is only a deployer. But if it substantially modifies a system, rebrands it, places it on the market, or integrates it into its own product, the role analysis may change — which could change which obligations apply. This is why "not applicable" must be connected to Actor Role Assessment.
Non-applicability depends on use case
The same AI tool may have different governance implications depending on use. An AI assistant used to draft internal meeting summaries may not trigger the same obligations as the same tool used to screen job applicants. A chatbot used for internal knowledge search may not create the same transparency requirements as a public-facing system that interacts with customers. The tool name does not decide applicability. The use case does. Non-applicability must therefore be documented at system and use-case level.
Undocumented non-applicability creates audit risk
An auditor may ask not only which obligations were fulfilled, but also why certain obligations were not addressed. If the organization cannot answer, the record appears incomplete. The problem is not that the obligation necessarily applied — it is that the organization cannot show why it did not. This creates several weaknesses: the organization may appear to have skipped the review; internal teams may need to reconstruct the rationale manually; decision-makers may rely on memory; evidence may no longer be available; and the system context may have changed since the review.
Not applicable should not hide risk acceptance
Sometimes organizations use "not applicable" to hide unresolved risk. This is dangerous. If an obligation applies but the organization decides to proceed despite missing evidence, that is not non-applicability. It may be a risk acceptance decision, an exception, or an override. These require supervisor approval and documented justification. Non-applicability means the obligation does not apply to the system context. Risk acceptance means the obligation or gap is relevant, but the organization accepts a defined risk under documented conditions. Mixing these two statuses undermines audit readiness.
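The separation between non-applicability and risk acceptance can be expressed as a routing sketch. The function and status strings are hypothetical, not a defined workflow in any particular tool; the key property is that "not applicable" is unreachable when the obligation actually applies.

```python
def resolve_status(obligation_applies: bool, evidence_complete: bool,
                   supervisor_approved: bool = False) -> str:
    """Hypothetical routing: 'not applicable' must never absorb accepted risk."""
    if not obligation_applies:
        return "not_applicable"   # obligation is irrelevant to the system context
    if evidence_complete:
        return "fulfilled"
    # The obligation applies but evidence is missing: this is a risk decision,
    # not non-applicability, and it requires documented supervisor approval.
    return "risk_accepted" if supervisor_approved else "open_gap"
```

Collapsing the last branch into "not_applicable" is precisely the status-mixing the text describes as undermining audit readiness.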
Not applicable should not hide external dependency
An organization may rely on a vendor for certain evidence or obligations. That does not automatically make the obligation not applicable. It may mean the evidence is externally covered, the organization needs provider documentation, or the obligation belongs primarily to another actor. The dependency should be visible and documented as such — not collapsed into "not applicable" without explanation. This distinction is particularly important for deployers using third-party AI systems, where vendor responsibility does not eliminate organizational responsibility.
Non-applicability must be reviewed when context changes
A non-applicability decision may not remain valid forever. An obligation that did not apply at first review may become relevant when the system use case changes, personal data is introduced, the actor role changes, a vendor adds functionality, biometric functionality is added, or the system enters an Annex III context. This is why non-applicability must be connected to re-screening. The organization should be able to show that non-applicability was valid at the time of review and that later changes can trigger renewed assessment. Approved once does not mean not applicable forever.
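The context changes listed above can be modeled as re-screening triggers. The trigger names here are illustrative labels derived from the paragraph, not a standardized taxonomy.

```python
# Illustrative context changes that should invalidate a prior
# "not applicable" decision and force a renewed assessment.
RESCREEN_TRIGGERS = {
    "use_case_changed",
    "personal_data_introduced",
    "actor_role_changed",
    "vendor_added_functionality",
    "biometric_functionality_added",
    "annex_iii_context_entered",
}

def needs_rescreening(changes: set) -> bool:
    """A non-applicability decision is valid only for the reviewed context."""
    return bool(changes & RESCREEN_TRIGGERS)
```

A change that matches no trigger leaves the decision standing, but any matching change reopens the assessment: approved once does not mean not applicable forever.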
How EAB structures non-applicability documentation
In EAB, documented non-applicability is part of the governance record — not an empty space. The Obligation Matrix shows which obligation areas are applicable, unclear, missing, partial, fulfilled, externally covered, or not applicable. Evidence Readiness connects non-applicability to rationale and supporting information. Actor Role Assessment helps determine whether obligations apply based on the organization's role. Supervisor Approval can review the decision basis where required. Audit-Ready Traceability preserves the non-applicability decision, field changes, reviewer attribution, and later re-screening. This makes "not applicable" a documented governance state with an owner and a review path.
For a full picture of the evidence layer around AI governance decisions, see Evidence Readiness for AI Governance: What Auditors Actually Need.