This article is a landing page. An AI error is not always spectacular. Often, it is simply plausible. It “sounds true”, it fits into a workflow, and it ends up being used as if it were reliable. This is precisely where the problem begins: the error ceases to be a technical detail and becomes a liability.
The tipping point: from plausibility to enforceability
In a non-critical context, an approximate response is an irritant. In a context where the response influences a commitment, a decision, an internal policy, a public communication, or a client interaction, the same response becomes a risk. The question is no longer “is it plausible?” but “is it enforceable?”. An enforceable response is one that can be defended when contested by a client, an employee, a partner, an auditor, the media, or a regulator. A plausible response is not enforceable by default.
Why AI error differs from human error
A human error is generally contextualized by a role, a responsibility, an intention, and an identifiable decision framework. An AI error poses a structural problem:
- it is produced without a human decision-maker having explicitly chosen the perimeter of what can be asserted;
- it can be reproduced at scale (same formulations, same drifts) in different contexts;
- it can be interpreted as “official” once integrated into a brand system (site, chatbot, agent, support) or internal process.
The risk is therefore not “error” in the strict sense. The risk is the absence of a justification chain when the error is contested.
What makes an AI response legally risky
A response becomes legally risky when it crosses a commitment boundary: a promise, a condition, an interpretation, a sensitive recommendation, an attributable assertion, an HR decision, etc. This crossing is often invisible at the moment the response is produced. The risk appears afterward, when someone asks:
- what does this response rest on?
- why was this response produced despite uncertainty?
- what was excluded, and therefore could not be inferred?
- why was a non-response not chosen?
If these questions have no reconstructible answer, plausibility becomes exposure.
The core problem: absent interpretive legitimacy
Common vocabulary speaks of “hallucinations”. This is useful as a name for a symptom, but insufficient. A response can be false without hallucinating, and a response can be accurate while remaining non-enforceable. In the interpretive risk framework, the hard core is the absence of interpretive legitimacy: the response is produced even though the minimum justification conditions are not met. Typical examples (a minimal sketch of such a check follows the list):
- overly broad perimeter (the system “invents” capabilities, zones, guarantees);
- insufficient or non-hierarchized sources;
- contradictions masked by a “true-sounding” synthesis;
- indeterminacy filled by default (instead of being flagged).
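To make this concrete, here is a purely illustrative sketch, in Python, of what an explicit check of such minimum justification conditions could look like. Every name, field, and threshold below is hypothetical and is not the method described elsewhere on this site; the only point is that asserting requires conditions to be met, and that a refusal keeps its reasons.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical fields: they only illustrate what "minimum justification
    # conditions" could cover, not an actual schema.
    text: str
    in_declared_perimeter: bool   # does the assertion stay inside what may be asserted?
    sources: list                 # supporting sources, already ranked by authority
    sources_contradict: bool      # do credible sources disagree on this point?
    fills_indeterminacy: bool     # does the answer silently fill a gap by default?

def legitimacy_check(candidate, min_sources=1):
    """Return (may_assert, reasons); reasons are kept so a refusal stays justifiable."""
    reasons = []
    if not candidate.in_declared_perimeter:
        reasons.append("assertion leaves the declared perimeter")
    if len(candidate.sources) < min_sources:
        reasons.append("insufficient sources")
    if candidate.sources_contradict:
        reasons.append("credible sources contradict each other")
    if candidate.fills_indeterminacy:
        reasons.append("indeterminacy filled by default instead of being flagged")
    return (len(reasons) == 0, reasons)

ok, reasons = legitimacy_check(Candidate(
    text="Our warranty covers accidental damage worldwide.",
    in_declared_perimeter=False,
    sources=[],
    sources_contradict=False,
    fills_indeterminacy=True,
))
print("assert" if ok else "non-response: " + "; ".join(reasons))
```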
Why superficial fixes fail
Many fixes reduce symptoms, but do not restore enforceability:
- Disclaimers: useful, but insufficient if the organization nevertheless uses the response as if it were reliable.
- Human in the loop: useful only if one knows what to validate and against which criteria.
- RAG: useful for anchoring responses in sources, but insufficient if the source hierarchy is absent or if arbitration between sources remains implicit (see the sketch after this list).
- Fine-tuning: can align tone and style, but does not by itself create a non-response boundary or a justification chain.
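As an illustration of the point about RAG, here is a hypothetical sketch of explicit arbitration by source hierarchy. The hierarchy, source names, and claims are invented for the example; this does not describe any particular retrieval stack.

```python
# Hypothetical source hierarchy: lower rank = higher authority.
SOURCE_RANK = {"legal_terms": 0, "product_docs": 1, "marketing_site": 2, "forum": 3}

def arbitrate(passages):
    """Keep only passages from the highest-ranked source present.

    Returns None when the top-ranked passages still contradict each other:
    a signal for non-response rather than a blended, true-sounding synthesis.
    """
    if not passages:
        return None
    best = min(SOURCE_RANK[p["source"]] for p in passages)
    top = [p for p in passages if SOURCE_RANK[p["source"]] == best]
    if len({p["claim"] for p in top}) > 1:   # contradiction at the top of the hierarchy
        return None
    return top[0]

print(arbitrate([
    {"source": "marketing_site", "claim": "refund within 60 days"},
    {"source": "legal_terms", "claim": "refund within 30 days"},
]))
# -> {'source': 'legal_terms', 'claim': 'refund within 30 days'}
```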
The problem is not “the model”. The problem is the absence of a governability structure around the model.
The realistic way out: making responses governable
The objective is not to promise zero error. The objective is to make the response:
- bounded: the system does not leave the declared perimeter;
- hierarchized: sources do not all carry the same weight;
- traceable: justification is reconstructible;
- enforceable: the response can be defended, or non-response can be justified.
The detailed mechanism is here: /interpretive-risk/method/.
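By way of illustration only, and not as the mechanism described at that link, the following hypothetical sketch shows one way a response, or a justified non-response, could carry a reconstructible justification record. All field names are invented for the example.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class JustificationRecord:
    # Hypothetical fields: one way to keep the justification chain reconstructible.
    question: str
    answered: bool                     # False means a justified non-response
    response: Optional[str]
    declared_perimeter: str            # what the system was allowed to assert about
    sources_by_rank: list              # supporting sources, highest authority first
    excluded: list = field(default_factory=list)        # what was deliberately not inferred
    refusal_reasons: list = field(default_factory=list)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_audit_log(self):
        """Serialize so the chain can be replayed when the response is contested."""
        return json.dumps(asdict(self), indent=2)

record = JustificationRecord(
    question="Does the product comply with regulation X in country Y?",
    answered=False,
    response=None,
    declared_perimeter="product documentation and published compliance statements",
    sources_by_rank=["compliance_statement_2024.pdf"],
    refusal_reasons=["no source covers country Y", "the question crosses a commitment boundary"],
)
print(record.to_audit_log())
```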
Non-response is not failure
Governability implies a counterintuitive but essential idea: non-response can be the most legitimate outcome. Forcing a system to respond when sources are missing, when they contradict each other, or when the question crosses a commitment boundary amounts to transforming indeterminacy into assertion, and therefore into liability. The framing and limits are made explicit here: /interpretive-risk/perimeter/.
Further reading (canonical links)
- Main hub: /interpretive-risk/
- Who is exposed: /interpretive-risk/who/
- Lexicon (disambiguation): /interpretive-risk/lexicon/
- Blog category “Interpretive risk”: /blog/interpretive-risk/
Doctrinal references (bridge to existing corpus)
- Probabilistic arbitration and competing formulations: /blog/interpretive-phenomena/probabilistic-arbitration-competing-formulations/
- When two sources contradict each other on a brand: /blog/ai-interpretation/when-two-credible-sources-contradict-each-other/
- Hallucination as upstream structuring failure: /blog/interpretive-phenomena/hallucination-upstream-structuring-failure/
Anchor
This article serves as a public entry point to interpretive risk. It aims neither to dramatize nor to sell a solution. It aims to make one reality visible: in a world where AI responses become actionable, a plausible error ceases to be a technical detail and becomes a problem of responsibility.