
Hallucination is not the problem: the absence of interpretive legitimacy

“Hallucination” names a symptom. It does not govern a system. The core problem is the production of answers without interpretive legitimacy.

Collection: Article
Type: Article
Category: interpretive risk
Published: 2026-01-27
Updated: 2026-03-15
Reading time: 3 min

This article reframes a symptom. The word “hallucination” has become the universal shortcut for designating everything that goes wrong in AI answers. It is useful for naming a discomfort. It is insufficient for governing a system. An organization can reduce certain hallucinations and remain exposed, because the central problem is not the error alone. The central problem is interpretive legitimacy at the moment the answer is produced.

Why “reducing hallucinations” is a trap

“Hallucination reduction” is often stated as an objective without specifying what is measured or what is governed. Three facts make this promise unstable:

  • an answer can be false without being perceived as a hallucination (it can simply be approximate, or incomplete);
  • an answer can be accurate and yet not defensible (no reconstructible justification chain);
  • an answer can be plausible, confident, and socially convincing, while crossing an engagement boundary without authority.

Therefore, “fewer hallucinations” does not mean “less liability.”

The hard core: answering without legitimacy

Within the framework of interpretive risk, a drift becomes critical when an answer is produced even though the minimum justification conditions are not met. This absence of legitimacy typically appears when:

  • the scope is too broad (the system infers capabilities, offerings, areas, guarantees);
  • the source hierarchy is absent (sources of unequal weight treated as equivalent);
  • contradictions exist and are masked by a synthesis that “sounds true”;
  • an indeterminacy is filled by default instead of being flagged;
  • the question crosses an engagement boundary without an explicit act of authority.

These mechanisms produce answers that are coherent on the surface, but fragile when contested.

Interpretive legitimacy, concretely

An answer becomes more governable when one can reconstruct, without inventing anything:

  • the declared scope (what is included, and what is excluded);
  • the source hierarchy (what is authoritative, and what is secondary);
  • the treatment of contradictions (flagging, explicable arbitration, or refusal);
  • the management of the void (indeterminacy flagged, not masked);
  • the legitimacy of a non-answer (when answering would create liability).

See the full mechanics: /interpretive-risk/method/.
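To make these conditions concrete, here is a minimal sketch, in Python, of what a reconstructible justification record could look like. The class names, fields, and enum values are assumptions introduced for this article, not the reference implementation of the method linked above.

```python
# A minimal, illustrative sketch: names and fields are assumptions, not the
# method's reference implementation.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Set


class ContradictionHandling(Enum):
    FLAGGED = "flagged"        # contradiction surfaced to the reader
    ARBITRATED = "arbitrated"  # resolved with an explicable rationale
    REFUSED = "refused"        # answer withheld because of the conflict


@dataclass
class JustificationRecord:
    """What must be reconstructible for an answer to claim legitimacy."""
    declared_scope: Set[str]                          # what is included / excluded
    source_hierarchy: List[str]                       # authoritative first, secondary after
    contradictions: Optional[ContradictionHandling]   # None: no contradiction detected
    indeterminacies_flagged: List[str] = field(default_factory=list)
    crosses_engagement_boundary: bool = False         # needs an explicit act of authority
```

The design choice the sketch illustrates is that legitimacy is a property of the record, not of the answer's tone: if the record cannot be filled in, the answer has nothing to stand on.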

Why non-answer is central

An organization that forces an answer in all circumstances manufactures a debt. Legitimate non-answer is not a UX flaw: it is a governance capability. It appears when justification conditions are insufficient, contradictory, or out of scope.
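As a sketch only, and reusing the hypothetical JustificationRecord above, a response gate could refuse to answer whenever those conditions are not met, returning a legitimate non-answer instead of a forced one. The checks and messages below are illustrative assumptions, not a prescribed policy.

```python
# Hypothetical gate built on the JustificationRecord sketch above.
def answer_or_decline(record: JustificationRecord, topic: str) -> str:
    """Answer only when the minimum justification conditions hold."""
    if topic not in record.declared_scope:
        return "Non-answer: the question falls outside the declared scope."
    if not record.source_hierarchy:
        return "Non-answer: no source hierarchy, so sources cannot be weighted."
    if record.contradictions is ContradictionHandling.REFUSED:
        return "Non-answer: an unresolved contradiction between sources."
    if record.crosses_engagement_boundary:
        return "Non-answer: committing here requires an explicit act of authority."
    # Indeterminacy is flagged alongside the answer, never silently filled.
    caveats = "; ".join(record.indeterminacies_flagged) or "none"
    return f"Answer (flagged indeterminacies: {caveats})."
```

The point of the sketch is structural: each non-answer branch is an explicit, auditable outcome rather than a failure state.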

The symptom and the cause (operational distinction)

  • Symptom: hallucination, incoherence, unverifiable assertion, overconfident answer.
  • Governable cause: absence of interpretive legitimacy (scope, hierarchy, contradictions, indeterminacy, engagement boundary).

This distinction makes it possible to move beyond the moral debate (“AIs hallucinate”) and enter a structural debate (“what conditions authorize an answer?”).

Anchor

Hallucination is a symptom. Interpretive risk arises when an answer is produced without interpretive legitimacy. The realistic way out consists of governing response conditions: scope, hierarchy, treatment of contradictions, indeterminacy, and legitimate non-answer.