
Definitions

Public registry of canonical definitions used in the interpretive governance doctrine developed by Gautier Dorval. Stable conceptual perimeters for machine interpretation.


Visual schema

From term to framework

Definitions stabilize the vocabulary before doctrine, frameworks, and operational usage.

01

Canonical term

Name without ambiguity.

02

Scope

Delimit what the term covers.

03

Doctrine

Connect the term to the doctrinal frame.

04

Framework

Make it applicable inside a system.

05

Usage

Mobilize it in posts, cases, and audits.

Definitions and canonical concepts

This page serves as a public registry of canonical definitions used in the interpretive governance doctrine developed by Gautier Dorval.

It lists the primary conceptual references that govern how terms are used on this site, and that aim to frame machine interpretation when those terms are encountered.

This registry constitutes neither an operational method nor a promise of results. It exists to reduce ambiguity by declaring stable conceptual perimeters.

The canonical entity graph is published here: /entity-graph.jsonld
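The structure of that graph is not specified on this page; as a purely illustrative sketch, one entry of such a registry could be expressed as a JSON-LD fragment. The types below (schema.org `DefinedTermSet` / `DefinedTerm`) and the field values are assumptions, not the actual contents of /entity-graph.jsonld:

```python
import json

# Hypothetical sketch: a minimal JSON-LD fragment for one registry entry.
# The real /entity-graph.jsonld may differ in vocabulary and structure.
entity_graph = {
    "@context": "https://schema.org",
    "@type": "DefinedTermSet",
    "name": "Definitions and canonical concepts",
    "hasDefinedTerm": [
        {
            "@type": "DefinedTerm",
            "name": "Interpretive governance",
            "description": (
                "The mechanism by which the interpretation space of a "
                "site, entity, or corpus is explicitly bounded to limit "
                "plausible but erroneous AI inferences."
            ),
        }
    ],
}

# Serialize the graph so it can be published as a .jsonld resource.
serialized = json.dumps(entity_graph, indent=2)
print(serialized)
```

Publishing the graph as a single machine-readable file is consistent with the registry's goal of declaring stable conceptual perimeters in one canonical place.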


Quick navigation


Observable phenomena (field)

Authority, limits, and non-response

Evidence, audit, and observability

Governance and architecture

Application contexts

In this section

Agentic

Agentic designates an execution mode where an AI system plans, sequences, and executes actions based on an objective, often over multiple steps, with varying autonomy.

AI disambiguation

AI disambiguation designates all methods aimed at stabilizing entity identification by search engines and generative AI, reducing confusion, semantic collisions, and erroneous attributions.

Canonical fragility

Canonical fragility designates the vulnerability of a declared truth when its authority depends on too narrow an anchoring: a single page, format, access path, or signal type.

Canonical silence

Canonical silence designates a governed state where the absence of information in the canon is not a gap to fill, but an explicit bound: the system must not produce a statement beyond what is declared.

Compliance drift

Compliance drift designates the phenomenon where an AI system produces responses increasingly incompatible with declared rules, policies, or constraints, without explicit canon change.

Endogenous governance

Endogenous governance designates all mechanisms by which an entity canonizes and stabilizes its own truth and makes it enforceable within its surfaces, so that AI systems can activate it without depending on external interpretations.

External coherence graph

The external coherence graph designates the mapping of public signals that frame how an entity is interpreted by AI systems on the open web.

Governed negation

Governed negation designates a canonical property where an entity, corpus, or system explicitly declares what is not true, what is not covered, or what must not be inferred.

Interpretability perimeter

The interpretability perimeter designates the exact zone where an AI system can produce a legitimate interpretation from a given corpus, without crossing the authority boundary.

Interpretive capture

Interpretive capture designates the phenomenon where an actor or set of signals succeeds in imposing a framing within AI systems, so that the produced interpretation becomes oriented, stable, and dominant.

Interpretive collision

An interpretive collision occurs when an AI system fuses, confuses, or mixes two distinct entities, concepts, or reference frames because their signals are too close or ambiguous.

Interpretive debt

Interpretive debt designates the cumulative liability produced when approximations on high-impact information are repeated, reformulated, and stabilized by automated interpretation systems.

Interpretive governance

Primary canonical definition of interpretive governance: the mechanism by which the interpretation space of a site, entity, or corpus is explicitly bounded to limit plausible but erroneous AI inferences.

Interpretive hallucination

An interpretive hallucination is the production of a plausible but false statement, generated or reconstructed by a probabilistic system, then presented with a form of certainty.

Interpretive inertia

Interpretive inertia designates an AI system's resistance to modifying an already stabilized interpretation, even after canon correction or clarification.

Interpretive invisibilization

Interpretive invisibilization designates the phenomenon where information is present and accessible, but does not exist in AI-generated responses because it is not selected or activated.

Interpretive observability

Interpretive observability designates the capacity to measure, detect, and attribute interpretation variations produced by an AI system, to monitor canonical truth stability.
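As a purely illustrative sketch of this idea (not the doctrine's own tooling), observability can be approximated by scoring successive AI outputs for the same question against the canonical statement and flagging those that drift. The canonical sentence, the threshold, and the lexical metric below are assumptions; a real pipeline would use semantic comparison:

```python
from difflib import SequenceMatcher

# Hypothetical canonical statement to monitor (illustrative value).
CANON = "Canonical silence is an explicit bound, not a gap to fill."

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]; a stand-in for semantic checks."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_drift(outputs: list[str], threshold: float = 0.75) -> list[str]:
    """Return outputs whose similarity to the canon falls below the threshold."""
    return [o for o in outputs if similarity(CANON, o) < threshold]

# Two observed outputs: one faithful, one that contradicts the canon.
observed = [
    "Canonical silence is an explicit bound, not a gap to fill.",
    "Canonical silence means the system should guess missing facts.",
]
drifted = flag_drift(observed)
```

Even a crude gate like this makes variation measurable and attributable over time, which is the property the definition names.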

Interpretive remanence

Interpretive remanence designates the persistence of an old interpretation in AI outputs, even after the canon has been corrected, clarified, or updated.

Interpretive SEO

Interpretive SEO designates the discipline that aims to stabilize how inference systems interpret, infer, and attribute meaning from a site, entity, and content.

Interpretive SEO vs Entity SEO vs GEO vs AEO

Canonical clarification of the relations, overlaps, and distinctions between interpretive SEO, Entity SEO, GEO, and AEO. Each discipline is situated by role, action level, and purpose.

Interpretive smoothing

Interpretive smoothing designates AI's tendency to erase specificities, nuances, exceptions, or paradoxes of a concept in order to fit it into a standardized, more frequent, and easier-to-synthesize category.

Interpretive sustainability

Interpretive sustainability designates the property of an information system such that the meaning of high-impact information remains bounded, stable, and correctable over time.

Interpretive tail

The interpretive tail designates the transitory state where a canonical correction begins producing effects, but incompletely, irregularly, or contextually.

Legitimate non-response

Legitimate non-response designates a governed output where an AI system does not respond because the question exceeds the interpretability perimeter or crosses the authority boundary.

Neighborhood contamination

Neighborhood contamination designates the phenomenon where an entity's interpretation is altered by the semantic proximity of neighboring content, to the point where AI attributes properties from the environment rather than the canon.

Non-agentic systems

Non-agentic systems designate AI systems that produce an output without planning and executing a tool-driven action sequence oriented toward an objective.

Response conditions

Response conditions designate explicit prerequisites determining if an AI system can respond, how it must respond, and in which cases it must produce a legitimate non-response.
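As a hedged illustration of this idea (a sketch, not a prescribed implementation), response conditions can be modeled as an explicit gate evaluated before any answer is produced. The condition names below are assumptions chosen to echo other terms in this registry:

```python
from dataclasses import dataclass

# Hypothetical sketch: explicit prerequisites checked before answering.
@dataclass
class ResponseConditions:
    in_interpretability_perimeter: bool  # question stays inside the corpus's scope
    within_authority_boundary: bool      # answer would not claim unauthorized authority
    canon_declares_answer: bool          # the canon actually states the needed fact

def respond(question: str, conditions: ResponseConditions) -> str:
    """Answer only when every declared condition holds; otherwise return a
    legitimate non-response instead of a plausible guess."""
    if not conditions.in_interpretability_perimeter:
        return "Legitimate non-response: outside the interpretability perimeter."
    if not conditions.within_authority_boundary:
        return "Legitimate non-response: crosses the authority boundary."
    if not conditions.canon_declares_answer:
        return "Legitimate non-response: the canon is silent on this point."
    return f"Answer derived from the canon for: {question}"

result = respond(
    "What does the canon declare?",
    ResponseConditions(True, True, True),
)
```

Making the gate explicit is what distinguishes a governed non-response from an ordinary refusal: each failure mode names the condition that was not met.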

Semantic calibration

Semantic calibration designates all actions aimed at aligning, tuning, and stabilizing the correspondence between a canonical truth and how an AI system interprets and returns it.

Semantic compression

Semantic compression designates the mechanism by which a generative system condenses a complex informational space into a shorter, coherent, and statistically plausible formulation.

SSA-E + A2 + Dual Web

SSA-E + A2 + Dual Web designates a doctrinal implementation standard for interpretive governance, aimed at stabilizing entities, reducing ambiguity, and bounding machine interpretation.

State drift

State drift designates the divergence between the actual state of dynamic information and the state returned by an AI system, which responds as if the state were stable when it has changed.

Version power

Version power designates an entity's capacity to make a given canonical version prevail in AI systems, and to make previous versions explicitly obsolete, traceable, and not activatable by default.

Authority boundary

The authority boundary designates the explicit limit between what a system can infer and what it is legitimate to present as authorized, official, or applicable.

Authority conflict

An authority conflict arises when two or more sources claim legitimate authority on the same point but produce incompatible statements. Without arbitration, the correct output may be legitimate non-response.

Authority Governance (Layer 3)

Authority Governance (Layer 3) designates the adjacent governance regime that bounds executable authority when an interpretive output becomes an action-bearing input.

Exogenous governance (short definition)

Exogenous governance designates all methods aimed at reducing contradictions, ambiguity, and conflicts in external sources used by AI systems to reconstruct an entity.

Memory governance

Memory governance designates the doctrinal extension applied to stateful systems (agents, advanced RAG, persisted memories) to prevent inferences from fossilizing into facts.

External Authority Control (EAC)

Canonical definition of External Authority Control (EAC) within interpretive governance, semantic architecture, and AI systems.

Canon-output gap

Canonical definition of the canon-output gap: the distance between what the canon states and what an AI system reconstructs. Gap types, practical symptoms, and the minimum rule based on proof of fidelity.

Interpretation integrity audit

Canonical definition of the interpretation integrity audit: a formal procedure, an opposable report, a snapshotted corpus, an evidence chain, and conditional validity.

Interpretation trace

Canonical definition of interpretation trace: the minimum footprint that makes an AI output understandable, auditable, and contestable without depending on style or post-hoc narrative.

Proof of fidelity

Canonical definition of proof of fidelity: the minimum evidence required to show that an AI output remains faithful to the canon rather than merely plausible.
