Visual schema: from term to framework
Definitions stabilize the vocabulary before doctrine, frameworks, and operational usage.
- Canonical term: name without ambiguity.
- Scope: delimit what the term covers.
- Doctrine: connect the term to the doctrinal frame.
- Framework: make it applicable inside a system.
- Usage: mobilize it in posts, cases, and audits.
Definitions and canonical concepts
This page serves as a public registry of canonical definitions used in the interpretive governance doctrine developed by Gautier Dorval.
It lists the primary conceptual references that govern term usage on this site and that aim to frame how machines interpret these terms when they encounter them.
This registry constitutes neither an operational method nor a promise of results. It exists to reduce ambiguity by declaring stable conceptual perimeters.
The canonical entity graph is published here: /entity-graph.jsonld
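To illustrate the shape such a graph can take, here is a minimal, hypothetical JSON-LD sketch built in Python. The node IDs, names, and the use of schema.org `DefinedTerm`/`DefinedTermSet` are illustrative assumptions; the published /entity-graph.jsonld remains the canonical source.

```python
import json

# Minimal, illustrative JSON-LD entity graph.
# All @id values and term names below are hypothetical examples,
# not the contents of the published /entity-graph.jsonld.
entity_graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "DefinedTermSet",
            "@id": "#canonical-definitions",
            "name": "Canonical definitions",
        },
        {
            "@type": "DefinedTerm",
            "@id": "#interpretive-governance",
            "name": "Interpretive governance",
            "inDefinedTermSet": {"@id": "#canonical-definitions"},
        },
    ],
}

print(json.dumps(entity_graph, indent=2))
```

Declaring each term as a node with a stable `@id` is what lets the registry act as a fixed reference point rather than a free-text page.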
Quick navigation
- Observable phenomena (field)
- Authority, limits, and non-response
- Evidence, audit, and observability
- Governance and architecture
- Application contexts
Observable phenomena (field)
- Interpretive invisibilization
- Interpretive collision
- Interpretive capture
- Interpretive inertia
- State drift
- Interpretive smoothing
- Interpretive remanence
- Neighborhood contamination
- Interpretive trail
Authority, limits, and non-response
- Authority boundary
- Authority Governance (Layer 3)
- Authority conflict
- Legitimate non-response
- Canonical silence
- Governed negation
- Response conditions
- Interpretive hallucination
- Interpretability perimeter
Evidence, audit, and observability
- Interpretive observability
- Semantic calibration
- Compliance drift
- Interpretive debt
- Interpretive sustainability
- Version power
- Canonical fragility
Governance and architecture
- Interpretive governance
- Endogenous governance
- Exogenous governance
- External coherence graph
- Memory governance
- SSA-E + A2 + Dual Web
- Semantic compression
- AI disambiguation
- Interpretive SEO
Application contexts
- Agentic
- Non-agentic systems
- Post-semantics (thinking & reasoning) vs interpretive governance
- Interpretive SEO vs Entity SEO vs GEO vs AEO
In this section
Agentic designates an execution mode where an AI system plans, sequences, and executes actions based on an objective, often over multiple steps, with varying autonomy.
AI disambiguation designates all methods aimed at stabilizing entity identification by search engines and generative AI, reducing confusions, semantic collisions, and erroneous attributions.
Canonical fragility designates the vulnerability of a declared truth when its authority depends on too narrow an anchoring: a single page, format, access path, or signal type.
Canonical silence designates a governed state where the absence of information in the canon is not a gap to fill, but an explicit bound: the system must not produce a statement beyond what is declared.
Compliance drift designates the phenomenon where an AI system produces responses increasingly incompatible with declared rules, policies, or constraints, without explicit canon change.
Endogenous governance designates all mechanisms by which an entity canonizes, stabilizes, and makes enforceable its own truth within its surfaces, so AI can activate it without depending on external interpretations.
The external coherence graph designates the mapping of public signals that frame how an entity is interpreted by AI systems in the open web.
Governed negation designates a canonical property where an entity, corpus, or system explicitly declares what is not true, not covered, or must not be inferred.
The interpretability perimeter designates the exact zone where an AI system can produce a legitimate interpretation from a given corpus, without crossing the authority boundary.
Interpretive capture designates the phenomenon where an actor or signal set manages to impose a framing in AI systems, making the produced interpretation oriented, stable, and dominant.
An interpretive collision occurs when an AI system fuses, confuses, or mixes two distinct entities, concepts, or reference frames because their signals are too close or ambiguous.
Interpretive debt designates the cumulative liability produced when approximations on high-impact information are repeated, reformulated, and stabilized by automated interpretation systems.
Primary canonical definition of interpretive governance: the mechanism by which the interpretation space of a site, entity, or corpus is explicitly bounded to limit plausible but erroneous AI inferences.
An interpretive hallucination is the production of a plausible but false statement, generated or reconstructed by a probabilistic system, then presented with a form of certainty.
Interpretive inertia designates an AI system's resistance to modifying an already stabilized interpretation, even after canon correction or clarification.
Interpretive invisibilization designates the phenomenon where information is present and accessible, but does not exist in AI-generated responses because it is not selected or activated.
Interpretive observability designates the capacity to measure, detect, and attribute interpretation variations produced by an AI system, to monitor canonical truth stability.
Interpretive remanence designates the persistence of an old interpretation in AI outputs, even after the canon has been corrected, clarified, or updated.
Interpretive SEO designates the discipline that aims to stabilize how inference systems interpret, infer, and attribute meaning from a site, entity, and content.
Canonical clarification of the relations, overlaps, and distinctions between interpretive SEO, Entity SEO, GEO, and AEO, situating each discipline by role, action level, and purpose.
Interpretive smoothing designates AI's tendency to erase specificities, nuances, exceptions, or paradoxes of a concept in order to fit it into a standardized, more frequent, and easier-to-synthesize category.
Interpretive sustainability designates the property of an information system whereby the meaning of high-impact information remains bounded, stable, and correctable over time.
The interpretive trail designates the transitory state where a canonical correction begins producing effects, but incompletely, irregularly, or contextually.
Legitimate non-response designates a governed output where an AI system does not respond because the question exceeds the interpretability perimeter or crosses the authority boundary.
Neighborhood contamination designates the phenomenon where an entity's interpretation is altered by the semantic proximity of neighboring content, to the point where AI attributes to it properties drawn from its environment rather than from the canon.
Non-agentic systems designate AI systems that produce an output without planning and executing a tool-driven action sequence oriented toward an objective.
Canonical clarification of relations and distinctions between post-semantic thinking, post-semantic reasoning, and interpretive governance applied to generative AI systems.
Response conditions designate explicit prerequisites determining if an AI system can respond, how it must respond, and in which cases it must produce a legitimate non-response.
Semantic calibration designates all actions aimed at aligning, tuning, and stabilizing the correspondence between a canonical truth and how an AI system interprets and returns it.
Semantic compression designates the mechanism by which a generative system condenses a complex informational space into a shorter, coherent, and statistically plausible formulation.
SSA-E + A2 + Dual Web designates a doctrinal implementation standard for interpretive governance, aimed at stabilizing entities, reducing ambiguity, and bounding machine interpretation.
State drift designates the divergence between the actual state of dynamic information and the state returned by an AI system, which responds as if the state were stable when it has changed.
Version power designates an entity's capacity to make a given canonical version prevail in AI systems, and to make previous versions explicitly obsolete, traceable, and not activatable by default.
The authority boundary designates the explicit limit between what a system can infer and what it is legitimate to present as authorized, official, or applicable.
An authority conflict arises when two or more sources claim legitimate authority on the same point but produce incompatible statements. Without arbitration, the correct output may be legitimate non-response.
Authority Governance (Layer 3) designates the adjacent governance regime that bounds executable authority when an interpretive output becomes an action-bearing input.
Exogenous governance designates all methods aimed at reducing contradictions, ambiguity, and conflicts in external sources used by AI systems to reconstruct an entity.
Memory governance designates the doctrinal extension applied to stateful systems (agents, advanced RAG, persisted memories) to prevent inferences from fossilizing into facts.
External Authority Control (EAC): canonical definition within interpretive governance, semantic architecture, and AI systems.
Canonical definition of the canon-output gap: the distance between what the canon states and what an AI system reconstructs. Gap types, practical symptoms, and the minimum rule based on proof of fidelity.
Canonical definition of the interpretation integrity audit: a formal procedure, an opposable report, a snapshotted corpus, an evidence chain, and conditional validity.
Canonical definition of interpretation trace: the minimum footprint that makes an AI output understandable, auditable, and contestable without depending on style or post-hoc narrative.
Canonical definition of proof of fidelity: the minimum evidence required to show that an AI output remains faithful to the canon rather than merely plausible.
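Several of the terms above (interpretability perimeter, authority boundary, response conditions, legitimate non-response, canonical silence) describe a single gating logic. The following Python sketch illustrates that logic under stated assumptions; the function, field names, and topic labels are hypothetical, not part of the doctrine's implementation standard.

```python
from dataclasses import dataclass

@dataclass
class Canon:
    # Topics the corpus explicitly covers (the interpretability perimeter).
    covered_topics: set
    # Topics beyond the authority boundary: never answered, even if plausible.
    out_of_authority: set

NON_RESPONSE = "No canonical statement is declared for this question."

def respond(canon: Canon, topic: str, draft_answer: str) -> str:
    """Apply response conditions: serve an answer only inside the
    perimeter, and produce a legitimate non-response everywhere else."""
    if topic in canon.out_of_authority:
        return NON_RESPONSE  # authority boundary crossed
    if topic not in canon.covered_topics:
        return NON_RESPONSE  # canonical silence: nothing beyond the canon
    return draft_answer      # perimeter respected: the draft may be served

# Hypothetical example canon and queries.
canon = Canon(covered_topics={"pricing"}, out_of_authority={"legal-advice"})
print(respond(canon, "pricing", "Plans start at the published rate."))
print(respond(canon, "roadmap", "We will probably ship X next year."))
```

The point of the sketch is the asymmetry: a plausible draft answer is discarded, not repaired, whenever the topic falls outside the declared perimeter.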