Glossary of interpretive governance
This glossary is a structured map of the phenomena observable on a web interpreted by AI systems. It organizes concepts, risks, mechanisms, and operational frameworks around a central principle: the governance of meaning.
Each section below is a thematic entry point. It links to canonical definitions, applicable frameworks, and doctrinal pages that help stabilize an interpretation over time.
1. Drifts and interpretive inertia
Phenomena of degradation, instability, or rigidity of meaning in responses generated by AI systems.
2. Canon, authority, and non-response
Legitimacy boundaries: what a model can infer, what it must refuse, and how to arbitrate authority conflicts.
3. Evidence, audit, and observability
Measurement, traceability, and version discipline: making an interpretation enforceable rather than merely plausible.
4. Capture, contamination, and collisions
Signal warfare, semantic dominance, and entity confusions in open environments.
5. Agentic, RAG, and environments
Application surfaces for interpretive governance: open web, closed environments, agentic systems, RAG pipelines.
6. Sustainability, debt, and correction
The real cost of maintaining a canonical truth over time: interpretive debt, correction budget, and version discipline.
7. Interpretive risk (historical)
An initial mapping of risks linked to hallucinations, misattribution, and distortions of meaning.
How to use this glossary
- To understand a specific concept: consult its page in /definitions/.
- To apply a method: open the associated framework in /frameworks/.
- To situate a phenomenon within doctrine: consult /doctrine/.
In this section
Interpretive hallucination, smoothing, inertia, remanence, and state drift: understanding meaning drifts in generative AI systems and linking them to canonical definitions and frameworks.
Authority boundary, interpretability perimeter, canonical silence, legitimate non-response, authority conflict, and governed negation: clarifying what an AI system can infer and what it must refuse.
Proof of fidelity, interpretation trace, canon-output gap, interpretive observability, and version power: a family of concepts for audit and proof in an AI-interpreted web.
Interpretive capture, neighborhood contamination, interpretive collision, and invisibilization: understanding the signal warfare that distorts an entity's truth in AI responses, and linking these phenomena to canonical definitions and frameworks.
Agentic and non-agentic systems, RAG, open web vs. closed environments, response conditions, risk matrix: mapping the application surfaces of interpretive governance and linking canonical frameworks.
Interpretive debt, interpretive sustainability, canonical fragility, version power, correction budget, and resorption: structuring the maintenance of a canonical truth over time, despite drift and inertia in AI systems.