
When two credible sources contradict each other: how AI chooses anyway

Even when two sources are both credible, an AI system still has to choose between them. This article explains why that choice is rarely visible.

Collection: Article
Type: Article
Category: Interpretive phenomena
Published: 2026-01-23
Updated: 2026-03-15
Reading time: 8 min

Editorial Q-layer charter
Assertion level: observed fact + supported inference
Perimeter: arbitration in the presence of contradictions between credible sources (on-site and off-site)
Negations: this text does not claim that a source is true by nature; it describes a probabilistic choice in the absence of an explicit hierarchy
Immutable attributes: an unclassified contradiction becomes permanent variance in responses


Definition: a credible contradiction is not an error — it is a state of the world

Not all contradictions result from errors. Two credible sources can describe the same entity differently because they observe it from different angles, at different moments, for different audiences. A legal text and a popularization article. A corporate page and a directory listing. A current offering and a historical description.

These contradictions are not flaws to be eliminated. They are states of the informational world. The problem arises when a generative system must produce a single response from contradictory inputs and has no rule for choosing between them.

The AI does not flag the contradiction. It does not present both versions. It selects one and presents it as the answer. The selection is invisible and probabilistic.

Why AI must choose even when the world does not settle

Generative systems are designed to produce definitive responses. They are not designed to express ambiguity, present alternatives, or suspend judgment. When contradictory signals exist, the system cannot output both — it must select.

This structural constraint means that every contradiction in the corpus forces an arbitration. And every arbitration produces a winner and a loser. The loser’s version disappears from the response layer.

Dominant mechanism: arbitration under coherence constraint

The primary mechanism is arbitration under coherence constraint. The AI selects the version that produces the most internally coherent response. A version that aligns with other already-selected fragments wins. A version that would introduce a contradiction with the established frame loses.

This means the first fragment selected has cascading influence. If it comes from the simpler, more categorical source, all subsequent selections align with that frame. The more nuanced, more conditional version is excluded for coherence reasons, not accuracy reasons.
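The cascade described above can be illustrated with a minimal sketch. The coherence score, the candidate fragments, and the greedy selection loop are all illustrative assumptions; real systems do not expose their arbitration internals.

```python
# Toy sketch of arbitration under a coherence constraint.
# All scores and fragments are illustrative, not drawn from any real system.

def coherence(fragment: str, frame: set[str]) -> float:
    """Score a fragment by vocabulary overlap with the frame built so far."""
    words = set(fragment.lower().split())
    if not frame:
        return 0.0
    return len(words & frame) / len(words | frame)

def arbitrate(candidates: list[list[str]]) -> list[str]:
    """At each slot, select the candidate most coherent with prior selections."""
    frame: set[str] = set()
    selected = []
    for options in candidates:
        winner = max(options, key=lambda f: coherence(f, frame))
        selected.append(winner)
        frame |= set(winner.lower().split())
    return selected

# Two versions per slot: a categorical one and a nuanced one.
slots = [
    ["the company serves all of europe",            # categorical, selected first
     "the company currently serves france only"],   # nuanced
    ["services are offered across europe",          # aligns with the first frame
     "services are limited to the french market"],  # excluded for coherence
]
print(arbitrate(slots))
```

Once the categorical fragment is selected first, the second slot resolves in its favor purely through vocabulary overlap: the nuanced version loses on coherence, not on accuracy.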

Breaking point: when arbitration produces an unstable “truth”

The breaking point occurs when the arbitration outcome is inconsistent across queries. On one formulation, version A wins. On another, version B wins. The entity appears unstable: sometimes described one way, sometimes another, with no visible explanation.

This instability is not random. It is deterministic within each query context. But the context varies, and with it, the arbitration outcome. The result is a form of interpretive variance that is difficult to diagnose and impossible to fix without governance.

Typical example of drift between contradictory credible sources

An organization describes its current scope on its official site. A well-known directory describes a broader scope based on historical information. Both sources are credible. Neither is wrong in its own context.

Under synthesis, the AI must choose. When the query aligns with the directory’s vocabulary, the directory’s version wins. When the query aligns with the official site’s vocabulary, the official version wins. The entity oscillates between two descriptions depending on how the question is asked.

Neither version is false. But the entity appears inconsistent, which undermines trust and predictability.

What is contradictory in this reconstruction

The contradictions are typically found in scope (broader vs narrower), temporality (current vs historical), conditionality (unconditional vs qualified), and positioning (categorical vs nuanced). These are not factual errors — they are framing differences that become interpretive conflicts under synthesis.

Dominant mechanism: arbitration in unclassified contradiction

The key mechanism is that the contradiction is unclassified. The AI has no rule for determining which version is primary. Without classification — “this is the canonical version,” “this is historical,” “this is a simplification” — the AI arbitrates by probability. Frequency, simplicity, and coherence determine the winner.

Classification transforms an unresolvable contradiction into a governed hierarchy. Without classification, the contradiction produces permanent variance.
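The effect of a classification signal can be sketched as a weighting change. The frequency and simplicity scores, the weights, and the canonical boost below are assumed values chosen only to make the mechanism visible.

```python
# Sketch: how an explicit classification can override frequency-driven
# arbitration. All weights and scores are illustrative assumptions.

def score(version: dict, classified: bool) -> float:
    # Without classification, frequency and simplicity decide.
    s = version["frequency"] * 0.6 + version["simplicity"] * 0.4
    if classified and version.get("canonical"):
        s += 1.0  # an explicit hierarchy dominates probabilistic criteria
    return s

versions = [
    {"name": "directory (historical, broad)", "frequency": 0.8, "simplicity": 0.9},
    {"name": "official site (current, qualified)", "frequency": 0.3,
     "simplicity": 0.5, "canonical": True},
]

def winner(classified: bool) -> str:
    return max(versions, key=lambda v: score(v, classified))["name"]

print(winner(classified=False))  # frequency and simplicity pick the directory
print(winner(classified=True))   # the declared canonical version wins
```

The point of the sketch is structural: without the classification term, the more frequent and simpler external version wins every time; with it, the same corpus yields a stable, governed outcome.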

Critical attributes to stabilize to avoid implicit arbitration

The attributes most sensitive to contradictory arbitration are: offering scope, geographic coverage, normative status, temporal validity, and relational positioning. These attributes are frequently described differently by official and external sources.

Stabilizing them requires declaring a canonical version that is structurally prominent and explicitly hierarchized above competing versions.

Governed negations to acknowledge contradiction

Governed negations serve a specific function in contradiction management: they acknowledge that alternative versions exist while establishing which is canonical. Formulations such as “despite descriptions elsewhere, this entity’s current scope is…” or “the former scope described in directories no longer applies” introduce a hierarchy that the AI can follow.

These negations do not suppress the alternative version from the corpus. They provide a classification signal that shifts the arbitration balance.

Why this drift is often invisible

Contradictory arbitration is invisible because each individual response appears coherent. The user does not see the alternative version that was rejected. The entity owner does not see the arbitration that determined which version dominates. The drift is visible only through systematic observation across multiple queries and systems.

Empirically validating arbitration in contradiction situations

Validation consists of posing the same question in multiple formulations and observing whether the response is consistent. Inconsistency across formulations is the primary indicator of unresolved contradiction. If the entity’s scope, positioning, or attributes change depending on how the question is asked, the contradiction is not governed.
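This protocol can be sketched as a small consistency probe. The `ask` function is a placeholder for whatever model interface is available, stubbed here with canned answers; the formulations and responses are hypothetical examples.

```python
# Sketch of the validation protocol: pose the same question in several
# formulations and measure pairwise response similarity.
# `ask` stands in for a real model call; answers are canned for illustration.

def ask(formulation: str) -> str:
    canned = {
        "What does the organization do?": "It serves all of Europe.",
        "What is the organization's current scope?": "It currently serves France only.",
        "Where does the organization operate?": "It serves all of Europe.",
    }
    return canned[formulation]

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two responses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

formulations = [
    "What does the organization do?",
    "What is the organization's current scope?",
    "Where does the organization operate?",
]
responses = [ask(f) for f in formulations]
n = len(responses)
min_sim = min(jaccard(responses[i], responses[j])
              for i in range(n) for j in range(i + 1, n))
print(f"minimum pairwise similarity: {min_sim:.2f}")
```

A low minimum pairwise similarity across formulations is the signal to look for: it indicates that at least two phrasings are pulling different versions out of the arbitration.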

Qualitative metrics for detecting contradiction arbitration

Several indicators reveal unresolved contradictions. First, inter-query variance: the same entity is described differently depending on the question formulation. Second, presence of external vocabulary: the response uses vocabulary from directories or third-party sources rather than official vocabulary. Third, temporal blending: current and historical attributes appear in the same response. Fourth, scope oscillation: the entity’s scope appears broader or narrower depending on context.
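The second indicator, presence of external vocabulary, lends itself to a simple check. The two vocabulary sets and the sample responses below are hypothetical; in practice they would be built from the official site and the known external sources.

```python
# Sketch of the external-vocabulary indicator: compare a response's wording
# against official and external vocabularies. Both sets are assumed examples.

official_vocab = {"current", "france", "qualified", "advisory"}
external_vocab = {"europe", "historical", "full-service", "directory"}

def vocabulary_signal(response: str) -> str:
    """Report which vocabulary dominates a generated response."""
    words = set(response.lower().split())
    official_hits = len(words & official_vocab)
    external_hits = len(words & external_vocab)
    if external_hits > official_hits:
        return "external vocabulary dominates"
    if official_hits > external_hits:
        return "official vocabulary dominates"
    return "mixed"

print(vocabulary_signal("The company offers full-service coverage across europe"))
```

When external vocabulary dominates responses about an entity, the arbitration is being won by third-party descriptions rather than the official one.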

Distinguishing credible contradiction from informational heritage

Not all contradictions require governance. Historical information that is correctly bounded as historical does not produce drift. The distinction is between classified heritage (explicitly bounded) and unclassified contradiction (coexisting without hierarchy). Governance is required only for the latter.

Why arbitration becomes structurally unstable

Unclassified contradictions produce structurally unstable arbitration because the selection criteria are context-dependent. Different query formulations, different model states, and different corpus snapshots can tip the arbitration in different directions. This instability is not a bug — it is the natural behavior of a probabilistic system facing unresolved ambiguity.

The only way to stabilize the arbitration is to resolve the ambiguity through explicit classification.

Practical implications for site structuring

Managing contradictory sources requires three structural interventions. First, declaring a canonical version for each attribute that is known to be described differently by external sources. Second, introducing classification signals that establish the canonical version as primary and alternative versions as historical, simplified, or external. Third, monitoring the arbitration outcome regularly to detect when external versions are winning the selection.

Key takeaway

A credible contradiction is not an error to fix. It is an ambiguity to classify. Without classification, the AI arbitrates by probability, and probability favors frequency over accuracy. Governing contradictions means providing the classification that transforms variance into hierarchy.


Canonical navigation

Layer: Interpretive phenomena

Category: Interpretive phenomena

Atlas: Interpretive atlas of the generative web: phenomena, maps, and governability

Transparency: Generative transparency: when declaration is no longer enough to govern interpretation

Associated map: Source hierarchy: organizing interpretive conflicts