HR: when AI inference becomes a discrimination risk

In HR, AI often starts as a productivity tool. The risk appears when generated output is treated as if it were a reliable evaluation rather than a rhetorical inference built on incomplete and contestable signals.

Collection: Article
Type: Article
Category: interpretive risk
Published: 2026-01-27
Updated: 2026-03-15
Reading time: 4 min

This article works through a typical case. In HR, AI is often introduced as a productivity tool: CV synthesis, application sorting, interview summary generation, question recommendations, writing assistance. The interpretive risk appears when the system’s output is used as if it were a reliable evaluation, when in fact it rests on inferences, formulation biases, or a surface coherence that is not enforceable.

The break point: from synthesis to evaluation

AI can summarize a profile convincingly. The problem begins when the summary slides, even subtly, toward an implicit conclusion: “good fit”, “risk”, “unstable profile”, “weak leadership”, “lack of rigor”, etc. The shift is often invisible because it is rhetorical: a formulation becomes an interpretation. At that point, the question is no longer “is it useful?”, but: who can defend this inference if it is contested?
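
One way to make this rhetorical slide visible is to scan generated summaries for evaluative or trait-level phrasing before they reach a reviewer. A minimal sketch in Python, assuming a hand-maintained phrase list (the patterns and function name below are illustrative, not from any existing tool):

```python
import re

# Illustrative, non-exhaustive patterns that turn a summary into an evaluation.
# In practice this list would be curated by HR and legal, not hard-coded.
EVALUATIVE_PATTERNS = [
    r"\bgood fit\b",
    r"\bunstable\b",
    r"\bweak leadership\b",
    r"\black of rigou?r\b",
    r"\brisky? profile\b",
]

def flag_evaluative_language(summary: str) -> list[str]:
    """Return the evaluative patterns found in a generated summary."""
    return [p for p in EVALUATIVE_PATTERNS if re.search(p, summary, re.IGNORECASE)]

text = "Strong Python background, but frequent job changes suggest an unstable profile."
print(flag_evaluative_language(text))  # ['\\bunstable\\b'] -> hold for human framing
```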

Why risk is amplified in HR

HR concerns people, therefore rights, therefore possible contestation. Three factors increase exposure:

  • Incomplete data: a CV, interview notes, or an employment history are not sufficient to justify a stable inference.
  • Structural ambiguities: evaluation criteria are often implicit, variable, and context-dependent.
  • Responsibility boundary: an HR decision has real consequences (employment, career, reputation), therefore justification must remain explainable.

The highest-risk outputs

Certain outputs are particularly prone to interpretive drift (a policy sketch follows the list):

  • application ranking or scoring;
  • rejection or prioritization recommendations;
  • “psychologizing” summaries or behavioral interpretations;
  • deduction of intentions, attitudes, or traits from formulations, career path, or writing style;
  • generalization from weak signals (CV gaps, frequent changes, schools, sectors).
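
Each of these categories can be written down as an explicit policy rather than left implicit. A minimal sketch, with invented category names and policy labels:

```python
# Hypothetical policy map over the output categories listed above.
OUTPUT_POLICY = {
    "factual_summary":            "allow_with_sources",  # must cite source passages
    "ranking_or_scoring":         "refuse",
    "rejection_recommendation":   "refuse",
    "behavioral_interpretation":  "refuse",              # psychologizing summaries
    "trait_or_intent_inference":  "refuse",
    "weak_signal_generalization": "refuse",              # gaps, job changes, schools
}

def policy_for(output_type: str) -> str:
    # Unknown output types default to refusal, never to permission.
    return OUTPUT_POLICY.get(output_type, "refuse")

print(policy_for("ranking_or_scoring"))  # refuse
print(policy_for("factual_summary"))     # allow_with_sources
```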

The risk is not “that AI is biased” in the moral sense. The risk is that inference is used as implicit truth, without an enforceable justification chain.

Why “human in the loop” can still fail

It is often assumed that a human review suffices. In practice, this fails if:

  • validation criteria are not explicit;
  • the decision-maker confuses fluency with accuracy;
  • the system produces persuasive coherence that reduces contestability.

A human in the loop without an interpretive legitimacy framework becomes a human who validates the plausible.
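
One concrete countermeasure is to make the validation criteria explicit and refuse sign-off until each is checked. A sketch with invented field names:

```python
from dataclasses import dataclass

@dataclass
class ReviewChecklist:
    """Explicit validation criteria; field names are invented for illustration."""
    sources_verified: bool = False    # every claim traced back to a document
    criteria_stated: bool = False     # the evaluation grid was written down
    no_trait_inference: bool = False  # no psychologizing, no intent deduction
    fluency_discounted: bool = False  # reviewer judged evidence, not prose quality

def can_validate(checklist: ReviewChecklist) -> bool:
    # A human in the loop only counts if every criterion is explicitly met.
    return all(vars(checklist).values())

review = ReviewChecklist(sources_verified=True, criteria_stated=True)
print(can_validate(review))  # False: plausibility alone does not authorize sign-off
```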

The central mechanism: arbitrating weak signals

In HR, AI does not merely summarize. It arbitrates: it selects elements, implicitly ranks them, and fabricates a reading of the profile. When sources are insufficient or contradictory, the system often compensates with a coherent narrative.
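
That compensation can be probed with a grounding check: every claim in the generated reading must map back to a source excerpt, and unsupported claims are treated as narrative filler. The sketch below uses a crude word-overlap heuristic (a real system would use alignment or entailment models); all names are illustrative:

```python
def overlap(claim: str, excerpt: str) -> float:
    """Crude lexical overlap between a claim and a source excerpt."""
    c, e = set(claim.lower().split()), set(excerpt.lower().split())
    return len(c & e) / max(len(c), 1)

def unsupported_claims(claims, sources, threshold=0.5):
    # A claim counts as supported only if some excerpt overlaps with it enough.
    return [c for c in claims if all(overlap(c, s) < threshold for s in sources)]

claims = ["managed a team of four", "shows weak commitment to long-term roles"]
sources = ["2019-2022: managed a team of four engineers at Acme"]
print(unsupported_claims(claims, sources))
# ['shows weak commitment to long-term roles'] -> narrative, not evidence
```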

What it means to make HR use governable

Making use governable in HR does not mean suppressing AI. It means bounding what it can assert and what it must not infer. Key properties (a schema sketch follows the list):

  • Bounding: prohibit inferences of traits, intentions, or non-demonstrable causalities.
  • Hierarchy: distinguish observable facts, summaries, and interpretations.
  • Traceability: make explainable which elements led to a summary or recommendation.
  • Legitimate non-response: refuse to decide when information is insufficient, contradictory, or out of perimeter.
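
These four properties can be encoded in the output schema itself rather than in a usage guideline. A minimal sketch, assuming invented type and field names:

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    kind: str                 # "fact" | "summary" | "interpretation"  (hierarchy)
    text: str
    sources: list[str] = field(default_factory=list)  # traceability

def render_profile(statements: list[Statement]) -> str:
    # Bounding: trait or intent interpretations are rejected outright.
    if any(s.kind == "interpretation" for s in statements):
        raise ValueError("trait/intent inference is out of perimeter")
    # Legitimate non-response: unsourced facts mean the system must not decide.
    if any(s.kind == "fact" and not s.sources for s in statements):
        return "Insufficient sourced information: no profile reading produced."
    return "\n".join(f"[{s.kind}] {s.text} (sources: {', '.join(s.sources)})"
                     for s in statements)

profile = [Statement("fact", "5 years in data engineering", ["cv.pdf p.1"]),
           Statement("fact", "led a migration project", [])]
print(render_profile(profile))  # non-response: the second fact has no source
```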

The framing of these limits is detailed at /interpretive-risk/perimeter/.

Recognizing exposure before the incident

The exposure signal is not merely a visible error. It is an AI that produces an implicit evaluation (or ranking) without a reconstructible justification chain. The structuring question becomes: if a candidate contests, what can be explained without fiction? To identify whether an organization is exposed, see /interpretive-risk/who/.
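
Operationally, this can be tested by trying to reconstruct the chain after the fact: given a stored recommendation, either its supporting evidence can be retrieved, or the output is indefensible. A short sketch with a hypothetical in-memory store:

```python
# Hypothetical audit store: recommendation id -> (claim, source excerpt) pairs.
JUSTIFICATION_STORE: dict[str, list[tuple[str, str]]] = {
    "rec-001": [("5 years in data engineering", "cv.pdf: '2018-2023, Data Engineer'")],
    "rec-002": [],  # an output was produced, but nothing was recorded
}

def explain(recommendation_id: str) -> str:
    """Reconstruct the justification chain, or admit that it does not exist."""
    chain = JUSTIFICATION_STORE.get(recommendation_id, [])
    if not chain:
        return f"{recommendation_id}: no reconstructible chain -> indefensible if contested"
    return "\n".join(f"{claim} <- {src}" for claim, src in chain)

print(explain("rec-002"))  # the exposure signal: an output with nothing behind it
```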

Anchor

This typical case illustrates a key point: in HR, a plausible inference can be experienced as a real evaluation. Without bounding, hierarchy, traceability, and legitimate non-response, AI can produce persuasive coherence that becomes contestable, therefore legally and reputationally risky.