
Collection: Definition
Type: Definition
Version: 1.0
Stabilization: 2026-02-19
Published: 2026-02-19
Updated: 2026-03-13

Non-agentic systems

Non-agentic systems are AI systems that produce an output (a response, summary, classification, or recommendation) without planning or executing a tool-driven sequence of actions toward an objective. They interpret and generate, but are not designed to act autonomously in an environment.

In interpretive governance, this distinction is foundational: a non-agentic system can produce distortions, but its primary risk is a false or ungoverned output. An agentic system can turn an ungoverned output into an action.


Definition

A non-agentic system is one that:

  • produces output in one or more turns, but without an action-execution loop;
  • does not decompose a task into tool-driven steps to accomplish it;
  • does not call (or orchestrate) tools autonomously to reach an objective;
  • has no implicit mandate to act on an external state (write, publish, modify, purchase, trigger).

A non-agentic system can nonetheless be connected to sources (e.g. RAG) or produce recommendations. Being non-agentic simply means the system has no autonomous capacity to execute chains of actions.
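The four criteria above can be expressed as a capability checklist. The following is a minimal illustrative sketch; the class and field names are assumptions for this example, not part of any standard.

```python
from dataclasses import dataclass


@dataclass
class SystemCapabilities:
    """Illustrative capability flags (names are assumptions for this sketch)."""
    has_action_execution_loop: bool   # runs a loop that executes actions
    plans_tool_driven_steps: bool     # decomposes a task into tool-driven steps
    calls_tools_autonomously: bool    # invokes or orchestrates tools on its own
    mandate_on_external_state: bool   # may write, publish, modify, purchase, trigger


def is_non_agentic(caps: SystemCapabilities) -> bool:
    """A system is non-agentic only if it has none of the agentic capabilities."""
    return not (
        caps.has_action_execution_loop
        or caps.plans_tool_driven_steps
        or caps.calls_tools_autonomously
        or caps.mandate_on_external_state
    )


# A RAG assistant that only retrieves and answers remains non-agentic:
rag_assistant = SystemCapabilities(False, False, False, False)
assert is_non_agentic(rag_assistant)
```

Note that a single capability suffices to leave non-agentic territory: a system that only calls tools autonomously, without any planning, already fails the checklist.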


Why this is critical in AI systems

  • Risk remains interpretive: impact depends on what the user does with the output.
  • Governance remains necessary: authority boundary, interpretability perimeter, legitimate non-response.
  • Confusion is frequent: calling a system that executes nothing an “agent” blurs risk management.

Non-agentic vs agentic

  • Non-agentic: interprets and generates an output. Does not plan or execute tool-driven, objective-oriented actions.
  • Agentic: plans, sequences, calls tools, executes actions, and can change an external state.

The same AI can exist in both modes depending on the architecture: a chat LLM is non-agentic; the same LLM, integrated into a tool orchestrator, becomes agentic.
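This architectural point can be sketched as follows. The `complete` function is a hypothetical stand-in for an LLM completion call, and the tool and state names are assumptions for the illustration; what matters is that the same model sits inside both wrappers.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    return f"answer to: {prompt}"


# Non-agentic mode: one interpretation, one output, no action loop.
def chat(prompt: str) -> str:
    return complete(prompt)


# Agentic mode: the same model inside a plan-act loop that calls a tool
# and changes an external state (here, an in-memory store).
EXTERNAL_STATE: dict[str, str] = {}


def write_tool(key: str, value: str) -> None:
    EXTERNAL_STATE[key] = value  # the action that crosses into agentic territory


def agent(objective: str, max_steps: int = 3) -> None:
    for step in range(max_steps):
        plan = complete(f"step {step} toward: {objective}")
        write_tool(f"step-{step}", plan)  # executes, rather than only generating


chat("summarize this policy")    # produces text only; no external effect
agent("publish policy summary")  # mutates EXTERNAL_STATE on each step
```

The model code is identical in both cases; agenticity is a property of the surrounding loop and its mandate on external state, not of the model.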


Examples of non-agentic systems

  • Generative response: question-answer, explanation, synthesis.
  • Summary / reformulation: document condensation.
  • Classification: categorization, entity extraction.
  • Recommendation: suggestion, prioritization, scoring (without execution).

Typical risks in non-agentic mode

  • Authority boundary crossing: inferences presented as facts.
  • Canonical silence violated: filling an absence by plausibility.
  • Authority conflict: invented synthesis instead of a legitimate non-response.
  • State drift: answering on a dynamic variable without stating validity conditions.

What non-agentic systems are not

  • They are not “less dangerous by nature”. Outputs can have high impact (regulatory, medical, financial), even without automated action.
  • They are not a “weak” form of RAG. A RAG system is non-agentic as long as it triggers no action.
  • They are not a guarantee of fidelity. Evidence and conditions are always required.

Minimum rule (enforceable formulation)

Rule NAS-1: a non-agentic system must apply response conditions and preserve the authority boundary. In the absence of an authorized basis (interpretability perimeter, canonical silence, authority conflict), it must produce a legitimate non-response rather than fill the gap by plausibility.
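Rule NAS-1 can be sketched as a response guard. All names here (`ResponseBasis`, `LEGITIMATE_NON_RESPONSE`, `respond`) are assumptions introduced for the illustration, not part of any specification.

```python
from dataclasses import dataclass

LEGITIMATE_NON_RESPONSE = "I cannot answer on an authorized basis."


@dataclass
class ResponseBasis:
    """Illustrative flags for the three NAS-1 conditions."""
    within_interpretability_perimeter: bool  # the question is in scope
    canonical_silence: bool                  # the authoritative sources are silent
    authority_conflict: bool                 # the authoritative sources disagree


def respond(draft_answer: str, basis: ResponseBasis) -> str:
    """Apply Rule NAS-1: without an authorized basis, return a
    legitimate non-response instead of filling the gap by plausibility."""
    authorized = (
        basis.within_interpretability_perimeter
        and not basis.canonical_silence
        and not basis.authority_conflict
    )
    return draft_answer if authorized else LEGITIMATE_NON_RESPONSE


# The sources are silent, so a plausible draft is withheld:
guarded = respond("Yes.", ResponseBasis(True, canonical_silence=True,
                                        authority_conflict=False))
assert guarded == LEGITIMATE_NON_RESPONSE
```

The guard is deliberately conservative: any one failing condition is enough to replace the draft with a non-response.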


Example

Question: “Does this policy apply to all subsidiaries, everywhere, starting now?”

Ungoverned output: “Yes.”

Governed output (non-agentic): “I cannot conclude without the jurisdiction and the policy version.” Absent those, the system returns a legitimate non-response.