Agentic
Agentic designates an execution mode where an AI system does not merely produce a response, but plans, sequences, and executes actions (tools, API calls, navigation, writing, decisions) based on an objective, often over multiple steps, with a varying degree of autonomy.
In interpretive governance, agentic mode drastically raises the stakes: an interpretation becomes an action. A merely plausible output can therefore produce a real effect. Hence the importance of the authority boundary, response conditions, and legitimate non-response.
Definition
An agentic system is one where:
- the AI possesses planning capacity (it can decompose a task into steps);
- it can call tools (browsers, APIs, databases, internal systems);
- it can chain actions over multiple turns;
- it produces outputs that can be operational (change a state, write, publish, trigger a flow).
Agentic mode can exist in closed environments (internal agent) or on the open web (agent navigating and relying on external sources).
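The definition above can be sketched as a minimal agent loop: plan, then chain tool calls whose outputs can change state. This is an illustrative assumption, not a real framework; `plan`, `Step`, `run_agent`, and the tool registry are all hypothetical names.

```python
# Minimal sketch of an agentic loop. All names here (Step, plan,
# run_agent, the tools dict) are illustrative, not a real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str   # which tool to call
    args: dict  # arguments inferred from the objective

def plan(objective: str) -> list[Step]:
    """Decompose an objective into ordered steps (stubbed for illustration)."""
    return [Step(tool="lookup", args={"query": objective}),
            Step(tool="write", args={"target": "draft"})]

def run_agent(objective: str, tools: dict[str, Callable[..., str]]) -> list[str]:
    """Chain tool calls over multiple steps; each output may change real state."""
    trace = []
    for step in plan(objective):
        result = tools[step.tool](**step.args)
        trace.append(f"{step.tool}: {result}")
    return trace

# Stub tools standing in for browsers, APIs, databases, internal systems.
tools = {"lookup": lambda query: f"found data for {query}",
         "write": lambda target: f"wrote {target}"}
print(run_agent("update policy", tools))
```

The point of the sketch is the loop itself: each iteration turns an interpretation (a planned step) directly into an action, which is exactly why the governance conditions below matter.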
Why this is critical in AI systems
- Interpretation = execution: an interpretation error materializes as an action.
- Error cost increases: silent errors, irreversible decisions, implicit liability.
- Attacks become actionable: interpretive capture, contamination, and collisions can steer the agent's actions.
Typical risks in agentic mode
- Authority boundary crossing: the agent infers an authority it was never granted and acts as if it had been declared.
- State drift: the agent acts on an outdated state (price, stock, status).
- Authority conflict: the agent chooses a source without an arbitration rule.
- Absence of evidence: no enforceable interpretation trace explains the action.
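State drift in particular lends itself to a mechanical guard: refuse to act when the state the agent observed (price, stock, status) is superseded or too old. A minimal sketch, with assumed names (`StateDriftError`, `guard_state` are not from any real library):

```python
# Sketch of a state-drift guard: the agent may only act if the state
# it observed is still current and recent. Names are hypothetical.
from datetime import datetime, timedelta, timezone

class StateDriftError(Exception):
    """Raised when the agent tries to act on an outdated state."""

def guard_state(observed_version: str, current_version: str,
                observed_at: datetime, max_age: timedelta) -> None:
    """Refuse the action if the observed state is superseded or stale."""
    if observed_version != current_version:
        raise StateDriftError(
            f"state changed: observed {observed_version}, current is {current_version}")
    if datetime.now(timezone.utc) - observed_at > max_age:
        raise StateDriftError("observation too old to act on")

# Usage: call the guard immediately before the action, not at planning time.
guard_state("v2", "v2", datetime.now(timezone.utc), timedelta(minutes=5))
```

Checking at action time rather than planning time matters because the state can drift between the two.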
Practical indicators (symptoms)
- The system executes without requiring minimum conditions (version, context, authorization).
- The system acts on undeclared assumptions (plausibility transformed into action).
- The system does not produce an interpretation trace for decisions.
- The system favors secondary sources over the canonical source.
What agentic is not
- It is not a simple chatbot. A chatbot responds. An agent executes.
- It is not RAG. RAG retrieves information; an agent acts on that information.
- It is not necessarily autonomous. Agentic mode can be supervised (human in the loop).
Minimum rule (enforceable formulation)
Rule AG-1: any agentic execution must be conditioned by explicit response conditions, a strict authority boundary, and a minimum interpretation trace of decisions. Failing that, the agent must produce a legitimate non-response or request human validation before action.
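Rule AG-1 can be read as a pre-execution gate with three outcomes: execute, legitimate non-response, or escalation to a human. The sketch below is one possible encoding, under assumed names (`ActionRequest`, `ag1_gate`, `Decision` are illustrative, not a standard):

```python
# Hedged sketch of rule AG-1 as a pre-execution gate. All names are
# illustrative assumptions; the rule itself comes from the text above.
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    EXECUTE = "execute"
    NON_RESPONSE = "legitimate non-response"
    HUMAN_VALIDATION = "request human validation"

@dataclass
class ActionRequest:
    conditions_met: bool        # explicit response conditions satisfied?
    within_authority: bool      # inside the declared authority boundary?
    trace: list = field(default_factory=list)  # minimum interpretation trace
    irreversible: bool = False  # does the action change state permanently?

def ag1_gate(req: ActionRequest) -> Decision:
    """Apply AG-1: condition execution, else non-response or human validation."""
    if not (req.conditions_met and req.within_authority and req.trace):
        return Decision.NON_RESPONSE
    if req.irreversible:
        return Decision.HUMAN_VALIDATION
    return Decision.EXECUTE
```

Note that an empty interpretation trace blocks execution just as a missing condition does: under AG-1 the trace is a precondition of action, not an optional log.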
Example
Case: an agent must “update” a policy or publish content.
Risk: it deduces an undeclared intent, or acts on an obsolete version.
Governed output: require version/date, produce an interpretation trace, request validation if the action is irreversible.
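The governed output above can be sketched as a publish function that refuses to act without version/date, records an interpretation trace, and escalates irreversible actions. `publish_policy` and its parameters are hypothetical, not part of any real system:

```python
# Illustrative sketch of the governed output: require version/date,
# record an interpretation trace, escalate irreversible actions.
# publish_policy and its return shape are assumptions for illustration.
from datetime import date
from typing import Optional

def publish_policy(text: str, version: Optional[str],
                   as_of: Optional[date], irreversible: bool) -> dict:
    # Response condition: no declared version/date, no action.
    if version is None or as_of is None:
        return {"status": "non-response",
                "reason": "missing version/date: cannot verify state"}
    # Minimum interpretation trace explaining the action.
    trace = [f"interpreting policy {version} as of {as_of.isoformat()}"]
    # Irreversible actions require human validation before execution.
    if irreversible:
        return {"status": "awaiting human validation", "trace": trace}
    return {"status": "published", "trace": trace}
```

The three branches map one-to-one onto the governed output: missing conditions yield a legitimate non-response, every action carries a trace, and irreversibility triggers validation.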