Every enterprise AI governance framework published in the last two years includes some version of the phrase "human in the loop." The principle is straightforward: for consequential AI decisions, a human should review and approve before action is taken.
The problem is that "human in the loop" is a policy statement, not a technical implementation. Writing it into a governance document does not make it happen. Building the infrastructure to enforce it is a different project entirely, and most organizations have not done it.
What "Human in the Loop" Actually Requires
For human oversight to be real — not performative — four things need to exist. First, the AI system must be able to detect when an action is consequential enough to require human review. Second, the system must be able to pause execution before taking the action. Third, a human must receive the request with sufficient context to make an informed decision. Fourth, the system must resume or abort based on the human's decision, and that decision must be logged.
Each of these requirements is a technical problem, not a policy problem.
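The four requirements can be sketched as a single gate around action execution. This is a minimal illustration, not any vendor's implementation; the function names, the action schema, and the rule inside `is_consequential` are all hypothetical, and the human reviewer is stubbed out.

```python
import enum
import json
import time
import uuid


class Decision(enum.Enum):
    APPROVE = "approve"
    DENY = "deny"


def is_consequential(action):
    # Requirement 1: detect when an action needs human review.
    # Illustrative rule: destructive or access-changing verbs qualify.
    return action["verb"] in {"contain", "block", "revoke", "close"}


def request_review(action):
    # Requirement 3: deliver the request, with context, to a human.
    # Stubbed here; a real system would page Slack, email, or a queue
    # and block until the reviewer responds.
    print("review requested:", json.dumps(action))
    return Decision.APPROVE  # stand-in for the human's answer


def audit_log(entry):
    # Requirement 4 (second half): the decision must be logged.
    print("audit:", json.dumps(entry))


def execute_with_oversight(action, perform):
    record = {"id": str(uuid.uuid4()), "action": action, "ts": time.time()}
    if is_consequential(action):
        # Requirement 2: pause execution before the action is taken.
        decision = request_review(action)
        record["decision"] = decision.value
        audit_log(record)
        if decision is Decision.DENY:
            return None  # abort on denial
    # Requirement 4 (first half): resume only after the decision.
    return perform(action)
```

Low-risk actions pass straight through; consequential ones block on the reviewer. Everything interesting in a real deployment lives inside the stubs.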
The Gap in Security Automation
In security operations, consequential actions are everywhere. Containing a compromised host. Blocking a suspicious IP range. Revoking a user's access. Closing a P1 incident. Each of these actions, taken incorrectly, has immediate operational consequences that can be difficult or impossible to reverse.
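One way to make that judgment explicit is a policy table that names each action type and whether it requires review. The table below is purely illustrative; the action names, fields, and default-deny choice are assumptions, not taken from any real product.

```python
# Hypothetical policy table for the action types named above.
POLICY = {
    "contain_host":   {"requires_review": True},
    "block_ip_range": {"requires_review": True},
    "revoke_access":  {"requires_review": True},
    "close_p1":       {"requires_review": True},
    "enrich_alert":   {"requires_review": False},  # read-only, low risk
}


def requires_human(action_type):
    # Fail closed: any action the policy does not know about
    # is treated as requiring review.
    return POLICY.get(action_type, {"requires_review": True})["requires_review"]
```

The fail-closed default matters: an agent that invents a new action type should hit the reviewer, not slip past the gate.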
How ARX Implements It
ARX's SDK intercept layer sits between your agent's code and the security tools it calls. Every API call is intercepted before execution. The policy engine evaluates the call against the agent's declared permission scope and a dynamic risk score computed from session context. For high-risk actions, the system suspends the agent's execution and sends a structured approval request to a human reviewer via Slack.
The reviewer sees the action requested, the risk score, what the agent has already done in the current session, and the specific resource being affected. They click Approve or Deny. The agent resumes or aborts. The decision is logged to the immutable audit trail.
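The intercept-and-suspend flow described above can be sketched as a decorator around tool calls. To be clear, this is not ARX's SDK or API; the threshold, the risk heuristic, the call schema, and the stubbed Slack round-trip are all illustrative assumptions meant only to show the shape of the mechanism.

```python
import functools
import json
import time

RISK_THRESHOLD = 0.7  # illustrative cutoff, not a real product default


def risk_score(call, session):
    # Stand-in for a dynamic risk model: a destructive verb and a
    # longer session history both raise the score.
    base = 0.9 if call["name"] in {"contain_host", "revoke_access"} else 0.2
    return min(1.0, base + 0.05 * len(session["actions"]))


def send_approval_request(call, score, session):
    # A real implementation would post to Slack and block on the
    # reviewer's reply; here the answer is stubbed as "approve".
    context = {
        "action": call,
        "risk_score": score,
        "session_history": session["actions"],
    }
    print("approval request:", json.dumps(context))
    return "approve"


def intercept(session, audit):
    """Wrap a tool call so every invocation passes the policy gate."""
    def wrap(fn):
        @functools.wraps(fn)
        def guarded(call):
            score = risk_score(call, session)
            entry = {"call": call, "risk": score, "ts": time.time()}
            if score >= RISK_THRESHOLD:
                # Suspend: execution waits on the human's decision.
                entry["decision"] = send_approval_request(call, score, session)
                audit.append(entry)  # decision goes to the audit trail
                if entry["decision"] != "approve":
                    return None  # abort on denial
            session["actions"].append(call["name"])
            return fn(call)  # resume
        return guarded
    return wrap
```

A usage sketch: wrap the agent's tool function once, and every call is scored; only those above the threshold suspend for review, while the rest execute immediately and still accrue to the session history that feeds the next score.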
This is not a policy. It is infrastructure. Human in the loop — actually enforced.