AI GOVERNANCE

Why "Human in the Loop" Means Nothing Without Infrastructure to Enforce It

The gap between AI governance policy and AI governance reality — and how to close it.

5 min read

By Mershard J.B. Frierson · Founder, ARX

Every enterprise AI governance framework published in the last two years includes some version of the phrase "human in the loop." The principle is straightforward: for consequential AI decisions, a human should review and approve before action is taken.

The problem is that "human in the loop" is a policy statement, not a technical implementation. Writing it into a governance document does not make it happen. Building the infrastructure to enforce it is a different project entirely, and most organizations have not done it.

What "Human in the Loop" Actually Requires

For human oversight to be real — not performative — four things need to exist. First, the AI system must be able to detect when an action is consequential enough to require human review. Second, the system must be able to pause execution before taking the action. Third, a human must receive the request with sufficient context to make an informed decision. Fourth, the system must resume or abort based on the human's decision, and that decision must be logged.

Each of these requirements is a technical problem, not a policy problem.
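To make the four requirements concrete, here is a minimal sketch of an oversight wrapper. This is illustrative only, not ARX's SDK; every name here (`Action`, `execute_with_oversight`, the callback parameters) is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVED = "approved"
    DENIED = "denied"


@dataclass
class Action:
    name: str       # e.g. "revoke_access"
    resource: str   # the specific resource affected
    context: dict   # session context shown to the reviewer


def execute_with_oversight(action, is_consequential, request_review, log, run):
    """Enforce the four requirements: detect, pause, review, resume/abort, log."""
    if not is_consequential(action):       # 1. detect consequential actions
        return run(action)
    verdict = request_review(action)       # 2 & 3. pause and ask a human
    log(action, verdict)                   # 4. record the decision
    if verdict is Verdict.APPROVED:
        return run(action)                 # resume on approval
    return None                            # abort on denial
```

The key property is that the pause in step 2 is structural: the consequential code path cannot run until `request_review` returns, so the human's decision is a precondition, not a suggestion.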


The Gap in Security Automation

In security operations, consequential actions are everywhere. Containing a compromised host. Blocking a suspicious IP range. Revoking a user's access. Closing a P1 incident. Each of these actions, taken incorrectly, has immediate operational consequences that can be difficult or impossible to reverse.

Without infrastructure that intercepts these actions before execution, your "human in the loop" policy is a sentence in a document. It is not a control.

How ARX Implements It

ARX's SDK intercept layer sits between your agent's code and the security tools it calls. Every API call is intercepted before execution. The policy engine evaluates the call against the agent's declared permission scope and a dynamic risk score computed from session context. For high-risk actions, the system suspends the agent's execution and sends a structured approval request to a human reviewer via Slack.
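The evaluation logic described above (scope check first, then risk) can be sketched as follows. This is a simplified model, not ARX's actual policy engine; the threshold value and field names are assumptions:

```python
from dataclasses import dataclass


@dataclass
class PolicyDecision:
    allow: bool          # execute immediately
    needs_review: bool   # suspend and escalate to a human
    reason: str


RISK_THRESHOLD = 0.7  # hypothetical escalation threshold


def evaluate(call_name: str, declared_scope: set, risk_score: float) -> PolicyDecision:
    """Evaluate an intercepted API call: permission scope first, then risk."""
    if call_name not in declared_scope:
        # Out-of-scope calls are denied outright, never escalated.
        return PolicyDecision(False, False, "outside declared permission scope")
    if risk_score >= RISK_THRESHOLD:
        # In scope but high risk: suspend and request human approval.
        return PolicyDecision(False, True, f"risk {risk_score:.2f} >= {RISK_THRESHOLD}")
    return PolicyDecision(True, False, "within scope, low risk")
```

Note the ordering: a call outside the declared scope is denied without ever reaching a human, which keeps reviewers focused on genuinely ambiguous high-risk calls rather than outright policy violations.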

The reviewer sees the action requested, the risk score, what the agent has already done in the current session, and the specific resource being affected. They click Approve or Deny. The agent resumes or aborts. The decision is logged to the immutable audit trail.
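A structured approval request of that shape might look like the sketch below, which approximates Slack's Block Kit message format. The builder function and its parameters are hypothetical, not ARX's API:

```python
def approval_request(action, risk_score, session_history, resource):
    """Build a reviewer-facing approval message in Slack Block Kit shape."""
    history = "\n".join(f"• {step}" for step in session_history)
    return {
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": (
                        f"*Action requested:* `{action}`\n"
                        f"*Resource:* `{resource}`\n"
                        f"*Risk score:* {risk_score:.2f}"
                    ),
                },
            },
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": f"*Session so far:*\n{history}"},
            },
            {
                "type": "actions",
                "elements": [
                    {"type": "button", "style": "primary", "action_id": "approve",
                     "text": {"type": "plain_text", "text": "Approve"}},
                    {"type": "button", "style": "danger", "action_id": "deny",
                     "text": {"type": "plain_text", "text": "Deny"}},
                ],
            },
        ]
    }
```

Everything the reviewer needs (action, resource, risk, session history) travels in the request itself, so the decision can be made without switching tools.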

This is not a policy. It is infrastructure. Human in the loop — actually enforced.

