4 Comments
Eleanor Brown

I use the terms in/on/off the loop as an easy way into discussing key concepts about oversight. However, there is a tendency to see 'in the loop' as always the best and safest option, and as you point out, a complex structure sits underneath any AI system and in-the-loop decision point. Declaring something 'in the loop' masks that complexity, but too often I see it adopted as the default position without a real understanding of what it means in practice.

Lee Bryant

Great point. We need to engage with the detail and not just comfort ourselves with warm words that suggest there is meaningful human oversight when the reality might be a lot more complex.

Matt Searles

Your CISO contact is right — human oversight alone can't keep pace. But the answer isn't removing humans from the loop. It's building infrastructure where the loop is structural rather than procedural.

The current model: slow regulation → policy documents → manual compliance → compliance theatre. Your post names exactly why this fails. Regulatory capture, inertia, insufficient power to enforce.

The alternative: governance as architecture. Every agent action is a signed event on a hash-chained causal graph. Authority scopes are checked before execution, not audited after the fact. Values conflicts halt the system and escalate to humans automatically. The human stays in the loop for the decisions that matter — values, identity, existence — while routine operations run with structural accountability built into the data layer.

Not "policy and technology and education working together." The policy IS the technology. Encoded as constraints, not documents. Verifiable by walking the chain, not by trusting the compliance team.
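A minimal sketch of the idea: an append-only event log where each record commits to its predecessor's hash, and a pre-execution authority check that escalates out-of-scope actions to a human instead of running them. All names here (`CausalChain`, `Governor`) are hypothetical illustrations, not the actual architecture from the linked posts, and event signing is omitted for brevity (only hash chaining is shown):

```python
import hashlib
import json


class CausalChain:
    """Append-only event log; each event commits to the previous event's hash."""

    def __init__(self):
        self.events = []

    def append(self, event: dict) -> dict:
        prev_hash = self.events[-1]["hash"] if self.events else "genesis"
        body = {"prev": prev_hash, **event}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        record = {**body, "hash": digest}
        self.events.append(record)
        return record

    def verify(self) -> bool:
        """Walk the chain and recompute every hash, as an auditor would."""
        prev = "genesis"
        for record in self.events:
            body = {k: v for k, v in record.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev"] != prev or recomputed != record["hash"]:
                return False
            prev = record["hash"]
        return True


class Governor:
    """Checks authority scope *before* execution; out-of-scope actions escalate."""

    def __init__(self, allowed_actions: set):
        self.allowed_actions = allowed_actions
        self.chain = CausalChain()

    def execute(self, agent: str, action: str):
        if action not in self.allowed_actions:
            # Halt and record the escalation itself on the chain.
            self.chain.append({"agent": agent, "action": action, "status": "escalated"})
            raise PermissionError(f"'{action}' requires a human decision")
        self.chain.append({"agent": agent, "action": action, "status": "executed"})
```

The point of the sketch is the ordering: the scope check and the chain append happen in the execution path itself, so accountability is structural rather than a report written after the fact.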

I've been building this — 38 posts on the architecture at mattsearles2.substack.com. The latest walks through cross-domain accountability chains where a single event traces through four governance domains on one hash chain.

Lee Bryant

Thanks Matt. Yes, I would agree the future is governance as architecture, and hopefully that will enable human-in-the-loop to play a meaningful role in escalation and verification. Of course, this relies on codifying and maintaining the rulesets that apply. I think education also needs to play a role, especially if we want people to be able to work at the speed needed in many cases, but also to avoid some of the behaviours that trigger governance actions in the first place. I will dig into your archive - thanks for sharing.