Shift*Academy

Who Decides (and how) with AI at the Table?

When machines ask good questions, it’s leaders who are forced to show their working.

Cerys Hearsey
Jan 27, 2026

The first wave of enterprise AI was about exploring basic capabilities - what these systems can do, and how well they summarise, simulate, or suggest answers - and marvelling at the magic of reports written in seconds, tasks automated intelligently, and insights surfaced or synthesised. But the real test of these systems begins when they go beyond assisting us and start participating.

Because once AI starts making recommendations for action, the question changes. It’s no longer “Can AI do this?” but “Who decides what to do next?”

But what happens when a model recommends a risky course of action, or a solution that sits so awkwardly between areas of human accountability that no one wants to sign off on it? Or what if a decision is deferred to “the system,” but the outcome isn’t acceptable because AI logic clashes with human values, judgement, or simply internal politics?

These are not edge cases; they are the future shape of organisational life.

And they require a new kind of leadership capability that can navigate ambiguity, accept visibility, and stand behind decisions when the machine suggests but the human needs to choose.

This edition explores what happens in those moments, where authority is tested, reframed, or exposed. Because even in an age of recommendation engines and autonomous agents, leadership doesn’t just disappear - it is visible in a thousand small ways.

To understand how this tension shows up inside organisations, we can look at a few situations where AI recommendations, risk escalations, or system logic intersect with human judgement. These aren’t hypothetical futures; they are already happening in teams using early-stage agents, AI-powered copilots, or automated governance tools. In each case, what’s surfaced is a gap in how decision-making authority is understood, expressed, or avoided.

The Friction of Recommendation

A product operations team is using an AI agent to monitor campaign performance in real time. It sees the numbers dropping and proactively recommends reallocating 40% of the remaining budget to a higher-performing campaign. It’s not a bad idea.

The logic checks out. The maths is solid. The performance forecasts are reasonable. But when the recommendation hits the team Slack channel, no one replies. The decision sits there, as a dozen eyes quietly hope someone else will say yes.

Everyone agrees it might be the right call, but no one wants to own the downside if it’s not.

Eventually, the decision is escalated to the marketing lead, who, unsure of the agent’s training data and uncomfortable with the lack of human input, stalls. “Let’s review this in next week’s performance review meeting,” they say.

By then, the window of opportunity is gone.

The agent didn’t fail. The team didn’t disagree. But the system revealed something fragile: a lack of decision clarity. Who had the right to say yes? Who would have been held accountable if it went wrong?

The AI surfaced the question, but the organisation wasn’t ready to answer it.

Escalation to Nowhere

A compliance agent scans procurement workflows daily, using embedded logic to flag unusual patterns and escalate anything that crosses a defined threshold of financial risk. It was designed with guardrails, trained on past audit findings, and approved by risk and finance leadership.

One morning, it triggers an alert on a contract being pushed through unusually fast, with limited vendor competition, high value, and vague justification. The agent does exactly what it was built to do: escalate.

The escalation is routed to the “Responsible DRI” (directly responsible individual) for commercial risk in the workflow system: a role defined in theory but, in practice, unstaffed. The field had been populated with a generic group alias months earlier as a placeholder.
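To make the mechanics concrete, here is a minimal sketch of the kind of embedded logic such an agent might run. The thresholds, field names, and placeholder alias are illustrative assumptions, not details of the actual system described above.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    value_gbp: float
    days_in_procurement: int
    competing_bids: int
    justification: str

# Illustrative thresholds; a real agent would load the guardrails
# approved by risk and finance leadership.
HIGH_VALUE_GBP = 250_000
FAST_TRACK_DAYS = 5
MIN_COMPETING_BIDS = 2

def risk_flags(contract: Contract) -> list[str]:
    """Return the reasons a contract crosses the defined risk threshold."""
    flags = []
    if contract.value_gbp >= HIGH_VALUE_GBP:
        flags.append("high value")
    if contract.days_in_procurement <= FAST_TRACK_DAYS:
        flags.append("pushed through unusually fast")
    if contract.competing_bids < MIN_COMPETING_BIDS:
        flags.append("limited vendor competition")
    if len(contract.justification.split()) < 20:
        flags.append("vague justification")
    return flags

def escalate(contract: Contract, owner: str) -> None:
    """Route the alert to whoever the workflow system lists as the owner.

    If the owner field holds a placeholder group alias rather than a
    named person, the escalation is delivered but claimed by no one.
    """
    reasons = risk_flags(contract)
    if reasons:
        print(f"ESCALATION to {owner}: {', '.join(reasons)}")

# The owner field was populated with a generic alias, so the alert lands nowhere useful.
escalate(
    Contract(value_gbp=480_000, days_in_procurement=3, competing_bids=1,
             justification="Strategic supplier, details to follow."),
    owner="commercial-risk-group@example.com",
)
```

The logic is sound; the failure is that nothing in the code, or in the organisation around it, checks whether the owner is a person who will actually claim the decision.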

The email goes out; no one replies. The Slack alert is marked as read, but no one takes action.

Eventually, the agent escalates again, this time to the COO’s office, with the subject line “Urgent escalation: contractual risk flag – no action taken.”

The COO forwards the alert to a special projects lead, with a note: “Can someone look into this?” That person, unclear on the context and unwilling to step into risk exposure, quietly asks around and decides to let it lie, so nothing happens.

No one made a bad decision. No one disagreed with the agent’s logic. The system escalated precisely as designed. But what it revealed was an accountability void, an organisational structure not built to absorb machine-generated urgency.

In the post-mortem weeks later, someone remarks, “It wasn’t clear who owned the final call.” Escalation makes authority visible, not just who’s in charge, but whether anyone actually claims the role when it matters.

Override at the Edge

A talent acquisition team is trialling a hiring assistant. The model has been trained on historical performance data, role descriptions, feedback cycles, and even peer review narratives to help shortlist candidates. It’s not making the final call, just ranking applicants and flagging promising fits for early interview rounds.

For the latest role, a team lead in a high-performing engineering unit, the model surfaces a top candidate. On paper, everything fits: prior experience, key skills, even past indicators of leadership potential. The system flags the match with high confidence and generates a draft outreach email.

But the hiring manager hesitates. They’ve read the CV, seen the recommendation, and something doesn’t sit right. Not because the data is wrong, but because the story is missing.

The candidate comes from a firm known for individual heroics, not team-based execution. Their references are glowing, but describe someone highly self-directed. The manager, thinking about the culture of peer coaching and system-level thinking their team relies on, decides to pause. They veto the recommendation not because the model failed, but because fit isn’t measurable in metrics alone.

The override sparks an internal debate. Some see it as bias, overruling the model based on gut feeling. Others see it as leadership, defending the unspoken traits that hold the team together. Eventually, the team adjusts the agent’s prompts to ask for more behavioural context in future matches.

But what it exposed was this:

  • The model was confident.

  • The manager had doubts.

  • The decision revealed the organisation’s values, not its logic.

Override moments like this are opportunities to surface implicit criteria, lived experience, and the difference between efficiency and culture.

Why This Matters Now

Many leadership teams are investing heavily in AI pilots, automation initiatives, and operating model redesigns, only to find that progress stalls in familiar places: decisions are delayed, accountability is unclear, or actions are taken without clear sponsorship.

These aren’t just change management issues, but symptoms of an outdated decision architecture. When authority isn’t designed into workflows, the friction multiplies. Performance stalls. Risk accumulates. High-potential employees hesitate. And AI can’t bridge the gap, no matter how powerful the model.

For senior leaders, this is both a problem and an opportunity: clarify decision rights now, and you’ll move faster, govern better, and avoid building brittle, unaccountable systems at scale.

When Authority Becomes Visible

Informal or implicit processes shaped by social cues or seniority will come under scrutiny and strain as machines begin to recommend actions or escalate issues. The example scenarios above all point to the same underlying reality: decisions are becoming part of the infrastructure. They need to be designed rather than assumed.

In traditional settings, authority often functions through consensus or deferred judgement. Sometimes responsibilities are unspoken and approvals are granted informally. But in an AI-augmented environment, recommendations are made explicitly, escalations are timestamped, and decision logs form part of the record. The system may not be able to enforce accountability, but it will increasingly expose its absence.

This shift introduces a new kind of design work: the architecture of decision-making.

Organisations must now think carefully about who holds the right to act in different contexts, how that authority is granted or delegated, and what happens when machine logic collides with human ambiguity. It is no longer sufficient to assume that leadership will step in when needed. That assumption needs to be built into workflows, roles, and escalation pathways in ways that are legible and operational.

Rather than focusing solely on model performance or technical integration, leaders need to invest in making human judgement legible to the system. This includes defining which decisions can be automated, which require confirmation, and where discretion or interpretation is essential. It also means identifying and clarifying the thresholds for human override, and ensuring there is a feedback loop to refine both the system and the governance around it.
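One way to begin is to treat decision rights as configuration rather than convention, so that both humans and agents can read them. The sketch below is purely illustrative: the decision names, modes, owners, and thresholds are assumptions loosely based on the scenarios above, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    AUTOMATE = "automate"      # the agent may act without confirmation
    CONFIRM = "confirm"        # the agent recommends; a named human approves
    HUMAN_ONLY = "human_only"  # the agent may inform, but never decide

@dataclass
class DecisionRight:
    decision: str
    mode: Mode
    owner: str             # a named role, never a placeholder group alias
    override_rule: str     # when a human may, or must, step in
    feedback_channel: str  # where overrides are logged to refine the system

DECISION_RIGHTS = [
    DecisionRight(
        decision="Reallocate live campaign budget",
        mode=Mode.CONFIRM,
        owner="Marketing Lead",
        override_rule="Any shift above 25% of remaining budget needs sign-off",
        feedback_channel="Weekly performance review log",
    ),
    DecisionRight(
        decision="Escalate procurement risk flag",
        mode=Mode.AUTOMATE,
        owner="Commercial Risk DRI (named individual)",
        override_rule="Owner must acknowledge or reassign within 24 hours",
        feedback_channel="Risk register review",
    ),
    DecisionRight(
        decision="Shortlist candidates for interview",
        mode=Mode.CONFIRM,
        owner="Hiring Manager",
        override_rule="Manager may veto with a recorded rationale",
        feedback_channel="Quarterly review of prompts and selection criteria",
    ),
]
```

Even this small amount of structure makes the gaps visible: an owner field holding a group alias rather than a named person, or a decision with no override rule, is exactly where the scenarios above broke down.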

Authority is no longer something that can live solely in hierarchy or reputation. It must be designed into the way the organisation operates, in forms that both humans and machines can understand.

But how can leaders begin to design for this? Read on for three techniques that can provide a practical starting point.
