Shift*Academy

The Missing Middle of AI Adoption

A practical technique for upgrading the coordination layer where AI actually creates value at the team level

Cerys Hearsey
May 05, 2026

AI has helped speed up the edges of work, but not yet the system that holds it together. Individuals can now produce artefacts at extraordinary speed, but the moment that work needs to be shared, combined, or acted on, the old constraints reappear. Decisions stall. Context fragments. Coordination absorbs the gain.

The result is a strange pattern: more activity, without a corresponding increase in momentum. This is partly a question of where AI has been applied. Most adoption today sits at one of two extremes. At one end, organisations pursue large-scale transformation: redesigning processes, introducing automation, and attempting to remove bottlenecks from entire workflows. At the other, individuals experiment at the edge: using AI to support their own work in small, often isolated ways.

Both directions create value, but they operate at very different levels. Transformation operates at the level of the system. Individual use operates at the level of the task. Between them sits a third space that is far less visible, but more consequential: the shared work of the team.

This is where the next generation of human-AI collaboration is most likely to emerge. Not through a single, centrally approved transformation use case, and not through everyone individually prompting their way through the working day, but through narrow, focused forms of AI support that help teams coordinate specific pieces of shared work more effectively: an agent that maintains context across a handover; an agent that prepares inputs for a recurring decision; an agent that tracks divergence between workstreams. Small capabilities, placed at the points where work connects.

This middle ground is rarely the focus of AI initiatives. It is too granular for transformation programmes, and too collective for individual experimentation. This space needs a clearer name if it is to be worked on deliberately.

The Coordination Layer

The coordination layer is the part of the organisation where work is connected rather than created. It is where individual outputs are brought into relation with one another, where decisions are made in the context of other decisions, and where progress depends not only on the quality of individual contributions, but on how effectively those contributions come together.

This is also the layer where centaur teams become real, or fail to. A team does not become a human-AI team simply because individuals use AI tools, or because a process has been automated somewhere upstream. It becomes one when AI starts to support the shared coordination of work: helping context move, helping decisions stabilise, helping people understand what has changed, what matters, and what needs attention next.

This layer is present in almost every team, although it is rarely named as such. It can be seen in the flow of updates that turn activity into a shared understanding of progress, in the handovers that move work from one role to another, and in the recurring decisions that shape priorities, trade-offs, and direction. It also exists in the less visible forms of coordination — the informal knowledge carried through conversations, habits, and experience, relied upon even when it is never fully articulated.

When the coordination layer works well, decisions feel connected rather than isolated, context is preserved rather than reconstructed, and effort accumulates over time instead of dissipating across boundaries. When it works poorly, work fragments, decisions are revisited, context is repeatedly rebuilt, and progress slows because the connections between individuals are weak.

This layer has always existed, but it has rarely been treated as something that can be deliberately designed. Instead, it tends to emerge through a combination of process, habit, and individual intervention. Managers step in to resolve ambiguity, teams develop informal ways of keeping each other aligned, and work holds together through experience and continuous adjustment.

AI enters this layer in an unusual way, exposing the mostly implicit or informal ways in which work outputs are connected and combined. As long as coordination remains informal and partially invisible, it is difficult to improve in any systematic way and difficult for AI to meaningfully participate in. Systems can generate outputs with increasing speed and quality, but they struggle to integrate those outputs into a flow of work that depends on shared context, judgment, and timing. This helps explain why so many early gains from AI remain local.

The Failure Mode

If the coordination layer is where work comes together, then most current approaches to AI are operating around it rather than within it. This pattern has been visible in previous waves of digital transformation, particularly in how organisations approached the idea of “use cases.”

At the strategic level, use cases focused on large, cross-cutting ambitions, often framed around ideas such as “a single face to the customer.” At the other end, individual use cases were relatively easy to identify and act on. Between these sat a more complex space: the level of key processes and shared workflows, where work moved across teams, functions, and systems. This was where coordination was most critical, and where the underlying structure of work was often least visible.

These process-level interventions touched many people, relied on partially visible forms of coordination, and were frequently underpinned by informal practices that were not fully documented or understood. Changing one part of the flow risked unintended consequences elsewhere: the spreadsheet, workaround, or habit that quietly held a process together. In practice, this meant the middle ground was often left under-explored, not because it lacked value, but because it lacked clarity, ownership, and safe ways to engage with it.

Where organisations made progress, it was typically because this layer was made visible in a structured way. One technique we call social process surrounds does exactly this: it breaks key processes down into their component stages, identifies where collaboration, data sharing, and knowledge flow are failing, and then maps where small, targeted interventions could improve how work connects across those stages.
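To make the technique concrete, here is a minimal sketch of what such a stage-by-stage map might look like in code. This is purely illustrative: the stage names, the three coordination dimensions, and the 1–5 scoring scale are all assumptions for the example, not part of the method as formally defined.

```python
from dataclasses import dataclass

# Hypothetical model of a social-process-surround map: a process broken
# into stages, each scored on how well its coordination dimensions are
# working (1 = failing badly, 5 = healthy).
@dataclass
class Stage:
    name: str
    collaboration: int
    data_sharing: int
    knowledge_flow: int

    def weak_points(self, threshold: int = 2) -> list[str]:
        # Return the coordination dimensions at or below the threshold.
        scores = {
            "collaboration": self.collaboration,
            "data_sharing": self.data_sharing,
            "knowledge_flow": self.knowledge_flow,
        }
        return [dim for dim, score in scores.items() if score <= threshold]

def map_interventions(stages: list[Stage]) -> dict[str, list[str]]:
    # Map each stage to the dimensions that need a targeted intervention,
    # skipping stages where coordination is already healthy.
    return {s.name: s.weak_points() for s in stages if s.weak_points()}

stages = [
    Stage("Intake", collaboration=4, data_sharing=2, knowledge_flow=3),
    Stage("Review", collaboration=1, data_sharing=4, knowledge_flow=2),
    Stage("Delivery", collaboration=4, data_sharing=4, knowledge_flow=4),
]
print(map_interventions(stages))
# {'Intake': ['data_sharing'], 'Review': ['collaboration', 'knowledge_flow']}
```

The point of the exercise is the output: a short list of stages and dimensions where small, targeted interventions are worth designing, rather than a mandate to redesign the whole process.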

The same pattern is now re-emerging in the context of AI. The dominant use case logic still pulls organisations towards the extremes. What is harder to legitimise is the team-level use case: narrow enough to be specific, but collective enough to require shared design.

The consequences are now more visible. Work moves faster at the edges, but slows as it comes together. Outputs arrive in greater volume, but require more effort to interpret, reconcile, and integrate. Decisions are made more quickly in isolation, but take longer to stabilise when they interact with other decisions. As individuals optimise their own tasks, variation increases — and without a shared frame of reference, these local optimisations introduce small inconsistencies that must be resolved through additional coordination.

Managers, often without formal recognition of the role they are playing, become the point at which this gap is managed. As the volume and variability of work increase, so too does the demand placed on this form of intervention. People are expected to adopt AI, but are left to do so individually. They experience gains in their own work, but also an increase in the effort required to stay aligned with others.

Working on the Coordination Layer

If the coordination layer is where work either holds together or breaks down, then the first step is to learn how to see it. This layer does not present itself as a single system that can be redesigned in one move. It is distributed across workflows, roles, artefacts, and habits, and for this reason tends to remain partially invisible in how work is described or measured.

A more effective starting point is not to treat it as a system to be replaced, but as a layer to be observed. This requires a shift in attention away from what work is being done, and towards how work moves and flows: where it slows, where it fragments, where context is lost or reconstructed, and where decisions depend on inputs that are not consistently available.

What becomes visible through this lens is a pattern of disconnection. Information that is assumed to be shared turns out to be fragmented. Workflows that look linear are, in practice, iterative and contingent. These coordination points are where the most useful AI opportunities are likely to sit, not broad agents that “manage the process,” but narrow agents that help with one recurring coordination burden: assembling the right context, preparing a decision, carrying information across a handover, or making divergence visible before it becomes a problem.

From Social Process Surrounds to Agentic Process Surrounds

Focusing on social process surrounds was particularly valuable in precisely the areas that are now coming back into focus: key processes that spanned multiple teams, where coordination was fragmented, partially visible, and often dependent on informal practices that no one wanted to disrupt without fully understanding.

What made this work was not the scale of the intervention, but its placement. By improving how work connected at specific points in the flow, it became possible to increase coherence without changing the underlying process itself. AI changes what this kind of approach can do.

Where social process surrounds focused on improving collaboration between people, an AI-augmented surround can begin to participate more directly in coordination itself. This does not mean creating a single agent to own or optimise an entire process. More often, it means placing narrow agents around the process at specific coordination points, where they can maintain a shared view of work, surface relevant context, support the aggregation of inputs, or highlight divergence from expected patterns.

These coordination points tend to appear in recurring moments: preparing inputs for a decision that draws on multiple sources; handing work from one team to another with enough context to avoid rework; consolidating updates into a shared view of progress; identifying where parallel workstreams are beginning to diverge. In most organisations, these moments are handled through a combination of manual effort, experience, and follow-up.

These are precisely the points where narrow AI capabilities can be most effective:

  • An agent that prepares and structures decision inputs before a meeting.

  • An agent that carries forward context across a handover so it does not need to be reconstructed.

  • An agent that flags inconsistencies between updates from different teams.

  • An agent that maintains a shared, current view of work as it evolves.

Individually, these are small interventions. But because they sit within the flow of coordination, their effects extend beyond the immediate task. The surround becomes less like a static collaboration space and more like a living coordination environment, not replacing the core process, and not removing human judgment, but helping the team carry the connective work that would otherwise depend on memory, manual follow-up, and individual intervention.

These interventions are often smaller than expected, but they operate at points of high leverage. They are exactly the sort of narrow AI interventions that make centaur teams a reality.
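Of the four examples above, the agent that flags inconsistencies between team updates is perhaps the easiest to picture concretely. Below is a minimal sketch of its core logic, with entirely hypothetical team names, work items, and statuses; a real version would read updates from whatever tools the teams already use and would sit alongside, not replace, human judgment.

```python
# Sketch of a narrow coordination agent: compare each team's reported
# status for shared work items and flag items where teams disagree.
def flag_inconsistencies(updates: dict[str, dict[str, str]]) -> list[str]:
    # Invert team -> {item: status} into item -> {team: status}.
    by_item: dict[str, dict[str, str]] = {}
    for team, items in updates.items():
        for item, status in items.items():
            by_item.setdefault(item, {})[team] = status

    flags = []
    for item, statuses in by_item.items():
        if len(set(statuses.values())) > 1:  # teams disagree on this item
            detail = ", ".join(f"{t}: {s}" for t, s in statuses.items())
            flags.append(f"'{item}' reported inconsistently ({detail})")
    return flags

# Hypothetical updates from two teams about the same work items.
updates = {
    "platform": {"auth-migration": "done", "billing-api": "in progress"},
    "product": {"auth-migration": "blocked", "billing-api": "in progress"},
}
for flag in flag_inconsistencies(updates):
    print(flag)
# 'auth-migration' reported inconsistently (platform: done, product: blocked)
```

Note how little the agent does: it does not resolve the disagreement, it only surfaces the divergence early, which is exactly the narrow, high-leverage role the article describes.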

Read on to learn about places to get started.
