Shift*Academy

Same Models, Different Worlds

How outcomes, skills and context combine to create compounding organisational intelligence

Cerys Hearsey
Mar 24, 2026

AI models are becoming shared utilities. When the same intelligence is available to everyone, the real differentiator is no longer the model, but the world around it. If your competitors are using the same models as you, trained on similar data and accessed through the same interfaces, then the intelligence itself cannot be the thing that gives your organisation competitive advantage.

If everyone has access to an identical intelligence layer, where does differentiation actually come from?

At first glance the answer might seem to lie in prompting skill or in deploying more sophisticated agents and tools on top of the models. But the organisations gaining the most value from AI are not simply using the models differently; they are building richer environments around them. And the more intelligence you retain in the environment, the easier it becomes to switch models when needed.

Instead of treating AI as a standalone tool, enterprise AI pioneers are beginning to shape the world that AI operates within: the outcomes it is responsible for, the expertise it inherits, and the context it can access. This is also drawing a clearer architectural distinction between skills and context in agentic systems.

Memory and Ownership

In everyday AI usage, people are starting to build up context inside individual tools: conversations, prompts, working patterns, fragments of reasoning. Over time, this becomes something more than usage. It becomes a kind of working memory.

→ But that memory is fragile.

→ Switch models, and it disappears.

→ Move tools, and it fragments.

→ Hit usage limits, and it resets.

For those focused on using commercial tools, running into usage limits with, say, Claude and potentially having to switch models is not just an inconvenience; it is the loss of accumulated context. Conversations, assumptions, and working patterns have to be rebuilt from scratch.

The same pattern is playing out inside organisations.

Copilots sit in different tools. Context is scattered across systems. There is no shared memory layer, no consistent way to carry forward how work is done, what has been learned, or how decisions have been made.

The intelligence may be shared, but the context is not; and more importantly, it is not owned.

Tools vs Worlds

On paper, most AI deployments look remarkably similar. The same models sit underneath a fairly common toolset. The same copilots appear in productivity software and the same agent frameworks promise orchestration across workflows. From the outside, it can feel as though every organisation is drawing from the same intelligence layer, and in many ways, they are.

But beneath that shared surface, two very different approaches are being used.

In some organisations, AI remains primarily a tool layer. Employees interact with models through prompts and copilots, using them to draft content, analyse documents, generate ideas or automate parts of existing workflows. The intelligence sits at the interface: helpful, but largely separate from the deeper structure of how the organisation operates.

In others, a different approach is emerging: instead of focusing only on how people interact with AI, attention begins to shift toward the environment.

  • What outcomes the system is responsible for.

  • What expertise it inherits from the organisation.

  • What context it can access across tools, data and systems.

This is where an architectural distinction that is gaining traction in agent systems becomes useful: the difference between Skills and context systems such as MCP. At a simple level, the distinction is straightforward:

  • Skills describe how the system approaches a problem. They encode reasoning patterns, frameworks and playbooks that guide analysis or decision-making, but increasingly they also define how work gets done in practice: the sequences of actions, process steps and interactions the system can carry out across tools and workflows.

  • Context systems determine what the system can see and interact with. They provide structured access to documents, tools, data sources and workflows across the organisation.
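As a rough sketch, the two layers can be represented as distinct data structures. The names and fields below are illustrative, not any particular framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """How the system approaches a problem: a reusable reasoning/action pattern."""
    name: str
    steps: list[str]                                  # analysis or process steps to follow
    quality_criteria: list[str] = field(default_factory=list)

@dataclass
class ContextSource:
    """What the system can see: a structured pointer to organisational data."""
    name: str
    kind: str                                         # e.g. "documents", "database", "tool"
    description: str

# A skill encodes the organisation's way of reasoning about a problem.
competitive_analysis = Skill(
    name="competitive-analysis",
    steps=[
        "Compare products across pricing, features, positioning and risk",
        "Gather evidence from internal and external sources",
        "End with clear strategic implications",
    ],
    quality_criteria=["Every claim is sourced", "Implications are actionable"],
)

# A context source describes what the system is allowed to see and act on.
crm = ContextSource(
    name="crm",
    kind="database",
    description="Customer accounts, churn history and support tickets",
)
```

The point of the separation is that either layer can evolve independently: the same skill can run against richer context over time, and new context sources become available to every existing skill.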

AI systems do not become powerful simply because the models improve; they become powerful when three things begin to align:

  • the outcomes they are responsible for

  • the skills they can use to reason and act

  • the context they can access across the organisation

Together, these elements start to form something that looks less like a tool and more like an operational environment for intelligence.

Tool Mode: Intelligence Without a World

Digging a bit deeper into the differences, it is clear that in Tool Mode, AI is treated primarily as an interface for generating output.

Employees prompt copilots to draft documents, analyse data, generate presentations, or summarise discussions. Individual productivity improves, and in many cases the gains are significant. Teams can move faster in small pockets, content can be produced more easily, and analytical work that once took hours can often be completed in minutes.

But the intelligence remains largely detached from the organisation itself.

The model has limited awareness of the systems people use every day. It does not have structured access to decision histories, internal frameworks, or the reasoning patterns that shape how the organisation operates. Knowledge remains scattered across documents, chat threads and individual expertise. As a result, AI becomes powerful but context-poor.

Outputs may be impressive in isolation, but they often lack alignment with the organisation’s specific priorities, norms and operating logic. Two employees asking the same question may receive different answers. Strategic nuance is lost. Decisions remain dependent on individuals interpreting and adjusting the output.

In this mode, AI behaves a little like a brilliant intern dropped into the organisation with access to a search engine but very little understanding of how the company actually works.

World Mode: Intelligence Inside an Environment

When organisations focus on designing the environment rather than just prompting and tool use, attention shifts towards two questions:

  • What expertise should be codified and reusable?

  • What context should be systematically accessible?

This is where the distinction between Skills and context systems becomes operational.

Skills capture reasoning patterns that previously lived mostly in people’s heads, but they can also encode how those patterns translate into action: what steps to take, which tools to use, and how work flows from one stage to the next. They encode the frameworks, heuristics and analytical approaches that experienced practitioners use when evaluating problems or making decisions.

A competitive analysis skill, for example, might specify how to compare products across pricing, features, positioning and risk, and how to gather that information across internal and external sources as part of a repeatable workflow. A risk assessment skill might define the dimensions that should be considered before approving a supplier or launching a new initiative. These patterns represent organisational expertise. When codified as reusable skills, they become something new: shared reasoning infrastructure.
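One hypothetical way to make "shared reasoning infrastructure" concrete is to render a codified skill into the instructions that any agent (or person) follows, so every team draws on the same playbook. The dimensions and function below are invented for illustration:

```python
# Risk dimensions an organisation might codify for supplier assessment (illustrative).
RISK_DIMENSIONS = [
    "financial exposure",
    "supplier concentration",
    "regulatory compliance",
    "operational continuity",
]

def render_skill_prompt(task: str, dimensions: list[str]) -> str:
    """Turn a codified skill into reusable instructions for a model or agent."""
    lines = [f"Task: {task}", "Assess each dimension before recommending approval:"]
    lines += [f"- {d}" for d in dimensions]
    lines.append("Finish with a clear approve/reject recommendation and rationale.")
    return "\n".join(lines)

# Every team evaluating a supplier now starts from the same reasoning pattern.
prompt = render_skill_prompt("Evaluate supplier Acme Ltd", RISK_DIMENSIONS)
```

Because the skill lives in shared infrastructure rather than in one analyst's head, improving the dimension list once improves every future assessment.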

Context systems complement this by shaping the environment the intelligence operates within. Through mechanisms such as MCP, agents can access the tools, documents, databases and workflows that contain the organisation’s operational knowledge.

When outcomes, skills and context begin to align, the intelligence is no longer simply generating output. It begins participating in the operational fabric of the organisation, both in how decisions are made and in how work is executed.

Why Tool Mode Is the Default

Just as Extraction Mode dominates organisational transformation, Tool Mode tends to dominate early AI adoption.

The reasons are straightforward.

Deploying AI tools is easy. Codifying expertise and structuring context is not.

It is far simpler to train employees in prompting or roll out copilots across the organisation than it is to examine how knowledge actually flows through the system. Codifying reasoning patterns requires surfacing tacit expertise that may never have been formally articulated. Structuring context requires connecting fragmented systems and clarifying which information should be authoritative. Both activities are organisational work rather than purely technical work. Most enterprises therefore begin with the most visible layer: interaction with the model.

But this choice has consequences: the intelligence becomes faster, but the organisation does not necessarily become smarter.

The Mechanics of World-Building

In our previous piece we explored the idea of world-building as a leadership discipline, the craft of designing the environments in which human and machine intelligence operate together. Organisations were compared to worlds with their own physics, culture and geography: rules that shape behaviour, norms that guide judgment, and environments that determine how actors navigate the system.

The distinction between outcomes, skills and context begins to reveal what those layers look like in practice.

  • Outcomes define the direction of the world. They describe what success looks like and where responsibility ultimately sits. When AI systems are attached to outcomes rather than isolated tasks, they begin participating in the organisation’s operating logic rather than simply generating output.

  • Skills represent a form of codified expertise. They capture the reasoning patterns that experienced practitioners use when analysing problems, evaluating trade-offs or making decisions. In world-building terms, they begin to encode aspects of the organisation’s culture — the ways it interprets information, the factors it considers important and the principles that guide judgment.

  • Context systems provide the geography of the world. Through mechanisms such as MCP, agents gain structured access to documents, tools, data and workflows. These systems determine what the intelligence can see, what information it can retrieve and where it can act.

Seen together, these elements begin to form the operational layer of world-building. The models may be shared across organisations, but the world around them is not.

One organisation may give an agent access to fragmented documentation and informal processes. Another may provide structured context, codified expertise and clearly defined outcomes. The underlying intelligence is the same, but the environment it operates within is fundamentally different.

Let’s explore these three layers in more depth.

Outcomes Define Direction

Before thinking about skills or context, start with something simpler: what the system is actually trying to achieve.

Most early uses of AI focus on tasks: drafting, summarising, analysing. Useful, but peripheral. The centre of the organisation is not tasks, it is outcomes.

  • Reduce customer churn

  • Improve supplier reliability

  • Increase conversion

  • Resolve support issues faster

When intelligence is attached to outcomes, the system begins to organise differently. Decisions, data and workflows align around a shared objective rather than fragmenting across individual activities.

This is the shift from AI as a productivity tool to AI as part of how the organisation delivers results.

Outcomes define direction; skills and context only make sense once that direction is clear.

Learning Becomes Codification

If outcomes define what matters, skills define how the organisation gets things done.

In AI systems, a “skill” is simply a codified way of approaching a problem: a reusable reasoning pattern. Instead of asking a model to “analyse a market,” a skill defines how that analysis should be done: what to compare, what structure to follow, what constitutes a good answer.

For example, a competitive analysis skill might require comparison across pricing, features, positioning and risk, and end with clear strategic implications. What matters is not the example, but the shift.

Learning no longer lives only in people. It becomes something that can be captured, reused and improved.

Frameworks, heuristics and decision patterns that were once taught informally start to become shared infrastructure. Different teams stop reinventing the same thinking. AI systems and humans begin to draw on the same reasoning patterns.

Learning moves from training individuals to building organisational capability.

Context Becomes Architecture

If skills define how the organisation thinks, context determines what it can act on.

Without structured context, AI operates in isolation, limited to prompts and general knowledge. With it, intelligence becomes connected to the organisation’s actual work. This is where approaches like the Model Context Protocol (MCP) matter, as they provide a structured way for AI systems to access:

  • internal documents

  • data sources

  • business tools

  • workflows

So instead of generating answers in the abstract, the system works with live organisational information. At that point, the quality of the environment starts to matter as much as the quality of the model.

Two organisations using the same AI can produce very different results depending on how well their context is structured.
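A minimal sketch of the idea behind a context layer, illustrating the pattern rather than the actual MCP protocol or SDK: a registry that gives agents structured, named access to organisational sources instead of ad hoc copy-paste. All names and sources here are hypothetical:

```python
class ContextRegistry:
    """Structured access to organisational sources, keyed by URI-like names."""

    def __init__(self):
        self._sources = {}

    def register(self, uri: str, fetch):
        """Register a callable that returns live data for a named source."""
        self._sources[uri] = fetch

    def read(self, uri: str) -> str:
        """Resolve a named source to its current contents."""
        if uri not in self._sources:
            raise KeyError(f"No context source registered for {uri!r}")
        return self._sources[uri]()

registry = ContextRegistry()
registry.register("docs://pricing-policy", lambda: "Standard discount cap: 15%")
registry.register("db://churn-summary", lambda: "Q1 churn: 4.2% (down 0.3pt)")

# An agent resolves context by name rather than relying on general knowledge.
answer_context = registry.read("docs://pricing-policy")
```

How well this layer is curated, which sources are registered, kept current and treated as authoritative, is exactly the "quality of the environment" the text describes: two organisations with the same model but different registries will produce very different results.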

If organisations are beginning to codify outcomes, skills and context, an interesting question surfaces: where does this knowledge actually live?

In software engineering, GitHub provides a shared environment where code can be stored, improved and versioned collaboratively. A similar concept may emerge for organisational intelligence: a place where skills, decision rules and context connections can be maintained as shared infrastructure.

You could think of it as a kind of GitHub for world-building - a repository where the logic of how the organisation operates becomes visible, improvable and reusable.

© 2026 Shiftbase Ltd