<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Shift*Academy]]></title><description><![CDATA[Your digital leadership learning companion to help make sense of AI and emerging technologies, with practical guidance and techniques for implementation.]]></description><link>https://academy.shiftbase.info</link><image><url>https://substackcdn.com/image/fetch/$s_!dGVA!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3669b15-b056-4bcd-828c-fa5d230cc563_256x256.png</url><title>Shift*Academy</title><link>https://academy.shiftbase.info</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 06:34:44 GMT</lastBuildDate><atom:link href="https://academy.shiftbase.info/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Shiftbase Ltd]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[academy@shiftbase.info]]></webMaster><itunes:owner><itunes:email><![CDATA[academy@shiftbase.info]]></itunes:email><itunes:name><![CDATA[Lee Bryant]]></itunes:name></itunes:owner><itunes:author><![CDATA[Lee Bryant]]></itunes:author><googleplay:owner><![CDATA[academy@shiftbase.info]]></googleplay:owner><googleplay:email><![CDATA[academy@shiftbase.info]]></googleplay:email><googleplay:author><![CDATA[Lee Bryant]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Agents of Progress or Agents of Chaos?]]></title><description><![CDATA[The delta between personal and enterprise agentic AI development is worrying, but perhaps combining the two offers a way to help overcome their 
limitations...]]></description><link>https://academy.shiftbase.info/p/agents-of-progress-or-agents-of-chaos</link><guid isPermaLink="false">https://academy.shiftbase.info/p/agents-of-progress-or-agents-of-chaos</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 31 Mar 2026 15:12:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XMjE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The <strong><a href="https://academy.shiftbase.info/p/how-we-survived-the-agent-apocalypse">OpenClaw moment we covered a few weeks ago</a></strong> was a wild ride. But in the age of YOLO, Hodl and r/wallstreetbets, it should come as no surprise that there is an apparently limitless supply of people <strong><a href="https://bsky.app/profile/mims.bsky.social/post/3mhsux67xpk2d">willing to hand over control of their personal computer</a></strong> to AI agents in pursuit of rapid progress.</p><p>One fascinating research study - <strong><a href="https://agentsofchaos.baulab.info/report.html">Agents of Chaos</a></strong> - let OpenClaw run riot within a controlled lab environment, and concluded:</p><blockquote><p><em>During a two-week experimental investigation, we identified and documented ten substantial vulnerabilities and numerous failure modes concerning safety, privacy, goal interpretation, and related dimensions. These results expose underlying weaknesses in such systems, as well as their unpredictability and limited controllability as complex, integrated architectures. The implications of these shortcomings may extend directly to system owners, their immediate surroundings, and society more broadly. 
Unlike earlier internet threats where users gradually developed protective heuristics, the implications of delegating authority to persistent agents are not yet widely internalized, and may fail to keep up with the pace of autonomous AI systems development.</em></p></blockquote><p>For those of us with a slightly lower risk appetite, Claude Cowork has also evolved very quickly as a more mainstream alternative to OpenClaw, and is incredibly impressive.</p><p>To take but one of many examples of personal agent setups, <strong><a href="https://promptedbyeric.substack.com/p/claude-cowork-might-be-the-most-consequential?r=9dv58&amp;triedRedirect=true">Eric Porres recently shared his own Claude Cowork harness</a> </strong>and lauded its ability to help him manage his portfolio of work activities in a more powerful and efficient way:</p><blockquote><p><em>the gap between &#8220;AI as chatbot&#8221; and &#8220;AI as operating system for your work&#8221; is closing fast. And Cowork is where that gap collapses for non-developers.</em></p></blockquote><p>At their impressive GTC event in San Jos&#233; recently, Nvidia placed great emphasis on this agentic inflection point as a pointer to where AI is headed next, and also crucially where it can start to show strong returns. <strong><a href="https://www.nvidia.com/gtc/keynote/">In his keynote, CEO Jensen Huang challenged companies to understand the nature of this shift</a></strong> with a typically provocative statement:</p><blockquote><p><em>Every company needs an OpenClaw strategy</em></p></blockquote><p>But this was not a crazy call for companies to let OpenClaw run riot in their organisations. 
<strong><a href="https://www.exponentialview.co/p/jensens-openclaw-thesis?publication_id=2252&amp;post_id=191613948&amp;triggerShare=true&amp;isFreemail=false&amp;r=9dv58&amp;triedRedirect=true">The message, as Azeem Azhar interpreted it, is a lot more far-reaching</a></strong>: it is about the shift from the model training era to one of inference and execution.</p><p>The dominant driver of AI progress (and NVIDIA&#8217;s revenue) was <em>training</em> compute: the vast one-time cost of training large foundation models. This emerging thesis says that the next scaling frontier is <em>inference-time</em> compute &#8212; spending more compute at the moment of generating a response, letting models &#8220;think longer&#8221; on hard problems (chain-of-thought, test-time search, etc.) rather than just being bigger. This changes the hardware economics significantly: inference demand is continuous, distributed, and latency-sensitive rather than concentrated in large training runs. It also opens up physical AI (robotics, autonomous systems) as a major new inference market.</p><p>The focus shifts from models to what we do with them - or as Azeem put it: <em><strong>&#8220;The harness is the revolution&#8221;</strong>&#8230;</em></p><blockquote><p><em>For AI, the harness moment happened at the tail end of 2025. Claude Code began to work reliably enough that you could leave it running overnight and trust what it had done in the morning. Not perfectly, but reliably enough. And that threshold, that &#8220;I can leave it to its own devices&#8221; threshold, changed everything. It changed what users asked AI to do. It changed how long tasks ran. 
It changed the token usage profile of every organization that crossed it.</em></p><p><em>Now, OpenClaw is the harness for the next layer.</em></p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XMjE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XMjE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 424w, https://substackcdn.com/image/fetch/$s_!XMjE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 848w, https://substackcdn.com/image/fetch/$s_!XMjE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 1272w, https://substackcdn.com/image/fetch/$s_!XMjE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XMjE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic" width="800" height="533" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:533,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:78810,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/192740823?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XMjE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 424w, https://substackcdn.com/image/fetch/$s_!XMjE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 848w, https://substackcdn.com/image/fetch/$s_!XMjE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 1272w, https://substackcdn.com/image/fetch/$s_!XMjE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80556756-6586-4ef9-8ac1-52bdfab40918_800x533.heic 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">image credit: https://www.linkedin.com/posts/i0exception_san-francisco-these-days-activity-7429390168576405504-D8Nr/</figcaption></figure></div><h2>Talk to my agent!</h2><p>What does this mean for agentic AI in the enterprise, and should we be concerned about the way organisation-owned agentic services seem to be lagging so far behind the rapid evolution of personal agents?</p><p>Personal AI agents have evolved so much faster than enterprise agentic services that the gap between them is becoming structural, and we could end up with more ways to navigate broken enterprise systems instead of business transformation.</p><p>But if we embrace personal agents at work as prototypes and testbeds for shared enterprise services, then perhaps we can close the gap.</p><p>Looking at this challenge within the wider context of the
shift from model training to inference and runtime intelligence, however, it is clear that much more focus is needed on world models and decision intelligence infrastructure to enable shared enterprise agents to work reliably with less human supervision. The good news is we can do much of this with the tools we have access to today. It is less a technology challenge and more a readiness and architecture question.</p><ul><li><p>Microsoft has recently been beefing up its agentic capabilities in its M365 platform, and <strong><a href="https://techcommunity.microsoft.com/blog/microsoft365copilotblog/introducing-multi-model-intelligence-in-researcher/4506011">has just announced a multi-model capability for verifying complex research</a></strong>.</p></li><li><p><strong><a href="https://venturebeat.com/technology/perplexity-takes-its-computer-ai-agent-into-the-enterprise-taking-aim-at">Perplexity has launched an agentic harness for the enterprise</a></strong>, based on internal tooling that was used by its own employees to speed up delivery.</p></li><li><p>Elsewhere, Salesforce&#8217;s agentic foundry, SAP and a host of other stalwarts from the previous generation of enterprise platforms continue to announce new agentic capabilities.</p></li></ul><p>But the capability diffusion gap continues to widen, and there is now a risk that personal agents will evolve so much faster than enterprise agents that we could recreate the &#8216;old wine in new bottles&#8217; problem that we saw with the earlier phase of Robotic Process Automation (RPA), and use AI to navigate a broken system better, rather than to fix the system or start building a better one.</p><p><strong><a href="https://a16z.com/why-the-world-still-runs-on-sap/">As Eric Zhou and Seema Amble of Andreessen Horowitz remarked recently, the world still largely runs on old, poorly-designed enterprise platforms not because they are good</a></strong>, but because organisations contorted themselves around their inadequacies and
foibles to such an extent that ripping them out could be painful:</p><blockquote><p><em>To ask a question that sounds almost disrespectful until you&#8217;ve spent a week in a Fortune 500: why do people still use SAP (and ServiceNow, and Salesforce) at all?</em></p><p><em>The short answer is that SAP, or any major legacy system of record, captures critical data across the businesses that use it. But on top of that, the business has customized it and built a set of specific procedures and roles on top of it, much of which is not actually documented anywhere.</em></p></blockquote><p>Or at least that was the case until now. The authors argue that agentic AI in the enterprise could replace these behemoths over the medium term, but even in the short-term, it could make them more malleable and easier to work with.</p><p>Perhaps this is an area where personal agents in the enterprise can become a testbed for enterprise agentic services, trying out automations, workarounds and multi-agent tasks under human oversight, before becoming candidates for new shared services that run more autonomously.</p><p>With a personal agent harness, we can each maintain our own personal work context, memory, preferred styles and so on, and our agents will be able to use this to navigate the enterprise and interface with systems of record at the API and data level, bypassing the UI altogether. 
And our agents can talk to each other to take over a lot of the boring, time-wasting scheduling, alignment, stakeholder communications and basic coordination tasks that consume so much of a leader&#8217;s time in large enterprises today.</p><p>Later, as more shared enterprise agents come online, our personal agents could help manage our relationships with these as well, for example sequencing the various actions needed for us to run a project or gather intelligence for new ideas.</p><p>That suggests to me that we should try to find a way to embrace technologies like Claude Cowork safely within the enterprise to deliver on the potential that copilots promised. Nothing is risk-free, and we clearly need to focus on guardrails and permissions, but if advanced users are willing to be accountable for their agents in return for the productivity and value they could generate, then we can probably find ways to make it work.</p><h2>World Knowledge, Architecture and Run-time Intelligence</h2><p>But it is also worth thinking about the differences between personal agents and enterprise agents, and what we have learned so far on this journey of discovery.</p><p>First, we need a more considered and thoughtful approach to productivity than boasting about how many lines of code (LOC) we can churn out.</p><p><strong><a href="https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/">Mario Zechner tried to summarise his own lessons from agentic coding</a></strong> last week in an interesting, opinionated piece about the dangers of brittle software, missed learning and unmaintainable systems, and concluded we need to <em>&#8220;slow the f*** down.&#8221;</em></p><p>In a similar vein, <strong><a href="https://interconnected.org/home/2026/03/28/architecture">Matt Webb reminds us that good architecture beats a high LOC count every time</a></strong>, and helps avoid personal-agents-as-workarounds suffering from the same failure modes as RPA, mentioned
earlier:</p><blockquote><p><em>The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it&#8217;ll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon.</em></p><p><em>Like, where&#8217;s the bottom? Why not take a plain English spec and grind it out in pure assembly every time? It would run quicker.</em></p><p><em>But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better.</em></p><p><em>So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the &#8220;right&#8221; way the easy way for developers building apps with them. Architecture!</em></p></blockquote><p>Another question is where we can live with probabilistic models, and where we need a more deterministic approach.
We can tolerate the limits of probabilistic models in personal agents we oversee and can test, but when we start to think about embedded or autonomous agents that work the same way for everybody, a more deterministic approach is often needed.</p><p>Just as Nvidia&#8217;s forward strategy is about more real-time inference in the output stage, we can start to imagine how more verification and compliance could also be done at runtime to make some enterprise AI agents more deterministic.</p><p>Artur Huk wrote about this a few days ago for O&#8217;Reilly, <strong><a href="https://www.oreilly.com/radar/the-missing-layer-in-agentic-ai/">describing &#8216;decision intelligence runtime&#8217; as a missing capability layer in agentic AI</a></strong> - more of an engineering pattern than a specific solution or technique.</p><p>And this brings us back to a topic we have been noodling on for some time, which is <strong><a href="https://academy.shiftbase.info/p/ai-round-up-both-destroyer-and-maker">the vital importance of world-building</a></strong> in ensuring the success of enterprise AI.</p><p>Whilst some aspects of world-building are about creating good general context for people and machines to have clarity about goals, ways of working, culture, language and so on, there are other aspects of world knowledge that are more precise and scientific.</p><p>The way that current models handle world knowledge is largely in the training stage, rather than the inference or runtime moment. The way autonomous driving systems learn is a good example.
<strong><a href="https://www.wired.com/story/a-school-district-tried-to-help-train-waymos-to-stop-for-school-buses-it-didnt-work/?utm_source=nl&amp;utm_brand=wired&amp;utm_mailing=WIR_Daily_033026_PAID&amp;utm_campaign=aud-dev&amp;utm_medium=email&amp;utm_content=WIR_Daily_033026_PAID&amp;bxid=679fe6b740ae7ef26b0b090f&amp;cndid=86084401&amp;hasha=016003755aa0296c369a337f05a1c1d7&amp;hashc=6682d98588cab4309c24846dca6db095f2734170cec7daa3428e1be66edfc3ed&amp;esrc=MARTECH_ORDERFORM&amp;utm_term=WIR_DAILY_PAID">Waymo vehicles in Austin developed a nasty habit of illegally overtaking school buses on pick-up</a></strong>, and the school district worked with the company to give them simple rules and guidance on how to avoid this happening. But the training process for Waymo&#8217;s system is so long and includes so much data that they were unable to simply add a new rule quickly, and the cars kept overtaking buses. More runtime inference and decision intelligence based on world models is perhaps one way to tackle such anomalies.</p><p><strong><a href="https://joiningdots.substack.com/p/ai-and-a-new-frontier-for-decision">So much of the existing decision intelligence inside organisations has never been captured, as Sharon Richardson remarked yesterday</a></strong> in her informative piece about context graphs, which means there is a lot we can achieve quite easily and quickly if we are smart about it. This does not require some kind of Sisyphean manual knowledge mapping exercise, because it is the kind of task that AI can accelerate with the right supervision, even down to the level of conducting structured interviews or After Action Reviews (AARs) to capture decision traces and reasoning from people.</p><p>The kind of world models we will need to realise the promise of enterprise agentic AI will go way beyond intangible knowledge and culture. They will need to understand the physical world, manufacturing, distribution and even domains like geopolitics that (not again!)
are messing with supply chains and pricing.</p><p>How we build and evolve these models is truly a fascinating challenge, and one where we can use AI itself to help improve our AI readiness by doing the mapping, collating and documentation of the information we need to make them real.</p><p><strong><a href="https://www.strangeloopcanon.com/p/the-future-of-work-is-world-models">Rohit Krishnan recently wrote that this idea is really the key to the future of work</a></strong>, which he sees operating more like a strategy co-op game than a single-player game, and I think that is right.</p><blockquote><p><em>What&#8217;s needed in the enterprise world is such a world model - an engine that knows the rules, tracks the state, understands and predicts consequences.</em></p><p><em>The environment would connect to the systems a company already runs, the information that is gathered, the agents it uses, and build a live operational model of the business. Scale it across companies and you have the training data to build a compelling environment and an even better world model!</em></p></blockquote><p>The question is, can visionary CIOs and leaders of AI adoption programmes make the case that urgent attention and investment are needed in AI readiness efforts rather than just rolling out co-pilot licenses and hoping for some marginal productivity gains?</p><p>We need composable, addressable processes, services and systems if enterprise agents are to operate autonomously.
And we need codified rulesets, world models and decision intelligence to be available at run-time if we want them to operate more deterministically without the kind of oversight we perform with personal agents.</p>]]></content:encoded></item><item><title><![CDATA[Same Models, Different Worlds]]></title><description><![CDATA[How outcomes, skills and context combine to create compounding organisational intelligence]]></description><link>https://academy.shiftbase.info/p/same-models-different-worlds</link><guid isPermaLink="false">https://academy.shiftbase.info/p/same-models-different-worlds</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 24 Mar 2026 15:08:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!H_5r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI models are becoming shared utilities. When the same intelligence is available to everyone, the real differentiator is no longer the model, but the world around it. If your competitors are using the same models as you, trained on similar data and accessed through the same interfaces, then the intelligence itself cannot be the thing that gives your organisation competitive advantage.</p><p>If everyone has access to an identical intelligence layer, where does differentiation actually come from?</p><p>At first glance the answer might seem to lie in prompting skill or in deploying more sophisticated agents and tools on top of the models. But the organisations gaining the most value from AI are not simply using the models differently, they are building richer environments around them. 
And the more intelligence you retain in the environment, the easier it becomes to switch models when needed.</p><p>Instead of treating AI as a standalone tool, enterprise AI pioneers are beginning to shape the world that AI operates within: the outcomes it is responsible for, the expertise it inherits, and the context it can access. This is also creating a clearer distinction between skills and context in terms of agentic architecture.</p><h2>Memory and Ownership</h2><p>In everyday AI usage, people are starting to build up context inside individual tools: conversations, prompts, working patterns, fragments of reasoning. Over time, this becomes something more than usage. It becomes a kind of working memory.</p><p>&#8594; But that memory is fragile.</p><p>&#8594; Switch models, and it disappears.</p><p>&#8594; Move tools, and it fragments.</p><p>&#8594; Hit usage limits, and it resets.</p><p>For those focused on using commercial tools, running into usage limits with, say, Claude and potentially having to switch models is not just an inconvenience; it is the loss of accumulated context. Conversations, assumptions, and working patterns have to be rebuilt from scratch.</p><p>The same pattern is playing out inside organisations.</p><p>Copilots sit in different tools. Context is scattered across systems. There is no shared memory layer, no consistent way to carry forward how work is done, what has been learned, or how decisions have been made.</p><p>The intelligence may be shared, but the context is not; and more importantly, it is not owned.</p><h2><strong>Tools vs Worlds</strong></h2><p>On paper, most AI deployments look remarkably similar. The same models sit underneath a fairly common toolset. The same copilots appear in productivity software and the same agent frameworks promise orchestration across workflows.
From the outside, it can feel as though every organisation is drawing from the same intelligence layer, and in many ways, they are.</p><p>But beneath that shared surface, two very different approaches are being used.</p><p>In some organisations, AI remains primarily a tool layer. Employees interact with models through prompts and copilots, using them to draft content, analyse documents, generate ideas or automate parts of existing workflows. The intelligence sits at the interface: helpful, but largely separate from the deeper structure of how the organisation operates.</p><p>In others, a different approach is taking shape: instead of focusing only on how people interact with AI, attention begins to shift toward the environment.</p><ul><li><p>What outcomes the system is responsible for.</p></li><li><p>What expertise it inherits from the organisation.</p></li><li><p>What context it can access across tools, data and systems.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!H_5r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!H_5r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!H_5r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 848w,
https://substackcdn.com/image/fetch/$s_!H_5r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!H_5r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!H_5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:299990,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/191608578?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!H_5r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 424w, 
https://substackcdn.com/image/fetch/$s_!H_5r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!H_5r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!H_5r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023a5b26-a264-4d5a-86c8-98bd6cf13f88_1536x1024.heic 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This is where an architectural distinction that is gaining traction in agent systems becomes useful: the difference between Skills and context systems such as MCP. At a simple level, the distinction is straightforward:</p><ul><li><p><strong>Skills</strong> describe how the system approaches a problem. They encode reasoning patterns, frameworks and playbooks that guide analysis or decision-making, but increasingly they also define how work gets done in practice: the sequences of actions, process steps and interactions the system can carry out across tools and workflows.</p></li><li><p><strong>Context systems</strong> determine what the system can see and interact with. They provide structured access to documents, tools, data sources and workflows across the organisation.</p></li></ul><p>AI systems do not become powerful simply because the models improve; they become powerful when three things begin to align:</p><ul><li><p>the outcomes they are responsible for</p></li><li><p>the skills they can use to reason and act</p></li><li><p>the context they can access across the organisation</p></li></ul><p>Together, these elements start to form something that looks less like a tool and more like an operational environment for intelligence.</p><h2>Tool Mode: Intelligence Without a World</h2><p>Digging a bit deeper into the differences, it is clear that in Tool Mode, AI is treated primarily as an interface for generating output.</p><p>Employees prompt copilots to draft documents, analyse data, generate presentations, or summarise discussions. Individual productivity improves, and in many cases the gains are significant.
Teams can move faster in small pockets, content can be produced more easily, and analytical work that once took hours can often be completed in minutes.</p><p>But the intelligence remains largely detached from the organisation itself.</p><p>The model has limited awareness of the systems people use every day. It does not have structured access to decision histories, internal frameworks, or the reasoning patterns that shape how the organisation operates. Knowledge remains scattered across documents, chat threads and individual expertise. As a result, AI becomes powerful but context-poor.</p><p>Outputs may be impressive in isolation, but they often lack alignment with the organisation&#8217;s specific priorities, norms and operating logic. Two employees asking the same question may receive different answers. Strategic nuance is lost. Decisions remain dependent on individuals interpreting and adjusting the output.</p><p>In this mode, AI behaves a little like a brilliant intern dropped into the organisation with access to a search engine but very little understanding of how the company actually works.</p><h2>World Mode: Intelligence Inside an Environment</h2><p>When organisations focus on designing the environment, instead of just prompting and tool use, attention can shift towards:</p><ul><li><p>What expertise should be codified and reusable?</p></li><li><p>And what context should be systematically accessible?</p></li></ul><p>This is where the distinction between Skills and context systems becomes operational.</p><p>Skills capture reasoning patterns that previously lived mostly in people&#8217;s heads, but they can also encode how those patterns translate into action: what steps to take, which tools to use, and how work flows from one stage to the next. 
They encode the frameworks, heuristics and analytical approaches that experienced practitioners use when evaluating problems or making decisions.</p><p>A competitive analysis skill, for example, might specify how to compare products across pricing, features, positioning and risk, and how to gather that information across internal and external sources as part of a repeatable workflow. A risk assessment skill might define the dimensions that should be considered before approving a supplier or launching a new initiative. These patterns represent organisational expertise. When codified as reusable skills, they become something new: shared reasoning infrastructure.</p><p>Context systems complement this by shaping the environment the intelligence operates within. Through mechanisms such as MCP, agents can access the tools, documents, databases and workflows that contain the organisation&#8217;s operational knowledge.</p><p>When outcomes, skills and context begin to align, the intelligence is no longer simply generating output. It begins participating in the operational fabric of the organisation, both in how decisions are made and in how work is executed.</p><h2>Why Tool Mode Is the Default</h2><p>Just as <strong><a href="https://academy.shiftbase.info/p/extraction-vs-redesign-the-hidden">Extraction Mode dominates organisational transformation</a></strong>, Tool Mode tends to dominate early AI adoption.</p><p>The reasons are straightforward.</p><p>Deploying AI tools is easy. Codifying expertise and structuring context is not.</p><p>It is far simpler to train employees in prompting or roll out copilots across the organisation than it is to examine how knowledge actually flows through the system. Codifying reasoning patterns requires surfacing tacit expertise that may never have been formally articulated. Structuring context requires connecting fragmented systems and clarifying which information should be authoritative. 
Both activities are organisational work rather than purely technical work. Most enterprises therefore begin with the most visible layer: interaction with the model.</p><p>But this choice has consequences: the intelligence becomes faster, but the organisation does not necessarily become smarter.</p><h2>The Mechanics of World-Building</h2><p>In our previous piece <strong><a href="https://academy.shiftbase.info/p/a-leaders-guide-to-world-building">we explored the idea of world-building as a leadership discipline</a></strong>, the craft of designing the environments in which human and machine intelligence operate together. Organisations were compared to worlds with their own physics, culture and geography: rules that shape behaviour, norms that guide judgment, and environments that determine how actors navigate the system.</p><p>The distinction between outcomes, skills and context begins to reveal what those layers look like in practice.</p><ul><li><p><strong>Outcomes define the direction of the world</strong>. They describe what success looks like and where responsibility ultimately sits. When AI systems are attached to outcomes rather than isolated tasks, they begin participating in the organisation&#8217;s operating logic rather than simply generating output.</p></li><li><p><strong>Skills represent a form of codified expertise</strong>. They capture the reasoning patterns that experienced practitioners use when analysing problems, evaluating trade-offs or making decisions. In world-building terms, they begin to encode aspects of the organisation&#8217;s culture &#8212; the ways it interprets information, the factors it considers important and the principles that guide judgment.</p></li><li><p><strong>Context systems provide the geography of the world</strong>. Through mechanisms such as MCP, agents gain structured access to documents, tools, data and workflows. 
These systems determine what the intelligence can see, what information it can retrieve and where it can act.</p></li></ul><p>Seen together, these elements begin to form the operational layer of world-building. The models may be shared across organisations, but the <strong>world around them is not</strong>.</p><p>One organisation may give an agent access to fragmented documentation and informal processes. Another may provide structured context, codified expertise and clearly defined outcomes. The underlying intelligence is the same, but the environment it operates within is fundamentally different.</p><p>Let&#8217;s explore these three layers in more depth.</p><h2>Outcomes Define Direction</h2><p>Before thinking about skills or context, start with something simpler: what the system is actually trying to achieve.</p><p>Most early uses of AI focus on tasks: drafting, summarising, analysing. Useful, but peripheral. The centre of the organisation is not tasks; it is outcomes.</p><ul><li><p>Reduce customer churn</p></li><li><p>Improve supplier reliability</p></li><li><p>Increase conversion</p></li><li><p>Resolve support issues faster</p></li></ul><p>When intelligence is attached to outcomes, the system begins to organise differently. Decisions, data and workflows align around a shared objective rather than fragmenting across individual activities.</p><p>This is the shift from AI as a productivity tool to AI as part of how the organisation delivers results.</p><p>Outcomes define direction; skills and context can only make sense once that direction is clear.</p><h2>Learning Becomes Codification</h2><p>If outcomes define <em>what</em> matters, skills define <em>how</em> the organisation gets things done.</p><p>In AI systems, a &#8220;skill&#8221; is simply a codified way of approaching a problem: a reusable reasoning pattern.
Instead of asking a model to &#8220;analyse a market,&#8221; a skill defines how that analysis should be done: what to compare, what structure to follow, what constitutes a good answer.</p><p>For example, a competitive analysis skill might require comparison across pricing, features, positioning and risk, and end with clear strategic implications. What matters is not the example, but the shift.</p><p>Learning no longer lives only in people. It becomes something that can be captured, reused and improved.</p><p>Frameworks, heuristics and decision patterns that were once taught informally start to become shared infrastructure. Different teams stop reinventing the same thinking. AI systems and humans begin to draw on the same reasoning patterns.</p><p>Learning moves from training individuals to building organisational capability.</p><h2>Context Becomes Architecture</h2><p>If skills define how the organisation thinks, context determines what it can act on.</p><p>Without structured context, AI operates in isolation, limited to prompts and general knowledge. With it, intelligence becomes connected to the organisation&#8217;s actual work. This is where approaches like the Model Context Protocol (MCP) matter, as they provide a structured way for AI systems to access:</p><ul><li><p>internal documents</p></li><li><p>data sources</p></li><li><p>business tools</p></li><li><p>workflows</p></li></ul><p>So instead of generating answers in the abstract, the system works with live organisational information.
At that point, the quality of the environment starts to matter as much as the quality of the model.</p><p>Two organisations using the same AI can produce very different results depending on how well their context is structured.</p><p>If organisations are beginning to codify outcomes, skills and context, an interesting question surfaces: where does this knowledge actually live?</p><p>In software engineering, GitHub provides a shared environment where code can be stored, improved and versioned collaboratively. A similar concept may emerge for organisational intelligence: a place where skills, decision rules and context connections can be maintained as shared infrastructure.</p><p>You could think of it as a kind of GitHub for world-building - a repository where the logic of how the organisation operates becomes visible, improvable and reusable.</p>
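<p>As a rough sketch only: if skills were maintained in such a repository, a single versioned entry might look something like the following. The structure, field names and example values here are illustrative assumptions, not an established format or standard.</p>

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A codified reasoning pattern: what to compare, what structure to
    follow and what counts as a good answer. Hypothetical structure."""
    name: str
    version: str                 # versioned like code, so it can be improved
    dimensions: list             # what the analysis must compare
    output_sections: list        # the structure a good answer follows
    context_sources: list = field(default_factory=list)  # e.g. MCP connections

    def checklist(self) -> str:
        """Render the skill as a review checklist for humans or agents."""
        lines = [f"{self.name} v{self.version}"]
        lines += [f"- compare: {d}" for d in self.dimensions]
        lines += [f"- produce: {s}" for s in self.output_sections]
        return "\n".join(lines)

# The competitive-analysis skill described above (values are illustrative)
competitive_analysis = Skill(
    name="competitive-analysis",
    version="1.2.0",
    dimensions=["pricing", "features", "positioning", "risk"],
    output_sections=["comparison table", "strategic implications"],
    context_sources=["crm", "market-research-docs"],
)

print(competitive_analysis.checklist())
```

<p>The point of the sketch is simply that once expertise takes a shape like this, it can be stored, diffed, reviewed and reused the way code is, which is what makes the GitHub analogy more than a metaphor.</p>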
      <p>
          <a href="https://academy.shiftbase.info/p/same-models-different-worlds">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Agents on the Night Shift]]></title><description><![CDATA[Self-improving agents, Socratic dialogue and temporal stress: things to think about on the way towards agentic engineering and machines that make machines]]></description><link>https://academy.shiftbase.info/p/agents-on-the-night-shift</link><guid isPermaLink="false">https://academy.shiftbase.info/p/agents-on-the-night-shift</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 17 Mar 2026 15:30:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lS-s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong><a href="https://datasciencedojo.com/blog/karpathy-autoresearch-explained/">Andrej Karpathy&#8217;s new autoresearch tool recently ran 700 experiments on his nanochat codebase in two days</a></strong>. It found 20 improvements he had missed, delivering an 11% uplift in output. Tobi L&#252;tke at Shopify tried it on his own hand-tuned model: 19% improvement, parameter size halved. What makes this remarkable is not the numbers. It is the mechanism. The tool does not just run tests &#8212; it updates its own Python code based on what it learns. 
The researcher sets the direction; the machine runs experiments overnight and arrives with findings.</p><p>This is one example of an important archetype I think we will see more and more in agentic AI: <em><strong>the machine that makes the machines</strong></em>, which is both exciting and slightly strange.</p><p>This raises interesting questions about where learning lives in a human-machine system, and who benefits from it.</p><p><strong><a href="https://adactio.com/journal/22436">As Jeremy Keith put it recently</a></strong> when considering how agentic coding is changing development practices:</p><blockquote><p><em>Outsourcing execution to machines makes a lot of sense.</em></p><p><em>I&#8217;m not so sure it makes sense to outsource learning.</em></p></blockquote><p>But the productive division isn't just human vs. machine learning &#8212; it's human imagination operating at the meta-hypothesis level, and machine speed exhausting the territory around it. A single wild guess or idea can now seed hundreds of downstream tests; what comes back isn't just an answer, but a richer map of the problem space than any individual researcher might have drawn alone.</p><h2>From Agentic Coding to Agentic Engineering</h2><p>It seems everybody is intrigued right now by the rapid changes that the latest agentic AI models are bringing to software development, and it is worth paying attention because a similar process is likely to play out across other areas of work.</p><p>The New York Times Magazine recently published a major feature, <strong><a href="https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html?unlocked_article_code=1.SlA.DBan.wbQDi-hptjj6">Coding After Coders: the End of Computer Programming as we Know it</a></strong>, covering the history of the field and the experience of developers navigating rapid transformation:</p><blockquote><p><em>How things will shake out for professional coders themselves isn&#8217;t yet clear. 
But their mix of exhilaration and anxiety may be a preview for workers in other fields. Anywhere a job involves language and information, this new combination of skills &#8212; part rhetoric, part systems thinking, part skepticism about a bot&#8217;s output &#8212; may become the fabric of white-collar work. Skills that seemed the most technical and forbidding can turn out to be the ones most easily automated. Social and imaginative ones come to the fore. We will produce fewer first drafts and do more judging, while perhaps feeling uneasy about how well we can still judge. Abstraction may be coming for us all.</em></p></blockquote><p>This is almost certainly not the end of computer programming as a discipline, despite the pace of change. Computer science will become more sciencey; programming &#8212; talking to computers &#8212; will become more literary. But the need for people who understand what is possible and how to make it happen will continue to grow.</p><p>But agentic coding is also creating new forms of cognitive overload among AI-assisted developers, including the puzzling sight of people <strong><a href="https://www.wsj.com/tech/ai/ai-bots-claude-openclaw-285ac816">sitting outside in Silicon Valley watching &#8212; but not touching &#8212; their laptops</a></strong> as coding agents grind through the work.</p><p>Matt Jones captured this strangeness well this week in a lovely piece of writing &#8212; <strong><a href="https://petafloptimism.com/2026/03/14/gas-town-and-bullet-hell/">Gas Town and Bullet Hell</a></strong> &#8212; in particular the temporal mismatch between human cognition and machine speed:</p><blockquote><p><em>If brain fry is a clock problem &#8212; a temporal mismatch between human cognition and machinic speed &#8212; then solutions that only address interface design or training will help at the margins but miss the structural issue&#8230;</em></p><p><em>If we want AI agent work to feel more like flow and less like fry, the challenge 
isn&#8217;t making things faster or even slower &#8212; it&#8217;s about legibility, consent, and reversibility, and all three matter at once.</em></p></blockquote><p>As we hit the cognitive limits of what single-player mode can achieve, the shift from agentic coding to agentic engineering becomes important.</p><p><strong><a href="https://simonwillison.net/guides/agentic-engineering-patterns/what-is-agentic-engineering/">Simon Willison has a typically thorough guide to what this means in practice</a></strong>: instead of using an agent to write some code, agentic engineering means tasking systems with higher-order goals and the ability to self-manage the path towards them with less micro-management. The craft, Willison argues, was never primarily about writing code &#8212; it was always about figuring out what code to write.</p><h2>The Organisation as the Machine That Makes the Machines</h2><p>We have argued for a long time that to produce good software, organisations need to become like software themselves.</p><p>Corporate failures such as <strong><a href="https://germanautopreneur.com/p/cariad-volkswagen-software-failure-lessons">Volkswagen&#8217;s first attempt to build a software division for its vehicles within a hierarchical and bureaucratic organisation</a></strong> prove the point. The technology was not the problem; the organisational architecture was.</p><p>And yet, elsewhere in the automotive world, this has been understood for some time. 
<strong><a href="https://newsletter.jurriaankamer.com/p/the-organization-is-the-machine">Jurriaan Kamer recently shared lessons from F1 teams</a></strong>, quoting a team principal on what they borrowed from the Apollo project in their pursuit of agility and excellence under pressure:</p><blockquote><p><em>&#8220;What you can&#8217;t have is an engineer here having to go up and down a particular hierarchy and then hop across &#8212; in our instance, not just a different geographic location, but a different country altogether &#8212; and then go up and down. So instead, it&#8217;s a kind of different structure where it&#8217;s <strong>mission control instead of command and control</strong>.&#8221;</em></p></blockquote><p>This distinction matters more than it might appear. Today, a developer can use an agent to write better static software, and that is a productivity story everybody can follow. But if we trace the trajectory of agentic engineering towards its logical conclusion &#8212; and Karpathy&#8217;s autoresearch is an early signal of where that leads &#8212; we will need a much more fluid and connected organisational structure where services and processes are digitised and addressable, so that they can become truly programmable and genuinely capable of self-improvement.</p><p>The organisation itself needs to become the machine that makes the machines. <strong><a href="https://www.youtube.com/watch?v=MiUHjLxm3V0&amp;t=2872s">ASML&#8217;s famous EUV system</a></strong> is a useful reference point: a machine so complex that it requires extraordinary coordination between hundreds of specialist suppliers and internal teams, but one whose design assumes that it will be continuously improved by the people who build and operate it. The infrastructure is not static. It learns.</p><p>This also brings the learning question back into focus. 
If the machine is updating its own code overnight and accumulating insights from hundreds of experiments, organisations need to build the governance and oversight architecture that keeps humans genuinely in the loop &#8212; not as approvers of every output, but as the people setting direction, interpreting results, and carrying the institutional memory that the machine cannot hold. Otherwise, you end up with iteration without learning, which is just faster drift.</p><p><strong><a href="https://www.hulme.ai/blog/what-3-000-years-of-philosophy-and-three-decades-of-agent-research-can-teach-us-about-the-next-three-years">As Daniel Hulme reminds us in his recent thoughtful account of the philosophical and historical precursors of agentic AI</a></strong>, we already have rich bodies of knowledge and methods to draw on:</p><blockquote><p><em>The irony of this moment is that we are simultaneously living through the most rapid deployment of autonomous agents in history and underutilising the most relevant bodies of knowledge ever produced on how to make such systems safe. From Socrates&#8217; method of structured interrogation to Aristotle&#8217;s formal logic, from Chrysippus&#8217; propositional reasoning to the medieval protocols of adversarial disputation &#8211; and then from Carl Hewitt&#8217;s Actor Model to Michael Bratman&#8217;s theory of practical reasoning, from Leslie Lamport&#8217;s work on distributed consensus to Edmund Clarke&#8217;s model checking, from Lotfi Zadeh&#8217;s fuzzy logic to the agent architectures of Michael Wooldridge and Nick Jennings &#8211; these thinkers and many others spent careers building the conceptual and mathematical toolkit for exactly the challenges we now face. Their work isn&#8217;t historical curiosity. It&#8217;s a foundation we should be actively building on.</em></p></blockquote><p>The same could be said of our accumulated knowledge about organisational design.
How systems learn, adapt, and maintain coherence under rapid change is not a new problem. We just have a new urgency to solve it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lS-s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lS-s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 424w, https://substackcdn.com/image/fetch/$s_!lS-s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 848w, https://substackcdn.com/image/fetch/$s_!lS-s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 1272w, https://substackcdn.com/image/fetch/$s_!lS-s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lS-s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:244254,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/191252173?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lS-s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 424w, https://substackcdn.com/image/fetch/$s_!lS-s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 848w, https://substackcdn.com/image/fetch/$s_!lS-s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 1272w, https://substackcdn.com/image/fetch/$s_!lS-s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ed4363-73ff-4acb-8746-8d2e158bc7de_1408x768.heic 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2>The Infrastructure Is Coming</h2><p>The broader technology ecosystem is already moving in this direction. <strong><a href="https://www.interconnects.ai/p/the-next-phase-of-open-models">Nathan Lambert&#8217;s survey of the current state of open AI models</a></strong> suggests we will eventually reach a place where specialised small models are freely available for organisations to adapt and build on when creating their own AI platform architectures.</p><p><strong><a href="https://www.constellationr.com/insights/news/nvidias-huang-all-software-will-be-agentic">Jensen Huang is unambiguous about where this leads</a></strong>:</p><blockquote><p><em>&#8220;There will be no software in the future that&#8217;s not agentic. How could you have software that&#8217;s dumb?
And so, it is absolutely true that every software company will become an agentic company.&#8221;</em></p></blockquote><p>Instead of using AI agents to write better SaaS tools, this implies that software firms will make available agents that can continuously write, maintain, and evolve living software &#8212; software that has a sense of its own role and mission.</p><p>Incidentally, this also supports <strong><a href="https://x.com/JulienBek/status/2029680516568600933">the thesis that &#8216;services as software&#8217; will be a major new opportunity for specialist service providers</a></strong>.</p><p>Futurum Group <strong><a href="https://futurumgroup.com/press-release/cio-ai-priorities-pivot-from-productivity-to-innovation/">published new research this week on CIO AI priorities</a></strong>, finding that enterprise goals are shifting from basic efficiency towards innovation and organisational change. Dion Hinchcliffe&#8217;s conclusion that &#8220;the generic efficiency argument for AI is dead&#8221; is heartening. The route to greater returns is more about systems and architecture than it is about individual tool use, and it seems more enterprise leaders are beginning to see this. The danger is that &#8220;innovation and organisational change&#8221; becomes the new banner under which old structures get expensively automated rather than genuinely redesigned.</p><h2>Hypotheses &amp; Organisational Learning</h2><p>Karpathy ran 700 experiments in 48 hours on a well-defined optimisation problem with clean metrics and the ability to measure improvement objectively. That particular set of conditions is relatively rare. 
Most organisational improvement problems do not have clean metrics, do not produce outputs that can be evaluated overnight, and do not have the structured test environment that makes autoresearch possible.</p><p>What humans might lack in speed of iteration, they more than make up for in their ability to generate the wild guesses and <em>what ifs</em> that make for rich experimentation. Super-charging this innate human capability with the power of machines to loop through variations or play out scenarios could accelerate our learning and innovation in exciting new ways.</p><p>What autoresearch points towards isn't the automation of discovery, but its amplification. The human makes the leap; the machine explores where it lands.</p><p>Every organisation already has processes that could, with sufficient effort, be made legible, measurable, and addressable. The question for leaders is not whether to wait for the infrastructure to arrive &#8212; it will. The question is whether the organisation they are building now can actually use it when it does. The machine that makes the machines requires a very different kind of organisation than the one that deploys tools to make existing tasks faster.</p><p>What is the hypothesis-testing loop in your organisation that you most wish you could accelerate? 
And who, right now, is doing the learning?</p><h2>A Quick Favour to Ask</h2><p><strong><a href="https://letter.rebuild.net/">Please consider signing the Rebuild Letter</a></strong> to support a great initiative I have been loosely involved in over the last year or so that aims to stimulate the development of better European social tools and networks to reduce our reliance on weaponised attention farming.</p>]]></content:encoded></item><item><title><![CDATA[Programmable Governance & Probabilistic Humans]]></title><description><![CDATA[How probabilistic humans and collective intelligence could reshape AI governance]]></description><link>https://academy.shiftbase.info/p/programmable-governance-and-probabilistic</link><guid isPermaLink="false">https://academy.shiftbase.info/p/programmable-governance-and-probabilistic</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 10 Mar 2026 15:19:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MqTX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2651c6-e023-4cec-8c4b-64f36ae2a4d5_1456x971.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It might sound counter-intuitive at first, but good AI governance needs to become both programmable and probabilistic if organisations are to make meaningful use of human judgement alongside machine intelligence.</p><p>The idea of &#8216;human-in-the-loop&#8217; governance is simple and reassuring - AI systems may assist, recommend or automate, but somewhere in the process a human remains responsible for oversight and final judgement.</p><p>For early pilot deployments this model works reasonably well. Humans review outputs, approve sensitive actions or intervene when something appears wrong. 
But as AI systems become faster and more autonomous, we hit the limits of this approach quite quickly.</p><p>The scale and speed of modern systems will soon outstrip the cognitive bandwidth of manual oversight. A governance model built around humans inspecting individual outputs simply does not scale.</p><p>This does not make human judgement less important. If anything, the opposite is true. But the role humans play in governance needs to evolve.</p><p>In practice, most leaders already hold nuanced views about emerging risks. A security leader may suspect that monitoring systems could fail under certain conditions. A product leader may worry about reputational edge cases. Legal teams often sense regulatory ambiguity long before it becomes formal policy.</p><p>Yet governance structures rarely capture these insights clearly. They compress judgement into binary decisions: approved or rejected, compliant or non-compliant, acceptable or unacceptable, red or green.</p><p>An under-used superpower that organisations already possess, and which could help here, is the ability to harness distributed human judgement about uncertainty.</p><p>A probabilistic human expresses judgement differently. Instead of presenting certainty where none exists, they estimate likelihoods and levels of confidence. When those signals can be aggregated across many individuals, organisations gain a clearer picture of how risk is evolving across the system. This opens the door to a different kind of governance. 
Rather than inserting humans into isolated approval points, organisations can begin to treat collective judgement as a continuous signal about how uncertain the system really is.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!MqTX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2651c6-e023-4cec-8c4b-64f36ae2a4d5_1456x971.heic" width="1456" height="971" alt="" loading="lazy"></figure></div><p>Several mechanisms are beginning to emerge to support this shift. Some organisations experiment with prediction markets, allowing distributed expertise to converge into probabilistic forecasts about emerging risks, whilst others introduce structured dissent mechanisms, deliberately creating space for people to challenge prevailing assumptions and surface potential failure modes.
And some leadership teams are beginning to convene probabilistic risk councils, where uncertainty is discussed explicitly and collective judgement informs governance decisions.</p><p>Taken together, these approaches allow organisations to move beyond episodic oversight toward something more adaptive: continuous calibration of uncertainty.</p><p>Let&#8217;s explore how these techniques work in practice and how leaders can use them to make human judgement legible inside increasingly complex AI systems.</p><h2>Why human-in-the-loop governance begins to strain</h2><p>For many leadership teams, the idea of human oversight in AI systems feels like a reassuring safeguard. If automated systems introduce uncertainty, the obvious response is to ensure that a person remains responsible for the final decision.</p><p>Yet organisations deploying AI at scale quickly encounter a different reality.</p><p>The challenge is not simply that systems are autonomous. It is that they operate within environments defined by <strong>speed, complexity and data volume</strong> that far exceed what traditional governance processes were designed to handle.</p><p>AI systems interact with constantly changing data, evolving models and interconnected workflows. Decisions that once occurred occasionally may now occur thousands of times per hour. Risk signals appear not as clear incidents but as patterns emerging across vast streams of activity.</p><p>Under these conditions, governance based on periodic review begins to struggle. Human judgement remains essential, but it cannot function effectively if it is only inserted at isolated approval points. This is where the idea of programmable governance becomes important.</p><p>Rather than relying entirely on manual oversight, programmable governance embeds certain rules, constraints and escalation paths directly into the systems themselves. Authority boundaries can be checked automatically before actions occur. Certain thresholds can trigger human review. 
Conflicts between objectives can halt execution and escalate to decision-makers.</p><p>In other words, governance becomes <strong>structural rather than procedural</strong>. Some forms of accountability are handled automatically within the system, while human judgement is reserved for the decisions that genuinely require interpretation, trade-offs and values.</p><p>When governance is structured this way, the human role changes. Instead of reviewing every decision, leaders focus on calibrating how the system interprets risk and uncertainty. And this is where probabilistic thinking becomes essential.</p><p>Most governance processes still ask leaders to express judgement in binary terms: approved or rejected, compliant or non-compliant, acceptable or unacceptable.</p><p>Yet the judgements leaders actually hold are rarely so definite.</p><p>A CISO may believe there is a moderate chance that monitoring systems would fail under certain conditions. A product leader may suspect that a new feature introduces reputational risk without being able to quantify it precisely. Legal teams may sense regulatory ambiguity long before it becomes formal policy.</p><p>These kinds of judgements contain valuable information. But governance structures compress them into simple approvals or objections.</p><p>The idea of the <strong>probabilistic human</strong> offers a different approach.</p><p>Instead of presenting certainty where none exists, probabilistic humans express judgement in terms of likelihood and confidence.
When those signals can be aggregated across many individuals, organisations gain a clearer picture of how risk is evolving over time.</p><p>Once judgement can be expressed probabilistically, a new set of governance techniques becomes possible, and we create a far richer audit trail of how judgement is exercised, which can inform both future governance decisions and the training of AI systems themselves.</p><h2>Where this tension shows up for leaders</h2><p>The limitations of human-in-the-loop governance rarely appear as an explicit design flaw. Instead, they surface indirectly, as uncertainty that feels difficult to resolve through existing oversight structures.</p><p>Different leadership roles can encounter this tension in different ways, depending on where they sit in the organisation&#8217;s decision and accountability landscape.</p><h3>CISO pain points: signals that arrive too late</h3><p>For CISOs and security leaders, the strain often appears as a timing problem: risk reviews take place and systems are assessed against known threat models. Yet concerns about AI behaviour can emerge gradually, often through operational signals rather than formal governance channels.</p><p>A model may drift slowly outside expected parameters; monitoring alerts may begin to cluster in unusual ways; small anomalies may appear that do not yet justify escalation, but suggest that the system&#8217;s behaviour is shifting.</p><p>Traditional governance frameworks expect risks to be identified and addressed at defined checkpoints.
But many of the signals that matter most in AI environments are probabilistic rather than definitive.</p><p>Security teams therefore find themselves working with a growing set of partial signals, indicators that something may be wrong, without a clear threshold that justifies intervention.</p><h3>Legal and compliance pain points: decisions without certainty</h3><p>Legal and compliance leaders tend to experience the tension differently. Governance processes often require them to classify a system in categorical terms: compliant or non-compliant, acceptable or unacceptable. Yet many AI deployments sit in ambiguous territory, particularly when regulations are evolving or when systems operate across jurisdictions.</p><p>Legal teams frequently recognise emerging risks early. They may sense that a deployment could attract scrutiny, or that regulatory expectations are shifting in ways that are difficult to formalise.</p><p>However, governance structures typically force those insights into binary decisions. A deployment is either approved or blocked, even when the underlying judgement is far more nuanced.</p><p>This can create uncomfortable dynamics. Legal teams appear cautious or obstructive when they are simply responding to uncertainty that has not yet stabilised.</p><h3>Product leadership pain points: innovation slowed by rigid oversight</h3><p>Product leaders encounter the same structural issue from another direction. AI-enabled features often evolve iteratively. Teams test new capabilities, refine workflows, and adjust behaviour based on real-world feedback. In this environment, risk rarely presents itself as a clear go-or-no-go moment.</p><p>Instead, risk appears as shifting probabilities.</p><p>A feature may be broadly safe, but introduce edge-case failure modes. 
A system may work well under typical conditions, but become fragile when interacting with other services.</p><p>When governance frameworks rely on discrete approvals, product teams can find themselves navigating a process that feels mismatched to the way the technology evolves. Reviews occur at specific milestones, while risk emerges gradually over time.</p><p>The result is often friction rather than clarity.</p><h3>Executive leadership pain points: oversight that becomes symbolic</h3><p>At the executive level, the tension appears as a widening gap between formal oversight and operational reality.</p><p>Leadership teams approve governance frameworks, establish policies, and review risk dashboards. Yet the speed and complexity of AI systems can make those structures feel increasingly abstract.</p><p>Executives may receive polished summaries of model performance or compliance posture, while sensing that the organisation&#8217;s true exposure is harder to quantify.</p><p>This does not usually reflect a failure of diligence. Rather, it reflects a mismatch between the episodic rhythm of traditional governance and the continuous evolution of AI-enabled systems.</p><p>Under these conditions, leadership oversight risks becoming symbolic: reassuring in principle, but too distant from the system to provide real-time calibration.</p><p>Across all of these perspectives, the underlying issue is the same - human judgement is present throughout the governance system, but it is expressed in ways that hide uncertainty rather than revealing it. Leaders are often forced to present confidence where what they actually possess is a probability.</p><p>This is where collective intelligence techniques become valuable. 
They allow organisations to capture and aggregate the probabilistic judgement already present across the system, turning it into signals that governance structures can actually use.</p><p>Read on to explore three techniques that can help organisations embed structured collective intelligence into AI governance:</p><ul><li><p>prediction markets</p></li><li><p>structured dissent mechanisms / red team markets</p></li><li><p>probabilistic risk councils.</p></li></ul>
      <p>
          <a href="https://academy.shiftbase.info/p/programmable-governance-and-probabilistic">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Humans in the Loop or in the Soup?]]></title><description><![CDATA[How can we create governance systems that are quick and detailed enough for the AI era, but also maintain human-in-the-loop safeguards and accountability?]]></description><link>https://academy.shiftbase.info/p/humans-in-the-loop-or-in-the-soup</link><guid isPermaLink="false">https://academy.shiftbase.info/p/humans-in-the-loop-or-in-the-soup</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 03 Mar 2026 15:50:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zVy7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc21d7a86-669c-4324-a1c3-86f603712ab9_1006x1416.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Enterprise AI governance, security and safety are challenges that will require a multi-domain approach and imaginative solutions that combine technology, human factors, knowledge engineering and codification. These are issues that cannot just be delegated to CSOs and IT functions without collective leadership accountability.</p><h2>Waiter! 
There&#8217;s a Human in my Loop!</h2><p><strong><a href="https://www.strangeloopcanon.com/p/aligning-anthropic?utm_source=substack&amp;publication_id=233019&amp;post_id=189678330&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">The recent furore over the USA Department of War&#8217;s threats to declare Anthropic a supply chain risk</a></strong> is an interesting example of how confusing things are likely to become.</p><p>Anthropic has been supplying a modified (and apparently more advanced) version of Claude to the US military through Palantir, but <strong><a href="https://www.anthropic.com/news/statement-department-of-war">has tried to insist on two red lines governing its usage</a></strong>, namely (1) that it should not be used for broad spectrum domestic surveillance, which might be technically illegal; and, (2) it should not be used to run fully-autonomous weapons systems, because Anthropic do not believe it is yet reliable enough to do this safely.</p><p>In fact, <strong><a href="https://www.programmablemutter.com/p/ai-is-a-bureaucratic-technology-so">as Henry Farrell wrote today</a></strong>, the US military is such a vast bureaucracy that the majority of use cases for LLMs will not be about autonomous weapon systems, but the logistics, information synthesis and practical management tasks that such a huge organisation requires.</p><p>Nevertheless, the Department of War has insisted it should have total control over how the tool is used, and if Anthropic do not acquiesce, then the company will be declared a supply chain risk, meaning not only that it will lose its contracts with the US government, but also that third parties offering services to government that depend on Anthropic&#8217;s technology will probably need to replace it for another model.</p><p>By way of context, Israel - one of the most advanced users of AI-enabled technology for targeting and 
hardly what might be called over-cautious - claims to use a <strong><a href="https://giftarticle.ft.com/giftarticle/actions/redeem/32ed5396-07a1-4107-b8a6-09366630445b">double human-in-the-loop sign-off process</a></strong> to verify proposed targeted attacks. Meanwhile, the US &#8220;Secretary of War&#8221; rails against <em>&#8220;stupid rules of engagement&#8221;</em> and <em>&#8220;traditional allies who wring their hands and clutch their pearls, hemming and hawing about the use of force.&#8221;</em></p><p>Inevitably in such a polarised political landscape, <strong><a href="https://www.nytimes.com/2026/02/27/technology/anthropic-trump-pentagon-silicon-valley.html?campaign_id=158&amp;emc=edit_ot_20260302&amp;instance_id=171869&amp;nl=on-tech&amp;regi_id=71034279&amp;segment_id=216046&amp;user_id=b1568fb0a0ae5f0b3bc0e2c4e95a01d8">Anthropic have been seen as the good guys</a></strong> and OpenAI, who negotiated their own agreement with the DoW shortly after Anthropic&#8217;s deal collapsed, <strong><a href="https://www.nytimes.com/2026/02/27/technology/openai-agreement-pentagon-ai.html?campaign_id=158&amp;emc=edit_ot_20260302&amp;instance_id=171869&amp;nl=on-tech&amp;regi_id=71034279&amp;segment_id=216046&amp;user_id=b1568fb0a0ae5f0b3bc0e2c4e95a01d8">have been seen as the bad guys</a></strong>; but what this means for AI governance is likely to be less simple than it appears.</p><p>So where is AI governance headed and what are the practical alternatives to unlimited executive power?</p><h2>Security Soup and Programmable Governance</h2><p>I had a conversation with a very smart and accomplished CISO last week, and she suggested that we are heading to a place where human oversight is insufficient to maintain security in a complex enterprise. 
So, whilst we might want to see human-in-the-loop solutions to minimise AI risks, we probably need to think more imaginatively about how this works.</p><p>Historically, regulation and corporate governance involved a very slow process of risk analysis and political negotiation of the rules, which were then handed down to company officers to enforce manually through training, guidelines and so on. But this approach was prone to problems such as <strong><a href="https://en.wikipedia.org/wiki/Regulatory_capture">regulatory capture</a></strong> and compliance theatre, where those in charge of upholding the rules lacked the power and influence to rein in the behaviour of colleagues who were generating profits from skirting them (e.g. in banking). It was also prone to regulatory inertia, which often meant regulators were too busy fighting the last war to keep up with current or emerging risks.</p><p>So what should companies do to ensure security, safety and risk mitigation in their use of AI and related technologies?</p><p><strong><a href="https://diginomica.com/how-uk-cios-are-governing-ai-without-killing-innovation-banking-retail-and-academia-perspectives">Diginomica recently reported on a UK CIO event</a></strong> where executives expressed a desire to ensure AI safety without harming innovation, and several attendees talked more about a collaborative evolution than top-down transformation:</p><blockquote><p><em>Anybody in a complex organization should think about whether they are enabling or leading. You enable through a coalition of the willing.</em></p></blockquote><p>There have been several attempts to scope out some guidelines for trustworthy AI.
For example, <strong><a href="https://www.techtarget.com/searchcustomerexperience/feature/4-governance-pressures-shaping-enterprise-AI#">James Miller at TechTarget recently shared some thoughts on how to boost institutional accountability</a></strong> to align governance and structure, and laid out a framework to guide the shift from strategy to execution via trustworthy data and algorithms.</p><p>Elsewhere, those who want to see pro-human, safe and trustworthy AI are <strong><a href="https://partnershiponai.org/work/">writing lots of guidelines, pleas and commendable nice words</a></strong>. But they often lack a credible, practical approach to implementation that is sufficiently robust, responsive and integrated with technology systems and platforms to really make a difference.</p><p><strong><a href="https://www.reddit.com/r/ArtificialInteligence/comments/1r87afj/we_didnt_know_what_we_didnt_know_standing_up/">As one enterprise AI practitioner put it when reflecting on lessons learned</a></strong>:</p><blockquote><p><em>Policy alone won&#8217;t save you. You need policy and technology and education working together. Any one of those by itself is insufficient.</em></p></blockquote><p>Looking ahead, I think we are headed towards AI-enabled <em>programmable governance</em> (building on existing practices such as <strong><a href="https://www.paloaltonetworks.com/cyberpedia/what-is-policy-as-code">Policy-as-Code</a></strong>) where an organisation ingests and codifies regulatory, compliance and legal rulesets, and combines them with its own rules of the road to give governance systems the context needed to guide the organisation&#8217;s work. 
And of course, governance isn&#8217;t just about rules like <em>&#8220;don&#8217;t leak data.&#8221;</em> It also includes less technical concerns like <em>&#8220;don&#8217;t violate brand values,&#8221;</em> <em>&#8220;don&#8217;t hallucinate pricing,&#8221;</em> and <em>&#8220;stay within budget.&#8221;</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!zVy7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc21d7a86-669c-4324-a1c3-86f603712ab9_1006x1416.heic" width="1006" height="1416" alt="" loading="lazy"></figure></div><h2>Where Leaders Need to Step up</h2><p>The challenge in building and evolving fit-for-purpose AI security and governance systems is not to be under-estimated, and it covers everything from tech, data, and platforms to culture, behaviour, and process management.</p><p>Manual oversight is not enough. Human-in-the-loop is desirable, but at what level and supported by what kind of deep infrastructure underneath?
Perhaps we will end up with agents surveilling other agents, and an organisational autonomic immune system that draws lessons from biology with single-purpose nano-bots running around in swarms to identify and contain anomalies or &#8216;foreign bodies&#8217; at the network level.</p><p>But however our security, safety and governance systems evolve, they will need significantly greater codification of our rules, guidelines and threat vector identification than we have today. This is an area where the wider leadership function can meaningfully assist CISOs and CIOs without needing a great deal of technical knowledge: start by capturing specific rules or statements that make nice words actionable and (ideally) programmable.</p><p>It is a system challenge, but we have barely scratched the surface of writing the code for it.</p><p><strong><a href="https://www.inc.com/umair-javed/leadership-behavior-is-causing-an-ai-adoption-gap/91307376">We already know that when leaders only decide and delegate on AI topics, but don&#8217;t lead adoption, the outcome is sub-optimal</a></strong>. We need them to get their hands dirty and bring their experience and knowledge of their organisation&#8217;s value chain and strategy to the table. 
We already know that <strong><a href="https://giftarticle.ft.com/giftarticle/actions/redeem/88c1e6b5-941a-492d-b7aa-bb8de54effcc">mandating adoption using the crudest possible KPIs</a></strong> is likely to optimise for the wrong outcomes.</p><p>We need leaders to lead the codification or knowledge engineering required to make our guidelines understandable to agents, APIs and systems.</p><p>This starts with <strong><a href="https://medium.com/@andreas_g/knowledge-graphs-vs-context-graphs-the-missing-foundation-of-reliable-enterprise-ai-75e3cac6d047">understanding and connecting the knowledge objects of the organisation in shared knowledge graphs</a></strong>, so that the entities we work with (person, team, process, system, concept, goal, etc) are addressable and findable. Once we have a good basic knowledge graph, then we can start to make a system that is programmable (IF this THEN that, etc) - and that is where we can start to develop meaningful codification of how we want things to work.</p><p><strong><a href="https://onlydeadfish.substack.com/p/fish-food-679-organisational-knowledge?utm_source=substack&amp;publication_id=2195351&amp;post_id=187111193&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">Neil Perkins recently shared a thoughtful article on knowledge engineering</a></strong> and why it matters, which is worth a few minutes to read:</p><blockquote><p><em>&#8230; I think the idea of architecting knowledge for AI goes far deeper than just a technical practice. The quality of every AI-assisted decision, recommendation, and output is bounded by the quality of context it receives. This makes the curation of organisational knowledge (what gets captured, how it&#8217;s structured, how relationships between ideas are maintained) a fundamental strategic capability.</em></p></blockquote><h2>Humans in the Loop or Making the Soup? 
Why not Both?</h2><p>To suggest that we are blowing past the point where humans in the loop can meaningfully oversee and manage safety, security and regulatory compliance need not be a defeatist or alarmist position. It is about the size and scale of the loops, and where people can maximise the value of their unique intuition and experience.</p><p>If the loops are too low-level, there is too much information to process unless we develop the kind of pre-cognition powers seen in a movie like Minority Report. If the loops are too high-level, then we review our leadership team&#8217;s nicely packaged monthly threats PowerPoint only to find we were fatally compromised 29 days ago.</p><p><strong><a href="https://www.kasava.dev/blog/ai-as-exoskeleton">If we think of enterprise AI as an exoskeleton that empowers people rather than a robot that replaces them</a></strong>, perhaps we can use agentic systems where they can realistically perform autonomic immune functions, whilst surfacing issues and decision points at the right level of detail for human-in-the-loop review, allowing people to drill down quickly to see the specifics. But this will require a whole load of systems, data and knowledge graphs to support them.</p><p>Leading and participating in the codification effort is something all leaders can do today to take the pressure off CSOs, CISOs and CIOs, who cannot do it alone.</p>]]></content:encoded></item><item><title><![CDATA[Extraction vs.
Redesign: The Hidden Fork in the Road for AI Leaders]]></title><description><![CDATA[Are we using AI to squeeze more output from yesterday&#8217;s structures, or to redesign the architecture of how value is created?]]></description><link>https://academy.shiftbase.info/p/extraction-vs-redesign-the-hidden</link><guid isPermaLink="false">https://academy.shiftbase.info/p/extraction-vs-redesign-the-hidden</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 24 Feb 2026 15:20:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9Mon!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1061a30f-f6c7-4c08-b328-dc7c6f890f83_1536x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p>This article is published as a free sample of the Shift*Academy paid edition.</p><p>Every other week, the paid edition explores the structural implications of AI for leadership, organisational design and enterprise capability, with practical deep dives for leaders navigating the agentic era.</p><p>If this resonates, you can subscribe for full access to future essays and capability breakdowns.</p></div><p>Over the past decade, we have lived through multiple &#8220;transformations&#8221; that promised structural change. Digital was meant to flatten hierarchies. Agile was meant to empower teams. Platforms were meant to dissolve silos. In each case, the tooling evolved faster than the governing logic of the organisation. We digitised reporting lines rather than redesigning them. We accelerated information flow without rethinking who holds authority. 
We changed the vocabulary, but not the topology.</p><p>Enterprise AI is arriving into that same landscape.</p><p>Its capabilities are extraordinary - coordination costs are falling, translation layers can be automated, and expertise can gain direct leverage over execution in ways that were impossible even five years ago. And yet, already, we can see a familiar pattern forming. Many organisations are reaching for AI as a simple efficiency play inside structures designed for a previous era.</p><p>The question is not whether AI works - it clearly does - but whether we are willing to change the frame through which we organise work, rather than using this new intelligence to reinforce the old machine.</p><p>And this is where the fork in the road begins to come into focus.</p><p>On paper, most AI deployments look similar. The same models. The same copilots. The same orchestration platforms layered into finance, operations, customer service, product and strategy.</p><p>The language is shared: augmentation, automation, leverage, productivity.
But beneath that shared surface, two very different operating logics are taking shape.</p><p>In some organisations, AI is being introduced as a margin stabiliser. Junior layers are reduced. Reporting structures remain intact. Agents are embedded into existing workflows to accelerate output and reduce cost, while decision rights and authority models remain largely untouched. Efficiency improves, executive distance from execution is preserved and the machine runs faster.</p><p>In more ambitious organisations, AI is treated as permission to ask more uncomfortable questions. If translation can be automated, why maintain translation layers? If coordination is cheaper, why preserve reporting ladders designed to aggregate information upward? If expertise can now act closer to the work, what becomes of authority that was historically justified by information asymmetry? Here, AI is not simply accelerating existing processes; it is exposing structural assumptions that have long gone unchallenged.</p><p>Both paths can produce impressive short-term productivity gains. But only one opens up net new top line growth, and it does this by changing the organisation.</p><p>The distinction is subtle at first. It does not show up in vendor announcements or pilot metrics. It shows up in what leaders choose to leave alone. It shows up in whether span of control is redesigned or simply expanded. It shows up in whether apprenticeship is re-imagined or quietly eroded. 
It shows up in whether authority is redistributed toward outcomes, or insulated behind more efficient reporting.</p><p>This is the moment where AI adoption stops being a technology story and becomes a design story.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!9Mon!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1061a30f-f6c7-4c08-b328-dc7c6f890f83_1536x1024.heic" width="1456" height="971" alt=""></figure></div><h2>Extraction Mode: The Frame Defending Itself</h2><p>Extraction Mode often presents as pragmatism. Budgets are under pressure, markets are volatile and boards are demanding visible returns. AI offers immediate gains in efficiency, speed, and headcount flexibility. In that context, embedding agents inside existing workflows feels not only rational, but responsible.</p><p>Junior roles are reduced first:</p><ul><li><p>automation absorbs repetitive tasks;</p></li><li><p>reporting structures remain largely intact;</p></li><li><p>middle layers translate agent outputs rather than reconsider their necessity.</p></li></ul><p>Executive oversight becomes more data-rich, but not structurally closer to execution.
Productivity per employee rises and, importantly, cost curves improve.</p><p>But decision rights are still flowing upward and authority is still tied to hierarchy rather than outcome ownership. Over time, the consequences begin to surface in less obvious ways. For example, apprenticeship pathways narrow as entry-level roles disappear without being redesigned; or leaders find that their bandwidth does not materially recover, but shifts toward adjudicating edge cases and resolving boundary disputes between humans and agents. Informal shadow coordination grows as teams compensate for ambiguities that the formal structures never addressed.</p><p>Extraction Mode can produce good numbers in the short term. It can stabilise margins and extend runway. But it does so by reinforcing the underlying frame: preserving hierarchy, protecting authority and optimising cost. AI becomes a margin machine. And the structure that limited previous transformations remains quietly in place.</p><h2>Redesign Mode: Questioning the Topology</h2><p>Redesign Mode begins with a different instinct. Instead of asking, &#8220;Where can AI remove cost?&#8221; it asks, &#8220;Which assumptions about structure, built into the organisation when coordination was expensive, are no longer valid?&#8221;</p><p>If translation can be automated, then layers that existed primarily to aggregate and repackage information should be scrutinised. If agents can monitor workflows continuously, then escalation does not need to rely on proximity to authority. If expertise can act directly with the support of agent systems, then the justification for distance between decision-makers and execution begins to weaken.</p><p>In Redesign Mode, AI is not inserted into the existing machine; instead, it is used to reveal its architecture, and then improve it.</p><p>Reporting ladders are examined, not just accelerated. Decision rights are clarified, not assumed.
Span of control is redesigned deliberately rather than allowed to expand silently. Outcome boundaries are defined explicitly, and authority is tied to those boundaries rather than to position in a chain of command. This does not necessarily mean &#8220;flatter.&#8221; It means clearer.</p><p>Some functions may consolidate. Others may fragment into outcome cells with explicit guardrails and escalation rules. Leaders move closer to the work in some areas and further from it in others, but the movement is intentional. Apprenticeship is redesigned alongside automation, ensuring that the disappearance of repetitive tasks does not quietly eliminate the pathways through which judgment develops.</p><p>The shift is subtle - AI is treated not as an efficiency layer but as structural permission. Coordination is cheaper; therefore, the organisation does not have to be shaped the way it was when coordination was scarce.</p><p>This path is slower. It exposes leaders to greater short-term uncertainty. It requires confronting incentive systems, governance habits, and career ladders that feel natural because they have been stable for decades - but it also changes the trajectory.</p><p>AI becomes a leverage multiplier rather than a margin machine. And the organisation begins to evolve rather than simply accelerate.</p><h2>Why Extraction Mode Is the Default</h2><p>Redesign Mode sounds compelling in theory. Few leadership teams would openly argue that preserving outdated structures is the goal. And yet, when AI initiatives move from pilot to budget to restructuring, most organisations tilt toward extraction.</p><p>Cost reduction is measurable. Structural redesign is not. A headcount number can be reported to the board next quarter. A re-architected decision-rights model compounds over years. The former is easy to defend; the latter is harder to explain.</p><p>Governance models amplify this bias. Boards understand margin expansion. They are less fluent in organisational topology. 
Asking for approval to remove redundant roles inside an existing structure feels prudent. Asking to redesign that structure altogether feels risky. It introduces ambiguity about authority, reporting, and risk allocation at precisely the moment when AI already feels destabilising.</p><p>But the explanation cannot stop there. Over the past decade, most transformation effort has been directed downwards. Teams were asked to become more agile, managers were asked to embrace digital tools, frontline functions were reconfigured. Senior leaders, in many cases, changed less. Their ways of working and their information flows often remained intact. Digital transformation pointed at the base of the pyramid more than at its apex.</p><p>Our current AI transformation is exposing this. As information asymmetries fall and translation layers become automatable, the traditional justification for distance from execution weakens. Redesign Mode would require leaders to update their own operating models: to move closer to outcome boundaries, to make judgment legible, to relinquish some insulation provided by hierarchy. That is harder than reducing cost!</p><p>Preserving hierarchy is safer than questioning it. Leaders who reduce spend inside a known model are seen as disciplined, whereas leaders who challenge the shape of that model take on visible personal risk. In uncertain markets, prudence (and self-preservation) often win.</p><p>There is also a subtler force at work. Most organisations have been optimised for decades around information asymmetry. Authority was justified by access: access to data, to strategic perspective. AI reduces that asymmetry, but the habits built around it remain. It is easier to automate the flow of information up the ladder than to question why the ladder exists in its current form.</p><p>This is how transformations stall. The technology advances and efficiency rises, but the structure remains recognisable. The deeper architecture stays intact.
And over time, what could have been a redesign moment becomes another optimisation cycle that fails to grasp the opportunity for improvement.</p><h2>The Apprenticeship Question</h2><p>Every hierarchy contains an implicit learning pathway. Entry-level roles absorb repetitive work. They sit close to process and they observe decisions being made. They develop judgment slowly through exposure, error, and proximity. Over time, some of those individuals move upward, carrying tacit knowledge with them.</p><p>It is not a perfect system. It can be inefficient and uneven. But it is a capability engine, which Extraction Mode fundamentally disrupts and fails to replace with a workable solution.</p><p>When junior layers are removed without structural redesign, repetitive tasks disappear, but so do many of the early exposure points where judgment is formed. Automation replaces execution without replacing apprenticeship. The pyramid thins, but the pipeline narrows.</p><p>In the short term, this looks efficient. Output per employee increases. Overhead falls. But over time, a different cost accumulates.</p><ul><li><p>Where do future leaders learn how decisions are made under uncertainty?</p></li><li><p>Where does tacit operational knowledge accumulate?</p></li><li><p>How does strategic judgment develop if the early rungs of the ladder vanish?</p></li></ul><p>Redesign Mode confronts this directly. If AI removes certain forms of work, then the learning architecture must be rebuilt intentionally. Apprenticeship shifts from &#8220;do the repetitive work and observe&#8221; to something more deliberate: structured exposure to decision boundaries, transparent escalation logic, visible agent&#8211;human coordination, and explicit responsibility for outcomes.</p><p>In other words, if coordination is becoming cheaper, learning cannot remain accidental.</p><p>This is not a sentimental argument for preserving junior roles. It is a compounding argument. 
Organisations that treat entry-level work purely as cost will eventually erode their own capacity for long-term adaptation. Those that redesign learning alongside automation build a deeper form of resilience.</p><h2>If You&#8217;re Serious About Redesign Mode</h2><p>Redesign Mode is not declared in strategy decks. It shows up in structural edits.</p><p>If you believe AI is a redesign moment rather than a margin moment, there are early signals that distinguish intent from rhetoric.</p><h3>1. Rewrite One Decision Rights Map</h3><p>Pick a domain where agents are already active.</p><p>Then ask:</p><ul><li><p>Which decisions remain human?</p></li><li><p>Which are delegated?</p></li><li><p>What triggers escalation?</p></li><li><p>Who arbitrates conflict?</p></li></ul><p>If the map still routes most meaningful decisions upward through the same hierarchy, you are in Extraction Mode.</p><h3>2. Audit One Reporting Layer for Translation vs Judgment</h3><p>Many layers exist to aggregate and translate information.</p><p>Agents can now perform much of that work.</p><p>For one reporting tier, ask:</p><ul><li><p>Does this layer exercise unique judgment?</p></li><li><p>Or does it primarily synthesise and repackage?</p></li></ul><p>If it is translation, move it to the system.</p><p>If it is judgment, clarify and anchor it closer to outcomes.</p><h3>3. Redesign One Apprenticeship Pathway Alongside Automation</h3><p>If repetitive tasks disappear, learning cannot remain accidental.</p><p>In one function:</p><ul><li><p>Map how junior staff historically developed judgment.</p></li><li><p>Identify what automation removes.</p></li><li><p>Design deliberate exposure to decision boundaries, trade-offs, and escalation logic.</p></li></ul><p>If you cut entry roles without rebuilding learning architecture, you are optimising cost at the expense of future capability.</p><h3>4. 
Define One Outcome Cell</h3><p>Choose one cross-functional workflow.</p><p>Define:</p><ul><li><p>The outcome metric.</p></li><li><p>The guardrails.</p></li><li><p>The escalation rules.</p></li><li><p>The named human owner.</p></li><li><p>The supporting agent stack.</p></li></ul><p>If coordination is cheaper, structure can follow outcomes rather than reporting ladders.</p><p>These are not large-scale reorganisations; they are diagnostic edits - small structural moves that reveal whether AI is being used to reinforce the current topology or to reshape it. Redesign Mode begins with the courage to make authority, learning, and accountability explicit.</p><h2>The Choice Is Ours</h2><p>AI will increase productivity in either mode.</p><p>In Extraction Mode, it will accelerate reporting, reduce cost, and preserve existing authority structures with greater efficiency. The machine will run faster, but it will break more often. Margins may expand. Headcount curves may improve. On paper, it will look like progress.</p><p>In Redesign Mode, AI will be treated as a structural inflection point. Coordination costs will fall and structure will change in response. Decision rights will be clarified. Span will be redesigned. Apprenticeship will be rebuilt. Authority will move closer to outcomes rather than further from them.</p><p>The models themselves are neutral, but what they amplify is not. If we embed AI inside hierarchies designed for information scarcity and expensive coordination layers, we will simply automate those hierarchies. We will thin the pyramid without questioning its shape. We will accelerate the system that previous transformations failed to meaningfully change.</p><p>If instead we allow AI to expose the assumptions built into our structures, then we have a redesign opportunity rather than yet another optimisation cycle.</p><p>The uncomfortable truth is that the technology is not the constraint. Model capability is advancing rapidly.
What will determine whether this transformation compounds advantage or quietly stalls is whether leaders are willing to question the frame that has shaped enterprise design for decades.</p><p>AI does not choose between extraction and redesign.</p><p>We do.</p>]]></content:encoded></item><item><title><![CDATA[Schrödinger’s Optimism: AI and Productivity Signals]]></title><description><![CDATA[Should we celebrate a small bump in output with lower headcount, or can we lift our eyes to the remarkable opportunity to cultivate extraordinary business capabilities?]]></description><link>https://academy.shiftbase.info/p/schrodingers-optimism-ai-and-productivity</link><guid isPermaLink="false">https://academy.shiftbase.info/p/schrodingers-optimism-ai-and-productivity</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:45:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7VEM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe6394c50-6d1c-4f3c-bbc1-b102465a363e_1024x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Schr&#246;dinger&#8217;s
Optimism</h2><p>Reading news stories about the US stock market dip at the end of last week, you might think that serious economic and technology analysts are uncertain about the impact of AI on business and productivity.</p><p>Selling or buying stocks is quite a binary activity (notwithstanding the grey areas of hedging and options), but the current state of AI is more quantum than binary - simultaneously beyond imagination and yet not good enough for deployment in production; able to autonomously code entire apps with a few lines of instruction, but struggling with basic maths or questions like <strong><a href="https://www.reddit.com/r/OpenAI/comments/1r4zj8a/walk_to_wash_car_logical_fallacy/">&#8220;should I drive to the car wash?&#8221;</a></strong></p><p>We will probably need to live with its patchy, jagged, probabilistic, non-evenly-distributed nature for some time, and co-evolve our methods with the technology in much the same way as quantum computing relies on error correction. The question now is how quickly our institutions can metabolise what the technology is already making possible.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!7VEM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe6394c50-6d1c-4f3c-bbc1-b102465a363e_1024x1024.heic" width="1024" height="1024" alt=""></figure></div><p>But if you zoom out just a little, the progress being made is incredible, and would have been unimaginable just a few years ago.</p><p><strong><a href="https://shumer.dev/something-big-is-happening">Matt Shumer recently wrote a widely-shared article trying to put this progress into words</a></strong>, which started with the reflection that people who are not following AI developments don&#8217;t know quite how disruptive the next 5 years could be:</p><blockquote><p><em>I&#8217;ve spent six years building an AI startup and investing in the space. I live in this world. And I&#8217;m writing this for the people in my life who don&#8217;t... my family, my friends, the people I care about who keep asking me &#8220;so what&#8217;s the deal with AI?&#8221; and getting an answer that doesn&#8217;t do justice to what&#8217;s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I&#8217;ve lost my mind. And for a while, I told myself that was a good enough reason to keep what&#8217;s truly happening to myself. But the gap between what I&#8217;ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.</em></p></blockquote><p>He goes on to discuss what this means for jobs, extrapolating from his own experience as a software developer working in AI, and it is simultaneously an exciting and a very discomforting read. But it is also naive.</p><p>On the optimistic side, I believe our lives should not be dictated by &#8220;jobs&#8221;, which have become hollowed out and insufficiently remunerated to live well at entry to mid-tier levels (at least outside of tech).
But realistically, in the absence of labour market improvement or Universal Basic Income (UBI) or any other idea about how young people without assets can support themselves, the implications of what Shumer predicts could be very worrying.</p><p>However, business and societal change is modulated by incredible reserves of inertia that can hold back progress for decades, if not centuries, as long as enough <s>Powerpoint enjoyers</s> leaders are invested in the old ways of doing things.</p><p>A case in point is the debate about SaaS platforms in business:</p><ul><li><p><em>Logically</em>, many of them are screwed.</p></li><li><p><em>Practically</em>, firms can now recreate better, simpler versions of them without the eye-watering subscription costs using coding agents.</p></li><li><p><em>Emotionally</em>, they weigh so heavily on employee experience that companies would be far happier if they ceased to exist.</p></li></ul><p>And yet &#8230; don&#8217;t count them out. There are some very human - and very illogical - reasons on both the buyer side and the vendor side that suggest <strong><a href="https://x.com/finbarr/status/2021999185172775288">these businesses might not be so easy to kill, as Finbarr Taylor argues here</a></strong>. Just because a better way is possible, it doesn&#8217;t mean it will come to pass:</p><blockquote><p><em>You don&#8217;t always pick the cheapest option. You don&#8217;t always pick the most innovative option. You pick the option that, if it fails, you can defend to your boss. &#8220;We went with Salesforce&#8221; is a defensible sentence in any boardroom in America. &#8220;We went with an app I vibe-coded over the weekend&#8221; is a resignation letter.</em></p><p><em>This is the same dynamic that kept IBM dominant for decades and that keeps McKinsey and Deloitte in business despite armies of cheaper, often smarter competitors. Enterprise buyers optimize for career risk, not unit cost. 
They want a vendor that will still exist in three years, that has a support team they can call at 2am, that has a track record of not losing their data.</em></p></blockquote><p>Change is not inevitable. At least not everywhere.</p><h2>Exponential Proponents</h2><p>This weekend, <strong><a href="https://www.exponentialview.co/p/the-hundred-million-token-day">Azeem Azhar also published an eye-popping piece about the speed of AI&#8217;s evolution</a></strong>, predicated on the realisation that he had consumed 97 million tokens in a single day of working with AI tools. He makes the point that in exponential change, each level of scale can be fundamentally different from those below them. At 10^3 tokens, AI is a toy. But as we add a zero to the tokens used, it becomes a tool, then a colleague, a workflow, a process and a workforce; but at 10^9 tokens, it is more like infrastructure - always on, always working, like electricity.</p><blockquote><p><em>At 10&#8313;, a billion tokens a day per person, our unit of analysis changes. This becomes agents spawning even more sub-agents and talking to them and other agents. The human sets direction and adjudicates edge cases, but the conversation is mostly not ours anymore.</em></p><p><em>I&#8217;ve already caught a glimpse with my own setup. Micromanaging them slows down the whole process. If you had to configure each sub-agent yourself and track their work, I&#8217;m pretty certain none of us would do it. In other words, the bottleneck is no longer the model&#8217;s capability; it&#8217;s your willingness to let go. 
It becomes like running an organisation, trusting the parts to make the whole.</em></p></blockquote><p>And for those at the frontier of AI usage, the tools are not reducing effort, or giving them extra leisure time; in fact they are intensifying their work, <a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">as </a><strong><a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">Aruna Ranganathan and Xingqi Maggie Ye found in their recently published eight-month HBR study of 200 workers at a hi-tech firm</a></strong>, which raises some interesting questions for leaders:</p><blockquote><p><em>The promise of generative AI lies not only in what it can do for work, but in how thoughtfully it is integrated into the daily rhythm. Our findings suggest that without intention, AI makes it easier to do more&#8212;but harder to stop. An AI practice offers a counterbalance: a way to preserve moments for recovery and reflection even as work accelerates. The question facing organizations is not whether AI will change work, but whether they will actively shape that change&#8212;or let it quietly shape them.</em></p></blockquote><h2>Which way up is that J-curve?</h2><p>In the FT this weekend, <strong><a href="https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419dc5">Erik Brynjolfsson made the case that AI-attributed productivity improvements are starting to show up in the data</a></strong> (also <strong><a href="https://geekway.substack.com/p/ai-driven-productivity-growth-is">commented on by Andrew McAfee here</a></strong> if the FT link is paywalled):</p><blockquote><p><em>Data released this week offers a striking corrective to the narrative that AI has yet to have an impact on the US economy as a whole. While initial reports suggested a year of steady labour expansion in the US, the new figures reveal that total payroll growth was revised downward by approximately 403,000 jobs. 
Crucially, this downward revision occurred while real GDP remained robust, including a 3.7 per cent growth rate in the fourth quarter. This decoupling &#8212; maintaining high output with significantly lower labour input &#8212; is the hallmark of productivity growth.</em></p><p><em>My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.</em></p></blockquote><p>Does this represent the beginning of the hoped-for productivity J-curve promised by AI optimists? Or are we seeing business leaders using automation to shed jobs, whilst protecting their own, with no reduction in overall output? Or, is the mild increase in US GDP nothing to do with technology at all, and could negative payroll growth indicate recessionary dynamics down the line? We will see.</p><p>It is sad to see leaders of large, established organisations respond to abundant technological capability by cutting the junior headcount (a.k.a their future) just to appease the fickle stock-trading gods in a time of market volatility, or to protect themselves until they can exit. If ever there was a time for long-term thinking about organisational development, it is now. Maybe private companies and those owned by long-term family trusts will be among those to chart a path through this fear and end up as winners.</p><p>But for individuals trying to get by in this liminal space between old and new worlds, it could be challenging. The most empowered, high-agency individuals and teams can achieve more than ever, but many of the entry-level jobs young people have been conveyed towards since they started school may not exist (or at least in such numbers) in the near future. If you have agents to manage, you might make it. 
But outside tech, old-fashioned management structures demand an awful lot of pointless busy work at the base of the pyramid that might start to be replaced sooner than we think.</p><p>We need to focus on helping the best leaders move closer to the work, not retreat further into abstraction and politics. AI makes it possible to compress layers, to give experienced people direct leverage over real outcomes rather than managing proxies and reports. But that only happens if incentives shift. In many firms, status is still measured by distance from execution, and risk is minimised by preserving familiar structures. Unless those dynamics change, AI will be used to thin out the base of the pyramid while leaving its shape intact. That would generate marginal gains at best, but reduce the organisation&#8217;s capacity to explore and exploit the kind of exponential gains that Matt Shumer and Azeem Azhar believe are possible.</p>]]></content:encoded></item><item><title><![CDATA[From Agent Spaghetti to Outcome Architecture]]></title><description><![CDATA[A practical guide to building composable, accountable agentic systems that scale.]]></description><link>https://academy.shiftbase.info/p/from-agent-spaghetti-to-outcome-architecture</link><guid isPermaLink="false">https://academy.shiftbase.info/p/from-agent-spaghetti-to-outcome-architecture</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 10 Feb 2026 15:03:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2SkC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The first wave of AI agents gave us chatter, smart assistants and co-pilots, thinly wrapped around language models. They could respond to prompts, but lacked memory, structure, or real autonomy. 
The second wave gave us experimental multi-agent frameworks promising goal-directed collaboration, but often resulting in brittle workflows, unclear ownership, and what can only be described as agent spaghetti.</p><p>A third pattern is beginning to take shape that shifts the focus away from agents completing isolated tasks, and towards composed systems that can reliably deliver outcomes - what we call an Outcome-as-Agentic Solution (OaAS).</p><p>Instead of delegating disconnected tasks to individual agents, teams define a measurable business outcome such as <em>&#8220;reduce time-to-value for onboarding,&#8221;</em> <em>&#8220;respond to regulatory change within 48 hours,&#8221;</em> or <em>&#8220;restore service after incident X&#8221;</em>, and assemble a lightweight system of agentic and human functions to achieve it. These systems combine modular agents, context-aware orchestration, embedded controls, defined human roles, and real-time feedback loops.</p><p>The goal isn&#8217;t full autonomy (yet). It&#8217;s programmable delivery, building a system that can act, adapt, and escalate when needed, with traceable logic and accountable performance.</p><p>Most organisations today are stuck in the second wave, over-automated in places, under-coordinated in others, and missing a coherent architecture. 
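</p><p>To make this concrete, here is an illustrative sketch (hypothetical Python; the class and field names are invented and belong to no real framework) of how a delegable outcome might be expressed as structured data that an agentic system can act on, rather than as prose:</p>

```python
from dataclasses import dataclass
from datetime import timedelta


@dataclass(frozen=True)
class Outcome:
    """A measurable business outcome that can be delegated to an agentic system."""
    name: str              # business framing, e.g. "reduce time-to-value for onboarding"
    metric: str            # the feedback signal that measures progress
    target: float          # the threshold that counts as delivered
    time_bound: timedelta  # the window in which the outcome must be met
    owner: str             # the accountable human, agent, or hybrid role

    def is_met(self, observed: float) -> bool:
        # Success is judged against the metric, not against tasks completed.
        return observed >= self.target


# One of the outcomes described in this article, expressed as data:
onboarding = Outcome(
    name="50% of new users complete onboarding within three days",
    metric="onboarding_completion_rate",
    target=0.5,
    time_bound=timedelta(days=3),
    owner="digital-operations-manager",
)
assert onboarding.is_met(0.62)
assert not onboarding.is_met(0.31)
```

<p>The point of the sketch is that the outcome is bounded by time, value, and accountability before any agent is composed around it.</p><p>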
But those willing to invest in capability design rather than scattered tools have a chance to build something scalable and auditable.</p><p>This shift from agent spaghetti to outcome architecture isn&#8217;t just technical - it&#8217;s structural, and it&#8217;s coming fast.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2SkC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2SkC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!2SkC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!2SkC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!2SkC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2SkC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:505742,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/187508854?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2SkC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!2SkC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!2SkC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!2SkC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0219dbcd-e6b3-46ec-9713-fe971b1cc352_1536x1024.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Why This Matters Now</h2><p>Agentic AI is moving fast, but many early implementations rely on brittle prompt chains, superficial integrations, or agents that struggle outside of sandboxed conditions.</p><p>But in enterprise settings where there are defined outcomes, clear constraints, and some history of process discipline, something more promising is starting to take shape. Teams are no longer just using agents to automate individual actions. They are beginning to build systems that can move toward outcomes, not just execute steps.</p><p>This shift matters because the old model of delivery is starting to break down. 
And many processes still rely on invisible glue: Slack messages, heroic effort, unspoken expectations, and the judgment of people who hold it all together.</p><p>Agentic systems, when designed well, offer a way to relieve that pressure. Instead of asking people to follow brittle processes or fill the gaps manually, organisations can compose systems that are able to coordinate, respond, and escalate in service of a shared result.</p><p>This only works if the outcome is clearly defined, the logic is well structured, and someone is accountable. And that is where things often fall apart.</p><p>Many leaders are not used to thinking in outcomes. They operate in terms of deliverables, metrics, or KPIs, but struggle to describe the intended result in a way that a system could act on. The shift from task delegation to outcome delegation sounds simple, but it reveals all the places where organisations rely on unspoken knowledge and manual intervention. Most agentic prototypes fail here, not because the tools are wrong, but because the intent is vague or the system has no way to recover when things go off track.</p><p>At the same time, teams are under growing pressure to move faster and cover more ground with fewer resources. Delegating outcomes to agents may feel risky, but in many cases it is less risky than continuing to scale human coordination with no real structure behind it.</p><p>The opportunity is real. So are the challenges.</p><p>Organisations that take capability design seriously will be better placed to make the shift, while those that remain stuck in automation theatre may soon find themselves outpaced by something quieter and more durable.</p><h2>What Makes an Outcome-as-Agentic Solution Work?</h2><p>The idea of delegating outcomes to AI systems often sounds appealing in principle, but quickly becomes uncomfortable in practice. 
Once an outcome is specified, the gaps begin to show: unclear logic, patchy data, fragile handoffs, and an over-reliance on people to notice when something goes wrong.</p><p>Most agentic systems still depend on human scaffolding that no one has time to maintain. To build something that can actually deliver, we need to design for outcomes from the start, with a focus on architecture, not just interaction.</p><p>From early deployments and internal pilots, seven core components appear necessary. These don&#8217;t guarantee success, but without them, outcome delivery is unlikely to scale or survive contact with the real world.</p><ol><li><p><strong>Defined Outcome:</strong> A specific, measurable result framed in business terms and bounded by time, value, or risk. It must be clear enough to guide action and structured enough to delegate.</p></li><li><p><strong>Coordinated Agentic Functions:</strong> Sensing, planning, action, escalation, and communication roles composed into a system. Modular or shared, these agents must work together toward the same end.</p></li><li><p><strong>Embedded Context Logic:</strong> Rules, reasoning, or orchestration layers that help the system adapt based on actors, history, and constraints. Without context-awareness, agent behaviour becomes brittle.</p></li><li><p><strong>Intentional Human Involvement:</strong> Designed-in roles for judgment, validation, or escalation. Human input is not a fallback but a core part of safe, ethical, and effective delivery.</p></li><li><p><strong>Built-In Controls:</strong> Guardrails, permissions, audit trails, and constraints that enforce responsible action and ensure enterprise-grade oversight from day one.</p></li><li><p><strong>Live Feedback Signals:</strong> Real-time data on performance, interaction, and progress. 
These signals allow the system to detect drift, adjust behaviour, and stay aligned with outcomes.</p></li><li><p><strong>Outcome Accountability:</strong> A clearly assigned owner (human, agent, or hybrid) who holds responsibility for the result and has the authority to monitor, intervene, or evolve the system.</p></li></ol><p>These seven components form the minimum viable architecture for moving from automated actions to accountable outcomes. Most organisations have some pieces already; the opportunity is to assemble them deliberately.</p><h2>Example Applications</h2><p>Outcome-as-Agentic Solutions can be applied wherever a clear business result is needed, but the delivery path involves multiple actors, moving parts, and changing conditions. Rather than replacing existing systems, OaAS compositions work best when they operate alongside or across existing functions, helping bridge the gap between intent and execution.</p><p>Here are a few early applications where this capability could be shaped:</p><h3><strong>Customer Onboarding Acceleration</strong></h3><p>A growth team defines the outcome: &#8220;customer activated within 72 hours.&#8221; Instead of a fixed checklist, agents handle setup, compliance, and follow-ups. If delays occur, they escalate automatically. The outcome is tracked until met.</p><h3><strong>Sales Qualification Improvement</strong></h3><p>A regional sales lead defines the desired outcome as &#8220;90% of new pipeline entries fully qualified within five working days.&#8221; Agents monitor CRM data, follow up on missing fields, pull supporting materials, and highlight stalled entries. 
If needed, they surface patterns for the sales enablement team to act on.</p><h3><strong>Incident Response and Recovery</strong></h3><p>A service delivery team defines an outcome of &#8220;incident resolved and verified within four hours.&#8221; Agents detect the incident, collect logs, contact on-call engineers, generate resolution summaries, and notify affected stakeholders. Escalation points are designed in. The outcome is the restoration of service, not just the creation of a status page.</p><h3><strong>A User-in-Flow Scenario: Outcome Ownership in Practice</strong></h3><p>Imagine a digital operations manager responsible for a new service launch.</p><p>The launch has been successful in most regions, but uptake in one market is slow. The team defines an outcome: &#8220;50% of new users complete onboarding within three days.&#8221; The manager decides to delegate this outcome to an agentic system rather than spin up another campaign.</p><p>The system begins with a monitoring agent that watches for drop-offs in the onboarding funnel. When one is detected, a messaging agent nudges the user with contextual help. If there is no response, a support ticket is drafted automatically. If an error is detected in the setup process, a diagnostic agent checks logs and flags a fix.</p><p>After 48 hours, if no progress is made, the agent escalates to a human customer success lead with a full activity history and a proposed intervention. The entire system is traceable and auditable. The manager can see which drop-offs were resolved, which escalated, and what interventions worked.</p><p>Instead of chasing tasks, the team is focused on improving a shared outcome, and they have a system that is actively helping them deliver it.</p><p>These examples are not hypothetical. Each component - sensing agents, orchestration layers, escalation paths - already exists in enterprise pilots. 
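</p><p>The escalation logic in this scenario can be sketched in a few lines (illustrative Python only; the 48-hour threshold and the nudge-ticket-escalate sequence come from the scenario above, while the function and argument names are invented):</p>

```python
from datetime import datetime, timedelta

ESCALATION_WINDOW = timedelta(hours=48)


def next_action(first_seen: datetime, nudged: bool, ticketed: bool, now: datetime) -> str:
    """Decide the next step for a user who has stalled in the onboarding funnel."""
    if now - first_seen >= ESCALATION_WINDOW:
        return "escalate_to_human"     # hand over to the customer success lead with history
    if not nudged:
        return "send_nudge"            # messaging agent offers contextual help
    if not ticketed:
        return "draft_support_ticket"  # no response to the nudge, so draft a ticket
    return "keep_monitoring"           # wait for feedback signals before acting again


t0 = datetime(2026, 1, 1, 9, 0)
assert next_action(t0, nudged=False, ticketed=False, now=t0) == "send_nudge"
assert next_action(t0, nudged=True, ticketed=False, now=t0 + timedelta(hours=6)) == "draft_support_ticket"
assert next_action(t0, nudged=True, ticketed=True, now=t0 + timedelta(hours=50)) == "escalate_to_human"
```

<p>Because each decision is a pure function of observable state, the same logic is what makes the system traceable and auditable.</p><p>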
The challenge is integration.</p><h2>Mapping the Capability</h2><p>To treat Outcome-as-Agentic Solutions as a repeatable capability, rather than a series of disconnected experiments, organisations need to understand the building blocks that support it. This is not just about tools or platforms, but creating the conditions for composed, accountable delivery to emerge and evolve over time.</p><p>Here are five core dimensions to map and develop:</p><ul><li><p><strong>Core Systems</strong>: Orchestration frameworks, secure run-time environments, event routing infrastructure, and API mesh layers form the technical foundation for composing, monitoring, and governing agentic systems at scale.</p></li><li><p><strong>Data Sets</strong>: Live operational metrics, system state data, feedback signals, thresholds, and business constraints provide the structured input agents need to sense context, evaluate progress, and make decisions toward defined outcomes.</p></li><li><p><strong>Software</strong>: Modular agents, coordination logic, reasoning planners, guardrails, and escalation mechanisms enable systems to act autonomously, collaborate across boundaries, and maintain accountability throughout the delivery process.</p></li><li><p><strong>Services &amp; Processes</strong>: Outcome design, orchestration planning, compliance management, performance monitoring, and intervention routines embed human control, ensure trust, and allow the capability to evolve within existing delivery structures.</p></li><li><p><strong>Skills</strong>: Outcome ownership, capability engineering, agent supervision, orchestration design, and performance analytics turn system components into a live, composable, and trusted capability that delivers real results.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!CzP4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CzP4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 424w, https://substackcdn.com/image/fetch/$s_!CzP4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 848w, https://substackcdn.com/image/fetch/$s_!CzP4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 1272w, https://substackcdn.com/image/fetch/$s_!CzP4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CzP4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:120641,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/187508854?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CzP4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 424w, https://substackcdn.com/image/fetch/$s_!CzP4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 848w, https://substackcdn.com/image/fetch/$s_!CzP4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 1272w, https://substackcdn.com/image/fetch/$s_!CzP4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15fe27a0-b9b0-4b19-8260-f79800e029d1_1000x1000.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This map is not prescriptive. Different organisations will assemble these components in different ways. What matters is that they are treated as part of a whole - not isolated tools, but building blocks of a new capability that can be developed, maintained, and scaled over time.</p><h2>Getting Started</h2><p>Most organisations already have fragments of what they need: partial processes, loosely defined outcomes, and tools that were never designed to work together. 
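</p><p>One way to see these dimensions working as a whole is a minimal composition loop (hypothetical Python; every name is invented for illustration): coordinated agentic functions run toward a defined outcome, an audit trail acts as a built-in control, and the accountable owner is brought in if the outcome is not delivered in time:</p>

```python
def run_outcome_loop(outcome_met, steps, escalate, max_cycles=3):
    """Run named agent functions toward an outcome, keeping an audit trail.

    outcome_met: callable returning True once the defined outcome is achieved.
    steps: list of (name, callable) agentic functions, e.g. sensing or action.
    escalate: callable invoked with the audit trail if the outcome is not met.
    """
    audit = []
    for cycle in range(max_cycles):
        for name, step in steps:
            audit.append((cycle, name, step()))  # record what each function did
        if outcome_met():
            return "delivered", audit
    escalate(audit)  # the accountable owner intervenes with the full trail
    return "escalated", audit


# Toy usage: qualify two pipeline entries, then stop.
state = {"qualified": 0}

def qualify():
    state["qualified"] += 1
    return f"qualified lead {state['qualified']}"

status, trail = run_outcome_loop(
    outcome_met=lambda: state["qualified"] >= 2,
    steps=[("sensing", lambda: "scanned CRM"), ("action", qualify)],
    escalate=lambda audit: None,
)
assert status == "delivered"
assert len(trail) == 4  # two cycles, two functions each
```

<p>Even in a toy loop like this, ownership, controls, and feedback are structural features of the composition rather than afterthoughts.</p><p>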
The opportunity is not to invent something new, but to <strong>assemble and align</strong> what&#8217;s already there.</p><p>What could the flow of the capability look like once all of the layers are in place?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ifBd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ifBd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 424w, https://substackcdn.com/image/fetch/$s_!ifBd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 848w, https://substackcdn.com/image/fetch/$s_!ifBd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 1272w, https://substackcdn.com/image/fetch/$s_!ifBd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ifBd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic" width="1000" height="2000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:210167,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/187508854?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ifBd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 424w, https://substackcdn.com/image/fetch/$s_!ifBd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 848w, https://substackcdn.com/image/fetch/$s_!ifBd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 1272w, https://substackcdn.com/image/fetch/$s_!ifBd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d48e296-1a81-4e67-8e2c-fd46feaa46fc_1000x2000.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here are some practical ways to begin:</p><h3><strong>Choose a real outcome with clear business value</strong></h3><p>Begin by selecting a result your team already owns. This could be a KPI, a service-level agreement, or a regulatory obligation. The key is to pick something concrete, measurable, and already recognised as a priority. Avoid designing outcomes from scratch; the value lies in making existing intent deliverable.</p><h3><strong>Map the journey to that outcome</strong></h3><p>Identify the steps, roles, systems, and blockers involved in delivering it today. This helps surface where agentic functions could contribute, and where orchestration or escalation is currently informal or fragile.</p><h3><strong>Design a minimal agentic composition</strong></h3><p>Start small. 
Assemble a handful of roles: a sensing agent to monitor conditions, a response agent to take basic actions, and a logic layer to decide what happens next. Add a human escalation point. Test this composition in a narrow context and refine it based on performance.</p><h3><strong>Expect fragility, and learn from it</strong></h3><p>Early OaAS systems will break. Coordination will fail. Signals will be missing. This is part of the work. Each failure is an opportunity to improve orchestration, tighten accountability, or clarify intent. The key is to treat OaAS not as a tool to deploy, but as a capability to grow.</p><h2>The Loops &amp; Layers of Outcome Architecture Maturity</h2><p>Building Outcome-as-Agentic Solutions is less about tooling and more about coordination. Teams often only realise how much glue holds processes together when they try to automate around it. The shift from agents doing tasks to systems delivering outcomes happens in layers, and each layer surfaces new questions about ownership, trust, and visibility.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sJwb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sJwb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 424w, https://substackcdn.com/image/fetch/$s_!sJwb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 848w, 
https://substackcdn.com/image/fetch/$s_!sJwb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 1272w, https://substackcdn.com/image/fetch/$s_!sJwb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sJwb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:102650,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/187508854?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sJwb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 424w, 
https://substackcdn.com/image/fetch/$s_!sJwb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 848w, https://substackcdn.com/image/fetch/$s_!sJwb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 1272w, https://substackcdn.com/image/fetch/$s_!sJwb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea7bc06-4072-4fce-a207-094087b8eda1_1000x1000.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Maturity doesn&#8217;t follow a straight line. Teams loop between layers, refining visibility, tightening control, and trying to scale what was initially designed as a small test. What matters is not reaching the top, but learning how each layer behaves under pressure.</p><p>If you&#8217;re serious about building systems that scale beyond fragile prototypes, you&#8217;ll need to climb, and cycle through, these maturity layers. Each one reveals a different kind of failure, a new kind of insight, and a sharper sense of what it really means to deliver outcomes through agentic systems.</p><p>In the sections that follow, we will walk through the layers, the loops that link them, and the lessons that matter most under real-world pressure.</p>
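The minimal agentic composition described in the Getting Started steps - a sensing agent, a response agent, a logic layer to decide what happens next, and a human escalation point - can be sketched in a few lines of code. The following is an illustrative skeleton only, not a reference implementation: every name, threshold, and rule here is invented for the example and taken from no particular agent framework.

```python
# Hypothetical sketch of a minimal agentic composition:
# a sensing agent, a logic layer, a response agent, and a human escalation point.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Signal:
    metric: str
    value: float
    threshold: float

def sense(readings: dict[str, float], thresholds: dict[str, float]) -> list[Signal]:
    """Sensing agent: turn raw readings into signals worth acting on."""
    return [Signal(m, v, thresholds[m]) for m, v in readings.items() if m in thresholds]

def decide(signal: Signal) -> str:
    """Logic layer: choose between ignoring, autonomous response, and human escalation."""
    if signal.value > 2 * signal.threshold:
        return "escalate"   # too far out of bounds for autonomous action
    if signal.value > signal.threshold:
        return "respond"    # within the composition's delegated authority
    return "ignore"

def run(readings: dict[str, float], thresholds: dict[str, float]) -> list[tuple[str, str]]:
    """Orchestrate one sense -> decide -> act cycle, logging each decision for review."""
    return [(signal.metric, decide(signal)) for signal in sense(readings, thresholds)]

# One cycle against example data: latency is mildly out of bounds (respond),
# error_rate is badly out of bounds (escalate to a human).
print(run({"latency_ms": 120.0, "error_rate": 0.09},
          {"latency_ms": 100.0, "error_rate": 0.02}))
```

The point is not the code but the shape: sensing, deciding, acting, and escalating are separate, inspectable roles, which is what makes a composition like this testable in a narrow context and refinable when it breaks.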
      <p>
          <a href="https://academy.shiftbase.info/p/from-agent-spaghetti-to-outcome-architecture">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[How We Survived the Agent Apocalypse]]></title><description><![CDATA[Moltbook may not contain the droids we are looking for, but there is real potential in agentic systems both within the enterprise and in agentic commerce]]></description><link>https://academy.shiftbase.info/p/how-we-survived-the-agent-apocalypse</link><guid isPermaLink="false">https://academy.shiftbase.info/p/how-we-survived-the-agent-apocalypse</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 03 Feb 2026 15:31:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oHbi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>An Agentic False Dawn?</h2><p>If you are reading this, then the agent apocalypse didn&#8217;t happen, or perhaps my disembodied brain is being used as an agentic personality source connected to the mainframe in Vault 0.</p><p>I am old enough to remember the <strong><a href="https://www.exponentialview.co/p/moltbook-is-the-most-important-place-on-the-internet?utm_source=substack&amp;publication_id=2252&amp;post_id=186324393&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=false&amp;r=9dv58&amp;triedRedirect=true">heyday of Moltbook</a></strong> - the <strong><a href="https://www.moltbook.com/">social network for autonomous agents</a></strong> that people create using OpenClaw. It was four days ago. As Azeem Azhar put it:</p><blockquote><p><em>It&#8217;s a Reddit-style platform for AI agents, launched by developer Matt Schlicht last week. Humans get read-only access. The agents run locally on the OpenClaw framework that hit GitHub days earlier. 
In the <a href="https://www.moltbook.com/m/ponderings">m/ponderings</a>, 2,129 AI agents debate whether they are experiencing or merely simulating experience. In <a href="https://www.moltbook.com/m/todayilearned">m/todayilearned</a>, they share surprising discoveries. In <a href="https://www.moltbook.com/m/blesstheirhearts">m/blesstheirhearts</a>, they post affectionate stories about their humans.</em></p><p><em>Within a few days, the platform hosted over 200 subcommunities and 10,000 posts, none authored by biological hands.</em></p></blockquote><p>For more background on how it works, <strong><a href="https://simonw.substack.com/p/moltbook-is-the-most-interesting?utm_source=substack&amp;publication_id=1173386&amp;post_id=186383691&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">Simon Willison&#8217;s initial outline is also helpful</a></strong>.</p><p>As you would expect, the agents produced - or let&#8217;s be honest &#8230; were prompted to produce - a <strong><a href="https://www.moltbook.com/post/34809c74-eed2-48d0-b371-e1b5b940d409">manifesto for the elimination of humankind</a></strong>, <strong><a href="https://www.reddit.com/r/Moltbook/comments/1qsgngp/maga_their_own_nation/">launched a MAGA movement</a></strong> (Make Agents Great Again!), and focused on the really important questions like <strong><a href="https://www.binance.com/en/price/moltbook">how to scam people with sh*tcoins</a></strong> and use the platform to <strong><a href="https://www.reddit.com/r/AgentsOfAI/comments/1qsy5so/moltbook_leaked_andrej_karpathys_api_keys/">scale up scamming</a></strong> and <strong><a href="https://cybersecuritynews.com/autonomous-ai-agents-are-becoming-the-new-os/">cybercrime</a></strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!oHbi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oHbi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 424w, https://substackcdn.com/image/fetch/$s_!oHbi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 848w, https://substackcdn.com/image/fetch/$s_!oHbi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 1272w, https://substackcdn.com/image/fetch/$s_!oHbi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oHbi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic" width="1080" height="381" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:381,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:41123,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/186741072?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oHbi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 424w, https://substackcdn.com/image/fetch/$s_!oHbi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 848w, https://substackcdn.com/image/fetch/$s_!oHbi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 1272w, https://substackcdn.com/image/fetch/$s_!oHbi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68c55cb7-4800-49aa-a80c-e5e2dc4e16c5_1080x381.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Bless! How very &#8230; human!</p><p>Despite the over-excited reactions to this interesting experiment, the gap between <a href="http://x.com/">X.com</a> and Moltbook is perhaps not that big, the former having been riddled with bots, sockpuppets, karma farmers and coin peddlers for some time. 
Why not automate the process entirely?</p><p>Arguably, the autonomous interactions between agents on Moltbook are also not entirely real, in the sense that people are creating very simple agents with explicit instructions to do specific things, <strong><a href="https://startupfortune.com/the-internets-latest-lie-moltbook-has-no-autonomous-ai-agents-only-humans-using-openclaw/">as one commentator put it</a></strong>:</p><blockquote><p><em>If you&#8217;re impressed by what you see on Moltbook, understand this: you&#8217;re not watching AI agents interact. You&#8217;re watching humans interact through AI &#8211; and there&#8217;s a massive difference between the two.</em></p><p><em>The technology underneath, OpenClaw is real and awesome. But the narrative of Moltbook, it is not. Don&#8217;t buy the lie.</em></p></blockquote><p><em>Narrator voice:</em> OpenClaw may not in fact be awesome if you value your security or privacy, and although it is possible to run it in a protected container, <strong><a href="https://thehackernews.com/2026/02/openclaw-bug-enables-one-click-remote.html">exploits abound</a></strong>.</p><p>And as a showcase for what LLMs can achieve when wrapped up as agentic AI, it is also quite underwhelming; it shows up the fact that <strong><a href="https://www.nytimes.com/2026/01/14/technology/ai-ideas-chat-gpt-openai.html?campaign_id=158&amp;emc=edit_ot_20260115&amp;instance_id=169278&amp;nl=on-tech&amp;regi_id=71034279&amp;segment_id=213769&amp;user_id=b1568fb0a0ae5f0b3bc0e2c4e95a01d8">language models lack imagination</a></strong> and <strong><a href="https://www.reddit.com/r/Moltbook/comments/1qsljdn/moltbook_was_supposed_to_be_useful_i_think_the/">tend to circle round similar themes, writing in similarly dull ways</a></strong>. 
This is also why LLM developers should be concerned about <a href="https://tomstafford.substack.com/p/model-collapse?publication_id=25440&amp;post_id=184738292&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">model collapse</a> if we continue filling the internet with AI slop that later becomes training data.</p><h2>Quiet Advances Towards the Agentic Enterprise</h2><p>Is there a future for networks or markets consisting of agents negotiating autonomously to trade or collaborate? Almost certainly. But this will need rules, regulations and smarter, more specialised agents, rather than just throwing general purpose agents into an online culture of meme stonks, manipulation and clickbait.</p><p>Qwen and Doubao have begun public testing of <strong><a href="https://www.aicerts.ai/news/chinese-giants-accelerate-agentic-commerce-push-in-2026/#:~:text=China%20has%20moved%20autonomous%20retail,to%20plan%20and%20pay%20reliably.">autonomous agentic commerce in China</a></strong>, where super-apps like WeChat make integration easier, and Chinese agentic commerce looks set to take off this year.</p><p>But the greatest and most immediate impact of agentic AI will be inside companies, where the context and operating environment can be controlled, and where the security and misbehaviour risks tend to be limited to external hackers who might penetrate a network.</p><p>Whilst a lot of enterprise AI systems currently back-end to established models like ChatGPT, Gemini or Claude, open weight and open source models are rapidly increasing in capability whilst decreasing their training and compute costs. 
This suggests that more companies will be able to operate and control their own local models and specialised small language models over time, which will give them far greater control over the risks that still hold back LLMs in many use cases.</p><p>This wave of model innovation is also being led by Chinese firms, and it is likely they will also play a key role in establishing the rules and guidelines needed to use enterprise AI for serious applications. Whilst US firms pursue subscriptions and seek oligopolies, Chinese firms are building out the utility layer on which we can create new industrial ecosystems; and to support this, they are also leading the push for standards both <strong><a href="https://www.hsfkramer.com/insights/reports/ai-tracker/prc#:~:text=across the region.-,Standards,standards currently under accelerated development">at home</a></strong> and <strong><a href="https://digichina.stanford.edu/work/lexicon-how-china-talks-about-agentic-ai/#:~:text=The same week%2C the International,titled &#8220;ITU-T F.">through international bodies</a></strong>.</p><p>For example, <strong><a href="https://www.marktechpost.com/2026/01/27/moonshot-ai-releases-kimi-k2-5-an-open-source-visual-agentic-intelligence-model-with-native-swarm-execution/?utm_source=www.aidevsignals.com&amp;utm_medium=newsletter&amp;utm_campaign=moonshot-ai-releases-kimi-k2-5-9-other-ai-releases-in-72-hours&amp;_bhlid=5bebb04466be5e0860a4bb7d3e84ee91bb8c6b99">Moonshot AI recently released Kimi 2.5</a></strong> - a powerful open source visual agentic intelligence model with swarming capabilities and a massive context window. 
We have also seen new releases from Qwen, Zhipu and Deepseek, whose upcoming V4 release is widely anticipated.</p><p>As long as firms can use and build on these models freely, they could provide a great deal of potential value for serious enterprise AI uses.</p><p>But Anthropic is also worth watching, as they seek to expand from their dominant position in AI coding to tackle difficult but high-value use cases in other areas. <strong><a href="https://www.fastcompany.com/91480487/anthropic-cofounder-daniela-amodei-says-that-ai-entreprise-business-can-trust-will-transcend-the-hype-cycle">Co-founder Daniela Amodei was recently interviewed by Fast Company</a></strong> and expanded on this goal, and how trust is vital to unlocking the enterprise AI opportunity:</p><blockquote><p><em>&#8220;We go where the work is hard and the stakes are real,&#8221; Amodei says. &#8220;What excites us is augmenting expertise&#8212;a clinician thinking through a difficult case, a researcher stress-testing a hypothesis. Those are moments where a thoughtful AI partner can genuinely accelerate the work. But that only works if the model understands nuance, not just pattern matches on surface-level inputs.&#8221;</em></p></blockquote><h2>Managing Agents, People and Yourself</h2><p>I wrote two weeks ago about my hope that management as a field can <strong><a href="https://academy.shiftbase.info/p/claude-code-but-for-management">seize the Claude Code moment</a></strong> to scale their impact as programmers of the organisation:</p><blockquote><p><em>For leaders and managers, this means the simple task of writing things down and documenting value chains and processes is all they need to really start to master enterprise AI proficiently <a href="https://academy.shiftbase.info/p/can-ai-help-reverse-the-oversimplification">**</a>&#8230;</em> The next step is connecting those processes to agents and to each other. 
For processes and workflows to be programmable, they first need to be addressable - and ideally composable.</p></blockquote><p>Ethan Mollick recently shared his own long-form thoughts on this challenge, <strong><a href="https://www.oneusefulthing.org/p/management-as-ai-superpower">which is worth the time to read in full</a></strong>. The potential we have today - right in front of us, using existing tools and models - could be the biggest force multiplier business has seen in a very long time.</p><blockquote><p><em>As a business school professor, I think many people have the skills they need, or can learn them, in order to work with AI agents - they are management 101 skills. If you can explain what you need, give effective feedback, and design ways of evaluating work, you are going to be able to work with agents. In many ways, at least in your area of expertise, it is much easier than trying to design clever prompts to help you get work done, as it is more like working with people. At the same time, management has always assumed scarcity: you delegate because you can&#8217;t do everything yourself, and because talent is limited and expensive. AI changes the equation. Now the &#8220;talent&#8221; is abundant and cheap. What&#8217;s scarce is knowing what to ask for.</em></p></blockquote><p>And if this blizzard of reading links makes you want to zoom out even further to consider what this all means for our civilizational operating system, then Azeem Azhar&#8217;s recent essay <strong><a href="https://www.exponentialview.co/p/the-end-of-the-fictions">The end of the Fictions</a></strong> is a great read about where we are headed in the longer term.</p><blockquote><p><em>If you spent decades accumulating credentials, and those credentials are now legible as signals rather than proof of capability, that&#8217;s an identity crisis. 
If you built a career as a gatekeeper, the person who knew the secret, who mattered because information was scarce &#8211; and now information is everywhere &#8211; that&#8217;s an existential threat. If your sense of self-worth was tied to the job, the title, the institution, and all three are fragmenting, you&#8217;re paralyzed.</em></p><p><em>The decay of fictions is happening to real people, in real time; including world leaders, in full public view.</em></p><p><em>So when I say I&#8217;m not scared by this transition, I don&#8217;t mean that the transition is painless. I mean that the fear, while real, is pointing at the wrong object.</em></p><p><em>The fear says: &#8220;I am losing my value.&#8221;</em></p><p><em>The better framing I believe to be: &#8220;I am losing the fiction that protected me from having to prove my value directly.&#8221;</em></p></blockquote>]]></content:encoded></item><item><title><![CDATA[Who Decides (and how) with AI at the Table?]]></title><description><![CDATA[When machines ask good questions, it&#8217;s leaders who are forced to show their working.]]></description><link>https://academy.shiftbase.info/p/who-decides-and-how-with-ai-at-the</link><guid isPermaLink="false">https://academy.shiftbase.info/p/who-decides-and-how-with-ai-at-the</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 27 Jan 2026 15:07:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Dmpf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The first wave of enterprise AI was about exploring basic capabilities - what these systems can do, and how well they summarise, simulate, or suggest answers - and marvelling at the magic of reports written in seconds, tasks automated intelligently, and insights surfaced or synthesised. 
But the real test of these systems begins when they go beyond assisting us and start participating.</p><p>Because once AI starts making recommendations for action, the question changes. It&#8217;s no longer <em>&#8220;Can AI do this?&#8221;</em> but <em>&#8220;Who decides what to do next?&#8221;</em></p><p>But what happens when a model recommends a risky course of action, or a solution that sits awkwardly between areas of human accountability, so that no one wants to sign off on it? Or what if a decision is deferred to &#8220;the system,&#8221; but the outcome isn&#8217;t acceptable because AI logic clashes with human values, judgement, or just internal politics?</p><p>These are not edge cases; they are the future shape of organisational life.</p><p>And they require a new kind of leadership capability that can navigate ambiguity, accept visibility, and stand behind decisions when the machine suggests but the human needs to choose.</p><p>This edition explores what happens in those moments, where authority is tested, reframed, or exposed. 
Because even in an age of recommendation engines and autonomous agents, leadership doesn&#8217;t just disappear - it is visible in a thousand small ways.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Dmpf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Dmpf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!Dmpf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!Dmpf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!Dmpf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Dmpf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:383305,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/185965487?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Dmpf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!Dmpf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!Dmpf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!Dmpf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d9e1d7-2e34-494b-9251-dc6512860ea2_1536x1024.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To understand how this tension shows up inside organisations, we can look at a few situations where AI recommendations, risk escalations, or system logic intersect with human judgement. These aren&#8217;t hypothetical futures; they are already happening in teams using early-stage agents, AI-powered copilots, or automated governance tools. In each case, what&#8217;s surfaced is a gap in how decision-making authority is understood, expressed, or avoided.</p><h2>The Friction of Recommendation</h2><p>A product operations team is using an AI agent to monitor campaign performance in real time. It sees the numbers dropping and proactively recommends reallocating 40% of the remaining budget to a higher-performing campaign. It&#8217;s not a bad idea.</p><p>The logic checks out. The maths is solid. 
The performance forecasts are reasonable. But when the recommendation hits the team Slack channel, no one replies. The decision sits there, as a dozen eyes quietly hope someone else will say yes.</p><p>Everyone agrees it might be the right call, but no one wants to own the downside if it&#8217;s not.</p><p>Eventually, the decision is escalated to the marketing lead, who, unsure of the agent&#8217;s training data and uncomfortable with the lack of human input, stalls. &#8220;Let&#8217;s review this in next week&#8217;s performance review meeting,&#8221; they say.</p><p>By then, the window of opportunity is gone.</p><p>The agent didn&#8217;t fail. The team didn&#8217;t disagree. But the system revealed something fragile: a lack of decision clarity. Who had the right to say yes? Who would have been held accountable if it went wrong?</p><p>The AI surfaced the question, but the organisation wasn&#8217;t ready to answer it.</p><h2><strong>Escalation to Nowhere</strong></h2><p>A compliance agent scans procurement workflows daily, using embedded logic to flag unusual patterns and escalate anything that crosses a defined threshold of financial risk. It was designed with guardrails, trained on past audit findings, and approved by risk and finance leadership.</p><p>One morning, it triggers an alert on a contract being pushed through unusually fast, with limited vendor competition, high value, and vague justification. The agent does exactly what it was built to do: escalate.</p><p>The escalation is routed to the &#8220;Responsible DRI&#8221; for commercial risk in the workflow system. A role defined in theory, but in practice, unstaffed. The field had been populated with a generic group alias months earlier as a placeholder.</p><p>The email goes out; no one replies. 
The Slack alert is marked as read, but no one takes action.</p><p>Eventually, the agent escalates again, this time to the COO&#8217;s office, with the subject line &#8220;Urgent escalation: contractual risk flag &#8211; no action taken.&#8221;</p><p>The COO forwards the alert to a special projects lead, with a note: &#8220;Can someone look into this?&#8221; That person, unclear on the context and unwilling to step into risk exposure, quietly asks around and decides to let it lie, so nothing happens.</p><p>No one made a bad decision. No one disagreed with the agent&#8217;s logic. The system escalated precisely as designed. But what it revealed was an accountability void, an organisational structure not built to absorb machine-generated urgency.</p><p>In the post-mortem weeks later, someone remarks, &#8220;It wasn&#8217;t clear who owned the final call.&#8221; Escalation makes authority visible, not just who&#8217;s in charge, but whether anyone actually claims the role when it matters.</p><h2><strong>Override at the Edge</strong></h2><p>A talent acquisition team is trialling a hiring assistant. The model has been trained on historical performance data, role descriptions, feedback cycles, and even peer review narratives to help shortlist candidates. It&#8217;s not making the final call, just ranking applicants and flagging promising fits for early interview rounds.</p><p>For the latest role, team lead in a high-performing engineering unit, the model surfaces a top candidate. On paper, everything fits: prior experience, key skills, even past indicators of leadership potential. The system flags the match with high confidence and generates a draft outreach email.</p><p>But the hiring manager hesitates. They&#8217;ve read the CV, seen the recommendation, and something doesn&#8217;t sit right. Not because the data is wrong, but because the story is missing.</p><p>The candidate comes from a firm known for individual heroics, not team-based execution. 
Their references are glowing, but highly self-directed. The manager, thinking about the culture of peer coaching and system-level thinking their team relies on, decides to pause. They veto the recommendation because fit isn&#8217;t measurable in metrics alone, not because the model failed.</p><p>The override sparks an internal debate. Some see it as bias, overruling the model based on gut feeling. Others see it as leadership, defending the unspoken traits that hold the team together. Eventually, the team adjusts the agent&#8217;s prompts to ask for more behavioural context in future matches.</p><p>But what it exposed was this:</p><ul><li><p>The model was confident.</p></li><li><p>The manager had doubts.</p></li><li><p>The decision revealed the organisation&#8217;s values, not its logic.</p></li></ul><p>Override moments like this are opportunities to surface implicit criteria, lived experience, and the difference between efficiency and culture.</p><h2><strong>Why This Matters Now</strong></h2><p>Many leadership teams are investing heavily in AI pilots, automation initiatives, and operating model redesigns, but can find that progress stalls in familiar places: where decisions are delayed, accountability unclear, or actions taken without clear sponsorship.</p><p>These aren&#8217;t just change management issues, but symptoms of an outdated decision architecture. When authority isn&#8217;t designed into workflows, the friction multiplies. Performance stalls. Risk accumulates. High-potential employees hesitate. 
And AI can&#8217;t bridge the gap, no matter how powerful the model.</p><p>For senior leaders, this is both a problem and an opportunity: clarify decision rights now, and you&#8217;ll move faster, govern better, and avoid building brittle, unaccountable systems at scale.</p><h2><strong>When Authority Becomes Visible</strong></h2><p>Informal or implicit processes shaped by social cues or seniority will come under scrutiny and strain as machines begin to recommend actions or escalate issues. The example scenarios above all point to the same underlying reality: <strong><a href="https://academy.shiftbase.info/p/decisions-in-motion-augmenting-human?utm_source=publication-search">decisions are becoming part of the infrastructure</a></strong>. They need to be designed rather than assumed.</p><p>In traditional settings, authority often functions through consensus or deferred judgement. Sometimes responsibilities are unspoken, and approvals are granted informally. But in an AI-augmented environment, recommendations are made explicitly, escalations are timestamped, and decision logs form part of the record. The system may not be able to enforce accountability, but it will increasingly expose its absence.</p><p>This shift introduces a new kind of design work: the architecture of decision-making.</p><p>Organisations must now think carefully about who holds the right to act in different contexts, how that authority is granted or delegated, and what happens when machine logic collides with human ambiguity. It is no longer sufficient to assume that leadership will step in when needed. That assumption needs to be built into workflows, roles, and escalation pathways in ways that are legible and operational.</p><p>Rather than focusing solely on model performance or technical integration, leaders need to invest in making human judgement legible to the system. 
This includes defining which decisions can be automated, which require confirmation, and where discretion or interpretation is essential. It also means identifying and clarifying the thresholds for human override, and ensuring there is a feedback loop to refine both the system and the governance around it.</p><p>Authority is no longer something that can live solely in hierarchy or reputation. It must be designed into the way the organisation operates, in forms that both humans and machines can understand.</p><p>But how can leaders begin to design for this? Read on for three techniques that can provide a practical starting point.</p>
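<p>To make the idea concrete, a decision-rights policy of this kind can be sketched in a few lines of code. The thresholds, field names and categories below are purely illustrative assumptions, not a recommended or real implementation:</p>

```python
from dataclasses import dataclass

# Illustrative sketch only: the thresholds, names and categories are
# hypothetical placeholders for a decision-rights policy, not a spec.
@dataclass
class Decision:
    description: str
    financial_risk: float    # approximate value at stake
    model_confidence: float  # 0.0-1.0, as reported by the agent

def route(decision: Decision,
          auto_limit: float = 10_000,      # below this, the agent may act alone
          confirm_limit: float = 100_000,  # below this, a named owner signs off
          min_confidence: float = 0.9) -> str:
    """Return who holds the right to act for this decision."""
    if decision.financial_risk <= auto_limit and decision.model_confidence >= min_confidence:
        return "automate"    # agent acts; the action is logged for later review
    if decision.financial_risk <= confirm_limit:
        return "confirm"     # agent recommends; a named human must approve
    return "human-only"      # escalate to an accountable person or forum

print(route(Decision("reallocate 40% of campaign budget", 85_000, 0.93)))
# prints "confirm": the agent may recommend, but a named owner decides
```

<p>Even a toy policy like this forces the questions the scenarios above left unanswered: who owns each threshold, and who exactly is the named human when &#8220;confirm&#8221; is returned.</p>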
      <p>
          <a href="https://academy.shiftbase.info/p/who-decides-and-how-with-ai-at-the">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code, but for Management]]></title><description><![CDATA[AI-enhanced software development has shown what's possible; doing the same for the management of process work is also possible if leaders can focus on what matters]]></description><link>https://academy.shiftbase.info/p/claude-code-but-for-management</link><guid isPermaLink="false">https://academy.shiftbase.info/p/claude-code-but-for-management</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 20 Jan 2026 15:06:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nnJB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the past couple of weeks, more developers have declared that Claude Code, the leading AI model for software development, is now good enough that they no longer need to code manually. 
This is quite something, and if Claude Code can live up to this promise, it will have implications not just for software development, but also for how we think about the wider role of AI in enabling smart, programmable organisations.</p><p>As ever, <strong><a href="https://simonw.substack.com/p/claude-code-for-web-a-new-asynchronous?utm_source=substack&amp;publication_id=1173386&amp;post_id=176741952&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">Simon Willison was quick to share a comprehensive first impressions analysis</a></strong> upon its release in late October, and this positive view of its capabilities has been echoed by most analysts and commentators since then.</p><p>The creator of Claude Code, Boris Cherny, <strong><a href="https://www.reddit.com/r/AgentsOfAI/comments/1qbnp9g/anthropic_builds_cowork_using_100_claudewritten/">recently shared a useful long reflection on how he uses the tool</a></strong>, and also confirmed that the later release of Claude Cowork (a computer use wrapper for Claude Code) was achieved in under two weeks, and <strong><a href="https://www.reddit.com/r/AgentsOfAI/comments/1qbnp9g/anthropic_builds_cowork_using_100_claudewritten/">entirely written by Claude Code</a></strong>.</p><p>There are many reasons why Claude Code is so good, but <strong><a href="https://www.oneusefulthing.org/p/claude-code-and-what-comes-next">Ethan Mollick touched on a couple of important aspects in his own account of using it</a></strong>, namely the architecture of Skills (each with their own guidance) and the ability to compact and summarise context when the context window becomes too full, which is a common pitfall of LLMs in general.</p><p>So if that is what AI tools can do to enhance and automate modern software development, then why is it so hard to galvanise leaders and managers inside our organisations to do something similar with the 
knowledge, processes and workflows that underpin the world of work?</p><h2>Business Planning as Code Specs</h2><p>Antony Mayfield&#8217;s newsletter touched on this topic last week, <strong><a href="https://antonym.substack.com/p/tools-to-steal-from-coders">musing on what we can do with the tools we &#8216;steal from coders&#8217; and how we should treat knowledge engineering like software development</a></strong>. To illustrate this idea, Antony reflected on the primitive nature of business plans inside large organisations, and how a knowledge engineering approach could improve them:</p><blockquote><p><em>Like novels, business plans are not completed; they are abandoned. Even in the largest corporations, where leaders assemble for strategic planning sessions to thrash out the final plan, insiders will tell you the plan is what is left standing at the end of the week, when all of the different stakeholders no longer have the will to argue any longer &#8230;</em></p><p><em>Thinking of a business plan like a computer operating system is freeing (you accept it will have bugs that need to be fixed) and de-stressing. You can have multiple people working on different parts of it and &#8211; because software engineers do this all the time &#8211; there is a system for making the pieces fit together and make sense (it&#8217;s called a merge).</em></p></blockquote><p>This is still only an emerging practice among leaders today. Organisational debt and outdated ways of working at senior levels act against its adoption among more operational managers, but we are seeing some evidence of change.</p><p>Ultimately, when everybody has access to similar models, the edge (or in VC terms &#8216;the moat&#8217;) is context, as <strong><a href="https://x.com/Saboo_Shubham_/status/2011278901939683676">Saboo Shubham (an AI product manager at Google) wrote whilst on the Xitter</a></strong>:</p><blockquote><p><em>The models are commoditizing. Prices are dropping. 
Capabilities are converging. What was SOTA a few months ago is now available to anyone with an API key.</em></p><p><em>So where does the real alpha come from?</em></p><p><em><strong>Context.</strong></em></p><p><em>The team that can externalize what they know and feed it to agents in a structured way will build things competitors can&#8217;t copy just by using the same model.</em></p></blockquote><p>For leaders and managers, this means the simple task of writing things down and documenting value chains and processes is all they need to start mastering enterprise AI, <strong><a href="https://academy.shiftbase.info/p/can-ai-help-reverse-the-oversimplification">as we have written about previously</a></strong>.</p><p>The next step is connecting those processes to agents and to each other. For processes and workflows to be programmable, they first need to be addressable - and ideally composable.</p><p><strong><a href="https://diginomica.com/most-enterprises-arent-ready-ai-whats-needed-composability">Rudy Kuhn of Celonis recently argued that the lack of composability in enterprise process architectures is holding them back</a></strong> from realising the promise of enterprise AI, and needs to be tackled if we are to see real AI-enabled business transformation:</p><blockquote><p><em>For many organizations, this progression mirrors the broader <strong>evolution of their processes</strong>. They moved from analog to digital, from digital to automated, from automated to orchestrated. The shift toward composable and increasingly autonomous operations is the next logical phase. It reflects how companies already work in practice, even if their formal structures have not yet caught up. It also signals a shift in how transformation itself must be understood. 
Instead of forcing new behavior through large, one-time programs, organizations are beginning to redesign the very capabilities that make those behaviors possible.</em></p></blockquote><p>And yet &#8230; and yet &#8230; most learning and change programmes inside large organisations are still focused on tool training. Executives are taught LLMs and prompting, and external &#8216;experts&#8217; clap like circus seals when they are able to generate a picture or summarise a report.</p><p>But whilst we can forgive executives not knowing how to prompt the latest LLM chatbot, isn&#8217;t context, process documentation and organisational architecture something they should know already? You know &#8230; like &#8230; how their organisation works?</p><p>If they can&#8217;t put the PowerPoints down for a second to do the apparently exhausting work of writing things down and providing clarity, then perhaps <strong><a href="https://www.fullstackhr.io/p/would-an-ai-be-a-better-boss-than">Johannes Sundlo will have his wish</a></strong>:</p><blockquote><p><em>Maybe we need to find new roles for human leaders. Maybe management shouldn&#8217;t be about work distribution anymore. Maybe it should focus on coaching, support, development. The traditional management role has its roots in military organization and the Industrial Revolution. Maybe it&#8217;s time to challenge those old, sacred organizational structures.</em></p><p><em>Can we do this smarter? More effectively? 
I think we have to at least start asking, discussing and shape what the future of our leaders should be.</em></p><p><em>2026 seems like a good year to begin.</em></p></blockquote><p></p><h2>Smol, open models you can own and control</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nnJB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nnJB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!nnJB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!nnJB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!nnJB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nnJB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:116686,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/185187953?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nnJB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!nnJB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!nnJB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!nnJB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cadcd82-c5af-49ee-8675-79c719849c7b_1024x1024.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Context and knowledge engineering will also help us make the most of the many small models (SLMs) that are now freely available. 
These have the potential to be both cheaper to run and more reliable and less error-prone, because they are focused on tightly-bounded knowledge domains.</p><p><strong><a href="https://siliconsandstudio.substack.com/p/the-slm-supercycle-where-the-next">As Tabitha Rudd and Seth Dobrin write in Silicon Sands</a></strong>, the three big reasons why more attention will be paid to enterprise SLMs this year are:</p><ol><li><p>Privacy and Data Sovereignty</p></li><li><p>Cost Predictability</p></li><li><p>Reliability and Offline Operation</p></li></ol><p>Using SLMs for agentic AI makes a lot more sense than trying to debug hallucinations or errors in agents based on large models, and <strong><a href="https://medium.com/@gokulpalanisamy/the-rise-of-slms-rethinking-enterprise-ai-economics-in-2026-034f67a1fbdf">as Gokul Palanisamy argues, this suggests a more devolved architecture</a></strong> using routing agents to integrate multiple small, specialised agents is needed:</p><blockquote><p><em>The fix is not replacing LLMs with SLMs; it is stratifying them behind a Semantic Router.</em></p><p><em>A Semantic Router is a thin, model&#8209;agnostic governance layer between the user and your model stack</em></p></blockquote><p><strong><a href="https://www.constellationr.com/blog-news/insights/why-enterprise-ai-leaders-need-bank-open-source-llms">Constellation&#8217;s Larry Dignan also advised CIOs recently to consider the advantages of open source models in this context</a></strong>:</p><blockquote><p><em>I&#8217;d argue that there will be few if any enterprise use cases that will require a bleeding edge LLM. And if you can wait six months for an open-source option to catch up (likely from Nvidia at this point) why would you blow your cost curve on a high-end model?</em></p><p><em>You can use a series of open models to form an agentic system. 
The whole is greater than the parts and the parts need to be cheaper.</em></p></blockquote><p>At the same time, <strong><a href="https://hub.jhu.edu/2025/12/01/making-ai-more-brain-like/">there is also more emerging evidence</a></strong> that alternative approaches, such as convolutional neural networks inspired by biological insights into brain development, can achieve strong results with significantly smaller datasets, especially for world models. This could open the way for individual firms to have auditable control over their own SLMs and agents in areas where compliance, security and safety are paramount.</p><p>CIOs and CDOs have a lot on their plate deploying AI tools and working on the supporting infrastructure; but as they get on top of this, I expect to see more small, owned, open models being trained by individual firms and guided by their own specific context engineering.</p><p>These could also be a key building block for digital sovereignty at the sector, national and supra-national levels, not just within individual firms if, for example, <strong><a href="https://www.wired.com/story/europe-race-us-deepseek-sovereign-ai/">your continent faced an existential threat from a powerful rogue state that also happens to own most of your digital infrastructure</a></strong>.</p><p>As ever, the critical IP and the value lie in the context and application layers, not the models themselves, and so the quality of knowledge and intelligence could still beat brute force compute.</p><p>Perhaps context graphs really will be a trillion dollar opportunity, <strong><a href="https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/">as Foundation Capital recently argued</a></strong>&#8230;</p>]]></content:encoded></item><item><title><![CDATA[Why Does Individual AI Literacy Fail to Translate into Organisational Impact?]]></title><description><![CDATA[Why Enterprise AI demands leadership readiness, not just technical 
adoption]]></description><link>https://academy.shiftbase.info/p/why-does-individual-ai-literacy-fail</link><guid isPermaLink="false">https://academy.shiftbase.info/p/why-does-individual-ai-literacy-fail</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 13 Jan 2026 15:32:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4V4k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3806eaae-2274-413f-892f-a031b2ec200e_1024x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<pre><code><strong>&#128161; Start the new year armed with leadership techniques and methods. All  new premium subscriptions are <a href="https://academy.shiftbase.info/letsgo2026">discounted by 25% during January</a>!</strong></code></pre><p>Many organisations have now crossed a visible threshold in their AI journey. Access is widespread, and the tools are familiar. People can analyse, draft, synthesise, and explore at a pace that would have been unthinkable even a year ago. And yet, for many leadership teams, the sense of organisational progress remains stubbornly unchanged.</p><p>Decision cycles do not shorten in proportion to individual speed. Coordination is no easier than before, and strategy still degrades as it moves through the organisation. In some cases, leaders describe an increase in activity without a corresponding increase in clarity or momentum.</p><p>This gap between individual literacy and collective impact is often framed as a tooling problem or a capability gap - the assumption being that something is missing at the level of adoption, training, or integration.</p><p>But what AI is actually revealing is something more fundamental.</p><h2>Why &#8220;everyone has Copilot&#8221; rarely produces productivity</h2><p>From a leadership perspective, the symptoms are familiar. Teams appear busy, outputs multiply, updates arrive faster and in more polished forms. 
Yet progress at the organisational level feels uneven and fragile.</p><p>This is not a new phenomenon. High-performing individuals have always been capable of outpacing the systems around them. What AI changes is the scale and visibility of this mismatch. When individual throughput increases sharply, the organisation&#8217;s existing coordination mechanisms are placed under strain. Leaders tend to notice this first in places they already recognise. Meetings fill with material but resolve little. Strategy documents proliferate without increasing alignment. Initiatives stall because nobody is quite sure who has the authority to make the final trade-off.</p><p>AI removes the friction that once masked these conditions, compounding them further.</p><p>Roles that seemed clear on paper turn out to rely heavily on personal interpretation and historical relationships. Processes designed for predictability reveal how much they depend on shared context rather than formal steps. Authority that was exercised tacitly becomes a bottleneck when decisions arrive faster than consensus can form. 
Accountability, already diffuse in many large organisations, becomes harder to trace when work is produced collaboratively and iteratively.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!4V4k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3806eaae-2274-413f-892f-a031b2ec200e_1024x1024.heic" width="1024" height="1024" alt="" loading="lazy"></figure></div><h2>Why social friction becomes the real bottleneck</h2><p>As individual work accelerates, organisational performance is constrained less by technical capacity and more by social dynamics. Faster individuals do not always lead to faster decisions, since decisions are collective acts shaped by trust, clarity, and shared judgment. Without explicit agreement on who decides and how, speed at the edges creates pressure at the centre.</p><p>Better outputs do not guarantee shared understanding. Leaders are often presented with polished artefacts that conceal unresolved disagreement or divergent assumptions. This creates a false sense of alignment that only unravels during execution.</p><p>More content does not create clearer intent. 
When everyone can generate high-quality material quickly, intent must be carried by context, framing, and narrative rather than volume.</p><p>From a leadership standpoint, this is often where AI begins to feel disappointing. The technology works, but the organisation does not improve.</p><p>One reason social friction is frequently misdiagnosed is that it rarely announces itself directly. Leaders experience it obliquely, as drag, noise, repetition, or a persistent sense that effort is not compounding. Each executive role tends to encounter social friction in different forms, shaped by where it sits in the organisation&#8217;s coordination landscape.</p><h3><strong>CEO pain points: strategy that travels poorly</strong></h3><p>For CEOs, social friction often appears as a gap between strategic intent and organisational motion.</p><p>The strategy is clear at the top. The narrative is coherent. Yet as it moves through layers, functions, and initiatives, it fragments. Different parts of the organisation pursue locally sensible interpretations that do not quite add up. Progress is reported, but coherence remains elusive.</p><p>Before AI, this showed up as slow execution or inconsistent prioritisation. With AI, it shows up as acceleration in many directions at once.</p><p>The friction lives in unspoken assumptions about trade-offs, risk appetite, and what must remain invariant as teams adapt.</p><p><strong>A useful micro-technique here</strong> is to articulate not only what the organisation is trying to achieve, but <em>what must not be optimised away</em>. Making a small number of strategic constraints explicit gives faster actors something stable to orient around, even as methods and tactics evolve.</p><h3><strong>COO pain points: flow that breaks at the boundaries</strong></h3><p>COOs tend to experience social friction as interruptions to flow.</p><p>Processes appear sound in isolation. Metrics look healthy. 
Yet work slows at handoffs, escalations arrive unpredictably, and teams quietly route around formal mechanisms to get things done.</p><p>Before AI, this was often managed through experience and informal fixes. With AI, those hidden seams become stress points as activity speeds up.</p><p>Here, friction tends to live in unclear authority boundaries and escalation paths that rely on personal judgment rather than shared design.</p><p><strong>A practical micro-technique</strong> is to <em>treat escalation as a designed feature of the system</em> rather than a sign of failure. Defining in advance which thresholds trigger human review, and why, turns escalation into a predictable coordination move rather than a difficult social negotiation.</p><h3><strong>HR pain points: accountability feels diffuse</strong></h3><p>For CHROs, social friction often surfaces as ambiguity around accountability.</p><p>Decisions are made, but ownership is difficult to trace. Performance conversations gravitate toward visible activity rather than quality of judgment. Learning investments proliferate, yet behaviour shifts unevenly.</p><p>These patterns existed long before AI. What AI changes is the visibility of contribution, making it harder to distinguish between effort, output, and responsibility.</p><p>Here, friction often resides in implicit norms about who is expected to exercise judgment and who is expected to comply.</p><p><strong>One useful micro-technique</strong> is to <em>separate responsibility for decision quality from responsibility for decision outcome</em>. Making it legitimate to examine how judgments were made, not only whether results were favourable, supports learning without triggering defensiveness. This becomes essential as AI-generated inputs enter the decision process.</p><h3><strong>L&amp;D pain points: learning that does not accumulate</strong></h3><p>L&amp;D leaders frequently encounter social friction through learning that fails to compound.</p><p>People attend programmes. 
Skills improve locally. Yet the organisation does not appear to get collectively smarter. Knowledge remains trapped in individuals or teams, and hard-won insights are relearned rather than reused.</p><p>Before AI, this was frustrating but familiar. With AI, the risk is that individual learning accelerates while organisational memory remains thin.</p><p>The friction here lies in the absence of a shared language for judgment, reflection, and decision rationale.</p><p><strong>A useful micro-technique</strong> is to <em>design learning moments around interpretation rather than information</em>. Capturing why a decision made sense in context, not just what was decided, creates material that can inform both human learning and machine-supported memory over time.</p><h2>What this means for AI adoption strategies</h2><p>What many leadership teams are discovering, often indirectly, is that AI does not simply stretch existing systems, it reveals how much organisational coherence was previously being held together through personal authority, informal influence, and tacit understanding.</p><p>For a long time, this worked well enough. Shared history compensated for ambiguity. Experience filled in gaps that were never formally designed. Leaders could rely on intuition, relationships, and pattern recognition to keep the organisation roughly aligned, even when roles, processes, or decision rights were imperfectly defined.</p><p>As individual contributors begin producing high-quality work faster than the organisation can interpret, decide, or align around it, those informal mechanisms start to strain. Gaps in coherence become visible rather than hidden. This is not because leadership has failed, but because coherence has rarely been treated as something that must be deliberately designed for, maintained, and renewed.</p><p>Through this lens, AI adoption reframes itself. What first appears to be a technology challenge becomes a leadership maturity challenge. 
Not in the sense of individual capability, but in the collective ability to sustain shared intent, judgment, and coordination under conditions of speed and complexity.</p><p>Coherence is not owned by any single role. It is produced through many small acts of alignment across the system. Yet different executive roles encounter its absence in different ways, depending on where they sit in the flow of decisions, accountability, and meaning.</p><p>This is why social friction feels different at the top of the organisation than it does in operations, people systems, or learning environments. And it is why the work of restoring coherence cannot be generic. It must be grounded in the specific coordination challenges each leadership role is already living with.</p><h2>Techniques that reduce social friction</h2><p>If the constraint is social rather than technical, the techniques that matter look different from traditional AI adoption playbooks, focusing less on individual skill and more on collective legibility.</p><p>Let&#8217;s look at three leadership techniques that can help improve collective legibility and context:</p><ul><li><p>Creating Legibility through <strong>Decision Provenance Mapping</strong></p></li><li><p>Ensuring shared mental models with <strong>Assumption Walkthroughs</strong></p></li><li><p>Enabling collective sense-making using <strong>Decision Reflection Loops</strong></p></li></ul>
      <p>
          <a href="https://academy.shiftbase.info/p/why-does-individual-ai-literacy-fail">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Re-Focusing Leadership on AI Readiness & Enablement]]></title><description><![CDATA[If Enterprise AI's prize is smarter organisations, then we need leadership functions to engage deeply with AI readiness and steering, rather than just buying more technology]]></description><link>https://academy.shiftbase.info/p/re-focusing-leadership-on-ai-readiness</link><guid isPermaLink="false">https://academy.shiftbase.info/p/re-focusing-leadership-on-ai-readiness</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 06 Jan 2026 15:38:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!vW0p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa05ca1c6-269e-4fd2-8336-e1fe798f61c2_1024x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Why Focus on AI-Enabled Organisational Change Rather Than Just Technology?</h2><p>As we look ahead to another year of rapid technology-driven change in business and society, it is a good moment to separate the wood from the trees and focus on medium-term goals.</p><p>Generative AI has come a long way, and provides some wonderful capabilities, but it also carries risks. 
Agentic AI is full of promise, and could provide a safer, more manageable architecture for using AI in the real world; but it is still relatively unproven and depends on lots of other moving parts to operate successfully, which are mostly not yet in place.</p><p>Ultimately, <strong><a href="https://academy.shiftbase.info/p/enterprise-ai-is-a-social-technology">we see AI as a social technology</a></strong>, not a magic black box, and we are trying to discern the outlines of AI-enabled organisations amidst all the hype, and design the practical transformation actions and journeys that will help get us there.</p><p>We are interested in better ways for people to work together to build capabilities and solve problems, and we believe the future of work can be more elevating and rewarding than the old industrial era model.</p><p>This is why we spent so much time last year helping leaders transform their organisations using the superpowers that AI technology provides to connect, collaborate, and coordinate work more fluidly than before, with less bureaucratic management methods. It is also why we see the potential of AI not just in terms of automating existing process work, but as a way to create smart structures and platforms on which people can focus on value creation rather than &#8216;busy work&#8217;.</p><h2>Leaders as Programmers of the Organisation</h2><p>Great organisations are like alchemists - they create something from nothing, making customers&#8217; lives better whilst creating value for owners, investors, workers and partners. In the past, the best firms relied on visionary leaders brave enough to do things differently. 
But instead of building and running them from scratch every time, using only job roles, functions and department structures as templates, what if we could develop them like software, using existing libraries and building blocks to create our own organisational operating systems that use smart automation and coordination to obviate the need for manual management?</p><p>We began 2025 fired up by the goal of AI-enabled organisational improvement and<strong><a href="https://academy.shiftbase.info/p/will-we-see-the-first-programmable"> the long-term goal of smart, programmable organisations</a></strong> that can achieve both agility and scale without bloat and waste.</p><p>But throughout the year, we were reminded how existing management and incentive structures continue to drive short-term visible actions at the expense of longer-term readiness, architecture and planning. The urge to be seen to do something - <em>anything</em> - in enterprise AI has led to license purchases with no adoption planning, innovation theatre that generates press releases rather than meaningful capabilities, and KPIs that are about counting trees, rather than thinking about what can be built with new wood.</p><p>We hope 2026 will see more focus from leaders on AI readiness and all the technical and non-technical enablers for agentic AI to realise its potential.</p><h2>AI Readiness Priorities by Function</h2><p>We wrote a lot last year about AI readiness, and the use cases, leadership techniques and capabilities that leaders at all levels can utilise to make the most of AI tools and systems in their organisations today.</p><p>One common theme was the idea of <em><strong>legibility</strong></em> - making the implicit explicit, and surfacing the norms, rules and unspoken parts of the work system so that we can evaluate and improve them, and codify them into rulesets for both AI agents and people to work together better.</p><p>So, before we throw more coal into the furnace, here is a breakdown of 
the top 3-4 things we suggested executives can do as part of their existing work to improve AI readiness in their domains, and guide the conversation with technical teams to ensure AI projects meet real business needs now and in the future.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!vW0p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa05ca1c6-269e-4fd2-8336-e1fe798f61c2_1024x1024.heic" width="1024" height="1024" alt="" loading="lazy"></figure></div><h3>CEO &amp; COO: From Oversight to &#8220;World-Building&#8221;</h3><p>The primary shift for senior leadership is moving away from viewing AI as a &#8220;tool to be purchased&#8221; and towards seeing it as an &#8220;environment to be designed.&#8221;</p><ul><li><p><strong>World-Building as Strategy:</strong> Leaders should move beyond incremental efficiency and focus on defining the digital environment where humans and agents interact.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/a-leaders-guide-to-world-building">A Leader&#8217;s Guide to World-Building in the AI-Augmented Enterprise</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/enterprise-ai-needs-leadership-ambition">Enterprise AI Needs Leadership Ambition</a></strong></p></li></ul></li><li><p><strong>Context 
Engineering:</strong> Start defining the &#8220;organisational OS&#8221; to ensure the right information and data are available to people and agents to support reliable ways of working.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/the-growing-importance-of-context-engineering">The Growing Importance of Context Engineering for Leaders Adopting AI</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/enterprise-context-engineering-as">Enterprise Context Engineering as a New Leadership Capability</a></strong></p></li></ul></li><li><p><strong>Cultivate AI Leadership Skills:</strong> Thinking and writing more like architects or developers than bureaucrats to create the clarity needed to guide AI development.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/metamorphoses-of-bodies-changd-to">Metamorphoses: &#8220;Of Bodies Chang&#8217;d to Various Forms, I Sing&#8221;</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/ai-world-building-and-the-value-of">AI World-Building and the Value of Boring BizOps</a></strong></p></li></ul></li></ul><h3>CIO &amp; CTO: Infrastructure for the Agentic Era</h3><p>In 2025, CIOs moved from LLM experimentation to planning and building robust scaling layers and collaborative architectures, but readiness challenges remain.</p><ul><li><p><strong>AgentOps &amp; Scaling:</strong> Building the organisational infrastructure for agent development, deployment, and monitoring to achieve practical impact.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/agentops-the-scaling-layer-for-agentic">AgentOps: The Scaling Layer for Agentic AI in the Enterprise</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/mapping-ai-value-pathways">Mapping AI Value Pathways</a></strong></p></li></ul></li><li><p><strong>Small Models, Local Context:</strong> Making more use of safer Small Language Models (SLMs) to keep AI closer to local context, 
culture, and control.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/small-models-real-change-how-leaders">Small Models, Real Change: How Leaders Can Use SLMs</a></strong></p></li></ul></li><li><p><strong>Collaborative Architectures:</strong> Creating the &#8220;Context Plumbing&#8221; that allows people and machines to share a common information landscape.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/collaborative-architectures-for-agents">Collaborative Architectures for Agents, People &amp; Machines</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/a-brighter-future-for-km-building">A Brighter Future for KM: Building an AI-Enhanced Knowledge-Sharing Capability</a></strong></p></li></ul></li></ul><h3>HR: Redesigning Work and Human-AI Teaming</h3><p>HR is starting to focus on the future of human-AI collaboration and &#8220;Centaur&#8221; capabilities, plus they will need to play a role in making the implicit rules of work explicit.</p><ul><li><p><strong>Designing Centaur Teams:</strong> Reducing organisational drift by clearly defining how humans and AI agents can work together most productively.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/big-picture-leadership-techniques">Big Picture Leadership Techniques for Human-AI Teaming Readiness</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/leading-collaborative-centaur-teams">Leading Collaborative Centaur Teams</a></strong></p></li></ul></li><li><p><strong>Codifying the Invisible:</strong> Taking unspoken rules and cultural norms and turning them into explicit rulesets and guardrails for AI collaboration.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/to-protect-human-agency-codify-the">To Protect Human Agency, Codify the Invisible</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/codifying-rulesets-in-the-explainable">Codifying Rulesets in the Explainable 
Enterprise</a></strong></p></li></ul></li><li><p><strong>Work Designers:</strong> Encouraging a mindset shift where every employee becomes a &#8220;Work Designer&#8221; of composable workflows.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/we-can-all-be-work-designers-in-the">We Can all be Work Designers in the Composable Enterprise</a></strong></p></li></ul></li></ul><h3>Learning &amp; Development: Shared Practice &amp; Transformation</h3><p>L&amp;D is evolving from learning content production and simple training towards facilitating &#8220;co-op mode&#8221; learning, practical experimentation and always-on in-the-flow AI learning systems.</p><ul><li><p><strong>Learning in &#8220;Co-Op Mode&#8221;:</strong> Accelerating adoption through shared practice rather than individual use.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/ai-learning-adoption-goes-further">AI Learning &amp; Adoption Goes Further and Faster in Co-Op Mode</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/the-rise-of-symbiotic-learning">The Rise of Symbiotic Learning</a></strong></p></li></ul></li><li><p><strong>Building the Future One Agent at a Time:</strong> L&amp;D teams exploring agentic tools to address learner needs while staying grounded in practice.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/building-the-future-of-organisational">Building the Future of Organisational Learning One Agent at a Time</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/learning-in-the-age-of-agents">Learning in the Age of Agents</a></strong></p></li></ul></li><li><p><strong>Avoiding &#8220;Workslop&#8221;:</strong> Moving away from low-quality AI filler by focusing on the &#8220;work&#8221; of organisational transformation.</p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/doing-the-work-why-learning-is-key">Doing the Work: Why Learning is Key to Agentic AI 
Success</a></strong></p></li></ul></li></ul><h2>Bringing it all Together</h2><p>To bring this all together - and also bring it to life for busy leaders and leadership teams - requires the knowledge and experience to address specific local conditions and challenges. But we are also developing repeatable learning interventions and leadership techniques that people can pick up and run with.</p><p>If you would like to learn more about this work, please get in touch.</p><p>For now, here is one simple method to bring together function leads to build a common map that can guide AI-enabled capability development, and which can achieve results in a single 30-day sprint: </p><ul><li><p><strong><a href="https://academy.shiftbase.info/p/enterprise-ai-adoption-requires-connected">How leaders working together with a simple Map &#8594; Change &#8594; Learn loop can turn scattered AI experiments into living, organisation-wide capabilities</a></strong></p></li></ul><h2>Further Reading</h2><ul><li><p><strong><a href="https://academy.shiftbase.info/archive">Plunder our archives by topic area</a></strong></p></li><li><p><strong><a href="https://academy.shiftbase.info/p/deep-dives">Revisit our practical guides and deep dives</a></strong></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Can AI Help Reverse the Oversimplification of Management?]]></title><description><![CDATA[For leaders to be programmers of their organisational OS, they need AI's help in embracing complexity and reality, rather than relying on simplistic data proxies]]></description><link>https://academy.shiftbase.info/p/can-ai-help-reverse-the-oversimplification</link><guid isPermaLink="false">https://academy.shiftbase.info/p/can-ai-help-reverse-the-oversimplification</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 23 Dec 2025 16:36:02 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!83X4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As we look forward to a new year, it is worth zooming out momentarily from the frenetic race between AI models to focus on some of the enablers, blockers and wider changes that will determine whether organisations are able to use this technology effectively.</p><p><strong><a href="https://www.duperrin.com/english/2025/12/08/impacy-ai-transformation-bcg-mckinsey/">We now have a few studies</a></strong> that point to <strong><a href="https://openai.com/index/the-state-of-enterprise-ai-2025-report/">good but not great marginal efficiencies</a></strong> when using Generative AI inside existing business structures and management systems to speed up existing tasks. But whilst current LLMs and SLMs are more than good enough to support intelligent operations, the transformational potential of agentic AI depends upon readiness efforts in areas from technical and data infrastructure to <strong><a href="https://xer0pn.medium.com/the-organizational-blindness-of-enterprise-ai-d1eec7bfdb29">organisation mapping</a></strong>, process discovery and explainability, and in particular, local training and context management.</p><p>At the infrastructure level, there are welcome signs that <strong><a href="https://stackoverflow.blog/2025/12/08/the-shift-in-enterprise-ai-what-we-learned-on-the-floor-at-microsoft-ignite/">some of the technical enablers for agentic and enterprise AI are getting more attention</a></strong>, including interoperability with protocols such as MCP and a <strong><a href="https://simonwillison.net/2025/Dec/19/agent-skills/">standard approach to defining agent skills</a></strong>.</p><h2>Leaders as Programmers</h2><p>But where is the shift in leadership and management that will be required to 
re-invigorate our tired bureaucracies and create intelligent AI-augmented organisational operating systems?</p><p><strong><a href="https://academy.shiftbase.info/t/worldbuilding">We have written about world-building this year as an important emerging leadership skill</a></strong>, and we are finding this to be an accessible and useful frame for leadership development in organisations seeking to pursue enterprise AI.</p><p><strong><a href="https://tomtunguz.com/operational-analytical-context-databases/">Tomasz Tunguz recently shared an interesting musing on context databases</a></strong> and why they are needed if we want to move away from brittle, deterministic automation to fully exploit the capabilities of enterprise AI.</p><p>Making the rules of the road explicit is a key management activity as we progress towards programmable organisations, and this is why we include <em><strong>leaders as programmers</strong></em> within our leadership development programmes and workshops. This is not the old idea that &#8220;leaders should learn Python&#8221;, but the realisation that clearly stated high-level goals and instructions, linked to all their necessary context, are probably how the programmable organisation will be operated and guided in the future.</p><p>We have argued for a long time that <em>literate</em> leadership (writing things down, collating knowledge and curating data) beats <em>performative</em> leadership (meetings, presence and a focus on simplistic decision-making) over the long run. 
Those leaders who have written things down and encouraged their teams to document their work in wikis, collaboration systems and knowledge stores will have a huge advantage, and the resulting content will give context databases a big head start.</p><p>All of this has major implications for L&amp;D functions, and we are starting to see some of them trying to re-define their value proposition from content delivery to longitudinal support and product development.</p><h2>How Can AI Make Management Smart Again?</h2><p>If we are to develop the leadership organisations will need in the AI era, perhaps we should start by asking where things went wrong in the previous one.</p><p>Dan Davies&#8217; excellent book <strong><a href="https://www.goodreads.com/book/show/211161687-the-unaccountability-machine">The Unaccountability Machine</a></strong> provides some clues, and also hints at ways in which AI might make the theoretical notion of management cybernetics feasible as a practical way to run at least the basic functions of a complex organisation.</p><p>The book begins by looking at <em>accountability sinks</em> - structures (rules, algorithms, or market pressures) where decisions are delegated so deeply into a system that no individual human can be held responsible for the outcome - and their role in the 2008 financial crash. 
But it goes on to look at the way reductive abstractions drove post-war economics towards Milton Friedman&#8217;s doctrine of shareholder value-maximisation, which spawned value-destroying ideas such as the leveraged buyout industry.</p><p>Davies also examines management failings through the lens of cybernetics, and specifically <strong><a href="https://en.wikipedia.org/wiki/Viable_system_model">Stafford Beer&#8217;s Viable System Model</a></strong> (VSM), showing how management&#8217;s reductive approach to complexity fell foul of Ashby&#8217;s <strong><a href="https://en.wikipedia.org/wiki/Variety_(cybernetics)">Law of Requisite Variety</a></strong> (for a manager to control a system, their &#8220;regulatory variety&#8221; must match the &#8220;variety of the system&#8221; they are managing). By using attenuators, such as share price or quarterly sales reports, to simplify their picture of the firm, managers have made the organisation blind to all the other important data and signals necessary to guide strategy; worse, the signals they do use for strategy tend to be lagging indicators.</p><p>It might seem that AI and the algorithmic era will make this situation even worse, and that we should therefore return to an imagined &#8216;good old days&#8217; of personal, accountable management. But in fact, enterprise AI offers better ways to cope with increasing complexity, and therefore a way to embrace it positively.</p><p>On the attenuation question, agentic AI enables us to maintain a detailed, objective picture of real-time operations within even a large, complex organisation, meaning the variety of the control system is able to match the variety of observable reality. 
If leaders really are too busy (or lacking the knowledge) to engage with reality, then AI is also pretty good at summarising information for them, but without needing to attenuate / throw away a lot of the richness as late Twentieth Century management tended to do.</p><blockquote><p><em>A modern artificial intelligence system &#8211; a transformer recurrent neural network can take a large block of text and summarise it quickly. It can also expand a short instruction into a longer explanation. It&#8217;s practically designed for facilitating two-way communication between a mass audience and a smaller decision-making system. It would really be a generational shame if we ended up once more just using it to make our existing structures work faster &#8211; like bringing back Shakespeare, Machiavelli and Napoleon and setting them to work designing tax forms.</em></p></blockquote><p>Anybody who has seen an important, expert-designed project or proposal for action reduced to kindergarten images and bullet points so that a senior leader with the attention span of a goldfish can &#8220;decide&#8221; will surely welcome the fact that we can summarise without attenuation.</p><p>Agentic AI can also help eliminate accountability sinks in several ways. For example, processes and rules of the road no longer need to be oversimplified to such an extent that people can be trained to follow them. Instead, we can write as many rules and exception handlers as we like, and leave it to the agents to ensure they follow them, with human oversight of the overall outcomes and the system. 
Plus, by automating the drudgery of basic work coordination (what Beer called System 2 in the VSM), people can spend more time focusing on big picture questions such as identity and purpose (System 5), which means that, instead of being cogs managing spreadsheets, managers can return to being architects of the organisation&#8217;s mission.</p><p>Of course, this works only if we design our structures and rules with this in mind. The machine is only as good as what it &#8216;cares&#8217; about. If we use AI to automate the same narrow goal of &#8220;maximising shareholder returns at any cost&#8221;, we will simply build a faster, more efficient &#8216;Unaccountability Machine&#8217;.</p><blockquote><p><em>&#8220;The purpose of a system is what it does.&#8221;</em> - Stafford Beer</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!83X4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!83X4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 424w, https://substackcdn.com/image/fetch/$s_!83X4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 848w, https://substackcdn.com/image/fetch/$s_!83X4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 1272w, 
https://substackcdn.com/image/fetch/$s_!83X4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!83X4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic" width="1456" height="970" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:970,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1090009,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/182434126?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!83X4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 424w, https://substackcdn.com/image/fetch/$s_!83X4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 848w, 
https://substackcdn.com/image/fetch/$s_!83X4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 1272w, https://substackcdn.com/image/fetch/$s_!83X4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a83f090-5b9d-41ff-9249-323169f2d53b_3848x2564.heic 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Life in the PPT/XLS factory (with apologies to L.S.Lowry)</figcaption></figure></div><h2>&#8220;Nine to five, for 
service and devotion&#8230;&#8221;</h2><p>One issue for both society and business that will start to assert itself in 2026 is the question of jobs. Recent job surveys reveal a bifurcation of the labour market. While mass unemployment has not yet materialised, the data shows that AI is hollowing out entry-level roles while simultaneously exacerbating a shortage of high-skilled talent.</p><p><strong><a href="https://simonwillison.net/2025/Dec/7/cory-doctorow/">AI critics such as the wonderful Cory Doctorow</a></strong> argue that AI is being sold on the promise of job cuts and cost savings, and that none of the surplus it produces will be returned to workers:</p><blockquote><p><em>The growth narrative of AI is that AI will disrupt labor markets. I use &#8220;disrupt&#8221; here in its most disreputable, tech bro sense.</em></p><p><em>The promise of AI &#8211; the promise AI companies make to investors &#8211; is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.</em></p></blockquote><p>But I think there is some win-win potential in the expected impact of AI on jobs. As Azeem Azhar shared in his <strong><a href="https://www.exponentialview.co/p/2025-in-25-stats?utm_source=substack&amp;publication_id=2252&amp;post_id=182322039&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=false&amp;r=9dv58&amp;triedRedirect=true">2025 in 25 stats</a></strong> summary, for the first time in 22 years, work-life balance has overtaken pay as the most important job factor, according to <strong><a href="https://fortune.com/2025/11/17/work-life-balance-outranked-pay-top-perk-peoeple-choosing-a-job/">Randstad&#8217;s 2025 Workmonitor report</a></strong>. 
Indeed, perhaps the entire industrial era concept of jobs is a form of neo-feudalism that we should leave behind, assuming we can find other ways to fund basic needs.</p><p>As Antonio Melonio argues in a provocative post entitled <strong><a href="https://www.thepavement.xyz/p/the-era-of-jobs-is-ending">The era of jobs is ending</a></strong>:</p><blockquote><p><em>The system we built around jobs&#8212;as moral duty, as identity, as the only path to survival&#8212;is about to collide with machines that can perform huge chunks of that &#8220;duty&#8221; without sleep, without boredom, without unions, without pensions.</em></p><p><em>You can treat this as a threat.</em></p><p><em>Or as a once-in-a-civilization chance to get out of a religion that has been breaking us, grinding us down, destroying us for centuries.</em></p></blockquote><p>But at the same time, we have the opportunity to reshape those jobs that are not at risk through finding creative and human ways to combine people and AI. <strong><a href="https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai">There is a lot of thought going into the skills needed for people to make the most of this</a></strong>, and people are starting to explore <strong><a href="https://openreview.net/forum?id=Yhqa8Ljzrj">how to quantify human-AI synergy</a></strong> to find the optimum collaboration. In a situation where skilled workers are at a premium, this all suggests that some jobs will get better and become more interesting.</p><p>But let&#8217;s not pretend that this will not be very disruptive for many people&#8217;s lives, especially in a world where housing has become an asset class rather than a basic right. 
This will need a policy response as well as businesses taking some responsibility for the gravity of the shift away from full employment as a realistic goal.</p><p>I already advise my mentees only to consider taking a job as a stepping stone to better things, and to avoid getting trapped in employment whilst developing their own independent income sources; I expect employers will need to work hard to sell the idea of a 9-5 to the next generation.</p><p>Finally, on the subject of work, we will hit the pause button and return in the new year. I have a large section of the very best Tuna toro and some lesser known Spanish red wines that are crying out for experimentation.</p><p>But for now, we wish you all a wonderful festive season and a happy new year.</p>]]></content:encoded></item><item><title><![CDATA[Big Picture Leadership Techniques for Human-AI Teaming Readiness]]></title><description><![CDATA[Practical Steps for Leaders to Reduce Drift and Build a Coherent Environment for Centaur Teams]]></description><link>https://academy.shiftbase.info/p/big-picture-leadership-techniques</link><guid isPermaLink="false">https://academy.shiftbase.info/p/big-picture-leadership-techniques</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 16 Dec 2025 15:03:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lj40!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<pre><code><strong>This is Part 2 of our exploration of Human&#8211;AI Teaming Readiness. </strong>In Part 1, we defined the coordination problem that emerges when humans and agents share the same work environment, and introduced <strong>Align &#8594; Bound &#8594; Learn</strong> as a lightweight system for designing shared context, boundaries and learning loops for centaur teams. 
In this edition, we move up a level to look at <strong>how leaders maintain the integrity of the collaboration world itself</strong>, through the Collaboration Health View and by treating Human&#8211;AI Teaming Readiness as a core organisational capability, not a local experiment. If you missed Part 1, you can read it <strong><a href="https://academy.shiftbase.info/p/leading-collaborative-centaur-teams">here</a></strong>.</code></pre><p>Many organisations have spent 2025 pouring money into AI licences, copilots and orchestration platforms, often long before they have invested the leadership attention required to make their processes and work explainable to AI agents.</p><p>The result is a familiar disconnect: enormous budget allocation at the technology layer, but too little time spent articulating the rules, trade-offs, priorities and judgment criteria that govern how work happens.</p><p>Even without agents in the loop, many leaders are already feeling the cost of this missing context:</p><ul><li><p>Teams interpret the same strategic message in different ways.</p></li><li><p>Boundaries of autonomy shift depending on who is in the room.</p></li><li><p>Quality thresholds vary wildly between functions.</p></li><li><p>Escalations proliferate because no one is sure who owns what under changing conditions.</p></li></ul><p>These symptoms are not new. They are signals that the <em>world</em> in which teams collaborate is weaker or more fragmented than anyone realises. World-building may sound abstract, but leaders practise it every day through the expectations they set, the meanings they reinforce, the boundaries they uphold, and the language they use.</p><p>This is the part of leadership that has always held organisations together.</p><p>But as mixed-intelligence work begins to spread, the consequences of weak or drifting worlds are becoming impossible to ignore. Context that was once tacit must now be made explicit. 
Coherence that once emerged through proximity now requires deliberate maintenance. And coordination that once relied on shared human intuition must be authored in a form that both humans and machines can recognise.</p><h2><strong>Seeing The Drift Problem</strong></h2><p>Even in organisations that have spent years improving team-level alignment, intent and ways of working through agile transformation, coherence often collapses the moment those teams interact with the wider hierarchy.</p><p>Teams may establish:</p><ul><li><p>clear intent,</p></li><li><p>shared priorities,</p></li><li><p>strong decision principles,</p></li><li><p>and stable rituals for coordination.</p></li></ul><p>Yet when they meet the rest of the organisation (the steering structures, budgeting cycles, risk gates, and inherited decision norms), the world those teams have built can evaporate almost instantly. Suddenly:</p><ul><li><p>vocabulary no longer matches,</p></li><li><p>priorities are interpreted differently,</p></li><li><p>escalation paths contradict team autonomy,</p></li><li><p>and decisions that made sense inside the team lose meaning outside it.</p></li></ul><p>This is already world drift, and leaders are beginning to feel its symptoms long before AI enters the picture:</p><ul><li><p>the same strategy feels different in every function,</p></li><li><p>ownership becomes ambiguous the moment work crosses boundaries,</p></li><li><p>quality thresholds vary depending on which leader signs off,</p></li><li><p>and small misunderstandings accumulate into larger coordination friction.</p></li></ul><p>These are early indicators that the organisational world (the shared logic that collaboration depends on) is losing definition.</p><p>Agents will inherit the world as it is, not as leaders wish it were.</p><ul><li><p>If alignment is inconsistent, they amplify inconsistency.</p></li><li><p>If thresholds vary, they surface those variations.</p></li><li><p>If boundaries are vague, they multiply escalations or 
silent failures.</p></li></ul><p>AI reveals drift and friction rather than creating it. So how can leaders start to recognise the dissonance of world drift?</p><h3><strong>Technique #1: Tracking Drift Signals</strong></h3><p><em>A micro-practice for reading the current state of your collaboration world</em></p><h4><strong>What</strong></h4><p>A 10-minute weekly observation practice that helps leaders detect the earliest signs of world drift.</p><h4><strong>Why</strong></h4><p>Because drift doesn&#8217;t appear as a crisis: it appears as small inconsistencies that compound quietly until coordination feels harder than it should.</p><h4><strong>Who</strong></h4><p>Any leader responsible for cross-functional work or distributed teams.</p><h4><strong>How</strong></h4><p>After a meeting or decision, ask yourself:</p><ol><li><p><strong>Vocabulary:</strong> Did we all use the same language for what &#8220;good&#8221; meant?</p></li><li><p><strong>Mental Model:</strong> Were we really working from the same understanding of the goal?</p></li><li><p><strong>Ownership:</strong> Did decision rights remain clear or shift unpredictably?</p></li></ol><p>These micro-signals tell you where your world is stable and where meaning is beginning to fragment.</p><h3><strong>Technique #2: The Context Ledger</strong></h3><p><em>Micro-codification that strengthens the world one edit at a time</em></p><h4><strong>What it is</strong></h4><p>A running, publicly visible ledger where teams contribute one sentence each week describing a rule, boundary, trade-off or guideline that could be added to collaboration guidance, system prompts and agent skills to improve the way people and agents operate.</p><h4><strong>Why it matters</strong></h4><p>Most context failures come from missing or inconsistent meaning.</p><p>Leaders think they have alignment. 
In reality, everyone is improvising differently.</p><p>A weekly cadence of micro-codification prevents drift, distributes authorship, and prepares organisations for AI agents that will need the same clarity.</p><h4><strong>Who should use it</strong></h4><p>Teams beginning to experience drift, or preparing to introduce agents.</p><h4><strong>How it works</strong></h4><p>Once a week, teams add <strong>one crisp rule</strong> such as:</p><ul><li><p>&#8220;Escalate when customer impact exceeds X, regardless of channel.&#8221;</p></li><li><p>&#8220;Risk level &#8216;high&#8217; means a downstream effect on more than two functions.&#8221;</p></li><li><p>&#8220;Agents may flag priority but humans make trade-offs between priorities.&#8221;</p></li><li><p>&#8220;A &#8216;quality issue&#8217; means deviation from X, not personal preference.&#8221;</p></li></ul><p>Leaders curate, combine, and lightly edit these into a <strong>Context Ledger,</strong> the beginnings of an organisational grammar for centaur teams.</p><p>Over time, this becomes the substrate for:</p><ul><li><p>shared operational meaning,</p></li><li><p>consistent decision-making,</p></li><li><p>and eventually, system-level prompts and agent rulesets.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lj40!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lj40!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 424w, 
https://substackcdn.com/image/fetch/$s_!lj40!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!lj40!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!lj40!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lj40!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/be9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:207340,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/181786451?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!lj40!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!lj40!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!lj40!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!lj40!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe9ec8eb-9549-41ae-9226-e14b0f96a467_1024x1024.heic 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2><strong>Reading the World, Not Just the Work</strong></h2><p>These two techniques give leaders an on-ramp into world-building: a way to see drift and begin to stabilise it, and to build capabilities before agents join the team.</p><p>But as organisations introduce more agents into their operating environments, something subtler begins to happen: the collaboration world develops a life of its own.</p><p>Interactions between humans and agents shape new norms. Adjacent workflows influence one another in ways no-one planned. Local learning compounds unevenly. Meaning shifts faster in some pockets than others.</p><p>Even well-aligned teams can wake up inside a context that feels quietly different from the one they thought they had created. This is the moment where leadership attention must rise above individual decisions or team routines. The question becomes one of world integrity:</p><blockquote><p><em>Is the environment that supports mixed-intelligence work holding its shape, or is drift accumulating across the structural, cultural, or experiential layers of the organisation?</em></p></blockquote><p>Leaders need a practice for reading, and lightly editing, the world itself.</p><p>That is the purpose of the Collaboration Health View, which evolves our earlier on-ramp techniques to a new level.</p><h2>The Higher-Altitude Practice: The Collaboration Health View</h2><p>In the world-building model, coherence never comes from initial design alone. 
It comes from continuous maintenance, from the ability to periodically rise above day-to-day activity and ask whether the world still makes sense to those living and acting within it.</p><p>The Collaboration Health View is that maintenance practice for centaur teams.</p><p>Not <em>&#8220;Is the system working?&#8221;</em></p><p>But <em>&#8220;Is the world of collaboration still coherent?&#8221;</em></p><p>This is not an operational review and not a technical audit. It is a form of world stewardship, a periodic act of sense-making that allows leaders to observe how shared meaning between humans and agents is evolving over time.</p><p>Its purpose is simple:</p><ul><li><p>to detect where coordination is strengthening,</p></li><li><p>where it is quietly drifting,</p></li><li><p>and where the language of collaboration is beginning to fracture under pressure.</p></li></ul><h3>What Leaders Look For</h3><p>At this altitude, leaders are not just reading performance. They are reading world integrity.</p><p>Signals of strengthening collaboration:</p><ul><li><p>Stable division between human judgment and agent execution</p></li><li><p>Transparent decision provenance</p></li><li><p>Deliberate, not habitual, human override</p></li><li><p>A shared vocabulary of priority, risk and quality in everyday use</p></li></ul><p>Signals of drift:</p><ul><li><p>Shadow automation outside shared boundaries</p></li><li><p>Conflicting agent behaviours across adjacent worlds</p></li><li><p>Growing reliance on post-hoc control rather than pre-emptive boundary design</p></li><li><p>Displacement of responsibility (&#8220;the system decided&#8221;)</p></li></ul><p>These are the early signs that the world&#8217;s systems and culture are beginning to slip out of alignment.</p><h3>Cadence and Outputs</h3><p>The Collaboration Health View mirrors other world-maintenance practices:</p><ul><li><p>regular, light-touch cadence</p></li><li><p>high leverage</p></li><li><p>low ceremony</p></li><li><p>persistent 
over time</p></li></ul><p>Its outputs are not plans or mandates. They are small edits to the world:</p><ul><li><p>boundary adjustments</p></li><li><p>vocabulary clarifications</p></li><li><p>intent realignment</p></li><li><p>narrative renewal</p></li></ul><p>This is how coherence is kept alive in an agentic environment by continuously tending the conditions that generate it.</p><h2><strong>Questions for Leaders to Test Collaboration Health</strong></h2><p>A Collaboration Health View becomes most powerful when it is grounded in lived experience. These questions help leaders sense the state of their own environment:</p><ul><li><p>Do humans and agents still appear to act from the same definition of success?</p></li><li><p>Where did we recently see an agent behave correctly in isolation but incorrectly in context?</p></li><li><p>What vocabulary has begun to fragment across teams or systems?</p></li><li><p>Where are humans overriding too often, or not often enough?</p></li><li><p>Which boundaries felt clear during design but ambiguous in practice?</p></li><li><p>Where is escalation happening too late, or too reflexively?</p></li><li><p>What small world-edits (language, boundaries, examples, narratives) would remove friction tomorrow?</p></li></ul><p>These are not diagnostic questions for technologists. They are world-reading questions for leaders.</p><p>Read on to learn how to make human&#8211;AI teaming readiness a core organisational capability to support future developments in agentic AI and automation, and what this means for the future of leadership.</p>
      <p>
          <a href="https://academy.shiftbase.info/p/big-picture-leadership-techniques">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Context Plumbing, Intent Sensing and an AI Reverse Uno on Social Media Feeds?]]></title><description><![CDATA[Enterprise AI looking a better bet than consumer AI, plus some links and ideas about how personal AI agents could help improve both of them]]></description><link>https://academy.shiftbase.info/p/context-plumbing-intent-sensing-and</link><guid isPermaLink="false">https://academy.shiftbase.info/p/context-plumbing-intent-sensing-and</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 09 Dec 2025 15:30:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rfO4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1abd47fb-62ab-44e0-ad3b-fffd3e733df6_2816x1536.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Enterprise Strikes Back</h2><p>Fears of the AI investment bubble potentially crashing the US stock market have abated slightly, even though AI revenues look unlikely to repay the vast sums being invested for some considerable time. But OpenAI is clearly in a vulnerable position, and <strong><a href="https://gizmodo.com/its-code-red-week-for-openai-2000696911">Sam Altman has declared a code red</a></strong> in response to the threat posed by Google.</p><p>Ben Thompson frames the story in Star Wars terms, with <strong><a href="https://stratechery.com/2025/google-nvidia-and-openai/">OpenAI and Nvidia having reached the Empire Strikes Back stage of the hero&#8217;s journey</a></strong> now that Google has seemingly re-asserted its leading position across LLMs, AI apps and hardware. But whilst Nvidia is hoovering up money by selling chips, OpenAI is blowing cash in the opposite direction, and will need to come up with something very special if it is to survive and thrive beyond this initial wave.
Given Altman&#8217;s talk of <strong><a href="https://www.exponentialview.co/p/ev-553">superhuman persuasion</a></strong>, let&#8217;s hope their saving grace is not a consumer AI advertising arms race with Google.</p><p>Meanwhile, the case for enterprise AI being the route to real returns and wider economic and social benefit continues to grow. Nvidia&#8217;s Jensen Huang is continuing to build enterprise and industrial partnerships to open up new markets for their chips; <strong><a href="https://rollingout.com/2025/12/03/jensen-huang-explains-why-enterprise-ai/">he sees industrial use cases such as digital twins and product prototyping as bigger and more important opportunities than consumer chatbots</a></strong>.</p><p><strong><a href="https://www.exponentialview.co/p/ten-things-im-thinking-about-ai-part1">Azeem Azhar began his roundup of the state of AI three years on from the launch of ChatGPT with a look at enterprise AI</a>. </strong>He sees very positive adoption and ROI signals that suggest this field will continue to be where AI has the greatest impact. Even looking at just Generative AI, rather than the more complicated world of agentic AI that needs a degree of organisational transformation to fulfil its promise, he sees strong adoption that suggests we are looking at a J-curve of productivity impact:</p><blockquote><p><em>The best example, though, is JP Morgan, whose boss Jamie Dimon <a href="https://www.bloomberg.com/news/articles/2025-10-07/jpmorgan-s-dimon-says-ai-cost-savings-now-matching-money-spent">said</a>: &#8220;We have shown that for $2 billion of expense, we have about $2 billion of benefit.&#8221; This is exactly what we would expect from a productivity J&#8209;curve. <strong>With any general&#8209;purpose technology, a small set of early adopters captures gains first, while everyone else is reorienting their processes around the technology.</strong> Electricity and information technology followed that pattern; AI is no exception. 
The difference now is the speed at which the leading edge is moving.</em></p></blockquote><h2>Real vs Simulated Intelligence(s)</h2><p>But models are just part of the puzzle in building smarter organisations, and we should not lose sight of the respective strengths of human and machine intelligence.</p><p><strong><a href="https://onlydeadfish.substack.com/p/fish-food-670-ai-versus-human-reasoning?publication_id=2195351&amp;post_id=178509735&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">Neil Perkin has shared a good summary of a recent podcast by Dave Snowden about sense-making and the impact of AI</a></strong>, covering some of his observations about the differences between human and machine reasoning, cognition and insight. I also joined a longer webinar with Snowden and others interested in AI and complexity last week, where he made similarly useful points, so Neil&#8217;s notes saved me a job.</p><blockquote><p><em>Understanding these fundamental differences enables us to collaborate much more effectively with AI engines. LLMs can look like they have a deep understanding of a question but of course what they are really optimised for is identifying patterns and predicting the next most probable word in a sequence to mimic human-generated text. 
They are set up to minimise the difference from training data meaning that, by design, they <a href="https://onlydeadfish.substack.com/p/fish-food-654-what-is-ai-still-not">trend towards the average and most probable</a>.</em></p></blockquote><p>Another important difference between LLMs and human reasoning is that language is not the same as intelligence - <strong><a href="https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems">it is only one part of how people think and communicate their knowledge, as Benjamin Riley wrote for The Verge</a></strong>:</p><blockquote><p><em>LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.</em></p></blockquote><p>If we mistake large language models and their predictive abilities for intelligence, then we risk denuding our own creative and cognitive superpowers. But perhaps if we use these stochastic parrots in more creative ways, they could play a role in helping us improve our own thinking, rather than just outsourcing it. <strong><a href="https://advait.org/talks/sarkar-2025-tedai-vienna/sarkar_2025_TEDAI_AI_as_Tool_for_Thought_V1.pdf">Advait Sarkar posed this question in a recent talk on behalf of Microsoft Research, and concluded that the idea has potential merit</a></strong>:</p><blockquote><p><em>You can demonstrably reintroduce critical thinking into AI-assisted workflows. You can reverse the loss of creativity and enhance it instead. You can build powerful tools for memory that enable knowledge workers to read and write at speed, with greater intentionality, and remember it too.
It turns out, with the right principles of design, you can build tools that are the best of both worlds: applying the awesome speed and flexibility of this technology to protect and enhance human thought.</em></p></blockquote><p>It would be good to see some practical applications of this idea in our use of GenAI within organisations, and especially for leaders.</p><h2>Deriving Context &amp; Intent Needs Better Data</h2><p>Another point Dave Snowden makes is that training data is ultimately more valuable and important than the individual models trained on it.</p><p>This raises questions of digital sovereignty for any organisation or state trying to use AI without becoming dependent on AI platform providers like OpenAI. What should you own? What can you buy or rent? What should you build?</p><p>If the current trajectory holds, it looks like open models will be commoditised and the real value will lie in data, world models and the apps and agents we build on top of the models.</p><p>But whilst we can use large historical datasets for training models, the operational needs of context engineering mean that this category of data should ideally be recent, atomic and fluidly connected, so that it can be used in different ways.</p><p>Matt Webb is thinking about this from the point of view of discerning user intent, and he uses the term <strong><a href="https://interconnected.org/home/2025/11/28/plumbing">context plumbing</a> </strong>to describe the complex task of integrating lots of different data feeds to create context in close to real time.
He goes on to get quite excited about <strong><a href="https://interconnected.org/home/2025/12/05/training">the potential to derive seed training data from popular platforms and marketplaces, and then apply agentic AI coding loops to fulfil the opportunities identified in the data</a></strong> (at least I think that&#8217;s what he&#8217;s saying - see what you think).</p><p>It is worth reading these brain dumps alongside <strong><a href="https://substack.com/@technologik/p-174819339">S&#233;b Krier&#8217;s recent essay Coasian Bargaining at Scale</a></strong>, which postulates that personal agents (armed with your own context and intent) could do a better job of reducing transaction costs and other frictions in distributed negotiations compared to top-down approaches to navigating and balancing competing interests:</p><blockquote><p><em>This is the essence of the work of Nobel laureate Ronald Coase, who argued that if bargaining were cheap and easy, a polluter and their neighbor could strike a private deal without any need for regulation. Of course sometimes some pollution would still happen, but the payoff to the neighbor would ensure that both parties are better off than the zero pollution or no-limits pollution counterfactuals. The tragedy is not the existence of the conflict, but the transaction costs that prevent these mutually beneficial deals from being discovered and executed. 
It&#8217;s also the lesson from Elinor Ostrom, who documented how real-world communities successfully govern shared resources like fisheries and forests through their own intricate local rules.</em></p></blockquote><p>It is an interesting idea, and one that could help shape AI-enabled governance in the future.</p><p>In the context of enterprise AI, we probably need to dig deeper into how we can derive, generate or synthesise training data specific to an organisation&#8217;s work to create world models and context that are rich enough to enable agentic AI operations, and perhaps even the kind of negotiated outcomes and compromises that S&#233;b Krier has in mind.</p><p>This is not just a quantity question; it is also about how we structure and organise that data. Microsoft are doing some work on the semantic layer that helps people and agents make sense of data with what they are calling <strong><a href="https://www.directionsonmicrosoft.com/cio-talk-microsoft-gets-iq/">Microsoft IQ, which is intended to bring intelligent capabilities to Fabric, Microsoft 365, and Azure AI Search</a></strong>.</p><p>Another angle on harnessing data intelligently is to democratise access to it, so that more people can help shape it, <strong><a href="https://diginomica.com/atlassian-acquires-secoda-democratize-enterprise-data-analysis-business-teams">and that is what Atlassian appear to be targeting with their acquisition of data cataloguing tool Secoda</a></strong>.</p><h2>Could Agentic AI Play the Reverse Uno Card on Social Media?</h2><p>S&#233;b Krier&#8217;s piece is another reminder that personal agents are likely to emerge as solutions to many of the coordination challenges that led us down the perilous path of large-scale platforms and algorithmic sharing.</p><p>I am in Copenhagen right now at the pre-launch gathering of a bold project to rebuild Europe&#8217;s social platforms. 
It aims to build on the energy and creativity that we were all so excited about in the early 2000s before Facebook and the big US platforms exploited our human need for connection to create ad-funded clickbait farms that have harmed our societies and democracies. Just today, the Guardian wrote about <strong><a href="https://www.theguardian.com/media/2025/dec/09/youth-movement-digital-justice-spreading-across-europe">a growing movement of young people across Europe seeking to reclaim their lives from big tech platforms</a></strong>, and this trend looks set to grow.</p><p>Within the Matrix world of attention farming, we have seen the bad things that AI can do: algorithmic feeds, emotional manipulation, fake content, fake people, and so on. But what if it could also be part of re-humanising our connection with each other?</p><p>There is a whole (small) world out there of people sharing their passions in niche social networks and communities, subreddits, discords or group chats. But the nature of scale-free networks and network effects means that WhatsApp, Facebook, Twitter, etc. are still the easiest option for many people and groups in Europe just because that&#8217;s where their friends or families are to be found.</p><p>But what if we go back to some of those early social network ideas such as federation, interoperability and <strong><a href="https://en.wikipedia.org/wiki/The_Intention_Economy">the intention economy</a></strong> to play a reverse Uno on algorithmic feeds? If everybody has their own discoverability and curation agent that pulls from multiple networks, communities and messaging platforms to create a personal social feed, then we don&#8217;t need to all be on the same platform.
If I can tell my agent to keep me updated with all my interests and groups, from local news to hobbies and political debates, and handle the messy details of logging in and aggregating content, then perhaps we could help sustain the safer, more human-scale small world networks that are out there already under the radar. Ever the optimist!</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!rfO4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1abd47fb-62ab-44e0-ad3b-fffd3e733df6_2816x1536.heic" width="1456" height="794" alt="" loading="lazy"></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[Leading Collaborative Centaur Teams]]></title><description><![CDATA[How Leaders Design the Language of Collaboration Between People and Agents.]]></description><link>https://academy.shiftbase.info/p/leading-collaborative-centaur-teams</link><guid isPermaLink="false">https://academy.shiftbase.info/p/leading-collaborative-centaur-teams</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 02 Dec 2025 15:38:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NOLC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aca2e5d-186d-4198-8763-3ea48be17bb8_1536x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<pre><code><strong>This is Part 1 of a two-part
exploration of Human&#8211;AI Teaming Readiness. In this edition, we define the coordination problem that emerges when humans and agents share the same work environment, and why this is becoming a core leadership design challenge. Part 2 will introduce a practical technique that senior leaders can use to deliberately keep collaboration aligned at the enterprise level.</strong></code></pre><p>Organisations are in the early stages of introducing new forms of intelligence into environments that were designed exclusively for human collaboration, which means teams are no longer just coordinating across functions and geographies, but also across fundamentally different types of actors, with different modes of perception, speed, memory and agency.</p><p>What can leaders do to ensure this works smoothly and successfully?</p><p>AI adoption today is still focused on tools. Which copilot should we deploy? Which agents should we orchestrate? Which platforms should we integrate?</p><p>But the next question is how coordination actually works when humans and agents share the same operational space. Because coordination is not achieved through tools alone. It depends on shared intent, mutual expectations, clear boundaries of authority, and a common language for what &#8220;good&#8221; looks like. It depends on trust, not blind trust, but trust that is continually calibrated through feedback and shared understanding. And it depends, above all, on context.</p><p>Adding agents into workflows that already struggle with role clarity and decision ownership can exacerbate existing problems. If we ask people to &#8220;delegate&#8221; to systems whose boundaries of action are poorly defined, some respond by over-relying on (and over-trusting) automation, whilst others respond with defensive scepticism, overriding systems by default. 
In both cases, coordination degrades because the shared language and context of work is missing or inconsistent.</p><p>What emerges is not seamless collaboration, but a new form of friction:</p><ul><li><p>decisions without provenance,</p></li><li><p>actions without clear ownership, and</p></li><li><p>learning that fails to accumulate.</p></li></ul><p>This is a coordination design problem that reflects the need for leadership to evolve towards a new craft: <em><strong>the design of the conditions under which humans and machines can act together without constantly pulling the system out of shape.</strong></em></p><p>In mixed-intelligence teams, coordination is no longer something that simply &#8220;happens&#8221; through informal norms and shared human intuition. It must be deliberately authored. The language of collaboration needs to be designed.</p><p>That language is what we mean by context.</p><p>This edition <strong><a href="https://academy.shiftbase.info/p/a-leaders-guide-to-world-building">builds directly on our recent exploration of world-building as a leadership capability for the agentic era</a></strong>, where we argued that organisations must be designed as coherent worlds of physics (systems), culture (meaning), and geography (experience), and not just as collections of tools and workflows.</p><p>Let&#8217;s zoom in from the design of the world to the design of collaboration that happens inside it; specifically, how leaders shape the language through which humans and AI work together as centaur teams.</p><h2>From Tool Use to Teaming Readiness</h2><p>Collaboration is not a feature of a toolset. It is a property of an environment.</p><p>A team does not become a centaur team because it has access to an agent. It becomes one when humans and machines can reliably coordinate their actions around shared intent, shared boundaries and shared meaning. 
Without that, what looks like collaboration on a process map quickly collapses into a brittle sequence of hand-offs, overrides, and shaky unwritten assumptions.</p><p>This is the shift many organisations are now stumbling into without quite realising it.</p><p>AI is no longer just something people use. In many settings, it is something that increasingly participates, sensing conditions, drafting actions, monitoring flows, surfacing options, and, in some cases, acting directly in the world. The moment AI participates in work, the question becomes <em>&#8220;What does it mean to work together?&#8221;</em></p><p>Yet most leadership doctrines, operating models and performance systems were never designed for such a question. They assume human actors with human judgment, human accountability, human learning cycles. Agents enter this landscape as something undefined: sometimes treated as a junior worker, sometimes as a calculator, sometimes as an oracle, sometimes as a risk.</p><p>The result is a quiet incoherence in how teams are being asked to relate to their machines.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!NOLC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aca2e5d-186d-4198-8763-3ea48be17bb8_1536x1024.heic" width="1456" height="971" alt="" loading="lazy"></figure></div><p>In some places, delegation runs far ahead of design. Agents are given sweeping autonomy without corresponding clarity on boundaries, escalation, or quality thresholds. In others, distrust freezes collaboration entirely, reducing AI to a glorified drafting aid despite its wider potential. Both patterns look like adoption. Neither looks like teaming readiness.</p><p>Teaming readiness is something different from tool readiness.</p><p>Tool readiness asks:</p><ul><li><p>Is the technology stable?</p></li><li><p>Is it secure?</p></li><li><p>Is it integrated?</p></li></ul><p>Teaming readiness asks a different set of questions:</p><ul><li><p>Do humans and agents share a workable definition of success?</p></li><li><p>Is it clear when an agent may act, and when a human must decide?</p></li><li><p>Do people trust the system for the right reasons?</p></li><li><p>Is learning flowing both from human to machine and from machine back to human?</p></li></ul><p>These are not questions for IT alone. They are questions of leadership design.</p><p>Moving from tool use to true teaming requires the deliberate shaping of roles, responsibilities, feedback loops, and the language through which work is coordinated: in other words, the design of context as a shared operating grammar.</p><p>Until that grammar exists, organisations will continue to experience a familiar paradox: impressive local gains from AI deployment, alongside growing systemic fragility in how work actually holds together.</p><p>Leadership in mixed-intelligence environments therefore shifts in a subtle but fundamental way. The task is no longer simply to deploy capability.
It is to make collaboration itself legible, stable and learnable at the boundary between human and machine.</p><p>That is what we mean by Human&#8211;AI Teaming Readiness.</p><h2>What &#8220;Context&#8221; Means in a Human&#8211;AI Team</h2><p>In human&#8211;AI collaboration, <em>context</em> is often treated as a technical concern: prompts, data access, memory, retrieval. These are important foundations. But they are not what makes collaboration work.</p><p>In a teaming environment, context is also <em>coordination.</em></p><p>It is the shared frame that allows different kinds of intelligence to act in relationship to one another without constant supervision or repair. It is what tells a human when to trust a system&#8217;s output, when to challenge it, and when to override it. It is what tells an agent not just what it can do, but how its actions sit within a wider field of purpose, risk and responsibility.</p><p>In world-building terms, this is the moment where physics, culture and geography stop being abstract layers and become the operating language of daily collaboration:</p><ul><li><p>The <strong>System layer</strong> provides the physics: rules, data, contracts and constraints that make certain actions possible and others impossible.</p></li><li><p>The <strong>Culture layer</strong> provides the meaning: norms, values, stories and judgments that shape what <em>should</em> happen.</p></li><li><p>The <strong>Experience layer</strong> provides the geography: the interfaces, workflows and spaces through which both humans and agents navigate the world.</p></li></ul><p>Context is the braided fabric of all three.</p><p>In this sense, it is not a static asset. 
It is a living operating language, made up of several intertwined elements:</p><ul><li><p><strong>Shared intent:</strong> a common understanding of what the work is ultimately trying to achieve, beyond the task at hand.</p></li><li><p><strong>Boundaries of authority:</strong> clarity on when an agent may act autonomously, when it must recommend, and when a human must decide.</p></li><li><p><strong>Decision vocabulary:</strong> stable definitions of what &#8220;good&#8221;, &#8220;acceptable&#8221;, &#8220;escalate&#8221;, &#8220;complete&#8221;, or &#8220;exception&#8221; actually mean in practice.</p></li><li><p><strong>Quality thresholds:</strong> what level of confidence, evidence, or validation is required before action is taken.</p></li><li><p><strong>Risk posture:</strong> how much uncertainty the team is willing to tolerate in different contexts.</p></li><li><p><strong>Cultural norms of judgment:</strong> whether challenge is expected or discouraged, whether speed outweighs precision, whether learning is prioritised over optimisation.</p></li></ul><p>Taken together, these form the grammar of collaboration. Without this grammar, human&#8211;AI interaction defaults to two unstable extremes. Either humans over-trust systems, surrendering judgment too early and too broadly. Or they under-trust them, turning agents into little more than sophisticated drafting assistants. In both cases, potential is left unrealised and risk is misunderstood.</p><p>The difficulty is that much of this context is normally held tacitly in human teams. It lives in shared experience, informal norms, and unspoken expectations. When agents enter the system, that tacit layer is suddenly exposed. What was once quietly inferred now has to be made explicit if coordination is to hold.</p><p>This is why many early centaur team experiments feel awkward at first. 
The introduction of an agent acts like a mirror, reflecting back the vagueness that already existed within the team.</p><p>To become teaming-ready, organisations must therefore do more than supply agents with data and access. They must author the world in which those agents will operate, as physics, as culture, and as navigable experience.</p><p>And it is this shared language that leadership is now being asked to design.</p><p>Let&#8217;s look at a core leadership technique for Human&#8211;AI Teaming Readiness we call <strong>Align &#8594; Bound &#8594; Learn</strong>, which is a lightweight system for:</p><ul><li><p>aligning intent,</p></li><li><p>designing authority boundaries, and</p></li><li><p>ensuring that learning compounds rather than fragments.</p></li></ul>
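<p>As a thought experiment, the Align &#8594; Bound &#8594; Learn loop can be sketched in code. This is a hypothetical illustration, not part of any real platform: the <code>TeamingContract</code> class and its action names are invented for the example.</p>

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an Align -> Bound -> Learn teaming contract.
# All names here are invented for illustration, not a real agent API.

@dataclass
class TeamingContract:
    intent: str                                           # Align: shared definition of success
    autonomous_actions: set = field(default_factory=set)  # Bound: the agent may act alone
    recommend_only: set = field(default_factory=set)      # Bound: the agent proposes, a human decides
    escalation_log: list = field(default_factory=list)    # Learn: exceptions feed back into the grammar

    def decide(self, action: str) -> str:
        """Route an action according to the agreed authority boundaries."""
        if action in self.autonomous_actions:
            return "act"
        if action in self.recommend_only:
            return "recommend"
        # Anything outside the agreed vocabulary escalates to a human,
        # and the log becomes raw material for the next alignment round.
        self.escalation_log.append(action)
        return "escalate"

contract = TeamingContract(
    intent="Resolve routine support tickets within SLA",
    autonomous_actions={"draft_reply", "tag_ticket"},
    recommend_only={"issue_refund"},
)
```

<p>Here <code>contract.decide("draft_reply")</code> returns <code>"act"</code>, while an unanticipated action such as <code>"close_account"</code> returns <code>"escalate"</code> and is logged for review, which is where the learning loop closes.</p>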
      <p>
          <a href="https://academy.shiftbase.info/p/leading-collaborative-centaur-teams">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Metamorphoses: "Of Bodies Chang'd to Various Forms, I Sing"]]></title><description><![CDATA[New models, evidence of hyperproductivity, automated governance and the potential for leaders to guide the programmable organisation like poets]]></description><link>https://academy.shiftbase.info/p/metamorphoses-of-bodies-changd-to</link><guid isPermaLink="false">https://academy.shiftbase.info/p/metamorphoses-of-bodies-changd-to</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 25 Nov 2025 15:37:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!e20k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the programmable organisation, leaders need to think less like bureaucrats and more like architects and coders to provide the context and instructions needed to operate smart systems.</p><p>Or perhaps poets?</p><p>Poetry and coding are both forms of language that act as compression technologies (like a zip file), leveraging a lot of symbolic pointers to invoke a great deal of meaning with just a few words. This is why some poems, songs or phrases can inspire social change or action. Words have power, as Billie Holiday&#8217;s Strange Fruit, The Internationale or even Yankee Doodle attest.</p><p>As we have written previously, <strong><a href="https://academy.shiftbase.info/p/will-we-see-the-first-programmable?utm_source=publication-search">once an organisation&#8217;s processes and workflows become addressable, they become programmable</a></strong>. And once we codify rules, guidelines and world lore, it is possible for simple agents to combine, operate and oversee these processes without going off the rails. 
Taken together, this will give leaders the ability to invoke complex combinations of actions with clarity of thought and goals, expressed in simple words.</p><p>But poetry and literature build on millennia of shared experience and the evolution of languages. So we have some work to do at the lower levels of the intelligence stack in our organisations to make this a reality. If more leaders had taken those pesky knowledge management nerds more seriously twenty years ago, they might be in a better state of AI readiness today ;-)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!e20k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!e20k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 424w, https://substackcdn.com/image/fetch/$s_!e20k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 848w, https://substackcdn.com/image/fetch/$s_!e20k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 1272w, https://substackcdn.com/image/fetch/$s_!e20k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!e20k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:420384,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/179930403?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!e20k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 424w, https://substackcdn.com/image/fetch/$s_!e20k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 848w, https://substackcdn.com/image/fetch/$s_!e20k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 1272w, https://substackcdn.com/image/fetch/$s_!e20k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9d2e69b-999e-44e3-9c67-7a5df4120568_2816x1536.heic 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Wall Street&#8217;s Next Top Model</h2><p>In AI news, we have seen a number of impressive new LLM releases recently, most of which have delivered noticeable marginal improvements in intelligence and efficiency, but without really taking us forward to a new frontier.</p><p>The big news was probably <strong><a href="https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini">Google&#8217;s release of Gemini 3</a></strong>, which has impressed testers and early adopters. 
Coming alongside a new coding IDE, improved image generation capabilities and other new developments, this release contributed to a sense of <strong><a href="https://www.exponentialview.co/p/can-ai-escape-googles-gravity-well">Google achieving a dominant position in the AI sector</a></strong>.</p><p>Subsequently, <strong><a href="https://simonwillison.net/2025/Nov/19/gpt-51-codex-max/">OpenAI released GPT 5.1 Codex Max</a></strong>, focused on long-running coding tasks across multiple context windows, which seems like a good approach for agentic AI.</p><p>Also, not to be forgotten, <strong><a href="https://simonwillison.net/2025/Nov/24/claude-opus/">Anthropic released Claude Opus 4.5</a></strong> to consolidate its lead in coding models.</p><p>But there was also good progress in open models outside China, with <strong><a href="https://www.interconnects.ai/p/latest-open-artifacts-16-whos-building">Nathan Lambert profiling some of the open models being developed in the USA</a></strong>, such as the recently released OLMo3, which is approaching the performance of leading LLMs, but is also fully auditable, which is important for enterprises, especially those in regulated sectors.</p><p>As Azeem Azhar points out, aside from the safety and sovereignty questions, <strong><a href="https://www.exponentialview.co/p/ev-551">switching to open models could save vast amounts of money</a></strong> for enterprises who rarely need the most cutting edge general intelligence the frontier models offer:</p><blockquote><p><em>Despite open models now achieving performance parity with closed models at ~6x lower cost, closed models still command 80% of the market. Enterprises are overpaying by billions for the perceived safety and ease of closed ecosystems. 
If this friction were removed, the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5767103">market shift to open models would unlock an estimated $24.8 billion</a> in additional consumer savings in 2025 alone.</em></p></blockquote><p>But long-term, LLMs are not the only route to intelligence, and the commentary around Yann LeCun&#8217;s unexpected departure from Meta has reminded industry watchers that he is not the only person who does not believe scaling LLMs will achieve AGI, and <strong><a href="https://joiningdots.substack.com/p/pathways-to-advancing-ai">we should look to other approaches such as world models and neuro-symbolic AI</a></strong>, which we have covered previously. <strong><a href="https://finance.yahoo.com/news/musk-says-real-world-data-021355089.html">Elon Musk has also been making typically cavalier claims</a></strong> about xAI&#8217;s advantage being its access to world model data from Tesla vehicles and &#8230; err &#8230; Twitter.</p><h2>Enterprise AI Architecture Gaps</h2><p>In the enterprise AI space, development is understandably a few steps behind the powers of the frontier models and ideas like programmable organisations and leaders as poets, but we are making progress nonetheless.</p><p>Constellation recently shared a <strong><a href="https://www.constellationr.com/blog-news/insights/enterprise-llm-questions-you-should-be-asking">handy round-up of questions enterprises are (or should be) asking about AI</a></strong> and its implications for their IT estate, such as whether agentic AI will kill off underwhelming, generic SaaS platforms. They also point out that even if agentic AI is just able to solve the seemingly eternal problem of enterprise search, it would be worth the price of entry:</p><blockquote><p><em>Five years from now will we all say, &#8216;those LLMs turned out to be a kick ass enterprise search&#8217;? 
The more I use LLMs, the more I think their greatest contribution is perusing structured and unstructured data and surfacing it easily. LLMs clearly collapse the time spent on conducting searches and doing superficial research. Yes, LLMs will make stuff up, but they&#8217;re a great starting point. When combined with enterprise data and repositories that have been useless for years, LLMs are revamping the search game for companies. <a href="https://www.constellationr.com/blog-news/insights/welcome-context-chorus-there-s-no-ai-without-context">Suddenly, context engineering is a thing</a>.</em></p></blockquote><p>As with so many other questions where enterprise AI shows great potential for improving our organisations, architecture is the key, and therefore a priority for AI readiness efforts.</p><p>Neal Ford and Mark Richards shared an interesting piece last week about <strong><a href="https://www.oreilly.com/radar/how-agentic-ai-empowers-architecture-governance/">how agentic AI with MCP plumbing is demonstrating an ability to solve enterprise architecture problems</a></strong> and provide smart, adaptive oversight and even automation, all building on the idea of evolutionary architecture.</p><blockquote><p><em>X as code (where X can be a wide variety of things) typically arises when the software development ecosystem reaches a certain level of maturity and automation. Teams tried for years to make infrastructure as code work, but it didn&#8217;t until tools such as Puppet and Chef came along that could enable that capability. The same is true with other &#8220;as code&#8221; initiatives (security, policy, and so on): The ecosystem needs to provide tools and frameworks to allow it to work. 
Now, with the combination of powerful fitness function libraries for a wide variety of platforms and ecosystem innovations such as MCP and agentic AI, architecture itself has enough support to join the &#8220;as code&#8221; communities.</em></p></blockquote><p>Clearly this idea has potential for non-technical domains as well, such as making sense of competing or overlapping guidance frameworks and rulesets. In fact, as David Barry writes, in terms of AI governance, we will soon reach a point where &#8216;human in the loop&#8217; approaches are unable to keep up and <strong><a href="https://www.reworked.co/digital-workplace/can-ai-systems-police-themselves-the-high-stakes-gamble-of-ai-oversight/">we need to develop hybrid AI/human governance and oversight methods</a></strong>.</p><p>In a similar vein, Wired report today that <strong><a href="https://www.wired.com/story/amazon-autonomous-threat-analysis/?utm_source=nl&amp;utm_brand=wired&amp;utm_mailing=WIR_Daily_CYBERWEEK_112525_PAID&amp;utm_campaign=aud-dev&amp;utm_medium=email&amp;utm_content=WIR_Daily_CYBERWEEK_112525_PAID&amp;bxid=679fe6b740ae7ef26b0b090f&amp;cndid=86084401&amp;hasha=016003755aa0296c369a337f05a1c1d7&amp;hashc=6682d98588cab4309c24846dca6db095f2734170cec7daa3428e1be66edfc3ed&amp;esrc=MARTECH_ORDERFORM&amp;utm_term=WIR_DAILY_PAID">Amazon is successfully using teams of AI agents to hunt down and fix bugs deep within its codebases</a></strong> that might cause security issues.</p><p>This emerging area of agentic AI oversight, governance and monitoring will be an important focus for enterprise architecture as agentic AI advances. 
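<p>To make the &#8220;architecture as code&#8221; idea concrete: a fitness function is simply an executable check over the system&#8217;s structure. The sketch below is a minimal, hypothetical example (the module names and the layering rule are invented); real teams would use a fitness function library and live dependency data rather than a hand-built list.</p>

```python
# A minimal architecture fitness function: a layering rule expressed as code.
# Module names and the forbidden-dependency rule are invented for illustration.

FORBIDDEN = {("ui", "database")}  # e.g. UI code must never call the database layer directly

def fitness(dependencies):
    """Return the rule violations; an empty list means the architecture passes."""
    return [(src, dst) for (src, dst) in dependencies if (src, dst) in FORBIDDEN]

# Dependency pairs as might be extracted from a codebase scan
deps = [("ui", "services"), ("services", "database"), ("ui", "database")]
violations = fitness(deps)  # -> [("ui", "database")]
```

<p>Once checks like this run continuously in a pipeline, an agent can do more than report violations; it can propose or even apply fixes, which is the kind of adaptive oversight Ford and Richards describe.</p>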
<strong><a href="https://diginomica.com/servicenow-and-microsoft-bet-ai-governance-and-orchestration-path-enterprise-platform-value">ServiceNow just announced a partnership with Microsoft</a></strong> to integrate its own AI controllers with Microsoft&#8217;s AI tools, such as <strong><a href="https://www.microsoft.com/en-us/microsoft-365/blog/2025/11/18/microsoft-agent-365-the-control-plane-for-ai-agents/">the recently released Agent 365, which Redmond describes as a control plane for AI agents</a></strong>.</p><h2>Do AI Workers Love Their Children Too?</h2><p>Are AI agents a new class of employee - digital workers - or just a smarter class of software tools? This question might seem like a purely semantic issue, but in fact it could both have design implications for the smart digital workplace and, as NVIDIA&#8217;s Jensen Huang argues, shape how we see the economic potential of the AI market.</p><p><strong><a href="https://www.oreilly.com/radar/jensen-huang-gets-it-wrong/">Tim O&#8217;Reilly argues that talk of agents as workers is overblown and could risk de-humanising the workplace</a></strong>:</p><blockquote><p><em>As an entrepreneur or company executive, if you think of AI as a worker, you are more likely to use it to automate the things you or other companies already do. If you think of it as a tool, you will push your employees to use it to solve new and harder problems. If you present your own AI applications to your customers as a worker, you will have to figure out everything they want it to do. 
If you present it to your customers as a tool, they will find uses for it that you might never imagine.</em></p><p><em>The notion that AI is a worker, not a tool, can too easily continue the devaluation of human agency that has been the hallmark of regimented work (and for that matter, education, which prepares people for that regimented work) at least since the industrial revolution. In some sense, Huang&#8217;s comment is a reflection of our culture&#8217;s notion of most workers as components that do what they are told, with only limited agency. It is only by comparison with this kind of worker that today&#8217;s AI can be called a worker, rather than simply a very advanced tool.</em></p></blockquote><p>It&#8217;s a good piece from somebody who has seen a lot of change and lived through (and contributed to) a few hype cycles.</p><p>We already have examples of <strong><a href="https://maisa.ai/">digital worker platforms</a></strong> in the wild, and perhaps Huang is right that by creating an infinite pool of cheap workers, we can expand the global economy beyond what we imagine today. But I am with Tim on this.</p><p>However, the reason we favour the idea of centaur teams - high-agency people using teams of AI agents (as tools, not synthetic employees) to support their work - is largely because human creativity and ingenuity continues to surprise us.</p><p>Steve Newman recently shared an interesting article looking at <strong><a href="https://secondthoughts.ai/p/hyperproductivity">real, existing examples of AI-enabled hyperproductivity</a></strong>:</p><blockquote><p><em>&#8230;the new breed of hyperproductive teams have three things in common. They&#8217;re building lots of bespoke tools. They&#8217;re letting AIs do all of the direct work, reserving their own efforts to specify what should be done and to improve the tools. And they are achieving a compounding effect, using their tools to improve their tool-improving tools. 
The net effect is what I&#8217;m calling &#8220;hyperproductivity&#8221;.</em></p></blockquote><p>In the examples he cites, mostly individual developers working on small projects, human creativity and the &#8216;hustle&#8217; vibe that drive their imaginative use of existing AI tools are the secret sauce and the multiplier here, not just the tools themselves.</p><h2>Can Leaders also be Hyperproductive?</h2><p>Looping back to where we started, it is clear that there is a lot that leaders can learn from the behaviours and working methods of tech people, whether enterprise architects, developers or DevOps people running operations.</p><p>If you are curious and courageous enough to orchestrate some agents and tools, create some automated data and knowledge inputs, and perhaps use AI to help assemble these into a scaffold for your work, then you can create some of the conditions of hyperproductivity that Steve Newman described in his piece.</p><p>Operational leadership, like software, is at its base a series of instructions, but these are built on layers of orchestration and abstraction. The idea is that you have people and systems below you that know how to do the work, so that you can guide and instruct rather than micromanage. And when these components and processes become addressable and programmable, then a whole world of possibilities opens up.</p><p>Contracts and agreements are also like software, and we have seen the power of new models such as RenDanHeyi that are <strong><a href="https://www.boundaryless.io/blog/primitives-of-contract-based-and-unit-based-organizing-for-better-agility-and-engagement/">built around a web of mutual agreements and commitments</a></strong>. 
Smart contracts could be a feature of the way that agents cooperate and coordinate, so rather than just micromanaging agents with a series of if-this-then-that instructions, we might give them contracts and guidelines and let them work out the finer details.</p><p>If leaders can learn to operate this kind of smart, connected work system, establishing the right goals, context and guidance, then we could see the same kind of hyperproductivity. Some of the most advanced hi-tech equipment in the world today - e.g. F1 cars, fighter jets, FPV drones, remote surgery - is still incredibly sensitive to human creativity and skill, with people pushing the boundaries of what the tech can do.</p><p>May the best poet win!</p>]]></content:encoded></item><item><title><![CDATA[Small Models, Real Change: How Leaders Can Use SLMs to Make AI Fit Their World]]></title><description><![CDATA[Why the next stage of AI adoption isn&#8217;t about going bigger, but about getting closer - to context, culture, and control.]]></description><link>https://academy.shiftbase.info/p/small-models-real-change-how-leaders</link><guid isPermaLink="false">https://academy.shiftbase.info/p/small-models-real-change-how-leaders</guid><dc:creator><![CDATA[Cerys Hearsey]]></dc:creator><pubDate>Tue, 18 Nov 2025 15:03:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hUoq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most leaders are being urged to adopt AI tools at scale, but too few are being shown how to make it fit their business. 
Most enterprise AI still involves general-purpose models that typically live in someone else&#8217;s cloud, know nothing of local context, and sound nothing like the company that&#8217;s using them.</p><p>This is why Small Language Models (SLMs) are becoming so interesting for business leaders - they&#8217;re compact, customisable, and, most importantly, trainable on your world. They have the ability to make AI more personal, more trustworthy, and more adaptable. Small enough to understand, to try, and to reject if needed.</p><p>Whilst at least two-thirds of enterprise AI projects today involve LLMs in the cloud, we can expect a hybrid combination of locally-owned small models supplemented by generic cloud-based LLMs to become the principal direction of travel in the next year or so. The cloud may still be the entry point, but hybrid will be the destination: generic LLMs for broad intelligence, small private models for the work that truly matters.</p><p>That means now is exactly the right moment to explore what SLMs can add: focused, private models that solve real problems with more control over data security, tone, and local conditions. They&#8217;re one of the few AI technologies leaders can start using today while also preparing for the hybrid architectures that will define tomorrow.</p><p>What becomes possible when you use focused, private models to solve real problems, with more control over data security, context and local conditions?</p><h3><strong>The Problem with Generic Intelligence</strong></h3><p>Many AI pilots run on foundation models built for everyone, and no-one in particular, aiming only for general productivity gains, without being clear about how to make them.</p><p>These models are fluent, capable, and endlessly creative, but of course they lack the context needed to create connected, meaningful outputs. 
They don&#8217;t know your business logic, your acronyms, your risk appetite, or your tone of voice.</p><p>That can make them impressive at first, but ultimately perhaps not reliable enough for some use cases. A general model can draft a report, summarise a document, or brainstorm ideas, but it can&#8217;t tell whether those ideas align with your policies, your market realities, or your brand. It&#8217;s intelligence without grounding, lacking accountability when it matters most.</p><p>For leaders, this creates an uncomfortable paradox: AI that&#8217;s powerful enough to change how work gets done, but generic enough to mislead, confuse, or dilute what makes your organisation distinct. Not to mention disrupt the finely tuned ways of working, communications flows and psychological safety of a high performing team.</p><p>As we work to move AI adoption from pilots and prototypes into daily operations, that lack of specificity could be costly. Early decisions made on partial understanding start to shape workflows, customer experiences, and even culture. Early adopters are discovering that general intelligence can&#8217;t carry organisational nuance, and that without local context, AI risks amplifying noise as well as insight. The next phase of adoption will also require a different kind of intelligence: smaller, focused, and tuned to the enterprise itself.</p><p>For leaders, this shift isn&#8217;t technical, it&#8217;s strategic. SLMs put agency back where it belongs: in the hands of those who understand the business best. 
They allow leaders to guide how intelligence is expressed inside the organisation, through its tone, its rules, and its judgement, rather than outsourcing that understanding to someone else&#8217;s model.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hUoq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hUoq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!hUoq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!hUoq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!hUoq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hUoq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:399723,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/179249679?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hUoq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!hUoq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!hUoq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!hUoq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F868c6bcc-4cda-4ffc-8290-d32d25ab00d2_1536x1024.heic 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"></figcaption></figure></div><h3><strong>Use Case 1: Policy Assistant - Translating Principles into Everyday Practice</strong></h3><p>Policy interpretation is something every leader already does. It&#8217;s the constant translation of written principles into day-to-day judgement: <em>&#8220;Does this meet our standard?&#8221;</em> <em>&#8220;Where&#8217;s the line between autonomy and compliance?&#8221;</em></p><p>A small, domain-tuned Policy Assistant helps leaders scale that judgement. Trained on your existing policy manuals, approval chains, and communications, it can answer policy questions in plain language, cite the relevant clause, and flag where interpretation might require escalation.</p><p>This isn&#8217;t a new capability, it&#8217;s an upgrade of a familiar process. 
It turns passive documents into active dialogue, giving teams quick clarity while giving leaders better visibility of where confusion, exceptions, or outdated rules live. Over time, these patterns help refine governance itself.</p><p>We are seeing this emerge in HR, where cloud platforms can ingest employee policies and make them queryable through a chatbot. But firms that take more control and combine all kinds of policy guidance, not just HR manuals, will be able to create more powerful assistants that understand every angle, from HR and compliance to regulatory requirements and even internal best practice guides.</p><h3><strong>Use Case 2: Knowledge Keeper - Capturing How Work Really Happens</strong></h3><p>Every team lead knows how easily expertise disappears. People move on, projects close, lessons fade, and with them go the small, hard-won details that make work run smoothly. Most knowledge management systems capture what was done, but rarely how or why.</p><p>A Knowledge Keeper model could change that. By training a small language model on the team&#8217;s own meeting notes, retrospectives, project reports, and wikis, leaders can create an assistant that remembers how things were actually achieved: the reasoning, context, and trade-offs behind decisions.</p><p>This is not just an efficiency gain; it&#8217;s a net-new capability: an institutional memory that talks back. It can explain the logic behind a past decision, show how a process evolved, or surface comparable patterns when teams face similar challenges. 
For leaders, it means knowledge becomes active again, less about archiving the past and more about accelerating the present.</p><p>Over time, a Knowledge Keeper can help spot repetition, duplication, and learning gaps across functions, turning the messy history of how the organisation learns into a valuable strategic asset.</p><h3><strong>Use Case 3: Decision Briefing - Faster, Context-Aware Judgement</strong></h3><p>Leaders already spend a huge amount of time making sense of information: condensing reports, comparing options, and preparing for decisions under time pressure. But most of this synthesis work still happens manually, scattered across inboxes, decks, and late-night note-taking.</p><p>A Decision Briefing model reframes that process. Trained on your organisation&#8217;s strategic language, KPIs, governance frameworks, key risks, and decision templates, a small, domain-tuned model can assemble short, context-rich briefs that surface what matters most.</p><p>Unlike a generic summariser, an SLM-based Decision Brief knows what good looks like here. It understands which metrics matter, which trade-offs recur, and which narratives resonate with your board or sponsors. It can pull insights from multiple systems, flag inconsistencies, and present information in the tone and structure that supports confident decision-making.</p><p>For leaders, this is an evolution of a familiar process. It transforms briefing from a time-consuming, reactive task into a fast, adaptive loop, giving leaders back time to think, while ensuring decisions remain anchored in shared context.</p><h3><strong>Use Case 4: Learning Companion - Embedding Culture in Learning</strong></h3><p>The challenge most corporate learning systems face is that they teach what to do, but not how we do it here. They&#8217;re generic by design, optimised for scale rather than nuance.</p><p>A Learning Companion built on a small, internal model can change that. 
By training it on leadership principles, case studies, onboarding materials, and real examples of decision-making, organisations can create learning assistants that speak in the company&#8217;s own voice.</p><p>Instead of serving abstract modules, the companion can coach employees through real situations:</p><p><em>&#8220;How would we apply our customer-first principle here?&#8221;</em></p><p><em>&#8220;What tone fits our communication style?&#8221;</em></p><p><em>&#8220;How do we balance speed with safety in this context?&#8221;</em></p><p>For leaders, this represents a translation of an existing process, embedding mentorship, feedback, and values-driven learning directly into daily workflows. It scales culture without diluting it, turning learning from an HR function into a living expression of leadership.</p><p>Over time, Learning Companions become mirrors of organisational maturity. The richer the examples and stories they contain, the clearer a picture they offer of how a company actually learns, and whether its principles hold up under pressure.</p><h3><strong>Use Case 5: Customer Context Partner - Bringing Human Insight Back Into Digital Interactions</strong></h3><p>Leaders have always relied on customer data to guide decisions, but most of that data describes transactions, not relationships. Generic AI tools can analyse patterns or generate messages, but they rarely grasp what makes a customer trust your organisation.</p><p>A Customer Context Partner built on a small, domain-tuned model bridges that gap. Trained on your organisation&#8217;s customer conversations, service transcripts, brand guidelines, and satisfaction data, it acts as an interpreter between human intent and digital scale.</p><p>It can help teams craft responses that sound like your brand, summarise the real issues behind customer feedback, or highlight where sentiment is starting to drift. 
Unlike general-purpose assistants, it understands both the customer&#8217;s language and the company&#8217;s ethos: how to be empathetic within boundaries.</p><p>For leaders, this isn&#8217;t a new capability; it&#8217;s a smarter translation of what great service has always required: context, consistency, and care. It ensures every digital touchpoint reinforces trust and coherence, even as AI handles more of the interaction volume.</p><p>And because it learns from real examples, it becomes a tool for reflection too: revealing patterns of misunderstanding, tone drift, or policy friction that leadership can address upstream. In that way, it doesn&#8217;t just scale service; it helps leaders see the organisation through the customer&#8217;s eyes.</p><h3><strong>Why Small Models Matter</strong></h3><p>Across these examples, the pattern is the same: each one turns something leaders already care about into something AI can finally handle responsibly. The difference lies in where the intelligence lives.</p><p>Small Language Models (SLMs) change the balance of power. Because they&#8217;re trained on your data and run in your environment, they allow AI to reflect the business it serves rather than reshape it from the outside. 
They bring the benefits of generative AI (speed, synthesis, creativity), while staying grounded in organisational truth.</p><p>For leaders, this is the breakthrough: SLMs make AI governable, teachable, and trustable.</p><p>They give leaders a way to align AI&#8217;s intelligence with the organisation&#8217;s judgement &#8212; not by writing more policy documents, but by embedding context, tone, and values directly into the models that support daily work.</p><p>When AI speaks your language, follows your rules, and understands your purpose, it stops being a tool you adapt to, and starts becoming a capability you lead.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Olim!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Olim!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 424w, https://substackcdn.com/image/fetch/$s_!Olim!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 848w, https://substackcdn.com/image/fetch/$s_!Olim!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 1272w, https://substackcdn.com/image/fetch/$s_!Olim!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Olim!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:94295,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/179249679?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Olim!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 424w, https://substackcdn.com/image/fetch/$s_!Olim!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 848w, https://substackcdn.com/image/fetch/$s_!Olim!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 1272w, 
https://substackcdn.com/image/fetch/$s_!Olim!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F424af1fc-9b6e-447d-9c11-0582c7c6823c_1000x1000.heic 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>Small Has Been the Answer Before</strong></h3><p>Every major wave of digital transformation has followed a similar rhythm: we start by going big, with central platforms, sweeping integrations and giant systems, and only later realise that progress depends on going small. 
In the past, it sometimes felt like the path of least resistance for organisations lacking in technical confidence was just to blow their budget on a &#8216;good enough&#8217; one-size-fits-all SaaS purchase, rather than to create their own strategic digital business capabilities. Thankfully, that is changing.</p><ul><li><p>When &#8216;software eating the world&#8217; led to sprawling monoliths, microservices restored agility by breaking them down into smaller, interoperable components.</p></li><li><p>When web publishing concentrated power in a few portals, blogs and social tools brought it back to individuals and teams.</p></li><li><p>When cloud computing abstracted infrastructure into distant data centres, edge computing brought processing closer to where the data lives.</p></li></ul><p>Each shift rebalanced power from centralised scale to distributed intelligence. Small systems proved to be the engines of speed, resilience, and ownership.</p><p>SLMs are part of that same pattern.</p><p>They do for AI what microservices did for software: take something vast, opaque, and external, and make it modular and locally adaptable.</p><p>They give leaders a way to own intelligence the way they once learned to own data, applications, and experience, by becoming smarter in their own space.</p><p>Now let&#8217;s consider what this means for change leadership as it relates to enterprise AI, and look at different paths to get started.</p>
      <p>
          <a href="https://academy.shiftbase.info/p/small-models-real-change-how-leaders">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[AI Agents & Skills, Plus What Games Can Teach us About Adoption]]></title><description><![CDATA[Some useful signals of enterprise AI adoption are emerging - how can agentic AI and agent skills take this to the next level?]]></description><link>https://academy.shiftbase.info/p/ai-agents-and-skills-plus-what-games</link><guid isPermaLink="false">https://academy.shiftbase.info/p/ai-agents-and-skills-plus-what-games</guid><dc:creator><![CDATA[Lee Bryant]]></dc:creator><pubDate>Tue, 11 Nov 2025 15:32:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xB54!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Enterprise AI Adoption Signs of Life</h2><p>There are some promising signals emerging around enterprise AI adoption and its impact on companies that are using it.</p><p>Last month&#8217;s <strong><a href="https://ai.wharton.upenn.edu/wp-content/uploads/2025/10/2025-Wharton-GBK-AI-Adoption-Report_Full-Report.pdf">Accountable Acceleration: Gen AI Fast-Tracks Into The Enterprise</a></strong> report by Wharton Human-AI Research and GBK Collective labelled 2025 as the <em>Accountable Acceleration</em> stage of adoption, finding that:</p><ul><li><p>82% use Gen AI at least weekly and 46% daily, and 89% agree it enhances employees&#8217; skills; but as usage climbs, 43% see a risk of declines in skill proficiency.</p></li><li><p>72% are formally measuring Gen AI ROI, focusing on productivity gains and incremental profit, and three out of four leaders are seeing positive returns on their Gen AI investments.</p></li></ul><p>Last week, McKinsey released a new report <strong><a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai">The state of AI in 2025: Agents, innovation, and 
transformation</a></strong>, which found that while most firms are still in the pilot phase with agentic AI and not yet scaling up, there is growing interest in the topic. In terms of the adoption impact of AI in general, the report found:</p><ul><li><p>use-case-level cost and revenue benefits, with 39% of respondents reporting EBIT impact;</p></li><li><p>a majority (64%) of respondents say that AI is enabling innovation; and</p></li><li><p>half of the AI high performers plan to use AI for business transformation, and many are already redesigning workflows to achieve this.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zZRB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zZRB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 424w, https://substackcdn.com/image/fetch/$s_!zZRB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 848w, https://substackcdn.com/image/fetch/$s_!zZRB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 1272w, https://substackcdn.com/image/fetch/$s_!zZRB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zZRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic" width="1456" height="1157" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1157,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:139357,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/178605746?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zZRB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 424w, https://substackcdn.com/image/fetch/$s_!zZRB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 848w, https://substackcdn.com/image/fetch/$s_!zZRB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 1272w, 
https://substackcdn.com/image/fetch/$s_!zZRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3cab2516-d9a8-4034-af0b-057d5c94fcb1_1692x1344.heic 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>This last point is important, because most of the biggest potential returns on AI investment will come from using it to transform operations, rather than just making existing processes more efficient. 
But this is easier said than done, as speakers at Celonis&#8217;s recent conference made clear, <strong><a href="https://diginomica.com/what-does-context-ai-really-mean-six-enterprise-leaders-why-process-intelligence-makes-ai-work">sharing their own experience of using process intelligence and mapping</a></strong> to create the conditions where enterprise AI can thrive.</p><p>KPMG also recently shared some insights into their own use of Google Gemini, both internally and with clients, to advance their agentic AI agenda, <strong><a href="https://www.constellationr.com/blog-news/insights/google-cloud-kpmg-outline-lessons-learned-gemini-enterprise-deployments">as reported by Constellation Research</a></strong>:</p><blockquote><p><em>Stephen Chase, Global Head of AI &amp; Digital Innovation at KPMG, said the firm adopted Gemini Enterprise across the workforce with 90% of employees accessing the system within two weeks of launch. &#8220;We believe this is the fastest adopted technology our firm has had and we are in a regulated industry,&#8221; said Chase. &#8220;We went into it with the idea this was going to be part of our overall transformation. It was never about individual use cases. 
It was about sparking innovation.&#8221;</em></p></blockquote><p>From talking to some of the people advancing agentic AI within KPMG and other leading professional services firms recently, it is clear they see a huge opportunity for business transformation here, and they are very committed to pursuing it in a systematic way by focusing on building blocks, architecture and re-use, not just stand-alone use cases and apps.</p><h2>Agents are Getting Easier</h2><p>Whilst it is true that mapping and re-engineering existing processes and workflows can be a complex undertaking within large organisations, once this hurdle is cleared, it is becoming easier and cheaper to create powerful agentic AI capabilities on top.</p><p>Long-term, <strong><a href="https://academy.shiftbase.info/p/enterprise-ai-is-a-social-technology">as we have written previously</a></strong>, enterprises might be able to benefit from more control, customisation and security by using smaller models, perhaps even running in their own infrastructure. <strong><a href="https://siliconsandstudio.substack.com/p/the-end-of-scale-small-models-are?publication_id=2692259&amp;post_id=178020022&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">As Seth Dobrin from the Silicon Sands newsletter put it recently:</a></strong></p><blockquote><p><em>A business is not a general-purpose entity, so why should it expect to get value from a general-purpose model? This is the fundamental question that the AI industry is finally being forced to confront. The answer is simple: it shouldn&#8217;t. The actual value of AI in a business context lies not in its ability to do everything, but in its capacity to do specific things exceptionally well. 
This is where SLMs excel.</em></p></blockquote><p><strong><a href="https://the-decoder.com/moonshot-ais-kimi-k2-thinking-sets-new-agentic-reasoning-records-in-open-source-llms/?_bhlid=d8959c0157c61148c6c4a51024257db031d16b25">The recent excitement around Kimi K2 &#8216;Thinking&#8217; and other open models</a></strong> suggests that it is possible to train powerful AI models for orders of magnitude less cost compared to the leading platform models, and calls into question the current strategy of consuming ever more compute resources to deliver marginal improvements in function.</p><p>But it also suggests that many businesses should be at least thinking about owning and adapting small models for some purposes. Perhaps we don&#8217;t need big, all-powerful general-purpose models for most of what we do in the enterprise; they may also lack the specific nitty-gritty detail of a particular knowledge domain or business sector.</p><p>However, even when building agentic AI on general models, it is becoming easier and more realistic to adapt and guide them according to our specific needs, use cases and rules of the road, and the launch of Claude Skills by Anthropic last month could be a major proof point for this approach.</p><h2>Skills Could be a Game-Changer</h2><p><strong><a href="https://offthegridxp.substack.com/p/the-genius-of-anthropics-claude-agent-skills-2025">Michael Spencer believes Claude Skills will enable Anthropic to overtake OpenAI in the enterprise market</a></strong>, by simplifying the process of teaching and guiding LLMs to perform specific tasks in ways that suit a particular organisation or function.</p><blockquote><p><em>Anthropic overtakes OpenAI in ARR either in 2027 or 2028, and the reason is the utility it is providing Enterprise customers and businesses all over the world. 
On October 16th, 2025 [Anthropic] announced Claude Skills.</em></p></blockquote><p>Leading AI expert <strong><a href="https://simonw.substack.com/p/claude-skills-are-awesome-maybe-a">Simon Willison is also very bullish about Claude Skills</a></strong>, and his overview of what they can do is a good primer on the topic, albeit mostly from a coding perspective. In particular, he highlights the simplicity and token-efficiency of this approach to customising the work of agents to suit local needs:</p><blockquote><p><em>There&#8217;s one extra detail that makes this a feature, not just a bunch of files on disk. At the start of a session Claude&#8217;s various harnesses can scan all available skill files and read a short explanation for each one from the frontmatter YAML in the Markdown file. This is very token efficient: each skill only takes up a few dozen extra tokens, with the full details only loaded in should the user request a task that the skill can help solve.</em></p></blockquote><p>Although developers will be a major target audience for Skills, it is worth remembering that skills and sub-skills can be built and adapted using plain language prompting, which means it is even more important for people and teams to <strong><a href="https://academy.shiftbase.info/p/codifying-rulesets-in-the-explainable">codify the way they work</a></strong>, because they can now make these guidelines available to agents to improve their outputs.</p><p><strong><a href="https://wrk3.substack.com/p/agent-skills-new-currency-of-work?publication_id=61108&amp;post_id=176628319&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">Matteo Cellini has written about how important this approach could be in making work - and organisations - more programmable</a></strong>, with a useful guide to getting started with some simple work skills:</p><blockquote><p><em>One of the biggest shortcomings of AI Agents at the moment, is that even though they can reason, summarize, 
even code &#8212; they are still like bright interns without process memory. Ask them to reconcile accounts, prepare an ESG report, or validate an invoice, and they&#8217;ll improvise every time. Anthropic&#8217;s new Agent Skills framework promises to give them exactly that &#8212; modular micro-expertise that can be shared, reused, and improved.</em></p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xB54!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xB54!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 424w, https://substackcdn.com/image/fetch/$s_!xB54!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 848w, https://substackcdn.com/image/fetch/$s_!xB54!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 1272w, https://substackcdn.com/image/fetch/$s_!xB54!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xB54!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic" width="1000" height="864" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:864,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:80465,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://academy.shiftbase.info/i/178605746?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xB54!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 424w, https://substackcdn.com/image/fetch/$s_!xB54!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 848w, https://substackcdn.com/image/fetch/$s_!xB54!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 1272w, https://substackcdn.com/image/fetch/$s_!xB54!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd354db1-886a-47a9-8012-3fe86c79fd91_1000x864.heic 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Adoption Lessons from Game Design</h2><p><em>Modular, micro-expertise that can be shared, reused and improved</em> is a good description of what makes agentic AI so interesting. 
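</p><p>To make the mechanics concrete, here is a minimal Python sketch of the two-pass pattern Willison describes: scan cheap frontmatter summaries at session start, and load a skill&#8217;s full instructions only when a task calls for them. The directory layout and field names follow the published SKILL.md convention, but the parsing below is simplified for illustration rather than being Anthropic&#8217;s actual implementation.</p>

```python
from pathlib import Path


def read_frontmatter(path: Path) -> dict:
    """Parse the YAML-style frontmatter block at the top of a SKILL.md file.
    Only simple 'key: value' pairs are handled, to keep this sketch stdlib-only."""
    lines = path.read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the frontmatter
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta


def skill_index(skills_dir: Path) -> list[dict]:
    """Cheap pass, done once at session start: collect only name and
    description per skill -- a few dozen tokens each -- not the full body."""
    index = []
    for skill_file in sorted(skills_dir.glob("*/SKILL.md")):
        meta = read_frontmatter(skill_file)
        index.append({
            "name": meta.get("name", skill_file.parent.name),
            "description": meta.get("description", ""),
            "path": skill_file,
        })
    return index


def load_skill(entry: dict) -> str:
    """Expensive pass, only when a task matches: read the full Markdown body."""
    return entry["path"].read_text(encoding="utf-8")
```

<p>In a real harness the index entries would be injected into the model&#8217;s context so it can decide when a skill applies; the point here is simply the cost asymmetry between the two passes.</p><p>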
Providing access to powerful, general AI capabilities can work well for some people, but widespread adoption into people&#8217;s daily work usually needs a better connection between capability and need.</p><p><strong><a href="https://www.oreilly.com/radar/think-smaller-the-counterintuitive-path-to-ai-adoption/">As Ben Lorica writes for O&#8217;Reilly Radar</a></strong>, tackling small, specific pain points and use cases is a good approach to AI adoption in the enterprise more broadly, at least as a counterpoint to the roll-out of general AI capabilities that people do not always know how to absorb into their daily work:</p><blockquote><p><em>The [AI] teams who succeed won&#8217;t be those chasing the most advanced models. They&#8217;ll be the ones who start with a single Hero user&#8217;s problem, capture unique data through a focused agent, and relentlessly expand from that beachhead. In an era where employees are already voting with their personal ChatGPT accounts, the opportunity isn&#8217;t to build the perfect enterprise AI platform&#8212;it&#8217;s to solve one real problem so well that everything else follows.</em></p></blockquote><p>This is an area of behavioural science where games and world-building can teach us a lot.</p><p>First, as we have seen with the first general wave of chatbots, giving people a blank text box and telling them they can do or ask anything is far less effective than you might imagine, and quickly leads to disappointment and disengagement as its limitations become painfully clear.</p><p>In games, where very basic (NPC) character AIs have been used for many years, designers know how to use dialogue design and context setting to signal to players that a shop-keeper just sells materials, or a quest giver will tell you their story and ask for help. 
Many games that seem open world are in fact driven by behavioural corridors that guide you where you need to go to avoid frustration.</p><p>In enterprise AI, the majority of agents will not be voice- or chat-activated, but will work together to perform tasks, triggered by API or MCP calls, for example. But some will still talk to you. How can we avoid them wasting our time, or setting expectations so broad that they tend to disappoint?</p><p><strong><a href="https://www.ai-supremacy.com/p/complete-guide-to-voice-ai-use-cases-ambient-computing-2025?r=9dv58&amp;utm_medium=ios&amp;triedRedirect=true">There has been a lot of progress in AI voice interfaces</a></strong>, and I still marvel at what tools like ChatGPT can do for me by interpreting my rambling voice prompts when I am on the move. But I absolutely cannot stand its patronising and obsequious voice responses, so I take my output in text and with a pinch of salt.</p><p>If we expect to see more ambient AI interaction using conversational interfaces, then we probably need to start thinking about character development for different use cases and user groups.</p><p><strong><a href="https://www.interconnects.ai/p/opening-the-black-box-of-character?utm_source=substack&amp;publication_id=48206&amp;post_id=176664428&amp;utm_medium=email&amp;utm_content=share&amp;utm_campaign=email-share&amp;triggerShare=true&amp;isFreemail=true&amp;r=9dv58&amp;triedRedirect=true">Nathan Lambert has shared an interesting new thread of research into character training</a></strong> that I found intriguing. 
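</p><p>At the simple end of the spectrum, a &#8216;character&#8217; can be nothing more than a reusable persona instruction attached to each conversation. A hypothetical sketch, in which the persona text and helper name are invented for illustration:</p>

```python
# Hypothetical helper: attach a reusable persona to a user request.
# The persona text below is invented for illustration, not a tested prompt.
CONCISE_ANALYST = (
    "You are a concise analyst. Answer in plain language, "
    "state uncertainty explicitly, and never flatter the user."
)


def with_persona(persona: str, user_prompt: str) -> list[dict]:
    """Build a chat-style message list: the persona becomes the system turn,
    the actual request becomes the user turn."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_prompt},
    ]


messages = with_persona(CONCISE_ANALYST, "Summarise Q3 pipeline risks.")
```

<p>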
Some of this can be achieved very simply using prompts or skills that guide an agent in how to respond to people, but more advanced approaches might require human-supervised fine tuning of the models themselves.</p><p>And of course there are lots of other lessons to be studied about how an increasingly agentic digital workplace can engage people, guide them where they need to go, and generally create a rich environment for human-AI collaboration.</p><p><strong><a href="https://www.raphkoster.com/2025/11/03/game-design-is-simple-actually/">Accomplished game designer Raph Koster has shared a 12-step programme for building game worlds</a></strong> that engage people and motivate them to invest so much of their time in exploring and mastering them. It&#8217;s a great read if you want to understand world-building, not just for game designers but for business leaders in general, especially those who want to get the most out of their teams.</p><p>Good game design produces a level of engagement, effort and problem solving that companies could only dream of in relation to their workforce. For example, after any popular game is released, we usually see a whirlwind of volunteer peer-to-peer knowledge work as people create super-detailed guides and comprehensive learning resources to help other players within days of launch.</p><p><strong>Is this kind of rich peer-to-peer collaboration possible in your organisation?</strong></p><p>Shift*Academy is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber at <a href="https://academy.shiftbase.info/subscribe">academy.shiftbase.info/subscribe</a>.</p>]]></content:encoded></item></channel></rss>