The Growing Importance of Context Engineering for Leaders Adopting AI
How business leaders can play an important role in enterprise AI adoption without needing to become technical domain experts
We have been thinking a lot recently about how to help business leaders understand, direct and deploy AI in their organisations without needing to become technical domain experts. Traditional approaches to learning, change and technology adoption increasingly feel insufficient.
Normally, ‘technology adoption’ means deploying new tools or technology within the old system, and then persuading people to engage with and use them to get their work done. But in a situation where these new technologies change not only the system, its structures and processes, but also potentially both the job of the employee and the manager trying to persuade them to adopt it, things are a lot more… multi-dimensional.
We need to have a good sense of the big picture and the direction of travel towards it, but we need to start small and help leaders understand that the real value lies in having the imagination to ask the right questions and establish a meaningful context for both people and AI tools to operate within.
Right now, it seems various analysts and commentators are dancing around this issue, picking out elements or dimensions of the challenge, but are collectively struggling to offer a credible approach to moving forward on so many fronts at the same time.
AI is coming for (y)our jobs!
Vala Afshar recently wrote that AI is a force multiplier for other tech trends, and that these trends point towards greater autonomy and automation of key functions organisations will need to perform in the future. This is probably true, and a useful observation, but where do we start?
Even IT departments are likely to lose headcount as automation ramps up, as the Wall Street Journal reported last week in a piece about a startup that aims to remove ‘human middleware’:
“The idea … is that “human middleware,” or individuals among a company’s IT staff who manually connect data between disparate systems, have large amounts of menial work they shouldn’t have to do, said Mayan Mathen, XOPS’s co-founder and chief executive.”
The impact of AI on jobs is undeniable, but its long term implications are not obvious. After the industrial revolution, it became accepted that people had to work 9-5 in a factory (or its modern equivalent, the cubicle farm) to have a home, a family, or a life. But this iron-clad rule was being shaken up before AI started having an impact on jobs.
A combination of the pandemic, when many people no longer ‘went to work’ but the world kept turning, and the housing crisis has led many people to lose faith in the employment ‘compact’ and explore other ways to have a good life. There has also been something of a quiet revolt among younger employee cohorts against the trade-offs implicit in becoming a partner in a professional services firm or a senior manager in a corporate, where the personal life cost of climbing the ladder is not seen as an undeniably good investment any more.
With this in mind, AI ‘taking our jobs’ is not necessarily the full story. The unbundling and re-bundling of tasks in companies, when combined with automation, will almost certainly reduce the number of jobs available, but might make those that remain more meaningful and remunerative.
As Azeem Azhar put it recently:
“Whether AI flattens the wage ladder or steepens it depends on which tasks get handed to machines—and how quickly governments can turn foresight into action.
It is not the technology that determines the shape of the labour market. It is us.
If wage and employment outcomes depend more on the tasks firms automate than on AI itself, policy must focus squarely on those task choices.”
Perhaps this will rebalance the relationship between powerful but non-contributing generic managers and the talent they need around them to succeed, or perhaps - as an FT opinion piece hinted yesterday in The Boss is Back - the opposite will be the case, at least in the current climate of US business.
Indeed, the very nature of our organisations will also change. We have written about programmable organisations, and the huge reduction in management roles these will require; and we have written about the power of highly connected small centaur teams to achieve results that go beyond what whole departments can deliver today in large corporates.
Enterprise technology analysts Constellation Research have tried to categorise AI maturity and adoption into different levels, and concluded that the most advanced ‘AI exponentials’ will probably be net new organisations, rather than emerge from existing firms today:
“Over time, most leading enterprises will become AI native. In fact, many of the category leaders are well on the way there.
However, the AI exponential companies--which will operate 80% of the business by machine and generate more than $5 million in profit per employee--are going to have to start from scratch. These companies are lean and mean and frankly may never get beyond 10 employees.”
What can leaders do?
So what can we do to help leaders get their arms and minds around this multi-dimensional change challenge? If this article were an AI-written slopsicle, I would now confidently present the 9 steps guaranteed to make it work, ending with “And in conclusion…” But this is just a round-up of interesting links and ideas, and we are all still in exploration and discovery mode. However, there are some pointers that might show the way.
First, as this recent HBR piece points out, Your AI Strategy Needs More Than a Single Leader:
“Too often, the CAIO is tucked under the CTO or siloed in a strategy group. They’re told to experiment, to be bold, but also to avoid risk, deliver immediate value, and ensure enterprise-wide transformation. In one global-services firm, the AI lead created a promising prototype using a large language model. But the project went nowhere. Business units hadn’t been brought in early, training never happened, and the new tool sat unused.
Even in better-aligned orgs, the pressure on a single leader to drive AI transformation is intense. Boards want their investment in AI to quickly translate to top-line revenue. Legal asks for guardrails. Operations wants automation. Marketing wants personalization. It’s not that these goals are wrong. But expecting one person to deliver them all is setting them up to fail.”
We need to work together in more productive ways, as Cerys wrote last week in Enterprise AI Adoption Requires Connected Leadership Steering, to see all the points of view and priorities on a single map.
And the goal is not adoption per se, but business improvement and evolution. As Bertrand Duperrin shared last week, use cases and basic adoption are not enough for freeform or general purpose technologies like AI:
“Unlike traditional business software, these platforms do not offer predefined uses: they are malleable, modular, and require users to give them structure and meaning. This is characteristic of freeform technologies, which are neither rigid nor prescriptive, but simply environments that users can organize themselves.”
Leaders as context engineers
Whilst technologists engage with the detail of how AI, agents and connected data structures work at the software and hardware level, we need to cultivate the thinking and imagination of leaders to expand the art of the possible.
This is supposed to be what they are good at, after all - although the majority are so deeply embedded in meaningless process work, and so busy banging people over the head with their three over-simplistic values or targets (or other forms of slideware that made sense in a meeting with other slideware enjoyers), that their first-principles thinking and ability to imagine another world might need some cultivation.
LLMs are mysterious beasts, even to themselves, and they need to be coaxed, guided and often treated with a degree of scepticism to get the best out of them. There is a tendency to anthropomorphise LLMs, but this is not a useful, or indeed safe, way to think about how they work.
In an enterprise context, the best architectures right now look like ecosystems of small, tightly controlled, single-purpose agents coordinated by orchestrators, and supported by testing, validation and human-in-the-loop oversight. This does not require omnipotent AGI, but sensible automation and basic reasoning.
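By way of illustration only, the shape of that pattern might look something like the Python sketch below: small agents with narrow instructions, an orchestrator doing explicit routing, and a person approving output before anything is acted on. The agent names, the call_llm stub and the approval prompt are hypothetical placeholders, not a real framework.

```python
from dataclasses import dataclass
from typing import Dict, Optional


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call via whichever provider you use."""
    return f"[model output for: {prompt[:40]}...]"


@dataclass
class Agent:
    name: str
    instructions: str  # a narrow, single-purpose system prompt

    def run(self, task: str) -> str:
        return call_llm(f"{self.instructions}\n\nTask: {task}")


def human_approves(agent_name: str, output: str) -> bool:
    """Human-in-the-loop gate: nothing is acted on without review."""
    print(f"\n[{agent_name}] proposes:\n{output}")
    return input("Approve? (y/n) ").strip().lower() == "y"


class Orchestrator:
    """Routes each task to the one agent scoped to handle it."""

    def __init__(self, agents: Dict[str, Agent]):
        self.agents = agents

    def handle(self, intent: str, task: str) -> Optional[str]:
        agent = self.agents[intent]  # explicit, tightly controlled routing
        output = agent.run(task)
        return output if human_approves(agent.name, output) else None


if __name__ == "__main__":
    orchestrator = Orchestrator({
        "summarise": Agent("summariser", "Summarise the input in three bullet points."),
        "classify": Agent("classifier", "Label the input as 'invoice' or 'contract'."),
    })
    orchestrator.handle("summarise", "Q3 board minutes: revenue up, hiring paused, AI pilot approved.")
```

The point is less the code than the shape: each agent has one narrow job, routing is explicit rather than emergent, and a human stays in the loop before anything leaves the system.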
This piece about why you shouldn’t ask them why they get things wrong is a good example of why we should be careful ascribing sentience to their output:
“Even if a language model somehow had perfect knowledge of its own workings, other layers of AI chatbot applications might be completely opaque. For example, modern AI assistants like ChatGPT aren't single models but orchestrated systems of multiple AI models working together, each largely "unaware" of the others' existence or capabilities. For instance, OpenAI uses separate moderation layer models whose operations are completely separate from the underlying language models generating the base text.
When you ask ChatGPT about its capabilities, the language model generating the response has little knowledge of what the moderation layer might block, what tools might be available in the broader system (aside from what OpenAI told it in a system prompt), or exactly what post-processing will occur. It's like asking one department in a company about the capabilities of another department with a completely different set of internal rules.”
Right now there are too many hucksters performing cheap magic tricks with generative AI and jumping straight from “Look! You can make a pretend AI assistant” to baseless promises of automagic organisational transformation. We need to be real about the challenges involved here, and not just entertain leaders, but help them do the work needed to achieve AI readiness. And that starts with themselves and their own jobs - assuming they are still needed ;-)
The other useful and acquirable skill we need to cultivate is context engineering. Practitioners have realised that whilst you can one-shot prompt some simple outputs using generative AI, we need to go a lot further in providing context for enterprise AI tools and agents to do what we want them to, just as we do with people. Attention is moving away from RAG as a method of helping AI tools understand a particular organisation and towards bigger context windows, more persistent memory and system prompts that give an agent a good sense of what good looks like. This area is evolving fast, and although it seems technical, I would argue that context engineering is becoming a core leadership skill and role, and one that is eminently teachable.
This two part piece from Addy Osmani at O’Reilly is a good starting point for further reading on the topic of context engineering:
“As [AI pioneer Andrej] Karpathy explained, doing this well involves everything from clear task instructions and explanations, to providing few-shot examples, retrieved facts (RAG), possibly multimodal data, relevant tools, state history, and careful compacting of all that into a limited window. Too little context (or the wrong kind) and the model will lack the information to perform optimally; too much irrelevant context and you waste tokens or even degrade performance. The sweet spot is non-trivial to find. No wonder Karpathy calls it both a science and an art.”
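To make that concrete, here is a minimal Python sketch of the kind of assembly Karpathy describes: instructions, a few-shot example, retrieved facts and recent history are gathered and then trimmed to fit a limited window. The retrieve_facts helper, the example prompts and the word-count budget are simplifying assumptions for illustration, not a prescribed method.

```python
from typing import List


def retrieve_facts(query: str) -> List[str]:
    """Stand-in for a retrieval (RAG) step against organisational knowledge."""
    return ["Policy: refunds over £500 need finance sign-off."]


def build_context(task: str, history: List[str], budget_words: int = 300) -> str:
    """Assemble instructions, examples, facts and history, then trim to fit."""
    required = [
        "You are the customer-operations assistant. Be concise and factual.",  # task instructions
        f"Current task: {task}",
    ]
    optional = [
        "Example - Q: Can I refund £50? A: Yes, within standard policy.",  # few-shot example
        "\n".join(retrieve_facts(task)),                                   # retrieved facts (RAG)
        "Recent conversation:\n" + "\n".join(history[-5:]),                # state history
    ]
    # Crude compaction: drop history first, then facts, then the example,
    # until everything fits the (word-count stand-in for a) token budget.
    while optional and sum(len(s.split()) for s in required + optional) > budget_words:
        optional.pop()
    return "\n\n".join(required[:1] + optional + required[1:])


print(build_context("Customer asks for a £700 refund.", ["User: hello", "Agent: how can I help?"]))
```

The hard part in practice, as the quote suggests, is deciding what earns its place in the window - which is exactly the kind of prioritisation judgement leaders already exercise when briefing people.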
Four ways for leaders to get started
Start to redefine the leadership role: shift from “driving adoption” to “framing context.” Leaders should ask the right questions, set direction, and create meaning, not micromanage tools.
Practice context engineering: treat this not as a technical trick, but as a core leadership skill: how to set boundaries, provide relevant information, and create shared understanding between humans and AI.
Build connected leadership teams: don’t pin transformation on a CAIO. Form cross-functional steering groups that integrate strategy, tech, operations, and people perspectives.
Experiment with centaur teams: pilot small, high-autonomy, human+AI units that can model future ways of working inside the enterprise.