Enterprise AI is a Social Technology, Not the Sorcerer’s Apprentice
Some links and thoughts on agentic AI's positive potential to transform work and jobs, and the need to avoid both magical thinking and AI panic
Magical Realism
Humans appear to use some very primitive sorting algorithms when presented with new discoveries, such as: is it a God? … and … could this magic kill us all?
In debates around AI, this tendency is creating increasingly polarised viewpoints and judgements at a time when we should all probably be keeping an open but sceptical mind as we continue exploring what the technology can actually do (or not do) in practice.
The New York Times recently profiled Eliezer Yudkowsky, long-time prophet of AI doom, who defined his current purpose as:
“To have the world turn back from superintelligent A.I., and we get to not die in the immediate future,” he told me. “That’s all I presently want out of life.”
In the same newspaper, Thomas Friedman has not allowed his admitted ignorance of the topic to prevent him from waxing lyrical about its dangers and existential threats in recent columns. Melanie Mitchell’s debunking of these ‘takes’ is worth a read:
Friedman is not wrong to worry about what's going to happen vis a vis the U.S., China, and AI. However, we need less magical thinking and more realism. Those who are in a position of crafting AI regulations would be much better off digging into the many excellent reality-grounded proposals, ones that require reasoning from, yes, those slow, analog-era humans.
But in the interest of balance, it is worth noting that AI-positive writers also sometimes use the M-word when describing their reactions to some of the more advanced examples of Generative AI knowledge synthesis. Ethan Mollick recently wrote about noticing a shift “from partners to audience, from collaboration to conjuring” as more capable multi-stage agent reasoning becomes less transparent and explainable:
This is the issue with wizards: We're getting something magical, but we're also becoming the audience rather than the magician, or even the magician's assistant. In the co-intelligence model, we guided, corrected, and collaborated. Increasingly, we prompt, wait, and verify… if we can.
So what do we do with our wizards? I think we need to develop a new literacy: First, learn when to summon the wizard versus when to work with AI as a co-intelligence or to not use AI at all. AI is far from perfect, and in areas where it still falls short, humans often succeed. But for the increasing number of tasks where AI is useful, co-intelligence, and the back-and-forth it requires, is often superior to a machine alone. Yet, there are, increasingly, times when summoning a wizard is best, and just trusting what it conjures.
Just as with social media, there are myriad ways in which this ‘magical’ technology could create social harms when deployed at a scale that prohibits accountability and mutual responsibility, or if the economics of its productisation incentivises the wrong things.
But the real value of online social collaboration and sharing was always in small-world networks with a degree of common interest or common purpose. That was the case before Facebook and Twitter, and it remains the case in organisations, where people have contracts, rules and shared goals.
When we first started deploying wikis and social collaboration tools inside companies, some managers feared that people would behave badly and write nonsense for all to see, but in reality employees were no more likely to do this than they were to walk into the office and spray-paint graffiti on the walls. It just didn’t happen. The dynamics are different. This is also why some well-managed online communities remain havens of mutual excellence and kindness even in the highly toxic online world we see today.
As Melanie put it in her piece debunking Friedman:
To badly paraphrase the horror movie Soylent Green: “AI is people!” The vast corpus of human writing these systems are trained on is the basis for everything modern AI can do; no magical “emergence” need be invoked.
If we treat AI as a social technology, perhaps we will learn to govern it as such, rather than imbuing it with superhuman, mystically dangerous characteristics.
This is why we are so interested in the ostensibly less exciting field of enterprise AI.
The application of smart technology within organisations that have a clear purpose or goal could be the catalyst for supercharged human collaboration and collective action, potentially transforming work into something more fulfilling and rewarding - but with human oversight and control, and at human scale.
Corporate Kabuki
So why is the transformation of work so important?
Alex McCann caused a stir a couple of weeks ago with an eviscerating take-down of the pointless, performative nature of many corporate jobs today, explaining why some smart young people are either looking at other ways to make a life and a living, or using the salary and infrastructure of their corporate roles to secretly work on their side hustles.
This is particularly acute for people in their twenties. We entered the workforce just as the illusion was becoming impossible to maintain. We never had that period where we could believe our corporate roles were meaningful….
The most honest person I’ve met recently was a VP at a tech company who told me: “I manage a team of twelve people who create documents for other teams who create documents for senior leadership who don’t read documents. I make £150k a year. It’s completely absurd, and I’m riding it as long as I can while building something real on the side.”
Leaving aside the ritualistic workplace theatre that consumes so much time and energy in corporate life, there is a deeper issue here that AI-enabled change has the potential to address. Many organisational structures are neither human nor machine, but a kind of inefficient and wasteful Heath Robinson contraption made of people.
To illustrate the point, John Cutler recently shared a typology of some suboptimal organisational archetypes that he has encountered in his conversations with managers and workers.
By transforming manual process work into automated digital services and agents, we can let machines be machines and people be people. We may need fewer people in some cases, but as we move away from the industrial-era 9-5 factory model of mass labour, new opportunities and options are emerging as viable alternatives.
And as Jurgen Appelo pointed out this week, AI agents probably need more micro-management than people anyway, so we can always take out our desire for control on them instead. But the worst-case scenario would be deploying agentic AI into existing command-and-control management structures and wrapping it around existing ways of working.
Platforming & Terraforming
The most obvious organisational architecture for the enterprise AI era has been around for a while now - the organisation as a platform. It is not a one-size-fits-all organisational model - it can support various shapes of teams, functions and ways of working on top - but it is a basic architecture that describes how to manage the core technical functions of an organisation.
The cascading hierarchy model was the universal blueprint for the late C19th and C20th, when the most reliable approach to scaling was specialisation, functional division and clear command and control. The platform organisation architecture is a design pattern for the current hi-tech era, which requires integration and lateral connection rather than vertically divided silos.
This all sounds like a technical challenge, but in reality it is also a leadership issue, as it requires more of a systems thinking approach to how we see the value chain and the structures that support it.
Agentic AI is closely related to platform engineering, but instead of just digital services and apps that are connected by APIs, we can now make these services smarter and more autonomous through agentic orchestration. This is also closely aligned with the architectural concept of micro-services, and could help address the problem of sprawl and complexity that this model sometimes suffers from.
As with micro-services, it makes sense to build simpler, more specialised agents and then combine them to achieve more complex goals. In enterprise AI, tightly scoped agents using Small Language Models (SLMs) seem like a better choice for most purposes than generalist LLMs - an approach that also has the benefit of being cheaper and less prone to hallucination.
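To make that composition pattern concrete, here is a minimal Python sketch - entirely hypothetical, with invented class and function names rather than anything from a real vendor SDK - of two narrowly scoped agents chained together by a simple orchestrating function:

```python
# A minimal sketch of the composition pattern described above. All names are
# hypothetical; each "agent" is a placeholder for a call to a small,
# tightly scoped model rather than one generalist LLM.

class InvoiceExtractor:
    """Narrow agent: pulls structured fields out of invoice text."""
    def run(self, document: str) -> dict:
        # In practice: a prompt to a small, fine-tuned SLM.
        return {"supplier": "ACME Ltd", "amount": 1200}

class PolicyChecker:
    """Narrow agent: validates extracted data against spend policy."""
    def run(self, invoice: dict) -> bool:
        # In practice: a rules engine or another small model.
        return invoice["amount"] <= 5000

def process_invoice(document: str) -> str:
    """Orchestration: chain specialists instead of asking one generalist."""
    invoice = InvoiceExtractor().run(document)
    approved = PolicyChecker().run(invoice)
    return "auto-approved" if approved else "escalated to a human reviewer"

print(process_invoice("Invoice: ACME Ltd, £1,200, 30-day terms"))
```

The point is structural: each specialist stays small, testable and swappable, and the complexity lives in the orchestration rather than in any single model.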
Sundeep Teki recently shared a deep dive into the suitability of SLMs for agentic AI that argues:
The evidence is overwhelming and the logic is undeniable: the future of agentic AI is not monolithic but modular, not centralized but distributed, and not defined by brute-force scale but by intelligent specialization. The shift from LLM-centric to SLM-first architectures is not a matter of mere preference but an inevitable evolution driven by the powerful, convergent forces of economic necessity, operational pragmatism, and demonstrated technical capability.
The current paradigm, with its massive infrastructure costs and operational inefficiencies, is a relic of the industry's initial exploration phase. The maturation of the AI field demands a move from a research-driven focus on raw capability to an engineering-driven focus on delivering value efficiently, reliably, and sustainably.
In the enterprise software market, some existing vendors will try to supply the whole thing, such as Salesforce with its Agentforce platform. Capgemini’s Salesforce architect team shared a good overview of its structure and capabilities here, also covering the coordination role of the Salesforce Atlas reasoning engine:
The Atlas Reasoning Engine represents the cognitive core of Agentforce 3.0, orchestrating a sophisticated multi-stage processing pipeline that transforms natural language inputs into autonomous business actions. The engine’s autonomous planning capabilities decompose complex tasks into executable steps with dependency analysis, while continuous contextual learning refines decision-making over time.
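To illustrate the general pattern that description gestures at - decomposing a goal into steps and executing them in dependency order - here is a toy Python sketch. To be clear, this is not Salesforce’s implementation; the plan and step names are invented:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical illustration of plan decomposition with dependency analysis:
# a goal becomes discrete steps, each declaring its prerequisites, and the
# orchestrator executes them in a valid order.

plan = {
    "fetch_customer_record": set(),
    "check_entitlements": {"fetch_customer_record"},
    "draft_response": {"fetch_customer_record", "check_entitlements"},
    "send_response": {"draft_response"},
}

for step in TopologicalSorter(plan).static_order():
    print(f"executing: {step}")  # each step would invoke an agent or service
```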
But many companies will opt to avoid lock-in by building and owning their own core platform as a strategic business asset, and there are many software products, systems and services out there to help them do that. Red Hat’s OpenShift team recently shared a piece on how platform engineering can help make AI development and deployment repeatable, governed, and accessible across the enterprise:
AI in the enterprise is not just about building smarter models; it is about creating smarter platforms. Platform engineering principles (such as standardization, self-service, GitOps-driven automation, trusted supply chains, and flexible data platforms) are what transform isolated AI experiments into reliable, production-ready systems.
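The GitOps principle mentioned there becomes clearer with a toy example: desired state is declared (in real systems, as version-controlled manifests), and a reconciliation loop converges the running platform towards it. The services and version numbers below are invented for illustration:

```python
# Toy sketch of the GitOps idea: declare desired state, observe actual state,
# and compute the actions needed to converge the two. All names are invented.

desired = {  # what the version-controlled repository says should exist
    "model-gateway": {"replicas": 3, "version": "1.4.2"},
    "vector-store": {"replicas": 2, "version": "0.9.1"},
}

actual = {  # what the platform reports is currently running
    "model-gateway": {"replicas": 3, "version": "1.4.1"},
    "vector-store": {"replicas": 1, "version": "0.9.1"},
}

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Return the actions needed to converge actual state on desired state."""
    actions = []
    for name, spec in desired.items():
        running = actual.get(name)
        if running != spec:
            actions.append(f"apply {name}: {running} -> {spec}")
    return actions

for action in reconcile(desired, actual):
    print(action)
```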
Of course, before we can automate processes or turn them into agents, we first need to understand and map them. Constellation Research have noticed that more enterprise software firms are beefing up their process mining capabilities as agentic AI adoption increases:
Enterprise software vendors appear to be coalescing around the idea that process mining is an enabler for agentic AI and should be built into platforms….
The ability to identify inefficiencies via process and task mining is critical because those insights "are then food for our AI agents" to automate with streamlined processes, said Panchbhai. You don’t have to sell me on this process mining meets agentic AI notion. Simply put, I thought agentic AI would just scale process disaster without some kind of optimization in the background.
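For a sense of what the simplest form of process mining actually computes, here is a toy Python sketch that builds a ‘directly-follows’ graph from an invented event log. The rework loop in case c2 is exactly the kind of inefficiency that should surface before pointing an agent at the process:

```python
from collections import Counter

# Toy sketch: the simplest process-mining step builds a "directly-follows"
# graph from an event log, making common paths and costly detours visible.
# The event log below is invented for illustration.

event_log = [  # (case id, activity), ordered by timestamp within each case
    ("c1", "receive_order"), ("c1", "check_credit"), ("c1", "ship"),
    ("c2", "receive_order"), ("c2", "check_credit"),
    ("c2", "manual_review"), ("c2", "check_credit"), ("c2", "ship"),
]

# Group activities by case, preserving order.
traces: dict[str, list[str]] = {}
for case, activity in event_log:
    traces.setdefault(case, []).append(activity)

# Count how often each activity directly follows another, across all cases.
edges = Counter(
    (a, b) for trace in traces.values() for a, b in zip(trace, trace[1:])
)

for (a, b), count in edges.most_common():
    print(f"{a} -> {b}: {count}")  # the manual_review rework loop stands out
```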
But in addition to just mapping or mining existing processes, this is a great opportunity to re-think many of them to take advantage of the new affordances of AI tools.
Getting that right requires new leadership and management skills at every level: the ability to truly understand the value creation machine, how it works, and how it could be improved.
We have been creating and delivering some new executive education ideas and content to address this need and help leaders think like architects, developers and terraformers. We will try to share some of the ideas behind this in forthcoming editions.