CHROs as Systems Architects, Not Programme Owners
Programmes, people and performance - why AI is exposing what HR was never designed to do
Enterprise AI-led transformation is reshaping most leadership roles to some degree, but few are likely to be more affected than the HR function, and the CHRO role in particular.
CHROs are being asked to lead AI transformation by ensuring workforce readiness, embedding new tools into everyday workflows, and holding the culture together through a period of profound disruption. It is a significant mandate, but it sits uncomfortably alongside a question that has been building for years and that AI has now made urgent: what exactly is HR for?
The old answer was that HR owns the programmes: talent acquisition, compensation, learning and development, each with its designated custodian. But that no longer seems strategic enough. Organising around programme ownership rather than whole-system performance has left a critical gap, one that AI adoption and change efforts are now falling into.
Some commentators argue that the existing mandate and role focus can be expanded to cope with the manifold challenges posed by AI transformation.
Oracle’s Yvette Cameron recently wrote for Diginomica that HR can step up in its current form to play a leading role in AI adoption through workforce readiness, personalised workflow integration, and a supportive culture, and shared the following starting points:
Build trust through transparent AI
Embed AI in everyday workflows
Scale AI pilots with human insight
Follow a practical roadmap for HR leadership
These are necessary but not sufficient. They assume that the existing model of HR, organised around programmes and interventions, can stretch to accommodate AI. Increasingly, that assumption looks fragile.
Others think it is time for a reset and a refocus within HR leadership.
TalentSherpa recently shared an article about the need for a new Human Operating System, arguing that the CHRO should evolve into a human systems architect with accountability for outcomes, or risk sliding down the strategic food chain.
The old model organized HR around program ownership. Someone owns talent acquisition. Someone owns compensation. Someone owns learning and development. These roles are defined by the programs they run, not the outcomes those programs produce.
The new model organises around system performance.
At its core, this is a shift in responsibility. Not just from programmes to systems, but from delivery to stewardship.
If leadership in the age of AI is increasingly about world-building (shaping the conditions, constraints, and environments in which people and machines operate) then the CHRO’s role becomes one of enabling and stewarding that world.
Not just owning outcomes directly, but ensuring the system in which those outcomes emerge is coherent, legible, and sustainable.
Missing Infrastructure to Support Human Performance
When you look closely at what determines whether AI actually takes hold in an organisation, the same missing layer keeps appearing. It isn't the technology or even the strategy that holds back progress, but the human infrastructure underneath: the conditions and incentives that encourage people to learn, adapt, make sense of change, and keep performing well through it.
In practice, this missing infrastructure shows up in identifiable ways:
Teams using AI differently with no shared standards of judgment
Little visibility into how decisions are being made with AI support
No mechanisms for comparing, learning from, or improving those decisions over time
As Ashley Goodall points out, we know a great deal about the ingredients of human performance, yet most of that knowledge never reaches management or HR practice, because human performance is not treated as an HR problem:
The opportunity is clear: we know an awful lot about the ingredients of human performance—but much of what we know doesn’t make it into our management or HR practices, with the result that too many workplaces actually impair human productivity. This is in large part because human performance isn’t considered an HR problem, and so the accumulated knowledge of how to help people do their best work has no place to land inside a typical organization.
In other words, we already understand a great deal about human performance, but we have not designed our organisations to operationalise that knowledge. That gap matters enormously in an AI context, because adopting AI is a continuous, social, often disorienting process of figuring out what your work means now, what you’re responsible for, and what good judgment looks like when a machine is doing some of the labour.
We are seeing too many policies that mandate the use of AI without providing the learning, guidance and context people need to make sense of how it connects with, and ideally enhances, their existing work.
AI adoption must also engage with the cognitive impact - positive and negative - of using AI for more and more of our work. In HBR this month, Guy Champniss lists six areas of psychological cost relating to AI usage, and points to existing knowledge and practices in behavioural science that could help mitigate them if applied:
Cognitive Debt
Autonomy Debt
Competency Debt
Relatedness Debt
Credibility Debt
Professional Identity Debt
It is impossible to predict at this point just how AI will transform the workplace. One thing, however, is certain: understanding and building the right human infrastructure will be as important as picking the right AI tools.
What Happened to Social Learning?
Another area of missing HR infrastructure relates to social learning and collaboration. We did a lot of work in the early 2000s on social learning as a way to accelerate digital collaboration skills within organisations, and it was a very effective area of intervention. But most companies seem to have let this learning slide, reverting to their old ways as the economy tightened after 2008.
More recently, we have written a lot about the relationship between learning, HR, and change, and how AI could transform each of them. Learning should not be a separate activity that happens outside the flow of work; it needs to be integrated with knowledge development and change if it is to have a direct impact on human performance.
As Jane Bozarth put it for the Association for Talent Development, sense-making and meaning need to come before action, including in relation to AI adoption.
This is why work improves when learning happens socially. When colleagues talk through cases, narrate decisions, and share lessons learned, they begin to see patterns sooner. They recognize implications earlier and, ultimately, make better judgments.
She lists some of the elements that make up a social learning infrastructure:
Networks that connect people across boundaries
Communities of Practice that sustain professional dialogue
Habits of working out loud that make thinking visible
Cultural signals that curiosity and sharing are valued rather than risky
These capabilities don’t sit neatly inside any single HR programme, so often nobody owns them, which is precisely why they don’t get built.
The influential learning consultant Josh Bersin goes further, arguing that our entire approach, philosophies, tech stack, and operating models for learning are out of date.
His latest research report into corporate learning found that 74% of companies report they are not keeping up with demand for new skills. The traditional response of more training, better content, or a refreshed LMS won’t close that gap. The problem isn’t a shortage of courses but the absence of a dynamic knowledge infrastructure, where information flows across boundaries, people can explore and question in real time, and learning is woven into the flow of work rather than scheduled around it.
Our skills challenge at work is not one of “learning” or “training.” Rather it’s a problem of dynamically sharing information, enabling people to explore, question, and apply new ideas. The traditional pedagogical paradigm of “training” is holding us back.
From Programme Owner to System Architect
This is where the CHRO’s role needs to evolve.
The shift is from owning programmes to architecting system performance, which means taking accountability not for whether the learning platform has good content, but for whether the organisation actually gets better at what it does. In that sense, the CHRO becomes a key steward of the organisation’s operating environment, the human layer of the world leaders are now being asked to build.
Having an AI adoption programme and dutifully tracking its roll-out status is less important than ensuring people are genuinely equipped (psychologically, practically, socially) to work well alongside AI over time.
That is a bigger job that requires a different relationship with data, with line leadership, and with the outcomes the business cares about. It means HR stops being the function that runs things and becomes the function that understands, designs, and continually improves the conditions under which people perform.
Given its people mandate and the CHRO’s seat at the top table, HR is uniquely positioned to bring structure to the balance between AI innovation, culture, and governance. But only if it is willing to claim that territory with genuine authority, rather than waiting to be handed it.
The CHROs who will matter most in the next five years won’t be the ones who ran the best AI programmes.
They will be the ones who recognised that AI does not fail at the level of tools or training, but at the level of systems, and who took responsibility for building the human infrastructure those systems depend on.
HR will not lose relevance because of AI. It will lose relevance if it continues to organise around programmes in a world that now runs on systems.