AI Learning & Adoption Goes Further and Faster in Co-Op Mode
How can shared practice, not individual use, help accelerate learning and sustainable adoption of AI?
Most workplace AI initiatives begin with Generative AI used in single-player mode, with employees trying out Copilot, consulting ChatGPT, or running quick experiments on their own.
That is a great place to start, but there are good reasons to consider using AI tools collectively, in co-operative mode. Groups working together can help shape how AI is adopted, used, governed, and improved, and will create better learning outcomes overall.
When people shift into co-op mode, new possibilities open up: richer feedback, faster learning, clearer rules, and domain-specific oversight. Each builds on the same principle: AI adoption becomes stronger, safer, and more sustainable when it is a shared practice rather than an individual experiment.
This edition explores what AI co-op mode looks like in practice, and how organisations can build it into their everyday ways of working.
The Problem with Individualised AI Adoption
Most of today’s workplace AI experiments are framed around the individual. Employees are invited to “try Copilot,” “ask ChatGPT,” or “see what the tool can do for you.”
Anu Atluru recently (and rather eloquently) described how she sees technology leverage being inverted by AI, from scaling the collective through network effects to scaling the individual through AI augmentation:
Now, the individual can rival the collective. The single-player ceiling has shattered. Scaling yourself is now the most accessible, fastest-growing source of leverage …
At the logical extreme of the great leverage inversion, the boundary between individual and collective erodes. The self becomes a distributed system: one consciousness orchestrating many capabilities. The individual rivals the collective by becoming the collective. The self is the platform.
This is an empowering idea, and we will undoubtedly see high-agency individuals and small teams achieving more with AI than whole departments or functions were able to achieve previously. But it also risks further fragmentation of our collective structures, which are still very important when it comes to business, society and governance.
Whilst single-player mode for AI exploration, learning and adoption has real value - it builds familiarity, lowers barriers, and sparks curiosity - it is not the only way to pursue AI adoption, and we can probably get further, faster, together.
Individualised use creates private, unshared learning. Each person’s prompts and discoveries remain locked inside their own workflow, invisible to colleagues. The only actors who see the wider patterns are the vendors, whose models learn from aggregated usage across millions of users. Inside the organisation, there is no equivalent mechanism to surface insights, refine practices, or align AI use with collective goals.
This produces two risks. First, the risk of superficial adoption: novelty without depth, where AI is seen as a series of clever tricks rather than a capability worth investing in. Second, the risk of dependency: without shared practices, organisations remain reliant on outside providers to define how AI evolves, forfeiting the chance to shape it in ways that fit their culture and purpose.
What’s missing is the group mode of working with AI, the place where individual experimentation turns into shared practice, and where shared practice can be governed in line with organisational values.
The Missing Co-op Mode
Whilst individual adoption can lead to fragmented learning, co-op mode provides the missing layer that turns AI use into something collectively useful and accountable.
Co-op mode can take many forms - Communities of Practice, guilds, or guide networks - but the essence is the same: AI can be shaped as a social technology.
Within these groups, responsibility shifts from “what do I do with this tool?” to “how do we want to use it together?” And that shift makes all the difference. Instead of every interaction being a private experiment, communities can:
Surface patterns of how AI is being used, good and bad, and build feedback loops where stories of success and failure inform the next round of experiments.
Debate and refine rules that express organisational values in practice.
Take collective responsibility for improving bots and agents, rather than waiting for vendors to update their models.
Seen this way, group mode is not just about adoption. It is also a form of governance, able to adjust norms as contexts shift and contradictions appear. Just as open-source projects thrive because communities maintain and improve the code, AI in organisations will thrive when groups maintain and improve the norms that shape its use.
And crucially, a group-centred approach creates a form of AI literacy that is sustainable. Instead of relying on one-off training sessions or compliance modules, literacy emerges from continuous dialogue, shared experience, and collective refinement. This makes AI literacy less about mastering a tool once, and more about participating in an evolving practice that stays relevant as the technology, and the organisation, changes.
To make it concrete, let’s look at four use cases where groups working together could unlock deeper value from AI than individuals ever could:
Group-based reinforcement learning from human feedback
Action-oriented shared learning
Developing rules as code and social governance
Oversight of knowledge domains by communities of practice

Use Case 1: Group-Based Reinforcement Learning from Human Feedback
Most commercial AI models are tuned through reinforcement learning from human feedback (RLHF). But in practice, this usually means thousands of low-paid workers providing binary feedback - “thumbs up, thumbs down” - on model outputs. It creates scale without depth.
Inside organisations, group mode offers a more powerful alternative. Groups of people who understand their domain, whether that’s product design, HR or compliance, can provide richer, contextual feedback that goes beyond simple yes/no responses.
For example, instead of asking “is this answer correct?”, a team could:
Discuss whether an output aligns with customer-first values.
Add commentary about unintended consequences the model missed.
Suggest refinements or additional sources of content or context that make outputs more practical or trustworthy in their workflow.
This is reinforcement learning with context: feedback loops shaped by shared purpose and expertise. Over time, the signal these groups provide can improve both the model’s performance and how it fits with the organisation’s norms and goals. And as organisations start to develop and use small language models trained on specific areas of knowledge, expert oversight of their outputs becomes even more important, because errors may be harder for a generalist to spot.
Where outsourcing turns feedback into piecework, group mode turns it into collective sense-making.
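To make the contrast with thumbs-up/thumbs-down concrete, here is a minimal sketch of what a group’s contextual feedback record might look like. The schema, field names and the summarise_reviews helper are purely illustrative assumptions, not any vendor’s RLHF pipeline; the point is simply that feedback capturing values alignment, unintended consequences and suggested context can be aggregated into something a model team (or a community) can act on.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class GroupReview:
    """One team member's contextual review of an AI output (illustrative schema)."""
    output_id: str                                  # which AI output is being reviewed
    reviewer: str                                   # who in the community gave the feedback
    values_alignment: dict[str, bool]               # e.g. {"customer-first": True, "radical transparency": False}
    unintended_consequences: list[str] = field(default_factory=list)
    suggested_context: list[str] = field(default_factory=list)  # extra sources that would make the output more trustworthy

def summarise_reviews(reviews: list[GroupReview]) -> dict:
    """Aggregate a community's reviews into a signal richer than a binary rating."""
    missed_values = Counter()
    consequences: list[str] = []
    for review in reviews:
        for value, aligned in review.values_alignment.items():
            if not aligned:
                missed_values[value] += 1
        consequences.extend(review.unintended_consequences)
    return {
        "reviews": len(reviews),
        "values_most_often_missed": missed_values.most_common(3),
        "unintended_consequences": consequences,
    }

# Hypothetical usage: two colleagues review the same draft output.
reviews = [
    GroupReview("draft-42", "Priya", {"customer-first": False},
                ["promises a refund policy we do not actually offer"]),
    GroupReview("draft-42", "Sam", {"customer-first": True, "radical transparency": False}),
]
print(summarise_reviews(reviews))
```

Even a toy aggregation like this shows the difference in kind: the group is not just scoring outputs, it is explaining why they fall short and what would fix them.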
Use Case 2: Action-Oriented Shared Learning
Working with AI is still a new skill, and like any skill it develops faster when we learn together. Co-op mode makes AI adoption a social learning process rather than a series of isolated experiments.
We have written previously about the kind of symbiotic learning that is possible with AI tools - us teaching them and them teaching us - and this works even better in co-op mode.
One practical approach is to establish lightweight group feedback loops around AI use:
Capture stories of practice
Each time an AI tool or agent is used in meaningful work, the experience is shared - good, bad, or ambiguous. This might be a short note on an internal channel, a tagged example in a shared library, or a quick reflection in a community meeting.
Review against the rules
Community members check the output against the rulebook or norm statements they have created. Did the AI support customer-first? Did it act in line with radical transparency? Were there unintended consequences?
Spot contradictions and patterns
Over time, these reviews surface where rules are inconsistent, where agents are drifting, and where new norms are needed. This gives the organisation early warning signals that wouldn’t appear if use remained individualised.
Refine and redistribute
Communities propose updates to the rulebook, and those changes are shared back across the network. In this way, norms evolve dynamically, and literacy develops alongside usage.
This isn’t about building heavy governance structures; it’s about making collective reflection part of the everyday rhythm of work. Over time, these loops create living libraries of prompts, practices and lessons that evolve with both the technology and the organisation.
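As a thought experiment, the loop above could start with something as small as a shared log of practice stories checked against the community’s rulebook. The record, rulebook and early_warning helper below are hypothetical sketches (the norm names simply echo the examples used earlier), not a prescribed tool:

```python
from dataclasses import dataclass, field
from datetime import date

# A toy rulebook: norm name -> plain-language statement the community has agreed.
RULEBOOK = {
    "customer-first": "Outputs should prioritise customer outcomes over internal convenience.",
    "radical transparency": "AI involvement in a piece of work is always disclosed.",
}

@dataclass
class PracticeStory:
    """A short record of one meaningful use of an AI tool, shared with the community."""
    when: date
    tool: str
    summary: str
    outcome: str                                    # "good", "bad" or "ambiguous"
    norms_broken: list[str] = field(default_factory=list)

def early_warning(stories: list[PracticeStory]) -> dict[str, int]:
    """Count how often each norm in the rulebook was broken across shared stories."""
    counts = {norm: 0 for norm in RULEBOOK}
    for story in stories:
        for norm in story.norms_broken:
            if norm in counts:
                counts[norm] += 1
    return counts

# Hypothetical usage: two stories captured during a normal week of work.
stories = [
    PracticeStory(date(2025, 3, 3), "drafting agent", "Customer email reply", "ambiguous",
                  norms_broken=["radical transparency"]),
    PracticeStory(date(2025, 3, 5), "drafting agent", "Policy summary", "good"),
]
print(early_warning(stories))   # {'customer-first': 0, 'radical transparency': 1}
```

Even a summary this simple makes drift and contradictions visible in aggregate, which is exactly the early-warning signal that never appears when use stays individualised.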
Read on for additional use cases and thoughts on what this means for scaling AI adoption in the enterprise.