Context Plumbing, Intent Sensing and an AI Reverse Uno on Social Media Feeds?
Enterprise AI looking a better bet than consumer AI, plus some links and ideas about how personal AI agents could help improve both of them
The Enterprise Strikes Back
Fears that the AI investment bubble could crash the US stock market have abated slightly, even though AI revenues still look unlikely to repay the vast sums being invested for some considerable time. But OpenAI is clearly in a vulnerable position, and Sam Altman has declared a code red in response to the threat posed by Google.
Ben Thompson frames the story in Star Wars terms, with OpenAI and Nvidia having reached the Empire Strikes Back stage of the hero’s journey now that Google has seemingly re-asserted its leading position across LLMs, AI apps and hardware. But whilst Nvidia is hoovering up money by selling chips, OpenAI is burning cash in the opposite direction, and will need to come up with something very special if it is to survive and thrive beyond this initial wave. Given Altman’s talk of superhuman persuasion, let’s hope their saving grace is not a consumer AI advertising arms race with Google.
Meanwhile, the case for enterprise AI being the route to real returns and wider economic and social benefit continues to grow. Nvidia’s Jensen Huang is continuing to build enterprise and industrial partnerships to open up new markets for their chips; he sees industrial use cases such as digital twins and product prototyping as bigger and more important opportunities than consumer chatbots.
Azeem Azhar began his roundup of the state of AI, three years on from the launch of ChatGPT, with a look at enterprise AI. He sees very positive adoption and ROI signals suggesting this field will continue to be where AI has the greatest impact. Even looking just at generative AI, rather than the more complicated world of agentic AI, which needs a degree of organisational transformation to fulfil its promise, he sees adoption strong enough to point to a J-curve of productivity impact:
The best example, though, is JP Morgan, whose boss Jamie Dimon said: “We have shown that for $2 billion of expense, we have about $2 billion of benefit.” This is exactly what we would expect from a productivity J‑curve. With any general‑purpose technology, a small set of early adopters captures gains first, while everyone else is reorienting their processes around the technology. Electricity and information technology followed that pattern; AI is no exception. The difference now is the speed at which the leading edge is moving.
Real vs Simulated Intelligence(s)
But models are just part of the puzzle in building smarter organisations, and we should not lose sight of the respective strengths of human and machine intelligence.
Neil Perkin has shared a good summary of a recent podcast by Dave Snowden about sense-making and the impact of AI, covering some of his observations about the differences between human and machine reasoning, cognition and insight. I also joined a longer webinar with Snowden and others interested in AI and complexity last week, where he made similarly useful points, so Neil’s notes saved me a job.
Understanding these fundamental differences enables us to collaborate much more effectively with AI engines. LLMs can look like they have a deep understanding of a question, but of course what they are really optimised for is identifying patterns and predicting the next most probable word in a sequence to mimic human-generated text. They are set up to minimise the difference from their training data, meaning that, by design, they trend towards the average and the most probable.
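To make that concrete, here is a toy sketch of next-token prediction - the candidate words and their scores are invented, nothing from any real model - showing why greedy decoding gravitates to the most probable continuation:

```python
import math

# Toy illustration (not a real model): an LLM assigns a probability to each
# candidate next token, then sampling or greedy decoding picks from that
# distribution. Training pushes these probabilities towards the patterns in
# the training data, which is why outputs trend towards the most probable.

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidate continuations of "The cat sat on the ..."
candidates = ["mat", "sofa", "roof", "quantum"]
logits = [4.0, 2.5, 1.5, -2.0]

probs = softmax(logits)
for token, p in sorted(zip(candidates, probs), key=lambda t: -t[1]):
    print(f"{token}: {p:.3f}")

# Greedy decoding always takes the argmax: the average, most probable answer.
print("greedy pick:", max(zip(probs, candidates))[1])
```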
Another important difference between LLMs and human reasoning is that language is not the same as intelligence - it is only one part of how people think and communicate their knowledge, as Benjamin Riley wrote for The Verge:
LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.
If we mistake large language models and their predictive abilities as intelligence, then we risk denuding our own creative and cognitive superpowers. But perhaps if we use these stochastic parrots in more creative ways, they could play a role in helping us improve our own thinking, rather than just outsourcing it. Advait Sarkar posed this question in a recent talk on behalf of Microsoft Research, and concluded that the idea has potential merit:
You can demonstrably reintroduce critical thinking into AI-assisted workflows. You can reverse the loss of creativity and enhance it instead. You can build powerful tools for memory that enable knowledge workers to read and write at speed, with greater intentionality, and remember it too. It turns out, with the right principles of design, you can build tools that are the best of both worlds: applying the awesome speed and flexibility of this technology to protect and enhance human thought.
It would be good to see some practical applications of this idea in our use of GenAI within organisations, and especially for leaders.
Deriving Context & Intent Needs Better Data
Another point Dave Snowden makes is that training data is ultimately more valuable and important than the individual models trained on it.
This raises questions of digital sovereignty for any organisation or state trying to use AI without becoming dependent on AI platform providers like OpenAI. What should you own? What can you buy or rent? What should you build?
If the current trajectory holds, it looks like open models will be commoditised and the real value will lie in data, world models and the apps and agents we build on top of the models.
But whilst we can train models on large historical datasets, the operational needs of context engineering mean that context data should ideally be recent, atomic and fluidly connected, so that it can be recombined and used in different ways.
Matt Webb is thinking about this from the point of view of discerning user intent, and he uses the term context plumbing to describe the complex task of integrating lots of different data feeds to create context in close to real-time. He goes on to get quite excited about the potential to derive seed training data from popular platforms and marketplaces, and then apply agentic AI coding loops to fulfil the opportunities identified in the data (at least I think that’s what he’s saying - see what you think).
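As a rough sketch of what that plumbing might involve - the feed names and fields here are hypothetical, purely illustrative - the core job is merging small, timestamped facts from several sources into one fresh context block:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Minimal sketch of "context plumbing": merge small, timestamped facts from
# several feeds into one recent, atomic context block for a model prompt.

@dataclass
class Fact:
    source: str          # e.g. "calendar", "crm", "chat" (hypothetical feeds)
    timestamp: datetime
    text: str            # one atomic statement, so it can be recombined freely

def build_context(facts, max_age=timedelta(hours=24), limit=20):
    """Keep only recent facts, newest first, capped to fit a prompt budget."""
    now = datetime.now(timezone.utc)
    fresh = [f for f in facts if now - f.timestamp <= max_age]
    fresh.sort(key=lambda f: f.timestamp, reverse=True)
    return "\n".join(f"[{f.source}] {f.text}" for f in fresh[:limit])

facts = [
    Fact("calendar", datetime.now(timezone.utc), "Meeting with ACME at 3pm"),
    Fact("crm", datetime.now(timezone.utc) - timedelta(hours=2),
         "ACME renewal is up in 30 days"),
    Fact("chat", datetime.now(timezone.utc) - timedelta(days=3),
         "Old thread about a past project"),  # too stale; filtered out
]
print(build_context(facts))
```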
It is worth reading these brain dumps alongside Séb Krier’s recent essay Coasian Bargaining at Scale, which postulates that personal agents (armed with your own context and intent) could do a better job of reducing transaction costs and other frictions in distributed negotiations than top-down approaches to navigating and balancing competing interests:
This is the essence of the work of Nobel laureate Ronald Coase, who argued that if bargaining were cheap and easy, a polluter and their neighbor could strike a private deal without any need for regulation. Of course sometimes some pollution would still happen, but the payoff to the neighbor would ensure that both parties are better off than the zero pollution or no-limits pollution counterfactuals. The tragedy is not the existence of the conflict, but the transaction costs that prevent these mutually beneficial deals from being discovered and executed. It’s also the lesson from Elinor Ostrom, who documented how real-world communities successfully govern shared resources like fisheries and forests through their own intricate local rules.
It is an interesting idea, and one that could help shape AI-enabled governance in the future.
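To make Coase’s logic concrete, here is a toy worked example with invented numbers, assuming the neighbour holds the right to clean air:

```python
# Toy numbers (invented) for Coase's bargain. Assume the neighbour holds the
# right to clean air, so the default is zero pollution and the factory idle.

polluter_gain = 100   # value the factory creates by operating
neighbor_harm = 60    # cost its pollution imposes on the neighbour

# A deal exists whenever the gain exceeds the harm: the factory pays the
# neighbour something between 60 and 100. Splitting the surplus evenly:
if polluter_gain > neighbor_harm:
    transfer = neighbor_harm + (polluter_gain - neighbor_harm) / 2   # = 80
    print(f"Factory runs and pays the neighbour {transfer:.0f}")
    print(f"Factory nets {polluter_gain - transfer:.0f}, "
          f"neighbour nets {transfer - neighbor_harm:.0f}")
else:
    print("No deal: the harm outweighs the gain, so zero pollution stands")

# Both end up better off than the zero-pollution default. Krier's point is
# that transaction costs usually stop such deals being found at all; personal
# agents could search for them cheaply and at scale.
```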
In the context of enterprise AI, we probably need to dig deeper into how we can derive, generate or synthesise training data specific to an organisation’s work, so that we can build world models and context rich enough to enable agentic AI operations - and perhaps even the kind of negotiated outcomes and compromises that Séb Krier has in mind.
This is not just a question of quantity; it is also about how we structure and organise that data. Microsoft are doing some work on the semantic layer that helps people and agents make sense of data, with what they are calling Microsoft IQ, which is intended to bring intelligent capabilities to Fabric, Microsoft 365 and Azure AI Search.
Another angle on harnessing data intelligently is to democratise access to it, so that more people can help shape it, and that is what Atlassian appear to be targeting with their acquisition of data cataloguing tool Secoda.
Could Agentic AI Play the Reverse Uno Card on Social Media?
Séb Krier’s piece is another reminder that personal agents are likely to emerge as solutions to many of the coordination challenges that led us down the perilous path of large-scale platforms and algorithmic sharing.
I am in Copenhagen right now at the pre-launch gathering of a bold project to rebuild Europe’s social platforms. It aims to build on the energy and creativity that we were all so excited about in the early 2000s, before Facebook and the big US platforms exploited our human need for connection to create ad-funded clickbait farms that have harmed our societies and democracies. Just today, the Guardian wrote about a growing movement of young people across Europe seeking to reclaim their lives from big tech platforms, and this trend looks set to grow.
Within the Matrix world of attention farming, we have seen the bad things that AI can do: algorithmic feeds, emotional manipulation, fake content, fake people, and so on. But what if it could also be part of re-humanising our connection with each other?
There is a whole (small) world out there of people sharing their passions in niche social networks and communities, subreddits, Discords or group chats. But the nature of scale-free networks and network effects means that WhatsApp, Facebook, Twitter, etc. are still the easiest option for many people and groups in Europe, just because that’s where their friends and families are to be found.
But what if we go back to some of those early social network ideas such as federation, interoperability and the intention economy to play a reverse Uno on algorithmic feeds? If everybody has their own discoverability and curation agent that pulls from multiple networks, communities and messaging platforms to create a personal social feed, then we don’t need to all be on the same platform. If I can tell my agent to keep me updated with all my interests and groups, from local news to hobbies and political debates, and handle the messy details of logging in and aggregating content, then perhaps we could help sustain the safer, more human-scale small world networks that are out there already under the radar. Ever the optimist!
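As a very rough sketch of what such a curation agent’s core loop might look like - the sources and scoring here are hypothetical placeholders, not any real platform’s API:

```python
# Rough sketch of a personal curation agent: pull from several networks and
# rank one combined feed by my stated interests, not an ad auction.

INTERESTS = {"local news": 1.0, "cycling": 0.8, "eu politics": 0.6}

def fetch_all(sources):
    """Each source adapter yields (network, topic, text) tuples."""
    for source in sources:
        yield from source()

def mastodon_feed():  # placeholder adapter, not a real client
    yield ("mastodon", "cycling", "Club ride this Sunday, all welcome")

def rss_feed():  # placeholder adapter, not a real client
    yield ("rss", "local news", "Council approves new bike lanes")

def score(item):
    _, topic, _ = item
    return INTERESTS.get(topic, 0.0)  # ranked by my interests alone

feed = sorted(fetch_all([mastodon_feed, rss_feed]), key=score, reverse=True)
for network, topic, text in feed:
    print(f"[{network}/{topic}] {text}")
```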