Standing on the Shoulders of Giants
Lee considers the debate around AGI, AI as a platform and the immense potential of modular operations + AI to improve the way we build and manage organisations
Making Brains vs Connecting Brains
My co-founder Livio and I used to share an apartment in London in the 1990s that was close to a dreadful nightclub, so we were sometimes inadvertent witnesses to messy, drunken procreation activities in the alleyway next to our house. The creation of a new, individually distinct human brain, soul and will is indeed a miracle of evolution and bio-chemistry, but it is also remarkably easy and cheap to achieve.
Yet the AI race seems obsessed with spending vast amounts of monetary and computational resources to create fake sentience, fake will and fake AGI, rather than do all those less sexy things that machines do better than us. And despite being nowhere near achieving this, and arguably stuck in a technological local maximum, the ambition is already producing understandable fears about AI taking over, enslaving or even destroying humanity, as articulated in Ian Hogarth’s much-discussed essay in the Financial Times:
“The “Shoggoth” meme illustrates the unknown that lies behind the sanitised public face of AI. It depicts one of HP Lovecraft’s tentacled monsters with a friendly little smiley face tacked on. The mask — what the public interacts with when it interacts with, say, ChatGPT — appears “aligned”. But what lies behind it is still something we can’t fully comprehend.”
Some commentators, like Jaron Lanier writing in the New Yorker, argue that “real” AGI is a myth, and that what we currently have is a kind of sparkling machine learning that, because its training data is the internet, is better categorised as a form of social collaboration:
“A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it’s people who have written the text and furnished the images. The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.”
We have no shortage whatsoever of human intelligence. The challenge is how to connect it, use it and augment it to best effect. Pursuing the AGI pipe dream risks diverting attention from the manifold practical applications of AI/ML that could, in the short term, support people in their work, life or creative expression.
AI as a Platform, Not a Person
Azeem Azhar is a thinker I respect in the emerging technology space, and he is very bullish about the short- and long-term impact of AI as a new platform for innovation and growth. Based on a recent series of conversations in Silicon Valley, he describes the current breakthrough moment as being just as impactful as the rise of the web around 1996:
“There's a confidence there that has been missing for years. As mobile penetration peaked, phones became jammed with apps, and ennui enveloped social networks, Silicon Valley hunted for the next big thing. Many thought it might be blockchain. Mark Zuckerberg thought it would be metaverse.
We know now it will be AI. And Silicon Valley is never more vibrant than when there is a new platform.”
One thing we have learned since 1996 about the power of platforms to enable new kinds of value creation is the importance of modularity and building blocks. If we want AI to be able to orchestrate and coordinate processes, workflows, data and algorithms, then we should try to make them as modular and self-contained as possible.
Another Exponential View colleague, Chantal Smith, recently published a two-part briefing on modularity (the second part might still be paywalled) that touches on why this is so important and how it provides the building blocks that smart technologies can combine into powerful outcomes:
“Modularity is the degree to which a system or entity is broken down into smaller, individual components that can be (1) replaced or modified without affecting the rest of the system and (2) combined in different ways using a connector.”
The simple beauty of modularity is quite something to behold, from the impact of standardised container shipping to the increasing power of software engineering, and I think it will play an important role in early, practical uses of AI as well.
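To make that definition concrete, here is a minimal sketch in Python (my own illustration, not taken from the briefing) of the two properties Smith describes: components that can be swapped without touching the rest of the system, and a shared connector through which they are combined. All the names here (Step, Pipeline and the example steps) are invented for the purpose.

```python
# A minimal sketch of modularity's two properties:
# (1) components can be replaced without affecting the rest of the system,
# (2) components are combined through a shared connector.

from typing import Protocol


class Step(Protocol):
    """The 'connector': anything exposing run() can be plugged in."""

    def run(self, data: str) -> str: ...


class Uppercase:
    def run(self, data: str) -> str:
        return data.upper()


class Redact:
    def run(self, data: str) -> str:
        return data.replace("secret", "[redacted]")


class Pipeline:
    """Combines interchangeable steps; swapping one never breaks the others."""

    def __init__(self, steps: list[Step]) -> None:
        self.steps = steps

    def run(self, data: str) -> str:
        for step in self.steps:
            data = step.run(data)
        return data


print(Pipeline([Redact(), Uppercase()]).run("our secret plan"))
# -> OUR [REDACTED] PLAN
```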
Standing on the Shoulders of Giants
There is no greater barrier to evolutionary improvement in an organisation than addiction to the apparently predictable system of top-down hierarchical management - what we might call the ‘intelligent design’ approach.
Embracing the uncertainty and competition of evolution, emergence and ecosystem approaches demands excellence and a genuine willingness to harness the power of co-opetition inside and outside the organisation, which can be scary, but the pay-off is potentially huge.
Think about the evolution of coding for a moment: from stacks of punch cards to assembly language, C and the higher-level languages that can manifest complex routines from a single command. We take it for granted today that we can instantly invoke and install huge amounts of prior art with a few keystrokes. And what’s more, each of the thousands of individual libraries or routines is still evolving at the component level as other people refine or improve it, and they are being composed and re-composed to create higher-order systems thanks to their inherent modularity.
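To see just how much prior art a few keystrokes can invoke, consider a couple of lines of present-day Python: one install command and two statements compose an entire, still-evolving data-analysis library into our own work (the file and column names below are hypothetical, purely for illustration).

```python
# `pip install pandas` pulls in years of other people's work; two more lines
# compose that library into an analysis of our own.

import pandas as pd

df = pd.read_csv("sales.csv")                 # hypothetical local file
print(df.groupby("region")["revenue"].sum())  # total revenue per region
```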
This composability is similar to the way human language and culture evolved from simple sounds to words that represent complex objects, all the way to the novel, where a 100-byte sentence about a summer’s day can evoke gigabytes of information because we all understand what the signifier refers to.
Today, we are getting close to being able to write simple AI prompts in plain human-readable language that produce complex custom code to bring to life whatever we can imagine. Soon, inside our organisations, we will be able to write simple process, workflow or even organisational capability recipes, and smart software platforms will make them real. The sheer leverage this gives us, and the speed with which novel, innovative code becomes commoditised and componentised in ways that we can invoke from a command prompt or an AI input box, is truly astonishing.
Ethan Mollick this morning shared some of his experiments with ChatGPT / Code Interpreter and Microsoft Copilot, and these provide a useful insight into how quickly all office workers will be able to automate some of the basic ‘objects’ of their work. But that really should not be the goal here. Notwithstanding the fact that we shouldn’t be creating Word documents and PPTs to communicate with sentient beings in 2023, the real power here will be using a natural language command line to instantiate a new project, a department or even an entire product line, with AI-automators duplicating the modular building blocks of workflows, processes, systems and datasets in the background to create the structures needed to fulfil our request.
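As a thought experiment only, a ‘work recipe’ of this kind might look something like the following sketch; every name in it (Block, Venture, instantiate and the recipe contents) is hypothetical, and no real platform or API is implied. A plain request names a recipe, and an automation layer clones its modular building blocks to stand up the new venture.

```python
# Speculative sketch: a reusable 'recipe' of modular building blocks that an
# AI-automation layer could duplicate to stand up a new venture or department.
# Everything here is hypothetical and for illustration only.

from dataclasses import dataclass, field


@dataclass
class Block:
    kind: str   # e.g. "workflow", "dataset", "channel"
    name: str


@dataclass
class Venture:
    name: str
    blocks: list[Block] = field(default_factory=list)


# The modular blocks a new product team might need.
PRODUCT_TEAM_RECIPE = [
    Block("workflow", "discovery-interviews"),
    Block("dataset", "customer-feedback"),
    Block("channel", "team-updates"),
]


def instantiate(name: str, recipe: list[Block]) -> Venture:
    """Duplicate the recipe's building blocks under the new venture's name."""
    return Venture(name, [Block(b.kind, f"{name}/{b.name}") for b in recipe])


venture = instantiate("smart-widgets", PRODUCT_TEAM_RECIPE)
for block in venture.blocks:
    print(f"{block.kind}: {block.name}")
# workflow: smart-widgets/discovery-interviews
# dataset: smart-widgets/customer-feedback
# channel: smart-widgets/team-updates
```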
Many firms that think they are doing ‘OK’ will continue using humans to manually organise process work, stick with tweaking legacy Excel macros and use layers of generic management to hold their creaky ship together. But others will be spinning up new ventures, departments and work recipes from the command prompt, without needing to worry too much about how the sausage is made.
But in the short term, this requires us to do the hard work of organisational improvement and re-connecting our divided structures to make any of this possible. We need to leave behind nonsensical individual KPIs and create the conditions for a collaborative enterprise to emerge from within the pretend predictability of hierarchy and bureaucracy. A lot of this work will really challenge existing structures, incentives and internal empires: the urgent need for data integration, for example, to train AIs and enable internal platforms to exploit the huge untapped value that exists within firms today, will impact a lot of comfortable internal roles and fiefdoms.
Technological evolution - a powerful sub-set of human evolution - will, I hope, also have a profound effect on accelerating organisational evolution after about a century of stasis.
Just because we can, doesn't mean we should.
What has been considered about the application and impact of AI platforms on the workforce? Will the increased "productivity" result in a more equitable distribution of the benefits, or a further concentration of these profits into even fewer hands (those that already control the capital)? Will we get a 3- or 4-day week on the same pay, given the speed at which we might now be able to deliver, perhaps even employing two people on a 3-day week each at full pay? Or, as I would suggest the evidence supports... will the entry-level, junior and even mid-tier jobs disappear as companies become more "efficient", downsize their labour costs and keep the profits for the top tier of managers, partners and shareholders?
The ethics of these big shifts in our use of technology, as usual, trail a long way behind their application.
I think there is a case for hope: a levy on the use of these (and other) automation technologies that replace labour could provide a universal basic income and an increased quality of life for the majority of people. BUT there is also a case for fear, as these technologies further disenfranchise, disempower and marginalise the have-nots in society. There is also a fear that these new technologies, AIs in particular, are programmed with some of the personal, institutional and cultural inequalities of our time and, if anything, embed these into the substrata of their very being, entrenching the hegemony even further.
Personally, I see far more cost than benefit because of the hands that these technologies are in and because of the paucity of constraints or considerations applied to them.
Currently, I'm inclined to support the calls for a moratorium on their development and application or, simply put, to "burn them all with fire"!
Very insightful as always. But the open question for me remains: how do we evolve towards the "collaborative enterprise" while the markets favor the efficiency that AI brings to the workplace by reassembling old ideas? And to be fair, AI also helps to bring new insights and new creative work to our world if used in an intelligent way, but used as in the example above to generate "plausible b**" work, it will IMHO rather support the existing hierarchy and bureaucracy, as it plays by its rules.