4 Comments

Just because we can doesn't mean we should.

What thought has been given to the application of AI platforms and their impact on the workforce? Will the increased "productivity" lead to a more equitable distribution of the benefits, or a further concentration of profits into even fewer hands (those that already control the capital)? Will we get a three- or four-day week on the same pay, given the speed at which we might now be able to deliver, perhaps even employing two people on three-day weeks (each on full pay)? Or, as I would suggest the evidence supports, will entry-level, junior and even mid-tier jobs disappear as companies become more "efficient", cut labour costs and keep the profits for the top tier of managers, partners and shareholders?

The ethics of these big shifts in our use of technology, as usual, trail a long way behind their application.

I think there is a case for hope: a levy on the use of these (and other) automation technologies that replace labour, to fund a universal basic income and an improved quality of life for the majority of people. BUT there is also a case for fear, as these technologies further disenfranchise, disempower and marginalise the have-nots in society. There is also a fear that these new technologies, AIs in particular, encode some of the personal, institutional and cultural inequalities of our time and, if anything, embed these into the substrata of their very being, entrenching the hegemony even further.

Personally, I see far more cost than benefit, given whose hands these technologies are in and the paucity of constraints or consideration applied to them.

Currently, I'm inclined to support the calls for a moratorium on their development and application, or, simply put: "burn them all with fire"!


Just because we can doesn't mean we should -> agree.

AI applied to workplace surveillance and treating people as machines = bad -> agree.

Ethics trail application -> agree, and this is potentially dangerous.

Levy on AI to fund UBI? -> maybe, or maybe there are other ways to fund a UBI that don't penalise tech. I think there is a general decoupling of work and income, and a civilised society should seek simple ways to keep everyone afloat. But the devil is in the details.

AI encoding bias and inequalities? -> agree this is a risk, especially for AIs trained on the last 30 years of the internet. My piece was more about AIs trained on more limited domains of work. Either way, algorithmic transparency would be nice.

Who will be disintermediated by AI? My guess is that the management class of generic, political apparatchiks who "run" the social structure of large orgs will be far more likely to be thinned out than customer service people, retail workers, makers, or even cleaners. I like to think the pandemic proved that the way we value jobs is wrong, and I hope that as automation proceeds, we will place a premium on personal service, artisanal and helper roles, just as we now all seek out free-range, small-scale food production rather than industrialised goo.

Burn it with fire? -> disagree, but only because I think both the downsides and the upsides have been oversold, and neither will be as bad or as good as we might imagine.

I genuinely believe there is a place in the world for lightweight AI and automation that serves people, does the boring stuff that many people are currently employed to do, and elevates their human potential (which I think is irreplaceable by machines). But in the context of orgs and the workplace, despite the potential, there are many vested interests, and many bonuses that depend on the old way of working continuing for one more quarter.


Very insightful as always. But still, the open question for me remains: how do we evolve towards the "collaborative enterprise" while the markets favor the efficiency that AI brings to the workplace by reassembling old ideas? And to be fair, AI also helps to bring new insights and new creative work to our world if used in an intelligent way; but used as in the example above, to generate "plausible b**" work, it will IMHO rather support the existing hierarchy and bureaucracy, as it plays by their rules.


Thanks Björn. Appreciate the feedback. And yes, even after all the years you have known me, I am still trying to believe that senior managers will eventually choose to organise their firms for the good of shareholders, employees and the future, rather than for themselves ;-)

I think you are right that in the near future we will see old things done more efficiently with lightweight automation and AI, and perhaps we will never go any further, just as mainframes and computing wrapped themselves around C19th management hierarchies rather than doing things differently. But I am hopeful we will eventually see something like the Digitally-Enabled Directed Autonomy (DEDA) model that some Chinese scale-ups are already making good use of, perhaps with a less internally competitive culture (at least in Europe).

Markets favour returns, both short- and long-term, so the proof needs to be in the results. But anti-competitive markets can stay irrational longer than even I can talk about this stuff. German industry, and automotive in particular, has suffered badly over the past decade due to a terrible senior leadership culture and decision-making. Many US banks are currently going through something similar. The question is when these failures of management and structure become so dangerous that they imperil whole markets or economies. If I had an activist hedge fund, I would be placing very large bets against such companies, or buying them to restructure in a better way... but here I am on Substack LOL ;-)
