disorientation and exploration

“We become what we behold. We shape our tools and then our tools shape us.” —Father John Culkin (1967), A Schoolman’s Guide to Marshall McLuhan

Disorientation and exploration are essential for human learning. By using generative AI (GPT/LLM), are we bypassing these two stages of learning in search of efficiency and robotic productivity?

“John Nosta, founder of the NostaLab think tank, says AI trains humans to think backward by providing answers before they understand.” — link via Archive.Today

Read more

better than good enough

In 2012 Ross Dawson observed that “in a connected world, unless your skills are world-class, you are a commodity”. Fast forward to the dawn of 2026:

Here’s what AI did. It drove the cost of nearly every signal to zero. Resumes used to cost time and thought. Now they cost a prompt. Cover letters used to reveal how someone thinks. Now they reveal which model they used. But companies did the same thing. They replaced judgment with AI screeners. Now you have two AIs talking to each other. One generating signals. One evaluating them. Neither connected to anything real. —David Arnoux

Read more

learning as rebellion

Is human learning now an act of rebellion?

Since 2017 I have made this observation — for the past several centuries we have used human labour to do what machines cannot. First the machines caught up with, and then surpassed, humans with their brute force. Now they are surpassing us with their brute intelligence. There is not much need left for machine-like human work that is routine, standardized, or brute.

Read more

continuing to step aside

I am not ignoring new technologies in the ‘AI’ field, but I believe there is a real need for people to get better at communicating and making sense with other people. Well, that is what I wrote early last year in stepping aside. What have I learned since then?

I still have not found any use for generative AI in my own work.

The rush to implement generative AI in the workplace is leading to massive job cuts, especially amongst software programmers. The perfect storm of neo-liberalism and automation continues to tear up 20th-century social contracts.

Read more

writing by humans, for humans

Recently I have found it difficult to maintain the writing pace I have kept for 20+ years. There are 3,700 blog posts published here, but few in the last year. The fact that large language models (LLMs) have scraped my website, and continue to do so, has left me feeling less motivated to share my thoughts. But maybe the best act of rebellion against AI slop is to keep writing and not let the Silicon Valley bastards grind me down.

Read more

sensemaking through the slop

The image below is one I have often used to explain sensemaking with the PKM framework. It describes how we can use different types of filters to seek information and knowledge, apply this by doing and creating, and then share what we have learned, with added value. One emerging challenge today is that our algorithmic knowledge filters are becoming dominated by the output of generative pre-trained transformers based on large language models, and more and more these are generating AI slop. This means that machine filters, like our search engines, are no longer trusted sources of information.

As a result, we have to build better human filters — experts, and subject matter networks.

Read more

working for capitalists

The automation of human work is an ongoing objective of our capitalist systems. Our accounting practices amortize machines while listing people as costs, which keeps the power of labour down. The machines do not even have to be as good as a person, due to our bookkeeping systems that treat labour and capital differently. Labour is a cost while capital is an investment. Indeed, automation + capitalism = a perfect storm.
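The bookkeeping asymmetry described above can be sketched with toy numbers (all figures hypothetical): a machine’s purchase price is capitalized and depreciated over its useful life, while a worker’s salary is expensed in full every year.

```python
def annual_expense_machine(purchase_price: float, useful_life_years: int) -> float:
    """Straight-line depreciation: the machine's cost hits the
    income statement gradually, spread over its useful life."""
    return purchase_price / useful_life_years

def annual_expense_worker(salary: float) -> float:
    """Labour is expensed in full each year; nothing is capitalized."""
    return salary

# Hypothetical figures: a $500,000 machine depreciated over 10 years
# shows the same yearly expense as a $50,000 salary...
machine = annual_expense_machine(500_000, 10)  # 50,000 per year
worker = annual_expense_worker(50_000)         # 50,000 per year

# ...but the machine also sits on the balance sheet as an asset,
# while the worker appears only as a recurring cost.
print(machine, worker)
```

With identical yearly expense lines, the books still make the machine look like an investment and the person look like a cost, which is the asymmetry the paragraph points to.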

Recently, The Verge reported that the CEO of Shopify, an online commerce platform, told employees — ‘Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI.’ The underlying, and completely misinformed, assumption is that large language models and generative pre-trained transformers are as effective at thinking and working as humans.

Read more

every medium reverses its properties when pushed to its limits

In 2018 — seeing the figure through the ground — I used the Laws of Media, developed by Marshall and Eric McLuhan, to examine the impact of social media. McLuhan’s Laws state that every medium (technology) used by people has four effects: it extends a human property, obsolesces the previous medium (often making it a luxury good), retrieves a much older medium, and reverses its properties when pushed to its limits. These four aspects are known as the media tetrad.

This image was the resulting tetrad.

Read more

rebuilding trust one catalyst at a time

I have worked in the fields of human performance improvement, social learning, collaboration, and sensemaking for several decades. Currently, the dominant discussion in all of these fields is about using and integrating generative artificial intelligence [AKA machine learning] based on large language models. I am not seeing many discussions about improving individual human intelligence or our collective intelligence. My personal knowledge mastery workshops focus on these and leave AI as a side issue when we discuss tools near the end of each workshop. There is enough to deal with in improving how we seek, make sense of, and share our knowledge.

Read more

let’s go on to organize

Ten years ago I wrote a series of posts for Cisco on the topic of ‘The Internet of Everything’ (IoE), which was a variation on the Internet of Things, or the idea that all objects, such as light-bulbs and refrigerators, would be connected to the internet. With AI in everything now, I guess we are at that stage of technology intrusion, or rather techno-monopolist intrusion.

I would like to review some of the highlights from a decade ago.

tl;dr — little has changed

Read more