disorientation and exploration

“We become what we behold. We shape our tools and then our tools shape us.” —Father John Culkin (1967) A Schoolman’s Guide to Marshall McLuhan

Disorientation and exploration are essential for human learning. By using generative AI (GPT/LLM), are we bypassing these two stages of learning in search of efficiency and robotic productivity?

“John Nosta, founder of the NostaLab think tank, says AI trains humans to think backward by providing answers before they understand.” — link via Archive.Today

Read more

learning as rebellion

Is human learning now an act of rebellion?

Since 2017 I have made this observation: for the past several centuries we have used human labour to do what machines could not. First the machines caught up with us, then surpassed us, with their brute force. Now they are surpassing us with their brute intelligence. There is not much need left for machine-like human work that is routine, standardized, or brute.

Read more

continuing to step aside

I am not ignoring new technologies in the ‘AI’ field, but I believe there is a real need for people to get better at communicating and making sense with other people. Well, that is what I wrote early last year in stepping aside. What have I learned since then?

I still have not found any use for generative AI in my own work.

The rush to implement generative AI in the workplace is leading to massive job cuts, especially amongst software programmers. The perfect storm of neo-liberalism and automation continues to tear up 20th-century social contracts.

Read more

writing by humans, for humans

Recently I have found it difficult to maintain the writing pace I have kept up for more than 20 years. There are 3,700 blog posts published here, but few from the last year. The fact that large language models (LLMs) have scraped my website, and continue to do so, has left me feeling less motivated to share my thoughts. But maybe the best act of rebellion against AI slop is to keep writing and not let the Silicon Valley bastards grind me down.

Read more

sensemaking through the slop

The image below is one I have often used in explaining sensemaking with the PKM framework. It describes how we can use different types of filters to seek information and knowledge, apply this by doing and creating, and then share, with added value, what we have learned. One emerging challenge today is that our algorithmic knowledge filters are becoming dominated by the output of generative pre-trained transformers built on large language models, and more and more these are generating AI slop. This means that machine filters, like our search engines, are no longer trusted sources of information.

As a result, we have to build better human filters: experts and subject-matter networks.
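
To make the filter idea concrete, here is a toy sketch in Python. It is not part of the PKM framework itself, and the item names, trust scores, and threshold are all illustrative assumptions of mine. It models a single Seek > Sense > Share pass in which items arriving through low-trust machine filters are screened out in favour of those coming from human filters.

```python
# Toy sketch: contrast machine filters (algorithmic, e.g. search engines)
# with human filters (trusted experts and subject-matter networks).
# All names, trust scores, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    source: str    # "machine" or "human": which kind of filter surfaced it
    trust: float   # 0.0 to 1.0: how much we trust that filter

def seek(items: list[Item]) -> list[Item]:
    """Gather candidate items from both machine and human filters."""
    return items

def sense(items: list[Item], min_trust: float = 0.6) -> list[Item]:
    """Keep only items whose filter we trust enough to spend attention on."""
    return [i for i in items if i.trust >= min_trust]

def share(items: list[Item]) -> list[str]:
    """Add a little value before sharing: note where each kept item came from."""
    return [f"{i.title} (via {i.source} filter, trust {i.trust:.1f})" for i in items]

candidates = [
    Item("search-engine result", "machine", trust=0.3),   # increasingly AI slop
    Item("note from a trusted expert", "human", trust=0.9),
    Item("link from a subject-matter network", "human", trust=0.8),
]

for line in share(sense(seek(candidates))):
    print(line)
```

Running it keeps only the expert's note and the network link, which is the point above: the human filters now carry the trust that the machine filters have lost.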

Read more

working for capitalists

The automation of human work is an ongoing objective of our capitalist systems. Our accounting practices amortize machines while listing people as costs, which keeps the power of labour down. The machines do not even have to be as good as a person, due to our bookkeeping systems that treat labour and capital differently. Labour is a cost while capital is an investment. Indeed, automation + capitalism = a perfect storm.
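
As a back-of-the-envelope illustration of that difference, here is a minimal sketch in Python using entirely hypothetical figures: a capitalized machine has its purchase price spread over a depreciation schedule, with the remainder carried as an asset, while a worker's full salary is expensed in the year it is paid and leaves nothing on the balance sheet.

```python
# Hypothetical first-year comparison of how a capitalized machine and a
# salaried worker appear in the books. All figures are illustrative, not real data.

MACHINE_PRICE = 500_000    # one-time capital expenditure
DEPRECIATION_YEARS = 10    # straight-line depreciation schedule
WORKER_SALARY = 50_000     # annual salary, fully expensed each year

# Straight-line depreciation: only a fraction of the machine's price
# shows up as an expense in any given year.
machine_expense_year_1 = MACHINE_PRICE / DEPRECIATION_YEARS

# The worker's entire salary is a cost in the year it is paid.
worker_expense_year_1 = WORKER_SALARY

# The machine also remains on the balance sheet at its undepreciated value.
machine_book_value = MACHINE_PRICE - machine_expense_year_1

print(f"Machine expense, year 1: {machine_expense_year_1:,.0f}")
print(f"Worker expense, year 1:  {worker_expense_year_1:,.0f}")
print(f"Machine carried as an asset: {machine_book_value:,.0f}")
```

Under these assumptions the two show the same annual expense, yet only the machine also counts as an asset, which is one way the bookkeeping tilts decisions toward capital and away from labour.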

Recently, The Verge reported that the CEO of Shopify, an online commerce platform, told employees: ‘Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI.’ The underlying, and completely misinformed, assumption is that large language models and generative pre-trained transformers are as effective at thinking and working as humans.

Read more

rebuilding trust one catalyst at a time

I have worked in the fields of human performance improvement, social learning, collaboration, and sensemaking for several decades. Currently, in all of these fields, the dominant discussion is about using and integrating generative artificial intelligence (aka machine learning) based on large language models. I am not seeing many discussions about improving individual human intelligence or our collective intelligence. My personal knowledge mastery workshops focus on these and leave AI as a side issue when we discuss tools near the end of each workshop. There is enough to deal with in improving how we seek, make sense of, and share our knowledge.

Read more

let’s go on to organize

Ten years ago I wrote a series of posts for Cisco on the topic of ‘The Internet of Everything’ (IoE), which was a variation on the Internet of Things, or the idea that all objects, such as light-bulbs and refrigerators, would be connected to the internet. With AI in everything now, I guess we are at that stage of technology intrusion, or rather techno-monopolist intrusion.

I would like to review some of the highlights from a decade ago.

tl;dr — little has changed

Read more

remembering nothing and failing

An article on the impact of AI on computer science education concludes that all jobs will have a generative AI component and that understanding computer science will be necessary in most of them. The piece opens with an experiment a professor conducted with one of his computer science classes.

One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components.

Then the students were tested, from memory, on how they had solved the problem, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Eric Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade.

Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed.

Read more

intractable human problems

The current hype around ‘artificial intelligence’ in the form of generative pre-trained transformers and large language models is impossible to avoid. However, I have yet to try any of these out, other than posing two questions to Sanctum.ai (auto-marketing), which runs on my computer and not on some cloud. So far, these are my reasons for not jumping on this bandwagon.

Read more