sensemaking through the slop

The image below is one I have often used to explain sensemaking with the PKM framework. It describes how we can use different types of filters to seek information and knowledge, apply what we find by doing and creating, and then share, with added value, what we have learned. One emerging challenge today is that our algorithmic knowledge filters are becoming dominated by the output of generative pre-trained transformers based on large language models, and more and more of that output is AI slop. This means that machine filters, such as our search engines, are no longer trusted sources of information.

As a result, we have to build better human filters — experts and subject-matter networks.


working for capitalists

The automation of human work is an ongoing objective of our capitalist systems. Our accounting practices amortize machines while listing people as costs, which keeps the power of labour down. Because bookkeeping treats labour as a cost and capital as an investment, a machine does not even have to be as good as a person to replace one. Indeed, automation + capitalism = a perfect storm.

Recently, The Verge reported that the CEO of Shopify, an online commerce platform, told employees — ‘Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI.’ The underlying — and completely misinformed — assumption is that large language models and generative pre-trained transformers are as effective at thinking and working as humans.


rebuilding trust one catalyst at a time

I have worked in the fields of human performance improvement, social learning, collaboration, and sensemaking for several decades. Currently, the dominant discussion in all of these fields is about using and integrating generative artificial intelligence [AKA machine learning] based on large language models. I am not seeing many discussions about improving individual human intelligence or our collective intelligence. My personal knowledge mastery workshops focus on these and leave AI as a side issue when we discuss tools near the end of each workshop. There is enough to deal with in improving how we seek, make sense of, and share our knowledge.


let’s go on to organize

Ten years ago I wrote a series of posts for Cisco on the topic of ‘The Internet of Everything’ (IoE), which was a variation on the Internet of Things, or the idea that all objects, such as light-bulbs and refrigerators, would be connected to the internet. With AI in everything now, I guess we are at that stage of technology intrusion, or rather techno-monopolist intrusion.

I would like to review some of the highlights from a decade ago.

tl;dr — little has changed


remembering nothing and failing

In an article on the impact of AI on computer science education, the general conclusion is that all jobs will have a generative AI component and that understanding computer science will be necessary in most jobs. The piece opens with an experiment conducted by a professor with one of his computer science classes.

One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components.

Then, the students were tested on how they solved the problem from memory, and the tables were turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Eric Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade.

Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed.


intractable human problems

The current hype around ‘artificial intelligence’ in the form of generative pre-trained transformers and large language models is impossible to avoid. However, I have yet to try any of these out other than posing two questions to Sanctum.ai — auto-marketing — running on my own computer, not in some cloud. So far, these are my reasons for not jumping on this bandwagon.


analog privilege

Are we headed toward a society of feudal techno-peasants and a small class of the analog-privileged?

The Future is Analog (If You Can Afford It)

The idea of “analog privilege” describes how people at the apex of the social order secure manual overrides from ill-fitting, mass-produced AI products and services. Instead of dealing with one-size-fits-all AI systems, they mobilize their economic or social capital to get special personalized treatment. In the register of tailor-made clothes and ordering off menu, analog privilege spares elites from the reductive, deterministic and simplistic downsides of AI systems. —Maroussia Lévesque


assistive technology

Donald Clark has posted about how many people are using AI as assistive technology.

Time and time again, someone with dyslexia, or with a son or daughter with dyslexia, came up to me to discuss how AI had helped them. They describe the troubles they had in an educational system that is obsessed with text. Honestly, I can’t tell you how often I’ve had these conversations. —Plan B: 2024-08-15

Donald goes on to cite several types of assistive technology.


a rude awakening

“It might be down to the time of year; it’s always quieter in the summer months but it feels a bit different right now.

Firstly, it feels like there has been a BIG pause because of ChatGPT and other LLMs. It feels like people are still getting their heads round what they can do, their effectiveness, quality, etc. And when they do look at it, they don’t ‘get’ how they’ll use it.” —Andrew Jacobs 2024-08-09

I have witnessed this same malaise in the business world for the past year. If it’s not an AI initiative, it does not get any attention. The bad and the ugly aspects of this new flavour of machine learning are dominating the IT sector and all it touches. Here are some recent examples shared in our community of practice.


the bad & the ugly

The capitalist AI future is bullshit by design — AKA ‘mansplaining as a service’.

“Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn’t keep it from being bullshit. Worse, these systems are not meant to generate consistent bullshit — you can get different bullshit answers from the same prompts.” —Anil Dash 2023

What are the benefits of AI adoption in organizations? Not good for many workers, it seems.
