analog privilege

Are we headed toward a society of feudal techno-peasants and a small class of the analog-privileged?

The Future is Analog (If You Can Afford It)

The idea of “analog privilege” describes how people at the apex of the social order secure manual overrides from ill-fitting, mass-produced AI products and services. Instead of dealing with one-size-fits-all AI systems, they mobilize their economic or social capital to get special personalized treatment. In the register of tailor-made clothes and ordering off menu, analog privilege spares elites from the reductive, deterministic and simplistic downsides of AI systems. —Maroussia Lévesque

Read more

top tools 2024

Once again Jane Hart is asking, “What are the most popular digital tools for learning and why?” in the 18th Annual Top Tools for Learning survey. Voting ends on 30 August.

My tools have not changed much since last year. I am not using social bookmarks much any more, so Diigo did not make the list. It’s interesting that social bookmarking was my #3 tool in 2012, yet I barely use it now.

Read more

assistive technology

Donald Clark has posted about how many people are using AI as assistive technology.

Time and time again, someone with dyslexia, or with a son or daughter with dyslexia, came up to me to discuss how AI had helped them. They describe the troubles they had in an educational system that is obsessed with text. Honestly, I can’t tell you how often I’ve had these conversations. —Plan B: 2024-08-15

Donald goes on to cite several types of assistive technology.

Read more

a rude awakening

“It might be down to the time of year; it’s always quieter in the summer months but it feels a bit different right now.

Firstly, it feels like there has been a BIG pause because of ChatGPT and other LLMs. It feels like people are still getting their heads round what they can do, their effectiveness, quality, etc. And when they do look at it, they don’t ‘get’ how they’ll use it.” —Andrew Jacobs 2024-08-09

I have witnessed this same malaise in the business world for the past year. If it’s not an AI initiative, it does not get any attention. The bad and the ugly aspects of this new flavour of machine learning are dominating the IT sector and all it touches. Here are some recent examples shared in our community of practice.

Read more

the bad & the ugly

The capitalist AI future is bullshit by design — AKA ‘mansplaining as a service’.

“Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn’t keep it from being bullshit. Worse, these systems are not meant to generate consistent bullshit — you can get different bullshit answers from the same prompts.” —Anil Dash 2023

What are the benefits of AI adoption in organizations? Not good for many workers, it seems.

Read more

careening toward a meaningless world

As we are inundated with new knowledge and information regurgitated by large language models and generative pre-trained transformers, time for meaning-making becomes critical.

“Meaning-making is the process by which we interpret situations or events in the light of our previous knowledge and experience. It is a matter of identity: it is who we understand ourselves to be in relation to the world around us.” —Dave Gurteen

Are we swimming in a world of meaninglessness?

Read more

worldwide synesthesia

Marshall McLuhan has influenced much of my work and I have used the tetrad from the Laws of Media many times to understand emerging technologies. A recent article in The Free Press by Benjamin Carlson was a refreshing read by someone who had just discovered McLuhan. I started reading McLuhan’s work in 1995.

I first stumbled upon Marshall McLuhan a year ago on YouTube. Within a minute or two of watching a clip, I was amazed: here was a man who, in 1977, seemed to be describing the dislocating experience of living in 2023, and he did so with more insight than people living today. That the words were coming from a craggy, mustachioed man in a rumpled suit only enhanced the eerie feeling. Here was a professor-as-prophet. McLuhan says, in part, to his TV host … I shared the clip on Twitter and it went viral with more than 6 million views. —The Prophets 2024-03-02

Read more

blogging is enough

This blog turned 20 last month — dead blog walking. One of the big challenges that the growth of AI [GPT, LLM, etc.] presents us is connecting with people — not machines — for our sensemaking. A personal blog is a human way to connect. There is no algorithm to filter what others read. They can subscribe, on their terms, and with their chosen technology thanks to Really Simple Syndication (RSS). The great thing about blogging is that there are few rules. You can write as you like, when you like, and as often as you like.

Read more

skill erosion

If you don’t use it, you will lose it. Automate a process that once developed skills, and those skills will decline.

“Cognitive automation powered by advanced intelligent technologies is increasingly enabling organizations to automate more of their knowledge work tasks. Although this often offers higher efficiency and lower costs, cognitive automation exacerbates the erosion of human skill and expertise in automated tasks. Accepting the erosion of obsolete skills is necessary to reap the benefits of technology—however, the erosion of essential human expertise is problematic if workers remain accountable for tasks for which they lack sufficient understanding, rendering them incapable of responding if the automation fails.” —The Vicious Circles of Skill Erosion (2023)

One key factor in understanding how we learn and develop skills is that experience cannot be automated. Increasing automation requires that the Learning and Development (L&D) field must get out of the comfort zone of course development and into the most complex aspects of human learning and performance. To understand learning at work, L&D must understand the work systems. Now they also have to understand skill erosion.

Read more

stepping aside

In Only Humans Need Apply, the authors identify five ways that people can adapt to automation and intelligent machines. They call it ‘stepping’. I have added in parentheses the main attributes I think are needed for each option.

  • Step-up: directing the machine-augmented world (creativity)
  • Step-in: using machines to augment work (deep thinking)
  • Step-aside: doing human work that machines are not suited for (empathy)
  • Step-narrowly: specializing in a field too small for augmentation (passion)
  • Step-forward: developing new augmentation systems (curiosity)

There is a lot of talk and media coverage about stepping-up, stepping-in, and stepping-forward. I have previously discussed stepping-in and concluded that anyone affected by these technologies [AI, GPT, LLM] needs to understand their basic functions and their underlying models. These tools will be thrust into our workplaces very soon. So let’s step in to working with machine learning, but with a clear understanding of who needs to be in charge — humans. I stand by this position today.

Read more