remembering nothing and failing

In an article on the impact of AI on computer science education, the general conclusion is that all jobs will have a generative AI component and that understanding computer science will be necessary in most of them. The piece opens with an experiment conducted by a professor with one of his computer science classes.

One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components.

Then the students were tested, from memory, on how they had solved the problem, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Klopfer, the professor who ran the experiment and who directs the MIT Scheller Teacher Education Program and The Education Arcade.

Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed.

intractable human problems

The current hype around ‘artificial intelligence’ in the form of generative pre-trained transformers and large language models is impossible to avoid. However, I have yet to try any of these out, other than two questions posed to Sanctum.ai (auto-marketing), running on my computer and not in some cloud. So far, these are my reasons for not jumping on this bandwagon.
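
For readers curious what “on my computer and not in some cloud” looks like in practice, here is a minimal sketch of putting a question to a locally hosted model. It assumes Ollama, a different local runner than Sanctum.ai, is installed and serving a model on its default port; the model name and prompt are placeholders.

```python
# A minimal sketch of querying a language model that runs locally rather than
# in someone else's cloud. Assumes Ollama is installed and serving on its
# default port; the model name and prompt are illustrative only.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",  # any model already pulled locally
    "prompt": "Summarize the case for running language models locally.",
    "stream": False,    # ask for one complete JSON response
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    answer = json.loads(response.read())

print(answer.get("response", ""))
```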

analog privilege

Are we headed toward a society of feudal techno-peasants and a small class of the analog-privileged?

The Future is Analog (If You Can Afford It)

The idea of “analog privilege” describes how people at the apex of the social order secure manual overrides from ill-fitting, mass-produced AI products and services. Instead of dealing with one-size-fits-all AI systems, they mobilize their economic or social capital to get special personalized treatment. In the register of tailor-made clothes and ordering off menu, analog privilege spares elites from the reductive, deterministic and simplistic downsides of AI systems. —Maroussia Lévesque

assistive technology

Donald Clark has posted about how many people are using AI as assistive technology.

Time and time again, someone with dyslexia, or with a son or daughter with dyslexia, came up to me to discuss how AI had helped them. They describe the troubles they had in an educational system that is obsessed with text. Honestly, I can’t tell you how often I’ve had these conversations. —Plan B: 2024-08-15

Donald goes on to cite several types of assistive technology.

a rude awakening

“It might be down to the time of year; it’s always quieter in the summer months but it feels a bit different right now.

Firstly, it feels like there has been a BIG pause because of ChatGPT and other LLMs. It feels like people are still getting their heads round what they can do, their effectiveness, quality, etc. And when they do look at it, they don’t ‘get’ how they’ll use it.” —Andrew Jacobs 2024-08-09

I have witnessed this same malaise in the business world for the past year. If it’s not an AI initiative, it does not get any attention. The bad and the ugly aspects of this new flavour of machine learning are dominating the IT sector and all it touches. Here are some recent examples shared in our community of practice.

the bad & the ugly

The capitalist AI future is bullshit by design — AKA ‘mansplaining as a service’.

“Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn’t keep it from being bullshit. Worse, these systems are not meant to generate consistent bullshit — you can get different bullshit answers from the same prompts.” —Anil Dash 2023

What are the benefits of AI adoption in organizations? Not good news for many workers, it seems.

careening toward a meaningless world

As we are inundated with new knowledge and information regurgitated by large language models and generative pre-trained transformers, time for meaning-making becomes critical.

“Meaning-making is the process by which we interpret situations or events in the light of our previous knowledge and experience. It is a matter of identity: it is who we understand ourselves to be in relation to the world around us.” —Dave Gurteen

Are we swimming in a world of meaninglessness?

blogging is enough

This blog turned 20 last month — dead blog walking. One of the big challenges that the growth of AI [GPT, LLM, etc.] presents us with is connecting with people — not machines — for our sensemaking. A personal blog is a human way to connect. There is no algorithm to filter what others read. They can subscribe, on their terms, and with their chosen technology thanks to Really Simple Syndication (RSS). The great thing about blogging is that there are few rules. You can write as you like, when you like, and as often as you like.
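
Subscribing “on their terms” can be as simple as pointing a feed reader, or a few lines of code, at a blog’s RSS feed. Here is a minimal sketch, assuming the third-party feedparser package; the feed URL is a placeholder, not a real address.

```python
# A minimal sketch of following a blog via RSS, with no algorithm deciding
# what you see. Assumes the third-party 'feedparser' package is installed
# (pip install feedparser); the feed URL below is a placeholder.
import feedparser

FEED_URL = "https://example.com/feed"  # replace with the blog's real feed URL

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Untitled feed"))

# Show the five most recent posts, in the order the feed provides them.
for entry in feed.entries[:5]:
    print(f"- {entry.get('title', 'Untitled')} ({entry.get('link', '')})")
```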

skill erosion

If you don’t use it, you will lose it. Automate what was once a skilled process, and those skills will decline.

“Cognitive automation powered by advanced intelligent technologies is increasingly enabling organizations to automate more of their knowledge work tasks. Although this often offers higher efficiency and lower costs, cognitive automation exacerbates the erosion of human skill and expertise in automated tasks. Accepting the erosion of obsolete skills is necessary to reap the benefits of technology—however, the erosion of essential human expertise is problematic if workers remain accountable for tasks for which they lack sufficient understanding, rendering them incapable of responding if the automation fails.” —The Vicious Circles of Skill Erosion (2023)

One key factor in understanding how we learn and develop skills is that experience cannot be automated. Increasing automation requires that the Learning and Development (L&D) field get out of the comfort zone of course development and into the most complex aspects of human learning and performance. To understand learning at work, L&D must understand the work systems. Now they also have to understand skill erosion.

stepping aside

In Only Humans Need Apply, the authors identify five ways that people can adapt to automation and intelligent machines. They call it ‘stepping’. I have added in parentheses the main attributes I think are needed for each option.

  • Step-up: directing the machine-augmented world (creativity)
  • Step-in: using machines to augment work (deep thinking)
  • Step-aside: doing human work that machines are not suited for (empathy)
  • Step-narrowly: specializing narrowly in a field too small for augmentation (passion)
  • Step-forward: developing new augmentation systems (curiosity)

There is a lot of talk and media coverage about stepping-up, stepping-in, and stepping-forward. I have previously discussed stepping-in and concluded that anyone affected by these technologies [AI, GPT, LLM] needs to understand their basic functions and their underlying models. These tools will be thrust into our workplaces very soon. So let’s step-in to working with machine learning but with a clear understanding of who needs to be in charge — humans. I stand by this position today.
