the bad & the ugly

The capitalist AI future is bullshit by design — AKA ‘mansplaining as a service’.

“Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design. To be clear, bullshit can sometimes be useful, and even accidentally correct, but that doesn’t keep it from being bullshit. Worse, these systems are not meant to generate consistent bullshit — you can get different bullshit answers from the same prompts.” —Anil Dash 2023

What are the benefits of AI adoption in organizations? Not good for many workers, it seems.

Read more

careening toward a meaningless world

As we are inundated with new knowledge and information regurgitated by large language models and generative pre-trained transformers, time for meaning-making becomes critical.

“Meaning-making is the process by which we interpret situations or events in the light of our previous knowledge and experience. It is a matter of identity: it is who we understand ourselves to be in relation to the world around us.” —Dave Gurteen

Are we swimming in a world of meaninglessness?

Read more

worldwide synesthesia

Marshall McLuhan has influenced much of my work and I have used the tetrad from the Laws of Media many times to understand emerging technologies. A recent article in The Free Press by Benjamin Carlson was a refreshing read by someone who had just discovered McLuhan. I started reading McLuhan’s work in 1995.

“I first stumbled upon Marshall McLuhan a year ago on YouTube. Within a minute or two of watching a clip, I was amazed: here was a man who, in 1977, seemed to be describing the dislocating experience of living in 2023, and he did so with more insight than people living today. That the words were coming from a craggy, mustachioed man in a rumpled suit only enhanced the eerie feeling. Here was a professor-as-prophet. McLuhan says, in part, to his TV host … I shared the clip on Twitter and it went viral with more than 6 million views.” —The Prophets 2024-03-02

Read more

blogging is enough

This blog turned 20 last month — dead blog walking. One of the big challenges that the growth of AI [GPT, LLM, etc.] presents us with is connecting with people — not machines — for our sensemaking. A personal blog is a human way to connect. There is no algorithm to filter what others read. They can subscribe, on their terms, and with their chosen technology thanks to Really Simple Syndication (RSS). The great thing about blogging is that there are few rules. You can write as you like, when you like, and as often as you like.

Read more

skill erosion

If you don’t use it, you will lose it. Automate a process that once developed skills, and those skills will decline.

“Cognitive automation powered by advanced intelligent technologies is increasingly enabling organizations to automate more of their knowledge work tasks. Although this often offers higher efficiency and lower costs, cognitive automation exacerbates the erosion of human skill and expertise in automated tasks. Accepting the erosion of obsolete skills is necessary to reap the benefits of technology—however, the erosion of essential human expertise is problematic if workers remain accountable for tasks for which they lack sufficient understanding, rendering them incapable of responding if the automation fails.” —The Vicious Circles of Skill Erosion (2023)

One key factor in understanding how we learn and develop skills is that experience cannot be automated. Increasing automation requires that the Learning and Development (L&D) field get out of the comfort zone of course development and into the most complex aspects of human learning and performance. To understand learning at work, L&D must understand the work systems. Now it must also understand skill erosion.

Read more

stepping aside

In Only Humans Need Apply, the authors identify five ways that people can adapt to automation and intelligent machines. They call it ‘stepping’. I have added in parentheses the main attributes I think are needed for each option.

  • Step-up: directing the machine-augmented world (creativity)
  • Step-in: using machines to augment work (deep thinking)
  • Step-aside: doing human work that machines are not suited for (empathy)
  • Step-narrowly: specializing narrowly in a field too small for augmentation (passion)
  • Step-forward: developing new augmentation systems (curiosity)

There is a lot of talk and media coverage about stepping-up, stepping-in, and stepping-forward. I have previously discussed stepping-in and concluded that anyone affected by these technologies [AI, GPT, LLM] needs to understand their basic functions and their underlying models. These tools will be thrust into our workplaces very soon. So let’s step-in to working with machine learning but with a clear understanding of who needs to be in charge — humans. I stand by this position today.

Read more

low-quality goo

The race toward an AI-driven society is costly not only in the electricity and water consumed by the current AI data centre boom; the longer-term impacts on how we communicate may also be significant.

“This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate public reliable sources for information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created. Where the volume of content created overwhelms human or algorithmic abilities to sift through it quickly and find high-quality stuff.

“The social and political consequences of this are huge. We have grown so used to information abundance, the greatest gift of the internet, that having that disrupted would be a major upheaval for the whole of society.” —Ian Betteridge 2024-01-24

Read more

augmentation not automation

In automation vs. augmentation, inspired by danah boyd, I wrote that I am mostly in the augmentation camp, though I am concerned that automation + capitalism = a perfect storm. This was the case with the augmented work enabled by the personal computer: knowledge work improved significantly but wages did not. We are seeing this emerge in the ‘AI wars’ featuring ChatGPT, Bard, Copilot, and others. It’s a battle among big-money players to get the biggest slice of the pie, not to augment human work or improve society. Yet the mainstream press treats these algorithms like actual artificial intelligence that can think and even ‘hallucinate’ for themselves. They are just algorithms.

Dave Snowden has a good article about this on anthropomorphising idiot savants — “AI is a set of algorithms and energy-hungry training datasets that may also manifest in physical objects.”

Read more

automation, algorithms, and us

In March 2023 I wrote about understanding the hype and hope of AI and highlighted several insights from various experts.

The Good

  • “With an LLM even a problem with only one user, will be doable, enter your ask, and code gets written, problem gets solved. Runtime ends, app dies. Done. Single use apps are born.” —Linus Ekenstam
  • “ … it is well-established that copyright can protect only material that is the product of human creativity.” —US Copyright Office
  • “And in ChatGPT + Wolfram we’re now able to leverage the whole stack: from the pure ‘statistical neural net’ of ChatGPT, through the ‘computationally anchored’ natural language understanding of Wolfram|Alpha, to the whole computational language and computational knowledge of Wolfram Language.” —Stephen Wolfram
  • “Allen & Overy (A&O), the leading international law firm, has broken new ground by integrating Harvey, the innovative artificial intelligence platform built on a version of OpenAI’s latest models enhanced for legal work, into its global practice.” —David Wakeling

Read more