In the book Only Humans Need Apply (2016), the authors identify five ways that people can work with machines, which they call ‘stepping’. We are seeing an increasing need to do what they call ‘stepping in’: using machines [and software] to augment work. In GPT-3 through a glass darkly I used the McLuhans’ media tetrad to examine this form of machine learning, which is all over the media today.
- Extends each voice & mimics creativity
- Obsolesces copywriting and essays so that human insight becomes a luxury
- Retrieves the polymaths of the European Renaissance so that the best writers must be multi-talented to earn a living
- Reverses into mass deception and provides answers without real questions behind them (see the quote below)
We may be in a ‘golden age’ of AI, as many have claimed. But we are also in a golden age of grifters and Potemkin inventions and aphoristic nincompoops posing as techno-oracles. —Derek Thompson, The Atlantic, 2022-12-01
For many years I have promoted the development of human skills rather than trying to compete with computers and machines. They are better than us when it comes to constant diligence, compliance with rules, perseverance, and brute intelligence. I still think we should focus human work on curiosity, imagination, empathy, and creativity, among many other human talents. But I am also seeing a need to add an important new skill: stepping with the machines.
Machine assistance is coming to a workplace near you. Perhaps you do not need to use ChatGPT or DALL·E 2 at this time, but we should all understand what machine learning is, and isn’t. For example, some form of machine learning will be coming to Microsoft’s Office suite very soon: Microsoft is adding OpenAI writing tech to Office.
Added note: For organizations, the emphasis should be on automating the skills on the left side of the diagram, while not compromising or restricting humans’ ability to exercise the skills on the right side. Don’t try to make humans act like machines.
An understanding of the strengths and weaknesses of a tool like ChatGPT becomes critical when we use its outputs, or when policies or decisions are based on them. Stephen Wolfram has written a clear comparison of ChatGPT and Wolfram|Alpha. He shows examples of ChatGPT getting simple facts, such as distances between cities or the sizes of countries, wrong. He suggests that the two types of processing can be used together.
Machine learning is a powerful method, and particularly over the past decade, it’s had some remarkable successes—of which ChatGPT is the latest … But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine. —Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT
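Wolfram’s point about combining the two types of processing can be sketched in a few lines of code. The example below is purely illustrative and does not use Wolfram’s or OpenAI’s actual APIs: the function names, the tiny city table, and the `call_llm` stand-in are all hypothetical. It routes questions that have a single computable answer (a distance between two known cities) to an exact calculation, and sends everything else to a generative fallback that a human would still need to verify.

```python
import math

# Hypothetical demo data: (latitude, longitude) for a couple of cities.
CITIES = {
    "chicago": (41.8781, -87.6298),
    "tokyo": (35.6762, 139.6503),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))  # Earth radius ~6371 km

def call_llm(query: str) -> str:
    """Stand-in for a real language-model call (hypothetical)."""
    return "(generative answer, to be verified by a human)"

def answer(query: str) -> str:
    """Route computable questions to exact code; leave the rest generative."""
    words = [w.strip("?.,").lower() for w in query.split()]
    named = [w for w in words if w in CITIES]
    if "distance" in words and len(named) == 2:
        km = haversine_km(CITIES[named[0]], CITIES[named[1]])
        return f"about {km:,.0f} km"
    return call_llm(query)
```

The design choice mirrors the quote above: the generative path is fine where there is no single “right answer”, while questions with a checkable answer go to deterministic computation rather than trusting the model’s 95%.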
Anyone affected by these technologies needs to understand their basic functions and underlying models. These tools will be thrust into our workplaces very soon. So let’s step in to working with machine learning, but with a clear understanding of who needs to be in charge: humans.