In the book Only Humans Need Apply (2016), the authors, Thomas Davenport and Julia Kirby, identify five ways that people can work with machines. They call it ‘stepping’. We are seeing an increasing need to do what they call ‘stepping in’, or using machines [and software] to augment work. In GPT-3 through a glass darkly I used the McLuhans’ media tetrad to examine this form of machine learning, which is all over the media today.
- Extends each voice & mimics creativity
- Obsolesces copywriting and essays so that human insight becomes a luxury
- Retrieves the polymaths of the European Renaissance so that the best writers must be multi-talented to earn a living
- Reverses into mass deception and provides answers without real questions behind them (see quote below)
We may be in a ‘golden age’ of AI, as many have claimed. But we are also in a golden age of grifters and Potemkin inventions and aphoristic nincompoops posing as techno-oracles. —Derek Thompson, The Atlantic, 2022-12-01
For many years I have promoted developing human skills rather than trying to compete with computers and machines. They are better than us at constant diligence, compliance with rules, perseverance, and brute intelligence. I still think we should focus human work on curiosity, imagination, empathy, and creativity, among many other human talents. But I am also seeing a need to add an important new skill: stepping with the machines.
Machine assistance is coming to a workplace near you. Perhaps you do not need to use ChatGPT or DALL-E 2 at this time, but we should all understand what machine learning is, and isn’t. For example, some form of machine learning will be coming to Microsoft’s Office Suite very soon — Microsoft is adding OpenAI writing tech to Office.
Added note: For organizations, the emphasis should be on automating those skills on the left side of the diagram, while not compromising or restricting humans’ ability to exercise the skills on the right side. Don’t try to make humans act like machines.
An understanding of the strengths and weaknesses of a tool like ChatGPT becomes critical when we are using its outputs, or when policies or decisions are based on these outputs. Stephen Wolfram has written a clear comparison and contrast between ChatGPT and Wolfram|Alpha. He shows examples of ChatGPT getting simple calculations wrong, such as distances between cities or the sizes of countries. He suggests that the two types of processing can be used together.
Machine learning is a powerful method, and particularly over the past decade, it’s had some remarkable successes—of which ChatGPT is the latest … But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine. —Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT
Anyone affected by these technologies needs to understand their basic functions and their underlying models. These tools will be thrust into our workplaces very soon. So let’s step in to working with machine learning, but with a clear understanding of who needs to be in charge — humans.
Next: auto-tuning work
How about delivering a class in your own voice, without having to record it?
“In the realm of AI, this is huge in my opinion. The implications going forward could change many aspects of what we do. Someone could potentially train the AI on a few of their own voice clip samples, then feed the AI any text they like and it would sound like they’re reading it. Re-recording becomes unnecessary, which is a big deal in sound recording sessions and time consuming, because it’s only the text itself that gets edited and not the voice recording itself. Imagine what this will do to audiobook creation. Imagine what it will do for online classes and trainings. Imagine online help translated in real time to the voice of whoever the company wants you to hear. The CEO of the company could be delivering end user assistance in their own voice.” —Race Bannon
I agree with the connections you are making, and I have founded the Sentient Syllabus Project precisely because I see academia at the intersection of the human side and the economy. But I am increasingly skeptical about the traditional division between standardized and unique tasks, and definitely no longer confident that the edge we have over AI regarding creativity and flexibility will last. What is needed is a different quality of work, and for that, we need to define how specifically human qualities translate into exchangeable value. I’d like to invite you to have a look at some of the analyses on Substack – perhaps start with the educational objectives https://sentientsyllabus.substack.com/p/how-much-is-too-much – and to get in touch; let’s see if this develops into a conversation. Thank you for sharing your insights.
Thanks for sharing, Boris. It seems that academia may get shaken up by LLMs and other emerging technologies. I have little, or very old, experience in formal education or training courses, so I lack any deep understanding of the factors at play today. I am quite certain that ‘stepping with machines’ will become reality in many workplaces, though. For example, the US Air Force is already using a form of AI to help fighter pilots during ‘dog fights’.