Simon Terry has a short post on Microsoft’s new Copilot and why we should be careful about fully adopting these generative AI tools.
LLMs [large language models] are improvements on past tools but are hardly perfect. In a world where the volume of information means many people scan everything, we need to remain alert for the risks of the models’ false inferences or patterns gone awry.
In the history of aviation, it became apparent that pilot personal relationships are critical to avoiding dangerous incidents. Authoritarian cultures meant senior pilot mistakes went devastatingly unchallenged. —Microsoft Co-pilot
I mentioned pilot training in a recent post: experience cannot be automated. I concluded that automation, in all fields, forces learning and development out of the comfort zone of course development and into the most complex aspects of human learning and performance. That post also includes a quote from Captain Sully Sullenberger, the famed pilot who safely landed a passenger jet on the Hudson River. A movie was made about the incident, which included the subsequent safety investigation. Tom Hanks plays Sully, and in this sequence of videos we see the difference between the cognition of experienced human pilots and the best software/hardware simulation of the day. There is no comparison.
Part 1 — Part 2 — Part 3 — Part 4 — Part 5
Let’s get back to Microsoft Copilot, which was demonstrated last week. We should first note that this was a demonstration and the software is not yet in widespread use. Basically, Copilot connects the Microsoft applications with the Microsoft Graph (all user data within an enterprise) and a large language model (the black box). Using natural language processing, the system interacts with people inside an application to automate processes, find information, and summarize information, such as producing meeting minutes and lists of actions to be taken. The demonstration is definitely worth watching.
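To make that connect-retrieve-generate pattern concrete, here is a minimal sketch of how an assistant of this kind could stitch enterprise data and a language model together. This is not Microsoft’s implementation: only the Microsoft Graph endpoint shown is a real, documented API, while the token handling is omitted and the summarize_with_llm function is a placeholder for whatever “black box” model sits behind it.

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"


def fetch_recent_messages(access_token: str, count: int = 10) -> list[dict]:
    """Pull the user's most recent mail messages from the Microsoft Graph API."""
    resp = requests.get(
        f"{GRAPH_BASE}/me/messages",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"$top": count, "$select": "subject,bodyPreview,from"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])


def summarize_with_llm(prompt: str) -> str:
    """Placeholder for the 'black box': send text to a large language model
    and return its summary. The real Copilot pipeline is not public."""
    raise NotImplementedError("swap in an LLM provider of your choice")


def draft_brief(access_token: str) -> str:
    """Combine enterprise data (Graph) with an LLM to draft a summary,
    the same pattern the Copilot demo shows for meeting minutes."""
    messages = fetch_recent_messages(access_token)
    corpus = "\n".join(m.get("bodyPreview", "") for m in messages)
    return summarize_with_llm(f"Summarize the key points and action items:\n{corpus}")
```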
Based only on the demonstration, my initial conclusion is that this could be a powerful performance support tool. But as the demo shows, there will be mistakes, so any Copilot output has to be checked by a human. Whether this system crosses any ethical boundaries may not be known yet.
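As a rough illustration of that human checkpoint, the sketch below (purely hypothetical, not part of any Microsoft API) gates generated text behind an explicit reviewer decision before anything leaves the tool.

```python
def human_review(draft: str) -> str | None:
    """Show the generated draft to a person and let them accept, edit, or reject it.
    Illustrative only: in practice this would be a review step inside the application."""
    print("=== Generated draft ===")
    print(draft)
    decision = input("Accept as-is (a), edit (e), or reject (r)? ").strip().lower()
    if decision == "a":
        return draft
    if decision == "e":
        return input("Enter your edited version:\n")
    return None  # rejected: nothing is used without human sign-off


def publish_minutes(generated_minutes: str) -> None:
    approved = human_review(generated_minutes)
    if approved is None:
        print("Draft discarded; no minutes were sent.")
    else:
        print("Approved minutes ready to share:\n" + approved)
```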

Image source: Microsoft
MS Copilot is a tool that supports collaboration — working together for a common objective. It does not connect outside the enterprise (that I can see) so it does not support cooperation — sharing freely with no expectation of direct recompense. In revisiting cooperation I noted that working cooperatively requires a different mindset than merely collaborating on a defined project. Being cooperative means being open to others outside your group. Copilot will likely only improve actions in the lower left of the image below — collaborating in work teams.

Finally, Copilot and tools like it may result in auto-tuning work, especially basic skills. In the Copilot demonstration, human oversight of the output is repeatedly shown. The worker using O365 reviews the output, perhaps changes the request, and adds or removes components. This requires a certain level of skill and experience, which most workers today will have. But what happens in several years when workers have not had to ‘manually’ develop these skills? I have become a professional writer through 20 years of blogging and writing for clients. I can quickly review text and see if it makes sense. But if I had always started with the output of an LLM tool, would I have developed good basic writing skills?
“We become what we behold. We shape our tools and then our tools shape us.” —Father John Culkin (1967) A Schoolman’s Guide to Marshall McLuhan