Simon Terry has a short post on Microsoft’s new Copilot, cautioning that we should be careful about fully adopting some of these generative AI tools.
LLMs [large language models] are improvements on past tools but are hardly perfect. In a world where the volume of information means many people scan everything, we need to remain alert to the risks of the models’ false inferences or patterns gone awry.
In the history of aviation, it became apparent that pilot personal relationships are critical to avoiding dangerous incidents. Authoritarian cultures meant senior pilot mistakes went devastatingly unchallenged. —Microsoft Co-pilot
I mentioned pilot training in a recent post — experience cannot be automated. I concluded there that automation, in all fields, forces learning and development out of the comfort zone of course development and into the most complex aspects of human learning and performance. That post also includes a quote from Captain Sully Sullenberger, the famed pilot who safely landed a passenger jet on the Hudson River. A movie was made about this event, which included the subsequent safety investigation. Tom Hanks plays Sully, and in this sequence of videos we see the difference between the cognition of experienced human pilots and the best software/hardware simulation of the day. There is no comparison.