In smarter networks through better narratives, I noted that there needs to be a dominant narrative to counter “folks who’ve got nothing but conspiracies and medieval fantasies to base their arguments upon.” A new frame is required, not factual counter-arguments. George Lakoff explains it this way: “1) Repetition strengthens the synapses in neural circuits that people use in thinking 2) Whoever frames first has an advantage 3) Negating a frame activates and strengthens it.” In short, whoever frames the narrative first has an advantage, and negating a frame only activates and strengthens it. So responding to trolls and conspiracy theorists, which we often feel compelled to do, only makes the buggers stronger, which explains my earlier confusion.
it’s political
Everything is political — even the learning organization.
Peter Senge’s development of the fifth discipline has informed much of my work around workplace learning for three decades. Sheila Damodaran takes a deep look at this seminal book.
The Five Disciplines were not assembled aesthetically. They were assembled structurally — each closing a vulnerability left open by the others, each compensating for a failure mode observable in real institutions.
—Systems Thinking prevented local optimisation from masquerading as improvement.
—Personal Mastery prevented aspiration from collapsing under institutional pressure.
—Mental Models prevented inherited assumptions from hardening into policy dogma.
—Team Learning prevented the conversation from degenerating into positional defence.
—Shared Vision prevented purpose from fragmenting into departmental ambition.
Remove one, and drift begins.
Emphasise one at the expense of others, and imbalance follows.
—The Fifth Discipline at Thirty-Five — Lineage, Surge, and Scale
“let the discourse rage without you”
Joan Westenberg covers a lot of ground in the post the discourse is a distributed denial-of-service attack. I will try to summarize and highlight what I found of importance.
A DDoS (distributed denial-of-service) attack attempts to overload a web server with so much traffic that it can no longer function. The case Westenberg refers to is one where thousands of internet-connected devices — not necessarily computers — were pointed at the website of security expert Brian Krebs. As a side note, I would recommend Krebs’ Mastodon feed.
Westenberg goes on to argue that the online social media space has become a massive distributed denial-of-service attack on our collective brains. There is so much information — not all fake news, but a lot of false information shared by people — vying for our attention that we cannot cope with it.
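The analogy rests on a simple mechanic: capacity is finite, and a flood of junk crowds out the legitimate. Here is a toy sketch of that mechanic (my illustration, not from Westenberg’s post; the names and numbers are invented):

```python
import random

# A "server" that can handle a fixed number of requests per tick.
# Legitimate traffic gets through easily on a quiet day; once a flood
# of bot requests arrives, almost all legitimate requests are crowded
# out -- the essence of a denial of service, and of attention overload.

CAPACITY_PER_TICK = 100  # hypothetical server capacity

def served_fraction(legit: int, bots: int) -> float:
    """Fraction of legitimate requests served when requests are
    processed in random order up to the server's capacity."""
    requests = ["legit"] * legit + ["bot"] * bots
    random.shuffle(requests)
    served = requests[:CAPACITY_PER_TICK]
    return served.count("legit") / legit

random.seed(42)
print(f"quiet day:   {served_fraction(50, 0):.0%} of real requests served")
print(f"under flood: {served_fraction(50, 10_000):.0%} of real requests served")
```

The server itself never breaks; it simply spends all its capacity on junk. That is Westenberg’s point about attention: the scarce resource is ours, and the flood does not need to be malicious code to exhaust it.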
continuing to step aside
I am not ignoring new technologies in the ‘AI’ field, but I believe there is a real need for people to get better at communicating and making sense with other people. Well, that is what I wrote early last year in stepping aside. What have I learned since then?
I still have not found any use for generative AI in my own work.
The rush to implement generative AI in the workplace is leading to massive job cuts, especially amongst software programmers. This perfect storm of neo-liberalism and automation continues to tear up 20th century social contracts.
the outrage continues
Five years ago I wrote about the web’s tendency toward constant doubt and outrage. Now, five years later, that trend continues, exacerbated by the platform monopolists who understand that outrage sells more advertising. I wrote that social media have created a worldwide Dunning-Kruger effect: collectively, we perceive that we know more from social media than we actually do. And the outrage continues because we ignore our common humanity. We do.
I concluded that as we become more connected we should not be cutting out social media; instead we should be using them in smarter ways. Today we all have to work and live smarter, connecting to our networks and communities. These are essential to ensure that we are not drowned out by the noise of the Internet of Beefs.
meta skills
[Demis] Hassabis [CEO of Google’s DeepMind, Nobel Prize winner in Chemistry 2024] emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities. —AP 2025-09-12
In the third bucket I discussed a conversation I had with a senior Human Resources executive at a large corporation in 2016. He noted that when it comes to managing people and their talents, there are three buckets. Two of these are easy to fill, while the third is the real challenge.
sensemaking through the slop
The image below is one I have often used to explain sensemaking with the PKM framework. It describes how we can use different types of filters to seek information and knowledge, apply this by doing and creating, and then share, with added value, what we have learned. One emerging challenge today is that our algorithmic knowledge filters are becoming dominated by the output of generative pre-trained transformers built on large language models, and more and more these are generating AI slop. This means that machine filters, like our search engines, are no longer trusted sources of information.
As a result, we have to build better human filters — experts and subject-matter networks.
rebuilding trust one catalyst at a time
I have worked in the fields of human performance improvement, social learning, collaboration, and sensemaking for several decades. Currently, in all of these fields, the dominant discussion is about using and integrating generative artificial intelligence [a form of machine learning] based on large language models. I am not seeing many discussions about improving individual human intelligence or our collective intelligence. My personal knowledge mastery workshops focus on these and leave AI as a side issue when we discuss tools near the end of each workshop. There is enough to deal with in improving how we seek, make sense of, and share our knowledge.
intractable human problems
The current hype around ‘artificial intelligence’ in the form of generative pre-trained transformers and large language models is impossible to avoid. However, I have yet to try any of these out other than two questions posed to Sanctum.ai — auto-marketing — on my computer and not on some cloud. So far, these are my reasons for not jumping on this bandwagon.
fields of knowledge
Stay in your lane. Stick to your knitting. These are perhaps the worst clichéd words of advice anyone can give in our interconnected, networked world.
For much of history, particularly since The Enlightenment, our societies have been quite adept at creating classifications and defining fields of work and study.
At the end of the day, fields represent a specific kind of research machinery: a collection of rallying cries, norms, funders, and bureaucratic arrangements that are designed to output new insights about the world at large. Fields rise and fall on the strength of their ability to deliver knowledge and useful ideas. Researchers – particularly the good ones – coalesce around productive fields because they are also the most effective engines for pursuing the questions they want to pursue. At the end of the day, that is what matters. —Field Essentialism
Fields are often created to be useful, but they can also be used for power and control. I remember visiting the Apartheid Museum in South Africa, where one of the rooms showed all the laws around race that had been in place during the apartheid regime. These started as a few laws, but more kept being added, as there was no way to make a complex field merely complicated.