A seminal moment in my work came when I saw my first web page on a computer at Montreal's CRIM in 1994. I finally saw computers as things that connect people around the globe. From there I completed a Master's degree focusing on how people learn at work with information technology.
The next significant moment arrived with social media. I started blogging and sharing online. When Twitter came along it changed my relationship with hundreds of people. Social media platforms became the great connectors. But now in 2023 we know that much of the web consists of surveillance and tracking tools that are designed to influence our behaviour, especially our purchasing behaviour.
In “whither Twitter”, I wrote that more important than any single platform is our collective ability to seek diversity, think critically, and learn socially. For now I am staying on Twitter and watching the show, muting and blocking with abandon. But we know that platforms like Twitter can undermine democracy and spread disinformation and propaganda. Perhaps that is why Musk bought the company.
I had another seminal moment when I watched The AI Dilemma, recorded on 9 March 2023. It shook my understanding of the current state of machine learning, which I thought I sort of understood conceptually. Tristan Harris and Aza Raskin, from the Center for Humane Technology, present on the new force that has been unleashed by several global companies with no regulatory oversight — the Generative Large Language Multi-modal Model (AKA Gollem-class AIs).
Social Media
When social media came on the scene many of us saw their positive potential — “Give everyone a voice”, “Join like-minded communities”, “Enable small businesses to reach customers”. But social media soon caused many more problems — “Addiction, Disinformation, Polarization.”
What I find most interesting is how the presenters also look at the underlying driver behind social media — maximize engagement, resulting in a race for attention. They show that trying to address the resulting problems without first changing the underlying motive will only result in a game of whac-a-mole. Today, no consumer social media platform is a trusted space, though covenants like the fediverse may provide a better way of connecting humans.

AI
There are many who cheer AI in the form of generative pre-trained transformers (GPT) — “AI will make us more efficient. AI will make us write faster. AI will make us code faster.” Other people are warning about — “AI bias. AI taking jobs. AI lack of transparency.”
Again, the presenters look at the underlying logic behind Gollem — “to increase its capabilities and entangle itself with human society.”

Shaking my cognitive tree
Here are some of the points made in this presentation that definitely shook me.
- An AI can reconstruct what you visually see from an fMRI image of your brain.
- All of these AI fields are one field now, converting everything to language that can cross fields, so increases in power are exponential.
- AI can identify human participants and their locations in a room based only on Wi-Fi signals = cameras that can locate people behind walls and in the dark.
- Three seconds of a human voice can be replicated by AI and then used to communicate = voice-based verification is dead.
- TikTok could turn every user [not just their avatar] to look and sound like a different person, or even the same person.
- The resulting AI is, “The total decoding and synthesizing of reality.”
- Everything we do runs on language — laws, friendships, relationships — and now non-humans can create persuasive narratives.
- Gollems already create new capabilities for themselves without human input.
- An AI can generate its own training data to learn faster — “AI makes better AI.”
I have to reflect on this presentation, and watch it a few more times. I have not reached any conclusions on what we should do. I do think that developing and nurturing trusted human relationships will be critical for society, and that is likely where I will keep my professional focus. Anyway, a new arms race has begun and our elected officials are once again clueless about creating the optimal regulations for this fast-moving field.


Thanks for sharing these sobering thoughts. It changes everything doesn’t it? I’ll check out that video on YouTube.
I would be interested in your thoughts, Helen. It’s a topic of discussion in our coffee club at the moment. I really think this will be a paradigm shift that will increase inequality.
One reason I have little faith in our elected or administrative officials is because of their inept reactions to the complexity of the pandemic.
https://jarche.com/2022/04/a-profound-failure-of-ethical-action/
https://jarche.com/2022/10/leadership-in-chaos/
It certainly will contribute to inequality unless we start now to put safeguards in place Harold. Thanks for the links, and the prompts to take action.
Still digesting that video Harold. I fear that we don’t have the institutions and networks to really process the possible future outcomes. First time I got really nervous about where this is headed.
On a related note, see this – ChatGPT just invented its own language – https://bgr.com/tech/chatgpt-just-invented-its-own-language/
Me too, Steve. It’s scary times.
Another highlight from “Gollems already create new capabilities for themselves without human input”: and we don’t know what they are and don’t have the capacity to know or find.
I would call that “The Shadow of AI”. Very scary.
Since I saw this very important video, I started following Harris and Raskin on Twitter and via them I also found out about this very interesting podcast conversation (‘Can we govern AI?’) between Harris and Marietje Schaake (former member of the EU Parliament and now the international policy director at Stanford’s Cyber Policy Center and fellow at Stanford’s Institute for Human-Centered AI): https://www.humanetech.com/podcast/can-we-govern-ai Of course it’s not that it reassures me, but I think it’s at least a start.
Thank you for sharing, Mascha!
Insightful comment by Meredith Whittaker, President of Signal
“I have very strong disagreements with their analysis (tldr they leave the claims about tech capabilities made by corporate marketers unquestioned (even amplify them) in a way that ultimately promotes corporate hype, while proposing remedies that largely leave the core logics of these companies/their business models intact)”
https://mastodon.world/@Mer__edith/112072800421264685