getting scraped

The Washington Post looked at what information feeds Google’s chatbots, particularly the C4 data set, which scraped 15 million English-language websites. This is the ‘artificial intelligence’ behind the chatbot — material that people have written and posted online, all taken without authorization: “The copyright symbol — which denotes a work registered as intellectual property — appears more than 200 million times in the C4 data set.”

Read more

chatting about gpt

These are some highlights from several sources focused on large language models (LLM) and generative pre-trained transformers (GPT) — all published in 2023. It might be useful to first read — Nobody knows how many jobs will “be automated”, whatever that even means.

But “AI will increase labor productivity while forcing a small number of people to find new jobs” is not the kind of story that goes viral on social media, while “300 million jobs will be lost” definitely is that kind of story. People love to read about the impending apocalypse, and it’s the media’s responsibility not to indulge that desire … Instead of telling us who will be “automated”, they [A Method to Link Advances in Artificial Intelligence to Occupational Abilities – 2018] tell us who’s more likely to be affected by automation in some way. Obviously we’d like to know whether it’ll be a good way or a bad way. But the truth is that no one knows that yet, and economists do the world a service by refusing to pretend that they do know.

ChatGPT is about to revolutionize the economy

The optimistic view: it [GPT] will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth.

Read more

“the total decoding and synthesizing of reality”

A seminal moment in my work came when I saw my first web page on a computer at Montreal’s CRIM in 1994. I finally saw computers as things that connect people around the globe. From there I completed a Master’s degree focusing on how people learn at work with information technology.

The next significant moment arrived with social media. I started blogging and sharing online. When Twitter came along it changed my relationship with hundreds of people. Social media platforms became the great connectors. But now in 2023 we know that much of the web is composed of surveillance and tracking tools that are designed to influence our behaviour, especially our purchasing behaviour.

In whither Twitter, I wrote that more important than any single platform is our collective ability to seek diversity, think critically, and learn socially. For now I am staying on Twitter and watching the show, muting and blocking with abandon. But we know that platforms like Twitter can undermine democracy and spread disinformation and propaganda. Perhaps that is why Musk bought the company.

I had another seminal moment when I watched The AI Dilemma, recorded on 9 March 2023. It shook my understanding of the current state of machine learning, which I thought I sort of understood conceptually. Tristan Harris and Aza Raskin, from The Center for Humane Technology, present on the new force that has been unleashed by several global companies with no regulatory oversight — Generative Large Language Multi-modal Models (AKA Gollem-class AIs).

Read more

understanding the hype and hope

I have been keeping an eye on the hype & hope around artificial intelligence (AI), especially:

  • ML — machine learning
  • GPT — generative pre-trained transformers
  • GAI — generative artificial intelligence
  • LLM — large language models

“I’ve long been a fan and found value in AI / ML and its capabilities. Learning and finding patterns and causal patterns that in time can lead to outcomes that are problematic (a large fleet of vehicles with hundreds of sensors feeding an AI / ML to detect early engine, transmission, or other failure to address before more expensive damage or at a human cost). Generative AI from large language models is missing core pieces still and has knock-on effects that are really problematic with its lack of understanding facts (or multitudes of facts and truths), but more problematic is it blunts human learning and cognition.”
Thomas Vander Wal 2023-03-18

How Technology Influences Social Networks

Stewardship of global collective behavior — 2021-06-21

“Human collective dynamics are critical to the well-being of people and ecosystems in the present and will set the stage for how we face global challenges with impacts that will last centuries. There is no reason to suppose natural selection will have endowed us with dynamics that are intrinsically conducive to human well-being or sustainability. The same is true of communication technology, which has largely been developed to solve the needs of individuals or single organizations. Such technology, combined with human population growth, has created a global social network that is larger, denser, and able to transmit higher-fidelity information at greater speed. With the rise of the digital age, this social network is increasingly coupled to algorithms that create unprecedented feedback effects.”

Read more

pilots and copilots

Simon Terry has a short post on Microsoft’s new Copilot and how we should be careful in fully adopting some of these generative AI tools.

LLMs [large language models] are improvements on past tools but are hardly perfect. In a world where the volume of information means many people scan everything, we need to remain alert for the risks of the models’ false inferences or patterns gone awry.

In the history of aviation, it became apparent that pilot personal relationships are critical to avoiding dangerous incidents. Authoritarian cultures meant senior pilot mistakes went devastatingly unchallenged. —Microsoft Co-pilot

I mentioned pilot training in a recent post — experience cannot be automated. I concluded that automation, in all fields, forces learning and development out of the comfort zone of course development and into the most complex aspects of human learning and performance. In that post is also a quote by Captain Sully Sullenberger, the famed pilot who safely landed a passenger jet on the Hudson River. A movie was made about this, which included the subsequent safety investigation. Tom Hanks plays Sully, and in this sequence of videos we see the difference between the cognition of experienced pilots and the best software/hardware simulation of the day. There is no comparison.

Read more

capitalism > automation > gpt

In my last post I covered in detail how ideas become ideology.

“Ideas lead technology. Technology leads organizations. Organizations lead institutions. Then ideology brings up the rear, lagging all the rest — that’s when things really get set in concrete.” —Charles Green (2009)

Today, the underlying ideology is capitalism. It drives the actions of governments, such as claiming that companies are job creators.

“There is no such thing as a ‘job creator’. There are employers, who hire employees, *because they need them*. And then employers pay the employees less than the value they generate. That’s the system. How did we get to the point at which people behave as if the wealthy are giving a gift to working people? I realize it’s not a new attitude, but it remains proudly f’d up.” Mark Sumner

Read more

auto-tuning work

Are we moving into a post-job economy? Can the concept of the job continue to be the primary way that people work? Building ways to constantly change roles is one path beyond the standardized job, which has decreasing usefulness in a creative, networked, AI-assisted economy. We should be preempting automation by identifying what routine work should be automated as quickly as possible, so that people can focus on what machines cannot do — being curious, creative, empathetic, passionate, and humorous.

One area of dwindling jobs is at the entry level. This creates a challenge for career development. It is difficult to start as a highly skilled worker, especially since our academic institutions focus on knowledge acquisition and provide very little skill development. Standardized curricula are useless for developing those skills listed above that machines lack. Standardization is the enemy of creativity.

Read more

GPT-3 through a glass darkly

I have been using the tetrad (four sides) derived from Marshall & Eric McLuhan’s Laws of Media for several decades. I find it useful for examining emerging technologies, beyond the hype. For example, according to Derrick de Kerckhove, Director of the McLuhan Program in Culture & Technology at the University of Toronto, the Laws of Media state that every new medium (or technology in the broader sense of the word):

• extends a human property (the car extends the foot);

• obsolesces the previous medium by turning it into a sport or a form of art (the automobile turns horses and carriages into sports);

• retrieves a much older medium that was obsolesced before (the automobile brings back the shining armour of the chevalier);

• flips or reverses its properties into the opposite effect when pushed to its limits (the automobile, when there are too many of them, creates traffic jams, that is, total paralysis).

Here is what that tetrad could look like.

Read more

“the future cracked open”

Race Bannon sees AI (or really machine learning) changing many jobs, such as technical writing, in the near future.

“I believe within 5-10 years much of technical documentation will be written by AI. Certainly, the basic procedural stuff (Step 1, Step 2, and so on) will be written by AI, but even the contextual stuff surrounding the procedural documentation (use cases, examples, and implementation tips) will be written by AI eventually too.” —The Future of Technical Writing

In The Atlantic, Derek Thompson thinks that creativity will not save our jobs from AI.

We may be in a “golden age” of AI, as many have claimed. But we are also in a golden age of grifters and Potemkin inventions and aphoristic nincompoops posing as techno-oracles. The dawn of generative AI that I envision will not necessarily come to pass. So far, this technology hasn’t replaced any journalists, or created any best-selling books or video games, or designed some sparkling-water advertisement, much less invented a horrible new form of cancer. But you don’t need a wild imagination to see that the future cracked open by these technologies is full of awful and awesome possibilities. —The Atlantic 2022-12-01

Read more

learning about machine learning

Why is machine learning [ML] important for your business? If you work at Nokia, your Chairman can explain it to you in a one-hour presentation he developed over six months of research. Risto Siilasmaa helped make his network smarter. Everyone needs to know if ML can help with their business problems, but first they have to understand the basics, says Siilasmaa.

  • Digitization has created an explosion of information
  • ML is based on models like logistic regression, which can be fairly easy to understand
  • ML is fitting the model to the data
  • ML is neural networks learning from data sets
  • The more high-quality data and computing power, the fewer mistakes ML will make
  • In a large neural network you can have 100 million parameters in a single layer
  • Flawed outputs can happen if humans confirm incorrect ML conclusions, so human oversight becomes very important
  • A neural network first learns from a data set (time consuming) and then can be tested against other data sets
  • The important work is done by systems of ML systems
  • Machines are still getting faster and more tools are being developed
  • The data we are helping create (e.g. through use of speech recognition) is feeding AI corporations
  • ML can be tricked if you know the underlying algorithms
  • Remember: Garbage-in, Garbage-out
  • Big question: What data will we need in the future to make better decisions?
  • Business and human work is moving to — Low Predictability + High Complexity
  • ML can help to experiment faster and better in order to deal with Low Predictability + High Complexity
  • The future of work: First experiment … then develop a strategy
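Siilasmaa’s point that “ML is fitting the model to the data” can be sketched with a toy logistic regression trained by gradient descent. Everything here — the synthetic data, learning rate, and iteration count — is an illustrative assumption, not from his presentation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: one feature; the label is 1 when the feature is positive.
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Model parameters: one weight and one bias, both starting at zero.
w, b = 0.0, 0.0
lr = 0.5  # learning rate (illustrative)

# "Fitting the model to the data" = repeatedly nudging the parameters
# in the direction that reduces the prediction error (log-loss).
for _ in range(500):
    p = sigmoid(X[:, 0] * w + b)          # predicted probabilities
    grad_w = np.mean((p - y) * X[:, 0])   # gradient of log-loss w.r.t. w
    grad_b = np.mean(p - y)               # gradient of log-loss w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X[:, 0] * w + b) > 0.5) == (y == 1))
```

The same fitting loop, scaled up from two parameters to the 100 million per layer mentioned above, is conceptually what a neural network does — which is also why data quality and quantity (garbage in, garbage out) matter so much.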

Read more