What happens when you connect unthinking computer programs with a culture of obedience and compliance? Algorithms run much of society and business today, from applying for a mortgage to determining which passengers to eject from an overbooked aircraft. Coupled with authoritarianism, algorithms can produce devastating results, says John Robb at Global Guerrillas.
“If a corporate algorithm yields a terrible result, smart organizations admit the failure. They admit it didn’t work to both your customers and employees. Algorithms don’t have feelings. They won’t cry if you talk trash about them. Also, smart organizations don’t punish employees for raising the flag on a broken algorithm. One last thought. Smart organizations know what their algorithms are (or that they even exist) and how to fix them. Dumb organizations see the process as inviolable. It should be easy to spot the difference between these organizations by the number of disasters seen online.” —John Robb
One way to develop the smart organization promoted by John Robb is to understand algorithms. Jason Brownlee provides an overview of eleven machine learning algorithms, grouped by how they ‘learn’ and what they do. Decision tree algorithms, for example, would be suitable for assessing a mortgage application.
“Decision tree methods construct a model of decisions made based on actual values of attributes in the data.
Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.”
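The forking that Brownlee describes can be sketched in plain Python. The example below is a toy, entirely hypothetical mortgage decision tree: each node tests one attribute of the record and forks until it reaches a leaf, which is the prediction. The attribute names and thresholds are illustrative assumptions, not a real lending model, and a real tree would be learned from training data rather than written by hand.

```python
# Toy decision tree for a hypothetical mortgage application.
# Each node tests one attribute of the record and forks until it
# reaches a leaf, which is the prediction. Attributes and thresholds
# are illustrative assumptions only, not a real lending model.

def predict(record):
    if record["credit_score"] >= 680:
        if record["debt_to_income"] <= 0.36:
            return "approve"
        else:
            return "refer to underwriter"
    else:
        if record["down_payment_pct"] >= 0.20:
            return "refer to underwriter"
        else:
            return "decline"

applicant = {"credit_score": 710, "debt_to_income": 0.30,
             "down_payment_pct": 0.10}
print(predict(applicant))  # follows the forks down to a decision leaf
```

In practice such trees are trained automatically from labelled records, which is precisely why the questions they end up asking, and the data they were trained on, deserve scrutiny.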
Even a cursory exploration of these algorithms is a challenge for anyone without specialized mathematical training or intuition. So what can we do? A quick look at the Wikipedia entry shows that machine learning algorithms are about “prediction-making through the use of computers” to “produce reliable, repeatable decisions and results and uncover hidden insights”. However, “Machine Learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use, thus digitizing cultural prejudices”. Since these algorithms are running many aspects of work and society, we had better get informed and find trusted sources of knowledge on machine learning and ‘artificial intelligence’. This is the core of PKM. If you don’t know something, find trusted people who do.
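The Wikipedia warning about digitized prejudice can be made concrete with a minimal sketch. The data below is fabricated purely for illustration: a naive ‘model’ that simply learns the most common past decision for each group will faithfully reproduce whatever bias the historical records contain.

```python
# A toy illustration of how a model trained on biased historical
# data reproduces that bias. All data here is fabricated.
from collections import Counter

# Hypothetical historical decisions, skewed against group "B".
history = ([("A", "approve")] * 80 + [("A", "decline")] * 20
           + [("B", "approve")] * 20 + [("B", "decline")] * 80)

def train_majority_by_group(data):
    """'Learn' the most common past outcome for each group."""
    counts = {}
    for group, decision in data:
        counts.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train_majority_by_group(history)
print(model)  # {'A': 'approve', 'B': 'decline'} -- the bias is digitized
```

Nothing in the training step is malicious; the prejudice comes entirely from the data, which is why knowing what your algorithms are and what they were trained on matters.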
As computers take over routine work, we cannot turn a blind eye to how they make decisions. Not only do we have to focus on human work; we also have to keep a careful eye on what the machines are doing, why they are doing it, and how they are making their decisions. For complex work we need strong (knowledge) networks and loose (temporary & negotiated) hierarchies. With the machines making so many decisions today, we cannot control them with authoritarian organizational models. Leave the hierarchies to the algorithms.
Further Reading: RAND: The Risks of Bias and Errors in Artificial Intelligence