How our structures shape us

If you pit a good performer against a bad system, the system will win almost every time.

This quote, from Geary Rummler and Alan Brache’s Improving Performance, sums up many of the symptoms of hierarchical systems, whether they be schools, businesses or prisons.

I believe that the great work to be done at the beginning of this century is to create new organisational models that reflect our humanity. Efficiency and effectiveness are not enough, and in many cases have become mechanistic. It’s time to discard industrial management models that emphasise command and control, and to ensure that individuals at all levels have opportunities to engage with and question the system.

What happens when we don’t question authority? Let me quote again from this article on ABC’s recent re-enactment of the Milgram Experiment:

One of the subjects in the television program was a 7th grade teacher who explained that she didn’t stop shocking the learner because as a teacher she had learned when a student’s complaints were phony. I thought to myself, “Has she electrocuted many students?”

The teacher asked the researcher, “There isn’t going to be any lawsuit from this medical facility, right?” When told that the teacher was not liable, she replied, “That’s what I needed to know.” It is, however, worth noting that this was after she had induced the maximum shock and the learner had demanded that the experiment be terminated.

In this interview, Dr. Philip Zimbardo discusses the 1971 Stanford Prison Experiment, in which students played their roles as guards or prisoners and abuses started within 24 hours:

But on the second morning, the prisoners rebelled; the guards crushed the rebellion and then instituted stern measures against these now “dangerous prisoners”. From then on, abuse, aggression, and eventually sadistic pleasure in degrading the prisoners became the daily norm. Within thirty-six hours the first prisoner had an emotional breakdown and had to be released, followed in kind by similar prisoner breakdowns on each of the next four days.

In A Schoolman’s Guide to Marshall McLuhan (1967), John Culkin wrote that, “We become what we behold. We shape our tools and then our tools shape us.” This reminds me of the question about who is the most important person on board a ship. Is it the Captain, the Navigator or the Engineer? Actually, it’s the Architect, because the initial design influences everything else.

Sometimes, no matter how hard you try, you cannot change the way things work in an organization. The problem may be the organizational model itself and it may be better to leave and create an alternative model than to help keep a flawed one going.

To train or not to train

I’ve been in the training business for most of my life, in one role or another. Training, when it’s needed and done well, can be a most effective intervention.

Training is really effective when you can clearly measure the end performance. My own experience of good training was with helicopter pilots. As the training specialist I was able to observe instructor pilots and watch the junior pilots develop their skills in the aircraft or on the flight simulator. The program was proficiency-based, meaning that once a skill was mastered, the pilot could move on, without repeating the same thing. Avoiding unnecessary training meant significant cost savings as well.

I was reminded of the downside of training by Michele Martin’s post, 5 Reasons You Don’t Need Training, where she shows that inappropriate uses of training include:

  • To make up for poorly designed work processes
  • As a replacement for corrective action
  • To satisfy a “Requirement” for professional development
  • When performance expectations have not been properly developed
  • When you don’t have management understanding and buy-in

I’ve experienced all of these, both inside the organisation and as an outside consultant. I’ve also learned to stay away from “training” projects that really aren’t about training. I’ve discussed this before in Whither ISD, ADDIE & HPT, but it bears repeating because training is costly, in both resources and time (trainers & trainees).

I learned early in my career as a training development officer that training should be the last option, after all other performance improvement measures have been proven inadequate. It’s a good rule of thumb.

Making A Difference

Do all of the small environmental actions of individuals make any significant difference to climate change? According to an article in In These Times, not really:

One barrier standing in the way of meaningful action is fuzzy-headed thinking on the part of those truly concerned about global warming. So worried are these activists, that their solution to the climate change problem is to marshal legions of Americans to change light bulbs, buy a Prius, or do any other number of helpful, but, in the big picture, not too significant feel-good actions.

Some of my work over the past decade has been in performance improvement, and I’ve tried to focus on the real causes of organisational problems, not just the symptoms. Having everyone “do their part” may not be enough to reverse global warming; a more concentrated effort to address the root causes may be needed. The article goes on to make this comparison with the civil rights movement:

Take the Civil Rights movement. Yes, personal reflection and individual change had its place, but can you imagine Martin Luther King telling people to “ask” their school boards to integrate the public schools, or “encourage” corporations not to discriminate, or “tell” their elected leaders to “push” legislatures in the South to do away with Jim Crow laws?

One answer may be to act green in those decisions that can actually make a difference. For instance:

  • When voting, choose the most environmentally responsible candidate or party.
  • Don’t settle for half-measures from any elected official and let them know it.
  • Refuse to be sold short-term economic benefits in place of environmental sustainability.
  • Lobby to get rid of the worst offenders amongst our elected officials.

RFP – you get what you ask for

I’ve pretty well given up responding to RFPs. In most cases they are so poorly worded that you don’t really know what the client wants. Unless you have inside knowledge, responding to an RFP is a crap shoot. I am referring here to RFPs for consulting services, especially performance improvement, and not those requesting commoditized goods or services that can be clearly specified.

As the successful bidder you have to meet the requirements as stated in the RFP, even if they make little sense. A required task might be done more cheaply by a sub-contractor, but because the RFP insists that you deliver it, you price it at double what someone else could do it for. Clients do this so that they only have to manage one contract.

I recently came across this article on The Elephant in the Room, from Hamer Associates [I wish there were an RSS feed on this site]:

And this is where the RFP process breaks down – in the case of human performance management or change consulting – the RFP seeks the cheapest (or most experienced) provider of a solution to a problem; a solution that the organization has already chosen. However, as I reflected on past RFP responses, in too many cases the problem either was not defined, not communicated, or so poorly defined that it begged discussion. And even in cases where the problem was defined, the chosen solution often would not have solved the problem.

I had a similar case a few years back where the client’s RFP required e-learning, but I was quite certain that e-learning would not address their issues. Luckily, I was able to negotiate some time for a “confirmation of the analysis”. My report enabled a significant reduction in e-learning (courses online) and a new focus on performance support and procedural changes.

Too often, consultants do just what the RFP has called for, even if it is not in the best interests of the client. RFPs may be the safest contracting method from an accounting or bureaucratic perspective, but for real organisational performance improvement they are definitely not the best tool.

Court understands what teachers’ college should already know

An Ontario Superior Court has directed the Ontario College of Teachers to find an alternate method to evaluate an Iranian refugee teacher’s qualifications without “official” documentation.

To teach in Ontario’s publicly funded schools, a teacher must have a Certificate of Qualification from the college, which was created in 1996. Officials there deemed her few documents insufficient to judge her ability to teach, and refused her requests for a personal interview or for alternate means of evaluation.

Will this be the first crack that opens the floodgates of competency-based testing for professionals? Too many professional associations have used the premise that only official documentation from a recognised and accredited institution is acceptable to show competence in a field. This is a load of hogwash, but it has helped to create entire industries around training, certification and accreditation. Some of these industries have turned into oligopolies and monopolies, as in the healthcare professions.

The Court has recognised that there is more than one way to exhibit competence in a field. I would go further and say that a formal training or education program has less correlation to actual competence in a field than a well-designed, performance-based evaluation. How you become competent in a field should not matter. What matters is actual performance. However, such an approach would put many training and education programs out of business.

Perhaps this decision is an indication of changes to come. Formal training already accounts for very little in the IT or Web media sectors. Most employers want to see actual products or code, and don’t really care what credentials the worker has, as long as he or she can produce the goods.

Training – the 8% Solution

Does your organisation live in a complicated or a complex world?

When you are developing training, are you addressing complicated or complex issues?

Via Rob Paterson, and the book More Space, come two important distinctions between complicated and complex systems, given by Johnnie Moore in Simple Ideas, Lightly Held:

complicated = not simple, but ultimately knowable (e.g. the wiring on an aircraft)
complex = not simple and never fully knowable. Just too many variables interact.

If you are working with a complicated system, such as an aircraft, then the entire system is knowable, even though it would take much time and practice. Training would be the right tool to develop your skills to fly or fix the aircraft. I know, because I’ve designed aircraft training. There’s a lot of stuff to know and do, but training works and people can eventually master the system.

Complicated systems and the training for them can be controlled. Complex systems and learning how to work with them cannot.

If you are working with a complex system, you will never be able to know everything. For instance, the environment and communities are complex systems that cannot be controlled, only influenced. There are no right answers, there are many ways of trying to achieve your goals and there are too many variables to control.

The other day I was asked about the essence of implementing informal learning, and I believe that it is the act of giving up control. This is scary for many inside the organisation, but it’s the only way to manage in a complex environment. As the world becomes more networked, interdependent and environmentally challenged, all organisations are moving into complex environments.

Here is an indicator of how complex our work is becoming. It used to be that you could master the majority of what you needed for your work. This is no longer the case, as shown by Robert Kelley of Carnegie Mellon University when he asked this research question (via Jay):

What percentage of the knowledge you need to do your job is stored in your own mind?

  • 1986: 75%
  • 1997: 15-20%
  • 2006: estimated 8-10%

This is one more reason why informal learning structures (not procedures) are necessary to support individual learning in a complex environment, where it is impossible to control the process as we could with training. Informal learning is the way in which your employees, bosses and colleagues will have to learn that significant other 92% of knowledge necessary for their jobs – today. It’s not that we don’t need training; we just need a lot more informal learning.

Systems Thinking

I’m working on a couple of projects where I wanted to review some thoughts on systems design so I went to my bookshelf and re-read sections of Jamshid Gharajedaghi’s book, Systems Thinking: Managing Chaos and Complexity: A Platform for Designing Business Architecture.

In both hindsight (evaluation) and foresight (analysis), this advice resonated with me:

There is a need to deal with the problem independent of the solutions at hand. We have a tendency to define the problem in terms of the solutions we already have. We fail most often not because we fail to solve the problem we face, but because we fail to face the right problem. Rather than doing what we should, we do what we can. In the systems view, it is the solution that has to fit the problem, not vice versa.

This book can be a tough slog because it breaks new ground on almost every page, but after three years I still value the methods and the case studies contained within it.

e-Learning Project Management Book

The Canadian eLearning Enterprise Alliance (CeLEA) has recently released its new e-book Plan to Learn: case studies in e-learning project management. Edited by Beverly Pasian (who is working on her PhD in project management) and Dr. Gary Woodill (who has recently become Senior Researcher at Brandon Hall Research), this volume of 22 case studies from 8 countries documents the successes and failures of a variety of e-learning implementations. Case studies are drawn from the higher education, K-12, government, non-profit and corporate sectors. The book also contains a thorough review of the literature on elearning project management. To obtain your free copy, go to www.celea-aceel.ca.

This 192-page PDF from CeLEA covers dozens of case studies on e-learning management (focus = A-DDI-E). Almost all of the cases are academic situations using the online course model, so this book would be best suited to those developing e-learning in higher education. There is little mention of performance support, knowledge management, communities of practice, or informal learning. Nor is there much reference to aligning the learning methods to operational or business requirements.

One exception is a case study on developing math skills for nurses at Mount Royal College. In this case, the work requirement, or gap, was quite clear:

According to a May 2004 study published by the Canadian Medical Association Journal, one in nineteen adults will be given the wrong medication or dosage upon a hospital visit.

The goal was defined, though too academic to my mind:

The goal of the online Nursing Math Tutorial was to ensure that nursing students were successful in their clinical courses without the need for so much time.

A better goal would have been to reduce the number of incorrect dosages. This is obviously the performance they were really trying to achieve.

The design considered the context of the work:

The intention of the tutorial was to provide practical information to the learner, that being the basic principles and illustrations of math sequences and their relation to practical clinical settings.

However, the ADDIE model seems to have been too constraining and resource-intensive:

In the creation of the online Nursing Math Tutorial, the successes arose from creative project management solutions, which conserved resources and maintained a higher quality of student learning as a result.

I’m wondering if a better approach in this case may have been to create a series of contextual visualizations on the necessary math concepts. These could then be placed in an online collaborative environment, such as Elgg, and the learners themselves could have constructed meaning around these visual artifacts, through discussions with each other and with facilitators. Some of the visualizations could also be the test objects, such as, “here is a case, calculate the dosage”.

If you’re in the thick of e-learning course development, you may find some helpful nuggets in these pages. Most of the cases discuss tools for learner-to-learner discussion, so we are seeing clear moves away from mere information dissemination. However, if you’re looking for innovative performance-oriented alternatives to ADDIE, you will have to look elsewhere.

Whither ISD, ADDIE & HPT?

Here is the question of the month from The Learning Circuits Blog:

Are ISD / ADDIE / HPT relevant in a world of rapid elearning, faster time-to-performance, and informal learning?

First, some definitions:

  • HPT – Human Performance Technology
  • ISD – Instructional Systems Design [or Development]
  • ADDIE – a process incorporating Analysis, Design, Development, Implementation, Evaluation, stemming from the Systems Approach to Training (SAT)

SAT, ISD and ADDIE stemmed from the need to train military personnel for the Second World War. They were necessary to train lots of people really fast. My initial experiences as a military trainer were from the point of view of ISD, SAT, & ADDIE.

Later I became immersed in HPT, and found it a good method to analyse certain aspects of organisational performance. One thing that HPT does well is to ensure that training, which is costly, isn’t prescribed unless it addresses a verifiable lack of skills and/or knowledge.

SAT, ISD and ADDIE are excellent methods for developing training when the content and requirements are stable. I spent several years using them to develop helicopter training for aircrew and maintenance personnel, and they were highly suitable for the task. They are not, however, suitable for developing educational programming. The problem with using training development for education is that the performance objectives are not clear: what are you supposed to be able to do at the end of this education, and how do you measure it?

As I have said before, I think that one of the problems with our education system is that there is too much of a focus on getting quantitative data, like testing. These functions are more suited to a “training” system, where the performance requirements are clear, measurable and observable. In education, the performance requirements are fuzzy. There is nothing wrong with either a training focus or an education focus; each one has its merits. The problem is when you try to mix the two.

So, are these methodologies suitable for today? The short answer is yes, but not everywhere. Too often we see training as a solution looking for a problem. Training often worked before, or at least didn’t create more problems, when work processes and organisations were stable. As we move to more networked businesses, training’s weaknesses are becoming evident. These weaknesses are also evident when we don’t really know what the performance objectives are in a constantly evolving society, economy and marketplace.

Enter the two-way web and the ubiquitously connected computer. We now have several new tools to address other performance issues that training was never good for anyway:

  • Unclear expectations – collaboratively constructed wikis and up to the minute blogs
  • Inadequate resources – user generated knowledge bases through tagging and social bookmarking
  • Unclear performance measures – direct feedback from customers via blogs

The Web is also providing an open platform for people to connect and converse with others all over the world, expanding informal education opportunities for millions. Both training and education are being opened up and exposed as individuals create their own networks and converse with each other in their personal searches for knowledge and community.

The Internet is forcing us out of our self-constructed disciplinary boxes. As work and learning become connected online, the barriers are blurring between organisational development, HR, training, education, HPT, etc. This new, amalgamated field requires better tools and integrated theories on which to base our practice.

These models are relevant, but they’re not enough.

Stay focused on the small stuff

A couple of recent articles reminded me about the importance of doing the small things well and possibly reaping large rewards. We often look for magic bullets or big systems to address big problems but it’s usually the little stuff that makes a difference.

Christian Long tells a story about teachable moments and how this statement from a student, “I have to go to the bathroom bad”, can be used for all kinds of learning about grammar. As Christian says, these are “real glimpses of innovation inside the ‘learning’ space.”

Another case in point is an article from Green Chameleon about knowledge management that does not include expensive IT systems. This is a story about housekeeping and concierge staff at a hotel:

Each day, one staff member got to share in about 5-10 minutes a topic of interest just before roll-call which happens at the start of a new shift. The staff get to pick the topic and the day they would like to do the sharing. The topic could be on anything of interest or an incident they considered useful for others to learn from such as how to check-in baggage, how to deal with “weird” guests, where to buy foreign magazines, what Deepavali which is a Hindu festival coming up in October is all about, and so on – in short, topics that would help them deal with their guests better.

In either case, whether taking advantage of a teachable moment or adding a five-minute sharing session, the cost of implementation is negligible. The key is in understanding the business, the issues and the organisational culture so that these kinds of informal learning activities can take place. The only way that I can see this happening is when those in charge remain connected to the day-to-day operations and when there is a climate of trust to try out new ways of working.