Training Evaluation: a mug’s game

“Efficiency is doing things right. Effectiveness is doing the right things.” —Peter Drucker

Dan Pontefract is quite clear in Dear Kirkpatrick’s: You still don’t get it:

Let me be clear – training is not an event; learning is a connected, collaborative and continuous process. It can and does occur in formal, informal and social ways every day in and out of your job. In your email, with the statement “what happens after the training event”, you have cemented (again) the root cause of the Kirkpatrick model. The ‘event’ is not solely how learning occurs. Whether in the original model, or the weakly updated model, the single largest flaw with the Kirkpatrick Four Levels model is the fact its basic premise is that learning starts with an event. Once you ultimately get past this stumbling block, the Kirkpatrick Four Levels model will potentially become relevant again, should it be suitably updated again.

Dan is not the first person to show the limitations of the Kirkpatrick model. Eric Davidove and Craig Mindrun wrote in Verifying Virtual Value:

The key to determining the business value of networked learning, however, is a more expansive view of the kinds of outcomes delivered. Traditional training analyses, such as Kirkpatrick’s four levels of evaluation, were designed to assess solutions that are delivered in a linear manner. Since networked or collaborative learning solutions are informal, integrated with the workflow and driven by the learners, these traditional assessments will not work.

Event-based instructional interventions, with the course as the learning vehicle, are an outdated and useless way to look at workplace learning. Courses are an artifact of a time when information was scarce and connections were few. The internet is an environment optimized for ABC learning [Anything But Courses].

In “Not Your Father’s ROI”, Jay Cross suggests:

Make a hypothesis of cause and effect. Interview a statistically significant sample of the workforce to see if the hypothesis holds up. Often, results obtained from social science research methods will produce more meaningful feedback than solid counts of the wrong thing.
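
By way of illustration only – none of these numbers come from Jay – here is a minimal sketch of what interviewing "a statistically significant sample" could involve, assuming a hypothetical workforce of 2,000 people, a simple yes/no interview question about the hypothesised cause and effect, 95% confidence and a ±5% margin of error:

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed to estimate a proportion, with finite-population correction.

    Assumes simple random sampling; z=1.96 gives ~95% confidence and
    p=0.5 is the most conservative (largest) estimate.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # adjust for a finite workforce

# Hypothetical 2,000-person workforce: roughly 323 interviews
print(sample_size(2000))
```

The exact figure matters less than the point: a few hundred structured interviews can test a cause-and-effect hypothesis more meaningfully than a precise count of the wrong thing.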

Changing our training evaluation models shouldn’t be a management focus anyway. That’s looking at the wrong thing. Even if we get 100% efficiency, and some level of effectiveness, we’re still missing 90% of the picture, as shown in this graphic by Charles Jennings.

Training more efficiently is a mug’s game. Managers and workplace performance professionals should focus on Working Smarter, by helping people learn and develop socially.

24 Responses to “Training Evaluation: a mug’s game”

  1. Judith Christian-Carter

    In my experience evaluating any training programme (reactions, learning and resulting behaviour/performance – forget the ROI bit as it’s usually not possible) is not a quick or cost-effective procedure.

    I’ve always wondered what the fuss is all about. If a training programme has been shown to be necessary by a proper needs analysis and has been designed and delivered in the most cost-effective manner, why on earth would you need to evaluate it, irrespective of the model chosen?

  2. Kelly Garber

    @hjarche – nice post summarizing learning professionals’ frustration with the Kirkpatricks. They could benefit from some informal and social learning, for sure. One has to believe that, while they seem slow to arrive, they will eventually get there – they have a responsibility to get there.

    @JudithELS – no kidding, right? Those of us in the trenches day-to-day think little of all of this. We design after having absorbed the issues, the pain, the deficits, the successes, the known, the unknown, the confusion and even the politics …models are for the bean counters, I suppose.

    • Harold Jarche

      Just a point on models, especially mental models, Kelly. If they’re not explicit then they’re implicit and we have little hope of changing them.

  3. Anthony

    Very interesting and thought-provoking. Kind of screams common sense, although I never really looked at it like this. Nice post – thanks.

  4. David Cadogan

    My interest happens to be in the way work-based learning happens within highly regulated public services in which performance is subject to scrutiny.

    In this context, organisations I work with need to:

    1. Identify the skills / competences they require from their staff in order to be able to create and deliver the product / service they specialise in

    2. Identify the most appropriate personnel to recruit and to develop

    3. Identify the most appropriate manner to deliver the training and facilitate learning

    4. Ensure that the key elements of the training delivered have ‘stuck’

    5. For compliance reasons, have continual monitoring systems in place to provide early warning of a deterioration in skills / performance

    Given that public sector departments in the UK and elsewhere are facing cutbacks and budget decreases of staggering proportions, senior staff and decision makers have to know whether learning provided is appropriate and is meeting the needs of their organisation and their clientele. It is all very nice and fluffy to say that evaluation is a meaningless exercise because strong TNA was in place to identify the need for training to be provided, but organisations also need to know every penny / dollar is being wisely spent.

    Organisations tend to be staffed by human beings each of whom has vested interests – not least in keeping their jobs, so independent monitoring has to be in place.

    As an educator I totally agree that learning is a process which starts well before a learning ‘event’ and will continue well after. Therefore understanding the limitations of evaluating an exercise or set of exercises is vital. Critically, within the sector I operate in, evaluation is generally seen as ‘part of the problem’ rather than ‘part of the solution’ when it comes down to providing senior staff and budget holders with meaningful information in respect of the appropriateness and efficacy of training provided.

    In short, evaluation MUST play an important part in organisational decision-making, but evaluation processes need to operate within clear-cut boundaries and their limitations clearly mapped out. In addition, the information provided through evaluation processes needs to be less ‘techno-speak’; too often evaluations are written for, and can only be understood by, psychology post-graduates.

  5. Harold Jarche

    I agree with many of your points, David. I would differ on priorities though. It’s not better training evaluation, but better performance measures that most organizations need. Measure what matters and what can be measured. Get other data points to corroborate these, as Jay suggests.

    Accurately measuring only 10% of how people learn on the job isn’t providing much value to the organization.

  6. James McLuckie

    I am going to stand up for Kirkpatrick’s model very slightly and say that, if I am going to evaluate a formal learning activity then I find that it is a useful *guideline* (not methodology) of things to consider. (That said, I haven’t referred to it for years. It’s not hard to remember what it involves!)

    I am all for evaluation, for a couple of key reasons. Firstly, it doesn’t matter how much analysis I do, or how carefully I put measures in place, I still want to know if they worked and what could have been done better. That holds true if it is a formal course/workshop, or implementing the conditions to help informal or social learning to thrive.

    Secondly, I actually find it stimulating to follow up on learning activities. It’s incredibly interesting to talk to learners about their experiences and what their take on them has been.

    Harold, your comment about measuring only 10% of how people learn is key … but I think that to get the point home about informal and social learning being more effective, at this stage we need to take the time to properly review the formal aspects to prove that they’re often ineffective.

    However, I suspect that many only evaluate what suits because, in many cases, it might involve revealing that it didn’t work. I haven’t come across many egos capable of saying “Oops, that workshop we spent £10,000 on wasn’t really worth the effort. Sorry about that!”

    • Harold Jarche

      One of the easiest, and perhaps most effective, forms of training validation would be to make all courses 100% voluntary, if agreed upon by the worker and his or her co-workers. Let the market decide. Notice I say validation, not evaluation. Valid training is effective, not necessarily efficient.

  7. Charles Jennings

    David, I understand the constraints you’re working under in highly regulated Public Sector organisations in the UK. I’ve been there myself to some extent.

    I would argue that if HR and L&D departments can switch their focus from inputs (skills/competencies) to outputs (employee performance) it opens up a whole new set of opportunities – whether in public sector, private sector, highly regulated or other organisations.

    We all know that improving skills/competencies is only a relatively small factor in overall employee performance and has only moderate impact on performance improvement. We often conveniently forget that fact. Take a look at the research from organisations such as the Corporate Leadership Council. CLC highlighted the fact that exposing employees to stretch assignments and new experiences and encouraging reflection on those experiences had approx. 300% greater impact on performance than simply improving knowledge, skills and competencies. Yet the HR/L&D departments of many organisations persist in focusing solely on competency maps and skill matrices – c’est la vie…

    What does this have to say about evaluation? To me it says that evaluation is vital – we need to know how employee performance is contributing to organisational performance – but not the evaluation of what we generally see as ‘learning’ (i.e. knowledge and skills acquisition) – Kirkpatrick Level 2 – and what goes on in the L&D department.

    We need to evaluate performance in the workplace, but not necessarily in terms of Kirkpatrick Level 3 either. Kirk 3 is based around evaluating the transfer of knowledge/skills obtained in events – ‘courses’.

    My own view is that the Kirkpatrick model, despite the tweaks Don, Wendy and Jim have made over the past 50 years, is still not a fit-for-purpose approach for the type of evaluation we really need. Its deep ties to learning as a series of events, and the concept that evaluation follows on from a learning ‘event’, are just one blocker.

    We should be evaluating performance in terms of organisational and business metrics – not some metrics dreamed up by the L&D department or educational researchers. Metrics need to focus on organisational outputs – such as customer satisfaction, response times, adherence to process etc. etc.

    If we focus on these we’ll find ourselves measuring not just knowledge, skills and competencies but also employee attitudinal and behavioural attributes and changes. That’s where the real organisational ‘gold’ is located.

  8. Clark Quinn

    While I agree with my colleagues Harold and Charles on principle, particularly the need to move beyond the event, I think we do Kirkpatrick (collectively 🙂) a disservice if we don’t recognize that his level 4 was (or could be) “performance in terms of organizational and business metrics”. And we do need metrics for our initiatives, to decide whether to keep, tweak, or kill.

    IF it’s a training intervention, that is. But increasingly our needs are beyond what can be taught, and our teaching should be stretched over time anyway. So it’s not event-based, the intermediate steps may not make sense, and consequently the Kirkpatrick model loses value. If we introduce a job aid, what’s level 2 or 3? Similarly, if we put in a social network system.

    But we should have expectations of what outcomes we want (yes, org/biz ones), and then we should be seeing if we’re achieving them. And, Judith, I wouldn’t trust to faith that our analysis is perfect and that ‘cost-effective’ is the most important criterion.

  9. James McLuckie

    Charles, I absolutely agree with you that employee performance is what organisations should be focussing their evaluation efforts on.

    This seems like “hit over the head” common sense, but my own recent (and brief) experience in a public sector organisation suggests that common sense isn’t the most valued trait out there in the big bad world.

    This organisation has the standard formal training set-up: corporate training programme, skills matrix, personal development plans etc. Rather than change the world in a day, I decided to work with what was there. For example, I redesigned the development plan so that, rather than have them record “I want to do X course”, it prompted them to define what the benefits to the team, department or organisation of going on that course would be. And I highlighted that any formal learning activity should be followed up with a conversation between manager and staff member to discuss how the team etc. would best benefit from the experience.

    However, I had to fight like a dog to get this through because “it will take too much time”. And this wasn’t the L&D or HR department saying this, or even the staff members or line managers, but the senior team. The very people who, I would have thought, would have been the most appreciative of trying to encourage staff to think in terms of benefits to the business.

    Harold, I think it’s a nice idea to have all courses as voluntary, but I’m not sure how practical this is. Certainly, for “developmental” courses it makes more sense, but for roles with technical or compliance considerations, I think there is still a very good argument to be made for mandatory training.

    • Harold Jarche

      Thanks for your comment James. I’m still a promoter of testing for proficiency on the job rather than training in the classroom and hoping for competence.

  10. James McLuckie

    I agree with that in the vast majority of cases, Harold. But to give you an example of where I think training courses work, when I was a student I worked part-time in two call centres.

    One company trained me “on-the-job”, i.e. I sat on the phone with someone beside me. The other had me in a training environment, where they set up exercises and scenarios that I would face when I got onto the telephone.

    I felt far more prepared and confident in the second instance than I did in the first. The time I spent with my fellow trainees in the training environment (which was very social – the trainer understood the value of talking and discussing, not just barking instructions at us) helped me form some very good relationships with them. I felt far more isolated in the first scenario and, as a result, enjoyed the job a lot less.

  11. Robert

    Researching the effectiveness of your training will always be essential. The change you prescribe is a change in what should be researched. People shouldn’t research “a single training” but all the (2.0) factors that come into play while learning a new skill.

  12. Avishek

    Interesting post and views. I couldn’t help but think back to this post from Jane Bozarth. I have quoted below a couple of paragraphs.

    “The linearity and causality implied within (Kirkpatrick’s) taxonomy (for instance, the assumption that passing a test at Level 2 will result in improved job performance at Level 3) masks the reality of transferring training into measurable results. Many factors enable — or hinder — the transfer of training to on-the-job behavior change, including support from supervisors, rewards for improved performance, culture of the work unit, issues with procedures and paperwork, and political concerns. Learners work within a system, and the Kirkpatrick taxonomy essentially attempts to isolate training efforts from the systems, context, and culture in which the worker operates. Brinkerhoff, discussed below, describes this as evaluating the wedding rather than the marriage.”

    “To be fair, Kirkpatrick himself has pointed out some of the problems with the taxonomy, and suggested that in seeking to apply it, the training field has perhaps put the cart before the horse. He advises working backwards through his four levels more as a design, rather than an evaluation, strategy. That is — what business results are you after. What on-the-job behavior/performance change will this require? How can we be confident that learners, sent back to the work site, are equipped to perform as desired? And finally, how can we deliver the instruction in a way that is appealing and engaging?”

  13. Tom Gram

    Hey Harold;
    Lots of good discussion on this one. When training was primarily formal, evaluation strategy was simpler. Informal learning, and especially methods to integrate learning and work, are generating (welcome) confusion.
    Dan and the Kirkpatricks offer very representative examples of competing positions in the learning profession at the moment. I’m not sure the polarization is serving us well though. There are truths on both sides of the fence, and the artificial dichotomy between formal and informal learning is broadening the gap. Some thoughts on that on my blog here: http://bit.ly/gOuVfY

    Say hi to Sackville for me.

  14. Steve

    I wonder if it’s not the tools themselves that are at fault for their situational misuse? A hammer doesn’t drive a screw well, but I shouldn’t blame the hammer for that. I do think that the level of Kirkpatrick’s scale is directly proportional to its value (-.99). But it’s how you use the tool and interpret the results that matters. Strategic intent, not a matter of course…

    I’m a bit off kilter about this comment from Charles. In concept I agree with his point (I’ve drawn up similar models) but I’m not on board with this statement:

    “We all know that improving skills/competencies is only a relatively small factor in overall employee performance and has only moderate impact on performance improvement. ”

    This is entirely situational. I think this twists the meaning of the data we do have. Skills and competencies can be a tremendous factor in some problems. But as assumptions go, it’s often “over-assumed” as a contributing factor. As a whole, training has a moderate impact on performance improvement given the other things we ignore. But Skills and Competencies are often the price of admission, a condition of successful performance. Minimizing these factors as trivial oversteps the bounds of reason, in my opinion.

    Let’s not minimize the role of skills and competence in performance. Instead, let’s highlight the role of other performance influences. Optimism vs. Pessimism. The training people really don’t want to play with you anymore when you bring up how ineffective they are in the majority of situations where there is an assumed correlation. And we really need the training folks as allies… And training can be the best solution in some cases.

  15. Mark Vickers

    Harold,

    I think you make some great points but you’re overstating the case. The research I’ve been involved with (I placed a link below) indicates that training evaluations really are associated with better organizational performance. That makes sense. Large organizations may spend millions in any given year on formal training, and they absolutely should have more than a “smile survey” to judge which training programs have the most positive impact.

    The second point I’d make is that the 90% figure, which I’ve seen used many times for informal or experiential learning, is very soft indeed. The truth is that this figure is going to vary from organization to organization and even person to person. Moreover, it’s often the most valuable and academically intense learning that is “trained.” For example, it’s unlikely that a programmer is going to “pick up” C++ programming through informal learning. There will usually be formal training or education modules involved because those are essential to learning the foundational knowledge that is critical to that position.

    The third and last point I’d make is that “learning socially” can also be a mug’s game, with experienced professionals wincing every time a neophyte walks in the door, thinking, “Oh jees, now I’ve got to spend the next few months teaching this kid the ropes and watching my own productivity go down the drain.” In some cases, this kind of learning doesn’t form bonds as much as it does resentments.

    So, I’d be careful making blanket statements. My feeling is that, as long as formal training and development are an essential and expensive tool in some industries, professions and organizations, they should be evaluated in whatever ways make most sense. Otherwise, learning becomes like flying an airplane without any instrument panels: foolish and potentially very costly.

    http://www.astd.org/NR/rdonlyres/C42C06B4-089C-42E3-88CB-941399FD887B/0/ValueofEvaluationExecutiveSummary.pdf

    • Harold Jarche

      You’re right, Mark, 90% is a generalization. Data show that knowledge workers spend 95.12% of their time not in formal training. Robert Kelley showed that knowledge workers have less than 8% of the knowledge they need to do their jobs in their heads. That means they have to get it somewhere else and the best way would be socially, through knowledge management or via a performance support system. Learning socially does work, as evidenced by new hire training at New Seasons Market. Training (course) evaluation is a mug’s game if that’s all you do. The focus on workplace performance needs to expand to informal & social learning.

      Formal training is no longer seen as an “essential” part of workplace performance in many organizations, some of which I am working with. That’s why it is being outsourced.
