At this summer's TJ conference, the focus was on evaluation, so I thought now might be a good time to look at the main way organisations evaluate their training: so-called 'Happy Sheets'.
I recently reviewed what I think is a fairly standard evaluation form. It was for a one-week, classroom-based course for hospital medics. The form broke the course down into its individual lectures and asked participants to rate each one on the following scale:
5 = Excellent, 4 = Good, 3 = Satisfactory, 2 = Poor and 1 = Very Poor.
There were also two open questions: 'Is there any additional material you would have liked covered?' and 'Would you have liked to be taught more or less of any particular area?'
Sound familiar? Estimates suggest that around three quarters of firms that evaluate their training use a similar approach. The trouble is that measuring reactions this way has several problems.
1. Participants were not asked on what grounds they were assessing the lectures. Two people might have rated a lecture as excellent. For one of them this might mean they enjoyed the experience. The room was nice. The fellow participants were interesting. The lecturer engaging. For another the rating meant that the learning was useful to them in their work. They could see how to apply it and how it would make them better doctors.
2. Just because trainees react well to a learning experience does not mean that they will or can do anything with what they have learnt. That will depend on a range of other factors such as the support of their manager. Reactions do not predict impact.
3. While not deliberate, the rating scale is biased. Three of the five ratings are positive (Excellent, Good, Satisfactory) and only two are negative (Poor and Very Poor).
4. The form was handed out in the last session of the week's training. Might it not have been better to wait a couple of weeks, once the participants were back at work, and then ask what they thought of it? I heard a new term at the university this week: 'sticky' learning. It means the extent to which learning is retained over time. To be effective, learning needs to be sticky. That can only be assessed over time.
5. The individual results of the evaluation were averaged to provide an overall score. So, for example, participants rated the lecture they received on communications at 4.1 on average - just above Good. However, we need to remember that learning is also an individual experience. An average may hide a group of people who thought the training was poor: a 4.1 could just as easily come from most participants giving a 5 while a sizeable minority gave a 2.
6. While there is nothing wrong with them, the open questions show that the evaluation was organised solely around the trainers' needs. It might have been interesting to the medics' employers to ask: 'Please state how (if at all) you think you will use this training in your job.' Too often there is a disconnect between trainers and other stakeholders.
Happy Sheets aren't all bad news. Happy customers are great ambassadors for a trainer and a programme. Moreover, some research suggests a link between general satisfaction and the transfer of learning into improved job performance, albeit one tied to perceptions of how useful the training is. In fact, this is the nub of it. What really matters is usefulness, or utility reactions. Rather than gathering data on general reactions, impact evaluations need to measure perceptions of relevance and application.
Richard Griffin
Director of the Institute of Vocational Learning
London South Bank University