I was just updating the chapter on Evaluation - specifically evaluation of learning, but it could apply to anything that you want to measure.
I thought I'd share it with you here to get you all excited about the upcoming second edition, which you can win a copy of in our competition to support this year's Learning at Work Week.
Believe it or not, I have heard it said by some trainers that Kirkpatrick's four-level evaluation model is out of date and no longer valid. I'd like to explain why it is still valid, and how it aligns with the modelling and installation process that we've been exploring together.
First of all, let's hear the prosecution's opening statement.
Kirkpatrick's evaluation method[1], based on research first published in 1959, and even its 'successor', Kaufman's five-level approach[2], is focused on a 'training event', when in fact learning takes place on an ongoing basis, both actively and passively, and these standard evaluation models do not take into account the informal and social learning which takes place.
Kirkpatrick's detractors cite these alternatives:
- Brinkerhoff's Success Case Method
- Daniel Stufflebeam's CIPP Model
- Robert Stake's Responsive Evaluation
- Kaufman's Five Levels of Evaluation
- CIRO (Context, Input, Reaction, Outcome)
- PERT (Program Evaluation and Review Technique)
- Alkin's UCLA Model
- Provus' Discrepancy Model
- Eisner's Connoisseurship Evaluation Model
So that's that. Learning in this modern age is a cool and sexy thing which no longer takes place in a stuffy old classroom, so evaluation is no longer meaningful. And any theory which is more than 50 years old must be untrue.
Well, Newton's first law of motion is over 300 years old, but it still enables rocket scientists to throw a washing machine at a comet with enough accuracy to hit it head on after seven months of coasting through space.
The problem with this kind of thinking is that it leads to cargo cults. According to Samuel Arbesman[3], we can measure how knowledge changes or decays over time by calculating how many facts within a given subject area are disproven within a certain amount of time. What he describes as the 'half-life' of knowledge is the time taken for half of the facts known about a subject to become untrue. When I was at junior school, the number of chemical elements was a known fact; the fixed, immutable number of essential building blocks that all matter is built from. The number was 103. Today, it is 118. Similarly, back then there were nine planets in our solar system. Now there are eight. Where has Pluto gone? Has he run away to a better life orbiting another star? No, we have just redefined what we “know” about the universe that we live in.
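The 'half-life' idea is just exponential decay, and you can sketch it in a couple of lines. The figures below are entirely made up for illustration; they are not Arbesman's numbers:

```python
def facts_still_true(n0: float, years: float, half_life: float) -> float:
    """Exponential decay: how many of a field's original 'facts'
    are expected to still hold after a given number of years."""
    return n0 * 0.5 ** (years / half_life)

# Illustrative numbers only: 1,000 facts, assumed 45-year half-life.
print(facts_still_true(1000, 45, 45))  # 500.0 - half survive one half-life
print(facts_still_true(1000, 90, 45))  # 250.0 - a quarter after two
```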
How does this apply to evaluating learning? Well, saying that the brain is a sponge, and when it's full, it's full, was once a fact. Saying that the brain has a fixed number of cells at birth, and that as cells die off they are never replaced, was a fact. Both of these 'facts' are now known to be untrue.
However, saying that a person has to learn something before they can use it, and that when they use it that action has a resulting effect in the world which can be measured, is not a fact; it is an observation. And saying that if you want that person's action to be predictable and structured, then it would be sensible for the learning experience to be structured, is not a fact; it is an observation of a causal relationship between learning and behaviour. And what term might we use to describe an observation of causal relationships? That's right: a model.
Critics of Kirkpatrick say that the first evaluation level, that of the learning experience, is irrelevant. 'Happy sheets' have a bad name because how much a learner enjoyed the training course has no bearing on their ability to apply their new knowledge. Well, I agree with that. I've seen learners have a whale of a time and not learn a sausage. But that has nothing to do with the importance of evaluation; it is a reflection of the outcome.
I have seen literally hundreds of trainers in action over the years, and what many have in common is a need to be liked, to entertain their learners. Whatever the explicit learning outcomes are, these trainers have an innate need to create an enjoyable experience. One trainer even calls his work 'entertrainment' because he believes that the more entertaining he is, the more people enjoy his courses and therefore the more they learn. Well, like the apprentices in the workshop, they certainly do learn; they just don't learn what you were paying for them to learn.
Vocational training and even academic education have a purpose, and that purpose must surely define, or at least shape, the learning process. You may have seen the old photograph of girls in a Victorian school learning to swim. While sitting at their desks. Sitting at their desks, waving their arms in the air, pretending to swim. Now, this photograph may have been taken out of context. The Victorians may have taught children the basic movements of swimming before throwing them in at the deep end, which would be a very good idea indeed. I hated swimming lessons at school, mainly because the water was cold and wet, two things which I would prefer that water wasn't. So a bit of classroom preparation would have been very useful; it may have saved me from ingesting so much chlorine as a child.
Kirkpatrick's first evaluation level is not a test of enjoyment; if trainers use it that way, it tells you that they want to be liked more than they want to educate. The first level is an evaluation of experience. The learners may have hated the experience, and that may have been your intention. If your overall objective was “to learn that falling off an unsecured ladder is dangerous”, then an unpleasant experience can be very effective. Certainly, we might say that there is no better way to learn the importance of health and safety than to have an accident at work. So the experience doesn't have to be good; it has to be appropriate to the learning objective.
Most of what I've read about Kirkpatrick's model uses the word 'reaction' to describe level one. The evaluation is of how learners react to the experience. Are they thrilled? Delighted? Enthralled? Dubious? Confused? All of these are valid responses, but they must be judged in the context of the desired outcome of the overall learning experience.
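If it helps to make that distinction concrete, here is a minimal, hypothetical sketch (the names and reaction categories are mine, not Kirkpatrick's) of a level-one record that scores the observed reaction against the intended one rather than against enjoyment:

```python
from dataclasses import dataclass

# Hypothetical structure: a level-one record judges the observed
# reaction against the reaction the designer intended.
@dataclass
class LevelOneRecord:
    learner: str
    intended_reaction: str   # e.g. "sobered" for a ladder-safety course
    observed_reaction: str   # e.g. "sobered", "entertained", "confused"

    def appropriate(self) -> bool:
        # Success means the reaction matched the intent,
        # even if the experience itself was unpleasant.
        return self.observed_reaction == self.intended_reaction

print(LevelOneRecord("A. Learner", "sobered", "entertained").appropriate())
# False: the learners enjoyed themselves, but that wasn't the objective
```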
What about the argument that much learning today is informal, unstructured and 'social'? Well, that's true, of course. But it is not an excuse to spend less time in formal training. Informal learning is all very well, but you cannot predict what someone will learn, and when. Again, the apprentices in the workshop experienced plenty of 'social learning', and it was mostly the kinds of things that their managers would prefer they hadn't learned. If you want predictable outputs, you need predictable inputs.
But don't assume that I am only in favour of Kirkpatrick's model. I simply want to make the point that evaluation is vital at every stage of the learning process, from needs analysis through to practical implementation. Every evaluation stage forms a feedback loop which informs and shapes the entire process. Don't get hung up on which evaluation model is right; just pick one and use it, and most importantly, don't hide behind it. Evaluation doesn't prove whether your training is good or bad, right or wrong, valuable or worthless. Evaluation is an opportunity to test for unintended consequences, an opportunity to see what happened that you hadn't expected, and to fine-tune your approach to get the results you want.
Instead of saying, “The feedback was poor, the training was ineffective”, learn to ask the right questions so that you can say, “That's interesting; the training delivered a different outcome than the one we intended. How can we refine our methods to get the results we want?”
Overall, if you don't decide how to evaluate a programme before you start delivering it, you're just asking for trouble, like the reader who sent me the following email. The context for the question was training for the sales people who work on new housing developments.
Question: How can I show how effective the training we have invested in this year has been?
I am struggling - as cannot 'prove' anything e.g. if sales have increased since last year - then there are many factors that could affect that - market, prices, etc etc
How do we KNOW the Sales Execs are more effective as a result of the training ? - apart from Mystery shopping ?
Any ideas, thoughts please?
And here's my answer:
The important thing in any evaluation is to ask ‘More effective at WHAT?’
If it’s selling houses, then you have to take out external factors, so choose two areas that had very similar sales performance last year, one of which has received training, the other hasn’t, and compare the two.
Otherwise, if you’re saying that you can’t control for those factors then sales can’t be the outcome to evaluate against, it has to be the ‘right steps’ that lead to a sale when other factors are right.
For example, IF the location is right AND the house is right AND the buyers can afford the mortgage AND they can sell their current house THEN what sales behaviours make the difference between sale and no sale?
As you rightly say, you can evaluate whether the sales people are building rapport, for example, with mystery shopping. What you can’t really do is prove a simple causal link between rapport and a decision to buy.
Speaking personally, I bought a new house last December. Our decision factors were location and price. All the sales staff had to do was stay out of the way. We looked at a few locations with different builders and observed a range of sales behaviours and processes. Again, speaking personally, the most influential behaviour of a sales person in this context isn’t to shape the decision but to reinforce it. They can’t make a customer buy, but they can reassure and reinforce a decision that has already been made, or at least half made.
So there is a bigger context to this discussion, which is the whole customer interaction and communication, because every ‘touch point’ influences the customer. Branding, advertising and signage to drive people to a development, the presentation of the show homes, the features of the house and so on. As you said, sales behaviour is one part of that sequence of events, and the question to ask for an evaluation to be meaningful must be about the role of the sales exec. Are they supposed to be selling houses? Or are they supposed to be making customers feel ‘at home’? Or are they supposed to be answering questions about the house? Or are they supposed to be completing purchase paperwork?
It’s easy to say that their job is to sell houses, but is it really? If that were the case then they are the major contributing factor to the sale and we can easily measure the effectiveness of the programme. But if you can quantify the other factors involved, then their role is no longer to sell houses, but to enable and manage the parts of that overall process, and that then becomes much easier to measure and quantify. Similarly, the site manager’s job isn’t to sell houses. Or it is, depending on how you look at it. The site managers and contractors also contribute towards the customer’s feeling of reassurance.
But my guess is that you’ve been asked, “Is the training helping us to sell more houses?”, in which case we have to go back to making like for like comparisons between developments and quantifying the ROI of the training, and it’s quite likely that the answer is yes, but if the most that the sales exec contributes to a customer’s decision is, say, 10%, then that limits the impact of any training. A more valuable question might be to ask what would happen if you didn’t train sales execs at all – you just got people off the street and let them get on with it. Then the question can’t be whether the training is effective or not, but WHICH training is most effective.
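To make that like-for-like comparison concrete, here is a minimal sketch with entirely hypothetical figures (the sales numbers, margin per sale and training cost are invented for illustration): growth in the untrained development estimates the market effect, anything beyond that in the trained development is the candidate training effect, and a simple ROI follows from there:

```python
# Hypothetical figures: two developments with near-identical sales last
# year, one of which received the training this year.
trained = {"last_year": 40, "this_year": 52}     # units sold
untrained = {"last_year": 41, "this_year": 44}

# Growth in the untrained area approximates the market effect; any
# growth beyond that in the trained area is the candidate training effect.
market_growth = untrained["this_year"] / untrained["last_year"]
expected = trained["last_year"] * market_growth
uplift_units = trained["this_year"] - expected

margin_per_sale = 15_000.0   # assumed contribution per extra sale (GBP)
training_cost = 25_000.0     # assumed cost of the programme (GBP)

benefit = uplift_units * margin_per_sale
roi = (benefit - training_cost) / training_cost
print(f"Uplift: {uplift_units:.1f} units, ROI: {roi:.0%}")
```

Of course, as the answer above implies, a single pair of developments proves nothing on its own; the more matched pairs you can compare, the more confidence the comparison deserves.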
In any open system, you can control the outcome or you can control the process, but you can't control both, because you can't predict what external factors will act upon the system.
You can either give the sales people a script, and accept that they may or may not sell houses, or you can set them the objective of selling houses, and not worry about how they achieve that, as long as they stay within the law.
But you can't tell them what to say and expect it to work, because no script can predict what the customers will say and do.
That would constitute a magical incantation, and if you’ve ever bought a house, you’ll know that the only person who influences your decision is the person you’re buying it with.
This is why it is so important to model the behaviour within the person within the culture, so that you can control for as many of these variables as possible.
Only then will evaluation have any meaning.
[1] Evaluating Training Programs, Kirkpatrick, 1975
[2] Kaufman, Keller, and Watkins, 1995
[3] The Half-Life of Facts, Arbesman, 2012