Sunday, 7 December 2014

The End of the Year is Nigh...

The end of the year is nigh, and I'm sure you're already thinking about winding down... and there's the problem. So are your sales people.

"There's no point doing anything in December", they say, "No-one's going to make any decisions until the New Year"

Even if that's true, why not hit the ground running in 2015?

Get your sales team together now, while they're saying things are quiet anyway, for an intensive strategy planning and goal setting session.

And best of all, if you pay before Christmas and hold the workshop before the end of January, I'll give you a half-price offer. Normally, a full-day strategy and goal-setting workshop would cost £995, so I'm only going to charge £495.*

Why would you bother? Because I've doubled sales conversion rates for D&G and FIE, increased profitability at Parker Hannifin by 700%, and I could get similar results for you.

Get ready for 2015. Before your competitors beat you to it.

* Plus travel expenses from the Midlands.

Monday, 28 April 2014

Standards

Recently, on a management development program, we advised managers to put reminders of quality standards next to staff rotas, so when the rota says 'goods in', staff know exactly what they are responsible for during that period of time.

As a manager, you have to make it easy for staff to achieve the standards that you hold them accountable for. It's simply not fair or reasonable to let them get on with it and then lay into them afterwards for not doing the job properly. Define what 'properly' means and they have at least half a chance.

This then sets the standard for performance management, so if a member of staff doesn't achieve the standard that's there in black and white, they're really out of excuses.

Compare that with a situation where the standards are all in a procedures manual, tucked away in the manager's office. Are you really expecting anyone to read it?

Make it easy for your staff to achieve the standards that you hold them accountable for.

And, lo and behold, after we told the managers this, we visited a sweet shop, and look what we found:


Thursday, 27 March 2014

Is Evaluation Still Relevant?

I'm getting ready to release the second edition of Genius at Work with some all-new material and a whole new modelling case study.

I was just updating the chapter on Evaluation - specifically evaluation of learning, but it could apply to anything that you want to measure.

I thought I'd share it with you here to get you all excited about the upcoming second edition, which you can win a copy of in our competition to support this year's Learning at Work Week.

~

Believe it or not, I have heard it said by some trainers that Kirkpatrick's four-stage evaluation model is out of date and no longer valid. I'd like to explain why it is still valid, and how it aligns with the modelling and installation process that we've been exploring together.
First of all, let's hear the prosecution's opening statement.
Kirkpatrick's evaluation method[1], based on research first published in 1959, and even its 'successor', Kaufman's five-level approach[2], are focused on a 'training event', when in fact learning takes place on an ongoing basis, both actively and passively, and these standard evaluation models do not take into account the informal and social learning which takes place.
Kirkpatrick's detractors cite these alternatives:
  • Brinkerhoff's Success Case Method
  • Daniel Stufflebeam's CIPP Model
  • Robert Stake's Responsive Evaluation
  • Kaufman's Five Levels of Evaluation
  • CIRO (Context, Input, Reaction, Outcome)
  • PERT (Program Evaluation and Review Technique)
  • Alkin's UCLA Model
  • Provus' Discrepancy Model
  • Eisner's Connoisseurship Evaluation Model
So that's that. Learning in this modern age is a cool and sexy thing which no longer takes place in a stuffy old classroom, so evaluation is no longer meaningful. And any theory which is more than 50 years old must be untrue.
Well, Newton's first law of motion is over 300 years old, but it still enables rocket scientists to throw a washing machine at a comet with enough accuracy to hit it head-on after seven months of coasting through space.
The problem with this kind of thinking is that it leads to cargo cults. According to Samuel Arbesman[3], we can measure how knowledge changes or decays over time by calculating how many facts within a given subject area are disproven within a certain amount of time. What he describes as the 'half life' of knowledge is the time taken for half of the facts known about a subject to become untrue. When I was at junior school, the number of chemical elements was a known fact; the fixed, immutable number of essential building blocks that all matter is built from. The number was 103. Today, it is 118. Similarly, back then there were nine planets in our solar system. Now there are eight. Where has Pluto gone? Has he run away to a better life orbiting another star? No, we have just redefined what we “know” about the universe that we live in.
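To put rough arithmetic behind that 'half life' (my own sketch, by analogy with radioactive decay, rather than Arbesman's exact model), if facts decay exponentially then the fraction of a field's facts still considered true after t years is:

F(t) = F_0 \cdot 2^{-t/T}

where T is the field's half-life in years. With a half-life of 45 years, for example, half of what was 'known' 45 years ago still stands today, and only a quarter of what was 'known' 90 years ago.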
How does this apply to evaluating learning? Well, saying that the brain is a sponge and when it's full, it's full is a fact. Saying that the brain has a fixed number of cells at birth, and as cells die off they are never replaced is a fact. Both of these facts are now known to be untrue.
However, saying that a person has to learn something before they can use it, and when they use it that action has a resulting effect in the world which can be measured is not a fact; it is an observation. And saying that if you want that person's action to be predictable and structured then it would be sensible for the learning experience to be structured is not a fact, it is an observation of a causal relationship between learning and behaviour. And what term might we use to describe an observation of causal relationships? That's right; a model.
Critics of Kirkpatrick say that the first evaluation level, that of the learning experience, is irrelevant. 'Happy sheets' have a bad name because how much a learner enjoyed the training course has no bearing on their ability to apply their new knowledge. Well, I agree with that. I've seen learners have a whale of a time and not learn a sausage. But that has nothing to do with the importance of evaluation; it is a reflection of the outcome.
I have seen literally hundreds of trainers in action over the years, and what many have in common is a need to be liked, to entertain their learners. Whatever the explicit learning outcomes are, these trainers have an innate need to create an enjoyable experience. One trainer even calls his work 'entertrainment' because he believes that the more entertaining he is, the more people enjoy his courses and therefore the more they learn. Well, like the apprentices in the workshop, they certainly do learn; they just don't learn what you were paying for them to learn.
Vocational training and even academic education has a purpose, and that purpose must surely define or at least shape the learning process. You may have seen the old photograph of girls in a Victorian school learning to swim. While sitting at their desks. Sitting at their desks, waving their arms in the air, pretending to swim. Now, this photograph may have been taken out of context. The Victorians may have taught children the basic movements of swimming before throwing them in at the deep end, which would be a very good idea indeed. I hated swimming lessons at school, mainly because the water was cold and wet, two things which I would prefer that water wasn't. So a bit of classroom preparation would have been very useful; it may have saved me from ingesting so much chlorine as a child.
Kirkpatrick's first evaluation level is not a test of enjoyment; if trainers use it that way, it tells you that they want to be liked more than they want to educate. The first level is an evaluation of experience. The learners may have hated the experience, and that may have been your intention. If your overall objective was “to learn that falling off an unsecured ladder is dangerous” then an unpleasant experience can be very effective. Certainly, we might say that there is no better way to learn the importance of health and safety than to have an accident at work. So the experience doesn't have to be good, it has to be appropriate to the learning objective.
Most of what I've read about Kirkpatrick's model uses the word 'reaction' to describe level one. The evaluation is how learners react to the experience. Are they thrilled? Delighted? Enthralled? Dubious? Confused? All of these are valid responses, but must be judged in the context of the desired outcome of the overall learning experience.
What about the argument that much learning today is informal, unstructured and 'social'? Well, that's true, of course. But it is not an excuse to spend less time in formal training. Informal learning is all very well, but you cannot predict what someone will learn, and when. Again, the apprentices in the workshop experienced plenty of 'social learning', and it was mostly the kinds of things that their managers would prefer they hadn't learned. If you want predictable outputs, you need predictable inputs.
But don't assume that I am only in favour of Kirkpatrick's model. I simply want to make the point that evaluation is vital at every stage of the learning process, from needs analysis through to practical implementation. Every evaluation stage forms a feedback loop which informs and shapes the entire process. Don't get hung up on which evaluation model is right; just pick one and use it, and most importantly, don't hide behind it. Evaluation doesn't prove whether your training is good or bad, right or wrong, valuable or worthless. Evaluation is an opportunity to test for unintended consequences, an opportunity to see what happened that you hadn't expected and to fine-tune your approach to get the results you want.
Instead of saying, “The feedback was poor, the training was ineffective”, learn to ask the right questions so that you can say, “That's interesting, the training delivered a different outcome than the one we intended. How can we refine our methods to get the results we want?”


Overall, if you don't decide how to evaluate a program before you start delivering it, you're just asking for trouble, like the reader who sent me the following email. The context for the question was training for the sales people who work on new housing developments.

Question: How can I show how effective the training we have invested in this year has been?

I am struggling - as cannot 'prove' anything e.g. if sales have increased since last year - then there are many factors that could affect that - market, prices, etc etc

How do we KNOW the Sales Execs are more effective as a result of the training ? - apart from Mystery shopping ?

Any ideas, thoughts please?
And here's my answer:
The important thing in any evaluation is to ask ‘More effective at WHAT?’
If it’s selling houses, then you have to take out external factors, so choose two areas that had very similar sales performance last year, one of which has received training, the other hasn’t, and compare the two.
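If it helps to see that comparison laid out, here's a minimal sketch in Python with invented sales figures (the structure is mine, not the reader's actual data):

# Toy like-for-like comparison: two developments with similar sales last
# year, one trained, one not. All figures are invented for illustration.

def pct_change(before, after):
    """Percentage change between two periods."""
    return 100.0 * (after - before) / before

trained = {"before": 40, "after": 52}    # units sold; received training
control = {"before": 41, "after": 45}    # matched site; no training

trained_uplift = pct_change(trained["before"], trained["after"])
control_uplift = pct_change(control["before"], control["after"])

# The market, prices etc. moved both sites; the difference in uplift is
# the part we can more plausibly attribute to the training.
print(f"Trained site uplift:  {trained_uplift:.1f}%")   # 30.0%
print(f"Control site uplift:  {control_uplift:.1f}%")   # 9.8%
print(f"Attributable to training: ~{trained_uplift - control_uplift:.1f} points")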

Otherwise, if you’re saying that you can’t control for those factors then sales can’t be the outcome to evaluate against, it has to be the ‘right steps’ that lead to a sale when other factors are right.

For example, IF the location is right AND the house is right AND the buyers can afford the mortgage AND they can sell their current house THEN what sales behaviours make the difference between sale and no sale?
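A minimal sketch of that filtering logic, with invented fields and records: only where all the external conditions hold do we look at what the sales person did.

# Sketch (invented data): evaluate sales behaviour only on prospects
# where the external factors were already right, so behaviour is the
# remaining variable.

prospects = [
    {"location_ok": True,  "house_ok": True, "mortgage_ok": True,
     "chain_ok": True,  "rapport_built": True,  "sold": True},
    {"location_ok": True,  "house_ok": True, "mortgage_ok": True,
     "chain_ok": True,  "rapport_built": False, "sold": False},
    {"location_ok": False, "house_ok": True, "mortgage_ok": True,
     "chain_ok": True,  "rapport_built": True,  "sold": False},  # excluded
]

qualified = [p for p in prospects
             if p["location_ok"] and p["house_ok"]
             and p["mortgage_ok"] and p["chain_ok"]]

for p in qualified:
    print(f"rapport built: {p['rapport_built']}, sold: {p['sold']}")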

As you rightly say, you can evaluate whether the sales people are building rapport, for example, with mystery shopping. What you can’t really do is prove a simple causal link between rapport and a decision to buy.

Speaking personally, I bought a new house last December. Our decision factors were location and price. All the sales staff had to do was stay out of the way. We looked at a few locations with different builders and observed a range of sales behaviours and processes. Again, speaking personally, the most influential behaviour of a sales person in this context isn’t to shape the decision but to reinforce it. They can’t make a customer buy, but they can reassure and reinforce a decision that has already been made, or at least half made.

So there is a bigger context to this discussion, which is the whole customer interaction and communication, because every ‘touch point’ influences the customer. Branding, advertising and signage to drive people to a development, the presentation of the show homes, the features of the house and so on. As you said, sales behaviour is one part of that sequence of events, and the question to ask for an evaluation to be meaningful must be about the role of the sales exec. Are they supposed to be selling houses? Or are they supposed to be making customers feel ‘at home’? Or are they supposed to be answering questions about the house? Or are they supposed to be completing purchase paperwork?

It’s easy to say that their job is to sell houses, but is it really? If that were the case, then they would be the major contributing factor to the sale, and we could easily measure the effectiveness of the programme. But if you can quantify the other factors involved, then their role is no longer to sell houses, but to enable and manage parts of that overall process, and that then becomes much easier to measure and quantify. Similarly, the site manager’s job isn’t to sell houses. Or it is, depending on how you look at it. The site managers and contractors also contribute towards the customer’s feeling of reassurance.

But my guess is that you’ve been asked, “Is the training helping us to sell more houses?”, in which case we have to go back to making like-for-like comparisons between developments and quantifying the ROI of the training. It’s quite likely that the answer is yes, but if the most that the sales exec contributes to a customer’s decision is, say, 10%, then that limits the impact of any training. A more valuable question might be to ask what would happen if you didn’t train sales execs at all – you just got people off the street and let them get on with it. Then the question isn't whether the training is effective or not, but WHICH training is most effective.
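To see how that 10% ceiling plays out, here's a back-of-envelope ROI calculation in Python. Every figure below is invented purely for illustration:

# If the sales exec influences at most 10% of the buying decision, even
# a very effective programme can only move that 10%. Invented figures.

units_per_year    = 50        # houses sold on the development
margin_per_unit   = 30_000    # gross margin per house, GBP
exec_contribution = 0.10      # share of the decision the exec influences
training_effect   = 0.20      # assumed improvement in the exec's part
training_cost     = 15_000    # cost of the programme, GBP

extra_margin = units_per_year * margin_per_unit * exec_contribution * training_effect
roi = (extra_margin - training_cost) / training_cost

print(f"Extra margin: £{extra_margin:,.0f}")  # £30,000
print(f"ROI: {roi:.0%}")                      # 100%

Change exec_contribution and the ceiling moves with it - which is exactly why pinning down the exec's real role matters before you evaluate.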
In any open system, you can control the outcome or you can control the process, but you can't control both, because you can't predict what external factors will act upon the system.
You can either give the sales people a script, and accept that they may or may not sell houses, or you can set them the objective of selling houses, and not worry about how they achieve that, as long as they stay within the law.
But you can't tell them what to say and expect it to work, because no script can predict what the customers will say and do.
That would constitute a magical incantation, and if you’ve ever bought a house, you’ll know that the only person who influences your decision is the person you’re buying it with.
This is why it is so important to model the behaviour within the person within the culture, so that you can control for as many of these variables as possible.
Only then will evaluation have any meaning.


[1] Kirkpatrick, D. L., Evaluating Training Programs, 1975
[2] Kaufman, Keller and Watkins, 1995
[3] Arbesman, S., The Half-Life of Facts, 2012

Saturday, 22 March 2014

Talent Management - an Application of Modelling


Some companies have excellent talent management processes. They have regular, fair appraisals, competency frameworks and encourage career progression through mentoring, secondments and networking events.

However, the majority of employees aren't that lucky. They have to attend to their own career development if they want to 'get on'.

One company which recognised this problem was Babcock - at the time, known as Alstec. They had an ageing workforce with decades of technical knowledge locked up in engineers' heads. There was no plan to capture that knowledge before these people retired, so the normal routine was for someone to retire on a Friday and come back as a contractor the following Monday - paid more, and with no incentive to do much of anything other than protect their knowledge, because that was what they were being paid for. Once someone reached this position, it was practically impossible to 'download' their knowledge, because that would make them expendable.

The solution we put in place was a career management coaching program for 25 nominated 'high potential' engineers and project managers. Whilst the selection process was itself flawed - it was the same subjective process used for internal recruitment - at least it was a step in the right direction. Over the next 2 years, those 25 participants received 9 coaching sessions and attended 5 group sessions. The program achieved an 83% success rate, measured by the participants achieving significant promotions or relocations which they would not have achieved by doing what they had done before - sitting back and waiting to be told what their next job would be. Two of the participants, as a result of the program, decided that they didn't actually want career progression, so of the participants who actually wanted to 'get on', we had a 100% success rate.

The application of modelling high performers came in the design of the coaching process itself. Coaching is all well and good, but it relies on the coachee to provide direction, to set goals. The participants in this program didn't have a direction in mind, they were happy doing what they were doing. We needed a structure for the coaching sessions which would naturally lead to career development, by guiding the participants into a sequence of activities designed to maximise their career development opportunities.

We looked at the behaviours of people who are naturally good at driving their own careers forward, talking both to recruiters and to people who had achieved promotions in other corporate environments.

The end result was a 'career cycle' which we mapped each program participant onto. By shaping the focus of the coaching sessions around the cycle, each participant moved forwards simply by setting their own short term goals. Normally, these people would spend no time at all thinking about where they wanted to take their careers. The program forced them to take at least some time out to reflect on this, and to do so in a carefully structured way that focused their thoughts and plans on the activities which would logically move them forwards most quickly.

The result was that Babcock had a more focused group of engineers and managers, now positioned to absorb that tacit knowledge and protect the integrity of the business for many years to come, while also enjoying greater responsibilities and rewards for themselves.

Wednesday, 19 March 2014

Talent is, in Itself, Irrelevant

RSA Premiums has launched a 'Valuing your Talent Challenge' to discover new ways to identify and develop talent within an organisation, so I have submitted an idea based on Genius at Work; that talent is entirely culture dependent.


Everyone's focusing on talent - nurturing talent, the war for talent, and so on - and forgetting a very important point: that talent, in itself, is irrelevant. If that talent cannot express itself in a way which creates value, it's a waste. In fact, we can only ever identify talent in light of an organisation's goals. Do we analyse an investment banker's talent for playing the trumpet? Or a production manager's talent for creating hybrid roses in his garden? No. So, if we don't also look at an organisation's culture, then talent means nothing.

This is the subject of over 20 years of research that is shared in my book Genius at Work.

A culture can be inhibiting, where organisational and tacit rules stifle the expression of talent, or it can be enabling, where those rules allow or even reinforce the expression of talent.

A culture is simply a set of rules (plus a language) which adapts as quickly as the people who make those rules. When managers talk about culture as an ethereal, intangible concept, they're talking about tacit rules - rules that aren't written down anywhere and which are passed on through experience. Our approach maps these rules as they connect with a person's behaviours and beliefs, creating an interaction which makes it either easy or difficult for that person to express their talents.

We already know, intuitively, that you can have the best candidate in the world, on paper, but if they don't 'fit in', they won't perform. What we lack is a way to quantify and predict this. On the other hand, a group of average performers, working as a close-knit team to achieve shared, inspiring goals, will achieve more than a team of superstars, each fighting for the limelight.

My insight is therefore that talent is irrelevant, in itself, and you must look at the relationship between talent and culture to see how to improve performance, which is ultimately what we're aiming for.

Wednesday, 12 February 2014

Diminishing Returns in Customer Service

We're working on a customer service module for a client's management development program and came across some interesting research showing that the connection between customer service and profit is non-linear.


In other words, better service = more profit up to a point, and then profit declines with improvements in service.
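As an illustration of that shape - a made-up quadratic, not the research's actual data - here's a toy model in Python where revenue grows linearly with service level but the cost of providing it grows faster:

# Toy model of diminishing returns: profit peaks at some service level,
# then declines as extra service costs more than customers value.

def profit(service, revenue_per_point=10.0, cost_factor=0.08):
    """Revenue grows linearly with service; cost grows quadratically."""
    return revenue_per_point * service - cost_factor * service ** 2

# Peak where marginal revenue equals marginal cost: 10 = 2 * 0.08 * s,
# i.e. s = 62.5 on this invented scale.
for level in (25, 50, 62.5, 75, 100):
    print(f"service {level:>5}: profit {profit(level):6.1f}")

On this invented scale, profit peaks at a service level of 62.5 and falls away on either side.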

So instead of giving your customers more, give them what they value.

Simple.


Friday, 7 February 2014

Overwhelmed by Supernormal Stimuli

This fascinating article about supernormal stimuli is copied from Quora...
"A wise man rules his passions, a fool obeys them." (Publilius Syrus)
Given the rapid pace of technology, one has to wonder whether or not our brains (and bodies) have been able to keep up with all the new “stimulation” that is available.

Fact is, a frightening amount of research suggests that many of the things we enjoy today would be classified as supernormal stimuli, a term evolutionary biologists use to describe any stimulus that elicits a response more strongly than the stimulus for which it evolved, even if it is artificial. In other words, are "fake" stimuli like junk food and video games too much for our brains to handle?

It’s a question that deserves investigating.

Thursday, 30 January 2014

Three Levels of Activity Design

When we design training exercises, they have three levels.

Well, actually, that's not quite accurate. In fact, all training exercises and activities have three levels. It's just that most trainers don't know about the third level, so the results of the activity are unpredictable and the training program fails to achieve its outcomes. This is a particular problem with 'team building' events run by the people who manage outdoor activity centres and the like.

The result is that your team gets a day off, has a lovely time, but completely fails to address the reasons why they're not performing effectively.

Activity Structure: the rules of the activity itself; what to do with what, where and when.
Learning Outcome: what the trainer wants the learners to take away from the activity, such as the conclusion that 'teamwork is important'.
Learning Experience: what the learners have to covertly do or experience in order to achieve the learning outcome.

The third level, the Learning Experience, is the hidden level, and it goes missing when the trainer focuses only on the learning outcome. The problem with the top two levels is that the learners know that they're happening, so they 'play along' and second-guess what's supposed to happen. In part, we could see this as a conspiracy with the trainer, who of course wants an easy life and wants the learners to write "I learned that teamwork is important" on their feedback forms, thereby proving that the trainer did a great job.

Whilst the learners may be able to recite this great insight, it doesn't mean anything to them. Most importantly, it makes absolutely no difference to how they perform as a team.

What does make a difference is what they do and what they experience without knowing it's happening - until it's over and they find they have achieved something much more valuable than a superficial learning outcome.


What's the difference between conscientious and lazy staff?

Not much, it turns out.

Because we can only see what's happening on the outside, it's easy to think that conscientious, careful, customer-caring staff always do the right thing, and therefore never even think about doing things the wrong way.

Lazy staff, on the other hand, don't care about the customer, and therefore don't think about how their actions affect the customer.

Actually, this isn't quite true.

Here's what a lazy retail sales assistant thinks when it's time to unpack boxes and stack shelves:


Going to hunt for the proper safety knife is time-consuming, so what's the point? (No pun intended. Though it is quite funny.) Doing things properly encroaches into break time. So the assistant uses a handy pair of scissors, thinking they'll be careful, so there's no problem.

Now, what about our caring, careful assistant? They must be well organised and always have the correct safety knife with them, yes? No. OK then, so they must automatically go and get a safety knife when they're unpacking boxes, yes? No.

In fact, both sales assistants have the same initial thought: "I can't be bothered to look for the safety knife, I'll use these scissors instead".

The difference is in how those initial thoughts play out. The lazy assistant doesn't think any further, and goes straight to work on the box.

The careful assistant, however, imagines, in the blink of an eye, what could go wrong, how that will damage the contents of the box, how that will affect the customer and how, in the long run, they create more work for themselves:


Do you see the important bit? The assistant who demonstrates the best customer care doesn't actually care about the customer at all. Well, they might do, but that's not important. They actually care about their own time. By doing the job properly now, they save themselves the trouble of having to take the damaged boxes back off the shelf, doing the returns paperwork, dealing with customer complaints and having to do it all over again, once their manager finds out, as they inevitably will.

By caring about their own time, the careful assistants do things right, which is better for the customer.

The lazy assistant doesn't care that they have to put the problem that they caused right, because while they're doing all that paperwork, they're not stacking shelves.

Modelling high performers using the Genius at Work method always leads to counter-intuitive findings. That's one of the things that makes it so much fun!


Friday, 24 January 2014

Comparing the Average

When we model high performers, we don't compare the highest to the lowest, we compare the highest to the average. The difference between your best and worst staff isn't performance - it's intention.



Let's say you have three people in your team; Alan, Ben and Colin.

Ben just ticks along, doing a fine job but needs support. He shows promise, but his performance right now is just OK.

Colin has to be told what to do, and we often find him taking an unofficial break while he 'looks for something in the staff room'.

Alan outperforms both of his colleagues put together.

So if we want to find out what Alan is doing that enables his high performance, do we compare him to Colin? No, definitely not - although that's what many people would do, because they think it would show the biggest difference.

However, Alan and Colin are not trying to achieve the same goals, and understanding the high performer's goals is absolutely vital.

At least Ben is trying to do a good job, he just doesn't have the mindset, skills or experience to reach Alan's level.

Alan's goal will be something like the satisfaction of pushing himself to do more. If his job carries a bonus then his goal might be lifestyle oriented. The important point is that his goal is not job related, it is beyond that. High performance at work is a means to an end.

Colin's goal will be something to do with the avoidance of work - or, more precisely, with doing something else that he really wants to do. Maybe he likes to spend as much time on Facebook as possible, so he creates situations at work where he can disappear for a while to use his phone. His goal isn't to avoid work, because that's a negative. His goal is to achieve something else, which means that his work suffers, and because work isn't important to him, we see a drop in performance.

Imagine that something happens that requires your attention; a problem at home, illness etc. It takes your focus off work and so your performance reduces. You didn't mean to avoid work, it's just a consequence.

If we compare Alan to Colin, we'll get a big difference in output, but that will tell us nothing about what they are doing differently to achieve that output. When we model a high performer, we need to know what's going on inside their heads.

So we compare Alan to Ben, and what we see is two people with the same goals and intentions - to do a good job - but different attitudes, perceptions and skill levels. They see the world differently, they think about it differently and they employ their skills differently. The resulting difference in performance is an output.

To analyse any complex system, we must reduce the number of variables, so we pin down the inputs - the job requirements and environment - and we pin down the intention - both people are trying to do a good job. The difference in performance that remains is where we find the magic of a high performer.
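To close, here's a toy sketch in Python of that selection rule (names, fields and scores all invented): choose the comparison by matching intention, not by maximising the performance gap.

# Toy illustration: choose who to model the high performer against by
# shared intention, not by the biggest performance gap. All data invented.

staff = [
    {"name": "Alan",  "intention": "do a good job", "performance": 9},
    {"name": "Ben",   "intention": "do a good job", "performance": 5},
    {"name": "Colin", "intention": "avoid work",    "performance": 2},
]

top = max(staff, key=lambda s: s["performance"])

# Only colleagues who share the high performer's intention make a valid
# comparison; what remains is attitude, perception and skill.
peers = [s for s in staff if s is not top and s["intention"] == top["intention"]]

for peer in peers:
    print(f"Model {top['name']} against {peer['name']} "
          f"(performance gap: {top['performance'] - peer['performance']})")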