
Coert Visser

A computer company started a rigorous and expensive training program for service technicians. At a certain point management wanted to know all kinds of things about this program: Who followed the training and who didn't? Is the training program effective? Is the investment justifiable? Should we continue? Should we adjust things?

Many Change Initiatives

As a Human Resource Manager you are probably involved in change programs on a regular basis, such as the implementation of new management systems or technologies. Some examples are an Enterprise Resource Planning (ERP) system, a new quality system, competency management, and a training program for a large category of employees.

Evaluation Is Important

Such change initiatives are often very expensive and impact the work of many in the organization. But they are never fully successful for everyone in every aspect. Sometimes there are even many problems and much criticism: "Why are we doing this anyway? Do they think we have nothing else to do?" These reactions may be unpleasant, but they are often understandable. People have to change their behaviors and invest in the change, but it is not always immediately visible to them to what end.

For management, it is extremely important to know how the implementation is going: Is the system effective? How many people are using it already? What goes right? To what advantages does this lead? Can we justify the investment? What goes wrong? What can we do about this? Must we expand the initiative? Or do we have to stop it or change it drastically? How can we energize people?

Evaluation Is Often Problematic

In practice, the evaluation of change initiatives is often problematic. Large change initiatives are sometimes not evaluated at all, and when they are, the evaluation is often sloppy and anecdotal. Done like this, its credibility and cogency are by definition small. Skeptics will find more than enough reasons to remain skeptical.

Sometimes rigorous studies do take place. Usually these produce detailed information about the use of the new system and about what goes right and, in particular, what goes wrong. Still, the usability of these studies is often low: the presentation of the material tends to be dry and mainly numerical, and offers few concrete ideas for further implementation or decision-making.

The Success Case Method (SCM)

In his latest book, The Success Case Method, the American professor Robert Brinkerhoff presents an evaluation method that deals effectively with the problems mentioned above. The method roughly consists of the following steps:

  1. Preparation and planning: What precisely do we want to evaluate? What is the purpose of the evaluation study? On which people and groups do we focus the study? When do we start and which approach do we take?
  2. Making an impact model: This model defines what success looks like. In other words: if the initiative is successful, what results will there be and how do these relate to our goals? The impact model gives a simple overview of this.
  3. Survey design and administration: This survey is mainly focused on the questions: How is the system used now? Which aspects are used and which are not? And by whom? What goes right? What goes wrong? On the basis of the survey, Best of the Best cases (BOBs) and Worst of the Worst cases (WOWs) are identified.
  4. Designing and conducting interviews: The interviews are primarily focused on identifying and analyzing success cases. What gets used? What results are produced? How do these relate to our goals? It is very important for the interviewers to probe in order to make sure that these are verifiable and compelling success cases. Often there will also be a few non-success interviews, intended to give insight into barriers and to gather suggestions for removing them.
  5. Communication of results, conclusions and suggestions: In this last phase of the study, the use and results of the system are presented. A description is given of which parts function well and which don't, of which factors stimulate and which hinder, and of how further implementation can be supported and improved. In some cases, an estimation of the financial utility can be provided.
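
The BOB/WOW selection in step 3 can be sketched in a few lines. The following is a minimal illustration only, not part of Brinkerhoff's method itself: the survey data, the scoring field, and the 10% cut-off are hypothetical assumptions, since the SCM leaves the actual selection criteria to the evaluator.

```python
# Illustrative sketch of step 3: picking interview candidates from survey scores.
# The respondent records, the "score" field and the cut-off fraction are
# assumptions made for this example, not prescribed by the Success Case Method.

def select_cases(respondents, fraction=0.10):
    """Return (BOBs, WOWs): the highest- and lowest-scoring respondents."""
    ranked = sorted(respondents, key=lambda r: r["score"], reverse=True)
    n = max(1, int(len(ranked) * fraction))
    bobs = ranked[:n]    # Best of the Best: candidate success cases
    wows = ranked[-n:]   # Worst of the Worst: candidate non-success cases
    return bobs, wows

# Hypothetical survey results for four technicians:
survey = [
    {"name": "tech-01", "score": 9},
    {"name": "tech-02", "score": 2},
    {"name": "tech-03", "score": 7},
    {"name": "tech-04", "score": 4},
]
bobs, wows = select_cases(survey, fraction=0.25)
# bobs now holds the strongest case (tech-01), wows the weakest (tech-02);
# both would then be scheduled for the step 4 interviews.
```

The point of the ranking is only to choose whom to interview; the verification of success happens in the interviews themselves, not in the survey numbers.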

The management of the computer company mentioned above found that no less than 40% of the targeted technicians had not yet followed the training program. But thanks to the purposive, success-focused character of the method, it was possible to establish that the 60% who had followed the program profited significantly from it. The thoroughness of the SCM interviews made it possible to demonstrate convincingly that the newly learned skills led to a speedier installation process and to quicker resolution of emerging problems that would otherwise have grown into bigger ones. This demonstrably satisfied customers and led to more client loyalty and higher turnover. At the same time, it became clear that many technicians for whom the training program was not intended and not useful had attended it anyway, while some technicians for whom it would have been useful had been placed on a waiting list. The study led to increased commitment to the program but also to a stricter selection process, so that efficiency and output grew drastically.

A Practical And Useful Evaluation Method

As this example illustrates, this evaluation method is very worthwhile because of the following characteristics:

  1. Simplicity: the method is quick and simple to understand and use.
  2. Goal-oriented: the impact model makes it possible to work in a goal-oriented way. What kind of success do we expect from the change program? What success do we look for?
  3. Success-focused: the focus is on finding success cases and analyzing what goes right. Because of this, the method yields much more usable insight into the question of what causes success. This makes it possible to improve the implementation, to share success stories and to learn from each other's experiences.
  4. Thoroughness: the success cases are analyzed so thoroughly and presented so visibly that their persuasiveness and credibility are maximal.

These characteristics make the Success Case Method much more specific, credible and usable than is normally the case with evaluation studies.

Coert Visser can be contacted via coert.visser@wxs.nl and
http://www.m-cc.nl/MCCarticles.htm

This article was originally published on www.hr.com