Coert Visser
A computer company started a rigorous and expensive training program for service technicians. At a certain point, management wanted to know a number of things about this program: Who has followed the training and who hasn't? Is the training program effective? Is the investment justified? Should we continue? Should we adjust anything?
As a Human Resource Manager, you are probably involved in change programs on a regular basis, such as the implementation of new management systems or technologies. Examples include an Enterprise Resource Planning (ERP) system, a new quality system, Competency Management, or a training program for a large category of employees.
Such change initiatives are often very expensive and affect the work of many people in the organization. Yet they are never fully successful for everyone in every respect. Sometimes there are serious problems and much criticism: "Why are we doing this anyway? Do they think we have nothing else to do?" These reactions may be unpleasant, but they are often understandable. People have to change their behavior and invest in the change, while it is not always immediately visible to them what the pay-off will be.
For management, it is extremely important to know how the implementation is going: Is the system effective? How many people are already using it? What goes right? What benefits does that bring? Can we justify the investment? What goes wrong? What can we do about it? Should we expand the initiative? Or should we stop it or change it drastically? How can we energize people?
In practice, the evaluation of change initiatives is often problematic. Large change initiatives are sometimes not evaluated at all, and when they are, it is often done in a sloppy, anecdotal way. Evaluated like this, the findings by definition have little credibility or cogency. Skeptics will find more than enough reasons to remain skeptical.
Sometimes rigorous studies do take place. These usually produce detailed information about the use of the new system and about what goes right and, in particular, what goes wrong. Still, the usability of such studies is often low: the presentation of the material tends to be dry and mainly numerical, and it offers few concrete ideas for further implementation or decision-making.
In his latest book, The Success Case Method, American professor Robert Brinkerhoff presents an evaluation method that deals effectively with the problems mentioned above. In brief, the method uses a short survey to identify the most and least successful users of a new program, followed by in-depth interviews with a number of these cases to document concretely which results were achieved and what stood in the way.
The management of the computer company mentioned above found that no less than 40% of the targeted technicians had not yet followed the training program. But thanks to the purposive, success-focused character of the method, it was possible to establish that the 60% who had followed the program profited significantly from it. The thoroughness of the SCM interviews made it possible to demonstrate convincingly that the newly learned skills led to a speedier installation process and to the quicker resolution of emerging issues that would otherwise have grown into bigger problems. This demonstrably increased customer satisfaction and led to more client loyalty and higher turnover. At the same time, it became clear that many technicians for whom the training program was neither intended nor useful had still attended it, while some technicians for whom it would have been useful had been placed on a waiting list. The study led to increased commitment to the program, but also to a stricter selection process, so that both the efficiency and the output of the program grew drastically.
As this example illustrates, this evaluation method is very worthwhile: it focuses on what works rather than only on shortcomings, it is purposive, and its thorough interviews yield concrete, demonstrable evidence of impact.
These characteristics make the Success Case Method much more specific, credible and usable than is normally the case with evaluation studies.
Coert Visser can be contacted via coert.visser@wxs.nl and http://www.m-cc.nl/MCCarticles.htm
This article was originally published on www.hr.com