Trainers are facing more accountability for their performance than ever before. Increased emphasis on quality control is one reason. The need for better, more comprehensive training in more areas than in the past is another. Special issues such as affirmative action, ADA and diversity training are so sensitive by nature that they leave no margin for error. Another reason stems from increased expectations. “The customer’s level of [expectation] keeps increasing,” points out Mark Fritsch, director of Northern States Power Co.’s Quality Academy, based in Minneapolis. “So constantly we need to stay ahead of the curve.” Simply put, the bar has been raised, and every successful organization must either keep up or be left in the dust.
Accountability is meaningless without tools for measuring results.
But given these greater expectations for trainers, how can managers measure results? How can measurement improve training programs? And how might it shape the trainer’s role in the future?
Fritsch says Northern States Power makes a dedicated effort not only to measure the impact of its training programs, but also to measure that impact in the most meaningful and objective ways possible. “First, we measure customer satisfaction. Second, we measure the performance change due to training by looking at productivity changes or improvement of customer interaction, which we measure by customer surveys. Third, we measure return on our investment…. Fourth, we conduct pre- and post-training testing.”
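Fritsch doesn’t spell out the arithmetic behind the third and fourth measures, but the conventional formulations are simple. As an illustrative sketch only, and not Northern States Power’s own method: return on investment compares a program’s benefits against its costs, and pre- and post-training tests yield a learning gain.

\[
\text{ROI} = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100\%,
\qquad
\text{learning gain} = \text{post-test score} - \text{pre-test score}
\]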
But it’s not always quite that easy, Fritsch concedes. Conducting surveys is one thing; conducting surveys that deliver meaningful results is something else. To get there, trainers should cultivate good relationships with people whose cooperation they can depend on. “Trainers need to have a partnership with their customers,” he insists. “If [our customers] are running high-performing organizations, they should be measuring the productivity of their workforces [as it relates to our training]. So they often feed us the results.”
Fritsch does acknowledge, however, the dangers of measuring performance the wrong way. Training officials need to distinguish among many competing variables: are employees’ productivity gains really attributable to the training, or to something else? Trainers who rely on second- or third-hand feedback need to take great care in how they interpret results. When feedback is consistently good or bad over long periods, however, what it indicates about the training is probably reliable.
Tests collect a range of useful information.
Formal testing in training programs used to focus primarily on determining how much employees learned from their training. It also was a way of gauging how well trainers were doing their jobs. But now, testing can and must do much more. In addition to measuring what the trainee learned, testing is being used nearly as often to pinpoint what each trainee knows before training even begins. That way, trainers not only can determine how much the trainee has learned from the session, but also can get a better idea of how to shape the session itself.
“Our test defines the person, how best to train them and in which skills they can most use training,” says Charlie Wonderlic, president of Wonderlic Personnel Test Inc. of Libertyville, Illinois. “We provide specific training placement instruction based on what employees know relative to national standards.” Wonderlic says this method helps employers get the most complete picture of their employees’ skills. “Few employers are using this approach,” he comments. “They’re using more in-house tests. But they don’t give you the important national picture.”
Measurement is only as good as the records that are kept.
Records are useful not only for substantiating claims, but also for long-term comparisons. How much did training help a team this year, versus its effect on teams in previous sessions? The ability to make those all-important comparisons, and to substantiate them in concrete terms, can help trainers know when adjustments, or even new strategies, are needed.
“We exist because companies need to keep records about training in order to measure quality and productivity; to meet ISO or professional certification requirements; or to provide reports to such regulatory agencies as OSHA, EPA or FDA,” says Richard Silton, president of Cupertino, California-based Silton-Bookman Systems. Silton’s Registrar 5.2 for Windows™ program, for example, can handle such tasks as monitoring group progress, establishing individual development plans and forecasting training needs.
Increased expectations of training have again shifted the role of trainers. They’ll need to pay closer attention to a set of tools that was once relegated to secondary importance: testing and tracking scores. They must stay on the lookout for better methods of evaluating employees’ skills. Trainers need these measurements to quantify the impact of their strategic roles.
Workforce, June 1997, Vol. 76, No. 7, pp. 102-105.