Training evaluation is necessary and, in many ways, critical to the success of a business. But because short-term priorities always seem to take precedence, it is typically something we plan to do better in the next course, or maybe next month, or even next year. After all, we’ve managed pretty well up to now, so surely another year can’t hurt!
And even if training evaluation is undertaken, it is usually at the easiest and lowest level: the measurement of student reactions through simple surveys or happy sheets. Reactions to a learning event are important and the happy sheets do serve a purpose, but will they really provide enough hard data for informed decision making when greater investment in training is needed, budgets are cut, competition for resources is fierce, and times get tough?
In the current economic context, people represent one of the primary strategic assets of a business. Cost reduction has become a priority, so measuring the effectiveness of human resources investments is an important and timely topic.
Whether making the decision to invest in people, or to simply maintain or decrease training budgets, training programs that provide immediate impact and maximum overall return on investment are an obvious choice. In this context, the adoption of assessment methodologies becomes a critical imperative for businesses and training organizations.
Return on investment (ROI) quantifies the relationship between a program’s benefits and its costs. One common expression is the benefit-cost ratio (BCR): benefits divided by costs. When the BCR is greater than one, the benefits outweigh the costs and the program is considered a success. When the BCR is less than one, the costs exceed the benefits, indicating that improvements or changes probably need to be made to justify continuing the program.
Another useful and often employed formula expresses ROI as the percentage return on the costs incurred. This has the advantage of speaking to investors and stakeholders in their own language. The formula to calculate ROI in this way is:
ROI (%) = (Benefit – Cost) / Cost x 100
A result greater than 0% means that the program has a net benefit after accounting for the costs involved in running it. For instance, an ROI of 150% means that the program yielded a 150% return on money invested; i.e., the program yielded $1.50 in net benefit for every dollar that the program cost.
A result less than 0% means the program had a net cost: the benefits did not recoup what was spent. When this happens, it may be useful to look for a “hidden” or social benefit that is not quantifiable, such as an increase in employee morale after an orientation program. In these cases, stakeholders and decision makers need to ascertain whether the loss is justifiable given the money being spent. Many leaders would be willing to accept a loss of 3% of several thousand dollars if the investment results in a happier workplace; but 3% of 5 million dollars? That’s some expensive happiness.
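The two formulas above can be sketched in a few lines of Python. This is a minimal illustration; the function names and the dollar figures are ours, not from any standard:

```python
def benefit_cost_ratio(benefit, cost):
    """BCR: benefits divided by costs. Greater than 1 means benefits outweigh costs."""
    return benefit / cost

def roi_percent(benefit, cost):
    """ROI (%) = (Benefit - Cost) / Cost x 100."""
    return (benefit - cost) / cost * 100

# A hypothetical program costing $40,000 that yields $100,000 in benefits:
print(benefit_cost_ratio(100_000, 40_000))  # 2.5 -> benefits outweigh costs
print(roi_percent(100_000, 40_000))         # 150.0 -> $1.50 net return per dollar spent
```

Note that the two measures tell the same story on different scales: a BCR of 2.5 corresponds to an ROI of 150%.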
The exercise is fairly simple as long as we stick to formulas, but in order to determine the ROI of a training program, we need to collect data through assessment and evaluation of what knowledge and skills were gained and what behaviors have changed. The cost part of the formula is easier to determine, and the process of doing so has been described in our white paper. Determining the benefits of the training is more difficult, and involves knowing how the training program should be evaluated.
Training Evaluation Process
In the L&D world, we are all familiar with Kirkpatrick’s Four Levels of Evaluation. The Phillips ROI Methodology took this model a step further, adding the ROI calculation as a fifth level. Here are the five evaluation levels:
1. Reaction and Planned Action – Frequency: typically after each learning event. The first level covers:
   - Customer (trainee) satisfaction and opinions: what did they like, what did they learn, was anything missing? Often gathered as Likert rating-scale feedback.
   - Quality of the facilitator, interest/usefulness of the subject, adequacy of the facilities, atmosphere, scheduling, and additional comments.
2. Learning – Frequency: typically pre- and post-training assessments. The second level covers:
   - Changes in attitude, skills, and knowledge.
   - Pre-/post-tests, test performance, demonstrations, role play.
3. Application and implementation (behavior) – Frequency: pre- and post-training, and at set intervals after training is complete (e.g., 3 months, 6 months, 1 year). The third level covers:
   - Doing things differently at work.
   - Pre-/post-tests, observation, and interviews. Allow time for change; ask the employee, supervisor, and subordinates for their perception of changes in attitude or performance.
4. Business impact (results) – Frequency: regular intervals over the calendar or fiscal year; monthly or quarterly is typical. The fourth level covers:
   - The final overall change for the business as a result of the training program: improved quality, improved production, decreased costs, increased job satisfaction, reduced problems or accidents, increased sales.
5. ROI – Frequency: with each new training event, or when significant changes are made to existing events.
   - Costs of training vs. benefits of training: How did the bottom line change? Were the benefits greater than the costs?
Now that we’ve outlined the evaluation levels, let’s go back to the beginning. It is clear that in order to determine training benefits, we need measurable outcomes. Setting learning and application objectives is part of the training design, so it is critical that we design with the end result in mind. For managers, the key takeaway from this phase is to ensure that the training design includes measurable application outcomes. Learning objectives map to Kirkpatrick’s Levels 1 and 2; application and impact objectives map to Levels 3 and 4, respectively. All of these objectives serve the purpose of calculating ROI, but the application and impact objectives are not necessarily learner-focused: application objectives can be either learner- or organization-focused, while impact objectives should align with the gap or problem that exists at an organizational level rather than an individual level. If your training program does not have these three types of objectives, it cannot be properly evaluated and an ROI cannot be calculated.
Let’s look at an example of each type of objective:
- Course objective: Learners will be able to make 5 entries in a production database in 10 minutes with no more than 1 error (learner-focused).
- Application objective: Learners will be able to reduce the data entry error rate by 50% over the next 6 months (learner-focused, but measured at a group level instead of an individual level).
- Impact objective: Employee time spent correcting database errors is reduced by 25% from last year’s rate (organization-focused).
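Because each objective above states a measurable threshold, it can be expressed as a simple check against collected data. The sketch below is our own illustration of the application and impact objectives; the baseline and measured figures are hypothetical:

```python
# Hypothetical measurements; all values are for illustration only.
baseline_error_rate = 0.08       # data-entry errors per entry, before training
measured_error_rate = 0.03       # errors per entry, 6 months after training
baseline_correction_hours = 400  # hours spent correcting database errors last year
current_correction_hours = 280   # hours spent correcting errors this year

# Application objective: error rate reduced by at least 50% over 6 months.
application_met = measured_error_rate <= baseline_error_rate * 0.5

# Impact objective: correction time reduced by at least 25% from last year.
impact_met = current_correction_hours <= baseline_correction_hours * 0.75

print(application_met, impact_met)  # True True
```

Writing the objectives this way forces the thresholds and measurement windows to be explicit up front, which is exactly what the evaluation later depends on.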
Having the correct objectives is critical, but how do we determine, either during or at the end of training, if learning has occurred? How do we tie the assessment to the objectives?
Ideally, the objective will already specify what it is the learner should be able to do and the parameters that indicate proficiency. At the management level, the responsibility is to create the opportunities for learners to show they have mastered the objective.
Of course, there are behaviors or soft skills that are much harder to quantify. I like to look at output measures in a systematic way:
- Hard data:
- Units developed or built per hour.
- Production or process on time – the number of jobs completed on time.
- Time – the length of time needed to complete a task.
- Equipment utilization – equipment is used correctly and to capacity.
- Cost savings, including time saved because mistakes were NOT made.
- Reduced customer complaints and returns.
- Reduced accidents/scrap/rework/grievances, etc.
- Benefits and soft skills: Sometimes the goal of training is to change attitude. Kirkpatrick & Kirkpatrick (2005) joke that it is impossible to put a dollar value on the benefits of training for non-skills-related topics. The fact is that changes in areas such as leadership, teamwork, attitude and workplace atmosphere are often pursued to achieve outcomes such as reduced turnover and greater productivity.
- Attitude and behavioral changes can be measured only over time. Frequently, change will occur right after training, but employees slip back into old habits; they revert to familiar methods of doing things and/or they are pressured by peers or the work environment to avoid the change.
- To determine if behavioral change is firmly anchored, assessments should be integrated into our learning strategies, ideally spaced at 2-3 months after training, repeated at 6 months after training, and continuing as far as 1-2 years after the initial training.
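A spaced assessment schedule like the one described can be generated from the training date. The sketch below is ours; it uses the 3-month, 6-month, 1-year and 2-year follow-up points suggested above (a helper is needed because Python's standard library has no "add months" operation):

```python
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole months.

    Note: assumes the day of month exists in the target month
    (e.g., day 15 is always safe; day 31 would need clamping).
    """
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, d.day)

training_date = date(2024, 1, 15)
follow_ups = [add_months(training_date, m) for m in (3, 6, 12, 24)]
for d in follow_ups:
    print(d.isoformat())
# 2024-04-15
# 2024-07-15
# 2025-01-15
# 2026-01-15
```

Planning these dates at design time, rather than after delivery, makes it far more likely the Level 3 follow-up assessments actually happen.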
While all four of Kirkpatrick’s levels of evaluation can provide useful information when properly applied, it is clear that Levels 2, 3 and 4 provide the most pertinent information on long-term learning and the benefits of a training program. A combined, multi-level approach to evaluation seems to be particularly effective when applied as a spaced program.
You can click here to see a sample of a blended evaluation schedule… or you can wait until next year…