Is Training Evaluation in E-learning Really Worth the Effort?

An eLearning course in your organization has recorded a completion rate of 100%. That would be the ideal situation for any training manager, wouldn’t it? But does a 100% completion rate mean the eLearning has been effective? You can’t be sure, because learners might have completed the course simply because their managers assigned it to them.

Remember, a training cycle is never complete without evaluating the training program’s effectiveness. Performing a training evaluation of your eLearning courses offers an opportunity to improve them.

Evaluating training effectiveness is important because it sheds light on the following aspects.

  • How well has the eLearning course met the learners’ needs and objectives?
  • What knowledge and skills has it imparted to learners?
  • What desirable change has it brought about in learners’ performance?
  • Is reinforcement of learning provided to learners?
  • What organizational benefits has it yielded?
  • What are the suggestions for improvement, if any?
  • Is the training closed-loop, i.e., does the training program have clearly defined goals, measurable metrics, and an iterative process for improvement?

Though training evaluation in eLearning brings a whole range of benefits, many organizations are unwilling to spend time and resources on a comprehensive ‘after-training’ evaluation. What are the consequences of not having a proactive training evaluation process?

Learners are Unable to Apply Skills

You have made the effort to analyze performance gaps and deliver a suitable eLearning program. Survey results collected immediately after learners take the course reveal that they are very happy with it. But at the end of the year, you discover that learners haven’t been able to apply the learning on the job. With a proper evaluation process in place, this situation could have been averted.

Measurement of ROI becomes Infeasible

Calculating the return on investment (ROI) on training is one way to demonstrate training effectiveness, especially to stakeholders. When there is no training evaluation in place to measure whether learners have been able to apply the skills they learned, measuring the ROI of your eLearning course becomes infeasible.
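The ROI calculation itself is simple once evaluation data lets you put a monetary value on the training’s benefits. A minimal sketch of the Phillips-style formula, using hypothetical figures:

```python
def training_roi_percent(monetary_benefits: float, program_costs: float) -> float:
    """Phillips-style ROI: net program benefits as a percentage of program costs."""
    net_benefits = monetary_benefits - program_costs
    return net_benefits / program_costs * 100

# Hypothetical example: a course costing $20,000 whose measured
# business benefits are valued at $50,000.
print(training_roi_percent(50_000, 20_000))  # 150.0
```

The hard part is not the arithmetic but estimating `monetary_benefits` credibly, which is exactly what the level 3 and 4 data from an evaluation process provides.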

Future Training Programs are Not Contextual

In today’s dynamic marketplace, the need for continuous skill upgradation never loses its importance. A training program that was effective in the past may not be effective now; you need to continuously adapt it to suit your current needs. That is possible only when you know how effective your previous training program was.

One of the most popular models for training evaluation is the Kirkpatrick model, which has 4 levels of evaluation. It was followed by the Phillips model, which added a fifth level of evaluation: ROI. The more recent New World Kirkpatrick model, built on the existing 4 levels of Kirkpatrick’s model, gives a new dimension to training evaluation.

  • Level 1: Reaction
  • Level 2: Learning
  • Level 3: Behavior
  • Level 4: Results

If training evaluation is to yield the right results, it is important to collect the right data, from the right stakeholders, at the right time. We’ll understand the application of the New World Kirkpatrick model when we look at the approaches for training evaluation.

3 Approaches to Consider for eLearning Training Evaluation

Based on a publication by the eLearning Guild, there are 3 approaches to consider.

1. Learner-based Evaluation

According to this approach, data needs to be captured from participants at 2 different points during the learning process.

i. Data is captured immediately after the eLearning course (post-training intervention)

The data captured at this stage forms the basis for training evaluation at Kirkpatrick’s levels 1 and 2.

Level 1 (Reaction) answers the following question: Do participants find the training satisfactory, engaging, and relevant to their jobs?

Level 2 (Learning) answers the following question: Have participants acquired the intended knowledge, skills, attitude, confidence, and commitment after completing the eLearning course?

Confidence (I think I can apply the learning on the job) and commitment (I intend to apply the learning on the job) were additions made to the New World Kirkpatrick model to evaluate if learners would be able to apply learning on the job.

ii. Data is captured after the participant is back on the job (a couple of months after training)

The data captured at this stage is used to evaluate training at Kirkpatrick’s levels 3 and 4, and at level 5 (the additional ROI layer from the Phillips model).

Level 3 (Behavior) answers the following question: Have learners been able to apply on the job what they learned during the training? Processes and systems that reinforce, encourage, and reward performance on the job are taken into consideration at this level.

Level 4 (Results) answers the following question: Has the eLearning course achieved the targeted outcomes? This is checked through observations and measurements of whether the training has had a positive impact on organizational results.

The learner-based approach is low-cost if technology (e.g., online surveys) is used to capture and report the collected data. It can be used across all training programs for continuous evaluation.

2. Manager-based Evaluation

The manager-based approach includes data collection points similar to the learner-based approach, but additionally treats the learner’s manager as an important data source. A survey focusing on levels 3 and 4 (Kirkpatrick’s model) and level 5 (Phillips’ model) is rolled out to managers in order to evaluate the eLearning course.

The survey results provide an estimate of the training’s impact on the job, business results, and ROI from the manager’s perspective. Questions posed to managers also attempt to capture the on-the-job environment.

Because manager surveys require additional effort to conduct, this approach costs more time and money than learner-based evaluation. It is best used for online training programs where manager input is relevant.

3. Analyst-based Evaluation

This approach uses surveys, as in the other 2 approaches, along with analytics. Because of the detailed data collection and analysis it requires, it is best reserved for when you have the budget for an expensive and time-consuming evaluation plan, which is why it is typically used only for high-visibility training programs.

For organizations that do not want to invest in extensive analytics, the Learning Management System (LMS) is a tool that can be leveraged to provide simple analytics in the form of reports based on the learners’ progress in training programs.
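An LMS progress report of the kind described above amounts to grouping learner records by course and counting completions. A minimal sketch over a hypothetical LMS export, where a progress value of 100 means the course was completed:

```python
from collections import defaultdict

# Hypothetical LMS export: (course, learner, progress percentage) rows.
progress_rows = [
    ("Onboarding", "A", 100),
    ("Onboarding", "B", 100),
    ("Onboarding", "C", 40),
    ("Compliance", "A", 100),
    ("Compliance", "B", 75),
]

report = defaultdict(lambda: {"enrolled": 0, "completed": 0})
for course, _learner, pct in progress_rows:
    report[course]["enrolled"] += 1
    report[course]["completed"] += pct == 100  # True counts as 1

for course, stats in report.items():
    rate = stats["completed"] / stats["enrolled"]
    print(f"{course}: {stats['completed']}/{stats['enrolled']} completed ({rate:.0%})")
```

As the article’s opening example warns, a completion rate alone says nothing about effectiveness; reports like this are a starting point, not a substitute for levels 1–4 evaluation.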

For successful training evaluation in eLearning, perfect quantitative metrics are not essential. Instead, use data that can predict or estimate your key performance metrics. That can help you evaluate eLearning courses and give management the right inputs for decisions on future training programs.

Dedicating some time and resources to training evaluation is certainly well worth the effort, as it gives you a clear picture of what needs attention when it’s time to deliver subsequent eLearning programs.
