Ask any training professional how they define success for a training program, and in most cases the answer will be that ‘learners liked the training’. That claim is usually backed by decent customer-satisfaction scores from the evaluations routinely conducted at the end of a program. It’s great when learners like a learning program, but the glow usually lasts only a few days before hard reality sets in. Learners who return genuinely charged about what they learned get caught up in the everyday grind, and the new learning gets pushed onto the back burner. Managers often don’t ask where the impact is either – they are usually so relieved to have their resource back on the job that long-term payoffs go unexamined.
Training professionals the world over have wrestled with measuring the results of learning, even more so in the case of eLearning. Most training departments treat completion of an eLearning course as the biggest measure of its success. Completion is worth tracking, and learners can provide feedback on content quality and on effectiveness in meeting the business goal, but it is not conclusive proof of the usefulness of the course. Nor do assessment scores at the end of an eLearning course give you the complete picture.
When initiating any kind of eLearning program, measurement typically stops at Kirkpatrick’s Level 1 (how learners feel about the instruction) and Level 2 (whether learners met the learning objectives). In the absence of hard data, the transfer of learning to the job (Kirkpatrick’s Level 3) is an assumption at best or wishful thinking at worst. And surprisingly, stakeholders are quite happy not to probe too deeply into whether the eLearning initiative has actually affected learner performance on the job, and whether that improvement in performance has impacted business results (Kirkpatrick’s Level 4 – measuring the links between instructional outcomes and business outcomes).
So why is there so much vagueness around measuring results? The fact of the matter is that it takes a considerable investment of time and money to drill down deep enough to measure how training affects performance and, through it, the organization’s business outcomes. Imagine being responsible for a customer-support eLearning initiative that helped change the face of business at your organization. Perhaps customer ratings of support went up by several notches, with a ripple effect that improved the image and credibility of your organization. Managers, too, were more than happy with the improved performance of their customer-support teams and could trace a direct link between the eLearning programs offered and that improvement. A complete analysis that quantifies the results through a cost-benefit analysis and arrives at ROI numbers (which, incidentally, sits at Level 5, beyond Kirkpatrick’s four levels of evaluation) would provide the best business case for eLearning. However, this works only if we conduct a ‘before and after’ tracking of business results rather than limiting it to individuals’ performance.
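The Level 5 arithmetic itself is straightforward once the hard work of monetizing benefits is done: ROI is typically expressed as net benefits divided by program costs, as a percentage. The sketch below illustrates this with entirely hypothetical figures for a customer-support rollout – the function name and the numbers are assumptions for illustration, not data from any real program.

```python
# Illustrative Level 5 ROI calculation for an eLearning initiative.
# Formula: ROI (%) = (net benefits / program costs) x 100
# All figures below are hypothetical, for illustration only.

def roi_percent(monetary_benefits: float, program_costs: float) -> float:
    """Return ROI as a percentage of program costs."""
    net_benefits = monetary_benefits - program_costs
    return (net_benefits / program_costs) * 100

# Hypothetical 'before and after' estimates for a customer-support rollout:
# benefits monetized from, say, reduced handling time and fewer escalations.
benefits = 150_000  # estimated annual monetary benefit of the initiative
costs = 60_000      # development, delivery, and learner-time costs

print(f"ROI: {roi_percent(benefits, costs):.0f}%")  # prints "ROI: 150%"
```

The formula is simple; the credibility of the result rests entirely on how defensibly the benefits figure was isolated from other influences on the business metric.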
As you can see, both qualitative and quantitative information provide valuable insights, and by correlating improvements in business metrics with the rollout of an eLearning initiative, you have compelling evidence that eLearning works. Yes, it will take effort, but it is worthwhile to know for sure whether you are getting value for your money. As Josh Bersin puts it, “eLearning is a business performance improvement tool, not a training tool.”