Pilot Testing eLearning Programs: Why, Who, and How


Whoever heard of a product launch without rigorous pilot testing? And yet when it comes to our eLearning efforts, we sometimes roll them out without pilot testing and without getting valuable feedback from the people who matter most: the learners themselves.


Having poured all our energy into development and having reviewed our courses till our eyeballs bubble, we might think that we have covered the program from all angles, and that now it's just a matter of rolling it out. Not really. Pilot programs help in many ways. They help evaluate reactions and gather feedback (provided the inputs are taken seriously; a thick skin helps!). Your test group is also likely to spot errors that you were blind to and help you improve your course. What is more, pilots also help in promoting eLearning. How? Success from the pilot tests can be used to market the training in the organization, because your pilot group can really sell the program to others and become your eLearning champions.


So who is the best audience for a pilot test? When it comes to pilot testing our eLearning course, especially for a first-time implementation, it's very tempting to select a bunch of people who are likely to be comfortable and enthusiastic about eLearning. The reason is obvious: they are less likely to give critical feedback, and being more comfortable with technology, they will have fewer issues orienting to the new mode, and hence are more likely to give glowing reports. And hey, who doesn't like a few compliments at the end of all that hard work putting together an engaging learner experience?

However, some practitioners feel that pilot testing is likely to be more effective if run with people who have no particular interest in eLearning and who are even a little technically challenged. (Think of it as something similar to stress testing for automobiles: under ideal road conditions, results can be misleading.) You would want 'lay people' to come and test and give their candid feedback on all the issues they faced. For all you know, that slick course you thought was interactive and engaging may turn out to be your lay person's navigational nightmare!


Everyone has their own preferred method for capturing feedback from pilot programs. More formal methods could include giving the selected group of learners questionnaires for feedback on various dimensions of the learning experience, including content, comfort level, challenges with the new mode, etc. My favorite is to watch people test it out live (but do ensure that your presence is not making them more uncomfortable than they already are). You get to feel their pulse and see their reactions to the entire program (ranging from 'What? Is this for real?!' to 'Wow! This is fun!'). When you observe learners taking their first course, you can also gauge what the problem areas are. Maybe some learners are having trouble with navigation, or some are not sure of the consequences of a given click and are hesitant to proceed (this is very common when they are asked to submit an answer). Maybe some of them want to revisit a certain section, but are not sure how the course will respond ('Will it throw me out, and will I have to start all over again?').

To conclude, pilot programs help – in more ways than one.


  • There is only one possible answer to the question of the makeup of a pilot audience.

    The pilot audience MUST be the same as the final target audience.

    I cannot tell you how many times I have seen people pilot a course with all training personnel or management personnel – and then wonder why the course is evaluated poorly 🙂

  • Gordon Svoboda

    This depends on a few questions & answers that I ask and think about before Pilot testing.

Have SMEs been used in the development of the course, and have they piloted the material? I want them to see and try the materials before general users. This step can help assure there are no content inaccuracies that will need to be corrected later, as well as assist in avoiding potential scope creep outcomes.

Then, in relation to the general users… (Is that who you are asking about as Pilot testers in your question?) I don't think the selection needs to be an "either/or." Better if there is a mix, a "both/and" of the folks you mention in your question. Mix the pilot testers as much as possible for the best results. I prefer to get a "representative sampling," not a "control group" sampling, for Pilot feedback.

    Kranthi, I am assuming earlier testing took place through the development process and the Pilot testing you are referring to is immediately prior to implementation.

    Regards, Gordon

  • Jacky

    On the advice of our Organizational Development Department, I once had the person who was most resistant and quite negatively vocal to the proposed e-learning go and review the product for the department. She came back an advocate and turned the team into supporters.

    I think there is a lot of merit in your approach.


  • I agree with Stephen; a mixture provides a balanced approach to the development of the new platform. It helps ensure all views have been considered from various perspectives, adjusting along the way. This is true in most cases.