Most eLearning courses use multimedia to present information, typically a mix of visuals and narration. To this, we sometimes add onscreen text in the hope of catering to our visual learners. In fact, this is the primary rationale many designers give for combining onscreen text, visuals, and audio that mirrors the onscreen text: it ensures that every kind of learner can process information through their preferred sensory channel, which the learning styles hypothesis holds is how learners receive information. The idea is that it is better to deliver the same content through as many modes as possible, so that no learning style is left out of the reckoning.
Is this learning styles approach good or bad?
The above approach to instructional design rests on the learning styles hypothesis, which has many takers. The hypothesis draws on the information delivery theory, which holds that learning consists of receiving information. Cognitive theory, however, suggests that the approach outlined above can actually create cognitive overload. Ruth Clark, in her book eLearning and the Science of Instruction, makes a compelling case for this through her redundancy principle: “Avoid presenting words as narration and identical text in the presence of graphics.” She considers onscreen text that duplicates the audio component entirely redundant. She does allow exceptions where duplicating audio and onscreen text is acceptable, but for the most part recommends avoiding it. What about learners’ learning styles, then? If we don’t present onscreen text that replicates the audio narration, aren’t we doing visual learners an injustice by making them rely entirely on audio?
To answer that question, consider another thought-provoking question posed by Daniel T. Willingham: “Do Visual, Auditory, and Kinesthetic Learners Need Visual, Auditory, and Kinesthetic Instruction?” He challenges the learning styles hypothesis with some pertinent observations about learning and suggests that rather than focusing on learners’ preferred learning styles, we would do well to go with the mode that best supports the content to be taught. In other words, let the content drive the choice of modality. The basis for his position is that the vast majority of educational content is stored in terms of meaning and does not rely on visual, auditory, or kinesthetic memory.
How does it work in real life?
Come to think of it, this makes a lot of sense. Just because I am an ‘auditory learner’ doesn’t mean I will better remember how a machine or piece of equipment works by listening to someone describe the procedure for operating it. For instance, imagine closing your eyes and hearing this text read out to you: “Most control valves use pneumatic actuators. Air pressure from the actuator positions the valve stem. A strong spring inside each valve will fully open or fully close the valve if the air supply fails.”
Now imagine the same text read out to you while you are looking at a labeled, animated graphic of the machine showing the flow. I’m sure you’d agree that regardless of the learner’s learning style, visuals best convey the essence of the content in this context and are an indispensable aid to learning here. So decisions on presenting content in a certain mode should be based on the type of content more than anything else. Here’s another example: a course on voice and accent training will necessarily lean heavily on presenting most content in the audio mode, because learners need to hear and practice the accent being taught. And this is an accepted instructional decision regardless of the various learners’ learning styles.
If content should really decide the modality of presentation, what about a piece of content that can be presented equally effectively in more than one modality? We can and should do that, provided we avoid placing too many demands on our learners’ capacity to process information. Two assumptions of the cognitive theory of multimedia learning (see Ruth Clark’s eLearning and the Science of Instruction) are that a) all people have separate channels for processing verbal and pictorial material, and b) each channel is limited in the amount of processing it can handle at any time.
If we keep this in mind, we can present content in as many modalities as we prefer, as long as we avoid overloading the learners’ limited sensory channels and creating cognitive overload.