CommLab India Interviews Kevin Alster: Leveraging Tech for Video-Based Learning
This blog explores how emerging technology can be leveraged to design video-based learning for impactful corporate training.
Welcome to CommLab India’s eLearning Champion video podcast featuring an interview with Kevin Alster, Head of Synthesia Academy. Kevin has over a decade’s experience in leveraging emerging tech for video communication, with a mission to help enterprises close the gap between understanding information and putting it to use.
Watch the podcast now!
Kevin also has his own podcast called The Video Learning Lab where he talks about innovative learning and technology experiences.
Here’s a brief paraphrase of the interview.
The Evolution of Video-Based Learning
Back in the 1990s, the goal of eLearning was to replicate lecture-based training: to get information out to those who couldn’t be in the classroom. And so, video-based learning started out as lecture recordings, with modules broken down into objectives, knowledge/skills, and a knowledge test at the end.
Around 2010, video-based learning began to be embedded in LMSs and LXPs. And instead of recorded lectures or high-production training videos, video-based learning was being made by everyday professionals with 4K cameras and webcams. Things really took off from 2018, with cameras now fitting in our pockets and built into our computers. A lot of user-created content was also being generated. So, video-based learning evolved from merely trying to replicate the classroom to different types of video instruction, from entertainment and edutainment to microlearning and entire courses. The technology has gotten better and easier to use, with different objectives and ways to practice that have evolved from the lecture style.
Now, with generative AI, it’s become even easier to create videos. You don’t have to be on camera, you don’t need microphones, and for certain types of instructional media, you can use an avatar to stand in for you as a presenter or course guide.
How Does Avatar-Based Learning Provide Personalized Experiences?
Avatar-based learning is changing the game by providing a more personalized experience that makes learners feel they’re not alone. Up until 2010, a lot of eLearning was just going through slides, often with a recorded voice, and without the engagement of having an instructor. And there were a lot of obstacles to creating premium video-based eLearning, such as cost and production difficulty.
With AI avatars, polished videos are now becoming easier to create. And they make people feel not alone in eLearning. Let's say you have 300 new employees going through onboarding training. Instead of a series of emails or documents to consider, you can give them each a video with their name, in their own language. It's a very powerful experience to be welcomed by name, even if it's just in a presentation. That’s where there’s a lot of potential for AI avatars. It's not that we're suddenly going to have AI tutors everywhere, it’s more about the engagement and personalized approach to using video.
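As a concrete illustration, here is a minimal sketch of what that kind of scaled personalization could look like. Everything in it is hypothetical: create_avatar_video is a stand-in for whatever API your AI video platform actually exposes, not a real library call.

```python
# Hypothetical sketch: one personalized welcome video per new hire.
# create_avatar_video() is a stand-in for an AI video platform's API.

WELCOME_SCRIPT = "Hi {name}, welcome aboard! Here's what your first week looks like."

new_hires = [
    {"name": "Priya", "language": "hi"},  # Hindi
    {"name": "Diego", "language": "es"},  # Spanish
    {"name": "Aiko", "language": "ja"},   # Japanese
]

def create_avatar_video(script: str, language: str) -> str:
    # Stand-in: a real implementation would call the platform's API
    # and return a link to the rendered avatar video.
    return f"https://videos.example.com/{language}/{abs(hash(script)) % 10000}"

for hire in new_hires:
    script = WELCOME_SCRIPT.format(name=hire["name"])
    url = create_avatar_video(script, language=hire["language"])
    print(f"{hire['name']}: {url}")
```

The point of the loop is the scale: the same script template yields hundreds of individually addressed videos, which is impractical with recorded presenters.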
But where avatar-based learning is really starting to come together is through large language models and programs like ChatGPT, which allow people to converse with information in their own natural language. AI avatars are becoming the face for that conversation, serving in different pedagogical roles.
It's important to remember that avatars are nothing new. We've had avatars before in different pedagogical roles, but they were cartoon characters. Even the avatars of standard tools like Articulate Storyline and Rise are static characters that almost look like 90s infomercials. The new version of avatar is still just a chest up representation, but is much more expressive and engaging than an animated or static character.
The Role of Technology in Modern Learning and Development Strategies
This depends on two factors – how we use the technology, and the learning dream.
How we use technology:
It's important to acknowledge that the learning comes first. We need to look at technology and the different formats to understand how the format we're applying can help learners towards a particular outcome. Here’s an example. In the 90s, when on a road trip, we had to look at different paper maps for each state, tracing the route with a finger. Then came MapQuest, where you could look up directions and print out instructions, making the task of getting from point A to point B easier. Then in the mid to late 2000s, we got Google Maps on iPhones, so now we don't even need to look at directions, we’re given information at the right place at the right time.
What does that tell us? When a new technology comes along, it allows us to do the same task in different ways. That’s our role with new technology as L&D professionals. It's not about building things that we don't have in our skill set, but about looking at the tasks employees are trying to do, and figuring out what is changing with this new tech.
An example is how the New York Times uses the exact same headline to serve up a 10-minute article (for people who have the time to read) and a 2-3-minute video (for people on the go). So, that's the first part of the answer: when the technology changes, so should the way we deliver information.
The learning dream:
The learning dream is about scaling ourselves to bring more of a one-on-one experience to our users and learners. We know the ideal situation for learning is one-on-one tutoring. This is referred to as Bloom’s 2 Sigma problem: students tutored one-on-one outperformed their peers in group instruction by almost two standard deviations.
Also, when learning on the job, we need opportunities to practice in a place where it’s safe to fail and get feedback on our performance, so that when we apply a new skill or complete a new task on the job, we already have some experience to draw on.
AI allows you to scale yourself across your different roles as an L&D professional. To some people you may be a technologist; to others, a communicator keeping them informed; and to others still, an authority on the subject. AI avatars will allow you to clone yourself for more asynchronous video messaging, for example, serving as somebody who can speak 160 languages when trying to connect with a global audience.
Where Is Avatar-Based Video Learning Particularly Impactful?
Right now, avatars are quite limited: they cannot gesture in any effective way. They are very expressive, but only from the mid-stomach up to the top of the head. So, you need to consider whether the content is a good fit for an AI video.
If you’re dealing with fewer than 3 people, it’s better to get them on the phone or on a Google or Teams call. If the message is going out to more than 5 people, you need scaled messaging. Ask yourself:
- Is it going to be high volume?
- Will you need to send out lots of videos over a period of time?
For example, if you need to send out weekly reports on data storage, a 2-3-minute video will capture that information better than sending an entire data dashboard to your team.
If you need the message in multiple languages, an AI video makes sense: global teams can send out the same content in up to 160 different languages at the click of a button.
In short, AI videos are the best fit for high-volume, recurring, multilingual messaging delivered at scale.
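To make the multilingual case concrete, here is a minimal sketch of that one-click fan-out, assuming hypothetical translate and create_avatar_video helpers; a real implementation would use your translation service’s and video platform’s actual APIs.

```python
# Hypothetical sketch: rendering one announcement in several languages.
# translate() and create_avatar_video() are stand-ins, not real APIs.

ANNOUNCEMENT = "This week's data-storage report is ready; here are the three numbers that matter."
TARGET_LANGUAGES = ["de", "fr", "ja", "pt", "hi"]  # a small subset of the ~160 supported

def translate(text: str, target: str) -> str:
    # Stand-in: a real implementation would call a machine-translation service.
    return f"[{target}] {text}"

def create_avatar_video(script: str, language: str) -> str:
    # Stand-in: a real implementation would call the AI video platform's API.
    return f"https://videos.example.com/{language}/weekly-report"

for lang in TARGET_LANGUAGES:
    localized = translate(ANNOUNCEMENT, target=lang)
    print(f"{lang}: {create_avatar_video(localized, language=lang)}")
```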
Staying Abreast of Industry Trends and Best Practices in L&D
L&D is a bit behind the curve when it comes to using generative AI technology, because of what we’re expected to deliver to the business. It’s hard to break out of the ‘I need training courses and resources’ cycle and use AI tech in new and different ways.
Companies like the New York Times have always been a little ahead of the curve, thinking, “How can we deliver journalism differently?” They were thinking about VR storytelling in 2017 to bring people into news stories in different ways, about how to use technology to provide journalism in a more engaging or different way. So, it’s good to look at journalism to see what they are doing with AI, what formats they’re using, and what types of stories they’re telling.
You can also stay abreast of trends by looking at what people with tons of money are doing in the AI space, because it takes a lot of money and resources to find novel use cases, do prototypes and experiments, put them to use, and serve those research products in a meaningful way for the rest of us. They’re exploring questions like:
- How do we take a lot of information and create something that shortens the gap between that information and putting it to use?
- How do we give people access to it?
- How can we use large language models to help people discuss and dialogue with that information?
Preparing for the Unbundling
We're going through a great unbundling event where all our previous roles and jobs suddenly don't make any sense. We need to think on these lines now:
You need to go through the projects you’ve worked on in the past 3 months and list the tasks each one involved.
Look at those tasks and start to mesh them with your understanding of ChatGPT and what it can and can’t do. While ChatGPT can really raise the overall foundation of, say, copywriting and creative thinking, it’s not as strong at business strategy and problem solving. So figure out which tasks can be handled with the tools you have, so you’re prepared for this big unbundling of roles and projects that we’re going to see.