Discover the “aha” insights behind learning impact evaluations.
After so much time and effort spent creating an eLearning course, one of the most critical steps that follows is evaluation. Determining how well the course is performing and where it can be optimized is key to achieving greater impact.
It used to be enough to evaluate the course’s success through efficiency (course completion) and ROI (saved time and money in the development process). However, this traditional method is limited as it reveals very little about what is going on between the course and its learner.
Today, different methods of evaluation are within reach. In this post, we invite companies to consider going beyond ROI-only metrics to LEARNING & PERFORMANCE based insights.
1) If you wanna win the game, focus on the ball
Depending solely on ROI will merely give you an overall take on whether the eLearning program is working as a whole. You won’t be able to pinpoint where the TRUE VALUE is coming from and, worse than that, you won’t be able to see things on a granular level. So what’s bad can’t be fixed, and what’s good can’t be built on or replicated.
If you know program A and method B work, you can continue to use those formats and cut the rest.
Some questions you might actually want to answer are:
- What courses/modules are being taken by whom?
- How often are they going back to review the course?
- Which elements/approaches seemed most popular? Do they enjoy watching videos or prefer downloading podcasts?
- Where are they accessing the learning? From smartphone, tablet or laptop?
- Which courses/modules have the highest completion rates?
- Which programs are leading most to behavioral changes?
- What formats are those (more successful) programs in?
For example, learning that shorter (microlearning) courses have higher completion rates and better reviews from learners provides direction for future formats and courses. Besides giving you insights on what your learners prefer, this can also result in a change in production hours and a drop in employee course hours, all of which will lead to lower costs and eventually be reflected in your bottom line.
We all love information because it gives us a better idea of where we stand. From there we can set goals and work toward the desired outcome. When unexpected issues come up, these insights are what allow you to adapt.
Some other examples of how granular insights can go a long way:
- If you can identify where learners are trying to cheat the system (i.e., skipping through content), there is a learning opportunity there for you to make a change.
- If you learn that incompletions cluster on a particular day or time, this provides an opportunity to make some easy changes that might reduce dropouts.
The bird’s eye view of your efforts isn’t going to help you make the play by play decisions that you need to win the game.
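As a minimal sketch of this kind of granular analysis, assuming you can export per-learner course records from your LMS (the field names and sample data here are hypothetical), completion rates by format and incompletions by weekday could be tallied like this:

```python
from collections import Counter

# Hypothetical export of per-learner course records; real LMS fields will differ.
records = [
    {"format": "microlearning", "completed": True,  "dropout_day": None},
    {"format": "microlearning", "completed": True,  "dropout_day": None},
    {"format": "long-form",     "completed": False, "dropout_day": "Fri"},
    {"format": "long-form",     "completed": True,  "dropout_day": None},
    {"format": "long-form",     "completed": False, "dropout_day": "Fri"},
]

def completion_rates(records):
    """Completion rate per course format."""
    totals, done = Counter(), Counter()
    for r in records:
        totals[r["format"]] += 1
        if r["completed"]:
            done[r["format"]] += 1
    return {fmt: done[fmt] / totals[fmt] for fmt in totals}

def dropout_days(records):
    """How many incompletions fall on each weekday."""
    return dict(Counter(r["dropout_day"] for r in records if not r["completed"]))

print(completion_rates(records))  # rate per format, e.g. microlearning vs. long-form
print(dropout_days(records))      # which days dropouts cluster on
```

Even a toy tally like this surfaces the play-by-play detail that an overall ROI number hides: which format completes, and when learners drop off.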
2) Evaluation: Before, During & After the Learning Event
Rest assured: there are different points of entry for establishing a method of measuring learning impact.
Before launching: You can create pre-evaluations (self-assessment surveys, pre-tests, blogs, reading assignments with questions) that will align the material with the learning objective.
During learning: Placing evaluations after every lesson is very common in many of the learning apps that learners are already exposed to on their phones. Microlearning apps have done a great job of placing these formative evaluation tools right after learners receive information. This helps emphasize the objective of the lesson and requires immediate recall.
Post-learning: Design a summative evaluation for the learner to take after the entire lesson has been completed. This will help you gauge how each learner is performing. There are two common ways of doing this: a checklist format, measuring the number of questions answered or completed correctly, or a rating scale, measuring what was achieved and how well it was done.
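The two summative formats above can be sketched as simple scoring functions (a toy illustration; the question data and the 1–5 scale are assumptions, not a prescribed rubric):

```python
def checklist_score(answers, answer_key):
    """Checklist format: count items answered or completed correctly."""
    correct = sum(1 for q, a in answer_key.items() if answers.get(q) == a)
    return correct, len(answer_key)

def rating_scale_score(ratings):
    """Rating-scale format: average a 1-5 rating across criteria
    (what was achieved and how well it was done)."""
    return sum(ratings.values()) / len(ratings)

answers = {"q1": "b", "q2": "a", "q3": "c"}
key     = {"q1": "b", "q2": "d", "q3": "c"}
print(checklist_score(answers, key))                           # (2, 3)
print(rating_scale_score({"accuracy": 4, "completeness": 5}))  # 4.5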
3) Advanced Evaluation Techniques
Many instructional designers don’t move past the previous point, but once you establish the analysis above, you can consider taking things a bit further. Though assessments are excellent for evaluating recall, you are still not determining whether your learners are actually applying the skills correctly back on the job.
You need to evaluate PERFORMANCE: how much trainees actually change their behavior on the job based on what they learned.
An example of how to do this:
A simple way to determine the applicability of the lesson is to ask learners to apply the skills/concepts that they have learned to resolve specific, real-life issues. Ask them to perform a specific task when they finish the course to “force” them to put the knowledge into practice.
Think about which precise job-related scenarios you can present so they can apply what they’ve learned:
- Were the knowledge and skills shared with learners reapplied in their roles?
- Did this training contribute to the overall improvement of their everyday tasks?
- Were there noticeable changes in performance after the training? Surveying supervisors on employee improvement can be a way to verify this.
- Was the new information applied and sustained? Measure employee performance 3 to 6 months after training to make sure that these learnings are long-term adjustments.
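That last check can be sketched in a few lines, assuming you collect supervisor ratings before training and again 3 to 6 months after (the names, ratings, and improvement threshold here are illustrative):

```python
# Illustrative supervisor ratings (1-5) per employee, before training and
# 3-6 months after; real data would come from your survey tool.
before = {"ana": 2.5, "ben": 3.0, "cho": 3.5}
after  = {"ana": 4.0, "ben": 3.5, "cho": 3.5}

def sustained_improvement(before, after, threshold=0.5):
    """Employees whose rating rose by at least `threshold` and held months later."""
    return [e for e in before if after[e] - before[e] >= threshold]

print(sustained_improvement(before, after))  # ['ana', 'ben']
```

A comparison like this is what turns “we trained them” into evidence that the training changed behavior for the long term.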
Applying these exercises will open a window to “aha” moments that will help you see where your employees and your training are clashing or aligning.
There are many reasons that companies focus on ROI to measure impact. Many stick to this method because it’s what they’ve always done. Others don’t know how to make the shift. Though understandable, these reasons are still just barriers to knowing what you are investing in and how to improve that investment. Going beyond ROI might help the ROI go up.