
It’s a myth that microlearning is hard to measure. Worse, this myth obscures the fact that microlearning challenges, constructed correctly, are arguably the best way to measure training impact on business outcomes. Companies that adopt best-practice microlearning methods are reaching a new level of granularity in their learning measurements and, in doing so, creating a real-time view of the performance readiness of their organizations.

So let’s take that myth apart: the claim that microlearning is hard to quantify.

Learner Engagement: Maybe the Most Important Learning Metric?

The biggest battle for learning professionals and the business leaders they support is getting people to engage with learning that’s critical to their collective job performance and success. Embedded in the popularity of the “learning experience platform,” or “LXP,” is the recognition that traditional learning methods are not engaging learners.

In fact, first-generation eLearning technologies (especially the LMS, or learning management system) often frustrate learners and drive them away from the critical knowledge they need to do their jobs correctly.

For that reason, learner engagement in a best-practice microlearning experience — measuring the number of learners who start, continue and ultimately finish a microlearning challenge — may be the most important measure. 

If the learner experience isn’t compelling enough to achieve a greater than 90% engagement rate, it isn’t reaching its audience. The success (or failure) of any learning effort, corresponding learning ROI and impact on business outcomes start with learner engagement.
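As a rough illustration (and not Qstream’s actual reporting API), here is a minimal sketch of how an engagement funnel for a microlearning challenge might be computed from raw participation records. The record fields and the 90% benchmark check are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical participation record for one learner in one challenge.
@dataclass
class Participation:
    learner_id: str
    started: bool            # answered at least one question
    questions_answered: int
    questions_total: int

def engagement_funnel(records: list[Participation]) -> dict:
    """Summarize start and completion rates for a challenge."""
    invited = len(records)
    started = sum(r.started for r in records)
    finished = sum(r.questions_answered >= r.questions_total for r in records)
    return {
        "invited": invited,
        "start_rate": started / invited if invited else 0.0,
        "completion_rate": finished / invited if invited else 0.0,
        # Flag challenges that fall short of the 90% engagement benchmark.
        "meets_90_pct_benchmark": invited > 0 and started / invited >= 0.9,
    }
```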

Microlearning Provides the Most Granular Learning Measurement: Proficiency

Second, the granularity of microlearning permits learning program managers and administrators to analyze and report on learning data at a much more detailed level. What’s different about measuring microlearning? 

Traditional eLearning approaches are disproportionately focused on measuring activity-based metrics: course completion and completion dates. Perhaps there’s an assessment or quiz associated with a SCORM class. Too often, this results in something closer to confirmation of attendance than a measure of individual or cohort proficiency (or proficiency gain) for a particular learning topic.

Best-practice microlearning breaks content down into its component parts. This transformation gives learners content they can digest in 2-3 minute increments that fit within busy schedules, and it also produces content that is far more granular and specific than longer courses.

Micro-content, by its very nature, addresses specific learning topics and scenarios, thereby creating a much sharper picture of which topics learners understand well and where additional training investment may be necessary. Each individual microlearning challenge and its corresponding learning analytics reveal a remarkably accurate measure of proficiency and proficiency gain.
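To make that topic-level granularity concrete, here is a minimal sketch of how per-topic proficiency could be rolled up from individual question responses, assuming each question is tagged with the topic it tests. The data shape and names are illustrative assumptions, not Qstream’s actual data model.

```python
from collections import defaultdict

# Assumed, simplified shape: (learner_id, topic, answered_correctly).
responses = [
    ("ana", "pricing", True), ("ana", "pricing", True),
    ("ana", "compliance", False), ("ben", "compliance", True),
    ("ben", "pricing", False), ("ben", "pricing", True),
]

def proficiency_by_topic(responses):
    """Return the share of correct answers per topic across all learners."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for _, topic, is_correct in responses:
        total[topic] += 1
        correct[topic] += int(is_correct)
    return {topic: correct[topic] / total[topic] for topic in total}

print(proficiency_by_topic(responses))
# {'pricing': 0.75, 'compliance': 0.5} -- highlights where extra training may help
```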

Spaced Repetition Delivers an Objective Measure of Proficiency

The third advantage of best-practice microlearning is that it delivers an objective measure of proficiency through data gathered from the spaced repetition of questions (an approach scientifically proven to drive knowledge retention).

What Qstream customers also discover is the objectivity that comes from asking a question more than once: “one answer can be a fluke or a guess, but a series of responses demonstrates whether someone has true mastery of a concept.”

Analyzing how someone responds across a series of questions, rather than a single response, reveals who truly understands a concept and who does not. The resulting objective measure of proficiency helps confirm the true readiness of the individual or team on a particular topic, or where additional training investment may be merited.
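To illustrate the difference between a one-off correct answer and sustained mastery, here is a hedged sketch of how a spaced series of attempts on the same question might be scored. The streak-based rule shown is an assumption for the example, not Qstream’s actual scoring algorithm.

```python
def proficiency_gain(attempts: list[bool], mastery_streak: int = 2) -> dict:
    """Score one learner on one question across spaced repetitions.

    A single correct answer can be a guess; here mastery is only credited
    after `mastery_streak` consecutive correct answers (an illustrative
    rule, not Qstream's scoring method).
    """
    first_correct = attempts[0] if attempts else False
    streak = 0
    mastered = False
    for correct in attempts:
        streak = streak + 1 if correct else 0
        if streak >= mastery_streak:
            mastered = True
    return {
        "baseline_correct": first_correct,  # initial proficiency
        "mastered": mastered,               # sustained proficiency
        "gain": mastered and not first_correct,
    }

# A lucky first answer followed by misses is not mastery; repeated correct answers are.
print(proficiency_gain([True, False, False]))  # {'baseline_correct': True, 'mastered': False, 'gain': False}
print(proficiency_gain([False, True, True]))   # {'baseline_correct': False, 'mastered': True, 'gain': True}
```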

The Effect of Leaderboards and Gamification

Making real-time learning analytics available via leaderboards, when done in a professional way, is exactly the kind of feedback that motivates people to engage and keep participating. Game mechanics show learners their performance relative to a peer group at the same time that this information is visible to their managers. Managers, in turn, are also measured, in the form of their overall team participation and performance.

The same measures that produce a granular, reliable analysis of performance readiness are reflected back to the participants in a microlearning challenge as it’s happening.

This also has the benefit of guiding managers, in real time, to coach their team. For example, they can recognize mastery, encourage participation, or micro-coach on a weaker area of knowledge or skill. This is vital to keeping a team ready and agile in a fast-paced working environment, such as enterprise sales or operations teams.
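As a rough illustration of how those two views might be built from the same participation data, here is a small sketch that produces a learner leaderboard and a per-manager rollup of team participation and points. The record shape and field names are assumptions for the example, not Qstream’s.

```python
# Hypothetical records: (learner, manager, points, participated) for one challenge.
records = [
    ("ana", "kim", 120, True), ("ben", "kim", 95, True),
    ("cho", "raj", 140, True), ("dev", "raj", 0, False),
]

def leaderboards(records):
    """Build a learner leaderboard plus a manager view of team participation."""
    learner_board = sorted(
        ((points, learner) for learner, _, points, _ in records), reverse=True
    )
    team = {}
    for _, manager, points, participated in records:
        totals = team.setdefault(manager, {"points": 0, "participants": 0, "team_size": 0})
        totals["points"] += points
        totals["participants"] += int(participated)
        totals["team_size"] += 1
    return learner_board, team

board, team_view = leaderboards(records)
print(board)      # [(140, 'cho'), (120, 'ana'), (95, 'ben'), (0, 'dev')]
print(team_view)  # managers see team participation alongside points
```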

A Real-Time View Into Organizational Performance Readiness

Learners who engage with and respond to repeated microlearning challenges on a defined timetable benefit from scientifically proven knowledge reinforcement. At the same time, those same responses add up to a real-time view of organizational performance readiness.

The results of a microlearning challenge, simply put, reveal who’s ready to do what they need to do, who’s on message, and who has demonstrated proficiency (or a proficiency gain) as part of the microlearning experience.

Of course, the opposite is true, too. When a person’s proficiency on a particular topic is low and doesn’t improve, it presents the organization with an opportunity to provide coaching or further training to close the gap. 

When a microlearning challenge focuses on critical business topics, the questions become pointed: can the organization afford to have someone who doesn’t know the message selling a product? Is it wise to have someone in the field who can’t answer a key question correctly when given three spaced chances? Is below-average proficiency a problematic leading indicator of on-the-job performance?

There may be extenuating circumstances, but this is at the heart of the microlearning transformation: the vivid discovery of where gaps and learning opportunities exist in the business, and where the business carries risk because of knowledge or skills gaps.

This information is especially vivid when displayed in a proficiency heatmap, a learning diagnostic that shows areas of strength and weakness by topic. Knowing where knowledge or skills gaps exist gives organizations the opportunity to discover, pinpoint and address them through coaching or further learning intervention.
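For a sense of what such a heatmap can look like in its simplest form, here is a hedged sketch that renders per-learner, per-topic proficiency scores as a coarse text grid. The scores, thresholds and symbols are illustrative assumptions, not the Qstream product’s visualization.

```python
# Assumed input: proficiency scores (0-1) per learner per topic.
scores = {
    "ana": {"pricing": 0.9, "compliance": 0.4, "objection handling": 0.7},
    "ben": {"pricing": 0.6, "compliance": 0.8, "objection handling": 0.5},
}

def text_heatmap(scores, low=0.6, high=0.8):
    """Print a coarse heatmap: '+' strong, '~' developing, '-' gap."""
    topics = sorted({t for per_learner in scores.values() for t in per_learner})
    print("learner".ljust(10) + "  ".join(t[:12].ljust(12) for t in topics))
    for learner, per_topic in scores.items():
        cells = []
        for t in topics:
            s = per_topic.get(t, 0.0)
            cells.append(("+" if s >= high else "~" if s >= low else "-").ljust(12))
        print(learner.ljust(10) + "  ".join(cells))

text_heatmap(scores)  # gaps ('-') point to where coaching or further training is needed
```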

This real-time portrait of team proficiency, set alongside performance and productivity data, is invaluable for connecting learning to the performance of the business. And that is why microlearning, when blended with spaced reinforcement, testing and professional game mechanics, is not hard to measure. In fact, microlearning provides a vital set of learning analytics that drive practical action: engagement, proficiency and knowledge gaps.
