Make Learning Stick: Engage, Reinforce, Measure




With over $200 billion spent annually on global corporate training, it remains a conundrum that some research shows only 10% of corporate training is effective. Harvard Business School Professor Michael Beer calls it “the great training robbery.” We call it human. It is not that training content is flawed, or that learning and business leaders aren’t committed to building a culture of learning; it is quite simply a human phenomenon: the brain is wired to forget new knowledge that is not reinforced or put into practice.

In this webcast, learning technology product expert Bryony McIndoe from Qstream discussed how technology is transforming the way corporate learning programs are designed, executed and iterated for lasting impact on business.  You will take away some practical actions that will set you up to maximize every dollar invested into corporate learning.




Bryony McIndoe
Product Manager

Bryony McIndoe is using her background in research and science-based behavioral psychology to deliver innovations to Qstream customers. A 10-year product management veteran, Bryony is an expert at user experience and direct customer research and feedback, using real-world observation to build and prioritize the microlearning capabilities that support Qstream client business objectives. Before Qstream, McIndoe worked at a series of SMEs including San Francisco-based Imfuna, and Dublin-based MoneyMate (now CSS) and Altify. Previously, McIndoe studied at Oxford Brookes University where she earned a degree in Psychology and History, accredited by the BPS (British Psychological Society).



Karen: Welcome everyone to Make Learning Stick: Engage, Reinforce, Measure, Repeat, presented by Bryony McIndoe, Product Manager at Qstream. Bryony McIndoe is using her background in research and science-based behavioral psychology to deliver innovations to Qstream customers. A 10-year product management veteran, Bryony is an expert at user experience and direct customer research and feedback. She uses real-world observation to build and prioritize the micro-learning capabilities that support Qstream client business objectives. Before Qstream, McIndoe worked at a series of SMEs including San Francisco-based Imfuna, and Dublin-based MoneyMate (now CSS), as well as Altify. Previously, McIndoe studied at Oxford Brookes University, where she earned a degree in Psychology and History accredited by the BPS, the British Psychological Society. And with that, I am going to turn things over to Bryony. So Bryony, when you are ready, please unmute yourself and show us the presentation.

Bryony McIndoe: Great. Thanks so much for that, Karen. Hi, everyone. Welcome, and thank you so much for joining us here today. It looks like we have a good group on the line, so I’m just going to dive straight in. Today, we’re going to talk a little bit about what Harvard Professor Michael Beer calls the great training robbery, which is not only a fantastic pun, but also a really pertinent topic on why so much time and money spent on learning just isn’t getting the desired long-term results that we’re looking for. So we’re going to look at some of the next generation of learning methods that are surfacing to help combat this, and also at the key elements of any program that will help make that learning stick with your learner group, and deliver those long-term behavior changes that we’re all striving for. The key elements that we’re going to look at are engagement, and we’re going to explore some methods here to help get your learners’ attention and focus exactly when it matters. We’re then going to talk about the importance of reinforcement of information and learning to ensure those long-term benefits. And then we’ll move on to looking at what areas of analysis are really key in understanding the impact of learning programs on individuals and groups, and also on the quality of learning programs themselves. Finally, we’re going to take a look at how this data can help to create an incremental and informed evolution in embedded learning programs, to ensure that you’re always delivering the learning that matters, when it matters. So we’re actually going to kick it off with a poll. Karen, if you wouldn’t mind kicking that off. The question we’re asking here is: How much of any new information or learning do you think that we forget, on average, in 24 hours?

It looks like a good few of you are even more pessimistic about this than I thought you would be. So we’ve got 34% saying that it’s 79%. Only 10% think that it’s 42%. So that’s great. The answer is actually 67%. So it looks like a lot of you might already recognize at least some of the stats here, which come from, and are inspired by, Hermann Ebbinghaus’s research into the forgetting curve. It’s kind of hard to believe that Ebbinghaus started publishing this research on the forgetting curve 140 years ago, because we’re still so clearly coming up against this obstacle today. In fact, in an environment where technology and the markets and product shifts are almost constant, and in fact are often points of competitive differentiation between organizations, it feels like we’re trying to learn 10 times as much disparate information in a fraction of the time. It’s really more of an obstacle than ever. We have even less time to learn, and more importantly, embed those learnings into our day-to-day activities than ever before. So if you’re looking at this: two thirds of learning is lost in 24 hours, and in two months, only 10% is retained. And the reality is that you have no control over which 10% that is. Is it the important things? And is that 10% having any impact on anyone’s job performance anyway?
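The two figures above, roughly a third retained after a day and about a tenth after two months, can be sketched with a simple power-law forgetting curve. The function below is a hypothetical illustration, not Ebbinghaus's actual data; the exponent is hand-fitted to just those two numbers from the talk:

```python
def retention(hours, b=0.34):
    """Fraction of new material retained `hours` after a single exposure,
    with no reinforcement.

    Power-law forgetting-curve sketch: R(t) = (1 + t) ** (-b).
    The exponent b is hand-fitted to the talk's figures (~33% retained
    after 24 hours, ~10% after two months) and is purely illustrative.
    """
    return (1 + hours) ** (-b)

print(round(retention(24), 2))       # after one day: roughly a third retained
print(round(retention(24 * 60), 2))  # after ~two months: under a tenth retained
```

Any reinforcement event would reset or flatten this curve, which is exactly the lever that the spaced reinforcement discussed later pulls.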

So if only 10% of the learning sticks, that means that, you know, in a $200 billion industry, $180 billion just isn’t being used effectively. And even if the holy grail of 100% or 90% or 70% learning retention isn’t possible, if the reality is that 10% of learning is retained over two months, then even getting an extra 5% or 10% on top of that, if you’re also able to link it to performance, is actually huge. So in order to really understand this problem with forgetting, we need to understand how it is that we learn and remember information in the first place. How we remember information seems fairly straightforward on the face of it: we’re exposed to something, we encode it into our memories, and it’s there when we need to retrieve it, at which point it’s reinforced and encoded all the deeper. But of course, we know it’s much more complicated than that, because so often when we come to retrieve this learning, it’s just not there.

So why do we forget at all? It feels like this major design flaw in the human brain. You know, why is it that we’re forgetting so much? Wouldn’t we all just be superhumans if we could remember everything that we’ve ever been exposed to? But actually, among many other reasons why forgetting is helpful, forgetting is really vital in making sure that we only learn the right things. And to illustrate that point, I’m going to introduce you to an Iberian green frog tadpole. I don’t actually know whether this is an Iberian green frog tadpole, but it is a tadpole, nonetheless. So a study was done using tadpoles to help prove this thesis that we forget in order to avoid learning incorrect behaviors. The way it worked was that the tadpoles were exposed to a chemical signature that they already knew means danger; it’s actually the chemical signature that a dead tadpole gives off, so they know it means that something serious has gone wrong. Along with this danger signal, they were also exposed to a completely neutral chemical signal that they had never encountered before. So the tadpoles reacted as they usually would when they’re exposed to danger, which, for tadpoles, is to hold very still to avoid being noticed by predators. What the researchers did was that after a space of time, they exposed these tadpoles to just that neutral chemical again, without the chemical that meant danger. The tadpoles had learned this correlation, and so in response to this neutral chemical, they held very still again, because of this concern that there might be a predator around. The researchers then exposed the tadpoles to this neutral chemical by itself a second time, so a third time in total.

And what they found was that the tadpoles then ignored it completely. So they had learned a spurious correlation, which they had reacted to, and then they forgot it when it wasn’t reinforced. Which, as you can tell, is really vital, because if tadpoles stayed still every time they encountered something that wasn’t dangerous, they would never be able to get on with the important and, according to this picture, possibly slightly joyful work of eating as much as they can and growing into frogs. So clearly, the process of learning is just a little bit more complicated. After we’re exposed to new information, we need to understand it in a broader context. We need to be able to connect it to the things that we’ve already learned, the things we already know, and the things we already do, to make it fit in with our worldview. And before we’re at risk of forgetting it, forgetting these correlations, we need to have that information reinforced and be allowed to build on it, to elaborate on it, to really embed the right detail and behavior changes.

So clearly, a big part of the problem is that we’re putting a huge amount of effort into producing content and courses and curriculums, only for learners to forget 90% of what they’ve seen. But that’s really only part of the problem. We talk about learners having short attention spans, although hopefully slightly longer than a tadpole’s, but that’s more about how much time they have to devote to learning than any actual impact that technology or, you know, modern living has had on attention spans. Which brings us to the next point, on competing priorities: there’s even less time to get that focused learning that we’re looking for. And those competing priorities, of course, aren’t just other things to learn. It’s also, of course, the actual job itself, the work itself. Our customers often tell us that so often the job takes over, leaving individuals overwhelmed by just getting the job done, by doing the work, driven by gut instinct or embedded behaviors that perhaps just aren’t best practice, instead of being able to take any time to empower themselves to do the job more effectively and efficiently. And where there are competing priorities, in a space where everyone’s idea and content is the most important and the most current and the biggest priority, you end up with nothing being a clear priority. We talk about this as, you know, everything just becoming the digital junk drawer, and learners starting to self-prioritize what they feel like they themselves need on any given day. And this is how a lot of current-generation learning tools are geared up: to promote high levels of content within a context of learning, instead of targeted content in the context of working, in the context of, you know, getting stuff done. But as we’ve seen, that’s just not how the brain works.
All this extra learning and content, you know, as I said, kind of becomes this digital junk drawer of stuff for learners and managers and trainers alike to sift through. Or, as is so often the case just choose to ignore it entirely.

The next generation of learning tools, then, kind of seeks to step back from the reliance on, you know, that self-curation, and focus in on what really matters now. And this is a key element in getting out of the cycle that the learning industry has been in. Learning isn’t just content management, obviously, but it’s so often what it starts to feel like. For more durable learning, a more active engagement is necessary. So how do we do this? Embracing the next generation of learning tools, of course, doesn’t by any means mean abandoning everything, or indeed anything, that we’re already doing. It’s just about augmenting and evolving what we’re doing today, to meet the learners where they are. It’s moving on from what and how employees are learning, and towards a bit more of a focus on, and a bit more of an articulation of, the why. Why do they need to learn it? Why is it important now? And why would they want to engage at all?

So we talked about managing content. It’s moving from, you know, managing these kind of avalanches of content to transforming content. In the case of micro-learning methods, it’s not just about miniaturizing content, making it smaller, but about focusing in on the content that matters, and to whom, right now, and producing something that is focused, digestible and reusable. Using something like micro-learning allows you to create content that is quicker to produce and flexible to adjust, in order to keep up with the real-time change of your organization. Also, using mechanisms like scenario-based questioning helps to give your content real context, real challenge, and clear indications of real-world performance, of how it will be used in practice, and illustrates how it demonstrates real, applicable best practices. The next thing is, I mean, we all know the importance of aligning with business goals to ensure meaningful learning outcomes. I don’t need to talk to anyone about that. But the next step on from that is making sure that we are zeroing in on specific and precise objectives, to make sure that we end up with something tangible and measurable. If something is tangible, it’s so much easier to link it to ROI.

It’s a complete given that content is digital now; it’s digital, it’s accessible. But for today’s learners, and that’s all learners, not just millennials and Gen Z, we need to evolve beyond that. It’s not just about things being mobile-ready; it’s about having processes and content that are mobile first. Something that is digestible in a mobile format and a mobile mindset. So thinking about where people are when they’re looking at their mobiles, when they’re consuming this information: what is the mind space that they’re in? What is the environment that they’re in? Making sure that those are considered, and not just producing content that’s available on a smaller screen. We’ve already talked a little bit about self-prioritization and self-curation. But the other aspect of that is that it adds even more strain and time to a learner’s already limited resources. By the time I’ve found the thing that I wanted to learn, I actually no longer have time to learn the thing. So we need a bit of a shift to understanding exactly what each learner, or each role, or each group needs, and strive to deliver, you know, what they need when they need it. In terms of learners being self-motivated, the data is increasingly showing that employees are more focused on learning in the workplace than ever before, formal and informal learning.

It’s something that more and more people are talking about when they’re talking about leaving a job. It’s something that more and more people are talking about when they’re moving to a new job, when they’re, you know, assessing a new employer. So it’s clearly on people’s minds, but we still need to make sure that learners are motivated to learn the key concepts at the time that they need to know them, as opposed to this, you know, big self-curation exercise. Which is where the additional motivation of game mechanics really comes into its own. And finally, we’re all pretty inundated at this stage with metrics and data, particularly on who looked at what piece of content when, but in order to genuinely assess, report on, and optimize the impact of learning programs, we need a little bit more. And often, we need to start with an understanding of where learners started, to link to an assessment of where they’ve ended up. So with that as the basis, let’s talk a little bit about ways of supporting engagement. If people are overwhelmed by this digital junk drawer, then just asking them again to pay attention, to open that drawer and dig through it to find the thing, to dig through Dropbox, to dig through the emails, to look in the CRM, to look in the LMS, and wherever else the learning is being found, this isn’t working, and we need some more actionable messages. So in this context, concerns about what people are and are not remembering might not actually be your biggest concern at all. It’s getting them to turn up in the first place, and more importantly, stay engaged once they’re there.

Which is really where game mechanics comes in. Game mechanics is defined as the use of the same kinds of dynamics and frameworks found in games to promote desired behaviors in non-gaming environments. Bernard Suits calls a game a voluntary attempt to overcome obstacles. So the real key value of a game, then, is making people want to engage in something that is challenging, and we’ll come back to that in a minute. But incorporating these gameful designs into learning isn’t just for fun, or even just to boost engagement. They have very specific effects on people engaged in learning, and on the kind of learning environments that you are curating. We can think about these effects in three different categories, which we’ll go through: cognitive, emotional, and social impacts. So if we think about the cognitive impacts of game mechanics: gamification means making things challenging enough to not be boring. The concept of flow is pretty well known, but making sure that you are building in the right level of existing skills versus challenge is really key to keeping people engaged. It’s all about finding the balance between questions and content that are so simple and doable that they’re boring, and you can’t get people to care or pay attention, and something that’s so hard that it’s difficult for a learner to see how they would learn it or overcome it at all; you know, making it so challenging as to be anxiety-inducing, and not approachable and not engaging at all. So it’s finding that balance which is really key to the way we structure these programs.

Game mechanics also offers immediate goals, which keep learners motivated to continue, which is where we start crossing over into emotional engagement. Game design creates a positive relationship with obstacles, and so essentially creates a positive relationship with failure, by making feedback loops really quick and, importantly, keeping stakes low. Going back to the voluntary attempt to overcome obstacles, this is what people often really think about when they think about gamification, you know, badges and leveling up and that kind of thing. But badges aren’t just a quick dopamine hit. It’s about creating, as I said, an environment where if you’re not winning, you’re learning, and you are comfortable doing both. And by comfortable, I mean you are supported: it is clear from the learning organization, and the rest of the organization, that in a big sense it’s the learning, not the knowing, that they really care about. And having that support of the organization, and thinking about your learning in the context of your team and peers and the larger group, brings us into the science of social. Social interactions in gamification are obviously key, and there are generally two parts to this. It’s both about receiving recognition and collaborating with your peers in this open learning environment, and also building in that aspect of competition to further increase motivation. So competition can be a social pressure to support your team. In this example here, you know, if I’m in the South team, you can see that there will be quite a lot of pressure to pip the next team to the post. But it can also be that individual drive to be at the top of the leaderboard, or at the very least to not be seen by your peers or your manager, or your manager’s manager, or however it gets structured, to be at the bottom of the leaderboard.
So there are two different ways that those individual leaderboards can work. But positioning this kind of challenging, potentially competitive and flow-inducing learning really needs the right kind of communication to make it work.

And again, it’s not enough to just have a badge or level or leaderboard. Learners really need to understand from the get-go the context of the learning, what the desired outcomes are, and why it matters for them, for their job and for their professional development. The delivery of this learning needs to be really respectful of their time, going back to, you know, people just don’t have enough time to engage, and they wish that they did, but the reality is that they don’t. So it needs to be respectful of all those competing priorities. These communications, and the learning itself, need to be delivered with the right regularity, both to avoid overwhelming people with new information and to make sure that you are reinforcing existing learning at the right time, before it gets lost. Which all adds up to an expectation of a kind of respectful accountability. If it’s easy to use and consume, learners don’t really have an excuse not to invest a few minutes a day. And if it is simple for managers to view progress and give, you know, the kind of just-in-time coaching that a lot of current learning tools don’t facilitate, then no one across the team has an excuse not to engage. Some of the game mechanics we talked about earlier obviously help to facilitate this, to facilitate and empower your audience, and that audience is your learners and your managers as well.

If you’ve set up the social pressure aspect of your program well, as above, then your managers are already kind of part of a team. Again, engagement is not just about the learners; it’s the whole team that they work in, and it’s the culture of those teams, and the attitudes that those teams have to learning. We’ll see a little bit more later on when we talk about metrics. But if it’s set up in this way, people obviously may still choose to ignore the whole thing, but it means that ignoring the whole thing is not invisible. And the impacts of that are clear from both a social and a data perspective, often pretty much in real time, particularly from that social perspective. You know, you can see that your team is at the bottom of the leaderboard, or you can see, in real time, what’s going on there. So now that you’ve hopefully gotten your learners to turn up, we can go back to that consideration of ensuring that they’re actually retaining what you’ve, you know, gone to all the effort of teaching, through considered reinforcement. And we’re going to do another quick poll here. Karen, if you wouldn’t mind setting that up. So what we’re asking here is, obviously: what were the steps in the how-we-learn process presented at the beginning of this webcast?

Brilliant. So actually, a good solid third of you remember, and this is a great example of the testing effect. The answer was B: it was learning, consolidation, forgetting, retrieval, and enhancement. So, you know, this is forcing your brain to retrieve information that it’s been storing, because this is how the brain works to retain knowledge. Just seeing information a second time isn’t enough; it’s not enough to really reinforce that learning. The key point here is the difference between seeing information and retrieving information. You can think of it as, you know, cramming versus retrieval practice. It takes effort. Clearly, it takes effort to retrieve information, and it feels harder. And in some cases, learning might feel a bit slower, or it might feel less immediately rewarding. But it is through this process that more and more neural pathways are created to recall that same piece of information in a variety of contexts. In fact, studies show that while cramming might make you retain 50% of the information over two days, which is still better than the, you know, 33% that we saw in that first slide, where nothing much has been done to help you retain it, retrieval practice gives you a boost up to 87% over that same two-day period, and boosts the retention over the longer term as well. So it’s really impactful on learning. We probably all remember reading and rereading and highlighting notes in preparation for exams in school or college or university. And we probably all had that friend who spent, you know, six hours on a Tuesday night putting together some beautifully designed flashcards, or maybe you were that friend. I was not that person when I was in high school. But those flashcards were micro-learning challenges in action; they meant that the information had to be actively retrieved, as opposed to trying to just passively reabsorb it.
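The flashcard idea, spacing out retrieval attempts and bringing an item back sooner when it is missed, can be sketched as a simple Leitner-style scheduler. The intervals and names below are illustrative assumptions, not how Qstream actually schedules its challenges:

```python
from datetime import date, timedelta

# Review intervals (in days) per Leitner box; the values are illustrative.
INTERVALS = [1, 3, 7, 14, 30]

def next_review(box, answered_correctly, today):
    """Return (new_box, next_review_date) after one retrieval attempt.

    A correct retrieval spaces the next challenge further out; a miss
    resets the item to the shortest interval so it is reinforced soon,
    before the forgetting curve takes over.
    """
    if answered_correctly:
        box = min(box + 1, len(INTERVALS) - 1)
    else:
        box = 0
    return box, today + timedelta(days=INTERVALS[box])

today = date(2024, 1, 1)
print(next_review(0, True, today))   # promoted to box 1, due in 3 days
print(next_review(3, False, today))  # missed: back to box 0, due tomorrow
```

The low-stakes repetition discussed next falls out of this structure: no single answer is final, because every item comes around again.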

And like flashcards, micro-learning challenges in a corporate learning context need to be equally low risk, which is what I was talking about earlier when we were talking about gamification. By low risk, I mean that you need to ensure that the emphasis is on supporting learning, not just testing knowledge. So a challenge needs to be responsive: it’s the back of the flashcard that’s giving immediate feedback and deeper contextual learning. But a challenge also works best when you know that it can and will be repeated. It’s not just a case of asking a question once to test knowledge; it’s asking a question multiple times to ensure that the learner has had a chance to understand it, and that the learner knows that it is low risk. You know, they’re not being judged on one question; they’re really being given an opportunity to learn the information that you’re trying to impart. Overcoming that sense of fallibility that we all so instinctively find inherent in incorrect answers really reaps great benefits. There is, of course, an assessing aspect to it, but in this context, it is for the learner’s own understanding and benefit; most importantly, it is to help them understand their own knowledge gaps.

Humans are famously and tragically poor at understanding our own levels of knowledge in any given topic; I’m sure that you can all think of about 50 examples of that off the top of your head. But as we’ve discussed already, recurring challenges are themselves learning; they’re part of learning. It’s not just understanding what you don’t know, or where your knowledge gaps are, but the act of retrieving information, you know, it ever expands those neural pathways. So here’s an example of how one of our customers has implemented micro-learning for their teams within their learning process. Here a learner is being prompted to engage in learning in their own space and in their own time, unobtrusively, in the flow of their workday. They’re presented with a precise scenario, a challenge that is meaningful to them in their role and activities, and it’s really specific and precise; it’s not asking too much. The feedback loop is really small, so they know immediately whether they’re on the right track or not. And in the case of this customer, they can also see at this stage some data about how the rest of their cohort, or the rest of the learning group, answered the question, which introduces that social aspect of learning really early on, and helps people orientate or gauge themselves against their peers. And then they receive immediate coaching.

So whether they answer the question right or wrong, the question is supported by explanatory context on, first of all, why the answer is what it is, but also why it matters at all. And finally, the competitive aspect ensures that this, you know, positive social feedback loop of engagement is supported and perpetuated for when they get the next question. Which is all well and good. But how do you know if it’s working, science and tadpoles aside? How do we know if the learning is sticking? And how do we know how to move forward from here? Traditional learning analysis kind of tends to focus on who took a class, when they took it, at what point they stopped taking it, when they completed it, and maybe whether or not they passed some kind of post-class assessment. But that leaves learning metrics quite flat, and it leaves very few data points doing quite a lot of heavy lifting, doing quite a lot of work: covering off the status of the learner, the individual themselves, their professional performance, and also the engagement and the content. So you’re trying to extrapolate a whole lot of things out of actually quite a small amount of data.

If, however, you’re using spaced, repeated, bite-size learning, suddenly your data gets a lot more objective. Now you’re able to measure trends. One of our customers articulated this as, you know: one right answer can be an anomaly, it can just be a guess, but three answers over a series of days, now you’ve got an objective trend. And not only do you have this low-risk assessment creating a learning environment that we’ve already seen promotes engagement, but you’re able to go beyond kind of single-point-in-time topic mastery and into real performance measures for cohorts and individuals. If you started from a position of very clear and precise learning objectives, these performance trends start to become a measure of organizational health on topics that are most critical for your company’s goals. Your data stops being a lagging indicator of how engaging your content is, and it starts being a leading indicator of professional performance. You’re moving from scoring the learning to scoring performance, and where this information is being reported out to learners and managers and senior leadership, everyone starts to have an even fuller and more nuanced understanding of the importance of the learning in the first place, in the context of these simple and concrete organizational objectives.
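That "one right answer can be a guess, three over several days is a trend" heuristic could be expressed as something like the snippet below. The window size and labels are illustrative assumptions for the sketch, not a Qstream feature:

```python
def proficiency(attempts, window=3):
    """Label a learner's standing on one topic from repeated challenges.

    `attempts` is a chronological list of booleans (True = correct).
    A topic counts as 'proficient' only when the last `window`
    retrieval attempts were all correct: a single right answer can be
    a guess, but a short run over several days is an objective trend.
    Thresholds and labels are purely illustrative.
    """
    recent = attempts[-window:]
    if len(recent) < window:
        return "baseline"        # not enough data points yet
    if all(recent):
        return "proficient"
    if not any(recent):
        return "at risk"
    return "developing"

print(proficiency([True]))                     # baseline
print(proficiency([True, False, True, True]))  # developing
print(proficiency([False, True, True, True]))  # proficient
```

Aggregating these labels per team or geography is one way the kind of heat map described below could be derived.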

So for example, with some of this data, for, you know, learner groups or cohorts, you can now assess the success of a training program by proficiency improvements. This goes back to the baseline versus current performance assessments that we talked about earlier. And this makes sure that your learning data can be a meaningful indication of the impact of training programs. It goes beyond, you know, ticking the box on learner engagement and starts moving more towards actual performance, and talking to the ROI of the learning. And more than that, with this kind of granularity of data, you can drill down into what that means, not only for individual learners, but also for teams, and even for training topics. So you can see exactly where learning might not be being supported by managers or coaches, or maybe the marketing material is different in that geography, or whatever it is. And that kind of brings us back to the respectful accountability concept we saw earlier. You can see in the current version of the heat map, in the third column along, they’re clearly just not doing as well as the other geographies. They’re just not doing as well as the rest of the cohorts in this particular set of information. Your learning now offers an invitation for more intervention in the process of learning, in real time. And it’s also an invitation for leaders to have the opportunity to really dig into the reasons why some teams are doing really well and some are doing less well. What’s the point of differentiation between those groups? You’re beginning to give leaders really irrefutable data on performance, which is obviously, like, gold. With that, you can also see in the data, you know, where marketing or macro-learning content might be lacking or outdated. So you can see, I think it’s the second one down, a particular topic where everyone’s just not quite as strong as on everything else.
So there’s clearly a gap in the kind of information that people are receiving on this, which could be filled.
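As a sketch of the team-by-topic heat map being described, the same drill-down can be built as a simple grid of proficiency gains. The teams, topics, and scores below are purely hypothetical, not a real Qstream export:

```python
from collections import defaultdict

# Hypothetical challenge results: (team, topic, baseline score, current score).
# All names and numbers are illustrative, not real Qstream data.
scores = [
    ("EMEA", "pricing",    0.55, 0.80),
    ("EMEA", "compliance", 0.60, 0.65),
    ("APAC", "pricing",    0.50, 0.62),
    ("APAC", "compliance", 0.58, 0.59),
    ("AMER", "pricing",    0.52, 0.85),
    ("AMER", "compliance", 0.61, 0.88),
]

# Build a topic-by-team grid of proficiency gains (current minus baseline):
# a weak column flags a team that may need coaching, while a weak row flags
# a topic where the content itself may have a gap.
grid = defaultdict(dict)
for team, topic, baseline, current in scores:
    grid[topic][team] = round(current - baseline, 2)

for topic in grid:
    print(topic, grid[topic])
```

In a spreadsheet or BI tool the same grid would be colored by value, which is exactly the heat map view: the weakest cells point at where to intervene.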

One of our customers actually told us a story the other day: they were running a training program on a topic, and they had managers come back to them saying that the topic was too basic, it was a waste of time, they didn’t want their teams doing it. But when they ran a microlearning challenge on the same information, the results really clearly showed some not insignificant knowledge gaps on the topic. So while maybe it was a basic topic, this made it even more important to know that there were knowledge gaps there. That was kind of surprising, both for the learning organization and clearly for the managers themselves. It was, you know, not their expectation at all. So having this kind of information was really vital. And this kind of objective data, both on the training program and on the broader organization, really makes it easier to articulate the business value of the learning in the first place: its efficacy, its business impact, and its impact on employee productivity. And, as you can see, having access to this kind of data also makes it easier to optimize and repeat your programs.

So knowing where the strengths and gaps within your learning populations lie will really help you home in on precision learning, with clear objectives. With every iteration, the content can improve, particularly with the flexibility of micro-content. And you also know more about the information that you’re sending out. We’re clearly not saying let’s stop producing and making content available to people; we’re never going to stop producing this macro learning. It’s really vital for people to be able to have access to that. But now we might start having a better understanding of where the gaps are. And because you can drill down into the data with such granularity, you can start assessing how you’re assessing, too. So maybe it’s the question that’s wrong, not the responses. Maybe there’s a little bit more nuance and conversation around a particular topic that needs to be explored. Maybe there is something more to be learned about general best practices in a particular area, which can then be fed back into other learning and into other groups. You can also leverage this data to help you understand how to challenge your learners to keep them engaged, and where to focus for groups or for individuals to keep it relevant, both to their interests and to their own performance baselines, and to any shifts in organizational priorities, of course. It also allows you to assess how best to spot gaps. So is it through additional challenges? Is it through a broader view of available macro-learning courses and content? Or is it down to more engagement with and from managers? And how do you sustain program interest?

So there are two key considerations here: the first is embedding the program, and the second is keeping close connections to business objectives, which means that learning can become day-to-day, but it never becomes so habitual that it stops being important and relevant. If microlearning is going to be effective, it kind of needs to be part of your program throughout the year. And going back to thinking about how we transform our content, micro-content needs to be a consideration at the point of origin, you know, when the content is being created in the first place, not just as an afterthought, because otherwise it’s always just going to feel like an afterthought. There’s nothing new about a shift from one-off learning to continuous learning, but there are increasingly more concepts and tools available to support this. Microlearning can act as kind of a punctuation point to any training program, an objective way to validate the learning and to link it to ROI. Continuous learning is all well and good, but the hard work is always going to go back to the take-home messages that matter most to the business. So, continuously learning what, exactly? Which brings us back to that self-curation aspect of modern learning. It’s important to ensure employees have consistent clarity on what the important areas of focus for them are, just as it’s important for you to be able to look at what has come before and really understand what needs more reinforcement and for whom, and how to fill those gaps, like in the heat map we saw a couple of slides back.

So now that we’ve reinforced this beautiful loop here in our poll, I’m just going to give you a little bit more kind of final context. When considering the science of learning, and vitally how to make those tadpoles adopt the correct behaviors, there are a few things to keep in mind. In order to learn, we must have time to allow our brains to consolidate new information; that time between repeated exposures to new information is known as the spacing effect. It allows time for our minds to consider that new information, to complete that initial encoding in the context of the rest of our work day. Once we’ve reached the point where the information is starting to fade, being made to recall it again really beds in those memories. Being made to retrieve something from memory, especially when it’s a bit of a challenge to do so, that’s the testing effect. And the short feedback loops not only help to identify gaps in our knowledge and present coaching opportunities really quickly and in real time, they also take the sting out of any knowledge lapses and do more to embed the learning in the long term. And when the loop begins again, leveraging mechanisms like scenario-based questions to help learners elaborate on what they’ve learned, or start using it in different contexts, well, now you’re growing those neural pathways, I mean, really into full-blown neural maps.
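The spacing and testing effects described here can be illustrated with a toy forgetting-curve model: simple exponential decay, where each spaced retrieval resets the curve and makes the memory decay more slowly afterwards. This is only a minimal sketch; the decay rate and stability multiplier are made-up illustrative parameters, not figures from the talk:

```python
import math

def retention(days_since_last_review: float, stability: float) -> float:
    """Exponential forgetting curve: fraction of material still recallable."""
    return math.exp(-days_since_last_review / stability)

def retention_after_spaced_reviews(review_days, horizon_days,
                                   stability=2.0, growth=2.0):
    """Toy model of spaced retrieval: each review resets the clock and
    multiplies memory stability by `growth` (the testing effect)."""
    last_review = 0.0
    for day in review_days:
        last_review = day
        stability *= growth
    return retention(horizon_days - last_review, stability)

# Single exposure on day 0 vs. spaced retrievals on days 2, 5, and 10,
# both measured two months (60 days) out:
one_shot = retention(60, stability=2.0)
spaced = retention_after_spaced_reviews([2, 5, 10], 60)
print(f"one-shot: {one_shot:.6f}  spaced: {spaced:.6f}")
```

Even with these crude assumptions, the spaced schedule retains far more at the two-month mark than a single exposure, which is the whole argument for repeated, bite-size reinforcement.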

And even if all of this learning only gives you an extra 10% of knowledge retention, which, you know, as we’ve seen from some of the data, it’s very likely that it gives you much more than that, well, as we saw in those first few slides, an extra 10% after two months is a 100% increase in your learning. And even better, that 10% is now likely to be on what’s been really consciously considered to be vital, the 10% that you really want your learners to take away. So much for the science. What does this actually mean in practice? We’re living in a world where, would you believe, I had to look this up yesterday, four million hours of content is uploaded to YouTube every day and 293 billion emails are sent every day. Helping your organization identify and focus on what they need to know right now is really key. And when those 293 billion emails also exist in the same space as 682 million tweets every day, getting engagement from those who need it, as well as those who may need to coach to it, means making content available, digestible, and relatable enough that there’s no excuse to ignore it. To get the most out of your content, how do you make sure that 90% of it hasn’t disappeared by the time the next quarter rolls around? Reinforcement is not just showing people the same information over and over. It’s creating learning programs where assessment is fun and challenging, not career-defining, and where knowledge gaps are uncovered and managed early and quickly, while they’re still fresh and relevant. Finally, wrapped around both of those concepts is ensuring that you’re able to collect objective measures on the program, for the benefit of the individual, of the team, and of the learning program as a whole.
Ensuring that those measures are linked clearly to specific and precise outcomes that the business really cares about from the get-go: where the data is more objective and the outcomes are clearly articulated, ROI conversations become, you know, so much less convoluted.
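The "an extra 10% after two months is a 100% increase" arithmetic a little earlier can be checked directly. It assumes the roughly 10% baseline retention implied by the "90% of it has disappeared" figure above:

```python
baseline = 0.10               # fraction retained after two months, no reinforcement
reinforced = baseline + 0.10  # an extra 10 percentage points of retention
relative_gain = (reinforced - baseline) / baseline
print(f"relative gain: {relative_gain:.0%}")  # 100%: retained learning has doubled
```

In other words, the same 10 extra percentage points are a modest absolute change but a doubling relative to the low baseline, which is why the claim sounds more dramatic than it first appears.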

So I’m just going to tell you a little bit about what we do here at Qstream. We were founded on the proven thesis that spaced, accessible microlearning challenges work to create long-term behavior change. The Qstream principles and tools seek to help you enrich your learning programs for more tangible benefits over the long term. We work across a number of industries, supporting every one of our clients in ensuring that each of their own critical learning goals succeeds.

Are there any questions? So there’s just a question here: this seems great, but don’t learners get sick of microlearning too? So it’s a fair question. That’s a great question, actually. It’s basically the question that lies at the heart of a learning program’s success or failure. I mean, in a knowledge economy in which organizations compete in large part on how quickly they can bring their learners up to speed on what they need to know, it’s really amazing how much knowledge people forget, and even more so how much learning avoidance exists out there, both for managers who need their people to learn skills so essential to their jobs, and learners, who should really want the information that they need to do their jobs. But the learning experience in traditional learning systems, you know, the current-gen tools, has kind of created a level of learner apathy that companies have spent the last, you know, five years overcoming. Microlearning really comes into its own here. We have seen dramatic engagement numbers with our clients, typically 90% or above, results that, you know, some of our customers have said they find hard to believe.

There are a lot of reasons for that, but a lot of it is simply being respectful of people’s time. So aspiring to two to three minutes a day, versus, you know, 20 to 60 minutes per eLearning course, that honestly goes a long way. People are so used to seeing a lot of stuff in the digital junk drawer that they really appreciate it when someone’s actually considerate of the fact that they have other things to do, that they have other priorities during the day. That said, going back to the goal of the novelty not wearing off: we’ve accomplished that by making sure that the cohorts, the learning groups, the question types, the visuals, and the game mechanics vary just enough to keep people interested, while remaining bite-size, you know, only those two to three minutes a day in concept. And we talked earlier about running the program: we give our clients advice about not just the spacing effect within the Qstream, so the amount of space that you leave between questions, but also the spacing effect between the Qstream challenges themselves, to avoid the challenges kind of disappearing into the noise of the digital junk drawer. And engagement there has also been pretty good: we typically see engagement levels go up in the second and third year as people get used to it. So, going back to that, it’s habitual without losing its value. They like the pace of the learning, and they tend to come back for more. And, you know, these results made it really fun for us to partner with our clients to roll out the challenges that are critical to their organizations.

Were there any other questions? Does anyone else have any other questions? We’ll leave the line open for a minute. So someone’s asking: does this completely replace an LMS? We would tend to say no, it complements it. I think I was saying earlier, talking about using something like microlearning as a punctuation point to your learning. I mean, there’s always going to be space for an LMS in learning programs, but it just feels like we do need something a little bit extra to get that engagement and to get that retention. You know, speaking for Qstream, we’re not trying to be the next big content management system; that’s never going to be the best use of it. So it’s definitely a complementary, not a replacement, product. And any other questions that we might get back, we’ll answer them elsewhere. So this is perfect. I think, Karen, if you want to take over, I just want to say thank you to you, Karen, for all your work. I know that you put a lot of work into this. And thank you to the eLearning Guild for the opportunity to speak here today. And of course, thank you everyone on the line, thank you all for attending and for all the great questions and the great engagement. Over to you, Karen.

Karen: Thank you so much, Bryony. That was a great presentation, and if you click forward on your presentation, we can see some contact information that you were going to share with us. So right there, if you’d like to learn more about Qstream, or if any other questions occurred to you, then certainly get in contact with Bryony or with Qstream. They also have a lot of different links there for you. This presentation will be posted as a PowerPoint, and the recording will also be included in our sponsored library. So thank you so much for that session on microlearning; it was interesting to see some of those statistics. I had no idea of the staggering amount of information that is out there every day. I believe you said four million hours on YouTube every single day. Just amazing to me. It is really something. So I’m glad that we have some ideas here on how to cut through all of that and really help our learners. So I appreciate that, and I appreciate the time, and I thank Qstream for sponsoring this webinar. And I do thank GoToWebinar, which is by LogMeIn, for providing us the platform to bring this session to you today. And with that, I’m going to say thank you to everyone. Thank you for spending your time with us, and have a good rest of your day.