Microlearning Under the Microscope: 6 Myths Debunked
Learning leaders have so many choices for delivering on the strategic promise of a learning organization, but there’s so little time. Making these choices means the difference between a growing, high-performance organization and one that struggles with the skill gaps that erode a competitive value proposition.
Long gone is the dominance of the LMS as the singular choice of central learning system for corporate training and development. With microlearning and learning experience solutions emerging as independent learning tech categories, leaders have options to experiment with new ways of learning – ways that might work better in their cultures, for different demographic segments of their workforces and in general.
But with so much changing (and with no slow-down in day-to-day learning responsibilities), how can organizations sort through the myths and deploy next-generation solutions that will work best in their unique setting?
In this webcast, HCM tech expert Jim Bowley, VP, Product at Qstream, puts microlearning under the microscope…what it is, what “good” looks like, and where microlearning can complement existing learning technologies and practices.
Vice President, Product
Jim guides the Qstream product direction to ensure customers benefit from ongoing innovation as their L&D needs evolve. In the two decades prior, at PeopleFluent, he held senior product management, product strategy and sales roles, and led innovation strategy for their suite of talent management and learning solutions. Jim has also held organizational and communications consultant positions at Willis Towers Watson and Mercer. A recognized thought leader in the human capital management software industry, Jim has been a regular contributor to leading publications such as Talent Management Magazine, Chief Learning Officer, Human Resource Executive and HR Magazine.
[START OF TRANSCRIPT]
Gary: I am glad to welcome you here today. We have a session here we haven’t done before. There has been a lot of attention on microlearning, a lot of different approaches to it, a lot of different people talking about it. We’ve had a lot of speakers develop interest in it, a number of sponsors too, and today, Jim Bowley – he’s VP of Product at Qstream – is going to take us through some of the myths that you find involved with it.
You’ll recognize some of them, and maybe you’ve even fallen for some of them in the past. Jim’s going to help us get a better understanding of what the real truths behind them are, and we’ll start that in a few moments. If you’re watching the recording and you want a certificate showing that you participated in and learned from today’s session, you can click that link right there on the screen to use the insights widget to create and print a certificate stating and proving that you were involved in this session. If you need help with that, we’ll show you how at the end of the webinar too.
A lot of events are coming up. In particular, next week, James Bishop – well, this one is tomorrow actually. It’s a real special one; we don’t usually do them on Friday, but James is in Hong Kong and he’s going to talk to us about how game-based learning is being used to support the UN’s 2030 sustainable development goals. We just added this recently. If you haven’t already registered and that interests you at all, be sure to click the link on the bottom of the slide. You can finish the registration process later on. It’s going to be a great session. Please note it’s at 8:00 a.m. Pacific, 11 Eastern, not our usual time tomorrow, okay.
Then Michael Salard is going to be with us, as you can see, and Katrina Baker from Adobe will be back, talking about 10 ways to improve your L&D program on a budget, which I know many of you are on. You can find all of those, and all the rest of the webinars, by going to trainingmagnetwork.com and clicking on the button on the right-hand side where it says calendar of free webinars. To find the recording of this session with Jim, you can just click on the recorded webinar archive button. You will find all 1,064 recordings of all the webinars we’ve ever done, but you will find this one listed at the very top of the page. Very easy to find. So that’s how you do that.
We want to thank Qstream today. They are experts at making people better at what they do. They are the only mobile microlearning app that is proven by science, validated by the market, and driven by data to improve knowledge retention and, most importantly, change behaviors. So, thank you Qstream for making this webinar possible, for being a great partner in this, and also for making Jim Bowley available to help us through this. So Jim, I want to welcome you to Training Magazine Network. We are delighted to have you here. Everybody help me welcome Jim, please.
Jim: Thanks a lot, Gary, and I really appreciate everybody who’s turned out today. We’ve got a lot of folks on the line, so it’s exciting, and I appreciate the opportunity to share some information. One thing that I wanted to mention is that the Training Magazine folks do a great job on social media, so obviously follow there, and our own Qstream folks will be live tweeting today. If you want to follow along, you can tag @Qstream or use the hashtags #microlearningsixmyths or #microlearning.
Just a little bit of background on myself. I’m VP of Product at Qstream. I’ve spent a couple of decades in the learning and talent management space, usually in product roles, and I’m coming to this from that vantage point with the observations I’ve gathered along the way. It’s probably obvious from this slide, but if you search microlearning, you get an awful lot of results on Google – I searched the other day and got 1.2 million. Interest in microlearning surged, particularly over the past half decade, and it’s only built since. But that leads to some of the things we’re discussing today: the idea that there are an awful lot of different ways people define this concept.
The questions we want to answer today are, in part: can we come up with a common definition, or at least one that’s valuable to the types of audiences we’re serving within organizations? Is there a way to get more consistent usage of the term microlearning? One of the things we wrestle with a lot is whether it’s even the right terminology, or whether it fails in some ways to capture the things that really make it valuable. Then, how can you quantify specifically how it helps an organization? These are all things we think about all the time, because they tie back to the idea that there’s a serious problem in how traditional learning has worked in organizations.
Again, I’ve spent a couple of decades in the space. The ambition of learning online has always been consistent in terms of what we’re seeking to achieve, but there’s been a challenge: in the digital world, people are flooded with information, and in a lot of cases they forget it. It doesn’t even have to be digital information – a lot of things compete for attention, and that flood of information leads to people forgetting things. There is also an uncertain learning ROI in a lot of cases with so much learning going on. Really quantifying the legitimate impact of learning is something I’m interested in, and something I know most learning professionals are as well.
Then there’s getting people to actually engage – in this case, I don’t mean employee engagement as in engagement surveys, but getting people to be active participants in the hard work that goes into setting up learning classes. In my experience, there’s a challenge where, in a lot of cases, people risk paying lip service to training activities versus taking them as seriously as we want them to. Then finally, behavior change. Are we offering learning that will not just convey the information, but confirm that the intended behavior change is achieved by the people taking the learning course?
So, today I’ve put together six myths for our discussion. I won’t read them out here in the interest of time, but you see them on the screen. Each falls into a different category, and some of them intersect a bit, but I’ll go ahead and dive in. We’re going to look at how these myths are debunked, but also at how overcoming them creates the opportunity for value in your organization.
So, first. The first myth is that it’s just shrunken content. This is the biggest myth of all: that microlearning is just smaller versions of the same stuff that’s been out there all along. The reality is that this isn’t about shrinking. It’s about rethinking. Mark, I think I read your note – I’m going to stay on more now, so thank you for the pointer. I’ll try to control that. Let’s consider the digital junk drawer and how that works.
There are a lot of organizations that risk creating a big digital junk drawer: a drawer of well-intentioned ideas that inadvertently create confusion, where the glut of information gets in the way of people focusing and prioritizing. Just shrinking content doesn’t do much to improve how people learn, because shrinking gives you the same things recaptured at a smaller size. What we’re looking for is the ability to transform the content. Transforming content, versus merely curating it, means we can look at the content differently and take the opportunity of microlearning to really change how the information is presented to people. So, rather than just shrinking things, this is the opportunity to rethink how the content is messaged to people and how it’s delivered.
Second, there’s a strong contingent of people who view microlearning as just chaptering existing content. Once again, when you look at the ability of microlearning to have impact, the idea that this is just the same stuff repackaged misses an opportunity. Microlearning can be precision learning, versus the generalized learning that, in effect, hasn’t really reached audiences. There have been a lot of discussions about packaging things a different way, but this misses the point: it isn’t simply that big content is what we want to avoid. It’s about really asking: is the content right for the audience? Is it right to have the kind of impact we want it to have? So chunking or chaptering content is not what we’re looking for if we want to take advantage of this and transform how people learn through microlearning.
Then finally, there’s the idea that this is a transformation opportunity. When we look at traditional learning online – the size of the screen, how people link courses together, the amount of content that gets packaged into SCORM courses, webcasts, et cetera, and then repackaged for the audience – there’s an audience attention span to consider. With mobile being a driver, people want things at the size of the phone. There’s nothing new about that, but it’s not just the phone. We want to design this experience to have impact. So rather than just shrinking bigger things, what we’ve seen have the most impact is designing for the brevity that people have, designing for the attention span of the audience, versus just trying to break things up into little pieces. Microlearning is an opportunity to really change the nature of the information that’s pushed to the audience.
Speaking of microlearning, let’s take a look at one example. In this example, we’re looking at how a learner could use microlearning in only a few minutes of the day to actually gain value from it. On the left, we’re looking at how unobtrusive it is: a challenge delivered to someone’s phone, versus becoming part of the larger digital junk drawer of things competing for attention. It’s something that has impact and introduces itself into someone’s day at a size they’re willing to accept, unobtrusive to their actual day-to-day activities.
Number two, rather than just making the content smaller, one of the approaches we’ve had clients take is to make this a precision scenario or a challenge. So not a first presentation of content broken into smaller sizes, but rather a challenge, coupled with choices, that leads to engagement by the users. Not “here’s information for you to digest,” but “here’s a problem and a challenge you need to solve.” Then, when someone answers the question, there’s the responsiveness of the answer and the availability of the right answer. Showing whether the answer is right or wrong, and what the right answer is, is part of the principles of spaced learning: engaging people to actually interact with the content, repeating it, and making it small enough that people don’t really have an excuse not to digest it.
Then finally, what you’re seeing here is the coaching on number four, which provides the ability to see what the right answer is, and number five, competition: showing who’s participating, how they’re doing, and how many right answers there are on a team or individual basis, all in an effort to make sure you’re engaging people. Setting this up so it isn’t just smaller versions of the same things – shrunken content – but rather a challenge that engages users is where we’ve seen microlearning have real impact. So the myth one takeaway is that microlearning presents the opportunity to transform content for urgency and intended audience impact, improving relevance and retention. There’s far less value in just taking the same things and turning them into shrunken content.
So with that, let’s do a quick audience poll on how you use microlearning at your organization today. I think this poll will refresh in real time on the screen. So far the leading answer seems to be “don’t use microlearning” at 46% and growing, which is interesting. There’s a healthy response for self-service learning and quick-fire training, and some answers for push programs. But interestingly, the largest response so far is that people don’t use microlearning today, which is exactly the reason we’re having this discussion. I think that’s probably enough time on the poll, Gary. Should we keep moving?
Gary: Sure enough. That’s just fine.
Jim: Thank you very much. The second myth is that this is something that’s just valuable for learning. The opportunity here is that when you have something that can gather people’s attention – and candidly, I think it’s valuable for learning professionals to ask: what’s the punctuation point, what’s the message the organization needs to send? As I’ve continued to look at microlearning, it’s obvious that it’s valuable for learning activities, but in some ways it’s just as important for sending the focused message that people really need to understand.
This isn’t the same delivery mechanism repackaged. In a lot of cases, if something’s working just fine in the learning management system, it doesn’t need to be in microlearning. But if there’s something someone needs to know right now, microlearning is the perfect way to send that message, and that could be a concept that’s important to the business of the organization, or something changing in the business that can’t wait for a full course to take place.
The ability to send that message, and for it to be urgent and critical to the business, is part of what we’re trying to do to make microlearning as relevant as possible and have as much business impact as we can within the organization. Getting that right ties back to what we mentioned a second ago: rather than just turning things into the same message repackaged, if this is going to be a messaging system, it should be edited by your team into something precise and small, where the content, the focus, and the value to the audience are all clear. Perhaps a challenge, in the way I mentioned on the earlier slide, will help. To the extent that the value is obvious – it’s designed to be a small thing to send, it’s interrupting me, but I know why it’s valuable – that’s where we’ve seen microlearning delivered most effectively in organizations.
Then finally, the last concept we have here is respectful accountability. What we’re looking at is the idea that when you make things as impact-centric as microlearning is intended to be, and when you go through the editorial process that gets you to a precise message, you’ll still have some people who ignore it. But it’s created in a format that’s simple enough and clear enough that, whether it’s a manager or an employee, nobody really has an excuse not to engage in the process.
Part of what we’ve learned from conversations about microlearning over time is that managers can be a failure point in the learning process, and employees and learners themselves can be a failure point too. With content rewritten and created for a microlearning context, it’s easy to turn around and say: you no longer have an excuse not to interact with this content in an effective way. It’s up to you to get it right. And for those who don’t participate, there’s a variety of game mechanics – we’ll talk about them in a few minutes – that should make it very difficult for people to have an excuse to disengage from the content.
Here’s one example of that – a quick example of a client deployment. On the left you’re seeing a measure of performance within the actual microlearning process: which teams are participating, which aren’t, and the ability to interact with those teams to encourage them to participate, particularly on the most critical issues. Second, the engagement process: how many people are actually active? Are there individual employees who aren’t participating, and how can we reach out to them to make sure they’re really digging into the content?
Then finally, is proficiency growing? We’ll talk a lot more about proficiency and measures, but this is a measure – particularly in the context of spaced learning and microlearning – of questions like: did someone answer the question right the first time and then answer it incorrectly the second time? Have they answered it incorrectly both times? Are they actually improving? Is the message meeting the audience? To the extent that we can now intervene in this process to really coach people to participate, it should help the measure of proficiency grow over time.
So the takeaway on myth two is that this can be as much about sending a message that sticks with its audience as it is about learning. Of course it’s about learning, but I think all learning is intended to have that kind of impact, and we’re of the belief that microlearning is a great way to put a punctuation point on the kinds of things that are critical for the learning function to impress upon its organization.
The third myth we want to go through today is that push isn’t required. I’ve seen very effective content libraries, including microlearning content libraries – great examples, particularly of video being used that way. But in the definition of microlearning I’m using today, there’s something that’s most important, and that’s the reason it was distilled and created as a microlearning asset in the first place. In this short-attention-span world, you can’t really assume that people will go into a knowledge store and pull things down, particularly the things that, if they reached the level of being important enough to be there at all, need to be called out. So the idea of pushing things through alerts, punctuating the most important stuff, will always be complementary to the broader macrolearning principles in a learning management system.
But when we’re thinking about spaced microlearning, we’re thinking about something that needs to stand out from the digital junk drawer. It needs to be clearly of value, and for that it needs to be pushed. Crowdsourcing is great, social is great, self-curation is great, but they’re fallible, and this idea that this is something people need to know is behind how microlearning can serve a purpose that’s different from just storing – just more content management within an organizational context.
I’ve mentioned spaced learning as we’ve gone along. The game mechanics behind spaced learning, and the repetition of it, at a minimum encourage participation, as we saw on the prior slide. People who care about this will respond to the social pressure that comes from game mechanics. But it’s also a form of respecting the audience: knowing they can only take on so much at a time, and knowing that being reminded of something is scientifically proven to help people not forget. In all of these dimensions, game mechanics really do help with this push. So it’s not just “pay attention to this” sent via a mobile alert or an email; the entire process is designed not to interrupt the day, but also not to let people off the hook. In our experience, the repetition underlying those game mechanics, the competition in them, and their ability to tie into a workflow that repeats and reinforces the information have proven to be incredibly effective in a microlearning context.
Then finally, this is part of digital expectation. There are plenty of things out there where people have content pushed to them all the time. Most online shopping experiences are one big digital push at this point, so people are accustomed to not having to go find things; they’re accustomed to things being curated for them. It’s good to have a digital store for certain types of things, but for the things people really need to understand, the digital push is something people are simply used to from how they experience the web. This isn’t just pushed to the phone; it can come to an inbox. But that idea of “I don’t have to think about it, it’s delivered to me, it’s not overwhelming, and I can actually take action on it” – these are all things that become extremely effective in the context of microlearning.
One of the things that goes with this is the idea of microlearning as an active program. I don’t want to make it too formal, but this is an example of a customer and how they rolled out microlearning, recognizing: we can only gain so much of the audience’s attention, so how do we make sure this information is received in a programmatic way? So just one recommendation: run microlearning as an active program, versus randomly putting things up, as another way to overcome the assumption that assets in the library will somehow stand out to people. Choose the moments and dates when people need to understand things. These are things that are best served in a program.
Then here’s one example of a leaderboard. One of our clients recently held a focus group with their audience, where one manager was overheard saying, “I wasn’t going to do this, but when I saw it was a competition, I jumped in.” So, what you’re seeing here is that some people will join the competition because they want to succeed and be near the top of the leaderboard. Other people do it because they don’t want to be at the bottom. Either way, this gentle push of game mechanics tied to microlearning is a really big help in how the reinforcement of the knowledge works – the difference between microlearning as just more content at a smaller size and a challenge that people will actually digest and retain.
So the myth number three takeaway today is that the most important messages can’t remain passively in a digital junk drawer. They need to be messaged to audiences in the ways we’ve just described. This is the opportunity of microlearning. A quick update: Diane, I don’t mean this to be an advertisement or information overload, but I do want to give one real-life example. Apologies if it came across that way; that’s not the intention. And as a footnote, I welcome Dan to the webcast – I’m just looking at some of the comments.
The next myth we want to go through is that this is somehow a generational thing. The reality is that technology isn’t generational. The audience penetration – the ability to reach an audience in a concise digital format – isn’t unique to any one audience. It really has no boundaries, and people expect this either way. I agree with the notion that there are different ways people learn in the workforce, but at the same time, there’s an awful lot of opportunity here to make sure this is something that appeals to everyone.
The second point is tied to that: attention span also isn’t a generational concept. Everyone has devices now. Everyone wants information faster, so we transform the information down to a level that meets the demand for relevant information – the demand for faster information is universal. The ability to deliver just what matters, versus a larger body of information, is also how this works well alongside macrolearning. There are only so many chances to reach an audience, so designing for the attention span of all generations is pretty critical.
Then finally, ideally from my perspective, if you’re editing the content of the microlearning correctly, the message is one that’s meant for an entire cohort, not something generational. So to the extent that we’re getting the content right and the unified message right, this should be about organizational alignment. Even though there are other ways to learn, having microlearning do what we mentioned before – send a message – is part of the opportunity here. So, I want to pause for one second to take a look. Anybody have any comments that I haven’t read on the chat that we’d want to pause and take a look at right now?
Gary: Well, you’ve fomented a great deal of discussion over here on microlearning, Jim–
Jim: Okay, and in general, looking at the attention span piece, it looks like people agree that it’s not a generational thing, which of course leads to my myth four takeaway: avoid millennial clichés. I see that some questions may be coming up at this point, so I’ll pause to see if that’s the case, or maybe it’s just commentary, but I feel like I’ve been missing a really good dialogue. That’s why I thought I’d take a glance over, versus trying to read and present at the same time. It’s suspenseful – I see multiple people responding. Thanks, Jonathan. I appreciate the feedback.
Gary: So just a question for you. Peter asked, how do you assess what the user will be interested in?
Jim: Yes. There’s a variety of methodologies we’ve used to do that, but I think it’s about starting with the stuff that’s most critical and then transforming it. What we start with, at least in my experience, is making it a challenge, because sometimes the content people need isn’t going to be stuff they’re interested in by definition – it’s stuff they have to know. The important part is transforming it into a format they’re unlikely to miss. Some of the art of microlearning, from my perspective, is getting it down to a challenge where the answers are accessible enough that people can actually interact and have a chance of getting it right, but descriptive enough that they’re challenged – and then they get the answer directly afterward.
A lot of the people I’ve seen write this best are really good at editing, but I don’t want to underestimate that, particularly when you’re dealing with a broader group of content managers, content owners and subject matter experts, it can be really difficult to do things simply versus at length. But getting to that level – it’s a challenge, it’s been transformed, it’s distilled into its component parts for microlearning – means even the stuff people aren’t interested in should at least be digestible by the audience. Excellent. I think the rest of them are comments. So, do we have any other questions in the chat, or shall I move on to myth five?
Gary: None that you’ve missed.
Jim: Okay, great. So with that, let’s do an audience poll on how people measure learning today. Completion rate is winning so far. Actually, it’s too early – it’s neck and neck with one-off post-learning assessment. Yes, I see a note making the good point that it could be more than one; maybe that’s why it’s neck and neck. Ron added reaction surveys. We’re getting a little bit on quantifiable impact on business goals, and one-off assessment. So it looks like there are a few different ways. What are some of the others? Bruce adds success case methodology. Excellent.
Gary: Looks like everybody’s weighed in there, Jim.
Jim: So the end result was right about where we started, I guess: completion rate and one-off post-learning assessment. Excellent. Great.
Jim: Let’s head on to the next one. Not surprisingly, the next myth is that it’s hard to quantify. We started by talking about the right content. One of the things that becomes important in this context is that if you get the editorial content right, get it to a consistent size, get it to the right cohort, and particularly if you run these as a program, there’s going to be a baseline and a time boundary that allow you to measure how microlearning has an impact. In fact, in some ways, what we’ve found is that it creates a bit of a real-time window into people’s proficiency on critical items.
There are a lot of examples of this, but what we’ve found is that between the initial answers in spaced learning and the later answers, you can see where people have absolutely grown, where someone hasn’t attained proficiency, and where people have simply not engaged. That measurement is happening in real time and becomes a really powerful element of the program. This is why, on the earlier slide, we looked at running this as a program with set timing, because having a beginning and an end, by definition, lets you look at it and say: did people actually do it? Did it happen in the time frame it needed to happen? We have a lot of clients that use this prior to a sales kickoff meeting as a sales training session.
It has a boundary. It shows whether people engaged with the learning, and it also shows where they started and where they ended: how did people’s proficiency grow? Those spaced learning impacts, and having a consistent baseline for the whole cohort, are great for the individual, but they also show the strengths and weaknesses of the team as a whole. We’ve found proficiency to be the most accurate measurement for microlearning courses. It fits, as I mentioned before, with spaced learning intervals that show the growth, or lack of growth, in someone’s proficiency from start to finish, show where and on which questions that growth didn’t happen, and provide a real business fitness heat map that ties microlearning to a payoff that illustrates real business value.
So here’s an example of that. On the left axis you can see a sample of the types of questions being asked. On the top axis you can see different divisions, with their relative scores in the boxes in the middle. This is a sample from one client’s deployment, but over time this heat map illustrates the questions where people actually grew in proficiency, which areas still have weaknesses, and which groups engaged well. To the extent that microlearning is run as a program in a spaced learning context, this creates real visibility into the actual current health of the organization. Of course you can drill down into individuals within the groups, but it also creates a picture of whether our people are learning the most critical material.
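As an illustrative sketch only (this is not Qstream's actual data model; the field names and percent-correct scoring rule are assumptions), the heat map described above boils down to aggregating answer records by question and division:

```python
# Hypothetical aggregation behind a proficiency heat map: rows are questions,
# columns are divisions, cells are percent of answers that were correct.
from collections import defaultdict

def proficiency_heat_map(responses):
    """responses: iterable of (division, question, correct) tuples.
    Returns {question: {division: percent_correct}}."""
    totals = defaultdict(lambda: [0, 0])  # (question, division) -> [correct, answered]
    for division, question, correct in responses:
        cell = totals[(question, division)]
        cell[0] += 1 if correct else 0
        cell[1] += 1
    heat_map = defaultdict(dict)
    for (question, division), (right, answered) in totals.items():
        heat_map[question][division] = round(100 * right / answered)
    return dict(heat_map)

sample = [
    ("Sales", "HIPAA basics", True),
    ("Sales", "HIPAA basics", False),
    ("Support", "HIPAA basics", True),
    ("Support", "HIPAA basics", True),
]
print(proficiency_heat_map(sample))  # {'HIPAA basics': {'Sales': 50, 'Support': 100}}
```

Cells with low percentages are the "red things on the slide" Jim refers to: candidates for coaching or for rewording the question.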
Going back to my original premise, this is for the material that shouldn’t sit in the digital junk drawer but should stand out separately. If a concept meets the editorial threshold of what microlearning really should be, this is stuff people can’t afford not to know, and to the extent you still see red cells on the slide, that means there’s work to do from a coaching standpoint. In some cases, when we’ve been able to do qualitative analysis, we’ve found it might mean the question is being asked incorrectly. So for future versions and future cohorts, it can easily become feedback for the people creating the content. It can also be feedback for managers who may not be taking it seriously and who are, by definition, becoming the choke point in the learning process, the point of failure that takes good content and doesn’t deliver it to people in a serious enough way.
The myth number five takeaway is that running this program correctly goes hand in hand with the precision measurement it enables. For the things you do deem worthy of microlearning, as defined by spaced learning in the way we’ve talked about it today, there’s a real opportunity to make sure the measurements are timely and produce the kind of output we looked at a moment ago.
Before we go ahead, I’ve looked at the chat as we’ve gone. Gary, if you don’t mind me asking, are there any questions or things we want to call out?
Gary: There are comments from the members to each other on some of their various points, so I’d say go ahead and let the discussions continue.
Jim: Great. So no questions yet, it looks like. I will say–
Gary: Wait, wait, wait, hold it. Hi Greg. Greg just asked, what is editorial threshold?
Jim: Okay, great. Editorial threshold. Thank you for the question, Greg. It’s the same thing I mentioned earlier: something merely being shrunken isn’t really going to be the kind of challenge that changes the content and makes it fit the medium. It’s that level of, is it phrased in such a way that it works? And candidly, just as I mentioned a second ago, this is where we see good feedback for clients using microlearning: sometimes the question wasn’t phrased correctly. So getting the question right, making it a precise challenge that works in a microlearning context, is what I mean by editorial threshold.
Gary: We actually do have a couple of more questions if you have time.
Gary: Roxanne wants to know, is the current output measured by self-report of the learner, and if so, does that make it a little hard to have confidence that it’s accurate?
Jim: So, in the case of the example I’m using here, this isn’t self-reported by the learner. Thank you for the question. It’s tied to their responses to the microlearning spaced learning challenges. In this example, each person gets the question up to three times. If they answer it correctly twice, the question is retired. If they answer incorrectly, they get up to three chances to answer it correctly. So everybody goes through the process, and there are different ways to score it in how we work, but ultimately what creates the baseline is: did people actually finish the questions? Did they answer correctly? Did they answer correctly originally? Did they answer incorrectly first and then correctly the second time? So part of this is measuring whether people got it right the first time.
The more optimistic side of that is the principle of spaced learning: some difficult concepts are hard to grasp the first time. If something is enough of a challenge, and this goes back to the editorial threshold, it should be a little bit hard, and if you want people to learn, challenging them is a good way to do it. Did they move from not knowing the answer on a critical concept to knowing it? The process of answering the questions, the challenge itself, is what creates these measurements and this output. So it ends up being quite mathematically sound, and even when someone doesn’t answer a question, that itself delivers some data.
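The retirement rule described above, up to three exposures per question, retired after two correct answers, can be sketched as follows. This is a minimal illustration of that rule as stated in the transcript, not Qstream's implementation; the class name, constants, and "proficiency gain" definition are assumptions:

```python
# Sketch of the stated spaced-repetition rule: a learner sees a question
# up to three times; it is retired once answered correctly twice.
class SpacedQuestion:
    MAX_ATTEMPTS = 3
    CORRECT_TO_RETIRE = 2

    def __init__(self):
        self.attempts = []  # True = correct, False = incorrect

    def answer(self, correct):
        if self.retired or len(self.attempts) >= self.MAX_ATTEMPTS:
            raise ValueError("question is no longer active")
        self.attempts.append(correct)

    @property
    def retired(self):
        return self.attempts.count(True) >= self.CORRECT_TO_RETIRE

    @property
    def proficiency_gain(self):
        # Did the learner move from wrong on first exposure to right at the end?
        return bool(self.attempts) and not self.attempts[0] and self.attempts[-1]

q = SpacedQuestion()
q.answer(False)  # missed on first exposure
q.answer(True)   # correct after spaced reinforcement
q.answer(True)   # correct again -> retired
print(q.retired, q.proficiency_gain)  # True True
```

The `proficiency_gain` flag is what feeds the "initial answer versus later answer" trend Jim describes: a wrong-then-right pattern is counted as growth, not failure.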
Gary: Okay, and do you have time for one more? We might have to save another one or two until later.
Jim: Yeah, I can take one more, because we’re between myths five and six. We’re eating into the question time, but I figured this might be– just to editorialize for a second, Gary, I think how active this group is in the chat is amazing. So kudos to you for this. I really appreciate the back and forth.
Gary: That’s one of the reasons we love having a community instead of just doing webinars, and another reason we love Adobe Connect. So, Jonathan asked this question: what would task-based microlearning look like from your perspective?
Jim: Task-based microlearning. So, I want to make sure I understand task-based. In all cases, the way I’m defining microlearning today is a concept broken into a challenge delivered through spaced learning and repetition, but I want to make sure I answer the task nuance correctly. How to fix a broken pump? Okay, that’s a great example of task-based, so let me use it. If it’s a really complex engineered pump, you wouldn’t want someone to use microlearning to learn how to build the pump in the first place. But let’s say there’s a software element to the pump that needs a patch, there’s been a change, and you need the people who are fixing the pump to actually know how to fix it right now.
Microlearning is great at interrupting people who by all accounts know how to do their jobs, telling them there’s been a deviation from that understanding, and then measuring whether they paid attention, because if they didn’t, they’re not going to fix the pump correctly. That level of “I’m interrupting your day to make sure you know information you might otherwise not find, somewhere in a learning course or buried in unit 23 of a SCORM course,” is where I’d place microlearning in a task-based scenario like a broken pump.
Gary: Great. Well, thanks for taking time for that. I see that we’re on slide 37 of 50, so we’d better move along, and then we’ll catch the rest if we can.
Jim: Luckily we already did several of those right away, so we’re now up to 40 of 50. Again, Gary, I really appreciate this. It’s been a great session, so I promise you I’ll bring it in on time.
Jim: The last myth is that microlearning is synonymous with learning management. I have nothing against learning management systems, and I’m not naive enough to think they’re not incredibly important, but what we’re talking about today is different from the traditional LMS. I love Mike’s comment that the LMS is where courses go to die; that’s what I’ve been referring to as the digital junk drawer. Having used LMSs in the past, it’s pretty critical to understand that there’s a difference between compliance at a macrolearning level and what we’re talking about here. There is a real difference.
It can seem like a point difference to the LMS, and a lot of people will look at this as just shrinking quizzes, but if you’re not putting this in the form of the challenges we’re talking about today, then indeed, and I’m agreeing with Mike here, it is where courses risk going to die, versus something like the pump example we just discussed in task-based microlearning: something people can’t afford to forget right now, that they can’t do their jobs without. Microlearning is more like a scalpel than an anesthetic in that sense. To the extent that we want to apply it to the things people need to know, it becomes pretty critical, and it’s very different from what the LMS was intended to do, which is to prove that people took a course more than to be sure they learned it.
Second, there is an intersection here, and it’s what I’ve been talking about relative to the learner’s profile. We have a lot of folks who use the proficiency scores delivered through the heat map and the process we looked at earlier, so that proficiency becomes an important part of someone’s overall talent or learner profile. This can also tie into things like certification: the scores could be fed in to say someone did or didn’t pass a certification, which can affect all the things you’d expect in learner or talent profiles, such as compensation and so forth. So there is an intersection where the proficiency scores work well to record back to the LMS a data point that will be valuable, to the extent that the LMS remains the system of record for the learning process and the learner records. Some people have moved to learning record stores for that, so it could just as easily go there.
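One common way to record a score back to a learning record store is an xAPI statement. The sketch below is illustrative only: the actor email, activity URL, and score are placeholders, and a real integration would POST this JSON to the LRS's statements endpoint with authentication, which is omitted here.

```python
# Hypothetical xAPI statement carrying a microlearning proficiency score
# back to an LRS or LMS acting as the learner's system of record.
import json

def proficiency_statement(email, activity_id, score):
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/scored",
            "display": {"en-US": "scored"},
        },
        "object": {"objectType": "Activity", "id": activity_id},
        "result": {"score": {"scaled": score}},  # xAPI scaled score: -1.0 to 1.0
    }

stmt = proficiency_statement(
    "learner@example.com",                            # placeholder learner
    "https://example.com/microlearning/pump-patch",   # placeholder activity
    0.85,
)
print(json.dumps(stmt, indent=2))
```

Because the statement is plain JSON, the same payload can go to an LMS that accepts xAPI or to a standalone learning record store, which is exactly the flexibility Jim notes.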
Then finally, and this is probably implied by everything I’ve been saying so far, this isn’t an either-or proposition. This is intended to be a very specific learning process for a very specific need. Diagnosing when you need this kind of power in a learning program is an important part of this, and that, like the proficiency scores I mentioned earlier, is very much tied to an overall strategy of using the right learning method at the right time. Too much microlearning fatigues the audience as surely as anything else, which is why I’ve been suggesting throughout that this should be a program, should be spaced, and so on; that’s really important to avoid the fatigue factor.
To Peter’s question, the intention is that for companies that use it, Qstream’s products being one example, the feedback is part of a learning ecosystem that ties into a provider and gives that provider another dimension on an individual’s learning progress and career path. As for why we do this at all, here’s an example I think everybody’s probably familiar with: there’s an awful lot of money spent on corporate training globally today. Studies have shown, and whether the real figure is a bit higher or lower than 79% I’ll leave to everybody’s judgment, that most of it is forgotten, and that’s where the term digital junk drawer comes from. The learning courses themselves are often extremely good. Sometimes they’re not, but they’re often really good, particularly if they’ve been developed by learning professionals rather than just by throwing online a webcast someone did.
But microlearning is there for that part of it: the 79% of information that’s forgotten within 30 days. So when we’re looking for a better way, the notion of precision reinforcement through microlearning becomes pretty critical. Andrew, to that point, I don’t mean to demonize the learning management system, or I guess you’re not suggesting I’m demonizing it, but it’s a really good point: the LMS is simply a necessary thing. Using the pump example again, you can’t expect someone to know how to build a pump from scratch on an assembly line without that foundational training, but microlearning is really good when you want to make sure someone remembers everything they knew about building that pump half a year after the course took place, or when the pump needs a software patch.
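As an illustrative model only, the 79%-forgotten-within-30-days statistic can be fit to an Ebbinghaus-style exponential forgetting curve. The functional form and the calibration below are assumptions for illustration; real retention varies widely by learner and material:

```python
# Exponential forgetting curve R(t) = exp(-t / s), with the stability s
# chosen so that only 21% is retained at day 30 (i.e., 79% forgotten).
import math

def retention(days, retained_at_30=0.21):
    stability = -30 / math.log(retained_at_30)  # ~19.2 days here
    return math.exp(-days / stability)

for d in (1, 7, 30):
    print(f"day {d:2d}: {retention(d):.0%} retained")
```

The steepness of this curve is the argument for spaced reinforcement: a prompt sent a week or two after the course lands while there is still something left to reinforce, rather than after retention has bottomed out.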
The myth number six takeaway: microlearning is complementary to a learning management system, particularly when it’s defined as the miniaturization of content, spaced learning, and the ability to add the proficiency challenge to this context. Those are the six myths I wanted to go through today. Again, I appreciate the time here.
Just a little bit about Qstream, and I’ll keep this brief so we have time for questions. We are, of course, a microlearning provider. We’re in the business of breaking content down into the bite-sized, scenario-based challenges I’ve been talking about today, making information easy for people to consume and, candidly, hard not to consume. We use a proven methodology to improve knowledge, build skills, change on-the-job behavior, and make it stick. All of this is designed to work first on a mobile phone or other mobile device, so the idea that this is something that takes a lot of bandwidth on a desktop goes away.
One of the common things people are excited by is that the game mechanics make it more fun and more competitive to learn. I know that’s a bit of a see-it-to-believe-it thing, but the best comments we get are from people saying their learners do engage more effectively when they use the solution.
Then finally, the proficiency things we’ve walked through. We take very seriously how to create the baseline, to make sure the proficiency output is a fact about the organization, a fact about the questions being asked of the organization, and a fact about the people in it, one that shows the real impact learning has and the necessity of those concepts when delivered through microlearning. Another small point: we have 400 customers. We’ve run tens of thousands of Qstreams with hundreds of thousands of learners. The average proficiency gain of a Qstream customer is 17%, and average engagement is 93% or above in terms of people actually interacting with the system. These are some of the customers we’ve worked with to date, particularly in life sciences, medical devices, healthcare technology, and financial services, along with some other brands listed at the bottom.
Thank you again to everyone today. I’m happy to take more questions; I was thrilled to take questions along the way. You can reach me at the addresses listed here, and I’d be thrilled to talk to anyone offline, or at any point, about this topic in general. I’m always excited to share information. Once again, Gary, I’m happy to answer more questions, and I want to thank you and the team for letting us do this exciting conversation today.
Gary: My pleasure, Jim. We’re very happy to have you here doing this session, getting into a topic we haven’t tackled before. Qstream has such great expertise with it, so thanks very much. I have some questions here for you. So we don’t get behind the clock, I want to get this one out really quickly. Peter and Roxanne asked a similar question: if we accept correct answers as learning, what’s the correlation to behavior change? Or, as Peter put it, is answering a question really proof of proficiency? I know you don’t say that exactly, but what are your thoughts?
Jim: The key thing is not answering the question right once, but answering it correctly multiple times, because that’s where we do a lot with spaced learning. Answering a question right a single time doesn’t do anything on the proficiency side. Seeing the trend from the initial answer to a later answer has, for us, proven to really increase retention. And one footnote: you’ll see in our marketing that it’s scientifically proven.
All of this spaced learning is based on scientific research on spaced learning done through Harvard Medical School, which proved the effectiveness of the approach: clinicians who used the microlearning approach, versus standard learning, improved dialysis care for patients. So this is all structured, and the algorithms are structured, to take advantage of those clinical trials, on which literally over $6 million was spent, and apply them in the corporate setting. We’re using that same proven technique to drive the answers: asking the questions not once but multiple times, which makes it stickier than a single question with a correct answer.
Gary: Oh, great. Thank you for that. I’ll give you another one in a moment. I just want to ask those of you who are listening to share the ideas you learned in this session over on the poll on the left side of the screen. If you haven’t done that, I’d appreciate it if you would. I know Jim would appreciate seeing what resonated with you during the session. Now, Pam had an interesting, I’m not sure I’d say challenge, but at least an interesting statement. She said: I’d love to see you use microlearning to reinforce our learning around this topic.
Jim: Okay, great. So how would we use microlearning on microlearning as a topic? Interesting. Well, if I were taking this presentation, and the folks in our services group would be far better at this than me, they would challenge me and say: Jim, you have 50 slides, and you probably should be challenged on that anyway. Although we did bring it in on time, I probably should have had far fewer slides. They would ask me to editorially break this down: what are the five or six key points you want to make? Break them into questions and frame them as challenges.
So part of it would be the editorial work of getting it into a smaller format. Part of it would be, two weeks after this presentation, sending you an online prompt through our mobile app or through an email, challenging you to come in and answer: what did you remember about this presentation? In a sense it’s an assessment of what you retained, and to the extent you didn’t retain something, that’s the point of asking the question a second time, because the second exposure reinforces the first and gives you another opportunity. There’s something nice about the fallibility of being able to answer a question incorrectly. People don’t mind being told they’re wrong if you tell them how to be right.
In a lot of cases, and this is where we see proficiency gain, people get it right the second time, and if they don’t, they still have a third chance. At that point it’s a coaching opportunity for a manager. But that’s exactly how I’d break this down: take the content, choose some key facts, and then send it out as that kind of challenge, with three opportunities to get the answers correct. Then the real test would be on me: whether I was convincing enough today to share information that won’t be forgotten.
Gary: Well, thank you. I would say that you were, and apparently our list over there on the left is proof of that as well. A couple of people just wrote me private chats asking for more information about microlearning, and one place you can always find it, which this session will join, is the recording archive at Training Mag Network. We’ve been talking about microlearning for years: Ray Jimenez, Shannon Tipton, and a number of others have covered it from the very beginning. So if you want to dig deeper, I would go there; I just put the link to the search results in the chat. Just go to Training Mag Network, find the search, and do that.
Greg, always helpful, is suggesting a future webinar on how to create the proper context. That’s interesting. Okay. So, Jim, it’s straight up at the top of the hour. What thoughts do you really want to nail down as a final takeaway for us?
Jim: The three takeaways are really these. First, experiment with microlearning; it’s a great opportunity, and for me, as someone who cares deeply about learning, the industry, and the process, it’s a really empowering thing. I’ve benefited from it and enjoyed it in the context of Qstream, and I’ve really liked the client interactions I’ve had. Second, it’s an editorial process, and it’s worth the effort, but I don’t need to tell this audience that content can be hard. Third, it’s surprisingly fun and engaging; I can’t tell you how many times I’ve talked to customers who have taken content that wasn’t working in a traditional format and excited audiences they didn’t expect to excite using microlearning. So the real takeaway is that this is something to have fun with, and I encourage everyone to experiment within their own organizations.
Gary: Well, thank you very much, Jim. You know just because we’re at the very top of the hour doesn’t mean we can’t take another couple of minutes. I know you would like to answer a couple of questions, so let’s give you a couple more. There’s a point at which I have to stop because I have another activity, but Amy wanted to know, does Qstream only work on mobile devices?
Jim: It does not only work on mobile. It works effectively on a mobile device, but that alone would limit the populations we serve, so it can also be as simple as an email. The emails of course work on mobile as well, but for those who prefer their desktop, it works in any email package.
Gary: Okay, and James asked, what design thinking tools are most appropriate for microlearning development?
Jim: Well, I’m sure there’s a much bigger answer on the tools, so I’ll probably be brief. Design thinking in general is very useful for this, because it’s the key to the editorial breakdown of content into the right increments. I’d give an incomplete answer on the specific tools out there for managing content that way; we have a methodology and ways to manage it in our product, but there are a lot of very good design thinking tools, and I don’t have enough expertise in that area to be solid in my answer.
Gary: Well, it’s nice that somebody asked that, because we have a webinar coming up with Felicia Banner and Monica Cornett next week on leveraging personas and design thinking for better learning design. So James, you might want to attend that, and frankly, as with almost everything, you can go to the recording archive of Training Mag Network webinars, search for design thinking, and you’ll find we’ve recently done some webinars there. We’ve had a lot of requests for more from Qstream, and more from Jim, on some of the topics he’s talked about today. That might be one to think about for next time. Jim, what do you think?
Jim: Yeah, it sounds like a great idea.
Gary: Okay, great. Well, it looks like we’ve gone through all the questions. So thank you again very much, Jim, and to Qstream as well. Thanks, everybody, for joining us today, and I hope you have a great weekend. Well, right after tomorrow’s webinar; don’t start before the webinar at 8:00 a.m. Pacific tomorrow, but right after that, have a great weekend too. Jim, safe flying back to Massachusetts and points west, I guess.
Jim: Thanks, Gary. Speak to you soon. Thanks again for everything today.
Gary: All right. Take care everybody.
[END OF TRANSCRIPT]