Setting Corporate Learning Programs up for Success: Proficiency and Engagement as a Driver of ROI

 

Measuring the ROI of L&D programs has always been challenging. It’s often prone to qualitative or lagging indicators, with assumptions made to loosely connect training to business performance. As a foundation of program success, learning leaders need to partner with executive peers to frame the learning needs that will have the most impact on results. But what success metrics are best to use, and what methodologies and technologies will support the goal of improving performance?

Download this on-demand webinar to:

  • Explore the inherent barriers to proving the ROI of learning programs.
  • Challenge L&D professionals to reconsider learning metrics and ROI measures.
  • Demonstrate how best-practice microlearning improves employee proficiency and impacts performance.
  • Educate learning leaders on how to build an effective learning business case for the C-suite audience.

Speakers:

 

Liza Solley
Director of Customer Success
Qstream

 

Meredith Odgers
Director of Global Marketing
Qstream

Webinar Transcript:

Amanda: Hello and welcome to today’s Training Industry webinar, Setting Corporate Learning Programs Up for Success: Proficiency and Engagement as a Driver of ROI. I’m Amanda Longo, Creative Marketing Director here at Training Industry. I’m super glad you could join us for today’s event.

Before we get started, a few tips and tricks to help you interact with our speakers. Of course, you’ll notice at the bottom of your screen you have a couple of interaction options. You can pop open your chat or Q&A at any time during today’s event, submit your questions in the Q&A pod, chat us your comments in the chat window, and we will address them throughout and at the end during our Q&A section.

Of course, we always encourage you to share the information you receive during our events with your colleagues and network via social media. Follow @Qstream and #TIWebinars so we’re able to track your contribution to the conversation. When our program ends, you’ll notice that a short evaluation survey pops up in a new tab of your browser. We would greatly welcome your feedback about today’s content, speakers, or topics you might want to see here in the future.

As always, today’s event will be recorded and archived on trainingindustry.com, and you will receive a follow-up email from us that will include a link to the on-demand program you can share with your team.

Now, if it is your first webinar with us, as I know it is for some of you on the call, a very special welcome goes out to you. Here at Training Industry, we offer dozens of events each year on subjects ranging from technical training and product demos to modern learning systems, authenticity, and learner preferences. I mean, we cover just about every topic relevant to leaders and training organizations around the globe.

If you’ve attended one of our programs in the past, welcome back. Our objective in offering sponsored webinar events is to allow our subscribers the opportunity to learn from the most innovative thought leaders in our industry. I know that today’s program is one that you’ll find to be useful, so I’d like to go ahead and introduce you to today’s dynamic duo.

Liza and her team of customer success managers work strategically with customers to maximize the business impact of every Qstream. By collaborating with executive sponsors and program managers to integrate Qstream’s microlearning solutions and services into learning and development paths, they help customers deliver measurable and effective learning outcomes. Liza brings a wealth of experience in customer-centric positions that underpins the implementation of Qstream’s DSM model of true partnership, setting customers up for success.

Meredith and her team bring the Qstream brand to life for customers, partners, and Qstreamers alike. As corporate learning and development needs evolve, she is focused on continually communicating the value of Qstream’s microlearning solution as a strategic tool for learning leaders to adapt to new demands. She works closely with her team, sales, products, and services to drive business growth through the adoption and use of Qstream. Prior to joining Qstream in 2016, Meredith spent over 20 years as a B2B marketing and demand generation leader in Australia and Asia. Together, they have tons of insightful content to share with you today. Without any further ado, ladies, the microphone is yours.

Meredith: Thank you, Amanda. I’ll just make sure we can get those slides up and rolling.

Amanda: Perfect.

Meredith: Thank you very much. Thank you for everyone for joining today. I can see we have good representation from all around the globe. I’d like to make a special mention to William Bear from the UK. I’m also doing this from the UK today, so we’re both on a late shift this evening. Anyway, welcome everybody.

Amanda did mention a few ways you can get involved in the conversation. Our team will also be live tweeting today with the hashtags #microlearning and #trainingROI, so please do join our conversation. Also, you may ask questions throughout, as Amanda mentioned, and I’m sure we’ll have time to get to those at the end.

Today, we’ll be talking a little about setting up a framework for corporate learning program success. Measuring the return on learning programs, for us and our customers, always stimulates a lot of debate and discussion. Our intent today is for you to take away a few key learnings that can help you define, plan, and measure ROI.

First, I won’t spend too much time on the agenda here. We’ll jump right in after this, but just to recap what we’re covering today: we’ll first look at some of the traditional approaches to learning and ask some serious but necessary questions about their effectiveness, and most importantly about their measurability.

We’ll then move on to talk about a few of the barriers or challenges that prevent effective measurement of learning programs, and we’re going to have a look at some metrics to consider for measuring programs in the future. We’ll then do a deep dive into what best-practice microlearning can look like and how technology can help overcome some of those barriers. Then Liza will spend some time with you to talk through a methodology to set up for success.

Okay, so let’s have a look at the barriers. Some of the ones we’ll bring up today are probably well known to many of you in the audience. Linking learning goals to business goals always seems very obvious; how can learning programs really be put in place without that connection, one may ask. We get asked this all the time ourselves. Some of this may come from a long history of traditional learning programs being put in place to fulfill a compliance need, a mandatory training need, a regulation need, which in some ways is really filling an HR need. There will always be accreditation, certification, and mandatory regulatory compliance training; those are simply necessary. But there’s more work being done now on how we can better engage our corporate learners and support their professional development and their individual needs, so that they’re engaged, they’re more productive, and ultimately that can impact performance. Things are changing really rapidly in this space, and that’s how we’re going to focus the conversation today.

Can everyone just take a pause here and think about your current training and enablement efforts? You may be responsible for broader HR learning programs, or you could be responsible for in-business-line training and enablement programs. Whichever the case, one thing I’d like you to do while we’re on the call today is think about those methods and whether they are right for the learner and whether they’re measurable.

Now, ultimately, the magic answer here is that a blended learning method is most effective, depending on what the objectives are for the learning, the type of learner, and the end outcome that you’re trying to achieve. I always like to ask a few questions of any learning methodology. Is it showing measurable impact from the learning initiative, and how do you actually measure that in the first place? Does it help you identify gaps for the individual learner, not just at the group or cohort level? Ultimately, does it enable your managers to deliver precision coaching and be more effective in that area? Does it result in behavior change, and how can you measure that? It seems a very intangible end outcome, but I think it’s probably what we’re all trying to achieve with our programs: changing behaviors to drive more effective performance outcomes.

Lastly, I’ll just touch on this point: does your learning program today genuinely inform what your next training and enablement needs are? Which areas need the most focus because they’ll have the most impact on the business, and which don’t? That’s also really important, because a lot of time, effort, and cost can be spent on areas your learners may have already mastered.

If you answered no to any of those questions, then it’s good that we’re talking here today. You’re definitely not alone. Talking with our customers, we really do find that many of the learning leaders we interact with share some or all of these challenges or pain points themselves. If any particular one of these resonates with you, I do welcome you to send in your comments on the chat or ask questions about it through the console, so please do let us know what you’re thinking now.

Okay, so let’s look at some of the barriers to achieving the things we just talked about: learning in the flow of work, measurability, et cetera. Quite typically, we’ve found when we talk with customers that a lot of the indicators or metrics for learning programs are lagging indicators. We’re talking about activity and performance, which often can only be measured after the fact.

The speed of change is just a well-known fact. It’s happening in life; it’s happening in the workplace. How do we keep training programs agile and quick to roll out, in the moment, fresh with what’s happening today and with what our learners need today to do their jobs really well?

Always a challenge: misalignment, or perhaps no alignment at all to business goals, or a lack of definition. We hear this one a lot, and it means that sometimes there’s a disconnect between the learning program, what it’s really trying to achieve, and whether it all had a genuine impact on business results at the end of the day.

This may not be the case where the learning program is intrinsically tied in with the business unit, which is quite often the situation in large corporates’ learning programs, where there may be a learning business partner that’s really embedded in the business. We see this situation a little bit less in those cases, but overall, having written definition and alignment, done collaboratively, is really key.

Standardizing learning programs: think about distributed and remote workforces across the globe, across geographies. It’s really difficult to scale learning programs in an effective, let alone cost-effective, way for all of those people, and that’s always a big challenge here. On top of that, the learning tech stack, and even the HR tech stack, is extremely complicated today, often quite fragmented and sometimes not integrated. There’s a lot of work being done day in and day out on how we can leverage all of those technologies together, including the learning programs that we have.

Quite often we find that observations can be subjective. Think about the annual performance appraisal; there is talk out there, as many of you know, that it is a dying practice, and that more continuous, on-the-spot feedback and competency building all year round is really key to getting the best performance out of the workforce and keeping people really engaged, definitely moving toward a dynamic assessment rather than a stagnant one.

Okay, all right. Let’s take a look here at what you’re measuring today, and then we’ll talk a little bit about whether these are the right metrics. We’re going to go on to a poll here; Amanda will help us run the poll today. The question is up, so please do participate and we’ll see what comes out at the end. There are clearly many metrics to measure corporate learning programs with; I’ve just distilled these down into five. These are just a few. As you start answering, what metrics do you use to measure your programs and their success today? Is it course completion? Do you do feedback surveys with your learners afterwards? Do you do post-course testing, and do you just do it once? Do you look at their engagement through the learning process? Do you look at any proficiency improvement, whether they have improved their capabilities to do the job?

Now, I’m seeing here on the chat that you couldn’t select more than one on the poll, sorry. I guess, if you can’t select more than one, just choose the one that you most often use. We’ve got some comments coming in on the chat; thank you for that, Heather, and for all that engagement. We have post-course testing and feedback. I shouldn’t be reading these out, but you can see them on the chat. How are we going then, Amanda, with the poll?

Amanda: I’m going to go ahead and share the results.

Meredith: Excellent. Okay, so let’s take a look here. This is perhaps most surprising on the whole. We typically find that read-and-understand course completion, feedback surveys, actually asking the learners and perhaps their managers what they think, and post-course testing are usually the most common. I’m surprised that proficiency itself rates as highly as it does here; we can talk about that a bit more. Engagement at 8% is a very, very interesting one. Engagement could be anything from completion to general participation throughout the learning program, and you can see that it’s quite low here. We’ll talk about that a little bit in a minute, so just hold that thought until we get on to the next slide. Thanks, everyone, for getting involved in that.

So here we’re going to challenge a little bit some of the ways we think about measuring programs today. Quite often we see that productivity and activity, what people do, how much of it they do, et cetera, tend to be treated as having a direct linkage to performance outcomes, but really interestingly, one would argue that there’s a third P to go with productivity and performance. That third P, we see as proficiency, and it can be really hard to get at. I notice quite a few of you do measure that; I’d be really curious to hear some comments on how you measure proficiency and the method for doing that, so please do send in your comments. Really, at the end of the day, we see proficiency as the third P. When you think about it, it’s not just about what people do and the end result; it’s about how well a learner or a teammate performs, how well they do their job. That can be very, very difficult to measure, and in some ways it’s really one of the most valuable measurements. Every learner is different. They will take in information, they’ll recall it differently at different times, and they will apply it differently when they’re actually in the field or in the workplace. With that in mind, we basically say that proficiency is the missing link between productivity and performance.

There’s actually a fourth factor here, and it essentially goes back to one of the polling results. That is engagement. Unfortunately, it doesn’t start with a P, so it sort of spoils my alliteration a little bit. Really, without engagement, you won’t get productivity. You won’t get learning; learning won’t happen. People will not become more proficient if they’re disengaged. Ultimately, the lack of engagement will have a negative impact on performance.

We work with customers a lot to help them measure engagement and improve proficiency at global scale, often for highly complex, highly distributed knowledge worker roles, and to help draw the connection between engagement, proficiency, and ultimately performance. Depending on which area you’re responsible for, these could very much be business-line metrics. For sales, the obvious ones are quota attainment, pipeline conversion, et cetera. In other areas, like call centers, it could be a reduction in call times and how skillfully the call center worker is able to handle and move through inquiries quickly. In the healthcare system, it could be a reduction in admission rates because proper patient safety and hygiene practices are being followed. Wherever that learning program is aimed, proficiency and engagement, we argue, are the keys to getting there.

Okay, so what could best-practice learning look like? I’ll just spend a little bit more time with you before handing over to Liza. I’ve done a lot of research around this, and I’ll break it down into five simple areas, some foundational principles for learning programs to be effective today. Firstly, we need to think about the learner: make it easy for them to engage, make it a frictionless experience. Think about the plethora of information that any learner needs to know to be able to do their job. It’s not just knowing it; it’s having the skill and the situational awareness to apply it. Breaking that complex content down into bite-size pieces, which we do through scenario-based challenges highly related to a job role so it’s highly relevant to me, makes that learning process a little bit easier for them. Doing that continuously and repetitively is really important, too. That leads me on to “make it stick.”

With the metrics that we looked at before, how does one know that the learner is really taking that learning, after the feedback survey, after the post-course test or assessment, into the field for the long term? Proven methodologies are one thing to really look at: not just the science behind the delivery of that learning program and the technology, but also the real-world evidence to back that up. Liza, later on, will talk about a couple of examples of that with our customers.

Make it mobile. I think this one today is fairly obvious. Not only does mobile delivery often reduce training costs and time, especially for those distributed workforces, but it’s really a necessity. It’s not even an option now, given the generational shift toward mobile natives that we see coming through with millennials; clearly I’m not one, but I do spend my life connected, as we all often do today. Mobile is really necessary to make learning anytime and anywhere.

Then, we talked about engagement a little before, so I won’t harp on this too much, but there is a lot of research around building game mechanics, peer socialization, commentary, et cetera, into that learning experience. We have found that keeping the game mechanics professional, light, fun, and engaging, for example point scoring, leaderboards, et cetera, is really the perfect balance when you’ve got quite intensive job roles, knowledge workers, and highly complex industries. Gimmicky trophies and badges are not always the most effective, but a little bit of healthy competition between peers, and between teams, is always quite motivating.

Then finally, make it measurable. This is really the core of what we’re trying to get across today: using proficiency as a proxy, or a leading indicator, for the return on those training investments. The other part of that is being able to identify gaps and inform the next training initiatives. That’s really important, and we’ll talk about it a bit later.

Just taking that and looking at what the experience could look like for the different stakeholders in the learning process. If you think about the learners themselves, they could be mobile, they could be desk-bound, and that’s fine too; often people are working remotely or engaging on their mobiles as well as at their desk.

Here we can see that presenting a scenario highly relevant to my job gets me interested straight away. This learning content is exactly for me, to make me better at what I do. Giving immediate feedback means I know immediately how I went, what I got right or wrong. Having that explained is another microlearning opportunity: short explanations within the flow of work. I can always access deeper learning through links and other rich media if I wish to go further.

Finally, the socialization and game mechanics keep me engaged, playing on my natural competitiveness. Now, we do find in certain geographies around the world that identifying individual effort isn’t allowed for privacy reasons, and that’s all okay; we can look at this at the team or group level as well, rather than the individual level. I think especially in the sales role, that friendly competition certainly rates pretty highly in keeping those engagement rates up.

The other key stakeholder in the whole learning process is really the front-line manager: enabling them to coach more effectively, be a better manager, and have one-to-one interactions to help their team where they need the most help. Surfacing those insights into precision coaching opportunities can really deepen the learning experience and help with the engagement of the learners themselves. My manager is personally invested in me; they’re here to help me in the areas where I need it. Equally, if teammates show proficiency in certain areas, give them a pat on the back; comparing that across peers and teams is always part of that healthy competition.

Here, we have another view, which could be for front-line managers, learning program managers, or even executive sponsors and leadership. Looking at the heat maps is a really quick way to visually identify where there are learning improvements, but also where there are learning needs. In this case, you can see some topic areas; this is actually for a sales team. Competitive responsiveness, differentiation, et cetera, are topic areas that have been built into the Qstream, mapped against different geographies.

A Qstream can take anywhere between six to eight weeks; it could be shorter, could be longer. At the end, and all throughout, you can see the current proficiency. We can see here in this example, very quickly as a program manager or a learning leader, that North America is still not showing average proficiency across quite a number of different areas, and that can directly tell me where I need to spend time, either on the next learning program or on engaging the front-line managers to support their teams in these particular areas.

Finally, the program managers themselves are key to keeping track and being able to drill down into the multiple learning programs they may have in train at the time: being able to see immediately who’s engaging and who’s not, and being able to affect that in real time. Giving encouragement to start the learning program and to complete it can really make a difference to the end proficiency of the overall program, and ultimately to linking that through to the impact on performance.

With that, you’ve probably heard enough from me, and I’m going to hand it over to Liza, who will take you through the next part and some good takeaways for the program.

Liza: Great. Thanks, Meredith. All right, so we just covered a lot, and we want to dig a little bit more into some of the details here. It’s commonly understood that there’s a need to measure L&D programs, and as Meredith covered, there are some notable barriers to actually recognizing return on investment. We’ll talk a little bit about how we’ve seen our customers get started on setting up their various programs for success.

Often our customers come to us with a specific project in mind, but to really drive ROI, we try to begin with broader planning before jumping into that point-in-time execution. Here are some of the foundational steps that we’ve found to be successful and that we recommend in order to establish a really successful Qstream program.

To start, you need to be able to define your macro level business goals. These goals may be key performance indicators or core metrics that might be relevant to your entire business, or maybe they’re relevant to your specific department or your business unit. They usually align with the broader focus for the year ahead. It’s important to think about the business value you want to get from Qstream throughout the year so that your Qstream project in particular can tie back to validating that return on investment.

Some of the measurable business goals that we see in this category may be things like increasing quota attainment or another key performance metric by a certain percent, tracking and increasing manager involvement in training initiatives, decreasing time to peak performance, improving selling acumen, or maybe something very tactical like reducing callbacks or call escalations by a certain percent.

Once you have your business goals defined and you’ve thought about the macro-level strategy and how that ties back to what the business is looking to accomplish over the next 6, 12, or 18 months, we then recommend that you align those KPIs and business goals to more Qstream-specific objectives. This step dives one level deeper and helps to define what expectations our customers have of Qstream; they typically focus on target engagement rates, completion rates, or a target proficiency improvement.

We work with our customers to set these objectives on a project-by-project basis. Some common examples that we see are things like a target engagement rate of 75% across all of your Qstream participants, or maybe the goal is to have a double-digit proficiency improvement in each of the Qstreams that you’re running. The objectives can also be more qualitative: maybe it’s delivering effective knowledge reinforcement of topics within key business areas, or embedding Qstream seamlessly within your business’s existing training schedule, or it could simply be using Qstream to identify knowledge gaps in order to enhance overall training effectiveness.
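To make those two numeric objectives concrete, here is a minimal sketch of how they could be checked. The formulas are assumptions made for illustration (engagement as active participants over invited participants, proficiency improvement as final minus initial proficiency); they are not Qstream’s published definitions.

```python
def engagement_rate(active_participants: int, invited_participants: int) -> float:
    """Share of invited learners who actively participated (assumed definition)."""
    return active_participants / invited_participants


def proficiency_improvement(initial_pct: float, final_pct: float) -> float:
    """Percentage-point lift in proficiency over the program (assumed definition)."""
    return final_pct - initial_pct


# Hypothetical example: 180 of 240 invited reps engaged,
# and proficiency rose from 62% to 78% over the program.
engagement = engagement_rate(180, 240)        # 0.75, so a 75% target is met
lift = proficiency_improvement(62.0, 78.0)    # 16 points, a double-digit improvement

print(f"Engagement: {engagement:.0%} (target met: {engagement >= 0.75})")
print(f"Proficiency lift: {lift:.0f} points (double-digit: {lift >= 10})")
```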

Lastly, we want our customers to be thinking about how to incorporate the macro-level and Qstream-specific objectives into an annual or long-term program plan. Building an annual roadmap helps to plot out specific projects that also have proactive goals. We like to talk about preparing our customers for more than just that first point-in-time Qstream, so that they build a comprehensive program that ties all of their projects, or their Qstreams, back to the business goals and the overarching objectives they defined at the start of the engagement.

A successful annual plan will not only lay out and define key events; it can also look at inflection points like upcoming product releases, other onboarding and training initiatives, really anything you have going on for the next 6, 12, or 18 months. We work to coordinate this so that you can look at it in conjunction with some of the other initiatives that you’re rolling out.

Along with this, we also want to help teams determine an internal promotion plan for Qstream, and that really centers on top-down communication and having a strategy that can then be repeated throughout the year. We also help our customers begin to identify the key internal stakeholders that are going to be needed to execute on the Qstream program, which includes promotion, the actual implementation, who’s going to be doing the coaching, how the reporting is going to be interpreted, et cetera.

What does this really look like for our customers? Once they’ve thought about the goals of the business and what alignment means to them, we like to focus on creating a visual roadmap that starts to knit together these swim lanes throughout the year. Here’s what a sample roadmap tends to look like, and you can see here that there are a number of different initiatives that overlap throughout the year. They’re going to ultimately help plan what the year is going to look like with regard to your projects.

The roadmaps also ensure that the projects are properly spaced. They help to target useful cross-project reporting attributes or topics, and they really organize the big picture that aligns around the L&D measurements and objectives. We often find that our customers already have versions of a training calendar within their organization that they work on for the year, so very often this conversation starts with looking at those training calendars and layering the Qstream roadmap on top of them, so that we have a consolidated and really thoughtful plan for the year ahead.

Once we have talked with our customers about that holistic roadmap, it’s really time to customize the specific projects, and we do that in order to fit the business’s needs. We start by asking: what are the objectives of the given Qstream? These could be anything from long-term retention, to running a pre-event assessment or a post-event follow-up that focuses on the key takeaways, or it could be product-related at various points in time, tied to a brand launch or a product launch, with the Qstream focused on reinforcement.

Our spacing settings can be selected to accommodate a wide range of different types of Qstreams. Here on the screen, you’re seeing some of the variables that we use to customize that. Based on that project-by-project customization, we then come up with a plan that drives the duration of the Qstream itself, the amount of content that we plan on rolling out, the delivery format, how leaderboards will be set up, and, going back to some of the ways Meredith was describing, how Qstream fits into the learner’s day-to-day.
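As a purely illustrative sketch of what project-by-project customization variables like these might look like, here is a small, hypothetical settings structure. The field names and defaults are assumptions for illustration only, not Qstream’s actual configuration options.

```python
from dataclasses import dataclass


@dataclass
class ProgramSettings:
    """Illustrative project-level settings; the field names are hypothetical."""
    objective: str               # e.g. "long-term retention" or "launch reinforcement"
    duration_weeks: int          # how long the program runs
    questions_per_delivery: int  # how much content is pushed at a time
    interval_days: int           # spacing between deliveries
    leaderboard: str             # "individual", "team", or "off"


onboarding = ProgramSettings(
    objective="long-term retention",
    duration_weeks=8,
    questions_per_delivery=2,
    interval_days=2,             # e.g. a couple of questions every other day
    leaderboard="team",
)
print(onboarding)
```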

Once we’ve had our customers thinking about those macro-level and micro-level goals and looking at project-by-project initiatives, the next step is really to ensure that our reporting can capture the right measurements. Valuable reporting begins with targeted business metrics that are understood across the organization, and the foundation of insightful reports is data that tells the right story.

One of the ways that we recommend stringing together those macro-level business goals down through project-level execution is by applying strategic user and topic tags to your data. This is done with the help of our Qstream team on a project-by-project basis. We look for common user attributes, or ways to group the Qstream questions by topic, to make it more intuitive and more informative when you’re looking at the data downstream.

Then, as your Qstream projects get underway, you have hundreds of data points immediately flowing in from your participants, and those become the real-time measures of proficiency. Qstream has a number of built-in reporting capabilities that allow managers to coach proactively, and that tends to happen not only during a Qstream project, whose duration can vary as we saw on the previous slide, but also afterward. We have tools like the heat map Meredith featured a few slides ago, as well as a proficiency-by-question breakdown, and all of these are built right in to really help identify where there are gaps in knowledge.
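To show the general idea behind that tagging and roll-up, here is a minimal sketch of how tagged answer data could be aggregated into a proficiency-by-topic-and-region view of the kind a heat map presents. The record fields and sample data are hypothetical, not Qstream’s actual data model.

```python
from collections import defaultdict

# Hypothetical tagged responses: each answer carries a user attribute (region)
# and a topic tag, plus whether it was answered correctly.
responses = [
    {"region": "North America", "topic": "Competitive responsiveness", "correct": True},
    {"region": "North America", "topic": "Competitive responsiveness", "correct": False},
    {"region": "EMEA",          "topic": "Competitive responsiveness", "correct": True},
    {"region": "EMEA",          "topic": "Differentiation",            "correct": True},
    {"region": "North America", "topic": "Differentiation",            "correct": False},
]

# Roll responses up into a proficiency figure per (region, topic) cell.
totals = defaultdict(lambda: [0, 0])  # (region, topic) -> [correct, answered]
for r in responses:
    cell = totals[(r["region"], r["topic"])]
    cell[0] += int(r["correct"])
    cell[1] += 1

for (region, topic), (correct, answered) in sorted(totals.items()):
    print(f"{region:15s} {topic:28s} {correct / answered:.0%}")
```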

We want to talk about a few examples here; they come from customers. What they’ve done in these examples is actually execute on this comprehensive program plan: plotting out a strategic roadmap like the one we looked at, with projects that are aligned and set up to tie back to the key business goals and, ultimately, to provide measurable outputs.

The first customer example is a global biopharma company. They use Qstream to improve critical on-the-job business practices, using scenario-based questions that are sent out to site monitors, their employees who govern clinical trials. Bringing drugs to market can take years, even a decade or more, and can cost billions of dollars. Clinical trial programs are very complex, there are real risks involved, and the results can be consequential. These clinical trial site monitors are responsible for reporting accurate data back to the trial sponsor, and they have to closely follow very specific protocols.

This particular customer of ours, this biopharmaceutical company, chose Qstream for this clinical research education for a few very specific reasons. They were interested in looking at the real-time proficiency data for each of those individual site monitors. They also wanted to identify gaps related to behavior that focused on whether or not the site monitors were adhering to compliance norms, specific regulations, standard operating procedures.

They also saw Qstream as a tool for continuous assessment and reinforcement of the correct or desired behaviors. They liked that it focused on targeted opportunities for coaches to look at the gaps and to focus on real-time remediation.

This company ran a series of Qstreams set up for ongoing reinforcement, based on those scenario-based questions I mentioned, and it went out to 1,200 site monitors. As you’re seeing on the slide here, there were some very impressive metrics collected during the course of this program. To focus on the highlights: this customer grouped questions by topic, as I mentioned a few minutes ago, and they did this to assess proficiency across five key training themes.

The Qstream results showed that the site monitors had a strong handle on three of the five key topics. Their annual training plan, really their calendar of events for the rest of the year, had budgeted for additional training on all of those key topics. After running the Qstream, they were able to use the proficiency information from that program, and they determined that additional training on three of the five topics was not needed. That came directly from analyzing the proficiency results. They saw a material cost savings by being able to focus resources only on the two key topics where the site monitors were not yet sufficiently proficient and where they were not meeting initial goals.

The cost-savings here was remarkable. They had a cost-savings to the tune of hundreds of thousands of dollars. That was calculated by looking at the training hours that were saved by only going back to the two topics where they lacked proficiency. Then there was an hourly rate and then the expectation that those trainings would have had to have been rolled out to the 1,200 or more site monitors that they had already planned to train.
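As a back-of-the-envelope illustration of that calculation, here is a minimal sketch: hours of training avoided, times an hourly rate, times the audience that had already been planned. The hours and rate below are hypothetical placeholders, not the customer’s actual figures.

```python
# All figures below except the audience size are hypothetical placeholders.
site_monitors = 1_200              # audience already planned for the training
hours_avoided_per_person = 4       # hypothetical hours saved by skipping 3 of 5 topics
hourly_rate = 75.0                 # hypothetical fully loaded hourly cost

savings = site_monitors * hours_avoided_per_person * hourly_rate
print(f"Estimated training cost avoided: ${savings:,.0f}")  # $360,000 in this sketch
```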

The bottom line here was that Qstream was used to inform data-backed decisions about where they actually invest time and money moving forward.

The next customer example is a global medical device company. This company has been a long-term customer of Qstream. Their champion was focused on a number of things when we started our relationship with them. He was really focused on identifying commercial learning challenges. He also had a focus on the concept of “shifting the middle,” focusing on the middle group of their performers. And he was interested in scaling a continuous commercial learning program. The key here was that he managed this commercial learning program across multiple geographies spread out across the globe.

This company ran a dozen Qstreams that went out to several hundred participants. They saw a 16% proficiency improvement and over 80% engagement across the dozen or so Qstreams they launched. During the program, the user data was tagged with those user attributes I mentioned earlier, to be able to correlate individual Qstream results back to the company’s field results. They analyzed the Qstream results against predefined business KPIs; in the case of this customer, they were looking at sales to target, the tenure of the reps, unit pricing, et cetera.

The results showed that people who performed well on Qstream also had higher business performance. In fact, the majority of the Qstream winners, which was part of the leaderboard in the competition they introduced, were also President’s Club winners that year. The participants that had the best handle on the Qstream content were proven over time to also be the strongest sales reps.

These Qstream results really showed that reps at the high end of the performance curve were not only selling more units, but also selling units at a higher price.

The bottom line here was that the reps who did well on Qstream understood the product well, but they also understood the data well. They were not just talking to prospects about price; they knew more about the actual value of their products.

Here are just some of the many types of businesses and projects that we support. Customizing an effective training plan that ties back to measurable ROI can vary widely, but the point is that the fundamentals of execution are the same. The approach to enterprise planning that we’ve covered today is applicable to a wide range of use cases spanning a lot of different business functions and protocols. These are just a few of the areas we’ve helped our customers focus on; they’ve come to us with different projects, we’ve broken them down into this enterprise program planning methodology, and we’ve seen success in correlating Qstream results back to business goals.

What are the key takeaways here? Qstream has really found that our best-practice recipe, so to speak, for proving the ROI of L&D programs includes a few things, and the two examples that we just talked through are very illustrative of this.

First, thinking about macro-level business objectives and defining relevant KPIs that span multiple projects. This is an exercise that our customers very often ask us if we can help with, and the short answer is yes, of course. We can drive those conversations, but this is something that the business needs to be thinking about, and we always stress that the ownership is going to be on you, the customer, to think about what you want to tie the Qstream results, the engagement and proficiency metrics, back to.

Next, we definitely recommend using a roadmap to plan the right cadence and strategy for launching multiple projects.

Next, we talk about customizing the types of Qstreams that get rolled out throughout the year. This also helps the participants stay engaged and interested, so that you can be sure you’re not bombarding them with different pieces of information and that the Qstream roadmap really fits with your broader training calendar.

We’d also recommend using consistent tags. Those become associated with the users, and we think of them as user attributes. Then there are also the topics: how can you think about the content and group it under topics? Those topics work best when they’re consistent throughout the year; that helps the data become as robust and relevant as it can be.

Next, we always think about digging into Qstream participant engagement and proficiency data, and that allows us, through those heat maps and the other tools built directly into the product, to get real-time feedback. We can go back and look at the results to tie the data from Qstream back to employee performance metrics that may be tracked outside of Qstream, in other non-Qstream-related systems.

Finally, revisiting those business objectives and really thinking about the business performance metrics and specifics, like your training budget and where you’re spending dollars and time, and using that analysis from Qstream to understand whether or not that’s going to the right places. That’s really how you start to complete that feedback loop.

With that, I want to wrap up and ensure that we leave enough time for questions here. Thank you guys very much for your time. Hopefully, we gave you a lot to think about here. I know there’s been a lot of chat coming, so we’ll turn it back over to you, Amanda, to maybe help us facilitate some questions.

Amanda: Perfect. If you have questions for Liza or Meredith, go ahead and pop open your Q&A now and ask them. I’m going to jump in with this one from Yvonne. She said, “You talked about proficiency and engagement metrics, but these are your metrics; they’re not a common standard in L&D. Usually it is a pass or fail test, or another scale. If they are not common knowledge, then how can I explain these metrics to my senior stakeholders?”

Liza: Yes, that’s a great question. Meredith, I’ll take a stab at this and you can provide some color commentary. We actually get this a lot and we do understand that sometimes it’s more common to be analyzing in more of a pass-fail, or a testing, or assessment scenario. I would answer this in a couple of different ways.

First of all, we can absolutely help our customers by providing some benchmarks by industry, to help them level-set on what a target engagement or completion rate might be based on common use cases or similar businesses. Here at Qstream, we have thousands and thousands of launches over the nearly a decade that the business has been in place. We have seen average engagement rates, average completion rates, and, probably most notably, average proficiency improvements, which we can then slice down to provide an inside view for similar businesses.

The other point I would make is that there are also ways, and we have customers that are actually doing this, to use Qstream to set criteria around passing or failing. We have customers that are using Qstream metrics, like a final individual proficiency of, let’s say, greater than 80%, to set a pass-fail threshold. We actually have a very large financial services customer that we’re working with that does this on a weekly basis. They analyze passing or failing in a certification program that they have rolled out to thousands of sales reps. They use that greater-than-80% final proficiency to evaluate whether or not their reps have sufficient knowledge of those topics.
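As a minimal sketch of that pass-fail idea, here is how a threshold on final proficiency could be applied. The 80% cutoff comes from the example above; the names and scores are hypothetical.

```python
# Hypothetical final proficiency scores for three reps.
PASS_THRESHOLD = 0.80

final_proficiency = {"Rep A": 0.86, "Rep B": 0.79, "Rep C": 0.92}

for rep, score in final_proficiency.items():
    status = "PASS" if score > PASS_THRESHOLD else "FAIL"  # "greater than 80%" passes
    print(f"{rep}: {score:.0%} -> {status}")
```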

It’s really more about trying to weave together some of the metrics that are being produced in Qstream to tell that story in a way that your business would understand.

Meredith: Thanks, Amanda. I don’t have anything more to add there.

Amanda: All right. The next question is, “What kind of indicator can be used to measure this from a participant’s perspective?”

Meredith: Okay, I think I’m interpreting this question correctly. In our terms, “very active” means a learner is responding quickly to the challenges they’re receiving, engaging in the learning process quickly, adding comments, and really moving through the learning program at a very regular pace. I don’t know if there’s anything more to add there, but if I understand the question correctly, that’s the response.

Liza: Yes, the only other thing I would add there is that Qstream’s spacing settings, the algorithm, really are derived from our scientific research as a best practice. On one of the slides that I talked through, we have a best practice of delivering two questions every other day, and so we look at how long a question has sat in a participant’s queue, essentially, and how quickly they responded to it. That’s a determination of activity and whether or not somebody is very active. We do always provide an opportunity for participants to catch up, so to speak, in the Qstream challenge itself. Then, what we look at in terms of engaged participants is where we follow those folks through to understand whether or not they’ve actually answered all of the questions by the end of the Qstream. Engagement and completion rate are a little bit different from that measurement of very active versus less active.
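To illustrate the general idea of classifying activity by how long a question sits in a participant’s queue, here is a minimal sketch. The specific thresholds and labels below are assumptions for illustration, not Qstream’s actual algorithm.

```python
from datetime import datetime, timedelta
from typing import Optional


def activity_level(delivered: datetime, answered: Optional[datetime]) -> str:
    """Classify a participant by how long a question sat in their queue.
    Thresholds and labels are illustrative assumptions."""
    if answered is None:
        return "inactive"
    wait = answered - delivered
    if wait <= timedelta(days=1):
        return "very active"
    if wait <= timedelta(days=3):
        return "active"
    return "catching up"


delivered = datetime(2019, 5, 6, 8, 0)
print(activity_level(delivered, datetime(2019, 5, 6, 9, 30)))   # very active
print(activity_level(delivered, datetime(2019, 5, 10, 17, 0)))  # catching up
print(activity_level(delivered, None))                          # inactive
```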

Amanda: All right. This question comes from Allen, who says, “We provide many different types of learning content, including formal e-learning, job aids, et cetera. Should we be measuring all of them? In particular, we create a lot of job aids; should we be measuring the ROI of all of them? If not, how should we determine the eligibility criteria?”

Meredith: A good question, Allen. I think measurement really should be across all parts of your program. If you look at your formal learning programs, e-learning, job aids, et cetera, perhaps what you might want to do is form a program, going through the methodology that Liza presented, to define your overall objectives for those programs and the topic areas where you want to test the capability and proficiency of your learners, starting with the primary ones, the ones that you think will be driving performance the most. That might seem quite daunting, but one way to do it would be to set up a test, a little bit like you saw in the case study that Liza went through, where up front they picked five general topic areas they felt they needed to train on or ensure the site monitors were proficient in. Once they did their initial testing, they deemed that they only needed to reinforce and focus on two of those five.

You could take an approach like that. I don’t know how many job aids you’re talking about that you have and what format they take, but perhaps starting with a methodology like that would be a good way to get going. Ultimately, if you can’t measure it, it’s hard to know what’s working and what’s not, so that’s something for you to think about.

Amanda: All right. This question came in from William, who asks, “I’m in charge of training for our customer service team, which is spread across a number of branches in the country. In practice, how can I roll out this kind of standardized methodology to all customer service agents?”

Liza: Sure. I can take a stab at this. That’s a very common use case for us. We actually launch regional or even global projects all the time. Some of the ways that we tie groups like that together are through using the user attribute tags I’ve been mentioning. That helps to group users logically, and it also helps managers view their participants as granularly as they need to. A lot of this happens behind the scenes, where we construct team hierarchies or manager visibility to span a number of different groups, and those could be spread across the country or even the globe. It’s also worth noting that Qstream itself definitely has the ability to be translated into other languages. We can certainly launch Qstreams in other languages, so that we can interact with groups that may not be English-speaking.

I also saw earlier that somebody had made a comment about mobile access and that they had a population that was unable to use mobile devices to take Qstream. That is not a problem at all. While the demonstration screens that Meredith put up there show learning integrated into the day-to-day, which tends to be on the go and on an iPhone or a mobile device, we actually find that a majority of our customers are still engaging with the content from their desktop. I think it’s a pretty common practice where somebody will sit down at the beginning of the day, open up their email, and respond to their knowledge reinforcement in the flow of their normal work day.

That absolutely is a part of this as well: we can deliver content to folks, even if they’re spread out across the globe, at the time that’s convenient for them. That ties back to the earlier question about activity as well, keeping people engaged at the time that truly is convenient for them.

The last thing I’ll say is that the benefit of doing some of this annual planning is to help you think about populations. Maybe you’re a manager of multiple teams, where you want to be running Qstream programs that abut rather than overlap, and to be sensitive to what else might be going on in the training calendar.

All of those things are things that we try to think about and to help our customers to really visualize something like the roadmaps so that we’re being very sensitive to the participants and ensuring that we’re not fatiguing anybody and keeping that engagement very high.

Meredith: Just to add on to that, too: we often work in quite complex or highly regulated industries, with pharmaceutical organizations, medical device companies, financial services, et cetera, and those business units are training across the globe. There may be local culture and language, and in some cases even local regulation, that needs to be considered. Working with the program manager to still roll out a standardized program that takes in some of those local nuances is something that the technology and Qstream can help with. It’s quite common to have global standardization of the delivery and of the overall content that’s rolled out, while working with you as the customer to ensure that it is localized appropriately.

Amanda: Okay. This question came in from Janet and she says, “Enterprise sales can be quite tactical at times for us. Everything changes so quickly even though sales cycles are long and solution selling is not one-size-fits-all. How can this kind of sales training be agile enough and quick enough to help our sales team here and now? Usually, sales needs to compete in days or even weeks.”

Meredith: I can give that a first shot if you like. Yes, solution selling is a long cycle, and there are a lot of things that happen throughout that sales cycle. It’s often about what the salesperson needs to know at the right time in that sales process. Ultimately, if you think about it, there are foundational skills and knowledge that a sales rep needs at each of those stages, so it’s about marrying up the training content against those stages.

The other thing, too, is to take a look back at Liza’s program roadmap and plan that out. There may be a need to do some shorter, quicker testing on the sales team to see where they are showing proficiency or proficiency gaps. For example, one sales rep may have a great pipeline built, but is their Qstream proficiency on negotiation and closing skills showing that they are capable of closing that pipeline, capable of knowing what to do in those scenarios in the solution-selling environment? If they’re not showing proficiency in those areas, we know exactly where that sales rep can best be helped.

There are a lot of quick and early indicators that can be gleaned ahead of time from rolling these programs out, and that’s where some of those leading indicators actually come into play.

Amanda: All right. Here’s a question from Patty. She wants to know if you could do a little comparison of this learning solution versus something more scaled for mass-market e-learning, like LinkedIn Learning.

Meredith: Okay. I’ll take that one. The difference between Qstream and some of the other microlearning or LinkedIn Learning methods is that we very much work with customers on their specific training content, really focusing on that 80% of knowledge that a skilled worker needs to know to do their job well. It’s highly job-related.

When you go to some of the learning content platforms, where it’s very much menu-choice and open content, I suppose, it’s quite often generalized and not always tied to a specific job function, let alone your job function within your organization, with the products and services you’re responsible for and the customers that you have.

Having those very specific scenarios that we looked at a little earlier in the talk, highly related to my job role, is really the sweet spot for building critical thinking, situational awareness, and on-the-job proficiency: how am I going to be able to do my job the best that I can, not just in a generalized sense? That’s probably the key one.

The other thing is that the definition of microlearning varies out there. Our definition of microlearning is certainly breaking up complex or detailed content into those bite-size, scenario-based challenges, but there are a lot of other things that can be added to microlearning against the principles we talked about before. How do I make that microlearning stay in my long-term memory so that I can recall it in a real-life situation months, maybe years, after the training? How can it keep me engaged in the learning process? There are a number of techniques married with the microlearning methodology that make it effective, and I would say the primary one is the quality and relevance of the content to that particular worker.

Amanda: Well, Liza, Meredith, that is all of the time that we have for today’s event. I really appreciate you taking time out of your day to put this together for us and share all this great knowledge. I know the audience is giving you lots of kudos there in the chat. Thank you once again for being here.

Meredith: Thanks for having us today. We’ve enjoyed it.

Amanda: All right, everyone, if you would like to catch us in a few upcoming webinars, check out trainingindustry.com/webinars for more information. Of course, all of our events are free, and we qualify you for one credit hour with ISPI and CPTM. What’s CPTM? It’s the Certified Professional in Training Management program. If you’re looking to up your game both personally and professionally by linking your training initiatives to organizational objectives, this is the program for you. You can join us at any of our in-person sessions to come, or from anywhere in the world you can join us for one of our virtual events. Check out more information on our website; there are also master classes and certificate programs available there. I’d love to meet you in person at the Training Industry Conference and Expo happening in just another month here in Raleigh, North Carolina. It’s still possible for you to join us for this very unique engagement, where you will meet everyone there. It’s pretty awesome, actually.

One last time, thanks so much to Liza and Meredith and everyone at Qstream for sponsoring today’s event. Thanks to all of you for attending. Until we see you back here next time. Enjoy.