What does xAPI mean for instructional designers?

Megan Torrance from TorranceLearning discusses xAPI and how it can bring instructional designers back to the roots of focusing on learning performance.

Subscribe using your favourite podcast player or RSS

Subscribe: Apple Podcasts | Spotify | Amazon Music | Android | RSS

Sign up to our email newsletter to get podcast updates


Transcript

Robin:
Hi, it's Robin here, the host of the Learning While Working podcast and the founder of Sprout Labs. In this podcast I'm talking with Megan Torrance. Megan has become really well known as a thought leader around what xAPI means for instructional designers.

We had a fantastic conversation around what happens in instructional design when you move from measuring and thinking about learning, to measuring outcomes and performance and thinking about business results. There's a shift towards performance architecture that's happening in instructional design at the moment.

It's a really exciting time in L&D. It feels like things are starting to shift, that we're making changes and really moving forward to becoming a lot more business-focussed.

Megan finishes by talking about a program that she runs called the 'xAPI Learning Cohort'. It sounds like a fantastic way to get started with xAPI, and I'd encourage anyone who has been listening to these podcasts, or who has just come across them, to hunt it out.

Welcome to the Learning While Working podcast, Megan. It's great to have you here today.

Megan:
Thank you so much. I'm really looking forward to this, Robin.

Robin:
Megan, you seem to be doing a whole lot of work around thinking about what xAPI means to learning design and instructional designers. What are some of the big things you think are really shifting, in terms of learning design?

Megan:
You know, what I think is interesting is that you can have all sorts of conversations around the technology and the specification and the standard, but from an instructional design perspective this new thing actually brings us back to the roots of what we know we should be doing, or what our business acumen tells us we should be doing, which is tying back to performance and getting out of the rut of information dumps.

Really tying back to what works and what improves performance on the job. We're actually going to be getting better data to help us answer that question more smartly.

Robin:
It's really fascinating for me. It seems like the first step is that people realise they can get lots of data from xAPI, then they get overwhelmed by it, and then they go, "Actually, what we can now do is capture things like how people experience something or how someone performed, and get those into our learning reporting systems." That can then change some of the ways they think about measurement as well.

Megan:
Exactly, right? There's that first eye candy, or that proof-of-life, right? The "Woohoo! I got 77 'experienced' verbs." That's the first step, but xAPI's going to let us look more deeply at the learning experience. There's an inward focus if you look at it from that perspective. Then there's also that outward look at results, and the impact we're having. The combination of those two is, like you said, a gigantic amount of data, but it also holds a whole lot of promise.

Robin:
What's interesting is that we're working more with the possibilities of it. I'm now at the point where I'm starting to think that the first step, from a learning and data point of view, is the classic Cathy Moore exercise of working out what needs to be measured. Then, with xAPI, we can get that into the system. You can literally measure it, and then have the learning experience flow from that hard, numerical measure.

Megan:
Absolutely. It's funny you say that, Robin, because when I talk to instructional designers about xAPI, sometimes they get all excited about the fact that they can find out who looked at what page for how long in a particular eLearning course. But I want to pull the conversation out to something like Cathy Moore's model, which is an excellent starting place for "what do I measure?": I measure results, I measure behaviours, I measure practice of those behaviours, and learning about those behaviours. That stacks up so nicely.

I look at Kirkpatrick's levels of evaluation, where with SCORM you are really limited to level two evaluation. With xAPI, you can look across the entire spectrum of evaluation: Did they like it? Did they do it, and could they pass the test? Are they actually doing it on the job? What are the results? That's another way of looking at all the things we could be doing and measuring.

Then we as instructional designers have to say what matters. I do the same thing with Conrad Gottfredson and Bob Mosher's Five Moments of Learning Need, and with 70:20:10; all of these give us insight into what we could be measuring. I want to measure a lot of things. I want to have data even if I don't always know how I'm going to use every piece of it, because there's that discovery process after you have it.

It can start us looking at what really counts about the learning intervention we're about to design. I think it makes us better business people, not just better at making nifty interactions.

Robin:
Yes. This is where instructional designers and learning designers can shift towards being performance architects, by starting to have these conversations around what the measurable outcome is.

Recently the conversation was about a blueprint for a six-month-long training programme. When I started to talk about the top levels of the Kirkpatrick model, the project manager went, "Are you talking about business case evaluation?" I went, "Yeah, I am," and she went, "Oh."

She was quite taken aback that learning people were actually thinking it through from the business perspective. After about five minutes she sat there and went, "Okay, all right, I've got my evaluation plan for the business case now and for the project."

Megan:
I think we can be catalysts, right? I think we can be catalysts for our organisations. I love to kick off a conversation with a new client with, "How will we measure if this is successful?" If the answer is how many people take the course, I keep digging because that's not the answer.

We can help people, I think, at least reduce the quantity of ineffective learning by focusing on what's really effective, just by asking some of these questions, let alone with the data that comes from them.

Robin:
Yes. There are a couple of things about this. I've heard from a few people, and we've had it happen a couple of times, where we've started measuring learning inputs using xAPI. Then all of a sudden people realise, oh, we actually haven't got our performance data in. Then they've struggled to get that performance data out of other systems. What are your gems of wisdom?

Megan:
That is a big kicker, actually, right? Getting the data out. I think it starts with conversations, right? If we're the training department saying, "Knock, knock, knock. Can we please have this data?", it's a very different request than if we go with the business and say, "We're all trying to solve a problem. Can we get this data so that we can find out if our solution is working?"

We start taking training out of the picture. I get asked every once in a while: "Is my business system, is my engineering system, is my ERP, is my manufacturing system, is YouTube going to use xAPI?" You know, probably not anytime soon. You've got to figure out a way to get the data out in whatever form you can, and then format that as xAPI and do a little bit of lifting on your own to get that business data out.

There's some push and some pull in there, a little bit of work, and some making friends with the people who own the data and the people who can manipulate the data, to pull that all together.
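To make that "format it as xAPI" step concrete, here is a minimal sketch, not from the podcast itself: the field names, activity ID, and performance metric are illustrative assumptions about what a record pulled from a business system might contain, reshaped into the standard actor/verb/object statement structure.

```python
# Hypothetical example: reshape one row of business data into an xAPI statement.
# The email/name/metric fields and the activity ID are illustrative placeholders.

def sales_row_to_statement(row):
    """Turn one record from a business system into an xAPI statement dict."""
    return {
        "actor": {
            "mbox": f"mailto:{row['email']}",
            "name": row["name"],
            "objectType": "Agent",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {
            "id": "https://example.com/activities/quarterly-sales-target",  # hypothetical activity ID
            "definition": {"name": {"en-US": "Quarterly sales target"}},
            "objectType": "Activity",
        },
        "result": {
            "score": {"raw": row["sales_to_target_pct"]},          # the performance metric itself
            "success": row["sales_to_target_pct"] >= 100,          # did they hit the target?
        },
    }

statement = sales_row_to_statement(
    {"email": "pat@example.com", "name": "Pat", "sales_to_target_pct": 112}
)
```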

Robin:
Yes. That's a really nice way of thinking about it, that you do have to do a little bit of legwork. The fascinating thing we're finding is that when you start asking the people who own the business intelligence systems in organisations about getting access, they're more than happy to give it, and, on the other side of that, they're more than happy to start pulling the learning data into those systems.

A couple of times we've started to explore whether we can do things like build forms where people can enter performance data. It's maybe not automatic, you have to do it manually, but at least it's getting that data into the xAPI world as well. And you guys do a little bit of work with Zapier?

Megan:
We're using Zapier as one of those workhorse tools to move data around. One of the things we love to do is pop-up experiments or maker-space kinds of projects. We've actually used Zapier in two of those projects to pick something up from Slack, or from Google Drive, and, when a new image appears, for example, take parts of the data off the image, format an xAPI statement, and send it over to a learning record store.

That does some of that heavy lifting for us. It's not complex. Here's the thing about xAPI, and I don't want to take us into a data or a geeky conversation, but the magic of xAPI is not the technology. It is not hard. If you give the spec to a developer, they're like, "Yep, whatever," and off they go, right? The hard part about xAPI, and I'm making quote bunnies around 'hard', is twofold, right?

One is for our industry to actually come together and agree to use it, and agree on how we're going to use it. That's the magic, right? I don't think there are a lot of other industries that have the kind of standardisation we do. That allows for a very rich, interchangeable and very fluid market. Then the other is for the instructional designer to think differently about it.
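As a rough illustration of the kind of lifting a Zapier step (or a small script) does at the end of that pipeline, here is a hedged sketch of posting a formatted statement to a learning record store. The LRS endpoint and credentials below are placeholders; the version header and Basic auth are how most LRSs expose the standard xAPI statements API.

```python
# Sketch: POST one xAPI statement to a learning record store.
# LRS_ENDPOINT, LRS_KEY, and LRS_SECRET are hypothetical placeholders.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"
LRS_KEY, LRS_SECRET = "key", "secret"   # supplied by your LRS

def send_statement(statement: dict) -> str:
    """POST one statement to the LRS and return the statement ID it assigns."""
    response = requests.post(
        LRS_ENDPOINT,
        json=statement,
        auth=(LRS_KEY, LRS_SECRET),
        headers={"X-Experience-API-Version": "1.0.3"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()[0]  # the LRS replies with a list of stored statement IDs

if __name__ == "__main__":
    example = {
        "actor": {"mbox": "mailto:pat@example.com", "objectType": "Agent"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"id": "https://example.com/activities/quarterly-sales-target",
                   "objectType": "Activity"},
    }
    print(send_statement(example))
```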

Robin:
Cool. I know. I'm going to come back to my other question in a little while. I think that standards point is really fascinating, because essentially marketing and digital marketing have become data-driven: things adapt to you, things get personalised, your behaviour is predicted. But marketing data has no standards whatsoever. We actually have some really awesome standards in learning, and we just need to get better at adopting them.

Megan:
That's totally true. If you think about why (and I've been thinking a lot about why people don't adopt xAPI; of course, plenty of people are), one of the things is that in marketing, or in any tool in which you've built yourself a beautiful, beautiful thing, the vendor has a vested interest in keeping their beautiful stuff nice and neat and organised so they can control it and serve it up.

Once they've got a client using it, it's incredibly difficult to leave, right? Once I'm in and I'm using my marketing tool, or some black box of a learning tool, if I can't get my data out, if my data isn't portable, it's incredibly difficult to leave.

Robin:
This is why, when we're thinking through learning ecosystems now, we put a learning record store in the middle: it gives you the most flexibility to move your data around over time.
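A small sketch of the flexibility that an LRS in the middle gives you: because it exposes the standard statements API, other systems, such as a BI tool, can pull the same data back out. The endpoint, credentials, and filters below are illustrative assumptions, not from the podcast.

```python
# Sketch: pull statements back out of an LRS so they can be loaded into a BI tool.
# Endpoint, credentials, verb filter, and date are hypothetical placeholders.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"
LRS_KEY, LRS_SECRET = "key", "secret"

def fetch_statements(verb_id: str, since: str) -> list[dict]:
    """Query the LRS for statements matching a verb, stored after a given time."""
    response = requests.get(
        LRS_ENDPOINT,
        params={"verb": verb_id, "since": since},
        auth=(LRS_KEY, LRS_SECRET),
        headers={"X-Experience-API-Version": "1.0.3"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["statements"]  # the LRS returns a StatementResult object

completions = fetch_statements(
    "http://adlnet.gov/expapi/verbs/completed", "2018-01-01T00:00:00Z"
)
```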

To return to the second question, which has to do with learning measurement: there's a really fascinating statement from an L&D client of ours who firmly doesn't believe in evaluation. That's actually because he used to run a leadership and coaching business; he used to be on the vendor side.

He used to sit there and say, "There are too many variables. I could never guarantee that I'd get an outcome, because the business - the client - would never do this or would never do that. It just wasn't easy to do evaluation." What's your take on that? Sometimes in L&D we deal with wicked problems; things are difficult and multi-layered. What's your take on measuring those things that are more nebulous?

Megan:
I fear that saying there are too many other factors is a cop-out. It's true, there are a lot of factors that affect performance, and we can be measuring a lot of them. But if we're not measuring the outcomes, I want to wonder why. Is it because we're afraid we might find out that what we're doing doesn't work? It may be pretty, it may be snazzy, it may be exciting and fun, and it might not be effective.

I really struggle with people who are hesitant to measure, because there are leading and lagging indicators of success all over the place. You don't have to measure everything. You don't have to get all the pieces, but I question whether not measuring is the solution.

Robin:
Yes. Thinking of it as a discipline and as a field, not measuring is just too risky for L&D. It's interesting. Even as I said that statement and repeated it back, I sat there and thought, "Yeah, maybe the core measurement has too many factors involved in it. But maybe there's something you can slice off at the edge of it, an observable behaviour or something you can link directly to the learning intervention, that you can get in this data."

Megan:
Absolutely, absolutely. Here's the other thing, Robin. I was recently talking with a colleague from another firm about a project. They're building training to support a piece of engineering software. What do we do, right? In the L&D industry we do a few things: we provide training about systems, and then we provide performance support. When we provide our performance support, or we write the manuals, I think of it as three different ways you can go about it, right?

You can have just the list of steps, you can have a list of steps with screenshots, and you can have the full-blown, narrated, "watch me do this" screen-capture whole shebang, right? As instructional designers we like the whole shebang. We like the videos. We can measure whether or not people watch the videos and all this stuff.

Well, this group used xAPI simply to track usage. Now, you could also do this with Google Analytics, right? They tracked usage of the various ways in which they provided support for this software. What they learned was that people just weren't watching the videos.

If you think about it, the biggest effort is in creating those videos. When something in the software changes, the hardest part to update is those videos. They're the most time-consuming, and the budget is spent on developing them. Then they find out those are the ones that are not even being used that much.

That's painful. That's one of the things we're afraid of finding out. But what they then did was say, "Well, if nobody's using this, let's stop doing it." They saved the L&D team hundreds of hours a year of effort producing those videos, hundreds of hours each year they could spend on something that might matter more and be more impactful. They had a mindset of "if we find out this isn't working, we will stop doing it", which I thought was really, really bold.

Robin:
Yes. It's a data-driven decision that comes from a test-driven approach. It's a really nice example of learning measurement, and of learning from it. The other thing I find fascinating is that when you start to have those conversations about measurement, you naturally lead to solutions that aren't always about learning and training. You quickly get to a core problem that's quite often quite different. It might be about the actual measurement of people's performance, or how knowledge is managed, or how people are supported.

I think that's a really nice way to then talk about what's become really common in Australia, for example: talking about capability development rather than learning, and a more holistic way of working.

Megan:
Well, that really gets to a phrase I've heard you use, the ‘performance architect’. Not just making learning, right? I'm looking at the entire work experience and figuring out how we make that better.

Robin:
Yes. It's interesting, because a lot of instructional designers come from such a strong information background that some people just aren't interested in that. They're interested in building the shiny things. Are you finding that you're starting to see that trend, that some people are deciding that this new way of working is just not for them?

Megan:
I see it bubbling up, right? Think about it with SCORM, right? We're limited to five boring things that we track about the learning experience, and we can track those same boring things about every learning experience. I can write one report from my learning management system and get all the possible information about everything, and it fits on one report. That report is a neat, orderly rectangle, but it means I'm not thinking, and I don't have to think, about all those other things.

The people who were driven crazy by not being able to get good data are very, very excited about this opportunity to work more closely with their businesses and really measure the things that make a difference in the work. That concept of really solving problems, and not just providing training, is emerging and now has some data and some strength behind it. It's very exciting.

Robin:
I think it's a really exciting time. There was a conference I was regularly going to, and I was starting to get very negative about the fact that the language was always very similar. Then last year the language actually changed. There's been a shift. We're actually moving forward, finally, after a little bit of a...

Megan:
Absolutely. Absolutely. It's funny, because I go and I talk to people about xAPI, and I'm not passionate about the colons and brackets and the indenting and all the details of the specification. I'm passionate about the concept of a shared language and a shared grammar, and what that means for the industry and what that means for instructional designers, right?

I spend a lot of time introducing people to xAPI and talking to them about it. What's really amazing is to watch the shift in the questions, because it's no longer me saying, "Wow, look at the power of xAPI," and somebody asking, "Does that work in my authoring tool?" If we're focused on the authoring tools, we're only focused on a teeny, tiny slice of what happens in learning.

For the first couple of years there was a lot of: Why doesn't Storyline do this? When will Captivate have this? Those are interesting and useful places that might be a good start, but there's so much more there.

Now I'm getting questions like yours, and from other people: How do I get access to my data? What kinds of things should we be measuring? Or, as in your example, how do we capture and measure the effectiveness of coaching by leaders in an organisation? Those are much better conversations to be having.

Robin:
I always like to wrap up these xAPI interviews - and we've travelled a lot of distance very quickly - with a question: what would be your advice to someone, or an organisation, that's thinking about getting started?

Megan:
Getting started is a tricky one, right? People sometimes get resistance in their organisations to trying something new, or something that may seem untested. One of the things that we do at TorranceLearning is host what we call the xAPI Learning Cohort. We picked it up from ADL, the Advanced Distributed Learning Initiative.

The xAPI Learning Cohort is a 12-week learning-by-doing experience. It's virtual: we run it through Slack and Adobe Connect, and we actually capture xAPI statements about what's going on in Slack. People come together, either with their own teams or with people from other organisations, and build a project start to finish. We have a structure. They learn about xAPI as they go, or maybe they already know xAPI and are using it but want to experiment with something different.

We'll kick off on August 31st. There will be a team looking at xAPI with virtual reality and Unity. There's always a team looking at learning record stores and how we talk to learning management systems. There's always a team looking at the authoring platforms.

There was a team in one of the cohorts we ran this spring and fall that started from scratch. They built an ebook about xAPI that sent xAPI statements to a learning record store. They started at the beginning of the cohort, had it working by the end, submitted it for the eLearning Guild's DemoFest at the FocusOn conference, and won Best in Show.

Not every project finishes and not every project wins Best in Show, but you're making something you can bring back to your organisation and say, "Hey, I made this. Do we want something like this here?" That helps prove the point, and it lets you experiment and learn where it's a little bit safer.

Robin:
That sounds like an awesome experience and a fantastic thing. We'll make sure to include some links to it in the show notes and blog post for this podcast.

Megan:
For sure.

Robin:
Thank you so much for joining me today, Megan. It's been a really wide-ranging conversation, and really interesting. It's great to be talking about learning and measurement.

Megan:
Thank you, Robin. I've enjoyed it thoroughly.