Automating learning and measuring social learning, with Ben Betts

Ben Betts from HT2 Labs is back on the podcast in this interview. Ben is passionate about using social learning to build high-impact learning experiences. HT2 Labs are also the people behind Learning Locker, which is an open source learning record store for xAPI data.

At the Learning While Working conference, Ben talked about the work that HT2 Labs have been doing on how they are measuring social learning. The recording of that conference session goes into more depth on that work than this podcast does, so if you’re interested in what Ben is doing I encourage you to watch that recording.

The work HT2 Labs is doing is built around a set of machine learning methods called ‘natural language processing’, which I think of as methods that look for patterns in language. It’s perhaps the most complex area of machine learning.

The podcast starts by exploring data, AI, and automation in L&D, then moves into using natural language processing and measuring social learning data.

Download the ‘How artificial intelligence is changing the way L&D is working’ eBook

To go along with the podcast series on AI and L&D, we have released an eBook with transcripts of all the interviews. The eBook also gives a brief explanation of what AI is and an overview of how it is being used in L&D.

In the eBook you will learn:

  • Some of the jargon behind the technologies, e.g. what data scientists mean when they talk about ‘training a model’.
  • How AI is being used in L&D today to gain insights and automate learning.
  • Why you should be starting to look at using chatbots in your learning programs.
  • How you can get started with recommendation engines.


Subscribe using your favourite podcast player or RSS

Subscribe: Apple Podcasts | Spotify | Amazon Music | Android | RSS


Useful links for the podcast

Transcript - Automating learning and measuring social learning, with Ben Betts

Robin: Last time we talked, I had been doing a series of podcast interviews around xAPI. The world seems really different now. We've had an explosion of AI and machine learning technologies and people starting to think about automation. What do you think is the potential for AI in learning and development?

Ben: I think you're right, Robin. I think we're starting to move on from ‘so what’ with data, to getting more out of data. Data is one of those things that people naturally want, a bit like money. Do you want more money? Sure, but what are you actually going to spend it on? Now don't get me wrong, I'm probably better at spending money than I am at using data; but once you've actually got it, people imagine that benefits will accrue. In some cases they might, but you need to put it to work. It's difficult sometimes, and there have kind of been two angles to it.

Using data to understand what works in L&D

Ben: People start off with analysis and analytics, and wanting to understand what works. That's wonderful, and we've had some nice case studies where it has been quite successful. But it's also rare. It's difficult to have the luxury of time to do that analysis. It's difficult to get the breadth and depth of data that you want from various other silos in the organisation, to prove a performance improvement. It can be a luxury because the reality is a lot of L&D departments are seen at the moment as a procurement function: I've got a problem, go procure some training, and then we move on.

It's a bit like a chicken-and-egg thing. If we could be more like Marketing and bring KPIs to the table, then we'd get listened to. But what Marketing has is that very clear relationship (in Marketing's mind) to the performance of the business. L&D is still stuck at a crossroads. I think that's a tricky manoeuvre, and one that is still worth pursuing. Don't get me wrong, I don't think it's the end of analytics, but I think people are starting to look more broadly at things like automation.

Next-stage automation – personalisation

Ben: Knowing what works and being able to link that back to performance, that'd be wonderful. Let's keep pursuing, but we need to accrue more benefits from data. Automation seems to me to be the next logical step. How can L&D move faster at a lower cost, be more personalised, more relevant, and more engaging? I think, increasingly, automation and powering the machines that sit behind learning are where the benefits are going to be – from accruing and collecting that data.

Robin: In another podcast in this series, I talked with Marc from Filtered about how they are profiling people to be able to serve up the right bit of learning at the right time. I think what they are doing is a great example of automation.

Two good examples: Filtered and Degreed

Ben: Yes, that is a particularly good example because we have a partnership with Filtered. Filtered uses one of our tools, Learning Locker, and they use xAPI. I think what they're doing is genuine. What I am seeing is that companies are delivering a content portal, whether it's their own content or curated content. And there's a heck of a lot of language around machine learning seeping into marketing at the moment. I've seen it in spaced repetition tools and in learning experience platforms. The ultimate would be that these platforms have machine learning, and the platforms are better because of that. I see precious little of that happening at the moment.

There are two things that concern me about when people say, ‘We use machine learning to improve our recommendations’, or something like that. One is scale. Because to actually do that machine learning at scale, you need a huge amount of data. So places like Filtered have done eight years of research. It's not a side gig. They have released publicly available products that get people using their stuff for free. Not that this is necessarily an endorsement for Degreed, but Degreed has a credible story in that space because they had a B to C offering. Which means they had at least the potential to accrue a heck of a lot of data that they could then spot patterns from and use machine learning to understand those patterns of usage.

Some other systems might just be ranking systems

Ben: If you don't have a set of training data that you can let the machine run over to understand the potential relationships and then to prove or disprove those hypotheses, then you’re not really using machine learning.

What I think a lot of people are doing is coming up with some ranking stack and then being bold enough to call it an algorithm. What such a system is basically saying is: every piece of content has a number, and an individual has a number ranked against it. That number gets nudged up and down by a set of rules, which are some sort of hyper-parameters that a person has made up.
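
To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of rule-based ‘ranking stack’ Ben is describing. The event names and rule weights are invented for illustration; the point is that the score is only ever nudged by hand-written rules, and nothing in it learns from data.

```python
# Hypothetical rule-based "ranking stack": every piece of content carries a
# score for a learner, and fixed rules nudge that score up or down.
RULES = {
    "viewed": +0.1,     # learner opened the content
    "liked": +0.5,      # learner clicked 'like'
    "skipped": -0.3,    # learner dismissed the recommendation
    "completed": +1.0,  # learner finished the content
}

def update_score(current_score: float, event: str) -> float:
    """Nudge a content score according to a fixed rule table (no learning)."""
    return current_score + RULES.get(event, 0.0)

def recommend(scores: dict[str, float], top_n: int = 3) -> list[str]:
    """Return the highest-scoring content IDs."""
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

scores = {"course-a": 0.0, "video-b": 0.0, "article-c": 0.0}
scores["video-b"] = update_score(scores["video-b"], "liked")
scores["course-a"] = update_score(scores["course-a"], "skipped")
print(recommend(scores))  # ['video-b', 'article-c', 'course-a']
```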

With some platforms, there is a lot of talk about machine learning and AI, but they're probably not actually doing it. There is no evidence for how they got the scale of data to do it, or how the machine actually learns. It seems much more likely that folks are kind of nudging a recommendation up and down a scale in a linear fashion, based on what people have liked or what people have done. That's not quite the same for me as true machine learning.

Robin: If you drill into these systems, they are often just logic systems. It's got some smarts to it and it has complexity. But that's not using these types of technologies that we're talking about, that build on existing data to build insights, predictions and recommendations.

Beyond ranking – thinking like a human

Ben: When we want things to be artificially intelligent, what are we asking for? We're asking for something to demonstrate a human level of intelligence. Could some simple rule-based thing do that? Yes, it certainly can. There's been a whole field of very explicit rule-based systems that went under the banner of AI for a while. That doesn't make it machine learning, although it could still be an AI system.

My concern is: if you're trying to be intelligent about something, you're trying to demonstrate what a human would do. And if you come to me and say, ‘Hey Ben, what film do you recommend?’, I don't recommend films to you, Robin, on the basis of their title, their genre, and who starred in them. That would be a terrible way to do it – no human would do that. You'd have seen a bunch of films and you'd have understood them; you'd have remembered them, and you'd make links between the content and the emotions and activities that occurred in it. I think a lot of people are doing this stuff based on titles, keywords, and descriptions, so it's not even really reaching that level – it's not human-like intelligence. That's not how a human would recommend a film.

Robin: I just realised, we dived deep into recommendation engines, Ben. What are some of the other types of automations you think could be really useful in learning and development?

Ben: That's a really good point. I mean, recommendations dominate machine learning at the moment. I think there's a chance that when people think of machine learning in our field, they think about getting a recommendation.

Robin: One of the classic bits of L&D involves figuring out what is needed for an organisation or for a person and then making a suggestion or delivering something. It's a very natural fit.

Machine learning for understanding what people say

Ben: Yes, exactly. It's kind of what we should be doing as curators of collections of content. One of my personal interests is around text analysis and semantic analysis to understand what people are saying, what they're talking about with each other in a learning environment. This has been one of my areas of passion for a while because I'm big into social and peer-to-peer stuff.

The initial problem in the social learning field was getting people to talk, because we’ve had decades of empty forums and empty wikis and blogs that no-one used. We could apply social media tools to learning, but we couldn’t make people talk. Now folks are getting smarter about increasing engagement. We've had a lot of experience using things like nudges to structure a social learning experience, and we see communication tools like Slack or Yammer blurring the boundary between learning and working in the organisation. Fostering conversation online is less of a problem at the moment. Then you get to the second problem: you've got a lot of people making comments and having conversations – but where's the learning? That is a twofold issue. One is from an instructional design point of view: how can you understand whether what people were talking about was useful and showed progression? The second is: how can you then use that understanding of what was said to resurface it to people down the line?

Answers to your problems are often found in forums

Ben: I've got a young son, so I spend time googling questions like, ‘Is it normal that a child does not go to sleep before 9 o'clock?’ Google often finds the results in forums. I get back things like Mumsnet, which shows me a thread where somebody had a question a bit like the one I asked Google, and then various folks have chimed in and said what the answer could be. Now some of those replies are really useful. Some of those – typically the health ones – always end in cancer and death. You've got to try and avoid those. But some of them are useful and give some level of reassurance. I think that's kind of interesting. If the conversations that happen in something like Slack were queryable, so you could understand what people have said about a problem, that would be useful.

Making tacit knowledge in an organisation explicit – understanding and resurfacing

Ben: With that kind of semantic-powered search, you're almost able to build up a knowledge base. You're making the tacit explicit in the organisation. I think there are two parts to what I'm excited about here, both around text analysis and being able to use a machine to do it. One is being able to understand what people are saying and when they're making progress; the other is being able to surface those conversations that happened weeks, months, or years ago, back in other contexts, to try and make the tacit a bit more explicit. That's what I'm quite excited about.

Robin: Being able to mine social learning to find what is engaging and useful is valuable. Our developers have talked a couple of times about how often they search Slack for past solutions to problems.

Ben: It depends whether you want to pay for the Slack thing or not.

Crowdsourcing your knowledge base

Ben: I'm sure this has come up before, and search will get you some of the way there, but it's almost a sort of crowdsourced knowledge base. If an organisation has a strategic problem, they could run a social learning event: a MOOC (Massive Open Online Course) where they post questions and get people talking about them. Then they'd have a crowdsourced knowledge base; they could use a machine learning system to pick out where the best threads came from, or where particular trends were showing in those threads, and potentially crowdsource that into a knowledge base.

For years there was a trend of trying to video subject matter experts for knowledge transfer. If someone was an expert in something, then we'd sit them down in front of a video camera for as long as humanly possible, capture it all, and put it somewhere. That is not hugely useful or practical or long-lived. But the theme of making people’s thoughts or opinions searchable in some way, shape, or form remains. I think that could be a big piece of value we could unlock, if we understood more about what people were saying when they were saying it.

Robin: The area of natural language processing is still quite primitive. In your session for the virtual conference you talked about how natural language processing is used to look at the patterns and occurrences of words. Even with the technology being so primitive, what sort of results are you starting to get, Ben?

The limitation of natural language processing

Ben: You're quite right, Robin. It's basic. It's one of those tasks that's easy for a human to do. We can make judgments about the value of two different pieces of text really quickly. But trying to codify that into a machine, into a system about how you came to that judgement, is incredibly difficult and quite subjective. We can only do relatively basic things at the moment, like understanding whether something has probably demonstrated what we would call a higher order piece of thought. When I'm saying ‘higher order critical thought’ I'm talking about when someone takes two or more ideas together and comes to a conclusion in some way, shape, or form. We can also understand the grammar, the tense, the structure of what they say. We are limited to English at the moment. If you want to do this for different languages, we'd have to start again.

The process – a human classifies the comments, and the machine learns from this

Ben: We're asking the machine to understand patterns or to create hypotheses about patterns between words – to see whether it can consistently arrive at a label that we've given a piece of language. A human actually makes the first judgement as to whether a comment is good or bad. Is it a zero: lower order, or is it a one: higher order thinking? That can be quite subjective, but we have a rubric and some research about how we come to that. With that training data, the machine can look for patterns between words to understand whether or not it can consistently predict a label for a pattern.
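
As a rough illustration of that supervised set-up, here is a short sketch using scikit-learn as a stand-in. The comments, labels, and model choice are invented for this example; it is not HT2 Labs' actual pipeline or rubric.

```python
# Sketch of the labelling-and-learning loop described above (assumes scikit-learn).
# Humans label comments 0 (lower order) or 1 (higher order); the model then looks
# for word patterns that consistently predict that label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented stand-in for the human-labelled training data.
comments = [
    "Great post, thanks!",
    "I agree with the point above.",
    "Combining the feedback survey with the sales figures suggests we should pilot coaching before a full roll-out.",
    "If we link the engagement data to retention, the real issue looks like onboarding rather than the course content.",
]
labels = [0, 0, 1, 1]  # 0 = lower order, 1 = higher order (from the rubric)

# Bag-of-words features plus a linear classifier: enough to test whether the
# machine can reproduce the human judgement from word patterns.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

new_comment = "Putting the two case studies together, I would change our approach."
print(model.predict([new_comment]))  # e.g. [1] -> predicted higher order
```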

Understanding – information, micro-behaviour and macro-behaviour

Ben: We then have what we call hyper-parameters, which we know from research. If it’s a particularly short comment, it's unlikely to be higher order, and we use these things to help the machine and poke it in the right direction. There are all sorts of things that we're getting out of the data; it’s kind of threefold. We call it informational, micro-behavioural, and macro-behavioural information. We can understand, from an instructional design point of view, which pieces of content with conversations attached to them seemed to lead to more higher order thoughts than lower order thoughts. You can also apply libraries to understand the sentiment, e.g. is it positive or negative? There are quite a lot of open source libraries that do that.
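
As a hedged illustration of those extra signals, the sketch below combines a simple length heuristic (the ‘very short comments are unlikely to be higher order’ prior mentioned above) with an off-the-shelf sentiment score. VADER via NLTK is used purely as an example of an open source sentiment library, and the threshold and field names are invented.

```python
# Simple per-comment signals used alongside the trained classifier (illustrative).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the VADER lexicon
sentiment = SentimentIntensityAnalyzer()

MIN_HIGHER_ORDER_WORDS = 15  # invented threshold for the length heuristic

def comment_signals(text: str) -> dict:
    """Length heuristic plus sentiment score for a single comment."""
    words = text.split()
    return {
        "word_count": len(words),
        "too_short_for_higher_order": len(words) < MIN_HIGHER_ORDER_WORDS,
        "sentiment": sentiment.polarity_scores(text)["compound"],  # -1 (negative) to +1 (positive)
    }

print(comment_signals("Thanks, really useful session!"))
```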

At the micro-behavioural level we're talking about an individual, and you can start to think about the streaks that people go on. We can look at the last 10 comments: how many of those were higher order in nature, and are people starting to put together streaks of consistency? Or are they making the occasional good thought and mostly just treading water?

The final one – the macro trend – is how I can sum this up to understand the general journey that a set of people went through, and whether or not people appear to be pushing towards the new behaviour in this particular area, as an aggregate, as a whole.

We've got lots of signals. At some point it probably takes a human to interpret some of those signals to apply logic. There is a path to automation where if somebody made a number of comments in a row that we judge to be higher order thinking, then we can give them a nudge and congratulate them. This might be something like, ‘Hey look, the last five comments you've made have been fantastic. Keep going like this. The level of detail you're giving us is superb.’ If not, then maybe we can push them with some hints and techniques. This might be things like talking about what the best contributors in our group do, or giving examples of good comments. It’s about nudging people towards better activity.
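
Here is a minimal sketch of the streak-and-nudge logic just described; the window size, thresholds, and messages are invented for illustration and are not HT2 Labs' actual rules.

```python
# Streaks and nudges over a learner's recent comment labels
# (1 = higher order, 0 = lower order, as predicted by the classifier).

def higher_order_streak(labels: list[int]) -> int:
    """Length of the current run of higher-order comments, counted from the most recent."""
    streak = 0
    for label in reversed(labels):
        if label != 1:
            break
        streak += 1
    return streak

def choose_nudge(labels: list[int], window: int = 10) -> str:
    """Pick a feedback message from the learner's last `window` comment labels."""
    recent = labels[-window:]
    if higher_order_streak(recent) >= 5:
        return ("The last five comments you've made have been fantastic. "
                "Keep going - the level of detail you're giving us is superb.")
    if recent and sum(recent) / len(recent) < 0.3:
        return ("Have a look at what the best contributors in the group do, "
                "and try linking two ideas together in your next comment.")
    return "Keep contributing - you're making progress."

print(choose_nudge([0, 1, 1, 0, 1, 1, 1, 1, 1]))  # streak of 5 -> congratulation
```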

Robin: A whole series of things are rushing through my mind. This is incredibly powerful as a feedback tool around learning design and for personalising responses. But it may also be that getting those insights quickly to the facilitators who are part of a conversation means they can adapt how they are working.

Ben: That's potentially interesting on a couple of points. One is, we still get folks that are worried about deploying social because what if somebody says something wrong? Obviously, I offer a rabbit hole there to say, ‘Well that's brilliant.’ Because if somebody said something wrong then they were thinking it. If they said it, you've got a chance, and if they didn't say it, you'd never know. How can we highlight people to potentially get involved in those areas to set people right? That's important. But the other area where I've seen people do this is to judge the facilitators themselves, or the lead educators in certain circumstances.

How can I understand the difference between a really high quality teacher and someone who could perhaps improve and contribute more? You've seen it from studies of teachers: for all of the teachers in the world – and I'm sure there are many wonderful ones – a relatively minor percentage are outliers who are fantastic and then there's everybody else, who's got a job. Understanding what a fantastic facilitator does and the difference that can make could be huge. It might even be that you don't go to the learner with that feedback; you use it to help a teacher improve their practice.

Robin: The other side of this is being able to order the comments into higher order thinking, so that the contributions that have more insights can be captured and shared.

Ben: Yes. I guess we're coming full circle back to the recommendation engine here. The recommendation stuff is coming because we're so rich in content. There's so much content that I need some sort of curator to get on top of it. We're back there again. If I've got all this social conversation: if Slack's taken off, if Yammer's going through the roof, then being able to bring a degree of focus to the things that appear to matter is the next layer. We come full circle again, to say we're not short of information but we're potentially short of insight, and maybe this could help.

Robin: That's a really nice concept and idea, that the machines can help us see the insights.

Ben: Yes. I think that's where the boogeyman of ‘machines are coming to replace us’ and things like that comes from. There may be some truth to elements of that, but overwhelmingly we should be looking at opportunities for machines to improve our practice. There's a great opportunity here for us to focus on things that really do make a difference, and not have to do the data cleansing, the searching, the sorting or the rest of it. That doesn't add any actual value. It doesn't bring us to an action or an outcome. It just gets us to the starting point. If we can use a machine to do some of that mundane stuff and nudge us towards things that would be genuinely useful interventions, then that's a pretty good use of that technology, for me.

Robin: I’m sure there are people listening to this podcast who are thinking, ‘Wow, that sounds exciting.’ Ben, what would be your advice about the first steps to get started with AI/ML in L&D?

Ben: Well, I guess I'd say you've got to arm yourself with a little bit of knowledge and a little bit of insight into how things like machine learning actually work. I don't mean you've got to get into the weeds of it. I mean goodness knows, I don't know the depths of this. You don't need to become some sort of analyst or data scientist or anything like that. You really don't. But I do think that increasingly folks are going to come up against a vendor's solutions and other things that say: the way this works is AI; the way this works is machine learning; we're different because of our machine learning. Being able to ask a few probing questions to understand the depth behind that would be really relevant. Otherwise you're just buying into a bit of a magic box that this thing will work. There's a sort of get-out clause here, where vendors can say, ‘Ah yes, well I can't really explain it to you because it's the machine. The machine is doing the pattern recognition. So you know it works, because it works.’

That's not good enough. If you really are going to differentiate, if you’re going to put it in something that's truly useful and adds value, or if someone's going to charge you more because it's got a bit of machine learning around the outside of it – ask a few core pointed questions about how the machine was trained, how it learns, how it understands patterns. What sort of machine learning is being applied here and what isn't really machine learning? Or is it AI, and how will that improve? That will lead on to some deeper conversations – or not, as the case may be – that will help you potentially weed out those folks who are using AI as a marketing mechanism, and those folks who are actually giving you some advantage that you could use to your benefit in the near future.