eLearning Review Processes and Quality Assurance
One of the stages of the eLearning development process that I repeatedly come into conflict with, and am confused by, is the review and quality assurance stage. What I mean is the type of review process in which a group of learning experts reviews a learning experience during development or at the end of production. That conflict and frustration have led to the thoughts and suggestions discussed below.
A review process is great if there is a set known standard and process.
For me, eLearning is all about innovation.
Most of what is currently standard is average, and it's time for change. The area has much more potential than is being realized at the moment.
The danger in these review processes is groupthink, and groupthink is the enemy of innovation. Standard groupthink responses are statements and thoughts such as the following:
"Oh, it looks different."
"Oh, that is not the way we would have done it."
"Oh, we know what works, and . . . new ideas are not going to work for technical reasons."
Doesn't academic research use a peer review process to ensure the quality of the research? Some of the features of peer review are the following:
- It's actually the "audience" doing the reviewing. Other academics from the community will be the ones reading the paper.
- The review process is not being undertaken by generic "research experts."
- Peer review is actually about making sure that good research methodologies have been undertaken.
- So often, I've heard that the best new thinking has had trouble at the review stage.
Now, if I actually think about the types of quality assurance processes used in eLearning, they are not really peer review.
- It's not the actual audience; it's often learning experts.
- The outcomes, not the process, are being reviewed.
If you think you need a review or quality assurance process, here are some ideas about the way a review could be achieved in other ways.
- I'm 100% sure teachers and developers would find value in services that focus on things such as proofreading, checking dates, and checking links.
- Instead of the feedback coming from reviews, focus on prototypes and start testing the learning experience with learners as soon as possible. This is what academic "peer review" is actually about: testing it with the audience.
- Set up self-assessment checklists in which the developer measures the experience against the standards. If you feel like you need external input, then have an external person coach the developer through the self-assessment.
If for some reason you feel like you still need an external review process, here are some suggestions.
- Don't let a committee of people make the decisions. Have individuals review and provide feedback. Committees will just breed groupthink, and what will get through are the average decisions that everyone agrees on.
- Review the thinking, not the outcomes. Have the reviewer focus on the process that has been undertaken. A good example of this is to look at the learner analyses and see if the learning activities match up with what is known about learners.
- Make it a dialogue. Have the reviewer be a "critical friend," questioning the decisions and thinking throughout the project, instead of just handing over a report or checklist.
I suppose the key message is this: if you have a review process, make sure it doesn't allow groupthink to take hold and doesn't get in the way of innovation. Trust people, help them make the right decisions, and let innovation happen.