Examining the Formulae

SOME THOUGHTS ON DIFFERENT DESIGN MODELS

Ideally, developing a training curriculum is a process completely invisible to the learner. The learner sits down, learns the material, and stands up a better worker without ever considering the decisions of the designer.

I mention in passing that, in my experience, noticing a designer’s decisions is almost always a negative. It is the mistakes rather than the successes that stand out; when we consider what the designer did, it’s usually to notice what they should have done differently.

Since the birth of codified training programs, there have been myriad attempts to determine the most effective and efficient methods to communicate professional information to those who need to hear it. Many of these are exercises in mental games and corporate wordplay; however, some have stood the test of time and provide the foundation for proper, well-considered training programs.

When I first entered the practice, I had little knowledge of these models and based my designs intuitively on how I felt my trainees would respond to my lessons. As I developed professionally, I realized just how precarious an intuitive approach was; though I had some successes, I was leaving myself open to complete disaster should my intuition prove fallible. Which, of course, it did.

And so it was with a few notable failures behind me that I began to approach my work from a much more studied position. I was at first surprised to find how much had been researched and written about the field; there are, as many of you know, a great many models for how best to design training curricula.

Here are some highlights:

ADDIE

On the surface, ADDIE is a remarkable description of how any program should be developed, training or otherwise. The acronym stands for Analysis, Design, Development, Implementation, and Evaluation. Basically, it means that you figure out what you need to do, plan out how to do it, make it happen, and then determine what needs to be changed or improved.

Its virtues were and are obvious to me; starting any kind of work from a standpoint of preparation and knowledge rather than improvisation is, of course, a firm foundation. When I taught writing, this was how I had my students do it: decide what you need to say, say it, then fix it.

On the front end, the ADDIE model is heavily data-driven. Development and evolution of a training program become calculations. As the data come in, they can tell you precisely what areas are lacking; where the gaps in instruction are; where to expand, improve, or cull.

I am a fan of this approach, but I am also aware of its shortcomings. Applying this model to training exposes an obvious weak point: figure out what the trainees need to know, decide the best way to explain it, write the module based on that plan, and then…well, see how it goes. Without data to support your design, you are missing a key element, and the pre-design analysis ADDIE calls for is often unrealistic. If a module is designed poorly, you won’t notice until you’ve implemented it.

There is a lack of flexibility to this model; strictly data-driven processes tend to fall victim to the same problem across industries. There is more to good work than the bare numbers. The real goal of any training program is to change people’s behavior; you want them to understand the material and then go out and do something about it. A highly analytical model like ADDIE misses many opportunities to create the kind of human connection that most effectively brings about correct action.

 

Bloom’s Taxonomy

A model like Bloom’s Taxonomy corrects the more impersonal aspects of ADDIE and other data-based processes by speaking more to how people actually learn.

The idea here is that learning takes place in a hierarchy of processes, similar to Maslow’s Hierarchy of Needs, in which each higher tier builds upon the one below it. This hierarchy was eventually revised:

6. CREATING
5. EVALUATING
4. ANALYZING
3. APPLYING
2. UNDERSTANDING
1. REMEMBERING

I find that this model makes some intuitive sense. First, we receive information, which we retain and remember. The more we know, the more we can develop an understanding beyond the basic facts. With this understanding, we can question and evaluate what we know, ultimately leading us to be able to develop new ideas.

One of the things that captivates me about this model is that it is a good description of how I myself develop curricula. In order to teach a concept or process, I must first have a grasp of the facts and be able to recall them when needed. I must understand what these facts mean and what makes them important to know, and then apply these facts to a coherent lesson—which I can only do by the process of analysis, evaluation, and ultimately creation.

I have long held that one of the keys to instruction is empathy with the learner, and having an internalized understanding of how these things are learned is invaluable to my work. Watching my own process as I develop my storyboards is part of how I design my modules. I often refer to Bloom to help delineate specifically how to structure a lesson in order to achieve its maximum efficacy.

However, it is important to keep in mind that most consider Bloom to be outdated, despite its recent revisions. Like ADDIE, it is an excellent surface-level reminder, a good shorthand for how learning and teaching take place, but it is not without its deeper structural flaws.

The original intent of the Taxonomy is to define learning objectives clearly: This is the part where you acquire the basic knowledge; this is the part where you grasp it on a conceptual level; this is the part where you apply it to your work; and so on.

Unfortunately, actual learning does not happen in such a rigidly defined process. Though evaluation can be considered a “higher-level” cognitive process than simple memorization, the one does not necessarily build on the other. Proper instruction requires the instructor to consider the different perspectives from which the material can be viewed, including the different internal processes by which learning takes place. The Taxonomy is simply too rigid to be taken literally.

 

The ARCS Model

My criticism of many models, including ADDIE and Bloom, is that they eliminate the human quality of learning. Education on any but the most basic levels is never strictly mechanical. If you don’t speak to a learner’s sense of self, then you are not teaching properly.

This is difficult to accomplish in an industry like eLearning, in which you design single lessons to be taken by all sorts of different people; the task of speaking to each one individually isn’t, at first glance, achievable.

The technique isn’t dissimilar to how stage magicians perform their tricks. Magicians assume that their audiences will meet them halfway to make the trick work—in fact, their techniques are all geared to make sure that that’s exactly what happens. By directing the audience’s perception and attention, they achieve their effects.

Now, I am not out to trick anybody, and I want people’s attention fixed firmly on what I am actually doing. The overlap is that I work with people’s instincts to direct their attention where I want it.

This is the virtue of the ARCS model of design. The four elements—Attention, Relevance, Confidence, and Satisfaction—trade the technical precision of other models for a direct focus on the learner’s reaction to the material. The emphasis is the experience, not the process.

 

Here’s how I interpret and use it:

ATTENTION – The official terminology is “perceptual arousal” and “inquiry arousal,” and all it really says is that lessons need to be interesting. There are a lot of different recommended methods for this—variability, humor, active participation, practical examples, and the like—and those who are interested can reference my blog post on how not to bore people for a deeper exploration.

The real question is, Have I written something that can keep people’s minds from wandering?

RELEVANCE – The learner needs to believe that what they are learning has some bearing on their work. I have often sat in front of a video or presenter secretly rejecting everything I was hearing, simply because I saw no connection between what I did and what I was being told to do.

The content needs to demonstrate two things:

  1. How it is immediately useful, and
  2. How it can help in the future

CONFIDENCE – I have designed many modules in which the details pile up and up. It can be overwhelming. A learner can leave a twenty-minute module—with attention intact and a strong grasp of the importance of the material—and have no idea what he or she heard. There was too much, or it seemed too complicated, or for whatever other reason the learner just did not feel able to process the information.

Built into any good training module is the recognition of how or why it may intimidate people. This is where reinforcement plays a major role. As a guideline, my modules tend to repeat themselves—not simply as dry review, but as an assumption of knowledge. As the lesson moves forward, references to previously-learned facts remind the learner what he or she has already heard. “Ah yes, I remember that,” thinks the learner, and the new material becomes that much less daunting and alien.

SATISFACTION – This is the hardest element to achieve in an eLearning environment. The best lessons are rewarding and satisfying. Knowing how to reward a class or student in person is part of being a teacher, but, alas, it is not part of the training background of an instructional designer.

The situation is compounded by the fact that over-effusive generic praise is a sure-fire way to disengage people from an eLearning course.

Again, my answer is reinforcement. Yes, you do know this; you have learned it; it is valuable. Through tone, diction, and carefully placed moments of congratulation, an ineffable sense of the worth of learning can be communicated. It is not the high edification of an enlightened thinker, of course, but the value of the learning can always be shown.

 

ARCS, on the whole, has always struck me as most in line with my philosophies and methods, but I recognize that there are real risks in following it too closely. Great care must be taken not to dive too deeply into the abstract. Worrying too much about the emotions and experiences of the learner can get in the way of the actual material that needs to be taught. Without something more precise, such as ADDIE or the Taxonomy, the concrete goals of the module can get lost.

 

What I Learned

There is no single, set BEST WAY to design eLearning. The attempts to codify adult learning have all, in some way, fallen short. The way people learn is too complex and variegated for a short set of guidelines to describe, no matter how exhaustive the research behind it.

I learned some invaluable techniques in studying these methods. However, at the end of my research, I found myself confirming rather than scrapping my original assumptions: Learning as a psychological phenomenon is something that you can guide, empathize with, help facilitate, but it is up to the learner to take the steps. My job is to make those steps as efficacious as possible.
