Learning Science in Practice: Do’s and Don’ts
Around the beginning of the 1980s, folks in fields like psychology, linguistics, philosophy, sociology, and more realized they shared an interest in cognition. As a result, they created the new cross-disciplinary field of cognitive science. Around the beginning of the 1990s, a similar convergence happened around learning, drawing on fields like instructional design, educational psychology, and, yes, cognitive science. Thus, the interdisciplinary field of the learning sciences was born. This consolidation of research enabled productive advances.
Roughly 30 years later, despite robust promotion of the results, we are not seeing systematic application of these findings. Instead, we see an education system resistant to change, a higher-education system focused on knowledge, and corporate learning delivering ‘awareness’ and compliance training. All too often, we see information presentation and knowledge testing rather than actual skill development. We aren’t practicing what we know to be useful and necessary!
Rather than survey all of learning science, here I want to cover the things we aren’t doing that would make the biggest difference, and the worst things we are doing. They’re linked at the wrists and ankles, after all.
To begin with, there’s an important distinction to be made: we create instruction to develop the ability to do. Learning may be intrinsically valuable, but for the purposes of investing in solutions, we want to achieve relevant outcomes. We want to attain impact by improving performance. That’s a foundational point behind the argument here.
Defining the outcome is also important. I’ll suggest that what will make the most difference to your organization isn’t the ability to recite information, but the ability to make good decisions in the face of increasingly volatile, uncertain, complex, and ambiguous situations. Thus, our instructional goals should mostly focus on developing cognitive skills, not just knowledge. Even when you have knowledge, you do things with it. Learning medical terminology, for instance, typically isn’t for intellectual self-gratification; it’s used to determine what to investigate, where to intervene, and how to assess outcomes. That holds for essentially all the outcomes organizations will require.
What is Learning?
Learning to do things requires doing things. We learn to play piano by playing the piano. We learn to build spreadsheets by taking financial models and rendering them through digital tools. We learn to interact with customers by interacting with customers, perhaps virtually. In all cases, to learn to do, you must do.
Importantly, learning information about doing doesn’t lead to the ability to do. In cognitive science, we talk about ‘inert knowledge’. These are things that we study, and can pass a test on, but when there's a situation where that knowledge is relevant, it doesn’t even get activated! (Fun fact: what we’re thinking about is represented as patterns of activation across the neurons in our brain. If it’s not activated, we’re not thinking about it.) This is because we haven’t used the information in the ways that the situation requires, so there’s no trigger.
This says a lot about how our brains work: what we sense in the current context activates patterns in our brain, and our thinking emerges from the interaction between the current situation and what we’ve learned. So, we have to learn something before we can apply it in context. (Unless it’s a reflex, but we’re talking about things that aren’t ‘hardwired’ into our physiology.)
Learning is about building those long-term patterns. To do that, we first have to process new information through our limited attention (not 8 seconds, that’s a myth) into our conscious thinking. Then we have to elaborate it, by processing it in multiple ways. Most importantly, we then have to retrieve that information and use it in the ways we’ll want to use it after the learning experience. We also need feedback, not only about what was right or wrong, but about why it was right or wrong. With sufficient practice and feedback, simple skills become automated, and we can develop more complex versions.
As an adjunct, motivation matters. If folks don’t know why the learning is relevant to them, they are less likely to retain it. If they’re afraid of the consequences of performing, they are less likely to participate, and more likely to focus on surface features rather than the underlying structure.
There’s no short-circuit here. We can’t download new abilities, despite fictional portrayals. We need to be consciously aware of the relevant information, build up the strength of those representations, and use that knowledge to guide our performance until what needs to be automated has been, freeing our conscious effort for the important decisions. There are nuances to all this, of course; this is a coarse overview.
Where do we go wrong?
With that said, it’s easy to identify where most learning goes wrong. The problems are arrayed across the entire learning experience, and we can use this perspective to identify flaws at every step of the way.
First, we too frequently focus on the wrong outcome. We believe, too often, that if we give people information, they’ll change their behavior. Yet, as laid out above, just giving them conscious information doesn’t give them the repeated processing to retain it, let alone the practice in retrieving and applying it. So, objectives for learning interventions such as ‘know’ or ‘understand’ don’t provide a basis for actual outcomes. What we need are objectives that specify what people will be able to do, along with criteria for determining when that capability has been acquired. We state these in the form of performance objectives: for example, ‘given a customer complaint, identify the root cause and propose a resolution’, rather than ‘understand our complaint-handling policy’.
Without solid objectives, we end up having learners recite knowledge back to us instead of demonstrating ability. Much too frequently, it’s about having learners recite arbitrary bits of information that have been presented. Such information may not be relevant, frequently hasn’t been highlighted as important, and is arbitrary or too complex for an initial presentation. Too frequently, as well, the feedback is just ‘right’ or ‘wrong’, without specific information about why the answer was wrong (or right). Yet that information is critical to developing a rich ability to perform in an optimal timeframe.
Similarly, we don’t provide sufficient practice. We have a tendency to practice until we get it right, which might feel good but isn’t an effective strategy for developing sustained new capabilities. Instead, our goal should be practicing until we can’t get it wrong! Even within the constraints of time and budget, we can and should do better if we actually want a return from our interventions (and investments).
Further, what we’re presenting as ‘content’ isn’t differentiated by its cognitive role in learning. To guide decisions, we need mental models that are causal and connected, supporting predictions about the consequences of actions. Seeing examples of those models applied across varied contexts has been shown to be effective when it precedes learner practice. Yet we tend to present content as information ‘about’ the topic, with pretty pictures, instead of delving into the underlying behaviors.
Finally, we ignore the emotional side of the equation. We too often don’t establish why the learning is relevant to the audience. In addition, we don’t work hard to make sure learners feel safe to experiment. We also don’t celebrate outcomes, nor assist learners in joining the community of practitioners that can extend the learning.
What does ‘right’ look like?
What, then, would an appropriate learning experience look like? Learning Experience Design is a label that can be viewed as adding in the element of emotion, and treating learning as a process, not just an event. Both are relevant. There are specific implications for what we do as designers, and for what learners then do.
It starts before you start designing, with the analysis of the need. Don’t take what you’re asked for as gospel; drill in and find the real behavior that isn’t what it should be, and how you’ll know when it’s remedied. Don’t stop there; also find out the underlying reason why performance isn’t at the level it should be. Some performance issues aren’t solved by learning interventions, and instead there may be environmental issues, or it may be more effective to put the information in the world.
Then, when a skill gap is the barrier to performance, ensure that you’re focused on developing a real ability, not just knowledge about the performance. This means the objective should be that learners are able to do things, and the design should align to that end.
What matters here is meaningful practice. Its absence, I’ll suggest, is the biggest barrier to success. We need sufficient, challenging, relevant practice, with appropriate feedback. Again, there are nuances here, about the necessary quantity, the escalation of challenge, and the choice of contexts. Still, focusing on practice first, not content, is key.
Then, the content should be models that guide performance, and examples of those models being used to solve problems (rightly or wrongly). And nothing else! Okay, except the emotional opening and closing of the experience. We tend to overemphasize content over practice, because content is cheaper to produce, but learners really need to spend more time practicing than consuming content.
The learning experience should kick off by helping the learner understand why it is worth their time. It should continue by providing contexts for examples and practice that are interesting and relevant. The practice should strive to be what Seymour Papert called ‘hard fun’: engaging because it’s relevant and appropriately challenging. Also, the experience is unlikely to be a single event; instead, it should extend the learning through continued practice, reflection and planning, and coached performance. Then, we should close the experience emotionally as well as cognitively.
When we align our design practices with research principles, we create learning that is truly transformative. Our media and interactives should be designed to work both cognitively and emotionally, creating lasting changes in performance. We can do this, and we should. So, please do.
About Clark Quinn
Clark Quinn, Ph.D. is the executive director of Quinnovation, co-director of the Learning Development Accelerator, and former chief learning strategist at Upside Learning. Quinn is an internationally recognized leader in the learning technology space with over 40 years of experience designing, developing, and evaluating educational technology for government, corporate, education, and non-profit organizations.