Keith Devlin (@profkeithdevlin) is Co-founder & Chief Scientist at BrainQuake and a mathematician in the Stanford University Graduate School of Education. Randy Weiner (@randybw15) is Co-founder & CEO at BrainQuake, a former teacher, & Co-founder and former Chair of the Board at Urban Montessori Charter School in Oakland, CA.
This opinion piece appeared in EdSurge, June 30, 2016.
As one variant of the saying goes, if your strength is using a hammer, everything can look like a nail. Examples abound in attempts to use new technologies to enhance (if not “transform,” or even “disrupt”) education. Technologists who have built successful systems in other domains—and who frequently view education as just another market in which to apply their expertise—often doom their projects from the start by adopting a narrow and outdated educational model.
Namely, they see education as the provision of facts, techniques, and procedures to be delivered and explained by instruction and then practiced to mastery. Their role, then, is to bring their technological prowess to bear to make this process more efficient. In most cases they can indeed achieve this. But optimizing a flawed model of education is not in the best interests of our students, and from a learning outcomes perspective may make things worse than they already are.
In the case of adaptive learning, education commentator Audrey Watters has given examples on her blog of how things can go badly wrong. “Serendipity and curiosity are such important elements in learning,” she writes. “Why would we engineer those out of our systems and schools?” More recently, Alfie Kohn provided another summary of the numerous reasons to be skeptical of education technology solutions.
Watters’ bleak future will only come to pass if the algorithms continue to be both naïvely developed and naïvely applied, and moreover, in the case of mathematics learning (the area we both work in), applied to the wrong kind of learning tasks. Almost all the personalized math learning software systems we have seen fall into this category. But there is another way, as our work, and a thorough review by a third-party research organization, have shown.
We both work in the edtech industry and have a background in education. One of us is a university mathematician who spent several years on the US Mathematical Sciences Education Board and is now based in Stanford University’s Graduate School of Education, the other an edtech veteran who is a former teacher and who co-founded Urban Montessori Charter School.
We are both very familiar with the common “production line” model of education, and recognize that it not only appeals to many (perhaps most) technologists, but in fact is a system that they themselves did well in. But collectively, the two of us have many years of experience that indicates just how badly that approach works for the vast majority of students.
Last year, with funding from the Department of Education’s Institute of Education Sciences, our company, BrainQuake, spent six months designing, testing and developing an adaptive engine to supply players of our launch product, Wuzzit Trouble, with challenges matched to their current ability level. We were delighted when classroom studies conducted by WestEd showed that the adaptive engine worked as intended (i.e., kept students in their zone of proximal development) straight out of the gate.
We developed the game based on a number of key insights accumulated over many years of research by mathematics education professionals that should be applicable to all edtech developers—even those who are not building math tools.
Experience Over Knowledge
First, the most effective way to view K-8 education is not in terms of “content” to be covered, acquired, mastered (and regurgitated in an exam) but as an experience. This is particularly (but not exclusively) true for K-8 mathematics learning. Mathematics is primarily something you do, not something you know.
To be sure, there is quite a lot to know in mathematics—there are facts, rules, and established procedures. But knowing them is not the same as being able to use them. Imagine the skills expected of a physician. None of us, we are sure, would want to be treated by someone who had read all the medical textbooks and passed the written tests but had no experience diagnosing and treating patients. And indeed, no medical school teaches future physicians solely by instruction, as any doctor who has gone through the mandatory, long, grueling internship can attest.
In the case of math, the inappropriateness of the classical, instruction-practice-testing model of education has become particularly apparent as a result of the significant advances made in the very technology field we work in. (Advances we wholeheartedly applaud. Our beef is not with technology—we love algorithms, after all—but with applying it poorly.) In today’s world, all of us carry around in our pockets a device that can execute almost any mathematical procedure, much faster and with greater accuracy than any human. Your smartphone, with its access to the cloud (in particular, Wolfram Alpha), can solve pretty much any university mathematics exam question.
What that device cannot do, however, is take a real-world mathematical problem and solve it. For that, you need the human brain. And the brain has to acquire two things in particular: a rich and powerful set of general metacognitive problem-solving skills, and a more specific ability known as mathematical thinking. (A component of the latter is number sense, a term that crops up a lot in the K-8 math education world, since the development of number sense is the first key step toward mathematical thinking.)
Another key insight that guided the design of our adaptive engine is that the main adaptivity is provided by the user. After all, the human being is the most adaptive cognitive system on the planet! With good product design, it is possible to leverage that adaptivity.
Most “adaptive” math algorithms monitor a student’s progress and select the next problem algorithmically. But it is important that these puzzles allow for a wide range of solutions and a spectrum of “right answers,” leaving the student or teacher in full control of how to move forward and what degree of success to accept. (Such an approach is not possible if the digital learning experiences are of the traditional math problem type, where the problem focuses on one particular formula or method and there is a single answer, with “right” or “wrong” the only possible outcomes.)
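To make the contrast concrete, here is a minimal sketch of the two feedback models. It is entirely hypothetical: the function names, the scoring rule, and the difficulty steps are our own illustration, not a description of any shipping engine, ours included.

```python
def naive_next_difficulty(difficulty, was_correct):
    """Right/wrong model: the algorithm alone decides what comes next,
    ratcheting difficulty up on a correct answer and down on a wrong one."""
    return difficulty + 1 if was_correct else max(1, difficulty - 1)

def spectrum_feedback(moves_used, optimal_moves):
    """Open-ended model: every valid solution counts; the system reports
    a quality score on a spectrum rather than a binary verdict."""
    return optimal_moves / moves_used  # 1.0 means optimal

# A learner solves a puzzle in 6 moves where 3 would suffice.
quality = spectrum_feedback(6, 3)  # 0.5: a valid solution, with room to improve
# In the second model, the student or teacher -- not the algorithm --
# decides whether 0.5 is good enough to advance or worth another attempt.
```

The design difference is the point: in the first function the software owns the decision; in the second, the score is just information, and the human stays in control.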
Indeed, students still need to grasp the basic concepts of arithmetic, understand what the various rules mean, and know when and how the different procedures can be applied. But what they do not need is to be able to execute the various procedures efficiently in a paper-and-pencil fashion on real world data.
Today’s mathematical learning apps can—and should—focus on the valuable 21st-century skills of holistic thinking and creative problem solving. Mastery of specific procedures should be something a student acquires automatically, “along the way,” in the meaningful context of working on a complex performance task—an approach every one of us knows from adult experience actually works.
Breaking the Symbol Barrier
Mastery of symbolic mathematics is a major goal of math education. But as a great deal of research stretching back a quarter of a century has shown, the symbolic representation is the most significant reason why most people have difficulty mastering K-8 math—the all-important “basics.” Almost everyone can achieve a 98 percent success rate at K-8 math if it is presented in a natural-seeming fashion (for example, understanding and perhaps calculating stats at a baseball game), but performance drops to a low 37 percent when the same math problems are expressed in textbook symbolic form.
Well-designed technologies that take advantage of the unique affordances of a computer or tablet can help obliterate this historical impediment to K-8 mathematics proficiency. Students should be able to explore problems on their own until they discover—for themselves—the solution. They don’t require instruction, and they don’t need anyone to evaluate their effort. Students should get instant feedback, not in the form of “right” or “wrong,” but as information about how the outcome differed from what they expected and how they might revise their strategy accordingly.
An analogy we are particularly fond of is with learning to play a piano (or any other musical instrument). You may benefit greatly from a book, a human teacher, or even YouTube videos, but the bulk of the learning comes from sitting down at the keyboard and attempting to play.
What could be a better example of adaptive learning than that? Tune too easy? Try a harder piece. Too difficult? Back off and practice a bit more with easier ones, or break the harder one up into sections and master each one on its own at a slower pace, and then string them all together. The piano is not adapting. Rather, its design as an instrument makes it ideal for the learner to adapt.
A well-designed math tool should be an instrument on which you can learn mathematics, free from the Symbol Barrier. Now imagine we present a student with an orchestra of instruments.
We think this kind of approach is the future of adaptive learning in math and believe we, the edtech community, should choose to go beyond the “low hanging fruit” approaches to adaptive learning that the first movers adopted.