“The consumerist path of least resistance in America takes you to Amazon for books, Uber for transportation, Starbucks for coffee, and Pandora for songs. Facebook’s ‘Trending’ list shows you the news, while Yelp ratings lead you to a nearby burger. The illusion of choice amid such plenty is easy to sustain, but it’s largely false; you’re being herded by algorithms from purchase to purchase.”
Maria Bustillos, This Brand Could Be Your Life, June 28, 2016
I wish I had written that paragraph. It captures a defining feature not only of our consumerist-driven society but also of recent school reform (e.g., the growth of charter schools and expanded parental choice). I would also include the media hype and techno-enthusiasm for “personalized learning.” The centerpiece of any form of “personalized learning” (or “adaptive learning”) is algorithms for tailoring lessons to individual students (see here, here, and here). What Bustillos omits in the above article about the dominance of consumerism driven by algorithms is that the regression equations embedded in algorithms make predictions based on data. Programmers decide how much weight to put on particular variables in those equations. Such decisions are subjective; they contain value judgments about the independent and dependent variables and their relationship to one another. The numbers hide the subjectivity within these equations.
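To make that point concrete, here is a toy sketch of my own devising–not any vendor’s actual model–of how programmer-chosen weights shape a prediction about a student. The variables and the weights are invented purely for illustration; picking them is exactly the value judgment described above.

```python
# Toy sketch of a weighted prediction -- not any real product's model.
# The weights are invented; choosing them is the subjective judgment
# hidden inside the numbers.
def predict_readiness(quiz_avg, minutes_on_task, hints_used):
    # A designer who prizes mastery weights quiz_avg heavily; one who
    # prizes speed might shift weight toward time on task instead.
    w_quiz, w_time, w_hints = 0.7, 0.2, -0.1   # subjective choices
    return (w_quiz * quiz_avg
            + w_time * (minutes_on_task / 60)
            + w_hints * hints_used)

predict_readiness(quiz_avg=0.8, minutes_on_task=45, hints_used=2)  # about 0.51
```

Change any one weight and the same student data yields a different prediction, and hence a different lesson recommendation. The data did not change; the values did.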
Just as Facebook’s designers altered its algorithm “to put a higher priority on content shared by friends and family,” tailoring news to each Facebook consumer, software engineers create different versions of “personalized learning” and insert value judgments into the complicated regression equations they have written for online lessons. These equations are anchored in the data students produce in answering questions in previous lessons. The algorithms then predict (not wholly, since engineers and educators do tweak–“massage” is a favored word–the equations) what students should study and absorb in individualized, daily, online software lessons (see here).
Such “personalized” lessons alter the role of the teacher for the better, according to promoters of the trend. Instead of covering content and directly teaching skills, teachers can have students work online, freeing themselves to coach and to give individual attention both to students who move ahead of their classmates and to those who struggle.
Critics, however, see the spread of online, algorithm-based lessons as converting teaching into directing students to focus on screens and automated lessons, thereby shrinking the all-important teacher-student relationship, the foundation for social, moral, and cognitive learning in public schools. Not so, aver advocates of “personalized learning.” There might be fewer certified teachers in schools committed to lessons geared to individual students (e.g., Rocketship), but teachers will continue to perform as mentors, role models, coaches, and advisers, not as mere purveyors of content and skills.
As in other policy discussions, the slippage into either/or dichotomies beckons. The issue is not whether or not to use algorithms, since each of us uses algorithmic thinking daily. Based on years of experiential data, we have compiled in our heads (without regression equations) step-by-step routines just to get through the day (e.g., which of the usual routes to work should I take; how best to get the class’s attention at the beginning of a lesson). Beyond our experiences, however, we depend on mathematical algorithms embedded in the chips that power our Internet searches, control portions of the cars we drive, and operate home appliances.
The issue is not that algorithms are value-free (they are not) or data-rich (they are). The issue is whether practitioners and parents–consumers of fresh out-of-the-box products–come to depend automatically on carefully constructed algorithms that contain software designers’ value judgments, displayed in flow charts and written into code for materials and lessons students will use tomorrow. Creators of algorithms (including ourselves) juggle competing values (e.g., a favorite theory of learning, student-centered instruction, small-group collaboration, correctness of information, increasing productivity while decreasing cost, ease of implementation) and choose among them in constructing their equations. They judge what is important and select among those values, since time, space, and other resources are limited in creating the “best” or “good enough” equation for a given task. Software designers choose to give more weight to some variables than to others–see the Facebook decision above. Rich, profuse data, then, never speaks for itself. Look for the values embedded in the algorithmic equations. Such simple facts are too often brushed aside.
What are algorithms?
Wikipedia’s definition of an algorithm is straightforward: a sequence of steps taken to solve a problem and complete a task. Some images make the point for simple algorithms.
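For readers who prefer code to images, here is a minimal example of an algorithm in that sense: a short sequence of steps for finding the largest number in a list. The language (Python) is incidental; the point is the step-by-step recipe.

```python
# A simple algorithm in the Wikipedia sense: a finite sequence of
# steps that solves a problem -- here, finding the largest number.
def largest(numbers):
    biggest = numbers[0]       # step 1: start with the first number
    for n in numbers[1:]:      # step 2: look at each remaining number
        if n > biggest:        # step 3: keep it if it is bigger
            biggest = n
    return biggest             # step 4: report the answer

largest([3, 9, 4, 1])  # 9
```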
Or if you want a Khan Academy video explaining algorithms, see here.
Most algorithms are hardly simple, however. Amazon’s proprietary algorithms for searches and book popularity, for example, are unavailable to the public yet are heavily leaned upon by advertisers, authors, and consumers (e.g., Amazon’s algorithmic feature that appears on your screen: “customers who viewed this also viewed….”). Among school reformers interested in evaluating teachers on the basis of students’ test scores, algorithms and their complex regression equations have meant the difference between getting a bonus and getting fired, for example, in Washington, D.C. And for those “personalized learning” advocates eager to advance student-centered classrooms, algorithms contain theories of action about what-causes-what that tilt toward one way of learning. In short, software designers’ value judgments matter as to what pops out at the other end of the equation and is then used to make an evaluative judgment and an instructional decision.
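Amazon’s real recommendation machinery is proprietary and far more elaborate, but the general idea behind “customers who viewed this also viewed…” can be sketched in a few lines: count how often other items appear in the viewing histories that include a given item. Everything below–the function name, the data, the choice to rank by raw co-occurrence counts–is my invented illustration, not Amazon’s method.

```python
# Toy sketch of a co-occurrence recommender -- the principle behind
# "customers who viewed this also viewed...", not Amazon's algorithm.
from collections import Counter

def also_viewed(item, histories, top_n=2):
    counts = Counter()
    for history in histories:          # each customer's viewing history
        if item in history:            # only histories containing the item
            counts.update(h for h in history if h != item)
    return [i for i, _ in counts.most_common(top_n)]

histories = [["book_a", "book_b", "book_c"],
             ["book_a", "book_b"],
             ["book_b", "book_c"]]
also_viewed("book_a", histories)  # ["book_b", "book_c"]
```

Even here a value judgment hides in plain sight: ranking by raw counts favors already-popular items. A designer could just as easily have normalized the counts and surfaced niche titles instead.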
Part 2 will look at values in algorithms that evaluate teachers and customize learning.