In Part 1, I made the point that consumer-driven and educationally oriented algorithms, for all their mathematical exactness and appearance of objectivity in regression equations, embody value choices: programmers judge some variables to be more important than others. Because programmers, like everyone else, are constrained by space, time, and resources, those value choices become decisions that have consequences for both teachers and students. In this post, I look first at the algorithms used to judge teachers’ effectiveness (or lack of it) and then turn to “personalized learning” algorithms customized for individual students.
Washington, D.C.’s IMPACT program of teacher evaluation
Much has been written about the program that Chancellor Michelle Rhee created during her short tenure (2007-2010) leading the District of Columbia public schools (see here and here). Under Rhee, IMPACT, a new system of teacher evaluation, was put into practice. The system is anchored in the “Teaching and Learning Framework,” which D.C. teachers call the “nine commandments” of good teaching:
1. Lead well-organized, objective-driven lessons.
2. Explain content clearly.
3. Engage students at all learning levels in rigorous work.
4. Provide students with multiple ways to engage with content.
5. Check for student understanding.
6. Respond to student misunderstandings.
7. Develop higher-level understanding through effective questioning.
8. Maximize instructional time.
9. Build a supportive, learning-focused classroom community.
IMPACT uses multiple measures to judge the quality of teaching. At first, 50 percent of an annual evaluation was based on student test scores; 35 percent on judgments of instructional expertise (see the “nine commandments” above) drawn from five classroom observations by the principal and “master educators”; and 15 percent on other measures. Note that policymakers initially plucked these percentages out of thin air. Using these multiple measures, IMPACT awarded 600 teachers (out of 4,000) bonuses ranging from $3,000 to $25,000 and fired nearly 300 teachers judged “ineffective” in its initial years of full operation. For teachers with insufficient student test data, different performance measures were used. Such a new system caused much controversy in and out of the city’s schools (see here and here).
Since then, changes have occurred. In 2012, the portion of a teacher’s evaluation based on student test scores was lowered from 50 percent to 35 percent (why this number? No one says), and the number of classroom observations was reduced. More policy changes have occurred since then (e.g., “master educator” observations have been abolished and principals now do all observations; student surveys of teachers were added). All of these additions and subtractions to IMPACT mean that the algorithms used to judge teachers have had to be tweaked, that is, altered because some variables in the regression equation were deemed more (or less) important than others. These policy changes, of course, are value choices. For a technical report published in 2013 that reviewed IMPACT, see here.
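The arithmetic at stake is simple; the value choices are not. A minimal sketch of how such a composite score works is below. The component scores, the revised weighting, and the function name are illustrative assumptions of mine, not the district’s actual (unpublished) formula; only the 50/35/15 and 35-percent figures come from the reporting above.

```python
# Hypothetical sketch of an IMPACT-style composite score.
# The weights are policy choices, not empirical findings; the
# example teacher and the revised weighting scheme are invented
# for illustration.

def impact_score(test_growth, observation, other, weights):
    """Weighted average of three performance components (each 0-100)."""
    w_test, w_obs, w_other = weights
    assert abs(w_test + w_obs + w_other - 1.0) < 1e-9  # weights must sum to 1
    return w_test * test_growth + w_obs * observation + w_other * other

# The same teacher scored under two weighting schemes:
teacher = dict(test_growth=60.0, observation=85.0, other=90.0)

original = impact_score(**teacher, weights=(0.50, 0.35, 0.15))  # pre-2012 mix
revised = impact_score(**teacher, weights=(0.35, 0.50, 0.15))   # hypothetical post-2012 mix

print(round(original, 2), round(revised, 2))  # prints: 73.25 77.0
```

The same teacher, with the same underlying performance, lands on different sides of a bonus-or-dismissal cutoff depending on which weighting scheme policymakers choose. That is the sense in which the equation encodes values.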
And the content of the algorithms has remained secret. An email exchange in 2010-2011 between the overseer of the algorithm in the D.C. schools and a teacher (who gave her emails to a local blogger) reveals the secrecy surrounding the tinkering with such algorithms (see here). District officials have not yet explained the complex algorithms in plain language to teachers, journalists, or the general public. That value judgments are made time and again in these mathematical equations is clear. So are the judgments embedded in the regression equations used to “personalize learning.”
Personalized Learning algorithms
“The consumerist path of least resistance in America takes you to Amazon for books, Uber for transportation, Starbucks for coffee, and Pandora for songs. Facebook’s ‘Trending’ list shows you the news, while Yelp ratings lead you to a nearby burger. The illusion of choice amid such plenty is easy to sustain, but it’s largely false; you’re being herded by algorithms from purchase to purchase.”
Maria Bustillos, “This Brand Could Be Your Life,” June 28, 2016
Bustillos had no reason to look at “personalized learning” in making her case that consumers are “herded by algorithms from purchase to purchase.” Had she inquired into it, however, she would have seen the quiet work of algorithms constructing “playlists” of lessons for individual students and controlling students’ movement from one online lesson to another, absent any teacher hand-prints on the skills and content being taught. Even though the rhetoric of “personalized learning” mythologizes the instructional materials and learning as student-centered, algorithms (mostly proprietary and unavailable for inspection) written by programmers making choices about what students should learn next are in control. “Personalized learning” is student-centered in its reliance on lessons tailored to ability and performance differences among students. And the work of teachers is student-centered in coaching, instructing, and individualizing their attention as well as monitoring small groups working together. All of that is important, to be sure. But when it comes to making choices based on their own interests and strengths in a subject area, such as math, students have little discretion. Algorithms rule (see here, here, and here).
Deeply embedded in these algorithms are theories of learning that are seldom made explicit. For example, adaptive or “personalized learning” programs are contemporary, high-tech versions of old-style mastery learning. Mastery learning, then and now, is driven by behavioral theories of learning. The savaging of “behaviorism” by cognitive psychologists and other social scientists over the past few decades has clearly given the theory a bad name. Nonetheless, behaviorism and its varied off-shoots drive the contemporary affection for “personalized learning,” just as they did for “mastery learning” a half-century ago (see here and here). I state this as a fact, not a criticism.
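The mastery-learning core of these “playlist” systems can be sketched in a few lines. This is a minimal illustration of the logic described above, not any vendor’s proprietary algorithm; the 0.8 threshold and the lesson names are my assumptions for the example.

```python
# Minimal sketch of mastery-learning progression, the behavioral
# logic behind "personalized learning" playlists. The threshold
# and lesson sequence are illustrative assumptions, not a real
# vendor's (proprietary) algorithm.

MASTERY_THRESHOLD = 0.8  # a programmer's value choice, not a law of learning

def next_lesson(playlist, scores):
    """Return the first lesson the student has not yet mastered.

    playlist -- ordered lesson ids, fixed by the program's authors
    scores   -- {lesson_id: fraction correct on that lesson's quiz}
    """
    for lesson in playlist:
        if scores.get(lesson, 0.0) < MASTERY_THRESHOLD:
            return lesson  # hold the student here for remediation
    return None            # playlist complete

playlist = ["fractions-1", "fractions-2", "decimals-1"]
scores = {"fractions-1": 0.9, "fractions-2": 0.6}
print(next_lesson(playlist, scores))  # prints: fractions-2
```

Note where the student’s “choice” actually lives: the sequence of lessons and the threshold for moving on are fixed in advance by the program’s authors. The student controls, at most, the pace through a predetermined path, which is exactly the mastery-learning design of a half-century ago.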
With advances in compiling and analyzing masses of data on powerful computers, the age of the algorithm is here. These rules govern the choices we make as consumers and, as this post argues, the evaluation of teachers and the “personalizing” of learning.