Every single federal, state, and district policy decision aimed at improving student academic performance has a set of taken-for-granted assumptions that link the adopted policy to classroom lessons.
From widespread adoption of Common Core standards, to the feds funding “Race to the Top” to get states to adopt charters and pay-for-performance schemes, to a local school board and superintendent deciding to give tablets to each teacher and student, these policies contain crucial assumptions, not facts, about the outcomes that will supposedly occur once those new policies enter classrooms.
And one of those key assumptions is that new policies aimed at the classroom will get teachers to change how they teach for the better. Or else why go through the elaborate process of shaping, adopting, and funding a policy? Unfortunately, serious questions are seldom asked about these assumptions before or after super-hyped policies are adopted, money is allocated, expectations are raised, and materials (or machines) enter classrooms.
Consider a few simple questions that, too often, go unasked of policies heralded as cure-alls for the ills of low-performing U.S. schools and urban dropout factories:
1. Did policies aimed at improving student achievement (e.g., Common Core standards, turning around failing schools, pay-for-performance plans, and expanded parental choice of schools) get fully implemented?
2. When implemented fully, did they change the content and practice of teaching?
3. Did changed classroom practices account for what students learned?
4. Did what students learn meet the goals set by policy makers?
These straightforward questions about reform-driven policies inspect the chain of policy-to-practice assumptions that federal, state, and local decision-makers take for granted when adopting their pet policies. They trace the path from policy talk (e.g., “charter schools outstrip regular schools,” “online instruction will disrupt bricks-and-mortar schools”) to policy action (e.g., actual adoption of policies aimed at changing teaching and learning) to classroom practice (e.g., how teachers actually teach every day as a result of new policies) and to student learning (e.g., what students have actually learned from teachers who teach differently as a result of adopted policies).
Let’s apply these simple (but not simple-minded) questions to a current favorite policy of local, state, and federal policymakers: buy and deploy tablets for every teacher and student in the schools.
1. Did policies aimed at improving student achievement get fully implemented?
For schools from Auburn (ME) to Chicago to the Los Angeles Unified School District, the answer is “yes” and “no.” The “yes” refers to the actual deployment of devices to children and teachers but, as anyone who has spent a day in a school observing classrooms knows, access to machines does not mean daily or even weekly use. In Auburn (ME), iPads for kindergartners were fully implemented. Not so in either Chicago or LAUSD.
2. When implemented fully, did they change the content and practice of teaching?
For Auburn (ME), LAUSD, and all districts in between those east and west coast locations, the answer is (and has been for decades): we do not know. Informed guesses abound, but hard evidence taken from actual classrooms is scarce. Classroom research on actual teaching practices before and after a policy aimed at teachers and students is adopted and implemented remains one of the least studied areas. To what degree teachers have altered how they teach daily as a result of new devices and software remains unanswered in most districts.
3. Did changed classroom practices account for what students learned?
The short answer is that no one knows. Consider distributing tablets to teachers and students. Sure, there are success stories that pro-technology advocates beat the drums for, and, sure, there are disasters that anti-tech educators love to recount in gruesome detail. But beyond feel-good and feel-bad stories yawns an enormous gap in classroom evidence of “changed classroom practice,” “what students learned,” and why.
What makes it hard to know whether teachers using devices and software actually changed their lessons, or whether test score gains can be attributed to the tablets, is the fact that where such results occur, those schools have engaged in long-term efforts to improve, say, literacy and math (see here and here). Well before tablets, laptops, and desktops were deployed, serious curricular and instructional reforms with heavy teacher involvement had occurred.
4. Did what students learn meet the goals set by policy makers?
Determining what students learned, of course, is easier said than done. With the three-decade-long concentration on standardized tests, “learning” has been squished into students answering selected multiple-choice questions and occasionally writing short essays. And when test scores rise, there is great debate over which factor accounts for the gains (e.g., teachers, curricula, high-tech devices and software, family background, or add your favorite factor here). Here, again, policymaker assumptions about what exactly improves teaching and what gets students to learn more, faster, and better come into play.
Take-away for readers: Ask the right (and hard) questions about unspoken assumptions built into a policy aimed at changing how teachers teach and how students learn.
Reblogged this on David R. Taylor-Thoughts on Texas Education.
Thanks for re-blogging the post, David.
Reblogged this on The Echo Chamber.
Again, thank you for re-blogging the post on “Asking the Right Questions.”
Hi Larry,
I saw this article and thought about your thinking on some of these questions regarding the use of technology in classrooms and this piece on asking the right questions. It is a recent article about Rocketship Education: http://www.mercurynews.com/education/ci_26055309/rocketship-education-sputters-expansion-classroom
Tina Cheuk
Thanks, Tina, for linking to the San Jose Mercury News piece on Rocketship schools. I had not seen it.
Dr. Cuban, the ship has sailed on asking questions like this about 1:1 projects. Technologies, like tablets or smartphones or netbooks or whatever, have become so embedded in daily life for an increasing percentage of society that completing any information-related task or personal learning effort relies upon these “external brains” for a growing segment of parents, teachers, and, especially, students.
We don’t need to question the if or why of these devices that provide ubiquitous access to information – written, visual, and human – any more than we question whether the old technologies of paper, pen, and print are effective teaching tools. They are simply what our society uses to record, access, manipulate, and communicate information. And it is the application of the tool, not any inherent value of the tool, that should be assessed. (Which tool makes a student a better writer: a number 2 pencil or a ball point pen?)
And as the cost of these devices drops, Internet access becomes more widespread, and applications become more powerful and less complicated to use, the remarkability of personal technologies in the classroom will decrease, and we can get back to measuring the efficacy of pedagogy rather than silicon.
You might be right, Doug. You make fine points. Here comes the “however.” However, the mindless (and even mindful) purchase and deployment of hardware and software still need to be evaluated by these or other questions since wrapped up in these new technologies is the chimera of students learning more, faster, and better simply by having access to the devices and software.
Larry:
I’ve noticed that many parallels exist between corporate America’s litany of management fads over the past three-plus decades and public education’s policy-driven teaching methods and resources over a similar timeframe.
As with management fads, where a top-down approach is taken to optimize the performance of a product, service, organization, or the interactions thereof, educational fads too are mostly mandated from above with the expectation that educational outcomes will improve for the targeted population(s) as the student, teacher, administrator, or system itself adopts the fad, with little room for *adapting* the fad to suit the unique needs of the individual classroom or school. Failure abounds in both arenas, for rarely are systemic issues remedied so simply; simple in the sense of declaring a mandate versus its implementation, which, too often, in industry or education, is woefully underestimated.
Perhaps it is our nation’s “can do” heritage that undermines success, as we overextend the possible success of individual effort to that of entire organizations and systems? Or is it, as I intimate above, the inability of the missive to be adapted to fit the unique circumstances of the lowest-level unit of analysis that impedes success?
The comparisons you make between businesses (large and medium size) and education policy are on the money, in my opinion, Dave. Thanks for taking the time to do so.
Pingback: Brothers from Another Mother: Management Fads and Educational Policies | Reflections of a Second-career Math Teacher
Thanks for the comment, Dave.