The Don’t Do It Depository (Morgan Polikoff)

“Morgan Polikoff is an Associate Professor of Education at the USC Rossier School of Education. He researches the design, implementation, and effects of standards, assessment, and accountability policies. His current research is focused on teachers’, schools’, and districts’ implementation of new college and career-readiness standards, including the Common Core. His research has been supported by the National Science Foundation, Institute of Education Sciences, and WT Grant Foundation, among other sources.”

This post appeared on the FutureEd blog July 24, 2017


We have known for quite a while that schools engage in all manner of tricks to improve their performance under accountability systems. These behaviors range from the innocuous—teaching the content in state standards—to the likely harmful—outright cheating.

A new study last week provided more evidence of the unintended consequences of another gaming behavior—reassigning teachers based on perceived effectiveness. Researchers Jason A. Grissom, Demetra Kalogrides and Susanna Loeb analyzed data from a large urban district and found that administrators moved the most effective teachers to the tested grades (3-6) and the least effective to the untested grades (K-2).

On the surface, this might seem like a strategy that would boost accountability ratings without affecting students’ overall performance. After all, if you lose 10 points in kindergarten but gain 10 in third grade, isn’t the net change zero?

In fact, the authors found that moving the least effective teachers to the earlier grades harmed students’ overall achievement, because those early grades simply matter more to students’ long-term trajectories. The schools’ gaming behaviors were having real, negative consequences for children.

This strategy should go down in the annals of what doesn’t work, a category that we simply don’t pay enough attention to. Over the past 15 years, there has been a concerted effort in education research to find out “what works” and to share these policies and practices with schools.

The best example of this is the push for rigorous evidence in education research through the Institute of Education Sciences and the What Works Clearinghouse. This may well be a productive strategy, but the WWC is chock full of programs that don’t seem to “work,” at least according to its own evidence standards, and I don’t think anyone believes the WWC has had its desired impact. (The former director of IES himself has joked that it might more properly be called the What Doesn’t Work Clearinghouse).

These two facts together led me to half-joke on Twitter that maybe states or the feds should change their approach toward evidence. Rather than (or in addition to) encouraging schools and districts to do good things, they should start discouraging them from doing things we know or believe to be harmful.

This could be called something like the “Don’t Do It Depository” or the “Bad Idea Warehouse” (marketing experts, help me out). Humor aside, I think there is some merit to this idea. Here, then, are a couple of the policies or practices that might be included in the first round of the Don’t Do It Depository.

The counterproductive practice of assigning top teachers to tested grades is certainly a good candidate. While we’re at it, we might also discourage schools from shuffling teachers across grades for other reasons, as recent research finds this common practice is quite harmful to student learning.

Another common school practice, particularly in response to accountability, is to explicitly prepare students for state tests. Of course, test preparation can range from teaching the content likely to be tested all the way to teaching explicit test-taking strategies (e.g., write longer essays because those get you more points). Obviously the latter is not going to improve students’ actual learning, but the former might. In any case, test preparation seems to be quite common, but there’s less evidence than you might think that it actually helps. For instance:

  • A study of the ACT (which is administered statewide) in Illinois found test strategies and item practice did not improve student performance, but coursework did.
  • An earlier study in Illinois found that students exposed to more authentic intellectual work saw greater gains on the standardized tests than those not exposed to this content.
  • In the Measures of Effective Teaching Project, students were surveyed about many dimensions of the instruction they received and these were correlated with their teachers’ value-added estimates. Survey items focusing on test preparation activities were much more weakly related to student achievement gains than items focusing on instructional quality.
  • Research doesn’t even indicate that direct test preparation strategies such as those for the ACT or SAT are particularly effective, with actual student gains far lower than advertised by the test preparation companies.

In short, there’s really not great evidence that test preparation works. In light of this evidence, perhaps states or the feds could offer guidance on what kind of and how much test preparation is appropriate and discourage the rest.

Other activities or beliefs that should be discouraged include “learning styles,” the belief that individuals have preferred ways of learning such as visual vs. auditory. The American Psychological Association has put out a brief explainer debunking the existence of learning styles. Similarly, students are not digital natives, nor can they multitask, nor should they guide their own learning.

There are many great lists of bad practices that already exist; states or the feds should simply repackage them to make them shorter, clearer, and more actionable. They should also work with experts in conceptual change, given that these briefs will be directly refuting many strongly held beliefs.

Do I think this strategy would convince every school leader to stop doing counterproductive things? Certainly I do not. But this strategy, if well executed, could probably effect meaningful change in some schools, and that would be a real win for children at very little cost.



13 responses to “The Don’t Do It Depository (Morgan Polikoff)”

  1. Interesting thoughts. I wonder if reducing accountability and increasing professional trust might thwart some of the issues mentioned?

  2. Laura H. Chapman

    Another thing to notice, in the MET project participating teachers and students were enrolled in math and English language arts (ELA) in grades 4 through 8, algebra I at the high school level, biology (or its equivalent) at the high school level, and English in grade 9.
    The MET project, like so much research in education, reflects a systematic neglect of almost all content not easily tested and the belief that test scores in select subjects should determine “best practices” for all. Note also that the MET project results were sent into the world without the benefit of any peer review, but plenty of publicity. The major researchers were economists.
    The initial student survey, a project led by economist Ron Ferguson, was over the top in soliciting from students information about their time and space for doing homework at home, whether parents or others helped with homework or checked homework, and other questions not relevant to classroom instruction. I could not find any conclusions that looked at the relationship of those items to the judgments of teachers or how the time taken to answer them may have influenced how students responded to the rest of the survey.
    In my judgment, the underlying constructs in the student survey were also biased to favor conventional sage-on-the-stage teaching with homework assignments and in class review of homework treated as if essential.
    Overall, the MET project promoted student surveys as if they were sound measures of effective teaching, not a method riddled with fundamental problems for judging teachers. Surveys are proliferating, and their use strikes me as an integral part of the “customer satisfaction” mind-set overtaking other criteria for judging good teaching and the value of education.

    I have been getting regular alerts from IES and the “What Works” reports. This arm of IES is intended to serve as a marketing tool for programs and “interventions.” There has never been much there there and so much is about math, or ELA, or managerial moves. I find this true of much of the research in education. As I recall, The WW Clearing House was spawned by a battle between qualitative and mixed methods researchers and researchers committed to variants of a medical model for judging educational research, gold standard as random assignment to treatments, and all the rest.

    In any case, I love the idea of a dumpster collection service for what clearly does not work and also some modesty about the idea of scaling up anything in education that seems to “work” as if the context of schooling is nearly steady state or so malleable that anything can be put into play without some unintended consequences.

  3. Larry

    Since tests of the sort considered scientific are amazingly predictive of test-takers’ family wealth, even calling them measures of achievement or “what works” is inaccurate and assures the continuation of the status quo “by design.”

    Yes, there is “measurement error,” which the rich can use money and status to compensate for, and which gives an occasional poor or Black student a shot at success.


  4. Sarah

    Thanks very much. I am glad to see such things as learning styles, digital natives and self-directed learning get some questioning as they have always struck me as a little like “fuzzy math”–confusing, not explanatory, and without scientific basis. The invention of computers does not equate to an evolution in the human brain so that it suddenly has all these new ways of learning. At least I’ve not heard of any such evolution.

    • larrycuban

      Thanks for taking the time to comment, Sarah.

    • James

      I teach at a two year college in Texas and I find my students the opposite of digital natives. Most of them do not appear to have much of a clue of how to learn from things like the internet, which most of them approach as a tool to confirm their already held views. Try to get them to find credible internet sources for writing a history research paper and prepare to be disappointed.

  5. Laura H. Chapman

    Just up
    USDE is inviting comments now on proposed refinements of the term “evidence-based” in issuing grants and reviewing other proposals. Here is a section of the proposed changes, pertaining to the What Works Clearinghouse. Comments close August 30.

    (9) Replace the term “What Works Clearinghouse Evidence Standards” with the term “What Works Clearinghouse Handbook,” to clarify that the Handbook’s procedures—not just standards—are relevant to evidence determinations, consistent with current practice. We also incorporate this Handbook, which provides a detailed description of the standards and procedures of the WWC, by reference. The WWC is an initiative of the U.S. Department of Education’s National Center for Education Evaluation and Regional Assistance, within the Institute of Education Sciences (IES), which was established under the Education Sciences Reform Act of 2002. The WWC is an important part of IES’s strategy to use rigorous and relevant research, evaluation, and statistics to inform decisions in the field of education. The WWC provides critical assessments of scientific evidence on the effectiveness of education programs, policies, products, and practices (referred to as “interventions”) and a range of publications and tools summarizing this evidence. The WWC meets the need for credible, succinct information by reviewing research studies; assessing the quality of the research; summarizing the evidence of the effectiveness of programs, policies, products, and practices on student outcomes and other outcomes related to education; and disseminating its findings broadly. This Handbook is available to interested parties at the Web site address included in the regulation (

    Much more at

  6. I passed this article on to my K-12 teachers. Great responses from the staff.
