Donors Reform Schooling: Evaluating Teachers (Part 1)

The Bill and Melinda Gates Foundation’s Intensive Partnerships for Effective Teaching (IPET) aimed to identify effective teachers whose practices raised students’ test scores, give minority and poor students access to such teachers, and create new ways of evaluating teachers beyond those already in use. I offer this example to illustrate how policy elites (top public education officials, civic and business leaders including donors) influence every step of the policy process: framing the problems to be solved, proposing top-down solutions, relying (or not relying) on research, and making requisite policy changes that ripple through the entire decentralized system of U.S. schooling.

Intensive Partnerships for Effective Teaching

Between 2009 and 2016, three school districts (Hillsborough County, Florida; Memphis City Schools, Tennessee; Pittsburgh Public Schools, Pennsylvania) and four California-based charter networks (Alliance College-Ready Public Schools, Aspire, Green Dot, and Partnerships to Uplift Communities) spent over a half-billion dollars, of which the Bill and Melinda Gates Foundation contributed $213 million, creating IPET policies that would identify, recruit, train, and evaluate effective teachers while giving low-income minority children and youth access to those effective teachers. Giving children and youth heretofore excluded from the best teachers access to them would offer equal opportunity, one goal of the project. [i]

Teachers would learn how to do peer evaluations, collaborate with other teachers, and receive professional development and cash bonuses if their students scored well on tests. Finally, the project would determine whether student test scores, graduation rates, and college attendance improved as a result of these policies.

Money would go to those teachers who met the criteria (i.e., student test score gains, highest ratings from peer and supervisor observations). Such money-loaded programs spur many individual teachers to seek the highest ratings from evaluators. That such programs also encourage collaboration through peer evaluation, that is, teachers learning together how to judge fellow teachers while giving every teacher the chance to participate, reveals the values embedded in the process of determining “successful” teachers. [ii]

These years brought together national policymakers and donors to push ahead on programs that policy elites had determined were the best levers for improving the performance of U.S. public schools. Even before IPET began, the Gates Foundation had funded research to identify valid measures of effective teaching, which were then incorporated into proposed policies that participating districts and charter networks could put into practice.

In the first decade of the 21st century, then, several forces converged: the largest U.S. foundation was investing in both research on effective teaching and the Common Core curriculum standards; President Obama’s competitive Race to the Top initiative, designed and shepherded by U.S. Secretary of Education Arne Duncan, was underway; and the policy elite was passionate about holding teachers accountable by evaluating them with test scores. The dollar-infused IPET partnership of districts and charter networks, fueled by sponsored research into effective teaching, was a top-down initiative that national and state policymakers enthusiastically endorsed. To keep tabs on this massive effort, the Gates Foundation funded the RAND Corporation to independently evaluate the reform.

In short, then, reigning educational policy elites embraced and enacted targeted teacher accountability as the lever for lifting public schools out of the morass of mediocrity. Part 2 looks at what happened to this initiative when it was implemented in schools.

______________________________________

[i] Brian Stecher et al., “Intensive Partnerships for Effective Teaching Enhanced How Teachers Are Evaluated But Had Little Effect on Student Outcomes,” Santa Monica, CA: RAND Corporation, 2018, p. 3; Matt Barnum, Chalkbeat

[ii] Brian Stecher et al., “Intensive Partnerships for Effective Teaching Enhanced How Teachers Are Evaluated But Had Little Effect on Student Outcomes,” Santa Monica, CA: RAND Corporation, 2018


8 responses to “Donors Reform Schooling: Evaluating Teachers (Part 1)”

  1. David F

    Hi Larry, I think I’ve said this before here, but for the sake of thoroughness I’d point out that the method of evaluating teachers in almost all of the schools was the Danielson Framework. The question that ought to be asked is whether this flawed system (premised on constructivist pedagogy and emphasizing engagement over everything else) was why the program failed.

    • larrycuban

      I do know, David, that the Danielson Framework was used for the Measures of Effective Teaching (MET) project funded by Gates. What I do not know for sure is whether the three districts and four charter school networks in IPET used the Framework in identifying and evaluating teachers. If you do know, please send along the info and source. Thanks for the comment. And, yes, the Danielson Framework is anchored in constructivist concepts and practices.

      • David F

        Hi Larry, it’s in the RAND report. On page 73, the report’s authors note: “Although sites designed and implemented their observation systems differently, most used Danielson’s FFT as a starting point. All but one of the sites developed rubrics based on the Danielson framework, which meant that these sites emphasized a constructivist approach to pedagogy that involves high levels of student engagement and communication (Danielson Group, undated).”

      • larrycuban

        Thank you very much, David, for confirming what you said and the source. I appreciated that.

  2. Good discussion. My recollection was that the early MET work tested 5 different rubrics, including Danielson. There was a paper that showed the correlation to VAM for each of those 5. Why there was so much use of Danielson in the next stage, I’m not sure.

  3. larrycuban

    Chris Thorn wrote:
    I was one of a team providing technical assistance to Teacher Incentive Fund grant awardees and was often engaging with the MET team at meetings. There was a study done to compare Danielson, CLASS, etc. but participating districts were allowed to select which observational framework to use. Many had already been using Danielson.
    There was a more in-depth study of reliability done by Kane and Ho. http://k12education.gatesfoundation.org/resource/the-reliability-of-classroom-observations-by-school-personnel/
    It laid out how different observational approaches affected the reliability of the scoring.
