Program Success or Failure? A Note from the Past

Here is a story about a program I taught in and directed over 50 years ago. What I experienced raised puzzling questions about what constitutes program success and failure, questions I thought I could answer then but cannot answer now.

In the mid-1960s, I taught in and later directed a federally-funded teacher training program located in Grimke Elementary School, Banneker and Garnet-Patterson Junior High Schools, and Cardozo High School in Washington, D.C. The Cardozo Project in Urban Teaching, as it was then called, prepared returned Peace Corps Volunteers to teach in urban schools. The “interns,” as they were called, taught for half-days under the supervision of master teachers, took university seminars on-site after school, and in late afternoons and evenings developed curriculum materials and worked in the community. At the end of the year the interns were certified to teach in the District of Columbia and were on their way to earning a master’s degree in their field through two local universities (see here and here).

An independent evaluation confirmed that 61 interns had entered the training between 1963 and 1967. Of the 56 who had finished (two had died and three had left the program to raise families), 42 (or 75 percent) were teaching in urban schools, other federally-funded programs, or overseas–one goal of the program. The on-site training, the supervision by D.C. teachers, and the after-school seminars seemed to be a fruitful mix for channeling trained rookies into the system. The evaluation and praise for the program led the D.C. School Board to fund and rechristen the program as the Urban Teacher Corps–another goal of the pilot program.

Getting a school board to use its limited monies to continue a federally-funded pilot program meant that school officials saw its worth in meeting District of Columbia goals. That is a mark of success in any playbook of school reform then and now.

Consider further that the program’s model of training new teachers became the poster child for a federal initiative to train teachers nationally for high-poverty urban and rural schools. The National Teacher Corps legislation (1966) adopted the model used at Cardozo, Banneker, and Grimke for training teachers on-site, but rather than fund districts, federal officials funneled monies to universities that took responsibility for awarding degrees (see here and here).

Surely, the pilot program achieved two of its goals: three of four interns became full-time teachers after completing the program. And the program was adopted by an urban district using locally budgeted funds. Accomplishing both goals suggests program effectiveness, a sign of clear success. That the pilot program became the model for a National Teacher Corps further cements the sweet smell of success.

There is a “however” to this seeming success story that needs to be mentioned. After the Urban Teacher Corps became part of the D.C. schools, the Board of Education fired its superintendent and in 1970 appointed Hugh Scott its first black superintendent. During Scott’s brief administration (he resigned in 1973), he dismantled the Urban Teacher Corps. Together the federally-funded pilot program and locally-funded UTC existed for just under a decade.

Similarly, with the election of Ronald Reagan in 1980, the National Teacher Corps disappeared as federal monies for education went to the states in block grants. The NTC had lasted just over a decade.

The disappearance of both teacher-training programs within ten years suggests failure even though they could be fairly characterized as successes in achieving their primary goals. Success or failure? If it must be one or the other, then a program that succeeded in recruiting and training new teachers for a district and then disappeared raises the question of how one defines program success. Achieving goals? Longevity? Or are there gray areas in defining success that seldom get attention?

I want to add one other piece to the puzzle and finish the story.

On the cusp of leaving the Cardozo Project in Urban Teaching, I asked a professor at the University of Maryland to find out whether the students in classes of elementary and secondary school interns achieved less, about the same, or more than students who had non-intern teachers. While raising student achievement was not an explicit goal for the Cardozo Project in Urban Teaching or the Urban Teacher Corps, there was an assumption (which I shared) that well-trained teachers would eventually lead to better teaching and better teaching would lead to higher student achievement.

The professor designed a study where students in classes taught by interns were matched with students taught by regular D.C. teachers. With no district-wide standardized test available, the professor used the Iowa Test of Basic Skills as the outcome measure of teaching in reading, math, and other skills. About a year later, the professor called me up and said he had the results. We met for coffee and he showed me what he had found.

In both elementary and secondary classrooms, students taught by regular teachers did marginally better than those taught by interns. While the percentile scores in both sets of classes were fairly low compared to the national average, I was still shocked. I had believed that the teacher-training program I taught in and eventually directed was so strong that even in one year with interns, D.C. students would do well academically. I was wrong.

Although standardized testing was becoming common by 1968 as a consequence of the Coleman Report (1966) and the federal Elementary and Secondary Education Act (1965), which required outcome measures to hold districts accountable for the federal dollars they received, I knew little then about the design and methodology the professor had used to evaluate student achievement.

Of course, I now realize that the evaluation design had flaws: it was not a random sample of students or interns, the test questions covered content and skills that students had not yet reached in the D.C. curriculum, and only one year was covered. Still, I was shaken by the results.

So I come to the end of my story and the puzzle of defining success that still has its hooks in me. Here is an example of a pilot program that initially appeared to be a success in achieving its primary goals. The Cardozo Project in Urban Teaching, later rechristened by the D.C. administration as the Urban Teacher Corps, had the distinct smell of success. Adding to the fragrance was the founding of the National Teacher Corps. Yet within a decade these teacher-training programs disappeared.

And, finally, as an afterthought, I discovered that students achieved less well in classes taught by interns than did students in regular classrooms. Even though raising student achievement was not one of the goals of these programs, the results turned my assumptions inside out. Reform outcomes are seldom tidy.

Why this story? It is part of a puzzle that policymakers, administrators, practitioners, and political leaders still cannot resolve when it comes to determining success or failure of a particular reform. And it is one that I continue to work on.


9 responses to “Program Success or Failure? A Note from the Past”

  1. Laura H. Chapman

    Here is a brief narrative that seems to be from a similar time frame. When I was a young professor in art education, the Arts and Humanities “branch” of the US Office of Education was in the process of shaping its identity and programming. I participated in several post-Sputnik, Brunerian-inspired projects that became known as “discipline-based art education,” or DBAE. Federally funded programs of that era (1960s), all with a focus on versions of DBAE, were located at Ohio State, the University of Illinois, Indiana University, and Penn State.

    One of the federal grant programs was known as the Experienced Teacher Fellowship. This competitive program was designed to enhance the knowledge of experienced art teachers in the “disciplines” of art history, art criticism, aesthetics, and studio practice. Applicants had to propose a project for their district or school and send along a signed letter of endorsement from the principal or lead administrator on behalf of the applicant and the applicant’s proposed project.

    If all of the required coursework was completed and the student was in good standing, a master’s degree would be awarded. In addition to taking coursework with faculty in art history, art criticism, aesthetics, and studio practice, these art teachers were engaged in studies that took them into discussions of the aims of education in art, concepts that could and should inform their work on curriculum, and evaluation strategies.

    A thesis project was required, and it was supposed to be tailored to the teaching assignment they had held prior to the fellowship and were expected to return to.

    Students came from very different teaching environments, ranging from the private school in Hawaii that Barack Obama attended to rural districts and inner-city schools. I cannot recall any formal evaluation of the program other than a brief follow-up report from each participant and an apparent approval that led to renewal of those federal grants for several years, as well as funding for a version for “inexperienced teachers.”

    The heyday of federal funding for projects specific to education in the arts coincided with the post-Sputnik decades; the start-up programs of the National Endowment for the Arts and the National Endowment for the Humanities beginning in 1965; and the creation of regional educational R&D centers, with competition among these for long-term work in the visual arts. Two R&D centers survived the competition and continued work, primarily in curriculum development, until the mid-1970s.

    Federal support for programs exclusively for the visual arts ended in 1966, at which time political maneuvering by Washington officials established a rule: “all of the arts or none.” I am not aware of a parallel edict that federal funding for science education had to address “all of the sciences or none.” There are many stories from the era. One of the most memorable is woven around the fate of Bruner’s own curriculum, intended to demonstrate the principles for “reform” articulated in The Process of Education. Some may recall the project, “Man: A Course of Study.”

    • larrycuban

      The story of the federally-funded Experienced Teacher Fellowship in the arts sounds so familiar to the many science, social studies, and math summer and year-long programs of the same era funded by the feds. I gather you had one of them. How would you characterize the ETF as a program in the years that it existed? Success? Failure? Or is there another word for a program aimed at improving teacher expertise? As for “Man: A Course of Study,” I am very familiar with it. As you know, political flak from conservatives killed funding for it. Thanks for describing the ETF, Laura.

  2. mstegeorge

    Posts like this and the interesting comments like Laura Chapman’s are why I read this blog. This history discussion and documentation seems so important. Thank you.

    But I want to ask, what was the racial breakdown in these programs? Were they largely White interns teaching Black children? Do you think such considerations are at all relevant to successful programs being continued or abandoned? Or is it just the tendency to move on to the next thing in education, as if constant change could be interpreted as progress?

    • larrycuban

      At first, the program I described was mostly white interns; by the time it was incorporated into the D.C. schools, at least half were minority. Was it a consideration at the time? Perhaps. But the key decisions that were made about the continuation or termination of these programs were largely political ones involving competition for limited funds against other priorities that local and national leaders had and, as you said, George, jazzier innovations around the corner. Thanks for the comment.

  3. mike g

    Just to add to the puzzle, here’s a story which maps to yours almost exactly.
    1. Started a program similar to your Teaching Corps. 45 full-time tutors, recent college grads, integrated into a single school. Large gains.
    2. Program replicated in Houston district schools – this time 250 full-time tutors, across 9 schools. Extremely large gains, even better evaluation design.
    3. Replicated in Denver, Lawrence, Chicago district schools. In each: large measurable gains for kids. Nice stories in NY Times, etc.
    4. Houston superintendent leaves, and program ends. Lawrence ends too.
    5. Chicago expands, but with heavy philanthropy; Denver (I think) maintains…but when supe leaves?
    6. Back at the original charter school, over time the tutoring dosage has fallen, little by little, each year.
    Success or failure?

    • larrycuban

      Mike, Thanks for puzzle.

      We would have to agree openly on criteria for success/failure before answering your question because without that agreement, we would be all over the place. So, for example, if improved student performance on tests is a goal of the project, then 1, 2, and 3 suggest success. Reformers often see faithful replication of a model as a marker of success, so, again, for 2 and 3, the answer would be yes, successful. Expansion of a program, as in Chicago, is not necessarily the same as faithful replication (sometimes called “fidelity”), but if the expansion retains the key program pieces and results are satisfactory, then here again, success.

      Longevity of a “successful” program (you have to decide how long “longevity” is–for me, around a decade, but others may give different lengths of time) is another criterion often used to determine success. So disappearance in Houston and Lawrence, on the longevity criterion, would mean failure, though the distinction here is not a program failure but a political failure insofar as sustainability of the program.

      Now as to 6, you say tutoring has diminished at the mother charter school. You do not say whether the goal of increasing student test scores has remained and what the results are. Thus, if improved student performance on tests has remained a goal, then you need to supply the missing info. I offer a multi-answer response because where my thinking is now depends greatly on who defines success/failure and what criteria are used (and why) to make judgments. What do you think?

      • mike g

        I appreciate the thoughtful response. The missing info is that test gains are down, so there’s correlation. But as you’ve written, schools change all the time, in both strategy and staffing, so it’s hard to assess causation…

      • larrycuban

        Thanks for reply, Mike.
