This story is not about current classrooms and schools. Neither is this story about coercive accountability, unrealistic curriculum standards or the narrowness of highly-prized tests in judging district quality. This is a story well before Race to the Top, Adequate Yearly Progress, and “growth scores” entered educators’ vocabulary.
The story is about a district over 40 years ago that scored one point above comparable districts on a single test and what occurred as a result. There are two lessons buried in this story–yes, here’s the spoiler. First, public perceptions of standardized test scores as a marker of “success” in schooling have a long history of being far more powerful than observers have believed and, second, the importance of students scoring well on key tests predates A Nation at Risk (1983), the Comprehensive School Reform Act (1998), and No Child Left Behind (2002).
I was superintendent of the Arlington (VA) public schools from 1974 to 1981. In 1979 something happened that both startled me and gave me insight into the public power of test scores. The larger lesson, however, came years after I left the superintendency, when I began to understand the potent drive that everyone has to explain something, anything, by supplying a cause, any cause, just to make sense of what occurred.
In Arlington then, the school board and I were responsible for a district that had declined in population (from 20,000 students to 15,000) and had become increasingly minority (from 15 to 30 percent). The public sense that the district was in free-fall, we felt, could be arrested by concentrating on academic achievement, critical thinking, expanding the humanities, and improved teaching. After five years, both the board and I felt we were making progress.
State test scores–the coin of the realm in Arlington–at the elementary level climbed consistently each year. The bar charts I presented at press conferences looked like a stairway to the stars and thrilled school board members. When scores were published in local papers, I would admonish the school board to keep in mind that these scores were a very narrow part of what occurred daily in district schools. Moreover, while scores were helpful in identifying problems, they were severely inadequate in assessing individual students and teachers. My admonitions were generally swept aside, gleefully I might add, when scores rose and were printed school-by-school in newspapers. This hunger for numbers left me deeply skeptical about standardized test scores as signs of district effectiveness.
Then along came a Washington Post article in 1979 that showed Arlington to have edged out Fairfax County, an adjacent and far larger district, as having the highest Scholastic Aptitude Test (SAT) scores among eight districts in the metropolitan area (yeah, I know it was by one point, but when test scores determine winners and losers as in horse races, Arlington had won by a nose).
I knew that SAT results had nothing whatsoever to do with how our schools performed. It was a national standardized instrument to predict college performance of individual students; it was not constructed to assess district effectiveness. I also knew that the test had little to do with what Arlington teachers taught. I told that to the school board publicly and anyone else who asked about the SATs. Few listened.
Nonetheless, the Post article with the box-score of test results produced more personal praise, more testimonials to my effectiveness as a superintendent, and, I believe, more acceptance of the school board’s policies than any single act during the seven years I served. People saw the actions of the Arlington school board and superintendent as having caused those SAT scores to outstrip other Washington area districts.
The lessons I learned in 1979 are these: first, public perceptions of high-value markers of “quality”–in this instance, test scores–shape concrete realities that policymakers such as a school board and superintendent face in making budgetary, curricular, and organizational decisions. Second, as a historian of education I learned that using test scores to judge a district’s “success” began in the late 1960s, when newspapers began publishing district and school-by-school test scores, pre-dating by decades the surge of such reporting in the 1980s and 1990s.
This story and its lessons I have never forgotten.