Category Archives: testing

Can Superintendents Raise Test Scores?

I first asked this question in a post published over six years ago. I have updated and revised that post because the answer is popularly and resoundingly “yes” although the evidence is squirmy. I revisit both the question and answer.

 

After Atlanta (GA) school administrators and teachers were tried, convicted, and sentenced to jail for cheating, and before that an El Paso (TX) superintendent was convicted of the same charge and imprisoned, the generally accepted idea that district superintendents can pump up student achievement has taken a serious hit. Cheating scandals across the country have turned the belief in superintendents raising test scores into something tawdry.

For decades, many superintendents have been touted as earnest instructional leaders, expert managers, and superb politicians who can mobilize communities and teacher corps to improve schools and show gains in students’ test scores. From Arlene Ackerman in Philadelphia to Joel Klein in New York City to Kaya Henderson in Washington, D.C., big city superintendents are at the top rung of those who can turn around failing districts.

Surely the Atlanta cheating scandal and others around the country have tarnished the image of dynamic superintendents taking urban schools from being in dumpsters to $1 million Broad Prize winners. A tainted image, however, will not weaken the Velcro belief that smart district superintendents will lead districts to higher student achievement. Just look at contracts that school boards and mayors sign with new superintendents. Contract clauses call for student test scores, graduation rates, and other academic measures to increase during the school chief’s tenure (see here and here).

Then along comes a study that asks whether superintendents are “vital or irrelevant.” Drawing on state student achievement data from North Carolina and Florida for the years 1998-2009, researchers sought to find out how much of a relationship existed between the arrival of new superintendents, how long they served, and student achievement in districts (see PDF SuperintendentsBrown Center9314 ).

Here is what the researchers found:

  1. School district superintendent is largely a short-term job. The typical superintendent has been in the job for three to four years.
  2. Student achievement does not improve with longevity of superintendent service within their districts.
  3. Hiring a new superintendent is not associated with higher student achievement.
  4. Superintendents account for a small fraction of a percent (0.3 percent) of student differences in achievement. This effect, while statistically significant, is orders of magnitude smaller than the effects associated with the other major components of the education system: measured and unmeasured student characteristics, teachers, schools, and districts.
  5. Individual superintendents who have an exceptional impact on student achievement cannot be reliably identified.

Results, of course, are from only one study and must be handled with care. The familiar cautions about the limits of the data and methodology are there. What is remarkable, however, is that the iron-clad belief that superintendents make a difference in student outcomes held by the American Association of School Administrators, school boards, and superintendents themselves has seldom undergone careful scrutiny. Yes, the above study is correlational. It does not get into the black box of exactly how what superintendents do improves student achievement.

Ask superintendents how they get scores or graduation rates to go up. The question is often answered with a wink or a shrug of the shoulders. Among most researchers and administrators who write about and grapple with the question of whether superintendents can improve test scores, there is no explicit model of effectiveness. That is right: there is no theory of change, no theory of action.

How exactly does a school chief who is completely dependent on an elected school board, district office staff, a cadre of principals whom he or she may see monthly, and teachers who shut their doors once class begins–raise test scores, decrease dropouts, and increase college attendance? Without some theory by which a superintendent can be shown to have causal effects, test scores going up or down remain a mystery, or a matter of luck that the results occurred during that school chief’s tenure (I exclude cheating episodes in which superintendents were directly involved because they have been rare).

Many school chiefs, of course, believe–and a belief is a covert theory–that they can improve student achievement. They hold dear the Rambo model of superintending: strong leader + clear reform plan + swift reorganization + urgent mandates + crisp incentives and penalties = desired student outcomes. Think former New York City Chancellor Joel Klein, ex-Miami-Dade Superintendent Rudy Crew, ex-Washington, D.C. Chancellor Michelle Rhee, and ex-school chief Alan Bersin in San Diego. Don’t forget John Deasy in Los Angeles Unified School District. And now, Pedro Martinez in San Antonio Independent School District.

There are, of course, other less heroic models or theories of action that mirror more accurately the complex, entangled world of moving school board policy to classroom practice. One model, for example, depicts stable, ongoing, indirect influence where superintendents slowly shape a district culture of improvement, work on curriculum and instruction, ensure that principals run schools consistent with district goals, support and prod teachers to take on new classroom challenges, and communicate often with parents about what’s happening. Think ex-superintendents Carl Cohn in Long Beach (CA), Tom Payzant in Boston (MA), and Laura Schwalm in Garden Grove (CA). Such an indirect approach is less heroic, takes a decade or more, and ratchets down the expectation that superintendents be Supermen or Wonder Women.

Whether school chiefs and their boards hold a Rambo model, a model of indirect influence, or some other model, some theory exists to explain how they go about improving student performance. Without a compelling explanation for how they influence district office administrators, principals, teachers, and students to perform better than they have, most school chiefs have to figure out their own personal cause-effect model, rely upon chance, or, on those rare occasions, even cheat.

What is needed is a crisp GPS navigation system imprinted in school board members’ and superintendents’ heads that contains the following:

*A map of the political, managerial, and instructional roles superintendents perform, public schools’ competing purposes, and the constant political responsiveness of school boards to constituencies, all of which inevitably create persistent conflicts.

*A clear cause-effect model of how superintendents directly influence principals and teachers and how they, in turn, influence students to do better, such as by creating incentives and sanctions and a culture of trust that encourages both risk-taking and a willingness to learn.

*A practical and public definition of what constitutes success for school boards, superintendents, principals, teachers, and students beyond standardized test scores, higher graduation rates, and college admissions.

Such a navigation system and map are steps in the right direction of answering the question of whether superintendents can raise test scores.


3 Comments

Filed under leadership, testing

What Makes a Great School? (Jack Schneider)

Jack Schneider is an Assistant Professor of Education at the University of Massachusetts, Lowell. He is “a historian and policy analyst who studies the influence of politics, rhetoric, culture, and information in shaping attitudes and behaviors. His research examines how educators, policymakers, and the public develop particular views about what is true, what is effective, and what is important. Drawing on a diverse mix of methodological approaches, he has written about measurement and accountability, segregation and school choice, teacher preparation and pedagogy, and the relationship between research and practice. His current work, on how school quality is conceptualized and quantified, has been supported by the Spencer Foundation and the Massachusetts State Legislature.

The author of three books, Schneider is a regular contributor to “The Washington Post” and “The Atlantic” and co-hosts the education policy podcast “Have You Heard.” He also serves as the Director of Research for the Massachusetts Consortium for Innovative Education Assessment.”

This piece appeared October 23, 2017.

 

What are the signs that a school is succeeding?

Try asking someone. Chances are, they’ll say something about the impact a school makes on the young people who attend it. Do students feel safe and cared for? Are they being challenged? Do they have opportunities to play and create? Are they happy?

If you’re a parent, getting this kind of information entails a great deal of effort — walking the hallways, looking in on classrooms, talking with teachers and students, chatting with parents, and watching kids interact on the playground.

Since most of us don’t have the time or the wherewithal to run our own school-quality reconnaissance missions, we rely on rumor and anecdote, hunches and heuristics, and, increasingly, the Internet.

So what’s out there on the web? Are our pressing questions about schools being answered by crowdsourced knowledge and big data sets?

As it turns out, no.

There’s information, certainly. But mostly it doesn’t align with what we really want to know about how schools are doing. Instead, most of what we learn about schools online — on the websites of magazines, on school rating sites, and even on real estate listings — comes from student standardized test scores. Some may include demographic information or class size ratios. But the ratings are derived primarily from state-mandated high stakes tests.

The first problem with this state of affairs is that test scores don’t tell us a tremendous amount about what students are learning in school. As research has demonstrated, school factors explain only about 20 percent of achievement scores — about one-third of what student and family background characteristics explain. Consequently, test scores often indicate much more about demography than about schools.

Even if scores did reflect what students were learning in school, they’d still fail to address the full range of what schools actually do. Multiple-choice tests communicate nothing about school climate, student engagement, the development of citizenship skills, student social and emotional health, or critical thinking. School quality is multidimensional. And just because a school is strong in one area does not mean that it is equally strong in another. In fact, my research team has found that high standardized test score growth can be correlated with low levels of student engagement. Standardized tests, in short, tell us very little about what we actually value in schools.

One consequence of such limited and distorting data is an impoverished public conversation about school quality. We talk about schools as if they are uniformly good or bad, as if we have complete knowledge of them, and as if there is agreement about the practices and outcomes of most value.

Another consequence is that we can make unenlightened decisions about where to live and send our children to school. Schools with more affluent student bodies tend to produce high test scores. Perceived as “good,” they become the objects of desire for well-resourced and quality-conscious parents. Conversely, schools with more diverse student bodies are dismissed as bad.

GreatSchools.org gives my daughter’s school — a highly diverse K–8 school — a 6 on its 10-point scale. The state of Massachusetts labels it a “Level 2” school in its five-tier test score-based accountability system. SchoolDigger.com rates it 456th out of 927 Massachusetts elementary schools.

How does that align with reality? My daughter is excited to go to school each day and is strongly attached to her current and former teachers. A second-grader, she reads a book a week, loves math, and increasingly self-identifies as an artist and a scientist. She trusts her classmates and hugs her principal when she sees him. She is often breathlessly excited about gym. None of this is currently measured by those purporting to gauge school quality.

Of course, I’m a professor of education and my wife is a teacher. Our daughter is predisposed to like school. So what might be said objectively about the school as a whole? Over the past two years, suspensions have declined to one-fifth of the previous figure, thanks in part to a restorative justice program and an emphasis on positive school culture. The school has adopted a mindfulness program that helps students cope with stress and develop the skill of self-reflection. A new maker space is being used to bring hands-on science, technology, engineering, and math into classrooms. The school’s drama club, offered free after school twice a week, now has almost 100 students involved.

The inventory of achievements that don’t count is almost too long to list.

So if the information we want about schools is too hard to get, and the information we have is often misleading, what’s a parent to do?

Four years ago, my research team set out to build a more holistic measure of school quality. Beginning first in the city of Somerville, Massachusetts, and then expanding to become a statewide initiative — the Massachusetts Consortium for Innovative Education Assessment — we asked stakeholders what they actually care about in K–12 education. The result is a clear, organized, and comprehensive framework for school quality that establishes common ground for richer discussions and recognizes the multi-dimensionality of schools.

Only after establishing shared values did we seek out measurement tools. Our aim, after all, was to begin measuring what we value, rather than to place new values on what is already measured.

For some components of the framework, we turned to districts, which often gather much more information than ends up being reported. For many other components, we employed carefully designed surveys of students and teachers — the people who know schools best. And though we currently include test score growth, we are moving away from multiple-choice tests and toward curriculum-embedded performance assessments designed and rated by educators rather than by machines.

Better measures aren’t a panacea. Segregation by race and income continues to menace our public schools, as does inequitable allocation of resources. More accurate and comprehensive data systems won’t wash those afflictions away. But so much might be accomplished if we had a shared understanding of what we want our schools to do, clear and common language for articulating our aims, and more honest metrics for tracking our progress.

 

2 Comments

Filed under Reforming schools, school leaders, testing

Principals And Test Scores

I read a recent blog from two researchers who assert that principals can improve students’ test scores. The researchers cite studies that support their claim (see below). These researchers received a large grant from the Wallace Foundation to alter their principal preparation program to turn out principals who can, indeed, raise students’ academic achievement.

I was intrigued by this post because as a district superintendent I believed the same thing and urged the 35 elementary and secondary principals I supervised—we met face-to-face twice a year to go over their annual goals and outcomes, and I spent a morning or afternoon at each school at least once a year—to be instructional leaders and thereby raise test scores. Over the course of seven years, however, I saw how complex the process of leading a school is, the variation in principals’ performance, and the multiple roles that a principal plays in his or her school to engineer gains on state tests (see here and here). And I began to see clearly what a principal can and cannot do. Those memories came back to me as I read this post.

First the key parts of the post:

A commonly cited statistic in education leadership circles is that 25 percent of a school’s impact on student achievement can be explained by the principal, which is encouraging for those of us who work in principal preparation, and intuitive to the many educators who’ve experienced the power of an effective leader. It lacks nuance, however, and has gotten us thinking about the state of education-leadership research—what do we know with confidence, what do we have good intuitions (but insufficient evidence) about, and what are we completely in the dark on? ….

Quantifying a school leader’s impact is analytically challenging. How should principal effects be separated from teacher effects, for instance? Some teachers are high-performing, regardless of who leads their school, but effective principals hire the right people into the right grade levels and offer them the right supports to propel them to success.

Another issue relates to timing: Is the impact of great principals observed right away, or does it take several years for principals to grapple with the legacy they’ve inherited—the teaching faculty, the school facilities, the curriculum and textbooks, historical budget priorities, and so on? Furthermore, what’s the right comparison group to determine a principal’s unique impact? It seems crucial to account for differences in school and neighborhood environments—such as by comparing different principals who led the same school at different time points—but if there hasn’t been principal turnover in a long time, and there aren’t similar schools against which to make a comparison, this approach hits a wall.

Grissom, Kalogrides, and Loeb carefully document the trade-offs inherent in the many approaches to calculating a principal’s impact, concluding that the window of potential effect sizes ranges from .03 to .18 standard deviations. That work mirrors the conclusions of Branch, Hanushek, and Rivkin, who estimate that principal impacts range from .05 to .21 standard deviations (in other words, four to 16 percentile points in student achievement).

Our best estimates of principal impacts, therefore, are either really small or really large, depending on the model chosen. The takeaway? Yes, principals matter—but we still have a long way to go before we can confidently quantify just how much.
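The percentile figures in the excerpt above can be checked with a few lines of arithmetic. A minimal sketch, assuming the quoted “four to 16 percentile points” refers to the percentile spread between students one effect size below and one above the mean of a normal score distribution (that is, Φ(d) − Φ(−d), where Φ is the standard normal CDF); this is my reading of the conversion, not necessarily the authors’ exact method:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def percentile_spread(d):
    """Percentile-point gap between students d standard deviations
    below and above the mean of a normal score distribution."""
    return 100.0 * (norm_cdf(d) - norm_cdf(-d))

for d in (0.05, 0.21):
    print(f"effect size {d:.2f} SD -> {percentile_spread(d):.1f} percentile points")
```

Under this reading, .05 SD maps to roughly 4 percentile points and .21 SD to roughly 16.6, in line with the range quoted in the excerpt.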

I thoroughly agree with the researchers’ last sentence. But I did have problems with these assertions supported by two studies they listed.

*That principals are responsible for 25 percent of student gains on test scores (teachers, the report says, account for an additional 33 percent of those higher test scores). I traced back the source they cited and found these statements:

A 2009 study by New Leaders for New Schools found that more than half of a school’s impact on student gains can be attributed to both principal and teacher effectiveness – with principals accounting for 25 percent and teachers 33 percent of the effect.

The report noted that schools making significant progress are often led by a principal whose role has been radically re-imagined. Not only is the principal attuned to classroom learning, but he or she is also able to create a climate of hard work and success while managing the vital human-capital pipeline.

These researchers do cite studies that support their points about principals and student achievement, but I could not find the exact study showing that principals account for 25 percent of student test score gains. Moreover, they omit studies showing that higher education programs preparing principals have made a difference in their graduates raising student test scores (see here).

I applaud these researchers on their efforts to improve the university training that principals receive, but there is a huge “black box” of unknowns that explains how principals can account for improved student achievement. Opening that “black box” has been attempted in various studies that Jane David and I looked at a few years ago in Cutting through the Hype.

The research we reviewed on stable gains in test scores across many different approaches to school improvement clearly points to the principal as the catalyst for instructional improvement. But being a catalyst does not identify which specific actions influence what teachers do or translate into improvements in teaching and student achievement.

Researchers find that what matters most is the context or climate in which the action occurs. For example, classroom visits, often called “walk-throughs,” are a popular vehicle for principals to observe what teachers are doing. Principals might walk into classrooms with a required checklist designed by the district and check off items, an approach likely to misfire. Or the principal might have a short list of expected classroom practices created or adopted in collaboration with teachers in the context of specific school goals for achievement. The latter signals a context characterized by collaboration and trust, within which an action by the principal is more likely to be influential than in a context of mistrust and fear.

So research does not point to specific sure-fire actions that instructional leaders can take to change teacher behavior and student learning. Instead, what’s clear from studies of schools that do improve is that a cluster of factors account for the change.

Over the past forty years, factors associated with raising a school’s academic profile include: teachers’ consistent focus on academic standards and frequent assessment of student learning, a serious school-wide climate toward learning, district support, and parental participation. Recent research also points to the importance of mobilizing teachers and the community to move in the same direction, building trust among all the players, and especially creating working conditions that support teacher collaboration and professional development.

In short, a principal’s instructional leadership combines both direct actions, such as observing and evaluating teachers, and indirect actions, such as creating school conditions that foster improvements in teaching and learning. How principals do this varies from school to school–particularly between elementary and secondary schools, given their considerable differences in size, teacher preparation, daily schedule, and students’ plans for their future. Yes, keeping their eyes on instruction can contribute to stronger instruction; and, yes, even higher test scores. But close monitoring of instruction can only contribute to, not ensure, such improvement.

Moreover, learning to carry out this role as well as all the other duties of the job takes time and experience. Both of these are in short supply, especially in urban districts where principal turnover rates are high.

I am sure these university researchers are familiar with this literature. I wish them well in their efforts to pin down what principals do that accounts for test score improvement and to incorporate that knowledge into a program that shapes what their graduates do as principals in the schools they lead.

 

 

9 Comments

Filed under school leaders, testing

A Story about District Test Scores

This story is not about current classrooms and schools. Neither is this story about coercive accountability, unrealistic curriculum standards or the narrowness of highly-prized tests in judging district quality. This is a story well before Race to the Top, Adequate Yearly Progress, and “growth scores” entered educators’ vocabulary.

The story is about a district over 40 years ago that scored one point above comparable districts on a single test and what occurred as a result. There are two lessons buried in this story–yes, here’s the spoiler. First, public perceptions of standardized test scores as a marker of “success” in schooling have a long history of being far more powerful than observers have believed and, second, the importance of students scoring well on key tests predates A Nation at Risk (1983), the Comprehensive School Reform Act (1998), and No Child Left Behind (2002).

 

I was superintendent of the Arlington (VA) public schools between 1974 and 1981. In 1979 something happened that both startled me and gave me insight into the public power of test scores. The larger lesson, however, came years after I left the superintendency, when I began to understand the potent drive that everyone has to explain something, anything, by supplying a cause, any cause, just to make sense of what occurred.

In Arlington then, the school board and I were responsible for a district that had declined in population (from 20,000 students to 15,000) and had become increasingly minority (from 15 percent to 30 percent). The public sense that the district was in free-fall, we felt, could be arrested by concentrating on academic achievement, critical thinking, expanding the humanities, and improved teaching. After five years, both the board and I felt we were making progress.

State test scores–the coin of the realm in Arlington–at the elementary level climbed consistently each year. The bar charts I presented at press conferences looked like a stairway to the stars and thrilled school board members. When scores were published in local papers, I would admonish the school board to keep in mind that these scores were a very narrow part of what occurred daily in district schools. Moreover, while scores were helpful in identifying problems, they were severely inadequate in assessing individual students and teachers. My admonitions were generally swept aside, gleefully I might add, when scores rose and were printed school-by-school in newspapers. This hunger for numbers left me deeply skeptical about standardized test scores as signs of district effectiveness.

Then along came a Washington Post article in 1979 that showed Arlington to have edged out Fairfax County, an adjacent and far larger district, as having the highest Scholastic Aptitude Test (SAT) scores among eight districts in the metropolitan area (yeah, I know it was by one point, but when test scores determine winners and losers as in horse-races, Arlington had won by a nose).

I knew that SAT results had nothing whatsoever to do with how our schools performed. It was a national standardized instrument to predict college performance of individual students; it was not constructed to assess district effectiveness. I also knew that the test had little to do with what Arlington teachers taught. I told that to the school board publicly and anyone else who asked about the SATs. Few listened.

Nonetheless, the Post article with the box-score of  test results produced more personal praise, more testimonials to my effectiveness as a superintendent, and, I believe, more acceptance of the school board’s policies than any single act during the seven years I served. People saw the actions of the Arlington school board and superintendent as having caused those SAT scores to outstrip other Washington area districts.

The lessons I learned in 1979 are that, first, public perceptions of high-value markers of “quality,” in this instance test scores, shape concrete realities that policymakers such as a school board and superintendent face in making budgetary, curricular, and organizational decisions. Second, as a historian of education, I learned that using test scores to judge a district’s “success” began in the late 1960s, when newspapers began publishing district and school-by-school test scores, pre-dating by decades the surge of such reporting in the 1980s and 1990s.

This story and its lessons I have never forgotten.

 

5 Comments

Filed under leadership, testing

Don’t Grade Schools on Grit (Angela Duckworth)

Angela Duckworth is the founder and scientific director of the Character Lab, a professor of psychology at the University of Pennsylvania and the author of the forthcoming book “Grit: The Power of Passion and Perseverance.” This op-ed appeared in the New York Times, March 26, 2016.

 

THE Rev. Dr. Martin Luther King Jr. once observed, “Intelligence plus character — that is the goal of true education.”

Evidence has now accumulated in support of King’s proposition: Attributes like self-control predict children’s success in school and beyond. Over the past few years, I’ve seen a groundswell of popular interest in character development.

As a social scientist researching the importance of character, I was heartened. It seemed that the narrow focus on standardized achievement test scores from the years I taught in public schools was giving way to a broader, more enlightened perspective.

These days, however, I worry I’ve contributed, inadvertently, to an idea I vigorously oppose: high-stakes character assessment. New federal legislation can be interpreted as encouraging states and schools to incorporate measures of character into their accountability systems. This year, nine California school districts will begin doing this.

Here’s how it all started. A decade ago, in my final year of graduate school, I met two educators, Dave Levin, of the KIPP charter school network, and Dominic Randolph, of Riverdale Country School. Though they served students at opposite ends of the socioeconomic spectrum, both understood the importance of character development. They came to me because they wanted to provide feedback to kids on character strengths. Feedback is fundamental, they reasoned, because it’s hard to improve what you can’t measure.

This wasn’t entirely a new idea. Students have long received grades for behavior-related categories like citizenship or conduct. But an omnibus rating implies that character is singular when, in fact, it is plural.

In data collected on thousands of students from district, charter and independent schools, I’ve identified three correlated but distinct clusters of character strengths. One includes strengths like grit, self-control and optimism. They help you achieve your goals. The second includes social intelligence and gratitude; these strengths help you relate to, and help, other people. The third includes curiosity, open-mindedness and zest for learning, which enable independent thinking.

Still, separating character into specific strengths doesn’t go far enough. As a teacher, I had a habit of entreating students to “use some self-control, please!” Such abstract exhortations rarely worked. My students didn’t know what, specifically, I wanted them to do.

In designing what we called a Character Growth Card — a simple questionnaire that generates numeric scores for character strengths in a given marking period — Mr. Levin, Mr. Randolph and I hoped to provide students with feedback that pinpointed specific behaviors.

For instance, the character strength of self-control is assessed by questions about whether students “came to class prepared” and “allowed others to speak without interrupting”; gratitude, by items like “did something nice for someone else as a way of saying thank you.” The frequency of these observed behaviors is estimated using a seven-point scale from “almost never” to “almost always.”
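To make the scoring mechanics concrete, a questionnaire like the one described might turn item ratings into per-strength scores by simple averaging. The sketch below is hypothetical: the item wording comes from the paragraph above, but the data structure, function names, and averaging rule are my assumptions, not the actual Character Growth Card.

```python
# Hypothetical item-to-strength mapping (illustrative, not the real instrument).
ITEMS = {
    "self-control": [
        "came to class prepared",
        "allowed others to speak without interrupting",
    ],
    "gratitude": [
        "did something nice for someone else as a way of saying thank you",
    ],
}

def strength_scores(ratings):
    """Average each strength's item ratings on the 7-point frequency scale
    ('almost never' = 1 ... 'almost always' = 7)."""
    scores = {}
    for strength, items in ITEMS.items():
        vals = [ratings[item] for item in items if item in ratings]
        scores[strength] = sum(vals) / len(vals) if vals else None
    return scores

ratings = {
    "came to class prepared": 6,
    "allowed others to speak without interrupting": 4,
    "did something nice for someone else as a way of saying thank you": 7,
}
print(strength_scores(ratings))  # {'self-control': 5.0, 'gratitude': 7.0}
```

Averaging within each cluster, rather than producing one omnibus number, reflects the article’s point that character is plural, not singular.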

Most students and parents said this feedback was useful. But it was still falling short. Getting feedback is one thing, and listening to it is another.

To encourage self-reflection, we asked students to rate themselves. Thinking you’re “almost always” paying attention but seeing that your teachers say this happens only “sometimes” was often the wake-up call students needed.

This model still has many shortcomings. Some teachers say students would benefit from more frequent feedback. Others have suggested that scores should be replaced by written narratives. Most important, we’ve discovered that feedback is insufficient. If a student struggles with “demonstrating respect for the feelings of others,” for example, raising awareness of this problem isn’t enough. That student needs strategies for what to do differently. His teachers and parents also need guidance in how to help him.

Scientists and educators are working together to discover more effective ways of cultivating character. For example, research has shown that we can teach children the self-control strategy of setting goals and making plans, with measurable benefits for academic achievement. It’s also possible to help children manage their emotions and to develop a “growth mind-set” about learning (that is, believing that their abilities are malleable rather than fixed).

This is exciting progress. A 2011 meta-analysis of more than 200 school-based programs found that teaching social and emotional skills can improve behavior and raise academic achievement, strong evidence that school is an important arena for the development of character.

But we’re nowhere near ready — and perhaps never will be — to use feedback on character as a metric for judging the effectiveness of teachers and schools. We shouldn’t be rewarding or punishing schools for how students perform on these measures.

My concerns stem from intimate acquaintance with the limitations of the measures themselves.

One problem is reference bias: a judgment about whether you “came to class prepared” depends on your frame of reference. If, to you, being prepared means arriving before the bell rings, notebook open, last night’s homework complete, and your full attention turned toward the day’s lesson, you might rate yourself lower than a less prepared student with laxer standards.

For instance, in a study of self-reported conscientiousness in 56 countries, it was the Japanese, Chinese and Korean respondents who rated themselves lowest. The authors of the study speculated that this reflected differences in cultural norms, rather than in actual behavior.

Comparisons between American schools often produce similarly paradoxical findings. In a study colleagues and I published last year, we found that eighth graders at high-performing charter schools gave themselves lower scores on conscientiousness, self-control and grit than their counterparts at district schools. This was perhaps because students at these charter schools held themselves to higher standards.

I also worry that tying external rewards and punishments to character assessment will create incentives for cheating. Policy makers who assume that giving educators and students more reasons to care about character can be only a good thing should take heed of research suggesting that extrinsic motivation can, in fact, displace intrinsic motivation. While carrots and sticks can bring about short-term changes in behavior, they often undermine interest in and responsibility for the behavior itself.

A couple of weeks ago, a colleague told me that she’d heard from a teacher in one of the California school districts adopting the new character test. The teacher was unsettled that questionnaires her students filled out about their grit and growth mind-set would contribute to an evaluation of her school’s quality. I felt queasy. This was not at all my intent, and this is not at all a good idea.

Does character matter, and can character be developed? Science and experience unequivocally say yes. Can the practice of giving feedback to students on character be improved? Absolutely. Can scientists and educators work together to cultivate students’ character? Without question.

Should we turn measures of character intended for research and self-discovery into high-stakes metrics for accountability? In my view, no.


Filed under testing

Why Common Core Standards Will Succeed

Even though there is little evidence that state standards have increased student academic achievement since the 1980s, the District of Columbia and 45 states have embraced the Common Core–(see here and here).

Even though there is little evidence that countries with national standards score higher on international tests than nations without them, many states have already aligned their standards to textbooks, lessons, and tests– (see here and here).

Even though there is little evidence Common Core standards will produce the skilled and knowledgeable graduates that employers and college teachers have demanded of public schools, most state and federal officials have assured parents and taxpayers that the new standards and tests will do exactly that–(see here and here).

Even though there is little evidence that state and national officials have resolved tough issues in the past when it came to curriculum standards (e.g., supplying professional development for teachers and principals, providing appropriate instructional materials, determining whether teachers altered their practices), much less reduced the inevitable problems that will occur in implementing the Common Core standards (e.g., resources for computer-based testing), cheerleaders continue to beat the drums for national standards–(see here and here).


With all of these “even though”s (and there are more), Common Core standards will succeed. How can that be?

The short answer is that evidence of success doesn’t matter much to those who make policy decisions. Oh sure, decision-makers have to mention evidence and research studies, and they do, but not much when it comes to Common Core standards. Instead, they talk about failing schools, the low quality of teaching, and how, unless academic standards are raised–drum roll here at mention of Common Core–the economy will sink under the weight of graduates unprepared for an information-based workplace. Getting everyone to go to college, especially minority and poor students, is somehow seen as a solution to the economic, political, and social inequalities that have persistently plagued the U.S. for the past four decades.

Reform-minded policy elites–top federal and state officials, business leaders, and their entourages with unlimited access to media (e.g., television, websites, print journalism)–use these talking points to engage the emotions and, of course, spotlight public schools as the reason the U.S. is not as globally competitive as it should be. By focusing on the Common Core, charter schools, and evaluating teachers on the basis of student test scores, these decision-makers have shifted public attention away from fiscal and tax policies and economic structures that not only deepen and sustain poverty but also reinforce the privileges of the wealthiest two percent of Americans. Policy elites have banged away unrelentingly at public schools as the source of national woes for decades.

National, state, and local opinion-makers in the business of school reform know that what matters is not evidence, not research studies, not past experiences with similar reforms–what matters is the appearance of success. Success is 45 states adopting standards, national tests taken by millions of students, and public acceptance of Common Core. Projecting positive images (e.g., the film Waiting for Superman, “everyone goes to college”) and pushing myths (e.g., U.S. schools are broken, schools are an arm of the economy)–that is what counts in the theater of school reform.

Within a few years–say, by 2016, a presidential election year–policy elites will declare the new standards a “success” and, hold onto your hats, introduce more and better standards and tests.

This happened before with minimum competency tests in the 1970s. By 1980, thirty-seven states had mandated these tests for grade-to-grade promotion and high school graduation. A Nation at Risk (1983) judged these tests too easy since most students passed them. So goodbye to competency tests. The cycle repeated in the 1990s with the launching of upgraded state curriculum standards (e.g., Massachusetts), then NCLB, and later Common Core. It is happening now and will happen again.

Policy elites see school reform as a form of theater. Blaming schools for serious national problems, saying the right emotionally-loaded words, and giving the appearance of doing mighty things to solve the “school” problem matter far more than hard evidence or past experiences with similar reforms.


Filed under school reform policies, testing

Buying iPads, Common Core Standards, and Computer-Based Testing

The tsunami of computer-based testing for public school students is on the horizon. Get ready.

For adults, computer-based testing has been around for decades. For example, I have taken and re-taken the California online test to renew my driver’s license twice in the past decade. To get certified as a volunteer driver for Packard Children’s Hospital in Palo Alto, I had to read gobs of material about hospital policies and federal regulations on confidentiality before taking a series of computer-based tests. To obtain approval from Stanford University for a research project of which I am the principal investigator, one in which I would interview teachers and observe classrooms, I had to read online a mass of material on university regulations about subjects’ consent to participate, confidentiality, and the handling of information obtained from interviews and classroom observations. And again, I took online tests that I had to pass in order to gain the University’s approval to conduct research. Beyond the California Department of Motor Vehicles, Children’s Hospital, and Stanford University, online assessment has been a staple in the business sector from hiring through employee evaluations. So online testing is already part of adult experience.

What about K-12 students? Increasingly, districts are adopting computer-based testing. For example, Measures of Academic Progress, a popular test used in many districts, is online. Speeding up this adoption are the Common Core standards and the two consortia that are preparing assessments for the 45 states on the cusp of implementing the standards. Many states have already mandated online testing for their own standardized tests to prepare for the impending national assessments. These tests will require students to have access to a computer with the right hardware, software, and bandwidth to accommodate online testing by 2014-2015 (see here, here, and here).

There are many pros and cons to online testing as compared with, say, paper-and-pencil tests. But whatever the pros of paper-and-pencil tests, they are outslugged and outstripped by the surge of buying new devices and piloting computer-based tests to get ready for Common Core assessments (see here and here). Los Angeles Unified School District, the second largest in the nation, just signed a $50 million contract with Apple for iPads. One of the key reasons to buy these devices for the initial rollout to 47 schools was Common Core standards and assessment. Each iPad comes with an array of pre-loaded software compatible with the state online testing system and impending national assessments. The entire effort is called The Common Core Technology Project.

The best (and most recent) gift to the hardware and software industry has been the Common Core standards and assessments. At a time of fiscal retrenchment in school districts across the country when schools are being closed and teachers are let go, many districts have found the funds to go on shopping sprees to get ready for the Common Core.

And here is the point that I want to make. The old reasons for buying technology have been shunted aside for a sparkling new one. Consider that for the past three decades the rationale for buying desktop computers, laptops, and now tablets has been three-fold:

1. Make schools more efficient and productive so that students learn more, faster, and better than they had before.

2. Transform teaching and learning into an engaging and active process connected to real life.

3. Prepare the current generation of young people for the future workplace.

After three decades of rhetoric and research, teachers, principals, students, and vendors have their favorite tales to prove that these goals have been achieved. But for those who want more than Gee Whiz stories, who seek a reliable body of evidence showing that students learn more, faster, and better, that teaching and learning have been transformed, and that using these devices has prepared the current generation for actual jobs—well, that body of evidence is missing for each of these traditional reasons to buy computers.

With Common Core standards adopted, the rationale for getting devices has shifted. No longer does it matter whether there is sufficient evidence to justify huge expenditures on new technologies. Now, what matters are the practical problems of being technologically ready for the new standards and tests in 2014-2015: getting more hardware, software, additional bandwidth, technical assistance, professional development for teachers, and time in the school day to let students practice taking tests.

Whether the Common Core standards will improve student achievement–however measured–whether students will learn more, faster, and better–none of this matters in deciding which vendor to use. The question is no longer whether to buy. The question is: how much money do we have, and when can we get the devices? That is the tidal wave on the horizon.


Filed under technology, testing