Category Archives: research

Whatever Happened to the Core Knowledge Program?

No, I do not refer to the Common Core standards.

I mean the Core Knowledge program that unfolded in U.S. schools in the decade following the 1987 publication of University of Virginia Professor E.D. Hirsch, Jr.’s Cultural Literacy: What Every American Needs to Know.

The book, the creation of the Core Knowledge Foundation, and the subsequent publication of curricular sequences across the academic subjects taught in elementary schools produced a reform that again brought to the surface the historical struggle over what kinds of knowledge and skills are worth teaching and learning in tax-supported public schools.


 

Hot embers from earlier traditional vs. progressive wars in the early 20th century and again in the 1950s (over the importance of phonics vs. whole language in reading, and of exposure to disciplinary knowledge rather than students creating their own meaning) re-ignited in the last decade of the century after Hirsch’s book appeared and Core Knowledge programs spread in schools.

 


 

What Problems Did the Core Knowledge Program Intend To Solve?

According to Hirsch and advocates for Core Knowledge, the current concentration on building skills–“student will be able to do…”–has handicapped children and youth by ignoring the importance of teaching systematically sequenced knowledge as a way of developing reading comprehension, problem-solving, inquiry, and, most important, understanding of the world. Core Knowledge tries to solve this endemic problem in U.S. schooling. As one description put it:

The Core Knowledge Sequence identifies that knowledge base in the core subjects. For example, the American history portion of the Core Knowledge Sequence includes specific events and aspects of history such as the Boston Tea Party, the Louisiana Purchase, and the Underground Railroad; it does not include an objective such as “identify a sequence of events in history.” The Core Knowledge Sequence does indicate study of significant people, stories, and issues, including William Penn and the Quakers, Susan B. Anthony and the right to vote, Jackie Robinson and the integration of major league baseball, Cesar Chavez and the rights of migrant workers, Dorothea Dix and the treatment of the insane, Sojourner Truth and women’s rights, and Chief Joseph and the ordeal of the Nez Perce Indians. The American history sequence does not include an objective such as “explain how various cultural groups have participated in the development of the United States.” As the students learn about specific people and events, teachers can guide them to deeper understanding and teach them to apply problem-solving and other analytical skills to what they have learned.

E.D. Hirsch, Jr. argues that educational Progressives such as Dewey wanted children to construct their knowledge, learn by doing and come to understand the world. Such Progressive ideas have ruined American schools, according to Hirsch, by ignoring the importance of children having intellectual capital, that is, a broad and deep base of knowledge to understand core ideas and the present moment.

Diane Ravitch, a member of the Core Knowledge Foundation board, reviewed  another of Hirsch’s books in 2006 and located his place in the historic struggle between Progressives and traditionalists:

In his assault on the precepts of progressive education, Hirsch enters a battle that has been waged for over a century. In the late 19th and early 20th centuries, almost every high-school student studied Latin. Teachers and parents believed that the study of Latin taught certain skills that could be transferred to any other pursuit or activity, such as precision, judgment, logical thinking, clarity, and so on. It was, in the words of its defenders, a valuable form of mental gymnastics, intended to improve one’s faculties. The same argument was made for algebra and other areas of advanced mathematics. The first generation of education psychologists (such as Edward L. Thorndike of Teachers College) took aim at this belief and sought to demonstrate through their studies that “transfer of training” was a myth, and that there was no reason at all to study Latin or any subject that was not immediately useful.

Progressive educators were heartened by Thorndike’s work and concluded that “you study what you study, and you learn what you learn.” In other words, what was the point of learning Latin or algebra or even history since they had no demonstrable utility? ….

In this century-old debate, the great error of traditionalist educators was their failure to defend cultural values in education, that is, the importance of knowledge. By making the case for Latin or history dependent on “transfer of training,” they lost the debate. The culturally important studies such as literature, history, and foreign language never should have been defended for their value in “training the mind,” but for their importance in shaping an educated, civilized human being.

Hirsch now makes that case, and it is a very important contribution to American education. He shows that research is now firmly on the side of those who advocate knowledge as the goal of learning….

What Does a Core Knowledge Program Look Like in Practice?

In elementary schools, Core Knowledge is used for part of the day. Scheduled times are allocated to lessons in language arts, science, social studies, and math. The rest of the school day is given over to familiar activities such as art, drama, and physical education.

Deanna Zarichansky, Assistant Principal at Trousdale County Elementary School in Hartsville, TN, describes the program:

Our district adopted Core Knowledge [Language Arts] at the beginning of this school year [2017]. This has been the single most powerful curriculum implementation I have seen in my 16 years of education. We are a small district with a high rate of poverty, with many students who enter school with little to no experiences with literacy. Our school is charged with the difficult task of educating students who come to us with little vocabulary and limited knowledge of the world around them.

At first glance, many teachers were rather skeptical that their students could be successful with themes such as The War of 1812 and Astronomy. These same teachers soon became strong supporters of the program. The students began to use vocabulary and content knowledge they were being exposed to by Core Knowledge in conversations and in writing. Walking down the hallways of our school, you can hear chatter about the Earth’s atmosphere, Rosa Parks, Machu Picchu, and paleontologists. Many second grade students wanted to dress as gods and goddesses for Halloween. They collect rocks on the playground and discuss how they were formed. Parents often tell stories of their children combing through the cabinets and discussing what is healthy and what they shouldn’t be eating, catching their children peeking out of the window looking for the North Star, and rousing dinner conversations about the Civil War. Our librarian shared that students are choosing to check out more nonfiction than ever before.

The walls of our school used to be decorated with holiday items and have now been replaced with diagrams of constellations and descriptive paragraphs about Human Body Systems. This curriculum has changed the culture of our school. It has allowed equalization for students who are now exposed to deep knowledge building about the world around them.

Bridgit McCarthy, a third grade teacher at New Dimensions, a public charter school in Morganton, North Carolina, describes her unit on Rome.

Today in social studies, we assassinated Julius Caesar!

My students’ faces registered shock, sadness, and a sprinkling of outrage, all nicely mixed with understanding.

How mean!  Why would anyone kill their ally? I bet his wife feels sad.

JC helped get France for them—except it was, you know, Gaul back then. Plus, his rules helped the plebeians get more stuff from the laws.

These comments show comprehension and recall—a good start. Here’s one of the most telling comments from our class discussion; notice how it combines historical knowledge and understanding with a bit of empathy.

Well, it did kinda seem like he wanted to be a king—and the Romans said no way to kings waaaay back—like in last week’s … lesson.

These quotes demonstrate comprehension of rigorous content and use of sophisticated vocabulary. They came from third graders.

Yes, the words “stuff” to describe political change and “sad” to describe a distraught wife may smack of 8- and 9-year-olds, but “plebeians” and “ally”? I would have expected such vocabulary from the middle school students I used to teach. This is my first year teaching third grade; I’ve been delighted to see how eager younger students are to dig into history and science content….

The assassination and subsequent discussion came about two-thirds of the way through our Core Knowledge Language Arts unit on ancient Rome. That unit takes about three weeks, starting with the basic question “What Is Rome?” and then introducing students to legends and mythology, daily life in Rome, and major wars and leaders. It ends with Rome’s lasting contributions.


I am thrilled with what students are saying and writing as we progress. While I always have high expectations in my classroom, I was a bit nervous when we started the ancient Rome unit. The objectives are complex, the vocabulary is challenging. The content itself includes a great deal of geography and culture, plenty of politics, and an assumption that Core Knowledge kids already knew quite a bit about ancient Greece.

The opportunity to check and refresh some of that knowledge of Greece was an early order of business. In CKLA, second graders spend several weeks on ancient Greece with two back-to-back units: The Ancient Greek Civilization and Greek Myths. In the third-grade unit on Rome, a review of the Greek gods and goddesses was the introduction to a lesson on their Roman counterparts. Seventeen of my twenty students attended second grade at New Dimensions, and sixteen attended first (which has a unit on Early World Civilizations), so I was curious to see how much they would remember.

In theory, recall of these facts of Greece ought to come fairly easily. According to one student, they spent “forever” on ancient Greece—and they loved it. In our school, teachers combined the CKLA materials and additional teacher-created materials to really immerse students.

As a result, my third graders had no problems here. Building on their existing knowledge of other cultures’ gods and goddesses made the new material easier to access. I also didn’t have to “teach” polytheism because the very idea that people had separate deities for different aspects of their lives was old hat to them, having explored it in first grade with Mesopotamia and Egypt and again in second with ancient Greece. The three students who didn’t attend New Dimensions in second grade did need a little more support. I helped them do some additional reading and partnered each one with a student who has been at New Dimensions since kindergarten. Because the unit lasted a few weeks, these new students had time to catch up by learning about Greece and Rome together.

Do Core Knowledge Programs Work?

As with many school reforms over the past century, answering the “effectiveness” question–does it work?–is no easy task. The first major issue is whether Core Knowledge was fully implemented in classrooms. If not completely implemented, then judging outcomes becomes suspect. Many of the early studies of Core Knowledge in schools were mixed, some showing higher test scores and some showing no positive effects (see here, here, here, and here). The Core Knowledge Foundation has a list of studies that it asserts show positive outcomes. What is so often missing from research on reforms such as Core Knowledge are descriptions of the contextual conditions in which the reform is located and researchers stating clearly under what conditions the program proves effective. That holds for the research on Core Knowledge schools as well.

What Has Happened to Core Knowledge Programs in Schools?

There is now a network of 770 schools using the Core Knowledge Program (there are about 90,000 public elementary schools in the U.S.).

When the Common Core standards initially were published in 2010, Hirsch criticized the standards as having insufficient content. After reviewing the next set of standards and grade-by-grade sequence, Hirsch decided that there was sufficient content and the Core Knowledge Foundation aligned its sequence to the Common Core Standards.

Hirsch commented on this alignment of the program to Common Core Standards:

“This could be bigger than any other reform I can think of. We’ve had a hell of an incoherent system. It’s been based on a how-to theory, and not enough attention has been paid to the build-up of knowledge. This is a moment when we really could change the direction.”

 


9 Comments

Filed under how teachers teach, Reforming schools, research

Principals As Instructional Leaders: Hype and Reality

Six years ago, I published a post on the highly popular slogan of principal as instructional leader. Following up on this blog’s post about Chicago Mayor Rahm Emanuel’s publicized reversal of his initial school reform beliefs and what he ultimately learned about the importance of Chicago’s principals in turning around schools’ low academic performance, I re-visited this earlier post. I was surprised that few, if any, observational studies of principal behavior linked to student achievement have been published since 2013. The one I did find is included below.

The strong belief held by practitioners and researchers that, of the three essential roles principals perform (instructional, managerial, and political), they “must” be first and foremost instructional leaders continues to dominate the literature in spite of weak evidence.

 

 

Past and current research on principals reveals that school-site leaders perform managerial, instructional, and political roles in and out of their schools. Of these multiple (and often conflicting) roles, however, the instructional leader role has been spotlighted as a “must” for these men and women because, as the theory (and rhetoric) goes, it is crucial to improving teacher performance and student academic achievement.

Yet recent studies (https://cepa.stanford.edu/sites/default/files/grissom%20loeb%20%26%20master%20instructional%20time%20use_0.pdf) of principal behavior in schools make clear that spending time in classrooms to observe, monitor, and evaluate classroom lessons does not necessarily lead to better teaching or higher student achievement on standardized tests. Where there is a correlation between principals’ influence on teachers and student performance, it occurs when principals create and sustain an academic ethos in the school, organize instruction across the school, and align school lessons to district standards and standardized test items. There is hardly any positive association between principals walking in and out of classrooms a half-dozen times a day and conferring briefly with teachers about those five-minute visits. The reality of daily principal actions conflicts with the theory.

Much of the rhetoric of instructional leadership flowing from true believers in the theory rings hollow when researchers actually go into schools and shadow principals, observing what they do day after day in a school for a week or more at a time. Such time-and-motion studies have been done ever since the days of Frederick Winslow Taylor and “scientific management” in the early 20th century. When such studies were done, they showed that the bulk of a principal’s time was spent on managing the building, teachers, students, and parents. That was then.

Now, a few published studies make the same point: what principals do is largely manage people and buildings, spending most of their time outside the classroom, not inside watching teachers teach.

A recent report (Shadow Study Miami-Dade Principals) of what 65 principals did each day during one week in 2008 in Miami-Dade County (FLA) shows that even under NCLB pressures for academic achievement and the widely accepted (and constantly spouted) ideology of instructional leadership, Miami-Dade principals spend most of their day on managerial tasks that influence the climate of the school but may or may not affect daily instruction. What’s more, those principals who spend the most time on organizing and managing the instructional program have test scores and teacher and parental satisfaction results that are higher than those of principals who spend time coaching teachers and popping into classroom lessons.

The researchers shadowed elementary and secondary principals and categorized their activities minute-by-minute through self-reports, interviews, and daily logs kept by the principals.

In the academic language of the study:

The authors find that time spent on Organization Management activities is associated with positive school outcomes, such as student test score gains and positive teacher and parent assessments of the instructional climate, whereas Day-to-Day Instruction activities are marginally or not at all related to improvements in student performance and often have a negative relationship with teacher and parent assessments. This paper suggests that a single-minded focus on principals as instructional leaders operationalized through direct contact with teachers may be detrimental if it forsakes the important role of principals as organizational leaders (p. iv)

Two things jump out of this study for me. First, the results of shadowing principals in 2008 mirror patterns in principal work that researchers have found since the 1920s although the methodologies of time-and-motion studies have changed.

Second, there is an association–a correlation, by no means a cause-effect relationship–between better school outcomes and principals who spend more time managing the organization and climate of the school rather than in direct contact with teachers in classrooms.

Another study of first-year urban principals prepared by New Leaders, a program imbued with beliefs in instructional leadership, revealed that new principals, a large fraction of whom left the post after two years, had little impact on student achievement even while observing and monitoring teacher lessons (see RAND_TR1191).

A few studies, of course, will not banish a theory lacking convincing evidence, temper the rhetoric of principal-as-instructional-leader,  or alter principal preparation programs.  Current rhetoric and ideology highlighting instructional leadership trump research studies, past and present, again and again.

Some donor-funded efforts try combining the results of the above studies and earlier research about principals managing the instructional program with their direct involvement in teachers’ classroom practices. See, for example, the Wallace Foundation’s recent publication The School Principal as Leader: Guiding Schools to Better Teaching and Learning. In their well-intentioned effort, however, they give life to a failed theory and pump oxygen into the prevailing rhetoric.

The rose-colored view that principals of schools, big and small, urban and suburban, elementary and secondary, can throw fairy dust over teacher lessons and improve student academic performance continues to dominate professional associations of principals and university preparation programs.

5 Comments

Filed under research, school leaders

How Much Do Educators Care About Edtech Efficacy? Less Than You Might Think (Jenny Abamu)

Jenny Abamu is a reporter at WAMU. She was previously an education technology reporter at EdSurge where she covered technology’s role in K-12 education.

She previously worked at Columbia University’s EdLab’s Development and Research Group, producing and publishing content for their digital education publication, New Learning Times. Before that, she worked as a researcher, planner, and overnight assignment editor for NY1 News Channel in New York City. She holds a Master’s degree in International and Comparative Education from Columbia University’s Teachers College.

 

This article appeared in EdSurge, July 17, 2017

Dr. Michael Kennedy, an associate professor at the University of Virginia, was relatively sure he knew the answer to this research question: “When making, purchasing and/or adoption decisions regarding a new technology-based product for your district or school, how important is the existence of peer-reviewed research to back the product?” Nevertheless, as part of the Edtech Research Efficacy Symposium held earlier this year, Kennedy created a research team and gathered the data. But, to his surprise, the results challenged conventional wisdom.


“I hypothesized that the school leaders we talked to and surveyed would say, ‘Oh yeah we privilege products that have been sponsored by high-quality research,’” says Kennedy. “Of course we found that that wasn’t exactly correct.”

With a team of 13 other academics and experts, Kennedy surveyed 515 people from 17 states. Out of those they surveyed, 24 percent were district technology supervisors, 22 percent were assistant superintendents, 7 percent were superintendents, 27 percent were teachers, and 10 percent were principals. Within this diverse group, 76 percent directly made edtech purchases for their school or were consulted on purchase decisions. This was the group Kennedy expected would put its trust in efficacy research. To his team’s surprise, however, about 90 percent of the respondents said they didn’t insist on research being in place before adopting or buying a product.

In contrast, respondents prioritized factors such as ‘fit’ for their school, price, functionality and alignment with district initiatives; these were all rated by those surveyed as “extremely important” or “very important.” In the report, one of the administrators interviewed is quoted saying, “If the product was developed using federal grant dollars, great, but the more important factor is the extent to which it suits our needs.” Kennedy also noted that other statements made him pause.

“Research, according to one of the quotes I received, was the icing on the cake,” says Kennedy. “Having a lot of research evidence, like the type demanded by the feds, was cool but not essential. I found that to be pretty surprising and a little bit troubling.”


Kennedy defines randomized control trials, a research methodology that tries to remove bias and external effects as much as possible from the experiment, as the gold standard of research. Though this type of extensive and carefully planned research is expensive, the federal government does offer funds to support groups willing to go through the process. However, without schools demanding such research, Kennedy says, the government has made a way but there is no will—and that could dry up funds.

“The consumer is the one who is going to have to demand the market changes. If school districts say, ‘I am not buying without any research evidence,’ that would be the only thing, I think, the business community will listen to,” says Kennedy.

So what explains the educators who did put research at the top of their list? Kennedy speculates it’s a question of exposure to quality research and district funding.

“Some people who responded to our survey had doctorates, others had advanced degrees, and they understand the value of research,” says Kennedy. “Some respondents are from districts that are very well-funded, and they have the luxury of being picky. Other districts have very limited budgets, very limited time and they are going to what is cheapest and easiest.”

Whether rich or poor, all school districts do have to answer to their tax bases, who often foot the bill for edtech purchases. Schools that cannot show academic gains are often under more scrutiny from outside forces, including parents and local officials. However, Kennedy notes that the complicated nature of education and all the variables that can affect student achievement water down any accountability that can be placed on edtech product purchase decisions made by the school districts.

“I suspect they will look at how are we teaching reading and math because technology is often used as a supplementary tool,” says Kennedy. “I hear parents say they want more technology, but they don’t know what they want. They think any tech is good tech, and I think that myth has pervaded as well. It’s a wicked problem, a layered contextual kind of issue, that will take more than the field can do to fix.”

5 Comments

Filed under research, school leaders, technology use

No, Educators and Policymakers Shouldn’t Just ‘Do What the Research Shows’ (Rick Hess)

…. I routinely advise policymakers and practitioners to be real nervous when an academic or expert encourages them to do “what the research shows.” As I observed in Letters to a Young Education Reformer, 20th-century researchers reported that head size was a good measure of intelligence, girls were incapable of doing advanced math, and retardation was rampant among certain ethnic groups. Now, I know what you’re thinking: “That wasn’t real research!” Well, it was conducted by university professors, published in scholarly journals, and discussed in textbooks. Other than the fact that the findings now seem wacky, that sure sounds like real research to me.

Medical researchers, for instance, change their minds on important findings with distressing regularity. Even with their deep pockets and fancy lab equipment, they’ve gone back and forth on things like the dangers of cholesterol, the virtues of flossing, whether babies should sleep on their backs, how much exercise we should get, and the effects of alcohol. Things would be messy if lawmakers or insurers were expected to change policies in response to every new medical study.

In truth, science is frequently a lot less absolute than we imagine. In 2015, an attempt to replicate 97 studies with statistically significant results found that more than one-third couldn’t be duplicated. More than 90 percent of psychology researchers admit to at least one behavior that might compromise their research, such as stopping data collection early because they liked the results as they were, or not disclosing all of a study’s conditions. And more than 40 percent admit to having sometimes decided whether to exclude data based on what it did to the results.

Rigorous research eventually influences policy and practice, but it’s typically after a long and gradual accumulation of evidence. Perhaps the most famous example is with the health effects of tobacco, where a cumulative body of research ultimately swayed the public and shaped policy on smoking—in spite of tobacco companies’ frenzied, richly funded efforts. The consensus that emerged involved dozens of studies by hundreds of researchers, with consistent findings piling up over decades.

When experts assert that something “works,” that kind of accumulated evidence is hardly ever what they have in mind. Rather, their claims are usually based on a handful of recent studies—or even a single analysis—conducted by a small coterie of researchers. (In education, those researchers are not infrequently also advocates for the programs or policies they’re evaluating.) When someone claims they can prove that extended learning time, school turnarounds, pre-K, or teacher residencies “work,” what they usually mean is that they can point to a couple studies that show some benefits from carefully executed pilot programs.

The upshot: When pilots suggest that policies or programs “work,” it can mean a lot less than reformers might like. Why might that be?

Think about it this way. The “gold standard” for research in medicine and social science is a randomized control trial (RCT). In an RCT, half the participants are randomly selected to receive the treatment—let’s say a drug for high blood pressure. Both the treatment and control groups follow the same diet and health-care plan. The one wrinkle is that the treatment group also receives the new drug. Because the drug is the only difference in care between the two groups, it can be safely credited with any significant difference in outcomes.

RCTs specify the precise treatment, who gets it, and how it is administered. This makes it relatively easy to replicate results. If patients in a successful RCT got a 100-milligram dosage of our blood pressure drug every twelve hours, that’s how doctors should administer it in order to obtain the same results. If doctors gave out twice the recommended dosage, or if patients got it half as often as recommended, you wouldn’t expect the same results. When we say that the drug “works,” we mean that it has specific, predictable effects when used precisely.
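To make the logic of that design concrete, here is a minimal simulation sketch in Python. It is my own illustration, not part of Hess’s essay: the participant count, the hypothetical blood-pressure drug, and the assumed true effect of -12 mmHg are all invented for the example. The point is the one above: because random assignment makes the two groups alike on average, the gap in mean outcomes estimates the treatment effect.

```python
import random
import statistics

random.seed(0)

def simulated_outcome(treated: bool) -> float:
    """Hypothetical systolic blood pressure at the end of the trial."""
    baseline = random.gauss(150, 10)    # both groups drawn from the same population
    effect = -12.0 if treated else 0.0  # assumed true effect of the made-up drug
    noise = random.gauss(0, 5)          # everything else is identical on average
    return baseline + effect + noise

# Random assignment: each participant has an equal chance of treatment or control.
groups = {"treatment": [], "control": []}
for _ in range(1000):
    arm = "treatment" if random.random() < 0.5 else "control"
    groups[arm].append(simulated_outcome(arm == "treatment"))

# Because assignment was random, the difference in means estimates the drug's effect.
estimate = statistics.mean(groups["treatment"]) - statistics.mean(groups["control"])
print(f"Estimated treatment effect: {estimate:.1f} mmHg (true effect: -12.0)")
```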

At times, that kind of research can translate pretty cleanly to educational practice. If precise, step-by-step interventions are found to build phonemic awareness or accelerate second-language mastery, replication can be straightforward. For such interventions, research really can demonstrate “what works.” And we should pay close attention.

But this also helps illuminate the limits of research when it comes to policy, given all the complexities and moving parts involved in system change. New policies governing things like class size, pre-K, or teacher pay get adopted and implemented by states and systems in lots of different ways. New initiatives are rarely precise imitations of promising pilots, even on those occasions when it’s clear precisely what the initial intervention, dosage, design, and conditions were.

If imitators are imprecise and inconsistent, there’s no reason to expect that results will be consistent. Consider class-size reduction. For decades, advocates of smaller class sizes have pointed to findings from the Student Teacher Achievement Ratio (STAR) project, an experiment conducted in Tennessee in the late 1980s. Researchers found significant achievement gains for students in very small kindergarten and first-grade classes. Swayed by the results, California legislators adopted a massive class-size reduction program that cost billions in its first decade. But the evaluation ultimately found no impact on student achievement.

What happened? Well, what “worked” on a limited scale in Tennessee played out very differently when adopted statewide in California. The “replication” didn’t actually replicate much beyond the notion of “smaller classes.” Where STAR’s small classes were 13 to 17 students, California’s small classes were substantially larger. STAR was a pilot program in a few hundred classrooms, minimizing the need for new teachers, while California’s statewide adoption required a tidal wave of new hires. In California, districts were forced to hire thousands of teachers who previously wouldn’t have made the cut, while schools cannibalized art rooms and libraries in order to find enough classrooms to house them. Children who would have had better teachers in slightly larger classrooms were now in slightly smaller classrooms with worse teachers. It’s no great shock that the results disappointed.

Research should inform education policy and practice, but it shouldn’t dictate it. Common sense, practical experience, personal relationships, and old-fashioned wisdom have a crucial role to play in determining when and how research can be usefully applied. The researchers who play the most constructive roles are those who understand and embrace that messy truth.

2 Comments

Filed under research, school reform policies

Assessing My Writing: A Look Backward

A month ago, a colleague wrote to me and asked me to write about my career as a practitioner/scholar over the past half-century. I accepted. Part of the request was to include what I have written about policy and practice as a historian of education that contributed to both research and practice.

Sure, there are metrics that suggest what a “contribution” may be. There are Google Scholar and Edu-Scholar rankings. There are Web of Science citations. All well and good, but influence or impact on practitioners and researchers? Maybe yes, a bit here and there. And maybe no, not a trace. Rankings and citations are, at best, no more than fragile, even shaky proxies of a “contribution.”

I thought about these metrics a lot and decided instead to describe those works that gave me the most satisfaction in writing. This is not false modesty. What I think may be a contribution, others may yawn at as banal. And for what I think is a mundane article, I will receive notes from readers about how powerful the piece was in altering their thinking. Writing to me is a form of teaching: some lessons fly and others flop.

So what follows is my self-assessment of those writings that gave me the most satisfaction and feeling of pride in doing something worthwhile.  Others would have to judge whether what I have written over the past half-century has contributed to what practitioners, policymakers, researchers, and the general public—audiences I have written for—know and do. In all instances, what I offer are publications that were prompted by questions that grew out of my teaching and administrative experience and what I learned as a researcher. Both have played a huge part in what I chose to research, write, and teach.

How Teachers Taught (1984, 1993)

This study of three different generations of reformers trying to alter the dominant way of classroom teaching (1900s, 1960s, and 1980s) was my first historical analysis of teaching. The question that prompted the study came out of my visits to Arlington (VA) public school classrooms over the seven years I served as superintendent in the 1970s and early 1980s. I kept seeing classroom lessons that reminded me of how I was taught in elementary and secondary schools in Pittsburgh (PA) in the 1940s. And how I taught in Cleveland (OH) in the 1950s. How could that be, I asked myself? That question led to a three-year grant to study how teachers taught between 1880 and 1990.

I used district archives, photographs, and first-hand accounts to cover a century of policy efforts to shift teaching from teacher-centered to student-centered instruction. I documented the century-long growth of classroom hybrids of both kinds of classroom instruction. Few historians, sadly, have since pursued the question of how reform policies aimed at altering teachers’ classroom behavior actually get put into practice.

 The Managerial Imperative and the Practice of Leadership (1988)

Here again, a question that grew out of my being in classrooms as a teacher and a district administrator nudged me. What I saw and experienced in classrooms and administrative offices looked a great deal alike insofar as the core roles that both teachers and administrators had to perform. Was that accurate and if so, how did that come to be? So I investigated the history of teaching, principaling and superintending. I saw that three core roles dominated each position: instructional, managerial, and political. I compared and contrasted each with vivid examples and included chapters on my experiences as both a teacher and administrator.

Reforming Again, Again, and Again (1990)

The article that appeared in Educational Researcher looked at various cycles of change that I had documented in How Teachers Taught and The Managerial Imperative. The central question that puzzled me was why school reforms in instruction, curriculum, governance, and organization recurred time and again. I was now old enough to have experienced these reform cycles myself.

I presented a conceptual framework that explained the recurring reforms. My prior studies and direct school experiences gave me rich examples to illustrate the framework.

Tinkering toward Utopia (1995)

David Tyack and I collaborated in writing this volume. We drew heavily from the “History of School Reform” course we had been teaching to graduate students and each of our prior studies. In only 142 pages (endnotes and bibliography excluded), we summed up our thinking about the rhetoric and actuality of school reform policies in curriculum, school organization, governance, and instruction over the past two centuries in the U.S.

Oversold and Underused: Computers in the Classroom (2001)

In 1986, Teachers and Machines: The Classroom Use of Technology Since 1920 was published. In that study, I looked at teacher access and use of film and radio in classrooms during the 1920s and 1930s, educational television in the 1950s and 1960s, and the first generation of desktop computers in the early 1980s. The central question driving that study was: what did teachers do in their lessons when they had access to film, radio, television, and later computers?

The question derives from the larger interest I have had in school reform policies and the journey they take as they wend their way into classroom practice. Like new curricula, governance changes, and shifts in how best to organize schools, grasping at new technologies that promise deep changes in how teachers teach is simply another instance of school reformers using policy mandates to alter classroom instruction. In short, adopting new technologies is simply another thread in the recurring pattern of school reformers seeking classroom changes during the 20th and 21st centuries.

Fifteen years after Teachers and Machines appeared, computers had become common in schools. So in Oversold and Underused, I asked: to what degree were teachers in Silicon Valley schools using computers in their classrooms, labs, and media centers for lessons they taught? Such questions about classroom use go beyond the rhetoric surrounding new devices and software. I wanted to see what actually occurred in classrooms when districts adopted policies pushing new technologies into pre-school, high school, and university classrooms.

Teaching History Then and Now (2016)

The question that prompted this study came out of writing for my blog on how I taught history in two urban high schools in the 1950s and 1960s. I wondered how history was taught in those very same high schools a half-century later. Those personal questions led to reconstructing my teaching a half-century ago from personal records and archives I found at each school and then traveling to those very same schools to do observations and interviews with current teachers of history.

***********************************************

These publications have given me great satisfaction in writing. Converting questions and ideas into words on a screen or jottings on a piece of paper is what I have done since I published my first article in 1960 in the Negro History Bulletin. Have I written things that have never left my home and remain in closets and bottom drawers? You bet. But writing, a different way of teaching, remains important to me, and as long as I can I will write about the past as it influences the present, especially policies that aim at altering how teachers teach.

Yet the act of writing historically remains mysterious to me. Why do the words sometimes flow easily and excite me as they capture elusive ideas and render them gracefully, while at other times what I see on paper or on the screen are clunky sentences, if not clumsy wording? I do not know. Being immersed in writing about policy and practice historically (as it has been for me in teaching high school and graduate seminars) has given me highs and lows over the years and much satisfaction. While I may not understand the mystery of writing, I remain most grateful to Clio, the muse of historians.

 

Leave a comment

Filed under Reforming schools, research

How To Get Your Mind To Read (Daniel Willingham)

 

“Daniel T. Willingham (@DTWillingham) is a professor of psychology at the University of Virginia and the author, most recently, of ‘The Reading Mind: A Cognitive Approach to Understanding How the Mind Reads.'”

This post appeared as an op-ed in the New York Times November 25, 2017.

 

Americans are not good readers. Many blame the ubiquity of digital media. We’re too busy on Snapchat to read, or perhaps internet skimming has made us incapable of reading serious prose. But Americans’ trouble with reading predates digital technologies. The problem is not bad reading habits engendered by smartphones, but bad education habits engendered by a misunderstanding of how the mind reads.

Just how bad is our reading problem? The last National Assessment of Adult Literacy from 2003 is a bit dated, but it offers a picture of Americans’ ability to read in everyday situations: using an almanac to find a particular fact, for example, or explaining the meaning of a metaphor used in a story. Of those who finished high school but did not continue their education, 13 percent could not perform simple tasks like these. When things got more complex — in comparing two newspaper editorials with different interpretations of scientific evidence or examining a table to evaluate credit card offers — 95 percent failed.

There’s no reason to think things have gotten better. Scores for high school seniors on the National Assessment of Education Progress reading test haven’t improved in 30 years.

Many of these poor readers can sound out words from print, so in that sense, they can read. Yet they are functionally illiterate — they comprehend very little of what they can sound out. So what does comprehension require? Broad vocabulary, obviously. Equally important, but more subtle, is the role played by factual knowledge.

All prose has factual gaps that must be filled by the reader. Consider “I promised not to play with it, but Mom still wouldn’t let me bring my Rubik’s Cube to the library.” The author has omitted three facts vital to comprehension: you must be quiet in a library; Rubik’s Cubes make noise; kids don’t resist tempting toys very well. If you don’t know these facts, you might understand the literal meaning of the sentence, but you’ll miss why Mom forbade the toy in the library.

Knowledge also provides context. For example, the literal meaning of last year’s celebrated fake-news headline, “Pope Francis Shocks World, Endorses Donald Trump for President,” is unambiguous — no gap-filling is needed. But the sentence carries a different implication if you know anything about the public (and private) positions of the men involved, or you’re aware that no pope has ever endorsed a presidential candidate.

You might think, then, that authors should include all the information needed to understand what they write. Just tell us that libraries are quiet. But those details would make prose long and tedious for readers who already know the information. “Write for your audience” means, in part, gambling on what they know.

These examples help us understand why readers might decode well but score poorly on a test; they lack the knowledge the writer assumed in the audience. But if a text concerned a familiar topic, habitually poor readers ought to read like good readers.

In one experiment, third graders — some identified by a reading test as good readers, some as poor — were asked to read a passage about soccer. The poor readers who knew a lot about soccer were three times as likely to make accurate inferences about the passage as the good readers who didn’t know much about the game.

That implies that students who score well on reading tests are those with broad knowledge; they usually know at least a little about the topics of the passages on the test. One experiment tested 11th graders’ general knowledge with questions from science (“pneumonia affects which part of the body?”), history (“which American president resigned because of the Watergate scandal?”), as well as the arts, civics, geography, athletics and literature. Scores on this general knowledge test were highly associated with reading test scores.

Current education practices show that reading comprehension is misunderstood. It’s treated like a general skill that can be applied with equal success to all texts. Rather, comprehension is intimately intertwined with knowledge. That suggests three significant changes in schooling.

First, it points to decreasing the time spent on literacy instruction in early grades. Third-graders spend 56 percent of their time on literacy activities but 6 percent each on science and social studies. This disproportionate emphasis on literacy backfires in later grades, when children’s lack of subject matter knowledge impedes comprehension. Another positive step would be to use high-information texts in early elementary grades. Historically, they have been light in content.

Second, understanding the importance of knowledge to reading ought to make us think differently about year-end standardized tests. If a child has studied New Zealand, she ought to be good at reading and thinking about passages on New Zealand. Why test her reading with a passage about spiders, or the Titanic? If topics are random, the test weights knowledge learned outside the classroom — knowledge that wealthy children have greater opportunity to pick up.

Third, the systematic building of knowledge must be a priority in curriculum design. The Common Core Standards for reading specify nearly nothing by way of content that children are supposed to know — the document valorizes reading skills. State officials should go beyond the Common Core Standards by writing content-rich grade-level standards and supporting district personnel in writing curriculums to help students meet the standards. That’s what Massachusetts did in the 1990s to become the nation’s education leader. Louisiana has recently taken this approach, and early results are encouraging.

Don’t blame the internet, or smartphones, or fake news for Americans’ poor reading. Blame ignorance. Turning the tide will require profound changes in how reading is taught, in standardized testing and in school curriculums. Underlying all these changes must be a better understanding of how the mind comprehends what it reads.

 

12 Comments

Filed under research, school reform policies

Research Counts for Little When It Comes to Adopting “Personalized Learning”

The K-12 sector is investing heavily in technology as a means of providing students with a more customized educational experience. So far, though, the research evidence behind “personalized learning” remains thin.

Ben Herold, Education Week, October 18, 2016

The pushers of computer-based instruction want districts to buy products and then see if the product works. Students and teachers are being used for marketing research, unreimbursed research. Districts are spending money based on hype and tests of the educational efficacy of an extremely narrow range of products as if this is a reasonable way to proceed in this era of extreme cuts in budgets.

Laura Chapman, comment on above guest post, May 21, 2017

Both Ben Herold and Laura Chapman are correct in their statements about the thinness of research on “personalized learning” and that districts spend “money based on hype and tests of the educational efficacy of an extremely narrow range of products….”

In short, independent studies of “personalized learning,” however defined, are rare birds; of even greater importance, they are subordinate to decisions on buying and deploying software and programs promising to tailor learning to each and every student from kindergarten through high school. To provide a fig leaf of cover for spending on new technologies, policymakers often use vendor-endorsed studies and quick-and-dirty product evaluations. They are the stand-ins for “what the research says” when it comes to purchasing new products advertising a platform for “personalized learning.”

Why is research nearly irrelevant to such decisions? Because other major criteria come into play that push aside educational research on technology, whether independent or vendor-sponsored. Policymakers lean far more heavily upon criteria of effectiveness, popularity, and longevity in spending scarce dollars on new technologies championing “personalized learning.”

Criteria policymakers use 

The dominant standard used by most policymakers, media editors, and administrators to judge success is effectiveness: What is the evidence that the policy of using new technologies for classroom instruction has produced desired outcomes? Have you done what you said you were going to do and can you prove it? In a society where “bottom lines,” Dow Jones averages, Super Bowl victories, and vote-counts matter, quantifiable results determine effectiveness.

Since the Elementary and Secondary Education Act (1965), federal and state policymakers have relied on the effectiveness standard to examine what students have learned by using proxy measures such as test scores, high school graduation rates, college attendance, and other indicators. For example, in the late 1970s policymakers concluded that public schools had declined because Scholastic Aptitude Test (SAT) scores had plunged downward. Even though test-makers and researchers repeatedly stated that such claims were false, falling SAT scores fueled public support for states raising academic requirements in the 1980s and adding standardized tests to determine success. With the No Child Left Behind Act (2001-2016), test scores brought rewards and penalties. [i]

Yet test results in some instances proved unhelpful in measuring a reform’s success. For example, studies on computer use in classroom instruction show no substantial gains in students’ test scores. Yet buying more and more tablets and laptops with software programs has leaped forward in the past decade.

Or consider the mid-1960s evaluations of Title I of the Elementary and Secondary Education Act (ESEA). They revealed little improvement in low-income children’s academic performance, thereby jeopardizing Congressional renewal of the program. Such evidence gave critics hostile to federal initiatives reasons to brand President Lyndon Johnson’s War on Poverty programs as failures. [ii]

Nonetheless, the program’s political attractiveness to constituents and legislators overcame weak test scores. Each successive U.S. president and Congress, Republican or Democrat, has used that popularity as a basis for allocating funds to needy students in schools across the nation, including under No Child Left Behind (2001) and its successor, the Every Student Succeeds Act (2016). Thus, a reform’s political popularity often leads to its longevity (e.g., kindergarten, new technologies in classrooms).

Popularity, then, is a second standard that public officials use in evaluating success. The spread of an innovation and its hold on voters’ imagination and wallets has meant that attractiveness to parents, communities, and legislators easily translates into political support for reform. Without the political support of parents and teachers, few technological innovations such as “personalized learning” could fly long distances.

The rapid diffusion of kindergarten and preschool, special education, bilingual education, testing for accountability, charter schools, and electronic technologies in schools are instances of innovations that captured the attention of practitioners, parents, communities, and taxpayers. Few educators or public officials questioned large and sustained outlays of public funds for these popular reforms because they were perceived as resounding successes regardless of the research. And they have lasted for decades. Popularity-induced longevity becomes a proxy for effectiveness. [iii]

A third standard used to judge success is assessing how well innovations mirrored what designers of reforms intended. This fidelity standard assesses the fit between the initial design, the formal policy, the subsequent program, and its implementation.

Champions of the fidelity standard ask: How can anyone determine effectiveness if the reform departs from the design? If federal, state, or district policymakers, for example, adopt and fund a new reading program because it has proved to be effective elsewhere, teachers and principals must follow the blueprint as they put it into practice or else the desired outcomes will go unfulfilled (e.g., Success for All). When practitioners add, adapt, or even omit features of the original design, then those in favor of fidelity say that the policy and program cannot be determined effective because of these changes. Policy adaptability is the enemy of fidelity. [iv]

Seldom are these criteria debated publicly, much less questioned. Unexamined acceptance of effectiveness, fidelity, and popularity avoids asking whose standards will be used, how they are applied, and what alternative standards might be used to judge reform success and failure.

Although policymakers, researchers, and practitioners have vied for attention in judging the success of school reforms such as using new technologies in classroom instruction, policy elites, including civic and business leaders and their accompanying foundation- and corporate-supported donors, have dominated the game of judging reform success.

Sometimes called a “growth coalition,” these civic, business, and philanthropic leaders see districts and schools as goal-driven organizations with top officials exerting top-down authority through structures. They juggle highly prized values of equity, efficiency, excellence, and getting reelected or appointed. They are also especially sensitive to public expectations for school accountability and test scores; they also reflect societal optimism that technologies can solve individual and community problems. Hence, these policymaking elites favor standards of effectiveness, fidelity, and popularity—even when they conflict with one another. Because the world they inhabit is one of running organizations, their authority and access to the media give them the leverage to spread their views about what constitutes “success.” [v]

The world that policy elites inhabit, however, is one driven by values and incentives that differ from the worlds that researchers and practitioners inhabit. Policymakers respond to signals and events that anticipate reelection and media coverage. They consider the standards of effectiveness, fidelity, and popularity rock-hard fixtures of their policy world. [vi]

Most practitioners, however, look to different standards. Although many teachers and principals have expressed initial support for high-performing public schools serving the poor and children of color, most practitioners have expressed strong skepticism about test scores as an accurate measure of either their effects on children or the importance of their work.

Such practitioners are just as interested in student outcomes as are policymakers, but the outcomes differ. They ask: What skills, content, and attitudes have students learned beyond what is tested? To what extent is the life lived in our classrooms and schools healthy, democratic, and caring? Can reform-driven programs, curricula, technologies be bent to our purposes? Such questions, however, are seldom heard. Broader student outcomes and being able to adapt policies to fit the geography of their classroom matter to practitioners.

Another set of standards comes from policy and practice-oriented researchers. Such researchers judge success by the quality of the theory, research design, methodologies, and usefulness of their findings to policy and student outcomes. These researchers’ standards have been selectively used by both policy elites and practitioners in making judgments about high- and low-performing schools. [vii]

So multiple standards for judging school “success” are available. Practitioner- and researcher-derived standards have occasionally surfaced and received erratic attention from policy elites. But it is this strong alliance of policymakers, civic and business elites, and friends in the corporate, foundation, and media worlds that relies on standards of effectiveness, fidelity, and popularity. This coalition and their standards continue to dominate public debate, school reform agendas, and determinations of “success” and “failure.”

And so for “personalized learning,” the effectiveness criterion, lacking solid evidence of student success, gives way to the political popularity criterion that currently dominates policy debates over districts buying tablets and laptops to get teachers to put the new technological fad into classroom practice.

____________________________________________________

[i] Patrick McGuinn, No Child Left Behind and the Transformation of Federal Education Policy, 1965-2005 (Lawrence, KS: University Press of Kansas, 2006).

[ii] Harvey Kantor, “Education, Reform, and the State: ESEA and Federal Education Policy in the 1960s,” American Journal of Education, 1991, 100(1), pp. 47-83; Lorraine McDonnell, “No Child Left Behind and the Federal Role in Education: Evolution or Revolution?” Peabody Journal of Education, 2005, 80(2), pp. 19-38.

[iii] Michael Kirst and Gail Meister, “Turbulence in American Secondary Schools: What Reforms Last,” Curriculum Inquiry, 1985, 15(2), pp. 169-186; Larry Cuban, “Reforming Again, Again, and Again,” Educational Researcher, 1990, 19(1), pp. 3-13.

[iv] Janet Quinn, et al., Scaling Up the Success For All Model of School Reform, final report (Santa Monica, CA: RAND Corporation, 2015).

[v] Sarah Reckhow, Follow the Money: How Foundation Dollars Change Public School Politics (New York: Oxford University Press, 2013); Frederick Hess and Jeff Henig (eds.), The New Education Philanthropy: Politics, Policy, and Reform (Cambridge, MA: Harvard Education Press, 2015).

[vi] Linda Darling-Hammond, “Instructional Policy into Practice: The Power of the Bottom over the Top,” Educational Evaluation and Policy Analysis, 1990, 12(3), pp. 339-347; Charles Payne, So Much Reform, So Little Change (Cambridge, MA: Harvard Education Press, 2008); Joyce Epstein, “Perspectives and Previews on Research and Policy for School, Family, and Community Partnerships,” in (New York: Routledge, 1996), pp. 209-246.

[vii] Anita Zerigon-Hakes, “Translating Research Findings into Large-Scale Public Programs and Policy,” The Future of Children: Long-Term Outcomes of Early Childhood Programs, 1995, 5(3), pp. 175-191; Richard Elmore and Milbrey McLaughlin, Steady Work (Santa Monica, CA: RAND Corporation, 1988).

11 Comments

Filed under research, school reform policies, technology use