
iPads for Young Children in School

Occasionally, I receive letters from parents concerned about the rollout of 1:1 iPads in their elementary school, especially for five- to eight-year-olds. Parents who write me often have general concerns about the uses of devices in schools but, in this case, the mother and father are concerned about their own children and how the principal and staff are putting the 1:1 program into practice.

Here is one letter I recently received and answered. I have deleted the names of the school, the principal, and the parents who sent me the letter.

 

Dear Larry Cuban,

We have been attempting to influence better 1:1 teaching practices with iPads at our daughters’ elementary school [in Southern California] for four months now.

Towards the end of last school year, the school announced they were going to implement [a 1:1 iPad program] starting in the fall. At first we were open to the idea, but after much research of journal articles we realized that the school is following a trend rather than implementing it correctly. We agree that implementing technology is inevitable and there are likely good ways to enhance learning, but are very disappointed at how our daughters’ school is implementing it. At this point, because many parents are not buying their kids iPads, the school is stuck in a worse situation…a hybrid of school shared iPads and kids with their own. The school has even teamed up with Project Red, but [is not] even following Project Red’s guidelines.

[The parents sent me a recent letter that the principal sent to everyone in the school community.]

A message from _______ ELEMENTARY SCHOOL

Families of __________,
 
In April, we shared with you a plan for our [1:1] initiative to personalize learning for our … students utilizing technology tools. Over the past month, the staff and I have listened to parents’ voices and have heard both support and reservations around this proposed program. As a result of that input, we have decided to pause and rethink our next steps.
 
We now realize that while the staff and I enthusiastically created and rolled out this plan for transforming student learning, we had not fully engaged our parent community in the process. The … parent community has always been closely knit and very supportive. We need and want your support and we truly value your input.

As the staff and I rethink next steps, we will be communicating opportunities for you to engage with us and share your ideas about technology and learning.
 
While we are pausing on our full implementation of [1:1], we remain firm in our belief that technology can enhance student learning and ensure that each one of our students reaches his or her potential. Staff will continue to integrate technology into their daily lessons. We will also continue to provide options to any K-5 family who would like to purchase an iPad through the district for their child to use at school or to have their child bring an iPad from home. We will continue to have shared devices in the classroom to support teaching and learning.
 
Families wishing to purchase an iPad through the district should return their Option Letter by May 30, 2014. We will be following up with those of you who have already returned your letters requesting to purchase an iPad through the district to confirm your selection.

The staff and I value and appreciate your involvement and support. Thank you for engaging in this conversation and for being part of our process. We look forward to working together as we move forward.

[BACK TO PARENTS' LETTER TO ME]

We’ve been attempting to influence the Principal and also the school board without success. After reading articles from your website, we believe there will be no substantial impact except extra cost to parents and the school. I’ve read many journal articles about technology implementation in schools and generally find:

1) We cannot find any success stories in grades lower than 3rd or 4th grade….
2) All success stories seem to be subjective rather than showing statistically significant, measurable improvements.

We are trying to remain hopeful and wondering if you can help us with any of the following:
1) can you point us to any case studies or journal articles (if any) that show statistically significant success and proper ways to implement 1:1?  We are especially interested in success in lower grades (K-3)….

LC: I do not have any studies to offer you. There may be single studies out there that do show success–as measured by increased student scores on standardized tests–but they are rare indeed. And single studies seldom forecast a trend. Overall, there is no substantial body of evidence that supports the claim that laptops, iPads, or devices in and of themselves will produce increases in academic achievement or alter traditional ways of teaching. As you said in your email, anecdotes trump statistically significant results again and again when it comes to use of devices with young children and youth.

The claims that such devices will increase engagement of students in classwork and the like are supported. Keep in mind, however, two caveats: first, there is a novelty effect that advocates mistake for long-term engagement in learning, but the effect wears off. Second, even if the effect were sustainable, the assumption that engagement leads to academic gains or higher test scores remains only that–an assumption.

 2) do you have any advice on influencing better practices with the Principal or school board?

LC: Looks like your principal erred in ignoring a first principle of implementation: inform and discuss any innovation with parents before launching it. Just consider the massive foul-up in the Los Angeles Unified School District’s iPad purchase and deployment. It does look, however, at least from the principal’s letter calling for a pause that you sent me, as if you and others have indeed had some influence.

When I receive letters like yours I reply with the same advice. Go to the school and see how K-2 teachers use the devices over the course of a day. I know that such visits take a lot of time, but such observations sort out the rhetoric from what actually occurs–some of which you may like, some of which you may not. I do not know your principal; she might feel threatened and become defensive, or she might be the kind who will seek out help from parents in her efforts to implement iPads.

 In short, gather data on what is going on at [your elementary school]. Going to the school board without such data is futile.

 


How to Read Education Data Without Jumping to Conclusions (Jessica Lahey & Tim Lahey)

In an earlier post, I offered fundamental questions that parents, teachers, administrators, researchers, and policymakers could (and yes, should) ask of any policies being considered for improving classroom teaching and student learning.

In this post, a teacher and an M.D. offer the basic questions that educators and non-educators should ask of any research study tweeted, blogged about, and appearing in newspapers or on TV programs.

Jessica Lahey is an English, Latin, and writing teacher in Lyme, New Hampshire. She writes about education and parenting for The New York Times and on her site, Coming of Age in the Middle. Tim Lahey, MD, is an infectious diseases specialist and associate professor of medicine at Dartmouth’s Geisel School of Medicine.

This piece appeared in The Atlantic online, July 8, 2014.

Education has entered the era of Big Data. The Internet is teeming with stories touting the latest groundbreaking studies on the science of learning and pedagogy. Education journalists are in a race to report these findings as they search for the magic formula that will save America’s schools. But while most of this research is methodologically solid, not all of it is ready for immediate deployment in the classroom.

Jessica was reminded of this last week, after she tweeted out an interesting study on math education. Or, rather, she tweeted out what looked like an interesting study on math education, based on an abstract that someone else had tweeted out. Within minutes, dozens of critical response tweets poured in from math educators. She spent the next hour debating the merits of the study with an elementary math specialist, a fourth grade math teacher, and a university professor of math education.

Tracy Zager, the math specialist and author of the forthcoming book Becoming the Math Teacher You Wish You’d Had, emailed her concerns about the indiscriminate use of education studies as gospel:

Public education has always been politicized, but we’ve recently jumped the shark. Catchy articles about education circulate widely, for understandable reasons, but I wish education reporters would resist the impulse to over-generalize or sensationalize research findings.

While she conceded that education journalists “can’t be expected to be experts in mathematics education, or science education, or literacy education,” she emphasized that they should be held to a higher standard than the average reader. In order to do their jobs well, they should not only be able to read studies intelligently, “they should also consult sources with field-specific expertise for deeper understanding of the fields.”

After she was schooled on Twitter, Jessica called up Ashley Merryman, the author of Nurture Shock: New Thinking About Children, and Top Dog: The Science of Winning and Losing. “Just because something is statistically significant does not mean it is meaningfully significant,” Merryman explained. “The big-picture problem with citing the latest research as a quick fix is that education is not an easy ship to turn around.” When journalists cite a press release describing a study without reading and exploring the study’s critical details, they often end up oversimplifying or overstating the results. Their coverage of education research therefore could inspire parents and policymakers to bring half-formed ideas into the classroom. Once that happens, said Merryman, “the time, money, and investment that has gone into that change means we are stuck with it, even if it’s later proven to be ineffective in practice.”

As readers and writers look for solutions to educational woes, here are some questions that can help lead to more informed decisions.

 1. Does the study prove the right point?

It’s remarkable how often far-reaching education policy is shaped by studies that don’t really prove the benefit of the policy being implemented. The Tennessee Student Teacher Achievement Ratio (STAR) study is a great example.

In the late 1980s, researchers assigned thousands of Tennessee children in grades K-3 to either standard-sized classes (with teacher-student ratios of 22-to-1) or smaller classes (15-to-1) in the same school and then followed their reading and math performance over time. The landmark STAR study concluded that K-3 kids in smaller classes outperformed peers in larger classes. This led to massive nationwide efforts to achieve smaller class sizes.


Subsequent investigations into optimal class size have yielded more mixed findings, suggesting that the story told in STAR was not the whole story. As it turns out, the math and reading benefits experienced by the K-3 kids in Tennessee might not translate to eighth-grade writing students in Georgia, to geography students in Manhattan, or to classes taught using different educational approaches or by differently skilled teachers. A key step in interpreting a new study is to avoid extrapolating too much from a single study, even a well-conducted one like STAR.

 2. Could the finding be a fluke?

Small studies are notoriously fluky, and should be read skeptically. Recently Carnegie Mellon researchers looked at 24 kindergarteners and showed that those taking a science test in austere classrooms performed 13 percent better than those in a “highly decorated” setting. The authors hypothesized that distracting décor might undermine learning, and one article in the popular press quoted the researchers as saying they hoped these findings could inform guidelines about classroom décor.

While this result may seem to offer the promise of an easy 13-percent boost in students’ learning, it is critical to remember that the results might come out completely differently if the study were replicated in a different group of children, in a different school, under a different moon. In fact, a systematic review has shown that small, idiosyncratic studies are more likely to generate big findings than well-conducted larger studies. Would that 13 percent gap in student performance narrow in a larger study that controlled for more variables?

In other words, rather than base wide-reaching policy decisions on conclusions derived from 24 kindergarteners, it would seem reasonable, for now, to keep the Jane Austen posters and student art on the classroom wall.
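To see just how fluky a 24-child study can be, consider a minimal simulation, sketched below in Python. The score distribution is invented (nothing here comes from the actual Carnegie Mellon data), and both groups are drawn from the same population, so any gap that appears is pure chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups of 12 kindergarteners drawn from the SAME score
# distribution, so any observed gap is pure sampling noise.
# The mean and spread below are assumptions for illustration only.
n_per_group, n_simulations = 12, 100_000
mean_score, sd_score = 70, 15

group_a = rng.normal(mean_score, sd_score, (n_simulations, n_per_group))
group_b = rng.normal(mean_score, sd_score, (n_simulations, n_per_group))
relative_gap = np.abs(group_a.mean(axis=1) - group_b.mean(axis=1)) / mean_score

print(f"Share of 'studies' showing a >=13% gap with zero real effect: "
      f"{(relative_gap >= 0.13).mean():.1%}")
```

Under these made-up numbers, roughly one simulated “study” in seven reports a headline-sized gap even though the true effect is zero; doubling or tripling the group sizes shrinks that share quickly.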

 3. Does the study have enough scale and power?

Sometimes education studies get press when they find nothing. For instance, Robinson and Harris recently suggested that parental help with homework does not boost academic performance in kids. In negative studies like these, the million-dollar question is whether the study was capable of detecting a difference in the first place. Put another way, absence of evidence does not equal evidence of absence.


There are multiple ways good researchers can miss real associations. One is when a study does not have enough power to detect the association. For example, when researchers look for a rare effect in too small a group of children, they sometimes miss the effect that could be seen within a larger sample size. In other cases, the findings are confounded—which means that some other, unmeasured factor influences both the factor being studied and the outcome. For example, returning to Robinson and Harris, if some parents who help their kids with homework actually do the kids’ homework for them while others give their kids flawed advice that leads them astray, then parental help with homework might appear to have no benefit because the good work of parents who help effectively is cancelled out by other parents’ missteps.

It’s always a good idea to check whether a negative study had enough power and scale to find the association it sought, and to consider whether confounds might have hidden—or generated—the finding.
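A back-of-the-envelope power calculation makes the point concrete. The sketch below uses the standard two-sample z-approximation; the effect size and group sizes are hypothetical choices for illustration, not numbers from Robinson and Harris.

```python
import math
from statistics import NormalDist

def power_two_sample(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample z-test to detect
    a standardized effect size d with n_per_group subjects per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    noncentrality = d * math.sqrt(n_per_group / 2)
    return (1 - z.cdf(z_crit - noncentrality)) + z.cdf(-z_crit - noncentrality)

# A small but real effect (d = 0.2) is usually missed with 50 kids per group...
print(f"n=50 per group,  d=0.2: power = {power_two_sample(0.2, 50):.0%}")   # ~17%
# ...but detected reliably with about 400 per group.
print(f"n=400 per group, d=0.2: power = {power_two_sample(0.2, 400):.0%}")  # ~81%
```

A “negative” study with 17 percent power misses a real effect five times out of six, which is exactly why absence of evidence is not evidence of absence.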

 4. Is it causation, or just correlation?

It turns out that the most important way for parents to raise successful children is to buy bookcases. Or at least this is what readers could conclude if they absorbed just the finding summarized in this Gizmodo article, and not the fourth-paragraph caveat that books in the home are likely a proxy for other facets of good parenting—like income, emphasis on education, and parental educational attainment.

Correlation—in this case, of bookshelves at home with achievement later in life—does not indicate causation. In fact, it often does not. The rooster might believe it causes the sun to rise, but reality is more complex. Good researchers—such as the original authors of the bookcase study—cop to this possibility and explain how their results might only reflect a deeper association.
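The bookcase confusion is easy to reproduce in a toy simulation. In the sketch below, the number of books has, by construction, no direct effect on achievement at all; an unmeasured confounder (parental education) drives both. All coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# The unmeasured confounder drives BOTH bookcases and achievement;
# books themselves have zero direct effect. Coefficients are made up.
parent_education = rng.normal(0, 1, n)
books_at_home = 0.8 * parent_education + rng.normal(0, 0.6, n)
achievement = 0.7 * parent_education + rng.normal(0, 0.7, n)

naive_r = np.corrcoef(books_at_home, achievement)[0, 1]
print(f"Naive correlation of books with achievement: {naive_r:.2f}")  # ~0.55

# Remove the confounder's contribution (using the true coefficients for
# simplicity; with real data you would estimate them by regression).
resid_books = books_at_home - 0.8 * parent_education
resid_achievement = achievement - 0.7 * parent_education
adjusted_r = np.corrcoef(resid_books, resid_achievement)[0, 1]
print(f"Correlation after adjusting for parental education: {adjusted_r:.2f}")  # ~0.00
```

The rooster, in other words, correlates strongly with the sunrise right up until you control for the rotation of the Earth.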

No research study is perfect, and all of the studies we cited above have real merit. But, by asking good questions of any research finding, parents and journalists can help bring about sounder conclusions, in life and in policy-making. It’s easy to believe catchy, tweet-able headlines or the pithy summaries of institutional press releases. But since our kids’ education ultimately depends on the effectiveness and applicability of the available research, we should ensure that our conclusions are as trustworthy and well-founded as they can possibly be.

 

 

 


Cutting Through the Hype: Principals as Instructional Leaders

Jane David and I wrote a book called Cutting through the Hype (Harvard Education Press, 2010). This is one chapter on principals. I have updated some references and language.

Effective manager? Savvy politician? Heroic leader? School CEO? Reformers press for principals who can not only play these roles but also raise test scores and do so quickly. These days principals can earn thousands of dollars in bonuses for boosting student achievement.

Principals are expected to maintain order, to be shrewd managers who squeeze a dollar out of every dime spent on the school, and to be astute politicians who can steer parents, teachers, and students in the same direction year after year. They are also expected to ensure that district curriculum standards are being taught, as well as to lead instructional improvement that will translate into test score gains.

Being a principal is a tall order. As one New York City small high school principal put it: “You’re a teacher, you’re Judge Judy, you’re a mother, you’re a father, you’re a pastor, you’re a therapist, you’re a nurse, you’re a social worker.” She took a breath and continued: “You’re a curriculum planner, you’re a data gatherer, you’re a budget scheduler, you’re a vision spreader.” Yet, at the end of the day, the pressures and rewards are for raising test scores and graduation rates, today’s measure of instructional leadership.

Where did the idea of instructional leadership originate?

Historically, the title principal comes from the phrase “principal teacher,” that is, a teacher who was designated by a mid-19th century school board to manage the non-classroom tasks of schooling a large number of students and turning in reports. Principals examined students personally to see what was learned, evaluated teachers, created curriculum, and took care of the business of schooling. So from the very beginning of the job, over 150 years ago, principals were expected to play both managerial and instructional roles.

Over the decades, however, district expectations for principals’ instructional role have grown without being clarified and without any lessening of managerial and political responsibilities. Over the past quarter-century, the literature on principals has shifted markedly from managing budgets, maintaining the building, hiring personnel, and staff decision-making to being primarily about instruction. And, within the past decade, being held directly accountable for student results on tests has been added to the instructional role. As instructional leaders, principals now must also pay far closer attention to activities they hope will help teachers produce higher student scores, such as aligning the school curriculum to the state test.

Today’s reformers put forth different ideas of what instructional leaders should do to meet standards and increase achievement. Some argue that principals need to know what good instruction looks like, spend time in classrooms, analyze teachers’ strengths and weaknesses, and provide helpful feedback. Other reformers say principals need to motivate teachers and provide opportunities for teachers to learn from each other and from professional development. Still others say principals should focus on data, continually analyzing student test scores to pinpoint where teachers need help.

The list goes on. Some reformers argue that principals should exercise instructional leadership by hiring the right curriculum specialists or coaches to work with teachers on improving instruction. Finally, others suggest that the most efficient way to improve instruction and achievement is to get rid of the bad teachers and hire good ones, an option not always open to leaders of struggling schools. Most of these ideas are not mutually exclusive, but together they pose a Herculean task, landing on top of all the other responsibilities that refuse to simply disappear.

What problem is the principal as instructional leader intended to solve?

The short answer is to raise a school’s low academic performance. New Leaders for New Schools, a program that trains principals for urban schools, captures the expectation that principals can end low academic performance through their instructional leadership:

Research shows – and our experience confirms – that strong school leaders have a powerful multiplier effect, dramatically improving the quality of teaching and raising student achievement in a school.

Such rhetoric and the sharp focus on the principal as an instructional leader in current policymaker talk have made principals into heroic figures who can turn around failing schools, reduce the persistent achievement gap single-handedly, and leap tall buildings in a single bound.

If the immediate problem is low academic performance, then the practical problem principals must solve is how to influence what teachers do daily since it is their impact on student learning that will determine gains and losses in academic achievement.

Does principal instructional leadership work?

The research we reviewed on stable gains in test scores across many different approaches to school improvement clearly points to the principal as the catalyst for instructional improvement. But being a catalyst does not identify which specific actions influence what teachers do or translate into improvements in teaching and student achievement.

Researchers find that what matters most is the context or climate in which the action occurs. For example, classroom visits, often called “walk-throughs,” are a popular vehicle for principals to observe what teachers are doing. Principals might walk into classrooms with a required checklist designed by the district and check off items, an approach likely to misfire. Or the principal might have a short list of expected classroom practices created or adopted in collaboration with teachers in the context of specific school goals for achievement. The latter signals a context characterized by collaboration and trust, within which an action by the principal is more likely to be influential than in a context of mistrust and fear.

So research does not point to specific sure-fire actions that instructional leaders can take to change teacher behavior and student learning. Instead, what’s clear from studies of schools that do improve is that a cluster of factors account for the change.

Over the past forty years, factors associated with raising a school’s academic profile include: teachers’ consistent focus on academic standards and frequent assessment of student learning, a serious school-wide climate toward learning, district support, and parental participation. Recent research also points to the importance of mobilizing teachers and the community to move in the same direction, building trust among all the players, and especially creating working conditions that support teacher collaboration and professional development.

In short, a principal’s instructional leadership combines both direct actions such as observing and evaluating teachers, and indirect actions, such as creating school conditions that foster improvements in teaching and learning. [i] How principals do this varies from school to school–particularly between elementary and secondary schools, given their considerable differences in size, teacher knowledge, daily schedule, and in students’ plans for their future. Yes, keeping their eye on instruction can contribute to stronger instruction; and, yes, even higher test scores. But close monitoring of instruction can only contribute to, not ensure such improvement.

Moreover, learning to carry out this role as well as all the other duties of the job takes time and experience. Both of these are in short supply, especially in urban districts where principal turnover rates are high.

The solution … in our view

By itself, instructional leadership is little more than a slogan, an empty bumper sticker. In some schools principals follow all the recipes for instructional leadership: They review lesson plans, make brief visits to classrooms, check test scores, circulate journal articles that give teachers tips, and carry out dozens of other instructional activities that experts advise. Yet they do not manage to create school-wide conditions that encourage teacher collaboration, high standards for student work, and a climate where learning flourishes for students and teachers. Creating these conditions is the essence of instructional leadership.

Principals who are effective instructional leaders do not follow a recipe. Like teachers, they diagnose their school’s conditions and figure out what actions are needed to create a school environment where teachers feel supported and where students, parents, and teachers strive to achieve common goals and have a stake in helping one another do their best. When all pull together, the chances of gains in test scores and other measures of academic achievement also rise.

 

************************************************************************

[i] Of the many studies and books Cuban has examined, one in particular offers both a conceptual design and practical techniques to increase the leadership of principals in supervising and evaluating teachers, major functions of every school-site leader. See Kim Marshall, Rethinking Teacher Supervision and Evaluation (San Francisco: Jossey-Bass, 2009).

 


Evidence Based Education Policy and Practice: A Conversation (Francis Schrag)

 

This fictitious exchange between two passionate educators concerns making educational policy and influencing classroom practice through careful scrutiny of evidence–as has occurred in medicine and the natural sciences–as opposed to relying on professional judgment anchored in expertise gathered in schools. It brings out a fundamental difference among educators and the public that has marked public debate over the past three decades. The center of gravity in making educational policy in the U.S. has shifted from counting resources that go into schooling and relying on professional judgment to counting outcomes students derive from their years in schools and what the numbers say.

That shift can be dated from the Elementary and Secondary Education Act of 1965 but gained sufficient traction after A Nation at Risk (1983) to dominate debate over innovation, policy, and practice. Although this is one of the longest guest posts I have published, I found it useful (and hope that viewers will as well) in making sense of a central conflict that exists today within and among school reformers, researchers, teachers, policymakers, and parents.

Francis Schrag is professor emeritus in the philosophy of education at the University of Wisconsin, Madison. This article appeared in Teachers College Record, March 14, 2014.

A dialogue between a proponent and opponent of Evidence Based Education Policy. Each position is stated forcefully and each reader must decide who has the best of the argument.

Danielle, a professor of educational psychology, and Leo, a school board member and former elementary school teacher and principal, visit a middle-school classroom in Portland, Maine, where students are deeply engaged in building robots out of Lego materials, robots that will be pitted against other robots in contests of strength and agility. The project requires them to make use of concepts they’ve learned in math and physics. Everything suggests that the students are deeply absorbed in what is surely a challenging activity, barely glancing around to see who has entered their classroom.

Leo:  Now this is exciting education. This is what we should be moving towards.  I wish all teachers could see this classroom in action.

Danielle:  Not so fast.  I’ll withhold judgment till I have some data.  Let’s see how their math and science scores at the end of the year compare with those of the conventional classroom we visited this morning.  Granted that one didn’t look too out of the ordinary, but the teacher was really working to get the kids to master the material.

Leo:  I don’t see why you need to wait.  Can’t you see the difference in level of engagement in the two classrooms?  Don’t you think the students will remember this experience long after they’ve forgotten the formula for angular momentum? Your hesitation reminds me of a satirical article a friend showed me; I think it came from a British medical journal.  As I recall the headline went: “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomized controlled trials.”

Danielle:  Very cute, but let’s get serious. Spontaneous reactions can be misleading; things aren’t always what they appear to be, as I’m sure you’ll agree. I grant you that it looks as if the kids in this room are engaged, but we don’t know whether they’re engaged in the prescribed tasks and we don’t know what they’re actually learning, do we? We’ll have a much better idea when we see the comparative scores on the test. The problem with educators is that they get taken in by what looks like it works; they go with hunches and what’s in fashion, but haven’t learned to consult data to see what actually does work. If physicians hadn’t learned to consult data before prescribing, bloodletting would still be a popular treatment.

Suppose you and I agreed on the need for students to study math and physics.  And suppose that it turned out that the kids in the more conventional classroom learned a lot more math and physics, on average, as measured on tests, than the kids in the robotics classroom.  Would you feel a need to change your mind about what we’ve just seen?  And, if not, shouldn’t you?  Physicians are now on board with Evidence Based Medicine (EBM) in general, and randomized controlled trials (RCTs) in particular, as the best sources of evidence.  Why are teachers so allergic to the scientific method?  It’s the best approach we have to determine educational policy.

Leo:  Slow down Danielle.  You may recall that a sophisticated RCT convincingly showed the benefits of smaller class sizes in elementary schools in Tennessee, but these results were not replicated when California reduced its elementary school class size, because there was neither room in the schools for additional classrooms nor enough highly skilled teachers to staff them.  This example is used by Nancy Cartwright and Jeremy Hardie in their book on evidence-based policy to show that the effectiveness of a policy depends, not simply on the causal properties of the policy itself, but on what they call a “team” of support factors (2012, p. 25).  If any one of these factors were present in the setting where the trial was conducted but is lacking in the new setting, the beneficial results will not be produced.  This lack of generalizability, by the way, afflicts RCTs in medicine too.  For instance, the populations enrolled in teaching hospital RCTs are often different from those visiting their primary care physician.

Danielle:  I have to agree that educators often extrapolate from RCTs in a way that’s unwarranted, but aren’t you, in effect, calling for the collection of more and better evidence rather than urging the abandonment of the scientific approach? After all, the Cartwright and Hardie book wasn’t written to urge policy makers to throw out the scientific approach and go back to so-called expert or professional judgment, which may be no more than prejudice or illicit extrapolation based on anecdotal evidence.

Leo:  You seem to be willing to trust the data more than the judgment of seasoned professionals.  Don’t you think the many hours of observing and teaching in actual classrooms counts for anything?

Danielle: If your district has to decide which program to run, the robotics or the traditional, do you really want to base your decision on the judgment of individual teachers or principals, to say nothing of parents and interested citizens?  In medicine and other fields, meta-analyses have repeatedly shown that individual clinical judgment is more prone to error than decisions based on statistical evidence (Howick, 2011, Chap. 11). And, as I already mentioned, many of the accepted therapies of earlier periods, from bloodletting to hormone replacement therapy, turned out to be worse for the patients than doing nothing at all.

Now why should education be different?  How many teachers have “known” that the so-called whole-word method was the best approach to teaching reading, and years later found out from well-designed studies that this is simply untrue?  How many have “known” that children learn more in smaller classes?  No, even if RCTs aren’t always the way to go, I don’t think we can leave these things to individual educator judgment; it’s too fallible.

And you may not need to run a new study on the question at issue. There may already be relevant, rigorous studies out there, testing more exploratory classrooms against more traditional ones in the science and math area for middle-schoolers. I recommend you look at the federal government’s What Works website, which keeps track of trial results you can rely on.

Leo:  I’ve looked at many of these studies, and I have two problems with them. They typically use test score gains as their indicator of durable educational value, but these can be very misleading. Incidentally, there’s a parallel criticism of the use of “surrogate end points” like blood levels in medical trials. Moreover, according to Goodhart’s Law—he was a British economist—once a measure becomes a target, it ceases to be a good indicator. This is precisely what happens in education: the more intensely we focus on raising a test score by means of increasing test preparation, to say nothing of cheating—everything from making sure the weakest students don’t take the test to outright changing students’ answers—the less it tells us about what kids can do or will do outside the test situation.

Danielle:  Of course we need to be careful about an exclusive reliance on test scores.  But you can’t indict an entire approach because it has been misused on occasion.

Leo: I said there was a second problem, as well. You recall that what impressed us about the robotics classroom was the level of involvement of the kids. When you go into a traditional classroom, the kids will always look at the door to see who’s coming in. That’s because they’re bored and looking for a bit of distraction. Now ask yourself, what does that involvement betoken? It means that they’re learning that science is more than memorizing a bunch of facts, that math is more than solving problems that have no meaning or salience in the real world, that using knowledge and engaging in hard thinking in support of a goal you’ve invested in is one of life’s great satisfactions. Most kids hate math, and the American public is one of the most scientifically illiterate in the developed world. Why is that? Perhaps it’s because kids have rarely used the knowledge they are acquiring to do anything besides solve problems set by the teacher or textbook.

I’m sure you recall from your studies in philosophy of education the way John Dewey called our attention in Experience and Education to what he called, the greatest pedagogical fallacy, “the notion that a person learns only the particular thing he is studying at the time” (Dewey, 1938, p. 48).  Dewey went on to say that what he called “collateral learning,” the formation of “enduring attitudes” was often much more important than the particular lesson, and he cited the desire to go on learning as the most important attitude of all.  Now when I look at that robotics classroom, I can see that those students are not just learning a particular lesson, they’re experiencing the excitement that can lead to a lifetime of interest in science or engineering even if they don’t select a STEM field to specialize in.

Danielle:  I understand what Dewey is saying about “collateral learning.”  In medicine as you know, side effects are never ignored, and I don’t deny that we in education are well behind our medical colleagues in that respect.  Still, I’m not sure I agree with you and Dewey about what’s most important, but suppose I do.  Why are you so sure that the kids’ obvious involvement in the robotics activity will generate the continuing motivation to keep on learning?  Isn’t it possible that a stronger mastery of subject matter will have the very impact you seek?  How can we tell?  We’d need to first find a way to measure that “collateral learning,” then preferably conduct a randomized, controlled trial, to determine which of us is right.

Leo:  I just don’t see how you can measure something like the desire to go on learning, yet, and here I agree with Dewey, it may be the most important educational outcome of all.

Danielle:  This is a measurement challenge to be sure, but not an insurmountable one.  Here’s one idea: let’s track student choices subsequent to particular experiences.  For example, in a clinical trial comparing our robotics class with a conventional middle school math and science curriculum, we could track student choices of math and science courses in high school.  Examination of their high school transcripts could supply needed data.  Or we could ask whether students taking the robotics class in middle school were more likely (than peers not selected for the program) to take math courses in high school, to major in math or science in college, etc.  Randomized, longitudinal designs are the most valid, but I admit they are costly and take time.

Leo: I’d rather all that money went into the kids and classrooms.

Danielle:  I’d agree with you if we knew how to spend it to improve education.  But we don’t, and if you’re representative of people involved in making policy at the school district level, to say nothing of teachers brainwashed in the Deweyian approach by teacher educators, we never will.

Leo:  That’s a low blow, Danielle, but I haven’t even articulated my most fundamental disagreement with your whole approach, your obsession with measurement and quantification, at the expense of children and education.

Danielle:  I’m not sure I want to hear this, but I did promise to hear you out.  Go ahead.

Leo:  We’ve had about a dozen years since the passage of the No Child Left Behind Act to see what an obsessive focus on test scores looks like and it’s not pretty.  More and more time is taken up with test-prep, especially strategies for selecting right answers to multiple-choice questions.  Not a few teachers and principals succumb to the temptation to cheat, as I’m sure you’ve read.  Teachers are getting more demoralized each year, and the most creative novice teachers are finding jobs in private schools or simply not entering the profession.  Meanwhile administrators try to game the system and spin the results.  But even they have lost power to the statisticians and other quantitatively oriented scholars, who are the only ones who can understand and interpret the test results.  Have you seen the articles in measurement journals, the arcane vocabulary and esoteric formulas on nearly every page?

And do I have to add that greedy entrepreneurs with a constant eye on their bottom lines persuade the public schools to outsource more and more of their functions, including teaching itself? This weakens our democracy and our sense of community. And even after all those enormous social costs, the results on the National Assessment of Educational Progress are basically flat and the gap between black and white academic achievement—the impetus for passing NCLB in the first place—is as great as it ever was.

Danielle:  I agree that it’s a dismal spectacle.  You talk as if educators had been adhering to Evidence Based Policy for the last dozen years, but I’m here to tell you they haven’t and that’s the main reason, I’d contend, that we’re in the hole that we are.  If educators were less resistant to the scientific approach, we’d be in better shape today.  Physicians have learned to deal with quantitative data, why can’t teachers, or are you telling me they’re not smart enough?  Anyhow, I hope you feel better now that you’ve unloaded that tirade of criticisms.

Leo:  Actually, I’m not through, because I don’t think we’ve gotten to the heart of the matter yet.

Danielle:  I’m all ears.

Leo:  No need to be sarcastic, Danielle.  Does the name Michel Foucault mean anything to you?  He was a French historian and philosopher.

Danielle:  Sure, I’ve heard of him.  A few of my colleagues in the school of education, though not in my department, are very enthusiastic about his work.  I tried reading him, but I found it tough going.  Looked like a lot of speculation with little data to back it up.  How is his work relevant?

Leo:  In Discipline and Punish, Foucault described the way knowledge and power are intertwined, especially in the human sciences, and he used the history of the school examination as a way of illustrating his thesis (1975/1995, pp. 184-194). Examinations provide a way of discovering “facts” about individual students, and a way of placing every student on the continuum of test-takers. At the same time, the examination provides the examiners, scorers, and those who make use of the scores ways to exercise power over kids’ futures. Think of the Scholastic Assessment Tests (SATs), for example. Every kid’s score can be represented by a number and kids can be ranked from those scoring a low of 600 to those with perfect scores of 2400. Your score is a big determinant of which colleges will even consider you for admission. But that’s not all: Foucault argued that these attempts to quantify human attributes create new categories of young people and thereby determine how they view themselves. If you get a perfect SAT score, or earn “straight As” on your report card, that becomes a big part of the way others see you and how you see yourself. And likewise for the mediocre scorers, the “C” students, or the low scorers who not only have many futures closed to them, but may see themselves as “losers,” “failures,” “screw-ups.” A minority may, of course, resist and rebel against their placement on the scale—consider themselves to be “cool,” unlike the “nerds” who study—but that won’t change their position on the continuum or their opportunities. Indeed, it may limit them further as they come to be labeled “misfits,” “teens at-risk,” “gang-bangers,” and the like. But, and here’s my main point, this entire system is only possible due to our willingness to represent the capabilities and limitations of children and young people by numerical quantities. It’s nothing but scientism, the delusive attempt to force the qualitative, quirky, amazingly variegated human world into a sterile quantitative straitjacket. You recall the statement that has been attributed to Einstein, don’t you: “Not everything that can be counted counts, and not everything that counts can be counted.” I just don’t understand your refusal to grasp that basic point; it drives me mad.

Danielle:  Calm down, Leo.  I don’t disagree that reducing individuals to numbers can be a problem; every technology has a dark side, I’ll grant you that, but think it through.  Do you really want to go back to a time when college admissions folks used “qualitative” judgments to determine admissions?  When interviewers could tell from meeting a candidate or receiving a letter of recommendation if he were a member of “our crowd,” would know how to conduct himself at a football game, cocktail party, or chapel service, spoke without an accent, wasn’t a grubby Jew or worse, a “primitive” black man or foreign-born anarchist or communist.  You noticed I used the masculine pronoun:  Women, remember, were known to be incapable of serious intellectual work, no data were needed, the evidence was right there in plain sight.  Your Foucault is not much of a historian, I think.

Leo:  We have some pretty basic disagreements here.  I know we each believe we’re right.  Is there any way to settle the disagreement?

Danielle:  I can imagine a comprehensive, longitudinal experiment in a variety of communities, some of which would carry out EBEP while control communities would eschew all use of quantification. After a long enough time, maybe twenty years, we’d take a look at which communities were advancing and which were regressing. Of course, this is just an idea; no one would pay to actually have it done.

Leo:  But even if we conducted such an experiment, how would we know which approach was successful?

Danielle:  We shouldn’t depend on a single measure, of course.  I suggest we use a variety of measures, high school graduation rate, college attendance, scores on the National Assessment of Educational Progress, SATs, state achievement tests, annual income in mid-career, and so on.  And, of course, we could analyze the scores by subgroups within communities to see just what was going on.

Leo:  Danielle, I can’t believe it.  You haven’t listened to a word I’ve said.

Danielle:  What do you mean?

Leo:   If my favored policy is to eschew quantitative evidence altogether, wouldn’t I be inconsistent if I permitted the experiment to be decided by quantitative evidence, such as NAEP scores or worse, annual incomes?  Don’t you recall that I reject your fundamental assumption—that durable, significant consequences of educational experiences can be represented as quantities?

Danielle:  Now I’m the one that’s about to scream. Perhaps you could assess a single student’s progress by looking at her portfolio at the beginning and end of the school year. How, in the absence of quantification, though, can you evaluate an educational policy that affects many thousands of students? Even if you had a portfolio for each student, you’d still need some way to aggregate them in order to be in a position to make a judgment about the policy or program that generated those portfolios. You gave me that Einstein quote to clinch your argument. Well, let me rebut that with a quotation by another famous and original thinker, the Marquis de Condorcet, an eighteenth century French philosopher and social theorist. Here’s what he said: “if this evidence cannot be weighted and measured, and if these effects cannot be subjected to precise measurement, then we cannot know exactly how much good or evil they contain” (Condorcet, 2012, p. 138). The point remains true, whether in education or medicine. If you can’t accept it, I regret to say, we’ve reached the end of the conversation.

References

Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. Oxford and New York: Oxford University Press.

Condorcet, M. (2012). The sketch. In S. Lukes and N. Urbinati (Eds.), Political writings (pp. 1-147). Cambridge: Cambridge University Press.

Dewey, J. (1938/1973). Experience and education. New York: Collier Macmillan Publishers.

Foucault, M. (1995). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). New York: Vintage Books. (Original work published 1975)

Howick, J. (2011). The philosophy of evidence-based medicine. Oxford: Blackwell Publishing.


What’s The Evidence on School Devices and Software Improving Student Learning?

The historical record is rich in evidence that research findings have played a subordinate role in making educational policy. Often, policy choices were (and are) political decisions. There was no research, for example, that found establishing tax-supported public schools in the early 19th century was better than educating youth through private academies. No studies persuaded late-19th century educators to import the kindergarten into public schools. Ditto for bringing computers into schools a century later.

So it is hardly surprising, then, that many others, including myself, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts grow larger when one notes what has occurred in clinical medicine with its frequent U-turns in evidence-based “best practices.”

Consider, for example, how new studies have often reversed prior “evidence-based” medical procedures.

*Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.

*Getting a PSA test to determine whether the prostate gland showed signs of cancer was “best practice” for men over the age of 50 until 2012, when advisory panels of doctors recommended that no one under 55 should be tested and that older men might be tested if they had family histories of prostate cancer.

And then there are new studies that recommend women have annual mammograms beginning not at age 50, as recommended for decades, but at age 40. Or research syntheses (sometimes called “meta-analyses”) that showed anti-depressant pills worked no better than placebos.

These large studies done with randomized clinical trials–the current gold standard for producing evidence-based medical practice–have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients), often diminished faith in medical research, leaving most of us–and I include myself–stuck as to which healthy practices we should continue and which we should drop.

Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion.

Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health, have to sort out what studies say and how those studies were done from what the media report, and then how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?

If research into clinical medicine produces doubt about evidence-based practice, consider the difficulties of educational research–already playing a secondary role in making policy and practice decisions–when findings from long-term studies of innovation conflict with current practices. Look, for example, at computer use to transform teaching and improve student achievement.

Politically smart state and local policymakers believe that buying new tablets loaded with new software, deploying them to K-12 classrooms, and watching how the devices engage both teachers and students is a “best practice.” The theory is that student engagement through the device and software will dramatically alter classroom instruction and lead to improved achievement. The problem, of course–sure, you already guessed where I was going with this example–is that evidence of this electronic innovation transforming teaching and achievement growth is not only sparse but also unpersuasive even when some studies show a small “effect size.”

Turn now to the work of John Hattie, a professor at the University of Auckland (NZ), who has synthesized the research on different factors that influence student achievement and measured their impact on learning. For example, over the last two decades, Hattie has examined over 180,000 studies accumulating 200,000 “effect sizes” measuring the influence of teaching practices on student learning. All of these studies represent over 50 million students.

He established which factors influenced student learning–the “effect size”–by ranking each from 0.1 (hardly any influence) to 1.0, or a full standard deviation–almost a year’s growth in student learning. He found that the “typical” effect size of an innovation was 0.4.
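For readers unfamiliar with the metric, here is a minimal sketch of how one such effect size (Cohen’s d, the difference between groups measured in standard-deviation units) is computed. The scores below are simulated, not taken from anything Hattie reviewed, and the true effect is set at 0.4 to match his “typical innovation” benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated post-test scores for an innovation group and a control
# group; means and spread are assumptions chosen so the true effect
# is 0.4 standard deviations, Hattie's "typical innovation" bar.
treatment = rng.normal(52, 10, 200)
control = rng.normal(48, 10, 200)

# Cohen's d: difference in means divided by the pooled standard deviation
n_t, n_c = len(treatment), len(control)
pooled_sd = np.sqrt(((n_t - 1) * treatment.var(ddof=1) +
                     (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

print(f"Effect size d = {d:.2f}")  # hovers around the 0.40 threshold
```

An effect of 1.0 on this scale is a full standard deviation, the “almost a year’s growth” mentioned above; the computer-use figures that follow can be read against the same yardstick.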

To compare how different classroom approaches shaped student learning, Hattie used the “typical” effect size (0.4) to mean that a practice reached the threshold of influence on student learning (p. 5). From his meta-analyses, he then found that class size had a .20 effect (slide 15) while direct instruction had a .59 effect (slide 21). He also found that teacher feedback had an effect size of .72 (slide 32). Moreover, teacher-directed strategies of increasing student verbalization (.67) and teaching meta-cognition strategies (.67) had substantial effects (slide 32).

What about student use of computers (p. 7)? Hattie included many “effect sizes” of computer use from distance education (.09), multimedia methods (.15), programmed instruction (.24), and computer-assisted instruction (.37). Except for “hypermedia instruction” (.41), all fell below the “typical” effect size (.40) of innovations improving student learning (slides 14-18). Across all studies of computers, then, Hattie found an overall effect size of .31 (p. 4).

According to Hattie’s meta-analyses, then, introducing computers to students will fall well below other instructional strategies that teachers can and do use. Will Hattie’s findings convince educational policymakers to focus more on teaching? Not as long as political choices trump research findings.

Even if politics were removed from the decision-making equation, there would still remain the major limitation of most educational and medical research. Few studies answer the question: under what conditions and with which students and patients does a treatment work? That question seldom appears in randomized clinical trials. And that is regrettable.

 

 


Don’t Help Your Kids With Their Homework (Dana Goldstein)

Dana Goldstein is a Brooklyn-based journalist, a Schwartz Fellow at the New America Foundation, and a Puffin Fellow at the Nation Institute. This article appeared March 19, 2014, in The Atlantic online.

One of the central tenets of raising kids in America is that parents should be actively involved in their children’s education: meeting with teachers, volunteering at school, helping with homework, and doing a hundred other things that few working parents have time for. These obligations are so baked into American values that few parents stop to ask whether they’re worth the effort.

Until this January, few researchers did, either. In the largest-ever study of how parental involvement affects academic achievement, Keith Robinson, a sociology professor at the University of Texas at Austin, and Angel L. Harris, a sociology professor at Duke, mostly found that it doesn’t. The researchers combed through nearly three decades’ worth of longitudinal surveys of American parents and tracked 63 different measures of parental participation in kids’ academic lives, from helping them with homework, to talking with them about college plans, to volunteering at their schools. In an attempt to show whether the kids of more-involved parents improved over time, the researchers indexed these measures to children’s academic performance, including test scores in reading and math.

What they found surprised them. Most measurable forms of parental involvement seem to yield few academic dividends for kids, or even to backfire—regardless of a parent’s race, class, or level of education.

Do you review your daughter’s homework every night? Robinson and Harris’s data, published in The Broken Compass: Parental Involvement With Children’s Education, show that this won’t help her score higher on standardized tests. Once kids enter middle school, parental help with homework can actually bring test scores down, an effect Robinson says could be caused by the fact that many parents may have forgotten, or never truly understood, the material their children learn in school.

Similarly, students whose parents frequently meet with teachers and principals don’t seem to improve faster than academically comparable peers whose parents are less present at school. Other essentially useless parenting interventions: observing a kid’s class; helping a teenager choose high-school courses; and, especially, disciplinary measures such as punishing kids for getting bad grades or instituting strict rules about when and how homework gets done. This kind of meddling could leave children more anxious than enthusiastic about school, Robinson speculates. “Ask them ‘Do you want to see me volunteering more? Going to school social functions? Is it helpful if I help you with homework?’” he told me. “We think about informing parents and schools what they need to do, but too often we leave the child out of the conversation.”

One of the reasons parental involvement in schools has become dogma is that the government actively incentivizes it. Since the late 1960s, the federal government has spent hundreds of millions of dollars on programs that seek to engage parents—especially low-income parents—with their children’s schools. In 2001, No Child Left Behind required schools to establish parent committees and communicate with parents in their native languages. The theory was that more active and invested mothers and fathers could help close the test-score gap between middle-class and poor students. Yet until the new study, nobody had used the available data to test the assumption that close relationships between parents and schools improve student achievement.

While Robinson and Harris largely disproved that assumption, they did find a handful of habits that make a difference, such as reading aloud to young kids (fewer than half of whom are read to daily) and talking with teenagers about college plans. But these interventions don’t take place at school or in the presence of teachers, where policy makers exert the most influence—they take place at home.

What’s more, although conventional wisdom holds that poor children do badly in school because their parents don’t care about education, the opposite is true. Across race, class, and education level, the vast majority of American parents report that they speak with their kids about the importance of good grades and hope that they will attend college. Asian American kids may perform inordinately well on tests, for example, but their parents are not much more involved at school than Hispanic parents are—not surprising, given that both groups experience language barriers. So why are some parents more effective at helping their children translate these shared values into achievement?

Robinson and Harris posit that greater financial and educational resources allow some parents to embed their children in neighborhoods and social settings in which they meet many college-educated adults with interesting careers. Upper-middle-class kids aren’t just told a good education will help them succeed in life. They are surrounded by family and friends who work as doctors, lawyers, and engineers and who reminisce about their college years around the dinner table. Asian parents are an interesting exception; even when they are poor and unable to provide these types of social settings, they seem to be able to communicate the value and appeal of education in a similarly effective manner.

As part of his research, Robinson conducted informal focus groups with his undergraduate statistics students at the University of Texas, asking them how their parents contributed to their achievements. He found that most had few or no memories of their parents pushing or prodding them or getting involved at school in formal ways. Instead, students described mothers and fathers who set high expectations and then stepped back. “These kids made it!” Robinson told me. “You’d expect they’d have the type of parental involvement we’re promoting at the national level. But they hardly had any of that. It really blew me away.”

Robinson and Harris’s findings add to what we know from previous research by the sociologist Annette Lareau, who observed conversations in homes between parents and kids during the 1990s. Lareau found that in poor and working-class households, children were urged to stay quiet and show deference to adult authority figures such as teachers. In middle-class households, kids learned to ask critical questions and to advocate for themselves—behaviors that served them well in the classroom.

Robinson and Harris chose not to address a few potentially powerful types of parental involvement, from hiring tutors or therapists for kids who are struggling, to opening college savings accounts. And there’s the fact that, regardless of socioeconomic status, some parents go to great lengths to seek out effective schools for their children, while others accept the status quo at the school around the corner.

Although Robinson and Harris didn’t look at school choice, they did find that one of the few ways parents can improve their kids’ academic performance—by as much as eight points on a reading or math test—is by getting them placed in the classroom of a teacher with a good reputation. This is one example for which race did seem to matter: white parents are at least twice as likely as black and Latino parents to request a specific teacher. Given that the best teachers have been shown to raise students’ lifetime earnings and to decrease the likelihood of teen pregnancy, this is no small intervention.

All in all, these findings should relieve anxious parents struggling to make time to volunteer at the PTA bake sale. But valuing parental involvement via test scores alone misses one of the ways in which parents most impact schools. Pesky parents are often effective, especially in public schools, at securing better textbooks, new playgrounds, and all the “extras” that make an educational community come to life, like art, music, theater, and after-school clubs. This kind of parental engagement may not directly affect test scores, but it can make school a more positive place for all kids, regardless of what their parents do or don’t do at home. Getting involved in your children’s schools is not just a way to give them a leg up—it could also be good citizenship.



Filed under raising children

The Seductive Lure of Big Data: Practitioners Beware

Big Data beckons policymakers, administrators, and teachers with eye-popping analytics and snazzy graphics. Here is Darrell West of the Brookings Institution laying out the case for teachers and administrators to use Big Data:

Twelve-year-old Susan took a course designed to improve her reading skills. She read short stories and the teacher would give her and her fellow students a written test every other week measuring vocabulary and reading comprehension. A few days later, Susan’s instructor graded the paper and returned her exam. The test showed that she did well on vocabulary, but needed to work on retaining key concepts.

In the future, her younger brother Richard is likely to learn reading through a computerized software program. As he goes through each story, the computer will collect data on how long it takes him to master the material. After each assignment, a quiz will pop up on his screen and ask questions concerning vocabulary and reading comprehension. As he answers each item, Richard will get instant feedback showing whether his answer is correct and how his performance compares to classmates and students across the country. For items that are difficult, the computer will send him links to websites that explain words and concepts in greater detail. At the end of the session, his teacher will receive an automated readout on Richard and the other students in the class summarizing their reading time, vocabulary knowledge, reading comprehension, and use of supplemental electronic resources.

In comparing these two learning environments, it is apparent that current school evaluations suffer from several limitations. Many of the typical pedagogies provide little immediate feedback to students, require teachers to spend hours grading routine assignments, aren’t very proactive about showing students how to improve comprehension, and fail to take advantage of digital resources that can improve the learning process. This is unfortunate because data-driven approaches make it possible to study learning in real time and offer systematic feedback to students and teachers (Darrell West, Brookings Institution).

West sees teachers and administrators as data scientists: mining information, tracking individual student and teacher performance, and making subsequent changes based on the data. Unfortunately, much of the hype for using Big Data ignores time, place, and people.
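To make West’s scenario concrete, here is a minimal sketch of the kind of instant-feedback loop he describes, written in Python. It is an illustration under assumptions, not any real product’s design: the item fields, the remediation links, and the “national average” benchmark are all hypothetical stand-ins.

```python
# A minimal, hypothetical sketch of the instant-feedback loop West describes.
# The item bank, benchmark figures, and report fields are all stand-ins.
import time
from dataclasses import dataclass, field

@dataclass
class Item:
    prompt: str
    answer: str
    help_url: str                 # remediation link shown when the item is missed
    national_pct_correct: float   # assumed peer benchmark, 0.0-1.0

@dataclass
class SessionReport:
    seconds_on_task: float = 0.0
    results: list = field(default_factory=list)   # (prompt, correct?) pairs

def run_quiz(items, get_response, report):
    """Pose each item, give instant feedback, and log results for the teacher."""
    for item in items:
        start = time.monotonic()
        response = get_response(item.prompt + " ")
        report.seconds_on_task += time.monotonic() - start
        correct = response.strip().lower() == item.answer.lower()
        report.results.append((item.prompt, correct))
        if correct:
            print(f"Correct. About {item.national_pct_correct:.0%} of students "
                  "nationwide answer this one correctly.")
        else:
            # For missed items, point to supplemental material, as in West's scenario.
            print(f"Not quite. See {item.help_url} for a fuller explanation.")

def teacher_readout(report):
    """The automated per-student summary the teacher would receive."""
    n_correct = sum(1 for _, ok in report.results if ok)
    return (f"items correct: {n_correct}/{len(report.results)}, "
            f"time on task: {report.seconds_on_task:.0f}s")

# Example (hypothetical item): run_quiz([Item("Define 'tenet':", "principle",
#     "https://example.com/tenet", 0.62)], input, SessionReport())
```

Even a toy version makes the appeal visible. Note, though, that nothing in the automated readout knows anything about the classroom it came from, which is precisely the problem taken up below.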

Context matters.

Consider what occurred when Nick Bilton, a journalist and adjunct professor at New York University, designed a project for his graduate students in a course called “Telling Stories with Data, Sensors, and Humans.” Could sensors, Bilton and his students asked, be reporters, collecting information and telling what happened?

The students built small electronic devices with sensors that could detect motion, light, and sound. They then asked a straightforward question: did students in the high-rise classroom building use the elevators more than the stairs, and did they shift from one to the other during the day? They set the devices in some elevators and stairwells. Instead of a human counting students, a machine did.
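For illustration, the machine’s job reduces to a few lines of counting. The sketch below makes assumptions for the sake of the example; the event-log format, timestamps, and locations are invented, not Bilton’s actual setup.

```python
# Hypothetical reduction of the sensors' job: tally motion events by hour
# and location. The timestamps and locations below are invented examples.
from collections import Counter
from datetime import datetime

events = [
    ("2014-04-08 09:12:00", "elevator"),
    ("2014-04-08 09:14:30", "elevator"),
    ("2014-04-08 21:05:10", "stairwell"),
]

tally = Counter()
for stamp, location in events:
    hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").hour
    tally[(hour, location)] += 1      # e.g. (9, "elevator") -> 2

for (hour, location), count in sorted(tally.items()):
    print(f"{hour:02d}:00  {location:<9}  {count}")
```

A person with a clipboard would produce the same table; the devices simply never tire. What happened next shows what the table leaves out.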

Bilton and his graduate students were delighted with the results. They found that students seemed to use the elevators in the morning “perhaps because they were tired from staying up late, and switch to the stairs at night, when they became energized.”

That night, when Bilton was leaving the building, the security guard who had watched the students set up the devices in the elevators asked him what happened with the experiment. Bilton said that the sensors had captured students taking the elevators in the morning and the stairs at night. The security guard laughed and told Bilton: “One of the elevators broke down a few evenings last week, so they had no choice but to use the stairs.”

Context matters.

In mining data, using analytics, and reading dashboards (see DreamBox) for classrooms and schools, the setting, the time, and the quality of adult-student relationships also count. For Darrell West and others who see teachers and students profiting from instantaneous computer feedback, context is absent. They fail to consider that the age-graded school is required to do far more than stuff information into students. They fail to reckon with the age-old wisdom (and the research that supports it) that effective student learning, beyond test scores, resides in the relationship between student and teacher.

And when it comes to evaluating individual teachers on the basis of student test scores, the context of teaching, as complex an endeavor as can be imagined and one that researchers have only partially mapped, trumps Big Data even when it is amply funded by Big Donors.

Big Data, of course, will be (and is) used by policymakers and administrators for tracking school and district performance and accountability. But the seductive lure of mining data and creating glossy dashboards will entice many educators to grab numbers to shape lessons and judge individual students and teachers. If they do succumb to the seduction without considering the complex context of teaching and learning, they risk making mistakes that will harm both teachers and students.


Filed under school reform policies