Monthly Archives: January 2013

Uncertainty in School Reform: The Untold Story

Listen to Michael Mann, a climatologist at Penn State University, who talked about the science behind global warming and rising sea levels.

Any honest assessment of the science is going to recognize that there are things we understand pretty darn well and things that we sort of know. But there are things that are uncertain and there are things we just have no idea about whatsoever. (Nate Silver, The Signal and the Noise, 2012, p. 409).

Ah, if only federal and state policymakers, researchers, and reform-minded educators would see the “science” of school reform in K-12 and higher education in similar terms. “Science” is in quote marks because there is no reliable, much less valid, theory of school reform that can predict events or improvements in schools and classrooms.

Still, for K-12 children and youth there are “things we understand pretty darn well.”

*We understand that socioeconomic status of children’s families has a major influence on students’ academic achievement.

*We understand that a knowledgeable and skilled teacher is the most important in-school factor in student learning.

*We understand that students vary widely in their interests, abilities, and motivation.

*We understand that children and youth develop at different speeds as they move through the age-graded school.

Readers can add other “things we understand pretty darn well.”

Then there are “things that we sort of know.” Such as: some schools with largely low-income, minority enrollments outperform not only similarly situated schools but also schools that serve middle- and upper-middle-income families.

Or that graduates who collect more educational credentials over time will likely earn more in their lifetimes than those who fail to finish high school and college.

Or that curriculum standards can outline what students have to learn, but the tests–and the rewards and penalties tied to those tests–that measure whether students have reached those standards have a powerful influence on what teachers teach and what students learn.

And there are “things that are uncertain” in schooling children and youth. Consider that over the past quarter-century, the dominant goal for public schools has been college preparation. This is a political decision driven by the fear that unskilled U.S. graduates, unable to work in ever-changing companies, will cause the nation to fall behind in global competition. The primary way of ensuring that administrators and teachers achieve that goal has been regulatory structures of federal and state accountability accompanied by high-stakes incentives and penalties. This, too, is a political decision, made for the same reason given above.

Uncertainty has arisen because some parents, researchers, teachers, and policymakers have contested both the goal and structures. Why? Because so many high school graduates have failed to meet college admission standards. Because so many who do go to college drop out after a year or two. Because costs of going to college climb annually. Because K-12 curriculum standards and accountability rules have narrowed what is taught to that which is tested.

Thus, conflicts over the goal and over regulatory accountability have created many doubts about the wisdom of these reforms, especially in light of mounting evidence that overall academic achievement and the achievement gap between whites and minorities remain pretty much stuck where they were when the reforms were enacted.

And there is uncertainty over the value added to student learning by new technologies, ranging from children using 1:1 iPads or laptops to students learning online. Uncertainty increases ambivalence among policymakers, researchers, practitioners, and parents over whether deploying expensive hardware, software, and professional development increases academic achievement, has little effect, or even diminishes it.

Finally, “there are things we just have no idea about whatsoever.” Can anyone predict with any confidence the probability, for example, that when Common Core standards and their accompanying tests kick in by 2015, students’ academic achievement will rise, propelling college-ready students into higher education, through graduation, and into jobs that will grow the U.S. economy? Can anyone predict with confidence, to cite another reform, what will occur as a consequence of evaluating teachers on the basis of how well or poorly their students do on standardized tests?

Or whether MOOCs will “revolutionize” higher education.

No one can.

The fact is that few who style themselves as school and university reformers in positions of authority sort out publicly what they know from what they don’t know. Or say out loud their doubts about reform proposals under consideration or just launched. Name me a top-level decision-maker who has publicly stated qualms about the worth of a reform he or she championed. Instead, policymakers and pundits talk from their bully pulpits and deliver overconfident predictions, often overstating what will happen and underestimating the difficulties and complexities of making changes.

Unlike Michael Mann, a scientist who publicly says what is known, what is unknown, and when uncertainty is present, reform-driven educators and non-educators, working with little theory and even less scientifically gathered evidence, bang the drum daily for transforming schools and higher education. They do so without telling recipients what they know, what they do not know, and what is uncertain in these innovations, or revealing to any extent the political, social, and economic costs of putting the reform into practice. School reform is filled with ambiguity and guesswork. That is the untold story.

18 Comments

Filed under Reforming schools, school reform policies

“Irrational Exuberance”: The Case of the MOOCs

Federal Reserve Board Chairman Alan Greenspan said in 1996 that the high-flying stock market was an instance of “irrational exuberance.”


Nearly two decades later, were he so inclined to inspect the swift expansion of elite universities into sponsoring Massive Open Online Courses (MOOCs), he might have said pretty much the same thing.

Certainly, there is “exuberance.” The hype and the constant flow of words like “revolutionary” and “transformational” speak to university officials becoming trumpeters for expanding the reach of top-notch professors and brand-name institutions into every corner of the world where there is an Internet connection. The inspired hopes of university-based entrepreneurs to monetize these courses and bring in fresh dollars drive some professors to leave tenured positions and start new companies. The dream of pedagogically driven faculty to use MOOCs to spread their expert knowledge to thousands of hungry students and, at the same time, to enhance student-centered collaboration through networks where students come together to share ideas and help one another spurs professors to finally convert typical lecture courses into truly learner-centered experiences. So there is exuberance.

And “irrational”? The Harvards, MITs, Dukes, Berkeleys, and Stanfords of higher education now offer these free courses to anyone in the world. They give certificates of completion to the few who finish, but no credit toward a degree. Offering credit would be a lose-lose proposition for elite institutions. Even irrationality has its limits.

Where the incoherence and mindlessness enter the picture is in the current thinking among university officials and digital-minded faculty that delivering degrees or college-level courses to anyone with an Internet connection will revolutionize U.S. higher education institutions. While teaching is clearly an important activity of universities, doing research and publishing studies is the primary function. The structures (e.g., departmental organization, professional schools) and incentives (e.g., tenure, promotion) of top- and middle-tier institutions determine how faculty are rewarded and how they allocate their time. MOOCs will do nothing to alter those structures and incentives. If anything, MOOCs could accelerate and deepen the split between tenure-line faculty and adjuncts, with the latter taking on these larger courses for a pittance. To think that such offerings by professors will transform higher education gives new meaning to the word “flaky.”

The phrase, then, “irrational exuberance,” came back to me when I listened a few days ago to four enthusiastic Stanford University professors talk about their experiences teaching online courses including MOOCs. These professors in mechanical engineering, computer science, management science, and human biology told a filled auditorium of faculty and graduate students of their excitement, hard work, and surprises in re-engineering their courses to teach  MOOCs that included Stanford students in face-to-face classrooms.

The professors’ enthusiasm was infectious. They were animated in their remarks and energized by the experience. I was delighted to see professors so engaged in figuring out how best to teach a particular topic, how to get their students across the globe to work as teams on projects, and how they creatively went beyond pre-recorded lectures.

As I listened to them tell how satisfying these experiences were, and how students across the globe gave feedback on how appreciative they were to learn from the professor and their classmates, it occurred to me that I was hearing a great deal about student and professorial satisfaction but not about what students learned.

Had there been more time for the Q & A after the presentations, perhaps the issue of student learning would have come up. Or the often-asked question in K-12 when an innovation is launched: does it work? Is it effective? Have students learned?

If degree of student and professor satisfaction is a measure in evaluating higher education courses, the anecdotal evidence on MOOCs thus far points to much student delight, the enjoyment of absorbing new knowledge, and professorial exhilaration. Both professors and students appear engaged in offering and taking these courses. Widespread student participation in course activities and collaboration in completing tasks seem to have increased, according to professors’ reports. But satisfaction, engagement, and networking, while important in and of themselves, cannot be assumed to have led to student learning. Such outcomes fall short of answering the basic question: Have students who have completed MOOCs–recall that more than three-quarters of students drop out of these courses–learned and applied the knowledge and skills? That is the question asked repeatedly in K-12 schools. Why not for MOOCs?

To duck this basic question becomes another instance of “irrational exuberance.”


52 Comments

Filed under how teachers teach

Students and Teachers Again: Cartoons

For millennia, teachers and students have loved and hated one another. The excitement of kindergartners eager to tell their teacher what happened at home last night and the teenager with head down on desk waiting to hear the buzzer end 54 minutes of listening to the teacher–all are part of the relationship between students and teachers. The classroom was (and is) a place where students watched the second hand of the clock move ever so slowly, a room where adults and children eagerly learned from each other, and, yes, a site for humor. Here is another collection of cartoons showing the funny side of teachers and students interacting.

For readers interested in looking at the monthly posts of cartoons in this blog, see: “Digital Kids in School,” “Testing,” “Blaming Is So American,” “Accountability in Action,” “Charter Schools,” “Age-graded Schools,” “Students and Teachers,” “Parent-Teacher Conferences,” “Digital Teachers,” “Addiction to Electronic Devices,” “Testing, Testing, and Testing,” “Business and Schools,” “Common Core Standards,” “Problems and Dilemmas,” “Digital Natives (2),” and “Online Courses.”

Enjoy!

[Eight cartoons appear here, including “2nd grade teacher,” “learning slow process,” “prepare for future,” “stud eval tchr,” and “The C is inconsistent.”]

6 Comments

Filed under how teachers teach

The Past Lives on in the Present: Customized Learning then and Now

Pupils are working on their own. The second and third grade reading class of 63 pupils … is using a learning center and two adjoining rooms. Two teachers and the school librarian act as coordinators and tutors as the pupils proceed with the various materials prepared by the school’s teachers and … developer, The Learning and Research Development Center at the U. of Pittsburgh. Each pupil sets his own pace. He is listening to records and completing workbooks. When he has completed a unit of work, he is tested, the test is corrected immediately, and if he gets a grade of 85% or better he moves on. If not, the teacher offers a series of alternative activities to correct the weakness, including individual tutoring. There are no textbooks. There is virtually no lecturing by the teacher to the class as a whole. Instead, she is busy observing the child’s progress, evaluating his tests, writing prescriptions, and instructing individually or in small groups of pupils who need help.*

The school is Oakleaf elementary near Pittsburgh (PA) and the time is 1965. Implemented across all grades, the innovative program was called Individually Prescribed Instruction or IPI (el_197203_tillman-2, p. 495).

Nearly a half-century ago, before there were desktop computers, university developers and school-site practitioners championed IPI as a program in which students moved through materials at different paces until each achieved mastery of the content and skills and then continued on to the next unit of study. Observers found students engaged in the process, pleased with the prompt feedback, and delighted that each could move at his or her own pace rather than wait for the entire class to move to the next lesson.

Sound familiar?

It should. IPI was a more sophisticated version of psychologist B.F. Skinner’s “teaching machine” in the 1950s that evolved from “programmed learning” engineered by psychologist Sidney Pressey in the 1920s.
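Strip away the records and workbooks, and the decision rule in that 1965 account is the same one that today’s self-paced software encodes. Here is a minimal sketch of that mastery rule in Python; only the 85 percent cutoff comes from the account quoted above, while the unit name and the list of remediation activities are hypothetical:

```python
# A sketch of the IPI-style mastery rule: test after each unit, advance at 85%
# or better, otherwise prescribe alternative activities. The unit name and the
# remediation list are hypothetical; only the 85% threshold is from the 1965 account.

MASTERY_THRESHOLD = 0.85

def next_step(unit: str, test_score: float) -> str:
    """Return the 'prescription' for a pupil who has just finished a unit test."""
    if test_score >= MASTERY_THRESHOLD:
        return f"advance past '{unit}' to the next unit"
    return f"repeat '{unit}' with alternative activities (tutoring, records, workbooks)"

print(next_step("short-vowel sounds", 0.90))  # advance
print(next_step("short-vowel sounds", 0.70))  # remediate
```

Whether the rule lives on a teacher’s prescription pad or in a Learning Lab’s software, the logic is identical; only the machinery has changed.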


IPI was a prototype for subsequent online learning once electronic devices became widespread in K-12 and higher education. The DNA of present-day blended learning (e.g., Rocketship schools’ Learning Labs, Carpe Diem schools) and MOOCs in  higher education reaches back nearly a century into  “programmed learning,” “teaching machines,” and  IPI.


Alright, Larry, you made the self-evident point that earlier renditions of self-paced, individualized learning appeared nearly a century ago. So what?

At that time and now, those various incarnations of individualized, self-paced learning sprang from competing ideologies of what children and youth should learn and how they should learn it. Student-centered vs. teacher-centered ways of teaching and learning (and mixes of both) have competed for time and space in K-12 schools for the past two centuries. Teacher-centered instruction (e.g., lecture, discussion, textbook, worksheets, quizzes, and tests) has won time and again and dominates classroom lessons. Yet student-centered instruction has challenged conventional practice repeatedly.

Connecting students to the real world, students working in small groups and individually, teachers acting as guides and mentors, and a host of other student-centered activities that blend different subjects and skills (e.g., math, science, art, and poetry) moved to center stage of public attention on different occasions (e.g., progressive curriculum and instruction in the 1920s; open classrooms in the late 1960s). But after a brief fling in the spotlight, these approaches receded to the wings. Of course, there have been hybrids of both, with many teachers hugging the middle of the spectrum of instruction, but advocates for each pedagogical ideology continue to contest one another even today, when K-12 battles erupt over different kinds of math content, reading textbooks, and early childhood programs.

In higher education, rival ideas about teaching and learning, albeit under wraps, drive  different versions of MOOCs.  The answer, then, to my “so what” question is that  pedagogical ideologies that drove earlier versions of individualized, self-paced instruction are active in current versions of MOOCs.

The prevailing version of MOOCs offers traditional, technology-enriched, teacher-centered instruction, that is, lecturing to large groups of people, asking occasional questions, running online discussion sections, and giving multiple-choice exams. Such MOOCs possess advantages of efficiency in delivering information, especially in particular subjects (e.g., procedural knowledge in computer science and mathematics). Computer science departments at Stanford, MIT, and Harvard launched the initial MOOC offerings, not the humanities, social sciences, or natural sciences, according to Keith Devlin, a Stanford University mathematician currently teaching a MOOC on mathematical thinking.

There are other ways of teaching these courses, however. Some enthusiasts for MOOCs see opportunities for non-traditional forms of teaching where students learn from one another, form online communities, crowd-source answers to problems, and create networks that distribute learning in ways that seldom occur in bricks-and-mortar colleges and universities. To Devlin, “the key to real learning has always been bi-directional human-human interaction (even better in some cases, multi-directional, multi-person interaction), not unidirectional instruction.” In other words, student-centered or learner-centered pedagogy.

So these rival ideologies contend with one another in MOOCs as they did when “teaching machines” and IPI were garnering public attention. Chances are that efficiencies in cost and delivery will drive MOOCs toward teacher-centered instruction, as has occurred in the past. I would hope, however, that attention would also be paid to (and discussions held about) MOOCs that deliver the benefits of student-centered ways of learning.

 

___________

*Thanks to Justin Reich and Dan Meyer for pointing me to IPI as a past reform that lives in the present.

22 Comments

Filed under how teachers teach, school reform policies, technology use

Algorithms, Accountability, and Professional Judgment (Part 3)

So much of the public admiration for Big Data and algorithms avoids answering basic questions: Why are some facts counted and others ignored? Who decides what factors get included in an algorithm? What does an algorithm whose prediction might lead to someone getting fired actually look like? Without a model, a theory in mind, every table, each chart, each datum gets counted, threatens privacy, and, yes, becomes overwhelming. A framework for quantifying data and making algorithmic decisions based on the data is essential. Too often, however, such frameworks are kept secret or, sadly, missing in action.

Here is the point I want to make. Big Data are important; algorithmic formulas are important. They matter. Yet without data gatherers and analyzers using frameworks that make sense of the data, that ask questions about the what and why of phenomena, all the quantifying, all the regression equations and analysis can send researchers, policymakers, and practitioners down dead ends. Big Data become worthless and algorithms lead to bad decisions.

Few champions of Big Data have pointed to its failures. All the finely crafted algorithms available to hedge fund CEOs, investment bankers, and Federal Reserve officials before 2008, for example, were of no help in predicting the popping of the housing bubble, the near-death of the financial sector, the spike in unemployment, and the very slow recovery after the financial crisis erupted.

So Big Data, as important as it is in determining which genes trigger certain cancers, shaping strategies for marketing products, and identifying possible terrorists, is hardly by itself a solution to curing diseases, stemming losses in advertising revenue, or stopping terrorist actions. Frameworks for understanding data, asking the right questions, constant scrutiny, if not questioning, of the algorithms themselves, and professional judgment are necessities in making decisions once data are collected.

In the private sector the business model of decision-making (i.e., profit-making and returns on investment) drives interpretations of data, asking questions, and making organizational changes. It works most of the time but when it fails, it fails big. That business model has migrated to public schools.

In the past half-century, the dominant model for local, state, and federal decision-making in schools has become anchored in student performance on standardized tests. It is the “business model” grafted onto schools. If students score above the average, the model says that both teachers and students are doing their jobs well. If test scores fall below average, then changes have to be made in schools.

State and federal accountability regulations and significant penalties have been put into place (e.g., No Child Left Behind) that have set this model of test-score-driven schooling in concrete. Algorithms that distribute benefits and penalties for individual students, teachers, and schools are the steel rods embedded in the concrete that strengthen the entire structure, leaving little room for teachers, principals, and superintendents to use their professional judgment.

Nonetheless, in fits and starts, the entire regulatory model of performance-driven schooling has come slowly under scrutiny by some policymakers, researchers, practitioners, and parents. Teachers, administrators, and parents have spoken out against too much standardized testing and against constricting what students learn. These protests point to fundamental reasons why criticism of Big Data and algorithmic decision-making has taken hold and is slowly spreading.

First, unlike private sector companies, tax-supported schools are a public enterprise and accountable to voters. If high-stakes decisions driven by algorithms are made (e.g., grading a school “F” and closing it), those decisions need to be made in public, and the algorithm-driven rules on, say, evaluating teacher effectiveness (e.g., value-added measures in Los Angeles and Washington, D.C.) need to be transparent, easily understandable to voters and parents, and subject to public scrutiny.

Google, Facebook, and other companies keep their algorithms secret because they say revealing the formula they have created would give their competition valuable information that would hurt company profits. School districts, however, are public institutions and cannot keep algorithms buried in jargon-laden technical reports that are released months after consequential decisions on schools and teachers are made (see Measuring Value Added in DC 2011-2012).

Second, within a regulatory, test-driven structure, teacher and principal expertise about students, how much and how they learn, school organization, innovation, and district policies has been miniaturized and shrink-wrapped into making changes in lessons based on test results delivered to individual schools.

Teacher and principal judgments about the academic and non-academic performance of students matter a great deal. Such data appear in parent-teacher conferences, in retention decisions when teachers meet with principals, and in the portfolio of observations about individual students that teachers compile over the course of a school year. Teachers and principals make rule-like decisions of their own, but those rules are seldom quantified and put into formulas. It is called professional judgment. Such data and thinking seldom, if ever, show up in official judgments about individual students, a class, or a school. They are absent from the mathematical formulas that judge student, teacher, and school performance.

Yet there are instances when professional judgments about regulations and tests make news. Two high school faculties in Seattle refused to give the Measures of Academic Progress (MAP) test recently. New York principals have lobbied the state legislature against standardized testing.

Such rebellions, and there will be more, are desperate measures. They reveal how the professional expertise of those hired to teach and lead schools has been ignored and degraded. They also reveal the political difficulties facing professionals who decide to take on a regulatory, test-driven model that uses Big Data and algorithmic decision-making. To the public, protesters can appear to be against being held accountable and for preserving their jobs.

That is a must-climb political mountain, but it can be conquered. In questioning policymakers’ use of standardized tests to determine student futures, grade schools, and judge teacher effectiveness, teachers and principals end up questioning the entire model of regulatory accountability and algorithmic decision-making borrowed from the private sector. It is about time.

16 Comments

Filed under school reform policies

Policy by Algorithm (Jeff Henig), Part 2

 Jeff Henig is a professor of political science and education at Teachers College, Columbia University. This post appeared July 27, 2011 on Rick Hess’s blog in Education Week.

There is a satisfying solidity to the term “data-based” decision-making. But basing decisions on data is not the same thing as basing them on knowledge. Data are collections of nuggets of information. Compared with “soft” rationales for action–opinion, intuition, conventional wisdom, common practice–they are hard, descriptive, often quantitative.

When rich and high quality sets of data are mined by sophisticated and dynamically-adjusted algorithms, the results can be powerful. Google’s search engine is the prime example here. Google scores web pages based on indicators like the number of other websites that link to the page, the popularity and selectivity of those linking sites, how long the target site has existed, and how prominently on the site the search keywords appear. The resulting score determines the order in which sites are listed in response to Google searches–and listing position is critical. According to one source, the top spot typically attracts 20 percent to 30 percent of the search page’s clicks, with a very sharp diminishing return to those listed further down.

A February 2011 change in the Google algorithm was estimated to shift about $1 billion in revenue.

Little wonder that policy technocrats are drawn to the algorithm as a way to improve governmental performance. In the education world, well-tuned algorithms promise to tell us which students need what kind of interventions, which schools are good candidates for closure, which teachers should get tenure, how much a teacher should be paid. I have come to think of this as policy by algorithm.

Policy by algorithm relies on statistical formulas that sift through existing indicators to generate a predicted outcome score, then assigns automatic rewards or penalties to individuals or organizations that fail to meet the expected targets. In education, this can work by penalizing teachers whose value-added scores leave them in the bottom 10 percent or 20 percent over a one-, two-, or three-year period.
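A minimal sketch of that kind of cutoff rule, in Python, makes the mechanics plain; the teacher names, scores, and ten percent cutoff below are illustrative assumptions, not any district’s actual value-added model:

```python
# Sketch of a "policy by algorithm" rule: average each teacher's value-added
# scores over several years, rank them, and flag the bottom decile for
# penalties. All names, scores, and the 10% cutoff are illustrative.

def flag_bottom_decile(value_added_by_teacher: dict[str, list[float]],
                       cutoff: float = 0.10) -> list[str]:
    """Return teachers whose multi-year average score falls in the bottom decile."""
    averages = {t: sum(scores) / len(scores)
                for t, scores in value_added_by_teacher.items()}
    ranked = sorted(averages, key=averages.get)        # lowest averages first
    n_flagged = max(1, int(len(ranked) * cutoff))      # at least one teacher flagged
    return ranked[:n_flagged]

scores = {"teacher_a": [0.2, -0.1, 0.3],
          "teacher_b": [-0.8, -0.5, -0.9],
          "teacher_c": [0.5, 0.4, 0.6]}
print(flag_bottom_decile(scores))  # ['teacher_b']
```

Nothing in the rule asks how the scores were produced or what else is known about the flagged teacher; the formula simply runs.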

Education is not the only sector where policy by algorithm is currently in vogue.

The Obama administration in May announced a new plan to hold hospitals more accountable for outcomes involving Medicare patients. The formula to be applied in judging their efficiency would look not only at the cost of services while the patient is hospitalized, but also at the cost of services performed by doctors and other health care providers in the 90 days after the patient leaves the hospital. Under the plan, a hospital that conducted, say, a hip replacement would get a lower reimbursement rate if the patient later needed follow-up for an infection, even if the infection develops weeks after the original operation.

But the high promise of policy by algorithm mutates into cause for concern when data are thin, algorithms theory-bare and untested, and results tied to laws that enshrine automatic rewards and penalties. Current applications of value-added models for assessing teachers, for example, enshrine standardized tests in reading and math as the outcomes of import primarily because those are the indicators on hand. A signature element of many examples of contemporary policy by algorithm, moreover, is their relative indifference to the specific processes that link interventions to outcomes; there is much we do not know about how and how much individual teachers contribute to their students’ long-term development, but legislators convince themselves that ignorance does not matter as long as the algorithm spits out a standard that has a satisfying gleam of technological precision.

Google makes up for what it might lack in theory and process-knowledge by continually tweaking its formula. The company makes about 500 changes a year, partly in response to feedback from organizations complaining that they have been unjustly “demoted,” but largely out of a continued need to stay ahead of others who keep trying to game the system in ways that will benefit their company or clients. State laws are unlikely to be so responsive and agile.

Both data and algorithms should be an important part of the process of making and implementing education policy, but they need to be employed as inputs into reasoned judgments that take other important factors into account. The last thing we need is accountability policies that undermine education as a profession or erode the elements of community and teamwork that mark and make good schools. But when law and policy outrun knowledge, the results are likely to be unanticipated, paradoxical, and occasionally perverse.

10 Comments

Filed under school reform policies

Big Data, Algorithms, and Professional Judgment in Reforming Schools (Part 1)

The crusade among reformers for data-driven decision-making in classrooms, schools, and districts didn’t just begin in the past decade. Its roots go back to Frederick Winslow Taylor’s “scientific management” movement a century ago. In the decade before World War I and through the 1930s, borrowing from the business sector where Taylorism reigned, school boards and superintendents adopted wholesale ways of determining educational efficiency, producing a Niagara Falls of data that policymakers used to drive the practice of daily schooling.

Before there were IBM punchcards, before there were the earliest computers, there were city-wide surveys, school scorecards, and  statistical tables recording the efficiency and effectiveness of principals, teachers, and students. And, yes, there were achievement test scores as well.

In Education and the Cult of Efficiency (1962), Raymond Callahan documents Newton (MA) superintendent Frank Spaulding telling fellow superintendents at the annual conference of the National Education Association in 1913 how he “scientifically managed” his district (Review of Callahan book). The crucial task, Spaulding told his peers, was for district officials to measure school “products or results” and thereby compare “the efficiency of schools in these respects.” What did he mean by products?

I refer to such results as the percentage of children of each year of age [enrolled] in school; the average number of days attendance secured annually from each child; the average length of time required for each child to do a given definite unit of work…(p. 69).

Spaulding and other superintendents measured in dollars and cents whether the teaching of Latin was more efficient than the teaching of English or history. They recorded how much it cost to teach vocational subjects vs. academic subjects.

What Spaulding described in Newton for increased efficiency (and effectiveness) spread swiftly among school boards, superintendents, and administrators.  Academic experts hired by districts produced huge amounts of data in the 1920s and 1930s describing and analyzing every nook and cranny of buildings, how much time principals spent with students and parents, and what teachers did in daily lessons.

That crusade for meaningful data to inform policy decisions about district and school efficiency and effectiveness continued in subsequent decades. The resurgence of a “cult of efficiency,” or the application of scientific management to schooling, appears in the current romance with Big Data and the onslaught of models that use algorithms to grade schools, rate individual teacher performance, and customize online lessons for students.

Just as efficiency-driven management began in the business sector a century ago, so too have contemporary business-driven practices of “analytics” harnessed computer capacity to process kilo-, mega-, giga-, tera-, and petabytes of data, filling policymakers determined to reform U.S. schools with confidence. Big Data, the use of complex algorithms, and data-driven decision-making in districts, schools, and classrooms have entranced school reformers. The use of these “analytics” and model-driven algorithms for grading schools, evaluating teachers, and finding the right lesson for the individual student has, sad to say, pushed teachers’ professional judgment off the cliff.


The point I want to make in this and subsequent posts on Big Data and models chock-full of algorithms is that using data to inform decisions about schooling is (and has been) essential to policymakers and practitioners. For decades, teachers, principals, and policymakers have used data, gathered systematically or on the run, to make decisions about programs, buildings, teaching, and learning. The data, however, had to fit existing models, conceptual frameworks–or theory, if you like–to determine whether the numbers, the stories, the facts explained what was going on. If they didn’t fit, some smart people developed new theories, new models, to make sense of those data.

In the past few years, tons of data about students, teachers, and results surround decision-makers. Some zealots for Big Data believe that all of these quantifiable data mean the end of theory and models.  Listen to Chris Anderson:

Out with every theory of human behavior, from linguistics to sociology…. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

Just as facts from the past do not speak for themselves and historians have to interpret them, neither do numbers speak for themselves.

The use of those data to inform and make decisions requires policymakers and practitioners to have models in their heads that capture the nature of schooling and of teaching and learning. From these models and Big Data, algorithms–mathematical rules for making decisions–spill out. Schooling algorithms derived from these models often aim to eliminate wasteful procedures and reduce costs–recall the “cult of efficiency”–without compromising quality. Think of computer-based algorithms to mark student essays. Or value-added measures to determine which teachers stay and which are fired. Or Florida grading each and every school in the state.
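To see how an algorithm “spills out” of a model, consider a sketch of a Florida-style A-to-F school grade; the indicators, weights, and cutoffs below are my assumptions for illustration, not the state’s actual formula:

```python
# Sketch of a school-grading algorithm: a model decides which indicators count
# (here, only proficiency and learning gains on state tests) and a rule converts
# the composite into a letter grade. Weights and cutoffs are hypothetical.

def grade_school(pct_proficient: float, pct_making_gains: float) -> str:
    composite = 0.5 * pct_proficient + 0.5 * pct_making_gains   # the "model"
    for cutoff, letter in [(80, "A"), (70, "B"), (60, "C"), (50, "D")]:
        if composite >= cutoff:
            return letter
    return "F"   # an "F" can trigger sanctions such as restaffing or closure

print(grade_school(pct_proficient=72, pct_making_gains=65))  # "C" (composite 68.5)
```

The model decides what counts (here, only two test-based indicators) and the cutoffs decide what happens; everything a teacher or principal knows that is not captured in those two numbers never enters the decision.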

The next post takes up making school policy by algorithm.

15 Comments

Filed under Reforming schools

Whose View of the Past Matters on School Reform?

History is more or less bunk. It is tradition. We want to live in the present, and the only history that’s worth a tinker’s dam is the history we make today.

In 1916, as the U.S. was gearing up to enter World War I, Henry Ford, who had applied new technologies to the mass manufacturing of cars while earning profits for his company, said those words. He wanted the kind of history that would speak to the present, not the school-taught accounts of kings, queens, generals, and diplomacy that students learned. To Ford, that kind of history was “more or less bunk.” He wanted a different history, one relevant to the here-and-now, one that could answer tough questions today (p. 1).

Today, political, military, social, economic, and education historians gather, analyze, and interpret facts to answer questions about the past as objectively as they can. The past, then, never speaks for itself in coughing up answers; historians establish facts, interpret the past, some even rendering their judgments, to inform the present.

Yet those in authority who make decisions then and now pursue a different view of the past.

Case in point. Before the housing bubble burst in 2008 and cascaded through the financial community here and abroad, leading to the crippling Great Recession, economists, investment bankers, Federal Reserve officials, the President of the U.S., and hundreds of other policymakers had been warned time and again about the housing boom. For example, Yale University economist Robert Shiller examined historical records dating back centuries–yes, centuries–when housing prices spiked and then plunged in the Netherlands, Norway, and other countries. As recently as the early 1990s, another housing bubble burst in Japan. These popped bubbles damaged those nations’ economies badly.

Shiller said the same thing had been occurring in the U.S. since the early 1990s. He told that to Federal Reserve officials; he gave interviews to network journalists; he wrote op-ed pieces. He talked to hedge fund CEOs and to top officials in investment banks, all of whom were hip-deep in packaging subprime mortgages for sale to investors even though few understood what was being bought and sold. When did he say all of these things? 2005. His research findings were ignored.

But the housing bubble did pop in 2008, and the nation’s financial near-collapse has led to high unemployment and a severely damaged economy that is just barely recovering in 2013.

Why did so few hedge fund managers, CEOs of financial institutions, and investors–much less top federal and state officials and legislators–heed these lessons from the past? Because these policy elites had a different view of the past in their heads. To them, accelerating housing prices were not a bubble; they were economic growth the American way. What happened elsewhere couldn’t happen in the U.S. because the U.S. was different. Rising housing prices were another mark of American exceptionalism. The U.S. had won wars with Britain, Mexico, and Spain in the 19th century, and twice defeated Germany in the 20th century (Vietnam was a forgettable error, while the 100-hour first Gulf War in 1991 was the historical pattern). U.S. capitalism had triumphed over the Soviet Union. That was the historical map these very smart people had in their heads. So why take heed of a Yale economist and other Cassandras warning about an economic debacle around the corner?

So the issue in front of policymakers who influence the economy–like those who seek school reform–is not that they ignore the past. They, like voters, taxpayers, and those interested in school reform such as practitioners, parents, and researchers, already have historical maps in their heads.

Years ago, David Tyack and I wrote about the history of school reform. We said:

Whether they are aware of it or not, all people use history (defined as an interpretation of past events) when they make choices about the present and future. The issue is not whether people use a sense of the past … but how accurate and appropriate are their historical maps. Are their inferences attentive to context and complexity? Are their analogies plausible? And how might alternative understandings of the past produce different visions of the future? (p. 7).

The questions we asked nearly two decades ago about the accuracy of the historical maps that reform-driven policymakers use in shaping the future of schools apply to K-12 and higher education rhetoric and action in either championing new technologies or using student test scores to evaluate teachers. Are the inferences policymakers make attentive to school contexts and complexity? Are the analogies plausible? Do other interpretations of past reforms contain different visions of the future?

After living through the housing bubble and its popping, and now listening to the rhetoric of those committed to technology transforming teaching and learning, I do not hear these questions being asked. The historical maps that advocates of “disruptive” technologies have in their heads do not permit such questions. Not asking these questions leads me to slightly amend Henry Ford: their history “is more or less bunk.”

2 Comments

Filed under school reform policies

Being a Physics Teacher and Father: The Story of Jeffrey Wright

The following article and YouTube selection come from a story written by Tara Parker-Pope published in the New York Times, December 24, 2012. It is an uncommon story of a gifted teacher whose life story becomes part of the physics lessons that he teaches. I saw this story on Joanne Jacobs’s blog, “Linking and Thinking on Education” (http://www.joannejacobs.com/).

Jeffrey Wright is well known around his high school in Louisville, Ky., for his antics as a physics teacher, which include exploding pumpkins, hovercraft and a scary experiment that involves a bed of nails, a cinder block and a sledgehammer.

But it is a simple lecture — one without props or fireballs — that leaves the greatest impression on his students each year. The talk is about Mr. Wright’s son and the meaning of life, love and family.

It has become an annual event at Louisville Male Traditional High School (now coed, despite its name), and it has been captured in a short documentary, “Wright’s Law,” which recently won a gold medal in multimedia in the national College Photographer of the Year competition, run by the University of Missouri.

The filmmaker, Zack Conkle, 22, a photojournalism graduate of Western Kentucky University and a former student of Mr. Wright’s, said he made the film because he would get frustrated trying to describe Mr. Wright’s teaching style. “I wanted to show people this guy is crazy and really amazing,” Mr. Conkle said in an interview.

The beginning of the film shows Mr. Wright, now 45, at his wackiest. A veteran of 23 years teaching, he does odd experiments involving air pressure and fiery chemicals — and one in which he lies on a bed of nails with a cinder block on his chest. A student takes a sledgehammer and swings, shattering the block and teaching a physics lesson about force and energy.

But each year, Mr. Wright gives a lecture on his experiences as a parent of a child with special needs. His son, Adam, now 12, has a rare disorder called Joubert syndrome, in which the part of the brain related to balance and movement fails to develop properly. Visually impaired and unable to control his movements, Adam breathes rapidly and doesn’t speak.

Mr. Wright said he decided to share his son’s story when his physics lessons led students to start asking him “the big questions.”

“When you start talking about physics, you start to wonder, ‘What is the purpose of it all?’ ” he said in an interview. “Kids started coming to me and asking me those ultimate questions. I wanted them to look at their life in a little different way — as opposed to just through the laws of physics — and give themselves more purpose in life.”

Mr. Wright starts his lecture by talking about the hopes and dreams he had for Adam and his daughter, Abbie, now 15. He recalls the day Adam was born, and the sadness he felt when he learned of his condition.

“All those dreams about ever watching my son knock a home run over the fence went away,” he tells the class. “The whole thing about where the universe came from? I didn’t care. … I started asking myself, what was the point of it?”

All that changed one day when Mr. Wright saw Abbie, about 4 at the time, playing with dolls on the floor next to Adam. At that moment he realized that his son could see and play — that the little boy had an inner life. He and his wife, Nancy, began teaching Adam simple sign language. One day, his son signed “I love you.”

In the lecture, Mr. Wright signs it for the class: “Daddy, I love you.” “There is nothing more incredible than the day you see this,” he says, and continues: “There is something a lot greater than energy. There’s something a lot greater than entropy. What’s the greatest thing?”

“Love,” his students whisper.

“That’s what makes the ‘why’ we exist,” Mr. Wright tells the spellbound students. “In this great big universe, we have all those stars. Who cares? Well, somebody cares. Somebody cares about you a lot. As long as we care about each other, that’s where we go from here.”

As the students file out of class, some wipe away tears and hug their teacher.

Mr. Wright says it can be emotionally draining to share his story with his class. But that is part of his role as a physics teacher.

“When you look at physics, it’s all about laws and how the world works,” he told me. “But if you don’t tie those laws into a much bigger purpose, the purpose in your heart, then they are going to sit there and ask the question ‘Who cares?’

“Kids are very spiritual — they want a bigger purpose. I think that’s where this story gives them something to think about.”

Mr. Wright says the lecture has one other purpose: to inspire students to pursue careers in science and genetic research.

“That’s where I find hope in my students,” he said. “Maybe if I can instill a little inspiration to my students to go into these fields, who knows? We might be able to come up with something we can use to help Adam out one day.”

If you wish to see the 12-minute YouTube excerpt from the documentary on Jeffrey Wright, it is at:

http://www.youtube.com/watch?v=CbMH3XtAMqg&feature=player_embedded

25 Comments

Filed under how teachers teach