Category Archives: higher education

Do We Know What History Students Learn? (Wineburg, Breakstone, and Smith)

“Sam Wineburg is the Margaret Jacks Professor of Education and of history (by courtesy) at Stanford University. Joel Breakstone is the executive director and Mark Smith is director of assessment at the Stanford History Education Group.”

This article appeared in Inside Higher Ed, April 3, 2018

“What are you going to do with that — teach?” Uttered with disdain, it’s a question history majors have been asked many times. Clio’s defenders have a response. The head of the American Historical Association says that the study of history creates critical thinkers who can “sift through substantial amounts of information, organize it, and make sense of it.” A university president asserts that the liberal arts endow students with the “features of the enlightened citizen” who possesses “informed convictions … and the capacity for courageous debate on the real issues.” Historians pride themselves on the evidence for their claims.

So, what’s the evidence?

Not much, actually. Historians aren’t great at tracking what students learn. Sometimes they even resent being asked. Recently, however, the winner of the Bancroft Prize, one of history’s most distinguished awards, washed the profession’s dirty laundry in public. The article’s title: “Five Reasons History Professors Suck at Assessment.”

Anne Hyde described what happened when accreditors asked her colleagues to document what students learned. They paid little heed to the requests — that is, until Colorado College’s history department flunked its review. Committed teachers all, her colleagues “had never conducted assessment in any conscious way beyond reporting departmental enrollment numbers and student grade point averages.”

Among many college history departments, this is routine. To address the issue of assessment, the American Historical Association in 2011 set out on a multiyear initiative to define what students should "be able to do at the end of the major." Eight years, dozens of meetings and hundreds of disposable cups later, the Tuning Project produced a set of ambitious targets for student learning. But when it came to assessing those goals, the project left a big question mark.

Which is one of the reasons why we were convinced of the need to create new assessments. With support from the Library of Congress, we came up with short tasks in which history students interpreted sources from the library’s collection and wrote a few sentences justifying their response. For example, one assessment, “The First Thanksgiving,” presented students with a painting from the beginning of the 20th century and asked if the image of lace-aproned Pilgrim women serving turkey to bare-chested Indians would help historians reconstruct what may have transpired in 1621 at the supposed feast between the Wampanoag and English settlers.

[Image: "The First Thanksgiving" painting]

 

In the March issue of the Journal of American History, we describe what happened when we gave our assessments to students at two large state universities. On one campus, we quizzed mostly first-year students satisfying a distribution requirement. All but two of 57 ignored the 300-year time gap between the Thanksgiving painting and the event it depicts. Instead, they judged the painting on whether it matched their preconceptions, or simply took its contents at face value — an answer we dubbed the “picture’s worth a thousand words” response.

We weren’t terribly surprised. When we tested high school students on these tasks, they struggled, too, and many of these college students were in high school only months earlier. But what would happen, we wondered, if we gave our tasks to college juniors and seniors, the majority of whom were history majors and all of whom had taken five or more history courses? Would seasoned college students breeze through tasks originally designed for high school?

What we found shocked us. Only two in 49 juniors and seniors explained why it might be a problem to use a 20th-century painting to understand an event from the 17th century. Another one of our assessments presented students with excerpts from a soldier’s testimony before the 1902 Senate Committee investigating the war in the Philippines. We asked how the source provided evidence that “many Americans objected to the war.” Rather than considering what might prompt a congressional hearing, students mostly focused on the document’s content at the expense of its context. Rare were responses — only 7 percent — that tied the testimony to the circumstances of its delivery. As one student explained, “If there hadn’t been such a huge opposition by Americans to this war, I don’t believe that the investigation would have occurred.”

We suffer no illusions that our short exercises exhaust the range of critical thinking in history. What they do is provide a check on stirring pronouncements about the promised benefits of historical study. In an age of declining enrollments in history classes, soaring college debt and increased questions about what’s actually learned in college, feel-good bromides about critical thinking and enlightened citizenship won’t cut it. Historians offer evidence when they make claims about the past. Why should it be different when they make claims about what’s learned in their classrooms?

 



Filed under higher education, how teachers teach

Observing College Professors Teach

I came to Stanford University in 1981. After I had been at Stanford for five years, a new dean asked me to serve as his Associate Dean. Having been a superintendent for seven years before coming to Stanford and having tasted the privileged life of a full professor, I had no inclination to return to being an administrator whose influence on tenured colleagues was, at best, sorely limited and, at worst, non-existent. The Dean wanted me badly enough that he and I negotiated terms: a higher salary, since I would be working twelve months rather than nine (it is, after all, a private institution where everything is negotiated); I would serve only two years; I could teach at least one or two courses each year I served; and I would get a sabbatical quarter after completing the second year. OK, I said.

What did I do?

I had to ensure that all of my colleagues taught at least four courses over three quarters; some did not, and I had to badger them to do so. I handled students' dissatisfaction with particular professors' poor teaching or their being habitually inattentive to students' work. I followed up on doctoral students' complaints about the unavailability of their advisers, and I represented the Dean on occasions when he could not attend campus meetings or social events. So with the help of a skillful administrative secretary, the first year went smoothly.

The second year I had an idea. University professors seldom get observed as they teach except by their students. As a superintendent I had observed over a thousand teachers in my district over the years. Even prior to that I was a supervisor of intern history teachers. Observe and discuss observations with teachers, I could do.

I sent out a personal letter (this was before email became standard communication) to each of my 36 colleagues asking them if they wanted me to observe one of their classes and meet afterwards to discuss what I had seen. I made clear that I would make no judgment on their class but describe to them what I saw and have a conversation around what they had intended to happen in the lesson, what they thought had occurred, and what I had observed. Nothing would be written down (except for my notes which I shared with each faculty member). It would be a conversation. I did ask them to supply me with the readings that students were assigned for the session I observed and what the professor wanted to accomplish during the hour or 90-minute session.

Of the 36 who received the letter, 35 agreed (the 36th came to me in the middle of the year and asked me to observe his class). None of them–yes, that is correct–none had ever been observed before by anyone in the School of Education for purposes of having a conversation about their teaching. Two had been observed by me and a former Associate Dean because of student complaints; I had discussed those complaints with each professor and then observed lectures and discussions they had conducted. Both of them invited me to their classes when I wrote my subsequent letter. So for each quarter of the school year, I visited two professors a week. Each scheduled a follow-up conversation with me that we held in their office.

What happened?

I did observe 36 colleagues. For me, it was a fine learning experience. I got to read articles on subject matter of which I knew only a smattering (e.g., the economics of education, adolescent psychological development, standardized test development). I heard colleagues lecture, saw them discuss readings from their syllabi, and picked up new knowledge and ways of teaching graduate students that I had not tried in my own courses.

As for my colleagues, a common response during the conversations that followed the observations was gratitude for an experience they had not had as professors. Simply talking about the mechanics of a lecture or discussion, what they thought had worked and what had not, the surprises that popped up during the lesson: all of that was new for nearly all of the faculty. A few asked me to return, and we negotiated return visits. Overall, I felt, and seemingly most of my colleagues felt similarly, that the experience was worthwhile because they and I wanted to talk about the ins and outs of teaching and had lacked opportunities to do so in our careers as professors.

Those conversations over the year got me thinking more deeply about why universities like Stanford preach the importance of teaching–the rhetoric is omnipresent. Moreover, professors and graduate students receive annual teaching awards, and there are programs to help professors to improve their teaching. Yet the University had not created the conditions for faculty to share with colleagues the how and what of their teaching through observation and discussion of lectures and seminars.

That year as Associate Dean, sitting in on faculty lectures and seminars, led me on an intellectual journey plumbing a question that nagged at me as I observed and conversed with colleagues: how come universities say teaching is important, yet all of the structures and actual (not symbolic) rewards in tenure, promotion, and salary go to research? To answer that question, I did a historical study of teaching and research at Stanford in two departments: history and the School of Medicine. In completing How Scholars Trumped Teachers: Change without Reform in University Curriculum, Teaching, and Research, 1890-1990, I learned how universities like Stanford have structures and incentives that ensure teaching will be subordinate to the primary tasks of researching and publishing.

To my knowledge, no observations of professors and conversations about teaching have occurred in the Graduate School of Education since 1987-1988.

 

 

 


Filed under higher education, how teachers teach

Whatever Happened To MOOCs?

The splash began in 2012 when Massive Open Online Courses were touted as the coming revolution in higher education.

Wait, Larry, that was only five years ago, a mere blip in the life-cycle of an educational innovation. Why are you including MOOCs when you have featured posts asking "whatever happened to" half-century-old innovations such as Open Classrooms, Total Quality Management, and Behavioral Objectives?

With advances in digital technology and social media, the life cycle of a "disruptive innovation" or a "revolutionary" program has sped up so much that what used to take decades to stick or slip away now occurs in the metaphorical blink of an eye. So whatever happened to MOOCs?

Where Did the Idea Originate?

One answer is that MOOCs are the next stage of what began as correspondence courses in the late 19th century for those Americans who wanted to expand their knowledge and found going to college was next to impossible. From home-delivered lessons to professors on television delivering lectures to online courses since the early aughts, MOOCs evolved from the DNA of correspondence courses.

Another answer is that in 2001, the Massachusetts Institute of Technology opened up its list of courses for anyone to take online at no cost. Through OpenCourseWare, professors' syllabi, assignments and videotaped lectures were made available to everyone with an Internet connection.

And a third answer is that in 2008, two Canadian professors, George Siemens and Stephen Downes, offered a course through the University of Manitoba that became the first officially labeled MOOC, "Connectivism and Connected Knowledge," expanding a regular class they taught for 25 students to over 2,200 off-campus adults and students who took it for free on Internet-connected computers.

All three answers suggest that the lineage of MOOCs is rooted in higher education's efforts to educate students who lacked access to colleges and universities.

What erupted in 2012 was a lava flow of MOOCs from elite U.S. universities, accompanied by hyperbolic language and promises that the future of higher education would be open to anyone with a laptop. Since 2012, that hype cycle has dipped into the Trough of Disillusionment and is only now edging upward on the Slope of Enlightenment. Verbal restraint and tamed predictions of slow growth, smart adaptations, and commercial specialization have become the order of the day. And, fortunately, so has a humility about the spread and staying power of innovations initially hyped on steroids. All in five years.

What is a MOOC?

Taught by experts in the field, a Massive Open Online Course in higher education is accessible and free to anyone with an Internet connection. College students, working adults not registered at a college or university, and others who simply want information about a topic that interests them take these courses. See a brief video made at the beginning of the MOOC innovation that explains what they are.

What Problems Did MOOCs Intend to Solve?

Limited access to the knowledge and skills offered in higher education, and the high cost of going to universities. MOOCs offer broader access to students who, because of geography, age, cost, or family obligations, could not take courses. Now anyone with a computer can learn what they want to learn. MOOCs are, as one reporter put it, "Laptop U."

Do MOOCs Work?

Depends upon what someone means by "work." Since the usual measures of "success" in taking courses are attendance, grades, test scores, and similar outcomes, only one of these familiar measures has been applied to MOOCs: how many students completed the course? Attrition has been very high. About ten percent of enrolled students in the early years of MOOCs did all of the assignments, communicated with course assistants, and took the final exam. Sorting out claims of "success" amid sky-high attrition rates has been an issue for both champions and skeptics of the innovation (see here, here, here, and here).

What Happened to MOOCs?

They are still around but strikingly downsized, and in the middle of being monetized and redirected. The initial cheerleaders for MOOCs, such as Sebastian Thrun, Daphne Koller, and Andrew Ng, formed companies (e.g., Udacity, Coursera) that stumbled badly and subsequently altered their business plans. Many of these founders also departed for greener pastures (see here, here, and here).

MOOCs persist, but as with so many other hyped innovations built on new technologies, what exists in 2017 is a slimmer, more tempered, and more corporate version awarding certificates and micro-credentials (see here and here).

 


Filed under higher education, technology use