Category Archives: higher education

The Puzzle of Similar Teaching in Universities and Schools: The Case of Technology Use

Why does so much teaching in K-12 schools and universities look the same over time? To be accurate, however, what appears as timeless stability and similarity in teaching has obscured incremental changes. Professors now ask more questions of students in lectures, organize more small group work, and make more use of new devices–from clickers to Moodle–than academics did a half-century ago. So, too, for K-12 teachers who have, again over time, made small but significant changes in their classroom teaching. There is more guided discussion, more group work, increased academic content in lower and upper grades, more adventurous teaching by larger fractions of teachers, and, yes, more and more teachers using high-tech devices for instruction.

Yet looking back on one’s experience in most university and secondary school classrooms, the teaching–even accounting for these incremental changes over the decades–sure looks like the same ol’, same ol’.

Here’s the heart of the puzzle: In universities, student attendance is voluntary; in K-12 schools, attendance is compulsory. Note also that the complexity of the subject matter, freedom of movement, course choices, student ages, and teachers’ deep knowledge of their subject are other critical markers that distinguish university classrooms from those in K-12 schools. Yet–and you knew there was a “yet” coming–with all of these essential differences, many studies point out the similarities in teaching, including the use of technology for instruction.

Technology Use in Universities

Academics use computers at home and in their offices to write, analyze data, communicate with colleagues, and compose syllabi and handouts for their courses. Personal accounts and surveys report again and again that most academics use computers and other technologies for routine tasks in laboratories, lecture halls, and data analysis (see here and here). A 2018 survey noted that 75 percent of responding faculty adopt and adapt new technologies in their instruction. That same survey reported that only 11 percent of professors opposed increased use of classroom technologies. Moreover, many professors blog, make podcasts, create web-based classes, and teach online courses.

Yet using computers and other new technologies to improve instruction has had little tangible effect on undergraduate classroom teaching or learning. The lecture has remained central to undergraduate instruction.

Except now lectures are often delivered through PowerPoint and similar software. According to a 2008 national student survey, 63 percent of professors use PowerPoint in their undergraduate courses. At some institutions, the percentage runs higher. Yet except for a small fraction of faculty, abundant high-tech hardware, software, and services have hardly made a difference in how professors teach and students learn in most undergraduate classrooms.

While the exact same statement cannot be made for K-12 teaching, there are enough similarities to make even the most ardent high-tech advocate wince.

Why?

Unlocking this puzzle of same ol’, same ol’ for university and school teaching requires different answers for each institution. For universities, look at institutional goals and organizational structure. Consider that a primary goal of universities is to produce knowledge (i.e., do research) and disseminate it (i.e., teach and publish). The structures and incentives to achieve that goal reward faculty in tenure and promotion for research productivity rather than effective teaching. To ensure that faculty have time to do research and publish, university administrators reduce teaching obligations by creating large lecture classes for undergraduate courses and small classes for graduate courses. Those goals, incentives, and structures shape how classes are organized and influence how professors teach.

Technology use in K-12

Rather than cite again all of the surveys, ethnographic studies, and reports of direct observation of classrooms over the past thirty years (e.g., Bebell, 2004), I will say that the evidence seems clear, at least to me: nearly all teachers endorse the use of technology for both administrative and instructional tasks, but prevailing use falls short of that endorsement. Nonetheless, an increasing fraction of teachers are integrating high-tech devices into their daily lessons. A larger group of teachers use laptops, desktops, and hand-held devices occasionally–say once a week–and now only a fraction of teachers in most districts, both urban and suburban, refrain from even minimal use–once a month or never.

Reasons for this frequency and type of use by K-12 teachers? When I and others (e.g., David K. Cohen) look at the organizational conditions of teaching in the age-graded school, the flaws in technological innovations and their implementation, and the lack of incentives for teachers to go the extra mile even when they endorse technology, it becomes understandable why there have been far more laggards than early adopters of technology among schoolteachers. But that is slowly changing.

Two crucial educational institutions differ in governance, organization, curriculum, and authority to compel attendance, yet show similar patterns in instruction and use of technology. Changes in both institutions continue to occur. Will the patterns of instruction diverge or remain the same ol’, same ol’?



Filed under higher education, how teachers teach, technology use

Professor Quits Teaching Because of Students’ Use of Technology in Class

The combination of computer use, Internet, and smart phone, I would argue, has changed the cognitive skills required of individuals…. The student can rapidly check on his or her smartphone whether the professor is right, or indeed whether there isn’t some other authority offering an entirely different approach. With the erosion of that relationship [between professor and students] goes the environment that nurtured it: the segregated space of the classroom where, for an hour or so, all attention was focused on a single person who brought all of his or her experience to the service of the group.

Tim Parks, 2019

In the above epigraph, taken from “The Dying Art of Instruction in the Digital Classroom,” novelist, literary scholar, and translator Tim Parks gives the reasons why he is leaving his professorship at the University Institute for Modern Languages in Milan, Italy. Parks describes his experience with students using devices in his translation classes:

In the late 1990s, I had my first experience of students bringing laptops into the classroom. At that time, there was no question of their having wifi connections. Since these were translation lessons, students argued that their computers were useful for the fifteen or twenty minutes when I invited them to translate a short paragraph. They translated better on their computers, they said; they could make corrections more easily.

Nevertheless, I noticed at once the tendency to hide behind the screen. Who could know whether a student was really taking notes or doing something else? The tippety-tapping of keyboards while one was speaking was distracting. I insisted laptops be kept closed except for the brief period of our translation exercise.

When the University renovated classrooms to include a laptop at each desk, Parks requested an “old-fashioned” classroom and got one–until the University had no more such classrooms for him to use. Bad as that was for Parks’ struggle with students using laptops during translation lessons, the advent of the smartphone did him in.

I continued to fight my fight and keep the laptops mainly closed, and I was holding my own pretty well I think, until the smartphone came into the classroom….

So I have thirty students behind computer screens attached to the Internet. If I sit behind my desk at the front of the class, or even stand, I cannot see their faces. In their pockets, in their hands, or simply open in front of them, they have their smartphones, their ongoing conversations with their boyfriends, girlfriends, mothers, fathers, or other friends very likely in other classrooms. There is now a near total interpenetration of every aspect of their lives through the same electronic device.

To keep some kind of purpose and momentum, I walked back and forth here and there, constantly seeking to remind them of my physical presence. But all the time the students have their instruments in front of them that compel their attention. While in the past they would frequently ask questions when there was something they didn’t understand—real interactivity, in fact—now they are mostly silent, or they ask their computers. Any chance of entering into that “passion of instruction” is gone. I decided it was time for me to go with it.

So Parks retired.

Is Parks’ experience as a professor in a Milan university vainly trying to cope with students’ use of electronic devices common? I cannot answer for the European professoriate, but there are data on U.S. faculty attitudes and actions when it comes to computers in classrooms.

There have been U.S. professors who have complained about student use of laptops and phones in their classrooms (see here, here, and here). Yet few leave a privileged job as Parks has done. Is he an anomaly, a singleton, or does he fall into one of the familiar categories that capture the range of classroom use by both professors and K-12 teachers?

For example, Everett Rogers divides users of an innovation–in this instance, classroom technologies–into groups. Put Parks in the laggard category.

[Figure: Everett Rogers’ diffusion of innovations curve, showing categories of adopters from innovators and early adopters to laggards]

A recent survey of U.S. faculty attitudes toward technology use in their classrooms, however, would place Parks in a tiny minority of just over 10 percent of faculty. That same survey reports that 90 percent of tenured professors describe themselves as early adopters or as inclined to adopt after seeing peers use classroom technologies effectively.


I cite Tim Parks’ experience as a professor to illustrate the many changes–I do not use the word “improvements”–that have occurred in higher education’s embrace of classroom technologies. That embrace has been duplicated and enlarged among K-12 teachers. I take up access, use, and results of putting technology in public schools in the next post.


Filed under higher education, how teachers teach, technology use

Do We Know What History Students Learn? (Wineburg, Breakstone, and Smith)

“Sam Wineburg is the Margaret Jacks Professor of Education and of history (by courtesy) at Stanford University. Joel Breakstone is the executive director and Mark Smith is director of assessment at the Stanford History Education Group.”

This article appeared in Inside Higher Ed, April 3, 2018

“What are you going to do with that — teach?” Uttered with disdain, it’s a question history majors have been asked many times. Clio’s defenders have a response. The head of the American Historical Association says that the study of history creates critical thinkers who can “sift through substantial amounts of information, organize it, and make sense of it.” A university president asserts that the liberal arts endow students with the “features of the enlightened citizen” who possesses “informed convictions … and the capacity for courageous debate on the real issues.” Historians pride themselves on the evidence for their claims.

So, what’s the evidence?

Not much, actually. Historians aren’t great at tracking what students learn. Sometimes they even resent being asked. Recently, however, the winner of the Bancroft Prize, one of history’s most distinguished awards, washed the profession’s dirty laundry in public. The article’s title: “Five Reasons History Professors Suck at Assessment.”

Anne Hyde described what happened when accreditors asked her colleagues to document what students learned. They paid little heed to the requests — that is, until Colorado College’s history department flunked its review. Committed teachers all, her colleagues “had never conducted assessment in any conscious way beyond reporting departmental enrollment numbers and student grade point averages.”

Among many college history departments, this is routine. To address the issue of assessment, the American Historical Association in 2011 set out on a multiyear initiative to define what students should “be able to do at the end of the major.” Eight years, dozens of meetings and hundreds of disposable cups later, the Tuning Project produced a set of ambitious targets for student learning. But when it came to assessing these goals, they left a big question mark.

Which is one of the reasons why we were convinced of the need to create new assessments. With support from the Library of Congress, we came up with short tasks in which history students interpreted sources from the library’s collection and wrote a few sentences justifying their response. For example, one assessment, “The First Thanksgiving,” presented students with a painting from the beginning of the 20th century and asked if the image of lace-aproned Pilgrim women serving turkey to bare-chested Indians would help historians reconstruct what may have transpired in 1621 at the supposed feast between the Wampanoag and English settlers.

[Image: “The First Thanksgiving,” an early 20th-century painting]


In the March issue of the Journal of American History, we describe what happened when we gave our assessments to students at two large state universities. On one campus, we quizzed mostly first-year students satisfying a distribution requirement. All but two of 57 ignored the 300-year time gap between the Thanksgiving painting and the event it depicts. Instead, they judged the painting on whether it matched their preconceptions, or simply took its contents at face value — an answer we dubbed the “picture’s worth a thousand words” response.

We weren’t terribly surprised. When we tested high school students on these tasks, they struggled, too, and many of these college students were in high school only months earlier. But what would happen, we wondered, if we gave our tasks to college juniors and seniors, the majority of whom were history majors and all of whom had taken five or more history courses? Would seasoned college students breeze through tasks originally designed for high school?

What we found shocked us. Only two in 49 juniors and seniors explained why it might be a problem to use a 20th-century painting to understand an event from the 17th century. Another one of our assessments presented students with excerpts from a soldier’s testimony before the 1902 Senate Committee investigating the war in the Philippines. We asked how the source provided evidence that “many Americans objected to the war.” Rather than considering what might prompt a congressional hearing, students mostly focused on the document’s content at the expense of its context. Rare were responses — only 7 percent — that tied the testimony to the circumstances of its delivery. As one student explained, “If there hadn’t been such a huge opposition by Americans to this war, I don’t believe that the investigation would have occurred.”

We suffer no illusions that our short exercises exhaust the range of critical thinking in history. What they do is provide a check on stirring pronouncements about the promised benefits of historical study. In an age of declining enrollments in history classes, soaring college debt and increased questions about what’s actually learned in college, feel-good bromides about critical thinking and enlightened citizenship won’t cut it. Historians offer evidence when they make claims about the past. Why should it be different when they make claims about what’s learned in their classrooms?



Filed under higher education, how teachers teach

Observing College Professors Teach

I came to Stanford University in 1981. After I had been there for five years, a new dean asked me to serve as his Associate Dean. Having been a superintendent for seven years before coming to Stanford and having tasted the privileged life of a full professor, I had no inclination to return to being an administrator whose influence on tenured colleagues was at best sorely limited and at worst non-existent. The Dean wanted me badly enough that he and I negotiated terms: a higher salary–I would be working twelve months rather than nine (it is, after all, a private institution where everything is negotiated)–I would serve only two years, I could teach at least one or two courses each year I served, and I would get a sabbatical quarter after completing the second year. OK, I said.

What did I do?

I had to ensure that all of my colleagues taught at least four courses over three quarters–some did not, and I had to badger them to do so. I handled students’ dissatisfaction with particular professors’ poor teaching or habitual inattention to students’ work. I followed up on doctoral students’ complaints about the unavailability of their advisers, and I represented the Dean on occasions when he could not attend campus meetings or social events. So, with the help of a skillful administrative secretary, the first year went smoothly.

The second year I had an idea. University professors seldom get observed as they teach except by their students. As a superintendent I had observed over a thousand teachers in my district over the years. Even prior to that I was a supervisor of intern history teachers. Observe and discuss observations with teachers, I could do.

I sent out a personal letter (this was before email became standard communication) to each of my 36 colleagues asking whether they wanted me to observe one of their classes and meet afterwards to discuss what I had seen. I made clear that I would make no judgment on their class but would describe what I saw and have a conversation about what they had intended to happen in the lesson, what they thought had occurred, and what I had observed. Nothing would be written down (except for my notes, which I shared with each faculty member). It would be a conversation. I did ask them to supply me with the readings that students were assigned for the session I observed and to tell me what they wanted to accomplish during the hour or 90-minute session.

Of the 36 who received the letter, 35 agreed (the 36th came to me in the middle of the year and asked me to observe his class). None of them–yes, that is correct–none had ever been observed before by anyone in the School of Education for purposes of having a conversation about their teaching. Two had been observed by me and a former Associate Dean because of student complaints; I had discussed those complaints with the professor and then observed lectures and discussions they had conducted. Both of them invited me to their classes when I wrote my subsequent letter. So for each quarter of the school year, I visited two professors a week. Each scheduled a follow-up conversation with me that we held in their office.

What happened?

I did observe 36 colleagues. For me, it was a fine learning experience. I got to read articles in subject matter of which I knew only a smattering (e.g., economics of education, adolescent psychological development, standardized test development). I heard colleagues lecture, saw them discuss readings from their syllabi, and picked up new knowledge and ways of teaching graduate students that I had not tried in my own courses.

As for my colleagues, a common response during the conversations that followed the observations was gratitude for an experience they had not had as professors. Simply talking about the mechanics of a lecture or discussion, what they thought had worked and what had not, the surprises that popped up during the lesson–all of that was a new experience for nearly all of the faculty. A few asked me to return, and we negotiated repeat visits. Overall, I felt–and seemingly most of my colleagues felt similarly–that the experience was worthwhile because they and I wanted to talk about the ins-and-outs of teaching and had lacked opportunities to do so in our careers as professors.

Those conversations over the year got me thinking more deeply about why universities like Stanford preach the importance of teaching–the rhetoric is omnipresent. Professors and graduate students receive annual teaching awards, and there are programs to help professors improve their teaching. Yet the University had not created the conditions for faculty to share with colleagues the how and what of their teaching through observation and discussion of lectures and seminars.

That year as Associate Dean, sitting in on faculty lectures and seminars, led me on an intellectual journey plumbing a question that nagged at me as I observed and conversed with colleagues: how come universities say teaching is important, yet all of the structures and actual (not symbolic) rewards go to research in tenure, promotion, and salary? To answer that question, I did a historical study of teaching and research at Stanford in two departments–history and the School of Medicine. In completing How Scholars Trumped Teachers: Change without Reform in University Curriculum, Teaching, and Research, 1890-1990, I learned how universities like Stanford have structures and incentives that ensure teaching will be subordinate to the primary tasks of researching and publishing.

To my knowledge, no observations of professors and conversations about teaching have occurred in the Graduate School of Education since 1987-1988.



Filed under higher education, how teachers teach

Whatever Happened To MOOCs?

The splash began in 2012 when Massive Open Online Courses were touted as the coming revolution in higher education.

Wait, Larry, that was only five years ago, a mere blip in the life cycle of an educational innovation. Why are you including MOOCs when you have featured posts asking “whatever happened to” half-century-old innovations such as Open Classrooms, Total Quality Management, and Behavioral Objectives?

With advances in digital technology and social media, the life cycle of a “disruptive innovation” or a “revolutionary” program has sped up so much that what used to take decades to stick or slip away now occurs in the metaphorical blink of an eye. So whatever happened to MOOCs?

Where Did the Idea Originate?

One answer is that MOOCs are the next stage of what began as correspondence courses in the late 19th century for Americans who wanted to expand their knowledge but found going to college next to impossible. From home-delivered lessons to professors delivering lectures on television to the online courses of the early aughts, MOOCs evolved from the DNA of correspondence courses.

Another answer is that in 2001, the Massachusetts Institute of Technology opened up its list of courses for anyone to take online at no cost. Through Open Courseware, professors’ syllabi, assignments and videotaped lectures were made available to everyone with an Internet connection.

And a third answer is that in 2008, two Canadian professors, George Siemens and Stephen Downes, offered the first officially labeled MOOC, “Connectivism and Connected Knowledge,” through the University of Manitoba. They expanded a regular class they taught for 25 students to over 2,200 off-campus adults and students with Internet-connected computers–for free.

All three answers suggest that the lineage of MOOCs has a history located in higher education seeking to educate students who lacked access to college and universities.

What erupted in 2012 was a lava flow of MOOCs from elite U.S. universities, accompanied by hyperbolic language and promises that higher education would become open to anyone with a laptop. Since 2012, that hype cycle has dipped into the Trough of Disillusionment and is only now edging upward on the Slope of Enlightenment. Verbal restraint and tamed predictions of slow growth, smart adaptations, and commercial specialization have become the order of the day. And, fortunately, so has humility about the spread and staying power of innovations initially hyped on steroids. All in five years.

What is a MOOC?

Taught by experts in the field, a Massive Open Online Course in higher education is accessible and free to anyone with an Internet connection. College students, working adults not registered in a college or university, and others who simply want information about a topic that interests them take these courses. See a brief video, made at the beginning of the MOOC innovation, that explains what they are.

What Problems Did MOOCs Intend to Solve?

Limited access to the knowledge and skills offered in higher education, and the high cost of attending universities. MOOCs offer broader accessibility to students who, because of geography, age, cost, or family obligations, could not take courses. Now anyone with a computer can learn what they want to learn. MOOCs are, as one reporter put it, “Laptop U.”

Do MOOCs Work?

Depends upon what someone means by “work.” Since the usual measures of “success” in taking courses are attendance, grades, test scores, and similar outcomes, only one of these familiar measures has been applied to MOOCs: how many students completed the course? Attrition has been very high. About ten percent of enrolled students in the early years of MOOCs did all of the assignments, communicated with course assistants, and took the final exam. Sorting out claims of “success” amid sky-high attrition rates has been an issue for both champions and skeptics of the innovation (see here, here, here, and here).

What Happened to MOOCs?

They are still around but strikingly downsized, and they are in the midst of being monetized and redirected. The initial cheerleaders for MOOCs, such as Sebastian Thrun, Daphne Koller, and Andrew Ng, formed companies (e.g., Udacity, Coursera) that stumbled badly and subsequently altered their business plans. Many of these founders also departed for greener pastures (see here, here, and here).

MOOCs persist, but as with so many other hyped innovations using new technologies, a slimmer, more tempered, corporate version exists in 2017, awarding certificates and micro-credentials (see here and here).



Filed under higher education, technology use