After decades of school and classroom use of new technologies, some facts have emerged that puzzle me.
Since the early 1980s, the federal government, states, and districts—not to mention philanthropists—have invested billions of dollars in wiring schools, buying and deploying machines, and preparing teachers and students to use high-tech devices. Nearly all teachers now have access to one or more computers at school. As for students, the ratio of students to computers across the U.S. has dropped from 125-to-1 in 1983 to 4-to-1 in 2006. Teacher and student access has increased even more in the past decade, with thousands of schools issuing computers to every student and teacher.
Yet despite this increased access to new technologies, there is little reliable and valid evidence that these technology investments have yielded gains in student achievement.
One answer is simply that access to machines does not necessarily lead to teachers regularly using high-tech devices in daily lessons. Consider that after nearly 30 years of access to computers in the U.S., national surveys and research studies (here and here) of schools show that about 40 percent of teachers are regular users, that is, they use computers for instruction one or more times a week. These teachers use interactive whiteboards, laptops, and hand-held devices to have students do Internet searches, turn in typed rather than hand-written homework, take notes on lectures, watch videos, and carry out other familiar classroom activities. A small subset of these teachers, however, uses electronic devices weekly in far more creative and imaginative ways inside and outside classrooms with their students. Together, these teachers make up that 40 percent.
But the majority of teachers, most of whom, paradoxically, use their home computers a few hours each night, are either occasional users or non-users when it comes to integrating available machines into their daily lessons.
So one explanation for the first puzzling fact is the flawed assumption that deploying computers to teachers and students will lead to teachers regularly using high-tech devices for instruction. Note that without regular use by teachers, establishing a causal relationship between computers and, say, student test scores, is impossible.
Another explanation for the puzzle of so little linkage between computers and student achievement examines how researchers go about studying the connections between technology and student outcomes.
Many researchers fail to consider that the common designs and methodologies they use to determine linkages between classroom technology use and student achievement cannot capture the inherent complexity and unpredictability of teaching and learning. So researchers use shortcuts to get around that complexity and unpredictability.
I need to unpack the previous sentence. Consider that teaching students involves many factors relating to who the teacher is, what content and skills are taught, and what activities and tasks occur while teaching. Also consider student factors: who they are, what experiences, motivations, and interests they bring to the classroom, and what they do during lessons. Then consider the school itself: its organization, culture, and neighborhood. Finally, consider the district: its resources, leadership, and the culture of learning or non-learning it cultivates. All of these factors, interacting sometimes unpredictably, affect classroom teaching and learning.
Yet look at the majority of research designs and methods used to determine the effects of teachers using computers with students. Most common are surveys of teachers and students who report their perceptions of classroom use, supplemented by researchers’ descriptions of practices and interviews with teachers and students. Some researchers set up comparison groups: classes that use computers to study a topic are matched against classes studying the same topic without computers. Both sets of classes are then pre- and post-tested.
Both research designs have serious defects. Short of establishing an experimental and control design with students and teachers randomly assigned to each group, it is nearly impossible to establish a causal linkage between the use of high-tech devices and student achievement. Such experimental or quasi-experimental designs are uncommon and usually too expensive to mount.
Because surveys and class-comparisons are less expensive in dollars and labor, thousands of studies have been done since the introduction of desktop computers into schools in the early 1980s. Many show minute gains or “no significant difference” in test scores from student use of computers. The results, however, are correlations—associations between presence of computers and gains in test scores, not evidence that student use of the machines caused a rise in test scores.
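That correlation-versus-causation trap can be illustrated with a short simulation. The sketch below is purely hypothetical, with invented numbers: "resources" stands in for any hidden factor (say, overall school funding) that drives both computer access and test scores. Computers are given zero direct effect on scores here, yet the two measures still correlate, which is exactly what a class-comparison study would pick up and might misread as an effect.

```python
import random

random.seed(42)

# Hypothetical data: a hidden confounder ("resources") influences BOTH
# computer access AND test scores. Computers have no direct effect on
# scores in this model, yet the two still move together.
n = 1000
resources = [random.gauss(0, 1) for _ in range(n)]       # hidden confounder
computers = [r + random.gauss(0, 1) for r in resources]  # access tracks resources
scores = [r + random.gauss(0, 1) for r in resources]     # scores track resources too

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A clearly positive correlation appears even though, by construction,
# computers cause none of the score variation.
print(round(correlation(computers, scores), 2))
```

A survey or class-comparison design sees only the bottom line of this script, the correlation, and cannot tell whether computers raised scores or a third factor raised both. Random assignment breaks the link between the confounder and computer access, which is why the experimental designs mentioned above are the only ones that can support a causal claim.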
Here, then, are two ways to make sense of the puzzling fact that so much investment in high-tech devices has yielded such paltry returns in student outcomes.
Have I missed another explanation? Is what I say flawed? If so, how?