The Seductive Lure of Big Data: Practitioners Beware

Big Data beckons policymakers, administrators, and teachers with eye-popping analytics and snazzy graphics. Here is Darrell West of the Brookings Institution laying out the case for teachers and administrators to use Big Data:

Twelve-year-old Susan took a course designed to improve her reading skills. She read short stories and the teacher would give her and her fellow students a written test every other week measuring vocabulary and reading comprehension. A few days later, Susan’s instructor graded the paper and returned her exam. The test showed that she did well on vocabulary, but needed to work on retaining key concepts.

In the future, her younger brother Richard is likely to learn reading through a computerized software program. As he goes through each story, the computer will collect data on how long it takes him to master the material. After each assignment, a quiz will pop up on his screen and ask questions concerning vocabulary and reading comprehension. As he answers each item, Richard will get instant feedback showing whether his answer is correct and how his performance compares to classmates and students across the country. For items that are difficult, the computer will send him links to websites that explain words and concepts in greater detail. At the end of the session, his teacher will receive an automated readout on Richard and the other students in the class summarizing their reading time, vocabulary knowledge, reading comprehension, and use of supplemental electronic resources.

In comparing these two learning environments, it is apparent that current school evaluations suffer from several limitations. Many of the typical pedagogies provide little immediate feedback to students, require teachers to spend hours grading routine assignments, aren’t very proactive about showing students how to improve comprehension, and fail to take advantage of digital resources that can improve the learning process. This is unfortunate because data-driven approaches make it possible to study learning in real-time and offer systematic feedback to students and teachers.

West sees teachers and administrators as data scientists mining information, tracking individual student and teacher performance and making subsequent changes based on the data. Unfortunately, so much of the hype for using Big Data ignores time, place, and people.

Context matters.

Consider what occurred when Nick Bilton, a New York University journalist and adjunct professor, designed a project for his graduate students in a course called “Telling Stories with Data, Sensors, and Humans.” Could sensors, Bilton and his students asked, be reporters, collect information, and tell what happened?

The students built small electronic machines with sensors that could detect motion, light, and sound. They then asked a straightforward question: did students in the high-rise classroom building use the elevators more than the stairs, and did they shift from one to the other during the day? They set the devices in some elevators and stairwells. Instead of a human counting students, a machine did.

Bilton and his graduate students were delighted with the results. They found that students seemed to use the elevators in the morning “perhaps because they were tired from staying up late, and switch to the stairs at night, when they became energized.”

That night, when Bilton was leaving the building, the security guard who had watched students set up the devices in the elevators asked him what happened with the experiment. Bilton said that the sensors had captured students taking elevators in the morning and stairs at night. The security guard laughed and told Bilton: “One of the elevators broke down a few evenings last week, so they had no choice but to use the stairs.”

Context matters.

In mining data, using analytics, and reading dashboards (see DreamBox) for classrooms and schools, the setting, the time, and the quality of adult-student relationships also count. For Darrell West and others who see teachers and students profiting from instantaneous feedback from computers, context is absent. They fail to consider that the age-graded school is required to do far more than stuff information into students. They fail to reckon with the age-old wisdom (and the research to support it) that effective student learning beyond test scores resides in the relationship between student and teacher.

And when it comes to evaluating individual teachers on the basis of student test scores, the context of teaching, as complex an endeavor as can be imagined and one that researchers have only partially mapped, trumps Big Data even when it is amply funded by Big Donors.

Big Data, of course, will be (and is) used by policymakers and administrators for tracking school and district performance and accountability. But the seductive lure of mining data and creating glossy dashboards will entice many educators to grab numbers to shape lessons and judge individual students and teachers. If they do succumb to the seduction without considering the complex context of teaching and learning, they risk making mistakes that will harm both teachers and students.


Filed under school reform policies

19 responses to “The Seductive Lure of Big Data: Practitioners Beware”


  2. Jim

Your example of the flawed elevator study is not very convincing. Any data collected on any topic might be flawed. Does that mean that we should abandon any attempt to empirically investigate any topic? Should we throw out all of today’s science because we can’t be sure that any particular piece of the supporting data is completely free from error? You may as well argue that because some things we teach in school today may turn out to be mistaken, we should therefore abandon any attempt to educate anybody about anything. After all, that is the only way we can be absolutely sure of not teaching anything false.

  3. Jeffrey Elkner

    Each of the two learning environments described lacks something. The first one has a “work flow” problem – it takes too long for students to get feedback on what they are doing, making it more difficult to use the feedback to help them learn. The second one has a more serious problem – it tries to solve the workflow problem of the first environment by reducing student learning to measurable “data” – potentially killing creativity and divergent, “out of the box” thinking, the very things most essential to real learning.

I’m interested in trying to find a way to improve on the workflow problem without destroying learning by removing the human element so essential to it. I’m working with a free software student information system called SchoolTool to develop a skills tracker integrated with a quiz program that allows teachers to give open-ended quiz questions to a classroom full of students, and then to immediately evaluate the responses together as a class. Students get immediate feedback, and the teacher gets lots of data, organized by the computer to help make decisions about what to do in the classroom, but it is human feedback in all its richness, nuance, and complexity.
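The quiz-and-tracker workflow described above could be sketched roughly like this. To be clear, all names here are hypothetical, and this is not SchoolTool’s actual API; it is just a minimal illustration of a teacher scoring open-ended responses against skills and getting a per-skill class summary back:

```python
# Hypothetical sketch of an open-ended quiz feeding a skills tracker.
# Names and structure are illustrative only, not SchoolTool's actual API.

from collections import defaultdict


class SkillsTracker:
    """Records a teacher's per-skill evaluation of each student response."""

    def __init__(self):
        # skill -> student -> list of scores the teacher assigned (0-1 scale)
        self.scores = defaultdict(lambda: defaultdict(list))

    def record(self, student, skill, score):
        """Teacher evaluates one open-ended response against one skill."""
        self.scores[skill][student].append(score)

    def class_summary(self, skill):
        """Average score per student for one skill, for whole-class review."""
        return {student: sum(marks) / len(marks)
                for student, marks in self.scores[skill].items()}


tracker = SkillsTracker()
# Teacher poses an open-ended question, then evaluates responses with the class.
tracker.record("Ana", "reading comprehension", 1.0)
tracker.record("Ben", "reading comprehension", 0.5)
tracker.record("Ana", "vocabulary", 0.75)

print(tracker.class_summary("reading comprehension"))
```

The point of the design is that the computer only organizes the data; the scores themselves come from a human reading each response, which keeps the evaluation nuanced.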

If anyone is interested in finding out more about what we are doing with SchoolTool to track and evaluate student learning, please feel free to email me at


  5. Bob Calder

    Jeff is right on more than one level. West’s essay is a good example of how system analysis goes wrong in schools. Whether it deliberately ignores feedback loops is another question. The point of the essay should be feedback, not how computers magically solve problems.

    I would also like to point out that fast feedback is not dependably “better” under every circumstance. Look at the lurching progress of the last few years that is caused by rapidly changing frameworks in response to annual feedback in state systems.

Once more let me point out that this is demonstrably not “big data”. If we were to take a full longitudinal sampling of the system that included courses taken, grades in those courses, drawing ability, measurement of persistence, home environment, family income, health inputs and outcomes, museum visits, teacher subject knowledge, textbook quality down to the individual benchmark (because they vary up to 30%), school attendance, family and extended family educational attainment, and last, access to and familiarity with networked knowledge, THAT would be big data.

  6. Reblogged this on From experience to meaning… and commented:
I think Big Data has Big Potential, but indeed one still needs to be careful and not think it is everything.



  9. Larry

Always love your work. In The Global Fourth Way, Dennis Shirley and I actually look at good uses of achievement data and data walls, where teachers are prompted to talk about the children they know and have taught across grades; and poor uses, where data replace relationships and lead teachers to concentrate undue attention on those just beneath the cut scores.

    Andy Hargreaves

    • larrycuban

      Hi Andy,
Thanks for letting me know what you and Dennis have written about the ways teachers use data. I will look at it.

  10. Mike

I work in an elementary school that recently underwent a shift from teacher-based reading evaluation to computer-based reading evaluation. Here are some thoughts on it.

For years we used TPRI, the Texas Primary Reading Inventory, to diagnose our kids’ reading levels and discover what they needed help with. The teacher would test each kid individually. We started out getting subs for each class, and teachers would pull one kid at a time. The test took anywhere from 15-30 minutes per student, so it would generally take almost 2 whole days to test a class of 20-25 students. The student would take a spelling test, read a few sets of words, and then, based on how they scored there, would be given a story at the appropriate reading level to read to the teacher. The student would read the story to the teacher, then answer questions at the end of the story. The whole time the teacher had a Palm Pilot in his/her hand and kept track of the mistakes that were made: whether the student missed words, which words, and how long the child took as they read. The teacher would then download that information to the computer, which would compile the data into nice numbers and columns for the teacher to use.

Now we use Istation. The teacher brings the class to the computer lab; each child sits in front of a computer, puts on headphones, and follows the onscreen prompts. The program is set up like a video game, and it is quite entertaining. The students choose pictures based on vocabulary words given to them, listen to a story, choose pictures based on questions about the story, hear words blended for them and choose the correct picture, read a story and select missing words from drop-down menus, and, for the older kids (grades 3-5), read a story and answer questions based on it. The program times how long it takes to answer every question, so you can basically see if they guessed or didn’t answer at all. The whole assessment takes about 30-45 minutes for an entire class to be finished. Istation sounds amazing on the surface: you can minimize your time and maximize the data mined. You can get a whole class’s worth of data in under an hour!

Here is the problem I, and every teacher in my school, have with Istation. You have no idea if the kids are actually doing what they are supposed to be doing. If a kid can’t work a mouse properly, they are in trouble. And in an inner-city elementary where not too many kids have home access to computers, that can be a problem. Also, the kinder kids’ hands are barely big enough to work a mouse. You now have 1 teacher having to monitor 20-25 kids. The teacher has no idea what the kids are hearing on the computer, and the teacher has no way of knowing why a kid may have missed a question. On TPRI the teacher sat 1 on 1 with the student and could hear which words were being missed and whether a certain spelling pattern was giving a kid problems.

Even with computer games, kids start looking around. If they do that, they may get things wrong that they actually know. Kids love to help each other; some kids would ignore their computer to help the kids next to them. I know if you put up blockers they can’t see the next kid, but we don’t have any big enough to do that; we tried. You don’t really know if the kid just clicked whatever picture the mouse was on when it was time to answer. To all of us who work with the kids, the TPRI data, while long and labor-intensive to collect, was light years better than the Istation data simply because the teacher was sitting with the kid, all attention on the task at hand, hearing every mistake made, marking it down, and getting an understanding of the kid’s thought process as they went along. Our principal would even call us in to ask about the data and why it looked the way it did, and if it was low we would usually get “yelled” at. Every teacher on my campus really wants to go back to TPRI. But someone in admin was sold on the “convenience” of this new system, or has a “brother-in-law” at Istation.

I guess the point I am trying to make is that data can be good, but if not collected the proper way it can be bad as well (duh :)). And sometimes the convenience of technology in collecting that data can make it bad data. We need to choose the right place and way to collect that data. TPRI may be labor-intensive, but if the data it collects is far more accurate, who cares? I hope this makes sense in relation to the original post.

    • larrycuban

Thanks, Mike, for describing the two ways of assessing kids, one labor-intensive and long, the other seemingly efficient insofar as time is concerned but unclear in whether it accurately assesses kids’ performance. The detailed description is useful. I guess there was little teacher input on Istation. When it comes to accuracy in assessing students’ knowledge and skills, that should, in my opinion, trump faster and more convenient ways.

    • Bob Calder

There are really three possible outcomes. If you trust the study that compared the two systems, you could be making a mistake. Why not design a double-blind test and do it yourselves?


