The rollout of the Gates Foundation's 2015 survey of teacher opinion about technology sought to attract attention from entrepreneurs, policymakers, administrators, and teachers. I do not know whether it did, but a brief analysis of what teachers told pollsters is in order because the report reveals strong biases that need attention.
[Over two-thirds of teachers] report teaching in classrooms where students generally learn the same content, working at the same pace together as a class.
The implied meaning is that it is bad for over two-thirds of teachers to teach classes where students generally learn the same content at the same pace, especially given the proliferation of technology that enables student learning experiences to be tailored to individual skills, needs, and interests.
However, the majority of teachers (65 percent) report grouping students of similar abilities together for differentiated instruction or other supports. These groupings are also increasingly responsive to ongoing changes in student learning, with 73 percent of teachers changing the composition of student groups at least monthly.
The report suggests that teachers using multiple groupings to differentiate instruction is a “good” thing; here nearly two-thirds of teachers are doing the right thing. Again, the history and organizational context for teachers' use of small groups are missing.
The fact of the matter is that Progressive reformers began grouping students in the 1920s (following the spread of IQ tests), and it has taken decades for the practice to become a mainstay in the nation’s schools. Why? Because organizational factors that teachers have faced (and still face) account for the slow but steady adoption of an innovation now a century old.
As I and many others have pointed out (see here and here), there are historical, political, and organizational reasons why teachers teach as they do that have little to do with whether new technologies are available. State standards, tests, accountability, class size, demographics, teacher turnover, historical patterns of instruction, and many other factors account for this stability in instruction, not the availability of new devices and software.
METHODOLOGICAL ISSUES WITH ONLINE SURVEYS OF TEACHER OPINION
The Gates Foundation-funded study depends upon online responses from teachers. Researchers know the dangers of unreliable estimates that plague such surveys. For example, when investigators examined the classrooms of teachers and students who had reported frequent use of technology, they found large discrepancies between what was reported and what was observed. None of the gap between what is said on a survey and what is practiced in a classroom is intentional. The discrepancy often arises from what sociologists call “social desirability” bias: respondents to a survey put down what they think the desirable answer should be rather than what they actually do.
Online surveys are also plagued by selection bias: teachers who take the time to respond to an online survey may well differ from those who cannot be bothered to answer the questions. While the Boston Consulting Group that the Gates Foundation hired to survey over 3,000 teachers claims that the respondents mirror the nation’s pool of teachers (p. 3), selection bias mars many online surveys (see here, here, and here). I could not find in the report whether BCG corrected for selection bias or simply assumed that the teachers who responded were representative of the nation’s teachers.
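The way selection bias inflates an estimate can be sketched with a toy simulation. All of the numbers below are invented for illustration; they are not drawn from the BCG survey or any real teacher data. The sketch assumes that teachers who use technology daily are more likely to answer a technology survey, and shows how the responding pool then overstates usage for the whole population:

```python
import random

random.seed(42)

# Hypothetical population: 10,000 teachers, 30% of whom use
# classroom technology daily (1 = daily user, 0 = not).
population = [1] * 3000 + [0] * 7000

# Assumed response rates: daily users answer the online survey
# three times as often as other teachers.
respondents = []
for teacher in population:
    response_rate = 0.30 if teacher == 1 else 0.10
    if random.random() < response_rate:
        respondents.append(teacher)

true_share = sum(population) / len(population)
surveyed_share = sum(respondents) / len(respondents)

print(f"true share of daily users:      {true_share:.2f}")
print(f"share among survey respondents: {surveyed_share:.2f}")
```

Under these made-up response rates, roughly 900 of the 1,600 expected respondents are daily users, so the survey estimate lands near 56 percent even though the true figure is 30 percent. The size of the distortion depends entirely on the assumed gap in response rates, which is exactly what a survey must measure or correct for.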
The foundation’s report Teachers Know Best offers a biased view (both substantively and methodologically) of what should happen in the nation’s classrooms. Handle with care.