Tag Archives: research and practice

Did That Edtech Tool Really Cause That Growth? (Mary Jo Madda)

The quality of research on technology use in schools and classrooms leaves much to be desired. Yet academics and vendors crank out studies monthly, and those studies are often cited to justify using particular programs. How practitioners can make sense of research studies is an abiding issue. This post offers readers some cautionary words about looking carefully at findings drawn from studies of software used in schools.

“Mary Jo Madda (@MJMadda) is Senior Editor at EdSurge, as well as a former STEM middle school teacher and administrator. In 2016, Mary Jo was named to the Forbes ‘30 Under 30’ list in education.” This post appeared in EdSurge, August 10, 2016.

How do you know whether an edtech product is effective in delivering its intended outcomes? As the number of edtech products has ballooned in the past five years, educators—and parents—seek information to help them make the best decision. Companies, unsurprisingly, are happy to help “prove” their effectiveness by publishing their own studies, sometimes in partnership with third-party research groups, to validate the impact of a product or service.

But oftentimes, that research draws incorrect conclusions or is “complicated and messy,” as Alpha Public Schools’ Personalized Learning Manager Jin-Soo Huh describes it. With a new school year starting, and many kids about to try new tools for the first time, now is a timely moment for educators to look carefully at studies, scrutinizing marketing language and questioning the data for accuracy and causation vs. correlation. “[Educators] need to look beyond the flash of marketing language and bold claims, and dig into the methodology,” Huh says. But it’s also up to companies and startups to question their own commissioned research.

To help educators and companies alike become the best critics, here are a few pieces of advice from administrators and researchers to consider when reviewing efficacy studies—and deciding whether or not the products are worth your time or attention.

For Educators

#1: Look for the “caveat statements,” because they might discredit the study.

According to Erin Mote, co-founder of Brooklyn Lab Charter School in New York City, one thing she and her team look for in studies is “caveat statements,” where the study essentially admits that it cannot fully draw a link between the product and an outcome.

“[There are] company studies that can’t draw a definitive causal link between their product and gains. The headline is positive, but when you dig down, buried in three paragraphs are statements like this,” she tells EdSurge, pointing to a Digital Learning Now study about math program Teach to One (TtO):

The report concludes, “The TtO students generally started the 2012-13 academic year with mathematics skills that lagged behind national norms. Researchers found that the average growth of TtO students surpassed the growth achieved by students nationally. Although these findings cannot be attributed to the program without the use of an experimental design, the results appear encouraging. Achievement gains of TtO students, on average, were strong.”

Mote also describes her frustration with companies that call out research studies as a marketing tactic, such as mentioning both studies and the product within a brief, 140-character Tweet or Facebook post—even though the study is not about the product itself, as in the Zearn Tweet below. “I think there is danger in linking studies to products which don’t even talk about the efficacy of that product,” Mote says, calling out that companies that do this effectively co-opt research that is unrelated to their products.

“Research from @RANDCorporation shows summer learning is key. Use Zearn this summer to strengthen math skills.”

#2: Be wary of studies that report “huge growth” without running a proper experiment or revealing complexities in the data.

According to Aubrey Francisco, research director at Digital Promise, something consumers should look for is “whether or not the study is rigorous,” specifically by asking questions like the following four:

  • Is the sample size large enough?
  • Is the sample size spread across multiple contexts?
  • Are the control groups mismatched?
  • Is this study even actually relevant to my school, grade, or subject area?

Additionally, what if a company claims massive growth as indicated by a study, but the data in the report doesn’t support those claims?

Back in the early 2000s, John Pane and his team at the RAND Corporation set out to test the effectiveness of Carnegie Cognitive Tutor Algebra. Justin Reich, an edtech researcher at Harvard University, wrote at length about the study, conceding that the team “did a lovely job with the study.”

However, Reich pointed out that users should be wary of claims made by Carnegie Learning marketers that the product “doubles math learning in one year” when, as Reich describes, “middle school students using Cognitive Tutor performed no better than students in a regular algebra class.” He continues:

“In a two-year study of high school students, one year Cognitive Tutor students performed the same as students in a regular algebra class, and in another year they scored better. In the year that students in the Cognitive Tutor class scored better, the gains were equivalent to moving an Algebra I student from the 50th to the 58th percentile.”
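For readers who want to translate a percentile shift like Reich's into the effect sizes researchers report, the conversion runs through the normal distribution: moving an average student from the 50th to the 58th percentile corresponds to a gain of roughly 0.2 standard deviations. A minimal sketch using only the Python standard library (the function name is mine, not from the study):

```python
import math

def percentile_from_effect_size(d: float) -> float:
    """Percentile rank of a formerly average (50th-percentile) student
    after gaining d standard deviations, via the normal CDF."""
    return 100 * 0.5 * (1 + math.erf(d / math.sqrt(2)))

# A gain of ~0.2 standard deviations moves the median student to
# roughly the 58th percentile -- the size of gain Reich describes.
print(round(percentile_from_effect_size(0.2), 1))
```

Seen this way, a headline-grabbing result can correspond to a fairly modest shift in the distribution of student scores.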

Here’s another example: In a third-party study released by writing and grammar platform NoRedInk involving students at Shadow Ridge Middle School in Thornton, CO, the company claims that every student who used NoRedInk grew at least 3.9 language RIT (student growth) points on the widely used MAP exam—equivalent, by the company’s reckoning, to at least one grade level—as demonstrated in a graph (shown below) on the company’s website. But upon further investigation, there are a few issues with the bar graph, says Alpha administrator Jin-Soo Huh.


While the graph shows that roughly 3.9 RIT points equate to one grade level of growth, there’s more to the story, Huh says. That number is the growth expected for an average student at that grade level, but in reality, this number varies from student to student: “One student may need to grow by 10 RIT points to achieve one year of typical growth, while another student may just need one point,” Huh says. The conclusion: these NoRedInk student users who grew 3.9 points “may or may not have hit their yearly growth expectation.”
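Huh's point—that a uniform 3.9-point gain means different things for different students—can be made concrete with a small, entirely hypothetical comparison (the students and growth targets below are invented for illustration, not taken from the NoRedInk report):

```python
# Hypothetical (student, observed RIT growth, individual growth target).
# Norms assign each student their own target, so a uniform 3.9-point
# gain clears some targets and misses others.
students = [
    ("A", 3.9, 1.0),   # needed 1 point: well past a year of typical growth
    ("B", 3.9, 3.9),   # needed 3.9 points: exactly one year of growth
    ("C", 3.9, 10.0),  # needed 10 points: short, despite the same gain
]

for name, observed, target in students:
    met = observed >= target
    print(f"Student {name}: grew {observed} RIT, target {target} -> "
          f"{'met' if met else 'missed'} yearly growth expectation")
```

The average masks exactly the variation that determines whether any individual student hit a year's worth of growth.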

Additionally, one will find another “caveat” statement on Page 4 of the report, which reads: “Although answering more questions is generally positively correlated with MAP improvement, in this sample, there was not a statistically significant correlation with the total number of questions answered.”

According to Jean Fleming, NWEA’s VP of Communications, “NWEA does not vet product efficacy studies and cannot offer insight into the methodologies used on studies run outside our organization when it comes to MAP testing.” Hence, all the more reason for users to be aware of potential snags.

For Companies

#1: Consider getting your “study” or “research” reviewed.

No one is perfect, but according to Alpha administrator Jin-Soo Huh, “Edtech companies have a responsibility when putting out studies to understand data clearly and present it accurately.”

To help, Digital Promise launched an effort on Aug. 9 to evaluate whether or not a research study meets its standard of quality. (Here are a few studies that the nonprofit says pass muster, listed on DP’s “Research Map.”) Digital Promise and researchers from Columbia Teachers College welcome research submissions between now and September from edtech companies in three categories:

  • Learning Sciences: How developers use scientific research to justify why a product might work
  • User Research: Rapid turnaround-type studies, where developers collect and use information (both quantitative and qualitative) about how people are interacting with their product
  • Evaluation Research or Efficacy Studies: How developers determine whether a product has a direct impact on learning outcomes

#2: Continue conducting or orchestrating research experiments.

Jennifer Carolan, a teacher-turned-venture capitalist, says both of her roles have required her to be skeptical about product efficacy studies. But Carolan is also the first to admit that efficacy measurement is hard, and needs to continue happening:

“As a former teacher and educational researcher, I can vouch for how difficult it can be to isolate variables in an educational setting. That doesn’t mean we should stop trying, but we need to bear in mind that learning is incredibly complex.”

When asked about the state of edtech research, Francisco responds that it’s progressing, but there’s work to be done. “We still have a long way to go in terms of being able to understand product impact in a lot of different settings,” she writes. However, she agrees with Carolan, and adds that the possibility of making research mishaps shouldn’t inhibit companies from conducting or commissioning research studies.

“There’s a lot of interest across the community in conducting better studies of products to see how they impact learning in different contexts,” Francisco says.

Disclosure: Reach Capital is an investor in EdSurge.



Filed under Uncategorized

Stages of Technology Integration in Classrooms (Part 3)

Technology integration is not a binary choice: you either do it or you don’t. Anyone who has taught, observed classrooms, and thought about what it means to include electronic devices and software in daily lessons knows that technology integration, like raising a child, learning to drive, or cultivating a garden, is a process, not an either/or outcome. One goes through various stages in learning how to raise a child, drive a car, or grow a garden. In each instance, a “good” child, driving well, or a fruitful garden is the desired but not predictable outcome.

A host of researchers and enthusiasts have written extensively about the different phases teachers, schools, and districts go through in integrating technology into their daily operations. The literature seldom mentions that such movement through increasingly complicated stages really describes the phases of putting a new idea or practice into action. The labels for the levels of classroom practice vary: novice to expert, traditional to innovative, entry-level to transformational.

Writers and professional associations have described how individuals and organizations stumble or glide from one phase to another before smoothly using electronic devices to reach larger ends. And it is the ends (e.g., content, skills, attitudes) that have to be kept in sight by those who want teachers to arrive at the top (or last) stage. Buried in that final stage, often obscured but still there, is a view of “good” technology integration and, implicitly, of “good” teaching and learning. Figuring out those ends, and the values concealed within them, is difficult but revealing of the biases that model-builders and users hold.

As with arriving at a definition (see last post), I have examined many such conceptual frameworks that lay out a series of steps going from a beginner to an expert (across frameworks the names for each step vary). Most often mentioned are the Apple Classroom of Tomorrow (ACOT) and the SAMR models. Many implementation frameworks in use are variations of these two.

The ACOT model.

The earliest stage model came from the demonstration project Apple launched in the mid-1980s, when the company placed a desktop computer for each student and teacher in five elementary and secondary classrooms across the country—the earliest 1:1 classrooms. Moreover, each classroom had a printer, laser disc player, videotape player, modem, CD-ROM drives, and software packages. The project grew over the years to 32 teachers in ACOT schools in four states. [i]

ACOT was one of the longest initiatives ever undertaken in creating technology-rich classrooms—it lasted nearly a decade. From observations and interviews with teachers and students, researchers drew a host of findings, one of which was the process that teachers went through in integrating technology into daily lessons.


That five-stage process ACOT teachers traversed began with Entry, where teachers coped with classroom discipline problems, software management, technical breakdowns, and the physical rearrangement of rooms, and moved to Adoption, where those beginners’ issues were resolved. The next stage, Adaptation, occurred when teachers figured out ways to use the devices and software to their advantage in teaching—finding new ways of monitoring student work, grading tests, creating new materials, and tailoring content and skills to individual students. At this stage, teachers had fully integrated the technology into traditional classroom practice.

The Appropriation phase comes next when teachers have shifted in their attitudes toward using technology in the classroom. At this point, the teacher uses the technology seamlessly in doing lessons. New classroom habits and ways of thinking about devices and software occur. The authors of Teaching with Technology say: “Appropriation is the turning point for teachers…. It leads to the next stage, invention, where new teaching approaches promote the basics yet open the possibility of a new set of student competencies.” [ii]

In the Invention stage, teachers try out new ways of teaching (e.g., project-based learning, team teaching, individualized lessons) and new ways of connecting to students and other teachers (e.g., students providing technical assistance to other students and teachers, collaboration among students). As the authors summed up: “Reaching the invention stage … was a slow and arduous process for most teachers.” In short, at this stage of implementing technology, ACOT researchers believed that teachers would replace their traditional teacher-centered practices. The majority of teachers, however, never made it to this stage. [iii]

The SAMR model.

Developed by Ruben Puentedura, SAMR stands for: Substitution, Augmentation, Modification, Redefinition. The four rungs of the implementation ladder begin at the lowest, Substitution, where an electronic device (e.g., interactive whiteboard) replaces an existing tool (e.g., overhead projector) with no change in pedagogy or lesson content. On the next rung, Augmentation, the technology adds a functional improvement to the lesson (e.g., studying the concept of the speed of light by using a computer simulation). The third rung, Modification, is where the technology “allows for significant task redesign” (e.g., students show their understanding of content in class by recording audio and then saving it as a sound file). Finally, at the top of the ladder, Redefinition, the technology “allows for the creation of new tasks previously inconceivable.” Examples here would be students creating a movie or podcast and putting it on the Internet to get comments, or students writing posts for a class blog on the history of the Great Depression. At this final stage of technology integration, student engagement is highest. The SAMR model assumes that high student engagement leads to gains in student academic achievement; thus, the model implicitly promises improved student achievement.


More popular with practitioners and consultants marketing professional development in the U.S. and abroad than among researchers, this implementation model is context-free, hierarchical, and unanchored in the research literature on integrating technology. While some researchers have criticized it extensively, it remains popular among teachers and technology coordinators. [iv]

Both ACOT and SAMR involve what teachers know of subject-matter content, insights into their own teaching, and what they know about using technology. This interplay between content, pedagogy, and technology has led to another popular model among technology coordinators, practitioners, and researchers in the field.

Not a stage model of implementation, TPACK treats the domains of Content Knowledge, Pedagogical Knowledge, and Technological Knowledge as overlapping, like the intersecting circles of a Venn diagram. The resulting clumsy acronym stands for Technological Pedagogical Content Knowledge. TPACK slides easily into SAMR, adding to what teachers are expected to know and do in moving from one stage to another. Like the other models, TPACK has also come in for extensive criticism.[v]


These models—and there are others as well—seek to move teacher use of technology in daily lessons from the primitive to the sophisticated: from exchanging pencil-and-paper for word processing, to redesigning classroom activities through available software, to engaging students in learning. The top stages of these implementation models reject traditional modes of teaching and implicitly lean toward a preferred manner of instruction: student-centered. [vi]

Too often, however, the top rung of the ladder, where technology integration creates active learning tasks for students, becomes a proxy for success. Either “Invention” in the ACOT model or “Redefinition” in SAMR becomes a surrogate for judging teacher success not only in effectively integrating technology but also in improving student outcomes. And that is unfortunate.

The next and final post explains why I say “unfortunate.”


[i] Judith Sandholtz, Cathy Ringstaff, and David Dwyer, Teaching with Technology: Creating Student-Centered Classrooms (New York: Teachers College Press, 1997). See p. 187 for number of ACOT teachers, schools, and states.

[ii] Ibid., p. 43.

[iii] Ibid., p. 47.

[iv] For a description of SAMR, see Ruben Puentedura’s presentation at: http://www.hippasus.com/rrpweblog/archives/2014/06/29/LearningTechnologySAMRModel.pdf

For a short video on SAMR, see: https://www.youtube.com/watch?v=OBce25r8vto

Critics include Erica Hamilton, et al., “The Substitution Augmentation Modification Redefinition (SAMR) Model: A Critical Review and Suggestions for its Use,” Tech Trends, 2016, 60(5), pp. 433-441; Jonas Linderoth, “Open Letter to Dr. Ruben Puentedura,” October 17, 2013 at


I did a Google search for “SAMR model” and got 245,000 hits; “ACOT model” received just over 63,000 entries. September 4, 2016.
[v] Punya Mishra and Matthew Koehler, “Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge,” Teachers College Record, 2006, 108(6), pp. 1017-1054. For criticism of TPACK, see Leanna Archambault and Joshua Barnett, “Revisiting Technological Pedagogical Content Knowledge: Exploring the TPACK Framework,” Computers & Education, 2010, 55, pp. 1656-1662; Scott Bulfin, et. al., “Stepping Back from TPACK,” Learning with New Media, March 19, 2013 at: http://newmediaresearch.educ.monash.edu.au/lnm/stepping-back-from-tpack/

A Google search for “TPACK model” on September 4, 2016 produced just under 90,000 hits.

[vi] The summary of ACOT research and practice is in: Judith Sandholtz, Cathy Ringstaff, and David Dwyer, Teaching with Technology: Creating Student-Centered Classrooms (New York: Teachers College Press, 1997). The sub-title captures the intent of the model. The SAMR model highlights increasing student engagement at each rung of the ladder. Among advocates of student-centered classrooms, engagement is a synonym for “active learning,” a principle undergirding student-centeredness in teaching. While the model increases active student involvement at each stage beyond Substitution, Ruben Puentedura has not stated directly his preference for student-centeredness as a goal; I have found no direct statements of his seeking student-centered instruction. Those curriculum specialists, teachers, technology coordinators, and independent consultants who have picked up and run with SAMR, however, have indeed seen the model as a strategy for teachers to alter their classroom practices (with qualifications and amendments) and embrace student-centered instruction.

See, for example, Cathy Grochowski, “Interactive Technology: What’s SAMR Got To Do With It?” June 1, 2016 at: http://edblog.smarttech.com/2016/06/11471/

Jennifer Roberts, “Turning SAMR into TECH: What Models Are Good For,” November 30, 2013 at: http://www.litandtech.com/2013/11/turning-samr-into-tech-what-models-are.html

Kathy Schrock, “SAMR and Bloom’s,” (no date) at: http://www.schrockguide.net/samr.html




Filed under research, technology

Defining Technology Integration (Part 2)

Current definitions of technology integration are a conceptual swamp. Some definitions focus on the technology itself and student access to the devices and software. Some concentrate on the technologies as tools to help teachers and students reach curricular and instructional goals. Some mix a definition with what constitutes success or effective use of devices and software. Some include the various stages of technology integration from simple to complex. And some include in their definitions a one-best-way of integrating technology to advance an instructional method such as student-centered learning. Thus the conceptual swamp sucks unknowing enthusiasts and fervent true believers into endless arguments over exactly what technology integration is. [i]

To avoid that swamp and the semantic arguments that come with identifying teachers and schools where a high degree of device integration in daily practice had occurred, I relied upon informal definitions frequently used by practitioners.

From what practitioners identified as “best cases” of technology integration, I learned that varied indicators came into play when I asked for exemplars. These indicators helped create a grounded definition of technology integration in identifying districts, schools and teachers:

* District had provided wide access to devices and established infrastructure for their use. System administrators and a cadre of teachers had fought insistently for student access to hardware (e.g., tablets, laptops, interactive whiteboards) and software (e.g., the latest programs in language arts, math, history, and science), whether through 1:1 programs for entire schools, mobile carts, or other arrangements.

* District had established structures for how schools could improve learning and reach desired outcomes through technology. District administrators and groups of teachers had established formal ways of monitoring students’ academic progress, created teacher-initiated professional development, launched on-site coaching of teachers and daily mentoring of students, and provided easily accessible assistance when glitches in devices or technological infrastructure occurred. They sought to use technology to achieve content and skill goals.

* Particular schools and teacher leaders had repeatedly requested personal devices and classroom computers for their students. Small teacher-initiated projects—homegrown, so to speak—flowered and gained the support of district administrators. Evidence came from sign-up lists for computer carts, volunteering to host pilot 1:1 computer projects in classrooms, and purchase orders from specific teachers and departments.

* Certain teachers and principals came regularly to professional development workshops on computer use in lessons. Voluntary attendance at one or more of these sessions indicated motivation and growing expertise.

* Students had used devices frequently in lessons. Evidence of use came from teacher self-reports, principal observations, student comments to teachers and administrators and word-of-mouth among teachers and administrators in schools.

Note that in all of these conversations, no district administrator, principal, or teacher ever asked me what I meant by “technology integration.” Some or all of the above indicators repeatedly came up in our discussions. I leaned heavily upon the above signs of use and less upon a formal definition (see above) in identifying candidates to study.

I wanted a definition that would fit what I had gleaned from administrators and teachers about how they informally concluded what schools and which teachers were exemplars of technology integration. I wanted a definition that got past the issue of access to glittering new machines and Gee Whiz applications. I wanted a definition that focused on classroom and school use aimed toward achieving teacher and district curricular and instructional goals. I wanted a definition that put hardware and software in the background, not the foreground. I wanted a definition grounded in what I heard and saw in classrooms, schools, and districts.

Of the scores of formal definitions in the literature I have sorted through, I looked for one that would be clear and make sense to experts, professionals, parents, and taxpayers. Only a few met that standard. [ii]

I did fashion one that avoided the conceptual morass of defining technology integration and matched the “best cases” that superintendents, technology coordinators, and teachers had selected for me to observe.[iii]

“Technology integration is the routine and transparent use in learning, teaching, and assessment of computers, smartphones and tablets, digital cameras, social media platforms, networks, software applications and the Internet aimed at helping students reach the district’s and teacher’s curricular and instructional goals.”*

If this definition succeeds in putting technology in the background, not the foreground, then the next step in my research is to elaborate how such a process unfolds in classrooms, schools, and districts by examining the various stages teachers go through in integrating technology before moving to assessments of how successful (or not) the technology integration works.


*Thanks to reader Seb Schmoller for adding to this definition

[i] Examples of the different definitions mentioned in text can be found at:

[ii] Rodney Earle, “The Integration of Instructional Technology into Public Education: Promises and Challenges,” Educational Technology Magazine, 2002, 42(1), pp. 5-13. His definition of integration concentrates on the teaching, not hardware or software:

“Computer technology is merely one possibility in the selection of media and the delivery mode—part of the instructional design process —not the end but merely one of several means to the end.”

Khe Foon Hew and Thomas Brush, “Integrating Technology into K-12 Teaching and Learning,” Education Tech Research Development, 2007, 55, pp. 223-252. Their definition is:

“[T]echnology integration is thus viewed as the use of computing devices such as desktop computers, laptops, handheld computers, software, or Internet in K-12 schools for instructional purposes.”

[iii] I took a definition originally in Edutopia and revised it to make clear that the integration of technology in daily lessons is harnessed to achieving the curricular and instructional goals of the teacher, school, and district. The devices and software are not front-and-center but routinely used in lessons. I then stripped away language that connected usage of technologies to “success” or preferred ways of teaching. (No author) “What Is Successful Technology Integration?” Edutopia, November 5, 2007 at: http://www.edutopia.org/technology-integration-guide-description



Filed under research, technology use

How I Am Researching Technology Integration in Classrooms and Schools (Part 1)

Last spring, I began publishing posts about classrooms in which I observed lessons (see here and here). These posts were one part of a larger research project on technology integration (see here).

Two questions have guided the case study design of the project:

  1. How have classroom, school, and district exemplars of technology integration been fully implemented and put into classroom practice?
  2. Have these exemplars made a difference in teaching practice?

In this and subsequent posts I will detail the methodology I use, define what I mean by technology integration, and describe models commonly used to determine its extent in schools.

The following posts are drafts that will be revised, since I will be visiting more teachers and schools this fall. I welcome comments from readers who wish to take issue, suggest revisions, or recommend changes.

How I Got Started

In fall 2015, I wrote to district superintendents and heads of charter management organizations explaining why I was writing about instances of technology integration in their schools. At no point did these administrators ask me to define “technology integration” or even ask about the phrase; all seemed to know what I meant. In nearly all instances, the superintendent, school site administrator, technology coordinator, and CMO head invited me into the district. Administrators supplied me with lists of principals and teachers to contact. Again, neither my contacts nor I defined the phrase “technology integration” in conversations. They already had a sense of what the phrase meant.

I contacted individual teachers explaining how I got their names, what I was doing, and asked for their participation. More than half agreed. Because of health issues, I did not start the project until January 2016. For four months I visited schools and classrooms, observed lessons and interviewed staff. I resumed observations this fall and hope to complete all observations by December 2016.

In visiting classrooms, I interviewed teachers before and after the lessons I observed in their classrooms. During the observation, I took notes every few minutes about what both teacher and students were doing. I used a protocol to describe class activities while commenting separately about what both teacher and students were doing. I had used this observation protocol in previous studies. The point of the description and commentary was to capture what happened in the classroom, not determine the degree of teacher effectiveness. I avoided evaluative judgments about the worth of the lesson or teacher activities.

The major advantage of this approach is being in the room, picking up verbal and non-verbal asides every few minutes, and noting classroom conditions that often go unnoticed. As an experienced teacher familiar with the history of schooling and the common moves that occur in lessons, I can also assess the relationship between teacher and students that observers using other protocols or videos may miss or exclude. Teachers know that I will not judge their performance.

The major disadvantage of this way of observing lessons is the subjectivity and biases I bring to documenting lessons. So I work hard at separating what I see from what I interpret. I document classroom conditions from student and teacher desk arrangements through what is on bulletin boards, photos and pictures on walls, and whiteboards and which, if any, electronic devices are available in the room. I describe, without judging, teacher and student activities and behaviors. But biases, as in other approaches researching classroom life, remain.

After observing classes, I sat down with teachers for half-hour to 45-minute interviews at times convenient to them. After jotting down their history in the district, the school, and other experiences, I turned to the lessons and asked what the teachers’ goals were and whether they believed those goals were reached. Then I asked about the different activities I had observed during the lesson. One key question was whether the lesson I observed was representative of how the teacher usually teaches.

In answering these questions, teachers gave me reasons they did (or did not do) something in lessons. In most instances, individual teachers told me why they did what they did, thus communicating a map of their beliefs and assumptions about teaching, learning, and the content they teach. In all of the give-and-take of these discussions I made no judgment about the success or failure of different activities or the lesson itself.

I then drafted a description of the lesson and sent it to the teacher to correct any factual errors I made in describing the lesson. The teacher returned the draft with corrections.[i]

To provide context for the classrooms I observed, I collected documents and used school and teacher websites to describe what occurred within each school and district in integrating devices and software into teachers’ daily lessons.

All of these sources intersected and overlapped, permitting me to assess the degree to which technology integration occurred. Defining the concept of “technology integration,” however, was elusive and required much work. Even though the phrase triggered nods from teachers and administrators when I used it, as if we all shared the same meaning, I still had to come up with a working definition that would permit me to capture more precisely what I saw in classrooms, schools, and districts.


[i] The protocol is straightforward and subjective. I write out in longhand or type on my laptop what teachers and students do during the lesson. Each sheet of paper or laptop screen is divided into a wide column and a narrow column. In the wide column I record every few minutes what the teacher is doing, what students are doing, and teacher-directed segues from one activity to another. In the narrow column, I comment on what I see.


Subsequent posts will deal with defining technology integration, common models describing its stages, and determining success of technology integration.


Filed under research, technology use

What Guides My Thinking on School Reform: Pulling the Curtain Aside *

From time to time readers will ask me what I believe should be done about teaching, learning, and school reform. They usually preface their request with words such as: “Hey, Larry, you have been a constant critic of existing reforms. You have written about schools not being businesses and have pointed out the flaws in policymaker assumptions and thinking about reform. And you have been skeptical about the worth of new computer devices, software, and online instruction in promoting better teaching and faster learning. So instead of always being a critic just tell us what you think ought to be done.”

Trained as a historian of education and knowledgeable about each surge of school reform to improve teaching and learning over the past century, I cannot offer specific programs for school boards, superintendents, principals, teachers, parents, and voters to consider. But I do embrace certain principles that guide my thinking about teaching, learning, and reform, and that have guided this blog for the past six years. These principles come out of my five decades as a teacher, administrator, and scholar, and out of my experiences in schools as a site-based researcher. Most readers will be familiar with what I say. No surprises here. But these principles do steer my thinking about teaching, learning, and reform.

Context matters. Suggesting this program or that reform for all math classes or urban districts or elementary schools is impossible because the setting in and of itself influences what happens in the school and classrooms. There is no reform I know of aimed at improving classroom teaching and student performance that should be applied across the board (e.g., school uniforms, teaching children to code, project-based learning). Policies and programs delivered to teachers need to be adapted to different settings.

No single way of teaching works best with all students. Because students differ in motivation, interests, and abilities, using a wide repertoire of approaches in lessons and units is essential. Direct instruction, small groups, whole-group guided discussions, student choice, worksheets, research papers, project-based instruction, online software, and the like need to be in teachers’ tool kits. There are, of course, reformers and reform-minded researchers who try to alter how teachers teach and the content of their instruction from afar, through the Common Core State Standards, the newest versions of New Math, New Science, and New History, or similar curricular inventions. I support such initiatives as long as they rely upon a broad repertoire of teacher approaches to content and skills. When they do not, when they ask teachers to adhere to one best way of teaching (e.g., online “personalized” lessons, project-based teaching, direct instruction) regardless of context, I oppose such reforms.

Small changes in classroom practice occur often and slowly; fundamental and rapid changes in practice seldom happen. While well-intentioned reformers seek to fundamentally change how teachers teach reading, math, science, and history, such 180-degree changes in the world of the classroom (or the hospital, the therapist’s office, or the criminal justice system) seldom occur. Over the decades, experienced teachers have become allergic to reformers’ claims of fast and deep changes in what they do daily in their classrooms. As gatekeepers for their students, teachers, aware of the settings in which they teach, have learned to adapt new ideas and practices that accord with their beliefs and that they think will help their students. Reforms that ignore these historical realities are ill-fated. I support efforts that build on this history of classroom change, teachers’ wisdom of practice, and awareness of the context in which the reform will occur.

Age-graded school structures influence instruction. The age-graded school structure, a 19th-century innovation now universally cemented to K-12 schooling across the U.S., does influence what happens in classrooms. Teachers adapt to this dominant structure by following a schedule as they prepare 50-minute (or hour-long) lessons. Age-graded structures harnessed to accountability regulations have demanded that teachers prepare lessons to get students ready for high-stakes annual tests. These structures require teachers to judge whether each student will pass at the end of the school year. School and district structures (e.g., curriculum standards, evaluation policies), like the age-graded school, have intended and unintended influences on the what and how of teaching.

Yet adding new structures to shift the center of gravity from prevailing teacher-centered lessons to student-centered ones (e.g., “personalized” learning, project-based instruction) while retaining the larger organizational structure of the age-graded organization fails to alter daily classroom practices.

Teacher involvement in instructional reform matters. From the mid-19th century through the early decades of the 21st century, no instructional reform imposed upon teachers has been adopted by most teachers and used in lessons as intended. The history of top-down classroom reform is a history of failed efforts to alter what teachers do daily. I include new ways of teaching reading, math, science, and history over the past century. Where and when there have been changes in classroom instruction, teachers were involved in planning and implementing the reform. Examples range from Denver curriculum reform in the 1920s, the Eight-Year Study in the 1930s, the creation of alternative schools in the 1960s, the Coalition of Essential Schools in the 1980s, and designed classroom interventions à la Ann Brown in the 1990s, to teacher-run schools in the 2000s. Reforms aimed at altering classroom instruction require working closely with teachers from the very beginning of a planned change and building on their existing expertise.

These principles guide my views of school reform, teaching, and learning.


*This is a revised version of a post that appeared September 15, 2015.


Filed under Reforming schools

Recycling Poverty, Segregated Schools, and Academic Achievement: Then and Now

A recent spate of reports and books linking family poverty, segregated schools, and academic achievement (see here, here, and here) has concluded that school improvement (insofar as test scores are the measure) has hit a wall. Over the past decade, test scores have plateaued in reading and math or even fallen (see here and here). After thirty years of reform after reform, achievement gaps between high- and low-income schools run to four or more grade levels within and across districts (see here and here). How come?

Researchers have pointed out for decades that the largest influence on school achievement (as measured by test scores) has been family socioeconomic status. No surprise now, with the release of new data on test scores, that the same findings appear: poverty and segregation shape student achievement. Such findings have been around since the massive Coleman Report (1966) and have appeared regularly every decade since. With such findings appearing again and again, the question asked a half-century ago is the same question now: Can schools make a difference when socioeconomic conditions (e.g., poverty) clearly play a large role in determining academic achievement?

Those who say “yes,” then and now, have urged upon elected decision-makers reform policies ranging from better teachers and teaching, more parental choice in schools, higher standards, more testing, and accountability to new technologies in schools and larger investments in education. “No excuses” school leaders acknowledge that poverty exists but argue that “good” schools can overcome zip codes.

Those who say “no,” then and now, have pointed to consistently meager gains in academic achievement and the constancy of test-score gaps between minorities and whites. These naysayers have urged those very same decision-makers to improve schools but also to work politically on reducing poverty in the U.S. (see here) because of the powerful effects of family background on student academic outcomes. The back-and-forth pits reformers who see successful schools as the solvent for poverty against critics who see family and neighborhood poverty as factors that cannot be washed away by the solvent of schooling. That debate has been reignited in 2016 by recent reports documenting gaps in achievement and few test-score gains.

Here’s the rub, however. Much has been written (again, by researchers) showing that policymakers seldom use social science research to make decisions. Instead, they define crises that must be solved and use research to support solutions they have already decided upon (see here, here, and here). Research studies are dragged in to bolster agreed-upon policy directions. At best, then, research findings get smuggled into the debate after a new policy has been decided. Making policy, then and now, has been far more about political will, mobilizing coalitions to back solutions, and the power to decide what should be done to end the crisis than about leaning on rigorous research findings. Educational policy, then, is politics writ small.

Consider what happened to the Coleman Report (1966)–mandated by the Civil Rights Act of 1964. James Coleman, a highly respected sociologist, and his team surveyed pupil expenditures, quality of facilities, and teacher certification because federal officials then were sure that low student achievement, especially in urban minority and poor districts, was due to inequitable allocation of resources. Instead, the Coleman Report showed a weak correlation between resources and achievement but a strong association between family background and student test scores.

When government officials saw results that challenged their assumptions about the “problem” of low achievement, they kept the findings under wraps for months until they leaked out (see here). The results gave plenty of ammunition to critics of the “War on Poverty,” the Elementary and Secondary Education Act (1965), and federal agencies pushing for more desegregation in the nation’s school districts. All of these initiatives had the political muscle of President Lyndon Johnson behind them. Educational policy and political will were joined at the hip then.

The Coleman Report’s controversial findings, however, gave a shot of adrenaline to opponents of these new policies and ventures in the early 1970s, particularly the huge increases in federal spending to end poverty and improve schools. Opponents of desegregating residential communities so that blacks and whites could attend school together also found sustenance in these results (see here). Schools remained a battleground in these years as the “War on Poverty” became a historical footnote.

So current policy research findings, whether supporting those who say “yes” or those who say “no” to the question of schools making a difference amid strong socioeconomic influences, will, like similar studies in the past, revive the same old question that has divided the nation for the past half-century. But the research findings will not answer it.

Results from 2016 studies such as Stanford University professor Sean Reardon’s may recapture the argument used by earlier policymakers that investing more money in school improvement is a fool’s errand, given the results from earlier reforms. Rebuttals to this line of argument come from social scientists who urge expanded investment in pre-kindergarten and from those, like Reardon and other researchers, who point to the tiny fraction of high-poverty, segregated schools that somehow perform beyond what researchers would ordinarily have predicted. Ditto for charter school proponents and advocates of “no excuses” schools, who point to the high graduation rates, college admissions, and, yes, high test scores that such schools have racked up and who argue that, on that record, they deserve more money and political support.

What’s missing in 2016 from this brew of research, policy solutions, and advocacy, however, is what was present a half-century ago: a muscular political coalition, a sizable group of elected policymakers with the will to provide a popularly supported response to the conundrum that has divided this nation for decades over the role of schooling in a capitalist democracy.


Filed under school reform policies

Technology Integration in Districts and Schools: Next Project (Part 1)

For decades, as a teacher, administrator, and researcher, I have been both a consumer and a skeptic of new technologies in K-12 schools and higher education. My books, articles, talks, and this blog have documented the hype, adoption, and partial implementation of new devices, from 16mm film in the early 20th century and classroom radio in the 1930s to instructional television in the 1950s and 1960s and the desktop computer since the early 1980s. And within the past decade, I have researched and written about the exponential growth of laptops, tablets, and hand-held devices, with a cornucopia of apps and software, that have swept through U.S. schools and colleges.

Student and teacher access to these shiny new devices–ones that often become obsolete in the blink of an eye–and their increased use in districts, schools, and classrooms for data gathering and instructional materials have been stunning to early adopters in and out of schools. Results of these major investments, especially in the last decade, however, have been less stunning, even disappointing, because the initial reasons for distributing the digital wealth have fallen short time and again. Gains in academic achievement, major shifts in teaching methods, and entry into decent-paying jobs–the original goals for buying new technologies–have been missing in action when it comes to evaluating the return on investment in digital classroom tools. Thus, I have remained a skeptic and will continue to question the claims of high-tech entrepreneurs and avid champions when it comes to “transforming” the organization and practice of schooling.

Being skeptical, however, does not mean I have a closed mind. I have diligently looked for instances where districts, schools, and classroom teachers have mindfully infused software into their lessons to reach the learning outcomes they seek for their students. On my blog, I have featured such examples (see here, here, and here). For my next project I want to be more systematic in seeking out exemplars of technology integration in districts, schools, and classrooms. Why select exemplars?

First, the often-told story that highly promoted devices and software fall short of promised outcomes is accurate. The literature on technology use in schools and universities is strewn with examples of broken dreams. I have no enthusiasm for contributing further to that literature since I know that others will document the holes in the Swiss cheese of high-tech hype. Furthermore, stories of failure have hardly blunted the continuing promotion of the latest apps, software, and devices in districts, schools, and classrooms that have come to rely on them. The volleying back and forth between uncritical advocates and skeptical users will continue into the next decade whatever I think and do. So I want to take a break from that badminton game.

Second, seeking out exemplars of technology integration leap-frogs over the current debates by examining (yes, critically) those instances where experts and local users believe that they are infusing software seamlessly into actual instruction. For them, the technology “works” (what I and others mean by “works” will be addressed later). By describing and analyzing “best cases” of technology integration I can delve deeper into puzzles that have rattled around in my mind as I researched access and use of new hardware and software over the past three decades.

And exactly what are those puzzles?

One puzzle that has bothered me for a long time is why “technology” in education is considered separate, an add-on, when that is not the case when observers look at technological tools applied to business, medicine, architecture, engineering, and other professional work. For some reason, in these other domains high-tech tools are part and parcel of the daily work that professionals do in getting the job done well. Doctors, for example, diagnose illnesses. New technologies–hand-held devices that do EKGs and monitor heartbeats, machines that do CAT scans–help doctors figure out what is wrong with a patient. In medicine, technology helps in making diagnoses. That’s it. Not in schools and higher education. There, use of such tools is the subject and predicate; the problem to be solved is secondary. Why, then, unlike in other professional work, has the use of educational technology been front and center in discussions about improving schools, changing teaching, and preparing students for the labor market? In looking at exemplars of educators infusing technology into their daily activities, perhaps a few clues will emerge to unravel this puzzle.

The other puzzle that has bothered me over the years is that teachers, like clinical physicians, nurses, and therapists, engage in the “helping professions,” where the use of their expertise is wholly dependent upon the responses of their students, patients, and clients. These helping professionals depend a great deal on frequent interactions to achieve any degree of success in improving learning and maintaining health. The introduction of online lessons, 1:1 tablets, Google Glass for doctors, robots in hospitals, and the like raises significant questions about the nature of the work these professionals do and how success is defined. Keeping in view teaching as a “helping profession” and the crucial importance of teacher-student interactions lays out questions for me to answer in examining exemplars in districts, schools, and classrooms. In what ways do the best cases of technology infusion improve or hinder (or both) relationships between teachers and students?

Part 2 describes my thinking about how I will go about this project in the next year.



Filed under how teachers teach, school reform policies, technology use