Category Archives: technology

Changing Technologies in Classrooms

A friend and former colleague, Henry Levin, recently wrote about his experience in a 1940s classroom.

I started school in 1943, and by the time we were in third grade we were introduced to writing cursive using an ink pen. Initially these were the pens with long tapered wooden handles with replaceable pen tips or nibs, but by sixth grade we were expected to use fountain pens because they were less messy. I remember carefully filling my pen by maneuvering a lever on its side that compressed a rubber bladder inside to draw ink from the inkwell on its release.


I was also given the responsibility of refilling the inkwells each day or every other day. We used huge bottles of Quink (perhaps a liter), and they had to be manipulated in just the right way to fill the inkwell about three-quarters full without overfilling it. My recollection is that this was a permanent ink that could not be removed from my clothing. Once I dropped an entire bottle on the floor, leading to a large spill. Cleaning it up required first laying down newsprint and paper tissues to soak up most of the ink, followed by mopping and scrubbing with water and suds. Still, a shadow of the ink remained, and the teacher reminded me periodically that I needed to be careful not to further damage her floor. Towards the end of high school, some very expensive ballpoint pens began to replace the ink pens, and we were no longer expected to use the ink paraphernalia. But the old desks lasted a long time. Even in the late fifties (I was in college), I visited my old high school and found that all of the student desks still had inkwells. Students wondered what they were for.

I also have a memory of a later technology that, like the inkwell, became obsolete.

In the late 1960s Stanford University administrators secured federal funds to build a multi-million dollar facility called the Stanford Center for Research, Development, and Teaching (SCRDT). A fully furnished television studio with “state-of-the-art” cameras, videotape recorders, and monitors occupied the main floor, but the jewel in the crown of the new building was the Large-Group Instruction room (LGI).

The amphitheater-shaped room with half-circular rows looked down on a small stage with a lectern, a massive pull-down screen, and two large monitors suspended from the ceiling. At most of the individual seats was a small punch-button pad called the “student responder.” The responder contained the numbers 1-10 and the letters T and F.

At the very top of the amphitheater was a glass-enclosed technician’s station where an aide could assist the professor by amplifying sound, providing simultaneous interpretation of various languages, showing slides or films, and putting on the monitors whatever data the professor wanted. Administrators had designed the room for professors to enhance the delivery of lectures.

For lectures, the student responder came into play. Designers created the pad for students to punch in their choices and communicate their answers to the professor’s questions instantaneously, such as “If you agree, press 1; if you disagree, press 2” or “If the statement is true, press T.” As students pressed the keypad, the data went directly to a mainframe computer where the students’ responses were immediately assembled and displayed for the professor at a console on the lectern. The lecturer could then adjust the pace and content of the lecture in response: advanced interactive technology, circa 1970, linking students to teacher.
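
For readers curious about what that 1970 round trip amounted to in computational terms, here is a minimal sketch in Python of the tally-and-display cycle described above. It is an illustration only; the function names and data shapes are my own assumptions, not anything drawn from SCRDT documentation.

    from collections import Counter

    # A speculative sketch (not from SCRDT documentation) of the responder
    # cycle: each seat sends one keypress (1-10, T, or F), the mainframe
    # tallies the presses, and the lectern console shows the tally.

    VALID_KEYS = {str(n) for n in range(1, 11)} | {"T", "F"}

    def tally_responses(keypresses):
        """Count valid keypresses, ignoring anything outside 1-10, T, F."""
        return Counter(k for k in keypresses if k in VALID_KEYS)

    def console_display(counts, total_seats):
        """Render the distribution roughly as a lectern console might."""
        responded = sum(counts.values())
        lines = [f"{key}: {count}" for key, count in counts.most_common()]
        lines.append(f"responded: {responded} of {total_seats} seats")
        return "\n".join(lines)

    # Example: "If you agree, press 1; if you disagree, press 2."
    presses = ["1", "2", "1", "1", "T", "2", "1"]
    print(console_display(tally_responses(presses), total_seats=10))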

By 1972 when I came to Stanford as a graduate student, the LGI was being used as a large lecture hall for classes from other departments. The now-disconnected keypads were toys that bored students played with during lectures. The pull-down screen was used for overheads and occasional films. The fixed-position cameras purchased in the late 1960s were already obsolete and beyond repair.

In 1981, when I returned to teach at Stanford, the SCRDT had been renamed the Center for Educational Research at Stanford (CERAS). In the LGI, none of the original equipment or technology (except the sound system and simultaneous translation) was used by either students or professors. The student responders, however, were still there.

By 2011, nearly a half-century after the SCRDT installed the LGI, the amphitheater room was still in use as a regular lecture hall. I was in that room that year to hear a colleague talk about his career in education and, you guessed it, as I listened, my fingers crept over to the “student responder” and I began to click the keys.

In 2012, the LGI was renovated and the numeric pads disappeared, just as the inkwell holes in classroom desks did decades ago.*

Whoever said classrooms don’t change?

___________________________

*Thanks to Deborah Belanger for supplying the date of the LGI renovation.

9 Comments

Filed under technology

‘It’s Not a Bug, It’s a Feature.’ Trite—or Just Right? (Nicholas Carr)

Nicholas Carr is an author who has written extensively on information technology (IT) for the past 15 years. His 2010 book The Shallows was a finalist for the Pulitzer Prize. I include this recent essay of his because nearly all readers of this blog and I have experienced “bugs” in the software we use daily. He tells the story of an IT phrase that has entered our idiom and become a cliché.

This appeared in Wired, August 19, 2018

We’ll never know who said it first, nor whether the coiner spoke sheepishly or proudly, angrily or slyly. As is often the case with offhand remarks that turn into maxims, the origin of It’s not a bug, it’s a feature is murky. What we do know is that the expression has been popular among programmers for a long time, at least since the days when Wang and DEC were hot names in computing. The Jargon File, a celebrated lexicon of hacker-speak compiled at Stanford in 1975 and later expanded at MIT, glossed the adage this way:

A standard joke is that a bug can be turned into a feature simply by documenting it (then theoretically no one can complain about it because it’s in the manual), or even by simply declaring it to be good. “That’s not a bug, that’s a feature!” is a common catchphrase.

When 19th-century inventors and engineers started using bug as a synonym for defect, they were talking about mechanical malfunctions, and mechanical malfunctions were always bad. The idea that a bug might actually be something desirable would never have crossed the mind of an Edison or a Tesla. It was only after the word entered the vocabulary of coders that it got slippery. It’s not a bug, it’s a feature is an acknowledgment, half comic, half tragic, of the ambiguity that has always haunted computer programming.

In the popular imagination, apps and other programs are “algorithms,” sequences of clear-cut instructions that march forward with the precision of a drill sergeant. But while software may be logical, it’s rarely pristine. A program is a social artifact. It emerges through negotiation and compromise, a product of subjective judgments and shifting assumptions. As soon as it gets into the hands of users, a whole new set of expectations comes into play. What seems an irritating defect to a particular user—a hair-trigger toggle between landscape and portrait mode, say—may, in the eyes of the programmer, be a specification expertly executed.

Who can really say? In a 2013 study, a group of scholars at a German university sifted through the records of five software projects and evaluated thousands of reported coding errors. They discovered that the bug reports were themselves thoroughly buggy. “Every third bug is not a bug,” they concluded. The title of their paper will surprise no one: “It’s Not a Bug, It’s a Feature.”

INABIAF—the initialism has earned a place in the venerable Acronym Finder—is for programmers as much a cri de coeur as an excuse. For the rest of us, the saying has taken on a sinister tone. It wasn’t long ago that we found software dazzling, all magic and light. But our perception of the programmer’s art has darkened. The friendly-seeming apps and chatbots on our phones can, we’ve learned, harbor ill intentions. They can manipulate us or violate our trust or make us act like jerks. It’s the features now that turn out to be bugs.

The flexibility of the term bug pretty much guaranteed that INABIAF would burrow its way into everyday speech. As the public flocked online during the 1990s, the phrase began popping up in mainstream media—The New York Times in 1992, The New Yorker in 1997, Time in 1998—but it wasn’t until this century that it really began to proliferate.

A quick scan of Google News reveals that, over the course of a single month earlier this year, It’s not a bug, it’s a feature appeared 146 times. Among the bugs said to be features were the decline of trade unions, the wilting of cut flowers, economic meltdowns, the gratuitousness of Deadpool 2’s post-credits scenes, monomania, the sloppiness of Neil Young and Crazy Horse, marijuana-induced memory loss, and the apocalypse. Given the right cliché, nothing is unredeemable.

The programmer’s “common catchphrase” has itself become a bug, so trite that it cheapens everything it touches. But scrub away the tarnish of overuse and you’ll discover a truth that’s been there the whole time. What is evolution but a process by which glitches in genetic code come to be revealed as prized biological functions? Each of us is an accumulation of bugs that turned out to be features, a walking embodiment of INABIAF.

2 Comments

Filed under technology

“Personalized Learning”: The Difference between a Policy and a Strategy

“Personalized learning”–whatever it means–has been the mantra of policymakers, technology entrepreneurs, and engaged practitioners for the past few years. Mention the phrase and those whose bent is to alter schooling nod in assent to its apparent value in teaching and learning. Mentions of it cascade through media and research reports as if it were the epitome of the finest policy to install in classrooms.

But it is not a policy; “personalized learning” is a strategy.

What’s the difference?

Read what Yale University historian Beverly Gage writes about the crucial distinction between the two concepts:

A strategy, in politics, can be confused with a policy or a vision, but they’re not quite the same thing. Policies address the “what”; they’re prescriptions for the way things might operate in an ideal world. Strategy is about the “how.” How do you move toward a desired end, despite limited means and huge obstacles? We tend to associate strategy with high-level decision makers — generals, presidents, corporate titans — but the basic challenge of, in [Saul] Alinsky’s words, “doing what you can with what you have” applies just as much when working from the bottom up.

While the two are connected, making the distinction between policy and strategy is essential not only to political leaders but to military ones as well. Strategies are instruments to achieve policy goals. In the 17-year-old war in Afghanistan, for example, ambiguous and changing U.S. goals–get rid of the Taliban, make Afghanistan democratic, establish an effective Afghan military and police force–have greatly influenced which strategies the three U.S. presidents since 2001 have used: sending special forces, army units, and marines into the country, mounting frontal assaults on Taliban strongholds, pursuing counter-insurgency, etc. (see here and here).

Without recognizing this distinction between policy and strategy, military and political leaders behave as if blindfolded, taking one action while devising another plan to achieve ever-changing goals.

But the key distinction that Gage draws between policy and strategy applies not only to politics or the military; it covers continual efforts to reform public schools just as well. A successful reform often gets converted into policies–the vision–and those policies get implemented as strategies–the how–to achieve policy goals in districts and schools.

Also keep in mind that public schools are political institutions. Taxpayers fund them. Voters elect boards of education to make policies consistent with the wishes of those who put them into office. And those policies are value-driven; that is, the goals that school boards and superintendents pursue in districts, principals pursue in schools, and teachers pursue in lessons contain community and national values or, as Gage put it above, prescriptions for the way things might operate in an ideal world. Of course, these value-laden goals, e.g., build citizens, strengthen students’ moral character, ensure children’s well-being, prepare graduates for jobs, can be contested and, again, become political as tax levies and referenda on bilingual or English-only instruction get voted up or down. So policies do differ from strategies in schooling. The distinction becomes especially important when it comes to media-enhanced school reforms.

In light of this distinction, consider “personalized learning.” When I ask teachers, principals, superintendents, and members of school boards the question about “personalized learning”–toward what ends?–I get stares and then answers that are all over the landscape: higher test scores, reducing the achievement gap between minorities and whites, getting better jobs, and motivating students toward lifelong learning (see here).

The question is essential because entrepreneurs, advocates, and promoters  pushing “personalized learning” expect practitioners to reorganize time and space in schools, secure new talent, buy extensive hardware and software, shift from teacher-centered to student-centered instruction, and provide scads of professional development to those putting what has now become a policy into practice.

The fact is that “personalized learning” is not a policy; it is a strategy. What has happened here, as in politics and the military, is that a “strategy” has become the desired end, replacing the initial policy goal. Leaders forget that a policy is a “what,” a prescription for the way things might operate better than they do, a solution to a problem; a strategy is the “how” of moving toward a desired end despite limited means and huge obstacles. While this switch from policy to strategy is common, it is self-defeating (and consequential) in an organization aiming to help children and youth live in the here and now while getting ready for an uncertain future.

The fundamental question that must be asked of “personalized learning” is: toward what ends? It seldom gets asked, much less answered without flabby phrases or impenetrable jargon. The conflicts that arise when the goals of PL are unclear or ambiguous (or worse, unexplored) occur because PL as a strategy–the “how”–has morphed into the “what” of a policy. Here is what Facebook’s Mark Zuckerberg says:

We want to make sure that [PL], which seems like a good hypothesis and approach, gets a good shot at getting tested and implemented.

One example taken from a recent report on PL:

Personalized learning is rooted in the expectation that students should progress through content based on demonstrated learning instead of seat time. By contrast, standards-based accountability centers its ideas about what students should know, and when, on grade-level expectations and pacing. The result is that as personalized learning models become more widespread, practitioners are increasingly encountering tensions between personalized learning and state and federal accountability structures.

Note the conflicts between PL and standards-based accountability–both of which are strategies to achieve higher test scores, change school organization, raise students’ self-confidence in mastering content, and demonstrate responsibility to voters. Nothing, however, is ever said about how raising test scores, altering how schools are organized, lifting students’ self-esteem, or holding schools accountable to voters is connected to graduating engaged citizens, shaping humane adults, getting jobs in an ever-changing workplace, or reducing economic inequalities. These are the policy ends that Americans say they want for their public schools. Instead, distinctions between policy and strategy go unnoticed and the “how” becomes far more important than the “what.”

12 Comments

Filed under school reform policies, technology

12 Things Everyone Should Understand About Tech (Anil Dash)

“Anil Dash is an entrepreneur, activist and writer recognized as one of the most prominent voices advocating for a more humane, inclusive and ethical technology industry. He is the CEO of Fog Creek Software, the renowned independent tech company behind Glitch, the friendly new community that helps anyone make the app of their dreams, as well as its past landmark products like Trello and Stack Overflow.

Dash was an advisor to the Obama White House’s Office of Digital Strategy, and today advises major startups and non-profits including Medium and DonorsChoose. He also serves as a board member for companies like Stack Overflow, the world’s largest community for computer programmers, and non-profits like the Data & Society Research Institute, whose research examines the impact of tech on society and culture; the NY Tech Alliance, America’s largest tech trade organization; and the Lower East Side Girls Club, which serves girls and families in need in New York City…. Dash is based in New York City, where he lives with his wife Alaina Browne and their son Malcolm. Dash has never played a round of golf, drank a cup of coffee, or graduated from college.”

This post appeared March 14, 2018 on Humane Tech

Tech is more important than ever, deeply affecting culture, politics and society. Given all the time we spend with our gadgets and apps, it’s essential to understand the principles that determine how tech affects our lives.

Understanding technology today

Technology isn’t an industry, it’s a method of transforming the culture and economics of existing systems and institutions. That can be a little bit hard to understand if we only judge tech as a set of consumer products that we purchase. But tech goes a lot deeper than the phones in our hands, and we must understand some fundamental shifts in society if we’re going to make good decisions about the way tech companies shape our lives—and especially if we want to influence the people who actually make technology.

Even those of us who have been deeply immersed in the tech world for a long time can miss the driving forces that shape its impact. So here, we’ll identify some key principles that can help us understand technology’s place in culture.

What you need to know:

1. Tech is not neutral.

One of the most important things everybody should know about the apps and services they use is that the values of technology creators are deeply ingrained in every button, every link, and every glowing icon that we see. Choices that software developers make about design, technical architecture or business model can have profound impacts on our privacy, security and even civil rights as users. When software encourages us to take photos that are square instead of rectangular, or to put an always-on microphone in our living rooms, or to be reachable by our bosses at any moment, it changes our behaviors, and it changes our lives.

All of the changes in our lives that happen when we use new technologies do so according to the priorities and preferences of those who create those technologies.

2. Tech is not inevitable.

Popular culture presents consumer technology as a never-ending upward progression that continuously makes things better for everybody. In reality, new tech products usually involve a set of tradeoffs where improvements in areas like usability or design come along with weaknesses in areas like privacy & security. Sometimes new tech is better for one community while making things worse for others. Most importantly, just because a particular technology is “better” in some way doesn’t guarantee it will be widely adopted, or that it will cause other, more popular technologies to improve.

In reality, technological advances are a lot like evolution in the biological world: there are all kinds of dead-ends or regressions or uneven tradeoffs along the way, even if we see broad progress over time.

3. Most people in tech sincerely want to do good.

We can be thoughtfully skeptical and critical of modern tech products and companies without having to believe that most people who create tech are “bad”. Having met tens of thousands of people around the world who create hardware and software, I can attest that the cliché that they want to change the world for the better is a sincere one. Tech creators are very earnest about wanting to have a positive impact. At the same time, it’s important for those who make tech to understand that good intentions don’t absolve them from being responsible for the negative consequences of their work, no matter how well-intentioned.

It’s useful to acknowledge the good intentions of most people in tech because it lets us follow through on those intentions and reduce the influence of those who don’t have good intentions, and to make sure the stereotype of the thoughtless tech bro doesn’t overshadow the impact that the majority of thoughtful, conscientious people can have. It’s also essential to believe that there is good intention underlying most tech efforts if we’re going to effectively hold everyone accountable for the tech they create.

4. Tech history is poorly documented and poorly understood.

People who learn to create tech can usually find out every intimate detail of how their favorite programming language or device was created, but it’s often near impossible to know why certain technologies flourished, or what happened to the ones that didn’t. While we’re still early enough in the computing revolution that many of its pioneers are still alive and working to create technology today, it’s common to find that tech history as recent as a few years ago has already been erased. Why did your favorite app succeed when others didn’t? What failed attempts were made to create such apps before? What problems did those apps encounter — or what problems did they cause? Which creators or innovators got erased from the stories when we created the myths around today’s biggest tech titans?

All of those questions get glossed over, silenced, or sometimes deliberately answered incorrectly, in favor of building a story of sleek, seamless, inevitable progress in the tech world. Now, that’s hardly unique to technology — nearly every industry can point to similar issues. But that ahistorical view of the tech world can have serious consequences when today’s tech creators are unable to learn from those who came before them, even if they want to.

5. Most tech education doesn’t include ethical training.

In mature disciplines like law or medicine, we often see centuries of learning incorporated into the professional curriculum, with explicit requirements for ethical education. Now, that hardly stops ethical transgressions from happening—we can see deeply unethical people in positions of power today who went to top business schools that proudly tout their vaunted ethics programs. But that basic level of familiarity with ethical concerns gives those fields a broad fluency in the concepts of ethics so they can have informed conversations. And more importantly, it ensures that those who want to do the right thing and do their jobs in an ethical way have a firm foundation to build on.

But until the very recent backlash against some of the worst excesses of the tech world, there had been little progress in increasing the expectation of ethical education being incorporated into technical training. There are still very few programs aimed at upgrading the ethical knowledge of those who are already in the workforce; continuing education is largely focused on acquiring new technical skills rather than social ones. There’s no silver-bullet solution to this issue; it’s overly simplistic to think that simply bringing computer scientists into closer collaboration with liberal arts majors will significantly address these ethics concerns. But it is clear that technologists will have to rapidly become fluent in ethical concerns if they want to continue to have the widespread public support that they currently enjoy.

6. Tech is often built with surprising ignorance about its users.

Over the last few decades, society has greatly increased in its respect for the tech industry, but this has often resulted in treating the people who create tech as infallible. Tech creators now regularly get treated as authorities in a wide range of fields like media, labor, transportation, infrastructure and political policy — even if they have no background in those areas. But knowing how to make an iPhone app doesn’t mean you understand an industry you’ve never worked in!

The best, most thoughtful tech creators engage deeply and sincerely with the communities that they want to help, to ensure they address actual needs rather than indiscriminately “disrupting” the way established systems work. But sometimes, new technologies run roughshod over these communities, and the people making those technologies have enough financial and social resources that the shortcomings of their approaches don’t keep them from disrupting the balance of an ecosystem. Oftentimes, tech creators have enough money funding them that they don’t even notice the negative effects of the flaws in their designs, especially if they’re isolated from the people affected by those flaws. Making all of this worse are the problems with inclusion in the tech industry, which mean that many of the most vulnerable communities will have little or no representation amongst the teams that create new tech, preventing those teams from being aware of concerns that might be of particular importance to those on the margins.

7. There is never just one single genius creator of technology.

One of the most popular representations of technology innovation in popular culture is the genius in a dorm room or garage, coming up with a breakthrough innovation as a “Eureka!” moment. It feeds the common myth-making around people like Steve Jobs, where one individual gets credit for “inventing the iPhone” when it was the work of thousands of people. In reality, tech is always informed by the insights and values of the community where its creators are based, and nearly every breakthrough moment is preceded by years or decades of others trying to create similar products.

The “lone creator” myth is particularly destructive because it exacerbates the exclusion problems which plague the tech industry overall; those lone geniuses that are portrayed in media are seldom from backgrounds as diverse as people in real communities. While media outlets may benefit from being able to give awards or recognition to individuals, or educational institutions may be motivated to build up the mythology of individuals in order to bask in their reflected glory, the real creation stories are complicated and involve many people. We should be powerfully skeptical of any narratives that indicate otherwise.

8. Most tech isn’t from startups or by startups.

Only about 15% of programmers work at startups, and in many big tech companies, most of the staff aren’t even programmers anyway. So the focus on defining tech by the habits or culture of programmers that work at big-name startups deeply distorts the way that tech is seen in society. Instead, we should consider that the majority of people who create technology work in organizations or institutions that we don’t think of as “tech” at all.

What’s more, there are lots of independent tech companies — little indie shops or mom-and-pop businesses that make websites, apps, or custom software, and a lot of the most talented programmers prefer the culture or challenges of those organizations over the more famous tech titans. We shouldn’t erase the fact that startups are only a tiny part of tech, and we shouldn’t let the extreme culture of many startups distort the way we think about technology overall.

9. Most big tech companies make money in just one of three ways.

It’s important to understand how tech companies make money if you want to understand why tech works the way that it does.

  • Advertising: Google and Facebook make nearly all of their money from selling information about you to advertisers. Almost every product they create is designed to extract as much information from you as possible, so that it can be used to create a more detailed profile of your behaviors and preferences, and the search results and social feeds made by advertising companies are strongly incentivized to push you toward sites or apps that show you more ads from these platforms. It’s a business model built around surveillance, which is particularly striking since it’s the one that most consumer internet businesses rely upon.
  • Big Business: Some of the larger (generally more boring) tech companies like Microsoft and Oracle and Salesforce exist to get money from other big companies that need business software but will pay a premium if it’s easy to manage and easy to lock down the ways that employees use it. Very little of this technology is a delight to use, especially because the customers for it are obsessed with controlling and monitoring their workers, but these are some of the most profitable companies in tech.
  • Individuals: Companies like Apple and Amazon want you to pay them directly for their products, or for the products that others sell in their store. (Although Amazon’s Web Services exist to serve that Big Business market, above.) This is one of the most straightforward business models—you know exactly what you’re getting when you buy an iPhone or a Kindle, or when you subscribe to Spotify, and because it doesn’t rely on advertising or cede purchasing control to your employer, companies with this model tend to be the ones where individual people have the most power.

That’s it. Pretty much every company in tech is trying to do one of those three things, and you can understand why they make their choices by seeing how those choices connect to these three business models.

10. The economic model of big companies skews all of tech.

Today’s biggest tech companies follow a simple formula:

  1. Make an interesting or useful product that transforms a big market
  2. Get lots of money from venture capital investors
  3. Try to quickly grow a huge audience of users even if that means losing a lot of money for a while
  4. Figure out how to turn that huge audience into a business worth enough to give investors an enormous return
  5. Start ferociously fighting (or buying off) other competitive companies in the market

This model looks very different than how we think of traditional growth companies, which start off as small businesses and primarily grow through attracting customers who directly pay for goods or services. Companies that follow this new model can grow much larger, much more quickly, than older companies that had to rely on revenue growth from paying customers. But these new companies also have much lower accountability to the markets they’re entering because they’re serving their investors’ short-term interests ahead of their users’ or community’s long-term interests.

The pervasiveness of this kind of business plan can make competition almost impossible for companies without venture capital investment. Regular companies that grow based on earning money from customers can’t afford to lose that much money for that long a time. It’s not a level playing field, which often means that companies are stuck being either little indie efforts or giant monstrous behemoths, with very little in between. The end result looks a lot like the movie industry, where there are tiny indie arthouse films and big superhero blockbusters, and not very much else.

And the biggest cost for these big new tech companies? Hiring coders. They pump the vast majority of their investment money into hiring and retaining the programmers who’ll build their new tech platforms. Precious little of these enormous piles of money is put into things that will serve a community or build equity for anyone other than the founders or investors in the company. There is no aspiration that making a hugely valuable company should also imply creating lots of jobs for lots of different kinds of people.

11. Tech is as much about fashion as function.

To outsiders, creating apps or devices is presented as a hyper-rational process where engineers choose technologies based on which are the most advanced and appropriate to the task. In reality, the choice of things like programming languages or toolkits can be subject to the whims of particular coders or managers, or to whatever’s simply in fashion. Just as often, the process or methodology by which tech is created can follow fads or trends, affecting everything from how meetings are run to how products are developed.

Sometimes the people creating technology seek novelty, sometimes they want to go back to the staples of their technological wardrobe, but these choices are swayed by social factors in addition to an objective assessment of technical merit. And a more complex technology doesn’t always equal a more valuable end product, so while many companies like to tout how ambitious or cutting-edge their new technologies are, that’s no guarantee that they provide more value for regular users, especially when new technologies inevitably come with new bugs and unexpected side-effects.

12. No institution has the power to rein in tech’s abuses.

In most industries, if companies start doing something wrong or exploiting consumers, they’ll be reined in by journalists who will investigate and criticize their actions. Then, if the abuses continue and become serious enough, the companies can be sanctioned by lawmakers at the local, state, national, or international level.

Today, though, much of the tech trade press focuses on covering the launch of new products or new versions of existing products, and the tech reporters who do cover the important social impacts of tech are often relegated to being published alongside reviews of new phones, instead of being prominently featured in business or culture coverage. Though this has started to change as tech companies have become absurdly wealthy and powerful, coverage is also still constrained by the culture within media companies. Traditional business reporters often have seniority in major media outlets, but are commonly illiterate in basic tech concepts in a way that would be unthinkable for journalists who cover finance or law. Meanwhile, dedicated tech reporters who may have a better understanding of tech’s impact on culture are often assigned to (or inclined to) cover product announcements instead of broader civic or social concerns.

The problem is far more serious when we consider regulators and elected officials, who often brag about their illiteracy about tech. Having political leaders who can’t even install an app on their smartphones makes it impossible to understand technology well enough to regulate it appropriately, or to assign legal accountability when tech’s creators violate the law. Even as technology opens up new challenges for society, lawmakers lag tremendously behind the state of the art when creating appropriate laws.

Without the corrective force of journalistic and legislative accountability, tech companies often run as if they’re completely unregulated, and the consequences of that reality usually fall on those outside of tech. Worse, traditional activists who rely on conventional methods such as boycotts or protests often find themselves ineffective due to the indirect business model of giant tech companies, which can rely on advertising or surveillance (“gathering user data”) or venture capital investment to continue operations even if activists are effective in identifying problems.

This lack of systems of accountability is one of the biggest challenges facing tech today.

If we understand these things, we can change tech for the better.

If everything is so complicated, and so many important points about tech aren’t obvious, should we just give up hope? No.

Once we know the forces that shape technology, we can start to drive change. If we know that the biggest cost for the tech giants is attracting and hiring programmers, we can encourage programmers to collectively advocate for ethical and social advances from their employers. If we know that the investors who power big companies respond to potential risks in the market, we can emphasize that their investment risk increases if they bet on companies that act in ways that are bad for society.

If we understand that most in tech mean well, but lack the historic or cultural context to ensure that their impact is as good as their intentions, we can ensure that they get the knowledge they need to prevent harm before it happens.

So many of us who create technology, or who love the ways it empowers us and improves our lives, are struggling with the many negative effects that some of these same technologies are having on society. But perhaps if we start from a set of common principles that help us understand how tech truly works, we can start to tackle technology’s biggest problems.

10 Comments

Filed under technology, technology use

Reflections on 2017

EdSurge asked me to offer reflections and predictions for 2017. The following  appeared in EdSurge, December 27, 2017.

As someone who has taught high school history, led a school district, and researched the history of school reform, including the use of new technologies in classrooms over the past half-century, I found little that startled me in 2017, except for one event noted below. For digital tools in classrooms, it was the same ol’ same ol’.

Sure, I am an oldster and have seen a lot of school reform, both successes and failures, but I am neither a pessimist nor a naysayer about public schools. I am a tempered idealist who is cautiously optimistic about what U.S. public schools have done and still can do for children, the community, and the nation. Both the idealism and optimism—keep in mind the adjectives I used to modify the nouns—have a lot to do with what I have learned over the decades about school reform, especially when it comes to technology. So for 2017, I offer no lessons that will shock but ones distilled from my experience.

LESSON 1

When it comes to student use of classroom technologies, talk and action are both important. Differentiating between the two is crucial.

Anyone interested in improving schooling through digital tools has to distinguish media surges of hyped news about, say, personalized learning transforming schools or virtual reality devices in classrooms from actual policies that are adopted (e.g., standards, testing, and accountability; buying 1:1 devices).

Then one has to further distinguish between those adopted policies and programs and what teachers actually do in their classroom lessons. The process is the same as separating the hyped ad from the unwrapped product in your hand.

These distinctions are crucial in making sense of what teachers do once the classroom door closes.

LESSON 2

Access to digital tools is not the same as what happens in daily classroom activities.

District purchases of hardware and software continue to go up. In 1984, there were 125 students for each computer; now the ratio is around 3:1 and in many places 1:1. Nothing startling here—the trend line in buying stuff began to go up in the early years of this century and that trend continues. Because this nearly ubiquitous access to new technologies has spread across urban, suburban, exurban, and rural school districts, too many pundits and promoters leap to the conclusion that all teachers integrate these digital tools into daily practice seamlessly. While surely the use of devices and software has gained full entry into classrooms, anyone who regularly visits classrooms sees the wild variation in lessons among teachers using digital technologies.

Yes, teachers have surely incorporated digital tools into daily practice but—there is always a “but”—even those who have thoroughly integrated new technologies into their lessons reveal both change and stability in their teaching.

In 2016, I visited 41 elementary and secondary teachers in Silicon Valley who had a reputation for integrating technology into their daily lessons.

They were hard working, sharp teachers who used digital tools as familiarly as paper and pencil. Devices and software were in the background, not foreground. The lessons they taught were expertly arranged with a variety of student activities. These teachers had, indeed, made changes in creating playlists for students, pursuing problem-based units, and organizing the administrative tasks of teaching.

But I saw no fundamental or startling changes in the usual flow of lessons—setting goals, designing varied activities and groupings, eliciting student participation, assessing student understanding—that differed from earlier generations of experienced teachers. The lessons I observed were teacher-directed, and post-observation interviews revealed continuity in how teachers have taught for decades. Again, stability and change in teaching with digital tools.

Oh yes, there was one event that did startle me. That was the election of Donald Trump as President. I do not believe that his tenure in the White House or that of his Secretary of Education will alter the nation’s direction in schooling–my first prediction. The Every Student Succeeds Act (2015) shifts policymaking from federal to state offices. Sure, there is much talk in D.C. about more choice, charters, and vouchers, but much of it remains talk. Little change in what schools do or what happens in classrooms will occur.

What is disturbing is the President’s disregard for being informed, making judgments based on whim, tweeting racist statements, and telling lies (PolitiFact has documented 325 Trump statements that it judges mostly or entirely false). These Presidential actions in less than a year have already shaped a popular culture where “fake news,” “truthful hyperbole,” and “post-truth” are often-used phrases.

Indirectly, the election of Donald Trump—and here is my second prediction—will spark a renaissance in districts and schools working on critical thinking skills and teachers and students parsing mainstream and social media for accuracy. Maybe the next generation will respect facts, think more logically, be clearer thinkers, and more intellectually curious than our current President.

6 Comments

Filed under technology

A Few Teachers Speak Out on Technology in Their Classrooms

I am fortunate to have many readers who are classroom teachers. I have published posts over the past year about my research on teachers identified as exemplary in integrating technology into their lessons. Some of those posts triggered responses from teachers. I offer a few of those comments here.

Louise Kowitch, retired social studies teacher from Connecticut:

….The impact of technology can vary greatly depending on the subject matter (among all the other things you’ve addressed). While some pedagogical practices are universal, when “doing the work of the discipline”, content-specific practices, and by extension the impact of technology, might vary widely.

I mention this to say that as someone who lived through the IT revolution in the classroom (from mimeographs, scantrons, and filmstrips to floppy disks and CD-ROM, and finally to smart boards, Skype and Chromebooks), by the time I reached three decades as a full time classroom teacher, I was spending MORE time on my lessons and interacting with students, not less. Some tasks were indeed more efficient (for example, obtaining and sharing maps, artifacts, art, primary sources). Others, like collecting data about student performance for our superintendent, became arduous, weekend-long affairs that sucked the life out of the joy of teaching.

That said, I loved how Chromebooks and Smartboards freed up my instruction to empower students to do their own research and conduct substantive debates. For example, a simulation of the post WWI debates over the Treaty of Versailles from the perspectives of different countries – something I had done before Chromebooks – became a powerful lesson for students in the art of diplomacy, the value of historical perspective, and the grind of politics, as a result of THEIR OWN RESEARCH, not my selection of primary sources. This was MORE time consuming (2 weeks of instructional time, not 8 days) and LESS EFFICIENT, but MORE STUDENT CENTERED and COLLABORATIVE.

Was it “better” instruction? Yes, if the point was for kids to experience “the art of negotiation”. No, if it meant having to drop a four-day mini unit on elections in the Weimar Republic that I used to do after the WWI unit. Something is lost, and something is gained. Like you, I grapple with whether it’s a zero-sum game.

Garth Flint, high school computer science teacher and technology coordinator at a Montana private school:

My question has always been: what effect does the increase in classroom tech have on the students? Do they do better throughout the years? How do we measure “better”? We have an AP History teacher who is very traditional. Kids listen to the lecture and copy the notes on the whiteboard. About the only tech he uses are some minor YouTube videos. His AP test results are outstanding. Would any tech improve on those results? At the middle school we have a teacher who uses a Smartboard extensively. It has changed how he does his math lectures. But he is still lecturing. Has the Smartboard improved student learning? I do not know. I have observed teachers who have gone full tech: Google Docs, 1:1, videos of lectures online, reversed classroom, paperless. Their prep time increased. Student results seemed (just from my observation, I did not measure anything) to be the same as in a non-tech classroom. It would be interesting to have two classrooms of the same subject at the same grade level, one high-tech, one old-school, and feed those students into the same classroom the next year. Ask that next-year teacher if there is a measurable difference between the groups.

Laura H. Chapman, retired  art teacher from Ohio:

“So answering the question of whether widespread student access and teacher use of technologies has “changed daily classroom practices” depends upon who is the asker, who is the doer, and what actually occurs in the classroom.”

Some other questions.
Who is asking questions about the extent of access and use of technology by students and teachers and why? Who is not asking such questions, and why not?

Is there a map of “daily classroom practices” for every subject and grade/or developmental level such that changes in these practices over time can be monitored with the same teachers in the same teaching assignments?

Are there unintended consequences of widespread student access and teacher use of technologies other than “changes in daily classroom practices?” Here I am thinking about the risky business of assuming that change is not only inevitable but also positive (e.g., invigorates teaching and learning, makes everything more “efficient”).

Who is designing the algorithms, the apps, the dashboards, the protocols for accessing edtech resources, who is marketing these and mining the data from these technologies, and why? These questions bear on the direct costs and benefits of investments and indirect costs/benefits….

12 Comments

Filed under how teachers teach, technology

Fads and Fireflies: The Difficulties of Sustaining Change

I have written a lot in the past 50 years about the history of classroom practice, uses of technology in lessons, policymaker decisions, and school reforms both faddish and permanent. From time to time I will look through my writings to see what I said then and what I think now. I find that common themes (not necessarily the same words) appear again and again over the decades.

In some respects that bothers me. Am I a Johnny One Note who says the same thing over and over again without questioning the one note? Even with the life and professional experiences I have had over the decades in and out of schools, do I still play the same strings on my harp? Yes and no.

The “yes” part is that themes that are woven into the articles and books I have written deal with abiding issues in the history of a politically vulnerable institution embedded in every community throughout the U.S.  Issues such as “good” teachers, “good” schools, how to improve lessons, get better principals and superintendents, and make the “system” better have tracked the history of American schooling for at least two centuries. Every generation, “reforms” arise to deal with those issues.

The “no” part is that the contexts for school reform change over decades, and what is important at one time is often less important at another moment of reform. Yet even as contexts shift, many of the same reforms get recycled and appear again. Puzzling, but accurate and, in my opinion, in need of explanation.

I try to deal with the “yes” and “no” of being a Johnny One Note in an interview I did 17 years ago with journalists at Educational Leadership about the history of school reform and other persistent issues that accompany efforts to improve U.S. schools.

This is what I said then. In looking at it in 2017, I stick by what I said in the interview that follows. A Johnny One Note?

John O’Neil of the journal Educational Leadership conducted this interview, which appeared in April 2000, v. 57(7), pp. 6-9.

___________________________________________________________

Educator and historian Larry Cuban reflects on why reforms are proposed and what happens when they are brought to the complex laboratory of schools.

With a background that includes teaching and serving as a school superintendent, as well as training as a historian, Larry Cuban is uniquely positioned to analyze the past century’s many waves of school change. He is author of several books, among them Teachers and Machines and Tinkering Toward Utopia. He is coeditor, with Dorothy Shipps, of a new book due out this year, Reconstructing the Common Good in Education: Coping with Intractable Dilemmas.

In this interview with EL staff members John O’Neil, Holly Cutting Baker, Carol Tell, and Marge Scherer, Cuban returns to a central theme of his research: School reforms are a product of the cultural, political, and economic forces of their times. Although critics have charged that schools are too faddish, too prone to bend to the current “reform du jour,” Cuban’s view is that the implementation and sustainability of school reforms are heavily influenced by public deliberation and discourse. After all, “schools reflect what the public wants,” Cuban reminds us.

On the whole, do you think that schools are too resistant to change or too faddish?

Our society is faddish. Schools as one institution experience these fads. Think of the corporate sector, for example. Total quality management didn’t start in the schools; it started in corporations! Medicine, the fashion industry, the media—all are subject to these gusts of innovation.

People are highly critical of schools because they seemingly bend to every new fashion, but when we begin thinking about it, we could easily say that schools are one of the most democratic institutions we have. Schools reflect what the public wants.

In what ways?

Schools are extremely vulnerable to pressures from different constituencies. So if members of a school board or a cadre of parents say that schools ought to have tutors or a new writing program, school boards have a hard time saying no. This is so especially because there is often a lack of scientific evidence that shows that one kind of innovation is clearly superior to another.

When David Tyack and I wrote Tinkering Toward Utopia, we used the metaphor of fireflies. We were speaking about the way that changes or reforms so frequently appear, shine brightly for a few moments, and then disappear again.

What innovations have the most staying power?

The innovations that have the best chance of sticking are those that have constituencies that grow around them. For example, when Title I funds were first appropriated in 1965 as part of the Elementary and Secondary Education Act, this program quickly got a lot of support from constituents, ranging from educators to parents to members of Congress. So Title I and many of the other titles of the Elementary and Secondary Education Act have stuck around, even though there has been controversy over whether Title I funds were being used effectively. Another example is the constituencies that have come together to support special education.

What else besides a constituency helps sustain a change in schools?

One of the biggest factors seems to be that the reform reflects some deep-rooted social concern for democracy, for equity, or for preparing students to lead fulfilling adult lives. Basically, schools reflect cultural, political, social, and economic changes in the larger society. The school is not an institution apart—if anything, schools tend not to be at the forefront of change in the society. They tend to reflect what the elites and coalitions of parents and taxpayers believe is important. Because of how the nation came about, there is an enormous stake in schooling as a way to improve the life chances of any child—we don’t depend on hereditary privileges being passed from generation to generation.

Can you give some examples of social changes that have promoted lasting changes in schools?

Take the example of kindergarten. The nation was industrializing rapidly, and urban living for families, particularly immigrant and poor ones, had become more difficult. Kindergartens were introduced to public schools in the 1870s; before that, there were private kindergartens that were mostly aimed at middle- and upper-income families in the Midwest and New England and other places. Public kindergartens were introduced as a way of “preserving” childhood before kids encountered the rigor of grammar school or high school, as well as of teaching parents how to live in the cities. And kindergarten slowly spread, so that by the 1960s, kindergarten was a mainstay.

This gradual growth came not only from the formation of constituencies but also from a general belief that the earlier a child learns in formal situations, the better chance that child will have at academic and financial success. Public schools have always been looked at as an escalator for social mobility, and parents have always wanted to give their children an edge. So this notion of an early start gradually became fixed, and no one today would think of banning kindergarten or preschool.

Another example is the growth of high schools, and the development of “comprehensive” high schools that provided different curricula for diverse students. Up to the turn of the century, schooling for most children ended after grade 8. But by World War I, the comprehensive high school had been introduced and enrollment expanded. Labor laws kept children in school longer—and out of the workforce, where they were competing with adults. The democratic belief that every child has a different employment future pushed school administrators to provide different curricula for different students. The high school was called comprehensive because it had a job future in mind for every kid coming to school and was seen as a very democratic institution because of the equality of economic opportunity that was presumed to be embedded in the different curricula.

What characterizes reforms that don’t stick?

The reforms that have the least potential for sticking are those that try to bring about changes in teaching, primarily because those innovations are often proposed by policymakers and officials who know little about classrooms as workplaces.

A lot of people think that because they’ve been in schools, they understand teaching, but the true complexity of the classroom is not clear to them. So what happens is that non-educators often will propose teaching innovations, and they may be successful in getting new laws and policies approved, but these policies will not necessarily be implemented. Attempts to change teaching and learning have often had a very short-term or inconsequential effect.

In Tinkering Toward Utopia, we make a clear distinction between policy talk, which is the current rhetoric in the media; policy action, which means that programs or innovations are adopted; and policy implementation, which relates to what actually occurs in the classroom. It’s important not to confuse these very different levels, but that frequently happens.

An obvious example is what’s going on with the teaching of reading. People were led to believe that reading in many classrooms was being taught through whole language because there was a lot of talk about it among educators, in journals, and in the media. Actually, most classrooms were not teaching reading through whole language alone; most teachers were using combinations of phonics and whole language to begin with. The evidence about the takeover of reading instruction by whole-language enthusiasts was very slim, but it was a great talking point for public officials who wanted to make a major issue out of it. So there’s an important distinction between the policy talk and the policy implementation, and we shouldn’t forget that.

You’re working on a new book about school technology. What can you say about how technologies are being used in the average classroom?

Computers have become one of the tools teachers use, and many teachers have in their repertoire instructional strategies that use technologies. But I think that these will still be peripheral—I don’t see the evidence that they’ll affect the core practices of teaching.

Why not?

First, I reject the argument that’s been made that teachers are resistant or incompetent or lack expertise or are technophobes. In the research we’ve done, we’ve found that teachers and students are using computers—both groups that we interviewed said that they use computers at home all the time. That made us refocus our attention on what goes on in school to try to explain the infrequent and limited use of computers for instruction even in those schools where there are abundant technological resources.

What we see is that the structure of school—for example, in the high school, where you have grades organized by age and departments—works against a lot of the changes that have to be made for technology to be used in more imaginative and creative ways. So there are institutional concerns that have to be raised about the structures of elementary and secondary schools, structures that I think come between teachers and their use of the technology.

Another reason we’ve found in our research is that the technologies themselves have flaws. Time and time again, we found teachers scrambling to cope when the server was down, or when the cascading effects of new software on two-year-old machines would cause the computers to metaphorically “blow up.” And schools can’t keep making the capital investments to purchase newer computers all the time. These are the realities facing teachers. You can’t expect a teacher to have a contingency lesson B when lesson A, which relies on using the computer, doesn’t work. That’s why teachers continue to use the textbook, the overhead projector, the chalk. They’re reliable. They’re flexible.

As you know, some analysts have said that to achieve true change in public education, we have to look to reforms that challenge the status quo of school governance. That’s one of the arguments made in support of vouchers or charter schools. What are your impressions of these as an impetus for change?

Well, changes in government do not automatically mean changes in teaching and learning. That’s often forgotten in the heat of slogans and bumper stickers about vouchers or charter schools.

To the degree that the schools can provide more choice within the public sector for parents and for children, I think that’s a plus. When I was a school superintendent in Arlington, Virginia, we encouraged more alternatives. And I believe in that. But vouchers, which call for using public funds for private purposes, give me pause. Using public funds for private purposes will ultimately decrease the resources available for public schools. And I think that’s unconscionable.

Basically, tax-supported public schools were set up in this country to build citizens, to help kids become literate, to strengthen their moral character, and, ultimately, to help them succeed in the workplace. So schools serve many essential social functions. They are institutions designed to promote democratic purposes and the common good. But the idea is that they are public. Vouchers assume a marketplace metaphor that suggests that every parent, every teacher, every school will compete to improve. Well, who’s going to be concerned about the public good? The advocates for marketplace competition and for breaking up the public monopoly forget that. Schools were set up to develop citizens who care for a community, who can contribute to that community. You don’t have that when you go to the local supermarket. You’re in there to get a product and get out.

Some surveys suggest that people have lost faith in public schools. What’s your view?

Schools are part of the larger national fabric of institutions. There has been a general erosion of faith in government institutions, period. So maybe there’s been some loss of faith, but I think that core faith that Americans have about education is still there. People believe deeply in the ability of schools to solve societal problems and to help children reach their potential. Think of that parent who wants her 2-year-old to get into a great preschool program that’s going to be the escalator to Harvard or Stanford. Think of the recent immigrants—the first thing they want is to have their kids enrolled in school. So I believe the core faith is there. It’s been rocked, but not shattered.

We’ve talked about the ways reforms have changed schools—what about the ways schools change reforms?

Schools, like other institutions, adapt most changes to fit their unique environments. Think about kindergarten, where the change—as it first emerged—was to promote the emotional, intellectual, and social development of children through play and exploration. Well, kindergartens are now becoming boot camps for the 1st grade. This trend began, by the way, in the 1930s and 1940s, although it accelerated greatly in the 1960s and 1970s. Preschools have become more like kindergartens, and both now aim to get kids ready for the 1st grade.

Another example is what’s going on right now with social promotion and accountability. The “reform” was to hold kids accountable for meeting learning goals, and a lot of policymakers were adamant that social promotion needed to end. Students who didn’t get satisfactory scores on tests should be held back.

But when these proposals collide with the complex reality of teaching and learning, there are often counter-movements, and schools must adapt again. I read recently that three states are now moving to lower their cut-off scores for holding children back or denying a diploma. This is consistent with what occurred during the last great wave of testing—the competency movement of the mid-1970s. As soon as it becomes apparent to middle-class parents that their kids are not going to be promoted, or will have to attend summer school, the official positions of school boards start to crumble.

Again, does this mean that schools have failed to “reform”? My answer is that schools as democratic institutions are continually adapting to these external pressures and, in doing so, maintain old practices as they invent new ones.

Endnote

1. Tyack, D., & Cuban, L. (1995). Tinkering toward utopia: A century of public school reform. Cambridge, MA: Harvard University Press.
