Category Archives: technology

The Virtue of Slow Software: Fewer Fads in Schools?

Online commerce has made it easier than ever to shop, right? Maybe too easy. A recent study by comparison-shopping site Finder revealed that more than 88 percent of Americans admitted to spontaneous impulse buying online, blowing an average of $81.75 each time we lose control. Clothes, videogames, concert tickets. One in five of us succumb weekly. Millennials do it the most.

With the above paragraph, journalist Clive Thompson opens his article on “Slow Software” in Wired magazine. His argument is straightforward: devices speed up our lives, encourage impulsivity, and produce buyer’s remorse. For the above example of excessive buying–which, of course, is crucial to an economy that depends upon Americans shopping–Thompson describes a piece of software that slows the shopper down.

[A team of software designers] created Icebox, a Chrome plug-in that replaces the Buy button on 20 well-known e-commerce sites with a blue button labeled “Put it on ice.” Hit it and your item goes into a queue, and a week or so later Icebox asks if you still want to buy it. In essence, it forces you to stop and ponder, “Do I really need this widget?” Odds are you don’t.
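
Thompson describes the mechanics only briefly. For curious readers, here is a rough sketch in TypeScript of how a browser content script could implement the “put it on ice” pattern. To be clear, this is my illustration, not Icebox’s actual code: the storage key, the seven-day delay, the button styling, and the function names are all invented for the example.

    // A hypothetical "put it on ice" content script, not Icebox's code.
    const ICEBOX_KEY = "icebox-queue";          // invented storage key
    const DELAY_MS = 7 * 24 * 60 * 60 * 1000;   // "a week or so"

    interface IcedItem {
      title: string;
      url: string;
      icedAt: number; // epoch milliseconds
    }

    function loadQueue(): IcedItem[] {
      return JSON.parse(localStorage.getItem(ICEBOX_KEY) ?? "[]");
    }

    function saveQueue(queue: IcedItem[]): void {
      localStorage.setItem(ICEBOX_KEY, JSON.stringify(queue));
    }

    // Swap a site's Buy button for the blue "Put it on ice" button.
    function putItOnIce(buyButton: HTMLElement): void {
      buyButton.textContent = "Put it on ice";
      buyButton.style.backgroundColor = "#3b6fd4"; // "a blue button"
      buyButton.addEventListener("click", (event) => {
        event.preventDefault();
        event.stopImmediatePropagation(); // keep the site's checkout handler from firing
        const queue = loadQueue();
        queue.push({ title: document.title, url: location.href, icedAt: Date.now() });
        saveQueue(queue);
      });
    }

    // A week or so later, pull out anything whose time is up so the
    // extension can ask, "Do you still want to buy it?"
    function thawDueItems(now: number = Date.now()): IcedItem[] {
      const queue = loadQueue();
      const due = queue.filter((item) => now - item.icedAt >= DELAY_MS);
      saveQueue(queue.filter((item) => now - item.icedAt < DELAY_MS));
      return due;
    }

The point of the sketch is the enforced pause: clicking stores the item instead of buying it, and nothing resurfaces until the delay has run.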

The pace of life has surely accelerated with the Facebook Newsfeed, incessant tweets, over-the-top Instagram pics, and pop-up ads everywhere you click on the web. With misinformation spreading swiftly on Facebook and harassment campaigns ever-present on Twitter, slowing down software seems to be a way of thinking twice before deciding on something important to us. But it is not easy, as Thompson concludes:

It’s a Sisyphean battle, I admit. Offered the choice, we nearly always opt for convenience…. Icebox is brilliant but hasn’t yet taken off. Socratic deliberation improves our lives—but, man, what a pain!

Slow software reminded me of what Steve Arnett reported in an earlier post. Ninety-eight percent of the software that school administrators purchased for classroom use was not used intensively (defined as at least 10 hours of use between assessments)–yes, 98 percent.

The apps with the most licenses purchased are ConnectEd, WeVideo, Blender Learn, and Discovery Education Streaming Plus. The apps with the most intensive users are Google Drive, Canvas, Dreambox Learning, Lexia Reading Core5 and IXL. (Some apps, such as Google Drive, have more users than licenses purchased because they offer their services for free.)

All of this got me thinking anew about who makes district decisions to buy software for classrooms and how teacher voices are muted in those district office decisions–which, of course, must ultimately be approved by boards of education.

School leaders need slow software before going on buying sprees for the teaching and learning software peddled by companies. Impulsive shopping–see the opening paragraph above–hits school leaders as it does the typical consumer surfing Amazon or similar sites. This impulse buying is the way that fads get started (hype transforms fads into “innovations”).

Of course, district officials who spend the money do not need software like Icebox to slow their decisions down for a week. Instead of slow software, they can use some old-fashioned, analog ways of decision-making that bring teachers into the decision cycle at the very beginning: teachers volunteering to try out the new software (and devices) in lessons, administrators collecting data, and a mix of teachers and administrators analyzing that data. And I do not mean token representation on committees already geared to decide on software and devices. With actual groups of teachers using software (and devices) with students, a more deliberate, considered, and informed decision can be made on which software (or devices) should get licensed for the district. Of course, this suggestion means that those who make decisions have to take time to collaborate with those who are the objects of those decisions before any district money is spent. And time is a scarce resource, especially for teachers, not to be squandered–but there are tech-savvy teachers who would relish such an opportunity.

My hunch is that there are cadres of teachers who want to be involved in classroom use of software before it is bought and would appreciate the chance to chime in with their experiences using the software in lessons. Teacher validation of an innovation aimed at teaching and learning cannot be bought or sold; it comes only from teachers using the software in lessons.

As Thompson points out, it is a struggle to restrain impulsivity when buying stuff because “[o]ffered the choice, we nearly always opt for convenience.” That applies to district leaders buying software for teachers to use in their lessons. And faddishness is the last thing that schools need when budgets are tight and retrenchment is in the air.

A “Fad Dissolver” period declared at the onset of a classroom trial–one that runs three to six months to determine how valid and useful the software is–could halt the impulse buying that so characterizes districts wanting to show how tech-savvy they are, and end the common practice of storing unused software and devices in drawers and closets.

3 Comments

Filed under technology, technology use

Changing Technologies in Classrooms

A friend and former colleague, Henry Levin, recently wrote about his experience in a 1940s classroom.

I started school in 1943, and by the time we were in third grade we were introduced to writing cursive using an ink pen.  Initially these were the pens with long tapered wooden handles with replaceable pen tips or nibs, but by sixth grade we were expected to use fountain pens because they were less messy.  I remember carefully filling my pen by maneuvering a lever on its side that compressed a rubber bladder inside to draw ink from the inkwell on its release.

I was also given the responsibility of refilling the inkwells each day or every other day.  We used huge bottles of Quink (perhaps a liter), and they had to be manipulated in just the right way to fill (three quarters), but not overfill, the inkwell.  My recollection is that this was a permanent ink that could not be removed from my clothing.  Once I dropped the entire bottle on the floor, leading to a large spill.  That required initially placing newsprint and paper tissues to soak up most of it, followed by a mopping and scrubbing with water and suds.  Still, a shadow of the ink remained, and the teacher reminded me periodically that I needed to be careful not to further damage her floor.  Towards the end of high school some very expensive ballpoint pens began to replace the ink pens, and we were no longer expected to use the ink paraphernalia. But the old desks lasted a long time.  Even in the late fifties (I was in college), I visited my old high school and found that all of the student desks still had inkwells.  Students wondered what they were for.

I also have a memory of a later technology that, like the inkwell, became obsolescent.

In the late 1960s Stanford University administrators secured federal funds to build a multi-million-dollar facility called the Stanford Center for Research and Development in Teaching (SCRDT). A fully furnished television studio with “state-of-the-art” cameras, videotape recorders, and monitors occupied the main floor, with the star in the crown of the new building being the Large-Group Instruction room (LGI).

The amphitheater-shaped room with half-circular rows looked down on a small stage with a lectern, a massive pull-down screen, and two large monitors suspended from the ceiling. At most of the individual seats was a small punch-button pad called the “student responder.” The responder contained the numbers 1-10 and the letters T and F.

At the very top of the amphitheater was a glass-enclosed technician’s station where an aide could help the professor amplify sound, provide simultaneous interpretation of various languages, show slides or films, and put on the monitors whatever data the professor wanted.  Administrators had designed the room for professors to enhance the delivery of lectures.

For lectures, the student responder came into play. Designers created the pad for students to punch in their choices, communicating their answers to the professor’s questions instantaneously: “If you agree, press 1; if you disagree, press 2.” “If the statement is true, press T.”  As students pressed the keypad, the data went directly to a mainframe computer where the students’ responses were immediately assembled and displayed for the professor at a console on the lectern. The lecturer was then able to adjust the pace and content of the lecture to this advanced interactive technology, circa 1970, that linked students to teacher.

By 1972 when I came to Stanford as a graduate student, the LGI was being used as a large lecture hall for classes from other departments. The now-disconnected keypads were toys that bored students played with during lectures. The pull-down screen was used for overheads and occasional films. The fixed position cameras purchased in the late 1960s were already beyond repair and obsolete.

In 1981, when I returned to teach at Stanford, the SCRDT had been renamed the Center for Educational Research at Stanford (CERAS). In the LGI, none of the original equipment or technology (except the sound system and simultaneous translation) was used by either students or professors. The student responders, however, were still there.

By 2011, nearly a half-century after the SCRDT installed the LGI, the amphitheater room was still in use as a regular lecture hall. I was in that room that year to hear a colleague talk about his career in education and, you guessed it, as I listened, my fingers crept over to the “student responder” and I began to click the keys.

In 2012, the LGI was renovated and the numeric pads disappeared, just as the inkwell holes in classroom desks did decades ago.*

Whoever said classrooms don’t change?

___________________________

*Thanks to Deborah Belanger for supplying the date of the LGI renovation.

10 Comments

Filed under technology

‘It’s Not a Bug, It’s a Feature.’ Trite—or Just Right? (Nicholas Carr)

Nicholas Carr is an author who has written extensively on information technology (IT) for the past 15 years. His 2010 book The Shallows was a finalist for the Pulitzer Prize. I include this recent essay of his because nearly all readers of this blog, myself included, have experienced “bugs” in the software we use daily. He tells the story of an IT phrase that has entered our idiom and become a cliché.

This appeared in Wired, August 19, 2018

We’ll never know who said it first, nor whether the coiner spoke sheepishly or proudly, angrily or slyly. As is often the case with offhand remarks that turn into maxims, the origin of It’s not a bug, it’s a feature is murky. What we do know is that the expression has been popular among programmers for a long time, at least since the days when Wang and DEC were hot names in computing. The Jargon File, a celebrated lexicon of hacker-speak compiled at Stanford in 1975 and later expanded at MIT, glossed the adage this way:

A standard joke is that a bug can be turned into a feature simply by documenting it (then theoretically no one can complain about it because it’s in the manual), or even by simply declaring it to be good. “That’s not a bug, that’s a feature!” is a common catchphrase.

When 19th-century inventors and engineers started using bug as a synonym for defect, they were talking about mechanical malfunctions, and mechanical malfunctions were always bad. The idea that a bug might actually be something desirable would never have crossed the mind of an Edison or a Tesla. It was only after the word entered the vocabulary of coders that it got slippery. It’s not a bug, it’s a feature is an acknowledgment, half comic, half tragic, of the ambiguity that has always haunted computer programming.

In the popular imagination, apps and other programs are “algorithms,” sequences of clear-cut instructions that march forward with the precision of a drill sergeant. But while software may be logical, it’s rarely pristine. A program is a social artifact. It emerges through negotiation and compromise, a product of subjective judgments and shifting assumptions. As soon as it gets into the hands of users, a whole new set of expectations comes into play. What seems an irritating defect to a particular user—a hair-trigger toggle between landscape and portrait mode, say—may, in the eyes of the programmer, be a specification expertly executed.

Who can really say? In a 2013 study, a group of scholars at a German university sifted through the records of five software projects and evaluated thousands of reported coding errors. They discovered that the bug reports were themselves thoroughly buggy. “Every third bug is not a bug,” they concluded. The title of their paper will surprise no one: “It’s Not a Bug, It’s a Feature.”

INABIAF—the initialism has earned a place in the venerable Acronym Finder—is for programmers as much a cri de coeur as an excuse. For the rest of us, the saying has taken on a sinister tone. It wasn’t long ago that we found software dazzling, all magic and light. But our perception of the programmer’s art has darkened. The friendly-seeming apps and chatbots on our phones can, we’ve learned, harbor ill intentions. They can manipulate us or violate our trust or make us act like jerks. It’s the features now that turn out to be bugs.

The flexibility of the term bug pretty much guaranteed that INABIAF would burrow its way into everyday speech. As the public flocked online during the 1990s, the phrase began popping up in mainstream media—The New York Times in 1992, The New Yorker in 1997, Time in 1998—but it wasn’t until this century that it really began to proliferate.

A quick scan of Google News reveals that, over the course of a single month earlier this year, It’s not a bug, it’s a feature appeared 146 times. Among the bugs said to be features were the decline of trade unions, the wilting of cut flowers, economic meltdowns, the gratuitousness of Deadpool 2’s post-credits scenes, monomania, the sloppiness of Neil Young and Crazy Horse, marijuana-induced memory loss, and the apocalypse. Given the right cliché, nothing is unredeemable.

The programmer’s “common catchphrase” has itself become a bug, so trite that it cheapens everything it touches. But scrub away the tarnish of overuse and you’ll discover a truth that’s been there the whole time. What is evolution but a process by which glitches in genetic code come to be revealed as prized biological functions? Each of us is an accumulation of bugs that turned out to be features, a walking embodiment of INABIAF.

 

2 Comments

Filed under technology

“Personalized Learning”: The Difference between a Policy and a Strategy

“Personalized learning”–whatever it means–has been the mantra for policymakers, technology entrepreneurs, and engaged practitioners for the past few years. Mention the phrase and those whose bent is to alter schooling nod in assent to its apparent value in teaching and learning.  Mentions of it cascade through media and research reports as if it were the epitome of the finest policy to install in classrooms.

But “personalized learning” is not a policy; it is a strategy.

What’s the difference?

Read what Yale University historian Beverly Gage writes about the crucial distinction between the two concepts:

A strategy, in politics, can be confused with a policy or a vision, but they’re not quite the same thing. Policies address the “what”; they’re prescriptions for the way things might operate in an ideal world. Strategy is about the “how.” How do you move toward a desired end, despite limited means and huge obstacles? We tend to associate strategy with high-level decision makers — generals, presidents, corporate titans — but the basic challenge of, in [Saul] Alinsky’s words, “doing what you can with what you have” applies just as much when working from the bottom up.

While the two are connected, making the distinction between policy and strategy is essential not only to political leaders but to military ones as well. Strategies are instruments to achieve policy goals. In the 17-year-old war in Afghanistan, for example, ambiguous and changing U.S. goals–get rid of the Taliban, make Afghanistan democratic, establish an effective Afghan military and police force–have greatly influenced the strategies that U.S. presidents–three since 2001–have used, such as sending special forces, the army, and the marines into the country, mounting frontal assaults on Taliban strongholds, waging counter-insurgency campaigns, etc. (see here and here).

Without recognizing this distinction between policy and strategy, military and political leaders behave as if blindfolded, taking one action while devising another plan to achieve ever-changing goals.

Photo illustration by Derek Brahney. Source image of painting: Bridgeman Images.

But the key distinction that Gage draws between policy and strategy does not apply only to politics or the military; it covers just as well the continual reform efforts to improve public schools. A successful reform often gets converted into policies–the vision–and those policies get implemented–the how–as strategies to achieve those policy goals in districts and schools.

Also keep in mind that public schools are political institutions. Taxpayers fund them. Voters elect boards of education to make policies consistent with the wishes of those who put them into office. And those policies are value-driven; that is, the policy goals that school boards and superintendents pursue in districts, that principals pursue in schools, and that teachers pursue in lessons contain community and national values or, as Gage put it above, prescriptions for the way things might operate in an ideal world. Of course, these value-laden goals, e.g., build citizens, strengthen students’ moral character, ensure children’s well-being, prepare graduates for jobs, can be contested and, again, become political as tax levies and referenda on bilingual or English-only instruction get voted up or down. So policies do differ from strategies in schooling. The distinction becomes important particularly when it comes to media-enhanced school reforms.

In light of this distinction, consider “personalized learning.” When I ask teachers, principals, superintendents, and members of school boards the question about “personalized learning”–toward what ends?–I get stares and then answers that are all over the landscape: higher test scores, reducing the achievement gap between minorities and whites, getting better jobs, and motivating students to lifelong learning (see here).

The question is essential because entrepreneurs, advocates, and promoters  pushing “personalized learning” expect practitioners to reorganize time and space in schools, secure new talent, buy extensive hardware and software, shift from teacher-centered to student-centered instruction, and provide scads of professional development to those putting what has now become a policy into practice.

The fact is that “personalized learning” is not a policy; it is a strategy. What has happened here, as it has in politics and the military, is that a “strategy” has become the desired end, replacing the initial policy goal.  Leaders forget that a policy is a “what,” a prescription for the way things might operate better than they do, a solution to a problem–not a “how”: how do you move toward a desired end despite limited means and huge obstacles? While this switch from policy to strategy is common, it is self-defeating (and consequential) in an organization aiming to help children and youth live in the here and now while getting ready for an uncertain future.

The fundamental question that must be asked of “personalized learning” is: toward what ends? It seldom gets asked, much less answered, without flabby phrases or impenetrable jargon. The conflicts that arise when the goals of PL are unclear or ambiguous (or worse, unexplored) occur because PL as a strategy–the “how”–has morphed into the “what” of a policy. Here is what Facebook’s Mark Zuckerberg says:

We want to make sure that [PL], which seems like a good hypothesis and approach, gets a good shot at getting tested and implemented.

One example taken from a recent report on PL:

Personalized learning is rooted in the expectation that students should progress through content based on demonstrated learning instead of seat time. By contrast, standards-based accountability centers its ideas about what students should know, and when, on grade-level expectations and pacing. The result is that as personalized learning models become more widespread, practitioners are increasingly encountering tensions between personalized learning and state and federal accountability structures.

Note these conflicts between PL and standards-based accountability–both of which are strategies to achieve higher test scores, change school organization, raise students’ self-confidence in mastering content, and demonstrate responsibility to voters. Nothing, however, is ever said about how raising test scores, altering how schools are organized, lifting students’ self-esteem, or holding schools accountable to voters is connected to graduating engaged citizens, shaping humane adults, getting jobs in an ever-changing workplace, or reducing economic inequalities.  These are the policy ends that Americans say they want for their public schools. Instead, distinctions between policy and strategy go unnoticed and the “how” becomes far more important than the “what.”

13 Comments

Filed under school reform policies, technology

12 Things Everyone Should Understand About Tech (Anil Dash)

“Anil Dash is an entrepreneur, activist and writer recognized as one of the most prominent voices advocating for a more humane, inclusive and ethical technology industry. He is the CEO of Fog Creek Software, the renowned independent tech company behind Glitch, the friendly new community that helps anyone make the app of their dreams, as well as its past landmark products like Trello and Stack Overflow.

Dash was an advisor to the Obama White House’s Office of Digital Strategy, and today advises major startups and non-profits including Medium and DonorsChoose. He also serves as a board member for companies like Stack Overflow, the world’s largest community for computer programmers, and non-profits like the Data & Society Research Institute, whose research examines the impact of tech on society and culture; the NY Tech Alliance, America’s largest tech trade organization; and the Lower East Side Girls Club, which serves girls and families in need in New York City…. Dash is based in New York City, where he lives with his wife Alaina Browne and their son Malcolm. Dash has never played a round of golf, drank a cup of coffee, or graduated from college.”

This post appeared March 14, 2018 on Humane Tech

Tech is more important than ever, deeply affecting culture, politics and society. Given all the time we spend with our gadgets and apps, it’s essential to understand the principles that determine how tech affects our lives.

Understanding technology today

Technology isn’t an industry, it’s a method of transforming the culture and economics of existing systems and institutions. That can be a little bit hard to understand if we only judge tech as a set of consumer products that we purchase. But tech goes a lot deeper than the phones in our hands, and we must understand some fundamental shifts in society if we’re going to make good decisions about the way tech companies shape our lives—and especially if we want to influence the people who actually make technology.

Even those of us who have been deeply immersed in the tech world for a long time can miss the driving forces that shape its impact. So here, we’ll identify some key principles that can help us understand technology’s place in culture.

What you need to know:

1. Tech is not neutral.

One of the most important things everybody should know about the apps and services they use is that the values of technology creators are deeply ingrained in every button, every link, and every glowing icon that we see. Choices that software developers make about design, technical architecture or business model can have profound impacts on our privacy, security and even civil rights as users. When software encourages us to take photos that are square instead of rectangular, or to put an always-on microphone in our living rooms, or to be reachable by our bosses at any moment, it changes our behaviors, and it changes our lives.

All of the changes in our lives that happen when we use new technologies do so according to the priorities and preferences of those who create those technologies.

2. Tech is not inevitable.

Popular culture presents consumer technology as a never-ending upward progression that continuously makes things better for everybody. In reality, new tech products usually involve a set of tradeoffs where improvements in areas like usability or design come along with weaknesses in areas like privacy & security. Sometimes new tech is better for one community while making things worse for others. Most importantly, just because a particular technology is “better” in some way doesn’t guarantee it will be widely adopted, or that it will cause other, more popular technologies to improve.

In reality, technological advances are a lot like evolution in the biological world: there are all kinds of dead-ends or regressions or uneven tradeoffs along the way, even if we see broad progress over time.

3. Most people in tech sincerely want to do good.

We can be thoughtfully skeptical and critical of modern tech products and companies without having to believe that most people who create tech are “bad”. Having met tens of thousands of people around the world who create hardware and software, I can attest that the cliché that they want to change the world for the better is a sincere one. Tech creators are very earnest about wanting to have a positive impact. At the same time, it’s important for those who make tech to understand that good intentions don’t absolve them from being responsible for the negative consequences of their work, no matter how well-intentioned.

It’s useful to acknowledge the good intentions of most people in tech because it lets us follow through on those intentions and reduce the influence of those who don’t have good intentions, and to make sure the stereotype of the thoughtless tech bro doesn’t overshadow the impact that the majority of thoughtful, conscientious people can have. It’s also essential to believe that there is good intention underlying most tech efforts if we’re going to effectively hold everyone accountable for the tech they create.

4. Tech history is poorly documented and poorly understood.

People who learn to create tech can usually find out every intimate detail of how their favorite programming language or device was created, but it’s often near impossible to know why certain technologies flourished, or what happened to the ones that didn’t. While we’re still early enough in the computing revolution that many of its pioneers are still alive and working to create technology today, it’s common to find that tech history as recent as a few years ago has already been erased. Why did your favorite app succeed when others didn’t? What failed attempts were made to create such apps before? What problems did those apps encounter — or what problems did they cause? Which creators or innovators got erased from the stories when we created the myths around today’s biggest tech titans?

All of those questions get glossed over, silenced, or sometimes deliberately answered incorrectly, in favor of building a story of sleek, seamless, inevitable progress in the tech world. Now, that’s hardly unique to technology — nearly every industry can point to similar issues. But that ahistorical view of the tech world can have serious consequences when today’s tech creators are unable to learn from those who came before them, even if they want to.

5. Most tech education doesn’t include ethical training.

In mature disciplines like law or medicine, we often see centuries of learning incorporated into the professional curriculum, with explicit requirements for ethical education. Now, that hardly stops ethical transgressions from happening—we can see deeply unethical people in positions of power today who went to top business schools that proudly tout their vaunted ethics programs. But that basic level of familiarity with ethical concerns gives those fields a broad fluency in the concepts of ethics so they can have informed conversations. And more importantly, it ensures that those who want to do the right thing and do their jobs in an ethical way have a firm foundation to build on.

But until the very recent backlash against some of the worst excesses of the tech world, there had been little progress in increasing the expectation of ethical education being incorporated into technical training. There are still very few programs aimed at upgrading the ethical knowledge of those who are already in the workforce; continuing education is largely focused on acquiring new technical skills rather than social ones. There’s no silver-bullet solution to this issue; it’s overly simplistic to think that simply bringing computer scientists into closer collaboration with liberal arts majors will significantly address these ethics concerns. But it is clear that technologists will have to rapidly become fluent in ethical concerns if they want to continue to have the widespread public support that they currently enjoy.

6. Tech is often built with surprising ignorance about its users.

Over the last few decades, society has greatly increased in its respect for the tech industry, but this has often resulted in treating the people who create tech as infallible. Tech creators now regularly get treated as authorities in a wide range of fields like media, labor, transportation, infrastructure and political policy — even if they have no background in those areas. But knowing how to make an iPhone app doesn’t mean you understand an industry you’ve never worked in!

The best, most thoughtful tech creators engage deeply and sincerely with the communities that they want to help, to ensure they address actual needs rather than indiscriminately “disrupting” the way established systems work. But sometimes, new technologies run roughshod over these communities, and the people making those technologies have enough financial and social resources that the shortcomings of their approaches don’t keep them from disrupting the balance of an ecosystem. Often times, tech creators have enough money funding them that they don’t even notice the negative effects of the flaws in their designs, especially if they’re isolated from the people affected by those flaws. Making all of this worse are the problems with inclusion in the tech industry, which mean that many of the most vulnerable communities will have little or no representation amongst the teams that create new tech, preventing those teams from being aware of concerns that might be of particular importance to those on the margins.

7. There is never just one single genius creator of technology.

One of the most popular representations of technology innovation in popular culture is the genius in a dorm room or garage, coming up with a breakthrough innovation as a “Eureka!” moment. It feeds the common myth-making around people like Steve Jobs, where one individual gets credit for “inventing the iPhone” when it was the work of thousands of people. In reality, tech is always informed by the insights and values of the community where its creators are based, and nearly every breakthrough moment is preceded by years or decades of others trying to create similar products.

The “lone creator” myth is particularly destructive because it exacerbates the exclusion problems which plague the tech industry overall; those lone geniuses that are portrayed in media are seldom from backgrounds as diverse as people in real communities. While media outlets may benefit from being able to give awards or recognition to individuals, or educational institutions may be motivated to build up the mythology of individuals in order to bask in their reflected glory, the real creation stories are complicated and involve many people. We should be powerfully skeptical of any narratives that indicate otherwise.

8. Most tech isn’t from startups or by startups.

Only about 15% of programmers work at startups, and in many big tech companies, most of the staff aren’t even programmers anyway. So the focus on defining tech by the habits or culture of programmers that work at big-name startups deeply distorts the way that tech is seen in society. Instead, we should consider that the majority of people who create technology work in organizations or institutions that we don’t think of as “tech” at all.

What’s more, there are lots of independent tech companies — little indie shops or mom-and-pop businesses that make websites, apps, or custom software, and a lot of the most talented programmers prefer the culture or challenges of those organizations over the more famous tech titans. We shouldn’t erase the fact that startups are only a tiny part of tech, and we shouldn’t let the extreme culture of many startups distort the way we think about technology overall.

9. Most big tech companies make money in just one of three ways.

It’s important to understand how tech companies make money if you want to understand why tech works the way that it does.

  • Advertising: Google and Facebook make nearly all of their money from selling information about you to advertisers. Almost every product they create is designed to extract as much information from you as possible, so that it can be used to create a more detailed profile of your behaviors and preferences, and the search results and social feeds made by advertising companies are strongly incentivized to push you toward sites or apps that show you more ads from these platforms. It’s a business model built around surveillance, which is particularly striking since it’s the one that most consumer internet businesses rely upon.
  • Big Business: Some of the larger (generally more boring) tech companies like Microsoft and Oracle and Salesforce exist to get money from other big companies that need business software but will pay a premium if it’s easy to manage and easy to lock down the ways that employees use it. Very little of this technology is a delight to use, especially because the customers for it are obsessed with controlling and monitoring their workers, but these are some of the most profitable companies in tech.
  • Individuals: Companies like Apple and Amazon want you to pay them directly for their products, or for the products that others sell in their store. (Although Amazon’s Web Services exist to serve that Big Business market, above.) This is one of the most straightforward business models—you know exactly what you’re getting when you buy an iPhone or a Kindle, or when you subscribe to Spotify, and because it doesn’t rely on advertising or cede purchasing control to your employer, companies with this model tend to be the ones where individual people have the most power.

That’s it. Pretty much every company in tech is trying to do one of those three things, and you can understand why they make their choices by seeing how it connects to these three business models.

10. The economic model of big companies skews all of tech.

Today’s biggest tech companies follow a simple formula:

  1. Make an interesting or useful product that transforms a big market
  2. Get lots of money from venture capital investors
  3. Try to quickly grow a huge audience of users even if that means losing a lot of money for a while
  4. Figure out how to turn that huge audience into a business worth enough to give investors an enormous return
  5. Start ferociously fighting (or buying off) other competitive companies in the market

This model looks very different than how we think of traditional growth companies, which start off as small businesses and primarily grow through attracting customers who directly pay for goods or services. Companies that follow this new model can grow much larger, much more quickly, than older companies that had to rely on revenue growth from paying customers. But these new companies also have much lower accountability to the markets they’re entering because they’re serving their investors’ short-term interests ahead of their users’ or community’s long-term interests.

The pervasiveness of this kind of business plan can make competition almost impossible for companies without venture capital investment. Regular companies that grow based on earning money from customers can’t afford to lose that much money for that long a time. It’s not a level playing field, which often means that companies are stuck being either little indie efforts or giant monstrous behemoths, with very little in between. The end result looks a lot like the movie industry, where there are tiny indie arthouse films and big superhero blockbusters, and not very much else.

And the biggest cost for these big new tech companies? Hiring coders. They pump the vast majority of their investment money into hiring and retaining the programmers who’ll build their new tech platforms. Precious little of these enormous piles of money are put into things that will serve a community or build equity for anyone other than the founders or investors in the company. There is no aspiration that making a hugely valuable company should also imply creating lots of jobs for lots of different kinds of people.

11. Tech is as much about fashion as function.

To outsiders, creating apps or devices is presented as a hyper-rational process where engineers choose technologies based on which are the most advanced and appropriate to the task. In reality, the choice of things like programming languages or toolkits can be subject to the whims of particular coders or managers, or to whatever’s simply in fashion. Just as often, the process or methodology by which tech is created can follow fads or trends that are in fashion, affecting everything from how meetings are run to how products are developed.

Sometimes the people creating technology seek novelty, sometimes they want to go back to the staples of their technological wardrobe, but these choices are swayed by social factors in addition to an objective assessment of technical merit. And a more complex technology doesn’t always equal a more valuable end product, so while many companies like to tout how ambitious or cutting-edge their new technologies are, that’s no guarantee that they provide more value for regular users, especially when new technologies inevitably come with new bugs and unexpected side-effects.

12. No institution has the power to rein in tech’s abuses.

In most industries, if companies start doing something wrong or exploiting consumers, they’ll be reined in by journalists who will investigate and criticize their actions. Then, if the abuses continue and become serious enough, the companies can be sanctioned by lawmakers at the local, state, national, or international level.

Today, though, much of the tech trade press focuses on covering the launch of new products or new versions of existing products, and the tech reporters who do cover the important social impacts of tech are often relegated to being published alongside reviews of new phones, instead of being prominently featured in business or culture coverage. Though this has started to change as tech companies have become absurdly wealthy and powerful, coverage is also still constrained by the culture within media companies. Traditional business reporters often have seniority in major media outlets, but are commonly illiterate in basic tech concepts in a way that would be unthinkable for journalists who cover finance or law. Meanwhile, dedicated tech reporters who may have a better understanding of tech’s impact on culture are often assigned to (or inclined to) cover product announcements instead of broader civic or social concerns.

The problem is far more serious when we consider regulators and elected officials, who often brag about their illiteracy about tech. Having political leaders who can’t even install an app on their smartphones makes it impossible to understand technology well enough to regulate it appropriately, or to assign legal accountability when tech’s creators violate the law. Even as technology opens up new challenges for society, lawmakers lag tremendously behind the state of the art when creating appropriate laws.

Without the corrective force of journalistic and legislative accountability, tech companies often run as if they’re completely unregulated, and the consequences of that reality usually fall on those outside of tech. Worse, traditional activists who rely on conventional methods such as boycotts or protests often find themselves ineffective due to the indirect business model of giant tech companies, which can rely on advertising or surveillance (“gathering user data”) or venture capital investment to continue operations even if activists are effective in identifying problems.

This lack of systems of accountability is one of the biggest challenges facing tech today.

If we understand these things, we can change tech for the better.

If everything is so complicated, and so many important points about tech aren’t obvious, should we just give up hope? No.

Once we know the forces that shape technology, we can start to drive change. If we know that the biggest cost for the tech giants is attracting and hiring programmers, we can encourage programmers to collectively advocate for ethical and social advances from their employers. If we know that the investors who power big companies respond to potential risks in the market, we can emphasize that their investment risk increases if they bet on companies that act in ways that are bad for society.

If we understand that most in tech mean well, but lack the historic or cultural context to ensure that their impact is as good as their intentions, we can ensure that they get the knowledge they need to prevent harm before it happens.

So many of us who create technology, or who love the ways it empowers us and improves our lives, are struggling with the many negative effects that some of these same technologies are having on society. But perhaps if we start from a set of common principles that help us understand how tech truly works, we can start to tackle technology’s biggest problems.

10 Comments

Filed under technology, technology use

Reflections on 2017

EdSurge asked me to offer reflections and predictions for 2017. The following  appeared in EdSurge, December 27, 2017.

As someone who has taught high school history, led a school district, and researched the history of school reform–including the use of new technologies in classrooms–over the past half-century, I found little that startled me in 2017, except for one event noted below. For digital tools in classrooms, it was the same ol’, same ol’.

Sure, I am an oldster and have seen a lot of school reform, both successes and failures, but I am neither a pessimist nor a nay-sayer about public schools. I am a tempered idealist who is cautiously optimistic about what U.S. public schools have done and still can do for children, the community, and the nation. Both the idealism and the optimism—keep in mind the adjectives I used to modify the nouns—have a lot to do with what I have learned over the decades about school reform, especially when it comes to technology. So for 2017, I offer no lessons that will shock, but ones distilled from my experience.

LESSON 1

When it comes to student use of classroom technologies, talk and action are both important. Differentiating between the two is crucial.

Anyone interested in improving schooling through digital tools has to distinguish media surges of hyped news about, say, personalized learning transforming schools or virtual reality devices in classrooms from the actual policies that get adopted (e.g., standards, testing, and accountability; buying 1:1 devices).

Then one has to further distinguish between the hyperbole and adopted policies and programs before determining what teachers actually do in their classroom lessons. The process is the same as parsing hyped ads from the unwrapped product in your hand.

These distinctions are crucial in making sense of what teachers do once the classroom door closes.

LESSON 2

Access to digital tools is not the same as what happens in daily classroom activities.

District purchases of hardware and software continue to go up. In 1984, there were 125 students for each computer; now the ratio is around 3:1 and in many places 1:1. Nothing startling here—the trend line in buying stuff began to go up in the early years of this century and that trend continues. Because this nearly ubiquitous access to new technologies has spread across urban, suburban, exurban, and rural school districts, too many pundits and promoters leap to the conclusion that all teachers integrate these digital tools into daily practice seamlessly. While surely the use of devices and software has gained full entry into classrooms, anyone who regularly visits classrooms sees the wild variation in lessons among teachers using digital technologies.

Yes, teachers have surely incorporated digital tools into daily practice but—there is always a “but”—even those who have thoroughly integrated new technologies into their lessons reveal both change and stability in their teaching.

In 2016, I visited 41 elementary and secondary teachers in Silicon Valley who had a reputation for integrating technology into their daily lessons.

They were hard working, sharp teachers who used digital tools as familiarly as paper and pencil. Devices and software were in the background, not foreground. The lessons they taught were expertly arranged with a variety of student activities. These teachers had, indeed, made changes in creating playlists for students, pursuing problem-based units, and organizing the administrative tasks of teaching.

But I saw no fundamental or startling changes in the usual flow of lessons—setting goals, designing varied activities and groupings, eliciting student participation, assessing student understanding— that differed from earlier generations of experienced teachers. The lessons I observed were teacher-directed and post-observation interviews revealed continuity in how teachers have taught for decades. Again, stability and change in teaching with digital tools.

Oh yes, there was one event that did startle me. That was the election of Donald Trump as President. I do not believe that his tenure in the White House or that of his Secretary of Education will alter the nation’s direction in schooling–my first prediction. The Every Student Succeeds Act (2015) shifts policymaking from federal to state offices. Sure, there is much talk in D.C. about more choice, charters, and vouchers, but much of it remains talk. Little change in what schools do or what happens in classrooms will occur.

What is disturbing is the President’s disregard for being informed, making judgments based on whim, tweeting racist statements, and telling lies (Politifact has documented 325 Trump statements that it judges mostly or entirely false). These Presidential actions in less than a year have already shaped a popular culture where “fake news,” “truthful hyperbole,” and “post-truth” are often-used phrases.

Indirectly, the election of Donald Trump—and here is my second prediction—will spark a renaissance in districts and schools working on critical thinking skills and in teachers and students parsing mainstream and social media for accuracy. Maybe the next generation will respect facts, think more logically, be clearer thinkers, and be more intellectually curious than our current President.

8 Comments

Filed under technology

A Few Teachers Speak Out on Technology in Their Classrooms

I am fortunate to have many readers who are classroom teachers. I have published posts over the past year about my research on teachers identified as exemplary in integrating technology into their lessons. Some of those posts triggered responses from teachers. I offer a few of those comments here.

Louise Kowitch, retired social studies teacher from Connecticut:

….The impact of technology can vary greatly depending on the subject matter (among all the other things you’ve addressed). While some pedagogical practices are universal, when “doing the work of the discipline,” content-specific practices, and by extension the impact of technology, might vary widely.

I mention this to say that as someone who lived through the IT revolution in the classroom (from mimeographs, scantrons, and filmstrips to floppy disks and CD-ROMs, and finally to smart boards, Skype and Chromebooks), by the time I reached three decades as a full-time classroom teacher, I was spending MORE time on my lessons and interacting with students, not less. Some tasks were indeed more efficient (for example, obtaining and sharing maps, artifacts, art, primary sources). Others, like collecting data about student performance for our superintendent, became arduous, weekend-long affairs that sucked the life out of the joy of teaching.

That said, I loved how Chromebooks and Smartboards freed up my instruction to empower students to do their own research and conduct substantive debates. For example, a simulation of the post WWI debates over the Treaty of Versailles from the perspectives of different countries – something I had done before Chromebooks – became a powerful lesson for students in the art of diplomacy, the value of historical perspective, and the grind of politics, as a result of THEIR OWN RESEARCH, not my selection of primary sources. This was MORE time consuming (2 weeks of instructional time, not 8 days) and LESS EFFICIENT, but MORE STUDENT CENTERED and COLLABORATIVE.

Was it “better” instruction? Yes, if the point was for kids to experience “the art of negotiation”. No, if it meant having to drop a four-day mini-unit on elections in the Weimar Republic that I used to do after the WWI unit. Something is lost, and something is gained. Like you, I grapple with whether it’s a zero-sum game.

Garth Flint, high school computer science teacher and technology coordinator at a Montana private school:

My question has always been what effect does the increase in classroom tech have on the students? Do they do better throughout the years? How do we measure “better”? We have an AP History teacher who is very traditional. Kids listen to the lecture and copy the notes on the whiteboard.
About the only tech he uses are some minor YouTube videos. His AP test results are outstanding. Would any tech improve on those results? At the middle school we have a teacher who uses a Smartboard extensively. It has changed how he does his math lectures. But he is still lecturing. Has the Smartboard improved student learning? I do not know. I have observed teachers that have gone full tech: Google Docs, 1:1, videos of lectures online, reversed classroom, paperless. Their prep time increased. Student results seemed (just from my observation; I did not measure anything) to be the same as in a non-tech classroom. It would be interesting to have two classrooms of the same subject at the same grade level, one high-tech, one old-school, and feed those students into the same classroom the next year. Ask that next-year teacher if there is a measurable difference between the groups.

Laura H. Chapman, retired art teacher from Ohio:

“So answering the question of whether widespread student access and teacher use of technologies has “changed daily classroom practices” depends upon who is the asker, who is the doer, and what actually occurs in the classroom.”

Some other questions.
Who is asking questions about the extent of access and use of technology by students and teachers and why? Who is not asking such questions, and why not?

Is there a map of “daily classroom practices” for every subject and grade/or developmental level such that changes in these practices over time can be monitored with the same teachers in the same teaching assignments?

Are there unintended consequences of widespread student access and teacher use of technologies other than “changes in daily classroom practices”? Here I am thinking about the risky business of assuming that change is not only inevitable but also positive (e.g., invigorates teaching and learning, makes everything more “efficient”).

Who is designing the algorithms, the apps, the dashboards, the protocols for accessing edtech resources, who is marketing these and mining the data from these technologies, and why? These questions bear on the direct costs and benefits of investments and indirect costs/benefits….

12 Comments

Filed under how teachers teach, technology