In Part 1, I pointed out earlier predictions of futuristic schools and described one in New York City that offers “blended” and “personalized” learning. The school mixes multiple ways of teaching, including software customized to each student’s math knowledge and skills. It is an example of automated teaching and learning that champions of school technology see as the future of schooling. Maybe algorithms will indeed become standard in the next generation so that by 2025, schools will no longer be recognizable. But “maybe” not.
There are fewer “maybes,” however, when it comes to the spread of automation in the U.S. beyond ATMs, supermarket self-checkout counters, and industrial robots. Piloting jumbo jets, driving cars, trading stocks on Wall Street, practicing law and medicine, and other occupations once thought to be invulnerable to automation are now either wholly run by software or largely guided by programmed instructions.
Once a job or task, from reading X-rays to robo-calling homes, becomes automated, there is no reversing it. Familiar jobs disappear and new jobs pop up. Yet a “technological unemployment gap” has emerged between the many food-service, call-center, sales, and health-care jobs at one end and high-finance and top software-design jobs at the other. Middle-skills jobs (e.g., machinists, accountants, paralegals, stock market traders, architects) either have disappeared or are shrinking.
It is, of course, the private sector that initially adopts automated work. The for-profit imperative to increase efficiency and productivity while decreasing labor costs drives (and has driven) the pursuit of automation. The public sector usually follows in adopting programmed work. Automated machines do remove drudgery; software does increase individual and collective productivity. So this is no argument that automation is bad. But there are consequences of having more and more work automated that must be considered and evaluated.
For example, once particular jobs and tasks have become automated, researchers and journalists (see here and here) have pointed out that deskilling of work occurs. That is, the people who watch the dials, punch in data, and respond to computer-generated instructions end up losing their smarts and key skills. They make mistakes. They get dumber.
Consider piloting planes, driving cars, and caring for patients.*
On typical passenger flights, pilots have their hands on the controls for about three minutes–about a minute or so to take off and another minute or so to land. What they do most of the time in flight is check their screens, type in data, and talk to one another. As one researcher wrote: “As automation has gained in sophistication, the role of the pilot has shifted toward becoming a monitor or supervisor of the automation.”
Researchers have found that the essential manual and cognitive skills pilots need degrade from constant reliance on automated controls. The Federal Aviation Administration (FAA) released a report in 2010 finding that in the previous decade pilot errors had been involved in nearly two-thirds of crashes. A veteran United Airlines pilot put the dependence on automatic controls concisely: “We’re forgetting how to fly.”
The safety record of airlines, measured as passenger deaths per million miles, remains far better than that of automobiles, yet when aircraft crashes do occur, hundreds of passengers die.
In 2013, the FAA, after gathering data from crash investigations, reports of near-crashes, and research into cockpit communication among pilots, sent to all airlines a safety notice that said over-reliance on flight software could “lead to degradation of the pilot’s ability to quickly recover the aircraft from an undesired state.” The notice “encourages [pilots] to promote manual flight operations when appropriate.”
Google and other companies have developed and are refining software (sets of precise, written instructions) that captures human “sensory perceptions, pattern recognition, and conceptual knowledge” sufficiently to dispense with human drivers. The director of self-driving cars at Google predicted last week that within five years driverless cars will be on the market. Four states and D.C. already allow fully self-driving cars on their roads. Driverless cars free owners to sit back, doing work, communicating with family and friends, and completing the daily crossword puzzle. Even if their driving skills erode, the convenience of negotiating commutes and shopping while getting other work done in the car compensates for the loss of knowing how to turn left, hit the brakes, and accelerate when necessary.
Digging a little deeper, however, reveals that Google requires backup drivers to take over the experimental car “manually … on most urban and residential streets and any employee who wants to operate one of the vehicles has to complete rigorous training in emergency driving techniques.” Sure, software algorithms can decide thousands of actions to take when driving, but so far there are thousands of other decisions that humans make, such as distinguishing between a chunk of plastic and a child at night, where current software falls short.
Even before the widespread deskilling of millions of car drivers occurs, larger issues of who is legally and morally responsible for accidents that damage cars and kill pedestrians will have to be publicly debated.
Caring for patients
Digitizing the paper charts of 900,000 doctors and over 4,000 hospitals into Electronic Medical Records (EMRs) has surged with federal actions under Presidents George W. Bush and Barack Obama. Billions of dollars have been (and are being) spent on EMRs. The belief is that EMRs will decrease health costs, increase efficiency, and significantly improve patient health. So far, with billions spent and for-profit companies that supply automated systems posting huge gains in revenue, computerizing medical records has yet to reduce health costs, increase efficiency, or improve the well-being of patients (see here, here, here, and here).
The issue of deskilling physicians through digitized health records has also emerged. A number of researchers pointed out that many doctors used cut-and-paste functions on computers to add boiler-plate language into EMRs when reporting patient visits. Previously clinicians dictated or wrote notes that “gave greater consideration to the quality and uniqueness of the information being [put] into the record.”
Another issue concerns doctors swiveling their chairs during a patient visit, turning between the patient and the computer to tap information into an EMR. In effect, as studies have shown, the computer itself “competes with the patient for the clinicians’ attention,” thereby affecting their “capacity to be fully present, and alters the nature of communication, relationships, and the physicians’ sense of professional role.” In short, physicians’ skills slowly erode.
I have seen the effects of EMRs first-hand as a patient over the past decade. One doctor now wears Google Glass and has a scribe in another room transcribing the audio of our conversation. My doctor can now attend directly to me rather than turn in his chair to the computer. By having doctors spend less time on EMRs during patient visits, that Health Maintenance Organization (HMO) hopes they can see more patients each week.
I offer these examples of piloting aircraft, driverless cars, and caring for patients to make the basic point that automation continues to spread: as confidence grows in artificial intelligence and sophisticated software making algorithmic decisions, automation touches professionals in every part of our daily lives.
And what about the school and classroom becoming automated? I turn to that question in Part 3.
*I draw quotes and examples from Nicholas Carr, The Glass Cage.
7 responses to “Automated Living in U.S. (Part 2)”
Reblogged this on The Echo Chamber.
Thanks for re-blogging post on automated living in U.S., Andrew.
Reblogged this on Deborah Meier on Education and commented:
Thanks to Larry Cuban, again and again. He helps me see what forces are driving the de-personalizing of human relationships. The automation of our humanity. Think of all the sci fi we’ve absorbed about this. Yes, it’s related to the profit motive–inexorably, I fear. And it’s moving fast, starting with the youngest who relate now not only “not to people” but not even to dolls–or other people-like or living objects as we replace play with computerized devices and school lessons. It’s a good moment for re-reading Mike Rose’s Lives on the Boundary–and reminding ourselves of the power of the human touch, the human voice, the human interaction.
Thanks for re-blogging the post, Deb.
I’ve recently been advising a US client on adaptive learning applications. My research found this gem which seems acutely apposite here.
“A quantitative research synthesis (meta-analysis) was conducted on the literature concerning the effects of feedback on learning from computer-based instruction (CBI). Despite the widespread acceptance of feedback in computerized instruction, empirical support for particular types of feedback information has been inconsistent and contradictory….Results indicate that the diagnostic and prescriptive management strategies of computer-based adaptive instructional systems provide the most effective feedback. The implementation of effective feedback in computerized instruction involves the computer’s ability to verify the correctness of the learner’s answer and the underlying causes of error.” Azevedo & Bernard 1995, A Meta-Analysis of the Effects of Feedback in Computer-Based Instruction.
It seems the machine’s ability to understand those pesky little “underlying causes of error” is what really matters. This spells out the problem of automated living, as it relates to teaching. Teachers “learn” what the child did wrong through unremitting, complex interactions with real, flesh and blood children: through experience. Machines “guess” …because they have nothing to go on but the input from a keyboard.
Thanks, Joe, for the quote and your translation of it into clear terms.
Pingback: Larry Cuban’s three-part series on automation in education | Citizen4: A citizen's blog about Champaign Unit 4