How To Build Artificial Intelligence We Can Trust (Gary Marcus and Ernest Davis)

For those K-12 educators and higher education professors who bite their nails over whether automation will replace teachers with robots who make out seating charts, answer student questions, explain the causes of the Civil War, demonstrate shortcuts for solving quadratic equations, wipe kindergartners’ noses, and hug crying 3rd graders bullied during recess—stop biting your nails. AI will not replace you.

This op-ed appeared in the New York Times on September 7, 2019.

“Gary Marcus, the founder and chief executive of Robust AI, and Ernest Davis, a professor of computer science at New York University, are the authors of the forthcoming book ‘Rebooting AI: Building Artificial Intelligence We Can Trust,’ from which this essay is adapted.”

Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn’t yet earned our confidence.

Tesla cars driving in Autopilot mode, for example, have a troubling history of crashing into stopped vehicles. Amazon’s facial recognition system works great much of the time, but when asked to compare the faces of all 535 members of Congress with 25,000 public arrest photos, it found 28 matches, when in reality there were none. A computer program designed to vet job applicants for Amazon was discovered to systematically discriminate against women. Every month new weaknesses in A.I. are uncovered.

The problem is not that today’s A.I. needs to get better at what it does. The problem is that today’s A.I. needs to try to do something completely different.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.

Today’s A.I. systems know surprisingly little about any of these concepts. Take the idea of time. We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework. None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.
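
To make the authors’ point concrete, here is a minimal sketch (our own illustration, not theirs, and the year electronic computers became available is a rough assumption) of the temporal reasoning the question requires: put both facts on a single timeline and check whether the two intervals overlap.

```python
# A minimal sketch of the temporal reasoning described above (our illustration,
# not anything drawn from the op-ed or from Google's systems).

def intervals_overlap(a_start, a_end, b_start, b_end):
    """Return True if the two year ranges share at least one year."""
    return a_start <= b_end and b_start <= a_end

WASHINGTON_LIFESPAN = (1732, 1799)   # George Washington's birth and death years
COMPUTERS_EXIST = (1940, 9999)       # electronic computers, roughly the 1940s onward

possible = intervals_overlap(*WASHINGTON_LIFESPAN, *COMPUTERS_EXIST)
print("Did George Washington own a computer?", "Possibly" if possible else "No")
# Prints "No": his lifetime ends before computers exist, so ownership is impossible.
```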

Google’s Talk to Books, an A.I. venture that aims to answer your questions by providing relevant passages from a huge database of texts, did no better. It served up 20 passages with a wide array of facts, some about George Washington, others about the invention of computers, but with no meaningful connection between the two.

The situation is even worse when it comes to A.I. and the concepts of space and causality. Even a young child, encountering a cheese grater for the first time, can figure out why it has holes with sharp edges, which parts allow cheese to drop through, which parts you grasp with your fingers and so on. But no existing A.I. can properly understand how the shape of an object is related to its function. Machines can identify what things are, but not how something’s physical features correspond to its potential causal effects.

For certain A.I. tasks, the dominant data-correlation approach works fine. You can easily train a deep-learning machine to, say, identify pictures of Siamese cats and pictures of Derek Jeter, and to discriminate between the two. This is why such programs are good for automatic photo tagging. But they don’t have the conceptual depth to realize, for instance, that there are lots of different Siamese cats but only one Derek Jeter and that therefore a picture that shows two Siamese cats is unremarkable, whereas a picture that shows two Derek Jeters has been doctored.

In no small part, this failure of comprehension is why general-purpose robots like the housekeeper Rosie in “The Jetsons” remain a fantasy. If Rosie can’t understand the basics of how the world works, we can’t trust her in our home.


Without the concepts of time, space and causality, much of common sense is impossible. We all know, for example, that any given animal’s life begins with its birth and ends with its death; that at every moment during its life it occupies some particular region in space; that two animals can’t ordinarily be in the same space at the same time; that two animals can be in the same space at different times; and so on.
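
As a small illustration of what building in such background assumptions might look like (this sketch is entirely ours, not a proposal from the book), two of the constraints above can be written as explicit checks:

```python
# A minimal sketch (our own illustration): a couple of the commonsense
# constraints listed above, encoded as checks a system could run.

from dataclasses import dataclass, field

@dataclass
class Animal:
    name: str
    born: int                                       # year its life begins
    died: int                                       # year its life ends
    locations: dict = field(default_factory=dict)   # year -> region occupied that year

def alive(animal: Animal, year: int) -> bool:
    """A life begins with birth and ends with death."""
    return animal.born <= year <= animal.died

def no_shared_space(animals: list, year: int) -> bool:
    """Two animals can't ordinarily be in the same space at the same time."""
    occupied = {}
    for a in animals:
        if alive(a, year) and year in a.locations:
            region = a.locations[year]
            if region in occupied:
                return False        # same region, same year: constraint violated
            occupied[region] = a.name
    return True

# The same two animals *can* occupy the same space at different times.
cat = Animal("cat", 2010, 2024, {2015: "kitchen"})
dog = Animal("dog", 2012, 2024, {2016: "kitchen"})
assert no_shared_space([cat, dog], 2015) and no_shared_space([cat, dog], 2016)
```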

We don’t have to be taught this kind of knowledge explicitly. It is the set of background assumptions, the conceptual framework, that makes possible all our other thinking about the world.

Yet few people working in A.I. are even trying to build such background assumptions into their machines. We’re not saying that doing so is easy — on the contrary, it’s a significant theoretical and practical challenge — but we’re not going to get sophisticated computer intelligence without it.

If we build machines equipped with rich conceptual understanding, some other worries will go away. The philosopher Nick Bostrom, for example, has imagined a scenario in which a powerful A.I. machine instructed to make paper clips doesn’t know when to stop and eventually turns the whole world — people included — into paper clips.

In our view, this kind of dystopian speculation arises in large part from thinking about today’s mindless A.I. systems and extrapolating from them. If all you can calculate is statistical correlation, you can’t conceptualize harm. But A.I. systems that know about time, space and causality are the kinds of things that can be programmed to follow more general instructions, such as “A robot may not injure a human being or, through inaction, allow a human being to come to harm” (the first of Isaac Asimov’s three laws of robotics).

We face a choice. We can stick with today’s approach to A.I. and greatly restrict what the machines are allowed to do (lest we end up with autonomous-vehicle crashes and machines that perpetuate bias rather than reduce it). Or we can shift our approach to A.I. in the hope of developing machines that have a rich enough conceptual understanding of the world that we need not fear their operation. Anything else would be too risky.


6 responses to “How To Build Artificial Intelligence We Can Trust (Gary Marcus and Ernest Davis)”

  1. Laura H. Chapman

    Thank you for posting this eloquently clear critique of current AI systems. For some reason the focus on time, space, and causality brought to mind some of the work of Jean Piaget and linguists such as George Lakoff. I am also reminded of the horrible Chance for Success metrics published by EdWeek, which are constructed as if stack rankings for every state and the District of Columbia should function as predictions for the fate of children in those states. I am pleased to see that you are addressing the issue of AI in education with attention to the proliferation of what my generation called programmed instruction.

  2. Laura H. Chapman

    My first comment was sent from my iPad too early in the AM. Here is a more developed version.
    Thanks for these updates on education and technology, especially AI. This post took me down memory lane to some of my enchantments with these books. I wonder if Gary Marcus and Ernest Davis are familiar with these.
    Jean Piaget, The Child’s Conception of Time (London: Routledge and Kegan Paul, 1969).
    Jean Piaget, with B. Inhelder, The Child’s Conception of Space (New York: W.W. Norton, 1967).
    Jean Piaget, The Child’s Conception of Physical Causality (London: Kegan Paul, 1930).

    I was also reminded of this volume by Kenneth Boulding: The Image: Knowledge in Life and Society (Ann Arbor, MI: University of Michigan Press, 1956). I read it as an undergraduate and returned to it more than once. Boulding sets forth a series of questions about how we think of time and the degree to which culture and experience intervene. He addressed other topics in the same way. Here is a brief reflection on that book by Boulding. http://garfield.library.upenn.edu/classics1988/A1988Q864300001.pdf
    I have been trying to track some of the uses of AI in education, as you have. I am amazed at the ready acceptance of so many software programs, without the benefit of any understanding that their use is providing a boatload of training data needed for perfecting that software in an AI system. The newest AI systems are tapping into brainwaves. Beam me up.
    https://www.scmp.com/tech/innovation/article/3008439/brainco-ceo-says-his-mind-reading-tech-here-improve-concentration

    • larrycuban

      Laura,
      I do not know whether the authors were familiar with Piaget. I have not read their book, only this article. Ah, the Boulding book brings back memories of my reading that and Daniel Boorstin’s The Image—both dissecting how important images are in American culture and, in Boorstin’s book (1971), predicting Reagan and now Trump as first becoming celebrities and then politicians (do not forget Governor Arnold Schwarzenegger of California). Thanks for the comment and the link to the article on BrainCo. I read it and will have future posts on hyped AI software and schools’ grasping for “solutions.”
