The Knowledge Illusion: Why We Never Think Alone

Steven A. Sloman

Book - 2017

"Two cognitive scientists explain how the human brain relies on the communal nature of intelligence and knowledge, constantly gathering information and expertise stored outside our mind and bodies, to overcome its shortcomings of being error prone, irrational and often ignorant,"--NoveList.

1 / 1 copies available
Location: 2nd Floor
Call Number: 153.42/Sloman
Status: Checked In
Published
New York : Riverhead Books, 2017.
Language
English
Main Author
Steven A. Sloman (author)
Other Authors
Philip Fernbach (author)
Physical Description
296 pages : illustrations ; 24 cm
Bibliography
Includes bibliographical references and index.
ISBN
9780399184352
  • Introduction: Ignorance and the Community of Knowledge
  • 1. What We Know
  • 2. Why We Think
  • 3. How We Think
  • 4. Why We Think What Isn't So
  • 5. Thinking with Our Bodies and the World
  • 6. Thinking with Other People
  • 7. Thinking with Technology
  • 8. Thinking About Science
  • 9. Thinking About Politics
  • 10. The New Definition of Smart
  • 11. Making People Smart
  • 12. Making Smarter Decisions
  • Conclusion: Appraising Ignorance and Illusion
  • Acknowledgments
  • Notes
  • Index
Review by New York Times Review

HAMLET GLOBE TO GLOBE: Two Years, 190,000 Miles, 197 Countries, One Play, by Dominic Dromgoole. (Grove, $27.) To celebrate the 450th anniversary of Shakespeare's birth, London's Globe Theater performed "Hamlet" all around the world. Dromgoole's witty account of the ambitious two-year tour offers insight about the play and its enduring appeal.

ONE OF THE BOYS, by Daniel Magariel. (Scribner, $22.) After a brutal custody battle, two brothers watch their father drift into addiction in a gripping and heartfelt first novel that brims with wisdom about the self-destructive longing for paternal approval.

A RABBLE OF DEAD MONEY: The Great Crash and the Global Depression, 1929-1939, by Charles R. Morris. (PublicAffairs, $29.99.) This accessible overview of the policy response to the Great Depression is a deft synthesis, blending colorful accounts of the past with the scholarly literature of the present.

THE KNOWLEDGE ILLUSION: Why We Never Think Alone, by Steven Sloman and Philip Fernbach. (Riverhead, $28.) Two cognitive scientists argue that not only rationality but the very idea of individual thinking is a myth, and that humans think in groups. Providing people with more and better information is unlikely to improve matters.

AMERICAN WAR, by Omar El Akkad. (Knopf, $26.95.) El Akkad's first novel, a dark dystopian thriller, is set at the end of this century, when climate change, plague and intrastate conflict have laid the country to waste.

MY CAT YUGOSLAVIA, by Pajtim Statovci. Translated by David Hackston. (Pantheon, $25.95.) Statovci's strange, haunting and utterly original exploration of displacement and desire interweaves the stories of a Kosovan woman and her son roiled by the aftershocks of exile. A singing, dancing cat encountered in a gay bar plays a role.

PORTRAITS OF COURAGE: A Commander in Chief's Tribute to America's Warriors, by George W. Bush. (Crown, $35.) The former president's paintings of veterans reveal a surprisingly adept artist who has dramatically improved his technique while also doing penance for a great disaster of American history.

YOU SAY TO BRICK: The Life of Louis Kahn, by Wendy Lesser. (Farrar, Straus & Giroux, $30.) Lesser's narrative of Kahn's tumultuous life and remarkable career is magnificently researched and gracefully written.

SIGNS FOR LOST CHILDREN, by Sarah Moss. (Europa, paper, $19.) This fine exploration of a marriage between a doctor in Victorian England and her architect husband feels contemporary.

Copyright (c) The New York Times Company [May 5, 2017]
Review by Publisher's Weekly Review

Sloman, a professor of cognitive, linguistic, and psychological sciences, and Fernbach, a cognitive scientist and professor of marketing, attempt nothing less than a takedown of widely held beliefs about intelligence and knowledge, namely the role of an individual's brain as the main center for knowledge. Using a mixture of stories and science from an array of disciplines, the authors present a compelling and entertaining examination of the gap between knowledge one thinks one has and the amount of knowledge actually held in the brain, seeking to "explain how human thinking can be so shallow and so powerful at the same time." The book starts with revelatory scholarly insights into the relationship between knowledge and the brain, finding that humans "are largely unaware of how little we understand." Sloman and Fernbach then take the reader through numerous real-life applications of their findings, such as the implications for non-experts' understanding of science, politics, and personal finances. In an increasingly polarized culture where certainty reigns supreme, a book advocating intellectual humility and recognition of the limits of understanding feels both revolutionary and necessary. The fact that it's a fun and engaging page-turner is a bonus benefit for the reader. Agent: Christy Fletcher, Fletcher and Co. (Mar.) © Copyright PWxyz, LLC. All rights reserved.

Review by Library Journal Review

We wander around in a fog of unknowing, argue Sloman (cognitive, linguistic, & psychological sciences, Brown Univ.; editor in chief, Cognition) and Fernbach (marketing, Leeds Sch. of Business, Univ. of Colorado). We depend on a web of experts and the technology they've created to keep our world going. Even Paleolithic societies had specialists: shamans, flint knappers, etc. The downside is that we tend to think that we know more than we do. Most people, for example, say that they understand how a toilet works or why a certain social policy should be enacted. But when asked to describe plumbing or explain why they advocate a policy, they are unable to do so. This is called the "Illusion of Explanatory Depth." It can lead to flooded bathrooms and wars. Sloman and Fernbach offer suggestions for minimizing the damage that this can cause, but, interestingly enough, this book illustrates the problem of specialization. The authors apparently aren't aware of some of the classic work done on values change by social psychologists. VERDICT: General readers who like the work of Malcolm Gladwell will enjoy this book. --Mary Ann Hughes, Shelton, WA © Copyright 2017. Library Journals LLC, a wholly owned subsidiary of Media Source, Inc. No redistribution permitted.

Review by Kirkus Book Review

A tour of the many honeycombs of the hive mind, courtesy of cognitive scientists Sloman (Brown Univ.) and Fernbach (Univ. of Colorado). You know more than I do, and you know next to nothing yourself. That's not just a Socratic proposition, but also a finding of recent generations of neuroscientific researchers, who, as Cognition editor Sloman notes, are given to addressing a large question: "How is thinking possible?" One answer is that much of our thinking relies on the thinking of others and, increasingly, on machine others. As the authors note, flying a plane is a collaboration among pilots, designers, engineers, flight controllers, and automated systems, the collective mastery or even understanding of all of which is beyond the capacity of all but a very few humans. One thought experiment the authors propose is to produce from your mind everything you can say about how zippers work, a sobering exercise that quickly reveals the superficiality of much of what we carry inside our heads. We think we know, and then we don't. Therein lies a small key to wisdom, and this leads to a larger purpose, which is that traditional assessments of intelligence and performance are off-point: what matters is what the individual mind contributes to the collectivity. If that sounds vaguely collectivist, so be it. All the same, the authors maintain, "intelligence is no longer a person's ability to reason and solve problems; it's how much the person contributes to a group's reasoning and problem-solving process." This contribution, they add, may not just lie in creativity, but also in doing the grunt work necessary to move a project along. After all, even with better, more effectively distributed thinking, "ignorance is inevitable." Some of the book seems self-evident, some seems to be mere padding, and little of it moves with the sparkling aha intelligence of Daniel Dennett. Still, it's sturdy enough, with interesting insights, especially for team building. Copyright Kirkus Reviews, used with permission.


One: What We Know

Nuclear warfare lends itself to illusion. Alvin Graves was the scientific director of the U.S. military's bomb testing program in the early fifties. He was the person who gave the order to go ahead with the disastrous Castle Bravo detonation discussed in the last chapter. No one in the world should have understood the dangers of radioactivity better than Graves. Eight years before Castle Bravo, in 1946, Graves was one of eight men in a room in Los Alamos, the nuclear laboratory in New Mexico, while another researcher, Louis Slotin, performed a tricky maneuver the great physicist Richard Feynman nicknamed "tickling the dragon's tail." Slotin was experimenting with plutonium, one of the radioactive ingredients used in nuclear bombs, to see how it behaved. The experiment involved closing the gap between two hemispheres of beryllium surrounding a core of plutonium. As the hemispheres got closer together, neutrons released from the plutonium reflected back off the beryllium, causing more neutrons to be released. The experiment was dangerous. If the hemispheres got too close, a chain reaction could release a burst of radiation. Remarkably, Slotin, an experienced and talented physicist, was using a flathead screwdriver to keep the hemispheres separated. When the screwdriver slipped and the hemispheres crashed together, the eight physicists in the room were bombarded with dangerous doses of radiation. Slotin took the worst of it and died in the infirmary nine days later. The rest of the team eventually recovered from the initial radiation sickness, though several died young of cancers and other diseases that may have been related to the accident.

How could such smart people be so dumb? It's true that accidents happen all the time. We're all guilty of slicing our fingers with a knife or closing the car door on someone's hand by mistake. But you'd hope a group of eminent physicists would know to depend on more than a handheld flathead screwdriver to separate themselves from fatal radiation poisoning. According to one of Slotin's colleagues, there were much safer ways to do the plutonium experiment, and Slotin knew it. For instance, he could have fixed one hemisphere in position and raised the other from below. Then, if anything slipped out of position, gravity would separate the hemispheres harmlessly. Why was Slotin so reckless? We suspect it's because he experienced the same illusion that we have all experienced: that we understand how things work even when we don't. The physicists' surprise was like the surprise you feel when you try to fix a leaky faucet and end up flooding the bathroom, or when you try to help your daughter with her math homework and end up stumped by quadratic equations. Too often, our confidence that we know what's going on is greater at the beginning of an episode than it is at the end.

Are such cases just random examples, or is there something more systematic going on? Do people have a habit of overestimating their understanding of how things work? Is knowledge more superficial than it seems? These are the questions that obsessed Frank Keil, a cognitive scientist who worked at Cornell for many years and moved to Yale in 1998. At Cornell, Keil had been busy studying the theories people have about how things work. He soon came to realize how shallow and incomplete those theories are, but he ran into a roadblock. He could not find a good method to demonstrate scientifically how much people know relative to how much they think they know.
The methods he tried took too long or were too hard to score or led participants to just make stuff up. And then he had an epiphany, coming up with a method to show what he called the illusion of explanatory depth (IoED, for short) that did not suffer from these problems: "I distinctly remember one morning standing in the shower in our home in Guilford, Connecticut, and almost the entire IoED paradigm spilled out in that one long shower. I rushed into work and grabbed Leon Rozenblit, who had been working with me on the division of cognitive labor, and we started to map out all the details." Thus a method for studying ignorance was born, a method that involved simply asking people to generate an explanation and showing how that explanation affected their rating of their own understanding. If you were one of the many people that Rozenblit and Keil subsequently tested, you would be asked a series of questions like the following:

1. On a scale from 1 to 7, how well do you understand how zippers work?

2. How does a zipper work? Describe in as much detail as you can all the steps involved in a zipper's operation.

If you're like most of Rozenblit and Keil's participants, you don't work in a zipper factory and you have little to say in answer to the second question. You just don't really know how zippers work. So, when asked this question:

3. Now, on the same 1 to 7 scale, rate your knowledge of how a zipper works again.

This time, you show a little more humility by lowering your rating. After trying to explain how a zipper works, most people realize they have little idea and thus lower their knowledge rating by a point or two.

This sort of demonstration shows that people live in an illusion. By their own admission, respondents thought they understood how zippers work better than they did. When people rated their knowledge the second time as lower, they were essentially saying, "I know less than I thought." It's remarkable how easy it is to disabuse people of their illusion; you merely have to ask them for an explanation. And this is true of more than zippers. Rozenblit and Keil obtained the same result with speedometers, piano keys, flush toilets, cylinder locks, helicopters, quartz watches, and sewing machines. And everyone they tested showed the illusion: graduate students at Yale as well as undergraduates at both an elite university and a regional public one. We have found the illusion countless times with undergraduates at a different Ivy League university, at a large public school, and testing random samples of Americans over the Internet. We have also found that people experience the illusion not only with everyday objects but with just about everything: People overestimate their understanding of political issues like tax policy and foreign relations, of hot-button scientific topics like GMOs and climate change, and even of their own finances. We have been studying psychological phenomena for a long time and it is rare to come across one as robust as the illusion of understanding.

One interpretation of what occurs in these experiments is that the effort people make to explain something changes how they interpret what "knowledge" means. Maybe when asked to rate their knowledge, they are answering a different question the first time they are asked than they are the second time. They may interpret the first question as "How effective am I at thinking about zippers?" After attempting to explain how the object works, they instead assess how much knowledge they are actually able to articulate.
If so, their second answer might have been to a question that they understood more as "How much knowledge about zippers am I able to put into words?" This seems unlikely, because Rozenblit and Keil used such careful and explicit instructions when they asked the knowledge questions. They told participants precisely what they meant by each scale value (1 to 7). But even if respondents were answering different questions before and after they tried to explain how the object worked, it remains true that their attempts to generate an explanation taught them about themselves: They realized that they have less knowledge that they can articulate than they thought. This is the essence of the illusion of explanatory depth. Before trying to explain something, people feel they have a reasonable level of understanding; after explaining, they don't. Even if they lower their score because they're defining the term "knowledge" differently, it remains a revelation to them that they know relatively little. According to Rozenblit and Keil, "many participants reported genuine surprise and new humility at how much less they knew than they originally thought."

A telling example of the illusion of explanatory depth can be found in what people know about bicycles. Rebecca Lawson, a psychologist at the University of Liverpool, showed a group of psychology undergraduates a schematic drawing of a bicycle that was missing several parts of the frame as well as the chain and the pedals. She asked the students to fill in the missing parts. Try it. What parts of the frame are missing? Where do the chain and pedals go? It's surprisingly difficult to answer these questions. In Lawson's study, about half the students were unable to complete the drawings correctly (you can see some examples on the next page). They didn't do any better when they were shown the correct drawings as well as three incorrect ones and were asked to pick out the correct one. Many chose pictures showing the chain around the front wheel as well as the back wheel, a configuration that would make it impossible to turn. Even expert cyclists were far less than perfect on this apparently easy task. It is striking how sketchy and shallow our understanding of familiar objects is, even objects that we encounter all the time that operate via mechanisms that are easily perceived.

How Much Do We Know?

So we overestimate how much we know, suggesting that we're more ignorant than we think we are. But how ignorant are we? Is it possible to estimate how much we know? Thomas Landauer tried to answer this question. Landauer was a pioneer of cognitive science, holding academic appointments at Harvard, Dartmouth, Stanford, and Princeton and also spending twenty-five years trying to apply his insights at Bell Labs. He started his career in the 1960s, a time when cognitive scientists took seriously the idea that the mind is a kind of computer. Cognitive science emerged as a field in sync with the modern computer. As great mathematical minds like John von Neumann and Alan Turing developed the foundations of computing as we know it, the question arose whether the human mind works in the same way. Computers have an operating system that is run by a central processor that reads and writes to a digital memory using a small set of rules. Early cognitive scientists ran with the idea that the mind does too. The computer served as a metaphor that governed how the business of cognitive science was done.
Thinking was assumed to be a kind of computer program that runs in people's brains. One of Alan Turing's claims to fame is that he took this idea to its logical extreme. If people work like computers, then it should be possible to program a computer to do what a human being can. Motivated by this idea, his classic 1950 paper "Computing Machinery and Intelligence" addressed the question Can machines think?

In the 1980s, Landauer decided to estimate the size of human memory on the same scale that is used to measure the size of computer memories. As we write this book, a laptop computer comes with around 250 or 500 gigabytes of memory as long-term storage. Landauer used several clever techniques to measure how much knowledge people have. For instance, he estimated the size of an average adult's vocabulary and calculated how many bytes would be required to store that much information. He then used the result of that to estimate the size of the average adult's entire knowledge base. The answer he got was half of a gigabyte.

He also made the estimate in a completely different way. Many experiments have been run by psychologists that ask people to read text, look at pictures, or hear words (real or nonsensical), sentences, or short passages of music. After a delay of between a few minutes and a few weeks, the psychologists test the memory of their subjects. One way to do this is to ask people to reproduce the material originally presented to them. This is a test of recall and can be quite punishing. Do you think you could recall a passage right now that you had heard only once before, a few weeks ago? Landauer analyzed a number of experiments that weren't so hard on people. The experiments tended to test recognition--whether participants could identify a newly presented item (often a picture, word, or passage of music) as one that had been presented before or not. In some of these experiments, people were shown several items and had to pick the one they had seen before. This is a very sensitive way of testing memory; people would be able to do well even if their memories were weak. To estimate how much people remembered, Landauer relied on the difference in recognition performance between a group that had been exposed to the items and a group that had not. This difference is as pure a measure of memory as one can get.

Landauer's brilliant move was to divide the measure of memory (the difference in recognition performance between the two groups) by the amount of time people spent learning the material in the first place. This told him the rate at which people are able to acquire information that they later remember. He also found a way to take into account the fact that people forget. The remarkable result of his analysis is that people acquire information at roughly the same rate regardless of the details of the procedure used in the experiment or the type of material being learned. They learned at approximately the same rate whether the items were visual, verbal, or musical. Landauer next calculated how much information people have on hand--what the size of their knowledge base is--by assuming they learn at this same rate over the course of a seventy-year lifetime. Every technique he tried led to roughly the same answer: 1 gigabyte. He didn't claim that this answer is precisely correct. But even if it's off by a factor of 10, even if people store 10 times more or 10 times less than 1 gigabyte, it remains a puny amount. It's just a tiny fraction of what a modern laptop can retain.
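[Editor's note: the arithmetic behind this kind of estimate is simple enough to sketch. The short Python snippet below only mirrors the structure the excerpt describes (a net acquisition rate applied over a seventy-year lifetime); the waking-hours fraction and the 2-bits-per-second rate are illustrative assumptions rather than Landauer's published figures, so only the order of magnitude matters.

# Back-of-the-envelope sketch of the kind of estimate described above:
# a net acquisition rate applied over a seventy-year lifetime.
# The waking-hours fraction and the bits-per-second rate are
# illustrative assumptions, not figures taken from the excerpt.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIFETIME_YEARS = 70            # the lifespan used in the excerpt
WAKING_FRACTION = 2 / 3        # assume roughly 16 waking hours per day
NET_BITS_PER_SECOND = 2.0      # assumed rate of information retained, after forgetting

def estimated_knowledge_gigabytes():
    waking_seconds = LIFETIME_YEARS * SECONDS_PER_YEAR * WAKING_FRACTION
    total_bits = waking_seconds * NET_BITS_PER_SECOND
    return total_bits / 8 / 1e9  # bits -> bytes -> gigabytes

print(round(estimated_knowledge_gigabytes(), 2))  # prints 0.37

With these assumed numbers the total lands in the same ballpark as the roughly one gigabyte reported in the excerpt, which is the point: whatever plausible rate is plugged in, the result is tiny compared with a laptop's storage.]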
Human beings are not warehouses of knowledge. From one perspective, this is shocking. There is so much to know and, as functioning adults, we know a lot. We watch the news and don't get hopelessly confused. We engage in conversations about a wide range of topics. We get at least a few answers right when we watch Jeopardy! We all speak at least one language. Surely we know much more than a fraction of what can be retained by a small machine that can be carried around in a backpack. But this is only shocking if you believe the human mind works like a computer. The model of the mind as a machine designed to encode and retain memories breaks down when you consider the complexity of the world we interact with. It would be futile for memory to be designed to hold tons of information because there's just too much out there.

Excerpted from The Knowledge Illusion: Why We Never Think Alone by Steven Sloman and Philip Fernbach. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.