Calculated Risks: How to Know When Numbers Deceive You

Gerd Gigerenzer

Book - 2002

Location: 2nd Floor
Call Number: 519.2/Gigerenzer
Status: Checked In (1 / 1 copies available)
Published
New York : Simon & Schuster, c2002.
Language
English
Main Author
Gerd Gigerenzer
Physical Description
viii, 310 p. : ill. ; 25 cm
Bibliography
Includes bibliographical references (p. 281-296) and index.
ISBN
9780743205566
  • Acknowledgments
  • Part I. Dare to Know
  • 1. Uncertainty
  • 2. The Illusion of Certainty
  • 3. Innumeracy
  • 4. Insight
  • Part II. Understanding Uncertainties in the Real World
  • 5. Breast Cancer Screening
  • 6. (Un)Informed Consent
  • 7. AIDS Counseling
  • 8. Wife Battering
  • 9. Experts on Trial
  • 10. DNA Fingerprinting
  • 11. Violent People
  • Part III. From Innumeracy to Insight
  • 12. How Innumeracy Can Be Exploited
  • 13. Fun Problems
  • 14. Teaching Clear Thinking
  • Glossary
  • Notes
  • References
  • Index
Review by Choice Review

The subtitle, "How To Know When Numbers Deceive You," summarizes accurately what this book is about. Gigerenzer (Max Planck Institute for Human Development, Germany) discusses how statistics are often misunderstood and misinterpreted, and how the inherent uncertainty is replaced by a false certainty. For example, he explains how in the case of rare diseases, even a positive result from a test does not mean that there is a high probability that a person has the disease. Similarly, he explains how even a very high probability of a match between a particular person's DNA and the DNA found at a crime scene does not necessarily mean that the chance of the person being the guilty person is also high. The author explains these and many other practical important issues brilliantly, with excellent real-world examples, and in a way that holds reader interest and attention. This book can be read with pleasure and profit by readers at all levels of statistical sophistication, and should be in all public libraries as well as in the libraries of all educational institutions. R. Bharath emeritus, Northern Michigan University

Copyright American Library Association, used with permission.
Review by Booklist Review

Gigerenzer effectively proves Mark Twain's adage about lies, damned lies, and statistics in this fascinating, frequently startling study of the ways numbers are manipulated and misrepresented. If Gigerenzer's reasoning is complex, his examples are all too familiar as he offers unsettling alternative perspectives on the reliability of AIDS tests, the usefulness of breast cancer screening, and the accuracy of DNA matches. He demonstrates that margins of error are often much greater than the general public is led to believe, and that many 100-percent claims are far from it. After shocking readers into such enlightenment, he shows how the same statistical rules apply on a more abstract level by using story problems to reveal the fundamental deceptiveness of odds. Throughout, his wit and humor transform what could have been a turgid academic exercise into an intriguing lesson from a master teacher. As a bonus, he even tells us how best to avoid winning the goat on Let's Make a Deal. If only all math courses were so practical. --Will Hickman

From Booklist, Copyright (c) American Library Association. Used with permission.
Review by Publishers Weekly Review

If a woman aged 40 to 50 has breast cancer, nine times out of 10 it will show up on a mammogram. On the other hand, nine out of 10 suspicious mammograms turn out not to be cancer. Confused? So are many people who seek certainty through numbers, says Gigerenzer, a statistician and behavioral scientist. His book is a successful attempt to help innumerates (those who don't understand statistics), offering case studies of people who desperately need to understand statistics, including those working in AIDS counseling, DNA fingerprinting and domestic violence cases. Gigerenzer deftly intersperses math lessons explaining concepts like frequency and risk in layperson's terms with real-life stories involving doctors and detectives. One of his main themes is that even well-meaning, statistically astute professionals may be unable to communicate concepts such as statistical risk to innumerates. (He tells the true story of a psychiatrist who prescribes Prozac to a patient and warns him about potential side effects, saying, "You have a 30 to 50 percent chance of developing a sexual problem." The patient worries that in anywhere from 30% to 50% of all his sexual encounters, he is going to have performance problems. But what the doctor really meant is that for every 10 people who take Prozac, three to five may experience sexual side effects, and many have no sexual side effects at all.) All innumerates -- buyers, sellers, students, professors, doctors, patients, lawyers and their clients, politicians, voters, writers and readers -- have something to learn from Gigerenzer's quirky yet understandable book. Agent, John Brockman. (June) Forecast: What's the probability of Gigerenzer's work becoming a bestseller? Let's just say it's hard to imagine a book about statistics flying off the shelves (although John Allen Paulos's Innumeracy was a bestseller just two years back). Still, if Gigerenzer gets enough publicity -- his book has an exposé, here's-what-those-statistics-really-mean edge to it -- audiences might respond.

(c) Copyright PWxyz, LLC. All rights reserved

Math is hard. Let's go shopping!
-- Barbie

Chapter 3
INNUMERACY

At the beginning of the twentieth century, the father of modern science fiction, H. G. Wells, is reported to have predicted, "Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write." At the end of the century, the mathematician John Allen Paulos investigated how far we had -- or, rather, hadn't -- come in this respect. In his best-selling book, Innumeracy, Paulos related the story of a weather forecaster on American television who reported that there was a 50 percent chance of rain on Saturday and a 50 percent chance of rain on Sunday, from which he concluded that there was a 100 percent chance of rain that weekend!

The inability to reason appropriately about uncertainties is by no means strictly an American affliction. The word "percentage" has become one of the most frequent nouns in the German media. In a survey, 1,000 Germans were asked what "40 percent" means: (a) one-quarter, (b) 4 out of 10, or (c) every 40th person. About one-third of respondents did not choose the right answer.

Political decision makers are, likewise, not immune to innumeracy. For example, commenting on the dangers of drug abuse, a Bavarian minister of the interior once argued that because most heroin addicts have used marijuana, most marijuana users will become heroin addicts. Figure 3-1 shows why this conclusion is mistaken. Most heroin addicts indeed have used marijuana, as the dark section of the small circle shows. However, this does not mean that most marijuana users are heroin addicts -- the same dark section that covers most of the heroin addicts covers only a small portion of the marijuana users. On the basis of his mistaken conclusion, the minister of the interior asserted that marijuana should therefore remain illegal. Whatever one's views on the legalization of marijuana, the minister's conclusion was based on clouded thinking.

In Western countries, most children learn to read and write, but even in adulthood, many people do not know how to think with numbers. This is the problem that Paulos and others have called innumeracy. I focus on the most important form of innumeracy in everyday life, statistical innumeracy -- that is, the inability to reason about uncertainties and risk. Henceforth, when I use the term "innumeracy," I mean statistical innumeracy.

How is the illusion of certainty connected to innumeracy? Here is an overview.

Illusion of certainty. Franklin's law is a mind tool to overcome the illusion of certainty, to help make the transition from certainty to uncertainty. For instance, when Susan, the woman introduced in Chapter 1, finally learned (the hard way) that laboratory errors occur in HIV testing, she made the transition from certainty to uncertainty.

Ignorance of risk. This is an elementary form of innumeracy in which a person does not know, not even roughly, how large a personally or professionally relevant risk is. This differs from the illusion of certainty in that the person is aware that there may be uncertainties, but does not know how great these are. The major tool for overcoming the ignorance of risk consists of various forms of information search (for example, scientific literature). For instance, Chapter 7 gives details about the various risks involved in HIV testing, including false positives.

Miscommunication of risk. In this form of innumeracy a person knows the risks but does not know how to communicate these so that others understand them.
The mind tool for overcoming miscommunication is representations that facilitate understanding. For instance, the Prozac story in Chapter 1 illustrates the miscommunication of risk -- the failure to communicate risk in an understandable way -- and how to overcome it.

Clouded thinking. In this form of innumeracy a person knows the risks but not how to draw conclusions or inferences from them. For instance, physicians often know the error rates of a clinical test and the base rate of a disease, but not how to infer from this information the chances that a patient with a positive test actually has the disease (Chapter 1). Representations such as natural frequencies are a mind tool that facilitates the drawing of conclusions (Chapter 4).

Innumeracy -- ignorance of risk, miscommunication of risk, and clouded thinking -- becomes a problem as soon as one is driven out of the promised land of certainty into the world in which Franklin's law reigns. Innumeracy, I emphasize, is not simply a problem within an individual mind; ignorance and miscommunication of specific risks can, for example, be produced and maintained by various groups within society to their own benefit.

Risk

The term "risk" has several senses. The one I intend has to do with uncertainty, but not necessarily regarding a dangerous event, such as a plane crash, because one can also be uncertain about a positive outcome, such as a successful landing. Another reason not to use the term to refer exclusively to negative outcomes is that there are situations in which a negative outcome from one perspective is a positive outcome from another. For instance, losing a month's salary in a gambling casino is a negative outcome for the gambler but a positive one for the casino.

In this book, I call an uncertainty a risk when it can be expressed as a number such as a probability or frequency on the basis of empirical data. The number need not be fixed -- it may be updated in light of experience. In situations in which a lack of empirical evidence makes it impossible or undesirable to assign numbers to the possible alternative outcomes, I use the term "uncertainty" instead of "risk." Uncertainty does not imply chaos; for instance, when a cure for cancer will be found is uncertain, but this does not have anything to do with chaos.

When does an uncertainty qualify as a risk? The answer depends on one's interpretation of probability, of which there are three major versions: degree of belief, propensity, and frequency. Degrees of belief are sometimes called subjective probabilities. Of the three interpretations of probability, the subjective interpretation is most liberal about expressing uncertainties as quantitative probabilities, that is, risks. Subjective probabilities can be assigned even to unique or novel events.

Degrees of Belief. Consider the surgeon Christiaan Barnard's account of his first encounter with Louis Washkansky, who was then soon to become the first man to have a heart transplant. Washkansky was propped up in bed, reading a book. Barnard introduced himself and explained that he would exchange Washkansky's heart for a healthy one and that "there's a chance you can get back to normal life again." Washkansky did not ask how great the chance was, how long he would survive, or what the transplant operation involved. He just said he was ready to go ahead and turned back to his book, a Western.
Barnard was deeply disturbed that Washkansky was more interested in pulp fiction than in this great moment in medical history and the risk it posed to him. But Washkansky's wife, Ann, did ask Barnard, "What chance do you give him?" Without hesitation or further explanation, he answered, "An 80 percent chance." Eighteen days after the operation, Washkansky died.

Barnard's "80 percent" reflected a degree of belief, or subjective probability. In the subjective view, uncertainties can always be transformed into risks, even in novel situations, as long as they satisfy the laws of probability -- such as that the probabilities of an exhaustive and exclusive set of alternatives such as survival and death add up to 1. Thus, according to the subjective interpretation, Barnard's statement that Washkansky had an 80 percent chance of survival is meaningful provided that the surgeon also held that there was a 20 percent chance of his patient not surviving. In this interpretation, Barnard's "80 percent" would qualify as quantified uncertainty, that is, as risk.

Propensities. The possibility of translating uncertainties into risks is much more restricted in the propensity view. Propensities are properties of an object, such as the physical symmetry of a die. If a die is constructed to be perfectly symmetrical, then the probability of rolling a six is 1 in 6. The reference to a physical design, mechanism, or trait that determines the risk of an event is the essence of the propensity interpretation of probability. Note how propensity differs from the subjective interpretation: It is not sufficient that someone's subjective probabilities about the outcomes of a die roll are coherent, that is, that they satisfy the laws of probability. What matters is the die's design. If the design is not known, there are no probabilities. According to this view, Barnard's estimate of 80 percent would not qualify as a probability, or risk, because not enough is known about the heart operation for its propensities to be assessed.

Frequencies. For a frequentist, a probability must be based on a large number of observations and is defined as the relative frequency of an event in a specified reference class, such as the relative frequency of lung cancer in white American males who smoked cigarettes for at least 20 years. No reference class, no probability. Frequentists would not be interested in what someone believes about the outcome of rolling a die, nor would they need to study the design of the die to determine the probability of rolling a six. They would determine the probability empirically by rolling the die many times and computing the relative frequency with which the outcome was a six. Therefore, frequentists would declare Barnard's estimate of 80 percent meaningless (because there were no comparable transplants at the time he made the estimate), and hard-line frequentists would reject altogether the notion of assigning probabilities to a single event such as the survival of a specific man. Clearly, frequentists are cautious about moving from uncertainties to risks. For them, risks refer only to situations for which a large body of empirical data exists. The courts, for instance, tend to adhere to the frequentist position, admitting statements about risks as evidence only when they are based on empirical frequencies rather than opinion.

These different interpretations of probability can produce drastically different estimates of risk.
A few years ago, I enjoyed a guided tour through Daimler-Benz Aerospace (DASA), which produces the Ariane, a rocket that carries satellites into orbit. Standing with my guide in front of a large poster that listed all 94 rockets launched so far (Ariane models 4 and 5), I asked him what the risk of an accident was. He replied that the security factor is around 99.6 percent. That was surprisingly high because on the poster I saw eight stars, which meant eight accidents. To be sure, several of the stars were next to early launchings, but launch numbers 63, 70, and 88 were also accidents. I asked my guide how eight accidents could translate into 99.6 percent certainty. He replied that DASA did not count the number of accidents, but rather computed the security factor from the design features of the individual parts of the rocket. He added that counting accidents would have included human errors, and pointed out that behind one of these stars, for instance, was a misunderstanding between one worker who had not installed a screw, and the worker on the next shift who had assumed that his predecessor had done so. The reported risk of an Ariane accident was, hence, based on a propensity, not a frequency, interpretation.

In this book, I will focus on risks that can be quantified on the basis of frequency data. This is not to say that frequencies are the whole story in estimating risks, but -- when they are available -- they provide a good starting point.

Ignorance of Risk

Who is informed about risks? The answer depends on one's culture and the event or hazard in question. For instance, the weather forecast might say that the chances of rain tomorrow are 30 percent, and we at least think we understand what that means. Although it seems natural to express the uncertainty of weather in terms of probabilities, this is a recent cultural phenomenon. Before 1965, the U.S. National Weather Service expressed its forecasts in all-or-none terms such as "it will not rain tomorrow," perhaps preceded by "it is unlikely that...." In Germany, probabilities began to be reported in weather forecasts only around 1990; in France, weather forecasts are still largely probability-free. Some cultures have an insatiable appetite for numbers -- batting averages, SAT scores, and market indices -- while others are more reluctant to express uncertainties in numerical form. In general, democracies tend to have a greater desire for numbers and a greater motivation to make risks transparent than most other social systems.

PROMOTING PUBLIC IGNORANCE

However, democracies also host groups that have little interest in the public's knowing about certain risks. For instance, in the 1950s, the American tobacco industry began a massive campaign to convince the public that cigarette smoking was safe. This was around the time when the American scientific community began to reach a consensus that cigarettes are a major cause of illness, and the industry invested hundreds of millions of dollars in the creation of an illusion of certainty. After the illusion crumbled following a report by the U.S. Surgeon General in 1964, the tobacco industry launched a second campaign of obfuscation to engender "doubt" about the extent of the actual risks involved. For decades, the scientific evidence concerning the hazards of smoking was rarely if ever discussed in the leading national magazines in which the tobacco industry advertised. Large segments of the public got the impression that the question of the effects of smoking on health was still open.
As early as the mid-1950s, however, the American Cancer Society had evidence that people who smoked two packs of cigarettes a day were dying about seven years earlier, on average, than nonsmokers. Most experts today agree that tobacco is the cause of 80 to 90 percent of all cases of lung cancer. Tobacco kills upward of 400,000 Americans every year, primarily through lung cancer and heart disease; in Germany, the number is estimated to be 75,000. In China, the number of people who die from lung cancer will soon be close to 1 million a year.

The case of cigarette smoking illustrates how public awareness of a health hazard can be diluted by a double defense line. First, the illusion of certainty is manufactured: Smoking is safe -- period. When this illusion breaks down, uncertainty is acknowledged, but doubt is spread as to whether the actual risks are known or not.

BEYOND IGNORANCE: IT'S OFTEN ONLY A SIMPLE CALCULATION

Not all ignorance is driven by trade lobbies or other parties that have an interest in keeping people ignorant of risks. There are also situations in which the facts are plainly in view and people have only to make a small mental effort to put them together. What is the chance of one's dying in a motor vehicle accident over the course of a lifetime? It does not take much time to figure this out. In an average year, 40,000 to 45,000 people die on the roads in the United States. Given that the country has about 280 million inhabitants, this means that about 1 in 7,000 of them is killed on the road each year. Assuming that this figure remains fairly stable over time, we can also figure out the chance of dying on the road during one's life. Given a life span of 75 years, the result is roughly 1 in 90. That is, 1 out of every 90 Americans will lose his or her life in a motor vehicle accident by the age of 75. Most of them die in passenger car accidents.

Are Americans in greater danger of being killed on the road than people in Germany or in Great Britain? In an average year, about 8,000 people die on the roads in Germany. Given its population of about 80 million, one can calculate that about 1 in 10,000 is killed in a motor vehicle accident. Over a life span of 75 years, this is equivalent to about 1 in every 130 people. Again, the majority of these people are killed while driving or riding in a passenger car. Note that the higher fatality rate of Americans does not imply that they drive more dangerously than Germans; they just drive more, in part because of the lack of public transportation. In Great Britain (including Northern Ireland), the roads are safer than in the United States and Germany. There, again over a life span of 75 years, "only" about 1 in 220 people is killed in a motor vehicle accident. American roads are, however, definitely not the most dangerous in the Western world. There are two European countries that stand out from the others, Portugal and Greece, where about 1 in 4,000 citizens is killed on the road every year. This means that over a life span of 75 years, about 1 out of every 50 people in Portugal and Greece is killed on the roads.

All that is needed to make these estimates is the number of people who die of the cause in question each year and the population of the country. Both can be looked up easily for any country or state. These estimates are only rough because they do not take account of the possibility that driving behavior or safety technology might drastically change over time.
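The arithmetic behind these estimates is easy to reproduce. Below is a minimal sketch in Python (a format not used in the book itself) that applies the rough figures quoted above -- about 40,000 road deaths per year among 280 million Americans, about 8,000 among 80 million Germans, and a 75-year life span -- and assumes, as the text does, that the annual rate stays constant over time.

```python
# Rough lifetime risk from annual deaths and population, assuming a constant
# annual rate. The figures below are the rough numbers quoted in the text.
def road_death_risk(annual_deaths, population, life_span_years=75):
    annual_risk = annual_deaths / population
    lifetime_risk = annual_risk * life_span_years
    return annual_risk, lifetime_risk

for country, deaths, population in [
    ("United States", 40_000, 280_000_000),
    ("Germany", 8_000, 80_000_000),
]:
    annual, lifetime = road_death_risk(deaths, population)
    print(f"{country}: about 1 in {round(1 / annual):,} per year, "
          f"about 1 in {round(1 / lifetime)} over a 75-year life span")
```

The printed numbers land close to the text's rounded figures: roughly 1 in 7,000 per year and 1 in 90 over a lifetime for the United States, and 1 in 10,000 and about 1 in 130 for Germany.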
I do not present the striking risks of driving to make every reader switch to public transportation. Many people have heard arguments of the sort "planes are safer than cars," yet these arguments do not change their behavior -- because of habit, fear of flying, or love of driving. However, knowing the actual risk allows individuals to make up their own minds, to weigh the risk against the individual benefits of driving, and to arrive at an informed decision. For instance, the terrorist attack on September 11, 2001, cost the lives of some 3,000 people. The subsequent decision of millions to drive rather than fly may have cost the lives of many more.

PUBLIC NUMBERS

Public ignorance of risk has a historical basis. Unlike the stories, mythologies, and gossip that have been shaping our minds since the beginning of human culture, public statistics are a recent cultural achievement. During much of the eighteenth and nineteenth centuries, statistical information was a state secret known only by an elite and withheld from the public. The power of statistical information, such as population figures, has been recognized among political leaders for centuries. Napoleon's appetite for facts from his bureau de statistique was legendary. And he always wanted the numbers immediately. At the Napoleonic court, the saying was, If you want something from Napoleon, give him statistics.

Willingness to make economic and demographic figures public is a recent phenomenon. It was not until around 1830 that statistics, or at least some of them, became public. Since then, an "avalanche of printed numbers," to borrow the philosopher Ian Hacking's phrase, has turned modern life into a vast ocean of information conveyed by media such as television, newspapers, and the Internet. In this sense, one can say that, although uncertainties are old, risks are relatively new. As already mentioned, the widening dissemination of statistical information to the public during the nineteenth and twentieth centuries has been linked to the rise of democracies in the Western world. A democracy makes lots of information available to everyone, but its citizens often have very selective interests. It is more likely that a young American male knows baseball statistics than knows that his chance of dying on a motorcycle trip is about 15 times higher than his chance of dying on a car trip of the same distance. Today, numbers are public, but the public is not generally numerate.

From Miscommunication to Risk Communication

The Prozac story in Chapter 1 illustrates the miscommunication of risk -- the failure to communicate risk in an understandable way. Some forms of communication enhance understanding; others don't. Miscommunication of risk is often the rule rather than the exception and can be difficult to detect, as the ambiguous probabilities in the Prozac story illustrate. Statements about the probabilities of single events -- such as "you have a 30 to 50 percent chance of developing a sexual problem" -- are fertile ground for miscommunication. One mind tool that can overcome this problem is specifying a reference class, which occurs automatically when one uses statements about frequencies rather than single events. There are three major forms of risk communication that invite miscommunication: the use of single-event probabilities, relative risks, and conditional probabilities. As it happens, these seem to be the most frequently used forms of risk communication today.
SINGLE-EVENT PROBABILITIES

To communicate risk in the form of a single-event probability means to make a statement of this type: "The probability that an event will happen is X percent." There are two reasons why such a statement can be confusing. First, as illustrated by the Prozac case, a probability of a single event, by definition, does not state what the reference class is. Second, if the event is unique, that is, there are no comparable events known, then the probability estimate itself is likely to be nothing but a wild guess that may suggest precision where, in fact, only uncertainty reigns. Let me give you some examples.

The statement "there is a 30 percent chance that it will rain tomorrow" is a probability statement about a singular event -- it will either rain or not rain tomorrow. In contrast, the statement that it will rain on 10 days in May is a frequency statement. The latter statement can be true or false; a single-event probability by itself, however, can never be proven wrong (unless the probability is zero or one). Single-event probabilities can lead to miscommunication because people tend to fill in different reference classes. This happens even with such familiar statements as "there is a 30 percent chance that it will rain tomorrow." Some think this statement means that it will rain 30 percent of the time, others that it will rain in 30 percent of the area, and a third group believes it will rain on 30 percent of the days that are like tomorrow. These three interpretations are about equally frequent. What weather forecasters actually have in mind is the last interpretation. However, people should not be blamed for different interpretations; the statement "there is a 30 percent chance that it will rain tomorrow" is ambiguous.

Dr. Barnard's 80 percent estimate illustrates specific problems with statements about unique events. Ann Washkansky may have gotten the impression that this high probability offered hope, but what it meant was ambiguous. Barnard did not say to what the number referred: the probability of Washkansky's surviving the operation, surviving the following day or year, or something else. Furthermore, the probability referred to the first heart transplant in history; there were no comparable cases on which Barnard could have based his estimate. Barnard's answer may have reassured, but did not inform, Washkansky's wife.

RELATIVE RISKS

What is the benefit of a cholesterol-lowering drug on the risk of coronary heart disease? In 1995, the results of the West of Scotland Coronary Prevention Study were presented in a press release: "People with high cholesterol can rapidly reduce...their risk of death by 22 per cent by taking a widely prescribed drug called pravastatin sodium. This is the conclusion of a landmark study presented today at the annual meeting of the American Heart Association." The benefit of this cholesterol-reducing drug, just like that of most medical treatment, was reported by the press in the form of a relative risk reduction. What does "22 percent" mean? Studies indicate that a majority of people think that out of 1,000 people with high cholesterol, 220 of these people can be prevented from becoming heart attack victims. This, however, is not true. Table 3-1 shows the actual result of the study: Out of 1,000 people who took pravastatin over a period of 5 years, 32 died, whereas of 1,000 people who did not take pravastatin but rather a placebo, 41 died.
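The three presentations of this raw result discussed next are all derived from the same two numbers by a few lines of arithmetic. Here is a minimal sketch, using only the figures quoted above (41 deaths per 1,000 on placebo, 32 per 1,000 on pravastatin); the variable names are mine, not the book's.

```python
# Three presentations of the same benefit, computed from the pravastatin
# figures quoted in the text (deaths per 1,000 people over 5 years).
deaths_without_treatment = 41 / 1000   # placebo group
deaths_with_treatment = 32 / 1000      # pravastatin group

absolute_risk_reduction = deaths_without_treatment - deaths_with_treatment
relative_risk_reduction = absolute_risk_reduction / deaths_without_treatment
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")       # 0.9%
print(f"Relative risk reduction: {relative_risk_reduction:.1%}")       # 22.0%
print(f"Number needed to treat:  about {number_needed_to_treat:.0f}")  # about 111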
The following three presentations of the raw result -- a total mortality reduction from 41 to 32 in every 1,000 people -- are all correct, but they suggest different amounts of benefit and can evoke different emotional reactions in ordinary citizens.

Three Ways to Present the Benefit

Absolute risk reduction: The absolute risk reduction is the proportion of patients who die without treatment (placebo) minus those who die with treatment. Pravastatin reduces the number of people who die from 41 to 32 in 1,000. That is, the absolute risk reduction is 9 in 1,000, which is 0.9 percent.

Relative risk reduction: The relative risk reduction is the absolute risk reduction divided by the proportion of patients who die without treatment. For the present data, the relative risk reduction is 9 divided by 41, which is 22 percent. Thus, pravastatin reduces the risk of dying by 22 percent.

Number needed to treat: The number of people who must participate in the treatment to save one life is the number needed to treat (NNT). This number can be easily derived from the absolute risk reduction. The number of people who need to be treated to save one life is 111, because 9 in 1,000 deaths (which is about 1 in 111) are prevented by the drug.

The relative risk reduction looks more impressive than the absolute risk reduction. Relative risks are larger numbers than absolute risks and therefore suggest higher benefits than really exist. Absolute risks are a mind tool that makes the actual benefits more understandable. Another mind tool serving as an alternative to relative risks is presenting benefits in terms of the number needed to treat to save one life. With this mind tool, one can see right away that out of 111 people who swallow the tablets for 5 years, 1 had the benefit, whereas the other 110 did not. The situation here is quite different from that of penicillin and other antibiotics whose positive effects when first introduced were dramatic.

CONDITIONAL PROBABILITIES

One can communicate the chances that a test will actually detect a disease in various ways (see Chapter 1). The most frequent way is in the form of a conditional probability: If a woman has breast cancer, the probability that she will test positive on a screening mammogram is 90 percent. Many mortals, physicians included, confuse that statement with this one: If a woman tests positive on a screening mammogram, the probability that she has breast cancer is 90 percent. That is, the conditional probability that an event A occurs given event B is confused with the conditional probability that an event B occurs given event A. This is not the only confusion. Others mistake the probability of A given B with the probability of A and B. One can reduce this confusion by replacing conditional probabilities with natural frequencies, as explained in the next chapter.

A RIGHT TO CLEAR INFORMATION

Despite the potential confusion created by single-event probabilities, relative risk reduction, and conditional probabilities, these forms of risk communication are standard. For instance, relative risks are the prevalent way in which the press and drug company advertising report the benefits of new treatments. There is a consensus today that the public has a right to information. But there is not yet a consensus that the public also has a right to get this information in a way that is clear and not misleading.
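To make concrete what a clearer representation looks like, here is the mammography statement above recast as natural frequencies. The prevalence and false positive rate used below are assumed round numbers chosen only for illustration (the excerpt itself gives only the 90 percent detection rate); the point is the form of the representation, not the particular values.

```python
# Natural frequencies for an illustrative screening population of 1,000 women.
# Prevalence and false positive rate are assumed values for this example;
# the 90 percent detection rate is the conditional probability from the text.
population = 1000
prevalence = 0.01            # assume 10 in 1,000 women have breast cancer
detection_rate = 0.90        # P(positive test | cancer), as stated in the text
false_positive_rate = 0.09   # assume 9 percent of healthy women also test positive

with_cancer = population * prevalence
true_positives = with_cancer * detection_rate
false_positives = (population - with_cancer) * false_positive_rate

# The question patients actually care about: P(cancer | positive test)
share = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} of the {true_positives + false_positives:.0f} women "
      f"who test positive actually have cancer ({share:.0%}), not 90 percent.")
```

With these assumed numbers, only about 9 of roughly 98 positive tests come from women who actually have the disease, which is why confusing the two conditional probabilities is so consequential.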
I strongly urge medical, legal, and other associations to subscribe to an ethical policy that demands reporting risks in clear terms such as absolute risks and natural frequencies, rather than in ways that are more likely to confuse people. In this book, I introduce various mind tools for communicating risk in ways people can understand.

From Clouded Thinking to Insight

Ignorance of relevant risks and miscommunication of those risks are two aspects of innumeracy. A third aspect of innumeracy concerns the problem of drawing incorrect inferences from statistics. This third type of innumeracy occurs when inferences go wrong because they are clouded by certain risk representations. Such clouded thinking becomes possible only once the risks have been communicated. The mammography example in Chapter 1 illustrates a tool for achieving mental clarity, that is, a device for translating conditional probabilities -- which impede not only risk communication but also correct inference from risks -- into natural frequencies.

Why is it so difficult for even highly educated people to make inferences on the basis of probabilities? One reason might be that the theory of probability, which is concerned with drawing inferences from uncertain or incomplete information, is a relatively recent development in human history. Ian Hacking, who is fond of precise numbers, has dated this discovery to 1654, when the mathematicians Blaise Pascal and Pierre Fermat exchanged a now-famous series of letters about gambling. The fact that the notion of mathematical probability developed so late -- later than most key philosophical concepts -- has been called the "scandal of philosophy."

The difficulty that even great thinkers had in understanding risk before then is best illustrated by Girolamo Cardano, a sixteenth-century Italian physician and mathematician and the author of one of the first treatises on probability. Cardano, a notorious gambler, asserted that each face of a die will occur exactly once in any given six rolls. This assertion, however, flew in the face of his lifelong experience at the gambling tables. He resolved the conflict with an appeal to the intervention of luck (he was a great believer in his own). Cardano's intuition recalls that of the little girl who, as the story goes, was scheduled to receive an injection from her pediatrician. Upset that her father signed a consent form stating that he understood that 1 out of 10,000 children experience a serious allergic reaction, she insisted on speaking to the doctor. "I want to know," the little girl asked, "what number you're on."

The remainder of this book presents mind tools for overcoming innumeracy that are easy to learn, apply, and remember. I focus on three kinds of tools: Franklin's law for overcoming the illusion of certainty, devices for communicating risk intelligibly, and the use of natural frequencies for turning clouded thinking into insight. Overcoming innumeracy is like completing a three-step program to statistical literacy. The first step is to defeat the illusion of certainty. The second step is to learn about the actual risks of relevant events and actions. The third step is to communicate the risks in an understandable way and to draw inferences without falling prey to clouded thinking. The general point is this: Innumeracy does not simply reside in our minds but in the representations of risk that we choose.

Copyright © 2002 by Gerd Gigerenzer