AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

Arvind Narayanan

Book - 2024

"A trade book that argues that predictive AI is snake oil: it cannot and will never work. Artificial Intelligence is an umbrella term for a set of loosely related technologies. For instance, ChatGPT has little in common with algorithms that banks use to evaluate loan applicants. Both of these are referred to as AI, but in all of the salient ways -- how they work, what they're used for and by whom, and how they fail -- they couldn't be more different. Understanding the fundamental differences between AI technologies is critical for a technologically literate public to evaluate how AI is being used all around us. In this book, Arvind Narayanan and Sayash Kapoor explain the major strains of AI in use today: generative AI, predictive AI, and AI for content moderation. They show readers how to differentiate between them and, importantly, make a cogent argument for which types of AI can work well and which can never work, because of their inherent limitations. AI in this latter category, the authors argue, is AI snake oil: it does not and cannot work. More precisely, generative AI is imperfect but can be used for good once we learn how to apply it appropriately, whereas predictive AI can never work -- in spite of the fact that it's being sold and marketed today in products -- because we have never been able to accurately predict human behavior."

006.3/Narayanan
Published
Princeton : Princeton University Press [2024]
Language
English
Main Author
Arvind Narayanan (author)
Other Authors
Sayash Kapoor, 1996- (author)
Physical Description
x, 348 pages : illustrations ; 23 cm
Bibliography
Includes bibliographical references (pages 293-330) and index.
ISBN
9780691249131
  • 1. Introduction
  • The Dawn of AI as a Consumer Product
  • AI Shakes Up Entertainment
  • Predictive AI: An Extraordinary Claim That Requires Extraordinary Evidence
  • Painting AI with a Single Brush Is Tempting but Flawed
  • A Series of Curious Circumstances Led to This Book
  • The AI Hype Vortex
  • What Is AI Snake Oil?
  • Who This Book Is For
  • 2. How Predictive AI Goes Wrong
  • Predictive AI Makes Life-Altering Decisions
  • A Good Prediction Is Not a Good Decision
  • Opaque AI Incentivizes Gaming
  • Overautomation
  • Predictions about the Wrong People
  • Predictive AI Exacerbates Existing Inequalities
  • A World without Prediction
  • Concluding Thoughts
  • 3. Why Can't AI Predict the Future?
  • A Brief History of Predicting the Future Using Computers
  • Getting Specific
  • The Fragile Families Challenge
  • Why Did the Fragile Families Challenge End in Disappointment?
  • Predictions in Criminal Justice
  • Failure Is Hard. What about Success?
  • The Meme Lottery
  • From Individuals to Aggregates
  • Recap: Reasons for Limits to Prediction
  • 4. The Long Road to Generative AI
  • Generative AI Is Built on a Long Series of Innovations Dating Back Eighty Years
  • Failure and Revival
  • Training Machines to "See"
  • The Technical and Cultural Significance of ImageNet
  • Classifying and Generating Images
  • Generative AI Appropriates Creative Labor
  • AI for Image Classification Can Quickly Become AI for Surveillance
  • From Images to Text
  • From Models to Chatbots
  • Automating Bullshit
  • Deepfakes, Fraud, and Other Malicious Uses
  • The Cost of Improvement
  • Taking Stock
  • 5. Is Advanced AI an Existential Threat?
  • What Do the Experts Think?
  • The Ladder of Generality
  • What's Next on the Ladder?
  • Accelerating Progress?
  • Rogue AI?
  • A Global Ban on Powerful AI?
  • A Better Approach: Defending against Specific Threats
  • Concluding Thoughts
  • 6. Why Can't AI Fix Social Media?
  • When Everything Is Taken Out of Context
  • Cultural Incompetence
  • AI Excels at Predicting … the Past
  • When AI Goes Up against Human Ingenuity
  • A Matter of Life and Death
  • Now Add Regulation into the Mix
  • The Hard Part Is Drawing the Line
  • Recap: Seven Shortcomings of AI for Content Moderation
  • A Problem of Their Own Making
  • The Future of Content Moderation
  • 7. Why Do Myths about AI Persist?
  • AI Hype Is Different from Previous Technology Hype
  • The AI Community Has a Culture and History of Hype
  • Companies Have Few Incentives for Transparency
  • The Reproducibility Crisis in AI Research
  • News Media Misleads the Public
  • Public Figures Spread AI Hype
  • Cognitive Biases Lead Us Astray
  • 8. Where Do We Go from Here?
  • AI Snake Oil Is Appealing to Broken Institutions
  • Embracing Randomness
  • Regulation: Cutting through the False Dichotomy
  • Limitations of Regulation
  • AI and the Future of Work
  • Growing Up with AI in Kai's World
  • Growing Up with AI in Maya's World
  • Acknowledgments
  • References
  • Index
Review by Publishers Weekly

Narayanan (coauthor of Bitcoin and Cryptocurrency Technologies), a computer science professor at Princeton University, and Kapoor, a PhD candidate in Princeton's computer science program, present a capable examination of AI's limitations. Because ChatGPT and other generative AI software imitate text patterns rather than memorize facts, it's impossible to prevent them from spouting inaccurate information, the authors contend. They suggest that this shortcoming undercuts any hoped-for efficiency gains and describe how news website CNET's deployment of the technology in 2022 backfired after errors were discovered in many of the pieces it wrote. Predictive AI programs are riddled with design flaws, the authors argue, recounting how software tasked with determining "the risk of releasing a defendant before trial" was trained on a national dataset and then used in Cook County, Ill., where it failed to adjust for the county's lower crime rate and recommended thousands of defendants be jailed when they actually posed no threat. Narayanan and Kapoor offer a solid overview of AI's defects, though the anecdotes about racial biases in facial recognition software and the abysmal working conditions of data annotators largely reiterate the same critiques found in other AI cris de coeur. This may not break new ground, but it gets the job done. (Sept.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Book Review

Two academics in the burgeoning field of AI survey the landscape and present an accessible state-of-the-union report. Like it or not, AI is widespread. The present challenge involves strategies to use it properly, comprehend its limitations, and ask the right questions of the entrepreneurs promoting it as a cure for every social ill. The experienced authors bring a wealth of knowledge to their subject: Narayanan is a professor of computer science at Princeton and director of its Center for Information Technology Policy, and Kapoor is a doctoral candidate with hands-on experience of AI. They walk through the background of AI development and explain the difference between generative and predictive AI. They see great advantages in generative AI, which can provide, collate, and communicate massive amounts of information. Developers and regulators must take strict precautions in areas such as academic cheating, but overall, the advantages outweigh the problems. Predictive AI, however, is another matter. It seeks to apply generalized information to specific cases, and there are plenty of horror stories about people being denied benefits, having reputations ruined, or losing jobs due to the opaque decision of an AI system. The authors argue convincingly that when individuals are affected, there should always be human oversight, even if it means additional costs. In addition, the authors show how the claims of AI developers are often overoptimistic (to say the least), and it pays to look at their records as well as have a plan for regular review. Written in language that even nontechnical readers can understand, the text provides plenty of practical suggestions that can benefit creators and users alike. It's also worth noting that Narayanan and Kapoor write a regular newsletter to update their points. Highly useful advice for those who work with or are affected by AI--i.e., nearly everyone. Copyright (c) Kirkus Reviews, used with permission.
