AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

Arvind Narayanan

Book - 2024

"A trade book that argues that predictive AI is snake oil: it cannot and will never work. Artificial Intelligence is an umbrella term for a set of loosely related technologies. For instance, ChatGPT has little in common with algorithms that banks use to evaluate loan applicants. Both of these are referred to as AI, but in all of the salient ways - how they work, what they're used for and by whom, and how they fail - they couldn't be more different. Understanding the fundamental differences between AI technologies is critical for a technologically literate public to evaluate how AI is being used all around us. In this book, Arvind Narayanan and Sayash Kapoor explain the major strains of AI in use today: generative AI, predicti...ve AI, and AI for content moderation. They show readers how to differentiate between them and, importantly, make a cogent argument for which types of AI can work well and which can never work, because of their inherent limitations. AI in this latter category, the authors argue, is AI snake oil: it does not and cannot work. More precisely, generative AI is imperfect but can be used for good once we learn how to apply it appropriately, whereas predictive AI can never work - in spite of the fact that it's being sold and marketed today in products - because we have never been able to accurately predict human behavior"--

Published
Princeton : Princeton University Press [2024]
Language
English
Main Author
Arvind Narayanan (author)
Other Authors
Sayash Kapoor, 1996- (author)
Physical Description
x, 348 pages : illustrations ; 23 cm
Bibliography
Includes bibliographical references (pages 293-330) and index.
ISBN
9780691249131
  • Introduction
  • How predictive AI goes wrong
  • Why can't AI predict the future?
  • The long road to generative AI
  • Is advanced AI an existential threat?
  • Why can't AI fix social media?
  • Why do myths about AI persist?
  • Where do we go from here?
Review by Publishers Weekly

Narayanan (coauthor of Bitcoin and Cryptocurrency Technologies), a computer science professor at Princeton University, and Kapoor, a PhD candidate in Princeton's computer science program, present a capable examination of AI's limitations. Because ChatGPT and other generative AI programs imitate text patterns rather than memorize facts, it's impossible to prevent them from spouting inaccurate information, the authors contend. They suggest that this shortcoming undercuts any hoped-for efficiency gains, describing how the news website CNET's deployment of the technology in 2022 backfired after errors were discovered in many of the pieces it wrote. Predictive AI programs are riddled with design flaws, the authors argue, recounting how software tasked with determining "the risk of releasing a defendant before trial" was trained on a national dataset and then used in Cook County, Ill., where it failed to adjust for the county's lower crime rate and recommended that thousands of defendants be jailed when they actually posed no threat. Narayanan and Kapoor offer a solid overview of AI's defects, though the anecdotes about racial biases in facial recognition software and the abysmal working conditions of data annotators largely reiterate critiques found in other AI cris de coeur. This may not break new ground, but it gets the job done. (Sept.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Reviews

Two academics in the burgeoning field of AI survey the landscape and present an accessible state-of-the-union report. Like it or not, AI is widespread. The present challenge involves strategies to use it properly, comprehend its limitations, and ask the right questions of the entrepreneurs promoting it as a cure for every social ill. The authors bring a wealth of knowledge to their subject: Narayanan is a professor of computer science at Princeton and director of its Center for Information Technology Policy, and Kapoor is a doctoral candidate with hands-on experience of AI. They walk through the background of AI development and explain the difference between generative and predictive AI. They see great advantages in generative AI, which can provide, collate, and communicate massive amounts of information. Developers and regulators must take strict precautions in areas such as academic cheating, but overall, the advantages outweigh the problems. Predictive AI, however, is another matter. It seeks to apply generalized information to specific cases, and there are plenty of horror stories about people being denied benefits, having their reputations ruined, or losing jobs due to the opaque decision of an AI system. The authors argue convincingly that when individuals are affected, there should always be human oversight, even if it means additional costs. In addition, the authors show how the claims of AI developers are often overoptimistic (to say the least), and it pays to look at their track records as well as to have a plan for regular review. Written in language that even nontechnical readers can understand, the text provides plenty of practical suggestions that can benefit creators and users alike. It's also worth noting that Narayanan and Kapoor write a regular newsletter to update their points. Highly useful advice for those who work with or are affected by AI--i.e., nearly everyone. Copyright (c) Kirkus Reviews, used with permission.