How to stay smart in a smart world: why human intelligence still beats algorithms

Gerd Gigerenzer

Book - 2022

"The book deflates the hype about AI, offering instead a balanced view of what it can and cannot do, and shows how humans can more wisely use digital technology"--

Location: 2nd Floor
Call Number: 303.4834/Gigerenzer
Status: Checked In (1 / 1 copies available)
Published
Cambridge, Massachusetts ; London, England : The MIT Press [2022]
Language
English
Main Author
Gerd Gigerenzer (author)
Physical Description
xxii, 297 pages : illustrations ; 24 cm
Bibliography
Includes bibliographical references (pages 229-284) and index.
ISBN
9780262046954
Table of Contents
  • Introduction
  • Part I. The Human Affair with AI
  • 1. Is True Love Just a Click Away?
  • 2. What AI Is Best At: The Stable-World Principle
  • 3. Machines Influence How We Think about Intelligence
  • 4. Are Self-Driving Cars Just Down the Road?
  • 5. Common Sense and AI
  • 6. One Data Point Can Beat Big Data
  • Part II. High Stakes
  • 7. Transparency
  • 8. Sleepwalking into Surveillance
  • 9. The Psychology of Getting Users Hooked
  • 10. Safety and Self-Control
  • 11. Fact or Fake?
  • Acknowledgments
  • Notes
  • Bibliography
  • Index
Review by Publishers Weekly

Gigerenzer (Risk Savvy), director emeritus at the Max Planck Institute for Human Development, offers plausible reassurance for those who fear artificial intelligence is poised to take over human decision-making. Things that AI can do well, Gigerenzer explains, such as playing chess, occur in strict rules-based environments where there's little or no chance of something unpredictable happening. The AI Watson's vaunted Jeopardy! victory over human champions Ken Jennings and Brad Rutter, for example, was less impressive than it appears, Gigerenzer writes, as it was the result of an altered game in which certain kinds of questions were excluded because it was anticipated that the AI wouldn't be able to answer them accurately. Gigerenzer also covers more pressing issues, among them self-driving cars that are unable to accurately assess dangers to pedestrians, tech and ads that are designed to demand attention and distract users, and the large-scale voluntary abandonment of privacy. It amounts to a solid case against "unconditional trust in complex algorithms," arguing that "more computing power and bigger data" won't bridge the gap between machine and mind, because most problems humans face involve "situations in which uncertainty abounds." Anyone worried about the age of AI will sleep better after reading this intelligent account. (Aug.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Library Journal

According to psychologist Gigerenzer (Calculated Risks: How To Know When Numbers Deceive You), as our technology-centered world becomes increasingly driven by artificial intelligence (AI), it is important to understand how the algorithms behind it work. Grasping what algorithms do well and understanding their limitations is the key to staying in charge of our lives. Gigerenzer reminds readers that AI works best in stable-world situations, where there is little unpredictable human behavior. AI is good at playing chess, analyzing health data, and assisting astronomy, but it comes up short with dating apps, predictive policing software, and fully self-driving cars. After providing numerous examples of how algorithms pervade daily life, the author turns to other tech minefields, such as our willingness to hand over personal data to companies like Google and Facebook, which has fueled the now ubiquitous economy known as surveillance capitalism. VERDICT Gigerenzer explains why technology is so addictive and offers tips for fostering digital self-control. A seriously compelling, eye-opening, and well-researched investigation.--Ragan O'Malley

(c) Copyright Library Journals LLC, a wholly owned subsidiary of Media Source, Inc. No redistribution permitted.