The coming wave Technology, power, and the twenty-first century's greatest dilemma

Mustafa Suleyman

Book - 2023

"We are approaching a critical threshold in the history of our species. Everything is about to change. Soon you will live surrounded by AIs. They will organise your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy. None of us are prepared. As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the centre of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies. In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side, the threat of overbearing surveillance on the other. Can we forge a narrow path between catastrophe and dystopia? This groundbreaking book from the ultimate AI insider establishes "the containment problem"--the task of maintaining control over powerful technologies--as the essential challenge of our age"--

Location              Call Number        Status      Availability
Bookmobile Nonfiction 303.483/Suleyman   Checked In  1 / 1 copies available
2nd Floor             303.483/Suleyman   Checked In  1 / 1 copies available
Published
New York : Crown [2023]
Language
English
Main Author
Mustafa Suleyman (author)
Other Authors
Michael Bhaskar (author)
Edition
First edition
Physical Description
viii, 332 pages ; 25 cm
Bibliography
Includes bibliographical references (pages 289-317) and index.
ISBN
9780593593950
9780593728178
  • Glossary of Key Terms
  • Prologue
  • Chapter 1. Containment is Not Possible
  • Part I. Homo Technologicus
  • Chapter 2. Endless Proliferation
  • Chapter 3. The Containment Problem
  • Part II. The Next Wave
  • Chapter 4. The Technology of Intelligence
  • Chapter 5. The Technology of Life
  • Chapter 6. The Wider Wave
  • Chapter 7. Four Features of the Coming Wave
  • Chapter 8. Unstoppable Incentives
  • Part III. States of Failure
  • Chapter 9. The Grand Bargain
  • Chapter 10. Fragility Amplifiers
  • Chapter 11. The Future of Nations
  • Chapter 12. The Dilemma
  • Part IV. Through the Wave
  • Chapter 13. Containment Must Be Possible
  • Chapter 14. Ten Steps Toward Containment
  • Life After the Anthropocene
  • Acknowledgments
  • Notes
  • Index
Review by Choice Review

The Coming Wave is an informed reflection on modern artificial intelligence and synthetic biology technologies. The wave in the title refers to the sea changes in business, society, and science that the author predicts will result from these new tools. Suleyman (entrepreneur) first surveys past waves of technology, finding common threads in humanity's response. He then outlines the major developments in both learning computers and biological and genetic manipulation. Finally, he lays out his vision for the potential of this technology. He frequently uses parallels from history to illustrate his predictions. Despite a seemingly fatalistic view, Suleyman ultimately ends with a ten-step strategy to rein in the coming wave so that it might benefit all. Suleyman's natural language and frequent use of examples create a stirring narrative aimed more toward general interest readers than academic scholars. The strong focus on the difficulties of regulation, potential for exploitation, and business interests makes this book more suitable to sociology and political science fields than the hard sciences. Summing Up: Recommended. Undergraduates and general readers. --Jean Marie Cook, University of West Georgia

Copyright American Library Association, used with permission.
Review by Publisher's Weekly Review

"An emerging cluster of related technologies centered on AI and synthetic biology... will both empower humankind and present unprecedented new risks," according to this shrewd debut. Suleyman, cofounder of the artificial intelligence companies DeepMind and Inflection AI, chronicles the technological advances that led to today's AI boom with anecdotes from his career, describing DeepMind's 2012 work on an algorithm capable of teaching itself simple computer games and the company's 2018 breakthrough developing a program capable of predicting unknown protein structures. According to Suleyman, four features distinguish these new technologies: the high speed at which they're developing, the broad variety of uses for them, their ability to function relatively autonomously, and their capacity to affect "entire societies" (he mentions the possibility that "a single system could control autonomous vehicles throughout a territory"). Regulation is the key to dodging dystopia, he contends, outlining 10 steps for keeping AI under human control, including ensuring that all AI have a "bulletproof off switch" and requiring government-issued licenses to produce "the most sophisticated AI systems." Suleyman's account of DeepMind's achievements can come across as self-serving, but anecdotes about other companies working on technologies capable of, for instance, interfacing directly with the human brain, underscore the mind-bending possibilities. It's a sober take on navigating the perils of AI. (Sept.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Book Review

Amid the flood of optimism about artificial intelligence, the significant dangers must be understood and assessed. Suleyman might seem like a strange person to write a book about the dangers of AI. He is the CEO and co-founder of Inflection AI, and, before that, he co-founded DeepMind (now owned by Alphabet), a company working at the leading edge of AI research. As the author shows, however, it is precisely because he is an expert that he knows enough to be fearful. He believes that within a few years, AI systems will break into the broad public market, placing enormous computing power in the hands of anyone with a few thousand dollars and a bit of expertise. Suleyman recognizes that this could bring remarkable benefits, but he argues that the negatives are even greater. One frightening possibility is a disgruntled individual using off-the-shelf AI to manufacture a deadly, unstoppable virus. Other scenarios range from disrupting financial markets to creating floods of disinformation. Suleyman accepts that the AI genie is too far out of the bottle to be put back; the questions are now about containment and regulation. There is a model in the framework established by the biomedical sector to set guidelines and moral limits on what genetic experiments could take place. The author also suggests looking at "choke points," including the manufacturers of advanced chips and the companies that manage the cloud. The key step, however, would be the development of a culture of caution in the AI community. As Suleyman admits, any of these proposals would be extremely difficult to implement. Nonetheless, he states his case with clarity and authority, and the result is a worrying, provocative book. "Containment is not, on the face of it, possible," he concludes. "And yet for all our sakes, containment must be possible." An informative yet disturbing study and a clear warning from someone whose voice cannot be ignored. Copyright (c) Kirkus Reviews, used with permission.

The Containment Problem

Revenge Effects

Alan Turing and Gordon Moore could never have predicted, let alone altered, the rise of social media, memes, Wikipedia, or cyberattacks. Decades after their invention, the architects of the atomic bomb could no more stop a nuclear war than Henry Ford could stop a car accident. Technology's unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world. Technology exists in a complex, dynamic system (the real world), where second-, third-, and nth-order consequences ripple out unpredictably. What on paper looks flawless can behave differently out in the wild, especially when copied and further adapted downstream. What people actually do with your invention, however well intentioned, can never be guaranteed.

Thomas Edison invented the phonograph so people could record their thoughts for posterity and to help the blind. He was horrified when most people just wanted to play music. Alfred Nobel intended his explosives to be used only in mining and railway construction. Gutenberg just wanted to make money printing Bibles. Yet his press catalyzed the Scientific Revolution and the Reformation, and so became the greatest threat to the Catholic Church since its establishment. Fridge makers didn't aim to create a hole in the ozone layer with chlorofluorocarbons (CFCs), just as the creators of the internal combustion and jet engines had no thought of melting the ice caps. In fact, early enthusiasts for automobiles argued for their environmental benefits: engines would rid the streets of mountains of horse dung that spread dirt and disease across urban areas. They had no conception of global warming.

Understanding technology is, in part, about trying to understand its unintended consequences, to predict not just positive spillovers but "revenge effects." Quite simply, any technology is capable of going wrong, often in ways that directly contradict its original purpose.
Think of the way that prescription opioids have created dependence, or how the overuse of antibiotics renders them less effective, or how the proliferation of satellites and debris known as "space junk" imperils spaceflight. As technology proliferates, more people can use it, adapt it, shape it however they like, in chains of causality beyond any individual's comprehension. As the power of our tools grows exponentially and as access to them rapidly increases, so do the potential harms, an unfolding labyrinth of consequences that no one can fully predict or forestall.

One day someone is writing equations on a blackboard or fiddling with a prototype in the garage, work seemingly irrelevant to the wider world. Within decades, it has produced existential questions for humanity. As we have built systems of increasing power, this aspect of technology has felt more and more pressing to me. How do we guarantee that this new wave of technologies does more good than harm?

Technology's problem here is a containment problem. If this aspect cannot be eliminated, it might be curtailed. Containment is the overarching ability to control, limit, and, if need be, close down technologies at any stage of their development or deployment. It means, in some circumstances, the ability to stop a technology from proliferating in the first place, checking the ripple of unintended consequences (both good and bad). The more powerful a technology, the more ingrained it is in every facet of life and society. Thus, technology's problems have a tendency to escalate in parallel with its capabilities, and so the need for containment grows more acute over time.

Does any of this get technologists off the hook? Not at all; more than anyone else, it is up to us to face it. We might not be able to control the final end points of our work or its long-term effects, but that is no reason to abdicate responsibility. Decisions technologists and societies make at the source can still shape outcomes. Just because consequences are difficult to predict doesn't mean we shouldn't try.

Excerpted from The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma by Mustafa Suleyman. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted, or distributed without the written permission of the publisher.