The alignment problem : machine learning and human values

Brian Christian, 1984-

Book - 2020

"A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want ...or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--Provided by publisher.

Location: 2nd Floor
Call Number: 006.3101/Christian
Published: New York, NY : W.W. Norton & Company, [2020]
Language: English
Main Author: Brian Christian, 1984- (author)
Edition: First edition
Physical Description: xii, 476 pages ; 25 cm
Bibliography: Includes bibliographical references (pages [401]-451) and index.
ISBN: 9780393635829
Contents
  • Prophecy: Representation; Fairness; Transparency
  • Agency: Reinforcement; Shaping; Curiosity
  • Normativity: Imitation; Inference; Uncertainty
Review by Publishers Weekly

Christian (The Most Human Human), a writer and lecturer on technology-related issues, delivers a riveting and deeply complex look at artificial intelligence and the significant challenge of creating computer models that "capture our norms and values." Machines that use mathematical and computational systems to learn are everywhere in modern life, Christian writes, and are "steadily replacing both human judgment and explicitly programmed software" in decision-making. Some of those decisions, however, are unreliable, as Christian shows through scrupulous research. Facial recognition systems can be "wildly inaccurate for people of one race or gender but not another" and perform particularly poorly at correctly identifying Black women. Meanwhile, risk assessment software, which helps decide bail, parole, and even sentencing for criminal defendants, has been widely adopted nationwide without being extensively audited. Though it's tempting to adopt a doom-and-gloom outlook while reading of these problems, Christian refreshingly insists that "our ultimate conclusions need not be grim," as a new subset of computer scientists "focused explicitly on the ethics and safety of machine-learning" is working to bridge the gap between human values and AI learning styles. Lay readers will find Christian's revealing study a helpful guide to an urgent problem in tech. (Oct.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Reviews

The latest examination of the problems and pitfalls of artificial intelligence. Computer scientist Christian begins this technically rich but accessible discussion of AI with a very real problem: when programming an algorithm to teach a machine analogies and substitutions, researchers discovered that the phrase "doctor - man + woman" came back with the answer "nurse," while "shopkeeper - man + woman" came back with "housewife." An algorithm designed to examine and label photographs returned the caption "gorillas" for a photo of two African Americans. It happened that one of those men was a programmer himself, and he said, "It's not even the algorithm at fault. It did exactly what it was designed to do." In other words, the algorithm is returning human biases, just as algorithms do when examining criminal records, which often leads to machine-assisted sentencing recommendations that overwhelmingly give Whites lighter punishments than Blacks and Latinos, and just as color calibration programs for TVs and movie screens are indexed to white skin. So how to teach machines to be reliable and bias-free? Christian considers models of human learning, such as those developed by Jean Piaget, whom Christian finds off base on a couple of key assumptions but still a useful guide. He recalls that Alan Turing wondered why machine-learning programs were geared as if the machines were adults instead of children. Children, of course, learn by mistakes and accidents and by emulating adult doings "that would lead to the interesting result," but can a machine? On that score, Christian ponders how self-driving vehicles are taught to be autonomous, making decisions that are logical--but logical to a machine mind, not a human one. "Perhaps, rather than painstakingly trying to hand-code the things we care about," writes the author, "we should develop machines that simply observe human behavior and infer our values and desires from that"--a task easier said than done.

An intriguing exploration of AI, which is advancing faster than--well, than we are.

Copyright (c) Kirkus Reviews, used with permission.
