How We Learn: Why Brains Learn Better Than Any Machine ... for Now

Stanislas Dehaene

Book - 2020

"In today's technological society, with an unprecedented amount of information at our fingertips, learning plays a more central role than ever. In How We Learn, Stanislas Dehaene decodes its biological mechanisms, delving into the neuronal, synaptic, and molecular processes taking place in the brain. He explains why youth is such a sensitive period, during which brain plasticity is maximal, but also assures us that our abilities continue into adulthood, and that we can enhance our learning and memory at any age. We can all 'learn to learn' by taking maximal advantage of the four pillars of the brain's learning algorithm: attention, active engagement, error feedback, and consolidation. The human brain is an extraordinary machine. Its ability to process information and adapt to circumstances by reprogramming itself is unparalleled, and it remains the best source of inspiration for recent developments in artificial intelligence. The exciting advancements in A.I. of the last twenty years reveal just as much about our remarkable abilities as they do about the potential of machines. How We Learn sits at the boundary of computer science, neurobiology, and cognitive psychology to explain how learning really works and how to make the best use of the brain's learning algorithms, in our schools and universities as well as in everyday life"--


1 / 1 copies available
Location Call Number   Status
2nd Floor 153.15/Dehaene Checked In
[New York, New York] : Viking, 2020.
Main Author
Stanislas Dehaene (author)
First American edition
Item Description
Based in part on: Apprendre! : les talents du cerveau, le défi des machines.
Physical Description
xxviii, 319 pages, 16 unnumbered pages of plates : illustrations (chiefly color) ; 24 cm
Includes bibliographical references (pages 269-305) and index.
  • Seven definitions of learning
  • Why our brain learns better than current machines
  • Babies' invisible knowledge
  • The birth of a brain
  • Nurture's share
  • Recycle your brain
  • Attention
  • Active engagement
  • Error feedback
  • Consolidation
  • Conclusion. Reconciling education with neuroscience.
Review by Publisher's Weekly Review

Dehaene (Consciousness in the Brain), a professor of experimental psychology at the Collège de France, devotes this detailed, sometimes hard-going, but stimulating study to the science of learning. The first half largely concerns brain physiology, touching on, among other subjects, how learning can physically change the brain, such as by thickening the cortex. Dehaene also refutes the idea of the "blank slate" infant brain, noting that "even a baby... encodes the external world using abstract and systematic rules--an ability that eludes... conventional artificial neural networks." The book's second, less technical and more widely accessible half explores the "four pillars" of learning: attention, active engagement, error feedback, and consolidation of information (for which REM sleep is especially key) and discusses how enjoyment can assist learning--Dehaene notes that laughter seems to enhance curiosity and memory. While calling for more research into the field, he issues his most potentially controversial pronouncement, at least for neurodiversity advocates: "We all face similar hurdles in learning and the same teaching methods can surmount them." At times not an easy book to comprehend, it will nonetheless be a richly instructive one for educators, parents, and others interested in how to most effectively foster the pursuit of knowledge. Agent: Max Brockman, Brockman, Inc. (Jan.)

(c) Copyright PWxyz, LLC. All rights reserved
Review by Kirkus Book Review

Computers learn, but they will not hold a candle to humans for the foreseeable future, according to this expert overview of learning. Dehaene (Cognitive Psychology/Collège de France; Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, 2014, etc.) emphasizes that a fly can learn and that a newborn's brain contains a great deal of information thanks to several billion years of evolution. Unfortunately, he writes, "evolution adapts each organism to its ecological niche, but it does so at an appallingly slow rate." However, "the ability to learn acts much faster; it can change behavior within the span of a few minutes, which is the very quintessence of learning." Never mind our opposable thumb, upright posture, fire, tools, or language; it is education that enabled humans to conquer the world. "We are not simply Homo sapiens, but Homo docens, the species that teaches itself," writes the author. The short-term memory of a literate person is almost double that of someone who has never attended school. IQ (a supposedly fixed concept) increases several points for each additional year of education. In the first of the book's occasionally dense but mostly accessible sections, Dehaene defines learning as simply forming an internal model of the outside world. In the second, he describes how learning occurs. A computer leaves the assembly line as a blank slate, but a newborn's brain already possesses circuits enabling it to generate abstract formulas and the ability to choose wisely from those formulas according to their plausibility. The third section defines the "four pillars of learning" that make our brain the most effective learning device. "Attention" carefully selects relevant signals. "Active engagement" (i.e., curiosity) generates hypotheses. "Error feedback" corrects the mental model when the world violates our expectations.
Finally, "consolidation," which involves sleep as a key component, transfers knowledge to long-term memory, freeing neural circuits for further learning. The best educators, whether parents or teachers, follow these principles, and the author urges their general adoption. Dehaene's fourth insightful exploration of neuroscience will pay dividends for attentive readers.

Copyright (c) Kirkus Reviews, used with permission.

Chapter 1: Seven Definitions of Learning

What does "learning" mean? My first and most general definition is the following: to learn is to form an internal model of the external world. You may not be aware of it, but your brain has acquired thousands of internal models of the outside world. Metaphorically speaking, they are like miniature mock-ups more or less faithful to the reality they represent. We all have in our brains, for example, a mental map of our neighborhood and our home: all we have to do is close our eyes and envision them with our thoughts. Obviously, none of us were born with this mental map; we had to acquire it through learning. The richness of these mental models, which are, for the most part, unconscious, exceeds our imagination. For example, you possess a vast mental model of the English language, which allows you to understand the words you are reading right now and guess that plastovski is not an English word, whereas swoon and wistful are, and dragostan could be. Your brain also includes several models of your body: it constantly uses them to map the position of your limbs and to direct them while maintaining your balance. Other mental models encode your knowledge of objects and your interactions with them: knowing how to hold a pen, write, or ride a bike. Others even represent the minds of others: you possess a vast mental catalog of people who are close to you, their appearances, their voices, their tastes, and their quirks. These mental models can generate hyper-realistic simulations of the universe around us. Did you ever notice that your brain sometimes projects the most authentic virtual reality shows, in which you can walk, move, dance, visit new places, have brilliant conversations, or feel strong emotions? These are your dreams! It is fascinating to realize that all the thoughts that come to us in our dreams, however complex, are simply the product of our free-running internal models of the world.
But we also dream up reality when awake: our brain constantly projects hypotheses and interpretative frameworks on the outside world. This is because, unbeknownst to us, every image that appears on our retina is ambiguous: whenever we see a plate, for instance, the image is compatible with an infinite number of ellipses. If we see the plate as round, even though the raw sense data picture it as an oval, it is because our brain supplies additional data: it has learned that the round shape is the most likely interpretation. Behind the scenes, our sensory areas ceaselessly compute with probabilities, and only the most likely model makes it into our consciousness. It is the brain's projections that ultimately give meaning to the flow of data that reaches us from our senses. In the absence of an internal model, raw sensory inputs would remain meaningless. Learning allows our brain to grasp a fragment of reality that it had previously missed and to use it to build a new model of the world. It can be a part of external reality, as when we learn history, botany, or the map of a city, but our brain also learns to map the reality internal to our bodies, as when we learn to coordinate our actions and concentrate our thoughts in order to play the violin. In both cases, our brain internalizes a new aspect of reality: it adjusts its circuits to appropriate a domain that it had not mastered before. Such adjustments, of course, have to be pretty clever. The power of learning lies in its ability to adjust to the external world and to correct for errors, but how does the brain of the learner "know" how to update its internal model when, say, it gets lost in its neighborhood, falls from its bike, loses a game of chess, or misspells the word ecstasy? We will now review seven key ideas that lie at the heart of present-day machine-learning algorithms and that may apply equally well to our brains: seven different definitions of what "learning" means.
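The plate example above can be made concrete as a toy Bayesian computation. This is only an illustrative sketch, not Dehaene's model: the prior and likelihood values below are invented for the example, and the brain's actual probabilistic machinery is far richer.

```python
# Hedged sketch: "seeing the plate as round" as Bayesian inference.
# All numbers here are hypothetical, chosen only to illustrate the idea.

def posterior(prior_round, like_round, prior_oval, like_oval):
    """Posterior probability that the object is a round plate,
    given an elliptical image on the retina (Bayes' rule)."""
    evidence = prior_round * like_round + prior_oval * like_oval
    return prior_round * like_round / evidence

# Prior: almost every plate we have ever encountered was round.
# Likelihood: a round plate seen at an angle projects an ellipse
# nearly as readily as a genuinely oval object does.
p = posterior(prior_round=0.95, like_round=0.8,
              prior_oval=0.05, like_oval=0.9)
print(f"P(round | elliptical image) = {p:.3f}")
```

Even though the oval hypothesis fits the raw image slightly better, the strong prior for roundness dominates, so the "round plate" interpretation wins, which is exactly the sense in which only the most likely model reaches consciousness.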
Learning Is Adjusting the Parameters of a Mental Model

Adjusting a mental model is sometimes very simple. How, for example, do we reach out to an object that we see? In the seventeenth century, René Descartes (1596-1650) had already guessed that our nervous system must contain processing loops that transform visual inputs into muscular commands (see the figure on the next page). You can experience this for yourself: try grabbing an object while wearing somebody else's glasses, preferably someone who is very nearsighted. Even better, if you can, get a hold of prisms that shift your vision a dozen degrees to the left and try to catch the object. You will see that your first attempt is completely off: because of the prisms, your hand reaches to the right of the object that you are aiming for. Gradually, you adjust your movements to the left. Through successive trial and error, your gestures become more and more precise, as your brain learns to correct the offset of your eyes. Now take off the glasses and grab the object: you'll be surprised to see that your hand goes to the wrong location, now way too far to the left! So, what happened? During this brief learning period, your brain adjusted its internal model of vision. A parameter of this model, one that corresponds to the offset between the visual scene and the orientation of your body, was set to a new value. During this recalibration process, which works by trial and error, what your brain did can be likened to what a hunter does in order to adjust his rifle's viewfinder: he takes a test shot, then uses it to adjust his scope, thus progressively shooting more and more accurately. This type of learning can be very fast: a few trials are enough to correct the gap between vision and action. However, the new parameter setting is not compatible with the old one, hence the systematic error we all make when we remove the prisms and return to normal vision.
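The prism recalibration described above can be sketched as a one-parameter, error-driven update loop. This is a minimal illustration under assumed values (the 12-degree shift and the learning rate are invented for the sketch), not a model of the actual neural circuitry:

```python
# Minimal sketch of trial-and-error recalibration: a single offset
# parameter is nudged toward zero error on each reach, like a hunter
# adjusting a rifle scope after each test shot.

def adapt(true_shift_deg, trials=20, learning_rate=0.5):
    """Return the brain's learned estimate of the visual offset
    after repeatedly reaching, missing, and correcting."""
    estimate = 0.0  # current internal model of the offset
    for _ in range(trials):
        error = true_shift_deg - estimate   # how far the hand missed
        estimate += learning_rate * error   # correct part of the error
    return estimate

learned = adapt(true_shift_deg=12.0)  # prisms shift vision ~12 degrees
print(f"learned offset: {learned:.2f} degrees")
```

After a handful of trials the estimate converges on the prism shift. Remove the prisms (true shift back to zero) and the still-calibrated estimate now produces the systematic miss in the opposite direction that the text describes.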
Undeniably, this type of learning is somewhat unusual, because it requires the adjustment of only a single parameter (viewing angle). Most of our learning is much more elaborate and requires adjusting tens, hundreds, or even thousands of millions of parameters (every synapse in the relevant brain circuit). The principle, however, is always the same: it boils down to searching, among myriad possible settings of the internal model, for those that best correspond to the state of the external world. An infant is born in Tokyo. Over the next two or three years, its internal model of language will have to adjust to the characteristics of the Japanese language. This baby's brain is like a machine with millions of settings at each level. Some of these settings, at the auditory level, determine which inventory of consonants and vowels is used in Japanese and the rules that allow them to be combined. A baby born into a Japanese family must discover which phonemes make up Japanese words and where to place the boundaries between those sounds. One of the parameters, for example, concerns the distinction between the sounds /R/ and /L/: this is a crucial contrast in English, but not in Japanese, which makes no distinction between Bill Clinton's election and his erection. . . . Each baby must thus fix a set of parameters that collectively specify which categories of speech sounds are relevant for his or her native language. A similar learning procedure is duplicated at each level, from sound patterns to vocabulary, grammar, and meaning. The brain is organized as a hierarchy of models of reality, each nested inside the next like Russian dolls, and learning means using the incoming data to set the parameters at every level of this hierarchy. Let's consider a high-level example: the acquisition of grammatical rules. Another key difference between Japanese and English that the baby must learn concerns the order of words.
In a canonical sentence with a subject, a verb, and a direct object, the English language first states the subject, then the verb, and finally its object: "John + eats + an apple." In Japanese, on the other hand, the most common order is subject, then object, then verb: "John + an apple + eats." What is remarkable is that the order is also reversed for prepositions (which logically become post-positions), possessives, and many other parts of speech. The sentence "My uncle wants to work in Boston," thus becomes mumbo jumbo worthy of Yoda from Star Wars: "Uncle my, Boston in, work wants," which makes perfect sense to a Japanese speaker. Fascinatingly, these reversals are not independent of one another. Linguists think that they arise from the setting of a single parameter called the "head position": the defining word of a phrase, its head, is always placed first in English (in Paris, my uncle, wants to live), but last in Japanese (Paris in, uncle my, live wants). This binary parameter distinguishes many languages, even some that are not historically linked (the Navajo language, for example, follows the same rules as Japanese). In order to learn English or Japanese, one of the things that a child must figure out is how to set the head position parameter in his internal language model.

Learning Is Exploiting a Combinatorial Explosion

Can language learning really be reduced to the setting of some parameters? If this seems hard to believe, it is because we are unable to fathom the extraordinary number of possibilities that open up as soon as we increase the number of adjustable parameters. This is called the "combinatorial explosion": the exponential increase that occurs when you combine even a small number of possibilities. Suppose that the grammar of the world's languages can be described by about fifty binary parameters, as some linguists postulate. This yields 2^50 combinations, which is over one million billion possible languages, or a 1 followed by fifteen zeros!
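The arithmetic behind that last claim is easy to check. A two-line sketch, using the fifty-binary-parameter assumption quoted from the linguists above:

```python
# Fifty binary grammar parameters (an assumption borrowed from the
# text) already span more than a million billion candidate languages.
n_parameters = 50
n_languages = 2 ** n_parameters
print(f"{n_languages:,}")  # 1,125,899,906,842,624 — over 10**15
```

So roughly 1.1 quadrillion settings, comfortably more than a 1 followed by fifteen zeros, from only fifty yes/no switches.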
The syntactic rules of the world's three thousand languages easily fit into this gigantic space. However, in our brain, there aren't just fifty adjustable parameters, but an astoundingly larger number: eighty-six billion neurons, each with about ten thousand synaptic contacts whose strength can vary. The space of mental representations that opens up is practically infinite. Human languages heavily exploit these combinations at all levels. Consider, for instance, the mental lexicon: the set of words that we know and whose model we carry around with us. Each of us has learned about fifty thousand words with the most diverse meanings. This seems like a huge lexicon, but we manage to acquire it in about a decade because we can decompose the learning problem. Indeed, considering that these fifty thousand words are on average two syllables, each consisting of about three phonemes, taken from the forty-four phonemes in English, the binary coding of all these words requires less than two million elementary binary choices ("bits," whose value is 0 or 1). In other words, all our knowledge of the dictionary would fit in a small 250-kilobyte computer file (each byte comprising eight bits). This mental lexicon could be compressed to an even smaller size if we took into account the many redundancies that govern words. Drawing six letters at random, like "xfdrga," does not generate an English word. Real words are composed of a pyramid of syllables that are assembled according to strict rules. And this is true at all levels: sentences are regular collections of words, which are regular collections of syllables, which are regular collections of phonemes. The combinations are both vast (because one chooses among several tens or hundreds of elements) and bounded (because only certain combinations are allowed). To learn a language is to discover the parameters that govern these combinations at all levels. 
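The lexicon estimate above can also be checked back-of-the-envelope style. The figures (50,000 words, two syllables of about three phonemes each, 44 English phonemes) are the ones the text itself supplies:

```python
import math

# Rough information content of a 50,000-word mental lexicon,
# using the figures quoted in the text.
words = 50_000
phonemes_per_word = 2 * 3              # ~2 syllables x ~3 phonemes
bits_per_phoneme = math.log2(44)       # one choice among 44 phonemes
total_bits = words * phonemes_per_word * bits_per_phoneme

print(f"{total_bits / 1e6:.2f} million bits")
print(f"{total_bits / 8 / 1024:.0f} KB")
```

The result comes out near 1.64 million bits, i.e., about 200 KB, under the 2-million-bit and 250-kilobyte bounds stated above, and that is before exploiting any of the redundancies between words.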
In summary, the human brain breaks down the problem of learning by creating a hierarchical, multilevel model. This is particularly obvious in the case of language, from elementary sounds to the whole sentence or even discourse, but the same principle of hierarchical decomposition is reproduced in all sensory systems. Some brain areas capture low-level patterns: they see the world through a very small temporal and spatial window, thus analyzing the smallest patterns. For example, in the primary visual area, the first region of the cortex to receive visual inputs, each neuron analyzes only a very small portion of the retina. It sees the world through a pinhole and, as a result, discovers very low-level regularities, such as the presence of a moving oblique line. Millions of neurons do the same work at different points in the retina, and their outputs become the inputs of the next level, which thus detects "regularities of regularities," and so on and so forth. At each level, the scale broadens: the brain seeks regularities on increasingly vast scales, in both time and space. From this hierarchy emerges the ability to detect increasingly complex objects or concepts: a line, a finger, a hand, an arm, a human body . . . no, wait, two, there are two people facing each other, a handshake. . . . It is the first Trump-Macron encounter!

Learning Is Minimizing Errors

The computer algorithms that we call "artificial neural networks" are directly inspired by the hierarchical organization of the cortex. Like the cortex, they contain a pyramid of successive layers, each of which attempts to discover deeper regularities than the previous one. Because these consecutive layers organize the incoming data in deeper and deeper ways, they are also called "deep networks."
Each layer, by itself, is capable of discovering only an extremely simple part of the external reality (mathematicians speak of a linearly separable problem, i.e., each neuron can separate the data into only two categories, A and B, by drawing a straight line between them). Assemble many of these layers, however, and you get an extremely powerful learning device, capable of discovering complex structures and adjusting to very diverse problems. Today's artificial neural networks, which take advantage of the advances in computer chips, are also deep, in the sense that they contain dozens of successive layers. These layers become increasingly insightful and capable of identifying abstract properties the further away they are from the sensory input. Let's take the example of the LeNet algorithm, created by the French pioneer of neural networks, Yann LeCun (see figure 2 in the color insert). As early as the 1990s, this neural network achieved remarkable performance in the recognition of handwritten characters. For years, Canada Post used it to automatically process handwritten postal codes. How does it work? The algorithm receives the image of a written character as an input, in the form of pixels, and it proposes, as an output, a tentative interpretation: one out of the ten possible digits or twenty-six letters. The artificial network contains a hierarchy of processing units that look a bit like neurons and form successive layers. The first layers are connected directly with the image: they apply simple filters that recognize lines and curve fragments. The layers higher up in the hierarchy, however, contain wider and more complex filters. Higher-level units can therefore learn to recognize larger and larger portions of the image: the curve of a 2, the loop of an O, or the parallel lines of a Z . . . until we reach, at the output level, artificial neurons that respond to a character regardless of its position, font, or case.
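The point about linear separability and stacked layers can be shown with the classic XOR function: no single straight line separates its two categories, so no one-layer unit can compute it, but two stacked layers can. This is an illustrative sketch with hand-set weights (chosen for clarity, not learned the way LeNet's were):

```python
# Why depth matters: XOR is not linearly separable, so a single
# threshold unit cannot compute it, but two stacked layers can.
# Weights and thresholds below are hand-set for the illustration.

def step(x):
    """Threshold unit: fires (1) when its input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: an OR-like detector and an AND-like detector.
    h_or  = step(x1 + x2 - 0.5)   # fires if at least one input is on
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    # Output layer: "OR but not AND" is exactly XOR.
    return step(h_or - 2 * h_and - 0.5)

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Each unit on its own still only draws a single straight boundary; the power comes from the second layer combining the first layer's "regularities of regularities," the same principle the cortical hierarchy and LeNet exploit at much larger scale.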
All these properties are not imposed by a programmer: they result entirely from the millions of connections that link the units. These connections, once adjusted by an automated algorithm, define the filter that each neuron applies to its inputs: their settings explain why one neuron responds to the number 2 and another to the number 3.

Excerpted from How We Learn: Why Brains Learn Better Than Any Machine ... for Now by Stanislas Dehaene. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.