But what if we're wrong? Thinking about the present as if it were the past

Chuck Klosterman, 1972-

Sound recording - 2016

We live in a culture of casual certitude. This has always been the case, no matter how often that certainty has failed. Though no generation believes there's nothing left to learn, every generation unconsciously assumes that what has already been defined and accepted is (probably) pretty close to how reality will be viewed in perpetuity. And then, of course, time passes. Ideas shift. Opinions invert. What once seemed reasonable eventually becomes absurd, replaced by modern perspectives that feel even more irrefutable and secure--until, of course, they don't.

Location: 2nd Floor
Call Number: COMPACT DISC/303.49/Klosterman
Status: Checked In (1 / 1 copies available)
Published
New York, NY : Books on Tape [2016]
Language
English
Main Author
Chuck Klosterman, 1972- (author)
Other Authors
Fiona Hardingham (narrator)
Edition
Unabridged
Item Description
Title from container.
Physical Description
8 audio discs (approximately 10 hrs.) : digital ; 4 3/4 in
ISBN
9780451484895
  • A brief examination as to why this book is hopeless (and a briefer examination as to why it might not be)
  • A quaint and curious volume of (destined-to-be) forgotten lore
  • But that's the way I like it, Baby. I don't want to live forever.
  • "Merit"
  • Burn thy witches
  • The world that is not there
  • Don't tell me what happens. I'm recording it.
  • Sudden death (over time)
  • The case against freedom
  • But what if we're right?
  • Only the penitent man shall pass.
Review by Booklist Review

*Starred Review* Klosterman (I Wear the Black Hat, 2013) is a skeptic of the highest order, and, as he himself admits, he is also often wrong. Not about everything. Just about most things. In this compellingly readable volume he insists it is not a collection of essays, even though he admits it might look like and even feel like one. Klosterman wonders about many things, from democracy to science and beyond. But this doesn't quite do it justice, since Klosterman's wonderfully inventive mind is all over the place: Moby-Dick, gravity, books, language, Jane Austen, Beowulf, rock and roll, the last scene from The Sopranos, Johnny B. Goode, conspiracy theories, the whole idea of merit, television as an art form, Malcolm Gladwell and the future of football and red meat (it makes sense in context), the movie Whiplash, and much more. He often thinks aloud by writing aloud, confused by his own contradictions ("I told people I loved my job at the newspaper, but if that was true, why did I hate going to work? I guess that's why they call it work"). His comments about pop culture are spot-on ("Roseanne was an attempt to show how white families weren't necessarily rich and functional"), as is most everything else here. Jump in anywhere; it's all fabulous.--Sawyers, June. Copyright 2016 Booklist.

From Booklist, Copyright (c) American Library Association. Used with permission.
Review by Publisher's Weekly Review

Veteran voice actor Hardingham certainly brings impressive credentials to any narration project. At first glance, the British woman may not seem an obvious choice to tackle the latest title from noted cultural critic and essayist Klosterman, a man who originally hails from deep in the American Heartland, but Hardingham manages to pull off her duties with an effective tone that balances the sometimes whimsical quality of Klosterman's musings with the serious nature of the larger questions he raises. Klosterman draws on personal anecdotes to make his case, including several references to the generational transition between an analog world and the digital media revolution. Hardingham's professional and poised approach to narration keeps the displays of personal catharsis in check to keep listeners' attention on the bigger picture. Her earnest vocal style seems akin to what listeners would expect in a highbrow broadcast setting such as public radio. A Blue Rider hardcover. (June) © Copyright PWxyz, LLC. All rights reserved.

Review by Library Journal Review

Klosterman delivers another introspective think piece, this time reflecting on the present from the point of view of centuries into the future. As human society has advanced, many commonly held concepts have changed immensely; for example, the belief in a geocentric universe started in ancient Greece, but the Copernican Revolution in the 16th century discredited that theory. Klosterman asks which of our current, most resolute beliefs will change in the same manner. What elements of our current culture will be remembered, which will be forgotten, and which will be transformed? Moving quickly from one topic to the next, Klosterman investigates a wide range of themes from gravity to color to popular culture in his usual witty manner. He draws from interviews with a host of authorities such as Neil deGrasse Tyson, Junot Díaz, and Ryan Adams to present a very engaging and accessible piece. Fiona Hardingham adds a sophisticated element with her pleasant narration and British accent. VERDICT While thought provoking, this book's often lengthy conjecture might turn off some readers. Still, Klosterman's work will be very popular. ["An engaging and entertaining workout for the mind led by one of today's funniest and most thought-provoking writers": LJ 4/15/16 starred review of the Blue Rider hc.]-Cathleen Keyser, NoveList, Durham, NC © Copyright 2016. Library Journals LLC, a wholly owned subsidiary of Media Source, Inc. No redistribution permitted.

Review by Kirkus Book Review

An inquiry into why we'll probably be wrong about almost everything. The ever smart, witty, and curious Klosterman (I Wear the Black Hat: Grappling with Villains (Real and Imagined), 2013, etc.) takes on the notion that it's "impossible to understand the world of today until today has become tomorrow." One might call that a "klosterism," and the book is full of them. It's also full of intelligence and insights, as the author gleefully turns ideas upside down to better understand them. Klosterman is currently obsessed with ideas that are so accepted we dare not dispute them--e.g., gravity. Once upon a time, Aristotle believed things didn't float away because they were in their "natural place." Then Newton came along 2,000 years later and changed the way we think. Then Einstein said gravity was really a warping of time and space. Now, scientists are trying to "rethink gravity itself." Therefore, the author posits, in the future, whenever that may be, we'll know we were wrong about whatever we thought "gravity" was back then. In each chapter, Klosterman takes on a different topic, applying "Klosterman's Razor" to it: "the philosophical belief that the best hypothesis is the one that reflexively accepts its potential wrongness to begin with." He seeks out a variety of experts to assist him. George Saunders and Franz Kafka help him sort out why future literary greats are "at the moment either totally unknown or widely disrespected." Physicists Neil deGrasse Tyson and Brian Greene help him explore the concept of a multiverse. Others assist Klosterman in taking on the future of rock 'n' roll ("there are still things about the Beatles that can't be explained"), time, dreams, democracy, TV shows (Roseanne is an overlooked work of "genius"), and sports. Klosterman is fond of lists and predictions. Here's one: this book will become a popular book club selection because it makes readers think.
Replete with lots of nifty, whimsical footnotes, this clever, speculative book challenges our beliefs with jocularity and perspicacity. Copyright Kirkus Reviews, used with permission.


***This excerpt is from an advance uncorrected proof*** Copyright ©2016 Chuck Klosterman

I've spent most of my life being wrong. Not about everything. Just about most things. I mean, sometimes I get stuff right. I married the right person. I've never purchased life insurance as an investment. The first time undrafted free agent Tony Romo led a touchdown drive against the Giants on Monday Night Football, I told my roommate, "I think this guy will have a decent career." At a New Year's Eve party in 2008, I predicted Michael Jackson would unexpectedly die within the next twelve months, an anecdote I shall casually recount at every New Year's party I'll ever attend for the rest of my life. But these are the exceptions. It is far, far easier for me to catalog the various things I've been wrong about: My insistence that I would never own a cell phone. The time I wagered $100--against $1--that Barack Obama would never become president (or even receive the Democratic nomination). My three-week obsession over the looming Y2K crisis, prompting me to hide bundles of cash, bottled water, and Oreo cookies throughout my one-bedroom apartment. At this point, my wrongness doesn't even surprise me. I almost anticipate it. Whenever people tell me I'm wrong about something, I might disagree with them in conversation, but--in my mind--I assume their accusation is justified, even when I'm relatively certain they're wrong, too. Yet these failures are small potatoes. These micro-moments of wrongness are personal: I assumed the answer to something was "A," but the true answer was "B" or "C" or "D." Reasonable parties can disagree on the unknowable, and the passage of time slowly proves one party to be slightly more reasonable than the other. The stakes are low. If I'm wrong about something specific, it's (usually) my own fault, and someone else is (usually, but not totally) right. But what about the things we're all wrong about?
What about ideas that are so accepted and internalized that we're not even in a position to question their fallibility? These are ideas so ingrained in the collective consciousness that it seems foolhardy to even wonder if they're potentially untrue. Sometimes these seem like questions only a child would ask, since children aren't paralyzed by the pressures of consensus and common sense. It's a dissonance that creates the most unavoidable of intellectual paradoxes: When you ask smart people if they believe there are major ideas currently accepted by the culture at large that will eventually be proven false, they will say, "Well, of course. There must be. That phenomenon has been experienced by every generation who's ever lived, since the dawn of human history." Yet offer those same people a laundry list of contemporary ideas that might fit that description, and they'll be tempted to reject them all. It is impossible to examine questions we refuse to ask. These are the big potatoes.

Like most people, I like to think of myself as a skeptical person. But I'm pretty much in the tank for gravity. It's the natural force most recognized as perfunctorily central to everything we understand about everything else. If an otherwise well-executed argument contradicts the principles of gravity, the argument is inevitably altered to make sure that it does not. The fact that I'm not a physicist makes my adherence to gravity especially unyielding, since I don't know anything about gravity that wasn't told to me by someone else. My confidence in gravity is absolute, and I believe this will be true until the day I die (and if someone subsequently throws my dead body out of a window, I believe my corpse's rate of acceleration will be 9.8 m/s²). And I'm probably wrong. Maybe not completely, but partially. And maybe not today, but eventually. "There is a very, very good chance that our understanding of gravity will not be the same in five hundred years.
In fact, that's the one arena where I would think that most of our contemporary evidence is circumstantial, and that the way we think about gravity will be very different." These are the words of Brian Greene, a theoretical physicist at Columbia University who writes books with titles like Icarus at the Edge of Time. He's the kind of physicist famous enough to guest star on a CBS sitcom, assuming that sitcom is The Big Bang Theory. "For two hundred years, Isaac Newton had gravity down. There was almost no change in our thinking until 1907. And then from 1907 to 1915, Einstein radically changes our understanding of gravity: No longer is gravity just a force, but a warping of space and time. And now we realize quantum mechanics must have an impact on how we describe gravity within very short distances. So there's all this work that really starts to pick up in the 1980s, with all these new ideas about how gravity would work in the microscopic realm. And then string theory comes along, trying to understand how gravity behaves on a small scale, and that gives us a description--which we don't know to be right or wrong--that equates to a quantum theory of gravity. Now, that requires extra dimensions of space. So the understanding of gravity starts to have radical implications for our understanding of reality. And now there are folks, inspired by these findings, who are trying to rethink gravity itself. They suspect gravity might not even be a fundamental force, but an emergent[1] force. So I do think--and I think many would agree--that gravity is the least stable of our ideas, and the most ripe for a major shift." If that sounds confusing, don't worry--I was confused when Greene explained it to me as I sat in his office (and he explained it to me twice). There are essential components to physics and math that I will never understand in any functional way, no matter what I read or how much time I invest. A post-gravity world is beyond my comprehension. But the concept of a post-gravity world helps me think about something else: It helps me understand the pre-gravity era. And I don't mean the days before Newton published Principia in 1687, or even that period from the late 1500s when Galileo was (allegedly) dropping balls off the Leaning Tower of Pisa and inadvertently inspiring the Indigo Girls. By the time those events occurred, the notion of gravity was already drifting through the scientific ether. Nobody had pinned it down, but the mathematical intelligentsia knew Earth was rotating around the sun in an elliptical orbit (and that something was making this happen). That was around three hundred years ago. I'm more fixated on how life was another three hundred years before that. Here was a period when the best understanding of why objects did not spontaneously float was some version of what Aristotle had argued more than a thousand years prior: He believed all objects craved their "natural place," and that this place was the geocentric center of the universe, and that the geocentric center of the universe was Earth.

[1] This means that gravity might just be a manifestation of other forces--not a force itself, but the peripheral result of something else. Greene's analogy was with the idea of temperature: Our skin can sense warmth on a hot day, but "warmth" is not some independent thing that exists on its own. Warmth is just the consequence of invisible atoms moving around very fast, creating the sensation of temperature. We feel it, but it's not really there. So if gravity were an emergent force, it would mean that gravity isn't the central power pulling things to the Earth, but the tangential consequence of something else we can't yet explain. We feel it, but it's not there. It would almost make the whole idea of "gravity" a semantic construction.
In other words, Aristotle believed that a dropped rock fell to the earth because rocks belonged on earth and wanted to be there. So let's consider the magnitude of this shift: Aristotle--arguably the greatest philosopher who ever lived--writes the book Physics and defines his argument. His view exists unchallenged for almost two thousand years. Newton (history's most meaningful mathematician, even to this day) eventually watches an apocryphal apple fall from an apocryphal tree and inverts the entire human understanding of why the world works as it does. Had this been explained to those people in the fourteenth century with no understanding of science--in other words, pretty much everyone else alive in the fourteenth century--Newton's explanation would have seemed way, way crazier than what they currently believed: Instead of claiming that Earth's existence defined reality and that there was something essentialist about why rocks acted like rocks, Newton was advocating an invisible, imperceptible force field that somehow anchored the moon in place. We now know ("know") that Newton's concept was correct. Humankind had been collectively, objectively wrong for roughly twenty centuries. Which provokes three semi-related questions:

  • If mankind could believe something false was objectively true for two thousand years, why do we reflexively assume that our current understanding of gravity--which we've embraced for a mere three hundred fifty years--will somehow exist forever?
  • Is it possible that this type of problem has simply been solved? What if Newton's answer really is--more or less--the final answer, and the only one we will ever need? Because if that is true, it would mean we're at the end of a process that has defined the experience of being alive. It would mean certain intellectual quests would no longer be necessary.
  • Which statement is more reasonable to make: "I believe gravity exists" or "I'm 99.9 percent certain that gravity exists"? Certainly, the second statement is safer. But if we're going to acknowledge even the slightest possibility of being wrong about gravity, we're pretty much giving up on the possibility of being right about anything at all.

There's a popular website that sells books (and if you purchased this particular book, consumer research suggests there's a 41 percent chance you ordered it from this particular site). Book sales constitute only about 7 percent of this website's total sales, but books are the principal commodity this enterprise is known for. Part of what makes the site successful is its user-generated content; consumers are given the opportunity to write reviews of their various purchases, even if they never actually consumed the book they're critiquing. Which is amazing, particularly if you want to read negative, one-star reviews of Herman Melville's Moby-Dick. "Pompous, overbearing, self-indulgent, and insufferable. This is the worst book I've ever read," wrote one dissatisfied customer in 2014. "Weak narrative, poor structure, incomplete plot threads, ¾ of the chapters are extraneous, and the author often confuses himself with the protagonist. One chapter is devoted to the fact that whales don't have noses. Another is on the color white." Interestingly, the only other purchase this person elected to review was a Hewlett-Packard printer that can also send faxes, which he awarded two stars. I can't dispute this person's distaste for Moby-Dick. I'm sure he did hate reading it. But his choice to state this opinion in public--almost entirely devoid of critical context, unless you count his take on the HP printer--is more meaningful than the opinion itself. Publicly attacking Moby-Dick is shorthand for arguing that what we're socialized to believe about art is fundamentally questionable.
Taste is subjective, but some subjective opinions are casually expressed the same way we articulate principles of math or science. There isn't an ongoing cultural debate over the merits of Moby-Dick: It's not merely an epic novel, but a transformative literary innovation that helps define how novels are supposed to be viewed. Any discussion about the clichéd concept of "the Great American Novel" begins with this book. The work itself is not above criticism, but no individual criticism has any impact; at this point, attacking Moby-Dick only reflects the contrarianism of the critic. We all start from the supposition that Moby-Dick is accepted as self-evidently awesome, including (and perhaps especially) those who disagree with that assertion. So how did this happen? Melville publishes Moby-Dick in 1851, basing his narrative on the real-life 1839 account of a murderous sperm whale nicknamed "Mocha Dick." The initial British edition is around nine hundred pages. Melville, a moderately successful author at the time of the novel's release, assumes this book will immediately be seen as a masterwork. This is his premeditated intention throughout the writing process. But the reviews are mixed, and some are contemptuous ("it repels the reader" is the key takeaway from one of the very first reviews in the London Spectator). It sells poorly--at the time of Melville's death, total sales hover below five thousand copies. The failure ruins Melville's life: He becomes an alcoholic and a poet, and eventually a customs inspector. When he dies destitute in 1891, one has to assume his perspective on Moby-Dick is something along the lines of "Well, I guess that didn't work. Maybe I should have spent fewer pages explaining how to tie complicated knots." For the next thirty years, nothing about the reception of this book changes.
But then World War I happens, and--somehow, and for reasons that can't be totally explained[2]--modernists living in postwar America start to view literature through a different lens. There is a Melville revival. The concept of what a novel is supposed to accomplish shifts in his direction and amplifies with each passing generation, eventually prompting people (like the 2005 director of Columbia University's American studies program) to classify Moby-Dick as "the most ambitious book ever conceived by an American writer." Pundits and cranks can disagree with that assertion, but no one cares if they do. Melville's place in history is secure, almost as if he were an explorer or an inventor: When the prehistoric remains of a previously unknown predatory whale were discovered in Peru in 2010, the massive creature was eventually named Livyatan melvillei. A century after his death, Melville gets his own extinct super-whale named after him, in tribute to a book that commercially tanked. That's an interesting kind of career. Now, there's certainly a difference between collective, objective wrongness (e.g., misunderstanding gravity for twenty centuries) and collective, subjective wrongness (e.g., not caring about Moby-Dick for seventy-five years). The machinations of the transitions are completely different. Yet both scenarios hint at a practical reality and a modern problem. The practical reality is that any present-tense version of the world is unstable. What we currently consider to be true--both objectively and subjectively--is habitually provisional. But the modern problem is that reevaluating what we consider "true" is becoming increasingly difficult. Superficially, it's become easier for any one person to dispute the status quo: Everyone has a viable platform to criticize Moby-Dick (or, I suppose, a mediocre HP printer).
If there's a rogue physicist in Winnipeg who doesn't believe in gravity, he can self‑publish a book that outlines his argument and potentially attract a larger audience than  Principia  found during its first hundred years of existence. But increasing the capacity for the reconsideration of ideas is not the same as actually changing those ideas (or even  allowing  them to change by their own momentum). We live in an age where virtually no content is lost and virtually all content is shared. The sheer amount of information about every current idea makes those concepts difficult to contradict, particularly in a framework where public consensus has become the ultimate arbiter of validity. In other words, we're starting to behave as if we've reached the end of human knowledge. And while that notion is undoubtedly false, the sensation of certitude it generates is paralyzing.   In her book  Being Wrong , author Kathryn Schulz spends a few key pages on the concept of "naïve realism." Schulz notes that while there are few conscious proponents of naïve realism, "that doesn't mean there are no naïve realists." I would go a step further than Schulz; I suspect most conventionally intelligent people are naïve realists, and I think it might be the defining intellectual quality of this era. The straightforward definition of naïve realism doesn't seem that outlandish: It's a theory that suggests the world is exactly as it appears. Obviously, this viewpoint creates a lot of opportunity for colossal wrongness (e.g., "The sun appears to move across the sky, so the sun must be orbiting Earth"). But my personal characterization of naïve realism is wider and more insidious. 
I think it operates as the manifestation of two ingrained beliefs:

  • "When considering any question, I must be rational and logical, to the point of dismissing any unverifiable data as preposterous," and
  • "When considering any question, I'm going to assume that the information we currently have is all the information that will ever be available."

Here's an extreme example: the possibility of life after death. When considered rationally, there is no justification for believing that anything happens to anyone upon the moment of his or her death. There is no reasonable counter to the prospect of nothingness. Any anecdotal story about "floating toward a white light" or Shirley MacLaine's past life on Atlantis or the details in Heaven Is for Real are automatically (and justifiably) dismissed by any secular intellectual. Yet this wholly logical position discounts the overwhelming likelihood that we currently don't know something critical about the experience of life, much less the ultimate conclusion to that experience. There are so many things we don't know about energy, or the way energy is transferred, or why energy (which can't be created or destroyed) exists at all. We can't truly conceive the conditions of a multidimensional reality, even though we're (probably) already living inside one. We have a limited understanding of consciousness. We have a limited understanding of time, and of the perception of time, and of the possibility that all time is happening at once. So while it seems unrealistic to seriously consider the prospect of life after death, it seems equally naïve to assume that our contemporary understanding of this phenomenon is remotely complete. We have no idea what we don't know, or what we'll eventually learn, or what might be true despite our perpetual inability to comprehend what that truth is. It's impossible to understand the world of today until today has become tomorrow. This is no brilliant insight, and only a fool would disagree. But it's remarkable how habitually this truth is ignored. We constantly pretend our perception of the present day will not seem ludicrous in retrospect, simply because there doesn't appear to be any other option. Yet there is another option, and the option is this: We must start from the premise that--in all likelihood--we are already wrong. And not "wrong" in the sense that we are examining questions and coming to incorrect conclusions, because most of our conclusions are reasoned and coherent. The problem is with the questions themselves.

[2] The qualities that spurred this rediscovery can, arguably, be quantified: The isolation and brotherhood the sailors experience mirrors the experience of fighting in a war, and the battle against a faceless evil whale could be seen as a metaphor for the battle against the faceless abstraction of evil Germany. But the fact that these details can be quantified is still not a satisfactory explanation as to why Moby-Dick became the specific novel that was selected and elevated. It's not like Moby-Dick is the only book that could have served this role.

Excerpted from But What If We're Wrong?: Thinking about the Present As If It Were the Past by Chuck Klosterman. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.