Takeaways

  1. Obsessives (who actively elevate culture) and amateurs (who have limited cultural attention) have an outsized impact on what history eventually considers “significant” culture.
  2. If mankind could believe something false for 2,000 years (e.g., how gravity works), we shouldn’t assume our current “truths” will endure.
  3. Society is uncomfortable identifying specific truths that will be disproven, even though people agree that collective wrongness is inevitable.


Highlights

The implications of collective wrongness

We now know (“know”) that Newton’s concept was correct. Humankind had been collectively, objectively wrong for roughly twenty centuries. Which provokes three semi-related questions:

  • If mankind could believe something false was objectively true for two thousand years, why do we reflexively assume that our current understanding of gravity — which we’ve embraced for a mere 350 years — will somehow exist forever?
  • Is it possible that this type of problem has simply been solved? What if Newton’s answer really is—more or less—the final answer, and the only one we will ever need? Because if that is true, it would mean we’re at the end of a process that has defined the experience of being alive. It would mean certain intellectual quests would no longer be necessary.
  • Which statement is more reasonable to make: “I believe gravity exists” or “I’m 99.9 percent certain that gravity exists”? Certainly, the second statement is safer. But if we’re going to acknowledge even the slightest possibility of being wrong about gravity, we’re pretty much giving up on the possibility of being right about anything at all.

Klosterman’s Razor

Klosterman’s Razor: the philosophical belief that the best hypothesis is the one that reflexively accepts its potential wrongness to begin with.

The long-term cultural insignificance of plot mechanics

What critics in the nineteenth century were profoundly wrong about was not the experience of reading [Moby-Dick]; what they were wrong about was how that experience would be valued by other people. … Which forces us to consider the importance — or the lack of importance — of plot mechanics.

The short answer seems to be that the specific substance of a novel matters very little. … The larger key is the tone, and particularly the ability of that tone to detach itself from the social moment of its creation.

The reason something becomes retrospectively significant in a far-flung future is detached from the reason it was significant at the time of its creation — and that’s almost always due to a recalibration of social ideologies that future generations will accept as normative.

The long-term influence of obsessives, who elevate subtext

A book becomes popular because of its text, but it’s the subtext that makes it live forever. For the true obsessive, whatever an author doesn’t explicitly explain ends up becoming everything that matters most (and since it’s inevitably the obsessives who keep art alive, they make the rules).

If the meaning of a book can be deduced from a rudimentary description of its palpable plot, the life span of that text is limited to the time of its release. Historically awesome art always means something different from what it superficially appears to suggest — and if future readers can’t convince themselves that the ideas they’re consuming are less obvious than whatever simple logic indicates, that book will disappear. … When any novel is rediscovered and culturally elevated, part of the process is creative: The adoptive generation needs to be able to decide for themselves what the deeper theme is, and it needs to be something that wasn’t widely recognized by the preceding generation.

So this, it seems, is the key for authors who want to live forever: You need to write about important things without actually writing about them. … “The most amazing writer of this generation is someone you’ve never heard of, representing a subculture we don’t even recognize, expressing ideas that don’t signify what they appear to mean.”

The history of rock music and society’s invention of teenagers

The symbolic value of rock is conflict-based. It emerged as a by-product of the post-WW2 invention of the teenager. → Obviously, there have always been living humans between the ages of twelve and twenty. But it wasn’t until after WW2 that the notion of an “in between” period connecting the experience of childhood with the experience of adulthood became something people recognized as a real demographic. Prior to this, you were a child until you started working or got married; the moment that happened, you became an adult (even if those things happened when you were eleven).

The current state of rock music

“Rock” can now signify anything, so it really signifies nothing; it’s more present, but less essential. It’s also shackled by its own formal limitations: Most rock songs are made with six strings and electricity, four thicker strings and electricity, and drums. The advent of the digital synthesizer opened the window of possibility in the 1980s, but only marginally. By now, it’s almost impossible to create a new rock song that doesn’t vaguely resemble an old rock song. So what we have is a youth-oriented musical genre that (a) isn’t symbolically important, (b) lacks creative potentiality, and (c) has no specific tie to young people. It has completed its historical trajectory. It will always subsist, but only as itself. And if something is only itself, it doesn’t particularly matter. Rock will recede out of view, just as all great things eventually do.

The long-term cultural influence of amateurs

But in order for someone to argue in favor of any architect except Wright (or even to be in a position to name three other plausible candidates), that person would almost need to be an expert in architecture. Normal humans don’t possess enough information to nominate alternative possibilities. And what emerges from that social condition is an insane kind of logic: Frank Lloyd Wright is indisputably the greatest architect of the twentieth century, and the only people who’d potentially disagree with that assertion are those who legitimately understand the question.

History is defined by people who don’t really understand what they are defining.

I don’t believe all art is the same. I wouldn’t be a critic if I did. Subjective distinctions can be made, and those distinctions are worth quibbling about. The juice of life is derived from arguments that don’t seem obvious. But I don’t believe subjective distinctions about quality transcend to anything close to objective truth — and every time somebody tries to prove otherwise, the results are inevitably galvanized by whatever it is that they get wrong.

To matter forever, you need to matter to those who don’t care. And if that strikes you as sad, be sad.

Society’s discomfort with specific wrongness

There is, certainly, an unbreachable chasm between the subjective and objective world. A reasonable person expects subjective facts to be overturned, because subjective facts are not facts; they’re just well-considered opinions, held by multiple people at the same time. Whenever the fragility of those beliefs is applied to a specific example, people bristle… But if you remove the specificity…, any smart person will agree that such a scenario is not only plausible but inevitable. In other words, everyone concedes we have the potential to be subjectively wrong about anything, as long as we don’t explicitly name whatever that something is. Our sense of subjective reality is simultaneously based on an acceptance of abstract fallibility (“Who is to say what constitutes good art?”) and a casual certitude that we’re right about exclusive assertions that feel like facts (“The Wire represents the apex of television”).

But the objective world is different. Here, we traffic in literal facts — but the permanence of those facts matters less than the means by which they are generated.

Physicists don’t care about asking why, according to deGrasse Tyson

“…Philosophers like arguing about [semantics]. In physics, we’re way more practical than philosophers. Way more practical. If something works, we’re on to the next problem. We’re not arguing why. Philosophers argue why. It doesn’t mean we don’t like to argue. We’re just not derailed by why, provided the equation gives you an accurate account of reality.”

If you remove the deepest question — the question of why — the risk of major error falls through the floor. And this is because the problem of why is a problem that’s impossible to detach from the foibles of human nature.

The ultimate model for naive realism

It’s irrational to question any explicit detail within a field of study that few rational people classify as complete.

An explanation of dreams

In 1976, two Harvard psychiatrists proposed the possibility that dreams were just the by-product of the brain stem firing chaotically during sleep. Since then, the conventional scientific sentiment has become that — while we don’t totally understand why dreaming happens — the dreams themselves are meaningless. They’re images and sounds we unconsciously collect, almost at random. The psychedelic weirdness of dreaming can be explained by the brain’s topography: The part of your mind that controls emotions (the limbic system) is highly active during dreams, while the part that controls logic (the prefrontal cortex) stays dormant. This is why a dream can feel intense and terrifying, even if what you’re seeing within that dream wouldn’t sound scary if described to someone else. This, it seems, has become the standard way to compartmentalize a collective, fantastical phenomenon: Dreaming is just something semi-interesting that happens when our mind is at rest — and when it happens in someone else’s mind (and that person insists on describing it to us at breakfast), it isn’t interesting at all.

Which seems like a potentially massive misjudgment.

Why smart people tend to be wrong as often as their not-so-smart peers

These advocates remind me of an apocryphal quote attributed to film critic Pauline Kael after the 1972 presidential election: “How could Nixon have won? I don’t know one person who voted for him.” Now, Kael never actually said this. But that erroneous quote survives as the best shorthand example for why smart people tend to be wrong as often as their not-so-smart peers — they work from the flawed premise that their worldview is standard.

The state of boxing as a niche sport

Because it operates on a much smaller scale, boxing is — inside its own crooked version of reality — flourishing. It doesn’t seem like it, because the average person doesn’t care. But boxing doesn’t need average people. It’s not really a sport anymore. It’s a mildly perverse masculine novelty, and that’s enough to keep it relevant. It must also be noted that boxing’s wounds were mostly self-inflicted. Its internal corruption was more damaging than its veneration of violence, and much of its fanbase left of their own accord.

The fox and the hedgehog

The clever fox knows many things, states the proverb, but the old hedgehog knows one big thing. … In a plain sense, the adage simply means that some people know a little about many subjects while other people know a lot about one subject. Taken at face value, it seems like the former quality should be preferable to the latter — yet we know this is not true, due to the inclusion of the word “but.” The fox knows a lot, but the hedgehog knows one singular thing that obviously matters more. So what is that singular thing? Well, maybe this: The fox knows all the facts, and the fox can place those facts into a logical context. The fox can see how history and politics intertwine, and he can knit them into a nonfiction novel that makes narrative sense. But the fox can’t see the future, so he assumes it does not exist. The fox is a naive realist who believes the complicated novel he has constructed is almost complete. Meanwhile, the hedgehog constructs nothing. He just reads over the fox’s shoulder. But he understands something about the manuscript that the fox can’t comprehend — this book will never be finished. The fox thinks he’s at the end, but he hasn’t even reached the middle. What the fox views as conclusions are only plot mechanics, which means they’ll eventually represent the opposite of whatever they seem to suggest.

This is the difference between the fox and the hedgehog. Both creatures know that storytelling is everything, and that the only way modern people can understand history and politics is through the machinations of a story. But only the hedgehog knows that storytelling is secretly the problem, which is why the fox is constantly wrong.

The Internet reinforces collective “rightness” with a perpetual sense of now

For a time in the early 2000s, there was a belief that bloggers would become the next wave of authors, and many big-money blogger-to-author book deals were signed. Besides a handful of notable exceptions, this rarely worked, commercially or critically. The problem was not a lack of talent; the problem was that writing a blog and writing a book have almost no psychological relationship. They both involve a lot of typing, but that’s about as far as it goes. A sentence in a book is written a year before it’s published, with the express intent that it will still make sense twenty years later. A sentence on the Internet is designed to last one day, usually the same day it’s written. The next morning, it’s overwritten again (by multiple writers). The Internet experience is not even that similar to daily newspaper writing, because there’s no physical artifact to demarcate the significance of the original moment. Yet this limitation is not a failure. It proved to be an advantage. It naturally aligns with the early-adoption sensibility that informs everything else. Even when the Internet appears to be nostalgically churning through the cultural past, it’s still hunting for “old newness.” A familiar video clip from 1986 does not possess virality; what the medium desires is an obscure clip from 1985 that recontextualizes the familiar one. The result is a perpetual sense of now. It’s a continual merging of the past with the present, all jammed into the same fixed perspective. This makes it seem like our current, temporary views have always existed, and that what we believe today is what people have always believed. There is no longer any distance between what we used to think and what we currently think, because our evolving vision of reality does not extend beyond yesterday.

And this, somewhat nonsensically, is how we might be right: All we need to do is convince ourselves we always were. And now there’s a machine that makes that easy.

Culture as a river versus ocean

If we think about the trajectory of anything — art, science, sports, politics — not as a river but as an endless, shallow ocean, there is no place for collective wrongness. All feasible ideas and every possible narrative exist together, and each new societal generation can scoop out a bucket of whatever antecedent is necessary to support their contemporary conclusions.

The modern bias toward collective rightness is socially detrimental

Is there a danger (or maybe stupidity) in refusing to accept that certain espoused truths are, in fact, straightforwardly true? Yes — if you take such thinking to the absolute extreme…. But I think there’s a greater detriment in our escalating progression toward the opposite extremity — the increasingly common ideology that assures people they’re right about what they believe. … Most day-to-day issues are minor, the passage of time will dictate who was right and who was wrong, and the future will sort out the past. It is, however, socially detrimental. It hijacks conversation and aborts ideas. It engenders a delusion of simplicity that benefits people with inflexible minds. It makes the experience of living in a society slightly worse than it should be.

The social need for a “third rail” in arguments like climate change

If a problem is irreversible, is there still an ethical obligation to try to reverse it?

Such a nihilistic question is hard and hopeless, but not without meaning. It needs to be asked. Yet in the modern culture of certitude, such ambivalence has no place in a public conversation. The third rail is the enemy of both poles. Accepting the existence of climate change while questioning its consequence is seen as both an unsophisticated denial of the scientific community and a blind acceptance of the non-scientific status quo.

The problem with “You’re doing it wrong” arguments

There’s a phrase I constantly notice on the Internet, particularly after my wife pointed out how incessant it has become. The phrase is, “You’re doing it wrong.” … It could be argued that this is simply an expository shortcut, and maybe you think I should appreciate this phrase, since it appears to recognize the possibility that some widely accepted assumption is being dutifully reconsidered. But that’s not really what’s happening here. Whenever you see something defining itself with the “You’re doing it wrong” conceit, it’s inevitably arguing for a different approach that is just as specific and limited. When you see the phrase “You’re doing it wrong,” the unwritten sentence that follows is: “And I’m doing it right.” Which has become the pervasive way to argue about just about everything, particularly in a Web culture where discourse is dominated by the reaction to (and the rejection of) other people’s ideas, as opposed to generating one’s own.

A flaw in progressive wisdom: rejecting outmoded thinking

This particular attempt [of a magazine column elevating critically unacclaimed movies] illustrates a specific mode of progressive wisdom: the conscious decision to replace one style of thinking with a new style of thinking, despite the fact that both styles could easily coexist. I realize certain modes of thinking can become outdated. But outdated modes are essential to understanding outdated times, which are the only times that exist.

The smaller part of our mind is who we really are

We spend our lives learning many things, only to discover (again and again) that most of what we’ve learned is either wrong or irrelevant. A big part of our mind can handle this; a smaller, deeper part cannot. And it’s that smaller part that matters more, because that part of our mind is who we really are (whether we like it or not).

Why it matters to speculate about collective rightness

“If we won’t be alive in a hundred or three hundred or a thousand years, what difference will it make if we’re unknowingly wrong about everything, much less anything? Isn’t being right for the sake of being right pretty much the only possible motive for any attempt at thinking about today from the imagined vantage point of tomorrow? … The only reason to speculate about the details of a distant future is for the unprovable pleasure of being potentially correct about it now.”

… There is, however, more than one way to view this. There is not, in a material sense, any benefit to being right about a future you will not experience. But there are intrinsic benefits to constantly probing the possibility that our assumptions about the future might be wrong: humility and wonder. It’s good to view reality as beyond our understanding, because it is. And it’s exciting to imagine the prospect of a reality that cannot be imagined, because that’s as close to a pansophical omniscience as we will ever come. If you aspire to be truly open-minded, you can’t just try to see the other sides of an argument. That’s not enough. You have to go all the way.