One thing I’ve learned from years of wide reading is that every text, whether fiction or nonfiction, article or book, consists of two levels: the subject matter and the agenda. The subject matter is, basically, the topic or explicit contents of the text (e.g., the presidency of Andrew Jackson), and the agenda is the real point (sometimes stated, often hidden) of the text (Jackson prefigures the populist nationalism of Donald Trump, with likely similar results). The “elephant in the brain,” which constitutes the subject matter of this book, is the largely unconscious work we do to deny (to ourselves and to others) the hidden motives that drive our everyday behavior, how in fact we “accentuate our higher, purer motives” over our selfish ones. Kevin Simler and Robin Hanson describe the ways in which a number of these motives (competition, status seeking, self-protection, self-esteem, etc.) are expressed in various contexts: conversation, consumption, art, education, religion, and so forth.
They are, however, far from being the first to explicate such foibles of the human psyche. There is a vast literature, from ancient times to modern, that has already explored the same phenomena: one thinks of moments in the Old Testament (“the heart is deceitful above all things”), of Sir Walter Scott (“Oh what a tangled web we weave/When first we practice to deceive”), of Erasmus’s “In Praise of Folly” (“No man is wise at all times, or without his blind side”), of Charles Mackay’s “Extraordinary Popular Delusions and the Madness of Crowds,” or of just about anything by Freud. Nor should we exclude Thorstein Veblen’s notion of “conspicuous consumption” from this short exemplary list, as it anticipates Simler and Hanson’s chapter on consumption.
That Simler and Hanson have little new to say about our propensity for self-deception does not mean they can’t occasionally be interesting and even fun. Their chapter on medicine makes an excellent point: that much (too much) of our consumption of health care exceeds our need for it; that such consumption amounts to “conspicuous caring,” a kind of status seeking or signaling that is analogous to Veblen’s conspicuous consumption, with the same kind of wastefulness of resources. Their chapters on education and charity demonstrate that, as they put it, “many of our most cherished institutions—charities, corporations, hospitals, universities—serve covert agendas alongside their official ones. Because of this, we must take covert agendas into account when thinking about institutions, or risk radically misunderstanding them.”
I can think of an example: at my state university, the football and basketball head coaches make $2,475,000 and $2,200,000 in base salary, respectively, while their putative boss, the university president, makes a base salary of $800,000. The university’s mission statement makes no mention of either the football or the basketball program; instead it brags about research, learning, and career success, the usual academic benchmarks. Yet money speaks louder than words.
But otherwise, the authors have little new to say about our hidden motives, so why, then, have they written this book, and why has it been received so positively by reviewers and readers? Because it fits smoothly into our contemporary Zeitgeist, in which everything is explained (or re-explained) in terms of 1) digital technologies and 2) evolutionary theory (especially “fitness”). It’s as if these two paradigms supply us with the long-sought theory of everything and thereby relegate all that has come before to the dustbin of error and superstition. Consequently, everything, including our propensity for denying our own motives, must be explained as if it had never been explained before: as if, for the very first time in all of human history and thought, we have identified the sources of all of our traits and quirks.
Now, while the hyper-reductionist world view of the digirati is not directly mentioned in the book, nor explicitly appropriated as a supportive argument, it is nonetheless fundamental to the thesis of the authors, both of whom are full-fledged members of the geek collective. Hanson is an economist with a degree in physics, a devotee of AI and robotics, and the author of “The Age of Em: Work, Love, and Life When Robots Rule the Earth” (2016); he has arranged to have his brain cryogenically frozen, perhaps in hopes that in the not-so-distant future it will be one of the brains (the brain?) that will be uploaded into all those em robots that will soon take over the world. His co-author Kevin Simler (shown in his blog photo as a typical Silicon Valley boy in T-shirt, hoodie, and little wire-rimmed glasses) has studied philosophy and computer science (not as contradictory as one might assume) and has had an extensive career with start-ups. Both separately and in collusion, they view the world through algorithm-tinted glasses. Certain background assumptions, unspoken in this book at least, nevertheless shape their conclusions.
One of the most important of these is the notion that the brain is a computer that runs on programs and apps (“modules,” etc.): consciousness is the screen behind which these programs run silently, without our awareness, invisible but determinative. Which brings us to the second of their paradigms, evolution, especially the notion that unconscious instincts, operating behind our conscious minds, are not only sufficient to explain all human activity but have only one goal: to win the mating game. And while the authors may be experts in digitization, they are rank amateurs when it comes to supposedly Darwinian explanations of human behavior (neither has a background in neuroscience, evolutionary biology, or anthropology, the fields most relevant to their claims).
This lack is most evident in their chapter on art, in which they claim that art is nothing more than a means of signaling fitness, i.e., that the production of art signals to prospective mates that the artist has the vigor and strength to waste on nonproductive or impractical activities (“survival surplus”). They provide only scenarios (made-up illustrative fairy tales), tired analogies (bowerbirds), and modals (words such as “may” or “may have,” as in “Art may have arisen, originally, as a byproduct of other adaptations”). Note the provisional quality here: it “may have” (but maybe it didn’t) and “originally” (but maybe now it’s something else?). We have almost no evidence of how or why human beings first began making art; there is no evidence that making art has enhanced the reproductive success of any artist; indeed, many of our most famous artists had no offspring at all. Maybe some of them had more sex than average, but evolutionarily that doesn’t count if it resulted in no children (and grandchildren, etc.; Shakespeare had children, but he has no living descendants today).
It is true that art, or at least the collecting of art, can enhance a person’s social status, but it’s interesting to note that enhanced social status does not necessarily result in more descendants. Indeed, at least in our current world, poorer people tend to have more children than the rich, despite the fact that the poor do not have the wherewithal to purchase prestige works of art.
There is also the problem of ethnocentrism (including what we can call present-centrism): “One study, for example, found that consumers appreciate the same artwork less when they’re told it was made by multiple artists instead of a single artist . . .” (p. 194). These few words reveal a) that neither author has the faintest acquaintance with the history of art, either Western or world; b) that they are unaware that the idea of the individual artist we’re familiar with today is an historically recent development that even in the West did not exist before the Renaissance; c) that they are unaware that even in the Renaissance, the great artists whom we revere today did not do all the work on a canvas (for example) themselves but rather headed studios in which much of the grunt work (backgrounds, preliminary work, etc.) was done by apprentices (a modern example, of course, is Andy Warhol’s Factory); and d) that they are obviously unaware that in many cultures (including traditional Chinese culture) originality was not valued. Simler and Hanson’s theory ignores the vast majority of the world’s artists and arts and is therefore without merit.
Frankly, I don’t believe that the real issue for Simler and Hanson is evolutionary fitness; rather, it’s their bias towards the “practical” and against the “impractical” that’s in play here, as demonstrated by their chapter on education. Consider this passage: “In college we find a similar intolerance for impractical subjects. For example, more than 35 percent of college students major in subjects whose direct application is rare after school: communications, English, liberal arts, interdisciplinary studies, history, psychology, social sciences, and the visual and performing arts” (p. 228). Say what? Thirty-five percent of all students is a sizable chunk and hardly indicative of an “intolerance for impractical subjects.” And given their earlier argument that art enhances the fitness of its practitioners, one wonders why it’s included on this list of impractical subjects. Such an egregious failure to notice the illogic of their own arguments suggests that the authors have an unacknowledged agenda.
It’s an agenda that the authors themselves may not be fully aware of: by reducing the human mind to nothing more than the usually unconscious expression of instincts, they can convince themselves and their naïve readers that the mind is nothing but a network of programs and applications. Thus they can justify the fantasy that AI can duplicate and eventually replace the human mind altogether; they can envision a future in which minds (particularly their minds) can be uploaded to a computer or to multiple robot machines and thereby defy mortality (the law of entropy) and achieve that Faustian dream (nightmare?) of complete power and knowledge—at least for them, not for the masses. This is nothing but digital-age Social Darwinism. But as Jaron Lanier recently wrote, “Every time you believe in A.I. you are reducing your belief in human agency and value.”