Tag Archives: just-so story

“The Elephant in the Brain”: A Review

One thing I’ve learned from years of wide reading is that every text, whether fiction or nonfiction, article or book, consists of two levels: the subject matter and the agenda. The subject matter is, basically, the topic or explicit contents of the text (e.g., the presidency of Andrew Jackson), and the agenda is the real point (sometimes stated, often hidden) of the text (Jackson prefigures the populist nationalism of Donald Trump, with likely similar results). The “elephant in the brain” that constitutes the subject matter of this book is the largely unconscious work we do to deny (to ourselves and others) the hidden motives that drive our behaviors in our everyday lives—how, in fact, we “accentuate our higher, purer motives” over our selfish ones. Kevin Simler and Robin Hanson describe the ways in which a number of these motives (competition, status seeking, self-protection, self-esteem, etc.) are expressed in various contexts, such as conversations, consumption, art, education, religion, and so forth.

They are, however, far from being the first to explicate such foibles of the human psyche. There is a vast literature from ancient times to modern that has already explored the same phenomena—one thinks of moments in the Old Testament (“the heart is deceitful above all things”), of Sir Walter Scott (“Oh what a tangled web we weave/When first we practice to deceive”); or of Erasmus’s “In Praise of Folly” (“No man is wise at all times, or without his blind side.”), Charles Mackay’s “Extraordinary Popular Delusions and the Madness of Crowds,” or just about anything by Freud. Nor should we exclude Thorstein Veblen’s “Conspicuous Consumption” from this short exemplary list, as his theory anticipates Simler and Hanson’s chapter on consumption.

That Simler and Hanson have little new to say about our propensity for self-deception does not mean they can’t occasionally be interesting and even fun. Their chapter on medicine makes an excellent point: that much (too much) of our consumption of health care exceeds our need for it; that such consumption amounts to “conspicuous caring,” a kind of status seeking or signaling that is analogous to Veblen’s conspicuous consumption, with the same kind of wastefulness of resources. Their chapters on education and charity demonstrate that, as they put it, “many of our most cherished institutions—charities, corporations, hospitals, universities—serve covert agendas alongside their official ones. Because of this, we must take covert agendas into account when thinking about institutions, or risk radically misunderstanding them.”

I can think of an example: At my state university, the football and basketball head coaches make $2,475,000 and $2,200,000 (in base salary) respectively, while their putative boss, the university president, makes a base salary of $800,000. The university’s mission statement makes no mention of either the football or basketball program; instead it brags about research, learning, career success—the usual academic checkpoints. Yet money speaks louder than words.

But otherwise, the authors have little new to say about our hidden motives, so why then have they written this book, and why has it been received so positively by reviewers and readers? Because it fits smoothly into our contemporary Zeitgeist, in which everything is explained (or re-explained) in terms of 1) digital technologies and 2) evolutionary theory (especially “fitness”). It’s as if these two paradigms supply us with the long-sought theory of everything and thereby relegate all that has come before to the dustbin of error and superstition; consequently, everything, including our propensity for denying our own motives, must be explained as if it had never been explained before—as if, for the very first time in all of human history and thought, we have identified the sources of all of our traits and quirks.

Now, while the hyper-reductionist world view of the digirati is not directly mentioned in the book, nor explicitly appropriated as a supportive argument, it is nonetheless fundamental to the thesis of the authors, both of whom are full-fledged members of the geek collective: Hanson is an economist with a degree in physics, a devotee of AI and robotics, and the author of “The Age of Em: Work, Love, and Life When Robots Rule the Earth” (2016); he has arranged to have his brain cryogenically frozen, perhaps in hopes that in the not-so-distant future it will be one of the brains (the brain?) downloaded into all those em robots that will soon take over the world. His co-author Kevin Simler (shown in his blog photo as a typical Silicon Valley boy in tee shirt, hoodie, and little wire-rimmed glasses) has studied philosophy and computer science (not as contradictory a pairing as one might assume) and has had an extensive career with start-ups. Both separately and in collusion, they view the world through algorithm-tinted glasses. There are certain background assumptions that, while unspoken in this book at least, nevertheless shape their conclusions.

One of the most important of these is the notion that the brain is a computer that runs on programs and apps (“modules,” etc.)—consciousness is the screen behind which these programs are silently running without our awareness, invisible but determinative. Which brings us to the second of their paradigms, evolution—especially the notion that unconscious instincts, running determinatively behind our conscious minds, are not only sufficient to explain all of human activity but have only one goal, to win the mating game. And while they may be expert in digitization, they are rank amateurs when dealing with supposedly Darwinian explanations of human behavior (neither has a background in neuroscience, evolutionary biology, or anthropology, the fields most relevant to their claims).

This lack is most evident in their chapter on art, which they claim is nothing more than a means of signaling fitness, i.e., that the production of art signals to prospective mates that the artist has the vigor and strength to waste on nonproductive or impractical activities (“survival surplus”). They provide only scenarios (made-up illustrative fairy tales), tired analogies (bowerbirds), and modals (words such as “may” or “may have,” as in “Art may have arisen, originally, as a byproduct of other adaptations”; note the provisional quality: it “may have” arisen [but maybe it didn’t], “originally” [but maybe now it’s something else?]). We have almost no evidence of how or why human beings first began making art; there is no evidence that the making of art has enhanced the reproductive success of any artist; indeed, many of our most famous artists had no offspring at all. Maybe some of them had more sex than average, but evolutionarily that doesn’t count if it resulted in no children (and grandchildren, etc.; Shakespeare had children, but he has no living descendants today).

It is true that art, or at least the collecting of art, can enhance a person’s social status, but it’s interesting to note that enhanced social status does not necessarily result in more descendants. Indeed, at least in our current world, poorer people tend to have more children than the rich, despite the fact that the poor do not have the wherewithal to purchase prestige works of art.

There is also the problem of ethnocentrism (including what we can call present-centrism): “One study, for example, found that consumers appreciate the same artwork less when they’re told it was made by multiple artists instead of a single artist . . .” (p. 194). These few words reveal a) that neither author has the faintest acquaintance with the history of art, either Western or world; b) that they are unaware that the idea of the individual artist we’re familiar with today is a historically recent development that even in the West did not exist before the Renaissance; c) that they are unaware that even in the Renaissance, the great artists whom we revere today did not do all the work on a canvas (for example) themselves but rather headed studios in which much of the grunt work (backgrounds, preliminary work, etc.) was done by apprentices (a modern example, of course, is Andy Warhol’s Factory); and d) that they are also unaware that in many cultures (including traditional Chinese) originality was not valued. Simler and Hanson’s theory ignores the vast majority of artists and arts of the world and is therefore without merit.

Frankly, I don’t believe that the real issue for Simler and Hanson is evolutionary fitness; rather, it’s their bias towards the “practical” and against the “impractical” that’s at play here, as demonstrated by their chapter on education. Consider this passage: “In college we find a similar intolerance for impractical subjects. For example, more than 35 percent of college students major in subjects whose direct application is rare after school: communications, English, liberal arts, interdisciplinary studies, history, psychology, social sciences, and the visual and performing arts” (p. 228). Say what? Thirty-five percent of all students is a sizable chunk and hardly indicative of an “intolerance for impractical subjects.” And given their earlier argument that art enhances the fitness of its practitioners, one wonders why it’s included on this list of impractical subjects. Such an egregious failure to notice the illogic of their own arguments suggests that the authors have an unacknowledged agenda.

It’s an agenda that the authors themselves may not be fully aware of: by reducing the human mind to nothing more than the usually unconscious expression of instincts, they can convince themselves and their naïve readers that the mind can be reduced to a network of programs and applications. Thus they can justify the fantasy that AI can duplicate and eventually replace the human mind altogether; they can envision a future in which minds (particularly their minds) can be uploaded to a computer or to multiple robot machines and thereby defy mortality (the law of entropy) and achieve that Faustian dream (nightmare?) of complete power and knowledge—at least for them, not for the masses. This is nothing but digital-age Social Darwinism. But as Jaron Lanier recently wrote, “Every time you believe in A.I. you are reducing your belief in human agency and value.”


Evolutionary Just-So Story, Again!

So yet again we have a story of evolution that seems to say that evolution works like God, i.e., that it indulges in design. I am referring to an article recently published in the New York Times reporting on research into why the squid lost its shell. The phrasing of the article will, in the minds of the naive, create the impression that the squid lost its shell in order to move faster to escape its predators (shells being rather heavy and cumbersome). “The evolutionary pressures favored being nimble over being armored, and cephalopods started to lose their shells.” This seems to be an innocent enough statement, but its construction implies that the pressure to become nimble preceded and caused the loss of the shells.

That is design. It may not be God design, though one could easily make that leap, but it is design nonetheless.

Oh, if only they would read Lucretius!

Here’s what really happened: Originally, “squids” were shelled creatures; generation after generation was shelled. Occasionally, a genetic mutation or defect (call it what you will) resulted in progeny lacking shells. No doubt, most of these shell-less individuals quickly died or were eaten and left no progeny; but at some point, some of them survived (perhaps thanks to another mutation that enabled them to move more quickly than their shelled relatives) and reproduced, eventually giving rise to a new class of creatures: squids, octopuses, etc. In other words, the change occurred first, without intention or purpose, and the benefit followed. The change did not occur in order to confer the benefit. It just happened.

Of course, such changes often occur gradually, say by shrinking the shell over many generations, in what some have called “path dependency” (i.e., evolution follows an already established path and does not go backwards; in other words, it doesn’t restore the shell to creatures who have lost it). But the principle remains the same: first the change, and then, if it happens to confer an advantage, it sticks.

As Lucretius said, humans did not develop opposable thumbs in order to grasp tools; we can grasp tools because we have opposable thumbs.

Nicholas Wade’s Troublesome Inheritance: A Critical Review

In his latest book, Nicholas Wade, a well-known science journalist, argues three points: 1) that human races are real, 2) that differences in human behavior, and likely cognition, are genetically based, and 3) that there are likely subtle but nonetheless crucial behavioral differences among races which are also genetically based. Wade is well aware that these are extremely controversial ideas, that they overturn politically correct notions that human behavior and social structures are purely cultural, yet he is confident that developments in genetics support his view.


Marriage vs Mating

Yet Another Just-So Story

What is marriage? Ask an American of strong religious beliefs, and he is likely to say that it is a union between one man and one woman sanctioned by God. Ask more secular individuals, and they are likely to say that it is a civil contract between two individuals, committed to each other by love, but of practical importance in terms of legal and tax benefits, etc. Ask some biologists, and they will say that monogamous marriage is an evolutionary adaptation that increased the survival rate of helpless human infants, guaranteed to the father that the children produced by his wife were indeed his, and/or facilitated the development of human intelligence—or whatever, as long as the explanation can be stated in terms of natural selection. So at least is the impression one receives from a recent article in the New York Times (titled, somewhat misleadingly, since polygamy is discussed, “Monogamy’s Boost to Human Evolution”—but at least the title does neatly summarize the bias).

Ask an historian, a sociologist, or an anthropologist, and any one of them is likely to say that marriage practices vary over time and among cultures, from polygamy to monogamy, and they are also likely to mention that marriage varies by class. In warrior societies, practices ranged from polygamy among the warrior elite (including kings and nobility, whose avocation was warfare and who could have both many wives and concubines) to monogamy among the commoners; polygamy is common in societies in which there is a high mortality rate among young men (war, hunting mishaps, etc.), whereas monogamy is more common in societies in which the balance of adult males to females is more even, as well as in more egalitarian societies. Generally speaking, marriages were contracted for social purposes: to cement alliances, to protect inherited property, or to synchronize labor.

Marrying for love is a rather recent innovation and is characteristic of modern individualistic (and capitalist) countries, although monogamy has long been legitimized by Christianity, in part because of its dread of sexual license. Some people get around the stricture by keeping separate, unofficial additional spouses—for example Charles Lindbergh, who had children in long-term relationships with three women other than his wife. Contemporary Americans seem to be practicing serial monogamy (divorce and remarriage) as well as unofficial and often temporary arrangements. In all cases, there has always been a whole lot of cheatin’ goin’ on. Then there is the added element of prostitution, including streetwalkers and courtesans, for which even the cleverest evolutionary biologist would have a hard time providing an evolutionary explanation. All of which suggests that marriage is different from mating. The latter is strictly biological—up until very recent times, there has been only one way to produce children, the sexual congress of a fertile man with a fertile woman, and this one way is unaffected by social customs. That is, socially sanctioned monogamy does not prevent either partner from producing a child with a person other than his/her spouse; eggs and sperm recognize no such boundaries.
It therefore seems both pointless and fruitless to try to concoct explanations for marriage customs and practices from natural selection. At some unknown point in the remote human past, people began creating nonbiological ways of organizing their lives. It’s what our big brains allow us to do. Mating may be in our DNA; marriage, however, is not.

Apart from the waste of time and grant money entailed in the pursuit of these evolutionary Just-So stories, the misguided notion, bordering on an ideology, that everything humans do can be explained solely in biological evolutionary terms, by a module in the brain, by DNA (i.e., instinct), denigrates other modes of knowledge that actually produce better explanations. We can learn more about marriage from historians and anthropologists than we can from biologists.

Paleolithic Fantasies

We live in an age like all previous ages, one in which thinking people assess the state of the world, find it wanting, and consequently seek a better, even perfect, way of life. Such people tend to divide roughly into those who seek their utopias in a vision of the future (today: think digital prophets, genetically modified crops) and those who seek a return to a golden past when human beings were in perfect harmony with nature (past: think Eden and the Noble Savage; today: think organic farming, artisanal cheese). Interestingly, one finds both types among both liberals and conservatives, though usually with different emphases (liberals tend to go for the organic, conservatives for traditional morality, while both seem to think that digital technology holds great promise for the future, either through greater community or better security). And advocates of both sides seem to appeal, either implicitly or explicitly, to “human nature” as the ultimate measure of the perfect way of life (using either Darwin or the Bible as the validating text). Both assume that, amid all the changes of outward circumstance, human nature has remained unchanged through time.

Marlene Zuk, author of Paleofantasy: What Evolution Really Tells Us about Sex, Diet, and How We Live (W. W. Norton, 2013), addresses the myth, the just-so story, of a fixed human nature from an evolutionary perspective. An evolutionary biologist currently associated with the University of Minnesota, Zuk has conducted extensive field research, particularly on crickets, and is the author of numerous specialized articles and several popular books on evolutionary biology, behavioral biology, and sexual selection. She is therefore particularly well-qualified to demolish popular myths about human evolution, which she does with clarity and wit in this new book. (Her wit is best illustrated by her statement that “After all, nothing says evolution like a brisk round of the plague.”) Her immediate targets here are evo-myths about diet and health, particularly those that base their tenets on the very false idea that contemporary human beings are Paleolithic creatures uncomfortably and unhealthily stuck in an unnatural modern industrial environment. In other words, the natural man, the Noble Savage, the Eden which we have lost, is to be found in the lifestyles of early Stone Age humans prior to the development of agriculture (the true Original Sin) and settled life, that is prior to about 10,000 years ago. Supposedly, humans of the Paleolithic lived in that much admired perfect harmony with nature, and to restore our health and souls, we need to retrieve that lifestyle and apply it to our urbanized lives today.

Alas, like all utopian dreams, whether of past or future, what Zuk calls paleofantasies are exactly that, fantasies, and in the course of demonstrating just how fantastic they are, she treats her readers to a particularly clear and nonideological series of lessons on what evolution really is. And what it is not: it is not purposeful, and it is not perfect or ever perfected. Thus, she demolishes the notion of the Noble Savage (by whatever name) when she writes that there is no utopian moment of perfect synchronicity between human beings and their environment. Both organisms and environments constantly change (and both humans and environments certainly did over the 2.6 million years of the Paleolithic period), and to think that today’s human beings are unchanged from those of even a mere 10,000 years ago “misses the real lessons of evolution” and “is specious” (p. 59). And lest we think that evolutionary change moves in some kind of logical direction, she writes that “evolution is more of a drunkard’s walk than a purposeful path” (p. 78).

Evolution never intends anything. It is a Rube Goldberg contraption, or rather the creatures it throws up are, because, rather than aiming at or achieving perfection, it measures success only by reproductive success. “If something works well enough for the moment, at least long enough for its bearer to reproduce, that’s enough for evolution” (p. 8). When you think about it, this is actually an excellent measure, simply because “perfection” is purely a human concept, and no one can agree on just exactly what perfection is. Should we eat only meat, because, as some paleo diet buffs claim, that’s what our Pleistocene ancestors ate? Or should we eat only raw vegetables and fruit, because, as other buffs claim, those were the exclusive menu items of our ideal past? Should we eschew grains, because they are cultivated and therefore not natural? Just exactly what would the “perfect” diet for human beings consist of?

According to Zuk, it depends. As she shows, various populations of human beings have evolved to utilize foods that our hunter-gatherer ancestors would not have been able to eat. For example, adults of some populations can digest milk, while the majority of human adults cannot (lactose intolerance). Certainly, the latter should avoid dairy, but the former can consume dairy products pretty much as they please. As far as the deleterious effects of agriculture are concerned, yes, it appears to be true that initially human health and well-being declined after people began cultivating grain crops and living in permanent settlements, but Zuk points out that it did not take all that long for this disadvantage to disappear; and as we know, agricultural societies grew larger and faster than foraging societies (reproductive success again being the measure of evolutionary success). Certainly some kind of genetic mutations could have occurred that conferred a greater ability to prosper on a diet high in grains; but it is also possible that as people improved their knowledge of cultivation, selectively improved the quality of their crops, and exploited the advantages of settlements in facilitating trade, they overcame the initial disadvantages of agriculture. But whatever the case, it’s important to keep in mind that the early agricultural peoples themselves apparently thought that the advantages of agriculture outweighed its disadvantages—why else persist in farming?

An analogous point could be made about our modernity: If modern urban life is so bad for us, so unnatural and maladaptive, why did we develop it in the first place? If we are really, as some do argue, merely products of biological evolution like any other animal and, as some do argue, our consciousness is merely an illusion, how did we “evolve” a state of affairs so contrary to our biological being? And why do we cling to it so tenaciously? If it were really so horrible, wouldn’t we be fleeing the city for the more natural environments of the northern woods or western prairies (the United States’ closest approximation of the Edenic savannahs)? The fact that we do not suggests that urban industrialized life may not be so bad for humans after all. (How bad it may be for other organisms is a different question.)

Whatever the sources of some people’s dissatisfaction with modern human life, a mismatch between our Paleolithic natures and modernity is not one of them, and the appeal to evolution is, as already noted, based on a misconception of what evolution is. A major aspect of that misconception is an over-emphasis on natural selection. But as Zuk points out, “it is important to distinguish between two concepts that are sometimes—incorrectly—used interchangeably, evolution and natural selection. At its core, evolution simply means a change in the frequency of a particular gene or genes in a population” (p. 251). The mechanisms by which these gene frequency changes occur include not only natural selection, but genetic drift, gene flow, and mutation. “Genetic drift is the alteration of gene frequencies through chance events” (p. 251). “Gene flow is simply the movement of individuals and their genes from place to place, an activity that can itself alter gene frequencies and drive evolution” (p. 252). “The final way that evolution sans natural selection can occur is via those mutations, changes in genes that are the result of environmental or internal hiccups that are then passed on to offspring” (p. 252). In order to see whether or not evolution is occurring in humans today, one does not look at superficially visible traits but at changes in gene frequency among human populations.

Another all too common misconception is that “evolution is progressing to a goal” (p. 252), what can be called the teleological error. Even well-known and well-informed people believe that evolution is goal directed. For example, Michael Shermer, the editor of The Skeptic magazine and the author of a number of pro-evolution books, writes in The Science of Good and Evil that “Evolutionary biologists are also interested in ultimate causes—the final cause (in an Aristotelian sense) or end purpose (in a teleological sense) of a structure or behavior” (p. 8); he then states that “natural selection is the primary driving force of evolution” (p. 9). In contrast, Zuk reiterates throughout her book that “everything about evolution is unintentional” (p. 223), that “all of evolution’s consequences are unintended, and there never are any maps” designating a foreordained destination—and she is in fact an evolutionary biologist!

A good example of an unintentional evolutionary consequence is resistance to HIV, the retrovirus that causes AIDS. As it happens, some individuals are resistant or immune to the retrovirus, but not because evolution or natural selection intended them to be so. Centuries ago, bubonic plague swept through Europe; millions died of this highly infectious disease, but some few people did not get the disease despite having been exposed to it. No doubt they thought God had spared them for some divine reason. Centuries later, some of their descendants were exposed to HIV and did not become ill. Did God plan that far ahead to spare these few lucky individuals? Did evolution? No. A random mutation happened to render human cells unreadable to the plague bacterium (or, as Zuk suggests is more likely, unreadable to the smallpox virus); consequently, the pathogen could not enter the cells and wreak its havoc. The mutation would have had to have occurred before the introduction of the disease into the lucky few’s environment (there would not have been enough time for it to occur and proliferate after the disease’s introduction), and it may have had no prior function, good or bad. As chance would have it, centuries later, the same mutation also made the owner’s cells unreadable to the AIDS virus, thus rendering him or her immune to HIV—quite by chance. Pace Lamarck, perhaps we can say that it is not characteristics that are acquired, but functions. The gene mutation that confers HIV immunity has, after many generations, finally acquired a function.

Why then do organisms seem so perfectly adapted to their environments? Perhaps they are not so perfectly adapted as they appear to human eyes; more importantly, since environments change, organisms must change as well, but perhaps if they were too perfectly adapted (each and every individual of the species therefore being identical), they would rather quickly become imperfectly adapted to even small changes in their environment. Perhaps, then, perfection is an extinction trap rather than a desirable goal.

Empathy Imperiled: A Review

One can to some extent understand the current enthusiasm of conservatives for Darwinian deterministic explanations of human behavior, inasmuch as determinism is compatible with the views of human nature already held by conservatives. Even religious conservatives, those who go so far as to deny evolution per se, subscribe to a deterministic view. The Edenic fall, the apocalyptic view of history, etc., are elements in God’s overarching plan, and human free will is largely limited to submitting to God’s will or facing the dire consequences. Secular conservatives hold that Evolution is the grand plan (even though they usually deny teleology for appearances’ sake) and that we should submit to the inevitabilities of our genes and our Pleistocene natures. But it is puzzling that a considerable number of scholars in the humanities and social sciences submit to Darwinian explanations of art, literature, philosophy, etc.; perhaps they do so in a desperate attempt to retain “relevance” in an age when technology, science, and the MBA have the hegemonic edge.

It is especially surprising when a writer of definitely left-wing political beliefs attempts to recruit biological evolution to the socialist or communitarian cause. Such is the case, sadly, with Gary Olson’s book Empathy Imperiled: Capitalism, Culture, and the Brain (Springer, 2013). Olson is a professor of political science at Moravian College in Bethlehem, Pennsylvania, and active in liberal causes. In this book, he explores a two-part thesis: the first is that mirror neurons in the brain hardwire us for empathy; the second is that the culture of capitalism thwarts this natural empathy in favor of selfishness.

Why is his first point important to his second point? According to Olson, that we (and at least some other animals) have mirror neurons has been proven by science, which in turn provides support for the idea that human beings are naturally (i.e., biologically) empathetic. It is not biology or our evolutionary history that makes us divisive and driven by selfishness and enmity; rather, culture, particularly capitalist culture, has thwarted this natural trait. However, while the existence of mirror neurons in macaques appears to be well established, their existence in human beings is not. Nor is it at all certain that mirror neurons are the source of empathy. They seem instead to mirror others’ motor movements, such that when a macaque sees another macaque pick up a peanut and put it in its mouth, the first macaque can imitate that action, but it is a long way from motor imitation to empathy. But by means of a non sequitur, Olson evades the problem: “The monkey’s neurons were ‘mirroring’ the activity she was observing, suggesting she was responding to the experience of another, such as when we experience empathy for someone else’s circumstances” (p. 21). As in all non sequiturs, there is some verbal sleight of hand in this sentence: from mirroring an activity (outwardly visible) to mirroring an experience (inward and subjective), and then the leap from a monkey mirroring/responding to another monkey’s actions to a human being actually feeling with another human being (and what “circumstances” are implied here?). No explanation for this leap from an activity to a subjective state is provided.

It is worth pointing out here that complex animals like macaques, chimpanzees, and humans do not consist of a single behavioral trait. Even if mirror neurons do exist in monkeys or humans, and even if we are willing to make the leap of faith that mirror neurons hardwire us for empathy, empathy is not our only behavioral trait; it can, quite naturally rather than culturally, be overridden by other traits more appropriate to a particular situation or circumstance. Thus a person might be empathetic one day and jealous the next, or understanding and helpful to one person and belligerent to another. None of us would hurt a fly—until the situation called for a fly swatter.

Perhaps “empathy” is a poor word, anyway. The observant macaque might use its ability to “mirror” another’s actions by stealing the peanuts; a human being who can “feel with” another person might use that to manipulate and outwit. Merely “mirroring” does not guarantee virtuous cooperation.

There are equally damaging inadequacies with Olson’s development of the second part of his thesis, that capitalism thwarts our natural empathy. He writes that “capitalism is by its very nature competitive and exploitive, not communal and empathetic except to the degree that empathy can enhance profitability” (p. 25). Well, true, at least to some extent. But is this true only of capitalism? As a leftist, Olson seems to think that it is. But Olson fails to show that capitalism is more destructive of empathy than other actual (rather than ideal) economic systems. To do so, some comparisons (other than to Cuba) would be necessary. For example, given the endemic slavery of the Roman Empire, which was not capitalist, surely we can say that Rome was destructive of empathy. Indeed, a major motive for official Roman antagonism to early Christianity was precisely its encouragement of empathy, particularly for the poor, the oppressed, and the enslaved. Ancient Greece, despite Athens’ reputation as the birthplace of democracy, also depended on slavery and denied citizen status to everyone except free-born, native-born males (i.e., not “foreigners,” women, slaves, etc.). It is the ancient Greeks who gave us the word barbarian, a pejorative for the “them” of the us vs. them dichotomy. In the Americas, aside from the Aztecs and Maya (human sacrifice, fierce warfare), there were the Iroquois, who made territorial war against their neighbors, and slavery was also practiced by numerous Indian tribes. While many sins have been committed under capitalism, so have they under all other actual economic systems.

On the other hand, some ancient sins withered away under capitalism. Chattel slavery was abolished after capitalism was established, and the vote has been extended to all adults, men and women alike, of whatever class. The various products of industrial/scientific medicine have eliminated or vastly reduced the ravages of infectious diseases, to the point where infant and child mortality has gone from being a commonplace to an exception. Smallpox, the disease that many historians estimate killed as much as 90% of the Native American population after the arrival of Europeans, has been eradicated. These examples are not meant to absolve capitalism of its sins, but to demonstrate that all political and economic systems, just like the human beings who create and sustain them, are complex mixtures and degrees of good, bad, and indifferent. Capitalism may have run its course and may, through the usual difficult process that attends major historical shifts, be replaced by something better suited to our new globalized, overheated world, but I doubt that that new system will be as morally exemplary as many dream of.

In my opinion, mirror neurons, neuroscience, genetics, etc., add little of interest or usefulness to issues of morality. In Olson’s book, the best passages are not those which unsuccessfully attempt to recruit mirror neurons to moral purposes but those which explore the profound words of Jesus (e.g., the parable of the Good Samaritan) and Martin Luther King, Jr. (e.g., King’s interpretation and application of that parable). Such wisdom does not require a pseudoscientific gloss.

Boehm’s “Social Selection”

Christopher Boehm’s book Moral Origins: The Evolution of Virtue, Altruism, and Shame (Basic Books, 2012) is yet another sad example of the futility of the widespread hope that Neo-Darwinism, as overextended by evolutionary psychology and sociobiology, can ever be a theory of everything, particularly a theory that explains modern human behavior and values. It is not science. It is an ideology, or perhaps merely a hope, dressed up in a sloppy imitation of science.

Boehm’s thesis is that human moral values, the virtue, altruism, and shame of his subtitle, evolved through a process of what he calls “social selection,” which can be defined as the selecting out of socially uncooperative individuals (whom Boehm equates with psychopaths) and the selecting in of cooperative ones. Lengthy as the book is (at 362 pages of text), with its elaborate arguments and numerous examples, Boehm fails to support his thesis with anything more than supposition and false analogies.

First let’s consider what social selection would have to do in order to affect the evolution of human beings:

1) It would require a species-wide, concerted effort over a great span of time to define, identify, and eliminate socially uncooperative individuals (psychopaths and free riders).

2) In order to affect the gene pool, undesirable individuals would have to be identified very early in life, before they had the chance to reproduce. Killing the parent without killing the child does not eliminate the parent’s genes.

3) The criteria for determining whom to eliminate would not only have to be clear but consistent over many generations. Any change in the standards midstream would ruin the whole scheme. Yet any historian can tell you that standards have changed over time, sometimes quite sharply.

There is no evidence that any of this obtained at any time in human history or prehistory. Nor is there evidence that, had it occurred, it would have had a significant impact on human evolution. Prior to modern medicine and germ theory, infant and child mortality, not to mention the plagues and epidemics that afflicted adults as well, would have had an impact many times that of social selection, swamping its proportionally infinitesimal effects.
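The point can be made concrete with a toy population-genetics sketch (my own illustration, not anything from Boehm's book, and all parameters are hypothetical). In a standard Wright-Fisher model, a hypothetical "uncooperative" allele penalized by a weak selection coefficient routinely survives for many generations in a small band, because random mortality (genetic drift) swamps weak selection; only implausibly strong selection removes it reliably:

```python
import random

def wright_fisher(pop_size, s, p0, generations, rng):
    """Track the frequency of a disfavored allele (relative fitness 1 - s)
    in a haploid Wright-Fisher population; return its final frequency."""
    p = p0
    for _ in range(generations):
        # Selection: the disfavored allele contributes with weight (1 - s).
        w = p * (1 - s)
        p_sel = w / (w + (1 - p))
        # Drift: the next generation is a random (binomial) sample.
        count = sum(rng.random() < p_sel for _ in range(pop_size))
        p = count / pop_size
        if p in (0.0, 1.0):  # allele lost or fixed
            break
    return p

rng = random.Random(42)
# Weak "social selection" (s = 0.01) against an allele at 5% frequency
# in a band of 100, over 50 generations, repeated 1000 times:
survived_weak = sum(wright_fisher(100, 0.01, 0.05, 50, rng) > 0
                    for _ in range(1000))
# The same scenario under implausibly strong selection (s = 0.5):
survived_strong = sum(wright_fisher(100, 0.5, 0.05, 50, rng) > 0
                      for _ in range(1000))
print(survived_weak, survived_strong)
```

On a typical run, the weakly selected allele is still present in a substantial fraction of trials after 50 generations, while strong selection eliminates it almost every time. This is the sense in which a mild social penalty would be proportionally infinitesimal next to the other sources of mortality in play.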

In order to compensate for the serious lack of evidence, Boehm resorts to highly suppositional phrasing and subjunctive grammar. The following examples from pages 80 and 81 are illustrative of far too much of the book:

“prehistoric forager lifestyles could have generated distinctive types of social selection” (Perhaps they could have, but science wants to know if they actually did.)

These types of social selection “could have supported generosity outside the family at the level of genes.” (Again, did they actually do so?)

“were likely to have”
“could have become”
“It’s even possible . . . if”
“may have begun to differ”
“it’s likely that”
“would have been”
“would not have negated”
“they would have”
“were likely to have been”
“what could have happened”
“very likely”

And all these from just two pages! The careless or naïve reader might not notice this suppositional language and therefore mistakenly believe that Boehm is solidly establishing his argument; but the careful reader will find these to be crippling stumbling blocks.

There are also problems of self-contradiction. For example, Boehm seems to say that social selection eliminates psychopaths, but he then states that psychopaths constitute a significant percentage of modern-day populations. He claims that “People very significantly [psychopathic] probably number as high as one or more [vague: how many more?] out of several hundred in our total population,” which may not seem like many, but it is perhaps too many if humans began socially selecting these people out thousands of years ago. Other sources put the percentage as low as 2% and as high as 4%, though problems of definition no doubt affect the numbers. Whatever the true number, Boehm needs at the very least to clarify just how effective social selection really is.

The examples he pulls from contemporary forager societies also contradict his thesis. He cites the case of Cephu, a Mbuti Pygmy who, as recounted by Colin Turnbull, let his greed overcome his responsibility to the rest of his group. His colleagues caught him helping himself to more game than he was entitled to and subjected him to an intense course of humiliation—but they did not kill him or his progeny, and after he had adequately apologized and humbled himself, he was readmitted to the group. The story of Cephu, meant to illustrate the book’s thesis, actually proves its opposite: Cephu’s behavior was corrected not genetically but culturally.

Perhaps a comparison will clarify the problems with Boehm’s thesis. There is another form of behavior that one might think would have been socially eliminated fairly early in human evolution: male homosexuality. It is not, after all, conducive to reproductive survival, and it has often been punished, quite horribly in many instances, not only with shunning and shaming but with imprisonment, torture, and execution. Yet it has persisted through thousands of years, in part because homosexuals can camouflage themselves, but also because efforts to socially select the behavior out of existence have proven ineffectual—just as, I would argue, has social selection against socially uncooperative individuals. This analogy suggests that social selection is a very weak hook on which to hang the hope that biology and genetics can account for all human behavior in terms of “fitness.”

Finally, we should note that throughout history there have been people we would today label psychopaths who were quite successful leaders, often revered not only in their own times but long after their deaths. One thinks of Napoleon Bonaparte, killer of millions yet romanticized and admired by other millions, credited with the Napoleonic Code and sympathized with in his exile. One also thinks of Genghis Khan, the great butcher who, far from being selected out of the gene pool, is now thought to be the ancestor of as many as 16 million people living today. Of course, being a psychopathic great leader is no guarantee of reproductive success; Hitler, fortunately, had no children, and though he had nieces and nephews, none of them followed his example. While Boehm believes that psychopaths and free riders were (at least to some extent) weeded out of the gene pool through social selection, it may be that such individuals were selected for because, in ways that we 21st-century Americans may not comprehend, they were in fact socially useful. Perhaps they made good warriors, or built the great empires that encouraged the arts and sciences, or made their liege lords great fortunes (perhaps Cortez and Pizarro were useful psychopaths, enriching the Spanish treasury while taking all the risks). What we can say is that they have been, and are, legion.

Darwinists and Telos

In a recent article in the New York Times on the frequency of cross-species mating among birds, Dr. Lovett, a biologist at the Cornell Lab of Ornithology, is quoted as saying the following: “much of the entrancing diversity of the avian world, like colors, plumes, songs and bizarre mating displays, ‘has arisen in part because these differences help female birds avoid accidental matings with a male of a different species.’” Given that Cornell has one of the more important departments of ornithology, and given that Dr. Lovett is a director of its evolutionary biology program, we can take his statement as representing a mainstream and widely accepted view of evolution.

The “because” in his statement is troubling, in that it quietly implies what is seldom acknowledged: a teleological view of evolution, i.e., that traits arise in order to fulfill a prior need or to suit a purpose. That “because” sits between, and links, a trait (avian diversity) and a goal (avoiding accidental matings). This attributes intelligence to evolution, makes it goal-directed, and therefore teleological. Whatever kind of evolutionary theory this represents, it is not Darwinism, for the picture Darwin drew was nonteleological: accidental, contingent, and undirected.

One cannot say that a trait arose in order to accomplish anything, and one cannot say that diversity arose in order to enable females to distinguish between species. If a distinction arose, and if it happens to function in such a way, that is after the fact, not before. A better way of stating the case would thus be: “as various differences arose among bird species as a result of random mutation, genetic drift, and other factors, and as they became fixed through isolation and natural selection, female birds may have come to recognize males of their own species by their particular distinguishing traits. This would have the effect of preventing cross-species matings.”

Yes, that takes more words; but it is also not misleading. It does not imply a teleology behind the evolutionary process.

Wet Wrinkled Fingers and Teleodarwinism

Just how far teleodarwinists will go in devising seemingly logical reasons for every human characteristic is illustrated by the recent buzz, both on public radio’s Science Friday and in the pages of the New York Times, about wrinkled fingers. As anyone who has soaked in a tub has observed, when our fingers have been immersed in water for some time, they get all wrinkly. Well, some English scientists recently conducted experiments that purportedly show that wrinkly fingers get a better grip on wet objects (marbles) than dry fingers do. On this basis, Dr. Tom Smulders and colleagues opine that wrinkly fingers “may have evolved to give early humans an advantage in wet conditions.” And since our toes also get wrinkled when soaked, he says that “‘The actual origin of this may have been to help us move on all fours.’”

Notice that all Dr. Smulders et al. have to go on is a lab experiment in which some of the participants soaked their hands in warm water “for quite a long time,” as the Science Friday host put it, and then competed with dry-fingered participants to see how quickly they could transfer marbles from one container to another; the wrinkly-fingered won. On this basis, he hazards selective advantages, not for humans today, but for hundreds of thousands of years ago, if not millions (going back to some supposed ancestor who routinely moved on all fours?).

A selective advantage means that those few individuals who first exhibited this trait, as the result of some mutation, must have had a considerable survival advantage over their more numerous relatives who lacked it. But a new trait has to be advantageous in an environment where it makes a difference. So the question is: was wrinkled-when-wet sufficiently advantageous under natural, not laboratory, conditions, back when it first appeared, to warrant being selectively favored? Did those ancestors live in a sufficiently wet environment to make it likely that sticky wrinkled-when-wet fingers would prove useful? If there is no evidence of human evolution in such an environment, then this experiment provides no explanation of the phenomenon. We can go further and say that the good professor’s musings are mere fantasies, with no basis other than a habit of wishful thinking among teleodarwinists who believe that evolution is logical, and worse, that it is designed.

Look again at the wording: “evolved to give,” “to help us move,” and “for what purpose.” Purpose! The reason something is done; intention. Unless teleodarwinists are positing a god or an intelligent designer or panpsychism or pantheism or some other form of intentionality in the universe, there is no purpose. Wrinkly fingers may have no selective advantage at all. There must, in the living world, be room for non-purposive, non-selective traits, to explain the immense variability of organisms. Ironclad selectivism (teleodarwinism) is a straitjacket. It is also nonsense.

Is the Brain Hard Wired for Optimism?

Another new book in the worn-out sociobiological genre is on its way: Tali Sharot’s The Optimism Bias: A Tour of the Irrationally Positive Brain is due for release on June 14. We can get a foretaste of her thesis in an op-ed Sharot published on May 15 in the New York Times, titled “Major Delusions,” in which she takes the opportunity provided by the college graduation season to make a point not exactly relevant to completing college, an opportunity annually exploited by pundits, run-of-the-mill as well as elite. Sharot’s point is that “it is not commencement speeches or self-help books that make us hopeful. Recently, with the development of non-invasive brain imaging techniques, we have gathered evidence that suggests our brains are hard-wired to be unrealistically optimistic.”

This is a statement that begs to be unpacked. There is, first, the subtle use of the word “suggests,” which is open to interpretation: what does it mean when someone says that evidence “suggests” a stated conclusion? In particular, how strong is the assertion that follows “suggests”? And what does she mean by “unrealistically”? That is a value-laden word, not an objective or strictly scientific one, and clearly not quantifiable.

More serious, however, is the notion that, because a certain area of the brain shows activity when the person is thinking about a certain kind of topic, it therefore follows that “hard-wiring” is indicated, that the trait being studied, in this case optimism (in other cases, you name it), is genetic and the result of biological evolution.  As the Scientific American book club website puts it, Sharot “concludes by speculating that optimism was selected during evolution because positive expectations enhance the probability of survival.”  Speculating indeed!

At this point, prior to the publication of her book, one cannot know Sharot’s methodology or who the human subjects she and her colleagues studied were; the NYT column reads as if her conclusions apply universally to all human beings. The possibility of a cultural bias, however, is hinted at in an article published in New Scientist in October 2007, which states that at that time, at New York University, 15 volunteers were asked a series of questions and then asked to think about various scenarios, positive or negative, while lying in a brain scanner. The article did not state who these volunteers were, but since the experiments were conducted at an American university, one suspects that they were college students and likely Americans (rather than, say, foreign exchange students). If so (and I hope the book details how the experiments were conducted and gives adequate information about the subjects), the sample is too biased and probably too small to justify the very large conclusion that optimism was selected during evolution for any reason. For one thing, American culture valorizes optimism under all circumstances, and American students will have imbibed that cultural value since birth. Identical studies done on subjects in other countries and speaking other languages seem called for.

For another thing, since everything human beings do and think is done and thought in the brain, the fact that certain regions of the brain show activity when a subject is engaged in an activity, or merely thinking about something, does not lead to the conclusion that that activity or way of thinking is hard-wired. Nor, especially, does it mean that that activity or way of thinking evolved in order to enhance survival or fitness. All it means is that, yes, indeedy, the brain does it. Brain scans can only show us where activity occurs—they cannot tell us whether such activity is genetic rather than cultural, nor can they parse the relative contributions of the two.