Evolution and Theodicy

“Why is there evil in the world?” This question has been asked by philosophers and theologians and ordinary men and women for millennia. Today scientists, particularly evolutionary biologists, neuroscientists, and evolutionary and neuropsychologists, have joined the effort to explain evil: why do people indulge in violence, cheating, lies, harassment, and so on? There is no need here to itemize all the behaviors that can be labeled evil. What matters is the question of “why?”

The question “why is there evil in the world?” assumes the premise that evil is abnormal while good (however defined) is normal—the abnorm vs. the norm, if you will. Goodness is the natural state of man, the original condition, and evil is something imposed on or inserted into the world from some external, malevolent source. In Genesis, God created the world and pronounced it good; then Adam and Eve succumbed to the temptations of the Serpent and brought evil, and therefore death, into the world (thus death is a manifestation of evil, immortality the natural state of good). Unfortunately, the Bible does not adequately account for the existence of the Serpent or Satan, so it was left to Milton to fill in the story. Gnostics, Manicheans, and others posited the existence of two deities, one good and the other evil, and constructed a vision of a cosmic struggle between light and darkness that would culminate in the triumph of good—a concept that filtered into Christian eschatology. The fact that Christian tradition sees the end times as a restoration to a state of Adamic or Edenic innocence underscores the notion that goodness is the natural, default state of man and the cosmos.

Contemporary secular culture has not escaped this notion of the primeval innocence of man. It has simply relocated Eden to the African savannah. When mankind was still at the hunter-gatherer stage, so the story goes, people lived in naked or near-naked innocence; they lived in egalitarian peace with their fellows and in harmony with nature. Alas, with the invention of agriculture and the consequent development of cities and civilizations, egalitarianism gave way to greed, social hierarchies, war, imperialism, slavery, patriarchy, all the factors that cause people to engage in violence, oppression, materialism, and so on; further, these faults of civilization cause the oppressed to engage in violence, theft, slovenliness, and other sins. Laws and punishments and other means of control and suppression were instituted to keep the louts in their place. Many people believe that to restore the lost innocence of our hunter-gatherer origins, we must return to the land, re-engage with nature, adopt a paleo diet, restructure society according to matriarchal and/or socialist principles, and so on. Many people (some the same as, some different from, the back-to-nature theorists) envision a utopian future in which globalization, or digitization, or general good feeling will restore harmony and peace to the whole world.

Not too surprisingly, many scientists join in this vision of a secular peaceable kingdom. Not a few evolutionary biologists maintain that human beings are evolutionarily adapted to life on the savannah, not to life in massive cities, and that the decline in the health, intelligence, and height of our civilized ancestors can be blamed on the negative effects of a change in diet brought on by agriculture (too much grain, not enough wild meat, less variety of plants) and on the opportunities for diseases of various kinds to colonize human beings too closely crowded together in cities and too readily exposed to exotic pathogens spread along burgeoning trade routes. Crowding and competition lead to violent behaviors as well.

Thus, whether religious or secular, the explanations of evil generally boil down to this: that human beings are by nature good, and that evil is externally imposed on otherwise good people; and that if circumstances could be changed (through education, redistribution of wealth, exercise, diet, early childhood interventions, etc.), our natural goodness would reassert itself. Of course, there are some who believe that evil behavior has a genetic component, that certain mutations or genetic defects are to blame for psychopaths, rapists, and so on, but again these genetic defects are seen as abnormalities that could be managed by various eugenic interventions, from gene or hormone therapies to locking up excessively aggressive males to ensure they don’t breed and pass on their defects to future generations.

Thus it is that in general we are unable to shake off the belief that good is the norm and evil is the abnorm, whether we are religious or secular, scientists or philosophers, creationists or Darwinists. But if we take Darwinism seriously, we have to admit that “evil” is the norm and that “good” is the abnorm—nature is red in tooth and claw, and all of the evil that men and women do is also found in other organisms; in fact, we can say that the “evil” done by other organisms long precedes the evil that men do, and we can also say, based on archaeological and anthropological evidence, that men have been doing evil since the very beginning of the human line. In other words, there never was an Eden, never a Noble Savage, never a long-ago Golden Age from which we have fallen or declined—nor, therefore, is there any prospect of an imminent or future Utopia or Millennial Kingdom that will restore mankind to its true nature, because there is nothing to restore.

The evolutionary function of “evil” is summarized in the term “natural selection”: the process by which death winnows the less fit out of the chance to reproduce (natural selection works on the average, meaning of course that some who are fit die before they can reproduce and some of the unfit survive long enough to produce some offspring, but on average fitness is favored). Death, usually by violence (eat, and then be eaten), is necessary to the workings of Darwinian evolution. An example: when a lion or a pair of lions defeats an older pride male and takes over his pride, the newcomers kill the cubs of the defeated male, which has the effect of bringing the lionesses back into heat so that the new males can mate with them and produce their own offspring; their task is then to keep control of the pride long enough for their own cubs to reach reproductive maturity. Among lions, such infanticide raises no moral questions, whereas among humans it does.

There is no problem of evil but rather a problem of good: not why is there “evil” but rather why is there “good”? Why do human beings consider acts like infanticide to be morally evil while lions do not? Why do we have morality at all? I believe that morality is an invention, a creation of human thought, not an instinct. It is one of the most important creations of the human mind, at least as great as the usually cited examples of human creativity (art, literature, science, etc.), if not greater, considering how much harder won it is than those nearer competitors, and how much harder it is to maintain. Because “good” is not natural, it is always vulnerable to being overwhelmed by “evil,” which is natural: peace crumbles into war; restraint gives way to impulse; holism to particularism; agape to narcissism; love to lust; truth to lies; tolerance to hate. War, particularism, narcissism, etc., protect the self of the person and the tribe, one’s own gene pool so to speak, just as the lion kills his competitor’s cubs to ensure the survival of his own. We do not need to think very hard about doing evil; we do need to think hard about what is good and how to do it. Goodness is something that every generation must relearn and rethink, especially in times of great stress.

It appears that we are in such a time today. Various stressors (the economy, the climate, overpopulation and mass migrations, religious conflict amid the dregs of moribund empires) are pushing the relationship between the tribes and the whole out of balance, and the temptations are to put up walls, dig trenches, draw up battle lines, and find someone other than ourselves to blame for our dilemmas. A war of all against all is not totally out of the question, and it may be that such a war or wars will eventuate in a classic Darwinian victory for one group over another—but history (rather than evolution) tells us that such a victory is often less Darwinian than Pyrrhic.

The Credulous Skeptic: Michael Shermer’s The Moral Arc

The thesis of Michael Shermer’s new book is that morality is about the flourishing of sentient beings through the application of science and reason; in this, he follows in the footsteps of Steven Pinker’s The Better Angels of Our Nature and Sam Harris’s The Moral Landscape. All three consider science to be the arbiter of all things, and Pinker and Shermer, at least, argue that moral progress has been made only in the last 500 years or so, i.e., in the modern period of scientific discovery and advancement and the Enlightenment. As far as Shermer is concerned, all that preceded this period is superstition, ignorance, and darkness. Religion, and especially Christianity, is not only of no use to their project but in fact actually harmful.

Shermer argues that great strides in moral behavior have occurred since the Enlightenment, particularly in freedom and the abolition of slavery, women’s and gay rights, and animal rights, and that these advances are the direct result of a scientific and materialist worldview and an indirect result of the material prosperity afforded by the industrial revolution, capitalism, and democracy. He marshals an impressive quantity of evidence to support his claims, and any reader is likely to concede that he has made a compelling case. That is, if said reader is already sympathetic to Shermer’s libertarianism and as worshipful as he is of science, the Founding Fathers, and the Enlightenment.

Any less naïve reader, however, is likely to notice a number of problems with Shermer’s book, not least of which is its Western bias.  Although early in the book Shermer refers to the moral progress of our species, virtually all his evidence and examples come from Western Europe and the United States, as if “we” were all that needed to be said about the species in general—even though the populations of Europe and the United States taken together constitute a minority of the world population, and this minority status applies even when the white populations of Australia and New Zealand and elsewhere are factored in.  Thus it would seem that in order to establish that the “species” has made moral progress since the Age of Enlightenment, data from non-Western societies would have to be taken into account.  In other words, Shermer is guilty of a sampling bias.

Compounding this problem—or perhaps the source of the problem—are his naïve and simplistic views of history. Apparently, Shermer believes that the Enlightenment arose by spontaneous generation, for he dismisses everything that preceded it, most especially religion. Or rather, Christian religion, which apparently has no moral tradition or intellectual history worthy of the name (never mind the moral and intellectual traditions of any other religion, such as Buddhism or Hinduism). In fact, Shermer’s notion of Christianity appears to be limited to the version with which, as a former born-again Christian, he is most familiar: American evangelical fundamentalism.

For example, in his chapter on slavery, Shermer reads Paul’s letter to Philemon without any sense of the context in which Paul was writing and in fact explicitly dismisses contextual interpretations. In this, he is more fundamentalist than the fundamentalists. He is also guilty of presentism, i.e., the error of reading the past through the lens of the present: because “we” (Westerners) today abhor slavery, any moral person at any time in history, regardless of how long ago or of what culture or civilization, should also have explicitly abhorred slavery and openly called for its abolition. Never mind that at the time of Paul’s writings, Christians were a distinct minority in the Roman world, no more than a few thousand out of a total population of millions; Christianity had barely begun, and it would be centuries before it built up anything like a coherent intellectual tradition or widespread influence. Meanwhile, Paul lived under the Roman system, which was exploitative and brutal in a way we today would find extreme. Paul was certainly smart enough to know that a call for the abolition of slavery, coming from a small group of marginal people following a bizarre new religion, would have no impact on anything. Thus when he urges Philemon to treat his slave Onesimus as a brother, he is making as radical a statement as one could imagine, in context. He was not asking Philemon to do anything useless or dangerous—he was asking him to treat Onesimus as a fellow human being, a radical idea at the time; in doing so, Paul planted the seed that eventually grew into the Western ideal of the individual, an ideal at the center of Shermer’s own libertarianism.

Shermer has built a career on being a skeptic (even editing a magazine of that name), but his skepticism tends to be selective (in the same way, ironically, as a fundamentalist is selectively skeptical—of evolution, or climate change, i.e., of things he already rejects). This selective skepticism is displayed not only in his tendentious reading of Paul but also in his takedown of William Wilberforce, one of the most successful abolitionists, whom he characterizes as “pushy and overzealous” in his “moralizing” and as worrying “excessively about what other people were doing, especially if what they were doing involved pleasure [and] excess.” Meanwhile, Shermer’s Enlightenment heroes get a complete pass: he never mentions Locke’s rationalization of the taking of American Indian lands for white settlement (because the Indians did not have “property”), nor that Jefferson, whom Shermer hero-worships, and Washington owned slaves (which would certainly be relevant to his chapter on slavery), nor that Franklin favored using war dogs against Indians who too stubbornly resisted white theft of their real estate. One has to wonder: What has Shermer ever read of American history? Why does he apparently take his heroes at their written word, without investigating the context in which they wrote? Would that reveal that his idols have feet of clay? Why is he skeptical of Wilberforce but not of Jefferson?

When Shermer turns to the issue of animal rights, he seems at first to be on firmer ground. There certainly does seem to be a positive movement in the direction of extending at least the right not to suffer at human hands to domesticated animals. Animal welfare groups have proliferated, laws protecting animals from harm continue to be expanded, and more people are embracing a vegetarian lifestyle—in the United States today, somewhat more than 7 million people are vegetarians, which is an impressive number until one realizes that they represent only 3.2% of the adult American population (and compares unfavorably to India, where 42% of households are vegetarian). While Shermer does recognize the cruelties of industrialized meat production, he misses an opportunity to connect some dots. One of the effects of industrialization is to specialize the production of goods and services, and the effect of that is to remove the means by which things get done from the view of most people. In an urbanized world, for example, the making of a ham or a pound of ground beef is invisible to the typical supermarket shopper, who never has to raise an animal from birth, slaughter it, carve up its corpse, etc., so that a cook can look at a hunk of muscle from a steer and call it a beautiful piece of meat; our farming ancestors knew what that hunk of meat really was from firsthand experience.

Likewise, an urbanized population can keep cats and dogs as pets solely for their companionship, and can even confer on them the status of humans in fur, because dogs and cats (and to some extent horses) no longer have any utilitarian function; thus giving them the kind of moral status promoted by animal welfare groups and PETA is something we can afford. We don’t need them to aid in the hunt, keep down rodent pests, or herd our sheep anymore. Yet every year we kill 2.7 million unwanted dogs and cats, not to mention those that die from neglect, and while those numbers are down, one has to wonder how long we can afford to keep excess animals alive. The point here, however, is that the mistreatment of animals is removed from most Westerners’ daily lives.

As is violence to other human beings. As the nation-state grew, it appropriated violence to itself and diminished individual violence; justice has replaced revenge, most of the time. But we have also exported violence, outsourced it so to speak, so that most of our official military violence is committed overseas. Shermer might do well to read a few books on that subject: perhaps those by Chalmers Johnson or Andrew Bacevich, to name just two authors worth consulting. Or he might refresh his memory of our involvement in the death of Allende and our moral responsibility for the deaths caused by Pinochet, or of the number of Iraqi civilians who died in the second Iraq war (approximately 150,000). He could also consider the number of people who died as a result of the partitioning of India (about a million). And since Shermer claims to be speaking on behalf of the species, perhaps he should consider the deaths and oppression of people in, say, China or North Korea, or many other places in the non-Western world.

In some ways, we Westerners are like our pets—domesticated and cuddly.  But remove the luxuries of domestication and, like feral cats and dogs, we will quickly revert to our basic instincts, which will not be fluffy.  The “long peace” since World War II has not been all that peaceful, and certainly not, within historical time, very long.  As Peter Zeihan (The Accidental Superpower) and others are warning us, the post-World War II global order is fraying, and disorder and its symptoms (e.g., violence) could once again rise to the surface.

Mark Balaguer on Free Will

Into the fray of recent books on whether or not we humans have free will jumps Mark Balaguer’s sprightly book, one of a series on current vogue topics published by MIT Press, intended for a nonspecialist readership. In other words, Balaguer is not writing for other philosophers, but for you and me—and this audience may account for the book’s jauntiness, inasmuch as it appears that authors, and/or their editors and publishers, believe that the only way that the common man or woman can be induced to swallow and digest cogitations on the great questions is by talking to him or her as if he or she were a child. One sometimes imagines the author as rather like a sitcom daddy explaining mortality or sin as he tucks in his four-year-old daughter.

You can tell that I find that style annoying. But despite that, Balaguer does more or less accomplish his goal, which is basically to show that the anti-free will arguments advanced today by such luminaries of the genre as Daniel Wegner and Sam Harris don’t amount to much, primarily because they tend to assume what remains to be proven. Balaguer does an excellent job of exposing the holes in the determinist arguments, as well as going back to some of the studies that constitute the supposed proofs of those arguments, such as those of Benjamin Libet, and finding that they do not in fact offer such proof. I won’t go into his explanations, as the reader can do that easily enough on his own, especially since the book is short (a mere 126 pages of text) and free of arcane jargon.

Much as I welcome Balaguer’s poking of holes in the determinist hot-air balloon, I do have a bone to pick with his argument, namely that he seems to have a trivial notion of what free will is. Apparently, Balaguer thinks that free will is synonymous with consumer choice; his primary and repeated example is a scenario of someone entering an ice cream parlor and considering whether to order vanilla or chocolate ice cream. Even in his interesting distinction of a “torn decision,” i.e., one in which the options are equally appealing or equally unappealing, he repeats the chocolate vs. vanilla example. In this he is like Sam Harris, the determinist who uses coffee vs. tea as his example. And like Harris, he says nothing about the fact that free will is an ethical concept and as such has nothing to do with consumer choice—or with a lot of other kinds of common, everyday choices.

So let me offer a scenario in which the question of free will is truly interesting: Imagine that you are a young man in the antebellum South, say about 1830, and you are the sole heir of a large plantation on which cotton is grown with slave labor. Let’s say you’re about 30 years old and that for all those 30 years you have lived in a social and ideological environment in which slavery has been a natural and God-given institution. You therefore assume that slavery is good and that, when your father dies and you inherit the plantation, you will continue to use slave labor; you will also continue to buy and sell slaves as valuable commodities in their own right, just like the bales of cotton you sell in the markets of New Orleans. Further, you are aware that cotton is an important commodity, crucial to the manufacturing enterprises of the new factories of the northeast and England. You are justly proud (in your own estimation, as well as that of your social class) of the contributions the plantation system has made to the nation and civilization. Because of your background and experience, perhaps at this point you cannot be said to have free will when it comes to the question of whether or not slavery is morally just.

Then one day you learn of people called abolitionists, and perhaps quite by chance you come across a pamphlet decrying the practice of slavery, or perhaps you even hear a sermon by your local preacher demonizing abolitionists as atheists or some such thing, though in the course of that sermon the preacher happens to mention that these atheists presume to claim Biblical authority for their heretical beliefs. Maybe you rush to your copy of the Bible to prove them wrong, only to come across St. Paul’s assertion that there is neither slave nor freedman in Christ. Perhaps you ignore these hints that what you have always assumed to be true may not be; or perhaps they prick your conscience somewhat, enough to make you begin to look around you with slightly different eyes. Maybe you even become fraught, particularly when you consider that some of the younger slaves on the property are your half-siblings, or perhaps even your own offspring—how could my brother or my son be a slave while I am free? Who can say what nightmares these unwelcome but insistent thoughts engender? At any rate, for the first time in your life, you find that you cannot continue to be a slaveholder without considering the moral implications of the peculiar institution. For the first time, you must actually decide.

The above is certainly an example of what Balaguer calls a torn decision, but unlike chocolate vs. vanilla, it is a moral decision, and therefore profound rather than trivial. And it is in such moral dilemmas, when something that is taken for granted emerges into consciousness, that the concept of free will becomes meaningful. It would therefore seem that scientists, qua scientists, can’t be of much help in deciding whether or not we have free will. Try as they might (and some have, sort of), they cannot design laboratory experiments that address moral dilemmas—it is only in living, in the real world with other people and complex issues, that morality, and therefore free will, can exist. Of course, that does not mean that in exercising free will everyone will always make the morally right decision—we cannot know if the young man of the antebellum South will free his slaves or keep them (or even perhaps decide that the question is too difficult or costly to be answered, so he chooses to ignore it, likely leading to a lifetime of neuroses)—but we do know that once the question has risen into his consciousness, he has no choice but to choose.

Free will, then, operates when a situation rises into consciousness, creating a moral dilemma that can be resolved only by actively choosing a course of action or belief on the basis of moral principles rather than personal preference or benefit. There are dilemmas that superficially resemble moral dilemmas, such as whether or not I ought to lose weight or whether I should frequent museums rather than sports bars, but which are in fact matters of taste rather than ethics. Chocolate vs. vanilla is of the latter kind. To say that I ought to have the vanilla is very different from saying I ought not to own slaves, even though both statements use the same verb. It is disappointing that philosophers fail to make the distinction.

Ethics and Human Nature

It is an unhappy characteristic of our age that certain ignoramuses have been elevated to the ranks of “public intellectual,” a category which seems to consist of men and women who provide sweeping theories of everything, especially of everything they know nothing about. Into this category fall certain writers whose sweeping theory is that, prior to the Enlightenment, everyone lived in abject superstition and physical misery. With the Enlightenment, reason and science began the process of sweeping away misery and ignorance, clearing the field for the flowers of prosperity and knowledge. Such a sophomoric view of human history and thought has the virtue (in their minds only) of rendering it unnecessary for them to acquaint themselves with a deep and nuanced knowledge of the past, an error which permits them to attribute all that is good in human accomplishment to the age of science and all that is bad to a dark past best forgotten.

Nowhere is this more evident than in the recent fad for publishing books and articles claiming that science, particularly evolutionary science, provides the necessary and sufficient basis for ethics.

Why Determinism?

The eternal debate between determinism and free will has lately taken a new form. Determinism has been reincarnated in the shape of neuroscience, with attendant metaphors of computers, chemistry, machines, and Darwinism. Meanwhile, defenders of free will seem to have run out of arguments, particularly since, if they wish to be taken seriously, they dare not resort to a religious argument. That the debate is virtually eternal suggests that it is not finally resolvable; it could be said in fact that the two sides are arguing about different things, even though they often use the same terminology.

Determinism’s popularity is most clearly suggested by the sales figures for books on the subject and by the dominance of the view in popular science writing. Such books are widely reviewed, while those arguing for free will are neglected, especially by the mainstream press.

The question, then, is not whether we have free will, or whether we are wholly determined in all our thoughts and actions, but rather why, at this point in time and particularly in this country, determinism is so much more popular than free will.

Today’s determinism is not the same as the ancient concept of fate. Fatalism was not so much about determinism or, as the Calvinists posited, predestination; fatalism did not pretend to know what would happen, but rather held that fate was a matter of unpredictability, of whim (on the part of the universe or of the gods), and in fact left some room for free will, in a what-will-be-will-be sort of way: because outcomes were unpredictable, one had to choose, one had to act, and let the dice fall where they may. The tragic flaw of hubris embodied exactly what is wrong with any determinism: the delusion that one can stop the wheel of fate from turning past its apex, i.e., that through prediction one can control.

Determinists worship predictability and control. I once read somewhere the idea that, if everything that has already happened were known, everything that will happen could be accurately predicted. Extreme as this statement is, it accurately summarizes the mindset of the determinists. It also suggests why determinism is so attractive in a scientific age such as ours, for science is not only about the gathering of facts and the formulation of theories but also about using those theories to make predictions.

Given the apparent power of science to predict accurately, and given that prediction is predicated on a deterministic stance, it is not surprising that scientists should turn their attention to the human condition, nor that scientists, being what they are, tend to look for, and find, evidence that human thoughts and behavior are determined by genes, neurons, modules, adaptations, what have you, and are therefore predictable. And it is further not surprising that, in a restless and rapidly changing world, laymen are attracted to these ideas. Certainty is an antidote to powerlessness.

If we are religiously minded, we find certainty in religion; hence the rise of politically and socially powerful fundamentalist movements today. If we are not religious, we may find certainty in New Age nostrums, ideologies, art, bottom lines, celebrity worship, or even skepticism (no one is more certain of his or her own wisdom than the skeptic). If we are politicians, we look for certainty and security in megabytes of data. If we are scientifically minded, we find certainty in science. But certainty is not science. It is a common psychological need in an age of uncertainty.

In satisfying this need for certainty, determinism often leads to excessive self-confidence and egotism—which in turn leads to simplifications and dismissal of complexity, ambivalence, and randomness. Determinism is teleology. Today’s determinists may have discarded God, but they still believe that He does not play dice. They are, in short, utopians. We all know where utopias end up. That much at least we can confidently predict.

Boehm’s “Social Selection”

Christopher Boehm’s book Moral Origins: The Evolution of Virtue, Altruism, and Shame (Basic Books, 2012) is yet another sad example of the futility of the widespread hope that Neo-Darwinism, as overextended by evolutionary psychology and sociobiology, can ever be a theory of everything, particularly a theory that explains modern human behavior and values. It is not science. It is an ideology, or perhaps merely a hope, dressed up in a sloppy imitation of science.

Boehm’s thesis is that human moral values, the virtue, altruism, and shame of his subtitle, evolved through a process of what he calls “social selection,” which can be defined as the selecting out of socially uncooperative individuals (whom Boehm equates with psychopaths) and the selecting in of cooperative ones. Lengthy as the book is (at 362 pages of text), with its elaborate arguments and numerous examples, Boehm fails to support his thesis with anything more than supposition and false analogies.

First let’s consider what social selection would have to do in order to affect the evolution of human beings:

1) It would require a concerted, species-wide effort over a great span of time to define, identify, and eliminate socially uncooperative individuals (psychopaths and free riders).

2) In order to affect the gene pool, undesirable individuals would have to be identified very early in life, before they had the chance to reproduce. Killing the parent without killing the child does not eliminate the parent’s genes.

3) The criteria for determining whom to eliminate would have to be not only clear but also consistent over many generations. Any change in the standards midstream would ruin the whole scheme. Yet any historian can tell you that standards have changed over time, sometimes quite sharply.

There is no evidence that any of this obtained at any time in human history or prehistory. There is also no evidence that, if it did occur, it would have had a significant impact on human evolution. Prior to modern medicine and germ theory, infant and child mortality, not to mention the plagues and epidemics that affected adults as well, would have had an impact many times that of social selection, effectively swamping its proportionally infinitesimal effects.

In order to compensate for the serious lack of evidence, Boehm resorts to highly suppositional phrasing and subjunctive grammar. The following examples from pages 80 and 81 are illustrative of far too much of the book:

“prehistoric forager lifestyles could have generated distinctive types of social selection” (Perhaps they could have, but science wants to know if they actually did.)

These types of social selection “could have supported generosity outside the family at the level of genes.” (Again, did they actually do so?)

“were likely to have”
“could have become”
“It’s even possible . . . if”
“may have begun to differ”
“it’s likely that”
“would have been”
“would not have negated”
“they would have”
“were likely to have been”
“what could have happened”
“very likely”

And all these from just two pages! The careless or naïve reader might not notice this suppositional language and therefore mistakenly believe that Boehm is solidly establishing his argument; but the careful reader will find these to be crippling stumbling blocks.

There are also problems of self-contradiction. For example, Boehm seems to be saying that social selection eliminates psychopaths, but then states that psychopaths constitute a significant percentage of modern-day populations. He claims that “People very significantly [psychopathic] probably number as high as one or more [vague: how many more?] out of several hundred in our total population” (one in several hundred is on the order of a third of a percent), which may not seem all that many, but perhaps too many if humans began socially selecting these people out thousands of years ago. Other sources put the percentage as low as 2% and as high as 4%, but no doubt problems of definition affect the numbers. Whatever the true number may be, I think Boehm needs, at the very least, to clarify just how effective social selection really is.

The examples he pulls from contemporary forager societies also contradict his thesis. He cites the example of Cephu, a Mbuti Pygmy who, as recounted by Colin Turnbull, let his greed overcome his responsibility to the rest of his group. His colleagues caught him in the act of helping himself to more game than he was entitled to and subjected him to an intense course of humiliation—but they did not kill him or his progeny, and after he had adequately apologized and humbled himself, he was readmitted to the group. The story of Cephu, meant to illustrate the book’s thesis, actually proves its opposite: Cephu’s behavior was corrected not genetically but culturally.

Perhaps a comparison will clarify the problems with Boehm’s thesis. There is another form of behavior that one might think would have been socially eliminated fairly early in human evolution: male homosexuality. It is not, after all, conducive to reproductive survival, and it has often been punished, quite horribly in many instances, not only with shunning and shaming techniques but with imprisonment, torture, and execution; yet it has persisted through thousands of years, in part because homosexuals can camouflage themselves but also because efforts of social selection to eliminate the behavior have proven ineffectual—just as, I would argue, social selection to eliminate socially uncooperative individuals has been. This analogy suggests that social selection is a very weak hook on which to hang the hope that biology and genetics can account for all human behavior in terms of “fitness.”

Finally, we should note that throughout history there have been people we would today label psychopaths who were quite successful leaders, often revered not only in their own times but long after their deaths. One thinks of Napoleon Bonaparte, killer of millions yet romanticized and admired by other millions, credited with the Napoleonic Code and sympathized with in his exile. One also thinks of Genghis Khan, the great butcher who, far from being selected out of the gene pool, is now thought to be the ancestor of as many as 16 million people living today. Of course, being a psychopathic great leader is no guarantee of reproductive success; Hitler, fortunately, had no children, and though he did have nieces and nephews, none of them has followed his example. While Boehm believes that psychopaths and free riders were (at least to some extent) weeded out of the gene pool through social selection, it may be that such individuals were selected for because, in ways that we 21st-century Americans may not comprehend, they were in fact socially useful. Perhaps they made good warriors, or maybe they built the great empires that encouraged the arts and sciences, or maybe they made their liege lords great fortunes (perhaps Cortez and Pizarro were useful psychopaths, enriching the Spanish treasury while taking all the risks). What we can say is that they have been, and are, legion.

Sam Harris’s Free Will: An Exasperated Review

Some books are so bad they defy refutation. Sam Harris’s short book on free will comes close to being one of those books. It is rife with naïveté and self-contradictions, and it proffers trivia and hypotheticals in place of evidence. In these respects, it resembles his earlier book The Moral Landscape, which covered much of the same ground at greater length.

As one reads this book, one begins to wonder just what definition of free will Harris is talking about, and one finally finds out (sort of) on page 30 (of a book that is only 66 pages long), where he refers to the “popular” version of free will. This popular version, which appears to be Harris’s target (why?), is indeed a very naïve one, entailing that free will must be conscious and operate somewhat like a syllogism. He writes that to actually have free will “[y]ou would need to be aware of all the factors that determine your thoughts and actions, and you would have to have complete control over those factors” (italics added). I have never before encountered such a view of free will; if Harris is correct, this stands as a major insight, never before thought of in the whole history of theology and philosophy. Of course, no one can be, or would want to be, aware of all the factors that determine his thoughts and actions, nor can anyone imagine having complete control over them, or wanting to. One would be so entangled in what Harris elsewhere refers to as introspection, and so fraught with indecision as to what kind of control to exercise, that making any kind of decision, let alone the kind that makes the question of free will interesting, would not be possible. To devote even a short book to this childish, magic-wand kind of free will seems a tragic waste of time.

The trivial nature of his definition is seconded by the triviality of his evidence. He discusses choosing between coffee and tea as an example of how our choices are not freely willed but determined by prior causes, particularly by events in the brain (p. 7). I can imagine the internal struggle unfolding in a Starbucks, but not in the Garden of Gethsemane; he seems to be talking about the “free will” of a rather shallow character, a consumer. Free will is a profoundly ethical concept, but alas it is not surprising that in our contemporary society of entertainment and shopping it has come to mean no more than consumer choice.

As in his earlier book, Harris trots out the experiments of Benjamin Libet, who demonstrated that, when subjects were asked to push either one of two buttons, scans of their brains showed that the decision was made several seconds before the subjects were consciously aware of it. This is a trivial fact, one might even say a factoid, and again it has nothing to do with free will as an ethical problem. It is more akin to reflex action than to the often drawn-out process of deciding what to do when faced with a moral dilemma, the kind that might cause a President to pace the Oval Office floor. Perhaps Harris does not discuss examples of the latter sort because such situations cannot be reduced to simple cause and effect or measured by brain scans and therefore do not fall within the purview of science, in the purely reductionist way that Harris understands it. If physics offers us anything helpful in addressing this question, it is its understanding that the whole is not describable in terms of the particle. We may be made up of atoms, but we are not atoms and don’t act like them.

Speaking of brain scans, Harris also indulges in hypotheticals that are supposed to buttress his thesis but which do not. He writes, “Imagine a perfect neuroimaging device that would allow us to detect and interpret the subtlest changes in brain function.” Well, we are free to imagine just about anything, from unicorns to warp speed, but to imagine something is not to establish its existence, and since we do not have such a machine, we cannot reach any conclusions from its operations. Sure, it might be that such a machine would show that “the experimenters knew what you would think and do just before you did,” but until such a machine comes along we cannot conclude from its imaginary existence that free will “is an illusion” (p. 11). That “[w]e know that we could perform such an experiment, at least in principle,” and that it would then “directly challenge [our] status as conscious agents in control of [our] inner lives” (p. 24), is, in principle, a fairy tale. If we had a time machine, we could, in principle, travel backwards and forwards in time; but for now, in principle, we cannot travel through time.

I think that free will is most likely a cultural artifact rather than an innate trait conferred by genes or souls or whatever; it is something that we human beings, through the power of symbolic language, have created for ourselves, just as much as any other cultural artifact, whether a social structure or a work of art, and it is every bit as real as those things.  It is both an act and an attitude—we choose to do one thing rather than others, and we choose to accept or reject responsibility.   Historically, it has preoccupied the Christian West more than it has other cultures.  The version of free will Harris denies is a particularly Western, Christian one.  Apparently, he cannot conceive of any version of free will that does not first assume a soul; his language gets confused when he speaks of “your brain” vs. “you” (p. 9); he seems unclear about how one could be introspective and yet without free will; and in a single sentence he both denies and affirms free will.  “This understanding reveals you to be a biochemical puppet, of course, but it also allows you to grab hold of one of your strings” (p. 47).

It’s that one string that makes all the difference.  Of course, we are limited, by the circumstances of our birth, by the social class we grow up in, by the vagaries of health and accidents, by the climate, by the other human beings who surround us, by the brevity of our lives; but we are also capable of acting within these circumstances rather than always and only reacting.  We do not need a magic wand, we do not need to make a Faustian pact with the devil to exercise free will.  In fact, free will would be meaningless without those limiting circumstances, for it is only in the real world that free will can present itself.

See also BigQuestionsOnline

Sam Harris’s Moral Swampland

My original intention was to write a long and detailed critique of Sam Harris’s most recent book, The Moral Landscape: How Science Can Determine Human Values (2010). Harris is the author of two previous books promoting a naïve form of antireligion and is one of the Four Horsemen of Atheism, a posse of atheist secular conservatives, also known as the New Atheists, that also includes Daniel Dennett, Richard Dawkins, and Christopher Hitchens. However, since the book has already been widely reviewed and most of its manifold failings have been identified (among them that Harris’s version of ethics is utilitarianism in science drag), I will not repeat those points and instead will focus on two problems that particularly struck me, one of which has been alluded to but not detailed, the other of which has not been mentioned in the reviews I read.

The first problem, the one some reviewers have noted, is Harris’s apparent lack of interest in philosophers who have previously (over many centuries) wrestled with questions of morality. As I read his book, I became aware that one of his strategies is to create colossal straw-man arguments; he creates extreme but vague versions of his opponents and then knocks them down, but he rarely names names or provides quotations. For example, on page 17 he asserts that “there is a pervasive assumption among educated people that either such [moral] differences don’t exist, or that they are too variable, complex, or culturally idiosyncratic to admit of general value judgments,” but he does not identify whom he’s talking about, nor does he quote anyone who holds such a view; the statement is also absolute (as so many of his statements are), in that he does not qualify the category: he does not say, for example, “80% of educated people,” nor does he define what he means by “educated.” Furthermore, the word “pervasive” carries negative valence without explicitly declaring it; anything “pervasive” has taken over (evil is pervasive, good is universal, for example).

On page 20 he states that “Many social scientists incorrectly believe that all long-standing human practices must be evolutionarily adaptive,” but he does not identify who those many social scientists are, nor specify how many constitute “many”; nor does he quote any of them to support or even illustrate his assertion; nor does he offer so much as a footnote reference to any social scientists who allegedly hold this view. In his hostile statements on religion, an attentive reader will note that he has not only oversimplified religion but also seems to limit his conception of it to the most conservative strands of contemporary Judeo-Christian systems. He rarely refers to theologians, and then only to contemporary and conservative ones. His bibliography runs to 40 pages, or about 800 sources (give or take), but the only theologians in this extensive listing are J. C. Polkinghorne and N. T. Wright. Nothing of Augustine or Aquinas, nothing of Barth or Bonhoeffer or Fletcher. If he had bothered to consult these and other theologians and moral philosophers, he might have seen that his ideas have already been more extensively and more deeply explored than he manages in this book. He does try to wriggle out of this problem in the long first footnote to Chapter 1, but the attempt is disingenuous.

I also question whether he has actually read, carefully and completely, all 800 (give or take) sources he lists. It would take an inordinate amount of time to read all of them, to read them carefully to ensure one has properly understood them, to take adequate notes, and to think about how they fit into or relate to one’s thesis and argument. For example, it strikes me as odd that he lists 10 sources by John R. Searle, 3 of which are densely argued books, but refers to Searle only once in the body of his book and 3 times, obliquely, in his endnotes. One wonders: in what sense, then, is Searle a source? Is he a source by association only?

The other major problem I have with Harris’s book is in my view more serious: he mistakes “brain states” for thoughts. This is a common error among those who imagine that scanning the brain to measure areas of activity, or measuring the levels of various hormones such as oxytocin, suffices to explain human thought. (Harris confesses to a measurement bias on p. 20 when he writes that “The world of measurement and the world of meaning must eventually be reconciled.”) That a particular region of the brain is “lit up” tells us only where the thought is occurring—it tells us nothing about the content of that thought nor anything about its validity. This is because human thoughts are generated, shared, discussed, modified, and passed on through language, through words, which, while processed in the brain, nevertheless have meaningful independence from any one particular brain, and therefore have a degree of freedom from “brain states.”

Harris’s inability to properly distinguish between brain states and thoughts is apparent in an interesting passage on pages 121-122. Here he discusses research he conducted using fMRI, which identified the medial prefrontal cortex as the region of the brain that is most active when a subject believes a statement. He discovered that this area is activated similarly when the subject is considering a mathematical statement (2 + 6 + 8 = 16) and when the subject is considering an ethical belief (“It is good to let your children know that you love them”). This similarity of activity in the same brain area leads Harris to conclude “that the physiology of belief may be the same regardless of a proposition’s content. It also suggests that the division between facts and values does not make much sense in terms of underlying brain function.” How true. Yet human beings (including, quite obviously, Harris himself) do make the distinction.

But he goes on: “This finding of content-independence challenges the fact/value distinction very directly: for if, from the point of view of the brain, believing ‘the sun is a star’ is importantly similar to believing ‘cruelty is wrong,’ how can we say that scientific and ethical judgments have nothing in common?” (italics added). Aside from the fact that he does not specify who that “we” is, and that he does not prove that anyone has said there is “nothing in common” between scientific and ethical judgments, there is the fact that in language, that is, by actually thinking, we can make the distinction, and we do so all the time. “This finding” does not challenge the distinction; it merely shows that the distinction does not depend upon a brain state. The MPFC may be equally activated, but the human thinker knows the difference.

The underlying problems with Harris’s thesis are at least threefold. One is that his hostility to and ignorance of religion block him from considering or accurately representing what religion has to say about ethics. Another is a habit common among conservatives: tilting at straw men while mounted on rickety arguments. The third is his reductionism, taken to an absurd degree. Arguments of the type offered in this book mistake the foundation for the whole edifice; it is as if, in desiring to know and understand Versailles, we razed it to its foundation and then said, “There, behold, that is Versailles!”

Note: A partial omission in the quotation in the second-to-last paragraph of the original post of 1 May 2011 was corrected on 2 October 2011.