Tag Archives: morality

Evolution and Theodicy

“Why is there evil in the world?” This question has been asked by philosophers, theologians, and ordinary men and women for millennia. Today scientists, particularly evolutionary biologists, neuroscientists, and evolutionary/neuropsychologists, have joined the effort to explain evil: why do people indulge in violence, cheating, lies, harassment, and so on? There is no need here to itemize all the behaviors that can be labeled evil. What matters is the question of “why?”

The question of “why is there evil in the world?” assumes the premise that evil is abnormal while good (however defined) is normal—the abnorm vs. the norm, if you will. Goodness is the natural state of man, the original condition, and evil is something imposed on or inserted into the world from some external, malevolent source. In Genesis, God created the world and pronounced it good; then Adam and Eve succumbed to the temptations of the Serpent and brought evil and therefore death into the world (thus, death is a manifestation of evil, immortality the natural state of good). Unfortunately, the Bible does not adequately account for the existence of the Serpent or Satan, so it was left to Milton to fill in the story. Gnostics, Manicheans, and others posited the existence of two deities, one good and the other evil, and constructed a vision of a cosmic struggle between light and darkness that would culminate in the triumph of good—a concept that filtered into Christian eschatology. The fact that Christian tradition sees the end times as a restoration to a state of Adamic or Edenic innocence underscores the notion that goodness is the natural, default state of man and the cosmos.

Contemporary secular culture has not escaped this notion of the primeval innocence of man. It has simply relocated Eden to the African savannah. When mankind was still at the hunter-gatherer stage, so the story goes, people lived in naked or near-naked innocence; they lived in egalitarian peace with their fellows and in harmony with nature. Alas, with the invention of agriculture and the consequent development of cities and civilizations, egalitarianism gave way to greed, social hierarchies, war, imperialism, slavery, patriarchy, all the factors that cause people to engage in violence, oppression, materialism, and so on; further, these faults of civilizations caused the oppressed to engage in violence, theft, slovenliness, and other sins. Laws and punishments and other means of control and suppression were instituted to keep the louts in their place. Many people believe that to restore the lost innocence of our hunter-gatherer origins, we must return to the land, re-engage with nature, adopt a paleo diet, restructure society according to matriarchal and/or socialist principles, and so on. Many people (some the same, some different from the back-to-nature theorists) envision a utopian future in which globalization, or digitization, or general good feeling will restore harmony and peace to the whole world.

Not too surprisingly, many scientists join in this vision of a secular peaceable kingdom. Not a few evolutionary biologists maintain that human beings are evolutionarily adapted to life on the savannah, not to life in massive cities, and that the decline in the health, intelligence, and height of our civilized ancestors can be blamed on the negative effects of a change in diet brought on by agriculture (too much grain, not enough wild meat and less variety of plants) and by the opportunities for diseases of various kinds to colonize human beings too closely crowded together in cities and too readily exposed to exotic pathogens spread along burgeoning trade routes. Crowding and competition lead to violent behaviors as well.

Thus, whether religious or secular, the explanations of evil generally boil down to this: that human beings are by nature good, and that evil is externally imposed on otherwise good people; and that if circumstances could be changed (through education, redistribution of wealth, exercise, diet, early childhood interventions, etc.), our natural goodness would reassert itself. Of course, there are some who believe that evil behavior has a genetic component, that certain mutations or genetic defects are to blame for psychopaths, rapists, and so on, but again these genetic defects are seen as abnormalities that could be managed by various eugenic interventions, from gene or hormone therapies to locking up excessively aggressive males to ensure they don’t breed and pass on their defects to future generations.

Thus it is that in general we are unable to shake off the belief that good is the norm and evil is the abnorm, whether we are religious or secular, scientists or philosophers, creationists or Darwinists. But if we take Darwinism seriously, we have to admit that “evil” is the norm and that “good” is the abnorm—nature is red in tooth and claw, and all of the evil that men and women do is also found in other organisms; in fact, we can say that the “evil” done by other organisms long precedes the evil that men do, and we can also say, based on archaeological and anthropological evidence, that men have been doing evil since the very beginning of the human line. In other words, there never was an Eden, never a Noble Savage, never a long-ago Golden Age from which we have fallen or declined—nor, therefore, is there any prospect of an imminent or future Utopia or Millennial Kingdom that will restore mankind to its true nature, because there is nothing to restore.

The evolutionary function of “evil” is summarized in the term “natural selection”: the process by which death winnows out the less fit from the chance to reproduce (natural selection works on the average, meaning of course that some who are fit die before they can reproduce and some of the unfit survive long enough to produce some offspring, but on average fitness is favored). Death, usually by violence (eat, and then be eaten), is necessary to the workings of Darwinian evolution. An example: When a lion or pair of lions defeat an older pride lion and take over his pride, they kill the cubs of the defeated male, which has the effect of bringing the lionesses back into heat so that the new males can mate with them and produce their own offspring; their task is then to keep control of the pride long enough for their own cubs to reach reproductive maturity. Among lions, such infanticide raises no moral questions, whereas among humans it does.

There is no problem of evil but rather the problem of good: not why is there “evil” but rather why is there “good”? Why do human beings consider acts like infanticide to be morally evil while lions do not? Why do we have morality at all? I believe that morality is an invention, a creation of human thought, not an instinct. It is one of the most important creations of the human mind, at least as great as the usually cited examples of human creativity (art, literature, science, etc.), if not greater, considering how much harder won it is than those nearer competitors, and how much harder it is to maintain. Because “good” is not natural, it is always vulnerable to being overwhelmed by “evil,” which is natural: Peace crumbles into war; restraint gives way to impulse; holism to particularism; agape to narcissism; love to lust; truth to lie; tolerance to hate. War, particularism, narcissism, etc., protect the self of the person and the tribe, one’s own gene pool so to speak, just as the lion kills his competitor’s cubs to ensure the survival of his own. We do not need to think very hard about doing evil; we do need to think hard about what is good and how to do it. It is something that every generation must relearn and rethink, especially in times of great stress.

It appears that we are in such a time today. Various stressors (the economy, the climate, overpopulation and mass migrations, religious conflict amid the dregs of moribund empires) are pushing the relationship of the tribes to the whole out of balance, and the temptations are to put up walls, dig trenches, draw up battle lines, and find someone other than ourselves to blame for our dilemmas. A war of all against all is not totally out of the question, and it may be that such a war or wars will eventuate in a classic Darwinian victory for one group over another—but history (rather than evolution) tells us that such a victory is often less Darwinian than Pyrrhic.

The Credulous Skeptic: Michael Shermer’s The Moral Arc

The thesis of Michael Shermer’s new book is that morality is about the flourishing of sentient beings through the application of science and reason; in this, he follows in the footsteps of Steven Pinker’s The Better Angels of Our Nature and Sam Harris’s The Moral Landscape.  All three consider science to be the arbiter of all things, and at least Pinker and Shermer argue that moral progress has been made only in the last 500 years or so, i.e., the modern period of scientific discovery and advancement and the Enlightenment.  As far as Shermer is concerned, all that preceded this period is superstition, ignorance, and darkness.  Religion, and especially Christianity, is not only of no use in their projects but in fact actually harmful.

Shermer argues that great strides in moral behavior have occurred since the Enlightenment, particularly in freedom and the abolition of slavery, women’s and gay rights, and animal rights, and that these advances are the direct result of a scientific and materialist worldview and an indirect result of the material prosperity afforded by the industrial revolution, capitalism, and democracy.  He marshals an impressive quantity of evidence to support his claims, and any reader is likely to concede that he has made a compelling case.  That is, if said reader is already sympathetic to Shermer’s libertarianism and is as worshipful of science, the Founding Fathers, and the Enlightenment as Shermer is.

Any less naïve reader, however, is likely to notice a number of problems with Shermer’s book, not least of which is its Western bias.  Although early in the book Shermer refers to the moral progress of our species, virtually all his evidence and examples come from Western Europe and the United States, as if “we” were all that needed to be said about the species in general—even though the populations of Europe and the United States taken together constitute a minority of the world population, and this minority status applies even when the white populations of Australia and New Zealand and elsewhere are factored in.  Thus it would seem that in order to establish that the “species” has made moral progress since the Age of Enlightenment, data from non-Western societies would have to be taken into account.  In other words, Shermer is guilty of a sampling bias.

Compounding this problem—or perhaps the source of the problem—are his naïve and simplistic views of history.  Apparently, Shermer believes that the Enlightenment arose by spontaneous generation, for he dismisses everything that preceded it, most especially religion.  Or rather, Christian religion, which apparently has no moral tradition or intellectual history worthy of the name (never mind the moral and intellectual traditions of any other religious tradition, such as Buddhism or Hinduism).  In fact, Shermer’s notion of Christianity appears to be limited to that version with which, as a former born-again, he is most familiar, American evangelical fundamentalism.

For example, in his chapter on slavery, Shermer reads Paul’s letter to Philemon without any sense of the context in which Paul was writing and in fact explicitly dismisses contextual interpretations.  In this, he is more fundamentalist than the fundamentalists.  He is also guilty of presentism, i.e., the logical error of reading the past through the lens of the present; because “we” (Westerners) today abhor slavery, it must therefore be that any moral person at any time in history, regardless of how long ago or of what culture or civilization, should explicitly abhor slavery and openly call for its abolition.  Never mind that at the time of Paul’s writings, Christians were a distinct minority in the Roman world, no more than a few thousand out of a total population of millions; Christianity had barely begun, and it would be centuries before it had built up anything like a coherent intellectual tradition or widespread influence.  Meanwhile, Paul lived under the Roman system, which was exploitative and brutal in a way we today would find extreme.  Paul was certainly smart enough to know that calling for the abolition of slavery, by a small group of marginal people following a bizarre new religion, would have no impact on anything.  Thus when he urges Philemon to treat his slave Onesimus as a brother, he is making as radical a statement as one could imagine, in context.  He was not asking Philemon to do anything useless or dangerous—he was asking him to treat Onesimus as a fellow human being, a radical idea at the time, but in doing so Paul planted the seed that eventually grew into the Western ideal of the individual, an ideal which is at the center of Shermer’s own libertarianism.

Shermer has built a career on being a skeptic (even editing a magazine of that name), but his skepticism tends to be selective (in the same way, ironically, as a fundamentalist is selectively skeptical—of evolution, or climate change, i.e., of things he already rejects).  This selective skepticism is displayed not only in his tendentious reading of Paul but also in his takedown of William Wilberforce, one of the most successful abolitionists, whom he characterizes as “pushy and overzealous” in his “moralizing” and as worrying “excessively about what other people were doing, especially if what they were doing involved pleasure [and] excess.”  Meanwhile, Shermer’s Enlightenment heroes get a complete pass:  he never mentions Locke’s rationalization of the taking of American Indian lands for white settlement (because the Indians did not have “property”), nor that Jefferson, whom Shermer hero-worships, and Washington owned slaves (which would certainly be relevant to his chapter on slavery), nor that Franklin favored using war dogs against Indians who too stubbornly resisted white theft of their real estate.  One has to wonder:  What has Shermer ever read of American history?  Why does he apparently take his heroes at their written word, without investigating the context in which they wrote?  Would that reveal that his idols have feet of clay?  Why is he skeptical of Wilberforce but not of Jefferson?

When Shermer turns to the issue of animal rights, he seems at first to be on firmer ground.  There certainly does seem to be a positive movement in the direction of extending at least the right not to suffer at human hands to domesticated animals.  Animal welfare groups have proliferated, laws protecting animals from harm continue to be expanded, and more people are embracing a vegetarian lifestyle—in the United States today, somewhat more than 7 million people are vegetarians, which is an impressive number until one realizes that they represent only 3.2% of the American population (however, that compares unfavorably to India, where 42% of households are vegetarian).  While Shermer does recognize the cruelties of industrialized meat production, he misses an opportunity to connect some dots.  One of the effects of industrialization is to specialize the production of goods and services, and the effect of that is to remove the means by which things get done from the view of most people.  In an urbanized world, for example, the making of a ham or a pound of ground beef is invisible to the typical supermarket shopper, who never has to raise an animal from birth, slaughter it, carve up its corpse, etc., so that a cook can look at a hunk of muscle from a steer and call it a beautiful piece of meat; our farming ancestors knew what that hunk of meat really was from firsthand experience.

Likewise, an urbanized population can keep cats and dogs as pets solely for their companionship, can even confer on them the status of humans in fur because dogs and cats (and to some extent horses) no longer have any utilitarian function; thus giving them moral status of the kind promoted by animal welfare groups and PETA is something we can afford.  We don’t need them to aid in the hunt, keep down rodent pests, or herd our sheep anymore.  Yet every year we kill 2.7 million unwanted dogs and cats, not to mention those that die from neglect, and while those numbers are down, one has to wonder how long we can afford to keep excess animals alive.  However, the point here is that the mistreatment of animals is removed from most Westerners’ daily lives.

As is violence to other human beings.  As the nation state grew, it appropriated violence to itself and diminished individual violence; justice has replaced revenge, most of the time.  But we have also exported violence, outsourced it so to speak, so that most of our official military violence is committed overseas.  Shermer might do well to read a few books on that:  perhaps those by Chalmers Johnson, or Andrew Bacevich, to name just two authors worth consulting.  Or he might refresh his memory of our involvement in the death of Allende and our moral responsibility for the deaths caused by Pinochet, or of the numbers of Iraqi civilians who died in the second Iraq war (approximately 150,000).  He also could consider the number of people who died as the result of the partitioning of India (about a million).  And since Shermer claims to be speaking on behalf of the species, perhaps he should consider the deaths and oppression of people in, say, China or North Korea, or many other places in the non-Western world.

In some ways, we Westerners are like our pets—domesticated and cuddly.  But remove the luxuries of domestication and, like feral cats and dogs, we will quickly revert to our basic instincts, which will not be fluffy.  The “long peace” since World War II has not been all that peaceful, and certainly not, within historical time, very long.  As Peter Zeihan (The Accidental Superpower) and others are warning us, the post-World War II global order is fraying, and disorder and its symptoms (e.g., violence) could once again rise to the surface.

Mark Balaguer on Free Will

Into the fray of recent books on whether or not we humans have free will jumps Mark Balaguer’s sprightly book, one of a series on current vogue topics published by MIT Press, intended for a nonspecialist readership. In other words, Balaguer is not writing for other philosophers, but for you and me—and this audience may account for the book’s jauntiness, inasmuch as it appears that authors, and/or their editors and publishers, believe that the only way that the common man or woman can be induced to swallow and digest cogitations on the great questions is by talking to him or her as if he or she were a child. One sometimes imagines the author as rather like a sitcom daddy explaining mortality or sin as he tucks in his four-year-old daughter.

You can tell that I find that style annoying. But despite that, Balaguer does more or less accomplish his goal, which is basically to show that the anti-free will arguments advanced today by such luminaries of the genre as Daniel Wegner and Sam Harris don’t amount to much, primarily because they tend to assume what yet remains to be proven. Balaguer does an excellent job of exposing the holes in the determinist arguments, as well as going back to some of the studies that constitute the supposed proofs of those arguments, such as those of Benjamin Libet, and finding that they do not in fact offer such proof. I won’t go into his explanations, as the reader can do that easily enough on his own, especially since the book is short (a mere 126 pages of text) and free of arcane jargon.

Much as I welcome Balaguer’s poking of holes in the determinist hot-air balloon, I do have a bone to pick with his argument, namely that he seems to have a trivial notion of what free will is. Apparently, Balaguer thinks that free will is synonymous with consumer choice; his primary and repeated example is a scenario of someone entering an ice cream parlor and considering whether to order vanilla or chocolate ice cream. Even in his interesting distinction of a “torn decision,” i.e., one in which the options are equally appealing or equally unappealing, he repeats the chocolate vs. vanilla example. In this he is like Sam Harris, the determinist who uses tea vs. coffee as his example. And like Harris, he says nothing about the fact that free will is an ethical concept and as such has nothing to do with consumer choice—and a lot of other kinds of common, everyday choices as well.

So let me offer a scenario in which the question of free will is truly interesting: Imagine that you are a young man in the antebellum South, say about 1830, and you are the sole heir of a large plantation on which cotton is grown with slave labor. Let’s say you’re about 30 years old and that for all those 30 years you have lived in a social and ideological environment in which slavery has been a natural and God-given institution. You therefore assume that slavery is good and that, when your father dies and you inherit the plantation, you will continue to use slave labor; you will also continue to buy and sell slaves as valuable commodities in their own right, just like the bales of cotton you sell in the markets of New Orleans. Further, you are aware that cotton is an important commodity, crucial to the manufacturing enterprises of the new factories of the northeast and England. You are justly proud (in your own estimation, as well as that of your social class) of the contributions the plantation system has made to the nation and civilization. Because of your background and experience, perhaps at this point you cannot be said to have free will when it comes to the question of whether or not slavery is morally just.

Then one day you learn of people called abolitionists, and perhaps quite by chance you come across a pamphlet decrying the practice of slavery, or perhaps you even hear a sermon by your local preacher demonizing abolitionists as atheists or some such thing, though in the course of that sermon the preacher happens to mention that these atheists presume to claim Biblical authority for their heretical beliefs. Maybe you rush to your copy of the Bible to prove them wrong, only to come across St. Paul’s assertion that there is neither slave nor freedman in Christ. Perhaps you ignore these hints that what you have always assumed to be true may not be; or perhaps they prick your conscience somewhat, enough to make you begin to look around you with slightly different eyes. Maybe you even become fraught, particularly when you consider that some of the younger slaves on the property are your half-siblings, or perhaps even your own offspring—how could my brother or my son be a slave while I am free? Who can say what nightmares these unwelcome but insistent thoughts engender? At any rate, for the first time in your life, you find that you cannot be a slaveholder without considering the moral implications of the peculiar institution. For the first time, you must actually decide.

The above is certainly an example of what Balaguer calls a torn decision, but unlike chocolate vs. vanilla, it is a moral decision, and therefore profound rather than trivial. And it is in such moral dilemmas, when something that is taken for granted emerges into consciousness, that the concept of free will becomes meaningful. It would therefore seem that scientists, qua scientists, can’t be of much help in deciding whether or not we have free will. Try as they might (and some have, sort of), they cannot design laboratory experiments that address moral dilemmas—it is only in living, in the real world with other people and complex issues, that morality, and therefore free will, can exist. Of course, that does not mean that in exercising free will everyone will always make the morally right decision—we cannot know if the young man of the antebellum South will free his slaves or keep them (or even perhaps decide that the question is too difficult or costly to be answered, so he chooses to ignore it, likely leading to a lifetime of neuroses)—but we do know that once the question has risen into his consciousness, he has no choice but to choose.

Free will, then, operates when a situation rises into consciousness, creating a moral dilemma that can be resolved only by actively choosing a course of action or belief on the basis of moral principles rather than personal preference or benefit. There are dilemmas that superficially resemble moral dilemmas, such as whether or not I ought to lose weight or whether or not I should frequent museums rather than sports bars, but which are in fact matters of taste rather than ethics. Chocolate vs. vanilla is of the latter kind. To say that I ought to have the vanilla is very different from saying I ought not to own slaves, even though both statements use the same verb. It is disappointing that philosophers fail to make the distinction.

Eichmann Before Jerusalem: A Review

Eichmann:  Before, In, and After Jerusalem

“One death is a tragedy; a million deaths is a statistic.”  Whether or not Stalin actually ever said this is irrelevant to the point that it makes, for it tells us in a most condensed form the totalitarian view of human beings, as exemplified not only by the Stalinist era in Russia but especially by the short but deadly reign of National Socialism in Germany.  Unlike the socialism found in contemporary European societies such as Sweden and France, in which the individual human being is recognized as a person regardless of his or her circumstances, and thus equally worthy of education, medical care, and hope, the “socialism” of the Nazis stripped the individual of personhood by subsuming him in a collective identity, so that this body was interchangeable with that body: the individual was not merely a representative of the collective he was assigned to (born into) but was in fact that collective, with no more existence independent of it than a cell has independent of its body.  Individuals thus were considered and treated not as symbols of the collective (Jews, gypsies, homosexuals, Poles, intellectuals, etc., as well as “Germans” or “Aryans”) but as the collective itself.  The purpose of the individual was to sustain the collective, just as the purpose of a cell is to sustain the body.  No one is interested in the dignity and autonomy of a cell.

Click here to read the complete review.

Joachim Fest’s “Not I”: A Review

Joachim Fest’s memoir, Not I: Memoirs of a German Childhood, as translated by Martin Chalmers, is a fascinating, well-written story of his family’s plight during the Hitler years. Fest is already well known to scholars and amateurs of the Third Reich, having previously published a number of books on that era, most notably his biography of Hitler, originally published in 1973, just 28 years after the end of the war. In my view, Hitler remains one of the best biographies of the Führer, despite the fact that historians have since delved more deeply into the particulars of the Third Reich, largely because Fest experienced that era firsthand, from the perspective of an educated, well-read German who was also an outsider. It is that outsider experience that makes Not I such an insightful book.

Fest’s family was headed by Johannes Fest, an educated Catholic German, active during the Weimar years in the Zentrum party, patriotic but not fanatic, and convinced that representative, parliamentary government was necessary to Germany’s future. He understood from the beginning that Hitler and the Nazi party would be a disaster for Germany, and he was one of the few who verified early on the rumors of what was happening to the Jews. Unfortunately, few of his Jewish friends heeded his consequent warnings, for like the Fests, they were members of the German Bildungsbürger, the educated, bookish upper middle class and, unlike Johannes Fest, unable to accept that such a cultured country as Germany would long tolerate such a barbarian as Hitler. The price paid by German Jews is well documented elsewhere; Joachim Fest’s memoir details the costs to a German Catholic Bildungsbürger family: the dismissal of Johannes from his job and the extinction of his career (he never worked again, even after the war), the expulsions from school of the two oldest boys (Joachim and his adored older brother Wolfgang) and their eventual conscription into the German military, the hardships endured by his mother as she tried to maintain some normalcy and dignity for the family as their circumstances straitened, and the narrowed prospects and necessary compromises of the younger sisters as the German war effort deteriorated.

Culture, as opposed to mere civilization, provides the leitmotif of this book. Throughout, Fest describes the literature, music, and art that formed the Idealist, Romantic infrastructure of his family and friends: Fontane, Goethe, Schiller; Mozart and Beethoven; the Italian Renaissance painters (especially Caravaggio, who was also a murderer) and their German imitators. He and his older brother Wolfgang spent hours reading and discussing the great writers of the German tradition, and Joachim spent time at museums and in the homes of intellectuals who introduced him to the subtleties of opera and art. After he was conscripted, late in the war, his closest friends were other young soldiers who shared his passions, including his great friend Reinhold Buck, a young musician. It was such friendships that sustained Fest in the dark hours of the accelerating collapse of the Reich. After the war, Fest continued in his reading and his aspiration to become an “independent scholar” of the Renaissance and spent a good deal of time in Italy, but circumstances and his status as a German who had refused to collaborate with the Reich, the “Not I” of his book’s title, led him to become one of the early historians of the Hitler years.

A perennial question (one might say mystery) of the rise of Nazism is why it occurred in Germany: How could such a cultured nation, with such a transcendent history of great literature and philosophy, succumb to the hysterical blandishments of such an uncouth and uneducated bit of rabble as Hitler? One often gets the impression while reading this book that the continuous heady discussions of literature and music, of all that constitutes Kultur in the German attitude, were not only a shield against the barbarians but almost a denial of their existence—culture as a space where Hitler does not reign. It was this belief, amounting to a creed, that prevented so many of the Bildungsbürgertum from, first, seeing how dangerous the Hitler movement actually was and, then, realizing the danger in time to escape. The Fests were an exception in anticipating the reality, though they too chose not to flee Germany, perhaps because their father Johannes not only saw the writing on the wall but also did not quite want to believe what it said: as Joachim writes, after the war his father “spoke of the main error he and his friends had fallen victim to, because they had believed all too unreservedly in reason, in Goethe, Kant, Mozart, and the whole tradition that came from that.”

Fest does not answer that question in this book, perhaps because it is a memoir of his childhood and youth and is therefore not meant to be a mature analysis of the historical causes of Nazism; in his earlier biography of Hitler, however, he does address the question and hints that, besides the dislocations following the First World War (defeat, economic hardships, inflation, lack of a democratic tradition, etc.), which historians generally explore, there was an element of German culture that contributed to making Hitler possible: Idealism, of a particularly Romantic sort. Hitler was not well educated, but he was nevertheless an heir of the German Idealist tradition, and his imagination, though vulgar, was Romantic in its tenor, as exemplified in his worship of the operas of Wagner. As Fest wrote, Hitler had the “ability to build castles in the air with an intrepid and acute rationality” and “In his way of sharply opposing an idea to reality, of elevating what ought to be above what is, he was truly German.” (Aside: Fest might have said that Hitler scorned the “reality-based community.”) When the devil becomes an Idealist, he turns into a Hitler.

Nonetheless, there is still the question of why so few of the Bildungsbürgertum joined the Fests in seeing what was bearing down on them. Perhaps one answer is that they vastly overestimated the influence of the German intellectual tradition on the general German population and perhaps did not realize that their own class constituted a tiny percentage of the general German “Volk,” perhaps as little as 1%. It is easy enough, when you read and debate in a book-lined study, with friends of similar inclination, to believe that you represent a significant and influential class, when the truth is quite the contrary. Perhaps if they had read more books like Alfred Döblin’s Berlin Alexanderplatz (1929) they would have understood their own country better, perhaps would have realized that the hapless, directionless, and instinctively brutal Franz Biberkopf represented a larger slice of the population than they did, and that, lacking the education of the gymnasium and the university, the Biberkopfs of their day knew nothing of Kant or Goethe, et al. But then, who among the New York intelligentsia would bother to consider the residents of Boise or Yuma in their pontifications on the American national attitude?

However, screening out the barbarians would not necessarily prevent a Hitler. All too many well-educated men and women, including leading professors of the German universities, as well as artists, novelists, orchestra directors, and others of the cultivated classes, enthusiastically joined the Hitler movement—not least among them the (supposedly) greatest philosopher of the twentieth century, Martin Heidegger, a man who could elevate what ought to be above what is with the worst of them. Fest was certainly correct when he wrote in Hitler that “a totalitarian system need not be built up upon a nation’s deviant or criminal tendencies.” Intellectuals, particularly those who deliberately insulate themselves not only from the realities of their own times but from the cultures of other nations, can, inadvertently or intentionally, set the stage for a tyrant as easily as can the frustrated feral class.

After Fest was captured by American soldiers and interned in a POW camp in France, and after his brother Wolfgang died of a lung infection on the eastern front and his musical friend Reinhold Buck was shot to death only a few hundred yards from where Fest had been captured, Fest learned for the first time of a different cultural tradition, that of the Anglophone world of Great Britain and the United States. During his captivity, Fest improved his English and read the works of contemporary English and American authors, including Somerset Maugham, Mark Twain, Joseph Conrad, and William Golding. As he writes, his readings “brought home to me that my knowledge of literature had so far been much too dominated by classic German works.” It was likely that that domination clouded the vision of many Germans as Hitler hove into view.

Alas, it is not only Germans who have idealized the intellectual life, the life of thought. Even in America, among certain segments of the bourgeoisie at least, writers and artists are often viewed as set apart from the rest of mankind, not only in their talent but morally as well. We too have inherited much of the attitude expressed by Shelley, that poets, and artists in general, are the unacknowledged legislators of the world. But it may be more true to say that while the ability to live in our heads may be our finest trait and the one that most distinguishes us from the other animals, it may also be our fatal flaw.

Genetics, Ethics, and the New Social Darwinism

There’s been a lot of buzz among the pundits lately about “cooperation,” particularly about purported scientific findings that cooperation, collaboration, altruism, and other kinds of social virtues are genetic and that “cooperation is as central to evolution as mutation and selection” (Brooks, 5 May 2011).  The pundits are responding to a minor publishing fad for books on these subjects, for example SuperCooperators by Nowak and Highfield and Braintrust by Patricia Churchland (reviewed by Matt Ridley in the Wall Street Journal on 14 May 2011).

The underlying thesis is that because cooperation improves the survival of individuals and their relatives, our “moral rules of thumb” have, as Ridley puts it, “been chosen by evolution to achieve certain social goals.”  I do hope that Ridley is being deliberately poetic here, for otherwise he is promulgating a teleological version of evolution, one in which evolution chooses and has intentional goals and one that verges on intelligent design.

It is naïve to assert that evolution makes choices or has goals—evolution is not an entity.  The word “evolution” denotes a process of nondirected change over eons.  That this change has eventuated in complexity of form and behavior is a consequence of time, not purpose.

The New Social Darwinists’ notion that evolution favors cooperation also does not stand up to the facts.  If cooperation (and all its variants) were “favored” by evolution, in the same way that, say, development of bigger brains is, one would expect to see a diminution of its opposite, selfishness.  But no such diminution can be observed.

This raises the more interesting and nonbiological question of why there has recently been this upsurge in books and articles on the biological basis of cooperation.  It seems to me that it is a reaction to the manifest lack of cooperative and altruistic behavior in American society today.  The current recession was triggered by a tsunami of selfish, devil-take-the-hindmost behavior; and the current political climate, particularly on the right (which no longer deserves to be called “conservative”), together with the pervasive “look-at-me-first” attitudes expressed by popular culture, points to a collapse of social cohesion and of the sense of responsibility to others.

In other words, the current interest in cooperation is symptomatic of its manifest lack in American society, economics, and politics.  Those alarmed by this decline are attempting to use “science” as an antidote, as a means of encouraging greater cooperation in a poisonous political and economic climate of selfishness and disregard for those who lack power and money.  But we don’t need pseudoscience and appeals to myths of the primeval savannah to dissect the causes of the current discouraging state of America nor to argue for greater cooperation and consideration for others.  Sociology and history, moral philosophy and ethics, and yes even religion, rather than sociobiology and evolutionary just-so stories, are more than adequate to the task.

Sam Harris’s Moral Swampland

My original intention was to write a long and detailed critique of Sam Harris’s most recent book The Moral Landscape: How Science Can Determine Human Values (2010).  Harris is the author of two previous books promoting a naïve form of antireligion and is one of the Four Horsemen of Atheism, a posse of atheist secular conservatives also known as the New Atheists, that also includes Daniel Dennett, Richard Dawkins, and Christopher Hitchens.  However, since the book has already been widely reviewed and most of its manifold failings have been identified (among them that Harris’s version of ethics is utilitarianism in science drag), I will not repeat those points and instead will focus on two problems that particularly struck me, one of which has been alluded to but not detailed, the other of which has not been mentioned in the reviews I read.

The first problem, the one some reviewers have noted, is Harris’s apparent lack of interest in philosophers who have previously (over many centuries) wrestled with questions of morality.  As I read his book, I became aware that one of his strategies is to create colossal straw-man arguments; he creates extreme but vague versions of his opponents and then knocks them down, but he rarely names names or provides quotations.  For example, on page 17 he asserts that “there is a pervasive assumption among educated people that either such [moral] differences don’t exist, or that they are too variable, complex, or culturally idiosyncratic to admit of general value judgments,” but he does not identify whom he’s talking about nor does he quote anyone who holds such a view; the statement is also absolute (as so many of his statements are), in that he does not qualify the category: he does not say, for example, “80% of educated people,” nor does he define what he means by “educated.”  Furthermore, the word “pervasive” has negative valence without explicitly declaring it; anything “pervasive” has taken over (evil is pervasive, good is universal, for example).

On page 20 he states that “Many social scientists incorrectly believe that all long-standing human practices must be evolutionarily adaptive,” but he does not identify who those many social scientists are, nor specify how many constitute “many”; nor does he quote any of them to support or even illustrate his assertion; nor does he offer so much as a footnote reference to any social scientists who allegedly hold this view.  A reader who pays attention to his hostile statements on religion will note that he not only oversimplifies religion but also seems to limit his conception of it to the most conservative strands of contemporary Judeo-Christian systems.  He rarely refers to theologians and then only to contemporary and conservative ones.  His bibliography runs to 40 pages, or about 800 sources (give or take), but the only theologians in this extensive listing are J.C. Polkinghorne and N. T. Wright.  Nothing of Augustine or Aquinas, nothing of Barth or Bonhoeffer or Fletcher.  If he had bothered to consult any of these and other theologians and moral philosophers, he might have seen that his ideas have already been more extensively and more deeply explored than he manages in this book.  He does try to wriggle out of this problem in the long first footnote to Chapter 1, but the attempt is disingenuous.

I also question whether he has actually carefully and completely read all 800 (give or take) sources he lists.  It would take an inordinate amount of time to read all of them, to read them carefully to ensure one has properly understood them, to take adequate notes, and to think about how they fit into or relate to one’s thesis and argument.  For example, it strikes me as odd that he lists 10 sources by John R. Searle, 3 of which are densely argued books, but refers to Searle only once in the body of his book and 3 times, obliquely, in his endnotes.  One wonders: in what sense, then, is Searle a source?  Is he a source by association only?

The other major problem I have with Harris’s book is in my view more serious:  He mistakes “brain states” for thoughts.  This is a common error among those who imagine that scanning the brain to measure areas of activity or measuring the levels of various hormones such as oxytocin suffices to explain human thought.  (Harris confesses to a measurement bias on p. 20 when he writes that “The world of measurement and the world of meaning must eventually be reconciled.”)  That a particular region of the brain is “lit up” tells us only where the thought is occurring—it tells us nothing about the content of that thought nor anything about its validity.  This is because human thoughts are generated, shared, discussed, modified, and passed on through language, through words, which, while processed in the brain, nevertheless have meaningful independence from any one particular brain, and therefore have a degree of freedom from “brain states.”

Harris’s inability to properly distinguish between brain states and thoughts is apparent in an interesting passage on pages 121-122:  Here he discusses research he conducted using fMRIs that identify the medial prefrontal cortex as the region of the brain that is most active when a human subject believes a statement.  He discovered that this area is activated similarly when the subject is considering a mathematical statement (2 + 6 + 8 = 16) and when the subject is considering an ethical belief (“It is good to let your children know that you love them”).  This similarity of activity in the same brain area leads Harris to conclude “that the similarity of belief may be the same regardless of a proposition’s content.  It also suggests that the division between facts and values does not make much sense in terms of underlying brain function.” How true.  Nonetheless, human beings (including quite obviously Harris himself) do make the distinction.

But he goes on:  “This finding of content-independence challenges the fact/value distinction very directly:  for if, from the point of view of the brain, believing ‘the sun is a star’ is importantly similar to believing ‘cruelty is wrong,’ how can we say that scientific and ethical judgments have nothing in common?” (italics added)  Aside from the fact that he does not specify who that “we” is and that he does not prove that anyone has said that there is “nothing in common” between scientific and ethical judgments, there is the fact that, in language, that is, by actually thinking, we can make the distinction and do so all the time.  “This finding” does not challenge the distinction; it merely highlights that the distinction is not dependent upon a “brain state.”  The MPFC may be equally activated, but the human thinker knows the difference.

The underlying problems with Harris’s thesis are at least threefold.  One is that his hostility to and ignorance of religion block him from considering or accurately representing what religion has to say about ethics.  Another is one habitual among conservatives: tilting at straw men while mounted on rickety arguments.  The third is his reductionism, carried to an absurd degree.  Arguments of the type offered in this book mistake the foundation for the whole edifice; it is as if, in desiring to know and understand Versailles, we razed it to its foundation and then said, “There, behold, that is Versailles!”

Note: A partial omission in the quotation in the second-to-last paragraph of the original post of 1 May 2011 was corrected on 2 October 2011.