The Credulous Skeptic: Michael Shermer’s The Moral Arc

The thesis of Michael Shermer’s new book is that morality is about the flourishing of sentient beings through the application of science and reason; in this, he follows in the footsteps of Steven Pinker’s The Better Angels of Our Nature and Sam Harris’s The Moral Landscape.  All three consider science to be the arbiter of all things, and at least Pinker and Shermer argue that moral progress has been made only in the last 500 years or so, i.e., the modern period of scientific discovery and advancement and the Enlightenment.  As far as Shermer is concerned, all that preceded this period is superstition, ignorance, and darkness.  Religion, and especially Christianity, is not only of no use in their projects but actively harmful.

Shermer argues that great strides in moral behavior have occurred since the Enlightenment, particularly in freedom and the abolition of slavery, women’s and gay rights, and animal rights, and that these advances are the direct result of a scientific and materialist worldview and an indirect result of the material prosperity afforded by the industrial revolution, capitalism, and democracy.  He marshals an impressive quantity of evidence to support his claims, and any reader is likely to concede that he has made a compelling case.  That is, if said reader is already sympathetic to Shermer’s libertarianism and is as worshipful as he is of science, the Founding Fathers, and the Enlightenment.

Any less naïve reader, however, is likely to notice a number of problems with Shermer’s book, not least of which is its Western bias.  Although early in the book Shermer refers to the moral progress of our species, virtually all his evidence and examples come from Western Europe and the United States, as if “we” were all that needed to be said about the species in general—even though the populations of Europe and the United States taken together constitute a minority of the world population, and this minority status applies even when the white populations of Australia and New Zealand and elsewhere are factored in.  Thus it would seem that in order to establish that the “species” has made moral progress since the Age of Enlightenment, data from non-Western societies would have to be taken into account.  In other words, Shermer is guilty of a sampling bias.

Compounding this problem—or perhaps the source of the problem—are his naïve and simplistic views of history.  Apparently, Shermer believes that the Enlightenment arose by spontaneous generation, for he dismisses everything that preceded it, most especially religion.  Or rather, Christian religion, which apparently has no moral tradition or intellectual history worthy of the name (never mind the moral and intellectual traditions of other religions, such as Buddhism or Hinduism).  In fact, Shermer’s notion of Christianity appears to be limited to the version with which, as a former born-again Christian, he is most familiar: American evangelical fundamentalism.

For example, in his chapter on slavery, Shermer reads Paul’s letter to Philemon without any sense of the context in which Paul was writing and in fact explicitly dismisses contextual interpretations.  In this, he is more fundamentalist than the fundamentalists.  He is also guilty of presentism, i.e., the logical error of reading the past through the lens of the present; because “we” (Westerners) today abhor slavery, any moral person at any time in history, regardless of era, culture, or civilization, must also have explicitly abhorred slavery and openly called for its abolition.  Never mind that at the time of Paul’s writings, Christians were a distinct minority in the Roman world, no more than a few thousand out of a total population of millions; Christianity had barely begun, and it would be centuries before it built up anything like a coherent intellectual tradition or widespread influence.  Meanwhile, Paul lived under the Roman system, which was exploitative and brutal in a way we today would find extreme.  Paul was certainly smart enough to know that a call for the abolition of slavery, coming from a small group of marginal people following a bizarre new religion, would have no impact on anything.  Thus when he urges Philemon to treat his slave Onesimus as a brother, he is making as radical a statement as one could imagine, in context.  He was not asking Philemon to do anything useless or dangerous—he was asking him to treat Onesimus as a fellow human being, a radical idea at the time, but in doing so Paul planted the seed that eventually grew into the Western ideal of the individual, an ideal that is at the center of Shermer’s own libertarianism.

Shermer has built a career on being a skeptic (even editing a magazine of that name), but his skepticism tends to be selective (in the same way, ironically, that a fundamentalist is selectively skeptical—of evolution, or of climate change, i.e., of things he already rejects).  This selective skepticism is displayed not only in his tendentious reading of Paul but also in his takedown of William Wilberforce, one of the most successful abolitionists, whom he characterizes as “pushy and overzealous” in his “moralizing” and as worrying “excessively about what other people were doing, especially if what they were doing involved pleasure [and] excess.”  Meanwhile, Shermer’s Enlightenment heroes get a complete pass:  he never mentions Locke’s rationalization of the taking of American Indian lands for white settlement (because the Indians did not have “property”), nor that Jefferson, whom Shermer hero-worships, and Washington owned slaves (which would certainly be relevant to his chapter on slavery), nor that Franklin favored using war dogs against Indians who too stubbornly resisted white theft of their real estate.  One has to wonder:  What has Shermer ever read of American history?  Why does he apparently take his heroes at their written word, without investigating the context in which they wrote?  Would that reveal that his idols have feet of clay?  Why is he skeptical of Wilberforce but not of Jefferson?

When Shermer turns to the issue of animal rights, he seems at first to be on firmer ground.  There certainly does seem to be a positive movement toward extending to domesticated animals at least the right not to suffer at human hands.  Animal welfare groups have proliferated, laws protecting animals from harm continue to be expanded, and more people are embracing a vegetarian lifestyle—in the United States today, somewhat more than 7 million people are vegetarians, which is an impressive number until one realizes that they represent only 3.2% of the American population (a figure that compares unfavorably to India, where 42% of households are vegetarian).  While Shermer does recognize the cruelties of industrialized meat production, he misses an opportunity to connect some dots.  One of the effects of industrialization is to specialize the production of goods and services, and the effect of that is to remove the means by which things get done from the view of most people.  In an urbanized world, for example, the making of a ham or a pound of ground beef is invisible to the typical supermarket shopper, who never has to raise an animal from birth, slaughter it, carve up its corpse, etc., so that a cook can look at a hunk of muscle from a steer and call it a beautiful piece of meat; our farming ancestors knew from firsthand experience what that hunk of meat really was.

Likewise, an urbanized population can keep cats and dogs as pets solely for their companionship, can even confer on them the status of humans in fur, because dogs and cats (and to some extent horses) no longer have any utilitarian function; thus giving them moral status of the kind promoted by animal welfare groups and PETA is something we can afford.  We no longer need them to aid in the hunt, keep down rodent pests, or herd our sheep.  Yet every year we kill 2.7 million unwanted dogs and cats, not to mention those that die from neglect, and while those numbers are down, one has to wonder how long we can afford to keep excess animals alive.  However, the point here is that the mistreatment of animals is removed from most Westerners’ daily lives.

As is violence to other human beings.  As the nation-state grew, it appropriated violence to itself and diminished individual violence; justice has replaced revenge, most of the time.  But we have also exported violence, outsourced it so to speak, so that most of our official military violence is committed overseas.  Shermer might do well to read a few books on that:  perhaps those by Chalmers Johnson or Andrew Bacevich, to name just two authors worth consulting.  Or he might refresh his memory of our involvement in the death of Allende and our moral responsibility for the deaths caused by Pinochet, or of the number of Iraqi civilians who died in the second Iraq war (approximately 150,000).  He could also consider the number of people who died as a result of the partition of India (about a million).  And since Shermer claims to be speaking on behalf of the species, perhaps he should consider the deaths and oppression of people in, say, China or North Korea, or many other places in the non-Western world.

In some ways, we Westerners are like our pets—domesticated and cuddly.  But remove the luxuries of domestication and, like feral cats and dogs, we will quickly revert to our basic instincts, which will not be fluffy.  The “long peace” since World War II has not been all that peaceful, and certainly not, within historical time, very long.  As Peter Zeihan (The Accidental Superpower) and others are warning us, the post-World War II global order is fraying, and disorder and its symptoms (e.g., violence) could once again rise to the surface.

Mark Balaguer on Free Will

Into the fray of recent books on whether or not we humans have free will jumps Mark Balaguer’s sprightly book, one of a series on current vogue topics published by MIT Press, intended for a nonspecialist readership. In other words, Balaguer is not writing for other philosophers, but for you and me—and this audience may account for the book’s jauntiness, inasmuch as it appears that authors, and/or their editors and publishers, believe that the only way that the common man or woman can be induced to swallow and digest cogitations on the great questions is by talking to him or her as if he or she were a child. One sometimes imagines the author as rather like a sitcom daddy explaining mortality or sin as he tucks in his four-year-old daughter.

You can tell that I find that style annoying. But despite that, Balaguer does more or less accomplish his goal, which is basically to show that the anti-free-will arguments advanced today by such luminaries of the genre as Daniel Wegner and Sam Harris don’t amount to much, primarily because they tend to assume what remains to be proven. Balaguer does an excellent job of exposing the holes in the determinist arguments, as well as going back to some of the studies that constitute the supposed proofs of those arguments, such as those of Benjamin Libet, and finding that they do not in fact offer such proof. I won’t go into his explanations, as the reader can do that easily enough on his own, especially since the book is short (a mere 126 pages of text) and free of arcane jargon.

Much as I welcome Balaguer’s poking of holes in the determinist hot-air balloon, I do have a bone to pick with his argument, namely that he seems to have a trivial notion of what free will is. Apparently, Balaguer thinks that free will is synonymous with consumer choice; his primary and repeated example is a scenario of someone entering an ice cream parlor and considering whether to order vanilla or chocolate ice cream. Even in his interesting distinction of a “torn decision,” i.e., one in which the options are equally appealing or equally unappealing, he repeats the chocolate vs. vanilla example. In this he is like Sam Harris, the determinist who uses tea vs. coffee as his example. And like Harris, he says nothing about the fact that free will is an ethical concept and as such has nothing to do with consumer choice, or with a great many other kinds of common, everyday choices.

So let me offer a scenario in which the question of free will is truly interesting: Imagine that you are a young man in the antebellum South, say about 1830, and you are the sole heir of a large plantation on which cotton is grown with slave labor. Let’s say you’re about 30 years old and that for all those 30 years you have lived in a social and ideological environment in which slavery has been a natural and God-given institution. You therefore assume that slavery is good and that, when your father dies and you inherit the plantation, you will continue to use slave labor; you will also continue to buy and sell slaves as valuable commodities in their own right, just like the bales of cotton you sell in the markets of New Orleans. Further, you are aware that cotton is an important commodity, crucial to the manufacturing enterprises of the new factories of the northeast and England. You are justly proud (in your own estimation, as well as that of your social class) of the contributions the plantation system has made to the nation and civilization. Because of your background and experience, perhaps at this point you cannot be said to have free will when it comes to the question of whether or not slavery is morally just.

Then one day you learn of people called abolitionists, and perhaps quite by chance you come across a pamphlet decrying the practice of slavery, or perhaps you even hear a sermon by your local preacher demonizing abolitionists as atheists or some such thing, though in the course of that sermon the preacher happens to mention that these atheists presume to claim Biblical authority for their heretical beliefs. Maybe you rush to your copy of the Bible to prove them wrong, only to come across St. Paul’s assertion that there is neither slave nor free in Christ. Perhaps you ignore these hints that what you have always assumed to be true may not be; or perhaps they prick your conscience somewhat, enough to make you begin to look around you with slightly different eyes. Maybe you even become fraught, particularly when you consider that some of the younger slaves on the property are your half-siblings, or perhaps even your own offspring—how could my brother or my son be a slave while I am free? Who can say what nightmares these unwelcome but insistent thoughts engender? At any rate, for the first time in your life, you find that you cannot continue to be a slaveholder without considering the moral implications of the peculiar institution. For the first time, you must actually decide.

The above is certainly an example of what Balaguer calls a torn decision, but unlike chocolate vs. vanilla, it is a moral decision, and therefore profound rather than trivial. And it is in such moral dilemmas, when something that is taken for granted emerges into consciousness, that the concept of free will becomes meaningful. It would therefore seem that scientists, qua scientists, can’t be of much help in deciding whether or not we have free will. Try as they might (and some have, sort of), they cannot design laboratory experiments that address moral dilemmas—it is only in living, in the real world with other people and complex issues, that morality, and therefore free will, can exist. Of course, that does not mean that in exercising free will everyone will always make the morally right decision—we cannot know if the young man of the antebellum South will free his slaves or keep them (or even perhaps decide that the question is too difficult or costly to be answered, so he chooses to ignore it, likely leading to a lifetime of neuroses)—but we do know that once the question has risen into his consciousness, he has no choice but to choose.

Free will, then, operates when a situation rises into consciousness, creating a moral dilemma that can be resolved only by actively choosing a course of action or belief on the basis of moral principles rather than personal preference or benefit. There are dilemmas that superficially resemble moral dilemmas, such as whether or not I ought to lose weight or whether I should frequent museums rather than sports bars, but which are in fact matters of taste rather than ethics. Chocolate vs. vanilla is of the latter kind. To say that I ought to have the vanilla is very different from saying I ought not to own slaves, even though both statements use the same verb. It is disappointing that philosophers fail to make the distinction.

Eichmann Before Jerusalem: A Review

Eichmann:  Before, In, and After Jerusalem

“One death is a tragedy; a million deaths is a statistic.”  Whether or not Stalin actually ever said this is irrelevant to the point that it makes, for it tells us in a most condensed form the totalitarian view of human beings, as exemplified not only by the Stalinist era in Russia but especially by the short but deadly reign of National Socialism in Germany.  Unlike the socialism found in contemporary European societies such as Sweden and France, in which the individual human being is recognized as a person regardless of his or her circumstances, and thus equally worthy of education, medical care, and hope, the “socialism” of the Nazis stripped the individual of personhood by subsuming him in a collective identity, so that this body was interchangeable with that body: the individual was not merely representative of the collective he was assigned to (born as) but was in fact that collective, with no more existence independent of that collective than a cell has independent of its body.  Individuals thus were considered and treated not as symbols of the collective (Jews, gypsies, homosexuals, Poles, intellectuals, etc., as well as “Germans” or “Aryans”) but as the collective itself.  The purpose of the individual was to sustain the collective, just as the purpose of a cell is to sustain the body.  No one is interested in the dignity and autonomy of a cell.

Click here to read the complete review.

Why Determinism?

The eternal debate between determinism and free will has lately taken a new form. Determinism has been reincarnated in the shape of neuroscience, with attendant metaphors of computers, chemistry, machines, and Darwinism. Meanwhile, defenders of free will seem to have run out of arguments, particularly since, if they wish to be taken seriously, they dare not resort to a religious argument. That the debate is virtually eternal suggests that it is not finally resolvable; it could be said in fact that the two sides are arguing about different things, even though they often use the same terminology.

Determinism’s popularity is most clearly suggested by the sales figures for books on the subject and by the dominance of the view in popular science writing. Such books are widely reviewed, while those arguing for free will are neglected, especially by the mainstream press.

The question, then, is not whether or not we have free will, or whether or not we are wholly determined in all our thoughts and actions, but rather why, at this point in time and particularly in this country, determinism is so much more popular than free will.

Today’s determinism is not the same as the ancient concept of fate. Fatalism was not so much about determinism or, as the Calvinists posited, predestination; fatalism did not pretend to know what would happen, but rather held that fate was a matter of unpredictability, of whim (on the part of the universe or of the gods, etc.), and in fact left some room for free will, in a what-will-be-will-be sort of way; i.e., because outcomes were unpredictable, one had to choose, one had to act, and let the dice fall where they may. The tragic flaw of hubris illustrates exactly what is wrong with any determinism: the delusion that one can stop the wheel of fate from turning past its apex, i.e., that through prediction one can control.

Determinists worship predictability and control. I once read somewhere the idea that, if everything that has already happened were known, everything that will happen could be accurately predicted. Extreme as this statement is, it accurately summarizes the mindset of the determinists. It also suggests why determinism is so attractive in a scientific age such as ours, for science is not only about the gathering of facts and the formulation of theories but also about using those theories to make predictions.

Given the apparent power of science to accurately predict, and given that prediction is predicated on a deterministic stance, it is not surprising that scientists should turn their attention to the human condition, nor that scientists, being what they are, tend to look for, and find, evidence that human thoughts and behavior are determined by genes, neurons, modules, adaptations, what have you, and are therefore predictable. And it further is not surprising that, in a restless and rapidly changing world, laymen are attracted to these ideas. Certainty is an antidote to powerlessness.

If we are religiously minded, we find certainty in religion; hence the rise of politically and socially powerful fundamentalist movements today. If we are not religious, we may find certainty in New Age nostrums, ideologies, art, bottom lines, celebrity worship, or even skepticism (no one is more certain of his or her own wisdom than the skeptic). If we are politicians, we look for certainty and security in megabytes of data. If we are scientifically minded, we find certainty in science. But certainty is not science. It is a common psychological need in an age of uncertainty.

In satisfying this need for certainty, determinism often leads to excessive self-confidence and egotism—which in turn leads to simplifications and dismissal of complexity, ambivalence, and randomness. Determinism is teleology. Today’s determinists may have discarded God, but they still believe that He does not play dice. They are, in short, utopians. We all know where utopias end up. That much at least we can confidently predict.

Sam Harris’s Moral Swampland

My original intention was to write a long and detailed critique of Sam Harris’s most recent book The Moral Landscape: How Science Can Determine Human Values (2010).  Harris is the author of two previous books promoting a naïve form of antireligion and is one of the Four Horsemen of Atheism, a posse of atheist secular conservatives also known as the New Atheists, which also includes Daniel Dennett, Richard Dawkins, and Christopher Hitchens.  However, since the book has already been widely reviewed and most of its manifold failings have been identified (among them that Harris’s version of ethics is utilitarianism in science drag), I will not repeat those points and instead will focus on two problems that particularly struck me, one of which has been alluded to but not detailed, the other of which has not been mentioned in the reviews I read.

The first problem, the one some reviewers have noted, is Harris’s apparent lack of interest in philosophers who have previously (over many centuries) wrestled with questions of morality.  As I read his book, I became aware that one of his strategies is to create colossal straw-man arguments; he creates extreme but vague versions of his opponents and then knocks them down, but he rarely names names or provides quotations.  For example, on page 17 he asserts that “there is a pervasive assumption among educated people that either such [moral] differences don’t exist, or that they are too variable, complex, or culturally idiosyncratic to admit of general value judgments,” but he does not identify whom he’s talking about, nor does he quote anyone who holds such a view; the statement is also absolute (as so many of his statements are), in that he does not qualify the category: he does not say, for example, “80% of educated people,” nor does he define what he means by “educated.”  Furthermore, the word “pervasive” has negative valence without explicitly declaring it; anything “pervasive” has taken over (evil is pervasive, good is universal, for example).

On page 20 he states that “Many social scientists incorrectly believe that all long-standing human practices must be evolutionarily adaptive,” but he does not identify who those many social scientists are, nor specify how many constitutes many; nor does he quote any of them to support or even illustrate his assertion; nor does he offer so much as a footnote reference to any social scientists who allegedly hold this view.  In his hostile statements on religion, one who pays attention will note that he has not only oversimplified religion but also seems to limit his conception of religion to the most conservative strands of contemporary Judeo-Christian systems.  He rarely refers to theologians, and then only to contemporary and conservative ones.  His bibliography runs to 40 pages, or about 800 sources (give or take), but the only theologians in this extensive listing are J. C. Polkinghorne and N. T. Wright.  Nothing of Augustine or Aquinas, nothing of Barth or Bonhoeffer or Fletcher.  If he had bothered to consult any of these and other theologians and moral philosophers, he might have seen that his ideas have already been explored more extensively and more deeply than he manages to do in this book.  He does try to wriggle out of this problem in the long first footnote to Chapter 1, but the attempt is disingenuous.

I also question that he has actually carefully and completely read all 800 (give or take) sources he lists.  It would take an inordinate amount of time to read all of them, to read them carefully to ensure one has properly understood them, to take adequate notes, and to think about how they fit into or relate to one’s thesis and argument.  For example, it strikes me as odd that he lists 10 sources by John R. Searle, 3 of which are densely argued books, but refers to Searle only once in the body of his book and 3 times, obliquely, in his endnotes.  One wonders, in what sense then is Searle a source?  Is he a source by association only?

The other major problem I have with Harris’s book is in my view more serious:  He mistakes “brain states” for thoughts.  This is a common error among those who imagine that scanning the brain to measure areas of activity, or measuring the levels of various hormones such as oxytocin, suffices to explain human thought.  (Harris confesses to a measurement bias on p. 20 when he writes that “The world of measurement and the world of meaning must eventually be reconciled.”)  That a particular region of the brain is “lit up” tells us only where the thought is occurring—it tells us nothing about the content of that thought nor anything about its validity.  This is because human thoughts are generated, shared, discussed, modified, and passed on through language, through words, which while processed in the brain nevertheless have meaningful independence from any one particular brain, and therefore have a degree of freedom from “brain states.”

Harris’s inability to properly distinguish between brain states and thoughts is apparent in an interesting passage on pages 121-122:  Here he discusses research he conducted using fMRI scans, which identified the medial prefrontal cortex as the region of the brain that is most active when a human subject believes a statement.  He discovered that this area is activated similarly when the subject is considering a mathematical statement (2 + 6 + 8 = 16) and when the subject is considering an ethical belief (“It is good to let your children know that you love them”).  This similarity of activity in the same brain area leads Harris to conclude “that the similarity of belief may be the same regardless of a proposition’s content.  It also suggests that the division between facts and values does not make much sense in terms of underlying brain function.”  How true.  And yet human beings (including, quite obviously, Harris himself) do make the distinction.

But he goes on:  “This finding of content-independence challenges the fact/value distinction very directly:  for if, from the point of view of the brain, believing ‘the sun is a star’ is importantly similar to believing ‘cruelty is wrong,’ how can we say that scientific and ethical judgments have nothing in common?” (italics added)  Aside from the fact that he does not specify who that “we” is, and that he does not prove that anyone has said that there is “nothing in common” between scientific and ethical judgments, there is the fact that, in language, that is, by actually thinking, we can make the distinction and do so all the time.  “This finding” does not challenge the distinction; it merely highlights that the distinction is not dependent upon a “brain state.”  The MPFC may be equally activated, but the human thinker knows the difference.

The underlying problems with Harris’s thesis are at least threefold.  One is that his hostility to and ignorance of religion block him from considering or accurately representing what religion has to say about ethics.  Another is one habitual among conservatives: to tilt at straw men while mounted on rickety arguments.  The third is his reductionism, taken to an absurd degree.  Arguments of the type offered in this book mistake the foundation for the whole edifice; it is as if, in desiring to know and understand Versailles, we razed it to its foundation and then said, “There, behold, that is Versailles!”

Note: A partial omission in a quotation in the second-to-last paragraph of the original post of 1 May 2011 was corrected on 2 October 2011.

Evolution and Creationism: Consider the Botfly

In the United States at least, the argument from design has traditionally been used to support a literal reading of the Genesis account of creation (setting aside the fact that Genesis offers two versions), and there remain today many people who believe in the “young earth theory” and hold that fossils and other indications of great swaths of geologic and cosmic time are simply erroneously interpreted by scientists or are deliberate deceptions by God meant to trip up the proud and faithless.  Other creationists, however, conceding the scientific evidence for great stretches of time and for evolution, resort to the dodge of Intelligent Design.  One has to fan away a great deal of smoke before one can get to the fundamental theses of the proponents of ID, and even then one may not be sure exactly what they believe.

Go to the Evolution and Creationism page to read the full essay.