
Death of a Bug

The other day I squashed a bug. It was quite small, rather rounded in shape, and making its way slowly across the surface of my nightstand. I am usually not insecticidal, but having a bug of any conformation so proximate to my bed brings out my squeamishness. And recently my condo association had sent out a newsletter with an article about bedbugs. This was probably not a bedbug, but nonetheless, it had to die.

I regretted my brutality immediately. The poor thing had as much right to its life as I have to mine. In the great scheme of things, the life of a human is really of no more importance than the life of any other creature. We got here through the same process of evolution as they did, and since I do not subscribe to any form of teleology, I do not consider Homo sapiens to be any more perfect nor any more the apex and fulfillment of some great cosmic plan than that poor bug and his cohorts. It is the attitude that we do count for more that has led to so much environmental destruction and so much cruelty, not only to other animals but also to other people. For as eugenics exposed, the idea that humans are the perfection of evolution leads all too easily to the notion that my humans, the people of my group, are more fully perfect than yours. Hence, genocide.

It is therefore not surprising that good souls who reject cruelty to other people also reject cruelty to animals; and also not surprising what psychologists tell us of serial killers, that they tortured and killed animals in their childhoods. Many children, especially boys, do mistreat animals, at least of the insect kind (remember watching ants burst into flame under the magnifying glass?), but most children, even boys, soon outgrow that tendency. Serial killers apparently do not, which suggests that there is an element of immaturity, even of that primitivism that can be both so charming and occasionally so alarming in children, in the serial killer’s makeup. Something having to do with the child’s sense of himself or herself as the center of the world, the world being that which was designed for one’s gratification.

There are other ways in which this juvenile belief that the world owes us gratification can be manifest. The despoiling of the natural world for profit, so that we may live in an abundance that exceeds what the world actually can supply to us, fits this bill. We take not only what is our natural due but also that which is the natural due of all the other creatures with which evolution has populated this planet, which is why so many are being driven into extinction (why so many already have been), and why, when we know perfectly well that our “lifestyles” are warming the planet, we continue to pillage as if there were no tomorrow—until one day perhaps there literally will not be.

Perhaps I am making too much of the squashing of a mere bug. I mentioned that we are the product of the same process of evolution that led to all other creatures, and that process is anything but benign. The process of life is the process of death. Virtually everything that lives does so by killing and eating some other living thing. Even a vegan lives by killing carrots and broccoli and mushrooms (do carrots scream in pain and terror when we yank them from the ground?). There is no escape from this round of death and life. The vegan may not eat any animal product, but his or her efforts make little difference in the great scheme of things—there are predators enough to override the effects of the vegan. That is how evolution works its mighty wonders.

Which is why I am not persuaded by those good souls who imagine that we can end suffering and wars and crime and all the other means and ways that we wreak havoc on each other and the world. I am not hopeful that we who live in the so-called developed world will rein in our greed for money and things for the sake of the planet or even for the sake of the starving and terrorized millions of so much of the rest of the world, or even for those who live within our own borders. Like all other creatures, we kill to live. Unlike other creatures, we can overkill. All too often we do, both literally and metaphorically.

That little bug on my nightstand was most likely harmless, at least to me, and maybe it even had some important function in the ecology of my apartment. Or maybe it was just quietly living its own life. I killed it anyway.

See also my “Requiem for a Tree” at this site.


Evolution and Theodicy

“Why is there evil in the world?” This question has been asked by philosophers and theologians and ordinary men and women for millennia. Today scientists, particularly evolutionary biologists, neuroscientists, and evolutionary/neuropsychologists have joined the effort to explain evil: why do people indulge in violence, cheating, lies, harassment, and so on. There is no need here to itemize all the behaviors that can be labeled evil. What matters is the question of “why?”

The question of “why is there evil in the world?” assumes the premise that evil is abnormal while good (however defined) is normal—the abnorm vs. the norm, if you will. Goodness is the natural state of man, the original condition, and evil is something imposed on or inserted into the world from some external, malevolent source. In Genesis, God created the world and pronounced it good; then Adam and Eve succumbed to the temptations of the Serpent and brought evil and therefore death into the world (thus, death is a manifestation of evil, immortality the natural state of good). Unfortunately, the Bible does not adequately account for the existence of the Serpent or Satan, so it was left to Milton to fill in the story. Gnostics, Manicheans, and others posited the existence of two deities, one good and the other evil, and constructed a vision of a cosmic struggle between light and darkness that would culminate in the triumph of good—a concept that filtered into Christian eschatology. The fact that Christian tradition sees the end times as a restoration to a state of Adamic or Edenic innocence underscores the notion that goodness is the natural, default state of man and the cosmos.

Contemporary secular culture has not escaped this notion of the primeval innocence of man. It has simply relocated Eden to the African savannah. When mankind was still at the hunter-gatherer stage, so the story goes, people lived in naked or near-naked innocence; they lived in egalitarian peace with their fellows and in harmony with nature. Alas, with the invention of agriculture and the consequent development of cities and civilizations, egalitarianism gave way to greed, social hierarchies, war, imperialism, slavery, patriarchy, all the factors that cause people to engage in violence, oppression, materialism, and so on; further, these faults of civilizations caused the oppressed to engage in violence, theft, slovenliness, and other sins. Laws and punishments and other means of control and suppression were instituted to keep the louts in their place. Many people believe that to restore the lost innocence of our hunter-gatherer origins, we must return to the land, re-engage with nature, adopt a paleo diet, restructure society according to matriarchal and/or socialist principles, and so on. Many people (some the same, some different from the back-to-nature theorists) envision a utopian future in which globalization, or digitization, or general good feeling will restore harmony and peace to the whole world.

Not too surprisingly, many scientists join in this vision of a secular peaceable kingdom. Not a few evolutionary biologists maintain that human beings are evolutionarily adapted to life on the savannah, not to life in massive cities, and that the decline in the health, intelligence, and height of our civilized ancestors can be blamed on the negative effects of a change in diet brought on by agriculture (too much grain, not enough wild meat and less variety of plants) and by the opportunities for diseases of various kinds to colonize human beings too closely crowded together in cities and too readily exposed to exotic pathogens spread along burgeoning trade routes. Crowding and competition lead to violent behaviors as well.

Thus, whether religious or secular, the explanations of evil generally boil down to this: that human beings are by nature good, and that evil is externally imposed on otherwise good people; and that if circumstances could be changed (through education, redistribution of wealth, exercise, diet, early childhood interventions, etc.), our natural goodness would reassert itself. Of course, there are some who believe that evil behavior has a genetic component, that certain mutations or genetic defects are to blame for psychopaths, rapists, and so on, but again these genetic defects are seen as abnormalities that could be managed by various eugenic interventions, from gene or hormone therapies to locking up excessively aggressive males to ensure they don’t breed and pass on their defects to future generations.

Thus it is that in general we are unable to shake off the belief that good is the norm and evil is the abnorm, whether we are religious or secular, scientists or philosophers, creationists or Darwinists. But if we take Darwinism seriously, we have to admit that “evil” is the norm and that “good” is the abnorm—nature is red in tooth and claw, and all of the evil that men and women do is also found in other organisms; in fact, we can say that the “evil” done by other organisms long precedes the evil that men do, and we can also say, based on archaeological and anthropological evidence, that men have been doing evil since the very beginning of the human line. In other words, there never was an Eden, never a Noble Savage, never a long-ago Golden Age from which we have fallen or declined—nor therefore is there any prospect of an imminent or future Utopia or Millennial Kingdom that will restore mankind to its true nature, because there is nothing to restore.

The evolutionary function of “evil” is summarized in the term “natural selection”: the process by which death winnows out the less fit from the chance to reproduce (natural selection works on the average, meaning of course that some who are fit die before they can reproduce and some of the unfit survive long enough to produce some offspring, but on average fitness is favored). Death, usually by violence (eat, and then be eaten), is necessary to the workings of Darwinian evolution. An example: When a lion or pair of lions defeat an older pride lion and take over his pride, they kill the cubs of the defeated male, which has the effect of bringing the lionesses back into heat so that the new males can mate with them and produce their own offspring; their task is then to keep control of the pride long enough for their own cubs to reach reproductive maturity. Among lions, such infanticide raises no moral questions, whereas among humans it does.

There is no problem of evil but rather the problem of good: not why is there “evil” but rather why is there “good”? Why do human beings consider acts like infanticide to be morally evil while lions do not? Why do we have morality at all? I believe that morality is an invention, a creation of human thought, not an instinct. It is one of the most important creations of the human mind, at least as great as the usually cited examples of human creativity (art, literature, science, etc.), if not greater, considering how much harder won it is and how much harder it is to maintain. Because “good” is not natural, it is always vulnerable to being overwhelmed by “evil,” which is natural: peace crumbles into war; restraint gives way to impulse; holism to particularism; agape to narcissism; love to lust; truth to lie; tolerance to hate. War, particularism, narcissism, etc., protect the self of the person and the tribe, one’s own gene pool so to speak, just as the lion kills his competitor’s cubs to ensure the survival of his own. We do not need to think very hard about doing evil; we do need to think hard about what is good and how to do it. It is something that every generation must relearn and rethink, especially in times of great stress.

It appears that we are in such a time today. Various stressors (the economy, the climate, overpopulation and mass migrations, religious conflict amid the dregs of moribund empires) are pushing the relationship of the tribes versus the whole out of balance, and the temptations are to put up walls, dig trenches, draw up battle lines, and find someone other than ourselves to blame for our dilemmas. A war of all against all is not totally out of the question, and it may be that such a war or wars will eventuate in a classic Darwinian victory for one group over another—but history (rather than evolution) tells us that such a victory is often less Darwinian than Pyrrhic.

Donald Trump: Psychoanalysis vs. Ethics

Is Donald Trump a narcissist? Is he a psychopath? Is he mentally unstable? These questions, and others of the same ilk, have been asked (and often answered in the affirmative) throughout the primary campaign season. To a lesser extent, similar questions have been asked about his followers. There has been, in other words, a lot of psychoanalyzing. It’s as if the DSM-5, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, has become the primary guide to politics and politicians.

Hillary Clinton has also, and for a longer time (at least since the Lewinsky scandal), been subjected to armchair and coffee house analysis (she’s in denial, etc.), even though, given that she is, for a politician, a surprisingly private person (i.e., uptight? secretive? not warm?), one wonders how anyone can legitimately diagnose her. Bill Clinton has also, of course, been parsed and dissected (narcissist, sex addict, etc.). Surprisingly, there has been little psychoanalysis of Bernie Sanders, perhaps because, as Hillary’s gadfly, he has dominated the high ground of principle.

Perhaps when a serious candidate actually has principles and stays consistent with them, psychologizing is unnecessary and even irrelevant. Principles have the effect of overriding personal quirks and biases. They are not generated from within this or that individual, and therefore are not reflective only of that individual, but are generated in a long process of shared thought. We come to principles through reason (Hannah Arendt might have said, through reason paired with imagination), not through impulse; indeed, the point of principle is to put a bridle on impulse, to restrain the impetuousness of the moment in favor of the longer, wider view. In Pauline terms, it replaces the natural or carnal man with the spiritual man; in late Protestant terms, it replaces immediate with delayed gratification.

So while Trump may or may not be a psychopath, a narcissist, or mentally unstable or ill, which none of us can really know, he is an unprincipled man. His constant shape-shifting, self-contradictions, denials, and off-the-cuff bluster are the signs of an impulsive man whose thoughts and words are not subjected to the vetting of a set of principles that can tell him whether he is right or wrong. He has at long last no shame, no decency, because he has no principles to tell him what is decent or shameful. In other words, he is typical of human beings, men and women, when they have nothing higher or wider than themselves as guides to behavior. This is not the place to go in depth into the utility of moral principle, but just as an example, something as simple as “do unto others as you would have others do unto you” can restrain the natural selfish impulse to grab as much as you can for yourself.

Anyone who has taken an introductory course in psychology or who has paged through any of the editions of the DSM has found plenty of evidence that they are in some way or another mentally unstable or unhealthy. Just about anyone can look at the list of defining characteristics of, say, narcissistic personality disorder (do you think you are special or unique?), or antisocial personality disorder (are you opinionated and cocky?), or perfectionism, and wonder, in a bit of self-diagnosis, if they should seek help. Welcome to the cuckoo’s nest. Or rather, welcome to humanity.

But for the concept of a disorder to exist, there has to be a concept of an order, i.e., a definition of what being a normal person is. Ironically, psychology is of no help to us here. The DSM-5 is nearly one thousand pages long, and according to its critics adds more previously normal or eccentric behaviors to its exhaustive, not to say fatiguing, list of mental maladies. Its critics also charge that it provides ever more excuses for psychiatrists and physicians to prescribe very profitable drugs to people who are really just normal people. After all, they point out, life is not a cakewalk, and people are not churned out like standardized units.

Principle, i.e., morality, ethics, on the other hand, can be of great help here. It is obvious that the followers of Trump have not been dissuaded from supporting him because of the amateur psychoanalyses of pundits and opponents. Clearly they like those traits which the alienists are diagnosing. But what if someone started criticizing him on moral grounds, what if someone performed something analogous to “Have you no sense of decency, sir?” This question, posed by Joseph N. Welch to Senator Joe McCarthy during the televised Army–McCarthy hearings of 1954, was a key moment in the demise of one of the worst men in American political history. Welch did not psychoanalyze McCarthy, nor did Edward R. Murrow in his famous television broadcast on McCarthy’s methods, and McCarthy was not taken away in a straitjacket. He was taken down by morally principled men and women who had had enough of his cruelty and recklessness.

The Credulous Skeptic: Michael Shermer’s The Moral Arc

The thesis of Michael Shermer’s new book is that morality is about the flourishing of sentient beings through the application of science and reason; in this, he follows in the footsteps of Steven Pinker’s The Better Angels of Our Nature and Sam Harris’s The Moral Landscape.  All three consider science to be the arbiter of all things, and at least Pinker and Shermer argue that moral progress has been made only in the last 500 years or so, i.e., the modern period of scientific discovery and advancement and the Enlightenment.  As far as Shermer is concerned, all that preceded this period is superstition, ignorance, and darkness.  Religion, and especially Christianity, is not only of no use in their projects but actually harmful.

Shermer argues that great strides in moral behavior have occurred since the Enlightenment, particularly in freedom and the abolition of slavery, women’s and gay rights, and animal rights, and that these advances are the direct result of a scientific and materialist worldview and an indirect result of the material prosperity afforded by the industrial revolution, capitalism, and democracy.  He marshals an impressive quantity of evidence to support his claims, and any reader is likely to concede that he has made a compelling case.  That is, if said reader is already sympathetic to Shermer’s libertarianism and as worshipful of science, the Founding Fathers, and the Enlightenment as Shermer is.

Any less naïve reader, however, is likely to notice a number of problems with Shermer’s book, not least of which is its Western bias.  Although early in the book Shermer refers to the moral progress of our species, virtually all his evidence and examples come from Western Europe and the United States, as if “we” were all that needed to be said about the species in general—even though the populations of Europe and the United States taken together constitute a minority of the world population, and this minority status applies even when the white populations of Australia and New Zealand and elsewhere are factored in.  Thus it would seem that in order to establish that the “species” has made moral progress since the Age of Enlightenment, data from non-Western societies would have to be taken into account.  In other words, Shermer is guilty of a sampling bias.

Compounding this problem—or perhaps the source of the problem—are his naïve and simplistic views of history.  Apparently, Shermer believes that the Enlightenment arose by spontaneous generation, for he dismisses everything that preceded it, most especially religion.  Or rather, Christian religion, which apparently has no moral tradition or intellectual history worthy of the name (never mind the moral and intellectual traditions of any other religious tradition, such as Buddhism or Hinduism).  In fact, Shermer’s notion of Christianity appears to be limited to that version with which, as a former born-again, he is most familiar, American evangelical fundamentalism.

For example, in his chapter on slavery, Shermer reads Paul’s letter to Philemon without any sense of the context in which Paul was writing and in fact explicitly dismisses contextual interpretations.  In this, he is more fundamentalist than the fundamentalists.  He is also guilty of presentism, i.e., the logical error of reading the past through the lens of the present; because “we” (Westerners) today abhor slavery, it must therefore be that any moral person at any time in history, regardless of how long ago or of what culture or civilization, should also explicitly abhor slavery and also openly call for its abolition.  Never mind that at the time of Paul’s writings, Christians were a distinct minority in the Roman world, no more than a few thousands out of a total population of millions; Christianity had barely begun, and it would be centuries before it had built up anything like a coherent intellectual tradition or widespread influence.  Meanwhile, Paul lived under the Roman system, which was exploitative and brutal in a way we today would find extreme.  Paul was certainly smart enough to know that calling for the abolition of slavery, by a small group of marginal people following a bizarre new religion, would have no impact on anything.  Thus when he urges Philemon to treat his slave Onesimus as a brother, he is making as radical a statement as one could imagine, in context.  He was not asking Philemon to do anything useless or dangerous—he was asking him to treat Onesimus as a fellow human being, a radical idea at the time, but in doing so Paul planted the seed that eventually grew into the Western ideal of the individual, an ideal which is at the center of Shermer’s own libertarianism.

Shermer has built a career on being a skeptic (even editing a magazine of that name), but his skepticism tends to be selective (in the same way, ironically, as a fundamentalist is selectively skeptical—of evolution, or climate change, i.e., of things he already rejects).  This selective skepticism is displayed not only in his tendentious reading of Paul but also in his takedown of William Wilberforce, one of the most successful abolitionists, whom he characterizes as “pushy and overzealous” in his “moralizing” and as worrying “excessively about what other people were doing, especially if what they were doing involved pleasure [and] excess.”  Meanwhile, Shermer’s Enlightenment heroes get a complete pass:  he never mentions Locke’s rationalization of the taking of American Indian lands for white settlement (because the Indians did not have “property”), nor that Jefferson, whom Shermer hero-worships, and Washington owned slaves (which would certainly be relevant to his chapter on slavery), or that Franklin favored using war dogs against Indians who too stubbornly resisted white theft of their real estate.  One has to wonder:  What has Shermer ever read of American history?  Why does he apparently take his heroes at their written word, without investigating the context in which they wrote?  Would that reveal that his idols have feet of clay?  Why is he skeptical of Wilberforce but not of Jefferson?

When Shermer turns to the issue of animal rights, he seems at first to be on firmer ground.  There certainly does seem to be a positive movement in the direction of extending at least the right not to suffer at human hands to domesticated animals.  Animal welfare groups have proliferated, laws protecting animals from harm continue to be expanded, and more people are embracing a vegetarian lifestyle—in the United States today, somewhat more than 7 million people are vegetarians, which is an impressive number until one realizes that they represent only 3.2% of the American population (however, that compares unfavorably to India, where 42% of households are vegetarian).  While Shermer does recognize the cruelties of industrialized meat production, he misses an opportunity to connect some dots.  One of the effects of industrialization is to specialize the production of goods and services, and the effect of that is to remove the means by which things get done from the view of most people.  In an urbanized world, for example, the making of a ham or a pound of ground beef is invisible to the typical supermarket shopper, who never has to raise an animal from birth, slaughter it, carve up its corpse, etc., so that a cook can look at a hunk of muscle from a steer and call it a beautiful piece of meat; our farming ancestors knew what that hunk of meat really was from firsthand experience.

Likewise, an urbanized population can keep cats and dogs as pets solely for their companionship, can even confer on them the status of humans in fur because dogs and cats (and to some extent horses) no longer have any utilitarian function; thus giving them moral status of the kind promoted by animal welfare groups and PETA is something we can afford.  We don’t need them to aid in the hunt, keep down rodent pests, or herd our sheep anymore.  Yet every year we kill 2.7 million unwanted dogs and cats, not to mention those that die from neglect, and while those numbers are down, one has to wonder how long we can afford to keep excess animals alive.  However, the point here is that the mistreatment of animals is removed from most Westerners’ daily lives.

As is violence to other human beings.  As the nation state grew, it appropriated violence to itself and diminished individual violence; justice has replaced revenge, most of the time.  But we have also exported violence, outsourced it so to speak, so that most of our official military violence is committed overseas.  Shermer might do well to read a few books on that:  perhaps those by Chalmers Johnson, or Andrew Bacevich, to name just two authors worth consulting.  Or he might refresh his memory of our involvement in the death of Allende and our moral responsibility for the deaths caused by Pinochet, or of the numbers of Iraqi civilians who died in the second Iraq war (approximately 150,000).  He also could consider the number of people who died as the result of the partitioning of India (about a million).  And since Shermer claims to be speaking on behalf of the species, perhaps he should consider the deaths and oppression of people in, say, China or North Korea, or many other places in the non-Western world.

In some ways, we Westerners are like our pets—domesticated and cuddly.  But remove the luxuries of domestication and, like feral cats and dogs, we will quickly revert to our basic instincts, which will not be fluffy.  The “long peace” since World War II has not been all that peaceful, and certainly not, within historical time, very long.  As Peter Zeihan (The Accidental Superpower) and others are warning us, the post-World War II global order is fraying, and disorder and its symptoms (e.g., violence) could once again rise to the surface.

Mark Balaguer on Free Will

Into the fray of recent books on whether or not we humans have free will jumps Mark Balaguer’s sprightly book, one of a series on current vogue topics published by MIT Press, intended for a nonspecialist readership. In other words, Balaguer is not writing for other philosophers, but for you and me—and this audience may account for the book’s jauntiness, inasmuch as it appears that authors, and/or their editors and publishers, believe that the only way that the common man or woman can be induced to swallow and digest cogitations on the great questions is by talking to him or her as if he or she were a child. One sometimes imagines the author as rather like a sitcom daddy explaining mortality or sin as he tucks in his four-year-old daughter.

You can tell that I find that style annoying. But despite that, Balaguer does more or less accomplish his goal, which is basically to show that the anti-free will arguments advanced today by such luminaries of the genre as Daniel Wegner and Sam Harris don’t amount to much, primarily because they tend to assume what yet remains to be proven. Balaguer does an excellent job of exposing the holes in the determinist arguments, as well as going back to some of the studies that constitute the supposed proofs of those arguments, such as those of Benjamin Libet, and finding that they do not in fact offer such proof. I won’t go into his explanations, as the reader can do that easily enough on his own, especially since the book is short (a mere 126 pages of text) and free of arcane jargon.

Much as I welcome Balaguer’s poking of holes in the determinist hot-air balloon, I do have a bone to pick with his argument, namely that he seems to have a trivial notion of what free will is. Apparently, Balaguer thinks that free will is synonymous with consumer choice; his primary and repeated example is a scenario of someone entering an ice cream parlor and considering whether to order vanilla or chocolate ice cream. Even in his interesting distinction of a “torn decision,” i.e., one in which the options are equally appealing or equally unappealing, he repeats the chocolate vs. vanilla example. In this he is like Sam Harris, the determinist who uses tea vs. coffee as his example. And like Harris, he says nothing about the fact that free will is an ethical concept and as such has nothing to do with consumer choice—and a lot of other kinds of common, everyday choices as well.

So let me offer a scenario in which the question of free will is truly interesting: Imagine that you are a young man in the antebellum South, say about 1830, and you are the sole heir of a large plantation on which cotton is grown with slave labor. Let’s say you’re about 30 years old and that for all those 30 years you have lived in a social and ideological environment in which slavery has been a natural and God-given institution. You therefore assume that slavery is good and that, when your father dies and you inherit the plantation, you will continue to use slave labor; you will also continue to buy and sell slaves as valuable commodities in their own right, just like the bales of cotton you sell in the markets of New Orleans. Further, you are aware that cotton is an important commodity, crucial to the manufacturing enterprises of the new factories of the northeast and England. You are justly proud (in your own estimation, as well as that of your social class) of the contributions the plantation system has made to the nation and civilization. Because of your background and experience, perhaps at this point you cannot be said to have free will when it comes to the question of whether or not slavery is morally just.

Then one day you learn of people called abolitionists, and perhaps quite by chance you come across a pamphlet decrying the practice of slavery, or perhaps you even hear a sermon by your local preacher demonizing abolitionists as atheists or some such thing, though in the course of that sermon the preacher happens to mention that these atheists presume to claim Biblical authority for their heretical beliefs. Maybe you rush to your copy of the Bible to prove them wrong, only to come across St. Paul’s assertion that there is neither slave nor freedman in Christ. Perhaps you ignore these hints that what you have always assumed to be true may not be; or perhaps they prick your conscience somewhat, enough to make you begin to look around you with slightly different eyes. Maybe you even become fraught, particularly when you consider that some of the younger slaves on the property are your half-siblings, or perhaps even your own offspring—how could my brother or my son be a slave while I am free? Who can say what nightmares these unwelcome but insistent thoughts engender? At any rate, for the first time in your life, you find that you cannot continue to be a slaveholder without considering the moral implications of the peculiar institution. For the first time, you must actually decide.

The above is certainly an example of what Balaguer calls a torn decision, but unlike chocolate vs. vanilla, it is a moral decision, and therefore profound rather than trivial. And it is in such moral dilemmas, when something that is taken for granted emerges into consciousness, that the concept of free will becomes meaningful. It would therefore seem that scientists, qua scientists, can’t be of much help in deciding whether or not we have free will. Try as they might (and some have, sort of), they cannot design laboratory experiments that address moral dilemmas—it is only in living, in the real world with other people and complex issues, that morality, and therefore free will, can exist. Of course, that does not mean that in exercising free will everyone will always make the morally right decision—we cannot know if the young man of the antebellum South will free his slaves or keep them (or even perhaps decide that the question is too difficult or costly to be answered, so he chooses to ignore it, likely leading to a lifetime of neuroses)—but we do know that once the question has risen into his consciousness, he has no choice but to choose.

Free will, then, operates when a situation rises into consciousness, creating a moral dilemma that can be resolved only by actively choosing a course of action or belief on the basis of moral principles rather than personal preference or benefit. There are dilemmas that superficially resemble moral dilemmas, such as whether or not I ought to lose weight or whether or not I should frequent museums rather than sports bars, but which are in fact matters of taste rather than ethics. Chocolate vs. vanilla is of the latter kind. To say that I ought to have the vanilla is very different from saying I ought not to own slaves, even though both statements use the same verb. It is disappointing that philosophers fail to make the distinction.

Eichmann Before Jerusalem: A Review

Eichmann: Before, In, and After Jerusalem

“One death is a tragedy; a million deaths is a statistic.” Whether or not Stalin actually ever said this is irrelevant to the point that it makes, for it tells us in a most condensed form the totalitarian view of human beings, as exemplified not only by the Stalinist era in Russia but especially by the short but deadly reign of National Socialism in Germany. Unlike the socialism found in contemporary European societies such as Sweden and France, in which the individual human being is recognized as a person regardless of his or her circumstances, and thus equally worthy of education, medical care, and hope, the “socialism” of the Nazis stripped the individual of personhood by subsuming him in a collective identity, so that this body was interchangeable with that body: each person was not merely a representative of the collective he was assigned to (born as) but was in fact that collective, with no more existence independent of that collective than a cell has independent of its body. Individuals thus were considered and treated not as symbols of the collective (Jews, gypsies, homosexuals, Poles, intellectuals, etc., as well as “Germans” or “Aryans”) but as the collective itself. The purpose of the individual was to sustain the collective, just as the purpose of a cell is to sustain the body. No one is interested in the dignity and autonomy of a cell.

Click here to read the complete review.

Ethics and Human Nature

It is an unhappy characteristic of our age that certain ignoramuses have been elevated to the ranks of “public intellectual,” a category which seems to consist of men and women who provide sweeping theories of everything, especially of everything they know nothing about. Into this category fall certain writers whose sweeping theory is that, prior to the Enlightenment, everyone lived in abject superstition and physical misery. With the Enlightenment, reason and science began the process of sweeping away misery and ignorance, clearing the field for the flowers of prosperity and knowledge. Such a sophomoric view of human history and thought has the virtue (in their minds only) of rendering it unnecessary for them to acquaint themselves with a deep and nuanced knowledge of the past, a convenient ignorance that permits them to attribute all that is good in human accomplishment to the age of science and all that is bad to a dark past best forgotten.

Nowhere is this more evident than in the recent fad for publishing books and articles claiming that science, particularly evolutionary science, provides the necessary and sufficient basis for ethics.

To read the article, click here.

Duck Dynasty: Free Speech Rights?

The patriarch of the “Duck Dynasty” franchise has been suspended from the show for making negative comments about gays and for his statement that African-Americans were better off before the civil rights movement. His suspension has riled far-right groups (who like to camouflage themselves by calling themselves conservatives) who accuse A&E of violating Phil Robertson’s religious and free speech rights. There are a number of problems with their position.

First, whatever religious rights he or anyone else may have, as the star of the “Duck Dynasty” show, he is not a spokesperson for any religion or religious belief; he is not, as someone has laughably stated, analogous to the Pope. It is the Pope’s job to take religious positions and to advocate for his church’s doctrines. It is Robertson’s job to provide footage for a television show whose purpose is to entertain viewers.

Second, although as an American citizen, Robertson has the right to express his views, as an employee of the A&E network, he does not. The First Amendment does not govern his employment with the network. It is, in fact, ironic that people who would be the first to defend the right of employers to fire workers for whatever reason (“right to work” laws that could better be called “right to fire”) are here the first to jump in and defend the fired (suspended) employee. A&E is a commercial, not a journalistic or political or denominational, enterprise. (MSNBC made that point about itself when it suspended the liberal firebrand Keith Olbermann for making donations to Democratic candidates without prior approval from the network–so what’s good for the right-wing conservative goose holds good for the liberal gander.) Not to mention that the owners of A&E have their own political/religious/whatever right to express their opinion of Robertson’s remarks by firing him.

Third, if Robertson has the right to his opinions, his critics have the right to their negative opinions of him, and just as much right to air them; and as consumers who determine A&E’s ratings, they have the right to withdraw their support for his show and cause its demise. That’s good ol’ consumer choice, is it not? A&E is certainly not going to thumb its nose at its viewers and continue to air a losing show.

Fourth, it should be noted that there is more than mere opinion at issue here. Robertson’s comments that blacks were better off before the civil rights movement, and his claim that he had never seen black people badly treated and that therefore they must not have been, are historically inaccurate. Maybe he just wasn’t looking, or maybe he had selective vision; but a few vintage photographs of lynched blacks would be sufficient to make the opposite point. Perhaps such photos should be shown to Robertson’s defenders as well.

Curiosity Killed the Clam

Some time ago, a quahog clam was found that was initially thought to be some 400 years old. Nicknamed “Ming,” its true age was determined by scientists to be 507 years. But in order for curious scientists to verify Ming’s age, they had to open it, which caused Ming’s death. This anecdote caused me to think about ethics and science. To wit, when curiosity is scientific, is anything and everything justified?

One might, of course, say that the life of a clam, no matter how old, is hardly of much import, and anyway, Ming could as easily have wound up in clam chowder as on a laboratory dissecting table. But Ming didn’t die to nourish anyone; it died to satisfy the curiosity of a few scientists. Perhaps we are not aware of the extent to which science, especially biology and medicine, has been and is dependent on killing things.

Charles Darwin, for example, killed a lot of animals, especially birds, on his voyage and sent the skins back to London (his description of killing a fox is especially sad), and Audubon used dead birds, most of whom he had shot himself, as models for his famous paintings (he used wires to pose the corpses). We all know the extent to which animals have been used in all kinds of experiments, some of which (to study diseases) have contributed to human health but others of which seem comparatively trivial (to test cosmetics).

I am thinking of one series of experiments by Harry Harlow during the 1960s. To test hypotheses about the role of mother-love in the development of normal adults, he took hours-old baby rhesus monkeys from their birth mothers and put them in cages with two surrogate “mothers,” one a wire form with a feeding bottle, the other with fuzzy cloth but no feeding bottle. Harlow found that the infants spent most of their time with the fuzzy “mother.” He also conducted isolation experiments and found that infant monkeys that grew up without contact or interaction with other monkeys failed to reintegrate into monkey society and also suffered long-term health and behavioral problems. The cruelty of these experiments defies belief and is made worse by the fact that, had he asked any normal human beings (especially relatively uneducated ones who had not been contaminated by the specious theories of behaviorism), they could have told him that a mother’s love is essential to a child’s development. But of course scientists dismiss such testimony as purely anecdotal and therefore not scientific or worth paying attention to.

But, one might say, these examples involve animals, which don’t have the same degree of rights or consciousness as humans, and it is worth the toll to advance scientific knowledge. Yet as the psychology of sociopathy tells us (without resort to animal experimentation), cruelty to animals is a precursor to cruelty to other human beings, as was seen in the Nazi experiments on concentration camp inmates (and even on disabled German soldiers, their own Aryan countrymen). These experiments were not without their applicable results: the experiments on hypothermia yielded actual, objective, usable facts, and it remains a dilemma whether or not to use the results of such experiments in real-life cases of hypothermia.

But the Nazis were evil. Americans would never do such things. Except that Americans have done such things, as in the Tuskegee experiments of 1932 to 1972, conducted by our own Public Health Service, the purpose of which was to track the natural progression of syphilis in a sample of African American men. The researchers resorted to deception: the subjects were told they would receive treatment for their disease, but in fact they did not. Untreated syphilis exacts a terrible toll on body and mind–a fact which was already known, since before the discovery of antibiotics, lots of people died of the disease. In short, the experiment was as pointless as Harlow’s monkey experiments.

Experimentation continues on larger scales today. A wide variety of chemicals are used in all kinds of products. Plastic bottles leach chemicals into the drinks they contain; insecticides are sprayed on fields and gardens; food additives are shown to be cancer-causing, etc. Recently the FDA called for the total banning of trans fats, which have been widely used in processed foods and which are artificially created rather than natural. Commonplace consumer products contain tens of thousands of chemicals whose safety has never been tested (presumably, on animals). Are we not then conducting massive experiments on human subjects (i.e., ourselves) in our homes, schools, and workplaces every day, waiting to see what will kill or sicken us and what won’t?

I’m not sure I’m that curious.

Don’t Touch My Stuff: Archeologists vs Looters

Looters. Graverobbers. Criminals. Black marketeers. Such are the epithets attached to those who locate, dig, and pillage artifacts from ancient graves, tombs, and assorted ruins, for reasons, as imputed to them by others, of greed, ignorance, and a mania for collecting. And it is true that everywhere that antiquities can be found (which is almost anywhere), private persons do dig up and sell objects from ancient sites.

Most countries have strict laws that forbid such private digging and regulate the selling and exporting of antiquities; many items more or less legally (if not morally) acquired in the past have been repatriated, especially since the international agreements of the 1970s codified the standards for the trade. The Getty Museum is just one among many Western museums that has had to return iconic objects to their countries of origin. (Worth mentioning are the very similar laws regarding the poaching and sale of exotic animals.) The rationales for these laws and agreements have much to do with national identity, anticolonialism, tourism dollars, and even scholarship.

When objects are pulled willy-nilly from the ground, with no record of the context in which they were found (at what depth, associated with which king, bundled with what other items, etc.), information as to age and function is lost to archeologists, and therefore to the rest of us who might be interested in the archeologists’ findings. Particularly in today’s globalized world, these objects and sites are part of the heritage of the whole human race and belong to all of us.

Taking these points into account, it seems reasonable and just to protect our human heritage through law and to bring the full force of the justice and police system down on the looters and grave robbers. This seems so obvious.

But that which seems obvious may simply be prejudice or self interest, so let us consider the motives of the looters themselves. That is, the often poverty-stricken peasants who, upon locating a cache of ancient and often very beautiful objects, remove them and carry them away to be sold to dealers, who in turn will sell them to collectors who want them as objects of interest and beauty for their own enjoyment. The latter may (or may not) be motivated by greed (they could also be motivated by genuine interest or a love of art), but the peasants are motivated by the needs of poverty. It is of little interest to them that somewhere in America, Europe, or Japan there are disinterested scholars eager to excavate these sites properly and to display the prime objects in cosmopolitan museums (the rest of the objects being cataloged and stored in university basements, unlikely ever to see the light of day again), nor does it interest them that said scholars will present their findings at posh conferences and in peer reviewed journals. What they want is to feed their families, build a better house, or send a child to school.

But such peasants not only have no money; they have no power. Yet they are well aware of who does, and they are not incorrect in believing that the academics periodically swarming over their territories represent those who do have power. They might (certainly in their own minds) wonder why they cannot dig and remove while the men and women in khaki can. Why, they might ask, is what we do called looting and grave robbing while what the professors and their students do is not? Have not in both cases the graves of the ancients been opened and emptied of their contents? Have not the professors gained wealth and prestige from this activity?

I remember what a leading mullah said some years ago, shortly before the United States invaded Afghanistan, about the Western world’s lamentations over the destruction of the Bamiyan Buddhas: that we had never shown such concern for the hunger of the Afghan people. Wrong as it was to destroy those unique and irreplaceable monuments to Afghanistan’s past, I couldn’t help but think he had a point; the United States had used the Afghans to fight a proxy war with Russia, and once that was concluded to our satisfaction, we abandoned their country. Now we seem to be doing that again. Can it be that loss of life bothers us less than the loss of statues?

“Education” is often used to justify what might otherwise be recognized as exploitation, self-interest, and cruelty. Consider that marine mammal theme parks promote themselves as educational (especially for children, who may be too naïve to notice the hypocrisy) to justify the corralling of whales and porpoises who, in their natural setting (not “the wild,” since it is not the wild to them) range for thousands of miles; or that zoos, which are anything but natural settings, promote themselves as not only educating the public but also preserving endangered animals. Meanwhile, the societies in which these amusement parks and zoos are embedded continue to degrade these creatures’ natural environments, thus hastening their demise and, not so incidentally, making the zoos even more “necessary”.

That college students and, occasionally, the general public are “educated” about ancient peoples through popular books and exhibitions may well be true, but mostly such beneficiaries are the already prosperous urbanites of populous consumer societies who have the time and inclination for such hobbies, and money to spend in museum stores. It is for these privileged ones that the graves and tombs of the ancients are pillaged by the archeologists. Not to mention that there is money to be made by documentary filmmakers working for, say, the Discovery or National Geographic channels, as well as scholarly careers, reputations, and tenured professorships.

Fundamentally, opening an ancient tomb, removing the objects and bodies it contains, and transporting them elsewhere is desecration. The person or persons buried there meant to stay there; they did not go to their graves with the hope that hundreds or thousands of years later they would be exhumed by curious or greedy people; they did not think of their graves as “time capsules.” The reasons for disturbing their final resting places would make little difference to them. There is no a priori justification for the greed, the love of beauty, or the thirst for knowledge that motivates the desecrator.

I admit to being fascinated by the discoveries of archeology, especially by what they tell us of the ways ancient people understood themselves and the world, but I also admit I cannot think of a truly objective basis for holding that the satisfaction of such curiosity trumps all other considerations. The priority of such academic curiosity has been established by an elite that has power, money, ideology, ego, and politics on its side.

Perhaps Native Americans have the moral high ground on this issue. In recent decades, Native American tribes have demanded that objects and human remains removed from their ancestral lands and deposited in academic storage rooms and museums be returned to them for reburial according to their own rituals, and they have increasingly won their point.

(Books that document the seamier side of archeology include two by Cathy Gere, The Tomb of Agamemnon and Knossos and the Prophets of Modernism, and Jo Marchant’s recent book The Shadow King: The Bizarre Afterlife of King Tut’s Mummy.)