Tag Archives: free will

Evolution and Theodicy

“Why is there evil in the world?” This question has been asked by philosophers, theologians, and ordinary men and women for millennia. Today scientists, particularly evolutionary biologists, neuroscientists, and evolutionary/neuropsychologists, have joined the effort to explain evil: why do people indulge in violence, cheating, lying, harassment, and so on? There is no need here to itemize all the behaviors that can be labeled evil. What matters is the question of “why?”

The question of “why is there evil in the world?” assumes the premise that evil is abnormal while good (however defined) is normal—the abnorm vs. the norm, if you will. Goodness is the natural state of man, the original condition, and evil is something imposed on or inserted into the world from some external, malevolent source. In Genesis, God created the world and pronounced it good; then Adam and Eve succumbed to the temptations of the Serpent and brought evil and therefore death into the world (thus, death is a manifestation of evil, immortality the natural state of good). Unfortunately, the Bible does not adequately account for the existence of the Serpent or Satan, so it was left to Milton to fill in the story. Gnostics, Manicheans, and others posited the existence of two deities, one good and the other evil, and constructed a vision of a cosmic struggle between light and darkness that would culminate in the triumph of good—a concept that filtered into Christian eschatology. The fact that Christian tradition sees the end times as a restoration to a state of Adamic or Edenic innocence underscores the notion that goodness is the natural, default state of man and the cosmos.

Contemporary secular culture has not escaped this notion of the primeval innocence of man. It has simply relocated Eden to the African savannah. When mankind was still at the hunter-gatherer stage, so the story goes, people lived in naked or near-naked innocence; they lived in egalitarian peace with their fellows and in harmony with nature. Alas, with the invention of agriculture and the consequent development of cities and civilizations, egalitarianism gave way to greed, social hierarchies, war, imperialism, slavery, and patriarchy, all the factors that cause people to engage in violence, oppression, materialism, and so on; further, these faults of civilization caused the oppressed to engage in violence, theft, slovenliness, and other sins. Laws and punishments and other means of control and suppression were instituted to keep the louts in their place. Many people believe that to restore the lost innocence of our hunter-gatherer origins, we must return to the land, re-engage with nature, adopt a paleo diet, restructure society according to matriarchal and/or socialist principles, and so on. Many people (some the same as the back-to-nature theorists, some different) envision a utopian future in which globalization, or digitization, or general good feeling will restore harmony and peace to the whole world.

Not too surprisingly, many scientists join in this vision of a secular peaceable kingdom. Not a few evolutionary biologists maintain that human beings are evolutionarily adapted to life on the savannah, not to life in massive cities, and that the decline in the health, intelligence, and height of our civilized ancestors can be blamed on the negative effects of a change in diet brought on by agriculture (too much grain, not enough wild meat and less variety of plants) and by the opportunities for diseases of various kinds to colonize human beings too closely crowded together in cities and too readily exposed to exotic pathogens spread along burgeoning trade routes. Crowding and competition lead to violent behaviors as well.

Thus, whether religious or secular, the explanations of evil generally boil down to this: that human beings are by nature good, and that evil is externally imposed on otherwise good people; and that if circumstances could be changed (through education, redistribution of wealth, exercise, diet, early childhood interventions, etc.), our natural goodness would reassert itself. Of course, there are some who believe that evil behavior has a genetic component, that certain mutations or genetic defects are to blame for psychopaths, rapists, and so on, but again these genetic defects are seen as abnormalities that could be managed by various eugenic interventions, from gene or hormone therapies to locking up excessively aggressive males to ensure they don’t breed and pass on their defects to future generations.

Thus it is that in general we are unable to shake off the belief that good is the norm and evil is the abnorm, whether we are religious or secular, scientists or philosophers, creationists or Darwinists. But if we take Darwinism seriously we have to admit that “evil” is the norm and that “good” is the abnorm—nature is red in tooth and claw, and all of the evil that men and women do is also found in other organisms; in fact, we can say that the “evil” done by other organisms long precedes the evil that men do, and we can also say, based on archaeological and anthropological evidence, that men have been doing evil since the very beginning of the human line. In other words, there never was an Eden, never a Noble Savage, never a long-ago Golden Age from which we have fallen or declined—nor therefore is there any prospect of an imminent or future Utopia or Millennial Kingdom that will restore mankind to its true nature, because there is nothing to restore.

The evolutionary function of “evil” is summarized in the term “natural selection”: the process by which death winnows out the less fit from the chance to reproduce (natural selection works on the average, meaning of course that some who are fit die before they can reproduce and some of the unfit survive long enough to produce some offspring, but on average fitness is favored). Death, usually by violence (eat, and then be eaten), is necessary to the workings of Darwinian evolution. An example: When a lion or a pair of lions defeats an older pride male and takes over his pride, the newcomers kill the cubs of the defeated male, which has the effect of bringing the lionesses back into heat so that the new males can mate with them and produce their own offspring; their task is then to keep control of the pride long enough for their own cubs to reach reproductive maturity. Among lions, such infanticide raises no moral questions, whereas among humans it does.
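The claim that selection favors fitness only on average, not in every individual case, is easy to demonstrate with a toy simulation. The sketch below is mine, not part of the original argument, and every number in it is an illustrative assumption: each individual's fitness is simply its probability of surviving to reproduce, survival is a random draw, and offspring inherit fitness with a little noise. Some fit individuals die and some unfit ones breed, yet mean fitness still drifts upward.

```python
import random

random.seed(0)

def generation(pop):
    """One generation: stochastic death, then reproduction with noisy inheritance."""
    # Death is a random draw weighted by fitness: some fit individuals
    # die anyway, and some unfit individuals slip through.
    survivors = [f for f in pop if random.random() < f]
    # Each survivor leaves two offspring; fitness is inherited with noise.
    return [min(1.0, max(0.0, f + random.gauss(0, 0.02)))
            for f in survivors for _ in range(2)]

pop = [random.uniform(0.3, 0.7) for _ in range(1000)]  # fitness = chance of surviving
print(f"mean fitness at start: {sum(pop) / len(pop):.2f}")

for _ in range(20):
    pop = generation(pop)[:1000]  # cap the population size

print(f"mean fitness after 20 generations: {sum(pop) / len(pop):.2f}")
```

Run repeatedly with different seeds, the final mean is reliably higher than the starting mean, even though any single lineage, fit or unfit, may be lucky or unlucky.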

There is no problem of evil but rather the problem of good: not why is there “evil” but rather why is there “good”? Why do human beings consider acts like infanticide to be morally evil while lions do not? Why do we have morality at all? I believe that morality is an invention, a creation of human thought, not an instinct. It is one of the most important creations of the human mind, at least as great as the usually cited examples of human creativity (art, literature, science, etc.), if not greater, considering how much harder won it is than those other creations, and how much harder it is to maintain. Because “good” is not natural, it is always vulnerable to being overwhelmed by “evil,” which is natural: peace crumbles into war; restraint gives way to impulse; holism gives way to particularism; agape gives way to narcissism; love to lust, truth to lie, tolerance to hate. War, particularism, narcissism, etc., protect the self of the person and the tribe, one’s own gene pool so to speak, just as the lion kills his competitor’s cubs to ensure the survival of his own. We do not need to think very hard about doing evil; we do need to think hard about what is good and how to do it. It is something that every generation must relearn and rethink, especially in times of great stress.

It appears that we are in such a time today. Various stressors (the economy, the climate, overpopulation and mass migrations, religious conflict amid the dregs of moribund empires) are pushing the relationship of the tribes to the whole out of balance, and the temptations are to put up walls, dig trenches, draw up battle lines, and find someone other than ourselves to blame for our dilemmas. A war of all against all is not totally out of the question, and it may be that such a war or wars will eventuate in a classic Darwinian victory for one group over another—but history (rather than evolution) tells us that such a victory is often less Darwinian than Pyrrhic.

Donald Trump: Psychoanalysis vs. Ethics

Is Donald Trump a narcissist? Is he a psychopath? Is he mentally unstable? These questions, and others of the same ilk, have been asked (and often answered in the affirmative) throughout the primary campaign season. To a lesser extent, similar questions have been asked about his followers. There has been, in other words, a lot of psychoanalyzing. It’s as if the DSM-5, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, has become the primary guide to politics and politicians.

Hillary Clinton has also, and for a longer time (at least since the Lewinsky scandal), been subjected to armchair and coffee-house analysis (she’s in denial, etc.), even though, given that she is, for a politician, a surprisingly private person (i.e., uptight? secretive? not warm?), one wonders how anyone can legitimately diagnose her. Bill Clinton has also, of course, been parsed and dissected (narcissist, sex addict, etc.). Surprisingly, there has been little psychoanalysis of Bernie Sanders, perhaps because, as Hillary’s gadfly, he has dominated the high ground of principle.

Perhaps when a serious candidate actually has principles and stays consistent with them, psychologizing is unnecessary and even irrelevant. Principles have the effect of overriding personal quirks and biases. They are not generated from within this or that individual, and therefore are not reflective only of that individual, but are generated in a long process of shared thought. We come to principles through reason (Hannah Arendt might have said, through reason paired with imagination), not through impulse; indeed, the point of principle is to put a bridle on impulse, to restrain the impetuousness of the moment in favor of the longer, wider view. In Pauline terms, it replaces the natural or carnal man with the spiritual man; in late Protestant terms, it replaces immediate with delayed gratification.

So while Trump may or may not be a psychopath, a narcissist, or mentally unstable or ill, which none of us can really know, he is an unprincipled man. His constant shape-shifting, self-contradictions, denials, and off-the-cuff bluster are the signs of an impulsive man whose thoughts and words are not subjected to the vetting of a set of principles that can tell him whether he is right or wrong. He has at long last no shame, no decency, because he has no principles to tell him what is decent or shameful. In other words, he is typical of human beings, men and women, when they have nothing higher or wider than themselves as guides to behavior. This is not the place to go in depth into the utility of moral principle, but just as an example, something as simple as “do unto others as you would have others do unto you” can restrain the natural selfish impulse to grab as much as you can for yourself.

Anyone who has taken an introductory course in psychology or who has paged through any of the editions of the DSM has found plenty of evidence that he or she is in some way or another mentally unstable or unhealthy. Just about anyone can look at the list of defining characteristics of, say, narcissistic personality disorder (do you think you are special or unique?), or antisocial personality disorder (are you opinionated and cocky?), or perfectionism, and wonder, in a bit of self-diagnosis, whether he or she should seek help. Welcome to the cuckoo’s nest. Or rather, welcome to humanity.

But for the concept of a disorder to exist, there has to be a concept of an order, i.e., a definition of what being a normal person is. Ironically, psychology is of no help to us here. The DSM-5 is nearly one thousand pages long and, according to its critics, adds ever more previously normal or eccentric behaviors to its exhaustive, not to say fatiguing, list of mental maladies. Its critics also charge that it provides ever more excuses for psychiatrists and physicians to prescribe very profitable drugs to people who are really just normal. After all, they point out, life is not a cakewalk, and people are not churned out like standardized units.

Principle (i.e., morality, ethics), on the other hand, can be of great help here. It is obvious that the followers of Trump have not been dissuaded from supporting him by the amateur psychoanalyses of pundits and opponents. Clearly they like those traits which the alienists are diagnosing. But what if someone started criticizing him on moral grounds; what if someone performed something analogous to “Have you no sense of decency, sir?” That question, posed by Joseph N. Welch to Senator Joseph McCarthy during the Army–McCarthy hearings of 1954, was a key moment in the demise of one of the worst men in American political history. Welch did not psychoanalyze McCarthy, nor did Edward R. Murrow in his famous television broadcast on McCarthy’s methods, and McCarthy was not taken away in a straitjacket. He was taken down by morally principled men and women who had had enough of his cruelty and recklessness.

Plato’s Cave: Real

Plato’s Cave, Inside Out, Part 2:
Real Caves with Real People in Them

In my previous post, I retold Plato’s parable of the cave by suggesting that the shadows on the wall were the Ideas and that the objects casting those shadows were the Real—in other words, that there is no heaven of ideas or ideal forms superior to their corrupted embodiments in the material world; rather, these forms are literally ideas, shadows of the real in the minds of philosophers and thinkers such as political ideologues, and frankly of most of us as well. To some degree, we are all Platonists.

In this article I want to consider the notion(s) of the cave from a different angle, not that of parable but of actual human practice. Plato’s cave is a deliberate fiction created to make a point, but it may be based on practices that have been common to human cultures since the beginning of our species. Most of the best-preserved fossils of very ancient humans have been found in caves, along with the bones of various kinds of animals, including animals that also lived in or made use of caves, as well as materials that in some instances are clearly artifacts of a kind (such as the small slabs of ocher etched with lines and cross-hatchings found in some South African caves). There is abundant evidence that deep caves were thought to be entrances to the underworld and the haunts of beings such as gods of death and monsters and serpents, etc. It would not be surprising to learn that Plato was familiar with caves as cultic centers or as sites associated with certain gods or sibyls.

There is a native tribe called the Kogi living in the mountains of Colombia whose culture has remained largely intact since pre-colonial times. One of the most interesting practices of the Kogi is the way they train selected young boys to become priests: the boy is sequestered at a very early age, before he has acquired much knowledge of the world, in a dark cave where there is just enough light to prevent him from becoming blind; over the course of nine years, he is trained by priests in the knowledge and ways of his people and the world, emerging into the world at the end of his training as a priest in his own right.

This custom parallels Plato’s parable, yet in an inside-out way, by implying that right knowledge is acquired through the ideas of things rather than by the experience of things. The boy emerges with superior priestly knowledge which allows him to guide his people in the right ways. This could be seen as Idealism practiced in its most ideal form. And while it might appear to us as a bizarre practice, in fact something similar has always been practiced in literate cultures: our education system, after all, isolates the young from the outside world and inculcates them with a form of “ideal” knowledge from books (and now of course from electronic sources, which are in some ways even less real, less embodied, than printed books); those who succeed in the process are the elite of our culture. In former times, when education was limited to the male children of the upper classes and focused on philosophy, theology, and the classics, this was even more true. Monks and hermits were even more isolated from the world than scholars, and yet both scholars and hermits were considered to be wiser and more insightful than ordinary people caught up in the hurly-burly and distractions of the material world.

This notion that true knowledge and wisdom exist in separation from the world rather than involvement in the world is a particularly striking characteristic of the human mind; one might argue that it is what makes the mind human and therefore what most distinguishes us from other animals. But how did this state of mind come about? I doubt that it is hardwired, though of course it is grounded in the structure of the brain; but the brain is structured (at least in part) to think, and the origin of this notion has to be in thinking.

Let us consider the cave paintings of prehistoric France, those splendid depictions of Stone Age animals in the caves of Lascaux, Chauvet, and the Grotte de Font-de-Gaume, some dating as far back as 33,000 B.C. Since their discovery in the twentieth century, assorted theories as to their significance have been offered by anthropologists, Structuralists, psychologists, art historians, filmmakers (Werner Herzog’s mesmerizing film of Chauvet, for example), and others. All of these interpretations have their plausibilities, but in the absence of any written explanations by the original artists themselves, we cannot be sure which, if any, of these interpretations approximate what the artists themselves thought they were doing. Perhaps the paintings had magical or religious meanings; maybe they were just pictures. Or maybe they were something in between, something transitional.

Modern interpretations of the paintings presuppose that they were the end products of thought, that it was thought that created the art: success in the hunt was desired, for example, so the artists painted the sought-after prey in order to ensure that success. But what if the art created the thought? What if the paintings were a relatively late stage in a process that began with the patterns of lines and cross-hatchings found on very ancient ocher slabs, and those etchings were at first random exercises or experiments in using the hands and simple tools to put marks on things? Surely the first human to do such a thing, perhaps just scratching some lines in the dust while lying in wait for prey to pass by, must have been taken aback by what he had done. We must not think in our terms about the origins of art, but in terms of the very first humans who started the whole thing, without any precedent of any kind. Would not that have been astonishing? Perhaps puzzling and even a bit scary at first? Would it not have been somewhat like (but not entirely so) a very small child of today swiping a crayon across a blank sheet of paper for the very first time in her life? None of us can remember that moment in our own lives, and perhaps none of us can fully imagine that very first moment in human history (and maybe not even in our own species, H. sapiens sapiens)—but such a moment had to have happened.

And then the process of making sense of it all began. A proto-artist might have asked, what can I do with this? And in the course of further experimentation or play with these interesting scratches, refinements could have evolved, until we get to the representation, at first in stick figures, then in more developed figural sketches, of animals and other objects of experience. At some point, perhaps very early in the process, scratches and figures began to take on magical or spiritual properties—they began, in other words, to be interpreted, i.e., explained. Explanations led to more refinements of form, which in turn led to more explanations, and eventually we end up here, where we are in the twenty-first century, with shelves of books (and thousands of websites) that explain not just the objects themselves; they explain the explanations as well, in an almost infinite network of thought.

But what of the caves? The paintings on the walls of Chauvet and Lascaux appear to be late developments of a long process. Their apparent sophistication and indisputable beauty seem to indicate that they are the expressions of a rich and complex tradition, and their familiarity (their seeming prefiguring of, for example, twentieth-century modern art) seems to invite very modern responses and interpretations. Their presence deep in dark caves that could have been illuminated only by torchlight encourages theories of shamanistic practices, perhaps intended to ensure success on the hunt. Perhaps the caves were viewed as providing closer access to spiritual forces believed to be operating underground, or as bringing the people closer to their ancestors (quite a few known origin myths posit that the first people emerged from underground).

But there is another, admittedly speculative, possibility: that descending into the caves provided an escape from the pressures and distractions of the sunlit, active world and allowed the artists to play with their art, to discover what more they might do with this ability, as when they discovered that they could add depth and contour, a kind of three-dimensionality, by drawing around the bumps and irregularities in the rock walls. After all, it isn’t likely that they thought about that while going about their routines outside the caves; the technique had to have been suggested by the irregularities of the rock surface they were working on down in the caves, at the moment of creation. In that sense, the pictures were pure art.

But they could not remain pure art for long—the human need for explanation, for interpretation, would have engaged almost immediately, and hence the paintings would have acquired meanings, likely even sacred meanings, soon after the artist stepped back to contemplate his finished work. Such a wonder would have inspired thought, perhaps an entire treatise of thought of a kind analogous to what we today call theoretical, which then could be carried in the minds and words of the artists to the sunlit surface world and conveyed to the general population.

It is relevant to note that many artists and writers of today explain themselves by saying that they didn’t know what they wanted to say until they said it (or painted or sculpted it). They will say that they knew where they started but didn’t know where they would end up. Some art and literary critics assert that the meaning of a work of art or a novel or poem is not completed until it has been viewed or read and interpreted by a viewer or reader. In this sense, art is the beginning of thought, not its conclusion.

Mark Balaguer on Free Will

Into the fray of recent books on whether or not we humans have free will jumps Mark Balaguer’s sprightly volume, one of a series on current vogue topics published by MIT Press and intended for a nonspecialist readership. In other words, Balaguer is not writing for other philosophers but for you and me—and this audience may account for the book’s jauntiness, inasmuch as authors, and/or their editors and publishers, appear to believe that the only way the common man or woman can be induced to swallow and digest cogitations on the great questions is to talk to him or her as if to a child. One sometimes imagines the author as a sitcom daddy explaining mortality or sin as he tucks in his four-year-old daughter.

You can tell that I find that style annoying. But despite that, Balaguer does more or less accomplish his goal, which is basically to show that the anti-free-will arguments advanced today by such luminaries of the genre as Daniel Wegner and Sam Harris don’t amount to much, primarily because they tend to assume what remains to be proven. Balaguer does an excellent job of exposing the holes in the determinist arguments, as well as going back to some of the studies that constitute the supposed proofs of those arguments, such as those of Benjamin Libet, and finding that they do not in fact offer such proof. I won’t go into his explanations, as the reader can do that easily enough on his own, especially since the book is short (a mere 126 pages of text) and free of arcane jargon.

Much as I welcome Balaguer’s poking of holes in the determinist hot-air balloon, I do have a bone to pick with his argument, namely that he seems to have a trivial notion of what free will is. Apparently, Balaguer thinks that free will is synonymous with consumer choice; his primary and repeated example is a scenario of someone entering an ice cream parlor and considering whether to order vanilla or chocolate ice cream. Even in his interesting distinction of a “torn decision,” i.e., one in which the options are equally appealing or equally unappealing, he repeats the chocolate vs. vanilla example. In this he is like Sam Harris, the determinist who uses tea vs. coffee as his example. And like Harris, he says nothing about the fact that free will is an ethical concept and as such has nothing to do with consumer choice, or with a lot of other kinds of common, everyday choices.

So let me offer a scenario in which the question of free will is truly interesting: Imagine that you are a young man in the antebellum South, say about 1830, and you are the sole heir of a large plantation on which cotton is grown with slave labor. Let’s say you’re about 30 years old and that for all those 30 years you have lived in a social and ideological environment in which slavery has been a natural and God-given institution. You therefore assume that slavery is good and that, when your father dies and you inherit the plantation, you will continue to use slave labor; you will also continue to buy and sell slaves as valuable commodities in their own right, just like the bales of cotton you sell in the markets of New Orleans. Further, you are aware that cotton is an important commodity, crucial to the manufacturing enterprises of the new factories of the northeast and England. You are justly proud (in your own estimation, as well as that of your social class) of the contributions the plantation system has made to the nation and civilization. Because of your background and experience, perhaps at this point you cannot be said to have free will when it comes to the question of whether or not slavery is morally just.

Then one day you learn of people called abolitionists, and perhaps quite by chance you come across a pamphlet decrying the practice of slavery, or perhaps you even hear a sermon by your local preacher demonizing abolitionists as atheists or some such thing, though in the course of that sermon the preacher happens to mention that these atheists presume to claim Biblical authority for their heretical beliefs. Maybe you rush to your copy of the Bible to prove them wrong, only to come across St. Paul’s assertion that there is neither slave nor freedman in Christ. Perhaps you ignore these hints that what you have always assumed to be true may not be; or perhaps they prick your conscience somewhat, enough to make you begin to look around you with slightly different eyes. Maybe you even become fraught, particularly when you consider that some of the younger slaves on the property are your half-siblings, or perhaps even your own offspring—how could my brother or my son be a slave while I am free? Who can say what nightmares these unwelcome but insistent thoughts engender? At any rate, for the first time in your life, you find that you cannot be a slaveholder without considering the moral implications of the peculiar institution. For the first time, you must actually decide.

The above is certainly an example of what Balaguer calls a torn decision, but unlike chocolate vs. vanilla, it is a moral decision, and therefore profound rather than trivial. And it is in such moral dilemmas, when something that is taken for granted emerges into consciousness, that the concept of free will becomes meaningful. It would therefore seem that scientists, qua scientists, can’t be of much help in deciding whether or not we have free will. Try as they might (and some have, sort of), they cannot design laboratory experiments that address moral dilemmas—it is only in living, in the real world with other people and complex issues, that morality, and therefore free will, can exist. Of course, that does not mean that in exercising free will everyone will always make the morally right decision—we cannot know if the young man of the antebellum South will free his slaves or keep them (or even perhaps decide that the question is too difficult or costly to be answered, so he chooses to ignore it, likely leading to a lifetime of neuroses)—but we do know that once the question has risen into his consciousness, he has no choice but to choose.

Free will, then, operates when a situation rises into consciousness, creating a moral dilemma that can be resolved only by actively choosing a course of action or belief on the basis of moral principles rather than personal preference or benefit. There are dilemmas that superficially resemble moral dilemmas, such as whether I ought to lose weight or whether I should frequent museums rather than sports bars, but these are in fact matters of taste rather than ethics. Chocolate vs. vanilla is of the latter kind. To say that I ought to have the vanilla is very different from saying that I ought not to own slaves, even though both statements use the same verb. It is disappointing that philosophers fail to make the distinction.

Eichmann Before Jerusalem: A Review

Eichmann: Before, In, and After Jerusalem

“One death is a tragedy; a million deaths is a statistic.” Whether or not Stalin actually said this is irrelevant to the point that it makes, for it tells us in a most condensed form the totalitarian view of human beings, as exemplified not only by the Stalinist era in Russia but especially by the short but deadly reign of National Socialism in Germany. Unlike the socialism found in contemporary European societies such as Sweden and France, in which the individual human being is recognized as a person regardless of his or her circumstances, and thus as equally worthy of education, medical care, and hope, the “socialism” of the Nazis stripped the individual of personhood by subsuming him in a collective identity, so that this body was interchangeable with that body: the individual was not merely representative of the collective he was assigned to (born as) but was in fact that collective, with no more existence independent of that collective than a cell has independent of its body. Individuals thus were considered and treated not as symbols of the collective (Jews, gypsies, homosexuals, Poles, intellectuals, etc., as well as “Germans” or “Aryans”) but as the collective itself. The purpose of the individual was to sustain the collective, just as the purpose of a cell is to sustain the body. No one is interested in the dignity and autonomy of a cell.

Click here to read the complete review.

Ethics and Human Nature

It is an unhappy characteristic of our age that certain ignoramuses have been elevated to the ranks of “public intellectual,” a category which seems to consist of men and women who provide sweeping theories of everything, especially of everything they know nothing about. Into this category fall certain writers whose sweeping theory is that, prior to the Enlightenment, everyone lived in abject superstition and physical misery. With the Enlightenment, reason and science began the process of sweeping away misery and ignorance, clearing the field for the flowers of prosperity and knowledge. Such a sophomoric view of human history and thought has the virtue (in their minds only) of rendering it unnecessary for them to acquaint themselves with a deep and nuanced knowledge of the past, and it permits them to attribute all that is good in human accomplishment to the age of science and all that is bad to a dark past best forgotten.

Nowhere is this more evident than in the recent fad for publishing books and articles claiming that science, particularly evolutionary science, provides the necessary and sufficient basis for ethics.

To read the article, click here.

Why Determinism?

The eternal debate between determinism and free will has lately taken a new form. Determinism has been reincarnated in the shape of neuroscience, with attendant metaphors of computers, chemistry, machines, and Darwinism. Meanwhile, defenders of free will seem to have run out of arguments, particularly since, if they wish to be taken seriously, they dare not resort to a religious argument. That the debate is virtually eternal suggests that it is not finally resolvable; it could be said in fact that the two sides are arguing about different things, even though they often use the same terminology.

Determinism’s popularity is most clearly suggested by the sales figures for books on the subject and by the dominance of the view in popular science writing. Such books are widely reviewed, while those arguing for free will are neglected, especially by the mainstream press.

The question, then, is not whether we have free will, or whether we are wholly determined in all our thoughts and actions, but rather why, at this point in time and particularly in this country, determinism is so much more popular than free will.

Today’s determinism is not the same as the ancient concept of fate. Fatalism was not so much about determinism or, as the Calvinists posited, predestination; fatalism did not pretend to know what would happen, but rather held that fate was a matter of unpredictability, of whim (on the part of the universe or of the gods, etc.), and in fact left some room for free will, in a what-will-be-will-be sort of way; i.e., because outcomes were unpredictable, one had to choose, one had to act, and let the dice fall where they may. The tragic flaw of hubris was precisely what is wrong with any determinism: the delusion that one can stop the wheel of fate from turning past its apex, i.e., that through prediction one can control.

Determinists worship predictability and control. I once read somewhere the idea, classically associated with Laplace, that if everything that has already happened were known, everything that will happen could be accurately predicted. Extreme as this statement is, it accurately summarizes the mindset of the determinists. It also suggests why determinism is so attractive in a scientific age such as ours, for science is not only about the gathering of facts and the formulation of theories but also about using those theories to make predictions.

Given the apparent power of science to predict accurately, and given that prediction is predicated on a deterministic stance, it is not surprising that scientists should turn their attention to the human condition, nor that scientists, being what they are, tend to look for, and find, evidence that human thoughts and behavior are determined by genes, neurons, modules, adaptations, what have you, and are therefore predictable. Nor is it surprising that, in a restless and rapidly changing world, laymen are attracted to these ideas. Certainty is an antidote to powerlessness.

If we are religiously minded, we find certainty in religion; hence the rise of politically and socially powerful fundamentalist movements today. If we are not religious, we may find certainty in New Age nostrums, ideologies, art, bottom lines, celebrity worship, or even skepticism (no one is more certain of his or her own wisdom than the skeptic). If we are politicians, we look for certainty and security in megabytes of data. If we are scientifically minded, we find certainty in science. But certainty is not science. It is a common psychological need in an age of uncertainty.

In satisfying this need for certainty, determinism often leads to excessive self-confidence and egotism—which in turn leads to simplifications and dismissal of complexity, ambivalence, and randomness. Determinism is teleology. Today’s determinists may have discarded God, but they still believe that He does not play dice. They are, in short, utopians. We all know where utopias end up. That much at least we can confidently predict.

Boehm’s “Social Selection”

Christopher Boehm’s book Moral Origins: The Evolution of Virtue, Altruism, and Shame (Basic Books, 2012) is yet another sad example of the futility of the widespread hope that Neo-Darwinism, as overextended by evolutionary psychology and sociobiology, can ever be a theory of everything, particularly a theory that explains modern human behavior and values. It is not science. It is an ideology, or perhaps merely a hope, dressed up in a sloppy imitation of science.

Boehm’s thesis is that human moral values, the virtue, altruism, and shame of his subtitle, evolved through a process of what he calls “social selection,” which can be defined as the selecting out of socially uncooperative individuals (whom Boehm equates with psychopaths) and the selecting in of cooperative ones. For all the book’s length (362 pages of text), elaborate arguments, and numerous examples, Boehm fails to support his thesis with anything more than supposition and false analogies.

First let’s consider what social selection would have to do in order to affect the evolution of human beings:

1) It would require a concerted, species-wide effort over a great span of time to define, identify, and eliminate socially uncooperative individuals (psychopaths and free riders).

2) In order to affect the gene pool, undesirable individuals would have to be identified very early in life, before they had the chance to reproduce. Killing the parent without killing the child does not eliminate the parent’s genes.

3) The criteria for determining whom to eliminate would have to be not only clear but consistent over many generations. Any change in the standards midstream would ruin the whole scheme. Yet any historian can tell you that standards have changed over time, sometimes quite sharply.

There is no evidence that any of this obtained at any time in human history or prehistory. There is also no evidence that, if it did occur, it would have had a significant impact on human evolution. Prior to modern medicine and germ theory, infant and child mortality, not to mention plagues and epidemics that affected adults as well, would have had an impact many times that of social selection, effectively swamping its proportionally infinitesimal effects, as the back-of-the-envelope sketch below suggests.
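To make the scale of the problem concrete, here is a minimal back-of-the-envelope calculation of my own. All the numbers are illustrative assumptions, not Boehm’s: a modest trait frequency, a weak reproductive penalty from shaming or ostracism, and a small forager breeding population. Standard population-genetics formulas then compare the expected per-generation push from selection against the random per-generation noise from indiscriminate mortality (genetic drift).

```python
import math

# Illustrative assumptions (mine, not Boehm's):
p = 0.05   # frequency of the "uncooperative" trait in the population
s = 0.02   # weak reproductive penalty imposed by social selection
N = 500    # effective breeding population of a forager network

# Expected per-generation change in frequency from weak selection
# (haploid approximation): delta_p ~ -s * p * (1 - p)
delta_sel = -s * p * (1 - p)

# Per-generation random fluctuation from indiscriminate deaths
# (Wright-Fisher drift): sigma ~ sqrt(p * (1 - p) / N)
sigma_drift = math.sqrt(p * (1 - p) / N)

print(f"selection shifts the frequency by about {delta_sel:+.4f} per generation")
print(f"random mortality perturbs it by about +/-{sigma_drift:.4f} per generation")
```

Under these assumptions the noise term is roughly ten times the selection term, so the selective signal would emerge only over hundreds of tightly consistent generations, which is exactly the consistency requirement the list above says is historically implausible.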

In order to compensate for the serious lack of evidence, Boehm resorts to highly suppositional phrasing and subjunctive grammar. The following examples from pages 80 and 81 are illustrative of far too much of the book:

“prehistoric forager lifestyles could have generated distinctive types of social selection” (Perhaps they could have, but science wants to know if they actually did.)

These types of social selection “could have supported generosity outside the family at the level of genes.” (Again, did they actually do so?)

“were likely to have”
“could have become”
“It’s even possible . . . if”
“may have begun to differ”
“it’s likely that”
“would have been”
“would not have negated”
“they would have”
“were likely to have been”
“what could have happened”
“very likely”

And all these from just two pages! The careless or naïve reader might not notice this suppositional language and therefore mistakenly believe that Boehm is solidly establishing his argument; but the careful reader will find these to be crippling stumbling blocks.

There are also problems of self-contradiction. For example, Boehm seems to be saying that social selection eliminates psychopaths, but then states that psychopaths constitute a significant percentage of modern-day populations. He claims that “People very significantly [psychopathic] probably number as high as one or more [vague: how many more?] out of several hundred in our total population.” One out of several hundred is well under one percent, which may not seem like many, but it is perhaps too many if humans began socially selecting these people out thousands of years ago. Other sources put the percentage as low as 2% and as high as 4%, though no doubt problems of definition affect the numbers. Whatever the true number may be, Boehm needs at the very least to clarify just how effective social selection really is.

The examples he pulls from contemporary forager societies also contradict his thesis. He cites the example of Cephu, a Mbuti Pygmy who, as recounted by Colin Turnbull, let his greed overcome his responsibility to the rest of his group. His colleagues caught him in the act of helping himself to more game than he was entitled to and subjected him to an intense course of humiliation—but they did not kill him or his progeny, and after he had adequately apologized and humbled himself, he was readmitted to the group. The story of Cephu, meant to illustrate the book’s thesis, actually proves its opposite: Cephu’s behavior was corrected not genetically but culturally.

Perhaps a comparison will clarify the problems with Boehm’s thesis. There is another form of behavior that one might think would have been socially eliminated fairly early in human evolution: male homosexuality. It is not, after all, conducive to reproductive survival, and it has often been punished, quite horribly in many instances, not only with shunning and shaming but with imprisonment, torture, and execution; yet it has persisted through thousands of years, in part because homosexuals can camouflage themselves, but also because efforts of social selection to eliminate the behavior have proven ineffectual—just as, I would argue, has social selection to eliminate socially uncooperative individuals. This analogy suggests that social selection is a very weak hook on which to hang the hope that biology and genetics can account for all human behavior in terms of “fitness.”

Finally, we should note that throughout history there have been people we would today label psychopaths who were quite successful leaders, often revered not only in their own times but long after their deaths. One thinks of Napoleon Bonaparte, killer of millions yet romanticized and admired by other millions, credited with the Napoleonic Code and sympathized with in his exile. One also thinks of Genghis Khan, the great butcher who, far from being selected out of the gene pool, is now thought to be the ancestor of as many as 16 million people living today. Of course, being a psychopathic great leader is no guarantee of reproductive success; Hitler, fortunately, had no children, and though he did have nieces and nephews, none of them has followed his example. While Boehm believes that psychopaths and free riders were (at least to some extent) weeded out of the gene pool through social selection, it may be that such individuals were selected for because, in ways that we twenty-first-century Americans may not comprehend, they were in fact socially useful. Perhaps they made good warriors, or maybe they built the great empires that encouraged the arts and sciences, or maybe they made their liege lords great fortunes (perhaps Cortez and Pizarro were useful psychopaths, enriching the Spanish treasury while taking all the risks). What we can say is that they have been, and are, legion.

Thomas Nagel’s “Mind and Cosmos”

Thomas Nagel’s latest book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, addresses this general problem: as a conscious, reflective, and self-reflective creature, man is preoccupied with the questions, Who am I? How did I get here? Where did I come from? What is my purpose? What can I know? What is true? These questions have been the traditional starting points for those fraternal twins religion and philosophy and have been answered in a shifting spectrum ranging from the nihilist to the promethean, with innumerable shades between. As a philosopher (most widely known perhaps for his essay “What Is It Like to Be a Bat?”), Nagel is rather put out by the fact that this general problem has been usurped by science, particularly by a hyper-reductionist world view that denies consciousness and all that it entails, including free will, cognition, value, and especially purpose.

The goal of his book is to restore all these human capacities and to challenge the Neo-Darwinian or hyper-reductionist position, which in his view is “radically self-undermining” (p. 25). It is radically self-undermining because, if logically followed through to its end point, “Evolutionary naturalism provides an account of our capacities that undermines their reliability, and in so doing undermines itself” (p. 27); that is, if it is true, we are incapable of knowing that it is true, but the very fact that certain thinkers assert that it is true proves that it is not true. This can be called an anti-tautology, i.e., if it is true it is self-evidently false. For example, how can a person who denies free will or who asserts that consciousness is an illusion write an entire book, sentence after sentence, chapter logically following chapter, with arguments and citations, while actually believing that he or she has done so in a trance or had no choice but to do so? How can an illusion believe that it is an illusion?

The best point in this otherwise flawed book is that consciousness is self-evident and obvious—it is something each of us experiences personally and directly. Therefore, evolutionary processes must be “reconceived in light of what they have produced” rather than being misconceived in a hyper-reductionist way that denies evolution’s products. “Conscious minds must be part of what is explained by any theory of the world” rather than explained away by the inexorable logic of a flawed theory. Alas, as much as one agrees with him on this central and important point, Nagel’s proposed alternative hypothesis fails to correct the problem.

That hypothesis is, basically, that consciousness not only exists but is a feature of the universe—not just of humans and/or other developed organisms, but the telos of the whole cosmos. “We should seek a form of understanding that enables us to see ourselves and other conscious organisms as specific expressions simultaneously of the physical and mental character of the universe” (p. 69, italics added). And, “Each of our lives is a part of the lengthy process of the universe gradually waking up and becoming aware of itself” (p. 85). His solution, then, is an explicitly panpsychic world view, and I am sure most scientists would wonder just how that hypothesis could be investigated or proven. That Nagel seems to have a seriously inadequate knowledge of science and fails to provide any specific evidence or examples in support of his notion is not likely to inspire their enthusiasm.

Nagel does, however, share at least one prejudice with the Neo-Darwinists: he is too tied to the concept of “fitness” (though in his case rather naively understood), particularly to the idea that fitness, or natural selection, is logical. Nagel believes that natural selection is the engine that drives evolution and therefore that every new development must have arisen to serve a purpose, a belief that does logically lead to a teleological view of evolution. Thus, because evolution led to human consciousness, it must be true that “the propensity for the development of organisms with a subjective point of view [consciousness] must have been there from the beginning” (p. 61). But in a nice U-turn, he questions the likelihood that consciousness arose because it had strictly survival value: “Is it credible that selection for fitness in the prehistoric past should have fixed capacities that are effective in theoretical pursuits [today] that were unimaginable at the time?” (p. 74), and “It is not easy to say how one might decide whether this could be a manifestation of abilities that have survival value in prehistoric everyday life” (p. 77), such as, one would imagine, on the savannahs of ancient Africa or in the caves of Neanderthal Europe. These last two quotations might lead one to believe that Nagel is now arguing against fitness as the driver of evolution, but in fact he is not, except in the sense that he is turning it on its head; by agreeing that fitness drives material evolution, he is able to argue that Neo-Darwinian evolution is incomplete, that there is something more, indeed much more, than natural selection and matter at work. There is the “mental character” of the universe, the Great Pan-Psyche, of which we are the conscious expression.

Nevertheless, just as Nagel correctly points out that the existence of consciousness is obvious and self-evident and therefore must be a big part of that which any adequate theory claims to explain (and not explain away), so too is he correct in pointing out that natural selection or fitness is not adequate to explain consciousness; and because it is not, those who hold to it as sacred doctrine must inevitably deny the existence of that which is obvious and self-evident. (Again, the anti-tautology.) So perhaps it is time to get rid of, or at least demote, the doctrine of natural selection.

And given that natural selection, or the survival of the fittest, is a cultural construct with a well-known cultural history, getting rid of it ought to be easy (though in fact it is not). Many have noted that Darwin did not originate the expression “survival of the fittest” and did not use it in the earlier editions of On the Origin of Species; it was instead coined by Herbert Spencer, a founder of sociology and one of the roots of the Social Darwinist movement. But Darwin did incorporate it in a later edition of his book because he heard in it a nice nutshell expression of a basic tenet of his theory. Both Spencer and Darwin were products of a Great Britain near the height of its imperial powers, and it was common among English gentlemen to view themselves as superior beings, blessed either by God or by evolution to rule over the inferior masses. Darwin made occasional references to such superiority in his notebooks and often described native peoples he encountered on his famous voyage in deprecatory terms. Not even great geniuses can set aside their biases (look at Aristotle’s justification of slavery for another example), and so it should not be surprising that the biases of 19th-century English gentlemen should affect their views on evolution, and that these gentlemen should emphasize the idea of fitness over other mechanisms of evolution (especially since they had no knowledge of genetics). Survival of the fittest suited their biases too well to be modified.

“Fitness” does entail notions of teleology as well as whispers of some kind or degree or standard of “perfection” to which all life either progresses or conforms.  Those organisms that do not survive or that go extinct after a brief day in the sun are “unfit,” are in some way therefore weak, imperfect, deserving of their fate.  A kind of Platonic materialism seems to be at work in a great deal of neo-Darwinian thinking.

But there really is no reason to view natural selection or survival of the fittest as the engine that drives evolution. One could as easily, and perhaps more logically, view mutation as the engine, with natural selection as the brakes (if you are noticing the metaphors, good for you!). In other words, evolution will tolerate whatever mutation can throw into the world, to whatever extent the variety of circumstances will permit, without regard to logic or telos or whatever else human thinkers may wish to propose. Life mutates in ways both splendid and mundane, pushing to the very limits of survival, yet with no premonition of the new life forms that are yet to come. During the Mesozoic era, the age of the dinosaurs, life was at least as various and elaborated, as cruel and beautiful, as it is today, and it was only time, not telos, that eventually ushered in our own very different world.

Both Nagel and the hyper-reductionists whom he attacks want the same thing: a universe, and an evolutionary process, that makes sense. He believes, without any evidence to support that belief, that since we are the product of the universe it stands to reason that we should be able to understand it; hence his repeated use of such terms as “intelligible,” “likely,” and “credible.” Both Nagel and his opponents also want an essentially spiritual explanation of human consciousness. The hyper-reductionists deny the spiritual dimension and therefore deny the possibility of consciousness, along with all that consciousness entails (free will, etc.). Though they claim to be materialists, they have no faith in matter; thus spirit is present by its denial (which might explain why some of them have written quite virulent books attacking religion). Nagel, on the other hand, denies the possibility that matter can think and therefore adds back in a barely disguised spiritual dimension (throughout the book he reiterates that he is an atheist, but perhaps he is one only in a Western sense). The motive in both cases is a broadly religious one—like it or not.

However, if we return to Nagel’s point that consciousness is an obvious and self-evident fact, and if we accept that the universe is matter with no spiritual dimension or God, then we must reach the obvious conclusion that matter sufficiently organized can think and be conscious. And we can do so 1) without concluding that what it thinks must be correct or adequate to explain the universe or even itself, 2) without concluding that the universe must be intelligible, either now or at some time in the future, and 3) without concluding that the universe has a purpose or that evolution is teleological. Time is all the teleology there is.

Sam Harris and Free Will

Sam Harris’s Free Will: An Exasperated Review

Some books are so bad they defy refutation. Sam Harris’s short book on free will comes close to being one of those books. It is rife with naïveté and self-contradictions, and it proffers trivia and hypotheticals in place of evidence. In these respects, it is like his earlier book The Moral Landscape, which covered much of the same ground at greater length.

As one reads this book, one begins to wonder just what definition of free will Harris is talking about, and one finally finds out (sort of) on page 30 (of a book that is only 66 pages long), where he refers to the “popular” version of free will. This popular version, which appears to be Harris’s target (why?), is indeed a very naïve one, entailing that free will must be conscious and operate somewhat like a syllogism. He writes that to actually have free will, “you would need to be aware of all the factors that determine your thoughts and actions, and you would have to have complete control over those factors” (italics added). I have never before encountered such a view of free will; if Harris is correct, this stands as a major insight, never before thought of in the whole history of theology and philosophy. Of course, no one can be, or would want to be, aware of all the factors that determine his thoughts and actions, nor can anyone imagine having complete control over them, or wanting to. One would be so entangled in what he elsewhere refers to as introspection, and so fraught with indecision as to what kind of control to exercise, that making any kind of decision, let alone the kind that makes the question of free will interesting, would not be possible. To devote even a short book to this childish, magic-wand kind of free will seems a tragic waste of time.

The trivial nature of his definition is seconded by the triviality of his evidence. He discusses choosing between coffee and tea as an example of how our choices are not freely willed but determined by prior causes, particularly by events in the brain (p. 7). I can imagine the internal struggle unfolding in a Starbucks, but not in the Garden of Gethsemane; he seems to be talking about the “free will” of a rather shallow character, a consumer. Free will is a profoundly ethical concept, but alas it is not surprising that in our contemporary society of entertainment and shopping it has come to mean no more than consumer choice.

As in his earlier book, Harris trots out the experiments of Benjamin Libet, who found that, when subjects were asked to push one of two buttons, scans of their brains showed that the decision was made measurably before the subjects were consciously aware of it. This is a trivial fact, one might even say a factoid, and again has nothing to do with free will as an ethical problem. It is more akin to reflex action than to the often drawn-out process of deciding what to do when faced with a moral dilemma, the kind that might cause a President to pace the Oval Office floor. Perhaps Harris does not discuss examples of the latter sort because such situations cannot be reduced to simple cause and effect or measured by brain scans and therefore do not fall within the purview of science, in the purely reductionist way that Harris understands it. If physics offers us anything helpful in addressing this question, it is its understanding that the whole is not describable in terms of the particle. We may be made up of atoms, but we are not atoms and don’t act like them.

Speaking of brain scans, Harris also indulges in hypotheticals that are supposed to buttress his thesis but do not. He writes, “Imagine a perfect neuroimaging device that allow[s] us to detect and interpret the subtlest changes in brain function.” Well, we are free to imagine just about anything, from unicorns to warp speed, but to imagine something is not to establish its existence, and since we do not have such a machine, we cannot reach any conclusions from its operations. Sure, it might be that such a machine would show that “the experimenters knew what you would think and do just before you did,” but until such a machine comes along we cannot conclude from its imaginary existence that free will “is an illusion” (p. 11). That “We know that we could perform such an experiment, at least in principle,” which would then “directly challenge [our] status as conscious agents in control of [our] inner lives” (p. 24), is, in principle, a fairy tale. If we had a time machine, we could, in principle, travel backwards and forwards in time; but for now, in principle, we cannot travel through time.

I think that free will is most likely a cultural artifact rather than an innate trait conferred by genes or souls or whatever; it is something that we human beings, through the power of symbolic language, have created for ourselves, just as much as any other cultural artifact, whether a social structure or a work of art, and it is every bit as real as those things. It is both an act and an attitude—we choose to do one thing rather than others, and we choose to accept or reject responsibility. Historically, it has preoccupied the Christian West more than it has other cultures. The version of free will Harris denies is a particularly Western, Christian one. Apparently, he cannot conceive of any version of free will that does not first assume a soul; his language gets confused when he speaks of “your brain” vs. “you” (p. 9); he seems unclear about how one could be introspective and yet without free will; and in a single sentence he both denies and affirms free will: “This understanding reveals you to be a biochemical puppet, of course, but it also allows you to grab hold of one of your strings” (p. 47).

It’s that one string that makes all the difference. Of course, we are limited by the circumstances of our birth, by the social class we grow up in, by the vagaries of health and accidents, by the climate, by the other human beings who surround us, by the brevity of our lives; but we are also capable of acting within these circumstances rather than always and only reacting. We do not need a magic wand, and we do not need to make a Faustian pact with the devil, to exercise free will. In fact, free will would be meaningless without those limiting circumstances, for it is only in the real world that free will can present itself.

See also BigQuestionsOnline