Neuroscience, Determinism, and Free Will: A Review of Gazzaniga’s “Who’s in Charge?”

          Michael S. Gazzaniga’s latest book “Who’s in Charge? Free Will and the Science of the Brain” (HarperCollins) attempts to counteract the increasingly widespread notion that contemporary neuroscience research proves determinism and the absence of free will, particularly in terms of personal moral responsibility.  As he states in his introduction, “Beliefs have consequences,” and belief in determinism leads us “to not hold people accountable for their actions or antisocial behavior” (4).  He never makes quite clear who exactly is not holding people responsible, and one suspects a bit of a straw man this early in the book.  Nonetheless, his point that neuroscientists (unlike, for example, physicists) tend to be deterministic seems true to anyone who has recently read some of the more accessible popular books on the subject, and as a practicing and prominent researcher himself, Gazzaniga knows whereof he speaks.  The introduction also makes clear that he does not believe in reductionism:  “the physical world has different sets of laws depending on what organizational layer one is looking at” (6).  Just as the laws of atoms do not apply to the bodies they compose, so the firing of neurons is not equivalent to the mind.  As he will be at some pains to demonstrate, the mind is emergent, a whole that is more than the sum of its parts.

          The body of the book falls into two parts: chapters 1 through 4 provide basic information on what neuroscience does know about how the brain works, including both its abilities and its limitations; chapters 5 through 7 attempt to use that knowledge to explain or at least further clarify our concepts of free will and moral responsibility, including whether or not neuroscience should inform law.  Readers will find the first four chapters highly informative, interesting, and even a bit surprising; they are precise and well written, mainly because in these chapters Gazzaniga, a prominent neuroscientist and professor at UC Santa Barbara who is perhaps most famous for his pioneering split-brain research, is in his natural element.  They will experience more skepticism as they read the last three chapters, primarily because there Gazzaniga ventures beyond his field of expertise.

Refreshingly, Gazzaniga does not fall into the common trap of believing that we can learn very much about how the human mind works by studying animal brains and behavior.  He makes the point that research into the brains and behavior of a limited number of model species such as rats or monkeys provides only a weak basis for more general statements about the human brain.  Brain plasticity, the stimulation of the growth of neuronal connections, and culture create the social, emergent mind that characterizes human beings.  Brain organization is more important than brain size, and the human brain is organized quite differently from the brains of other animals.  What is particularly interesting is that he shows that the chimpanzee, that favorite “first cousin” trotted out to prove how little distance there is between us and animals, is not really close enough to carry such weight.  The common ancestor of both humans and apes was emphatically not a chimp, and we are not descended from chimps; “the chimp itself has undergone more evolutionary changes since the divergence than has been previously appreciated” (25).  In other words, the chimp veered off in its own direction—it did not evolve in any sense parallel to humans.  It is therefore of more limited value in helping us to understand ourselves than is popularly (or even scientifically) believed.

Chapters 2 and 3 provide some of the most interesting information in the book, drawn largely from Gazzaniga’s own decades of research with split-brain patients (men and women who have undergone surgery to sever the anterior commissure and the corpus callosum in order to relieve the seizures of epilepsy).  As is now commonly known, the brain consists of two hemispheres, what Gazzaniga calls brain left and brain right, each processing certain tasks and abilities of its own.  The left brain is responsible for, among other things, language, while the right brain, again among other things, dominates visual processing.  Gazzaniga and his associates discovered this asymmetry of brain function by means of a number of ingenious experiments on post-operative patients, who otherwise would strike us as perfectly normal in their cognitive and perceptual function.  It is the left hemisphere that makes inferences, creates explanations and narratives to interpret events, and thereby gives us the “illusion of self” (passim).  It “tries to figure out a system” and “creates a theory to explain” events; its explanations, Gazzaniga believes, are post hoc or after the fact.  “The right hemisphere,” on the other hand, “is poor at making inferences” (62).  It also “leads a literal life”; to the right hemisphere, “A box of candy . . . is [just] a box of candy.”  It is the left hemisphere that “can infer all sorts of things from this gift” (63).  The common notion is that the right hemisphere is the creative side and the left hemisphere is the rational side, but Gazzaniga’s description, to me at least, suggests that the opposite is true.
All those fact-obsessed, literal-minded stereotypes out there, like the character of Temperance Brennan on the television series “Bones,” are right-brain dominant, not left-brain, while the character of Seeley Booth, the FBI agent who works from “gut instinct” and makes inferential leaps, is the left-brain dominant character.  Interestingly, while the common stereotype of the rational left-brain individual is usually male and the stereotypical intuitive right-brain individual is usually female, the show keeps the gender/hemisphere identification but reverses the rational/intuitive cognitive strategies.  Make of that what you will.  Gazzaniga does state, however, that the left hemisphere arrives at its inferences by the “application of logical rules and conceptual knowledge to the interpretation of events” (99), but he also points out that the left brain fills in the holes (i.e., what is not actually there perceptually) in order to create its interpretations or explanations.

Gazzaniga also subscribes to the theory that the brain, taken as a whole, is a system of modules with multiple subsystems, and that it is a complex system, which he helpfully defines as “composed of many different systems that interact and produce emergent properties that are greater than the sum of their parts and cannot be reduced to the properties of their constituent parts” (71).  “The common characteristic of all complex systems is that they display organization without any external organizing principle being applied” (72).  Therefore, to know the mind requires more than knowing the functions of distinct modules of the brain.  What, then, pulls everything together and creates the mind, particularly the sense of self and the concomitant sense of personal responsibility?  In part, it is the ongoing life narrative created by the left hemisphere, but just as important is the social context in which the brain develops.  No man, as the old saying goes, is an island; if you were the only person in the world, neither the notion of self nor the concept of responsibility would have any meaning.  You wouldn’t know very much, either, only that which is physically immediate to your body, and of course you wouldn’t have any need to know any more than that:  no art, no philosophy, no ethics or science, no history or politics.  The human mind, one might say, is an emergent property of a culture, a shared mind rather than an individual or solipsistic one.  This “social mind,” as the fifth chapter is titled, has not been much studied, Gazzaniga believes, because of the American individualistic bias; individual brain scans, for example, cannot detect the social mind.  As a section of chapter 4 is titled, “You’d never predict the tango if you only studied neurons.”

All this sounds very encouraging to those of us who have remained skeptical of reductionist explanations of the mind.  But alas, Gazzaniga falters when he leaves his own territory and wanders into the vast wasteland of contemporary evolutionary speculation.  A warning sign occurs on the first page of chapter 4:  “Puncturing this illusory bubble of a single willing self is difficult to say the least.  Just as we know but find it difficult to believe that the world is not flat, it too is difficult to believe that we are not totally free agents” (105).  Oh boy!  There’s more to unpack from this statement than initially meets the eye.  For one thing, I have never encountered anyone who found it difficult to believe that the earth is not flat, nor do I recall myself ever having believed that it was, having been introduced to the fact that it is round (roughly) so long ago; every classroom of my childhood contained a large model globe which it was fun to spin.  There is also the problem, which recurs in later chapters of the book, of just exactly who “we” is.  But if we break down the global category of “we” into its many constituent parts, we (you and I, the readers of this book) will find many sourpusses across all of history and among all cultures who readily believed that human beings are not “totally free agents”; in fact many people have been determinists, on grounds other than neuroscientific experiments, rather than either believers in free will or individualism (predestination, anyone?).  Further, what being a totally free agent might in fact mean has been debated extensively by philosophers, literary artists, and theologians, both professional and armchair.  There are also, by the way, plenty of examples of the dire consequences of believing too fervently or literally in total free agency:  Macbeth and Faust spring immediately to mind, as do Prometheus and other assorted Greek heroes who presumed to defy Fate or the gods.
(Not surprisingly, the Greeks had a word for it:  hubris.)

A further problem emerges in Gazzaniga’s example of reflex action, an example oddly conflicting with his disparaging remarks about the experiments conducted by Benjamin Libet.  On page 114 Gazzaniga offers the example of smashing one’s finger with a hammer.  Our reflex action is to pull away our hand instantly, without thinking, and we feel the pain only after the fact of the hammer striking the finger.  So far, so good.  But then he asserts that, post hoc or after the fact, we explain that we felt the pain and then pulled away the hand.  Thus, pulling away the hand was done unconsciously, without conscious choice, and therefore this reflex action illustrates that we do not have free will.  Well, I have smashed my fingers with a hammer many times, not to mention in doors, drawers, etc., and I have pricked and burnt various body parts, stubbed my toes, and otherwise encountered recalcitrant objects and reacted reflexively.  I have never explained these occurrences as Gazzaniga suggests; indeed, I have always noted that the pain from such injuries is delayed, and I do not believe I am more aware of this sequence than anyone else.  His example not only rings untrue to experience, it is too trivial to have anything to say about free will or personal responsibility.  (It may also be worth noting that anyone can train him- or herself not to react reflexively, as when preparing for a shot or challenging another to a contest to see who can hold out the longest against a deliberately inflicted pain.  And it is common practice not to react reflexively to psychological pains, such as insults or your mother’s query as to when you are going to get married and present her with a grandchild—it’s called civility.)

It is when he attempts an evolutionary explanation for the present state of the human mind that his input is insufficient to his thesis and his interpreter kicks into overdrive and fills in a lot of gaping holes; allow me to excavate a few of those holes.  Like many amateurs of human evolution, Gazzaniga is impressed by the idea that humans evolved on the African savannah and are therefore poorly adapted to our contemporary densely populated urban lifestyles.  “For most of human history, food sources were widely scattered, and these small groups were nomadic.  It has not been until very recently that the population has become dense, which all started with the development of agriculture and the change to the sedentary lifestyle. . . . As population density increased, the second stage kicked in: adaptations for navigating and managing the increasingly populated social world” (146).  In other words, the development of denser societies preceded and precipitated the development of bigger, more complexly organized brains.  So, it appears, evolution of the brain continued after agriculture was invented.  There are several problems with this view, perhaps the most important of which is that agriculture arose independently in several different, distant areas and at different times.  It was also unevenly applied; for example, the great Meso-American civilizations were intensively agricultural while the cultures of the North American plains remained largely hunter-gatherer (although they did exercise some agriculture-like controls on their environments).  Thus, some peoples were highly settled and agricultural, others were of mixed lifestyle, and still others remained largely dependent on hunting and gathering.  One might also question just how “sedentary” agriculture was, given the amount of labor involved; it was only with the very recent development of industrial agriculture that the physical labor of most people was no longer necessary for agriculture.
In fact, industrialization continues today, as when Chinese peasants abandon the countryside for the cities and factory jobs.  All this is to point out that the genetic changes necessary for biological evolution of the brain could not have had time to permeate the entire human population.  The human brain had to be already sufficiently evolved to invent and culturally adapt to the new man-made social environment created by complex agricultural civilizations, not the other way around.  This was understood two thousand years ago by Lucretius, who termed it inverse reasoning, or putting the cart before the horse, as a recent translation (by A. E. Stallings) expresses it.

That Gazzaniga’s understanding of evolution is simplistic is further illustrated by his use of the unfortunate metaphor of “the course of human evolution” (151).  The word “course” has several meanings, all of which imply a direction or telos.  A river or stream follows an inevitable course to a lake or the ocean, i.e., it has a destination; a college course builds to a conclusion; and a race course or golf course leads to a final goal, the finish line or the eighteenth hole.  Evolution follows no course; it has no goal.  The impression that it does is a creation of the left-brain interpreter, filling in the gaps and making logical and narrative sense of a process that extends over a great span of time, most of that span being invisible to human perception.  Gazzaniga’s use of this metaphor not only implies a telos; it also implies that the human species evolves in lockstep, whereas evolution sorts on the level of the individual:  some individuals survive and successfully reproduce, thus influencing the evolution of their descendants, while others fail to reproduce, thus having no influence; this process is repeated with each descendant generation, i.e., some individual progeny of successful ancestors survive to reproduce, others don’t.  Eventually a new species, made up of individuals who successfully reproduce more often than not, appears.  If that new species happens to be Homo sapiens, human individuals and small groups expand around the globe, and separately and independently devise technologies for surviving in a variety of environments through culture, without tinkering all that much with the basic genetic profile of the species.  The “evolution” of technologies (e.g., agriculture to industry) with their attendant social changes can be sufficiently explained by culture (Gazzaniga’s social mind) and does not require an evolutionary (i.e., genetic) explanation.

In at least one instance Gazzaniga sails perilously close to Social Darwinism, even eugenics—one hopes inadvertently.  Without a scintilla of skepticism, he cites the idea of Michael Tomasello and Brian Hare “that we have been domesticating ourselves over thousands of years through ostracizing and killing those who were too aggressive, in essence removing them from the gene pool and modifying our social environment” (182).  There is not a scintilla of evidence that any such selective breeding program has ever occurred; indeed, many societies have encouraged aggressiveness, especially among their warrior class.  There is again the problem of who “we” is.  He seems to mean all human beings, or perhaps all societies; or perhaps he means there is only one human society that encompasses all of “us.”  But clearly, even if one or some societies did ostracize and kill those they deemed too aggressive (just how aggressive is too aggressive?), some others would not have, while still others might have encouraged aggressiveness.  Anthropologists have, for example, noted that in some Amazonian societies, those adult men who have killed other men tend to have more wives and children than those men who have not.  Whether or not that constitutes selective breeding for aggressiveness has not been determined.

Despite these caveats, “Who’s in Charge?” is a fascinating and informative book.  It shines in those chapters dealing with Gazzaniga’s own field of expertise, and while it falters when he attempts to apply the findings of his research and specialized knowledge to broader issues of free will and personal responsibility, that faltering is in itself of great interest.  It will drive the skeptical reader back to the proper provinces of those issues, to the disciplines of philosophy, ethics, history, psychology, and culture which, for now at least, provide the deepest and best explorations of our questions of meaning.


