Evolutionary Just-So Stories

Although I am convinced that evolution occurred and that Darwin’s theory of how it occurred is the best explanation we have so far, I nonetheless question speculation by scientists and journalists alike when they propose explanations or scenarios in the absence of evidence.  Below are some examples of wishful thinking, of just-so stories, that mislead as to what we actually know about the evolutionary past and about the evolutionary causes of contemporary observable behaviors and traits.

Fabulous Animals, Ordinary Humans

“Chaser, a border collie who lives in Spartanburg, S.C., has the largest vocabulary of any known dog. She knows 1,022 nouns, a record that displays unexpected depths of the canine mind and may help explain how children acquire language.” –Nicholas Wade, New York Times, 17 January 2011

          There has been a lot of this sort of thing going around, at least since researchers began claiming that Washoe and other apes had successfully been taught sign language (ASL) and could communicate with humans and, in some instances, with each other.  Similar claims have been made about animals of other species, such as Alex the African grey parrot.  There are many articles, books, and reports, including video, on these fabulous animals and the claims their handlers make for them, as well as plenty of skeptical and debunking analyses by others, so I will not review or summarize them; interested readers can easily find information on these controversies on their own.

          While I am skeptical to some degree about the claims that animals understand and can use language, I am more interested in the premise that these animals’ abilities show that the gap between them and humans is smaller than has traditionally been believed and that language is not as distinctively human a trait as we once thought.  In fact, ape language research is an important factor in the arguments some people make that apes should be accorded the same moral status as human beings.  Apes have been described as “at the brink of the human mind,” as the title of a book on Kanzi, a bonobo studied by Sue Savage-Rumbaugh, puts it; consequently, many have concluded that we can discern the origins of human behavior, including language, by observing ape (and to a lesser extent, other animal) behavior.

          The controversies swirling around these experiments have tended to focus on whether these animals really understand and can use language, which entails the question of how to define language, supposedly no easy task.  However, it seems to me that we can assume these experiments show what their advocates claim they do:  that (some) animals can learn at least a rudimentary form of language, and that these experiments do have something to show us about human language and therefore about what it means to be human, though not in the way either advocates or skeptics have so far thought.

          Alex accumulated a vocabulary of 150 words.  Chaser can recognize 1,022 nouns (although she cannot, of course, utter them).  And various apes have been taught to respond to signing and can use signs to make simple requests for food and other items.  Generally, these animals seem able to achieve the linguistic level of a two-year-old human child.  How impressive!

          It takes years of intensive training by many handlers in highly controlled environments to achieve such heights.  Alex was in training for 30 years; Chaser underwent four to five hours of focused training per day.  A great deal of effort is required to get an animal to achieve what a human infant picks up with ease.  Furthermore, while even the cleverest, most intensively trained animal learns a limited vocabulary of concrete words, humans command much larger vocabularies that include prepositions, inflections, adverbs, and articles; master grammar and syntax; create novel sentences; hold conversations; give lectures; talk about abstract concepts; and share ideas and pass cultural knowledge from one generation to the next.  They can also compose poetry, use and quickly understand metaphors, and speak of things past and future as well as wholly nonexistent.  They can do all these things before reaching adolescence, and as adults, especially educated adults, they can do them at a very high level of sophistication.  None of this can any animal do.

          And while innate ability (a suitable brain) is the foundation of our linguistic tours de force, language is not purely instinctive.  Language acquisition requires a culture in which the child learns and in which the language makes sense.  This is suggested by the animal studies themselves:  None of these animals would have achieved so much without a prior human language to teach them and without highly skilled and dedicated human trainers working intensively with them over many years.  These animals do not naturally or easily acquire language, and, unlike a human learning a foreign language, they have no language of their own to which they can compare the human language they are (barely) learning.  Nor do they coin neologisms or say novel things.  They are trained, not educated.

          How much animals can learn through intensive training, and how little, says much about the differences between humans and animals; so do the rare cases of so-called feral children, such as the California girl named Genie, who was found at age 13 locked in a barren room.  She had not been talked to in all her 13 years and could not speak when authorities found her and removed her from her prison.  Despite subsequent attempts to teach her to talk, she learned no more than a minimal vocabulary and virtually no grammar.  Having missed growing up, in her first years, in an environment of language and human interaction, she had passed the point at which her brain could be appropriately stimulated to “wire” it for language.  Because human beings lack a true language instinct, the only way a human brain can be programmed for language is through learning within a cultural or social context.

          Most other distinctly human behaviors are learned rather than instinctive.  Great pianists begin their piano lessons as young children.  Great athletes begin their sports equally young.  Try turning a 20-year-old with no previous musical training into a good pianist, or one with no previous athletic experience into a star basketball player.  Innate talent leads to nothing, or at most to very mediocre accomplishment, if it is not cultivated from an early age.  It is culture, developed slowly over the millennia, that has made human accomplishments possible.  Without that cultural heritage, we would be not only less than human but less than animal.

Lamarckian Teleology

I read a lot of books and articles on evolution and, as a former English instructor, have paid a lot of attention to how they are written:  style and phrasing, tone, even how well they have been proofread (a dying, if not dead, art).  I have been struck repeatedly by how frequently supposed Darwinists or Neo-Darwinists use language that sounds more like what I will call Lamarckian Teleology than like true Darwinism.  The latter posits that evolution occurs as a result of natural selection working on the effects of random mutation; in other words, there is no design or purpose in the process.

One would think that Darwinists could explain evolution, even to a popular audience, in language that reflects their understanding of how evolution works, but oddly they habitually use language that undermines that understanding.  Aside from the frequency with which they use the word “design” to describe features of living things (e.g., the wings of a bird are designed for flight), even in extended descriptions of evolutionary processes they use language that implies not only that evolution has a purpose but that it operates in Lamarckian ways:  features of living things arise in response to their environments.  That is to say, the environment happens first; the feature develops in response.  This is what I mean by Lamarckian Teleology.

An example comes from a fascinating recent book, “Almost Chimpanzee” by Jon Cohen (Times Books, 2010), a book I otherwise recommend.  On page 225, Cohen discusses the reason that the sagittal crest occurs in apes but not in humans.  He quotes Alyssa Crittenden, a researcher at UCSD, as saying, “The argument is that we don’t need the same muscles because of the shift in our diet.”  Cohen continues:  “when we began to mechanically process food—pounding meat, for example—and, later, cook, it reduced the need for large masticatory muscles.”

O.K., let’s look at the phrasing:  Which came first, the loss of the sagittal crest or the mechanical processing of food?  Answer:  Mechanically processing food came first, which then reduced the need for powerful chewing muscles, which then led to the disappearance of the sagittal crest which was no longer needed to anchor the muscles.  That is, our behavior changed first; the genetic change (a mutation to the MYH16 gene) followed.

Really?  Isn’t that Lamarckian?  Isn’t it teleological, in the sense that, somehow or other, the genes knew to mutate because the crest was no longer needed?  I’m not sure this is the impression either Cohen or Crittenden meant to create, but the language here certainly gives it.  And such language plays right into the creationists’ hands:  “How could such a thing happen without Design?” they might well ask.

If Cohen and other writers on evolution wish to avoid the impression that such language creates, they need to think harder about what they are writing.  They need to bring their words and their overall descriptions into line with what they really believe.

So let’s try:  The sagittal crest disappeared about 2.4 million years ago (the dating Cohen gives) as a result of a mutation to the MYH16 gene.  There was no reason for this mutation—it just happened—and ordinarily it would have worked against the reproductive success of those individuals affected by it.  Perhaps similar mutations had occurred previously and the affected individuals had been unable to obtain sufficient nutrients to survive long enough to reproduce and pass the mutation down to their descendants.  But this time it proved not to be a liability, as the population to which the affected individuals belonged had evolved sufficient intelligence and manual dexterity to figure out how to pound meat (and perhaps fibrous vegetables) to make it easier to chew.  Perhaps the mother of one of these affected individuals was the first clever enough to pick up a stone and pound meat for her child, thus ensuring that it survived to reproductive age; exactly how this happened we will never know.  But whoever first pounded meat to make it easier to chew and digest guaranteed the survival and reproductive success of the individuals affected by the mutation, and eventually, for reasons still shrouded in mystery (perhaps the meat-pounders had greater success in raising their children to adulthood than did non-meat-pounders), the mutation spread until it predominated in the human population.  Clearly, however, there would have been no motivation to pound meat in the first place unless and until someone was lacking the sagittal crest.  Why waste energy and time on an unnecessary procedure?

Granted, this way of describing why we lost the sagittal crest takes longer—but in my view, it is also more in line with Darwinian evolutionary theory and less likely to mislead readers into ascribing intention to a nonintentional process.  As Ernst Mayr wrote in “What Evolution Is,” selection of a trait is the second step in natural selection; the first step is “the production of the variation that provides the material for the selection process, and here stochastic processes (chance, contingency) are dominant.”  Let all popularizers of evolutionary theory read Mayr before embarking on their own tomes!
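To make Mayr’s two-step concrete, here is a toy simulation in Python.  Every parameter in it is invented for illustration (the population size, mutation rate, and fitness values are arbitrary, and the “crest-losing allele” is a hypothetical stand-in, not a model of the actual MYH16 history):  variation arises by chance whether or not it is needed, and selection can act only on variants that already exist.

```python
import random

# Toy two-step sketch of natural selection. Everything here is invented
# for illustration; it is not a model of real hominin demography or of
# the actual MYH16 mutation.

POP = 500          # individuals per generation
MUT_RATE = 1e-4    # chance that any given offspring gains the mutation anew
GENERATIONS = 2000

def run(pounding_culture: bool, seed: int) -> float:
    """Return the final frequency of the hypothetical crest-losing allele."""
    rng = random.Random(seed)
    # Assumed fitnesses: carriers benefit only if food-pounding already
    # exists in the culture; otherwise weak jaws are a handicap.
    fitness = 1.05 if pounding_culture else 0.95
    carriers = 0
    for _ in range(GENERATIONS):
        freq = carriers / POP
        # Step two (selection): each offspring draws a mutant parent with
        # probability weighted by fitness. Step one (chance): fresh
        # mutations appear regardless of whether they are "needed."
        p_mutant_parent = freq * fitness / (freq * fitness + (1 - freq))
        carriers = sum(
            1 for _ in range(POP)
            if rng.random() < p_mutant_parent or rng.random() < MUT_RATE
        )
    return carriers / POP

print("with a pounding culture:   ", run(True, seed=1))
print("without a pounding culture:", run(False, seed=1))
```

Run with a food-pounding culture already in place, the chance mutation tends to spread; without one, it stays rare.  Same mutation, opposite fates, and no foresight anywhere in the process.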

The Wind in the Grass

          Consider the following scenario:  Two ancestral humans are foraging on the ancient African savannah.  The grass is tall and obscures their view of their surroundings.  As they forage, they suddenly hear a rustling in the grass a short distance away.  One human assumes it’s a lion and flees; the other assumes it’s the wind and stays put.  Alas, it is a lion, and the second human falls prey to the predator.  Ipso facto, the human who fled successfully breeds and passes on his or her “flee” genes to subsequent generations; the genes for “wait” are not passed on.

          This story was recently repeated, at greater length than I have given it here, in a New York Times article by James Gorman, who was attempting to give a Neo-Darwinist explanation for the conspiracy theories surrounding the mass deaths of blackbirds in Arkansas and Louisiana.  Gorman cited Michael Shermer, founder of Skeptic magazine and author of How We Believe, as one source of this scenario, which is supposed to explain how pattern recognition can deceive as often as it illuminates and which, in a leap no logician would tolerate, is also offered as the basis for religion.  Shermer, however, is by no means the originator of this explanation for superstition and religion, nor the only writer to build a mighty edifice on so thin a foundation.

          At least Shermer bothers to note that pattern recognition of this type predates the emergence of human beings and their predecessors.  As he states, “This process is called association learning, and it is fundamental to all animal behavior, from the humble worm C. elegans to H. sapiens,” a point Gorman fails to mention.  In other words, the association between rustling grass and the likely presence of a predator was locked into the genes long before human beings would have needed to develop such a survival mechanism.  Humans, in other words, did not have to reinvent this particular wheel. 
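          For readers unfamiliar with the term, the sketch below illustrates association learning with the Rescorla-Wagner rule, a standard textbook formalization of it (the learning rate and outcome value are invented for illustration):  an association strengthens incrementally with each paired occurrence of cue and outcome, no reflection required.

```python
# Minimal Rescorla-Wagner sketch of association learning. The numbers
# are illustrative only; the learning rate folds together the usual
# alpha (cue salience) and beta (outcome salience) parameters.

LEARNING_RATE = 0.3  # assumed salience of the rustle-predator pairing
LAMBDA = 1.0         # maximum association the outcome can support

strength = 0.0  # association between "rustling grass" and "predator"
for trial in range(1, 11):
    # Learning is driven by prediction error: outcome minus expectation.
    strength += LEARNING_RATE * (LAMBDA - strength)
    print(f"trial {trial:2d}: association = {strength:.3f}")
```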

          All of which means that the scenario is false, a fable, a just-so story:  both individuals would already have inherited, from the many generations of creatures before them, the tendency to flee rather than wait and see.  Unfortunately for the thesis that patternicity of this type provides an evolutionary explanation for religion, there is no evidence that any other creature has ever made the enormous leap from fear of predators in the grass to gods in the heavens.

          Furthermore, contemporary animal behavior suggests that the scenario’s plot is incorrect:  Animals do not generally flee merely at the rustling of grass.  If they did, they would never do anything but flee, as grass is always rustling, especially on savannahs and plains where the wind blows unobstructed by stands of trees (note the windbreaks early settlers planted on the American plains to shelter their homesteads) or elevations in the land.  Unless the rustling grass or snapping twig is very close, triggering a reflex to jump and run at least a short distance, animals usually survey their surroundings and sniff the wind to determine whether a predator is in fact within striking distance.  They cannot afford to waste energy on constant pointless fleeing; they also know, consciously or instinctively, that fleeing can make them more of a target than staying still:  many predators are triggered into chasing moving targets, which are easier to see than creatures that freeze and blend into the background.  In other words, animals do in fact have “baloney detectors,” despite Shermer’s (and others’) assertion that they (and by extension, we) do not.
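          The fable’s arithmetic can also be checked directly.  In the toy calculation below every number is invented; the point is only that whether “always flee” beats “assess first” depends entirely on cost assumptions the fable never states, and that on a savannah where grass rustles hundreds of times a day, reflexive flight can easily be the losing strategy.

```python
# Back-of-the-envelope check on the wind-in-the-grass fable. Every
# number below is invented for illustration.

RUSTLES_PER_DAY = 200       # grass rustles constantly on a windy savannah
P_LION = 0.001              # assumed chance a given rustle is a predator
COST_FLEE = 1.0             # energy spent (and attention drawn) per flight
COST_ASSESS = 0.1           # cheap: scan the surroundings, sniff the wind
COST_CAUGHT = 10_000.0      # proxy for death
P_CAUGHT_IF_FLEE = 0.02     # fleeing movement can itself trigger a chase
P_CAUGHT_IF_ASSESS = 0.05   # residual risk while pausing to assess

always_flee = RUSTLES_PER_DAY * (
    COST_FLEE + P_LION * P_CAUGHT_IF_FLEE * COST_CAUGHT
)
assess_first = RUSTLES_PER_DAY * (
    COST_ASSESS + P_LION * P_CAUGHT_IF_ASSESS * COST_CAUGHT
)

print(f"always flee : {always_flee:6.1f} cost units per day")
print(f"assess first: {assess_first:6.1f} cost units per day")
```

Under these assumed numbers, assessing first costs half as much per day as always fleeing; different assumptions would reverse the result, which is precisely why the fable, told without numbers, proves nothing.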

          Scenarios of this type (and they are legion) are mere fables with little or no basis in fact.  They are works of the imagination, fictions that try to bridge the gaps in our knowledge.  No one knows what occurred on the African plains millions of years ago; it is better to leave a question unanswered than to posit simplistic answers.

ADDENDUM:  Shortly after I posted this article, I came across a book review written by Michael Shermer.  Published in the January 8, 2011, issue of the Wall Street Journal, Shermer’s article purports to review Seth Mnookin’s book The Panic Virus, which is about the controversies swirling around autism and vaccines.  Shermer uses his review as an opportunity to repeat and advance the thesis of the wind-in-the-grass fable, creating the impression that a highly complex situation can be explained in simplistic pattern-recognition terms.  Shermer is so determined to make his point about the irrationality of average humans (vs. the rationality of scientists) that he also reduces the antivaccine faction to simplistic straw men in order to knock them down more definitively.

          Certainly the evidence shows that it is improbable that autism is caused by the MMR vaccine (which Shermer does mention) or by the thimerosal used as a preservative (which he doesn’t mention); that the antivaccine faction is wrong, however, does not mean that its members are unthinking.  Anyone who has followed the controversy over the years knows that desperate parents did a lot of research (not always good or well informed), that most if not all of them understood what viruses are and what immunization is (some parents chose to space out the individual vaccinations rather than give their children all three in one dose), and that they understood that medical science is not a perfected discipline.

          More seriously, however, Shermer’s use of the wind-in-the-grass scenario in his review, whether in itself valid or not, sheds no light on the autism controversy.  It is too simplistic to be of any use in this instance.
