Tag Archives: language

Why Cursive Still Matters

According to Anne Trubek, author of a forthcoming book on the history and future of handwriting, and an advance selfie-blurb in the New York Times, “handwriting just doesn’t matter” anymore and should not be taught in elementary schools. Instead, students should be given a short course in printing and then quickly move on to typing skills. I beg to differ.

But before I do, I should be fair and mention that I am of an older, pre-digital generation and have been writing in cursive since I was eight years old. In fact, I am so habituated to cursive that I find it awkward and slow to hand print; when I’m filling out all those redundant forms in a doctor’s waiting room, I soon switch from printing to script because my hands get tired doing what they’re not accustomed to doing—and too bad for the smart young things at the desk who can’t read cursive.

Thus I am admittedly taking a traditionalist position here, consciously and deliberately counter to the futurist stance of Trubek and others who agree with her denigration of cursive. Being a traditionalist, however, does not—I repeat DOES NOT—delegitimize my argument. No more than being a futurist legitimates any argument against cursive.

So, what are Trubek’s arguments against the teaching of cursive (also called script, handwriting, longhand, etc.)? As already noted, one is that handwriting is old-fashioned, outdated, and therefore as irrelevant to today’s world as Grandma’s old icebox (well, I guess it’s great-Grandma’s). It is time, therefore, to consign handwriting to the same rubbish heap or museum as that icebox, and as those old ways of writing Trubek lists—carving on stone (which was never used for day-to-day writing, anyway), quill pens, and typewriters. But fountain pens are still widely used (I had a student once who had bought a cheap one and loved it—but I did have to demonstrate to him how to use it correctly), and typewriters are something of a fad among the young (like vinyl records). Stone cutters are still doing what they’ve always done: carving letters on headstones and monuments. Nothing is superseded entirely.

Trubek’s primary argument is a utilitarian one—in the digital age, handwriting is impractical and therefore no time should be wasted on teaching it. It is “superannuated.” One can write faster, and therefore more, by typing than by handwriting; and, glory of glories, one can write better! She asserts that “there is evidence that college students are writing more rhetorically complex essays, and at greater length, than they did a generation ago.” Hopefully, she will cite the “evidence” for this assertion in her forthcoming book, but until then I will continue to wonder why American students do so much more poorly on language skills than students in other countries and why college graduates appear to have serious deficits in writing skills. My own experiences as a college English instructor confirm the findings of large-scale tests: Students today do not write better than they did in the past, nor have I noticed that all the social-media writing that young people engage in has improved their writing skills.

Now, I am not asserting that teaching handwriting, in and of itself, will have any effect on the more global aspects of writing (organization, development of thought, etc.), but neither can one assert that teaching handwriting diminishes those skills. One need only look at the diaries and letters of, say, nineteenth-century Civil War soldiers, virtually none of whom attended school past the age of fourteen, to see that. I have in my possession a letter my paternal grandmother wrote to one of her sisters during the Great Depression; neither woman attended college; in fact, what formal education they received occurred in a one-room schoolhouse in a small town near their family homestead in the Ozarks of southern Missouri—yet Grandma obviously could write, well and thoughtfully (on the political issues of the day), and, lordamercy, in a clear, readable cursive!

Frankly, to argue for the superior cognitive effects of computer typing is as bogus as arguing for the superior cognitive effects of cursive—after all, neither manual skill is about content, but only about means. Of course, I would not today compose this essay by hand on a yellow legal pad—I would never want to go back to the pre-word-processor days—all that White Out and carbon paper and retyping entire pages to correct one or two sentences is not for me! But I don’t want to give up handwriting either—in fact, my outline for this essay, and my margin comments on Trubek’s article, were handwritten in cursive on paper. The differing writing technologies available to us today are complementary, not mutually exclusive.

There is, however, one very good reason for knowing how to write in longhand: privacy. The digital world today is massively intrusive—cookies trace every move one makes on the Internet, the giant digital corporations make a mockery of web privacy, and hackers and government surveillance agencies sneak around in the background looking for vulnerabilities and suspicious activities. Just as one minor but truly exemplary instance: the other day I received yet another email from a major retailer (from whom I had recently purchased a big-ticket item) advertising their goods; rashly, I clicked on one of the items, just to satisfy my curiosity as to what such a thing would cost, and for the rest of the day, every time I went to a news media site, up popped another one of that retailer’s ads for that very item. We are getting very close to a ubiquitous media/advertising environment like that depicted in the Tom Cruise film “Minority Report.” Maybe in fact we’re already there.

But when I write something down on a slip of paper, or write an entry in a real diary, or otherwise make use of the superannuated skills of pen or pencil on paper, I am engaging in something truly private, totally inaccessible to hackers and algorithms, even these days to the prying eyes of all those who are unable to read cursive. I can express myself (not my social-media-self) without worrying or caring about the necessity of self-censorship. And I can do so anywhere under any conditions—I don’t need an electrical outlet or batteries. I can write by sunlight, or candlelight if need be. And if I don’t like what I wrote, or I want to ensure that no one else can ever read my private thoughts, I can burn the papers or send them through a shredder. There is no eternal cloud for pen-on-paper, no wayback machine to dig up some random and ill-conceived thoughts from the past. In cursive, there is still the privacy of the self. That makes teaching handwriting to students a true and wonderful gift. No reasons of utility or timely relevance are needed.


What Do Smart Animals Tell Us About Language?

Chaser, Puck, Koko, Alex, Nim Chimpsky. These are the names of animals (a dog, a parakeet, a gorilla, a parrot, and a chimpanzee) who have gained renown as paragons of animal intelligence. It is often claimed that in having been able to learn language, they prove that language is not a unique attribute of human beings.

Their achievements are indeed remarkable. Chaser knows the names of over 1,000 objects. Puck is said to have learned nearly 2,000 words. Koko and Nim learned rudimentary sign language (though there are skeptics, including the original trainer of Nim), and Alex could identify colors and keys. But what do these achievements really mean?

After all, it took hours, indeed years, of focused training for these animals to achieve what a human child achieves in months, and while they acquired the ability to recognize words, the more important aspect of language, grammar, seemed to elude them. None of these animals could spontaneously generate anything resembling a sentence. As important is the fact that it was only through contact with human beings, and also isolation from their own kind, that these animals were able to achieve as much as they did.

Chaser, for example, learned the words for manmade objects—specifically, a large variety of toys, which she could retrieve precisely. She also was able to distinguish new objects by a process of elimination; if her trainer asked her to find an object whose name she had not heard before, she knew that it had to be the one object in the pile that had not been there previously. That’s very impressive. But it is also true that in her “natural” environment, she would never have acquired or generated such a vocabulary (though she certainly would know her environment, as does any wild animal). The same general idea applies to all the other “talking” animals who achieved these amazing feats.

The acquisition of language is a process of socialization, and these animals were in the (for them) unusual position of being socialized among humans rather than among their own species and thus achieved a level of “language” they could not otherwise have. Conversely, those rare individual human beings who have been isolated from others during their infancy and early childhood fail to acquire language and, if isolated for too long, never master language in later years. Genie, a girl isolated in a back room of her parents’ house for the first thirteen years of her life, is an example of a child who was denied the normal socialization process, and it isn’t only that she was deprived of language; she was deprived of objects, clothing, interactions with a range of other people, of exposure to the world, both natural and cultural, beyond the drawn shades of that one small room. She was deprived not only of words but of the rich layers of meanings that words carry in a typical human society. Words, as any poet or politician can attest, carry more meaning than a dictionary can define—take for example “fairy princess,” two words that when put together mean almost nothing outside of their cultural context, yet which resonate and amplify for small children within their cultural context. Think also of a simple key, the one you use to unlock the door to your house or apartment, something which Alex the parrot could recognize and name; but think also of its layers of meaning, as in “The key to cosmic understanding is gnosis.” What could Koko or Alex or Chaser make of such a sentence?

While having a certain kind of brain is a prerequisite to language, it is not sufficient to that ability. There is no language of one. Language is culturally generated, culturally acquired, and culturally deployed. As languages are elaborated over millennia, they do something else as well, something completely novel that goes beyond mere communication: they generate ideas. The very first words may have been simple names for objects or warnings of nearby predators, but over time they would have acquired more abstract meanings, so that eventually the world human beings inhabited was not solely the world of concrete objects and events but a created one of interwoven ideas, of art, philosophy, religion, social organization, myth, poetry, and science. This created world, this world of language, is one which no animal has ever been shown to have, not even the most extensively trained chimpanzee, parrot, or dog.

Shame

“Can Horses Feel Shame?”
This was a question posed by a friend who is both a horsewoman and an equine therapist. We discussed the question at some length over lunch, sharing our different perspectives or slants on the topic, while agreeing on the answer.

The short answer is “No.” Or rather, “Probably not.”

Why not?

First, what is “shame”? Is it merely an emotion, or is it something more complex? If it is merely an emotion, then horses and other mammals should be able to feel shame, because all mammals share the basic limbic brain structures that regulate emotions (including the relevant hormones). This includes humans, so at the basic level, humans feel the same emotions as other mammals.

However, humans have something that other mammals lack: language, by which I mean not merely communication but the ability to organize and elaborate abstract thought (concepts), particularly on a cultural rather than individual level. Language itself is a cultural rather than individual trait, which is why children must learn their native language rather than being born able to speak it instinctively. Through language/culture, we abstract and reify all our experiences, including our emotions.

We can consider the emotions as the substrate or foundation of our linguistically/culturally organized feelings. Fear, for example, can be seen as a foundation of shame, as suggested by the body language of shame, which looks much like the body language of fear (head down, tail tucked, a pulling in of the limbs, looking away from, for example, a threatening dominant animal, etc.). But shame elaborates fear, mixing it with other elements, into an abstraction, a concept which can vary markedly among cultures and situations.

Emotion fades with the withdrawal of the stimulus as the hormones clear from the body. An animal confronting a threat feels fear and reacts, but when the threat is removed, the fear abates and the animal returns to normal. Of course, if a threat is continually repeated over time, the animal will either become skittish and wary or, if it learns that the threat is actually not a threat at all, will come to ignore the stimulus. But generally, the animal’s reaction is in response to a concrete, perceived, and present threat.

Shame, however, does not fade with the removal of the (social) situation that triggered it. Because it is a concept rather than purely an emotion, shame can be recollected at a later time, when the threatening situation is long since removed, and trigger an emotional response—including the release of the associated hormones. We mull over the experience, reliving it as if it were in fact unfolding in the present, and re-feeling the emotions we experienced during the actual event. In fact, we can make the experience and the attendant emotions worse by these mental re-enactments. In so doing, we also can turn shame (as well as guilt, joy, love, etc.) from responses to motives for future actions (or inaction—it can prevent us from engaging in actions which we anticipate will bring shame upon us). Shame might cause us to plot revenge, for example, or guilt may prompt us to apologize or make it up to a person we have wronged. And that person may decide to forgive us or to punish us. For humans, all these social emotions are a two-way street, or perhaps more accurately multiple streets of many ways.

Shame can only be experienced when we feel ourselves to be negatively judged by others or by the norms of our society. We feel that we have not measured up to the expectations of others or that we have acted or thought in ways that society would disapprove; we can feel shame for actions or thoughts that no one else has seen. But first, we must have learned what our society considers shameful—this knowledge is not instinctive. Shame also, and crucially, involves empathy or a theory of mind, the ability to recognize the subjectivity of others (akin to “mind reading,” etc.).

It is likely that sociopaths do not feel shame or guilt, no matter how culturally/socially adept they may be. It is the sad situation of the sociopath that points to the positive aspects of shame, his/her pathology being precisely of the social kind—without shame, the sociopath visits misery on everyone he comes into contact with, depriving both them and himself of the joyful experiences of social life. Too much shame, or shame imposed on us by totalitarian persons and regimes, results in neuroses, but too little shame causes a lack of restraint and consideration of others. A person who feels ashamed of having been rude to another is less likely to act rudely in the future; a political leader who does not anticipate feeling shame (perhaps because he cannot feel it) will not hesitate to kill millions to achieve his ambitions. To repress appropriate feelings of shame to protect one’s reputation or self-image will condition one to continue one’s bad behavior. Thus the popular idea that one should not feel shame (often expressed as an I-don’t-care-what-others-think attitude) has serious drawbacks. We cannot be successful social creatures without caring what others think.

The existentialists recognized the central importance of shame, as a form of self-consciousness. Sartre’s classic example of a man suddenly aware of being looked at, caught picking his nose, underscores the self-consciousness of shame; Sartre held that such self-consciousness was crucial to developing an authentic sense of self. He writes, “Nobody can be vulgar all alone!” We must be seen picking our noses in order for that act to be vulgar and to feel ashamed of our vulgarity (though as mentioned earlier in this article, we can relive and/or anticipate the shameful situation). Sartre also writes, “I am ashamed of myself as I appear to another” and “Shame is by nature recognition,” both as oneself and as an Other to another person—that is, a “self which is not myself”. (The sociopath may feel irritation or anger at being perceived negatively, perhaps precisely because he does not want to be known at all, but he will not feel shame.)

This realization of our own Other-ness to others is particularly important. Not only do we experience ourselves subjectively, and not only do we recognize the subjective existence of another person (empathy, theory of mind), but we also become aware that to that other person we are Other, in the full sense of being a subjective being in our own right. Perhaps it is in this sense that the greatest joy of love is experienced. For if, as the Beloved, I am more than merely an object to the Lover (a blank screen perhaps on which he/she projects an image of himself), then I am truly loved, rather than possessed. As Simone de Beauvoir wrote, “to love him genuinely is to love him in his otherness.”

All this takes us a very long way from the simple emotions. Whether or not one agrees with Sartre and de Beauvoir, or any other thinker on the subject of shame, what is clear is that to human beings, through language/culture, shame means a great deal more than a momentary hormonal response of fear.

Sam Harris’s Moral Swampland

My original intention was to write a long and detailed critique of Sam Harris’s most recent book The Moral Landscape: How Science Can Determine Human Values (2010).  Harris is the author of two previous books promoting a naïve form of antireligion and is one of the Four Horsemen of Atheism, a posse of atheist secular conservatives also known as the New Atheists, that also includes Daniel Dennett, Richard Dawkins, and Christopher Hitchens.  However, since the book has already been widely reviewed and most of its manifold failings have been identified (among them that Harris’s version of ethics is utilitarianism in science drag), I will not repeat those points and instead will focus on two problems that particularly struck me, one of which has been alluded to but not detailed, the other of which has not been mentioned in the reviews I read.

The first problem, the one some reviewers have noted, is Harris’s apparent lack of interest in philosophers who have previously (over many centuries) wrestled with questions of morality.  As I read his book, I became aware that one of his strategies is to create colossal straw-man arguments; he creates extreme but vague versions of his opponents and then knocks them down, but he rarely names names or provides quotations.  For example, on page 17 he asserts that “there is a pervasive assumption among educated people that either such [moral] differences don’t exist, or that they are too variable, complex, or culturally idiosyncratic to admit of general value judgments,” but he does not identify whom he’s talking about nor does he quote anyone who holds such a view; the statement is also absolute (as so many of his statements are), in that he does not qualify the category: he does not say, for example, “80% of educated people,” nor does he define what he means by “educated.”  Furthermore, the word “pervasive” has negative valence without explicitly declaring it; anything “pervasive” has taken over (evil is pervasive, good is universal, for example).

On page 20 he states that “Many social scientists incorrectly believe that all long-standing human practices must be evolutionarily adaptive,” but he does not identify who those many social scientists are, nor specify how many counts as “many”; nor does he quote any of them to support or even illustrate his assertion; nor does he offer so much as a footnote reference to any social scientists who allegedly hold this view.  A reader who pays attention to his hostile statements on religion will note that he has not only oversimplified religion but also seems to limit his conception of it to the most conservative strands of contemporary Judeo-Christian systems.  He rarely refers to theologians, and then only to contemporary and conservative ones.  His bibliography runs to 40 pages, or about 800 sources (give or take), but the only theologians in this extensive listing are J.C. Polkinghorne and N. T. Wright.  Nothing of Augustine or Aquinas, nothing of Barth or Bonhoeffer or Fletcher.  If he had bothered to consult any of these and other theologians and moral philosophers, he might have seen that his ideas have already been explored more extensively and more deeply than he manages in this book.  He does try to wriggle out of this problem in the long first footnote to Chapter 1, but it is disingenuous.

I also question whether he has actually carefully and completely read all 800 (give or take) sources he lists.  It would take an inordinate amount of time to read all of them, to read them carefully to ensure one has properly understood them, to take adequate notes, and to think about how they fit into or relate to one’s thesis and argument.  For example, it strikes me as odd that he lists 10 sources by John R. Searle, 3 of which are densely argued books, but refers to Searle only once in the body of his book and 3 times, obliquely, in his endnotes.  One wonders: in what sense, then, is Searle a source?  Is he a source by association only?

The other major problem I have with Harris’s book is in my view more serious:  He mistakes “brain states” for thoughts.  This is a common error among those who imagine that scanning the brain to measure areas of activity or measuring the levels of various hormones such as oxytocin suffices to explain human thought.  (Harris confesses to a measurement bias on p. 20 when he writes that “The world of measurement and the world of meaning must eventually be reconciled.”)  That a particular region of the brain is “lit up” tells us only where the thought is occurring—it tells us nothing about the content of that thought nor anything about its validity.  This is because human thoughts are generated, shared, discussed, modified, and passed on through language, through words, which, while processed in the brain, nevertheless have a meaningful independence from any one particular brain, and therefore have a degree of freedom from “brain states.”

Harris’s inability to properly distinguish between brain states and thoughts is apparent in an interesting passage on pages 121-122:  Here he discusses research he conducted using fMRIs that identify the medial prefrontal cortex as the region of the brain that is most active when the human subject believes a statement.  He discovered that this area is activated similarly when the subject is considering a mathematical statement (2 + 6 + 8 = 16) and when the subject is considering an ethical belief (“It is good to let your children know that you love them”).  This similarity of activity in the same brain area leads Harris to conclude “that the similarity of belief may be the same regardless of a proposition’s content.  It also suggests that the division between facts and values does not make much sense in terms of underlying brain function.”  How true.  Yet nonetheless, human beings (including quite obviously Harris himself) do make the distinction.

But he goes on:  “This finding of content-independence challenges the fact/value distinction very directly:  for if, from the point of view of the brain, believing ‘the sun is a star’ is importantly similar to believing ‘cruelty is wrong,’ how can we say that scientific and ethical judgments have nothing in common?” (italics added)  Aside from the fact that he does not specify who that “we” is and that he does not prove that anyone has said that there is “nothing in common” between scientific and ethical judgments, there is the fact that, in language, that is, by actually thinking, we can make the distinction and do so all the time.  “This finding” does not challenge the distinction; it merely highlights that the distinction is not dependent upon a “brain state.”  The MPFC may be equally activated, but the human thinker knows the differences.

The underlying problems with Harris’s thesis are at least threefold.  One is that his hostility to and ignorance of religion block him from considering or accurately representing what religion has to say about ethics.  Another is one habitual among conservatives, to tilt at straw men while mounted on rickety arguments.  The third is his reductionism to an absurd degree.  Arguments of the type offered in this book mistake the foundation for the whole edifice; it is as if, in desiring to know and understand Versailles, we razed it to its foundation and then said, “There, behold, that is Versailles!”

Note: A partial omission in the quotation in the second-to-last paragraph of the original post of 1 May 2011 was corrected on 2 October 2011.

Evolutionary Just-So Stories

Although I am convinced that evolution occurred and that Darwin’s theory of how it occurred is the best explanation for evolution that we have so far, nonetheless I question speculation by both scientists and journalists when they propose explanations or scenarios in the absence of evidence.  On this page I discuss some examples of wishful thinking or just-so stories that mislead as to what we actually know about the evolutionary past and about the evolutionary causes for contemporary observable behaviors or traits.

Go to the Evolutionary Just-So Stories page for these short essays.