Narcissism and Individualism

This is the age of the selfie. Never before have so many people taken so many pictures of themselves and posted them on so many different media. They’re all the same: goofy smiles, funny faces, sexy poses. For something that is supposed to be self-expressive, they are pathetically predictable and unoriginal.

Some social critics say that this is an age of individualism run amok. But the selfie suggests quite otherwise. This is the age of rampant narcissism.

The narcissist craves the attention of others because she or he otherwise has no self, no means of affirming his or her existence other than through the attention of others. Narcissists love mirrors, whether of the conventional glass kind or the gaze of others looking at them. The selfie is a kind of mirror: it is focused on oneself rather than the environment in which one is situated (on me rather than the Bridge of Sighs barely visible in the background) and it is “shared” with others with the purpose of receiving a reaction. It not only announces one’s existence, it asks for confirmation of that existence. The narcissist depends on the confirmation of others. Today, narcissism is a feedback system–I will friend you if you friend me, I will “LOL” your selfie if you will do the same for mine. Narcissists are not so much antisocial as hypersocial, in the sense that they need social recognition as much as they need oxygen.

Narcissists are selfish, but they are not individuals. Individuals do not require constant affirmation to support their sense of self. They do not need to attract attention to themselves always and everywhere. They have private lives. They enjoy privacy as much as they enjoy socializing. They do not need to take pictures of themselves to verify they have been to Venice, nor to prove to anyone else that they have. They will take a picture of the Bridge of Sighs without their funny faces obscuring it, or they might not take a photo at all–they have the personal experience, an experience that does not need to be “socialized” to be real.

Narcissists will go on a talk show and reveal all to millions. Individuals can’t imagine doing such a thing. They do not watch such shows. Individuals believe in privacy; they believe that they have a right to their secrets. They believe that other people have a right to their own secrets as well. They do not conform, nor do they make a fetish of not conforming (which is a backhanded way of conforming, by drawing attention to oneself, as a kind of hypersocializing–see me, see me!).

A society of narcissists is a society of conformists, with everyone vying for everyone else’s attention. Such attention is not always positive–often it is negative, bringing down the opprobrium of the masses for transgressing the rules of proper self-revelation. This is why selfies all look the same–there is a right way of doing it. It is why talk shows follow the same format, whether it’s Dr. Phil or Jerry Springer. There is a right way to be a “selfie”–there is no right way to be a self.


Why Cursive Still Matters

According to Anne Trubek, author of a forthcoming book on the history and future of handwriting, and an advance selfie-blurb in the New York Times, “handwriting just doesn’t matter” anymore and should not be taught in elementary schools. Instead, students should be given a short course in printing and then quickly move on to typing skills. I beg to differ.

But before I do, I should be fair and mention that I am of an older, pre-digital generation and have been writing in cursive since I was eight years old. In fact, I am so habituated to cursive that I find it awkward and slow to hand print; when I’m filling out all those redundant forms in a doctor’s waiting room, I soon switch from printing to script because my hands get tired doing what they’re not accustomed to doing—and too bad for the smart young things at the desk who can’t read cursive.

Thus I am admittedly taking a traditionalist position here, consciously and deliberately counter to the futurist stance of Trubek and others who agree with her denigration of cursive. Being a traditionalist, however, does not—I repeat DOES NOT—delegitimize my argument. No more than being a futurist legitimates any argument against cursive.

So, what are Trubek’s arguments against the teaching of cursive (also called script, handwriting, longhand, etc.)? As already noted, one is that handwriting is old-fashioned, outdated, and therefore as irrelevant to today’s world as Grandma’s old icebox (well, I guess it’s great-grandma’s). It is time, therefore, to consign handwriting to the same rubbish heap or museum as that icebox, and as those old ways of writing Trubek lists—carving on stone (which was never used for day-to-day writing, anyway), quill pens, and typewriters. But fountain pens are still widely used (I had a student once who had bought a cheap one and loved it—though I did have to demonstrate to him how to use it correctly), and typewriters are something of a fad among the young (like vinyl records). Stone cutters are still doing what they’ve always done: carving letters on headstones and monuments. Nothing is superseded entirely.

Trubek’s primary argument is a utilitarian one—in the digital age, handwriting is impractical and therefore no time should be wasted on teaching it. It is “superannuated.” One can write faster, and therefore more, by typing than by handwriting; and, glory of glories, one can write better! She asserts that “there is evidence that college students are writing more rhetorically complex essays, and at greater length, than they did a generation ago.” Hopefully, she will cite the “evidence” for this assertion in her forthcoming book, but until then I will continue to wonder why American students do so much more poorly than students in other countries on language skills and why college graduates appear to have serious deficits in writing skills. My own experiences as a college English instructor confirm the findings of large-scale tests: Students today do not write better than they did in the past, nor have I noticed that all the social-media writing young people engage in has improved their writing skills.

Now, I am not asserting that teaching handwriting, in and of itself, will have any effect on the more global aspects of writing (organization, development of thought, etc.), but neither can one assert that teaching handwriting diminishes those skills. One need only look at the diaries and letters of, say, nineteenth-century Civil War soldiers, virtually none of whom attended school past the age of fourteen, to see that. I have in my possession a letter my paternal grandmother wrote to one of her sisters during the Great Depression; neither woman attended college, and in fact what formal education they received occurred in a one-room schoolhouse in a small town near their family homestead in the Ozarks of southern Missouri—yet Grandma obviously could write, well and thoughtfully (on the political issues of the day), and, lordamercy, in a clear, readable cursive!

Frankly, to argue for the superior cognitive effects of computer typing is as bogus as arguing for the superior cognitive effects of cursive—after all, neither manual skill is about content, but only about means. Of course, I would not today compose this essay by hand on a yellow legal pad—I would never want to go back to the pre-word-processor days—all that White Out and carbon paper and retyping entire pages to correct one or two sentences is not for me! But I don’t want to give up handwriting either—in fact, my outline for this essay, and my margin comments on Trubek’s article, were handwritten in cursive on paper. The differing writing technologies available to us today are complementary, not mutually exclusive.

There is, however, one very good reason for knowing how to write in longhand: privacy. The digital world today is massively intrusive—cookies trace every move one makes on the Internet, the giant digital corporations make a mockery of web privacy, and hackers and government surveillance agencies sneak around in the background looking for vulnerabilities and suspicious activities. Just as one minor but truly exemplary instance: the other day I received yet another email from a major retailer (from whom I had recently purchased a big-ticket item) advertising their goods; rashly, I clicked on one of the items, just to satisfy my curiosity as to what such a thing would cost, and for the rest of the day, every time I went to a news media site, up popped another one of that retailer’s ads for that very item. We are getting very close to a ubiquitous media/advertising environment like that depicted in the Tom Cruise film “Minority Report.” Maybe in fact we’re already there.

But when I write something down on a slip of paper, or write an entry in a real diary, or otherwise make use of the superannuated skills of pen or pencil on paper, I am engaging in something truly private, totally inaccessible to hackers and algorithms, even these days to the prying eyes of all those who are unable to read cursive. I can express myself (not my social-media-self) without worrying or caring about the necessity of self-censorship. And I can do so anywhere under any conditions—I don’t need an electrical outlet or batteries. I can write by sunlight, or candlelight if need be. And if I don’t like what I wrote, or I want to ensure that no one else can ever read my private thoughts, I can burn the papers or send them through a shredder. There is no eternal cloud for pen-on-paper, no wayback machine to dig up some random and ill-conceived thoughts from the past. In cursive, there is still the privacy of the self. That makes teaching handwriting to students a true and wonderful gift. No reasons of utility or timely relevance are needed.

What Is a Species?

That science is a human enterprise and not some pure and perfect object independent of culture is highlighted by a recent investigation into the DNA of American wolves—the gray wolf, the Eastern wolf, and the red wolf. An article in the New York Times (7/27/16) reports that analysis of the DNA of these three wolf species reveals that in fact “there is only one species [of wolf] on the continent: the gray wolf.” The other two are hybrids of coyotes and wolves—Eastern wolves are 50/50, red wolves are 75 percent coyote and 25 percent wolf. The investigators also concluded that the wolf and coyote species shared a common ancestor only 50,000 years ago, which is very recent in evolutionary terms.

Now, anyone comfortable with the fact that nature goes its own way without regard to the human need for tidy intellectual categories is not likely to be much disturbed by these findings. But such people are relatively rare, especially in academic and political circles, so it happens that certain people do find it disturbing that Eastern and red wolves are hybrids. That is, they are not “pure” and therefore may not be entitled to protection from, say, extermination under such laws as the Endangered Species Act. In a sense, they are not “natural” because—well, because they violate the notion of the purity of species; they don’t fit neatly into our conceptual categories. As one scientist was quoted (in dissension from the worrywarts), “We put things in categories, but it doesn’t work that way in nature.”

Indeed it doesn’t. In fact, it couldn’t. The notion of “species” as neatly distinct forms of life, immune to crossings of the so-called “species barrier”—this and other common myths of the “logic” of evolution would, if true, cause evolution to grind to a halt. Evolution requires messiness, contingency, happenstance, the unexpected, for it to work. For example, genetic mutations do not magically appear on cue in response to environmental pressures, just in time to save a species from extinction. Instead, a mutation lies quietly in the background, sometimes for many generations, to emerge as the crucial factor of salvation (for those individuals who carry it, and their descendants) when and if something in the environment calls it forth.

I am reminded of a startling discovery during the height of the AIDS epidemic in America, that some individuals, despite a particularly risky lifestyle, were immune to the disease. Turns out, they carried a mutation that had first manifested itself centuries earlier, during an epidemic of an entirely different disease, bubonic plague. One could describe how this mutation protects against both diseases, but one could not explain why—why this gene mutation occurred in the first place, why it just happened to confer immunity or resistance to these two quite different diseases (one caused by a bacterium, the other by a retrovirus), and why it resided silently in the genomes of its fortunate carriers for so many generations before it could prove its usefulness.

A fundamental goal of all human endeavors is to reduce the entangled complexities of life, including our own, to a simple set of principles that fit the limitations of the computational power of our little brains, a mere three pounds of meat, of which only a relatively small portion engages in the tasks of reasoning. Not surprisingly, it is difficult to wrap our heads around the genuine complexity of the earth we inhabit, let alone of the cosmos. Being the limited creatures that we are, we need our categories—but let’s not worship them. Let’s not condemn the Eastern wolf and the red wolf to extermination just because they mess up our laws.

Evolution and Theodicy

“Why is there evil in the world?” This question has been asked by philosophers and theologians and ordinary men and women for millennia. Today scientists, particularly evolutionary biologists, neuroscientists, and evolutionary/neuropsychologists, have joined the effort to explain evil: why do people indulge in violence, cheating, lies, harassment, and so on? There is no need here to itemize all the behaviors that can be labeled evil. What matters is the question of “why?”

The question of “why is there evil in the world?” assumes the premise that evil is abnormal while good (however defined) is normal—the abnorm vs. the norm, if you will. Goodness is the natural state of man, the original condition, and evil is something imposed on or inserted into the world from some external, malevolent source. In Genesis, God created the world and pronounced it good; then Adam and Eve succumbed to the temptations of the Serpent and brought evil and therefore death into the world (thus, death is a manifestation of evil, immortality the natural state of good). Unfortunately, the Bible does not adequately account for the existence of the Serpent or Satan, so it was left to Milton to fill in the story. Gnostics, Manicheans, and others posited the existence of two deities, one good and the other evil, and constructed a vision of a cosmic struggle between light and darkness that would culminate in the triumph of good—a concept that filtered into Christian eschatology. The fact that Christian tradition sees the end times as a restoration to a state of Adamic or Edenic innocence underscores the notion that goodness is the natural, default state of man and the cosmos.

Contemporary secular culture has not escaped this notion of the primeval innocence of man. It has simply relocated Eden to the African savannah. When mankind was still at the hunter-gatherer stage, so the story goes, people lived in naked or near-naked innocence; they lived in egalitarian peace with their fellows and in harmony with nature. Alas, with the invention of agriculture and the consequent development of cities and civilizations, egalitarianism gave way to greed, social hierarchies, war, imperialism, slavery, patriarchy, all the factors that cause people to engage in violence, oppression, materialism, and so on; further, these faults of civilizations caused the oppressed to engage in violence, theft, slovenliness, and other sins. Laws and punishments and other means of control and suppression were instituted to keep the louts in their place. Many people believe that to restore the lost innocence of our hunter-gatherer origins, we must return to the land, re-engage with nature, adopt a paleo diet, restructure society according to matriarchal and/or socialist principles, and so on. Many people (some the same, some different from the back-to-nature theorists) envision a utopian future in which globalization, or digitization, or general good feeling will restore harmony and peace to the whole world.

Not too surprisingly, many scientists join in this vision of a secular peaceable kingdom. Not a few evolutionary biologists maintain that human beings are evolutionarily adapted to life on the savannah, not to life in massive cities, and that the decline in the health, intelligence, and height of our civilized ancestors can be blamed on the negative effects of a change in diet brought on by agriculture (too much grain, not enough wild meat and less variety of plants) and by the opportunities for diseases of various kinds to colonize human beings too closely crowded together in cities and too readily exposed to exotic pathogens spread along burgeoning trade routes. Crowding and competition lead to violent behaviors as well.

Thus, whether religious or secular, the explanations of evil generally boil down to this: that human beings are by nature good, and that evil is externally imposed on otherwise good people; and that if circumstances could be changed (through education, redistribution of wealth, exercise, diet, early childhood interventions, etc.), our natural goodness would reassert itself. Of course, there are some who believe that evil behavior has a genetic component, that certain mutations or genetic defects are to blame for psychopaths, rapists, and so on, but again these genetic defects are seen as abnormalities that could be managed by various eugenic interventions, from gene or hormone therapies to locking up excessively aggressive males to ensure they don’t breed and pass on their defects to future generations.

Thus it is that in general we are unable to shake off the belief that good is the norm and evil is the abnorm, whether we are religious or secular, scientists or philosophers, creationists or Darwinists. But if we take Darwinism seriously we have to admit that “evil” is the norm and that “good” is the abnorm—nature is red in tooth and claw, and all of the evil that men and women do is also found in other organisms; in fact, we can say that the “evil” done by other organisms long precedes the evil that men do, and we can also say, based on archaeological and anthropological evidence, that men have been doing evil since the very beginning of the human line. In other words, there never was an Eden, never a Noble Savage, never a long-ago Golden Age from which we have fallen or declined—nor, therefore, is there any prospect of an imminent or future Utopia or Millennial Kingdom that will restore mankind to its true nature, because there is nothing to restore.

The evolutionary function of “evil” is summarized in the term “natural selection”: the process by which death winnows out the less fit from the chance to reproduce (natural selection works on the average, meaning of course that some who are fit die before they can reproduce and some of the unfit survive long enough to produce some offspring, but on average fitness is favored). Death, usually by violence (eat, and then be eaten), is necessary to the workings of Darwinian evolution. An example: When a lion or pair of lions defeat an older pride lion and take over his pride, they kill the cubs of the defeated male, which has the effect of bringing the lionesses back into heat so that the new males can mate with them and produce their own offspring; their task is then to keep control of the pride long enough for their own cubs to reach reproductive maturity. Among lions, such infanticide raises no moral questions, whereas among humans it does.

There is no problem of evil but rather the problem of good: not why is there “evil” but rather why is there “good”? Why do human beings consider acts like infanticide to be morally evil while lions do not? Why do we have morality at all? I believe that morality is an invention, a creation of human thought, not an instinct. It is one of the most important creations of the human mind, at least as great as the usually cited examples of human creativity (art, literature, science, etc.), if not greater, considering how much harder won it is than its nearer competitors, and how much harder it is to maintain. Because “good” is not natural, it is always vulnerable to being overwhelmed by “evil,” which is natural: Peace crumbles into war; restraint gives way to impulse, holism to particularism, agape to narcissism, love to lust, truth to lie, tolerance to hate. War, particularism, narcissism, etc., protect the self of the person and the tribe, one’s own gene pool so to speak, just as the lion kills his competitor’s cubs to ensure the survival of his own. We do not need to think very hard about doing evil; we do need to think hard about what is good and how to do it. It is something that every generation must relearn and rethink, especially in times of great stress.

It appears that we are in such a time today. Various stressors—the economy, the climate, overpopulation and mass migrations, religious conflict amid the dregs of moribund empires—are pushing the relationship of the tribes to the whole out of balance, and the temptations are to put up walls, dig trenches, draw up battle lines, and find someone other than ourselves to blame for our dilemmas. A war of all against all is not totally out of the question, and it may be that such a war or wars will eventuate in a classic Darwinian victory for one group over another—but history (rather than evolution) tells us that such a victory is often less Darwinian than Pyrrhic.

Is Brexit the End of the Postwar Era?

Most people with any sense of history know that the European Union came into existence as a consequence of the desire of Europeans to prevent a recurrence of the disputes and national rivalries that had led to the two great world wars, as well as to present a united front against the new threat to Europe, the Soviet Union.  With the fall of the Soviet Union, several countries of Eastern Europe joined the EU, eventually expanding the membership to 28 countries.  It is now 27 countries—and possibly on countdown, as other countries, exasperated by the lack of democracy and the failures of the EU governing classes, contemplate following the UK’s lead.

A faulty system will be tolerated so long as people believe that it is preferable to any other likely system; the EU has been tolerated largely because it was seen as preferable to the many wars that European nations had engaged in previously.  But the last great war ended seventy-one years ago; very few people who lived through that war are still alive, and memory of it and its long aftermath of reconstruction and national reorganization is, for most living Europeans, largely relegated to the history books.  This may be especially true for the British, whose island continues to keep them somewhat apart from events on the Continent.  The threat of Russia under Putin looms close for, say, Poland and Germany, but feels rather remote to the UK.

Of course, the United States has not been a disinterested observer of the EU (as suggested by Obama’s remarks when he visited the UK earlier this year).  Having fought with the Allies in both World Wars, financed the rebuilding of Western Europe through the Marshall Plan, and been the prime mover behind NATO, the US is arguably as much a part of the EU as it would be if it were an actual member.  One might even argue that the EU is a continuation of empire by means other than outright warfare—perhaps we could even call the European project the “imperial project.”  Napoleon tried to unify Europe under the banner of France; the Austro-Hungarian Empire experienced some success in unifying parts of central and eastern Europe; and Prussia unified the disparate German states into Germany.  The rise of nation states themselves out of the motley assortment of duchies, kingdoms, free cities, and spheres of influence into the distinct nations we know today—France, Germany, Italy, the United Kingdom, etc.—was itself a long imperial project (each of these examples was initially united under a national king who had defeated his feudal aristocratic competitors).  And of course, we know the efforts of the Nazis to impose a unified Europe by brutal force under the swastika flag.

One might say, then, that the EU is a bureaucratic rather than a military empire.  Almost by definition, empire attempts to unify national, ethnic, linguistic, and religious “tribes” under one government, but its Achilles’ heel, its genetic defect, is the persistence of those tribes despite the efforts of the imperium to eliminate their differences.  It happened to the Roman Empire, which was disassembled by the very tribes it had incorporated within its borders.  It also happened to the British Empire, once the most extensive the world has ever seen but now reduced to the islands of Great Britain and a small part of Ireland—and which may be further reduced if Scotland and Wales, both long ago (but not forgotten) bloodily defeated and humiliated by the English, choose to go their own ways.

The United States, too, has been an empire-that-will-not-speak-its-name (although the Founders were not chary in using the term when describing their continental ambitions).  We have seen in the last few decades a diminution in the global power and influence of the US, as various historic threats have been removed, making others, including Europe, less reliant on our power, and previously backward countries have risen to the world stage, providing alternate centers of power for client states to orient to.  Our zenith of power was in the decades immediately following the end of WW2, but for us, too, with the passing of the “Greatest Generation,” memory of that triumph has faded, perhaps disastrously so.

So while it cannot yet be definitively confirmed, it does seem that the frustrations and resentments that built up to the Brexit vote could be a signal that the postwar era has come to an end.  If so, then the next question becomes:  Can globalization continue as planned and hoped for by the corporate, digital and government elites, or will tribalism and nationalism reassert themselves?  Will Europe (and the world) revert to its pre-WW1 national conflicts and warlike imperialist ambitions, or will it and the world evolve a totally new type of organization, one that no one has seen before or can as yet predict?  Or will things like global warming make all hope moot?

Stay tuned.

You Lie!

One of the questions epistemology tries to answer is, how do we know? This broad question breaks down into a number of narrower questions, among which is how do we know that what we know is true? Hence, how do we know that a statement (an assertion) by another person is true? How do we know that an assertion is not true? How do we determine that a statement is a lie?

Just as interesting: How is it that we are susceptible to believing a statement is a lie when in fact it is not?

How is it that climate deniers can continue to believe that climate change is a hoax, a deliberate lie, a conspiracy by a worldwide cabal of leftists and humanists (synonyms, I suppose)? I don’t refer here to the oil executives and conservative politicians who know perfectly well that climate change is real and that it is human activity that is causing it (i.e., the real conspirators), but to the average Joes and Janes who believe firmly and without doubt that climate change is a lie, the ones who pepper the reader comments of, for example, the Wall Street Journal with their skepticism at every opportunity—even when the article in question has nothing to do with climate change, or even the weather. Climate change deniers are just a convenient example of the problem—there is virtually no end to the number of topics on which people firmly, often violently, disagree, on the left as well as the right.

There are two basic means by which we determine the truth or falsehood of statements (assertions), the specific and the general. By the specific I mean such things as data, facts, direct observation, and so forth—the basic stuff of science. Objective evidence, if you will. We determine the truth of a statement by the degree to which it conforms with the facts. If someone says, “It’s a bright sunny day,” but in looking out the window I can see that it is gray and raining heavily, I have proof based on direct observation that the statement “It’s a bright sunny day” is false. It might even be a lie, depending on the motive of the person who made the statement; or it might not be a lie but simply an error.

However, if I’m in the depths of an office building where there are no windows, and someone comes in and says, “It’s a bright sunny day outside,” how do I determine if his statement is true or false?

By the general I mean the use of such things as theories, ideologies, world views, traditions, beliefs, etc., as templates to determine the truth of statements—measuring, in a sense, how well a statement conforms to the parameters or principles of a theory which, of course, we have already accepted (through one or more of the various ways in which theories become accepted). For example, diehard creationists evaluate the claims of Darwinism according to a strict Biblical literalism, the theory that every word of the Bible was directly inspired by God and is therefore true, and that the Bible taken as a whole conveys His divine plan of human history from beginning to end. So Darwinism, which not only denies the seven days of creation (in 4004 BC) but also provides no basis for teleological views of the history of life, is godless and therefore untrue. The “so-called facts” of Biblical scholarship and biology aren’t facts at all and can be dismissed out of hand.

Something not dissimilar occurred among Marxist intellectuals in England, France, and the United States during the Stalin era, when such luminaries as Sartre refused to believe the horrors being perpetrated in the Soviet Union because they did not conform to Marxist theory. Theory trumps reality in a multitude of cases, usually in ways less obvious than the errors of creationists and Marxists. Consider the political situation in the United States today as a near-at-hand example of the power of ideology, any ideology, to deny facts—or worse, to consider facts, and the people who bring them to our attention, outright lies.

“It’s a bright sunny day.” Perhaps the person who makes this statement is an extreme optimist who believes that if he repeats the assertion often enough, it will be true; or perhaps she will point out that somewhere on this planet it is a bright sunny day even if it isn’t here. Or maybe it’s a cruel lie perpetrated so that you will walk out to lunch without your umbrella and get soaked to the skin (ha, ha!). Or maybe he’s a politician who fears that if he speaks the truth (“There’s a mighty storm brewing.”) he will lose the election. And if you are a true believer, you will believe him, walk out to lunch without your umbrella, get soaked, and declare that the Senator was right, it is a sunny day—or maybe deny that he ever said that it was. You might recall years later that the Senator said it was a sunny day, or morning in America or whatever, and by golly he was right and history will recognize him as a great man. There are people in Russia who are nostalgic for the days of Stalin. It is said that the current head man of China wants to return to the ways of Mao. Could the Confederacy rise again?

Theories, ideologies, world views, all the general ways in which we measure truth and falsity are particularly resistant to correction or debunking, in part because we invest a great deal of life’s meaningfulness in our own special theory, in part because once we have adopted a theory (which we often do, without thought, very early in life), we get in the habit of measuring, evaluating, judging, and deciding according to its parameters. It is our paradigm, our gestalt, without which we could not make sense of the world. True or false, creationism and the whole system of beliefs of which it is a part make sense and give meaning. True or false, evolution and all that follows from it make sense, though for most people they do not provide much meaning. Many people think that the sciences in general don’t provide humanly useful meaning, the kind of meaning that motivates us to get up in the morning, to care about voting, to raise children, to have something meatier than “values” to guide our lives. We are even willing to “[depart] from the truth in the name of some higher order” rather than risk meaninglessness. Hence the attraction of -isms: Communism, Darwinism, Creationism, Capitalism, Feminism, Terrorism—any -ism that can confer on us something other than the inevitable insignificance of being only one out of seven billion people, who are only seven billion out of the 107 billion people who have ever lived—and who knows how many more in the future. The less significant we are, the more selfies proliferate. Every -ism is a kind of selfie.

Is Ignorance Like Color Blindness?

By ignorance I do not mean stupidity or prejudice, even though ignorance is often used as if it were a synonym of those two words. Stupidity in its strict sense is an incapacity to know, a kind of mental defect, though to use it in that sense today is considered rude and discriminatory. Mostly it is now used to indicate willful refusal to acknowledge the truth or to inform oneself of the facts. Some liberals like to refer to Trump voters as stupid, thereby dismissing them and their concerns as not worthy of attention.

Stupidity is often used as a synonym of prejudice, whose common meaning is basically to dislike anything or anyone not like oneself (with occasionally the added caveat that, if only the prejudiced person would just get to know whatever or whoever they dislike, they would lose their prejudice and even become a fan—if you’re afraid of pit bulls, for example, well, just get to know one and you will see what fine dogs they actually are). Prejudice in the strict sense, however, means to prejudge, to make a judgment before knowing anything or very much about a person or thing, and while often wrong, it is not always so. The child staring at broccoli on his plate for the first time, noting its cyanide green color and musty, death-like odor, is likely prejudiced against putting it in his mouth. Prejudice of the kind that is synonymous with stupidity, however, does not always stem from lack of familiarity. Racist whites in the South were quite familiar with African-Americans, for example; their “prejudice” came from sources other than unfamiliarity.

Ignorance is simply absence of knowledge, and all of us are ignorant in a multitude of ways, even at the same time as we are knowledgeable about others. I am knowledgeable about the novels of Henry James but wholly ignorant of the ancient Egyptian Book of the Dead. This kind of ignorance, as opposed to that kind mentioned above, is akin to color blindness. The color-blind husband knows that he is color blind, so when dressing in the morning he will ask his color-sighted wife if the suit he plans to wear is blue or gray. He will probably also ask her to hand him his red tie, because he knows (because she has told him) that the green tie doesn’t go with either gray or blue. And would she please check that his socks match? He knows that there are colors even though he cannot see them, because people have told him that colors exist and that they can see them. He knows that he is color blind, even though he does not experience being color blind.

That sounds paradoxical, doesn’t it? But I think it’s true in a particular sense, a metaphorical sense. Genuine ignorance is like color blindness in that it can’t really be experienced. It’s not a state of being. What could ignorance feel like? What does color blindness feel like?

Certain persons like to refer to periods long in the past, say before the Enlightenment, as times when ignorance was rife in the land, as if it were a kind of plague from which those superstitious and benighted people unnecessarily suffered. This is an instance when “ignorance” is used in the pejorative, yet the question is, what in God’s name are those peoples of the past supposed to have known but didn’t? Were they willfully ignorant? Did they make no efforts to know what their modern critics think they should have known? What exactly is it that studious monks of the twelfth century should have known? Quantum physics? Germ theory? That God does not exist? If everyone were color blind, who would tell us of color?

Metaphorically speaking, we live in a world today in which most people are color blind and only a few can see colors. Like the color-blind husband, we should listen to what the color-sighted have to say. Only a relative handful of people in the world understand the mathematics that is necessary to understand today’s physics; when they attempt to tell us in our language the truths of that physics, we really have little choice but to believe what they say, to place our trust in their vision. There is a larger, but still very much a minority, group of people who understand climate science sufficiently to make the determination that the world is warming and that human activity, particularly the burning of fossil fuels, is the primary, perhaps only, cause of that warming. We could draw up a long list of knowledge fields in which most of us are color blind. The husband who ignores his wife’s admonitions and walks out the door wearing one red sock and one green one is willfully stubborn. Those of us who reject the expertise of climate scientists are willfully ignorant. That’s stupid.

Donald Trump: Psychoanalysis vs. Ethics

Is Donald Trump a narcissist? Is he a psychopath? Is he mentally unstable? These questions, and others of the same ilk, have been asked (and often answered in the affirmative) throughout the primary campaign season. To a lesser extent, similar questions have been asked about his followers. There has been, in other words, a lot of psychoanalyzing. It’s as if the DSM-5, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, has become the primary guide to politics and politicians.

Hillary Clinton has also, and for a longer time (at least since the Lewinsky scandal), been subjected to armchair and coffee house analysis (she’s in denial, etc.), even though, given that she is, for a politician, a surprisingly private person (i.e., uptight? secretive? not warm?), one wonders how anyone can legitimately diagnose her. Bill Clinton has also, of course, been parsed and dissected (narcissist, sex addict, etc.). Surprisingly, there has been little psychoanalysis of Bernie Sanders, perhaps because, as Hillary’s gadfly, he has dominated the high ground of principle.

Perhaps when a serious candidate actually has principles and stays consistent with them, psychologizing is unnecessary and even irrelevant. Principles have the effect of overriding personal quirks and biases. They are not generated from within this or that individual, and therefore are not reflective only of that individual, but are generated in a long process of shared thought. We come to principles through reason (Hannah Arendt might have said, through reason paired with imagination), not through impulse; indeed, the point of principle is to put a bridle on impulse, to restrain the impetuousness of the moment in favor of the longer, wider view. In Pauline terms, it replaces the natural or carnal man with the spiritual man; in late Protestant terms, it replaces immediate with delayed gratification.

So while Trump may or may not be a psychopath, a narcissist, or mentally unstable or ill, which none of us can really know, he is an unprincipled man. His constant shape-shifting, self-contradictions, denials, and off-the-cuff bluster are the signs of an impulsive man whose thoughts and words are not subjected to the vetting of a set of principles that can tell him whether he is right or wrong. He has at long last no shame, no decency, because he has no principles to tell him what is decent or shameful. In other words, he is typical of human beings, men and women, when they have nothing higher or wider than themselves as guides to behavior. This is not the place to go in depth into the utility of moral principle, but just as an example, something as simple as “do unto others as you would have others do unto you” can restrain the natural selfish impulse to grab as much as you can for yourself.

Anyone who has taken an introductory course in psychology or who has paged through any of the editions of the DSM has found plenty of evidence that they are in some way or another mentally unstable or unhealthy. Just about anyone can look at the list of defining characteristics of, say, narcissistic personality disorder (do you think you are special or unique?), or antisocial personality disorder (are you opinionated and cocky?), or perfectionism, and wonder, in a bit of self-diagnosis, if they should seek help. Welcome to the cuckoo’s nest. Or rather, welcome to humanity.

But for the concept of a disorder to exist, there has to be a concept of an order, i.e., a definition of what being a normal person is. Ironically, psychology is of no help to us here. The DSM-5 is nearly one thousand pages long, and according to its critics it adds ever more previously normal or eccentric behaviors to its exhaustive, not to say fatiguing, list of mental maladies. Its critics also charge that it provides ever more excuses for psychiatrists and physicians to prescribe very profitable drugs to people who are really just normal people. After all, they point out, life is not a cakewalk, and people are not churned out like standardized units.

Principle, i.e., morality, ethics, on the other hand, can be of great help here. It is obvious that the followers of Trump have not been dissuaded from supporting him by the amateur psychoanalyses of pundits and opponents. Clearly they like those traits which the alienists are diagnosing. But what if someone started criticizing him on moral grounds, what if someone performed something analogous to “Have you no sense of decency, sir?” This question, posed by Joseph N. Welch to Senator Joe McCarthy during the televised Army–McCarthy hearings in 1954, was a key moment in the demise of one of the worst men in American political history. Welch did not psychoanalyze McCarthy, nor did Edward R. Murrow in his famous television broadcast on McCarthy’s methods, and McCarthy was not taken away in a straitjacket. He was taken down by morally principled men and women who had had enough of his cruelty and recklessness.

Means and Ends

The other day, as I was making a left turn from one major thoroughfare to another, I noticed several traffic cameras still perched on poles in the median. I wondered why they were still there because back in November the voters of my fair city passed a ballot proposition to have the cameras shut down. Perhaps the city fathers and mothers are hoping that, with an upsurge in traffic accidents, the voters might change their minds and vote the cameras back on.

But I doubt that. Voters were well aware of the facts: the traffic cameras, for example, had not only caught red-light runners and left-turn violators but had (it was said) reduced accidents at the city’s busiest, most dangerous intersections. The hefty fines, proponents said, had sent the desired message. But such reasoning misses the point: Voters were not upset that the cameras worked as advertised, that they had fulfilled the purpose for which they were installed in the first place. What voters didn’t like was being spied on as they went about their daily business. They did not like feeling surveilled in public any more than they would have liked it in private. Indeed, they believed that they retained a right to privacy when driving in their own cars.

They also did not like the mechanical, algorithmic, one-size-fits-all assumptions lying behind the programming of the cameras. There are, they felt, differences of reaction in different situations, decisions such as whether or not to proceed with a left turn depending on the assessments of individual drivers in particular circumstances. Cameras, they reasoned, do not record those situational strategies. Thus the cameras, with their automatic flashes and equally automatic generation of tickets automatically mailed to their doors, represented Big Brother not only monitoring their activities but telling them what to do and what not to do—and fining them for it.

So the cameras have been shut down. This little example tells me a lot about my fellow citizens and marks a difference between attitudes in the United States and those of other countries, say Great Britain, where CCTV surveils everywhere. This is true “Don’t tread on me!” Americanism. More importantly, it suggests that people recognize that the ends don’t necessarily justify the means.

It certainly is a good thing to try to reduce traffic accidents and to spare drivers and passengers the horrors of severe injury and death, but in this case the means of achieving that end were rejected by the voters because they conflicted with values which the voters held equally dear. In other words, the achievement of one laudable end by this particular means eroded another laudable end, the desire for privacy and for not living in a surveillance state. A nanny state, if you will. For many Americans, and certainly for the majority of voters who ousted the cameras, doing for oneself is preferable to having the government step in and do it for them, even if it means putting up with one’s own and others’ mistakes, and even if sometimes those mistakes lead to serious consequences. It also entails a recognition that governments can make serious mistakes too, and that governmental mistakes can have more far-reaching consequences than the mistakes of individuals.

One can argue that traffic cameras do not rise to the level of, say, decisions to invade foreign countries, or to demolish established neighborhoods for so-called urban renewal, or to apply death sentences unequally, etc., but maybe my city’s voters recognize that something as seemingly benign as traffic cameras is the thin edge of a much bigger wedge.

Wars of a Thousand Cuts

In trying to understand the grotesque turn that American politics has taken in the last year, the economy seems to be the most often-cited explanation. Inequality has increased, with the richest Americans filching an increasingly large slice of the pie and leaving only crumbs for the rest of us: the middle class is in retreat, more good-paying jobs are being “outsourced” to foreign countries, and college tuitions and loans are crippling the next generation of workers. We can add to all this the sense that many people have, on both the left and the right, that the political and cultural elites are disconnected from the concerns of the people, that for the elites far too much of the country is fly-over country, both literally and metaphorically. Wall Street is blamed, Washington gridlock is blamed; so too are immigrants, terrorists, pop culture, GMOs–you name it.

Although all these disasters seem to have struck us suddenly, out of the blue so to speak, or at least since 9/11, perhaps the roots of our problems extend further back, to the first Gulf War (under the first Bush), perhaps further than that. The Wikipedia timeline of American wars shows that the country has been engaged in some kind of war more or less continuously since 1909–and frankly, it doesn’t list every incident in which the military has been involved. This constant warmongering, so often contrary to reason, has perhaps wounded our collective psyche so slowly and so completely that we fail to see that it is bleeding us to death. We have spent a lot on these wars, and we spend a lot to maintain and improve our military capability and to supply our “allies” with weapons and munitions–money that could be used to maintain and improve our infrastructure and educational system, our healthcare and environment.

Yet we are promised, and clearly we want to believe (why else keep electing the same politicians over and over again?), that we can fight these wars at no cost to ourselves, that we can continue to dump trillions into “security” while enjoying tax cuts at the same time.

Here’s an apt illustration of the dilemma: the recent news has been dominated by the excessive wait times for passengers going through security checks at our airports. The TSA is getting the blame (too few personnel, etc.), but equal blame should be put on the blowback from our ill-considered military interventions in other countries’ business, on our poor choice of allies to whom we ship armaments, and on our failure to rebuild our aging and inadequate airports. And on our free-lunch attitudes: one of the factors in the long security waits is the fact that passengers want to avoid paying checked baggage fees and therefore carry on as much luggage as they can get away with–that’s a lot of extra bags that need to be searched!

What a tangled web we have (all) woven! Will this political season make a difference? After all, we have a true political renegade now virtually guaranteed to be the Republican nominee for the Presidency, someone who “tells it like it is”; and on the Democratic side we have a very popular contrarian candidate who is giving the assumed nominee a serious challenge right up to the finish line (and who knows, perhaps beyond). Many voters hope that these mavericks can turn things around, but given the interwoven complexities of the overall situation, one wonders what they could actually accomplish should one or the other win. Has too much blood already been lost?