Evolution and Theodicy

“Why is there evil in the world?” This question has been asked by philosophers, theologians, and ordinary men and women for millennia. Today scientists, particularly evolutionary biologists, neuroscientists, and evolutionary/neuropsychologists, have joined the effort to explain evil: why do people indulge in violence, cheating, lying, harassment, and so on? There is no need here to itemize all the behaviors that can be labeled evil. What matters is the question of “why?”

The question of “why is there evil in the world?” assumes the premise that evil is abnormal while good (however defined) is normal—the abnorm vs. the norm, if you will. Goodness is the natural state of man, the original condition, and evil is something imposed on or inserted into the world from some external, malevolent source. In Genesis, God created the world and pronounced it good; then Adam and Eve succumbed to the temptations of the Serpent and brought evil and therefore death into the world (thus, death is a manifestation of evil, immortality the natural state of good). Unfortunately, the Bible does not adequately account for the existence of the Serpent or Satan, so it was left to Milton to fill in the story. Gnostics, Manicheans, and others posited the existence of two deities, one good and the other evil, and constructed a vision of a cosmic struggle between light and darkness that would culminate in the triumph of good—a concept that filtered into Christian eschatology. The fact that Christian tradition sees the end times as a restoration to a state of Adamic or Edenic innocence underscores the notion that goodness is the natural, default state of man and the cosmos.

Contemporary secular culture has not escaped this notion of the primeval innocence of man. It has simply relocated Eden to the African savannah. When mankind was still at the hunter-gatherer stage, so the story goes, people lived in naked or near-naked innocence; they lived in egalitarian peace with their fellows and in harmony with nature. Alas, with the invention of agriculture and the consequent development of cities and civilizations, egalitarianism gave way to greed, social hierarchies, war, imperialism, slavery, and patriarchy, all the factors that cause people to engage in violence, oppression, materialism, and so on; these faults of civilization in turn drove the oppressed to violence, theft, slovenliness, and other sins. Laws and punishments and other means of control and suppression were instituted to keep the louts in their place. Many people believe that to restore the lost innocence of our hunter-gatherer origins, we must return to the land, re-engage with nature, adopt a paleo diet, restructure society according to matriarchal and/or socialist principles, and so on. Many people (some the same as, some different from the back-to-nature theorists) envision a utopian future in which globalization, or digitization, or general good feeling will restore harmony and peace to the whole world.

Not too surprisingly, many scientists join in this vision of a secular peaceable kingdom. Not a few evolutionary biologists maintain that human beings are evolutionarily adapted to life on the savannah, not to life in massive cities, and that the decline in the health, intelligence, and height of our civilized ancestors can be blamed on the change in diet brought on by agriculture (too much grain, not enough wild meat, and a smaller variety of plants) and on the opportunities for diseases of various kinds to colonize human beings crowded too closely together in cities and too readily exposed to exotic pathogens spread along burgeoning trade routes. Crowding and competition lead to violent behaviors as well.

Thus, whether religious or secular, the explanations of evil generally boil down to this: that human beings are by nature good, and that evil is externally imposed on otherwise good people; and that if circumstances could be changed (through education, redistribution of wealth, exercise, diet, early childhood interventions, etc.), our natural goodness would reassert itself. Of course, there are some who believe that evil behavior has a genetic component, that certain mutations or genetic defects are to blame for psychopaths, rapists, and so on, but again these genetic defects are seen as abnormalities that could be managed by various eugenic interventions, from gene or hormone therapies to locking up excessively aggressive males to ensure they don’t breed and pass on their defects to future generations.

Thus it is that in general we are unable to shake off the belief that good is the norm and evil is the abnorm, whether we are religious or secular, scientists or philosophers, creationists or Darwinists. But if we take Darwinism seriously we have to admit that “evil” is the norm and that “good” is the abnorm—nature is red in tooth and claw, and all of the evil that men and women do is also found in other organisms; in fact, we can say that the “evil” done by other organisms long precedes the evil that men do, and we can also say, based on archaeological and anthropological evidence, that men have been doing evil since the very beginning of the human line. In other words, there never was an Eden, never a Noble Savage, never a long-ago Golden Age from which we have fallen or declined—nor, therefore, is there any prospect of an imminent or future Utopia or Millennial Kingdom that will restore mankind to its true nature, because there is nothing to restore.

The evolutionary function of “evil” is summarized in the term “natural selection”: the process by which death winnows out the less fit, denying them the chance to reproduce (natural selection works on the average, meaning of course that some who are fit die before they can reproduce and some of the unfit survive long enough to produce some offspring, but on average fitness is favored). Death, usually by violence (eat, and then be eaten), is necessary to the workings of Darwinian evolution. An example: When a lion or a pair of lions defeats an older pride lion and takes over his pride, the newcomers kill the cubs of the defeated male, which has the effect of bringing the lionesses back into heat so that the new males can mate with them and produce their own offspring; their task is then to keep control of the pride long enough for their own cubs to reach reproductive maturity. Among lions, such infanticide raises no moral questions, whereas among humans it does.

There is no problem of evil but rather the problem of good: not why is there “evil” but rather why is there “good”? Why do human beings consider acts like infanticide to be morally evil while lions do not? Why do we have morality at all? I believe that morality is an invention, a creation of human thought, not an instinct. It is one of the most important creations of the human mind, at least as great as the usually cited examples of human creativity (art, literature, science, etc.), if not greater, considering how much harder won it is than those other creations, and how much harder it is to maintain. Because “good” is not natural, it is always vulnerable to being overwhelmed by “evil,” which is natural: peace crumbles into war; restraint gives way to impulse; holism gives way to particularism; agape gives way to narcissism; love to lust; truth to lie; tolerance to hate. War, particularism, narcissism, etc., protect the self of the person and the tribe, one’s own gene pool so to speak, just as the lion kills his competitor’s cubs to ensure the survival of his own. We do not need to think very hard about doing evil; we do need to think hard about what is good and how to do it. It is something that every generation must relearn and rethink, especially in times of great stress.

It appears that we are in such a time today. Various stressors (the economy, the climate, overpopulation and mass migrations, religious conflict amid the dregs of moribund empires) are pushing the relationship of the tribes to the whole out of balance, and the temptations are to put up walls, dig trenches, draw up battle lines, and find someone other than ourselves to blame for our dilemmas. A war of all against all is not totally out of the question, and it may be that such a war or wars will eventuate in a classic Darwinian victory for one group over another—but history (rather than evolution) tells us that such a victory is often less Darwinian than Pyrrhic.

Is Brexit the End of the Postwar Era?

Most people with any sense of history know that the European Union came into existence as a consequence of the desire of Europeans to prevent a recurrence of the disputes and national rivalries that had led to the two great world wars, as well as to present a united front against the new threat to Europe, the Soviet Union.  With the fall of the Soviet Union, several countries of Eastern Europe joined the EU, eventually expanding the membership to 28 countries.  It is now 27 countries—and possibly on countdown, as other countries, exasperated by the lack of democracy and the failures of the EU governing classes, contemplate following the UK’s lead.

A faulty system will be tolerated so long as people believe that it is preferable to any other likely system; the EU has been tolerated largely because it was seen as preferable to the many wars that European nations had engaged in previously.  But the last great war ended seventy-one years ago; very few people who lived through that war are still alive, and memory of it and its long aftermath of reconstruction and national reorganization is largely relegated, for most living Europeans, to history books.  This may be especially true for the British, whose island continues to keep them somewhat apart from events on the Continent.  The threat of Russia under Putin looms close for, say, Poland and Germany, but seems rather distant to the UK.

Of course, the United States has not been a disinterested observer of the EU (as suggested by Obama’s remarks when he visited the UK earlier this year).  Having fought with the Allies in both World Wars, having financed the rebuilding of Western Europe through the Marshall Plan, and having been the prime mover behind NATO, the US is arguably as much a part of the EU as it would be if it were an actual member.  One might even argue that the EU is a continuation of empire by means other than outright warfare—perhaps we could even call the European project the “imperial project.”  Napoleon tried to unify Europe under the banner of France; the Austro-Hungarian Empire experienced some success in unifying parts of central and eastern Europe; and Prussia unified the disparate German states into Germany.  The rise of nation states themselves out of the motley assortment of duchies, kingdoms, free cities, and spheres of influence into the distinct nations we know today—France, Germany, Italy, the United Kingdom, etc.—was itself a long imperial project (each of these examples was initially united under a national king who had defeated his feudal aristocratic competitors).  And of course, we know the efforts of the Nazis to impose a unified Europe by brutal force under the swastika flag.

One might say, then, that the EU is a bureaucratic rather than a military empire.  Almost by definition, empire attempts to unify national, ethnic, linguistic, and religious “tribes” under one government, but its Achilles’ heel, its genetic defect, is the persistence of those tribes despite the efforts of the imperium to eliminate their differences.  It happened to the Roman Empire, which was disassembled by the very tribes which it had incorporated into its borders.  It also happened to the British Empire, once the most extensive the world has ever seen but now reduced to the islands of Great Britain and a small part of Ireland—and which may be further reduced if Scotland and Wales, both long ago (but not forgotten) bloodily defeated and humiliated by the English, decide to go their own way.

The United States, too, has been an empire-that-will-not-speak-its-name (although the Founders were not chary in using the term when describing their continental ambitions).  We have seen in the last few decades a diminution in the global power and influence of the US as various historic threats have been removed, making others, including Europe, less reliant on our power, and as previously backward countries have risen to the world stage, providing alternate centers of power for client states to orient to.  Our zenith of power was in the decades immediately following the end of WW2, but for us, too, with the passing of the “Greatest Generation,” memory of that triumph has faded, perhaps disastrously so.

So while it cannot yet be definitively confirmed, it does seem that the frustrations and resentments that built up to the Brexit vote could be a signal that the postwar era has come to an end.  If so, then the next question becomes:  Can globalization continue as planned and hoped for by the corporate, digital and government elites, or will tribalism and nationalism reassert themselves?  Will Europe (and the world) revert to its pre-WW1 national conflicts and warlike imperialist ambitions, or will it and the world evolve a totally new type of organization, one that no one has seen before or can as yet predict?  Or will things like global warming make all hope moot?

Stay tuned.

You Lie!

One of the questions epistemology tries to answer is, how do we know? This broad question breaks down into a number of narrower questions, among which is how do we know that what we know is true? Hence, how do we know that a statement (an assertion) by another person is true? How do we know that an assertion is not true? How do we determine that a statement is a lie?

Just as interesting: How is it that we are susceptible to believing a statement is a lie when in fact it is not?
How is it that climate deniers can continue to believe that climate change is a hoax, a deliberate lie, a conspiracy by a world-wide cabal of leftists and humanists (synonyms, I suppose)? I don’t refer here to the oil executives and conservative politicians who know perfectly well that climate change is real and that it is human activity that is causing it (i.e., the real conspirators), but the average Joes and Janes who believe firmly and without doubt that climate change is a lie, the ones who pepper the reader comments of, for example, the Wall Street Journal, with their skepticism at every opportunity—even when the article in question has nothing to do with climate change, or even the weather. Climate change deniers are just a convenient example of the problem—there is virtually no end to the number of topics on which people firmly, often violently disagree, on the left as well as the right.

There are two basic means by which we determine the truth or falsehood of statements (assertions), the specific and the general. By the specific I mean such things as data, facts, direct observation, and so forth—the basic stuff of science. Objective evidence, if you will. We determine the truth of a statement by the degree to which it conforms with the facts. If someone says, “It’s a bright sunny day,” but in looking out the window I can see that it is gray and raining heavily, I have proof based on direct observation that the statement “It’s a bright sunny day” is false. It might even be a lie, depending on the motive of the person who made the statement; or it might not be a lie but simply an error.

However, if I’m in the depths of an office building where there are no windows, and someone comes in and says, “It’s a bright sunny day outside,” how do I determine if his statement is true or false?
By the general I mean the use of such things as theories, ideologies, world views, traditions, beliefs, etc., as templates to determine the truth of statements by, in a sense, measuring how well the statement conforms to the parameters or principles of the theory (etc.)—a theory (etc.) which, of course, we have already accepted (through one or more of the various ways in which theories become accepted). For example, diehard creationists evaluate the claims of Darwinism according to a strict Biblical literalism, the theory that every word of the Bible was directly inspired by God and is therefore true and that the Bible taken as a whole conveys His divine plan of human history from beginning to end. So Darwinism, which not only denies the seven days of creation (in 4004 BC) but also provides no basis for teleological views of the history of life, is godless and therefore untrue. The “so-called facts” of Biblical scholarship and biology aren’t facts at all and can be dismissed out of hand.

Something not dissimilar occurred among Marxist intellectuals in England, France, and the United States during the Stalin era, when such luminaries as Sartre refused to believe the horrors being perpetrated in the Soviet Union because they did not conform to Marxist theory. Theory trumps reality in multitudes of cases, usually in ways less obvious than the errors of creationists and Marxists. Consider the political situation in the United States today as a near-at-hand example of the power of ideology, any ideology, to deny facts, or worse, to consider facts, and the people who bring them to our attention, outright lies.

“It’s a bright sunny day.” Perhaps the person who makes this statement is an extreme optimist who believes that if he repeats the assertion often enough, it will be true; or perhaps she will point out that somewhere on this planet it is a bright sunny day even if it isn’t here. Or maybe it’s a cruel lie perpetrated so that you will walk out to lunch without your umbrella and get soaked to the skin (ha, ha!). Or maybe he’s a politician who fears that if he speaks the truth (“There’s a mighty storm brewing.”) he will lose the election. And if you are a true believer, you will believe him, walk out to lunch without your umbrella, get soaked, and declare that the Senator was right, it is a sunny day—or maybe deny that he ever said that it was. You might recall years later that the Senator said it was a sunny day, or morning in America or whatever, and by golly he was right and history will recognize him as a great man. There are people in Russia who are nostalgic for the days of Stalin. It is said that the current head man of China wants to return to the ways of Mao. Could the Confederacy rise again?

Theories, ideologies, world views, all the general ways in which we measure truth and falsity are particularly resistant to correction or debunking, in part because we invest a great deal of life’s meaningfulness in our own special theory, in part because once we have adopted a theory (which we often do, without thought, very early in life), we get in the habit of measuring, evaluating, judging, and deciding according to its parameters. It is our paradigm, our gestalt, without which we could not make sense of the world. True or false, creationism and the whole system of beliefs of which it is a part make sense and give meaning. True or false, evolution and all that follows from it make sense, though for most people they do not provide much meaning. Many people think that the sciences in general don’t provide humanly useful meaning, the kind of meaning that motivates us to get up in the morning, to care about voting, to raise children, to have something meatier than “values” to guide our lives. We are even willing to “[depart] from the truth in the name of some higher order” rather than risk meaninglessness. Hence the attraction of -isms: Communism, Darwinism, Creationism, Capitalism, Feminism, Terrorism—any -ism that can confer on us something other than the inevitable insignificance of being only one out of seven billion people, who are only seven billion out of the 107 billion people who have ever lived—and who knows how many more in the future. The less significant we are, the more selfies proliferate. Every -ism is a kind of selfie.

Is Ignorance Like Color Blindness?

By ignorance I do not mean stupidity or prejudice, even though ignorance is often used as if it were a synonym of those two words. Stupidity in its strict sense is an incapacity to know, a kind of mental defect, though to use it in that sense today is considered rude and discriminatory. Mostly it is now used to indicate willful refusal to acknowledge the truth or to inform oneself of the facts. Some liberals like to refer to Trump voters as stupid, thereby dismissing them and their concerns as not worthy of attention.

Stupidity is often used as a synonym of prejudice, whose common meaning is basically to dislike anything or anyone not like oneself (with occasionally the added caveat that, if only the prejudiced person would just get to know whatever or whoever they dislike, they would lose their prejudice and even become a fan—if you’re afraid of pit bulls, for example, well, just get to know one and you will see what fine dogs they actually are). Prejudice in the strict sense, however, means to prejudge, to make a judgment before knowing anything or very much about a person or thing, and while often wrong, not always so. The child staring at broccoli on his plate for the first time, noting its cyanide green color and musty, death-like odor, is likely prejudiced against putting it in his mouth. Prejudice of the kind that is synonymous with stupidity is not always from lack of familiarity. Racist whites in the South were quite familiar with African-Americans, for example; their “prejudice” came from sources other than unfamiliarity.

Ignorance is simply absence of knowledge, and all of us are ignorant in a multitude of ways, even at the same time as we are knowledgeable about others. I am knowledgeable about the novels of Henry James but wholly ignorant of the ancient Egyptian Book of the Dead. This kind of ignorance, as opposed to that kind mentioned above, is akin to color blindness. The color-blind husband knows that he is color blind, so when dressing in the morning he will ask his color-sighted wife if the suit he plans to wear is blue or gray. He will probably also ask her to hand him his red tie, because he knows (because she has told him) that the green tie doesn’t go with either gray or blue. And would she please check that his socks match? He knows that there are colors even though he cannot see them, because people have told him that colors exist and that they can see them. He knows that he is color blind, even though he does not experience being color blind.

That sounds paradoxical, doesn’t it? But I think it’s true in a particular sense, a metaphorical sense. Genuine ignorance is like color blindness in that it can’t really be experienced. It’s not a state of being. What could ignorance feel like? What does color blindness feel like?

Certain persons like to refer to periods long in the past, say before the Enlightenment, as times when ignorance was rife in the land, as if it were a kind of plague from which those superstitious and benighted people unnecessarily suffered. This is an instance when “ignorance” is used in the pejorative, yet the question is, what in God’s name are those peoples of the past supposed to have known but didn’t? Were they willfully ignorant? Did they make no efforts to know what their modern critics think they should have known? What exactly is it that studious monks of the twelfth century should have known? Quantum physics? Germ theory? That God does not exist? If everyone were color blind, who would tell us of color?

Metaphorically speaking, we live in a world today when most people are color blind and only a few can see colors. Like the color-blind husband, we should listen to what the color-sighted have to say. Only a relative handful of people in the world understand the mathematics that is necessary to understand today’s physics; when they attempt to tell us in our language the truths of that physics, we really have little choice but to believe what they say, to place our trust in their vision. There is a larger, but still very much a minority, group of people who understand climate science sufficiently to make the determination that the world is warming and that human activity, particularly the burning of fossil fuels, is the primary, perhaps only, cause of that warming. We could draw up a long list of knowledge fields in which most of us are color blind. The husband who ignores his wife’s admonitions and walks out the door wearing one red sock and one green one is willfully stubborn. Those of us who reject the expertise of climate scientists are willfully ignorant. That’s stupid.

Donald Trump: Psychoanalysis vs. Ethics

Is Donald Trump a narcissist? Is he a psychopath? Is he mentally unstable? These questions, and others of the same ilk, have been asked (and often answered in the affirmative) throughout the primary campaign season. To a lesser extent, similar questions have been asked about his followers. There has been, in other words, a lot of psychoanalyzing. It’s as if the DSM-5, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, has become the primary guide to politics and politicians.

Hillary Clinton has also, and for a longer time (at least since the Lewinsky scandal), been subjected to armchair and coffee house analysis (she’s in denial, etc.), even though, given that she is, for a politician, a surprisingly private person (i.e., uptight? Secretive? Not warm?), one wonders how anyone can legitimately diagnose her. Bill Clinton has also, of course, been parsed and dissected (narcissist, sex addict, etc.). Surprisingly, there has been little psychoanalysis of Bernie Sanders, perhaps because, as Hillary’s gadfly, he has dominated the high ground of principle.

Perhaps when a serious candidate actually has principles and stays consistent with them, psychologizing is unnecessary and even irrelevant. Principles have the effect of overriding personal quirks and biases. They are not generated from within this or that individual, and therefore are not reflective only of that individual, but are generated in a long process of shared thought. We come to principles through reason (Hannah Arendt might have said, through reason paired with imagination), not through impulse; indeed, the point of principle is to put a bridle on impulse, to restrain the impetuousness of the moment in favor of the longer, wider view. In Pauline terms, it replaces the natural or carnal man with the spiritual man; in late Protestant terms, it replaces immediate with delayed gratification.

So while Trump may or may not be a psychopath, a narcissist, or mentally unstable or ill, which none of us can really know, he is an unprincipled man. His constant shape-shifting, self-contradictions, denials, and off-the-cuff bluster are the signs of an impulsive man whose thoughts and words are not subjected to the vetting of a set of principles that can tell him whether he is right or wrong. He has at long last no shame, no decency, because he has no principles to tell him what is decent or shameful. In other words, he is typical of human beings, men and women, when they have nothing higher or wider than themselves as guides to behavior. This is not the place to go in depth into the utility of moral principle, but just as an example, something as simple as “do unto others as you would have others do unto you” can restrain the natural selfish impulse to grab as much as you can for yourself.

Anyone who has taken an introductory course in psychology or who has paged through any of the editions of the DSM has found plenty of evidence that they are in some way or another mentally unstable or unhealthy. Just about anyone can look at the list of defining characteristics of, say, narcissistic personality disorder (do you think you are special or unique?), or antisocial personality disorder (are you opinionated and cocky?), or perfectionism, and wonder, in a bit of self-diagnosis, if they should seek help. Welcome to the cuckoo’s nest. Or rather, welcome to humanity.

But for the concept of a disorder to exist, there has to be a concept of an order, i.e., a definition of what being a normal person is. Ironically, psychology is of no help to us here. The DSM-5 is nearly one thousand pages long and, according to its critics, adds more previously normal or eccentric behaviors to its exhaustive, not to say fatiguing, list of mental maladies. Its critics also charge that it provides ever more excuses for psychiatrists and physicians to prescribe very profitable drugs to people who are really just normal people. After all, they point out, life is not a cakewalk, and people are not churned out like standardized units.

Principle, i.e., morality, ethics, on the other hand, can be of great help here. It is obvious that the followers of Trump have not been dissuaded from supporting him because of the amateur psychoanalyses of pundits and opponents. Clearly they like those traits which the alienists are diagnosing. But what if someone started criticizing him on moral grounds, what if someone performed something analogous to “Have you no decency, sir?” This question, posed by Joseph N. Welch to Senator Joe McCarthy during the Army-McCarthy hearings in 1954, was a key moment in the demise of one of the worst men in American political history. Welch did not psychoanalyze McCarthy, nor did Edward R. Murrow in his famous television broadcast on McCarthy’s methods, and McCarthy was not taken away in a straitjacket. He was taken down by morally principled men and women who had had enough of his cruelty and recklessness.

Means and Ends

The other day, as I was making a left turn from one major thoroughfare to another, I noticed several traffic cameras still perched on poles in the median. I wondered why they were still there because back in November the voters of my fair city passed a ballot proposition to have the cameras shut down. Perhaps the city fathers and mothers are hoping that, with an upsurge in traffic accidents, the voters might change their minds and vote the cameras back on.

But I doubt that. Voters were well aware of the facts, that for example the traffic cameras had not only caught red-light runners and left-turn violators but had (they said) reduced accidents at the city’s busiest, most dangerous intersections. The hefty fines, they said, had sent the desired message. But such reasoning misses the point: Voters were not upset that the cameras worked as advertised, had fulfilled the purpose for which they were installed in the first place. What voters didn’t like was being spied on as they went about their daily business. They did not like feeling surveilled in public any more than they would have liked it in private. Indeed, they believed that they retained a right to privacy when driving in their own cars.

They also did not like the mechanical, algorithmic, one-size-fits-all assumptions lying behind the programming of the cameras. There are, they felt, differences of reaction in different situations, decisions such as whether to proceed with a left turn or not depending on the assessments of individual drivers in particular circumstances. Cameras, they reasoned, do not record those situational strategies. Thus the cameras, with their automatic flashes and equally automatic generation of tickets automatically mailed to their doors, represented Big Brother monitoring their activities and telling them not only what to do but what not to do—and fining them for it.

So the cameras have been shut down. This little example tells me a lot about my fellow citizens and marks a difference between attitudes in the United States and those of other countries, say Great Britain, where CCTV surveillance is nearly everywhere. This is true “Don’t tread on me!” Americanism. More importantly, it suggests that people recognize that the ends don’t necessarily justify the means.

It certainly is a good thing to try to reduce traffic accidents and to spare drivers and passengers the horrors of severe injury and death, but in this case the means of achieving that end were rejected by the voters because they conflicted with values which the voters held equally dear. In other words, the achievement of one laudable end by this particular means eroded another laudable end, the desire for privacy and for not living in a surveillance state. A nanny state, if you will. For many Americans, and certainly for the majority of voters who ousted the cameras, doing for oneself is preferable to having the government step in and do it for them. Even if it means putting up with one’s own and others’ mistakes, even if sometimes those mistakes lead to serious consequences. It also entails a recognition that governments can make serious mistakes, and that governmental mistakes can have more far-reaching consequences than the mistakes of individuals.

One can argue that traffic cameras do not rise to the level of, say, decisions to invade foreign countries, or to demolish established neighborhoods for so-called urban renewal, or unequally applied death sentences, etc., but maybe my city’s voters recognize that something as seemingly benign as traffic cameras is the thin edge of a much bigger wedge.

Wars of a Thousand Cuts

In trying to understand the grotesque turn that American politics has taken in the last year, the economy seems to be the most often-cited explanation. Inequality has increased, with the richest Americans filching an increasingly large slice of the pie and leaving only crumbs for the rest of us: the middle class is in retreat, more good-paying jobs are being “outsourced” to foreign countries, and college tuitions and loans are crippling the next generation of workers. We can add to all this the sense that many people have, on both the left and the right, that the political and cultural elites are disconnected from the concerns of the people, that for the elites far too much of the country is fly-over country, both literally and metaphorically. Wall Street is blamed, Washington gridlock is blamed; so too are immigrants, terrorists, pop culture, GMOs, you name it.

Although all these disasters seem to have struck us suddenly, out of the blue so to speak, or at least since 9/11, perhaps the roots of our problems extend further back, to the first Gulf War (under the first Bush), perhaps further than that. The Wikipedia timeline of American wars shows that the country has been engaged in some kind of war more or less continuously since 1909, and frankly it doesn’t list every incident in which the military has been involved. This constant warmongering, so often contrary to reason, has perhaps wounded our collective psyche so gradually and so thoroughly that we fail to see that it is slowly bleeding us to death. We have spent a lot on these wars, and we spend a lot to maintain and improve our military capability and to supply our “allies” with weapons and munitions: money that could be used to maintain and improve our infrastructure and educational system, our healthcare and environment.

Yet we are promised, and clearly we want to believe (why else do we keep electing the same politicians over and over again?), that we can fight these wars at no cost to ourselves, that we can continue to dump trillions into “security” while enjoying tax cuts at the same time.

Here’s an apt illustration of the dilemma: the recent news has been dominated by the excessive wait times for passengers going through security checks at our airports. The TSA is getting the blame (too few personnel, etc.), but equal blame should be put on the blow-back from our ill-considered military interventions in other countries’ business, on our poor choice of allies to whom we ship armaments, and on our failure to rebuild our aging and inadequate airports. And on our free-lunch attitudes: one of the factors in the long security waits is the fact that passengers want to avoid paying checked baggage fees and therefore carry on as much luggage as they can get away with; that’s a lot of extra bags that need to be searched!

What a tangled web we have (all) woven! Will this political season make a difference? After all, we have a true political renegade now virtually guaranteed to be the Republican nominee for the Presidency, someone who “tells it like it is”; and on the Democratic side we have a very popular contrarian candidate who is giving the assumed nominee a serious challenge right up to the finish line (and who knows, perhaps beyond). Many voters hope that these mavericks can turn things around, but given the interwoven complexities of the overall situation, one wonders what they could actually accomplish should one or the other win. Has too much blood already been lost?

Terror and Security

The quarrel between Apple and the federal government over access to the smart phone of the San Bernardino terrorist shooters highlights the contest between the right to privacy and the need for national security.  At the same time that the government has unprecedented power to electronically snoop into our lives, the individual has new ways to thwart that intrusion, leading to the current confrontation between two giants of information, a global corporation and a powerful national government.

I am quite skeptical that the phone in question contains information which federal investigators have not already figured out by other means, given that even the most devious villains have their limits—perhaps they have even greater limits than the average citizen, because of their obsessive-compulsive, tunnel-vision focus on their “mission.”  But be that as it may, in this election year, as voters we will be making a choice about what we value more, privacy or security.

Certainly we are hearing a great deal from the potential Republican candidates about terrorism, national security, boots on the ground, etc., as well as about the perceived threat of illegal immigrants, who might after all include terrorists slipping into the country under the guise of refugees, especially if they’re Muslims.  The Democratic hopefuls also talk about security and terrorism, though they spend more time talking about the economy and the financial system.  But clearly, what to do about terrorism is on everyone’s mind.

Perhaps it would be well to consider history.  Not so long ago, the federal government engaged in domestic spying to thwart the communist threat, with “communist threat” being rather broadly defined.  The FBI under J. Edgar Hoover spied on Dr. King and many others, including the Kennedys (who could hardly be considered communists), resorting to illegal wiretapping, subversion of vulnerable insiders as informants, and other noxious tactics.  Since in fact communism was never all that serious a threat to the United States, one has to wonder if all that surveillance was engaged in for its own sake—it could be done, so it was done.

Great harm came from our obsession with the supposed threat of communism: the Vietnam War, for one, and our incessant interference in the affairs of other (sovereign) countries.  Whatever the real or imagined threat posed by Allende, for example, Pinochet was hardly an appropriate alternative.  Likewise, the invasion of Iraq and toppling of Hussein made matters in the Middle East far worse than they were before (and we might mention the disaster that befell the people of Libya after the fall of Qaddafi).  Great harm can come from our obsession with terrorism.  The greatest harm could be to ourselves, as we gradually and imperceptibly become accustomed to being perpetually watched, and as we adjust our behaviors to that environment.  Even our most private moments will seem less than truly private.

Caveat:  There is some irony in a giant technology/information corporation such as Apple (or Google, or Facebook, et al.) taking a stance against the federal government’s collection of our personal “data,” given that the coin of the tech realm is the collection of data from billions of users worldwide.  We are assured this is for our benefit, but we simply do not know what information is being gathered and stored, and we have little idea, beyond those suspiciously specific advertisements popping up on our screens and in our inboxes, what is being done with that information and by whom.  Never before have individuals been so vulnerable to both corporate and governmental, legal and illegal, hacking as we are today, nor, as users of social media, so complicit in that hacking.  It looks like we’re all Winston Smith.

Plato’s Cave, Inside Out, Part 2: Real Caves with Real People in Them

In my previous post, I retold Plato’s parable of the cave by suggesting that the shadows on the wall were the Ideas and that the objects which cast those shadows were the Real—in other words, there is no heaven of ideas or ideal forms superior to the corrupted embodiments of them in the material world; rather, these forms are literally ideas, shadows of the real in the minds of philosophers and thinkers such as political ideologues, and frankly of most of us as well. To some degree, we are all Platonists.

In this article I want to consider the notion(s) of the cave from a different angle, not that of parable, but of actual human practice. Plato’s cave is a deliberate fiction created to make a point, but it may be based on practices that have been common to human cultures since the beginning of our species. Most of the best preserved fossils of very ancient humans have been found in caves, along with the bones of various kinds of animals, including animals that also lived in or made use of caves, as well as with materials that in some instances are clearly artifacts of a kind (such as the small slabs of ocher etched with lines and cross hatchings found in some South African caves). There is abundant evidence that deep caves were thought to be entrances to the underworld and the haunts of beings such as gods of death and monsters and serpents, etc. It would not be surprising to learn that Plato was familiar with caves as cultic centers or as sites associated with certain gods or sibyls.

There is a native tribe called the Kogi living in the mountains of Colombia whose culture has remained largely intact since pre-colonial times. One of the most interesting practices of the Kogi is the way they train selected young boys to become priests: the boy is sequestered at a very early age, before he has acquired much knowledge of the world, in a dark cave where there is just enough light to prevent him from becoming blind; over the course of nine years, he is trained by priests in the knowledge and ways of his people and the world, emerging into the world at the end of his training as a priest in his own right.

This custom parallels Plato’s parable, yet in an inside-out way, by implying that right knowledge is acquired through the ideas of things rather than by the experience of things. The boy emerges with superior priestly knowledge which allows him to guide his people in the right ways. This could be seen as Idealism practiced in its most ideal form. And while it might appear to us as a bizarre practice, in fact something similar has always been practiced in literate cultures: our education system, after all, isolates the young from the outside world and inculcates them with a form of “ideal” knowledge from books (and now of course from electronic sources, which are in some ways even less real, less embodied, than printed books); those who succeed in the process are the elite of our culture. In former times, when education was limited to the male children of the upper classes and focused on philosophy, theology, and the classics, this was even more true. Monks and hermits were even more isolated from the world than scholars, and yet both scholars and hermits were considered to be wiser and more insightful than ordinary people caught up in the hurly-burly and distractions of the material world.

This notion that true knowledge and wisdom exist in separation from the world rather than involvement in the world is a particularly striking characteristic of the human mind; one might argue that it is what makes the mind human and therefore what most distinguishes us from other animals. But how did this state of mind come about? I doubt that it is hardwired, though of course it is grounded in the structure of the brain; but the brain is structured (at least in part) to think, and the origin of this notion has to be in thinking.

Let us consider the cave paintings of prehistoric France, those splendid depictions of Stone Age animals in the caves of Lascaux, Chauvet, and Font-de-Gaume, some dating as far back as 33,000 years ago. Since their discovery in the twentieth century, assorted theories as to their significance have been offered by anthropologists, Structuralists, psychologists, art historians, film makers (Werner Herzog’s mesmerizing film of Chauvet, for example), and others. All of these interpretations have their plausibilities, but in the absence of any written explanations by the original artists themselves, we cannot be sure which, if any, of these interpretations approximate what the artists themselves thought they were doing. Perhaps the paintings had magical or religious meanings; maybe they were just pictures. Or maybe they were something in between, something transitional.

Modern interpretations of the paintings presuppose that they were the end products of thought, that it was thought that created the art. For example, that success in the hunt was desired, so they created paintings of the sought-after prey in order to ensure that success. But what if the art created the thought? What if the paintings were a relatively late stage in a process that began with the patterns of lines and cross hatchings found on very ancient ocher slabs, and those etchings were at first random exercises or experiments in using the hands and simple tools to put marks on things? Surely the first human to do such a thing, perhaps just scratching some lines in the dust while lying in wait for prey to pass by, must have been taken aback by what he had done. We must not think in our terms about the origins of art, but in terms of the very first humans who started the whole thing, without any precedent of any kind. Would not that have been astonishing? Perhaps puzzling and even a bit scary at first? Would it not have been somewhat like (but not entirely so) a very small child of today swiping a crayon across a blank sheet of paper for the very first time in her life? None of us can remember that moment in our lives, and perhaps none of us can fully imagine that very first moment in human history (which may not even have occurred in our own species, H. sapiens sapiens)—but such a moment had to have happened.

And then the process of making sense of it all began. A proto-artist might have asked, what can I do with this? And in the course of further experimentation or play with these interesting scratches, refinements could have evolved, until we get to the representation, at first in stick figures, then in more developed figural sketches, of animals and other objects of experience. At some point, perhaps very early in the process, scratches and figures began to take on magical or spiritual properties—they began, in other words, to be interpreted, i.e., explained. Explanations led to more refinements of form, which in turn led to more explanations, and eventually we end up here, where we are in the twenty-first century, with shelves of books (and thousands of websites) that explain not just the objects themselves; they explain the explanations as well, in an almost infinite network of thought.

But what of the caves? The paintings on the walls of Chauvet and Lascaux appear to be late developments of a long process. Their apparent sophistication and indisputable beauty seem to indicate that they are the expressions of a rich and complex tradition, and their familiarity (their seeming prefiguring of, for example, twentieth-century modern art) seems to invite very modern responses and interpretations. Their presence deep in dark caves that could have been illuminated only by torchlight encourages theories of shamanistic practices, perhaps intended to ensure success on the hunt. Perhaps the caves were viewed as providing closer access to spiritual forces believed to be operating underground, or as bringing the people closer to their ancestors (quite a few known origin myths posit that the first people emerged from underground).

But there is another, admittedly speculative, possibility: That descending into the caves provided an escape from the pressures and distractions of the sunlit active world and allowed the artists to play with their art, to discover what more they might do with this ability, such as when they discovered that they could add depth and contour, a kind of three-dimensionality, by drawing around the bumps and irregularities in the rock walls. After all, it isn’t likely that they thought about that while going about their routines outside the caves; the technique had to have been suggested by the irregularities of the rock surface they were working on down in the caves, at the time of creation. In that sense, the pictures were pure art.

But they could not remain pure art for long—the human need for explanation, for interpretation, would have engaged almost immediately, and hence the paintings would have acquired meanings, likely even sacred meanings, soon after the artist stepped back to contemplate his finished work. Such a wonder would have inspired thought, perhaps an entire treatise of thought of a kind analogous to what we today call theoretical, which then could be carried in the minds and words of the artists to the sunlit surface world and conveyed to the general population.

It is relevant to note that many artists and writers of today explain themselves by saying that they didn’t know what they wanted to say until they said it (or painted or sculpted it). They will say that they knew where they started but didn’t know where they would end up. Some art and literary critics assert that the meaning of a work of art or a novel or poem is not completed until it has been viewed or read and interpreted by a viewer or reader. In this sense, art is the beginning of thought, not its conclusion.

Plato’s Cave, Inside Out

The original story of Plato’s cave can be summarized as this: A group of men are bound inside a cave with a wide entrance, through which the sun streams, projecting shadows on the back wall of the cave. The men’s shackles force them to face that back wall, so that all they can see are the shadows, moving back and forth across the wall. They are watching a kind of shadow play, which however they take for reality, as it is the only thing they can see. One day, the men are set free and dragged out of the cave into the sunlight, where they can see for the first time that the shadows they took for reality were cast by other men walking back and forth in front of the cave, carrying various objects as they went about their business. For the first time in their lives, these former prisoners realize that the things they had believed were real were merely insubstantial silhouettes of the actual objects that cast the shadows.

This parable has traditionally been understood to explain Plato’s philosophical Idealism, that is, that the objects of the world as we perceive them are imperfect embodiments of the ideal Forms, which are the real things of the Cosmos. Thus, for example, that table in the dining room is a representation, so to speak, of the ideal form of “Table,” which, unlike your dining table, is immaterial, perfect, eternal, and the “idea” that informs all tables—dining and kitchen, coffee and end, writing and conference, etc. All specific things of the material world are likewise merely expressions of their ideal forms. Thus, the “idea” of a thing is its truth—the material embodiment of the idea is imperfect, temporary, and therefore in a sense “false.”

The task of philosophy is to contemplate the ideal forms, not the imperfect expressions of them; this puts the “idea” above everything. One can see why Plato’s view has a great deal of appeal to philosophers and other types of intellectuals, including, all too often, ideologues, for whom an ideology (“a system of ideas and ideals, especially one that forms the basis of economic or political theory and policy”) trumps practicality (and oftentimes, morality). Whether or not Plato and his legion of descendants believed in a literal heaven of ideal forms, in practice they have behaved as if their ideas were in fact perfect, eternal, “self-evident,” and true, truer than experience and superior to the stubborn resistance of material things to being shaped according to these truths. For these types, reality is a sin against reason.

So let us attempt to correct Plato’s parable: The prisoners in the cave are not trapped in the material world, but in the confines of their own minds; they are contemplating the flickering shadows of their own thoughts, stripping away the particulars of individual objects and constructing vast theories on the basis of these one-dimensional, flat, featureless cutouts. (It is worth noticing that shadows are also dark, i.e., the blocking out or absence of light, as when one stands in the shadow of a tree or building.) Once the prisoners are freed, they can see that what they thought was real (their own thoughts) was not real at all.

It is the material world of particular objects, particular individual persons for example, as well as trees, vases, tables, songs, flowers, dogs, etc., that is filled with real things, the ideas of which are figments piled on figments unto confusion. Ideas uninspired and uncorrected by reality can lead us very far astray.

A relevant quotation:
“I ran out of interest in my own consciousness around 1990, but there’s no reason ever to run out of interest in the world.” –Crispin Sartwell, “Philosophy Returns to the Real World,” The New York Times, April 13, 2015