“Literal Meanings”

The other day, Slate magazine posted a map titled “Literal Meanings of Places in the US.”  It’s a fun graphic and I recommend it, but I will also mention a couple of caveats.  These caveats may be obvious in themselves, but perhaps I can express them in a way that will suggest interesting thoughts.

First, what is the “literal meaning” of a name?  When I think of that phrase, I ask two questions.  First, is the name likely to bring that meaning to the minds of most of the people who are likely to hear it?  And second, can the name be used independently to signify that meaning?  For example, the name “Newfoundland” likely brings to the minds of most English speakers, not only the place Newfoundland and the breed of dogs named after it, but also the idea that a land has been newly found.  With just a little typographical liberty, we can refer to places other than Newfoundland as new-found lands.  So I don’t object to saying that new-found land is the “literal meaning” of Newfoundland.

What we see on this map are not, in that sense, the “literal meanings” of North American place names.  They are etymological meanings, that is to say, meanings that have, at one time or another, been associated with words that have influenced the development of those names.  For example, “New York” is supposed to “literally mean” “New Yew-Tree Village.”  When the Latinism Eboracum was coined sometime before the year 95 of our era it probably represented an attempt to spell in Roman letters a Celtic word that meant “Place of the Yew Trees.”   And Eboracum, evolving in tandem with that Celtic word, changed its pronunciation over the centuries to become “York.”  But of course only scholars hear the word “York” and think “Place of the Yew Trees.” And by the time the word came to be pronounced “York” it was centuries past any connection with yew trees.  I suspect that no one has ever looked at a place of yew trees and called it a “York.”

I think it would be reasonable to imagine the history of a word as something like an archaeological site, in which collections of material from different periods of history can be found concentrated one on top of another.  So, two thousand years ago, Eboracum and its Celtic root may have meant “Place of the Yew Trees” to most of the people concerned with settlements in the far northeast of Roman Britannia.

At a higher stratum, that is to say, a later period, very different meanings are associated with the word.  The acts of the English crown which created the Province of New York in 1664, 1665, and 1674 and thus introduced the name “New York” into the English language were executed by a king who was not only ignorant of the Celtic etymology of the name “York,” but who was not likely giving much thought to the city of that name.  The province was created under the patronage of the king’s brother, the Duke of York, and was named for him.  That nobleman later became King James II of England and VII of Scotland, the last king of the Stuart male line.  James was York by title, but doesn’t seem to have been greatly involved with the city or its affairs, and he never visited the North American territory claimed in his name.  It is as if we found that someplace named New Newfoundland was named, not for Newfoundland, but for a particular dog of the Newfoundland breed.  At that point, the etymology of the name might have been glossed as something like “James’ new province,” or, considering James’ awkward position within the royal house in 1664, “We still care about you, James.”

If we dig further down to an earlier period, the root word might have meant something quite different.  Various Celtic languages include words similar to Eboracum that refer to various trees; perhaps the root of those words meant something other than “yew tree.”  It is possible that Phoenician merchants, whom we know to have been active in Cornwall and southern Ireland in Roman times, brought with them a word cognate with the Coptic ebu, “ivory,” and its Latin derivative ebur, eboris, and that this word was the base of those Celtic words.  This may not be a particularly likely etymology, but I have never been one to miss an opportunity to bring up the Phoenicians.

A second point enters in with glosses like “of the monks” for Des Moines, Iowa.  This appears to be a folk etymology that white settlers applied to mooyiinkweena, a name that the Peoria people used for certain neighbors of theirs.  The opinion the Peoria had of those neighbors can be surmised from the fact that the parts of the word mooyiinkweena appear to be mooy, meaning “dung,” and iinkwee, meaning “face.”  So, when they pointed at the site where Des Moines now stands and said mooyiinkweena, they were telling the whites that the people who lived there were shit-faces.  I should add that the erudite sources I link to above are not where I first learned the etymology of “Des Moines”; I first saw it last week on Cracked.

Originally, the folk etymology of Des Moines might have been a mistake.  But words mean what people use them to mean, not what they are supposed to mean.  If Des Moines residents and others who are concerned with the city have thought that the meaning “of the monks” is part of the name’s history, then it is part of that history.   And the fact that the name is now “Des Moines” rather than “Mooyiinkweena” is an example of the role that the folk etymology plays in that history.  Therefore, a map listing etymological meanings of North American place names would have to include both “of the monks” and “shit-faces” for Des Moines.   To return to the image of an etymology as an archaeological site stratified into layers, we might think of a three-dimensional map, on which both the geographic location of the places and the temporal development of the names’ meanings could be represented.

Even the two-dimensional map on Slate must be the result of a great deal of work; a three-dimensional map would require far more drudgery, and even then it would be a severe oversimplification.  So I mention it only to illustrate the point, not to find fault with the map or to take back my recommendation that everyone look at it.
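
To make the archaeological image concrete, here is a minimal sketch, in Python, of the sort of data structure such a three-dimensional map might rest on.  It is purely illustrative: the class names are my own, and the dates and glosses are taken loosely from the discussion above rather than from scholarship.

```python
from dataclasses import dataclass

@dataclass
class Stratum:
    period: str  # rough era, e.g. "before 95 CE" (illustrative, not scholarship)
    form: str    # the form the name took in that period
    gloss: str   # the meaning associated with the name in that period

@dataclass
class PlaceName:
    name: str              # the modern name, for the map label
    lat: float             # the map's two geographic dimensions
    lon: float
    strata: list[Stratum]  # ordered oldest to newest: the temporal dimension

new_york = PlaceName(
    name="New York",
    lat=40.71,
    lon=-74.01,
    strata=[
        Stratum("before 95 CE", "Eboracum", "place of the yew trees"),
        Stratum("1664-1674", "New York", "the Duke of York's new province"),
    ],
)

des_moines = PlaceName(
    name="Des Moines",
    lat=41.59,
    lon=-93.62,
    strata=[
        Stratum("pre-contact", "mooyiinkweena", "shit-faces"),
        Stratum("settler era", "Des Moines", "of the monks"),
    ],
)
```

A map built on records like these would plot each place at its coordinates and stack its strata vertically, oldest at the bottom, as at a dig.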

Illegals

On October 15, linguist Neal Whitman wrote a piece on his blog in which he conceded that there are several good reasons to avoid the term “illegal immigrant.”  He cites three of these:

  1. It is politically divisive or inflammatory.
  2. It presumes guilt before due process has been done.
  3. It is inaccurate in characterizing people who entered legally but overstayed their visa, or did not come here of their own accord.

Mr Whitman accepts all of these arguments, and grants that the term “illegal alien” is dehumanizing and should be avoided at all times.  He does register a dissent from a fourth argument, however:

[The phrase “illegal immigrant”] is nonsensical, because illegal refers to acts, not to people.

Mr Whitman categorizes this claim as “just plain silly, and grasping at straws.”  He explains:

When the noun is the agentive form of a verb, and the adjective is the morphological analog of a manner adverb, there is a common, productive rule of semantic composition that gets you to the accepted meaning. Let me illustrate with an example unburdened by controversy. If I were to say, “Sandy is a deep thinker,” it would be willfully obtuse to say, “Hey, wait a minute! People can’t be deep!” If I were to tell you, “Lee is a beautiful dancer,” I could be telling the truth even if Lee’s face, when covered by a paper bag, could still make clocks lose two minutes per hour. In short,

dances beautifully : beautiful dancer :: thinks deeply : deep thinker :: immigrates illegally : illegal immigrant

Object to the term illegal immigrant on ethical, political, or legal grounds if you want to. But don’t resort to claiming the term embodies sloppy semantics, when it’s the most natural way to refer to someone who immigrated illegally. That just makes it look like you’ll accept any old argument that favors your side, and weakens the more valid ones.

On October 17, I commented on Mr Whitman’s post as follows:

I have a reservation about “illegal immigrant.” It is a long, awkward expression (six syllables, two lexical items, several highly abstract notions embedded in it), so people will naturally want to shorten it. And the form to which it always seems to be shortened is “illegal.” As in, “How many illegals are in the USA?” That usage doesn’t exactly invite the full range of opinions as to what our policies should be with regard to immigration. Granted, a phrase like “undocumented worker” also signals a strong preference in the same regard. Using either term suggests that the speaker has set his or her face firmly against one side of the discussion. Perhaps if we as a society declared both expressions off-limits in polite conversation, people would come up with a truly neutral term. Of course, there would always be the danger that one or both of the expressions would sneak back into the language and steel American jaws, but that’s just something we’d have to guard against.*

On October 22, functional linguist Daniel Ginsberg wrote this comment:

Full disclosure: I’m a functional linguist, so I tend to be skeptical of people talking about what “words mean” in the absence of a person who used those words to encode a specific message. Also, I’m pretty far to the left of mainstream in American politics, and I’ve spent years working with immigrants, so you can guess what my personal choice of phrase is.

That said, my intuition is that the problem with “illegal immigrant” isn’t as much in the semantics of adjective-noun compounds as in the associations with the word “illegal.” The top hits of a COCA** search for “illegal [*nn]” are “immigrants, immigration, aliens,” and after that we get into “drugs, weapons, substances, acts, dumping, gambling, arms,” as well as “workers,” which seems to be a euphemism for “immigrants.” Going down the list, other collocates that refer to human beings are always other terms for *ahem* undocumented workers: “residents,” “entrants” (into the U.S.), “population.” The top 100 collocations in COCA don’t show any “illegal” + person pairings except for “illegal immigrants” and synonyms.

So the question becomes, if the language permits “illegal N” to mean “person who did N in an illegal way,” why is N nearly exclusively reserved to signify “immigrate into the United States”? Why isn’t Bernie Madoff an “illegal banker,” or Jack Kevorkian an “illegal doctor,” or Lance Armstrong an “illegal cyclist”?

The CDA*** researcher in me says, we’re making a class of “illegal things” here, that is implicitly expressing an ideology about the nature of illegality. The contents of that class include assault weapons, addictive drugs, the pollution of waterways with industrial runoff, cutting trees on protected land, running a casino out of your basement … and sneaking across the US border because conditions in your home country are so dire that you have no hope for a better life there.
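
COCA itself is queried through a web interface, but the spirit of Mr Ginsberg’s search — find the nouns that immediately follow “illegal” — is easy to approximate on any part-of-speech-tagged text.  Here is a minimal sketch, assuming Penn Treebank-style tags; the function name and the toy data are my inventions, and a real run would of course use a large tagged corpus rather than one sentence.

```python
from collections import Counter

def noun_collocates(tokens, tags, target="illegal"):
    """Count the nouns that immediately follow the target adjective.

    tokens: a list of lowercased words; tags: the matching list of
    part-of-speech tags (Penn Treebank style, e.g. from nltk.pos_tag).
    This approximates a COCA query like "illegal [*nn]".
    """
    counts = Counter()
    for i in range(len(tokens) - 1):
        if tokens[i] == target and tags[i + 1].startswith("NN"):
            counts[tokens[i + 1]] += 1
    return counts

# Toy example; a real run would iterate over a large tagged corpus.
tokens = ["illegal", "immigrants", "and", "illegal", "weapons"]
tags = ["JJ", "NNS", "CC", "JJ", "NNS"]
print(noun_collocates(tokens, tags).most_common())
# [('immigrants', 1), ('weapons', 1)]
```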

Mr Whitman’s post and the discussion appended to it presaged a news story that broke a few days later.  On October 19, the Associated Press released a statement announcing that it would continue to use the phrase “illegal immigrant” to refer to people who have entered and established residence in the United States without the permission of the legal authorities.  The wire service’s defense of this decision reads eerily like what Mr Whitman had posted a few days before:

Finally, there’s the concern that “illegal immigrant” offends a person’s dignity by suggesting his very existence is illegal. We don’t read the term this way. We refer routinely to illegal loggers, illegal miners, illegal vendors and so forth. Our language simply means that a person is logging, mining, selling, etc., in violation of the law — just as illegal immigrants have immigrated in violation of the law. (Precisely to respect the dignity of people in this situation, the Stylebook warns against such terms as “illegal alien,” “an illegal” or “illegals.”)

The press release goes on to describe circumstances in which the AP would avoid the phrase or add qualifications to it, descriptions which again recall Mr Whitman’s agreement that the first three arguments he cites constitute good reasons for using another expression:

The first thing to note is that “illegal immigrant” is not the only term we use. The Stylebook entry on this subject was modified a year ago to make clear that other wording is always acceptable, including “living in the country without legal permission.”

In fact, there are cases where “illegal immigrant” doesn’t work at all. For instance, if a young man was brought into the country by parents who entered illegally, he didn’t consciously commit any act of “immigration” himself. It’s best to describe such a person as living in the country without legal permission, and then explain his story.

There are also cases where a person’s right to be in the country is currently in legal dispute; in such a case, we can’t yet say the person is here illegally.

But what about the cases where we do write “illegal immigrants”? Why not say “undocumented immigrants” or “unauthorized immigrants,” as some advocates would have it?

To us, these terms obscure the essential fact that such people are here in violation of the law. It’s simply a legal reality.

Terms like “undocumented” and “unauthorized” can make a person’s illegal presence in the country appear to be a matter of minor paperwork. Many illegal immigrants aren’t “undocumented” at all; they may have a birth certificate and passport from their home country, plus a U.S. driver’s license, Social Security card or school ID. What they lack is the fundamental right to be in the United States.

Without that right, their presence is illegal. Some say the word is inaccurate, because depending on the situation, they may be violating only civil, not criminal law. But both are laws, and violating any law is an illegal act (we do not say “criminal immigrant”).

Mr Whitman’s blog is titled “Literal-Minded”; its tagline is “Linguistic Commentary from a Guy Who Takes Things Too Literally.”  So when he argues that the rules of English semantics permit a construction like “illegal immigrant,” it is quite believable that his agenda does not go beyond the explication of those rules.  The sheer fact that the phrase is well-formed does not mean that anyone should ever use it, and so his argument is by no means a defense of its use.  He recognizes this; the AP does not.  Its press release offers no defense of the phrase beyond its formal admissibility as a semantic structure, and does not answer any of the objections Mr Whitman had so readily acknowledged.

On October 31, Slate magazine carried a piece by Kerry Howley, associated with the title “Is Saying ‘Illegal Immigrant’ Like Saying ‘Illegal Logger’?”  Ms Howley reports on the AP’s decision; a photo accompanying the piece carries the caption “Support for undocumented immigrants at the Democratic National Convention. Supporters of illegal loggers never showed.”  Neither Mr Whitman nor the AP had mentioned any particular group or individual that had asked the wire service to discontinue use of the phrase “illegal immigrant”; Ms Howley links to a website associated with the campaign known as “Drop the I Word.”  In response to the AP’s observation that “[t]erms like ‘undocumented’ and ‘unauthorized’ can make a person’s illegal presence in the country appear to be a matter of minor paperwork,” Ms Howley argues:

“Illegal” suggests fault with immigrants rather than the system of laws in which they are ensnared. It’s possible that illegal loggers are illegal because of poorly drawn statutes about public land—maybe they’re really freedom loggers—but that’s not the connotation.

“Undocumented” places the burden on the bureaucracy rather than on the moral integrity of any particular person. That’s the correct position in my view, and I reveal prior judgments when I use the word “undocumented” just as restrictionists do when they say “illegal.” What’s bizarre is that the Associated Press, having deemed “undocumented” a loaded term, thinks “illegal” to be perfectly descriptive, sprung from nowhere, privileging no side of the debate. It may be that there is no objective term with which to describe people guilty of being in a particular space without state permission. You have to pick one and own it, which “Drop the I-word” seems to recognize. They suggest you start saying “NAFTA Refugee.”

Here Ms Howley echoes my comment of the 17th, though without my suggestion that we might try to invent a new term that will be neutral.  Of course, I made that suggestion in less than total earnestness; there doesn’t seem to be any great demand for detached, objective discussion of immigration policy, much less for new vocabulary to promote such discussion.  All sides of the debate are driven by people who favor policies which they regard as indispensable to their livelihoods.  In that position, people look at words as weapons with which to fight the enemies who threaten them, not as laboratory equipment with which to gain understanding.  So when you choose your words, you choose your battles.

*None of the subsequent commenters said anything about “steel American jaws,” a line of which I was somewhat proud.  I would have been happy if they had said it made them laugh, but I’m not upset that they didn’t. 

**COCA = the Corpus of Contemporary American English

***CDA = Critical Discourse Analysis

A possible etymology of the name “Acilius”

I’ve long used “Acilius” as my screen-name, in tribute to Gaius Acilius, a Roman historian who was alive and doing interesting things in 155 BC.  It never occurred to me that anyone would know the etymology of the name “Acilius”; it was quite an old name among the Romans, and they did not really keep track of that sort of thing in those days.

A couple of months ago, I happened onto a post on the blog “Paleoglot” which led me to wonder if there might not be a way to explore the question of where the gens Acilia found its name.  Blogger Glen Gordon analyzes various occurrences of a stem acil- in Etruscan.  In his conclusion, Mr Gordon offers these definitions to cover the occurrences he has discussed:

I think we could define the English translations of the whole word family much better as part of a grander morphological design:

*aχ (v.) = ‘to do, to make, to cause’
> acas (v.) = ‘to craft, to make’
> acil (n.) = ‘thing, act; rite, holy service’ (> acil (v.) = ‘to do rites, to worship’)

The implied underlying verb here, *aχ, reminds me very much of the Indo-European *h₂eǵ-, as if borrowed from Latin agere ‘to drive, lead, conduct, impel’.

This intrigues me very much.  If the Etruscans borrowed such a word from Latin, that would suggest that the usual story about the relationship between Etruscan religion and Roman religion is misleading.  Rather than the Etruscans simply molding the religious practices and ideas of their subjects, the early Romans, the presence of a Latinate word in Etruscan religious vocabulary would suggest a reciprocal relationship between the hegemonic Etruscans and their vassals.

On the other hand, if the similarity between acil- and agere is a mere coincidence, another possibility presents itself.  This is where the Acilii come to mind.  Perhaps the name “Acilius” is a combination of the Etruscan root acil-, with its sense of performing holy service, and the Latinate suffix -ius.  A fairly exact equivalent could be suggested, as chance would have it, in the English name “Priestley,” where the borrowed word priest is combined with the indigenous suffix -ley.  So perhaps all these years I’ve been unwittingly associating myself with such distinguished polymaths as Joseph Priestley and J. B. Priestley.

Cartoon etymology

Thanks to Stan Carey, who introduced those of us who read his site to “Mysteries of Vernacular.”  “Mysteries of Vernacular” is a series of animated shorts exploring the etymology of a few English words.  Here’s the one for hearse.  I like the interactive graphic that they give you to browse the videos.

The other day, Zach Weiner’s Saturday Morning Breakfast Cereal featured an explanation of the origin of the phrase “vanilla sex.”  The explanation:

The VAN- part comes from the Spanish “Vaina” from the Latin “Vagina.”  The -ILLA part is diminutive.  So, etymologically, “Vanilla Sex” refers to a little vaginal sex.

Each of the etymological claims in the explanation is basically true,* but the conclusion they allegedly support is ludicrous.  Which I’m sure is the point: the etymological information represents a pun of an unusual and elaborate form.

*Basically.  So Spanish vaina does come from the Latin vagina, but so does Spanish vagina.  In Latin, vagina meant either “vagina” or “sheath”; in Spanish, vagina means “vagina” and vaina means “sheath.”  So, etymologically, “vanilla” means “little sheath,” not “little vagina.”  If the people who coined the phrase “vanilla sex” were thinking about the etymology behind the word “vanilla,” the etymological meaning of the phrase would be “little sheath sex.” In that case, we would expect the first appearances of the phrase to have some association with condoms.  Perhaps with little condoms.  Though perhaps not; sometimes the Romans used diminutive endings the way we use them in English, as terms of endearment or as ways of sounding cutesy (like the -sy on the end of “cutesy,” or the -y on the end of “thingy.”)  So maybe “condom-y sex,” or “condomish sex” might be a more accurate rendering than “little condom sex” if the original formation of “vanilla” were in fact part of the history of the expression.

Atheism is no excuse for skipping church

In a recent review of Alain de Botton’s Religion for Atheists: A Non-Believer’s Guide to the Uses of Religion, John Gray writes:

Rarely mentioned in the debates of recent years is that atheism has been linked with all kinds of positions in ethics, politics and philosophy. More particularly, there is no necessary connection – either as a matter of logic or in the longer history of atheist thinking – between atheism and the rejection of religion.

Atheist thinkers have rejected and at times supported religion for many different reasons. The 19th-century anarchist Max Stirner rejected religion as a fetter on individual self-assertion. Bakunin, Marx and Lenin rejected it because it obstructed socialist solidarity, while Nietzsche hated religion (specifically, Christianity) because he believed that it had led to ideologies of solidarity such as socialism. Auguste Comte, an atheist and virulent anti-liberal, attempted to create a new church of humanity based on science.

In contrast, the French atheist and proto-fascist Charles Maurras, an admirer of both Comte and Nietzsche, was an impassioned defender of the Catholic Church. John Stuart Mill – not exactly an atheist but not far off – tried to fuse Comte’s new religion with liberalism. In marrying atheism with very different ethical and political positions, none of these thinkers was confused or inconsistent. Atheism can go with practically anything, since in itself it amounts to very little.

Certainly a dictionary definition such as “the doctrine that there are no gods” amounts to very little.  Professor Gray champions such a definition:  “Rightly understood, atheism is a purely negative position: an atheist is anyone who has no use for the doctrines and concepts of theism.”  For my part, I am reflexively skeptical of any very simple, purely abstract definition of an ideological label.  I doubt that anyone adopts such a label as a self-description or responds powerfully to it as a description of a participant in a debate unless it suggests a rather substantial narrative.   “Atheist” is a label that millions of people wear with fierce pride, and that raises equally fierce anger and fear in hundreds of millions of others.  The strength of those reactions proves that the word has connotations for these people that go far beyond the tidy little abstractions of the dictionary, and their predictability shows that these connotations are much the same from person to person.   Therefore, I am not convinced that anyone anywhere is an atheist simply in the dictionary sense of the word.  There are people who reject particular religious beliefs that involve the existence of gods, and there are people who accept particular beliefs that exclude the existence of gods.  The key thing about each of these people is their relationship to those particular beliefs, to the people they know who espouse those beliefs, and to the institutions in their social worlds that are associated with those beliefs.  A label such as “atheist,” in the dictionary sense, would sort a pious Confucian, an orthodox Communist, and a militant freethinker together.  Certainly no category that includes three such disparate people could be a very important part of our understanding of the world.

As I am skeptical of the dictionary version of the word “atheism,” so too am I skeptical of the word “theism.”  The Oxford English Dictionary gives four definitions for “theism.”  (Not counting another, unrelated, word spelled the same way, which means “illness as the result of drinking tea.”)  These definitions are: “belief in a deity or deities; as opposed to atheism”; “belief in one god, as opposed to polytheism or pantheism”; “belief in the existence of god, with denial of revelation”; “belief in the existence of god, without denial of revelation.”  In the first of these senses, the word appears to be a back formation created by taking the prefix off of “atheism.”  The word is obsolete in the second sense, having been replaced by “monotheism.”  The third sense has been replaced by “deism”; where deism is a live option, its opponents still use the word “theism” to describe themselves.  In view of the word’s history, then, it would be as true to say that “theism” names a “purely negative position” as it is to say that “atheism” names a “purely negative position.”  A theist is someone who rejects the labels “atheist” and “deist” and will not play the social roles that come with those labels.

Again, no one does only this.  Those who call themselves “theists” are adherents of particular religions.  Surely, no one believes in “a personal god”; billions of people believe in the God their favorite preacher describes.  Mere theism is as unreal as C. S. Lewis’ “Mere Christianity.”  Indeed, the labels that name world religions cover so many people and so many cultures of faith that anyone can see the point the late Edward Said made when he proposed scrapping the term “Islam” on the grounds that such a word “imputes a unified and monolithic religious and cultural system” to what is in fact an infinitely diverse range of experiences lived by over a billion people scattered all over the globe.  How much worse then is a label that encompasses not only that range, but also the ranges of experience grouped under “Christianity,” “Judaism,” “Sikhism,” “Hinduism,” etc.

Professor Gray does recover a bit as the review goes on.  So:

Most people think that atheists are bound to reject religion because religion and atheism consist of incompatible beliefs. De Botton accepts this assumption throughout his argument, which amounts to the claim that religion is humanly valuable even if religious beliefs are untrue. He shows how much in our way of life comes from and still depends on religion – communities, education, art and architecture and certain kinds of kindness, among other things. I would add the practice of toleration, the origins of which lie in dissenting religion, and sceptical doubt, which very often coexists with faith.

Today’s atheists will insist that these goods can be achieved without religion. In many instances this may be so but it is a question that cannot be answered by fulminating about religion as if it were intrinsically evil. Religion has caused a lot of harm but so has science. Practically everything of value in human life can be harmful. To insist that religion is peculiarly malignant is fanaticism, or mere stupidity.

De Botton has done us a service by showing why atheists should be friendly to religion. Where he could have dug deeper is the tangled relations between religion and belief. If you ask people in modern western societies whether they are religious, they tend to answer by telling you what they believe (or don’t believe). When you examine religion as a universal human phenomenon, however, its connections with belief are far more tenuous.

The fixation on belief is most prominent in western Christianity, where it results mainly from the distorting influence of Greek philosophy. Continuing this obsession, modern atheists have created an evangelical cult of unbelief. Yet the core of most of the world’s religions has always been holding to a way of life rather than subscribing to a list of doctrines. In Eastern Orthodoxy and some currents of Hinduism and Buddhism, there are highly developed traditions that deny that spiritual realities can be expressed in terms of beliefs at all. Though not often recognised, there are parallels between this sort of negative theology and a rigorous version of atheism.

A couple of years ago, we noticed James P. Carse’s The Religious Case Against Belief, a book which argues not only that its beliefs are not the things which make a religious tradition most valuable, but that an excessive emphasis on beliefs is the surest way to drain a religious tradition of its value.  Professor Gray seems to be approaching Professor Carse’s views here.  He goes on to write paragraphs that will make any admirer of Irving Babbitt wince:

The present clamour against religion comes from confusing atheism with humanism, which in its modern forms is an offshoot of Christianity.

Unfortunately, de Botton falls into this confusion when he endorses Comte’s scheme for a humanist church. “Regrettably,” he writes, “Comte’s unusual, complex, sometimes deranged but always thought-provoking project was derailed by practical obstacles.” It is true that in accepting the need for religion Comte was more reasonable than the current breed of atheists. But it is one thing to point out why atheists should be friendly to religion and another to propose that a new religion should be invented for atheists.

The church of humanity is a prototypical modern example of atheism turned into a cult of collective self-worship. If this ersatz faith came to nothing, it was not because of practical difficulties. Religions are human creations. When they are consciously designed to be useful, they are normally short-lived. The ones that survive are those that have evolved to serve enduring human needs – especially the need for self-transcendence. That is why we can be sure the world’s traditional religions will be alive and well when evangelical atheism is dead and long forgotten.

I mention Irving Babbitt because of the episode that briefly made him a celebrity.  In 1930, Babbitt was 65 years old, and had for over 30 years taught French and Comparative Literature at Harvard University.  In those decades, he and his friend Paul Elmer More had assembled a school of learned followers who labeled themselves “the New Humanists.”  1930 was the year the New Humanists chose to make their debut as a movement.  A book featuring essays by Babbitt, More, and many of their followers (including Babbitt’s pupil T. S. Eliot) appeared under the title Humanism and America: Essays on the Outlook of Modern Civilization; Babbitt himself gave a lecture at Carnegie Hall, drawing an audience of 3000.  Much to the dismay of Babbitt and company, a circle around philosopher John Dewey also chose 1930 to launch a project under the name “the New Humanism.”  While Babbitt traced the criticism that he and his school practiced back to Erasmus and the other Christian humanists of the Renaissance and claimed that it offered a way even for irreligious people such as himself to recognize the value of religion, the Deweyans were hostile to traditional religion and favored views quite similar to those Professor Gray describes above.  The extent of the Deweyans’ triumph in the battle for the word “humanist” can be measured not only by remarks like Professor Gray’s but also by the prosperity of the American Humanist Association, which had its origins in the Dewey group’s 1930 activities and which stands today as the USA’s foremost institutional champion of atheism.  Needless to say, the American Humanist Association’s successive “Humanist Manifestoes” make no reference to Babbitt and More, and certainly take no notice of Erasmus or any other Christian humanists.

Babbitt’s “humanism” suffered from many weaknesses, not least the fact that it was at least as sweeping a collection of diverse beliefs and experiences as would be sorted under the label “theism.”  Indeed, at the height of the “Humanist” controversy Paul Shorey slashed away at the New Humanists precisely because they made the term “humanism” bear an impossible burden.  Even as the dictionary versions of “theism” and “atheism” elide the whole world of religious experience, so too Babbitt’s conflation of all the sages, philosophers, and prophets of the past is, in Shorey’s words, “exposed to misunderstandings and misapplications, and Professor Babbitt wishes to deduce from it precisely his own ideals in religion, ethics, culture, philosophy, politics, and education.”  By contrast, Shorey declared himself  “content to take the word in a loose, fluid, literary way and in the traditional Renaissance sense of devotion to the Greek and Latin classics and to the cultural and ethical ideals that naturally result from an educational system in which they hold a considerable place.”  Babbitt would likely have claimed that he and his school used the word in the same way, but that they, unlike Shorey, had thought through the question of what “cultural and ethical ideals” can be expected to “naturally result” from various educational systems in which the Greek and Latin classics hold various places that might be called considerable.  In other words, what Shorey was doing with the word “humanism” may be very much like what Professor Gray is doing by invoking the dictionary definition of “atheism.”  In each case, the critic is trying to avoid a controversy by associating himself with a version of a word that is artificially drained of its connotations and narrative content and confined to a purely formal significance.  In each case, however, the word has associations that cannot be suppressed.  By trying to hide those associations behind the dictionary, the critic puts himself in a weak position.  If Shorey wished to escape from Babbitt’s attempt to overstuff the word “humanism” with all the wisdom in the world and to ground in it all of his preferred ideas, he would have been better advised to consider the particular uses of the word as evidenced by identifiable people in specific situations than to express a preference for a use of the word that differs from Babbitt’s chiefly in its greater vagueness.

Philosopher that he is, Professor Gray was never likely to declare that a term and the prejudices it expresses are best left unexamined.  His refuge in the dictionary, however, leaves him in a very awkward position.  For example:

“Religion,” writes Alain de Botton, “is above all a symbol of what exceeds us and an education in the advantages of recognising our paltriness.” It is a thought reminiscent of Blaise Pascal. One of the creators of modern probability theory, the 17th-century thinker invented an early calculating machine, the Pascaline, along with a version of the syringe and a hydraulic press. He made major contributions to geometry and helped shape the future development of mathematics. He also designed the first urban mass transit system.

Pascal was one of the founders of the modern world. Yet the author of the Pensées – an apology for Christianity begun after his conversion to Catholicism – was also convinced of the paltriness of the human mind. By any standards a scientific genius and one of the most intelligent human beings that may ever have lived, Pascal never supposed that humankind’s problems could be solved if only people were smarter.

The paradox of an immensely powerful mind mistrusting the intellect is not new. Pascal needed intellectual humility because he had so many reasons to be proud of his intelligence. It is only the illiteracy of the current generation of atheists that leads them to think religious practitioners must be stupid or thoughtless. Were Augustine, Maimonides and al-Ghazali – to mention only religious thinkers in monotheist traditions – lacking in intellectual vitality? The question is absurd but the fact it can be asked at all might be thought to pose a difficulty for de Botton. His spirited and refreshingly humane book aims to show that religion serves needs that an entirely secular life cannot satisfy. He will not persuade those for whom atheism is a militant creed. Such people are best left with their certainties, however childish.

I would be the last to deny that Pascal was a great mind, but neither would I say that atheism, even of the militant variety, has confined its appeal to people who can be dismissed as “best left with their certainties, however childish.”  As Professor Gray says, a bare denial of the existence of gods, considered in the abstract, doesn’t “amount to much.”  Yet there is something in the label “atheist” and the roles that atheists play in society that has a powerful attraction even to people who could have matched wits with Pascal.  Like Paul Shorey before him, Professor Gray has not followed his own lead.  As he is willing to break the “fixation on belief” in discussing religion, so too should he break the same fixation when discussing irreligion.

Gettier cases in real life

It strikes me that I left something important out of a post I put up the other day, the one titled “Justified True Belief.”  In it, I summarized Edmund L. Gettier’s 1963 article “Is Justified True Belief Knowledge?” (an article that was less than three pages long to begin with, so it was a bit silly to summarize it).  Gettier cited a definition of knowledge as “justified true belief,” a definition that went back to Plato, and gave two examples of justified true beliefs that we should not call knowledge.  Gettier’s examples were rather highly contrived, but have been followed by many publications giving more plausible scenarios in which a person might hold a justified true belief, and yet not be said to have knowledge.  I said in the post that such “Gettier cases” occur in real life with some frequency, then gave a novel by Anthony Trollope as my closest approximation to real life.

Here’s something that happened to me.  I was teaching a class about social life in ancient Greece and Rome.  The topic for the day was marriage, including the custom of the dowry.  Most of my students have passed their whole lives up to this point in the interior of the USA.  To them the idea of a dowry is a bizarre one.  To make it somewhat intelligible to them, I explain that in ancient times it was common for a household to subsist on resources approaching the minimum necessary for survival.  So, it was quite a serious matter to share what little one had with one’s neighbors.  Say a creek ran through your farm, and your neighbor wanted to make a deal with you to divert a portion of its water to irrigate his fields.  If he were to trick you and take too much of the water, you and your entire family might very well starve to death as a result.  How was it possible to develop such trust in one’s neighbor that it would be possible to strike such a bargain?  If you and he were going to have grandchildren in common, then you could believe that he would have enough interest in your long-term well-being that he would be unlikely to treat with you in so harsh a manner.  Thus, a property owner who would not let his neighbor dig an irrigation ditch for any amount of money might freely dig it for his neighbor himself as a dowry for his daughter.

I tell this story every semester.  A couple of years ago, one of my students approached me after class.  A woman from India, she was troubled by my explanation of the dowry, and by the textbook’s equally pragmatic discussion of it.  Her parents had dowered her and her sisters, as her grandparents had dowered their mother, not with any such materialistic motives in mind, but as an expression of respect for the prospective bridegroom and welcome to his kinfolk into their family circle.  She did not disagree with anything I had said; so far as she could see, all of my remarks about the economic function of the dowry were quite true.  But she did not believe that any Indian, or anyone else from a society where the dowry was a living custom, would ever have made them.  From her point of view, the propositions I had enunciated concerning the dowry were true, and I was justified in believing them.  However, she clearly thought that I did not know what I was talking about.

I would make one other point.  The vast and ever-growing literature that lays out plausible-sounding Gettier cases makes it clear that the contrived nature of Gettier’s two examples bothers people.  Yet, why do we have a category of “contrived” when it comes to counterexamples?  Surely it is because we think that it is possible to think up some scenario in which a given statement might be true, even when that statement is not something we really know to be true.  So a far-fetched example may establish the logical possibility of a point, but only an argument grounded in real life or in exhaustive reasoning is likely to convince us that the statement is worth taking seriously and incorporating into that set of beliefs and mental habits that we consider to be our stock of knowledge.  In other words, our very discomfort with Gettier’s examples proves the point that those examples are intended to establish.

Justified True Belief

There are a couple of passages where Plato seems to define knowledge as “justified true belief.”  So, if you have enough evidence that you have a right to accept a given proposition as true, if you do in fact exercise this right and accept that proposition as true, and if  it so happens that the proposition is true, then Plato might have said that your belief in that proposition is an example of knowledge.
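
Put schematically (the notation here is mine, not Plato’s), the definition says that a subject S knows a proposition p just in case three conditions hold together:

\[
K_S(p) \;\iff\; p \,\wedge\, B_S(p) \,\wedge\, J_S(p)
\]

where B_S(p) means that S believes p and J_S(p) that S is justified in so believing.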

This definition was occasionally challenged in an oblique sort of way in the first 24 centuries after Plato put it forward, but it was still uncontroversial enough that philosophers could use it matter-of-factly as late as the 1950s.  In 1963, Professor Edmund L. Gettier of Wayne State University wrote a very short, indeed tiny, article in which he gave two counterexamples to the definition of knowledge as justified true belief.  Here is example one:

Suppose that Smith and Jones have applied for a certain job. And suppose that Smith has strong evidence for the following conjunctive proposition:

  (d) Jones is the man who will get the job, and Jones has ten coins in his pocket.

Smith’s evidence for (d) might be that the president of the company assured him that Jones would in the end be selected, and that he, Smith, had counted the coins in Jones’s pocket ten minutes ago. Proposition (d) entails:

  (e) The man who will get the job has ten coins in his pocket.

Let us suppose that Smith sees the entailment from (d) to (e), and accepts (e) on the grounds of (d), for which he has strong evidence. In this case, Smith is clearly justified in believing that (e) is true.

But imagine, further, that unknown to Smith, he himself, not Jones, will get the job. And, also, unknown to Smith, he himself has ten coins in his pocket. Proposition (e) is then true, though proposition (d), from which Smith inferred (e), is false. In our example, then, all of the following are true: (i) (e) is true, (ii) Smith believes that (e) is true, and (iii) Smith is justified in believing that (e) is true. But it is equally clear that Smith does not know that (e) is true; for (e) is true in virtue of the number of coins in Smith’s pocket, while Smith does not know how many coins are in Smith’s pocket, and bases his belief in (e) on a count of the coins in Jones’s pocket, whom he falsely believes to be the man who will get the job.

Here Smith is justified in believing that “The man who will get the job has ten coins in his pocket,” and it is in fact true that the man who will get the job has ten coins in his pocket.  However, the same evidence which justifies that true belief also justifies Smith’s false belief that Jones will get the job.  In Smith’s mind, these two beliefs are so intertwined that the true proposition is unlikely to figure in any line of reasoning uncoupled from the false one.  Moreover, since Smith does not realize that he himself has ten coins in his pocket, nor presumably that there is any applicant for the job other than Jones who has ten coins in his pocket, there is no reason to suppose that he would regard such a proposition as anything other than a statement that Jones will get the job.  So, true though the proposition may be, and justified as Smith may be in accepting it as true, his belief in it can lead him to nothing but error.
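
In the same makeshift notation as above, the nub of the counterexample is that justification is transmitted across a known entailment:

\[
J_S(d), \qquad d \vdash e, \qquad B_S(e)\ \text{inferred from}\ d \;\Longrightarrow\; J_S(e),
\]

and since e happens to be true while d is false, Smith ends up with a justified true belief in e that no one would call knowledge.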

This counterexample is of course highly contrived, as is Professor Gettier’s second counterexample.  That doesn’t matter.  His only goal was to show that there can be justified true beliefs which we would not call knowledge, not that such beliefs are particularly commonplace.  Having given even one counterexample, Professor Gettier showed that justified true belief is not an adequate definition of knowledge.  Needless to say, Plato himself would probably have been thrilled with these counterexamples.  One can easily imagine him starting from them and proceeding to spin out a whole theory of justification, perhaps based on the idea that what we have a right to believe varies depending on the plane of existence to which our belief pertains, or that justification isn’t really justification unless the subject is approaching the topic in the true character of a philosopher, or some such Platonistic thing.

As it happens, Professor Gettier’s article was followed by a great many publications giving “Gettier-style” counterexamples, including many that are far more natural and straightforward than his original two.  Evidently all that needed to be done was to give some counterexamples, and the floodgates of creativity came open.  Professor Gettier himself did not write any of these articles, or indeed any articles at all after his 1963 paper.

Once you’ve read the 1963 paper, you may begin to notice naturally-occurring Gettier-style counterexamples.  The first novel I read after I was introduced to this topic about 20 years ago was Anthony Trollope’s The Eustace Diamonds.  Trollope is not often called a philosophical novelist.  However, a Gettier-style counterexample lies at the heart of this novel.  Lizzie Eustace is the childless widow of Sir Florian Eustace.  Among Sir Florian’s possessions had been a diamond necklace valued at £10,000.  Lady Eustace claimed that Sir Florian wanted her to have the necklace, and so insisted on treating it as her own; however, the Eustace family lawyer claimed that it was a family heirloom, entailed to Sir Florian’s blood relations, and so should revert to the family in the event of his death without issue.  While this dispute was moving towards the courts, a person or persons unknown broke into a safe where Lady Eustace was known to keep the necklace.  The burglary was discovered; the necklace was not there.  Lady Eustace did not tell the police what was in fact true, that she had taken the necklace from the safe before the burglary and still had it in her possession.  The leader of the police investigation is Inspector Gage, a wily and experienced detective who quickly arrives at the conclusion that Lady Eustace has stolen the necklace herself, likely in conjunction with her lover, Lord George de Bruce Carruthers.

In fact, Inspector Gage is mistaken not only about Lady Lizzie’s complicity in the burglary, but also about the nature of her relationship with Lord George and about Lord George’s character.  For all that they seem like lovers, and for all that Lady Eustace would like to become Lord George’s lover, they never quite come together.  And for all that Lord George’s sources of income are shrouded in mystery, he proves in the end to be thoroughly law-abiding.  However, the collection of evidence on which the inspector bases his theory is so impressive that if it did not justify him believing it, one can hardly imagine how anyone could be justified in believing anything.  So those three propositions could be classified as justified false beliefs.  At the nub of them all, however, is a justified true belief: that the necklace is in the possession of Lady Eustace.  Surrounded as it is by these false beliefs, false beliefs which would prevent the inspector from forming a true theory of the case, he cannot be said to know even this.

Cartoonist Zach Weiner devoted a recent installment of his Saturday Morning Breakfast Cereal to laying out some thoughts about Gettier-style counterexamples.

I want to make a few remarks about this strip.  First, it doesn’t seem right to say that Professor Gettier proposed a “philosophical problem.”  To the extent that there is a “Gettier problem,” it is a problem with Plato’s proposed definition of knowledge.  By finding a weakness in that definition, Professor Gettier may have reopened philosophical problems that some had hoped to use the definition to mark as solved, but his article does not in itself suggest any new problems.  To jump directly from Professor Gettier’s challenge to Plato’s definition to a statement that “humans find the order of events to be cute” is to introduce an unnecessarily grandiose generalization.

Second, it’s clever that the irate child denounces “the Gettier ‘problem'” with a claim that “Maybe all the ‘problems’ of philosophy are just emergent properties that disappear when you simplify.”   Professor Gettier’s 1963 paper includes just three footnotes.  One refers to the two passages where Plato floats the definition of knowledge as justified true belief (“Plato seems to be considering some such definition at Theaetetus 201, and perhaps accepting one at Meno 98.”)  The other two cite uses of the definition by Roderick Chisholm and Alfred Ayer, two very eminent philosophers working in the Anglo-American tradition of analytic philosophy (“Roderick M. Chisholm, Perceiving: A Philosophical Study (Ithaca, New York: Cornell University Press, 1957), p. 16,” and “A. J. Ayer, The Problem of Knowledge (London: Macmillan, 1956), p. 34.”)  Much of the analytic tradition stems from the suspicion that “all the ‘problems’ of philosophy are just emergent properties that disappear when you simplify,” and Ayer and Chisholm both had interesting things to say about this suspicion.

Third, by what criterion can brain cells be regarded as “small stuff” and consciousness as “big stuff”? I’d say the only person to whom that idea makes sense is one who has heard straightforward explanations of the basics of brain anatomy and woolly explanations of the metaphysics of consciousness.  Everyone who is likely to read this strip either is, or has at some time been, awake.  Consciousness is thus familiar to all of them, an everyday thing, the very smallest of the “small stuff.” Conversely, brain cells are knowable only to people who have access to a microscope or to findings arrived at by use of a microscope.  They are, therefore, a relatively recherché topic, and most definitely “big stuff” to any truly naive subject.  To connect the phenomena of consciousness with brain cells, or with brain anatomy, is not only an even more sophisticated topic, but is at present wildly speculative.

Fourth, it’s clever to have the irate child find that “the small stuff” is no easier to understand than “the big stuff.”  I think Plato would have liked the strip, not for its defense of his definition, but for its illustration of the difficulty of separating “the small stuff” from “the big stuff.”  After all, probability wobbles and the rest of quantum theory are, so far as we are concerned, highly abstract.  We may use various images to make physics intelligible, but the deeper we enter into the subject the more thoroughly mathematical it becomes.  As the final nose-flicking indicates, our experience of “facts” and “brain cells” and “stuff that happens” are also theory-laden, so that it is an empty boast to claim that one regards them as real and the ideas behind them as unreal.

The History of English in Ten Minutes

Here’s something funny that was produced for the Open University this summer.  Everyone else has been posting it this week, so I decided to join the herd.

A proposed definition of “feminism”

I teach at a state university deep in the interior of the USA.  The other day I was grading some papers students had written about ancient Greek culture.  One student focused on women’s clothing in ancient Sparta.  She included a paragraph starting with the famous phrase “I’m not a feminist, but…”  In her case, she’s not a feminist, but she believes that it is an unacceptable infringement of the equality of persons for the law to require women to cover their breasts in situations where men are allowed to go shirtless.  That puzzled me.  If a principled insistence that women must have a legal right to bare their breasts in public doesn’t make you a feminist, what do you have to do to earn that title? According to the eminent philosopher Lady Gaga, only someone who despises men can be a feminist.  That would disqualify most of the feminists I know, including many people who have spent decades on the radical fringe of the women’s movement, and several who have made a living as professional advocates of what they call “feminism.”

I haven’t brought this up in class, since I’m not quite sure where a discussion of the word “feminism” might lead.  Also because we’re behind schedule, and I want to catch up.  Eventually I will bring the question up, though.  To clarify my own thinking, I’ve been trying to craft a definition that will describe what I mean when I say “feminism.”  What I have on this so far breaks into two parts:

1. The belief that women have a right to play a wider variety of social roles than they play at present.  2. The habit of placing a higher value on this right than on the traditions that tend to restrict it.

I see seven advantages to this proposed definition.  First, the expression “wider variety of social roles” accommodates, on the one hand, liberal feminists who want to praise both women who choose to play traditional roles and those who move into what have been male-dominated areas, and on the other hand radical liberationists who want to stamp out the traditional roles on the grounds that they tend to crowd out the nontraditional ones.  By the same token, it leaves room both for feminists who claim that pornography and other forms of sex work can be a way of empowering women, and for those who argue that the sex industry and its products are just so many attacks on women.  In each debate, both sides agree that women should have a wider variety of options than they do now, but disagree about whether a particular sort of role opens more possibilities than it closes off.

Second, the vagueness of the term “social role,” which may seem like a weakness of the proposed definition, is in fact one of its strengths.  Consider the question: are right-wing female politicians feminists?  If they seek offices that have been strongly gendered as male, then to a certain extent they are feminists, no matter what they may say.  So, US Representative Michele Bachmann claims to view the proper role of a wife as submission to her husband.  Yet at this moment, Representative Bachmann is running for the presidency, an office which will not only bar her from taking direction from her husband, but which no woman has ever held, and which makes its holder, as commander-in-chief of the US military, a symbol of one of the most masculinized institutions in society.  Of course, it is possible to exaggerate the extent to which right-wing women are feminists in spite of themselves when they run for high office.  One thinks of Margaret Thatcher appointing a cabinet in which she was the only woman.  The ambiguity of “social role” captures the paradox.  Some might say that the relevant social role is “politician”; as this role has been open to women for some time, it was not an act of feminism for Representative Bachmann or Lady Thatcher to seek advancement within a political career.  Others will say that the role of “politician” is one thing, the role of “head of national government” quite another, so that any woman seeking to add that role to the repertoire of female possibilities is perforce a feminist, whatever she may call herself.

Third, “at present” makes it clear that the qualifications for the label shift over time.  To return to the example of Representative Bachmann, she is one of 72 women currently serving in the US House of Representatives.  While that leaves the House more than 80% male, it is a sign that service in Congress is not viewed as the sole prerogative of men.  So one could not say that the simple act of running for the House made Representative Bachmann a feminist.  The 41 women who served between 1917 and 1951, however, could be so labeled, especially the 23 who were elected to seats that had not previously been held by their husbands or fathers.  Among them were a number of women who were fiercely conservative in many ways, but even in the act of avowing their support for the old ways they were in fact increasing the opportunities women had to participate in politics.

The fourth advantage stems from the phrase “than they play at present.”  Notice that the idea is not that women should be free to play roles they are not now free to play, but that they should be free to play roles that they do not in fact play.  This avoids the dead end of feeling obligated to make a legalistic argument proving a history of sex discrimination every time we express joy that women are starting to enter a previously all-male domain.

The fifth advantage is the converse of this.  Saying that feminism involves the “belief that women have a right to play a wider variety of social roles than they play at present,” we do not imagine feminists as people who shame women into playing particular roles.  So, if all the sewage workers in town are men, one need not, in order to meet this definition of “feminist,” go around insisting to each woman one meets that it is her duty to take a job in that area.  I see that as an advantage in a definition of “feminism,” since I’ve never met a feminist who insisted on such a thing.

Sixth, the word “habit” at the beginning of the second clause of the definition opens the door to assertions like those I’ve been making about right-wing women: that one can be a feminist without knowing it or intending it.  Beliefs and the labels attached to those beliefs tend to be associated with each other so closely that it is hazardous to say that a particular label “really” applies to a person who rejects it.  So someone who resists the label “feminist” might well resent being told that s/he holds beliefs which merit the label.  However, we all have habits that we aren’t aware of.  So it might be fair to expect that if we present a reasonable person with evidence that s/he has a habit which we call “feminism,” that person will at least see why we want to say that s/he is a feminist.  Such a person would not necessarily be unreasonable if s/he continued to reject the label, but s/he might be less likely to be insulted by our presumption in applying it to him or her.

Seventh, saying that feminism involves the “habit of placing a higher value on this right than on the traditions that tend to restrict it” is another way of opening the label to people who differ in other ways.  Some people whom we would call feminist refuse to find value in any tradition that restricts the variety of social roles women are free to play.  Others place very high values on many such traditions, though not usually so high a value that they are comfortable with their restrictive aspects.  For example, there are many people who grew up as Roman Catholics and who wear the feminist label proudly.  Some of them, looking at such policies as that church’s refusal to ordain women to the priesthood, break away from it altogether.  Others continue to participate, not necessarily because they like those policies, but because they find other elements in the tradition that, in their view, make it worthwhile to stick around.  Emphasizing, as this clause of the definition does, that feminism places a higher value on the right of women to play a wider variety of roles than on the traditions that restrict that right allows people on both sides of this dispute to continue calling themselves “feminist.”

The proposed definition is more or less a top-of-the-head exercise, so I’m not committed to it.  If someone can suggest another definition that preserves all seven of these strengths, I’d be excited to hear it.

“An apple a day keeps the doctor away”

The other day, I was eating an apple for breakfast.  My wife mentioned that a friend of ours was planning to stop by our house later that morning.  This friend is a medical doctor by occupation; I joked that I’d better stop eating the apple, since I didn’t want to keep him away.  Recognizing the play on the proverb “An apple a day keeps the doctor away,” Mrs Acilius was kind enough to chuckle at my little witticism, as was our friend when I repeated the line to him.  Clearly, the proverb means something like “If you eat an apple each day, you will reduce the likelihood that you will require the professional attentions of a medical doctor.”  Since our friend’s visit was purely social, the humor of my remark arose from an ambiguity in the expression “keeps the doctor away.”  It wasn’t hugely funny, since this ambiguity is a purely formal one that has rarely confused anyone, but to the extent that it is funny at all, that’s what makes it so.

The next day, I was teaching a class.  I had a Twitter stream on the screen in front of the room, consisting of questions and answers that my students had tweeted to my work Twitter account (not to be confused with the Los Thunderlads Twitter account, or my own private Twitter account).  There are other systems that enable students to send short items to a page that can be projected on a screen, but Twitter has certain advantages: it is a public site, and the students always have access to it.  In the middle of class, a student decided, for some reason, to share with the class a joke that has been whipping its way around Twitter of late: “A blowjob a day keeps the pimphand away.”  The class laughed, and I took advantage of the opportunity to remind them of the reasons why they should keep a separate Twitter account just for their classes.  I also spent a moment or two making fun of the offender for his need to share, then moved on.

It’s a shame the class wasn’t in lexical semantics.  If it were, I could have used the sentence “A blowjob a day keeps the pimphand away” as an example of some interesting points.  It scans the same as “An apple a day keeps the doctor away”; “apple,” “blowjob,” “doctor,” and “pimphand” are all trochaic, and in each pair the second word has a more complex consonant structure than does the first.  So the two expressions sound very similar, but of course they differ dramatically in that one is among the most anodyne of expressions, while the other is doubly taboo, combining as it does an explicitly sexual term and an explicitly violent one.

“A blowjob a day keeps the pimphand away” also gets a laugh because it prompts us to think of similarities between the act of eating an apple and the act of performing oral sex on a man.  Each process takes a few minutes.  In each case, one performs a series of oral manipulations on an object that is, at the beginning of the process, bulbous in shape and about as long as it is wide, and in the course of those manipulations one changes the object into a roughly cylindrical shape.  Also, an uneaten apple is covered with a peel, which can be any of a variety of colors, but which shows a variation of color tone around its exterior.  Once the peel is gone, the apple eater chews on the fruit inside, ending up with a mouthful of shapeless, but uniformly white, material.  The similarity to fellatio is perhaps obvious.

The relationship between “keeps the doctor away” and “keeps the pimphand away” is, perhaps, more interesting.  The phrase “the doctor” in the proverb calls up the image of a person who is a doctor; keeping that person away is supposed to mean preventing the need for a house call.*  As my little joke of the other morning showed, the bare noun phrase “the doctor” does not by itself logically imply the need for a house call, but could, to a person unfamiliar with the proverb, allow for the meaning “If you eat an apple, doctors will avoid you.”  By contrast, the phrase “the pimphand” evokes a very specific scenario: a pimp demands that a prostitute hand over her earnings to him, and slaps her in the face for refusing to do so.  Look at this image, from Urban Dictionary’s top-rated entry for “pimphand”:

Compare it with this comic strip, which Josh Fruhlinger described as featuring a “distinguished-looking senator, who isn’t so distinguished that he can’t slap an angry lake-bully with his pimp hand when he gets his dander up”:

The first picture is accepted as an illustration of the term “pimphand,” even though the man in it has few of the characteristics one associates with pimpdom, because the position of his hand suggests the sort of slap that the senator is administering in the comic strip.  So in place of the merely nominal “the doctor,” with its vague evocation of a gentle custom that is obsolete in the USA, we find an expression that may parse the same, but that definitely signifies a particular scenario of brutal violence.

*Some USA residents may never have heard of “house calls.”  A house call is a visit in which a doctor comes to a patient’s home to provide medical care.  House calls have been unknown in the USA for decades, my entire lifetime in fact, though I understand there are still places where they are common.