WEIRD laughter

Recently, several websites I follow have posted remarks about theories that are meant to explain why some things strike people as funny.

Zach Weinersmith, creator of Saturday Morning Breakfast Cereal, wrote an essay called “An artificial One-Liner Generator” in which he advanced a tentative theory of humor as problem-solving.

Slate is running a series of articles on theoretical questions regarding things that make people laugh.  The first piece, called “What Makes Something Funny,” gives a lot of credit to a researcher named Peter McGraw, who is among the pioneers of “Benign Violation Theory.”  This is perhaps unsurprising, since Professor McGraw and his collaborator Joel Warner are credited as the authors of the piece.  Professor McGraw and Mr Warner summarize earlier theories of humor thus:

Plato and Aristotle introduced the superiority theory, the idea that people laugh at the misfortune of others. Their premise seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes. Sigmund Freud argued for his relief theory, the concept that humor is a way for people to release psychological tension, overcome their inhibitions, and reveal their suppressed fears and desires. His theory works well for dirty jokes, less well for (most) puns.

The majority of humor experts today subscribe to some variation of the incongruity theory, the idea that humor arises when there’s an inconsistency between what people expect to happen and what actually happens.

Professor McGraw and Mr Warner claim that incongruity theory does not stand up well to empirical testing:

Incongruity has a lot going for it—jokes with punch lines, for example, fit well. But scientists have found that in comedy, unexpectedness is overrated. In 1974, two University of Tennessee professors had undergraduates listen to a variety of Bill Cosby and Phyllis Diller routines. Before each punch line, the researchers stopped the tape and asked the students to predict what was coming next, as a measure of the jokes’ predictability. Then another group of students was asked to rate the funniness of each of the comedians’ jokes. The predictable punch lines turned out to be rated considerably funnier than those that were unexpected—the opposite of what you’d expect to happen according to incongruity theory.

To which one might reply that when Mr Cosby and Ms Diller actually performed their routines, they didn’t stop after the setup and ask the audience to predict the punchline.  Nor would any audience member who wanted to enjoy the show be likely to try to predict the punchline.  Doing so would make for an entirely different experience than the one the audience had paid for.

Be that as it may, Professor McGraw and Mr Warner go on to claim that their theory of “benign violation” is supported by empirical evidence:

Working with his collaborator Caleb Warren and building from a 1998 HUMOR article published by a linguist named Thomas Veatch, he hit upon the benign violation theory, the idea that humor arises when something seems wrong or threatening, but is simultaneously OK or safe.

After extolling some of the theory’s strengths, the authors go on:

Naturally, almost as soon as McGraw unveiled the benign violation theory, people began to challenge it, trying to come up with some zinger, gag, or “yo momma” joke that doesn’t fit the theory. But McGraw believes humor theorists have engaged in such thought experiments and rhetorical debates for too long. Instead, he’s turned to science, running his theory through the rigors of lab experimentation.

The results have been encouraging. In one [Humor Research Laboratory] experiment, a researcher approached subjects on campus and asked them to read a scenario based on a rumor about legendarily depraved Rolling Stones guitarist Keith Richards. In the story—which might or might not be true—Keith’s father tells his son to do whatever he wishes with his cremated remains—so when his father dies, Keith decides to snort them. Meanwhile the researcher (who didn’t know what the participants were reading) gauged their facial expressions as they perused the story. The subjects were then asked about their reactions to the stories. Did they find the story wrong, not wrong at all, a bit of both, or neither? As it turned out, those who found the tale simultaneously “wrong” (a violation) and “not wrong” (benign) were three times more likely to smile or laugh than those who deemed the story either completely OK or utterly unacceptable.

In a related experiment, participants read a story about a church that was giving away a Hummer H2 to a lucky member of its congregation, and were then asked if they found it funny. Participants who were regular churchgoers found the idea of mixing the sanctity of Christianity with a four-wheeled symbol of secular excess significantly less humorous than people who rarely go to church. Those less committed to Christianity, in other words, were more likely to find a holy Hummer benign and therefore funnier.

Lately, social scientists in general have been more mindful than usual of the ways in which North American undergraduates are something other than a perfectly representative sample of the human race.  Joseph Henrich, Steven Heine, and Ara Norenzayan have gone so far as to ask in the title of a widely cited paper whether the populations most readily available for study by psychologists and other social scientists are in fact “The weirdest people in the world?”  In that paper, Professors Henrich, Heine, and Norenzayan use the acronym “WEIRD,” meaning Western, Educated, Industrialized, Rich, and Democratic.  Their abstract:

Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggests that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

It would be particularly easy to see why a theory like Benign Violation would have a special appeal to undergraduates.  Undergraduate students are rewarded for learning to follow sets of rules, both the rules of the academic disciplines their teachers expect them to internalize and the rules of social behavior appropriate to people who, like most undergraduates, are living independent adult lives for the first time.  So I suppose that if one wanted to defend Superiority Theory (as mentioned, for example, by Aristotle in his Poetics, 1449a, pp. 34-35), one could use the same results, saying that students simultaneously saw themselves as superior both to the characters in the jokes who did not follow the usual rules and to those who would enforce those rules in too narrowly literalistic a fashion to fit the overall approach of higher education, where innovation and flexibility are highly valued.  Here the WEIRD phenomenon comes into play as well, since cultures vary in their ideas of what rules are and of how rules relate to qualities like innovation and flexibility.  Moreover, one could also say that the judgment that a particular violation is or is not benign itself implies superiority over those involved in the violation, and that this implication of superiority is what generates the laughter.

Also, because undergraduates are continually under pressure to internalize one set of rules after another, they often show anxiety related to sets of rules.  This may not be the sort of thing Sigmund Freud had in mind when he talked about Oedipal anxiety, but it certainly does drive undergraduates to seek relief.  Examples of actions that are at once quite all right and by no means in accordance with the rules may well provide that relief.

Incongruity theorists may find comfort in Professor McGraw’s results, as well.  The very name “Benign Violation” is an incongruous combination by any definition, as are experimental rubrics such as “wrong” and “not wrong.”  So a defender of Incongruity Theory may claim Benign Violation as a subcategory of Incongruity Theory, and cite these results in support of that classification.

Professor McGraw is evidently aware of these limitations.  He and Mr Warner explain what they did to rise above them:

[T]hree years ago, he set off on an international exploration of the wide world of humor—with me, a Denver-based journalist, along for the ride to chronicle exactly what transpired. Our journey took us from Japan to the West Bank to the heart of the Amazon, in search of various zingers, wisecracks and punch lines that would help explain humor once and for all. The result is The Humor Code: A Global Search for What Makes Things Funny, to be published next week—on April Fool’s Day, naturally. As is often the case with good experiments—not to mention many of the funniest gags—not everything went exactly as planned, but we learned a lot about what makes the world laugh.

It isn’t April First yet, so I don’t know how well they have done in their efforts to expand their scope.

One sentence that struck me as wrong in Professor McGraw and Mr Warner’s piece was this one, about Superiority Theory, that it “seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes.”  I’m not at all sure about that one.  In a knock-knock joke, there are two hypothetical characters who take turns delivering five lines of dialogue.  The first character to speak is the Knocker (whose first line is always “Knock-knock!”).  The second character to speak is the Interlocutor (whose first line is always “Who’s there?”).  The Knocker’s second line is an unsatisfactory answer to this question.  The Interlocutor’s second line begins by repeating this incomplete answer, then adds the question word “who?”  The Knocker’s third line then delivers the punchline in the form of a repetition of the unsatisfactory answer followed by one or more additional syllables that change the apparent meaning of the initial unsatisfactory answer.

Knock-knock jokes became popular in the USA in the 1950s, as part of a national craze.  The first joke recorded in this mid-twentieth century craze, I have read, is the following:

K: Knock-knock!

I: Who’s there?

K: Sam and Janet.

I: Sam and Janet who?

K: Sam and Janet evening! (sung to the tune of “Some Enchanted Evening”)

Apparently all of the jokes that brought the form into such prominence in the 1950s that they are still beloved today by seven-year-olds of all ages took this form, in which the punchline involved the Knocker bursting into song with a popular Broadway tune of the period.
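
Described that way, the form is regular enough to be produced mechanically, which may be why Mr Weinersmith’s artificial one-liner generator came to mind.  The following is a minimal sketch in Python, entirely my own and not anything proposed in the pieces discussed above, that spells out the five-line exchange, taking the unsatisfactory answer and the completing syllables as its only inputs:

def knock_knock(answer, completion):
    # Render the five-line Knocker (K) / Interlocutor (I) exchange described above.
    # `answer` is the Knocker's unsatisfactory reply to "Who's there?";
    # `completion` supplies the added syllables that change its apparent meaning.
    return [
        "K: Knock-knock!",
        "I: Who's there?",
        f"K: {answer}.",
        f"I: {answer} who?",
        f"K: {answer} {completion}",  # punchline: the answer plus the new syllables
    ]

for line in knock_knock("Sam and Janet", "evening!"):
    print(line)

Run on the example above, it reproduces the Sam-and-Janet joke line for line; the meta-jokes discussed below are precisely the ones a function this rigid cannot produce.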

I think the jokes from this original craze almost have to be examples of superiority.  The Knocker is confident that the Interlocutor will be surprised when the punchline is presented under the usual conditions of the joke.  This is not to deny that if the joke were interrupted and the Interlocutor were asked to predict the punchline, after the manner of Professor McGraw’s students, the Interlocutor might be able to do so.  When the punchline is presented, the Interlocutor will join the Knocker in the satisfaction of being part of the relatively elite portion of the population who recognize current Broadway hits when they hear them.

As knock-knock jokes have become more familiar over the decades, meta-knock-knock jokes have gained a following.  For example, a person named Alice might play the Knocker in this joke:

K: Knock knock!

I: Who’s there?

K: Alice.

I: Alice who?

K: Alice (in a tone suggesting that she is wounded that the Interlocutor doesn’t recognize her)

The meta-knock-knock joke suggests superiority to the genre of knock-knock jokes.  If first-order knock-knock jokes are popular among seven-year-olds of all ages, meta-knock-knock jokes are popular among eight-year-olds of all ages, suggesting superiority to those who still persist in telling first-order knock-knock jokes.

The world’s most hated knock-knock joke is this meta-knock-knock:

K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Orange.
I: Orange who?
K: ORANGE YOU GLAD I DIDN’T SAY BANANA!

This joke attacks several parts of the shared understanding between Knocker and Interlocutor.  The joke is more than five lines long, the fifth line does not take the form of the original unsatisfactory response plus one or more additional syllables, the Knocker expects the Interlocutor to repeat his or her two lines multiple times, and the punchline does not include a repetition of the original unsatisfactory response.  For the experienced Interlocutor, these attacks are an undue imposition on the Knocker-Interlocutor relationship.  For anyone else, the whole thing would be utterly pointless.

Hated as the joke is, Knockers of a particular sort, mostly eight-year-old boys, seem unable to resist it.  Willing Interlocutors can rely on these boys to laugh uproariously every time the boys drag them through the ritual responses.  Here too, Superiority Theory seems to be the only explanation, both for the boys’ laughter and for the strain that tolerating the joke puts on the Interlocutors.  The Knockers who enjoy the joke laugh at their own power to inflict it on their Interlocutors.

Each time a potential Interlocutor is confronted with “Orange you glad I didn’t say banana,” the joke gets a bit more annoying.  Perhaps this is because of an aspect of politeness recently referenced on yet another of my favorite sites, Language Log.  There it was mentioned that Penelope Brown and Stephen Levinson, founders of “Politeness Theory,” have provided conceptual tools for distinguishing two kinds of situations: those in which a statement offering information the hearer should already have implies that the hearer does not know it, and so gives offense, and those in which the statement carries no such implication and gives no offense.  A joke with a painfully obvious punchline may fall in the first category, as do the reiterated responses in “Orange you glad I didn’t say banana.”  Casual remarks about the weather and other forms of small talk usually fall in the second category, as do formalized utterances generally.

Pythagoras Today

Slate recently reran a New Scientist piece about the similarities between the mathematical patterns musicologists use and the mathematical patterns researchers use to explore other fields.  Pythagoras did something similar two and a half millennia ago, and built a whole religion around it.  The Pythagorean cult was apparently still up and running in 1959; that’s when no less a celebrity than Donald Duck was initiated into Pythagoreanism, in Disney’s Donald in Mathmagic Land:


The world’s fastest manhole cover?

Tuesday, xkcd’s What-If mentioned the story of a manhole cover that may have gone into space before Sputnik:

A brief story:

The official record for fastest manmade object is the Helios 2 probe, which reached about 70 km/s in a close swing around the Sun. But it’s possible the actual holder of that title is a two-ton metal manhole cover.

The cover sat atop a shaft at an underground nuclear test site operated by Los Alamos as part of Operation Plumbbob. When the one-kiloton nuke went off below, the facility effectively became a nuclear potato cannon, giving the cap a gigantic kick. A high-speed camera trained on the lid caught only one frame of it moving upward before it vanished—which means it was moving at a minimum of 66 km/s. The cap was never found.

66 km/s is about six times escape velocity, but contrary to the linked blog’s speculation, it’s unlikely the cap ever reached space. Newton’s impact depth approximation suggests that it was either destroyed completely by impact with the air or slowed and fell back to Earth.

This remark includes a link to a post about the test on “Notes from the Technology Underground.”  A comment on that post takes some of the fun out of it:

It probably never left the atmosphere. As Newton found, a projectile penetration into a medium is proportional to their relative densities, times projectile bodylength, quite irrespective of projectile velocity.

Here we have steel projectile (8 g/cm3), thrust into air (0.001 g/cm3), meaning that the projectile will only travel 8000 times its bodylength into the atmosphere.

If it was 4 foot across, weighted 2 tons (I think I saw that figure somewhere else), and was roughly circular, this works out to thickness of 22 cm. So face up, it coul travel 1 760 meters high. If it somehow turned to its side and stayed in that position, it could travel 4 feet (120 cm) * 8 000 = 9 600 m high. Even this best case scenario is short from leaving the atmosphere. Thinning of air as you get higher is obviously not considered, but I don’t expect it would change the results much.
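
For what it is worth, the commenter’s arithmetic is easy to reproduce.  The sketch below is mine, in Python; the densities and dimensions are the commenter’s assumptions rather than measured values, and, as in the comment, the thinning of the air with altitude is ignored:

# Newton's impact depth approximation: a fast projectile penetrates a medium
# to a depth of roughly (projectile density / medium density) times its own
# length, more or less independently of its velocity.
rho_steel = 8.0      # g/cm^3, the commenter's figure for the steel cap
rho_air = 0.001      # g/cm^3, air at sea level
body_lengths = rho_steel / rho_air   # about 8,000 body-lengths of penetration

thickness = 0.22     # m, inferred above from a two-ton, four-foot circular plate
diameter = 1.22      # m, four feet

print(body_lengths * thickness)   # ~1,760 m if the cap travels face-up
print(body_lengths * diameter)    # ~9,760 m if it somehow travels edge-on

Either way the estimate runs out a few kilometers up, which is the commenter’s point.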

It gets even less exciting when we look at these old remarks on “Above Top Secret,” based on discussions with Dr Robert Brownlee, the principal investigator behind the test in question (known officially as Pascal-B):

For an authentic account of this incident by Dr. Robert Brownlee himself, this web site is pleased to host: Learning to Contain Underground Nuclear Explosions.

As Dr. Brownlee explains, the figure of “a velocity six times that needed to escape Earth’s gravity” refers to the results of a simulation that may not have been a good model of the actual test conditions (the actual yield, for example, was unknown even if all other parameters were correct). No measurement of the actual plate velocity was made.

If the description of the plate is accurate – 4 feet wide, 4 inches thick and made of steel – then it would weigh about 900 kg (a lower weight is possible if the dimensions are inaccurate or if it was not of uniform thickness). A velocity of 6 times Earth’s escape velocity (67 km/sec, since escape velocity is 11.2 km/sec) would give the plate a kinetic energy 60% larger than the total energy released by the explosion. This is clearly impossible.

Brownlee explained to this author, by email, that the concrete plug placed in close proximity to the bomb was vaporized by the explosion. Thus the propulsion of the plate could be considered to be due to the energy imparted by this expanding vaporized material, rather like the propellant of a gun. From the descriptions available of the plug a mass of at least 3000 kg can be estimated, and if half the bomb’s energy were deposited in it then it would have an energy density of 50 times that of normal gun propellant. From the physics of high velocity guns, it can be estimated that velocities produced by the gas expanding up the long shaft could propel an object to velocities exceeding Earth’s escape velocity, perhaps as much as twice escape velocity.
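
The energy argument in that last paragraph can also be checked on the back of an envelope.  The sketch below is mine, not Dr Brownlee’s or the quoted author’s; it simply plugs in the plate mass and velocity given above and reports what they imply:

# Kinetic energy of a ~900 kg plate at six times Earth escape velocity,
# compared with the energy of a kiloton of TNT.
mass = 900.0                           # kg, the estimate quoted above
escape_velocity = 11.2e3               # m/s
plate_velocity = 6 * escape_velocity   # about 67 km/s, the simulated figure

kinetic_energy = 0.5 * mass * plate_velocity ** 2   # joules
kiloton = 4.184e12                                  # joules in one kiloton of TNT

print(kinetic_energy / kiloton)        # ~0.49 kt carried by the plate alone
print(kinetic_energy / 1.6 / kiloton)  # ~0.3 kt: the yield implied by "60% larger"

On those figures the plate by itself would carry roughly half a kiloton of kinetic energy, which is why the simulated velocity is so hard to credit.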

If by some chance the metal that had made up the manhole cover did escape from the atmosphere (and after all, the atmosphere is thin enough that after less than two seconds going straight up at the hypothetical speed of 66 km/second, the blob of molten iron that once made up the manhole cover would be in a near vacuum), interesting things might have happened.  If it had fallen straight down, for instance, it would have bounced off the atmosphere back into space.  Perhaps it might have repeated that process several times, growing ever hotter.  With each bounce some iron vapor would have been flung down into the atmosphere, some flung outward into space.  Perhaps the bit of the blob that finally crashed into the ocean would have been quite small.

Probably nothing of the sort happened; probably the whole blob dissipated before leaving the atmosphere.  But one does wonder where the metal ended up…

“There is nothing intelligent to say about a massacre.”

Way back in the 4 June issue of The Nation, William Deresiewicz published a review essay about Kurt Vonnegut.  As I read Mr Deresiewicz’ piece, it dawned on me that I had never read Slaughterhouse-Five.  I’d read several of Vonnegut’s novels and miscellaneous writings, but had missed the most famous one.  Embarrassingly enough, I had talked about Slaughterhouse-Five with a number of people over the years, conversations in the course of which I sincerely, if somewhat vaguely, believed that I had read the book at some point.   Once, while still in high school, I even suggested to a friend that we co-author a tribute to Slaughterhouse-Five in comic book form.  If he’d taken me up on that, I suppose it would have become clear to both of us quickly enough that I hadn’t read it, but we settled on a tribute to Froissart’s Chronicles  instead.

So, not long after I read that issue, I reported to the library and checked out a copy of Slaughterhouse-Five.  It was well worth reading.  Mr Deresiewicz says that the novel’s real subject is not the firebombing of Dresden, but the Post-Traumatic Stress Disorder that the firebombing bequeathed to Vonnegut and other survivors.  Mr Deresiewicz quotes a remark from the beginning of the novel, “there is nothing intelligent to say about a massacre.”  The novel is great, he argues, because Vonnegut doesn’t try to offer answers or find meanings.  He looks directly at an unintelligible world, a world in which human beings by the thousand can be incinerated in their homes, and does not flinch by looking away to something else, something reassuring in its logic.  Instead, the novel’s Billy Pilgrim, like Vonnegut in his own authorial voice, says simply, “I was there.”  Mr Deresiewicz writes:

“I was there,” he says. And he adds, “So was my old war buddy, Bernard V. O’Hare.” The moment prefigures the novel’s moral climax a few pages before the end. Billy’s in a hospital in 1968, after the plane crash. His roommate is a former Air Force general who is working on a history of the Army Air Corps in World War II. He is wealthy, healthy, masterful, accomplished (his name is Rumfoord, by the way), and he dismisses Billy, in his quasi-comatose state, as so much human refuse. He is telling someone that the raid on Dresden had been kept a secret for so long

“For fear that a lot of bleeding hearts…might not think it was such a wonderful thing to do.”
It was now that Billy Pilgrim spoke up intelligently. “I was there,” he said.

“I was there.” Meaning not, I suffered, but simply: It happened. It doesn’t fit the story that we tell ourselves about the war, but it happened. And I alone escaped to tell the tale. But not completely alone: my old war buddy was there as well, which means you can’t dismiss me as a lunatic. I was there. Or as the novel’s famous invocation, thrice repeated, puts it: Listen.

“I was there”—not, “The death of Dresden was a bitter tragedy, needlessly and willfully executed.” The sentence comes from a short, unpublished manuscript, included in the Library of America edition, that Vonnegut had worked on in the years immediately following the war. Before he could write the novel, I believe, he needed to surrender that sense of judgment. “It had to be done,” Rumfoord finally says to Billy. “I know,” Billy replies, “everybody has to do exactly what he does.”

Elsewhere in the novel, Vonnegut explicitly disavows judgment of the pilots who carried out the raid.  He never did blame them, he says; he has known bombers and admired them.  He describes the bombs as if they acted on their own, unassisted by human agency.  In the novel, that description figures not as a psychological evasion,  but as the facing of a supreme horror.  A world dominated by malevolence and permeated by guilt would have a structure, and so would be intelligible.  As such, even a realm of villainy would be easier to bear than the realm of sheer absurdity into which the massacre introduced its survivors.

In a bit of the novel that Mr Deresiewicz does not quote, Billy Pilgrim and his fellow prisoners are herded into Dresden.  The crowd gives Billy dirty looks; one man confronts him and demands to know if he “thought we would laugh.”  Billy is confused, then realizes that the miscellaneous items of clothing he has scavenged to cover his nakedness in his weeks as a prisoner add up to a clown’s costume.  Here, Billy parallels his creator.  Cobbling together a way to tell his story, Vonnegut has gathered up bits of wartime memoir, of science fiction, of midlife-crisis narrative, of soft-core porn, of half a dozen other genres, and pasted them together.  The result is a very odd book, at first glance an aggregation as clownish as Billy’s costume.  It is precisely because Vonnegut is entirely willing to play the fool, to make himself as much a stranger to smart rhetoric as the war has made Billy a stranger to smart attire, that Slaughterhouse-Five is a possession for the ages.

As the Periodicals Notes section of this website attests, I read a lot of magazines.  After the attacks of 11 September 2001, I dropped several titles from my list of regular reads.  These included The New Statesman, The National Review, The London Review of Books, and The American Spectator.  Each of these magazines carried a number of pieces about that series of massacres.  There were many things to find objectionable about those pieces; certainly the right-wing publications did not cover themselves in glory by arguing that the appropriate response was to adopt policies that would punish all Muslims everywhere, and the others did their reputations no favors when they published remarks such as “the United States had it coming.”  What I found most rebarbative about all of them was something I couldn’t put into words at the time, but Vonnegut crystallizes it perfectly.  Each of those commentators, left and right, treated the massacres and their aftermath as a continuation of their lifelong quest to display their own brainpower to the utmost possible advantage.  Because there is nothing intelligent to say about a massacre, the result of this contest to be the smartest one was an exhibition of moral idiocy on a spectacular scale.

If we don’t endeavor to make intelligent remarks about a massacre, how do we honor the dead it leaves behind?  This is typically a religious question, so let’s see what we can say about Vonnegut and religion.

As Mr Deresiewicz documents, Vonnegut was raised to be skeptical of conventional religion*:

Vonnegut saw our spiritual anxiety, in the postwar chaos, and as a former public relations man, he knew our mass gullibility. He had also studied anthropology, an experience, he later said, that “confirmed my atheism, which was the faith of my fathers anyway. Religions were exhibited and studied as the Rube Goldberg inventions I’d always thought they were.” Now machines were taking control, so we needed to pretend that something else was in control. Or as he puts it in The Sirens of Titan, “Gimcrack religions were big business.” The Age of Aquarius surely came as no surprise to him—the age of crystals and gurus and mystical hucksters. Charles Manson and Jim Jones surely came as no surprise, and neither did L. Ron Hubbard, a man who started writing science fiction but decided he was writing Scripture.

If we reject the belief systems and hierarchies of traditional religions and the rites that go with them, how do we go about honoring the dead?  I think I detect a kindred spirit in the Vonnegut/ Deresiewicz emphasis on “I was there, and so was Bernard V. O’Hare.”  We honor the dead by remembering them.  To do this we must turn our attention from ourselves and focus it on them, on them as they were individually and as they interacted with each other in groups.  To sustain this focus we must resist the temptation to retreat into distractions, whether those distractions take the form of ideologies that make our losses bearable or of activities in which we ourselves become again the center of attention.  We must give the dead our undivided attention, if only for a moment, if we are to honor them.

Religions can certainly be a fruitful source of excuses for keeping the focus off the dead.  Many funerary rites focus attention on clergy or other performers; many include invitations to dwell on recondite theological doctrines about the relationship between life and death.  So I sympathize with opponents of religion like T. H. Huxley who say that respect for the dead requires us to renounce the conventional forms of religion.  On the other hand, for many mourners these things quiet their minds and take them outside of themselves, enabling them to maintain a clear, unwavering focus on the dead.  And there’s nothing to say that persons who find the ritual elements a distraction can’t learn to respond to them in the desired way.  After all, the others learned it; no religious practice comes instinctively to anyone, even if there is an instinct for something called religion in general.  So even proceeding from my idea that mourning should be a matter of focusing our attention on the dead, we don’t find an argument against funerary rites.

Of course, funerary rites do something else as well.  They reassure the mourners that the remembrance of the dead is not a burden they will carry alone, but a bond they share with their community.  Funerary rites aren’t the only social practices that give that assurance; one of the reasons we want medical professionals to make heroic efforts to save our loved ones is that we want to know that those professionals will remember them, at least as an interesting case.  When someone is to blame for the death of a loved one, we want the same attention from the criminal justice system, in part for the same reason.  That’s probably why murder mysteries are so popular.  Some time ago, I saw an episode of Columbo on some cable TV channel that specializes in nostalgia.  Lieutenant Columbo had caught the murderer hiding the victim’s body.  In his bizarrely friendly way,  Lieutenant Columbo was trying to keep the murderer from feeling too bad about himself, telling him, “Dead bodies have a way of turning up.”  In reality, of course, they don’t.  The only thing dead bodies actually have a way of doing is decomposing.   Given enough time, it will be as if the dead had never lived.  That may well be the world’s most unbearable fact.  Many years ago, my wife lost her closest friend to an act of violence that was never investigated; with each passing year, fewer people remember her, and her family’s burden grows more obvious.

Medicine and the criminal justice system, whatever their virtues, are never entirely satisfactory substitutes for funerary rites.  A course of medical treatment is an exercise in technology and finance that revolves around the person of a patient, but is never simply a tribute to that patient; a criminal proceeding is an exercise in institutionalized conflict in the course of which a person who is unavailable to participate actively is likely to vanish from view altogether.

Many people recommend political action as a way to honor the dead.  I’m all for democracy, and I understand the power of martyrs to arouse a citizenry to action.  So I’m not opposed to the idea of waging a campaign for reform in the name of some dead person.  But consider.  Every political dispute is complex; every political issue shades into other, related issues, and every person who takes part in a political disagreement is pursuing several objectives at once.  To turn a person into a political symbol, therefore, is likely to make it virtually impossible to focus our undivided attention on that person.  Again, not everyone sees that focus as the essence of honoring the dead; some may define honoring the dead in a way that begins and ends with the political utility of martyrdom, or in other ways that put a low priority on memory of them as they were.  But for me, and perhaps for Vonnegut, the key thing is to meet the dead on their own terms, not to impose our preconceived notions on them or to lose sight of them in the midst of some other activity.

If we say that our ways of honoring the dead are part of our religion, whether we belong to any recognized religious tradition or not, then Vonnegut and I may share a religion.  Moreover, at least in my version of that unnamed religion, politics is not part of the funerary rites by which we honor the dead.  The rites of the various religious traditions that do have names and belief systems and hierarchies aren’t really part of it either, though they can serve the same purpose.  What is a part of it?  How do we go about focusing our attention simply on a person, not on desires and ideas of our own that we may associate with that person?

In a post a few years ago, I quoted a man who had said that his way of praying for a person was to hold an image in his mind of that person against a plain white background.  This meditative exercise does not involve any words; that way he isn’t tempted to wish things on the person, or to try to recruit God as an ally in an effort to make the person do what he thinks is right.  Instead, it enables him to see the person clearly, to listen to what the person is actually saying, to accept the person as s/he is, and to respect his or her journey in life.  I’ve tried this exercise myself on many occasions, and can recommend it highly.

So that exercise is part of my religion, if you call it that.  Science is part of it, too.  Richard Feynman said in his 1974 commencement address at Caltech that in science, “The first principle is that you must not fool yourself- and you are the easiest person to fool.”  My favorite living philosopher, Alasdair MacIntyre, argues that healthy religious traditions represent lines of inquiry that guide their followers away from particular forms of self-deception.  I don’t really understand how that is supposed to work; MacIntyre’s own religious tradition, as embodied in the Roman Catholic Church, seems to me to be an ever-flowing fountain from which self-deception springs in forms unimagined anywhere else.  Be that as it may, science offers its practitioners tools unmatched in any other avenue of human pursuit for disabusing oneself of one’s pet ideas.  Thomas à Kempis said that the highest reward of the contemplative life was that it had enabled him to free himself of a multitude of opinions; to the extent that Thomas’s words apply to religious practice in general, scientific inquiry is the most efficient of all forms of worship.

*To be precise about it, the Vonneguts were members of All Souls Unitarian Church in Indianapolis, Indiana when the novelist was growing up.  At that time, the congregation met in a building designed by architect Kurt Vonnegut, Senior.  In his maturity, Kurt Vonnegut, Junior did not identify even with the creedless religion of the Unitarians, or the Unitarian-Universalists as they became in 1961.

Inner Check, Inner Dash

Irving Babbitt (1865-1933) and Paul Elmer More (1864-1937) were American literary scholars, famous in their day for arguing that Socrates, the Buddha, Samuel Johnson, and a wide array of other sages throughout the history of the world had conceived of the freedom of the will as the ability to defy one’s impulses.  Babbitt and More gave this conception a variety of names; perhaps the most familiar of these names is “the inner check.”

The other day, I picked up a copy of the August 2012 issue of Scientific American magazine while I was waiting for the pharmacist to fill a prescription.  Lo and behold, a column by Michael Shermer described a neurological study conducted in 2007 by Marcel Brass and Patrick Haggard.  Doctors Brass and Haggard found support for a hypothesis that will sound familiar to students of Babbitt and More.  As Mr Shermer puts it:

[I]f we define free will as the power to do otherwise, the choice to veto one impulse over another is free won’t. Free won’t is veto power over innumerable neural impulses tempting us to act in one way, such that our decision to act in another way is a real choice. I could have had the steak—and I have—but by engaging in certain self-control techniques that remind me of other competing impulses, I vetoed one set of selections for another.

Support for this hypothesis may be found in a 2007 study in the Journal of Neuroscience by neuroscientists Marcel Brass and Patrick Haggard, who employed a task… in which subjects could veto their initial decision to press a button at the last moment. The scientists discovered a specific brain area called the left dorsal frontomedian cortex that becomes activated during such intentional inhibitions of an action: “Our results suggest that the human brain network for intentional action includes a control structure for self-initiated inhibition or withholding of intended actions.” That’s free won’t.

If this is true, then Babbitt and More’s works take on a new interest.  If such a control structure exists in the human brain network, it wouldn’t necessarily be the case that humans would be consciously aware of it.  There are any number of facts about the operation of our brains that no one ever seems to have guessed until quite recent scientific findings pointed to them.  So, if Babbitt and More were right and a great many distinguished intellectuals operating in many times and cultures conceived of moral agency as a matter of “self-initiated inhibition or withholding of intended actions,” it would be reasonable to ask whether this conception is evidence that the process Doctors Brass and Haggard detected in the left dorsal frontomedian cortex is perceptible to the person who owns the brain in which it occurs.

The same issue included a couple of other interesting notes on psychological and neurological topics.  A bit by Ferris Jabr discusses Professors George Mandler and Lia Kvavilashvili, who have been studying a phenomenon they call “mind-pops.”  A mind-pop is a fragment of memory that suddenly appears in one’s conscious mind for no apparent reason.  Most mind-pops are very slight experiences; the example in the column is a person washing dishes who suddenly thinks of the word “orangutan.”  That’s the sort of thing a person might forget seconds after it occurred.  Trivial as an individual mind-pop might be, as a class of experiences they may point to significant aspects of mental functioning.  Professors Kvavilashvili and Mandler:

propose that mind pops are often explained by a kind of long-term priming. Priming describes one way that memory behaves: every new piece of information changes how the mind later responds to related information. “Most of the information we encounter on a daily basis activates certain representations in the mind,” Kvavilashvili explains. “If you go past a fish-and-chips shop, not only the concept of fish may get activated but lots of things related to fish, and they may stay activated for a certain amount of time—for hours or even days. Later on, other things in the environment may trigger these already active concepts, which have the feeling of coming out of nowhere.” This phenomenon can boost creativity because, she says, “if many different concepts remain activated in your mind, you can make connections more efficiently than if activation disappears right away.”

The same researchers also suspect that mind-pops have a connection to a variety of mental illnesses and emotional disorders, so it isn’t all so cheerful as that paragraph may suggest.

Morten Kringelbach and Kent Berridge, in a feature article titled “New Pleasure Circuit Found in the Brain,” describe a study conducted in the 1950s that involved electrical stimulation of certain areas of the brain.  Subjects expressed a strong desire that the stimulation should continue.  From that desire, researchers concluded that the areas in question were producing pleasure.  However, more recent work suggests that these are in fact areas that produce, not pleasure, but desire.  Indeed, none of the patients in the original study actually said that they enjoyed the stimulation; they simply said that they wanted more of it.  Researchers were jumping to an unwarranted conclusion when they interpreted that desire as a sign of pleasure.  The actual process by which the brain produces pleasure is rather more complicated than those researchers, and the “pleasure-center” model of the brain that grew out of their work, might lead one to assume.

The smallest and the largest

In 1968, designers Charles and Ray Eames released a short film called “Powers of Ten.”  Here it is:

Here’s a tribute to the film that appeared as xkcd #271:

There’s something I occasionally wonder about.  People sometimes say that hearing about the scale of the universe at its largest makes them feel small and unimportant.  On that scale, the earth figures as a minuscule portion of a solar system that is itself a minuscule portion of a galaxy that is in turn a minuscule portion of one of countless clusters of galaxies in the universe.  When I hear that remark, I think about the scale of the universe at its smallest.  Tiny as our world is in comparison with the deepest reaches of the sky, how large do we bulk in comparison with the smallest units of the submicroscopic world?

This flash animation from Cary and Michael Huang, released a couple of days ago as a followup to a similar project they put out in 2010, takes us from the Planck length, which is evidently the smallest size a thing can be, up to what theorists currently suspect is the total size of the universe, which extends at least 7000 times as far as we will ever be able to see.  The latest theories hold that the total size of the universe might be about 1.6 times 10 to the 27th power meters.  That theory may be as mistaken as all the previous theories about the same thing have been, but it’s the best we have going at the moment.  So, if that theory is correct and you were to lay humans end to end, the number of them you would need to stretch from one end of the universe to the other would be 28 digits long.  The Huang brothers note in their animation that the total height of all living humans is much shorter than that, of course; in fact, if we were all to lie down end to end we would only reach about 10 million kilometers, only about 1/15 of the way from the earth to the sun.  Even if all 100,000,000,000 humans who ever lived were to be reincarnated and join us, we would still reach only about 150 million kilometers, roughly the distance from the earth to the sun.

So, compared to the largest scale of structure in the universe, we are indeed quite small.  But let’s take a moment and look at the smallest scale of things in the universe.  The Eameses stopped their exploration at the level of the proton.  In 1968, there wasn’t much point in trying to delve deeper.  Since then, science has made advances.  The Planck length is 1.6 times ten to the minus 35th power meters; so, if you laid the shortest possible objects end to end, the number of them it would take to stretch from one end of your body to the other would be 36 digits long. A 36 digit number is of course bigger than a 28 digit number, vastly bigger.  So our size is much closer to that of the entire universe than it is to that of an object that exists on the scale of the Planck length.  The 2010 version of the Huangs’ flash animation illustrated this dramatically, with a human symbol standing well to the right of the center of the zoom bar. If contemplating the scope of the universe as a whole makes us feel small and insignificant, does contemplation of the Planck length make us feel large and mighty?
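
The arithmetic behind the last two paragraphs is easy to redo.  The round figures in the sketch below (a 1.6-meter person, seven billion people alive, a hundred billion people ever born) are my own assumptions, so the results will differ a little from the Huangs’ numbers, but the orders of magnitude are the point:

# Orders of magnitude behind the scale comparisons above.
human_height = 1.6        # m, a round figure
alive_now = 7e9           # people alive, roughly, at the time of writing
ever_lived = 1e11         # people who have ever been born, roughly
planck_length = 1.6e-35   # m
universe_size = 1.6e27    # m, the figure cited above

print(alive_now * human_height / 1e3)    # ~1.1e7 km: the living laid end to end
print(ever_lived * human_height / 1e3)   # ~1.6e8 km: everyone ever, about one earth-sun distance

print(universe_size / human_height)      # ~1e27: people needed to span the universe, hence the 28-digit figure above
print(human_height / planck_length)      # ~1e35: Planck lengths across one person, hence the 36-digit figure above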

Perhaps it does.  If it doesn’t, I can think of two possible reasons.  First, it might be that the feature of astronomy that gives people the feeling of smallness and weakness is not the size of the structures astronomers study, but the fact that those people don’t understand what astronomers are talking about and don’t feel confident in their ability to figure it out.  That sense of smallness is not likely to be relieved when the conversation turns from light-years and dark energy to yoctometers and the quantum foam.

Second, it might be the legacy of monotheism.  When we visualize the universe on the largest scale, we imagine ourselves to be standing outside it, taking it in at a glance.  To minds formed under the influence of a belief in an all-powerful, all-knowing, ever-present Creator God, such a visualization is clearly an imitation of God.  In such an imagination, it shows an awesome power to zoom in and find individual humans, to number the hairs on their heads (between 50,000 and 200,000, according to the Huang brothers) and keep an eye on the sparrow.  On the other hand, to look up from the level of the Planck length may not suggest much to such a mind.  It’s true that medieval theologians like Albertus Magnus and Thomas Aquinas developed ideas about angels as one-dimensional beings who had the power to assume visible form as the situation required; to those titans of Scholasticism, the idea of one-dimensional strings vibrating at the smallest levels of scale in the universe and forming the basis of the physical world might have been extremely interesting.  Still, to the extent that people think about angels today, it’s in terms that neither Albertus nor Thomas would have recognized as rational, or even as Christian.  Aside from a few eccentric philosophers like Massimo Cacciari and the late Mortimer Adler who maintain an interest in the Scholastic conception of angels, the only people who bring them up nowadays seem to be those who believe that the souls of holy people become angels.  For the Scholastics, this would have been rank heresy.  They believed that angels represented an order of creation separate from humans, who may operate within time and space on occasion, but who are not generally subject to the passage of time or extended in three-dimensional space.  Humans, they held, were destined either to be resurrected in perfected bodies, as Jesus had been, or to be cast into Hell.  In either case, we would continue to be three-dimensional beings of roughly a meter and a half in height.  I can’t see what motive believers in the Hollywood-derived conception of the afterlife would have for attaching any special significance to a view of the universe that looks up from the smallest scale, and indeed they do not seem to be excited about the submicroscopic world.

Why isn’t life on earth more diverse than it in fact is?

Via Kottke, a New Scientist piece on the hypothesis that the earliest common ancestor of all life on earth was a mega-organism of planetary scale.  That’s one way of solving a problem that I sometimes wonder about.  If, as we now hear from exobiology types, life can be expected to originate wherever in the universe conditions will sustain it, how many times did it arise on this planet?  If the answer is “many times,” why do all of the organisms we see look so similar that they might share a common ancestor?  Of course several solutions have been proposed to this problem, but the idea of a mega-organism that assimilated some previous life-forms and drove the rest to extinction would seem to be another.

“Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing”

T. H. Huxley in 1860

The title of this post is a quote from Thomas Henry Huxley.  I came across it a few months ago, when I was reading an old paperback I found in a used book store.  The book was Voices from the Sky, by Arthur C. Clarke (Mayflower Press, 1969).  The cheap, high-acid paper hadn’t aged well in the decades since the book was printed; the pages crumbled in my hands as I read.  All I kept from it is the top half of page 155, a bit from an essay titled “Science and Spirituality.”  On the previous page, Clarke had mentioned the widespread impression that science and religion are irreconcilable.  To which he says:

It is a great tragedy that such an impression has ever arisen, for nothing could be further from the truth.  ‘Truth’ is the key word; for what does science mean except truth?  And of all human activities, the quest for truth is the most noble, the most disinterested, the most spiritual.

It is also the one most likely to inculcate humility.  Said T. H. Huxley a century ago: ‘Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever abysses nature leads, or you shall learn nothing.’

Science has now led, in our generation, to the ultimate abyss- that of space.  Questions to which philosophers and mystics have given conflicting answers for millennia will soon be answered, as our rocket-borne instruments range ever further from Earth.

Several things about these paragraphs arrested my attention.  Clarke usually called himself an atheist, and it was to describe his own opinions that Huxley coined the word “agnostic.”  So it is rather odd to see one of them invoke the other in defense of religious values.  Moreover, it seems a bit naive to assert that space is “the ultimate abyss.”  Ultimate in size space may be, unless of course there are parallel universes, in which case the space in which we live may be only a fraction of an infinitely larger abyss.  But for all we know, the mysteries of space may yet pale in significance and complexity next to those of the subatomic world, or of some other field of study.  The final sentence is, if anything, even more naive.  Surely the most interesting thing about science is not its ability to answer familiar questions, but its ability to raise unfamiliar questions.  Philosophers and mystics have not given conflicting answers for millennia to, for example, the question of whether Venus ever had plate tectonics.  Without scientific inquiry, not only would we not have the concept of plate tectonics, we wouldn’t even know that Venus had a surface.  Once scientific inquiry has reached a certain point, our habits of mind and our whole view of nature change in ways too subtle to notice and too numerous to count.  In truth, the essay showed many signs of hasty composition; it was far from Clarke’s best, and I was surprised to see it published in a collection.

I tracked the Huxley quote down.  It comes from a letter Huxley wrote to the Reverend Charles Kingsley on 23 September 1860.  The letter appears on pages 217-221 of Life and Letters of Thomas Henry Huxley, edited by Leonard Huxley (London: MacMillan and Company, 1900); the sentence Clarke quotes appears on page 219 of that book.  Kingsley had sent Professor and Mrs Huxley a letter of condolence on the death of their young son, in which he alluded to the doctrine of the immortality of the soul.  Huxley’s response is worth reading in full, though I will quote only a few selections.  Kingsley had mentioned various arguments that are supposed to bolster the doctrine of the immortality of the soul.  Huxley’s response makes it clear what these arguments were:

I feel that it is due to you to speak as frankly as you have done to me. An old and worthy friend of mine tried some three or four years ago to bring us together—because, as he said, you were the only man who would do me any good. Your letter leads me to think he was right, though not perhaps in the sense he attached to his own words.

Thus Huxley sets the gracious tone of the letter, and makes it clear that he disagrees with Kingsley.  Next:

To begin with the great doctrine you discuss. I neither deny nor affirm the immortality of man. I see no reason for believing in it, but, on the other hand, I have no means of disproving it.

Certainly a classic statement from the first self-described agnostic!

Pray understand that I have no a priori objections to the doctrine. No man who has to deal daily and hourly with nature can trouble himself about a priori difficulties. Give me such evidence as would justify me in believing anything else, and I will believe that. Why should I not? It is not half so wonderful as the conservation of force, or the indestructibility of matter. Whoso clearly appreciates all that is implied in the falling of a stone can have no difficulty about any doctrine simply on account of its marvellousness. But the longer I live, the more obvious it is to me that the most sacred act of a man’s life is to say and to feel, “I believe such and such to be true.” All the greatest rewards and all the heaviest penalties of existence cling about that act. The universe is one and the same throughout; and if the condition of my success in unravelling some little difficulty of anatomy or physiology is that I shall rigorously refuse to put faith in that which does not rest on sufficient evidence, I cannot believe that the great mysteries of existence will be laid open to me on other terms. It is no use to talk to me of analogies and probabilities. I know what I mean when I say I believe in the law of the inverse squares, and I will not rest my life and my hopes upon weaker convictions. I dare not if I would.

So does Huxley present himself as a scientist, and as a stout defender of the methods of science.  I am surprised at his statement that “the longer I live, the more obvious it is to me that the most sacred act of a man’s life is to say and feel, ‘I believe such and such to be true.’  All the greatest rewards and all the heaviest penalties of existence cling about that act.”  When Huxley wrote that sentence, he was 35 years old.  I’m older than that now, and the longer I live the more obvious it is to me that to say and to feel “I believe such and such to be true” is usually a waste of time, and quite often the mark of a jackass. Maybe that’s just because I spend a lot of time on the web, or maybe not.

Huxley then sets out to show how a thoroughly scientific mind approaches Kingsley’s views:

Measured by this standard, what becomes of the doctrine of immortality? You rest in your strong conviction of your personal existence, and in the instinct of the persistence of that existence which is so strong in you as in most men.

To me this is as nothing. That my personality is the surest thing I know—may be true. But the attempt to conceive what it is leads me into mere verbal subtleties. I have champed up all that chaff about the ego and the non-ego, about noumena and phenomena, and all the rest of it, too often not to know that in attempting even to think of these questions, the human intellect flounders at once out of its depth.

It must be twenty years since, a boy, I read Hamilton’s essay on the unconditioned, and from that time to this, ontological speculation has been a folly to me. When Mansel took up Hamilton’s argument on the side of orthodoxy (!) I said he reminded me of nothing so much as the man who is sawing off the sign on which he is sitting, in Hogarth’s picture. But this by the way.

I cannot conceive of my personality as a thing apart from the phenomena of my life. When I try to form such a conception I discover that, as Coleridge would have said, I only hypostatise a word, and it alters nothing if, with Fichte, I suppose the universe to be nothing but a manifestation of my personality. I am neither more nor less eternal than I was before.

Nor does the infinite difference between myself and the animals alter the case. I do not know whether the animals persist after they disappear or not. I do not even know whether the infinite difference between us and them may not be compensated by their persistence and my cessation after apparent death, just as the humble bulb of an annual lives, while the glorious flowers it has put forth die away.

Surely it must be plain that an ingenious man could speculate without end on both sides, and find analogies for all his dreams. Nor does it help me to tell me that the aspirations of mankind—that my own highest aspirations even—lead me towards the doctrine of immortality. I doubt the fact, to begin with, but if it be so even, what is this but in grand words asking me to believe a thing because I like it.

Science has taught to me the opposite lesson. She warns me to be careful how I adopt a view which jumps with my pre-conceptions, and to require stronger evidence for such belief than for one to which I was previously hostile.

My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonise with my aspirations.

Science seems to me to teach in the highest and strongest manner the great truth which is embodied in the Christian conception of entire surrender to the will of God. Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing. I have only begun to learn content and peace of mind since I have resolved at all risks to do this.

This rather resolves the paradox of the atheist Clarke invoking the agnostic Huxley in defense of religion.  Huxley is responding graciously to his friend’s sincere attempt to comfort him in a moment of extreme affliction.  In that endeavor, he assures Kingsley that he seeks to cultivate the same virtues Kingsley hopes his religion will engender, and simultaneously makes it clear that he does not accept Kingsley’s religion.  Clarke was known for a similar combination of friendliness and forthrightness in his dealings with believers, and I surmise that in this letter he may have found a kindred spirit.

Huxley mentions arguments in favor of the immortality of the soul that Kingsley had not made, notably the idea “that such a system is indispensable to practical morality.”  Huxley’s objection to this argument is emotionally powerful and scientifically astute:

As I stood behind the coffin of my little son the other day, with my mind bent on anything but disputation, the officiating minister read, as a part of his duty, the words, “If the dead rise not again, let us eat and drink, for to-morrow we die.” I cannot tell you how inexpressibly they shocked me. Paul had neither wife nor child, or he must have known that his alternative involved a blasphemy against all that was best and noblest in human nature. I could have laughed with scorn. What! because I am face to face with irreparable loss, because I have given back to the source from whence it came, the cause of a great happiness, still retaining through all my life the blessings which have sprung and will spring from that cause, I am to renounce my manhood, and, howling, grovel in bestiality? Why, the very apes know better, and if you shoot their young, the poor brutes grieve their grief out and do not immediately seek distraction in a gorge.

The words that shocked Huxley came from I Corinthians chapter 15, verse 32.  Here is the passage Huxley probably heard, taken from the funeral service in the Church of England’s 1662 Book of Common Prayer.  It comprises verses 20 through 58 of that chapter:

Now is Christ risen from the dead, and become the first-fruits of them that slept. For since by man came death, by man came also the resurrection of the dead. For as in Adam all die, even so in Christ shall all be made alive. But every man in his own order: Christ the firstfruits; afterward they that are Christ’s, at his coming. Then cometh the end, when he shall have delivered up the kingdom to God, even the Father; when he shall have put down all rule, and all authority, and power. For he must reign, till he hath put all enemies under his feet. The last enemy that shall be destroyed is death. For he hath put all things under his feet. But when he saith, all things are put under him, it is manifest that he is excepted, which did put all things under him. And when all things shall be subdued unto him, then shall the Son also himself be subject unto him that put all things under him, that God may be all in all. Else what shall they do which are baptized for the dead, if the dead rise not at all? Why are they then baptized for the dead? and why stand we in jeopardy every hour? I protest by your rejoicing, which I have in Christ Jesus our Lord, I die daily. If after the manner of men I have fought with beasts at Ephesus, what advantageth it me, if the dead rise not? Let us eat and drink, for to-morrow we die. Be not deceived: evil communications corrupt good manners. Awake to righteousness, and sin not: for some have not the knowledge of God. I speak this to your shame. But some man will say, How are the dead raised up? and with what body do they come? Thou fool, that which thou sowest is not quickened, except it die. And that which thou sowest, thou sowest not that body that shall be, but bare grain, it may chance of wheat, or of some other grain: But God giveth it a body, as it hath pleased him, and to every seed his own body. All flesh is not the same flesh; but there is one kind of flesh of men, another flesh of beasts, another of fishes, and another of birds. There are also celestial bodies, and bodies terrestrial; but the glory of the celestial is one, and the glory of the terrestrial is another. There is one glory of the sun, and another glory of the moon, and another glory of the stars; for one star differeth from another star in glory. So also is the resurrection of the dead: It is sown in corruption; it is raised in incorruption: It is sown in dishonour; it is raised in glory: It is sown in weakness; it is raised in power: It is sown a natural body; it is raised a spiritual body. There is a natural body, and there is a spiritual body. And so it is written, The first man Adam was made a living soul; the last Adam was made a quickening spirit. Howbeit, that was not first which is spiritual, but that which is natural; and afterward that which is spiritual. The first man is of the earth, earthy: the second man is the Lord from heaven. As is the earthy, such are they that are earthy: and as is the heavenly, such are they also that are heavenly. And as we have borne the image of the earthy, we shall also bear the image of the heavenly. Now this I say, brethren, that flesh and blood cannot inherit the kingdom of God; neither doth corruption inherit incorruption. Behold, I shew you a mystery: We shall not all sleep, but we shall all be changed, in a moment, in the twinkling of an eye, at the last trump, (for the trumpet shall sound,) and the dead shall be raised incorruptible, and we shall be changed. For this corruptible must put on incorruption, and this mortal must put on immortality. 
So when this corruptible shall have put on incorruption, and this mortal shall have put on immortality; then shall be brought to pass the saying that is written, Death is swallowed up in victory. O death, where is thy sting? O grave, where is thy victory? The sting of death is sin, and the strength of sin is the law. But thanks be to God, which giveth us the victory through our Lord Jesus Christ. Therefore, my beloved brethren, be ye stedfast, unmoveable, always abounding in the work of the Lord, forasmuch as ye know that your labour is not in vain in the Lord.

Is Paul suggesting in this passage that those who disbelieve in the immortality of the soul will skip the funerals of their loved ones in order to visit the nearest all-you-can-eat buffet, and thus commit “a blasphemy against all that was best and noblest in human nature”?  I think not.  He does not mention funerals, or the first shock of mourning; he does not deny that it is natural for us, as animals, to grieve out our grief when death has put someone we love forever out of our reach.  He talks about baptism, and martyrdom, and day after day spent under the shadow of persecution.  The reward for all of this trouble is to be found in an immense drama that began with Adam and Eve and that will continue until the end of time, a drama in which everyone, alive and dead, has a role to play.  If this drama is not really in production, if our efforts do not really connect us, the living, with the dead who went before us, and their efforts will not connect the living to us after we die, why bother with the whole thing?  Better to embrace laziness and live the easiest possible life than to sustain so demanding an enterprise.  Such laziness might not preclude a period of howling grief fit to impress any ape, though it would set a limit to the ways in which a person is likely to change his or her habits in the aftermath of that period.

Of course, when Huxley heard the passage above he was standing at the open grave into which his young son’s coffin had just been lowered.  It would be unreasonable to think ill of a person subject to such extreme stress for taking a few words out of their context and putting an unwarranted construction on them, especially when the priest speaking them represented a group that was alien to Huxley’s beliefs and hostile to him personally.  I do wonder, though, why it was just that construction that came to Huxley’s mind.  Obviously, I don’t know.  But I think I do know what Irving Babbitt would have thought about it.   Babbitt (1865-1933) was a professor of French and Comparative Literature at Harvard whose works have had a great influence on me.  The poet and critic T. S. Eliot took some classes from Babbitt, and after Babbitt’s death wrote that “To have been once a pupil of Babbitt’s was to remain always in that position”; even for someone like myself, born decades after Babbitt’s death and familiar with him only through his books, Babbitt’s influence is permanent.

Irving Babbitt might have read Paul much as he read the Buddha.  In the course of the denunciation of the elective system, then being introduced to American higher education, that runs through his book Literature and the American College (Boston and New York: Houghton Mifflin, 1908), Babbitt wrote:

A popular philosopher has said that every man is as lazy as he dares to be. If he had said that nine men in ten are as lazy as they dare to be, he would have come near hitting a great truth. The elective system has often been regarded as a protest against the doctrine of original depravity. This doctrine at best rests on rather metaphysical foundations, and is hard to verify practically. The Buddhists are perhaps nearer the facts as we know them in putting at the very basis of their belief the doctrine, not of the original depravity, but of the original laziness, of human nature. (page 53)

with this clarification:

The greatest of vices according to Buddha is the lazy yielding to the impulses of temperament (pamada); the greatest virtue (appamada) is the opposite of this, the awakening from the sloth and lethargy of the senses, the constant exercise of the active will. The last words of the dying Buddha to his disciples were an exhortation to practice this virtue unremittingly. (page 53, note 1)

In the introduction to his translation of the Dhammapada,* Babbitt enlarges on this discussion, drawing a contrast between appamada and karma.  Both words mean “work,” but Babbitt claims that in the Dhammapada and the other early Buddhist scriptures written in the Pâli language, karma carries the sense of an effort sustained over a long period, several lifetimes in fact, that culminates in a kind of knowledge that can be acquired in no other way.  Babbitt made a habit of attributing his favorite ethical ideas simultaneously to all the great sages of the ancient world, east and west; nothing would have appealed to him more than declaring a familiar passage in Paul to be identical in content to the doctrines of the Buddha.

Babbitt used the Buddha and the other sages as an arsenal of sticks with which to beat his intellectual arch-nemesis, Jean-Jacques Rousseau.  For Babbitt, Rousseau was the patriarch of the Romantic movement, and the essence of the Romantic movement was an embrace of mere whim.  Indeed, the sentences I quoted from Literature and the American College introduce one of his innumerable denunciations of Rousseau, in that case for confusing “work,” which the Buddha understood as a virtue that quiets the spirit, with mere activity and busy-ness, which the Buddha regarded as a vice in which we escape from our true obligations.  For the Rousseau of Babbitt’s imagination, violent displays of emotion were events of great significance, while projects requiring long years of steady labor and harsh self-discipline were trivialities.

I suspect that Babbitt would have seen a child of the Romantic movement in Huxley’s reaction.  Rousseauism primed Huxley to conceive of his bereavement, not in terms of a scheme like Paul’s that subordinates the whole of life to an immense drama in which the living and the dead all have roles to play, but in terms of the intense emotional experiences of the early stages of mourning.  Presented with Paul’s statement that without a belief in the Resurrection this drama would not make sense, Huxley heard instead that without such a belief his intense emotional experiences would not make sense.  Observing the indications of the same experiences in the apes, Huxley concluded that either Paul was mistaken, or the apes were believers in the Resurrection.  Babbitt could be quite harsh in his judgments of spokesmen for Romanticism; in Huxley’s remarks, he might well have seen a man unable to distinguish between, on the one hand, “the lazy yielding to the impulses of temperament” that the howling ape represents and, on the other, the commitment to a life on the grand scale that Paul’s letter and the Buddha’s sayings describe.

Babbitt was no more religious than Huxley or Clarke, as a matter of fact.  But some of his associates and followers were.  If Huxley had been writing to one of them, perhaps he would have criticized Paul from another angle, and with another ethological example drawn from elsewhere in the animal kingdom.  There are nonhuman animals that subordinate their immediate needs and pleasures to long-term goals and to the good of a group: penguins go months without food to incubate their young, and social insects devote their whole lives to a single set of tasks within the hive or swarm of which they are part.  Granted, humans have brain functions that do not exist in those other animals, and so we respond to incentives differently than they do.  So a latter-day defender of the idea that a belief in personal immortality is indispensable to practical morality among humans might argue that only an explicit narrative connecting generation to generation can enable us to do what comes naturally to our distant cousins elsewhere in the animal kingdom.  And of course a latter-day Huxley could ask for evidence supporting this psychological claim.

Huxley was no Buddhist or Christian.  One sentence quoted above suggests that his ethical views were a form of Neo-Stoicism.  That sentence is “My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonise with my aspirations.”  This is a rather neat summary of the Stoic aspiration to peace of mind through an acceptance of the world as it is.  I find further evidence of Stoicism in other passages.  A long paragraph between the two portions that I quoted above argues that in fact, the virtuous are likelier than the wicked to prosper in this world, a view often associated with Stoicism.  In that paragraph, Huxley also argues that we should put the physical laws of nature on a par with moral laws, and not regard those who suffer in consequence of “physical sins” as instances of wronged innocents.  That certainly fits into most Stoic models of “the natural law.”  And following the description of his son’s funeral, we find this remarkable passage.  I will let it stand as the last word:

Kicked into the world a boy without guide or training, or with worse than none, I confess to my shame that few men have drunk deeper of all kinds of sin than I. Happily, my course was arrested in time—before I had earned absolute destruction—and for long years I have been slowly and painfully climbing, with many a fall, towards better things. And when I look back, what do I find to have been the agents of my redemption? The hope of immortality or of future reward? I can honestly say that for these fourteen years such a consideration has not entered my head. No, I can tell you exactly what has been at work. Sartor Resartus led me to know that a deep sense of religion was compatible with the entire absence of theology. Secondly, science and her methods gave me a resting-place independent of authority and tradition. Thirdly, love opened up to me a view of the sanctity of human nature, and impressed me with a deep sense of responsibility.

If at this moment I am not a worn-out, debauched, useless carcass of a man, if it has been or will be my lot to advance the cause of science, if I feel that I have a shadow of a claim on the love of those about me, if in the supreme moment when I looked down into my boy’s grave my sorrow was full of submission and without bitterness, it is because these agencies have worked upon me, and not because I have ever cared whether my poor personality shall remain distinct for ever from the All from whence it came and whither it goes.

And thus, my dear Kingsley, you will understand what my position is. I may be quite wrong, and in that case I know I shall have to pay the penalty for being wrong. But I can only say with Luther, “Gott helfe mir, Ich kann nichts anders.”

I know right well that 99 out of 100 of my fellows would call me atheist, infidel, and all the other usual hard names. As our laws stand, if the lowest thief steals my coat, my evidence (my opinions being known) would not be received against him.

But I cannot help it. One thing people shall not call me with justice and that is—a liar. As you say of yourself, I too feel that I lack courage; but if ever the occasion arises when I am bound to speak, I will not shame my boy.

*originally published by Oxford in 1936, reissued by New Directions in 1965; this bit is on pages 91-93

Scientists need media advisors

The other day I read an article in Popular Science magazine profiling Felisa Wolfe-Simon, the NASA-sponsored scientist who made headlines in December with a paper claiming that a particular strain of bacteria throve in environments high in arsenic and low in phosphorus.  Wolfe-Simon hopes to find a life form that uses arsenic in its DNA in the way that all other known organisms use phosphorus, and NASA foregrounded that hope in its publicity for the paper.  While Wolfe-Simon did not claim to have proven that the bacteria were using arsenic in this way, so much press discussion centered on that idea that when subsequent findings suggested they probably weren’t, she was subjected to a kind of disgrace.  In the Popular Science piece, Wolfe-Simon says that her career may very well be over now.

After I’d read this sad tale, I turned on the TV.  The History Channel was showing a program they’d produced in 2008 about Professor Bruce Bueno de Mesquita, a political scientist who has used game theory to devise an algorithm for use in analyzing high-level decision-making.  To be precise, about a third of the show concerned Professor Bueno de Mesquita.  This third included many excerpts of the professor and his associates talking to the camera about his research.  The other two thirds were about Nostradamus.  Neither Professor Bueno de Mesquita nor any of his associates ever mentions Nostradamus, and only one of the many Nostradamus fans who appear mentions Professor Bueno de Mesquita.  I strongly suspect that the professor did not know that he was going to be presented as “The Next Nostradamus.”

Helping others, hurting oneself

In a recent issue of The Nation, Miriam Markowitz reviewed a biography of a remarkable figure named George Price.  The opening paragraph is an attention-grabber:

George Price was born a Jewish half-breed to parents who kept his Semitic side a secret; lived much of his life an aggressive atheist and skeptic of the supernatural; and died a Christian, twice converted, albeit, to his mind, a defeated one. Several years before he abandoned his career in a mission to shelter and comfort homeless alcoholics, he made a number of extraordinary contributions to evolutionary biology, a field in which he had no training. Educated as a chemist, Price had worked previously for the Manhattan Project on uranium enrichment, helped develop radiation therapy for cancer, invented computer-aided design with IBM and dabbled in journalism.

I suppose if your name is Miriam Markowitz you can use phrases like “Jewish half-breed,” though I for one would just as soon you didn’t.

In 1970, Price used a mathematical model rooted in game theory to revise an equation that William D. Hamilton had proposed as a means of analyzing altruistic behavior.  Hamilton and others saw that Price’s equation made it possible to analyze self-sacrificing behavior at many levels of selection at once, and to do so without appealing to notions of group selection.  This last point was especially attractive to Hamilton; as Markowitz explains, “Hamilton’s theory of inclusive fitness was a riposte to what he considered the naïve and ‘woolly’ group selectionism in vogue until the late 1960s, which explained altruistic behaviors with vague gestures toward ‘the good of the species.’”  Hamilton’s consistent opposition to all forms of group selectionism, be they woolly or threadbare, was one of the reasons Richard Dawkins named him as one who may have been “the greatest Darwinian since Darwin.”  Price’s theoretical work is basic to biological explanations of altruistic behavior; his own determination to lead a life of altruism, however, was far less successful.  None of the homeless alcoholics he sought to help took much interest in his ministrations.  Despairing, Price committed suicide in January 1975.
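For readers curious about the equation itself, which the review does not quote: what follows is the form in which Price’s result is usually stated in textbooks today, not Price’s original 1970 notation, so take it as a summary rather than the thing itself.

$$ \bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}\!\left(w_i\,\Delta z_i\right) $$

Here z_i is the value of some trait (a disposition toward self-sacrifice, say) in the i-th individual or group, w_i is that unit’s fitness, and the left-hand side is the change in the population-average trait value from one generation to the next, scaled by mean fitness.  The covariance term measures how strongly the trait is associated with reproductive success among the units indexed by i; the expectation term collects whatever change occurs within those units, and because it can be expanded by applying the same equation one level down, the formula nests recursively (individuals within groups, groups within populations), which is what it means to analyze selection at many levels at once.  Hamilton’s own rule for altruism, usually written rb > c (help is favored when the benefit to the recipient, discounted by relatedness, exceeds the cost to the helper), can be recovered from this framework as a special case.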
