Jail to the chief?

In the current issue of The Nation, Alexander Cockburn reminisces about the day he became a citizen of the United States of America.  On that day he and his fellows swore to preserve, protect, and defend the Constitution of the United States, a document which they had all been required to study, and which speaks of limits to state power and protection for the rights of the individual:

But it turns out it was all a fraud. The Uzbek down the row from me who had fled Karimov’s regime probably had no need to anticipate being boiled alive—a spécialité de la maison in Tashkent. But being roasted alive by Hellfire missile, doomed by executive order of President Obama, without due process in any court of law, for reasons of state forever secret, could theoretically lie in his future. If presidential death warrants beyond the reach of scrutiny and review by courts or juries are the mark of a banana republic, then we were all waving the flag of just such an entity.

What moves Mr Cockburn to this bitter declaration is of course the killing of Anwar al Awlaki, a killing for which the president of the United States proudly claimed responsibility.  Al Awlaki may have been acquainted with some men who committed or attempted to commit acts of terrorism, and he certainly made unpleasant comments in public forums.  But the Obama administration has yet to do so much as accuse him of complicity in any violent act, much less provide evidence that he was the commander of an enemy force engaged in war on the United States, and as such a legitimate military target.  As it stands, the al Awlaki killing can be classified only as an act of murder.  Mr O’s boast that he ordered the strike is of a piece with his predecessor’s casual public admission that he ordered the torture of terrorism suspects.  Each man is serene in his belief that there is no crime he can commit that will stir the legal authorities to prosecute him.

Ought Americans who stand to Mr O’s left support a candidate to challenge him for the Democratic presidential nomination next year?  If being on the left means that one prefers the rule of law to a regime in which the president may kill and torture with impunity, one might think the answer would be obvious.  For John Nichols, it’s more complicated.  Some might say that the best thing the president could do is resign, stand trial, and go to prison, accompanied if possible by his predecessors.  For Mr Nichols, not only is it clear that Mr O should continue in office, but it is apparently desirable that he should be reelected.  He wonders whether a primary challenger could help Mr O improve his chances of winning a second term, and seems to wish that one were on the horizon.  He doesn’t claim to know that it would work out that way:

The dramatically sped-up and concentrated primary calendar leaves little time for slow-to-develop challenges. It is already very late in the 2012 process, and no well-known Democratic official or progressive activist seems to be entertaining a run.

“We don’t even have a Pat Buchanan,” jokes Jeff Cohen, the veteran media critic and adviser to progressive candidates who is convinced that a credible primary challenger could win 30 to 40 percent of the vote in some states. Cohen argues that a primary challenger would not have to win to make a meaningful impact; a strong competitor could force Obama to sharpen his message and give progressives a significant role in defining the party. But for every progressive who argues that Obama’s re-election prospects would be improved by primary prodding from the left, there are cautionary voices like that of James Fallows, who asserts: “As for the primary challenges, what similarity do we notice between Jimmy Carter (challenged by Edward Kennedy in 1980) and George H.W. Bush (challenged by Pat Buchanan in 1992)? What we notice is: they held onto the nomination and went on to lose the general election.”

Obama is not likely to be defeated by a primary challenger. Despite the dip in his national approval ratings, polling suggests he retains relatively solid numbers with Democrats in key states—and among critical voting blocs. African-American voters, 86 percent of whom give the president favorable ratings (58 percent strongly favorable), are definitional players in Southern and a number of Great Lakes states. A ham-handed primary challenge could energize African-American voters—who, as Nation columnist Melissa Harris-Perry notes, may be inclined to ask why the equally disappointing Bill Clinton did not face a primary challenge in 1996. Such a challenge could also antagonize young people and many white liberals inclined to defend the nation’s first African-American president against what they perceive to be an unfair assault.

The prospect that the Democratic Party could divide against itself in an ugly debate gleefully amplified by right-wing media has little appeal even to Democrats who disdain Obama’s policy drift. But there is almost as much concern that a nuanced challenge from a candidate who appeals to African-American voters, such as Cornel West, would weaken the incumbent the way Ted Kennedy’s 1980 challenge to Carter and Buchanan’s 1992 run against George H.W. Bush are perceived to have undermined those presidents’ re-election.

In fact, the theory that primary challenges invariably lead to November defeats is wrong. In the past fifty years, two of the biggest presidential wins were secured by incumbents who faced meaningful primary competition. In 1964 President Johnson and his “favorite son” stand-ins had to fend off a determined challenge from Alabama Governor George Wallace, who won roughly 30 percent of the vote in two Midwestern primaries and 44 percent in Maryland. In 1972 President Nixon was challenged from the right and the left by Republican Congressmen (Ohio conservative John Ashbrook and California liberal Pete McCloskey) who attracted a combined 30 percent of the vote in New Hampshire’s first-in-the-nation primary. Both Johnson and Nixon would go on to win more than 60 percent of the fall vote.

On The Nation’s website, Dave Zirin denounces singer Hank Williams, Junior, who recently lost a gig after comparing Mr O to the late Adolf Hitler.  It is not entirely clear what it is about Mr O that reminds Mr Williams of Germany’s late tyrant.  Perhaps the fact that each head of state boasted publicly of the murders he had orchestrated, that each dispatched his air force to bomb into submission countries that posed no threat to his own, that each used his office to accelerate the dismantling of the democratic constitution under which he had come to power, and that each claimed the right to detain any number of people for any length of time without judicial process may have prompted Mr Williams to think that they bore some resemblance to one another.  Of course, since Mr Zirin is a faithful supporter of the Democratic Party, one might expect him to find ways in which Mr O is less advanced in his murderous ways than was Adolf Hitler, as faithful Republicans spent the years 2001-2009 counting the degrees that separated Mr O’s predecessor from the same benchmark of wickedness.  Strangely, Mr Zirin says nothing about Mr O other than to describe him as the “first African-American president.”  This description precedes Mr Zirin’s pronouncement of his anathema upon Mr Williams, that anathema taking the form of the label “racist.”  Such a pronouncement is a sort of ritual; to complete it, the officiant needs nothing from Mr O but his skin color.  Once this ritual element is provided, no further information about Mr O could have any possible relevance to the proceeding.

Of course, there are sound reasons why one ought not to compare active politicians to Adolf Hitler.  For one thing, using him as the all-purpose symbol of an unjust ruler gives him a satanic glamour of just the sort that the Nazis used so effectively in their seduction of the more desperate members of Germany’s middle classes in the late Weimar period.  If Hitler must be remembered, it is far better to view him with contempt, perhaps tinged with the sort of pity one feels towards people who have psychological problems that one finds uninteresting.  Besides, the history of humankind is bursting with tyrants and killers; it is dismaying indeed that we share so little knowledge of history that Hitler is virtually the only one of the evil rulers of the past whose name we can be confident will be recognized almost anywhere.  For my part, I think an apt analogy could be made between Mr O and Critias, a fifth-century Athenian who is remembered today as the uncle of the philosopher Plato and the namesake of one of his nephew’s uncompleted dialogues, but who in his own day was rather more widely known as the leader of the “Thirty Tyrants,” a group who seized power in Athens after the Peloponnesian War and claimed the right to govern by means of assassination.

“An apple a day keeps the doctor away”

The other day, I was eating an apple for breakfast.  My wife mentioned that a friend of ours was planning to stop by our house later that morning.  This friend is a medical doctor by occupation; I joked that I’d better stop eating the apple, since I didn’t want to keep him away.  Recognizing the play on the proverb “An apple a day keeps the doctor away,” Mrs Acilius was kind enough to chuckle at my little witticism, as was our friend when I repeated the line to him.  Clearly, the proverb means something like “If you eat an apple each day, you will reduce the likelihood that you will require the professional attentions of a medical doctor.”  Since our friend’s visit was purely social, the humor of my remark arose from an ambiguity in the expression “keeps the doctor away.”  It wasn’t hugely funny, since this ambiguity is a purely formal one that has rarely confused anyone, but to the extent that it is funny at all, that’s what makes it so.

The next day, I was teaching a class.  I had a Twitter stream on the screen in front of the room, consisting of questions and answers that my students had tweeted to my work Twitter account (not to be confused with the Los Thunderlads Twitter account or my own private Twitter account).  There are other systems that enable students to send short items to a page that can be projected on a screen, but since Twitter is a public site and the students always have access to it, it has certain advantages.  In the middle of class, a student decided, for some reason, to share with the class a joke that has been whipping its way around Twitter of late: “A blowjob a day keeps the pimphand away.”  The class laughed, and I took advantage of the opportunity to remind them of the reasons why they should keep a separate Twitter account just for their classes.  I also spent a moment or two making fun of the offender for his need to share, then moved on.

It’s a shame the class wasn’t in lexical semantics.  If it were, I could have used the sentence “A blowjob a day keeps the pimphand away” as an example of some interesting points.  It scans the same as “An apple a day keeps the doctor away”; “apple,” “blowjob,” “doctor,” and “pimphand” are all trochaic, and in each pair the second word has a more complex consonant structure than does the first.  So the two expressions sound very similar, but of course they differ dramatically in that one is among the most anodyne of expressions, while the other is doubly taboo, combining as it does an explicitly sexual term and an explicitly violent one.

“A blowjob a day keeps the pimphand away” also gets a laugh because it prompts us to think of similarities between the act of eating an apple and the act of performing oral sex on a man.  Each process takes a few minutes.  In each case, one performs a series of oral manipulations on an object that is, at the beginning of the process, bulbous in shape and about as long as it is wide, and in the course of those manipulations changes the object into a roughly cylindrical shape.  Also, an uneaten apple is covered with a peel that can be any of a variety of colors, but that shows a variation of color tone around its exterior.  Once the peel is gone, the apple eater chews on the fruit inside, ending up with a mouth full of shapeless, but uniformly white, material.  The similarity to fellatio is perhaps obvious.

The relationship between “keeps the doctor away” and “keeps the pimphand away” is, perhaps, more interesting.  The phrase “the doctor” in the proverb calls up the image of a person who is a doctor; keeping that person away is supposed to mean preventing the need for a house-call.*  As my little joke of the other morning showed, the bare noun phrase “the doctor” does not by itself logically imply the idea of need for a house call, but could, to a person unfamiliar with the proverb, allow for the meaning “If you eat an apple, doctors will avoid you.”  By contrast, the phrase “the pimphand” evokes a very specific scenario.  A pimp demands that a prostitute hand over her earnings to him, and slaps her in the face for refusing to do so.  Look at this image, from Urban Dictionary’s top-rated entry for “pimphand”:

Compare it with this comic strip, which Josh Fruhlinger described as featuring a “distinguished-looking senator, who isn’t so distinguished that he can’t slap an angry lake-bully with his pimp hand when he gets his dander up”:

The first picture is accepted as an illustration of the term “pimphand,” even though the man in it has few of the characteristics one associates with pimpdom, because the position of his hand suggests the sort of slap that the senator is administering in the comic strip.  So in place of the merely nominal “the doctor,” with its vague evocation of a gentle custom that is obsolete in the USA, we find an expression that may parse the same, but that definitely signifies a particular scenario of brutal violence.

*Some USA residents may never have heard of “house calls.”  This is when a doctor goes to a patient’s home to provide medical care.  These have been unknown in the USA for decades, my entire lifetime in fact, though I understand there are still places where they are common.

What matters in life

Here are the last three sentences of an opinion piece that appeared in Time magazine some time ago:

It is a pathetic little second-rate substitute for reality, a pitiable flight from life. As such it deserves fairness, compassion, understanding and, when possible, treatment. But it deserves no encouragement, no glamorization, no rationalization, no fake status as minority martyrdom, no sophistry about simple differences in taste—and, above all, no pretense that it is anything but a pernicious sickness.

To what does that “it” refer?  By themselves, these sentences leave open several possibilities.  They sound very much like the more strident remarks that aggressive atheists make about religion.  For their part, believers have been known to reply to these remarks in kind.  People on each side of that dispute tend to build their favorite presuppositions into the way they use words like “reality” and “life,” so that each could accuse the other of offering “a pathetic little second-rate substitute for reality, a pitiable flight from life.”  At the same time, leftists have been known to write this way about right-wingers, especially when the right-wingers in question belong to groups that the leftists see as victims of unjust policies that the right supports.  The phrase “false consciousness” may not be much in favor any longer, but there are other ways of accusing people of being deluded about what political movements are in their own best interests.  The line about “fake status as minority martyrdom” sounds just like the sort of thing left-of-center Americans are often provoked to say when their least favorite political figures claim to have suffered unfair treatment at the hands of a “liberal elite.”  Again, it is not uncommon for right-wingers to respond in kind, presenting leftism as a mental illness and a sign of self-loathing.
Indeed, just about any activity or belief of which a speaker disapproves could be attacked in the words that Time magazine used above.  If the speaker is absorbed in a rival activity or committed to an opposing system of belief, it may seem obvious that Time’s description is perfectly accurate.  For example, when I was in graduate school, I was entirely immersed in the study of ancient Greece and Rome.  For years, my fellow students and I averaged somewhere between 80 and 100 hours a week studying the languages, literatures, histories, and material remains of classical antiquity.  We socialized primarily with each other, and modeled ourselves on our professors.  So by the middle of our grad school years, we came to take it for granted that every walk of life that did not advance classical learning was a waste of time, a poor consolation for people who couldn’t make it in classics.  We had entered graduate school with a more balanced view, and by the time we entered the job market most of us were on our way back to that balance, but for most of us there was a period starting sometime around the end of the first year and ending sometime before the fourth year when it was hard to take anything outside of classics at all seriously.  I suspect we would all have nodded in agreement if someone had described, say, a career in the insurance industry as “a pathetic little second-rate substitute for reality,” etc.
Of course, classical scholarship is not one of the most powerful or celebrated professions in the twenty-first century.  So once a person emerges from the odd little world of a graduate program in classics, that person is unlikely to continue taking it for granted that classicists are the only successful professionals.  Other fields enjoy far more prestige; their practitioners are in much greater danger of becoming unalterably attached to the idea that they and their colleagues have a monopoly on wisdom.  Businesspeople, scientists, and medical doctors seem to number among their ranks many people whose intellectual development is permanently stunted in the condition of the second-year grad student.  For these individuals, the boundaries of “reality” and “life” are the boundaries of their disciplines, and anything outside those boundaries is a “substitute for reality” and a “flight from life,” and people who dwell out there are sad cases to be taken gently, but firmly, in hand.
Political and religious beliefs are even more likely to swallow up a person’s conception of success in life than is a sense of the importance of one’s profession, and certainly less likely to spit that conception back out into the open air.  So it is small wonder that left and right, atheist and believer might see each other in the light that Time magazine describes.  For each ideological group, it seems obvious which things truly matter in life, and people who are uninterested in those things and devoted to others must therefore be fools who are suffering from some peculiar sort of disorientation.  Any influence such fools have on those around them is, of course, dangerous and requires action to reassert the more wholesome values.
So these sentences represent, on the one hand, a content-free insult, and on the other, the writer’s confession of faith.  What he was attacking as unreal, unliving, and pernicious was the direct negation of what he thought of as most plainly real, lively, and wholesome.  To find out who Time magazine was insulting, turn to the original article (which I found here).

“If politicians would stop arguing, they could work together– to get things done! Doesn’t matter what! Just, you know– things!”

The other day, The Monkey Cage featured a post called “Why Does Congress Flail?  Voters Reward Positions More than Success.”  As the title implies, the premise of the post was that the US Congress has been relatively ineffective in passing major legislation of late because its members know that their jobs depend, not on passing bills into law, but on striking poses that resonate with the ideological leanings of their constituents.

In a comment, I challenged the first part of the premise.  In the last ten years, the US Congress has in fact passed a great deal of major legislation, changing American government and American life far more profoundly than in almost any other epoch of US history.  Among this legislation are bills funding several wars, permitting the president to wiretap virtually anyone he likes, maintaining indefinite detention of persons accused of terrorism, creating the Department of Homeland Security, formalizing a variety of terrorism watch-lists, adding a prescription drug benefit to Medicare, requiring citizens to buy health insurance, dramatically expanding the federal role in education, handing a trillion dollars of taxpayers’ money to Goldman Sachs and companies friendly to it, and repeatedly cutting taxes on the largest incomes, to name just a few measures with vast ramifications.  It’s true that members of congress rarely cite these achievements in their reelection bids.  That isn’t because they are unimportant, but because none of them is at all popular.  If these acts constitute “success,” then it is no wonder voters don’t reward it.  Rather, it is a mystery that voters don’t punish such “success” by deserting both the Republican and Democratic parties, and replacing their entire set of political leaders.

Yet one still hears Americans who wish to be regarded as “moderate,” or “centrist,” or “responsible” say that top elected officials in Washington should stop battling with each other so that they can be more effective at “getting things done.”  I’ve found that the people who say this seem puzzled when I point out how much has “gotten done” in Washington since 2001.  What seems equally difficult for them to grasp is the point Tom Tomorrow makes in this cartoon:

The way out of philosophy runs through philosophy

There’s a phrase I’ve been thinking about for years, ever since I read it somewhere or other in Freud: “the moderate misery required for productive work.”  It struck me as plausible; someone who isn’t miserable at all is unlikely to settle willingly into the tedious, repetitive tasks that productive work often involves, while someone who is deeply miserable is unlikely to tolerate such tasks long enough to complete them.  If blogging counts as productive work, I myself may recently have represented a case in point.  Throughout the summer and into the autumn, I wasn’t miserable at all, and I barely posted a thing.  Then I caught a cold, and I posted daily for a week or so.  If I’m typical of bloggers in this respect, maybe I could also claim to have something in common with a philosopher.  Samuel Johnson’s friend Oliver Edwards once quipped that he had tried in his time to be a philosopher, but couldn’t manage it.  The cause of his failure?  “Cheerfulness was always breaking in.”

One item I kept meaning to post notes on when cheerfulness was distracting me from the blog was a magazine article about Johnson’s contemporary, David Hume.  Hume, of course, was a philosopher; indeed, many would argue that he was “the most important philosopher ever to write in English.”  Contrary to what that remark suggests, however, Hume was suspected of cheerfulness on many occasions.  The article I’ve kept meaning to note is by Hume scholar and anti-nationalist Donald W. Livingston; despite the radicalism of Livingston’s politics (his avowed goal is to dissolve the United States of America in order to replace it with communities built on a “human scale”), in this article he praises Hume as “The First Conservative.”  Hume’s conservatism, in Livingston’s view, comes not only from his recognition of the fact that oversized political units such as nation-states and continental empires are inherently degrading to individuals and destructive of life-giving traditions, but also from his wariness towards the philosophical enterprise.  Hume saw philosophy as a necessary endeavor, not because it was the road to any particular truths, but because philosophical practice alone could cure the social and psychological maladies that the influence of philosophy had engendered in the West.

This is the sort of view that we sometimes associate with Ludwig Wittgenstein; so, it’s easy to find books and articles with titles like “The End of Philosophy” and “Is Philosophy Dead?” that focus on Wittgenstein.  But Livingston demonstrates that Hume, writing more than a century and a half before Wittgenstein, had made just such an argument.  Livingston’s discussion of Hume’s Treatise of Human Nature (first published in 1739-1740) is worth quoting at length:

Hume forged a distinction in his first work, A Treatise of Human Nature (1739-40), between “true” and “false” philosophy.  The philosophical act of thought has three constituents. First, it is inquiry that seeks an unconditioned grasp of the nature of reality. The philosophical question takes the form: “What ultimately is X?” Second, in answering such questions the philosopher is only guided by his autonomous reason. He cannot begin by assuming the truth of what the poets, priests, or founders of states have said. To do so would be to make philosophy the handmaiden of religion, politics, or tradition. Third, philosophical inquiry, aiming to grasp the ultimate nature of things and guided by autonomous reason, has a title to dominion. As Plato famously said, philosophers should be kings.

Yet Hume discovered that the principles of ultimacy, autonomy, and dominion, though essential to the philosophical act, are incoherent with human nature and cannot constitute an inquiry of any kind.  If consistently pursued, they entail total skepticism and nihilism. Philosophers do not end in total skepticism, but only because they unknowingly smuggle in their favorite beliefs from the prejudices of custom, passing them off as the work of a pure, neutral reason. Hume calls this “false philosophy” because the end of philosophy is self-knowledge, not self-deception.

The “true philosopher” is one who consistently follows the traditional conception of philosophy to the bitter end and experiences the dark night of utter nihilism. In this condition all argument and theory is reduced to silence. Through this existential silence and despair the philosopher can notice for the first time that radiant world of pre-reflectively received common life which he had known all along through participation, but which was willfully ignored by the hubris of philosophical reflection.

It is to this formerly disowned part of experience that he now seeks to return. Yet he also recognizes that it was the philosophic act that brought him to this awareness, so he cannot abandon inquiry into ultimate reality, as the ancient Pyrrhonian skeptics and their postmodern progeny try to do. Rather he reforms it in the light of this painfully acquired new knowledge.

What must be given up is the autonomy principle. Whereas the false philosopher had considered the totality of pre-reflectively received common life to be false unless certified by the philosopher’s autonomous reason, the true philosopher now presumes the totality of common life to be true. Inquiry thus takes on a different task. Any belief within the inherited order of common life can be criticized in the light of other more deeply established beliefs. These in turn can be criticized in the same way. And so Hume defines “true philosophy” as “reflections on common life methodized and corrected.”

By common life Hume does not mean what Thomas Paine or Thomas Reid meant by “common sense,” namely a privileged access to knowledge independent of critical reflection; this would be just another form of “false philosophy.” “Common life” refers to the totality of beliefs and practices acquired not by self-conscious reflection, propositions, argument, or theories but through pre-reflective participation in custom and tradition. We learn to speak English by simply speaking it under the guidance of social authorities. After acquiring sufficient skill, we can abstract and reflect on the rules of syntax, semantics, and grammar that are internal to it and form judgments as to excellence in spoken and written English.  But we do not first learn these rules and then apply them as a condition of speaking the language. Knowledge by participation, custom, tradition, habit, and prejudice is primordial and is presupposed by knowledge gained from reflection.

The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that “custom is the great guide of life.” But custom understood as “methodized and corrected” by loyal and skillful participants.

The distinction between true and false philosophy is like the distinction between valid and invalid inferences in logic or between scientific and unscientific thinking. A piece of thinking can be “scientific”—i.e., arrived at in the right way—but contain a false conclusion. Likewise, an argument can be valid, in that the conclusion logically follows from the premises on pain of contradiction, even if all propositions in the argument are false. Neither logically valid nor scientific thinking can guarantee truth; nor can “true philosophy.” It cannot tell us whether God exists, or whether morals are objective or what time is. These must be settled, if at all, by arguments within common life.

True philosophy is merely the right way for the philosophical impulse to operate when exploring these questions. The alternative is either utter nihilism (and the end of philosophical inquiry) or the corruptions of false philosophy. True philosophy merely guarantees that we will be free from those corruptions.

This is rather like one of Friedrich Nietzsche’s parables, from Also Sprach Zarathustra (1883-1885).  Nietzsche’s Zarathustra preaches that the spirit must become a camel, so as to bear the heaviest of all weights, which is the humiliation that comes when one discovers the extent of one’s ignorance, and the commitment to remedy that ignorance; that it must then put the camel aside and become a lion, so that it may slay the dragon of “Thou-Shalt” and undertake to discover its own morality; and that at the last it must become a child, so that it may put that struggle behind it and be ready to meet new challenges, not as reenactments of its past triumphs, but on their own terms.  According to Livingston, Hume, like Nietzsche, sees the uneducated European as a half-formed philosopher, and believes that with a complete philosophical education s/he can become something entirely different from a philosopher.


Mr O’s “anti-nuclear imperialism”

Let me tell you about a better way, a way that protects the purity of our precious bodily fluids.

The late September issue of Counterpunch (available to subscribers here; the newsletter’s website is here) includes a fine article by Darwin Bond-Graham titled “The Obama Administration’s Nuclear Weapons Surge.”  While Mr O has made many remarks declaring that nuclear weapons are bad and the world would be better off without them, he has in fact “worked vigorously to commit the nation to a multi-hundred-billion-dollar reinvestment in nuclear weapons, mapped out over the next three decades.”  Bond-Graham analyzes the New START agreement between the USA and Russia.  Though the publicity surrounding New START presented it as an arms-reduction treaty, Bond-Graham contends that it is nothing of the kind.  “On balance, the nominal reductions in nuclear weapons required by New START are insignificant when compared to the multibillion-dollar nuclear (and strategic non-nuclear) weapons programs committed to in the treaty’s text.”  Indeed, Bond-Graham classifies New START as an “arms-affirmation treaty.”  Mr O and his allies in the upper echelons of the congressional Democratic leadership were able to market New START as a disarmament agreement and to enlist the support of Americans who usually oppose nuclear weapons, even though “the treaty does not actually require the destruction of a single nuclear warhead.”  Bond-Graham also goes into depth on various other programs through which Mr O has managed to increase spending on nuclear weapons, to reorient the USA’s nuclear weapons programs towards potential use in conflict, and to strip away inhibitions against nuclear first strikes by the USA.

For Bond-Graham, Mr O’s anti-nuclear public statements not only represent a rhetorical device to “neutralize”  the “anti-nuclear and antiwar groups that so effectively exposed [George W.] Bush’s plans” to pursue policies similar to those of the current administration, but also constitute the foundation of a strategic orientation that Bond-Graham dubs “anti-nuclear imperialism.”  This orientation, ostensibly based on abhorrence of nuclear weapons, in fact promotes the development, maintenance, and deployment of such weapons.  Remember the claims that the Bush-Cheney administration made about Saddam Hussein’s alleged “Weapons of Mass Destruction” programs in 2002-2003, and the meaning of the phrase “anti-nuclear imperialism” becomes all too clear.

The contextualization fairy

Recently, John Holbo posted two items (here and here) on Crooked Timber about something odd in American politics.  Right-wing politicians in the USA quite often make public statements that would, if taken at face value, suggest that they are far more extreme in their views than they in fact are.  So, Professor Holbo finds remarks from Texas governor Rick Perry which, taken literally, would imply that Mr Perry thought that Texas should secede from the USA, that all federal programs established since 1900 should be abolished, indeed that there should be no government at all.  Mr Perry obviously does not believe any of those things, so obviously that only his committed opponents try to take him to task for making such extreme remarks.  This is not unique to Mr Perry, but is a usual pattern for right-wing US politicians.

What makes this so odd is that, while it is common for right-wing American politicians to exaggerate the radicalism of their views and for the public to realize that this is what they are doing, Professor Holbo can find no examples of their left-leaning counterparts doing the same thing.  A Democratic or leftist candidate who makes a radical-sounding statement likely means that statement to be taken at face value, and it certainly will be taken at face value by most observers.

Many commentators on American politics explain the right-wingers’ habit of making extreme-sounding statements for which they do not expect to be held responsible as an effort to move the “Overton Window.”  The Overton Window, named for the late Joseph P. Overton, is the range of ideas that the people who hold sway in a given political culture consider acceptable at a particular time.  Only ideas within the window are likely to be put into effect.  The window shifts back and forth, as some ideas that had once seemed outlandish begin to seem mainstream, while other ideas that had once seemed mainstream begin to seem outlandish.

Key to the Overton Window is the idea of contextualization.  The idea of devolving Medicare, the program that ensures that most Americans over the age of 65 will be able to pay for health care, to the states may seem outlandish to many in the USA, but compared to the idea of large states seceding from the Union it is quite moderate.  The idea of shifting the revenues of Social Security, the program that provides a guaranteed income to  most Americans over the age of 65, from current benefits to private savings accounts may seem outlandish to many in the USA, but compared with the idea of abolishing the entire welfare state it is quite moderate.  Other policies favored by powerful interests on the right end of the political spectrum may also seem outlandish, but compared with anarchism they too are quite moderate.  So, within the context of the extreme remarks for which they are not called to account, rightists can gain a hearing for policies which they do seriously advocate.


Brian Barder is a nice guy

Political blogging is not generally regarded as an activity that brings courtesy to the fore, but retired UK diplomat Brian Barder never fails to show good manners.  Though most of the topics he discusses are outside my usual circle of interests, I read him regularly, since it is such a pleasure to see politeness at work.

For example, the other day Brian Barder* posted a proposal about reforming the UK constitution. Brian Barder has considerable expertise on this subject, and his proposal is sufficiently close to his heart that he has been working to promote it for some years.  My knowledge of the UK constitution is limited to what I picked up during the years I took The Economist, and as someone who does not live in the UK my stake in its reform is close to nil.  Yet I took the liberty of posting comments (here and here) in which I expressed skepticism about the practical aspects of his plan.  Brian Barder would have been perfectly within his rights to ignore my uninformed remarks, or even to dismiss them icily, yet in fact he took the time to provide detailed responses to each of them.  In fact, he even emailed me to make sure that I knew he had done so.  Such generosity is not to be forgotten.

*I’d call him “Mr Barder,” but that isn’t actually his name.  He holds a knighthood, and so the proper courtesy title for him is “Sir Brian.”  I cannot bring myself to refer to any living person in that fashion; to me, it suggests only Monty Python.   So the only respectful way I can name him is as “Brian Barder.”  This is a shame, since Brian Barder is himself so scrupulous a user of courtesy titles that he titled his condemnation of the Oslo massacre “There are no lessons to learn from Mr Breivik.”  If Brian Barder can bring himself to give the correct title even to the murderer of dozens of helpless innocents, it seems churlish of me to withhold his title from him, yet I must, I must.

Some interesting comments by Michael Peachin about a new book on the Emperor Claudius

Proclaiming Claudius Emperor, by Lawrence Alma-Tadema

As a subscriber to Classical Journal, I regularly receive emailed reviews of new scholarly books concerning ancient Greece and Rome.  The other day, for example, they sent me Michael Peachin’s review of Claudius Caesar: Image and Power in the Early Roman Empire, by Josiah Osgood (Cambridge University Press, 2010).  The only other notice I’d seen of the book was a drearily dutiful one in The Bryn Mawr Classical Review, so I was surprised that Peachin found some exciting points in the book.  I’ll quote two of these points:

Several recent accounts of Roman emperors have sailed off on a new tack. Instead of attempting a traditional biographical interpretation of the man, and thereby also a chronicle of his reign, each of these has sought to present an emperor on his own terms, and/or to view him as he was perceived by certain groups of contemporaries (other than the elite authors, who usually monopolize discussion). Thus, Caligula was not out of his mind; he simply had no taste for playing republic, when the reality was despotism; and so, he fashioned himself overtly as a tyrant, regardless of the consequences – or perhaps precisely to elicit certain ones of those (A. Winterling, Caligula: A Biography [Berkeley, 2011]).


When I was in graduate school, I took a seminar on Roman history in which the professor horrified about half the class by spending a day arguing that Caligula was probably not a lunatic.  A few of my classmates were committed to the view of the third emperor presented in the ancient historical texts, and were appalled to hear a revision of that view; the others were committed to the idea that the only sort of history worth doing was social history that focused on the most numerous groups in a society, and so were appalled that we were spending so much time on the question of one man’s mental health.  I was not in either of those groups, but loved the day and have been defending Caligula ever since.  By the way, there’s a fine review of Winterling’s book in September’s New Criterion.  I recommend it to the general reader.

Peachin makes a point that I found especially fascinating:

Augustus, in fine, had played his part well; but as Osgood aptly demonstrates, he fated all the various players in the sequel to write their own scripts as they went. In any case, Osgood argues that Claudius quite actively tried to shape his own time as emperor, and that in doing so, he contributed materially to the development of the imperial ‘system.’ As we observe this particular emperor at work, we are also being nudged slightly away from Fergus Millar’s picture of a more passive, and perhaps generic, sort of monarch (The Emperor in the Roman World [Ithaca, 1977]): “…who the emperor was mattered” (136). Still, Osgood sees quite clearly that Claudius (or any emperor) was indeed only one person; and hence, the princeps’ direct involvement with his subjects was perforce limited. Thus, when an emperor did choose to intervene, the event was so momentous as to carry an aura of the divine. That said, Claudius was no lone actor. We are reminded, throughout, that “…much of this emperor’s image, like any other’s, was constructed in dialogue with his subjects” (317).

So, it was precisely because the emperor’s position was inherently weak that he inspired awe in his subjects.  This is just the sort of paradox I can never resist.

Many readers will be familiar with the theory that historian Arnaldo Momigliano developed and that Robert Graves popularized in his novels about Claudius.  Under this theory, Claudius wanted to phase out the principate and restore the old Republic.  Peachin explains Osgood’s view of this theory with admirable concision:

Following Momigliano’s observations (Claudius: the Emperor and His Achievement [Oxford, 1934]), Osgood stresses the fact that Augustus’ uneasy amalgam of republic and empire remained a befuddling puzzle for Claudius (indeed, for every emperor). In particular, the quasi-retention of a republican state meant that a new imperial system of government could not be crafted with anything even approaching clarity, or in any detail. Thus, to start at the start, when Gaius [a.k.a. Caligula] was murdered, and had not indicated a successor, a conclusively ‘proper’ or ‘constitutional’ way forward was nowhere to be discovered. That notwithstanding, Claudius was quickly on the throne; but then, the awkward facts of his accession, not to mention the earlier vituperation of him by members of the Augustan house (and others), seriously undercut his authority. Attempting to counter such hindrances, and just generally in his zeal to rule as he found appropriate, Claudius was too fastidious. The result was a nasty paradox: “The loftier the goals the emperor set for his administration, the more likely he was to fail, and to open himself to allegations of incompetency, or even corruption. Yet precisely to try to win loyalty and increase his prestige, Claudius had to set loftier goals than those of Tiberius, even those of Caligula” (189).

Considering that the written law in Rome in AD 41 was predicated on the idea that the Republic was still functioning, and that Claudius owed the principate to the very group of men who had just violently murdered his predecessor, it would have been quite a challenge for him to find a way to present his accession as legitimate without appealing to the idea of a restored Republic.  In no position to prosecute the assassins of Caligula, Claudius could only appeal to the right of tyrannicide, and thus evoke the two Brutuses, one who according to legend struck against the Tarquins in order to end the monarchy and establish the Republic, and the other who struck against Julius Caesar the Dictator in an attempt to prevent a new monarchy from ending the Republic.  If there were people who took this forced imposture at face value, one can hardly blame Claudius.


Did astrology originate in cities?

I wonder if the first astrologers were city-dwellers.  True, archeologists have found evidence that people who lived before the rise of cities paid close attention to the orbit of the Moon and identified constellations, and have argued that the orientations of temples and other religious structures from those days suggest that they attached a religious significance to the movements of heavenly bodies.  Those activities are hardly surprising; farmers need a calendar to plan their year, as hunter-gatherers also need to plan their expeditions for times when game will be relatively plentiful and fruit ripe for the picking.  Still, it might not be too much of a stretch to look at a society that invests heavily in maintaining and publicizing its calendar and to see a suggestion of something like what the western branch of organized Christianity used to call “natural astrology,” a set of ideas about ways in which heavenly bodies might influence the earth’s weather and various medical phenomena related to the transmission of disease.

Quite distinct from natural astrology are the various studies to which the Western Church used to refer as “judicial astrology.”  That’s the part that includes horoscopes, sun signs, and the like.  The difference matters when considering the origins of astrology; we have very ancient documents relating to the movements of heavenly bodies that seem to have some special significance and that predate the earliest references to judicial astrology by centuries.  So, I’ll use both terms in what follows.

It is sometimes said that our earliest evidence of judicial astrology comes from Mesopotamia, but that is misleading.  The nation state didn’t exist in those days; Ur and Lagash and Akkad and Babylon and the other urban centers that rose and fell in that region interacted with the political and economic systems of the countryside around them in a variety of ways, but in other ways they remained quite distinct.  It is in these cities that we find the first documents describing judicial astrology.

If astrology did arise in cities, it arose in a social environment where markets were familiar.  Its entire history would have taken place amid money, contracts, and production for exchange.  That calls into question the assumptions that we discussed last year when this xkcd appeared:

[xkcd comic; its mouseover text is quoted below]

Some people fall into the assumption that, because markets promote something called “rationality,” they must therefore favor every form of reason and disfavor every form of unreason.  However, the rationality which comes from markets is in fact something of a very narrow sort.  A month after our discussion, we noted that Shikha Dalmia had put it very well: “Markets don’t reward merit, they reward value.”  Dalmia summarizes the views of economist Friedrich Hayek:

In a functioning market, Hayek insisted, financial compensation depends not on someone’s innate gifts or moral character. Nor even on the originality or technological brilliance of their products. Nor, for that matter, on the effort that goes into producing them. The sole and only issue is a product’s value to others. Compare an innovation as incredibly mundane as a new plastic lid for paint cans with a whiz-bang, new computer chip. The painter could become just as rich as the computer whiz so long as the savings from spills that the lid offers are as great as the productivity gains from the chip. It matters not a whit that the lid maker is a drunk, wife-beating, out-of-work painter who stumbled upon this idea through pure serendipity when he tripped over a can of paint. Or that the computer whiz is a morally stellar Ph.D. who spent years perfecting his chip.

As markets are neutral as to the virtue or vice of economic actors, so too are they neutral as to the truth or falsity of the ideas that those actors bring as products for sale.  If falsehoods are in demand, falsehoods will sell; if truths are not in demand, their bearers will go begging.  The mouseover text for the xkcd represents a nod to this fact, and an attempt to wriggle out of its implications: “Not to be confused with ‘selling this stuff to OTHER people who think it works,’ which corporate accountants and actuaries have zero problems with.”  That won’t do, since it assumes that we can assign a fixed meaning to the expression “works.”  An investment advisor who believes in astrology may not be any likelier than other advisors to beat the market, but s/he may very well use that belief to “make a killing,” if s/he attracts clients who strongly value such a belief.  In that case, astrology would not “work” in the sense that quantitative analysts officially recognize, but it would make the advisor every bit as rich as it would if it did meet their definitions of success.  As for whether it makes the clients rich, well, Fred Schwed answered that one in 1940:

Once in the dear dead days beyond recall, an out of town visitor was being shown the wonders of the New York financial district.  When the party arrived at the Battery, one of his guides indicated some handsome ships riding at anchor.  He said, “Look, these are the bankers’ and brokers’ yachts.”  “Where are the customers’ yachts?,” asked the naive visitor.

Clearly, markets have not dissolved belief in astrology, any more than the continued non-existence of the customers’ yachts has discouraged people from going to brokers and bankers.  If the practice of judicial astrology first arose in cities, it may in fact be a by-product of market society.  Perhaps we might find that judicial astrology began, not simply as a more elaborate version of a natural astrology that had long been a feature of rural life, but as an attempt to understand market interactions and the power of the market.  In that case, it would qualify as a school of economics.  One may wonder whether judicial astrology would be the most absurd such school in practice today.