The internal structure of the calendar

The ancient Roman calendar gave special names to two days in each month: the Kalendae (in English, the “Calends”), which was the first day of the month; and the Idus (in English, the “Ides”), which was the fifteenth day of March, May, July, and October, but the thirteenth day of every other month.  Other days were specified by counting the days until the next Calends or Ides.  So the last day of April was pridie Kalendas Maias, the first day before the Calends of May.  There was some special significance to what came to be called the Nonae (in English, the “Nones”), that is to say, the ninth day before the Ides.  So in March, May, July, and October the Nones would fall on the seventh day of the month, and in other months they would fall on the fifth day.  Today, then, being the fifth, is the Nonae Decembris.  As far as the formal language of law and religion was concerned, this arrangement around the Calends and the Ides constituted the whole internal structure of the month.  The Romans did experiment with various forms of the week, most notably an eight-day week that determined when markets would be held.  Undoubtedly these sequences of days would also have influenced the Romans’ perceptions of time, even if they were not regularly integrated into the official calendar.
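Since the naming rule is completely mechanical, a curious reader can sketch it in a few lines of code.  What follows is only an illustration of the counting rule, not a historical reconstruction: it uses English names rather than Latin case endings, modern month lengths, and ignores leap years (and the Romans’ own doubled sixth day before the Calends of March).

```python
MONTH_NAMES = ["January", "February", "March", "April", "May", "June",
               "July", "August", "September", "October", "November", "December"]
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
# March, May, July, October: Nones on the 7th, Ides on the 15th.
LONG_MONTHS = {3, 5, 7, 10}

def ordinal(n):
    """English ordinal: 2 -> '2nd', 16 -> '16th'."""
    suffix = "th" if 10 <= n % 100 <= 20 else {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

def roman_day_name(month, day):
    """Name a day the Roman way: by its distance, counted inclusively,
    from the next Calends, Nones, or Ides."""
    nones = 7 if month in LONG_MONTHS else 5
    ides = nones + 8
    if day == 1:
        return f"the Calends of {MONTH_NAMES[month - 1]}"
    if day == nones:
        return f"the Nones of {MONTH_NAMES[month - 1]}"
    if day == ides:
        return f"the Ides of {MONTH_NAMES[month - 1]}"
    if day < nones:                  # counting toward this month's Nones
        target, count, name = "Nones", nones - day + 1, MONTH_NAMES[month - 1]
    elif day < ides:                 # counting toward this month's Ides
        target, count, name = "Ides", ides - day + 1, MONTH_NAMES[month - 1]
    else:                            # counting toward next month's Calends
        target = "Calends"
        count = DAYS_IN_MONTH[month - 1] - day + 2
        name = MONTH_NAMES[month % 12]
    if count == 2:
        return f"pridie: the day before the {target} of {name}"
    return f"the {ordinal(count)} day before the {target} of {name}"

print(roman_day_name(12, 5))   # the Nones of December
print(roman_day_name(4, 30))   # pridie: the day before the Calends of May
print(roman_day_name(3, 17))   # the 16th day before the Calends of April
```

The “+ 2” in the Calends branch is the same inclusive count at work: the Romans counted both the day itself and the Calends, along with every day between them.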

I bring this up because of an xkcd strip that appeared a week ago today.  Cartoonist Randall Munroe used Google’s Ngram search to tabulate the number of occurrences of each date by its name (ordinal number + month name) in English-language books since 2000.  In months other than September, the 11th is mentioned substantially less often than any other date.  It has been that way since long before 9/11, and no one seems to know why.

His results suggest that our months do have some kind of internal structure beyond what our usual calendars illustrate, which simply display numbers in a grid of weeks.  As the mouseover text points out, in eleven of twelve months the eleventh is mentioned much less often than any other date.  The exception, of course, is September, where references to the events of 11 September 2001 propel that date to the very top of the list of frequently named dates.  Yet the pattern was already well established before 2001, and there is no obvious explanation for it.

Some variations in frequency are relatively easy to explain.  The first of the month is usually a day when many bills and reports are due, and so the first is among the most named dates of each month.  Holidays are also prominent; notice, though, that the eleventh of November, Veterans Day in the USA and Remembrance Day in the countries of the Commonwealth, is no bigger on Mr Munroe’s chart than the little elevenths of the other months.  The fifteenth of April is quite prominent; that has traditionally been the day when income taxes were due in the USA.  But, in addition to the mystery of the obscured elevenths, we also notice that the fourth and the nineteenth are bigger than average in most months.  Why would that be?  Perhaps it doesn’t mean anything, but perhaps there is some explanation that would become obvious if we were in the habit of thinking of calendars, not as the grids of weeks that are usually tacked on walls in the West, but as structures built around major days, structures like those the ancient Romans used.  Too bad we can’t raise some ancient Romans from the dead and put them in charge of investigating the question; their perspective might make for a most fruitful study.  I suppose the best substitute would be classical scholars who have spent time studying the ancient Roman calendar.

A possible etymology of the name “Acilius”

I’ve long used “Acilius” as my screen-name, in tribute to Gaius Acilius, a Roman historian who was alive and doing interesting things in 155 BC.  It never occurred to me that anyone would know the etymology of the name “Acilius”; it was quite an old name among the Romans, and they did not really keep track of that sort of thing in those days.

A couple of months ago, I happened onto a post on the blog “Paleoglot” which led me to wonder if there might not be a way to explore the question of where the gens Acilia found its name.  Blogger Glen Gordon analyzes various occurrences of a stem acil- in Etruscan.  In his conclusion, Mr Gordon offers these definitions to cover the occurrences he has discussed:

I think we could define the English translations of the whole word family much better as part of a grander morphological design:

*aχ (v.) = ‘to do, to make, to cause’
> acas (v.) = ‘to craft, to make’
> acil (n.) = ‘thing, act; rite, holy service’ (> acil (v.) = ‘to do rites, to worship’)

The implied underlying verb here, *aχ, reminds me very much of the Indo-European *h₂eǵ-, as if borrowed from Latin agere ‘to drive, lead, conduct, impel’.

This intrigues me very much.  If the Etruscans borrowed such a word from Latin, that would suggest that the usual story about the relationship between Etruscan religion and Roman religion is misleading.  Rather than a situation in which the Etruscans molded the religious practices and ideas of their subjects, the early Romans, the presence of a Latinate word in Etruscan religious vocabulary would suggest a reciprocal relationship between the hegemonic Etruscans and their vassals.

On the other hand, if the similarity between acil- and agere is a mere coincidence, another possibility presents itself.  This is where the Acilii come to mind.  Perhaps the name “Acilius” is a combination of the Etruscan root acil-, with its sense of performing holy service, and the Latinate suffix -ius.  A fairly exact equivalent could be suggested, as chance would have it, in the English name “Priestley,” where the borrowed word priest is combined with the indigenous suffix -ley.  So perhaps all these years I’ve been unwittingly associating myself with such distinguished polymaths as Joseph Priestley and J. B. Priestley.

Walking in Roman Culture

For years I’ve had it in the back of my mind to prepare a study called Posture and Gait in Classical Antiquity.  We have a variety of sources that could help us reach conclusions about how various types of people tended to stand and move in ancient times.  There are literary descriptions of posture and gait, visual artworks depicting people standing and walking, clothes that required a particular posture and gait if they were to stay on the wearer’s body, shoes that exhibit particular patterns of wear, buildings with entryways that accommodate some strides better than others.  Were such a study completed, it could open the door to investigations in topics ranging from dance to infantry operations to architecture to the status of the disabled and the expression of social class in antiquity.

I’ve never got around to beginning such a study, and now I find that someone has had a similar idea.  Timothy M. O’Sullivan of Trinity University has written a book called Walking in Roman Culture.  According to a review by Alana Lukes that circulated on an email list of members of the Classical Association of the Middle West and South, a professional organization of American classicists, Professor O’Sullivan’s book presents “a compilation of citations from ancient sources which mention the physical activity of walking by the ancient Romans,” and does not go into depth on any other sort of evidence.  Still, that’s quite enough material for one book.  The magnum opus I have occasionally toyed with the idea of creating would be the work of decades, and Professor O’Sullivan appears to be quite a young fellow.  So perhaps he’ll end up writing such a thing.

The New York Review of Books, 22 December 2011

I subscribed to The New York Review of Books for years and years.  I kept renewing because interesting pieces would appear in it just as my subscription was about to expire.  Then it would go back to its usual unrelieved tedium for another eleven and a half months.  Anyway, I saw a copy of the 22 December 2011 issue in a magazine exchange rack the other day.  I picked it up.  I’m glad I don’t subscribe anymore; if I did, this would have been the issue that led me to renew.

Michael Tomasky reviews sometime presidential hopeful Herman Cain’s campaign autobiography.  This sentence intrigued me:  “While some of us may scoff at a man whose claims to fame include peddling Whoppers (Cain turned around the Philadelphia regional division of Burger King) and pizzas (he was for ten years CEO of Godfather’s Pizza, which he also made profitable) to an increasingly obese nation with less and less need of them, conservatives find virtually any form of private-sector achievement admirable.”  In the USA, academics, journalists, and others in the nonprofit world are routinely challenged to justify their existence in terms of the value of their services to society at large.  Success in business, by contrast, is generally accepted as self-justifying.  I’ve lived in the USA long enough to find it a bit jarring, in fact, to hear Tomasky step outside this paradigm and treat business as an activity like any other.

Ingrid Rowland reviews Robert Hughes’ Rome: A Cultural, Visual, and Personal History.  Rowland meditates on the coexistence of Rome’s historical patrimony and the dominance of mafia groups in the city’s business life.  I wonder if the two things can be separated.  The only cities I can think of that have decisively broken mafia control are Las Vegas and New York, and in each case the slayer of the mafia was the unfettered multinational corporation.  That’s hardly an entity that would be likely to preserve the signs of eternity in the Eternal City.

Freeman Dyson reviews Daniel Kahneman’s Thinking, Fast and Slow.  Kahneman’s theme, Dyson tells us, is the power of “cognitive illusions,” which he defines as “false belief[s] that we intuitively accept as true.”  Kahneman began his career by identifying what he calls the “illusion of validity,” the idea that the conclusions which people intuitively draw when faced with questions relating to topics about which they are well-informed are likely to be true.  As a very young researcher in the Israeli army in 1955, Kahneman was called upon to evaluate and, eventually, to replace the system the army was then using to place recruits in jobs.  That system was based on the opinions that experienced officers formed after brief, informal interviews with recruits.  Kahneman found that those opinions had no correlation with the recruits’ eventual performance.  He then designed a brief factual questionnaire for recruits to fill out and a mechanical method of analyzing the results of that questionnaire, a method which turned out to be quite accurate at predicting recruits’ performance, and which has been the basis of assignments in the Israel Defense Forces ever since.  Dyson follows this story with one from his own experience in the Royal Air Force during World War Two, when changes that would have made bombers likelier to complete their missions were blocked by the unwillingness of their crews to admit a fact that statistical analysis made achingly plain: that bombers carrying experienced crews were just as likely to be shot down as bombers carrying inexperienced crews.  The illusion of validity was at work here as well; the idea that they were acquiring expertise that they would be able to use to save themselves gave the crews a self-confidence that they would not exchange for safer planes.

Dyson explains the title of Kahneman’s book in terms of his thesis that cognition should be analyzed in terms of two systems, which Kahneman calls System One and System Two.  System One, our inheritance from our early primate ancestors, is fast and inaccurate; System Two, the product of our neocortex, is much more accurate but very slow.  In the fast-changing conditions of life in the arboreal canopies where our distant ancestors lived, it was far more important to be fast than it was to be right.  If a predator was coming, immediate movement in any direction was likelier to lead to safety than was long-delayed movement in the ideal direction.  Indeed, the RAF crews who resisted the changes Dyson and his fellow analysts could use statistics to recommend found themselves in a very similar environment to that in which our lemur-like forebears darted about, and so could hardly be blamed for favoring System One reasoning over System Two.

Dyson puts in a good word for two thinkers whom Kahneman does not mention, William James (whom Rowland also mentions, for his telling in The Varieties of Religious Experience of the story of how Alphonse Ratisbonne converted to Christianity) and Sigmund Freud.  Dyson argues that Freud anticipated many of Kahneman’s key concepts, notably availability bias (that is, “a biased judgment based on a memory that happens to be quickly available. It does not wait to examine a bigger sample of less cogent memories.”)  Here’s what Dyson says about James:

James was a contemporary of Freud and published his classic work, The Varieties of Religious Experience: A Study in Human Nature, in 1902. Religion is another large area of human behavior that Kahneman chooses to ignore. Like the Oedipus complex, religion does not lend itself to experimental study. Instead of doing experiments, James listens to people describing their experiences. He studies the minds of his witnesses from the inside rather than from the outside. He finds the religious temperament divided into two types that he calls once-born and twice-born, anticipating Kahneman’s division of our minds into System One and System Two. Since James turns to literature rather than to science for his evidence, the two chief witnesses that he examines are Walt Whitman for the once-born and Leo Tolstoy for the twice-born.

Freud and James were artists and not scientists. It is normal for artists who achieve great acclaim during their lifetimes to go into eclipse and become unfashionable after their deaths. Fifty or a hundred years later, they may enjoy a revival of their reputations, and they may then be admitted to the ranks of permanent greatness. Admirers of Freud and James may hope that the time may come when they will stand together with Kahneman as three great explorers of the human psyche, Freud and James as explorers of our deeper emotions, Kahneman as the explorer of our more humdrum cognitive processes. But that time has not yet come.

Lorrie Moore reviews Werner Herzog’s film Into the Abyss: A Tale of Death, a Tale of Life.  This bit, describing the death house ordinary, stuck in my mind:

The reverend is against the death penalty but in thinking of it before the camera he veers off onto an anecdote about a golf trip and almost hitting a squirrel that “had stopped in the middle of the cart path,” and we can see how when pressed to illuminate its own contradictions the human mind can go on the fritz.  This may really be Herzog’s theme.  There is much strain and helplessness felt by the functionaries asked to dole out this ritualized punishment.

I can’t help but wonder what Kahneman would make of these flailings of a mind “on the fritz.”  Moore describes another of Herzog’s interview subjects, a former executioner named Fred Allen, who had to quit his job and forswear his pension because he couldn’t stop visualizing the faces of the hundreds of condemned men whose lives he had ended in the death chamber at Huntsville prison.  That sounds like a cognitive illusion worth cultivating in everyone inclined to set up shop in the killing business.

Kwame Anthony Appiah reviews two new books about W. E. B. DuBois, Lawrie Balfour’s Democracy’s Reconstruction: Thinking Politically with W. E. B. DuBois and Robert Gooding-Williams’ In the Shadow of DuBois: Afro-Modern Political Thought in America.  Appiah notes two facts that impede a proper study of DuBois.  Again, I wonder what label Kahneman would put on these cognitive illusions.  The first is that DuBois’ great longevity tempts us to see him as a more nearly contemporary figure than he in fact was.  His death date, 27 August 1963, is in many ways less illuminating of his thought than is his birth date, 23 February 1868.  The second is that he still ranks as a sort of patron saint of intellectual achievement among African Americans, and so any attention to his limitations may be taken as an attack on all such achievement.  Appiah acclaims Gooding-Williams and Balfour for having the courage to venture into these sacred precincts and do scholarly work there.

According to Appiah, Gooding-Williams finds three ideas at the heart of DuBois’ political thought: first, the idea that politics is in essence the exercise of command over a community.  Second, the idea that this command is rooted in and to some extent tempered by “political expressivism,” a process by which those who are to be led recognize as their leaders those individuals who best express what they regard as the essence of their common life, what DuBois meant by “soul” in the title of The Souls of Black Folk.  Third, the idea that the main political issue facing African Americans was social exclusion, which in turn resulted from the twin evils of racial prejudice among whites and “the cultural (economic, educational, and social) backwardness of the Negro.”

These three points set DuBois at odds with Frederick Douglass, who saw healthy politics as essentially a matter of collaboration among equals rather than a matter of command and control; who rejected nationalistic conceptions of leadership as collective self-expression; and who saw white supremacy, that negation of collaborative politics, as an evil quite apart from the social exclusion of African Americans.  Gooding-Williams, Appiah argues, uses Douglass as a mouthpiece for his own democratic vision of politics, one in which leaders must listen to the actual voices of their followers, rather than to their collective soul.

Appiah ends with an interesting question about DuBois’ archnemesis, Booker T. Washington:

Could it be right to act like Booker T. Washington, deferring a demand for justice for yourself if that would bring justice more swiftly for your descendants?  Or is there something so discreditable, so slavish, in acceding to these injustices that it is better to resist them, whether or not your resistance brings forward the date when they will cease?

My inclination is to ask how we could know that any given act of deferring a demand for justice would in fact bring justice more swiftly for our descendants.  From a God’s-eye perspective in which we could know with certainty that this was so, the question would be one we could analyze coolly, rationally, in what Kahneman might call a System Two manner.  But given the limitations on what we can in fact know about the future, surely the best course of action would be to set an example of resistance, however futile it might be in the short term, in the hope, however ill-founded, that our descendants might hear of it and be inspired to emulate it.  Both our own action and the action we would hope to inspire in our descendants under that scenario would be the results of System One reasoning, bold and drastic and very likely to be misguided; but I don’t see how, under real-world conditions, a policy generated solely by System Two reasoning could lead to anything other than a situation in which full equality between whites and African Americans remains forever in the future.

Malise Ruthven reviews Hamid Dabashi’s Shi’ism: A Religion of Protest.  Ruthven talks a bit about the paradox that results when Westerners compare the Sunni/Shia split to the Protestant/Catholic split.  Shias, with their hierarchy, shrines, and veneration of saints, are often compared to Roman Catholics, while Sunnis, with their many sects, internationalist themes, and iconoclastic tendency, are often compared to Protestants.  Yet Shiism is at its heart a protest against Sunni ascendancy.  So at moments it is appealing to compare Shiism with Protestantism.

This discussion obviously doesn’t get one very far, since the very definition of an analogy is a comparison between things that are in other respects dissimilar.  In some ways the Shias are a bit like the Catholics, in other ways they are a bit like the Protestants, in a great many ways they aren’t much like either.

Especially interesting to me was a description of Dabashi’s rejection of Max Weber’s characterization of Muhammad (Appiah also mentions Weber, commenting on Weber’s admiration for DuBois and his skepticism about democracy).  For Dabashi, Weber’s view of Muhammad as an “ethical prophet,” rather than an “exemplary prophet,” is too schematic and conceals the ideological difference at the heart of the Sunni/Shia split.  Dabashi argues that the two branches have different ways of dealing with what Weber would call the exemplary character of the prophet.  Sunnis, says Dabashi, tend to believe that shari’ah law can absorb the prophet’s example and teach the community to cultivate virtue, while Shias favor the view that a living imam must embody his example in the presence of the community if its members are to know what is virtuous.

Dyson says that “religion does not lend itself to experimental study,” and so remains outside of Kahneman’s focus.  Nonetheless, I wonder how Kahneman might analyze this difference.  It sounds to me like the Sunni ideal is a System Two prophet, Muhammad converted from living man into the rational processes of the law.  The Shia ideal, by contrast, sounds like a System One prophet, Muhammad who gave us a line of successors who are not themselves prophets, but who share the prophet’s intuitive understandings of right conduct and arouse the same understandings in us by the influence of their example.

Ruthven mentions the political sociologist Sami Zubaida, who has written a number of things about the contradictions in the Iranian political system that stem from the fact that that country’s constitution defines sovereignty as stemming both from God and from the people.  He also mentions Dabashi’s admiration for Philip Rieff.

Tim Parks and Per Wästberg exchange views on the question, “Do We Need the Nobel?”  Considering the Nobel Prize for Literature, Mr Parks takes on a job that strikes me as absurdly easy: proving that no group of eighteen people can be taken seriously as the judges of all the world’s contemporary literatures.  Mr Wästberg can respond only by describing the lengths to which he and his fellow members of the Swedish Academy go in attempting the impossible task of giving fair consideration to all the living writers in all the languages of the world.  Mr Parks receives this description with good grace, but sees in it no rebuttal to his main point.

Some interesting comments by Michael Peachin about a new book on the Emperor Claudius

Proclaiming Claudius Emperor, by Lawrence Alma-Tadema

As a subscriber to Classical Journal, I regularly receive emailed reviews of new scholarly books concerning ancient Greece and Rome.  The other day, for example, they sent me Michael Peachin’s review of Claudius Caesar: Image and Power in the Early Roman Empire, by Josiah Osgood (Cambridge University Press, 2010).  The only other notice I’d seen of the book was a drearily dutiful one in The Bryn Mawr Classical Review, so I was surprised that Peachin found some exciting points in it.  I’ll quote two of those points:

Several recent accounts of Roman emperors have sailed off on a new tack. Instead of attempting a traditional biographical interpretation of the man, and thereby also a chronicle of his reign, each of these has sought to present an emperor on his own terms, and/or to view him as he was perceived by certain groups of contemporaries (other than the elite authors, who usually monopolize discussion). Thus, Caligula was not out of his mind; he simply had no taste for playing republic, when the reality was despotism; and so, he fashioned himself overtly as a tyrant, regardless of the consequences – or perhaps precisely to elicit certain ones of those (A. Winterling, Caligula: A Biography [Berkeley, 2011]).

When I was in graduate school, I took a seminar on Roman history in which the professor horrified about half the class by spending a day arguing that Caligula was probably not a lunatic.  A few of my classmates were committed to the view of the third emperor presented in the ancient historical texts, and were appalled to hear a revision of that view; the others were committed to the idea that the only sort of history worth doing was social history focused on the most numerous groups in a society, and so were appalled that we were spending so much time on the question of one man’s mental health.  I was not in either of those groups, but loved the day and have been defending Caligula ever since.  By the way, there’s a fine review of Winterling’s book in September’s New Criterion.  I recommend it to the general reader.

Peachin makes a point that I found especially fascinating:

Augustus, in fine, had played his part well; but as Osgood aptly demonstrates, he fated all the various players in the sequel to write their own scripts as they went. In any case, Osgood argues that Claudius quite actively tried to shape his own time as emperor, and that in doing so, he contributed materially to the development of the imperial ‘system.’ As we observe this particular emperor at work, we are also being nudged slightly away from Fergus Millar’s picture of a more passive, and perhaps generic, sort of monarch (The Emperor in the Roman World [Ithaca, 1977]): “…who the emperor was mattered” [136]. Still, Osgood sees quite clearly that Claudius (or any emperor) was indeed only one person; and hence, the princeps’ direct involvement with his subjects was perforce limited. Thus, when an emperor did choose to intervene, the event was so momentous as to carry an aura of the divine. That said, Claudius was no lone actor. We are reminded, throughout, that “…much of this emperor’s image, like any other’s, was constructed in dialogue with his subjects” (317).

So, it was precisely because the emperor’s position was inherently weak that he inspired awe in his subjects.  This is just the sort of paradox I can never resist.

Many readers will be familiar with the theory that historian Arnaldo Momigliano developed and that Robert Graves popularized in his novels about Claudius.  Under this theory, Claudius wanted to phase out the principate and restore the old Republic.  Peachin explains Osgood’s view of this theory with admirable concision:

Following Momigliano’s observations (Claudius: the Emperor and His Achievement [Oxford, 1934]), Osgood stresses the fact that Augustus’ uneasy amalgam of republic and empire remained a befuddling puzzle for Claudius (indeed, for every emperor). In particular, the quasi-retention of a republican state meant that a new imperial system of government could not be crafted with anything even approaching clarity, or in any detail. Thus, to start at the start, when Gaius [a.k.a. Caligula] was murdered, and had not indicated a successor, a conclusively ‘proper’ or ‘constitutional’ way forward was nowhere to be discovered. That notwithstanding, Claudius was quickly on the throne; but then, the awkward facts of his accession, not to mention the earlier vituperation of him by members of the Augustan house (and others), seriously undercut his authority. Attempting to counter such hindrances, and just generally in his zeal to rule as he found appropriate, Claudius was too fastidious. The result was a nasty paradox: “The loftier the goals the emperor set for his administration, the more likely he was to fail, and to open himself to allegations of incompetency, or even corruption. Yet precisely to try to win loyalty and increase his prestige, Claudius had to set loftier goals than those of Tiberius, even those of Caligula” (189).

Considering that the written law in Rome in AD 41 was predicated on the idea that the Republic was still functioning, and that Claudius owed the principate to the very group of men who had just violently murdered his predecessor, it would have been quite a challenge for him to find a way to present his accession as legitimate without appealing to the idea of a restored Republic.  In no position to prosecute the assassins of Caligula, Claudius could only appeal to the right of tyrannicide, and thus evoke the two Brutuses: the one who according to legend struck against the Tarquins in order to end the monarchy and establish the Republic, and the other who struck against Julius Caesar the Dictator in an attempt to prevent a new monarchy from ending the Republic.  If there were people who took this forced imposture at face value, one can hardly blame Claudius.

Two items of interest to Classics types

When the world was young and I was in grad school, many of my classmates went to Rome to hang out with Father Reginald Foster.  Reggie, as they all called him, is an American priest who at that time was in charge of translating official Vatican documents into Latin.  His schedule was light in the summer, so Reggie ran a summer institute in conversational Latin.  Granted, there aren’t any native speakers of Latin around to converse with, but there is a substantial body of permanently interesting Latin literature, and it is easier to read the language if you can also speak it.

Reggie moved back to Milwaukee after Pope John Paul II died.  He teaches conversational Latin there from time to time.  No future generations of graduate students will be studying under him in Rome, but two current graduate students have revived the Rome summer program.  They call it the Paideia Institute.  Slate magazine ran a piece about it recently.

David Graeber

Also of keen interest to classicists is this recent interview that economic anthropologist David Graeber granted to the website Naked Capitalism.  Graeber summarizes Adam Smith’s hypothesis that money originated as an improvement on barter systems that had prevailed before its adoption.  He then points out that in the 235 years since Smith published that hypothesis in The Wealth of Nations, observers have examined thousands of cultures in search of examples of pre-monetary barter economies, and that they have yet to find one.  Graeber concludes that Smith’s hypothesis is thereby defeated.  Societies which have not invented money do not organize markets around barter; they do not organize markets at all.  Money and markets arise together, and barter becomes widespread only when currency systems collapse.  Non-monetary societies distribute goods and services, not through markets, but through hierarchies in which obligations are based on force.  The king or chief or whatever he is has what he has because everyone else is indebted to him for protection and status, and they have what they have because of their relations with him.  When multiple authorities lay claim to the same person, they need a way of sorting out whose claim comes first and which authority is entitled to demand what deference or service.  Sometimes they develop a way of sorting those claims that involves quantifying them and making them transferable.  Once claims on a person’s deference or service can be quantified and transferred, there is a need for tokens to signify the quantification and contracts to enforce the transfer.  That is to say, there is money, and with it the dawn of market society.

Graeber makes some remarks that are similar to points that come up in some classes I teach.  For example:

Since antiquity the worst-case scenario that everyone felt would lead to total social breakdown was a major debt crisis; ordinary people would become so indebted to the top one or two percent of the population that they would start selling family members into slavery, or eventually, even themselves.

Well, what happened this time around? Instead of creating some sort of overarching institution to protect debtors, they create these grandiose, world-scale institutions like the IMF or S&P to protect creditors. They essentially declare (in defiance of all traditional economic logic) that no debtor should ever be allowed to default. Needless to say the result is catastrophic. We are experiencing something that to me, at least, looks exactly like what the ancients were most afraid of: a population of debtors skating at the edge of disaster.

And, I might add, if Aristotle were around today, I very much doubt he would think that the distinction between renting yourself or members of your family out to work and selling yourself or members of your family to work was more than a legal nicety. He’d probably conclude that most Americans were, for all intents and purposes, slaves.

When I’m talking to a class, I’m rather more emphatic than Graeber in saying that in reaching this conclusion Aristotle was a man of his time, and that our view of wage labor as a form of freedom may be as legitimate in its own way as was the Greek view of wage labor as a form of slavery.  That difference in views stems partly from the fact that so many slaves in ancient Greek cities were paid wages, and that those who labored side by side with free people in big workshops were paid exactly the same wages as those (nominally) free people, while American slaves were generally denied access to money.  Still, I do have a lecture that unnerves my students when it ends with my remark that Aristotle would not have thought that we moderns have abolished slavery, but that we have abolished freedom.

I can’t resist quoting another bit of Graeber’s interview.  After he derides the idea of money as a development subsequent to a barter economy, we have this exchange:

PP: You’d be forgiven for thinking this was all very Nietzschean. In his ‘On the Genealogy of Morals’ the German philosopher Friedrich Nietzsche famously argued that all morality was founded upon the extraction of debt under the threat of violence. The sense of obligation instilled in the debtor was, for Nietzsche, the origin of civilisation itself. You’ve been studying how morality and debt intertwine in great detail. How does Nietzsche’s argument look after over 100 years? And which do you see as primal: morality or debt?

DG: Well, to be honest, I’ve never been sure if Nietzsche was really serious in that passage or whether the whole argument is a way of annoying his bourgeois audience; a way of pointing out that if you start from existing bourgeois premises about human nature you logically end up in just the place that would make most of that audience most uncomfortable.
In fact, Nietzsche begins his argument from exactly the same place as Adam Smith: human beings are rational. But rational here means calculation, exchange and hence, trucking and bartering; buying and selling is then the first expression of human thought and is prior to any sort of social relations.

But then he reveals exactly why Adam Smith had to pretend that Neolithic villagers would be making transactions through the spot trade. Because if we have no prior moral relations with each other, and morality just emerges from exchange, then ongoing social relations between two people will only exist if the exchange is incomplete – if someone hasn’t paid up.

But in that case, one of the parties is a criminal, a deadbeat and justice would have to begin with the vindictive punishment of such deadbeats. Thus he says all those law codes where it says ‘twenty heifers for a gouged-out eye’ – really, originally, it was the other way around. If you owe someone twenty heifers and don’t pay they gouge out your eye. Morality begins with Shylock’s pound of flesh.
Needless to say there’s zero evidence for any of this – Nietzsche just completely made it up. The question is whether even he believed it. Maybe I’m an optimist, but I prefer to think he didn’t.

Anyway it only makes sense if you assume those premises; that all human interaction is exchange, and therefore, all ongoing relations are debts. This flies in the face of everything we actually know or experience of human life. But once you start thinking that the market is the model for all human behavior, that’s where you end up with.

If however you ditch the whole myth of barter, and start with a community where people do have prior moral relations, and then ask, how do those moral relations come to be framed as ‘debts’ – that is, as something precisely quantified, impersonal, and therefore, transferrable – well, that’s an entirely different question. In that case, yes, you do have to start with the role of violence.

Nietzsche may once have been overrated as a political thinker, but I believe that he is now seriously underrated in that wise.  So the bit above made me happy.

Gaius Acilius on Praise and Reproof

I teach in a university classics department.  A few years ago, a senior colleague of mine received a “Teacher of the Year” award.  I congratulated him, then asked some questions.  After I asked him who gave the award, how they chose the recipient, and what benefits came with it, I asked him how he would react if the same people had used the same criteria to decide that he was a bad teacher, to publicize this decision, and to fine him.  Would he accept this judgment?  He did not think he would.  So, how could he justify accepting their judgment when it benefited him, if he would not accept that same judgment were it to his disadvantage?  He agreed that this was a good question.  Of course, he went on to accept the award just the same.

A similar question may have preyed on the mind of my namesake, Gaius Acilius.

What is a word for “grandparents of the same child”?

A simplified chart of Latin kinship terms

At Language Log, a post asks whether many English speakers use the expression “brothers-in-law” to refer to men whose relationship is that their wives are sisters and “sisters-in-law” to refer to women whose relationship is that their husbands are brothers.  So would it be idiomatic to say that my wife and my brother’s wife are one another’s sisters-in-law?  Commenters on that post have mentioned the poverty of English vocabulary in kinship terms as compared to other languages.  One linked to a Wiktionary article about the expression “co-mother-in-law,” an article which ends with sixteen examples of languages which do have words in widespread use that mean “the mother of one spouse, in relation to the parents of the other spouse.”

For a long time it’s struck me as strange that English has so few kinship terms.  About 14 years ago, when I was in graduate school, I read an article that was then already rather old, “What does Latin tell us about the Romans?” by Carl R. Trahman.  If you have access to JSTOR, here’s a link to Trahman’s article; if you don’t, you can go to the nearest research library, look up volume 67, number 3, of The Classical Journal (February/March 1972,) and turn to pages 240-250.  Here’s one thing Latin told Trahman about the Romans:

Perhaps the most telling evidence, in the case of the Romans, that the vocabulary of a language will lead to an understanding of its users lies in the terminology of Latin for family relationships.  In English we are content to speak of “in-laws,” of “cousins once removed,” of “uncles on the father’s side.”  Latin has specific words for all of these and for dozens more such affinities.  Your great-great-grandmother is your abavia; your uncle on your father’s side is patruus, but on your mother’s side is avunculus.  Your mother-in-law is socrus.  The hated step-mother is noverca and the stepson privignus.  Does your husband have a sister?  The word is glos.  Does he have a brother?  The word is levir.  In such matters, the Romans truly had a word for it.  They actually possessed a word to denote the relationship of two women married to two brothers: they were ianitrices.  Now what is the significance of such precision?  It indicates the immense importance of the family in Roman life.  If we had no other testimony of this feeling for family, which can hardly be overstated, this amazingly rich terminology would be more than enough.  It is interesting that two of the phrases used for our word prejudice, for which as I have said Latin had no proper word, are iudicia iam facta domo (Cicero) and domo adlata opinio (Seneca.)  They suggest family councils at which policy was determined and the stand to be taken by the gens on public issues yet to be debated.  (page 244)

Writing in the early 1970s, Trahman devoted a fair bit of space to the Sapir-Whorf hypothesis, the idea that the structural limitations of a given language are in some way commensurate with the range of thoughts available to the speakers of that language.  In its most extreme form, the Sapir-Whorf hypothesis can be identified with the view that a people whose language lacks a word for a given concept must therefore lack that concept.  Trahman himself clearly does not go to that extreme.  In the passage quoted above, he identifies the concept expressed by Latin phrases like iudicia iam facta domo and domo adlata opinio with the concept that we express by the single word prejudice.  So he believes that the Romans had the concept, even though they lacked a specific word for it.  Trahman does seem to suggest that something like the Sapir-Whorf hypothesis was already familiar to the ancients.  He quotes the Roman poet Ennius, who said that because he spoke Latin, Greek, and Oscan he had three hearts; Trahman elaborates, “The word he used was cor, which in his day meant not only ‘heart’ but ‘mind and soul’ as well.  So [Ennius had] it all, and it could not be improved upon.”

So we can’t say that English speakers have a poorer set of concepts for family relationships than did Latin speakers just because we have so much poorer a vocabulary through which to express those concepts.  What we can say is that the Romans probably talked about those relationships more often than we do.  This isn’t surprising.  Most people in the developed world today live in nuclear family households and see members of their extended families only occasionally, so it isn’t especially likely on any given day that you will have to explain that someone is your spouse’s sibling’s sibling’s spouse.  If it takes several words and repeated case-endings to identify that person, you probably won’t lose much time over the course of a long life.  But in the ancient world it was more usual for several generations of a family to live under the same roof, grandparents and their siblings, parents and their siblings, one’s own siblings and their spouses and children and grandchildren, one’s own spouse and children and grandchildren, and so on, and to spend all day working side by side with other members of that population.  So of course you would need single words that could express those relationships quickly and easily.  Not only might it become tiresome to have to speak a lot of words every time you had to clarify a family relationship, but it would certainly be taxing to have to listen to a lot of convoluted phrases connecting one kinship title to another.  If you tell me that some person is your spouse’s sibling’s sibling’s spouse, I’m likely to come away thinking that the person is your sibling’s spouse’s sibling’s spouse unless I concentrate.  If we have a word like ianitrix in common, we can relax.
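The contrast drawn above, a chain of relationship steps in English against a single word in Latin, can be sketched as a simple lookup table.  The Latin terms below are the ones quoted from Trahman (ianitrix being the singular of his ianitrices); the path encoding and the function are my own illustration, not any standard linguistic notation.

```python
# A toy lookup from kinship paths (tuples of relationship steps,
# read left to right from ego) to single Latin terms.
# Terms are those quoted from Trahman; the list is far from exhaustive.
LATIN_KINSHIP = {
    ("father", "brother"): "patruus",       # paternal uncle
    ("mother", "brother"): "avunculus",     # maternal uncle
    ("spouse", "mother"): "socrus",         # mother-in-law
    ("father", "wife"): "noverca",          # stepmother
    ("wife", "son"): "privignus",           # stepson
    ("husband", "sister"): "glos",          # husband's sister
    ("husband", "brother"): "levir",        # husband's brother
    ("husband", "brother", "wife"): "ianitrix",  # wife of husband's brother
}

def latin_term(*path):
    """Return the single Latin word for a kinship path, if one exists."""
    return LATIN_KINSHIP.get(tuple(path), "(no single term)")

print(latin_term("father", "brother"))           # patruus
print(latin_term("husband", "brother", "wife"))  # ianitrix
# English compounds with no one-word Latin entry in this toy table:
print(latin_term("spouse", "sibling", "sibling", "spouse"))  # (no single term)
```

The point of the sketch is simply that the Latin entries collapse a multi-step path into one token, while English (and this toy table) must fall back on spelling the path out.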

So why does it strike me as strange that we have fewer kinship terms in English than the Romans had in Latin?  For one thing, because English has such a huge vocabulary overall; for another, because it doesn’t seem that English was particularly rich in kinship terms even when most English speakers lived in extended family groups.  But most of all because there are a number of relationships that really are quite important to English speakers yet have no simple names in English.  For example, I would think it safe to say that most grandparents would agree that they have something important in common with their grandchildren’s other grandparents.  Yet they have no single word to express that relationship.  And a Google search for “grandparents of the same child” brings up just two hits, as I write this.  “Co-grandparent” produces hits for laws concerning grandparents in the state of Colorado (postal abbreviation CO,) for “The Grandparent Company,” and for a number of uses of “co-grandparent” meaning something like “honorary grandparent.”

At about the same time Trahman was writing his article, Archie C. Bush published an article in the journal Ethnology under the title “Latin kinship extensions: An interpretation of the data.”  Here’s a JSTOR link; the citation is Ethnology, volume 10, number 4 (October 1971), pages 409-432.  Bush opens with a list of Latin names for 110 family relationships, sorted into six grades of consanguinity.  The system of grades derives from Roman law; a text attributed to the jurist Julius Paulus listed 448 family relationships.

Strikingly, there is no Latin word for “grandparent of the same child” on Bush’s list or in Paulus, nor can I come up with such a word in any of the dictionaries to which I have ready access at the moment.  This is really amazing.  Most marriages in the ancient world were arranged by the couple’s parents in order to build a kinship relation between one household and another household.  In that sense, one could say that the basis of marriage in those days was the hope of the parents of the bride and groom that they would be bound together as grandparents of the same child.  One could hardly imagine a more highly valued relationship.  Yet it was a relationship with no name of its own.

The Nation, 9 November 2009

For me, the highlight of this issue was a review of Mary Beard’s The Fires of Vesuvius: Pompeii Lost and Found.  Beard’s “down to earth portrait of Pompeii” is informed by her grasp of “the latest research in demography, the history of Roman politics, architecture, ancient economics, feminist and post-colonial studies.”

The same issue includes a number of articles about the war in Afghanistan.  As the editors summarize this symposium:

The principal rationale for America’s expanding military commitment in Afghanistan is that a Taliban takeover there would directly threaten US security because it would again become a safe haven for Al Qaeda to plot attacks against the United States. But the essays by Stephen Walt and John Mueller strongly refute that assumption, pointing out that a Taliban victory would not necessarily mean a return of Al Qaeda to Afghanistan, and that in any case the strategic value of Afghanistan and Pakistan as base camps for Al Qaeda is greatly exaggerated and can be easily countered.

Similarly, proponents of sending more troops to Afghanistan argue that Taliban success would embolden global jihadists everywhere and destabilize Pakistan in particular. Yet, as the essays by Selig Harrison and Priya Satia show, this narrative does not fit the realities. While American policy-makers and Al Qaeda may think of this as a grand meta-struggle between the United States and global jihadism, many Taliban fighters are motivated by other factors: by traditional Pashtun resistance to foreign occupation; by internal ethnic politics, such as rebellion against the Tajik-dominated government of Hamid Karzai; or by anger over the loss of life resulting from American/NATO aerial attacks that have gone awry.

As for Pakistan, the essays by Manan Ahmed and Mosharraf Zaidi explain why the Taliban threat to Pakistan is not as serious as many assume, and why a newly democratic Pakistan has turned increasingly against Islamist extremists. As Ahmed and Zaidi suggest, Pakistanis are quite capable of defending their country–not for American interests but for their own reasons–and Pakistani stability is more likely to be threatened than enhanced by military escalation in Afghanistan.

And finally, Robert Dreyfuss offers an exit strategy: as it winds down its counterinsurgency, Washington should encourage an international Bonn II conference that would lead to a new national compact in Afghanistan.

Well, not quite “finally.”  The issue also includes a piece by Ann Jones about Afghan women.  Jones mentions groups like Feminist Majority that argue for a continued US troop presence in the name of Afghan women’s rights.  She mentions her own years of experience working with women in Afghanistan, and gives it as her assessment that “an unsentimental look at the record reveals that for all the fine talk of women’s rights since the US invasion, equal rights for Afghan women have been illusory all along, a polite feel-good fiction that helped to sell the American enterprise at home and cloak in respectability the misbegotten government we installed in Kabul.”  In light of the fiercely patriarchal Shi’ite Personal Status Law (the SPSL or, as it became known in the Western press, the “Marital Rape Law,”) she goes on to say that “From the point of view of women today, America’s friends and America’s enemies in Afghanistan are the same kind of guys.”  She is unimpressed by the number of women in the Afghan parliament:

But what about all the women parliamentarians so often cited as evidence of the progress of Afghan women? With 17 percent of the upper house and 27 percent of the lower–eighty-five women in all–you’d think they could have blocked the SPSL. But that didn’t happen, for many reasons. Many women parliamentarians are mere extensions of the warlords who financed their campaigns and tell them how to vote: always in opposition to women’s rights. Most non-Shiite women took little interest in the bill, believing that it applied only to the Shiite minority. Although Hazara women have long been the freest in the country and the most active in public life, some of them argued that it is better to have a bad law than none at all because, as one Hazara MP told me, “without a written law, men can do whatever they want.”

Jones sees little hope, and much tragic irony in the possibilities facing Afghanistan:

So there’s no point talking about how women and girls might be affected by the strategic military options remaining on Obama’s plate. None of them bode well for women. To send more troops is to send more violence. To withdraw is to invite the Taliban. To stay the same is not possible, now that Karzai has stolen the election in plain sight and made a mockery of American pretensions to an interest in anything but our own skin and our own pocketbook. But while men plan the onslaught of more men, it’s worth remembering what “normal life” once looked like in Afghanistan, well before the soldiers came. In the 1960s and ’70s, before the Soviet invasion–when half the country’s doctors, more than half the civil servants and three-quarters of the teachers were women–a peaceful Afghanistan advanced slowly into the modern world through the efforts of all its people. What changed all that was not only the violence of war but the accession to power of the most backward men in the country: first the Taliban, now the mullahs and mujahedeen of the fraudulent, corrupt, Western-designed government that stands in opposition to “normal life” as it is lived in the developed world and was once lived in their own country. What happens to women is not merely a “women’s issue”; it is the central issue of stability, development and durable peace. No nation can advance without women, and no enterprise that takes women off the table can come to much good.

Jones knows Afghanistan quite well; I know it not at all.  I can only hope that there is something left in the local culture of the seeds from which a relatively woman-friendly Afghanistan once grew, and that those seeds will again send up green shoots once foreign armies leave the country.

How many people lived in Rome in the first century BC?

Sulla: Mr Zero Population Growth

During the first century BC, Rome experienced a series of civil wars.  Dynasts like Marius, Sulla, Caesar, Antony, and Octavian led armies that slaughtered foreigners and Romans alike.  Romans responded to these wars by hoarding their wealth, some of it in the form of buried coins.  Not every Roman who buried a hoard had a chance to dig it up again; some of those coins have come to light only in recent centuries, and scholars study these newly recovered hoards to learn about life in ancient times.

Historian Walter Scheidel and biologist Peter Turchin have looked at some of these recently uncovered first-century BC hoards of coins in Rome.  Using analytic techniques developed by biologists, Scheidel and Turchin have concluded that the population of Rome in those days was considerably smaller than has often been estimated.  The civil wars evidently took so heavy a toll on the Romans that the city’s population by the end of the first century was likely no more than half the figure some previous historians had proposed.