The dead end above us

A recent note on Slate about Tom Gauld’s Mooncop discusses Mr Gauld’s vision of life in a decrepit and soon-to-be-abandoned lunar colony as “the residue of an older fantasy”: the Cold War-era dream of thriving human settlements on other planetary bodies.

No doubt there is an element of this at work in Mr Gauld’s imagination, and in other visions of a future in which settlements and stations in outer space are decaying, forgotten remnants of failed enterprises of expansion. Films such as Moon (2009) and The Martian (2015,) with individual space travelers alone on the surface of alien worlds, play to the image of outer space as a realm of abandonment. Yet such visions were part of science fiction before the end, or even the beginning, of the US-Soviet Space Race. Even the founding text of space travel-themed science fiction, Jules Verne’s From the Earth to the Moon (1865,) ends tragically, with its heroes forever separated from the rest of humanity, dying pointlessly in a metal ball orbiting the Moon. A work like Olaf Stapledon’s Last and First Men (1930) is steeped in an overwhelming sense of decline, introducing one species after another descended from humans, each of which meets extinction in its own way.

Some of the most prominent science fiction productions of the Cold War days also represent space travel as a dead end. Robert Altman’s film Countdown (1968) depicts a US project to land a man on the Moon. The film ends with a lone astronaut wandering the lunar surface, finding a crashed Soviet spacecraft and the corpses of the cosmonauts. The final moments of the film are ambiguous, as the astronaut finds a device that may or may not enable him to escape back to the Earth. The overall sense of loss and futility is the same as that with which Verne’s novel ends. The relationship between the cosmonaut and the planetary being in Andrei Tarkovsky’s Solaris (1972) develops the same feeling of isolation and helplessness, though with a difference: where Altman contrasts the isolation his astronaut suffers on the Moon with the professional camaraderie and relatively satisfactory married life he had enjoyed in Texas, Tarkovsky’s film is openly critical of the Soviet Union as a place where the kind of social isolation his cosmonaut suffers in space is commonplace on Earth.

Arthur C. Clarke, a novelist strongly influenced by Olaf Stapledon’s work, returned throughout his career to a story set a thousand million years in the future. He turned this story into novels twice, as Against the Fall of Night (1953) and The City and the Stars (1956,) and explored it in many of the unfinished tales published in The Collected Stories of Arthur C. Clarke (2001.) The heart of the story is that humans had once created a vast stellar empire, fragments of which perhaps still existed in some remote corners of space, but that the Earth had been separated from this empire, and its people had forgotten the major points of the empire’s history. The abandoned empire, the isolated Earth, and the forgotten history of the conquest of space are also the background of a much more famous series, Isaac Asimov’s Foundation novels (1942-1993.)

The two most familiar products of Cold War science fiction are Star Trek (1966-1969) and Star Wars (1977.) The image of outer space as a realm of unkept promises figures in those as well.

The background of Star Wars is a fight, not to claim new territory or develop new settlements, but to restore the liberties of a lost Republic. We meet the hero, a young man unaware of his true parentage and his lofty destiny, in the grubby place of exile which he has grown up regarding as his home. Using battered ships, antique weapons, and a plotline recycled from 1930s movie serials, the good guys score a victory for their nostalgic cause.

While Star Trek is set in the early days of an expanding interstellar federation, in many episodes our heroes encounter the ruins of lost civilizations and other traces of abandoned developments. The initial pilot, “The Cage” (produced in 1964-1965,) shows us the ship’s captain as the prisoner of a species who have retreated underground after a war fought millennia before, and who while there have so completely lost their technological skills that they can no longer “even repair the machines left behind by their ancestors” and are faced with inevitable extinction.

Many other episodes show societies that have declined from extraordinary heights of technological development into primitive conditions, conditions that variously suggest control of the population by a computer mistaken for a deity (for example, “Return of the Archons,” “The Apple,” and “For the World is Hollow and I Have Touched the Sky,”) impending doom (for example, “What Are Little Girls Made Of?,” “Miri,” and “The Paradise Syndrome,”) or disconnection between intellectual and carnal satisfactions, resulting in a society of casual sadism and implied cannibalism (for example, “The Man Trap,” “Return to Tomorrow,” “The Gamesters of Triskelion,” “Spock’s Brain,” and “Turnabout Intruder.”)

Nor does Star Trek present decline and abandonment as things that happen only in alien cultures. We meet such luminaries from the history of the Earth as a former ruler of India (“Space Seed,”) the inventor of faster-than-light travel (“Metamorphosis,”) the god Apollo (“Who Mourns for Adonais?,”) and Abraham Lincoln and Genghis Khan (“The Savage Curtain,”) all forgotten and imprisoned in the infinite void of deep space. Our heroes encounter nightmarish doppelganger versions of political entities such as the Roman Empire (“Bread and Circuses,”) the United States of America (“A Piece of the Action” and “The Omega Glory,”) and Nazi Germany (“Patterns of Force,”) showing that in space not only can individual humans become isolated and powerless, but whole human societies can be cut off, condemned to stagnation and historical irrelevance, by a misconceived response to technological development.

In developing an image of outer space as a realm of isolation, abandonment, decline, and helplessness, this line of science fiction writers from Jules Verne to Tom Gauld may be harking back quite far into literary history. It is often said that the “True History” of Lucian (circa 125-circa 180 CE,) a satirical tale recounting a journey to the Moon, is the first science fiction story. Lucian’s story is itself more than a little reminiscent of two plays by Aristophanes (circa 450 BCE-circa 386 BCE,) The Birds (414 BCE) and Peace (421 BCE.) In each of those plays, disreputable characters fly to the heavens and pull off unlikely schemes.

Particularly relevant to our discussion is the scene in Peace when Trygaeus, a poor farmer, arrives in the heavens, having flown there on the back of a giant dung-beetle. Trygaeus’ goal is to arrest Zeus and prosecute him in the courts of Athens for having allowed the wars among the Greek states to go on so long that Greece is weakened and in danger of a takeover by the Persian Empire. Once in the heavens, Trygaeus finds that Zeus and almost all of the other gods have abandoned their usual realm, going off deeper into space in their disgust at the warlike habits of the Greeks. Only Hermes remains in his usual spot, and he is a degraded figure, so impoverished that Trygaeus can easily bribe him with a small bag of meat, so powerless that when the god of war and some of his minions come through, Hermes hides from them. The lower heaven from which the rest of the gods have departed is as much a realm of isolation, abandonment, decline, and helplessness for Hermes as any of the heavenly bodies are for the characters of the gloomier sort of science fiction.

Power keeps faith with power

The recent death of longtime Cuban despot Fidel Castro has led many to remark on the admiration Castro received from people who might have been expected to find in him an enemy. For example, Roman Catholic blogger Mark Shea wrote a post remarking on Castro’s brutal repression of the Roman Catholic church in Cuba; his commenters responded by pointing out that leading members of the Roman Catholic hierarchy, including the past three popes, have made many signs of friendship towards Castro. Rod Dreher documents the complicity of Roman Catholic bishops in Castro’s regime in some detail; Mr Dreher is not Roman Catholic, but Russian Orthodox. However, in the same post he reports on a statement made by his own chief pastor, the Patriarch of Moscow, in praise of Castro, showing that his church is in no better a position.

That the leaders of the largest theistic organization in the world would make themselves so useful to the leader of a regime that has oppressed the adherents of that organization so fiercely ceases to seem strange if we take this as the first rule of analysis: Power keeps faith with power. If a common ideology or common social identity ensured loyalty, the hierarchs of Rome and Havana would stand with the laity, the religious, and the parish priests who have been imprisoned for their faith; yet they rarely mention these persecuted, happily consorting with their persecutors. The only ideological consideration that moves those in power to act is the belief that the institutions which maintain their position should continue to operate, which means that those who are in a position to help or hinder those institutions in matters affecting their survival must be brought on board. The only identity that influences the actions of the mighty is their identity with each other; the powerless, even the powerless among their own supporters and putative fellows, are abstractions whom they rarely encounter in person, but see primarily as figures on revenue statements, opinion surveys, and other ledgers.

Flagrantly corrupt organizations like the Roman Catholic Church and the Cuban Communist Party are easy targets for this sort of analysis. But the same principle applies everywhere. So the policies by which the USA has opposed the Castro regime are unintelligible except as a case of power keeping faith with power, betraying every other trust. The two chief prongs of the economic warfare that the USA waged on Cuba throughout virtually the whole of Castro’s time at the head of the regime there were, on the one hand, a highly restrictive policy on trade between the USA and Cuba, and on the other a highly lax policy on immigration from Cuba. The trade embargo has been greatly eased in recent years, but only after it had consistently failed to weaken Castro’s grip on power for a half-century. And the “Dry Foot” immigration policy remains in effect. Though the Dry Foot policy has certainly helped to immiserate the people of Cuba by accelerating the brain drain of skilled professionals and other highly productive individuals from the island, it has probably strengthened the regime’s grip on power, by luring to Miami and points north the people likeliest to lead a revolt. Both halves of the economic warfare policy were worse than useless to those who were ostensibly supposed to be its principal beneficiaries; that the embargo persisted for so long, and the Dry Foot policy persists still, is explicable only in terms of the powerful interests in the USA who benefit from their continuation, and from power’s tendency to keep faith with power.

Remembering that power keeps faith with power, we see what people may be getting at when they deride “identity politics.” Writing in Slate, Jamelle Bouie argues that Jesse Jackson’s “Rainbow Coalition” of the 1980s, inviting disenfranchised white working people to identify with people of color and other minority groups, is a better model for a revival of the American Left than is Senator Bernard “Bernie” Sanders’ vision of a politics that puts class first. Mr Bouie sums up his case thus:

But the history of the Democratic Party contains a model for moving forward, with an approach, honed by Jesse Jackson, that bridges the divide. And thinkers in the political and policy world have crafted solutions that reflect this approach. It respects the reality of the modern Democratic Party: a formation that represents—and depends on—the votes of women, young people, and people of color.

Mainstream Democrats have set their sights on white voters. But the path forward—the way to win them and energize those voters of color who didn’t come to the polls in 2016—might lie in the insights of black voters and black communities and a larger appreciation of how and why identity matters, in a politics of we kin, blackness in many shades. Against a political movement that defines America in exclusionary and racial terms—as a white country for white people—a renewed Rainbow Coalition is the only defense worth making.

As far as it goes, this is unexceptionable. When we get to “the reality of the modern Democratic Party,” though, we see a big trap door about to open under our feet. The Democrats can get the votes of 60,000,000 or more people in national elections, roughly half the electorate, yet hold fewer than 30% of all elected offices in the USA. Part of this can be blamed on institutional quirks such as the boundaries of the states, gerrymandering of electoral districts within states, the advantage that Republicans derive from their greater financial resources, etc.

Other parts of the problem derive from a vulnerability inherent in the structure of “the modern Democratic Party.” The great majority of African Americans may vote for Democrats, but the voices heard in the councils of the party are not those of that majority, but of the professional politicians who presume to speak for black people. Likewise for each of the other groups that make up the Democratic coalition. Often the spokespeople will come reasonably close to the views of their constituents, but even then there is an Achilles’ Heel- voters know from long experience that power, including the relatively modest power to draft portions of the Democratic Party platform and to have a say in who will be appointed as Deputy Assistant Secretaries of Housing and Urban Development under Democratic presidents, keeps faith with power.

Nonblack voters thus hear invitations to identify with blackness, when they come from the Democratic Party, not as invitations to identify with their African American neighbors, but as invitations to go along with the policy positions of the Congressional Black Caucus and similar groups. Those groups may do a fairly good job of speaking for the people they claim to represent, but they are made up of human beings, and are therefore ships tossed on the rough seas of politics. As such, they are as likely, given time, as the US foreign policy establishment or the Cuban Communist Party or the Roman Catholic church to find themselves making common cause with the deadliest enemies of anyone who is so incautious as to trust them without reservation. That leaves whites open to the appeal of the ethnic bloc voting that they have long practiced in the South and that they increasingly display in other parts of the country where their numerical majority is as weak as it is in the South. This is perhaps less because they prefer the leaders of the Republican Party to those of the Democratic Party than because they can see a clearer path to influencing the leaders of a party that depends on them for its core support than to influencing the leaders of a party that depends on everyone but them for its core support. When an ethnic group votes as a bloc, it is a power within the party it backs, and the other powers within that party dare not betray it too obviously. When the members of a group scatter their votes, that group is no power, and its role is to be betrayed at every turn. So, in the absence of a labor movement or other force uniting people on a basis other than race, white voters are no more likely to identify with blackness than African American voters are to identify with whiteness.

Once as tragedy, once as farce

The recent announcement that the New York State Attorney General’s office is looking into the Trump Foundation, one of Don John of Astoria’s more dubious enterprises, reminds me of Marx’s famous dictum that historical situations occur twice, once as tragedy, once as farce. The Clinton Foundation is tragic; it has done a great deal of good, but as a project of people who are planning to return to the White House it has also become a lobbying venue. Not only do its connections to the State Department during HRC’s tenure as Secretary raise eyebrows, but its practice of running its own projects rather than distributing money to established charities and the substantial amounts it has spent on luxurious gatherings of its super-rich donors are red flags.

The Trump Foundation, by contrast, lacks the grandeur of scale and the mixture of heroic achievement with moral ambiguity that are essential components of tragedy. It is simply farcical, a scam that has enabled Mr Trump to obscure the fact that he does not give nearly as much money to charity as a person who is as rich as he claims to be typically would.

The same could be said of the Trump and Clinton campaigns’ respective practices regarding information about the health of their candidates. Since cellphone video surfaced of HRC having some kind of medical episode the other day, the Clinton campaign’s unwavering insistence that any questions about her health are signs of derangement on the part of those asking them has become laughable, but I would still say that her apparent physical decline and her refusal to level with the public about it do attain to the dignity of the tragic. HRC is a major figure in the last quarter-century of history, and that she and Bill Clinton were as youthful as they were when they first appeared on the world stage did mark a transition from the Cold War era to the present time. That Clinton-world obdurately insists that she is still in her prime therefore represents, not an individual shortcoming on her part, but the difficulty with which the entire Baby Boom generation admits that the sun is setting on the period of history in which leadership rightfully belongs in its hands. So the tragic scale of HRC’s pretense that nothing is the matter with her health comes not only from the threat of another presidency, like that of Franklin Roosevelt in 1944-1945 or Woodrow Wilson in 1919-1920 or Chester Arthur in 1883-1885, in which the White House palace guard refuses to admit that the president is gravely ill and thereby creates uncertainty as to who is really in charge, but also from her place in history.

As for Mr Trump, what he has made available to the public about his health is a statement from a guy who looks like this:

[image]

As the man said, once as tragedy, once as farce.

The two foundations and the candidates’ health are in the news today. If we cast our minds back a few weeks, we will recall Mr Trump saying that as president, HRC would appoint left-of-center federal judges, and that no one could stop it- “Although the Second Amendment people, maybe there is, I don’t know- but I’ll tell you what, that will be a horrible day.” There was a great deal of parsing and analysis of this remark, though it seemed clear to me that it started in Mr Trump’s head as a joke about political assassination from which he recoiled when he heard it (“that will be a horrible day.”) Mr Trump’s opponents rightly expressed dismay at a potential US president making jokes about political assassinations.

Mr Trump’s tendency to say whatever pops into his head is suitable for a character in a low farce, not for a US president, and this joke about political assassination shows why. But what of HRC? She also has publicly joked about political assassination, although in her case it was not the hypothetical assassination of an opponent, but an already-accomplished assassination which she was instrumental in bringing about:

[embedded video]

Considering the lack of provocation for the intervention that overthrew the Gadhafi regime and the catastrophic consequences of the Libyan war for the whole of North Africa, to say nothing of the gruesome manner of Colonel Gadhafi’s death, it is difficult to watch this gleeful boast without revulsion.

Still, low and coarse as HRC’s behavior might have been in this moment, it qualifies as tragic. A phrase like “war crimes,” as in “To initiate a war of aggression, therefore, is not only an international crime; it is the supreme international crime differing only from other war crimes in that it contains within itself the accumulated evil of the whole,” does betray a certain lack of imagination. “Crime” names something inescapably small and grubby, and death as the result of crime is an unworthy end to one bearing the dignity of a human being. War is the greatest of evils, but there is a greatness even in its evil. Thomas Aquinas developed a concept which he called “the law of the fomes of sin,” the idea that even the darkest sin mimics the law-governed structure of God’s living creation. Nowhere is the law of the fomes more compellingly demonstrated than in the spectacle and efficiency, the awe-inspiring scale and undeniable bravery, with which even the most unjust of wars is waged. Responsibility for an unjust war is, therefore, a tragic guilt, not a farcical one.

Something that’s wrong with white people

[cartoon: “He wants a crackdown”]

I think one of the least appealing characteristics of white Americans is an excessive tendency to identify with authority figures. We can see this tendency among whites who lean to the political right, who are often ridiculously tenacious in their defense of police officers who shoot unarmed suspects or presidents who invade barely-armed countries. I’m a white American myself. Even though I usually tend towards the opposite extreme, being overly leery of authority, there are times when I revert to the norm. For example, when I’m under stress, my first reflex when I hear about a conflict is to identify with the more powerful side and ask impatiently why they don’t just go in with overwhelming force and sort the whole mess out once and for all.

When I was a kid in the 1970s, people were still talking about the 71-day standoff that began when activists associated with the American Indian Movement took control of the town of Wounded Knee, South Dakota. Those activists were armed, and in the course of the standoff they shot United States Marshal Lloyd Grimm, paralyzing him from the waist down. When the topic of the Wounded Knee Incident came up in a room full of whites, there was always a good chance someone would say that the authorities should have resolved the situation with a violent assault, that organizers Russell Means and Dennis Banks should have been prosecuted on the gravest possible charges, and that the patience the government showed during the standoff and the acquittal of Mr Means and Mr Banks at their federal trial in 1974 were special treatment accorded to the activists because they were Native American.

For the last several days an armed group made up of whites has been in control of a federal bird sanctuary in Oregon. While the group who seized Wounded Knee were moved to action by their belief that their tribal government was corrupt and in need of reform, a belief that connected to a wider vision of Native American history and the place of Native peoples on this continent, the group in Oregon is incensed about what they see as the unjust federal prosecution of one man who, like them, was upset about federal land policy in the West. What strikes me is the sheer number of left-leaning whites whom I’ve heard in the last couple of days talking exactly like the people who were frothing at the mouth about Russell Means and Dennis Banks almost 43 years ago. I’ve heard them call for police violence to end the standoff; I’ve heard them call for prosecutions under federal anti-terrorism statutes; I’ve heard them say that a failure to do either of those things is the result of special treatment for the occupiers due to their race. White people haven’t changed very much over the years, and don’t change just because they put down one political party’s banner and pick up another’s.

I’ve seen a number of interesting things about this situation. Counterpunch isn’t what it was when Alexander Cockburn was alive and co-editing it with Jeffrey St. Clair; Cockburn would probably have had a great deal of sympathy for the occupation and never had anything but scorn for people who interjected the word “terrorism” into a political discussion, but Mr St. Clair is the co-author of a piece there today calling the occupiers terrorists and chronicling the woes of a federal land management official who has long been in conflict with them and their relatives. Mr St. Clair certainly makes out a strong prima facie case that the occupiers are a bunch of jerks and that they would be an unwelcome addition to any neighborhood, but that’s a long way from justifying the use of the “terrorist” label, a word which, these days, is virtually trademarked by those who demand a submissive attitude to the law enforcement and intelligence-gathering apparatuses of the US government.

Artur Rosman, citing his status as a naturalized citizen of the USA, declares himself incompetent to form an opinion about the Oregon standoff, and quotes at length from an African American friend of his:

If over the last several years you’ve thought that any of the black lives cut short by police violence “had it coming” because they were not compliant with law and order and/or were disrespectful and aggressive towards those in authority, then surely you are now advocating for a quick and overwhelming amount of lethal force to be brought against the activists in Burns, Oregon, who are openly breaking the law, actually bearing and threatening to use arms against police forces, intentionally flouting authority, etc. You can’t have it both ways. Conversely, if you think that the patience and calm with regard to the disgruntled and armed activists in Burns, Oregon, is probably the better part of wisdom, then surely you have been deeply outraged at the lack of patience and calm shown by police officers in so many cases in recent years involving un-armed black men and women posing far less of a threat to authority and government than is represented by this “militia” in Oregon. Again, you can’t have it both ways. Or, if you want to have it both ways, especially if you’ve been tempted by the “all lives matter” clap-trap, you have some serious explaining to do.

At Slate, Jamelle Bouie cautions against an interpretation of the situation which is phrased in solely racial terms. He also points out how bizarre it is that people who present themselves as opponents of police violence appear to be frustrated that the police are not handling this matter with an immediate recourse to violence. Mr Bouie’s last two paragraphs sum this aspect of it up well:

In any case, why won’t they shoot at armed white fanatics isn’t just the wrong question; it’s a bad one. Not only does it hold lethal violence as a fair response to the Bundy militia, but it opens a path to legitimizing the same violence against more marginalized groups. As long as the government is an equal opportunity killer, goes the argument, violence is acceptable.

 

But that’s perverse. If there’s a question to ask on this score, it’s not why don’t they use violence, it’s why aren’t they more cautious with unarmed suspects and common criminals? If we’re outraged, it shouldn’t be because law enforcement isn’t rushing to violently confront Bundy and his group. We should be outraged because that restraint isn’t extended to all Americans.

Libertarian stalwart Justin Raimondo has taken a lively interest in the case; in this piece, he anticipates the arguments Mr St. Clair and others have made about the basic rottenness of the people occupying the bird sanctuary and the cause they represent. Mr Raimondo defends the occupiers and extols their cause, unconvincingly to my mind, but vigorously.

He’s also spent a lot of time tweeting at people who have expressed authoritarian rage at the Oregon activists; here are a couple of samples:

[embedded tweets]

Glenn Greenwald has also mounted his Twitter account and taken it to the heart of this particular battle. As for instance:

https://twitter.com/ggreenwald/status/684078947877928960

and:

https://twitter.com/ggreenwald/status/683984753545068545

 

“The author’s intent” and the pronunciations /dʒɪf/ and /ɡɪf/

Recently there’s been a flareup of interest in that great question of our age, the correct pronunciation of the acronym “gif.”

An abbreviation for “Graphics Interchange Format,” the acronym is pronounced /dʒɪf/ by some people (as if it were spelled “jiff,”) while others say /ɡɪf/ (as if it were spelled, um, “gif.”)

Here’s a remark from Rachel Larimore, prompted by RuPaul’s declaration in favor of the pronunciation /dʒɪf/ (“jiff”):

[embedded remark]

One of the flashpoints in this weighty debate is the fact that the inventor of the gif format, Steve Wilhite, prefers to pronounce it as /dʒɪf/, while most other people pronounce it /ɡɪf/. For my part, whatever authority Mr Wilhite might want to claim in this matter because of his role in creating the format is seriously undercut by the fact that he at least consented to, and possibly suggested, the acronym “gif.” If he’d wanted us to say /dʒɪf/, the time to take that stand was when the acronym was being chosen. The abbreviation “G.P.” on American military vehicles in the late 1930s combined with the name of a character in Popeye comics gave rise to the pronunciation /dʒiːp/ and eventually to the word “jeep”; abbreviating “Graphics Interchange Format” as “gf” would likely have started people saying either /dʒɪf/ or /dʒiːf/ (“jeef,” as if it were the singular of “Jeeves.”) Once Mr Wilhite agreed to the abbreviation “gif,” it would be as silly for him to get upset with people for saying /ɡɪf/ as it would for the inventors of Play-Doh to be upset that their product is now used as something other than wallpaper cleaner.

I think that a lot of the emotional heat in this argument comes from a sense of unease about something basic to communication that is strangely difficult to put into words.  It is generally taken for granted that there is a relationship between the interpretation we ought to put on a message and the interpretation that the author of that message would have wanted us to put on it.  But when we set out to explain exactly what that relationship is, and how it applies to different kinds of messages, and how far it restricts the proper use of material objects created for the purpose of sending messages, and how exactly we came to have this moral obligation to recreate the author’s intended message inside our heads, and what the proper penalty is for failing to do so, and who counts as the author of what, and which of the various ideas that might have been in a particular person’s mind at various points in time count as authorial intent, the whole thing gets very slippery very fast. It’s one of those things like “time” or “truth” or “love” which we are all quite sure exists and makes demands on us, but which no one can satisfactorily explain. If authorial intent can’t settle a question as basic as the pronunciation of a three letter word, then it begins to seem as if we won’t be able to hold onto the concept of authorial intent at all. Without such a concept, it is by no means obvious how any form of communication would be possible.

On the other hand, it is also obvious that a work of art always says more than its creator intended it to say.  D. H. Lawrence (almost) said* “Never trust the teller, trust the tale,” and that is wise advice.  If that were not so, not only would it be impossible for any work of art to outlive the cultural moment in which it was produced; it would also be necessary for artists to go around continually explaining the meaning of each of their works to each person encountering that work.  If you’ve ever written a work of fiction, you know how this goes; you set to work thinking you’re going to tell one story, then find another story telling itself. After the writing is done, your readers start asking you questions about what you had intended by various things you put in the story, and half your challenge is coming up with non-embarrassing ways to admit that you hadn’t realized you put those things in until the reader pointed them out to you.

Even people who start discussions of the dispute between /dʒɪf/ and /ɡɪf/ by joking about the insignificance of the issue, and who never, in their conscious minds, accept the proposition that it matters very much which pronunciation becomes and remains most widespread, do often become quite passionate about their preferred pronunciation.  I think they do that because they have an uneasy feeling that, while the author’s intent matters, it is not the only thing that matters as we interpret a text.  The feeling is uneasy because it isn’t attached to a clear explanation of why and in what sense it is so.

That in turn gives us another example of the difficulty of using “authorial intent” as a standard of interpretation.  On the one hand, very few people would agree with the proposition that much is at stake in a debate about the pronunciations /dʒɪf/ and /ɡɪf/.  Even fewer are moved by such debates to write essays about the role of authorial intent in interpretation of text.  On the other hand, a great many people raise their voices, spend time contributing vitriolic posts to online forums, and take other actions that strongly suggest that they do believe that something important is hanging in the balance.  This raises the question of levels of intentionality.  At the level of willingness to assent to particular propositions, the authors of these passionate messages have no intention to send the message that it matters which pronunciation catches on.  At the level of behavior and affect, that is precisely the message they are sending.

*Lawrence actually said “Never trust the artist, trust the tale,” which is not only less memorable than the common misquotation, but is also confusing.  Is he saying that we should look for narrative content even in artworks that don’t seem to have it, and cast a leery eye on artists who don’t seem to be telling stories?  I’m sure it wasn’t his chief conscious intent to do so, but something like that may have been in the back of his mind somewhere.  Whether or not some such idea was rumbling around in Lawrence’s mind when he crafted the aphorism, it distracts from the point which “Never trust the teller, trust the tale” makes so pungently.

The internal structure of the calendar, part 2

In December of 2012, I posted a few remarks about the calendar.  The visual representations of the calendar we see in the West usually take the form of a grid in seven columns, each representing a day of the week, with the rows representing the succession of the numbered days of the month as iterations of the sequence of the seven days of the week.  As for example:

[calendar image: “What, you’ve seen one of these before? That’s FANTASTIC!”]
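As it happens, this layout is so standardized that Python’s standard library will print it. Here is a minimal sketch (my illustration, not part of the original post) that prints such a grid for December 2012, with Sunday in the first column as American calendars usually have it:

```python
# Print a month as the familiar seven-column grid, one row per week.
# calendar.SUNDAY puts Sunday in the first column, the usual US layout.
import calendar

cal = calendar.TextCalendar(firstweekday=calendar.SUNDAY)
print(cal.formatmonth(2012, 12))
```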

What occasioned my post in December of 2012 was this xkcd cartoon, in which Randall Munroe wrote the number of each date in a size that reflects the relative frequency with which that date is mentioned in materials searchable through Google NGrams:

[xkcd cartoon; title text: “In months other than September, the 11th is mentioned substantially less often than any other date. It’s been that way since long before 9/11 and I have no idea why.”]

The patterns here made me wonder if our usual grid layout oversimplifies the way the calendar is actually structured in our thought and social practice. I’m a Latin teacher, and so my working life brings me into contact with the calendar of the ancient Romans. That calendar did not include the week and was not organized as a grid. Rather, each month had an internal structure in which days were expressed by their proximity to other days (counted backward, inclusively, from each month’s named days: the Kalends, the Nones, and the Ides) and by their religious status. A visual representation of the Roman calendar might look like this:

[drawing of a Roman calendar: “This drawing is based on some fragments from about 60 BCE”]
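To make the contrast with the grid concrete, here is a rough sketch of the Roman day-reckoning (again my own illustration, not from the original post, and simplified: Arabic rather than Roman numerals, no leap-year intercalation, and “a.d.” standing for ante diem):

```python
# Locate a day of a Roman month by counting back, inclusively,
# to the next named day: the Kalends (1st), the Nones, or the Ides.
# Simplified sketch: Arabic numerals, no leap-year intercalation.

def roman_day(day: int, month_len: int, long_month: bool) -> str:
    """long_month is True for March, May, July, and October,
    whose Nones fall on the 7th and Ides on the 15th."""
    nones = 7 if long_month else 5
    ides = nones + 8
    if day == 1:
        return "the Kalends"
    if day == nones:
        return "the Nones"
    if day == ides:
        return "the Ides"
    if day < nones:
        return f"a.d. {nones - day + 1} before the Nones"
    if day < ides:
        return f"a.d. {ides - day + 1} before the Ides"
    # Days after the Ides count toward the Kalends of the next month.
    # (The day immediately before a named day was actually called "pridie.")
    return f"a.d. {month_len + 2 - day} before the next Kalends"

# Example: 16 March (31 days, a long month) comes out as a.d. 17 before
# the Kalends of April, because Romans counted both endpoints.
print(roman_day(16, 31, True))
```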

Recently, other bits have appeared online suggesting that the calendar may have more internal structure than we commonly realize.  This morning on Slate, Ben Blatt looked at times of the year when newborns are most and least likely to be given particular names.  Mr Blatt’s charts, and the box in which readers can search for the seasonal patterns of particular names, are based on death reports released by the US Social Security Administration, since there is no national agency in the USA that collects and publishes comprehensive reports about births.  So his data is about 80 years behind the times, but it still is interesting.

For example, Mr Blatt shows that babies born on prominent saints’ days in the USA 80 years ago were much likelier than other babies to be named after those saints. So lots of Valentines and Valentinas were born on 14 February, lots of Patricks and Patricias born on 17 March, lots of Johns and Janes born on 24 June, etc. This strikes me as a bit sad- I’ve always thought the Orthodox had a good idea with celebrating both a birthday and a name day. Having your birthday and your saint’s day simultaneously would cheat you out of an excuse for a party in your honor. Mr Blatt also shows that lots of girls named June were born in June, lots of boys named August were born in August, etc.

Last week, Cracked highlighted an old piece called “The 9 Most Statistically Terrifying Days on the Calendar.” I remember the weaknesses of Cracked magazine- I even mentioned them in a post here- and more than once I’ve seen things on the site that I knew to be false. So I take everything I read there with a grain of salt. But each of the items on that listicle looks pretty plausible. For example, #9 tells us that traffic accidents spike the morning after people set their clocks ahead for daylight saving time, since the hour of sleep-deprivation has the same effect as drinking a couple of shots of Scotch. I haven’t done any checking to verify that or any of the other claims on the list, but none of them is outlandish on its face, and they all have explanations attached that make me feel smart when I read them, so why the hell not repeat them.


The unliked and uninjured

Earlier this week, Slate’s Mark Joseph Stern wrote a piece asking incredulously “Do Anti-Gay Christians Really Face Employment Discrimination?” Mr Stern cites blog posts by Princeton Professor Robert George and The American Conservative’s always interesting, often apoplectic blogger Rod Dreher about a survey in which investment bank JPMorgan Chase recently inquired into its employees’ positions with regard to the rights of sexual minorities. Finding the survey a perfectly routine bit of corporate boilerplate, Mr Stern shows impatience with the concerns that Professor George and Mr Dreher voice. “All of this is extravagantly silly, and I respect Dreher and George’s intellects too much to believe that they’re actually taking it seriously,” he writes.

I would agree that Professor George, Mr Dreher, and their fellows have made many hyperbolic statements regarding this and similar matters. At the same time, I do think they are onto something. I would refer to an item the retired Episcopal Bishop of New Hampshire, the Right Reverend Mr V. Gene Robinson, posted on The Daily Beast several months ago. Mr Robinson, himself the first openly gay person consecrated a bishop in a traditional denomination, denied that anti-gay Christians in the USA are the targets of anything that should be called “persecution.” At the same time he did acknowledge that they are coming to be a minority, not only numerically, but in the sense that they bear a stigma which sets them apart from the mainstream:

Here’s what victimization looks like: every day, especially in some places, LGBT people face the real possibility of violence because of their orientation or gender identity. Young people jump off bridges or hang themselves on playground swing sets because of the bullying and discrimination they face. In 29 states, one can be fired from one’s job simply for being gay, with no recourse to the courts. In most places, we cannot legally marry the one we love. Some of us have been kicked out of the house when we come out to our parents, and many young LGBT people find themselves homeless and on the streets because of the attitudes of their religious parents toward their LGBT children. And did I mention the everyday threat of violence?

Compare that to the very painful realization that one’s view of something like homosexuality is in the minority after countless centuries of being in the majority. It may feel like victimization to hang a shingle out to sell something or provide some service to the public, only to find that the “public” includes people one disagrees with or finds immoral in some way. It may feel like it has happened practically overnight, when it has actually been changing over a period of decades. Being pressed to conform to such a change in majority opinion must feel like victimization. But as a society, we would do well to distinguish between real victimization and the also-very-real discouragement felt by those who now find themselves in the minority.

I do not mean to brush aside as inconsequential the feelings of those who find themselves in the minority, whether it be around the topic of gender, race, or sexual orientation. But I do mean to question characterizing such feelings as discrimination, violation of religious freedom, and victimization. It’s time we called out our religious brothers and sisters for misunderstanding their recently-acquired status as members of a shrinking minority as victims.

I would amplify the good bishop’s remarks about “the feelings of those who find themselves in the minority.” I would say that “feelings” is perhaps an unfortunate choice of words here, being as it is a word that often figures in non-apology apologies such as “I’m sorry I hurt your feelings,” which is a polite way of saying “I wish you hadn’t become upset when I was doing what any sensible person would regard as the right thing, you crybaby.” The beliefs that motivate people who disapprove of homosexuality may be wrong; I am quite sure they are wrong, as a matter of fact, though I am chastened by Mr Robinson’s* own willingness to suspend final judgment on the theological ins and outs of the issue. However, it is hardly reasonable to expect the members of this new minority group not to share the experience of every established minority group, who are from time to time frustrated when the image of the world that is presented to them in every movie, every book, every TV show, every presidential address, every classroom, every other place where the voice of The Mainstream is heard, is so much at odds with what they have seen and heard and felt in their own lives, from their own point of view, that it begins to seem as if they have been transported to a parallel universe.

I believe Mr Robinson would be quick to agree with this.  I heard him make a speech a few years ago in which he told an audience made up primarily of same-sexers that “we will never be anything other than a small minority group in society at large, no matter how large a majority we may form in this room at this moment.”  He went on to talk about the challenges inherent in minority status, especially the sense of not being heard that comes when an element so central to personal identity as one’s sexuality takes a form that is basically alien to most of the people one meets on a daily basis.  So when he tells his opponents that their new status as members of an unpopular minority does not by itself mean that they are victims of injustice, he is not trivializing their experiences or concerns.  Rather, he is suggesting that in the future he and they will have something in common.  Anti-gay Christians may never again be anything other than a small minority group in society at large, no matter how large a majority they may form in their own worship spaces.  And they can no longer expect culture high and low to be dominated by a worldview in which male and female are categories created by God and inscribed by God with specific meanings, meanings that include a concept of complementarity that exhausts the legitimate purposes of sexual activity.  Nor can they even expect the average person to have the vaguest knowledge of what their views are, or to be at all interested in learning about them.  They can hardly be faulted for considering this an unattractive prospect, yet it is no different from what any other minority group experiences.  On Mr Robinson’s account, the reduced visibility and inadvertent exclusions that come with minority status do not by themselves constitute unjust discrimination.

I don’t want to put words in Mr Robinson’s mouth; I’m sure he would be the first to concede that there is such a thing as institutional discrimination, and that injustices no one in the majority intends to commit or even knows are happening can at times wreak horrific consequences in the lives of the minority. And while Mr Stern is blithely confident that laws against religious discrimination will give anti-gay Christians all the protection they need against any mistreatment they may suffer in the future, Mr Dreher’s American Conservative colleague Samuel Goldman** links to a recent article raising the question of whether “religious freedom” is even a coherent category in our current legal system. So I see more grounds for the fears of this new minority than does Mr Stern. I cannot be of much help to them; in the unlikely event that anti-gay Christians were to ask me how they could be sure of receiving fair treatment in a strongly pro-gay America, my suggestion would be that they abandon their false beliefs and join the rest of us in affirming the diversity of sexual expression in today’s world. I’m sure that would be about as pointless as a Christian telling Muslims that if they don’t want to be smeared by association with terrorists, all they have to do is to be baptized.

*To avoid confusion, let me explain: The customary form in which the names of Anglican clergy are presented is “[Ecclesiastical Honorific] [Courtesy Title] [Proper Name]” at first reference, and “[Courtesy Title] [Proper Name]” at subsequent references.  That’s why I introduced Mr Robinson as “the Right Reverend Mr Robinson,” then switched to plain “Mr Robinson.”  My wife works for the Episcopal Church, and I occasionally read the novels of Anthony Trollope, so I’m aware of all these things.

**Like Mr Dreher, Mr Goldman is always interesting.  Unlike him, he is never apoplectic.

WEIRD laughter

Recently, several websites I follow have posted remarks about theories that are meant to explain why some things strike people as funny.

Zach Weinersmith, creator of Saturday Morning Breakfast Cereal, wrote an essay called “An Artificial One-Liner Generator” in which he advanced a tentative theory of humor as problem-solving.

Slate is running a series of articles on theoretical questions regarding things that make people laugh.  The first piece, called “What Makes Something Funny,” gives a lot of credit to a researcher named Peter McGraw, who is among the pioneers of “Benign Violation Theory.”  This is perhaps unsurprising, since Professor McGraw and his collaborator Joel Warner are credited as the authors of the piece.  Professor McGraw and Mr Warner summarize earlier theories of humor thus:

Plato and Aristotle introduced the superiority theory, the idea that people laugh at the misfortune of others. Their premise seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes. Sigmund Freud argued for his relief theory, the concept that humor is a way for people to release psychological tension, overcome their inhibitions, and reveal their suppressed fears and desires. His theory works well for dirty jokes, less well for (most) puns.

The majority of humor experts today subscribe to some variation of the incongruity theory, the idea that humor arises when there’s an inconsistency between what people expect to happen and what actually happens.

Professor McGraw and Mr Warner claim that incongruity theory does not stand up well to empirical testing:

Incongruity has a lot going for it—jokes with punch lines, for example, fit well. But scientists have found that in comedy, unexpectedness is overrated. In 1974, two University of Tennessee professors had undergraduates listen to a variety of Bill Cosby and Phyllis Diller routines. Before each punch line, the researchers stopped the tape and asked the students to predict what was coming next, as a measure of the jokes’ predictability. Then another group of students was asked to rate the funniness of each of the comedians’ jokes. The predictable punch lines turned out to be rated considerably funnier than those that were unexpected—the opposite of what you’d expect to happen according to incongruity theory.

To which one might reply that when Mr Cosby and Ms Diller actually performed their routines, they didn’t stop after the setup and ask the audience to predict the punchline.  Nor would any audience member who wanted to enjoy the show be likely to try to predict the punchline.  Doing so would make for an entirely different experience than the one the audience had paid for.

Be that as it may, Professor McGraw and Mr Warner go on to claim that their theory of “benign violation” is supported by empirical evidence:

Working with his collaborator Caleb Warren and building from a 1998 HUMOR article published by a linguist named Thomas Veatch, he hit upon the benign violation theory, the idea that humor arises when something seems wrong or threatening, but is simultaneously OK or safe.

After extolling some of the theory’s strengths, the authors go on:

Naturally, almost as soon as McGraw unveiled the benign violation theory, people began to challenge it, trying to come up with some zinger, gag, or “yo momma” joke that doesn’t fit the theory. But McGraw believes humor theorists have engaged in such thought experiments and rhetorical debates for too long. Instead, he’s turned to science, running his theory through the rigors of lab experimentation.

The results have been encouraging. In one [Humor Research Laboratory] experiment, a researcher approached subjects on campus and asked them to read a scenario based on a rumor about legendarily depraved Rolling Stones guitarist Keith Richards. In the story—which might or might not be true—Keith’s father tells his son to do whatever he wishes with his cremated remains—so when his father dies, Keith decides to snort them. Meanwhile the researcher (who didn’t know what the participants were reading) gauged their facial expressions as they perused the story. The subjects were then asked about their reactions to the stories. Did they find the story wrong, not wrong at all, a bit of both, or neither? As it turned out, those who found the tale simultaneously “wrong” (a violation) and “not wrong” (benign) were three times more likely to smile or laugh than either those who deemed the story either completely OK or utterly unacceptable.

In a related experiment, participants read a story about a church that was giving away a Hummer H2 to a lucky member of its congregation, and were then asked if they found it funny. Participants who were regular churchgoers found the idea of mixing the sanctity of Christianity with a four-wheeled symbol of secular excess significantly less humorous than people who rarely go to church. Those less committed to Christianity, in other words, were more likely to find a holy Hummer benign and therefore funnier.

Lately, social scientists in general have been more mindful than usual of the ways in which North American undergraduates are something other than a perfectly representative sample of the human race. Joseph Henrich, Steven Heine, and Ara Norenzayan have gone so far as to ask in the title of a widely cited paper whether the populations most readily available for study by psychologists and other social scientists are in fact “The weirdest people in the world?” In that paper, Professors Henrich, Heine, and Norenzayan use the acronym “WEIRD,” meaning Western, Educated, Industrialized, Rich, Democratic. Their abstract:

Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggests that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

It would be particularly easy to see why a theory like Benign Violation would have a special appeal to undergraduates. Undergraduate students are rewarded for learning to follow sets of rules, both the rules of academic disciplines which their teachers expect them to internalize and the rules of social behavior appropriate to people who, like most undergraduates, are living independent adult lives for the first time. So, I suppose if one wanted to defend Superiority Theory (as for example mentioned by Aristotle in his Poetics, 1449a, p. 34-35,) one would be able to use the same results, saying that students simultaneously saw themselves as superior both to the characters in the jokes who did not follow the usual rules and to those who would enforce those rules in too narrowly literalistic a fashion to fit with the overall approach of higher education, where innovation and flexibility are highly valued. Here the WEIRD phenomenon comes into play as well, since cultures vary in their ideas of what rules are and what relationship they have to qualities like innovation and flexibility. Moreover, one could also say that the judgment that a particular violation is or is not benign itself implies superiority over those involved in the violation, and that this implication of superiority is what generates laughter.

Also, because undergraduates are continually under pressure to internalize one set of rules after another, they often show anxiety related to sets of rules. This may not be the sort of thing Sigmund Freud had in mind when he talked about Oedipal anxiety, but it certainly does drive undergraduates to seek relief. Examples of action that is at once quite all right and by no means in accordance with the rules may well provide that relief.

Incongruity theorists may find comfort in Professor McGraw’s results, as well. The very name “Benign Violation” and experimental rubrics such as “wrong” and “not wrong” are incongruous combinations by any definition. So a defender of Incongruity Theory may claim Benign Violation as a subcategory of Incongruity Theory, and cite these results in support of that classification.

Professor McGraw is evidently aware of these limitations.  He and Mr Warner explain what they did to rise above them:

[T]hree years ago, he set off on an international exploration of the wide world of humor—with me, a Denver-based journalist, along for the ride to chronicle exactly what transpired. Our journey took us from Japan to the West Bank to the heart of the Amazon, in search of various zingers, wisecracks and punch lines that would help explain humor once and for all. The result is The Humor Code: A Global Search for What Makes Things Funny, to be published next week—on April Fool’s Day, naturally. As is often the case with good experiments—not to mention many of the funniest gags—not everything went exactly as planned, but we learned a lot about what makes the world laugh.

It isn’t April First yet, so I don’t know how well they have done in their efforts to expand their scope.

One sentence in Professor McGraw and Mr Warner’s piece struck me as wrong, this claim about Superiority Theory: it “seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes.”  I’m not at all sure about that one.  In a knock-knock joke, there are two hypothetical characters who take turns delivering five lines of dialogue.  The first character to speak is the Knocker (whose first line is always “Knock-knock!”)  The second character to speak is the Interlocutor (whose first line is always “Who’s there?”)  The Knocker’s second line is an unsatisfactory answer to this question.  The Interlocutor’s second line begins by repeating this unsatisfactory answer, then adds the question word “who?”  The Knocker’s third line then delivers the punchline, in the form of a repetition of the unsatisfactory answer followed by one or more additional syllables that change the apparent meaning of the original answer.
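
Because the structure is so rigid, it can be written out almost mechanically.  Here is a minimal Python sketch of the five-line template just described; the function name and its parameters are my own illustrative inventions, not anything drawn from Professor McGraw’s work, and the sample values anticipate the 1950s example quoted below:

    # A minimal sketch of the five-line knock-knock template described above.
    # The function name and parameters are illustrative inventions, not
    # anything drawn from Professor McGraw's work.
    def knock_knock(answer: str, punchline_suffix: str) -> list[str]:
        """Build the five lines of a first-order knock-knock joke.

        answer           -- the Knocker's unsatisfactory second line
        punchline_suffix -- the syllable or syllables that change its apparent meaning
        """
        return [
            "K: Knock-knock!",                 # the Knocker's fixed opening
            "I: Who's there?",                 # the Interlocutor's fixed reply
            f"K: {answer}.",                   # the unsatisfactory answer
            f"I: {answer} who?",               # the answer repeated, plus "who?"
            f"K: {answer}{punchline_suffix}",  # punchline: the answer plus extra syllables
        ]

    for line in knock_knock("Sam and Janet", " evening!"):
        print(line)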

Knock-knock jokes became popular in the USA in the 1950s, as part of a national craze.  The first joke recorded in this mid-twentieth-century craze, I have read, is the following:

K: Knock-knock!

I: Who’s there?

K: Sam and Janet.

I: Sam and Janet who?

K: Sam and Janet evening! (sung to the tune of “Some Enchanted Evening”)

Apparently all of the jokes that brought the form into such prominence in the 1950s, such prominence that they are still beloved today by seven-year-olds of all ages, took this shape: the punchline involved the Knocker bursting into song with a popular Broadway tune of the period.

I think the jokes from this original craze almost have to be examples of superiority.  The Knocker is confident that, under the usual conditions of the joke, the Interlocutor will be surprised when the punchline is presented.  This is not to deny that, if the joke were interrupted and the Interlocutor were asked to predict the punchline after the manner of Professor McGraw’s students, the Interlocutor might be able to do so.  But when the punchline is presented, the Interlocutor joins the Knocker in the satisfaction of belonging to the relatively elite portion of the population who recognize current Broadway hits when they hear them.

As knock-knock jokes have become more familiar over the decades, meta-knock-knock jokes have gained a following.  For example, a person named Alice might play the Knocker in this joke:

K: Knock knock!

I: Who’s there?

K: Alice.

I: Alice who?

K: Alice (in a tone suggesting that she is wounded that the Interlocutor doesn’t recognize her)

The meta-knock-knock joke suggests superiority to the genre of knock-knock jokes.  If first-order knock-knock jokes are popular among seven-year-olds of all ages, meta-knock-knock jokes are popular among eight-year-olds of all ages, suggesting superiority to those who still persist in telling first-order knock-knock jokes.

The world’s most hated knock-knock joke is this meta-knock-knock:

K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Orange.
I: Orange who?
K: ORANGE YOU GLAD I DIDN’T SAY BANANA!

This joke attacks several parts of the shared understanding between Knocker and Interlocutor.  The joke is more than five lines long; the fifth line does not take the form of the original unsatisfactory response plus an additional syllable or syllables; the Knocker expects the Interlocutor to repeat his or her two lines multiple times; and the punchline does not include a repetition of the original unsatisfactory response.  For the experienced Interlocutor, these attacks are an undue imposition on the Knocker-Interlocutor relationship.  For anyone else, the whole thing would be utterly pointless.

Hated as the joke is, Knockers of a particular sort, mostly eight-year-old boys, seem unable to resist it.  Willing Interlocutors can rely on these boys to laugh uproariously every time they drag them through the ritual responses.  Here too, Superiority Theory seems to be the only explanation, both for the boys’ laughter and for the strain that tolerating the joke puts on the Interlocutors.  The Knockers who enjoy the joke laugh at their own power to inflict it on their Interlocutors.

Each time a potential Interlocutor is confronted with “Orange you glad I didn’t say banana,” the joke gets a bit more annoying.  Perhaps this is because of an aspect of politeness recently discussed on yet another of my favorite sites, Language Log.  There it was mentioned that Penelope Brown and Stephen Levinson, founders of “Politeness Theory,” have provided conceptual tools for distinguishing between two kinds of statement that offer information the hearer should already have: those which suggest that the hearer does not in fact know it, and so give offense, and those which carry no such suggestion, and so give none.  A joke with a painfully obvious punchline may fall into the first category, as do the reiterated responses in “Orange you glad I didn’t say banana.”  Casual remarks about the weather and other forms of small talk usually fall into the second category, as do formalized utterances generally.

Pythagoras Today

Slate recently reran a New Scientist piece about the similarities between the mathematical patterns musicologists use and the mathematical patterns researchers use to explore other fields.  Pythagoras did something similar two and a half millennia ago, and built a whole religion around it.  The Pythagorean cult was apparently still up and running in 1959; that’s when no less a celebrity than Donald Duck was initiated into Pythagoreanism:

[embedded video of Donald Duck’s initiation into Pythagoreanism]
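
For what it’s worth, the musical half of that comparison is easy to make concrete.  On the textbook account of Pythagorean tuning, the consonant intervals correspond to small whole-number ratios of string lengths or frequencies.  The following sketch is only an illustration of that textbook account, not of anything in the New Scientist piece; the base pitch of 220 Hz is an arbitrary choice of mine:

    # A minimal illustration of Pythagorean interval ratios, assuming only
    # the textbook account: consonant intervals are small whole-number ratios.
    # The base pitch (220 Hz) is an arbitrary choice for the example.
    from fractions import Fraction

    INTERVALS = {
        "unison": Fraction(1, 1),
        "perfect fourth": Fraction(4, 3),
        "perfect fifth": Fraction(3, 2),
        "octave": Fraction(2, 1),
    }

    base_hz = 220.0
    for name, ratio in INTERVALS.items():
        print(f"{name:>14}: ratio {ratio}, pitch {base_hz * float(ratio):.1f} Hz")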

The Narcissists

Recent articles in Slate and The Nation have set me to wondering about the general uselessness of white people as commentators on race.  Not that Michelle Goldberg and Tanner Colby, the white commentators who wrote those pieces, are useless; they comment quite usefully, not on race in any very broad sense, but very specifically on the knots whites tie themselves in when race comes up.  Ms Goldberg and Mr Colby are each engaged in a sort of rhetorical analysis.  Here’s one of Mr Colby’s remarks about white conservatives:

Affirmative action is unfair to white people and the Democratic Party is a plantation—that’s about as incisive as the rhetoric usually gets. Even when Republicans have a legitimate point to make about the shortcomings of some government program, it’s almost as if they can’t help blowing their own argument. They’ll start off talking sensibly enough about educational outcome disparities and within seconds they’re rambling incoherently about how black men don’t take care of their babies. It’s really astonishing to watch.

Now I grant you, complaints about black men not taking care of their babies, when they come up in the course of a highly abstract political discussion about something else, are probably going to be less than helpful.  But at least those complaints have something to do with black people, even if they are so laden with stereotypes and refusals to listen that the black people they imagine aren’t much like the ones who actually exist on the planet Earth.  If engagement with imaginary black people doesn’t sound like much to celebrate, consider this paragraph from Ms Goldberg’s piece:

There are also rules, elaborated by white feminists, on how other white feminists should talk to women of color. For example, after [Mikki] Kendall’s #solidarityisforwhitewomen hashtag erupted last fall, Sarah Milstein, co-author of a guide to Twitter, published a piece on the Huffington Post titled “5 Ways White Feminists Can Address Our Own Racism.” At one point, Milstein argued that if a person of color says something that makes you uncomfortable, “assume your discomfort is telling you something about you, not about the other person.” After Rule No. 3, “Look for ways that you are racist, rather than ways to prove you’re not,” she confesses to her own racial crimes, including being “awkwardly too friendly” toward black people at parties.

“Something about you, not about the other person” and “Look for ways that you are racist.”  Racism is a million things, among them a form of self-absorption.  Therefore, to say these things is, in effect, a way of saying, “Why yes, I am self-absorbed!  Let’s talk about other ways in which I’m self-absorbed!”

Ms Goldberg goes on:

“I actually think there’s a subset of black women who really do get off on white women being prostrate,” [Professor Brittney] Cooper says. “It’s about feeling disempowered and always feeling at the mercy of white authority, and wanting to feel like for once the things you’re saying are being given credibility and authority. And to have white folks do that is powerful, particularly in a world where white women often deploy power against black women in ways that are really problematic.”

Preening displays of white feminist abjection, however, are not the same as respect. “What’s disgusting and disturbing to me is that I see some of the more intellectually dishonest arguments put forth by women of color being legitimized and performed by white feminists, who seem to be in some sort of competition to exhibit how intersectional they are,” says Jezebel founder [Anna] Holmes, who is black. “There are these Olympian attempts on the part of white feminists to underscore and display their ally-ship in a way that feels gross and dishonest and, yes, patronizing.”

If the internet has taught us anything, it is that anything you can think of is a fetish for someone, somewhere.  With a global population of well over 7,000,000,000, it could hardly be otherwise.  Many millions of those 7,000,000,000+ are black women, surely a big enough population that there must be at least a handful of people in it representing virtually every possible enthusiasm.  So it would hardly be surprising if some among them could fairly be said to “get off on white women being prostrate.”  Even so, I strongly suspect that a study would show that more whites find gratification in the idea of being rendered helpless by blacks than blacks find in the idea of rendering whites helpless. I also suspect that most blacks and other nonwhites who do entertain fantasies of humiliating whites would grow tired of the reality long before the whites were sated with it.  Attention, including hostile attention, is addictive.

It’s like men’s masochistic fantasies about women; if you look at those fantasies, it usually isn’t at all clear what the “mistress” is supposed to be getting out of her “servant.”  Most of the time he wants her to put on some kind of uncomfortable outfit and do a significant amount of manual labor while he just lies around bleeding all over her furniture.  Men find outlets for these fantasies by paying women to “dominate” them; online, masochistic men sometimes lend each other a helping hand, sharing masochistic fantasies in which they are the center of attention as objects of hostility.  The whites who take the lead in the race-shaming games Ms Goldberg describes are offering the same service to their fellow narcissists.  As there are not enough domineering women to go around when it comes to satisfying the fetishes of masochistic men, so there are not enough militantly antiwhite nonwhite women to go around to satisfy the desires of certain whites for hostile attention based on race.

And why would there be?  Why should black people, male or female, be as excited about white people as white people are excited about themselves?  Besides, the particular humiliations Ms Goldberg describes require some brainpower to inflict.  If you’re smart enough to play those games, you’re probably smart enough to realize that they are what my classmates in school used to call “white people shit” and to find a more constructive use of your time.