To know what is right

Here’s a Saturday Morning Breakfast Cereal strip that I’ve been silently disagreeing with for about a week:


The part with which I disagree is “Moral standing is assigned to other creatures based on how similar they are to average human intelligence.” I’d say that the key consideration is that social life among humans involves an intricate mixture of competition and cooperation. Because a great deal is often at stake in our competitions with one another, conflicts of interest often render our judgments regarding one another unreliable.  Because the most valuable goals for which we compete can be fully attained only among people who trust each other to act charitably toward one another, excessively aggressive behavior in competitive situations is usually counterproductive at both the individual level and for society at large.

Therefore, codes governing human conduct must begin by acknowledging that no one can be the judge in his or her own cause.  When we deal with someone who is in competition with us for the good things in life, we cannot justly demand the power to force that person to accept our decision that we should have access to these things and s/he should not.  If we are not in direct competition, then perhaps one of us might be acceptable as judge over the other.

An extreme case would be selective breeding of humans.  In various societies there have from time to time been projects to establish a central authority to decide who is allowed to reproduce and who is not.  Since reproduction is one of the principal functions towards which humans and other living beings tend to be oriented, the stakes in this sort of decision are as high as they could possibly be.  For that reason, no central authority could ever be established that would be able to make such decisions in a truly rational manner.  Kinship groups compete with each other to produce offspring and to promote the interests of their offspring in the order of society; no conceivable human being could be altogether disinterested in the implications any particular ruling for or against sterilization, for or against fertilization, for or against pairing, would have for his or her own kinfolk.  Most judges would, consciously or unconsciously, discriminate in favor of unions that are likely to produce mates for their future descendants, and against unions that are likely to produce rivals for them.  A few self-loathing individuals might discriminate in the opposite direction, but in no case would an altogether fair and above-board decision-making process be possible.

Compare this with the selective breeding that humans conduct of other animals and of plants.  We do not compete directly with any of the creatures whose breeding we direct.  Sometimes we use them to compete with other groups of humans, as a more prosperous agricultural society will gain the advantage over its neighbors and find opportunities to drive them out of their land, and sometimes we use them to compete with other creatures that we classify as pests or weeds or pathogens.  So, if we are to interact with the natural world in a healthy way, we ought to grant some form of moral standing to those pests and weeds and pathogens, inasmuch as our competition with them blinds us to the roles they play in the earth’s ecosystems.  What that form of moral standing would be, and how it would be enforced, is of course not an easy question to answer.  Religions that make particular places and particular species of animals sacrosanct may be good at doing that, though one can hardly be expected to adopt a religion in order to meet the requirements of a single argument from ethical theory.

Intelligence is not altogether irrelevant to the question of moral standing. Of course, creatures that are radically different from humans in average intelligence could not very well make a case for their interests in a way that humans could understand.  What is more, the closer creatures are to one another in their abilities, the fiercer, and therefore the more distorting to perceptions, competition between them is likely to be.  If it is difficult to imagine how a rhinovirus could gain a fair hearing for itself in a human court, it is scarcely any easier to imagine how a human struggling to save a wooden house from a termite colony could keep a clear view of that colony’s ecological role.  Indeed, that human would likely see the corporate intelligence formed by the termite colony, not as a virtue calling for protection, but as a menace to be eradicated by any means necessary.

Plato’s allegory of the cave is easy to attack, hard to defend, and impossible to escape

Two recent Saturday Morning Breakfast Cereal comics:

From 9 September:

And from 14 September:

Like dystopian classics such as E. M. Forster’s “The Machine Stops,” the second strip transfers the world of illusion that Plato presents as the default human condition to a particular time: a future presented as a possible outcome of specific present trends.

The argument from design at its best

In his Dialogues Concerning Natural Religion, philosopher David Hume concluded that the classical arguments for the existence of God, even if they were logically sound, would not in fact prove what believers want to have proven.  The characters Cleanthes and Demea set out to demonstrate the existence of God, and find themselves unable to satisfy their friend Philo.  After Cleanthes has made the case for believing that the orderliness of the observable world demonstrates that it is the creation of a supernatural being, Philo responds with a series of conclusions that follow at least as logically from Cleanthes’ arguments as do the conclusions which he would like to draw.  The final item in this series is the following:

In a word, CLEANTHES, a man, who follows your hypothesis, is able, perhaps, to assert, or conjecture, that the universe, sometime, arose from something like design: but beyond that position he cannot ascertain one single circumstance, and is left afterwards to fix every point of his theology, by the utmost licence of fancy and hypothesis. This world, for aught he knows, is very faulty and imperfect, compared to a superior standard; and was only the first rude essay of some infant deity, who afterwards abandoned it, ashamed of his lame performance: it is the work only of some dependent, inferior deity; and is the object of derision to his superiors: it is the production of old age and dotage in some superannuated deity; and ever since his death, has run on at adventures, from the first impulse and active force, which it received from him. You justly give signs of horror, DEMEA, at these strange suppositions: but these, and a thousand more of the same kind, are CLEANTHES’s suppositions, not mine. From the moment the attributes of the Deity are supposed finite, all these have place. And I cannot, for my part, think, that so wild and unsettled a system of theology is, in any respect, preferable to none at all.

This passage came to mind when I read yesterday’s Saturday Morning Breakfast Cereal.  Zach Weinersmith has sharpened Philo’s hypotheticals a bit:

Scientific Arrogance

The other day, Ed Yong linked to an essay by Ethan Siegel.  Mr Siegel extols the virtues of science, both Science the process for gaining knowledge about nature and Science the body of knowledge that humans have acquired by means of that process.  Mr Siegel then quotes an interview Neil deGrasse Tyson gave to Nerdist, in which Mr Tyson expressed reservations about the value of philosophical study as part of the education of a young scientist.  In that interview, Mr Tyson and his interlocutors made some rather harsh-sounding remarks.  Take this segment, for example, as transcribed by Massimo Pigliucci:

interviewer: At a certain point it’s just futile.

dGT: Yeah, yeah, exactly, exactly. My concern here is that the philosophers believe they are actually asking deep questions about nature. And to the scientist it’s, what are you doing? Why are you concerning yourself with the meaning of meaning?

(another) interviewer: I think a healthy balance of both is good.

dGT: Well, I’m still worried even about a healthy balance. Yeah, if you are distracted by your questions so that you can’t move forward, you are not being a productive contributor to our understanding of the natural world. And so the scientist knows when the question “what is the sound of one hand clapping?” is a pointless delay in our progress.

[insert predictable joke by one interviewer, imitating the clapping of one hand]

dGT: How do you define clapping? All of a sudden it devolves into a discussion of the definition of words. And I’d rather keep the conversation about ideas. And when you do that don’t derail yourself on questions that you think are important because philosophy class tells you this. The scientist says look, I got all this world of unknown out there, I’m moving on, I’m leaving you behind. You can’t even cross the street because you are distracted by what you are sure are deep questions you’ve asked yourself. I don’t have the time for that.

interviewer: I also felt that it was a fat load of crap, as one could define what crap is and the essential qualities that make up crap: how you grade a philosophy paper?

dGT [laughing]: Of course I think we all agree you turned out okay.

interviewer: Philosophy was a good Major for comedy, I think, because it does get you to ask a lot of ridiculous questions about things.

dGT: No, you need people to laugh at your ridiculous questions.

interviewers: It’s a bottomless pit. It just becomes nihilism.

dGT: nihilism is a kind of philosophy.

Mr Tyson’s remarks have come in for criticism from many quarters.  The post by Massimo Pigliucci from which I take the transcription above is among the most notable.

I must say that I think some of the criticism is overdone.  In context, it is clear to me that Mr Tyson and his interlocutors are thinking mainly of the training of young scientists, of what sort of learning is necessary as a background to scientific research.  In that context, it’s quite reasonable to caution against too wide a range of interests.  It would certainly not be wise to wait until one had developed a deep understanding of philosophy, history, literature, music, art, etc, before getting down to business in one’s chosen field.

It’s true that Mr Tyson’s recent fame as narrator of the remake of the television series Cosmos puts a bit of an edge on his statements; that show is an attempt to present the history of science to the general public, and to promote a particular view of the place of science in human affairs.  It would be fair to say that the makers of Cosmos, Mr Tyson among them, have exposed some of their rather sizable blind spots in the course of the project (most famously in regard to Giordano Bruno), and a bit of time spent studying the philosophy of science might very well have served to temper the bumptious self-assurance that let them parade their howlers in worldwide television broadcasts.  And it is true, as Mr Pigliucci documents, that Mr Tyson has a history of making flip and ill-informed remarks dismissing the value of philosophy and other subjects aside from his own.  Still, the remarks from the Nerdist podcast are pretty narrow in their intended scope of application, and within that scope, having to do with apprentice scientists, I wouldn’t say that they are examples of arrogance, or that they are even wrong.

I’m reminded of a problem that has faced those who would teach Latin and ancient Greek to English speakers over the centuries.  The languages are different enough from English that it seems like a shame to start them later than early childhood.  If a student starts Latin at five and Greek at six, as was the norm for boys destined for the German Gymnasia or the English public schools in the nineteenth century, that student will likely attain a reading proficiency in the classical languages at about eight or nine years of age that a student who starts them in later life may never attain.  However, the point of learning the languages is to be able to read classical literature.  What is a nine-year-old to make of Horace or Pindar or Vergil or Sophocles or Thucydides or Tacitus?  Few of the real masterworks are intelligible as anything other than linguistic puzzles to anyone under 40.  It often happens to me that I assign such things to students who are returning to college in middle age.  They usually come to me afterward and tell me that they were surprised.  They had read them when they were in the 18-25 age bracket that includes most of my students, and hadn’t found anything of interest in them.  When they reread them later in life, the books meant a tremendous amount to them.  I trot out a very old line on these occasions, and say “It isn’t just you reading the book; the book also reads you.”  Meaning that the more life experience the reader brings, the greater the riches the reading offers.

I suppose the best thing to do would be to learn the languages in early childhood while studying mathematics and the natural sciences, to study ancient literary works for several years as specimens in the scientific study of linguistics or as aids to archaeology, and to come back to them later in life, when one can benefit from reading them on their own terms.  The same might apply to philosophy, bits of which might be slipped into the education of those aged 25 and younger, but which ought really to be introduced systematically only to those who have already confronted in practice the sorts of crises that have spurred its development over the centuries.

Be that as it may, the concept of scientific arrogance is one that has been deftly handled by one of my favorite commentators, cartoonist Zach Weiner.  I’d recommend two Saturday Morning Breakfast Cereal strips on the theme, this one about emeritus disease and this one about generalized reverence for specialized expertise.

WEIRD laughter

Recently, several websites I follow have posted remarks about theories that are meant to explain why some things strike people as funny.

Zach Weinersmith, creator of Saturday Morning Breakfast Cereal, wrote an essay called “An artificial One-Liner Generator” in which he advanced a tentative theory of humor as problem-solving.

Slate is running a series of articles on theoretical questions regarding things that make people laugh.  The first piece, called “What Makes Something Funny,” gives a lot of credit to a researcher named Peter McGraw, who is among the pioneers of “Benign Violation Theory.”  This is perhaps unsurprising, since Professor McGraw and his collaborator Joel Warner are credited as the authors of the piece.  Professor McGraw and Mr Warner summarize earlier theories of humor thus:

Plato and Aristotle introduced the superiority theory, the idea that people laugh at the misfortune of others. Their premise seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes. Sigmund Freud argued for his relief theory, the concept that humor is a way for people to release psychological tension, overcome their inhibitions, and reveal their suppressed fears and desires. His theory works well for dirty jokes, less well for (most) puns.

The majority of humor experts today subscribe to some variation of the incongruity theory, the idea that humor arises when there’s an inconsistency between what people expect to happen and what actually happens.

Professor McGraw and Mr Warner claim that incongruity theory does not stand up well to empirical testing:

Incongruity has a lot going for it—jokes with punch lines, for example, fit well. But scientists have found that in comedy, unexpectedness is overrated. In 1974, two University of Tennessee professors had undergraduates listen to a variety of Bill Cosby and Phyllis Diller routines. Before each punch line, the researchers stopped the tape and asked the students to predict what was coming next, as a measure of the jokes’ predictability. Then another group of students was asked to rate the funniness of each of the comedians’ jokes. The predictable punch lines turned out to be rated considerably funnier than those that were unexpected—the opposite of what you’d expect to happen according to incongruity theory.

To which one might reply that when Mr Cosby and Ms Diller actually performed their routines, they didn’t stop after the setup and ask the audience to predict the punchline.  Nor would any audience member who wanted to enjoy the show be likely to try to predict the punchline.  Doing so would make for an entirely different experience than the one the audience had paid for.

Be that as it may, Professor McGraw and Mr Warner go on to claim that their theory of “benign violation” is supported by empirical evidence:

Working with his collaborator Caleb Warren and building from a 1998 HUMOR article published by a linguist named Thomas Veatch, he hit upon the benign violation theory, the idea that humor arises when something seems wrong or threatening, but is simultaneously OK or safe.

After extolling some of the theory’s strengths, the authors go on:

Naturally, almost as soon as McGraw unveiled the benign violation theory, people began to challenge it, trying to come up with some zinger, gag, or “yo momma” joke that doesn’t fit the theory. But McGraw believes humor theorists have engaged in such thought experiments and rhetorical debates for too long. Instead, he’s turned to science, running his theory through the rigors of lab experimentation.

The results have been encouraging. In one [Humor Research Laboratory] experiment, a researcher approached subjects on campus and asked them to read a scenario based on a rumor about legendarily depraved Rolling Stones guitarist Keith Richards. In the story—which might or might not be true—Keith’s father tells his son to do whatever he wishes with his cremated remains—so when his father dies, Keith decides to snort them. Meanwhile the researcher (who didn’t know what the participants were reading) gauged their facial expressions as they perused the story. The subjects were then asked about their reactions to the stories. Did they find the story wrong, not wrong at all, a bit of both, or neither? As it turned out, those who found the tale simultaneously “wrong” (a violation) and “not wrong” (benign) were three times more likely to smile or laugh than either those who deemed the story either completely OK or utterly unacceptable.

In a related experiment, participants read a story about a church that was giving away a Hummer H2 to a lucky member of its congregation, and were then asked if they found it funny. Participants who were regular churchgoers found the idea of mixing the sanctity of Christianity with a four-wheeled symbol of secular excess significantly less humorous than people who rarely go to church. Those less committed to Christianity, in other words, were more likely to find a holy Hummer benign and therefore funnier.

Lately, social scientists in general have been more mindful than usual of the ways in which North American undergraduates are something other than a perfectly representative sample of the human race.  Joseph Henrich, Steven Heine, and Ara Norenzayan have gone so far as to ask in the title of a widely cited paper whether the populations most readily available for study by psychologists and other social scientists are in fact “The weirdest people in the world?”  In that paper, Professors Henrich, Heine, and Norenzayan use the acronym “WEIRD,” meaning Western, Educated, Industrialized, Rich, Democratic.  Their abstract:

Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggests that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

It would be particularly easy to see why a theory like Benign Violation would have a special appeal to undergraduates.  Undergraduate students are rewarded for learning to follow sets of rules, both the rules of academic disciplines which their teachers expect them to internalize and the rules of social behavior appropriate to people who, like most undergraduates, are living independent adult lives for the first time.  So, I suppose if one wanted to defend Superiority Theory (as mentioned, for example, by Aristotle in his Poetics, 1449a, pp. 34-35), one would be able to use the same results, saying that students simultaneously saw themselves as superior both to the characters in the jokes who did not follow the usual rules and to those who would enforce those rules in too narrowly literalistic a fashion to fit with the overall approach of higher education, where innovation and flexibility are highly valued.  Here the WEIRD phenomenon comes into play as well, since cultures vary in their ideas of what rules are and what relationship they have to qualities like innovation and flexibility.  Moreover, one could also say that the judgment that a particular violation is or is not benign itself implies superiority over those involved in the violation, and that this implication of superiority is what generates laughter.

Also, because undergraduates are continually under pressure to internalize one set of rules after another, they often show anxiety related to sets of rules.  This may not be the sort of thing Sigmund Freud had in mind when he talked about Oedipal anxiety, but it certainly does drive undergraduates to seek relief.  Examples of action that is at once quite all right and by no means in accordance with the rules may well provide that relief.

Incongruity theorists may find comfort in Professor McGraw’s results, as well.  The very name “Benign Violation” as well as experimental rubrics such as “wrong” and “not wrong” are incongruous combinations by any definition.  So a defender of Incongruity Theory may claim Benign Violation as a subcategory of Incongruity Theory, and cite these results in support of that classification.

Professor McGraw is evidently aware of these limitations.  He and Mr Warner explain what they did to rise above them:

[T]hree years ago, he set off on an international exploration of the wide world of humor—with me, a Denver-based journalist, along for the ride to chronicle exactly what transpired. Our journey took us from Japan to the West Bank to the heart of the Amazon, in search of various zingers, wisecracks and punch lines that would help explain humor once and for all. The result is The Humor Code: A Global Search for What Makes Things Funny, to be published next week—on April Fool’s Day, naturally. As is often the case with good experiments—not to mention many of the funniest gags—not everything went exactly as planned, but we learned a lot about what makes the world laugh.

It isn’t April First yet, so I don’t know how well they have done in their efforts to expand their scope.

One sentence that struck me wrong in Professor McGraw and Mr Warner’s piece was this one, about Superiority Theory: that it “seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes.”  I’m not at all sure about that one.  In a knock-knock joke, there are two hypothetical characters who take turns delivering five lines of dialogue.  The first character to speak is the Knocker (whose first line is always “Knock-knock!”).  The second character to speak is the Interlocutor (whose first line is always “Who’s there?”).  The Knocker’s second line is an unsatisfactory answer to this question.  The Interlocutor’s second line begins by repeating this incomplete answer, then adds the question word “who?”  The Knocker’s third line then delivers the punchline in the form of a repetition of the unsatisfactory answer followed by one or more additional syllables that change the apparent meaning of the initial unsatisfactory answer.
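The structure is rigid enough to be written down as a tiny program.  The sketch below is my own illustration, not anything from the humor literature; the function name and its two inputs (the unsatisfactory answer, and the extra syllables that complete the punchline) are invented for the example:

```python
def knock_knock(answer: str, extra: str) -> list[str]:
    """Build the five-line knock-knock dialogue described above.

    `answer` is the Knocker's unsatisfactory answer; `extra` is the
    syllable or syllables that, appended to a repetition of the answer,
    change its apparent meaning and form the punchline.
    """
    return [
        "K: Knock-knock!",
        "I: Who's there?",
        f"K: {answer}.",
        f"I: {answer} who?",
        f"K: {answer}{extra}",  # punchline: the answer repeated, plus extra syllables
    ]

for line in knock_knock("Sam and Janet", " evening!"):
    print(line)
```

Anything that deviates from this template, like the banana joke discussed below, is trading on the listener having already internalized the template.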

Knock-knock jokes became popular in the USA in the 1950s, as part of a national craze.  The first joke recorded in this mid-twentieth century craze, I have read, is the following:

K: Knock-knock!

I: Who’s there?

K: Sam and Janet.

I: Sam and Janet who?

K: Sam and Janet evening! (sung to the tune of “Some Enchanted Evening”)

Apparently all of the jokes that brought the form into such prominence in the 1950s that they are still beloved today by seven-year-olds of all ages took this form, in which the punchline involved the Knocker bursting into song with a popular Broadway tune of the period.

I think the jokes from this original craze almost have to be examples of superiority.  The Knocker is confident that the Interlocutor will be surprised when the punchline is presented under the usual conditions of the joke.  This is not to deny that if the joke were interrupted and the Interlocutor were asked to predict the punchline, after the manner of Professor McGraw’s students, the Interlocutor might be able to do so.  When the punchline is presented, the Interlocutor will join the Knocker in his or her satisfaction at being part of the relatively elite portion of the population who recognize current Broadway hits when they hear them.

As knock-knock jokes have become more familiar over the decades, meta-knock-knock jokes have gained a following.  For example, a person named Alice might play the Knocker in this joke:

K: Knock knock!

I: Who’s there?

K: Alice.

I: Alice who?

K: Alice (in a tone suggesting that she is wounded that the Interlocutor doesn’t recognize her)

The meta-knock-knock joke suggests superiority to the genre of knock-knock jokes.  If first-order knock-knock jokes are popular among seven-year-olds of all ages, meta-knock-knock jokes are popular among eight-year-olds of all ages, suggesting superiority to those who still persist in telling first-order knock-knock jokes.

The world’s most hated knock-knock joke is this meta-knock-knock:

K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Orange.
I: Orange who?
K: ORANGE YOU GLAD I DIDN’T SAY BANANA!

This joke attacks several parts of the shared understanding between Knocker and Interlocutor.  The joke is more than five lines long, the fifth line does not take the form of the original unsatisfactory response plus an additional syllable or syllables, the Knocker expects the Interlocutor to repeat his or her two lines multiple times, and the punchline does not include a repetition of the original unsatisfactory response.  For the experienced Interlocutor, these attacks are an undue imposition on the Knocker-Interlocutor relationship.  For anyone else, the whole thing would be utterly pointless.

Hated as the joke is, Knockers of a particular sort, mostly eight-year-old boys, seem unable to resist it.  Willing Interlocutors can rely on these boys to laugh uproariously every time they drag them through the ritual responses.  Here too, Superiority Theory seems to be the only explanation for the boys’ laughter and the strain tolerating the joke puts on the Interlocutors.  The Knockers who enjoy the joke laugh at their own power to inflict it on their Interlocutors.

Each time a potential Interlocutor is confronted with “Orange you glad I didn’t say banana,” the joke gets a bit more annoying.  Perhaps this is because of an aspect of politeness recently referenced on yet another of my favorite sites, Language Log.  There it was mentioned that Penelope Brown and Stephen Levinson, founders of “Politeness Theory,” have provided conceptual tools for distinguishing between two kinds of statement that offer information the hearer should already have: those which imply that the hearer does not already know that information, and thereby offend the hearer, and those which carry no such implication and therefore give no offense.  A joke with a painfully obvious punchline may fall in the first category, as do the reiterated responses in “Orange you glad I didn’t say banana.”  Casual remarks about the weather and other forms of small talk usually fall in the second category, as do formalized utterances generally.

Games people and avocados play

Hmm, it seems to have been several months since anything has been posted here.  We haven’t disappeared from the internet completely in that time.  One thing we’ve been doing is tweeting links.  Such as:

1. A couple of years ago, there was a thing on Cracked by John Cheese about bad ways to respond to bullies.  It is very hard to read, for three reasons.  First, John Cheese tells stories about how several of these bad ways cost him and his family dearly when he was a boy beset by bullies.  Second, he doesn’t suggest any ways of responding to bullies that would be more successful.  Third, he raises the terrible thought that “bullying” and “politics” are two names for the same thing.

John Cheese’s “5 Bad Ideas for Dealing With Bullies You Learned in Movies” are: “Tell An Adult- They’ll Teach You to Fight”; “Just Ignore Them- Unless You Can Verbally Slay Them”; “Run!  You’ll Have Your Victory Soon Enough”; “Fight Back- You’ll Always Win!”; “Fight Back- There Are No Consequences.”  A political scientist of my acquaintance is fond of the axiom “No unmixed strategies are valid.”  An opponent who can predict your reactions with a high degree of accuracy is one against whom you have little chance of winning in any sort of contest.  That applies at every level.  So the bullied child, or adult, or nation-state can achieve little by choosing the same response consistently when provoked.  The only hope is in regarding each response as a tactic, a tool to be used in conjunction with other tools, chosen and applied based on a cold-eyed assessment of the situation at the moment.   Sometimes you fight, sometimes you ignore, sometimes you run away, sometimes you report the situation to the authorities, sometimes you organize fellow targets in a coordinated resistance, sometimes you combine these responses with each other or with other techniques.  Whatever you do, make sure you surprise your opponent.
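The axiom “No unmixed strategies are valid” can be made concrete with a toy simulation.  Everything in the sketch below is my own illustrative assumption (the list of responses, the simple-minded predictor, the scoring rule); the only point it is meant to show is the game-theoretic one from the paragraph above, that a fixed response is fully exploitable while a randomized one is not:

```python
import random

def predict(history):
    """A crude opponent model: guess that the target will repeat its
    most common past response (defaulting to "fight")."""
    if not history:
        return "fight"
    return max(sorted(set(history)), key=history.count)

def surprise_rate(strategy, rounds=1000, seed=0):
    """Fraction of rounds in which the target's response was NOT the
    one the predictor expected, i.e. how often the target surprises."""
    rng = random.Random(seed)
    moves = ["fight", "ignore", "run", "tell an adult"]
    history, surprises = [], 0
    for _ in range(rounds):
        move = strategy(rng, moves, history)
        if move != predict(history):  # you gain only when you surprise the opponent
            surprises += 1
        history.append(move)
    return surprises / rounds

pure = lambda rng, moves, history: "fight"             # same response every time
mixed = lambda rng, moves, history: rng.choice(moves)  # unpredictable mixture

print(surprise_rate(pure))   # 0.0: a fixed response never surprises this predictor
print(surprise_rate(mixed))  # about 0.75: the predictor guesses right only ~1/4 of the time
```

The pure strategist is caught every single round; the mixed strategist, by choosing uniformly among four responses, leaves the predictor guessing right only about a quarter of the time.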

When I had to cope with bullies as a child, I was acutely aware of how little tactical sense I had.  I tried several methods, never in quick succession, never with much success.  If I had been shrewd enough to contain our neighborhood bullies then, maybe I would be rich and powerful now.  In which case you would not be reading this, as rich and powerful people do not maintain WordPress blogs.

2. John Wilkins is trying to figure out “why otherwise sensible men might harass a woman.”  His theory is that we might be able to answer this question if we frame it as a failure to operate in a rule-governed manner, so he calls the post “On knowing the rules.”   I’m skeptical of that approach.  I suspect that the men we see as sensible are those who have persuaded us to see them as sensible, and that to persuade anyone of anything is the result of a successful application of strategy.  Moreover, sexual harassment, like other forms of bullying, is targeted precisely at a person’s ability to seem sensible.  Tell a story about a federal judge interrupting you at lunch to quote movie lines about pubic hair, and people will probably wonder if you’re “a little bit nutty and a little bit slutty.”  Some strategies for establishing oneself as a sensible person hinge on making other people seem not-so-sensible.  So my suspicion is that the question should be, not “why otherwise sensible men might harass a woman,” but how some men secure their reputations for sensible-ness by harassing women.

3. Speaking of tactics and strategy, the avocado has a reproductive strategy developed in response to a situation that ceased to exist 13,000 years ago.  This turns out not to matter, as the avocado has been flourishing all this time.  So maybe there’s hope for those of us who are not dynamic gamesmen.

4. Let’s assume you don’t want to be a bully, and you are having a debate.  You notice that the person you are debating is getting upset.  Leah Libresco suggests you ask what your opponent thinks is at stake in the debate.  She puts it memorably:

I’ve tried using this kind of approach in non-philosophical fights (with varying success) to keep forcing myself to ask “What is this person protecting?” I’ve tried explicitly reframing whatever the other person is saying to me as “Watch out! You’re about to step on a kitten!!” and then working out what the kitten is. This way, intensity in argument isn’t necessarily aggressive or insulting, and it’s not something I need to take personally. It’s just a signal of how passionately my interlocutor loves the thing they think I’m about to blindly trample on, and I’d best figure out what it is sharpish.

5. If the US government sends you a subsidy in the form of a check, you are very likely to think of yourself as a tax recipient and to find yourself on the defensive in political discussion and appropriations battles.  If the US government subsidizes you by means of other instruments, such as tax credits, you are very likely to see yourself as a taxpayer and to take the offensive.  As they say in xkcd, “Sticks and stones can break my bones, but words can make me think I deserved it.”  The difference between a benefit administered through the congressional appropriations process and a benefit administered through the tax code may be purely verbal as far as economists are concerned, but it has tremendous consequences for public policy and the long-term future of the USA.

6. While we’re talking about xkcd, it dealt the other day with one of the big differences between the artificial games we design to play for fun and the games we play to establish our relationships with each other in real life: the artificial games allow only moves drawn from a single restricted set.  So if you are boxing and you throw a right cross, your opponent is allowed to respond only by guarding against the blow, dodging it, or anticipating it with another punch.  In real life conflicts, however, there is little or no restriction on the sets of possible moves from which a competitor can draw.  So when a legislator defeats a policy initiative with a parliamentary procedure, or an appropriations cut, or a personal attack, it’s as if the winning response to a right cross were a bishop’s gambit.

7. Zach Weinersmith of Saturday Morning Breakfast Cereal has been on a roll lately.  The other day, he posted this epitome of misleading infographics.  He also wondered what it would be like “If Arithmetic Were Debated Like Religion” (or anything else people are passionate about); pointed out that even people who are most cautious about trying to be reasonable “have a huge collection of specific views, the arrangement of which would not be held by anyone who died more than 50 years ago”; and revealed that the Sphinx of Thebes took some time to develop her riddling ability.

8. One of our favorite publications is The American Conservative; one of our favorite Americans is the thoroughly unconservative Alison Bechdel.  If this sounds like a paradox, think again- The American Conservative raves over the musical Fun Home, based on AB’s memoir of the same title.

9. Speaking of The American Conservative, I’ve been reading Rod Dreher’s blog there.  Here’s a post of his, drawing on his book about his sister, in which he talks about the pros and cons of small-town life.  A quote:

The epiphany I had, the thing that made it possible for me to move back, is realizing that the bullying and the rejection that helped drive me away came from the same place as did the gorgeous compassion and solidarity with my sister Ruthie as she fought cancer. You can’t have one without the other.

I like this.  In bits 1 and 2 above, I’ve put a lot of emphasis on bullying as a set of moves in games individuals play.  It is that, I believe, but of course it is also more than that.  Bullying is a symptom of broader social structures, some of which would be very hard to do without, and Mr Dreher does a good job of bringing that out in this post.

In another post, Mr Dreher thinks hard about Dante and W. H. Auden, ending with Auden’s line that “the only knowledge which can be true for us is the knowledge that we can live up to.”  I suppose this is what “Virtue Epistemology” is getting at, in part, by its examination of ways in which ethical and intellectual qualities interpenetrate each other.

10. While on the topic of The American Conservative, I’ll mention one of its former writers, a person well and truly loathed by most of the people who have been regular readers of this site.  I refer to Steve Sailer, or as some of my acquaintances know him, the hated SAILER.  Mr Sailer has recently posted a series of pieces about how odd a style of thinking utilitarianism presupposes.  He concentrates on the fetish utilitarians make of decontextualization, which in their case usually means taking scenarios and abstracting out everything but the question of cost and benefit.  There are many other criticisms one might level at utilitarianism, of course.  Virtue Ethicists, for example, focus on the incoherence of utilitarian conceptions of “pleasure” and “pain,” which is a bit of a concern in a school of thought that sets out to reduce all of experience to pleasure and pain.  Other thinkers focus on the fact that the hedonistic calculus utilitarians describe presupposes a level of knowledge that no human being can attain.  Since ethics is supposed to be about the standards by which humans evaluate their behavior, utilitarianism is thereby disqualified from the label “ethical philosophy.”  If you believe in a God to whom all desires are known and from whom no secrets are hid, utilitarianism could be a theodicy, but theodicy is not ethics.

11. I am a fan of Irving Babbitt, and therefore sit up and take notice when Babbitt scholar Claes G. Ryn is mentioned.  A few years ago, Professor Ryn cast Paul Gottfried out of the Academy of Philosophy and Letters, declaring Professor Gottfried to have strayed too far towards opinions that Professor Ryn deemed racist.  Professor Gottfried is still sulking about his banishment, and grouses about it in the course of a column about his and Professor Ryn’s criticism of the followers of Leo Strauss.   The heart of the column is in these three paragraphs:

Also not surprisingly, given their contemporary focus and ambitions, Straussians over the decades have turned increasingly to political journalism. Pure scholarship seems to count less and less significantly in their putative field of study. And the reason is not primarily that they’re battling the “America-hating” Left—it’s that their interpretations are methodologically eccentric and brimful of their own ideological prejudices. They represent neoconservative politics packaged in academic jargon and allied to a peculiar hermeneutic that I earnestly try to make sense of in my work.

Ryn raises the question of why Straussian doctrines have caught on among self-described conservatives. His answers here do not surprise me, since for many years the two of us discussed this puzzling matter and reached similar conclusions.

Conservatism Inc. has been so totally infiltrated from the Left that those ideas that used to define the Left—abstract universalism, the rejection of ethnic differences, the moral imperative to extend equality to all human relations—has spread to the official Right. The political debate in America now centers on Leftist propositions. Accordingly, someone like Bloom, who could barely conceal his animus against what remains of a traditional Western world based on what Ryn rightly calls a “classical and Christian” heritage, could be featured in the late 1980s as an American patriot and cultural traditionalist.

That the “classical and Christian” worldviews could be so utterly submerged by stale leftovers from the anticommunist Left of the mid-twentieth century would rather seem to lead one to doubt that these worldviews had much life left in them at the time this “infiltration” began, but Professors Ryn and Gottfried are among those who would disagree.  I know that the kittens on their floors (to borrow Ms Libresco’s image) include most of the things that a sizable fraction of the people in the world cared most deeply about for a couple of thousand years, so far be it from me to step carelessly in my hobnailed boots of postmodern secularism.

Friday links

Thanks to the Trafford Senior Netball Club

Some funny stuff from Cracked: “14 Photographs That Shatter Your Image of Famous People”; “5 Dismissive Arguments You Only Use When You’re Wrong”; “6 Famous Things From History That Didn’t Actually Exist”

Stan Carey tells an old joke.

Something that would be true if it were true that “empathy is the source of ethics” (SMBC.)

Thomas Nagel drives some people so crazy that they’re willing to endorse statements like this: “The view that all sciences are in principle reducible to the laws of physics must be true unless you’re religious.” (The Weekly Standard)  A hundred years ago, it seemed that only supernaturalists could doubt that arithmetic was in principle reducible to formal logic.  Then along came Gödel, and it became obvious, first, that arithmetic was not reducible to formal logic, and, second, that such irreducibility implied absolutely nothing about the supernatural.  In those same days, Free Will and Determinism was a big debate, with Determinists claiming that only in a perfectly predictable universe could rationality function.  Then physics demonstrated that the universe is far from perfectly predictable, and rationality didn’t seem any the worse for it.  Indeed, over the years so many reductionist theories that were once proposed as the only possible worldview for a rational person have been exploded that anyone saying “The view that x is in principle reducible to y must be true unless you’re religious” at once bears the burden of proving that s/he is not a dumbass.

How people talk about the secrecy that surrounded the Manhattan Project (Nuclear Secrecy)

Why do some politicians recover from scandal, while others are ruined?  Noah Millman has a theory: “We are willing to forgive our politicians for a multitude of private sins, because really what we care about is that we come first. They can treat their spouses and children abominably if we know that at the end of the day all they really care about is winning. Because to win they have to do what we want. Or at least convince us that they have.”

Why you shouldn’t earn a doctorate in the humanities (Slate.)

Incompleteness: “It turns out that much of this common law of contracts was specifically designed around a particular standard-form contract. When the economist junked the standard-form contract and wrote a whole new one, he also (perhaps inadvertently) junked the common law that went with it. The result was that the gaps became a lot larger, and litigation more probable. The very act that was meant to reduce contractual incompleteness ended up increasing it.” (Volokh)

Anglo-American rightists have been writing love letters to General Augusto Pinochet for almost forty years.  This article starts off like one of them, then runs into some actual Chileans who introduce the author to the ghastly realities of the general’s regime.  (Takimag)

The group of researchers who coined the acronym WEIRD (Western, Educated, Industrialized, Rich, Democratic) (Pacific Standard)

The Internet: Bureaucracy or Fiefdom?

From the Bayeux Tapestry

Bruce Schneier declares:

It’s a feudal world out there.
Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them — or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

The whole piece is worth reading.  For my part, I’ve often wondered if the Internet doesn’t fit Max Weber’s conception of a bureaucracy.  Weber described six major characteristics of bureaucracy (here’s a handy summary of his views.)  First and most familiar in the popular use of the word, a bureaucracy has a formal hierarchical structure.  While there is no group of people who are the president and board of directors of the Internet, the machines that make up the Internet do in fact relate to each other according to set routines.

Max Weber, by ludilozezanje

Weber described bureaucracies staffed by human officials, but parts of his description still apply where, as in the functioning of the Internet, the officials are replaced by machines.

The second characteristic of bureaucracy in Weber’s description is a set of rules that consistently transform particular decisions made in one part of the structure into particular actions taken in other parts of the structure.  In this regard every bureaucracy aspires to the condition of a machine; as a bureaucracy composed of machines, the Internet would in a sense represent the ultimate bureaucracy. Along with these rules comes a heavy emphasis on written documents and permanent records, to ensure that decisions are communicated from one part of the structure to another accurately and that they are converted into action appropriately.  Here again, the Internet’s tendency to preserve data makes it the ideal form of bureaucracy.

Third, Weber says that bureaucracies are organized by functional specialty.  Here we see two levels of organization taking place independently of each other.  Of course, the machines are sorted together by their functions.  At the same time, the people who use the Internet develop specializations in their ways of relating to it.  Those who resist specialization remain on the fringes of the Internet.  So, a general-interest blog like this one toddles along for years with a handful of readers; start a tumblr site devoted entirely to eighteenth-century cocktail recipes, and you might draw a thousand followers in a week.  Through them, you can learn more about your topic than you had imagined possible.  Because of the efficiency that results from the Internet’s specialization and consistency, users have strong incentives to specialize their own use of the system and to respect its rules.  Thus, the Internet’s human users behave as they would if they were clients of a bureaucracy staffed by human officials.

Fourth, Weber’s bureaucracies have missions.  These missions are not simply tasks for which groups might be established ad hoc, but are the overarching goals that justify the organization’s continued existence.  Because so many people have stakes in the continued existence of large bureaucracies, their missions tend to become rather broad and ill-focused over time; the last thing anyone wants is for the bureaucracy that provides his or her livelihood to have completed its mission.  A phrase like “the distribution of information,” precisely because it is so vague, is therefore a perfectly apt mission statement for a major bureaucracy.

Fifth, bureaucracies are impersonal structures, in which the relationship of one person to another is restricted to the roles that those people are playing.  So, if Alice is a sales agent for her company and Bob is a purchasing agent for his, their business discussions are between vendor and client, not between Alice and Bob.  When Internet cafes first appeared, nearly twenty years ago, a huge percentage of them had Peter Steiner’s cartoon from the 5 July 1993 issue of The New Yorker taped to the wall:

https://i1.wp.com/www.ricklatona.com/wp-content/uploads/2008/12/picresized_1229584137_youreadog1.gif

Now we’re living in the age of Facebook, and on the Internet everyone knows that you’re a dog, what you had for breakfast, where you like to do your business, etc.  Still, there is an element of impersonality built into online interactions.  So online political discussions, even on Facebook itself, quickly become interactions between supporter of Party X and supporter of Party Y, even when those supporters are close friends in other settings.  Obviously people can turn each other into symbols of opinions they dislike in any social environment, but I don’t think it’s controversial to say that online discussions are particularly prone to this sort of reduction.  Moreover, the most pleasant online relationships tend to be the simplest, those in which participants change their personas least often.  If Alice and Bob meet at a site devoted to eighteenth-century cocktail recipes and interact simply as devotees of those recipes, I suspect they are likelier to look forward to hearing from each other than they will be if they start talking about other topics and expecting other kinds of emotional and intellectual support from each other.  Offline, I would think it would be the opposite, that people who discuss only one topic and present themselves to each other in only one way are unlikely to become close.  I’d be interested to see studies of this hypothesis; a quick Google Scholar search hasn’t shown me any, but if you know of such, please enlighten me.

Sixth, employment in a bureaucracy is based on technical qualifications.  Civil service exams, educational requirements, efficiency ratings, and other devices for measuring competence are not necessary if the best person for the job is the person who has inherited it as a matter of right.  They are necessary if the best person is the ablest.  Of course, every human bureaucracy exists within a society where there are laws, institutions, and ethical ideas that predate the rise of bureaucracy and survive independently of it.  So one does not expect a certifying authority to require the person who owns a business to prove that s/he is the ablest person to oversee its operations.  Nor does one expect anyone to require potential parents to demonstrate any particular abilities in order to earn a license authorizing them to produce children, or to raise the children they have produced.  If all social life were subject to the demands of a single bureaucracy, we would expect to see such requirements.  Indeed, as bureaucratization proceeds apace, we see ever more footprints of bureaucracy in areas which were once matters of right.  In many parts of the USA, for example, voters are routinely required to produce identification before they are allowed to take ballots, even though there is no evidence that anyone has ever impersonated a voter, and absolutely no way to affect the outcome of an election by impersonating voters.  These laws are accepted, not because they serve any legitimate purpose, but simply because it seems natural to the residents of a social world dominated by bureaucracy to be called on to produce one’s papers.

As for the Internet, there are technical specifications devices must meet in order to be connected.  This automated bureaucracy rarely sorts its human users by technical qualifications, though they do sort themselves in much the way that the clients of bureaucracies staffed by humans sort themselves.  And, as they do when interacting with bureaucracies staffed by humans, Internet users do tend to see themselves as clients receiving services rather than as citizens asserting their rights.  Zach Weiner expressed that point very effectively in February, with his now-classic cartoon about the so-called “Stop Online Piracy Act” that was then before the US Congress:

Saturday Morning Breakfast Cereal, 2 February 2012

So you can see why I have thought it made sense to look at the Internet as a bureaucracy in Max Weber’s sense.  Perhaps, though, it makes more sense to follow Mr Schneier and look at it as a feudal realm.  While every element of a bureaucracy is, at least in theory, accountable to some overall authority that regulates that bureaucracy, the elements to which we trust our online security are accountable to no one.  As Mr Schneier writes:

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates — iPhone, for example — but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

In some of my darker moments, I’ve wondered if the USA is undergoing a revival of feudalism.  Mr Schneier makes a strong case that it is, at least in this area.

Wednesday links

Zach Weiner explains very succinctly why it’s so hard to be a pacifist, Eve Tushnet reads about single mothers, John Wilkins doesn’t believe politicians have mandates, some guy named “Zippy Catholic” decides that women’s suffrage and abortion rights are inseparable (and therefore women’s suffrage must go!), Laura Flanders and Eve Ensler have never talked to each other about their vaginas and don’t plan to start, this map does a terrific job of encapsulating the results of the 2012 US presidential election, xkcd is hilarious, and Jim Goad can think of ten good reasons not to assassinate Barack Obama.

Time and cartoons

In the comic books, Superman is quitting his day job as a newspaperman.  The company that publishes the Superman titles, DC Comics, explains that, as part of an effort to make the character more relevant to “the 21st century,” he will become- a blogger!  Evidently the part of the 21st century they want him to be relevant to is the part that ended about 6 years ago.

Nina Paley summarizes the history of the Levant in 3 minutes and 32 seconds of animation.

Despite what you’d expect from a webcomic with its name, Doghouse Diaries rarely deals with dogs.  Yesterday’s strip is therefore in rare company.

Neither Zach Weiner nor Randall Munroe is at all impressed with the level of statistical discourse in mass media.