A logical God?

Probably the least popular of all the familiar arguments that are from time to time offered to prove the existence of God is the Ontological Proof.  Here is a one-paragraph synopsis of Saint Anselm’s version of the Ontological Proof, taken from the Stanford Encyclopedia of Philosophy:

The first, and best-known, ontological argument was proposed by St. Anselm of Canterbury in the 11th. century C.E. In his Proslogion, St. Anselm claims to derive the existence of God from the concept of a being than which no greater can be conceived. St. Anselm reasoned that, if such a being fails to exist, then a greater being—namely, a being than which no greater can be conceived, and which exists—can be conceived. But this would be absurd: nothing can be greater than a being than which no greater can be conceived. So a being than which no greater can be conceived—i.e., God—exists.

Even believers tend to react to the Ontological Proof with distaste and irritation.  So it was rather interesting when, in 2013, logicians Christoph Benzmüller and Bruno Woltzenlogel Paleo used automated theorem provers to verify Kurt Gödel’s demonstration that the basic axioms of Logic K, a form of modal logic developed by Saul Kripke (the “K” in “Logic K” stands for “Kripke”), imply that the Ontological Proof is sound.
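For readers curious what is actually being proved: Gödel’s argument, in Dana Scott’s transcription (the version Benzmüller and Woltzenlogel Paleo encoded for their provers), rests on a handful of axioms about “positive” properties. Omitting the intermediate definitions of essence and necessary existence, the skeleton can be compressed roughly as follows, where $P(\varphi)$ reads “$\varphi$ is a positive property,” $G(x)$ reads “$x$ is God-like,” and $\mathit{NE}$ is the property of existing necessarily:

```latex
\begin{align*}
\text{A1:}\; & P(\lnot\varphi) \leftrightarrow \lnot P(\varphi)
  && \text{of a property and its negation, exactly one is positive}\\
\text{A2:}\; & \bigl[P(\varphi) \land \Box\,\forall x\,(\varphi(x) \to \psi(x))\bigr] \to P(\psi)
  && \text{positive properties are closed under entailment}\\
\text{D1:}\; & G(x) \leftrightarrow \forall\varphi\,\bigl[P(\varphi) \to \varphi(x)\bigr]
  && \text{God-like: having every positive property}\\
\text{A3:}\; & P(G)
  && \text{being God-like is itself positive}\\
\text{A4:}\; & P(\varphi) \to \Box\,P(\varphi)
  && \text{positive properties are necessarily positive}\\
\text{A5:}\; & P(\mathit{NE})
  && \text{necessary existence is positive}\\
\text{T:}\; & \Box\,\exists x\,G(x)
  && \text{conclusion: necessarily, a God-like being exists}
\end{align*}
```

The surprise in the 2013 result was that the conclusion already follows in K, the weakest normal modal logic, whereas Gödel himself had assumed the much stronger system S5.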

Logic K is not the only possible system of logic, so this implication does not by itself prove that God exists.  What makes Professors Benzmüller and Woltzenlogel Paleo’s work so interesting is that Logic K is an extremely simple system, especially as compared with a system like arithmetic, which, as Gödel himself showed, cannot be captured by any finite, or even computably specifiable, list of basic axioms.  The reasoning we use in practical life adds manifold layers of complexity to propositional frameworks such as those of formal logic or mathematics.  If something as specific as monotheism can come springing out of something as spare as the basic axioms of Logic K, then the idea that any form of rigorous intellectual activity can be neutral regarding the kinds of questions monotheism is supposed to answer becomes tenuous.

That is not to say that our cultural formation precedes our intellectual activity, and so that all of our systematic reasoning is infused with the particular circumstances of the society in which we were raised, often in ways of which we are unaware.  It would no doubt be true to say this; however, it is a statement that rests on the findings of the social sciences, expressed in language that has grown up in the development of those sciences.  And the social sciences themselves derive their authority from their status as products of rigorous intellectual activity.  If all such activity is already implicated in theology, then an attempt to confine the implications of Professors Benzmüller and Woltzenlogel Paleo’s work to areas already explored by the social sciences is an attempt to minimize the scope of the problem.

A God who holds the world record for eating the most skateboards is greater than a God who does not hold that record

xkcd 1505

Nor is it even to say that as we develop a system of reasoning we are condemned to stack the deck, consciously or unconsciously, in favor of our own religious commitments.  Aristotle grew up in a society in which monotheism was an alien phenomenon which, on those rare occasions when it would be mentioned, was regarded with undisguised contempt. Yet, as such Muslim and Christian commentators on Aristotle as Ibn Sina, Ibn Rushd, and Thomas Aquinas showed many centuries ago, Aristotle’s logic works best when it is applied to a monotheistic universe.  Aristotle himself would no doubt have regarded this as a reductio ad absurdum of his work, and would have gone back to the drawing board to produce a new system of logic, one that fit with what he regarded as the real world of multiple gods and other beings whom it was obligatory to worship.  Perhaps he would have succeeded in creating such a system; he was Aristotle, after all, and was as well equipped as anyone has ever been to accomplish such a thing.  But as it happens, he never had occasion to try, and for two thousand years Aristotle’s logic was the prevailing system in the world from India to Ireland.

When Aristotle’s system of logic was in favor, the work of men like Ibn Sina, Ibn Rushd, and Thomas Aquinas gave compelling grounds for accepting monotheism.  That Aristotle, as a polytheist from a resolutely polytheistic culture, could not be accused of stacking the deck to produce a system that supported monotheism, certainly added to the force of these grounds.  Nowadays, Aristotle’s logic is obsolete, and so one could hardly expect logicians to become monotheists simply because the Medieval Scholastics found in it support for monotheism.

Still, that it is monotheism that jumps out, not only from a logical system constructed by a rabbi’s son like Saul Kripke on the basis of a metaphysics constructed by a vaguely Christian thinker like Leibniz, but also from a system constructed by the thoroughly pagan Aristotle, does make it difficult to claim that the relationship between monotheism and systematic reasoning is entirely an illusion resulting from indoctrination in monotheism.  It is likely that the idea of a single deity who is the supreme creator, ruler, and judge of the world is a sort of default position built into the whole project of codifying the rules of logic.

Just as it does not follow from the fact that Logic K rests on axioms which, taken together, imply the existence of God, that God in fact exists, so it would not follow from God’s status as a default hypothesis of formal logic that God in fact exists.  Like all other human activities, formal logic is a byproduct of any number of particular and contingent circumstances, starting with the biological adaptations that enabled our ancestors to survive, continuing through the particularities of our cultural backgrounds, and continuing through the countless vicissitudes that make it possible to distinguish the life of one individual from that of another.  It may well be that formal logic, mathematics, and the sciences, pursuits in which only a small minority of the people in the world today, and only a minuscule percentage of all the people who have ever lived, take an interest, will ultimately prove to be sharply limited in their ability to cast light on the weightiest matters.  Perhaps the sorts of things most people have always found more interesting will prove to be more powerful aids to understanding, or perhaps systematized reasoning in the forms we now know will ultimately turn out to be a relatively trivial preparation for some new form of understanding that awaits us in the future.  Perhaps neither of those things will happen, and we will simply come to accept a tendency toward monotheism as a not-very-interesting shortcoming inherent in projects to codify the rules of correct reasoning.

Of course, monotheism is also a minority pursuit in the overall picture of humanity.  At no point in the history of the world has a majority of the human race been monotheistic in its views.  Today Christians, Muslims, Jews, and members of other monotheistic groups are probably more numerous than ever before, yet they still comprise well under half the world’s people.  What is more, monotheism seems to have been invented only once, in Babylon during the Captivity, while polytheism, animism, ancestor-worship, and other religious orientations all likely arose independently in many times and places.  In that context, monotheism looks like a freak occurrence.

It is that very freakishness that makes the recurrence of monotheism at the roots of logical systems a matter of interest.  If something so particular can keep cropping up wherever people make their most intense attempts to be general, what oddities might come out of the far more complicated sets of axioms that underlie applied reasoning?  In the light of what Professors Benzmüller and Woltzenlogel Paleo have shown about Logic K, we could hardly be surprised if hidden somewhere in the axioms of trigonometry were a recipe for kosher chicken soup, or for that matter if a description of the Loch Ness Monster were encoded somewhere in Newton’s Laws of Motion.

WEIRD laughter

Recently, several websites I follow have posted remarks about theories that are meant to explain why some things strike people as funny.

Zach Weinersmith, creator of Saturday Morning Breakfast Cereal, wrote an essay called “An artificial One-Liner Generator” in which he advanced a tentative theory of humor as problem-solving.

Slate is running a series of articles on theoretical questions regarding things that make people laugh.  The first piece, called “What Makes Something Funny,” gives a lot of credit to a researcher named Peter McGraw, who is among the pioneers of “Benign Violation Theory.”  This is perhaps unsurprising, since Professor McGraw and his collaborator Joel Warner are credited as the authors of the piece.  Professor McGraw and Mr Warner summarize earlier theories of humor thus:

Plato and Aristotle introduced the superiority theory, the idea that people laugh at the misfortune of others. Their premise seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes. Sigmund Freud argued for his relief theory, the concept that humor is a way for people to release psychological tension, overcome their inhibitions, and reveal their suppressed fears and desires. His theory works well for dirty jokes, less well for (most) puns.

The majority of humor experts today subscribe to some variation of the incongruity theory, the idea that humor arises when there’s an inconsistency between what people expect to happen and what actually happens.

Professor McGraw and Mr Warner claim that incongruity theory does not stand up well to empirical testing:

Incongruity has a lot going for it—jokes with punch lines, for example, fit well. But scientists have found that in comedy, unexpectedness is overrated. In 1974, two University of Tennessee professors had undergraduates listen to a variety of Bill Cosby and Phyllis Diller routines. Before each punch line, the researchers stopped the tape and asked the students to predict what was coming next, as a measure of the jokes’ predictability. Then another group of students was asked to rate the funniness of each of the comedians’ jokes. The predictable punch lines turned out to be rated considerably funnier than those that were unexpected—the opposite of what you’d expect to happen according to incongruity theory.

To which one might reply that when Mr Cosby and Ms Diller actually performed their routines, they didn’t stop after the setup and ask the audience to predict the punchline.  Nor would any audience member who wanted to enjoy the show be likely to try to predict the punchline.  Doing so would make for an entirely different experience than the one the audience had paid for.

Be that as it may, Professor McGraw and Mr Warner go on to claim that their theory of “benign violation” is supported by empirical evidence:

Working with his collaborator Caleb Warren and building from a 1998 HUMOR article published by a linguist named Thomas Veatch, he hit upon the benign violation theory, the idea that humor arises when something seems wrong or threatening, but is simultaneously OK or safe.

After extolling some of the theory’s strengths, the authors go on:

Naturally, almost as soon as McGraw unveiled the benign violation theory, people began to challenge it, trying to come up with some zinger, gag, or “yo momma” joke that doesn’t fit the theory. But McGraw believes humor theorists have engaged in such thought experiments and rhetorical debates for too long. Instead, he’s turned to science, running his theory through the rigors of lab experimentation.

The results have been encouraging. In one [Humor Research Laboratory] experiment, a researcher approached subjects on campus and asked them to read a scenario based on a rumor about legendarily depraved Rolling Stones guitarist Keith Richards. In the story—which might or might not be true—Keith’s father tells his son to do whatever he wishes with his cremated remains—so when his father dies, Keith decides to snort them. Meanwhile the researcher (who didn’t know what the participants were reading) gauged their facial expressions as they perused the story. The subjects were then asked about their reactions to the stories. Did they find the story wrong, not wrong at all, a bit of both, or neither? As it turned out, those who found the tale simultaneously “wrong” (a violation) and “not wrong” (benign) were three times more likely to smile or laugh than either those who deemed the story either completely OK or utterly unacceptable.

In a related experiment, participants read a story about a church that was giving away a Hummer H2 to a lucky member of its congregation, and were then asked if they found it funny. Participants who were regular churchgoers found the idea of mixing the sanctity of Christianity with a four-wheeled symbol of secular excess significantly less humorous than people who rarely go to church. Those less committed to Christianity, in other words, were more likely to find a holy Hummer benign and therefore funnier.

Lately, social scientists in general have been more mindful than usual of the ways in which North American undergraduates are something other than a perfectly representative sample of the human race.  Joseph Henrich, Steven Heine, and Ara Norenzayan have gone so far as to ask, in the title of a widely cited paper, whether the populations most readily available for study by psychologists and other social scientists are in fact “The weirdest people in the world?”  In that paper, Professors Henrich, Heine, and Norenzayan use the acronym “WEIRD,” meaning Western, Educated, Industrialized, Rich, and Democratic.  Their abstract:

Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggests that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

It would be particularly easy to see why a theory like Benign Violation would have a special appeal to undergraduates.  Undergraduate students are rewarded for learning to follow sets of rules, both the rules of the academic disciplines their teachers expect them to internalize and the rules of social behavior appropriate to people who, like most undergraduates, are living independent adult lives for the first time.  So, I suppose, if one wanted to defend Superiority Theory (as mentioned, for example, in Aristotle’s Poetics, 1449a, pp. 34-35), one would be able to use the same results, saying that students simultaneously saw themselves as superior both to the characters in the jokes who did not follow the usual rules and to those who would enforce those rules in too narrowly literalistic a fashion to fit with the overall approach of higher education, where innovation and flexibility are highly valued.  Here the WEIRD phenomenon comes into play as well, since cultures vary in their ideas of what rules are and what relationship they have to qualities like innovation and flexibility.  Moreover, one could also say that the judgment that a particular violation is or is not benign itself implies superiority over those involved in the violation, and that this implication of superiority is what generates laughter.

Also, because undergraduates are continually under pressure to internalize one set of rules after another, they often show anxiety related to sets of rules.  This may not be the sort of thing Sigmund Freud had in mind when he talked about Oedipal anxiety, but it certainly does drive undergraduates to seek relief.  Examples of actions that are at once quite all right and by no means in accordance with the rules may well provide that relief.

Incongruity theorists may find comfort in Professor McGraw’s results, as well.  The very name “Benign Violation” as well as experimental rubrics such as “wrong” and “not wrong” are incongruous combinations by any definition.  So a defender of Incongruity Theory may claim Benign Violation as a subcategory of Incongruity Theory, and cite these results in support of that classification.

Professor McGraw is evidently aware of these limitations.  He and Mr Warner explain what they did to rise above them:

[T]hree years ago, he set off on an international exploration of the wide world of humor—with me, a Denver-based journalist, along for the ride to chronicle exactly what transpired. Our journey took us from Japan to the West Bank to the heart of the Amazon, in search of various zingers, wisecracks and punch lines that would help explain humor once and for all. The result is The Humor Code: A Global Search for What Makes Things Funny, to be published next week—on April Fool’s Day, naturally. As is often the case with good experiments—not to mention many of the funniest gags—not everything went exactly as planned, but we learned a lot about what makes the world laugh.

It isn’t April First yet, so I don’t know how well they have done in their efforts to expand their scope.

One sentence that struck me as wrong in Professor McGraw and Mr Warner’s piece was this one, about Superiority Theory: that it “seems to explain teasing and slapstick, but it doesn’t work well for knock-knock jokes.”  I’m not at all sure about that one.  In a knock-knock joke, two hypothetical characters take turns delivering five lines of dialogue.  The first character to speak is the Knocker (whose first line is always “Knock-knock!”).  The second character to speak is the Interlocutor (whose first line is always “Who’s there?”).  The Knocker’s second line is an unsatisfactory answer to this question.  The Interlocutor’s second line begins by repeating this incomplete answer, then adds the question word “who?”  The Knocker’s third line then delivers the punchline in the form of a repetition of the unsatisfactory answer followed by one or more additional syllables that change the apparent meaning of the initial unsatisfactory answer.

Knock-knock jokes became popular in the USA in the 1950s, as part of a national craze.  The first joke recorded in this mid-twentieth century craze, I have read, is the following:

K: Knock-knock!

I: Who’s there?

K: Sam and Janet.

I: Sam and Janet who?

K: Sam and Janet evening! (sung to the tune of “Some Enchanted Evening”)

Apparently all of the jokes that brought the form into such prominence in the 1950s that they are still beloved today by seven-year-olds of all ages took this form, in which the punchline involved the Knocker bursting into song with a popular Broadway tune of the period.
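The five-line structure described above is rigid enough to check mechanically.  Here is a minimal sketch in Python; the function name and list representation are my own illustration, not anything standard:

```python
# A minimal sketch of the classic five-line knock-knock structure.
# All names here are illustrative inventions, not a standard library.

def is_well_formed(lines):
    """Check that a five-line exchange follows the classic pattern:
    Knocker and Interlocutor alternate, the Interlocutor echoes the
    unsatisfactory answer plus "who?", and the punchline repeats and
    extends that answer."""
    if len(lines) != 5:
        return False
    opening, challenge, answer, echo, punchline = lines
    return (
        opening.lower().startswith("knock")
        and challenge == "Who's there?"
        # The Interlocutor's second line repeats the answer plus "who?"
        and echo == f"{answer} who?"
        # The punchline begins by repeating the unsatisfactory answer...
        and punchline.lower().startswith(answer.lower())
        # ...and then adds at least one more syllable.
        and len(punchline) > len(answer)
    )

joke = [
    "Knock-knock!",
    "Who's there?",
    "Sam and Janet",
    "Sam and Janet who?",
    "Sam and Janet evening!",
]
print(is_well_formed(joke))  # → True
```

By this check, the “Orange you glad” routine discussed later fails on every clause, which is part of the point of that joke.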

I think the jokes from this original craze almost have to be examples of superiority.  The Knocker is confident that the Interlocutor will be surprised when the punchline is presented under the usual conditions of the joke.  This is not to deny that, if the joke were interrupted and the Interlocutor were asked to predict the punchline after the manner of Professor McGraw’s students, the Interlocutor might be able to do so.  When the punchline is presented, the Interlocutor will join the Knocker in satisfaction at being part of the relatively elite portion of the population who recognize current Broadway hits when they hear them.

As knock-knock jokes have become more familiar over the decades, meta-knock-knock jokes have gained a following.  For example, a person named Alice might play the Knocker in this joke:

K: Knock knock!

I: Who’s there?

K: Alice.

I: Alice who?

K: Alice (in a tone suggesting that she is wounded that the Interlocutor doesn’t recognize her)

The meta-knock-knock joke suggests superiority to the genre of knock-knock jokes.  If first-order knock-knock jokes are popular among seven-year-olds of all ages, meta-knock-knock jokes are popular among eight-year-olds of all ages, suggesting superiority to those who still persist in telling first-order knock-knock jokes.

The world’s most hated knock-knock joke is this meta-knock-knock:

K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Banana.
I: Banana who?
K: Knock, knock.
I: Who’s there?
K: Orange.
I: Orange who?
K: ORANGE YOU GLAD I DIDN’T SAY BANANA!

This joke attacks several parts of the shared understanding between Knocker and Interlocutor.  The joke is more than five lines long, the fifth line does not take the form of the original unsatisfactory response plus an additional syllable or syllables, the Knocker expects the Interlocutor to repeat his or her two lines multiple times, and the punchline does not include a repetition of the original unsatisfactory response.  For the experienced Interlocutor, these attacks are an undue imposition on the Knocker-Interlocutor relationship.  For anyone else, the whole thing would be utterly pointless.

Hated as the joke is, Knockers of a particular sort, mostly eight-year-old boys, seem unable to resist it.  Willing Interlocutors can rely on these boys to laugh uproariously every time they drag them through the ritual responses.  Here too, Superiority Theory seems to be the only explanation for the boys’ laughter and the strain tolerating the joke puts on the Interlocutors.  The Knockers who enjoy the joke laugh at their own power to inflict it on their Interlocutors.

Each time a potential Interlocutor is confronted with “Orange you glad I didn’t say banana,” the joke gets a bit more annoying.  Perhaps this is because of an aspect of politeness recently referenced on yet another of my favorite sites, Language Log.  There it was mentioned that Penelope Brown and Stephen Levinson, founders of “Politeness Theory,” have provided conceptual tools for distinguishing between two kinds of statement that offer information the hearer should already have: those that suggest the hearer does not in fact know it, and thereby give offense, and those that carry no such suggestion and therefore give none.  A joke with a painfully obvious punchline may fall into the first category, as do the reiterated responses in “Orange you glad I didn’t say banana.”  Casual remarks about the weather and other forms of small talk usually fall into the second category, as do formalized utterances generally.

Two items of interest to Classics types

When the world was young and I was in grad school, many of my classmates went to Rome to hang out with Father Reginald Foster.  Reggie, as they all called him, is an American priest who at that time was in charge of translating official Vatican documents into Latin.  His schedule was light in the summer, so Reggie ran a summer institute in conversational Latin.  Granted, there aren’t any native speakers of Latin around to converse with, but there is a substantial body of permanently interesting Latin literature, and it is easier to read the language if you can also speak it.

Reggie moved back to Milwaukee after Pope John Paul II died.  He teaches conversational Latin there from time to time.  No future generations of graduate students will be studying under him in Rome, but two current graduate students have revived the Rome summer program.  They call it the Paideia Institute; Slate magazine ran a piece about it recently.

David Graeber

Also of keen interest to classicists is this recent interview that economic anthropologist David Graeber granted to the website Naked Capitalism.  Graeber summarizes Adam Smith’s hypothesis that money originated as an advancement on barter systems that had prevailed before its adoption.  He then points out that in the 235 years since Smith published that hypothesis in The Wealth of Nations, observers have examined thousands of cultures in search of examples of pre-monetary barter economies, and that they have yet to find one.  Graeber concludes that Smith’s hypothesis is thereby defeated.  Societies which have not invented money do not organize markets around barter; they do not organize markets at all.  Money and markets arise together, and barter becomes widespread only when currency systems collapse.  Non-monetary societies distribute goods and services, not through markets, but through hierarchies in which obligations are based on force.  The king or chief or whatever he is has what he has because everyone else is indebted to him for protection and status, and they have what they have because of their relations with him.  When multiple authorities lay claim to the same person, they need a way of sorting out whose claim comes first and which authority is entitled to demand what deference or service.  Sometimes they develop a way of sorting those claims that involves quantifying them and making them transferable.  Once claims on a person’s deference or service can be quantified and transferred, there is a need for tokens to signify the quantification and contracts to enforce the transfer.  That is to say, there is money, and with it the dawn of market society.

Graeber makes some remarks that are similar to points that come up in some classes I teach.  For example:

Since antiquity the worst-case scenario that everyone felt would lead to total social breakdown was a major debt crisis; ordinary people would become so indebted to the top one or two percent of the population that they would start selling family members into slavery, or eventually, even themselves.

Well, what happened this time around? Instead of creating some sort of overarching institution to protect debtors, they create these grandiose, world-scale institutions like the IMF or S&P to protect creditors. They essentially declare (in defiance of all traditional economic logic) that no debtor should ever be allowed to default. Needless to say the result is catastrophic. We are experiencing something that to me, at least, looks exactly like what the ancients were most afraid of: a population of debtors skating at the edge of disaster.

And, I might add, if Aristotle were around today, I very much doubt he would think that the distinction between renting yourself or members of your family out to work and selling yourself or members of your family to work was more than a legal nicety. He’d probably conclude that most Americans were, for all intents and purposes, slaves.

When I’m talking to a class, I’m rather more emphatic than Graeber in saying that in this conclusion Aristotle was a man of his time, and that our view of wage labor as a form of freedom may be as legitimate in its own way as was the Greek view of wage labor as a form of slavery.  Partly that difference in views stems from the fact that so many slaves in ancient Greek cities were paid wages, and that those who labored side by side with free people in big workshops were paid exactly the same wages as those (nominally) free people, while American slaves were generally denied access to money.  Still, I do have a lecture that unnerves them when it ends with my remark that Aristotle would not have thought that we moderns have abolished slavery, but that we have abolished freedom.

I can’t resist quoting another bit of Graeber’s interview.  After he derides the idea of money as a development subsequent to a barter economy, we have this exchange:

PP: You’d be forgiven for thinking this was all very Nietzschean. In his ‘On the Genealogy of Morals’ the German philosopher Friedrich Nietzsche famously argued that all morality was founded upon the extraction of debt under the threat of violence. The sense of obligation instilled in the debtor was, for Nietzsche, the origin of civilisation itself. You’ve been studying how morality and debt intertwine in great detail. How does Nietzsche’s argument look after over 100 years? And which do you see as primal: morality or debt?

DG: Well, to be honest, I’ve never been sure if Nietzsche was really serious in that passage or whether the whole argument is a way of annoying his bourgeois audience; a way of pointing out that if you start from existing bourgeois premises about human nature you logically end up in just the place that would make most of that audience most uncomfortable.
In fact, Nietzsche begins his argument from exactly the same place as Adam Smith: human beings are rational. But rational here means calculation, exchange and hence, trucking and bartering; buying and selling is then the first expression of human thought and is prior to any sort of social relations.

But then he reveals exactly why Adam Smith had to pretend that Neolithic villagers would be making transactions through the spot trade. Because if we have no prior moral relations with each other, and morality just emerges from exchange, then ongoing social relations between two people will only exist if the exchange is incomplete – if someone hasn’t paid up.

But in that case, one of the parties is a criminal, a deadbeat and justice would have to begin with the vindictive punishment of such deadbeats. Thus he says all those law codes where it says ‘twenty heifers for a gouged-out eye’ – really, originally, it was the other way around. If you owe someone twenty heifers and don’t pay they gouge out your eye. Morality begins with Shylock’s pound of flesh.
Needless to say there’s zero evidence for any of this – Nietzsche just completely made it up. The question is whether even he believed it. Maybe I’m an optimist, but I prefer to think he didn’t.

Anyway it only makes sense if you assume those premises; that all human interaction is exchange, and therefore, all ongoing relations are debts. This flies in the face of everything we actually know or experience of human life. But once you start thinking that the market is the model for all human behavior, that’s where you end up with.

If however you ditch the whole myth of barter, and start with a community where people do have prior moral relations, and then ask, how do those moral relations come to be framed as ‘debts’ – that is, as something precisely quantified, impersonal, and therefore, transferrable – well, that’s an entirely different question. In that case, yes, you do have to start with the role of violence.

Nietzsche may once have been overrated as a political thinker, but I believe that he is now seriously underrated in that wise.  So the bit above made me happy.

Tag, you’re Hitler

The 7 June 2010 issue of The Nation includes a review of some book about Ayn Rand. 

The part of the review that I wanted to note came about halfway in, where the reviewer, Corey Robin, quotes some remarks from Hitler and Goebbels that sound eerily like things Rand and the heroes of her novels habitually said.  Applying the Führerprinzip to the world of economics, Hitler in 1933 told an audience of business leaders:

Everything positive, good and valuable that has been achieved in the world in the field of economics or culture is solely attributable to the importance of personality…. All the worldly goods we possess we owe to the struggle of the select few.

Robin has an easy time finding examples of Rand saying very similar things.  Robin goes on:

If the first half of Hitler’s economic views celebrates the romantic genius of the individual industrialist, the second spells out the inegalitarian implications of the first. Once we recognize “the outstanding achievements of individuals,” Hitler says in Düsseldorf, we must conclude that “people are not of equal value or of equal importance.” Private property “can be morally and ethically justified only if [we] admit that men’s achievements are different.” An understanding of nature fosters a respect for the heroic individual, which fosters an appreciation of inequality in its most vicious guise. “The creative and decomposing forces in a people always fight against one another.”

Again, Robin can open Rand’s works almost at random and find passages that are almost identical to these translations. 

Robin attributes these similarities to the common influence of Nietzsche on Rand and the Nazis.  Certainly Rand did study Nietzsche’s works; and certainly there were periods when the Nazis tried to use Nietzsche’s name.  But I think that there is a simpler explanation. 

The Nazi party existed for about 25 years.  It ruled Germany for half that period.  During those years, Hitler, Goebbels, and the other Nazi princes made speeches and issued other public statements numerous enough to fill a library.  That Robin can rummage through those countless pages and find a few remarks that sound eerily reminiscent of the works of Ayn Rand tells us nothing about Rand and next to nothing about the Nazis. 

As a critique of Rand’s ideas, Robin’s argumentum ad Hitlerem is ludicrously unfair.  As a way of playing a game with Rand and her acolytes, however, it can be justified by the old maxim “turnabout is fair play.”  In 1963, Rand gave a speech titled “The Fascist New Frontier.”  In this speech, she claimed that the strongest influence on the ideology of the administration of President John F. Kennedy was not Marx or Keynes or Harold Laski, as the president’s right-wing critics sometimes claimed, but fascism; the Kennedy administration, she argued, was in effect a fascist group.  To support this claim, she juxtaposed snippets of President Kennedy’s public statements with snippets of similar-sounding statements from Hitler, Goebbels, Mussolini, etc.  So, if President Kennedy or his spokesmen said that ideological labels were of little importance, or that personal sacrifice was the index of patriotism, or that strong leadership was essential for national greatness, she would track down some remark from some Nazi or Fascist making the same point.  That the Nazis and Fascists did not invent these ideas, that they had been commonplaces of political discussion for centuries and may very possibly be true, did not seem to her to matter very much.  In Rand’s view of history, Nazism was simply an unfolding of ideas that were already fully developed in the philosophies of thinkers like Kant and Plato.  So the fact that an idea was familiar long before the end of the First World War does not excuse it from being a symptom of Fascist or Nazi ideology.

Robin’s invocation of Nietzsche may suggest a similar theory of history, but the rest of the piece shows a different view.  Robin praises Aristotle at length:

Unlike Kant, the emblematic modern who claimed that the rightness of our deeds is determined solely by reason, unsullied by need, desire or interest, Aristotle rooted his ethics in human nature, in the habits and practices, the dispositions and tendencies, that make us happy and enable our flourishing. And where Kant believed that morality consists of austere rules, imposing unconditional duties upon us and requiring our most strenuous sacrifice, Aristotle located the ethical life in the virtues. These are qualities or states, somewhere between reason and emotion but combining elements of both, that carry and convey us, by the gentlest and subtlest of means, to the outer hills of good conduct. Once there, we are inspired and equipped to scale these lower heights, whence we move onto the higher reaches. A person who acts virtuously develops a nature that wants and is able to act virtuously and that finds happiness in virtue. That coincidence of thought and feeling, reason and desire, is achieved over a lifetime of virtuous deeds. Virtue, in other words, is less a codex of rules, which must be observed in the face of the self’s most violent opposition, than it is the food and fiber, the grease and gasoline, of a properly functioning soul.

So Robin praises Aristotle precisely for his sense of change and development, his attempt to explain how the same action or idea can have different significance in different circumstances.  Robin thus jettisons the idea that gives Rand an excuse for her method of using quotations from historical villains to play “gotcha” with her adversaries.  The comments Robin quotes may drip with menace when we reflect on their source; spoken by another person in another setting, the same words might be rather anodyne, or even true.  For example, the claim that “Private property ‘can be morally and ethically justified only if [we] admit that men’s achievements are different'” would seem to be eminently defensible, even if words to that effect once appeared in a speech delivered by history’s least defensible man.