Tweets of the Week: 12 March 23

2. “Crazyism” in philosophy:

3. Sam Haselby on the good cop/ bad cop routine that underlies the pseudo-leftism of America’s elites:

Most people are familiar with the bad cop / good cop routine from cinema or television. America’s elite neoliberal institutions rely on it too, recognize and promote both professional types.

Think of it this way: the dogmatic neoclassical economists (in Larry Summers’s words if there is more inequality it is because people are getting more what they deserve) are the bad cops of elite neoliberalism. They frame you and beat you up, so to speak. But then their…

…colleagues come into the holding cell and say, look I want to abolish the police, return the land to the indigenous, and provide reparations. None of this is going to happen. They are the good cops of elite neoliberalism. The legitimacy and power of the system relies on both.

Another way to think of it is the bad cops have helped secure material resources of historic abundance, the good cops come in and provide the moral resources which to try to balance out the bad cop’s depredations have to be pushed to a grandiosity, a meta-historical scale.

Originally tweeted by Sam Haselby (@samhaselby) on March 17, 2023.

4. Orson Welles moaning “Mwahhh, the French”:

5. Fr Reginald Foster was a better teacher than he pretended to be:

6. Tom Holland on Saint Paul:

Here’s @holland_tom on St Paul: “You are kind of hearing him thinking aloud as he wrestles with the implications of the fact that Christ suffered this. And everything that he’s writing is an attempt to say – how this could be?

“It’s upended his expectations of God’s plan so radically that he can never arrive at, I think, a stable sense of exactly what it means. Although Paul absolutely recognizes that the fact that Jesus was crucified lies at the heart of everything that Jesus’ mission is

“and therefore how he relates to God’s plan, what is happening, the very character of the world, the very character of God, the very nature of God’s relationship to humanity – Everything has been upended by this.

“So the cross is absolutely at the heart of everything that Paul’s writing about. But at the same time, there is kind of an embarrassment about it because it is the most shocking thing imaginable, which is kind of the point.”

Originally tweeted by Susannah Black Roberts, Niece Appreciator (@suzania) on March 18, 2023.

7. Carolina Eyck plays the “Queen of the Night” aria:

8. Adult reading as a reward for adulthood:

Data and “Data”

Language Log posted two comic strips today, and I mentioned one of them in a comment about the other.

Here’s today’s xkcd (Language Log post):

And here’s a recent PhD Comics (Language Log post):

And my comment:

Reading the strip panel by panel, I wondered what the “deep philosophical question” would be. My guess was that the question would be about the role of etymological information in the process of deciding which of various constructions in current use would fit best in a particular context. How exactly you get from that stylistic process to a “deep philosophical question” about the nature of language in four panels and still have room for a punchline isn’t clear to me, but hey, PhD Comics is a big enough deal that I assume Jorge Cham can pull it off.

Instead we get this claim that “It depends on whether you consider data to be facts (plural) or information (which is singular.)” To which the only appropriate response is: No, it doesn’t! English speakers treat the words “scissors” and “trousers” as grammatical plurals, from which it does not at all follow that we “consider” the things they name to be in any sense multiple. It is all too similar to today’s xkcd, which you reproduce in today’s other post, except that relatively few of the people who like to say “There is no ‘I’ in team” seriously believe that they are raising a “deep philosophical question.”

I recommend all the other comments on the Language Log thread; it’s a mix of interesting observations, erudite humor, and speculation about the love life of the robot from Star Trek: The Next Generation.

The meaning of life (seriously- well, almost seriously)

Mother and son

A year or so ago a friend of mine asked me a series of questions, to each of which I happened to know the answer.  After I’d told her everything she wanted to know about whatever trivial subject she was asking about (it must have been a trivial subject for me to have had all the answers), she asked, “okay, what’s the meaning of life?”  I laughed.  She pressed me on it.  I decided to play along.

My wife, Mrs Acilius, has cerebral palsy that affects her arms and legs in a big way, but her cognitive abilities hardly at all.  So a wheelchair and a trained dog can fill in for everything she needs to make her way in life as an independent person with a professional career.  At about the time my friend insisted that I craft an answer to the question “What is the meaning of life?,” I’d been spending more time than usual involved with her dog and his training, and was thinking that there might be some kind of deep cosmic significance in it.  So I took a shot at the question based on that.

Maybe, I said, it’s something to do with a reciprocity between care and need.  Mrs Acilius’ relationship with her dog has a meaning to her that a relationship with a human whom she or some social services agency paid to perform the same tasks wouldn’t have.  She and the dog both need each other and both care for each other.  A paid human attendant might need a job, but might not need her; she might need the help the attendant provided, but might not need the attendant.  In other words, the need that goes toward making a relationship meaningful isn’t just what the parties in it need from each other, but that they need each other.  And what completes that meaning is that those who need each other care for each other.

This came back to mind this afternoon as I was reading an article on 3 Quarks Daily about the value of children’s lives relative to other people’s lives.  The author, Thomas Rodham Wells, tries to fit children into a utilitarian moral scheme where they are “special, but not particularly important.”  I am not a utilitarian, for many reasons, some of which I explain here.  I do think that Mr Wells’ article is well worth reading, not only because he is a most sophisticated utilitarian, but also because his article can help to flesh out the idea that the meaning of life can be found in a relationship between care and need.

For Mr Wells, children are special because of their extreme neediness:

Children are special in one particular, their extreme neediness. They have quite specific often urgent needs that only suitably motivated adults can meet, and the younger they are, the greater their neediness. That makes children’s care and protection a moral priority in any civilised society – there are lots of things that aren’t as important and should give rightly way to meeting children’s needs. As a result, children create multiple obligations upon their care-givers, as well second-order obligations on society in general, to ensure those needs are met.

Yet the fact that you should give way to an ambulance attending an emergency doesn’t mean that the person in the ambulance is more important than you; only that her needs right now are more important than you getting to work on time. Likewise, the immanence of children’s neediness should often determine how we rank the priorities of actions we want to do, such as interrupting a movie to attend to a baby’s cries.

However, the special priority neediness confers on children’s needs is not to be confused with extraordinary value.  Indeed, children are, other things being equal, less valuable than are adults:

People’s lives get more valuable as they ‘grow up’ because part of growing up is having more life to live. The greatest part of the value of a human life, as opposed to that of a merely sentient animal like a mouse, relates to the development of personhood. Persons are what children are supposed to grow up to become. Persons are able to relate to themselves in a forward and backward looking fashion, to tell a story about where they have come from and where they are going, to determine how they should live, and so on. Persons are able to relate to other persons as independent equals, to explain and justify themselves, to make and keep promises, and so on. Personhood in this sense normally rises over the course of a life, peaking generally around the mid 50s, the traditional prime of life, before beginning to decline again.

The trouble with our attitude to children is that the less like this idea of a person they are the more valuable children’s lives are supposed to be. The younger and more inchoate their minds and the shallower their ability to relate to themselves, others, or the world the more important they are held to be and the greater the tragedy if one should die. Of course I don’t deny that the death of a child is a tragedy for her parents, I’m quite convinced of the depth of their anguish. But the fact of their grief that doesn’t address the issue of relative value. Is it really the case that the death of a baby is an objectively worse thing to happen in this world than the death of a toddler than the death of a teenager than the death of that middle-aged accountant?

The death of an adult person is a tragedy because a sophisticated unique consciousness has been lost; a life in progress, of plans and ideals and relationships with other persons, has been broken off. The death of a young child, is also a tragedy, but it seems a comparatively one-sided one, the loss of an tremendously important part of her parents’ lives.

I suspect that the idea that lives are to be valued because of their narrative content is more defensible than the idea that actions are to be valued because of their net contribution to the amount of pleasure (minus pain) in the world, and so I say that Mr Wells’ utilitarianism is more sophisticated than is the garden variety of that school.  Still, like other utilitarians he ends up putting lives in order by the rank of their worthiness to live.  In the Book of Genesis, the God of Abraham, Isaac, and Jacob specializes in this sort of ranking and presumably carries it out according to some rational plan, but I think it is safe to say that the job of the God of Genesis is unlikely to come open any time soon.  Failing that, the only scenarios likely to befall any of Mr Wells’ readers in which it would actually be necessary to rank particular lives by their worthiness to live may be battlefield cases where time is extremely short and highly-developed ethical codes are of little use.

Still, reciprocity of need and care, the potential for such reciprocity, need for a person rather than for anything one might get from that person, these are all narrative concepts, and all involve the kind of growth and strength upon which Mr Wells places such a premium.  Even a utilitarianism much cruder than his, which would be blind to these concepts, would still highlight the requirement that the needy person also have the ability to answer the other’s need for such a relation to have importance.

One of the weaknesses with the idea that The Meaning of Life is to be found in a reciprocal relationship between need and care is that people’s actual experience of moral reasoning in cultures around the world has many more than one dimension.  Social psychologist Jonathan Haidt has recently attracted a good deal of attention with a model of what people are actually talking about when they talk about right and wrong, a model that operates on 6 dimensions.  One of these dimensions, an axis running from care to harm, is predominant in the thinking of many in Western, Educated, Industrialized, Rich, Democratic (WEIRD) circles.  Indeed, classical utilitarians do not recognize any other component to morality than care and harm.  Looking beyond the WEIRD world, though, we find that, while humans in all times and places tend to agree that it is usually good to care for others and bad to harm them, they also place great importance on other concerns as well.  Professor Haidt arranges these other concerns in five further dimensions of moral reasoning: loyalty vs betrayal, sanctity vs degradation, fairness vs cheating, liberty vs oppression, and authority vs subversion.

To go back to the example of my wife and her service dog, I think we can bring all of these dimensions to bear in explaining the superiority of a canine companion over a human employee.  Compare the direction of loyalty in the relationship between dog and handler with the direction of loyalty in the relationship between client and employee.  Dog and handler are loyal to each other.  Unless something has gone very far wrong, that loyalty is typically deep and untroubled.  Between client and employee, however, there is a complex network of competing loyalties.  The client and employee may or may not develop a loyalty to each other.  The employee, however, must also be loyal to whoever is paying his or her wages, who may be the client, but more likely is a social services agency, an insurance company, etc.  And in a capitalist economy, an employee cannot avoid being both cheated and oppressed unless s/he throws aside all loyalty to his or her employer and clients when negotiating wages and conditions of employment.  That isn’t to deny that this suspension of loyalty, like the suspension of disbelief when watching a play, can sometimes in the long run strengthen what was once suspended, but the sheer complexity of loyalty as a phenomenon within the marketplace does mean participants in the marketplace have a harder time building up loyalty as a virtue than they do when participating in other institutions.

In the matter of sanctity vs degradation, the reciprocity of care and need that the dog offers the handler brings sanctity into settings where a client and a human attendant might have to make a special effort to avoid degradation.  Sometimes a dog helps a handler to dress and undress, to bathe, and to do other things during which the handler is exposed and vulnerable.  The handler does the same for the dog, and the dog looks to the handler for every need.  Therefore there is nothing degrading about receiving such service.  Human attendants are usually trained to be respectful and inclined to be so, but even so, there is something demoralizing about the helplessness one feels when asking for help from someone to whom one can offer no comparable help in return.  Again, a qualified professional with the average amount of human compassion will minimize that demoralization, but some trace of it is always there.  With the dog, you are building a loving relationship in which both canine and human find something that can only be called sanctity.

As for fairness vs cheating and liberty vs oppression, the dog avoids the problems inherent in an adversarial economic system to which I alluded above.  This is especially the case in a program like that which has provided Mrs Acilius with her current service dog and both of his predecessors, Canine Companions for Independence.  CCI is funded by donations and operated largely by volunteers; clients pay only their own personal expenses.  Of course, it functions within the USA’s economic system, so it isn’t altogether a utopian scheme.  For all that Mrs Acilius is given to telling her dogs that they are “angels from heaven,” they are in fact bred and trained using wealth produced in our capitalist system, with all its characteristic virtues and vices.  But I would say that CCI’s philanthropic structure maximizes those virtues and minimizes the accompanying vices.

As it does with loyalty and betrayal, the market introduces complexity into the experiences of authority and subversion.  So an employee is under the authority of an employer and sometimes under the authority of the client, but occasionally is required to give the client direction.  This need not be an especially frustrating complex of roles, but it does make it difficult to see how there can be any great moral significance in any particular phase of it.  The relationship between dog and handler, however, is one in which the lines of authority are crystal clear.  And it is the mutual need and mutual care that keeps those lines of authority functioning.

So maybe my response to my friend wasn’t quite as silly as any response to the question “What is the meaning of life?” must initially sound.  I’m not planning to work it up into a scholarly project of any sort, because I’m not actually the sort of person who wants to have an answer to that question, but I’ve posted it here for what it’s worth.

Why I am not a utilitarian

Jeremy Bentham, under surveillance at University College London

Yesterday I visited Oxford’s “Practical Ethics” blog and read a post titled “Why I Am Not a Utilitarian,” by Julian Savulescu.  Professor Savulescu says that he is not a utilitarian because utilitarianism advises us to act in a way that he would find impossible:

As we argue, Utilitarianism is a comprehensive moral doctrine with wide ranging impact. In fact it is very demanding. Few people if any have ever been anything like a perfect utilitarian. It would require donating one of your kidneys to a perfect stranger. It would require sacrificing your life, family and sleep to the level that enabled you to maximise the well-being of others. Because you could improve the lives of so many, so much, utilitarianism requires enormous sacrifices. People have donated large parts of their wealth and even a kidney, but this still does not approach the sacrifice required by Utilitarianism.

For these reasons, one criticism of utilitarianism is that it is too demanding.

Bernard Williams, a famous critic of Utilitarianism, once infuriated Dick Hare, a modern father of Utilitarianism, in a TV interview by asking him,

“If a plane had crashed and you could only rescue your own child or two other people’s children, which would you rescue?”

Utilitarians should rescue the two strangers rather than their own child.

People think I am a utilitarian but I am not. I, like nearly everyone else, find Utilitarianism to be too demanding.

I try to live my life according to “easy rescue consequentialism” – you should perform those acts which are at small cost to you and which benefit others greatly. Peter Singer, the greatest modern utilitarian, in fact appeals to this principle to capture people’s emotions – his most famous example is that of a small child drowning in a pond. You could save the child’s life by just getting your shoes wet. He argues morality requires that you rescue the child. But this is merely an easy rescue. Utilitarianism requires that you sacrifice your life to provide organs to save 7 or 8 lives.

Easy rescue consequentialism is, by contrast, a relaxed but useful moral doctrine.

I would go further than Professor Savulescu, and argue that it is not only unreasonably difficult to act as utilitarianism would advise in these extreme situations, but that the emotional attachments and personal drives that utilitarianism urges us to discard are the very things that make it possible for us to behave morally in the first place.  Professor Savulescu quotes research showing that the people who would in fact be willing to behave in ways that utilitarians urge upon us in their thought experiments are extreme egoists and psychopaths.  While such people might be willing to let their own children die in order to save the lives of a larger number of strangers, I would not envy those strangers were they subsequently to find themselves in any way dependent on their rescuers.

Other commenters on Professor Savulescu’s post had made this point by the time I got to it, so I did not say anything about it in my own comment.  Instead, I picked up on a remark that an earlier commenter had made about the various thought experiments in which utilitarians deal.  One of the more famous of these thought experiments is the “Trolley Problem,” in which one is asked to consider two hypothetical alternatives in response to a runaway trolley.  Left unchecked, the trolley will run over several people and kill many of them. The only way one has to check it is to push a fat man in front of the trolley, killing him but saving the others.

This and similar thought experiments raise the question of knowledge: how does one know that one will be able to push the fat man over, how does one know that his body will suffice to stop the trolley, how does one know that the others will be slower to get themselves out of the way of the trolley than one will be to push the fat man over, etc etc.  In posing the hypothetical, a philosopher can always dismiss these questions by saying that, ex hypothesi, the premises are all true.  But the closer you get to real life, the more pressing and more numerous the knowledge problems become.

When philosopher Jeremy Bentham developed utilitarianism 200 years ago, he built it around a notion often called “the hedonistic calculus.”  This calculus subtracts pain from pleasure, yielding a quantity of net pleasure.  The right action is that which provides the greatest amount of net pleasure for the greatest number of people.  Faced with the question of how any person could possibly know what action would provide this, considering that to do so one would have to know every consequence one’s action is likely to have on every person for all of future time, what precisely the feelings of each of those people would be about each of those consequences, and how intense each of these feelings would be, Bentham resorted to a utopian solution.  He coined the word “Panopticon,” naming a social system in which every person was under total surveillance at all times.  In such a system, the authorities might be able to form an educated guess as to what the consequences of their policies would be for their subjects.
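
To make the arithmetic concrete, here is a minimal sketch in Python of the kind of bookkeeping the hedonistic calculus calls for.  The action names, the people affected, and every number in it are invented for illustration; supplying such numbers for real actions and real people is exactly the knowledge that, as noted above, no one has.

```python
# A toy Bentham-style "hedonistic calculus": for each candidate action, total
# the pleasure and subtract the pain it is expected to cause each affected
# person, then recommend the action with the greatest net pleasure.
# All figures are invented for illustration.

actions = {
    "keep the promise":  [(5, 1), (2, 0), (0, 3)],   # (pleasure, pain) per affected person
    "break the promise": [(8, 0), (0, 6), (1, 1)],
}

def net_pleasure(consequences):
    """Bentham's quantity: total pleasure minus total pain across everyone affected."""
    return sum(pleasure - pain for pleasure, pain in consequences)

for name, consequences in actions.items():
    print(f"{name}: net pleasure = {net_pleasure(consequences)}")

best = max(actions, key=lambda name: net_pleasure(actions[name]))
print("The calculus recommends:", best)
```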

The idea of the Panopticon in turn raises several questions.  How would such surveillance originate?  If it were instituted by people who were not themselves under surveillance, and who did not yet have access to the information surveillance would produce, how could they know that the surveillance they were crafting would itself serve to produce the greatest net pleasure among the greatest number of people?  Moreover, since the subjects of the Panopticon would know that they were under surveillance, the institution of surveillance itself would change their psychology quite dramatically, making it impossible for people living before the creation of the Panopticon to have an empirical basis for their expectations as to how such people would react to life within it.  Would those conducting the surveillance themselves be subject to surveillance, and if so, who would maintain surveillance on those conducting surveillance of those conducting surveillance?  Would there be other societies outside the realm of the Panopticon, and if so how would one know what policies would bring the greatest net pleasure to the members of those other, unsurveilled societies?  What about future generations, whom it is impossible to keep under surveillance as they do not yet exist?  How could the rulers of the Panopticon assess the feelings the consequences of their policies would produce in people of future times when they cannot monitor such people?  And, considering that the hedonistic calculus is essentially about subjective feelings of pleasure and pain, how do we respond to suggestions that our understanding of each other’s subjective feelings is always incomplete?  Finally, how does the existence of the Panopticon condition individual behavior?  Does every individual of every station have access to the complete records of the Panopticon?  Are all to use this information in making every decision in their lives?

Even in a society constructed as a Panopticon, then, it is far from clear how one could know enough to live as the utilitarians say we should.  Indeed, many forms of knowledge that are required, for example knowledge about future events or about other people’s subjective responses, may not be obtainable even in principle.  A commenter named Sean O hEigeartaigh pointed out that utilitarian thought experiments require unrealistic assumptions about the amount of knowledge a moral agent might have.  This was the point I was picking up on in the comment below:

I think Sean O hEigeartaigh makes the vital point, which is that these scenarios require more information than a person could reasonably be expected to have. Indeed, I would go further, and say that the whole concept of the hedonistic calculus requires that an agent have more information than a human being could possibly have. As such, utilitarianism is not an ethical theory at all, inasmuch as it cannot develop a set of criteria for judging human behavior. Its only possible use would be as a theodicy, a means of justifying the behavior of a supernatural being who is either omniscient or at a minimum radically better informed than humans can be.

Perhaps it is too much to say that utilitarianism is possible even as a theodicy. To make a theodicy go, one must grant, first, that a supernatural being exists, second, that that being is in some profound sense better than we are, and third, that the actions of that being require moral justification. None of these premises would appear to be particularly secure. Moreover, an attempt to use utilitarianism to justify the acts of whatever supernatural being we have posited would immediately run into a variety of other problems, some of them quite severe. Most obvious, perhaps, is the stubbornly ambiguous concept of “pleasure” at the stem of all theories of utility. I for one can think of no reason why a utilitarian theodicy would have an easier time meaning one thing at a time by this word than the attempted utilitarian philosophies of the last two centuries have had. Furthermore, the implications of conceding the existence of a supernatural being whose knowledge is radically superior to ours would seem to be rather wide-ranging and to call for a rethinking of the concept of rationality on which Bentham et al were trying to elaborate. So perhaps the time has come to discard utilitarianism altogether.

The bit about pleasure refers to another problem that Bentham tried to solve by accepting something horrid.  Asked what he would say if it could be shown that playing push-pin had given more net pleasure than high art, he would unhesitatingly say that in that case push-pin was better than high art.  Bentham’s most famous follower, John Stuart Mill, tried to escape from this by distinguishing among various forms of pleasure, high and low.

What Mill ended up doing was raising a question that has widely been considered fatal to the claims of utilitarianism to be taken seriously: what exactly is “pleasure”?  I think we know, when we say that listening to music gives us pleasure, and eating a fine meal gives us pleasure, and being reunited with a loved one gives us pleasure, and completing an important job of work gives us pleasure, that we are not saying that these experiences are interchangeable.  Saying that we have received pleasure isn’t at all like saying that we have received money.  If we set out to describe with technical precision what it is that each of those experiences has given us, we will not be surprised to find that the answer is a set of distinct and complementary feelings, not differing quantities of any particular substance.  Discard the idea that “pleasure” and “pain” are the names of substances, in the Aristotelian sense of the word “substance,” and it is difficult to see what, if anything, is left of the hedonistic calculus.

Scientific Arrogance

The other day, Ed Yong linked to an essay by Ethan Siegel.  Mr Siegel extols the virtues of science, both Science the process for gaining knowledge about nature and Science the body of knowledge that humans have acquired by means of that process.  Mr Siegel then quotes an interview Neil deGrasse Tyson gave to Nerdist, in which Mr Tyson expressed reservations about the value of philosophical study as part of the education of a young scientist.  In that interview, Mr Tyson and his interlocutors made some rather harsh-sounding remarks.  Take this segment, for example, as transcribed by Massimo Pigliucci:

interviewer: At a certain point it’s just futile.

dGT: Yeah, yeah, exactly, exactly. My concern here is that the philosophers believe they are actually asking deep questions about nature. And to the scientist it’s, what are you doing? Why are you concerning yourself with the meaning of meaning?

(another) interviewer: I think a healthy balance of both is good.

dGT: Well, I’m still worried even about a healthy balance. Yeah, if you are distracted by your questions so that you can’t move forward, you are not being a productive contributor to our understanding of the natural world. And so the scientist knows when the question “what is the sound of one hand clapping?” is a pointless delay in our progress.

[insert predictable joke by one interviewer, imitating the clapping of one hand]

dGT: How do you define clapping? All of a sudden it devolves into a discussion of the definition of words. And I’d rather keep the conversation about ideas. And when you do that don’t derail yourself on questions that you think are important because philosophy class tells you this. The scientist says look, I got all this world of unknown out there, I’m moving on, I’m leaving you behind. You can’t even cross the street because you are distracted by what you are sure are deep questions you’ve asked yourself. I don’t have the time for that.

interviewer: I also felt that it was a fat load of crap, as one could define what crap is and the essential qualities that make up crap: how you grade a philosophy paper?

dGT [laughing]: Of course I think we all agree you turned out okay.

interviewer: Philosophy was a good Major for comedy, I think, because it does get you to ask a lot of ridiculous questions about things.

dGT: No, you need people to laugh at your ridiculous questions.

interviewers: It’s a bottomless pit. It just becomes nihilism.

dGT: nihilism is a kind of philosophy.

Mr Tyson’s remarks have come in for criticism from many quarters.  The post by Massimo Pigliucci from which I take the transcription above is among the most notable.

I must say that I think some of the criticism is overdone.  In context, it is clear to me that Mr Tyson and his interlocutors are thinking mainly of the training of young scientists, of what sort of learning is necessary as a background to scientific research.  In that context, it’s quite reasonable to caution against too wide a range of interests.  It would certainly not be wise to wait until one had developed a deep understanding of philosophy, history, literature, music, art, etc, before getting down to business in one’s chosen field.

It’s true that Mr Tyson’s recent fame as narrator of the remake of the television series Cosmos puts a bit of an edge on his statements; that show is an attempt to present the history of science to the general public, and to promote a particular view of the place of science in human affairs.  It would be fair to say that the makers of Cosmos, Mr Tyson among them, have exposed some of their rather sizable blind spots in the course of the project (most famously in regard to Giordano Bruno), and a bit of time spent studying the philosophy of science might very well have served to temper the bumptious self-assurance that let them parade their howlers in worldwide television broadcasts.  And it is true, as Mr Pigliucci documents, that Mr Tyson has a history of making flip and ill-informed remarks dismissing the value of philosophy and other subjects aside from his own.  Still, the remarks from the Nerdist podcast are pretty narrow in their intended scope of application, and within that scope, having to do with apprentice scientists, I wouldn’t say that they are examples of arrogance, or that they are even wrong.

I’m reminded of a problem that has faced those who would teach Latin and ancient Greek to English speakers over the centuries.  The languages are different enough from English that it seems like a shame to start them later than early childhood.  If a student starts Latin at five and Greek at six, as was the norm for boys destined for the German Gymnasia or the English public schools in the nineteenth century, that student will likely attain a reading proficiency in the classical languages at about eight or nine years of age that a student who starts them in later life may never attain.  However, the point of learning the languages is to be able to read classical literature.  What is a nine-year-old to make of Horace or Pindar or Vergil or Sophocles or Thucydides or Tacitus?  Few of the real masterworks are intelligible as anything other than linguistic puzzles to anyone under 40.  It often happens to me that I assign such things to students who are returning to college in middle age.  They usually come to me afterward and tell me that they were surprised.  They had read them when they were in the 18-25 age bracket that includes most of my students, and hadn’t found anything of interest in them.  Rereading them later in life, the books meant a tremendous amount to them.  I trot out a very old line on these occasions, and say “It isn’t just you reading the book; the book also reads you.”  Meaning that the more life experience the reader brings, the greater the riches the reading offers.

I suppose the best thing to do would be to learn the languages in early childhood while studying mathematics and the natural sciences, to study ancient literary works for several years as specimens in the scientific study of linguistics or as aids to archaeology, and to come back to them later in life, when one can benefit from reading them on their own terms.  The same might apply to philosophy, bits of which might be slipped into the education of those aged 25 and younger, but which ought really to be introduced systematically only to those who have already confronted in practice the sorts of crises that have spurred its development over the centuries.

Be that as it may, the concept of scientific arrogance is one that has been deftly handled by one of my favorite commentators, cartoonist Zach Weiner.  I’d recommend two Saturday Morning Breakfast Cereal strips on the theme, this one about emeritus disease and this one about generalized reverence for specialized expertise.

The only beliefs likely to survive rational scrutiny are those formed in response to rational scrutiny

Maybe it is possible to categorize the set of a person’s beliefs by the importance that person attaches to each of those beliefs.  If we visualize a person’s collection of beliefs as a sphere, we might imagine a solid core consisting of beliefs to which the person attaches great importance, a loose periphery of beliefs to which the person attaches very little importance, and various layers in between.  Over time beliefs would of course shift from one layer to another, so that a belief held only tentatively under one set of circumstances might take on great significance under another set of circumstances.

For example, if I am walking through an unfamiliar part of town, the shape and color of the buildings may not be of any great interest to me.  If in that case I were to be asked to describe a building I had passed a few minutes before, I might not be surprised or bothered to be told that my description was in error.  My impressions of the details of any given building’s appearance might be very tentative, formed only incidentally as I walk along paying attention to the street signs and to features of greater interest.  However, if I lost my way and were trying to use those same buildings as landmarks, my ability to describe the buildings would have a direct bearing on my ability to find my way.  While I might not care about the buildings for their own sake, I certainly care about that task, and would therefore have a stake in my beliefs about their appearance.

Contact with other people of course has an effect on the movement of beliefs from one layer of significance to another.  Contact with an appealing person or group of people who represent a challenge to an idea in or near the core might pull that idea up towards the loose periphery of tentative beliefs, while contact with a hostile person who attacks a peripheral belief might drive that belief down towards or into the core.  So, a religious believer who at one time regards it as a core principle of his or her identity that only the practices of his or her religion can make a person virtuous may come to put less emphasis on that belief after meeting and beginning to like a number of apparently virtuous people who do not follow those practices.  Conversely, a person who has chosen one candidate for public office over another in the belief that his or her preferred candidate was the slightly better choice may very quickly begin to behave as if the difference between the two candidates was of immense moral significance if some obnoxious person confronts him or her with a demand that s/he shift his or her allegiance to the other one.

A striking example of this latter process took place in my living room some time ago.  Mrs Acilius and I were watching a television program in which singers competed for the votes of the text-messaging public.  Mrs A had been watching the program from its first installment months before; I was watching it for the first time on the night the winner was announced.  As they played the two finalists’ previous performances, I said that the female singer seemed much better than her male antagonist.  Mrs A agreed that she was the better singer and said that she had voted for her, but insisted that the difference between them was really very slight.  “They’ve just chosen better clips from her performances than from his,” she explained.  “I wouldn’t be at all upset if he won, he’s almost as good as she is.  I want to buy some of his music, as much as of hers.”  The male singer did win.  Mrs A’s immediate response?  “How the %$&# did that happen!?  She was so much better!”  Well, I said, I suppose more people voted for him, and– “People voted for Hitler, too!”  So in about fifteen seconds, the man went from being virtually as good as the other singer to being Hitler.  When I pointed this out to Mrs A, she burst out laughing.  Her belief that the female singer was the better choice floated back up towards the peripheral layer of her tentative, relatively unimportant beliefs.

While beliefs can shift from one layer of importance to another, they often stay at one level for long periods of time.  Beliefs about religion, politics, sexuality, economics, and other matters touching group identity and kinship structures tend to cluster at the core, while beliefs that do not have any obvious bearing on one’s social position or on any task one is attempting to perform tend to remain in the periphery.  It strikes me that this has implications for the concept of rationality.  What sorts of ideas are subjected to rational scrutiny?  Ideas in the periphery are too unimportant to subject to sustained analysis, unless one is a student in a humanities course looking for a paper topic.  On the other hand, ideas in the core are too important to subject to sustained analysis.  Challenging them brings discomfort and makes enemies.  Only a powerful incentive can ensure that a person will test them thoroughly, and even then defensive bias can be expected to enter in at every point unless one is guided by the most robust methodological constraints.

Of course, there may be times when one takes a perverse pleasure in experiencing discomfort and enmity.  I think of an old friend of mine whose second favorite activity is the denunciation of the Roman Catholic Church, its hierarchy, its doctrines, and its practices.  The only thing to which she devotes more energy than her jeremiads against the Roman Catholic Church is participation in her local Roman Catholic parish, of which she has long been one of the mainstays.  Clearly the denunciations and the devotions are two parts of the same complex of behavior, though how exactly that complex fits together I cannot say.

There are also times when people take pleasure in inflicting discomfort on others and displaying enmity towards them.  At those times, a critic of core beliefs might show both an aggressive bias in treating the beliefs that are explicitly under attack and defensive bias in an attempt to preserve other beliefs, even beliefs that are closely related to them.  We often see this in debates between political partisans or religious sectarians who seem to each other to be separated by vast ideological gulfs, while outsiders find the differences between them incomprehensibly subtle.  I confess to having spent a significant amount of time during the 2012 US presidential campaign listening to supporters of the two chief candidates explain in all earnestness that the health insurance reform one of them had sponsored as governor of Massachusetts was in reality profoundly different from the health insurance reform the other had signed as president, despite all appearances to the contrary.  I did my best to avoid probing into this topic when in conversation with committed partisans on either side, and every time I failed to express solemn agreement with their talking points I elicited a flash of real anger that I only made worse by laughing in their faces.

Between “Don’t Care” and “Don’t Dare”

Rational scrutiny, then, is something that takes place mostly in the intermediate layers between the core and the periphery.  This suggests a troubling reflection.  The history of philosophy, the history of art, the history of science, all suggest that the only beliefs likely to survive rational scrutiny are those formed in response to rational scrutiny.  Even a belief supported by such compelling evidence as the belief that the Sun, the stars, and the planets revolve around the Earth eventually collapsed when it was subjected to examination.  If neither the beliefs in the core nor those in the periphery are regularly challenged, then it is only in the intermediate layers, between the outer periphery of beliefs we do not care about sufficiently to challenge them and the inner core of beliefs that we do not dare to challenge, that we can expect any significant percentage of our ideas to be capable of withstanding rational scrutiny.

This may explain why descriptions of rationality so often tend to drift into discussions of problem-solving, even among people who theoretically disagree with thinkers like Max Weber or the pragmatists who would identify rationality with problem-solving.  Our intermediate-importance beliefs tend to be those which we use in performing specific tasks.  So most rational scrutiny takes place among these beliefs, and in the course of problem-solving.  That in turn may go some distance towards explaining the popularity of ideas which depict rationality and emotion as so deeply opposed to each other that any high level of attainment in abstract reasoning is to be taken as evidence of emotional immaturity, and vice versa. 

Indeed, such ideas are so widely taken for granted that it may seem odd to suggest that their popularity needs explaining.  To me it has always seemed odd that our culture posits such a stark opposition between reasoning and feeling.  It is as if we all regarded it as a self-evident truth that there is a war between hands and feet, and that anyone who has exceptional manual dexterity must on that account have difficulty walking, or that any accomplished dancer must be at a loss when called upon to make use of his or her hands.   Regarding hands and feet, the opposite is of course more nearly true.  The more adept one is in using any part of the body, the less distracting that part will tend to be when trying to use another part.  Surely it is the same with emotions and reasoning, other things being equal.  The more mature and integrated one’s emotional state, the wider the range of topics about which one can reason calmly for sustained periods; the more experience one has using reason rigorously, the narrower the range of unfamiliar ideas that are likely to prompt one to seize up with panic.  So why do we assume that expert reasoners must be emotionless automatons, or that deeply happy people must live by pure feeling, not sicklied o’er by the pale cast of thought?

That rational scrutiny, in practice, is confined for the most part to a rather narrow band of ideas might explain why it is so commonplace to draw this absurdly stark opposition.  To subject ideas to rational scrutiny seems to imply that they are neither core beliefs, which because of their sensitivity must be exempt from such criticism, nor tentative impressions, which because of their triviality do not merit such serious attention.  To set no bounds to rational inquiry may therefore seem to suggest that one has no core beliefs, no tentative impressions, and indeed no sense of proportion whatever.  It is difficult to see how a person of that sort would be able to empathize with others, and if one had chosen to become such a person it would be reasonable to suspect that one was hiding from some sort of deep pain.  However, that suggestion need not be accurate.  One can have a strong sense of proportion while numbering among one’s core beliefs the conviction that rational scrutiny is of sufficient value that any belief might be subject to it.  Training in philosophy, the arts, science, or any of a number of fields might underpin such a conviction.  Living in accord with that conviction can be a sign, not of perversity or hostility, but of courage.

Inner Check, Inner Dash

Irving Babbitt (1865-1933) and Paul Elmer More (1864-1937) were American literary scholars, famous in their day for arguing that Socrates, the Buddha, Samuel Johnson, and a wide array of other sages throughout the history of the world had conceived of the freedom of the will as the ability to defy one’s impulses.  Babbitt and More gave this conception a variety of names; perhaps the most familiar of these names is “the inner check.”

The other day, I picked up a copy of the August 2012 issue of Scientific American magazine while I was waiting for the pharmacist to fill a prescription.  Lo and behold, a column by Michael Shermer described a neurological study conducted in 2007 by Marcel Brass and Patrick Haggard.  Doctors Brass and Haggard found support for an hypothesis that will sound familiar to students of Babbitt and More.  As Mr Shermer puts it:

[I]f we define free will as the power to do otherwise, the choice to veto one impulse over another is free won’t. Free won’t is veto power over innumerable neural impulses tempting us to act in one way, such that our decision to act in another way is a real choice. I could have had the steak—and I have—but by engaging in certain self-control techniques that remind me of other competing impulses, I vetoed one set of selections for another.

Support for this hypothesis may be found in a 2007 study in the Journal of Neuroscience by neuroscientists Marcel Brass and Patrick Haggard, who employed a task… in which subjects could veto their initial decision to press a button at the last moment. The scientists discovered a specific brain area called the left dorsal frontomedian cortex that becomes activated during such intentional inhibitions of an action: “Our results suggest that the human brain network for intentional action includes a control structure for self-initiated inhibition or withholding of intended actions.” That’s free won’t.

If this is true, then Babbitt and More’s works take on a new interest.  If such a control structure exists in the human brain network, it wouldn’t necessarily be the case that humans would be consciously aware of it.  There are any number of facts about the operation of our brains that no one ever seems to have guessed until quite recent scientific findings pointed to them.  So, if Babbitt and More were right and a great many distinguished intellectuals operating in many times and cultures conceived of moral agency as a matter of “self-initiated inhibition or withholding of intended actions,” it would be reasonable to ask whether this conception is evidence that the process Doctors Brass and Haggard detected in the left dorsal frontomedian cortex is perceptible to the person who owns the brain in which it occurs.

The same issue included a couple of other interesting notes on psychological and neurological topics.  A bit by Ferris Jabr discusses Professors George Mandler and Lia Kvavilashvili, who have been studying a phenomenon they call “mind-pops.”  A mind-pop is a fragment of memory that suddenly appears in one’s conscious mind for no apparent reason.  Most mind-pops are very slight experiences; the example in the column is a person washing dishes who suddenly thinks of the word “orangutan.”  That’s the sort of thing a person might forget seconds after it occurred.  Trivial as an individual mind-pop might be, as a class of experiences they may point to significant aspects of mental functioning.  Professors Kvavilashvili and Mandler:

propose that mind pops are often explained by a kind of long-term priming. Priming describes one way that memory behaves: every new piece of information changes how the mind later responds to related information. “Most of the information we encounter on a daily basis activates certain representations in the mind,” Kvavilashvili explains. “If you go past a fish-and-chips shop, not only the concept of fish may get activated but lots of things related to fish, and they may stay activated for a certain amount of time—for hours or even days. Later on, other things in the environment may trigger these already active concepts, which have the feeling of coming out of nowhere.” This phenomenon can boost creativity because, she says, “if many different concepts remain activated in your mind, you can make connections more efficiently than if activation disappears right away.”

The same researchers also suspect that mind-pops have a connection to a variety of mental illnesses and emotional disorders, so it isn’t all so cheerful as that paragraph may suggest.

Morten Kringelbach and Kent Berridge, in a feature article titled “New Pleasure Circuit Found in the Brain,” describe a study conducted in the 1950s that involved electrical stimulation of certain areas of the brain.  Subjects expressed a strong desire that the stimulation should continue.  From that desire, researchers concluded that the areas in question were producing pleasure.  However, more recent work suggests that these are in fact areas that produce, not pleasure, but desire.  Indeed, none of the patients in the original study actually said that they enjoyed the stimulation; they simply said that they wanted more of it.  Researchers were jumping to an unwarranted conclusion when they interpreted that desire as a sign of pleasure.  The actual process by which the brain produces pleasure is rather more complicated than those researchers, and the “pleasure-center” model of the brain that grew out of their work, might lead one to assume.

Gettier cases in real life

It strikes me that I left something important out of a post I put up the other day, the one titled “Justified True Belief.”  In it, I summarized Edmund L. Gettier’s 1963 article “Is Justified True Belief Knowledge?” (an article that was less than three pages long to begin with, so it was a bit silly to summarize it.)  Gettier cited a definition of knowledge as “justified true belief,” a definition that went back to Plato, and gave two examples of justified true beliefs that we should not call knowledge.  Gettier’s examples were rather highly contrived, but have been followed by many publications giving more plausible scenarios in which a person might hold a justified true belief, and yet not be said to have knowledge.  I said in the post that such “Gettier cases” occur in real life with some frequency, then gave a novel by Anthony Trollope as my closest approximation to real life.

Here’s something that happened to me.  I was teaching a class about social life in ancient Greece and Rome.  The topic for the day was marriage, including the custom of the dowry.  Most of my students have passed their whole lives up to this point in the interior of the USA.  To them the idea of a dowry is a bizarre one.  To make it somewhat intelligible to them, I explain that in ancient times it was common for a household to subsist on resources approaching the minimum necessary for survival.  So, it was quite a serious matter to share what little one had with one’s neighbors.  Say a creek ran through your farm, and your neighbor wanted to make a deal with you to divert a portion of its water to irrigate his fields.  If he were to trick you and take too much of the water, you and your entire family might very well starve to death as a result.  How was it possible to develop such trust in one’s neighbor that it would be possible to strike such a bargain?  If you and he were going to have grandchildren in common, then you could believe that he would have enough interest in your long-term well-being that he would be unlikely to treat with you in so harsh a manner.  Thus, a property owner who would not let his neighbor dig an irrigation ditch for any amount of money might freely dig it for his neighbor himself as a dowry for his daughter.

I tell this story every semester.  A couple of years ago, one of my students approached me after class.  A woman from India, she was troubled by my explanation of the dowry, and by the textbook’s equally pragmatic discussion of it.  Her parents had dowered her and her sisters, as her grandparents had dowered their mother, not with any such materialistic motives in mind, but as an expression of respect for the prospective bridegroom and welcome to his kinfolk into their family circle.  She did not disagree with anything I had said; so far as she could see, all of my remarks about the economic function of the dowry were quite true.  But she did not believe that any Indian, or anyone else from a society where the dowry was a living custom, would ever have made them.  From her point of view, the propositions I had enunciated concerning the dowry were true, and I was justified in believing them.  However, she clearly thought that I did not know what I was talking about.

I would make one other point.  The vast and ever-growing literature that lays out plausible-sounding Gettier cases makes it clear that the contrived nature of Gettier’s two examples bothers people.  Yet, why do we have a category of “contrived” when it comes to counterexamples?  Surely it is because we think that it is possible to think up some scenario in which a given statement might be true, even when that statement is not something we really know to be true.  So a far-fetched example may establish the logical possibility of a point, but only an argument grounded in real life or in exhaustive reasoning is likely to convince us that the statement is worth taking seriously and incorporating into that set of beliefs and mental habits that we consider to be our stock of knowledge.  In other words, our very discomfort with Gettier’s examples proves the point that those examples are intended to establish.

Justified True Belief

There are a couple of passages where Plato seems to define knowledge as “justified true belief.”  So, if you have enough evidence that you have a right to accept a given proposition as true, if you do in fact exercise this right and accept that proposition as true, and if  it so happens that the proposition is true, then Plato might have said that your belief in that proposition is an example of knowledge.

This definition was occasionally challenged in an oblique sort of way in the first 24 centuries after Plato put it forward, but it was still uncontroversial enough that philosophers could use it matter-of-factly as late as the 1950s.  In 1963, Professor Edmund L. Gettier of Wayne State University wrote a very short, indeed tiny, article in which he gave two counterexamples to the definition of knowledge as justified true belief.  Here is example one:

Suppose that Smith and Jones have applied for a certain job. And suppose that Smith has strong evidence for the following conjunctive proposition:

  (d) Jones is the man who will get the job, and Jones has ten coins in his pocket.

Smith’s evidence for (d) might be that the president of the company assured him that Jones would in the end be selected, and that he, Smith, had counted the coins in Jones’s pocket ten minutes ago. Proposition (d) entails:

  (e) The man who will get the job has ten coins in his pocket.

Let us suppose that Smith sees the entailment from (d) to (e), and accepts (e) on the grounds of (d), for which he has strong evidence. In this case, Smith is clearly justified in believing that (e) is true.

But imagine, further, that unknown to Smith, he himself, not Jones, will get the job. And, also, unknown to Smith, he himself has ten coins in his pocket. Proposition (e) is then true, though proposition (d), from which Smith inferred (e), is false. In our example, then, all of the following are true: (i) (e) is true, (ii) Smith believes that (e) is true, and (iii) Smith is justified in believing that (e) is true. But it is equally clear that Smith does not know that (e) is true; for (e) is true in virtue of the number of coins in Smith’s pocket, while Smith does not know how many coins are in Smith’s pocket, and bases his belief in (e) on a count of the coins in Jones’s pocket, whom he falsely believes to be the man who will get the job.

Here Smith is justified in believing that “The man who will get the job has ten coins in his pocket,” and it is in fact true that the man who will get the job has ten coins in his pocket.  However, the same evidence which justifies that true belief also justifies Smith’s false belief that Jones will get the job.  In Smith’s mind, these two beliefs are so intertwined that the true proposition is unlikely to figure in any line of reasoning uncoupled from the false one.  Moreover, since Smith does not realize that he himself has ten coins in his pocket, nor presumably that there is any applicant for the job other than Jones who has ten coins in his pocket, there is no reason to suppose that he would regard such a proposition as anything other than a statement that Jones will get the job.  So, true though the proposition may be, and justified as Smith may be in accepting it as true, his belief in it can lead him to nothing but error.
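In the same shorthand as above (again, the notation is mine, not Gettier’s), the structure of the case fits in three lines, with S standing for Smith and d and e for propositions (d) and (e):

% My shorthand, not Gettier's: J = justified belief, B = belief, K = knowledge.
\begin{align*}
  & J_S(d), \quad d \vdash e, \quad \text{Smith infers } e \text{ from } d
      && \Longrightarrow\ J_S(e)\\
  & \neg d \ \text{(Jones does not get the job)}, \quad e \ \text{is true (Smith does, and has ten coins)} && \\
  & e \ \wedge\ B_S(e) \ \wedge\ J_S(e), \quad \text{yet}\ \neg K_S(e) &&
\end{align*}

The first line rests on a principle Gettier states explicitly in the paper: if S is justified in believing a proposition, and deduces another proposition from it and accepts it on that basis, then S is justified in believing the deduced proposition as well.  The last line simply records the verdict that justified true belief has here come apart from knowledge.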

This counterexample is of course highly contrived, as is Professor Gettier’s second counterexample.  That doesn’t matter.  His only goal was to show that there can be justified true beliefs which we would not call knowledge, not that such beliefs are particularly commonplace.  Having given even one counterexample, Professor Gettier showed that justified true belief is not an adequate definition of knowledge.  Needless to say, Plato himself would probably have been thrilled with these counterexamples.  One can easily imagine him starting from them and proceeding to spin out a whole theory of justification, perhaps based on the idea that what we have a right to believe varies depending on the plane of existence to which our belief pertains, or that justification isn’t really justification unless the subject is approaching the topic in the true character of a philosopher, or some such Platonistic thing.

As it happens, Professor Gettier’s article was followed by a great many publications giving “Gettier-style” counterexamples, including many that are far more natural and straightforward than his original two.  Evidently all that needed to be done was to give some counterexamples, and the floodgates of creativity came open.  Professor Gettier himself did not write any of these articles, or indeed any articles at all after his 1963 paper.

Once you’ve read the 1963 paper, you may begin to notice naturally occurring Gettier-style counterexamples.  The first novel I read after I was introduced to this topic about 20 years ago was Anthony Trollope’s The Eustace Diamonds.  Trollope is not often called a philosophical novelist.  However, a Gettier-style counterexample lies at the heart of this novel.  Lizzie Eustace is the childless widow of Sir Florian Eustace.  Among Sir Florian’s possessions had been a diamond necklace valued at £10,000.  Lady Eustace claimed that Sir Florian wanted her to have the necklace, and so insisted on treating it as her own; however, the Eustace family lawyer claimed that it was a family heirloom, entailed to Sir Florian’s blood relations, and that it should revert to the family in the event of his death without issue.  While this dispute was moving towards the courts, a person or persons unknown broke into a safe where Lady Eustace was known to keep the necklace.  The burglary was discovered; the necklace was not there.  Lady Eustace did not tell the police what was in fact true: that she had taken the necklace from the safe before the burglary and still had it in her possession.  The leader of the police investigation is Inspector Gage, a wily and experienced detective who quickly arrives at the conclusion that Lady Eustace has stolen the necklace herself, likely in conjunction with Lord George de Bruce Carruthers, whom he takes to be her lover.

In fact, Inspector Gage is mistaken not only about Lady Lizzie’s complicity in the burglary, but also about the nature of her relationship with Lord George and about Lord George’s character.  For all that they seem like lovers, and for all that Lady Eustace would like to become Lord George’s lover, they never quite come together.  And for all that Lord George’s sources of income are shrouded in mystery, he proves in the end to be thoroughly law-abiding.  However, the collection of evidence on which the inspector bases his theory is so impressive that if it does not justify him in believing it, one can hardly imagine how anyone could be justified in believing anything.  So those three propositions could be classified as justified false beliefs.  At the nub of them all, however, is a justified true belief: that the necklace is in the possession of Lady Eustace.  Surrounded as that belief is by false beliefs, false beliefs which would prevent the inspector from forming a true theory of the case, he cannot be said to know even this.

Cartoonist Zach Weiner devoted a recent installment of his Saturday Morning Breakfast Cereal to laying out some thoughts about Gettier-style counterexamples:

I want to make a few remarks about this strip.  First, it doesn’t seem right to say that Professor Gettier proposed a “philosophical problem.”  To the extent that there is a “Gettier problem,” it is a problem with Plato’s proposed definition of knowledge.  By finding a weakness in that definition, Professor Gettier may have reopened philosophical problems that some had hoped to use the definition to mark as solved, but his article does not in itself suggest any new problems.  To jump directly from Professor Gettier’s challenge to Plato’s definition to a statement that “humans find the order of events to be cute” is to introduce an unnecessarily grandiose generalization.

Second, it’s clever that the irate child denounces “the Gettier ‘problem’” with a claim that “Maybe all the ‘problems’ of philosophy are just emergent properties that disappear when you simplify.”  Professor Gettier’s 1963 paper includes just three footnotes.  One refers to the two passages where Plato floats the definition of knowledge as justified true belief (“Plato seems to be considering some such definition at Theaetetus 201, and perhaps accepting one at Meno 98.”)  The other two cite uses of the definition by Roderick Chisholm and Alfred Ayer, two very eminent philosophers working in the Anglo-American tradition of analytic philosophy (“Roderick M. Chisholm, Perceiving: A Philosophical Study (Ithaca, New York: Cornell University Press, 1957), p. 16,” and “A. J. Ayer, The Problem of Knowledge (London: Macmillan, 1956), p. 34.”)  Much of the analytic tradition stems from the suspicion that “all the ‘problems’ of philosophy are just emergent properties that disappear when you simplify,” and Ayer and Chisholm both had interesting things to say about this suspicion.

Third, by what criterion can brain cells be regarded as “small stuff” and consciousness as “big stuff”?  I’d say the only person to whom that idea makes sense is one who has heard straightforward explanations of the basics of brain anatomy and woolly explanations of the metaphysics of consciousness.  Everyone who is likely to read this strip either is, or has at some time been, awake.  Consciousness is thus familiar to all of them, an everyday thing, the very smallest of the “small stuff.”  Conversely, brain cells are knowable only to people who have access to a microscope or to findings arrived at by use of a microscope.  They are, therefore, a relatively recherché topic, and most definitely “big stuff” to any truly naive subject.  To connect the phenomena of consciousness with brain cells, or with brain anatomy, is not only a more sophisticated undertaking still, but at present a wildly speculative one.

Fourth, it’s clever to have the irate child find that “the small stuff” is no easier to understand than “the big stuff.”  I think Plato would have liked the strip, not for its defense of his definition, but for its illustration of the difficulty of separating “the small stuff” from “the big stuff.”  After all, probability wobbles and the rest of quantum theory are, so far as we are concerned, highly abstract.  We may use various images to make physics intelligible, but the deeper we enter into the subject the more thoroughly mathematical it becomes.  As the final nose-flicking indicates, our experience of “facts” and “brain cells” and “stuff that happens” is also theory-laden, so that it is an empty boast to claim that one regards them as real and the ideas behind them as unreal.

The way out of philosophy runs through philosophy

There’s a phrase I’ve been thinking about for years, ever since I read it somewhere or other in Freud: “the moderate misery required for productive work.”  It struck me as plausible; someone who isn’t miserable at all is unlikely to settle willingly into the tedious, repetitive tasks that productive work often involves, while someone who is deeply miserable is unlikely to tolerate such tasks long enough to complete them.  If blogging counts as productive work, I myself may recently have represented a case in point.  Throughout the summer and into the autumn, I wasn’t miserable at all, and I barely posted a thing.  Then I caught a cold, and I posted daily for a week or so.  If I’m typical of bloggers in this respect, maybe I could also claim to have something in common with a philosopher.  Samuel Johnson’s old acquaintance Oliver Edwards once quipped that he too had tried in his time to be a philosopher, but couldn’t manage it.  The cause of his failure?  “Cheerfulness was always breaking in.”

One item I kept meaning to post notes on when cheerfulness was distracting me from the blog was a magazine article about Johnson’s contemporary, David Hume.  Hume, of course, was a philosopher; indeed, many would argue that he was “the most important philosopher ever to write in English.”  Contrary to what Edwards’s remark suggests, however, Hume was suspected of cheerfulness on many occasions.  The article I’ve kept meaning to note is by Hume scholar and anti-nationalist Donald W. Livingston; despite the radicalism of Livingston’s politics (his avowed goal is to dissolve the United States of America in order to replace it with communities built on a “human scale”), in this article he praises Hume as “The First Conservative.”  Hume’s conservatism, in Livingston’s view, comes not only from his recognition that oversized political units such as nation-states and continental empires are inherently degrading to individuals and destructive of life-giving traditions, but also from his wariness towards the philosophical enterprise.  Hume saw philosophy as a necessary endeavor, not because it was the road to any particular truths, but because philosophical practice alone could cure the social and psychological maladies that the influence of philosophy had engendered in the West.

This is the sort of view that we sometimes associate with Ludwig Wittgenstein; so, it’s easy to find books and articles with titles like “The End of Philosophy” and “Is Philosophy Dead?” that focus on Wittgenstein.  But Livingston demonstrates that Hume, writing more than a century and a half before Wittgenstein, had made just such an argument.  Livingston’s discussion of Hume’s Treatise of Human Nature (first published in 1739-1740) is worth quoting at length:

Hume forged a distinction in his first work, A Treatise of Human Nature (1739-40), between “true” and “false” philosophy.  The philosophical act of thought has three constituents. First, it is inquiry that seeks an unconditioned grasp of the nature of reality. The philosophical question takes the form: “What ultimately is X?” Second, in answering such questions the philosopher is only guided by his autonomous reason. He cannot begin by assuming the truth of what the poets, priests, or founders of states have said. To do so would be to make philosophy the handmaiden of religion, politics, or tradition. Third, philosophical inquiry, aiming to grasp the ultimate nature of things and guided by autonomous reason, has a title to dominion. As Plato famously said, philosophers should be kings.

Yet Hume discovered that the principles of ultimacy, autonomy, and dominion, though essential to the philosophical act, are incoherent with human nature and cannot constitute an inquiry of any kind.  If consistently pursued, they entail total skepticism and nihilism. Philosophers do not end in total skepticism, but only because they unknowingly smuggle in their favorite beliefs from the prejudices of custom, passing them off as the work of a pure, neutral reason. Hume calls this “false philosophy” because the end of philosophy is self-knowledge, not self-deception.

The “true philosopher” is one who consistently follows the traditional conception of philosophy to the bitter end and experiences the dark night of utter nihilism. In this condition all argument and theory is reduced to silence. Through this existential silence and despair the philosopher can notice for the first time that radiant world of pre-reflectively received common life which he had known all along through participation, but which was willfully ignored by the hubris of philosophical reflection.

It is to this formerly disowned part of experience that he now seeks to return. Yet he also recognizes that it was the philosophic act that brought him to this awareness, so he cannot abandon inquiry into ultimate reality, as the ancient Pyrrhonian skeptics and their postmodern progeny try to do. Rather he reforms it in the light of this painfully acquired new knowledge.

What must be given up is the autonomy principle. Whereas the false philosopher had considered the totality of pre-reflectively received common life to be false unless certified by the philosopher’s autonomous reason, the true philosopher now presumes the totality of common life to be true. Inquiry thus takes on a different task. Any belief within the inherited order of common life can be criticized in the light of other more deeply established beliefs. These in turn can be criticized in the same way. And so Hume defines “true philosophy” as “reflections on common life methodized and corrected.”

By common life Hume does not mean what Thomas Paine or Thomas Reid meant by “common sense,” namely a privileged access to knowledge independent of critical reflection; this would be just another form of “false philosophy.” “Common life” refers to the totality of beliefs and practices acquired not by self-conscious reflection, propositions, argument, or theories but through pre-reflective  participation in custom and tradition. We learn to speak English by simply speaking it under the guidance of social authorities. After acquiring sufficient skill, we can abstract and reflect on the rules of syntax, semantics, and grammar that are internal to it and form judgments as to excellence in spoken and written English.  But we do not first learn these rules and then apply them as a condition of speaking the language. Knowledge by participation, custom, tradition, habit, and prejudice is primordial and is presupposed by knowledge gained from reflection.

The error of philosophy, as traditionally conceived—and especially modern philosophy—is to think that abstract rules or ideals gained from reflection are by themselves sufficient to guide conduct and belief. This is not to say abstract rules and ideals are not needed in critical thinking—they are—but only that they cannot stand on their own. They are abstractions or stylizations from common life; and, as abstractions, are indeterminate unless interpreted by the background prejudices of custom and tradition. Hume follows Cicero in saying that “custom is the great guide of life.” But custom understood as “methodized and corrected” by loyal and skillful participants.

The distinction between true and false philosophy is like the distinction between valid and invalid inferences in logic or between scientific and unscientific thinking. A piece of thinking can be “scientific”—i.e., arrived at in the right way—but contain a false conclusion. Likewise, an argument can be valid, in that the conclusion logically follows from the premises on pain of contradiction, even if all propositions in the argument are false. Neither logically valid nor scientific thinking can guarantee truth; nor can “true philosophy.” It cannot tell us whether God exists, or whether morals are objective or what time is. These must be settled, if at all, by arguments within common life.

True philosophy is merely the right way for the philosophical impulse to operate when exploring these questions. The alternative is either utter nihilism (and the end of philosophical inquiry) or the corruptions of false philosophy. True philosophy merely guarantees that we will be free from those corruptions.

This is rather like one of Friedrich Nietzsche’s parables, from Also Sprach Zarathustra (1883-1885).  Nietzsche’s Zarathustra preaches that the superman must become a camel, so as to bear the heaviest of all weights, which is the humiliation that comes when one discovers the extent of one’s ignorance, and the commitment to enlighten that ignorance; that he must then put the camel aside and become a lion, so that he may slay the dragon of “Thou-Shalt” and undertake to discover his own morality; and that at the last he must become a child, so that he may put that struggle behind him and be ready to meet new challenges, not as reenactments of his past triumphs, but on their own terms.  According to Livingston, Hume, like Nietzsche, sees the uneducated European as a half-formed philosopher, and believes that with a complete philosophical education s/he can become something entirely different from a philosopher:

(more…)