The “two separate languages” of neuroscience and psychology

In yesterday’s New York Times, Gary Marcus published an op-ed called “The Trouble With Brain Science.”  Discussing the big-ticket research projects underway in brain science on both sides of the Atlantic, Professor Marcus writes:

Biology isn’t elegant the way physics appears to be. The living world is bursting with variety and unpredictable complexity, because biology is the product of historical accidents, with species solving problems based on happenstance that leads them down one evolutionary road rather than another. No overarching theory of neuroscience could predict, for example, that the cerebellum (which is involved in timing and motor control) would have vastly more neurons than the prefrontal cortex (the part of the brain most associated with our advanced intelligence).

But biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking. What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.


Such bridges don’t come easily or often, maybe once in a generation, but when they do arrive, they can change everything. An example is the discovery of DNA, which allowed us to understand how genetic information could be represented and replicated in a physical structure. In one stroke, this bridge transformed biology from a mystery — in which the physical basis of life was almost entirely unknown — into a tractable if challenging set of problems, such as sequencing genes, working out the proteins that they encode and discerning the circumstances that govern their distribution in the body.

Neuroscience awaits a similar breakthrough. We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws. We don’t know, for example, whether our memories for individual words inhere in individual neurons or in sets of neurons, or in what way sets of neurons might underwrite our memories for words, if in fact they do.

The problem with both of the big brain projects is that too few of the hundreds of millions of dollars being spent are devoted to spanning this conceptual chasm. Both projects are making important contributions: the European effort is helping build infrastructure for data integration; the American project is emphasizing the development of state-of-the-art tools for collecting new kinds of data. But as anyone in a field richer in data than theory (like weather forecasting) can tell you, amassing data is only a start.

The success of both the Human Brain Project and the Brain Initiative will ultimately rest not just on the data to be collected but also on what can be done with those data once they are collected. On that, too little has been said.

I’m a bit leery of this.  Professor Marcus’ bridge-building and lawful relations sound like aliases for reductionism.  Say we don’t reduce psychology to neuroscience; say we can never reduce psychology to neuroscience.  So what?  Gödel proved that we will never be able to reduce arithmetic to logic, that arithmetic needs concepts and truths that cannot be derived from the rules of logic alone.  Gödel did not thereby give warrant to mysticism or undermine the rationality of arithmetic, since the only concepts in that category are perfectly mundane.  Just because there is no “lawful relation” between the concept of set and the procedure of modus ponens does not make arithmetic any the less a rational pursuit.
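
For readers who want something more precise than that gloss, here is the result usually cited in this connection, Gödel’s first incompleteness theorem, set down as a minimal LaTeX sketch.  The hypothesis that the theory “interprets elementary arithmetic” can be made exact in several equivalent ways, for instance by requiring that it interpret Robinson arithmetic.

```latex
% One standard formulation of Goedel's first incompleteness theorem,
% stated for comparison with the looser paraphrase above.
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem*{theorem}{Theorem}
\begin{document}
\begin{theorem}[G\"odel, 1931]
Let $T$ be a consistent, effectively axiomatizable formal theory that
interprets elementary arithmetic.  Then there is a sentence $G_T$ in
the language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T,
\]
that is, $G_T$ is undecidable in $T$, even though $G_T$ is true in the
standard model of arithmetic.
\end{theorem}
\end{document}
```

The theorem applies equally to any consistent, effectively axiomatizable strengthening of the theory, so the gap it opens is principled rather than a matter of missing axioms or missing data; that is the point of the analogy with neuroscience and psychology.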

So if it turns out that there is no “lawful relation between assemblies of neurons and the elements of thought,” it does not necessarily follow that psychologists will have to conclude that the phenomena their discipline studies derive from supernatural influences, or that they will have to become magicians, or anything so dramatic as that.  It may just be that the two fields of study will have to plod along as they currently do, operating quite independently of each other despite their superficial similarities.  Of course, it may turn out otherwise; perhaps some day one field will be reduced to the other.  But science has nothing to fear should this reduction prove impossible.

Scientific Arrogance

The other day, Ed Yong linked to an essay by Ethan Siegel.  Mr Siegel extols the virtues of science, both Science the process for gaining knowledge about nature and Science the body of knowledge that humans have acquired by means of that process.  Mr Siegel then quotes an interview Neil deGrasse Tyson gave to Nerdist, in which Mr Tyson expressed reservations about the value of philosophical study as part of the education of a young scientist.  In that interview, Mr Tyson and his interlocutors made some rather harsh-sounding remarks.  Take this segment, for example, as transcribed by Massimo Pigliucci:

interviewer: At a certain point it’s just futile.

dGT: Yeah, yeah, exactly, exactly. My concern here is that the philosophers believe they are actually asking deep questions about nature. And to the scientist it’s, what are you doing? Why are you concerning yourself with the meaning of meaning?

(another) interviewer: I think a healthy balance of both is good.

dGT: Well, I’m still worried even about a healthy balance. Yeah, if you are distracted by your questions so that you can’t move forward, you are not being a productive contributor to our understanding of the natural world. And so the scientist knows when the question “what is the sound of one hand clapping?” is a pointless delay in our progress.

[insert predictable joke by one interviewer, imitating the clapping of one hand]

dGT: How do you define clapping? All of a sudden it devolves into a discussion of the definition of words. And I’d rather keep the conversation about ideas. And when you do that don’t derail yourself on questions that you think are important because philosophy class tells you this. The scientist says look, I got all this world of unknown out there, I’m moving on, I’m leaving you behind. You can’t even cross the street because you are distracted by what you are sure are deep questions you’ve asked yourself. I don’t have the time for that.

interviewer: I also felt that it was a fat load of crap, as one could define what crap is and the essential qualities that make up crap: how you grade a philosophy paper?

dGT [laughing]: Of course I think we all agree you turned out okay.

interviewer: Philosophy was a good Major for comedy, I think, because it does get you to ask a lot of ridiculous questions about things.

dGT: No, you need people to laugh at your ridiculous questions.

interviewers: It’s a bottomless pit. It just becomes nihilism.

dGT: nihilism is a kind of philosophy.

Mr Tyson’s remarks have come in for criticism from many quarters.  The post by Massimo Pigliucci from which I take the transcription above is among the most notable.

I must say that I think some of the criticism is overdone.  In context, it is clear to me that Mr Tyson and his interlocutors are thinking mainly of the training of young scientists, of what sort of learning is necessary as a background to scientific research.  In that context, it’s quite reasonable to caution against too wide a range of interests.  It would certainly not be wise to wait until one had developed a deep understanding of philosophy, history, literature, music, art, etc., before getting down to business in one’s chosen field.

It’s true that Mr Tyson’s recent fame as narrator of the remake of the television series Cosmos puts a bit of an edge on his statements; that show is an attempt to present the history of science to the general public, and to promote a particular view of the place of science in human affairs.  It would be fair to say that the makers of Cosmos, Mr Tyson among them, have exposed some of their rather sizable blind spots in the course of the project (most famously in regard to Giordano Bruno), and a bit of time spent studying the philosophy of science might very well have tempered the bumptious self-assurance that let them parade their howlers in worldwide television broadcasts.  And it is true, as Mr Pigliucci documents, that Mr Tyson has a history of making flip and ill-informed remarks dismissing the value of philosophy and other subjects aside from his own.  Still, the remarks from the Nerdist podcast are pretty narrow in their intended scope of application, and within that scope, which has to do with apprentice scientists, I wouldn’t say that they are examples of arrogance, or even that they are wrong.

I’m reminded of a problem that has faced those who would teach Latin and ancient Greek to English speakers over the centuries.  The languages are different enough from English that it seems a shame to start them any later than early childhood.  If a student starts Latin at five and Greek at six, as was the norm for boys destined for the German Gymnasia or the English public schools in the nineteenth century, that student will likely attain, by about eight or nine years of age, a reading proficiency in the classical languages that a student who starts them later in life may never match.  However, the point of learning the languages is to be able to read classical literature.  What is a nine-year-old to make of Horace or Pindar or Vergil or Sophocles or Thucydides or Tacitus?  Few of the real masterworks are intelligible as anything other than linguistic puzzles to anyone under 40.  It often happens that I assign such works to students who are returning to college in middle age.  They usually come to me afterward and tell me that they were surprised: they had read them when they were in the 18-25 age bracket that includes most of my students and had found nothing of interest in them, but rereading the books later in life, they found that the books meant a tremendous amount to them.  I trot out a very old line on these occasions and say, “It isn’t just you reading the book; the book also reads you,” meaning that the more life experience the reader brings, the greater the riches the reading offers.

I suppose the best thing to do would be to learn the languages in early childhood while studying mathematics and the natural sciences, to study ancient literary works for several years as specimens in the scientific study of linguistics or as aids to archaeology, and to come back to them later in life, when one can benefit from reading them on their own terms.  The same might apply to philosophy, bits of which might be slipped into the education of those aged 25 and younger, but which ought really to be introduced systematically only to those who have already confronted in practice the sorts of crises that have spurred its development over the centuries.

Be that as it may, the concept of scientific arrogance is one that has been deftly handled by one of my favorite commentators, cartoonist Zach Weiner.  I’d recommend two Saturday Morning Breakfast Cereal strips on the theme, this one about emeritus disease and this one about generalized reverence for specialized expertise.

Reason, madness, and the like

Adam Phillips reviews Gary Greenberg’s Manufacturing Depression, along the way quoting some memorable lines and making intriguing remarks of his own.  He mentions Alfred Adler’s practice of beginning a course of psychoanalysis by asking a patient, “What would you do if you were cured?”  Adler would listen to the patient’s response, then say, “Well, go and do it.”  Phillips points out that this practice suggests that mental illness is simply an obstacle to achieving goals that themselves need no explanation, a mere form of inefficiency.  If this sounds to you like Max Weber’s description of rationality as an instrument that moderns use to achieve goals which they cannot subject to rational criticism, you’ll appreciate this quote from Weber:  “Science presupposes that what is produced by scientific work should be important in the sense of being ‘worth knowing.’ And it is obvious that all our problems lie here, for this presupposition cannot be proved by scientific means.”  Phillips tells us that Greenberg sees mental illness quite differently than Adler did; in the course of his explanation of Greenberg’s view, he mentions D. W. Winnicott’s definition of madness as “the need to be believed.”

Phillips identifies Greenberg’s main concern as the way psychology and psychiatry have described depression and the economic interests this particular description serves.  Phillips summarizes Greenberg’s arguments on this point and contrasts them with the radical anti-psychiatry of earlier decades, but he himself seems more interested in deep questions about the philosophy of science.  For example, he writes:

Scientists sometimes want us to believe that the evidence speaks for itself, but evidence is never self-evident; people often disagree both about what counts as evidence and what evidence is evidence of. It is as though, now, the cult of evidence—of “evidence-based research”—is the only alternative to the cults of religion. But the sciences, like the arts, like religions, are forms of interpretation, of people making something out of their experience. And our ideas about health, mental or otherwise, are just another way of talking about what a good life is for us, what we can make of it and what we can’t.

The blurb for this review on the issue’s table of contents reads “Science can be disproved only by its own criteria; when it comes to mental illness, its own criteria are often insufficient.”  That is a strange thing to say: if the criteria of science are insufficient to disprove theories about mental illness, then how can those theories be called scientific?  In any event, the blurb is clearly not a fair summary of what Phillips is saying, or of the views he attributes to Greenberg.