Want to buy some real estate on Venus?

I just saw this video on io9:

I’ve recently been rereading some of Arthur Clarke’s science fiction stories, so I was primed and ready for this topic.  Here’s the comment I offered on the post:

Step one would be to establish orbiting stations around Venus, with artificial gravity produced in centrifuges.  On these stations, we would carry out step two, the genetic engineering and then the deployment of some kind of plant that would take the form of tiny particles that would float in the clouds of Venus. Even if these plants were too small to do much individually, they could be the basis of a future ecosystem if they could temporarily link together to absorb CO2, conduct photosynthesis, and reproduce.  Those linkages would be brief, broken well before their weight caused them to sink very far into the atmosphere.

Step three would be to create and deploy a series of larger creatures that would feed on these microscopic plants, and step four to create and deploy smaller creatures that would be symbiotic with the larger creatures.  From there, the ecosystem of the cloud tops would begin to evolve on its own; in step five, we would supervise and direct that evolution to produce food and other useful products for future floating cities, while also sequestering as much carbon and sulfur as possible in order to expand the habitable regions of the atmosphere.

All of those preliminaries would take generations, probably centuries. And all the while, the orbiting stations would be growing in population and complexity. So by the time we got around to building habitations in the atmosphere, it would be an open question why we would bother.  You talk about surfacism; decades ago, Gerard K. O’Neill derided planetism, and predicted that “The High Frontier” of human settlement in space would be on stations with artificial gravity, not on planets where gravity is fixed at levels lethal to human life. I suspect O’Neill will turn out to have been right, and that the prime spot for stations will be inside the orbit of Mercury, where solar power is at its most abundant.  But it would still be nice to turn the clouds of Venus into a huge farm of some kind.

So I envision a future in which the majority of the human race will live in a collection of huge, solar-powered cylinders clustered near the Sun, each spinning at a rate that gives its interior surface a gravity equal to that under which their ancestors evolved on Earth.  Presumably the combined interior surface area of this collection of cylinders will be vastly greater than that of the Earth.  I’m not at all sure this is a desirable future; if the Earth isn’t enough for humanity, then it’s unlikely that anything larger than the Earth will be.  Rather than the peaceful age of abundance foreseen by Clarke, O’Neill, and others, the settlement of space may well be a new age of conflict among grasping, covetous powers.  But it does seem likelier than settlement of any planetary body, either on its surface or in its atmosphere.
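For a rough sense of the numbers involved, the spin rate needed for Earth-normal gravity follows from the centripetal acceleration at the cylinder’s inner surface, a = ω²r.  The sketch below is my own back-of-the-envelope illustration in Python; the radius is an assumed figure of a few kilometres, roughly the scale O’Neill wrote about, not a number taken from any particular design.

```python
import math

# Back-of-the-envelope check: what spin rate gives Earth-normal gravity
# on the inner surface of a rotating cylinder?  The radius below is an
# assumption for illustration only.
G_EARTH = 9.81        # target "gravity" at the inner surface, m/s^2
RADIUS_M = 3200.0     # assumed cylinder radius, metres

# Centripetal acceleration at the rim: a = omega^2 * r
omega = math.sqrt(G_EARTH / RADIUS_M)     # angular speed, rad/s
period_s = 2 * math.pi / omega            # seconds per full rotation
rpm = 60.0 / period_s

print(f"angular speed:   {omega:.4f} rad/s")
print(f"rotation period: {period_s:.0f} s (about {period_s / 60:.1f} minutes)")
print(f"spin rate:       {rpm:.2f} revolutions per minute")
```

At that assumed size the habitat turns roughly once every two minutes, about half a revolution per minute; and since the required spin rate falls as the radius grows, larger cylinders can rotate even more gently while providing the same interior gravity.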

Science and the argument from authority

Back when the earth was young and I was an undergraduate, a friend of mine named Philip told me with great satisfaction that the chemistry professor who had agreed to be his advisor was the world’s foremost authority on the reaction which he planned to study.  Later in that same conversation, I mentioned something about authority in science.  “Oh, authority counts for nothing in science!” Philip earnestly assured me.

Well, I said, “nothing” is a rarity.  Perhaps there is some small residue of authority in science.  No, no, Philip insisted, there was absolutely no place for appeals to authority in scientific discourse.

I produced a hypothetical example.  Say he was working on the reaction which so interested him.  After all these years I don’t remember what it was called, unfortunately.  And say his new advisor, Professor Whatever His Name Was, were to amble into the lab, look over his shoulder, furrow his brow, and after a few moments say “I can’t put my finger on it, but I think you’re doing something wrong.”

“I’d be devastated!” Philip exclaimed.  “I don’t suppose you’d rest until you’d figured out what it was that was bothering him, even if it meant a series of sleepless nights?” I asked.  “I wouldn’t, no,” Philip agreed.

“Whereas, if someone like me, who knows as little as a person can about chemistry, were to make a similarly vague remark, you’d ignore it completely.”  “I sure would,” said Philip.

“So, Professor Whatever His Name Is has earned the authority to set you working frantically to check and recheck your work, while I have earned no such authority.”  Philip agreed that this was the case, and that to a certain extent, therefore, authority was a meaningful concept in the practice of science.

I bring up this story, not only because it gave me a rare opportunity to play the role of Socrates in a real-life Platonic dialogue, but because it seems timely.  Monday afternoon, io9 published a link to an undated essay by Jason Mitchell, Associate Professor of the Social Sciences at Harvard.  Professor Mitchell’s essay, titled “On the emptiness of failed replications,”   argues that there are many reasons why an attempt to replicate the results of a published study might fail to do so, and that such failures should often, even usually, not be used as a reason for setting aside the original claims.  Professor Mitchell’s argument has at its heart an appeal to authority.  He writes:

Science is a tough place to make a living.  Our experiments fail much of the time, and even the best scientists meet with a steady drum of rejections from journals, grant panels, and search committees.  On the occasions that our work does succeed, we expect others to criticize it mercilessly, in public and often in our presence.  For most of us, our reward is simply the work itself, in adding our incremental bit to the sum of human knowledge and hoping that our ideas might manage, even if just, to influence future scholars of the mind.  It takes courage and grit and enormous fortitude to volunteer for a life of this kind.

So we should take note when the targets of replication efforts complain about how they are being treated.  These are people who have thrived in a profession that alternates between quiet rejection and blistering criticism, and who have held up admirably under the weight of earlier scientific challenges.  They are not crybabies.  What they are is justifiably upset at having their integrity questioned.  Academia tolerates a lot of bad behavior—absent-minded wackiness and self-serving grandiosity top the list—but misrepresenting one’s data is the unforgivable cardinal sin of science.  Anyone engaged in such misconduct has stepped outside the community of scientists and surrendered his claim on the truth.  He is, as such, a heretic, and the field must move quickly to excommunicate him from the fold.  Few of us would remain silent in the face of such charges.

Because it cuts at the very core of our professional identities, questioning a colleague’s scientific intentions is therefore an extraordinary claim.  That such accusations might not be expressed directly hardly matters; as social psychologists, we should know better that innuendo and intimation can be every bit as powerful as direct accusation.  Like all extraordinary claims, insinuations about others’ scientific integrity should require extraordinary evidence.  Failures to replicate do not even remotely make this grade, since they most often result from mere ordinary human failing.  Replicators not only appear blind to these basic aspects of scientific practice, but unworried about how their claims affect the targets of their efforts. One senses either a profound naiveté or a chilling mean-spiritedness at work, neither of which will improve social psychology.

What my friend Philip and I agreed on those many years ago was that science had an advantage over other forms of inquiry because, while it does have its authorities, those authorities are always open to challenge.  In my hypothetical example, it may well be that Professor Whatever His Name Was could not immediately explain what was wrong with Philip’s procedure, simply because organic chemistry is a very complex field and he could only vaguely recall the most relevant point until he had gone through the whole experiment in detail.  But Philip himself, or any other competent researcher who checked over the work in the same way, would come to the same conclusion, if not as quickly or as cleverly as Professor Whatever His Name Was might have done.  Science therefore promises, not to slay authority, but to tame it.  Scientists can earn authority and use it to guide their colleagues without inflicting fatal damage on their fields every time they make a mistake, because there is a system for identifying and correcting the mistakes even of the most august figures.

Professor Mitchell is therefore not wrong to protest that one ought to be mindful of the reputations scientists have earned, and circumspect about impugning those reputations, however indirectly.  On the other hand, his strictures against using replication as a standard for the reliability of scientific claims go so far as to raise the question of how a scientist who has accumulated an impressive set of credentials could ever be proven wrong.  It is therefore not surprising that the io9 posting of Professor Mitchell’s essay has sparked a ferocious response from readers accusing him of threatening to ruin science for everyone.  Indeed, the headline on that posting was “If You Love Science, This Will Make You Lose Your Shit,” the tag io9 editor Annalee Newitz added to the post was “HOLY CRAP WTF,” and it is illustrated with this gif:

To io9’s credit, the comments include some thoughtful and nuanced replies, as for example this one from a sociologist explaining why she believes both that her discipline represents an important source of knowledge and that it is misleading to use the word “science” to describe it.

I’d also mention a response to Professor Mitchell’s essay by Discover’s famously pseudonymous “Neuroskeptic.”  Neuroskeptic praises Professor Mitchell for identifying a naiveté in those who are quick to regard a failure to replicate as proof positive that the original finding was flawed, but goes on to argue that Professor Mitchell himself exhibits a similar naiveté in defending the opposite habit:

Whereas the replication movement sees a failure to find a significant effect as evidence that the effect being investigated is non-existent, Mitchell denies this, saying that we have no way of knowing if the null result is genuine or in error: “when an experiment fails, we can only wallow in uncertainty” about what it means. But if we do find an effect, it’s a different story: “we can celebrate that the phenomenon survived these all-too-frequent shortcomings [experimenter errors].”

And here’s the problem. Implicit in Mitchell’s argument is the idea that experimenter error (or what I call ‘silly mistakes’) is a one-way street: errors can make positive results null, but not vice versa.

Unfortunately, this is just not true. Three years ago, I wrote about these kinds of mistakes and recounted my own personal cautionary tale. Mine was a spreadsheet error, one even sillier than the examples Mitchell gave. But in my case the silly mistake created a significant finding, rather than obscuring one.

There are many documented cases of this happening and (scary thought) probably many others that we don’t know about. Yet the existence of these errors is the fatal spanner in the works of Mitchell’s whole case. If positive results can be erroneous too, if errors are (as it were) a neutral force, neither the advocates nor the skeptics of a particular claim can cry ‘experimenter error!’ to silence their opponents.

The phrase “spreadsheet error” may remind politically oriented readers of the Reinhart-Rogoff Affair, in which a spreadsheet error was found to underlie a 2010 paper by economists Carmen Reinhart and Kenneth Rogoff.  That paper had a significant impact on policymaking in the USA and elsewhere before the error was exposed in 2013.
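To illustrate the general mechanism rather than the specific paper (the numbers below are made up, not Reinhart and Rogoff’s data), here is a small Python sketch of how an average taken over the wrong range of rows, the spreadsheet equivalent of dragging a formula over too few cells, can quietly change the sign of a headline result.

```python
# Hypothetical illustration with made-up numbers -- NOT Reinhart and
# Rogoff's data.  The point: an average computed over the wrong range
# of rows can silently reverse the apparent conclusion.
growth_by_country = {
    "A": -0.5, "B": -1.2, "C": 0.3,
    "D": -0.8, "E": 2.6, "F": 3.1,
}

values = list(growth_by_country.values())

correct_mean = sum(values) / len(values)          # all six rows

# The flawed version: the formula's range stops two rows short,
# dropping countries E and F from the calculation.
truncated = values[:-2]
erroneous_mean = sum(truncated) / len(truncated)  # only four rows

print(f"mean over all rows:       {correct_mean:+.2f}")    # +0.58
print(f"mean over truncated rows: {erroneous_mean:+.2f}")   # -0.55
```

Nothing about the erroneous figure looks wrong on its face; it comes to light only when someone redoes the calculation from the underlying rows, which is precisely the sort of checking that replication is meant to provide.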

The Reinhart-Rogoff Affair took a prominent place in my mind, and I think it is safe to say in the minds of many other observers, as an example of just how untrustworthy the governing elites of the USA truly are.  Ever since the late 1990s, Washington and Wall Street have made a series of clownishly ill-advised decisions.  Many of these decisions were not only decried by experts at the time as likely to lead to disaster, but were in fact hugely unpopular with the general public.  In every case, the predicted disasters have come to pass, and our rulers have reacted to them at first with denial, then with bewilderment, then with apparent amnesia as they propose repeating exactly the policies that failed before.  When those same elites look to science for a warrant for their policies, it seems to bother them not at all when the studies they have cited are discredited.  Seeing how deadly the entrenched ignorance of political and business elites has proven, the idea of insulating distinguished scientists from criticism raises the prospect that they may in time come to form a class as detached from reality as those who wield power in Washington and on Wall Street.  If such an event comes to pass, future Reinharts and Rogoffs can be as sloppy as they like, provided their claims serve the interests of those who hold the levers of opportunity.