Soft sciences are often harder than hard sciences
Discover, August 1987, by Jared Diamond

''The overall correlation between frustration and instability (in 62 countries of the world) was 0.50.'' -- Samuel Huntington, professor of government, Harvard

''This is utter nonsense. How does Huntington measure things like social frustration? Does he have a social-frustration meter? I object to the academy's certifying as science what are merely political opinions.'' -- Serge Lang, professor of mathematics, Yale

''What does it say about Lang's scientific standards that he would base his case on twenty-year-old gossip?'' . . . ''a bizarre vendetta'' . . . ''a madman . . .'' -- Other scholars, commenting on Lang's attack

For those who love to watch a dogfight among intellectuals supposedly above such things, it's been a fine dogfight, well publicized in Time and elsewhere. In one corner, political scientist and co-author of The Crisis of Democracy, Samuel Huntington. In the other corner, mathematician and author of Diophantine Approximation on Abelian Varieties with Complex Multiplication, Serge Lang. The issue: whether Huntington should be admitted, over Lang's opposition, to an academy of which Lang is a member. The score after two rounds: Lang 2, Huntington 0, with Huntington still out.

Lang vs. Huntington might seem like just another silly blood-letting in the back alleys of academia, hardly worth anyone's attention. But this particular dogfight is an important one. Beneath the name calling, it has to do with a central question in science: Do the so-called soft sciences, like political science and psychology, really constitute science at all, and do they deserve to stand beside ''hard sciences,'' like chemistry and physics?

The arena is the normally dignified and secretive National Academy of Sciences (NAS), an honor society of more than 1,500 leading American scientists drawn from almost every discipline. NAS's annual election of about 60 new members begins long before each year's spring meeting, with a multi-stage evaluation of every prospective candidate by members expert in the candidate's field. Challenges of candidates by the membership assembled at the annual meeting are rare, because candidates have already been so thoroughly scrutinized by the appropriate experts. In my eight years in NAS, I can recall only a couple of challenges before the Lang-Huntington episode, and not a word about those battles appeared in the press.

At first glance, Huntington's nomination in 1986 seemed a very unlikely one to be challenged. His credentials were impressive: president of the American Political Science Association; holder of a named professorship at Harvard; author of many widely read books, of which one, American Politics: The Promise of Disharmony, got an award from the Association of American Publishers as the best book in the social and behavioral sciences in 1981; and many other distinctions. His studies of developing countries, American politics, and civilian-military relationships received the highest marks from social and political scientists inside and outside NAS. Backers of Huntington's candidacy included NAS members whose qualifications to judge him were beyond question, like Nobel Prize-winning computer scientist and psychologist Herbert Simon.

If Huntington seemed unlikely to be challenged, Lang was an even more unlikely person to do the challenging. He had been elected to the academy only a year before, and his own specialty of pure mathematics was as remote as possible from Huntington's specialty of comparative political development. However, as Science magazine described it, Lang had previously assumed for himself ''the role of a sheriff of scholarship, leading a posse of academics on a hunt for error,'' especially in the political and social sciences. Disturbed by what he saw as the use of ''pseudo mathematics'' by Huntington, Lang sent all NAS members several thick mailings attacking Huntington, enclosing photocopies of letters describing what scholar A said in response to scholar B's attack on scholar C, and asking members for money to help pay the postage and copying bills. Under NAS rules, a candidate challenged at an annual meeting is dropped unless his candidacy is sustained by two-thirds of the members present and voting. After bitter debates at both the 1986 and 1987 meetings, Huntington failed to achieve the necessary two-thirds support.

Much impassioned verbiage has to be stripped away from this debate to discern the underlying issue. Regrettably, a good deal of the verbiage had to do with politics. Huntington had done several things that are now anathema in U.S. academia: he received CIA support for some research; he did a study for the State Department in 1967 on political stability in South Vietnam; and he's said to have been an early supporter of the Vietnam war. None of this should have affected his candidacy. Election to NAS is supposed to be based solely on scholarly qualifications; political views are irrelevant. American academics are virtually unanimous in rushing to defend academic freedom whenever a university president or an outsider criticizes a scholar because of his politics. Lang vehemently denied that his opposition was motivated by Huntington's politics. Despite all those things, the question of Huntington's role with respect to Vietnam arose repeatedly in the NAS debates. Evidently, academic freedom means that outsiders can't raise the issue of a scholar's politics but other scholars can.

It's all the more surprising that Huntington's consulting for the CIA and other government agencies was an issue, when one recalls why NAS exists. Congress established the academy in 1863 to act as official adviser to the U.S. government on questions of science and technology. NAS in turn established the National Research Council (NRC), and NAS and NRC committees continue to provide reports about a wide range of matters, from nutrition to future army materials. As is clear from any day's newspaper, our government desperately needs professionally competent advice, particularly about unstable countries, which are one of Huntington's specialties. So Huntington's willingness to do exactly what NAS was founded to do -- advise the government -- was held against him by some NAS members. How much of a role his politics played in each member's vote will never be known, but I find it unfortunate that they played any role at all.

I accept, however, that a more decisive issue in the debates involved perceptions of the soft sciences -- e.g., Lang's perception that Huntington used pseudo mathematics. To understand the terms soft and hard science, just ask any educated person what science is. The answer you get will probably involve several stereotypes: science is something done in a laboratory, possibly by people wearing white coats and holding test tubes; it involves making measurements with instruments, accurate to several decimal places; and it involves controlled, repeatable experiments in which you keep everything fixed except for one or a few things that you allow to vary. Areas of science that often conform well to these stereotypes include much of chemistry, physics, and molecular biology. These areas are given the flattering name of hard science, because they use the firm evidence that controlled experiments and highly accurate measurements can provide.

We often view hard science as the only type of science. But science (from the Latin scientia -- knowledge) is something much more general, which isn't defined by decimal places and controlled experiments. It means the enterprise of explaining and predicting -- gaining knowledge of -- natural phenomena, by continually testing one's theories against empirical evidence. The world is full of phenomena that are intellectually challenging and important to understand, but that can't be measured to several decimal places in labs. They constitute much of ecology, evolution, and animal behavior; much of psychology and human behavior; and all the phenomena of human societies, including cultural anthropology, economics, history, and government.

These soft sciences, as they're pejoratively termed, are more difficult to study, for obvious reasons. A lion hunt or revolution in the Third World doesn't fit inside a test tube. You can't start it and stop it whenever you choose. You can't control all the variables; perhaps you can't control any variable. You may even find it hard to decide what a variable is. You can still use empirical tests to gain knowledge, but the types of tests used in the hard sciences must be modified. Such differences between the hard and soft sciences are regularly misunderstood by hard scientists, who tend to scorn soft sciences and reserve special contempt for the social sciences. Indeed, it was only in the early 1970s that NAS, confronted with the need to offer the government competent advice about social problems, began to admit social scientists at all. Huntington had the misfortune to become a touchstone of this widespread misunderstanding and contempt.

While I know neither Lang nor Huntington, the broader debate over soft versus hard science is one that has long fascinated me, because I'm among the minority of scientists who work in both areas. I began my career at the hard pole of chemistry and physics, then took my Ph.D. in membrane physiology, at the hard end of biology. Today I divide my time equally between physiology and ecology, which lies at the soft end of biology. My wife, Marie Cohen, works in yet a softer field, clinical psychology. Hence I find myself forced every day to confront the differences between hard and soft science. Although I don't agree with some of Lang's conclusions, I feel he has correctly identified a key problem in soft science when he asks, ''How does Huntington measure things like social frustration? Does he have a social-frustration meter?'' Indeed, unless one has thought seriously about research in the social sciences, the idea that anyone could measure social frustration seems completely absurd.

The issue that Lang raises is central to any science, hard or soft. It may be termed the problem of how to ''operationalize'' a concept. (Normally I hate such neologistic jargon, but it's a suitable term in this case.) To compare evidence with theory requires that you measure the ingredients of your theory. For ingredients like weight or speed it's clear what to measure, but what would you measure if you wanted to understand political instability? Somehow, you would have to design a series of actual operations that yield a suitable measurement -- i.e., you must operationalize the ingredients of theory.

Scientists do this all the time, whether or not they think about it. I shall illustrate operationalizing with four examples from my and Marie's research, progressing from hard science to softer science.

Let's start with mathematics, often described as the queen of the sciences. I'd guess that mathematics arose long ago when two cave women couldn't operationalize their intuitive concept of ''many.'' One cave woman said, ''Let's pick this tree over here, because it has many bananas.'' The other cave woman argued, ''No, let's pick that tree over there, because it has more bananas.'' Without a number system to operationalize their concept of ''many,'' the two cave women could never prove to each other which tree offered better pickings.

There are still tribes today with number systems too rudimentary to settle the argument. For example, some Gimi villagers with whom I worked in New Guinea have only two root numbers, iya = 1 and rarido = 2, which they combine to operationalize somewhat larger numbers: 4 = rarido-rarido, 7 = rarido-rarido-rarido-iya, etc. You can imagine what it would be like to hear two Gimi women arguing about whether to climb a tree with 27 bananas or one with 18 bananas.
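The Gimi composition rule described above can be sketched in a few lines of code (the function name is my own, and I assume a speaker always uses as many rarido terms as possible before adding a final iya):

```python
def gimi(n):
    """Express a positive integer using only the Gimi root numbers
    iya (1) and rarido (2), joined with hyphens as in the text."""
    parts = ["rarido"] * (n // 2) + ["iya"] * (n % 2)
    return "-".join(parts)

print(gimi(4))   # rarido-rarido
print(gimi(7))   # rarido-rarido-rarido-iya
print(len(gimi(27).split("-")))  # 14 words just to name one banana count
```

Even this toy version makes the point: naming 27 takes fourteen words, so comparing 27 bananas against 18 by ear alone is hopeless.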

Now let's move to chemistry, less queenly and more difficult to operationalize than mathematics but still a hard science. Ancient philosophers speculated about the ingredients of matter, but not until the eighteenth century did the first modern chemists figure out how to measure these ingredients. Analytical chemistry now proceeds by identifying some property of a substance of interest, or of a related substance into which the first can be converted. The property must be one that can be measured, like weight, or the light the substance absorbs, or the amount of neutralizing agent it consumes.

For example, when my colleagues and I were studying the physiology of hummingbirds, we knew that the little guys liked to drink sweet nectar, but we would have argued indefinitely about how sweet sweet was if we hadn't operationalized the concept by measuring sugar concentrations. The method we used was to treat a glucose solution with an enzyme that liberates hydrogen peroxide, which reacts (with the help of another enzyme) with another substance called dianisidine to make it turn brown, whereupon we measured the brown color's intensity with an instrument called a spectrophotometer. A pointer's deflection on the spectrophotometer dial let us read off a number that provided an operational definition of sweet. Chemists use that sort of indirect reasoning all the time, without anyone considering it absurd.
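The indirect chain from dial reading back to sugar concentration usually runs through a calibration curve: standards of known concentration fix the relationship between concentration and instrument reading, and an unknown is read off the inverted line. The numbers and the assumption of a linear response below are illustrative, not the published assay:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical standards: known glucose concentrations (mM) vs. the
# brown color's measured intensity on the spectrophotometer
concs = [0.0, 5.0, 10.0, 20.0]
readings = [0.02, 0.27, 0.52, 1.02]

m, b = fit_line(concs, readings)
unknown_reading = 0.77
print((unknown_reading - b) / m)  # estimated concentration, about 15.0 mM
```

The pointer's deflection means nothing by itself; it becomes an operational definition of "sweet" only through the calibration step.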

My next-to-last example is from ecology, one of the softer of the biological sciences, and certainly more difficult to operationalize than chemistry. As a bird watcher, I'm accustomed to finding more species of birds in a rain forest than in a marsh. I suspect intuitively that this has something to do with a marsh being a simply structured habitat, while a rain forest has a complex structure that includes shrubs, lianas, trees of all heights, and crowns of big trees. More complexity means more niches for different types of birds. But how do I operationalize the idea of habitat complexity, so that I can measure it and test my intuition?

Obviously, nothing I do will yield as exact an answer as in the case where I read sugar concentrations off a spectrophotometer dial. However, a pretty good approximation was devised by one of my teachers, the ecologist Robert MacArthur, who measured how far a board at a certain height above the ground had to be moved in a random direction away from an observer standing in the forest (or marsh) before it became half obscured by the foliage. That distance is inversely proportional to the density of the foliage at that height. By repeating the measurement at different heights, MacArthur could calculate how the foliage was distributed over various heights.

In a marsh all the foliage is concentrated within a few feet of the ground, whereas in a rain forest it's spread fairly equally from the ground to the canopy. Thus the intuitive idea of habitat complexity is operationalized as what's called a foliage height diversity index, a single number. MacArthur's simple operationalization of these foliage differences among habitats, which at first seemed to resist having a number put on them, proved to explain a big part of the habitats' differences in numbers of bird species. It was a significant advance in ecology.
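A standard way to collapse such a foliage profile into one number is a Shannon-type diversity index over the proportion of foliage in each height layer; the sketch below assumes that form (the layer values are invented, and I am not reproducing MacArthur's exact formula):

```python
import math

def foliage_height_diversity(foliage_by_layer):
    """Shannon-type index: high when foliage is spread evenly across
    height layers, low when it is concentrated in one layer."""
    total = sum(foliage_by_layer)
    props = [f / total for f in foliage_by_layer if f > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical fractions of foliage in low, middle, and high layers
marsh = [0.95, 0.04, 0.01]    # nearly everything within a few feet of the ground
forest = [0.33, 0.33, 0.34]   # spread fairly equally from ground to canopy

print(foliage_height_diversity(marsh) < foliage_height_diversity(forest))  # True
```

The marsh scores near zero and the forest near the maximum, matching the intuition that the forest's layered structure offers more niches.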

For the last example let's take one of the softest sciences, one that physicists love to deride: clinical psychology. Marie works with cancer patients and their families. Anyone with personal experience of cancer knows the terror that a diagnosis of cancer brings. Some doctors are more frank with their patients than others, and doctors appear to withhold more information from some patients than from others. Why?

Marie guessed that these differences might be related to differences in doctors' attitudes toward things like death, cancer, and medical treatment. But how on earth was she to operationalize and measure such attitudes, convert them to numbers, and test her guesses? I can imagine Lang sneering ''Does she have a cancer-attitude meter?''

Part of Marie's solution was to use a questionnaire that other scientists had developed by extracting statements from sources like tape-recorded doctors' meetings and then asking other doctors to express their degree of agreement with each statement. It turned out that each doctor's responses tended to cluster in several groups, in such a way that his responses to one statement in a cluster were correlated with his responses to other statements in the same cluster. One cluster proved to consist of expressions of attitudes toward death, a second cluster consisted of expressions of attitudes toward treatment and diagnosis, and a third cluster consisted of statements about patients' ability to cope with cancer. The responses were then employed to define attitude scales, which were further validated in other ways, like testing the scales on doctors at different stages in their careers (hence likely to have different attitudes). By thus operationalizing doctors' attitudes, Marie discovered (among other things) that doctors most convinced about the value of early diagnosis and aggressive treatment of cancer are the ones most likely to be frank with their patients.

In short, all scientists, from mathematicians to social scientists, have to solve the task of operationalizing their intuitive concepts. The book by Huntington that provoked Lang's wrath discussed such operationalized concepts as economic well-being, political instability, and social and economic modernization. Physicists have to resort to very indirect (albeit accurate) operationalizing in order to ''measure'' electrons. But the task of operationalizing is inevitably more difficult and less exact in the soft sciences, because there are so many uncontrolled variables. In the four examples I've given, number of bananas and concentration of sugar can be measured to more decimal places than can habitat complexity and attitudes toward cancer.

Unfortunately, operationalizing lends itself to ridicule in the social sciences, because the concepts being studied tend to be familiar ones that all of us fancy we're experts on. Anybody, scientist or no, feels entitled to spout forth on politics or psychology, and to heap scorn on what scholars in those fields write. In contrast, consider the opening sentences of Lang's paper Diophantine Approximation on Abelian Varieties with Complex Multiplication: ''Let A be an abelian variety defined over a number field K. We suppose that A is embedded in projective space. Let AK be the group of points on A rational over K.'' How many people feel entitled to ridicule these statements while touting their own opinions about abelian varieties?

No political scientist at NAS has challenged a mathematical candidate by asking ''How does he measure things like 'many'? Does he have a many-meter?'' Such questions would bring gales of laughter over the questioner's utter ignorance of mathematics. It seems to me that Lang's question ''How does Huntington measure things like social frustration?'' betrays an equal ignorance of how the social sciences make measurements.

The ingrained labels ''soft science'' and ''hard science'' could be replaced by hard (i.e., difficult) science and easy science, respectively. Ecology and psychology and the social sciences are much more difficult and, to some of us, intellectually more challenging than mathematics and chemistry. Even if NAS were just an honorary society, the intellectual challenge of the soft sciences would by itself make them central to NAS.

But NAS is more than an honorary society; it's a conduit for advice to our government. As to the relative importance of soft and hard science for humanity's future, there can be no comparison. It matters little whether we progress with understanding Diophantine approximation. Our survival depends on whether we progress with understanding how people behave, why some societies become frustrated, whether their governments tend to become unstable, and how political leaders make decisions like whether to press a red button. Our National Academy of Sciences will cut itself out of intellectually challenging areas of science, and out of the areas where NAS can provide the most needed scientific advice, if it continues to judge social scientists from a posture of ignorance.

COPYRIGHT 1987 Discover