The Scientist's World
Abstract
This paper describes the features of the world of science, and it compares that world briefly with that of politics and the law. It also discusses some “postmodern” trends in philosophy and sociology that have been undermining confidence in the objectivity of science and thus have contributed indirectly to public mistrust. The paper also considers broader implications of the interactions between government and science.
The late Bernard D. Davis, Adele Lehman Professor at the Harvard Medical School, was a pioneer and a leader in the field of microbial physiology. He made major contributions to our understanding of amino acid biosynthesis, protein synthesis, and the mode of antibiotic action. His seminal works include the use of penicillin for the selection of auxotrophic mutants and his U-tube experiment to prove that bacterial conjugation required direct contact between the two bacterial strains. He enjoyed the process of laboratory science and the opportunity to train young scientists. He thought deeply and spoke out on issues of scientific ethics and policy. A few years before his death in 1994, Davis became interested in the “Baltimore Affair,” the decade-long investigation of escalating charges of scientific misconduct against Thereza Imanishi-Kari, involving a widely publicized conflict between her collaborator and defender, Nobel laureate David Baltimore, and Congressman John Dingell. Unlike many other distinguished scientists who weighed in with public pronouncements on the case, Davis correctly predicted that Imanishi-Kari would be cleared of misconduct if she were given a fair hearing. He wanted to know how an ordinary laboratory dispute could grow into a conflagration that damaged all involved and looked for answers in cultural differences between the world of science and the world of law and politics. This article is adapted from a chapter of his unfinished book on the Baltimore/Dingell case. Here he explains the assumptions and traditions that underlie the scientific enterprise. His love for science shines through this description of how scientists think and work, how they choose problems, seek answers, deal with ambiguity in experimental results, maintain standards, and understand issues of credit, cooperation, and competition.
In the past two decades the public has become increasingly ambivalent about science and technology—interested in their advances, but also disturbed by finding that they create problems as well as benefits. There are many concerns: the future of the global environment, changes too fast for us to adapt to, too much dependence on the scientists who alone understand this arcane material, and the propensity of science to focus on its own esoteric interests instead of on our most pressing social problems, though, in fact, many of these lie outside its scope. These concerns have encouraged resentment and misunderstandings—in their more extreme manifestations an antiscience movement—and even a broader flight from rationality. Examples include not only the popularity of astrology and of fundamentalist religions but also the persistence of self-help books, as substitutes for scientific medicine, on the best-seller lists.
One source of these misunderstandings is that scientists absorb a special set of aims, assumptions, and values in the course of their apprenticeship, and these lead to views on the nature of their discipline, and its relation to society, that laypersons often find hard to understand. In particular, scientists attach great value to their self-correcting system and their autonomy in governing their affairs, and legislators often find it hard to see why these earlier traditions should persist when scientists have now become so dependent on government, and when their activities increasingly affect the rest of society.
THE SCIENTIFIC METHOD
Though the special approach that scientists have evolved for understanding the external world is often called the scientific method, we must understand that this term does not refer to a rigidly defined set of procedures, guaranteed to yield correct answers: science is not a straightforward problem-solving machine. Instead, the scientific method encompasses any approaches that help scientists uncover truths about nature. Many uncertainties and errors are encountered along the way, more in some fields of science than in others.
But however broad the range of activities that we call scientific, they reflect a set of shared principles that emerged relatively late in the development of civilization. Most basic was the shift from a belief in gods that animate nature, and in miracles, towards belief systems that include the idea that we live in a lawful, deterministic world, susceptible to rational analysis. Another innovation was to abjure the global systems of thought of philosophy and to settle instead for well-defined, “bite-sized” questions, whose answers build up a coherent conceptualization of the world of nature—small bricks in a large edifice. Science also discouraged muddy, discursive thinking by sharply separating two kinds of “findings”: observations and conclusions. The former should be verifiable, but in principle the latter are always subject to replacement—or to revision (as happened to Newton's laws)—in the light of new evidence or new principles. In practice, however, the foundations have proved reliable: the deeper layers of the scientific edifice are consolidated by the weight of the later findings resting on them. Every time I weigh a substance in a chemical balance I confirm the principles of mass and gravity.
While science began with individual discoveries, such as those of Archimedes in classical Greece, as we use the term today it refers to a social process that incorporates contributions into a cumulative, growing, intricately interwoven body of knowledge. (For example, Antonie van Leeuwenhoek of Delft discovered the unknown world of invisible organisms in the 17th century. Though he became known as the father of bacteriology, he did not launch it as a progressive science because he kept secret his technique for forming single lenses with sufficient magnification. More than a century later the development of the compound two-lens microscope, as public knowledge, made it possible for any interested person to see bacteria.) The reliability and the importance of a scientific discovery are not judged in the abstract but are based on its fit into that fabric.
This body of knowledge is hierarchical: it seeks understanding of phenomena not only at a particular level of organization but also in terms of properties and interactions of the underlying components at a finer level (e.g., populations, organisms, organs, cells, molecules, molecular surfaces). An example of this principle of explaining or reducing events at one level in terms of the interactions of components at a finer level—reductionism—can be taken from neurobiology, where much of the advance consists in reducing complex behavior to the activities of individual nerve cells and reducing these in turn to their biochemical components. The social sciences are hampered, as science, by their very limited ability to reduce social behavior to psychological or neurobiological terms.
However, some reductionist connections may have a long wait. For example, Newton observed the phenomena of gravitation without being able to explain the force, and Darwin demonstrated natural selection but had to assume the underlying variation long before genetics appeared and supplied its basic mechanism. We understand a phenomenon better, and have more confidence in our interpretations, when our findings at different levels reinforce each other.
The distinction between basic research, aimed at understanding nature, and applied research, aimed at providing useful control, is not always sharp, and it is a permanent source of tension between scientists and government. The wave of mistrust of scientists in recent years has no doubt contributed to the tendency of legislators to intervene in the decisions on priorities in research, with an emphasis on the promise of early payoffs that often seems to scientists to be shortsighted.
TECHNOLOGY
Earlier civilizations developed a remarkably rich technology in many areas—metals, ceramics, fabrics, agriculture, fermentations, engineering construction—on an empirical base, without a coherent scientific rationale. But, today, technology is enormously dependent on science, and vice versa.
The distinction between science and its technological applications has important implications. Scientists have often been criticized for making discoveries that have then led to harmful uses, especially in producing agents of destruction. Yet the investigator cannot foresee all the uses, and he does not have the power to ensure that his discovery will be put only to what he considers good uses. Indeed, he would be greatly resented if he tried to impose his idea of the good on society. His obligation is not to dictate answers to policy questions of balancing risks and benefits but to try to promote public understanding of possible applications and consequences and to use his special technical knowledge in evaluating the risks.
Because many modern technologies, such as genetic engineering, deal with intangible phenomena far outside common experience, they easily generate anxieties over remote, hypothetical risks. For scientists to correct these misconceptions is very difficult because, being scientists, they cannot in good conscience make absolute statements about safety (or anything else) and because scientists appear to be self-serving when they defend science. In addition, the obligation of scientists to try to correct public misconceptions often places them in opposition to those with exaggerated claims who appear to be protecting the public welfare and safety. Unfortunately, this role further fuels mistrust of the motivation of scientists.
THE EVOLVING METHOD
The scientific method is not static but continues to evolve. To Francis Bacon, in the 17th century, its main goal was to classify objects and events in nature systematically and then to infer general principles from groups of related particulars (induction). But going beyond this passive description (natural history) to probe into mechanisms underlying a process, Galileo revolutionized science by introducing the experimental method: actively manipulating the conditions and observing the effects.
While experimentation shifted the emphasis from inductive to deductive logic—deriving consequences from a principle—Peter Medawar has pointed out that the method is more accurately described as hypothetico-deductive (10). In its successive steps, observations lead to the imaginative creation of explanatory hypotheses; logic is used to deduce their consequences; a mixture of logic and imagination is used to design experiments that would test for these consequences; and the investigator uses known methods, or develops new ones, for carrying out these experiments. But the most brilliant ideas (in mathematics even more than in experimental science) often depend on leaps of the imagination that are called intuitive. These evidently depend on unusual inborn cognitive features, not as obvious as, but much like, the talent that led Mozart to beautiful creations at the age of four. “Model” and “interpretation” are virtually synonyms for hypothesis, but with less emphasis on the tentative nature of the concept. “Theory,” and even more, “law,” are used to imply something of broader significance, supported by a large body of evidence. However, logically they share the provisional nature of all scientific conclusions; hence they are not rigorously separated from hypotheses.
Though the hypothetico-deductive approach dominates modern science, the descriptive approach is still important, not only in fields in the tradition of natural history but also in the early stages in the exploration of any new area, however sophisticated the experimental techniques. First one must map the territory. At these stages the objective is usually to identify qualitative phenomena.
Advances in science depend on the hands as well as the brain—on new techniques as well as new concepts. And these partners advance reciprocally, much as abstract intelligence and the ability to use tools have advanced hand in hand in the evolution of the unique competence of the human species. (Alfred North Whitehead has suggested that classical Greece failed to develop a flourishing science, despite its remarkable explosion in philosophy, because citizens left manual work to slaves.) And as novel instruments and procedures have made our analyses increasingly sensitive and accurate, a sophisticated scientist must know their limits and choose those that are suitable for his purpose. The distinction between limited accuracy and unreliability is sometimes a crucial issue.
LIMITED SCOPE OF SCIENCE
Science has not only great powers but also strict limits. Most fundamentally, its full power can be applied only to those questions that can have, in principle, an objectively correct answer: questions about the nature of the external world. It can contribute much less in dealing with questions that involve values, whether moral or aesthetic. For values are variable, subjective products of human societies rather than absolute, objective realities.
To be sure, values, while not absolute, also are not entirely relative. As products of biological and cultural evolution, they are held on a leash (in E. O. Wilson's phrase) by our genes and our cultural traditions. Finding and promoting those moral values that are most satisfactory for an individual or a society place the greatest demands on our judgment, and the significance for our welfare is not diminished by the inability of these activities to yield the objectively correct answers of a science. Claims for scientific solutions to social problems, sometimes called scientism, have been a particularly enticing, and disappointing, feature of Marxist dogma. The clashes arise not from the superiority of one or another approach but from the failure to recognize that they are incommensurate and should each be limited to their proper domains.
Scientists often do not appreciate the fundamental barrier between science and problems involving values, and they certainly have not generally conveyed the idea to the public. Yet to do so would be a valuable exercise in humility, and it should help to lower unfulfillable public expectations. It would also help to clarify the differences between social and natural sciences and to reduce the invidious comparisons. Even though science cannot provide correct answers to questions of social policy, we should recognize that it can be helpful, because these problems involve not only values but also assumptions about the consequences of alternative actions, and here science can be a powerful and useful critic. However, this is only an adjuvant role—providing much less definitive answers than those that science can give to other kinds of questions.
We must recognize here one of the costs of the advance of science. The bright light that it casts, in the areas where it is successful, has diverted the attention of many intellectuals from what would earlier have been their main preoccupation, the principles by which we deal with conflicting values and interests. This effect of science has contributed, along with its undermining of many beliefs of traditional religions, to a major source of problems in our society: the weakening of a moral consensus. The inevitable and understandable reactions have been a longstanding source of antagonism to science, most conspicuously in the creationist objections to the teaching of evolution and reflected in polls in which less than 10% of Americans accept the purely Darwinian explanation of our origin.
More sophisticated analyses blame science in broader terms for many of the spiritual ills afflicting modern man, asserting that its mode of thought has delegitimized questions of ultimate concern and has led to a fragmentation of our values in almost every aspect of our social activities. In fact, this description no doubt carries much historical truth, but the blame is not constructive. Our search for a stronger consensus on values, and for greater social cohesion, will benefit more from clarifying the relations between science and values than from either kicking science or asking it to solve more than it can. We assume that science is here to stay, but the existing public suspicions encourage fear that the growing reactions to science and technology could lead our civilization into a dark age.
THE IMPERFECTION OF SCIENCE
Besides the limited scope of the scientific method, another limitation is its highly imperfect, groping approach. Even when experimentation tries to answer a straightforward question in a well-established field, the path is often tortuous, with many false leads—though the traditions of orderly presentation in scientific publication obscure the underlying disorder. In the growing cathedral of science many crumbling stones at the growing points are soon replaced, and the more important their position the sooner the defect is disclosed.
Although most significant advances in science are rapidly accepted, the uncertainties in some areas can lead to extended controversies, occasionally suggesting fraud. Rivals may continue for years to produce conflicting hypotheses or data. Moreover, the passion of a scientist in pursuit of an exciting idea places a strain on his objectivity. He may fail to consider alternative hypotheses, he may unconsciously avoid designing experiments that might threaten the favored hypothesis, he may clutch at straws in the face of decisive contrary evidence, or he may even distort the very process of observation. Such honest self-deception is a much greater source of false information in science than is fraud. The most accomplished scientists have had to be skillful in avoiding these pitfalls.
Though the ultimate acceptance of a discovery rests on its correspondence to reality, the initial reactions to a new proposition may be influenced by such factors as fashions, reputations, power relations, national rivalries, rhetoric, and the strength of commitment to earlier ideas. Max Planck has suggested that major innovations in theoretical physics become accepted only when the older generation has died out.
SPECIAL FEATURES OF RESEARCH IN BIOLOGY
Because most systems in biology are very complex, investigators have developed the practice of experimenting under conditions that compare systems differing in only one variable—so-called “control” experiments, such as comparison of diets that differ in one component or of organisms that differ in only one gene. Scientists frequently argue over what constitutes adequate controls.
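The logic of such a one-variable comparison can be sketched in a toy simulation (my illustration, not from the text; the mutation, its effect size, and the noise level are all hypothetical):

```python
import random
import statistics

random.seed(42)  # fixed seed so the comparison is reproducible

def growth_yield(has_mutation: bool) -> float:
    """Toy model of a culture's final density: a hypothetical mutation
    lowers the mean yield, while all other conditions are held identical."""
    base = 0.7 if has_mutation else 1.0
    return base + random.gauss(0, 0.05)  # biological noise

# A controlled comparison: two sets of cultures that differ in only one
# variable (the mutation). Any systematic difference between the group
# means can then be attributed to that single variable.
control = [growth_yield(False) for _ in range(20)]
treated = [growth_yield(True) for _ in range(20)]
difference = statistics.mean(control) - statistics.mean(treated)
print(round(difference, 2))
```

The design choice, not the arithmetic, is the point: because the two groups are identical except for the mutation, the observed gap in mean yield needs no further disentangling.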
Research in biology has to struggle not only with complexity but also with the variation bequeathed by evolution. While the physical sciences deal mostly with uniform populations of entities (all electrons, or all molecules of glucose, are identical), biology deals with genetically heterogeneous populations; within a species ordinarily no two members are identical. Also, genes and environment interact in ways that often make it hard to separate their contributions, especially in human studies. With such sources of variation conflicting results are hardly surprising.
While most advances in science proceed in an orderly way as new findings and new techniques open up new questions, the most fruitful discoveries, especially with the complex problems of biology, often depend on an accidental encounter with something unexpected—serendipity. Rita Levi-Montalcini entitled her autobiography In Praise of Imperfection, to highlight the role of luck in her Nobel Prize-winning discovery of a protein—the first of many—that regulates the growth of specific cells in the animal body (8).
But what appears to be luck is not usually pure luck. As pointed out in a famous aphorism of Louis Pasteur, “In the fields of observation chance favors only the prepared mind.” The most general meaning of “prepared mind” is retaining and accessing the appropriate blocks of information with which to form interesting associations of ideas. Outstanding scientists have a “nose” for distinguishing the significant from the trivial in the information that they accumulate and use, as well as in the problems that they tackle. Another meaning is the ability to design and execute an experiment well, a prerequisite for recognizing unexpected results.
UNDERMINING THE OBJECTIVITY OF SCIENCE
Scientists proceed on the assumption that even though they may flounder on the way, their objective evidence will eventually lead to reliable pictures of reality. Because scientists insist on objective evidence as the only convincing basis for belief, in the areas amenable to scientific inquiry, they are sometimes accused of dogmatism in rejecting beliefs that do not meet that criterion. Scientists are particularly impatient when preposterous evidence or arguments are presented in the press, or to a jury, for serious consideration. “Junk science” remains a problem for our legal system. For while it is usually easy to achieve consensus among competent scientists on most issues, the legal system finds it difficult to develop acceptable criteria to separate competent experts from irresponsible hired guns.
In recent decades some philosophers of science, and many sociologists, have increased public mistrust by questioning the very concept of scientific objectivity.
THE PHILOSOPHY OF SCIENCE
The foundations of science have been difficult for philosophers to formulate rigorously. David Hume raised one of the most fundamental questions in the 18th century: no matter how many times the sun has risen, on what logical basis can we be certain that it will rise again tomorrow? Bertrand Russell has provided a sensible, empirically based answer: instead of searching for a rigorous, absolute certainty, we should recognize that every scientific proposition contains an element of probability, and if there is sufficient supporting evidence we can treat that probability as virtual certainty (12). For example, what makes us confident that the sun will rise tomorrow is not simply the logic of induction, it is the relevant cumulative experience, incorporated into the seamless web of scientific knowledge. A probabilistic approach, combined with evolutionary biology, provides a better answer than rigorous logic also for another classical philosophical problem, the basis for believing that our perceptions correspond to reality. An organism could not have survived and would not be here today unless the information acquired by its sense organs corresponded sufficiently (though not necessarily perfectly) to external reality.
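Russell's probabilistic reading of Hume's sunrise problem can be made concrete with Laplace's classical rule of succession (a standard illustration, not part of Davis's text): after n sunrises with no failures, and assuming a uniform prior over the unknown success rate, the probability of another sunrise is (n + 1)/(n + 2), which approaches but never reaches certainty.

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: the posterior probability that the
    next trial succeeds, given `successes` out of `trials` so far and a
    uniform prior over the unknown success rate."""
    return Fraction(successes + 1, trials + 2)

# The more mornings the sun has risen, the closer the probability of
# another sunrise approaches (but never reaches) 1: virtual certainty.
for n in (10, 1000, 1_000_000):
    print(n, float(rule_of_succession(n, n)))
```

This mirrors Russell's point exactly: no amount of evidence yields logical certainty, but sufficient evidence yields a probability we may treat as virtual certainty.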
Raising another fundamental question, Karl Popper, perhaps the most influential philosopher of science in this century, has emphasized that experiments cannot prove a hypothesis, for however many results support it, a single contradictory result could “falsify” it, i.e., could require that it be abandoned or modified. (English was not Popper's native tongue, and this use of the word “falsify” seems unfortunate, since it usually refers to something quite different, the publication of fraudulent findings.) This emphasis led to a useful, widely accepted principle: that a proposition is not scientific unless it is capable of being tested for falsification (11).
Though Popper's main theme is logically sound, it is not readily translated into practice by working scientists; for when a hypothesis has a broad base of support, discarding it on the basis of a single apparently falsifying result is risky, because that result may turn out to be incorrect. In the explosion of molecular biology in the 1960s such bold investigators as Francis Crick, Sidney Brenner, Francois Jacob, and Jacques Monod rapidly produced a wealth of fundamental concepts, on the basis of relatively skimpy evidence (regulation of genes, mRNA, its translation into protein on ribosomes, the genetic code). The resulting heady atmosphere led to the witticism that if a finding disagrees with a good theory, the finding must be wrong. In a more serious problem, another part of Popper's thesis, the impossibility of absolute proof, was seized on, by critics on the border, as an argument against the objectivity of science. But if one accepts Bertrand Russell's suggestion that science seeks probabilistic rather than absolute truths, the main reason for doing additional experiments would not be to test for a possible falsifying result. It is to increase the number of positive results, on which a probability is based. Popper's emphasis on falsifying evidence, and downplaying of the value of positive evidence, has thus had little influence on the course of science.
Thomas Kuhn has offered a broader criticism of the standard view of the objectivity of science (7). It is based on the revolution, in modern physics, in our notions of time and space and of waves and particles. Kuhn suggested that any field of science is likely eventually to experience similar “paradigm shifts,” a paradigm being a fundamental, broad concept on which a field is built. However, these shifts in physics were not really revolutions, in the usual sense of destroying an old regime. The principles of Newtonian mechanics did not disappear; they remained valid for the familiar range of dimensions, but they had to be absorbed into a broader concept that also encompassed subatomic or cosmological dimensions.
Kuhn's views have been popular with literary critics, who interpreted them—though not with his blessing—as denying that there is a reality out there more substantial than their textual deconstructions. Scientists, of course, reject this view, but they have also generally been skeptical about an even more modest interpretation of his main thesis, the prediction of likely paradigm shifts in any field. Biology, in particular, faces such complex mysteries that its major discoveries have been filling vacuums rather than subverting established doctrines. For example, the nature of the genetic material was simply unknown until it was established as DNA. The prevailing opinion, that it was probably protein, was recognized as only a guess and not a paradigm.
IMPACT OF PHILOSOPHERS
Despite its intrinsic interest, the philosophy of science has had surprisingly little impact on the practice of science. Most scientists find philosophy quite irrelevant to their work. One of the giants of physical chemistry, G. N. Lewis, dramatized this view: “The strength of science lies in its naivete” (9). It is of interest that chemistry has erected a remarkably rigorous, coherent framework without having to deal with epistemological questions, though such questions do arise today in physics and in biology, in such matters as the nature of space and time or the origin of life.
Most philosophers of science agree that their intellectual challenge, in seeking a rigorous base for the scientific method, does not raise any doubts about the validity of the insights of science into the real world. But one school of philosophy, exemplified by Paul Feyerabend, does raise such doubts and questions the “privileged” character of scientific knowledge, i.e., the claim that it is not simply one of several alternatives but is the closest we can get to the truth, in the areas where the method is applicable. The writings of this school—as well as the more mildly subversive ones of Popper and Kuhn—have interested students of the humanities and the social sciences much more than natural scientists, and most of these readers fail to acquire a balancing acquaintance with the actual procedures of science. Though this philosophical literature has a limited readership, the ideas have influenced writers who reach a larger audience, thus contributing to skepticism about the claims of science.
SOCIOLOGY
Among sociologists the leading argument against the objectivity of science runs as follows: since the process of discovery is not entirely value-free, science cannot be objective. However, one forthright social scientist has rebuked those colleagues who advance this argument. He accuses them of belittling the objectivity of the natural sciences for an illegitimate purpose: to increase the scientific stature of the social sciences by minimizing the methodological differences (6).
But apart from this question of motivation, there is a more fundamental, semantic defect in the argument that science is not value-free. It rests, in effect, on a play on words, because it conflates three different meanings of the term “science.” In different contexts the word may refer to a method, to the activities of the people employing that method, or to the resulting body of knowledge. Among these, the activities clearly do have numerous subjective features; hence they are indeed not value-free. (Science in the sense of a set of activities is highly subjective in several ways. When investigators choose a problem, or choose the methods to be used, or even make daily decisions on what to do next, they face personal judgments that cannot be entirely objective. The same is true of the decisions of those who distribute the resources that make an investigation possible. But these factors have nothing to do with the validity of the products.) If the product is honestly appraised and found to correspond to nature it is value-free and objective, regardless of the values held by the investigator or his critics. Indeed, a scientist commits a serious crime against science if he subordinates its findings to cherished values when the two appear to be in conflict.
Philosophers have used the term “naturalistic fallacy” for efforts to derive an “ought” from an “is.” Since the subordination of science to ideology that I discuss here tries to derive an “is” from an “ought,” I have suggested that it be called the “moralistic fallacy” (1, 2). That article was suggested by a lecture in which George Steiner proposed that certain kinds of studies should not be pursued. His example was identification of genetic components of human behavior, since these might differ statistically among races, and this knowledge could threaten beliefs that support ethical convictions of great value to society.
The debatable notion of forbidden knowledge has a long history, going back to the Garden of Eden and to Pandora. It is a difficult notion for scientists to accept, since all knowledge can be used in various ways, and it would seem better to try to restrain the bad uses rather than to deprive ourselves of the good ones. It is pretty much an axiom among scientists that knowledge is preferable to ignorance; in any case, in the long run it will be impossible to unlearn the scientific method and to suppress curiosity. But we might add Steiner's kind of concern to the sources of public disaffection with science discussed above.
A radical school of sociologists of science has gone even further in rejecting the claims of science, by asserting the relativity of all knowledge. Their “Strong Program of the Sociology of Science” proposes that the credibility of all beliefs, including the value of reason and logic, is based on their social context and that current authorities promulgate science in a way that promotes their interests. Moreover, since the validation of an item in the body of scientific knowledge involves a social consensus, they conclude that all this knowledge is merely a social construct. In fact, however, this view is based on a misunderstanding of the process of validation. Though social and political factors influence the initial reactions, they dissolve as the final consensus is reached, on objective grounds.
Other sociologists of science also focus on the influence of social values on science, but in a less extreme way. Their “externalist” approach shifts attention away from the traditional analysis of the internal processes of discovery, analyzing instead the impact of external forces. Scientists, however, find the internal mechanisms of science more interesting and in the long run more influential. Large breakthroughs that open a new vista in future research, such as the discovery of recombinant DNA, are usually unpredictable, and so they cannot be traced as directly to forces from outside science as to trends within science and to individual psychology. The subsequent explorations that they open up are more predictable, and external social forces that affect funding and other rewards modulate their directions. Seeing this pattern, scientists emphasize the need for preserving an atmosphere that encourages those individualistic, gifted persons who provide the breakthroughs, even though the great majority of scientists do not fall into this category.
EDUCATION OF A SCIENTIST
Entry to most professions, such as medicine or law, requires a fairly standard set of courses of instruction and then a license to practice. The education of a future independent investigator is quite different. Training for the Ph.D. is longer than that in most professions, about 6 years on the average in the biomedical sciences and then 2 or more years of postdoctoral experience under supervision before an independent career is undertaken. It involves a prolonged, intimate relationship with other students and with faculty in a department and particularly with an individual preceptor, much as the growth and socialization of a child in a family instills a sense of morality more from the views and the actions of the parents than from formal instruction. The most important things to learn are intangible: how to select a problem, how to pursue it, when to persist in an approach or to try others, and how to relate to other scientists and the broader community. And while the ultimate accomplishments of an individual depend very much on his or her gifts, good training may also pay off.
A striking effect of the apprenticeship is a deep commitment to the honest recording and reporting of the data. Scientists take in this principle with their mother's milk, so to speak. It is not that they have been selected for exceptional honesty; rather, they realize that their results, if interesting, will be tested by others for their correspondence to reality, and nature always has the last word.
For those who are reasonably successful, science is a wonderful way of life—so much so that envy might color some reactions of outsiders. Scientists are paid for having fun, since the challenge of trying to outwit nature is almost a game, though those who are successful generally work harder at it than people in most lines of work. Francis Crick remarked that science is both monastic and hedonistic. The long hours and fierce dedication may appear self-sacrificing, but to the scientist they are a form of play.
On the other hand, the income is less than many bright and energetic scientists could earn in other occupations, which is one reason for society to avoid creating an atmosphere that eliminates the fun. One compensation for successful scientists is travel, as members of an international community. In addition, if one makes a solid discovery it is immensely satisfying to know that it will stand for all time, and even when it does not contribute visibly to human welfare, it may do so indirectly as part of an advancing front. Scientists have also enjoyed the feeling that the public appreciates and admires them. However, the attractiveness of a career in science is certainly being diminished by its increasingly evident economic insecurity and by decreased public respect in recent years.
Science is fundamentally elitist, not in the sense of inherited privilege or unearned entitlement, but as a meritocracy, with a hierarchical structure theoretically based on achievement. In principle, the eventual judgments of merit are nearly as objective as those in musical performance. But judgment of talent or accomplishment is a much slower and more difficult process in science than in music, where technique is highly revealing. In addition, factors such as luck, personal connections, and aggressiveness are often more important than actual achievements, and these are difficult to evaluate. If the experiments of a student or a postdoc fail to yield consistent or reasonable results, the cause may be lack of skill or care or understanding, or it may lie in the nature of the problem or in the circumstances or expectations. So when a preceptor loses confidence, the trainee is likely to be resentful and reject the judgment—a common source of tensions in research laboratories.
A variety of talents are valuable in science, but not all are necessary. An individual may excel in mathematical and theoretical analysis or in experimental skill, in imagination or in critical logic, and in capacity for hard work or in creative inspiration. But the scientist has to be absorbed in the problem and eager to solve it. For while many investigators get a bright idea, they differ in the strength of the urge to test it as soon as possible. A clever idea is entertaining, but the proof is much more deeply gratifying.
In addition to all the obvious traits that might seem to contribute to it, success depends also on less definable factors. Some industrious students with outstanding grades, and with high levels of logical and verbal skills, are nevertheless unable to develop a “feeling” for doing good experiments and for distinguishing the significant from the trivial—they lack a “green thumb.” This can be a great source of frustration. Another source is getting into the wrong branch of science. Branches dealing with different levels of complexity of organization encounter quite different degrees of uncertainty and precision, and individual scientists, who vary in their natural affinity for various levels, are fortunate if chance has steered them into a field suitable for their temperament.
SCIENCE AND ART
As a creative activity science is often compared with art. They share passion, the exercise of the imagination, and pride in creating something interesting or beautiful that did not exist before. But the unique, highly personal artifacts created by artists are treasured indefinitely as expressions of the individual spirit, while a scientific discovery is treasured in a different way: the knowledge is soon buried deep in a growing edifice, and except for the largest names, the individuals and their personal contributions fade away after a generation. Another difference lies in the exercise of the imagination: the concrete realities of nature constrain the creations of scientists much more than the limitations of the media constrain artists. Nevertheless, originality means as much to scientists as to artists, and they can be equally fierce in defending it. Even though the truth that has been discovered would eventually have been found by others, the one who did discover it usually did so by an exercise of the imagination that deserves pride.
In another parallel to art, theoretical physicists and mathematicians often suggest that elegance and aesthetic appeal serve as a guidepost in the search, and even as criteria for validity. However, in the more empirical realm of biology this view does not find much support. The simplicity of the structure of DNA added to its impact—but was not important in its derivation or its proof. The most intricate and beautiful features of organisms, achieved through the trial and error of evolution, are not products of nature's preference for beauty but are successful permutations and combinations of the same set of nuts and bolts found in the simplest bacteria. Francois Jacob has remarked that evolution is a tinkerer and not a sculptor.
Since the practice of science manipulates existing materials it may be even closer to musical performance, which manipulates existing compositions, than to musical composition or painting, which manipulate within much broader limits. Both scientific experimentation and musical performance require working hard at developing a skill, and also appreciating an abstract kind of beauty. It is not surprising that many scientists are enthusiastic musicians and may even have aspired earlier to careers in music.
MAINTAINING STANDARDS IN SCIENCE
The traditions for establishing and for monitoring standards in science are informal, except for the strong role of editorial control of publication. No licenses are required for employment, teaching, or publication; individuals and their contributions are judged on their merits. The doctoral degree is the usual key to entering the community, but it is not essential.
Scientific journals were originally sponsored, and many still are, by scientific societies, and they reflect the values and interests of the members. (Many excellent journals are now commercial, but their policies are also set by editorial boards of scientists.) The good journals employ an elaborate procedure in which each manuscript is refereed by two or more experts in its special area—a form of peer review. Most referees read a manuscript carefully, and some may spend hours studying the details. Scientists freely carry out this task, without pay—and even without direct credit, since they remain anonymous in order to permit frank criticisms.
Referees can be highly effective in certain areas, especially improving experiments or presentation of the results; but they cannot identify most factual errors or fraud, and they certainly cannot certify the validity of a paper. What referees can judge well is whether the findings are clearly presented, whether the data are derived by suitable methods and support the conclusions, whether the paper takes into account relevant previous publications, and whether it is of sufficient interest to the readership of that journal. Different referees of a paper often present surprisingly different criticisms—strong evidence that science is not cut and dried but involves a large amount of personal judgment. On the negative side, the criticisms may be irritating and even cruel, and rejections often seem unjustifiable to the authors. A serious additional problem is the opportunity for filching ideas, as will be discussed below. But while authors grumble, they generally agree that the benefits of this system of mutual help outweigh the pain; the aim is to produce an objective, reliable account and not a personal literary expression, and papers by even the most outstanding scientists usually benefit from outside criticism.
The scientific community recognizes the value of many different talents and styles of operation. Some investigators are perfectionists, polishing their science through many experiments while polishing their papers through many drafts and putting out only a few per year. Others go through fewer drafts, and while they are more likely to overlook errors in the details, they may compensate with a larger output. Both patterns are valuable, but the jewel polishers tend to be critical of the rapid excavators.
This difference is accentuated by the modern tendency of highly productive investigators to shift their activities, after the first few years, from bench work to directing many junior associates. While they thereby increase their productivity, the greater distance from the experimental operations decreases their ability to use their skill in recognizing a valuable unexpected finding. The tendency to have very large groups enables the excavators to take rapid advantage of any new idea heard at a meeting or seen in a grant application or a manuscript. There are clearly both benefits and costs in directing a large group and questions of fairness in the distribution of limited funds, but while this is an area of lively discussion there is no obviously correct answer.
Some of the most imaginative contributors have idiosyncratic or difficult personalities. While the scientific community avoids the vague word genius, its appreciation of exceptional gifts makes it tolerant of eccentricity, much as society treats gifted artists. And it is concerned about inroads on that tolerance, though these seem to some degree an inevitable consequence of government funding.
SCIENTIFIC WRITING
A scientific paper is an unusual art form. It has to be as compact as possible, while giving the reader all the information needed to repeat the experiments. Because the literature is vast, the format of a paper is standardized so the reader can quickly find the parts that interest him; readers skim most of the papers that they look at, except those very close to their interests. The aim is efficient, impersonal transmission of the essentials, rather than a narrative account of the steps along the way. The personal aspects of the process of discovery are therefore usually left out, though they may appear when the material is presented in a lecture.
Writing a scientific paper well is difficult, though the problems are different from those of belles lettres. It is a challenge to present the material compactly but without ambiguity and to organize a complex argument coherently. Yet despite the stereotyped form, some intellectual leaders, such as Francis Crick or Jacques Monod, convey an elegant, personal style.
Though professional scientists are by definition professional writers, many do not write well. Several additional factors have lowered the quality of the literature: competition encourages scientists to publish quickly; a scientist who is successful in building up a large research group will have even less time per paper; and journals can no longer afford to edit papers for clarity, as was a common earlier practice. (Oswald Avery, who identified the material of the gene as DNA, published at most two or three papers a year. My teacher, Rene Dubos, told me that when Avery had completed a manuscript he would store it in a drawer for a couple of months in order to be able to give it a final polishing. A scientist today, in a competitive field, would find it hard to follow that practice.)
In recent decades, because techniques have become more specialized, most papers in the biomedical sciences require the collaboration of investigators with different skills. Four or five coauthors, often from different institutions, are standard, and the number may reach over 100! This pattern increases the chance of errors in communication, as well as raising questions of the responsibility of individuals for parts of a paper other than their own.
For these several reasons, many scientific papers are far from meeting reasonable standards of professional writing. The defects not only make for slower reading but are a source of misunderstanding.
PRIORITY, CREDIT, AND COMPETITION
In science the esteem of one's peers is a major source of motivation and is the main index of success. Reputations strongly influence the reception of one's work. One wit caustically remarked that a certain scientist was disappointingly unreliable, because he was not always wrong! Reputations also play a key though informal role in monitoring behavior.
Science is a peculiar kind of creative activity. The fact that a discovery will stand for all time gives a scientist deep satisfaction. On the other hand, because science seeks to disclose already existing features of nature, a discovery, unlike a work of art, is in a sense inevitable. Even in an era when science was slower and less crowded, some major discoveries were made simultaneously by two independent investigators—strong proof of inevitability. The discovery of differential calculus by Newton and by Leibniz is the classic example, and their bitter fight over priority reminds us that competition in science is not a recent invention. Moreover, since most science advances as a moving front, even an apparently highly original contribution is likely to be strongly influenced by what others are doing. Nevertheless, priority earns great credit.
The evaluations of scientific achievement are not entirely subjective; it is widely recognized that a few particularly creative investigators are responsible for the main advances, however inevitable these might be in the long run. These explorers revel in uncertainty, they have a talent for recognizing when new developments open up a previously insoluble problem, and they have the courage to gamble that they will solve it. They also are professional skeptics about the work of others, and they love to make a discovery that displaces the received wisdom.
Traditionally the greatest scientists have created, and remain identified with, a program that dominates an area. Molecular biology, however, introduced a new style: a tendency of the leaders to move rapidly from one nugget to the next as they opened up new trails, ranging from the structure of DNA to the genetic code. The difference resembles that between an industrialist and a clever, nimble investor. To compete at this level requires a large ego, confidence in one's cleverness (and ability to solve any problem!), and a great deal of energy.
While priority generally is the most important factor in allocating credit, discriminating scientists respect colleagues who wait for publication until they have solidly established a phenomenon; hence such a person may end up with more credit than one who has rushed into print a bit earlier with skimpy evidence. The decision on when a block of information is ready for publication—and on which data to include (usually a small fraction of the total in the notebooks)—is personal and rather arbitrary. Premature publication can be very damaging to the younger scientists involved, while the head of the group often still benefits, since someone else in the group may succeed with the project. Unfortunately, the size of an individual's bibliography often plays a large role in academic promotion, thus encouraging the publication of superficial works that do not really add anything to the scientific literature.
Though most research on topics of wide interest is highly competitive, some scientists work in these areas and yet are not deeply concerned about the competition. Going even further, some deliberately choose problems that are not in a current mainstream. This approach occasionally leads to a contribution that is exceptionally original, like Avery's finding that DNA is the genetic material or the discovery that a bacterium is a major factor in peptic ulcers. But such investigators face the risk that a discovery in an unfashionable area may encounter resistance or suffer delayed recognition, for years, outside a small circle of cognoscenti. For example, when Oswald Avery and colleagues made their discovery in 1944, few scientists appreciated this surprising finding. It was presented without fanfare, it did not stimulate many scientists to work on DNA, and it did not lead to a Nobel Prize. (For possible reasons for this neglect, see reference 4.) In contrast, when Watson and Crick, in 1953, showed that DNA is composed of a pair of complementary strands, the structure directly suggested how genes could duplicate themselves, thus opening up many new problems; the response was therefore dramatic. Yet this discovery was clearly inevitable within very few years, whereas the genetic role of DNA might not have been discovered for many years if it had not emerged serendipitously from Avery's persistent study of a major cause of death, the pneumococcus.
We should not underestimate, however, the importance of the much larger number of yeomen in science, who work within a more established framework and who steadily confirm major findings and add solid items. When a startling breakthrough appears, innumerable specific examples are waiting to be explored—and their unexpected variations frequently lead to further insights. The extraordinary flowering of molecular biology depended not only on a few leaders but on the fact that governments had already built up a large cadre of biomedical researchers, who moved into the new field. Government finds it much easier to relate to this major population, doing somewhat predictable work, than to the mavericks who explore unknown territories.
There is no doubt that competition increases the rate of progress. Also, while it may encourage haste in publication, it protects against fraud: a fabricator who wishes to avoid verification will not choose a competitive field.
The scientific community does not have rules, or even a consensus, on the limits of competition; judgments on whether or not to share materials or information or to invade an area just opened up by another investigator are based on personal taste, with awareness that people develop reputations for generosity or for its opposite. Excessive competition in the form of failure to credit closely related work is common. It is generally disapproved, and it is sometimes corrected by referees; but it is not punished unless it amounts to plagiarism.
The imperfect apportionment of credit within a paper presents another problem, minor but universal. Those who carry out the experiments resent the fact that readers are more likely to remember the familiar name of the director of the project. (When I confessed to this resentment, as a young researcher, my preceptor pointed out that this unfairness was unavoidable, but it becomes rectified retrospectively if the junior investigator continues to be productive and thus becomes judged by his whole bibliography.) But a deeper problem is the frequent extension of automatic “honorary authorship” to the person who administers a group and acquires the funds, for it is not always easy to determine whether he has also made a real intellectual contribution. Of course, the obverse of credit is responsibility. Discussions of misconduct issues have led to the frequent assertion that every author should be held responsible for, and should be able to defend, everything in a paper that bears his name. This expectation is reasonable for a paper with a narrow scope. But when authors contribute complementary, very different skills, it is unrealistic to expect each to know the fine points and pitfalls of all the operations. And the fine points are almost invariably where disputes arise. Science must be built on trust, in collaborators even more than in the literature.
MORE ON COMPETITIVENESS
Competitiveness has been an important issue in many cases of authenticated misconduct. James Watson's Double Helix gave the world a candid picture of the extreme competitiveness that animated his drive toward a great discovery. The book provoked endless discussion, especially of the morality of extracting vital unpublished information from a competitor without reciprocating with information about one's own progress. But while we are indebted to him for recording this spectacular piece of scientific history with such exceptional honesty, his behavior was hardly representative, and it gave the public a distorted picture. Though Watson and Crick pushed competition to the limits, most scientists have found it acceptable because of admiration for their sharp recognition of the importance of the problem and their intense dedication, which led them to try to jump to the goal instead of proceeding in small, solid steps.
Competition clearly has become more prominent in science in recent years. The reasons include the increased number of scientists, the need for continual publication and for continuous grant support, the commercial products that make it possible to obtain data much more quickly, and the many prizes that are now dangled before the scientific community. While prizes encourage scientists in many ways, they also encourage invidious attitudes.
DISCOVERIES AND CONCEPTUAL FRAMEWORKS
Scientists applaud not only a discovery that opens up a new field but also the further intellectual achievement of those who place it in a broader, more influential conceptual framework. An excellent example is the regulation of enzymes, discovered and developed in the following way.
Each building block in a bacterial cell (e.g., an amino acid or a nucleotide) is synthesized by a sequence of enzymes (a biosynthetic pathway) branching off from a central pathway. Edwin Umbarger and Arthur Pardee, working on different systems, independently discovered that the activity of a pathway is controlled by a special property of its first enzyme: it is inhibited by a sufficient concentration of the end product (negative feedback). Hence, while a bacterium growing in a simple medium must synthesize these building blocks, in a rich medium their external supply spares the expenditure of energy for this purpose; this response explains why bacteria grow faster in the rich medium. It was then found that the same mechanism responds to internal stimuli: as the concentration of the endogenously synthesized end product falls or rises, the response of the key enzyme adjusts the activity of the pathway to keep that concentration in the right range for the needs of the cell, like the response of a thermostat to temperature.
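The thermostat analogy can be made concrete with a minimal numerical sketch. This is purely illustrative: the rate law, the rate constants, and the constant consumption term are invented for the example, not drawn from any measured pathway.

```python
# Minimal sketch of end-product (negative feedback) inhibition, thermostat-style.
# The first enzyme's synthesis rate falls as the end product P accumulates;
# a constant drain represents the cell's consumption of P. All constants
# (v_max, K, consumption) are hypothetical values chosen for illustration.

def simulate(steps=2000, dt=0.01, v_max=10.0, K=1.0, consumption=2.0, p0=0.0):
    """Euler integration of dP/dt = v_max * K/(K + P) - consumption."""
    p = p0
    for _ in range(steps):
        synthesis = v_max * K / (K + p)  # first enzyme, inhibited by its end product P
        p += dt * (synthesis - consumption)
        p = max(p, 0.0)                  # concentrations cannot go negative
    return p

# Analytic steady state: v_max*K/(K+P) = consumption, so P = K*(v_max/consumption - 1).
```

Whatever the starting concentration, the feedback drives P toward the level at which synthesis exactly balances consumption; raise the drain and the pathway settles at a lower concentration, just as a thermostat holds a room near its set point under varying heat loss.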
The discoverers of this truly fundamental process have been properly honored. However, Jacques Monod, in Paris, brought out a much broader significance by shifting emphasis from the physiology of feedback to its molecular mechanism. Unlike other proteins, which each fold into one stable conformation, these key regulatory enzymes are capable of switching between an active and an inactive conformation, depending on whether they are binding the small molecule whose concentration they were designed (by evolution) to sense. The original discoverers were aware of this, but Monod dramatized it, and he called these enzymes “allosteric” (different shape). Moreover, he showed that the mechanism applies not only to key biosynthetic enzymes but also to certain other proteins, which regulate gene activity. Subsequently, allostery acquired much greater significance when it was recognized as the molecular basis for innumerable other examples of information transfer, such as those in hormone action, in sensory nerve endings, and in nerve impulses.
Allostery is just as fundamental a form of molecular information transfer (learned information) as is the base pairing between nucleic acid sequences in gene replication (inherited information). But it did not burst on the world in the same dramatic way, and so its importance has not been nearly as widely recognized.
SCIENTISTS AND THE LAY COMMUNITY
Scientists are passionately dedicated, for many reasons, to their way of seeking truth. They find a coherent, rational understanding of the world more deeply satisfying than miracles or other facile alternatives. Moreover, they are proud that science and technology have improved the quality and security of the lives of much of mankind in innumerable ways, and while thoughtful people are also concerned about the long-term threats of technology—to social stability, to the global environment, and even to the survival of our species—scientists tend to be enthusiastic about the more immediate and tangible benefits, rather than focusing on prognostications that are uncertain and distant, though grave. As C. P. Snow has remarked, scientists feel that they have the future in their bones.
We can thus view scientists as a sort of modern equivalent of monks or, more broadly, as a hybrid of monks, artists, and practical contractors, engaged in building a cathedral of unlimited scope. Outsiders may be concerned that the construction is expensive and that they are not sufficiently involved in the process. Scientists meanwhile regret that a wider audience cannot share their pride in their goals, for which they have an almost religious reverence. Whether this attitude is seen as hubris or as a dedication to objectives that transcend their obvious material interests, it helps to explain why scientists have often clashed intensely over principles at stake, as in the Baltimore case—both Baltimore and his critics really care.
A number of additional factors separate scientists in certain ways from the rest of society. Their deep interest in understanding obscure features of nature, whether or not these have practical payoffs, must seem odd to their neighbors, for whom the proper study of mankind remains man. They tend to be clannish. Moreover, it is no secret that some individuals select science as a career because they are more comfortable in dealing with things than with people. Paradoxically, though one would thus expect science to select for introverts, scientists in training have a very sociable life compared with that of students doing library research in the humanities. The long hours of working at the bench in a room shared with other young investigators encourage a great deal of chatter, usually about science.
In addition, scientists are resented as a priestly class, influencing all our lives too much, and seen as lacking humanity because of their focus on objectivity. This sense of alienation from science is not confined to the general public. In 1959, in The Two Cultures and the Scientific Revolution, C. P. Snow dramatized the gap between scholars in science and in the humanities (13). In his view science is now an important part of culture, and all educated people should acquire what is now called scientific literacy. Unfortunately, there is little sign of progress toward this goal, and the gap among scholars seems to have widened. Many of our nonscientific colleagues decry the growing impact of scientists, and some seem to exhibit schadenfreude at seeing them now under fire. Intensifying the negative image, television most often depicts scientists as mad or evil—modern Frankensteins.
This changing climate, after a long history of public admiration, makes scientists feel misunderstood, with the benefits of their activities taken for granted while the dangers are magnified. It is even more painful to see the public widely questioning their integrity, when the record suggests that it averages higher than that in perhaps any other profession. And seeing so much of the public preferring mystical, antiscientific attitudes—astrology rather than astronomy and creationism rather than evolution—scientists fear the impact of increasing politicization on the purity and the effectiveness of their enterprise.
SCIENCE AND THE LAW
In their ways of searching for truth, the collegial approach of science and the adversarial approach of the law differ in many respects. Some of these differences are contingent on local circumstances, but others are inherent in the nature of the subjects. A scientist is expected to give proper weight to apparently conflicting findings, whereas the legal advocate is free and even obligated to select only the facts that support his client. Moreover, the law places great weight on precedent and on correct procedures, while scientists are often impatient with this formality, convinced that their pragmatic and flexible approach, adopting any procedures and criteria that prove useful, is often more effective. Also, because the search for precise recognition of realities often introduces qualifications and probabilities into their conclusions, scientists often find it hard to meet the demands of the legal system for black-and-white answers. Finally, in dealing with unsettled questions scientists tend to become impatient with prolonged arguments and to seek illumination by further experiments, while the law is generally restricted to dealing with the available information about what has already happened.
An example of a mechanical application of the law is the famous “ice-minus” recombinant of a bacterium widely found on the leaves of plants. The parental form nucleates ice formation and hence causes crop damage at freezing temperatures; the recombinant had been genetically altered in such a way that it no longer had this effect. It was hoped that spraying this strain on plants might be useful in displacing the damaging parental strain present in nature. However, the tests were prevented for two years, at great cost to the scientists and the company involved, because Jeremy Rifkin, a political activist and founder of the “Foundation on Economic Trends,” was able to stir up anxiety in the local community that led to legal injunctions.
Any knowledgeable scientist would have to testify to the virtual impossibility that removing a gene known to be responsible for the damage would make the organism more dangerous. On the contrary, this use of a mutant on plants was analogous to the use of an attenuated virus as a vaccine in animals, to prevent infection by the virulent parental form. In this case scientists were dismayed by the evident obligation of the law to respond to public perceptions, however ill informed, and to legal formalities, rather than to scientific evidence and a pragmatic view of the obvious public interest.
Another difference concerns the severity of the response to human weaknesses or to errors. Zealous prosecutors in the legal system, and even more their equivalents on Congressional staffs, place great value on the deterrent effect of a conspicuous conviction or exposure, even at the expense of inactivating a potentially valuable citizen, and even over a minor infraction. Scientists, with a more informal system, tend to give the benefit of the doubt where there are uncertainties, preferring to preserve the individual's capacity for further valuable activities. For example, Robert Gallo, who saved thousands of lives by developing a test for identifying blood infected with the AIDS virus, has been investigated and widely criticized for having concealed the use of a virus sample from France in developing his cultures for the test. He denies the accusation and attributes the presence of the French strain in the test to its accidental contamination of his own culture and its capacity to outgrow it. Scientists familiar with virological procedures know that such contaminations do occur frequently, and there is no way to eliminate this possibility. Under these circumstances, and without definitive proof in either direction, most scientists, unless aroused by news reports, would be inclined to give their colleague the benefit of the doubt and drop the issue. But the political activities of Rep. Dingell and the economic ramifications of a patent, shared between the National Institutes of Health and the Pasteur Institute, led to continued pursuit of the problem.
If scientists are uncomfortable with the framework of the legal system, they are even less well equipped to deal with that of politics. Political discourse aims at a consensus that reconciles conflicting views and interests, and it has to be flexible about means and ends, so it must deal with information diplomatically and not within the simple framework of openness and honesty that best fits the goals of science. Scientists may have trouble understanding this point of view, which often leads to a tricky handling of obvious truths. Moreover, scientists underestimate the importance of emotional appeals in politics (including the politics of science) and the inevitable role of power relations.
On the other side, the success of science may have lessons for attitudes toward truth in other areas. But the basic differences are inescapable. The problem has been succinctly stated by Baltimore: scientists do not belong in the political world, for its reliance on emotions, compromise, and trading, rather than on logic, is the opposite of what they are dedicated to.
AUTONOMY IN THE CHOICE OF PROBLEMS
Society has always allowed the baffling species called scientists a great deal of autonomy, in both their choice of problems and the governance of their community. When the government began large-scale support after World War II, it intensified this tradition by adopting a system in which funds were distributed by peer review. This system also revolutionized the organization of science by giving young investigators, in their most creative years, an unprecedented opportunity for independence.
Autonomy does not mean freedom from responsibility or from external oversight of risks; scientists obviously accept fire laws and regulations for the handling of pathogenic microorganisms. Autonomy also does not mean lack of accountability. But the scientist's definition of accountability is not that of a bank accountant. Its main vehicle is publication; every time a scientist gives a talk or publishes a paper he is accountable to his peers and subject to their potentially severe criticisms. In addition, grantees must make annual reports to the granting agencies and seek renewal at frequent intervals.
In defending their traditional autonomy scientists are not demanding an entitlement or a right; they see it as an essential ingredient for the progress of science. Of course, this historical role might no longer fit a changing world, but history does not support that view. For example, when the totalitarian Nazi government in Germany extensively interfered, science deteriorated, though it had started with the strongest institutions in the world. Similarly, Stalin destroyed genetics in the Soviet Union for three decades when he was persuaded to mandate Lysenko's pseudoscientific view of heredity.
TECHNOLOGY AND DARWINIAN PRESSURES
Concern over the social adjustments triggered by new technologies has been one of the sources of antagonism to science, leading to a challenge to its autonomy. Such a challenge has come from a surprising source, a person with a deep interest in science: Albert Gore, then a legislator and now Vice President of the United States. He once suggested that because the advance of technology (especially bioengineering) has become so rapid, creating a lag in the development of policy for its control, we should wait for policy to catch up (5). But it is not obvious how this could be done, for it is not clear what it means to “catch up.” Policy studies do not have the cumulative or progressively increasing powers of science itself. Moreover, even if we do achieve improved mechanisms of response, as Mr. Gore anticipated, they will not prescribe the answers. With each new technology we will have very limited capacity to foresee consequences, and hence to develop appropriate policies, until we have had some experience with it.
The proposal illustrates a deep gap in assumptions between policy planners and workers in science and technology. Its roots lie in the Darwinian revolution. We humans, seeing that our own complex activities arise from intentions, have long projected this psychological mechanism in trying to understand the unfolding of other complex events in the world. In the most conspicuous example of this anthropomorphism, innumerable cultures (but not all) have invoked a divine providence to explain the origins of the universe, and also of humans. But Darwin broadened our understanding of causality by showing that an a posteriori process of natural selection, by trial and feedback (traditionally called trial and error), could provide an alternative to the familiar a priori mechanism, involving intention.
Though the Darwinian mechanism was developed to explain evolution, it had profound philosophical implications, as it was found to apply to a variety of other processes. For example, in a remarkable process in a developing animal the growing nerve fibers in the central nervous system establish connections with each other, or reach a destination in a distant finger, by trial and feedback: they make many contacts with other cells but retain only a fraction. Similarly, the now familiar search-and-find operation in computers, rapidly scanning for a given sequence in memory, provides an essential model for studying the cognitive functions of the brain. And even in political systems, the trial and feedback inherent in democracy and in competitive markets have proved more effective than the promises of authoritarian planning.
The implications for the regulation of technology are clear: since our powers of prediction are so limited we would do better to build policy on resilience rather than on attempted prediction. Resilience means being alert to early warnings and responding rapidly. Ironically, such humility about our foresight seems to come more easily to the “elitist” scientists than to more egalitarian planners.
Whatever the arguments for or against a technology, if it improves the production of goods that we need or want, and without unacceptable costs or side effects, it seems virtually impossible for a society to resist it in the long run, though ideological forces may hold it back for some time. The reason is that the pressure for efficiency has been built deeply into our natures, and our economies, by their Darwinian mechanisms of biological and cultural evolution. For example, the use of bovine somatotropin to improve milk production is encountering resistance because of the resulting social dislocations. But it is unlikely that it can be halted for long, any more than the development of margarine could be prevented, though originally it was intensely opposed by dairy farmers.
ERROR
Errors are unquestionably much more frequent than fraud as a source of misleading information. Many of the causes are not preventable: the difficulties of the problems, the inherent imperfection of the scientific method, and the connection between enthusiasm and self-deception. More preventable, and unfortunately frequent, causes are carelessness and haste—in a word, sloppiness. A report on The Responsible Conduct of Research in the Health Sciences by the Institute of Medicine of the National Academy of Sciences recognized that low standards (“sloppiness”) are quantitatively a much greater problem than fraud. It recommended that our research institutions develop more formal mechanisms for raising standards. But it also recommended greater involvement of the government in this area, which raises serious new questions.
It is often said that science is self-policing and corrects misinformation, whether due to error or to fraud. But this is true only for important findings. The large majority of reports are not significant enough for anyone to build on, and so the innumerable errors that they contain go undetected and uncorrected. But if they do not mislead anyone they cause little loss, beyond the waste inherent in any sterile publication.
When errors are detected, if they undermine significant conclusions of a paper few would question that they should be corrected. But otherwise the judgments of investigators about whether or not to publish corrections vary widely, ranging from pedantic concern with minutiae to concern only with errors that might grossly mislead others. Indeed, if everyone did try to correct every detected error editors would probably have to set boundaries to the use of space for this purpose.
Variations in response arise not only from the individual personalities but also from their fields of research. Fields with a straightforward, rigorous methodology, such as structural organic chemistry, customarily provide unequivocal information on which others build, and any detected error would have to be corrected. But investigators in less mature or more complex fields may flounder for years before they finally solve a problem. That solution may then wipe out a large earlier literature, without requiring correction. My own experience with two different research programs emphasized this difference between “straight” and “messy” fields. In one program I used bacterial mutants to identify the successive compounds (intermediates) in various biosynthetic pathways, and nothing in those publications would need correction today. My other main program, on the mechanism of action of streptomycin, was altogether different. Studies in many laboratories, over more than 25 years, revealed changes in almost every activity of the treated bacterial cells, and this multiplicity of effects could not be integrated into a convincing mechanism. When they were finally sorted out and fitted into a coherent pattern (3) many of the earlier observations could be recognized as erroneous. But they did not have to be retracted—the correct answer had cleaned the slate.
FRAUD
Scientists have particularly strong reason to wish to deter fraud: it wastes their time, betrays their trust, violates a major foundation of science, and destroys public confidence. But they have not generally developed formal institutional mechanisms for dealing with it. They have been, and will continue to be, trusting because their experience tells them that fraud has been too infrequent to have much effect on the course of science. Moreover, the importance of collegiality in science inevitably makes scientists reluctant to invoke misbehavior as a cause rather than honest error; even a hint of possible fraud is extremely damaging to a scientist's reputation, and excessive suspicion would impose a chilling atmosphere on the community. A deep problem in the misconduct issue is the relative cost of this bias compared with that of eliminating it in a more severe and effective system.
Because there are so many sources of error and of uncontrolled variation, the data alone rarely provide a basis for establishing fraud, or even for suspecting it, unless they are egregiously improbable (for example, exceeding what could have been accomplished in the available time and facilities). In most established cases the cheating has been directly observed or it has been confessed when others could not reproduce the results.
Fraud has a long history in science, with some celebrated cases, and we should distinguish several classes. That involving the spectacular discoveries, often in leading laboratories, is the most serious kind, because it is by far the most likely to set other investigators on a false trail. However, by that very token it is likely to be detected. Those who take this route to temporary and costly fame have often been psychopathic (or perhaps always, as a matter of definition), and they often later confessed when faced with conflicting evidence. Fortunately, such cases are rare. Because of their psychological origins it seems doubtful that changes in the climate of research or in the mechanisms used to combat fraud can have much influence on their frequency. Dr. John Edsall, a Harvard biologist, testifying in the first Dingell hearing, described such a case in the laboratory of a most distinguished biochemist (Fritz Lipmann) at the Rockefeller University. “The person who committed the fraud was evidently a very skillful and extraordinarily talented writer of imaginary experiments. He prepared notebooks that were so beautiful that they looked extremely convincing. And the head of the laboratory and the referees of the paper that was prepared on the basis of this notebook all approved it.” We might keep this experience in mind in considering recent suggestions that the government could help curb fraud by specifying how laboratory data should be recorded.
In a sad and probably more common variant, based on a temporary mental aberration under stress, honest self-deception has led to findings that have aroused favorable attention but have then become controversial. The embarrassed investigator then finds it too damaging to withdraw the claim, and he hopes that if he defends it with further, fabricated evidence the interest of others in the contradictions will meanwhile blow over.
In another class of fraud the author is a rational crook who simply pads his bibliography with fabricated findings, most often not a whole paper, but only parts to bolster weak evidence or to fill in missing data. If this kind of con man selects findings that are not very novel and hence will not arouse too much interest, he is unlikely to be detected. For this reason this kind of fraud may well be the most frequent.
Another kind of fraud, the theft of ideas, does not pollute the literature with false findings, but it is costly and painful for the victim, it distorts a meritocratic system of rewards, and it damages the collegial fabric of the community. In a common pattern, a competitor learns of an interesting finding, quickly repeats it, and publishes it as an independent discovery.
The plagiarizer may learn of the discovery by hearing an oral report at a scientific meeting before publication, or through an informal network, or—in a grave abuse of responsibility—as a referee of a manuscript or a grant proposal. But not only is it hard to establish evidence for this kind of plagiarism, there is also the danger that an apparently incriminating sequence may have an innocent psychological explanation. It is not rare for an idea picked up from a colleague to be initially forgotten, stored in the recesses of the mind, and then unwittingly resurrected as one's own.
In a cruder kind of plagiarism an author simply fails to credit work whose citation would have seriously undermined his claim for originality. There is no sharp line here between plagiarism and excessive reluctance to acknowledge earlier work (exacerbated by editorial pressure for condensation). Scientists discuss this problem much more than fraud. When a new journal issue arrives, probably most scientists scan it for papers closest to their interests, look to see whether they have been cited, and often decide that the credit really was not quite adequate—but they then shrug their shoulders. Others, however, chronically burden their friends, and sometimes editors, with complaints.
Because this problem is so universal most research journals (other than those that carry science news) will accept only corrections, submitted by the original author, but not complaints by an accuser who has not received satisfaction. This policy protects the editor against a flood of grievances or the burden of judging the significance of the complaint; but it is not necessarily the best solution. (Some time ago, the news publication of the American Society for Microbiology [ASM News] published a letter from a bacteriologist noting that he had characterized the beta-hemolysin of group A streptococci and that a journal of the society had now published an article that used identical procedures and obtained identical results with the hemolysin of the closely related group B streptococci, but without mentioning the earlier work. The accuser had been unable to convince the senior scientist on the later paper to correct his student's distorted presentation, and he then appealed to the editor of the journal and to the publications board, council policy committee, and president of the society. All agreed that he had a justifiable complaint but that the society should not open its gates to expressions of grievances. Several other microbiologists and I then published letters in ASM News protesting this assumption of helplessness. The next year the council policy committee established mechanisms for publishing sufficiently strong complaints.)
Because “whistleblowers,” who know the problem from the inside, frequently play an essential role in revealing kinds of fraud that are hard to detect, the recent concern over fraud has performed a useful service in making us aware of the need to protect them. At the same time, any academic administrator who deals with this problem is familiar with another side. Many complaints arise from personal grievances, sometimes accompanied by a disturbed emotional state. Clearly it is essential to examine the specifics of each case without prejudice, and to try to protect the rights of both parties. A news article in Science (14) on the experience of universities in dealing with misconduct cites the experience of C. K. Gunsalus, Associate Vice-Chancellor for research at the University of Illinois. She estimated that 20 to 40 students or researchers had approached her per year with complaints, mostly involving personality conflicts. Three to five times a year the allegations led to a formal inquiry, and in the four years since the program was set up they led to four formal investigations into scientific misconduct.
Finally, I would suggest another kind of fraud that has not generally been classified as such: enforcement of false conclusions on the scientific community by agencies outside its usual mechanisms for correction. The outstanding example is the gross distortion of genetics in the Soviet Union, by Lysenko, on ideological grounds. But it takes only a small stretch to see a parallel in our country. If a paper is valid and if attack by a Congressional committee or other government agencies forces its unjustifiable retraction, the scientific community has been deprived of a valuable commodity and the many millions of dollars that went into the process have been wasted. Since such enforcement does not obviously involve intentional deceit it might not fit the legal definition of fraud, but it might fit the broader term misconduct.
IDEALS AND REALITY
Changes in the funding of science have plunged scientists increasingly into political activities, quite different from their struggle to wrest secrets from nature. The monks have become more worldly. Moreover, the expansion of research has probably drawn in a larger proportion of persons for whom it is a pleasant way to make a living, rather than an irresistible calling. In recent years the explosion of biotechnology has revised the formerly prevailing snobbish attitude of academic biomedical scientists toward industry. The shift has accelerated the translation of discovery into benefits for society, and for some scientists it has provided financial rewards far beyond the academic range. But this development has also created serious conflicts of interest. That problem has many still unsettled facets.
Virtually every scientist must share the pride that I have described. It is difficult to convey this point adequately. When I hear an outstanding investigator at a national meeting reviewing the remarkable advances in one area or another or see a student making an excellent presentation of original work, I feel the same pleasure as at a wonderful theatrical performance. But another side of science has become increasingly conspicuous. The curiosity that motivated the amateurs of a few centuries ago is still there, but it has been modified by the goal of making discoveries that advance a professional career and by the constant search for funding. Because journalists and legislators are exposed more to these activities than to those in the laboratory, they are likely to develop a cynical picture of scientists as just another special interest group.
The rapid changes in modern science notwithstanding, the fundamental goals and style remain the same. As an emeritus, my perceptions of attitudes may not be fully up-to-date. Despite the antiestablishment views of many commentators from the social sciences and journalism, the practices that I see still fit Robert Merton's articulation of the scientific enterprise, in which scientists “internalize” a special set of norms. My son, now a postdoctoral student, and the students in my department seem to have the same spirit that I first encountered in science 60 years ago. Science is beautiful, and the interests that lead scientists to defend its fundamental traditions coincide with the interests of society, more than is apparent from the present wave of public ambivalence.
Footnotes
The views expressed in this Commentary do not necessarily reflect the views of the journal or of ASM.
REFERENCES
- 1. Davis B D. The moralistic fallacy. Nature. 1978;272:390. doi: 10.1038/272390a0.
- 2. Davis B D. Storm over biology. Buffalo, N.Y: Prometheus Books; 1986. pp. 30–33.
- 3. Davis B D. Mechanism of bactericidal action of aminoglycosides. Microbiol Rev. 1987;51:341–350. doi: 10.1128/mr.51.3.341-350.1987.
- 4. Davis B D. Molecular genetics, microbiology, and prehistory. BioEssays. 1988;9:122–130. doi: 10.1002/bies.950090407.
- 5. Gore A. Planning a new biotechnology policy. Harvard J Law Technol. 1991;5:19–30.
- 6. Grove J W. In defence of science. Toronto, Canada: University of Toronto Press; 1989.
- 7. Kuhn T. The structure of scientific revolutions. Chicago, Ill: University of Chicago Press; 1962.
- 8. Levi-Montalcini R. In praise of imperfection. New York, N.Y: Basic Books, Inc.; 1988.
- 9. Lewis G N. The anatomy of science. New Haven, Conn: Yale University Press; 1962.
- 10. Medawar P B. Induction and intuition in scientific thought. Philadelphia, Pa: American Philosophical Society; 1969.
- 11. Popper K R. The logic of scientific discovery. London, England: Hutchinson; 1959.
- 12. Russell B. A history of western philosophy. New York, N.Y: Simon and Schuster; 1945.
- 13. Snow C P. The two cultures and the scientific revolution. London, England: Cambridge University Press; 1959.
- 14. Taubes G. Misconduct: views from the trenches. Science. 1993;261:1108–1111. doi: 10.1126/science.8356443.