Evolution through natural selection is an immensely powerful theory. It synthesizes the many and various fields within the biological sciences into a coherent, explanatory network. Its explanatory success has in turn led most biologists to regard the phyletic configurations of organisms as aggregates of individual adaptations. Adaptations are gradual, heritable alterations in a phenotype that provide a reproductive benefit to their possessor. That is, these gradual alterations are the result of environmental and sexual constraints and pressures acting on inherited gene structures.
However, the scientific standing of adaptationism has been called into question. For instance, Karl Popper, in his 1976 autobiography, claimed that evolution through natural selection “is not a testable scientific theory but a metaphysical research programme.” (Though, I should note, Popper did, by and large, consider evolution through natural selection a valid scientific endeavor.) Later, Stephen Jay Gould and Richard Lewontin famously made a similar charge in “The Spandrels of San Marco and the Panglossian Paradigm.” It is clear, then, that if adaptationist hypotheses (and, by extension, the received view of natural selection) are to remain scientifically viable, scientists must devise some way of testing them.
One answer to this challenge is that formal optimality and game-theoretic models provide an opportunity to test adaptationist hypotheses. For reasons of space, though, I will concentrate on optimality models and analyze the extent to which they test adaptationist hypotheses in any meaningful sense. In the course of the analysis, however, attention must be paid to the charge that optimality models suffer from a certain systemic flaw: perpetual revisability.
Formal models are an essential element of modern scientific research. They enable scientists and philosophers of science to make testable predictions within a circumscribed theoretical framework when direct observation or laboratory experimentation is impractical or impossible. In biology, formal optimality models are used to predict the phyletic structures of existing organisms, and thus to test the many competing, and at times complex and ambiguous, causal explanations of a given trait’s evolutionary history. In brief, the model analyzes an adaptationist hypothesis in much the same manner an engineer would analyze a technological problem.
The hypothesis in question is broken down into four essential parts: a fitness measure, heritability assumptions, a phenotype set, and a set of state equations, or algorithms. The fitness measure is the manner by which reproductive success is weighed, such as offspring quality and production or energy efficiency in locomotion. Heritability assumptions determine to what extent traits are passed on to successors. While prima facie these assumptions seem straightforward, the probabilistic factors at play in the model are rather complex: considerations of allelic variation, epistatic variation, and maternal and paternal dominance must be appropriately evaluated and accounted for. The phenotype set lays out the various alternative but possible design configurations within the model. These possible design schemes are layered; that is, upon the development of one design, another becomes more or less probable. In other words, an organism’s future phenotypic development is constrained by its phenotypic history. Lastly, the state equations are the set of algorithms that determine how the operative factors within the model play out. A minimal sketch of these four parts appears below.
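To make the four components concrete, consider the following toy sketch in Python. Every name and number here is a hypothetical illustration of the structure just described, not any published model; the generational update merely imitates the form of the quantitative-genetic breeder's equation.

```python
# A toy optimality model, illustrating its four essential parts.
# All values are hypothetical placeholders.

# 1. Phenotype set: the alternative, possible design configurations,
#    here candidate wing areas in square meters.
phenotype_set = [0.10, 0.15, 0.20, 0.25, 0.30]

# 2. Fitness measure: how reproductive success is weighed. This toy
#    measure trades flight benefit against rising energetic cost.
def fitness(wing_area):
    benefit = 10.0 * wing_area        # payoff from flight performance
    cost = 12.0 * wing_area ** 2      # energetic cost grows nonlinearly
    return benefit - cost

# 3. Heritability assumption: the fraction of selected variation
#    actually transmitted to the next generation.
heritability = 0.6

# 4. State equation: how the mean trait changes across generations,
#    moving toward the fitness-weighted mean of the phenotype set,
#    scaled by heritability (in the style of the breeder's equation).
def next_mean(mean_trait):
    weights = [max(fitness(p), 0.0) for p in phenotype_set]
    selected = sum(p * w for p, w in zip(phenotype_set, weights)) / sum(weights)
    return mean_trait + heritability * (selected - mean_trait)

# Iterating the state equation yields the model's predicted trait value.
trait = 0.10
for _ in range(50):
    trait = next_mean(trait)
print("Predicted wing area:", round(trait, 3))
```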
While the mathematical equations can be multifarious, their theoretical viability is secured by other, more rigorous sciences. For instance, if the flight ability of an osprey’s wings is being tested in the model, the state equations will draw upon the laws of aerodynamics, biomechanics, and muscle physiology to help determine the wings’ optimality.
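To illustrate, such a state equation might borrow the standard aerodynamic lift equation, L = ½ρv²SC_L, to screen out design configurations that could not keep the bird aloft. The sketch below is hypothetical; the mass, speed, and coefficient values are placeholders, not measured osprey data.

```python
def lift(air_density, velocity, wing_area, lift_coefficient):
    # Standard aerodynamic lift equation: L = 0.5 * rho * v^2 * S * C_L.
    return 0.5 * air_density * velocity ** 2 * wing_area * lift_coefficient

# Hypothetical inputs: sea-level air density (kg/m^3), cruising speed (m/s),
# a candidate wing area (m^2), and a lift coefficient for a cambered wing.
required_lift = 1.6 * 9.81                      # weight (N) of a ~1.6 kg bird
produced_lift = lift(1.225, 12.0, 0.30, 0.9)    # ~23.8 N at these placeholders

# The constraint fed into the optimality model: any wing design whose lift
# at cruising speed falls below the bird's weight is ruled out.
print("Viable design:", produced_lift >= required_lift)
```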
From the data inputs and the algorithms, the model provides predictions that biologists may test against extant organisms or particular phenotypic configurations. Ideally, when the predictions are confirmed by no positive instance, the hypothesis in question is falsified. Rarely, however, is the testing process so simple. As critics of the received view of natural selection are apt to point out, the failure of a model to make accurate predictions is often not regarded as a failure of the hypothesis. What the critics have noticed is that a modeler may modify some aspect of the model so as to make it conform to any future observation. The concern is not so much for the integrity of the modeler; rather, it is that the modeler may unwittingly allow his theoretical presuppositions to cloud his scientific judgment. For instance, he may adjust the probability factors in the heritability assumptions, or even revise the state equations, because his hypothesis must be correct. Worse, he may continuously modify his model and thus avoid falsifying his hypothesis altogether, which would not bode well for its scientific validity.
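Schematically, the test step amounts to checking whether observed trait values fall within some tolerance of the model's prediction. In this hypothetical sketch, the data and the tolerance are placeholders, and the closing comment marks exactly where the critics' worry enters.

```python
def confirms(predicted, observed_values, tolerance=0.05):
    # Count observations lying within a relative tolerance of the prediction.
    hits = sum(abs(obs - predicted) / predicted <= tolerance
               for obs in observed_values)
    return hits, len(observed_values)

# Hypothetical field data: measured wing areas (m^2) from sampled birds.
observed = [0.29, 0.31, 0.30, 0.24, 0.33]
hits, total = confirms(0.30, observed)
print(f"{hits}/{total} observations confirm the prediction")

# The critics' worry: a modeler facing a poor score could simply widen
# the tolerance (or adjust upstream parameters) until the data "fit".
```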
To a certain extent this worry is justified; one need only refer to Marxian historicism for a case in point (see Karl Popper’s The Poverty of Historicism). The concern is nevertheless much too simple. It seems to rely upon the belief that a single experiment (in our case, a single model output) is sufficient to falsify an entire hypothesis. While it is indeed possible for such a singular falsifying instance to obtain, such crucial experiments are exceptional. Scientific hypotheses are rarely open to complete falsification on their own; they exist within a wider theoretical framework that itself has innumerable logical entailments and empirical repercussions. This insight, made (though in different degrees) by Pierre Duhem and W. V. O. Quine, is known as the Duhem-Quine thesis. It need not, however, detract from the scientific validity of optimality models.
To the contrary, by modifying his model the modeler may be viewed as calibrating it, and thus refining its predictions for more accurate testing. For, if in the course of inputting the data into the model and deriving the predictions some defective bit of information was included (say it was derived from faulty generalizations, or new evidence warrants its revision), there must exist some procedure for its identification and subsequent correction. If the alterations can be made without significant theoretical modifications, then the model should by all lights be maintained. If, on the other hand, the modeler must call upon assumptions at odds with adaptationism, say assumptions of punctuated equilibrium, to conform his model to observational findings, it is then that the adaptationist program should be called into question.
Of course, noting that having to modify a model does not necessarily falsify the hypothesis, or the theoretical framework upon which it rests, is not to assert that a successful model prediction verifies the hypothesis. Finding positive instances of model predictions means only that the hypothesis is tentatively accepted. This tentative acceptance, however, should and often does grow in theoretical strength in proportion to the number of confirming instances: as the confirming instances of a hypothesis accumulate, so too should our acceptance of its utility, as the sketch below illustrates. To conclude, then, optimality models provide biologists opportunities to test the scientific validity of adaptationist hypotheses, even if the modeler makes numerous modifications to the model’s data set and, perhaps, even to its state equations.
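One standard way to formalize this graded acceptance, offered here only as an illustrative assumption rather than anything the optimality literature mandates, is Bayesian updating: each confirming instance raises confidence in the hypothesis, yet confidence never amounts to verification.

```python
def confidence(prior, p_given_h, p_given_not_h, n_confirmations):
    # Bayesian updating: revise belief in hypothesis H after n independent
    # confirming instances, each more likely under H than under not-H.
    p = prior
    for _ in range(n_confirmations):
        evidence = p_given_h * p + p_given_not_h * (1 - p)
        p = p_given_h * p / evidence
    return p

# Confidence grows with confirmations but never reaches certainty.
for n in (1, 5, 10):
    print(n, round(confidence(0.5, 0.8, 0.5, n), 3))
# -> 1 0.615, 5 0.913, 10 0.991
```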
For an excellent exposition of this debate (and of others concerning the philosophy of biology), see:
Sterelny, Kim, and Paul E. Griffiths. Sex and Death: An Introduction to Philosophy of Biology. Chicago: The University of Chicago Press, 1999.