In ‘Where the Conflict Really Lies: Science, Religion, and Naturalism,’ Alvin Plantinga argues that naturalism excludes any means of validating our cognitive faculties. In a nutshell, Plantinga argues that if the reliability of our cognitive faculties is in question, one cannot answer the question by pointing out that these faculties themselves deliver the belief that they are reliable; one needs more, namely a good, independent reason to believe our cognitive faculties are reliable. Crudely, Plantinga criticizes empiricists / naturalists for failing to provide a logically satisfactory argument for asserting that our cognitive faculties are reliable.
Plantinga’s argument, though, does not immediately commend itself to acceptance: Essentially, the empiricist / naturalist must provide an argument for the foundational reliability of our cognitive faculties only if she first accepts a foundationalist epistemology. However, empiricists / naturalists need not accept a foundationalist epistemology. Indeed, the empiricist / naturalist should instead reject the premise that knowledge requires an Archimedean foundation. (I guess Plantinga could assert that the empiricist / naturalist is somehow committed to a foundationalist epistemology, but I would like to see the argument for that. In any case, I have little confidence the argument would work.)
Rather, following Hasok Chang (epistemic iteration), C.S. Peirce (pragmatism) or W.V.O. Quine (coherentism), the empiricist / naturalist can take other routes. Though I have significant misgivings about coherentism, it remains a viable option. A more promising route, I believe, is Chang’s idea of epistemic iteration, a thoroughly proper empiricist epistemology (situated within a largely Peircean pragmatist framework). To see this, let us look at Chang’s analysis of the historical problem of the reliability of thermometry in early- and mid-nineteenth-century science. Though crude and without the requisite scholarly detail, the synopsis should suffice to give the rough view.
The key assumption behind thermometers is that mercury (or any other thermometric fluid) expands uniformly (linearly) with increasing temperature. But of course we construct the thermometer precisely in order to provide quantifiable temperature values. We could provide initial temperature values for the calibration of any given thermometer with some other thermometer, but how do we know that the prior thermometer provides reliable temperature values? The justification, then, is circular, and so it seems we simply cannot know that our thermometric instruments are reliable. And yet we can reliably measure temperature values; we do it all the time. Nor did we dissolve the circularity with a foundationalism of any sort.
Epistemic iteration is a method wherein one proceeds through successive stages of knowledge, each building on the preceding one, in order to better fulfil certain epistemic goals, such as precision, consistency, prediction and retrodiction, problem-solving power, simplicity, etc. No recourse is made to indubitable or self-evident truths, or to such things as properly basic beliefs. We use our thermometers even though we have good reason to believe they are not reliable in the way we want them to be and, through processes of calibration via successive measurements in similar experimental arrangements, establish consistently obtained temperature ranges. We use these temperature values to construct more precise thermometric instruments and, through simplifying idealizations such as perfect gases, absolute temperatures, etc., we create broadly theoretical temperature scales, and so forth. Even the notion of uniform expansion is a tentative, explanatory hypothesis which admits of testing via our improved thermometers (and their theoretical implications). (For a wonderful case study of this process, see Percy Bridgman’s work on high-pressure physics. He constructed instruments which allowed him to surpass previously attainable pressures and had to establish new ways of measuring the pressure and the properties of matter under those conditions.)
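The iterative structure can be caricatured in a few lines of code. This is my own toy illustration, not Chang’s formalism: the distortion model and the ‘method of mixtures’ consistency check are assumptions chosen for the sketch. The point is only that each stage builds a corrected successor out of a discrepancy the current instrument itself reveals, without ever consulting the ‘true’ temperature directly.

```python
# Toy sketch of epistemic iteration (my illustration, not Chang's formalism).
# Each "instrument" distorts the temperature it measures; no stage consults
# the true value. Instead, each stage applies a repeatable consistency test
# (the method of mixtures: equal parts of freezing- and boiling-point water
# should read 50) and uses the discrepancy to build a better successor.

def reading(true_temp, distortion):
    """An imperfect instrument: exact at the fixed points 0 and 100,
    distorted in between (the worry about non-uniform expansion)."""
    t = true_temp / 100.0
    return true_temp + distortion * t * (1.0 - t) * 100.0

distortion = 0.3          # the crude starting instrument
errors = []
for stage in range(6):
    # Consistency check: what does this instrument say a 50-degree
    # mixture reads? (A repeatable operation, not an appeal to hidden truth.)
    discrepancy = reading(50.0, distortion) - 50.0
    errors.append(abs(discrepancy))
    # Build the next instrument by partially correcting that discrepancy.
    distortion -= (discrepancy / 25.0) * 0.5

print(errors)  # the self-revealed error shrinks: 7.5, 3.75, 1.875, ...
```

No stage is foundational; the sequence is judged by whether successive stages agree with one another better than their predecessors did, which is just the iterative improvement described above.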
Think of it in other terms: A nearly blind man cannot see that the physical objects in his room are variously colored. He puts on glasses which permit him to see colors. He notices that some of the objects seem to change colors under various conditions. He theorizes about how this could be, frames some explanatory hypotheses, and begins to test them. He suspects his glasses are not as reliable as he would like them to be. So, he uses the glasses to make new glasses, and he observes that the colors are less erratic, more consistent, and seem to begin to form identifiable patterns. With his new glasses, he also sees that his first pair of glasses was scratched in certain ways which could account for the haphazard color experiences. He proceeds to improve his glasses at each successive stage, all the while cataloguing what works and what does not, keeping the former and proceeding.
In the same way that thermometers are instruments, our sense organs are also instruments; likewise for our cognitive faculties. The progressive trial-and-error method of working through our problems with temperature is analogous to the way in which we proceed in our broad physical interactions with the external world. Our brain develops heuristics and epistemic rules of thumb and constructs hypotheses which are tested against sensory stimuli. I would further argue that the logic we use in our evidential frameworks is also instrumental, and it too must admit of broadly empirical support / considerations. (In this way I would say we ought to reject so-called classical logic and adopt a relevance-intuitionistic logic, but that is for another post entirely.) In a nutshell, we form hypotheses about the external world, receive the impoverished bits and pieces of stimuli, organize and arrange the data, locate patterns, formulate general rules (which we precisify, test, and generalize as the process proceeds), test our conjectures amongst their competitors, record and store what is useful, record and store (sometimes discard) what is not, and begin again.
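That hypothesise-test-retain loop can likewise be caricatured in code. Again a toy illustration of my own: the hidden regularity, the noise model, and the competing conjectures are invented for the sketch. It shows only conjectures scored against one another by predictive success on noisy stimuli, with the winner retained for the next round.

```python
import random

random.seed(42)

def stimulus(x):
    # Impoverished access to a hidden regularity (here 2x + 1) plus noise;
    # the loop never sees the regularity itself, only these readings.
    return 2 * x + 1 + random.gauss(0.0, 0.5)

# Competing conjectures about the regularity, tested amongst one another
# rather than against some certified foundation.
hypotheses = {
    "double-plus-one": lambda x: 2 * x + 1,
    "triple":          lambda x: 3 * x,
    "constant-five":   lambda x: 5.0,
}

data = [(x, stimulus(x)) for x in range(10)]   # gather stimuli
scores = {
    name: sum((h(x) - y) ** 2 for x, y in data)  # squared prediction error
    for name, h in hypotheses.items()
}
best = min(scores, key=scores.get)
# 'best' is recorded and retained for the next round of precisifying and
# testing; its rivals are stored or discarded, and the loop begins again.
```

Nothing here escapes the circle, since the scoring itself uses the stimuli; the claim is only that the circle is productive, in the sense the next paragraph defends.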
In a sense a measure of circularity remains, but the circle is a virtuous one, or at least an innocuous one. It is no more problematic than the problem of naming landmarks and roads in a small town. A small town has one street and one bridge. When people stop to ask directions to a shoppe, they are told to take the road over the bridge and their destination will be on the left. When another street pops up, though, the two streets must be distinguished. So, the street which passes over the bridge is called ‘Bridge Street’ and the other ‘Grove Street’ (it runs past orange groves). When another bridge is constructed, the two bridges must be distinguished. The first is called ‘Bridge Street Bridge’ and the other ‘Grove Street Bridge’. In a sense, the names involved are all circular, but insofar as people get to and fro without difficulty, the system works. As further need for naming arises, we proceed in the usual way, making amendments as we go. If the system breaks down in a fundamental way, that is, if people become consistently misdirected, then we devise another nomenclature entirely.