The embarrassing lack of success of German epidemiologists in locating the source of a deadly strain of E. coli bacteria seems to be the latest in an ever-advancing, ever-widening front of “scientific” failures. Most of these failures stem from downright bad methodology. (Perhaps this doesn’t apply to German epidemiology, but we’ll see…) This methodology is so bad that I don’t think we should count the disciplines that employ it as sciences at all. These are not “scientific” failures so much as pseudoscientific comeuppances.
At the mention of bad scientific methodology, many will now expect me to start complaining about a lack of “rigour”, or too great a reliance on “mere speculation”, a lamentable “lack of proof”, an insufficiency of the “data” needed to “ground” theory, and all that sort of thing.
Well, those expectations are completely misplaced. It’s exactly the opposite. If you, dear reader, were expecting all those familiar-sounding complaints, you are probably guilty of the same assumptions that underlie the bad methodology I mean to criticize.
The main mistake is to think that science is wonderful because it’s certain. That could not be more wrong. We never have certainty, and science is wonderful not because it’s even remotely certain, but because it’s penetrating. It draws back the curtain on the nature of reality by giving us explanations, telling us what the real world is made of, and how it works. Understandably enough, science buys these powers of penetration at a price: it’s often very unsure. It has to be unsure, because it’s guesswork. And it has to be guesswork, because scientific theories describe things we can’t see directly, such as electrons, force fields, and viruses. The best we can do is guess at the nature of things we can’t see directly, and then indirectly check our guesses by looking at what we can see directly.
The method of genuine science is thus guesswork combined with testing. Our guesses are tested by checking whether the observable things the guesses say should occur are in fact observed. The fancy word for a guess is ‘hypothesis’, and the logic of testing a hypothesis is well understood. A hypothesis passes a test if one of its observational consequences (i.e. something it predicts will be observed) turns out to be true. It fails a test if one of its predictions turns out to be false. The more tests a hypothesis passes, the better reason we have to think it’s true, but it never becomes anything better than an educated guess.
People who yearn for certainty don’t like that. It makes them uncomfortable, because they think there shouldn’t be any guesswork in science, and they don’t like the idea that scientists are “to blame” for guessing wrongly. In their flight from uncertainty and culpability, these people wrongly suppose that science follows a rigorous methodology in which “data” are collected first, and hypotheses are then somehow distilled from them. In other words, they assume we start off with observations, and by diligently following a plodding, painting-by-numbers sort of method, we arrive at theory. Hence their use of phrases like “theory is supported by data” or “hypotheses rest upon observations” — meaning that theories/hypotheses are implied by data. But that is wrong. In genuine science, theories/hypotheses are tested against actual observations because they imply observations that can be made, rather than being implied by observations that have already been made. These people have things backwards. Backasswards.
Underlying this error is a more widespread philosophical error called “foundationalism”. Foundationalism is the idea that empirical knowledge rests on (i.e. is implied by) a “foundation” of more secure beliefs, in much the same way as mathematical theorems rest on (i.e. are implied by) axioms. Typically, these foundational beliefs are supposed to be beliefs about the qualities of our own conscious experience. It’s too long a story to relate here, but foundationalism is just plain wrong.
In many supposedly “scientific” disciplines — climate science, for example — “data” are collected in the hope that they can be shown to imply theory. Where there are no actual data, “proxy data” (often called “proxies”) are cooked up (i.e. dishonestly conjured up) in the hope that these will work instead to “fill in the holes” of the imagined “foundation” for the theory. The climate record of the past — consisting mostly of these fake “proxies” — is supposed to tell us how the future climate will unfold. This is like listening to the first movement of a symphony in the hope that it will tell you how the second movement will go. Actually, it’s worse than that: it’s like listening to a classroom of Alfred E. Neumans playing their recorders, in the hope that it will enable you to write the second movement of Mozart’s Clarinet Concerto.
You can tell how hopelessly confused this methodology is from claims that scientists are “90% certain” of this or that, as if the theory told them so. But no scientific theory ever tells us how much we ought to believe anything.
Or again, in medicine — where patients, doctors and researchers are especially uneasy about uncertainty and fearful of guessing — there is a thriving industry of statistical “studies”, conducted to record correlations that are then used as a basis for extrapolation. But this is again wholly misguided, as the endless succession of contradictory “studies” reported in newspapers illustrates.
What epidemiologists should be doing — and I suspect have not been doing, but we’ll see, won’t we? — is guessing what might be the source of recent E. coli infections. Then they should test each guess to eliminate it from their inquiries, if it can be eliminated. If a guess can’t be eliminated like that, in effect it passes a test, and it begins to look more like a “suspect”. To avoid guesswork by diligently noting correlations and extrapolating is to write a recipe for more death.
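The elimination procedure just described can be sketched as code. Again, this is purely illustrative: the candidate foods and test results are invented for the example, not taken from the actual German investigation.

```python
# Hypothetical sketch of elimination-by-testing.
# Candidate sources and test outcomes below are invented for illustration.

def eliminate(candidates, refutes):
    """Keep only the guesses the tests fail to refute.

    candidates: list of guessed sources
    refutes:    function returning True if a test has refuted a candidate
    """
    return [c for c in candidates if not refutes(c)]

guessed_sources = ["cucumbers", "bean sprouts", "lettuce"]

# Suppose lab cultures on samples came back negative for two candidates:
refuted_by_tests = {"cucumbers", "lettuce"}

suspects = eliminate(guessed_sources, lambda c: c in refuted_by_tests)
print(suspects)  # ['bean sprouts'] — the surviving suspect, still only a guess
```

A guess that survives elimination is not thereby proven; it has merely passed a test, exactly as the essay says, and remains open to refutation by the next one.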