We get empirical reasons for believing scientific theories when they pass tests. A theory is a collection of hypotheses or guesses, and an individual hypothesis is tested in the following way.
Together with some other assumptions, the hypothesis logically implies something that can be seen directly – typically something that no one has noticed yet.
So testing begins with the hypothesis yielding an observable prediction. But this prediction is nothing like the speculative futurology of science-fiction writers or of “what if” historians – scientific hypotheses don’t predict grand, un-checkable scenarios such as “the destruction of civilization within 100 years”. Instead, they yield something quite specific and checkable, like “the solution will turn blue” or “the needle will point to the 5”.
If an observation is subsequently made, and it is found that the solution does indeed turn blue, or that the needle does indeed point to the 5, as predicted, the hypothesis passes the test. It has made it over a “hurdle”. The “higher” the hurdle seems to be – in other words, the more of a “weird coincidence” it would seem to be if the hypothesis made it over the hurdle even though it was false – the stronger the reason the test gives us for believing the hypothesis is actually true. But this method never gives us numbers that “measure our confidence”, or anything of that sort.
Note the pattern: the hypothesis is a guess that describes things that can’t be seen directly; it is tested after an observable consequence is computed; if it passes the test, we get a reason to believe that the hypothesis itself is true. But it’s never a particularly compelling reason, as a hypothesis always remains a guess.
The above pattern is called the hypothetico-deductive method (because what we hope to observe is deduced from an initial guess or hypothesis). It never pretends to yield certainty, and it honestly admits that creativity and guesswork are an essential part of the process. According to this view, science is an epistemically risky business. Its strength is that it can reveal the hidden structure of reality, not that we can safely bet our life savings on it.
The opposing view reverses the roles of data and theory described above. Instead of the theory implying observations and then being indirectly corroborated by actual data, this alternative view supposes that the theory is “based on” data – in other words, that the “data” imply the theory.
This view is called inductivism because it gives a central role to the form of empirical reasoning known as induction. The best way to understand induction is with some simple examples. We reason inductively when we jump from “all of the swans I’ve seen so far have been white” to “all swans are white” or from “all of the emeralds I’ve seen so far have been green” to “all emeralds are green”. Induction is essentially generalization from a limited number of observed instances.
At first glance, it might seem as though no guesswork at all is involved in induction, because there seems to be no “creative input” in reaching its “conclusion”. But really, the guesswork is just completely unimaginative. The simplest and most general hypothesis is generated in a mechanical way by extrapolating from the initial ingredients. Although it would be unfair to say induction “dishonestly hides” its guesswork, many people get a false sense of security by overlooking the fact that it is guesswork.
A problem immediately presents itself: the only sort of theory that could be “reached” by this method is a generalization about observable things such as swans and emeralds. The most interesting branches of science talk about electrons, viruses, black holes, etc. – strange and often apparently magical things that cannot be seen directly. So inductivists tend to view science as a rather unmagical, “superficial” enterprise – a mere “instrument” whose purpose is not to explain the inner workings of the world but to “organize” human experience, to predict how future observations will unfold given past observations, and so on. This will seem a bit fishy to anyone who has grasped the cunning of a good scientific explanation, and who remembers the marvellous feeling of “the key turning in the lock”.
The inductivist would defend his view by claiming that it is a virtue rather than a failing of science that it doesn’t “stick its neck out” by attempting to describe the hidden structure of reality. Instead, it delivers claims that we can confidently believe to be true. Scientific laws, after all, are generalizations, and generalizations are the very goods that induction delivers.
This apparent strength of induction is actually its greatest weakness. Although induction might deliver the occasional “phenomenological” law (one that describes regular observable phenomena), that’s pretty much the only thing it can deliver! That is because induction is reliable only where a law-like connection underwrites it – and it must be reliable if it is to give us any reason to believe its deliverances. Consider the example of white swans. If I generalize from “all of the swans I’ve seen so far have been white” to “all swans are white”, I make a mistake, because some swans are black. Being white is not an essential aspect of being a swan. In other words, there is no law-like connection between being a swan and being white. By contrast, being green is an essential part of being an emerald. So I can reliably generalize from the green emeralds I’ve seen so far to infer, correctly, that all emeralds are green. This second induction is reliable because the property I’m extrapolating and the class I’m generalizing to are connected in a law-like way. All of the familiar examples of reliable induction rely on a law-like connection of that sort. Take, for example, “the Sun has risen every day of my life so far, therefore it will rise every day”. This is reliable because of the regular rotation of the Earth. If the Earth did not rotate with the law-like regularity of its conserved angular momentum, the induction would be untrustworthy.
In climatology, the equivalent of “theories” are computer models, and the equivalent of their being based on “data” is that the models’ initial inputs are numbers supposedly drawn from the climate record of the past. The hope is that patterns of the past can be used as the starting-point for very sophisticated induction – far too complicated for the human mind to grasp – whose end-point is a description of future patterns.
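A deliberately crude sketch, in Python, of the inductive pattern just described: fit a trend to numbers from the past, then extrapolate it into the future. The data and the straight-line form here are invented purely for illustration – real climate models are vastly more elaborate – but the logical shape is the same: past record in, projected future out.

```python
# Toy illustration (not an actual climate model) of "induction by
# extrapolation": fit a straight line to past observations and then
# project it far beyond the data. All numbers below are made up.

def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Invented "past record": ten yearly values with a small upward trend.
years = list(range(2000, 2010))
values = [14.0 + 0.02 * (y - 2000) for y in years]

a, b = linear_fit(years, values)
prediction_2050 = a + b * 2050  # extrapolation far beyond the data
```

The extrapolation is only as trustworthy as the assumption that the fitted trend is law-like – which is exactly the assumption at issue.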
Let us pass quickly over the complaint that most of these “data” are not the product of observations at all in the usual sense of the word, but are so-called proxies – in other words, they are themselves the non-observational product of further theory. Climatologists assume that the whole operation needs “proxies” at the bottom of a foundational structure, because they are so firmly in the grip of the idea that “theory is based on data”. If they can’t get actual data, “proxies” are the next best thing.
Another complaint is that the induction described above could only give us a reason to believe its conclusions if there were law-like connections between past climate patterns and future climate patterns. A few moments’ reflection reveals that any such connections are bound to be extremely weak, because the climate is extremely complicated.
Bear in mind that law-like connections are generally simple rather than complicated, because laws connect classes of natural kinds (such as green things and emeralds). Scientific laws are nearly all strikingly simple, as well as very general. They apply to classes whose edges are not at all fuzzy. But climate is almost literally a matter of mists and fogs. Idiosyncratic detail is everywhere. The variables involved are innumerable.
The climate may also be literally chaotic. In physics, a chaotic system is one that depends in a very sensitive way on its initial conditions. Tiny differences in initial conditions can lead to very large differences in the way the system unfolds over the course of time, making such systems in effect unpredictable in the long run. Some of them – such as the compound pendulum shown below – are very simple. The climate is unimaginably more complicated.
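The point about sensitivity can be made concrete with a toy system. The sketch below uses the logistic map – a standard textbook example of chaos, chosen only for brevity and having nothing to do with climate physics specifically. Two trajectories starting one part in ten billion apart soon bear no resemblance to one another.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x). With r = 4.0 the map is in its chaotic
# regime; the starting value and perturbation are chosen arbitrarily.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturbed by one part in ten billion

gap_start = abs(a[1] - b[1])
max_gap = max(abs(x - y) for x, y in zip(a, b))
# gap_start is tiny, yet within a few dozen steps the two trajectories
# diverge to order one and become effectively unrelated.
```

If a one-line system can defeat long-range prediction this thoroughly, a system with innumerable coupled variables is hardly going to do better.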
Computer modelling is, of course, a useful and powerful tool in some branches of science. For example, aeronautical engineers use computer models to predict the pattern of metal fatigue in airframes. But airframes are man-made, to precise specifications, out of materials whose properties are very well known. These materials behave in law-like ways, as required. The climate is something completely different.