Edmund Burke and the Irish Constitution

Some unflattering things have to be said about the Irish Constitution. It’s mostly the work of a single mind — that of Éamon de Valera — rather than something hammered out through prolonged discussion between people with conflicting interests. And the Irish Constitution is too long. In both respects it can be contrasted with the US Constitution, which is small enough to fit comfortably inside a shirt pocket and which, centuries after it was written, is still read and understood by ordinary people with no legal training.

Not so the Irish Constitution. Perhaps the worst consequence of its excessive length and its author’s idealistic and exhaustive vision for Ireland is that people tend to think of it as a foundation for the nation’s laws rather than a constraint on them. The feeling many Irish people have when they try to read the Irish Constitution is that it is strictly for law scholars. It’s “not for the likes of us”: and with that attitude, ordinary Irish people abdicate their democratic entitlement and obligation to judge for themselves.

The Irish have a habit of distrusting politicians, and thus of being reluctant to give them too much power. On the face of it, that seems a healthy habit. But it’s a two-edged sword. It means that other people are given powers that elected representatives properly should have, and in fact need to have if they are to become better elected representatives. For example, in the early 1950s Noël Browne’s “Mother and Child Scheme” was effectively derailed by the churches (plural) and doctors. In other words, proper democratic process was usurped by professionals whose “moral expertise” was supposed to be greater than that of ordinary people or their representatives. I think history is repeating itself, this time with legal scholars wearing the mantle of expertise formerly worn by priests and doctors.

In opposing the Mother and Child Scheme, doctors of the 1950s disguised self-interest as a worthier concern: the fear that they would become agents of the state practising “socialized medicine”. That was an appeal to a sort of “separation of powers”. It seems to me that legal scholars of today are similarly disguising self-interest by appealing to the same thing. It’s all the more insidious because the separation of powers in the present context is something that must be maintained, up to a point, as a matter of balance. But it is not for legal scholars to decide where that balance is to be drawn.

If elected representatives are untrustworthy, that is partly the fault of the people who elect them. But Irish people have a habit of blaming the representatives, exonerating themselves despite having voted for them, and handing the reins of power to others “we can look up to”.

I think that’s a very bad habit. No one can claim greater moral expertise by belonging to any particular profession, or by having received any special sort of training. To make my case, I’ll explore a few ideas of Edmund Burke, a very different sort of thinker from Éamon de Valera.

 

Foundationalism versus pragmatism

Consider the idea that a written constitution is a foundation for the nation’s laws. Basic “axioms” are necessary in mathematics, each branch of which has a “foundational” structure, with theorems based on (i.e. derived from) axioms. As a former maths teacher, de Valera may have assumed that anything “rigorous” must have a structure like a building resting on secure basic foundations. Alas, many people share that assumption — Descartes made it the guiding assumption of modern philosophy (unfortunately). But even Descartes knew that axioms are out of place in politics. Unlike maths, practical or empirical knowledge changes over time. New discoveries have to be accommodated, and old assumptions have to be rejected. New problems arise and new solutions are found to deal with them. All branches of practical or empirical knowledge are in effect “living, breathing” dynamic systems which need alteration in an ongoing, piecemeal way.

Edmund Burke recognized that political knowledge is dynamic like that. We might call him a political empiricist or a pragmatist, because he saw that politics is a matter of observation and correction, of getting things to work in practice in the messy real world. It is not a matter of getting everything to rest on “perfect” foundations. In the twentieth century, conservative thinkers such as WVO Quine and Michael Oakeshott developed this idea by making famous a striking analogy due to the philosopher of science Otto Neurath. In Neurath’s words, as quoted by Quine:

We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.

In Burke’s view, politics is a practice, and good politics is practised by people who are good at dealing with contingencies. We should think more like people who repair ships — or like car mechanics, or gardeners, say — than like mathematicians.

All of which means that political “foundations” — inasmuch as they exist at all — have an entirely different role from that of axioms in mathematics. To illustrate this, consider how Burke defended monarchy as the basis for a political system. He said it works in the same way as good “rootstock” enables apple-growers to graft branches of different apple varieties onto the main trunk, as required. Far from directing what grows at a higher level, the virtue of this sort of “foundation” is its flexibility: it allows a wide variety of different types of growth at a higher level. For example, a monarchy can provide stability and continuity when an aristocracy slowly transforms itself into a democracy, as it did in England.

 

Burke’s prescience

Burke was prescient in paying so much attention to change, and to the effects that change can have on complicated systems. He knew that good politics involves accommodating changes in an incremental way so as to forestall catastrophic change of the sort that occurs in bloody revolutions. It wasn’t until the twentieth century that historian of science Thomas Kuhn put the topic of revolutions at centre stage in philosophy. He wrote of scientific revolutions, of course, which unlike political revolutions are welcome because they are not literally bloody. (Priestley was no doubt very annoyed with Lavoisier for turning his idea of “dephlogisticated air” on its head, thereby starting the oxygen revolution, but it was Robespierre’s Assembly that literally detached Lavoisier’s head.) The mechanisms involved in both types of revolution are similar: stolidity makes for fragility, which leads to total collapse.

Burke wasn’t just prescient in thinking a lot about revolutions. Many of his other ideas had a strikingly twentieth-century flavour. His aesthetics involved quasi-biological male and female ideals (“beautiful” and “sublime”), reminiscent of recent theories of art in evolutionary psychology which appeal to sexual selection. Or again, we attribute the slogan “meaning is use” to the later Wittgenstein, but we could reasonably attribute something quite similar to Burke. Wittgenstein saw that the meaning of a word is not determined by what is written in a dictionary but by how people actually use the word; Burke saw that the “meaning” of a law or political principle is not determined by what is written in a statute book but by how it is used in practice.

 

Change

Let’s look at a few of these ideas, starting with change. To deal with new problems, political systems have to accommodate changed circumstances. “A state without the means of some change is without the means of its conservation.” To succeed at that, no part of a political system should be understood as being “written in stone”. Personally, I think Burke would have wrinkled his nose with distaste at any written constitution; but if such a constitution were already in place as an integral part of political procedure, I think he would have cautioned against attempts to get it right “once and for all”. We should expect to have to “fix it” or tinker with it from time to time. We have to deal with the changing circumstances of the present, and future generations will have to deal with the changing circumstances of the future.

This is analogous to making buildings earthquake-proof by giving them flexibility, and attaching them only loosely to ground that is known to move on occasion, instead of making them rigid and resting them firmly on supposedly fixed foundations. We can’t prevent earthquakes, but we can take steps to ensure that buildings can “ride them out”. Otherwise they are likely to fail catastrophically.

 

Burke the conservative

All this talk of Burke’s “pragmatic” approach to politics might give the impression that he was just a “Machiavellian” type who wanted to maintain the status quo rather than to promote justice and decency. But that would be quite wrong. He supported American independence. He was committed to making life better for Irish Catholics, and he prepared the way for Catholic emancipation. He spent much of his political life engaged in the impeachment of Warren Hastings, the corrupt Governor-General of Bengal, who presided over massacres and famine. Above all, Burke was passionately concerned for the security of ordinary people, knowing too well the seductive appeal of extremism and fundamentalism.

Burke was a conservative, but he was not especially “right wing”. He was a classical liberal, a Whig rather than a Tory. His most notable scholar and defender — Conor Cruise O’Brien — was a member of the left-leaning Irish Labour party. “The Cruiser” rarely wrote anything that did not bring Burke’s ideas to bear on current issues. Most of his newspaper articles explored deep tensions between basic values such as security, justice and freedom, rather than standard “right versus left” issues concerning fiscal policy or social welfare.

 

Holism

I mentioned above that Burke prefigured more recent philosophical thinkers such as Kuhn and Wittgenstein. He also had some interesting affinities with Quine. One of his insights had to do with the role of judgement in the growth of knowledge. He saw that we must not focus too narrowly on the thing we are trying to judge. Instead, we have to look at how it fits into a larger “whole” and judge how that larger whole works in practice. This idea — called “holism” — has become a commonplace in the philosophy of science. For example, consider Newton’s Law of Gravitation. It cannot be used, tested or judged on its own, in isolation from everything else. To be tested, it has to yield a prediction in concert with other laws (such as Newton’s three laws of motion) and further assumptions about time and space. So if something goes wrong, and the prediction turns out to be false, we cannot narrowly identify the Law of Gravitation as the “culprit”. Any of the other laws and assumptions that went into the prediction might have been wrong. Holism has innumerable profound ramifications, in science and politics, and arguably it is “the” philosophical idea of the twentieth century. As far as I know, Burke was the first to see its importance. For example, he said he never judged a system of government by appealing to abstract “principle” — instead, he looked at how the whole thing worked in practice.
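
To make the point concrete, here is a minimal sketch in Python (my own illustration, with rounded textbook values; nothing here comes from Burke or Quine) of how a checkable prediction only emerges when the Law of Gravitation is combined with further ingredients, namely Newton’s second law and assumed figures for the Earth’s mass and the Moon’s distance. If the predicted month came out wrong, any of those ingredients could be the culprit.

    import math

    # Ingredients that jointly yield the prediction; any one of them could be the "culprit":
    G = 6.674e-11        # gravitational constant, N m^2/kg^2 (a measured auxiliary assumption)
    M_earth = 5.972e24   # mass of the Earth, kg (another auxiliary assumption)
    r = 3.844e8          # mean Earth-Moon distance, m (another auxiliary assumption)

    # The Law of Gravitation plus the second law of motion (with a = v^2/r for a circular
    # orbit) together imply an orbital period: T = 2*pi*sqrt(r^3 / (G*M)).
    T = 2 * math.pi * math.sqrt(r**3 / (G * M_earth))
    print(f"Predicted lunar month: {T / 86400:.1f} days")   # about 27.5 days

    # The prediction is checkable (the observed sidereal month is about 27.3 days), but a
    # failed check would not, by itself, tell us which ingredient above was at fault.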

 

Taking account of circumstances

Burke detested appeals to basic or abstract principles in politics, for various reasons, perhaps the most important of which is that politics and matters of justice in general should always take account of circumstances:

Circumstances give in reality to every political principle its distinguishing colour and discriminating effect. The circumstances are what render every civil and political scheme beneficial or noxious to mankind.

Burke’s hostility to abstract principles and natural (i.e. non-legal) rights is similar to that of Jeremy Bentham, who described natural rights as “simple nonsense” and “imprescriptible” rights (i.e. rights that supposedly cannot be overridden or taken away by any legal system) as “rhetorical nonsense — nonsense upon stilts”. Both objected to the idea that people are entitled to something in complete abstraction from actual circumstances, and both objected to the cheap and misleading rhetorical appeal of what we nowadays call “human rights”. Burke heaped scorn on the concept of “rights” that was beginning to take hold of the popular imagination during the French Revolution, and which has since become ubiquitous. We have become so accustomed to appeals to abstract “rights” when talking about moral matters that a rejection of the concept can nowadays seem like a rejection of morality itself. But with Burke, the rejection of “rights” is a rejection of cheap moralistic sentimentalism, empty rhetoric, and insensitivity to circumstances.

Like abstract principles, written constitutions do not take account of circumstances. And therein lies part of a written constitution’s appeal: it can seem to have a crystalline purity that sets it apart from everyday legal wrangling over mundane details. That’s dangerous in two ways. First, it’s dangerous because it can give legal scholars the misleading impression that they are dealing with something more basic than or above the law — namely morality. Which they emphatically are not. Second, it’s dangerous because it gives others the vague but equally misleading impression that the constitution is as hard as diamonds in the sense of providing rigid guarantees of individual freedoms and entitlements.

 

Academics in the role of priests

Some will be baffled by my claim that it’s downright “dangerous” for legal scholars to imagine that they are dealing with morality rather than the law. What is the harm in a few professionals having an inflated sense of their own importance? — If it were just their own narcissistic fantasy, it wouldn’t matter much. The trouble is, others join in the folly. Once again, Burke points to why. “Man is a religious animal.” By our very nature, we tend to put a select few on pedestals and revere them as shamans or priests — especially those who speak a specialized jargon or who have a commanding knowledge of an arcane area of study. The heady combination of two assumptions — that legal scholars study “the basis of law”, and that “law is based on morality” — weaves a spell.

For illustration, consider Mary Robinson. In twenty years a terrible transformation occurred: a relatively modest left-leaning politician with liberal views turned into a self-important moralistic monster, incapable of referring to herself in less exalted terms than “The United Nations High Commissioner for Human Rights”. She would be an object of fun, a caricature from The Mikado, except that quite a lot of gullible, worshipful people seem happy to swallow it. They haven’t noticed that the United Nations is a political body that perhaps creates something like laws, but never anything like morality.

Burke saw a similar “politics of vanity” in the defenders of the French Revolution and their similar rhetoric of rights that made no distinction between law and morality. He watched traditional religious observance waning during the Enlightenment, only to be replaced by a secular “religion” whose “priests” were academics such as philosophers and economists.

Now Burke used the words ‘philosophers’ and ‘economists’ as terms of abuse, but he actually admired and was on friendly terms with some philosophers and economists, such as Hume and Adam Smith. Burke’s problem was that in revolutionary France, philosophers and economists were not safely indoors speculating and discussing their arcane theories among themselves, but instead were engaged in root-and-branch reform, re-building the entire political system from the ground up. Burke shared Hume’s distrust of what Hume called “the abstruse reasonings of philosophers”. In France, they were the basis of the new order.

Although his family background was Catholic, and it is quite likely that he was a practising Catholic himself, Burke belonged to a largely Protestant, English-speaking liberal tradition in philosophy. This tradition was anti-clerical, and it took pride in its plain-speaking approach, so different from the grandiose mannered Aristotelianism of scholastic philosophers like Thomas Aquinas. The leading figures of that liberal tradition — such as Hobbes, Locke, Hume, JS Mill — were not academics. Their anti-clericalism shaded into a hostility towards academic thinking in general, with its ornate and unclear language used in defence of orthodoxy. When Hobbes wrote sneeringly of “schoolmen”, he was not just thinking of medieval scholastics. And Burke shared his contempt. The hostility was mutual: many current academics seem not to “get” Burke, and even regard him as an “anti-intellectual”. (A friend suggests that most academics are “constitutionally incapable” of understanding him.)

Apart from mediocrities who regard him as “right wing” and therefore bad, the main reason why most current academics have difficulties with Burke is that they continue to assume the Cartesian “foundationalist” picture of knowledge. As Wittgenstein might say, “a picture holds them captive”.

 

Is a written constitution “as hard as diamonds”?

So much for the danger of legal scholars and other academics setting themselves up — and getting set up by others — as moral arbiters and purveyors of moral expertise. What about the second danger, of thinking that articles of a written constitution are “as hard as diamonds” in the sense of providing rigid guarantees of individual freedoms and entitlements?

The words in any written constitution are themselves impotent. Words never do anything on their own. They first have to be interpreted, hopefully by people of good faith. Then they have to be put into action, hopefully by agents who are well-intentioned. We can never avoid this reliance on good faith and good intentions. Yet the apparent crystalline purity of articles in a written constitution may suggest that this reliance can be circumvented. It cannot.

During its troubled recent history, El Salvador has had at least one well-written constitution, a work of admirable legal craftsmanship, or so I am told. At the very same time as this crystalline document was supposed to protect them, innocent Salvadorans were being murdered wholesale by their government. The problem was that the wonderful words of the constitution weren’t part of a larger mechanism that put them into practice.

The articles and amendments of any constitution never guarantee anything on their own. They work in concert with the rest of the apparatus of the state. If the rest of that apparatus is rotten, nothing in a written constitution can stop the rot. And conversely, if the apparatus as a whole is in good working order, and is administered by intelligent, decent people, a poor constitution can’t do all that much damage. A written constitution often gives people a sense that because “it is written” that such-and-such a right must be respected, we thereby do not depend on the good faith of agents of the state. That is a false sense of security.

 

The recent referendum in Ireland

As I write this, votes are being counted in a referendum on whether the Irish Constitution should be amended to give the Irish parliament more powers. Opponents say this is bad because it will blur the separation of powers — i.e. the distinct roles played by judicial and legislative branches of government. Some have said that if a “Yes” result is delivered, the changes to the Constitution can “never be reversed”.

I would argue in a Burkean spirit, first, that the Irish legislature clearly needs more powers. This is “empirically” obvious from recent events in Ireland in which people who should have been called to account have not been called to account. Expenses and banking cheats have “got away with it”, at least one of them by appealing in court to an extremely dubious legal right that seems — stupidly and wrongly — to be protected by the Irish Constitution. Meanwhile in the UK, expenses cheats have been sent to prison; dodgy practices of the press and others have been subject to the scrutiny of parliamentary committees with real powers; and the press remains largely free to discuss people without respecting such idiocies as “the right to a good name”. Burke was a pioneer here, using his own powers as a parliamentarian to call Warren Hastings to account for his dodgy practices.

Second, the “separation of powers” does not mean the courts are on Venus and the parliament is on Mars. It means that political affiliations and political pressures do not sway court cases, and conversely that legal niceties do not inhibit parliamentary proceedings too much. Hence “parliamentary privilege”. But it’s a matter of balance. We don’t want parliamentarians to be “above the law” in the usual sense of that phrase, but of course the legislature must still make the laws, and the courts must still interpret them. If the laws are inadequate for proper democratic accountability, as is clearly the case at the moment in Ireland, the legislature must change those laws. If the laws can only be changed via a constitutional amendment, so be it.

Third, it is ridiculous to claim that constitutional changes are “irreversible”. Nothing in a constitution is written in stone. The fact that several referendums have already been held on amendments to the constitution shows how eminently possible it is to change it. If we change it in a way that proves not to work well in practice, we can simply change it back again. The eighteenth amendment to the US Constitution (prohibiting alcohol) was reversed by the twenty-first amendment.

Fourth, and perhaps most important of all, only by making elected representatives (and others in positions of power) accountable will they properly represent the people who elect them (and pay their salaries). The rhetoric of the “No” campaign has largely appealed to current ill-feeling towards elected representatives. We don’t like the decisions they have made of late, and we don’t like the fact that they are unaccountable for those bad decisions. The “No” campaign’s “solution” to this problem is to make sure they can’t make too many decisions. But a much better idea would be to insist that they make better decisions. And they will make better decisions when they are made more accountable. To give them more powers — especially to call each other to account — would be an incremental change of the sort that can help forestall catastrophic change. The catastrophic change I have in mind here is one in which democracy itself is compromised, because the most important decisions are made by non-elected “experts” who are answerable to no one.

The evil of paternalism

Paternalism is forcing people to do things “for their own good”. Since force is involved, it is the person applying the force, not the person being forced, who judges what is best.

Most of us would say that paternalism is sometimes acceptable for children. Parents occasionally have to force their children to do things they would rather not do, because parents often have a better idea of what’s in their child’s long-term interest than the child does.

And it is sometimes acceptable to force adults to do things for the good of others. For example, we force adults not to drink and drive for the safety of other road users. But that is a sort of collective self-defence. Paternalism occurs when adults force other adults to do things for their own good – i.e. the supposed good of the other adult. This is extremely problematic, because a normal adult has a better idea of what is in his own interest than any other adult could have.

For clarity, from here on I will refer to the agent being forced as “the subject” of paternalism.

Defenders of paternalism usually take one or other of the following approaches. Sometimes they say that other adults have better knowledge of how to achieve the subject’s goals. And sometimes they say that other adults have a better idea of which goals the subject should pursue. Please observe the difference: the first involves having false beliefs, or what Hume might call a failure of “reason”. The second involves having misdirected desires, or what Marxists might call “false consciousness”. I will return to Hume below: he would have said that misdirected desires are an impossibility.

I reject both of these defences of paternalism, and add that paternalism is one of the greatest and most insidious of social evils. I will deal with the two defences in turn.

First, it is claimed that other adults know better how to achieve the subject’s goals, given that it would indeed be good for the subject to achieve his own goals. In other words, people do know what’s best for them, but don’t know how best to achieve it. They’re right about the ends, but wrong about the means. The claim here is that priests, doctors, lawyers etc. have “expert knowledge” that goes beyond that of ordinary people, and so they should be granted decision-making powers over ordinary people, the better for ordinary people to achieve goals that they already have (and indeed are entitled to decide for themselves).

I reply that that is a dangerously mistaken understanding of expertise. Expertise has a narrower scope and – if we’re lucky – more depth than common sense, but it buys this depth at the cost of being less trustworthy than common sense. By all means let expert opinion be expressed freely, and let ordinary people consult whichever experts they think fit to advise them. But let’s be clear: when we take action to achieve our goals, what matters is reliability, and common sense is more reliable than any other sort of judgement. Of course like all human judgement common sense is fallible. But it is less fallible than expertise. What we all do every day to make a cup of tea (say) is more reliable than what brain surgeons, economists or car mechanics do to fix our heads, the economy, or cars. The success rate of our attempts to make a cup of tea is better than the success rate of their attempts to remove a brain tumour, reverse a recession, or replace a faulty exhaust pipe. And the thing about common sense is that it is common: almost everyone has it. Almost everyone is already in possession of the most trustworthy way of making decisions. It is right that ordinary people of common sense should seek advice from experts, but wrong for them to abdicate decision-making powers to experts. It is even worse if their decision-making powers are usurped.

The other defence of paternalism assumes that agents do not know what’s good for them, because they have the wrong goals. This is what Rousseau had in mind when he wrote of people being unfree when they fail to act in accordance with the “general will”; it is what Marxists had in mind when they spoke of the proletariat having “false consciousness”; and what Nazis had in mind when they adopted the slogan Arbeit macht frei. I need hardly say I think this is a ghastly idea. It is a conceptually confused idea as well, as it drags moral judgement into wholly factual matters of agency. It entails that we can make mistakes about what we want, yet the only criteria of rightness or wrongness that can be brought to bear on desires are “moral”.

I mentioned above that Hume thought desires simply could not be directed at the “wrong” objects. He thought that – like other animals – we desire whatever it is we happen to desire, and the function of our “reason” or belief system is just to help us satisfy those desires. So for Hume, “reason is the slave of the passions”. Reason does nothing but work out the means, given our ends: it “can never pretend to any other office than to serve and obey” the passions.

That idea is shared by many liberal thinkers, especially those in the English-speaking world. Its humaneness is obvious: it does not puritanically judge what others should or should not want, nor does it appeal to the authority of political ideologies such as those of the French or Russian Revolutions or Nazism. Instead it prompts us to accept that people want whatever they happen to want, and if we don’t like the way they behave, we should tolerate it as far as is practically possible. This sort of toleration has made life much better for homosexuals and other minority groups in recent decades, at least in the West. It is the most important civilizing trend in history.

The guiding assumption is that what is good for a person is whatever he would choose for himself if he were free to choose. Because he chooses it himself, it would be good for him no matter how much it differs from what other people would choose for themselves.

Of course we all belong to the same species, so we share many goals as a matter of biology. But there are real differences between individuals in the way we give our various goals different weighting. For example, we all want to be healthy, but some people are prepared to take greater risks with their health than others because they want to have unhealthy fun, or they want to put their energies into their work or creative projects even if that damages their health.

In countries like Ireland, there is a tradition of extreme deference towards figures in a position of supposed authority, such as priests. As the influence of the church has waned, priests have lost their revered status. But members of other professions have crowded into the space left by priests, to bask in the tradition of deference. Doctors, academics and politicians especially presume not just to advise people on how to achieve good health, but to take steps to force people to live their lives as someone else sees fit. This must be resisted. It is uncivilized.

 

 

Why we shouldn’t believe climatologists

We get empirical reasons for believing scientific theories when they pass tests. A theory is a collection of hypotheses or guesses, and an individual hypothesis is tested in the following way.

Together with some other assumptions, the hypothesis logically implies something that can be seen directly – typically something that no one has noticed yet.

So testing begins with the hypothesis yielding an observable prediction. But this prediction isn’t anything like the speculative futurology of science-fiction writers or of “what if” historians – scientific hypotheses don’t predict grand, uncheckable scenarios such as “the destruction of civilization within 100 years”. Instead, they yield something quite specific and checkable like “the solution will turn blue” or “the needle will point to the 5”.

If an observation is subsequently made, and it is found that the solution does indeed turn blue, or that the needle does indeed point to the 5, as predicted, the hypothesis passes the test. It has made it over a “hurdle”. The “higher” the hurdle seems to be – in other words, the more of a “weird coincidence” it would seem to be if the hypothesis made it over the hurdle even though it were false – the stronger the reason it gives us for believing that the hypothesis is actually true. But this method never gives us numbers that “measure our confidence”, or anything of that sort.

Note the pattern: the hypothesis is a guess that describes things that can’t be seen directly; it is tested after an observable consequence is computed; if it passes the test, we get a reason to believe that the hypothesis itself is true. But it’s never a particularly compelling reason, as a hypothesis always remains a guess.

The above pattern is called the hypothetico-deductive method (because what we hope to observe is deduced from an initial guess or hypothesis). It never pretends to yield certainty, and it honestly admits that creativity and guesswork are an essential part of the process. According to this view, science is an epistemically risky business. Its strength is that it can reveal the hidden structure of reality, not that we can safely bet our life savings on it.
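
As a toy illustration of that pattern (my own example; the numbers are invented for the purpose), here is the hypothetico-deductive schema in a few lines of Python: a guess about something not directly observable, plus auxiliary assumptions, yields a specific checkable prediction, and passing the check gives a reason, never certainty, for believing the guess.

    # Hypothesis (a guess about something not directly observable): a spring's stiffness.
    k_guess = 25.0     # N/m

    # Auxiliary assumptions needed before anything observable can be deduced:
    mass = 0.50        # kg, the weight hung on the spring
    g = 9.81           # m/s^2, local gravitational acceleration
    # ...plus Hooke's law itself and the assumption that the spring is not overstretched.

    # Deduce a specific, checkable prediction ("the needle will point to ...").
    predicted_extension = mass * g / k_guess      # about 0.196 m

    measured_extension = 0.20                     # m, what we actually observe

    # The test: the prediction either survives or it doesn't (within measurement error).
    if abs(predicted_extension - measured_extension) < 0.01:
        print("The hypothesis passes the test: a reason, not a proof, to believe the guess")
    else:
        print("Something is wrong: the guess OR one of the auxiliary assumptions")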

The opposed view reverses the roles of theory and data described above. Instead of theory implying observations and then being indirectly corroborated by actual data, this alternative view supposes that theory is “based on” data – in other words that the “data” imply the theory.

This view is called inductivism because it gives a central role to the form of empirical reasoning known as induction. The best way to understand induction is with some simple examples. We reason inductively when we jump from “all of the swans I’ve seen so far have been white” to “all swans are white” or from “all of the emeralds I’ve seen so far have been green” to “all emeralds are green”. Induction is essentially generalization from a limited number of observed instances.

At first glance, it might seem as though no guesswork at all is involved in induction, because there seems to be no “creative input” in reaching its “conclusion”. But really, the guesswork is just completely unimaginative. The simplest and most general hypothesis is generated in a mechanical way by extrapolating from the initial ingredients. Although it would be unfair to say induction “dishonestly hides” its guesswork, many people get a false sense of security by overlooking the fact that it is guesswork.
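
A sketch of just how mechanical the extrapolation is (the swans are the classic example; the code is merely illustrative): the “conclusion” is generated by blind generalization from whatever has been observed so far, and nothing in the procedure warns us that it is still a guess.

    def induce(observations):
        """Naive induction: from 'every swan seen so far is white' to 'all swans are white'."""
        kind = observations[0][0]
        colours = {colour for _, colour in observations}
        if len(colours) == 1:
            return f"All {kind}s are {colours.pop()}"
        return "no generalization available"

    seen_so_far = [("swan", "white")] * 1000        # a thousand European swans
    print(induce(seen_so_far))                      # "All swans are white" -- still a guess

    seen_so_far.append(("swan", "black"))           # then one Australian swan turns up...
    print(induce(seen_so_far))                      # ...and the "conclusion" collapses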

A problem immediately presents itself: the only sort of theory that could be “reached” by this method is a generalization about observable things such as swans and emeralds. The most interesting branches of science talk about electrons, viruses, black holes, etc. – strange and often apparently magical things that cannot be seen directly. So inductivists tend to view science as a rather unmagical, “superficial” enterprise – it is a mere “instrument” whose purpose is not to explain the inner workings of the world but rather to “organize” human experience, to predict how future observations will unfold given past observations, and so on. This will seem a bit fishy to anyone who has grasped the cunning of a good scientific explanation, and remembers the marvellous feeling of “the key turning in the lock”.

The inductivist would defend his view by claiming that it is a virtue rather than a failing of science that it doesn’t “stick its neck out” by attempting to describe the hidden structure of reality. Instead, it delivers claims that we can confidently believe to be true. Scientific laws after all are generalizations, and those are the very goods that induction delivers.

This apparent strength of induction is actually its greatest weakness. Although induction might deliver the occasional “phenomenological” law (which describes regular observable phenomena), that’s pretty much the only thing it can deliver! That is because induction needs law-like connections to be reliable, as it must be if it is to give us a reason to believe its deliverances. Consider the example of white swans. If I generalize from the white swans I’ve seen so far to “all swans are white”, I make a mistake, because some swans are black. Being white is not an essential aspect of being a swan. In other words, there is no law-like connection between being a swan and being white. By contrast, being green is an essential part of being an emerald. So I can reliably generalize from the green emeralds I’ve seen so far to correctly infer that all emeralds are green. This second induction is reliable, because the property I’m extrapolating from and the class I’m generalizing to are connected in a law-like way. All of the familiar examples of induction that are reliable rely on a law-like connection of that sort. For example, “the Sun has risen every day of my life so far, therefore it will rise every day”. This is reliable because of the regular rotation of the Earth. If the Earth did not rotate with the law-like regularity of its conserved angular momentum, the induction would be untrustworthy.

In climatology, the equivalent of “theories” are computer models, and the equivalent of their being based on “data” is that the models’ initial inputs are numbers supposedly drawn from the climate record of the past. The hope is that patterns of the past can be used as the starting-point for very sophisticated induction – far too complicated for the human mind to grasp – whose end-point is a description of future patterns.

Let us pass quickly over the complaint that most of these “data” are not the product of observations at all in the usual sense of the word, but are so-called proxies – in other words, they are themselves the non-observational product of further theory. Climatologists assume that the whole operation needs “proxies” at the bottom of a foundational structure, because they are so firmly in the grip of the idea that “theory is based on data”. If they can’t get actual data, “proxies” are the next best thing.

Another complaint is that the induction described above could only give us a reason to believe its conclusions if there were law-like connections between past climate patterns and future climate patterns. A few moments’ reflection reveals that any such connections are bound to be extremely weak, because the climate is extremely complicated.

Bear in mind that law-like connections are generally simple rather than complicated, because laws connect classes of natural kinds (such as green things and emeralds). Scientific laws are nearly all strikingly simple, as well as very general. They apply to classes whose edges are not at all fuzzy. But climate is almost literally a matter of mists and fogs. Idiosyncratic detail is everywhere. The variables involved are innumerable.

The climate may also be literally chaotic. In physics, a chaotic system is one that depends in a very sensitive way on initial conditions. Tiny differences in initial conditions can lead to very large differences in the way the system unfolds over the course of time, making such systems in effect unpredictable. Some such systems – such as the double compound pendulum shown below – are very simple. The climate is unimaginably more complicated.

[Figure: a double compound pendulum (animated GIF from Wikipedia)]
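
To see the point about sensitive dependence without wrestling with the pendulum’s equations of motion, here is a sketch in Python using the logistic map, an even simpler system that is well known to be chaotic (the starting values are arbitrary illustrative choices): two runs that begin within a ten-millionth of each other soon bear no resemblance to one another.

    # The logistic map x -> r*x*(1-x) with r = 4 is a textbook chaotic system.
    def trajectory(x0, steps=40, r=4.0):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = trajectory(0.2)          # one initial condition
    b = trajectory(0.2000001)    # an "identical" one, off by a ten-millionth

    for n in (0, 10, 20, 30, 40):
        print(f"step {n:2d}:  {a[n]:.6f}  vs  {b[n]:.6f}")
    # The tiny initial difference roughly doubles at each step; by step 30 or so the two
    # runs have nothing to do with each other, and no refinement of the starting value
    # short of infinite precision would postpone the divergence for long.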

Computer modelling is a useful and powerful tool in some branches of science, of course. For example, aeronautical engineers use computer models to predict the pattern of metal fatigue in airframes. But these airframes are man-made, to precise specifications, out of materials whose properties are very well known. These materials behave in law-like ways as required. The climate is something completely different.