JS Mill’s “two principles”

I rate JS Mill as a great philosopher, so I was interested in an article entitled “The Awful Mill” by Bryan Caplan about Mill’s apparent confusion between two “principles” mentioned in On Liberty.

Caplan calls Mill’s principle of utility his “ultimate” principle, which is a fair enough description, as Mill was a utilitarian and explicitly wrote: “I regard utility as the ultimate appeal on all ethical questions”.

But On Liberty is an extended defence of another “simple principle” – the famous harm-to-others principle – which Caplan calls Mill’s “absolute” principle. That is again a fair enough description, as Mill himself does call it a “principle”, and he does say it is entitled to “govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be physical force in the form of legal penalties, or the moral coercion of public opinion.” (My italics.)

Caplan’s main claim is that Mill is not being consistent – worse, he is a bad philosopher – because he doesn’t base his defence of the “absolute” principle on his own “ultimate” principle.

I think Caplan’s idea here is interesting and worthwhile, first because I agree that Mill is not clear enough about how these principles stand to each other, and second because it exemplifies Mill’s pragmatism – something we tend to associate more with conservatism than liberalism. But Caplan is mistaken. His error illustrates something important about knowledge and the nature of justification: it’s “contextual”.

I will illustrate how the two principles are related by using an analogy from mathematics: Mill’s two principles are related to each other in the same way as a definition in number theory is related to a rule of thumb in arithmetic.

Number theory is one of the more “basic” branches of mathematics, and its purpose is to put our various number systems (counting numbers, real numbers, etc.) on a firm conceptual footing. The idea is to “reconstruct” numbers in terms of set theory, relations, etc., so we can be clear about what numbers are. (And be clear about what they aren’t: an “imaginary” number is not imaginary in the usual sense of the word, but can be constructed as an ordered pair of real numbers, one whose first member is zero.) The counting numbers (0, 1, 2, …) are defined in terms of sets, and higher-level numbers are constructed in terms of lower-level numbers.
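The set-theoretic definition of the counting numbers can be made concrete. The sketch below is my own toy illustration of the von Neumann construction, in which each number is the set of all smaller numbers; the function names are invented for the example:

```python
# A sketch of the von Neumann construction of the counting numbers:
# 0 = {}, 1 = {0}, 2 = {0, 1}, and so on – each number is the set
# of all smaller numbers.  Python sets cannot contain ordinary sets,
# so we use frozensets.

def zero():
    return frozenset()

def successor(n):
    # n + 1 is defined as n ∪ {n}
    return n | frozenset([n])

def von_neumann(k):
    """Build the set representing the counting number k."""
    n = zero()
    for _ in range(k):
        n = successor(n)
    return n

# Each number, viewed as a set, has exactly that many elements:
assert len(von_neumann(0)) == 0
assert len(von_neumann(3)) == 3
# And "less than" coincides with set membership:
assert von_neumann(2) in von_neumann(3)
```

The point of such constructions is not that anyone counts this way, but that they show the counting numbers can stand on the firm footing of set theory.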

There are various ways of proceeding in number theory. For example, constructing real numbers (which include √2, π etc.) out of rational numbers is a bit tricky. There are at least three alternative ways – all legitimate – of getting around the problem. But whichever route is chosen, it doesn’t affect the way we do arithmetic. We still teach our children practical arithmetic first, because this has a direct bearing on how we conduct our everyday lives. We all use rules of thumb when doing arithmetic, such as “whenever you multiply one side of an equation by a factor, multiply the other side by the same factor”. These rules of thumb are not affected by the route chosen in number theory to construct real numbers out of rational numbers.
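For the curious, one of those legitimate routes defines a real number as a “Dedekind cut” of the rationals. A sketch of how √2 comes out on that construction:

```latex
% A Dedekind cut: a non-empty, proper, downward-closed set of rationals
% with no greatest element.  On this construction the square root of 2
% is *defined* as the set
\[
\sqrt{2} \;=\; \{\, q \in \mathbb{Q} \;:\; q < 0 \ \text{or}\ q^{2} < 2 \,\}
\]
% The alternatives (e.g. equivalence classes of Cauchy sequences of
% rationals) yield the same arithmetic, which is why our rules of thumb
% are unaffected by the choice of route.
```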

Mill’s “ultimate” principle expresses his utilitarianism, and it is analogous to a basic definition of number theory, the sort of thing that can differ between different schools of number theory. To utilitarians like Mill, his principle is the ultimate appeal on all moral questions, including matters of “private” morality. But different individuals have different moral opinions, and they can differ at this most fundamental level about what makes human actions right or wrong. For example, a follower of Kant judges action by asking himself whether he is following a universal law – a rule of conduct he could happily wish everyone would follow such as “we must not tell lies”. That is a very different sort of appeal from the utilitarian focus on consequences.

Mill’s “absolute” principle – that society is only entitled to interfere with any of its members’ actions if they cause harm to others – is more like a rule of thumb in arithmetic. It is “absolute” in that it is supposed to apply to all action without exception and to everyone in society apart from children and incompetent adults such as those suffering from severe mental illness or disability. A lot of rules of thumb are absolute in this way too, so being a rule of thumb and being absolute are not mutually exclusive.

To guide public policy – to help frame laws, decide government policy and so on – Mill’s “absolute” principle has to be influential. It must command respect and broad agreement from a large section of society. If Mill appealed to his “ultimate” principle to defend it, it would only win the agreement of other utilitarians like himself. So instead he has to appeal to more universal values.

For example, unless there is a proliferation of different opinions, we are more likely to overlook the truth, which is often found between extremes. If we merely parrot truths instead of challenging them and defending them, we cannot properly know them. If we prevent others expressing their ideas, we assume our own infallibility, which is an error. If we force people to do things for their own good, they cannot grow as human individuals, and growth is an essential element of human well-being. Truth, knowledge, the avoidance of error, and human flourishing are values shared by utilitarians and non-utilitarians alike.

In arguing for his “absolute” principle in that way, Mill reveals a pragmatic side that we more often associate with conservatism, or at least with people who are suspicious of root-and-branch reform. In the present context he doesn’t care what basic moral opinions other people have in their private lives, as long as they can agree to leave each other alone as much as possible in their political lives. He is not trying to sell an ambitious or radical utilitarian “system”, just to argue in a piecemeal way for a rule whose widespread observance would make society better.

Mill mentions his own utilitarianism just once in On Liberty: when he explains why he will not be appealing to the increasingly popular concept of abstract “rights” to defend his “absolute” principle. His “ultimate” principle plays a negative role here: it’s merely a passing mention of his attitude to abstract (as opposed to legal) “rights”, which is similar to that of Jeremy Bentham (who dismissed the idea as “nonsense upon stilts”) and Edmund Burke.

So much for Caplan’s claim that Mill is simply confused about his two principles. It is remarkable that Caplan expects Mill to justify his “absolute” principle by attempting to put it on a “firm foundation” in the manner of Descartes in his Meditations. This is a common expectation, and it is greatly to Mill’s credit that he confounds it. He seems to have had an insight about the nature of knowledge: truth is objective, but justification depends on the context. An opinion is more worthy of belief when it has survived the clash of opposing opinions. In other words, justification is a discursive matter, something that gets hammered out when people engage each other in debate.

If we’re honest about it

As we discuss the possible introduction of gay marriage, we often hear expressions of the “enlightened” view that marriage is a “man-made” institution. I’m 100% in favour of gay marriage, but I can’t agree with that view. Human culture is just an extension of human nature, and it is a mistake to see aspects of culture as working “against nature”. Culture instead adds detail to innate biological urges and abilities. For example, human languages differ from each other because they developed along different lines. In other words, they were so developed by the people who spoke them, and in that sense they were “man-made”. But the fact that every human speaks some language or other indicates that language use is innate. It is not “artificially imposed” on human nature by human culture as an optional extra. Marriage isn’t artificially imposed on human nature by human culture either. As with language, all human cultures seem to have some form or other of marriage, as indeed do some other animals such as monogamous birds that observe public courtship rituals. This indicates that marriage serves a biological “purpose”, like erotic love itself.

The biological purpose of erotic love is committed parenthood in monogamous species. These are species in which offspring are a big investment for both parents, so big an investment that both parents have to be firmly committed to their role as parents and bonded to each other as a pair. Public “ceremonies” seem to add “cement” to such pair bonds. The extra “cement” is advantageous in species whose offspring represent an unusually large investment, such as humans. There’s a selective pressure for a more durable bond, because monogamy is always threatened by infidelity, which also serves a biological purpose (although a slightly different one for each sex). No species is perfectly monogamous in that none of its members cheat, although in many species some pairs are perfectly monogamous in that neither member ever cheats. At the level of entire species, monogamy is always less than perfect, even among those whose fidelity is legendary such as swans. Thus the presence of cheating among some members of a species does not diminish the claim of the species as a whole to be a monogamous one.

If love and the institution of marriage are “natural” for humans because we are a monogamous species, changing them or common perceptions of them might be more difficult than we think. It doesn’t matter what other people think about love, because that only exists between two people. Homosexual love obviously exists and always did exist regardless of homophobic attitudes of the surrounding culture. But marriage is another matter. For it to exist in the proper sense of the word, it has to be widely recognized as marriage by the surrounding culture, and I’m not entirely convinced that’s possible with homosexual marriage just yet. I wish it were possible, but I’m being realistic.

Since homosexual sex cannot result in parenthood, it is not surprising that many people see homosexual love and marriage as “not quite the real thing”, as biologically secondary to heterosexual love and marriage. Of course we must not draw any “oughts” from that unpalatable fact, but I think we should at least acknowledge it as a fact.

As I write, proponents and opponents of gay marriage are being urged to sign online petitions for or against. The “anti” vote currently stands at almost ten times the “pro” vote. If those numbers reliably reflect popular opinion, that would be hardly surprising, as it simply reflects human biology. But it would be disappointing, because it probably means that for now, even if gay marriage were made possible in the full legal sense, it would not be widely recognized as marriage.

This isn’t always a sign of homophobia. Humans are intensely interested in erotic love, for the obvious biological reason that human children are a sort of “life sentence”. A single human childhood is easily the longest and most resource-consuming project in the living world, so the choice of who to marry and/or have children with is a biologically momentous decision – it’s literally a matter of life and death, not only for the children but also for the occasional suicidal abandoned spouse or murderous cuckolded non-parent. Erotic love is the central preoccupation of human art. We are all fascinated by the many variations on the theme of love, and we all speculate about how well or badly the old, the young, the rich, the famous, above all the different will fare in the dangerous game of marriage.

If we’re honest about it, we all wonder how well or badly things will turn out where there are big age differences, religious differences, racial differences, or differences in social class. And we see the importance of sameness as well. Most of us see various strengths and weaknesses in the various possible similarities and differences. For example, most of us are ready to accept a big age difference if the man is older than the woman, but raise an eyebrow if the man is younger than the woman – especially if the man is poorer than the woman.

If we’re honest about it, most of us realise that marriage between a man and a woman can be hard going, but the long trek unto death is made slightly easier by a sort of complementarity between them. If the man is a boor and the woman is a shy accepting little mouse, that is horrible – but at least their minds are made for each other like sex organs. If the man is a hen-pecked weed and the woman is a harridan, that is not quite so bad, but again: at least they are made for each other.

This brings us to the crux of the problem: Why are so many of us apparently not yet ready to recognize marriage between two people of the same sex? – I think we see (or think we see) a lack of complementarity between the people involved. I for one do not see any such complementarity, bad and all as heterosexual marriages often are, and I would be amazed if the mean length of non-married homosexual partnerships was anything like as long as the mean length of non-married heterosexual partnerships. That is one of the reasons I support homosexual marriage: it might add “cement” to homosexual partnerships in the same way as it does to heterosexual partnerships.

No doubt what I’ve written here will strike many as homophobic. And I am a heterosexual, which does not bode well. But I have had unusually intimate relationships with homosexual men and women for much of my adult life, through one accident of fate or another. Much of what I know about evolutionary biology I learned from the greatest – and incidentally homosexual – philosopher of biology there has ever been. I think all of them would agree with what I have just written, and all have expressed views very similar to my own.

The liberal impulse

There’s a liberal impulse, which springs from a very simple insight about what constitutes a person’s good. There’s also a simple anti-liberal impulse, which springs from something else. As a committed liberal, I’m not going to pretend I’m anything but a partisan here.

The liberal impulse began with Socrates. Socrates encouraged open debate, plain-talking, but above all thinking for yourself. When “the gadfly of Athens” was finally sentenced to death by a “jury” of 501 – in other words, by a judicial mob – his crime was essentially political incorrectness. Instead of saying what people wanted to hear, he had talked himself into a courtroom of morally outraged mainstream thinkers.

A guilty verdict was a foregone conclusion, as he had said the wrong things, and was deemed to be a source of immorality. As his inevitable execution drew near, he said that a man’s soul cannot be harmed by mere damage to his body. This very fecund idea entails that any individual’s good is a matter of what he himself – i.e. his own mind – thinks is good. It is not a matter of what others regard as harmful to him, or harmful to his body, or harmful in general.

I suggest we pause for a moment to reflect on how profound this thought is. It means that an individual decides for himself what counts as harm to himself. He defines his own good. No one else can overrule him. For example, if he freely chooses to be a smoker – if he willingly and knowingly accepts the risks of smoking – then the unavailability or high price of cigarettes is harmful to him. If he is a homosexual, and willingly commits what was formerly (in the UK) the “crime of buggery”, he is harmed not by buggery itself but by the laws that forbid it. If he chooses to end his own life, laws that prevent him doing so harm him. If he freely chooses to drink alcohol, and supermarkets freely choose to sell alcohol at a low price, then a law that forces a minimum price for alcohol harms both of them.

The idea that “we define our own good” is central to liberalism. And it brings liberalism into direct conflict with paternalism. ‘Paternalism’ is the word for forcing someone to do things that others regard as being for his own good. Inasmuch as minimum pricing laws for alcohol force drinkers to drink less – rather than protecting drinkers’ victims from the bad things drinkers do – these laws are paternalistic.

JS Mill’s essay On Liberty is an extended series of arguments against paternalism, and a passionate rebuttal of the way paternalism cramps human growth. Apparently a great many present-day politicians have not read it, or else have not understood it.

It seems to me that liberalism is right. Yet anti-liberal sentiment seems to be everywhere. Why? Whence the anti-liberalism that seems to be more popular now than it was in Victorian times, when Mill wrote On Liberty? Apart from a poor education – can anyone really imagine Róisín Shortall reading On Liberty? – I blame moral narcissism. People want to be seen to be doing the right thing, to be seen to be “caring”, and so on.

And liberalism tends to make people look bad. For example, liberals are in favour of free speech, including crucially the expression of wrong ideas. So liberals are in favour of letting Holocaust-deniers deny the Holocaust. This leaves them open to the charge that they themselves deny the Holocaust.

Liberals are in favour of individuals deciding what’s in their own interest. Thus they are “individualists”. But to people such as Michael D Higgins, ‘individualism’ is a dirty word, synonymous apparently with being “in favour of greed”. Thus liberals leave themselves open to the charge that they are in favour of greed, or are greedy themselves.

Liberals are in favour of letting people make their own mistakes, including such mistakes as harming their own bodies through the abuse of alcohol. So they are opposed to minimum pricing laws for alcohol. Thus they leave themselves open to the charge that they promote alcohol abuse, or that they are themselves drunken yobs.

Most liberals are not Holocaust-deniers, or greedy, or drunken yobs. But most anti-liberals have a narcissistic yearning to be seen not to be Holocaust deniers, or greedy, or drunken yobs. In my opinion that is a tragedy.

Two very different uses of statistics

In the nineteenth century, physics took a “statistical” turn with the development of the kinetic theory of heat. Heat was no longer understood as anything like a “subtle fluid” that permeates matter, but instead as the movement of molecules. For example, in a warm crystal the molecules vibrate more vigorously than in a cool crystal. The statistical nature of this new theory is really obvious with gases. Gases consist of mostly empty space, with innumerably many molecules flying about randomly, bouncing off each other and the walls of their container like crazy ping-pong balls. The temperature of the gas is their mean kinetic energy – a statistical average. The pressure is the amount of force per unit area they impart, again statistically, against the walls of their container through repeated collisions, none of which is individually tracked or described by the theory.

The new theory explains why temperature, pressure and volume are related as Boyle’s Law had said. Only now we have a much deeper understanding of why: if a cylinder filled with bouncing ping-pong-ball-like molecules is reduced in volume – by lowering a piston, say – the molecules strike the walls more frequently (and, while the piston is actually moving, rebound from it faster), so the gas presses harder against its container, just like an actual ping-pong ball bouncing between a table and a table-tennis bat when the space between them is narrowed.
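The ping-pong picture can be sketched numerically. This is my own one-molecule, one-dimensional toy, not a full kinetic theory: a molecule of mass m bouncing at speed v in a box of length L hits each wall every 2L/v seconds and transfers momentum 2mv per hit, so the time-averaged force on a wall is mv²/L.

```python
# Toy 1-D kinetic model: time-averaged force exerted on a wall by one
# molecule of mass m moving at speed v in a box of length L.
# (momentum 2mv per hit, one hit every 2L/v seconds -> force = m*v**2/L)

def average_force(m, v, L):
    return m * v**2 / L

f_full = average_force(1.0, 2.0, 10.0)
f_half = average_force(1.0, 2.0, 5.0)   # same speed (same temperature), half the box

# Halving the "volume" doubles the "pressure", as Boyle's Law requires
# at constant temperature:
assert abs(f_half - 2 * f_full) < 1e-12
```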

Many other heat-related phenomena are explained as well – such as why heat spreads from a hotter to a cooler body as a matter of statistical inevitability. Thermodynamics has become the science of the dissipation of motion.

In the early twentieth century, Einstein’s theories of relativity showed the door to Newton’s mechanics and his theory of gravitation. These were not moves towards statistics, but they were moves away from the decidedly un-statistical Newtonian worldview. Not long afterwards, physics became centrally “statistical” with the development of quantum theory. The reign of Newton was over.

The king is dead – long live the king. But a lot of people don’t seem to realize that the use of statistics in the “new” physics is not at all like the use of statistics in inductive “sciences” such as psychology or epidemiology. It’s almost an abuse of language to apply the same word to both. How do they differ?

In physics, statistical hypotheses are tested against observations: claims that describe relative frequencies imply that particular values will be observed when measurements are made. For example, the formalisms of quantum theory might say that 33% of a beam of electrons should end up in area A. But suppose our measuring instruments are only capable of detecting “more than half” or “less than half” of these electrons. The original hypothesis still implies that less than half of them will end up in area A. If an observation confirms that less than half indeed end up there, that indirectly corroborates the hypothesis that 33% of them should end up there. The pattern here involves a statistical hypothesis working as a premise in a deductive argument whose conclusion is later confirmed through observation. No extrapolation is involved – in other words, no inductive logic is involved.
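The deductive pattern can be sketched in code. This simulation is my own illustration, reusing the made-up numbers from the example above (a 33% hypothesis, a coarse instrument that can only report “more than half” or “less than half”):

```python
import random

# Hypothesis: 33% of electrons end up in area A.
# Deduced prediction: "less than half" will be observed.
# We simulate the experiment and compare observation with prediction.

random.seed(1)
P_HYPOTHESIS = 0.33

def run_experiment(n=10_000):
    hits = sum(random.random() < P_HYPOTHESIS for _ in range(n))
    return "less than half" if hits < n / 2 else "more than half"

prediction = "less than half"      # deduced from the hypothesis
observation = run_experiment()

# The coarse observation matches the deduced prediction, which
# indirectly corroborates the precise 33% hypothesis:
assert observation == prediction
```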

The mathematics involved in testing statistical hypotheses against statistical observations is usually much more complicated than in this simplified example. But the pattern of indirect corroboration is the same.

Logically the very opposite of indirect corroboration is direct implication by induction. In inductive “sciences” statistical conclusions are derived from observations. For example, suppose popular rumour has it that people who eat a lot of red meat don’t live as long as people who eat less. Careful collection of “data” from samples leads you to a more accurate-sounding figure, say that people who eat red meat every day have a life expectancy 13% shorter than those who eat it less than three times a week. The pattern here is quite different from that of physics above, where a statistical hypothesis worked as a premise in a deductive argument, and an actual observation was compared with its conclusion. Instead observations now work as “premises” in an inductive “argument” whose “conclusion” is statistical. A sample is assumed to be representative of the population as a whole, and a statistical property of that sample is extrapolated to the rest of the population.
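The inductive pattern, by contrast, runs from observations to a statistical “conclusion”. The sketch below is my own, with invented lifespans chosen to reproduce the 13%-style figure in the example; no real data is involved:

```python
import random

# Invented "data": lifespans (years) for two samples of 500 people.
random.seed(0)
daily_eaters = [random.gauss(70, 10) for _ in range(500)]  # eat red meat daily
rare_eaters  = [random.gauss(80, 10) for _ in range(500)]  # eat it rarely

mean_daily = sum(daily_eaters) / len(daily_eaters)
mean_rare  = sum(rare_eaters) / len(rare_eaters)

# The "conclusion": a precise-looking percentage, extrapolated from
# the samples to the whole population.
shortfall = 100 * (mean_rare - mean_daily) / mean_rare
print(f"Daily red-meat eaters live {shortfall:.0f}% shorter (in this sample)")
```

Note the direction of the argument: here the observations are the “premises” and the statistic is the “conclusion”, the reverse of the physics case.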

In the first case, statistics is used in testing a guess. And there is no disguising the arbitrary nature of some of the numbers involved in the observation. The 33% figure predicted by the hypothesis might have been very precise, but limitations in the accuracy of our measuring instruments obliged us to make a coarser-grained observation. The observation does not compel us to accept the figure of 33%, as it is consistent with a wide range of other figures. But it does give us a reason to believe or accept the theory that implied that figure.

In the second case, statistics is used in taking a guess rather than testing a guess. And on the face of it, no arbitrariness is involved at all. The apparent precision of the 13% figure may suggest that this methodology has great penetrating power.

I recommend you take a few moments to compare and contrast these two situations. In the first situation, an observation rather limp-wristedly “does not compel” you to accept something you may well accept already – after all, you took it seriously enough to test it. In the second situation, an observation manfully grabbed you and confidently steered you towards a precise figure.

I recommend you do not get taken in by appearances. A “Magic 8-Ball” does something similar to direct implication by induction: it decisively points in one direction, but it does so by simply amplifying random extraneous “noise”. Far from giving you information, these procedures appeal to your anxieties about “not having the details”. The precise-looking details they deliver are not implied in any reliable way by salient facts about the real world. This is fraudulent. The word ‘implication’ is out of place here.

It is all illusory. An induction only provides a reason to believe its “conclusion” if there is a lawlike connection between the property noted in the sample, and membership of the larger population to which that property is extrapolated.

Sometimes this can happen. For example, a large enough sample of mammals is bound to contain roughly equal numbers of males and females. This is no accident: the evolution of many species settled on the production of equal numbers of each sex as a successful genetic “strategy”. So when we extrapolate from our sample to all mammals, this ratio of the sexes is preserved. But the length of life of a sample of people eating red meat every day is very unlikely to represent the life expectancy of the entire meat-eating population in any lawlike way. Anyone can see that any or many of innumerable alternative factors might be involved.
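The lawlike case can be sketched too. This is an invented simulation, not real data: because a mechanism (frequency-dependent selection) holds the mammalian sex ratio near 50:50, any large enough sample tracks the population ratio, and extrapolation preserves it.

```python
import random

# A population of a million "mammals" with a lawlike 50:50 sex ratio.
random.seed(42)
population = ["M" if random.random() < 0.5 else "F" for _ in range(1_000_000)]

def sample_ratio(k):
    """Proportion of males in a random sample of size k."""
    sample = random.sample(population, k)
    return sample.count("M") / k

# Large samples reliably track the population ratio, so extrapolating
# from sample to population preserves it:
assert abs(sample_ratio(10_000) - 0.5) < 0.02
```

No such mechanism connects a sample of daily red-meat eaters to the life expectancy of all meat-eaters, which is why the extrapolation fails there.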

Yet the appearance of numerical accuracy and the apparent lack of arbitrariness in the methodology of “mechanical” inductive guessing take gullible people in – gullible people including many statisticians and the practitioners of inductive “sciences” such as psychology. They are held captive by the sense that “guessing is bad”.

Their sense that “guessing is bad” is inspired by the feeling that certainty is good, combined with the wholly mistaken assumption that science is in the business of delivering near-certainties. But really it is doing nothing of the sort. Science is a risky game whose target is the hidden world of electrons, genes, and processes whose workings are largely a mystery.

Guessing is an essential part of science, and our awareness and honest acknowledgement of that fact serves to remind us that testing is also essential. By testing our hypotheses, we get empirical (i.e. observation-driven) reasons to believe that they may be true. By merely extrapolating from “data” we usually get nothing of the sort.

Moral parochialism

Moral parochialism is the assumption that your own moral opinions are the only moral opinions there can be, so that whoever behaves in ways you disapprove of must simply be not serious enough, or not committed enough to “the” one and only moral viewpoint.

Moral parochialism is the mainspring of most forms of activism. This includes terrorism, various non-violent forms of protest such as “no-platforming” (the organised silencing of expressions of opinions deemed “fascist”), and the po-faced earnestness usually associated with the young, the naïve, and the extreme left.

Why is this? – The morally parochial are unaware of the very existence of opposed moral opinions, so they tend to explain moral failings on the part of others not in terms of the different motives that spring from different opinions, but instead as the result of not being strongly motivated enough by “the” moral viewpoint. Thus they think moral failings are caused by weakness of the will. The morally parochial see the best behaviour as simply the most strongly motivated, most committed, most eager-for-action, most sincere sorts of behaviour – in other words behaviour that springs from strength rather than weakness of the will.

Typically, this supposed weakness of the will takes the form of insincerity, so what the morally parochial see as immoral acts they assume must have non-moral ulterior motives such as greed, rather than resulting from equally sincere but opposed moral viewpoints. For example, if they disapprove of military action, they assume it must have been undertaken “for oil”, or for colonial or racist motives rather than from a sincere desire to do right. Nazis are seen as being on the “far right”, despite calling themselves “socialists” and condemning “capitalism”: everyone agrees they brought evil to an extreme, but if evil stems from weakness, they must have taken greed to an extreme.

Unlike the morally parochial, those who are aware that there are different moral opinions know that although we all welcome action taken for our own side, in support of our own values, we cannot welcome action taken against us, i.e. for the opposing side, in support of values we oppose. So they see nothing valuable or morally admirable about taking action per se.

Moral parochialism is a factual error: it’s a sort of ignorance. It is a fact of life – of politics and human psychology – that different people value different things, and are committed to different basic principles. For example, most religious people are committed to following God’s commands, whereas non-religious people cannot be so committed. There are as many different opinions about what God commands as there are different religions. And there is a wide range of secular alternatives: a hedonistic utilitarian is committed to creating the “greatest happiness of the greatest number”; a preference utilitarian is committed to maximizing the satisfaction of preferences; a Kantian is committed to following universalizable rules of conduct; a virtue ethicist is committed to acting virtuously; and so on. No one can be expected to be familiar with all of these different positions, of course, but anyone who is ignorant of the simple fact that such different positions exist – and have sincere adherents – is culpably ignorant.

I would argue further: as well as being a factual error, moral parochialism is a moral failing, because it inevitably leads to intolerance. To tolerate an act is to allow it despite sincerely morally disapproving of it. Because disapproving of something is very close to thinking it shouldn’t be allowed, it isn’t always obvious where to find room for toleration. The answer is that whenever we act rationally, we have to take account both of how valuable the goal we are trying to achieve is, and of how confident we can be that we will achieve it. This introduces two reasons to “stay our own hands”. I will take them in turn:

The first reason to forbear from acting, even to prevent evil, is that other people have different goals from ours, and the achievement of our goals often entails the thwarting of their goals. This is especially important when the goals are moral, because everyone has an especially strong commitment to their own moral goals. Thinking morally entails taking their interests into consideration, which thus often entails giving precedence to their goals over our own goals when they come into conflict.

For example, I object to halal methods of slaughtering animals, because I think they’re cruel. But my moral objection to cruelty might deserve less consideration than another agent’s sense of religious moral obligation. Here, an awareness of a different sense of moral obligation from my own gives me a reason not to act.

The second reason why we might forbear from acting, even to prevent evil, is that we ought to be sceptical about what abstract theory tells us. And we need to temper our hopes for what we can realistically achieve through action, especially if that action is informed by abstract theory. Although what seems evil to me seems obviously evil to me, because of the strong feelings morality engenders, it generally does not seem evil to the agent doing it, and it will seem obviously non-evil to him for the same reason as it seems obviously evil to me. Who is right? – Discussion might answer this question, but action cannot answer it. When we act, we “blinker” ourselves to opposed opinion because action requires decisiveness.

For example, I am unsure whether the law should legally oblige parents to have their children vaccinated against the standard diseases. I am confident that vaccination is scientifically sound and morally the right thing for parents to do, but I am not confident that the law should override the judgement and decisions of parents. Here, a lack of confidence in my own moral judgement gives me a reason not to act.

Moving beyond moral parochialism by becoming informed about opposed moral theories adds weight to both of these reasons to avoid action. We are reminded that other people are every bit as sincere in their commitment to their opposed moral opinions as we are to ours; and we are reminded that our moral opinions are informed by just one theory among many rival theories, all of which have been defended at one time or another by intelligent and decent people. Thus we are more inclined to toleration, or at least to tolerate behaviour at the lower end of the scale of human wickedness.

Yearning for certainty

Nothing misdirects our intellectual efforts more than the yearning for certainty. Most of us admit that we can never achieve complete certainty, but that doesn’t diminish its allure. And some claims seem to be more certain than others. If we are attracted to certainty, these more certain-seeming claims can become the centre of our intellectual lives. Our thoughts will revolve around them like moths circling a source of light – with similarly baleful results.

Mathematics comes close to the ideal of certainty. Theorems can be proved, and the results of complicated calculations are “guaranteed” like the conclusions of valid deductive arguments. In mathematics, the proofs of theorems literally are valid deductive arguments, and they have an important edge over most everyday arguments. The conclusions of everyday arguments are at best as certain as their premises, but in mathematics the premises are axioms, which are taken as true “by definition”. Thus the theorems themselves are true as a matter of “necessity”.

But even in mathematics, there is always the possibility of human error. The proof of a theorem or the answer to a complicated long division sum may be “guaranteed” – but only as long as the rules of derivation are followed to the letter. Once we have completed a derivation, how can we be sure we really did follow the rules to the letter? Children recognise that mathematics is a minefield of error, but adults seem to forget that, possibly because their “homework” isn’t “corrected” by a teacher. David Hume reminded his readers of the omnipresence of human error, even in areas where we are liable to forget it.
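One way of making “following the rules to the letter” fully explicit is a machine-checked proof, in which a proof assistant plays the role of the teacher who “corrects” the homework. A minimal illustration (in Lean, purely by way of example):

```lean
-- Modus ponens as a machine-checked theorem: from p and p → q, infer q.
-- The conclusion is "guaranteed" only because the checker verifies
-- every rule application mechanically, leaving no room for human slips.
theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q :=
  hpq hp
```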

Another area in which we feel we have certainty is “the inner citadel” of our own conscious experience. To be more precise, we feel that our own reported “take” on our own mental states cannot be wrong. The idea is this: if I think I have a toothache, then I must actually have a toothache. Although I might be mistaken about whether my tooth has some sort of actual physical injury, I cannot be mistaken about the experience of pain. Whatever its physical cause might be, the word ‘toothache’ refers to the quality of the experience. So my own report of my own conscious experience is completely certain, or so it would seem.

To those who yearn for certainty, this idea that our own reports of “inner experiences” are infallible joins forces with the idea that mathematics is the paradigm of human knowledge. The resulting hybrid is called foundationalism, and it has done all sorts of bad things in philosophy and beyond. The best-known foundationalist was Descartes, who set out to reject everything that wasn’t certain. He soon arrived at conscious experience as the bedrock of knowledge, and assumed that all knowledge “rests” on this bedrock. In other words, it works as a foundation. Empiricists and rationalists alike followed Descartes and assumed that knowledge has the structure of an edifice resting on foundations, much as theorems rest on axioms in mathematics. Theorems really do rest on axioms, of course, but mathematics differs from empirical knowledge in this respect.

The idea that knowledge has this foundational structure is so deeply entrenched that at first its rejection can seem slightly mad. But it’s quite easy to think your way out of it and to see why it’s wrong. Consider the rudimentary belief-like states of simple creatures such as insects or “robot-like” agents such as cruise missiles. These agents do have something akin to knowledge of the world, in the form of simple but accurate representations of aspects of the world. In the case of a cruise missile, the representation consists of an on-board computer map of the terrain it flies over, with an electronic “marker” corresponding to its current location. It can be wrong about this – via inaccuracies in the map or an error in its assumed location – but if the terrain-scanning camera and all the other equipment are working properly, it will not be wrong. And its being right has nothing to do with the “quality of its conscious experience”, of which we may safely assume it has none whatsoever. Instead it has everything to do with the reliability of the causal links between the real world and its electronic representation of the real world. Much the same applies to the cognitive equipment of insects, whose consciousness, if there is any at all, must be extremely sparse. But as far as cognition is concerned, there is only a difference of degree between these simple creatures and fully conscious agents like us.

Whether we really do have certainty about our own conscious experiences strikes me as pretty much irrelevant, because if we do have certainty here it is empty and logically impotent. Depending on how we interpret our self-reports about conscious experience, what we gain in certainty we lose in logical power, i.e. in how much they imply. For example, the more certain I can be that I have a toothache, the more my belief is about the mere experience of a toothache. This is a mere “seeming” whose apparent certainty is not correlated with certainty about an actual injury to a tooth. Amputees suffer from pains in “phantom” limbs; whatever certainty they may have about that sort of experience does nothing to justify a belief that the lost limb has grown back.

The real problem here is the assumed “structure” of knowledge as an edifice resting on foundations. This nearly-universal idea misleads human thought right across the board. For example, how often do people ask for the “grounds” of an opinion, or assume that science is “based” on observations? It is a short step from here to thinking that “data” imply theory, so that theory consists of nothing more than a complicated collection or arrangement of “data”. This is a sure sign of “positivist” pseudo-science.

Positivism

I find few things more annoying – and saddening – than watching science get misled by bad philosophy. This usually happens when the scientists involved try to turn their backs on philosophy, in the hope of doing things more “scientifically”. I sympathise with the urge to avoid the nonsensical, pretentious garbage that passes for much or most of philosophy. But whether we like it or not, we are all already infected with the philosophical ideas and assumptions of the traditions we are steeped in from birth. For example, most of us are familiar with the assumption that the “inner world” of our own minds is transparent to us and unproblematic, while the “outer world” is opaque and fraught with difficulty; it may lurk behind a curtain of uncertainty; it might conceivably not exist at all. These are philosophical ideas, but they’re shared by people who have never studied philosophy. On the face of it they may seem like “sceptical” ideas – and thus may look vaguely “scientific” – but in fact they’re as extravagant and immodest as the most gilded religious ornaments of faith.

In disciplines outside of philosophy, the yearning for certainty usually emerges as some form of positivism. The word ‘positivism’ is a loose term for a family of attitudes that are broadly hostile to anything that can’t be checked. So positivists put observation centre stage, and tend to demand that all manner of things be measured. They don’t like any sort of metaphysics, or arcane talk of things that can’t be observed directly, or loose talk of anything that resists numerical quantification.

So far so good, you might think. To many, the no-nonsense approach and rigour of positivism gives it a scientific flavour. It gives sociologists and psychologists the warm feeling that they must be scientists because “we use clipboards and charts and stuff, and we’re really rigorous”.

But in reality, positivism is deeply inimical to genuine science, because it can’t abide guesswork and it can’t abide talk of things that can’t be observed or quantities that can’t be measured.

The central feature of science is hypothesis, otherwise known as guessing. Scientific hypotheses are mostly guesses about things that can’t be seen directly, such as electrons and force fields. Because scientific hypotheses purport to describe things that can’t be seen directly, they can’t be checked directly either. They are never “proved”, at least not in the way mathematical theorems are proved. Instead, we have to devise tests that examine their observable consequences. The passing of such tests is the best sort of empirical evidence we can have – alas! – as these are tests that false hypotheses can pass, and true hypotheses can fail. Perhaps worst of all, any “data” provided by such tests are consistent with a range of mutually inconsistent hypotheses. (This much-discussed problem is called “the under-determination of theory by data”.)

So scientific hypotheses and the theories they constitute are necessarily risky. They are not the sort of things to attract people who yearn for certainty. The history of science is a sequence of theories that were discarded because they were found to be probably false, and the same fate awaits many of the theories we currently hope are true.

However, despite all that, we have decent reasons for thinking that scientific theories are improving, although the improvement is neither “incremental” nor steady. The improvement is a consequence of scientists comparing rival theories and making rational choices between them. To do that, they have to exercise judgement and indeed intuition, which again are not the sort of things that positivists are comfortable with, because they cannot be measured. The degree to which a theory deserves to be believed is not a measurable quantity, despite loose talk of “probability” outside the realm of statistics.

As an alternative to guessing, risk and intuition, disciplines influenced by positivism tend not to attempt to penetrate the hidden depths of the real world. Instead, their formalisms tend to describe the appearances: the observable and the measurable. Thus for most of its life, psychology has been fixated on behaviour, and has left speculation about the nature of the mind, consciousness, cognition etc. to philosophy.

Positivism lends itself to a kind of anti-realism. Scientific realism is the view that science is slowly rolling back the curtain on the world as it really is, so that the things scientific theories talk about – neutrinos, viruses, dinosaurs – are (or were) fully real. A standard form of anti-realism is instrumentalism, the view that scientific theories are mere instruments for predicting what we will observe. Most positivists are instrumentalists – the shared idea being that theories do no more than “organize experience” or “arrange data” in a useful rather than penetrating way.

Disciplines influenced by positivism give a special role to induction – that is, to extrapolation from observed cases. Induction appeals to positivists because it is “mechanical” and seems to avoid guesswork. But induction is the most problematic form of reasoning. It is only reliable when applied to lawlike phenomena, and it cannot make the jump from what can be observed directly to what can’t. It is thus a non-starter for the penetrating guesswork that constitutes most good science.

Until recently positivism was best exemplified by behaviourist psychology. But a new contender has come on the scene: the science of climate change. The methodology of this new science is essentially to construct a temperature “record” of the past, using tree rings, lake bed samples, ice cores and the like, then to construct computer models which mimic this past “record”. Finally, these computer models are left running – busy extrapolating in the manner of someone performing an induction – in the hope that they will go on to mimic the future temperatures of Earth.

To my mind, that is a bit like piecing together the first movement of Beethoven’s Tenth Symphony from his posthumously collected notes, getting a computer model to “fit the data” of which notes occur and when, and then hoping the computer will go on to write the rest of Beethoven’s Tenth. In other words, I regard it as laughably hopeless. But that’s another story. The lesson for today is only to see how our yearning for certainty leads us astray.
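The worry about extrapolation – and the earlier point about under-determination – can be made vivid with a toy sketch. Everything below is invented purely for illustration; it has nothing to do with any actual climate model:

```python
# A toy illustration of under-determination and extrapolation risk:
# two rival "models" that agree perfectly on every observed data point,
# yet diverge wildly once asked to predict beyond them.

observed_x = [0, 1, 2, 3, 4]  # the "past record" (hypothetical data)

def model_a(x):
    # The simplest hypothesis consistent with the record: a straight line.
    return 2 * x

def model_b(x):
    # A rival hypothesis that adds a term vanishing at every observed point,
    # so the record itself cannot distinguish it from model_a.
    bump = 1
    for x0 in observed_x:
        bump *= (x - x0)
    return 2 * x + bump

# Both hypotheses fit the past record exactly...
assert all(model_a(x) == model_b(x) for x in observed_x)

# ...but make wildly different "predictions" about the future.
print(model_a(10), model_b(10))  # → 20 30260
```

Any finite record can be “fitted” equally well by indefinitely many mutually inconsistent hypotheses; choosing between them for the purposes of extrapolation is a matter of judgement about the world, not something the data alone can settle.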

Toleration: test yourself

Trivially, you can only tolerate what you disapprove of. That is, you can only tolerate something if you sincerely think it’s morally wrong, yet despite that disapproval act in ways that allow it (by voting in favour of introducing it, for example).

A lot of scorn has been poured, rightly and understandably, on Cardinal Keith O’Brien’s ridiculous talk of children becoming “victims of the tyranny of toleration”. The phrase ‘tyranny of toleration’ is amazingly wonky, if not an out-and-out oxymoron. But I’ve noticed that here and there, this scorn takes the form of claims that “he shouldn’t be allowed to express the opinion”. That is itself a fine example of intolerance – the very thing Cardinal O’Brien is being criticised for promoting.

If (like me) you already approve of homosexual marriage, you do not “tolerate” it. If you think Cardinal O’Brien should not be allowed to express opinions that you disapprove of – because they are “offensive to homosexuals”, perhaps – then once again you are not tolerating anything. On the contrary, that is a case of intolerance on your part.

It seems to me that a lot of people are loudly congratulating themselves on their own high standards of toleration, when in fact all they are doing is drawing attention to their own non-heretical opinion, and expressing intolerance for the expression of heretical opinion.

Toleration used to be widely regarded as a virtue, at least by liberals. But because it is increasingly misunderstood, it is falling into disrepute. Most people nowadays regard the combination of disapproving of something yet allowing it anyway as a sign of weakness, a symptom of moral laziness, or a lack of principle. Most people look up to vivid-faced, morally earnest, committed “activist” types rather than those who shrug their shoulders and do nothing.

By confusing “blanket approval” with toleration, we are liable to think that the most conscientious people still do not allow what they disapprove of. We are liable to think it is best to be highly principled. This is poison, because it is just intolerance in disguise.

Sexist “cures” for non-sexist “diseases”

As a European Commission “study” announces that Irish women earn 17% less than men – to much self-satisfied glee from the Irish National Women’s Council – it’s time we asked: Why?

Women tend to earn less than men because women tend to spend more time in the home looking after children. Men who spend more time in the home also tend to earn less than men who don’t. And they tend to earn less than women who spend less time in the home.

The problem isn’t that women are discriminated against. It’s that anyone who spends a lot of time in the home is discriminated against, whether male or female. It’s a further question whether this discrimination is unjust, as people who spend a lot of time in the home have less time to spend pursuing high-powered careers outside the home. Employers who want to employ high-powered career-oriented people understandably aren’t as enthusiastic about not-so-high-powered, not-so-career-oriented people.

I know from my own experience how hard it is to get potential employers interested in you if they know you are quite likely to be called away on a “home emergency” such as a child getting sick in school, or getting stranded somewhere remote and needing a ride home. Employers are every bit as unenthusiastic about potential male employees who have to deal with that sort of situation as about potential female employees – and possibly even more so, as it’s widely considered a bit “weird” or even suspicious for men to take that sort of interest in their own young children.

I’m not sure what the “solution” to this problem is. Or even if there is any sort of solution. Technology and laws have surely helped. Broadband, Skype desktop-sharing and similar technological advances have transformed work at home, and of course we must have enforceable laws to ensure that employers pay women and men the same amount for doing the same job. But it’s probably an unpleasant “fact of life” that everyone has to live with, and that both men and women must suffer from if they choose a lifestyle that involves spending a lot of time in the home.

So it is galling to hear the standard “feminist voices” (such as that of Irish President Michael D. Higgins) misdiagnosing the problem, and worse, pushing solutions that discriminate against men by actively promoting the interests of women. The disease wasn’t sexist, but the cure is. That is plainly stupid as well as plainly unjust. And it will do nothing to address the real problem. In fact it will worsen the problem for men, and add to the problems women already have. Unless something is done to make it easier for men to spend more time in the home, nothing will make it easier for women to spend less time in the home – and hence more time in the workplace pursuing meaningful careers, in jobs for which they get paid as much as or more than men as a result of their generally more advanced levels of education.

People who cry wolf too often lose credibility. The same applies to people who claim to be discriminated against when they clearly aren’t. Most reflective people know very well why women tend to earn less than men, and they know it has nothing to do with women being discriminated against as women. Analogously, most reflective people know why more men than women get fined for breaking the speed limit, and they know it has nothing to do with men being discriminated against as men.