Freedom trumps power

Imagine an über-homophobe. He doesn’t just hate homosexuals and avoid homosexual activity himself — the very idea of other people engaging in homosexual acts makes him sick with repulsion and fury.

He may not describe his attitudes in terms of hate. He may prefer to express them as a sort of “love”, perhaps as a virtuous reverence for heterosexuality. “My heart is with heterosexuality”, he may say.

Whether or not we accept his euphemistic spin on it, to say he has “strong feelings” is to understate the case. He has a super-strong urge to prevent homosexuals “doing whatever they do”. The reality of everyday homosexual acts routinely sends him into a towering rage, or reduces him to bouts of uncontrollable weeping. He is “offended” to a degree that’s “off the scale of offence”.

Question: Should homosexuals curb their sexual activity to spare this unfortunate man’s feelings? Should efforts be made to prevent him taking such immeasurably deep offence?

Answer: Of course not. Not by an inch. Not by the tiniest fraction of a millimetre. An adult’s freedom to engage in sexual acts with other consenting adults trumps anyone else’s urge to prevent him engaging in such acts.

However pathetically our über-homophobe may try to paint himself as the “victim” of other people’s “offensiveness”, the unalterable fact is that he wants power over others rather than freedom from others. His complaint amounts to an illegitimate claim to control their behaviour.

Freedom (and the legal rights that protect it) is more important than any ability to direct other people’s behaviour. Freedom trumps power: the choices people make for themselves always count for more than “feelings” and urges others may have to overrule those choices. “Feelings” and “offence” may be important between members of a family, but they count for nothing in the political sphere.

To pander to this unfortunate fellow’s aversion would certainly harm those whose freedoms it restricts. But it would probably harm him as well. Homosexuality isn’t going to go away, and he may as well just get used to that fact. Sooner or later he is bound to run into it, to his further chagrin. It may well be salutary — like immunisation — to deliberately offend him.

The same applies to other forms of giving and taking “offence” and “hurting people’s feelings”. In particular, it applies to Muslim “offence” taken at cartoons. Personally, I suspect it’s mostly faked: I’d guess many Muslims don’t give a rat’s ass about “insults to the Prophet”, and are simply itching for confrontation with Western people and Western values. But even if their “feelings” are entirely genuine, they still don’t count. No one’s “feelings” count when we’re talking about freedom.

The significance of desire

Most of us have an under-inflated concept of desire, and an over-inflated concept of belief. We happily accept that beliefs are fairly detailed representational states — so that taken together they prompt the metaphor of an “inner world”. But we tend to think of desires as much vaguer or thinner on detail than beliefs, and perhaps not even as representational states at all. Why is this way of thinking so common? — Here are a few suggestions:

First, we tend to specify desires with reference to objects rather than states of affairs. For example, we say “I’d like some chocolate” rather than “I have a desire to be eating chocolate”, or “I need some WD-40” instead of “I want my door hinges to be lubricated with WD-40”. Being human, we can safely assume that other humans have broadly similar goals to our own, so it’s often linguistically redundant to explicitly specify these goals as states of affairs. This can give the mistaken impression that desires do not represent states of affairs at all. In other words, it leads us to overlook the fact that desires represent the same sorts of things as make beliefs true or false.

Second, in general the states of affairs desires are aimed at are not yet realised. When we believe something, or at any rate when we believe something about the past or present, if our belief is true then the state of affairs that makes it true is a “fact”, with much attendant “detail”. When we desire something, on the other hand, the state of affairs that would satisfy it is not yet a fact. So for the time being it’s a “mere idea”, something more like Pegasus than a real horse grazing in a real field at this very moment. Any attendant “detail” is more obviously “imaginary”. We probably err on the side of assuming our beliefs are more detailed than they really are, as if they inherit some of the detail of the fact that makes them true, but with desires, we err in the opposite direction.

Third, in the Western philosophical tradition from Plato through Descartes (and in other traditions too), we tend to think of mental states as conscious experiences rather than as functional representational states that direct the behaviour of agents. This is changing, of course, with the continuing influence of American pragmatism and of the later Wittgenstein, as well as with the growth of functionalism in the philosophy of mind. But it is still very common to assume that a desire is a mere “feeling” or emotion rather than an essential part of the mechanism of action. This assumption is promoted still further by the possibility of wishing (and expressing wishes) for states of affairs that as agents we can play no part in bringing about (such as “I wish it would snow!”). It all suggests that desire is something rather touchy-feely and causally unserious. Worse, it can suggest that the real “purpose” of desire is nothing more than the having of a further sort of conscious experience — pleasure, or whatever.

We must reject this assumption that desire is a “feeling” (although of course specific desires are usually accompanied by distinctive feelings). Rather, a desire is a causally efficacious and typically fairly detailed representational mental state aimed at bringing about a real state of affairs external to the mind. Desires are complementary to beliefs, which are also representational mental states. Instead of bringing about real states of affairs external to the mind via behaviour, beliefs are typically brought about by these states of affairs, often via observation. Although there is something to the claim that desires are less detailed than beliefs, I think we should take Hume’s lead in giving desires priority: a desire (or “passion” as Hume put it) is the mainspring of any act. Whenever we act, our behaviour is aimed at achieving a goal; desire is the mental state that establishes such a goal, and beliefs (or “reason”) can do no more than help us steer a course towards achieving it. Hence “reason is the slave of the passions”.

Although we do not literally have an “inner world” of belief in our minds, together our beliefs form a sort of “map of the world” — the world as we take it to be. But that’s only half the story. Together our desires form a sort of “blueprint for the world” — the world as we would like it to become. The “map” and the “blueprint” contain the two essential components of the causation of all acts.

The traditional under-inflated way of thinking about desire tends to ignore the “blueprint” and puts far too much emphasis on the “map” — it imbues the “map” with more detail than is really there, and it gives the “map” causal powers that it simply doesn’t have. This often emerges in the assumption that specific sorts of belief are associated with specific sorts of acts.

A classic age-old example is the thought that belief in God causes people to behave in more “moral, God-fearing” ways. But of course such belief can only cause the valued sort of behaviour in conjunction with specific desires — to do what God wants, to avoid punishment, and so on.

Nowadays, much effort is expended on promoting beliefs such as “all races are exactly alike in respect of ability” and “there are no grey areas in rape”. The hope is that simply having such beliefs will discourage racist or sexist behaviour. But as we have just seen, behaviour of any sort is caused not only by our “map” of beliefs, but crucially — and more saliently, because desires are classified according to their goals — by our “blueprint” of desires as well.

The “attenuated” understanding of desire has a couple of really nasty side-effects. One is a blurring of the distinction between beliefs and desires, and the thought that desires can be “implanted” in an agent’s mind in the same way as many beliefs can: via observation. So if we watch violence on television, we will want to be violent ourselves. If we see ads on TV, we will want what they advertise. And so on. This gives rise to the sort of puritanism that discourages or even forbids the expression of “unhelpful” ideas. Traditional religious puritanism frowned on the expression of atheistic or agnostic views, and kept Hume out of a proper academic job. No doubt there are many lesser yet still talented people who are nowadays excluded from academic jobs for having beliefs that are currently regarded as “unhelpful”.

The side-effect that really makes me queasy is not the exclusion of talent from the groves of academe and the media, but the active promotion of falsity for the sake of our general moral betterment. For example, although I don’t think there are any significant differences between races as far as abilities are concerned, the claim that there are none at all is statistically vanishingly unlikely. If there are differences between individuals — and there are — there are bound to be differences between groups of individuals. Yet we are enjoined never to utter the forbidden words of that obvious truth. This is sick-making, and anyone who cares about truth should speak out against its deliberate suppression.

Leaving a trail of destruction

Some people who are terminally ill or in constant pain kill themselves to end their suffering. I think that’s a perfectly reasonable and decent thing to do.

But most suicides — especially among physically healthy people — are not like that at all. I think they’re motivated instead by the urge to “leave a trail of destruction in one’s wake”. This destruction takes the form of a slow train-wreck of blame and shame on the part of those who are left behind. Suicide prompts inevitable questions and invites a particular sort of interpretation: “What drove him to it?” — “It must have been his ____ [fill in blank here with name of supposed oppressor]. — How horribly they must have treated him! Shame on them!”

Self-harm is usually a passive-aggressive activity. It’s manipulative. In a disguised way it’s intended to cause more harm to peripherally “blameworthy” people than to the immediate “victim”.

Suicide is the ultimate in self-harm, and so the ultimate in passive aggression. It exploits our taboos as expressed in phrases like “we mustn’t speak ill of the dead”. Because it is verboten to utter bad thoughts about the dead person, yet something undoubtedly bad has taken place, there is a “finger of blame”, but it cannot be pointed at the “victim”. We are inclined to be inventive, and re-direct our condemnation towards “those who victimised the victim” (who are usually imaginary).

This is the sly thinking of hunger strikers, suicide bombers, and those who exploit children by forcing them to become “suicide bombers by proxy”.

Of course many suicidal people are depressed. And depressed people deserve sympathy rather than condemnation. True. But depressed people are ill, and illness is better treated with honesty than deception. Depressed people are often angry. Angry people are often aggressive, and sometimes do violent things. These things are no less violent for being done by depressed people. We fail to understand suicide if we treat those who kill themselves with unquestioning, saccharine reverence. And quite apart from failing to understand them, we foster an atmosphere in which further potential suicides are more likely, because their intended effect is more clearly guaranteed.

I’ll say that again: if we treat people who kill themselves with too much reverence and respect, we encourage further suicidal behaviour. This probably helps to explain why suicides often break out like “epidemics” in close-knit rural communities.

Instead of wringing our hands, beatifying the dead, and apportioning blame to the living, I suggest that we reserve our sympathies for the living and if necessary adopt a gallows humour or even mockery for the dead. Don’t worry about hurting their feelings: they can’t feel a thing.

Are we lucky to be alive?

Most things of value in life depend on luck. But what is it, exactly, to be lucky?

I think an agent is lucky when he wants something (i.e. he has a goal) and then passes through a sort of “trial” in which getting what he wants is statistically unlikely, or at least not guaranteed. If he passes the trial and gets what he wants, he’s lucky.

For example, suppose six people play a game of pure chance (to keep this example simple). In the long run, over repeated plays, each player will win about one sixth of the time. Assuming a player’s goal is to win, winning is lucky. A single win is lucky, and repeated wins are lucky: in the long run, winning more than one sixth of the time is lucky. Because the relevant sense of probability here is statistical, we have to imagine repeated events of a similar sort, and what proportion of them would achieve the goal.
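The long-run proportions described above can be made concrete with a short simulation. This is an illustrative sketch, not part of the original essay: it assumes a game in which one of six players is chosen as the winner uniformly at random each play, and checks that each player’s winning frequency settles near one sixth.

```python
import random

def win_fraction(plays, players=6, seed=0):
    """Simulate a game of pure chance: on each play, one of `players`
    wins uniformly at random. Return the fraction of plays won by
    player 0 (any fixed player would do, by symmetry)."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(plays) if rng.randrange(players) == 0)
    return wins / plays

# Over many repeated plays, the winning frequency approaches 1/6.
print(win_fraction(100_000))
```

On this picture, a player who wins noticeably more than one sixth of the time over a long run counts as lucky precisely because the reference class of repeated plays fixes what the “expected” proportion is.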

Three observations can be made here. First, luck depends on having a specific goal and a clear reference class. The reference class consists of repeated events of a similar sort, a relevant proportion of which achieve the goal. It is often implicit — in the present example, it consists of plays of the game. Suppose we keep that reference class, but change the goal. Suppose a player just wants to have fun rather than win. If he has fun in two thirds of the games he plays, he’s more often lucky than unlucky, because a higher proportion of the same class of events count as successful given the agent’s specific goal. Being lucky can become so routine that we’re less inclined to call it good luck, and focus instead on the less usual case of being unlucky. But the basic idea is the same.

Second, an agent can’t be lucky if there is no possibility of his being unlucky. If some members of a class of events are lucky, then some other members must count as “unlucky”, or at least as “less lucky”.

Third, luck applies to events that are more or less beyond our control. Lucky or unlucky events happen to agents, rather than being done by agents.

If we’re lucky, we’ll inherit good genes from long-lived parents. If we’re lucky, we’ll be engaged in projects in life which go well for us, so that we advance towards our goals. If we’re lucky, our lovers will be faithful and honest. These examples of good luck can only happen to genuine agents who have goals — real goals that are the objects of genuine desires. They only count as cases of good luck because things might have been different — there are other cases of the same sorts of events that count as bad luck. And alas, we don’t have much control over them.

For most of the course of a normal life, it would be remarkably bad luck to die while asleep. So we’re not much inclined to call it “good luck” when we simply wake up in the morning as usual. But I think it’s salutary to think that way. In all human life there is an attrition rate. Nowadays, most of us in the West live in unusually safe circumstances (low infant mortality, good health, peace, prosperity) in which we are liable to forget that “in the midst of life we are in death”. An awareness of our own mortality need not be morbid, nor even pessimistic. It can help us get our priorities right. And it serves to remind us that even routine things depend on luck, however secure they may seem.

One sort of event often assumed to be “lucky” is the emergence of my self, starting with conception in the womb. The thought goes something like this: “so many different combinations of sperm and egg might have met at the crucial moment, with different DNA, in which case someone else would exist rather than me — how very lucky I am to exist, when it might so easily have been different!”

But I think that is a mistaken thought. Furthermore, I think it contributes to bigger philosophical problems concerning personal identity, consciousness, and even bad science.

At the moment of conception, the future agent who is being conceived is not yet an agent. Even if we think of the zygote formed at conception as a “potential person”, no merely potential X is a real X, so again no agent actually exists. And where there is no agent, there is no goal of staying alive. Where there is no such goal, there is no proportion of “successful” events in which the goal is achieved. So luck as understood here isn’t involved. There were countless other possible outcomes, but the actual outcome was not “unlikely” in the sense that an amazing coincidence occurred. It’s a bit like being allocated a car registration number — it’s “one in a billion”, but it’s not anything to be surprised about unless you bet beforehand that you would be allocated that very number.

Yet a widespread sense of perplexity persists, and I think it reveals something significant. It shows how much difficulty we have identifying our selves with physical objects (i.e. functioning brains). Despite near-universal agreement that Descartes’ “immaterial substance” is a fantasy, we are fixed in our ways, and we retain a habit of supposing that my self (i.e. my mind) existed before the formation of the physical object (i.e. my brain), and was lucky not to have “missed the boat”. We think of ourselves as “atomic” — i.e. as incapable of being subdivided into smaller parts, and as having an all-or-nothing existence that can’t emerge gradually from something more inchoate. Such presuppositions are “buried”, and are brought to light by the current sense of having been lucky.

The same sense of perplexity surrounds the so-called “hard problem of consciousness”. We find it relatively easy to imagine how some other agent — even an intelligent robot — might do all of the things that conscious persons do, yet we find it hard to accept that “I happen to be one of those things, doing what those things do” (as we point to a functioning brain). This is not a problem for science — it’s a distinctly philosophical problem of personal identity. The deficit is not one of knowledge so much as of the imagination. We find it hard to imagine that we are one and the same thing as a physical brain, wondering how “it came to be itself rather than something else”. If that isn’t a downright mistaken activity, it’s at least playful, like a cat chasing its own tail, imagining a part of itself belongs to something else.

The vague idea that “atomic” human selves are “queued up waiting to be conceived” also contributes to bad science. For example, attitudes to the extinction of our own species reveal that we treat non-birth as something like being “deprived” of birth, which is comparable to death. But this is a mistake. All individuals inevitably die, and all species inevitably come to an end, but these are entirely different. The supposition that they are similar misinforms much current thinking on ecology and catastrophism about climate change.

We must consider what single-sex marriage commits us to

Every week I seem to say something on Twitter that is almost universally misunderstood. Last week I said that there was nothing of value in equality per se, which many took to mean I was a right-wing lunatic.

This week I said that if we commit ourselves to allowing single-sex marriage, consistency demands that we also commit ourselves to a wider range of other sorts of marriage, sorts that we have hitherto disallowed. For example, we might allow some incestuous marriages.

Cue moralistic outrage. “You’re equating homosexuality and incest!” — “Slippery slope arguments are fallacious!” — “You’re a dirty homophobe for opposing single-sex marriage!” And so on.

First, I’m not “equating” homosexuality and incest at all. They’re obviously completely different. Most homosexual acts are morally neutral, whereas most incestuous acts are morally wrong. But both are routinely observed in the sexual behaviour of many species. Although they are “minority” activities, they are recognisably common — enough to be described as biologically “normal”.

Second, many slippery slope “arguments” (if they count as arguments at all) are not “fallacious” (if that’s the appropriate word). We often do have reason to believe that small initial changes portend much larger changes to come. A hundred years ago, opponents of universal suffrage argued that women should not be allowed to vote, because that would open the floodgates to all sorts of social changes. And they were right. It did lead to all sorts of social changes, most of which most of us warmly welcome.

But in any case I’m not worried at all about any slippery slope, nor am I warning of any such thing. Incestuous sex will always be a minority activity, and genuine, consensual incestuous love so uncommon that very few will ever want to seal their relationship by marrying each other. There are no “floodgates” about to open here.

Third, I am not opposed to single-sex marriage. (Nor would I be a “homophobe” if I were.) Rather, I’m trying to draw attention to some other commitments we inevitably take on if we are consistently committed to single-sex marriage.

Single-sex marriage is justified by a principle. That principle goes something like this: “if two consenting adults want their relationship to be recognised and sealed by law as marriage, the rest of society should not prevent them doing so”. If we deny consenting adults the legal right to marry, we are guilty of discrimination of a morally wrong sort. And it’s quite seriously wrong, I would argue, because the desire to marry — to marry the person one considers the love of one’s life — is a central part of human life and human flourishing.

Avoiding discrimination means “turning a blind eye” to differences, at least in law. We deliberately allow our commitment to a moral principle to override any personal distaste we may feel for people who are different in the way we are now deciding to treat as irrelevant.

By allowing people of the same sex to marry, we choose to override any distaste we may feel for homosexuality. (There must be some who feel such distaste, as we are told homophobia is so common.) We choose to treat their incapacity to procreate as irrelevant. We do the same for older people, or people who are barren for other reasons. We allow people who carry genetic diseases to marry, even though we know that if they were to procreate, their children may suffer serious disability. Our commitment to the above principle — a humane and decent principle guided by respect for erotic love — leads us to treat biologically ill-starred conditions as legally irrelevant. And a good thing too.

One such “ill-starred” condition is exemplified by Siegmund and Sieglinde in Wagner’s opera Die Walküre. As brother and sister who were separated when very young, they don’t recognise each other when they meet again as adults. But their instant affinity quickly grows into full human love. This love is not diminished by the discovery that they are siblings.

That sort of situation is common in mythology, scripture, and art. Incest is probably more common in such stories than homosexuality. However much we may disapprove of it, incestuous love must surely occur in real life, especially with the recently increased fluidity of families, greater frequency of separations in childhood, larger numbers of step-parents and half-siblings, and so on.

It seems to me that denying siblings the right to marry is an anachronism, or at least it will become an anachronism as soon as we allow homosexuals to marry, as I think we should. It conflicts with the basic principle that we commit ourselves to by allowing single-sex marriage.

Of course it is appalling that some parents rape their children. Of course the legal right to marry should be strictly limited to consenting adults. Of course consent cannot be given by an adult who is mentally ill or the traumatised victim of abuse. These things go without saying.

But as we consider the question of single-sex marriage, we should consider the broader possibilities that our guiding principle opens, and the wider commitments we are obliged to take on. It doesn’t matter that very few siblings or half-siblings will ever want to marry. The fact that some of them will is enough. We are obliged to consider the possibility, and what our response should be.

What I have learned in the past week is that the quality of debate over single-sex marriage is wretched. Well-meaning but unintelligent journalists pour politically correct syrup over real issues, and chicken out of robust debate with anyone who doesn’t accept their relentlessly and predictably orthodox views. I have no distaste for homosexuality myself, but I’m growing increasingly impatient with a “gay lobby” whose idea of debate is cheap victim-stancing or aggressive accusations of homophobia.

He’s still got it

Darwin’s theory of evolution generates almost as much suspicion today as it did when it first appeared in the nineteenth century.

The theory has two main components, and there are two corresponding sorts of unease about it. The first component is natural selection, in which organisms are shaped by environmental pressures. The second is sexual selection, in which organisms are shaped by the choices of potential sexual partners.

The first component of Darwin’s theory undermines the assumption of a cosmic designer, so the first sort of unease tends to be felt by people who have traditional religious beliefs. Notice, though, that natural selection doesn’t really undermine the looser idea that living things are shaped in an appropriate way for living in their environments. In a metaphorical sense they are “designed”, although they are not literally designed by a conscious or intelligent designer with a plan. The “watchmaker” is “blind”, in Dawkins’ metaphor, but he is still a bit like a watchmaker. Examples of convergent evolution (think of similarities between marsupial moles and placental moles) illustrate how environmental niches shape the living things that inhabit them: similar niches can shape their inhabitants in strikingly similar ways.

The second component of Darwin’s theory is quite different. If natural selection is all about “fitting in with the environment”, sexual selection is all about “standing out from the crowd”. Far from working towards a smoother or more economical fit between organism and environment, sexual selection introduces capricious extravagance. If natural selection makes for traits that are “sensible and practical”, sexual selection makes for traits that are “crazy and impractical”.

With sexual selection comes ostentatious ornamentation, “runaway” emphasis on arbitrary traits, advertising, “handicapping” to subvert false advertising, prodigious waste, ritual, and romance, among other things. Ironically, as intelligence — or at least choice — is an essential part of sexual selection, it tends to introduce features that are “stupid” inasmuch as they are unsuited to the environment, and “irrational” inasmuch as they are harmful to the individuals who have them. (So much for the nearest thing nature has to “intelligent design”!) Some specific traits (such as the Irish elk’s gigantic antlers) no doubt contribute to the extinction of the entire species.

Darwin used the word ‘man’ (meaning mankind) in the title of his main work on sexual selection, because he recognised its importance for understanding the evolution of our own species. The idiosyncrasies of human behaviour, culture and art are more complicated than those of bower birds, but they are similar in that their main engine is usually sexual selection. We too should recognise its importance, and the relevance of evolutionary theory for our self-understanding as humans.

In the nineteenth century, delicate sensibilities and Victorian piety were offended by Darwinism. In the present day, delicate sensibilities and twenty-first century piety are still offended. Our pieties are moral rather than religious, and take the form of strong distastes for beliefs that can be construed as misogynistic, sexist, racist, or homophobic. Beliefs such as that men and women have innately different intellectual strengths, or that rape can be explained from an evolutionary perspective, are frowned upon in our day as much as atheism was in Darwin’s day. The hierarchical institutions which discourage such thoughts are no longer those of the church, but of academia.

Darwin still has the power to offend. A widespread reaction is to suppose that Darwin’s theory doesn’t apply to humans at all. We say that “humans are no longer evolving”, or that “human culture overrides human nature”, or that “our minds are wholly the products of environment”, or even that “there is no such thing as human nature”. Recently, a neuroscientist claimed that “male and female brains only differ because of the relentless ‘drip, drip, drip’ of gender stereotyping”.

But that is all wrong. Rather than yielding to pressure to avoid offence, or promoting a dishonest political agenda, we should stop frowning upon “impious thoughts” and instead try to avoid immoral actions. Misogyny, sexism, homophobia and racism are best understood not as “having the wrong beliefs” but as willingness to behave in ways that disregard interests because of group-membership. They’re morally wrong, often extremely so, but not because of anything like impiety.

Bronowski on “absolute knowledge”

In this moving clip taken from the very end of his acclaimed TV series The Ascent of Man, Jacob Bronowski speaks of two great human evils.

The first is the idea that “the end justified the means” — or as I would put it: if a particular end is treated as supremely valuable, its pursuit can ride roughshod over the many other competing values that characterise human life.

The second is the idea that we can have “absolute knowledge”. What does Bronowski mean by “absolute knowledge”? To understand this, consider how he defends science against the charge that it dehumanises people. He is standing in front of a dark pond in the grounds of Auschwitz, where the ashes of millions of people were flushed. These people were not victims of science. They were not killed by gas, he says, but by “arrogance”, “dogma” and “ignorance”:

When people believe that they have absolute knowledge, with no test in reality, this [gesturing towards the pool of death] is how they behave. This is what men do when they aspire to the knowledge of gods. Science is a very human form of knowledge. We are always at the brink of the known — we always feel forward for what is to be hoped. Every judgement in science stands on the edge of error and is personal. Science is a tribute to what we can know although we are fallible.

Now Bronowski doesn’t embrace any sort of “postmodernist” nonsense along the lines of “truth is relative”. He uses the words ‘true’ and ‘false’ freely, and clearly thinks they mean the same for everyone. Rather, in denying that we have “absolute knowledge”, his focus is on the traditional “justification” condition on knowledge. (It was traditionally thought that when we know something, we believe it, it is true, and we have a rational assurance or “justification” in believing it.) Bronowski is saying that justification or assurance is never absolute. It isn’t simply that it isn’t total or 100% — we can’t even measure it in an objective way. We can never have an impersonal or numerical assurance of what we believe or ought to believe. Assurance always depends on what each individual already believes, and that always differs from one individual to the next.

Bronowski is a “fallibilist” with respect to knowledge. That is, we are often mistaken, but we can have knowledge despite the ever-present possibility of error. Knowledge is a matter of our beliefs actually being true of the world. It’s an aspiration, a project guided by hope — and it’s often a matter of sheer luck. When we have knowledge, it’s not because our assurance is “absolute”, but because as a matter of fact our hope has paid off, and we have stumbled upon theories that happen to be true. In science, we have to “feel forward” in a tentative, exploratory way by guessing and then testing our theories against reality. The result of such tests is not a numerical measure of “how likely our theories are to be true”, but various hints and suggestions that we are “on to something” — which are bound to strike different individuals in different subjective ways. That’s part of what Bronowski means when he says science is a very “human form of knowledge”.

Nowadays, hardly anyone thinks we can have absolute certainty. Even the Nazis didn’t think that. But there is another “level of assurance”, which Descartes called “moral certainty”. This is not “total assurance”, but “assurance enough” to act so as to achieve some end. If we think assurance is absolute, objective, measurable, or suchlike, we will conclude that everyone is rationally obliged to act in the same way to achieve the same end. I think that is the Nazi poison that Bronowski has in mind.

I think we should take Bronowski’s warnings seriously, and beware of movements that put one overriding end above all the other human values. And beware of claims that assurance can be objective or numerically measured.

Why would anyone think such a thing? I think such thoughts have two ingredients. The first is ambiguity in words such as ‘likely’ and ‘probable’. In science and statistics these words refer exclusively to relative frequency — that is, to the numerical proportion of members of a class that have some property. Sometimes, when we know practically nothing about a repeated phenomenon, we have to make judgements guided by nothing better than relative frequency. For example, consider gambling with cards, or wondering about collisions between the Earth and objects such as comets and asteroids. If the only thing we know about such phenomena is the relative frequency of various hands of poker or of near misses in the long run, that is all we have to guide our behaviour. That’s how casinos make a profit, and how governments should make contingency plans for asteroid collisions — and allocate resources for floods. It’s better than nothing, and it’s “objective”, but it’s not a measure of how much assurance we can have in believing anything.
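The notion of relative frequency here is simple enough to put in code. The following is a minimal sketch of my own (the poker example, the sample size, and the function names are my illustration, not anything from statistics textbooks): relative frequency is just the proportion of members of a class with some property — here, the proportion of randomly dealt five-card hands containing at least a pair.

```python
import random
from collections import Counter

def relative_frequency(population, has_property):
    """Relative frequency: the numerical proportion of members
    of a class that have some property."""
    members = list(population)
    return sum(1 for m in members if has_property(m)) / len(members)

# A 52-card deck reduced to ranks (suits are irrelevant to pairs):
# four copies of each of the 13 ranks.
RANKS = list(range(13)) * 4

def has_pair(hand):
    """True if at least two cards in the hand share a rank."""
    return max(Counter(hand).values()) >= 2

random.seed(0)
# Deal many random 5-card hands (sampling without replacement).
hands = [random.sample(RANKS, 5) for _ in range(100_000)]

freq = relative_frequency(hands, has_pair)
print(round(freq, 2))  # close to the exact long-run value of about 0.49
```

The number printed is a fact about the long-run class of hands, and it is “objective” in the sense the essay describes. But nothing in the calculation measures how much assurance anyone is entitled to have about the next individual hand — that slide from frequency to assurance is exactly the ambiguity at issue.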

Yet words such as ‘likely’ and ‘probable’ are often used in everyday parlance to refer to a supposedly objective assurance — assurance in believing that an individual event will occur, or that a given theory is true. Talk of numerical relative frequency often slides imperceptibly into talk of assurance.

The second ingredient is a worship of “science” in general — not this or that theory or branch of science, but the entire enterprise as if it were one monolithic body of assured knowledge. With this worship comes uncritical respect for “scientists” — not as practitioners of this or that branch of science, but as miracle workers whose opinions it is downright immoral to disagree with. Nowadays, it’s common to hear people proudly announcing that they “believe the science” — and implicitly shaming those who “refuse” to “believe the science”. That is a terrible state of affairs — and it represents a backward slide of civilisation. A descent rather than ascent.

Science consists of theories about the world. Many of these theories are about very abstract entities that can’t be observed directly. But none of them are about how much assurance we have that any scientific theory is true. Science doesn’t pronounce upon its own belief-worthiness. To claim it does is to treat science as miraculous, and scientists as shamanistic miracle-workers, the purveyors of “absolute knowledge”. Anyone who makes that claim is either a fool or a fraud.

What is “denial”?

When we say someone is “in denial”, we mean that they reject something obvious — something so obvious that their rejection of it amounts to a sort of pathology. For example, in the movie Psycho, Norman Bates interacts with the skeletal remains of his obviously dead mother as if she were still alive. This is not a sign of good mental health.

Although “deniers” deny facts, they usually do so for “emotional” reasons. They want something to be true so much that they pretend that some other things are false. Dolly Parton uses this idea effectively in The Grass is Blue:

There’s snow in the tropics
There’s ice on the sun
It’s hot in the Arctic
And crying is fun
And I’m happy now
And I’m so glad we’re through
And the sky is all green
And the grass is all blue

It’s vital to see that denial is not the mere rejection of facts — it’s the rejection of obvious facts, things that almost everyone can see easily with their own eyes.

We might say that denial is rejection of “observational” facts rather than “theoretical” facts. Fine, but all observation is “theory laden” — in other words, observations have to be interpreted, there’s no such thing as “raw data”, there’s no sharp distinction between observation and theory, and so on.

There is a gradient here, between facts that can be directly checked by simply opening our eyes and looking, and facts that are more abstract — facts that leave more room for doubt, that can be interpreted in several different ways, that depend on theoretical commitments that are not universally shared.

Some facts lie near enough to the observational end of the gradient to be counted as “almost observational” themselves. For example, we can’t quite see directly that the Earth is round. But nowadays we’re familiar with photographs taken from space of a round Earth, and most of us have watched ships slowly disappearing over the horizon, and so on. When we fly long distances, we adjust our watches in the perfectly reliable expectation that we will land in a different sector on the surface of a rotating sphere. Nowadays, a person who insists the Earth is not round is denying something very close to “obvious”.

Words like ‘denial’ can serve a useful purpose. But they are abused when applied to the rejection of claims that are not obvious. In that situation, their use amounts to an appeal to authority rather than an appeal to observation. The theoreticians whose opinions are rejected are supposedly so authoritative that it takes a sort of mental pathology to disagree with them.

I can’t think of a less sceptical or less scientific attitude than one that demands obedience to the authorities by “taking their word for it”. Heretics were tortured and killed by people who justified their sadism by saying their victims were suffering from a sort of pathology — one whose “cure” need not involve the giving of reasons.

Sometimes I have to restrain myself from using words like ‘denial’ for Creationists who reject the theory of evolution. But then I remind myself that the theory of evolution isn’t obvious — if it were, it wouldn’t have taken someone of Darwin’s stature to provide a satisfactory account of it. People who reject evolutionary theory are sceptical about something I believe in, but they can’t reasonably be called “deniers”. This also applies to other types of scepticism.

Two paradigms of evidence

Many people take valid deductive arguments to be the guiding ideal or “paradigm” of evidence. There are two obvious reasons for this. The first is that in mathematics, the proof of a theorem is essentially a deductive argument, and mathematical proof is perhaps the closest thing we can have to certainty. The second is that when people try to persuade one another of something, they appeal to shared beliefs, which each hopes will imply something the other has no choice but to accept. This gives the shared beliefs the function of premises — and persuasion becomes the derivation of a conclusion from those premises.

Buoyed by the thought that proof and persuasion are achieved by arguments, we cast about in search of their equivalent in “empirical enquiry” — and inevitably arrive at induction. (By ‘induction’ I always mean enumerative induction: for example, the sighting of several white swans leads to the general claim that all swans are white.) An inductive “argument” with true “premises” doesn’t guarantee the truth of its “conclusion” as a valid deductive argument does, but it does lead to it with mechanical inevitability. It leaves no room for choice as to what its conclusion will be. No “guesswork” is involved — the “data” determine the resulting “theory”. The latter is “based on” the former in much the same way as the conclusion of a deductive argument is “based on” its premises.

The ubiquity of the thought that “evidence consists of arguments” is underlined by the widespread use of words like ‘basis’, ‘grounds’, ‘foundations’, ‘support’, etc. — as if these words were synonymous with ‘evidence’.

There’s a remarkable fact about arguments, which can be loosely expressed as follows: “the conclusion doesn’t tell us anything genuinely new — it just rearranges information already contained in the premises”. That’s a loose way of putting it, because obviously theorems in mathematics can be surprising. But they’re mostly surprising because we don’t expect them to be able to say what they do say, given that they were derived from such meagre “input” as is contained in the axioms.

Theorems never “reach out” beyond what can be derived from the axioms. And the conclusion of an inductive argument only reaches out beyond what is contained in its premises inasmuch as it merely generalises from them. It can’t come up with new concepts. If we were limited to deduction and induction, we might be able to do logic and mathematics, and to generalise about what we can observe directly. But we wouldn’t be able to talk about the sort of things science talks about. In that sense, both deduction and induction are “closed” with respect to their “raw material”. Everything mentioned in their conclusions is internal to the system of axioms or beliefs expressed by their premises.

If we assume that evidence consists of arguments, then evidence amounts to “being implied by what you know already”. It’s analogous to what can be got from a library that contains nothing but books you have already read. It’s an internal guarantee or assurance, the sort of thing that invites adjectives like “strong”, or possibly “overwhelming”.

But that sort of evidence doesn’t play a big role in science. Science isn’t trying to give us an internal sense of assurance, but to give us an understanding of external reality. In other words, it’s not aimed at justification but at truth. Unlike the best that can be achieved by deduction and induction, science “reaches out” beyond any system of axioms or beliefs working as premises. To achieve that, science simply cannot avoid guesswork. In embracing guesswork, scientific theory is not fully constrained by observation. In other words, theory is underdetermined by “data”. Typically, several possible theories are consistent with any given set of “data”.

A scientific theory is a representation of its subject matter. It can represent it by literally being true of it, or by modelling it. Hypotheses are true or false — they consist of symbols, some of which stand for real things. Models mimic the behaviour of some aspect of reality in some relevant respect. Either way, evidence in science consists of mere indications, often sporadic and peripheral, that the representations in question do in fact represent their subject matter faithfully or accurately. A theory is related to its subject matter in somewhat the same way as a map is related to its terrain. The image of map and terrain is appropriate — and the old image of conclusion and premises of an argument is inappropriate. The main purpose of observation in science is not to gather “data” to work as premises in an argument, but to check here and there to see whether the “map” and the “terrain” do in fact seem to fit.

Understood in this way, evidence is no longer a matter of proof or persuasion — of leaving no alternative to accepting a “conclusion” — but of seeking new indications that a representation is accurate. The most obvious such indications are passing tests and providing explanations. A theory passes a test when it predicts something that can be observed, and new observation confirms the prediction. A theory explains successfully when it newly encompasses something formerly baffling. Both involve seeking new facts rather than mechanically deriving something from old facts.

Science is more a process of discovery than of justification, and scientific evidence is more like what an explorer can bring to light through travel than what a scholar can demonstrate in his study.

Why does science insist on replicability of test results?

Replication of exactly the same test is epistemically worthless. Only by varying the conditions in which a test is done do we set up more “hurdles” for a hypothesis to “fall” at or “make it over”. In effect, varying conditions is a way of doing more tests. But any individual test gives us an independent reason to think a hypothesis is true only inasmuch as it differs from all of the other tests the hypothesis passes — in which case it isn’t an exact repeat performance or perfect replication of any other test.

Of course we insist that test results should be reliable and objective. So we insist that they be inter-subjectively checkable, that they can in principle be done by different people, in different places, at different times.

The point of replicability is to prevent fraud or reliance on mere testimony. It’s not to provide many instances for an inductive generalisation to be based on. Even if science relied on induction like that — and I would argue that no genuine science does — perfectly exact replication would be of no use. For example, consider the inductive generalisation “all swans are white”. That would have to be based on sightings of several different white swans rather than repeated sightings of a single white swan. So even here, each individual sighting would have to differ from all of the others, at least insofar as it is the sighting of a different swan.