Order from disorder?

Here’s an example of a lawlike claim: ‘all emeralds are green’. This claim is much like a scientific law, because the predicate ‘is an emerald’ and the predicate ‘is green’ are practically “made for each other”. They’re ideally suited to their linguistic “marriage”, because what makes a beryl count as an emerald is the very thing that makes it green. So you can’t have an emerald that isn’t green.

Although most scientific laws are written using mathematical symbols – such as Newton’s ‘F = ma’ – those symbols capture intimate connections between the real things they stand for, much as words do in the emerald example above. Those connections are generally simple, and consist of such facts as the containment of one set by another (as above), or direct cause-effect links (as in ‘what goes up must come down’), or suchlike. Speculating, we might well wonder whether our very sense of simplicity itself is shaped by our innate ability to sniff out lawlike connections. In any case, these intimate connections give laws a distinct “flavour of necessity” – laws can seem almost empty like tautologies, or almost trivial like definitions.

An important feature of laws is that they support “counterfactual conditionals”: although I’m not actually holding anything in my hand, if I were holding an emerald in my hand, then it would be green. This is why laws are useful in prediction: you can predict that something will be green, just from knowing it’s an emerald.

Now here’s an example of a claim that is not lawlike (in fact it’s not even true): ‘all swans are white’. There is practically no correlation between an animal’s colour and the genus it belongs to, or even the species it belongs to. Many groups have subgroups whose most noticeable distinguishing feature is their colour – so the predicates ‘is a swan’ and ‘is white’ are not at all suited to “marriage” in a law.

Although ‘all swans are white’ may be superficially (grammatically, etc.) similar to ‘all emeralds are green’, it cannot be used to make reliable predictions. If I were to keep a swan in my own private lake, you wouldn’t be able to reliably guess whether it would be black or white.

Sometimes, people talk about “black swans” as if they were occasional anomalies whose possibility everyone should be forewarned and forearmed about. But really, that is not nearly deep enough or sceptical enough. The real problem is not that exceptions occasionally turn up, but that not enough thought is given to whether laws are involved at all when we try to predict things.

Such laws might be statistical – as long as they’re genuine laws which describe real linkages, and which therefore support counterfactual conditionals. Prediction cannot be based on a mere “statistical snapshot” of the way things accidentally happen to be. For example, in the long run, repeated throws of a pair of dice will result in doubles about one sixth of the time. Even if we don’t actually throw the dice repeatedly, we know that if we were to do so, the observed proportion of doubles would approach one sixth ever more closely. Or again, in a large enough sample of mammals, the sexes will be represented roughly equally. Even if we don’t actually take a head count, we know that if we were to take a big enough head count, we would find roughly equal numbers of males and females. These proportions are not accidental: they’re the products of careful manufacture (shaping, balancing, etc.) of dice, and of evolutionary biology, respectively. Either of these statistical proportions could take part in a statistical law.
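To make the dice claim concrete, here is a minimal sketch of my own (not part of the original argument; the random seed and sample sizes are arbitrary choices): simulating throws of a fair pair of dice shows the observed share of doubles homing in on one sixth as the number of throws grows.

```python
# Illustrative sketch: with fair dice, the observed proportion of doubles
# approaches 1/6 as the number of throws increases. The seed and sample
# sizes are arbitrary, chosen only to make the run repeatable.
import random

random.seed(1)
for n in (100, 10_000, 1_000_000):
    doubles = sum(random.randint(1, 6) == random.randint(1, 6) for _ in range(n))
    print(f"{n:>9} throws: proportion of doubles = {doubles / n:.4f} "
          f"(the lawlike proportion is {1/6:.4f})")
```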

But with many statistical phenomena, the numerical proportions we measure are no better than merely accidental. If we extrapolate from the latter for purposes of prediction, our predictions will be unreliable. For example, suppose about one sixth of Australians drive Ford cars. There is nothing to suggest that that proportion is anything but an uninteresting coincidence. In a decade’s time, they may drive entirely different brands of cars, in entirely different proportions. Or again, the human population has been rising because food is getting cheaper, but the wealthier people become, the fewer children they tend to have. So although there has been an overall upward trend, there is no reason to think any sort of law is involved in the rise of the human population. The current rate of population rise is no basis for any reliable predictions about how big the human population will be at any time in the future.

Now for my main complaint: many people don’t bother to ask whether any sort of law is involved in apparent trends such as population rise. They just extrapolate from the current “data”, and expect nature to “continue uniformly the same” (as Hume put it) in the relevant respects, as if a law could describe the process. Often, we have very good reasons to think the process isn’t remotely lawlike – in other words, we have good reasons to think that no law could describe it. Laws are bits of human language, and human language can describe some things but not others.

The reliability of any prediction depends on an essential linkage between what we know already and what we’re predicting. There might be a simple “constant conjunction” between them (to use Hume’s terminology again). Or there might be some other non-causal connection that underwrites a lawlike connection, such as exists in quantum entanglement. But these lawlike connections are not optional – they’re a requirement of prediction. The ever-present question in our minds should therefore be: Is there or isn’t there a lawlike connection between what we’ve observed already, and what we’re trying to predict?

I think that question isn’t asked often enough. And when questions aren’t asked, answers tend to be merely assumed. The assumed answer to the present question is in effect that there is always a lawlike connection of the required sort, because the physical world is assumed to be mechanical and regular simply by virtue of being physical. The naïve Newtonian intuition is that it’s “like clockwork”. Without even asking the question above, we tend to assume that all we have to do is follow the standard pattern of extrapolation from already-observed cases, and the physical world will oblige. Its unfolding patterns may not be obvious at first, the idea goes, but they must be there, waiting to be revealed beneath the apparent confusion.

I think that assumption is profoundly mistaken – so badly mistaken that it’s worth a brief look at the philosophical ideas behind it.

We belong to a tradition that takes the mind to be “spiritual” rather than “material” – it doesn’t interact with material things in the usual way in which matter interacts with other matter. So we think of the mind instead as a centre of consciousness or an engine of experience, in a sense “cut off” from the physical world outside the mind, because it “deals in experience” rather than with material objects. According to this view, whatever the mind knows about matter is made possible because its experiential inputs from the outside world provide “justification” for its beliefs, and if the beliefs are actually true, they count as items of knowledge. This standard analysis of knowledge takes “justification” to be “internal” to the mind. The vague idea is that I cannot accept anything except “what is available to me” within the confines of the “theatre of my own experience”, because otherwise I would have to “step outside of my own skin”. In the supposedly isolated state “inside my own skin”, with only internal cues available to me as “justification”, the best any mind can do is follow the standard pattern of extrapolation from observed cases – in other words, treat white swans in the same way as green emeralds.

Of course most people who belong to this tradition dropped the idea that the mind is “spiritual” long ago. The trouble is, most of us retain its associated epistemological baggage – such as that knowledge consists of true beliefs suitably “justified” by simple “basic beliefs” about experience, as just described. This idea is still so all-pervading, it even finds its way into popular ideas about science: our theories or computer models are analogous to beliefs, so it is widely supposed that they require an analogous “justification” of being supported by “data” – the public counterpart of “basic beliefs” about experience.

Like many philosophical errors, this one is so deep-seated that any alternative can seem unthinkable to those in its grip. How could it possibly be otherwise than that theory is supported by “data”? – Happily, the answer is given in mainstream philosophy of science: observations test theory rather than imply theory. Hypotheses yield predictions which observations either confirm or do not confirm. If a prediction is confirmed, the hypothesis is corroborated by the observation – a very different matter from its being implied by the observation.

But scientists pay little attention to philosophers nowadays. Many imagine that they don’t have to study any philosophy. The tragic result is that they do their own, newly cobbled-together, half-baked sort of philosophy. In a few branches of science (pseudo-science, if we’re honest) internalism of the sort described above has become a will-o’-the-wisp that guides methodology.

For example, consider the application of computer modelling to irregular natural phenomena that look confusingly “ravelled” to the human eye. The hope is that the magic powers of computer modelling can summon forth order from chaos and “unravel” them.

I think that hope is forlorn. Take something as simple as a compound pendulum. A compound pendulum’s individual parts – of which there are only two – behave in a lawlike way, but the whole does not. There is no ideal “marriage” of predicates (of the sort I began with) that links its earlier and later positions. Like so many things, the whole does not have a crucial feature that its parts do have. The mistake of thinking it does is called the fallacy of composition, and it is a common error. (For example, many suppose that if genes are “selfish”, the entire organism must be too.)

A compound pendulum is chaotic in the sense that its position depends in a critical way on initial conditions. Predicting its future position or behaviour from its past position or behaviour is a practical impossibility.

Now of course, it’s easy to simulate a compound pendulum in a computer, because it’s such a simple system. But it’s impossible to get such a simulation to model an actual compound pendulum, because both are chaotic. Their respective behaviours are bound to diverge. Far from “unravelling” the chaos, the simulation multiplies it by simply adding chaos of its own, if anything increasing the inevitability of a mismatch between it and any actual compound pendulum. The simulation may exemplify or illustrate by mimicry the chaotic behaviour of compound pendulums in general, but it’s incapable of modelling any individual pendulum.
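To make the divergence vivid, here is a rough sketch of my own (it assumes the standard frictionless double-pendulum equations of motion, with illustrative values for masses, lengths, time step and starting angles): two simulated compound pendulums whose starting angles differ by a millionth of a radian soon behave completely differently. Any mismatch between a simulation’s starting state and an actual pendulum’s starting state would have the same effect.

```python
# Illustrative sketch of sensitive dependence on initial conditions in a
# compound (double) pendulum. Two simulations start with upper angles
# differing by a millionth of a radian; their trajectories soon diverge.
# Masses, lengths, step size and starting angles are illustrative assumptions.
import math

G = 9.81          # gravitational acceleration (m/s^2)
M1 = M2 = 1.0     # bob masses (kg)
L1 = L2 = 1.0     # rod lengths (m)

def derivatives(state):
    """Standard equations of motion for a frictionless double pendulum."""
    th1, w1, th2, w2 = state
    delta = th1 - th2
    den = 2 * M1 + M2 - M2 * math.cos(2 * delta)
    a1 = (-G * (2 * M1 + M2) * math.sin(th1)
          - M2 * G * math.sin(th1 - 2 * th2)
          - 2 * math.sin(delta) * M2
            * (w2 * w2 * L2 + w1 * w1 * L1 * math.cos(delta))) / (L1 * den)
    a2 = (2 * math.sin(delta)
          * (w1 * w1 * L1 * (M1 + M2)
             + G * (M1 + M2) * math.cos(th1)
             + w2 * w2 * L2 * M2 * math.cos(delta))) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = derivatives(state)
    k2 = derivatives(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivatives(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivatives(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Two pendulums whose starting upper angles differ by a millionth of a radian.
a = (math.radians(120), 0.0, math.radians(-10), 0.0)
b = (math.radians(120) + 1e-6, 0.0, math.radians(-10), 0.0)

dt = 0.001
for step in range(1, 30001):          # simulate 30 seconds
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 5000 == 0:
        print(f"t = {step * dt:5.1f} s   upper-angle difference = {abs(a[0] - b[0]):.6f} rad")
```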

In my opinion, the attempt to model the Earth’s climate using computer simulations is many orders of magnitude more misguided than the attempt to model a warehouse full of compound pendulums. That attempt is inspired by the “traditional” hope that the climate is made of physical stuff, and so “there must be predictable order hidden beneath the apparent disorder”. Well, there may be order in the form of lawlike behaviour on the part of individual molecules, but we have no reason to expect lawlike behaviour on the part of the inconceivably many component “parts” (including causal influences) that together constitute the climate.

I’m not a crank: I think we have good reasons to accept the greenhouse effect. In other words, we have good reasons to think that there is a lawlike connection between the concentration of greenhouse gases in the atmosphere and global temperatures. But a quick inspection of the best graphs we have reveals that at every temporal scale, from one year to several millennia, global temperatures go up and go down in a non-monotonic way. Any such graph is confusingly “ravelled” to the human eye in pretty much the same way as a compound pendulum “flies around the place like a madman”. So any lawlike connection, even in this simplest of causal cases, must be extremely tenuous, or buried beneath mountains of extraneous noise. There is no obvious pattern to see here, nor any reason to think there is a “deeper” pattern that computer models could salvage from the disorder.

“I was misinformed”

Suppose someone mistakenly thinks he can fly by jumping off a bridge. He wants to fly, so he has a preference to jump off the bridge. Should we respect that preference even though it is not “informed”?

I think we should understand preferences as desires – specifically as desires that are compared in strength to other desires, so that a preference is the stronger desire. We can tell which desires are stronger, because people choose to satisfy them in preference to weaker desires.

So let’s return to the bridge. Suppose our potential birdman wants to fly south for the winter. In that case, his preference to jump off the bridge would disappear as soon as we managed to persuade him that that would not be the outcome of his jumping. We might also notice that he seems to have a preference not to die (yet). In that situation, we would surely be obliged to go to considerable lengths to prevent him jumping off the bridge. Our obligation stems from respecting his preference for flying south for the winter rather than dying immediately, and his preference for staying alive rather than dying immediately. Our remonstrating with him would be justified by our respect for these other preferences. If we decided to forcefully overrule his preference for jumping off the bridge, it would be out of respect for even stronger desires that he already has – stronger than his desire to jump off the bridge. In other words, it would be out of respect for his preferences.

Preferences aren’t true or false. They’re not themselves informed or misinformed, but instead belong to an agent whose beliefs are true or false, so that the agent himself is informed or misinformed. Like Humphrey Bogart’s character Rick in Casablanca, he can be “misinformed” about the availability of waters in the desert, by having false beliefs about where waters are found, but his preference for such waters cannot be.

If an agent’s beliefs are false, he is less likely to satisfy his preferences. So if we respect those preferences, we are obliged to inform him, if we can. But sometimes we too are misinformed, and cannot inform him any better than he can inform himself. We should take all of these possibilities into account when remonstrating with someone, and we should avoid overriding his judgement just because we think our own judgement is better than his, which often amounts to an assumption of our own infallibility.

We might think of the birdman’s preference as “misguided” rather than false. But then we must ask what is misguided about it. I would argue that inasmuch as it’s an “intermediary preference” to be satisfied as a condition of satisfying something more important to the agent, it’s simply weak. We can tell it’s weak, because the agent’s own choices reveal that he would happily choose “alternative routes to his preferred destination” if they were available.

That should be familiar to us all: what we really want is to go to the movies, say, so we form the intermediate goal of taking the 7.20 bus. If we miss the 7.20 bus, we just take the 7.30 bus, or a taxi – an annoyance rather than a thwarting of our main preference to see the movie. The preference to take the 7.20 bus is weak because it’s ephemeral.

We might think of a preference as “misguided” because it’s aimed at an impossibility. But that doesn’t strike me as a reason to think it’s misguided. We all have preferences for things that are statistically impossible, such as never losing another hair on our heads, or never being sick again. The preference to live indefinitely might be understood as the preference never to die. Inasmuch as these preferences are strong, they deserve respect regardless of the impossibility of their being satisfied.

Is he insane, or is he a terrorist?

Following the recent “Batman” massacre in Aurora, a lot of people have made cynical comments such as: “the suspect is white, so it must be insanity rather than terrorism”. By which they mean: our racist double standards prompt us to call him insane rather than a terrorist.

I think that’s rather revealing. Unfortunately, it reveals that terrorism is working just as it’s intended to.

By terrorism, I mean the deliberate targeting of civilians with the intention of frightening them into adopting a political agenda. Such an agenda might have as its goal a united Ireland, or the destruction of Israel, or the removal of military bases in Saudi Arabia, or whatever. The new converts needn’t become out-and-out activists for the cause, but if they are newly inclined to vote for some political measure when before they were against it, say, the terrorists’ work is a success. A newspaper doesn’t have to openly endorse Islamic extremism to yield to terrorists, but if it refuses to publish an offensive cartoon it otherwise would have published, say, it still partially yields.

Please note that although we tend to think of the people terrorists kill as their primary victims, it’s the much larger class of other people who adopt new political views as a result of these violent deaths who are the main targets of terror. The use of terror to instil views like their own distinguishes terrorists from other combatants as much as their deliberate targeting of civilians.

How does terrorism work? How does fear change minds? – I think the quick answer is: it works in much the same way as the “Stockholm syndrome”.

In a little more detail, terrorism works by forcing people to adopt the outward behavioural trappings of commitment to – or at least sympathy for – a political cause. If these behavioural trappings – saying the “right” things, not saying the “wrong” things, and so on – become a matter of fixed habit or “reflex”, in effect they solidify into genuine commitment. Even the “inner feelings” that normally accompany sincere commitment inevitably emerge, rather as smiling has the effect of making people feel cheerful.

How can that be? – This is where things get philosophically interesting. Here’s my answer: there is much truth in the “analytical behaviourist” idea that beliefs and desires are dispositional states. To use an analogy of Quine and Ullian, a person has a belief or desire in much the same way as a battery is charged: the battery counts as charged when it’s disposed to send a current through a circuit, to give off sparks if the circuit is shorted, and so on. Analogously, to have a belief or desire is to be disposed to behave in appropriate ways. We develop a repertoire of habits appropriate to having this or that belief or desire. Interestingly, it doesn’t matter much what causes these dispositions. If it can somehow be arranged – by fear, threats, social ostracism, whatever – for the behaviour to occur in the appropriate circumstances, then the disposition is in place. And if the disposition is in place, the mental state is too.

And it gets worse, because a sort of holism leads to a sort of rationalization. Although we use single, isolated sentences to describe beliefs, we can never have a single belief in isolation from other beliefs. Instead, we have more or less detailed mental representations or “maps” of the world, whose fineness of detail depends on how much we know about the subject matter. The smallest units of description of such a “map” are those we can express in a single true or false sentence. These correspond to simple beliefs. For example, I believe that today is Sunday. But I am only capable of believing that because I have a fairly detailed mental representation of the way humans measure time in seven-day cycles, and the way each day corresponds to a rotation of the Earth, and the way two of those days are treated in a special way, and quite a lot of other facts relevant to the same subject matter.

So although I can pick out a single belief using a single sentence ‘Today is Sunday’, such a belief always has to be part of a larger “area of understanding”. To have a belief you have to have the concepts that the belief harnesses, and for that you need to have a fairly rich range of beliefs, plural, which carve these concepts out.

Something analogous applies with language. Any meaningful sentence has to contain words which themselves have a determinate sense. And to get a determinate sense, they must occur in other meaningful sentences, plural. In general, words get their sense by being used in semantically important sentences. So no sentence of a human language can have meaning in complete isolation from other sentences.

Now consider the acquisition of beliefs. When I am forced to accept a given belief, I cannot simply accept it on its own. I have to accept some “packaging” as well – some other beliefs that help to justify the belief in question, by implying it, or being implied by it. In other words, I rationalize it.

In short, because of the holistic nature of beliefs, being forced to adopt one of them entails adopting some other beliefs as well, which help to justify the “forced” belief. By being “embedded in reasons” like that, it is held sincerely. It is part of a more or less detailed “map” of its subject matter.

I think the Stockholm syndrome is the closest thing we have to “brainwashing”. It can probably be resisted as long as the subject isn’t too suggestible, as long as he constantly reminds himself that he is only “going through the motions”, and as long as he resists the urge to make things psychologically easy for himself. In my opinion, it is correctly termed a “syndrome”, because it is a mildly pathological state of mind. Although the new beliefs are “reasonable” in that they have their own “justification”, they were adopted through an unreliable process. In general, we don’t acquire true beliefs doing what thugs force us to do.

So the victims of terrorism who come to adopt more terrorist-friendly beliefs are, in a weak sense, unwell. They aren’t thinking as they would if they were using their judgement in a truth-conducive way. I think the cynical claim that we only treat “brown” people as terrorists is a symptom of that (mild) illness. The West has had to deal with terrorism from various quarters for decades, and most of the terrorists have if anything been “whiter” than their victims. The Ku Klux Klan, ETA and several organizations on both sides of the conflict in Northern Ireland were all terrorists, and few were tempted to call them anything else simply because they were “white”. Only recently have Islamic extremists come to epitomize terrorism in the public imagination. Many of them – such as the “shoe bomber” – have been “white”. The cynical claim above is akin to the claim that “one man’s terrorist is another man’s freedom fighter” – as if the word ‘terrorist’ were a sign of our own cultural insensitivity, our failure to make amends for “our” colonial past, and a more general boorishness and racism. We have “brought it upon ourselves”, the idea goes.

I hope it’s clear how those attitudes are the product of terrorism.

If some of us suffer from a mild form of mental illness as a result of terrorism, terrorists themselves usually suffer from a severe form of mental illness. Only the emotionally backward can adopt a political cause with enough gusto to neglect spouses, children and career – let alone to wreak similar havoc on other people who are evidently not involved in their conflict.

So our original question should not be “Is he insane, or is he a terrorist?” but rather “Is he insane, or is he both insane and a terrorist?” To answer that question, we would have to know more about his intentions and his political agenda (if any).

How irrational are we?

Some people think that the human condition is essentially one of “irrationality” – that we are all cognitively flawed in a deep and irredeemable sort of way.

I don’t think we’re quite as bad as that. I have three sorts of reasons for thinking we’re not as irrational as pessimists think. But alas, I also have two sorts of reasons for thinking we’re still far from perfect.

My first sort of reason to think we’re not all that irrational stems from agreement with Hume that reason is “the slave of the passions, and can never pretend to any other office than to serve and obey them”. This means that “the heart wants what the heart wants”, and “the head” works out how best to achieve it. And only “the head” is capable of being rational or irrational. For example, smokers are often treated as if they were making a mistake. But according to Hume’s way of thinking, they have simply made an alternative lifestyle choice. They value their health less highly than non-smokers, of course, but that can’t be regarded as a case of irrationality. They’re doing what they want to do, and wants aren’t capable of being irrational.

By dividing the activities of the mind into volition and cognition, this approach effectively insulates the first “half” of them from rational criticism. Rationality doesn’t apply to the having of desires as it does to the formation of beliefs.

My second sort of reason to think we aren’t all that irrational draws on the ideas of recent philosophers like Dennett and Davidson. According to their variety of pragmatism, the content of a belief is a matter of interpretation. That is, a belief is about whatever a fully-informed interpreter would say it is about. To see how this works, consider an ultra-simple rudimentary agent such as a thermostat. Suppose it keeps the room at a steady 70 degrees. Then, as interpreters, we would say its (rudimentary) goal or “desire” is to keep the room at 70 degrees, and that its (rudimentary) “belief” is that the room is either hot enough or else not hot enough, depending on whether or not its bimetal strip is distorted enough to break the circuit to the heater. Even if the bimetal strip is permanently bent with age, and the dial “says” the thermostat should be keeping the room at 80 rather than 70 degrees, what counts is not what the dial “says” but what it actually does.
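A rough sketch of my own may help to make the interpretive point concrete (the class, numbers and names are made up for illustration): a thermostat whose dial “says” 80, but whose worn mechanism actually switches the heater off at around 70, is interpreted – from its behaviour alone – as aiming at 70.

```python
# Illustrative sketch of interpretation-by-behaviour: the "goal" we ascribe
# to the thermostat is fixed by what it does, not by what its dial says.
# All values and names here are invented for the example.

class Thermostat:
    def __init__(self, dial_setting, mechanical_offset):
        self.dial_setting = dial_setting            # what the dial "says"
        self.mechanical_offset = mechanical_offset  # bend in the ageing bimetal strip

    def heater_on(self, room_temp):
        # The circuit actually breaks at the dial setting minus the offset,
        # so the effective target differs from the labelled one.
        return room_temp < self.dial_setting - self.mechanical_offset

def interpret_goal(thermostat, temps):
    """Ascribe a 'goal' temperature from behaviour alone: the lowest
    temperature at which the device stops calling for heat."""
    return min(t for t in temps if not thermostat.heater_on(t))

old_thermostat = Thermostat(dial_setting=80, mechanical_offset=10)
print("Dial says:", old_thermostat.dial_setting)
print("Interpreted goal (from behaviour):", interpret_goal(old_thermostat, range(60, 91)))
```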

This is relevant, because much human irrationality is supposed to emerge when belief and action have become “decoupled”. For example, suppose someone looks up at the sky and says, “oh dear – I think it’s going to snow!” But then he puts on Wellington boots and a raincoat, grabs an umbrella, and so on. On the face of it, that seems like irrational behaviour. But let’s look more closely. If his behaviour is consistent with someone who thinks it is going to rain rather than snow, interpretation leads us to assign the belief that it is going to rain rather than the belief that it is going to snow. We simply override his own report of what he thinks. He is probably using the word ‘snow’ in an aberrant way, much as the thermostat’s dial “said” – inaccurately – that its goal was to keep the room at 80 degrees. That is a linguistic misunderstanding rather than a case of irrational action or irrational belief. We are obliged to re-interpret the contents of the agent’s mind so as to maximize the rationality of his actions and beliefs, and when we do so, we find far less “decoupling” than we originally feared.

My third sort of reason to think we aren’t all that irrational comes from evolutionary theory. Like other creatures, we evolved to survive at least to the point where we have successfully reared offspring. This entails that as far as everyday beliefs that guide behaviour are concerned, most are probably true or approximately true. It also entails that they can’t contradict each other very much. Imagining a creature with a lot of false or contradictory beliefs is like imagining a creature that walks into closed doors, doesn’t avoid cliff edges, and so on. And that is to imagine a creature that can’t survive long enough to reproduce, like a soluble fish, as Dennett remarked.

So much for three sorts of reason for thinking we’re not as irrational as we might fear. All of them are consistent with the idea that we are “survival machines” for our genes. For that, we have to be reasonably efficient vehicles for their passage into future generations, and for that, we have to do our cognitive work reasonably well. I see the pessimistic alternative as being inspired by more traditional “takes” on the human condition. The idea that we are cursed from birth with the madness of irrationality is reminiscent of the doctrine of Original Sin. The idea that our minds are not subject to fine-tuning through constant interaction with the physical world is reminiscent of mind-body dualism. The idea that cognitively speaking we are in a place of “darkness” is reminiscent of radical Cartesian scepticism.

We can avoid those sources of pessimism by moving beyond our religious traditions. But the alternative evolutionary perspective brings its own brands of pessimism with it. I can see two sorts of reason for thinking we aren’t quite as rational as we might hope.

The first stems from the fact that we are a social animal. Much of our thought is guided by moral concerns, and by empathy for members of our own group (which is often not at all “moral” in any obvious sense, as it discriminates against members of other groups).

There is strong selective pressure against the sort of false beliefs that would lead us to walk into closed doors or over the edge of cliffs. But the selective pressure against false beliefs of a more “theoretical” sort – such as beliefs in religion, history, or even science – is much weaker. In fact any selective pressure here is probably negative – that is, the social advantage of having similar beliefs to others outweighs any disadvantage attached to their being false. The positive effects on our gene-propagating potential of having the same beliefs as others spring from the way they help to identify which group we belong to, which hymn sheet we’re singing from, and where our allegiances lie – and, above all, whom we can turn to for help if we need it. Truth here takes second place to reciprocal altruism.

If we want to have true beliefs of this theoretical sort, and practically everyone who professes to have a “scientific” outlook does, then we are going to have to control our urge to “belong”. Such urges militate against truth, and given a truth-oriented outlook they are irrational.

I don’t see much effort made to control these “social” urges. Most current academic philosophers’ energies seem to be expended on saying popular, agreeable things and on avoiding any real controversy. Wider attitudes to “denialists” of one sort or another seem not to have advanced one inch beyond the opprobrium traditionally heaped on heretics, infidels, apostates and blasphemers. Just adopting a new word for non-co-religionists does not augur well for our pursuit of truth or for human rationality.

The second sort of reason to think we aren’t as rational as we’d hope has to do with sexual selection. In sexual selection, members of the selected sex exhibit dangerous or expensive ornaments and engage in self-destructive or wasteful behaviour in order to send a “costly signal”. A signal has to be costly to be convincing – hence the length of a peacock’s tail, and the inconvenience and danger of owning one.

In humans, as in monogamous birds, each sex is subject to selection by the other sex. Both men and women bear the marks of this process of selection. For example, permanent and cumbersome human breasts signal fertility and youth. Tribalism and earning power signal strength and intelligence. But being well-endowed or getting paid a lot of money doesn’t make anyone more truthful or more competent in their pursuit of truth.

The combination of sociality and sexual selection in humans has given rise to some bizarre social arrangements, including – in some parts of the world and much of history – the segregation of the sexes. In these arrangements women are in effect incarcerated, and men in effect spend their days slapping each other’s asses with wet towels in changing rooms.

None of that is conducive to truth or to human freedom. If we primarily value truth or freedom, promoting the opposite may be a typically wasteful self-handicapping “signal” – but it’s irrational.

So much for human irrationality as I see it. Shall we say: three out of five ain’t bad?

The sorrow and the pity

Nothing suspends disbelief like empathy. The man who claims to have been tortured is more likely to be believed than the man who claims not to have inflicted torture. The woman who claims to have been raped is more likely to be believed than the man who claims not to be a rapist. In such cases, the first party strikes a note of fellow-feeling and triggers a sense of moral outrage. The second party might be guilty, or he might not – either way, his “plight” doesn’t pull at the heartstrings with quite the same insistence as that of the first party. The problem is that emotional engagement often emerges as credulity, and the lack of it as incredulity.

When people claim to be victims of a hate campaign complete with death threats, they are generally believed, even if they don’t trouble themselves to produce documentary evidence of such threats. Their story seems believable by virtue of their purported victimhood – so believable does it seem that others often neglect to ask for such evidence. It seems like “bad manners” to express anything that can be construed as doubt.

When marriages break up, both sides scramble to paint themselves as the victim in an abusive relationship. When public demonstrations turn ugly, demonstrators and police are both capable of actively inviting injury. In these and in many other similar situations, there is political advantage to being seen as the victim. “You don’t need a bruise to be abused” as a feminist slogan has it, but a bruise is tangible evidence that its owner is a victim rather than a perpetrator. You may not need a bruise, but you’d be mad to turn one down. Emotionally intelligent people know which theatrical devices sway an audience by suspending disbelief, and manipulative emotionally intelligent people don’t hesitate to use such devices.

Perhaps this suspension of disbelief arises because so many of our beliefs are adopted as a matter of social cohesion rather than because they have a legitimate claim to be true. Many or most of our “theoretical” beliefs (in areas such as history, religion, science, economics, etc.) are determined by which group we want to belong to – and want to be seen to belong to.

Whatever its cause may be, this is bias. By taking sides with the apparent underdog, we can lose sight of the fact that appearances can be deceptive. I’m not complaining here about the moral bias that makes us take sides with the underdog. My problem is with epistemic bias: when taking sides with the underdog includes the unwarranted assumption that he is telling the truth.

When someone commits suicide, the first thing the biddies ask is, “What made him do it?” “What drove him to such extremes?” Even when mental illness is involved, there is always a more or less vague suggestion in suicide that there was a victim, and therefore there must be some perpetrators. It strikes me as very naïve to suppose that the bias described above isn’t routinely exploited by people who want to suspend disbelief, and know how to do it.

Some people end their own lives because they have a painful physical condition from which there is no hope of recovery. That is a perfectly reasonable thing to do. Others would have us believe that they end their own lives because they have to cope with mental anguish analogous to an incurable physical condition. I suggest we make an effort to overcome our own bias, and ask ourselves: “Was he really in a state of incurable mental anguish, or did his action exploit a universal human bias?” I think the more we think about mental anguish, the less closely it resembles an incurable physical condition.

I submit that the deliberation preceding suicide often involves more cunning than is generally acknowledged. It seems to me that the paradigm cases of suicide are not of people suffering unbearable mental anguish, but of emotionally intelligent agents who are trying to achieve something. We get a more instructive paradigm by looking at cases of hunger strikers and suicide bombers.

We tend to think that people kill themselves with no purpose except to end their pain by ending their lives. That’s because we tend to think what we “really” want are experiences rather than external states of affairs, and experiences come to an end with death. (I wrote about this philosophical error in more detail here.) But ask yourself: Do you want to have the experience of your lover being faithful to you? Or do you want your lover actually to be faithful to you? Your answer illustrates whether you want an “external” state of affairs to be realised or to have a (possibly illusory) “internal” experience.

We should ask: What external states of affairs are those who kill themselves trying to bring about? Our answers may be less flattering to the suicidal than the standard “suffering saint” interpretation!

Of course no one is supposed to ask that sort of thing, or to think the suicidal can be motivated by malice, or to suggest they should be treated with anything less than undiluted sympathy. It’s frowned upon. It sounds like the “swift kick in the pants” school of psychiatry. It seems “devoid of compassion”. Worst of all, the very bias I have been complaining about kicks in and gives the opposed view greater initial credibility than it deserves. The bias is self-protecting and self-perpetuating.

Against all that, I would argue as follows: First, if we want the truth, we must avoid epistemic bias, and to avoid that, we must not let our automatic empathy for victims lead us to assume that their stories are true, or to unquestioningly accept that those who claim to be victims really are what they claim to be. We might dub this error the “empathetic fallacy”.

Second, if what I’m saying here is true, the real victims of suicide are not those who kill themselves so much as those who are unjustly held “accountable” for the actions of those who do. The injury of their supposed “blame” is often compounded with self-imposed shame and secrecy. I think we should spare a bit of compassion for them.

Looking up to authority

Ireland has an unhappy history of looking up to authority. This has been most obvious with the Catholic Church, whose priests were traditionally regarded as moral “experts”. For decades, their advice was swallowed whole on sexual and family matters, subjects they must surely know remarkably little about. What they said was accepted not for how sensible or reasonable it seemed, but because of who was saying it.

I will argue that this habitual appeal to authority shows no signs of abating. All that has changed is who – which elite group of “experts” – is considered to have the authority. This is worse than a bad habit of thought. It is a tragedy.

It isn’t only priests of the Catholic Church who have enjoyed a position of supposed authority. In the 1950s, Noël Browne’s “Mother and Child Scheme” – which would have taken an important step towards socialized medicine in Ireland – was derailed by doctors as much as by men of the churches (including the non-Catholic Church of Ireland).

In Ireland, newspapers are judged to be “of higher quality” the more closely they approximate an ideal “newspaper of record” – an utterly trustworthy, authoritative account of events that contains nothing but truths and good advice, so that its readers can suspend their critical judgement when reading it. Open controversy is much scarcer in Irish newspapers than in UK newspapers. Of course Irish commentators do occasionally write controversial things. The trouble is, they tend to write controversial things by accident, having intended (unsuccessfully) to write something trustworthy and authoritative and therefore uncontroversial.

If newspapers were to encourage deliberate, open controversy, they would have to present alternative opinions for consideration. This would involve publishing opposed opinions, at least one of which would have to be false. But in Ireland, falsity in newspapers is considered shameful, just as truth is considered an unambiguous good.

What I say here applies with varying force to all countries and all societies. The real problem is habitual appeals to authority, not Irish people. But since I’m thinking specifically about Ireland and using Irish examples, I’ll continue to do so.

In recent years, a big change occurred in Irish society. Suddenly, Catholic priests were thoroughly discredited. A significant proportion of them were revealed to be perverts and child rapists. Those who weren’t were tainted by association and apparent complicity with those who were. Many ordinary people still attend Catholic mass for occasions such as weddings and funerals, but hardly anyone in present-day Ireland is willing to take moral guidance from a priest. That’s a hopeful sign. It would be nice to think we’ve learned something from all this. If we put our trust in an elite, and assume they are beyond reproach, and take what they say as beyond question, and empower them to make judgements on our behalf, then sooner or later they are bound to make bad judgements on our behalf.

But old habits die hard. Most of us still expect to be spoon-fed by our supposed intellectual betters, like trusting children rather than adults exercising our critical faculties. The Irish model of education is essentially that of a child being fed or of a receptacle being filled: the teacher is supposed to be a reliable source of knowledge, and the pupil is made to carry as much of the good, truthful, trustworthy stuff as the teacher can pack into him.

People don’t learn much that way, and what little they can learn will be distinctly unscientific, because by its very nature science is opposed to authority.

It is no accident that our continuing tendency to look up to authorities of one sort or another is all too evident in current attitudes to “science”. I put the word ‘science’ in inverted commas, because as often as not, what people take to be “science” doesn’t really deserve the name.

The Irish are by international standards quite well-educated, but science is our weakest subject. This does not stem from any lack of intelligence or shortage of technical skill, but from a failure to appreciate the value of disagreement, speculation and criticism, and above all from a reluctance to embrace the human condition of uncertainty. Sceptical attitudes are rare in Ireland, and some who describe themselves as “sceptics” are not genuine doubters so much as evangelists for alternatives to religion. But anyone looking for an alternative to religion tends to wind up with just another religion.

I think that is pretty much what has happened in Ireland in the headlong rush away from Catholicism. The “science” is not embraced for its own merits over its demerits (of which there are always some, even in the best science). What counts is not how sensible or reasonable it seems, but who is saying it. If “scientists” say it, then it is assumed to be true. Furthermore, if it is true, the alternative opinion must be false. It is a short step from here to thinking that only what is true should be uttered, so the false must be suppressed.

Many people do not bring their own judgement to bear on what is and what is not genuinely scientific. Instead they take it on trust, by accepting the self-description of members of a profession, or on the supposed authority of majority opinion. There are frequent appeals to the “peer-reviewed journals” – or to what the “newspaper of record” says about what “peer-reviewed journals” say.

I hope you can see the re-emergence of the old pattern here. The unchanging assumption throughout is that knowledge is a matter of authority, and that it comes from something like a “pulpit”. The elite few who are blessed with the ability to read the “scriptures” first-hand enjoy special status as oracular conduits of authority.

It is time to look at science in a more critical way. Empirical sciences use mathematics, and have much of the rigour of mathematics, but they don’t have the justifying structure of mathematics. The rigour does not yield certainty, or anything like it. Scientific theories do not “rest on a foundation” of any sort. Instead, rival theories are tested against observations, and above all against each other. To do that, rival theories must be available in order to be compared. The silencing of one of them makes rivals unavailable. This is censorship, disguised as always as “the debate is over” or “there is no real debate here”. Pseudo-sciences are like anaerobic bacteria: they thrive where the oxygen of debate is absent.

I said above that these attitudes were a “tragedy”, and I use the word advisedly. It is a personality flaw whose outcome may well be something very bad, such as another Irish Famine. There is a precedent for this sort of disaster. In the early days of the Soviet Union, riddled with quasi-religious respect for the authority of “The Party”, the pseudo-science of Lysenko guided farming practices, and prolonged famines that killed millions.

Not good enough!

A few years ago, I spent an afternoon browsing in Kuwait’s largest bookshop. It was interesting to find out what was available – and what wasn’t available. Although Kuwaitis share the usual Arab hostility to “the Zionist entity” – they cannot bring themselves even to utter its name – I did not see any more obvious signs of anti-Semitism. I did not see copies of The Protocols of the Elders of Zion, for example, or anything much like it. There were quite a lot of books by or about Einstein and Freud. Several “feminist” books were available, including at least one by militant lesbian philosopher Judith Butler.

The one topic of which there seemed to be not the merest hint of a whiff was the theory of evolution. There was nothing by or about Darwin – or by or about Richard Dawkins, or any other well-known evolutionary thinker.

To many Muslims, Darwin’s theory comes directly into conflict with their religion. Their reaction is to try to prevent any expression of it. Their “justification” is that the theory is false, and falsehoods are bad, and so should not be expressed, and so their expression should be forbidden.

Exactly the same reaction can be seen, in mirror-image, in many people who claim to be “pro-science”. (I will call them “pro-scientists”, although of course they are anything but pro-science.) Creationism comes directly into conflict with their science, so their reaction is to try to prevent any expression of it. Their “justification” is that the theory is false, and falsehoods are bad, and so should not be expressed, and so their expression should be forbidden.

But wait. Everyone thinks their own opinion is true, and therefore that any opposed opinions must be false. Are all opposed opinions therefore to be silenced? Can human disagreement amount to nothing more than a bunch of ignorant morons slapping each other around like Moe of The Three Stooges?

When pressed with this suggestion, both religionists and “pro-scientists” tend to retreat to a more defensive position by saying that Creationism and Darwinism don’t really come into direct conflict at all. Religionists will say that Darwinism is impious, or in other words not worthy of being considered a real rival to religion, and so it can be safely left out of the discussion. And for their part, in a tiresomely predictable mirror-image, “pro-scientists” will say that Creationism is unscientific, or in other words not worthy of being considered a real rival to science, and so it can be safely left out of the discussion.

That is not good enough. I repeat: that is not good enough!

Genuine science is guided by broadly sceptical and open-minded attitudes. These attitudes make no attempt to silence opposed views. Genuine scepticism accepts that nothing is certain – I repeat, nothing – so the opposed view might conceivably be right. To silence an opposed view is to assume infallibility, as JS Mill saw, and any assumption of infallibility is just inconsistent with a sceptical attitude.

Genuine science welcomes the proliferation of opposed views, because new ideas are often an amalgam or synthesis of such views. For example, even such stark opposites as Darwinism and “Intelligent Design” theory can meet in a productive way. There is some “intelligent design” in nature in the limited sense that some creatures exercise their intelligence in sexual selection, say, or in their choice of food. These choices are made by more or less intelligent minds, and they have the effect of shaping future generations. To explain the shapes and colours of flowers, for example, we have to consider the intelligence – such as it is – of insects. To explain some of the differences between human races, we have to consider human aesthetics.

Genuine science seeks reasons for belief. For that, rival theories need to be compared to each other, to see which fares better. And for that, rival theories need to be available so that they can be so compared. Silencing one of them makes any such comparison impossible.

I hope this is all familiar territory. If it isn’t, dear reader, you urgently need to read one of the most important books of recent centuries: JS Mill’s On Liberty.

To stifle an opinion on the grounds that it is “unscientific” is backward, parochial, illiterate and illiberal. It is backward, because it is to do exactly what religionists do. It is a profoundly anti-scientific, authoritarian move to protect orthodoxy. Darwinism is too good to be treated with that sort of intellectual contempt.

It is parochial, because it fails to acknowledge the fact that most of the world’s population still believe some version of Creationism. We in the West prefer Darwinism, of course, but to override what “outsiders” think because it conflicts with our own Western values is shabby and inward-looking. Creationism and Darwinism may not be serious contenders within science, but they are rivals in a wider, “philosophical” sense, simply by virtue of being widely considered to be rivals. A properly scientific attitude extends beyond science proper to this wider realm of “philosophical” dispute.

To stifle an opinion on the grounds that it is “unscientific” is scientifically illiterate, because it fails to grasp what makes for good reasons for belief, and it fails to grasp how science is informed by sceptical attitudes.

Finally, it is illiberal, because it fails to respect individual freedom. If someone has religious beliefs, by all means let us reason with him and try to persuade him of his error. But by silencing the mere expression of those beliefs, we trample on his individual freedom to express them, and to hear them expressed. That is to trample on the individual himself. Absolute freedom of thought and sentiment – including religious thought and religious sentiment – is essential for human happiness and human life.

By silencing opinions we disagree with – instead of engaging with them in open and rational debate – we condemn ourselves to Matthew Arnold’s “darkling plain”,

Swept with confused alarms of struggle and flight,

Where ignorant armies clash by night.