Respecting agency

Preference utilitarianism differs from traditional utilitarianism in that it doesn’t enjoin us to maximize any sort of commodity such as pleasure or happiness. The rather murky concept of “utility” has no place in preference utilitarianism.

Instead, as Peter Singer lucidly put it, preference utilitarianism is the “minimum moral position”. To act morally, an agent should simply respect other agents’ preferences in the same way as he trivially respects his own preferences. To put it another way, preference utilitarianism enjoins us to “respect agency in general”.

Whatever we do, we do it because our beliefs and desires cause our action. Respecting agency means giving due deference to the beliefs and desires that cause an agent to act.  At the moment of acting, given the constraints of circumstances, limited options, limited time, and so on, what we actually do is what we most want to do, given what we believe. If our wants (desires) were aimed at different goals or had different strengths, or if our beliefs differed in content or degree of entrenchment, we might act differently — but we would still do what we prefer to do. As long as we are genuinely acting, our preferences always “win”.

But let’s take care to consider “scope” here. If I hand over my wallet to a mugger rather than risk death, within the narrow context of being mugged I do what I “prefer”. But considered from the wider perspective of me going about my business, I would prefer not to be mugged at all. So obviously, although we always do what we prefer in a trivial sense looked at from the narrowest perspective, we don’t always manage to do what we want in a larger sense. In other words, our preferences are often thwarted; we are often unfree. Preference utilitarianism says that to act morally, we should as far as possible prevent the thwarting of other agents’ preferences, considered from the widest perspective, ideally taking account of the entirety of the agent’s beliefs and desires.

Like other forms of genuine utilitarianism, preference utilitarianism considers each act individually, in its particular circumstances, rather than promoting a general prescription or rule. However, a fairly close approximation to such a rule is the so-called Golden Rule expressed in Matthew 7:12:

all things whatsoever ye would that men should do
to you, do ye even so to them

In other words, treat others as you would like to be treated yourself. Respect the agency of others as you respect your own agency.

But what if an agent acts with false beliefs? Wouldn’t we want others to prevent us walking out in front of an approaching bus? Or to whip out of our reach the glass of acid that we mistakenly think is a cool beer?

Indeed we would, and we should do the same for others, as long as — and this is absolutely vital — we do not overrule their overall agency in so doing. By “overruling their agency” I mean neglecting to give their beliefs and desires due deference as their own beliefs and desires, and not, please note, as true beliefs or as acceptable desires. When I grab someone to prevent them walking out in front of a bus, I assume that they have a whole bunch of other mental states: a desire not to be run over by a bus, a desire to reach the other side of the road uninjured, a belief that what they want can be found on the other side of the road, and so on. By overruling their belief that there is no immediate danger in stepping out into the road, I respect the many other beliefs and desires that cause them to cross the road in the first place. Overall, I respect their agency more by overruling one belief while respecting the larger whole of the rest of their beliefs and desires.

Much the same goes for keeping a glass of acid out of the reach of a thirsty beer-drinker. It is reasonably safe to assume that we respect his overall beliefs and desires more by thwarting his narrower desire to drink the contents of this particular glass.

I’d like to emphasise, again, that these infringements are justified by respect for the agent’s overall agency considered as a larger whole, and not because they are caused by false beliefs.

As with the example of getting mugged, above, much depends on “scope” here. Within the narrow context of stepping out onto the road, or of drinking the contents of this glass, it might look like prevention means agency is disrespected. But from the wider perspective of the agent’s overall beliefs and desires, we respect agency more — we give due deference to the agent’s own beliefs and desires  — by preventing this or that particular act.

In these spur-of-the-moment cases of preventing action, we rely on our psychological assumptions being correct. We assume that most people don’t want to be hit by a bus, don’t want to drink acid, and so on. We assume they believe buses and acid can kill. As with any assumption, we might conceivably be wrong, but in any case, we can easily check afterwards: we simply bring the bus or the acid to the agent’s attention. In most cases, the agent will thank us for keeping an eye out for his safety. But it might turn out that he has unusual beliefs or desires. He might doubt the presence of the bus, or strenuously resist the idea that the glass contains anything dangerous. He might be practicing his daredevil skills, or hoping to take his daily dose of vitamin C. Or he might be suicidal. In such cases, I think the preference utilitarian should respect overall agency. He should try to consider the agent’s entire system of beliefs and desires, and respect them as much as possible. If the agent persists in his beliefs and insists on his course of action, we should respect that. It might entail letting him go ahead and kill himself — or allowing him to take great risks in performing some sort of experiment in living.

As an aid to thinking about such scenarios, we can use the Golden Rule above as an approximation, and ask: what would we want others to prevent us doing ourselves? I think most of us would want others to pull us out of the way of a bus, or to grab the acid before we can drink it. We’d thank them for it. But we would not want them to overrule our overall agency, nor would we thank them for doing so. No one has the moral or intellectual superiority to legitimately thwart an act simply because they think it’s caused by an improper desire or a false belief.

Small-P protestant revolutions

Can you see a shared pattern in Brexit, the English Civil War, the Reformation, and similar uprisings of ordinary people against longer-established authorities with their widely-respected experts? I think I can, and I see all of them as “protestant revolutions” — protestant with a small P, as their opponents are not always Catholic with a capital C, but catholic in the broader sense of being more mainstream (i.e. more “universal”), more traditional, and more jealously protected by hierarchical power structures.

Perhaps I recognize the pattern quickly because I feel I have been engaged in my own protestant revolution for decades now against bad science. It’s a strictly peaceful, intellectual revolution, but there is real anger on both sides. I’ll try to explain why I feel a bit like a low-ranking partisan in such a revolution.

I’m a scientific realist. In graduate school, I put a lot of effort into defending scientific realism against the criticism of most of my teachers and every single one of my fellow graduate students. In doing so, my realism was tempered and mitigated somewhat, but remained essentially intact. I grew to appreciate the centrality of the hypothetico-deductive “method” to genuine science, and became increasingly aware of the ubiquity of pseudo-sciences that eschew it.

For decades now, “the authorities” (such as Sir Paul Nurse of the Royal Society, almost all governments, most academics and mainstream media, politically correct conformists in every walk of life) have been telling us that we must all believe “the science” — whatever the authorities deem “the science” to be.

But I think we have good reasons not to do so. First, none of these authorities ever seem to express the slightest interest in the hypothetico-deductive method that I regard as essential to genuine science. Second, the history of science teaches us that science has always had bad branches, and there are no doubt bad branches right now: there is no monolithic body of reliable opinion that deserves to be called “the science”. Third, genuine science does not ask us to accept the word of an authority. Fourth, observation plays a crucial role in science, and observation is what anyone with working sense organs can do: to dismiss non-expert opinion is to allow ideology to overrule observation. Fifth, and from a moral perspective most important of all, we are entitled to believe whatever we like. Martin Luther put it in terms of “conscience”. Elizabeth I put it by saying she “would not open windows into men’s souls”. No one has a moral entitlement to insist that anyone else must believe anything. It’s simply morally wrong to so insist.

So, like a cut-rate Martin Luther, I simply cannot believe what the authorities are insisting I should believe. “Here I stand. I can do no other.” And I’m not the only one.

It’s no coincidence that supporters of Brexit tend to be so-called “climate deniers”. Skepticism about a body of opinion supported by authoritarianism rather than observation is exactly the sort of thing that characterizes protestant revolutions. The development of the internet has worked much like printing in the original protestant revolution. Blogs and social media are replacing academic journals that no individual can afford and in which no honest writer would expect his work to be widely read. This is analogous to vernacular bibles coming to be seen as more valuable than authoritative interpretation of the Latin bible by clerics.

What strikes me as sad, or maybe just funny, about present-day “counter-revolutionaries” is they seem not to understand why anyone in their right mind would reject expert opinion. They assume it’s completely obvious that expert opinion is better than non-expert opinion, and only a madman or an utter fool would think otherwise. But a philosophical mistake underlies this assumption. There are two kinds of expertise, which we might call that of the “texpert” and that of the “prexpert”. A Texpert (T for ‘Theory’ or ‘Text’) is someone familiar with a body of theory or the writing of a theoretician (such as Marx, Freud, Keynes or Hayek, say). A PRexpert (PR for ‘PRactice’) is someone who has a demonstrable practical skill. The latter is something we rightly all admire. We wisely consult and frequently hand decision-making powers to prexperts. The practical expertise of a prexpert might not touch on actual opinion (belief or claims that purport to be true) at all. But a texpert is just someone with a theory, usually one whose esotericism makes its epistemic status doubtful. If we confuse these two types of expertise we are liable to unwisely hand decision-making powers to the wrong sort of person.

Alas, many texperts familiar with a theory T of subject-matter X flatter themselves with the thought that they “know a lot about X” instead of having a mere “familiarity with T”. Let’s not be taken in by such flattery!



On further reflection, I should add that a defining characteristic of “catholic” ways of thinking (as currently described) is the assumption that “transcendent” matters are to be decided by “earthly powers”.  This sets more puritan ways of thinking against it. To take a few examples, it’s often thought that questions of morality (of taking military action, say) or justice (of proposed legislation, say) or credibility (of a scientific theory, say) are to be decided by such earthly powers as reside in committees: a vote at the United Nations, a decision by the European Court of Human Rights, an act of some branch or other of the EU, or consensus among qualified scientists. Or indeed a decree by the Pope.

Puritans rankle at that assumption. Questions of morality, justice, truth, of what to believe are matters of conscience, they respond, or at least matters an individual must judge for himself because they lie beyond the competence of a committee. Such questions are transcendent in the sense that their answers are to be discovered rather than decided (i.e. created by a decision), and we are all in the same boat: we are all fallible, and committees are multiply fallible. As we nowadays put it, they are subject to groupthink.

As an example of puritanical thinking, consider my own insistence (touched on above) that genuine science follows the pattern of the hypothetico-deductive “method”. I am outraged by the claim that we should accept a theory as scientific or as worthy of belief because “97% of scientists say so”. No doubt my own puritanism is as distasteful to people who make that claim as their invocation of earthly powers is to me.

As an example of catholic thinking (as here understood), consider ex-President of Ireland Mary Robinson. As far as I am aware, her opinions are always unwaveringly mainstream, underwritten by the supposed authority of some administrative body or other, and approved by current “powers that be” from academia to the Council of the European Union. She seems to suppose that UN resolutions are the highest appeal on moral questions, and that consensus among insiders is the authoritative last word on scientific questions. She revels in positions she occupies in the hierarchy, and constantly reminds us of various honours she has received (most recently, the “city of Chicago’s highest honor, the Medal of Merit”).

As you have probably guessed, the puritan in me finds that all very unseemly. But I should add that many sincere Catholics (capital C, i.e. members of the Roman Catholic religion) also disapprove. The Catholic church is no longer the earthly power it once was, and its position on abortion and same-sex marriage differs from that of current catholic thinking (small C, as understood here). So this is a useful reminder that the words are not used in the same way and refer to categories that only loosely overlap.

Why do males die younger than females?

I have a hypothesis that explains why in many (most?) species, males have a shorter life expectancy than females. My apologies if this has been thought of before, or if it’s already well-known. It’s quite likely that I’m re-inventing the wheel here, that I’ve come across the current explanation before somewhere, and have simply forgotten. I have a keen interest in evolutionary theory, but I’m not a biologist.

The hypothesis is this: males are subject to more exploitation by parasites than females, because in general parasites “want” their host species to thrive. Over the course of a lifetime, this greater exploitation takes its toll.

In non-monogamous species, males are useful for fertilizing the eggs of the females, but not much else. In effect, after donating sperm most of them are redundant. They use up the food supply that could otherwise swell numbers of individual members of the species, and hence safeguard the species itself. In non-monogamous species, too many males are “bad for the species”. Drone bees consume as much nectar as honey-producing females. Male elephant seals consume far more fish than their smaller female counterparts, and few of them even get to donate sperm.

Farmers — in effect, human parasites of animals used as food — know all this, and so they usually kill males apart from the few needed to fertilize females. In doing so, they strengthen the species they parasitize, in the sense of increasing their numbers and assuring their future. Through domestication, the humble jungle fowl of Asian forests has become the mighty chicken, found in huge numbers all over the world. Much the same applies to cattle and sheep, which now occupy much of the earth’s surface.

Most parasites (such as microbes) are brainless, but through the process of natural selection they adopt “strategies” which can promote their numbers. In most cases, these strategies ensure that their host species do well enough to function reliably as hosts. The parasites aren’t actually thinking as human farmers think, of course, but over many generations they stumble upon similar strategies, which become established as the parasites that benefit from them proliferate.

With sex ratios, the “interests” of species and genes conflict. What’s “good for the species” is a much larger proportion of females than males, at least in non-monogamous species. But what’s “good for the genes” is a roughly equal number of males and females (as explained by Fisher’s Principle). The fact that in most species the ratio of males to females is indeed 1:1 makes a compelling case for a gene-centered understanding of evolution (à la Richard Dawkins’ The Selfish Gene), and against group selectionism.
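
The equal-investment logic behind Fisher’s Principle can be made concrete in a few lines of code. The sketch below is a standard textbook rendering of the argument (sometimes called the Shaw–Mohler logic), not something taken from the text above: a parent’s expected number of grandchildren depends on the population’s sex ratio, and producing the rarer sex always pays until the ratio reaches 1:1.

```python
# A minimal sketch of Fisher's Principle (textbook reasoning, assumed here,
# not derived from the essay itself).

def relative_fitness(m, p):
    """Expected grandchildren (up to a constant factor) of a parent whose
    offspring are a fraction m male, in a population whose overall sex
    ratio is p males.

    Every grandchild has exactly one father and one mother, so the total
    reproductive value passing through males equals that passing through
    females.  When males make up a fraction p of the population, each son
    is therefore 'worth' 1/p and each daughter 1/(1 - p)."""
    return m / p + (1 - m) / (1 - p)

# In a female-biased population (30% males), son-producers do better:
assert relative_fitness(1.0, 0.3) > relative_fitness(0.0, 0.3)

# In a male-biased population (70% males), daughter-producers do better:
assert relative_fitness(0.0, 0.7) > relative_fitness(1.0, 0.7)

# At a 1:1 ratio, every strategy scores equally -- the equilibrium:
assert abs(relative_fitness(0.2, 0.5) - relative_fitness(0.8, 0.5)) < 1e-12
```

Because every strategy does equally well at a 1:1 ratio, no sex-ratio allele can invade such a population, even when a female-biased ratio would be “good for the species”.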

This hypothesis (I hesitate to call it “my” hypothesis) should be easy enough to test, as it entails that there should be a greater difference in male–female life expectancy in non-monogamous species than in monogamous species. It also entails that many of the diseases we associate with early male mortality (such as coronary heart disease, possibly suicide) may in fact be partially caused by infection by microbes.
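
To make the proposed test concrete, here is a minimal sketch of how the two groups of species might be compared. Every number below is invented purely for illustration (a real test would of course require genuine comparative data); “gap” stands for female minus male life expectancy, in years.

```python
# Sketch of a one-sided permutation test on invented life-expectancy gaps.
# The species figures are hypothetical placeholders, not real data.
import random

nonmonogamous_gaps = [4.1, 3.2, 5.0, 2.8, 3.9, 4.6]   # hypothetical species
monogamous_gaps    = [0.4, 1.1, -0.2, 0.9, 0.6, 1.3]  # hypothetical species

def permutation_p_value(a, b, trials=10_000, seed=0):
    """How often does a random relabelling of the pooled gaps yield a mean
    difference (a minus b) at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b)
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_p_value(nonmonogamous_gaps, monogamous_gaps)
# A small p would support the prediction that non-monogamous species show
# the larger male-female longevity gap.
```

With data this cleanly separated, the test returns a very small p-value; real comparative data would be far noisier, and would also need to control for body size, phylogeny, and the like.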

The tyranny of conditioning

From a very early age, I detested learning by rote. My refusal to engage in this soul-destroying activity led to my first brush with criminality, when I tried to cheat while reciting the seven times table.

I’m not the only one who has a deep distaste for learning by rote. What is rather surprising, perhaps, is that some people seem to have a genuine liking for it. Witness the eagerness with which so many take to learning foreign languages, with new vocabularies, irregular verbs, unpredictable genders of words for inanimate objects, and so on — all of which require tedious repetition and absorption.

It seems to me that there is a telling difference here of temperament — between those who assume education is essentially a matter of acquiring good habits of thought, and those who assume education is essentially a matter of getting a better understanding. Both types of people embrace education as a good thing, as a vital aspect of personal growth, but the former expect and even welcome an onerous period of habit-formation to achieve it. The latter embrace a sort of intellectual “principle of least action”: whatever is sufficient to explain is “all we need to know”. That attitude can look downright lazy to fastidious habit-formers.

The difference in temperament extends far beyond education. Here I’ll just touch on how it emerges in attitudes to mental illness, and in politics.

People who assume that education is a matter of acquiring good mental habits tend to think that mental health issues — from mild neuroses to out-and-out illness — are to be overcome by means of conditioning. For example, a phobia of spiders is supposedly overcome by coming into ever-closer contact with them — letting them crawl over one’s hands and so on. At the end of the conditioning process, the patient has “got used to the idea” — in other words, he has changed his habits.

Now I am no Freudian, as I think his understanding of the mind was badly mistaken in many respects. Yet I think he was importantly right, both factually and morally, in thinking that the way to better mental health was not through conditioning, as above, but through self-understanding. Let us put aside the details of such self-understanding, such as whether it really involves uncovering unconscious desires or phantasies. The important thing is that therapy is aimed at enlarging one’s understanding of oneself, rather than at achieving greater “self-control” through the acquisition of new habits. Rather than trying to instil such habits, a Freudian therapist would encourage exploration and experiment, with its attendant risks.

This difference of temperament can also be seen in political thought, where a deep division exists between Rousseau and Hobbes (almost everyone has an affinity for one or the other of them). Rousseau thought that by the time humans reach adulthood, they have been corrupted by the bad conditioning of modern society, and the solution to this problem is counter-conditioning. We must acquire new “habits of the heart” (as Tocqueville called it, re-phrasing Rousseau). Unlike Rousseau, Hobbes thought humans were born selfish, and there’s no way to change that; but by understanding ourselves better we will agree to an imperfect compromise in which the most important freedoms are safeguarded.

I think it’s pretty obvious that the current enthusiasm for minimum alcohol pricing, for special taxes on fat and sugar, and the rest of it, comes from the “conditioning” side of the divide. If people eat or drink too much, the idea goes, they should be re-educated by acquiring new habits. By putting unhealthy foods and alcohol that little bit further from reach, new habits will take root.

Let us pass over the fact that the attempt to instil new habits involves coercion, and that such coercion is discriminatory, because only a select few are poor enough to be affected by modest price increases. Let us pass over the issue of legislators making laws that they themselves are not subject to. The question remains: Is conditioning — enforced learning by rote — the right way to achieve personal growth?

My temperament says No. As alcohol prices have fallen in Ireland, Irish people have cut down on alcohol. I think this is because affordability enables younger people to learn how to drink, to educate themselves through exposure, experiment, and increased self-understanding rather than through the forced acquisition of a habit.

What is this “positive” concept of freedom?

Isaiah Berlin famously distinguished a “negative” and a “positive” concept of freedom. The negative concept is straightforward, but what can be made of the positive concept? Too often, attempts to distinguish them rely on a superficial linguistic difference between the terms ‘freedom from’ and ‘freedom to’. For example, it might be said that negative freedom is freedom from external constraints, whereas positive freedom “represents freedom to do things on one’s own volition.” [Taken from here.]

But that simply won’t do. Agents only ever do things because they want to do them. In other words, any genuine act (rather than a mere twitch, or a frogmarch) is done as the result of the agent’s own volition, and it can only be done when the act is not hindered by external constraints. And that amounts to a “negative” re-formulation of what was intended to capture the essence of “positive” freedom.

Perhaps what’s meant is something like this. If a person is forced to do something under duress — at gunpoint, say — then although he does it because he (briefly) “wants” to do it while the gun is aimed at his head, he can hardly be said to do it on his own volition. A mugger threatens him with death, and he’d prefer to live despite handing over his money than die holding on to it. This is not a free act, surely?

Well, of course the mugging victim is not free. But his lack of freedom can be characterized in an entirely negative way. Although he wanted to hand over his money while held at gunpoint, and that narrowly-circumscribed act in isolation could be described as “free” (no policeman suddenly turned up to prevent him doing so), that is to consider events within far too narrow a context. He had a much stronger, longer-term want not to be mugged. That want — considered in the larger context — was thwarted by his actually being mugged. The mugger was an external constraint that prevented the victim from doing what he wanted to do. So the victim was not free to go about his business unmolested, and therefore he was not free — for entirely “negative” reasons.

Notice that the word ‘free’ applies both to agents and to acts, and furthermore, acts have to be considered within contexts of varying scope. This invites confusion, as the word’s meaning can slide almost imperceptibly between them. (I started the last paragraph with a subtle shift of my own by giving an answer about an agent to a question about an act.)

When the negative concept of freedom seems to suggest that a man being mugged is “free” to give money to his mugger, some are drawn to the idea that we need a more robust concept of freedom than this negative one. And here thoughts usually turn to autonomy — to the idea of self-rule, of being the author of one’s own acts. It sounds silly or sinister to say that the mugging victim was “free” to hand over his money, because he lacks autonomy. The next obvious step is to embrace a concept of freedom that links it with autonomy.

The concept of autonomy is quite similar to that of power, specifically inner strength. To achieve something, we don’t just need an absence of external obstacles, we also need the wherewithal to act — the “muscle”, if you like, for movement to occur.

I think we need to proceed carefully here. To act at all, we need power — quite literally we need muscle to lift a finger, and in an extended sense we need various mental abilities. To act successfully — which goes beyond merely acting — we need an absence of obstacles that would prevent our acts achieving their goals. We should observe and respect this distinction. Greater power tends to bring with it greater freedom, but power and freedom remain distinct concepts, as one is a prerequisite of action, while the other is a prerequisite of success.

Hobbes memorably said that a man “fastened to his bed by sickness” did not lack freedom but power. In saying this, Hobbes exhibited a remarkable degree of political sensitivity. We have a legitimate prima facie claim against other agents who put limits on our freedom, but no such claim against mere circumstances (rather than agents) that make us internally weak. (Of course what one historical era counts as weakness can later be regarded as the effect of human agency.)

It seems to me that we need both a concept of power, and a distinct concept of freedom. But as far as freedom is concerned, the negative concept is all anyone needs. Most attempts to define the positive concept are in fact just alternative ways of defining the negative concept. “Freedom from” and “freedom to” are inter-definable, the two definitions in effect pointing to figure and ground that share lines of demarcation. Freedom to do X is just the same thing as an ability to do X thanks to the lack of external constraints that prevent one doing X. To be an autonomous agent is to have both the power to act, and the freedom to act, the latter understood negatively.

Yet, there is a positive concept of freedom. I know this because Rousseau used it, Marx used it, and it is presupposed in almost every ringing patriotic declaration of national freedom. This concept of freedom is expressed in rather mystical-sounding claims that to be free one must partake in the “general will”; that one can be “forced to be free”; that one must beware of “false consciousness”; that one’s “true self” must take control over one’s merely “empirical self”; that being free means embracing the “destiny of the nation”; or whatever.

As I see it, the essential difference between positive and negative freedom is this: having positive freedom means more than simply being able to get what you want — it means wanting the right things, usually understood in some implicitly moral sense. The various goods that are thought to empower those who have positive freedom — such as education, “strength of will”, etc. — are things that many people do not in fact strive for. But according to those who understand freedom positively, they ought to strive for them, for their own empowerment.

Implicit “oughts” are an essential ingredient in the positive concept. In Rousseau’s terms, being free means partaking in the “general will”, in other words not simply pursuing goals one already happens to have, but adopting larger goals as one’s own. Only with that essential extra ingredient can people be “forced to be free” or considered unfree if their “empirical selves” fall short of the “self-realisation” enjoyed by their “true selves” (in Berlin’s terminology). Ideas such as “false consciousness” only make sense against the background assumption that being free means wanting the right things as well as being able to achieve them.

These mystical-sounding appeals aren’t exactly to autonomy per se, but to something like the additional power that agents would acquire if they adopted goals — equality, justice, truth, whatever — shared by members of a group.

This is a deeply illiberal understanding of freedom, and I think a confusion of power and freedom lies at the root of the positive concept.

Have we had enough of experts?

Michael Gove’s remark that “the people of this country have had enough of experts” has become the most quoted incomplete quotation since “there’s no such thing as society”.

A more complete version goes like this: “the people of this country have had enough of experts saying that they know what is best.”

That extra bit is important, because it shows that Gove was not referring to people with knowhow, i.e. people with practical skills, but to people who claim to know that something or other is the case (or know that something should be pursued as a goal). That’s a vital difference.

We all accept that some of us have practical skills or abilities that others don’t have. Pilots are better at landing planes than non-pilots. By some miracle, I was able to put new tyres on my bike yesterday. And so on. No one means to disparage this strictly practical sort of “expertise”.

But when we move on from knowhow to claims to know that something is the case, things are quite different. The main difference between them is that a claim is true or false, but a practical skill is neither true nor false. It’s just “there” in an agent’s repertoire. It isn’t tested in the same way as claims such as scientific hypotheses are tested, but it is “put to the test” in the sense that we can quite easily judge how well someone is driving a bus, playing a violin, fixing the plumbing, or whatever. We can see practical expertise with our own eyes, especially its results, and so we can fairly reliably check whether someone has it.

The difference is especially sharp with claims in areas that are highly specialised, speculative, tentative, exploratory, theoretical, unusual, technical, complicated, abstract, arcane, etc. (henceforth I’ll just say “specialised”). With specialised claims, unless we are specialists ourselves, the best most of us can do is take someone else’s word for it, usually that of a supposed authority. Typically, such an authority will be someone with similar qualifications to the person making the claim. To find out whether a theologian’s specialised theological claim can be trusted, it seems we have to ask another theologian.

I hope it’s obvious how problematic this “non-independent checking” of expertise is bound to be. I’ll leave it as an exercise to decide whether “peer review” fits this pattern.

Taking the word of an authority as a guide to truth is so antithetical to the scientific enterprise that one of science’s most highly respected bodies — the Royal Society — adopted as its motto an explicit warning not to do it: nullius in verba.

But even if we are lucky and have enough specialised training of our own not to have to take anyone else’s word for it, specialised claims are still “long shots” in an epistemic sense. I’ll try to explain. We can’t have absolute certainty about any sort of factual claim, but we can have more confidence in our beliefs about everyday things than we can in our beliefs about non-everyday things. For example, we can tell whether it’s raining or not just by looking out of the window. There’s a direct link (via light and the eyes) between the rain falling from the sky and our mental state of believing it’s raining. So direct is this link that the beliefs it sustains are formed in a reliable way: usually, if it is raining, we believe it’s raining; and if it’s not raining, we believe it isn’t raining. Forming beliefs about everyday matters like these is as reliable as “pushing a thumbtack into a noticeboard right in front of us”. Forming beliefs about specialised matters, by contrast, is as unreliable as “shooting an arrow at a distant target”: it’s riskier — we’re more likely to “miss”, i.e. to get it wrong.

Science is one of the most valuable of human enterprises because of its ability to reveal the hidden structures of reality. But in doing so the claims it makes are like arrows shot at distant targets. These shots at distant targets are often revelatory, but they’re less certain than the more obvious truths of more pedestrian pursuits. The history of science bears this out: every branch is a string of theories once accepted as true, but later shown to be false. We have to accept that much of what is currently accepted in science is also bound to be exposed as false in the future. And what applies to science, where testing is de rigueur, applies a fortiori to specialised disciplines where there is less testing, such as philosophy and economics.

Language often plays tricks on us, especially when a single word refers to more than one thing. Words like ‘expertise’ and ‘expert’ are ambiguous in just that way. They can apply to practical skills in the hands of evidently capable agents, or to claims made by specialists using distinctly unreliable opinion-forming methods — methods which always include soaking up the current orthodoxy of their peers. Let us cherish and respect the former, but treat the latter with due scepticism.

It’s especially important to be on our guard against this ambiguity when a single person seems to be in possession of both sorts of “expertise”. For example, a good doctor can exhibit the first sort of expertise by routinely diagnosing and curing illnesses. But one and the same doctor is also likely to have specialised opinions about (say) preventative medicine. The first is admirable. The second may sound impressive, but it’s really very much less trustworthy than it sounds coming from someone we already recognise as a “good doctor”. Yet we refer to both as “expertise”, and I think we are inclined to trust both despite their very different epistemic status. We admire the person for the first sort of expertise, but then exaggerate his skill in the second.

Modern medicine has recently come to realise that its own advice on saturated fats — so confidently drummed into the ignorant masses for decades — is probably mistaken. This is absolutely typical of specialised opinion. There are abundant examples of specialised opinions coming to grief in much the same way in other disciplines.

The confusion of the two senses of the word ‘expert’ is so insidious that many people can’t resist the lure of expert opinion. They think it’s laughable or ridiculous to be more sceptical about it than about everyday opinion. When you point out to them that the opinion of an expert on almost any matter conflicts with the opinion of some other expert on exactly the same matter, they typically appeal to the majority: if most of the experts agree, they say, then the rest of us should take that as authoritative. But that hardly settles things, as any such opinion currently held by the majority of experts was at a previous time the opinion of a minority of experts, and going back still further, before the idea occurred to anyone, it was the opinion of no experts at all.

A show of hands is not a reliable way of serving truth — the question of God’s existence is not to be settled by calling for a vote in a roomful of theologians. Nor is the question of whether to stay in the EU settled by a vote among economists.

On a given topic in a given area of specialisation, most ordinary people simply won’t have any opinion at all. For example, I don’t have an opinion about quantitative easing. We might admire the diligence of anyone who does have an opinion about it, but we mustn’t allow ourselves to assume that his opinion is true. Very often specialised opinion is simply the less common alternative to having no opinion at all.

The revelatory power of science doesn’t depend on how confident we can be in the claims it makes, but when we make rational political decisions, confidence really does matter. That’s why cautious conservatives (small C) tend to be uneasy about specialist opinion in politics. Edmund Burke, “father of modern conservatism”, singled out philosophers and economists as being exactly the wrong sort of people to entrust with critical political decisions. Better decisions are more likely to be made by ordinary people from various walks of life, who have picked up practical skills through everyday living and working. It may not sound as impressive as mighty romantic schemes for future utopias, but a nation can suffer worse fates than becoming “a nation of shopkeepers”.

So far, I’ve taken “knowledge of what is best” to mean knowledge of factual matters, so that the experts Gove thinks we’ve “had enough of” presume to tell others about “is”s rather than “ought”s. A more obvious alternative is to take it to mean “knowledge of what is valuable”.

Here I follow Hume in taking a very simple approach. What is valuable is just what agents regard as valuable, what they treat as having value, what they choose, what they strive for in action, and so on. In a word, what is best for anyone is simply what they prefer. But over their own preferences, each individual is “sovereign”, as JS Mill put it. No individual’s preference can be gainsaid by any other individual. To do so would be a sort of usurpation. For example, homosexuals prefer to have sex with people of the same sex. No expert could conceivably overrule that preference, because homosexual desires, being desires, are neither true nor false. This is a humane liberal insight as well as a Humean point of logic: you can’t derive an “ought” from an “is”. “The heart wants what the heart wants”, in other words, and no expert can do anything about that, however big-headed an expert he may be.

In a liberal democracy, voting has to be understood as an expression of preference rather than the utterance of an opinion. What a voter says he prefers when he casts his vote can’t be gainsaid by an expert telling him that he doesn’t want “the right thing” enough.

Ah well… it’s all academic now. On the most superficial level, Gove was evidently right: the referendum result confirmed that UK voters had indeed had enough of experts telling them to vote Remain, and the majority voted Leave instead. I would have voted Remain had I lived in the UK, but as the result was becoming clear, I changed my mind because I’m a democrat. I recommend other Remain voters do so too.


Is usage of the word ‘terrorist’ racist?

A terrorist is a person who deliberately targets non-combatants of some group seen as “the enemy”. The aim is not to kill as many of them as possible, but rather to instil fear in others who belong to the hated group. Terrorists hope the fear they can generate in other members of the hated group will make them modify their political behaviour, in effect changing their way of thinking about a political issue. The aim of the violence is pour encourager les autres in the hope of bringing a political goal closer.

A bomb planted in a pub may “pointlessly” kill 10 innocent drinkers, but its real purpose is to bring 1000 useful idiots round to the terrorists’ way of thinking. The idea is to get them to have thoughts like these: “The people who did this must be very angry; their anger must be the result of having a serious grievance; intellectuals like us must do what we can to peacefully redress that grievance.” And so on.

Because the goal is political, and because the violence is aimed at changing the thinking of quite large numbers of people, we also normally think of terrorists as being organised, at least to the extent of belonging to a recognised group who share a political goal.

It’s often said that “one person’s terrorist is another person’s freedom fighter”, and there’s some truth to that. The creation of new states often involves acts of great brutality and the more or less deliberate targeting of civilians. But it’s also often said that we tend to classify people who have dark skin as “terrorists”, and exonerate (if that’s the word) light-skinned people as merely “mentally disturbed”. I think that’s unfair, and that most ordinary users of English do in fact use the word ‘terrorist’ reasonably consistently.

Killing is serious. Most of us never kill anyone, certainly not on purpose. Few of us think anyone is guilty of any fault so bad that they “deserve to die”. To kill people known to be innocent of such faults is a very disturbed thing to do. It requires suspending the everyday judgement that individuals are to blame for what they themselves do, and replacing it with the “assumed blame” of simply belonging to a group. Because such groups often have an identifiable ethnicity, terrorism is akin to extreme forms of racism, such as Ku Klux Klan lynchings of black people simply because they are black. I think we have to accept that people who engage in indiscriminate violence like this, who dress up in identity-hiding costumes intended to frighten, and all the rest of it, must indeed be mentally disturbed.

Of course the converse is not true. But the connection between mental disturbance and terrorism is firm enough for us to confuse them. Do we systematically apply labels in such a way that we are guilty of the very racism I’ve just suggested terrorism amounts to? — I don’t think so. In recent decades, most sensible people unhesitatingly described the Provisional IRA as “terrorists”, despite the pallor of their skin. Those that didn’t so describe them were not motivated by a racist urge to exonerate white skin, but by political sympathy for the Provisional IRA. In many cases that sympathy was the intended result of IRA violence. The same applies to the UDA and other equally white organisations we unambiguously label “terrorist”.

I think what really matters here is degree of organisation. We are unlikely to call a lone gunman who goes apeshit in a gay club a “terrorist” if he does not belong to an organised gay-killing group, no matter what the colour of his skin may be. We are unlikely to call a lone attacker who kills an MP a “terrorist” either, for similar reasons. But a group of people who are sufficiently organised to plan an attack in advance, to coordinate things among themselves, to arrange transportation, weaponry, and so on: these surely are “terrorists”. And we classify them as such because of their methods, political aims and degree of organisation rather than because of the colour of their skin.

If the word ‘terrorist’ is nowadays more commonly applied to dark-skinned people than before, that is probably because in recent decades fewer terrorists have been descended from people of Northern latitudes. It was not always so.

The urge to blame people targeted by terrorists (by accusing them of racism) instead of terrorists themselves is of course one of the intended results of terrorism.

The most amazing sporting event of all time?

Today’s news sources are talking about Leicester City’s winning the Premier League as a sort of miracle. The bookies’ initially-offered odds of “5000 to 1” have morphed into a supposedly scientific/mathematical measure of probability — we are being told that Leicester City had “a slim chance of only 1 in 5000” of winning the Premier League. Yet amazingly, they did win it! We are given to believe that “1 in 5000” is a numerical measure of how surprised we should be at the fact that they did in fact win.

That is ridiculous. The Premier League consists of 20 teams, chosen specifically for their ability to beat other teams. Suppose instead of the Premier League on its own, we imagine a much larger competitive free-for-all containing the Premier League, plus the First Division below them, plus the Second Division below them, and so on, till we have 5000 teams altogether playing against each other.

If we knew nothing whatsoever about any given team, in that situation we might assign a “probability of only 1 in 5000” that it would win. In other words, if we picked a team randomly from the 5000, and did so repeatedly, then in the long run we would pick the winning team about once in every 5000 attempts to do so.

But now suppose we are told something about a given team: that it is in the top 20. That should make us raise our numerical assessment of its chances of winning the free-for-all. If we were further told that a team in the top 20 never loses to a team in the bottom 4980, we would raise our estimate much further: the winner must then be one of the top 20, so knowing nothing else about our team, its chances of winning are 1 in 20, not 1 in 5000. It would be something like playing the Monty Hall game, except that instead of one out of three available doors being ruled out, 4980 out of 5000 available doors are ruled out.

But that, in effect, is what limiting the free-for-all to only the Premier League does. It means that if we know nothing at all about a team, the repeated act of picking one out randomly in the hope of choosing the winner would be successful much more often than 1 in 5000 times.
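
The arithmetic here can be sketched with a small simulation. This is a hypothetical model, not real football data: I simply assume the winner is always one of the top 20, and that, knowing nothing else, each top-20 team is equally likely to win. Under those assumptions, picking blindly from all 5000 teams succeeds about 1 in 5000 times in the long run, while picking only from the top 20 succeeds about 1 in 20 times.

```python
import random

random.seed(0)

N_TEAMS = 5000   # the imagined free-for-all
TOP = 20         # the "Premier League" subset

# Hypothetical model: a top-20 team never loses to a team below it,
# so the eventual winner is always one of the top 20, each equally likely.
trials = 200_000
hits_blind = 0        # pick randomly from all 5000 teams
hits_informed = 0     # pick randomly from the top 20 only

for _ in range(trials):
    winner = random.randrange(TOP)           # winner is some top-20 team
    if random.randrange(N_TEAMS) == winner:  # blind pick from all teams
        hits_blind += 1
    if random.randrange(TOP) == winner:      # informed pick from the top 20
        hits_informed += 1

print(hits_blind / trials)     # settles near 1/5000 = 0.0002
print(hits_informed / trials)  # settles near 1/20 = 0.05
```

The point of the sketch is only that extra knowledge about which pool the winner comes from raises the long-run success rate of random picking, from roughly 1 in 5000 to roughly 1 in 20.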

To lower the “chances of winning” in the face of further knowledge about a given team is to introduce capricious, subjective factors that cannot be relied on to make statistical judgements of relative frequency. They involve unrepeatable events or events that are not statistically lawlike, and so cannot be reliably extrapolated from. All we can do is guess about credibility here.

Casinos make money reliably because the behaviour of dice, cards, rotating cylinders etc. is statistically lawlike. For example, we know that in the long run about one sixth of rolls of pairs of dice will be doubles. But the behaviour of football teams in the Premier League is not at all lawlike. Bookies have to use numbers in their line of work, but let no one think these numbers correspond to measures of anything real or significant.
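
The dice claim, at least, is easy to check: a quick simulation (a minimal sketch) shows the long-run relative frequency of doubles settling near one sixth, which is exactly the lawlike behaviour casinos depend on.

```python
import random

random.seed(1)

rolls = 600_000
# A "double" is when both dice show the same face.
doubles = sum(
    1 for _ in range(rolls)
    if random.randint(1, 6) == random.randint(1, 6)
)

print(doubles / rolls)  # settles near 1/6 ≈ 0.1667
```

No comparable simulation could be written for “the probability that Leicester City wins the league”, because there is no repeatable, lawlike process to sample from — which is the contrast the paragraph above is drawing.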

I suggest that we should sharply distinguish statistical relative frequency and subjective judgements of credibility. Numbers measure the former, but their presence is a will o’ the wisp when we are dealing with the latter.

Antisemitism is a special sort of personal failing

“Labour left in denial over antisemitism” say the headlines. And it is uncanny how people can be blind to something so glaringly obvious to the rest of us. I think the apparent blindness of political extremists to their own antisemitism is systematic: it exists for philosophical reasons that are worth noting.

Political extremists tend to understand justice as a matter of “groups getting what they deserve”. For example, German Fascists thought “Aryans” deserve their “destiny as the master race” or some such hokum, while of course Jews deserve death. According to this way of thinking, individuals do not matter. What matters is the group an individual belongs to — usually an accident of birth. If one’s forebears mistreated others, one inherits their guilt. If one’s forebears were mistreated, one inherits their entitlement to better things.

Extremists on the left have a much more pleasant way of expressing essentially the same view about collective guilt and the supposed entitlements of groups: they put it in terms of social justice. “We are on the side of the oppressed”, they claim. Of course this also puts them on the side opposed to “the oppressors”. Such side-taking has many forms, but as a rule it is regarded as acceptable to “punch upwards” and unacceptable to “punch downwards”. For example, imitating someone’s accent for satire or pure mirth is considered fair game, as long as the target is “up” — a toff, say, or someone with a pretentious way of speaking. But imitating Ken Livingstone’s “common” accent would be considered completely out of order. It doesn’t matter if the toff is now impoverished or if the “common” man is now a rich and powerful political figure — what matters is birth. Those of working class pedigree are uniquely free of sin, cleansed by their own victimhood at the hands of an “elite”.

These ways of thinking are perfectly suited to antisemites with their standard-issue antisemitic tropes such as that Jews are rich, cunning, bank-controlling, behind-the-scenes international political puppet masters, that Mossad is behind every terrorist outrage including those that benefit Israel’s enemies, and all the ludicrous rest of it.

Antisemites are duly attracted to the extreme left, at least in the UK and Ireland, and their presence there in turn prompts accusations of antisemitism. Yet when the accused sincerely ask themselves whether they are antisemites, their thoughts go like this: “Antisemitism is a form of racism, and ‘racism’ means ‘oppressing people regarded as inferior’; but I entertain no such thoughts, especially towards Jews, so I’m innocent of the charge. I remain a committed anti-racist on the side of the oppressed.”

That train of thought is gruesomely self-congratulatory, but I think we should acknowledge that there is little willingness to do harm or to exploit others in it. Antisemitism is a special sort of racism. It’s more insidious than other forms of racism, because those in its grip find no malice in themselves. That makes antisemitism a special sort of personal failing, one where culpability lies not in malice but in lack of reflection. It’s a philosophical failing, of people who have not taken the obligation to know thyself seriously enough. Western antisemites from the Christian tradition are people who have absorbed much of what is worst about Christianity, yet purged themselves of too little of it. This applies in particular to the doctrine of original sin, which says that blame is inherited. It also applies to ethics in which moral rightness is understood simply as a matter of “meaning well”, of keeping one’s nose clean, of acting out of virtue rather than vice, of avoiding malice.

I think we should treat hatred of Jews among Muslims of the Middle East as something different from Western antisemitism, although the former owes much to the latter. I don’t mean to single out for blame anyone who is historically, scientifically or culturally illiterate, who is unable to think beyond the narrow confines of an inadequate education. But I do mean to blame people who are culturally equipped to reflect on the failings of their own Christian tradition, who are morally obliged to do so, yet who have neglected to do so. In the West, we all must ask ourselves how one of the greatest societies in the world, a politically sophisticated democracy whose people created the greatest art of mankind, somehow managed to create hell on earth.

“If we don’t learn the lessons these pictures teach, night will fall”

Self-determination versus nationalism

Self-determination is a wonderful thing, but nationalism is a terrible thing. The difference between them is this. Self-determination is guided by a principle: if a piece of territory is in dispute, then its sovereignty should be settled by asking the people who live there. It is not something to be settled by asking people who do not live there.

For example, according to the principle of self-determination, the question of the sovereignty of the Falkland Islands is very clear. The overwhelming majority of the people who live in the disputed territory want it to remain British. The fact that there may be many more living in Argentina who would prefer “Las Malvinas” to be part of Argentina is irrelevant. Or at least it’s irrelevant according to the principle of self-determination, because they live outside the disputed territory.

Unlike self-determination, nationalism is not guided by principle. Instead, it takes its direction from an ideal of what is best for an identifiable group. This differs from one group of people to the next (and can differ between individuals who have different ideals for the same identifiable group of people). So although strict compliance with the principle of self-determination cannot generate conflict, rival nationalisms can come into conflict, and often do. Over the course of history, perhaps more people have died in disputes over territory than in any other sort of conflict. We are a tribal species, and nationalism is tribalism on the largest scale.

Self-determination is democratic. All that counts is what the majority of a group of individuals (i.e. the people who live in the disputed territory) actually prefer. But nationalism tends to count not what people do as a matter of fact prefer, but what they would prefer in an ideal world, or should prefer for the ideal of nationhood to be realised.

The question of the sovereignty of the Falkland Islands is about as clear as it gets, at least according to the principle of self-determination. Similar questions about Wales, Northern Ireland and Scotland (i.e. should they remain part of the United Kingdom?) are somewhat less clear, because the respective majorities are slimmer. In such cases, principled attachment to self-determination often merges into unprincipled nationalism. The discussion tends to shift imperceptibly from what people do in fact prefer, to what the right sort of people prefer, or to what ordinary people would prefer if they were less ordinary by being better educated, or to what people should prefer according to the ideal of what is best for the group.

Very often, this appeal to an ideal trades on some idea of ethnic purity. The right sort of Scotsman is a “true” Scotsman; the better-educated Welshman speaks the Welsh language; authentic Irishmen enjoy traditional Irish music and play Gaelic games; and so on. The ones who don’t are supposedly remiss in some vaguely “moral” way, and it’s assumed that they should be guided by the ideal. Nationalism nearly always enthusiastically promotes a nation’s language, art, and distinct ways of life — and despises the other language, the other art, and the other, less authentic, more corrupted, ethnically “impure” ways of life.

So even though a clear majority of the people who live in a disputed territory may prefer the status quo, it very often happens that a nationalistic movement blurs matters by making an issue of authenticity and ethnic purity, appealing to ideals instead of actual majority preference. By blurring the issue, and by appealing to tribal sentiments, nationalism can give it the appearance of being a “live issue”, even when the principle of self-determination can settle it easily and unambiguously. Typically, the slimmer the majority, the greater the potential for nationalist blurring of the issue.

I think it’s wonderful for a language to work as a means of communication, but terrible for a language to become an expression of authenticity or a symbol of ethnic purity. I’d say much the same about art, and other forms of culture and ways of life. Apart from the danger of political conflict, there is the damage done to language, art and other forms of culture by turning them into vehicles of struggle. Language, art and other forms of culture are enriched by intermingling rather than insulation, they are improved by a broader rather than a narrower range of influences.

Looking back 100 years to the Irish Rising of 1916, I find very little to like in its leaders. They overruled what most Irish people actually wanted at the time, and instead appealed to a nationalistic ideal of what they should have preferred. I admire the bravery of the 1916 leaders, but I don’t like what they did with it.