We are not culpable for “wrong opinions”

When we act, our bodily movements are caused by mental states. These mental states consist of a desire to achieve a particular goal, and some relevant beliefs which help us “steer a course through the world” towards achieving the goal.

It all means a human agent is a bit like a sophisticated version of a cruise missile, which is programmed to reach a target, and to do something (usually explode) when it gets there. It steers a course towards its target by comparing the terrain it flies over with its onboard computer map.

Although both the map and the targeting are necessary for it to reach its goal, the map is “neutral” in the sense that it only contains information about the outside world. It is compatible with the missile hitting any other target within the mapped area, and with its doing good things like delivering medicine or food aid when it reaches its target (not just doing something bad like exploding).

If the “act” of a cruise missile is to be praised or condemned, we judge what it is programmed to do, and where. We do not judge its map, whose greater or lesser accuracy simply results in greater or lesser efficiency in fulfilling the aim of the programming.

It should be the same with human agents. If we praise or condemn what they do, it should be with reference to the good or evil they intend to do, or are willing to do, and to whom. We should suspend judgement of an agent’s beliefs when we judge his actions, as beliefs are “neutral” with respect to the good or evil of what they help to achieve, just like the cruise missile’s onboard map. Like the accuracy of the missile’s map, the truth or falsity of an agent’s beliefs affects his success or efficiency in achieving goals, but the beliefs do not set any goals. A belief can be true or false, but it can’t be good or bad. The worst an opinion can be is false, rather than “aimed at an evil goal”.

Despite the neutral role of beliefs, some people blame others for having the “wrong” opinions, or in other words for not believing what they “should believe”. For example, many Muslims think “apostasy” should be punished by death. Many Westerners think “denialism” should be ostracised or worse.

Those are remarkably similar views, and both are primitive, in the worst sense of the word. They belong to a backward state of society. They are inspired by confused understandings of agency, and we should reject them. If someone has false beliefs, he has either had bad luck (by being exposed to unreliable sources of knowledge) or he is epistemically ill-equipped. In neither case is he culpable.

Freedom trumps power

Imagine an über-homophobe. He doesn’t just hate homosexuals and avoid homosexual activity himself — the very idea of other people engaging in homosexual acts makes him sick with repulsion and fury.

He may not describe his attitudes in terms of hate. He may prefer to express them as a sort of “love”, perhaps as a virtuous reverence for heterosexuality. “My heart is with heterosexuality”, he may say.

Whether or not we accept his euphemistic spin on it, to say he has “strong feelings” is to understate the case. He has a super-strong urge to prevent homosexuals “doing whatever they do”. The reality of everyday homosexual acts routinely sends him into a towering rage, or reduces him to bouts of uncontrollable weeping. He is “offended” to a degree that’s “off the scale of offence”.

Question: Should homosexuals curb their sexual activity to spare this unfortunate man’s feelings? Should efforts be made to prevent him taking such immeasurably deep offence?

Answer: Of course not. Not by an inch. Not by the tiniest fraction of a millimetre. An adult’s freedom to engage in sexual acts with other consenting adults trumps anyone else’s urge to prevent him engaging in such acts.

However pathetically our über-homophobe may try to paint himself as the “victim” of other people’s “offensiveness”, the unalterable fact is that he wants power over others rather than freedom from others. His complaint amounts to an illegitimate claim to control their behaviour.

Freedom (and the legal rights that protect it) is more important than any ability to direct other people’s behaviour. Freedom trumps power: the choices people make for themselves always count for more than “feelings” and urges others may have to overrule those choices. “Feelings” and “offence” may be important between members of a family, but they count for nothing in the political sphere.

To pander to this unfortunate fellow’s aversion would certainly harm those whose freedoms it restricts. But it would probably harm him as well. Homosexuality isn’t going to go away, and he may as well just get used to that fact. Sooner or later he is bound to run into it, to his further chagrin. It may well be salutary — like immunisation — to deliberately offend him.

The same applies to other forms of giving and taking “offence” and “hurting people’s feelings”. In particular, it applies to Muslim “offence” taken at cartoons. Personally, I suspect it’s mostly faked: I’d guess many Muslims don’t give a rat’s ass about “insults to the Prophet”, and are simply itching for confrontation with Western people and Western values. But even if their “feelings” are entirely genuine, they still don’t count. No one’s “feelings” count when we’re talking about freedom.

The significance of desire

Most of us have an under-inflated concept of desire, and an over-inflated concept of belief. We happily accept that beliefs are fairly detailed representational states — so that taken together they prompt the metaphor of an “inner world”. But we tend to think of desires as much vaguer or thinner on detail than beliefs, and perhaps not even as representational states at all. Why is this way of thinking so common? — Here are a few suggestions:

First, we tend to specify desires with reference to objects rather than states of affairs. For example, we say “I’d like some chocolate” rather than “I have a desire to be eating chocolate”, or “I need some WD-40” instead of “I want my door hinges to be lubricated with WD-40”. Being human, we can safely assume that other humans have broadly similar goals to our own, so it’s often linguistically redundant to explicitly specify these goals as states of affairs. This can give the mistaken impression that desires do not represent states of affairs at all. In other words, it leads us to overlook the fact that desires represent the same sorts of things as make beliefs true or false.

Second, in general the states of affairs desires are aimed at are not yet realised. When we believe something, or at any rate when we believe something about the past or present, if our belief is true then the state of affairs that makes it true is a “fact”, with much attendant “detail”. When we desire something, on the other hand, the state of affairs that would satisfy it is not yet a fact. So for the time being it’s a “mere idea”, something more like Pegasus than a real horse grazing in a real field at this very moment. Any attendant “detail” is more obviously “imaginary”. We probably err on the side of assuming our beliefs are more detailed than they really are, as if they inherit some of the detail of the fact that makes them true, but with desires, we err in the opposite direction.

Third, in the Western philosophical tradition from Plato through Descartes (and in other traditions too), we tend to think of mental states as conscious experiences rather than as functional representational states that direct the behaviour of agents. This is changing, of course, with the continuing influence of American pragmatism and of the later Wittgenstein, as well as with the growth of functionalism in the philosophy of mind. But it is still very common to assume that a desire is a mere “feeling” or emotion rather than an essential part of the mechanism of action. This assumption is promoted still further by the possibility of wishing (and expressing wishes) for states of affairs that as agents we can play no part in bringing about (such as “I wish it would snow!”). It all suggests that desire is something rather touchy-feely and causally unserious. Worse, it can suggest that the real “purpose” of desire is nothing more than the having of a further sort of conscious experience — pleasure, or whatever.

We must reject this assumption that desire is a “feeling” (although of course specific desires are usually accompanied by distinctive feelings). Rather, a desire is a causally efficacious and typically fairly detailed representational mental state aimed at bringing about a real state of affairs external to the mind. Desires are complementary to beliefs, which are also representational mental states. Instead of bringing about real states of affairs external to the mind via behaviour, beliefs are typically brought about by these states of affairs, often via observation. Although there is something to the claim that desires are less detailed than beliefs, I think we should take Hume’s lead in giving desires priority: a desire (or “passion” as Hume put it) is the mainspring of any act. Whenever we act, our behaviour is aimed at achieving a goal; desire is the mental state that establishes such a goal, and beliefs (or “reason”) can do no more than help us steer a course towards achieving it. Hence “reason is the slave of the passions”.

Although we do not literally have an “inner world” of belief in our minds, together our beliefs form a sort of “map of the world” — the world as we take it to be. But that’s only half the story. Together our desires form a sort of “blueprint for the world” — the world as we would like it to become. The “map” and the “blueprint” contain the two essential components of the causation of all acts.
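The “map” and “blueprint” picture can be made vivid with a toy sketch. This is purely illustrative (all the names and the little scenario are invented, not part of the argument above): beliefs are a store of information about how the world is, desires pick out states of affairs to bring about, and action selection consults the map only in the service of the blueprint.

```python
# Toy sketch (all names hypothetical): beliefs as a "map", desires as a
# "blueprint". The agent picks whichever available action its map predicts
# will realise a state on its blueprint. The map itself sets no goals:
# swap the blueprint and the very same map serves a completely different end.

beliefs = {            # the "map": the world as the agent takes it to be
    "fridge": "has_chocolate",
    "shop": "has_chocolate",
    "park": "no_chocolate",
}

desires = ["has_chocolate"]   # the "blueprint": states of affairs to bring about

def choose_action(beliefs, desires, actions):
    """Return the first action the map says would achieve a desired state."""
    for action, place in actions.items():
        if beliefs.get(place) in desires:
            return action
    return None   # no action the agent believes in serves any goal

actions = {"open_fridge": "fridge", "walk_to_shop": "shop", "walk_to_park": "park"}

print(choose_action(beliefs, desires, actions))  # prints "open_fridge"
```

Note that the `beliefs` dictionary is entirely “neutral” in the sense used above: it is compatible with any blueprint whatsoever, good or evil, and improving its accuracy only makes the agent more efficient at whatever the blueprint specifies.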

The traditional under-inflated way of thinking about desire tends to ignore the “blueprint” and puts far too much emphasis on the “map” — it imbues it with more detail than is really there, and it gives it causal powers that it simply doesn’t have. This often emerges in the assumption that specific sorts of belief are associated with specific sorts of acts.

A classic age-old example is the thought that belief in God causes people to behave in more “moral, God-fearing” ways. But of course such belief can only cause the valued sort of behaviour in conjunction with specific desires — to do what God wants, to avoid punishment, and so on.

Nowadays, much effort is expended on promoting beliefs such as “all races are exactly alike in respect of ability” and “there are no grey areas in rape”. The hope is that simply having such beliefs will discourage racist or sexist behaviour. But as we have just seen, behaviour of any sort is caused not only by our “map” of beliefs, but crucially — and more saliently, because desires are classified according to their goals — by our “blueprint” of desires as well.

The “attenuated” understanding of desire has a couple of really nasty side-effects. One is a blurring of the distinction between beliefs and desires, and the thought that desires can be “implanted” in an agent’s mind in the same way as many beliefs can: via observation. So if we watch violence on television, we will want to be violent ourselves. If we see ads on TV, we will want what they advertise. And so on. This gives rise to the sort of puritanism that discourages or even forbids the expression of “unhelpful” ideas. Traditional religious puritanism frowned on the expression of atheistic or agnostic views, and kept Hume out of a proper academic job. No doubt there are many lesser yet still talented people who are nowadays excluded from academic jobs for having beliefs that are currently regarded as “unhelpful”.

The side-effect that really makes me queasy is not the exclusion of talent from the groves of academe and the media, but the active promotion of falsity for the sake of our general moral betterment. For example, although I don’t think there are any significant differences between races as far as abilities are concerned, the claim that there are none at all is statistically vanishingly unlikely. If there are differences between individuals — and there are — there are bound to be differences between groups of individuals. Yet we are enjoined never to utter the forbidden words of that obvious truth. This is sick-making, and anyone who cares about truth should speak out against its deliberate suppression.

We must consider what single-sex marriage commits us to

Every week I seem to say something on Twitter that is almost universally misunderstood. Last week I said that there was nothing of value in equality per se, which many took to mean I was a right-wing lunatic.

This week I said that if we commit ourselves to allowing single-sex marriage, consistency demands that we also commit ourselves to a wider range of other sorts of marriage, sorts that we have hitherto disallowed. For example, we might allow some incestuous marriages.

Cue moralistic outrage. “You’re equating homosexuality and incest!” — “Slippery slope arguments are fallacious!” — “You’re a dirty homophobe for opposing single-sex marriage!” And so on.

First, I’m not “equating” homosexuality and incest at all. They’re obviously completely different. Most homosexual acts are morally neutral, whereas most incestuous acts are morally wrong. But both are routinely observed in the sexual behaviour of many species. Although they are “minority” activities, they are recognisably common — enough to be described as biologically “normal”.

Second, many slippery slope “arguments” (if they count as arguments at all) are not “fallacious” (if that’s the appropriate word). We often do have reason to believe that small initial changes portend much larger changes to come. A hundred years ago, opponents of universal suffrage argued that women should not be allowed to vote, because that would open the floodgates to all sorts of social changes. And they were right. It did lead to all sorts of social changes, most of which most of us warmly welcome.

But in any case I’m not worried at all about any slippery slope, nor am I warning of any such thing. Incestuous sex will always be a minority activity, and genuine, consensual incestuous love so uncommon that very few will ever want to seal their relationship by marrying each other. There are no “floodgates” about to open here.

Third, I am not opposed to single-sex marriage. (Nor would I be a “homophobe” if I were.) Rather, I’m trying to draw attention to some other commitments we inevitably take on if we are consistently committed to single-sex marriage.

Single-sex marriage is justified by a principle. That principle goes something like this: “if two consenting adults want their relationship to be recognised and sealed by law as marriage, the rest of society should not prevent them doing so”. If we deny consenting adults the legal right to marry, we are guilty of discrimination of a morally wrong sort. And it’s quite seriously wrong, I would argue, because the desire to marry — to marry the person one considers the love of one’s life — is a central part of human life and human flourishing.

Avoiding discrimination means “turning a blind eye” to differences, at least in law. We deliberately allow our commitment to a moral principle to override any personal distaste we may feel for people who are different in the way we are now deciding to treat as irrelevant.

By allowing people of the same sex to marry, we choose to override any distaste we may feel for homosexuality. (There must be some who feel such distaste, as we are told homophobia is so common.) We choose to treat their incapacity to procreate as irrelevant. We do the same for older people, or people who are barren for other reasons. We allow people who carry genetic diseases to marry, even though we know that if they were to procreate, their children may suffer serious disability. Our commitment to the above principle — a humane and decent principle guided by respect for erotic love — leads us to treat biologically ill-starred conditions as legally irrelevant. And a good thing too.

One such “ill-starred” condition is exemplified by Siegmund and Sieglinde in Wagner’s opera Die Walküre. As brother and sister who were separated when very young, they don’t recognise each other when they meet again as adults. But their instant affinity quickly grows into full human love. This love is not diminished by the discovery that they are siblings.

That sort of situation is common in mythology, scripture, and art. Incest is probably more common in such stories than homosexuality. However much we may disapprove of it, incestuous love must surely occur in real life, especially with the recently increased fluidity of families, greater frequency of separations in childhood, larger numbers of step-parents and half-siblings, and so on.

It seems to me that denying siblings the right to marry is an anachronism, or at least it will become an anachronism as soon as we allow homosexuals to marry, as I think we should. It conflicts with the basic principle that we commit ourselves to by allowing single-sex marriage.

Of course it is appalling that some parents rape their children. Of course the legal right to marry should be strictly limited to consenting adults. Of course consent cannot be given by an adult who is mentally ill or the traumatised victim of abuse. These things go without saying.

But as we consider the question of single-sex marriage, we should consider the broader possibilities that our guiding principle opens, and the wider commitments we are obliged to take on. It doesn’t matter that very few siblings or half-siblings will ever want to marry. The fact that some of them will is enough. We are obliged to consider the possibility, and what our response should be.

What I have learned in the past week is that the quality of debate over single-sex marriage is wretched. Well-meaning but unintelligent journalists pour politically correct syrup over real issues, and chicken out of robust debate with anyone who doesn’t accept their relentlessly and predictably orthodox views. I have no distaste for homosexuality myself, but I’m growing increasingly impatient with a “gay lobby” whose idea of debate is cheap victim-stancing or aggressive accusations of homophobia.

Bronowski on “absolute knowledge”

In this moving clip taken from the very end of his acclaimed TV series The Ascent of Man, Jacob Bronowski speaks of two great human evils.

The first is the idea that “the end justified the means” — or as I would put it: if a particular end is treated as supremely valuable, its pursuit can ride roughshod over the many other competing values that characterise human life.

The second is the idea that we can have “absolute knowledge”. What does Bronowski mean by “absolute knowledge”? To understand this, consider how he defends science against the charge that it dehumanises people. He is standing in front of a dark pond in the grounds of Auschwitz, where the ashes of millions of people were flushed. These people were not victims of science. They were not killed by gas, he says, but by “arrogance”, “dogma” and “ignorance”:

When people believe that they have absolute knowledge, with no test in reality, this [gesturing towards the pool of death] is how they behave. This is what men do when they aspire to the knowledge of gods. Science is a very human form of knowledge. We are always at the brink of the known — we always feel forward for what is to be hoped. Every judgement in science stands on the edge of error and is personal. Science is a tribute to what we can know although we are fallible.

Now Bronowski doesn’t embrace any sort of “postmodernist” nonsense along the lines of “truth is relative”. He uses the words ‘true’ and ‘false’ freely, and clearly thinks they mean the same for everyone. Rather, in denying that we have “absolute knowledge”, his focus is on the traditional “justification” condition on knowledge. (It was traditionally thought that when we know something, we believe it, it is true, and we have a rational assurance or “justification” in believing it.) Bronowski is saying that justification or assurance is never absolute. It isn’t simply that it isn’t total or 100% — we can’t even measure it in an objective way. We can never have an impersonal or numerical assurance of what we believe or ought to believe. Assurance always depends on what each individual already believes, and that always differs from one individual to the next.

Bronowski is a “fallibilist” with respect to knowledge. That is, we are often mistaken, but we can have knowledge despite the ever-present possibility of error. Knowledge is a matter of our beliefs actually being true of the world. It’s an aspiration, a project guided by hope — and it’s often a matter of sheer luck. When we have knowledge, it’s not because our assurance is “absolute”, but because as a matter of fact our hope has paid off, and we have stumbled upon theories that happen to be true. In science, we have to “feel forward” in a tentative, exploratory way by guessing and then testing our theories against reality. The result of such tests is not a numerical measure of “how likely our theories are to be true”, but various hints and suggestions that we are “on to something” — which are bound to strike different individuals in different subjective ways. That’s part of what Bronowski means when he says science is a very “human form of knowledge”.

Nowadays, hardly anyone thinks we can have absolute certainty. Even the Nazis didn’t think that. But there is another “level of assurance”, which Descartes called “moral certainty”. This is not “total assurance”, but “assurance enough” to act so as to achieve some end. If we think assurance is absolute, objective, measurable, or suchlike, then everyone is rationally obliged to act in the same way to achieve the same end. I think that is the Nazi poison that Bronowski has in mind.

I think we should take Bronowski’s warnings seriously, and beware of movements that put one overriding end above all the other human values. And beware of claims that assurance can be objective or numerically measured.

Why would anyone think such a thing? I think such thoughts have two ingredients. The first is ambiguity in words such as ‘likely’ and ‘probable’. In science and statistics these words refer exclusively to relative frequency — that is, to the numerical proportion of members of a class that have some property. Sometimes, when we know practically nothing about a repeated phenomenon, we have to make judgements guided by nothing better than relative frequency. For example, consider gambling with cards, or wondering about Earth-collisions by objects such as comets and asteroids. If the only thing we know about such phenomena is the relative frequency of various hands of poker or of near misses in the long run, that is all we have to guide our behaviour. That’s how casinos make a profit and how governments should make contingency plans for asteroid collisions — and allocate resources for floods. It’s better than nothing, and it’s “objective”, but it’s not a measure of how much assurance we can have in believing anything.
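The point about relative frequency can be illustrated concretely. A minimal sketch (the simulation and its numbers are mine, not Bronowski’s): relative frequency is nothing more than a long-run proportion, here the proportion of five-card hands containing at least one pair. The number is perfectly “objective”, yet it measures nothing about anyone’s assurance in believing anything.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def has_pair(hand):
    """True if any two cards in the hand share a rank (rank = card % 13)."""
    ranks = [card % 13 for card in hand]
    return len(set(ranks)) < len(ranks)

deck = list(range(52))       # 52 cards encoded as 0..51
trials = 100_000
hits = sum(has_pair(random.sample(deck, 5)) for _ in range(trials))

# The relative frequency: a proportion of members of a class (dealt hands)
# having a property (containing a pair). The exact combinatorial value is
# about 0.49; the simulation converges on it in the long run.
rel_freq = hits / trials
print(f"relative frequency of a paired hand: {rel_freq:.3f}")
```

This is the sort of figure a casino can build a profit on, and a government can build contingency plans on. What it is not is a measure of how much assurance any individual ought to have in any particular belief.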

Yet words such as ‘likely’ and ‘probable’ are often used in everyday parlance to refer to a supposedly objective assurance — assurance in believing that an individual event will occur, or that a given theory is true. Talk of numerical relative frequency often slides imperceptibly into talk of assurance.

The second ingredient is a worship of “science” in general — not this or that theory or branch of science, but the entire enterprise as if it were one monolithic body of assured knowledge. With this worship comes uncritical respect for “scientists” — not as practitioners of this or that branch of science, but as miracle workers whose opinions it is downright immoral to disagree with. Nowadays, it’s common to hear people proudly announcing that they “believe the science” — and implicitly shaming those who “refuse” to “believe the science”. That is a terrible state of affairs — and it represents a backward slide of civilisation. A descent rather than ascent.

Science consists of theories about the world. Many of these theories are about very abstract entities that can’t be observed directly. But none of them are about how much assurance we have that any scientific theory is true. Science doesn’t pronounce upon its own belief-worthiness. Anyone who says it does is either a fool or a fraud. That is to treat science as miraculous, and scientists as shamanistic miracle-workers, the purveyors of “absolute knowledge”.

Utilitarianism and the “golden rule”

According to JS Mill, “In the golden rule of Jesus of Nazareth, we read the complete spirit of the ethics of utility. To do as one would be done by, and to love one’s neighbour as oneself, constitute the ideal perfection of utilitarian morality.”

Mill was an avowed atheist. Why did he draw such a close connection between his own utilitarianism and the ethics of original Christianity?

Trivially, we all tend to maximise the satisfaction of our own preferences when we act. By choosing to do X rather than Y, we reveal that we prefer doing X to doing Y. To prefer X to Y is simply to “go for” X rather than Y.

Normally, we would like other people to maximise the satisfaction of our preferences when they act as well, because that would help us to achieve our goals, which we are already striving to achieve through our own actions. To have additional help in doing so — to have them throw their weight behind our own attempts to satisfy our own preferences as much as possible — is how we “would be done by” them.

So if we were to do to them as we would be done by them, we would try to satisfy their preferences as much as possible. And of course the same thing can be said symmetrically for them. So if the “golden rule” of doing as we would be done by were followed by everyone, everyone would be striving to satisfy preferences in general as much as possible.

If we understand interests as the satisfaction of preferences, as I think we should, it means that all interests count, no matter whose interests they may be. To be more precise: since preferences can be strong or weak, interests should be given due consideration — which means they should be respected for the actual strength of the preference they correspond to. When thinking about animals of the same species such as humans, “due” consideration nearly always means equal consideration.
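The idea of “due consideration” can be put in miniature. A toy sketch (the people, outcomes, and strengths are all invented for illustration): each preference is weighted by its strength and nothing else, so whose preference it is makes no difference, and the outcome satisfying the greatest total strength is chosen.

```python
# Toy sketch (names and numbers hypothetical): preferences weighted by
# strength alone, counted equally no matter whose they are. Cara's single
# strong preference can outweigh Ann's and Bob's milder ones combined.

preferences = {
    "ann":  {"picnic": 3, "cinema": 1},
    "bob":  {"picnic": 1, "cinema": 2},
    "cara": {"cinema": 4},
}

def best_outcome(preferences):
    """Return the outcome that satisfies the greatest total preference strength."""
    totals = {}
    for person_prefs in preferences.values():
        for outcome, strength in person_prefs.items():
            totals[outcome] = totals.get(outcome, 0) + strength
    return max(totals, key=totals.get)

print(best_outcome(preferences))  # cinema: 1 + 2 + 4 = 7 beats picnic's 3 + 1 = 4
```

The aggregation deliberately discards the names: that is the “equal consideration” doing its work.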

If morality were the only motivating factor in human life, and everyone accepted preference utilitarianism, then everyone would respect preferences in general. This would necessarily involve a huge amount of compromise, as each individual’s preferences inevitably come into conflict with other individuals’ preferences. But as an ideal, preferences would be respected as much as possible regardless of whose preferences they were.

Of course in reality the effects of our actions are limited. I can’t affect people who live a very long way away from me (although this changes as time passes). And our knowledge is limited: I cannot predict what effect my actions will have on people living in the distant future. But I can affect people who live reasonably close to me (in causal terms) yet who can’t be counted as either family or friends. At one time, these people would literally have been my “neighbours”. The preference utilitarian moral ideal is to respect their preferences as much as my loved ones’ preferences and my own preferences. The ideal would be to “love my neighbour as myself”.

So much for what Mill called the “ideal perfection of utilitarian morality”. It’s an ideal because no one could hope to fully achieve it in action, and there are other important values that compete with moral value — such as beauty, truth, clarity, love, loyalty, eroticism, profit, and fun (which are just the first few I can think of).

Yet preference utilitarianism is not what is normally called “idealism” in the political sense. No perfect state needs to be achieved for utilitarian morality to “work”, as Marxism might need the total embrace of communism to “work”, or libertarianism might need a perfectly free market to “work”. In the imperfect world as it really is right now, utilitarians strive to behave morally by satisfying preferences as best they can. Where the strongest preferences are at stake, moral value can become the principal guiding light of action. Like the demands of original Christianity, the demands of utilitarianism can never be perfectly met — but we can strive to approximate meeting them, the closer the better. To my mind, that is a humane and realistic thing to strive for.

Whatever it is, I’m against it

Irish President Michael D Higgins — a sociologist, not a philosopher — is leading a campaign to teach philosophy in secondary schools. Almost everyone seems to welcome this idea. I think it stinks. — Why? What could possibly go wrong with such a laudable enterprise?

It seems to me that whatever eventually gets taught, it will affect students of low, middle and high ability in different ways. It will intimidate those at the bottom, indoctrinate those in the middle, and infuriate those at the top.

Let’s start at the bottom. Philosophy is hard. I don’t just mean that it requires intelligence. More importantly, it requires a “strange” turn of mind with the creativity to juxtapose previously unconnected ideas, the imagination to consider far-fetched scenarios, the ability to look at things from a “meta-level” perspective, and comfort with abstraction.

Some of the most intelligent, gifted people I’ve ever met didn’t have this required strangeness of mind, and were hopeless at philosophy. They simply didn’t “get it”. They were self-confident adults who knew they had an abundance of talent in other areas, so they weren’t downcast by their lack of ability in this one area. But I shudder to think of what less confident children will make of a topic that is simply incomprehensible to them.

Moving up to students of middling ability: here they will get the central idea, and most will enjoy the classes. But what they are taught won’t be genuine philosophy. It will be what their teachers call “thinking skills” and “ethics”. Proponents of teaching philosophy in schools try to defend the neutrality of “thinking skills” by saying it just means formal logic, “critical thinking” and the like. This is a good time to remember the motto of all good philosophy: “know thyself”. Let’s be honest. What we call “skill” in thinking consists of thinking we approve of. I rate Kant an unskilled thinker; others think he is one of the greatest philosophers who ever lived. Many rate AJ Ayer an unskilled thinker because they disagree with what he thinks. And so on. Even in universities, teachers of philosophy are prone to presenting philosophical ideas to show them up as flawed, and to set up the alternative as more correct. Always, the “more correct” way of thinking is the one that is more highly skilled according to their own lights.

But wait. Formal logic must be neutral, right? Well, logic itself may seem value-free and theoretically uncommitted, but learning logic isn’t. If we treat deductive arguments as the acme of human thought, we will inadvertently promote the most plodding forms of traditional epistemology, which assume that the ideal of reason is to be the conclusion of a valid argument. That’s not a deliberate attempt to brainwash, but it is an insidious form of indoctrination. (Personally, I blame that indoctrination for so much bad science using brainless inductive methods.)

The vast majority of great thinkers — scientists, mathematicians, artists — never took a logic class in their lives. A person doesn’t need to study logic to think in a logical way. I don’t think studying logic improves a person’s ability to think logically. At best it may help describe the patterns that logical thought takes, so those who have a special interest in such things can communicate with one another. Analogously, learning to be a theatre critic doesn’t make one a better playwright. The arrogance of those who assume better thinking results from “thinking like me” is breathtaking.

This urge to shape minds really shifts into high gear with ethics. Just reflect for a moment on how many well-meaning secondary school teachers will be eager to correct the “unskilled thinking” behind such evils as sexism, homophobia, and “climate denial”. On the receiving end, many well-meaning secondary school students will be eager to have their moral prejudices confirmed by the “authority” of a philosophy teacher. When these two kinds of evangelism meet, the result is missionary zeal — for orthodoxy.

That is nothing like real philosophy, which involves awkward questions rather than agreeable-sounding answers. But even if a few teachers and students realize they’re doing nothing like real philosophy, their hands are tied, because strictly speaking these students are children, and children are forbidden to discuss awkward questions. It’s too invasive. Teachers can’t allow children to discuss such questions as whether suicide is an act of spite, whether homosexuality is a mental disorder, or whether sexual or racial differences are innate. Many of the students are bound to be “affected by these issues”, as they warn on the BBC. But even those who want to discuss them can’t give their consent to do so, because they’re just children.

This brings us to the top level — the small proportion of students who do have some real philosophical acumen. They’ll have a good idea of what philosophy is all about, because they’ll probably have done a bit of it on their own already. They will recognize that the promotion of orthodoxy and curtailment of free discussion falls far short of the real thing. That will anger and frustrate them, and it may even get some of them into trouble. Despite ill-informed rumors to the contrary, anything like real Socratic dialogue is dangerous.

It is this last type of student who might choose to study philosophy at the third level, in university. But how many will be put off by the cardboard sham that passes for philosophy at the second level?

Most philosophy is bad philosophy. Good philosophy mostly consists of un-learning the bad philosophy you learned before, or arrived at on your own before you realized you were doing philosophy.

Agonistic liberalism

I’ve updated my Twitter profile to describe myself as an “agonistic” liberal. That is, I think values come into conflict with each other between individuals, and values compete with each other within each individual.

My concept of freedom takes account of this inevitable conflict and competition. For example, we all agree that slaves are unfree, but unlike many I am prepared to say that slave-owners are more free for exploiting slave labour. This is not to justify slavery of course — the lack of freedom of slaves is an obscenity, and it is not even remotely compensated for by the increased freedom of slave-owners. But I think the word ‘freedom’ is appropriate for what they gain at such a terrible and unjustifiable cost to their victims.

There are many who refuse to use the word ‘freedom’ for what slave-owners gain. These are generally people who assume a “positive” concept of freedom — a concept we might characterize as being “impregnated with morality”. Those who have a “positive” concept of freedom will not count a person as free if it involves the exploitation of others. To them, freedom must be an unalloyed good. An act must be morally right to be genuinely free.

That may sound comforting, but it means their “positive” concept of freedom is less basic than mine. They cannot legitimately use their concept of freedom as a basis for moral or political judgement, on pain of circularity. I hope it’s clear why: if morality is to be based on human freedom, what we judge to be free cannot already depend on what we judge to be morally right or wrong. (Readers familiar with Plato’s dialogue Euthyphro might recognise an affinity here between prior moral assumptions in the “positive” concept of freedom and prior moral assumptions in what is “loved by the gods”. The “positive” concept of freedom cannot work as a basis for morality any more than divine commands.)

Contrast this “positive” concept of freedom with that of classical liberalism. Liberals think individual freedom is the most basic good — it has to be more basic than the moral or political judgments liberals make by appealing to it. Such judgments appeal to gains and losses in freedom, so the concept of freedom must be prior to and independent of morality and politics. Liberals generally have a “negative” concept of freedom: freedom is simply the absence of external obstacles, so that being free is simply a matter of being able to do what you want to do, even if it is immoral. For example, Burke asked: “What is liberty without…virtue? It is the greatest of all possible evils.” Note that he still counts this “greatest of evils” as liberty, and not as something other than liberty. Thus Burke believes in liberty, but insists it should be a “moral, regulated” liberty. He knows that freedom for the pike is death for the minnows, so we must constantly guard against the dangers and excesses of the pike’s freedom.

From my own liberal perspective, the freedom of one individual can easily come into conflict with the freedom of other individuals. Liberal politics tends to be a matter of compromises and trade-offs between individuals’ freedoms, rather than the joint pursuit of a shared ideal. The latter is more like politics as envisaged by Jean-Jacques Rousseau, whom liberals tend to regard as the Satanic spokesman of “positive” freedom.

Conflict between individuals is mirrored by competition within individuals between their various values. For example, we all value both truth and beauty, but the truth is often ugly. Please note how reluctant many are to admit this simple and obvious fact. I value both loyalty and morality, but my loyalty to those I am closest to involves moral neglect of those I am further away from. Truth and beauty pull in opposite directions, and loyalty and morality pull in opposite directions. Please note again how reluctant many are to admit this further uncomfortable fact. And so it goes for nearly everything we value — the values conflict, but many prefer not to admit that they conflict.

It seems to me that liberalism does and should accept this plurality of values, and the tensions between them. I use the rather obscure word ‘agonistic’ (from the Greek agon, a contest) because these values compete with each other. Those who don’t see this “agonistic” pluralism in effect put a single value above all other values and let it overrule the rest, usually in the name of morality. They assume their one value is uniquely morally correct.

At the moment, the least liberal people in our fairly liberal Western society seem to be members of the medical professions. Perhaps it is because their working lives are dedicated to promoting health that they tend to regard it as the only value, or at least as an overriding value that is entitled to trump all others. Whatever the reason may be, medical spokespersons are sounding increasingly parochial, authoritarian, and blind to the fact that there are valuable things in life other than good health. Intelligent, reflective people have to make compromises between these competing values. A medical professional who regards health as the overriding value is in the same position as a Catholic priest who regards sexual chastity as the overriding value. If we unmask these moralists by ignoring their claims to “virtue”, we can see that they are in the same position as any capitalist who regards profit as the overriding value, or Gordon Gekko, who regards greed as the overriding value.

On Pat Kenny’s radio show yesterday, oncologist-politician Professor John Crown argued that smoking would be illegitimate even if extra taxes paid by smokers made up for the economic burdens to society of smoking-related illnesses. Even if smokers more than “pay their way” by contributing generously to the health of others as well as for their own treatment, smoking is wrong, or so Professor Crown seemed to argue.

Apart from being economically perverse, this “four legs good” attitude is completely at odds with liberalism, and with liberalism’s “negative” concept of freedom.

Of course not all medical professionals are like that. Many doctors accept that human values come into conflict, and that a life involves compromises and trade-offs between values. It involves taking risks, and sometimes coming a cropper. It seems to me that the practice of medicine should involve compassion, in particular a compassionate acceptance that a life well lived involves taking risks as well as avoiding them.

Does “ought” imply “can”?

Many analytic philosophers think that when we use the word ‘ought’, we express a desire (typically, a desire for everyone to behave in a particular way). The object of a desire is something considered valuable or worth striving for — a goal. By contrast, when we use the word ‘is’, we express a belief. If the belief is true, it corresponds to an actual state of affairs — a fact. Any fact exists independently of how desirable or undesirable it happens to be — it isn’t affected by what “ought” or “ought not” to be the case.

This distinction between “is” and “ought” (or if you prefer, between facts and values) originated with David Hume. It’s really part of his larger theory of action.

When we act (as opposed to merely twitching involuntarily, say) our mental states cause bodily movements. These mental states include both beliefs and desires. In other words, we do things not simply because we think the world is actually arranged in a particular way, but primarily in order to bring about an as-yet unrealized arrangement that we regard as valuable. All acts have goals, and are caused by goal-directed states — in other words by desires. According to Hume, the purpose of belief is secondary to the purpose of desire. We have an internal “map of the world” so that we can realise our goals. Reason is the slave of the passions, as Hume put it.

Now philosophers often talk about belief, typically as part of the analysis of the concept of knowledge. But they hardly ever talk about desire. I think this is a terrible oversight. The narrow fixation on belief promotes the misleading idea that when people do bad things, it is their beliefs that are at fault. And the most obvious way beliefs can be at fault is if they are held without sufficient evidence. In this way, lack of sufficient evidence (or uncertainty) can come to look like the root of all evil. Thus “skeptics” are engaged in a moralistic crusade against “lack of evidence”. Richard Dawkins identifies religious faith as the main culprit in barbarous cultures that mutilate children and oppress minorities. And so on.

I submit that we should focus our attention on desires rather than beliefs. Instead of dwelling on the irrationality of faith, or suchlike, we should consider desirous states such as the willingness to be cruel. Acts are bad when they are deliberately aimed at — or inadvertently result in — bad things. These are usually the product of bad goals.

We can get confused about the difference between “is” and “ought”, because sometimes we describe psychological facts about ourselves, including such facts as that we desire something. If we’re not careful, it may seem as if a description of such a desire expresses it, and so there may seem to be a blurring of the distinction between “is” and “ought”. But the difference can be put in quite stark terms. An “is” is true or false. But an “ought” is neither true nor false — beneath it lies a “pro attitude”, which has more in common with a command, prescription, exhortation, recommendation or endorsement than a description. An “ought” expresses “to-be-done-ness” rather than describing any sort of fact.

There is another reason why we get confused about the difference. Moral “ought”s are usually disguised as “is”s. This is no accident. A moral “ought” expresses ways we would like everyone to behave. It isn’t meant to express a desire that the speaker alone happens to have, but one that everyone is meant to share as a matter of necessity. Its scope is meant to extend beyond the subjective first-person singular, and apply “inter-subjectively”. The ambition of inter-subjectivity takes the form of an appearance of objectivity. And objectivity involves facts. So moral “ought”s are dressed up to look like true descriptions of moral facts. A moral “ought” is a desire masquerading as a belief.

One of the best-known consequences of the sharp distinction between “is” and “ought” is the idea that it is a fallacy — a mistake in reasoning — to try to derive one from the other. Hume himself complained about writers who slid imperceptibly from sentences that purport to describe facts to sentences that prescribe behavior. The currently-popular way of putting it is that you can’t derive an “ought” from “is”s alone. But the converse is also true: you can’t derive an “is” from “ought”s alone. The claim that “ought implies can” is an attempt to do just that. Why do so many philosophers make that claim?

The answer, I think, is that they are thinking parochially about moral “ought”s. They assume an “ought” prescribes behavior like a rule, and that it is an agent’s duty to comply with such a rule. No reasonable person would charge an agent with a duty he could not be expected to perform by asking him to follow a rule he cannot comply with. So it is assumed that, as a matter of fact, he must be able to comply with any duty he is charged with.

This is to think parochially about “ought”s, because in effect it assumes that all “ought”s are like those of a deontological moral theory. But there are other kinds of “ought” than those of ethics, and other ways of thinking about ethics than Kant’s.

Consider chess. Several rules prescribe how the pieces move — the bishops should move diagonally, and so on. There is also an “object of the game” — to get your opponent’s king into checkmate. Any of these could be understood as an “ought” of playing chess. If a person sitting at a chessboard moving pieces around does not observe every such “ought”, he isn’t really playing chess at all. I submit that philosophers who claim “ought implies can” are thinking only of the first sort of “ought” — one of the “rules of the game”. The second sort of “ought” expresses an ideal that might or might not be achieved. Depending on who is playing, it might be practically impossible to achieve it, although it continues to guide the behaviour of the players.

There are various activities in life — amoral or moral — in which impossible ideals successfully guide behavior. A “Gordon Gekko” type whose only motive is profit might do everything in his limited power to maximize profit. Yet it remains impossible for every decision he takes to actually result in the greatest profit. The “object of the game” here is expressed by an amoral “ought” that does not imply “can”. There are examples in ethics too: utilitarians are committed to maximizing an independently-specified good such as happiness or the satisfaction of desires. There is no conflict between striving to achieve such a goal, and its being practically impossible to actually achieve it. Again, “ought” does not imply “can”.

But even with the first sort of “ought”s — the “rules of the game” rather than the “object of the game” — we can still imagine how they need not imply “can”. It would have been very unreasonable for the inventor of chess to write rules that can’t be complied with, such as that pawns move forward ten squares at a time. (Unreasonable because there isn’t room on a chessboard for a pawn to move that way.) But however unreasonable that would have been, no logic prevents it. The pawns ought to move that way, but they can’t. This is a possibility because of the independence of “is” and “ought”.

What’s wrong with individualism?

My fridge is an inanimate object. It doesn’t desire anything. It doesn’t have any preferences or interests. It is entirely non-sentient. If I slowly and sadistically hammered a nine-inch nail right through the side of my fridge, or even nailed it to a cross, it wouldn’t matter morally at all. It wouldn’t feel a thing.

Mind you, if you hammered a nail into my fridge, it would matter morally, because it’s my fridge, and I don’t want you to do that. I’d prefer it if you didn’t do that, and it would harm me if you did. But the harm would be done to me as a sentient individual rather than to the fridge which feels nothing, cares for nothing, and deserves nothing.

See the difference? On the one side, something that isn’t sentient and doesn’t deserve moral respect. On the other side, something that is sentient and so does deserve moral respect.

What about my society or my community? – Groupings of individuals are composed of individuals who are sentient and so do deserve moral respect, but the groupings themselves are as non-sentient as my fridge. Just as a triangular arrangement of non-triangular dots (like this ∴) is a case of a whole having a property that none of its parts has, sentience is a case of the parts having a property that the whole doesn’t have. The mistake of assuming that a whole must have a property its parts have is called the “fallacy of composition”. (The converse error of assuming the parts must have a property of the whole is called the “fallacy of division”.) The important thing to see is that each is a mistake.

‘Individualist’ is an obvious word for someone who thinks sentient individuals deserve moral respect but thinks inanimate, non-sentient objects like fridges or society do not deserve moral respect. Please note that an individualist so understood would not normally be someone who “lacks compassion”. Why not? – Such an individualist would normally think that since individuals count, individuals have responsibilities to look after each other and to respect each other’s interests. “People matter” – in fact only individual people matter. This sort of individualist wouldn’t normally have compassion for inanimate, non-sentient objects such as fridges or society, because why should they?

But evidently, the word ‘individualist’ is also used in a pejorative sense to mean someone who “lacks compassion” like that. Presumably, this second sort of individualist thinks society should be run along the lines of “every man for himself”, with each individual protecting his or her own interests and not caring about other individuals. Philosophers often distinguish between the two sorts of individualism by labeling the first “liberalism” and the second “rugged individualism”. I hope you can see why traditionally, liberalism was associated with the left wing of politics rather than the right wing, and why the word ‘liberal’ is sometimes used in a sloppy way to mean “left wing”.

When Margaret Thatcher’s critics berate her for “not caring about society”, what do they mean? – Usually they mean that she didn’t do enough to protect the interests of weak individuals from the selfish greed of strong individuals. That strikes me as a perfectly legitimate criticism.

But some of her critics seem to mean that she was wrong to care about individual people instead of caring about an inanimate, non-sentient object called “society”. It strikes me as inhumane to care about non-sentient “collectives” of people such as nations or races instead of sentient individual people. That way lies fascistic nonsense about “destiny” and collective culpability. So I think this second sort of criticism is illegitimate and conceptually confused.

I don’t expect non-specialists to be familiar with technical philosophical terms, but I do hope that people of average intelligence can grasp the difference just discussed, and not get carried away by a rather everyday sort of ambiguity. (Such seems to have been the fate of current Irish president Michael D Higgins.)

In the hope of bringing a little bit more “harmony” where there was “discord”, let’s use language clearly!