Sentience and preference utilitarianism

There was a brief discussion on Twitter yesterday about whether we should grant “human rights” to non-sentient robots. My reaction: “Why give a damn about non-sentient agents? They can’t feel anything, so who cares if harm should befall them?”

This idea that “morally, the only thing that matters is sentience” was famously expressed by Jeremy Bentham:

a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? the question is not, Can they reason? nor, Can they talk? but, Can they suffer?

Despite my confidence that non-sentient agents do not matter morally, I admit that sentience might seem to pose a special problem for me as a preference utilitarian. The dissolution of this problem adds detail to my moral theory, and explains why we call it ‘preference’ rather than ‘desire’ utilitarianism.

A preference utilitarian differs from the traditional hedonistic type of utilitarian (such as Bentham) in that his basic good is not a particular sort of experience such as pleasure or relief from pain, or happiness understood as a feeling, but the satisfaction of desires. His “greatest good” is not the “greatest happiness of the greatest number” but the maximisation of the satisfaction of desires.

Now it’s important to see that the satisfaction of desires here is not the having of a “satisfying experience”, but the satisfying of objective conditions — and the agent might be wholly unaware that those conditions have in fact been satisfied. A desire is satisfied when the desired state of affairs is actually realised, whether or not the agent has any idea that the state of affairs is realised. Like a man becoming an uncle by virtue of a birth he knows nothing about, or a belief being true, a desire’s being satisfied is a matter of the world’s being arranged in the right way — something typically external to the mind of the agent.

For example, most people want their spouses to be faithful. They don’t want the mere experience of their spouse being faithful, but the actual objective fact of their spouse being faithful. This desire is not for the spouse to “keep up appearances” by telling convincing lies about their infidelities — there mustn’t be any infidelities to tell lies about.

Here’s why sentience might seem like a problem for preference utilitarianism: unless a desire is a desire to have a particular sort of experience, which it typically isn’t, the experience of a desire being satisfied is like a by-product of its actually being satisfied. So a “robotic” agent who doesn’t have any conscious experiences at all — but still has desires which can be satisfied or thwarted — would seem to make moral demands on preference utilitarians like myself. That conflicts with the intuition expressed above that only sentient agents matter morally.

The problem is dissolved, I think, when we remind ourselves that genuine desires (and beliefs, for that matter) only exist where pluralities of them together form a “system”. In moral deliberation, the utilitarian weighs desires thwarted against desires satisfied in an imaginary balance. Obviously, strong desires count for more than weak desires. When desires come into conflict with one another in the mind of a single agent, the strongest desire is the agent’s preference. Only desires in a system of several desires competing for the agent’s “attention through action” can count as preferences.

So a system is required for one desire to take precedence over another, as it must if it’s a preference. And a preference to pursue one goal rather than another involves the weighing up of the relative merits of competing goals, the level of time-management needed to defer the less urgent goal, and so on… In short, it requires reflection and choice. This is “second-level representation” — i.e. meta-level representation of primary representational states — of the very sort that makes for consciousness. We need reflection to decide between competing desires (and for that matter, we need epistemic beliefs to guide our choices of first-level beliefs about the world — in other words, a sense of which among rival hypotheses is the more plausible). Second-level representations like these amount to awareness of our own states, including awareness of such states as physical injury. In other words, the experience of pain. It’s a matter of degree, but the richer the awareness, the greater the sentience. So genuine desire and sentience are linked in a crucial way, even though any particular desire and the conscious experience of its satisfaction might not be.
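The structural point can be put in a toy computational sketch. Everything in it (the `Desire` class, the numerical strengths, the `preferred` function) is my own invention for illustration and makes no claim about real minds; the point it encodes is just that a preference only exists relative to a system of competing desires, with the strongest issuing in action.

```python
from dataclasses import dataclass


@dataclass
class Desire:
    """A goal-directed state with a strength: how hard it competes for action."""
    goal: str
    strength: float


def preferred(desires: list[Desire]) -> Desire:
    """A preference only makes sense within a system of competing desires.

    It is the desire that wins the competition for "attention through action".
    A lone desire, like a thermostat's single goal, cannot be a preference.
    """
    if len(desires) < 2:
        raise ValueError("a lone desire cannot be a preference; a system is required")
    return max(desires, key=lambda d: d.strength)


system = [Desire("finish the essay", 0.7), Desire("watch television", 0.4)]
print(preferred(system).goal)  # the strongest desire issues in action
```

The guard clause is where the philosophical work happens: asking for the "preference" of a one-desire agent is not answered badly, it is rejected as ill-posed.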

To better understand why “genuine” desires are part of a system, we might contrast them with more rudimentary goal-directed states of ultra-simple agents such as thermostats, or slightly more sophisticated but still “robotic” agents such as cruise missiles.

Thermostats and cruise missiles each have a rudimentary desire-like state, because their behaviour is consistently directed towards a single recognisable goal. And they have rudimentary belief-like states because they co-vary in a reliable way with their surroundings, co-variation which helps them achieve their goal. In both cases, they might be said to “bear information” (non-semantic information, reliable co-variation) about the world. A clever physicist (a “bi-metallurgist”?) would be able to work out what temperature a thermostat “wants” the room to stay at, and what temperature it “thinks” the room is currently at. A clever computer scientist would be able to reverse-engineer a cruise missile to reveal what its target is, the character of the terrain it is designed to fly over, its assumed current location, and so on. We could go further and adopt the intentional stance, assigning mental content to these agents. In effect, that would be to drop the cautionary quotation-marks around the words ‘wants’ and ‘thinks’. We might regard ourselves as referring literally to their desires and beliefs. But we would not be able to take the next step and talk about preferences. For preferences, we need various goals of varying strengths, and we need something like consciousness to make decisions between them. In other words, we need sentience, at least to some degree.
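To make the contrast concrete, here is a minimal sketch of a thermostat-like agent (the class and its attribute names are hypothetical, chosen purely for illustration): it has exactly one desire-like state and one belief-like state, and with a single fixed goal there is nothing to choose between, hence no preference.

```python
class Thermostat:
    """A rudimentary agent with one desire-like state (the set-point) and one
    belief-like state (the sensed temperature, which co-varies with the room).

    With a single fixed goal there is no competition between desires,
    so no preference, and nothing that calls for sentience.
    """

    def __init__(self, set_point: float):
        self.set_point = set_point   # what it "wants" the room to stay at
        self.sensed = set_point      # what it "thinks" the room currently is

    def sense(self, room_temp: float) -> None:
        # Belief-like: caused by the world, flowing inwards.
        self.sensed = room_temp

    def act(self) -> str:
        # Desire-like: behaviour consistently directed at the single goal.
        if self.sensed < self.set_point:
            return "heat on"
        return "heat off"


t = Thermostat(set_point=20.0)
t.sense(17.5)
print(t.act())  # "heat on"
```

A “bi-metallurgist” reverse-engineering this code would recover `set_point` as what the device “wants” and `sensed` as what it “thinks” — and would find nothing further to call a preference.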


Self-control

When people talk about “self-control”, what do they mean? On the face of it, a “self” and something else that “controls” that self sound like two separate agents. But each of us is in reality just a single agent. What is going on? I think some buried philosophical assumptions and mistakes lurk here.

[Edit: I see no real difference between a core part of the “self” controlling unruly peripheral parts, versus its being controlled by them. The main idea of self-control is that the “self” is “divided against itself”, or at least divided into more than one part that can be treated as an agent in its own right.]

When we say someone should control himself, we mean first and foremost that he has conflicting desires. Then we go further, and give one of those desires a superior status as being “more genuinely his own” than the other one. His “gaining control of himself” is then a matter of the desire that is “more genuinely his own” resulting in action, overruling the desire that is “less genuinely his own”.

Now it seems to me that this decision to regard one of the conflicting desires as “more genuinely his own” is not taken with reference to what the agent himself most strongly desires, but instead with reference to what is considered more laudable — in other words, with reference to what society at large approves of. This might be anything regarded as valuable — such as good health, prudence in financial matters, scientific rigour, religious piety, whatever. You can see the difference in terms of “is” and “ought”: what the agent most strongly desires is a factual matter to be decided by considering his own choices, whereas what is laudable is a matter of value decided by the likes and dislikes of society at large.

It’s important to see that the factual matter is a completely trivial one — whatever the agent actually ends up doing is what he wanted to do most in the first place. What makes one desire stronger than another is simply that it “wins” any conflict between them by issuing in action. [Edit: So if we look at what an agent most strongly desires, there is no question of one part of himself controlling any other part of himself. He will have to compromise with other agents, of course, and that may involve agents controlling each other to some extent, but that is an everyday fact of life.]

So I would argue that the word ‘self-control’ is to this extent inappropriate: whatever “control” may be involved is not really “self-control” so much as “control by society”. Now please don’t get me wrong here: I don’t mean to say that that sort of “control” involves actual coercion by society. But it does involve guidance from outside the self — with the agent’s tacit approval, of course. He takes his lead from what society approves of rather than from himself in isolation.

Some will protest that self-control usually involves pursuing longer-term goals and deferring immediate gratification. If longer-term goals are more “genuinely an agent’s own” than mere passing whims, perhaps longer-term goals are more rationally entitled to direct conduct. Perhaps longer-term goals represent an agent’s character more faithfully than whims, so that the latter can be considered “out of character”, and thus a suitable subject for the “self” to exercise “control” over.

I think that’s a red herring. Spontaneity, impulsiveness, even capriciousness are aspects of an agent’s “true” character just as much as stolidity or lack of imagination. Rational action involves the pursuit of all sorts of goals, with an eye both to how desirable this or that goal may be, as well as to how confident one may be that this or that course of action will achieve it. If someone chooses to pursue this shorter-term goal rather than that longer-term goal, say, it simply indicates that on balance he prefers this to that, and/or he has more confidence in achieving it. So there’s nothing intrinsically more “rational” about the pursuit of longer-term goals.

That isn’t the only red herring. We tend to discount pursuits that seem to undermine an agent’s integrity or harm him as being less “genuinely the agent’s own” (I’m thinking of activities such as smoking and drinking). But what counts as “harm” here? Inasmuch as he is able to pursue something he really wants, he is not harmed — and inasmuch as he is prevented from pursuing what he really wants, he is harmed. If we regard something an agent freely pursues as undermining his integrity or as harmful to him, once again we are appealing to values of society at large rather than values of the agent in isolation. And once again, we’re not talking about “self-control” here so much as “control by society” — or, as I said above, at least “guidance by society”.

So far, no harm done. An agent is still doing what he wants to do, even when what he wants to do is determined by the likes and dislikes of other agents than himself. But I think our understanding has taken a sinister turn. We are using misleading words, and in doing so we are turning a blind eye to a possible source of genuine coercion. By treating something that lies outside the agent as if it were the agent’s own, we slide inexorably towards thoughts such as that “society can help a person to control himself”. There are monsters about.

[Edit: One such monster is Rousseau’s idea that people must be “forced to be free”. That slogan expresses the most insidious and dishonest form of paternalism, which goes beyond simply forcing people to do what they don’t want to do “for their own good”. The greasier version — embraced by anyone who appeals to “false consciousness” or the like — involves pretending they do in fact want it by virtue of the fact that it’s for their own good.

The idea that an agent can “really” want something although superficially seeming not to want it is at the heart of the “positive” concept of freedom. As Isaiah Berlin noted, it involves the self’s being divided into two — the “empirical” self and the “real” self — and obviously so too does the idea of self-control.]

The significance of desire

Most of us have an under-inflated concept of desire, and an over-inflated concept of belief. We happily accept that beliefs are fairly detailed representational states — so that taken together they prompt the metaphor of an “inner world”. But we tend to think of desires as much vaguer or thinner on detail than beliefs, and perhaps not even as representational states at all. Why is this way of thinking so common? — Here are a few suggestions:

First, we tend to specify desires with reference to objects rather than states of affairs. For example, we say “I’d like some chocolate” rather than “I have a desire to be eating chocolate”, or “I need some WD-40” instead of “I want my door hinges to be lubricated with WD-40”. Being human, we can safely assume that other humans have broadly similar goals to our own, so it’s often linguistically redundant to explicitly specify these goals as states of affairs. This can give the mistaken impression that desires do not represent states of affairs at all. In other words, it leads us to overlook the fact that desires represent the same sorts of things as make beliefs true or false.

Second, in general the states of affairs desires are aimed at are not yet realised. When we believe something, or at any rate when we believe something about the past or present, if our belief is true then the state of affairs that makes it true is a “fact”, with much attendant “detail”. When we desire something, on the other hand, the state of affairs that would satisfy it is not yet a fact. So for the time being it’s a “mere idea”, something more like Pegasus than a real horse grazing in a real field at this very moment. Any attendant “detail” is more obviously “imaginary”. We probably err on the side of assuming our beliefs are more detailed than they really are, as if they inherit some of the detail of the fact that makes them true, but with desires, we err in the opposite direction.

Third, in the Western philosophical tradition from Plato through Descartes (and in other traditions too), we tend to think of mental states as conscious experiences rather than as functional representational states that direct the behaviour of agents. This is changing, of course, with the continuing influence of American pragmatism and of the later Wittgenstein, as well as with the growth of functionalism in the philosophy of mind. But it is still very common to assume that a desire is a mere “feeling” or emotion rather than an essential part of the mechanism of action. This assumption is promoted still further by the possibility of wishing (and expressing wishes) for states of affairs that as agents we can play no part in bringing about (such as “I wish it would snow!”). It all suggests that desire is something rather touchy-feely and causally unserious. Worse, it can suggest that the real “purpose” of desire is nothing more than the having of a further sort of conscious experience — pleasure, or whatever.

We must reject this assumption that desire is a “feeling” (although of course specific desires are usually accompanied by distinctive feelings). Rather, a desire is a causally efficacious and typically fairly detailed representational mental state aimed at bringing about a real state of affairs external to the mind. Desires are complementary to beliefs, which are also representational mental states. Instead of bringing about real states of affairs external to the mind via behaviour, beliefs are typically brought about by these states of affairs, often via observation. Although there is something to the claim that desires are less detailed than beliefs, I think we should take Hume’s lead in giving desires priority: a desire (or “passion” as Hume put it) is the mainspring of any act. Whenever we act, our behaviour is aimed at achieving a goal; desire is the mental state that establishes such a goal, and beliefs (or “reason”) can do no more than help us steer a course towards achieving it. Hence “reason is the slave of the passions”.

Although we do not literally have an “inner world” of belief in our minds, together our beliefs form a sort of “map of the world” — the world as we take it to be. But that’s only half the story. Together our desires form a sort of “blueprint for the world” — the world as we would like it to become. The “map” and the “blueprint” contain the two essential components of the causation of all acts.
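The “map”/“blueprint” metaphor can itself be put as a toy model (the dictionaries and the function name below are hypothetical, invented purely for illustration): an act is prompted only where the blueprint diverges from the map, so neither component causes action on its own.

```python
# The "map" (beliefs: the world as we take it to be) and the "blueprint"
# (desires: the world as we would like it to become) jointly cause action.
beliefs = {"door_hinges": "squeaky", "kettle": "full"}      # map of the world
desires = {"door_hinges": "lubricated", "kettle": "full"}   # blueprint for it


def next_acts(beliefs: dict, desires: dict) -> list[str]:
    """Behaviour is prompted only where the blueprint differs from the map.

    A map alone moves nothing; a blueprint alone cannot steer. Action needs both.
    """
    return [
        f"change {key} from {beliefs.get(key)} to {wanted}"
        for key, wanted in desires.items()
        if beliefs.get(key) != wanted
    ]


print(next_acts(beliefs, desires))  # only the squeaky hinges prompt action
```

The kettle, being already as desired, prompts nothing — which mirrors Hume’s point that reason (the map) can only steer towards goals the passions (the blueprint) have already set.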

The traditional under-inflated way of thinking about desire tends to ignore the “blueprint” and puts far too much emphasis on the “map” — it imbues the map with more detail than is really there, and gives it causal powers that it simply doesn’t have. This often emerges in the assumption that specific sorts of belief are associated with specific sorts of acts.

A classic age-old example is the thought that belief in God causes people to behave in more “moral, God-fearing” ways. But of course such belief can only cause the valued sort of behaviour in conjunction with specific desires — to do what God wants, to avoid punishment, and so on.

Nowadays, much effort is expended on promoting beliefs such as “all races are exactly alike in respect of ability” and “there are no grey areas in rape”. The hope is that simply having such beliefs will discourage racist or sexist behaviour. But as we have just seen, behaviour of any sort is caused not only by our “map” of beliefs, but crucially — and more saliently, because desires are classified according to their goals — by our “blueprint” of desires as well.

The “attenuated” understanding of desire has a couple of really nasty side-effects. One is a blurring of the distinction between beliefs and desires, and the thought that desires can be “implanted” in an agent’s mind in the same way as many beliefs can: via observation. So if we watch violence on television, we will want to be violent ourselves. If we see ads on TV, we will want what they advertise. And so on. This gives rise to the sort of puritanism that discourages or even forbids the expression of “unhelpful” ideas. Traditional religious puritanism frowned on the expression of atheistic or agnostic views, and kept Hume out of a proper academic job. No doubt there are many lesser yet still talented people who are nowadays excluded from academic jobs for having beliefs that are currently regarded as “unhelpful”.

The side-effect that really makes me queasy is not the exclusion of talent from the groves of academe and the media, but the active promotion of falsity for the sake of our general moral betterment. For example, although I don’t think there are any significant differences between races as far as abilities are concerned, the claim that there are none at all is statistically vanishingly unlikely. If there are differences between individuals — and there are — there are bound to be differences between groups of individuals. Yet we are enjoined never to utter the forbidden words of that obvious truth. This is sick-making, and anyone who cares about truth should speak out against its deliberate suppression.

No one is culpable for what they believe

Beliefs are “afferent” mental states in the sense that facts in the world impose beliefs on our minds through our perceptions — the causal “flow” is inwards from world to mind. Desires are “efferent” mental states in the sense that ways we want the world to be impose themselves on the world through our actions — the causal “flow” is outwards from mind to world.

Because of beliefs’ dependence on the way the world happens to be, and because we’re mostly rational, we can often make people believe something whether they want to or not. We simply engineer a fact, or present them with evidence of a fact, and their beliefs duly re-arrange themselves to maintain their systematic function as a more or less accurate map of the world.

By sheer bad luck, a “rogue” fact can lead to a distasteful belief. So we simply cannot be held culpable for what we believe. We can have distasteful beliefs, but we cannot have blameworthy beliefs.

Desires are different. Together, our desires are more like a blueprint for the world than a map of the world. Through our actions, our blueprint for the world makes facts by re-arranging things in the world (as opposed to being sensitive to their prior arrangement, like beliefs). If we do something immoral, such as disregarding another person’s interests, we are blameworthy because our desires are bad. More specifically, our intentions are bad. Intentions are the heart of culpability.

As far as I know, Hume was the first philosopher to see that beliefs (“reason”) and desires (“the passions”) play complementary roles in causing actions, rather as orthogonal components can be used as a basis of a vector space. I think it was remarkably humane of Hume to see that none of us can be held culpable for our beliefs. They’re the “slave” of our “passions” — in other words, we use our “map” in order to realize our “blueprint”, so if things turn out nasty, blame the blueprint rather than the map. And beliefs are mostly a matter of luck anyway. As Hume’s predecessor Locke observed, a person’s salvation cannot depend on an accident of birth.

Many people think we can have racist beliefs. I think they’re wrong. ‘Racism’ is and should remain a word of condemnation. To preserve its ability to express condemnation and blame, we should strictly limit its application to bad motives, bad intentions — to the efferent mental state, desire — rather than to merely distasteful beliefs.

A sociopath writes about “love”

Reading the London Times yesterday, I was struck by the following extract from the forthcoming Confessions of a Sociopath: A Life Spent Hiding in Plain Sight by ME Thomas. (ME Thomas is an assumed name.)

Love, I have come to realise, is a vital entry point into the inner worlds of other people, the universal Achilles’ heel. People are so hungry for love. They die a little every day for want of it, for want of touch and acceptance. And I find it immensely satisfying to become someone’s narcotic. It isn’t just that you have more power over someone through love than any other means, but you have access to more parts of them. There are more levers to pull and buttons to push. I can bring relief to pain of which I am the direct and sole cause. I think nothing of deceiving or manipulating them.

Sociopaths are typically confident, charming, intelligent people, who are further characterized as having “no conscience” because they “cannot feel guilt”.

I think that way of characterizing the condition is misleading, because it is misinformed by hedonism. By hedonism I mean the assumption that motivation boils down to the internal mental economics of pleasure and pain. We do things in order to get pleasure or avoid pain, the story goes, so when we act morally we do so in order to avoid the pain of feeling guilty. Sociopaths “don’t feel guilt”, the story continues, so they don’t have the normal human “spur” to act morally.

If we consider sociopaths from the perspective of hedonism, we might conclude that they are better off than the rest of us because they feel less pain. But that’s probably wrong. The passage above is reminiscent of a tone deaf person saying how good it is not to hear music. Or a person with diminished sex drive expressing relief at not having to cope with sexual frustration. Or a person who has no concept of loyalty extolling the carefree joys of disloyalty. These are all unfortunate conditions – disorders, if you like – in which something valuable is missing.

The word ‘sociopath’ is nowadays more common than the older word ‘psychopath’. The reference to pathology or illness remains, as it should. But the newer prefix ‘socio’ suggests that people who have this illness do not fit into society properly. If anything, the very opposite is true: sociopaths seem unusually eager to win a sort of “political” success. Furthermore, winning that success would normally involve behaving in morally mainstream ways, thereby winning society’s approval and the social power that comes with it. The most vocal opponents of racism, sexism and homophobia are more likely to be sociopaths than ordinary people who just quietly get on with avoiding these evils.

Against hedonism, I’d argue that humans (and all other animals) normally do things in order to bring about objective states of affairs. Any pleasure or pain involved is normally a by-product of perceived success or failure in realizing those states of affairs. When we act morally, we try to bring about a state of affairs we regard as morally valuable. Guilt is the sense of failure we experience when we do not succeed. It is the by-product of a specific type of failure.

Now the author above has a very well-developed sense of success and failure. She takes evident pride in her many successes (in bringing about social rather than morally valuable states of affairs). In that pride, perhaps we can see the sociopath’s Achilles’ heel. People who take great pride in success tend to take great shame in failure. Most have a great aversion to admitting failure, even to themselves. Those who don’t admit their own mistakes can’t learn from them or correct themselves. I wonder how long it will take the author above to lose social credibility as she makes the same mistake over and over again, blind to the fact that it is a mistake?

The illusion of apparent meaning

Knowledge is power, and understanding is power. If one person can persuade another that he knows or understands something the other does not, the other person is put “on the back foot”. He’s in danger of looking stupid, and therefore weak. He’s on the defensive: he has to “bow to the better judgement” of the seemingly more clever one who claims to have the knowledge or understanding.

It’s no surprise that wherever there is human intercourse, especially where decisions have to be made and hence power is involved, humans naturally gravitate towards claiming to know and understand more than they really do know or understand. And in so doing, the stuff they claim to know or understand can take on a life of its own. It can “snowball”. Others too want to claim that they know or understand the same thing. They agree with each other, and want to be seen to agree with each other. Their agreement reinforces the idea that there actually is something that is known or understood. But this idea is often illusory.

The story of the “emperor’s new clothes” isn’t a simple one about insincere people kowtowing to power. It’s more subtle than that. It’s about completely sincere people being taken in by an illusion. The emperor is taken in by the combination of his own vanity and the flattery of others, of course, but these others are also taken in by their own vanity and the illusion that they have themselves acquired a special expertise.

Mill wrote: “the general tendency of things throughout the world is to render mediocrity the ascendant power among mankind”. I would add that the general tendency throughout the world is to render a significant proportion of what mankind talks about mere bullshit and nonsense.

The best philosophers usually acknowledge our human urge to claim we know or understand more than we really do know or understand, and thus to promote bullshit and give nonsense a life of its own. In ancient Greece, Socrates said that the only thing he knew was that he knew nothing. Philosophers of the “modern” era focused more on spurious claims to understanding – on the idea that we can sincerely think that language or ideas have clear meaning, when in fact they are meaningless. For example, Hobbes thought much of what academics say consists of “insignificant speech” (i.e. talk without significance or meaning), and that the idea of “free will” (among many others) was literally nonsensical:

And therefore if a man should talk to me of a round quadrangle; or accidents of bread in cheese; or immaterial substances; or of a free subject; a free will; or any free but free from being hindered by opposition; I should not say he were in an error, but that his words were without meaning; that is to say, absurd.

Hume distrusted the “abstruse reasonings of philosophers” and urged us to reject the “sophistry and illusion” in much philosophical writing. The word ‘illusion’ is important here: Hume realized that something can seem meaningful to a sincere person who means well, at the same time as actually being nonsensical. The appearance of meaning can be an illusion.

Hobbes and Hume were early members of a (mostly English-speaking) philosophical tradition that recognized the seductiveness of merely apparent meaning. Burke fulminated against the grand-sounding rhetoric of the French Revolution, and the thunderous twaddle of its philosophical torch-bearer Rousseau. Bentham said talk of “imprescriptible rights” was “nonsense upon stilts”. AJ Ayer and the logical positivists said that much of what was then being written (by such philosophers as the “unbridled metaphysician” Heidegger) was literally meaningless. Quine said much the same about Derrida. But as far as I am aware, calling another philosopher’s writing nonsensical is nowadays considered bad manners.

The later Wittgenstein did much to explain the mechanisms that give rise to the illusion of apparent meaning. He rejected the widespread assumption that meaning is determined by “how it seems to conscious experience”, and substituted the idea that meaning is use. This move to pragmatism – from looking at experiences inside the head to looking at habitual behaviour between agents – can be illustrated by a rudimentary example. Honey bees “dance” to communicate the location of nectar to other members of the hive. The moves of the dance don’t mean what they do by “seeming” to mean anything to the bees’ conscious experience (bees probably don’t have anything that could be called conscious experience). Rather, the bees behave in regular ways – by reacting to the dance as well as dancing themselves – that in effect interpret the moves to mean what they do.

That is a rudimentary example, but all meaning is like that, even the meanings of sophisticated human languages. They all depend on behaviour. The trouble is, human behaviour is complicated: much of it involves social rituals, and power plays. The priest or academic gives the impression that he understands some arcane fact about the Holy Trinity or Natural Law, say, and the layman is so impressed that he too is eager to go through the motions of understanding. This sort of human behaviour can seem meaningful to the conscious experience of all those involved, at the same time as bearing nothing more than mere “social meaning”. We might say that there are rules of syntax, in the absence of any real semantics of the sort that renders what we say true or false. Human linguistic “dances” often refer to less than the dance of the honey bee.

By understanding language as habitual activity, Wittgenstein also saw that language sometimes “goes on holiday” (in fact philosophical problems are typically caused by language doing that). Behaviour appropriate to one area of human life can be transplanted into another, like Englishmen who go out in the midday sun in non-English climates where the midday sun can kill. Factual discourse about “is”s can slide imperceptibly into moral discourse about “ought”s. Legal discourse about rules in the statute book or rights in a written constitution can move lock, stock and barrel into the realm of ethics, so that people find themselves talking about “natural law” or “natural rights”.

The jargon of specialists is unusually capable of generating the illusion of apparent meaning, because we tend to hand decision-making powers over to specialists. We tend to assume that their expertise isn’t simply a matter of their having opinions about subjects that the rest of us don’t have opinions about, but of their having greater knowledge, deeper understanding or more reliable judgement than the rest of us. And “who are we to question such expertise?”

All this applies with even greater force to the jargon of supposed moral experts, which is uniquely spellbinding. Our agreement with them is “on steroids” – we don’t just agree, we agree with the added ingredient of moral indignation, which always gives an extra boost to the suspension of disbelief.

This often results in the most insidious lack of clarity – a lack of clarity in which things seem clear to the morally committed, but are really not clear at all.

In Ireland last week we learned that lack of clarity – specifically, lack of clarity in the law – can be a matter of life and death. The heartening thing is, the week before that, Irish legislators learned that most voters recognize lack of clarity, and react to it with hostility, or at least indifference.

Is he insane, or is he a terrorist?

Following the recent “Batman” massacre in Aurora, a lot of people have made cynical comments such as: “the suspect is white, so it must be insanity rather than terrorism”. By which they mean: our racist double standards prompt us to call him insane rather than a terrorist.

I think that’s rather revealing. Unfortunately, it reveals that terrorism is working just as it’s intended to.

By terrorism, I mean the deliberate targeting of civilians with the intention of frightening them into adopting a political agenda. Such an agenda might have as its goal a united Ireland, or the destruction of Israel, or the removal of military bases in Saudi Arabia, or whatever. The new converts needn’t become out-and-out activists for the cause, but if they are newly inclined to vote for some political measure when before they were against it, say, the terrorists’ work is a success. A newspaper doesn’t have to openly endorse Islamic extremism to yield to terrorists, but if it refuses to publish an offensive cartoon it otherwise would have published, say, it still partially yields.

Please note that although we tend to think of the people terrorists kill as their primary victims, it’s the much larger class of other people who adopt new political views as a result of these violent deaths who are the main targets of terror. The use of terror to instil views like their own distinguishes terrorists from other combatants as much as their deliberate targeting of civilians.

How does terrorism work? How does fear change minds? – I think the quick answer is: it works in much the same way as the “Stockholm syndrome”.

In a little more detail, terrorism works by forcing people to adopt the outward behavioural trappings of commitment to – or at least sympathy for – a political cause. If these behavioural trappings – saying the “right” things, not saying the “wrong” things, and so on – become a matter of fixed habit or “reflex”, in effect they solidify into genuine commitment. Even the “inner feelings” that normally accompany sincere commitment inevitably emerge, rather as smiling has the effect of making people feel cheerful.

How can that be? – This is where things get philosophically interesting. Here’s my answer: there is much truth in the “analytical behaviourist” idea that beliefs and desires are dispositional states. To use an analogy of Quine and Ullian, a person has a belief or desire in the same way as a battery is charged when it’s disposed to send a current through a circuit, when it causes sparks if the circuit is shorted, and so on. Analogously, to have a belief or desire is to be disposed to behave in appropriate ways. We develop a repertoire of habits appropriate to having this or that belief or desire. Interestingly, it doesn’t matter much what causes these dispositions. If it can somehow be arranged – by fear, threats, social ostracism, whatever – for the behaviour to occur in the appropriate circumstances, then the disposition is in place. And if the disposition is in place, the mental state is too.
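The dispositional picture just sketched can be made vivid with a toy model. The sketch below is my own illustrative invention, not the author’s formalism or anything from Quine and Ullian; all the names in it are made up. The point it captures is the one in the text: what installs a disposition makes no difference to whether it is in place, and having the disposition just is having the mental state.

```python
# Toy sketch of the dispositional ("charged battery") picture of belief.
# All names here are illustrative assumptions, not standard terminology.

class Agent:
    """On this picture, an agent's 'belief' is nothing over and above
    a set of circumstance -> behaviour dispositions."""

    def __init__(self):
        self.dispositions = {}  # circumstance -> behaviour

    def install(self, circumstance, behaviour, cause="unspecified"):
        # How the disposition got installed (evidence, habit, fear...)
        # is recorded but plays no role: only the disposition matters.
        self.dispositions[circumstance] = behaviour

    def respond(self, circumstance):
        return self.dispositions.get(circumstance)


# Two agents acquire the same disposition by very different causes...
sincere = Agent()
coerced = Agent()
sincere.install("asked about X", "assert X", cause="evidence")
coerced.install("asked about X", "assert X", cause="threats")

# ...and on the behaviourist picture they are now in the same mental state:
assert sincere.respond("asked about X") == coerced.respond("asked about X")
```

The `cause` parameter existing but doing nothing is the whole joke: fear-installed and evidence-installed dispositions are indistinguishable from the inside of the model, which is exactly why the text says terror can manufacture sincerity.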

And it gets worse, because a sort of holism leads to a sort of rationalization. Although we use single, isolated sentences to describe beliefs, we can never have a single belief in isolation from other beliefs. Instead, we have more or less detailed mental representations or “maps” of the world, whose fineness of detail depends on how much we know about the subject matter. The smallest units of description of such a “map” are those we can express in a single true or false sentence. These correspond to simple beliefs. For example, I believe that today is Sunday. But I am only capable of believing that because I have a fairly detailed mental representation of the way humans measure time in seven-day cycles, and the way each day corresponds to a rotation of the Earth, and the way two of those days are treated in a special way, and quite a lot of other facts relevant to the same subject matter.

So although I can pick out a single belief using a single sentence ‘Today is Sunday’, such a belief always has to be part of a larger “area of understanding”. To have a belief you have to have the concepts that the belief harnesses, and for that you need to have a fairly rich range of beliefs, plural, which carve these concepts out.

Something analogous applies with language. Any meaningful sentence has to contain words which themselves have a determinate sense. And to get a determinate sense, they must occur in other meaningful sentences, plural. In general, words get their sense by being used in semantically important sentences. So no sentence of a human language can have meaning in complete isolation from other sentences.

Now consider the acquisition of beliefs. When I am forced to accept a given belief, I cannot simply accept it on its own. I have to accept some “packaging” as well – some other beliefs that help to justify the belief in question, by implying it, or being implied by it. In other words, I rationalize it.

In short, because of the holistic nature of beliefs, being forced to adopt one of them entails adopting some other beliefs as well, which help to justify the “forced” belief. By being “embedded in reasons” like that, it is held sincerely. It is part of a more or less detailed “map” of its subject matter.

I think the Stockholm syndrome is the closest thing we have to “brainwashing”. It can probably be resisted as long as the subject isn’t too suggestible, as long as he constantly reminds himself that he is only “going through the motions”, and as long as he resists the urge to make things psychologically easy for himself. In my opinion, it is correctly termed a “syndrome”, because it is a mildly pathological state of mind. Although the new beliefs are “reasonable” in that they have their own “justification”, they were adopted through an unreliable process. In general, we don’t acquire true beliefs by doing what thugs force us to do.

So the victims of terrorism who come to adopt more terrorist-friendly beliefs are, in a weak sense, unwell. They aren’t thinking as they would if they were using their judgement in a truth-conducive way. I think the cynical claim that we only treat “brown” people as terrorists is a symptom of that (mild) illness. The West has had to deal with terrorism from various quarters for decades, and most of the terrorists have if anything been “whiter” than their victims. The Ku Klux Klan, ETA and several organizations on both sides of the conflict in Northern Ireland were all terrorists, and few were tempted to call them anything else simply because they were “white”. Only recently have Islamic extremists come to epitomize terrorism in the public imagination. Many of them – such as the “shoe bomber” – have been “white”. The cynical claim above is akin to the claim that “one man’s terrorist is another man’s freedom fighter” – as if the word ‘terrorist’ were a sign of our own cultural insensitivity, our failure to make amends for “our” colonial past, and a more general boorishness and racism. We have “brought it upon ourselves”, the idea goes.

I hope it’s clear how those attitudes are the product of terrorism.

If some of us suffer from a mild form of mental illness as a result of terrorism, terrorists themselves usually suffer from a severe form of mental illness. Only the emotionally backward can adopt a political cause with enough gusto to neglect spouses, children and career – let alone to wreak similar havoc on other people who are evidently not involved in their conflict.

So our original question should not be “Is he insane, or is he a terrorist?” but rather “Is he insane, or is he both insane and a terrorist?” To answer that question, we would have to know more about his intentions and his political agenda (if any).

The sorrow and the pity

Nothing suspends disbelief like empathy. The man who claims to have been tortured is more likely to be believed than the man who claims not to inflict torture. The woman who claims to have been raped is more likely to be believed than the man who claims not to be a rapist. In such cases, the first party strikes a note of fellow-feeling and triggers a sense of moral outrage. The second party might be guilty, or he might not – either way, his “plight” doesn’t pull at the heartstrings with quite the same insistence as that of the first party. The problem is that emotional engagement often emerges as credulity, and the lack of it as incredulity.

When people claim to be victims of a hate campaign complete with death threats, they are generally believed, even if they don’t trouble themselves to produce documentary evidence of such threats. Their story seems believable by virtue of their purported victimhood – so believable does it seem that others often neglect to ask for such evidence. It seems like “bad manners” to express anything that can be construed as doubt.

When marriages break up, both sides scramble to paint themselves as the victim in an abusive relationship. When public demonstrations turn ugly, demonstrators and police are both capable of actively inviting injury. In these and in many other similar situations, there is political advantage to being seen as the victim. “You don’t need a bruise to be abused” as a feminist slogan has it, but a bruise is tangible evidence that its owner is a victim rather than a perpetrator. You may not need a bruise, but you’d be mad to turn one down. Emotionally intelligent people know which theatrical devices sway an audience by suspending disbelief, and manipulative emotionally intelligent people don’t hesitate to use such devices.

Perhaps this suspension of disbelief arises because so many of our beliefs are adopted as a matter of social cohesion rather than because they have a legitimate claim to be true. Many or most of our “theoretical” beliefs (in areas such as history, religion, science, economics, etc.) are determined by which group we want to belong to – and want to be seen to belong to.

Whatever its cause may be, this is bias. By taking sides with the apparent underdog, we can lose sight of the fact that appearances can be deceptive. I’m not complaining here about the moral bias that makes us take sides with the underdog. My problem is with epistemic bias: when taking sides with the underdog includes the unwarranted assumption that he is telling the truth.

When someone commits suicide, the first thing the biddies ask is, “What made him do it?” “What drove him to such extremes?” Even when mental illness is involved, there is always a more or less vague suggestion in suicide that there was a victim, and therefore there must be some perpetrators. It strikes me as very naïve to suppose that the bias described above isn’t routinely exploited by people who want to suspend disbelief, and know how to do it.

Some people end their own lives because they have a painful physical condition from which there is no hope of recovery. That is a perfectly reasonable thing to do. Others would have us believe that they end their own lives because they have to cope with mental anguish analogous to an incurable physical condition. I suggest we make an effort to overcome our own bias, and ask ourselves: “Was he really in a state of incurable mental anguish, or did his action exploit a universal human bias?” I think the more we think about mental anguish, the less closely it resembles an incurable physical condition.

I submit that the deliberation preceding suicide often involves more cunning than is generally acknowledged. It seems to me that the paradigm cases of suicide are not of people suffering unbearable mental anguish, but of emotionally intelligent agents who are trying to achieve something. We get a more instructive paradigm by looking at cases of hunger strikers and suicide bombers.

We tend to think that people kill themselves with no purpose except to end their pain by ending their lives. That’s because we tend to think what we “really” want are experiences rather than external states of affairs, and experiences come to an end with death. (I wrote about this philosophical error in more detail here.) But ask yourself: Do you want to have the experience of your lover being faithful to you? Or do you want your lover actually to be faithful to you? Your answer illustrates whether you want an “external” state of affairs to be realised or to have a (possibly illusory) “internal” experience.

We should ask: What external states of affairs are those who kill themselves trying to bring about? Our answers may be less flattering to the suicidal than the standard “suffering saint” interpretation!

Of course no one is supposed to ask that sort of thing, or to think the suicidal can be motivated by malice, or to suggest they should be treated with anything less than undiluted sympathy. It’s frowned upon. It sounds like the “swift kick in the pants” school of psychiatry. It seems “devoid of compassion”. Worst of all, the very bias I have been complaining about kicks in and gives the opposed view greater initial credibility than it deserves. The bias is self-protecting and self-perpetuating.

Against all that, I would argue as follows: First, if we want the truth, we must avoid epistemic bias, and to avoid that, we must not let our automatic empathy for victims lead us to assume that their stories are true, or to unquestioningly accept that those who claim to be victims really are what they claim to be. We might dub this error the “empathetic fallacy”.

Second, if what I’m saying here is true, the real victims of suicide are not those who kill themselves so much as those who are unjustly held “accountable” for the actions of those who do. The injury of their supposed “blame” is often compounded with self-imposed shame and secrecy. I think we should spare a bit of compassion for them.

Ulterior motives

We all know that people sometimes act with ulterior motives. Some people marry for money, despite declaring that they are marrying for love. Some give to charity with apparent sincerity, when in fact they hope to make a profit by enhancing their reputation.

Many people think that practically everything we do is done with the ultimate ulterior motive of seeking pleasure. For example, they think that even if givers to charity are not motivated by the hope of making a profit, at bottom they must “really” be motivated to seek the pleasure of giving to charity.

I think that’s a mistake. In fact I think it’s a monumental mistake, one that has philosophically interesting causes, and politically troubling effects.

What are these philosophically interesting causes? I think there are two of them. The first is a failure to acknowledge an ambiguity in language. Whenever we act, our actions are caused by our beliefs and desires. And of course our desires are our own desires – if they weren’t, we’d be more like remote-controlled robots than autonomous actors. So in a completely trivial and empty sense, we do things because we want to achieve things for ourselves. In that trivial sense, whatever we do, we do it “for our own sake”.

But in another, non-trivial sense, some of our actions are selfish, because they contrast with actions that are obviously aimed at doing things for others. We do things for our loved ones, friends, sometimes even total strangers. We feed wild animals, we avoid dropping litter, we vote in secret ballots, and so on, with the evident intention of simply bringing about objective states of affairs. In such states of affairs we might end up poorer, or end up having less time, or end up suffering inconveniences, or whatever.

I think many people fail to see the difference between the trivial “selfishness” of doing things for the self’s own reasons, and the non-trivial selfishness of doing things for one’s own gain rather than for the good of others. By accepting the first, and then imperceptibly sliding into the second, they end up embracing the idea that “we do everything for selfish reasons”. With our intelligence bewitched by this ambiguity, it now merely looks as if I give to charity to help others – when in fact I must be doing it for my own gain, with an ulterior motive.

That brings me to the second philosophically interesting cause of that monumental mistake. We assume that we act with an ulterior motive, and when we cast about for such a motive, we usually look inwards and find “pleasure”. This habit of looking inwards comes from our philosophical tradition, which is misinformed from top to bottom by the assumption that our minds are “cut off” from the “outside” world. Being isolated like that, the best we can hope for by way of reasons for our mental states is “internal” justification. Thus tradition has it that our beliefs must be “based on” experience to count as knowledge. Analogously, our desires must be aimed at pleasure to be rational. Hence the near-universality of something akin to Freud’s “pleasure principle”: whenever we act, we act in order to get pleasure.

So it seems to me that two very typical philosophical errors lead us to embrace a misguided principle. And this misguided principle leads to still further trouble when it is applied in psychology, foreign policy, economics, and elsewhere.

If we assume that everything we do is “really” aimed at getting pleasure for ourselves, deliberation looks like a matter of accountancy, of “balancing the budget” of pleasure by deferring some of it. Motivation itself begins to look like a matter of exchange, of “looking for the best bargain”, of “making sure profits exceed losses”. Motivation becomes the hope of reward. (I would guess that Kant’s distaste for such accountancy led him to adopt his nonsensical “categorical imperative”.)

For the individual, the supposed reward is pleasure. Between individuals, the supposed reward is the social surrogate of pleasure, namely money.

Be clear that I have nothing against pleasure or money, nor do I think there is anything immoral about seeking them. But I have serious objections to the factual assumption that motivation is essentially a matter of seeking reward, in the form of hedonistic or monetary profit (i.e. a gain in the “currency” of pleasure or the literal currency of money). Why? – We did not evolve to seek internal goals but external ones. Very often, having an external goal can be explained in evolutionary terms, and requires no “justification” in terms of deeper or ulterior motives at all. We seek such goals as a brute matter of fact, whose biological causes are understood.

By habitually assuming that motivation is the hope of reward, we badly misconstrue our own urges and our political institutions. We assume that life choices are guided by whatever we hope will yield the most pleasure or the most money. We suppose that death is bad to the extent that it yields no pleasure, and that love is good to the extent that it yields plenty of pleasure. To misconstrue love and death so badly is a philosophical disaster.

These assumptions extend into the workplace and the market. We assume that in general people opt for whatever yields the largest reward. From that, we infer that the best people are those who command the biggest salaries. (The reality is the reverse of that: the best people are those who are most interested in their chosen professions, who feel compelled to explore further even when the monetary rewards are meagre.)

Politics and international affairs do not escape the baleful influence of the idea that motivation is the hope of reward. We undertake military interventions and expect gratitude from a populace whose lives we have made “more rewarding”. We expect to “win the battle for hearts and minds” by giving people handouts. We hope to break the bonds of loyalty and ethnic identity by dangling large quantities of cash in return for betrayal. And so on.

Just a few days ago, the independent reviewer of UK anti-terrorism legislation (David Anderson QC) argued that the laws could be relaxed because more people die of bee stings than as a result of terrorist attacks. Once again it is assumed that our motivation to avoid death is a matter of its reward or lack thereof: death by terrorism amounts to the same thing as death by bee sting, it seems, because it’s equally unrewarding. But in reality, we expose ourselves to different kinds of risk with different degrees of consent. We all undertake everyday risks willingly and knowingly by going out into the garden (where there are bees) or by using the roads (where there are cars). We do not all undertake unusual risks such as knowingly putting ourselves in the blast zone of a religious fanatic. Most of us have a much stronger aversion to the latter than the former, as would be obvious if we decoupled motivation from the hope of reward.

Web of belief, tree of desire

The “many agents” view of the mind

One of the commonest images of the mind involves two “internal” agents struggling for supremacy: “reason” versus “the passions”. It is held that a wise person manages to subdue his unruly passions with his still more powerful reason. Or at least he can keep his passions in check to live a balanced sort of life.

The supposed antagonists in this struggle are reason, singular, and the passions, plural. Why this numerical imbalance? – The vague idea seems to be that reason is self-consistent, and as such it cannot come into conflict with itself, so there’s only one of it. But the passions are not constrained like that – they pull in all sorts of different directions, so there are many of them, or at least more than one of them. (There may be an element of truth in that: our desires can come into pragmatic tension with one another in a way that our beliefs cannot.)

Plato fine-tuned this simple reason-versus-the-passions model of the mind. In his dialogue Phaedrus, Plato likened the soul to a chariot, pulled by the horses of appetite and indignation, with reason in the driver’s seat. This tripartite model of the mind has been influential. For example, Freud’s model of the mind is at least reminiscent of Plato’s, with the conscious “ego” playing the role of reason, and the unconscious “id” and “superego” playing the lustful horse of appetite and the moralistic horse of righteous indignation respectively.

Although there may be some useful suggestions in such models, they are all misleading in that they treat components of the mind as if they were agents in their own right. But any real mind is at most a single agent.

Why do so many models of the mind commit “the homunculus fallacy” of positing internal agents with minds of their own? I think part of the explanation is that we habitually suppose that the mind is “cut off” from the outside world, with the result that motivation begins to look like a bit of a mystery. If the mind is in effect insulated from the outside world, then our primary ends can’t be to realise this or that external state of affairs, but instead to achieve more proximate internal goals – i.e. to enjoy experiences such as pleasure. Since so much of what we choose to do does not obviously bring pleasure, the question arises: what sort of experiences can we be “really aiming for” then? If what we’re “really after” are consciously-experienced feelings of one sort or another, we have to divide up our motivating urges into those that reward us with this or that type of conscious experience. Among these, we find “baser” urges to get the pleasure of sex, or to avoid the pain of hunger. Then there are “higher” urges to get the satisfaction of being factually correct or morally right. Highest of all are those motivated by the purity of reason alone, if that even makes sense. The mind begins to look as if it is populated by “thrifty housekeepers” who deal with the “home economics” of deferring this type of pleasure for that, who insist on “tightening our belts” by having us make do with a bit of pain for now, knowing that it will be offset by greater pleasure at a later date. Then there is the saintly housekeeper whose sole purpose is to use reason and do right.

It should be obvious that this is a ridiculous image of the mind. It’s completely disconnected from evolutionary theory, and it ignores the obvious fact that the simplest animal minds evidently seek external goals without having to engage a complicated internal apparatus of thrifty or saintly housekeepers doing home economics of conscious pains and pleasures.

Hume on belief and desire

Let us return to the fact that any mind is part of a single agent. David Hume saw that real acts are the products of both beliefs and desires – in other words, mental states of thinking that such-and-such a state of affairs is in fact the case, and of wanting such-and-such a state of affairs to become the case. (Hume’s famous distinction between “is” and “ought” reflects this difference, since to Hume morality is a matter of “sentiment”, of wanting what others want as a result of our fellow feelings for them.) Furthermore, according to Hume “reason is the slave of the passions” – in other words, the purpose of our beliefs is to help realise our desires.

Although Hume used the metaphor of a “slave” here, and real slaves are of course agents of sorts, Hume’s understanding of action goes against models that posit further agents within the actual agent. Hume wasn’t simply reversing the roles of Plato’s horses and charioteer: for Hume, belief and desire are the two essential component mental states behind the act of any single agent. If an agent desires X, and believes that action A will achieve X, he performs action A. The point of seeing reason as an out-and-out “slave” of the passions is not to add an extra internal agent, but to remove the explanatory need for one, by stressing how integral reason is to any sort of agency.

I often use the rudimentary example of a cruise missile to illustrate this idea. A cruise missile has a goal (to hit its target) and an on-board map which it uses to steer its way towards its goal. Having such a goal is its rudimentary analogue of desire, and the map marking the missile’s current position is its rudimentary analogue of belief. We can think of its “passions” as everything involved in its being directed towards its target; and we can think of its “reason” as everything involved in figuring out its actual position. Then both are essential components in its “acting” to achieve its goal. And “act” it does, in a rudimentary sort of way, because it is a rudimentary sort of agent.
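The Humean schema in the last two paragraphs – desire X, believe A achieves X, therefore do A – is simple enough to be written down mechanically. The following is a minimal sketch of that schema in the cruise-missile spirit; the function and data names are my own illustrative choices, not anything from Hume or the text.

```python
# Minimal sketch of the Humean schema described above: if an agent desires X
# and believes that action A will achieve X, the agent performs A.
# The "goal + map" framing comes from the cruise-missile example; the code
# itself is an illustrative toy, not a definitive model of agency.

def choose_action(desires, beliefs):
    """desires: set of goal states (the analogue of 'the passions').
    beliefs: dict mapping each available action to the state the agent
    believes it would bring about (the analogue of the on-board map).
    Returns an action believed to achieve some desired state, else None."""
    for action, believed_outcome in beliefs.items():
        if believed_outcome in desires:
            return action
    return None


# A 'cruise missile' agent: a goal (desire) plus a map (belief).
desires = {"at_target"}
beliefs = {"steer_north": "at_target", "steer_south": "off_course"}
assert choose_action(desires, beliefs) == "steer_north"

# Both components are essential: strip out the desire and no action follows,
# however accurate the map remains.
assert choose_action(set(), beliefs) is None
```

Notice that there is no third ingredient – no internal “weigher of rewards” sitting between the goal and the map. That absence is exactly the removal of the middleman credited to Hume in the next paragraph.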

Hume’s theory of action was a breakthrough. Its naturalism saw “the passions” as goal-directed states rather than as internal agents liable to lead us astray like internal will-o’-the-wisps. So it removed a sort of middleman. Without Hume, one might wonder why an agent would go for this or that external goal. The “explanation” would involve drawing a link between acting to achieve a goal and “having the motivation” to act – which involves an internal “weighing” of prospects of experiential rewards. With Hume, the internal reward looks unnecessary.

In keeping with current usage, I shall call goal-directed mental states “desires” and fact-representing mental states “beliefs”. In these terms, Hume showed that desires cannot be “mistaken”. They might lead to a miserable life, or a premature death, or worse: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.” Because desires do not even purport to represent the world as it actually is, they cannot conceivably be corrected for misrepresenting it. Nowadays we might call it a “category mistake” to assume that desires can be mistaken.

Can we have “mistaken” desires?

I think the widespread sense that we can be mistaken about what we want is an example of “the bewitchment of our intelligence by means of language” – one that arises through an ambiguity. Suppose I report my own mental state by saying “I want some wintergreen-flavoured chewing gum”. But after actually tasting it, I seem to correct myself by reporting that “I didn’t really want that after all”. It seems to me that I have either changed my mind, or else must have originally misreported the content of my desire. In either case, the desire itself wasn’t mistaken. If I changed my mind, I ended up with a different desire, but that’s not the same as starting off with a mistaken desire. On the other hand, if I misreported the content of my desire, I had a mistaken belief. How might I misreport the content of my desire? – One obvious way is by being over-precise. Perhaps all I wanted was a “refreshing, mouth-cleansing” flavour, and mistakenly thought wintergreen would deliver it. My mistake was believing that wintergreen would deliver what I wanted, rather than having a mistaken want. I “wanted the wrong thing”, but what was wrong was my belief, not the want itself.

Yet the idea that desires can be mistaken seems to have a life of its own. It permeates the “positive” concept of liberty (as Isaiah Berlin called it) that has been mainstream in Continental European philosophy. According to this conception, being free isn’t simply a matter of being able to do what you want, whatever that may happen to be. It also involves wanting the right things, in other words having supposedly non-mistaken desires. Rousseau, Kant, Hegel and Marx all treat reason as if it can determine goals, and as if some goals are “unreasonable”.

There is a sinister side to the assumption that some of our desires can be mistaken, which involves overruling “mistaken” desires, or treating them as “not really desires at all”. The former entails paternalism – forcing people to do things for “their own good”. The latter involves dismissing desires as the illegitimate products of “false consciousness”, “brainwashing”, or something similar.

This is all well-trodden ground. As old-fashioned liberals from Burke to Berlin have repeatedly argued, that way lies the gallows, the gas chamber and the Gulag. Rather than going over such familiar ground again, I’ll look at some positive results of Hume’s understanding of belief and desire.

By treating belief and desire as necessary but separate components of the causation of action, it’s much easier to see the difference between them, and to logically “isolate” each type of state from the other. We can think about how beliefs mesh with other beliefs of the same agent, and how desires mesh with other desires of the same agent.

“Systems” of belief and desire

Taken together, an agent’s beliefs form a system. WVO Quine argued that we should think about that system as a “web”. Beliefs hang together through relations of implication. If one of them must be excised from the system, because it threatens contradiction in the light of a new observation say, at least some of the others that together imply it must be excised as well. There are no “foundations”, although some beliefs are anchored more directly to the world through observation than others. By always minimizing disruption to the system as a whole, new observations can be accommodated – and as they are so accommodated, the system evolves over the course of time.
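The revision rule just described – excise the contradicted belief, plus just enough of what implies it, disturbing the web as little as possible – can be sketched as a toy procedure. Everything below (the function name, the swan example, the representation of implications as premise sets) is my own illustrative construction, not Quine’s formalism.

```python
# Toy sketch of conservative revision in a Quinean "web of belief".
# A new observation contradicts one belief; we must give that belief up,
# along with one member of each set of beliefs that jointly implies it,
# choosing the excision that disturbs the web least.

def minimal_revisions(web, premise_sets, rejected):
    """web: set of beliefs held. premise_sets: list of sets of beliefs,
    each of which jointly implies `rejected`. Returns every revised web
    obtainable by a smallest possible excision."""
    # Build all candidate excisions: the rejected belief plus one
    # premise drawn from each implying set.
    options = [{rejected}]
    for premises in premise_sets:
        options = [opt | {p} for opt in options for p in premises]
    smallest = min(len(opt) for opt in options)
    return [web - opt for opt in options if len(opt) == smallest]


web = {"all_swans_are_white", "this_bird_is_a_swan", "this_bird_is_white"}
premise_sets = [{"all_swans_are_white", "this_bird_is_a_swan"}]

# Observing that this bird is black forces out the prediction plus exactly
# one of its supporting premises - never more than we must give up.
results = minimal_revisions(web, premise_sets, "this_bird_is_white")
assert {"this_bird_is_a_swan"} in results      # option: keep "it's a swan" out... drop the generalisation
assert {"all_swans_are_white"} in results      # option: keep the generalisation, deny it's a swan
```

The interesting feature is that the rule alone doesn’t decide *which* minimal revision to make – whether to abandon “all swans are white” or “this bird is a swan”. That underdetermination is part of what makes the system a web rather than a pyramid with foundations.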

An agent’s desires also form a system, although it is less systematic than a richly-interconnected “web”, because individual desires aren’t subject to the relations of implication that exist between beliefs. (Perhaps the strongest constraint is that no agent can both desire X and desire not-X at the same time.) Like an agent’s system of beliefs, his system of desires evolves over the course of time. As far as I know, JS Mill (in The Subjection of Women) was the first to suggest that it grows like the branches of a tree. Infants start out with a few coarse-grained desires for large-featured, roughly-circumscribed objects such as food, contact with parents, and warmth. As time passes, and as their powers of discrimination are more finely tuned, these desires increase in number and become focused with greater sharpness on particular types of food, particular kinds of interaction with particular people, warm water rather than hot air, and so on.

The tree metaphor is intended to capture a few salient psychological facts. First, desires don’t just spring up “out of the blue” as a result of mere exposure to some sort of stimulus. They develop from – i.e. grow out of – earlier desires for something less specific, objects of desire that were represented in the agent’s notional world in a coarser, less well-defined way. So the “growth of the tree” of an agent’s desires is a matter of increasing fineness of grain and sharpening focus. The newer shoots at the tips of a tree’s growing branches are independent of one another – at least to the extent that they cannot occupy the same space – but they are not independent of the older branches they grew out of.

Second, the older branches don’t die away when they spawn their newer, more numerous and more finely-focused offshoots: an adult’s coarse-grained desires for food or warmth are just as real as an infant’s desires for the same things. But being longer-established and better-developed, there’s a lot more to them. This is relevant when we consider the satisfaction or thwarting of desires. The longer-established an agent’s desire is, the more it tends to qualify as an “important life interest”, and the less fleeting its goal. For example, it is distressing to be turned down for a job after an interview, or to suffer a miscarriage; but to lose a career you have spent decades carving out, or to have an adult son or daughter die, are catastrophes. The passage of time turns green shoots into hoary old branches with multiple offshoots of their own. My concern for my sons isn’t just for their lives, but also for their health, careers, love lives, reputations, friendships, ambitions, and much else besides.
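The tree can also be sketched as a toy data structure. Again, none of this is in Mill; the class and the example desires are illustrative assumptions. The two facts it encodes are exactly the two above: a new desire can only grow out of an existing, coarser one, and the older branch survives alongside its offshoots.

```python
# Toy model of the "tree of desires": a coarse-grained desire spawns more
# specific offshoots over time, and the older branches persist.

class Desire:
    def __init__(self, obj, parent=None):
        self.obj = obj            # what the desire is for
        self.parent = parent      # the coarser desire it grew out of
        self.children = []        # sharper desires that grew out of it
        if parent is not None:
            parent.children.append(self)

    def refine(self, more_specific_obj):
        """Growth: a new, sharper desire grows out of this one.
        There is no way to create a desire except from an existing one
        (or as a root, modelling an infant's innate desires)."""
        return Desire(more_specific_obj, parent=self)

    def lineage(self):
        """The chain of ever-coarser desires this one grew out of."""
        node, chain = self, []
        while node is not None:
            chain.append(node.obj)
            node = node.parent
        return chain

food = Desire("food")                    # infant's coarse-grained root desire
savoury = food.refine("savoury food")    # sharpened with experience
curry = savoury.refine("lamb madras")    # adult's finely-focused offshoot

# The old branch survives its offshoots: the desire for food is still
# there, and every specific desire traces back through coarser ancestors.
```

The point of `lineage` is the first psychological fact: every finely-focused desire has a pedigree of coarser desires, all the way back to a root.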

It seems to me that these metaphors of a “web” of belief and a “tree” of desire are well worth exploring. They help to illuminate crucial differences between belief and desire. Apart from the main functional difference noted by Hume, the systematic links between beliefs are “horizontal” while links between desires are “vertical”. That is to say, we rationally adopt new beliefs and abandon old beliefs by checking how well the current system hangs together logically. How new beliefs arise is mostly irrelevant – the context of discovery is independent of the context of justification. But we adopt new desires and abandon old desires not as a matter of rational choice, but as a matter of historical development. A belief system does not have foundations, but a desire system does, in a manner of speaking.

To see how this view differs from that of someone who has a “positive” concept of freedom and who thinks we are liable to fall victim to “mistaken desires”, consider advertising.

It is quite common to think about advertising as if it can implant a wholly new desire – a “mistaken” desire – in the mind of someone exposed to it. This thought comes from being unclear about the difference between belief and desire, and from taking belief as the generic model of a mental state. It’s true that exposure to new facts often gives rise to new beliefs about those facts. But desires are different from beliefs. A desire for something can only be implanted in the “receptive, fertile soil” of a mind that wants something roughly like it already. You might not have thought of buying an iPad, say, till you see an ad for one. But an ad for an iPad won’t sway you unless you already wanted some sort of useful computer-like gadget or reading device. I would argue that all advertising can ever implant is a desire of greater specificity than before. Since greater specificity is an inevitable result of the passage of time and of the sharpening focus of desires that already guide behaviour, it is largely irrelevant that advertising delivers the goods for Apple rather than Amazon on this or that particular occasion.

Animals use “advertising” when they brandish ornaments of sexual selection to entice sexual partners. The classic example is the peacock’s tail. The purpose of the tail, and of the advertising, is to attract peahens. It also happens to attract the eye of human aesthetes, and the unwanted attention of predators. In all three cases, desires that are already in place are focused more precisely on a particular peacock: the peahen must already be interested in sex with peacocks, the aesthete must already be in search of objects of beauty, and the predator must already be hungry. This animal advertising serves a “purpose” in much the same way as human advertising is deliberately aimed at attracting the attention of potential buyers. But its effects depend on whether or not it is noticed, and that is a matter of the receptivity of whoever or whatever notices. Whether the advertising is purposeful, deliberate, or wholly unintended, all that happens to the one who notices it is that the range of options of an already-existing desire is narrowed rather than enlarged.

Political distaste for the “consumer society” and the supposed victims of its advertising campaigns fuels the idea that there are “hidden persuaders” at work, “brainwashing” us into wanting things we wouldn’t otherwise want at all, or shouldn’t want for moral reasons.

I would welcome an end to that way of thinking. I think it harms more people than it imagines to be victims of “false consciousness” or the “consumer society”. For example, such attitudes have done untold harm to homosexuals, who were traditionally thought to corrupt non-homosexuals. But if we were clearer about the differences between belief and desire, we would see that “radical conversion” of that sort is impossible.