Desire is to successful intention as belief is to knowledge

The concept of belief is really very simple. You believe something when you think it’s true. The “something” you believe – the “it” that you think is true – is its content, which can be expressed as a declarative sentence. That’s how we usually ascribe beliefs, as in “Bill thinks Paris is the capital of France”, where the embedded sentence ‘Paris is the capital of France’ is true in the same range of situations as Bill’s belief.

Animals obviously have beliefs, even though they don’t have language. So those embedded sentences are generally not literally inside the head of the animals whose beliefs they correspond to. Rather, the belief plays a similar role in the animal’s mental economy as the sentence plays in the language’s inferential economy. The role of the sentence in the language – what it would be true of, what it implies, what implies it, and so on – mimics the role of the belief in the animal’s belief-system (such as what the animal infers from it).

Although the concept of belief is simpler than the concept of knowledge, we seem to acquire it later in life. There’s a reason for that. Having knowledge is the normal state of any perceiving mind with respect to the world. Evolution shaped brains, afferent neurons, sense organs, etc. so that minds would have knowledge. So when we humans begin to talk as infants, we tend to talk about the most familiar normal situations of agents knowing things about the world they live in.

Only after we have acquired a working concept of knowledge are we able to abstract from it to acquire the concept of belief (and many do not get that far). An item of knowledge is a true belief sustained by reliable processes. We sculpt our (simple) concept of belief by chipping some bits off the larger stone of our (less simple) concept of knowledge. We do it by “bracketing” or suspending in our imagination the condition that it be true, and/or the condition that it be sustained by reliable processes. These extra conditions typically depend on states of affairs outside the head of the knower – truth depends on the subject matter of the belief, and reliability depends on information channels such as sense organs. Only after we have isolated the component of knowledge that lies wholly inside the head do we finally arrive at the concept of belief.

There are some legitimate reasons why beliefs are of special interest to us. But there is at least one illegitimate reason: the ubiquitous Cartesian view of the mind as being “cut off” in a problematic way from the “outside world”, so that we are “not entitled” to talk about anything beyond “the inner”. Thus begins an unhealthy preoccupation with certainty and with “justification” – in other words, with internal guarantees or checks on the supposed goodness of our beliefs.

Internalism has done untold damage to epistemology in particular, and to philosophy in general. That’s a misfortune, but it’s one that mostly only academics have to live with. However, internalism has also done untold damage to the lives of ordinary people, in its application to volition. I’ll try to explain.

The concept of desire is as simple as that of belief, and our acquisition of it mirrors that of belief. We start off with successful intention rather than knowledge. Successful intention is the normal outcome of any act. Most of the time when we say “he did X”, we mean he meant to achieve X when he acted, and he actually succeeded.

I mentioned above that evolution shaped brains to have knowledge. But that knowledge serves an even more fundamental evolution-given purpose: to promote the proliferation of our genes in future generations. Evolution shaped brains, efferent motor neurons, muscles, etc. for achieving goals. So when we first begin to talk, we talk about the most familiar normal situations of agents achieving goals in the world in which they act.

Only after we have acquired a working concept of successful intention are we able to abstract from it to acquire the concept of desire (and, strange as it may sound, I would argue that many do not get that far). A successful intention is a desire that reaches its goal by means of reliable processes. We acquire the concept of desire by “bracketing” or suspending in our imagination the condition that it reaches its goal and/or the condition that it does so through reliable processes. These extra conditions typically depend on states of affairs outside the head of the agent – success depends on the object of the desire (a state of affairs) being realised, and reliability depends on working machinery such as motor neurons, muscles and limbs. Only after we have isolated the component of successful intention that lies wholly inside the head do we finally arrive at the concept of desire.

As with belief, there are some legitimate reasons for being interested in desire as opposed to successful intention. But as with belief, the Cartesian reason is illegitimate. The idea that our minds are “cut off” in a problematic way from the world in which we act narrows our focus, so that it settles on experienced wishes. There is real danger in this – not just for academics but for the way ordinary people live their lives. We are liable to think of a strong desire as a vividly-experienced wish instead of one that emerges in decisive action. And we may mistake fantasy for actual desire, even though we often fantasise about things we have an extremely powerful aversion to, such as rape. It is that sort of confusion that leads me to say many people never get as far as having a clear concept of desire. By confusing it with vividness of experience, they make a similar mistake to those who suppose the subjective feeling of certainty is a guarantee of truth.

The perspective of traditional epistemology is to look from the inside outwards. From that perspective, the concept of belief looks more primitive than the concept of knowledge, and it looks as if we can treat beliefs that have the extra ingredient of “justification” as making it over a hurdle to become knowledge. The same perspective supposes that intention is “like desire, only stronger”. I recommend an alternative perspective, which takes desire to be more like an attenuated form of intention. Instead of expanding our concepts of belief and desire to include the concepts of knowledge and intention, we have to contract our concepts of knowledge and intention till all we have are concepts of belief and desire.

I think this has a real bearing on how we live our lives, because so often we appeal to internal “feelings” when making judgements about what we really desire. For example, young parents are harsh on themselves if they do not feel a strong sense of attachment to a newborn baby. Lovers misread jealousy as a wholly “negative” emotion. Spouses mistakenly think they have fallen out of love if they have grown accustomed to each other. Terrorists deem their own actions to be morally right if their experienced urge is to do good.

Thoughts such as those lead people to make bad decisions in life.

The night Kuhn said “yes”

Fifty years have passed since the publication of one of the most important books of the twentieth century: Thomas Kuhn’s The Structure of Scientific Revolutions. This book is vital for our understanding of science in ways that are too numerous to list exhaustively. Here are a few random thoughts half a century on.

First, Kuhn showed us that the “Whig history” told by science textbooks is wrong. In fact it’s downright dishonest. Typically, science textbooks barely touch on the history of science, but when they do, the story nearly always goes that we are blessed by being currently in possession of the truth. The past has been a series of cumulative steps leading our forebears towards this glorious present.

The reality is always much messier and less “monotonic” than that.

Second, Kuhn showed us that science is a social process in which committed partisans vie for supremacy. The real worry here is not that scientists are not the saints they are often painted as being, but rather that the decisions they make when they choose one theory rather than another are not rational decisions.

If we do not have good reasons to think theory change is mostly rational, we do not have good reasons to think current theories are even approximately true.

Third, Kuhn showed us that communication between partisans of alternative paradigms is at least problematic, and may even be impossible. (Kuhn used the word ‘paradigm’ for a central theory combined with its “penumbra” of guiding ideas – techniques, unwritten assumptions, and above all notable successes that work as examples of “how to do it right”.)

Kuhn put our understanding of science through a harrowing trial. I remember lying awake at night the first time I read the book, half excited and half fearful that everything I had taken for granted about science was wrong. Personally, I think science – real science, not pseudo-science – survives this trial. But it’s a surprisingly near-run thing.

Communication is problematic between partisans of alternative paradigms because the words they use have different meanings. For example, in Newtonian mechanics the word ‘mass’ refers to an intrinsic property of an object, but in the newer relativistic alternative the same word refers to a quantity that depends on the reference frame.

These problems are troubling in science, but they are more obvious in humanities subjects such as philosophy. Students are often anxious that their teacher will penalise them if the teacher doesn’t agree with the ideas and opinions expressed in their written work. But they have little reason to worry if there is disagreement. Disagreement is a sign that teacher and students are at least working within the same paradigm. And most third-level teachers are scrupulously careful to avoid penalising students for expressing opinions they disagree with. An apparent lack of understanding is the real liability.

A much more treacherous situation arises when a student is an original thinker, and is writing within an entirely different paradigm from that of the teacher, so that they use words differently. In this situation, the teacher is liable to think the student has simply missed the point, or is changing the subject. This can look like a lack of understanding rather than the embracing of a new or different understanding.

I have never been original enough for that situation to arise in my own case. All the same, I think I have seen the potential for such equivocation in baffled expressions on the faces of peers and colleagues in a few areas. (And even teachers – yeah, I’m talking about you DD, when you saw my copy of EO Wilson’s Sociobiology!)

For example, ethics is divided between those who make moral judgments with reference to the consequences of action, and those who make moral judgments with reference to the motivation of agents. The word ‘right’ changes its meaning across this division, much as ‘mass’ did between Newton and Einstein. Culpability does not even enter into moral deliberation in the former, but it is the central concern of the latter. So each side tends to regard the other as “not thinking morally at all”. In a discussion of ethics, one side seems to the other to be simply “changing the subject”.

Science gets over this sort of failure of communication through observations and testing, but there are no such tests for moral theories. Much depends on what is in fashion, on what the most – or most influential – people think is a worthwhile way of thinking.

Utilitarianism was taken seriously in the nineteenth century, but the tide turned against it. It is widely thought to have been discredited. Thus the few who don’t think so are liable to be treated as stubborn or even ignorant people who “haven’t heard the news”.

Another area in which a gulf yawns between alternative paradigms is epistemology. The traditional project of epistemology was to worry about “justification” and to try to “refute the sceptic” (by which is meant the radical or Cartesian sceptic who feels he has no reason to think he is perceiving the “outside world” at all). WVO Quine’s “naturalized epistemology” rejects the Cartesian dichotomy between “inner” and “outer”, along with the traditional concern for internal “justification”, asking instead about the external reliability of the processes that give rise to beliefs. To the traditional epistemologist, he has simply “changed the subject”. He is considered a “naïve realist” rather than someone who sees, like Donald Davidson, that we have “unmediated touch with the familiar objects whose antics make our sentences and opinions true or false”.

I spent a few years as the lone “scientific realist” among the graduate students at a US university with a strong tradition of “continental” philosophy. I was considered naïve, having perversely turned my back on all the stuff they assumed had been known since Kant’s day about noumena being forever beyond our ken and all that. (Despite our apparent ability to refer to “them” using the word ‘noumena’!)

The one thing we did have in common was that we all worshipped Kuhn, for various reasons.

Then one day, Thomas Kuhn Himself came to town. It was a meeting of the American Association (or something like that) for the Philosophy of Science (or something like that), around 1990. As a graduate student, I was dutifully writing names on badges. Someone said “here comes Thomas Kuhn!” as the great man approached the desk and quietly announced his name with modesty and grace, despite the fawning multitudes loudly welcoming him to the Chicago Hilton. With trembling hands I wrote his name on the badge, desperately anxious that I might not be spelling it right.

Later, I got away from the desk and started to drink the (free) beer. After half a dozen (or something like that) cold ones I approached him, in the main ballroom (or something like that). At this stage, he was surrounded by the graduate students of at least three universities in the Chicago area. But eventually I saw an opening, and asked him: “Professor Kuhn, if you had to answer Yes or No to the question whether you are now a scientific realist, what would your answer be?”

Reader, he said Yes.

(And then he qualified his answer with some other stuff, but you wouldn’t be interested in that. Detailed, boring kind of stuff.)

Vaccination and intolerance of creed

Ours is a tribal species, prone to racism and other forms of intolerance. Thankfully, racism is no longer accepted among “educated” people (although many don’t take the trouble to get educated about what to count as racism).

However, human tribalism will out, and in recent years a new form of us-versus-them thinking has taken the place of racism: intolerance of creed. By creed I mean the central beliefs people use to steer their own lives and to make important decisions on behalf of their children. These beliefs might be religious, scientific, pseudo-scientific, or whatever: all they amount to are beliefs. They issue in behaviour, of course, as do all beliefs, but they remain mere beliefs.

Although disagreement is a valuable thing, and we should welcome attempts to rationally persuade others that their beliefs are mistaken, creed-intolerance takes the form of treating the offending beliefs not simply as false but as immoral, and indeed so severely immoral as to oblige the rest of us to overrule them. You can see the difference in the language used to condemn an offending creed: not the epistemological language of truth and falsity, knowledge and ignorance, reasons and evidence, but the moralistic language of shame and disgrace.

Racism and religious intolerance have always masqueraded as “concern for defenceless women and children”. White women supposedly needed protection from the advances (and allure, one suspects) of sexually voracious men of other races. Children had to be sheltered from the corrupting influence of various “great infidel” types, from David Hume to Salman Rushdie. And so on.

And today, creed-intolerance does the very same. For example, those who are sceptical of climate change catastrophe are not treated as simply having a different or factually erroneous opinion: they are condemned for committing future generations of children, grandchildren, great-grandchildren (etc.) to the fires of hell. Oh yeah, and climate change is going to be worse for women, we are given to believe. See the pattern?

Another classic example of creed-intolerance is the current “scientific” attitude to vaccination. Anyone with an inkling of science knows that vaccination is a good idea, that measles is a very unpleasant and possibly life-threatening disease with life-damaging complications, and that the MMR vaccine is very unlikely to do any sort of harm. You’d be doing your children and other people’s children a favour if you had them vaccinated.

Yet others think differently. Some people honestly – although erroneously – think vaccination may do more harm than good. By all means let us try to persuade them rationally that they are in error, but let us not make any attempt to overrule their judgement, even though we disagree with it.

Why? – Because we should encourage or at least allow “experiments in living”, and parents must have the final say in what is done in their children’s interest, unless it is obviously and very seriously harmful. But the science of vaccination is science: therefore it is attended by uncertainty, and some doubt is appropriate. If that surprises you, you have misunderstood the nature of science.

Many other practices might be considered harmful. For example, I consider the circumcision of boys to be harmful, but not so seriously harmful as to overrule parents’ decisions to have it performed on their sons. (The so-called “circumcision” of daughters is a different matter.)

It is often said that opting out of vaccination is harmful “to society”. To which I reply: there is no such thing as harm to society apart from harm to the individuals who constitute society. And as JS Mill argued, society (so understood) must be prepared to absorb a limited amount of harm for the greater good of individual freedom. In the present case, the harm occurs to those who do not have immunity, meaning mostly those who have not been vaccinated. As long as a significant proportion of the population do get vaccinated, or acquire immunity by actually getting the disease, there is little danger of an epidemic.

What is a “significant proportion”? – It depends on how infectious the disease in question happens to be. Suppose the average carrier of a disease infects two other people: then a potential epidemic is in the offing. To ensure that the disease has a downward trajectory, more than half of the population would have to be made immune. As long as this proportion is maintained, the disease will eventually become extinct. These proportions change with a range of factors, of course, but no disease is so virulent that the entire population would have to be vaccinated.
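To make that arithmetic concrete, here is a minimal sketch (my own illustration, not part of the original argument). If the average carrier infects R0 others, making a fraction p of the population immune cuts the effective number of new infections per carrier to R0(1 − p), and the disease declines once that falls below one:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Immune fraction above which the average carrier infects
    fewer than one other person, so the disease declines."""
    return 1 - 1 / r0

# r0 = 2 is the example in the text; 12 and 18 bracket the range
# commonly quoted for measles (an illustrative assumption, not a citation).
for r0 in (2.0, 5.0, 12.0, 18.0):
    print(f"R0 = {r0:4.1f}: more than "
          f"{herd_immunity_threshold(r0):.0%} must be immune")
```

For R0 = 2 the threshold is exactly one half, as stated above, and since 1 − 1/R0 is always less than one, no disease requires the entire population to be vaccinated.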

Despite that, people often talk as if everyone had to be vaccinated to curb a disease, and about those who shun vaccination as if they were treacherous “fifth columnists” who let society down by making us all vulnerable. But nearly all of the vulnerable ones are those who avoid vaccination.

“But they expose their own children to risk!” is the next plea of the creed-intolerant. To which I reply: OK, but so what? We all expose our children to risk, knowingly or otherwise, as well as taking risks ourselves, knowingly or otherwise. It is up to us as individuals and as parents to judge whether the risks are acceptable. Most of us drive cars, and bring our children along as passengers. Some of us smoke cigarettes in houses where our children live. Personally, I wouldn’t take the risk of sending my own children to a Catholic school, but I accept that other parents deem this to be a wise decision or an acceptable risk for their own children. Fine, that’s (mainly) their business.

Let’s be honest. Most people don’t like other people to express different opinions from their own. Some don’t like others to even have different opinions from their own. And they tart up their own narrow-mindedness and intolerance to look like “concern for children”. As per usual.

I’m not religious, but one thing must be said in favour of religion. Anyone whose creed is honestly religious cannot but admit the simple fact that others have other creeds. I have my religion, and you have your religion, and we live in different ways as a result. Occasionally, an admirable pluralism springs up where these differences are routinely acknowledged and tolerated.

It means less than you think

Some things have “meaning”, and some things don’t. Bits of language such as sentences and words “mean” things, and various mental states such as desires and beliefs have “content”, which amounts to pretty much the same thing. But most things don’t have “meaning”, and among these, we are liable to mistakenly think some of them do have it even though they don’t. For example, any question about “the meaning of life” is surely misguided. We are born, we hope to achieve this or that during the course of our lives, and then we die. Our hopes have “meaning”, but our entire lives do not.

To see “meaning” where there is none – or to see more of it than there really is – is a very common human weakness. “Primitive” societies see agency where there is none – such as gods, ghosts, or spirits in the rivers and forests – and where there is agency there is purpose, which is a sort of meaning. “Non-primitive” societies do the very same. James Lovelock sees agency in the Earth’s “ecosystem”. Physicists see semantic “information” where there is nothing more than co-variation. Even a sensible chap like myself sometimes has to remind himself that there is no malicious agency in packaging that defies my efforts to open it.

We see too much “meaning” in things. There is less in heaven and earth than is dreamt of in our philosophy.

I will cease putting the word ‘meaning’ in scare-quotes from here on, as it is getting tiresome. But in what follows you should read the word as if these warning-signs (and sneering-signs) were still there. The use of a single word for a wide range of complicated and fuzzy relations has a dangerous ability to bewitch our intelligence.

Something has meaning when it points towards, stands for, symbolizes, or more generally represents something else, somehow or other. This can be achieved in a wide variety of ways. A primitive type of meaning can be seen in a colour swatch that exemplifies a colour. It can get much more sophisticated, as when a scientific law describes what would happen if an imaginary (i.e. “counterfactual”) state of affairs were realized. There is denotation (e.g. the name ‘Jeremy’ stands for me), and there is connotation (e.g. the words ‘grassy knoll’ may remind you of the Kennedy assassination, but don’t directly refer to it). And there’s other stuff. Many other word-world relations are possible, all of which involve some variety or other of meaning.

Philosophers are mostly interested in the way words refer to things, and in the way sentences are true or false of states of affairs. Mental entities such as concepts and mental states such as beliefs and desires mirror the representational capacities of these items of language.

Until fairly recently, it was assumed that meanings were determined by our inner experiences. This fits in with traditional ideas in epistemology: we have distinctive experiences, which are the magic link, supposedly, between our thoughts and things outside our heads. So according to this traditional (“Augustinian”) view, meaning essentially involves naming these external things using words associated with the internal experiences.

The “inner experience” idea of meaning began to crumble in the nineteenth century. For example, Freud thought that some of our behaviour (such as slips of the tongue) and thought-like processes (such as dreaming) have a meaning that is not determined by inner experience, either because they don’t involve conscious experiences at all, or else because the experiences involved are superficial and hide a deeper meaning.

Freud’s ideas about the meaning of slips of the tongue and other behavioural “parapraxes” were part of a larger movement away from the traditional Cartesian focus on inner experience, towards pragmatism, which is focused instead on external behaviour. Almost all of the great thinkers of the early twentieth century took this pragmatic turn, from the leaders of the American Pragmatist movement to Heidegger.

Wittgenstein famously repudiated some of his earlier work by changing his mind about meaning. His earlier view was mistaken, not only because it assumed inner experiences determined “meaning”, but also because its “atomism” assumed that the “primary vehicle of meaning” was the word. His later, corrected view is expressed in slogans like “meaning is use”. In this newer pragmatism, there is a complex interplay between sentences and words, but if anything could be called the primary vehicle of meaning now, it is no longer the word but the sentence. The meaning of words is determined by sentences whose truth-value can be readily agreed upon. Only after the reference of words has been fixed – by the semantically important sentences they occur in – can they be re-combined to construct new sentences, some of which can be false. There is a clear asymmetry between truth and falsity here that may seem jarring to some. But I would argue that that is a symptom of not yet having taken the crucial step away from the “inner experience” of Descartes towards pragmatism. It is quite remarkable how persistent the traditional way of thinking has been, despite its rejection by so many great philosophers.

We can see how meaning is determined by use in the rudimentary sentence-like noises made by animals. For example, some birds make a distinctive sound (like the blackbird’s scolding “pink” sound) when a cat is in the garden. The noise they make is recognized as a warning by other birds, which fly up whenever they hear it. Provided it is reliably correlated with the actual presence of a cat, the noise is true when a cat actually is in the garden. In bearing a truth-value, the entire noise is analogous to a declarative sentence in a human language. Part of the noise (quite possibly the entire noise) specifically refers to a cat, and so functions like a word.

What it refers to is far from immutable: if birds utter the same sound whenever a sparrow-hawk or a cat is in the garden, the reference is not specifically to cats, but to members of the broader category of sparrow-hawks-and-cats. And it may refer to a great many things we haven’t thought of yet, such as lifelike robotic cats, or four-legged scarecrows, or what have you.

This pragmatic way of understanding meaning makes it quite a bit fuzzier than it seemed before. The fuzziness of reference is especially troubling if we are used to thinking of words as if they work like names, or like name tags attached to objects by invisible pieces of string. With our new awareness, we see that words are attached to things only within the context of the ways they are used. Inasmuch as use is messy, reference is too. That indeterminacy does not make language use impossible, obviously, but it does shake the earlier conception of language to its foundations.

Let us take stock by noting a few consequences of this way of thinking. First, although nothing is certain, and it can be very hard to test the truth of theoretical claims made in science, truth itself just is an everyday fact of life. We must routinely utter truths for words to refer to things, and even to enable us to utter falsehoods.

For example, let us return to our garden birds. One clever bird might notice that uttering the cat-in-the-garden sound has the convenient effect of emptying the garden of other birds. So when food is put on the bird-table, he utters the sound, and is rewarded by getting more food for himself. This avian equivalent of crying wolf can be over-used. If it became the routine sound uttered when food is put out, it would no longer mean ‘there’s a cat in the garden’ but ‘there’s food on the table’. The new reference to food rather than cats is determined by the requirement that most of the time, most of what is uttered is true, and indeed is recognized as being true by most of those who use the utterance (by uttering it themselves, or hearing it uttered by others and reacting appropriately).

A second consequence of the pragmatic way of thinking is that language is necessarily public. Sadly, this important fact tends to lie buried beneath the verbiage generated by Wittgenstein’s “private language argument”. We can see how obvious it is by considering the “inverted spectrum” thought experiment. Imagine someone who sees colours in reverse, in other words, someone who when looking at blue objects has experiences of the sort that normal people have when looking at yellow objects, and so on. Such a person would have to have an abnormally-wired brain, but this would not be revealed in his use of language. His reports of colour would be the same as everyone else’s – he would still call tomatoes “red” and bananas “yellow” and so on. Such words describe objective features of these public objects’ surfaces rather than his own inner experiences. They have to refer to public objects, because otherwise the truths that fix their reference could not be publicly affirmed.

A third consequence of pragmatism is that in general, meaning or content is determined by interpretation. Things that have meaning are just things that meaning can be assigned to in a more or less rigidly constrained way by someone trying to make as much sense as possible of them. The constraint involved might be something as simple as reliable correlation. For example, the noise the birds make means what it does because it is correlated in a reliable way with whichever state of affairs makes it true, and this correlation constrains what an informed interpreter could assign as content. But with more abstract sorts of content, the constraints can get more complicated. In formal sciences, these constraints often take the form of definitions, which work like laws. But there are no such laws in everyday human discourse – the definitions found in dictionaries describe how words are actually used, rather than fix their meaning independently of use.

The bird noise I am using as a rudimentary example of “things that have meaning” is stolen from Quine, who wrote of a tribe whose members say ‘gavagai’ when a rabbit crosses their path. I prefer my own example of the bird noise, because with a human tribe it is tempting to assume its members know something we don’t – that their language contains details we haven’t noticed yet, so that their utterances mean more than what we have so far been able to interpret. With birds, there is less of a temptation to think there is anything more to what their utterances mean than what they should be interpreted as meaning.

And that is where the “limits” of meaning are to be found – there is no more detail in meaning than what an informed interpreter would assign as meaning. This is every bit as true of mental content as it is of linguistic meaning.

Mental content basically consists of beliefs and desires. The other “intentional” mental states (i.e. those that “point” to states of affairs) such as hopes and fears can be analysed in terms of their belief and desire components. We ascribe, describe and even individuate these states using “embedded sentences”. For example, at one point Frodo believed that Gandalf was dead. The sentence ‘Gandalf is dead’ expresses the content of Frodo’s belief. But this embedded sentence schema can be misleading, because it may suggest that beliefs and desires are themselves “sentences written in the head”. Thus having a belief would involve having a sentence in the “this is true” register somewhere in the brain, and having a desire would involve having a sentence in the “would that this were true” register somewhere else in the brain.

Although this idea has its supporters, I think it’s pretty obvious that it can’t be right. Many kinds of animal clearly have beliefs and desires. If brains work by handling linguistic entities just to accommodate those mental states, why have so few animals gone the whole human and started to talk into the bargain? It would be a small extra step for each animal, but a giant advantageous leap in evolutionary terms.

The idea that thought is essentially linguistic activity in the brain might be completely empty. We may be free to call the patterns our brains “manipulate” the “symbols” of a “brain language”, but this would be a very different sort of language from any we are familiar with. It is not used for communication, and we cannot easily discern or individuate its “symbols”. Why call it a “language” at all, and how much does it explain to do so?

Here’s a bad place where it may lead us: if we assume thought and language are too closely connected, we are liable to see as much detail – as fine a grain, if you prefer – in the mental states we describe as is in the language we use to describe them. There is often more detail in language than in the mental states it describes. If so, the extra detail is artefactual and therefore misleading. We have already seen how a bird noise might apply broadly to sparrow-hawks-or-cats, while our human linguistic description of what it applies to narrows it down more specifically to cats only. Like the mistake we met earlier of thinking life itself can have a meaning, this is to see more meaning than is really there. As language-using humans, we have to entertain alternative hypotheses and consider rival theories, which are usually expressed linguistically. But the beliefs we end up with generally do not have content of so fine a grain.

Daniel Dennett had a much better idea. He noted that whenever we describe, explain, or predict anything, we adopt a “strategy”. The strategy we adopt when describing the behaviour of an agent is to posit goals, combined with information-bearing states that co-vary with the world in which these goals are pursued. These are rudimentary analogues of desires and beliefs respectively. For example, a thermostat has the “goal” of keeping a room at a steady temperature, and it opens or closes a circuit to turn on a heater depending on whether the room is warm enough or not warm enough. The content of these states is assigned through interpretation of its behaviour as an agent. That interpretation involves assuming at the outset that the agent’s “beliefs” (or rudimentary analogues thereof) are all true, that it is fully rational in the pursuit of its goals, and so on, reluctantly lowering our expectations as we “work our way into the system” and are compelled to assign the occasional false belief to maximize consistency.
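To see how little machinery the strategy needs, here is a minimal sketch in code (my own illustration, not Dennett’s; the names are made up). The set-point plays the role of a rudimentary desire, and the sensed temperature plays the role of a rudimentary belief:

```python
# A toy "intentional system": nothing in the device itself is a desire
# or a belief. The content is assigned from outside, by an interpreter
# making sense of its behaviour as goal-directed.

class Thermostat:
    def __init__(self, set_point: float):
        self.set_point = set_point  # rudimentary "desire": room at set_point

    def act(self, sensed_temperature: float) -> str:
        # rudimentary "belief": the room is at sensed_temperature
        if sensed_temperature < self.set_point:
            return "close circuit (heater on)"
        return "open circuit (heater off)"

stat = Thermostat(set_point=20.0)
print(stat.act(17.5))  # close circuit (heater on)
print(stat.act(21.0))  # open circuit (heater off)
```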

Thermostats are ultra-simple agents, if they count as agents at all, but there is a smooth scale of complexity. A cruise missile has a target, and uses a video camera and an onboard computer map to keep track of its own position as it approaches the target. The detail of the content involved gets richer as we move up this scale. The fineness of its grain increases, if you like.

But it never becomes richer or more fine-grained than what can be assigned by an informed interpreter. This has profound implications for many human practices, including philosophy. For example, consider a man who looks at the sky and says, “oh dear – I think it’s going to snow!” He then proceeds to put on Wellington boots and a raincoat, grab an umbrella, and so on. If his behaviour is consistent with that of someone who thinks it is going to rain rather than someone who thinks it is going to snow, interpretation leads us to assign the belief that it is going to rain, and to conclude that he has misunderstood the word ‘snow’, or perhaps that he is trying to mislead us. We do not assign both the belief that it is going to rain and the belief that it is going to snow, because these are inconsistent. To be a little more precise, to the extent that they are inconsistent, we cannot ascribe them both.

It is hard to see how an interpreter could assign inconsistent beliefs to the same agent. But suppose, as Donald Davidson said, that there is nothing more to mental content than what a fully informed interpreter would assign as content. Then it is very hard to see how an agent could actually have inconsistent beliefs. Yet much philosophy assumes that the main purpose of logic is to remove inconsistencies in our belief system. Why make efforts to remove inconsistencies if they cannot be there in the first place?

That is not to say that logic serves no useful purpose. I would say its principal purpose is to draw out the logical consequences of hypotheses (including the axioms of any branch of mathematics). In that way, the hypotheses of science and everyday life can be tested against observation. Often, observation casts doubt on our hypotheses. We can certainly have false beliefs galore, although true beliefs are the “default”.

Many other human practices involve efforts to discern meaning, from a jury trying to determine the motive for a crime, or an art critic trying to understand a work of art, to a psychoanalyst trying to uncover unconscious mental states or internal conflicts. Some of these efforts may be an over-interpretation of the subject matter. Anish Kapoor’s “Orbit” tower sculpture might be nothing more than an interestingly convoluted shape.

Personally, I think Freud had some valuable insights, but I think much or most of his writing does not survive the criticism that he over-interpreted the contents of the mind. In some forms of behaviour – passive aggression, carelessness, etc. – there are indeed cues enough to assign modest mental content such as “she doesn’t like me” or “the criminal must have sort-of-wanted to get caught”. But ambitiously ascribing a much more complicated, convoluted substructure of unconscious beliefs and desires to an agent is unwarranted, because an interpreter assigning such content would have to do so in an unconstrained way, or in a way that was constrained by something irrelevant, such as the ideology of a school of thought rather than the behaviour of the agent.

I would aim a somewhat similar criticism at the writings and talk of many other specialized disciplines in the humanities, including philosophy. Much of the detail is assigned not through honest interpretation with an eye to the world, but rather with an eye to the affirmation of a group of like-minded specialists. The meaning of what is said or written is not checked against the world in which what is said is mostly true, but against a social milieu in which experts agree or disagree. These experts speak a “semi-detached” rather than a genuinely public language. In such a language, much of the detail comes from shared ideology rather than factual aspects of the world. Typically, the technicality of this sort of language is artefactual and therefore unwarranted. To paraphrase AJ Ayer: the technical writings of philosophers are, for the most part, as unsustainable as they are uninteresting.

I suggested above that there could be no conflict of belief within the mind of a single agent. But I think it’s undeniable that there are conflicts of desire within a single agent. This is possible because agents do not act to achieve conflicting goals at the same time. You can have the goal of giving up smoking every morning, and the goal of enjoying cigarettes every evening, but you can’t have them both at the same time. I hope this is obvious from the way an interpreter would manage to assign the content of these desires, which is all their content amounts to.

The necessity for time management – boring as it may sound – is in my opinion one of the keys to unlocking the secrets of consciousness, and such other apparent mysteries of the mind as conflict of desires. Just consider the way men and women are equally passionate and committed lovers, yet set “hurdles” for each other at different points along the course of life (typically, women are slower to agree to first-time sexual intercourse than men, and men are slower to agree to first-time parenthood than women).

As Shirley Bassey sang: “Love you hate you love you hate you till the world stops turning”, which I think says more about conflict of desire and time management than anything any philosopher has said (including me).

Rudeness: an apology

I love disagreement, because it’s the lifeblood of science and philosophy, and of much decent politics. But if there’s one thing I can’t stand, it’s polite disagreement. The urge to be polite goes hand-in-hand with discomfort at disagreement, and that discomfort is inimical to both philosophy and science. When we are polite, we tend to disguise the extent of our differences of opinion. In doing so we make “moving targets” of our opinions, we Bambify them, we tart them up, instead of presenting them as clearly and as starkly as we can, perhaps even exaggerating or simplifying them for clarity and effect. In that way, we relinquish the pursuit of truth.

Polishing and softening ideas is a bad habit on the part of speakers, but it gives rise to an even worse malaise on the part of their listeners. Politeness fosters harmful expectations – that people will not be offended by what they hear. If we expect not to be offended, we will find the inevitable occasions when we are offended a greater discomfort. And we will try to avoid them. We will mix with “people of like minds”. We will follow on Twitter only those who give us the warm feeling of agreement. We will never hear moral opinions opposed to our own, because it is what we disapprove of that we find “offensive” above all else.

Teachers are in a position of power, of course, and must be careful not to insult their students, as such insults can be threatening or inhibiting, and thus harmful. But students need not observe symmetrical niceties towards their teachers. If they do, discussion becomes leaden and “mannered” rather than free. Everyone must think carefully before they speak lest they “say the wrong thing”. Thinking aloud is not allowed.

That is no way to do philosophy.

Worst of all, when people are obliged to vet their utterances, to care so very much about the effect their words are having on others, there is no playfulness. And playfulness, frivolity, teasing, silliness, even childishness are the lifeblood of creativity.

We live in a prissy age, which rates polish above straightforwardness and sophistication above silliness. Would you kindly cut it out, you self-important morons?

Talking past one another

Very few humans actively strive to do the wrong thing, or to have false beliefs. Most of us do what we sincerely think is right, and believe what we sincerely think the evidence points to. The trouble is, what we think is right and what we think the evidence points to are often different from what others think is right or what they think the evidence points to. We are always enlightened “by our own lights”.

So when moralists say that people doing bad things must be acting out of “self-interest”, say, or when scientists say that people who have mistaken views should adopt “evidence-based” theory instead, they reveal that they are in the grip of a strange sort of parochialism. They seem not to acknowledge the mere existence of alternative opinions to their own. It isn’t simply that they think their own opinions are right and that alternative opinions are mistaken – we all do that – rather, they assume that there simply are no such alternative opinions. Hence those who act in ways they disapprove of are acting out of “self-interest” – self-interest conveniently being an entirely non-moral motive rather than a more troubling one inspired by a different moral theory from their own. Or again, those who believe non-approved opinions aren’t simply interpreting the evidence in a different way or counting different facts as evidence – they are ignoring evidence altogether, according to this way of thinking. Thus people who disagree with the approved opinion are not simply opponents, but “deniers” who must be scoundrels (or may be suffering from a debilitating mental illness).

This sort of parochialism would be a venial sin if those who committed it could correct themselves and move on. But that cannot happen – this is an unusually immovable vice, because it has its own incorrigibility built into it. In effect, it is designed not to allow correction. Why? – We engage with people who disagree with us, whose opinions are opposed to our own, but none of us could be bothered to engage with people who simply don’t have an opinion at all. Anything they say can be waved away as an irrelevance, because they have changed the subject. If what masquerades as an alternative opinion has nothing to do with evidence, then it isn’t an opinion at all, just a rude noise made by an insincere scoundrel.

For example, in disputes over moral questions, Kantians and utilitarians speak different languages because they appeal to very different basic principles. Too often, one side assumes the other has so misconceived morality that it “doesn’t have a moral view at all”, so that all its verbiage is an expression of something else. If so, anything said in its defence is irrelevant and can safely be ignored. During my own brief career as an academic, I saw far too much of this very unbecoming and lazy habit of thought. It is a discredit to anyone who professes to be a thinker.

This sort of dispute doesn’t just engage philosophers. Both sides of practically every political dispute – from the Falklands to Northern Ireland to Israel – are entirely sincere, but talk right past one another because each assumes the other side is not sincere.

In discussing science, Baconian inductivists and hypothetico-deductivists speak different languages, because they count different sorts of facts as evidence, and different practices as scientific. It isn’t that one side is “evidence-based” and the other isn’t, but rather what counts as evidence to one side doesn’t count as evidence at all to the other. (For example, Newton rejected hypotheses, or so he thought, and Popper rejected induction, or so he thought.)

Thomas Kuhn saw all this decades ago, not in the philosophical discussion of science but in science itself – proponents of an old “paradigm” die out rather than convert to a new paradigm. They cannot convert, because the new way of thinking is so alien to them that the meaning of the terms they use – and even what they count as evidence – is different.

Personally, I find it very odd that anyone who has understood Kuhn’s The Structure of Scientific Revolutions could use the word ‘denier’ or ‘denialist’. I want to ask: Have you learned nothing from the mistakes of others?

The Monty Hall game

In a US TV game show hosted by Monty Hall, contestants are asked to choose one of three closed doors. The winning door has a car behind it, which the lucky winner can drive home. The contestant starts by choosing a door at random. But he is not allowed to open it just yet. First, Monty eliminates one of the remaining two doors by opening it, always to reveal that it does not have a car behind it. (It has a goat instead, apparently.) The interesting bit comes when Monty offers the contestant the opportunity to switch from the door he has already chosen to the remaining unopened door.

Many contestants do not switch, but it is easy to see why they should. Suppose I ask you to try to pick the Ace of Spades from a deck of playing cards whose faces are hidden. You draw your card at random, and place it face-down in front of you. Then I systematically go through the rest of the deck, looking at the face of each in turn, saying “that’s not it”, “not that one”, and so on, eventually discarding all but one.

At this point, you and I have a single card each. In this new situation, would you swap your card for mine if offered the chance? – I think it’s pretty obvious that you should, because repeated plays of this game would end up with my card being the Ace of Spades about 98% of the time (i.e. in roughly 51 out of 52 attempts). The Monty Hall game is structurally similar – all that differs are the numbers involved (the win/loss ratio is 1:2 instead of 1:51).
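If the structural similarity is not convincing, the relative frequencies can be checked directly. Here is a minimal simulation sketch (my own illustration, not part of the original text) covering both versions of the game:

```python
import random

def play(n_doors: int, switch: bool) -> bool:
    """One round of the game generalised to n_doors. The host opens
    every door except the contestant's first pick and one other,
    never revealing the winner. Returns True if the contestant wins."""
    winner = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    if switch:
        # The single unopened door left over is the winner unless
        # the first pick already was the winner.
        return choice != winner
    return choice == winner

trials = 100_000
for n_doors in (3, 52):  # 3 = Monty Hall, 52 = the card-deck version
    for switch in (False, True):
        wins = sum(play(n_doors, switch) for _ in range(trials))
        print(f"{n_doors} doors, switch={switch}: {wins / trials:.1%} wins")
```

Sticking wins about a third of the time with three doors and about 2% of the time with fifty-two; switching wins the rest.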

To me, the interesting question is why so many contestants don’t switch. Most of those who don’t switch assume that the probability of their door being the winner is a sort of “property” of the door, something like the “potential for a car being behind it”. It’s as if they suppose the potential presence of the car behind it is “attached” to the door in a ghostly sort of way, so that it cannot change when Monty Hall opens some other doors.

I think these contestants are anxious to avoid treating probability as if it had a “memory”, rightly recognising as erroneous the gambler’s fallacy of supposing that a run of bad luck increases the hope of some good luck to “balance” the account. So they feel the probability of their own chosen door being the winner must be independent of or insulated from what goes on elsewhere, before or after. That is an understandable error. But it is still an error, and we can learn from it.

The first lesson is that probability is not a property of any thing or type of event in isolation – it depends on the context. In the Monty Hall game, the probability of a door being a winner changes as new events unfold – in contrast to its colour, say.

The second lesson is that there are (at least) two concepts of probability, one of which is entirely “objective”, despite its dependence on the context. This objective sense of probability is relative frequency. In the long run, those who do switch choose winning doors twice as often as those who don’t switch. This is a fact about a numerical proportion that does not differ from one individual to the next. It is a fact about the world rather than something whose reality depends on the mind of any contestant (in the way beauty exists in the mind of the beholder).

There is another concept of probability, which we might label “subjective” because it does differ from one individual to the next, and it does depend on their minds. This is the traditional sense in which an idea or proposition is “probable” when it ought to be believed. Personally, I doubt very much whether we could ever have a numerical measure of how much an idea ought to be believed, because it depends so much on the other things that an individual already believes. For example, in the Monty Hall game the contestant doesn’t know which door the car lies behind, but the stage hands who put the car there beforehand do know. At the very least, they are able to say which door is the winner with much greater confidence.

The third lesson we can learn from the error of not switching is that these two concepts of probability are often confounded. It should be obvious that one applies to things such as repeated events that are neither true nor false, while the other applies to ideas and propositions, which are true or false. One applies to things in the world, while the other applies to representations of the world that exist in language or in the head of a believer.

Yet the two concepts are often confused. For example, philosophers sometimes talk about the “frequentist interpretation” of probability, as if several rival interpretations can be given of a single thing. But clearly, there is more than a single thing here – we have just seen that there is the relative frequency of events, and there is the credibility of ideas. And there may be more besides, once we take account of “the calculus of chances” – the branch of mathematics concerned with permutations and combinations.

I think that in the Monty Hall game, some reluctance to switch is driven by “doxastic inertia” – stolidly continuing with an opinion rather than changing one’s mind in the absence of any compelling reason to do so. The random choosing of the first door is not a decent reason for thinking the car is behind it, but nor is the opening of the second door anything like a convincing reason for thinking that the car is behind the third door. So there are no compelling reasons for belief or mind-changing here. So the contestant doesn’t change his mind.

But really, he shouldn’t be concerned with how much he is entitled to believe anything. He should be thinking instead about the best strategy for winning the game. In other words, he should be thinking about objective relative frequency.

Perhaps the most insidious effect of confusion between our two concepts of probability is the way it spawns further confusion in related fields. For example, in 1948 Claude Shannon developed a mathematical formalism for measuring the way events at two separate locations co-vary with each other. This was important for the then rapidly expanding telephone system. When events co-vary in a statistically reliable way, the occurrence of events at one location can work as “indicators” of events at the other location. This is a statistical matter of relative frequency, in other words of probability understood in the “objective” sense. It is not a matter of probability in its “subjective” or mental sense of “how much a proposition ought to be believed”. When Shannon used the word ‘information’ for reliable co-variation, he cautioned his readers not to construe it in its familiar cognitive sense of “potential knowledge”. Alas, his warning has gone largely unheeded, and confusion of the two wreaks conceptual havoc wherever discussion of thermodynamics goes off the rails and on to ideas of “order” and “disorder”, or dubiously extended senses of “entropy”.
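For what it’s worth, the “objective” character of Shannon’s notion is easy to exhibit: his measure of co-variation between two locations (mutual information) can be computed from relative frequencies alone, with no mention of belief. A minimal sketch (my own illustration):

```python
from math import log2

def mutual_information(joint):
    """Shannon's measure of the co-variation between events at two
    locations, in bits. `joint` maps (x, y) pairs to their relative
    frequencies. Nothing here refers to belief or knowledge: only to
    how often events co-occur."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly reliable co-variation: one full bit of Shannon "information".
print(mutual_information({("on", "on"): 0.5, ("off", "off"): 0.5}))  # 1.0
# Statistically independent events: zero co-variation, zero bits.
print(mutual_information({("on", "on"): 0.25, ("on", "off"): 0.25,
                          ("off", "on"): 0.25, ("off", "off"): 0.25}))  # 0.0
```

The measure comes out the same whatever anyone believes about the events, which is exactly the point.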

Or again, sometimes scientists claim to be “90% certain” or an even more impressively precise-looking “95% certain” that a theory is true. The combination of the word ‘certainty’ and a supposed numerical measure of that certainty should set off an alarm-bell. It is a sure sign of confusion or, worse, intellectual dishonesty.

Scientists are far from immune to either. Like anyone in society accorded unusual levels of respect, they are rarely challenged: it is widely considered bad-mannered or foolish to question their judgement. This is a disaster waiting to happen – a disaster that has already happened in many countries with Catholic priests.