Some things have “meaning”, and some things don’t. Bits of language such as sentences and words “mean” things, and various mental states such as desires and beliefs have “content”, which amounts to pretty much the same thing. But most things don’t have “meaning”, and among these, we are liable to mistakenly think some of them do have it even though they don’t. For example, any question about “the meaning of life” is surely misguided. We are born, we hope to achieve this or that during the course of our lives, and then we die. Our hopes have “meaning”, but our entire lives do not.
To see “meaning” where there is none – or to see more of it than there really is – is a very common human weakness. “Primitive” societies see agency where there is none – such as gods, ghosts, or spirits in the rivers and forests – and where there is agency there is purpose, which is a sort of meaning. “Non-primitive” societies do the very same. James Lovelock sees agency in the Earth’s “ecosystem”. Physicists see semantic “information” where there is nothing more than co-variation. Even a sensible chap like myself sometimes has to remind himself that there is no malicious agency in packaging that defies my efforts to open it.
We see too much “meaning” in things. There is less in heaven and earth than is dreamt of in our philosophy.
I will cease putting the word ‘meaning’ in scare-quotes from here on, as it is getting tiresome. But in what follows you should read the word as if these warning-signs (and sneering-signs) were still there. The use of a single word for a wide range of complicated and fuzzy relations has a dangerous ability to bewitch our intelligence.
Something has meaning when it points towards, stands for, symbolizes, or more generally represents something else, somehow or other. This can be achieved in a wide variety of ways. A primitive type of meaning can be seen in a colour swatch that exemplifies a colour. It can get much more sophisticated, as when a scientific law describes what would happen if an imaginary (i.e. “counterfactual”) state of affairs were realized. There is denotation (e.g. the name ‘Jeremy’ stands for me), and there is connotation (e.g. the words ‘grassy knoll’ may remind you of the Kennedy assassination, but don’t directly refer to it). And there’s other stuff. Many other word-world relations are possible, all of which involve some variety or other of meaning.
Philosophers are mostly interested in the way words refer to things, and in the way sentences are true or false of states of affairs. Mental entities such as concepts and mental states such as beliefs and desires mirror the representational capacities of these items of language.
Until fairly recently, it was assumed that meanings were determined by our inner experiences. This fits in with traditional ideas in epistemology: we have distinctive experiences, which are the magic link, supposedly, between our thoughts and things outside our heads. So according to this traditional (“Augustinian”) view, meaning essentially involves naming these external things using words associated with the internal experiences.
The “inner experience” idea of meaning began to crumble in the nineteenth century. For example, Freud thought that some of our behaviour (such as slips of the tongue) and thought-like processes (such as dreaming) have a meaning that is not determined by inner experience, either because they don’t involve conscious experiences at all, or else because the experiences involved are superficial and hide a deeper meaning.
Freud’s ideas about the meaning of slips of the tongue and other behavioural “parapraxes” were part of a larger movement away from the traditional Cartesian focus on inner experience, towards pragmatism, which is focused instead on external behaviour. Almost all of the great thinkers of the early twentieth century took this pragmatic turn, from the leaders of the American Pragmatist movement to Heidegger.
Wittgenstein famously repudiated some of his earlier work by changing his mind about meaning. His earlier view was mistaken, not only because it assumed inner experiences determined “meaning”, but also because its “atomism” assumed that the “primary vehicle of meaning” was the word. His later, corrected view is expressed in slogans like “meaning is use”. In this newer pragmatism, there is a complex interplay between sentences and words, but if anything could be called the primary vehicle of meaning now, it is no longer the word but the sentence. The meaning of words is determined by sentences whose truth-value can be readily agreed upon. Only after the reference of words has been fixed – by the semantically important sentences they occur in – can they be re-combined to construct new sentences, some of which can be false. There is a clear asymmetry between truth and falsity here that may seem jarring to some. But I would argue that that is a symptom of not yet having taken the crucial step away from the “inner experience” of Descartes towards pragmatism. It is quite remarkable how persistent the traditional way of thinking has been, despite its rejection by so many great philosophers.
We can see how meaning is determined by use in the rudimentary sentence-like noises made by animals. For example, some birds make a distinctive sound (like the blackbird’s scolding “pink” sound) when a cat is in the garden. The noise they make is recognized as a warning by other birds, which fly up whenever they hear it. To the extent that it is reliably correlated with the actual presence of a cat, the noise is true when a cat actually is in the garden. In bearing a truth-value, the entire noise is analogous to a declarative sentence in a human language. Part of the noise (quite possibly the entire noise) specifically refers to a cat, and so functions like a word.
What it refers to is far from immutable: if birds utter the same sound whenever a sparrow-hawk or a cat is in the garden, the reference is not specifically to cats, but to members of the broader category of sparrow-hawks-and-cats. And it may refer to a great many things we haven’t thought of yet, such as lifelike robotic cats, or four-legged scarecrows, or what have you.
This pragmatic way of understanding meaning makes it quite a bit fuzzier than it seemed before. The fuzziness of reference is especially troubling if we are used to thinking of words as if they work like names, or like name tags attached to objects by invisible pieces of string. With our new awareness, we see that words are attached to things only within the context of the ways they are used. Inasmuch as use is messy, reference is too. That indeterminacy does not make language use impossible, obviously, but it does shake the earlier conception of language to its foundations.
Let us take stock by noting a few consequences of this way of thinking. First, although nothing is certain, and it can be very hard to test the truth of theoretical claims made in science, truth itself just is an everyday fact of life. We must routinely utter truths for words to refer to things, and even to enable us to utter falsehoods.
For example, let us return to our garden birds. One clever bird might notice that uttering the cat-in-the-garden sound has the convenient effect of emptying the garden of other birds. So when food is put on the bird-table, he utters the sound, and is rewarded by getting more food for himself. This avian equivalent of crying wolf can be over-used. If it became the routine sound uttered when food is put out, it would no longer mean ‘there’s a cat in the garden’ but ‘there’s food on the table’. The new reference to food rather than cats is determined by the requirement that most of the time, most of what is uttered is true, and indeed is recognized as being true by most of those who use the utterance (by uttering it themselves, or hearing it uttered by others and reacting appropriately).
A second consequence of the pragmatic way of thinking is that language is necessarily public. Sadly, this important fact tends to lie buried beneath the verbiage generated by Wittgenstein’s “private language argument”. We can see how obvious it is by considering the “inverted spectrum” thought experiment. Imagine someone who sees colours in reverse, in other words, someone who when looking at blue objects has experiences of the sort that normal people have when looking at yellow objects, and so on. Such a person would have to have an abnormally-wired brain, but this would not be revealed in his use of language. His reports of colour would be the same as everyone else’s – he would still call tomatoes “red” and bananas “yellow” and so on. Such words describe objective features of these public objects’ surfaces rather than his own inner experiences. They have to refer to public objects, because otherwise the truths that fix their reference could not be publicly affirmed.
A third consequence of pragmatism is that in general, meaning or content is determined by interpretation. Things that have meaning are just things that meaning can be assigned to in a more or less rigidly constrained way by someone trying to make as much sense as possible of them. The constraint involved might be something as simple as reliable correlation. For example, the noise the birds make means what it does because it is correlated in a reliable way with whichever state of affairs makes it true, and this correlation constrains what an informed interpreter could assign as content. But with more abstract sorts of content, the constraints can get more complicated. In formal sciences, these constraints often take the form of definitions, which work like laws. But there are no such laws in everyday human discourse – the definitions found in dictionaries describe how words are actually used, rather than fix their meaning independently of use.
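The idea that reliable correlation constrains what an interpreter can assign as content can be sketched in a few lines of code. This is only a toy model, and everything in it (the candidate categories, the uniform distribution of world-states, the agreement score) is my own invention for illustration, but it shows how an interpreter who merely tallies co-variation would be pushed towards the disjunctive category of sparrow-hawks-or-cats rather than cats alone:

```python
import random

# Toy model: content is whatever best co-varies with the utterance.
# Categories, states, and probabilities are invented for the sketch.
random.seed(0)

CANDIDATES = ["cat", "sparrow-hawk", "cat-or-sparrow-hawk", "food"]

def observe(n=1000):
    """Generate (utterance_made, world_state) pairs: the birds utter
    the alarm whenever a cat OR a sparrow-hawk is present."""
    log = []
    for _ in range(n):
        state = random.choice(["cat", "sparrow-hawk", "nothing", "food"])
        uttered = state in ("cat", "sparrow-hawk")
        log.append((uttered, state))
    return log

def agreement(candidate, log):
    """Fraction of observations where the utterance and the candidate
    state of affairs agree (both present or both absent)."""
    def satisfies(state):
        if candidate == "cat-or-sparrow-hawk":
            return state in ("cat", "sparrow-hawk")
        return state == candidate
    agree = sum(1 for uttered, state in log if uttered == satisfies(state))
    return agree / len(log)

log = observe()
best = max(CANDIDATES, key=lambda c: agreement(c, log))
print(best)  # the disjunctive category wins: cat-or-sparrow-hawk
```

The narrow candidate ‘cat’ disagrees with the utterance whenever a sparrow-hawk triggers it, so the broader category scores higher, which is just the point made above about the fuzziness of reference.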
The bird noise I am using as a rudimentary example of “things that have meaning” is stolen from Quine, who wrote of a tribe whose members say ‘gavagai’ when a rabbit crosses their path. I prefer my own example of the bird noise, because it is tempting to assume tribe members know something we don’t – that their language contains details we haven’t noticed yet, so that their utterances mean more than what we have so far been able to interpret. With birds, there is less of a temptation to think there is anything more to what their utterances mean than what they should be interpreted as meaning.
And that is where the “limits” of meaning are to be found – there is no more detail in meaning than what an informed interpreter would assign as meaning. This is every bit as true of mental content as it is of linguistic meaning.
Mental content basically consists of beliefs and desires. The other “intentional” mental states (i.e. those that “point” to states of affairs) such as hopes and fears can be analysed in terms of their belief and desire components. We ascribe, describe and even individuate these states using “embedded sentences”. For example, at one point Frodo believed that Gandalf was dead. The sentence ‘Gandalf is dead’ expresses the content of Frodo’s belief. But this embedded sentence schema can be misleading, because it may suggest that beliefs and desires are themselves “sentences written in the head”. Thus having a belief would involve having a sentence in the “this is true” register somewhere in the brain, and having a desire would involve having a sentence in the “would that this were true” register somewhere else in the brain.
Although this idea has its supporters, I think it’s pretty obvious that it can’t be right. Many kinds of animal clearly have beliefs and desires. If brains had to handle linguistic entities merely to accommodate those mental states, why have so few animals gone the whole human and started to talk into the bargain? It would be a small extra step for each animal, but a giant advantageous leap in evolutionary terms.
The idea that thought is essentially linguistic activity in the brain might be completely empty. We may be free to call the patterns our brains “manipulate” the “symbols” of a “brain language”, but this would be a very different sort of language from any we are familiar with. It is not used for communication, and we cannot easily discern or individuate its “symbols”. Why call it a “language” at all, and how much does it explain to do so?
Here’s a bad place where it may lead us: if we assume thought and language are too closely connected, we are liable to see as much detail – as fine a grain, if you prefer – in the mental states we describe as is in the language we use to describe them. There is often more detail in language than in the mental states it describes. If so, the extra detail is artefactual and therefore misleading. We have already seen how a bird noise might apply broadly to sparrow-hawks-or-cats, while our human linguistic description of what it applies to narrows it down more specifically to cats only. Like the mistake we met earlier of thinking life itself can have a meaning, this is to see more meaning than is really there. As language-using humans, we have to entertain alternative hypotheses and consider rival theories, which are usually expressed linguistically. But the beliefs we end up with generally do not have content of so fine a grain.
Daniel Dennett had a much better idea. He noted that whenever we describe, explain, or predict anything, we adopt a “strategy”. The strategy we adopt when describing the behaviour of an agent is to posit goals, combined with information-bearing states that co-vary with the world in which these goals are pursued. These are rudimentary analogues of desires and beliefs respectively. For example, a thermostat has the “goal” of keeping a room at a steady temperature, and it opens or closes a circuit to turn on a heater depending on whether the room is warm enough or not warm enough. The content of these states is assigned through interpretation of its behaviour as an agent. That interpretation involves assuming at the outset that the agent’s “beliefs” (or rudimentary analogues thereof) are all true, that he is fully rational in his pursuit of his goals, and so on, reluctantly lowering our expectations as we “work our way into the system” and are compelled to assign the occasional false belief to maximize consistency.
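The thermostat example can be made concrete in a short sketch. The class, the 20-degree setpoint, and the wording of the interpreter’s report are all my own inventions, not anything Dennett specified; the point is only to show the stance at work, with a “goal” (the setpoint), an information-bearing state that co-varies with the room (the “belief”), and behaviour rationalized by the two together:

```python
# A minimal sketch of the intentional stance applied to a thermostat.
# Class, setpoint, and report wording are invented for illustration.

class Thermostat:
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint          # the "goal": room at 20 degrees
        self.too_cold = False
        self.circuit_closed = False

    def sense(self, room_temp):
        # The "belief": a state that co-varies with the actual room.
        self.too_cold = room_temp < self.setpoint
        # Acting on the goal in the light of the "belief".
        self.circuit_closed = self.too_cold

def interpret(agent, room_temp):
    """Describe the device from the intentional stance: posit a goal
    and a belief that jointly rationalize its behaviour."""
    agent.sense(room_temp)
    belief = "the room is too cold" if agent.too_cold else "the room is warm enough"
    action = "closes the circuit" if agent.circuit_closed else "opens the circuit"
    return (f"It believes {belief}, wants the room at "
            f"{agent.setpoint} degrees, so it {action}.")

print(interpret(Thermostat(), 18.0))
print(interpret(Thermostat(), 22.0))
```

Nothing inside the device answers to the words ‘belief’ or ‘goal’; the content lives entirely in the interpreter’s description, which is the point of the stance.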
Thermostats are ultra-simple agents, if they count as agents at all, but there is a smooth scale of complexity. A cruise missile has a target, and uses a video camera and an onboard computer map to keep track of its own position as it approaches the target. The detail of the content involved gets richer as we move up this scale. The fineness of its grain increases, if you like.
But it never becomes richer or more fine-grained than what can be assigned by an informed interpreter. This has profound implications for many human practices, including philosophy. For example, consider a man who looks at the sky and says, “oh dear – I think it’s going to snow!” He then proceeds to put on Wellington boots and a raincoat, grab an umbrella, and so on. If his behaviour is consistent with someone who thinks it is going to rain rather than someone who thinks it is going to snow, interpretation leads us to assign the belief that it is going to rain, together with the hypothesis that he has misunderstood the word ‘snow’, or that maybe he is trying to mislead us. We do not assign both the belief that it is going to rain and the belief that it is going to snow, because these are inconsistent. To be a little more precise, to the extent that they are inconsistent, we cannot ascribe them both.
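The rain/snow case can be put as a toy “radical interpreter” that assigns whichever candidate belief best rationalizes the observed behaviour. The lists of actions and the overlap score are invented for the sketch; a real interpreter weighs far more than this, but the shape of the procedure is the same:

```python
# Toy interpreter: assign the belief that best fits the behaviour.
# The action sets and the scoring rule are invented for illustration.

observed_actions = {"wellington boots", "raincoat", "umbrella"}

# What each candidate belief would lead the agent to do.
expected = {
    "it is going to rain": {"wellington boots", "raincoat", "umbrella"},
    "it is going to snow": {"snow boots", "heavy coat", "woolly hat"},
}

def fit(belief):
    """Overlap between predicted and observed behaviour."""
    return len(expected[belief] & observed_actions)

assigned = max(expected, key=fit)
print(assigned)  # it is going to rain, despite his saying 'snow'
```

The procedure outputs exactly one belief, never a contradictory pair, which is a miniature of the point made in the next paragraph: an interpreter has no way to assign inconsistent beliefs to the same agent.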
It is hard to see how an interpreter could assign inconsistent beliefs to the same agent. But suppose, as Donald Davidson said, that there is nothing more to mental content than what a fully informed interpreter would assign as content. Then it is very hard to see how an agent could actually have inconsistent beliefs. Yet much philosophy assumes that the main purpose of logic is to remove inconsistencies in our belief system. Why make efforts to remove inconsistencies if they cannot be there in the first place?
That is not to say that logic serves no useful purpose. I would say its principal purpose is to draw out the logical consequences of hypotheses (including the axioms of any branch of mathematics). In that way, the hypotheses of science and everyday life can be tested against observation. Often, observation casts doubt on our hypotheses. We can certainly have false beliefs galore, although true beliefs are the “default”.
Many other human practices involve efforts to discern meaning, from a jury trying to determine the motive for a crime, and an art critic trying to understand a work of art, to a psychoanalyst trying to uncover unconscious mental states or internal conflicts. Some of these efforts may be an over-interpretation of the subject matter. Anish Kapoor’s “Orbit” tower sculpture might be nothing more than an interestingly convoluted shape.
Personally, I think Freud had some valuable insights, but I think much or most of his writing does not survive the criticism that he over-interpreted the contents of the mind. In some forms of behaviour – passive aggression, carelessness, etc. – there are indeed cues enough to assign modest mental content such as “she doesn’t like me” or “the criminal must have sort-of-wanted to get caught”. But ambitiously ascribing a much more complicated, convoluted substructure of unconscious beliefs and desires to an agent is unwarranted, because an interpreter assigning such content would have to do so in an unconstrained way, or in a way that was constrained by something irrelevant, such as the ideology of a school of thought rather than the behaviour of the agent.
I would aim a somewhat similar criticism at the writings and talk of many other specialized disciplines in the humanities, including philosophy. Much of the detail is assigned not through honest interpretation with an eye to the world, but rather with an eye to the affirmation of a group of like-minded specialists. The meaning of what is said or written is not checked against the world in which what is said is mostly true, but against a social milieu in which experts agree or disagree. These experts speak a “semi-detached” rather than a genuinely public language. In such a language, much of the detail comes from shared ideology rather than factual aspects of the world. Typically, the technicality of this sort of language is artefactual and therefore unwarranted. To paraphrase AJ Ayer: the technical writings of philosophers are, for the most part, as unsustainable as they are uninteresting.
I suggested above that there could be no conflict of belief within the mind of a single agent. But I think it’s undeniable that there are conflicts of desire within a single agent. This is possible because agents do not act to achieve conflicting goals at the same time. You can have the goal of giving up smoking every morning, and the goal of enjoying cigarettes every evening, but you can’t have them both at the same time. I hope this is obvious from the way an interpreter would manage to assign the content of these desires, which is all their content amounts to.
The necessity for time management – boring as it may sound – is in my opinion one of the keys to unlocking the secrets of consciousness, and such other apparent mysteries of the mind as conflict of desires. Just consider the way men and women are equally passionate and committed lovers, yet set “hurdles” for each other at different points along the course of life (typically, women are slower to agree to first-time sexual intercourse than men, and men are slower to agree to first-time parenthood than women).
As Shirley Bassey sang: “Love you hate you love you hate you till the world stops turning”, which I think says more about conflict of desire and time management than anything any philosopher has said (including me).