Why holists distrust expert opinion

The “default” way of thinking about evidence is often called foundationalism. Foundationalists think that most of our everyday beliefs about the world are justified by “resting on a foundation” of privileged or more certain beliefs—typically, beliefs about conscious experience, raw feels, or “sense data”. In science, foundationalists typically suppose that a theory in a specialised field is a sort of edifice that is justified by resting on the carefully collected observational “data” of that specific field. This idea is partly inspired by mathematics, in which theorems really do rest on (i.e. are derivable from and implied by) axioms. The question is, should we take mathematics as our model of empirical knowledge?

Opposed to foundationalism is holism. Holists think that everyday beliefs are justified by belonging to a larger belief system. Individual beliefs do not stand or fall on their own, but meet the evidence as a whole, and it’s the way that whole “hangs together” that justifies the entire system. In science, holists typically suppose that theories consist of hypotheses, which are justified by meshing smoothly with other hypotheses, often from disparate fields. This is a matter of how much a theory explains, how reliably it predicts unforeseen observable events, how “virtuous” it seems when we consider its conservatism, modesty, simplicity, generality, fecundity, and so on. This is nearly always an intuitive matter of giving conflicting virtues various weightings, guided by little better than “how it feels” pragmatically.

For example, a holist would judge Freud’s theory by asking how much it seems to explain—how well it meshes with evolutionary theory, with other philosophical ideas about agency, with what ordinary people can see for themselves of undercurrents and tensions in family life, with the various insights that art can give us about ourselves, and much else besides.

A telling difference between foundationalists and holists is in their respective attitudes to specialist or “expert opinion” (by which I don’t mean the pragmatic know-how of a mechanic, but rather narrow theoretical claims made in advanced disciplines). The foundationalist tends to trust expert opinions, because he sees them as the product of skilled minds’ rare ability to trace specialised claims back to their specialised foundations, rather as an actuary can draw specific conclusions about a company’s finances from its specific account books.

The holist tends to distrust expert opinions. He will remind us that we can more reliably form opinions about the simple, familiar, observable, concrete and everyday than we can about the complicated, unfamiliar, unobservable, abstract or unusual. Most importantly, the holist is aware that claims made in specialised disciplines are typically hypotheses rather than the conclusions of arguments. No “data” implies them. If anything, it’s the other way round: hypotheses imply as-yet unseen events that observation can later confirm or deny. To the holist, the broad experience of a reasonably well-educated layman is better than the specialised training of an expert.

Holism has been around for well over a century. It has some well-known academic proponents such as Quine and Davidson. Yet foundationalism remains the default position among academics. Most of them despise hypotheses—mere “guessing”, as many would put it—and encourage their students to “provide arguments” instead of explaining why this or that hypothesis explains or predicts things better than its rivals. I think this is a tragedy.

What are ‘qualia’?

The word ‘qualia’ seems to be entering everyday usage. (It’s a plural — the singular is ‘quale’.) A quale is a distinctive sort of conscious experience, such as the subjective experience of blue (i.e. what we consciously experience when we are actually looking at a clear cloudless sky, or dreaming about swimming in the Aegean, etc.). How might qualia be explained from the perspective of evolutionary theory?

The really mysterious thing about qualia is this. The nerve endings send “signals” to the brain via the sensory neurons, like messages along telephone wires, and the brain reacts appropriately by sending “signals” back along the motor neurons to the muscles. Although there is an obvious need for the nerves to work like telephone wires, there doesn’t seem to be any obvious need for conscious experience to enter the picture at all. And yet, the life of a conscious creature is a riot of subjective experiences — distinctive colors, various subjective feelings such as hunger and pain, and so on. Why?

Here’s a very quick answer:

All living creatures are programmed to seek goals such as food, reproduction, safety, etc. Having an internal “map” of the outside world helps animals to achieve these goals. This internal map is a belief system. It works like the onboard computer map in a cruise missile, which looks at the terrain below and guides it towards its target. Of course, a cruise missile has just one goal and a very limited sort of map, but the basic idea is the same.

Programming a cruise missile is no doubt complicated, but maintaining a belief system is even more complicated: it calls for a lot of self-regulation. A belief system needs to perceive situations in the outside world, naturally, but it must also make choices, delay the achievement of some goals in favor of others, discard some beliefs when other beliefs are more likely to be true, and so on.

All of that entails having a higher-level “map”. This is more than just a “map” of the outside world like a cruise missile — it’s a “map” for perceiving one’s own internal states, and one’s overall position in the world. For example, discarding one belief in favor of another belief involves having second-level beliefs about which first-level beliefs are more likely to be true than others.

We are now in a position to ask: what is consciousness? Answer: consciousness is constantly-updated knowledge of our own states — and it mostly consists of higher-level states like the ones just mentioned.

For example, consider reaction to injury. A creature that does not have any such higher-level states (and is therefore not conscious) might have a simple defense mechanism that makes it recoil defensively when injured. But a creature with a higher-level “map” of its own states would be able to choose between “carrying on regardless” if the injury is not too serious and stopping to nurse the wound if it is. The seriousness of the injury depends on the circumstances. If the creature is running away from a predator, it should keep running at all costs. If the creature has to suffer no worse a fate than going without a meal, it should stop and rest. Unless it is in danger of starving to death, in which case it shouldn’t.

The decision-making capacity of these second-level states is a bit like the decision-making capacity of a political assembly. Each of the members wants what’s best for his own constituents, but the decisions of the whole are taken in the interests of the whole. This is achieved when the representations made by each member have their own distinctive character and degree of insistence.

For example, having a distinctive sort of pain is normally the same thing as having an injury in concert with an internal state that indicates the severity and location of the injury, so that the “assembly” of second-level states (i.e. consciousness) can make an informed decision about whether or not to override it.

As a first pass, above, I said that consciousness is constantly-updated knowledge of our own states. Now I will fine-tune that by saying that consciousness consists of second-level representations of first-level representations of states of the world outside our heads.

For example, suppose I burn my finger. That is a state of the world (my body) outside my head. The injured part of my finger sends a signal to my brain, which then forms a state that normally co-occurs with that sort of injury, and so works as an “indicator” of its presence. This is a first-level representation of such an injury. So far, consciousness has nothing to do with the process. But now my brain has to take account of my overall state, and make decisions based on the various indications that are available to it. Doing that involves perceiving various internal states — such as the first-level representation of the injury to the finger — and weighing them up in terms of their urgency, type, and so on. That involves forming a second-level representation of first-level representations. In a sense, each of the first-level representations has to “make a case” for itself by having distinctive qualities that demand more or less attention, this or that type of attention, and so on.
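The two-level architecture just described can be sketched in a few lines of code. This is purely illustrative: the labels and urgency numbers are mine, and nothing in the argument hangs on the details. First-level states indicate bodily or worldly conditions; a second-level “assembly” perceives those states and attends to whichever makes the strongest case.

```python
from dataclasses import dataclass

@dataclass
class FirstLevelState:
    """A state that indicates a bodily or environmental condition."""
    label: str
    urgency: float   # how insistently it demands attention
    welcome: bool    # pleasant or unpleasant

def second_level_decision(states):
    """A toy second-level 'assembly': perceive the first-level states
    and attend to whichever makes the most insistent case for itself."""
    return max(states, key=lambda s: s.urgency)

# Hypothetical first-level states competing for attention:
states = [
    FirstLevelState("burnt finger", urgency=0.8, welcome=False),
    FirstLevelState("mild hunger", urgency=0.3, welcome=False),
    FirstLevelState("blue sky overhead", urgency=0.1, welcome=True),
]
print(second_level_decision(states).label)  # -> burnt finger
```

The point of the sketch is only that deciding between first-level states requires representing them at a second level, with each state carrying a distinctive character and a degree of insistence.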

In order to be represented appropriately at the second level, a first-level representation has to be distinctive. That is why it “feels like something or other”. What these states feel like is a product of how they are physically realized, whether they are welcome or unwelcome, what sorts of decisions have to be made given their occurrence, and so on.

For example, the first-level states that occur with injury are realized in different ways depending on which part of the body is injured. Almost all of them are unpleasant, because almost all injury is unwelcome. Most of them are “insistent” because most of them require some sort of action, taken sooner rather than later.

Or again, consider the first-level states that typically occur with the presence of an (objectively) blue object. Blue objects are unusual in nature, so the second-level states that accompany them are very distinctive: they arouse curiosity, and so on. Mostly these states are pleasant because most blue objects are safe, and some are valuable in some way. The second-level state that accompanies the perception of a blue object (i.e. the “experience of blue”) is not an especially “insistent” sort of state because action is rarely needed in response to the presence of blue objects.

I hope it’s reasonably clear that having “qualia” is a “functional” business that can add significantly to reproductive success.

The left’s rejection of individualism

It’s remarkable how individualism has come to be identified with right-wing politics.

By “individualism” I mean: love of individuality; high regard for the freedom and welfare of individuals; respect for the interests of sentient beings rather than for non-sentient political abstractions; and the expectation that we can understand society by looking at how its constituent parts interact (just as we might understand how a car works by looking under the hood). The “constituent parts” of society are individual humans — humans with jobs or without them, with children or without them, in this or that situation, considered as individuals rather than as members of whichever group they happen to belong to (race, sex, whatever).

We might call the opposite of individualism “communitarianism”: love of “community” or the group rather than its members; high regard for group cohesion and group strength; respect for the supposed historical entitlements and culpabilities of groups; and the expectation that to understand society, we have to look at larger historical forces than the interaction of mere individuals.

Communitarians often say things like “Margaret Thatcher said ‘there’s no such thing as society’”, and treat it as the worst thing anyone has ever said, the purest expression yet of Thatcherite depravity. The idea seems to be that weaker individuals are protected by “the cohesion of society”, or “community caring”, or “social structures”, or something of that sort, and so people who “deny the very existence of society” must be prepared to abandon those weaker individuals by the wayside, or exploit them for gain.

But that’s nonsense. Individualists don’t deny the existence of society, they just see it the way engineers tend to see things — as consisting of constituent parts that interact with each other. Car mechanics don’t deny the existence of cars just because they think about engines, wheels and other car parts. The same goes for people who think about society in terms of the individuals who comprise it.

Furthermore, individualists don’t think that weak individuals don’t matter, or that they should be exploited for the benefit of strong individuals. People who care about individuals care about weaker individuals. The welfare of individuals is very much a matter of the welfare of weaker individuals, because one dollar (say) means much more to someone who’s got nothing than it means to someone who already has a million dollars. If we give the same consideration to the interests of weaker and stronger individuals, we accept that a unit of material wealth counts more to the weaker individuals than to the stronger individuals, because it’s a more critical factor to them in their need to live a decent life.

That concern is the basis for an entirely individualistic justification for wealth redistribution — or at least some wealth redistribution: if done reasonably, it hinders stronger individuals less than it helps weaker individuals. This has nothing to do with political abstractions such as “community”, nor does it involve treating “equality” per se as a desideratum. It’s just a concern for the welfare of individuals. And such concern doesn’t just apply to redistribution of wealth. It also applies to other factors that we might regard as essential for living a decent life: education, health care, personal safety.

Used in its strict sense, the word ‘liberal’ refers to people who regard freedom of individuals as the sole political good. So ‘liberalism’ means much the same as ‘individualism’ as I am using the word here. In the past, liberal-minded people were generally found on the left wing of the political spectrum, as they supported better working conditions, affordable education, and similar reforms, at the cost of raising taxes. Liberalism and the left were so closely associated that in its looser — often sneering — usage, the word ‘liberal’ often just meant “left wing”.

Yet nowadays the left distances itself from liberalism. The word ‘neoliberalism’ is used as an up-to-date synonym for ‘Thatcherism’. The left instead wallows in half-baked ideas of “community”, and similar abstractions. This is a tragedy, partly because it makes left-wing parties unelectable, and partly because by embracing mystical abstractions and historical fantasies about group entitlements, the left flirts with Fascism and vicious nonsense about group “destiny”.

Alcohol and the “politics of vanity”

Some political causes don’t get as much open support as they deserve because people don’t feel comfortable being seen to support them. One such cause is opposition to minimum alcohol pricing (or minimum unit pricing, MUP). The people it will affect most are excessive drinkers and the poor. No one wants to look like an excessive drinker, and no one wants to look poor, so opposition to MUP tends to be muted — it prompts knowing smirks that say “aha, I think we can guess why this guy is getting worked up about it”. On the other side are powerful politicians and highly-paid health professionals who want to be seen to be doing something about the current moral panic over excessive alcohol use in Ireland. These people do their self-image no harm at all by openly supporting MUP, and some do so with unbridled zeal.

Of Irish attitudes to alcohol, one of the most unhealthy is our tendency to see it as the “forbidden fruit”, so that all drinking is somehow illicit. (Traditionally, Catholic children were expected to “take the Pledge” — the majority who “busted out drinking” when they reached adulthood retained a sense of shame at having broken a promise.) This gives rise to a widespread sense that on one side we have moral crusaders doing their Good Works, while on the other side we have people who skulk around doing something they ought to be ashamed of. Self-image plays a crucial role for both sides here.

Burke used the phrase ‘politics of vanity’ for political decision-making guided by concern for “how it makes me look” — instead of taking account of circumstances and trying to anticipate consequences. He reserved the phrase for Rousseau, whom he detested both as a thinker and as a man. It seems to me that the politics of vanity is alive and well in Ireland, because all of the main political parties except Sinn Féin support MUP. If we put aside the politics of vanity, and instead take account of circumstances, trying to anticipate consequences, what can be said about MUP?

With MUP, if you drink 6 cans of Aldi’s own brand lager per week, you will pay €426.66 more in one year than you do already. But if you drink craft beer, say — even 60 bottles of craft beer every week — you won’t be affected financially at all. MUP is aimed at “the sort of people who drink Aldi lager” rather than “the sort of people who drink craft beer”. €426.66 is much larger than the water charge, which has people up in arms and protesting all around the country. Why are people not protesting about this outrageous attack on the poor?
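The arithmetic behind that figure is easy to check. Working backwards, €426.66 per year over 6 cans a week implies an increase of roughly €1.37 per can; the exact per-can figure here is my back-calculation, not a price quoted above.

```python
# Hypothetical per-can price increase implied by the article's annual
# figure (426.66 / 312 ≈ €1.37); the actual shop prices aren't quoted.
extra_per_can = 1.3675
cans_per_week = 6
weeks_per_year = 52

annual_extra = round(extra_per_can * cans_per_week * weeks_per_year, 2)
print(annual_extra)  # -> 426.66
```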

MUP is a serious infringement of freedom, and a selective one at that, because it discriminates against the weakest sections of society. It is nakedly unfair. Wealthier people tend to gloss over this unfairness with the vague thought that “people on the dole shouldn’t be drinking at all, as it isn’t a necessity”. But that is an exceptionally mean-spirited thought. Each individual’s needs differ. Most people need a social life, and whether we like it or not, the reality in Ireland is that social occasions are usually accompanied by alcohol. It might be as modest a social occasion as watching “the match” on TV and sharing a six-pack with a friend — to make it practically impossible for an unemployed person to do even that once a week is just plain vicious. It would threaten their sanity. We are not entitled to perform this sort of “social experiment” on our fellow humans.

Some will object, as Rousseau might, that excessive drinking diminishes the freedom of the excessive drinker, and so preventing poor people from doing what they want to do is actually doing them a favour. We are “forcing them to be free”, as Rousseau sinisterly put it. (Let us pass over the question whether this is monumental hypocrisy or Soviet-style doublethink.) Well then, if charging the poor more for alcohol is doing them a favour, charging the rich more for alcohol would be doing them a favour too. And not doing them this “favour” is harmful and discriminatory in reverse. Perhaps we should raise duty on alcohol across the board, so that the rich pay proportionally more for their chosen brands. This is Sinn Féin’s policy on alcohol, and although I don’t agree with it, it is undeniably less unfair than MUP.

No one doubts that the more alcohol costs, the less of it tends to be consumed. Alcohol probably costs more in Iceland than anywhere else in the world, and it has a correspondingly lower rate of consumption, and a lower death rate from liver disease. But it has a serious problem with binge drinking — a more serious problem than Ireland’s. Like young people almost everywhere, young Icelanders want to get drunk and have fun from time to time. So they save up their money during the week, and splash out at the weekend. Friday night in Reykjavik is mayhem — although it has to be said, generally good-natured mayhem. Shop windows get broken, but bones usually don’t.

Ireland has a low rate of violence, and a low rate of death from liver disease — lower than the rates in the US, UK, and the EU average. Of course some people do get sick and die from alcohol-related diseases. But we’re doing rather well compared to the rest of the world. Of course some drunks do show up in emergency rooms, causing harm to themselves and others. But Ireland’s problem isn’t all that bad, and health workers all over the world are trained to deal with drunks. Society must be prepared to absorb some harm caused by individuals living as they choose to live, and inevitably making some mistakes. Our emergency rooms expect casualties from road use, and although driving should be as safe as we can reasonably make it, we can never make it completely risk-free. The harm caused by drivers is less severe than the harm that would be caused by not allowing anyone to drive, or by only allowing rich people to drive because only they can afford bigger, safer cars. Our emergency rooms expect casualties from alcohol abuse too. Society must absorb some limited harm of this sort as well, for the greater good of human freedom.

Overall, we Irish are drinking less than we did before, despite falling alcohol prices (in real terms). There is an exception to this general trend: women. As Irish men and women’s roles have converged in the workplace and in the home, more Irish women have adopted the heavier-drinking lifestyle of traditional office workers. Liver and other alcohol-related diseases are on the rise among Irish women, and it is well-known that alcohol is more toxic to women’s metabolism. Of course I wouldn’t be so crass as to suggest that women be asked to pay more than men for alcohol. That would be unfair.

Why we can’t model climate

There’s an interesting (to me) discussion on physics and philosophy taking place at the Daily Nous blog. True to form, I’ve probably been adding too many comments, and all of them overlong. My last comment concerns what I regard as an insuperable difficulty in modelling the climate. For the record, here is that comment on my own blog:

Hypotheses represent their subject matter by being true or false of that subject matter. Like most sorts of representation, this does not involve resemblance. But models are different: they do represent their subject matter by resembling it in some relevant way. For example, a model airplane might resemble a real airplane by having similar shape and colours, even though their sizes are different. Or it might mimic the real airplane’s flying behaviour.

To keep things simple, I’ll talk about respective “behaviours” (of model and subject matter) over time, but bear in mind that this mimicry can be along any dimension: for example, a Fourier series might model a function along the x-axis rather than over time. Here’s the important point: I think the behaviour of both model and whatever it represents must be “lawlike” in roughly the same way. (I include statistical laws here, by the way.) In respect of the relevant resemblance between them, it’s essential that “nature continues uniformly the same”.
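The Fourier case makes the resemblance idea concrete. The sketch below uses nothing beyond the textbook series for a square wave: the partial sum is the model, the square wave is the subject matter, and adding terms makes the model resemble its subject more closely along the x-axis.

```python
import math

def square_wave(x):
    """The thing being modelled: +1 on (0, pi), -1 on (pi, 2*pi)."""
    return 1.0 if math.sin(x) > 0 else -1.0

def fourier_model(x, n_terms):
    """Partial Fourier sum: (4/pi) * sum of sin((2k+1)x)/(2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# The more terms, the closer the model's "behaviour" along the x-axis
# resembles the square wave's (which is 1.0 at x = pi/2):
x = math.pi / 2
for n in (1, 10, 100):
    print(n, fourier_model(x, n))
```

The mimicry only works because both the square wave and the sine terms behave in a lawlike (here, periodic) way; that is the “uniformity” the resemblance depends on.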

I’ve used words associated with “Hume’s problem of induction”. Popper famously rejected all (enumerative) induction as problematic. I think that went far too far. As far as I’m concerned, induction is often fine, we just need to reflect in a piecemeal way on circumstances in which induction is reliable, and circumstances in which it isn’t. It’s reliable when it traces law-like connections in the real world (such as “these emeralds are green, so all emeralds are green”). It isn’t reliable when it doesn’t.

It seems to me that we have good reasons for thinking the climate doesn’t behave in a lawlike way, or at least not in any way useful for modelling in climate science. It may be deterministic, but that’s not the same as being predictable or capable of being modelled. Over time, or in response to various changes in initial conditions, the climate is very complicated and multiply chaotic. It seems to me that additional computing power will bring diminishing returns, so that attempts to model the climate will meet a “ceiling” like that of weather forecasting. We may get a bit better, but we probably can’t get all that much better. To put it bluntly, I think it’s a waste of time, brains and money.
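The distinction between being deterministic and being predictable can be illustrated with the simplest chaotic system there is, the logistic map. This is a standard toy example, not a climate model: a perfectly deterministic rule under which two initial conditions differing by one part in ten billion become macroscopically different within a few dozen steps.

```python
def logistic(x, r=4.0):
    """Deterministic update rule x -> r*x*(1-x); chaotic when r = 4."""
    return r * x * (1 - x)

# Two "measurements" of the same initial state, differing by 1e-10:
a, b = 0.2, 0.2 + 1e-10
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the two trajectories have diverged completely
```

No extra computing power rescues prediction here: halving the measurement error buys only one more step of forecast, which is the sort of “ceiling” I have in mind.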

Sentience and preference utilitarianism

There was a brief discussion on Twitter yesterday about whether we should grant “human rights” to non-sentient robots. My reaction: “Why give a damn about non-sentient agents? They can’t feel anything, so who cares if harm should befall them?”

This idea that “morally, the only thing that matters is sentience” was famously expressed by Jeremy Bentham:

a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? the question is not, Can they reason? nor, Can they talk? but, Can they suffer?

Despite my confidence that non-sentient agents do not matter morally, I admit that sentience might seem to pose a special problem for me as a preference utilitarian. The dissolution of this problem adds detail to my moral theory, and explains why we call it ‘preference’ rather than ‘desire’ utilitarianism.

A preference utilitarian differs from the traditional hedonistic type of utilitarian (such as Bentham) in that his basic good is not a particular sort of experience such as pleasure or relief from pain, or happiness understood as a feeling, but the satisfaction of desires. His “greatest good” is not the “greatest happiness of the greatest number” but the maximisation of the satisfaction of desires.

Now it’s important to see that the satisfaction of desires here is not the having of a “satisfying experience”, but the satisfying of objective conditions — and the agent might be wholly unaware that those conditions have in fact been satisfied. A desire is satisfied when the desired state of affairs is actually realised, whether or not the agent has any idea that the state of affairs is realised. Like a man becoming an uncle by virtue of a birth he knows nothing about, or a belief being true, a desire’s being satisfied is a matter of the world’s being arranged in the right way — something typically external to the mind of the agent.

For example, most people want their spouses to be faithful. They don’t want the mere experience of their spouse being faithful, but the actual objective fact of their spouse being faithful. This desire is not for the spouse to “keep up appearances” by telling convincing lies about their infidelities — there mustn’t be any infidelities to tell lies about.

Here’s why sentience might seem like a problem for preference utilitarianism: unless a desire is a desire to have a particular sort of experience, which it typically isn’t, the experience of a desire being satisfied is like a by-product of its actually being satisfied. So a “robotic” agent who doesn’t have any conscious experiences at all — but still has desires which can be satisfied or thwarted — would seem to make moral demands on preference utilitarians like myself. That conflicts with the intuition expressed above that only sentient agents matter morally.

The problem is dissolved, I think, when we remind ourselves that genuine desires (and beliefs, for that matter) only exist where pluralities of them together form a “system”. In moral deliberation, the utilitarian weighs desires thwarted against desires satisfied in an imaginary balance. Obviously, strong desires count for more than weak desires. When desires come into conflict with one another in the mind of a single agent, the strongest desire is the agent’s preference. Only desires in a system of several desires competing for the agent’s “attention through action” can count as preferences.

So system is required for one desire to take precedence over another, as it must if it’s a preference. And a preference to pursue one goal rather than another involves the weighing up of the relative merits of competing goals, the level of time-management needed to defer the less urgent goal, and so on… In short, it requires reflection and choice. This is “second-level representation” — i.e. meta-level representation of primary representational states — of the very sort that makes for consciousness. We need reflection to decide between competing desires (and for that matter, we need epistemic beliefs to guide our choices of first-level beliefs about the world — in other words, a sense of which among rival hypotheses is the more plausible). Second-level representations like these amount to awareness of our own states, including awareness of such states as physical injury. In other words, the experience of pain. It’s a matter of degree, but the richer the awareness, the greater the sentience. So genuine desire and sentience are linked in a crucial way, even though any particular desire and the conscious experience of its satisfaction might not be.
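The utilitarian’s “imaginary balance” can be made concrete: for each available action, sum the strengths of the desires it satisfies and subtract the strengths of those it thwarts, then prefer the action with the best balance. This is a deliberately crude sketch with made-up numbers; nothing in the post commits anyone to numerically precise desire-strengths.

```python
# Each option lists (desire, strength, satisfied?) — hypothetical values:
options = {
    "keep running": [("escape predator", 10, True), ("rest injury", 4, False)],
    "stop and rest": [("escape predator", 10, False), ("rest injury", 4, True)],
}

def balance(consequences):
    """Desires satisfied count positively, desires thwarted negatively."""
    return sum(s if ok else -s for _, s, ok in consequences)

# The agent's preference is the option with the best overall balance:
best = max(options, key=lambda o: balance(options[o]))
print(best)  # -> keep running
```

Notice that even this toy version requires a system of several desires of varying strengths: with only one desire there would be nothing to weigh, and so no preference.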

To better understand why “genuine” desires are part of a system, we might contrast them with more rudimentary goal-directed states of ultra-simple agents such as thermostats, or slightly more sophisticated but still “robotic” agents such as cruise missiles.

Thermostats and cruise missiles each have a rudimentary desire-like state, because their behaviour is consistently directed towards a single recognisable goal. And they have rudimentary belief-like states because they co-vary in a reliable way with their surroundings, co-variation which helps them achieve their goal. In both cases, they might be said to “bear information” (non-semantic information, reliable co-variation) about the world. A clever physicist (a “bi-metallurgist”?) would be able to work out what temperature a thermostat “wants” the room to stay at, and what temperature it “thinks” the room is currently at. A clever computer scientist would be able to reverse-engineer a cruise missile to reveal what its target is, the character of the terrain it is designed to fly over, its assumed current location, and so on. We could go further and adopt the intentional stance, assigning mental content to these agents. In effect, that would be to drop the cautionary quotation-marks around the words ‘wants’ and ‘thinks’. We might regard ourselves as referring literally to its desires and beliefs. But we would not be able to take the next step and talk about preferences. For preferences, we need various goals of varying strengths, and we need something like consciousness to make decisions between them. In other words, we need sentience, at least to some degree.
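A thermostat’s “beliefs” and “desires” in this minimal sense can be written down directly: a goal state, an indicator state, and behaviour that closes the gap between them. Illustrative Python; the names are mine, not standard terminology.

```python
class Thermostat:
    """A one-goal, one-belief agent in the rudimentary sense above."""
    def __init__(self, setpoint):
        self.setpoint = setpoint   # what it "wants" the room to be
        self.reading = None        # what it "thinks" the room currently is

    def sense(self, room_temperature):
        # Reliable co-variation with the world: the belief-like state.
        self.reading = room_temperature

    def act(self):
        # Single-goal behaviour: no competing goals, no weighing,
        # hence no preferences and no sentience.
        return "heat on" if self.reading < self.setpoint else "heat off"

t = Thermostat(setpoint=20.0)
t.sense(17.5)
print(t.act())  # -> heat on
```

The contrast with the earlier “assembly” picture is the whole point: this agent has one desire-like state and one belief-like state, so there is nothing for a second level to arbitrate between.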

Does Biology Have Laws?

[This blog post was prompted by this Scitable discussion. Unfortunately comments were closed before I could contribute.]

Laws are bits of language that describe regularities in nature. If the laws are true, the regularities are real. Laws are general claims, but they are more than accidental generalisations such as “everyone in this room is over five feet tall”. Laws are more like hyper-generalisations in that they don’t just describe what has actually been the case so far — they describe what would be the case, even if the states of affairs that would make them true have not yet come to pass.

There aren’t any laws about the heights of people who happen to be in a room together, but we’d be moving in that direction if we arranged some sort of screening mechanism that only allowed admittance to that room on the basis of height. Genuinely scientific laws rely on such mechanisms when they describe such things as the electric charge of fermions in an atomic nucleus.

Many fundamental laws of physics like Pauli’s Exclusion Principle do not admit of exceptions. Exceptionless laws like that are quite common in physics and chemistry. What about biology?

The question whether there are laws in biology is too often understood as asking whether there are exceptionless laws in biology. I’d guess there probably aren’t any such laws, because the categories of biology (species, etc.) are not like the categories of physics.

But it does not follow that biology has no laws. The salient feature of laws is not that they admit of no exceptions but that the links they express (between categories, concepts, etc.) are non-accidental.

Examples: animals with high male parental investment tend to be monogamous; mammalian mothers tend to be protective of their young. The biological functions of parental investment and pair-bonding are linked; and so are the functions of producing milk and caring for young.

Those links entitle us to draw inferences: if we hear that animals of species X exhibit high male parental investment, we can guess that they are monogamous, although there is always the possibility that we are dealing with an exception. If we hear that Y is a female mammal, we can guess that she is protective of her young, even though there is always the possibility that this particular individual’s behaviour is “aberrant”.
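The defeasible pattern of inference just described can be made vivid with a toy function. The species names and the set of exceptions are placeholders invented for illustration, not real ethological data:

```python
# Toy model of a defeasible biological law: "animals with high male
# parental investment tend to be monogamous". A known exception defeats
# the default inference in a particular case without abolishing the law.

KNOWN_EXCEPTIONS = {"species X"}   # hypothetical exceptional species

def guess_monogamous(species, high_male_parental_investment):
    if not high_male_parental_investment:
        return None                 # the law licenses no inference here
    if species in KNOWN_EXCEPTIONS:
        return False                # the default guess is defeated
    return True                     # the law-licensed default guess

print(guess_monogamous("species Y", True))   # True
print(guess_monogamous("species X", True))   # False
```

The point is that the function still encodes a non-accidental link, and still warrants inference, even though it is not exceptionless.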

I hope it’s clear that biology does critically rely on and describe non-accidental links between categories — links that entitle us to make inferences between claims containing the corresponding concepts. It is that warrant to infer that makes for genuine scientific laws, not their exceptionlessness.

Biological laws have exceptions because many biological categories are “functional” (as exemplified above). In describing, explaining, predicting (etc.) things biologically, we adopt what Dennett calls the “design stance”. We assume that things have functions (purposes, goals, tasks, etc.) and that they perform those functions more or less well “as they were designed” to. “Working properly” shades into “less-than-optimal performance”, which in turn shades into out-and-out “malfunction”. Thus biological categories have fuzzy edges; in other words, they have grey areas where exceptions arise.

(Warning: of course nothing in biology is literally designed by a designer. The main point of evolutionary theory is to show how no such design is required. Talk of design, purposes, goals etc. in biology is just shorthand for past contribution to survival and reproduction.)


When people talk about “self-control”, what do they mean? On the face of it, a “self” and something else that “controls” that self sound like two separate agents. But in reality there is only one agent there. What is going on? I think some buried philosophical assumptions and mistakes lurk here.

[Edit: I see no real difference between a core part of the “self” controlling unruly peripheral parts, versus its being controlled by them. The main idea of self-control is that the “self” is “divided against itself”, or at least divided into more than one part that can be treated as an agent in its own right.]

When we say someone should control himself, we mean first and foremost that he has conflicting desires. Then we go further, and give one of those desires a superior status as being “more genuinely his own” than the other one. His “gaining control of himself” is then a matter of the desire that is “more genuinely his own” resulting in action, overruling the desire that is “less genuinely his own”.

Now it seems to me that this decision to regard one of the conflicting desires as “more genuinely his own” is not taken with reference to what the agent himself most strongly desires, but instead with reference to what is considered more laudable — in other words, with reference to what society at large approves of. This might be anything regarded as valuable — such as good health, prudence in financial matters, scientific rigour, religious piety, whatever. You can see the difference in terms of “is” and “ought”: what the agent most strongly desires is a factual matter to be decided by considering his own choices, whereas what is laudable is a matter of value decided by the likes and dislikes of society at large.

It’s important to see that the factual matter is a completely trivial one — whatever the agent actually ends up doing is what he wanted to do most in the first place. What makes one desire stronger than another is simply that it “wins” any conflict between them by issuing in action. [Edit: So if we look at what an agent most strongly desires, there is no question of one part of himself controlling any other part of himself. He will have to compromise with other agents, of course, and that may involve agents controlling each other to some extent, but that is an everyday fact of life.]

So I would argue that the word ‘self-control’ is to this extent inappropriate: whatever “control” may be involved is not really “self-control” so much as “control by society”. Now please don’t get me wrong here: I don’t mean to say that that sort of “control” involves actual coercion by society. But it does involve guidance from outside the self — with the agent’s tacit approval, of course. He takes his lead from what society approves of rather than from himself in isolation.

Some will protest that self-control usually involves pursuing longer-term goals and deferring immediate gratification. If longer-term goals are more “genuinely an agent’s own” than mere passing whims, perhaps longer-term goals are more rationally entitled to direct conduct. Perhaps longer-term goals represent an agent’s character more faithfully than whims, so that the latter can be considered “out of character”, and thus a suitable subject for the “self” to exercise “control” over.

I think that’s a red herring. Spontaneity, impulsiveness, even capriciousness are aspects of an agent’s “true” character just as much as stolidity or lack of imagination. Rational action involves the pursuit of all sorts of goals, with an eye both to how desirable this or that goal may be, as well as to how confident one may be that this or that course of action will achieve it. If someone chooses to pursue this shorter-term goal rather than that longer-term goal, say, it simply indicates that on balance he prefers this to that, and/or he has more confidence in achieving it. So there’s nothing intrinsically more “rational” about the pursuit of longer-term goals.

That isn’t the only red herring. We tend to discount pursuits that seem to undermine an agent’s integrity or harm him as being less “genuinely the agent’s own” (I’m thinking of activities such as smoking and drinking). But what counts as “harm” here? Inasmuch as he is able to pursue something he really wants, he is not harmed — and inasmuch as he is prevented from pursuing what he really wants, he is harmed. If we regard something an agent freely pursues as undermining his integrity or as harmful to him, once again we are appealing to values of society at large rather than values of the agent in isolation. And once again, we’re not talking about “self-control” here so much as “control by society” — or, as I said above, at least “guidance by society”.

So far, no harm done. An agent is still doing what he wants to do, even when what he wants to do is determined by the likes and dislikes of other agents than himself. But I think our understanding has taken a sinister turn. We are using misleading words, and in doing so we are turning a blind eye to a possible source of genuine coercion. By treating something that lies outside the agent as if it were the agent’s own, we slide inexorably towards thoughts such as that “society can help a person to control himself”. There are monsters about.

[Edit: One such monster is Rousseau’s idea that people must be “forced to be free”. That slogan expresses the most insidious and dishonest form of paternalism, which goes beyond simply forcing people to do what they don’t want to do “for their own good”. The greasier version — embraced by anyone who appeals to “false consciousness” or the like — involves pretending they do in fact want it by virtue of the fact that it’s for their own good.

The idea that an agent can “really” want something although superficially seeming not to want it is at the heart of the “positive” concept of freedom. As Isaiah Berlin noted, it involves the self’s being divided into two — the “empirical” self and the “real” self — and obviously so too does the idea of self-control.]

Formal versus informal implication

I want to compare and contrast two sorts of implication — and I want to suggest that our understanding of beliefs and logic is badly affected when we confuse them, as we often do. In the hope of making things a little clearer, I propose to use the following symbolism: a P written in Mistral font stands for the belief that P, P (in italics) stands for the linguistic sentence expressing the same content P, and a P in earthy colours stands for the fact that P, which of course only exists if P is true. (I gave the fact-symbol “earthy” colours because it’s “in the world”, geddit? Also the Mistral P looks a bit like an octopus, i.e. a real thing in the world.)

For illustration, if P is the sentence ‘Snow is white’, the corresponding belief is the belief that snow is white, and the corresponding fact is the fact of snow’s being white — a very simple sort of fact that might be represented by a Venn diagram like this:

That silly diagram is intended as no more than a reminder that although we are using a letter for a mental state (the belief that P) which is true or false, and a letter for a linguistic utterance (the sentence P) which is true or false in the same circumstances, in the third usage (the fact that P) a letter stands for those circumstances themselves — something that is neither true nor false. Now it may sound strange to say that a fact isn’t true — facts are “true by definition”, aren’t they? Well, a fact is what makes a true sentence or true belief true, so wherever there’s a fact there’s a truth. In a loose colloquial sense we might refer to truths as facts. But in the current philosophical sense, a fact is strictly a state of affairs corresponding to a truth.

So understood, facts cannot imply anything, being themselves neither true nor false. But their linguistic or mental counterparts can, and this is what I want to examine here. It seems to me that confusion between facts, sentences and beliefs has generated much misunderstanding about the nature of thought itself. I hope to disentangle a little of this confusion here, and in doing so I hope to persuade you that formal logic is much less useful than is widely supposed as a tool of critical thinking.

Although facts can’t imply one another, linguistic sentences often do. For example, what are we to make of the claim that P implies Q?

If it is true, it describes a fact of some sort of lawlike connection — formal, causal, categorical, or whatever — between two possible facts: the fact that P and the fact that Q. I say “possible” facts because the implication can hold at the same time as the individual sentences P and Q it connects are not true. What matters is the connection between the sentences rather than their truth-values. For that reason, material conditionals of elementary logic (whose truth-value depends simply on the truth-values of what they connect) don’t capture this sort of implication. The conditionals we use for that purpose have to be understood as counterfactual conditionals, or as having some sort of subjunctive mood, so that they can be true or false regardless of the truth or falsity of their component parts.
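The shortcoming of the material conditional is easy to exhibit directly. The following snippet just tabulates the elementary-logic definition; nothing in it is controversial:

```python
from itertools import product

def material_conditional(p, q):
    # "P -> Q" in elementary logic: false only when P is true and Q is false.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(p, q, material_conditional(p, q))

# The conditional comes out True whenever P is false, regardless of any
# connection between P and Q -- which is exactly why it cannot capture
# the lawlike implication discussed above.
```

A material conditional with a false antecedent is vacuously true, so “if the moon is made of cheese, grass is purple” counts as true; a lawlike or counterfactual conditional plainly does not behave that way.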

Just as the sentence P can both describe a purported fact and stand for the belief that P, the claim that P implies Q can both describe a purported fact and stand for a belief. The nature of this fact and of this belief has seemed a bit of a mystery, to me at any rate in the past. I now think that mystery is largely the product of confusion between formal and informal implication. Apologies if this is no mystery to you.

Formal implication

As a model of implication, most of us take the case we are most familiar with: implication in formal logic, where the premises of a valid deductive argument imply the conclusion. When I say the implication here is formal, I mean that the work is done by language, and thought follows. That is, relations between sentences guide the formation of beliefs.

When conditionals that express such implications are true, they are true by virtue of the fact that one sentence can indeed be derived from the other via the rules of inference.

Deriving one sentence from another is a bit like building a structure out of Lego bricks. In this analogy, our rule of inference might be “every new brick must engage at least half of the interlocking pins of the bricks underneath”. When we begin, we might have no clear idea whether a given point in space can be reached given our starting-point. But once we do reach it (if we do), we can believe that it is legitimately reachable, given that starting-point and the rules of inference. Or at least, we can “accept” it as true, because we “accept” the rules of inference simply by using them.
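The Lego picture can be mimicked with a miniature derivation engine. This is my own toy, not any standard proof calculus: the “sentences” are plain strings, and the single rule of inference is modus ponens applied via a table of conditionals:

```python
def derive(premises, rules, goal):
    """Forward-chain with modus ponens as the sole rule of inference.
    `rules` maps a sentence to the sentences derivable from it. Returns
    True if `goal` is reachable from `premises` -- the analogue of a
    point in space being legitimately reachable from the starting bricks."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for p in list(derived):
            for q in rules.get(p, []):
                if q not in derived:
                    derived.add(q)
                    changed = True
    return goal in derived

rules = {"P": ["Q"], "Q": ["R"]}      # 'P implies Q', 'Q implies R'
print(derive({"P"}, rules, "R"))      # True: R is derivable from P
print(derive({"Q"}, rules, "P"))      # False: no rule runs backwards
```

As with the Lego bricks, we may have no idea in advance whether the goal is reachable; but once the engine reaches it, we can accept it as legitimately reachable given the starting-point and the rules.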

With formal implication, the fact that corresponds to a true claim that P implies Q is a “linguistic” fact, embodied by the actual derivability of Q from P. The belief that corresponds to a claim that P implies Q (or sort-of belief, if all we do is “accept” it as true) is about derivability in language.

Informal implication

With formal implication, the work is done by language and thought follows. But with informal implication it’s the other way around: the work is done by thought and language follows. Actually, if thought is working as it should, this one-thing-following-another goes deeper, all the way to facts. The world has some lawlike features, and the thoughts of animals reflect them — in other words, animals have true beliefs about lawlike facts. Later, we human animals try to express those thoughts using language. Here real-world relations guide the formation of beliefs, which in turn guide the formation of sentences.

These sentences can be misleadingly ambiguous. A sentence like ‘P implies Q’ can be read in three distinct ways. It can say something about the lawlike connections in the world, i.e. about how the facts that P and that Q are related; or it can say something about the way the sentences P and Q are related; or it can say something about how the beliefs that P and that Q are related. This ambiguity is compounded by the fact that a sort of meta-level “conditional” corresponds to each of these types of relation, and the situation is made still worse by our inclination to take formal implication as our model of implication in general.

It seems to me that the way to avoid getting lost here is to constantly remind ourselves that the primary link is between things in the world where lawlike connections exist: “what goes up must come down”, “if it has feathers, it’s a bird”, etc. Thought captures these lawlike connections by forming beliefs that stand or fall together in a systematic way. If the facts that P and that Q are related in a lawlike way, a mind captures that meta-level fact by being disposed to adopt the belief that Q whenever it adopts the belief that P, and to abandon the belief that P whenever it abandons the belief that Q. Given the larger belief system to which the pair may or may not belong, they’re “stuck together” like the ends of a Band-Aid:

The system as a whole has the property that whenever the belief that P gets added to it, the belief that Q gets added too, and whenever the belief that Q gets stripped away from the system, the belief that P gets stripped away too, like a Band-Aid whose adhesive parts are put on or taken off (in reverse order).

If we can be said to have a “conditional belief” corresponding to this sort of implication, it amounts to little more than the belief that a lawlike connection exists between the facts that P and that Q. This meta-level “conditional belief” is embodied in the way the beliefs that P and that Q stand together or fall together in the system. Even if such a belief is false — as it would be if there were in fact no lawlike connection between those facts — that distinctive linkage of the beliefs that P and that Q in the system is all it amounts to. When we come to capture it in language, we may use arrows or similar symbols to indicate a non-symmetrical linkage of P and Q, but let’s be careful not to think of such informal links as perfectly mirroring formal links.
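The “stand together or fall together” disposition can be modelled directly, without any intermediate derivation machinery. The class below is a sketch of the idea, nothing more; the example beliefs echo the feathers-and-birds case from above:

```python
class BeliefSystem:
    """Beliefs linked by informal implication: adopting the belief that P
    drags in the belief that Q, and abandoning the belief that Q drags
    out the belief that P. There is no inferential 'bridge' in between,
    just the linkage itself."""

    def __init__(self):
        self.beliefs = set()
        self.links = []          # (p, q) pairs: 'P informally implies Q'

    def link(self, p, q):
        self.links.append((p, q))

    def adopt(self, belief):
        self.beliefs.add(belief)
        for p, q in self.links:
            if p == belief and q not in self.beliefs:
                self.adopt(q)            # adding P adds Q too

    def abandon(self, belief):
        self.beliefs.discard(belief)
        for p, q in self.links:
            if q == belief and p in self.beliefs:
                self.abandon(p)          # stripping Q strips P too

system = BeliefSystem()
system.link("it has feathers", "it is a bird")
system.adopt("it has feathers")
print(sorted(system.beliefs))   # both beliefs are now in the system
system.abandon("it is a bird")
print(sorted(system.beliefs))   # both are gone
```

Note that the linkage is non-symmetrical, as the arrows of informal implication require: abandoning “it has feathers” on its own would leave “it is a bird” in place.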

I hope you agree that the Band-Aid analogy goes too far in that it contains one unnecessary detail that ought to be omitted from our understanding of informal implication. That detail is the “bridge” between the adhesive parts, with its supposed “hidden mechanism” enabling an inference from P to Q. I think we are inclined to imagine such a mechanism exists because we are so used to taking formal implication as our model, and we have a tendency to assume something akin to interlocking Lego bricks is needed to “bridge the gap” between the beliefs that P and that Q. A better analogy perhaps would be a Band-Aid with the non-adhesive part removed:

What does it all mean?

The assumption that formal and informal implication are closely parallel misleads us about the nature of thought. It promotes the idea that thinking is a matter of “cogwheels and logic” rather than many direct acts of recognition by a richly-interconnected belief system, often of quite abstract things and states of affairs.

People who praise or actively promote logic as an aid to critical thinking routinely assume that beliefs work like discrete sentences in formal implication. That is, they assume beliefs have clear contents with logical consequences which are waiting to be explored. Well, as I’ve said several times now, in formal implication, language does guide thought. Beliefs correspond to sentences which are discrete because of their distinct form. One sentence leads to another thanks to the rules of inference, and beliefs follow their linguistic counterparts. The beliefs that are so led are themselves discrete because they are so closely associated with discrete sentences. Their contents determine the inferential connections between them.

But most beliefs aren’t like that at all. Their content isn’t determined by prior association with discrete sentences whose form precisely determines their content. Rather, their content is attributed via interpretation, which is an ongoing affair and, well, a matter of interpretation. That interpretation involves “working our way into the system as a whole”, taking account of the inferences an agent draws and attributing whichever mental content best reflects his inferential behaviour. If someone behaves as if he is committed to lawlike connections in the real world, we attribute beliefs whose contents are appropriate to commitment to those lawlike connections. Here, inferential connections between beliefs determine their content rather than vice versa.

As far as I can see, this limits the usefulness and scope of logic. It’s useful in the academic study of logic, obviously, but outside of that field, only the most elementary applications are of much use, even in formal disciplines like computer science and mathematics. I agree that it’s useful to be aware of informal fallacies and to try to avoid them. But beyond that, the power of logic has been over-inflated by the assumption that beliefs are like “slips of paper in the head with sentences written on them”, and the assumption that thinking proceeds by drawing out their consequences — by examining what they formally imply.

We are not culpable for “wrong opinions”

When we act, our bodily movements are caused by mental states. These mental states consist of a desire to achieve a particular goal, and some relevant beliefs which help us “steer a course through the world” towards achieving the goal.

All this means that a human agent is a bit like a sophisticated version of a cruise missile, which is programmed to reach a target, and to do something (usually explode) when it gets there. It steers a course towards its target by comparing the terrain it flies over with its onboard computer map.

Although both the map and the targeting are necessary for it to reach its goal, the map is “neutral” in the sense that it only contains information about the outside world. It is compatible with the missile hitting any other target within the mapped area, and with its doing good things like delivering medicine or food aid when it reaches its target (not just doing something bad like exploding).

If the “act” of a cruise missile is to be praised or condemned, we judge what it is programmed to do, and where. We do not judge its map, whose greater or lesser accuracy simply results in greater or lesser efficiency in fulfilling the aim of the programming.
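The separation between the neutral map and the judgeable programming can be sketched as follows. Everything here (the class, the toy “map” as a set of locations, the payloads) is invented for illustration:

```python
class Missile:
    """The 'map' (belief-like state) is neutral: the same map serves any
    target and any payload. Only the programmed goal invites praise or blame."""

    def __init__(self, terrain_map, target, payload):
        self.map = terrain_map    # belief-like: information about the world
        self.target = target      # desire-like: the goal it steers towards
        self.payload = payload    # what it does on arrival

    def deliver(self):
        # An accurate map just means efficiency in reaching the goal.
        if self.target in self.map:
            return f"{self.payload} at {self.target}"
        return "lost"

shared_map = {"hospital", "bridge", "depot"}
good = Missile(shared_map, "hospital", "food aid")
bad = Missile(shared_map, "bridge", "explosion")
print(good.deliver())   # prints "food aid at hospital"
print(bad.deliver())    # prints "explosion at bridge"
```

The two missiles share one and the same map yet invite opposite moral verdicts, which is the point: judgement attaches to target and payload, never to the map’s accuracy.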

It should be the same with human agents. If we praise or condemn what they do, it should be with reference to the good or evil they intend to do, or are willing to do, and to whom. We should suspend judgement of an agent’s beliefs when we judge his actions, as beliefs are “neutral” with respect to the good or evil of what they help to achieve, just like the cruise missile’s onboard map. Like the accuracy of the missile’s map, the truth or falsity of an agent’s beliefs affects his success or efficiency in achieving goals, but the beliefs do not set any goals. A belief can be true or false, but it can’t be good or bad. The worst an opinion can be is false, rather than “aimed at an evil goal”.

Despite the neutral role of beliefs, some people blame others for having the “wrong” opinions, or in other words for not believing what they “should believe”. For example, many Muslims think “apostasy” should be punished by death. Many Westerners think “denialism” should be ostracised or worse.

Those are remarkably similar views, and both are primitive, in the worst sense of the word. They belong to a backward state of society. They are inspired by confused understandings of agency, and we should reject them. If someone has false beliefs, he has either had bad luck (by being exposed to unreliable sources of knowledge) or he is epistemically ill-equipped. In neither case is he culpable.