Something from nothing

When I was a young engineering student, I was very impressed by techniques that seemed to deliver something from nothing – or from surprisingly little. For example, one of Kepler’s insights was that orbiting planets “sweep out equal areas in equal times”. Newton later proved that this follows from nothing more than gravity acting along the line joining the centres of planet and Sun. The force’s magnitude needn’t be that of an “inverse square” law; it might even be repulsive instead of attractive – all that matters is that it act radially.

As another example, consider “dimensional analysis”. We might suppose that the period of a simple pendulum depends on variables such as its length, its mass, or the acceleration due to gravity. But by simply noting that the period must be measured in units of time (rather than of mass or length, say), we can show that it must be proportional to the square root of the pendulum’s length divided by the acceleration due to gravity (the constant g). And it cannot depend on the mass of the bob at all. Here again, a modest assumption about a constraint yields a surprisingly powerful result.
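For anyone who wants the bookkeeping spelt out, here is the standard dimensional-analysis sketch (the symbol k is just an undetermined dimensionless constant; nothing in it is specific to my argument):

```latex
% Suppose the period depends on length, bob mass and gravity: T = k L^a m^b g^c.
% Matching the units on each side:
\[
  [T] = [L]^{a}[m]^{b}[g]^{c}
  \;\Longrightarrow\;
  \mathrm{s} = \mathrm{m}^{a}\,\mathrm{kg}^{b}\,\bigl(\mathrm{m\,s^{-2}}\bigr)^{c}
             = \mathrm{m}^{\,a+c}\,\mathrm{kg}^{\,b}\,\mathrm{s}^{-2c}
\]
\[
  b = 0,\qquad a + c = 0,\qquad -2c = 1
  \;\Longrightarrow\;
  a = \tfrac{1}{2},\; c = -\tfrac{1}{2},
  \qquad
  T = k\,\sqrt{\frac{L}{g}}
\]
```

The mass drops out entirely, exactly as claimed, and the whole result comes from nothing more than the requirement that the units balance.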

Something rather like this can also happen in philosophy. Take the modest assumption that science posits entities that cannot be observed directly – such as electrons, viruses, force fields and dinosaurs. This constrains scientific methods in surprisingly powerful ways. For a start, it immediately puts the two traditional patterns of reasoning into the back seat. Scientific method cannot be much like mathematics, whose methods of proof are exclusively deductive. Nor can it be much like Francis Bacon imagined it to be in the early seventeenth century, as the rigorous application of induction.

Deduction cannot deliver a nontrivial conclusion containing terms that do not already appear in the premises of a valid argument. So any entity purportedly denoted in such a conclusion must already be denoted in the premises. Where do these premises come from? – Ultimately, they cannot themselves be the product of deduction alone.

Induction too can only deliver more general, extended versions of claims already made. So where do these original, less general claims come from? – Typically they are about things that can be observed directly. Those claims don’t even purport to describe things that can’t be observed directly. And the few starting claims that do purport to describe such things cannot themselves be the deliverances of induction.

The problem with both deduction and induction is essentially the same: each starts off with some claims that are already accepted, but which describe nothing beyond what can be observed directly. Each then makes a sort of “jump” to a “new” claim, but nothing “new” enough to contain anything of the sort science is interested in, namely a description of something that cannot be observed directly. Genuinely scientific “jumps” to what cannot be observed directly must be of a different sort from anything made in deduction or induction.

So science must – I repeat, must – be a matter of guesswork. Of course it must also be more than guesswork if it is to produce anything worth believing. That essential extra bit is testing.

There is a variety of takes on guesswork, each with its own fancy name. Some call it the “method of hypothesis”. Others call it “abduction”. Some like the sound of “inference to the best explanation”. And there are other words like these. But most of them are vague, and all are misleading inasmuch as they suggest that step-by-step reasoning is involved (analogous to traditional deduction and induction) instead of honest, common-or-garden, risky guesswork.

Most of those fancy words are inspired by discomfort or even hatred – hatred of uncertainty, of risk, of depending on luck, of gambling, of being unable to do anything remotely like accountancy. So people who hate those things tend to resist or even to cover up the fact that science is essentially guesswork.

One “radical” attempt to disguise the fact that science starts off with guesswork is to pretend that science does not in fact posit entities that cannot be observed directly. On this view, all our talk of electrons, viruses, force fields and dinosaurs is a mere instrument to “organize experience”. Scientific theories are neither true nor false, its proponents say, but consist of mere “models” which we use to predict how the world as we experience it will unfold.

It’s a long, old debate – over the issue of scientific realism – and I’d be happy to join it with anyone who’s willing to take me on. In the meantime, please note that if we understand science in an instrumental way, it cannot challenge our religious or philosophical beliefs, or even earlier scientific beliefs. In fact it is powerless to do anything interesting at all.

There is a sort of trade-off in these opposed attitudes to science. The scientific realist accepts that scientific knowledge is very risky in that it cannot avoid guesswork, yet it presents a real challenge to other beliefs because it purports to be literally true. The scientific instrumentalist, on the other hand, sees scientific knowledge as more like accountancy – it is secure, but buys its security at the cost of being very shallow. It cannot present any real challenge to other beliefs because it can’t contradict them.

 

Order from disorder?

Here’s an example of a lawlike claim: ‘all emeralds are green’. This claim is much like a scientific law, because the predicate ‘is an emerald’ and the predicate ‘is green’ are practically “made for each other”. They’re ideally suited to their linguistic “marriage”, because what makes a beryl count as an emerald is the very thing that makes it green. So you can’t have an emerald that isn’t green.

Although most scientific laws are written using mathematical symbols – such as Newton’s ‘F = ma’ – those symbols capture intimate connections between the real things they stand for, much as words do in the emerald example above. Those connections are generally simple, and consist of such facts as the containment of one set by another (as above), or direct cause-effect links (as in ‘what goes up must come down’), or suchlike. Speculating, we might well wonder whether our very sense of simplicity itself is shaped by our innate ability to sniff out lawlike connections. In any case, these intimate connections give laws a distinct “flavour of necessity” – laws can seem almost empty like tautologies, or almost trivial like definitions.

An important feature of laws is that they support “counterfactual conditionals”: although I’m not actually holding anything in my hand, if I were holding an emerald in my hand, then it would be green. This is why laws are useful in prediction: you can predict that something will be green, just from knowing it’s an emerald.

Now here’s an example of a claim that is not lawlike (in fact it’s not even true): ‘all swans are white’. There is practically no correlation between an animal’s colour and the genus it belongs to, or even the species it belongs to. Many groups have subgroups whose most noticeable distinguishing feature is their colour – so the predicates ‘is a swan’ and ‘is white’ are not at all suited to “marriage” in a law.

Although ‘all swans are white’ may be superficially (grammatically, etc.) similar to ‘all emeralds are green’, it cannot be used to make reliable predictions. If I were to keep a swan in my own private lake, you wouldn’t be able to reliably guess whether it would be black or white.

Sometimes, people talk about “black swans” as if they were occasional anomalies whose possibility everyone should be forewarned and forearmed about. But really, that is not nearly deep enough or sceptical enough. The real problem is not that exceptions occasionally turn up, but that not enough thought is given to whether laws are involved at all when we try to predict things.

Such laws might be statistical – as long as they’re genuine laws which describe real linkages, and which therefore support counterfactual conditionals. Prediction cannot be based on a mere “statistical snapshot” of the way things accidentally happen to be. For example, in the long run, repeated throws of a pair of dice will result in doubles about one sixth of the time. Even if we don’t actually throw the dice repeatedly, we know that if we were to do so, the proportion of doubles would approach one sixth ever more closely. Or again, in a large enough sample of mammals, the sexes will be represented roughly equally. Even if we don’t actually take a head count, we know that if we were to take a big enough head count, we would find roughly equal numbers of males and females. These proportions are not accidental: they’re the products of careful manufacture (shaping, balancing, etc.) of dice, and of evolutionary biology, respectively. Either of these statistical proportions could take part in a statistical law.
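By way of illustration of a limiting relative frequency, here is a minimal simulation sketch (the dice are stand-ins simulated with Python’s standard random module; the particular throw counts are arbitrary):

```python
import random

def fraction_of_doubles(n_throws: int) -> float:
    """Throw a fair pair of dice n_throws times; return the observed fraction of doubles."""
    doubles = sum(
        1
        for _ in range(n_throws)
        if random.randint(1, 6) == random.randint(1, 6)
    )
    return doubles / n_throws

# The observed proportion settles ever closer to 1/6 as the number of throws grows.
for n in (100, 10_000, 1_000_000):
    print(n, round(fraction_of_doubles(n), 4))
```

Nothing in the code “knows” the answer in advance; the stability of the proportion comes from the fairness of the dice (here, of the random number generator), which is exactly what fits it to take part in a statistical law.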

But with many statistical phenomena, the numerical proportions we measure are no better than merely accidental. If we extrapolate from the latter for purposes of prediction, our predictions will be unreliable. For example, suppose about one sixth of Australians drive Ford cars. There is nothing to suggest that that proportion is anything but an uninteresting coincidence. In a decade’s time, they may drive entirely different brands of cars, in entirely different proportions. Or again, the human population has been rising because food is getting cheaper, but the wealthier people become, the fewer children they tend to have. So although there has been an overall upward trend, there is no reason to think any sort of law is involved in the rise of the human population. The current rate of population rise is no basis for any reliable predictions about how big the human population will be at any time in the future.

Now for my main complaint: many people don’t bother to ask whether any sort of law is involved in apparent trends such as population rise. They just extrapolate from the current “data”, and expect nature to “continue uniformly the same” (as Hume put it) in the relevant respects, as if a law could describe the process. Often, we have very good reasons to think the process isn’t remotely lawlike – in other words, we have good reasons to think that no law could describe it. Laws are bits of human language, and human language can describe some things but not others.

The reliability of any prediction depends on an essential linkage between what we know already and what we’re predicting. There might be a simple “constant conjunction” between them (to use Hume’s terminology again). Or there might be some other non-causal connection that underwrites a lawlike connection, such as the connections found in quantum entanglement. But these lawlike connections are not optional – they’re a requirement of prediction. The ever-present question in our minds should therefore be: Is there or isn’t there a lawlike connection between what we’ve observed already, and what we’re trying to predict?

I think that question isn’t asked often enough. And when questions aren’t asked, answers tend to be merely assumed. The assumed answer to the present question is in effect that there is always a lawlike connection of the required sort, because the physical world is assumed to be mechanical and regular simply by virtue of being physical. The naïve Newtonian intuition is that it’s “like clockwork”. Without even asking the question above, we tend to assume that all we have to do is follow the standard pattern of extrapolation from already-observed cases, and the physical world will oblige. Its unfolding patterns may not be obvious at first, the idea goes, but they must be there, waiting to be revealed beneath the apparent confusion.

I think that assumption is profoundly mistaken – so badly mistaken that it’s worth a brief look at the philosophical ideas behind it.

We belong to a tradition that takes the mind to be “spiritual” rather than “material” – it doesn’t interact with material things in the usual way in which matter interacts with other matter. So we think of the mind instead as a centre of consciousness or an engine of experience, in a sense “cut off” from the physical world outside the mind, because it “deals in experience” rather than with material objects. According to this view, whatever the mind knows about matter is made possible because its experiential inputs from the outside world provide “justification” for its beliefs, and if the beliefs are actually true, they count as items of knowledge. This standard analysis of knowledge takes “justification” to be “internal” to the mind. The vague idea is that I cannot accept anything except “what is available to me” within the confines of the “theatre of my own experience”, because otherwise I would have to “step outside of my own skin”. In the supposedly isolated state “inside my own skin”, with only internal cues available to me as “justification”, the best any mind can do is follow the standard pattern of extrapolation from observed cases – in other words, treat white swans in the same way as green emeralds.

Of course most people who belong to this tradition dropped the idea that the mind is “spiritual” long ago. The trouble is, most of us retain its associated epistemological baggage – such as that knowledge consists of true beliefs suitably “justified” by simple “basic beliefs” about experience, as just described. This idea is still so all-pervading that it even finds its way into popular ideas about science: our theories or computer models are analogous to beliefs, so it is widely supposed that they require an analogous “justification” of being supported by “data” – the public counterpart of “basic beliefs” about experience.

Like many philosophical errors, this one is so deep-seated that any alternative can seem unthinkable to those in its grip. How could it possibly be otherwise than that theory is supported by “data”? – Happily, the answer is given in mainstream philosophy of science: observations test theory rather than imply theory. Hypotheses yield predictions which observations either confirm or do not confirm. If a prediction is confirmed, the hypothesis is corroborated by the observation – a very different matter from its being implied by the observation.

But scientists pay little attention to philosophers nowadays. Many imagine that they don’t have to study any philosophy. The tragic result is that they do their own, newly cobbled-together, half-baked sort of philosophy. In a few branches of science (pseudo-science, if we’re honest) internalism of the sort described above has become a will-o’-the-wisp that guides methodology.

For example, consider the application of computer modelling to irregular natural phenomena that look confusingly “ravelled” to the human eye. The hope is that the magic powers of computer modelling can summon forth order from chaos and “unravel” them.

I think that hope is forlorn. Take something as simple as a compound pendulum (one pendulum hanging from the end of another, often called a double pendulum). Its individual parts – of which there are only two – behave in a lawlike way, but the whole does not. There is no ideal “marriage” of predicates (of the sort I began with) linking its earlier and later positions. Like so many things, the whole does not have a crucial feature that its parts do have. The mistake of thinking it does is called the fallacy of composition, and it is a common error. (For example, many suppose that if genes are “selfish”, the entire organism must be too.)

A compound pendulum is chaotic in the sense that its future position depends sensitively on its initial conditions. Predicting its future position or behaviour from its past position or behaviour is a practical impossibility.

Now of course, it’s easy to simulate a compound pendulum in a computer, because it’s such a simple system. But it’s impossible to get such a simulation to model an actual compound pendulum, because both are chaotic. Their respective behaviours are bound to diverge. Far from “unravelling” the chaos, the simulation compounds it by adding chaos of its own, making a mismatch between it and any actual compound pendulum all the more inevitable. The simulation may exemplify or illustrate by mimicry the chaotic behaviour of compound pendulums in general, but it’s incapable of modelling any individual pendulum.
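To make the sensitivity concrete, here is a minimal sketch using the standard textbook equations of motion for a two-link pendulum, a deliberately crude fixed-step integrator, and starting angles chosen purely for illustration:

```python
import math

G = 9.81  # m/s^2

def step(state, dt, m1=1.0, m2=1.0, l1=1.0, l2=1.0):
    """One crude Euler step of the standard double-pendulum equations of motion.

    state = (theta1, omega1, theta2, omega2); angles are measured from the vertical.
    """
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * math.cos(2 * t1 - 2 * t2)

    a1 = (-G * (2 * m1 + m2) * math.sin(t1)
          - m2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * m2 * (w2 * w2 * l2 + w1 * w1 * l1 * math.cos(d))) / (l1 * den)

    a2 = (2 * math.sin(d)
          * (w1 * w1 * l1 * (m1 + m2)
             + G * (m1 + m2) * math.cos(t1)
             + w2 * w2 * l2 * m2 * math.cos(d))) / (l2 * den)

    return (t1 + w1 * dt, w1 + a1 * dt, t2 + w2 * dt, w2 + a2 * dt)

# Two pendulums whose upper-arm starting angles differ by a millionth of a radian.
a = (math.radians(120.0), 0.0, math.radians(-10.0), 0.0)
b = (math.radians(120.0) + 1e-6, 0.0, math.radians(-10.0), 0.0)

dt = 1e-4
for _ in range(int(20.0 / dt)):  # 20 seconds of simulated time
    a, b = step(a, dt), step(b, dt)

# With starting angles this large, the two trajectories typically decorrelate within seconds.
print("difference in lower-arm angle after 20 s:", abs(a[2] - b[2]))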

In my opinion, the attempt to model the Earth’s climate using computer simulations is many orders of magnitude more misguided than the attempt to model a warehouse full of compound pendulums. That attempt is inspired by the “traditional” hope that the climate is made of physical stuff, and so “there must be predictable order hidden beneath the apparent disorder”. Well, there may be order in the form of lawlike behaviour on the part of individual molecules, but we have no reason to expect lawlike behaviour on the part of the inconceivably many component “parts” (including causal influences) that together constitute the climate.

I’m not a crank: I think we have good reasons to accept the greenhouse effect. In other words, we have good reasons to think that there is a lawlike connection between the concentration of greenhouse gases in the atmosphere and global temperatures. But a quick inspection of the best graphs we have reveals that at every temporal scale, from one year to several millennia, global temperatures go up and down in a non-monotonic way. Any such graph is confusingly “ravelled” to the human eye in pretty much the same way as a compound pendulum “flies around the place like a madman”. So even this simplest of causal connections must show up as an extremely tenuous lawlike connection, or one buried beneath mountains of extraneous noise. There is no obvious pattern to see here, nor any reason to think there is a “deeper” pattern that computer models could salvage from the disorder.

Unregulated markets and nature

I don’t know anyone who thinks gangsters should be allowed to run protection rackets. Even the purest libertarians committed to a wholly unregulated market would baulk at that. Yet we can imagine other practices of “exchange” that would amount to much the same thing. Suppose one person is dying of thirst, and another controls the water supply. The latter can then charge an “extortionate” amount for the water.

Although I don’t think this situation differs much from a protection racket, I imagine many libertarians would say this second situation should be allowed, because it is in a sort of unstable equilibrium. It’s just a matter of time before another water-seller comes along and undercuts the original water-seller’s extortionate price, or so they would argue (I think).

That might happen if there were many potential water-sellers, and if the water supply were not controlled by a few of them, and if there were many potential water-buyers, and if water-buyers were prepared to buy unlimited amounts of water if it were cheap enough.

That’s quite a lot of ifs. Buts: water-buyers cannot drink or carry more than a small amount at a time, and water-sellers know it. Water-buyers must and will fork out the cash for the water they need, and water-sellers know it. Unless water-sellers are simply not interested in making money, they’ll cooperate with each other and make a lot of it rather than try to undercut each other’s prices. Why would they undercut each other if they can make more money by cooperating?

I think there is only a difference of degree between this extreme situation and milder versions that we all see happening around us. There are all sorts of things that people must buy, but will only buy limited amounts of. For example, you must travel to and from work. But most people want to travel as little as possible. Even if you love travelling, you only have time to do so much of it.

Or again, unless you’re a collector, you only want one car, but you might need that car quite badly. Even if you’re a “workaholic”, you cannot have more than a couple of jobs, and if you only have one job, you need it desperately – almost as desperately as someone dying of thirst needs water.

I don’t know anything about economics, but it’s just common sense that if the supply of these things can be controlled, any self-interested parties who can control the supply will do so and raise the price rather than compete with each other.

We are a cooperative species. To put it another way, we are a price-fixing species; a species that runs cartels.

I am not a libertarian, but like many libertarians I see striking similarities between an unregulated market and “nature”. Living things thrive in nature, probably better than anything an environmentalist can organize by second-guessing nature. But the living things of value to us – as individual humans – tend to thrive much better when they are managed by farmers. When fertile land is left untended, weeds grow rather than food crops.

I’d guess many libertarians quite like the parallels between unregulated markets and nature. Maybe they have misunderstood what goes on in nature. Here’s Darwin:

What a book a devil’s chaplain might write on the clumsy, wasteful, blundering low and horridly cruel works of nature!

And here’s JS Mill on the same theme:

In sober truth, nearly all the things which men are hanged or imprisoned for doing to one another are nature’s every-day performances. Killing, the most criminal act recognised by human laws, Nature does once to every being that lives; and, in a large proportion of cases, after protracted tortures such as only the greatest monsters whom we read of ever purposely inflicted on their living fellow creatures. If, by an arbitrary reservation, we refuse to account anything murder but what abridges a certain term supposed to be allotted to human life, nature also does this to all but a small percentage of lives, and does it in all the modes, violent or insidious, in which the worst human beings take the lives of one another. Nature impales men, breaks them as if on the wheel, casts them to be devoured by wild beasts, burns them to death, crushes them with stones like the first Christian martyr, starves them with hunger, freezes them with cold, poisons them by the quick or slow venom of her exhalations, and has hundreds of other hideous deaths in reserve, such as the ingenious cruelty of a Nabis or a Domitian never surpassed. All this Nature does with the most supercilious disregard both of mercy and of justice, emptying her shafts upon the best and noblest indifferently with the meanest and worst; upon those who are engaged in the highest and worthiest enterprises, and often as the direct consequence of the noblest acts; and it might almost be imagined as a punishment for them. She mows down those on whose existence hangs the well-being of a whole people, perhaps the prospect of the human race for generations to come, with as little compunction as those whose death is a relief to themselves, or a blessing to those under their noxious influence. Such are Nature’s dealings with life. Even when she does not intend to kill she inflicts the same tortures in apparent wantonness. In the clumsy provision which she has made for that perpetual renewal of animal life, rendered necessary by the prompt termination she puts to it in every individual instance, no human being ever comes into the world but another human being is literally stretched on the rack for hours or days, not unfrequently issuing in death. Next to taking life (equal to it according to a high authority) is taking the means by which we live; and Nature does this too on the largest scale and with the most callous indifference. A single hurricane destroys the hopes of a season; a flight of locusts, or an inundation, desolates a district; a trifling chemical change in an edible root starves a million of people. The waves of the sea, like banditti, seize and appropriate the wealth of the rich and the little all of the poor with the same accompaniments of stripping, wounding, and killing as their human antitypes. Everything, in short, which the worst men commit either against life or property is perpetrated on a larger scale by natural agents.

The night Kuhn said “yes”

Fifty years have passed since the publication of one of the most important books of the twentieth century: Thomas Kuhn’s The Structure of Scientific Revolutions. This book is vital for our understanding of science in ways that are too numerous to list exhaustively. Here are a few random thoughts half a century on.

First, Kuhn showed us that the “Whig history” told by science textbooks is wrong. In fact it’s downright dishonest. Typically, science textbooks barely touch on the history of science, but when they do, the story nearly always goes that we are blessed by being currently in possession of the truth. The past has been a series of cumulative steps leading our forebears towards this glorious present.

The reality is always much messier and less “monotonic” than that.

Second, Kuhn showed us that science is a social process in which committed partisans vie for supremacy. The real worry here is not that scientists are not the saints they are often painted as being, but rather that the decisions they make when they choose one theory rather than another are not rational decisions.

If we do not have good reasons to think theory change is mostly rational, we do not have good reasons to think current theories are even approximately true.

Third, Kuhn showed us that communication between partisans of alternative paradigms is at least problematic, and may even be impossible. (Kuhn used the word ‘paradigm’ for a central theory combined with its “penumbra” of guiding ideas – techniques, unwritten assumptions, and above all notable successes that work as examples of “how to do it right”.)

Kuhn put our understanding of science through a harrowing trial. I remember lying awake at night the first time I read the book, half excited and half fearful that everything I had taken for granted about science was wrong. Personally, I think science – real science, not pseudo-science – survives this trial. But it’s a surprisingly near-run thing.

Communication is problematic between partisans of alternative paradigms because the words they use have different meanings. For example, in Newtonian mechanics the word ‘mass’ refers to an intrinsic property of an object, but in the newer relativistic alternative the same word refers to a quantity that depends on the reference frame.

These problems are troubling in science, but they are more obvious in humanities subjects such as philosophy. Students are often anxious that their teacher will penalize them if the teacher doesn’t agree with the ideas and opinions expressed in their written work. But they have little ground for worry if there is disagreement. Disagreement is a sign that teacher and students are at least working within the same paradigm. And most third-level teachers are scrupulously careful to avoid penalizing students for expressing opinions they disagree with. An apparent lack of understanding is the real liability.

A much more treacherous situation arises when a student is an original thinker, and is writing within an entirely different paradigm from that of the teacher, so that they use words differently. In this situation, the teacher is liable to think the student has simply missed the point, or is changing the subject. This can look like a lack of understanding rather than the embracing of a new or different understanding.

I have never been original enough for that situation to arise in my own case. All the same, I think I have seen the potential for such equivocation in baffled expressions on the faces of peers and colleagues in a few areas. (And even teachers – yeah, I’m talking about you DD, when you saw my copy of EO Wilson’s Sociobiology!)

For example, ethics is divided between those who make moral judgments with reference to the consequences of action, and those who make moral judgments with reference to the motivation of agents. The word ‘right’ changes its meaning across this division, much as ‘mass’ did between Newton and Einstein. Culpability does not even enter into moral deliberation in the former, but it is the central concern of the latter. So each side tends to regard the other as “not thinking morally at all”. In a discussion of ethics, one side seems to the other to be simply “changing the subject”.

Science gets over this sort of failure of communication through observations and testing, but there are no such tests for moral theories. Much depends on what is in fashion, on what the most – or most influential – people think is a worthwhile way of thinking.

Utilitarianism was taken seriously in the nineteenth century, but the tide turned against it. It is widely thought to have been discredited. Thus the few who don’t think so are liable to be treated as stubborn or even ignorant people who “haven’t heard the news”.

Another area in which a gulf yawns between alternative paradigms is epistemology. The traditional project of epistemology was to worry about “justification” and to try to “refute the sceptic” (by which is meant the radical or Cartesian sceptic who feels he has no reason to think he is perceiving the “outside world” at all). WVO Quine’s “naturalized epistemology” rejects Cartesian dichotomies between “inner” and “outer”, and its concern with internal “justification”, to instead ask about the external reliability of the processes that give rise to beliefs. To the traditional epistemologist, he has simply “changed the subject”. He is considered a “naïve realist” rather than someone who sees, like Donald Davidson, that we have “unmediated touch with the familiar objects whose antics make our sentences and opinions true or false”.

I spent a few years as the lone “scientific realist” among the graduate students at a US university with a strong tradition of “continental” philosophy. I was considered naïve, having perversely turned my back on all the stuff they assumed had been known since Kant’s day about noumena being forever beyond our ken and all that. (Despite our apparent ability to refer to “them” using the word ‘noumena’!)

The one thing we did have in common was that we all worshipped Kuhn, for various reasons.

Then one day, Thomas Kuhn Himself came to town. It was a meeting of the American Association (or something like that) for the Philosophy of Science (or something like that), around 1990. As a graduate student, I was dutifully writing names on badges. Someone said “here comes Thomas Kuhn!” as the great man approached the desk, and quietly announced his name with modesty and grace, despite the fawning multitudes loudly welcoming him to the Chicago Hilton. With trembling hands I wrote his name on the badge, desperately anxious that I might not be spelling it right.

Later, I got away from the desk and started to drink the (free) beer. After half a dozen (or something like that) cold ones I approached him, in the main ballroom (or something like that). At this stage, he was surrounded by the graduate students of at least three universities in the Chicago area. But eventually I saw an opening, and asked him: “Professor Kuhn, if you had to answer Yes or No to the question whether you are now a scientific realist, what would your answer be?”

Reader, he said Yes.

(And then he qualified his answer with some other stuff, but you wouldn’t be interested in that. Detailed, boring kind of stuff.)

Overpopulation

I’m often frustrated by the poor quality of discussion of the problem of overpopulation (if indeed it is a problem). It seems to me that almost all participants in the discussion have missed one of the most important insights of evolutionary theory, an insight attributable to Malthus.

The population of any species in any closed habitat would rise geometrically, but it cannot, because it always hits a “ceiling”. This ceiling is mostly set by supply. I mean to construe “supply” in the most general terms – usually what matters is the availability of such necessities for living as food, water, and light. But it can include more, such as the availability of ornaments used by bower birds in sexual selection.

Where weeds can grow, weeds do grow. The weed population expands, and only stops expanding when overcrowding prevents further expansion. Where bower birds can build ornate bowers, bower birds do build ornate bowers. The bower bird population expands, and only stops expanding when the quality of the poorest-quality bowers is too low for their builders to have realistic hopes of getting chosen by female bower birds, and hence to reproduce. In both cases, the population bounces along the ceiling like a helium balloon that has slipped out of a child’s grasp. The ceiling is set by supply, although what needs to be supplied differs sharply from one species to the next.

Of course there is an attrition rate: some members of the population are picked off by predators. But that rate is set by the population of the predators, which in turn is set by their food supply – in other words, by the replacement rate of the population they prey upon, which is exactly where we started.

How much each individual consumes of the supply decides how many individuals there are. For example, a given field that can sustain a population of 100 rabbits might only be able to sustain 10 sheep, or 50 rabbits and one fox.

So two components determine the number of individuals: the supply and the rate of consumption. (Perhaps I mean “demand” here, but I know nothing about economics, and I don’t want to suggest that I am talking about anything other than biology.)
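As a toy illustration of those two components (the numbers are mine, chosen only to echo the rabbits-and-sheep example above, not anything measured):

```python
def population_ceiling(daily_supply: float, consumption_per_head: float) -> int:
    """How many individuals a fixed daily supply can sustain (crude integer ceiling)."""
    return int(daily_supply // consumption_per_head)

grass = 1000.0  # hypothetical units of grass available per day
print(population_ceiling(grass, 10.0))   # light grazers ("rabbits") -> 100
print(population_ceiling(grass, 100.0))  # heavy grazers ("sheep")   -> 10
```

Lowering the supply and raising each individual’s consumption push the ceiling down in exactly the same way, which is the point of the two “solutions” discussed below.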

The human population rose dramatically in recent centuries, not because humans decided they wanted to have more children, or because they became more sexually promiscuous, or because many generations have passed since The Great Flood, or even much because advances in sanitation and medicine lowered the attrition rate. It was mostly because food became easier to procure, thanks to cheaper energy and advances in agricultural technology.

So although there may be a human overpopulation problem, an increase in the population is a sign of good things happening, or at least of good things having happened. Although there may be trouble ahead, the trouble will not be that the expanding population finally “hits a wall” of the Earth’s “carrying capacity”. That “wall” is better understood as a ceiling, and the population has always been already at that ceiling. It is hardly ever acknowledged that the normal condition of the Earth is to be at “carrying capacity”, and that at all times some places enjoy a surplus while others suffer a famine, with the same statistical inevitability as floods and droughts. The trouble is not that we hit a wall or a ceiling but that the ceiling might start to get lower. This could happen if energy to produce food became significantly more expensive. The reality of a lowering ceiling is famine.

There are two obvious ways to lower the population, if indeed that is a good thing to do. The first is to artificially lower the ceiling by limiting the supply. The second is to increase each individual’s consumption so that the same habitat can sustain fewer individuals. Suppose we are again considering grazing animals in a given field of grass. In effect, the first solution is to lower the number of rabbits by having less grass. The second solution is to lower the number of grazers by turning the rabbits into sheep.

In the case of rabbits and sheep, the “supply” is of grass. In the case of humans, the “supply” is of more costly and abstract items, things more like the ornaments of bower birds. All humans need an education, for example, although giving a child a good education generally entails having fewer children. In increasingly affluent countries, humans get increasingly ambitious about their need for houses, and cars, and expensive clothes, and foreign holidays, and memberships to golf clubs. Although we may disapprove of the levels of consumption here, we should remind ourselves that such levels are a good way of keeping the population down. People who have high expectations for their children have fewer of them, and invest more in each. This explains why as societies become more affluent, life becomes less cheap, and the birth rate generally drops.

Physics has gone mental

Nowadays physicists routinely talk about probability, information, entropy, order and so on as if physics were the science of mental properties, quantities or entities – as if its subject matter were the mind-stuff or “immaterial substance” of Cartesian fantasy.

I think that is silly, disappointing and wrong. In taking this “turn for the mental”, physics has become conceptually pathological. There are two obvious reasons why it took that turn. The first reason is that physics and modern Western philosophy have drifted apart over the decades, so that physicists no longer recognise the virtues of clarity, realism and materialism. And they give little thought to genuinely mental entities such as beliefs and desires. The second reason is that no one understands quantum theory, and many quantum phenomena are frankly weird. Some physicists (such as Richard Feynman) honestly admit that they don’t understand it. Others pretend that they do understand it, and use the weirdness as a pretext for all manner of conceptual immodesty and metaphysical extravagance.

We have to accept weirdness as a last resort when it is thrust upon us – but that’s quite different from going straight for it as a first resort. Some recent physics warmly embraces “spiritualism” of a sort usually associated with primitive religions.

At one time, there was no distinction between physicists and philosophers. But as science grew more technical, philosophy grew more envious. Communication between the two grew more difficult. We have now reached a stage where there is almost no communication or mutual criticism at all. Philosophers do not dare to question physics, and physicists do not care enough to question philosophy, because they couldn’t be bothered to learn any. This is a tragedy for both of them. Philosophers beaver away at irrelevancies, engaged in the narcissistic exhibition of technical prowess. Meanwhile physicists try to answer the big questions – and often fall flat on their faces because they’ve learned nothing from the mistakes that philosophers have made before them.

However, I think there is light at the end of the tunnel: it seems to me that sooner or later, the “spiritualism” of much modern physics will become testable. My money is on its being disproved by careful observation. In the spirit of adventure rather than the spiritualism of disembodied mind-stuff, I hereby offer some criticism.

For an example of how physicists have become purveyors of mind-stuff, consider probability. The word ‘probability’ is ambiguous, often dangerously and misleadingly so. In statistics, it refers to relative frequency, or more precisely to a limiting value of relative frequency. For example, the probability of throwing doubles with a pair of dice is one sixth: what that means is that in repeated throws of a pair of dice, a proportion of about one sixth will end up as doubles. The more throws there are, the closer that proportion tends to get to one sixth. So we can fine-tune this statistical understanding by saying that one sixth is a limiting value – it is approached as the limit of the relative frequency of doubles as the number of throws increases.

That statistical sense of the word ‘probability’ is relatively new. For most of history, and in everyday usage, words like ‘probable’, ‘probably’, ‘likely’, etc. express something quite different. They express not relative frequency but credibility: not a numerical proportion, but the idea that another idea ought to be believed. If I say “It will probably rain tomorrow”, I mean that the idea, claim or proposition that it will rain tomorrow deserves belief. The two senses are often confused, especially in contexts where relative frequency is taken as the basis for belief. For example, most hands in poker do not contain four of a kind. When playing poker, I might actively adopt the belief that my opponents do not have four of a kind, because it is statistically such a rare event. (However, as professional gamblers know, that would not be a wise long-term strategy: in a long enough series of hands, four of a kind becomes a statistical inevitability.)
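For the simplest version of the game (a straight five-card deal; draw variants change the numbers), the arithmetic behind that “statistical inevitability” looks like this:

```latex
% Chance of four of a kind in a single five-card hand:
\[
  p \;=\; \frac{13 \times 48}{\binom{52}{5}} \;=\; \frac{624}{2\,598\,960} \;\approx\; 0.00024
\]
% Chance of seeing it at least once somewhere in N independent hands:
\[
  1 - (1 - p)^{N}, \qquad\text{e.g.}\quad 1 - (1 - p)^{10\,000} \approx 0.91
\]
```

A rarity in any one hand, in other words, but close to a certainty over a long enough run of hands.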

Relative frequency is a wholly “objective” feature of the world. It has everything to do with what numerical proportion of members of a class of real things have a real property, and nothing to do with truth or falsity, nothing to do with beliefs, nothing to do with oughts, nothing to do with ideas about ideas, nothing to do with rationality. Physics just doesn’t measure epistemic probability – the “subjective” matter of how much a claim ought to be believed. So probability in quantum theory must be construed statistically – that is, in objective numerical terms of relative frequency. That’s because physics doesn’t “do” beliefs, or propositions, or anything that has “meaning” like that.

Only representations have “meaning” – representations as found in the mind, in art, and in language. Representations are not found inside atoms, or between the galaxies of the cosmos. Things inside brains have “meaning”, of course, as does communication between brains in the form of human language. And brains are physical. But physics is not the study of brains or languages.

Epistemic probability is “subjective” in the sense that it always depends on what is already believed. How much the claim that it will rain tomorrow deserves to be believed depends on whether you have heard the weather forecast, on how much you trust the weather forecast, and so on. It depends on you. Since different people believe different things, epistemic probability differs from one individual to the next. And since an individual’s beliefs change with the passing seasons, epistemic probability also fluctuates with time. Because it is subjective like that, this sort of probability cannot be measured using numbers. In fact it is hard to say what a number could possibly refer to or quantify in this context. A belief either can be attributed to an individual, or else it cannot be so attributed. It’s an all-or-nothing matter, not a matter of degree. We might speculate that a number might measure the “depth” to which a belief is “entrenched” in the believer’s belief-system – in effect, the agent’s relative reluctance to abandon it in the face of countervailing evidence – but this is an extremely complicated and abstract sort of metric.

It doesn’t matter. As long as we remember that truth and falsity are “objective”, it is salutary to be reminded that credibility is “subjective”. It is a matter of judgement, and often a matter of intuition, not numbers.

The confusion of relative frequency and credibility tends to really get going when we jump from thinking about a class of events, plural, to thinking about an individual event, singular. Suppose quantum theory tells us that 50% of electrons from a given source have a particular property. This is like saying that 50% of tossed coins will result in “heads” rather than “tails”. In both cases, it might be tempting to think that we have some useful knowledge about how individual coins or electrons will behave. But all the knowledge we have is already fully expressed by the statistical claim about the class. We don’t know anything about individual electrons, or individual coin tosses.

If we fail to acknowledge our own ignorance here, we are liable to think we can attach a credibility of 50% to the idea that an individual electron or coin has a particular property. Next, we might imagine that the individual electron or coin has a “diluted” version of the property, one diluted by having a weaker “potential” to command belief. Now remember, apart from the conceptual murkiness of the move here, the credibility of an idea depends on the mind contemplating it, whereas genuine properties are objective features of the world. So it’s a completely nutty idea that individual electrons or coins could have a property of more or less “diluted credibility”. Such a bizarre property would be almost literally “attached” to the electron or the coin itself, like a price tag. This imaginary “tag” supposedly quantifies its “worth” – not its monetary worth, but how much it is worth believing that it will yield this or that result when observed or tossed.

If we’re honest, I think many of us will admit to thinking about probability in that way. We “objectify” something subjective, rather as we might suppose that the worth of a desired object is given by its price tag rather than by how much we desire it (which also depends on the mind). “A picture holds us captive”, as Wittgenstein would say. And it holds us captive because our intelligence has been bewitched by language, specifically, by an ambiguity in the word ‘probability’.

We needn’t be held captive if we insist that claims about probability in coin tosses and in quantum theory should be understood statistically rather than as ghostly disembodied “ideas about ideas” attached like price tags to as-yet untossed coins or as-yet unobserved electrons. Perhaps that commits me to some sort of (non-local) “hidden variables” interpretation of quantum theory. So be it – the alternative is grossly immodest conceptual madness, and avoiding that is as essential to decent science as avoiding ouija boards.

The malaise in physics isn’t limited to quantum theory. The word ‘information’ is as ambiguous as the word ‘probability’, and once again many physicists embrace a wonky “spiritual” interpretation as a first resort. I’ll return to the topic of “information” in a few days’ time. (In physics, it should be understood as reliable co-variation rather than as any sort of weird disembodied mind-stuff.)

Yearning for certainty

Nothing misdirects our intellectual efforts more than the yearning for certainty. Most of us admit that we can never achieve complete certainty, but that doesn’t diminish the allure of certainty. And some claims seem to be more certain than others. If we are attracted to certainty, these more certain-seeming claims can become the centre of our intellectual lives. Our thoughts will revolve around them like moths circling a source of light – with similarly baleful results.

Mathematics comes close to the ideal of certainty. Theorems can be proved, and the results of complicated calculations are “guaranteed” like the conclusions of valid deductive arguments. The proofs of theorems are literally valid deductive arguments in mathematics, and they have an important edge over most everyday arguments. The conclusions of everyday arguments are at best as certain as their premises, but in mathematics these premises are axioms, which are taken as true “by definition”. Thus the theorems themselves are true as a matter of “necessity”.

But even in mathematics, there is always the possibility of human error. The proof of a theorem or the answer to a complicated long division sum may be “guaranteed” – but only as long as the rules of derivation are followed to the letter. Once we have completed a derivation, how can we be sure we really did follow the rules to the letter? Children recognise that mathematics is a minefield of error, but adults seem to forget that, possibly because their “homework” isn’t “corrected” by a teacher. David Hume reminded his readers of the omnipresence of human error, even in areas where we are liable to forget it.

Another area in which we feel we have certainty is “the inner citadel” of our own conscious experience. To be more precise, we feel that our own reported “take” on our own mental states cannot be wrong. The idea is this: if I think I have a toothache, then I must actually have a toothache. Although I might be mistaken about whether my tooth has some sort of actual physical injury, I cannot be mistaken about the experience of pain. Whatever its physical cause might be, the word ‘toothache’ refers to the quality of the experience. So my own report of my own conscious experience is completely certain, or so it would seem.

To those who yearn for certainty, this idea that our own reports of “inner experiences” are infallible joins forces with the idea that mathematics is the paradigm of human knowledge. The resulting hybrid is called foundationalism, and it has done all sorts of bad things in philosophy and beyond. The best-known foundationalist was Descartes, who set out to reject everything that wasn’t certain. He soon arrived at conscious experience as the bedrock of knowledge, and assumed that all knowledge “rests” on this bedrock. In other words, it works as a foundation. Empiricists and rationalists alike followed Descartes and assumed that knowledge has the structure of an edifice resting on foundations, much as theorems rest on axioms in mathematics. Theorems really do rest on axioms, of course, but mathematics differs from empirical knowledge in this respect.

The idea that knowledge has this foundational structure is so deeply entrenched that at first its rejection can seem slightly mad. But it’s quite easy to think your way out of it and to see why it’s wrong. Consider the rudimentary belief-like states of simple creatures such as insects or “robot-like” agents such as cruise missiles. These agents do have something akin to knowledge of the world, in the form of simple but accurate representations of aspects of the world. In the case of a cruise missile, the representation consists of an on-board computer map of the terrain it flies over, with an electronic “marker” corresponding to its current location. It can be wrong about this – via inaccuracies in the map or an error in its assumed location – but if the terrain-scanning camera and all the other equipment are working properly, it will not be wrong. And its being right has nothing to do with the “quality of its conscious experience”, of which we may safely assume it has none whatsoever. Instead it has everything to do with the reliability of the causal links between the real world and its electronic representation of the real world. Much the same applies to the cognitive equipment of insects, whose consciousness, if there is any at all, must be extremely sparse. But as far as cognition is concerned, there is only a difference of degree between these simple creatures and fully conscious agents like us.
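A toy sketch of that last point, if it helps. The grid map, the elevation values and the matching rule below are all invented stand-ins, not a description of any real guidance system; what makes the marker accurate is nothing but the reliability with which sensor readings pull it toward the cell that actually matches the ground below.

```python
from dataclasses import dataclass

@dataclass
class TerrainTracker:
    """Toy stand-in for the 'on-board map plus marker' picture in the text."""
    terrain_map: dict[tuple[int, int], float]   # hypothetical: elevation by grid cell
    estimated_cell: tuple[int, int]             # the electronic "marker"

    def update(self, measured_elevation: float) -> None:
        """Nudge the marker toward whichever nearby cell best matches the sensor reading."""
        x, y = self.estimated_cell
        neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        candidates = [c for c in neighbours if c in self.terrain_map]
        self.estimated_cell = min(
            candidates, key=lambda c: abs(self.terrain_map[c] - measured_elevation)
        )

tracker = TerrainTracker(
    terrain_map={(0, 0): 120.0, (1, 0): 135.0, (1, 1): 150.0},
    estimated_cell=(0, 0),
)
tracker.update(measured_elevation=134.0)  # the ground below looks most like cell (1, 0)
print(tracker.estimated_cell)             # -> (1, 0)
```

Nothing in this little mechanism involves justification by experience; its accuracy is entirely a matter of how reliably the measurements track the terrain.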

Whether we really do have certainty about our own conscious experiences strikes me as pretty much irrelevant, because if we do have certainty here it is empty and logically impotent. Depending on how we interpret our self-reports about conscious experience, what we gain in certainty we lose in logical power, i.e. in how much they imply. For example, the more certain I can be that I have a toothache, the more my belief is about the mere experience of a toothache. This is a mere “seeming” whose apparent certainty is not correlated with certainty about an actual injury to a tooth. Amputees suffer from pains in “ghost limbs”; whatever certainty they may have about that sort of experience does nothing to justify a belief that the lost limb has grown back.

The real problem here is the assumed “structure” of knowledge as an edifice resting on foundations. This nearly-universal idea misleads human thought right across the board. For example, how often do people ask for the “grounds” of an opinion, or assume that science is “based” on observations? It is a short step from here to thinking that “data” imply theory, so that theory consists of nothing more than a complicated collection or arrangement of “data”. This is a sure sign of “positivist” pseudo-science.

 

Positivism

I find few things more annoying – and saddening – than watching science get misled by bad philosophy. This usually happens when the scientists involved try to turn their backs on philosophy, in the hope of doing things more “scientifically”. I sympathise with the urge to avoid the nonsensical, pretentious garbage that passes for much or most of philosophy. But whether we like it or not, we are all already infected with the philosophical ideas and assumptions of the traditions we are steeped in from birth. For example, most of us are familiar with the assumption that the “inner world” of our own minds is transparent to us and unproblematic, while the “outer world” is opaque and fraught with difficulty; it may lurk behind a curtain of uncertainty; it might conceivably not exist at all. These are philosophical ideas, but they’re shared by people who have never studied philosophy. On the face of it they may seem like “sceptical” ideas – and thus may look vaguely “scientific” – but in fact they’re as extravagant and immodest as the most gilded religious ornaments of faith.

In disciplines outside of philosophy, the yearning for certainty usually emerges as some form of positivism. The word ‘positivism’ is a loose term for a family of attitudes that are broadly hostile to anything that can’t be checked. So positivists put observation centre stage, and tend to demand that all manner of things be measured. They don’t like any sort of metaphysics, or arcane talk of things that can’t be observed directly, or loose talk of anything that resists numerical quantification.

So far so good, you might think. To many, the no-nonsense approach and rigour of positivism gives it a scientific flavour. It gives sociologists and psychologists the warm feeling that they must be scientists because “we use clipboards and charts and stuff, and we’re really rigorous”.

But in reality, positivism is deeply inimical to genuine science, because it can’t abide guesswork and it can’t abide talk of things that can’t be observed or quantities that can’t be measured.

The central feature of science is hypothesis, otherwise known as guessing. Scientific hypotheses are mostly guesses about things that can’t be seen directly, such as electrons and force fields. Because scientific hypotheses purport to describe things that can’t be seen directly, they can’t be checked directly either. They are never “proved”, at least not in the way mathematical theorems are proved. Instead, we have to devise tests that examine their observable consequences. The passing of such tests is the best sort of empirical evidence we can have – alas! – as these are tests that false hypotheses can pass, and true hypotheses can fail. Perhaps worst of all, any “data” provided by such tests are consistent with a range of mutually inconsistent hypotheses. (This much-discussed problem is called “the under-determination of theory by data”.)
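A toy illustration of under-determination (my own example, not one from the literature): suppose the only data we have are the three observations (0, 0), (1, 1) and (2, 2). Then both of the following hypotheses fit those data exactly, yet they disagree everywhere else.

```latex
\[
  H_1:\; y = x
  \qquad\qquad
  H_2:\; y = x + x(x-1)(x-2)
\]
% The extra term in H_2 vanishes at x = 0, 1, 2, so both hypotheses "save the data";
% at x = 3 they predict y = 3 and y = 9 respectively.
```

No amount of staring at the three data points will settle which hypothesis to prefer; something beyond the data has to do that work.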

So scientific hypotheses and the theories they constitute are necessarily risky. They are not the sort of things to attract people who yearn for certainty. The history of science is a sequence of theories that were discarded because they were found to be probably false, and the same fate awaits many of the theories we currently hope are true.

However, despite all that, we have decent reasons for thinking that scientific theories are improving, although the improvement is neither “incremental” nor steady. The improvement is a consequence of scientists comparing rival theories and making rational choices between them. To do that, they have to exercise judgement and indeed intuition, which again are not the sort of things that positivists are comfortable with, because they cannot be measured. The degree to which a theory deserves to be believed is not a measurable quantity, despite loose talk of “probability” outside the realm of statistics.

As an alternative to guessing, risk and intuition, disciplines influenced by positivism tend not to attempt to penetrate the hidden depths of the real world. Instead, their formalisms tend to describe the appearances: the observable and the measurable. Thus for most of its life, psychology has been fixated on behaviour, and has left speculation about the nature of the mind, consciousness, cognition etc. to philosophy.

Positivism lends itself to a kind of anti-realism. Scientific realism is the view that science is slowly rolling back the curtain on the world as it really is, so that the things scientific theories talk about such as neutrinos, viruses and dinosaurs are (or were) fully real. A standard form of anti-realism is instrumentalism, the view that scientific theories are mere instruments for predicting what we will observe. Most positivists are instrumentalists – the shared idea being that theories do no more than “organize experience” or “arrange data” in a useful rather than penetrating way.

Disciplines influenced by positivism give a special role to induction – that is, to extrapolation from observed cases. Induction appeals to positivists because it is “mechanical” and seems to avoid guesswork. But induction is the most problematic form of reasoning. It is only reliable when applied to lawlike phenomena, and it cannot make the jump from what can be observed directly to what can’t. It is thus a non-starter for the penetrating guesswork that constitutes most good science.

Until recently positivism was best exemplified by behaviourist psychology. But a new contender has come on the scene: the science of climate change. The methodology of this new science is essentially to construct a temperature “record” of the past, using tree rings, lake bed samples, ice cores and the like, then to construct computer models which mimic this past “record”. Finally, these computer models are left running – busy extrapolating in the manner of someone performing an induction – in the hope that they will go on to mimic the future temperatures of Earth.
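To make the fit-then-extrapolate pattern concrete, here is a minimal sketch, with the caveat that the “record” below is an invented series standing in for a reconstructed past temperature record, and that no climate group’s actual code looks like this.

# A toy version of fitting a model to a past "record" and then extrapolating it.
# The record is invented for illustration; it is not real data.
import numpy as np

years = np.arange(1900, 2001)
rng = np.random.default_rng(0)
record = 14.0 + 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

# "Construct a computer model which mimics this past record":
# here, nothing grander than a least-squares straight line.
slope, intercept = np.polyfit(years, record, deg=1)

# "Leave the model running": extrapolate the fitted trend into the future.
future_years = np.arange(2001, 2051)
projection = intercept + slope * future_years

print(f"fitted trend: {slope:.4f} degrees per year")
print(f"projected temperature for 2050: {projection[-1]:.2f}")

Nothing in the fit itself guarantees that the projection will hold; that is precisely the inductive leap. The real methodology is vastly more elaborate than this, but the logical shape – mimic the past, then project it forward – is the same.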

To my mind, that is a bit like piecing together the first movement of Beethoven’s Tenth Symphony from his posthumously collected notes, getting a computer model to “fit the data” of which notes occur and when, and then hoping the computer will go on to write the rest of Beethoven’s Tenth. In other words, I regard it as laughably hopeless. But that’s another story. The lesson for today is only to see how our yearning for certainty leads us astray.

Bowman’s fork

If we consider any discipline that calls itself a “science” – climatology or academic psychology, for instance – let us ask: Does it use a formalism that enables us to reliably predict future events with precision? No. Does it dissolve a mystery by suggesting an explanation for something we did not understand before? No. Commit it then to the trash: for it can contain nothing but sophistry and illusion.

My words above deliberately echo those of “Hume’s fork”, which Hume and his later followers such as A. J. Ayer used to distinguish honest empirical or reflective enquiry (about “matters of fact” or “relations of ideas”) from meaningless metaphysics.

My “fork” is aimed at distinguishing between honest scientific speculation and pseudo-science. The evidence for a scientific theory – or any theory which purports to describe things we can’t see directly – consists of its passing tests, and its explaining things that we otherwise wouldn’t understand. If a theory or discipline does neither of those things, it is garbage. If it is not widely recognised as garbage, that is usually because people are taken in by its impressive-looking rigour. Science does entail rigour of course – but the converse is not true. Mere rigour is not enough for science. Even the most rigorous astrology or homoeopathy is worthless hokum.

So I see any genuine science as having one or both of two virtues: predictive power and explanatory power. If it doesn’t explain anything and can’t predict anything, it’s garbage.

Two genuine sciences stand out as being rather short on one virtue, but they more than make up for that vice by being long on the other virtue. Quantum theory has little explanatory value – it is so poorly understood that it creates more mystery and bafflement than it dissolves. But it has extraordinary predictive power. With evolutionary theory, on the other hand, it’s the other way around. Evolutionary theory has little predictive power – we really have very little idea of how future living things will differ from life at present. But it has extraordinary explanatory power.

Predictive power isn’t just futurology or “saying what the future will be like” – astrology does that. It is predicting specific observable events successfully, in a reliable and repeatable way, so that the hypotheses that imply these future observations are corroborated by actual observations. In other words, the theory to which they belong passes tests.

A theory can have various sorts of explanatory power, because there are several different sorts of explanation. The best-known sort exemplifies the “covering law” model of explanation: a law (such as Newton’s law of gravitation) plus some other assumptions, hypotheses and initial conditions imply that an event (such as the appearance of Halley’s comet around 1066) should occur. The event’s actual occurrence would have been a mystery, until we realised it was implied by other things we accept already. Our acceptance of those other things removes any bafflement we may have had about why that particular event happened.
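Schematically, a covering-law explanation has the standard Hempel-style deductive form (the schema below is the textbook rendering, not anything specific to the comet example):

\[
\frac{L_1, \ldots, L_n \qquad C_1, \ldots, C_k}{E}
\]

where $L_1, \ldots, L_n$ are laws, $C_1, \ldots, C_k$ are the other assumptions, hypotheses and initial conditions, $E$ describes the event to be explained, and the horizontal line marks a deductive inference.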

Another important sort of explanation occurs when one theory is reduced by another theory. This is a relation between two branches of language, so it’s a bit like a weak form of translatability. For example, phenomenological thermodynamics takes heat “at face value” and treats temperature operationally as “whatever is measured by thermometers”. The more recent statistical mechanics revolves around the idea that heat is motion within matter: the molecules of a gas bounce off each other and the walls of their container in a random way, and the temperature of the gas is the mean kinetic energy of its molecules. The latter theory reduces the former, so that central claims such as Boyle’s law have a counterpart in both theories. In effect, the two theories “mesh” like cog wheels, which is good news for both of them. This meshing is the best reason we scientific realists have for thinking that science is slowly pulling back the curtain on the parts of reality we can’t see directly.
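To see the meshing in this one case, here are the standard ideal-gas relations of kinetic theory, quoted only as an illustration:

\[
\tfrac{1}{2} m \overline{v^{2}} \;=\; \tfrac{3}{2} k_B T,
\qquad
PV \;=\; \tfrac{2}{3}\, N \Bigl( \tfrac{1}{2} m \overline{v^{2}} \Bigr) \;=\; N k_B T .
\]

At fixed temperature the right-hand side is constant, so $PV$ is constant – which is Boyle’s law, just as phenomenological thermodynamics already stated it. The same law appears once as a brute regularity about pressure and volume and once as a theorem about colliding molecules: the cog wheels mesh.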

Evolutionary theory meshes with plate tectonics, explaining why many marsupial species are found in Australia, why some still remain in South America, and how the one species that made it to North America got there via the Isthmus of Panama. Evolutionary theory also meshes with genetics. This is good news for plate tectonics and genetics, but it’s extremely good news for evolutionary theory, because its powers of prediction are so limited. We can add its meshing with these other areas of human thought to a vast body of explanation: of why there are as many males as females in most species, of why we like junk food despite calling it “junk”, of why the peacock’s unwieldy tail puts it into serious peril. Evolutionary theory even explains why people come to believe bad science: selfish genes entail cooperation, cooperation entails in-groups and out-groups, and in-groups and out-groups often entail the adoption of theories purely for group-identification purposes. It is a tragedy that many people believe this or that theory for no better reason than “I am a Democrat, so in science I am on this side rather than that side”.

Good science yields good reasons for belief: in prediction, it yields reasons to believe things that may not have even occurred to anyone before, such as the fact that stars appear further apart if their light has to pass near the Sun. In explanation, good science yields reasons to believe things that we already believed, such as the perilous length of the peacock’s tail, but which seemed mysterious or baffling to us because we were unable to fit them into our larger belief system.

Bad science does neither of the above.

Denial of evolution in the Guardian

Writing in today’s Guardian, Deborah Orr tells us that there’s no such thing as “race”, that “our ‘race’ is human”, and that the “myth of ‘race’ was invented by racism”.

Let us be charitable and accept that she probably means well in that she is opposed to racism. That is a decent aim, one I hope all my readers share. But we don’t need to deny facts in order to behave decently – the denial of facts leads to the embracing of falsehoods, and that usually leads to indecent behaviour.

There are two fatal philosophical errors in Deborah Orr’s deliberate adoption of a falsehood. The first is a confusion of is and ought. Simply believing that there are factual differences between people (an is) does not justify the mistreatment of any of them (something none of us ought to do). The second error is called essentialism, the idea that if a concept applies to a class of things – such as a race of humans – then they must all have a single feature (or “essence”) in common.

Hume was the first to recognize that believing something is entirely different from desiring it. What you think is a fact is entirely different from what you want to become a fact, i.e. what ought to be a fact according to you and your values. Racists are not people who think that as a matter of fact there happen to be some differences between races; they are people who disregard or override the interests of some people because of their race. They do so because they want to: because they dislike particular races, or blame them, or have a deep distaste for a particular type of person – a distaste they think entitles them to act in ways that harm that type of person. Racists act on such urges by “punishing” people who belong to the “wrong” race: by withholding jobs, by forcing them to live segregated lives, by enslaving them, or by putting them into gas chambers.

We all recognize races, and the fact that there are fuzzy grey areas between races. And most of us realize that race is irrelevant for most aspects of human life. None of the differences between races are morally important, and certainly none of them justify mistreating anyone because of their race. But race is not at all irrelevant in biology, because evolution requires the emergence of different species, and different species can only emerge from different sub-species, otherwise known as races. (Darwin called sub-species “races”, and the word appears in the full title of his best-known book – which is why few people utter that full title, why few remember it, and why it recently caused Richard Dawkins some embarrassment.) In denying the fact that there are different sub-species of humans, Deborah Orr is denying the theory of evolution.

Deborah Orr’s confidence that the very idea of race is a myth is probably inspired by an old Platonic idea (of “ideal forms”) that lives on in the assumption that every concept can be given a “definition”. It’s still quite common for people to demand a “definition” of this or that idea in order for it to be considered legitimate. The “definition” stipulates a single criterion that must be met for membership of the class to which the idea applies. For example, to count as a triangle, a plane figure must have three sides.

But as Wittgenstein realised, many or most of our concepts are “family resemblance concepts”. That is, they apply to classes of things that have no single feature in common. Wittgenstein’s own classic example is games. For something to count as a “game”, it need have no special feature that characterises games in general, because there is no such feature. Games just have some shared features – family resemblances – that make them similar enough to each other for us to classify them the same way. No two games share all the same features, and some games might share none at all.

This applies to race as well. It is quite possible for a black person (say) and a white person (say) to have more in common, genetically, than two black people or two white people. But if a person has enough of the family resemblances that characterise one or other race, then that is the race they belong to. It’s no big deal. But it’s important in biology. We can’t just deny evolution and reject evolutionary theory because of a half-baked moral ideal.

Letter on Freud from Wittgenstein to Norman Malcolm

Trinity College

Cambridge

6.12.45.

Dear Norman,

Thanks for your letter & thanks for sending me van Houten’s cocoa. I’m looking forward to drinking it.—I, too, was greatly impressed when I first read Freud.1 He’s extraordinary.—Of course he is full of fishy thinking & his charm & the charm of the subject is so great that you may easily be fooled.

He always stresses what great forces in the mind, what strong prejudices work against the idea of psycho-analysis. But he never says what an enormous charm that idea has for people, just as it has for Freud himself. There may be strong prejudices against uncovering something nasty, but sometimes it is infinitely more attractive than it is repulsive. Unless you think very clearly psycho-analysis is a dangerous & a foul practice, & it’s done no end of harm &, comparatively, very little good. (If you think I’m an old spinster— think again!)—All this, of course, doesn’t detract from Freud’s extraordinary scientific achievement. Only, extraordinary scientific achievements have a way, these days, of being used for the destruction of human beings. (I mean their bodies, or their souls, or their intelligence). So hold on to your brains.

The painting of the enclosed Xmas card has given me great trouble. The thick book is my collected works.2

Smythies sends his best wishes.

Lots of good luck! May we see each other again!

Affectionately Ludwig


1   I had begun to read Freud and had told Wittgenstein in a letter that I was greatly impressed by him.

2   Wittgenstein always bought extremely florid Xmas and Easter cards: they had to be ‘soupy’. The card he enclosed with this letter included a ‘painting’ of a thick book.