The idea of the twentieth century

“Philosophy of science is philosophy enough”, wrote WVO Quine, one of the twentieth century’s greatest philosophers. The single most important insight of twentieth-century philosophy of science is known as holism. We might reasonably call holism “the” idea of the twentieth century, as it was first discussed explicitly by Pierre Duhem in 1903, and later explored in the minutest detail by Quine, who died on Christmas Day 2000. The term ‘holism’ is used in different ways in different contexts, so let’s be clear at the outset what I mean in the present context.

By holism I mean the idea that hypotheses get tested in groups, rather than individually. Let’s take a quick look at the logic of testing to see how this works in practice. First, a scientist somehow comes up with a hypothesis. In science, hypotheses usually describe things that cannot be seen directly, such as the behaviour of electrons, viruses or force fields, or the evolutionary emergence of lungfish from water when vertebrates began to walk on land, or the invisibly slow movement of drifting continents.

Next, a scientist deduces an observational consequence of such a hypothesis. This is where holism comes in. Hardly any individual claim logically implies any other claim in the manner ‘it’s a rainy Monday’ implies ‘it’s raining’. Very few simple implications of that sort are of any use in science. Instead, a scientific claim or hypothesis works in concert with many other assumptions to imply something that can be observed. For example, the hypothesis that the universe is expanding implies that the light from faraway objects will be red-shifted – but only in conjunction with a wide range of other hypotheses and assumptions about such things as the Doppler Effect, the fact that light does not get “tired” by losing energy over very long periods of time, the fact that elements have distinct emission spectra, and so on.

We can write this logical situation as follows:

If H1 & H2 & H3 & H4 & … then O

Once an observational consequence O of a hypothesis H1 has been deduced or computed, someone looks to see if O can actually be observed as the hypothesis predicted – or rather, as was predicted by the hypothesis in question along with its penumbra of other hypotheses and assumptions.

If O is actually observed as predicted, all is well (for now). But if it isn’t, something has gone wrong. And now we can see why holism is so important. If O is false, then the conjunction of H1 & H2 & H3 & H4 &… must be false as well. But we can’t say which of these individual hypotheses is false. Something has gone wrong, but we can’t reliably narrow things down to locate a single culprit.
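For readers who want the logic spelled out, here is a bare sketch in standard notation, with H1 … Hn standing for whatever hypotheses and auxiliary assumptions happen to be in play:

```latex
% The test situation: the hypotheses jointly imply the observation O.
(H_1 \land H_2 \land \cdots \land H_n) \rightarrow O

% Modus tollens: a failed prediction refutes only the conjunction.
\neg O \;\therefore\; \neg (H_1 \land H_2 \land \cdots \land H_n)

% By De Morgan's law, that is merely a disjunction of possible failures.
\neg H_1 \lor \neg H_2 \lor \cdots \lor \neg H_n
```

At least one conjunct is false, but the logic alone does not tell us which one – that is holism in a nutshell.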

So Popper’s famous idea that a single unfavourable observation “falsifies” a hypothesis is mistaken. Things are much less clear-cut than that.

The empirical evidence for a hypothesis consists of the observations made when the hypothesis passes tests. And there are other forms of evidence than purely “empirical” evidence (a simple hypothesis is better than a complicated one, and so on). Holism does not change any of that. But because each hypothesis only passes tests in concert with many other hypotheses and assumptions, the passing of any test counts as evidence for all of them together. Observations do not imply the hypothesis currently under scrutiny, nor narrow the possibilities down to it – at best they can be considered to “corroborate” it rather than confirm it, to be “consistent” with it rather than imply it.

With holism comes pragmatism. A hypothesis is worth believing if it works well in practice, embedded as it always is in a larger theory or still larger “paradigm” (i.e. an even broader range of assumptions and ways of doing things). This sort of pragmatism is reminiscent of Burke’s political conservatism, which rejects abstract first principles and instead judges any political system by how well it actually works in practice, given the circumstances and traditions that are an integral part of it.

With holism also must come the rejection of foundationalism. Foundationalism is the epistemological theory that supposes some of our beliefs have a privileged status (such as being “self-evident”) and that these beliefs work as a basis for the rest of our beliefs. Typically, these privileged beliefs are thought to be about conscious experiences, the sort of things we “cannot be wrong about” such as “I’m having an experience of blue in my visual field”.

In the context of scientific evidence, it used to be believed (by Francis Bacon, and somewhat embarrassingly by as great a scientist as Newton) that observations implied scientific claims, in other words that they worked like “units of evidence” or “data” supporting theory. But natural philosophers such as Galileo and Robert Boyle realised that hypothesis (i.e. guessing) and testing (i.e. observational checks on the consequences) were essential. Even so, despite that important correction, before holism there was still the temptation to think that individual observations supported individual hypotheses in a weaker than strictly logical way, so the image of science “resting on a basis of data” lived on. All that is over with holism.

As an account of empirical knowledge in general, foundationalism is mistaken. Yet it is incredibly influential. People who do not have a training in philosophy (and alas some who do) widely assume that scientific hypotheses “rest” on a “foundation” of “data”, in much the same way as they suppose, equally wrongly, that empirical knowledge “rests” on a “foundation” of “experience”. Experience and observation are still vital, of course, but they don’t work as a foundation.

Using these ideas, in an upcoming post I will explain why, if we are prepared to bend over backwards far enough, we can literally believe anything we like. We manage this by embracing ideology, or rather by allowing ideology to embrace us. I shall also point the way out of its deathly grip.

Letter on Freud from Wittgenstein to Norman Malcolm

Trinity College

Cambridge

6.12.45.

Dear Norman,

Thanks for your letter & thanks for sending me van Houten’s cocoa. I’m looking forward to drinking it.—I, too, was greatly impressed when I first read Freud.1 He’s extraordinary.—Of course he is full of fishy thinking & his charm & the charm of the subject is so great that you may easily be fooled.

He always stresses what great forces in the mind, what strong prejudices work against the idea of psycho-analysis. But he never says what an enormous charm that idea has for people, just as it has for Freud himself. There may be strong prejudices against uncovering something nasty, but sometimes it is infinitely more attractive than it is repulsive. Unless you think very clearly psycho-analysis is a dangerous & a foul practice, & it’s done no end of harm &, comparatively, very little good. (If you think I’m an old spinster— think again!)—All this, of course, doesn’t detract from Freud’s extraordinary scientific achievement. Only, extraordinary scientific achievements have a way, these days, of being used for the destruction of human beings. (I mean their bodies, or their souls, or their intelligence). So hold on to your brains.

The painting of the enclosed Xmas card has given me great trouble. The thick book is my collected works.2

Smythies sends his best wishes.

Lots of good luck! May we see each other again!

Affectionately Ludwig


1   I had begun to read Freud and had told Wittgenstein in a letter that I was greatly impressed by him.

2   Wittgenstein always bought extremely florid Xmas and Easter cards: they had to be ‘soupy’. The card he enclosed with this letter included a ‘painting’ of a thick book.

Original sin and colonialism

I annoy friends and foes alike by telling them that they believe in “original sin”. Most are baffled by what seems like an obscure reference to theology. What relevance could this have to a discussion between wholly secular people?

We can mean either of two things by the doctrine of “original sin”. The first is the idea that people are born bad, or at least not entirely good, so that their adult badness doesn’t need to be explained in terms of the corrupting influence of society. This view would be opposed to that of Rousseau, who thought Man was essentially a “noble savage” whose baser urges had to be acquired through conditioning or learning. (I don’t think he addressed the question of how society ever got to be so corrupt in the first place, if all of the individuals in it were born free of taint.) Ideas of this sort are quite common, from vague thoughts that “children are innocent” to equally vague thoughts that “technology is evil”.

I accept the idea that people are born with bad as well as good motives. If our genes are “selfish” then this selfishness is bound to emerge at the level of the organism, although it emerges in the form of altruism just as easily and as often. Weaknesses and failings can be inherited as much as strengths and talents. So I accept this first rather innocuous idea of “original sin”.

But I’m more interested in a second way of conceiving “original sin”, one that comes closer to its original biblical meaning. The second idea is that blame is inherited.

For example, suppose a colonial power seizes the territory of another people. They colonize it. Decades pass. Eventually, most of the people born in this territory regard themselves as having the identity and national allegiance of the colonizing power. That is an accident of birth no different from the accident of birth that led earlier natives to regard themselves as having the identity and national allegiance of the colonized territory.

So it never fails to surprise and disappoint me how many are inclined to say: “those colonialists should not be there – that territory belongs to the people whose lands were seized!” That presupposes that today’s natives – the people born in the territory – have inherited the blame for the wrongdoing of their ancestors. In other words, it assumes the doctrine of original sin.

The assumption that blame can be inherited promotes – and is promoted by – the idea that who you are is a matter of which group you belong to. If the As had their territory seized by the Bs generations ago, present-day As are prone to talk about present-day Bs in terms of what “they” did to “us”. “They” are the perpetrators and “we” are the victims.

And it works both ways. The Bs in their turn can issue a public “apology” for what “we” did, even though we literally didn’t do it. That way, as well as supposedly absolving ourselves from our supposed guilt, we no longer have to reflect on the weaknesses and failings that really are inherited – weaknesses and failings that led our ancestors to do bad things, and which may well lead us to do similar things. In this way, we lower our guard against our own liability to make mistakes.

This second idea of original sin is racist. It is the core idea of fascism. It is intellectually and morally backward. It is illiberal, in being directly opposed to the freedom of – and respect for – the individual. It is the traditional basis of anti-Semitism, from the old-fashioned blaming of Jews “for killing Christ” to the new-fashioned pretence that “some of my best friends are Jews, it’s Zionism that I hate.”

Practically every territory on Earth was seized at one time or another from others who lived there already, so we are all the descendants of colonialists, and none of us is in a position to throw stones at other descendants of colonialists. And furthermore, colonialism is bad. By equating guilty parties who seized territory with innocent parties who did nothing of the sort, we downplay the sins of the former, and diminish the evil of colonialism.

With its ghastly pretence that people can be culpable for what they did not themselves do, yet remain innocent of genuinely evil acts that they themselves perpetrate, the doctrine of original sin is a convenient justification for state and terrorist murder, and of course for more colonialism as a supposed “reparation” for the damage of earlier colonialism.

I hope it is obvious that there are many troubled places around the world where “original sin” is unknowingly invoked as a justification for violent, fascistic acts. I cannot hope to influence any of that. But I can hope in some small way to influence the minds of my readers (if there are any). If you, dear reader, find yourself thinking along the lines sketched above, perhaps blaming an entire race for what no one alive has actually done, please think again. Recognise the evil of the doctrine of original sin, and expunge it from your mind!

A quick argument against sex quotas

If elected representatives promote the interests of constituents of their own sex more than those of the opposite sex, then voters have a reason to vote for representatives of their own sex – and men have a reason to oppose sex quotas. If elected representatives promote the interests of constituents regardless of sex, then voters have no reason to prefer either sex, and no one has a reason to support sex quotas. Either way, it is reasonable for men to oppose sex quotas.

Personally, I think men’s and women’s lives are so intertwined, and their interests so overlapping, that there isn’t a bias worth talking about. Where the interests of men and women are opposed – as in competition for scarce resources such as social welfare or health care – men and women are equally friendly or unfriendly to either sex. Male and female politicians treat single mothers or single fathers with roughly the same reverence or disdain; male and female politicians promote screening for breast or prostate cancer with equal concern or lack thereof.

Sex quotas are probably counterproductive. I for one will be voting for representatives who oppose sex quotas, because I regard the idea as stupid and unjust – it strikes me as a superficial, hypocritical gesture by people who want to look “woman friendly”. (Which both men and women are wont to do, for slightly different reasons.)

If all available candidates support sex quotas, I shall choose to vote for a man, however free of talent he may be, with malice aforethought, in an effort to counterbalance the unjust bias of sex quotas.

Scientism

A classic “draw” used by con artists is to play on victims’ fears that they might look stupid. This works well in academic life too. All manner of second-rate intellectual flummery is allowed to pass without question because it is larded with intimidating, technical-looking mathematical formulae. No one wants to look educationally subnormal by admitting they don’t follow all that fabulous “science”, so everyone tends to keep their heads down. And the second-rate flummery lives on, to dazzle further weedy minds another day.

There’s a word for tarting something unscientific up to make it look like science: scientism. Scientism was Wittgenstein’s bête noire. With a background in engineering and mathematics, he was well able to see through the con artistry of most technical philosophy. His own collected Remarks on the Foundations of Mathematics is widely despised – by fools – because of its steadfast refusal to leave his favourite genre of aphorisms and doodles. Fools expect impressive-looking jumbles of arcane symbols.

Philosophy of science is especially vulnerable to scientism. Philosophers suffer from a sense of inferiority because our own discipline is essentially parasitic – it only lives by hitching a ride on the more vulnerable parts of other disciplines. And because our hosts are scientists, we parasites get our noses rubbed in our own non-scientific doings uncomfortably often. This sense of inferiority often emerges in the form of (what Dawkins calls) “physics envy” or (what Quine called) “mathematosis” – the pathological yearning to treat everything, however banal or subjective, as something so deep that it requires a “mathematical” treatment. Why can’t I be a scientist too? “Man M crosses road S to enter pub P”.

It isn’t simply that this sort of pretentiousness discredits those who practice it. It spreads and corrupts like an infectious disease.

For example, consider probability. For centuries, something was said to be “probable” if it was thought to be something that ought to be believed, that is, if there seemed to be good reasons for believing it. It was all about belief, reasons, and “ought”. It was messy, subjective, literally a matter of judgement. Each of us has different beliefs, and hence different reasons for believing something new. The probability of a belief’s being true differs from one agent to the next, and even from one moment to the next.

But then, in an important historical development, some brilliant thinkers developed some mathematical formalisms for dealing with statistical claims such as “one sixth of rolls of a pair of dice result in doubles” or “one tenth of balls drawn randomly from an urn are white”.
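Such claims concern relative frequencies and nothing more. A few lines of simulation – my own illustrative sketch, not part of any historical formalism – show the frequency of doubles settling towards one sixth:

```python
import random

def doubles_frequency(n_rolls: int) -> float:
    """Relative frequency of doubles when rolling a fair pair of dice."""
    doubles = sum(random.randint(1, 6) == random.randint(1, 6)
                  for _ in range(n_rolls))
    return doubles / n_rolls

for n in (100, 10_000, 1_000_000):
    print(n, round(doubles_frequency(n), 4))  # tends towards 1/6 ≈ 0.1667
```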

Once mathematical formalisms were available, philosophers who yearned to treat belief as something “objective” began to confound the entirely distinct areas of epistemology and statistics. The concept of “probability” – the degree to which something ought to be believed – began to be thought of as a numerically measurable quantity, the sort of thing that can be given a scientific or mathematical treatment. Philosophical discourse underwent a change, so that we acquired the habit of talking about ideal situations in which we are all in an equal state of perfect ignorance. For example, if ten per cent of Irish people have red hair, we might say that the “probability” of any given Irish person having red hair is one tenth. If it makes any sense at all to talk about “degrees of entitlement to believe” something, I am entitled to believe that a randomly-chosen Irish person has red hair to a degree of ten per cent – but only as long as I am perfectly ignorant of all other factors that may be relevant to my forming such a belief.

If I were not in such a state of perfect ignorance, I would have other reasons for belief – reasons that would completely change that “ten per cent” figure. For example, my wife is Irish, and as I see her several times each day, I am very confident that she has red hair. I am entitled to believe that she has red hair to a far higher degree than ten per cent.
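For anyone who does indulge the numerical idiom for a moment, the way further evidence swamps the base rate can be sketched with Bayes’ theorem. Every figure below is invented purely for illustration – the “99 per cent reliable glance” is an assumption, not a datum:

```python
def bayes_update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Posterior probability after one observation, by Bayes' theorem."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

p = 0.10  # base rate: one tenth of Irish people have red hair
for look in range(1, 4):
    # assume each glance correctly reports "red" 99% of the time
    p = bayes_update(p, p_obs_if_true=0.99, p_obs_if_false=0.01)
    print(f"after look {look}: p = {p:.4f}")
# after look 1: p = 0.9167; after look 3: p > 0.999 -- the base rate is swamped
```

The moral is the one already drawn: the “ten per cent” figure only ever applied under perfect ignorance, and it evaporates the moment other reasons for belief arrive.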

We are very rarely in the ideal state of ignorance that our habitual philosophical discourse seems to suppose, in which numbers can be applied to repeated similar events. At best we can apply such numbers every now and again, when we visit a casino, or play repetitive games of rolling dice, opening doors, or tossing coins. And in those unusual situations, those numbers are best understood as statistical measures of relative frequency rather than as measures of “how much we are entitled to believe” something.

The urge to treat our beliefs “scientifically”, as if they were “objective” like relative frequencies, hasn’t just damaged epistemology. It has done untold damage to science and to society at large as well. Perhaps the majority of the world’s population nowadays think that scientists have magical powers of telling how much we ought to believe things. We have turned them into high priests.

Moral busybodies

Trivially, whatever you believe, you believe it is true. This is “trivial” in the sense that it follows from the concept of belief – to believe something is to be committed to its truth. But the consequences of this obvious fact are not at all trivial, and are often overlooked.

Like everyone else, you think that every single one of your beliefs is true, at least when considered individually. No one else’s opinions could conceivably get such a high “approval rating” – according to your own standards of approval – as your own opinions. So you think your own opinions are better than anyone else’s opinions. Similarly, you think your own judgement is better than anyone else’s judgement (and everything I say below applies as much to judgement, etc., as to opinions).

Intelligent, reflective people usually pause here and take stock. “I think my opinions are better than anyone else’s in the whole wide world,” they think, “but since everyone else is in the very same position as myself in this regard, they must think the very same about their own opinions.” We cannot all have the best opinions – indeed, as many of us must rank below the median opinion-former as above. So merely having a high opinion of one’s own opinions cannot be a reliable indicator of actually having good opinions.

Unintelligent, unreflective people usually don’t get to this stage, where we say “uh-oh – wait – everyone thinks like that, don’t they?” Buoyed by the initial sense that they have better opinions than others, they proceed to form opinions, to judge, and to take action on behalf of others whom they assume must have less good opinions than themselves. They silence views they don’t agree with – and call it “depriving fascists of a platform”. They pass laws to prevent people making their own mistakes – and call it “tackling a serious health issue”. They travel to faraway places to weigh in on one side or another of conflicts that are not theirs, or might even go to such extremes as setting off bombs in public places – and call it “striking a blow for justice”.

It’s about time these arrogant people were no longer celebrated for their moral worthiness. Such worthiness really amounts to nothing more than being a moral busybody. Instead, we should draw attention to their epistemological backwardness. They have failed to grasp that an “internal”, subjective check on one’s own opinions is no indicator of these opinions’ “external”, objective reliability. These people have failed to see the symmetry between individuals that puts us all “in the same boat” as far as our opinions are concerned. Everyone has a high opinion of their own opinions. So you can’t rely on your own high opinion of your own opinions.

The Economist fails to grasp scientific method

A leader article in The Economist of December 17, 2011 (entitled “Higgs Ahoy!”, p.20 of the print edition) sums up the commonest and most serious misunderstanding of scientific method: “it is possible to write down equations which describe what is seen, and extrapolate from them to what is unseen”.

No, it is not possible to do that. One can only extrapolate from individual cases of what is seen to more general claims about what is seen. And this is comparatively rare in science.

What really happens is that scientists make “bold conjectures” – i.e. guesses or hypotheses – about what cannot be seen. Then they work out the observational consequences of such hypotheses – in other words, they calculate what the hypotheses imply about what can be seen. Then they test to see whether the implied consequences are actually observed, as predicted.

If a hypothesis passes such a test, it is considered corroborated – in other words, we have a better reason to think it is true because it has made it over a “hurdle”. If it fails such a test, we have less of a reason to think it is true, because it has failed to make it over the hurdle. Any of a number of things might have gone wrong, but in many cases the hypothesis itself is considered falsified and rejected as a bad guess.

The teaching of science is inadequate, and the teaching of philosophy of science is even more inadequate. The logic of the empirical test of a hypothesis is largely unknown, and widely ignored by all but the best scientists (such as Richard Feynman, who often stressed its importance). But scientists who are unaware of it or ignore it are condemned to do bad science as a result.

A devastating question

The simplest moral position is called the theory of “divine commands”, and it says that whatever God commands (or wants) is morally right. So murder is morally wrong because God commands us not to do it (or doesn’t want us to do it), but loving thy neighbours or enemies is morally right because God commands us (or wants us) to love them.

The theory of divine commands is widely thought to have been blown out of the water over two thousand years ago by an argument in Plato’s dialogue Euthyphro. Some philosophers say it is not really an “argument” so much as a devastating question.

In that famous dialogue, Socrates asks whether an act is “pious” because it is “loved by the gods”, or loved by the gods because it is pious. In modern terms, the question would go like this: Is it morally right because it is commanded by God, or vice versa? This creates a dilemma, the generally-agreed conclusion of which is that the theory of divine commands is mistaken, and that we must have reasons for making moral judgements that are independent of God’s commands or wishes.

I accept this conclusion. I agree with the mainstream view that the theory of divine commands has been decisively destroyed, and that the question Socrates asked was indeed devastating. But a very similar question can be asked in many other areas of human life, wherever we appeal to authority – which nowadays seems to be practically everywhere.

Socrates’ question has been widely discussed by philosophers over the ages. Some philosophers focus on the way God’s commands might be completely arbitrary. I want to focus on the way anyone who wants to follow God’s commands has to have some way of telling which commands are genuine. He has to have some way of judging who is a genuine God, or who speaks for a genuine God rather than a false idol. If he consults an oracle, he has to have some way of telling which oracles are trustworthy. There is simply no escape from the fact that at some stage he has to judge for himself, and come to a decision independently of what any authority tells him. No matter how much we may try to defer judgement and defer to others, there comes a point where we can defer to no authority at all: ultimately, we are on our own.

The respect for authority is directly opposed to the practice of philosophy, and inimical to the guiding spirit of philosophy, which is essentially to think for oneself. And yet this anti-philosophical spirit seems to have an immortal life, in the form of respect for expertise in general and an unquestioning trust in “science”. I call it an “unquestioning” trust when no question is asked about what to count as science (and what to dismiss as pseudo-science) beyond simply taking the word of some people regarded as experts. That is the very same as consulting an oracle on God’s commands. Tragically, that sort of trust is often exhibited by people who call themselves “sceptics”, who have really just substituted one oracle for another.

Suppose we accept for the sake of the argument that the practice of science is especially good, for some reason such as its unusual trustworthiness. (Let’s overlook the fact that science is essentially speculative and therefore uncertain.) Then let us ask our new version of Socrates’ devastating question: Is a practice trustworthy because it is practised by scientists, or is it practised by scientists because it is trustworthy?

If a practice is trustworthy because it is practised by scientists, then the more deferential among us might defer to scientists because they are authorities who can judge on our behalf what is trustworthy. The trouble is, because scientists enjoy such deference, everyone wants to call himself a “scientist”, and many of those who do are nothing of the sort. So we need some means of distinguishing genuine scientists from pseudo-scientists. History contains many examples of the latter – from alchemists and astrologers, to phrenologists, psychologists and sociologists. (You may have a slightly different list here, but that’s irrelevant.) As nowadays more than ever science is considered the highest and most trustworthy form of human thought, nowadays more than ever there are those who claim the honorific title of ‘scientist’, and those who claim it spuriously.

To defer to scientists, one needs to be able to judge who is a genuine scientist and who isn’t. To do that, one needs to be able to judge what science is, in order to check whether someone who claims to be doing science really is doing it. But in that case, one needs to have a means of judging what science is that is independent of what scientists practice.

Just as before, when trying to follow God’s commands, you cannot escape the fact that at some stage you have to judge for yourself.

The alternative approach is to say that the sort of thing scientists do is trustworthy not because scientists do it, but because it is trustworthy no matter who is doing it. It is trustworthy because of its methodology rather than the “authority” of its practitioners. Scientists recognize that this methodology is trustworthy and so employ it themselves.

In which case, fine. Once again, there has to be a means of judging what science is that is independent of who practices it. Whichever answer we choose, the conclusion thrust upon us is that we need to judge for ourselves.

I often wonder why nowadays even more than before – when much of humanity consisted of uneducated peasants in thrall to the clergy – respect for authority seems to guide every aspect of our existence. Perhaps it is because people are “off their guard” a bit, thinking they have successfully escaped the thrall of the clergy. But that is like sheep congratulating themselves on their individuality, when all that has happened is that they have formed a slightly smaller herd of sheep that has broken away from the main herd of history. It is still a herd of sheep.